Compare commits


123 Commits

Author SHA1 Message Date
Ayaz Salikhov
c35649eb6e chore: Commits for 2.7.0-rc1 (#2846) 2025-12-10 16:26:07 +00:00
Ayaz Salikhov
f2f5a6ab19 chore: Switch to xrpl/3.0.0 (#2843) 2025-12-10 16:06:21 +00:00
Ayaz Salikhov
1469d4b198 chore: Add systemd file to the debian package (#2844) 2025-12-10 16:02:43 +00:00
yinyiqian1
06ea05891d feat: Add DynamicMPT in account_mptoken_issuances (#2820)
Support DynamicMPT for the account_mptoken_issuances handler.

Related commit:
eed757e0c4

The original spec for `DynamicMPT` can be found here:
https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0094-dynamic-MPT

---------

Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
2025-12-10 11:36:24 +00:00
Ayaz Salikhov
c7c270cc03 style: Use shfmt for shell scripts (#2841) 2025-12-09 18:51:56 +00:00
Alex Kremer
c1f2f5b100 chore: Less delay in ETL taskman (#2802) 2025-12-09 12:25:00 +00:00
github-actions[bot]
bea0b51c8b style: clang-tidy auto fixes (#2840) 2025-12-09 10:36:53 +00:00
Alex Kremer
69b8e5bd06 feat: Add observable value util (#2831)
This implements a simple observable value. It can be used for a more
reactive approach, and will be used in ETL state and across the codebase
over time.
2025-12-08 16:44:43 +00:00
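The commit above describes an "observable value" utility. A minimal sketch of the idea follows — the names and API here are illustrative assumptions, not Clio's actual implementation: a thread-safe value holder that notifies subscribers only when the value actually changes.

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <utility>
#include <vector>

// Illustrative sketch of an observable value (hypothetical API, not Clio's).
// Subscribers are notified on every change; note that in this sketch
// callbacks run while the lock is held, so they must not call back in.
template <typename T>
class ObservableValue {
public:
    using Callback = std::function<void(T const&)>;

    explicit ObservableValue(T initial) : value_(std::move(initial)) {}

    void subscribe(Callback cb) {
        std::lock_guard lock(mutex_);
        callbacks_.push_back(std::move(cb));
    }

    void set(T newValue) {
        std::lock_guard lock(mutex_);
        if (newValue == value_)
            return;  // notify only on an actual change
        value_ = std::move(newValue);
        for (auto const& cb : callbacks_)
            cb(value_);
    }

    T get() const {
        std::lock_guard lock(mutex_);
        return value_;
    }

private:
    mutable std::mutex mutex_;
    T value_;
    std::vector<Callback> callbacks_;
};
```

A subscriber registered once then sees each distinct value exactly once, which is what makes the pattern useful for reacting to ETL state transitions.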
dependabot[bot]
33dc4ad95a ci: [DEPENDABOT] Bump ytanikin/pr-conventional-commits from 1.4.2 to 1.5.1 (#2835)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 10:52:52 +00:00
dependabot[bot]
13cbb405c7 ci: [DEPENDABOT] Bump peter-evans/create-pull-request from 7.0.9 to 7.0.11 (#2836)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 10:52:02 +00:00
dependabot[bot]
8a37a2e083 ci: [DEPENDABOT] Bump actions/checkout from 6.0.0 to 6.0.1 (#2837)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 10:51:32 +00:00
github-actions[bot]
f8b6c98219 style: Update pre-commit hooks (#2825)
Co-authored-by: mathbunnyru <12270691+mathbunnyru@users.noreply.github.com>
2025-12-07 18:42:50 +00:00
dependabot[bot]
92883bf012 ci: [DEPENDABOT] Bump docker/metadata-action from 5.9.0 to 5.10.0 in /.github/actions/build-docker-image (#2826)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-07 18:42:34 +00:00
Alex Kremer
88881e95dd chore: TSAN fix async-signal-unsafe (#2824)
Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
2025-12-02 17:36:36 +00:00
Alex Kremer
94e70e4026 chore: Add mathbunnyru to maintainers (#2823) 2025-11-27 19:24:05 +00:00
github-actions[bot]
b534570cdd style: clang-tidy auto fixes (#2822)
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-11-27 09:46:53 +00:00
Ayaz Salikhov
56fbfc63c2 chore: Update lockfile (#2818) 2025-11-26 17:06:38 +00:00
Ayaz Salikhov
80978657c0 ci: Update images to use latest Ninja (#2817) 2025-11-26 16:15:18 +00:00
Ayaz Salikhov
067449c3f8 chore: Install latest Ninja in images (#2813) 2025-11-26 13:54:19 +00:00
Ayaz Salikhov
946976546a chore: Use boost::asio::ssl::stream instead of boost::beast::ssl_stream (#2814) 2025-11-26 12:27:48 +00:00
github-actions[bot]
73e90b0a3f style: clang-tidy auto fixes (#2816)
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-11-26 09:44:53 +00:00
Ayaz Salikhov
7681c58a3a style: Add black pre-commit hook (#2811) 2025-11-25 17:13:29 +00:00
Alex Kremer
391e7b07ab chore: WebServerAdminTestsSuit TSAN issues (#2809) 2025-11-25 12:17:24 +00:00
Alex Kremer
4eadaa85fa chore: Repeat-based tests TSAN fixes (#2810) 2025-11-25 12:15:43 +00:00
Ayaz Salikhov
1b1a46c429 feat: Handle prometheus requests in WorkQueue (#2790) 2025-11-24 16:17:45 +00:00
Ayaz Salikhov
89707d9668 ci: Run clang-tidy 3 times to make sure we don't have to fix again (#2803) 2025-11-24 12:19:34 +00:00
Ayaz Salikhov
ae260d1229 chore: Update spdlog and fmt libraries (#2804) 2025-11-24 11:27:29 +00:00
dependabot[bot]
058c05cfb6 ci: [DEPENDABOT] Bump actions/checkout from 5.0.0 to 6.0.0 (#2806)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 09:59:28 +00:00
dependabot[bot]
b2a7d185cb ci: [DEPENDABOT] Bump peter-evans/create-pull-request from 7.0.8 to 7.0.9 (#2805)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 09:58:51 +00:00
github-actions[bot]
9ea61ba6b9 style: clang-tidy auto fixes (#2801)
Co-authored-by: mathbunnyru <12270691+mathbunnyru@users.noreply.github.com>
2025-11-21 11:27:40 +00:00
github-actions[bot]
19157dec74 style: clang-tidy auto fixes (#2799)
Co-authored-by: mathbunnyru <12270691+mathbunnyru@users.noreply.github.com>
2025-11-21 11:02:07 +00:00
github-actions[bot]
42a6f516dc style: clang-tidy auto fixes (#2797) 2025-11-21 10:17:56 +00:00
emrearıyürek
2cd8226a11 refactor: Make getLedgerIndex return std::expected instead of throwing (#2788)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
Co-authored-by: Sergey Kuznetsov <kuzzz99@gmail.com>
2025-11-20 17:46:15 +00:00
Sergey Kuznetsov
4da4b49eda chore: Commits for 2.7.0-b2 (#2795) 2025-11-20 14:53:12 +00:00
Sergey Kuznetsov
e3170203de fix: Print cache saving error (#2794) 2025-11-20 14:48:42 +00:00
Ayaz Salikhov
8b280e7742 ci: Always upload cache on develop (#2793) 2025-11-19 20:42:18 +00:00
Ayaz Salikhov
7ed30bc40d ci: Don't download ccache on develop branch (#2792) 2025-11-19 19:32:12 +00:00
Sergey Kuznetsov
ac608004bc docs: Fix graceful_period description (#2791) 2025-11-19 19:17:44 +00:00
Alex Kremer
6ab92ca0a6 chore: Enable TSAN in CI (#2785) 2025-11-19 18:06:33 +00:00
Alex Kremer
77387d8f9f chore: Add defines for asan/tsan to conan profile (#2784)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-11-19 17:28:10 +00:00
Sergey Kuznetsov
b62cfe949f feat: Graceful shutdown with old web server (#2786)
- Stop accepting connections during graceful shutdown in the old web server
- Stop all the services before Clio exits
- Move cache saving into stop callback
2025-11-19 15:40:33 +00:00
Ayaz Salikhov
56f074e6ee ci: Use env vars instead of input in cache-key (#2789) 2025-11-17 17:54:37 +00:00
Ayaz Salikhov
f0becbbec3 chore: Update nudb recipe to remove linker warnings (#2787) 2025-11-17 15:26:03 +00:00
emrearıyürek
2075171ca5 fix: Match ledger_entry error codes with rippled (#2549)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
Co-authored-by: Alex Kremer <akremer@ripple.com>
Co-authored-by: Peter Chen <34582813+PeterChen13579@users.noreply.github.com>
2025-11-17 15:14:31 +00:00
Ayaz Salikhov
3a4249dcc3 ci: Improve cache implementation (#2780) 2025-11-14 14:14:52 +00:00
Ayaz Salikhov
8742dcab3d ci: Use env vars instead of input (#2781) 2025-11-14 11:48:14 +00:00
github-actions[bot]
1ef7ec3464 style: clang-tidy auto fixes (#2783) 2025-11-14 10:52:44 +00:00
Ayaz Salikhov
20e7e275cf style: Fix hadolint issues (#2777) 2025-11-13 18:28:06 +00:00
Alex Kremer
addb17ae7d chore: Remove redundant silencing of ASAN errors in CI (#2779) 2025-11-13 18:07:23 +00:00
Sergey Kuznetsov
346c9f9bdf feat: Read and write LedgerCache to file (#2761)
Fixes #2413.
2025-11-13 17:01:40 +00:00
Alex Kremer
c6308ce036 chore: Use ucontext with ASAN (#2774) 2025-11-13 16:10:10 +00:00
Ayaz Salikhov
d023ed2be2 chore: Start using xrpl/3.0.0-rc1 (#2776) 2025-11-13 13:34:51 +00:00
Alex Kremer
6236941140 ci: Force ucontext in ASAN builds (#2775) 2025-11-13 13:10:15 +00:00
Ayaz Salikhov
59b7b249ff ci: Specify bash as default shell in workflows (#2772) 2025-11-12 13:34:53 +00:00
Alex Kremer
893daab8f8 chore: Change default max_queue_size to 1000 (#2771) 2025-11-11 16:37:00 +00:00
github-actions[bot]
be9f0615fa style: clang-tidy auto fixes (#2770) 2025-11-11 09:34:02 +00:00
emrearıyürek
093606106c refactor: Duplicate ledger_index pattern for RPC handlers (#2755)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-11-10 17:11:12 +00:00
dependabot[bot]
224e835e7c ci: [DEPENDABOT] bump docker/metadata-action from 5.8.0 to 5.9.0 in /.github/actions/build-docker-image (#2762)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-10 14:42:05 +00:00
dependabot[bot]
138a2d3440 ci: [DEPENDABOT] bump docker/setup-qemu-action from 3.6.0 to 3.7.0 in /.github/actions/build-docker-image (#2763)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-10 14:41:42 +00:00
Sergey Kuznetsov
c0eedd273d chore: Fix pre commit hook failing on empty file (#2766) 2025-11-10 14:35:19 +00:00
github-actions[bot]
a5b1dcfe55 style: clang-tidy auto fixes (#2765)
Fixes #2764.
2025-11-10 11:49:11 +00:00
Alex Kremer
c973e99f4b feat: WorkQueue priorities (#2721)
Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-11-07 17:42:55 +00:00
Peter Chen
51dbd09ef6 fix: Empty signer list (#2746)
fixes #2730
2025-11-07 07:41:02 -08:00
Ayaz Salikhov
1ecc6a6040 chore: Specify apple-clang 17.0 in conan profile (#2757) 2025-11-06 11:51:47 +00:00
github-actions[bot]
1d3e34b392 style: clang-tidy auto fixes (#2759)
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-11-06 09:35:52 +00:00
Alex Kremer
2f8a704071 feat: Ledger publisher use async framework (#2756) 2025-11-05 15:26:03 +00:00
Alex Kremer
fcc5a5425e feat: New ETL by default (#2752) 2025-11-05 13:29:36 +00:00
Ayaz Salikhov
316126746b ci: Remove backticks from release date (#2754) 2025-11-05 11:48:12 +00:00
Alex Kremer
6d79dd6b2b feat: Async framework submit on strand/ctx (#2751) 2025-11-04 19:14:31 +00:00
Ayaz Salikhov
d6ab2cc1e4 style: Fix comment in pre-commit-autoupdate.yml (#2750) 2025-11-03 18:30:52 +00:00
Ayaz Salikhov
13baa42993 chore: Update prepare-runner to fix ccache on macOS (#2749) 2025-11-03 17:55:44 +00:00
Ayaz Salikhov
b485fdc18d ci: Add date to nightly release title (#2748) 2025-11-03 17:16:29 +00:00
Ayaz Salikhov
7e4e12385f ci: Update docker images (#2745) 2025-10-30 17:08:11 +00:00
Ayaz Salikhov
c117f470f2 ci: Install pre-commit in the main CI image as well (#2744) 2025-10-30 14:32:23 +00:00
Ayaz Salikhov
30e88fe72c style: Fix pre-commit style issues (#2743) 2025-10-30 14:04:15 +00:00
Ayaz Salikhov
cecf082952 chore: Update tooling in Docker images (#2737) 2025-10-30 14:04:05 +00:00
Ayaz Salikhov
d5b95c2e61 chore: Use new prepare-runner (#2742) 2025-10-30 14:03:50 +00:00
github-actions[bot]
8375eb1766 style: clang-tidy auto fixes (#2741)
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-10-30 11:20:32 +00:00
Ayaz Salikhov
be6aaffa7a ci: Fix nightly commits link (#2738) 2025-10-30 11:19:22 +00:00
Ayaz Salikhov
104ef6a9dc ci: Release nightly with date (#2731) 2025-10-29 15:33:13 +00:00
yinyiqian1
eed757e0c4 feat: Support account_mptoken_issuances and account_mptokens (#2680)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-29 14:17:43 +00:00
github-actions[bot]
3b61a85ba0 style: clang-tidy auto fixes (#2736) 2025-10-29 09:31:21 +00:00
Sergey Kuznetsov
7c8152d76f test: Fix flaky test (#2729) 2025-10-28 17:36:52 +00:00
dependabot[bot]
0425d34b55 ci: [DEPENDABOT] bump actions/checkout from 4.3.0 to 5.0.0 (#2724)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.3.0
to 5.0.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/releases">actions/checkout's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
<li>Prepare v5.0.0 release by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2238">actions/checkout#2238</a></li>
</ul>
<h2>⚠️ Minimum Compatible Runner Version</h2>
<p><strong>v2.327.1</strong><br />
<a
href="https://github.com/actions/runner/releases/tag/v2.327.1">Release
Notes</a></p>
<p>Make sure your runner is updated to this version or newer to use this
release.</p>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4...v5.0.0">https://github.com/actions/checkout/compare/v4...v5.0.0</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<h2>V5.0.0</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
</ul>
<h2>V4.3.0</h2>
<ul>
<li>docs: update README.md by <a
href="https://github.com/motss"><code>@​motss</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li>Add internal repos for checking out multiple repositories by <a
href="https://github.com/mouismail"><code>@​mouismail</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li>Documentation update - add recommended permissions to Readme by <a
href="https://github.com/benwells"><code>@​benwells</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
<li>Adjust positioning of user email note and permissions heading by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2044">actions/checkout#2044</a></li>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2194">actions/checkout#2194</a></li>
<li>Update CODEOWNERS for actions by <a
href="https://github.com/TingluoHuang"><code>@​TingluoHuang</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2224">actions/checkout#2224</a></li>
<li>Update package dependencies by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2236">actions/checkout#2236</a></li>
</ul>
<h2>v4.2.2</h2>
<ul>
<li><code>url-helper.ts</code> now leverages well-known environment
variables by <a href="https://github.com/jww3"><code>@​jww3</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/1941">actions/checkout#1941</a></li>
<li>Expand unit test coverage for <code>isGhes</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1946">actions/checkout#1946</a></li>
</ul>
<h2>v4.2.1</h2>
<ul>
<li>Check out other refs/* by commit if provided, fall back to ref by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1924">actions/checkout#1924</a></li>
</ul>
<h2>v4.2.0</h2>
<ul>
<li>Add Ref and Commit outputs by <a
href="https://github.com/lucacome"><code>@​lucacome</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1180">actions/checkout#1180</a></li>
<li>Dependency updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>- <a
href="https://redirect.github.com/actions/checkout/pull/1777">actions/checkout#1777</a>,
<a
href="https://redirect.github.com/actions/checkout/pull/1872">actions/checkout#1872</a></li>
</ul>
<h2>v4.1.7</h2>
<ul>
<li>Bump the minor-npm-dependencies group across 1 directory with 4
updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1739">actions/checkout#1739</a></li>
<li>Bump actions/checkout from 3 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1697">actions/checkout#1697</a></li>
<li>Check out other refs/* by commit by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1774">actions/checkout#1774</a></li>
<li>Pin actions/checkout's own workflows to a known, good, stable
version. by <a href="https://github.com/jww3"><code>@​jww3</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1776">actions/checkout#1776</a></li>
</ul>
<h2>v4.1.6</h2>
<ul>
<li>Check platform to set archive extension appropriately by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1732">actions/checkout#1732</a></li>
</ul>
<h2>v4.1.5</h2>
<ul>
<li>Update NPM dependencies by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1703">actions/checkout#1703</a></li>
<li>Bump github/codeql-action from 2 to 3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1694">actions/checkout#1694</a></li>
<li>Bump actions/setup-node from 1 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1696">actions/checkout#1696</a></li>
<li>Bump actions/upload-artifact from 2 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1695">actions/checkout#1695</a></li>
<li>README: Suggest <code>user.email</code> to be
<code>41898282+github-actions[bot]@users.noreply.github.com</code> by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1707">actions/checkout#1707</a></li>
</ul>
<h2>v4.1.4</h2>
<ul>
<li>Disable <code>extensions.worktreeConfig</code> when disabling
<code>sparse-checkout</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1692">actions/checkout#1692</a></li>
<li>Add dependabot config by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1688">actions/checkout#1688</a></li>
<li>Bump the minor-actions-dependencies group with 2 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1693">actions/checkout#1693</a></li>
<li>Bump word-wrap from 1.2.3 to 1.2.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1643">actions/checkout#1643</a></li>
</ul>
<h2>v4.1.3</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="08c6903cd8"><code>08c6903</code></a>
Prepare v5.0.0 release (<a
href="https://redirect.github.com/actions/checkout/issues/2238">#2238</a>)</li>
<li><a
href="9f265659d3"><code>9f26565</code></a>
Update actions checkout to use node 24 (<a
href="https://redirect.github.com/actions/checkout/issues/2226">#2226</a>)</li>
<li>See full diff in <a
href="08eba0b27e...08c6903cd8">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=4.3.0&new-version=5.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-28 16:44:09 +00:00
Ayaz Salikhov
8c8a7ff3b8 ci: Use XRPLF/get-nproc (#2727) 2025-10-27 18:50:40 +00:00
Ayaz Salikhov
16493abd0d ci: Better pre-commit failure message (#2720) 2025-10-27 12:14:56 +00:00
dependabot[bot]
3dd72d94e1 ci: [DEPENDABOT] bump actions/download-artifact from 5.0.0 to 6.0.0 (#2723)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 10:17:41 +00:00
dependabot[bot]
5e914abf29 ci: [DEPENDABOT] bump actions/upload-artifact from 4.6.2 to 5.0.0 in /.github/actions/code-coverage (#2725)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 10:16:14 +00:00
dependabot[bot]
9603968808 ci: [DEPENDABOT] bump actions/upload-artifact from 4.6.2 to 5.0.0 (#2722)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 10:15:39 +00:00
Ayaz Salikhov
0124c06a53 ci: Enable clang asan builds (#2717) 2025-10-24 13:36:14 +01:00
Ayaz Salikhov
1bfdd0dd89 ci: Save full logs for failed sanitized tests (#2715) 2025-10-23 17:06:02 +01:00
Alex Kremer
f41d574204 fix: Flaky DeadlineIsHandledCorrectly (#2716) 2025-10-23 16:33:58 +01:00
Ayaz Salikhov
d0ec60381b ci: Use intermediate environment variables for improved security (#2713) 2025-10-23 11:34:53 +01:00
Ayaz Salikhov
0b19a42a96 ci: Pin all GitHub actions (#2712) 2025-10-22 16:17:15 +01:00
emrearıyürek
030f4f1b22 docs: Remove logging.md from readme (#2710) 2025-10-22 11:36:49 +01:00
github-actions[bot]
2de49b4d33 style: clang-tidy auto fixes (#2706) 2025-10-20 10:41:59 +01:00
Ayaz Salikhov
3de2bf2910 ci: Update pre-commit workflow to latest version (#2702) 2025-10-18 07:55:14 +01:00
Peter Chen
7538efb01e fix: Add mpt_issuance_id to meta of MPTIssuanceCreate (#2701)
fixes #2332
2025-10-17 09:58:38 -04:00
Alex Kremer
685f611434 chore: Disable flaky DisconnectClientOnInactivity (#2699) 2025-10-16 18:55:58 +01:00
Ayaz Salikhov
2528dee6b6 chore: Use pre-commit image with libatomic (#2698) 2025-10-16 13:03:52 +01:00
Ayaz Salikhov
b2be4b51d1 ci: Add libatomic as dependency for pre-commit image (#2697) 2025-10-16 12:02:14 +01:00
Alex Kremer
b4e40558c9 fix: Address AmendmentBlockHandler flakiness in old ETL tests (#2694)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-15 17:15:12 +01:00
emrearıyürek
b361e3a108 feat: Support new types in ledger_entry (#2654)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-14 17:37:14 +01:00
Ayaz Salikhov
a4b47da57a docs: Update doxygen-awesome-css to 2.4.1 (#2690) 2025-10-13 18:55:21 +01:00
github-actions[bot]
2ed1a45ef1 style: clang-tidy auto fixes (#2688)
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-10-10 10:45:47 +01:00
Ayaz Salikhov
dabaa5bf80 fix: Drop dynamic loggers to fix memory leak (#2686) 2025-10-09 16:51:55 +01:00
Ayaz Salikhov
b4fb3e42b8 chore: Publish RCs as non-draft (#2685) 2025-10-09 14:48:29 +01:00
Peter Chen
aa64bb7b6b refactor: Keyspace comments (#2684)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-08 19:58:05 +01:00
rrmanukyan
dc5f8b9c23 fix: Add gRPC Timeout and keepalive to handle stuck connections (#2676) 2025-10-08 13:50:11 +01:00
Ayaz Salikhov
7300529484 docs: All files are .hpp (#2683) 2025-10-07 19:28:00 +01:00
Ayaz Salikhov
33802f475f docs: Build docs using doxygen 1.14.0 (#2681) 2025-10-07 18:48:19 +01:00
Ayaz Salikhov
213752862c chore: Install Doxygen 1.14.0 in our images (#2682) 2025-10-07 18:20:24 +01:00
Ayaz Salikhov
a189eeb952 ci: Use separate pre-commit image (#2678) 2025-10-07 16:01:46 +01:00
Ayaz Salikhov
3c1811233a chore: Add git to pre-commit image (#2679) 2025-10-07 15:41:19 +01:00
Alex Kremer
693ed2061c fix: ASAN issue with AmendmentBlockHandler test (#2674)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-07 15:18:01 +01:00
Ayaz Salikhov
1e2f4b5ca2 chore: Add separate pre-commit image (#2677) 2025-10-07 14:23:49 +01:00
Ayaz Salikhov
1da8464d75 style: Rename actions to use dash (#2669) 2025-10-06 16:19:27 +01:00
Ayaz Salikhov
d48fb168c6 ci: Allow PR titles to start with [ (#2668) 2025-10-06 15:20:26 +01:00
dependabot[bot]
92595f95a0 ci: [DEPENDABOT] bump docker/login-action from 3.5.0 to 3.6.0 (#2662)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 15:16:15 +01:00
dependabot[bot]
fc9de87136 ci: [DEPENDABOT] bump docker/login-action from 3.5.0 to 3.6.0 in /.github/actions/build_docker_image (#2663)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-06 15:16:06 +01:00
Ayaz Salikhov
67f5ca445f style: Rename workflows to use dash and show reusable (#2667) 2025-10-06 15:02:17 +01:00
Ayaz Salikhov
897c255b8c chore: Start pr title with uppercase (#2666) 2025-10-06 13:54:11 +01:00
github-actions[bot]
aa9eea0d99 style: clang-tidy auto fixes (#2665) 2025-10-06 10:35:33 +01:00
353 changed files with 11405 additions and 9259 deletions


@@ -49,6 +49,7 @@ IndentFunctionDeclarationAfterType: false
IndentWidth: 4
IndentWrappedFunctionNames: false
IndentRequiresClause: true
InsertNewlineAtEOF: true
RequiresClausePosition: OwnLine
KeepEmptyLinesAtTheStartOfBlocks: false
MaxEmptyLinesToKeep: 1


@@ -54,7 +54,7 @@ format:
_help_max_pargs_hwrap:
- If a positional argument group contains more than this many
- arguments, then force it to a vertical layout.
max_pargs_hwrap: 6
max_pargs_hwrap: 5
_help_max_rows_cmdline:
- If a cmdline positional group consumes more than this many
- lines without nesting, then invalidate the layout (and nest)

.github/actions/build-clio/action.yml

@@ -0,0 +1,31 @@
name: Build clio
description: Build clio in build directory
inputs:
targets:
description: Space-separated build target names
default: all
nproc_subtract:
description: The number of processors to subtract when calculating parallelism.
required: true
default: "0"
runs:
using: composite
steps:
- name: Get number of processors
uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
id: nproc
with:
subtract: ${{ inputs.nproc_subtract }}
- name: Build targets
shell: bash
env:
CMAKE_TARGETS: ${{ inputs.targets }}
run: |
cd build
cmake \
--build . \
--parallel "${{ steps.nproc.outputs.nproc }}" \
--target ${CMAKE_TARGETS}


@@ -34,25 +34,25 @@ runs:
steps:
- name: Login to DockerHub
if: ${{ inputs.push_image == 'true' && inputs.dockerhub_repo != '' }}
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
username: ${{ env.DOCKERHUB_USER }}
password: ${{ env.DOCKERHUB_PW }}
- name: Login to GitHub Container Registry
if: ${{ inputs.push_image == 'true' }}
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ env.GITHUB_TOKEN }}
- uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
- uses: docker/setup-qemu-action@c7c53464625b32c7a7e944ae62b3e17d2b600130 # v3.7.0
with:
cache-image: false
- uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f # v5.8.0
- uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
id: meta
with:
images: ${{ inputs.images }}


@@ -1,29 +0,0 @@
name: Build clio
description: Build clio in build directory
inputs:
targets:
description: Space-separated build target names
default: all
subtract_threads:
description: An option for the action get_number_of_threads. See get_number_of_threads
required: true
default: "0"
runs:
using: composite
steps:
- name: Get number of threads
uses: ./.github/actions/get_number_of_threads
id: number_of_threads
with:
subtract_threads: ${{ inputs.subtract_threads }}
- name: Build targets
shell: bash
run: |
cd build
cmake \
--build . \
--parallel "${{ steps.number_of_threads.outputs.threads_number }}" \
--target ${{ inputs.targets }}

.github/actions/cache-key/action.yml

@@ -0,0 +1,41 @@
name: Cache key
description: Generate cache key for ccache
inputs:
conan_profile:
description: Conan profile name
required: true
build_type:
description: Current build type (e.g. Release, Debug)
required: true
default: Release
code_coverage:
description: Whether code coverage is on
required: true
default: "false"
outputs:
key:
description: Generated cache key for ccache
value: ${{ steps.key_without_commit.outputs.key }}-${{ steps.git_common_ancestor.outputs.commit }}
restore_keys:
description: Cache restore keys for fallback
value: ${{ steps.key_without_commit.outputs.key }}
runs:
using: composite
steps:
- name: Find common commit
id: git_common_ancestor
uses: ./.github/actions/git-common-ancestor
- name: Set cache key without commit
id: key_without_commit
shell: bash
env:
RUNNER_OS: ${{ runner.os }}
BUILD_TYPE: ${{ inputs.build_type }}
CODE_COVERAGE: ${{ inputs.code_coverage == 'true' && '-code_coverage' || '' }}
CONAN_PROFILE: ${{ inputs.conan_profile }}
run: |
echo "key=clio-ccache-${RUNNER_OS}-${BUILD_TYPE}${CODE_COVERAGE}-${CONAN_PROFILE}-develop" >> "${GITHUB_OUTPUT}"


@@ -44,6 +44,7 @@ runs:
- name: Run cmake
shell: bash
env:
BUILD_DIR: "${{ inputs.build_dir }}"
BUILD_TYPE: "${{ inputs.build_type }}"
SANITIZER_OPTION: |-
${{ endsWith(inputs.conan_profile, '.asan') && '-Dsan=address' ||
@@ -58,7 +59,7 @@ runs:
PACKAGE: "${{ inputs.package == 'true' && 'ON' || 'OFF' }}"
run: |
cmake \
-B ${{inputs.build_dir}} \
-B "${BUILD_DIR}" \
-S . \
-G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \


@@ -24,7 +24,7 @@ runs:
-j8 --exclude-throw-branches
- name: Archive coverage report
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: coverage-report.xml
path: build/coverage_report.xml


@@ -28,11 +28,14 @@ runs:
- name: Run conan
shell: bash
env:
BUILD_DIR: "${{ inputs.build_dir }}"
CONAN_BUILD_OPTION: "${{ inputs.force_conan_source_build == 'true' && '*' || 'missing' }}"
BUILD_TYPE: "${{ inputs.build_type }}"
CONAN_PROFILE: "${{ inputs.conan_profile }}"
run: |
conan \
install . \
-of build \
-b "$CONAN_BUILD_OPTION" \
-s "build_type=${{ inputs.build_type }}" \
--profile:all "${{ inputs.conan_profile }}"
-of "${BUILD_DIR}" \
-b "${CONAN_BUILD_OPTION}" \
-s "build_type=${BUILD_TYPE}" \
--profile:all "${CONAN_PROFILE}"


@@ -28,12 +28,17 @@ runs:
- name: Create an issue
id: create_issue
shell: bash
env:
ISSUE_BODY: ${{ inputs.body }}
ISSUE_ASSIGNEES: ${{ inputs.assignees }}
ISSUE_LABELS: ${{ inputs.labels }}
ISSUE_TITLE: ${{ inputs.title }}
run: |
echo -e '${{ inputs.body }}' > issue.md
echo -e "${ISSUE_BODY}" > issue.md
gh issue create \
--assignee '${{ inputs.assignees }}' \
--label '${{ inputs.labels }}' \
--title '${{ inputs.title }}' \
--assignee "${ISSUE_ASSIGNEES}" \
--label "${ISSUE_LABELS}" \
--title "${ISSUE_TITLE}" \
--body-file ./issue.md \
> create_issue.log
created_issue="$(sed 's|.*/||' create_issue.log)"
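The hunk above follows a pattern repeated throughout these changes: user-controlled values (issue title, labels, assignees) move out of inline `${{ }}` templating and into `env:`, so they reach the shell as data instead of being spliced into the script text. A minimal sketch of why that matters (the injected value is hypothetical):

```shell
# Hypothetical attacker-controlled input, as it would arrive via env:.
ISSUE_TITLE='$(echo injected); rm -rf .'

# Safe: the value is only ever expanded inside double quotes, so the
# command substitution embedded in it is never evaluated.
printf '%s\n' "${ISSUE_TITLE}"
```

With inline `${{ inputs.title }}` interpolation, the same value would become part of the script source before bash ever ran, and the embedded command substitution would execute.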


@@ -1,36 +0,0 @@
name: Get number of threads
description: Determines number of threads to use on macOS and Linux
inputs:
subtract_threads:
description: How many threads to subtract from the calculated number
required: true
default: "0"
outputs:
threads_number:
description: Number of threads to use
value: ${{ steps.number_of_threads_export.outputs.num }}
runs:
using: composite
steps:
- name: Get number of threads on mac
id: mac_threads
if: ${{ runner.os == 'macOS' }}
shell: bash
run: echo "num=$(($(sysctl -n hw.logicalcpu) - 2))" >> $GITHUB_OUTPUT
- name: Get number of threads on Linux
id: linux_threads
if: ${{ runner.os == 'Linux' }}
shell: bash
run: echo "num=$(($(nproc) - 2))" >> $GITHUB_OUTPUT
- name: Shift and export number of threads
id: number_of_threads_export
shell: bash
run: |
num_of_threads="${{ steps.mac_threads.outputs.num || steps.linux_threads.outputs.num }}"
shift_by="${{ inputs.subtract_threads }}"
shifted="$((num_of_threads - shift_by))"
echo "num=$(( shifted > 1 ? shifted : 1 ))" >> $GITHUB_OUTPUT


@@ -1,38 +0,0 @@
name: Restore cache
description: Find and restores ccache cache
inputs:
conan_profile:
description: Conan profile name
required: true
ccache_dir:
description: Path to .ccache directory
required: true
build_type:
description: Current build type (e.g. Release, Debug)
required: true
default: Release
code_coverage:
description: Whether code coverage is on
required: true
default: "false"
outputs:
ccache_cache_hit:
description: True if ccache cache has been downloaded
value: ${{ steps.ccache_cache.outputs.cache-hit }}
runs:
using: composite
steps:
- name: Find common commit
id: git_common_ancestor
uses: ./.github/actions/git_common_ancestor
- name: Restore ccache cache
uses: actions/cache/restore@v4
id: ccache_cache
if: ${{ env.CCACHE_DISABLE != '1' }}
with:
path: ${{ inputs.ccache_dir }}
key: clio-ccache-${{ runner.os }}-${{ inputs.build_type }}${{ inputs.code_coverage == 'true' && '-code_coverage' || '' }}-${{ inputs.conan_profile }}-develop-${{ steps.git_common_ancestor.outputs.commit }}


@@ -1,38 +0,0 @@
name: Save cache
description: Save ccache cache for develop branch
inputs:
conan_profile:
description: Conan profile name
required: true
ccache_dir:
description: Path to .ccache directory
required: true
build_type:
description: Current build type (e.g. Release, Debug)
required: true
default: Release
code_coverage:
description: Whether code coverage is on
required: true
default: "false"
ccache_cache_hit:
description: Whether ccache cache has been downloaded
required: true
ccache_cache_miss_rate:
description: How many ccache cache misses happened
runs:
using: composite
steps:
- name: Find common commit
id: git_common_ancestor
uses: ./.github/actions/git_common_ancestor
- name: Save ccache cache
if: ${{ inputs.ccache_cache_hit != 'true' || inputs.ccache_cache_miss_rate == '100.0' }}
uses: actions/cache/save@v4
with:
path: ${{ inputs.ccache_dir }}
key: clio-ccache-${{ runner.os }}-${{ inputs.build_type }}${{ inputs.code_coverage == 'true' && '-code_coverage' || '' }}-${{ inputs.conan_profile }}-develop-${{ steps.git_common_ancestor.outputs.commit }}


@@ -14,7 +14,7 @@ updates:
target-branch: develop
- package-ecosystem: github-actions
directory: .github/actions/build_clio/
directory: .github/actions/build-clio/
schedule:
interval: weekly
day: monday
@@ -27,7 +27,7 @@ updates:
target-branch: develop
- package-ecosystem: github-actions
directory: .github/actions/build_docker_image/
directory: .github/actions/build-docker-image/
schedule:
interval: weekly
day: monday
@@ -53,7 +53,7 @@ updates:
target-branch: develop
- package-ecosystem: github-actions
directory: .github/actions/code_coverage/
directory: .github/actions/code-coverage/
schedule:
interval: weekly
day: monday
@@ -79,7 +79,7 @@ updates:
target-branch: develop
- package-ecosystem: github-actions
directory: .github/actions/create_issue/
directory: .github/actions/create-issue/
schedule:
interval: weekly
day: monday
@@ -92,7 +92,7 @@ updates:
target-branch: develop
- package-ecosystem: github-actions
directory: .github/actions/get_number_of_threads/
directory: .github/actions/git-common-ancestor/
schedule:
interval: weekly
day: monday
@@ -105,33 +105,7 @@ updates:
target-branch: develop
- package-ecosystem: github-actions
directory: .github/actions/git_common_ancestor/
schedule:
interval: weekly
day: monday
time: "04:00"
timezone: Etc/GMT
reviewers:
- XRPLF/clio-dev-team
commit-message:
prefix: "ci: [DEPENDABOT] "
target-branch: develop
- package-ecosystem: github-actions
directory: .github/actions/restore_cache/
schedule:
interval: weekly
day: monday
time: "04:00"
timezone: Etc/GMT
reviewers:
- XRPLF/clio-dev-team
commit-message:
prefix: "ci: [DEPENDABOT] "
target-branch: develop
- package-ecosystem: github-actions
directory: .github/actions/save_cache/
directory: .github/actions/cache-key/
schedule:
interval: weekly
day: monday


@@ -4,7 +4,7 @@ build_type=Release
compiler=apple-clang
compiler.cppstd=20
compiler.libcxx=libc++
compiler.version=17
compiler.version=17.0
os=Macos
[conf]


@@ -3,7 +3,9 @@ import itertools
import json
LINUX_OS = ["heavy", "heavy-arm64"]
LINUX_CONTAINERS = ['{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }']
LINUX_CONTAINERS = [
'{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
]
LINUX_COMPILERS = ["gcc", "clang"]
MACOS_OS = ["macos15"]


@@ -40,9 +40,9 @@ mkdir -p "$PROFILES_DIR"
if [[ "$(uname)" == "Darwin" ]]; then
create_profile_with_sanitizers "apple-clang" "$APPLE_CLANG_PROFILE"
echo "include(apple-clang)" > "$PROFILES_DIR/default"
echo "include(apple-clang)" >"$PROFILES_DIR/default"
else
create_profile_with_sanitizers "clang" "$CLANG_PROFILE"
create_profile_with_sanitizers "gcc" "$GCC_PROFILE"
echo "include(gcc)" > "$PROFILES_DIR/default"
echo "include(gcc)" >"$PROFILES_DIR/default"
fi


@@ -22,8 +22,8 @@ fi
TEST_BINARY=$1
if [[ ! -f "$TEST_BINARY" ]]; then
echo "Test binary not found: $TEST_BINARY"
exit 1
echo "Test binary not found: $TEST_BINARY"
exit 1
fi
TESTS=$($TEST_BINARY --gtest_list_tests | awk '/^ / {print suite $1} !/^ / {suite=$1}')
@@ -31,15 +31,16 @@ TESTS=$($TEST_BINARY --gtest_list_tests | awk '/^ / {print suite $1} !/^ / {su
OUTPUT_DIR="./.sanitizer-report"
mkdir -p "$OUTPUT_DIR"
for TEST in $TESTS; do
OUTPUT_FILE="$OUTPUT_DIR/${TEST//\//_}"
export TSAN_OPTIONS="log_path=\"$OUTPUT_FILE\" die_after_fork=0"
export ASAN_OPTIONS="log_path=\"$OUTPUT_FILE\""
export UBSAN_OPTIONS="log_path=\"$OUTPUT_FILE\""
export MallocNanoZone='0' # for MacOSX
$TEST_BINARY --gtest_filter="$TEST" > /dev/null 2>&1
export TSAN_OPTIONS="die_after_fork=0"
export MallocNanoZone='0' # for MacOSX
if [ $? -ne 0 ]; then
echo "'$TEST' failed a sanitizer check."
fi
for TEST in $TESTS; do
OUTPUT_FILE="$OUTPUT_DIR/${TEST//\//_}.log"
$TEST_BINARY --gtest_filter="$TEST" >"$OUTPUT_FILE" 2>&1
if [ $? -ne 0 ]; then
echo "'$TEST' failed a sanitizer check."
else
rm "$OUTPUT_FILE"
fi
done
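For reference, the awk one-liner that builds `TESTS` joins each indented test case to the suite name printed on the preceding unindented line. Run against a simulated `--gtest_list_tests` listing (synthetic, not from a real binary), it behaves like this:

```shell
# Simulated gtest listing: suite names end with '.', cases are indented.
listing='SuiteA.
  Case1
  Case2
SuiteB.
  Case1'

# Same awk program as the script above: remember the suite on unindented
# lines, emit suite+case on indented ones.
tests="$(printf '%s\n' "$listing" | awk '/^ / {print suite $1} !/^ / {suite=$1}')"
printf '%s\n' "$tests"
```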


@@ -20,5 +20,5 @@ for artifact_name in $(ls); do
rm "${artifact_name}/${BINARY_NAME}"
rm -r "${artifact_name}"
sha256sum "./${artifact_name}.zip" > "./${artifact_name}.zip.sha256sum"
sha256sum "./${artifact_name}.zip" >"./${artifact_name}.zip.sha256sum"
done


@@ -38,32 +38,37 @@ on:
description: Whether to strip clio binary
default: true
defaults:
run:
shell: bash
jobs:
build_and_publish_image:
name: Build and publish image
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Download Clio binary from artifact
if: ${{ inputs.artifact_name != null }}
uses: actions/download-artifact@v5
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: ${{ inputs.artifact_name }}
path: ./docker/clio/artifact/
- name: Download Clio binary from url
if: ${{ inputs.clio_server_binary_url != null }}
shell: bash
env:
BINARY_URL: ${{ inputs.clio_server_binary_url }}
BINARY_SHA256: ${{ inputs.binary_sha256 }}
run: |
wget "${{inputs.clio_server_binary_url}}" -P ./docker/clio/artifact/
if [ "$(sha256sum ./docker/clio/clio_server | awk '{print $1}')" != "${{inputs.binary_sha256}}" ]; then
wget "${BINARY_URL}" -P ./docker/clio/artifact/
if [ "$(sha256sum ./docker/clio/clio_server | awk '{print $1}')" != "${BINARY_SHA256}" ]; then
echo "Binary sha256 sum doesn't match"
exit 1
fi
- name: Unpack binary
shell: bash
run: |
sudo apt update && sudo apt install -y tar unzip
cd docker/clio/artifact
@@ -80,7 +85,6 @@ jobs:
- name: Strip binary
if: ${{ inputs.strip_binary }}
shell: bash
run: strip ./docker/clio/clio_server
- name: Set GHCR_REPO
@@ -89,7 +93,7 @@ jobs:
echo "GHCR_REPO=$(echo ghcr.io/${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> ${GITHUB_OUTPUT}
- name: Build Docker image
uses: ./.github/actions/build_docker_image
uses: ./.github/actions/build-docker-image
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
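The checksum gate in the download step above boils down to comparing a computed digest against the expected one. A self-contained sketch (the file and its digest are synthetic; in the workflow the expected digest arrives via the `binary_sha256` input):

```shell
# Create a synthetic artifact and record its digest up front, standing in
# for the BINARY_SHA256 value supplied to the workflow.
printf 'clio_server contents' > ./example_binary
BINARY_SHA256="$(sha256sum ./example_binary | awk '{print $1}')"

# The same comparison the workflow performs after downloading.
actual="$(sha256sum ./example_binary | awk '{print $1}')"
if [ "${actual}" != "${BINARY_SHA256}" ]; then
    echo "Binary sha256 sum doesn't match"
    exit 1
fi
echo "checksum ok"
rm ./example_binary
```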


@@ -8,14 +8,14 @@ on:
paths:
- .github/workflows/build.yml
- .github/workflows/build_and_test.yml
- .github/workflows/build_impl.yml
- .github/workflows/test_impl.yml
- .github/workflows/upload_coverage_report.yml
- .github/workflows/reusable-build-test.yml
- .github/workflows/reusable-build.yml
- .github/workflows/reusable-test.yml
- .github/workflows/reusable-upload-coverage-report.yml
- ".github/actions/**"
- "!.github/actions/build_docker_image/**"
- "!.github/actions/create_issue/**"
- "!.github/actions/build-docker-image/**"
- "!.github/actions/create-issue/**"
- CMakeLists.txt
- conanfile.py
@@ -33,6 +33,10 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ github.ref == 'refs/heads/develop' && github.run_number || 'branch' }}
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
build-and-test:
name: Build and Test
@@ -45,7 +49,7 @@ jobs:
build_type: [Release, Debug]
container:
[
'{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }',
'{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }',
]
static: [true]
@@ -56,7 +60,7 @@ jobs:
container: ""
static: false
uses: ./.github/workflows/build_and_test.yml
uses: ./.github/workflows/reusable-build-test.yml
with:
runs_on: ${{ matrix.os }}
container: ${{ matrix.container }}
@@ -72,14 +76,14 @@ jobs:
code_coverage:
name: Run Code Coverage
uses: ./.github/workflows/build_impl.yml
uses: ./.github/workflows/reusable-build.yml
with:
runs_on: heavy
container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
conan_profile: gcc
build_type: Debug
download_ccache: true
upload_ccache: false
upload_ccache: true
code_coverage: true
static: true
upload_clio_server: false
@@ -91,10 +95,10 @@ jobs:
package:
name: Build packages
uses: ./.github/workflows/build_impl.yml
uses: ./.github/workflows/reusable-build.yml
with:
runs_on: heavy
container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
conan_profile: gcc
build_type: Release
download_ccache: true
@@ -111,17 +115,16 @@ jobs:
needs: build-and-test
runs-on: heavy
container:
image: ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
image: ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: clio_server_Linux_Release_gcc
- name: Compare Config Description
shell: bash
run: |
repoConfigFile=docs/config-description.md
configDescriptionFile=config_description_new.md


@@ -12,31 +12,33 @@ concurrency:
env:
CONAN_PROFILE: gcc
defaults:
run:
shell: bash
jobs:
build:
name: Build Clio / `libXRPL ${{ github.event.client_payload.version }}`
runs-on: heavy
container:
image: ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
image: ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@7951b682e5a2973b28b0719a72f01fc4b0d0c34f
uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
with:
disable_ccache: true
- name: Update libXRPL version requirement
shell: bash
run: |
sed -i.bak -E "s|'xrpl/[a-zA-Z0-9\\.\\-]+'|'xrpl/${{ github.event.client_payload.conan_ref }}'|g" conanfile.py
rm -f conanfile.py.bak
- name: Update conan lockfile
shell: bash
run: |
conan lock create . --profile:all ${{ env.CONAN_PROFILE }}
@@ -51,13 +53,13 @@ jobs:
conan_profile: ${{ env.CONAN_PROFILE }}
- name: Build Clio
uses: ./.github/actions/build_clio
uses: ./.github/actions/build-clio
- name: Strip tests
run: strip build/clio_tests
- name: Upload clio_tests
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: clio_tests_check_libxrpl
path: build/clio_tests
@@ -67,10 +69,10 @@ jobs:
needs: build
runs-on: heavy
container:
image: ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
image: ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
steps:
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: clio_tests_check_libxrpl
@@ -90,10 +92,10 @@ jobs:
issues: write
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Create an issue
uses: ./.github/actions/create_issue
uses: ./.github/actions/create-issue
env:
GH_TOKEN: ${{ github.token }}
with:


@@ -5,13 +5,26 @@ on:
types: [opened, edited, reopened, synchronize]
branches: [develop]
defaults:
run:
shell: bash
jobs:
check_title:
runs-on: ubuntu-latest
steps:
- uses: ytanikin/pr-conventional-commits@b72758283dcbee706975950e96bc4bf323a8d8c0 # v1.4.2
- uses: ytanikin/pr-conventional-commits@fda730cb152c05a849d6d84325e50c6182d9d1e9 # 1.5.1
with:
task_types: '["build","feat","fix","docs","test","ci","style","refactor","perf","chore"]'
add_label: false
custom_labels: '{"build":"build", "feat":"enhancement", "fix":"bug", "docs":"documentation", "test":"testability", "ci":"ci", "style":"refactoring", "refactor":"refactoring", "perf":"performance", "chore":"tooling"}'
- name: Check if message starts with upper-case letter
env:
PR_TITLE: ${{ github.event.pull_request.title }}
run: |
if [[ ! "${PR_TITLE}" =~ ^[a-z]+:\ [\[A-Z] ]]; then
echo "Error: PR title must start with an upper-case letter."
exit 1
fi
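The title check added above accepts a lowercase conventional-commit type, then `: `, then either an upper-case letter or `[` (as used by the Dependabot prefix). An equivalent grep-based sketch of the same rule:

```shell
# Mirrors the bash pattern ^[a-z]+:\ [\[A-Z] as a POSIX ERE.
check_title() {
    if printf '%s' "$1" | grep -Eq '^[a-z]+: (\[|[A-Z])'; then
        echo pass
    else
        echo fail
    fi
}

check_title 'feat: Add observable value util'   # pass
check_title 'feat: add observable value util'   # fail: lower-case after type
check_title 'ci: [DEPENDABOT] Bump actions'     # pass
```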


@@ -22,12 +22,16 @@ env:
CONAN_PROFILE: clang
LLVM_TOOLS_VERSION: 20
defaults:
run:
shell: bash
jobs:
clang_tidy:
if: github.event_name != 'push' || contains(github.event.head_commit.message, 'clang-tidy auto fixes')
runs-on: heavy
container:
image: ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
image: ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
permissions:
contents: write
@@ -35,22 +39,15 @@ jobs:
pull-requests: write
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@7951b682e5a2973b28b0719a72f01fc4b0d0c34f
uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
with:
disable_ccache: true
- name: Restore cache
uses: ./.github/actions/restore_cache
id: restore_cache
with:
conan_profile: ${{ env.CONAN_PROFILE }}
ccache_dir: ${{ env.CCACHE_DIR }}
- name: Run conan
uses: ./.github/actions/conan
with:
@@ -61,36 +58,36 @@ jobs:
with:
conan_profile: ${{ env.CONAN_PROFILE }}
- name: Get number of threads
uses: ./.github/actions/get_number_of_threads
id: number_of_threads
- name: Get number of processors
uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
id: nproc
- name: Run clang-tidy
- name: Run clang-tidy (several times)
continue-on-error: true
shell: bash
id: run_clang_tidy
id: clang_tidy
run: |
run-clang-tidy-${{ env.LLVM_TOOLS_VERSION }} -p build -j "${{ steps.number_of_threads.outputs.threads_number }}" -fix -quiet 1>output.txt
# We run clang-tidy several times, because some fixes may enable new fixes in subsequent runs.
CLANG_TIDY_COMMAND="run-clang-tidy-${{ env.LLVM_TOOLS_VERSION }} -p build -j ${{ steps.nproc.outputs.nproc }} -fix -quiet"
${CLANG_TIDY_COMMAND} ||
${CLANG_TIDY_COMMAND} ||
${CLANG_TIDY_COMMAND}
- name: Check for changes
id: files_changed
continue-on-error: true
run: |
git diff --exit-code
- name: Fix local includes and clang-format style
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
shell: bash
if: ${{ steps.files_changed.outcome != 'success' }}
run: |
pre-commit run --all-files fix-local-includes || true
pre-commit run --all-files clang-format || true
- name: Print issues found
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
shell: bash
run: |
sed -i '/error\||/!d' ./output.txt
cat output.txt
rm output.txt
- name: Create an issue
if: ${{ steps.run_clang_tidy.outcome != 'success' && github.event_name != 'pull_request' }}
if: ${{ (steps.clang_tidy.outcome != 'success' || steps.files_changed.outcome != 'success') && github.event_name != 'pull_request' }}
id: create_issue
uses: ./.github/actions/create_issue
uses: ./.github/actions/create-issue
env:
GH_TOKEN: ${{ github.token }}
with:
@@ -101,7 +98,7 @@ jobs:
List of the issues found: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}/
- uses: crazy-max/ghaction-import-gpg@e89d40939c28e39f97cf32126055eeae86ba74ec # v6.3.0
if: ${{ steps.run_clang_tidy.outcome != 'success' && github.event_name != 'pull_request' }}
if: ${{ steps.files_changed.outcome != 'success' && github.event_name != 'pull_request' }}
with:
gpg_private_key: ${{ secrets.ACTIONS_GPG_PRIVATE_KEY }}
passphrase: ${{ secrets.ACTIONS_GPG_PASSPHRASE }}
@@ -109,8 +106,8 @@ jobs:
git_commit_gpgsign: true
- name: Create PR with fixes
if: ${{ steps.run_clang_tidy.outcome != 'success' && github.event_name != 'pull_request' }}
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
if: ${{ steps.files_changed.outcome != 'success' && github.event_name != 'pull_request' }}
uses: peter-evans/create-pull-request@22a9089034f40e5a961c8808d113e2c98fb63676 # v7.0.11
env:
GH_REPO: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
@@ -125,6 +122,5 @@ jobs:
reviewers: "godexsoft,kuznetsss,PeterChen13579,mathbunnyru"
- name: Fail the job
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
shell: bash
if: ${{ steps.clang_tidy.outcome != 'success' || steps.files_changed.outcome != 'success' }}
run: exit 1


@@ -10,20 +10,24 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
build:
runs-on: ubuntu-latest
container:
image: ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
image: ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
lfs: true
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@7951b682e5a2973b28b0719a72f01fc4b0d0c34f
uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
with:
disable_ccache: true
@@ -39,10 +43,10 @@ jobs:
run: cmake --build . --target docs
- name: Setup Pages
uses: actions/configure-pages@v5
uses: actions/configure-pages@983d7736d9b0ae728b81ab479565c72886d7745b # v5.0.0
- name: Upload artifact
uses: actions/upload-pages-artifact@v4
uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b # v4.0.0
with:
path: build_docs/html
name: docs-develop
@@ -62,6 +66,6 @@ jobs:
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4
uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e # v4.0.5
with:
artifact_name: docs-develop


@@ -8,14 +8,14 @@ on:
paths:
- .github/workflows/nightly.yml
- .github/workflows/release_impl.yml
- .github/workflows/build_and_test.yml
- .github/workflows/build_impl.yml
- .github/workflows/test_impl.yml
- .github/workflows/build_clio_docker_image.yml
- .github/workflows/reusable-release.yml
- .github/workflows/reusable-build-test.yml
- .github/workflows/reusable-build.yml
- .github/workflows/reusable-test.yml
- .github/workflows/build-clio-docker-image.yml
- ".github/actions/**"
- "!.github/actions/code_coverage/**"
- "!.github/actions/code-coverage/**"
- .github/scripts/prepare-release-artifacts.sh
concurrency:
@@ -23,6 +23,10 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
build-and-test:
name: Build and Test
@@ -39,19 +43,19 @@ jobs:
conan_profile: gcc
build_type: Release
static: true
container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
- os: heavy
conan_profile: gcc
build_type: Debug
static: true
container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
- os: heavy
conan_profile: gcc.ubsan
build_type: Release
static: false
container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
uses: ./.github/workflows/build_and_test.yml
uses: ./.github/workflows/reusable-build-test.yml
with:
runs_on: ${{ matrix.os }}
container: ${{ matrix.container }}
@@ -73,13 +77,13 @@ jobs:
include:
- os: heavy
conan_profile: clang
container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
static: true
- os: macos15
conan_profile: apple-clang
container: ""
static: false
uses: ./.github/workflows/build_impl.yml
uses: ./.github/workflows/reusable-build.yml
with:
runs_on: ${{ matrix.os }}
container: ${{ matrix.container }}
@@ -93,23 +97,34 @@ jobs:
targets: all
analyze_build_time: true
get_date:
name: Get Date
runs-on: ubuntu-latest
outputs:
date: ${{ steps.get_date.outputs.date }}
steps:
- name: Get current date
id: get_date
run: |
echo "date=$(date +'%Y%m%d')" >> $GITHUB_OUTPUT
nightly_release:
needs: build-and-test
uses: ./.github/workflows/release_impl.yml
needs: [build-and-test, get_date]
uses: ./.github/workflows/reusable-release.yml
with:
overwrite_release: true
delete_pattern: "nightly-*"
prerelease: true
title: "Clio development (nightly) build"
version: nightly
title: "Clio development build (nightly-${{ needs.get_date.outputs.date }})"
version: nightly-${{ needs.get_date.outputs.date }}
header: >
> **Note:** Please remember that this is a development release and it is not recommended for production use.
Changelog (including previous releases): <https://github.com/XRPLF/clio/commits/nightly>
Changelog (including previous releases): <https://github.com/XRPLF/clio/commits/nightly-${{ needs.get_date.outputs.date }}>
generate_changelog: false
draft: false
build_and_publish_docker_image:
uses: ./.github/workflows/build_clio_docker_image.yml
uses: ./.github/workflows/build-clio-docker-image.yml
needs: build-and-test
secrets: inherit
with:
@@ -130,10 +145,10 @@ jobs:
issues: write
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Create an issue
uses: ./.github/actions/create_issue
uses: ./.github/actions/create-issue
env:
GH_TOKEN: ${{ github.token }}
with:
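The `get_date` job introduced above stamps each nightly release with the build date, producing tags of the form `nightly-YYYYMMDD` (e.g. `nightly-20251210`). The derivation is just:

```shell
# Date-stamped nightly tag, as computed by the get_date job.
tag="nightly-$(date +'%Y%m%d')"
echo "${tag}"
```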


@@ -1,8 +1,8 @@
name: Pre-commit auto-update
on:
# every first day of the month
schedule:
# every first day of the month
- cron: "0 0 1 * *"
pull_request:
branches: [release/*, develop]


@@ -8,7 +8,7 @@ on:
jobs:
run-hooks:
uses: XRPLF/actions/.github/workflows/pre-commit.yml@afbcbdafbe0ce5439492fb87eda6441371086386
uses: XRPLF/actions/.github/workflows/pre-commit.yml@34790936fae4c6c751f62ec8c06696f9c1a5753a
with:
runs_on: heavy
container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
container: '{ "image": "ghcr.io/xrplf/clio-pre-commit:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'


@@ -29,9 +29,9 @@ jobs:
conan_profile: gcc
build_type: Release
static: true
container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
uses: ./.github/workflows/build_and_test.yml
uses: ./.github/workflows/reusable-build-test.yml
with:
runs_on: ${{ matrix.os }}
container: ${{ matrix.container }}
@@ -47,13 +47,13 @@ jobs:
release:
needs: build-and-test
uses: ./.github/workflows/release_impl.yml
uses: ./.github/workflows/reusable-release.yml
with:
overwrite_release: false
delete_pattern: ""
prerelease: ${{ contains(github.ref_name, '-') }}
title: "${{ github.ref_name}}"
title: "${{ github.ref_name }}"
version: "${{ github.ref_name }}"
header: >
${{ contains(github.ref_name, '-') && '> **Note:** Please remember that this is a release candidate and it is not recommended for production use.' || '' }}
generate_changelog: ${{ !contains(github.ref_name, '-') }}
draft: true
draft: ${{ !contains(github.ref_name, '-') }}


@@ -77,7 +77,7 @@ on:
jobs:
build:
uses: ./.github/workflows/build_impl.yml
uses: ./.github/workflows/reusable-build.yml
with:
runs_on: ${{ inputs.runs_on }}
container: ${{ inputs.container }}
@@ -95,7 +95,7 @@ jobs:
test:
needs: build
uses: ./.github/workflows/test_impl.yml
uses: ./.github/workflows/reusable-test.yml
with:
runs_on: ${{ inputs.runs_on }}
container: ${{ inputs.container }}


@@ -75,6 +75,10 @@ on:
CODECOV_TOKEN:
required: false
defaults:
run:
shell: bash
jobs:
build:
name: Build
@@ -86,7 +90,7 @@ jobs:
if: ${{ runner.os == 'macOS' }}
uses: XRPLF/actions/.github/actions/cleanup-workspace@ea9970b7c211b18f4c8bcdb28c29f5711752029f
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
# We need to fetch tags to have correct version in the release
@@ -95,25 +99,31 @@ jobs:
ref: ${{ github.ref }}
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@7951b682e5a2973b28b0719a72f01fc4b0d0c34f
uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
with:
disable_ccache: ${{ !inputs.download_ccache }}
- name: Setup conan on macOS
if: ${{ runner.os == 'macOS' }}
shell: bash
run: ./.github/scripts/conan/init.sh
- name: Restore cache
if: ${{ inputs.download_ccache }}
uses: ./.github/actions/restore_cache
id: restore_cache
- name: Generate cache key
uses: ./.github/actions/cache-key
id: cache_key
with:
conan_profile: ${{ inputs.conan_profile }}
ccache_dir: ${{ env.CCACHE_DIR }}
build_type: ${{ inputs.build_type }}
code_coverage: ${{ inputs.code_coverage }}
- name: Restore ccache cache
if: ${{ inputs.download_ccache && github.ref != 'refs/heads/develop' }}
uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: ${{ env.CCACHE_DIR }}
key: ${{ steps.cache_key.outputs.key }}
restore-keys: |
${{ steps.cache_key.outputs.restore_keys }}
- name: Run conan
uses: ./.github/actions/conan
with:
@@ -131,7 +141,7 @@ jobs:
package: ${{ inputs.package }}
- name: Build Clio
uses: ./.github/actions/build_clio
uses: ./.github/actions/build-clio
with:
targets: ${{ inputs.targets }}
@@ -141,24 +151,26 @@ jobs:
ClangBuildAnalyzer --all build/ build_time_report.bin
ClangBuildAnalyzer --analyze build_time_report.bin > build_time_report.txt
cat build_time_report.txt
shell: bash
- name: Upload build time analyze report
if: ${{ inputs.analyze_build_time }}
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: build_time_report_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
path: build_time_report.txt
- name: Show ccache's statistics
- name: Show ccache's statistics and zero it
if: ${{ inputs.download_ccache }}
shell: bash
id: ccache_stats
run: |
ccache -s > /tmp/ccache.stats
miss_rate=$(cat /tmp/ccache.stats | grep 'Misses' | head -n1 | sed 's/.*(\(.*\)%).*/\1/')
echo "miss_rate=${miss_rate}" >> $GITHUB_OUTPUT
cat /tmp/ccache.stats
ccache --show-stats
ccache --zero-stats
- name: Save ccache cache
if: ${{ inputs.upload_ccache && github.ref == 'refs/heads/develop' }}
uses: actions/cache/save@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: ${{ env.CCACHE_DIR }}
key: ${{ steps.cache_key.outputs.key }}
- name: Strip unit_tests
if: ${{ !endsWith(inputs.conan_profile, 'san') && !inputs.code_coverage && !inputs.analyze_build_time }}
@@ -170,44 +182,32 @@ jobs:
- name: Upload clio_server
if: ${{ inputs.upload_clio_server && !inputs.code_coverage && !inputs.analyze_build_time }}
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: clio_server_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
path: build/clio_server
- name: Upload clio_tests
if: ${{ !inputs.code_coverage && !inputs.analyze_build_time && !inputs.package }}
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: clio_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
path: build/clio_tests
- name: Upload clio_integration_tests
if: ${{ !inputs.code_coverage && !inputs.analyze_build_time && !inputs.package }}
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: clio_integration_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
path: build/clio_integration_tests
- name: Upload Clio Linux package
if: ${{ inputs.package }}
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: clio_deb_package_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
path: build/*.deb
- name: Save cache
if: ${{ inputs.upload_ccache && github.ref == 'refs/heads/develop' }}
uses: ./.github/actions/save_cache
with:
conan_profile: ${{ inputs.conan_profile }}
ccache_dir: ${{ env.CCACHE_DIR }}
build_type: ${{ inputs.build_type }}
code_coverage: ${{ inputs.code_coverage }}
ccache_cache_hit: ${{ steps.restore_cache.outputs.ccache_cache_hit }}
ccache_cache_miss_rate: ${{ steps.ccache_stats.outputs.miss_rate }}
# This is run as part of the build job, because it requires the following:
# - source code
# - conan packages
@@ -216,17 +216,18 @@ jobs:
# It's all available in the build job, but not in the test job
- name: Run code coverage
if: ${{ inputs.code_coverage }}
uses: ./.github/actions/code_coverage
uses: ./.github/actions/code-coverage
- name: Verify expected version
if: ${{ inputs.expected_version != '' }}
shell: bash
env:
INPUT_EXPECTED_VERSION: ${{ inputs.expected_version }}
run: |
set -e
EXPECTED_VERSION="clio-${{ inputs.expected_version }}"
EXPECTED_VERSION="clio-${INPUT_EXPECTED_VERSION}"
actual_version=$(./build/clio_server --version)
if [[ "$actual_version" != "$EXPECTED_VERSION" ]]; then
echo "Expected version '$EXPECTED_VERSION', but got '$actual_version'"
echo "Expected version '${EXPECTED_VERSION}', but got '${actual_version}'"
exit 1
fi
@@ -238,6 +239,6 @@ jobs:
if: ${{ inputs.code_coverage }}
name: Codecov
needs: build
uses: ./.github/workflows/upload_coverage_report.yml
uses: ./.github/workflows/reusable-upload-coverage-report.yml
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

@@ -3,10 +3,10 @@ name: Make release
on:
workflow_call:
inputs:
overwrite_release:
description: "Overwrite the current release and tag"
delete_pattern:
description: "Pattern to delete previous releases"
required: true
type: boolean
type: string
prerelease:
description: "Create a prerelease"
@@ -38,11 +38,15 @@ on:
required: true
type: boolean
defaults:
run:
shell: bash
jobs:
release:
runs-on: heavy
container:
image: ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
image: ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
env:
GH_REPO: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
@@ -51,29 +55,29 @@ jobs:
contents: write
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@7951b682e5a2973b28b0719a72f01fc4b0d0c34f
uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
with:
disable_ccache: true
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
path: release_artifacts
pattern: clio_server_*
- name: Create release notes
shell: bash
env:
RELEASE_HEADER: ${{ inputs.header }}
run: |
echo "# Release notes" > "${RUNNER_TEMP}/release_notes.md"
echo "" >> "${RUNNER_TEMP}/release_notes.md"
printf '%s\n' "${{ inputs.header }}" >> "${RUNNER_TEMP}/release_notes.md"
printf '%s\n' "${RELEASE_HEADER}" >> "${RUNNER_TEMP}/release_notes.md"
- name: Generate changelog
shell: bash
if: ${{ inputs.generate_changelog }}
run: |
LAST_TAG="$(gh release view --json tagName -q .tagName --repo XRPLF/clio)"
@@ -83,30 +87,39 @@ jobs:
cat CHANGELOG.md >> "${RUNNER_TEMP}/release_notes.md"
- name: Prepare release artifacts
shell: bash
run: .github/scripts/prepare-release-artifacts.sh release_artifacts
- name: Upload release notes
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: release_notes_${{ inputs.version }}
path: "${RUNNER_TEMP}/release_notes.md"
- name: Remove current release and tag
if: ${{ github.event_name != 'pull_request' && inputs.overwrite_release }}
shell: bash
- name: Remove previous release with a pattern
if: ${{ github.event_name != 'pull_request' && inputs.delete_pattern != '' }}
env:
DELETE_PATTERN: ${{ inputs.delete_pattern }}
run: |
gh release delete ${{ inputs.version }} --yes || true
git push origin :${{ inputs.version }} || true
RELEASES_TO_DELETE=$(gh release list --limit 50 --repo "${GH_REPO}" | grep -E "${DELETE_PATTERN}" | awk -F'\t' '{print $3}' || true)
if [ -n "$RELEASES_TO_DELETE" ]; then
for RELEASE in $RELEASES_TO_DELETE; do
echo "Deleting release: $RELEASE"
gh release delete "$RELEASE" --repo "${GH_REPO}" --yes --cleanup-tag
done
fi
- name: Publish release
if: ${{ github.event_name != 'pull_request' }}
shell: bash
env:
RELEASE_VERSION: ${{ inputs.version }}
PRERELEASE_OPTION: ${{ inputs.prerelease && '--prerelease' || '' }}
RELEASE_TITLE: ${{ inputs.title }}
DRAFT_OPTION: ${{ inputs.draft && '--draft' || '' }}
run: |
gh release create "${{ inputs.version }}" \
${{ inputs.prerelease && '--prerelease' || '' }} \
--title "${{ inputs.title }}" \
gh release create "${RELEASE_VERSION}" \
${PRERELEASE_OPTION} \
--title "${RELEASE_TITLE}" \
--target "${GITHUB_SHA}" \
${{ inputs.draft && '--draft' || '' }} \
${DRAFT_OPTION} \
--notes-file "${RUNNER_TEMP}/release_notes.md" \
./release_artifacts/clio_server*

@@ -33,6 +33,10 @@ on:
required: true
type: boolean
defaults:
run:
shell: bash
jobs:
unit_tests:
name: Unit testing
@@ -43,23 +47,22 @@ jobs:
env:
# TODO: remove completely when we have fixed all currently existing issues with sanitizers
SANITIZER_IGNORE_ERRORS: ${{ endsWith(inputs.conan_profile, '.tsan') || inputs.conan_profile == 'clang.asan' || (inputs.conan_profile == 'gcc.asan' && inputs.build_type == 'Release') }}
SANITIZER_IGNORE_ERRORS: ${{ endsWith(inputs.conan_profile, '.tsan') }}
steps:
- name: Cleanup workspace
if: ${{ runner.os == 'macOS' }}
uses: XRPLF/actions/.github/actions/cleanup-workspace@ea9970b7c211b18f4c8bcdb28c29f5711752029f
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: clio_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
- name: Make clio_tests executable
shell: bash
run: chmod +x ./clio_tests
- name: Run clio_tests (regular)
@@ -68,11 +71,10 @@ jobs:
- name: Run clio_tests (sanitizer errors ignored)
if: ${{ env.SANITIZER_IGNORE_ERRORS == 'true' }}
run: ./.github/scripts/execute-tests-under-sanitizer ./clio_tests
run: ./.github/scripts/execute-tests-under-sanitizer.sh ./clio_tests
- name: Check for sanitizer report
if: ${{ env.SANITIZER_IGNORE_ERRORS == 'true' }}
shell: bash
id: check_report
run: |
if ls .sanitizer-report/* 1> /dev/null 2>&1; then
@@ -83,7 +85,7 @@ jobs:
- name: Upload sanitizer report
if: ${{ env.SANITIZER_IGNORE_ERRORS == 'true' && steps.check_report.outputs.found_report == 'true' }}
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: sanitizer_report_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
path: .sanitizer-report/*
@@ -91,7 +93,7 @@ jobs:
- name: Create an issue
if: ${{ false && env.SANITIZER_IGNORE_ERRORS == 'true' && steps.check_report.outputs.found_report == 'true' }}
uses: ./.github/actions/create_issue
uses: ./.github/actions/create-issue
env:
GH_TOKEN: ${{ github.token }}
with:
@@ -144,7 +146,7 @@ jobs:
sleep 5
done
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: clio_integration_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}

@@ -1,24 +1,27 @@
name: Upload report
on:
workflow_dispatch:
workflow_call:
secrets:
CODECOV_TOKEN:
required: true
defaults:
run:
shell: bash
jobs:
upload_report:
name: Upload report
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- name: Download report artifact
uses: actions/download-artifact@v5
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: coverage-report.xml
path: build

@@ -8,14 +8,14 @@ on:
paths:
- .github/workflows/sanitizers.yml
- .github/workflows/build_and_test.yml
- .github/workflows/build_impl.yml
- .github/workflows/test_impl.yml
- .github/workflows/reusable-build-test.yml
- .github/workflows/reusable-build.yml
- .github/workflows/reusable-test.yml
- ".github/actions/**"
- "!.github/actions/build_docker_image/**"
- "!.github/actions/create_issue/**"
- .github/scripts/execute-tests-under-sanitizer
- "!.github/actions/build-docker-image/**"
- "!.github/actions/create-issue/**"
- .github/scripts/execute-tests-under-sanitizer.sh
- CMakeLists.txt
- conanfile.py
@@ -41,17 +41,16 @@ jobs:
sanitizer_ext: [.asan, .tsan, .ubsan]
build_type: [Release, Debug]
uses: ./.github/workflows/build_and_test.yml
uses: ./.github/workflows/reusable-build-test.yml
with:
runs_on: heavy
container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
download_ccache: false
upload_ccache: false
conan_profile: ${{ matrix.compiler }}${{ matrix.sanitizer_ext }}
build_type: ${{ matrix.build_type }}
static: false
# Currently, both gcc.tsan and clang.tsan unit tests hang
run_unit_tests: ${{ matrix.sanitizer_ext != '.tsan' }}
run_unit_tests: true
run_integration_tests: false
upload_clio_server: false
targets: clio_tests clio_integration_tests

@@ -3,23 +3,23 @@ name: Update CI docker image
on:
pull_request:
paths:
- .github/workflows/update_docker_ci.yml
- .github/workflows/update-docker-ci.yml
- ".github/actions/build_docker_image/**"
- ".github/actions/build-docker-image/**"
- "docker/ci/**"
- "docker/compilers/**"
- "docker/tools/**"
- "docker/**"
- "!docker/clio/**"
- "!docker/develop/**"
push:
branches: [develop]
paths:
- .github/workflows/update_docker_ci.yml
- .github/workflows/update-docker-ci.yml
- ".github/actions/build_docker_image/**"
- ".github/actions/build-docker-image/**"
- "docker/ci/**"
- "docker/compilers/**"
- "docker/tools/**"
- "docker/**"
- "!docker/clio/**"
- "!docker/develop/**"
workflow_dispatch:
concurrency:
@@ -33,6 +33,10 @@ env:
GCC_MAJOR_VERSION: 15
GCC_VERSION: 15.2.0
defaults:
run:
shell: bash
jobs:
repo:
name: Calculate repo name
@@ -52,7 +56,7 @@ jobs:
needs: repo
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get changed files
id: changed-files
@@ -60,7 +64,7 @@ jobs:
with:
files: "docker/compilers/gcc/**"
- uses: ./.github/actions/build_docker_image
- uses: ./.github/actions/build-docker-image
if: ${{ steps.changed-files.outputs.any_changed == 'true' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -90,15 +94,15 @@ jobs:
needs: repo
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: "docker/compilers/gcc/**"
- uses: ./.github/actions/build_docker_image
- uses: ./.github/actions/build-docker-image
if: ${{ steps.changed-files.outputs.any_changed == 'true' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -128,7 +132,7 @@ jobs:
needs: [repo, gcc-amd64, gcc-arm64]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get changed files
id: changed-files
@@ -137,11 +141,11 @@ jobs:
files: "docker/compilers/gcc/**"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Login to GitHub Container Registry
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -149,7 +153,7 @@ jobs:
- name: Login to DockerHub
if: ${{ github.repository_owner == 'XRPLF' && github.event_name != 'pull_request' }}
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
username: ${{ secrets.DOCKERHUB_USER }}
password: ${{ secrets.DOCKERHUB_PW }}
@@ -179,7 +183,7 @@ jobs:
needs: repo
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get changed files
id: changed-files
@@ -187,7 +191,7 @@ jobs:
with:
files: "docker/compilers/clang/**"
- uses: ./.github/actions/build_docker_image
- uses: ./.github/actions/build-docker-image
if: ${{ steps.changed-files.outputs.any_changed == 'true' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -215,7 +219,7 @@ jobs:
needs: [repo, gcc-merge]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get changed files
id: changed-files
@@ -223,7 +227,7 @@ jobs:
with:
files: "docker/tools/**"
- uses: ./.github/actions/build_docker_image
- uses: ./.github/actions/build-docker-image
if: ${{ steps.changed-files.outputs.any_changed == 'true' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -246,15 +250,15 @@ jobs:
needs: [repo, gcc-merge]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: "docker/tools/**"
- uses: ./.github/actions/build_docker_image
- uses: ./.github/actions/build-docker-image
if: ${{ steps.changed-files.outputs.any_changed == 'true' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -277,7 +281,7 @@ jobs:
needs: [repo, tools-amd64, tools-arm64]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get changed files
id: changed-files
@@ -286,11 +290,11 @@ jobs:
files: "docker/tools/**"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Login to GitHub Container Registry
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -306,14 +310,36 @@ jobs:
$image:arm64-latest \
$image:amd64-latest
pre-commit:
name: Build and push pre-commit docker image
runs-on: heavy
needs: [repo, tools-merge]
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: ./.github/actions/build-docker-image
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
images: |
${{ needs.repo.outputs.GHCR_REPO }}/clio-pre-commit
push_image: ${{ github.event_name != 'pull_request' }}
directory: docker/pre-commit
tags: |
type=raw,value=latest
type=raw,value=${{ github.sha }}
platforms: linux/amd64,linux/arm64
build_args: |
GHCR_REPO=${{ needs.repo.outputs.GHCR_REPO }}
ci:
name: Build and push CI docker image
runs-on: heavy
needs: [repo, gcc-merge, clang, tools-merge]
steps:
- uses: actions/checkout@v4
- uses: ./.github/actions/build_docker_image
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: ./.github/actions/build-docker-image
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}

@@ -18,7 +18,7 @@ on:
pull_request:
branches: [develop]
paths:
- .github/workflows/upload_conan_deps.yml
- .github/workflows/upload-conan-deps.yml
- .github/actions/conan/action.yml
- ".github/scripts/conan/**"
@@ -28,7 +28,7 @@ on:
push:
branches: [develop]
paths:
- .github/workflows/upload_conan_deps.yml
- .github/workflows/upload-conan-deps.yml
- .github/actions/conan/action.yml
- ".github/scripts/conan/**"
@@ -40,13 +40,17 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
generate-matrix:
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Calculate conan matrix
id: set-matrix
@@ -69,16 +73,15 @@ jobs:
CONAN_PROFILE: ${{ matrix.compiler }}${{ matrix.sanitizer_ext }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@7951b682e5a2973b28b0719a72f01fc4b0d0c34f
uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
with:
disable_ccache: true
- name: Setup conan on macOS
if: ${{ runner.os == 'macOS' }}
shell: bash
run: ./.github/scripts/conan/init.sh
- name: Show conan profile
@@ -99,4 +102,6 @@ jobs:
- name: Upload Conan packages
if: ${{ github.repository_owner == 'XRPLF' && github.event_name != 'pull_request' && github.event_name != 'schedule' }}
run: conan upload "*" -r=xrplf --confirm ${{ github.event.inputs.force_upload == 'true' && '--force' || '' }}
env:
FORCE_OPTION: ${{ github.event.inputs.force_upload == 'true' && '--force' || '' }}
run: conan upload "*" -r=xrplf --confirm ${FORCE_OPTION}

@@ -11,7 +11,10 @@
#
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
exclude: ^(docs/doxygen-awesome-theme/|conan\.lock$)
exclude: |
(?x)^(
docs/doxygen-awesome-theme/.*
)$
repos:
# `pre-commit sample-config` default hooks
@@ -26,12 +29,12 @@ repos:
# Autoformat: YAML, JSON, Markdown, etc.
- repo: https://github.com/rbubley/mirrors-prettier
rev: 5ba47274f9b181bce26a5150a725577f3c336011 # frozen: v3.6.2
rev: 3c603eae8faac85303ae675fd33325cff699a797 # frozen: v3.7.3
hooks:
- id: prettier
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: 192ad822316c3a22fb3d3cc8aa6eafa0b8488360 # frozen: v0.45.0
rev: c8fd5003603dd6f12447314ecd935ba87c09aff5 # frozen: v0.46.0
hooks:
- id: markdownlint-fix
exclude: LICENSE.md
@@ -43,7 +46,7 @@ repos:
# hadolint-docker is a special hook that runs hadolint in a Docker container
# Docker is not installed in the environment where pre-commit is run
stages: [manual]
entry: hadolint/hadolint:v2.14 hadolint
entry: hadolint/hadolint:v2.14.0 hadolint
- repo: https://github.com/codespell-project/codespell
rev: 63c8f8312b7559622c0d82815639671ae42132ac # frozen: v2.4.1
@@ -55,6 +58,17 @@ repos:
--ignore-words=pre-commit-hooks/codespell_ignore.txt,
]
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 2892f1f81088477370d4fbc56545c05d33d2493f # frozen: 25.11.0
hooks:
- id: black
- repo: https://github.com/scop/pre-commit-shfmt
rev: 2a30809d16bc7a60d9b97353c797f42b510d3368 # frozen: v3.12.0-2
hooks:
- id: shfmt
args: ["-i", "4", "--write"]
# Running some C++ hooks before clang-format
# to ensure that the style is consistent.
- repo: local
@@ -80,7 +94,7 @@ repos:
language: script
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: 719856d56a62953b8d2839fb9e851f25c3cfeef8 # frozen: v21.1.2
rev: 4c26f99731e7c22a047c35224150ee9e43d7c03e # frozen: v21.1.6
hooks:
- id: clang-format
args: [--style=file]

@@ -75,10 +75,6 @@ if (san)
endif ()
target_compile_options(clio_options INTERFACE ${SAN_OPTIMIZATION_FLAG} ${SAN_FLAG} -fno-omit-frame-pointer)
target_compile_definitions(
clio_options INTERFACE $<$<STREQUAL:${san},address>:SANITIZER=ASAN> $<$<STREQUAL:${san},thread>:SANITIZER=TSAN>
$<$<STREQUAL:${san},memory>:SANITIZER=MSAN> $<$<STREQUAL:${san},undefined>:SANITIZER=UBSAN>
)
target_link_libraries(clio_options INTERFACE ${SAN_FLAG} ${SAN_LIB})
endif ()

@@ -180,6 +180,7 @@ Existing maintainers can resign, or be subject to a vote for removal at the behe
- [kuznetsss](https://github.com/kuznetsss) (Ripple)
- [legleux](https://github.com/legleux) (Ripple)
- [PeterChen13579](https://github.com/PeterChen13579) (Ripple)
- [mathbunnyru](https://github.com/mathbunnyru) (Ripple)
### Honorable ex-Maintainers

@@ -34,7 +34,6 @@ Below are some useful docs to learn more about Clio.
- [How to configure Clio and rippled](./docs/configure-clio.md)
- [How to run Clio](./docs/run-clio.md)
- [Logging](./docs/logging.md)
- [Troubleshooting guide](./docs/trouble_shooting.md)
**General reference material:**

@@ -0,0 +1,17 @@
[Unit]
Description=Clio XRPL API server
Documentation=https://github.com/XRPLF/clio.git
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=@CLIO_INSTALL_DIR@/bin/clio_server @CLIO_INSTALL_DIR@/etc/config.json
Restart=on-failure
User=clio
Group=clio
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

@@ -11,3 +11,6 @@ file(READ docs/examples/config/example-config.json config)
string(REGEX REPLACE "./clio_log" "/var/log/clio/" config "${config}")
file(WRITE ${CMAKE_BINARY_DIR}/install-config.json "${config}")
install(FILES ${CMAKE_BINARY_DIR}/install-config.json DESTINATION etc RENAME config.json)
configure_file("${CMAKE_SOURCE_DIR}/cmake/install/clio.service.in" "${CMAKE_BINARY_DIR}/clio.service")
install(FILES "${CMAKE_BINARY_DIR}/clio.service" DESTINATION /lib/systemd/system)

@@ -10,37 +10,36 @@ CLIO_BIN="$CLIO_PREFIX/bin/${CLIO_EXECUTABLE}"
CLIO_CONFIG="$CLIO_PREFIX/etc/config.json"
case "$1" in
configure)
if ! id -u "$USER_NAME" >/dev/null 2>&1; then
# Users who should not have a home directory should have their home directory set to /nonexistent
# https://www.debian.org/doc/debian-policy/ch-opersys.html#non-existent-home-directories
useradd \
--system \
--home-dir /nonexistent \
--no-create-home \
--shell /usr/sbin/nologin \
--comment "system user for ${CLIO_EXECUTABLE}" \
--user-group \
${USER_NAME}
fi
configure)
if ! id -u "$USER_NAME" >/dev/null 2>&1; then
# Users who should not have a home directory should have their home directory set to /nonexistent
# https://www.debian.org/doc/debian-policy/ch-opersys.html#non-existent-home-directories
useradd \
--system \
--home-dir /nonexistent \
--no-create-home \
--shell /usr/sbin/nologin \
--comment "system user for ${CLIO_EXECUTABLE}" \
--user-group \
${USER_NAME}
fi
install -d -o "$USER_NAME" -g "$GROUP_NAME" /var/log/clio
install -d -o "$USER_NAME" -g "$GROUP_NAME" /var/log/clio
if [ -f "$CLIO_CONFIG" ]; then
chown "$USER_NAME:$GROUP_NAME" "$CLIO_CONFIG"
fi
if [ -f "$CLIO_CONFIG" ]; then
chown "$USER_NAME:$GROUP_NAME" "$CLIO_CONFIG"
fi
chown -R "$USER_NAME:$GROUP_NAME" "$CLIO_PREFIX"
chown -R "$USER_NAME:$GROUP_NAME" "$CLIO_PREFIX"
ln -sf "$CLIO_BIN" "/usr/bin/${CLIO_EXECUTABLE}"
ln -sf "$CLIO_BIN" "/usr/bin/${CLIO_EXECUTABLE}"
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
;;
abort-upgrade | abort-remove | abort-deconfigure) ;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
exit 0

@@ -3,56 +3,60 @@
"requires": [
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1756234269.497",
"xxhash/0.8.3#681d36a0a6111fc56e5e45ea182c19cc%1756234289.683",
"xrpl/2.6.1#973af2bf9631f239941dd9f5a100bb84%1759275059.342",
"xrpl/3.0.0#534d3f65a336109eee929b88962bae4e%1765375071.547",
"sqlite3/3.49.1#8631739a4c9b93bd3d6b753bac548a63%1756234266.869",
"spdlog/1.15.3#3ca0e9e6b83af4d0151e26541d140c86%1754401846.61",
"spdlog/1.16.0#942c2c39562ae25ba575d9c8e2bdf3b6%1763984117.108",
"soci/4.0.3#a9f8d773cd33e356b5879a4b0564f287%1756234262.318",
"re2/20230301#dfd6e2bf050eb90ddd8729cfb4c844a4%1756234257.976",
"re2/20230301#ca3b241baec15bd31ea9187150e0b333%1764175362.029",
"rapidjson/cci.20220822#1b9d8c2256876a154172dc5cfbe447c6%1754325007.656",
"protobuf/3.21.12#d927114e28de9f4691a6bbcdd9a529d1%1756234251.614",
"protobuf/3.21.12#44ee56c0a6eea0c19aeeaca680370b88%1764175361.456",
"openssl/1.1.1w#a8f0792d7c5121b954578a7149d23e03%1756223730.729",
"nudb/2.0.9#c62cfd501e57055a7e0d8ee3d5e5427d%1756234237.107",
"nudb/2.0.9#fb8dfd1a5557f5e0528114c2da17721e%1763150366.909",
"minizip/1.2.13#9e87d57804bd372d6d1e32b1871517a3%1754325004.374",
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1756234228.999",
"libuv/1.46.0#dc28c1f653fa197f00db5b577a6f6011%1754325003.592",
"libiconv/1.17#1e65319e945f2d31941a9d28cc13c058%1756223727.64",
"libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1756230911.03",
"libarchive/3.8.1#5cf685686322e906cb42706ab7e099a8%1756234256.696",
"libarchive/3.8.1#ffee18995c706e02bf96e7a2f7042e0d%1764175360.142",
"http_parser/2.9.4#98d91690d6fd021e9e624218a85d9d97%1754325001.385",
"gtest/1.14.0#f8f0757a574a8dd747d16af62d6eb1b7%1754325000.842",
"grpc/1.50.1#02291451d1e17200293a409410d1c4e1%1756234248.958",
"fmt/11.2.0#579bb2cdf4a7607621beea4eb4651e0f%1754324999.086",
"fmt/12.1.0#50abab23274d56bb8f42c94b3b9a40c7%1763984116.926",
"doctest/2.4.11#a4211dfc329a16ba9f280f9574025659%1756234220.819",
"date/3.0.4#f74bbba5a08fa388256688743136cb6f%1756234217.493",
"cassandra-cpp-driver/2.17.0#e50919efac8418c26be6671fd702540a%1754324997.363",
"c-ares/1.34.5#b78b91e7cfb1f11ce777a285bbf169c6%1756234217.915",
"bzip2/1.0.8#00b4a4658791c1f06914e087f0e792f5%1756234261.716",
"boost/1.83.0#5d975011d65b51abb2d2f6eb8386b368%1754325043.336",
"date/3.0.4#862e11e80030356b53c2c38599ceb32b%1763584497.32",
"cassandra-cpp-driver/2.17.0#bd3934138689482102c265d01288a316%1764175359.611",
"c-ares/1.34.5#5581c2b62a608b40bb85d965ab3ec7c8%1764175359.429",
"bzip2/1.0.8#c470882369c2d95c5c77e970c0c7e321%1764175359.429",
"boost/1.83.0#91d8b1572534d2c334d6790e3c34d0c1%1764175359.61",
"benchmark/1.9.4#ce4403f7a24d3e1f907cd9da4b678be4%1754578869.672",
"abseil/20230802.1#f0f91485b111dc9837a68972cb19ca7b%1756234220.907"
"abseil/20230802.1#90ba607d4ee8fb5fb157c3db540671fc%1764175359.429"
],
"build_requires": [
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1756234269.497",
"protobuf/3.21.12#d927114e28de9f4691a6bbcdd9a529d1%1756234251.614",
"cmake/3.31.8#dde3bde00bb843687e55aea5afa0e220%1756234232.89",
"protobuf/3.21.12#44ee56c0a6eea0c19aeeaca680370b88%1764175361.456",
"cmake/4.2.0#ae0a44f44a1ef9ab68fd4b3e9a1f8671%1764175359.44",
"cmake/3.31.10#313d16a1aa16bbdb2ca0792467214b76%1764175359.429",
"b2/5.3.3#107c15377719889654eb9a162a673975%1756234226.28"
],
"python_requires": [],
"overrides": {
"boost/1.83.0": [
null,
"boost/1.83.0#5d975011d65b51abb2d2f6eb8386b368"
"boost/1.83.0#91d8b1572534d2c334d6790e3c34d0c1"
],
"protobuf/3.21.12": [
null,
"protobuf/3.21.12"
"protobuf/3.21.12#44ee56c0a6eea0c19aeeaca680370b88"
],
"lz4/1.9.4": [
"lz4/1.10.0"
],
"sqlite3/3.44.2": [
"sqlite3/3.49.1"
],
"fmt/12.0.0": [
"fmt/12.1.0"
]
},
"config_requires": []
}
}

@@ -3,62 +3,60 @@ from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
class ClioConan(ConanFile):
name = 'clio'
license = 'ISC'
author = 'Alex Kremer <akremer@ripple.com>, John Freeman <jfreeman@ripple.com>, Ayaz Salikhov <asalikhov@ripple.com>'
url = 'https://github.com/xrplf/clio'
description = 'Clio RPC server'
settings = 'os', 'compiler', 'build_type', 'arch'
name = "clio"
license = "ISC"
author = "Alex Kremer <akremer@ripple.com>, John Freeman <jfreeman@ripple.com>, Ayaz Salikhov <asalikhov@ripple.com>"
url = "https://github.com/xrplf/clio"
description = "Clio RPC server"
settings = "os", "compiler", "build_type", "arch"
options = {}
requires = [
'boost/1.83.0',
'cassandra-cpp-driver/2.17.0',
'fmt/11.2.0',
'protobuf/3.21.12',
'grpc/1.50.1',
'openssl/1.1.1w',
'xrpl/2.6.1',
'zlib/1.3.1',
'libbacktrace/cci.20210118',
'spdlog/1.15.3',
"boost/1.83.0",
"cassandra-cpp-driver/2.17.0",
"protobuf/3.21.12",
"grpc/1.50.1",
"openssl/1.1.1w",
"xrpl/3.0.0",
"zlib/1.3.1",
"libbacktrace/cci.20210118",
"spdlog/1.16.0",
]
default_options = {
'xrpl/*:tests': False,
'xrpl/*:rocksdb': False,
'cassandra-cpp-driver/*:shared': False,
'date/*:header_only': True,
'grpc/*:shared': False,
'grpc/*:secure': True,
'libpq/*:shared': False,
'lz4/*:shared': False,
'openssl/*:shared': False,
'protobuf/*:shared': False,
'protobuf/*:with_zlib': True,
'snappy/*:shared': False,
'gtest/*:no_main': True,
"xrpl/*:tests": False,
"xrpl/*:rocksdb": False,
"cassandra-cpp-driver/*:shared": False,
"date/*:header_only": True,
"grpc/*:shared": False,
"grpc/*:secure": True,
"libpq/*:shared": False,
"lz4/*:shared": False,
"openssl/*:shared": False,
"protobuf/*:shared": False,
"protobuf/*:with_zlib": True,
"snappy/*:shared": False,
"gtest/*:no_main": True,
}
exports_sources = (
'CMakeLists.txt', 'cmake/*', 'src/*'
)
exports_sources = ("CMakeLists.txt", "cmake/*", "src/*")
def requirements(self):
self.requires('gtest/1.14.0')
self.requires('benchmark/1.9.4')
self.requires("gtest/1.14.0")
self.requires("benchmark/1.9.4")
self.requires("fmt/12.1.0", force=True)
def configure(self):
if self.settings.compiler == 'apple-clang':
self.options['boost'].visibility = 'global'
if self.settings.compiler == "apple-clang":
self.options["boost"].visibility = "global"
def layout(self):
cmake_layout(self)
# Fix this setting to follow the default introduced in Conan 1.48
# to align with our build instructions.
self.folders.generators = 'build/generators'
self.folders.generators = "build/generators"
generators = 'CMakeDeps'
generators = "CMakeDeps"
def generate(self):
tc = CMakeToolchain(self)

@@ -36,32 +36,28 @@ RUN apt-get update \
libmpfr-dev \
libncurses-dev \
make \
ninja-build \
wget \
zip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install Python tools
ARG PYTHON_VERSION=3.13
RUN add-apt-repository ppa:deadsnakes/ppa \
&& apt-get update \
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
python${PYTHON_VERSION} \
python${PYTHON_VERSION}-venv \
python3 \
python3-pip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& curl -sS https://bootstrap.pypa.io/get-pip.py | python${PYTHON_VERSION}
# Create a virtual environment for python tools
RUN python${PYTHON_VERSION} -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
&& rm -rf /var/lib/apt/lists/*
RUN pip install -q --no-cache-dir \
# TODO: Remove this once we switch to newer Ubuntu base image
# lxml 6.0.0 is not compatible with our image
'lxml<6.0.0' \
cmake \
conan==2.20.1 \
conan==2.22.1 \
gcovr \
# We're adding pre-commit to this image as well,
# because clang-tidy workflow requires it
pre-commit
# Install LLVM tools
@@ -110,6 +106,7 @@ COPY --from=clio-tools \
/usr/local/bin/git-cliff \
/usr/local/bin/gh \
/usr/local/bin/gdb \
/usr/local/bin/ninja \
/usr/local/bin/
WORKDIR /root

@@ -5,17 +5,18 @@ It is used in [Clio Github Actions](https://github.com/XRPLF/clio/actions) but c
The image is based on Ubuntu 20.04 and contains:
- ccache 4.11.3
- ccache 4.12.1
- Clang 19
- ClangBuildAnalyzer 1.6.0
- Conan 2.20.1
- Doxygen 1.12
- Conan 2.22.1
- Doxygen 1.15.0
- GCC 15.2.0
- GDB 16.3
- gh 2.74
- git-cliff 2.9.1
- mold 2.40.1
- Python 3.13
- gh 2.82.1
- git-cliff 2.10.1
- mold 2.40.4
- Ninja 1.13.2
- Python 3.8
- and some other useful tools
Conan is set up to build Clio without any additional steps.

@@ -3,6 +3,13 @@
{% set sanitizer_opt_map = {"asan": "address", "tsan": "thread", "ubsan": "undefined"} %}
{% set sanitizer = sanitizer_opt_map[sani] %}
{% set sanitizer_b2_flags_map = {
"address": "context-impl=ucontext address-sanitizer=norecover",
"thread": "context-impl=ucontext thread-sanitizer=norecover",
"undefined": "undefined-sanitizer=norecover"
} %}
{% set sanitizer_b2_flags_str = sanitizer_b2_flags_map[sanitizer] %}
{% set sanitizer_build_flags_str = "-fsanitize=" ~ sanitizer ~ " -g -O1 -fno-omit-frame-pointer" %}
{% set sanitizer_build_flags = sanitizer_build_flags_str.split(' ') %}
{% set sanitizer_link_flags_str = "-fsanitize=" ~ sanitizer %}
@@ -11,7 +18,8 @@
include({{ compiler }})
[options]
boost/*:extra_b2_flags="cxxflags=\"{{ sanitizer_build_flags_str }}\" linkflags=\"{{ sanitizer_link_flags_str }}\""
boost/*:extra_b2_flags="{{ sanitizer_b2_flags_str }}"
boost/*:without_context=False
boost/*:without_stacktrace=True
[conf]
@@ -20,4 +28,10 @@ tools.build:cxxflags+={{ sanitizer_build_flags }}
tools.build:exelinkflags+={{ sanitizer_link_flags }}
tools.build:sharedlinkflags+={{ sanitizer_link_flags }}
tools.info.package_id:confs+=["tools.build:cflags", "tools.build:cxxflags", "tools.build:exelinkflags", "tools.build:sharedlinkflags"]
{% if sanitizer == "address" %}
tools.build:defines+=["BOOST_USE_ASAN", "BOOST_USE_UCONTEXT"]
{% elif sanitizer == "thread" %}
tools.build:defines+=["BOOST_USE_TSAN", "BOOST_USE_UCONTEXT"]
{% endif %}
tools.info.package_id:confs+=["tools.build:cflags", "tools.build:cxxflags", "tools.build:exelinkflags", "tools.build:sharedlinkflags", "tools.build:defines"]


@@ -8,7 +8,7 @@ ARG UBUNTU_VERSION
ARG GCC_MAJOR_VERSION
ARG BUILD_VERSION=1
ARG BUILD_VERSION=0
ARG DEBIAN_FRONTEND=noninteractive
ARG TARGETARCH
@@ -34,6 +34,7 @@ RUN wget --progress=dot:giga https://gcc.gnu.org/pub/gcc/releases/gcc-$GCC_VERSI
WORKDIR /gcc-$GCC_VERSION
RUN ./contrib/download_prerequisites
# hadolint ignore=DL3059
RUN mkdir /gcc-build
WORKDIR /gcc-build
RUN /gcc-$GCC_VERSION/configure \


@@ -1,6 +1,6 @@
services:
clio_develop:
image: ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
image: ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
volumes:
- clio_develop_conan_data:/root/.conan2/p
- clio_develop_ccache:/root/.ccache


@@ -2,7 +2,7 @@
script_dir=$(dirname $0)
pushd $script_dir > /dev/null
pushd $script_dir >/dev/null
function start_container {
if [ -z "$(docker ps -q -f name=clio_develop)" ]; then
@@ -41,21 +41,26 @@ EOF
}
case $1 in
-h|--help)
print_help ;;
-h | --help)
print_help
;;
-t|--terminal)
open_terminal ;;
-t | --terminal)
open_terminal
;;
-s|--stop)
stop_container ;;
-s | --stop)
stop_container
;;
-*)
echo "Unknown option: $1"
print_help ;;
-*)
echo "Unknown option: $1"
print_help
;;
*)
run "$@" ;;
*)
run "$@"
;;
esac
popd > /dev/null
popd >/dev/null


@@ -0,0 +1,38 @@
ARG GHCR_REPO=invalid
FROM ${GHCR_REPO}/clio-tools:latest AS clio-tools
# We're using Ubuntu 24.04 to have a more recent version of Python
FROM ubuntu:24.04
ARG DEBIAN_FRONTEND=noninteractive
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# hadolint ignore=DL3002
USER root
WORKDIR /root
# Install common tools and dependencies
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
curl \
git \
libatomic1 \
software-properties-common \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install Python tools
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
python3 \
python3-pip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN pip install -q --no-cache-dir --break-system-packages \
pre-commit
COPY --from=clio-tools \
/usr/local/bin/doxygen \
/usr/local/bin/


@@ -8,11 +8,10 @@ ARG TARGETARCH
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
ARG BUILD_VERSION=2
ARG BUILD_VERSION=0
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
ninja-build \
python3 \
python3-pip \
software-properties-common \
@@ -24,7 +23,16 @@ RUN apt-get update \
WORKDIR /tmp
ARG MOLD_VERSION=2.40.1
ARG NINJA_VERSION=1.13.2
RUN wget --progress=dot:giga "https://github.com/ninja-build/ninja/archive/refs/tags/v${NINJA_VERSION}.tar.gz" \
&& tar xf "v${NINJA_VERSION}.tar.gz" \
&& cd "ninja-${NINJA_VERSION}" \
&& ./configure.py --bootstrap \
&& mv ninja /usr/local/bin/ninja \
&& rm -rf /tmp/* /var/tmp/*
ARG MOLD_VERSION=2.40.4
RUN wget --progress=dot:giga "https://github.com/rui314/mold/archive/refs/tags/v${MOLD_VERSION}.tar.gz" \
&& tar xf "v${MOLD_VERSION}.tar.gz" \
&& cd "mold-${MOLD_VERSION}" \
@@ -34,7 +42,7 @@ RUN wget --progress=dot:giga "https://github.com/rui314/mold/archive/refs/tags/v
&& ninja install \
&& rm -rf /tmp/* /var/tmp/*
ARG CCACHE_VERSION=4.11.3
ARG CCACHE_VERSION=4.12.1
RUN wget --progress=dot:giga "https://github.com/ccache/ccache/releases/download/v${CCACHE_VERSION}/ccache-${CCACHE_VERSION}.tar.gz" \
&& tar xf "ccache-${CCACHE_VERSION}.tar.gz" \
&& cd "ccache-${CCACHE_VERSION}" \
@@ -51,7 +59,7 @@ RUN apt-get update \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ARG DOXYGEN_VERSION=1.12.0
ARG DOXYGEN_VERSION=1.15.0
RUN wget --progress=dot:giga "https://github.com/doxygen/doxygen/releases/download/Release_${DOXYGEN_VERSION//./_}/doxygen-${DOXYGEN_VERSION}.src.tar.gz" \
&& tar xf "doxygen-${DOXYGEN_VERSION}.src.tar.gz" \
&& cd "doxygen-${DOXYGEN_VERSION}" \
@@ -71,13 +79,13 @@ RUN wget --progress=dot:giga "https://github.com/aras-p/ClangBuildAnalyzer/archi
&& ninja install \
&& rm -rf /tmp/* /var/tmp/*
ARG GIT_CLIFF_VERSION=2.9.1
ARG GIT_CLIFF_VERSION=2.10.1
RUN wget --progress=dot:giga "https://github.com/orhun/git-cliff/releases/download/v${GIT_CLIFF_VERSION}/git-cliff-${GIT_CLIFF_VERSION}-x86_64-unknown-linux-musl.tar.gz" \
&& tar xf git-cliff-${GIT_CLIFF_VERSION}-x86_64-unknown-linux-musl.tar.gz \
&& mv git-cliff-${GIT_CLIFF_VERSION}/git-cliff /usr/local/bin/git-cliff \
&& rm -rf /tmp/* /var/tmp/*
ARG GH_VERSION=2.74.0
ARG GH_VERSION=2.82.1
RUN wget --progress=dot:giga "https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_linux_${TARGETARCH}.tar.gz" \
&& tar xf gh_${GH_VERSION}_linux_${TARGETARCH}.tar.gz \
&& mv gh_${GH_VERSION}_linux_${TARGETARCH}/bin/gh /usr/local/bin/gh \


@@ -15,6 +15,7 @@ EXTRACT_ANON_NSPACES = NO
SORT_MEMBERS_CTORS_1ST = YES
INPUT = ${SOURCE}/src
USE_MDFILE_AS_MAINPAGE = ${SOURCE}/src/README.md
EXCLUDE_SYMBOLS = ${EXCLUDES}
RECURSIVE = YES
HAVE_DOT = ${USE_DOT}


@@ -177,7 +177,7 @@ There are several CMake options you can use to customize the build:
### Generating API docs for Clio
The API documentation for Clio is generated by [Doxygen](https://www.doxygen.nl/index.html). If you want to generate the API documentation when building Clio, make sure to install Doxygen 1.12.0 on your system.
The API documentation for Clio is generated by [Doxygen](https://www.doxygen.nl/index.html). If you want to generate the API documentation when building Clio, make sure to install Doxygen 1.14.0 on your system.
To generate the API docs, please use CMake option `-Ddocs=ON` as described above and build the `docs` target.
@@ -191,7 +191,7 @@ Open the `index.html` file in your browser to see the documentation pages.
It is also possible to build Clio using [Docker](https://www.docker.com/) if you don't want to install all the dependencies on your machine.
```sh
docker run -it ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
docker run -it ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
git clone https://github.com/XRPLF/clio
cd clio
```


@@ -293,7 +293,7 @@ This document provides a list of all available Clio configuration properties in
- **Required**: True
- **Type**: int
- **Default value**: `1`
- **Default value**: `1000`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The maximum size of the server's request queue. If set to `0`, this means there is no queue size limit.
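For context, a minimal sketch of where this setting lives in a Clio config file (values illustrative; the layout follows the example config elsewhere in this diff):

```json
{
  "server": {
    "ip": "0.0.0.0",
    "port": 51233,
    "max_queue_size": 1000
  }
}
```

Setting `max_queue_size` to `0` disables the queue size limit entirely.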
@@ -391,7 +391,7 @@ This document provides a list of all available Clio configuration properties in
- **Type**: double
- **Default value**: `10`
- **Constraints**: The value must be a positive double number.
- **Description**: The number of milliseconds the server waits to shutdown gracefully. If Clio does not shutdown gracefully after the specified value, it will be killed instead.
- **Description**: The number of seconds the server waits to shutdown gracefully. If Clio does not shutdown gracefully after the specified value, it will be killed instead.
### cache.num_diffs
@@ -441,6 +441,22 @@ This document provides a list of all available Clio configuration properties in
- **Constraints**: The value must be one of the following: `sync`, `async`, `none`.
- **Description**: The strategy used for Cache loading.
### cache.file.path
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: The path to a file where the cache will be saved on shutdown and loaded from on startup. If the file cannot be read, Clio loads the cache as usual (from the DB or from rippled).
### cache.file.max_sequence_age
- **Required**: True
- **Type**: int
- **Default value**: `5000`
- **Constraints**: None
- **Description**: The maximum allowed difference between the latest sequence in the DB and in the cache file. If the cache file is too old (its latest sequence is too low), Clio refuses to use it.
### log.channels.[].channel
- **Required**: False


@@ -951,7 +951,7 @@ span.arrowhead {
border-color: var(--primary-color);
}
#nav-tree ul li:first-child > div > a {
#nav-tree-contents > ul > li:first-child > div > a {
opacity: 0;
pointer-events: none;
}


@@ -61,7 +61,7 @@
"ip": "0.0.0.0",
"port": 51233,
// Max number of requests to queue up before rejecting further requests.
// Defaults to 0, which disables the limit.
// Defaults to 1000 (use 0 to make the queue unbounded).
"max_queue_size": 500,
// If request contains header with authorization, Clio will check if it matches the prefix 'Password ' + this value's sha256 hash
// If matches, the request will be considered as admin request
@@ -137,7 +137,11 @@
// "num_cursors_from_account": 3200, // Read the cursors from the account table until we have enough cursors to partition the ledger to load concurrently.
"num_markers": 48, // The number of markers is the number of coroutines to load the cache concurrently.
"page_fetch_size": 512, // The number of rows to load for each page.
"load": "async" // "sync" to load cache synchronously or "async" to load cache asynchronously or "none"/"no" to turn off the cache.
"load": "async", // "sync" to load cache synchronously or "async" to load cache asynchronously or "none"/"no" to turn off the cache.
"file": {
"path": "./cache.bin",
"max_sequence_age": 5000
}
},
"prometheus": {
"enabled": true,


@@ -77,7 +77,7 @@ It's possible to configure `minimum`, `maximum` and `default` version like so:
All of the above are optional.
Clio will fallback to hardcoded defaults when these values are not specified in the config file, or if the configured values are outside of the minimum and maximum supported versions hardcoded in [src/rpc/common/APIVersion.h](../src/rpc/common/APIVersion.hpp).
Clio will fallback to hardcoded defaults when these values are not specified in the config file, or if the configured values are outside of the minimum and maximum supported versions hardcoded in [src/rpc/common/APIVersion.hpp](../src/rpc/common/APIVersion.hpp).
> [!TIP]
> See the [example-config.json](../docs/examples/config/example-config.json) for more details.


@@ -36,45 +36,45 @@ EOF
exit 0
fi
# Check version of doxygen is at least 1.12
# Check version of doxygen is at least 1.14
version=$($DOXYGEN --version | grep -o '[0-9\.]*')
if [[ "1.12.0" > "$version" ]]; then
if [[ "1.14.0" > "$version" ]]; then
# No hard error if doxygen version is not the one we want - let CI deal with it
cat <<EOF
ERROR
-----------------------------------------------------------------------------
A minimum of version 1.12 of `which doxygen` is required.
Your version is $version. Please upgrade it for next time.
A minimum of version 1.14 of $(which doxygen) is required.
Your version is $version. Please upgrade it.
Your changes may fail to pass CI once pushed.
Your changes may fail CI checks.
-----------------------------------------------------------------------------
EOF
exit 0
fi
mkdir -p ${DOCDIR} > /dev/null 2>&1
pushd ${DOCDIR} > /dev/null 2>&1
mkdir -p ${DOCDIR} >/dev/null 2>&1
pushd ${DOCDIR} >/dev/null 2>&1
cat ${ROOT}/docs/Doxyfile | \
sed \
-e "s/\${LINT}/YES/" \
-e "s/\${WARN_AS_ERROR}/NO/" \
-e "s!\${SOURCE}!${ROOT}!" \
-e "s/\${USE_DOT}/NO/" \
-e "s/\${EXCLUDES}/impl/" \
| ${DOXYGEN} - 2> ${TMPFILE} 1> /dev/null
cat ${ROOT}/docs/Doxyfile |
sed \
-e "s/\${LINT}/YES/" \
-e "s/\${WARN_AS_ERROR}/NO/" \
-e "s!\${SOURCE}!${ROOT}!" \
-e "s/\${USE_DOT}/NO/" \
-e "s/\${EXCLUDES}/impl/" |
${DOXYGEN} - 2>${TMPFILE} 1>/dev/null
# We don't want to check for default values and typedefs as well as for member variables
OUT=$(cat ${TMPFILE} \
| grep -v "=default" \
| grep -v "\(variable\)" \
| grep -v "\(typedef\)")
OUT=$(cat ${TMPFILE} |
grep -v "=default" |
grep -v "\(variable\)" |
grep -v "\(typedef\)")
rm -rf ${TMPFILE} > /dev/null 2>&1
popd > /dev/null 2>&1
rm -rf ${TMPFILE} >/dev/null 2>&1
popd >/dev/null 2>&1
if [[ ! -z "$OUT" ]]; then
cat <<EOF


@@ -23,10 +23,10 @@ fix_includes() {
file_path_fixed="${file_path}.tmp.fixed"
# Make all includes to be <...> style
sed -E 's|#include "(.*)"|#include <\1>|g' "$file_path" > "$file_path_all_global"
sed -E 's|#include "(.*)"|#include <\1>|g' "$file_path" >"$file_path_all_global"
# Make local includes to be "..." style
sed -E "s|#include <(($main_src_dirs)/.*)>|#include \"\1\"|g" "$file_path_all_global" > "$file_path_fixed"
sed -E "s|#include <(($main_src_dirs)/.*)>|#include \"\1\"|g" "$file_path_all_global" >"$file_path_fixed"
rm "$file_path_all_global"
# Check if the temporary file is different from the original file

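To illustrate what the two `sed` passes above do, here is a small standalone sketch; the sample include path and the `main_src_dirs` value are illustrative, not taken from the actual hook:

```shell
# Demonstrate the two include-rewriting passes from the hook above.
# First pass: normalize every  #include "..."  to  <...>  style.
# Second pass: convert includes from the project's own source dirs
# (here assumed to be just "data") back to quoted style.
line='#include "data/BackendInterface.hpp"'
main_src_dirs='data'

step1=$(echo "$line" | sed -E 's|#include "(.*)"|#include <\1>|g')
echo "$step1" # -> #include <data/BackendInterface.hpp>

step2=$(echo "$step1" | sed -E "s|#include <(($main_src_dirs)/.*)>|#include \"\1\"|g")
echo "$step2" # -> #include "data/BackendInterface.hpp"
```

The net effect is that only includes under the project's own source directories end up quoted, while everything else uses angle brackets.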

@@ -4,7 +4,6 @@ import argparse
import re
from pathlib import Path
PATTERN = r'R"JSON\((.*?)\)JSON"'
@@ -40,16 +39,22 @@ def fix_colon_spacing(cpp_content: str) -> str:
raw_json = match.group(1)
raw_json = re.sub(r'":\n\s*(\[|\{)', r'": \1', raw_json)
return f'R"JSON({raw_json})JSON"'
return re.sub(PATTERN, replace_json, cpp_content, flags=re.DOTALL)
def fix_indentation(cpp_content: str) -> str:
if "JSON(" not in cpp_content:
return cpp_content
lines = cpp_content.splitlines()
ends_with_newline = cpp_content.endswith("\n")
def find_indentation(line: str) -> int:
return len(line) - len(line.lstrip())
for (line_num, (line, next_line)) in enumerate(zip(lines[:-1], lines[1:])):
for line_num, (line, next_line) in enumerate(zip(lines[:-1], lines[1:])):
if "JSON(" in line and ")JSON" not in line:
indent = find_indentation(line)
next_indent = find_indentation(next_line)
@@ -64,9 +69,17 @@ def fix_indentation(cpp_content: str) -> str:
if ")JSON" in lines[i]:
lines[i] = " " * indent + lines[i].lstrip()
break
lines[i] = lines[i][by_how_much:] if by_how_much > 0 else " " * (-by_how_much) + lines[i]
lines[i] = (
lines[i][by_how_much:]
if by_how_much > 0
else " " * (-by_how_much) + lines[i]
)
return "\n".join(lines) + "\n"
result = "\n".join(lines)
if ends_with_newline:
result += "\n"
return result
def process_file(file_path: Path, dry_run: bool) -> bool:


@@ -4,7 +4,7 @@
#
set -e -o pipefail
if ! command -v gofmt &> /dev/null ; then
if ! command -v gofmt &>/dev/null; then
echo "gofmt not installed or available in the PATH" >&2
exit 1
fi


@@ -1,5 +1,4 @@
#!/bin/sh
#!/bin/bash
# git for-each-ref refs/tags # see which tags are annotated and which are lightweight. Annotated tags are "tag" objects.
# # Set these so your commits and tags are always signed
@@ -7,7 +6,7 @@
# git config tag.gpgsign true
verify_commit_signed() {
if git verify-commit HEAD &> /dev/null; then
if git verify-commit HEAD &>/dev/null; then
:
# echo "HEAD commit seems signed..."
else
@@ -17,7 +16,7 @@ verify_commit_signed() {
}
verify_tag() {
if git describe --exact-match --tags HEAD &> /dev/null; then
if git describe --exact-match --tags HEAD &>/dev/null; then
: # You might be ok to push
# echo "Tag is annotated."
return 0
@@ -28,7 +27,7 @@ verify_tag() {
}
verify_tag_signed() {
if git verify-tag "$version" &> /dev/null ; then
if git verify-tag "$version" &>/dev/null; then
: # ok, I guess we'll let you push
# echo "Tag appears signed"
return 0
@@ -40,11 +39,11 @@ verify_tag_signed() {
}
# Check some things if we're pushing a branch called "release/"
if echo "$PRE_COMMIT_REMOTE_BRANCH" | grep ^refs\/heads\/release\/ &> /dev/null ; then
if echo "$PRE_COMMIT_REMOTE_BRANCH" | grep ^refs\/heads\/release\/ &>/dev/null; then
version=$(git tag --points-at HEAD)
echo "Looks like you're trying to push a $version release..."
echo "Making sure you've signed and tagged it."
if verify_commit_signed && verify_tag && verify_tag_signed ; then
if verify_commit_signed && verify_tag && verify_tag_signed; then
: # Ok, I guess you can push
else
exit 1


@@ -2,7 +2,6 @@ add_subdirectory(util)
add_subdirectory(data)
add_subdirectory(cluster)
add_subdirectory(etl)
add_subdirectory(etlng)
add_subdirectory(feed)
add_subdirectory(rpc)
add_subdirectory(web)

src/README.md Normal file

@@ -0,0 +1,20 @@
# Clio API server
## Introduction
Clio is an XRP Ledger API server optimized for RPC calls over WebSocket or JSON-RPC.
It stores validated historical ledger and transaction data in a more space efficient format, and uses up to 4 times
less space than [rippled](https://github.com/XRPLF/rippled).
Clio can be configured to store data in [Apache Cassandra](https://cassandra.apache.org/_/index.html) or
[ScyllaDB](https://www.scylladb.com/), enabling scalable read throughput. Multiple Clio nodes can share
access to the same dataset, which allows for a highly available cluster of Clio nodes without the need for redundant
data storage or computation.
## Develop
As you prepare to develop code for Clio, please be sure you are aware of our current
[Contribution guidelines](https://github.com/XRPLF/clio/blob/develop/CONTRIBUTING.md).
Read about @ref "rpc" carefully to know more about writing your own handlers for Clio.


@@ -5,10 +5,9 @@ target_link_libraries(
clio_app
PUBLIC clio_cluster
clio_etl
clio_etlng
clio_feed
clio_web
clio_rpc
clio_migration
clio_rpc
clio_web
PRIVATE Boost::program_options
)


@@ -25,11 +25,10 @@
#include "data/AmendmentCenter.hpp"
#include "data/BackendFactory.hpp"
#include "data/LedgerCache.hpp"
#include "data/LedgerCacheSaver.hpp"
#include "etl/ETLService.hpp"
#include "etl/LoadBalancer.hpp"
#include "etl/NetworkValidatedLedgers.hpp"
#include "etlng/LoadBalancer.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "feed/SubscriptionManager.hpp"
#include "migration/MigrationInspectorFactory.hpp"
#include "rpc/Counters.hpp"
@@ -57,6 +56,7 @@
#include <cstdlib>
#include <memory>
#include <optional>
#include <string>
#include <thread>
#include <utility>
#include <vector>
@@ -91,6 +91,7 @@ ClioApplication::ClioApplication(util::config::ClioConfigDefinition const& confi
{
LOG(util::LogService::info()) << "Clio version: " << util::build::getClioFullVersionString();
signalsHandler_.subscribeToStop([this]() { appStopper_.stop(); });
appStopper_.setOnComplete([this]() { signalsHandler_.notifyGracefulShutdownComplete(); });
}
int
@@ -99,20 +100,23 @@ ClioApplication::run(bool const useNgWebServer)
auto const threads = config_.get<uint16_t>("io_threads");
LOG(util::LogService::info()) << "Number of io threads = " << threads;
// Similarly we need a context to run ETL on
// In the future we can remove the raw ioc and use ctx instead
// This context should be declared above ioc because tasks running inside ioc hold references to it
util::async::CoroExecutionContext ctx{threads};
// IO context to handle all incoming requests, as well as other things.
// This is not the only io context in the application.
boost::asio::io_context ioc{threads};
// Similarly we need a context to run ETLng on
// In the future we can remove the raw ioc and use ctx instead
util::async::CoroExecutionContext ctx{threads};
// Rate limiter, to prevent abuse
auto whitelistHandler = web::dosguard::WhitelistHandler{config_};
auto const dosguardWeights = web::dosguard::Weights::make(config_);
auto dosGuard = web::dosguard::DOSGuard{config_, whitelistHandler, dosguardWeights};
auto sweepHandler = web::dosguard::IntervalSweepHandler{config_, ioc, dosGuard};
auto cache = data::LedgerCache{};
auto cacheSaver = data::LedgerCacheSaver{config_, cache};
// Interface to the database
auto backend = data::makeBackend(config_, cache);
@@ -142,20 +146,12 @@ ClioApplication::run(bool const useNgWebServer)
// ETL uses the balancer to extract data.
// The server uses the balancer to forward RPCs to a rippled node.
// The balancer itself publishes to streams (transactions_proposed and accounts_proposed)
auto balancer = [&] -> std::shared_ptr<etlng::LoadBalancerInterface> {
if (config_.get<bool>("__ng_etl")) {
return etlng::LoadBalancer::makeLoadBalancer(
config_, ioc, backend, subscriptions, std::make_unique<util::MTRandomGenerator>(), ledgers
);
}
return etl::LoadBalancer::makeLoadBalancer(
config_, ioc, backend, subscriptions, std::make_unique<util::MTRandomGenerator>(), ledgers
);
}();
auto balancer = etl::LoadBalancer::makeLoadBalancer(
config_, ioc, backend, subscriptions, std::make_unique<util::MTRandomGenerator>(), ledgers
);
// ETL is responsible for writing and publishing to streams. In read-only mode, ETL only publishes
auto etl = etl::ETLService::makeETLService(config_, ioc, ctx, backend, subscriptions, balancer, ledgers);
auto etl = etl::ETLService::makeETLService(config_, ctx, backend, subscriptions, balancer, ledgers);
auto workQueue = rpc::WorkQueue::makeWorkQueue(config_);
auto counters = rpc::Counters::makeCounters(workQueue);
@@ -187,7 +183,7 @@ ClioApplication::run(bool const useNgWebServer)
return EXIT_FAILURE;
}
httpServer->onGet("/metrics", MetricsHandler{adminVerifier});
httpServer->onGet("/metrics", MetricsHandler{adminVerifier, workQueue});
httpServer->onGet("/health", HealthCheckHandler{});
httpServer->onGet("/cache_state", CacheStateHandler{cache});
auto requestHandler = RequestHandler{adminVerifier, handler};
@@ -201,7 +197,7 @@ ClioApplication::run(bool const useNgWebServer)
}
appStopper_.setOnStop(
Stopper::makeOnStopCallback(httpServer.value(), *balancer, *etl, *subscriptions, *backend, ioc)
Stopper::makeOnStopCallback(httpServer.value(), *balancer, *etl, *subscriptions, *backend, cacheSaver, ioc)
);
// Blocks until stopped.
@@ -216,6 +212,9 @@ ClioApplication::run(bool const useNgWebServer)
auto handler = std::make_shared<web::RPCServerHandler<RPCEngineType>>(config_, backend, rpcEngine, etl, dosGuard);
auto const httpServer = web::makeHttpServer(config_, ioc, dosGuard, handler, cache);
appStopper_.setOnStop(
Stopper::makeOnStopCallback(*httpServer, *balancer, *etl, *subscriptions, *backend, cacheSaver, ioc)
);
// Blocks until stopped.
// When stopped, shared_ptrs fall out of scope


@@ -38,7 +38,18 @@ Stopper::~Stopper()
void
Stopper::setOnStop(std::function<void(boost::asio::yield_context)> cb)
{
util::spawn(ctx_, std::move(cb));
util::spawn(ctx_, [this, cb = std::move(cb)](auto yield) {
cb(yield);
if (onCompleteCallback_)
onCompleteCallback_();
});
}
void
Stopper::setOnComplete(std::function<void()> cb)
{
onCompleteCallback_ = std::move(cb);
}
void


@@ -20,12 +20,13 @@
#pragma once
#include "data/BackendInterface.hpp"
#include "etlng/ETLServiceInterface.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "data/LedgerCacheSaver.hpp"
#include "etl/ETLServiceInterface.hpp"
#include "etl/LoadBalancerInterface.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
#include "util/CoroutineGroup.hpp"
#include "util/log/Logger.hpp"
#include "web/ng/Server.hpp"
#include "web/interface/Concepts.hpp"
#include <boost/asio/executor_work_guard.hpp>
#include <boost/asio/io_context.hpp>
@@ -42,6 +43,7 @@ namespace app {
class Stopper {
boost::asio::io_context ctx_;
std::thread worker_;
std::function<void()> onCompleteCallback_;
public:
/**
@@ -57,6 +59,14 @@ public:
void
setOnStop(std::function<void(boost::asio::yield_context)> cb);
/**
* @brief Set the callback to be called when graceful shutdown completes.
*
* @param cb The callback to be called when shutdown completes.
*/
void
setOnComplete(std::function<void()> cb);
/**
* @brief Stop the application and run the shutdown tasks.
*/
@@ -71,21 +81,25 @@ public:
* @param etl The ETL service to stop.
* @param subscriptions The subscription manager to stop.
* @param backend The backend to stop.
* @param cacheSaver The ledger cache saver
* @param ioc The io_context to stop.
* @return The callback to be called on application stop.
*/
template <web::ng::SomeServer ServerType>
template <web::SomeServer ServerType, data::SomeLedgerCacheSaver LedgerCacheSaverType>
static std::function<void(boost::asio::yield_context)>
makeOnStopCallback(
ServerType& server,
etlng::LoadBalancerInterface& balancer,
etlng::ETLServiceInterface& etl,
etl::LoadBalancerInterface& balancer,
etl::ETLServiceInterface& etl,
feed::SubscriptionManagerInterface& subscriptions,
data::BackendInterface& backend,
LedgerCacheSaverType& cacheSaver,
boost::asio::io_context& ioc
)
{
return [&](boost::asio::yield_context yield) {
cacheSaver.save();
util::CoroutineGroup coroutineGroup{yield};
coroutineGroup.spawn(yield, [&server](auto innerYield) {
server.stop(innerYield);
@@ -106,6 +120,8 @@ public:
backend.waitForWritesToFinish();
LOG(util::LogService::info()) << "Backend writes finished";
cacheSaver.waitToFinish();
ioc.stop();
LOG(util::LogService::info()) << "io_context stopped";


@@ -19,7 +19,10 @@
#include "app/WebHandlers.hpp"
#include "rpc/Errors.hpp"
#include "rpc/WorkQueue.hpp"
#include "util/Assert.hpp"
#include "util/CoroutineGroup.hpp"
#include "util/prometheus/Http.hpp"
#include "web/AdminVerificationStrategy.hpp"
#include "web/SubscriptionContextInterface.hpp"
@@ -31,6 +34,7 @@
#include <boost/asio/spawn.hpp>
#include <boost/beast/http/status.hpp>
#include <functional>
#include <memory>
#include <optional>
#include <string>
@@ -76,8 +80,8 @@ DisconnectHook::operator()(web::ng::Connection const& connection)
dosguard_.get().decrement(connection.ip());
}
MetricsHandler::MetricsHandler(std::shared_ptr<web::AdminVerificationStrategy> adminVerifier)
: adminVerifier_{std::move(adminVerifier)}
MetricsHandler::MetricsHandler(std::shared_ptr<web::AdminVerificationStrategy> adminVerifier, rpc::WorkQueue& workQueue)
: adminVerifier_{std::move(adminVerifier)}, workQueue_{std::ref(workQueue)}
{
}
@@ -86,19 +90,45 @@ MetricsHandler::operator()(
web::ng::Request const& request,
web::ng::ConnectionMetadata& connectionMetadata,
web::SubscriptionContextPtr,
boost::asio::yield_context
boost::asio::yield_context yield
)
{
auto const maybeHttpRequest = request.asHttpRequest();
ASSERT(maybeHttpRequest.has_value(), "Got not a http request in Get");
auto const& httpRequest = maybeHttpRequest->get();
std::optional<web::ng::Response> response;
util::CoroutineGroup coroutineGroup{yield, 1};
auto const onTaskComplete = coroutineGroup.registerForeign(yield);
ASSERT(onTaskComplete.has_value(), "Coroutine group can't be full");
// FIXME(#1702): Using web server thread to handle prometheus request. Better to post on work queue.
auto maybeResponse = util::prometheus::handlePrometheusRequest(
httpRequest, adminVerifier_->isAdmin(httpRequest, connectionMetadata.ip())
bool const postSuccessful = workQueue_.get().postCoro(
[this, &request, &response, &onTaskComplete = onTaskComplete.value(), &connectionMetadata](
boost::asio::yield_context
) mutable {
auto const maybeHttpRequest = request.asHttpRequest();
ASSERT(maybeHttpRequest.has_value(), "Got not a http request in Get");
auto const& httpRequest = maybeHttpRequest->get();
auto maybeResponse = util::prometheus::handlePrometheusRequest(
httpRequest, adminVerifier_->isAdmin(httpRequest, connectionMetadata.ip())
);
ASSERT(maybeResponse.has_value(), "Got unexpected request for Prometheus");
response = web::ng::Response{std::move(maybeResponse).value(), request};
// notify the coroutine group that the foreign task is done
onTaskComplete();
},
/* isWhiteListed= */ true,
rpc::WorkQueue::Priority::High
);
ASSERT(maybeResponse.has_value(), "Got unexpected request for Prometheus");
return web::ng::Response{std::move(maybeResponse).value(), request};
if (!postSuccessful) {
return web::ng::Response{
boost::beast::http::status::too_many_requests, rpc::makeError(rpc::RippledError::rpcTOO_BUSY), request
};
}
// Put the coroutine to sleep until the foreign task is done
coroutineGroup.asyncWait(yield);
ASSERT(response.has_value(), "Woke up coroutine without setting response");
return std::move(response).value();
}
web::ng::Response


@@ -21,6 +21,7 @@
#include "data/LedgerCacheInterface.hpp"
#include "rpc/Errors.hpp"
#include "rpc/WorkQueue.hpp"
#include "util/log/Logger.hpp"
#include "web/AdminVerificationStrategy.hpp"
#include "web/SubscriptionContextInterface.hpp"
@@ -119,20 +120,23 @@ public:
*/
class MetricsHandler {
std::shared_ptr<web::AdminVerificationStrategy> adminVerifier_;
std::reference_wrapper<rpc::WorkQueue> workQueue_;
public:
/**
* @brief Construct a new MetricsHandler object
*
* @param adminVerifier The AdminVerificationStrategy to use for verifying the connection for admin access.
* @param workQueue The WorkQueue to use for handling the request.
*/
MetricsHandler(std::shared_ptr<web::AdminVerificationStrategy> adminVerifier);
MetricsHandler(std::shared_ptr<web::AdminVerificationStrategy> adminVerifier, rpc::WorkQueue& workQueue);
/**
* @brief The call of the function object.
*
* @param request The request to handle.
* @param connectionMetadata The connection metadata.
* @param yield The yield context.
* @return The response to the request.
*/
web::ng::Response
@@ -140,7 +144,7 @@ public:
web::ng::Request const& request,
web::ng::ConnectionMetadata& connectionMetadata,
web::SubscriptionContextPtr,
boost::asio::yield_context
boost::asio::yield_context yield
);
};


@@ -147,6 +147,12 @@ struct Amendments {
REGISTER(fixAMMClawbackRounding);
REGISTER(fixMPTDeliveredAmount);
REGISTER(fixPriceOracleOrder);
REGISTER(DynamicMPT);
REGISTER(fixDelegateV1_1);
REGISTER(fixDirectoryLimit);
REGISTER(fixIncludeKeyletFields);
REGISTER(fixTokenEscrowV1);
REGISTER(LendingProtocol);
// Obsolete but supported by libxrpl
REGISTER(CryptoConditionsSuite);


@@ -46,6 +46,7 @@ namespace data {
inline std::shared_ptr<BackendInterface>
makeBackend(util::config::ClioConfigDefinition const& config, data::LedgerCacheInterface& cache)
{
using namespace cassandra::impl;
static util::Logger const log{"Backend"}; // NOLINT(readability-identifier-naming)
LOG(log.info()) << "Constructing BackendInterface";
@@ -56,7 +57,7 @@ makeBackend(util::config::ClioConfigDefinition const& config, data::LedgerCacheI
if (boost::iequals(type, "cassandra")) {
auto const cfg = config.getObject("database." + type);
if (cfg.getValueView("provider").asString() == toString(cassandra::impl::Provider::Keyspace)) {
if (providerFromString(cfg.getValueView("provider").asString()) == Provider::Keyspace) {
backend = std::make_shared<data::cassandra::KeyspaceBackend>(
data::cassandra::SettingsProvider{cfg}, cache, readOnly
);


@@ -270,7 +270,7 @@ BackendInterface::updateRange(uint32_t newMax)
{
std::scoped_lock const lck(rngMtx_);
if (range_.has_value() && newMax < range_->maxSequence) {
if (range_.has_value() and newMax < range_->maxSequence) {
ASSERT(
false,
"Range shouldn't exist yet or newMax should be at least range->maxSequence. newMax = {}, "
@@ -280,11 +280,14 @@ BackendInterface::updateRange(uint32_t newMax)
);
}
if (!range_.has_value()) {
range_ = {.minSequence = newMax, .maxSequence = newMax};
} else {
range_->maxSequence = newMax;
}
updateRangeImpl(newMax);
}
void
BackendInterface::forceUpdateRange(uint32_t newMax)
{
std::scoped_lock const lck(rngMtx_);
updateRangeImpl(newMax);
}
void
@@ -410,4 +413,14 @@ BackendInterface::fetchFees(std::uint32_t const seq, boost::asio::yield_context
return fees;
}
void
BackendInterface::updateRangeImpl(uint32_t newMax)
{
if (!range_.has_value()) {
range_ = {.minSequence = newMax, .maxSequence = newMax};
} else {
range_->maxSequence = newMax;
}
}
} // namespace data


@@ -249,6 +249,15 @@ public:
void
updateRange(uint32_t newMax);
/**
* @brief Updates the range of sequences that are stored in the DB without any checks
* @note In most cases you should use updateRange() instead
*
* @param newMax The new maximum sequence available
*/
void
forceUpdateRange(uint32_t newMax);
/**
* @brief Sets the range of sequences that are stored in the DB.
*
@@ -776,6 +785,9 @@ private:
*/
virtual bool
doFinishWrites() = 0;
void
updateRangeImpl(uint32_t newMax);
};
} // namespace data


@@ -5,6 +5,7 @@ target_sources(
BackendCounters.cpp
BackendInterface.cpp
LedgerCache.cpp
LedgerCacheSaver.cpp
LedgerHeaderCache.cpp
cassandra/impl/Future.cpp
cassandra/impl/Cluster.cpp
@@ -14,6 +15,9 @@ target_sources(
cassandra/impl/SslContext.cpp
cassandra/Handle.cpp
cassandra/SettingsProvider.cpp
impl/InputFile.cpp
impl/LedgerCacheFile.cpp
impl/OutputFile.cpp
)
target_link_libraries(clio_data PUBLIC cassandra-cpp-driver::cassandra-cpp-driver clio_util)

View File

@@ -189,10 +189,11 @@ public:
auto const nftUris = executor_.readEach(yield, selectNFTURIStatements);
for (auto i = 0u; i < nftIDs.size(); i++) {
-if (auto const maybeRow = nftInfos[i].template get<uint32_t, ripple::AccountID, bool>(); maybeRow) {
+if (auto const maybeRow = nftInfos[i].template get<uint32_t, ripple::AccountID, bool>();
+maybeRow.has_value()) {
auto [seq, owner, isBurned] = *maybeRow;
NFT nft(nftIDs[i], seq, owner, isBurned);
-if (auto const maybeUri = nftUris[i].template get<ripple::Blob>(); maybeUri)
+if (auto const maybeUri = nftUris[i].template get<ripple::Blob>(); maybeUri.has_value())
nft.uri = *maybeUri;
ret.nfts.push_back(nft);
}

View File

@@ -57,9 +57,9 @@ namespace data::cassandra {
/**
* @brief Implements @ref CassandraBackendFamily for Keyspace
*
-* @tparam SettingsProviderType The settings provider type to use
-* @tparam ExecutionStrategyType The execution strategy type to use
-* @tparam FetchLedgerCacheType The ledger header cache type to use
+* @tparam SettingsProviderType The settings provider type
+* @tparam ExecutionStrategyType The execution strategy type
+* @tparam FetchLedgerCacheType The ledger header cache type
*/
template <
SomeSettingsProvider SettingsProviderType,
@@ -101,9 +101,9 @@ public:
// !range_.has_value() means the table 'ledger_range' is not populated;
// This would be the first write to the table.
// In this case, insert both min_sequence/max_sequence range into the table.
-if (not(range_.has_value())) {
-executor_.writeSync(schema_->insertLedgerRange, false, ledgerSequence_);
-executor_.writeSync(schema_->insertLedgerRange, true, ledgerSequence_);
+if (not range_.has_value()) {
+executor_.writeSync(schema_->insertLedgerRange, /* isLatestLedger =*/false, ledgerSequence_);
+executor_.writeSync(schema_->insertLedgerRange, /* isLatestLedger =*/true, ledgerSequence_);
}
if (not this->executeSyncUpdate(schema_->updateLedgerRange.bind(ledgerSequence_, true, ledgerSequence_ - 1))) {
@@ -130,30 +130,30 @@ public:
// Keyspace and ScyllaDB use the same logic for taxon-filtered queries
nftIDs = fetchNFTIDsByTaxon(issuer, *taxon, limit, cursorIn, yield);
} else {
-// --- Amazon Keyspaces Workflow for non-taxon queries ---
+// Amazon Keyspaces Workflow for non-taxon queries
auto const startTaxon = cursorIn.has_value() ? ripple::nft::toUInt32(ripple::nft::getTaxon(*cursorIn)) : 0;
auto const startTokenID = cursorIn.value_or(ripple::uint256(0));
-Statement firstQuery = schema_->selectNFTIDsByIssuerTaxon.bind(issuer);
+Statement const firstQuery = schema_->selectNFTIDsByIssuerTaxon.bind(issuer);
firstQuery.bindAt(1, startTaxon);
firstQuery.bindAt(2, startTokenID);
firstQuery.bindAt(3, Limit{limit});
auto const firstRes = executor_.read(yield, firstQuery);
-if (firstRes) {
-for (auto const [nftID] : extract<ripple::uint256>(firstRes.value()))
+if (firstRes.has_value()) {
+for (auto const [nftID] : extract<ripple::uint256>(*firstRes))
nftIDs.push_back(nftID);
}
if (nftIDs.size() < limit) {
auto const remainingLimit = limit - nftIDs.size();
-Statement secondQuery = schema_->selectNFTsAfterTaxonKeyspaces.bind(issuer);
+Statement const secondQuery = schema_->selectNFTsAfterTaxonKeyspaces.bind(issuer);
secondQuery.bindAt(1, startTaxon);
secondQuery.bindAt(2, Limit{remainingLimit});
auto const secondRes = executor_.read(yield, secondQuery);
-if (secondRes) {
-for (auto const [nftID] : extract<ripple::uint256>(secondRes.value()))
+if (secondRes.has_value()) {
+for (auto const [nftID] : extract<ripple::uint256>(*secondRes))
nftIDs.push_back(nftID);
}
}
@@ -163,7 +163,7 @@ public:
/**
* @brief (Unsupported in Keyspaces) Fetches account root object indexes by page.
-* * @note Loading the cache by enumerating all accounts is currently unsupported by the AWS Keyspaces backend.
+* @note Loading the cache by enumerating all accounts is currently unsupported by the AWS Keyspaces backend.
* This function's logic relies on "PER PARTITION LIMIT 1", which Keyspaces does not support, and there is
* no efficient alternative. This is acceptable as the cache is primarily loaded via diffs. Calling this
* function will throw an exception.
@@ -197,14 +197,14 @@ private:
) const
{
std::vector<ripple::uint256> nftIDs;
-Statement statement = schema_->selectNFTIDsByIssuerTaxon.bind(issuer);
+Statement const statement = schema_->selectNFTIDsByIssuerTaxon.bind(issuer);
statement.bindAt(1, taxon);
statement.bindAt(2, cursorIn.value_or(ripple::uint256(0)));
statement.bindAt(3, Limit{limit});
auto const res = executor_.read(yield, statement);
-if (res && res.value().hasRows()) {
-for (auto const [nftID] : extract<ripple::uint256>(res.value()))
+if (res.has_value() && res->hasRows()) {
+for (auto const [nftID] : extract<ripple::uint256>(*res))
nftIDs.push_back(nftID);
}
return nftIDs;
@@ -229,8 +229,8 @@ private:
firstQuery.bindAt(3, Limit{limit});
auto const firstRes = executor_.read(yield, firstQuery);
-if (firstRes) {
-for (auto const [nftID] : extract<ripple::uint256>(firstRes.value()))
+if (firstRes.has_value()) {
+for (auto const [nftID] : extract<ripple::uint256>(*firstRes))
nftIDs.push_back(nftID);
}
@@ -241,8 +241,8 @@ private:
secondQuery.bindAt(2, Limit{remainingLimit});
auto const secondRes = executor_.read(yield, secondQuery);
-if (secondRes) {
-for (auto const [nftID] : extract<ripple::uint256>(secondRes.value()))
+if (secondRes.has_value()) {
+for (auto const [nftID] : extract<ripple::uint256>(*secondRes))
nftIDs.push_back(nftID);
}
}
@@ -291,10 +291,11 @@ private:
// Combine the results into final NFT objects.
for (auto i = 0u; i < nftIDs.size(); ++i) {
-if (auto const maybeRow = nftInfos[i].template get<uint32_t, ripple::AccountID, bool>(); maybeRow) {
+if (auto const maybeRow = nftInfos[i].template get<uint32_t, ripple::AccountID, bool>();
+maybeRow.has_value()) {
auto [seq, owner, isBurned] = *maybeRow;
NFT nft(nftIDs[i], seq, owner, isBurned);
-if (auto const maybeUri = nftUris[i].template get<ripple::Blob>(); maybeUri)
+if (auto const maybeUri = nftUris[i].template get<ripple::Blob>(); maybeUri.has_value())
nft.uri = *maybeUri;
ret.nfts.push_back(nft);
}

View File

@@ -20,16 +20,22 @@
#include "data/LedgerCache.hpp"
#include "data/Types.hpp"
-#include "etlng/Models.hpp"
+#include "data/impl/LedgerCacheFile.hpp"
+#include "etl/Models.hpp"
#include "util/Assert.hpp"
#include <xrpl/basics/base_uint.h>
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <map>
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <string>
#include <utility>
#include <vector>
namespace data {
@@ -89,7 +95,7 @@ LedgerCache::update(std::vector<LedgerObject> const& objs, uint32_t seq, bool is
}
void
-LedgerCache::update(std::vector<etlng::model::Object> const& objs, uint32_t seq)
+LedgerCache::update(std::vector<etl::model::Object> const& objs, uint32_t seq)
{
if (disabled_)
return;
@@ -251,4 +257,34 @@ LedgerCache::getSuccessorHitRate() const
return static_cast<float>(successorHitCounter_.get().value()) / successorReqCounter_.get().value();
}
std::expected<void, std::string>
LedgerCache::saveToFile(std::string const& path) const
{
if (not isFull()) {
return std::unexpected{"Ledger cache is not full"};
}
impl::LedgerCacheFile file{path};
std::shared_lock const lock{mtx_};
impl::LedgerCacheFile::DataView const data{.latestSeq = latestSeq_, .map = map_, .deleted = deleted_};
return file.write(data);
}
std::expected<void, std::string>
LedgerCache::loadFromFile(std::string const& path, uint32_t minLatestSequence)
{
impl::LedgerCacheFile file{path};
auto data = file.read(minLatestSequence);
if (not data.has_value()) {
return std::unexpected(std::move(data).error());
}
auto [latestSeq, map, deleted] = std::move(data).value();
std::unique_lock const lock{mtx_};
latestSeq_ = latestSeq;
map_ = std::move(map);
deleted_ = std::move(deleted);
full_ = true;
return {};
}
} // namespace data
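Note the locking split above: `saveToFile` snapshots state under a `std::shared_lock` (readers may continue concurrently), while `loadFromFile` replaces state under a `std::unique_lock` (exclusive). A minimal standalone sketch of this reader/writer split, independent of the clio types (the `Snapshot` class is illustrative only):

```cpp
#include <cassert>
#include <map>
#include <mutex>
#include <shared_mutex>
#include <string>
#include <utility>

class Snapshot {
    std::map<std::string, int> map_;
    mutable std::shared_mutex mtx_;

public:
    void
    set(std::string key, int value)
    {
        std::unique_lock const lock{mtx_};  // exclusive: mutates state
        map_[std::move(key)] = value;
    }

    // Read-only export under a shared lock, like saveToFile():
    // concurrent readers are still allowed while it runs.
    std::map<std::string, int>
    exportAll() const
    {
        std::shared_lock const lock{mtx_};
        return map_;
    }

    // Bulk restore under a unique lock, like loadFromFile():
    // all other access is blocked while state is replaced.
    void
    restore(std::map<std::string, int> data)
    {
        std::unique_lock const lock{mtx_};
        map_ = std::move(data);
    }
};
```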

View File

@@ -21,7 +21,7 @@
#include "data/LedgerCacheInterface.hpp"
#include "data/Types.hpp"
-#include "etlng/Models.hpp"
+#include "etl/Models.hpp"
#include "util/prometheus/Bool.hpp"
#include "util/prometheus/Counter.hpp"
#include "util/prometheus/Label.hpp"
@@ -37,6 +37,7 @@
#include <map>
#include <optional>
#include <shared_mutex>
#include <string>
#include <unordered_set>
#include <vector>
@@ -46,11 +47,16 @@ namespace data {
* @brief Cache for an entire ledger.
*/
class LedgerCache : public LedgerCacheInterface {
public:
/** @brief An entry of the cache */
struct CacheEntry {
uint32_t seq = 0;
Blob blob;
};
using CacheMap = std::map<ripple::uint256, CacheEntry>;
private:
// counters for fetchLedgerObject(s) hit rate
std::reference_wrapper<util::prometheus::CounterInt> objectReqCounter_{PrometheusService::counterInt(
"ledger_cache_counter_total_number",
@@ -73,8 +79,8 @@ class LedgerCache : public LedgerCacheInterface {
util::prometheus::Labels({{"type", "cache_hit"}, {"fetch", "successor_key"}})
)};
-std::map<ripple::uint256, CacheEntry> map_;
-std::map<ripple::uint256, CacheEntry> deleted_;
+CacheMap map_;
+CacheMap deleted_;
mutable std::shared_mutex mtx_;
std::condition_variable_any cv_;
@@ -98,7 +104,7 @@ public:
update(std::vector<LedgerObject> const& objs, uint32_t seq, bool isBackground) override;
void
-update(std::vector<etlng::model::Object> const& objs, uint32_t seq) override;
+update(std::vector<etl::model::Object> const& objs, uint32_t seq) override;
std::optional<Blob>
get(ripple::uint256 const& key, uint32_t seq) const override;
@@ -138,6 +144,12 @@ public:
void
waitUntilCacheContainsSeq(uint32_t seq) override;
std::expected<void, std::string>
saveToFile(std::string const& path) const override;
std::expected<void, std::string>
loadFromFile(std::string const& path, uint32_t minLatestSequence) override;
};
} // namespace data

View File

@@ -20,14 +20,16 @@
#pragma once
#include "data/Types.hpp"
-#include "etlng/Models.hpp"
+#include "etl/Models.hpp"
#include <xrpl/basics/base_uint.h>
#include <xrpl/basics/hardened_hash.h>
#include <cstddef>
#include <cstdint>
#include <expected>
#include <optional>
#include <string>
#include <vector>
namespace data {
@@ -63,7 +65,7 @@ public:
* @param seq The sequence to update cache for
*/
virtual void
-update(std::vector<etlng::model::Object> const& objs, uint32_t seq) = 0;
+update(std::vector<etl::model::Object> const& objs, uint32_t seq) = 0;
/**
* @brief Fetch a cached object by its key and sequence number.
@@ -168,6 +170,27 @@ public:
*/
virtual void
waitUntilCacheContainsSeq(uint32_t seq) = 0;
/**
* @brief Save the cache to file
* @note This operation takes about 7 seconds and holds a shared lock on mtx_
*
* @param path The file path to save the cache to
* @return An error as a string if any
*/
[[nodiscard]] virtual std::expected<void, std::string>
saveToFile(std::string const& path) const = 0;
/**
* @brief Load the cache from file
* @note This operation takes about 7 seconds and holds an exclusive lock on mtx_
*
* @param path The file path to load data from
* @param minLatestSequence The minimum allowed value of the latestLedgerSequence in cache file
* @return An error as a string if any
*/
[[nodiscard]] virtual std::expected<void, std::string>
loadFromFile(std::string const& path, uint32_t minLatestSequence) = 0;
};
} // namespace data

View File

@@ -0,0 +1,70 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "data/LedgerCacheSaver.hpp"
#include "data/LedgerCacheInterface.hpp"
#include "util/Assert.hpp"
#include "util/Profiler.hpp"
#include "util/log/Logger.hpp"
#include <string>
#include <thread>
namespace data {
LedgerCacheSaver::LedgerCacheSaver(util::config::ClioConfigDefinition const& config, LedgerCacheInterface const& cache)
: cacheFilePath_(config.maybeValue<std::string>("cache.file.path")), cache_(cache)
{
}
LedgerCacheSaver::~LedgerCacheSaver()
{
waitToFinish();
}
void
LedgerCacheSaver::save()
{
ASSERT(not savingThread_.has_value(), "Multiple save() calls are not allowed");
savingThread_ = std::thread([this]() {
if (not cacheFilePath_.has_value()) {
return;
}
LOG(util::LogService::info()) << "Saving ledger cache to " << *cacheFilePath_;
if (auto const [success, durationMs] = util::timed([&]() { return cache_.get().saveToFile(*cacheFilePath_); });
success.has_value()) {
LOG(util::LogService::info()) << "Successfully saved ledger cache in " << durationMs << " ms";
} else {
LOG(util::LogService::error()) << "Error saving LedgerCache to file: " << success.error();
}
});
}
void
LedgerCacheSaver::waitToFinish()
{
if (savingThread_.has_value() and savingThread_->joinable()) {
savingThread_->join();
}
savingThread_.reset();
}
} // namespace data

View File

@@ -0,0 +1,93 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/LedgerCacheInterface.hpp"
#include "util/config/ConfigDefinition.hpp"
#include <concepts>
#include <functional>
#include <optional>
#include <string>
#include <thread>
namespace data {
/**
* @brief A concept for a class that can save ledger cache asynchronously.
*
* This concept defines the interface requirements for any type that manages
* asynchronous saving of ledger cache to persistent storage.
*/
template <typename T>
concept SomeLedgerCacheSaver = requires(T a) {
{ a.save() } -> std::same_as<void>;
{ a.waitToFinish() } -> std::same_as<void>;
};
/**
* @brief Manages asynchronous saving of ledger cache to a file.
*
* This class provides functionality to save the ledger cache to a file in a separate thread,
* allowing the main application to continue without blocking. The file path is configured
* through the application's configuration system.
*/
class LedgerCacheSaver {
std::optional<std::string> cacheFilePath_;
std::reference_wrapper<LedgerCacheInterface const> cache_;
std::optional<std::thread> savingThread_;
public:
/**
* @brief Constructs a LedgerCacheSaver instance.
*
* @param config The configuration object containing the cache file path setting
* @param cache Reference to the ledger cache interface to be saved
*/
LedgerCacheSaver(util::config::ClioConfigDefinition const& config, LedgerCacheInterface const& cache);
/**
* @brief Destructor that ensures the saving thread is properly joined.
*
* Waits for any ongoing save operation to complete before destruction.
*/
~LedgerCacheSaver();
/**
* @brief Initiates an asynchronous save operation of the ledger cache.
*
* Spawns a new thread that saves the ledger cache to the configured file path.
* If no file path is configured, the operation is skipped. Logs the progress
* and result of the save operation.
*/
void
save();
/**
* @brief Waits for the saving thread to complete.
*
* Blocks until the saving operation finishes if a thread is currently active.
* Safe to call multiple times or when no save operation is in progress.
*/
void
waitToFinish();
};
} // namespace data

View File

@@ -1,8 +1,10 @@
# Backend
+@page "backend" Backend
The backend of Clio is responsible for handling the proper reading and writing of past ledger data from and to a given database. Currently, Cassandra and ScyllaDB are the only supported databases that are production-ready.
-To support additional database types, you can create new classes that implement the virtual methods in [BackendInterface.h](https://github.com/XRPLF/clio/blob/develop/src/data/BackendInterface.hpp). Then, leveraging the Factory Object Design Pattern, modify [BackendFactory.h](https://github.com/XRPLF/clio/blob/develop/src/data/BackendFactory.hpp) with logic that returns the new database interface if the relevant `type` is provided in Clio's configuration file.
+To support additional database types, you can create new classes that implement the virtual methods in [BackendInterface.hpp](https://github.com/XRPLF/clio/blob/develop/src/data/BackendInterface.hpp). Then, leveraging the Factory Object Design Pattern, modify [BackendFactory.hpp](https://github.com/XRPLF/clio/blob/develop/src/data/BackendFactory.hpp) with logic that returns the new database interface if the relevant `type` is provided in Clio's configuration file.
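The factory dispatch described here can be sketched as follows. All names below (`Backend`, `CassandraBackend`, `makeBackend`) are hypothetical stand-ins for the real clio classes; the actual signatures live in BackendFactory.hpp:

```cpp
#include <cassert>
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical minimal interface standing in for BackendInterface.
struct Backend {
    virtual ~Backend() = default;
    virtual std::string
    name() const = 0;
};

struct CassandraBackend : Backend {
    std::string
    name() const override
    {
        return "cassandra";
    }
};

// Factory: maps the configured `type` string to a concrete backend.
inline std::unique_ptr<Backend>
makeBackend(std::string const& type)
{
    if (type == "cassandra" || type == "scylladb")
        return std::make_unique<CassandraBackend>();
    throw std::runtime_error("Unknown database type: " + type);
}
```

Adding a new database then amounts to one new `Backend` subclass and one extra branch in the factory.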
## Data Model

View File

@@ -247,6 +247,9 @@ struct MPTHoldersAndCursor {
struct LedgerRange {
std::uint32_t minSequence = 0;
std::uint32_t maxSequence = 0;
bool
operator==(LedgerRange const&) const = default;
};
/**

View File

@@ -70,10 +70,10 @@ namespace data::cassandra {
*
* Note: This is a safer and more correct rewrite of the original implementation of the backend.
*
-* @tparam SettingsProviderType The settings provider type to use
-* @tparam ExecutionStrategyType The execution strategy type to use
-* @tparam SchemaType The Schema type to use
-* @tparam FetchLedgerCacheType The ledger header cache type to use
+* @tparam SettingsProviderType The settings provider type
+* @tparam ExecutionStrategyType The execution strategy type
+* @tparam SchemaType The Schema type
+* @tparam FetchLedgerCacheType The ledger header cache type
*/
template <
SomeSettingsProvider SettingsProviderType,
@@ -100,8 +100,8 @@ public:
/**
* @brief Create a new cassandra/scylla backend instance.
*
-* @param settingsProvider The settings provider to use
-* @param cache The ledger cache to use
+* @param settingsProvider The settings provider
+* @param cache The ledger cache
* @param readOnly Whether the database should be in readonly mode
*/
CassandraBackendFamily(SettingsProviderType settingsProvider, data::LedgerCacheInterface& cache, bool readOnly)
@@ -111,18 +111,18 @@ public:
, handle_{settingsProvider_.getSettings()}
, executor_{settingsProvider_.getSettings(), handle_}
{
-if (auto const res = handle_.connect(); not res)
+if (auto const res = handle_.connect(); not res.has_value())
throw std::runtime_error("Could not connect to database: " + res.error());
if (not readOnly) {
-if (auto const res = handle_.execute(schema_.createKeyspace); not res) {
+if (auto const res = handle_.execute(schema_.createKeyspace); not res.has_value()) {
// on DataStax, creation of keyspaces can be configured to only be done through the admin
// interface. This does not mean that the keyspace does not already exist, though.
if (res.error().code() != CASS_ERROR_SERVER_UNAUTHORIZED)
throw std::runtime_error("Could not create keyspace: " + res.error());
}
-if (auto const res = handle_.executeEach(schema_.createSchema); not res)
+if (auto const res = handle_.executeEach(schema_.createSchema); not res.has_value())
throw std::runtime_error("Could not create schema: " + res.error());
}
@@ -146,9 +146,6 @@ public:
*/
CassandraBackendFamily(CassandraBackendFamily&&) = delete;
/**
* @copydoc BackendInterface::fetchAccountTransactions
*/
TransactionsAndCursor
fetchAccountTransactions(
ripple::AccountID const& account,
@@ -217,18 +214,12 @@ public:
return {txns, {}};
}
/**
* @copydoc BackendInterface::waitForWritesToFinish
*/
void
waitForWritesToFinish() override
{
executor_.sync();
}
/**
* @copydoc BackendInterface::writeLedger
*/
void
writeLedger(ripple::LedgerHeader const& ledgerHeader, std::string&& blob) override
{
@@ -239,16 +230,13 @@ public:
ledgerSequence_ = ledgerHeader.seq;
}
/**
* @copydoc BackendInterface::fetchLatestLedgerSequence
*/
std::optional<std::uint32_t>
fetchLatestLedgerSequence(boost::asio::yield_context yield) const override
{
-if (auto const res = executor_.read(yield, schema_->selectLatestLedger); res) {
-if (auto const& result = res.value(); result) {
-if (auto const maybeValue = result.template get<uint32_t>(); maybeValue)
-return maybeValue;
+if (auto const res = executor_.read(yield, schema_->selectLatestLedger); res.has_value()) {
+if (auto const& rows = *res; rows) {
+if (auto const maybeRow = rows.template get<uint32_t>(); maybeRow.has_value())
+return maybeRow;
LOG(log_.error()) << "Could not fetch latest ledger - no rows";
return std::nullopt;
@@ -262,9 +250,6 @@ public:
return std::nullopt;
}
/**
* @copydoc BackendInterface::fetchLedgerBySequence
*/
std::optional<ripple::LedgerHeader>
fetchLedgerBySequence(std::uint32_t const sequence, boost::asio::yield_context yield) const override
{
@@ -292,9 +277,6 @@ public:
return std::nullopt;
}
/**
* @copydoc BackendInterface::fetchLedgerByHash
*/
std::optional<ripple::LedgerHeader>
fetchLedgerByHash(ripple::uint256 const& hash, boost::asio::yield_context yield) const override
{
@@ -315,9 +297,6 @@ public:
return std::nullopt;
}
/**
* @copydoc BackendInterface::hardFetchLedgerRange(boost::asio::yield_context) const
*/
std::optional<LedgerRange>
hardFetchLedgerRange(boost::asio::yield_context yield) const override
{
@@ -356,9 +335,6 @@ public:
return std::nullopt;
}
/**
* @copydoc BackendInterface::fetchAllTransactionsInLedger
*/
std::vector<TransactionAndMetadata>
fetchAllTransactionsInLedger(std::uint32_t const ledgerSequence, boost::asio::yield_context yield) const override
{
@@ -366,9 +342,6 @@ public:
return fetchTransactions(hashes, yield);
}
/**
* @copydoc BackendInterface::fetchAllTransactionHashesInLedger
*/
std::vector<ripple::uint256>
fetchAllTransactionHashesInLedger(
std::uint32_t const ledgerSequence,
@@ -402,9 +375,6 @@ public:
return hashes;
}
/**
* @copydoc BackendInterface::fetchNFT
*/
std::optional<NFT>
fetchNFT(
ripple::uint256 const& tokenID,
@@ -444,9 +414,6 @@ public:
return std::nullopt;
}
/**
* @copydoc BackendInterface::fetchNFTTransactions
*/
TransactionsAndCursor
fetchNFTTransactions(
ripple::uint256 const& tokenID,
@@ -518,9 +485,6 @@ public:
return {txns, {}};
}
/**
* @copydoc BackendInterface::fetchMPTHolders
*/
MPTHoldersAndCursor
fetchMPTHolders(
ripple::uint192 const& mptID,
@@ -560,9 +524,6 @@ public:
return {mptObjects, {}};
}
/**
* @copydoc BackendInterface::doFetchLedgerObject
*/
std::optional<Blob>
doFetchLedgerObject(
ripple::uint256 const& key,
@@ -585,9 +546,6 @@ public:
return std::nullopt;
}
/**
* @copydoc BackendInterface::doFetchLedgerObjectSeq
*/
std::optional<std::uint32_t>
doFetchLedgerObjectSeq(
ripple::uint256 const& key,
@@ -609,9 +567,6 @@ public:
return std::nullopt;
}
/**
* @copydoc BackendInterface::fetchTransaction
*/
std::optional<TransactionAndMetadata>
fetchTransaction(ripple::uint256 const& hash, boost::asio::yield_context yield) const override
{
@@ -629,9 +584,6 @@ public:
return std::nullopt;
}
/**
* @copydoc BackendInterface::doFetchSuccessorKey
*/
std::optional<ripple::uint256>
doFetchSuccessorKey(
ripple::uint256 key,
@@ -654,9 +606,6 @@ public:
return std::nullopt;
}
/**
* @copydoc BackendInterface::fetchTransactions
*/
std::vector<TransactionAndMetadata>
fetchTransactions(std::vector<ripple::uint256> const& hashes, boost::asio::yield_context yield) const override
{
@@ -698,9 +647,6 @@ public:
return results;
}
/**
* @copydoc BackendInterface::doFetchLedgerObjects
*/
std::vector<Blob>
doFetchLedgerObjects(
std::vector<ripple::uint256> const& keys,
@@ -741,9 +687,6 @@ public:
return results;
}
/**
* @copydoc BackendInterface::fetchLedgerDiff
*/
std::vector<LedgerObject>
fetchLedgerDiff(std::uint32_t const ledgerSequence, boost::asio::yield_context yield) const override
{
@@ -789,9 +732,6 @@ public:
return results;
}
/**
* @copydoc BackendInterface::fetchMigratorStatus
*/
std::optional<std::string>
fetchMigratorStatus(std::string const& migratorName, boost::asio::yield_context yield) const override
{
@@ -812,9 +752,6 @@ public:
return {};
}
/**
* @copydoc BackendInterface::fetchClioNodesData
*/
std::expected<std::vector<std::pair<boost::uuids::uuid, std::string>>, std::string>
fetchClioNodesData(boost::asio::yield_context yield) const override
{
@@ -831,9 +768,6 @@ public:
return result;
}
/**
* @copydoc BackendInterface::doWriteLedgerObject
*/
void
doWriteLedgerObject(std::string&& key, std::uint32_t const seq, std::string&& blob) override
{
@@ -845,9 +779,6 @@ public:
executor_.write(schema_->insertObject, std::move(key), seq, std::move(blob));
}
/**
* @copydoc BackendInterface::writeSuccessor
*/
void
writeSuccessor(std::string&& key, std::uint32_t const seq, std::string&& successor) override
{
@@ -859,9 +790,6 @@ public:
executor_.write(schema_->insertSuccessor, std::move(key), seq, std::move(successor));
}
/**
* @copydoc BackendInterface::writeAccountTransactions
*/
void
writeAccountTransactions(std::vector<AccountTransactionsData> data) override
{
@@ -881,9 +809,6 @@ public:
executor_.write(std::move(statements));
}
/**
* @copydoc BackendInterface::writeAccountTransaction
*/
void
writeAccountTransaction(AccountTransactionsData record) override
{
@@ -901,9 +826,6 @@ public:
executor_.write(std::move(statements));
}
/**
* @copydoc BackendInterface::writeNFTTransactions
*/
void
writeNFTTransactions(std::vector<NFTTransactionsData> const& data) override
{
@@ -919,9 +841,6 @@ public:
executor_.write(std::move(statements));
}
/**
* @copydoc BackendInterface::writeTransaction
*/
void
writeTransaction(
std::string&& hash,
@@ -939,9 +858,6 @@ public:
);
}
/**
* @copydoc BackendInterface::writeNFTs
*/
void
writeNFTs(std::vector<NFTsData> const& data) override
{
@@ -980,9 +896,6 @@ public:
executor_.writeEach(std::move(statements));
}
/**
* @copydoc BackendInterface::writeNFTs
*/
void
writeMPTHolders(std::vector<MPTHolderData> const& data) override
{
@@ -994,9 +907,6 @@ public:
executor_.write(std::move(statements));
}
/**
* @copydoc BackendInterface::startWrites
*/
void
startWrites() const override
{
@@ -1004,9 +914,6 @@ public:
// probably was used in PG to start a transaction or smth.
}
/**
* @copydoc BackendInterface::writeMigratorStatus
*/
void
writeMigratorStatus(std::string const& migratorName, std::string const& status) override
{
@@ -1015,27 +922,18 @@ public:
);
}
/**
* @copydoc BackendInterface::writeNodeMessage
*/
void
writeNodeMessage(boost::uuids::uuid const& uuid, std::string message) override
{
executor_.writeSync(schema_->updateClioNodeMessage, data::cassandra::Text{std::move(message)}, uuid);
}
/**
* @copydoc BackendInterface::isTooBusy
*/
bool
isTooBusy() const override
{
return executor_.isTooBusy();
}
/**
* @copydoc BackendInterface::stats
*/
boost::json::object
stats() const override
{

View File

@@ -97,7 +97,7 @@ SettingsProvider::parseSettings() const
settings.coreConnectionsPerHost = config_.get<uint32_t>("core_connections_per_host");
settings.queueSizeIO = config_.maybeValue<uint32_t>("queue_size_io");
settings.writeBatchSize = config_.get<std::size_t>("write_batch_size");
-settings.provider = config_.get<std::string>("provider");
+settings.provider = impl::providerFromString(config_.get<std::string>("provider"));
if (config_.getValueView("connect_timeout").hasValue()) {
auto const connectTimeoutSecond = config_.get<uint32_t>("connect_timeout");

View File

@@ -61,7 +61,7 @@ Cluster::Cluster(Settings const& settings) : ManagedObject{cass_cluster_new(), k
cass_cluster_set_request_timeout(*this, settings.requestTimeout.count());
// TODO: AWS keyspace reads should be local_one to save cost
-if (settings.provider == toString(cassandra::impl::Provider::Keyspace)) {
+if (settings.provider == cassandra::impl::Provider::Keyspace) {
if (auto const rc = cass_cluster_set_consistency(*this, CASS_CONSISTENCY_LOCAL_QUORUM); rc != CASS_OK) {
throw std::runtime_error(fmt::format("Error setting keyspace consistency: {}", cass_error_desc(rc)));
}

View File

@@ -20,6 +20,7 @@
#pragma once
#include "data/cassandra/impl/ManagedObject.hpp"
#include "util/Assert.hpp"
#include "util/log/Logger.hpp"
#include <cassandra.h>
@@ -31,29 +32,22 @@
#include <string>
#include <string_view>
#include <thread>
#include <utility>
#include <variant>
namespace data::cassandra::impl {
namespace {
enum class Provider { Cassandra, Keyspace };
-inline std::string
-toString(Provider provider)
+inline Provider
+providerFromString(std::string const& provider)
{
-switch (provider) {
-case Provider::Cassandra:
-return "cassandra";
-case Provider::Keyspace:
-return "aws_keyspace";
-}
-std::unreachable();
+ASSERT(
+provider == "cassandra" || provider == "aws_keyspace",
+"Provider type must be one of 'cassandra' or 'aws_keyspace'"
+);
+return provider == "cassandra" ? Provider::Cassandra : Provider::Keyspace;
}
} // namespace
// TODO: move Settings to public interface, not impl
/**
@@ -109,7 +103,7 @@ struct Settings {
std::size_t writeBatchSize = kDEFAULT_BATCH_SIZE;
/** @brief Provider to know if we are using scylladb or keyspace */
-std::string provider = toString(kDEFAULT_PROVIDER);
+Provider provider = kDEFAULT_PROVIDER;
/** @brief Size of the IO queue */
std::optional<uint32_t> queueSizeIO = std::nullopt; // NOLINT(readability-redundant-member-init)

View File

@@ -0,0 +1,58 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "data/impl/InputFile.hpp"
#include <xrpl/basics/base_uint.h>
#include <cstddef>
#include <cstring>
#include <ios>
#include <iosfwd>
#include <string>
#include <utility>
namespace data::impl {
InputFile::InputFile(std::string const& path) : file_(path, std::ios::binary | std::ios::in)
{
}
bool
InputFile::isOpen() const
{
return file_.is_open();
}
bool
InputFile::readRaw(char* data, size_t size)
{
file_.read(data, size);
shasum_.update(data, size);
return not file_.fail();
}
ripple::uint256
InputFile::hash() const
{
auto sum = shasum_;
return std::move(sum).finalize();
}
} // namespace data::impl
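The `InputFile` above feeds every successfully read byte into a running digest, so `hash()` always reflects exactly the bytes consumed so far and can later be compared against a trailing checksum. A minimal stand-alone sketch of that pattern, assuming a toy FNV-1a digest in place of `util::Sha256sum` (which is not shown in this diff) and an in-memory stream instead of `std::ifstream`; the names `Fnv1a` and `ChecksummedStream` are illustrative only:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <sstream>
#include <string>
#include <utility>

// Toy stand-in for util::Sha256sum: FNV-1a over all bytes seen so far.
// The real digest is streamed the same way, via update(data, size).
class Fnv1a {
    std::uint64_t state_ = 14695981039346656037ull;

public:
    void
    update(char const* data, std::size_t size)
    {
        for (std::size_t i = 0; i < size; ++i) {
            state_ ^= static_cast<unsigned char>(data[i]);
            state_ *= 1099511628211ull;
        }
    }

    std::uint64_t
    finalize() const
    {
        return state_;
    }
};

// Mirrors InputFile::readRaw: every read also updates the digest, so
// hash() depends only on the bytes actually consumed from the stream.
class ChecksummedStream {
    std::istringstream in_;
    Fnv1a digest_;

public:
    explicit ChecksummedStream(std::string data) : in_(std::move(data))
    {
    }

    bool
    readRaw(char* data, std::size_t size)
    {
        in_.read(data, static_cast<std::streamsize>(size));
        digest_.update(data, size);
        return not in_.fail();
    }

    std::uint64_t
    hash() const
    {
        return digest_.finalize();
    }
};
```

Two streams that yield the same byte sequence to readers produce the same hash, regardless of what trailing data they hold — which is what lets the real file format append its own checksum after the hashed payload.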


@@ -1,7 +1,7 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
@@ -19,24 +19,39 @@
#pragma once
#include "etl/impl/LedgerLoader.hpp"
#include "util/FakeFetchResponse.hpp"
#include "util/Shasum.hpp"
#include <gmock/gmock.h>
#include <xrpl/protocol/LedgerHeader.h>
#include <xrpl/basics/base_uint.h>
#include <cstdint>
#include <optional>
#include <cstddef>
#include <cstring>
#include <fstream>
#include <iosfwd>
#include <string>
struct MockLedgerLoader {
using GetLedgerResponseType = FakeFetchResponse;
using RawLedgerObjectType = FakeLedgerObject;
namespace data::impl {
MOCK_METHOD(
FormattedTransactionsData,
insertTransactions,
(ripple::LedgerHeader const&, GetLedgerResponseType& data),
()
);
MOCK_METHOD(std::optional<ripple::LedgerHeader>, loadInitialLedger, (uint32_t sequence), ());
class InputFile {
std::ifstream file_;
util::Sha256sum shasum_;
public:
InputFile(std::string const& path);
bool
isOpen() const;
template <typename T>
bool
read(T& t)
{
return readRaw(reinterpret_cast<char*>(&t), sizeof(T));
}
bool
readRaw(char* data, size_t size);
ripple::uint256
hash() const;
};
} // namespace data::impl


@@ -0,0 +1,210 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "data/impl/LedgerCacheFile.hpp"
#include "data/LedgerCache.hpp"
#include "data/Types.hpp"
#include <fmt/format.h>
#include <xrpl/basics/base_uint.h>
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>
#include <exception>
#include <expected>
#include <filesystem>
#include <string>
#include <utility>
namespace data::impl {
using Hash = ripple::uint256;
using Separator = std::array<char, 16>;
static constexpr Separator kSEPARATOR = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
namespace {
std::expected<std::pair<ripple::uint256, LedgerCache::CacheEntry>, std::string>
readCacheEntry(InputFile& file, size_t i)
{
ripple::uint256 key;
if (not file.readRaw(reinterpret_cast<char*>(key.data()), ripple::base_uint<256>::bytes)) {
return std::unexpected(fmt::format("Failed to read key at index {}", i));
}
uint32_t seq{};
if (not file.read(seq)) {
return std::unexpected(fmt::format("Failed to read sequence at index {}", i));
}
size_t blobSize{};
if (not file.read(blobSize)) {
return std::unexpected(fmt::format("Failed to read blob size at index {}", i));
}
Blob blob(blobSize);
if (not file.readRaw(reinterpret_cast<char*>(blob.data()), blobSize)) {
return std::unexpected(fmt::format("Failed to read blob data at index {}", i));
}
return std::make_pair(key, LedgerCache::CacheEntry{.seq = seq, .blob = std::move(blob)});
}
std::expected<void, std::string>
verifySeparator(Separator const& s)
{
if (not std::ranges::all_of(s, [](char c) { return c == 0; })) {
return std::unexpected{"Separator verification failed - data corruption detected"};
}
return {};
}
} // anonymous namespace
LedgerCacheFile::LedgerCacheFile(std::string path) : path_(std::move(path))
{
}
std::expected<void, std::string>
LedgerCacheFile::write(DataView dataView)
{
auto const newFilePath = fmt::format("{}.new", path_);
auto file = OutputFile{newFilePath};
if (not file.isOpen()) {
return std::unexpected{fmt::format("Couldn't open file: {}", newFilePath)};
}
Header const header{
.latestSeq = dataView.latestSeq, .mapSize = dataView.map.size(), .deletedSize = dataView.deleted.size()
};
file.write(header);
file.write(kSEPARATOR);
for (auto const& [k, v] : dataView.map) {
file.write(k.data(), decltype(k)::bytes);
file.write(v.seq);
file.write(v.blob.size());
file.writeRaw(reinterpret_cast<char const*>(v.blob.data()), v.blob.size());
}
file.write(kSEPARATOR);
for (auto const& [k, v] : dataView.deleted) {
file.write(k.data(), decltype(k)::bytes);
file.write(v.seq);
file.write(v.blob.size());
file.writeRaw(reinterpret_cast<char const*>(v.blob.data()), v.blob.size());
}
file.write(kSEPARATOR);
auto const hash = file.hash();
file.write(hash.data(), decltype(hash)::bytes);
try {
std::filesystem::rename(newFilePath, path_);
} catch (std::exception const& e) {
return std::unexpected{fmt::format("Error moving cache file from {} to {}: {}", newFilePath, path_, e.what())};
}
return {};
}
std::expected<LedgerCacheFile::Data, std::string>
LedgerCacheFile::read(uint32_t minLatestSequence)
{
try {
auto file = InputFile{path_};
if (not file.isOpen()) {
return std::unexpected{fmt::format("Couldn't open file: {}", path_)};
}
Data result;
Header header{};
if (not file.read(header)) {
return std::unexpected{"Error reading cache header"};
}
if (header.version != kVERSION) {
return std::unexpected{
fmt::format("Cache has wrong version: expected {} found {}", kVERSION, header.version)
};
}
if (header.latestSeq < minLatestSequence) {
return std::unexpected{fmt::format("Latest sequence ({}) in the cache file is too low.", header.latestSeq)};
}
result.latestSeq = header.latestSeq;
Separator separator{};
if (not file.readRaw(separator.data(), separator.size())) {
return std::unexpected{"Error reading separator"};
}
if (auto verificationResult = verifySeparator(separator); not verificationResult.has_value()) {
return std::unexpected{std::move(verificationResult).error()};
}
for (size_t i = 0; i < header.mapSize; ++i) {
auto cacheEntryExpected = readCacheEntry(file, i);
if (not cacheEntryExpected.has_value()) {
return std::unexpected{std::move(cacheEntryExpected).error()};
}
// Using insert with hint here to decrease insert operation complexity to the amortized constant instead of
// logN
result.map.insert(result.map.end(), std::move(cacheEntryExpected).value());
}
if (not file.readRaw(separator.data(), separator.size())) {
return std::unexpected{"Error reading separator"};
}
if (auto verificationResult = verifySeparator(separator); not verificationResult.has_value()) {
return std::unexpected{std::move(verificationResult).error()};
}
for (size_t i = 0; i < header.deletedSize; ++i) {
auto cacheEntryExpected = readCacheEntry(file, i);
if (not cacheEntryExpected.has_value()) {
return std::unexpected{std::move(cacheEntryExpected).error()};
}
result.deleted.insert(result.deleted.end(), std::move(cacheEntryExpected).value());
}
if (not file.readRaw(separator.data(), separator.size())) {
return std::unexpected{"Error reading separator"};
}
if (auto verificationResult = verifySeparator(separator); not verificationResult.has_value()) {
return std::unexpected{std::move(verificationResult).error()};
}
auto const dataHash = file.hash();
ripple::uint256 hashFromFile{};
if (not file.readRaw(reinterpret_cast<char*>(hashFromFile.data()), decltype(hashFromFile)::bytes)) {
return std::unexpected{"Error reading hash"};
}
if (dataHash != hashFromFile) {
return std::unexpected{"Hash mismatch - cache file corruption detected"};
}
return result;
} catch (std::exception const& e) {
return std::unexpected{fmt::format("Error reading cache file: {}", e.what())};
} catch (...) {
return std::unexpected{"Error reading cache file"};
}
}
} // namespace data::impl
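`LedgerCacheFile::write` above relies on the write-to-temp-then-rename idiom: the new snapshot is fully written (and hashed) into `<path>.new`, and only then renamed over the old file, so a crash mid-write never leaves a half-written cache at the real path. A minimal illustration of just that idiom, with nothing from clio; `atomicWrite` is a hypothetical helper name:

```cpp
#include <cassert>
#include <filesystem>
#include <fstream>
#include <ios>
#include <iterator>
#include <string>
#include <system_error>

// Write `contents` to `path` atomically: stage in "<path>.new", then rename.
// std::filesystem::rename has POSIX semantics and replaces an existing
// destination file, so readers see either the old file or the new one,
// never a partially written mix.
bool
atomicWrite(std::string const& path, std::string const& contents)
{
    std::string const tmp = path + ".new";
    {
        std::ofstream out(tmp, std::ios::binary);
        if (not out.is_open())
            return false;
        out.write(contents.data(), static_cast<std::streamsize>(contents.size()));
        if (out.fail())
            return false;
    }  // stream flushed and closed before the rename below
    std::error_code ec;
    std::filesystem::rename(tmp, path, ec);
    return not ec;
}
```

The real `write` returns `std::expected<void, std::string>` and wraps the rename in a try/catch for the same reason the sketch uses the `std::error_code` overload: a failed swap must surface as an error, not leave a stray `.new` file masquerading as the cache.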


@@ -0,0 +1,70 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/LedgerCache.hpp"
#include "data/impl/InputFile.hpp"
#include "data/impl/OutputFile.hpp"
#include <fmt/format.h>
#include <xrpl/basics/base_uint.h>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <expected>
#include <string>
namespace data::impl {
class LedgerCacheFile {
public:
struct Header {
uint32_t version = kVERSION;
uint32_t latestSeq{};
uint64_t mapSize{};
uint64_t deletedSize{};
};
private:
static constexpr uint32_t kVERSION = 1;
std::string path_;
public:
template <typename T>
struct DataBase {
uint32_t latestSeq{0};
T map;
T deleted;
};
using DataView = DataBase<LedgerCache::CacheMap const&>;
using Data = DataBase<LedgerCache::CacheMap>;
LedgerCacheFile(std::string path);
std::expected<void, std::string>
write(DataView dataView);
std::expected<Data, std::string>
read(uint32_t minLatestSequence);
};
} // namespace data::impl


@@ -0,0 +1,62 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "data/impl/OutputFile.hpp"
#include <xrpl/basics/base_uint.h>
#include <cstddef>
#include <cstring>
#include <ios>
#include <string>
#include <utility>
namespace data::impl {
OutputFile::OutputFile(std::string const& path) : file_(path, std::ios::binary | std::ios::out)
{
}
bool
OutputFile::isOpen() const
{
return file_.is_open();
}
void
OutputFile::writeRaw(char const* data, size_t size)
{
writeToFile(data, size);
}
void
OutputFile::writeToFile(char const* data, size_t size)
{
file_.write(data, size);
shasum_.update(data, size);
}
ripple::uint256
OutputFile::hash() const
{
auto sum = shasum_;
return std::move(sum).finalize();
}
} // namespace data::impl


@@ -0,0 +1,68 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "util/Shasum.hpp"
#include <xrpl/basics/base_uint.h>
#include <cstddef>
#include <cstring>
#include <fstream>
#include <string>
namespace data::impl {
class OutputFile {
std::ofstream file_;
util::Sha256sum shasum_;
public:
OutputFile(std::string const& path);
bool
isOpen() const;
template <typename T>
void
write(T&& data)
{
writeRaw(reinterpret_cast<char const*>(&data), sizeof(T));
}
template <typename T>
void
write(T const* data, size_t const size)
{
writeRaw(reinterpret_cast<char const*>(data), size);
}
void
writeRaw(char const* data, size_t size);
ripple::uint256
hash() const;
private:
void
writeToFile(char const* data, size_t size);
};
} // namespace data::impl


@@ -19,7 +19,7 @@
#pragma once
namespace etlng {
namespace etl {
/**
* @brief The interface of a handler for amendment blocking
@@ -32,6 +32,12 @@ struct AmendmentBlockHandlerInterface {
*/
virtual void
notifyAmendmentBlocked() = 0;
/**
* @brief Stop the block handler from repeatedly executing
*/
virtual void
stop() = 0;
};
} // namespace etlng
} // namespace etl


@@ -7,14 +7,24 @@ target_sources(
ETLService.cpp
ETLState.cpp
LoadBalancer.cpp
MPTHelpers.cpp
NetworkValidatedLedgers.cpp
NFTHelpers.cpp
Source.cpp
MPTHelpers.cpp
impl/AmendmentBlockHandler.cpp
impl/AsyncGrpcCall.cpp
impl/Extraction.cpp
impl/ForwardingSource.cpp
impl/GrpcSource.cpp
impl/Loading.cpp
impl/Monitor.cpp
impl/SubscriptionSource.cpp
impl/TaskManager.cpp
impl/ext/Cache.cpp
impl/ext/Core.cpp
impl/ext/MPT.cpp
impl/ext/NFT.cpp
impl/ext/Successor.cpp
)
target_link_libraries(clio_etl PUBLIC clio_data)
