Compare commits

..

79 Commits

Author SHA1 Message Date
Ayaz Salikhov
1ecc6a6040 chore: Specify apple-clang 17.0 in conan profile (#2757) 2025-11-06 11:51:47 +00:00
github-actions[bot]
1d3e34b392 style: clang-tidy auto fixes (#2759)
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-11-06 09:35:52 +00:00
Alex Kremer
2f8a704071 feat: Ledger publisher use async framework (#2756) 2025-11-05 15:26:03 +00:00
Alex Kremer
fcc5a5425e feat: New ETL by default (#2752) 2025-11-05 13:29:36 +00:00
Ayaz Salikhov
316126746b ci: Remove backticks from release date (#2754) 2025-11-05 11:48:12 +00:00
Alex Kremer
6d79dd6b2b feat: Async framework submit on strand/ctx (#2751) 2025-11-04 19:14:31 +00:00
Ayaz Salikhov
d6ab2cc1e4 style: Fix comment in pre-commit-autoupdate.yml (#2750) 2025-11-03 18:30:52 +00:00
Ayaz Salikhov
13baa42993 chore: Update prepare-runner to fix ccache on macOS (#2749) 2025-11-03 17:55:44 +00:00
Ayaz Salikhov
b485fdc18d ci: Add date to nightly release title (#2748) 2025-11-03 17:16:29 +00:00
Ayaz Salikhov
7e4e12385f ci: Update docker images (#2745) 2025-10-30 17:08:11 +00:00
Ayaz Salikhov
c117f470f2 ci: Install pre-commit in the main CI image as well (#2744) 2025-10-30 14:32:23 +00:00
Ayaz Salikhov
30e88fe72c style: Fix pre-commit style issues (#2743) 2025-10-30 14:04:15 +00:00
Ayaz Salikhov
cecf082952 chore: Update tooling in Docker images (#2737) 2025-10-30 14:04:05 +00:00
Ayaz Salikhov
d5b95c2e61 chore: Use new prepare-runner (#2742) 2025-10-30 14:03:50 +00:00
github-actions[bot]
8375eb1766 style: clang-tidy auto fixes (#2741)
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-10-30 11:20:32 +00:00
Ayaz Salikhov
be6aaffa7a ci: Fix nightly commits link (#2738) 2025-10-30 11:19:22 +00:00
Ayaz Salikhov
104ef6a9dc ci: Release nightly with date (#2731) 2025-10-29 15:33:13 +00:00
yinyiqian1
eed757e0c4 feat: Support account_mptoken_issuances and account_mptokens (#2680)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-29 14:17:43 +00:00
github-actions[bot]
3b61a85ba0 style: clang-tidy auto fixes (#2736) 2025-10-29 09:31:21 +00:00
Sergey Kuznetsov
7c8152d76f test: Fix flaky test (#2729) 2025-10-28 17:36:52 +00:00
dependabot[bot]
0425d34b55 ci: [DEPENDABOT] bump actions/checkout from 4.3.0 to 5.0.0 (#2724)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.3.0 to 5.0.0.
Release notes (v5.0.0): updates the action to run on Node 24 (actions/checkout#2226, actions/checkout#2238); the minimum compatible runner version is v2.327.1.
Full changelog: https://github.com/actions/checkout/compare/v4...v5.0.0

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-28 16:44:09 +00:00
Ayaz Salikhov
8c8a7ff3b8 ci: Use XRPLF/get-nproc (#2727) 2025-10-27 18:50:40 +00:00
Ayaz Salikhov
16493abd0d ci: Better pre-commit failure message (#2720) 2025-10-27 12:14:56 +00:00
dependabot[bot]
3dd72d94e1 ci: [DEPENDABOT] bump actions/download-artifact from 5.0.0 to 6.0.0 (#2723)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 10:17:41 +00:00
dependabot[bot]
5e914abf29 ci: [DEPENDABOT] bump actions/upload-artifact from 4.6.2 to 5.0.0 in /.github/actions/code-coverage (#2725)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 10:16:14 +00:00
dependabot[bot]
9603968808 ci: [DEPENDABOT] bump actions/upload-artifact from 4.6.2 to 5.0.0 (#2722)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 10:15:39 +00:00
Ayaz Salikhov
0124c06a53 ci: Enable clang asan builds (#2717) 2025-10-24 13:36:14 +01:00
Ayaz Salikhov
1bfdd0dd89 ci: Save full logs for failed sanitized tests (#2715) 2025-10-23 17:06:02 +01:00
Alex Kremer
f41d574204 fix: Flaky DeadlineIsHandledCorrectly (#2716) 2025-10-23 16:33:58 +01:00
Ayaz Salikhov
d0ec60381b ci: Use intermediate environment variables for improved security (#2713) 2025-10-23 11:34:53 +01:00
Ayaz Salikhov
0b19a42a96 ci: Pin all GitHub actions (#2712) 2025-10-22 16:17:15 +01:00
emrearıyürek
030f4f1b22 docs: Remove logging.md from readme (#2710) 2025-10-22 11:36:49 +01:00
github-actions[bot]
2de49b4d33 style: clang-tidy auto fixes (#2706) 2025-10-20 10:41:59 +01:00
Ayaz Salikhov
3de2bf2910 ci: Update pre-commit workflow to latest version (#2702) 2025-10-18 07:55:14 +01:00
Peter Chen
7538efb01e fix: Add mpt_issuance_id to meta of MPTIssuanceCreate (#2701)
fixes #2332
2025-10-17 09:58:38 -04:00
Alex Kremer
685f611434 chore: Disable flaky DisconnectClientOnInactivity (#2699) 2025-10-16 18:55:58 +01:00
Ayaz Salikhov
2528dee6b6 chore: Use pre-commit image with libatomic (#2698) 2025-10-16 13:03:52 +01:00
Ayaz Salikhov
b2be4b51d1 ci: Add libatomic as dependency for pre-commit image (#2697) 2025-10-16 12:02:14 +01:00
Alex Kremer
b4e40558c9 fix: Address AmendmentBlockHandler flakiness in old ETL tests (#2694)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-15 17:15:12 +01:00
emrearıyürek
b361e3a108 feat: Support new types in ledger_entry (#2654)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-14 17:37:14 +01:00
Ayaz Salikhov
a4b47da57a docs: Update doxygen-awesome-css to 2.4.1 (#2690) 2025-10-13 18:55:21 +01:00
github-actions[bot]
2ed1a45ef1 style: clang-tidy auto fixes (#2688)
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-10-10 10:45:47 +01:00
Ayaz Salikhov
dabaa5bf80 fix: Drop dynamic loggers to fix memory leak (#2686) 2025-10-09 16:51:55 +01:00
Ayaz Salikhov
b4fb3e42b8 chore: Publish RCs as non-draft (#2685) 2025-10-09 14:48:29 +01:00
Peter Chen
aa64bb7b6b refactor: Keyspace comments (#2684)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-08 19:58:05 +01:00
rrmanukyan
dc5f8b9c23 fix: Add gRPC Timeout and keepalive to handle stuck connections (#2676) 2025-10-08 13:50:11 +01:00
Ayaz Salikhov
7300529484 docs: All files are .hpp (#2683) 2025-10-07 19:28:00 +01:00
Ayaz Salikhov
33802f475f docs: Build docs using doxygen 1.14.0 (#2681) 2025-10-07 18:48:19 +01:00
Ayaz Salikhov
213752862c chore: Install Doxygen 1.14.0 in our images (#2682) 2025-10-07 18:20:24 +01:00
Ayaz Salikhov
a189eeb952 ci: Use separate pre-commit image (#2678) 2025-10-07 16:01:46 +01:00
Ayaz Salikhov
3c1811233a chore: Add git to pre-commit image (#2679) 2025-10-07 15:41:19 +01:00
Alex Kremer
693ed2061c fix: ASAN issue with AmendmentBlockHandler test (#2674)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-07 15:18:01 +01:00
Ayaz Salikhov
1e2f4b5ca2 chore: Add separate pre-commit image (#2677) 2025-10-07 14:23:49 +01:00
Ayaz Salikhov
1da8464d75 style: Rename actions to use dash (#2669) 2025-10-06 16:19:27 +01:00
Ayaz Salikhov
d48fb168c6 ci: Allow PR titles to start with [ (#2668) 2025-10-06 15:20:26 +01:00
dependabot[bot]
92595f95a0 ci: [DEPENDABOT] bump docker/login-action from 3.5.0 to 3.6.0 (#2662)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 15:16:15 +01:00
dependabot[bot]
fc9de87136 ci: [DEPENDABOT] bump docker/login-action from 3.5.0 to 3.6.0 in /.github/actions/build_docker_image (#2663)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-10-06 15:16:06 +01:00
Ayaz Salikhov
67f5ca445f style: Rename workflows to use dash and show reusable (#2667) 2025-10-06 15:02:17 +01:00
Ayaz Salikhov
897c255b8c chore: Start pr title with uppercase (#2666) 2025-10-06 13:54:11 +01:00
github-actions[bot]
aa9eea0d99 style: clang-tidy auto fixes (#2665) 2025-10-06 10:35:33 +01:00
Peter Chen
1cfa06c9aa feat: Support Keyspace (#2454)
Support AWS Keyspace queries

---------

Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
Co-authored-by: Alex Kremer <akremer@ripple.com>
2025-10-03 11:28:50 -04:00
Ayaz Salikhov
2b937bf098 style: Update pre-commit hooks (#2658) 2025-10-01 17:20:02 +01:00
Peter Chen
c6f2197878 chore: Update libXRPL to 2.6.1 (#2656) 2025-10-01 13:17:41 +01:00
Peter Chen
48855633b5 chore: Update libxrpl to 2.6.1-rc2 (#2652)
We'll still need to update to `xrpl/2.6.1` when it is out, but let's
first update it to merge the other PR that depends on xrpl changes
2025-09-29 14:27:26 -04:00
github-actions[bot]
c26df5e1fb style: clang-tidy auto fixes (#2651)
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-09-29 18:11:02 +02:00
emrearıyürek
9fb26d77ed chore: Remove redundant headers and include CassandraDBHelper.h (#2649) 2025-09-26 10:53:04 +01:00
Ayaz Salikhov
c3c8b2d796 docs: Update doxygen-awesome-css to 2.4.0 (#2647) 2025-09-24 12:32:13 +02:00
github-actions[bot]
2f3e9498dc style: clang-tidy auto fixes (#2645) 2025-09-23 11:31:26 +02:00
emrearıyürek
d2de240389 fix: Print out error details of web context (#2351) 2025-09-22 13:45:04 +01:00
Alex Kremer
245a808e4b feat: Cache state endpoint (#2642) 2025-09-18 16:20:17 +01:00
Ayaz Salikhov
586a2116a4 test: Enable address sanitizer testing (#2628) 2025-09-18 16:08:02 +01:00
Ayaz Salikhov
93919ab8b7 ci: Move clang-tidy_on_fix_merged workflow into a main one (#2641) 2025-09-18 14:55:59 +01:00
Ayaz Salikhov
e92a63f4e5 ci: Make separate download/upload options for ccache (#2637) 2025-09-18 13:46:34 +01:00
github-actions[bot]
5e5dd649cf style: clang-tidy auto fixes (#2639) 2025-09-18 10:32:06 +01:00
Peter Chen
fb1cdcbde5 fix: mpt_issuance_id not in tx for MPTIssuanceCreate (#2630)
fixes: #2332
2025-09-17 16:17:58 +01:00
Alex Kremer
b66d13bc74 fix: Workaround large number validation and parsing (#2608)
Fixes #2586
2025-09-17 16:12:32 +01:00
Bart
e78ff5c442 ci: Wrap GitHub CI conditionals in curly braces (#2629)
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-09-16 16:13:03 +01:00
Ayaz Salikhov
c499f9d679 test: Run LoggerFixture::init in integration tests (#2636) 2025-09-16 15:27:03 +01:00
github-actions[bot]
420b99cfa1 style: clang-tidy auto fixes (#2635)
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-09-16 14:14:55 +01:00
333 changed files with 8882 additions and 10369 deletions

View File

@@ -54,7 +54,7 @@ format:
   _help_max_pargs_hwrap:
   - If a positional argument group contains more than this many
   - arguments, then force it to a vertical layout.
-  max_pargs_hwrap: 6
+  max_pargs_hwrap: 5
   _help_max_rows_cmdline:
   - If a cmdline positional group consumes more than this many
   - lines without nesting, then invalidate the layout (and nest)

.github/actions/build-clio/action.yml (vendored, new file, 31 lines)
View File

@@ -0,0 +1,31 @@
name: Build clio
description: Build clio in build directory

inputs:
  targets:
    description: Space-separated build target names
    default: all
  nproc_subtract:
    description: The number of processors to subtract when calculating parallelism.
    required: true
    default: "0"

runs:
  using: composite
  steps:
    - name: Get number of processors
      uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
      id: nproc
      with:
        subtract: ${{ inputs.nproc_subtract }}
    - name: Build targets
      shell: bash
      env:
        CMAKE_TARGETS: ${{ inputs.targets }}
      run: |
        cd build
        cmake \
          --build . \
          --parallel "${{ steps.nproc.outputs.nproc }}" \
          --target ${CMAKE_TARGETS}

View File

@@ -34,14 +34,14 @@ runs:
   steps:
     - name: Login to DockerHub
       if: ${{ inputs.push_image == 'true' && inputs.dockerhub_repo != '' }}
-      uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
+      uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
       with:
         username: ${{ env.DOCKERHUB_USER }}
         password: ${{ env.DOCKERHUB_PW }}
     - name: Login to GitHub Container Registry
       if: ${{ inputs.push_image == 'true' }}
-      uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
+      uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
       with:
         registry: ghcr.io
         username: ${{ github.repository_owner }}

View File

@@ -1,29 +0,0 @@
name: Build clio
description: Build clio in build directory

inputs:
  targets:
    description: Space-separated build target names
    default: all
  subtract_threads:
    description: An option for the action get_number_of_threads. See get_number_of_threads
    required: true
    default: "0"

runs:
  using: composite
  steps:
    - name: Get number of threads
      uses: ./.github/actions/get_number_of_threads
      id: number_of_threads
      with:
        subtract_threads: ${{ inputs.subtract_threads }}
    - name: Build targets
      shell: bash
      run: |
        cd build
        cmake \
          --build . \
          --parallel "${{ steps.number_of_threads.outputs.threads_number }}" \
          --target ${{ inputs.targets }}

View File

@@ -24,7 +24,7 @@ runs:
           -j8 --exclude-throw-branches
     - name: Archive coverage report
-      uses: actions/upload-artifact@v4
+      uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
       with:
         name: coverage-report.xml
         path: build/coverage_report.xml

View File

@@ -28,12 +28,17 @@ runs:
     - name: Create an issue
       id: create_issue
       shell: bash
+      env:
+        ISSUE_BODY: ${{ inputs.body }}
+        ISSUE_ASSIGNEES: ${{ inputs.assignees }}
+        ISSUE_LABELS: ${{ inputs.labels }}
+        ISSUE_TITLE: ${{ inputs.title }}
       run: |
-        echo -e '${{ inputs.body }}' > issue.md
+        echo -e "${ISSUE_BODY}" > issue.md
         gh issue create \
-          --assignee '${{ inputs.assignees }}' \
-          --label '${{ inputs.labels }}' \
-          --title '${{ inputs.title }}' \
+          --assignee "${ISSUE_ASSIGNEES}" \
+          --label "${ISSUE_LABELS}" \
+          --title "${ISSUE_TITLE}" \
           --body-file ./issue.md \
           > create_issue.log
         created_issue="$(sed 's|.*/||' create_issue.log)"
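This is the pattern from #2713 (use intermediate environment variables for improved security): inputs reach the script as environment variables instead of being spliced into its text, so quote characters in a title or body can no longer terminate the shell quoting. A hypothetical illustration (not from this repo) of the difference:

    # Hypothetical malicious input that would break single-quoted interpolation:
    title="it's '; echo pwned; '"

    # Old pattern: '${{ inputs.title }}' is pasted into the script before bash
    # parses it, so the embedded quote ends the argument and the rest runs as code.
    # New pattern: bash only ever sees a quoted variable expansion.
    ISSUE_TITLE="$title"
    printf 'creating issue titled: %s\n' "${ISSUE_TITLE}"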

View File

@@ -1,36 +0,0 @@
name: Get number of threads
description: Determines number of threads to use on macOS and Linux

inputs:
  subtract_threads:
    description: How many threads to subtract from the calculated number
    required: true
    default: "0"

outputs:
  threads_number:
    description: Number of threads to use
    value: ${{ steps.number_of_threads_export.outputs.num }}

runs:
  using: composite
  steps:
    - name: Get number of threads on mac
      id: mac_threads
      if: ${{ runner.os == 'macOS' }}
      shell: bash
      run: echo "num=$(($(sysctl -n hw.logicalcpu) - 2))" >> $GITHUB_OUTPUT
    - name: Get number of threads on Linux
      id: linux_threads
      if: ${{ runner.os == 'Linux' }}
      shell: bash
      run: echo "num=$(($(nproc) - 2))" >> $GITHUB_OUTPUT
    - name: Shift and export number of threads
      id: number_of_threads_export
      shell: bash
      run: |
        num_of_threads="${{ steps.mac_threads.outputs.num || steps.linux_threads.outputs.num }}"
        shift_by="${{ inputs.subtract_threads }}"
        shifted="$((num_of_threads - shift_by))"
        echo "num=$(( shifted > 1 ? shifted : 1 ))" >> $GITHUB_OUTPUT

View File

@@ -27,10 +27,10 @@ runs:
   steps:
     - name: Find common commit
       id: git_common_ancestor
-      uses: ./.github/actions/git_common_ancestor
+      uses: ./.github/actions/git-common-ancestor
     - name: Restore ccache cache
-      uses: actions/cache/restore@v4
+      uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
       id: ccache_cache
       if: ${{ env.CCACHE_DISABLE != '1' }}
       with:

View File

@@ -28,11 +28,11 @@ runs:
   steps:
     - name: Find common commit
       id: git_common_ancestor
-      uses: ./.github/actions/git_common_ancestor
+      uses: ./.github/actions/git-common-ancestor
     - name: Save ccache cache
       if: ${{ inputs.ccache_cache_hit != 'true' || inputs.ccache_cache_miss_rate == '100.0' }}
-      uses: actions/cache/save@v4
+      uses: actions/cache/save@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
       with:
         path: ${{ inputs.ccache_dir }}
         key: clio-ccache-${{ runner.os }}-${{ inputs.build_type }}${{ inputs.code_coverage == 'true' && '-code_coverage' || '' }}-${{ inputs.conan_profile }}-develop-${{ steps.git_common_ancestor.outputs.commit }}

View File

@@ -14,7 +14,7 @@ updates:
     target-branch: develop
   - package-ecosystem: github-actions
-    directory: .github/actions/build_clio/
+    directory: .github/actions/build-clio/
     schedule:
       interval: weekly
       day: monday
@@ -27,7 +27,7 @@ updates:
     target-branch: develop
   - package-ecosystem: github-actions
-    directory: .github/actions/build_docker_image/
+    directory: .github/actions/build-docker-image/
     schedule:
       interval: weekly
       day: monday
@@ -53,7 +53,7 @@ updates:
     target-branch: develop
   - package-ecosystem: github-actions
-    directory: .github/actions/code_coverage/
+    directory: .github/actions/code-coverage/
     schedule:
       interval: weekly
       day: monday
@@ -79,7 +79,7 @@ updates:
     target-branch: develop
   - package-ecosystem: github-actions
-    directory: .github/actions/create_issue/
+    directory: .github/actions/create-issue/
     schedule:
       interval: weekly
       day: monday
@@ -92,7 +92,7 @@ updates:
     target-branch: develop
   - package-ecosystem: github-actions
-    directory: .github/actions/get_number_of_threads/
+    directory: .github/actions/git-common-ancestor/
     schedule:
       interval: weekly
       day: monday
@@ -105,7 +105,7 @@ updates:
     target-branch: develop
   - package-ecosystem: github-actions
-    directory: .github/actions/git_common_ancestor/
+    directory: .github/actions/restore-cache/
     schedule:
       interval: weekly
       day: monday
@@ -118,20 +118,7 @@ updates:
     target-branch: develop
   - package-ecosystem: github-actions
-    directory: .github/actions/restore_cache/
-    schedule:
-      interval: weekly
-      day: monday
-      time: "04:00"
-      timezone: Etc/GMT
-    reviewers:
-      - XRPLF/clio-dev-team
-    commit-message:
-      prefix: "ci: [DEPENDABOT] "
-    target-branch: develop
-  - package-ecosystem: github-actions
-    directory: .github/actions/save_cache/
+    directory: .github/actions/save-cache/
     schedule:
       interval: weekly
       day: monday

View File

@@ -4,7 +4,7 @@ build_type=Release
 compiler=apple-clang
 compiler.cppstd=20
 compiler.libcxx=libc++
-compiler.version=17
+compiler.version=17.0
 os=Macos

 [conf]

View File

@@ -3,7 +3,9 @@ import itertools
 import json

 LINUX_OS = ["heavy", "heavy-arm64"]
-LINUX_CONTAINERS = ['{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }']
+LINUX_CONTAINERS = [
+    '{ "image": "ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a" }'
+]
 LINUX_COMPILERS = ["gcc", "clang"]

 MACOS_OS = ["macos15"]

View File

@@ -31,15 +31,16 @@ TESTS=$($TEST_BINARY --gtest_list_tests | awk '/^ / {print suite $1} !/^ / {su
 OUTPUT_DIR="./.sanitizer-report"
 mkdir -p "$OUTPUT_DIR"

+export TSAN_OPTIONS="die_after_fork=0"
+export MallocNanoZone='0' # for MacOSX
+
 for TEST in $TESTS; do
-    OUTPUT_FILE="$OUTPUT_DIR/${TEST//\//_}"
-    export TSAN_OPTIONS="log_path=\"$OUTPUT_FILE\" die_after_fork=0"
-    export ASAN_OPTIONS="log_path=\"$OUTPUT_FILE\""
-    export UBSAN_OPTIONS="log_path=\"$OUTPUT_FILE\""
-    export MallocNanoZone='0' # for MacOSX
-    $TEST_BINARY --gtest_filter="$TEST" > /dev/null 2>&1
+    OUTPUT_FILE="$OUTPUT_DIR/${TEST//\//_}.log"
+    $TEST_BINARY --gtest_filter="$TEST" > "$OUTPUT_FILE" 2>&1
     if [ $? -ne 0 ]; then
         echo "'$TEST' failed a sanitizer check."
+    else
+        rm "$OUTPUT_FILE"
     fi
 done
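One detail worth noting: `${TEST//\//_}` is a bash global substitution that flattens the gtest suite/test path into a single file name. An illustration with a made-up test name:

    # Made-up test name, for illustration only.
    TEST="Backend/CassandraTest.ReadWrite"
    OUTPUT_DIR="./.sanitizer-report"
    OUTPUT_FILE="$OUTPUT_DIR/${TEST//\//_}.log"   # every "/" becomes "_"
    echo "$OUTPUT_FILE"   # ./.sanitizer-report/Backend_CassandraTest.ReadWrite.log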

View File

@@ -44,11 +44,11 @@ jobs:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
       - name: Download Clio binary from artifact
         if: ${{ inputs.artifact_name != null }}
-        uses: actions/download-artifact@v5
+        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
         with:
           name: ${{ inputs.artifact_name }}
           path: ./docker/clio/artifact/
@@ -56,9 +56,12 @@
       - name: Download Clio binary from url
         if: ${{ inputs.clio_server_binary_url != null }}
         shell: bash
+        env:
+          BINARY_URL: ${{ inputs.clio_server_binary_url }}
+          BINARY_SHA256: ${{ inputs.binary_sha256 }}
         run: |
-          wget "${{inputs.clio_server_binary_url}}" -P ./docker/clio/artifact/
-          if [ "$(sha256sum ./docker/clio/clio_server | awk '{print $1}')" != "${{inputs.binary_sha256}}" ]; then
+          wget "${BINARY_URL}" -P ./docker/clio/artifact/
+          if [ "$(sha256sum ./docker/clio/clio_server | awk '{print $1}')" != "${BINARY_SHA256}" ]; then
             echo "Binary sha256 sum doesn't match"
             exit 1
           fi
@@ -89,7 +92,7 @@
           echo "GHCR_REPO=$(echo ghcr.io/${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> ${GITHUB_OUTPUT}
       - name: Build Docker image
-        uses: ./.github/actions/build_docker_image
+        uses: ./.github/actions/build-docker-image
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}

View File

@@ -8,14 +8,14 @@ on:
     paths:
       - .github/workflows/build.yml
-      - .github/workflows/build_and_test.yml
-      - .github/workflows/build_impl.yml
-      - .github/workflows/test_impl.yml
-      - .github/workflows/upload_coverage_report.yml
+      - .github/workflows/reusable-build-test.yml
+      - .github/workflows/reusable-build.yml
+      - .github/workflows/reusable-test.yml
+      - .github/workflows/reusable-upload-coverage-report.yml
       - ".github/actions/**"
-      - "!.github/actions/build_docker_image/**"
-      - "!.github/actions/create_issue/**"
+      - "!.github/actions/build-docker-image/**"
+      - "!.github/actions/create-issue/**"
       - CMakeLists.txt
       - conanfile.py
@@ -45,7 +45,7 @@
         build_type: [Release, Debug]
         container:
           [
-            '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }',
+            '{ "image": "ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a" }',
           ]
         static: [true]
@@ -56,12 +56,14 @@
             container: ""
             static: false
-    uses: ./.github/workflows/build_and_test.yml
+    uses: ./.github/workflows/reusable-build-test.yml
     with:
       runs_on: ${{ matrix.os }}
       container: ${{ matrix.container }}
       conan_profile: ${{ matrix.conan_profile }}
       build_type: ${{ matrix.build_type }}
+      download_ccache: true
+      upload_ccache: true
       static: ${{ matrix.static }}
       run_unit_tests: true
       run_integration_tests: false
@@ -70,13 +72,14 @@
   code_coverage:
     name: Run Code Coverage
-    uses: ./.github/workflows/build_impl.yml
+    uses: ./.github/workflows/reusable-build.yml
     with:
       runs_on: heavy
-      container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
+      container: '{ "image": "ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a" }'
       conan_profile: gcc
       build_type: Debug
-      disable_cache: false
+      download_ccache: true
+      upload_ccache: false
       code_coverage: true
       static: true
       upload_clio_server: false
@@ -88,13 +91,14 @@
   package:
     name: Build packages
-    uses: ./.github/workflows/build_impl.yml
+    uses: ./.github/workflows/reusable-build.yml
     with:
       runs_on: heavy
-      container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
+      container: '{ "image": "ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a" }'
       conan_profile: gcc
       build_type: Release
-      disable_cache: false
+      download_ccache: true
+      upload_ccache: false
       code_coverage: false
       static: true
       upload_clio_server: false
@@ -107,12 +111,12 @@
     needs: build-and-test
     runs-on: heavy
     container:
-      image: ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
+      image: ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
-      - uses: actions/download-artifact@v5
+      - uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
         with:
           name: clio_server_Linux_Release_gcc

View File

@@ -17,15 +17,15 @@ jobs:
     name: Build Clio / `libXRPL ${{ github.event.client_payload.version }}`
     runs-on: heavy
     container:
-      image: ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
+      image: ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           fetch-depth: 0
       - name: Prepare runner
-        uses: XRPLF/actions/.github/actions/prepare-runner@7951b682e5a2973b28b0719a72f01fc4b0d0c34f
+        uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
         with:
           disable_ccache: true
@@ -51,13 +51,13 @@
           conan_profile: ${{ env.CONAN_PROFILE }}
       - name: Build Clio
-        uses: ./.github/actions/build_clio
+        uses: ./.github/actions/build-clio
       - name: Strip tests
         run: strip build/clio_tests
       - name: Upload clio_tests
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         with:
           name: clio_tests_check_libxrpl
           path: build/clio_tests
@@ -67,10 +67,10 @@
     needs: build
     runs-on: heavy
     container:
-      image: ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
+      image: ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a
     steps:
-      - uses: actions/download-artifact@v5
+      - uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
         with:
           name: clio_tests_check_libxrpl
@@ -90,10 +90,10 @@
       issues: write
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
       - name: Create an issue
-        uses: ./.github/actions/create_issue
+        uses: ./.github/actions/create-issue
         env:
           GH_TOKEN: ${{ github.token }}
         with:

View File

@@ -10,8 +10,17 @@
     runs-on: ubuntu-latest
     steps:
-      - uses: ytanikin/pr-conventional-commits@b72758283dcbee706975950e96bc4bf323a8d8c0 # v1.4.2
+      - uses: ytanikin/pr-conventional-commits@b72758283dcbee706975950e96bc4bf323a8d8c0 # 1.4.2
         with:
           task_types: '["build","feat","fix","docs","test","ci","style","refactor","perf","chore"]'
           add_label: false
           custom_labels: '{"build":"build", "feat":"enhancement", "fix":"bug", "docs":"documentation", "test":"testability", "ci":"ci", "style":"refactoring", "refactor":"refactoring", "perf":"performance", "chore":"tooling"}'
+      - name: Check if message starts with upper-case letter
+        env:
+          PR_TITLE: ${{ github.event.pull_request.title }}
+        run: |
+          if [[ ! "${PR_TITLE}" =~ ^[a-z]+:\ [\[A-Z] ]]; then
+            echo "Error: PR title must start with an upper-case letter."
+            exit 1
+          fi
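The bracket expression `[\[A-Z]` is what allows a title to continue with either an upper-case letter or a literal `[` (see #2668, which allowed PR titles to start with `[`). Made-up titles run against the same regex, for illustration:

    # Made-up titles checked against the regex above.
    for t in "feat: Add cache state endpoint" \
             "ci: [DEPENDABOT] bump actions/checkout" \
             "fix: lowercase start"; do
        if [[ "$t" =~ ^[a-z]+:\ [\[A-Z] ]]; then
            echo "pass: $t"
        else
            echo "fail: $t"
        fi
    done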

View File

@@ -1,6 +1,8 @@
 name: Clang-tidy check

 on:
+  push:
+    branches: [develop]
   schedule:
     - cron: "0 9 * * 1-5"
   workflow_dispatch:
@@ -22,9 +24,10 @@ env:
 jobs:
   clang_tidy:
+    if: github.event_name != 'push' || contains(github.event.head_commit.message, 'clang-tidy auto fixes')
     runs-on: heavy
     container:
-      image: ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
+      image: ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a
     permissions:
       contents: write
@@ -32,17 +35,17 @@
       pull-requests: write
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           fetch-depth: 0
       - name: Prepare runner
-        uses: XRPLF/actions/.github/actions/prepare-runner@7951b682e5a2973b28b0719a72f01fc4b0d0c34f
+        uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
         with:
           disable_ccache: true
       - name: Restore cache
-        uses: ./.github/actions/restore_cache
+        uses: ./.github/actions/restore-cache
         id: restore_cache
         with:
           conan_profile: ${{ env.CONAN_PROFILE }}
@@ -58,16 +61,16 @@
         with:
           conan_profile: ${{ env.CONAN_PROFILE }}
-      - name: Get number of threads
-        uses: ./.github/actions/get_number_of_threads
-        id: number_of_threads
+      - name: Get number of processors
+        uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
+        id: nproc
       - name: Run clang-tidy
         continue-on-error: true
         shell: bash
         id: run_clang_tidy
         run: |
-          run-clang-tidy-${{ env.LLVM_TOOLS_VERSION }} -p build -j "${{ steps.number_of_threads.outputs.threads_number }}" -fix -quiet 1>output.txt
+          run-clang-tidy-${{ env.LLVM_TOOLS_VERSION }} -p build -j "${{ steps.nproc.outputs.nproc }}" -fix -quiet 1>output.txt
       - name: Fix local includes and clang-format style
         if: ${{ steps.run_clang_tidy.outcome != 'success' }}
@@ -87,7 +90,7 @@
       - name: Create an issue
         if: ${{ steps.run_clang_tidy.outcome != 'success' && github.event_name != 'pull_request' }}
         id: create_issue
-        uses: ./.github/actions/create_issue
+        uses: ./.github/actions/create-issue
         env:
           GH_TOKEN: ${{ github.token }}
         with:

View File

@@ -1,30 +0,0 @@
name: Restart clang-tidy workflow

on:
  push:
    branches: [develop]
  workflow_dispatch:

jobs:
  restart_clang_tidy:
    runs-on: ubuntu-latest
    permissions:
      actions: write
    steps:
      - uses: actions/checkout@v4
      - name: Check last commit matches clang-tidy auto fixes
        id: check
        shell: bash
        run: |
          passed=$(if [[ "$(git log -1 --pretty=format:%s | grep 'style: clang-tidy auto fixes')" ]]; then echo 'true' ; else echo 'false' ; fi)
          echo "passed=\"$passed\"" >> $GITHUB_OUTPUT
      - name: Run clang-tidy workflow
        if: ${{ contains(steps.check.outputs.passed, 'true') }}
        shell: bash
        env:
          GH_TOKEN: ${{ github.token }}
          GH_REPO: ${{ github.repository }}
        run: gh workflow run clang-tidy.yml

View File

@@ -14,16 +14,16 @@
   build:
     runs-on: ubuntu-latest
     container:
-      image: ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
+      image: ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a
     steps:
       - name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           lfs: true
       - name: Prepare runner
-        uses: XRPLF/actions/.github/actions/prepare-runner@7951b682e5a2973b28b0719a72f01fc4b0d0c34f
+        uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
         with:
           disable_ccache: true
@@ -39,10 +39,10 @@
         run: cmake --build . --target docs
       - name: Setup Pages
-        uses: actions/configure-pages@v5
+        uses: actions/configure-pages@983d7736d9b0ae728b81ab479565c72886d7745b # v5.0.0
       - name: Upload artifact
-        uses: actions/upload-pages-artifact@v4
+        uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b # v4.0.0
         with:
           path: build_docs/html
           name: docs-develop
@@ -62,6 +62,6 @@
     steps:
       - name: Deploy to GitHub Pages
         id: deployment
-        uses: actions/deploy-pages@v4
+        uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e # v4.0.5
         with:
           artifact_name: docs-develop

View File

@@ -8,14 +8,14 @@ on:
     paths:
       - .github/workflows/nightly.yml
-      - .github/workflows/release_impl.yml
-      - .github/workflows/build_and_test.yml
-      - .github/workflows/build_impl.yml
-      - .github/workflows/test_impl.yml
-      - .github/workflows/build_clio_docker_image.yml
+      - .github/workflows/reusable-release.yml
+      - .github/workflows/reusable-build-test.yml
+      - .github/workflows/reusable-build.yml
+      - .github/workflows/reusable-test.yml
+      - .github/workflows/build-clio-docker-image.yml
       - ".github/actions/**"
-      - "!.github/actions/code_coverage/**"
+      - "!.github/actions/code-coverage/**"
       - .github/scripts/prepare-release-artifacts.sh

 concurrency:
@@ -39,19 +39,19 @@
           conan_profile: gcc
           build_type: Release
           static: true
-          container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
+          container: '{ "image": "ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a" }'
         - os: heavy
           conan_profile: gcc
           build_type: Debug
           static: true
-          container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
+          container: '{ "image": "ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a" }'
         - os: heavy
           conan_profile: gcc.ubsan
           build_type: Release
           static: false
-          container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
+          container: '{ "image": "ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a" }'
-    uses: ./.github/workflows/build_and_test.yml
+    uses: ./.github/workflows/reusable-build-test.yml
     with:
       runs_on: ${{ matrix.os }}
       container: ${{ matrix.container }}
@@ -61,7 +61,8 @@
       run_unit_tests: true
       run_integration_tests: true
       upload_clio_server: true
-      disable_cache: true
+      download_ccache: false
+      upload_ccache: false

   analyze_build_time:
     name: Analyze Build Time
@@ -72,42 +73,54 @@
         include:
           - os: heavy
             conan_profile: clang
-            container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
+            container: '{ "image": "ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a" }'
             static: true
           - os: macos15
             conan_profile: apple-clang
             container: ""
             static: false
-    uses: ./.github/workflows/build_impl.yml
+    uses: ./.github/workflows/reusable-build.yml
     with:
       runs_on: ${{ matrix.os }}
       container: ${{ matrix.container }}
       conan_profile: ${{ matrix.conan_profile }}
       build_type: Release
-      disable_cache: true
+      download_ccache: false
+      upload_ccache: false
       code_coverage: false
       static: ${{ matrix.static }}
       upload_clio_server: false
       targets: all
       analyze_build_time: true

+  get_date:
+    name: Get Date
+    runs-on: ubuntu-latest
+    outputs:
+      date: ${{ steps.get_date.outputs.date }}
+    steps:
+      - name: Get current date
+        id: get_date
+        run: |
+          echo "date=$(date +'%Y%m%d')" >> $GITHUB_OUTPUT
   nightly_release:
-    needs: build-and-test
-    uses: ./.github/workflows/release_impl.yml
+    needs: [build-and-test, get_date]
+    uses: ./.github/workflows/reusable-release.yml
     with:
-      overwrite_release: true
+      delete_pattern: "nightly-*"
       prerelease: true
-      title: "Clio development (nightly) build"
-      version: nightly
+      title: "Clio development build (nightly-${{ needs.get_date.outputs.date }})"
+      version: nightly-${{ needs.get_date.outputs.date }}
       header: >
         > **Note:** Please remember that this is a development release and it is not recommended for production use.
-        Changelog (including previous releases): <https://github.com/XRPLF/clio/commits/nightly>
+        Changelog (including previous releases): <https://github.com/XRPLF/clio/commits/nightly-${{ needs.get_date.outputs.date }}>
       generate_changelog: false
       draft: false

   build_and_publish_docker_image:
-    uses: ./.github/workflows/build_clio_docker_image.yml
+    uses: ./.github/workflows/build-clio-docker-image.yml
     needs: build-and-test
     secrets: inherit
     with:
@@ -128,10 +141,10 @@
       issues: write
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
       - name: Create an issue
-        uses: ./.github/actions/create_issue
+        uses: ./.github/actions/create-issue
         env:
           GH_TOKEN: ${{ github.token }}
         with:
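Taken together, the new `get_date` job and the `nightly_release` inputs mean each nightly is published under a dated tag instead of a single overwritten `nightly` release. An illustration of the stamp the step produces (the concrete date is made up):

    # Illustrative: what the get_date step writes to $GITHUB_OUTPUT.
    date +'%Y%m%d'                    # e.g. 20251106
    echo "date=$(date +'%Y%m%d')"     # e.g. date=20251106
    # Downstream this becomes tags/titles such as: nightly-20251106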

View File

@@ -1,8 +1,8 @@
 name: Pre-commit auto-update

 on:
-  # every first day of the month
   schedule:
+    # every first day of the month
     - cron: "0 0 1 * *"
   pull_request:
     branches: [release/*, develop]

View File

@@ -8,7 +8,7 @@
 jobs:
   run-hooks:
-    uses: XRPLF/actions/.github/workflows/pre-commit.yml@afbcbdafbe0ce5439492fb87eda6441371086386
+    uses: XRPLF/actions/.github/workflows/pre-commit.yml@34790936fae4c6c751f62ec8c06696f9c1a5753a
     with:
       runs_on: heavy
-      container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
+      container: '{ "image": "ghcr.io/xrplf/clio-pre-commit:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a" }'

View File

@@ -29,9 +29,9 @@
           conan_profile: gcc
           build_type: Release
           static: true
-          container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
+          container: '{ "image": "ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a" }'
-    uses: ./.github/workflows/build_and_test.yml
+    uses: ./.github/workflows/reusable-build-test.yml
     with:
       runs_on: ${{ matrix.os }}
       container: ${{ matrix.container }}
@@ -41,18 +41,19 @@
       run_unit_tests: true
       run_integration_tests: true
       upload_clio_server: true
-      disable_cache: true
+      download_ccache: false
+      upload_ccache: false
       expected_version: ${{ github.event_name == 'push' && github.ref_name || '' }}

   release:
     needs: build-and-test
-    uses: ./.github/workflows/release_impl.yml
+    uses: ./.github/workflows/reusable-release.yml
     with:
-      overwrite_release: false
+      delete_pattern: ""
       prerelease: ${{ contains(github.ref_name, '-') }}
-      title: "${{ github.ref_name}}"
+      title: "${{ github.ref_name }}"
       version: "${{ github.ref_name }}"
       header: >
         ${{ contains(github.ref_name, '-') && '> **Note:** Please remember that this is a release candidate and it is not recommended for production use.' || '' }}
       generate_changelog: ${{ !contains(github.ref_name, '-') }}
-      draft: true
+      draft: ${{ !contains(github.ref_name, '-') }}

View File

@@ -23,8 +23,14 @@
       required: true
       type: string
-    disable_cache:
-      description: Whether ccache should be disabled
+    download_ccache:
+      description: Whether to download ccache from the cache
       required: false
       type: boolean
       default: true
+    upload_ccache:
+      description: Whether to upload ccache to the cache
+      required: false
+      type: boolean
+      default: false
@@ -71,13 +77,14 @@
 jobs:
   build:
-    uses: ./.github/workflows/build_impl.yml
+    uses: ./.github/workflows/reusable-build.yml
     with:
       runs_on: ${{ inputs.runs_on }}
       container: ${{ inputs.container }}
       conan_profile: ${{ inputs.conan_profile }}
       build_type: ${{ inputs.build_type }}
-      disable_cache: ${{ inputs.disable_cache }}
+      download_ccache: ${{ inputs.download_ccache }}
+      upload_ccache: ${{ inputs.upload_ccache }}
       code_coverage: false
       static: ${{ inputs.static }}
       upload_clio_server: ${{ inputs.upload_clio_server }}
@@ -88,7 +95,7 @@
   test:
     needs: build
-    uses: ./.github/workflows/test_impl.yml
+    uses: ./.github/workflows/reusable-test.yml
     with:
       runs_on: ${{ inputs.runs_on }}
       container: ${{ inputs.container }}

View File

@@ -23,10 +23,17 @@
       required: true
       type: string
-    disable_cache:
-      description: Whether ccache should be disabled
+    download_ccache:
+      description: Whether to download ccache from the cache
       required: false
       type: boolean
       default: true
+    upload_ccache:
+      description: Whether to upload ccache to the cache
+      required: false
+      type: boolean
+      default: false
     code_coverage:
       description: Whether to enable code coverage
@@ -79,7 +86,7 @@
         if: ${{ runner.os == 'macOS' }}
         uses: XRPLF/actions/.github/actions/cleanup-workspace@ea9970b7c211b18f4c8bcdb28c29f5711752029f
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           fetch-depth: 0
           # We need to fetch tags to have correct version in the release
@@ -88,18 +95,18 @@
           ref: ${{ github.ref }}
       - name: Prepare runner
-        uses: XRPLF/actions/.github/actions/prepare-runner@7951b682e5a2973b28b0719a72f01fc4b0d0c34f
+        uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
         with:
-          disable_ccache: ${{ inputs.disable_cache }}
+          disable_ccache: ${{ !inputs.download_ccache }}
       - name: Setup conan on macOS
-        if: runner.os == 'macOS'
+        if: ${{ runner.os == 'macOS' }}
         shell: bash
         run: ./.github/scripts/conan/init.sh
       - name: Restore cache
-        if: ${{ !inputs.disable_cache }}
-        uses: ./.github/actions/restore_cache
+        if: ${{ inputs.download_ccache }}
+        uses: ./.github/actions/restore-cache
         id: restore_cache
         with:
           conan_profile: ${{ inputs.conan_profile }}
@@ -124,7 +131,7 @@
           package: ${{ inputs.package }}
       - name: Build Clio
-        uses: ./.github/actions/build_clio
+        uses: ./.github/actions/build-clio
         with:
           targets: ${{ inputs.targets }}
@@ -138,13 +145,13 @@
       - name: Upload build time analyze report
         if: ${{ inputs.analyze_build_time }}
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         with:
           name: build_time_report_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
           path: build_time_report.txt
       - name: Show ccache's statistics
-        if: ${{ !inputs.disable_cache }}
+        if: ${{ inputs.download_ccache }}
         shell: bash
         id: ccache_stats
         run: |
@@ -162,36 +169,36 @@
         run: strip build/clio_integration_tests
       - name: Upload clio_server
-        if: inputs.upload_clio_server && !inputs.code_coverage && !inputs.analyze_build_time
-        uses: actions/upload-artifact@v4
+        if: ${{ inputs.upload_clio_server && !inputs.code_coverage && !inputs.analyze_build_time }}
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         with:
           name: clio_server_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
           path: build/clio_server
       - name: Upload clio_tests
         if: ${{ !inputs.code_coverage && !inputs.analyze_build_time && !inputs.package }}
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         with:
           name: clio_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
           path: build/clio_tests
       - name: Upload clio_integration_tests
         if: ${{ !inputs.code_coverage && !inputs.analyze_build_time && !inputs.package }}
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         with:
           name: clio_integration_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
           path: build/clio_integration_tests
       - name: Upload Clio Linux package
-        if: inputs.package
-        uses: actions/upload-artifact@v4
+        if: ${{ inputs.package }}
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         with:
           name: clio_deb_package_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
           path: build/*.deb
       - name: Save cache
-        if: ${{ !inputs.disable_cache && github.ref == 'refs/heads/develop' }}
-        uses: ./.github/actions/save_cache
+        if: ${{ inputs.upload_ccache && github.ref == 'refs/heads/develop' }}
+        uses: ./.github/actions/save-cache
         with:
           conan_profile: ${{ inputs.conan_profile }}
           ccache_dir: ${{ env.CCACHE_DIR }}
@@ -209,17 +216,19 @@
       # It's all available in the build job, but not in the test job
       - name: Run code coverage
         if: ${{ inputs.code_coverage }}
-        uses: ./.github/actions/code_coverage
+        uses: ./.github/actions/code-coverage
       - name: Verify expected version
         if: ${{ inputs.expected_version != '' }}
         shell: bash
+        env:
+          INPUT_EXPECTED_VERSION: ${{ inputs.expected_version }}
         run: |
           set -e
-          EXPECTED_VERSION="clio-${{ inputs.expected_version }}"
+          EXPECTED_VERSION="clio-${INPUT_EXPECTED_VERSION}"
           actual_version=$(./build/clio_server --version)
           if [[ "$actual_version" != "$EXPECTED_VERSION" ]]; then
-            echo "Expected version '$EXPECTED_VERSION', but got '$actual_version'"
+            echo "Expected version '${EXPECTED_VERSION}', but got '${actual_version}'"
             exit 1
           fi
@@ -231,6 +240,6 @@
     if: ${{ inputs.code_coverage }}
     name: Codecov
     needs: build
-    uses: ./.github/workflows/upload_coverage_report.yml
+    uses: ./.github/workflows/reusable-upload-coverage-report.yml
     secrets:
       CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

View File

@@ -3,10 +3,10 @@ name: Make release
 on:
   workflow_call:
     inputs:
-      overwrite_release:
-        description: "Overwrite the current release and tag"
+      delete_pattern:
+        description: "Pattern to delete previous releases"
         required: true
-        type: boolean
+        type: string
       prerelease:
         description: "Create a prerelease"
@@ -42,7 +42,7 @@
   release:
     runs-on: heavy
     container:
-      image: ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
+      image: ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a
     env:
       GH_REPO: ${{ github.repository }}
       GH_TOKEN: ${{ github.token }}
@@ -51,26 +51,28 @@
       contents: write
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           fetch-depth: 0
       - name: Prepare runner
-        uses: XRPLF/actions/.github/actions/prepare-runner@7951b682e5a2973b28b0719a72f01fc4b0d0c34f
+        uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
         with:
           disable_ccache: true
-      - uses: actions/download-artifact@v5
+      - uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
         with:
           path: release_artifacts
           pattern: clio_server_*
       - name: Create release notes
         shell: bash
+        env:
+          RELEASE_HEADER: ${{ inputs.header }}
         run: |
           echo "# Release notes" > "${RUNNER_TEMP}/release_notes.md"
           echo "" >> "${RUNNER_TEMP}/release_notes.md"
-          printf '%s\n' "${{ inputs.header }}" >> "${RUNNER_TEMP}/release_notes.md"
+          printf '%s\n' "${RELEASE_HEADER}" >> "${RUNNER_TEMP}/release_notes.md"
       - name: Generate changelog
         shell: bash
@@ -87,26 +89,38 @@
         run: .github/scripts/prepare-release-artifacts.sh release_artifacts
       - name: Upload release notes
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         with:
           name: release_notes_${{ inputs.version }}
           path: "${RUNNER_TEMP}/release_notes.md"
-      - name: Remove current release and tag
-        if: ${{ github.event_name != 'pull_request' && inputs.overwrite_release }}
+      - name: Remove previous release with a pattern
+        if: ${{ github.event_name != 'pull_request' && inputs.delete_pattern != '' }}
         shell: bash
+        env:
+          DELETE_PATTERN: ${{ inputs.delete_pattern }}
         run: |
-          gh release delete ${{ inputs.version }} --yes || true
-          git push origin :${{ inputs.version }} || true
+          RELEASES_TO_DELETE=$(gh release list --limit 50 --repo "${GH_REPO}" | grep -E "${DELETE_PATTERN}" | awk -F'\t' '{print $3}' || true)
+          if [ -n "$RELEASES_TO_DELETE" ]; then
+            for RELEASE in $RELEASES_TO_DELETE; do
+              echo "Deleting release: $RELEASE"
+              gh release delete "$RELEASE" --repo "${GH_REPO}" --yes --cleanup-tag
+            done
+          fi
       - name: Publish release
         if: ${{ github.event_name != 'pull_request' }}
         shell: bash
+        env:
+          RELEASE_VERSION: ${{ inputs.version }}
+          PRERELEASE_OPTION: ${{ inputs.prerelease && '--prerelease' || '' }}
+          RELEASE_TITLE: ${{ inputs.title }}
+          DRAFT_OPTION: ${{ inputs.draft && '--draft' || '' }}
         run: |
-          gh release create "${{ inputs.version }}" \
-            ${{ inputs.prerelease && '--prerelease' || '' }} \
-            --title "${{ inputs.title }}" \
+          gh release create "${RELEASE_VERSION}" \
+            ${PRERELEASE_OPTION} \
+            --title "${RELEASE_TITLE}" \
             --target "${GITHUB_SHA}" \
-            ${{ inputs.draft && '--draft' || '' }} \
+            ${DRAFT_OPTION} \
            --notes-file "${RUNNER_TEMP}/release_notes.md" \
            ./release_artifacts/clio_server*
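The deletion step above relies on `gh release list` printing tab-separated columns in non-interactive mode, with the tag name in the third field, which is what `awk -F'\t' '{print $3}'` extracts. A standalone sketch of that pipeline, assuming gh's column layout and the `nightly-*` pattern used by the nightly workflow:

    # Sketch, assuming gh's non-interactive tab-separated output
    # (title, type, tag name, published date).
    gh release list --limit 50 --repo XRPLF/clio \
        | grep -E "nightly-" \
        | awk -F'\t' '{print $3}'    # -> tag names such as nightly-20251106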

View File

@@ -39,22 +39,22 @@
     runs-on: ${{ inputs.runs_on }}
     container: ${{ inputs.container != '' && fromJson(inputs.container) || null }}
-    if: inputs.run_unit_tests
+    if: ${{ inputs.run_unit_tests }}
     env:
       # TODO: remove completely when we have fixed all currently existing issues with sanitizers
-      SANITIZER_IGNORE_ERRORS: ${{ endsWith(inputs.conan_profile, '.asan') || endsWith(inputs.conan_profile, '.tsan') }}
+      SANITIZER_IGNORE_ERRORS: ${{ endsWith(inputs.conan_profile, '.tsan') || (inputs.conan_profile == 'gcc.asan' && inputs.build_type == 'Release') }}
     steps:
       - name: Cleanup workspace
         if: ${{ runner.os == 'macOS' }}
         uses: XRPLF/actions/.github/actions/cleanup-workspace@ea9970b7c211b18f4c8bcdb28c29f5711752029f
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
           fetch-depth: 0
-      - uses: actions/download-artifact@v5
+      - uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
         with:
           name: clio_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
@@ -63,15 +63,15 @@
         run: chmod +x ./clio_tests
       - name: Run clio_tests (regular)
-        if: env.SANITIZER_IGNORE_ERRORS == 'false'
+        if: ${{ env.SANITIZER_IGNORE_ERRORS == 'false' }}
         run: ./clio_tests
       - name: Run clio_tests (sanitizer errors ignored)
-        if: env.SANITIZER_IGNORE_ERRORS == 'true'
-        run: ./.github/scripts/execute-tests-under-sanitizer ./clio_tests
+        if: ${{ env.SANITIZER_IGNORE_ERRORS == 'true' }}
+        run: ./.github/scripts/execute-tests-under-sanitizer.sh ./clio_tests
       - name: Check for sanitizer report
-        if: env.SANITIZER_IGNORE_ERRORS == 'true'
+        if: ${{ env.SANITIZER_IGNORE_ERRORS == 'true' }}
         shell: bash
         id: check_report
         run: |
@@ -82,16 +82,16 @@
           fi
       - name: Upload sanitizer report
-        if: env.SANITIZER_IGNORE_ERRORS == 'true' && steps.check_report.outputs.found_report == 'true'
-        uses: actions/upload-artifact@v4
+        if: ${{ env.SANITIZER_IGNORE_ERRORS == 'true' && steps.check_report.outputs.found_report == 'true' }}
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         with:
           name: sanitizer_report_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
           path: .sanitizer-report/*
           include-hidden-files: true
       - name: Create an issue
-        if: false && env.SANITIZER_IGNORE_ERRORS == 'true' && steps.check_report.outputs.found_report == 'true'
-        uses: ./.github/actions/create_issue
+        if: ${{ false && env.SANITIZER_IGNORE_ERRORS == 'true' && steps.check_report.outputs.found_report == 'true' }}
+        uses: ./.github/actions/create-issue
         env:
           GH_TOKEN: ${{ github.token }}
         with:
@@ -108,7 +108,7 @@
     runs-on: ${{ inputs.runs_on }}
     container: ${{ inputs.container != '' && fromJson(inputs.container) || null }}
-    if: inputs.run_integration_tests
+    if: ${{ inputs.run_integration_tests }}
     services:
       scylladb:
@@ -144,7 +144,7 @@
             sleep 5
           done
-      - uses: actions/download-artifact@v5
+      - uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
         with:
           name: clio_integration_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}


@@ -1,7 +1,6 @@
name: Upload report
on:
workflow_dispatch:
workflow_call:
secrets:
CODECOV_TOKEN:
@@ -13,12 +12,12 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
- name: Download report artifact
uses: actions/download-artifact@v5
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: coverage-report.xml
path: build


@@ -8,14 +8,14 @@ on:
paths:
- .github/workflows/sanitizers.yml
- .github/workflows/build_and_test.yml
- .github/workflows/build_impl.yml
- .github/workflows/test_impl.yml
- .github/workflows/reusable-build-test.yml
- .github/workflows/reusable-build.yml
- .github/workflows/reusable-test.yml
- ".github/actions/**"
- "!.github/actions/build_docker_image/**"
- "!.github/actions/create_issue/**"
- .github/scripts/execute-tests-under-sanitizer
- "!.github/actions/build-docker-image/**"
- "!.github/actions/create-issue/**"
- .github/scripts/execute-tests-under-sanitizer.sh
- CMakeLists.txt
- conanfile.py
@@ -41,11 +41,12 @@ jobs:
sanitizer_ext: [.asan, .tsan, .ubsan]
build_type: [Release, Debug]
uses: ./.github/workflows/build_and_test.yml
uses: ./.github/workflows/reusable-build-test.yml
with:
runs_on: heavy
container: '{ "image": "ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d" }'
disable_cache: true
container: '{ "image": "ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a" }'
download_ccache: false
upload_ccache: false
conan_profile: ${{ matrix.compiler }}${{ matrix.sanitizer_ext }}
build_type: ${{ matrix.build_type }}
static: false


@@ -3,23 +3,23 @@ name: Update CI docker image
on:
pull_request:
paths:
- .github/workflows/update_docker_ci.yml
- .github/workflows/update-docker-ci.yml
- ".github/actions/build_docker_image/**"
- ".github/actions/build-docker-image/**"
- "docker/ci/**"
- "docker/compilers/**"
- "docker/tools/**"
- "docker/**"
- "!docker/clio/**"
- "!docker/develop/**"
push:
branches: [develop]
paths:
- .github/workflows/update_docker_ci.yml
- .github/workflows/update-docker-ci.yml
- ".github/actions/build_docker_image/**"
- ".github/actions/build-docker-image/**"
- "docker/ci/**"
- "docker/compilers/**"
- "docker/tools/**"
- "docker/**"
- "!docker/clio/**"
- "!docker/develop/**"
workflow_dispatch:
concurrency:
@@ -52,7 +52,7 @@ jobs:
needs: repo
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Get changed files
id: changed-files
@@ -60,8 +60,8 @@ jobs:
with:
files: "docker/compilers/gcc/**"
- uses: ./.github/actions/build_docker_image
if: steps.changed-files.outputs.any_changed == 'true'
- uses: ./.github/actions/build-docker-image
if: ${{ steps.changed-files.outputs.any_changed == 'true' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
@@ -90,16 +90,16 @@ jobs:
needs: repo
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: "docker/compilers/gcc/**"
- uses: ./.github/actions/build_docker_image
if: steps.changed-files.outputs.any_changed == 'true'
- uses: ./.github/actions/build-docker-image
if: ${{ steps.changed-files.outputs.any_changed == 'true' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
@@ -128,7 +128,7 @@ jobs:
needs: [repo, gcc-amd64, gcc-arm64]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Get changed files
id: changed-files
@@ -137,25 +137,25 @@ jobs:
files: "docker/compilers/gcc/**"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to DockerHub
if: github.repository_owner == 'XRPLF' && github.event_name != 'pull_request'
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
if: ${{ github.repository_owner == 'XRPLF' && github.event_name != 'pull_request' }}
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
username: ${{ secrets.DOCKERHUB_USER }}
password: ${{ secrets.DOCKERHUB_PW }}
- name: Create and push multi-arch manifest
if: github.event_name != 'pull_request' && steps.changed-files.outputs.any_changed == 'true'
if: ${{ github.event_name != 'pull_request' && steps.changed-files.outputs.any_changed == 'true' }}
run: |
push_image() {
image=$1
@@ -179,7 +179,7 @@ jobs:
needs: repo
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Get changed files
id: changed-files
@@ -187,8 +187,8 @@ jobs:
with:
files: "docker/compilers/clang/**"
- uses: ./.github/actions/build_docker_image
if: steps.changed-files.outputs.any_changed == 'true'
- uses: ./.github/actions/build-docker-image
if: ${{ steps.changed-files.outputs.any_changed == 'true' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
@@ -215,7 +215,7 @@ jobs:
needs: [repo, gcc-merge]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Get changed files
id: changed-files
@@ -223,8 +223,8 @@ jobs:
with:
files: "docker/tools/**"
- uses: ./.github/actions/build_docker_image
if: steps.changed-files.outputs.any_changed == 'true'
- uses: ./.github/actions/build-docker-image
if: ${{ steps.changed-files.outputs.any_changed == 'true' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
@@ -246,16 +246,16 @@ jobs:
needs: [repo, gcc-merge]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: "docker/tools/**"
- uses: ./.github/actions/build_docker_image
if: steps.changed-files.outputs.any_changed == 'true'
- uses: ./.github/actions/build-docker-image
if: ${{ steps.changed-files.outputs.any_changed == 'true' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
@@ -277,7 +277,7 @@ jobs:
needs: [repo, tools-amd64, tools-arm64]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Get changed files
id: changed-files
@@ -286,18 +286,18 @@ jobs:
files: "docker/tools/**"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create and push multi-arch manifest
if: github.event_name != 'pull_request' && steps.changed-files.outputs.any_changed == 'true'
if: ${{ github.event_name != 'pull_request' && steps.changed-files.outputs.any_changed == 'true' }}
run: |
image=${{ needs.repo.outputs.GHCR_REPO }}/clio-tools
docker buildx imagetools create \
@@ -306,14 +306,36 @@ jobs:
$image:arm64-latest \
$image:amd64-latest
pre-commit:
name: Build and push pre-commit docker image
runs-on: heavy
needs: [repo, tools-merge]
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: ./.github/actions/build-docker-image
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
images: |
${{ needs.repo.outputs.GHCR_REPO }}/clio-pre-commit
push_image: ${{ github.event_name != 'pull_request' }}
directory: docker/pre-commit
tags: |
type=raw,value=latest
type=raw,value=${{ github.sha }}
platforms: linux/amd64,linux/arm64
build_args: |
GHCR_REPO=${{ needs.repo.outputs.GHCR_REPO }}
ci:
name: Build and push CI docker image
runs-on: heavy
needs: [repo, gcc-merge, clang, tools-merge]
steps:
- uses: actions/checkout@v4
- uses: ./.github/actions/build_docker_image
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: ./.github/actions/build-docker-image
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}


@@ -18,7 +18,7 @@ on:
pull_request:
branches: [develop]
paths:
- .github/workflows/upload_conan_deps.yml
- .github/workflows/upload-conan-deps.yml
- .github/actions/conan/action.yml
- ".github/scripts/conan/**"
@@ -28,7 +28,7 @@ on:
push:
branches: [develop]
paths:
- .github/workflows/upload_conan_deps.yml
- .github/workflows/upload-conan-deps.yml
- .github/actions/conan/action.yml
- ".github/scripts/conan/**"
@@ -46,7 +46,7 @@ jobs:
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Calculate conan matrix
id: set-matrix
@@ -69,15 +69,15 @@ jobs:
CONAN_PROFILE: ${{ matrix.compiler }}${{ matrix.sanitizer_ext }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@7951b682e5a2973b28b0719a72f01fc4b0d0c34f
uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
with:
disable_ccache: true
- name: Setup conan on macOS
if: runner.os == 'macOS'
if: ${{ runner.os == 'macOS' }}
shell: bash
run: ./.github/scripts/conan/init.sh
@@ -94,9 +94,11 @@ jobs:
build_type: ${{ matrix.build_type }}
- name: Login to Conan
if: github.repository_owner == 'XRPLF' && github.event_name != 'pull_request'
if: ${{ github.repository_owner == 'XRPLF' && github.event_name != 'pull_request' }}
run: conan remote login -p ${{ secrets.CONAN_PASSWORD }} xrplf ${{ secrets.CONAN_USERNAME }}
- name: Upload Conan packages
if: github.repository_owner == 'XRPLF' && github.event_name != 'pull_request' && github.event_name != 'schedule'
run: conan upload "*" -r=xrplf --confirm ${{ github.event.inputs.force_upload == 'true' && '--force' || '' }}
if: ${{ github.repository_owner == 'XRPLF' && github.event_name != 'pull_request' && github.event_name != 'schedule' }}
env:
FORCE_OPTION: ${{ github.event.inputs.force_upload == 'true' && '--force' || '' }}
run: conan upload "*" -r=xrplf --confirm ${FORCE_OPTION}


@@ -11,7 +11,10 @@
#
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
exclude: ^(docs/doxygen-awesome-theme/|conan\.lock$)
exclude: |
(?x)^(
docs/doxygen-awesome-theme/.*
)$
repos:
# `pre-commit sample-config` default hooks
@@ -37,13 +40,13 @@ repos:
exclude: LICENSE.md
- repo: https://github.com/hadolint/hadolint
rev: c3dc18df7a501f02a560a2cc7ba3c69a85ca01d3 # frozen: v2.13.1-beta
rev: 4e697ba704fd23b2409b947a319c19c3ee54d24f # frozen: v2.14.0
hooks:
- id: hadolint-docker
# hadolint-docker is a special hook that runs hadolint in a Docker container
# Docker is not installed in the environment where pre-commit is run
stages: [manual]
entry: hadolint/hadolint:v2.12.1-beta hadolint
entry: hadolint/hadolint:v2.14.0 hadolint
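# Since this hook runs only in the manual stage, it has to be invoked
# explicitly in an environment where Docker is available (assumed
# invocation, standard pre-commit CLI):
#   pre-commit run hadolint-docker --hook-stage manual --all-files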
- repo: https://github.com/codespell-project/codespell
rev: 63c8f8312b7559622c0d82815639671ae42132ac # frozen: v2.4.1
@@ -80,7 +83,7 @@ repos:
language: script
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: 86fdcc9bd34d6afbbd29358b97436c8ffe3aa3b2 # frozen: v21.1.0
rev: 719856d56a62953b8d2839fb9e851f25c3cfeef8 # frozen: v21.1.2
hooks:
- id: clang-format
args: [--style=file]


@@ -34,7 +34,6 @@ Below are some useful docs to learn more about Clio.
- [How to configure Clio and rippled](./docs/configure-clio.md)
- [How to run Clio](./docs/run-clio.md)
- [Logging](./docs/logging.md)
- [Troubleshooting guide](./docs/trouble_shooting.md)
**General reference material:**


@@ -3,7 +3,7 @@
"requires": [
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1756234269.497",
"xxhash/0.8.3#681d36a0a6111fc56e5e45ea182c19cc%1756234289.683",
"xrpl/2.6.0#57b93b5a6c99dc8511fccb3bb5390352%1756820296.642",
"xrpl/2.6.1#973af2bf9631f239941dd9f5a100bb84%1759275059.342",
"sqlite3/3.49.1#8631739a4c9b93bd3d6b753bac548a63%1756234266.869",
"spdlog/1.15.3#3ca0e9e6b83af4d0151e26541d140c86%1754401846.61",
"soci/4.0.3#a9f8d773cd33e356b5879a4b0564f287%1756234262.318",
@@ -41,15 +41,12 @@
"overrides": {
"boost/1.83.0": [
null,
"boost/1.83.0"
"boost/1.83.0#5d975011d65b51abb2d2f6eb8386b368"
],
"protobuf/3.21.12": [
null,
"protobuf/3.21.12"
],
"boost/1.86.0": [
"boost/1.83.0#5d975011d65b51abb2d2f6eb8386b368"
],
"lz4/1.9.4": [
"lz4/1.10.0"
],
@@ -58,4 +55,4 @@
]
},
"config_requires": []
}
}


@@ -18,7 +18,7 @@ class ClioConan(ConanFile):
'protobuf/3.21.12',
'grpc/1.50.1',
'openssl/1.1.1w',
'xrpl/2.6.0',
'xrpl/2.6.1',
'zlib/1.3.1',
'libbacktrace/cci.20210118',
'spdlog/1.15.3',


@@ -43,25 +43,22 @@ RUN apt-get update \
&& rm -rf /var/lib/apt/lists/*
# Install Python tools
ARG PYTHON_VERSION=3.13
RUN add-apt-repository ppa:deadsnakes/ppa \
&& apt-get update \
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
python${PYTHON_VERSION} \
python${PYTHON_VERSION}-venv \
python3 \
python3-pip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& curl -sS https://bootstrap.pypa.io/get-pip.py | python${PYTHON_VERSION}
# Create a virtual environment for python tools
RUN python${PYTHON_VERSION} -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
&& rm -rf /var/lib/apt/lists/*
RUN pip install -q --no-cache-dir \
# TODO: Remove this once we switch to newer Ubuntu base image
# lxml 6.0.0 is not compatible with our image
'lxml<6.0.0' \
cmake \
conan==2.20.1 \
conan==2.22.1 \
gcovr \
# We're adding pre-commit to this image as well,
# because the clang-tidy workflow requires it
pre-commit
# Install LLVM tools


@@ -5,17 +5,17 @@ It is used in [Clio Github Actions](https://github.com/XRPLF/clio/actions) but c
The image is based on Ubuntu 20.04 and contains:
- ccache 4.11.3
- ccache 4.12.1
- Clang 19
- ClangBuildAnalyzer 1.6.0
- Conan 2.20.1
- Doxygen 1.12
- Conan 2.22.1
- Doxygen 1.15.0
- GCC 15.2.0
- GDB 16.3
- gh 2.74
- git-cliff 2.9.1
- mold 2.40.1
- Python 3.13
- gh 2.82.1
- git-cliff 2.10.1
- mold 2.40.4
- Python 3.8
- and some other useful tools
Conan is set up to build Clio without any additional steps.
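
For illustration, a generic Conan 2 + CMake flow inside this image might look like the sketch below; the exact flags, profile, and generator paths are assumptions and may differ from Clio's actual build instructions:

```sh
# Sketch only: resolve dependencies with Conan, then configure and build with CMake.
conan install . --output-folder build --build missing
cmake -B build -DCMAKE_TOOLCHAIN_FILE=conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release
cmake --build build --parallel
```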


@@ -1,6 +1,6 @@
services:
clio_develop:
image: ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
image: ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a
volumes:
- clio_develop_conan_data:/root/.conan2/p
- clio_develop_ccache:/root/.ccache


@@ -0,0 +1,38 @@
ARG GHCR_REPO=invalid
FROM ${GHCR_REPO}/clio-tools:latest AS clio-tools
# We're using Ubuntu 24.04 to have a more recent version of Python
FROM ubuntu:24.04
ARG DEBIAN_FRONTEND=noninteractive
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# hadolint ignore=DL3002
USER root
WORKDIR /root
# Install common tools and dependencies
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
curl \
git \
libatomic1 \
software-properties-common \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install Python tools
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
python3 \
python3-pip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN pip install -q --no-cache-dir --break-system-packages \
pre-commit
COPY --from=clio-tools \
/usr/local/bin/doxygen \
/usr/local/bin/


@@ -8,7 +8,7 @@ ARG TARGETARCH
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
ARG BUILD_VERSION=2
ARG BUILD_VERSION=0
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
@@ -24,7 +24,7 @@ RUN apt-get update \
WORKDIR /tmp
ARG MOLD_VERSION=2.40.1
ARG MOLD_VERSION=2.40.4
RUN wget --progress=dot:giga "https://github.com/rui314/mold/archive/refs/tags/v${MOLD_VERSION}.tar.gz" \
&& tar xf "v${MOLD_VERSION}.tar.gz" \
&& cd "mold-${MOLD_VERSION}" \
@@ -34,7 +34,7 @@ RUN wget --progress=dot:giga "https://github.com/rui314/mold/archive/refs/tags/v
&& ninja install \
&& rm -rf /tmp/* /var/tmp/*
ARG CCACHE_VERSION=4.11.3
ARG CCACHE_VERSION=4.12.1
RUN wget --progress=dot:giga "https://github.com/ccache/ccache/releases/download/v${CCACHE_VERSION}/ccache-${CCACHE_VERSION}.tar.gz" \
&& tar xf "ccache-${CCACHE_VERSION}.tar.gz" \
&& cd "ccache-${CCACHE_VERSION}" \
@@ -51,7 +51,7 @@ RUN apt-get update \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ARG DOXYGEN_VERSION=1.12.0
ARG DOXYGEN_VERSION=1.15.0
RUN wget --progress=dot:giga "https://github.com/doxygen/doxygen/releases/download/Release_${DOXYGEN_VERSION//./_}/doxygen-${DOXYGEN_VERSION}.src.tar.gz" \
&& tar xf "doxygen-${DOXYGEN_VERSION}.src.tar.gz" \
&& cd "doxygen-${DOXYGEN_VERSION}" \
@@ -71,13 +71,13 @@ RUN wget --progress=dot:giga "https://github.com/aras-p/ClangBuildAnalyzer/archi
&& ninja install \
&& rm -rf /tmp/* /var/tmp/*
ARG GIT_CLIFF_VERSION=2.9.1
ARG GIT_CLIFF_VERSION=2.10.1
RUN wget --progress=dot:giga "https://github.com/orhun/git-cliff/releases/download/v${GIT_CLIFF_VERSION}/git-cliff-${GIT_CLIFF_VERSION}-x86_64-unknown-linux-musl.tar.gz" \
&& tar xf git-cliff-${GIT_CLIFF_VERSION}-x86_64-unknown-linux-musl.tar.gz \
&& mv git-cliff-${GIT_CLIFF_VERSION}/git-cliff /usr/local/bin/git-cliff \
&& rm -rf /tmp/* /var/tmp/*
ARG GH_VERSION=2.74.0
ARG GH_VERSION=2.82.1
RUN wget --progress=dot:giga "https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_linux_${TARGETARCH}.tar.gz" \
&& tar xf gh_${GH_VERSION}_linux_${TARGETARCH}.tar.gz \
&& mv gh_${GH_VERSION}_linux_${TARGETARCH}/bin/gh /usr/local/bin/gh \


@@ -3,6 +3,8 @@ PROJECT_LOGO = ${SOURCE}/docs/img/xrpl-logo.svg
PROJECT_NUMBER = ${DOC_CLIO_VERSION}
PROJECT_BRIEF = The XRP Ledger API server.
DOT_GRAPH_MAX_NODES = 100
EXTRACT_ALL = NO
EXTRACT_PRIVATE = NO
EXTRACT_PACKAGE = YES
@@ -13,6 +15,7 @@ EXTRACT_ANON_NSPACES = NO
SORT_MEMBERS_CTORS_1ST = YES
INPUT = ${SOURCE}/src
USE_MDFILE_AS_MAINPAGE = ${SOURCE}/src/README.md
EXCLUDE_SYMBOLS = ${EXCLUDES}
RECURSIVE = YES
HAVE_DOT = ${USE_DOT}
@@ -32,12 +35,18 @@ SORT_MEMBERS_CTORS_1ST = YES
GENERATE_TREEVIEW = YES
DISABLE_INDEX = NO
FULL_SIDEBAR = NO
HTML_HEADER = ${SOURCE}/docs/doxygen-awesome-theme/header.html
HTML_HEADER = ${SOURCE}/docs/doxygen-awesome-theme/doxygen-custom/header.html
HTML_EXTRA_STYLESHEET = ${SOURCE}/docs/doxygen-awesome-theme/doxygen-awesome.css \
${SOURCE}/docs/doxygen-awesome-theme/doxygen-awesome-sidebar-only.css \
${SOURCE}/docs/doxygen-awesome-theme/doxygen-awesome-sidebar-only-darkmode-toggle.css
${SOURCE}/docs/doxygen-awesome-theme/doxygen-awesome-sidebar-only-darkmode-toggle.css \
${SOURCE}/docs/doxygen-awesome-theme/doxygen-custom/custom.css \
${SOURCE}/docs/doxygen-awesome-theme/doxygen-custom/theme-robot.css \
${SOURCE}/docs/doxygen-awesome-theme/doxygen-custom/theme-round.css \
${SOURCE}/docs/github-corner-disable.css
HTML_EXTRA_FILES = ${SOURCE}/docs/doxygen-awesome-theme/doxygen-awesome-darkmode-toggle.js \
${SOURCE}/docs/doxygen-awesome-theme/doxygen-awesome-interactive-toc.js
${SOURCE}/docs/doxygen-awesome-theme/doxygen-awesome-fragment-copy-button.js \
${SOURCE}/docs/doxygen-awesome-theme/doxygen-awesome-interactive-toc.js \
${SOURCE}/docs/doxygen-awesome-theme/doxygen-custom/toggle-alternative-theme.js
HTML_COLORSTYLE = LIGHT
HTML_COLORSTYLE_HUE = 209


@@ -177,7 +177,7 @@ There are several CMake options you can use to customize the build:
### Generating API docs for Clio
The API documentation for Clio is generated by [Doxygen](https://www.doxygen.nl/index.html). If you want to generate the API documentation when building Clio, make sure to install Doxygen 1.12.0 on your system.
The API documentation for Clio is generated by [Doxygen](https://www.doxygen.nl/index.html). If you want to generate the API documentation when building Clio, make sure to install Doxygen 1.14.0 on your system.
To generate the API docs, please use CMake option `-Ddocs=ON` as described above and build the `docs` target.
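For example, assuming an out-of-source build directory named `build`, this amounts to a standard two-step CMake invocation:

```sh
# Configure with documentation enabled, then build only the docs target.
cmake -B build -Ddocs=ON
cmake --build build --target docs
```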
@@ -191,7 +191,7 @@ Open the `index.html` file in your browser to see the documentation pages.
It is also possible to build Clio using [Docker](https://www.docker.com/) if you don't want to install all the dependencies on your machine.
```sh
docker run -it ghcr.io/xrplf/clio-ci:384e79cd32f5f6c0ab9be3a1122ead41c5a7e67d
docker run -it ghcr.io/xrplf/clio-ci:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a
git clone https://github.com/XRPLF/clio
cd clio
```


@@ -1,29 +1,10 @@
// SPDX-License-Identifier: MIT
/**
Doxygen Awesome
https://github.com/jothepro/doxygen-awesome-css
MIT License
Copyright (c) 2021 - 2023 jothepro
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Copyright (c) 2021 - 2025 jothepro
*/


@@ -0,0 +1,66 @@
// SPDX-License-Identifier: MIT
/**
Doxygen Awesome
https://github.com/jothepro/doxygen-awesome-css
Copyright (c) 2022 - 2025 jothepro
*/
class DoxygenAwesomeFragmentCopyButton extends HTMLElement {
constructor() {
super();
this.onclick=this.copyContent
}
static title = "Copy to clipboard"
static copyIcon = `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" width="24" height="24"><path d="M0 0h24v24H0V0z" fill="none"/><path d="M23.04,10.322c0,-2.582 -2.096,-4.678 -4.678,-4.678l-6.918,-0c-2.582,-0 -4.678,2.096 -4.678,4.678c0,-0 0,8.04 0,8.04c0,2.582 2.096,4.678 4.678,4.678c0,-0 6.918,-0 6.918,-0c2.582,-0 4.678,-2.096 4.678,-4.678c0,-0 0,-8.04 0,-8.04Zm-2.438,-0l-0,8.04c-0,1.236 -1.004,2.24 -2.24,2.24l-6.918,-0c-1.236,-0 -2.239,-1.004 -2.239,-2.24l-0,-8.04c-0,-1.236 1.003,-2.24 2.239,-2.24c0,0 6.918,0 6.918,0c1.236,0 2.24,1.004 2.24,2.24Z"/><path d="M5.327,16.748c-0,0.358 -0.291,0.648 -0.649,0.648c0,0 0,0 0,0c-2.582,0 -4.678,-2.096 -4.678,-4.678c0,0 0,-8.04 0,-8.04c0,-2.582 2.096,-4.678 4.678,-4.678l6.918,0c2.168,0 3.994,1.478 4.523,3.481c0.038,0.149 0.005,0.306 -0.09,0.428c-0.094,0.121 -0.239,0.191 -0.392,0.191c-0.451,0.005 -1.057,0.005 -1.457,0.005c-0.238,0 -0.455,-0.14 -0.553,-0.357c-0.348,-0.773 -1.128,-1.31 -2.031,-1.31c-0,0 -6.918,0 -6.918,0c-1.236,0 -2.24,1.004 -2.24,2.24l0,8.04c0,1.236 1.004,2.24 2.24,2.24l0,-0c0.358,-0 0.649,0.29 0.649,0.648c-0,0.353 -0,0.789 -0,1.142Z" style="fill-opacity:0.6;"/></svg>`
static successIcon = `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" width="24" height="24"><path d="M8.084,16.111c-0.09,0.09 -0.212,0.141 -0.34,0.141c-0.127,-0 -0.249,-0.051 -0.339,-0.141c-0.746,-0.746 -2.538,-2.538 -3.525,-3.525c-0.375,-0.375 -0.983,-0.375 -1.357,0c-0.178,0.178 -0.369,0.369 -0.547,0.547c-0.375,0.375 -0.375,0.982 -0,1.357c1.135,1.135 3.422,3.422 4.75,4.751c0.27,0.27 0.637,0.421 1.018,0.421c0.382,0 0.749,-0.151 1.019,-0.421c2.731,-2.732 10.166,-10.167 12.454,-12.455c0.375,-0.375 0.375,-0.982 -0,-1.357c-0.178,-0.178 -0.369,-0.369 -0.547,-0.547c-0.375,-0.375 -0.982,-0.375 -1.357,0c-2.273,2.273 -9.567,9.567 -11.229,11.229Z"/></svg>`
static successDuration = 980
static init() {
$(function() {
$(document).ready(function() {
if(navigator.clipboard) {
const fragments = document.getElementsByClassName("fragment")
for(const fragment of fragments) {
const fragmentWrapper = document.createElement("div")
fragmentWrapper.className = "doxygen-awesome-fragment-wrapper"
const fragmentCopyButton = document.createElement("doxygen-awesome-fragment-copy-button")
fragmentCopyButton.innerHTML = DoxygenAwesomeFragmentCopyButton.copyIcon
fragmentCopyButton.title = DoxygenAwesomeFragmentCopyButton.title
fragment.parentNode.replaceChild(fragmentWrapper, fragment)
fragmentWrapper.appendChild(fragment)
fragmentWrapper.appendChild(fragmentCopyButton)
}
}
})
})
}
copyContent() {
const content = this.previousSibling.cloneNode(true)
// filter out line numbers from file listings
content.querySelectorAll(".lineno, .ttc").forEach((node) => {
node.remove()
})
let textContent = content.textContent
// remove trailing newlines that appear in file listings
let numberOfTrailingNewlines = 0
while(textContent.charAt(textContent.length - (numberOfTrailingNewlines + 1)) == '\n') {
numberOfTrailingNewlines++;
}
textContent = textContent.substring(0, textContent.length - numberOfTrailingNewlines)
navigator.clipboard.writeText(textContent);
this.classList.add("success")
this.innerHTML = DoxygenAwesomeFragmentCopyButton.successIcon
window.setTimeout(() => {
this.classList.remove("success")
this.innerHTML = DoxygenAwesomeFragmentCopyButton.copyIcon
}, DoxygenAwesomeFragmentCopyButton.successDuration);
}
}
customElements.define("doxygen-awesome-fragment-copy-button", DoxygenAwesomeFragmentCopyButton)


@@ -1,29 +1,10 @@
// SPDX-License-Identifier: MIT
/**
Doxygen Awesome
https://github.com/jothepro/doxygen-awesome-css
MIT License
Copyright (c) 2022 - 2023 jothepro
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Copyright (c) 2022 - 2025 jothepro
*/
@@ -55,9 +36,7 @@ class DoxygenAwesomeInteractiveToc {
headerNode: document.getElementById(id)
})
document.getElementById("doc-content")?.addEventListener("scroll", () => {
DoxygenAwesomeInteractiveToc.update()
})
document.getElementById("doc-content")?.addEventListener("scroll",this.throttle(DoxygenAwesomeInteractiveToc.update, 100))
})
DoxygenAwesomeInteractiveToc.update()
}
@@ -78,4 +57,16 @@ class DoxygenAwesomeInteractiveToc {
active?.classList.add("active")
active?.classList.remove("aboveActive")
}
}
static throttle(func, delay) {
let lastCall = 0;
return function (...args) {
const now = new Date().getTime();
if (now - lastCall < delay) {
return;
}
lastCall = now;
return setTimeout(() => {func(...args)}, delay);
};
}
}


@@ -1,30 +1,10 @@
/* SPDX-License-Identifier: MIT */
/**
Doxygen Awesome
https://github.com/jothepro/doxygen-awesome-css
MIT License
Copyright (c) 2021 - 2023 jothepro
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Copyright (c) 2021 - 2025 jothepro
*/


@@ -1,29 +1,10 @@
/* SPDX-License-Identifier: MIT */
/**
Doxygen Awesome
https://github.com/jothepro/doxygen-awesome-css
MIT License
Copyright (c) 2021 - 2023 jothepro
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Copyright (c) 2021 - 2025 jothepro
*/
@@ -60,10 +41,6 @@ html {
height: calc(100vh - var(--top-height)) !important;
}
#nav-tree {
padding: 0;
}
#top {
display: block;
border-bottom: none;
@@ -73,22 +50,24 @@ html {
overflow: hidden;
background: var(--side-nav-background);
}
#main-nav {
float: left;
padding-right: 0;
}
.ui-resizable-handle {
cursor: default;
width: 1px !important;
background: var(--separator-color);
box-shadow: 0 calc(-2 * var(--top-height)) 0 0 var(--separator-color);
display: none;
}
.ui-resizable-e {
width: 0;
}
#nav-path {
position: fixed;
right: 0;
left: var(--side-nav-fixed-width);
left: calc(var(--side-nav-fixed-width) + 1px);
bottom: 0;
width: auto;
}
@@ -113,4 +92,14 @@ html {
left: var(--spacing-medium) !important;
right: auto;
}
#nav-sync {
bottom: 4px;
right: auto;
left: 300px;
width: 35px;
top: auto !important;
user-select: none;
position: fixed
}
}


@@ -1,29 +1,10 @@
/* SPDX-License-Identifier: MIT */
/**
Doxygen Awesome
https://github.com/jothepro/doxygen-awesome-css
MIT License
Copyright (c) 2021 - 2023 jothepro
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Copyright (c) 2021 - 2025 jothepro
*/
@@ -32,6 +13,9 @@ html {
--primary-color: #1779c4;
--primary-dark-color: #335c80;
--primary-light-color: #70b1e9;
--on-primary-color: #ffffff;
--link-color: var(--primary-color);
/* page base colors */
--page-background-color: #ffffff;
@@ -42,14 +26,15 @@ html {
--separator-color: #dedede;
/* border radius for all rounded components. Will affect many components, like dropdowns, memitems, codeblocks, ... */
--border-radius-large: 8px;
--border-radius-small: 4px;
--border-radius-medium: 6px;
--border-radius-large: 10px;
--border-radius-small: 5px;
--border-radius-medium: 8px;
/* default spacings. Most components reference these values for spacing, to provide uniform spacing on the page. */
--spacing-small: 5px;
--spacing-medium: 10px;
--spacing-large: 16px;
--spacing-xlarge: 20px;
/* default box shadow used for raising an element above the normal content. Used in dropdowns, search result, ... */
--box-shadow: 0 2px 8px 0 rgba(0,0,0,.075);
@@ -113,7 +98,7 @@ html {
*/
--menu-display: block;
--menu-focus-foreground: var(--page-background-color);
--menu-focus-foreground: var(--on-primary-color);
--menu-focus-background: var(--primary-color);
--menu-selected-background: rgba(0,0,0,.05);
@@ -310,10 +295,11 @@ body {
font-size: var(--page-font-size);
}
body, table, div, p, dl, #nav-tree .label, .title,
body, table, div, p, dl, #nav-tree .label, #nav-tree a, .title,
.sm-dox a, .sm-dox a:hover, .sm-dox a:focus, #projectname,
.SelectItem, #MSearchField, .navpath li.navelem a,
.navpath li.navelem a:hover, p.reference, p.definition, div.toc li, div.toc h3 {
.navpath li.navelem a:hover, p.reference, p.definition, div.toc li, div.toc h3,
#page-nav ul.page-outline li a {
font-family: var(--font-family);
}
@@ -332,8 +318,13 @@ p.reference, p.definition {
}
a:link, a:visited, a:hover, a:focus, a:active {
color: var(--primary-color) !important;
color: var(--link-color) !important;
font-weight: 500;
background: none;
}
a:hover {
text-decoration: underline;
}
a.anchor {
@@ -348,6 +339,8 @@ a.anchor {
#top {
background: var(--header-background);
border-bottom: 1px solid var(--separator-color);
position: relative;
z-index: 99;
}
@media screen and (min-width: 768px) {
@@ -362,6 +355,7 @@ a.anchor {
#main-nav {
flex-grow: 5;
padding: var(--spacing-small) var(--spacing-medium);
border-bottom: 0;
}
#titlearea {
@@ -441,19 +435,36 @@ a.anchor {
}
.sm-dox a span.sub-arrow {
border-color: var(--header-foreground) transparent transparent transparent;
top: 15px;
right: 10px;
box-sizing: content-box;
padding: 0;
margin: 0;
display: inline-block;
width: 5px;
height: 5px;
transform: rotate(45deg);
border-width: 0;
border-right: 2px solid var(--header-foreground);
border-bottom: 2px solid var(--header-foreground);
background: none;
}
.sm-dox a:hover span.sub-arrow {
border-color: var(--menu-focus-foreground) transparent transparent transparent;
border-color: var(--menu-focus-foreground);
background: none;
}
.sm-dox ul a span.sub-arrow {
border-color: transparent transparent transparent var(--page-foreground-color);
transform: rotate(-45deg);
border-width: 0;
border-right: 2px solid var(--header-foreground);
border-bottom: 2px solid var(--header-foreground);
}
.sm-dox ul a:hover span.sub-arrow {
border-color: transparent transparent transparent var(--menu-focus-foreground);
border-color: var(--menu-focus-foreground);
background: none;
}
}
@@ -480,7 +491,7 @@ a.anchor {
.sm-dox ul a {
color: var(--page-foreground-color) !important;
background: var(--page-background-color);
background: none;
font-size: var(--navigation-font-size);
}
@@ -552,6 +563,13 @@ a.anchor {
box-shadow: none;
display: block;
margin-top: 0;
margin-right: 0;
}
@media (min-width: 768px) {
.sm-dox li {
padding: 0;
}
}
/* until Doxygen 1.9.4 */
@@ -573,6 +591,17 @@ a.anchor {
padding-left: 0
}
/* Doxygen 1.14.0 */
.search-icon::before {
background: none;
top: 5px;
}
.search-icon::after {
background: none;
top: 12px;
}
.SelectionMark {
user-select: none;
}
@@ -776,12 +805,15 @@ html.dark-mode iframe#MSearchResults {
*/
#side-nav {
padding: 0 !important;
background: var(--side-nav-background);
min-width: 8px;
max-width: 50vw;
}
#nav-tree, #top {
border-right: 1px solid var(--separator-color);
}
@media screen and (max-width: 767px) {
#side-nav {
display: none;
@@ -790,34 +822,95 @@ html.dark-mode iframe#MSearchResults {
#doc-content {
margin-left: 0 !important;
}
#top {
border-right: none;
}
}
#nav-tree {
background: transparent;
margin-right: 1px;
background: var(--side-nav-background);
margin-right: -1px;
padding: 0;
}
#nav-tree .label {
font-size: var(--navigation-font-size);
line-height: var(--tree-item-height);
}
#nav-tree span.label a:hover {
background: none;
}
#nav-tree .item {
height: var(--tree-item-height);
line-height: var(--tree-item-height);
overflow: hidden;
text-overflow: ellipsis;
margin: 0;
padding: 0;
}
#nav-tree-contents {
margin: 0;
}
#main-menu > li:last-child {
height: auto;
}
#nav-tree .item > a:focus {
outline: none;
}
#nav-sync {
bottom: 12px;
right: 12px;
bottom: var(--spacing-medium);
right: var(--spacing-medium) !important;
top: auto !important;
user-select: none;
}
div.nav-sync-icon {
border: 1px solid var(--separator-color);
border-radius: var(--border-radius-medium);
background: var(--page-background-color);
width: 30px;
height: 20px;
}
div.nav-sync-icon:hover {
background: var(--page-background-color);
}
span.sync-icon-left, div.nav-sync-icon:hover span.sync-icon-left {
border-left: 2px solid var(--primary-color);
border-top: 2px solid var(--primary-color);
top: 5px;
left: 6px;
}
span.sync-icon-right, div.nav-sync-icon:hover span.sync-icon-right {
border-right: 2px solid var(--primary-color);
border-bottom: 2px solid var(--primary-color);
top: 5px;
left: initial;
right: 6px;
}
div.nav-sync-icon.active::after, div.nav-sync-icon.active:hover::after {
border-top: 2px solid var(--primary-color);
top: 9px;
left: 6px;
width: 19px;
}
#nav-tree .selected {
text-shadow: none;
background-image: none;
background-color: transparent;
position: relative;
color: var(--primary-color) !important;
font-weight: 500;
}
#nav-tree .selected::after {
@@ -843,9 +936,27 @@ html.dark-mode iframe#MSearchResults {
#nav-tree .arrow {
opacity: var(--side-nav-arrow-opacity);
background: none;
}
.arrow {
#nav-tree span.arrowhead {
margin: 0 0 1px 2px;
}
span.arrowhead {
border-color: var(--primary-light-color);
}
.selected span.arrowhead {
border-color: var(--primary-color);
}
#nav-tree-contents > ul > li:first-child > div > a {
opacity: 0;
pointer-events: none;
}
.contents .arrow {
color: inherit;
cursor: pointer;
font-size: 45%;
@@ -853,7 +964,7 @@ html.dark-mode iframe#MSearchResults {
margin-right: 2px;
font-family: serif;
height: auto;
text-align: right;
padding-bottom: 4px;
}
#nav-tree div.item:hover .arrow, #nav-tree a:focus .arrow {
@@ -867,9 +978,11 @@ html.dark-mode iframe#MSearchResults {
}
.ui-resizable-e {
width: 4px;
background: transparent;
box-shadow: inset -1px 0 0 0 var(--separator-color);
background: none;
}
.ui-resizable-e:hover {
background: var(--separator-color);
}
/*
@@ -878,7 +991,7 @@ html.dark-mode iframe#MSearchResults {
div.header {
border-bottom: 1px solid var(--separator-color);
background-color: var(--page-background-color);
background: none;
background-image: none;
}
@@ -917,7 +1030,7 @@ div.headertitle {
div.header .title {
font-weight: 600;
font-size: 225%;
padding: var(--spacing-medium) var(--spacing-large);
padding: var(--spacing-medium) var(--spacing-xlarge);
word-break: break-word;
}
@@ -934,9 +1047,10 @@ td.memSeparator {
span.mlabel {
background: var(--primary-color);
color: var(--on-primary-color);
border: none;
padding: 4px 9px;
border-radius: 12px;
border-radius: var(--border-radius-large);
margin-right: var(--spacing-medium);
}
@@ -945,7 +1059,7 @@ span.mlabel:last-of-type {
}
div.contents {
padding: 0 var(--spacing-large);
padding: 0 var(--spacing-xlarge);
}
div.contents p, div.contents li {
@@ -956,6 +1070,16 @@ div.contents div.dyncontent {
margin: var(--spacing-medium) 0;
}
@media screen and (max-width: 767px) {
div.contents {
padding: 0 var(--spacing-large);
}
div.header .title {
padding: var(--spacing-medium) var(--spacing-large);
}
}
@media (prefers-color-scheme: dark) {
html:not(.light-mode) div.contents div.dyncontent img,
html:not(.light-mode) div.contents center img,
@@ -979,7 +1103,7 @@ html.dark-mode div.contents .dotgraph iframe
filter: brightness(89%) hue-rotate(180deg) invert();
}
h2.groupheader {
td h2.groupheader, h2.groupheader {
border-bottom: 0px;
color: var(--page-foreground-color);
box-shadow:
@@ -1040,7 +1164,7 @@ blockquote::after {
blockquote p {
margin: var(--spacing-small) 0 var(--spacing-medium) 0;
}
.paramname {
.paramname, .paramname em {
font-weight: 600;
color: var(--primary-dark-color);
}
@@ -1090,7 +1214,7 @@ div.contents .toc {
border: 0;
border-left: 1px solid var(--separator-color);
border-radius: 0;
background-color: transparent;
background-color: var(--page-background-color);
box-shadow: none;
position: sticky;
top: var(--toc-sticky-top);
@@ -1198,24 +1322,115 @@ div.toc li a.aboveActive {
}
}
/*
Page Outline (Doxygen >= 1.14.0)
*/
#page-nav {
background: var(--page-background-color);
border-left: 1px solid var(--separator-color);
}
#page-nav #page-nav-resize-handle {
background: var(--separator-color);
}
#page-nav #page-nav-resize-handle::after {
border-left: 1px solid var(--primary-color);
border-right: 1px solid var(--primary-color);
}
#page-nav #page-nav-tree #page-nav-contents {
top: var(--spacing-large);
}
#page-nav ul.page-outline {
margin: 0;
padding: 0;
}
#page-nav ul.page-outline li a {
font-size: var(--toc-font-size) !important;
color: var(--page-secondary-foreground-color) !important;
display: inline-block;
line-height: calc(2 * var(--toc-font-size));
}
#page-nav ul.page-outline li a a.anchorlink {
display: none;
}
#page-nav ul.page-outline li.vis ~ * a {
color: var(--page-foreground-color) !important;
}
#page-nav ul.page-outline li.vis:not(.vis ~ .vis) a, #page-nav ul.page-outline li a:hover {
color: var(--primary-color) !important;
}
#page-nav ul.page-outline .vis {
background: var(--page-background-color);
position: relative;
}
#page-nav ul.page-outline .vis::after {
content: "";
position: absolute;
top: 0;
bottom: 0;
left: 0;
width: 4px;
background: var(--page-secondary-foreground-color);
}
#page-nav ul.page-outline .vis:not(.vis ~ .vis)::after {
top: 1px;
border-top-right-radius: var(--border-radius-small);
}
#page-nav ul.page-outline .vis:not(:has(~ .vis))::after {
bottom: 1px;
border-bottom-right-radius: var(--border-radius-small);
}
#page-nav ul.page-outline .arrow {
display: inline-block;
}
#page-nav ul.page-outline .arrow span {
display: none;
}
@media screen and (max-width: 767px) {
#container {
grid-template-columns: initial !important;
}
#page-nav {
display: none;
}
}
/*
Code & Fragments
*/
code, div.fragment, pre.fragment {
border-radius: var(--border-radius-small);
code, div.fragment, pre.fragment, span.tt {
border: 1px solid var(--separator-color);
overflow: hidden;
}
code {
code, span.tt {
display: inline;
background: var(--code-background);
color: var(--code-foreground);
padding: 2px 6px;
border-radius: var(--border-radius-small);
}
div.fragment, pre.fragment {
border-radius: var(--border-radius-medium);
margin: var(--spacing-medium) 0;
padding: calc(var(--spacing-large) - (var(--spacing-large) / 6)) var(--spacing-large);
background: var(--fragment-background);
@@ -1273,7 +1488,7 @@ div.fragment, pre.fragment {
}
}
code, code a, pre.fragment, div.fragment, div.fragment .line, div.fragment span, div.fragment .line a, div.fragment .line span {
code, code a, pre.fragment, div.fragment, div.fragment .line, div.fragment span, div.fragment .line a, div.fragment .line span, span.tt {
font-family: var(--font-family-monospace);
font-size: var(--code-font-size) !important;
}
@@ -1347,6 +1562,10 @@ div.line.glow {
dl warning, attention, note, deprecated, bug, ...
*/
dl {
line-height: calc(1.65 * var(--page-font-size));
}
dl.bug dt a, dl.deprecated dt a, dl.todo dt a {
font-weight: bold !important;
}
@@ -1512,6 +1731,7 @@ div.memitem {
border-top-right-radius: var(--border-radius-medium);
border-bottom-right-radius: var(--border-radius-medium);
border-bottom-left-radius: var(--border-radius-medium);
border-top-left-radius: 0;
overflow: hidden;
display: block !important;
}
@@ -1743,7 +1963,7 @@ table.fieldtable th {
color: var(--tablehead-foreground);
}
table.fieldtable td.fieldtype, .fieldtable td.fieldname, .fieldtable td.fielddoc, .fieldtable th {
table.fieldtable td.fieldtype, .fieldtable td.fieldname, .fieldtable td.fieldinit, .fieldtable td.fielddoc, .fieldtable th {
border-bottom: 1px solid var(--separator-color);
border-right: 1px solid var(--separator-color);
}
@@ -1778,8 +1998,10 @@ table.memberdecls tr[class^='memitem'] .memTemplParams {
white-space: normal;
}
table.memberdecls .memItemLeft,
table.memberdecls .memItemRight,
table.memberdecls tr.heading + tr[class^='memitem'] td.memItemLeft,
table.memberdecls tr.heading + tr[class^='memitem'] td.memItemRight,
table.memberdecls td.memItemLeft,
table.memberdecls td.memItemRight,
table.memberdecls .memTemplItemLeft,
table.memberdecls .memTemplItemRight,
table.memberdecls .memTemplParams {
@@ -1791,8 +2013,34 @@ table.memberdecls .memTemplParams {
background-color: var(--fragment-background);
}
@media screen and (min-width: 768px) {
tr.heading + tr[class^='memitem'] td.memItemRight, tr.groupHeader + tr[class^='memitem'] td.memItemRight, tr.inherit_header + tr[class^='memitem'] td.memItemRight {
border-top-right-radius: var(--border-radius-small);
}
table.memberdecls tr:last-child td.memItemRight, table.memberdecls tr:last-child td.mdescRight, table.memberdecls tr[class^='memitem']:has(+ tr.groupHeader) td.memItemRight, table.memberdecls tr[class^='memitem']:has(+ tr.inherit_header) td.memItemRight, table.memberdecls tr[class^='memdesc']:has(+ tr.groupHeader) td.mdescRight, table.memberdecls tr[class^='memdesc']:has(+ tr.inherit_header) td.mdescRight {
border-bottom-right-radius: var(--border-radius-small);
}
table.memberdecls tr:last-child td.memItemLeft, table.memberdecls tr:last-child td.mdescLeft, table.memberdecls tr[class^='memitem']:has(+ tr.groupHeader) td.memItemLeft, table.memberdecls tr[class^='memitem']:has(+ tr.inherit_header) td.memItemLeft, table.memberdecls tr[class^='memdesc']:has(+ tr.groupHeader) td.mdescLeft, table.memberdecls tr[class^='memdesc']:has(+ tr.inherit_header) td.mdescLeft {
border-bottom-left-radius: var(--border-radius-small);
}
tr.heading + tr[class^='memitem'] td.memItemLeft, tr.groupHeader + tr[class^='memitem'] td.memItemLeft, tr.inherit_header + tr[class^='memitem'] td.memItemLeft {
border-top-left-radius: var(--border-radius-small);
}
}
table.memname td.memname {
font-size: var(--memname-font-size);
}
table.memberdecls .memTemplItemLeft,
table.memberdecls .memTemplItemRight {
table.memberdecls .template .memItemLeft,
table.memberdecls .memTemplItemRight,
table.memberdecls .template .memItemRight {
padding-top: 2px;
}
@@ -1804,13 +2052,13 @@ table.memberdecls .memTemplParams {
padding-bottom: var(--spacing-small);
}
table.memberdecls .memTemplItemLeft {
table.memberdecls .memTemplItemLeft, table.memberdecls .template .memItemLeft {
border-radius: 0 0 0 var(--border-radius-small);
border-left: 1px solid var(--separator-color);
border-top: 0;
}
table.memberdecls .memTemplItemRight {
table.memberdecls .memTemplItemRight, table.memberdecls .template .memItemRight {
border-radius: 0 0 var(--border-radius-small) 0;
border-right: 1px solid var(--separator-color);
padding-left: 0;
@@ -1836,8 +2084,14 @@ table.memberdecls .mdescLeft, table.memberdecls .mdescRight {
background: none;
color: var(--page-foreground-color);
padding: var(--spacing-small) 0;
border: 0;
}
table.memberdecls [class^="memdesc"] {
box-shadow: none;
}
table.memberdecls .memItemLeft,
table.memberdecls .memTemplItemLeft {
padding-right: var(--spacing-medium);
@@ -1860,6 +2114,10 @@ table.memberdecls .inherit_header td {
color: var(--page-secondary-foreground-color);
}
table.memberdecls span.dynarrow {
left: 10px;
}
table.memberdecls img[src="closed.png"],
table.memberdecls img[src="open.png"],
div.dynheader img[src="open.png"],
@@ -1876,6 +2134,10 @@ div.dynheader img[src="closed.png"] {
transition: transform var(--animation-duration) ease-out;
}
tr.heading + tr[class^='memitem'] td.memItemLeft, tr.groupHeader + tr[class^='memitem'] td.memItemLeft, tr.inherit_header + tr[class^='memitem'] td.memItemLeft, tr.heading + tr[class^='memitem'] td.memItemRight, tr.groupHeader + tr[class^='memitem'] td.memItemRight, tr.inherit_header + tr[class^='memitem'] td.memItemRight {
border-top: 1px solid var(--separator-color);
}
table.memberdecls img {
margin-right: 10px;
}
@@ -1900,7 +2162,10 @@ div.dynheader img[src="closed.png"] {
table.memberdecls .mdescRight,
table.memberdecls .memTemplItemLeft,
table.memberdecls .memTemplItemRight,
table.memberdecls .memTemplParams {
table.memberdecls .memTemplParams,
table.memberdecls .template .memItemLeft,
table.memberdecls .template .memItemRight,
table.memberdecls .template .memParams {
display: block;
text-align: left;
padding-left: var(--spacing-large);
@@ -1913,12 +2178,14 @@ div.dynheader img[src="closed.png"] {
table.memberdecls .memItemLeft,
table.memberdecls .mdescLeft,
table.memberdecls .memTemplItemLeft {
border-bottom: 0;
padding-bottom: 0;
table.memberdecls .memTemplItemLeft,
table.memberdecls .template .memItemLeft {
border-bottom: 0 !important;
padding-bottom: 0 !important;
}
table.memberdecls .memTemplItemLeft {
table.memberdecls .memTemplItemLeft,
table.memberdecls .template .memItemLeft {
padding-top: 0;
}
@@ -1928,10 +2195,12 @@ div.dynheader img[src="closed.png"] {
table.memberdecls .memItemRight,
table.memberdecls .mdescRight,
table.memberdecls .memTemplItemRight {
border-top: 0;
padding-top: 0;
table.memberdecls .memTemplItemRight,
table.memberdecls .template .memItemRight {
border-top: 0 !important;
padding-top: 0 !important;
padding-right: var(--spacing-large);
padding-bottom: var(--spacing-medium);
overflow-x: auto;
}
@@ -1966,6 +2235,22 @@ div.dynheader img[src="closed.png"] {
max-height: 200px;
}
}
tr.heading + tr[class^='memitem'] td.memItemRight, tr.groupHeader + tr[class^='memitem'] td.memItemRight, tr.inherit_header + tr[class^='memitem'] td.memItemRight {
border-top-right-radius: 0;
}
table.memberdecls tr:last-child td.memItemRight, table.memberdecls tr:last-child td.mdescRight, table.memberdecls tr[class^='memitem']:has(+ tr.groupHeader) td.memItemRight, table.memberdecls tr[class^='memitem']:has(+ tr.inherit_header) td.memItemRight, table.memberdecls tr[class^='memdesc']:has(+ tr.groupHeader) td.mdescRight, table.memberdecls tr[class^='memdesc']:has(+ tr.inherit_header) td.mdescRight {
border-bottom-right-radius: 0;
}
table.memberdecls tr:last-child td.memItemLeft, table.memberdecls tr:last-child td.mdescLeft, table.memberdecls tr[class^='memitem']:has(+ tr.groupHeader) td.memItemLeft, table.memberdecls tr[class^='memitem']:has(+ tr.inherit_header) td.memItemLeft, table.memberdecls tr[class^='memdesc']:has(+ tr.groupHeader) td.mdescLeft, table.memberdecls tr[class^='memdesc']:has(+ tr.inherit_header) td.mdescLeft {
border-bottom-left-radius: 0;
}
tr.heading + tr[class^='memitem'] td.memItemLeft, tr.groupHeader + tr[class^='memitem'] td.memItemLeft, tr.inherit_header + tr[class^='memitem'] td.memItemLeft {
border-top-left-radius: 0;
}
}
@@ -1982,14 +2267,16 @@ hr {
}
.contents hr {
box-shadow: 100px 0 0 var(--separator-color),
-100px 0 0 var(--separator-color),
500px 0 0 var(--separator-color),
-500px 0 0 var(--separator-color),
1500px 0 0 var(--separator-color),
-1500px 0 0 var(--separator-color),
2000px 0 0 var(--separator-color),
-2000px 0 0 var(--separator-color);
box-shadow: 100px 0 var(--separator-color),
-100px 0 var(--separator-color),
500px 0 var(--separator-color),
-500px 0 var(--separator-color),
900px 0 var(--separator-color),
-900px 0 var(--separator-color),
1400px 0 var(--separator-color),
-1400px 0 var(--separator-color),
1900px 0 var(--separator-color),
-1900px 0 var(--separator-color);
}
.contents img, .contents .center, .contents center, .contents div.image object {
@@ -2152,9 +2439,7 @@ div.qindex {
background: var(--page-background-color);
border: none;
border-top: 1px solid var(--separator-color);
border-bottom: 1px solid var(--separator-color);
border-bottom: 0;
box-shadow: 0 0.75px 0 var(--separator-color);
font-size: var(--navigation-font-size);
}
@@ -2183,6 +2468,10 @@ address.footer {
color: var(--primary-color) !important;
}
.navpath li.navelem a:hover {
text-shadow: none;
}
.navpath li.navelem b {
color: var(--primary-dark-color);
font-weight: 500;
@@ -2201,7 +2490,11 @@ li.navelem:first-child:before {
display: none;
}
#nav-path li.navelem:after {
#nav-path ul {
padding-left: 0;
}
#nav-path li.navelem:has(.el):after {
content: '';
border: 5px solid var(--page-background-color);
border-bottom-color: transparent;
@@ -2212,7 +2505,21 @@ li.navelem:first-child:before {
margin-left: 6px;
}
#nav-path li.navelem:before {
#nav-path li.navelem:not(:has(.el)):after {
background: var(--page-background-color);
box-shadow: 1px -1px 0 1px var(--separator-color);
border-radius: 0 var(--border-radius-medium) 0 50px;
}
#nav-path li.navelem:not(:has(.el)) {
margin-left: 0;
}
#nav-path li.navelem:not(:has(.el)):hover, #nav-path li.navelem:not(:has(.el)):hover:after {
background-color: var(--separator-color);
}
#nav-path li.navelem:has(.el):before {
content: '';
border: 5px solid var(--separator-color);
border-bottom-color: transparent;
@@ -2338,7 +2645,7 @@ doxygen-awesome-dark-mode-toggle {
height: var(--searchbar-height);
background: none;
border: none;
border-radius: var(--searchbar-height);
border-radius: var(--searchbar-border-radius);
vertical-align: middle;
text-align: center;
line-height: var(--searchbar-height);
@@ -2523,6 +2830,7 @@ h2:hover a.anchorlink, h1:hover a.anchorlink, h3:hover a.anchorlink, h4:hover a.
float: left;
white-space: nowrap;
font-weight: normal;
font-family: var(--font-family);
padding: calc(var(--spacing-large) / 2) var(--spacing-large);
border-radius: var(--border-radius-medium);
transition: background-color var(--animation-duration) ease-in-out, font-weight var(--animation-duration) ease-in-out;
@@ -2667,3 +2975,46 @@ h2:hover a.anchorlink, h1:hover a.anchorlink, h3:hover a.anchorlink, h4:hover a.
border-radius: 0 var(--border-radius-medium) var(--border-radius-medium) 0;
}
}
/*
Bordered image
*/
html.dark-mode .darkmode_inverted_image img, /* < doxygen 1.9.3 */
html.dark-mode .darkmode_inverted_image object[type="image/svg+xml"] /* doxygen 1.9.3 */ {
filter: brightness(89%) hue-rotate(180deg) invert();
}
.bordered_image {
border-radius: var(--border-radius-small);
border: 1px solid var(--separator-color);
display: inline-block;
overflow: hidden;
}
.bordered_image:empty {
border: none;
}
html.dark-mode .bordered_image img, /* < doxygen 1.9.3 */
html.dark-mode .bordered_image object[type="image/svg+xml"] /* doxygen 1.9.3 */ {
border-radius: var(--border-radius-small);
}
/*
Button
*/
.primary-button {
display: inline-block;
cursor: pointer;
background: var(--primary-color);
color: var(--page-background-color) !important;
border-radius: var(--border-radius-medium);
padding: var(--spacing-small) var(--spacing-medium);
text-decoration: none;
}
.primary-button:hover {
background: var(--primary-dark-color);
}

View File

@@ -0,0 +1,129 @@
.github-corner svg {
fill: var(--primary-light-color);
color: var(--page-background-color);
width: 72px;
height: 72px;
}
@media screen and (max-width: 767px) {
.github-corner svg {
width: 50px;
height: 50px;
}
#projectnumber {
margin-right: 22px;
}
}
.title_screenshot {
filter: drop-shadow(0px 3px 10px rgba(0,0,0,0.22));
max-width: 500px;
margin: var(--spacing-large) 0;
}
.title_screenshot .caption {
display: none;
}
#theme-selection {
position: fixed;
bottom: 0;
left: 0;
background: var(--side-nav-background);
padding: 5px 2px 5px 8px;
box-shadow: 0 -4px 4px -2px var(--side-nav-background);
display: flex;
}
#theme-selection label {
border: 1px solid var(--separator-color);
border-right: 0;
color: var(--page-foreground-color);
font-size: var(--toc-font-size);
padding: 0 8px;
display: inline-block;
height: 22px;
box-sizing: border-box;
border-radius: var(--border-radius-medium) 0 0 var(--border-radius-medium);
line-height: 20px;
background: var(--page-background-color);
opacity: 0.7;
}
@media (prefers-color-scheme: dark) {
html:not(.light-mode) #theme-select {
background: url("data:image/svg+xml;utf8,<svg xmlns='http://www.w3.org/2000/svg' width='100' height='100' fill='%23aaaaaa'><polygon points='0,0 100,0 50,50'/></svg>") no-repeat;
background-size: 8px;
background-position: calc(100% - 6px) 65%;
background-color: var(--page-background-color);
}
}
html.dark-mode #theme-select {
background: url("data:image/svg+xml;utf8,<svg xmlns='http://www.w3.org/2000/svg' width='100' height='100' fill='%23aaaaaa'><polygon points='0,0 100,0 50,50'/></svg>") no-repeat;
background-size: 8px;
background-position: calc(100% - 6px) 65%;
background-color: var(--page-background-color);
}
#theme-select {
border: 1px solid var(--separator-color);
border-radius: 0 var(--border-radius-medium) var(--border-radius-medium) 0;
padding: 0;
height: 22px;
font-size: var(--toc-font-size);
font-family: var(--font-family);
width: 215px;
color: var(--primary-color);
border-left: 0;
display: inline-block;
opacity: 0.7;
outline: none;
-webkit-appearance: none;
-moz-appearance: none;
appearance: none;
background: url("data:image/svg+xml;utf8,<svg xmlns='http://www.w3.org/2000/svg' width='100' height='100' fill='%23888888'><polygon points='0,0 100,0 50,50'/></svg>") no-repeat;
background-size: 8px;
background-position: calc(100% - 6px) 65%;
background-repeat: no-repeat;
background-color: var(--page-background-color);
}
#theme-selection:hover #theme-select, #theme-selection:hover label {
opacity: 1;
}
#nav-tree-contents {
margin-bottom: 30px;
}
@media screen and (max-width: 767px) {
#theme-selection {
box-shadow: none;
background: none;
height: 20px;
}
#theme-select {
width: 80px;
opacity: 1;
}
#theme-selection label {
opacity: 1;
}
#nav-path ul li.navelem:first-child {
margin-left: 160px;
}
ul li.footer:not(:first-child) {
display: none;
}
#nav-path {
position: fixed;
bottom: 0;
background: var(--page-background-color);
}
}

View File

@@ -0,0 +1,98 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen $doxygenversion"/>
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<!-- BEGIN opengraph metadata -->
<meta property="og:title" content="Doxygen Awesome" />
<meta property="og:image" content="https://repository-images.githubusercontent.com/348492097/4f16df80-88fb-11eb-9d31-4015ff22c452" />
<meta property="og:description" content="Custom CSS theme for doxygen html-documentation with lots of customization parameters." />
<meta property="og:url" content="https://jothepro.github.io/doxygen-awesome-css/" />
<!-- END opengraph metadata -->
<!-- BEGIN twitter metadata -->
<meta name="twitter:image:src" content="https://repository-images.githubusercontent.com/348492097/4f16df80-88fb-11eb-9d31-4015ff22c452" />
<meta name="twitter:title" content="Doxygen Awesome" />
<meta name="twitter:description" content="Custom CSS theme for doxygen html-documentation with lots of customization parameters." />
<!-- END twitter metadata -->
<!--BEGIN PROJECT_NAME--><title>$projectname: $title</title><!--END PROJECT_NAME-->
<!--BEGIN !PROJECT_NAME--><title>$title</title><!--END !PROJECT_NAME-->
<link href="$relpath^tabs.css" rel="stylesheet" type="text/css"/>
<link rel="icon" type="image/svg+xml" href="logo.drawio.svg"/>
<script type="text/javascript" src="$relpath^jquery.js"></script>
<script type="text/javascript" src="$relpath^dynsections.js"></script>
<script type="text/javascript" src="$relpath^doxygen-awesome-darkmode-toggle.js"></script>
<script type="text/javascript" src="$relpath^doxygen-awesome-fragment-copy-button.js"></script>
<script type="text/javascript" src="$relpath^doxygen-awesome-paragraph-link.js"></script>
<script type="text/javascript" src="$relpath^doxygen-awesome-interactive-toc.js"></script>
<script type="text/javascript" src="$relpath^doxygen-awesome-tabs.js"></script>
<script type="text/javascript" src="$relpath^toggle-alternative-theme.js"></script>
<script type="text/javascript">
DoxygenAwesomeFragmentCopyButton.init()
DoxygenAwesomeDarkModeToggle.init()
DoxygenAwesomeParagraphLink.init()
DoxygenAwesomeInteractiveToc.init()
DoxygenAwesomeTabs.init()
</script>
$treeview
$search
$mathjax
<link href="$relpath^$stylesheet" rel="stylesheet" type="text/css" />
$extrastylesheet
</head>
<body>
<!-- https://tholman.com/github-corners/ -->
<a href="https://github.com/jothepro/doxygen-awesome-css" class="github-corner" title="View source on GitHub" target="_blank" rel="noopener noreferrer">
<svg viewBox="0 0 250 250" width="40" height="40" style="position: absolute; top: 0; border: 0; right: 0; z-index: 99;" aria-hidden="true">
<path d="M0,0 L115,115 L130,115 L142,142 L250,250 L250,0 Z"></path><path d="M128.3,109.0 C113.8,99.7 119.0,89.6 119.0,89.6 C122.0,82.7 120.5,78.6 120.5,78.6 C119.2,72.0 123.4,76.3 123.4,76.3 C127.3,80.9 125.5,87.3 125.5,87.3 C122.9,97.6 130.6,101.9 134.4,103.2" fill="currentColor" style="transform-origin: 130px 106px;" class="octo-arm"></path><path d="M115.0,115.0 C114.9,115.1 118.7,116.5 119.8,115.4 L133.7,101.6 C136.9,99.2 139.9,98.4 142.2,98.6 C133.8,88.0 127.5,74.4 143.8,58.0 C148.5,53.4 154.0,51.2 159.7,51.0 C160.3,49.4 163.2,43.6 171.4,40.1 C171.4,40.1 176.1,42.5 178.8,56.2 C183.1,58.6 187.2,61.8 190.9,65.4 C194.5,69.0 197.7,73.2 200.1,77.6 C213.8,80.2 216.3,84.9 216.3,84.9 C212.7,93.1 206.9,96.0 205.4,96.6 C205.1,102.4 203.0,107.8 198.3,112.5 C181.9,128.9 168.3,122.5 157.7,114.1 C157.9,116.9 156.7,120.9 152.7,124.9 L141.0,136.5 C139.8,137.7 141.6,141.9 141.8,141.8 Z" fill="currentColor" class="octo-body"></path></svg></a><style>.github-corner:hover .octo-arm{animation:octocat-wave 560ms ease-in-out}@keyframes octocat-wave{0%,100%{transform:rotate(0)}20%,60%{transform:rotate(-25deg)}40%,80%{transform:rotate(10deg)}}@media (max-width:500px){.github-corner:hover .octo-arm{animation:none}.github-corner .octo-arm{animation:octocat-wave 560ms ease-in-out}}</style>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<!--BEGIN TITLEAREA-->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
<tbody>
<tr style="height: 56px;">
<!--BEGIN PROJECT_LOGO-->
<td id="projectlogo"><img alt="Logo" src="$relpath^$projectlogo"/></td>
<!--END PROJECT_LOGO-->
<!--BEGIN PROJECT_NAME-->
<td id="projectalign" style="padding-left: 0.5em;">
<div id="projectname">$projectname
<!--BEGIN PROJECT_NUMBER-->&#160;<span id="projectnumber">$projectnumber</span><!--END PROJECT_NUMBER-->
</div>
<!--BEGIN PROJECT_BRIEF--><div id="projectbrief">$projectbrief</div><!--END PROJECT_BRIEF-->
</td>
<!--END PROJECT_NAME-->
<!--BEGIN !PROJECT_NAME-->
<!--BEGIN PROJECT_BRIEF-->
<td style="padding-left: 0.5em;">
<div id="projectbrief">$projectbrief</div>
</td>
<!--END PROJECT_BRIEF-->
<!--END !PROJECT_NAME-->
<!--BEGIN DISABLE_INDEX-->
<!--BEGIN SEARCHENGINE-->
<td>$searchbox</td>
<!--END SEARCHENGINE-->
<!--END DISABLE_INDEX-->
</tr>
</tbody>
</table>
<div id="theme-selection">
<label for="theme-select">Theme:</label>
<select id="theme-select">
<option value="theme-default">Default</option>
<option value="theme-round">Round</option>
<option value="theme-robot">Robot</option>
</select>
</div>
</div>
<!--END TITLEAREA-->
<!-- end header part -->

View File

@@ -0,0 +1,62 @@
html.theme-robot {
/* primary theme color. This will affect the entire websites color scheme: links, arrows, labels, ... */
--primary-color: #1c89a4;
--primary-dark-color: #1a6f84;
--primary-light-color: #5abcd4;
--primary-lighter-color: #cae1f1;
--primary-lightest-color: #e9f1f8;
--fragment-background: #ececec;
--code-background: #ececec;
/* page base colors */
--page-background-color: white;
--page-foreground-color: #2c3e50;
--page-secondary-foreground-color: #67727e;
--border-radius-large: 0px;
--border-radius-small: 0px;
--border-radius-medium: 0px;
--spacing-small: 3px;
--spacing-medium: 6px;
--spacing-large: 12px;
--top-height: 125px;
--side-nav-background: var(--page-background-color);
--side-nav-foreground: var(--page-foreground-color);
--header-foreground: var(--side-nav-foreground);
--searchbar-border-radius: var(--border-radius-medium);
--header-background: var(--side-nav-background);
--header-foreground: var(--side-nav-foreground);
--toc-background: rgb(243, 240, 252);
--toc-foreground: var(--page-foreground-color);
--font-family: ui-monospace,SFMono-Regular,SF Mono,Menlo,Consolas,Liberation Mono,monospace;
--page-font-size: 14px;
--box-shadow: none;
--separator-color: #cdcdcd;
}
html.theme-robot.dark-mode {
color-scheme: dark;
--primary-color: #49cad3;
--primary-dark-color: #8ed2d7;
--primary-light-color: #377479;
--primary-lighter-color: #191e21;
--primary-lightest-color: #191a1c;
--fragment-background: #000000;
--code-background: #000000;
--page-background-color: #161616;
--page-foreground-color: #d2dbde;
--page-secondary-foreground-color: #555555;
--separator-color: #545454;
--toc-background: #20142C;
}

View File

@@ -0,0 +1,55 @@
html.theme-round {
/* primary theme color. This will affect the entire websites color scheme: links, arrows, labels, ... */
--primary-color: #AF7FE4;
--primary-dark-color: #9270E4;
--primary-light-color: #d2b7ef;
--primary-lighter-color: #cae1f1;
--primary-lightest-color: #e9f1f8;
/* page base colors */
--page-background-color: white;
--page-foreground-color: #2c3e50;
--page-secondary-foreground-color: #67727e;
--border-radius-large: 22px;
--border-radius-small: 9px;
--border-radius-medium: 14px;
--spacing-small: 8px;
--spacing-medium: 14px;
--spacing-large: 19px;
--spacing-xlarge: 21px;
--top-height: 125px;
--side-nav-background: #324067;
--side-nav-foreground: #F1FDFF;
--header-foreground: var(--side-nav-foreground);
--searchbar-background: var(--side-nav-foreground);
--searchbar-border-radius: var(--border-radius-medium);
--header-background: var(--side-nav-background);
--header-foreground: var(--side-nav-foreground);
--toc-background: rgb(243, 240, 252);
--toc-foreground: var(--page-foreground-color);
}
html.theme-round.dark-mode {
color-scheme: dark;
--primary-color: #AF7FE4;
--primary-dark-color: #715292;
--primary-light-color: #ae97c7;
--primary-lighter-color: #191e21;
--primary-lightest-color: #191a1c;
--page-background-color: #1C1D1F;
--page-foreground-color: #d2dbde;
--page-secondary-foreground-color: #859399;
--separator-color: #3a3246;
--side-nav-background: #171D32;
--side-nav-foreground: #F1FDFF;
--toc-background: #20142C;
--searchbar-background: var(--page-background-color);
}

View File

@@ -0,0 +1,52 @@
// Toggle between three theme states and persist the choice in localStorage
const THEME_CLASSES = ['theme-default', 'theme-round', 'theme-robot'];
// Allow switching via a button/function (e.g. for onclick in HTML)
function toggleThemeVariant() {
let idx = getCurrentThemeIndex();
idx = (idx + 1) % THEME_CLASSES.length;
applyThemeClass(idx);
}
// Make the function globally available
window.toggleThemeVariant = toggleThemeVariant;
function getCurrentThemeIndex() {
const stored = localStorage.getItem('theme-variant');
if (stored === null) return 0;
const idx = THEME_CLASSES.indexOf(stored);
return idx === -1 ? 0 : idx;
}
function applyThemeClass(idx) {
document.documentElement.classList.remove(...THEME_CLASSES);
if (THEME_CLASSES[idx] && THEME_CLASSES[idx] !== 'theme-default') {
document.documentElement.classList.add(THEME_CLASSES[idx]);
}
localStorage.setItem('theme-variant', THEME_CLASSES[idx] || 'theme-default');
// Sync the select element, if present
const select = document.getElementById('theme-select');
if (select) select.value = THEME_CLASSES[idx];
}
function setThemeByName(themeName) {
const idx = THEME_CLASSES.indexOf(themeName);
applyThemeClass(idx === -1 ? 0 : idx);
}
document.addEventListener('DOMContentLoaded', () => {
const select = document.getElementById('theme-select');
if (select) {
// Initialize the selection from localStorage
const idx = getCurrentThemeIndex();
select.value = THEME_CLASSES[idx];
applyThemeClass(idx);
// Change the theme when the selection changes
select.addEventListener('change', e => {
setThemeByName(e.target.value);
});
} else {
// Fallback: apply the theme anyway
applyThemeClass(getCurrentThemeIndex());
}
});

View File

@@ -1,82 +0,0 @@
<!-- HTML header for doxygen 1.9.7-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "https://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="$langISO">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=11"/>
<meta name="generator" content="Doxygen $doxygenversion"/>
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<!--BEGIN PROJECT_NAME--><title>$projectname: $title</title><!--END PROJECT_NAME-->
<!--BEGIN !PROJECT_NAME--><title>$title</title><!--END !PROJECT_NAME-->
<link href="$relpath^tabs.css" rel="stylesheet" type="text/css"/>
<!--BEGIN DISABLE_INDEX-->
<!--BEGIN FULL_SIDEBAR-->
<script type="text/javascript">var page_layout=1;</script>
<!--END FULL_SIDEBAR-->
<!--END DISABLE_INDEX-->
<script type="text/javascript" src="$relpath^jquery.js"></script>
<script type="text/javascript" src="$relpath^dynsections.js"></script>
$treeview
$search
$mathjax
$darkmode
<link href="$relpath^$stylesheet" rel="stylesheet" type="text/css" />
$extrastylesheet
<script type="text/javascript" src="$relpath^doxygen-awesome-darkmode-toggle.js"></script>
<script type="text/javascript">
DoxygenAwesomeDarkModeToggle.init()
</script>
<script type="text/javascript" src="$relpath^doxygen-awesome-interactive-toc.js"></script>
<script type="text/javascript">
DoxygenAwesomeInteractiveToc.init()
</script>
</head>
<body>
<!--BEGIN DISABLE_INDEX-->
<!--BEGIN FULL_SIDEBAR-->
<div id="side-nav" class="ui-resizable side-nav-resizable"><!-- do not remove this div, it is closed by doxygen! -->
<!--END FULL_SIDEBAR-->
<!--END DISABLE_INDEX-->
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<!--BEGIN TITLEAREA-->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
<tbody>
<tr id="projectrow">
<!--BEGIN PROJECT_LOGO-->
<td id="projectlogo"><img alt="Logo" src="$relpath^$projectlogo"/></td>
<!--END PROJECT_LOGO-->
<!--BEGIN PROJECT_NAME-->
<td id="projectalign">
<div id="projectname">$projectname<!--BEGIN PROJECT_NUMBER--><span id="projectnumber">&#160;$projectnumber</span><!--END PROJECT_NUMBER-->
</div>
<!--BEGIN PROJECT_BRIEF--><div id="projectbrief">$projectbrief</div><!--END PROJECT_BRIEF-->
</td>
<!--END PROJECT_NAME-->
<!--BEGIN !PROJECT_NAME-->
<!--BEGIN PROJECT_BRIEF-->
<td>
<div id="projectbrief">$projectbrief</div>
</td>
<!--END PROJECT_BRIEF-->
<!--END !PROJECT_NAME-->
<!--BEGIN DISABLE_INDEX-->
<!--BEGIN SEARCHENGINE-->
<!--BEGIN !FULL_SIDEBAR-->
<td>$searchbox</td>
<!--END !FULL_SIDEBAR-->
<!--END SEARCHENGINE-->
<!--END DISABLE_INDEX-->
</tr>
<!--BEGIN SEARCHENGINE-->
<!--BEGIN FULL_SIDEBAR-->
<tr><td colspan="2">$searchbox</td></tr>
<!--END FULL_SIDEBAR-->
<!--END SEARCHENGINE-->
</tbody>
</table>
</div>
<!--END TITLEAREA-->
<!-- end header part -->

View File

@@ -0,0 +1,3 @@
.github-corner {
display: none !important;
}

View File

@@ -77,7 +77,7 @@ It's possible to configure `minimum`, `maximum` and `default` version like so:
All of the above are optional.
Clio will fall back to hardcoded defaults when these values are not specified in the config file, or if the configured values are outside of the minimum and maximum supported versions hardcoded in [src/rpc/common/APIVersion.h](../src/rpc/common/APIVersion.hpp).
Clio will fall back to hardcoded defaults when these values are not specified in the config file, or if the configured values are outside of the minimum and maximum supported versions hardcoded in [src/rpc/common/APIVersion.hpp](../src/rpc/common/APIVersion.hpp).
> [!TIP]
> See the [example-config.json](../docs/examples/config/example-config.json) for more details.
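To make the fallback concrete, here is a minimal sketch of the clamping logic described above; the names below (`kHardMin`, `kHardMax`, `kHardDefault`, `effectiveVersion`) are illustrative stand-ins, not the actual identifiers from APIVersion.hpp.
```cpp
#include <cstdint>
#include <optional>

// Illustrative bounds standing in for the hardcoded values in APIVersion.hpp.
constexpr std::uint32_t kHardMin = 1;
constexpr std::uint32_t kHardMax = 3;
constexpr std::uint32_t kHardDefault = 1;

// Use a configured value only when it is present and inside the hardcoded
// [min, max] range; otherwise fall back to the hardcoded default.
constexpr std::uint32_t
effectiveVersion(std::optional<std::uint32_t> configured)
{
    if (configured && *configured >= kHardMin && *configured <= kHardMax)
        return *configured;
    return kHardDefault;
}

static_assert(effectiveVersion(std::nullopt) == kHardDefault); // not specified
static_assert(effectiveVersion(42) == kHardDefault);           // out of range
static_assert(effectiveVersion(2) == 2);                       // accepted
```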

View File

@@ -36,19 +36,19 @@ EOF
exit 0
fi
# Check version of doxygen is at least 1.12
# Check version of doxygen is at least 1.14
version=$($DOXYGEN --version | grep -o '[0-9\.]*')
if [[ "1.12.0" > "$version" ]]; then
if [[ "1.14.0" > "$version" ]]; then
# No hard error if doxygen version is not the one we want - let CI deal with it
cat <<EOF
ERROR
-----------------------------------------------------------------------------
A minimum of version 1.12 of `which doxygen` is required.
Your version is $version. Please upgrade it for next time.
A minimum of version 1.14 of `which doxygen` is required.
Your version is $version. Please upgrade it.
Your changes may fail to pass CI once pushed.
Your changes may fail CI checks.
-----------------------------------------------------------------------------
EOF

View File

@@ -2,7 +2,6 @@ add_subdirectory(util)
add_subdirectory(data)
add_subdirectory(cluster)
add_subdirectory(etl)
add_subdirectory(etlng)
add_subdirectory(feed)
add_subdirectory(rpc)
add_subdirectory(web)

20
src/README.md Normal file
View File

@@ -0,0 +1,20 @@
# Clio API server
## Introduction
Clio is an XRP Ledger API server optimized for RPC calls over WebSocket or JSON-RPC.
It stores validated historical ledger and transaction data in a more space-efficient format, using up to four times
less space than [rippled](https://github.com/XRPLF/rippled).
Clio can be configured to store data in [Apache Cassandra](https://cassandra.apache.org/_/index.html) or
[ScyllaDB](https://www.scylladb.com/), enabling scalable read throughput. Multiple Clio nodes can share
access to the same dataset, which allows for a highly available cluster of Clio nodes without the need for redundant
data storage or computation.
## Develop
As you prepare to develop code for Clio, please be sure you are aware of our current
[Contribution guidelines](https://github.com/XRPLF/clio/blob/develop/CONTRIBUTING.md).
Read the @ref "rpc" section carefully to learn more about writing your own handlers for Clio.

View File

@@ -5,10 +5,9 @@ target_link_libraries(
clio_app
PUBLIC clio_cluster
clio_etl
clio_etlng
clio_feed
clio_web
clio_rpc
clio_migration
clio_rpc
clio_web
PRIVATE Boost::program_options
)

View File

@@ -28,8 +28,6 @@
#include "etl/ETLService.hpp"
#include "etl/LoadBalancer.hpp"
#include "etl/NetworkValidatedLedgers.hpp"
#include "etlng/LoadBalancer.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "feed/SubscriptionManager.hpp"
#include "migration/MigrationInspectorFactory.hpp"
#include "rpc/Counters.hpp"
@@ -103,7 +101,7 @@ ClioApplication::run(bool const useNgWebServer)
// This is not the only io context in the application.
boost::asio::io_context ioc{threads};
// Similarly we need a context to run ETLng on
// Similarly we need a context to run ETL on
// In the future we can remove the raw ioc and use ctx instead
util::async::CoroExecutionContext ctx{threads};
@@ -142,20 +140,12 @@ ClioApplication::run(bool const useNgWebServer)
// ETL uses the balancer to extract data.
// The server uses the balancer to forward RPCs to a rippled node.
// The balancer itself publishes to streams (transactions_proposed and accounts_proposed)
auto balancer = [&] -> std::shared_ptr<etlng::LoadBalancerInterface> {
if (config_.get<bool>("__ng_etl")) {
return etlng::LoadBalancer::makeLoadBalancer(
config_, ioc, backend, subscriptions, std::make_unique<util::MTRandomGenerator>(), ledgers
);
}
return etl::LoadBalancer::makeLoadBalancer(
config_, ioc, backend, subscriptions, std::make_unique<util::MTRandomGenerator>(), ledgers
);
}();
auto balancer = etl::LoadBalancer::makeLoadBalancer(
config_, ioc, backend, subscriptions, std::make_unique<util::MTRandomGenerator>(), ledgers
);
// ETL is responsible for writing and publishing to streams. In read-only mode, ETL only publishes
auto etl = etl::ETLService::makeETLService(config_, ioc, ctx, backend, subscriptions, balancer, ledgers);
auto etl = etl::ETLService::makeETLService(config_, ctx, backend, subscriptions, balancer, ledgers);
auto workQueue = rpc::WorkQueue::makeWorkQueue(config_);
auto counters = rpc::Counters::makeCounters(workQueue);
@@ -189,6 +179,7 @@ ClioApplication::run(bool const useNgWebServer)
httpServer->onGet("/metrics", MetricsHandler{adminVerifier});
httpServer->onGet("/health", HealthCheckHandler{});
httpServer->onGet("/cache_state", CacheStateHandler{cache});
auto requestHandler = RequestHandler{adminVerifier, handler};
httpServer->onPost("/", requestHandler);
httpServer->onWs(std::move(requestHandler));
@@ -214,7 +205,7 @@ ClioApplication::run(bool const useNgWebServer)
// Init the web server
auto handler = std::make_shared<web::RPCServerHandler<RPCEngineType>>(config_, backend, rpcEngine, etl, dosGuard);
auto const httpServer = web::makeHttpServer(config_, ioc, dosGuard, handler);
auto const httpServer = web::makeHttpServer(config_, ioc, dosGuard, handler, cache);
// Blocks until stopped.
// When stopped, shared_ptrs fall out of scope

View File

@@ -20,8 +20,8 @@
#pragma once
#include "data/BackendInterface.hpp"
#include "etlng/ETLServiceInterface.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "etl/ETLServiceInterface.hpp"
#include "etl/LoadBalancerInterface.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
#include "util/CoroutineGroup.hpp"
#include "util/log/Logger.hpp"
@@ -78,8 +78,8 @@ public:
static std::function<void(boost::asio::yield_context)>
makeOnStopCallback(
ServerType& server,
etlng::LoadBalancerInterface& balancer,
etlng::ETLServiceInterface& etl,
etl::LoadBalancerInterface& balancer,
etl::ETLServiceInterface& etl,
feed::SubscriptionManagerInterface& subscriptions,
data::BackendInterface& backend,
boost::asio::io_context& ioc

View File

@@ -120,4 +120,34 @@ HealthCheckHandler::operator()(
return web::ng::Response{boost::beast::http::status::ok, kHEALTH_CHECK_HTML, request};
}
web::ng::Response
CacheStateHandler::operator()(
web::ng::Request const& request,
web::ng::ConnectionMetadata&,
web::SubscriptionContextPtr,
boost::asio::yield_context
)
{
static constexpr auto kCACHE_CHECK_LOADED_HTML = R"html(
<!DOCTYPE html>
<html>
<head><title>Cache state</title></head>
<body><h1>Cache state</h1><p>Cache is fully loaded</p></body>
</html>
)html";
static constexpr auto kCACHE_CHECK_NOT_LOADED_HTML = R"html(
<!DOCTYPE html>
<html>
<head><title>Cache state</title></head>
<body><h1>Cache state</h1><p>Cache is not yet loaded</p></body>
</html>
)html";
if (cache_.get().isFull())
return web::ng::Response{boost::beast::http::status::ok, kCACHE_CHECK_LOADED_HTML, request};
return web::ng::Response{boost::beast::http::status::service_unavailable, kCACHE_CHECK_NOT_LOADED_HTML, request};
}
} // namespace app
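For context: the endpoint above doubles as a readiness probe, answering 200 once the ledger cache is fully loaded and 503 while it is still warming up. Below is a minimal sketch of just that status mapping, assuming only Boost.Beast; cacheStateStatus is an illustrative helper, not part of Clio.
#include <boost/beast/http/status.hpp>
// Mirrors the mapping implemented in CacheStateHandler::operator() above:
// a load balancer can treat 200 as "ready" and 503 as "still loading".
inline boost::beast::http::status
cacheStateStatus(bool cacheIsFull)
{
    return cacheIsFull ? boost::beast::http::status::ok
                       : boost::beast::http::status::service_unavailable;
}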

View File

@@ -19,6 +19,7 @@
#pragma once
#include "data/LedgerCacheInterface.hpp"
#include "rpc/Errors.hpp"
#include "util/log/Logger.hpp"
#include "web/AdminVerificationStrategy.hpp"
@@ -163,6 +164,37 @@ public:
);
};
/**
* @brief A function object that handles the cache state check endpoint.
*/
class CacheStateHandler {
std::reference_wrapper<data::LedgerCacheInterface const> cache_;
public:
/**
* @brief Construct a new CacheStateHandler object.
*
* @param cache The ledger cache to use.
*/
CacheStateHandler(data::LedgerCacheInterface const& cache) : cache_{cache}
{
}
/**
* @brief The call of the function object.
*
* @param request The request to handle.
* @return The response to the request
*/
web::ng::Response
operator()(
web::ng::Request const& request,
web::ng::ConnectionMetadata&,
web::SubscriptionContextPtr,
boost::asio::yield_context
);
};
/**
* @brief A function object that handles the websocket endpoint.
*

View File

@@ -21,6 +21,7 @@
#include "data/BackendInterface.hpp"
#include "data/CassandraBackend.hpp"
#include "data/KeyspaceBackend.hpp"
#include "data/LedgerCacheInterface.hpp"
#include "data/cassandra/SettingsProvider.hpp"
#include "util/config/ConfigDefinition.hpp"
@@ -45,6 +46,7 @@ namespace data {
inline std::shared_ptr<BackendInterface>
makeBackend(util::config::ClioConfigDefinition const& config, data::LedgerCacheInterface& cache)
{
using namespace cassandra::impl;
static util::Logger const log{"Backend"}; // NOLINT(readability-identifier-naming)
LOG(log.info()) << "Constructing BackendInterface";
@@ -55,9 +57,15 @@ makeBackend(util::config::ClioConfigDefinition const& config, data::LedgerCacheI
if (boost::iequals(type, "cassandra")) {
auto const cfg = config.getObject("database." + type);
backend = std::make_shared<data::cassandra::CassandraBackend>(
data::cassandra::SettingsProvider{cfg}, cache, readOnly
);
if (providerFromString(cfg.getValueView("provider").asString()) == Provider::Keyspace) {
backend = std::make_shared<data::cassandra::KeyspaceBackend>(
data::cassandra::SettingsProvider{cfg}, cache, readOnly
);
} else {
backend = std::make_shared<data::cassandra::CassandraBackend>(
data::cassandra::SettingsProvider{cfg}, cache, readOnly
);
}
}
if (!backend)

View File

@@ -295,7 +295,7 @@ public:
* @param account The account to fetch transactions for
* @param limit The maximum number of transactions per result page
* @param forward Whether to fetch the page forwards or backwards from the given cursor
* @param cursor The cursor to resume fetching from
* @param txnCursor The cursor to resume fetching from
* @param yield The coroutine context
* @return Results and a cursor to resume from
*/
@@ -304,7 +304,7 @@ public:
ripple::AccountID const& account,
std::uint32_t limit,
bool forward,
std::optional<TransactionsCursor> const& cursor,
std::optional<TransactionsCursor> const& txnCursor,
boost::asio::yield_context yield
) const = 0;

File diff suppressed because it is too large

View File

@@ -0,0 +1,309 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/LedgerHeaderCache.hpp"
#include "data/Types.hpp"
#include "data/cassandra/CassandraBackendFamily.hpp"
#include "data/cassandra/Concepts.hpp"
#include "data/cassandra/KeyspaceSchema.hpp"
#include "data/cassandra/SettingsProvider.hpp"
#include "data/cassandra/Types.hpp"
#include "data/cassandra/impl/ExecutionStrategy.hpp"
#include "util/Assert.hpp"
#include "util/log/Logger.hpp"
#include <boost/asio/spawn.hpp>
#include <boost/json/object.hpp>
#include <boost/uuid/string_generator.hpp>
#include <boost/uuid/uuid.hpp>
#include <cassandra.h>
#include <fmt/format.h>
#include <xrpl/basics/Blob.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/basics/strHex.h>
#include <xrpl/protocol/AccountID.h>
#include <xrpl/protocol/Indexes.h>
#include <xrpl/protocol/LedgerHeader.h>
#include <xrpl/protocol/nft.h>
#include <cstddef>
#include <cstdint>
#include <iterator>
#include <optional>
#include <stdexcept>
#include <utility>
#include <vector>
namespace data::cassandra {
/**
* @brief Implements @ref CassandraBackendFamily for Keyspace
*
* @tparam SettingsProviderType The settings provider type
* @tparam ExecutionStrategyType The execution strategy type
* @tparam FetchLedgerCacheType The ledger header cache type
*/
template <
SomeSettingsProvider SettingsProviderType,
SomeExecutionStrategy ExecutionStrategyType,
typename FetchLedgerCacheType = FetchLedgerCache>
class BasicKeyspaceBackend : public CassandraBackendFamily<
SettingsProviderType,
ExecutionStrategyType,
KeyspaceSchema<SettingsProviderType>,
FetchLedgerCacheType> {
using DefaultCassandraFamily = CassandraBackendFamily<
SettingsProviderType,
ExecutionStrategyType,
KeyspaceSchema<SettingsProviderType>,
FetchLedgerCacheType>;
using DefaultCassandraFamily::executor_;
using DefaultCassandraFamily::ledgerSequence_;
using DefaultCassandraFamily::log_;
using DefaultCassandraFamily::range_;
using DefaultCassandraFamily::schema_;
public:
/**
* @brief Inherit the constructors of the base class.
*/
using DefaultCassandraFamily::DefaultCassandraFamily;
/**
* @brief Move constructor is deleted because handle_ is shared by reference with executor
*/
BasicKeyspaceBackend(BasicKeyspaceBackend&&) = delete;
bool
doFinishWrites() override
{
this->waitForWritesToFinish();
// !range_.has_value() means the 'ledger_range' table is not populated yet;
// this would be the first write to the table. In that case, insert both the
// min_sequence and max_sequence rows into the table.
if (not range_.has_value()) {
executor_.writeSync(schema_->insertLedgerRange, /* isLatestLedger =*/false, ledgerSequence_);
executor_.writeSync(schema_->insertLedgerRange, /* isLatestLedger =*/true, ledgerSequence_);
}
if (not this->executeSyncUpdate(schema_->updateLedgerRange.bind(ledgerSequence_, true, ledgerSequence_ - 1))) {
log_.warn() << "Update failed for ledger " << ledgerSequence_;
return false;
}
log_.info() << "Committed ledger " << ledgerSequence_;
return true;
}
NFTsAndCursor
fetchNFTsByIssuer(
ripple::AccountID const& issuer,
std::optional<std::uint32_t> const& taxon,
std::uint32_t const ledgerSequence,
std::uint32_t const limit,
std::optional<ripple::uint256> const& cursorIn,
boost::asio::yield_context yield
) const override
{
std::vector<ripple::uint256> nftIDs;
if (taxon.has_value()) {
// Keyspace and ScyllaDB use the same logic for taxon-filtered queries
nftIDs = fetchNFTIDsByTaxon(issuer, *taxon, limit, cursorIn, yield);
} else {
// Amazon Keyspaces workflow for non-taxon queries; the two-step pagination
// is implemented once in fetchNFTIDsWithoutTaxon below.
nftIDs = fetchNFTIDsWithoutTaxon(issuer, limit, cursorIn, yield);
}
return populateNFTsAndCreateCursor(nftIDs, ledgerSequence, limit, yield);
}
/**
* @brief (Unsupported in Keyspaces) Fetches account root object indexes by page.
* @note Loading the cache by enumerating all accounts is currently unsupported by the AWS Keyspaces backend.
* This function's logic relies on "PER PARTITION LIMIT 1", which Keyspaces does not support, and there is
* no efficient alternative. This is acceptable as the cache is primarily loaded via diffs. Calling this
* function will throw an exception.
*
* @param number The total number of accounts to fetch.
* @param pageSize The maximum number of accounts per page.
* @param seq The accounts need to exist at this ledger sequence.
* @param yield The coroutine context.
* @return A vector of ripple::uint256 representing the account root hashes.
*/
std::vector<ripple::uint256>
fetchAccountRoots(
[[maybe_unused]] std::uint32_t number,
[[maybe_unused]] std::uint32_t pageSize,
[[maybe_unused]] std::uint32_t seq,
[[maybe_unused]] boost::asio::yield_context yield
) const override
{
ASSERT(false, "Fetching account roots is not supported by the Keyspaces backend.");
std::unreachable();
}
private:
std::vector<ripple::uint256>
fetchNFTIDsByTaxon(
ripple::AccountID const& issuer,
std::uint32_t const taxon,
std::uint32_t const limit,
std::optional<ripple::uint256> const& cursorIn,
boost::asio::yield_context yield
) const
{
std::vector<ripple::uint256> nftIDs;
Statement const statement = schema_->selectNFTIDsByIssuerTaxon.bind(issuer);
statement.bindAt(1, taxon);
statement.bindAt(2, cursorIn.value_or(ripple::uint256(0)));
statement.bindAt(3, Limit{limit});
auto const res = executor_.read(yield, statement);
if (res.has_value() && res->hasRows()) {
for (auto const [nftID] : extract<ripple::uint256>(*res))
nftIDs.push_back(nftID);
}
return nftIDs;
}
std::vector<ripple::uint256>
fetchNFTIDsWithoutTaxon(
ripple::AccountID const& issuer,
std::uint32_t const limit,
std::optional<ripple::uint256> const& cursorIn,
boost::asio::yield_context yield
) const
{
std::vector<ripple::uint256> nftIDs;
auto const startTaxon = cursorIn.has_value() ? ripple::nft::toUInt32(ripple::nft::getTaxon(*cursorIn)) : 0;
auto const startTokenID = cursorIn.value_or(ripple::uint256(0));
Statement firstQuery = schema_->selectNFTIDsByIssuerTaxon.bind(issuer);
firstQuery.bindAt(1, startTaxon);
firstQuery.bindAt(2, startTokenID);
firstQuery.bindAt(3, Limit{limit});
auto const firstRes = executor_.read(yield, firstQuery);
if (firstRes.has_value()) {
for (auto const [nftID] : extract<ripple::uint256>(*firstRes))
nftIDs.push_back(nftID);
}
if (nftIDs.size() < limit) {
auto const remainingLimit = limit - nftIDs.size();
Statement secondQuery = schema_->selectNFTsAfterTaxonKeyspaces.bind(issuer);
secondQuery.bindAt(1, startTaxon);
secondQuery.bindAt(2, Limit{remainingLimit});
auto const secondRes = executor_.read(yield, secondQuery);
if (secondRes.has_value()) {
for (auto const [nftID] : extract<ripple::uint256>(*secondRes))
nftIDs.push_back(nftID);
}
}
return nftIDs;
}
/**
* @brief Takes a list of NFT IDs, fetches their full data, and assembles the final result with a cursor.
*/
NFTsAndCursor
populateNFTsAndCreateCursor(
std::vector<ripple::uint256> const& nftIDs,
std::uint32_t const ledgerSequence,
std::uint32_t const limit,
boost::asio::yield_context yield
) const
{
if (nftIDs.empty()) {
LOG(log_.debug()) << "No rows returned";
return {};
}
NFTsAndCursor ret;
if (nftIDs.size() == limit)
ret.cursor = nftIDs.back();
// Prepare and execute queries to fetch NFT info and URIs in parallel.
std::vector<Statement> selectNFTStatements;
selectNFTStatements.reserve(nftIDs.size());
std::transform(
std::cbegin(nftIDs), std::cend(nftIDs), std::back_inserter(selectNFTStatements), [&](auto const& nftID) {
return schema_->selectNFT.bind(nftID, ledgerSequence);
}
);
std::vector<Statement> selectNFTURIStatements;
selectNFTURIStatements.reserve(nftIDs.size());
std::transform(
std::cbegin(nftIDs), std::cend(nftIDs), std::back_inserter(selectNFTURIStatements), [&](auto const& nftID) {
return schema_->selectNFTURI.bind(nftID, ledgerSequence);
}
);
auto const nftInfos = executor_.readEach(yield, selectNFTStatements);
auto const nftUris = executor_.readEach(yield, selectNFTURIStatements);
// Combine the results into final NFT objects.
for (auto i = 0u; i < nftIDs.size(); ++i) {
if (auto const maybeRow = nftInfos[i].template get<uint32_t, ripple::AccountID, bool>();
maybeRow.has_value()) {
auto [seq, owner, isBurned] = *maybeRow;
NFT nft(nftIDs[i], seq, owner, isBurned);
if (auto const maybeUri = nftUris[i].template get<ripple::Blob>(); maybeUri.has_value())
nft.uri = *maybeUri;
ret.nfts.push_back(nft);
}
}
return ret;
}
};
using KeyspaceBackend = BasicKeyspaceBackend<SettingsProvider, impl::DefaultExecutionStrategy<>>;
} // namespace data::cassandra
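As a usage sketch for the cursor contract above (ret.cursor is only set when a page comes back full), the following illustrative helper, which is not part of the backend interface, drains all NFTs of an issuer page by page; the page size of 100 is arbitrary.
// Illustrative only: walk an issuer's NFTs by feeding the returned cursor
// back into fetchNFTsByIssuer until a short (final) page is returned.
inline std::vector<data::NFT>
fetchAllNFTsByIssuer(
    data::cassandra::KeyspaceBackend const& backend,
    ripple::AccountID const& issuer,
    std::uint32_t const ledgerSequence,
    boost::asio::yield_context yield
)
{
    std::vector<data::NFT> all;
    std::optional<ripple::uint256> cursor;
    do {
        auto const page = backend.fetchNFTsByIssuer(issuer, std::nullopt, ledgerSequence, 100u, cursor, yield);
        all.insert(all.end(), page.nfts.begin(), page.nfts.end());
        cursor = page.cursor; // unset once the last page was returned
    } while (cursor.has_value());
    return all;
}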

View File

@@ -20,7 +20,7 @@
#include "data/LedgerCache.hpp"
#include "data/Types.hpp"
#include "etlng/Models.hpp"
#include "etl/Models.hpp"
#include "util/Assert.hpp"
#include <xrpl/basics/base_uint.h>
@@ -89,7 +89,7 @@ LedgerCache::update(std::vector<LedgerObject> const& objs, uint32_t seq, bool is
}
void
LedgerCache::update(std::vector<etlng::model::Object> const& objs, uint32_t seq)
LedgerCache::update(std::vector<etl::model::Object> const& objs, uint32_t seq)
{
if (disabled_)
return;

View File

@@ -21,7 +21,7 @@
#include "data/LedgerCacheInterface.hpp"
#include "data/Types.hpp"
#include "etlng/Models.hpp"
#include "etl/Models.hpp"
#include "util/prometheus/Bool.hpp"
#include "util/prometheus/Counter.hpp"
#include "util/prometheus/Label.hpp"
@@ -98,7 +98,7 @@ public:
update(std::vector<LedgerObject> const& objs, uint32_t seq, bool isBackground) override;
void
update(std::vector<etlng::model::Object> const& objs, uint32_t seq) override;
update(std::vector<etl::model::Object> const& objs, uint32_t seq) override;
std::optional<Blob>
get(ripple::uint256 const& key, uint32_t seq) const override;

View File

@@ -20,7 +20,7 @@
#pragma once
#include "data/Types.hpp"
#include "etlng/Models.hpp"
#include "etl/Models.hpp"
#include <xrpl/basics/base_uint.h>
#include <xrpl/basics/hardened_hash.h>
@@ -63,7 +63,7 @@ public:
* @param seq The sequence to update cache for
*/
virtual void
update(std::vector<etlng::model::Object> const& objs, uint32_t seq) = 0;
update(std::vector<etl::model::Object> const& objs, uint32_t seq) = 0;
/**
* @brief Fetch a cached object by its key and sequence number.

View File

@@ -1,8 +1,10 @@
# Backend
# Backend
@page "backend" Backend
The backend of Clio is responsible for handling the proper reading and writing of past ledger data from and to a given database. Currently, Cassandra and ScyllaDB are the only supported databases that are production-ready.
To support additional database types, you can create new classes that implement the virtual methods in [BackendInterface.h](https://github.com/XRPLF/clio/blob/develop/src/data/BackendInterface.hpp). Then, leveraging the Factory Object Design Pattern, modify [BackendFactory.h](https://github.com/XRPLF/clio/blob/develop/src/data/BackendFactory.hpp) with logic that returns the new database interface if the relevant `type` is provided in Clio's configuration file.
To support additional database types, you can create new classes that implement the virtual methods in [BackendInterface.hpp](https://github.com/XRPLF/clio/blob/develop/src/data/BackendInterface.hpp). Then, leveraging the Factory Object Design Pattern, modify [BackendFactory.hpp](https://github.com/XRPLF/clio/blob/develop/src/data/BackendFactory.hpp) with logic that returns the new database interface if the relevant `type` is provided in Clio's configuration file.
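For example, a hypothetical branch for a new database could look like the sketch below; `PostgresBackend` and the `"postgres"` type value are illustrative and do not exist in the codebase.
```cpp
// Hypothetical sketch only: dispatch to a new backend implementation when
// the config file sets database.type to "postgres".
if (boost::iequals(type, "postgres")) {
    auto const cfg = config.getObject("database." + type);
    backend = std::make_shared<data::postgres::PostgresBackend>(cfg, cache, readOnly);
}
```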
## Data Model

View File

@@ -0,0 +1,975 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/BackendInterface.hpp"
#include "data/DBHelpers.hpp"
#include "data/LedgerCacheInterface.hpp"
#include "data/LedgerHeaderCache.hpp"
#include "data/Types.hpp"
#include "data/cassandra/Concepts.hpp"
#include "data/cassandra/Handle.hpp"
#include "data/cassandra/Types.hpp"
#include "data/cassandra/impl/ExecutionStrategy.hpp"
#include "util/Assert.hpp"
#include "util/LedgerUtils.hpp"
#include "util/Profiler.hpp"
#include "util/log/Logger.hpp"
#include <boost/asio/spawn.hpp>
#include <boost/json/object.hpp>
#include <boost/uuid/string_generator.hpp>
#include <boost/uuid/uuid.hpp>
#include <cassandra.h>
#include <fmt/format.h>
#include <xrpl/basics/Blob.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/basics/strHex.h>
#include <xrpl/protocol/AccountID.h>
#include <xrpl/protocol/Indexes.h>
#include <xrpl/protocol/LedgerHeader.h>
#include <xrpl/protocol/nft.h>
#include <algorithm>
#include <atomic>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <iterator>
#include <limits>
#include <optional>
#include <stdexcept>
#include <string>
#include <tuple>
#include <utility>
#include <vector>
class CacheBackendCassandraTest;
namespace data::cassandra {
/**
* @brief Implements @ref BackendInterface for Cassandra/ScyllaDB/Keyspace.
*
* Note: This is a safer and more correct rewrite of the original implementation of the backend.
*
* @tparam SettingsProviderType The settings provider type
* @tparam ExecutionStrategyType The execution strategy type
* @tparam SchemaType The Schema type
* @tparam FetchLedgerCacheType The ledger header cache type
*/
template <
SomeSettingsProvider SettingsProviderType,
SomeExecutionStrategy ExecutionStrategyType,
typename SchemaType,
typename FetchLedgerCacheType = FetchLedgerCache>
class CassandraBackendFamily : public BackendInterface {
protected:
util::Logger log_{"Backend"};
SettingsProviderType settingsProvider_;
SchemaType schema_;
std::atomic_uint32_t ledgerSequence_ = 0u;
friend class ::CacheBackendCassandraTest;
Handle handle_;
// have to be mutable because BackendInterface constness :(
mutable ExecutionStrategyType executor_;
// TODO: move to interface level
mutable FetchLedgerCacheType ledgerCache_{};
public:
/**
* @brief Create a new cassandra/scylla backend instance.
*
* @param settingsProvider The settings provider
* @param cache The ledger cache
* @param readOnly Whether the database should be in readonly mode
*/
CassandraBackendFamily(SettingsProviderType settingsProvider, data::LedgerCacheInterface& cache, bool readOnly)
: BackendInterface(cache)
, settingsProvider_{std::move(settingsProvider)}
, schema_{settingsProvider_}
, handle_{settingsProvider_.getSettings()}
, executor_{settingsProvider_.getSettings(), handle_}
{
if (auto const res = handle_.connect(); not res.has_value())
throw std::runtime_error("Could not connect to database: " + res.error());
if (not readOnly) {
if (auto const res = handle_.execute(schema_.createKeyspace); not res.has_value()) {
// On DataStax, keyspace creation can be configured to only be done through the
// admin interface. That does not mean the keyspace does not already exist, though.
if (res.error().code() != CASS_ERROR_SERVER_UNAUTHORIZED)
throw std::runtime_error("Could not create keyspace: " + res.error());
}
if (auto const res = handle_.executeEach(schema_.createSchema); not res.has_value())
throw std::runtime_error("Could not create schema: " + res.error());
}
try {
schema_.prepareStatements(handle_);
} catch (std::runtime_error const& ex) {
auto const error = fmt::format(
"Failed to prepare the statements: {}; readOnly: {}. ReadOnly should be turned off or another Clio "
"node with write access to DB should be started first.",
ex.what(),
readOnly
);
LOG(log_.error()) << error;
throw std::runtime_error(error);
}
LOG(log_.info()) << "Created (revamped) CassandraBackend";
}
/**
* @brief Move constructor is deleted because handle_ is shared by reference with executor
*/
CassandraBackendFamily(CassandraBackendFamily&&) = delete;
TransactionsAndCursor
fetchAccountTransactions(
ripple::AccountID const& account,
std::uint32_t const limit,
bool forward,
std::optional<TransactionsCursor> const& txnCursor,
boost::asio::yield_context yield
) const override
{
auto rng = fetchLedgerRange();
if (!rng)
return {.txns = {}, .cursor = {}};
Statement const statement = [this, forward, &account]() {
if (forward)
return schema_->selectAccountTxForward.bind(account);
return schema_->selectAccountTx.bind(account);
}();
auto cursor = txnCursor;
if (cursor) {
statement.bindAt(1, cursor->asTuple());
LOG(log_.debug()) << "account = " << ripple::strHex(account) << " tuple = " << cursor->ledgerSequence
<< cursor->transactionIndex;
} else {
auto const seq = forward ? rng->minSequence : rng->maxSequence;
auto const placeHolder = forward ? 0u : std::numeric_limits<std::uint32_t>::max();
statement.bindAt(1, std::make_tuple(placeHolder, placeHolder));
LOG(log_.debug()) << "account = " << ripple::strHex(account) << " idx = " << seq
<< " tuple = " << placeHolder;
}
// FIXME: Limit is a hack to support uint32_t properly for the time
// being. Should be removed later and schema updated to use proper
// types.
statement.bindAt(2, Limit{limit});
auto const res = executor_.read(yield, statement);
auto const& results = res.value();
if (not results.hasRows()) {
LOG(log_.debug()) << "No rows returned";
return {};
}
std::vector<ripple::uint256> hashes = {};
auto numRows = results.numRows();
LOG(log_.info()) << "num_rows = " << numRows;
for (auto [hash, data] : extract<ripple::uint256, std::tuple<uint32_t, uint32_t>>(results)) {
hashes.push_back(hash);
if (--numRows == 0) {
LOG(log_.debug()) << "Setting cursor";
cursor = data;
}
}
auto const txns = fetchTransactions(hashes, yield);
LOG(log_.debug()) << "Txns = " << txns.size();
if (txns.size() == limit) {
LOG(log_.debug()) << "Returning cursor";
return {txns, cursor};
}
return {txns, {}};
}
void
waitForWritesToFinish() override
{
executor_.sync();
}
void
writeLedger(ripple::LedgerHeader const& ledgerHeader, std::string&& blob) override
{
executor_.write(schema_->insertLedgerHeader, ledgerHeader.seq, std::move(blob));
executor_.write(schema_->insertLedgerHash, ledgerHeader.hash, ledgerHeader.seq);
ledgerSequence_ = ledgerHeader.seq;
}
std::optional<std::uint32_t>
fetchLatestLedgerSequence(boost::asio::yield_context yield) const override
{
if (auto const res = executor_.read(yield, schema_->selectLatestLedger); res.has_value()) {
if (auto const& rows = *res; rows) {
if (auto const maybeRow = rows.template get<uint32_t>(); maybeRow.has_value())
return maybeRow;
LOG(log_.error()) << "Could not fetch latest ledger - no rows";
return std::nullopt;
}
LOG(log_.error()) << "Could not fetch latest ledger - no result";
} else {
LOG(log_.error()) << "Could not fetch latest ledger: " << res.error();
}
return std::nullopt;
}
std::optional<ripple::LedgerHeader>
fetchLedgerBySequence(std::uint32_t const sequence, boost::asio::yield_context yield) const override
{
if (auto const lock = ledgerCache_.get(); lock.has_value() && lock->seq == sequence)
return lock->ledger;
auto const res = executor_.read(yield, schema_->selectLedgerBySeq, sequence);
if (res) {
if (auto const& result = res.value(); result) {
if (auto const maybeValue = result.template get<std::vector<unsigned char>>(); maybeValue) {
auto const header = util::deserializeHeader(ripple::makeSlice(*maybeValue));
ledgerCache_.put(FetchLedgerCache::CacheEntry{header, sequence});
return header;
}
LOG(log_.error()) << "Could not fetch ledger by sequence - no rows";
return std::nullopt;
}
LOG(log_.error()) << "Could not fetch ledger by sequence - no result";
} else {
LOG(log_.error()) << "Could not fetch ledger by sequence: " << res.error();
}
return std::nullopt;
}
std::optional<ripple::LedgerHeader>
fetchLedgerByHash(ripple::uint256 const& hash, boost::asio::yield_context yield) const override
{
if (auto const res = executor_.read(yield, schema_->selectLedgerByHash, hash); res) {
if (auto const& result = res.value(); result) {
if (auto const maybeValue = result.template get<uint32_t>(); maybeValue)
return fetchLedgerBySequence(*maybeValue, yield);
LOG(log_.error()) << "Could not fetch ledger by hash - no rows";
return std::nullopt;
}
LOG(log_.error()) << "Could not fetch ledger by hash - no result";
} else {
LOG(log_.error()) << "Could not fetch ledger by hash: " << res.error();
}
return std::nullopt;
}
std::optional<LedgerRange>
hardFetchLedgerRange(boost::asio::yield_context yield) const override
{
auto const res = executor_.read(yield, schema_->selectLedgerRange);
if (res) {
auto const& results = res.value();
if (not results.hasRows()) {
LOG(log_.debug()) << "Could not fetch ledger range - no rows";
return std::nullopt;
}
// TODO: this is probably a good place to use user type in
// cassandra instead of having two rows with bool flag. or maybe at
// least use tuple<int, int>?
LedgerRange range;
std::size_t idx = 0;
for (auto [seq] : extract<uint32_t>(results)) {
if (idx == 0) {
range.maxSequence = range.minSequence = seq;
} else if (idx == 1) {
range.maxSequence = seq;
}
++idx;
}
if (range.minSequence > range.maxSequence)
std::swap(range.minSequence, range.maxSequence);
LOG(log_.debug()) << "After hardFetchLedgerRange range is " << range.minSequence << ":"
<< range.maxSequence;
return range;
}
LOG(log_.error()) << "Could not fetch ledger range: " << res.error();
return std::nullopt;
}
std::vector<TransactionAndMetadata>
fetchAllTransactionsInLedger(std::uint32_t const ledgerSequence, boost::asio::yield_context yield) const override
{
auto hashes = fetchAllTransactionHashesInLedger(ledgerSequence, yield);
return fetchTransactions(hashes, yield);
}
std::vector<ripple::uint256>
fetchAllTransactionHashesInLedger(
std::uint32_t const ledgerSequence,
boost::asio::yield_context yield
) const override
{
auto start = std::chrono::system_clock::now();
auto const res = executor_.read(yield, schema_->selectAllTransactionHashesInLedger, ledgerSequence);
if (not res) {
LOG(log_.error()) << "Could not fetch all transaction hashes: " << res.error();
return {};
}
auto const& result = res.value();
if (not result.hasRows()) {
LOG(log_.warn()) << "Could not fetch all transaction hashes - no rows; ledger = "
<< std::to_string(ledgerSequence);
return {};
}
std::vector<ripple::uint256> hashes;
for (auto [hash] : extract<ripple::uint256>(result))
hashes.push_back(std::move(hash));
auto end = std::chrono::system_clock::now();
LOG(log_.debug()) << "Fetched " << hashes.size() << " transaction hashes from database in "
<< std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()
<< " milliseconds";
return hashes;
}
std::optional<NFT>
fetchNFT(
ripple::uint256 const& tokenID,
std::uint32_t const ledgerSequence,
boost::asio::yield_context yield
) const override
{
auto const res = executor_.read(yield, schema_->selectNFT, tokenID, ledgerSequence);
if (not res)
return std::nullopt;
if (auto const maybeRow = res->template get<uint32_t, ripple::AccountID, bool>(); maybeRow) {
auto [seq, owner, isBurned] = *maybeRow;
auto result = std::make_optional<NFT>(tokenID, seq, owner, isBurned);
// now fetch URI. Usually we will have the URI even for burned NFTs,
// but if the first ledger on this clio included NFTokenBurn
// transactions we will not have the URIs for any of those tokens.
// In any other case not having the URI indicates something went
// wrong with our data.
//
// TODO - in the future would be great for any handlers that use
// this could inject a warning in this case (the case of not having
// a URI because it was burned in the first ledger) to indicate that
// even though we are returning a blank URI, the NFT might have had
// one.
auto uriRes = executor_.read(yield, schema_->selectNFTURI, tokenID, ledgerSequence);
if (uriRes) {
if (auto const maybeUri = uriRes->template get<ripple::Blob>(); maybeUri)
result->uri = *maybeUri;
}
return result;
}
LOG(log_.error()) << "Could not fetch NFT - no rows";
return std::nullopt;
}
TransactionsAndCursor
fetchNFTTransactions(
ripple::uint256 const& tokenID,
std::uint32_t const limit,
bool const forward,
std::optional<TransactionsCursor> const& cursorIn,
boost::asio::yield_context yield
) const override
{
auto rng = fetchLedgerRange();
if (!rng)
return {.txns = {}, .cursor = {}};
Statement const statement = [this, forward, &tokenID]() {
if (forward)
return schema_->selectNFTTxForward.bind(tokenID);
return schema_->selectNFTTx.bind(tokenID);
}();
auto cursor = cursorIn;
if (cursor) {
statement.bindAt(1, cursor->asTuple());
LOG(log_.debug()) << "token_id = " << ripple::strHex(tokenID) << " tuple = " << cursor->ledgerSequence
<< cursor->transactionIndex;
} else {
auto const seq = forward ? rng->minSequence : rng->maxSequence;
auto const placeHolder = forward ? 0 : std::numeric_limits<std::uint32_t>::max();
statement.bindAt(1, std::make_tuple(placeHolder, placeHolder));
LOG(log_.debug()) << "token_id = " << ripple::strHex(tokenID) << " idx = " << seq
<< " tuple = " << placeHolder;
}
statement.bindAt(2, Limit{limit});
auto const res = executor_.read(yield, statement);
auto const& results = res.value();
if (not results.hasRows()) {
LOG(log_.debug()) << "No rows returned";
return {};
}
std::vector<ripple::uint256> hashes = {};
auto numRows = results.numRows();
LOG(log_.info()) << "num_rows = " << numRows;
for (auto [hash, data] : extract<ripple::uint256, std::tuple<uint32_t, uint32_t>>(results)) {
hashes.push_back(hash);
if (--numRows == 0) {
LOG(log_.debug()) << "Setting cursor";
cursor = data;
// forward queries by ledger/tx sequence `>=`
// so we have to advance the index by one
if (forward)
++cursor->transactionIndex;
}
}
auto const txns = fetchTransactions(hashes, yield);
LOG(log_.debug()) << "NFT Txns = " << txns.size();
if (txns.size() == limit) {
LOG(log_.debug()) << "Returning cursor";
return {txns, cursor};
}
return {txns, {}};
}
MPTHoldersAndCursor
fetchMPTHolders(
ripple::uint192 const& mptID,
std::uint32_t const limit,
std::optional<ripple::AccountID> const& cursorIn,
std::uint32_t const ledgerSequence,
boost::asio::yield_context yield
) const override
{
auto const holderEntries = executor_.read(
yield, schema_->selectMPTHolders, mptID, cursorIn.value_or(ripple::AccountID(0)), Limit{limit}
);
auto const& holderResults = holderEntries.value();
if (not holderResults.hasRows()) {
LOG(log_.debug()) << "No rows returned";
return {};
}
std::vector<ripple::uint256> mptKeys;
std::optional<ripple::AccountID> cursor;
for (auto const [holder] : extract<ripple::AccountID>(holderResults)) {
mptKeys.push_back(ripple::keylet::mptoken(mptID, holder).key);
cursor = holder;
}
auto mptObjects = doFetchLedgerObjects(mptKeys, ledgerSequence, yield);
auto it = std::remove_if(mptObjects.begin(), mptObjects.end(), [](Blob const& mpt) { return mpt.empty(); });
mptObjects.erase(it, mptObjects.end());
ASSERT(mptKeys.size() <= limit, "Number of keys can't exceed the limit");
if (mptKeys.size() == limit)
return {mptObjects, cursor};
return {mptObjects, {}};
}
std::optional<Blob>
doFetchLedgerObject(
ripple::uint256 const& key,
std::uint32_t const sequence,
boost::asio::yield_context yield
) const override
{
LOG(log_.debug()) << "Fetching ledger object for seq " << sequence << ", key = " << ripple::to_string(key);
if (auto const res = executor_.read(yield, schema_->selectObject, key, sequence); res) {
if (auto const result = res->template get<Blob>(); result) {
if (result->size())
return result;
} else {
LOG(log_.debug()) << "Could not fetch ledger object - no rows";
}
} else {
LOG(log_.error()) << "Could not fetch ledger object: " << res.error();
}
return std::nullopt;
}
std::optional<std::uint32_t>
doFetchLedgerObjectSeq(
ripple::uint256 const& key,
std::uint32_t const sequence,
boost::asio::yield_context yield
) const override
{
LOG(log_.debug()) << "Fetching ledger object for seq " << sequence << ", key = " << ripple::to_string(key);
if (auto const res = executor_.read(yield, schema_->selectObject, key, sequence); res) {
if (auto const result = res->template get<Blob, std::uint32_t>(); result) {
auto [_, seq] = result.value();
return seq;
}
LOG(log_.debug()) << "Could not fetch ledger object sequence - no rows";
} else {
LOG(log_.error()) << "Could not fetch ledger object sequence: " << res.error();
}
return std::nullopt;
}
std::optional<TransactionAndMetadata>
fetchTransaction(ripple::uint256 const& hash, boost::asio::yield_context yield) const override
{
if (auto const res = executor_.read(yield, schema_->selectTransaction, hash); res) {
if (auto const maybeValue = res->template get<Blob, Blob, uint32_t, uint32_t>(); maybeValue) {
auto [transaction, meta, seq, date] = *maybeValue;
return std::make_optional<TransactionAndMetadata>(transaction, meta, seq, date);
}
LOG(log_.debug()) << "Could not fetch transaction - no rows";
} else {
LOG(log_.error()) << "Could not fetch transaction: " << res.error();
}
return std::nullopt;
}
std::optional<ripple::uint256>
doFetchSuccessorKey(
ripple::uint256 key,
std::uint32_t const ledgerSequence,
boost::asio::yield_context yield
) const override
{
if (auto const res = executor_.read(yield, schema_->selectSuccessor, key, ledgerSequence); res) {
if (auto const result = res->template get<ripple::uint256>(); result) {
if (*result == kLAST_KEY)
return std::nullopt;
return result;
}
LOG(log_.debug()) << "Could not fetch successor - no rows";
} else {
LOG(log_.error()) << "Could not fetch successor: " << res.error();
}
return std::nullopt;
}
std::vector<TransactionAndMetadata>
fetchTransactions(std::vector<ripple::uint256> const& hashes, boost::asio::yield_context yield) const override
{
if (hashes.empty())
return {};
auto const numHashes = hashes.size();
std::vector<TransactionAndMetadata> results;
results.reserve(numHashes);
std::vector<Statement> statements;
statements.reserve(numHashes);
auto const timeDiff = util::timed([this, yield, &results, &hashes, &statements]() {
// TODO: seems like a job for "hash IN (list of hashes)" instead?
std::transform(
std::cbegin(hashes), std::cend(hashes), std::back_inserter(statements), [this](auto const& hash) {
return schema_->selectTransaction.bind(hash);
}
);
auto const entries = executor_.readEach(yield, statements);
std::transform(
std::cbegin(entries),
std::cend(entries),
std::back_inserter(results),
[](auto const& res) -> TransactionAndMetadata {
if (auto const maybeRow = res.template get<Blob, Blob, uint32_t, uint32_t>(); maybeRow)
return *maybeRow;
return {};
}
);
});
ASSERT(numHashes == results.size(), "Number of hashes and results must match");
LOG(log_.debug()) << "Fetched " << numHashes << " transactions from database in " << timeDiff
<< " milliseconds";
return results;
}
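// Re the TODO above: CQL does accept `WHERE hash IN ?` on a partition key,
// e.g. (a sketch; column and table names assumed from the rest of this schema):
//
//   SELECT transaction, metadata, ledger_sequence, date
//   FROM transactions
//   WHERE hash IN ?
//
// but an IN over partition keys still fans out to one read per partition on
// the server, so readEach() mostly trades a single round trip for explicit
// per-row error handling and client-side backpressure.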
std::vector<Blob>
doFetchLedgerObjects(
std::vector<ripple::uint256> const& keys,
std::uint32_t const sequence,
boost::asio::yield_context yield
) const override
{
if (keys.empty())
return {};
auto const numKeys = keys.size();
LOG(log_.trace()) << "Fetching " << numKeys << " objects";
std::vector<Blob> results;
results.reserve(numKeys);
std::vector<Statement> statements;
statements.reserve(numKeys);
// TODO: seems like a job for "key IN (list of keys)" instead?
std::transform(
std::cbegin(keys), std::cend(keys), std::back_inserter(statements), [this, &sequence](auto const& key) {
return schema_->selectObject.bind(key, sequence);
}
);
auto const entries = executor_.readEach(yield, statements);
std::transform(
std::cbegin(entries), std::cend(entries), std::back_inserter(results), [](auto const& res) -> Blob {
if (auto const maybeValue = res.template get<Blob>(); maybeValue)
return *maybeValue;
return {};
}
);
LOG(log_.trace()) << "Fetched " << numKeys << " objects";
return results;
}
std::vector<LedgerObject>
fetchLedgerDiff(std::uint32_t const ledgerSequence, boost::asio::yield_context yield) const override
{
auto const [keys, timeDiff] = util::timed([this, &ledgerSequence, yield]() -> std::vector<ripple::uint256> {
auto const res = executor_.read(yield, schema_->selectDiff, ledgerSequence);
if (not res) {
LOG(log_.error()) << "Could not fetch ledger diff: " << res.error() << "; ledger = " << ledgerSequence;
return {};
}
auto const& results = res.value();
if (not results) {
LOG(log_.error()) << "Could not fetch ledger diff - no rows; ledger = " << ledgerSequence;
return {};
}
std::vector<ripple::uint256> resultKeys;
for (auto [key] : extract<ripple::uint256>(results))
resultKeys.push_back(key);
return resultKeys;
});
// one of the above errors must have happened
if (keys.empty())
return {};
LOG(log_.debug()) << "Fetched " << keys.size() << " diff hashes from database in " << timeDiff
<< " milliseconds";
auto const objs = fetchLedgerObjects(keys, ledgerSequence, yield);
std::vector<LedgerObject> results;
results.reserve(keys.size());
std::transform(
std::cbegin(keys),
std::cend(keys),
std::cbegin(objs),
std::back_inserter(results),
[](auto const& key, auto const& obj) { return LedgerObject{key, obj}; }
);
return results;
}
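// Usage sketch (hypothetical wiring): all read paths take a
// boost::asio::yield_context, so callers typically run inside a spawned
// coroutine, e.g.:
//
//   boost::asio::spawn(ioc, [&backend, seq](boost::asio::yield_context yield) {
//       auto const diff = backend.fetchLedgerDiff(seq, yield);
//       // each LedgerObject pairs a changed key with its blob at `seq`
//   });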
std::optional<std::string>
fetchMigratorStatus(std::string const& migratorName, boost::asio::yield_context yield) const override
{
auto const res = executor_.read(yield, schema_->selectMigratorStatus, Text(migratorName));
if (not res) {
LOG(log_.error()) << "Could not fetch migrator status: " << res.error();
return {};
}
auto const& results = res.value();
if (not results) {
return {};
}
for (auto [statusString] : extract<std::string>(results))
return statusString;
return {};
}
std::expected<std::vector<std::pair<boost::uuids::uuid, std::string>>, std::string>
fetchClioNodesData(boost::asio::yield_context yield) const override
{
auto const readResult = executor_.read(yield, schema_->selectClioNodesData);
if (not readResult)
return std::unexpected{readResult.error().message()};
std::vector<std::pair<boost::uuids::uuid, std::string>> result;
for (auto [uuid, message] : extract<boost::uuids::uuid, std::string>(*readResult)) {
result.emplace_back(uuid, std::move(message));
}
return result;
}
void
doWriteLedgerObject(std::string&& key, std::uint32_t const seq, std::string&& blob) override
{
LOG(log_.trace()) << " Writing ledger object " << key.size() << ":" << seq << " [" << blob.size() << " bytes]";
if (range_)
executor_.write(schema_->insertDiff, seq, key);
executor_.write(schema_->insertObject, std::move(key), seq, std::move(blob));
}
void
writeSuccessor(std::string&& key, std::uint32_t const seq, std::string&& successor) override
{
LOG(log_.trace()) << "Writing successor. key = " << key.size() << " bytes. "
<< " seq = " << std::to_string(seq) << " successor = " << successor.size() << " bytes.";
ASSERT(!key.empty(), "Key must not be empty");
ASSERT(!successor.empty(), "Successor must not be empty");
executor_.write(schema_->insertSuccessor, std::move(key), seq, std::move(successor));
}
void
writeAccountTransactions(std::vector<AccountTransactionsData> data) override
{
std::vector<Statement> statements;
statements.reserve(data.size() * 10); // assume ~10 affected accounts per transaction on average
for (auto& record : data) {
std::ranges::transform(record.accounts, std::back_inserter(statements), [this, &record](auto&& account) {
return schema_->insertAccountTx.bind(
std::forward<decltype(account)>(account),
std::make_tuple(record.ledgerSequence, record.transactionIndex),
record.txHash
);
});
}
executor_.write(std::move(statements));
}
void
writeAccountTransaction(AccountTransactionsData record) override
{
std::vector<Statement> statements;
statements.reserve(record.accounts.size());
std::ranges::transform(record.accounts, std::back_inserter(statements), [this, &record](auto&& account) {
return schema_->insertAccountTx.bind(
std::forward<decltype(account)>(account),
std::make_tuple(record.ledgerSequence, record.transactionIndex),
record.txHash
);
});
executor_.write(std::move(statements));
}
void
writeNFTTransactions(std::vector<NFTTransactionsData> const& data) override
{
std::vector<Statement> statements;
statements.reserve(data.size());
std::ranges::transform(data, std::back_inserter(statements), [this](auto const& record) {
return schema_->insertNFTTx.bind(
record.tokenID, std::make_tuple(record.ledgerSequence, record.transactionIndex), record.txHash
);
});
executor_.write(std::move(statements));
}
void
writeTransaction(
std::string&& hash,
std::uint32_t const seq,
std::uint32_t const date,
std::string&& transaction,
std::string&& metadata
) override
{
LOG(log_.trace()) << "Writing txn to database";
executor_.write(schema_->insertLedgerTransaction, seq, hash);
executor_.write(
schema_->insertTransaction, std::move(hash), seq, date, std::move(transaction), std::move(metadata)
);
}
void
writeNFTs(std::vector<NFTsData> const& data) override
{
std::vector<Statement> statements;
statements.reserve(data.size() * 3);
for (NFTsData const& record : data) {
if (!record.onlyUriChanged) {
statements.push_back(
schema_->insertNFT.bind(record.tokenID, record.ledgerSequence, record.owner, record.isBurned)
);
// If `uri` is set (and it can be set to an empty uri), we know this
// is a net-new NFT. That is, this NFT has not been seen before by
// us _OR_ it is in the extreme edge case of a re-minted NFT ID with
// the same NFT ID as an already-burned token. In this case, we need
// to record the URI and link to the issuer_nf_tokens table.
if (record.uri) {
statements.push_back(schema_->insertIssuerNFT.bind(
ripple::nft::getIssuer(record.tokenID),
static_cast<uint32_t>(ripple::nft::getTaxon(record.tokenID)),
record.tokenID
));
statements.push_back(
schema_->insertNFTURI.bind(record.tokenID, record.ledgerSequence, record.uri.value())
);
}
} else {
// only the URI changed, so update just the URI table
statements.push_back(
schema_->insertNFTURI.bind(record.tokenID, record.ledgerSequence, record.uri.value())
);
}
}
executor_.writeEach(std::move(statements));
}
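// The three cases handled above, sketched with hypothetical records:
//   mint:            onlyUriChanged = false, uri set       -> 3 statements (nft, issuer_nf_tokens, uri)
//   burn/owner move: onlyUriChanged = false, uri = nullopt -> 1 statement  (nft only)
//   URI-only change: onlyUriChanged = true,  uri set       -> 1 statement  (uri only)
// which is why `statements` reserves data.size() * 3 as an upper bound.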
void
writeMPTHolders(std::vector<MPTHolderData> const& data) override
{
std::vector<Statement> statements;
statements.reserve(data.size());
for (auto [mptId, holder] : data)
statements.push_back(schema_->insertMPTHolder.bind(mptId, holder));
executor_.write(std::move(statements));
}
void
startWrites() const override
{
// Note: this was a no-op in the original implementation too;
// it was probably used by the PostgreSQL backend to start a transaction or similar.
}
void
writeMigratorStatus(std::string const& migratorName, std::string const& status) override
{
executor_.writeSync(
schema_->insertMigratorStatus, data::cassandra::Text{migratorName}, data::cassandra::Text(status)
);
}
void
writeNodeMessage(boost::uuids::uuid const& uuid, std::string message) override
{
executor_.writeSync(schema_->updateClioNodeMessage, data::cassandra::Text{std::move(message)}, uuid);
}
bool
isTooBusy() const override
{
return executor_.isTooBusy();
}
boost::json::object
stats() const override
{
return executor_.stats();
}
protected:
/**
* @brief Executes a conditional update statement synchronously and verifies the result
*
* @param statement The statement to execute
* @return true if the update was applied (or the DB already reflects it); false otherwise
*/
bool
executeSyncUpdate(Statement statement)
{
auto const res = executor_.writeSync(statement);
auto maybeSuccess = res->template get<bool>();
if (not maybeSuccess) {
LOG(log_.error()) << "executeSyncUpdate - error getting result - no row";
return false;
}
if (not maybeSuccess.value()) {
LOG(log_.warn()) << "Update failed. Checking if DB state is what we expect";
// error may indicate that another writer wrote something.
// in this case let's just compare the current state of things
// against what we were trying to write in the first place and
// use that as the source of truth for the result.
auto rng = hardFetchLedgerRangeNoThrow();
return rng && rng->maxSequence == ledgerSequence_;
}
return true;
}
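// Caller-side sketch (hypothetical sequence values and bind order): the
// ledger-range LWT update flows through here, so losing a race with another
// writer degrades into a read-and-compare rather than a hard failure:
//
//   ledgerSequence_ = newSeq;
//   if (not executeSyncUpdate(schema_->updateLedgerRange.bind(newSeq, true, newSeq - 1))) {
//       // another writer advanced the range first; give up the writer seat
//   }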
};
} // namespace data::cassandra


@@ -0,0 +1,178 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/cassandra/Concepts.hpp"
#include "data/cassandra/Handle.hpp"
#include "data/cassandra/Schema.hpp"
#include "data/cassandra/SettingsProvider.hpp"
#include "data/cassandra/Types.hpp"
#include "util/log/Logger.hpp"
#include <boost/json/string.hpp>
#include <fmt/compile.h>
#include <functional>
#include <memory>
namespace data::cassandra {
/**
* @brief Manages the DB schema and provides access to prepared statements.
*/
template <SomeSettingsProvider SettingsProviderType>
class CassandraSchema : public Schema<SettingsProviderType> {
using Schema<SettingsProviderType>::Schema;
public:
/**
* @brief Prepared statements holder with the Cassandra-specific queries
*/
struct CassandraStatements : public Schema<SettingsProviderType>::Statements {
using Schema<SettingsProviderType>::Statements::Statements;
//
// Update (and "delete") queries
//
PreparedStatement updateLedgerRange = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
UPDATE {}
SET sequence = ?
WHERE is_latest = ?
IF sequence IN (?, null)
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")
)
);
}();
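// `IF sequence IN (?, null)` makes this a lightweight transaction (a Paxos
// round): the update applies only when the stored sequence equals the bound
// previous value, or when the column was never written; that covers both the
// very first range write and every later compare-and-set in one statement.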
//
// Select queries
//
PreparedStatement selectNFTIDsByIssuer = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
SELECT token_id
FROM {}
WHERE issuer = ?
AND (taxon, token_id) > ?
ORDER BY taxon ASC, token_id ASC
LIMIT ?
)",
qualifiedTableName(settingsProvider_.get(), "issuer_nf_tokens_v2")
)
);
}();
PreparedStatement selectAccountFromBeginning = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
SELECT account
FROM {}
WHERE token(account) > 0
PER PARTITION LIMIT 1
LIMIT ?
)",
qualifiedTableName(settingsProvider_.get(), "account_tx")
)
);
}();
PreparedStatement selectAccountFromToken = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
SELECT account
FROM {}
WHERE token(account) > token(?)
PER PARTITION LIMIT 1
LIMIT ?
)",
qualifiedTableName(settingsProvider_.get(), "account_tx")
)
);
}();
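// Paging sketch for the two statements above (hypothetical loop): `PER
// PARTITION LIMIT 1` yields one row per account partition, so iterating by
// partitioner token walks the distinct accounts in account_tx:
//
//   auto stmt = selectAccountFromBeginning.bind(Limit{pageSize});
//   // execute, collect accounts, then advance using the last account seen:
//   stmt = selectAccountFromToken.bind(lastAccount, Limit{pageSize});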
PreparedStatement selectLedgerPageKeys = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
SELECT key
FROM {}
WHERE TOKEN(key) >= ?
AND sequence <= ?
PER PARTITION LIMIT 1
LIMIT ?
ALLOW FILTERING
)",
qualifiedTableName(settingsProvider_.get(), "objects")
)
);
}();
PreparedStatement selectLedgerPage = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
SELECT object, key
FROM {}
WHERE TOKEN(key) >= ?
AND sequence <= ?
PER PARTITION LIMIT 1
LIMIT ?
ALLOW FILTERING
)",
qualifiedTableName(settingsProvider_.get(), "objects")
)
);
}();
};
void
prepareStatements(Handle const& handle) override
{
LOG(log_.info()) << "Preparing cassandra statements";
statements_ = std::make_unique<CassandraStatements>(settingsProvider_, handle);
LOG(log_.info()) << "Finished preparing statements";
}
/**
* @brief Provides access to statements.
*
* @return The statements
*/
std::unique_ptr<CassandraStatements> const&
operator->() const
{
return statements_;
}
private:
std::unique_ptr<CassandraStatements> statements_{nullptr};
};
} // namespace data::cassandra


@@ -0,0 +1,140 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/cassandra/Concepts.hpp"
#include "data/cassandra/Handle.hpp"
#include "data/cassandra/Schema.hpp"
#include "data/cassandra/SettingsProvider.hpp"
#include "data/cassandra/Types.hpp"
#include "util/log/Logger.hpp"
#include <boost/json/string.hpp>
#include <fmt/compile.h>
#include <functional>
#include <memory>
namespace data::cassandra {
/**
* @brief Manages the DB schema and provides access to prepared statements.
*/
template <SomeSettingsProvider SettingsProviderType>
class KeyspaceSchema : public Schema<SettingsProviderType> {
public:
using Schema<SettingsProviderType>::Schema;
/**
* @brief Prepared statements holder with the AWS Keyspaces-specific queries
*/
struct KeyspaceStatements : public Schema<SettingsProviderType>::Statements {
using Schema<SettingsProviderType>::Statements::Statements;
//
// Insert queries
//
PreparedStatement insertLedgerRange = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
INSERT INTO {} (is_latest, sequence) VALUES (?, ?) IF NOT EXISTS
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")
)
);
}();
//
// Update (and "delete") queries
//
PreparedStatement updateLedgerRange = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
UPDATE {}
SET sequence = ?
WHERE is_latest = ?
IF sequence = ?
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")
)
);
}();
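// Note the split relative to the Cassandra schema: there, the first write and
// the compare-and-set share one statement via `IF sequence IN (?, null)`;
// here the "never written" case is handled by the separate
// `INSERT ... IF NOT EXISTS` above, presumably because Keyspaces does not
// accept the `IN (?, null)` condition form in lightweight transactions.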
PreparedStatement selectLedgerRange = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
SELECT sequence
FROM {}
WHERE is_latest in (True, False)
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")
)
);
}();
//
// Select queries
//
PreparedStatement selectNFTsAfterTaxonKeyspaces = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
SELECT token_id
FROM {}
WHERE issuer = ?
AND taxon > ?
ORDER BY taxon ASC, token_id ASC
LIMIT ?
)",
qualifiedTableName(settingsProvider_.get(), "issuer_nf_tokens_v2")
)
);
}();
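// Unlike the Cassandra variant, which pages with the clustering-tuple
// comparison `(taxon, token_id) > ?`, this statement can only advance by
// `taxon > ?` (multi-column relations are presumably a Keyspaces limitation),
// so a page boundary that lands mid-taxon likely needs client-side filtering
// on token_id before the next fetch.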
};
void
prepareStatements(Handle const& handle) override
{
LOG(log_.info()) << "Preparing aws keyspace statements";
statements_ = std::make_unique<KeyspaceStatements>(settingsProvider_, handle);
LOG(log_.info()) << "Finished preparing statements";
}
/**
* @brief Provides access to statements.
*
* @return The statements
*/
std::unique_ptr<KeyspaceStatements> const&
operator->() const
{
return statements_;
}
private:
std::unique_ptr<KeyspaceStatements> statements_{nullptr};
};
} // namespace data::cassandra


@@ -24,11 +24,10 @@
#include "data/cassandra/Types.hpp"
#include "util/log/Logger.hpp"
#include <boost/json/string.hpp>
#include <fmt/compile.h>
#include <functional>
#include <memory>
#include <optional>
#include <string>
#include <string_view>
#include <vector>
@@ -54,12 +53,15 @@ template <SomeSettingsProvider SettingsProviderType>
*/
template <SomeSettingsProvider SettingsProviderType>
class Schema {
protected:
util::Logger log_{"Backend"};
std::reference_wrapper<SettingsProviderType const> settingsProvider_;
public:
virtual ~Schema() = default;
/**
* @brief Construct a new Schema object
* @brief Shared schema definitions used by all Schema classes (Cassandra and Keyspaces)
*
* @param settingsProvider The settings provider
*/
@@ -335,6 +337,7 @@ public:
* @brief Prepared statements holder.
*/
class Statements {
protected:
std::reference_wrapper<SettingsProviderType const> settingsProvider_;
std::reference_wrapper<Handle const> handle_;
@@ -348,86 +351,6 @@ public:
Statements(SettingsProviderType const& settingsProvider, Handle const& handle)
: settingsProvider_{settingsProvider}, handle_{std::cref(handle)}
{
// initialize scylladb supported queries
if (settingsProvider_.get().getSettings().provider == "scylladb") {
selectAccountFromBeginningScylla = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
SELECT account
FROM {}
WHERE token(account) > 0
PER PARTITION LIMIT 1
LIMIT ?
)",
qualifiedTableName(settingsProvider_.get(), "account_tx")
)
);
}();
selectAccountFromTokenScylla = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
SELECT account
FROM {}
WHERE token(account) > token(?)
PER PARTITION LIMIT 1
LIMIT ?
)",
qualifiedTableName(settingsProvider_.get(), "account_tx")
)
);
}();
selectNFTsByIssuerScylla = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
SELECT token_id
FROM {}
WHERE issuer = ?
AND (taxon, token_id) > ?
ORDER BY taxon ASC, token_id ASC
LIMIT ?
)",
qualifiedTableName(settingsProvider_.get(), "issuer_nf_tokens_v2")
)
);
}();
updateLedgerRange = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
UPDATE {}
SET sequence = ?
WHERE is_latest = ?
IF sequence IN (?, null)
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")
)
);
}();
// AWS_keyspace supported queries
} else if (settingsProvider_.get().getSettings().provider == "aws_keyspace") {
selectNFTsAfterTaxonKeyspaces = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
SELECT token_id
FROM {}
WHERE issuer = ?
AND taxon > ?
ORDER BY taxon ASC, token_id ASC
LIMIT ?
)",
qualifiedTableName(settingsProvider_.get(), "issuer_nf_tokens_v2")
)
);
}();
}
}
//
@@ -607,31 +530,6 @@ public:
// Update (and "delete") queries
//
PreparedStatement insertLedgerRange = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
INSERT INTO {} (is_latest, sequence) VALUES (?, ?) IF NOT EXISTS
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")
)
);
}();
PreparedStatement updateLedgerRange = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
UPDATE {}
SET sequence = ?
WHERE is_latest = ?
IF sequence = ?
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")
)
);
}();
PreparedStatement deleteLedgerRange = [this]() {
return handle_.get().prepare(
fmt::format(
@@ -746,45 +644,6 @@ public:
);
}();
/*
Currently, these two SELECT statements are not used.
If we ever use them, we will need to change the PER PARTITION LIMIT to support Keyspaces.
PreparedStatement selectLedgerPageKeys = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
SELECT key
FROM {}
WHERE TOKEN(key) >= ?
AND sequence <= ?
PER PARTITION LIMIT 1
LIMIT ?
ALLOW FILTERING
)",
qualifiedTableName(settingsProvider_.get(), "objects")
)
);
}();
PreparedStatement selectLedgerPage = [this]() {
return handle_.get().prepare(
fmt::format(
R"(
SELECT object, key
FROM {}
WHERE TOKEN(key) >= ?
AND sequence <= ?
PER PARTITION LIMIT 1
LIMIT ?
ALLOW FILTERING
)",
qualifiedTableName(settingsProvider_.get(), "objects")
)
);
}();
*/
PreparedStatement getToken = [this]() {
return handle_.get().prepare(
fmt::format(
@@ -1004,15 +863,6 @@ public:
)
);
}();
// For ScyllaDB / Cassandra ONLY
std::optional<PreparedStatement> selectAccountFromBeginningScylla;
std::optional<PreparedStatement> selectAccountFromTokenScylla;
std::optional<PreparedStatement> selectNFTsByIssuerScylla;
// For AWS Keyspaces ONLY
// NOTE: AWS keyspace is not able to load cache with accounts
std::optional<PreparedStatement> selectNFTsAfterTaxonKeyspaces;
};
/**
@@ -1020,27 +870,8 @@ public:
*
* @param handle The handle to the DB
*/
void
prepareStatements(Handle const& handle)
{
LOG(log_.info()) << "Preparing cassandra statements";
statements_ = std::make_unique<Statements>(settingsProvider_, handle);
LOG(log_.info()) << "Finished preparing statements";
}
/**
* @brief Provides access to statements.
*
* @return The statements
*/
std::unique_ptr<Statements> const&
operator->() const
{
return statements_;
}
private:
std::unique_ptr<Statements> statements_{nullptr};
virtual void
prepareStatements(Handle const& handle) = 0;
};
} // namespace data::cassandra


@@ -97,7 +97,7 @@ SettingsProvider::parseSettings() const
settings.coreConnectionsPerHost = config_.get<uint32_t>("core_connections_per_host");
settings.queueSizeIO = config_.maybeValue<uint32_t>("queue_size_io");
settings.writeBatchSize = config_.get<std::size_t>("write_batch_size");
settings.provider = config_.get<std::string>("provider");
settings.provider = impl::providerFromString(config_.get<std::string>("provider"));
if (config_.getValueView("connect_timeout").hasValue()) {
auto const connectTimeoutSecond = config_.get<uint32_t>("connect_timeout");


@@ -25,8 +25,8 @@
#include <utility>
namespace data::cassandra {
namespace impl {
struct Settings;
class Session;
class Cluster;
@@ -36,6 +36,7 @@ struct Result;
class Statement;
class PreparedStatement;
struct Batch;
} // namespace impl
using Settings = impl::Settings;


@@ -61,8 +61,12 @@ Cluster::Cluster(Settings const& settings) : ManagedObject{cass_cluster_new(), k
cass_cluster_set_request_timeout(*this, settings.requestTimeout.count());
// TODO: AWS keyspace reads should be local_one to save cost
if (settings.provider == "aws_keyspace") {
if (settings.provider == cassandra::impl::Provider::Keyspace) {
if (auto const rc = cass_cluster_set_consistency(*this, CASS_CONSISTENCY_LOCAL_QUORUM); rc != CASS_OK) {
throw std::runtime_error(fmt::format("Error setting keyspace consistency: {}", cass_error_desc(rc)));
}
} else {
if (auto const rc = cass_cluster_set_consistency(*this, CASS_CONSISTENCY_QUORUM); rc != CASS_OK) {
throw std::runtime_error(fmt::format("Error setting cassandra consistency: {}", cass_error_desc(rc)));
}
}


@@ -20,6 +20,7 @@
#pragma once
#include "data/cassandra/impl/ManagedObject.hpp"
#include "util/Assert.hpp"
#include "util/log/Logger.hpp"
#include <cassandra.h>
@@ -35,6 +36,18 @@
namespace data::cassandra::impl {
enum class Provider { Cassandra, Keyspace };
inline Provider
providerFromString(std::string const& provider)
{
ASSERT(
provider == "cassandra" || provider == "aws_keyspace",
"Provider type must be one of 'cassandra' or 'aws_keyspace'"
);
return provider == "cassandra" ? Provider::Cassandra : Provider::Keyspace;
}
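// Usage sketch: the raw config string is validated and converted exactly once,
// at settings-parse time (see SettingsProvider::parseSettings above):
//
//   settings.provider = impl::providerFromString(config_.get<std::string>("provider"));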
// TODO: move Settings to public interface, not impl
/**
@@ -45,7 +58,7 @@ struct Settings {
static constexpr uint32_t kDEFAULT_MAX_WRITE_REQUESTS_OUTSTANDING = 10'000;
static constexpr uint32_t kDEFAULT_MAX_READ_REQUESTS_OUTSTANDING = 100'000;
static constexpr std::size_t kDEFAULT_BATCH_SIZE = 20;
static constexpr std::string kDEFAULT_PROVIDER = "cassandra";
static constexpr Provider kDEFAULT_PROVIDER = Provider::Cassandra;
/**
* @brief Represents the configuration of contact points for cassandra.
@@ -90,7 +103,7 @@ struct Settings {
std::size_t writeBatchSize = kDEFAULT_BATCH_SIZE;
/** @brief Provider flag distinguishing ScyllaDB/Cassandra from AWS Keyspaces */
std::string provider = kDEFAULT_PROVIDER;
Provider provider = kDEFAULT_PROVIDER;
/** @brief Size of the IO queue */
std::optional<uint32_t> queueSizeIO = std::nullopt; // NOLINT(readability-redundant-member-init)


@@ -58,14 +58,16 @@ public:
explicit Statement(std::string_view query, Args&&... args)
: ManagedObject{cass_statement_new_n(query.data(), query.size(), sizeof...(args)), kDELETER}
{
cass_statement_set_consistency(*this, CASS_CONSISTENCY_LOCAL_QUORUM);
// TODO: figure out how to set consistency level in config
// NOTE: Keyspaces doesn't support QUORUM consistency for writes
// cass_statement_set_consistency(*this, CASS_CONSISTENCY_LOCAL_QUORUM);
cass_statement_set_is_idempotent(*this, cass_true);
bind<Args...>(std::forward<Args>(args)...);
}
/* implicit */ Statement(CassStatement* ptr) : ManagedObject{ptr, kDELETER}
{
cass_statement_set_consistency(*this, CASS_CONSISTENCY_LOCAL_QUORUM);
// cass_statement_set_consistency(*this, CASS_CONSISTENCY_LOCAL_QUORUM);
cass_statement_set_is_idempotent(*this, cass_true);
}


@@ -19,7 +19,7 @@
#pragma once
namespace etlng {
namespace etl {
/**
* @brief The interface of a handler for amendment blocking
@@ -32,6 +32,12 @@ struct AmendmentBlockHandlerInterface {
*/
virtual void
notifyAmendmentBlocked() = 0;
/**
* @brief Stop the block handler from repeatedly executing
*/
virtual void
stop() = 0;
};
} // namespace etlng
} // namespace etl


@@ -7,14 +7,24 @@ target_sources(
ETLService.cpp
ETLState.cpp
LoadBalancer.cpp
MPTHelpers.cpp
NetworkValidatedLedgers.cpp
NFTHelpers.cpp
Source.cpp
MPTHelpers.cpp
impl/AmendmentBlockHandler.cpp
impl/AsyncGrpcCall.cpp
impl/Extraction.cpp
impl/ForwardingSource.cpp
impl/GrpcSource.cpp
impl/Loading.cpp
impl/Monitor.cpp
impl/SubscriptionSource.cpp
impl/TaskManager.cpp
impl/ext/Cache.cpp
impl/ext/Core.cpp
impl/ext/MPT.cpp
impl/ext/NFT.cpp
impl/ext/Successor.cpp
)
target_link_libraries(clio_etl PUBLIC clio_data)


@@ -21,12 +21,12 @@
#include "data/BackendInterface.hpp"
#include "data/LedgerCacheInterface.hpp"
#include "etl/CacheLoaderInterface.hpp"
#include "etl/CacheLoaderSettings.hpp"
#include "etl/impl/CacheLoader.hpp"
#include "etl/impl/CursorFromAccountProvider.hpp"
#include "etl/impl/CursorFromDiffProvider.hpp"
#include "etl/impl/CursorFromFixDiffNumProvider.hpp"
#include "etlng/CacheLoaderInterface.hpp"
#include "util/Assert.hpp"
#include "util/async/context/BasicExecutionContext.hpp"
#include "util/log/Logger.hpp"
@@ -48,7 +48,7 @@ namespace etl {
* @tparam ExecutionContextType The type of the execution context to use
*/
template <typename ExecutionContextType = util::async::CoroExecutionContext>
class CacheLoader : public etlng::CacheLoaderInterface {
class CacheLoader : public CacheLoaderInterface {
using CacheLoaderType = impl::CacheLoaderImpl<data::LedgerCacheInterface>;
util::Logger log_{"ETL"};


@@ -21,7 +21,7 @@
#include <cstdint>
namespace etlng {
namespace etl {
/**
* @brief An interface for the Cache Loader
@@ -50,4 +50,4 @@ struct CacheLoaderInterface {
wait() noexcept = 0;
};
} // namespace etlng
} // namespace etl


@@ -20,12 +20,12 @@
#pragma once
#include "data/Types.hpp"
#include "etlng/Models.hpp"
#include "etl/Models.hpp"
#include <cstdint>
#include <vector>
namespace etlng {
namespace etl {
/**
* @brief An interface for the Cache Updater
@@ -63,4 +63,4 @@ struct CacheUpdaterInterface {
setFull() = 0;
};
} // namespace etlng
} // namespace etl


@@ -1,7 +1,7 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
@@ -20,371 +20,387 @@
#include "etl/ETLService.hpp"
#include "data/BackendInterface.hpp"
#include "data/LedgerCacheInterface.hpp"
#include "data/Types.hpp"
#include "etl/CacheLoader.hpp"
#include "etl/CacheLoaderInterface.hpp"
#include "etl/CacheUpdaterInterface.hpp"
#include "etl/CorruptionDetector.hpp"
#include "etl/LoadBalancer.hpp"
#include "etl/ETLServiceInterface.hpp"
#include "etl/ETLState.hpp"
#include "etl/ExtractorInterface.hpp"
#include "etl/InitialLoadObserverInterface.hpp"
#include "etl/LedgerPublisherInterface.hpp"
#include "etl/LoadBalancerInterface.hpp"
#include "etl/LoaderInterface.hpp"
#include "etl/MonitorInterface.hpp"
#include "etl/MonitorProviderInterface.hpp"
#include "etl/NetworkValidatedLedgersInterface.hpp"
#include "etl/SystemState.hpp"
#include "etl/TaskManagerProviderInterface.hpp"
#include "etl/impl/AmendmentBlockHandler.hpp"
#include "etl/impl/ExtractionDataPipe.hpp"
#include "etl/impl/Extractor.hpp"
#include "etl/impl/CacheUpdater.hpp"
#include "etl/impl/Extraction.hpp"
#include "etl/impl/LedgerFetcher.hpp"
#include "etl/impl/LedgerLoader.hpp"
#include "etl/impl/LedgerPublisher.hpp"
#include "etl/impl/Transformer.hpp"
#include "etlng/ETLService.hpp"
#include "etlng/ETLServiceInterface.hpp"
#include "etlng/LoadBalancer.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "etlng/impl/LedgerPublisher.hpp"
#include "etlng/impl/MonitorProvider.hpp"
#include "etlng/impl/TaskManagerProvider.hpp"
#include "etlng/impl/ext/Cache.hpp"
#include "etlng/impl/ext/Core.hpp"
#include "etlng/impl/ext/MPT.hpp"
#include "etlng/impl/ext/NFT.hpp"
#include "etlng/impl/ext/Successor.hpp"
#include "etl/impl/Loading.hpp"
#include "etl/impl/MonitorProvider.hpp"
#include "etl/impl/Registry.hpp"
#include "etl/impl/Scheduling.hpp"
#include "etl/impl/TaskManager.hpp"
#include "etl/impl/TaskManagerProvider.hpp"
#include "etl/impl/ext/Cache.hpp"
#include "etl/impl/ext/Core.hpp"
#include "etl/impl/ext/MPT.hpp"
#include "etl/impl/ext/NFT.hpp"
#include "etl/impl/ext/Successor.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
#include "util/Assert.hpp"
#include "util/Constants.hpp"
#include "util/Profiler.hpp"
#include "util/async/AnyExecutionContext.hpp"
#include "util/config/ConfigDefinition.hpp"
#include "util/log/Logger.hpp"
#include <boost/asio/io_context.hpp>
#include <xrpl/beast/core/CurrentThreadName.h>
#include <boost/json/object.hpp>
#include <boost/signals2/connection.hpp>
#include <xrpl/protocol/LedgerHeader.h>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <memory>
#include <optional>
#include <stdexcept>
#include <thread>
#include <string>
#include <utility>
#include <vector>
namespace etl {
std::shared_ptr<etlng::ETLServiceInterface>
std::shared_ptr<ETLServiceInterface>
ETLService::makeETLService(
util::config::ClioConfigDefinition const& config,
boost::asio::io_context& ioc,
util::async::AnyExecutionContext ctx,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::shared_ptr<etlng::LoadBalancerInterface> balancer,
std::shared_ptr<LoadBalancerInterface> balancer,
std::shared_ptr<NetworkValidatedLedgersInterface> ledgers
)
{
std::shared_ptr<etlng::ETLServiceInterface> ret;
std::shared_ptr<ETLServiceInterface> ret;
if (config.get<bool>("__ng_etl")) {
ASSERT(
std::dynamic_pointer_cast<etlng::LoadBalancer>(balancer), "LoadBalancer type must be etlng::LoadBalancer"
);
auto state = std::make_shared<SystemState>();
state->isStrictReadonly = config.get<bool>("read_only");
auto state = std::make_shared<etl::SystemState>();
state->isStrictReadonly = config.get<bool>("read_only");
auto fetcher = std::make_shared<impl::LedgerFetcher>(backend, balancer);
auto extractor = std::make_shared<impl::Extractor>(fetcher);
auto publisher = std::make_shared<impl::LedgerPublisher>(ctx, backend, subscriptions, *state);
auto cacheLoader = std::make_shared<CacheLoader<>>(config, backend, backend->cache());
auto cacheUpdater = std::make_shared<impl::CacheUpdater>(backend->cache());
auto amendmentBlockHandler = std::make_shared<impl::AmendmentBlockHandler>(ctx, *state);
auto monitorProvider = std::make_shared<impl::MonitorProvider>();
auto fetcher = std::make_shared<etl::impl::LedgerFetcher>(backend, balancer);
auto extractor = std::make_shared<etlng::impl::Extractor>(fetcher);
auto publisher = std::make_shared<etlng::impl::LedgerPublisher>(ioc, backend, subscriptions, *state);
auto cacheLoader = std::make_shared<etl::CacheLoader<>>(config, backend, backend->cache());
auto cacheUpdater = std::make_shared<etlng::impl::CacheUpdater>(backend->cache());
auto amendmentBlockHandler = std::make_shared<etlng::impl::AmendmentBlockHandler>(ctx, *state);
auto monitorProvider = std::make_shared<etlng::impl::MonitorProvider>();
backend->setCorruptionDetector(CorruptionDetector{*state, backend->cache()});
backend->setCorruptionDetector(CorruptionDetector{*state, backend->cache()});
auto loader = std::make_shared<impl::Loader>(
backend,
impl::makeRegistry(
*state,
impl::CacheExt{cacheUpdater},
impl::CoreExt{backend},
impl::SuccessorExt{backend, backend->cache()},
impl::NFTExt{backend},
impl::MPTExt{backend}
),
amendmentBlockHandler,
state
);
auto loader = std::make_shared<etlng::impl::Loader>(
backend,
etlng::impl::makeRegistry(
*state,
etlng::impl::CacheExt{cacheUpdater},
etlng::impl::CoreExt{backend},
etlng::impl::SuccessorExt{backend, backend->cache()},
etlng::impl::NFTExt{backend},
etlng::impl::MPTExt{backend}
),
amendmentBlockHandler,
state
);
auto taskManagerProvider = std::make_shared<impl::TaskManagerProvider>(*ledgers, extractor, loader);
auto taskManagerProvider = std::make_shared<etlng::impl::TaskManagerProvider>(*ledgers, extractor, loader);
ret = std::make_shared<etlng::ETLService>(
ctx,
config,
backend,
balancer,
ledgers,
publisher,
cacheLoader,
cacheUpdater,
extractor,
loader, // loader itself
loader, // initial load observer
taskManagerProvider,
monitorProvider,
state
);
} else {
ASSERT(std::dynamic_pointer_cast<etl::LoadBalancer>(balancer), "LoadBalancer type must be etl::LoadBalancer");
ret = std::make_shared<etl::ETLService>(config, ioc, backend, subscriptions, balancer, ledgers);
}
ret = std::make_shared<ETLService>(
ctx,
config,
backend,
balancer,
ledgers,
publisher,
cacheLoader,
cacheUpdater,
extractor,
loader, // loader itself
loader, // initial load observer
taskManagerProvider,
monitorProvider,
state
);
// inject networkID into subscriptions, as the transaction feed requires it to inject CTIDs into responses
if (auto const state = ret->getETLState(); state)
subscriptions->setNetworkID(state->networkID);
if (auto const etlState = ret->getETLState(); etlState)
subscriptions->setNetworkID(etlState->networkID);
ret->run();
return ret;
}
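// Factory usage sketch (hypothetical application wiring; the factory already
// calls run(), so the caller only needs to keep the returned pointer alive):
//
//   auto etl = etl::ETLService::makeETLService(config, ioc, ctx, backend, subscriptions, balancer, ledgers);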
// Database must be populated when this starts
std::optional<uint32_t>
ETLService::runETLPipeline(uint32_t startSequence, uint32_t numExtractors)
ETLService::ETLService(
util::async::AnyExecutionContext ctx,
std::reference_wrapper<util::config::ClioConfigDefinition const> config,
std::shared_ptr<data::BackendInterface> backend,
std::shared_ptr<LoadBalancerInterface> balancer,
std::shared_ptr<NetworkValidatedLedgersInterface> ledgers,
std::shared_ptr<LedgerPublisherInterface> publisher,
std::shared_ptr<CacheLoaderInterface> cacheLoader,
std::shared_ptr<CacheUpdaterInterface> cacheUpdater,
std::shared_ptr<ExtractorInterface> extractor,
std::shared_ptr<LoaderInterface> loader,
std::shared_ptr<InitialLoadObserverInterface> initialLoadObserver,
std::shared_ptr<TaskManagerProviderInterface> taskManagerProvider,
std::shared_ptr<MonitorProviderInterface> monitorProvider,
std::shared_ptr<SystemState> state
)
: ctx_(std::move(ctx))
, config_(config)
, backend_(std::move(backend))
, balancer_(std::move(balancer))
, ledgers_(std::move(ledgers))
, publisher_(std::move(publisher))
, cacheLoader_(std::move(cacheLoader))
, cacheUpdater_(std::move(cacheUpdater))
, extractor_(std::move(extractor))
, loader_(std::move(loader))
, initialLoadObserver_(std::move(initialLoadObserver))
, taskManagerProvider_(std::move(taskManagerProvider))
, monitorProvider_(std::move(monitorProvider))
, state_(std::move(state))
, startSequence_(config.get().maybeValue<uint32_t>("start_sequence"))
, finishSequence_(config.get().maybeValue<uint32_t>("finish_sequence"))
{
if (finishSequence_ && startSequence > *finishSequence_)
return {};
ASSERT(not state_->isWriting, "ETL should never start in writer mode");
LOG(log_.debug()) << "Wait for cache containing seq " << startSequence - 1
<< " current cache last seq =" << backend_->cache().latestLedgerSequence();
backend_->cache().waitUntilCacheContainsSeq(startSequence - 1);
if (startSequence_.has_value())
LOG(log_.info()) << "Start sequence: " << *startSequence_;
LOG(log_.debug()) << "Starting etl pipeline";
state_.isWriting = true;
if (finishSequence_.has_value())
LOG(log_.info()) << "Finish sequence: " << *finishSequence_;
auto const rng = backend_->hardFetchLedgerRangeNoThrow();
ASSERT(rng.has_value(), "Parent ledger range can't be null");
ASSERT(
rng->maxSequence >= startSequence - 1,
"Got not parent ledger. rnd->maxSequence = {}, startSequence = {}",
rng->maxSequence,
startSequence
);
auto const begin = std::chrono::system_clock::now();
auto extractors = std::vector<std::unique_ptr<ExtractorType>>{};
auto pipe = DataPipeType{numExtractors, startSequence};
for (auto i = 0u; i < numExtractors; ++i) {
extractors.push_back(
std::make_unique<ExtractorType>(
pipe, networkValidatedLedgers_, ledgerFetcher_, startSequence + i, finishSequence_, state_
)
);
}
auto transformer =
TransformerType{pipe, backend_, ledgerLoader_, ledgerPublisher_, amendmentBlockHandler_, startSequence, state_};
transformer.waitTillFinished(); // suspend current thread until exit condition is met
pipe.cleanup(); // TODO: this should probably happen automatically using destructor
// wait for all of the extractors to stop
for (auto& t : extractors)
t->waitTillFinished();
auto const end = std::chrono::system_clock::now();
auto const lastPublishedSeq = ledgerPublisher_.getLastPublishedSequence();
static constexpr auto kNANOSECONDS_PER_SECOND = 1'000'000'000.0;
LOG(log_.debug()) << "Extracted and wrote " << lastPublishedSeq.value_or(startSequence) - startSequence << " in "
<< ((end - begin).count()) / kNANOSECONDS_PER_SECOND;
state_.isWriting = false;
LOG(log_.debug()) << "Stopping etl pipeline";
return lastPublishedSeq;
LOG(log_.info()) << "Starting in " << (state_->isStrictReadonly ? "STRICT READONLY MODE" : "WRITE MODE");
}
// Main loop of ETL.
// The software begins monitoring the ledgers that are validated by the network.
// The member networkValidatedLedgers_ keeps track of the sequences of ledgers validated by the network.
// Whenever a ledger is validated by the network, the software looks for that ledger in the database. Once the ledger is
// found in the database, the software publishes that ledger to the ledgers stream. If a network validated ledger is not
// found in the database after a certain amount of time, then the software attempts to take over responsibility of the
// ETL process, where it writes new ledgers to the database. The software will relinquish control of the ETL process if
// it detects that another process has taken over ETL.
void
ETLService::monitor()
ETLService::~ETLService()
{
auto rng = backend_->hardFetchLedgerRangeNoThrow();
if (!rng) {
LOG(log_.info()) << "Database is empty. Will download a ledger from the network.";
std::optional<ripple::LedgerHeader> ledger;
try {
if (startSequence_) {
LOG(log_.info()) << "ledger sequence specified in config. "
<< "Will begin ETL process starting with ledger " << *startSequence_;
ledger = ledgerLoader_.loadInitialLedger(*startSequence_);
} else {
LOG(log_.info()) << "Waiting for next ledger to be validated by network...";
std::optional<uint32_t> mostRecentValidated = networkValidatedLedgers_->getMostRecent();
if (mostRecentValidated) {
LOG(log_.info()) << "Ledger " << *mostRecentValidated << " has been validated. Downloading...";
ledger = ledgerLoader_.loadInitialLedger(*mostRecentValidated);
} else {
LOG(log_.info()) << "The wait for the next validated ledger has been aborted. "
"Exiting monitor loop";
return;
}
}
} catch (std::runtime_error const& e) {
LOG(log_.fatal()) << "Failed to load initial ledger: " << e.what();
amendmentBlockHandler_.notifyAmendmentBlocked();
return;
}
if (ledger) {
rng = backend_->hardFetchLedgerRangeNoThrow();
} else {
LOG(log_.error()) << "Failed to load initial ledger. Exiting monitor loop";
return;
}
} else {
if (startSequence_)
LOG(log_.warn()) << "start sequence specified but db is already populated";
LOG(log_.info()) << "Database already populated. Picking up from the tip of history";
cacheLoader_.load(rng->maxSequence);
}
ASSERT(rng.has_value(), "Ledger range can't be null");
uint32_t nextSequence = rng->maxSequence + 1;
LOG(log_.debug()) << "Database is populated. Starting monitor loop. sequence = " << nextSequence;
while (not isStopping()) {
nextSequence = publishNextSequence(nextSequence);
}
}
uint32_t
ETLService::publishNextSequence(uint32_t nextSequence)
{
if (auto rng = backend_->hardFetchLedgerRangeNoThrow(); rng && rng->maxSequence >= nextSequence) {
ledgerPublisher_.publish(nextSequence, {});
++nextSequence;
} else if (networkValidatedLedgers_->waitUntilValidatedByNetwork(nextSequence, util::kMILLISECONDS_PER_SECOND)) {
LOG(log_.info()) << "Ledger with sequence = " << nextSequence << " has been validated by the network. "
<< "Attempting to find in database and publish";
// Attempt to take over responsibility of ETL writer after 10 failed
// attempts to publish the ledger. publishLedger() fails if the
// ledger that has been validated by the network is not found in the
// database after the specified number of attempts. publishLedger()
// waits one second between each attempt to read the ledger from the
// database
constexpr size_t kTIMEOUT_SECONDS = 10;
bool const success = ledgerPublisher_.publish(nextSequence, kTIMEOUT_SECONDS);
if (!success) {
LOG(log_.warn()) << "Failed to publish ledger with sequence = " << nextSequence << " . Beginning ETL";
// returns the most recent sequence published. empty optional if no sequence was published
std::optional<uint32_t> lastPublished = runETLPipeline(nextSequence, extractorThreads_);
LOG(log_.info()) << "Aborting ETL. Falling back to publishing";
// if no ledger was published, don't increment nextSequence
if (lastPublished)
nextSequence = *lastPublished + 1;
} else {
++nextSequence;
}
}
return nextSequence;
}
void
ETLService::monitorReadOnly()
{
LOG(log_.debug()) << "Starting reporting in strict read only mode";
auto const latestSequenceOpt = [this]() -> std::optional<uint32_t> {
auto rng = backend_->hardFetchLedgerRangeNoThrow();
if (!rng) {
if (auto net = networkValidatedLedgers_->getMostRecent()) {
return net;
}
return std::nullopt;
}
return rng->maxSequence;
}();
if (!latestSequenceOpt.has_value()) {
return;
}
uint32_t latestSequence = *latestSequenceOpt;
cacheLoader_.load(latestSequence);
latestSequence++;
while (not isStopping()) {
if (auto rng = backend_->hardFetchLedgerRangeNoThrow(); rng && rng->maxSequence >= latestSequence) {
ledgerPublisher_.publish(latestSequence, {});
latestSequence = latestSequence + 1;
} else {
// if we can't, wait until it's validated by the network, or 1 second passes, whichever occurs
// first. Even if we don't hear from rippled, if ledgers are being written to the db, we publish
// them.
networkValidatedLedgers_->waitUntilValidatedByNetwork(latestSequence, util::kMILLISECONDS_PER_SECOND);
}
}
stop();
LOG(log_.debug()) << "Destroying ETL";
}
void
ETLService::run()
{
LOG(log_.info()) << "Starting reporting etl";
state_.isStopping = false;
LOG(log_.info()) << "Running ETL...";
doWork();
mainLoop_.emplace(ctx_.execute([this] {
auto const rng = loadInitialLedgerIfNeeded();
LOG(log_.info()) << "Waiting for next ledger to be validated by network...";
std::optional<uint32_t> const mostRecentValidated = ledgers_->getMostRecent();
if (not mostRecentValidated) {
LOG(log_.info()) << "The wait for the next validated ledger has been aborted. "
"Exiting monitor loop";
return;
}
if (not rng.has_value()) {
LOG(log_.warn()) << "Initial ledger download got cancelled - stopping ETL service";
return;
}
auto const nextSequence = rng->maxSequence + 1;
LOG(log_.debug()) << "Database is populated. Starting monitor loop. sequence = " << nextSequence;
startMonitor(nextSequence);
// If we are a writer as the result of loading the initial ledger - start loading
if (state_->isWriting)
startLoading(nextSequence);
}));
}
void
ETLService::doWork()
ETLService::stop()
{
worker_ = std::thread([this]() {
beast::setCurrentThreadName("ETLService worker");
LOG(log_.info()) << "Stop called";
if (state_.isStrictReadonly) {
monitorReadOnly();
} else {
monitor();
if (mainLoop_)
mainLoop_->wait();
if (taskMan_)
taskMan_->stop();
if (monitor_)
monitor_->stop();
}
boost::json::object
ETLService::getInfo() const
{
boost::json::object result;
result["etl_sources"] = balancer_->toJson();
result["is_writer"] = static_cast<int>(state_->isWriting);
result["read_only"] = static_cast<int>(state_->isStrictReadonly);
auto last = publisher_->getLastPublish();
if (last.time_since_epoch().count() != 0)
result["last_publish_age_seconds"] = std::to_string(publisher_->lastPublishAgeSeconds());
return result;
}
bool
ETLService::isAmendmentBlocked() const
{
return state_->isAmendmentBlocked;
}
bool
ETLService::isCorruptionDetected() const
{
return state_->isCorruptionDetected;
}
std::optional<ETLState>
ETLService::getETLState() const
{
return balancer_->getETLState();
}
std::uint32_t
ETLService::lastCloseAgeSeconds() const
{
return publisher_->lastCloseAgeSeconds();
}
std::optional<data::LedgerRange>
ETLService::loadInitialLedgerIfNeeded()
{
auto rng = backend_->hardFetchLedgerRangeNoThrow();
if (not rng.has_value()) {
ASSERT(
not state_->isStrictReadonly,
"Database is empty but this node is in strict readonly mode. Can't write initial ledger."
);
LOG(log_.info()) << "Database is empty. Will download a ledger from the network.";
state_->isWriting = true; // immediately become writer as the db is empty
auto const getMostRecent = [this]() {
LOG(log_.info()) << "Waiting for next ledger to be validated by network...";
return ledgers_->getMostRecent();
};
if (auto const maybeSeq = startSequence_.or_else(getMostRecent); maybeSeq.has_value()) {
auto const seq = *maybeSeq;
LOG(log_.info()) << "Starting from sequence " << seq
<< ". Initial ledger download and extraction can take a while...";
auto [ledger, timeDiff] = ::util::timed<std::chrono::duration<double>>([this, seq]() {
return extractor_->extractLedgerOnly(seq).and_then(
[this, seq](auto&& data) -> std::optional<ripple::LedgerHeader> {
// TODO: loadInitialLedger in balancer should be called fetchEdgeKeys or similar
auto res = balancer_->loadInitialLedger(seq, *initialLoadObserver_);
if (not res.has_value() and res.error() == InitialLedgerLoadError::Cancelled) {
LOG(log_.debug()) << "Initial ledger load got cancelled";
return std::nullopt;
}
ASSERT(res.has_value(), "Initial ledger retry logic failed");
data.edgeKeys = std::move(res).value();
return loader_->loadInitialLedger(data);
}
);
});
if (not ledger.has_value()) {
LOG(log_.error()) << "Failed to load initial ledger. Exiting monitor loop";
return std::nullopt;
}
LOG(log_.debug()) << "Time to download and store ledger = " << timeDiff;
LOG(log_.info()) << "Finished loadInitialLedger. cache size = " << backend_->cache().size();
return backend_->hardFetchLedgerRangeNoThrow();
}
});
LOG(log_.info()) << "The wait for the next validated ledger has been aborted. "
"Exiting monitor loop";
return std::nullopt;
}
LOG(log_.info()) << "Database already populated. Picking up from the tip of history";
cacheLoader_->load(rng->maxSequence);
return rng;
}
ETLService::ETLService(
util::config::ClioConfigDefinition const& config,
boost::asio::io_context& ioc,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::shared_ptr<etlng::LoadBalancerInterface> balancer,
std::shared_ptr<NetworkValidatedLedgersInterface> ledgers
)
: backend_(backend)
, loadBalancer_(balancer)
, networkValidatedLedgers_(std::move(ledgers))
, cacheLoader_(config, backend, backend->cache())
, ledgerFetcher_(backend, balancer)
, ledgerLoader_(backend, balancer, ledgerFetcher_, state_)
, ledgerPublisher_(ioc, backend, backend->cache(), subscriptions, state_)
, amendmentBlockHandler_(ioc, state_)
void
ETLService::startMonitor(uint32_t seq)
{
startSequence_ = config.maybeValue<uint32_t>("start_sequence");
finishSequence_ = config.maybeValue<uint32_t>("finish_sequence");
state_.isStrictReadonly = config.get<bool>("read_only");
extractorThreads_ = config.get<uint32_t>("extractor_threads");
monitor_ = monitorProvider_->make(ctx_, backend_, ledgers_, seq);
// This should probably be done in the backend factory but we don't have state available until here
backend_->setCorruptionDetector(CorruptionDetector{state_, backend->cache()});
monitorNewSeqSubscription_ = monitor_->subscribeToNewSequence([this](uint32_t seq) {
LOG(log_.info()) << "ETLService (via Monitor) got new seq from db: " << seq;
if (state_->writeConflict) {
LOG(log_.info()) << "Got a write conflict; Giving up writer seat immediately";
giveUpWriter();
}
if (not state_->isWriting) {
auto const diff = data::synchronousAndRetryOnTimeout([this, seq](auto yield) {
return backend_->fetchLedgerDiff(seq, yield);
});
cacheUpdater_->update(seq, diff);
backend_->updateRange(seq);
}
publisher_->publish(seq, {});
});
monitorDbStalledSubscription_ = monitor_->subscribeToDbStalled([this]() {
LOG(log_.warn()) << "ETLService received DbStalled signal from Monitor";
if (not state_->isStrictReadonly and not state_->isWriting)
attemptTakeoverWriter();
});
monitor_->run();
}
void
ETLService::startLoading(uint32_t seq)
{
ASSERT(not state_->isStrictReadonly, "This should only happen on writer nodes");
taskMan_ = taskManagerProvider_->make(ctx_, *monitor_, seq, finishSequence_);
// FIXME: this legacy name "extractor_threads" is no longer accurate (we have coroutines now)
taskMan_->run(config_.get().get<std::size_t>("extractor_threads"));
}
void
ETLService::attemptTakeoverWriter()
{
ASSERT(not state_->isStrictReadonly, "This should only happen on writer nodes");
auto rng = backend_->hardFetchLedgerRangeNoThrow();
ASSERT(rng.has_value(), "Ledger range can't be null");
state_->isWriting = true; // switch to writer
LOG(log_.info()) << "Taking over the ETL writer seat";
startLoading(rng->maxSequence + 1);
}
void
ETLService::giveUpWriter()
{
ASSERT(not state_->isStrictReadonly, "This should only happen on writer nodes");
state_->isWriting = false;
state_->writeConflict = false;
taskMan_ = nullptr;
}
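// Writer-seat transitions, as wired up in startMonitor() above:
//
//   reader --(DbStalled signal)--------> attemptTakeoverWriter() -> isWriting = true
//   writer --(writeConflict observed)--> giveUpWriter()          -> isWriting = false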
} // namespace etl


@@ -1,7 +1,7 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
@@ -20,57 +20,64 @@
#pragma once
#include "data/BackendInterface.hpp"
#include "etl/CacheLoader.hpp"
#include "data/Types.hpp"
#include "etl/CacheLoaderInterface.hpp"
#include "etl/CacheUpdaterInterface.hpp"
#include "etl/ETLServiceInterface.hpp"
#include "etl/ETLState.hpp"
#include "etl/ExtractorInterface.hpp"
#include "etl/InitialLoadObserverInterface.hpp"
#include "etl/LedgerPublisherInterface.hpp"
#include "etl/LoadBalancerInterface.hpp"
#include "etl/LoaderInterface.hpp"
#include "etl/MonitorInterface.hpp"
#include "etl/MonitorProviderInterface.hpp"
#include "etl/NetworkValidatedLedgersInterface.hpp"
#include "etl/SystemState.hpp"
#include "etl/TaskManagerInterface.hpp"
#include "etl/TaskManagerProviderInterface.hpp"
#include "etl/impl/AmendmentBlockHandler.hpp"
#include "etl/impl/ExtractionDataPipe.hpp"
#include "etl/impl/Extractor.hpp"
#include "etl/impl/CacheUpdater.hpp"
#include "etl/impl/Extraction.hpp"
#include "etl/impl/LedgerFetcher.hpp"
#include "etl/impl/LedgerLoader.hpp"
#include "etl/impl/LedgerPublisher.hpp"
#include "etl/impl/Transformer.hpp"
#include "etlng/ETLServiceInterface.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "etlng/impl/LedgerPublisher.hpp"
#include "etlng/impl/TaskManagerProvider.hpp"
#include "etl/impl/Loading.hpp"
#include "etl/impl/Registry.hpp"
#include "etl/impl/Scheduling.hpp"
#include "etl/impl/TaskManager.hpp"
#include "etl/impl/ext/Cache.hpp"
#include "etl/impl/ext/Core.hpp"
#include "etl/impl/ext/NFT.hpp"
#include "etl/impl/ext/Successor.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
#include "util/async/AnyExecutionContext.hpp"
#include "util/async/AnyOperation.hpp"
#include "util/config/ConfigDefinition.hpp"
#include "util/log/Logger.hpp"
#include <boost/asio/io_context.hpp>
#include <boost/json/object.hpp>
#include <grpcpp/grpcpp.h>
#include <org/xrpl/rpc/v1/get_ledger.pb.h>
#include <xrpl/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <boost/signals2/connection.hpp>
#include <fmt/format.h>
#include <xrpl/basics/Blob.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/basics/strHex.h>
#include <xrpl/proto/org/xrpl/rpc/v1/get_ledger.pb.h>
#include <xrpl/proto/org/xrpl/rpc/v1/ledger.pb.h>
#include <xrpl/protocol/LedgerHeader.h>
#include <xrpl/protocol/STTx.h>
#include <xrpl/protocol/TxFormats.h>
#include <xrpl/protocol/TxMeta.h>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <memory>
#include <optional>
#include <string>
#include <thread>
struct AccountTransactionsData;
struct NFTTransactionsData;
struct NFTsData;
/**
* @brief This namespace contains everything to do with the ETL and ETL sources.
*/
namespace etl {
/**
* @brief A tag class to help identify ETLService in templated code.
*/
struct ETLServiceTag {
virtual ~ETLServiceTag() = default;
};
template <typename T>
concept SomeETLService = std::derived_from<T, ETLServiceTag>;
/**
* @brief This class is responsible for continuously extracting data from a p2p node, and writing that data to the
* databases.
@@ -84,71 +91,42 @@ concept SomeETLService = std::derived_from<T, ETLServiceTag>;
* the others will fall back to monitoring/publishing. In this sense, this class dynamically transitions from monitoring
* to writing and from writing to monitoring, based on the activity of other processes running on different machines.
*/
class ETLService : public etlng::ETLServiceInterface, ETLServiceTag {
// TODO: make these template parameters in ETLService
using DataPipeType = etl::impl::ExtractionDataPipe<org::xrpl::rpc::v1::GetLedgerResponse>;
using CacheLoaderType = etl::CacheLoader<>;
using LedgerFetcherType = etl::impl::LedgerFetcher;
using ExtractorType = etl::impl::Extractor<DataPipeType, LedgerFetcherType>;
using LedgerLoaderType = etl::impl::LedgerLoader<LedgerFetcherType>;
using LedgerPublisherType = etl::impl::LedgerPublisher;
using AmendmentBlockHandlerType = etl::impl::AmendmentBlockHandler;
using TransformerType =
etl::impl::Transformer<DataPipeType, LedgerLoaderType, LedgerPublisherType, AmendmentBlockHandlerType>;
class ETLService : public ETLServiceInterface {
util::Logger log_{"ETL"};
util::async::AnyExecutionContext ctx_;
std::reference_wrapper<util::config::ClioConfigDefinition const> config_;
std::shared_ptr<BackendInterface> backend_;
std::shared_ptr<etlng::LoadBalancerInterface> loadBalancer_;
std::shared_ptr<NetworkValidatedLedgersInterface> networkValidatedLedgers_;
std::shared_ptr<LoadBalancerInterface> balancer_;
std::shared_ptr<NetworkValidatedLedgersInterface> ledgers_;
std::shared_ptr<LedgerPublisherInterface> publisher_;
std::shared_ptr<CacheLoaderInterface> cacheLoader_;
std::shared_ptr<CacheUpdaterInterface> cacheUpdater_;
std::shared_ptr<ExtractorInterface> extractor_;
std::shared_ptr<LoaderInterface> loader_;
std::shared_ptr<InitialLoadObserverInterface> initialLoadObserver_;
std::shared_ptr<TaskManagerProviderInterface> taskManagerProvider_;
std::shared_ptr<MonitorProviderInterface> monitorProvider_;
std::shared_ptr<SystemState> state_;
std::uint32_t extractorThreads_ = 1;
std::thread worker_;
CacheLoaderType cacheLoader_;
LedgerFetcherType ledgerFetcher_;
LedgerLoaderType ledgerLoader_;
LedgerPublisherType ledgerPublisher_;
AmendmentBlockHandlerType amendmentBlockHandler_;
SystemState state_;
size_t numMarkers_ = 2;
std::optional<uint32_t> startSequence_;
std::optional<uint32_t> finishSequence_;
std::unique_ptr<MonitorInterface> monitor_;
std::unique_ptr<TaskManagerInterface> taskMan_;
boost::signals2::scoped_connection monitorNewSeqSubscription_;
boost::signals2::scoped_connection monitorDbStalledSubscription_;
std::optional<util::async::AnyOperation<void>> mainLoop_;
public:
/**
* @brief Create an instance of ETLService.
*
* @param config The configuration to use
* @param ioc io context to run on
* @param backend BackendInterface implementation
* @param subscriptions Subscription manager
* @param balancer Load balancer to use
* @param ledgers The network validated ledgers datastructure
*/
ETLService(
util::config::ClioConfigDefinition const& config,
boost::asio::io_context& ioc,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::shared_ptr<etlng::LoadBalancerInterface> balancer,
std::shared_ptr<NetworkValidatedLedgersInterface> ledgers
);
/**
* @brief Move constructor is deleted because ETL service shares its fields by reference
*/
ETLService(ETLService&&) = delete;
/**
* @brief A factory function to spawn new ETLService instances.
*
* Creates and runs the ETL service.
*
* @param config The configuration to use
* @param ioc io context to run on
* @param ctx Execution context for asynchronous operations
* @param backend BackendInterface implementation
* @param subscriptions Subscription manager
@@ -156,182 +134,89 @@ public:
* @param ledgers The network validated ledgers datastructure
* @return A shared pointer to a new instance of ETLService
*/
static std::shared_ptr<etlng::ETLServiceInterface>
static std::shared_ptr<ETLServiceInterface>
makeETLService(
util::config::ClioConfigDefinition const& config,
boost::asio::io_context& ioc,
util::async::AnyExecutionContext ctx,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::shared_ptr<etlng::LoadBalancerInterface> balancer,
std::shared_ptr<LoadBalancerInterface> balancer,
std::shared_ptr<NetworkValidatedLedgersInterface> ledgers
);
/**
* @brief Stops components and joins the worker thread.
*/
~ETLService() override
{
if (not state_.isStopping)
stop();
}
/**
* @brief Stop the ETL service.
* @note This method blocks until the ETL service has stopped.
*/
void
stop() override
{
LOG(log_.info()) << "Stop called";
state_.isStopping = true;
cacheLoader_.stop();
if (worker_.joinable())
worker_.join();
LOG(log_.debug()) << "Joined ETLService worker thread";
}
/**
* @brief Get time passed since last ledger close, in seconds.
* @brief Create an instance of ETLService.
*
* @return Time passed since last ledger close
* @param ctx The execution context for asynchronous operations
* @param config The Clio configuration definition
* @param backend Interface to the backend database
* @param balancer Load balancer for distributing work
* @param ledgers Interface for accessing network validated ledgers
* @param publisher Interface for publishing ledger data
* @param cacheLoader Interface for loading cache data
* @param cacheUpdater Interface for updating cache data
* @param extractor The extractor to use
* @param loader Interface for loading data
* @param initialLoadObserver The observer for initial data loading
* @param taskManagerProvider The provider of the task manager instance
* @param monitorProvider The provider of the monitor instance
* @param state System state tracking object
*/
std::uint32_t
lastCloseAgeSeconds() const override
{
return ledgerPublisher_.lastCloseAgeSeconds();
}
ETLService(
util::async::AnyExecutionContext ctx,
std::reference_wrapper<util::config::ClioConfigDefinition const> config,
std::shared_ptr<data::BackendInterface> backend,
std::shared_ptr<LoadBalancerInterface> balancer,
std::shared_ptr<NetworkValidatedLedgersInterface> ledgers,
std::shared_ptr<LedgerPublisherInterface> publisher,
std::shared_ptr<CacheLoaderInterface> cacheLoader,
std::shared_ptr<CacheUpdaterInterface> cacheUpdater,
std::shared_ptr<ExtractorInterface> extractor,
std::shared_ptr<LoaderInterface> loader,
std::shared_ptr<InitialLoadObserverInterface> initialLoadObserver,
std::shared_ptr<TaskManagerProviderInterface> taskManagerProvider,
std::shared_ptr<MonitorProviderInterface> monitorProvider,
std::shared_ptr<SystemState> state
);
/**
* @brief Check for the amendment blocked state.
*
* @return true if currently amendment blocked; false otherwise
*/
bool
isAmendmentBlocked() const override
{
return state_.isAmendmentBlocked;
}
~ETLService() override;
/**
* @brief Check whether Clio detected DB corruptions.
*
* @return true if corruption of the DB was detected and the cache was stopped.
*/
bool
isCorruptionDetected() const override
{
return state_.isCorruptionDetected;
}
/**
* @brief Get state of ETL as a JSON object
*
* @return The state of ETL as a JSON object
*/
boost::json::object
getInfo() const override
{
boost::json::object result;
result["etl_sources"] = loadBalancer_->toJson();
result["is_writer"] = static_cast<int>(state_.isWriting);
result["read_only"] = static_cast<int>(state_.isStrictReadonly);
auto last = ledgerPublisher_.getLastPublish();
if (last.time_since_epoch().count() != 0)
result["last_publish_age_seconds"] = std::to_string(ledgerPublisher_.lastPublishAgeSeconds());
return result;
}
/**
* @brief Get the etl nodes' state
* @return The etl nodes' state, nullopt if etl nodes are not connected
*/
std::optional<etl::ETLState>
getETLState() const noexcept override
{
return loadBalancer_->getETLState();
}
/**
* @brief Start all components to run ETL service.
*/
void
run() override;
private:
/**
* @brief Run the ETL pipeline.
*
* Extracts ledgers and writes them to the database, until a write conflict occurs (or the server shuts down).
* @note The database must already be populated when this function is called
*
* @param startSequence the first ledger to extract
* @param numExtractors number of extractors to use
* @return The last ledger written to the database, if any
*/
std::optional<uint32_t>
runETLPipeline(uint32_t startSequence, uint32_t numExtractors);
/**
* @brief Monitor the network for newly validated ledgers.
*
* Also monitor the database to see if any process is writing those ledgers.
* This function is called when the application starts, and will only return when the application is shutting down.
* If the software detects the database is empty, this function will call loadInitialLedger(). If the software
* detects ledgers are not being written, this function calls runETLPipeline(). Otherwise, this function publishes
* ledgers as they are written to the database.
*/
void
monitor();
stop() override;
/**
* @brief Monitor the network for newly validated ledgers and publish them to the ledgers stream
*
* @param nextSequence the ledger sequence to publish
* @return The next ledger sequence to publish
*/
uint32_t
publishNextSequence(uint32_t nextSequence);
boost::json::object
getInfo() const override;
/**
* @brief Monitor the database for newly written ledgers.
*
* Similar to monitor(), except this function will never call runETLPipeline() or loadInitialLedger().
* This function only publishes ledgers as they are written to the database.
*/
void
monitorReadOnly();
/**
* @return true if stopping; false otherwise
*/
bool
isStopping() const
{
return state_.isStopping;
}
isAmendmentBlocked() const override;
bool
isCorruptionDetected() const override;
std::optional<ETLState>
getETLState() const override;
/**
* @brief Get the number of markers to use during the initial ledger download.
*
* This is equivalent to the degree of parallelism during the initial ledger download.
*
* @return The number of markers
*/
std::uint32_t
getNumMarkers() const
{
return numMarkers_;
}
lastCloseAgeSeconds() const override;
private:
std::optional<data::LedgerRange>
loadInitialLedgerIfNeeded();
/**
* @brief Spawn the worker thread and start monitoring.
*/
void
doWork();
startMonitor(uint32_t seq);
void
startLoading(uint32_t seq);
void
attemptTakeoverWriter();
void
giveUpWriter();
};
} // namespace etl
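For callers, the visible change in this header is the return type (etl::ETLServiceInterface instead of etlng::ETLServiceInterface) and the switch from a raw boost::asio::io_context to util::async::AnyExecutionContext. A hedged wiring sketch, with all variable names assumed rather than taken from clio's actual startup code:

```cpp
// Assumed to be constructed elsewhere during startup:
//   config        - util::config::ClioConfigDefinition
//   ctx           - util::async::AnyExecutionContext
//   backend       - std::shared_ptr<data::BackendInterface>
//   subscriptions - std::shared_ptr<feed::SubscriptionManagerInterface>
//   balancer      - std::shared_ptr<etl::LoadBalancerInterface>
//   ledgers       - std::shared_ptr<etl::NetworkValidatedLedgersInterface>
std::shared_ptr<etl::ETLServiceInterface> service =
    etl::ETLService::makeETLService(config, ctx, backend, subscriptions, balancer, ledgers);

service->run();  // starts monitoring; the node takes over writing on demand
```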

View File

@@ -26,7 +26,7 @@
#include <cstdint>
#include <optional>
namespace etlng {
namespace etl {
/**
* @brief This is a base class for any ETL service implementation.
@@ -77,7 +77,7 @@ struct ETLServiceInterface {
* @brief Get the etl nodes' state
* @return The etl nodes' state, nullopt if etl nodes are not connected
*/
[[nodiscard]] virtual std::optional<etl::ETLState>
[[nodiscard]] virtual std::optional<ETLState>
getETLState() const = 0;
/**
@@ -89,4 +89,4 @@ struct ETLServiceInterface {
lastCloseAgeSeconds() const = 0;
};
} // namespace etlng
} // namespace etl
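The interface body is untouched; only the namespace moves, so call sites migrate mechanically. A hypothetical caller before and after the rename:

```cpp
// Before: the interface lived in etlng, while ETLState stayed in etl:
//   std::shared_ptr<etlng::ETLServiceInterface> service = ...;
//   std::optional<etl::ETLState> state = service->getETLState();
// After: both live in etl, so the qualification inside the header drops away.
std::shared_ptr<etl::ETLServiceInterface> service = makeService();  // hypothetical factory
std::optional<etl::ETLState> state = service->getETLState();
if (not state.has_value()) {
    // etl nodes are not connected, per the doc comment above
}
```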

View File

@@ -19,12 +19,12 @@
#pragma once
#include "etlng/Models.hpp"
#include "etl/Models.hpp"
#include <cstdint>
#include <optional>
namespace etlng {
namespace etl {
/**
* @brief An interface for the Extractor
@@ -51,4 +51,4 @@ struct ExtractorInterface {
extractLedgerOnly(uint32_t seq) = 0;
};
} // namespace etlng
} // namespace etl

View File

@@ -19,7 +19,7 @@
#pragma once
#include "etlng/Models.hpp"
#include "etl/Models.hpp"
#include <xrpl/protocol/LedgerHeader.h>
@@ -28,7 +28,7 @@
#include <string>
#include <vector>
namespace etlng {
namespace etl {
/**
* @brief The interface for observing the initial ledger load
@@ -51,4 +51,4 @@ struct InitialLoadObserverInterface {
) = 0;
};
} // namespace etlng
} // namespace etl

View File

@@ -23,7 +23,7 @@
#include <cstdint>
#include <optional>
namespace etlng {
namespace etl {
/**
* @brief The interface of a scheduler for the extraction process
@@ -71,4 +71,4 @@ struct LedgerPublisherInterface {
lastPublishAgeSeconds() const = 0;
};
} // namespace etlng
} // namespace etl

View File

@@ -21,9 +21,10 @@
#include "data/BackendInterface.hpp"
#include "etl/ETLState.hpp"
#include "etl/InitialLoadObserverInterface.hpp"
#include "etl/LoadBalancerInterface.hpp"
#include "etl/NetworkValidatedLedgersInterface.hpp"
#include "etl/Source.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
#include "rpc/Errors.hpp"
#include "util/Assert.hpp"
@@ -64,7 +65,7 @@ using util::prometheus::Labels;
namespace etl {
std::shared_ptr<etlng::LoadBalancerInterface>
std::shared_ptr<LoadBalancerInterface>
LoadBalancer::makeLoadBalancer(
ClioConfigDefinition const& config,
boost::asio::io_context& ioc,
@@ -158,7 +159,6 @@ LoadBalancer::LoadBalancer(
auto source = sourceFactory(
*it,
ioc,
backend,
subscriptions,
validatedLedgers,
forwardingTimeout,
@@ -212,26 +212,32 @@ LoadBalancer::LoadBalancer(
}
}
std::vector<std::string>
LoadBalancer::loadInitialLedger(uint32_t sequence, std::chrono::steady_clock::duration retryAfter)
InitialLedgerLoadResult
LoadBalancer::loadInitialLedger(
uint32_t sequence,
InitialLoadObserverInterface& loadObserver,
std::chrono::steady_clock::duration retryAfter
)
{
std::vector<std::string> response;
execute(
[this, &response, &sequence](auto& source) {
auto [data, res] = source->loadInitialLedger(sequence, downloadRanges_);
InitialLedgerLoadResult response;
if (!res) {
execute(
[this, &response, &sequence, &loadObserver](auto& source) {
auto res = source->loadInitialLedger(sequence, downloadRanges_, loadObserver);
if (not res.has_value() and res.error() == InitialLedgerLoadError::Errored) {
LOG(log_.error()) << "Failed to download initial ledger."
<< " Sequence = " << sequence << " source = " << source->toString();
} else {
response = std::move(data);
return false; // should retry on error
}
return res;
response = std::move(res); // cancelled or data received
return true;
},
sequence,
retryAfter
);
return response;
}
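The new callback contract for execute() is visible in this hunk: returning false asks the balancer to retry against another source (the Errored case), while returning true ends the loop (data received or load cancelled). A self-contained sketch of that contract under assumed semantics; executeWithRetry is illustrative, not clio's actual execute():

```cpp
#include <chrono>
#include <functional>
#include <thread>
#include <vector>

// Illustrative only: cycle over sources, sleeping for `retryAfter` between
// full passes, until the callback reports completion by returning true.
template <typename Source>
void
executeWithRetry(
    std::vector<Source>& sources,
    std::function<bool(Source&)> const& attempt,
    std::chrono::steady_clock::duration retryAfter
)
{
    while (true) {
        for (auto& source : sources) {
            if (attempt(source))
                return;  // data received or load cancelled
        }
        std::this_thread::sleep_for(retryAfter);  // all sources errored; retry
    }
}
```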

View File

@@ -21,13 +21,12 @@
#include "data/BackendInterface.hpp"
#include "etl/ETLState.hpp"
#include "etl/InitialLoadObserverInterface.hpp"
#include "etl/LoadBalancerInterface.hpp"
#include "etl/NetworkValidatedLedgersInterface.hpp"
#include "etl/Source.hpp"
#include "etlng/InitialLoadObserverInterface.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
#include "rpc/Errors.hpp"
#include "util/Assert.hpp"
#include "util/Mutex.hpp"
#include "util/Random.hpp"
#include "util/ResponseExpirationCache.hpp"
@@ -54,7 +53,6 @@
#include <optional>
#include <string>
#include <string_view>
#include <utility>
#include <vector>
namespace etl {
@@ -76,7 +74,7 @@ concept SomeLoadBalancer = std::derived_from<T, LoadBalancerTag>;
* which ledgers have been validated by the network, and the range of ledgers each etl source has). This class also
* allows requests for ledger data to be load balanced across all possible ETL sources.
*/
class LoadBalancer : public etlng::LoadBalancerInterface, LoadBalancerTag {
class LoadBalancer : public LoadBalancerInterface, LoadBalancerTag {
public:
using RawLedgerObjectType = org::xrpl::rpc::v1::RawLedgerObject;
using GetLedgerResponseType = org::xrpl::rpc::v1::GetLedgerResponse;
@@ -149,7 +147,7 @@ public:
* @param backend BackendInterface implementation
* @param subscriptions Subscription manager
* @param randomGenerator A random generator to use for selecting sources
* @param validatedLedgers The network validated ledgers data structure
* @param validatedLedgers The network validated ledgers datastructure
* @param sourceFactory A factory function to create a source
* @return A shared pointer to a new instance of LoadBalancer
*/
@@ -164,20 +162,6 @@ public:
SourceFactory sourceFactory = makeSource
);
/**
* @brief Load the initial ledger, writing data to the queue.
* @note This function will retry indefinitely until the ledger is downloaded.
*
* @param sequence Sequence of ledger to download
* @param retryAfter Time to wait between retries (2 seconds by default)
* @return A std::vector<std::string> containing the ledger data
*/
std::vector<std::string>
loadInitialLedger(
uint32_t sequence,
std::chrono::steady_clock::duration retryAfter = std::chrono::seconds{2}
) override;
/**
* @brief Load the initial ledger, writing data to the queue.
* @note This function will retry indefinitely until the ledger is downloaded or the download is cancelled.
@@ -187,16 +171,12 @@ public:
* @param retryAfter Time to wait between retries (2 seconds by default)
* @return A std::expected with ledger edge keys on success, or InitialLedgerLoadError on failure
*/
etlng::InitialLedgerLoadResult
InitialLedgerLoadResult
loadInitialLedger(
[[maybe_unused]] uint32_t sequence,
[[maybe_unused]] etlng::InitialLoadObserverInterface& observer,
[[maybe_unused]] std::chrono::steady_clock::duration retryAfter
) override
{
ASSERT(false, "Not available for old ETL");
std::unreachable();
}
uint32_t sequence,
InitialLoadObserverInterface& observer,
std::chrono::steady_clock::duration retryAfter
) override;
/**
* @brief Fetch data for a specific ledger.

View File

@@ -21,7 +21,7 @@
#pragma once
#include "etl/ETLState.hpp"
#include "etlng/InitialLoadObserverInterface.hpp"
#include "etl/InitialLoadObserverInterface.hpp"
#include "rpc/Errors.hpp"
#include <boost/asio/spawn.hpp>
@@ -37,7 +37,7 @@
#include <string>
#include <vector>
namespace etlng {
namespace etl {
/**
* @brief Represents possible errors for initial ledger load
@@ -76,21 +76,10 @@ public:
[[nodiscard]] virtual InitialLedgerLoadResult
loadInitialLedger(
uint32_t sequence,
etlng::InitialLoadObserverInterface& loader,
InitialLoadObserverInterface& loader,
std::chrono::steady_clock::duration retryAfter = std::chrono::seconds{2}
) = 0;
/**
* @brief Load the initial ledger, writing data to the queue.
* @note This function will retry indefinitely until the ledger is downloaded.
*
* @param sequence Sequence of ledger to download
* @param retryAfter Time to wait between retries (2 seconds by default)
* @return A std::vector<std::string> containing the ledger data
*/
[[nodiscard]] virtual std::vector<std::string>
loadInitialLedger(uint32_t sequence, std::chrono::steady_clock::duration retryAfter = std::chrono::seconds{2}) = 0;
/**
* @brief Fetch data for a specific ledger.
*
@@ -141,7 +130,7 @@ public:
* @brief Return state of ETL nodes.
* @return ETL state, nullopt if etl nodes not available
*/
[[nodiscard]] virtual std::optional<etl::ETLState>
[[nodiscard]] virtual std::optional<ETLState>
getETLState() noexcept = 0;
/**
@@ -154,4 +143,4 @@ public:
stop(boost::asio::yield_context yield) = 0;
};
} // namespace etlng
} // namespace etl
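With the legacy vector-returning overload removed, consumers handle the std::expected-based result exclusively. A hedged sketch, assuming InitialLedgerLoadResult carries the ledger edge keys on success as the doc comment states (variable names are illustrative):

```cpp
// `balancer` is a std::shared_ptr<etl::LoadBalancerInterface>;
// `observer` implements etl::InitialLoadObserverInterface.
etl::InitialLedgerLoadResult result =
    balancer->loadInitialLedger(seq, observer);  // retryAfter defaults to 2s

if (result.has_value()) {
    auto const& edgeKeys = *result;  // ledger edge keys on success
    // ... continue the initial load with the edge keys ...
} else {
    // Errored attempts are retried inside LoadBalancer (see the .cpp diff
    // above), so reaching here typically means the download was cancelled.
}
```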

View File

@@ -19,14 +19,14 @@
#pragma once
#include "etlng/Models.hpp"
#include "etl/Models.hpp"
#include <xrpl/protocol/LedgerHeader.h>
#include <expected>
#include <optional>
namespace etlng {
namespace etl {
/**
* @brief Enumeration of possible errors that can occur during loading operations
@@ -59,4 +59,4 @@ struct LoaderInterface {
loadInitialLedger(model::LedgerData const& data) = 0;
};
} // namespace etlng
} // namespace etl

View File

@@ -38,7 +38,7 @@
#include <string>
#include <vector>
namespace etlng::model {
namespace etl::model {
/**
* @brief A specification for the Registry.
@@ -190,4 +190,4 @@ struct Task {
uint32_t seq;
};
} // namespace etlng::model
} // namespace etl::model

View File

@@ -26,7 +26,7 @@
#include <chrono>
#include <cstdint>
namespace etlng {
namespace etl {
/**
* @brief An interface for the monitor service
@@ -88,4 +88,4 @@ public:
stop() = 0;
};
} // namespace etlng
} // namespace etl

Some files were not shown because too many files have changed in this diff.