Compare commits


84 Commits

Author SHA1 Message Date
Sergey Kuznetsov
f04d2a97ec chore: Commits for 2.5.0 (#2268) 2025-06-30 11:32:26 +01:00
Ayaz Salikhov
8fcc2dfa19 fix: Pin lxml<6.0.0 (#2269) 2025-06-27 18:56:14 +01:00
Ayaz Salikhov
123e09695e feat: Switch to xrpl/2.5.0 release (#2267) 2025-06-27 17:13:05 +01:00
Peter Chen
371237487b feat: Support single asset vault (#1979)
fixes #1921

---------

Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-06-27 15:27:34 +01:00
Ayaz Salikhov
d97f19ba1d style: Fix JSON style in C++ code (#2262) 2025-06-27 11:45:11 +01:00
github-actions[bot]
e92dbc8fce style: clang-tidy auto fixes (#2264)
Fixes #2263. Please review and commit clang-tidy fixes.

Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-06-27 10:30:17 +01:00
Ayaz Salikhov
769fdab6b7 feat: Add Support For Token Escrow (#2252)
Fix: https://github.com/XRPLF/clio/issues/2174
2025-06-26 18:03:26 +01:00
Ayaz Salikhov
363344d36e feat: Add init_conan script (#2242)
This should make the life of a developer much easier
2025-06-26 17:12:32 +01:00
Ayaz Salikhov
4f7e8194f0 fix: Don't cancel ci image builds (#2259) 2025-06-26 14:51:34 +01:00
dependabot[bot]
04c80c62f5 ci: [DEPENDABOT] bump docker/setup-buildx-action from 3.11.0 to 3.11.1 in /.github/actions/build_docker_image (#2256) 2025-06-23 12:26:26 +01:00
Ayaz Salikhov
92fdebf590 chore: Update conan.lock (#2250) 2025-06-23 12:11:03 +01:00
Ayaz Salikhov
b054a8424d refactor: Refactor GCC image to make upgrades easier (#2246)
Work on: https://github.com/XRPLF/clio/issues/2047
2025-06-23 12:07:44 +01:00
Ayaz Salikhov
162b1305e0 feat: Download and upload conan packages in parallel (#2254) 2025-06-23 12:06:38 +01:00
Ayaz Salikhov
bdaa04d1ec ci: Don't use dynamic names when they are not needed (#2251) 2025-06-23 11:44:17 +01:00
Ayaz Salikhov
6c69453bda fix: Disable conan uploads on schedule (#2253) 2025-06-23 11:41:31 +01:00
Ayaz Salikhov
7661ee6a3b fix: Make update-libxrpl-version work with lockfile (#2249) 2025-06-23 11:38:28 +01:00
Ayaz Salikhov
6cabe89601 chore: Don't use self-hosted runner tag (#2248) 2025-06-23 11:35:34 +01:00
Ayaz Salikhov
70f7635dda refactor: Simplify check_config implementation (#2247) 2025-06-23 11:34:43 +01:00
Ayaz Salikhov
c8574ea42a chore: Cleanup conanfile (#2243) 2025-06-23 11:28:22 +01:00
Ayaz Salikhov
e4fbf5131f feat: Build sanitizers with clang (#2239) 2025-06-23 11:26:05 +01:00
Ayaz Salikhov
87ee358297 ci: Silence brew warnings (#2255) 2025-06-23 11:22:19 +01:00
github-actions[bot]
27e29d0421 style: clang-tidy auto fixes (#2245)
Fixes #2244. Please review and commit clang-tidy fixes.

Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-06-19 11:27:06 +01:00
Alex Kremer
63ec563135 feat: ETLng cleanup and graceful shutdown (#2232) 2025-06-18 21:40:11 +01:00
Ayaz Salikhov
2c6f52a0ed ci: Full build conan dependencies only on schedule (#2241) 2025-06-18 20:03:58 +01:00
Ayaz Salikhov
97956b1718 feat: Build macos dependencies with sanitizers (#2240) 2025-06-18 18:20:26 +01:00
Ayaz Salikhov
ebfe4e6468 ci: Don't use save/restore cache for conan; use artifactory (#2230) 2025-06-18 15:25:40 +01:00
Ayaz Salikhov
534518f13e docs: Add information about global.conf (#2238) 2025-06-18 11:26:46 +01:00
Ayaz Salikhov
4ed51c22d0 fix: Force reupload new artifacts (#2236)
The issue is that we previously didn't take the `[conf]` section into
account. For example, we uploaded the `clang.ubsan` build with the same
package_id as a regular clang build.
This was fixed in https://github.com/XRPLF/clio/pull/2233 and
https://github.com/XRPLF/clio/pull/2234

Adding `global.conf` almost fixed the problem, but since our
non-sanitized builds don't set anything in `[conf]`, they use the same
package_id as before.
So, for the `clang` build we end up with the previously uploaded
`clang.ubsan` build artifacts.

To fix this, we should force the upload.
2025-06-18 11:14:03 +01:00
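The collision described in this commit message can be sketched in Python. This is a simplified stand-in for Conan's real package_id computation (the hashing scheme and field names here are illustrative assumptions, not Conan's actual algorithm): an id derived only from settings and options cannot distinguish two builds whose difference lives entirely in `[conf]`.

```python
import hashlib
import json

def package_id(settings, options, conf=None):
    # Simplified stand-in for Conan's package_id: hash the inputs that
    # are supposed to distinguish binary packages from each other.
    payload = {"settings": settings, "options": options, "conf": conf or {}}
    blob = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:10]

settings = {"compiler": "clang", "build_type": "Release"}

plain = package_id(settings, {})
# ubsan flags live only in [conf]; if conf is ignored, the id collides
ubsan_no_conf = package_id(settings, {})
ubsan_with_conf = package_id(
    settings, {}, {"tools.build:cxxflags": ["-fsanitize=undefined"]}
)

print(plain == ubsan_no_conf)    # True: the two builds collide
print(plain == ubsan_with_conf)  # False: including conf separates them
```

This is why adding `global.conf` changed the package_id of sanitized builds only, and why the unchanged id of plain builds still pointed at the stale artifacts until a forced upload.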
dependabot[bot]
4364c07f1e ci: [DEPENDABOT] bump docker/setup-buildx-action from 3.10.0 to 3.11.0 in /.github/actions/build_docker_image (#2235)
Bumps
[docker/setup-buildx-action](https://github.com/docker/setup-buildx-action)
from 3.10.0 to 3.11.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/docker/setup-buildx-action/releases">docker/setup-buildx-action's
releases</a>.</em></p>
<blockquote>
<h2>v3.11.0</h2>
<ul>
<li>Keep BuildKit state support by <a
href="https://github.com/crazy-max"><code>@​crazy-max</code></a> in <a
href="https://redirect.github.com/docker/setup-buildx-action/pull/427">docker/setup-buildx-action#427</a></li>
<li>Remove aliases created when installing by default by <a
href="https://github.com/hashhar"><code>@​hashhar</code></a> in <a
href="https://redirect.github.com/docker/setup-buildx-action/pull/139">docker/setup-buildx-action#139</a></li>
<li>Bump <code>@​docker/actions-toolkit</code> from 0.56.0 to 0.62.1 in
<a
href="https://redirect.github.com/docker/setup-buildx-action/pull/422">docker/setup-buildx-action#422</a>
<a
href="https://redirect.github.com/docker/setup-buildx-action/pull/425">docker/setup-buildx-action#425</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/docker/setup-buildx-action/compare/v3.10.0...v3.11.0">https://github.com/docker/setup-buildx-action/compare/v3.10.0...v3.11.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="18ce135bb5"><code>18ce135</code></a>
Merge pull request <a
href="https://redirect.github.com/docker/setup-buildx-action/issues/425">#425</a>
from docker/dependabot/npm_and_yarn/docker/actions-to...</li>
<li><a
href="0e198e93af"><code>0e198e9</code></a>
chore: update generated content</li>
<li><a
href="05f3f3ac10"><code>05f3f3a</code></a>
build(deps): bump <code>@​docker/actions-toolkit</code> from 0.61.0 to
0.62.1</li>
<li><a
href="622913496d"><code>6229134</code></a>
Merge pull request <a
href="https://redirect.github.com/docker/setup-buildx-action/issues/427">#427</a>
from crazy-max/keep-state</li>
<li><a
href="c6f6a07025"><code>c6f6a07</code></a>
chore: update generated content</li>
<li><a
href="6c5e29d848"><code>6c5e29d</code></a>
skip builder creation if one already exists with the same name</li>
<li><a
href="548b297749"><code>548b297</code></a>
ci: keep-state check</li>
<li><a
href="36590ad0c1"><code>36590ad</code></a>
check if driver compatible with keep-state</li>
<li><a
href="4143b5899b"><code>4143b58</code></a>
Support to retain cache</li>
<li><a
href="3f1544eb9e"><code>3f1544e</code></a>
Merge pull request <a
href="https://redirect.github.com/docker/setup-buildx-action/issues/139">#139</a>
from hashhar/hashhar/cleanup-aliases</li>
<li>Additional commits viewable in <a
href="b5ca514318...18ce135bb5">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/setup-buildx-action&package-manager=github_actions&previous-version=3.10.0&new-version=3.11.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-17 21:30:16 +01:00
Ayaz Salikhov
f20efae75a fix: Use new CI image with global.conf for sanitizers to affect packa… (#2234) 2025-06-17 19:26:25 +01:00
Ayaz Salikhov
67b27ee344 fix: Update CI image to provide global.conf for sanitizers to affect … (#2233)
…package_id
2025-06-17 18:45:41 +01:00
Ayaz Salikhov
082f2fe21e style: Put static before type (#2231) 2025-06-17 16:07:19 +01:00
Ayaz Salikhov
7584a683dd fix: Add domain to book_changes (#2229)
Fix: https://github.com/XRPLF/clio/issues/2221
2025-06-17 14:22:06 +01:00
Ayaz Salikhov
f58c85d203 fix: Return domainMalformed when domain is malformed (#2228)
Fix: https://github.com/XRPLF/clio/issues/2222

Code in rippled to handle errors:
https://github.com/XRPLF/rippled/blob/2.5.0-rc1/src/xrpld/rpc/handlers/BookOffers.cpp#L183
2025-06-16 15:42:19 +01:00
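The error mapping in that fix can be illustrated with a hedged Python sketch (the function name and the 64-hex-character check are hypothetical illustrations, not Clio's actual C++ code): a malformed `domain` field should yield the specific `domainMalformed` error rather than a generic one.

```python
import re

def validate_domain(domain):
    # Hypothetical sketch: treat the domain as a 64-character hex hash
    # and return the specific error code when it doesn't parse.
    if not isinstance(domain, str) or not re.fullmatch(r"[0-9A-Fa-f]{64}", domain):
        return "domainMalformed"
    return None

print(validate_domain("not-a-hash"))  # domainMalformed
print(validate_domain("A" * 64))      # None
```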
Ayaz Salikhov
95698ee2de fix: Run upload_conan_deps.yml on conan.lock changes (#2227) 2025-06-13 17:40:55 +01:00
Alex Kremer
3d3db68508 feat: Support start and finish sequence in ETLng (#2226)
This PR adds support for start and finish sequence specified through the
config (for parity).
2025-06-13 17:39:21 +01:00
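Conceptually, start/finish support bounds the range of ledger sequences the extractor processes. A hedged Python sketch (function and parameter names are hypothetical, not the actual ETLng code): with no finish sequence the extractor keeps following the network indefinitely.

```python
import itertools

def ledger_range(latest_validated, start=None, finish=None):
    # Hypothetical sketch: derive the extraction range from optional
    # start/finish sequences supplied via the config; with no finish,
    # keep yielding sequences forever.
    seq = start if start is not None else latest_validated
    while finish is None or seq <= finish:
        yield seq
        seq += 1

print(list(ledger_range(100, start=5, finish=8)))    # [5, 6, 7, 8]
print(list(itertools.islice(ledger_range(100), 3)))  # [100, 101, 102]
```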
Ayaz Salikhov
7fcabd1ce7 feat: Build all possible conan configurations in CI (#2225) 2025-06-13 16:53:04 +01:00
Ayaz Salikhov
59bb9a11ab ci: Upload conan deps for all profiles (#2217) 2025-06-13 13:55:35 +01:00
Ayaz Salikhov
ac5fcc7f4b feat: Add conan lockfile (#2220) 2025-06-13 13:51:59 +01:00
Ayaz Salikhov
3d0e722176 fix: Use conan v2 dirs and commands in docs (#2219) 2025-06-12 21:50:14 +01:00
Ayaz Salikhov
93add775b2 fix: Make GHCR lowercase (#2218) 2025-06-12 20:42:36 +01:00
Alex Kremer
0273ba0da3 chore: Unreachable code is error (#2216) 2025-06-12 16:16:11 +01:00
Ayaz Salikhov
276477c494 feat: Build GCC natively and then merge the image (#2212) 2025-06-12 15:48:10 +01:00
github-actions[bot]
d0b2a24a30 style: clang-tidy auto fixes (#2215) 2025-06-12 10:47:34 +01:00
Ayaz Salikhov
e44a058b13 chore: Don't flex and don't install bison in CI image (#2210)
Test in: https://github.com/XRPLF/clio/pull/2211
2025-06-11 17:57:00 +01:00
Alex Kremer
743c9b92de feat: Read-write switching in ETLng (#2199)
Fixes #1597
2025-06-11 17:53:14 +01:00
Alex Kremer
35c90e64ec feat: Add flags to deps for sanitizer builds (#2205)
Fix: https://github.com/XRPLF/clio/issues/2198
Tested in #2208
2025-06-11 16:26:09 +01:00
Ayaz Salikhov
6e0d7a0fac feat: Pass sanitizer as part of conan_profile (#2189)
I noticed we don't need the `sanitizer` value anymore, so I removed it.
2025-06-10 16:04:00 +01:00
Ayaz Salikhov
d3c98ab2a8 fix: Only set compile flag for grpc (#2204)
I built all conan packages locally, and this flag is only required for
grpc, so let's only set it for grpc.
This is better: it's explicit, and we'll know that if we update the
grpc recipe, we can remove this.

I also uploaded all rebuilt packages to the artifactory.
2025-06-10 15:57:23 +01:00
Ayaz Salikhov
2d172f470d feat: Always build with native arch in Conan 2 (#2201)
Will test it works in https://github.com/XRPLF/clio/pull/2202

Work on: https://github.com/XRPLF/clio/issues/1692
2025-06-09 15:47:43 +01:00
Sergey Kuznetsov
b8b82e5dd9 chore: Commits for 2.5.0-b3 (#2197) 2025-06-09 11:48:12 +01:00
Ayaz Salikhov
a68229e9d7 feat: Use Conan 2 (#2179)
Merge right after: https://github.com/XRPLF/clio/pull/2178
Waits for: https://github.com/XRPLF/rippled/pull/5462
2025-06-06 19:55:46 +01:00
Ayaz Salikhov
cec8b29998 ci: Update CI image to use Conan 2 (#2178)
I pushed this image to my fork and will use it to test that everything works
2025-06-06 19:37:43 +01:00
Ayaz Salikhov
13524be6cc feat: Support Permissioned DEX (#2152)
Fix: https://github.com/XRPLF/clio/issues/2143

Will add tests
2025-06-06 17:12:18 +01:00
Ayaz Salikhov
2bf582839e feat: Add ubsan build to nightly release (#2190) 2025-06-06 17:11:48 +01:00
Peter Chen
ed47db7d39 chore: update libxrpl (#2186) 2025-06-06 15:25:21 +01:00
Ayaz Salikhov
4994f9db92 fix: Analyze build time on clang as well (#2195) 2025-06-06 14:01:37 +01:00
Ayaz Salikhov
50851aed16 chore: Use gcc from Docker Hub for now (#2194) 2025-06-06 13:02:37 +01:00
github-actions[bot]
4118fb2584 style: clang-tidy auto fixes (#2193) 2025-06-06 10:40:42 +01:00
Sergey Kuznetsov
837a547849 chore: Revert "feat: Use new web server by default (#2182)" (#2187)
An issue was found in the new web server, so we can't use it by
default for now.
This reverts commit b3f3259b14.
2025-06-05 17:35:21 +01:00
Ayaz Salikhov
b57618a211 fix: Add README to docker/compilers (#2188) 2025-06-05 15:48:45 +01:00
Ayaz Salikhov
588ed91d1b feat: New docker images structure, tools image (#2185) 2025-06-05 15:11:10 +01:00
Sergey Kuznetsov
4904c4b4d4 chore: Cancellable coroutines (#2180)
Add a wrapper for `boost::asio::yield_context` to make async operations
cancellable by default. The API may be adjusted a bit when we start
switching our code to use it.
2025-06-05 12:17:58 +01:00
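As a rough conceptual analogy in Python (asyncio rather than Boost.Asio; this is not the Clio API): wrapping an async operation in a task makes it cancellable at any suspension point by default, and an optional timeout can trigger that cancellation automatically.

```python
import asyncio

async def run_cancellable(coro, timeout=None):
    # Conceptual analogy of "cancellable by default": the wrapped
    # operation runs as a task, so it can be cancelled at any await
    # point; a timeout cancels it for us.
    task = asyncio.ensure_future(coro)
    try:
        return await asyncio.wait_for(task, timeout)
    except asyncio.TimeoutError:
        return None

async def slow():
    await asyncio.sleep(5)
    return "done"

async def quick():
    return "done"

print(asyncio.run(run_cancellable(quick())))       # done
print(asyncio.run(run_cancellable(slow(), 0.01)))  # None
```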
Ayaz Salikhov
03070d7582 feat: Create releases in CI (#2168)
Fix: https://github.com/XRPLF/clio/issues/1779
2025-06-04 16:25:03 +01:00
Sergey Kuznetsov
b3f3259b14 feat: Use new web server by default (#2182)
Fixes #1781.
2025-06-04 15:01:30 +01:00
github-actions[bot]
357b96ab0d style: clang-tidy auto fixes (#2184)
Fixes #2183. Please review and commit clang-tidy fixes.

Co-authored-by: mathbunnyru <12270691+mathbunnyru@users.noreply.github.com>
2025-06-03 18:15:08 +01:00
Ayaz Salikhov
550f0fae85 refactor: Use std::expected instead of std::variant for errors (#2160) 2025-06-03 13:34:25 +01:00
Ayaz Salikhov
19257f8aa9 style: Beautify installation lists in Dockerfile (#2172)
Needed for: https://github.com/XRPLF/clio/pull/2168
2025-06-02 11:48:37 +01:00
Ayaz Salikhov
49b4af1a56 fix: Add style to the name of pre-commit autoupdate PR title (#2177) 2025-06-02 11:43:44 +01:00
Ayaz Salikhov
c7600057bc ci: Install zip in Dockerfile (#2176) 2025-06-02 11:42:25 +01:00
dependabot[bot]
0b7fd64a4c ci: [DEPENDABOT] bump docker/build-push-action from 6.17.0 to 6.18.0 in /.github/actions/build_docker_image (#2175)
Bumps
[docker/build-push-action](https://github.com/docker/build-push-action)
from 6.17.0 to 6.18.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/docker/build-push-action/releases">docker/build-push-action's
releases</a>.</em></p>
<blockquote>
<h2>v6.18.0</h2>
<ul>
<li>Bump <code>@​docker/actions-toolkit</code> from 0.61.0 to 0.62.1 in
<a
href="https://redirect.github.com/docker/build-push-action/pull/1381">docker/build-push-action#1381</a></li>
</ul>
<blockquote>
<p>[!NOTE]
<a
href="https://docs.docker.com/build/ci/github-actions/build-summary/">Build
summary</a> is now supported with <a
href="https://docs.docker.com/build-cloud/">Docker Build Cloud</a>.</p>
</blockquote>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/docker/build-push-action/compare/v6.17.0...v6.18.0">https://github.com/docker/build-push-action/compare/v6.17.0...v6.18.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="263435318d"><code>2634353</code></a>
Merge pull request <a
href="https://redirect.github.com/docker/build-push-action/issues/1381">#1381</a>
from docker/dependabot/npm_and_yarn/docker/actions-t...</li>
<li><a
href="c0432d2e01"><code>c0432d2</code></a>
chore: update generated content</li>
<li><a
href="0bb1f27d6b"><code>0bb1f27</code></a>
set builder driver and endpoint attributes for dbc summary support</li>
<li><a
href="5f9dbf956c"><code>5f9dbf9</code></a>
chore(deps): Bump <code>@​docker/actions-toolkit</code> from 0.61.0 to
0.62.1</li>
<li><a
href="0788c444d8"><code>0788c44</code></a>
Merge pull request <a
href="https://redirect.github.com/docker/build-push-action/issues/1375">#1375</a>
from crazy-max/remove-gcr</li>
<li><a
href="aa179ca4f4"><code>aa179ca</code></a>
e2e: remove GCR</li>
<li>See full diff in <a
href="1dc7386353...263435318d">compare
view</a></li>
</ul>
</details>
<br />



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-02 11:08:46 +01:00
Ayaz Salikhov
ecdea015b9 style: Mark JSON literal strings with R"JSON (#2169) 2025-05-30 15:50:39 +01:00
Peter Chen
7588e9d5bf feat: Support Batch (#2162)
fixes #2161.

- Tested locally to confirm that Clio forwards Batch transactions
correctly.
2025-05-30 10:00:40 -04:00
github-actions[bot]
bfa17134d2 style: clang-tidy auto fixes (#2167)
Fixes #2166. Please review and commit clang-tidy fixes.

Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-05-30 10:54:16 +01:00
Ayaz Salikhov
57b8ff1c49 fix: Use UniformRandomGenerator class to prevent threading issue (#2165) 2025-05-29 19:59:40 +01:00
Ayaz Salikhov
9b69da7f91 test: Skip slow DB sleep-based test on Mac (#2148)
Fix: https://github.com/XRPLF/clio/issues/2147
Fix: https://github.com/XRPLF/clio/issues/2132
2025-05-28 14:03:38 +01:00
Ayaz Salikhov
09409fc05d ci: Add missing workflow dependencies (#2155)
This was discovered in https://github.com/XRPLF/clio/pull/2150; it's
better to fix it separately, as I'm not sure that PR will be merged.
2025-05-28 12:58:37 +01:00
github-actions[bot]
561eae1b7f style: clang-tidy auto fixes (#2164) 2025-05-28 11:13:01 +01:00
Alex Kremer
28062496eb feat: ETLng MPT support (#2154) 2025-05-27 13:05:03 +01:00
github-actions[bot]
3e83b54332 style: clang-tidy auto fixes (#2159)
Fixes #2158. Please review and commit clang-tidy fixes.

Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-05-26 16:56:39 +01:00
Ayaz Salikhov
3e520c8742 chore: Fix: nagetive -> negative (#2156) 2025-05-26 15:48:17 +01:00
Alex Kremer
2a147b9487 feat: ETLng publisher and service refactoring (#2138) 2025-05-23 15:01:50 +01:00
Sergey Kuznetsov
8aab33c18c fix: Add Delegate to Ledger types (#2151)
Fix discrepancy with rippled for `account_objects` API.
2025-05-22 13:43:52 +01:00
Ayaz Salikhov
aef3119efb fix: Fix some doxygen docs errors (#2130) 2025-05-21 15:06:31 +01:00
282 changed files with 12308 additions and 6413 deletions


@@ -5,9 +5,6 @@ inputs:
images:
description: Name of the images to use as a base name
required: true
dockerhub_repo:
description: DockerHub repository name
required: true
push_image:
description: Whether to push the image to the registry (true/false)
required: true
@@ -20,15 +17,19 @@ inputs:
platforms:
description: Platforms to build the image for (e.g. linux/amd64,linux/arm64)
required: true
description:
dockerhub_repo:
description: DockerHub repository name
required: false
dockerhub_description:
description: Short description of the image
required: true
required: false
runs:
using: composite
steps:
- name: Login to DockerHub
if: ${{ inputs.push_image == 'true' }}
if: ${{ inputs.push_image == 'true' && inputs.dockerhub_repo != '' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ env.DOCKERHUB_USER }}
@@ -45,7 +46,7 @@ runs:
- uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
with:
cache-image: false
- uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
id: meta
@@ -54,7 +55,7 @@ runs:
tags: ${{ inputs.tags }}
- name: Build and push
uses: docker/build-push-action@1dc73863535b631f98b2378be8619f83b136f4a0 # v6.17.0
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: ${{ inputs.directory }}
platforms: ${{ inputs.platforms }}
@@ -62,11 +63,11 @@ runs:
tags: ${{ steps.meta.outputs.tags }}
- name: Update DockerHub description
if: ${{ inputs.push_image == 'true' }}
if: ${{ inputs.push_image == 'true' && inputs.dockerhub_repo != '' }}
uses: peter-evans/dockerhub-description@432a30c9e07499fd01da9f8a49f0faf9e0ca5b77 # v4.0.2
with:
username: ${{ env.DOCKERHUB_USER }}
password: ${{ env.DOCKERHUB_PW }}
repository: ${{ inputs.dockerhub_repo }}
short-description: ${{ inputs.description }}
short-description: ${{ inputs.dockerhub_description }}
readme-filepath: ${{ inputs.directory }}/README.md


@@ -5,8 +5,8 @@ inputs:
conan_profile:
description: Conan profile name
required: true
conan_cache_hit:
description: Whether conan cache has been downloaded
force_conan_source_build:
description: Whether conan should build all dependencies from source
required: true
default: "false"
build_type:
@@ -25,15 +25,6 @@ inputs:
description: Whether Clio is to be statically linked
required: true
default: "false"
sanitizer:
description: Sanitizer to use
required: true
default: "false"
choices:
- "false"
- "tsan"
- "asan"
- "ubsan"
time_trace:
description: Whether to enable compiler trace reports
required: true
@@ -49,7 +40,7 @@ runs:
- name: Run conan
shell: bash
env:
BUILD_OPTION: "${{ inputs.conan_cache_hit == 'true' && 'missing' || '' }}"
CONAN_BUILD_OPTION: "${{ inputs.force_conan_source_build == 'true' && '*' || 'missing' }}"
CODE_COVERAGE: "${{ inputs.code_coverage == 'true' && 'True' || 'False' }}"
STATIC_OPTION: "${{ inputs.static == 'true' && 'True' || 'False' }}"
INTEGRATION_TESTS_OPTION: "${{ inputs.build_integration_tests == 'true' && 'True' || 'False' }}"
@@ -59,24 +50,24 @@ runs:
conan \
install .. \
-of . \
-b $BUILD_OPTION \
-s build_type="${{ inputs.build_type }}" \
-o clio:static="${STATIC_OPTION}" \
-o clio:tests=True \
-o clio:integration_tests="${INTEGRATION_TESTS_OPTION}" \
-o clio:lint=False \
-o clio:coverage="${CODE_COVERAGE}" \
-o clio:time_trace="${TIME_TRACE}" \
--profile "${{ inputs.conan_profile }}"
-b "$CONAN_BUILD_OPTION" \
-s "build_type=${{ inputs.build_type }}" \
-o "&:static=${STATIC_OPTION}" \
-o "&:tests=True" \
-o "&:integration_tests=${INTEGRATION_TESTS_OPTION}" \
-o "&:lint=False" \
-o "&:coverage=${CODE_COVERAGE}" \
-o "&:time_trace=${TIME_TRACE}" \
--profile:all "${{ inputs.conan_profile }}"
- name: Run cmake
shell: bash
env:
BUILD_TYPE: "${{ inputs.build_type }}"
SANITIZER_OPTION: |-
${{ inputs.sanitizer == 'tsan' && '-Dsan=thread' ||
inputs.sanitizer == 'ubsan' && '-Dsan=undefined' ||
inputs.sanitizer == 'asan' && '-Dsan=address' ||
${{ endsWith(inputs.conan_profile, '.asan') && '-Dsan=address' ||
endsWith(inputs.conan_profile, '.tsan') && '-Dsan=thread' ||
endsWith(inputs.conan_profile, '.ubsan') && '-Dsan=undefined' ||
'' }}
run: |
cd build
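The suffix-based `SANITIZER_OPTION` expression above can be mirrored in a small Python sketch (this just reproduces the workflow's `endsWith` logic for illustration; it is not part of the repo):

```python
def sanitizer_flag(conan_profile):
    # Mirror of the GitHub Actions expression: map the conan profile
    # suffix to the corresponding CMake sanitizer flag.
    suffix_to_flag = {
        ".asan": "-Dsan=address",
        ".tsan": "-Dsan=thread",
        ".ubsan": "-Dsan=undefined",
    }
    for suffix, flag in suffix_to_flag.items():
        if conan_profile.endswith(suffix):
            return flag
    return ""

print(sanitizer_flag("clang.asan"))  # -Dsan=address
print(sanitizer_flag("gcc.tsan"))    # -Dsan=thread
print(sanitizer_flag("apple-clang")) # (empty string)
```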


@@ -13,25 +13,25 @@ runs:
if: ${{ runner.os == 'macOS' }}
shell: bash
run: |
brew install \
brew install --quiet \
bison \
ca-certificates \
ccache \
clang-build-analyzer \
conan@1 \
conan \
gh \
jq \
llvm@14 \
ninja \
pkg-config
echo "/opt/homebrew/opt/conan@1/bin" >> $GITHUB_PATH
echo "/opt/homebrew/opt/conan@2/bin" >> $GITHUB_PATH
- name: Install CMake 3.31.6 on mac
if: ${{ runner.os == 'macOS' }}
shell: bash
run: |
# Uninstall any existing cmake
brew uninstall cmake --ignore-dependencies || true
brew uninstall --formula cmake --ignore-dependencies || true
# Download specific cmake formula
FORMULA_URL="https://raw.githubusercontent.com/Homebrew/homebrew-core/b4e46db74e74a8c1650b38b1da222284ce1ec5ce/Formula/c/cmake.rb"
@@ -43,7 +43,7 @@ runs:
echo "$FORMULA_EXPECTED_SHA256 /tmp/homebrew-formula/cmake.rb" | shasum -a 256 -c
# Install cmake from the specific formula with force flag
brew install --formula --force /tmp/homebrew-formula/cmake.rb
brew install --formula --quiet --force /tmp/homebrew-formula/cmake.rb
- name: Fix git permissions on Linux
if: ${{ runner.os == 'Linux' }}
@@ -55,14 +55,14 @@ runs:
shell: bash
run: |
echo "CCACHE_DIR=${{ github.workspace }}/.ccache" >> $GITHUB_ENV
echo "CONAN_USER_HOME=${{ github.workspace }}" >> $GITHUB_ENV
echo "CONAN_HOME=${{ github.workspace }}/.conan2" >> $GITHUB_ENV
- name: Set env variables for Linux
if: ${{ runner.os == 'Linux' }}
shell: bash
run: |
echo "CCACHE_DIR=/root/.ccache" >> $GITHUB_ENV
echo "CONAN_USER_HOME=/root/" >> $GITHUB_ENV
echo "CONAN_HOME=/root/.conan2" >> $GITHUB_ENV
- name: Set CCACHE_DISABLE=1
if: ${{ inputs.disable_ccache == 'true' }}
@@ -74,4 +74,4 @@ runs:
shell: bash
run: |
mkdir -p "$CCACHE_DIR"
mkdir -p "$CONAN_USER_HOME/.conan"
mkdir -p "$CONAN_HOME"


@@ -1,10 +1,7 @@
name: Restore cache
description: Find and restores conan and ccache cache
description: Find and restores ccache cache
inputs:
conan_dir:
description: Path to .conan directory
required: true
conan_profile:
description: Conan profile name
required: true
@@ -19,13 +16,8 @@ inputs:
description: Whether code coverage is on
required: true
default: "false"
outputs:
conan_hash:
description: Hash to use as a part of conan cache key
value: ${{ steps.conan_hash.outputs.hash }}
conan_cache_hit:
description: True if conan cache has been downloaded
value: ${{ steps.conan_cache.outputs.cache-hit }}
ccache_cache_hit:
description: True if ccache cache has been downloaded
value: ${{ steps.ccache_cache.outputs.cache-hit }}
@@ -37,24 +29,6 @@ runs:
id: git_common_ancestor
uses: ./.github/actions/git_common_ancestor
- name: Calculate conan hash
id: conan_hash
shell: bash
run: |
conan info . -j info.json -o clio:tests=True
packages_info="$(cat info.json | jq '.[] | "\(.display_name): \(.id)"' | grep -v 'clio')"
echo "$packages_info"
hash="$(echo "$packages_info" | shasum -a 256 | cut -d ' ' -f 1)"
rm info.json
echo "hash=$hash" >> $GITHUB_OUTPUT
- name: Restore conan cache
uses: actions/cache/restore@v4
id: conan_cache
with:
path: ${{ inputs.conan_dir }}/data
key: clio-conan_data-${{ runner.os }}-${{ inputs.build_type }}-${{ inputs.conan_profile }}-develop-${{ steps.conan_hash.outputs.hash }}
- name: Restore ccache cache
uses: actions/cache/restore@v4
id: ccache_cache


@@ -1,27 +1,13 @@
name: Save cache
description: Save conan and ccache cache for develop branch
description: Save ccache cache for develop branch
inputs:
conan_dir:
description: Path to .conan directory
required: true
conan_profile:
description: Conan profile name
required: true
conan_hash:
description: Hash to use as a part of conan cache key
required: true
conan_cache_hit:
description: Whether conan cache has been downloaded
required: true
ccache_dir:
description: Path to .ccache directory
required: true
ccache_cache_hit:
description: Whether conan cache has been downloaded
required: true
ccache_cache_miss_rate:
description: How many cache misses happened
build_type:
description: Current build type (e.g. Release, Debug)
required: true
@@ -31,6 +17,12 @@ inputs:
required: true
default: "false"
ccache_cache_hit:
description: Whether ccache cache has been downloaded
required: true
ccache_cache_miss_rate:
description: How many ccache cache misses happened
runs:
using: composite
steps:
@@ -38,19 +30,6 @@ runs:
id: git_common_ancestor
uses: ./.github/actions/git_common_ancestor
- name: Cleanup conan directory from extra data
if: ${{ inputs.conan_cache_hit != 'true' }}
shell: bash
run: |
conan remove "*" -s -b -f
- name: Save conan cache
if: ${{ inputs.conan_cache_hit != 'true' }}
uses: actions/cache/save@v4
with:
path: ${{ inputs.conan_dir }}/data
key: clio-conan_data-${{ runner.os }}-${{ inputs.build_type }}-${{ inputs.conan_profile }}-develop-${{ inputs.conan_hash }}
- name: Save ccache cache
if: ${{ inputs.ccache_cache_hit != 'true' || inputs.ccache_cache_miss_rate == '100.0' }}
uses: actions/cache/save@v4


@@ -1,33 +0,0 @@
name: Setup conan
description: Setup conan profile and artifactory
inputs:
conan_profile:
description: Conan profile name
required: true
runs:
using: composite
steps:
- name: Create conan profile on macOS
if: ${{ runner.os == 'macOS' }}
shell: bash
env:
CONAN_PROFILE: ${{ inputs.conan_profile }}
run: |
echo "Creating \"$CONAN_PROFILE\" conan profile"
conan profile new "$CONAN_PROFILE" --detect --force
conan profile update settings.compiler.libcxx=libc++ "$CONAN_PROFILE"
conan profile update settings.compiler.cppstd=20 "$CONAN_PROFILE"
conan profile update env.CXXFLAGS=-DBOOST_ASIO_DISABLE_CONCEPTS "$CONAN_PROFILE"
conan profile update "conf.tools.build:cxxflags+=[\"-DBOOST_ASIO_DISABLE_CONCEPTS\"]" "$CONAN_PROFILE"
- name: Add conan-non-prod artifactory
shell: bash
run: |
if [[ -z "$(conan remote list | grep conan-non-prod)" ]]; then
echo "Adding conan-non-prod"
conan remote add --insert 0 conan-non-prod http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
else
echo "Conan-non-prod is available"
fi


@@ -142,16 +142,3 @@ updates:
commit-message:
prefix: "ci: [DEPENDABOT] "
target-branch: develop
- package-ecosystem: github-actions
directory: .github/actions/setup_conan/
schedule:
interval: weekly
day: monday
time: "04:00"
timezone: Etc/GMT
reviewers:
- XRPLF/clio-dev-team
commit-message:
prefix: "ci: [DEPENDABOT] "
target-branch: develop


@@ -0,0 +1,8 @@
[settings]
arch={{detect_api.detect_arch()}}
build_type=Release
compiler=apple-clang
compiler.cppstd=20
compiler.libcxx=libc++
compiler.version=16
os=Macos

.github/scripts/conan/generate_matrix.py (vendored executable file, 42 lines)

@@ -0,0 +1,42 @@
#!/usr/bin/env python3
import itertools
import json
LINUX_OS = ["heavy", "heavy-arm64"]
LINUX_CONTAINERS = ['{ "image": "ghcr.io/xrplf/clio-ci:latest" }']
LINUX_COMPILERS = ["gcc", "clang"]
MACOS_OS = ["macos15"]
MACOS_CONTAINERS = [""]
MACOS_COMPILERS = ["apple-clang"]
BUILD_TYPES = ["Release", "Debug"]
SANITIZER_EXT = [".asan", ".tsan", ".ubsan", ""]
def generate_matrix():
configurations = []
for os, container, compiler in itertools.chain(
itertools.product(LINUX_OS, LINUX_CONTAINERS, LINUX_COMPILERS),
itertools.product(MACOS_OS, MACOS_CONTAINERS, MACOS_COMPILERS),
):
for sanitizer_ext, build_type in itertools.product(SANITIZER_EXT, BUILD_TYPES):
# libbacktrace doesn't build on arm64 with gcc.tsan
if os == "heavy-arm64" and compiler == "gcc" and sanitizer_ext == ".tsan":
continue
configurations.append(
{
"os": os,
"container": container,
"compiler": compiler,
"sanitizer_ext": sanitizer_ext,
"build_type": build_type,
}
)
return {"include": configurations}
if __name__ == "__main__":
print(f"matrix={json.dumps(generate_matrix())}")
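The skip rule in `generate_matrix.py` can be restated in isolation — a small illustrative sketch (not part of the script) showing which combination gets dropped and how many Linux configurations remain per build type:

```python
import itertools

# Mirror of the skip rule in generate_matrix.py: the gcc + .tsan
# combination is dropped on the arm64 runner because libbacktrace
# does not build there.
def is_excluded(os_name: str, compiler: str, sanitizer_ext: str) -> bool:
    return os_name == "heavy-arm64" and compiler == "gcc" and sanitizer_ext == ".tsan"

combos = list(itertools.product(
    ["heavy", "heavy-arm64"],          # Linux runners
    ["gcc", "clang"],                  # Linux compilers
    [".asan", ".tsan", ".ubsan", ""],  # sanitizer suffixes ("" = plain build)
))
kept = [c for c in combos if not is_excluded(*c)]
```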

.github/scripts/conan/init.sh vendored Executable file

@@ -0,0 +1,43 @@
#!/bin/bash
set -ex
CURRENT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_DIR="$(cd "$CURRENT_DIR/../../../" && pwd)"
CONAN_DIR="${CONAN_HOME:-$HOME/.conan2}"
PROFILES_DIR="$CONAN_DIR/profiles"
APPLE_CLANG_PROFILE="$CURRENT_DIR/apple-clang.profile"
GCC_PROFILE="$REPO_DIR/docker/ci/conan/gcc.profile"
CLANG_PROFILE="$REPO_DIR/docker/ci/conan/clang.profile"
SANITIZER_TEMPLATE_FILE="$REPO_DIR/docker/ci/conan/sanitizer_template.profile"
rm -rf "$CONAN_DIR"
conan remote add --index 0 ripple http://18.143.149.228:8081/artifactory/api/conan/dev
cp "$REPO_DIR/docker/ci/conan/global.conf" "$CONAN_DIR/global.conf"
create_profile_with_sanitizers() {
profile_name="$1"
profile_source="$2"
cp "$profile_source" "$PROFILES_DIR/$profile_name"
cp "$SANITIZER_TEMPLATE_FILE" "$PROFILES_DIR/$profile_name.asan"
cp "$SANITIZER_TEMPLATE_FILE" "$PROFILES_DIR/$profile_name.tsan"
cp "$SANITIZER_TEMPLATE_FILE" "$PROFILES_DIR/$profile_name.ubsan"
}
mkdir -p "$PROFILES_DIR"
if [[ "$(uname)" == "Darwin" ]]; then
create_profile_with_sanitizers "apple-clang" "$APPLE_CLANG_PROFILE"
echo "include(apple-clang)" > "$PROFILES_DIR/default"
else
create_profile_with_sanitizers "clang" "$CLANG_PROFILE"
create_profile_with_sanitizers "gcc" "$GCC_PROFILE"
echo "include(gcc)" > "$PROFILES_DIR/default"
fi

.github/scripts/prepare-release-artifacts.sh vendored Executable file

@@ -0,0 +1,24 @@
#!/bin/bash
set -ex -o pipefail
BINARY_NAME="clio_server"
ARTIFACTS_DIR="$1"
if [ -z "${ARTIFACTS_DIR}" ]; then
echo "Usage: $0 <artifacts_directory>"
exit 1
fi
cd "${ARTIFACTS_DIR}" || exit 1
for artifact_name in $(ls); do
pushd "${artifact_name}" || exit 1
zip -r "../${artifact_name}.zip" ./${BINARY_NAME}
popd || exit 1
rm "${artifact_name}/${BINARY_NAME}"
rm -r "${artifact_name}"
sha256sum "./${artifact_name}.zip" > "./${artifact_name}.zip.sha256sum"
done
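The checksum step above emits one `<digest>  ./<file>` line per archive, in the format `sha256sum` produces. A minimal Python equivalent, sketched for reference (the helper and archive names are illustrative):

```python
import hashlib
import pathlib
import tempfile

def write_sha256sum(path: pathlib.Path) -> pathlib.Path:
    # Same output format as `sha256sum ./<name> > ./<name>.sha256sum`:
    # "<hex digest>  ./<name>" (sha256sum separates fields with two spaces).
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    out = path.with_name(path.name + ".sha256sum")
    out.write_text(f"{digest}  ./{path.name}\n")
    return out

# Demo on a throwaway archive:
tmp = pathlib.Path(tempfile.mkdtemp())
archive = tmp / "clio_server_Linux.zip"
archive.write_bytes(b"payload")
checksum_line = write_sha256sum(archive).read_text()
```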


@@ -1,28 +0,0 @@
#!/bin/bash
# Note: This script is intended to be run from the root of the repository.
#
# This script modifies conanfile.py such that the specified version of libXRPL is used.
if [[ -z "$1" ]]; then
cat <<EOF
ERROR
-----------------------------------------------------------------------------
Version should be passed as first argument to the script.
-----------------------------------------------------------------------------
EOF
exit 1
fi
VERSION=$1
GNU_SED=$(sed --version 2>&1 | grep -q 'GNU' && echo true || echo false)
echo "+ Updating required libXRPL version to $VERSION"
if [[ "$GNU_SED" == "false" ]]; then
sed -i '' -E "s|'xrpl/[a-zA-Z0-9\\.\\-]+'|'xrpl/$VERSION'|g" conanfile.py
else
sed -i -E "s|'xrpl/[a-zA-Z0-9\\.\\-]+'|'xrpl/$VERSION'|g" conanfile.py
fi
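The deleted script's substitution (and the GNU/BSD `sed -i` split it had to work around) can be expressed portably in Python. A sketch using the same pattern the script passed to `sed`:

```python
import re

def update_xrpl_requirement(conanfile_text: str, version: str) -> str:
    # Same replacement the script performed with sed: any pinned
    # reference like 'xrpl/2.4.0' becomes 'xrpl/<version>'.
    return re.sub(r"'xrpl/[a-zA-Z0-9.\-]+'", f"'xrpl/{version}'", conanfile_text)
```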


@@ -18,6 +18,8 @@ on:
- "!.github/actions/create_issue/**"
- CMakeLists.txt
- conanfile.py
- conan.lock
- "cmake/**"
- "src/**"
- "tests/**"
@@ -45,7 +47,7 @@ jobs:
include:
- os: macos15
conan_profile: default_apple_clang
conan_profile: apple-clang
build_type: Release
container: ""
static: false
@@ -75,7 +77,6 @@ jobs:
static: true
upload_clio_server: false
targets: all
sanitizer: "false"
analyze_build_time: false
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
@@ -98,23 +99,9 @@ jobs:
shell: bash
run: |
repoConfigFile=docs/config-description.md
if ! [ -f "${repoConfigFile}" ]; then
echo "Config Description markdown file is missing in docs folder"
exit 1
fi
configDescriptionFile=config_description_new.md
chmod +x ./clio_server
configDescriptionFile=config_description_new.md
./clio_server -d "${configDescriptionFile}"
configDescriptionHash=$(sha256sum "${configDescriptionFile}" | cut -d' ' -f1)
repoConfigHash=$(sha256sum "${repoConfigFile}" | cut -d' ' -f1)
if [ "${configDescriptionHash}" != "${repoConfigHash}" ]; then
echo "Markdown file is not up to date"
diff -u "${repoConfigFile}" "${configDescriptionFile}"
rm -f "${configDescriptionFile}"
exit 1
fi
rm -f "${configDescriptionFile}"
exit 0
diff -u "${repoConfigFile}" "${configDescriptionFile}"


@@ -24,7 +24,7 @@ on:
type: string
disable_cache:
description: Whether ccache and conan cache should be disabled
description: Whether ccache should be disabled
required: false
type: boolean
default: false
@@ -57,12 +57,6 @@ on:
type: string
default: all
sanitizer:
description: Sanitizer to use
required: false
type: string
default: "false"
jobs:
build:
uses: ./.github/workflows/build_impl.yml
@@ -76,7 +70,6 @@ jobs:
static: ${{ inputs.static }}
upload_clio_server: ${{ inputs.upload_clio_server }}
targets: ${{ inputs.targets }}
sanitizer: ${{ inputs.sanitizer }}
analyze_build_time: false
test:
@@ -89,4 +82,3 @@ jobs:
build_type: ${{ inputs.build_type }}
run_unit_tests: ${{ inputs.run_unit_tests }}
run_integration_tests: ${{ inputs.run_integration_tests }}
sanitizer: ${{ inputs.sanitizer }}


@@ -85,16 +85,16 @@ jobs:
- name: Build Docker image
uses: ./.github/actions/build_docker_image
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
DOCKERHUB_PW: ${{ secrets.DOCKERHUB_PW }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
images: |
rippleci/clio
ghcr.io/xrplf/clio
dockerhub_repo: rippleci/clio
rippleci/clio
push_image: ${{ inputs.publish_image }}
directory: docker/clio
tags: ${{ inputs.tags }}
platforms: linux/amd64
description: Clio is an XRP Ledger API server.
dockerhub_repo: rippleci/clio
dockerhub_description: Clio is an XRP Ledger API server.


@@ -24,7 +24,7 @@ on:
type: string
disable_cache:
description: Whether ccache and conan cache should be disabled
description: Whether ccache should be disabled
required: false
type: boolean
@@ -48,11 +48,6 @@ on:
required: true
type: string
sanitizer:
description: Sanitizer to use
required: true
type: string
analyze_build_time:
description: Whether to enable build time analysis
required: true
@@ -64,7 +59,7 @@ on:
jobs:
build:
name: Build ${{ inputs.container != '' && 'in container' || 'natively' }}
name: Build
runs-on: ${{ inputs.runs_on }}
container: ${{ inputs.container != '' && fromJson(inputs.container) || null }}
@@ -82,17 +77,16 @@ jobs:
with:
disable_ccache: ${{ inputs.disable_cache }}
- name: Setup conan
uses: ./.github/actions/setup_conan
with:
conan_profile: ${{ inputs.conan_profile }}
- name: Setup conan on macOS
if: runner.os == 'macOS'
shell: bash
run: ./.github/scripts/conan/init.sh
- name: Restore cache
if: ${{ !inputs.disable_cache }}
uses: ./.github/actions/restore_cache
id: restore_cache
with:
conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
conan_profile: ${{ inputs.conan_profile }}
ccache_dir: ${{ env.CCACHE_DIR }}
build_type: ${{ inputs.build_type }}
@@ -102,11 +96,9 @@ jobs:
uses: ./.github/actions/generate
with:
conan_profile: ${{ inputs.conan_profile }}
conan_cache_hit: ${{ !inputs.disable_cache && steps.restore_cache.outputs.conan_cache_hit }}
build_type: ${{ inputs.build_type }}
code_coverage: ${{ inputs.code_coverage }}
static: ${{ inputs.static }}
sanitizer: ${{ inputs.sanitizer }}
time_trace: ${{ inputs.analyze_build_time }}
- name: Build Clio
@@ -140,11 +132,11 @@ jobs:
cat /tmp/ccache.stats
- name: Strip unit_tests
if: inputs.sanitizer == 'false' && !inputs.code_coverage && !inputs.analyze_build_time
if: ${{ !endsWith(inputs.conan_profile, 'san') && !inputs.code_coverage && !inputs.analyze_build_time }}
run: strip build/clio_tests
- name: Strip integration_tests
if: inputs.sanitizer == 'false' && !inputs.code_coverage && !inputs.analyze_build_time
if: ${{ !endsWith(inputs.conan_profile, 'san') && !inputs.code_coverage && !inputs.analyze_build_time }}
run: strip build/clio_integration_tests
- name: Upload clio_server
@@ -172,15 +164,13 @@ jobs:
if: ${{ !inputs.disable_cache && github.ref == 'refs/heads/develop' }}
uses: ./.github/actions/save_cache
with:
conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
conan_hash: ${{ steps.restore_cache.outputs.conan_hash }}
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
conan_profile: ${{ inputs.conan_profile }}
ccache_dir: ${{ env.CCACHE_DIR }}
ccache_cache_hit: ${{ steps.restore_cache.outputs.ccache_cache_hit }}
ccache_cache_miss_rate: ${{ steps.ccache_stats.outputs.miss_rate }}
build_type: ${{ inputs.build_type }}
code_coverage: ${{ inputs.code_coverage }}
conan_profile: ${{ inputs.conan_profile }}
ccache_cache_hit: ${{ steps.restore_cache.outputs.ccache_cache_hit }}
ccache_cache_miss_rate: ${{ steps.ccache_stats.outputs.miss_rate }}
# This is run as part of the build job, because it requires the following:
# - source code


@@ -15,7 +15,7 @@ env:
jobs:
build:
name: Build Clio / `libXRPL ${{ github.event.client_payload.version }}`
runs-on: [self-hosted, heavy]
runs-on: heavy
container:
image: ghcr.io/xrplf/clio-ci:latest
@@ -27,24 +27,23 @@ jobs:
- name: Update libXRPL version requirement
shell: bash
run: |
./.github/scripts/update-libxrpl-version ${{ github.event.client_payload.version }}
sed -i.bak -E "s|'xrpl/[a-zA-Z0-9\\.\\-]+'|'xrpl/${{ github.event.client_payload.version }}'|g" conanfile.py
rm -f conanfile.py.bak
- name: Update conan lockfile
shell: bash
run: |
conan lock create . -o '&:tests=True' -o '&:benchmark=True'
- name: Prepare runner
uses: ./.github/actions/prepare_runner
with:
disable_ccache: true
- name: Setup conan
uses: ./.github/actions/setup_conan
with:
conan_profile: ${{ env.CONAN_PROFILE }}
- name: Run conan and cmake
uses: ./.github/actions/generate
with:
conan_profile: ${{ env.CONAN_PROFILE }}
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
build_type: Release
- name: Build Clio
uses: ./.github/actions/build_clio
@@ -61,7 +60,7 @@ jobs:
run_tests:
name: Run tests
needs: build
runs-on: [self-hosted, heavy]
runs-on: heavy
container:
image: ghcr.io/xrplf/clio-ci:latest


@@ -40,25 +40,17 @@ jobs:
with:
disable_ccache: true
- name: Setup conan
uses: ./.github/actions/setup_conan
with:
conan_profile: ${{ env.CONAN_PROFILE }}
- name: Restore cache
uses: ./.github/actions/restore_cache
id: restore_cache
with:
conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
ccache_dir: ${{ env.CCACHE_DIR }}
conan_profile: ${{ env.CONAN_PROFILE }}
ccache_dir: ${{ env.CCACHE_DIR }}
- name: Run conan and cmake
uses: ./.github/actions/generate
with:
conan_profile: ${{ env.CONAN_PROFILE }}
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
build_type: Release
- name: Get number of threads
uses: ./.github/actions/get_number_of_threads


@@ -5,23 +5,14 @@ on:
branches: [develop]
workflow_dispatch:
permissions:
contents: read
pages: write
id-token: write
concurrency:
# Only cancel in-progress jobs or runs for the current workflow - matches against branch & tags
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
build:
runs-on: ubuntu-latest
continue-on-error: true
container:
image: ghcr.io/xrplf/clio-ci:latest
@@ -31,10 +22,16 @@ jobs:
with:
lfs: true
- name: Build docs
run: |
mkdir -p build_docs && cd build_docs
cmake ../docs && cmake --build . --target docs
- name: Create build directory
run: mkdir build_docs
- name: Configure CMake
working-directory: build_docs
run: cmake ../docs
- name: Build
working-directory: build_docs
run: cmake --build . --target docs
- name: Setup Pages
uses: actions/configure-pages@v5
@@ -45,6 +42,19 @@ jobs:
path: build_docs/html
name: docs-develop
deploy:
needs: build
permissions:
pages: write
id-token: write
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4


@@ -11,10 +11,12 @@ on:
- .github/workflows/release_impl.yml
- .github/workflows/build_and_test.yml
- .github/workflows/build_impl.yml
- .github/workflows/test_impl.yml
- .github/workflows/build_clio_docker_image.yml
- ".github/actions/**"
- "!.github/actions/code_coverage/**"
- .github/scripts/prepare-release-artifacts.sh
concurrency:
# Only cancel in-progress jobs or runs for the current workflow - matches against branch & tags
@@ -30,7 +32,7 @@ jobs:
matrix:
include:
- os: macos15
conan_profile: default_apple_clang
conan_profile: apple-clang
build_type: Release
static: false
- os: heavy
@@ -43,6 +45,11 @@ jobs:
build_type: Debug
static: true
container: '{ "image": "ghcr.io/xrplf/clio-ci:latest" }'
- os: heavy
conan_profile: gcc.ubsan
build_type: Release
static: false
container: '{ "image": "ghcr.io/xrplf/clio-ci:latest" }'
uses: ./.github/workflows/build_and_test.yml
with:
@@ -63,15 +70,12 @@ jobs:
fail-fast: false
matrix:
include:
# TODO: Enable when we have at least ubuntu 22.04
# as ClangBuildAnalyzer requires relatively modern glibc
#
# - os: heavy
# conan_profile: clang
# container: '{ "image": "ghcr.io/xrplf/clio-ci:latest" }'
# static: true
- os: heavy
conan_profile: clang
container: '{ "image": "ghcr.io/xrplf/clio-ci:latest" }'
static: true
- os: macos15
conan_profile: default_apple_clang
conan_profile: apple-clang
container: ""
static: false
uses: ./.github/workflows/build_impl.yml
@@ -85,7 +89,6 @@ jobs:
static: ${{ matrix.static }}
upload_clio_server: false
targets: all
sanitizer: "false"
analyze_build_time: true
nightly_release:
@@ -95,7 +98,14 @@ jobs:
overwrite_release: true
title: "Clio development (nightly) build"
version: nightly
notes_header_file: nightly_notes.md
header: >
# Release notes
> **Note:** Please remember that this is a development release and it is not recommended for production use.
Changelog (including previous releases): <https://github.com/XRPLF/clio/commits/nightly>
generate_changelog: false
draft: false
build_and_publish_docker_image:
uses: ./.github/workflows/build_clio_docker_image.yml


@@ -1,7 +0,0 @@
# Release notes
> **Note:** Please remember that this is a development release and it is not recommended for production use.
Changelog (including previous releases): <https://github.com/XRPLF/clio/commits/nightly>
## SHA256 checksums


@@ -33,7 +33,7 @@ jobs:
GH_TOKEN: ${{ github.token }}
with:
branch: update/pre-commit-hooks
title: Update pre-commit hooks
commit-message: "style: update pre-commit hooks"
title: "style: Update pre-commit hooks"
commit-message: "style: Update pre-commit hooks"
body: Update versions of pre-commit hooks to latest version.
reviewers: "godexsoft,kuznetsss,PeterChen13579,mathbunnyru"

.github/workflows/release.yml vendored Normal file

@@ -0,0 +1,56 @@
name: Create release
on:
push:
tags:
- "*.*.*"
pull_request:
paths:
- .github/workflows/release.yml
concurrency:
# Only cancel in-progress jobs or runs for the current workflow - matches against branch & tags
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build-and-test:
name: Build and Test
strategy:
fail-fast: false
matrix:
include:
- os: macos15
conan_profile: apple-clang
build_type: Release
static: false
- os: heavy
conan_profile: gcc
build_type: Release
static: true
container: '{ "image": "ghcr.io/xrplf/clio-ci:latest" }'
uses: ./.github/workflows/build_and_test.yml
with:
runs_on: ${{ matrix.os }}
container: ${{ matrix.container }}
conan_profile: ${{ matrix.conan_profile }}
build_type: ${{ matrix.build_type }}
static: ${{ matrix.static }}
run_unit_tests: true
run_integration_tests: true
upload_clio_server: true
disable_cache: true
release:
needs: build-and-test
uses: ./.github/workflows/release_impl.yml
with:
overwrite_release: false
title: "${{ github.ref_name}}"
version: "${{ github.ref_name }}"
header: >
# Introducing Clio version ${{ github.ref_name }}
generate_changelog: true
draft: true


@@ -18,14 +18,26 @@ on:
required: true
type: string
notes_header_file:
description: "Release notes header file"
header:
description: "Release notes header"
required: true
type: string
generate_changelog:
description: "Generate changelog"
required: false
type: boolean
draft:
description: "Create a draft release"
required: false
type: boolean
jobs:
release:
runs-on: ubuntu-latest
runs-on: heavy
container:
image: ghcr.io/xrplf/clio-ci:latest
env:
GH_REPO: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
@@ -35,29 +47,55 @@ jobs:
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Prepare runner
uses: ./.github/actions/prepare_runner
with:
disable_ccache: true
- uses: actions/download-artifact@v4
with:
path: release_artifacts
pattern: clio_server_*
- name: Prepare files
- name: Create release notes
shell: bash
run: |
printf '%s\n' "${{ inputs.header }}" > "${RUNNER_TEMP}/release_notes.md"
- name: Generate changelog
shell: bash
if: ${{ inputs.generate_changelog }}
run: |
LAST_TAG=$(gh release view --json tagName -q .tagName)
LAST_TAG_COMMIT=$(git rev-parse $LAST_TAG)
BASE_COMMIT=$(git merge-base HEAD $LAST_TAG_COMMIT)
git-cliff "${BASE_COMMIT}..HEAD" --ignore-tags "nightly|-b"
cat CHANGELOG.md >> "${RUNNER_TEMP}/release_notes.md"
- name: Prepare release artifacts
shell: bash
run: .github/scripts/prepare-release-artifacts.sh release_artifacts
- name: Append sha256 checksums
shell: bash
working-directory: release_artifacts
run: |
cp ${{ github.workspace }}/.github/workflows/${{ inputs.notes_header_file }} "${RUNNER_TEMP}/release_notes.md"
echo '' >> "${RUNNER_TEMP}/release_notes.md"
echo '```' >> "${RUNNER_TEMP}/release_notes.md"
{
echo '## SHA256 checksums'
echo
echo '```'
cat *.sha256sum
echo '```'
} >> "${RUNNER_TEMP}/release_notes.md"
for d in $(ls); do
archive_name=$(ls $d)
mv ${d}/${archive_name} ./
rm -r $d
sha256sum ./$archive_name > ./${archive_name}.sha256sum
cat ./$archive_name.sha256sum >> "${RUNNER_TEMP}/release_notes.md"
done
echo '```' >> "${RUNNER_TEMP}/release_notes.md"
- name: Upload release notes
uses: actions/upload-artifact@v4
with:
name: release_notes_${{ inputs.version }}
path: "${RUNNER_TEMP}/release_notes.md"
- name: Remove current release and tag
if: ${{ github.event_name != 'pull_request' && inputs.overwrite_release }}
@@ -74,5 +112,6 @@ jobs:
${{ inputs.overwrite_release && '--prerelease' || '' }} \
--title "${{ inputs.title }}" \
--target $GITHUB_SHA \
${{ inputs.draft && '--draft' || '' }} \
--notes-file "${RUNNER_TEMP}/release_notes.md" \
./release_artifacts/clio_server*


@@ -18,6 +18,8 @@ on:
- .github/scripts/execute-tests-under-sanitizer
- CMakeLists.txt
- conanfile.py
- conan.lock
- "cmake/**"
# We don't run sanitizer on code change, because it takes too long
# - "src/**"
@@ -35,24 +37,22 @@ jobs:
strategy:
fail-fast: false
matrix:
include:
- sanitizer: tsan
compiler: gcc
- sanitizer: asan
compiler: gcc
- sanitizer: ubsan
compiler: gcc
compiler: ["gcc", "clang"]
sanitizer_ext: [".asan", ".tsan", ".ubsan"]
exclude:
# Currently, clang.tsan unit tests hang
- compiler: clang
sanitizer_ext: .tsan
uses: ./.github/workflows/build_and_test.yml
with:
runs_on: heavy
container: '{ "image": "ghcr.io/xrplf/clio-ci:latest" }'
disable_cache: true
conan_profile: ${{ matrix.compiler }}.${{ matrix.sanitizer }}
conan_profile: ${{ matrix.compiler }}${{ matrix.sanitizer_ext }}
build_type: Release
static: false
run_unit_tests: true
run_integration_tests: false
upload_clio_server: false
targets: clio_tests clio_integration_tests
sanitizer: ${{ matrix.sanitizer }}


@@ -33,22 +33,17 @@ on:
required: true
type: boolean
sanitizer:
description: Sanitizer to use
required: true
type: string
jobs:
unit_tests:
name: Unit testing ${{ inputs.container != '' && 'in container' || 'natively' }}
name: Unit testing
runs-on: ${{ inputs.runs_on }}
container: ${{ inputs.container != '' && fromJson(inputs.container) || null }}
if: inputs.run_unit_tests
env:
# TODO: remove when we have fixed all currently existing issues from sanitizers
SANITIZER_IGNORE_ERRORS: ${{ inputs.sanitizer != 'false' && inputs.sanitizer != 'ubsan' }}
# TODO: remove completely when we have fixed all currently existing issues with sanitizers
SANITIZER_IGNORE_ERRORS: ${{ endsWith(inputs.conan_profile, '.asan') || endsWith(inputs.conan_profile, '.tsan') }}
steps:
- name: Clean workdir
@@ -109,7 +104,7 @@ jobs:
Reports are available as artifacts.
integration_tests:
name: Integration testing ${{ inputs.container != '' && 'in container' || 'natively' }}
name: Integration testing
runs-on: ${{ inputs.runs_on }}
container: ${{ inputs.container != '' && fromJson(inputs.container) || null }}
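The new `SANITIZER_IGNORE_ERRORS` expression keys off the conan profile suffix rather than the removed `sanitizer` input. The same predicate, restated as a sketch:

```python
def sanitizer_ignore_errors(conan_profile: str) -> bool:
    # Matches the workflow expression: ASan and TSan failures are still
    # tolerated (see the TODO), while UBSan and plain profiles are strict.
    return conan_profile.endswith(".asan") or conan_profile.endswith(".tsan")
```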


@@ -9,6 +9,7 @@ on:
- "docker/ci/**"
- "docker/compilers/**"
- "docker/tools/**"
push:
branches: [develop]
paths:
@@ -16,34 +17,214 @@ on:
- ".github/actions/build_docker_image/**"
# CI image must update when either its Dockerfile changes
# or any compilers changed and were pushed by hand
- "docker/ci/**"
- "docker/compilers/**"
- "docker/tools/**"
workflow_dispatch:
concurrency:
# Only cancel in-progress jobs or runs for the current workflow - matches against branch & tags
# Only matches runs for the current workflow - matches against branch & tags
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
# We want to execute all builds sequentially in develop
cancel-in-progress: false
env:
GHCR_REPO: ghcr.io/${{ github.repository_owner }}
jobs:
build_and_push:
name: Build and push docker image
runs-on: [self-hosted, heavy]
gcc-amd64:
name: Build and push GCC docker image (amd64)
runs-on: heavy
steps:
- uses: actions/checkout@v4
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: "docker/compilers/gcc/**"
- uses: ./.github/actions/build_docker_image
if: steps.changed-files.outputs.any_changed == 'true'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
DOCKERHUB_PW: ${{ secrets.DOCKERHUB_PW }}
with:
images: |
${{ env.GHCR_REPO }}/clio-gcc
rippleci/clio_gcc
push_image: ${{ github.event_name != 'pull_request' }}
directory: docker/compilers/gcc
tags: |
type=raw,value=amd64-latest
type=raw,value=amd64-12
type=raw,value=amd64-12.3.0
type=raw,value=amd64-${{ github.sha }}
platforms: linux/amd64
dockerhub_repo: rippleci/clio_gcc
dockerhub_description: GCC compiler for XRPLF/clio.
gcc-arm64:
name: Build and push GCC docker image (arm64)
runs-on: heavy-arm64
steps:
- uses: actions/checkout@v4
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: "docker/compilers/gcc/**"
- uses: ./.github/actions/build_docker_image
if: steps.changed-files.outputs.any_changed == 'true'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
DOCKERHUB_PW: ${{ secrets.DOCKERHUB_PW }}
with:
images: |
${{ env.GHCR_REPO }}/clio-gcc
rippleci/clio_gcc
push_image: ${{ github.event_name != 'pull_request' }}
directory: docker/compilers/gcc
tags: |
type=raw,value=arm64-latest
type=raw,value=arm64-12
type=raw,value=arm64-12.3.0
type=raw,value=arm64-${{ github.sha }}
platforms: linux/arm64
dockerhub_repo: rippleci/clio_gcc
dockerhub_description: GCC compiler for XRPLF/clio.
gcc-merge:
name: Merge and push multi-arch GCC docker image
runs-on: heavy
needs: [gcc-amd64, gcc-arm64]
steps:
- uses: actions/checkout@v4
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: "docker/compilers/gcc/**"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to DockerHub
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USER }}
password: ${{ secrets.DOCKERHUB_PW }}
- name: Make GHCR_REPO lowercase
run: |
echo "GHCR_REPO_LC=$(echo ${{env.GHCR_REPO}} | tr '[:upper:]' '[:lower:]')" >> ${GITHUB_ENV}
- name: Create and push multi-arch manifest
if: github.event_name != 'pull_request' && steps.changed-files.outputs.any_changed == 'true'
run: |
for image in ${{ env.GHCR_REPO_LC }}/clio-gcc rippleci/clio_gcc; do
docker buildx imagetools create \
-t $image:latest \
-t $image:12 \
-t $image:12.3.0 \
-t $image:${{ github.sha }} \
$image:arm64-latest \
$image:amd64-latest
done
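The merge step's loop builds one `imagetools create` call per image: a `-t` flag for each final tag, with the two per-arch tags as trailing sources. Roughly (helper name is illustrative):

```python
def imagetools_command(image: str, tags: list[str],
                       arch_tags: tuple[str, str] = ("arm64-latest", "amd64-latest")) -> str:
    # One multi-arch manifest per image: every -t is a final tag,
    # the trailing arguments are the per-arch source images.
    flags = " ".join(f"-t {image}:{t}" for t in tags)
    sources = " ".join(f"{image}:{a}" for a in arch_tags)
    return f"docker buildx imagetools create {flags} {sources}"

cmd = imagetools_command("rippleci/clio_gcc", ["latest", "12", "12.3.0"])
```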
clang:
name: Build and push Clang docker image
runs-on: heavy
steps:
- uses: actions/checkout@v4
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: "docker/compilers/clang/**"
- uses: ./.github/actions/build_docker_image
if: steps.changed-files.outputs.any_changed == 'true'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
DOCKERHUB_PW: ${{ secrets.DOCKERHUB_PW }}
with:
images: |
${{ env.GHCR_REPO }}/clio-clang
rippleci/clio_clang
push_image: ${{ github.event_name != 'pull_request' }}
directory: docker/compilers/clang
tags: |
type=raw,value=latest
type=raw,value=16
type=raw,value=${{ github.sha }}
platforms: linux/amd64,linux/arm64
dockerhub_repo: rippleci/clio_clang
dockerhub_description: Clang compiler for XRPLF/clio.
tools:
name: Build and push tools docker image
runs-on: heavy
steps:
- uses: actions/checkout@v4
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: "docker/tools/**"
- uses: ./.github/actions/build_docker_image
if: steps.changed-files.outputs.any_changed == 'true'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
images: |
${{ env.GHCR_REPO }}/clio-tools
push_image: ${{ github.event_name != 'pull_request' }}
directory: docker/tools
tags: |
type=raw,value=latest
type=raw,value=${{ github.sha }}
platforms: linux/amd64,linux/arm64
ci:
name: Build and push CI docker image
runs-on: heavy
needs: [gcc-merge, clang, tools]
steps:
- uses: actions/checkout@v4
- uses: ./.github/actions/build_docker_image
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
DOCKERHUB_PW: ${{ secrets.DOCKERHUB_PW }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
images: |
${{ env.GHCR_REPO }}/clio-ci
rippleci/clio_ci
ghcr.io/xrplf/clio-ci
dockerhub_repo: rippleci/clio_ci
push_image: ${{ github.event_name != 'pull_request' }}
directory: docker/ci
tags: |
@@ -51,4 +232,5 @@ jobs:
type=raw,value=gcc_12_clang_16
type=raw,value=${{ github.sha }}
platforms: linux/amd64,linux/arm64
description: CI image for XRPLF/clio.
dockerhub_repo: rippleci/clio_ci
dockerhub_description: CI image for XRPLF/clio.

.github/workflows/upload_conan_deps.yml vendored Normal file

@@ -0,0 +1,102 @@
name: Upload Conan Dependencies
on:
schedule:
- cron: "0 9 * * 1-5"
workflow_dispatch:
inputs:
force_source_build:
description: "Force source build of all dependencies"
required: false
default: false
type: boolean
pull_request:
branches:
- develop
paths:
- .github/workflows/upload_conan_deps.yml
- .github/actions/generate/action.yml
- .github/actions/prepare_runner/action.yml
- .github/scripts/conan/generate_matrix.py
- .github/scripts/conan/init.sh
- conanfile.py
- conan.lock
push:
branches:
- develop
paths:
- .github/workflows/upload_conan_deps.yml
- .github/actions/generate/action.yml
- .github/actions/prepare_runner/action.yml
- .github/scripts/conan/generate_matrix.py
- .github/scripts/conan/init.sh
- conanfile.py
- conan.lock
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
generate-matrix:
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- uses: actions/checkout@v4
- name: Calculate conan matrix
id: set-matrix
run: .github/scripts/conan/generate_matrix.py >> "${GITHUB_OUTPUT}"
upload-conan-deps:
name: Build ${{ matrix.compiler }}${{ matrix.sanitizer_ext }} ${{ matrix.build_type }}
needs: generate-matrix
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.generate-matrix.outputs.matrix) }}
runs-on: ${{ matrix.os }}
container: ${{ matrix.container != '' && fromJson(matrix.container) || null }}
env:
CONAN_PROFILE: ${{ matrix.compiler }}${{ matrix.sanitizer_ext }}
steps:
- uses: actions/checkout@v4
- name: Prepare runner
uses: ./.github/actions/prepare_runner
with:
disable_ccache: true
- name: Setup conan on macOS
if: runner.os == 'macOS'
shell: bash
run: ./.github/scripts/conan/init.sh
- name: Show conan profile
run: conan profile show --profile:all ${{ env.CONAN_PROFILE }}
- name: Run conan and cmake
uses: ./.github/actions/generate
with:
conan_profile: ${{ env.CONAN_PROFILE }}
# We check that everything builds fine from source on scheduled runs
# But we do build and upload packages with build=missing by default
force_conan_source_build: ${{ github.event_name == 'schedule' || github.event.inputs.force_source_build == 'true' }}
build_type: ${{ matrix.build_type }}
- name: Login to Conan
if: github.event_name != 'pull_request'
run: conan remote login -p ${{ secrets.CONAN_PASSWORD }} ripple ${{ secrets.CONAN_USERNAME }}
- name: Upload Conan packages
if: github.event_name != 'pull_request' && github.event_name != 'schedule'
run: conan upload "*" -r=ripple --confirm


@@ -1,8 +1,6 @@
---
ignored:
- DL3003
- DL3007
- DL3008
- DL3013
- DL3015
- DL3027
- DL3047


@@ -11,6 +11,8 @@
#
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
exclude: ^(docs/doxygen-awesome-theme/|conan\.lock$)
repos:
# `pre-commit sample-config` default hooks
- repo: https://github.com/pre-commit/pre-commit-hooks
@@ -20,16 +22,13 @@ repos:
- id: check-executables-have-shebangs
- id: check-shebang-scripts-are-executable
- id: end-of-file-fixer
exclude: ^docs/doxygen-awesome-theme/
- id: trailing-whitespace
exclude: ^docs/doxygen-awesome-theme/
# Autoformat: YAML, JSON, Markdown, etc.
- repo: https://github.com/rbubley/mirrors-prettier
rev: 787fb9f542b140ba0b2aced38e6a3e68021647a3 # frozen: v3.5.3
hooks:
- id: prettier
exclude: ^docs/doxygen-awesome-theme/
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: 586c3ea3f51230da42bab657c6a32e9e66c364f0 # frozen: v0.44.0


@@ -191,8 +191,9 @@ generateData()
constexpr auto kTOTAL = 10'000;
std::vector<uint64_t> data;
data.reserve(kTOTAL);
util::MTRandomGenerator randomGenerator;
for (auto i = 0; i < kTOTAL; ++i)
data.push_back(util::Random::uniform(1, 100'000'000));
data.push_back(randomGenerator.uniform(1, 100'000'000));
return data;
}


@@ -8,15 +8,24 @@
[changelog]
# template for the changelog header
header = """
# Changelog\n
All notable changes to this project will be documented in this file.\n
"""
# template for the changelog body
# https://keats.github.io/tera/docs/#introduction
body = """
{% if version %}\
## [{{ version | trim_start_matches(pat="v") }}] - {{ timestamp | date(format="%Y-%m-%d") }}
Version {{ version | trim_start_matches(pat="v") }} of Clio, an XRP Ledger API server optimized for HTTP and WebSocket API calls, is now available.
{% else %}\
Clio, an XRP Ledger API server optimized for HTTP and WebSocket API calls, is under active development.
{% endif %}\
<!-- Please remove one of the two following lines -->
This release adds new features and bug fixes.
This release adds bug fixes.
\
{% if version %}
## [{{ version | trim_start_matches(pat="v") }}] - {{ timestamp | date(format="%Y-%m-%d") }}
{% else %}
## [unreleased]
{% endif %}\
{% for group, commits in commits | filter(attribute="merge_commit", value=false) | group_by(attribute="group") %}
@@ -24,7 +33,7 @@ body = """
{% for commit in commits %}
- {% if commit.scope %}*({{ commit.scope }})* {% endif %}\
{% if commit.breaking %}[**breaking**] {% endif %}\
{{ commit.message | upper_first }} {% if commit.remote.username %}by @{{ commit.remote.username }}{% endif %}\
{{ commit.message | upper_first }}{% if commit.remote.username %} by @{{ commit.remote.username }}{% endif %}\
{% endfor %}
{% endfor %}\n
"""


@@ -1,21 +1,22 @@
set(COMPILER_FLAGS
-pedantic
-Wall
-Wcast-align
-Wdouble-promotion
-Wextra
-Werror
-Wextra
-Wformat=2
-Wimplicit-fallthrough
-Wmisleading-indentation
-Wno-narrowing
-Wno-deprecated-declarations
-Wno-dangling-else
-Wno-deprecated-declarations
-Wno-narrowing
-Wno-unused-but-set-variable
-Wnon-virtual-dtor
-Wnull-dereference
-Wold-style-cast
-pedantic
-Wpedantic
-Wunreachable-code
-Wunused
# FIXME: The following bunch are needed for gcc12 atm.
-Wno-missing-requires

conan.lock Normal file

@@ -0,0 +1,57 @@
{
"version": "0.5",
"requires": [
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1750263732.782",
"xxhash/0.8.2#7856c968c985b2981b707ee8f2413b2b%1750263730.908",
"xrpl/2.5.0#7880d1696f11fceb1d498570f1a184c8%1751035095.524809",
"sqlite3/3.47.0#7a0904fd061f5f8a2366c294f9387830%1750263721.79",
"soci/4.0.3#a9f8d773cd33e356b5879a4b0564f287%1750263717.455",
"re2/20230301#dfd6e2bf050eb90ddd8729cfb4c844a4%1750263715.145",
"rapidjson/cci.20220822#1b9d8c2256876a154172dc5cfbe447c6%1750263713.526",
"protobuf/3.21.12#d927114e28de9f4691a6bbcdd9a529d1%1750263698.841",
"openssl/1.1.1v#216374e4fb5b2e0f5ab1fb6f27b5b434%1750263685.885",
"nudb/2.0.8#63990d3e517038e04bf529eb8167f69f%1750263683.814",
"minizip/1.2.13#9e87d57804bd372d6d1e32b1871517a3%1750263681.745",
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1750263679.891",
"libuv/1.46.0#78565d142ac7102776256328a26cdf60%1750263677.819",
"libiconv/1.17#1ae2f60ab5d08de1643a22a81b360c59%1750257497.552",
"libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1750263675.748",
"libarchive/3.7.6#e0453864b2a4d225f06b3304903cb2b7%1750263671.05",
"http_parser/2.9.4#98d91690d6fd021e9e624218a85d9d97%1750263668.751",
"gtest/1.14.0#f8f0757a574a8dd747d16af62d6eb1b7%1750263666.833",
"grpc/1.50.1#02291451d1e17200293a409410d1c4e1%1750263646.614",
"fmt/10.1.1#021e170cf81db57da82b5f737b6906c1%1750263644.741",
"date/3.0.3#cf28fe9c0aab99fe12da08aa42df65e1%1750263643.099",
"cassandra-cpp-driver/2.17.0#e50919efac8418c26be6671fd702540a%1750263632.157",
"c-ares/1.34.5#b78b91e7cfb1f11ce777a285bbf169c6%1750263630.06",
"bzip2/1.0.8#00b4a4658791c1f06914e087f0e792f5%1750263627.95",
"boost/1.83.0#8eb22f36ddfb61f54bbc412c4555bd66%1750263616.444",
"benchmark/1.8.3#1a2ce62c99e2b3feaa57b1f0c15a8c46%1724323740.181",
"abseil/20230802.1#f0f91485b111dc9837a68972cb19ca7b%1750263609.776"
],
"build_requires": [
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1750263732.782",
"protobuf/3.21.12#d927114e28de9f4691a6bbcdd9a529d1%1750263698.841",
"protobuf/3.21.9#64ce20e1d9ea24f3d6c504015d5f6fa8%1750263690.822",
"cmake/3.31.7#57c3e118bcf267552c0ea3f8bee1e7d5%1749863707.208",
"b2/5.3.2#7b5fabfe7088ae933fb3e78302343ea0%1750263614.565"
],
"python_requires": [],
"overrides": {
"boost/1.83.0": [
null,
"boost/1.83.0#8eb22f36ddfb61f54bbc412c4555bd66"
],
"protobuf/3.21.9": [
null,
"protobuf/3.21.12"
],
"lz4/1.9.4": [
"lz4/1.10.0"
],
"sqlite3/3.44.2": [
"sqlite3/3.47.0"
]
},
"config_requires": []
}
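The `overrides` section of the lockfile maps a declared requirement to the reference actually used (e.g. `protobuf/3.21.9` resolves to `protobuf/3.21.12`). A rough Python sketch of that mapping — simplified, not Conan's actual resolver:

```python
def resolve(declared, overrides):
    """Return the effective reference after applying lockfile overrides."""
    if declared in overrides:
        # An override entry lists replacement refs; a None (JSON null) slot
        # is skipped here - pick the first concrete replacement.
        for replacement in overrides[declared]:
            if replacement is not None:
                return replacement
    return declared

overrides = {
    "protobuf/3.21.9": [None, "protobuf/3.21.12"],
    "lz4/1.9.4": ["lz4/1.10.0"],
    "sqlite3/3.44.2": ["sqlite3/3.47.0"],
}
print(resolve("lz4/1.9.4", overrides))   # lz4/1.10.0
print(resolve("zlib/1.3.1", overrides))  # unchanged: zlib/1.3.1
```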


@@ -2,16 +2,15 @@ from conan import ConanFile
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
class Clio(ConanFile):
class ClioConan(ConanFile):
name = 'clio'
license = 'ISC'
author = 'Alex Kremer <akremer@ripple.com>, John Freeman <jfreeman@ripple.com>'
author = 'Alex Kremer <akremer@ripple.com>, John Freeman <jfreeman@ripple.com>, Ayaz Salikhov <asalikhov@ripple.com>'
url = 'https://github.com/xrplf/clio'
description = 'Clio RPC server'
settings = 'os', 'compiler', 'build_type', 'arch'
options = {
'static': [True, False], # static linkage
'fPIC': [True, False], # unused?
'verbose': [True, False],
'tests': [True, False], # build unit tests; create `clio_tests` binary
'integration_tests': [True, False], # build integration tests; create `clio_integration_tests` binary
@@ -28,17 +27,16 @@ class Clio(ConanFile):
'boost/1.83.0',
'cassandra-cpp-driver/2.17.0',
'fmt/10.1.1',
'protobuf/3.21.9',
'protobuf/3.21.12',
'grpc/1.50.1',
'openssl/1.1.1v',
'xrpl/2.5.0-b1',
'xrpl/2.5.0',
'zlib/1.3.1',
'libbacktrace/cci.20210118'
]
default_options = {
'static': False,
'fPIC': True,
'verbose': False,
'tests': False,
'integration_tests': False,
@@ -89,17 +87,8 @@ class Clio(ConanFile):
def generate(self):
tc = CMakeToolchain(self)
tc.variables['verbose'] = self.options.verbose
tc.variables['static'] = self.options.static
tc.variables['tests'] = self.options.tests
tc.variables['integration_tests'] = self.options.integration_tests
tc.variables['coverage'] = self.options.coverage
tc.variables['lint'] = self.options.lint
tc.variables['docs'] = self.options.docs
tc.variables['packaging'] = self.options.packaging
tc.variables['benchmark'] = self.options.benchmark
tc.variables['snapshot'] = self.options.snapshot
tc.variables['time_trace'] = self.options.time_trace
for option_name, option_value in self.options.items():
tc.variables[option_name] = option_value
tc.generate()
def build(self):

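The `generate()` refactor above replaces one `tc.variables[...]` assignment per option with a single loop over `self.options.items()`, so any newly added option flows into the CMake toolchain automatically. A minimal stand-in for the pattern (plain dicts instead of Conan's option and toolchain objects):

```python
class FakeToolchain:
    """Stand-in for CMakeToolchain: just collects variables."""
    def __init__(self):
        self.variables = {}

def generate(options):
    """Mirror the refactored generate(): every option becomes a variable."""
    tc = FakeToolchain()
    for option_name, option_value in options.items():
        tc.variables[option_name] = option_value
    return tc

tc = generate({"tests": True, "coverage": False, "verbose": False})
print(tc.variables["tests"])  # True
```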

@@ -1,6 +1,9 @@
FROM rippleci/clio_clang:16
FROM ghcr.io/xrplf/clio-gcc:12.3.0 AS clio-gcc
FROM ghcr.io/xrplf/clio-tools:latest AS clio-tools
FROM ghcr.io/xrplf/clio-clang:16
ARG DEBIAN_FRONTEND=noninteractive
ARG TARGETARCH
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
@@ -13,31 +16,55 @@ SHELL ["/bin/bash", "-o", "pipefail", "-c"]
USER root
WORKDIR /root
ENV CCACHE_VERSION=4.10.2 \
LLVM_TOOLS_VERSION=19 \
GH_VERSION=2.40.0 \
DOXYGEN_VERSION=1.12.0 \
CLANG_BUILD_ANALYZER_VERSION=1.6.0 \
GIT_CLIFF_VERSION=2.8.0
ARG LLVM_TOOLS_VERSION=19
# Add repositories
RUN apt-get -qq update \
&& apt-get -qq install -y --no-install-recommends --no-install-suggests gnupg wget curl software-properties-common \
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
curl \
gnupg \
wget \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& echo "deb http://apt.llvm.org/focal/ llvm-toolchain-focal-${LLVM_TOOLS_VERSION} main" >> /etc/apt/sources.list \
&& wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | apt-key add - && \
apt-get clean && rm -rf /var/lib/apt/lists/*
&& wget --progress=dot:giga -O - https://apt.llvm.org/llvm-snapshot.gpg.key | apt-key add -
# Install packages
RUN apt update -qq \
&& apt install -y --no-install-recommends --no-install-suggests python3 python3-pip git git-lfs make ninja-build flex bison jq graphviz \
clang-tidy-${LLVM_TOOLS_VERSION} clang-tools-${LLVM_TOOLS_VERSION} \
&& pip3 install -q --upgrade --no-cache-dir pip && pip3 install -q --no-cache-dir conan==1.62 gcovr cmake==3.31.6 pre-commit \
&& apt-get clean && apt remove -y software-properties-common
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
clang-tidy-${LLVM_TOOLS_VERSION} \
clang-tools-${LLVM_TOOLS_VERSION} \
git \
git-lfs \
graphviz \
jq \
make \
ninja-build \
python3 \
python3-pip \
zip \
&& pip3 install -q --upgrade --no-cache-dir pip \
&& pip3 install -q --no-cache-dir \
# TODO: Remove this once we switch to newer Ubuntu base image
# lxml 6.0.0 is not compatible with our image
'lxml<6.0.0' \
\
cmake==3.31.6 \
conan==2.17.0 \
gcovr \
pre-commit \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install gcc-12 and make ldconfig aware of the new libstdc++ location (for gcc)
# Note: Clang is using libc++ instead
COPY --from=rippleci/clio_gcc:12.3.0 /gcc12.deb /
RUN apt update && apt-get install -y binutils libc6-dev \
COPY --from=clio-gcc /gcc12.deb /
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
binutils \
libc6-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& dpkg -i /gcc12.deb \
&& rm -rf /gcc12.deb \
&& ldconfig
@@ -51,74 +78,32 @@ RUN update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-12 100 \
&& update-alternatives --install /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-12 100 \
&& update-alternatives --install /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-12 100
WORKDIR /tmp
# Install ccache from source
RUN wget "https://github.com/ccache/ccache/releases/download/v${CCACHE_VERSION}/ccache-${CCACHE_VERSION}.tar.gz" \
&& tar xf "ccache-${CCACHE_VERSION}.tar.gz" \
&& cd "ccache-${CCACHE_VERSION}" \
&& mkdir build && cd build \
&& cmake -GNinja -DCMAKE_BUILD_TYPE=Release .. \
&& cmake --build . --target install \
&& rm -rf /tmp/* /var/tmp/*
# Install doxygen from source
RUN wget "https://github.com/doxygen/doxygen/releases/download/Release_${DOXYGEN_VERSION//./_}/doxygen-${DOXYGEN_VERSION}.src.tar.gz" \
&& tar xf "doxygen-${DOXYGEN_VERSION}.src.tar.gz" \
&& cd "doxygen-${DOXYGEN_VERSION}" \
&& mkdir build && cd build \
&& cmake -GNinja -DCMAKE_BUILD_TYPE=Release .. \
&& cmake --build . --target install \
&& rm -rf /tmp/* /var/tmp/*
# Install ClangBuildAnalyzer
RUN wget "https://github.com/aras-p/ClangBuildAnalyzer/releases/download/v${CLANG_BUILD_ANALYZER_VERSION}/ClangBuildAnalyzer-linux" \
&& chmod +x ClangBuildAnalyzer-linux \
&& mv ClangBuildAnalyzer-linux /usr/bin/ClangBuildAnalyzer \
&& rm -rf /tmp/* /var/tmp/*
# Install git-cliff
RUN wget "https://github.com/orhun/git-cliff/releases/download/v${GIT_CLIFF_VERSION}/git-cliff-${GIT_CLIFF_VERSION}-x86_64-unknown-linux-musl.tar.gz" \
&& tar xf git-cliff-${GIT_CLIFF_VERSION}-x86_64-unknown-linux-musl.tar.gz \
&& mv git-cliff-${GIT_CLIFF_VERSION}/git-cliff /usr/bin/git-cliff \
&& rm -rf /tmp/* /var/tmp/*
# Install gh
RUN wget "https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_linux_${TARGETARCH}.tar.gz" \
&& tar xf gh_${GH_VERSION}_linux_${TARGETARCH}.tar.gz \
&& mv gh_${GH_VERSION}_linux_${TARGETARCH}/bin/gh /usr/bin/gh \
&& rm -rf /tmp/* /var/tmp/*
COPY --from=clio-tools \
/usr/local/bin/ccache \
/usr/local/bin/doxygen \
/usr/local/bin/ClangBuildAnalyzer \
/usr/local/bin/git-cliff \
/usr/local/bin/gh \
/usr/local/bin/
WORKDIR /root
# Setup conan
RUN conan remote add --insert 0 conan-non-prod http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
RUN conan remote add --index 0 ripple http://18.143.149.228:8081/artifactory/api/conan/dev
# Note: intentionally leaving cppstd=20
RUN conan profile new gcc --detect \
&& conan profile update settings.compiler=gcc gcc \
&& conan profile update settings.compiler.version=12 gcc \
&& conan profile update settings.compiler.cppstd=20 gcc \
&& conan profile update settings.compiler.libcxx=libstdc++11 gcc \
&& conan profile update env.CC=/usr/bin/gcc-12 gcc \
&& conan profile update env.CXX=/usr/bin/g++-12 gcc \
&& conan profile update "conf.tools.build:compiler_executables={\"c\": \"/usr/bin/gcc-12\", \"cpp\": \"/usr/bin/g++-12\"}" gcc
WORKDIR /root/.conan2
COPY conan/global.conf ./global.conf
RUN conan profile new clang --detect \
&& conan profile update settings.compiler=clang clang \
&& conan profile update settings.compiler.version=16 clang \
&& conan profile update settings.compiler.cppstd=20 clang \
&& conan profile update settings.compiler.libcxx=libc++ clang \
&& conan profile update env.CC=/usr/bin/clang-16 clang \
&& conan profile update env.CXX=/usr/bin/clang++-16 clang \
&& conan profile update env.CXXFLAGS="-DBOOST_ASIO_DISABLE_CONCEPTS" clang \
&& conan profile update "conf.tools.build:compiler_executables={\"c\": \"/usr/bin/clang-16\", \"cpp\": \"/usr/bin/clang++-16\"}" clang
WORKDIR /root/.conan2/profiles
RUN echo "include(gcc)" >> .conan/profiles/default
COPY conan/clang.profile ./clang
COPY conan/sanitizer_template.profile ./clang.asan
COPY conan/sanitizer_template.profile ./clang.tsan
COPY conan/sanitizer_template.profile ./clang.ubsan
COPY conan/gcc.asan /root/.conan/profiles
COPY conan/gcc.tsan /root/.conan/profiles
COPY conan/gcc.ubsan /root/.conan/profiles
COPY conan/clang.asan /root/.conan/profiles
COPY conan/clang.tsan /root/.conan/profiles
COPY conan/clang.ubsan /root/.conan/profiles
COPY conan/gcc.profile ./gcc
COPY conan/sanitizer_template.profile ./gcc.asan
COPY conan/sanitizer_template.profile ./gcc.tsan
COPY conan/sanitizer_template.profile ./gcc.ubsan
WORKDIR /root


@@ -5,13 +5,17 @@ It is used in [Clio Github Actions](https://github.com/XRPLF/clio/actions) but c
The image is based on Ubuntu 20.04 and contains:
- ccache 4.11.3
- clang 16.0.6
- gcc 12.3
- ClangBuildAnalyzer 1.6.0
- conan 2.17.0
- doxygen 1.12
- gh 2.40
- ccache 4.10.2
- conan 1.62
- gcc 12.3.0
- gh 2.74
- git-cliff 2.9.1
- and some other useful tools
Conan is set up to build Clio without any additional steps. There are two preset conan profiles: `clang` and `gcc` to use corresponding compiler. By default conan is setup to use `gcc`.
Sanitizer builds for `ASAN`, `TSAN` and `UBSAN` are enabled via conan profiles for each of the supported compilers. These can be selected using the following pattern (all lowercase): `[compiler].[sanitizer]` (e.g. `--profile gcc.tsan`).
Conan is set up to build Clio without any additional steps.
There are two preset conan profiles, `clang` and `gcc`, to use the corresponding compiler.
`ASan`, `TSan` and `UBSan` sanitizer builds are enabled via conan profiles for each of the supported compilers.
These can be selected using the following pattern (all lowercase): `[compiler].[sanitizer]` (e.g. `--profile:all gcc.tsan`).


@@ -1,9 +0,0 @@
include(clang)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=address\" linkflags=\"-fsanitize=address\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=address"
CXXFLAGS="-fsanitize=address"
LDFLAGS="-fsanitize=address"


@@ -0,0 +1,11 @@
[settings]
arch={{detect_api.detect_arch()}}
build_type=Release
compiler=clang
compiler.cppstd=20
compiler.libcxx=libc++
compiler.version=16
os=Linux
[conf]
tools.build:compiler_executables={'c': '/usr/bin/clang-16', 'cpp': '/usr/bin/clang++-16'}


@@ -1,9 +0,0 @@
include(clang)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=thread\" linkflags=\"-fsanitize=thread\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=thread"
CXXFLAGS="-fsanitize=thread"
LDFLAGS="-fsanitize=thread"


@@ -1,9 +0,0 @@
include(clang)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=undefined\" linkflags=\"-fsanitize=undefined\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=undefined"
CXXFLAGS="-fsanitize=undefined"
LDFLAGS="-fsanitize=undefined"


@@ -1,9 +0,0 @@
include(gcc)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=address\" linkflags=\"-fsanitize=address\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=address"
CXXFLAGS="-fsanitize=address"
LDFLAGS="-fsanitize=address"


@@ -0,0 +1,11 @@
[settings]
arch={{detect_api.detect_arch()}}
build_type=Release
compiler=gcc
compiler.cppstd=20
compiler.libcxx=libstdc++11
compiler.version=12
os=Linux
[conf]
tools.build:compiler_executables={'c': '/usr/bin/gcc-12', 'cpp': '/usr/bin/g++-12'}


@@ -1,9 +0,0 @@
include(gcc)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=thread\" linkflags=\"-fsanitize=thread\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=thread"
CXXFLAGS="-fsanitize=thread"
LDFLAGS="-fsanitize=thread"


@@ -1,9 +0,0 @@
include(gcc)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=undefined\" linkflags=\"-fsanitize=undefined\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=undefined"
CXXFLAGS="-fsanitize=undefined"
LDFLAGS="-fsanitize=undefined"


@@ -0,0 +1,3 @@
core.download:parallel={{os.cpu_count()}}
core.upload:parallel={{os.cpu_count()}}
tools.info.package_id:confs = ["tools.build:cflags", "tools.build:cxxflags", "tools.build:exelinkflags", "tools.build:sharedlinkflags"]


@@ -0,0 +1,20 @@
{% set compiler, sani = profile_name.split('.') %}
{% set sanitizer_opt_map = {'asan': 'address', 'tsan': 'thread', 'ubsan': 'undefined'} %}
{% set sanitizer = sanitizer_opt_map[sani] %}
{% set sanitizer_build_flags_str = "-fsanitize=" ~ sanitizer ~ " -g -O1 -fno-omit-frame-pointer" %}
{% set sanitizer_build_flags = sanitizer_build_flags_str.split(' ') %}
{% set sanitizer_link_flags_str = "-fsanitize=" ~ sanitizer %}
{% set sanitizer_link_flags = sanitizer_link_flags_str.split(' ') %}
include({{ compiler }})
[options]
boost/*:extra_b2_flags = "cxxflags=\"{{ sanitizer_build_flags_str }}\" linkflags=\"{{ sanitizer_link_flags_str }}\""
boost/*:without_stacktrace = True
[conf]
tools.build:cflags += {{ sanitizer_build_flags }}
tools.build:cxxflags += {{ sanitizer_build_flags }}
tools.build:exelinkflags += {{ sanitizer_link_flags }}
tools.build:sharedlinkflags += {{ sanitizer_link_flags }}
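The sanitizer profile derives everything from its own filename: `profile_name.split('.')` yields the compiler profile to include and the sanitizer key. A Python sketch of that expansion for, say, `gcc.asan` (mirrors the Jinja logic above; not Conan's actual templating engine):

```python
def expand(profile_name):
    """Expand a '<compiler>.<sanitizer>' profile name into its settings."""
    compiler, sani = profile_name.split(".")
    sanitizer = {"asan": "address", "tsan": "thread", "ubsan": "undefined"}[sani]
    build_flags = f"-fsanitize={sanitizer} -g -O1 -fno-omit-frame-pointer"
    link_flags = f"-fsanitize={sanitizer}"
    return {
        "include": compiler,  # include(gcc) or include(clang)
        "tools.build:cxxflags": build_flags.split(" "),
        "tools.build:exelinkflags": link_flags.split(" "),
    }

print(expand("gcc.asan")["include"])  # gcc
print(expand("clang.tsan")["tools.build:cxxflags"][0])
```

Copying the same template file to `clang.asan`, `gcc.tsan`, etc. is what makes one template serve all six compiler/sanitizer combinations.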


@@ -2,14 +2,17 @@ FROM ubuntu:22.04
COPY ./clio_server /opt/clio/bin/clio_server
RUN ln -s /opt/clio/bin/clio_server /usr/local/bin/clio_server && \
mkdir -p /opt/clio/etc/ && \
mkdir -p /opt/clio/log/ && \
groupadd -g 10001 clio && \
useradd -u 10000 -g 10001 -s /bin/bash clio && \
chown clio:clio /opt/clio/log && \
apt update && \
apt install -y libatomic1
RUN ln -s /opt/clio/bin/clio_server /usr/local/bin/clio_server \
&& mkdir -p /opt/clio/etc/ \
&& mkdir -p /opt/clio/log/ \
&& groupadd -g 10001 clio \
&& useradd -u 10000 -g 10001 -s /bin/bash clio \
&& chown clio:clio /opt/clio/log \
&& apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
libatomic1 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
USER clio
ENTRYPOINT ["/opt/clio/bin/clio_server"]


@@ -1,21 +0,0 @@
FROM ubuntu:focal
ARG DEBIAN_FRONTEND=noninteractive
ARG TARGETARCH
SHELL ["/bin/bash", "-c"]
# hadolint ignore=DL3002
USER root
WORKDIR /root
ENV CLANG_VERSION=16
RUN apt update -qq \
&& apt install -qq -y --no-install-recommends --no-install-suggests \
wget software-properties-common gnupg
RUN wget https://apt.llvm.org/llvm.sh \
&& chmod +x llvm.sh \
&& ./llvm.sh ${CLANG_VERSION} \
&& rm -rf llvm.sh \
&& apt-get install -y libc++-16-dev libc++abi-16-dev


@@ -0,0 +1,30 @@
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
SHELL ["/bin/bash", "-c"]
# hadolint ignore=DL3002
USER root
WORKDIR /root
ARG CLANG_VERSION=16
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
wget \
software-properties-common \
gnupg \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN wget --progress=dot:giga https://apt.llvm.org/llvm.sh \
&& chmod +x llvm.sh \
&& ./llvm.sh ${CLANG_VERSION} \
&& rm -rf llvm.sh \
&& apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
libc++-${CLANG_VERSION}-dev \
libc++abi-${CLANG_VERSION}-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*


@@ -0,0 +1,3 @@
# Clang compiler
This image contains the clang compiler used to build <https://github.com/XRPLF/clio>.


@@ -1,74 +0,0 @@
FROM ubuntu:focal as build
ARG DEBIAN_FRONTEND=noninteractive
ARG TARGETARCH
ARG UBUNTU_VERSION=20.04
ARG GCC_VERSION=12.3.0
ARG BUILD_VERSION=1
RUN apt update && apt install -y wget build-essential file flex libz-dev libzstd-dev
RUN wget https://gcc.gnu.org/pub/gcc/releases/gcc-$GCC_VERSION/gcc-$GCC_VERSION.tar.gz \
&& tar xf gcc-$GCC_VERSION.tar.gz \
&& cd /gcc-$GCC_VERSION && ./contrib/download_prerequisites
RUN mkdir /${TARGETARCH}-gcc-12
WORKDIR /${TARGETARCH}-gcc-12
RUN /gcc-$GCC_VERSION/configure \
--with-pkgversion="clio-build-$BUILD_VERSION https://github.com/XRPLF/clio" \
--enable-languages=c,c++ \
--prefix=/usr \
--with-gcc-major-version-only \
--program-suffix=-12 \
--enable-shared \
--enable-linker-build-id \
--libexecdir=/usr/lib \
--without-included-gettext \
--enable-threads=posix \
--libdir=/usr/lib \
--disable-nls \
--enable-clocale=gnu \
--enable-libstdcxx-backtrace=yes \
--enable-libstdcxx-debug \
--enable-libstdcxx-time=yes \
--with-default-libstdcxx-abi=new \
--enable-gnu-unique-object \
--disable-vtable-verify \
--enable-plugin \
--enable-default-pie \
--with-system-zlib \
--enable-libphobos-checking=release \
--with-target-system-zlib=auto \
--disable-werror \
--enable-cet \
--disable-multilib \
--without-cuda-driver \
--enable-checking=release \
&& make -j "$(nproc)" \
&& make install-strip DESTDIR=/gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION \
&& mkdir -p /gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/usr/share/gdb/auto-load/usr/lib64 \
&& mv /gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/usr/lib64/libstdc++.so.6.0.30-gdb.py /gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/usr/share/gdb/auto-load/usr/lib64/libstdc++.so.6.0.30-gdb.py
# Generate deb
WORKDIR /
COPY control.m4 /
COPY ld.so.conf /gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/etc/ld.so.conf.d/1-gcc-12.conf
RUN mkdir /gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/DEBIAN \
&& m4 -P -DUBUNTU_VERSION=$UBUNTU_VERSION -DVERSION=$GCC_VERSION-$BUILD_VERSION -DTARGETARCH=$TARGETARCH control.m4 > /gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/DEBIAN/control \
&& dpkg-deb --build --root-owner-group /gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION /gcc12.deb
# Create final image
FROM ubuntu:focal as gcc
COPY --from=build /gcc12.deb /
# Make gcc-12 available but also leave gcc12.deb for others to copy if needed
RUN apt update && apt-get install -y binutils libc6-dev \
&& dpkg -i /gcc12.deb
RUN update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-12 100 \
&& update-alternatives --install /usr/bin/c++ c++ /usr/bin/g++-12 100 \
&& update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-12 100 \
&& update-alternatives --install /usr/bin/cc cc /usr/bin/gcc-12 100 \
&& update-alternatives --install /usr/bin/gcov gcov /usr/bin/gcov-12 100 \
&& update-alternatives --install /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-12 100 \
&& update-alternatives --install /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-12 100


@@ -0,0 +1,118 @@
ARG UBUNTU_VERSION=20.04
ARG GCC_MAJOR_VERSION=12
FROM ubuntu:$UBUNTU_VERSION AS build
ARG UBUNTU_VERSION
ARG GCC_MAJOR_VERSION
ARG GCC_MINOR_VERSION=3
ARG GCC_PATCH_VERSION=0
ARG GCC_VERSION=${GCC_MAJOR_VERSION}.${GCC_MINOR_VERSION}.${GCC_PATCH_VERSION}
ARG BUILD_VERSION=6
ARG DEBIAN_FRONTEND=noninteractive
ARG TARGETARCH
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
build-essential \
file \
flex \
libz-dev \
libzstd-dev \
software-properties-common \
wget \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /
RUN wget --progress=dot:giga https://gcc.gnu.org/pub/gcc/releases/gcc-$GCC_VERSION/gcc-$GCC_VERSION.tar.gz \
&& tar xf gcc-$GCC_VERSION.tar.gz
WORKDIR /gcc-$GCC_VERSION
RUN ./contrib/download_prerequisites
RUN mkdir /gcc-build
WORKDIR /gcc-build
RUN /gcc-$GCC_VERSION/configure \
--with-pkgversion="clio-build-$BUILD_VERSION https://github.com/XRPLF/clio" \
--enable-languages=c,c++ \
--prefix=/usr \
--with-gcc-major-version-only \
--program-suffix=-${GCC_MAJOR_VERSION} \
--enable-shared \
--enable-linker-build-id \
--libexecdir=/usr/lib \
--without-included-gettext \
--enable-threads=posix \
--libdir=/usr/lib \
--disable-nls \
--enable-clocale=gnu \
--enable-libstdcxx-backtrace=yes \
--enable-libstdcxx-debug \
--enable-libstdcxx-time=yes \
--with-default-libstdcxx-abi=new \
--enable-gnu-unique-object \
--disable-vtable-verify \
--enable-plugin \
--enable-default-pie \
--with-system-zlib \
--enable-libphobos-checking=release \
--with-target-system-zlib=auto \
--disable-werror \
--enable-cet \
--disable-multilib \
--without-cuda-driver \
--enable-checking=release
RUN make -j "$(nproc)"
RUN make install-strip DESTDIR=/gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION \
&& mkdir -p /gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/usr/share/gdb/auto-load/usr/lib64 \
&& mv \
/gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/usr/lib64/libstdc++.so.6.0.30-gdb.py \
/gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/usr/share/gdb/auto-load/usr/lib64/libstdc++.so.6.0.30-gdb.py
# Generate deb
WORKDIR /
COPY control.m4 /
COPY ld.so.conf /gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/etc/ld.so.conf.d/1-gcc-${GCC_MAJOR_VERSION}.conf
RUN mkdir /gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/DEBIAN \
&& m4 \
-P \
-DUBUNTU_VERSION=$UBUNTU_VERSION \
-DVERSION=$GCC_VERSION-$BUILD_VERSION \
-DTARGETARCH=$TARGETARCH \
control.m4 > /gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/DEBIAN/control \
&& dpkg-deb \
--build \
--root-owner-group \
/gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION \
/gcc${GCC_MAJOR_VERSION}.deb
# Create final image
FROM ubuntu:$UBUNTU_VERSION
ARG GCC_MAJOR_VERSION
COPY --from=build /gcc${GCC_MAJOR_VERSION}.deb /
# Install gcc-${GCC_MAJOR_VERSION}, but also leave gcc${GCC_MAJOR_VERSION}.deb for others to copy if needed
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
binutils \
libc6-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& dpkg -i /gcc${GCC_MAJOR_VERSION}.deb
RUN update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-${GCC_MAJOR_VERSION} 100 \
&& update-alternatives --install /usr/bin/c++ c++ /usr/bin/g++-${GCC_MAJOR_VERSION} 100 \
&& update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-${GCC_MAJOR_VERSION} 100 \
&& update-alternatives --install /usr/bin/cc cc /usr/bin/gcc-${GCC_MAJOR_VERSION} 100 \
&& update-alternatives --install /usr/bin/gcov gcov /usr/bin/gcov-${GCC_MAJOR_VERSION} 100 \
&& update-alternatives --install /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-${GCC_MAJOR_VERSION} 100 \
&& update-alternatives --install /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-${GCC_MAJOR_VERSION} 100


@@ -0,0 +1,3 @@
# GCC compiler
This image contains the GCC compiler used to build <https://github.com/XRPLF/clio>.


@@ -2,5 +2,6 @@ Package: gcc-12-ubuntu-UBUNTUVERSION
Version: VERSION
Architecture: TARGETARCH
Maintainer: Alex Kremer <akremer@ripple.com>
Description: Gcc VERSION build for ubuntu UBUNTUVERSION
Uploaders: Ayaz Salikhov <asalikhov@ripple.com>
Description: GCC VERSION build for ubuntu UBUNTUVERSION
Depends: binutils, libc6-dev


@@ -2,7 +2,7 @@ services:
clio_develop:
image: ghcr.io/xrplf/clio-ci:latest
volumes:
- clio_develop_conan_data:/root/.conan/data
- clio_develop_conan_data:/root/.conan2/p
- clio_develop_ccache:/root/.ccache
- ../../:/root/clio
- clio_develop_build:/root/clio/build_docker

docker/tools/Dockerfile Normal file

@@ -0,0 +1,64 @@
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
ARG TARGETARCH
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
bison \
build-essential \
cmake \
flex \
ninja-build \
software-properties-common \
wget \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /tmp
ARG CCACHE_VERSION=4.11.3
RUN wget --progress=dot:giga "https://github.com/ccache/ccache/releases/download/v${CCACHE_VERSION}/ccache-${CCACHE_VERSION}.tar.gz" \
&& tar xf "ccache-${CCACHE_VERSION}.tar.gz" \
&& cd "ccache-${CCACHE_VERSION}" \
&& mkdir build \
&& cd build \
&& cmake -GNinja -DCMAKE_BUILD_TYPE=Release -DENABLE_TESTING=False .. \
&& cmake --build . --target install \
&& rm -rf /tmp/* /var/tmp/*
ARG DOXYGEN_VERSION=1.12.0
RUN wget --progress=dot:giga "https://github.com/doxygen/doxygen/releases/download/Release_${DOXYGEN_VERSION//./_}/doxygen-${DOXYGEN_VERSION}.src.tar.gz" \
&& tar xf "doxygen-${DOXYGEN_VERSION}.src.tar.gz" \
&& cd "doxygen-${DOXYGEN_VERSION}" \
&& mkdir build \
&& cd build \
&& cmake -GNinja -DCMAKE_BUILD_TYPE=Release .. \
&& cmake --build . --target install \
&& rm -rf /tmp/* /var/tmp/*
ARG CLANG_BUILD_ANALYZER_VERSION=1.6.0
RUN wget --progress=dot:giga "https://github.com/aras-p/ClangBuildAnalyzer/archive/refs/tags/v${CLANG_BUILD_ANALYZER_VERSION}.tar.gz" \
&& tar xf "v${CLANG_BUILD_ANALYZER_VERSION}.tar.gz" \
&& cd "ClangBuildAnalyzer-${CLANG_BUILD_ANALYZER_VERSION}" \
&& mkdir build \
&& cd build \
&& cmake -GNinja -DCMAKE_BUILD_TYPE=Release .. \
&& cmake --build . --target install \
&& rm -rf /tmp/* /var/tmp/*
ARG GIT_CLIFF_VERSION=2.9.1
RUN wget --progress=dot:giga "https://github.com/orhun/git-cliff/releases/download/v${GIT_CLIFF_VERSION}/git-cliff-${GIT_CLIFF_VERSION}-x86_64-unknown-linux-musl.tar.gz" \
&& tar xf git-cliff-${GIT_CLIFF_VERSION}-x86_64-unknown-linux-musl.tar.gz \
&& mv git-cliff-${GIT_CLIFF_VERSION}/git-cliff /usr/local/bin/git-cliff \
&& rm -rf /tmp/* /var/tmp/*
ARG GH_VERSION=2.74.0
RUN wget --progress=dot:giga "https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_linux_${TARGETARCH}.tar.gz" \
&& tar xf gh_${GH_VERSION}_linux_${TARGETARCH}.tar.gz \
&& mv gh_${GH_VERSION}_linux_${TARGETARCH}/bin/gh /usr/local/bin/gh \
&& rm -rf /tmp/* /var/tmp/*
WORKDIR /root


@@ -22,6 +22,7 @@ WARNINGS = ${LINT}
WARN_NO_PARAMDOC = ${LINT}
WARN_IF_INCOMPLETE_DOC = ${LINT}
WARN_IF_UNDOCUMENTED = ${LINT}
WARN_AS_ERROR = ${WARN_AS_ERROR}
GENERATE_LATEX = NO
GENERATE_HTML = YES


@@ -6,7 +6,7 @@
## Minimum Requirements
- [Python 3.7](https://www.python.org/downloads/)
- [Conan 1.55, <2.0](https://conan.io/downloads.html)
- [Conan 2.17.0](https://conan.io/downloads.html)
- [CMake 3.20, <4.0](https://cmake.org/download/)
- [**Optional**] [GCovr](https://gcc.gnu.org/onlinedocs/gcc/Gcov.html): needed for code coverage generation
- [**Optional**] [CCache](https://ccache.dev/): speeds up compilation if you are going to compile Clio often
@@ -19,41 +19,65 @@
### Conan Configuration
Clio requires `compiler.cppstd=20` in your Conan profile (`~/.conan/profiles/default`).
By default, Conan uses `~/.conan2` as its home folder.
You can change it by using `$CONAN_HOME` env variable.
[More info about Conan home](https://docs.conan.io/2/reference/environment.html#conan-home).
> [!NOTE]
> Although Clio is built using C++23, it's required to set `compiler.cppstd=20` for the time being as some of Clio's dependencies are not yet capable of building under C++23.
> [!TIP]
> To setup Conan automatically, you can run `.github/scripts/conan/init.sh`.
> This will delete the Conan home directory (if it exists), set up profiles, and add the Artifactory remote.
**Mac example**:
The instruction below assumes that `$CONAN_HOME` is not set.
#### Profiles
The default profile is the file in `~/.conan2/profiles/default`.
Here are some examples of possible profiles:
**Mac apple-clang 16 example**:
```text
[settings]
os=Macos
os_build=Macos
arch=armv8
arch_build=armv8
compiler=apple-clang
compiler.version=15
compiler.libcxx=libc++
arch={{detect_api.detect_arch()}}
build_type=Release
compiler=apple-clang
compiler.cppstd=20
compiler.libcxx=libc++
compiler.version=16
os=Macos
[conf]
tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]
grpc/1.50.1:tools.build:cxxflags+=["-Wno-missing-template-arg-list-after-template-kw"]
```
**Linux example**:
**Linux gcc-12 example**:
```text
[settings]
os=Linux
os_build=Linux
arch=x86_64
arch_build=x86_64
compiler=gcc
compiler.version=12
compiler.libcxx=libstdc++11
arch={{detect_api.detect_arch()}}
build_type=Release
compiler=gcc
compiler.cppstd=20
compiler.libcxx=libstdc++11
compiler.version=12
os=Linux
[conf]
tools.build:compiler_executables={'c': '/usr/bin/gcc-12', 'cpp': '/usr/bin/g++-12'}
```
> [!NOTE]
> Although Clio is built using C++23, it's required to set `compiler.cppstd=20` in your profile for the time being as some of Clio's dependencies are not yet capable of building under C++23.
#### global.conf file
Add the following to the `~/.conan2/global.conf` file:
```text
core.download:parallel={{os.cpu_count()}}
core.upload:parallel={{os.cpu_count()}}
tools.info.package_id:confs = ["tools.build:cflags", "tools.build:cxxflags", "tools.build:exelinkflags", "tools.build:sharedlinkflags"]
```
#### Artifactory
@@ -61,18 +85,24 @@ compiler.cppstd=20
Make sure Artifactory is set up with Conan.
```sh
conan remote add --index 0 ripple http://18.143.149.228:8081/artifactory/api/conan/dev
```
Now you should be able to download the prebuilt dependencies (including `xrpl` package) on supported platforms.
> [!NOTE]
> You may need to edit the `~/.conan2/remotes.json` file to ensure that this newly added Artifactory remote is listed last. Otherwise, you could see compilation errors when building the project with GCC 13 (or newer).
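To check the resulting order, `conan remote list` prints remotes in the order they are consulted; a minimal sketch, guarded so it is a no-op on machines without Conan installed:

```shell
# Show configured remotes in resolution order; the first listed is tried first.
if command -v conan >/dev/null 2>&1; then
  remotes="$(conan remote list)"
else
  remotes="conan not installed; skipping"
fi
echo "$remotes"
```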
#### Conan lockfile
To achieve reproducible dependencies, we use [Conan lockfile](https://docs.conan.io/2/tutorial/versioning/lockfiles.html).
The `conan.lock` file in the repository contains a "snapshot" of the current dependencies.
It is implicitly used when running `conan` commands; you don't need to specify it explicitly.
You have to update this file every time you add a new dependency or change a revision or version of an existing dependency.
To do that, run the following command in the repository root:
```bash
conan lock create . -o '&:tests=True' -o '&:benchmark=True'
```
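A typical update flow might look like the following hedged sketch; it assumes you run it from a checkout containing `conanfile.py`, and degrades to a no-op elsewhere:

```shell
# Re-generate conan.lock after adding or bumping a dependency.
if command -v conan >/dev/null 2>&1 && [ -f conanfile.py ]; then
  conan lock create . -o '&:tests=True' -o '&:benchmark=True'
  lock_status="updated"
else
  lock_status="skipped"
fi
echo "lockfile: $lock_status"
```

Commit the refreshed `conan.lock` together with the dependency change so CI resolves the same revisions.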
## Building Clio
@@ -81,19 +111,22 @@ Navigate to Clio's root directory and run:
```sh
mkdir build && cd build
# You can also specify profile explicitly by adding `--profile:all <PROFILE_NAME>`
conan install .. --output-folder . --build missing --settings build_type=Release -o '&:tests=True'
# You can also add -GNinja to use Ninja build system instead of Make
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --parallel 8 # or without the number if you feel extra adventurous
```
> [!TIP]
> You can omit the `-o '&:tests=True'` if you don't want to build `clio_tests`.
If successful, `conan install` will find the required packages and `cmake` will do the rest. You should see `clio_server` and `clio_tests` in the `build` directory (the current directory).
> [!TIP]
> To generate a Code Coverage report, include `-o '&:coverage=True'` in the `conan install` command above, along with `-o '&:tests=True'` to enable tests.
> After running the `cmake` commands, execute `make clio_tests-ccov`.
> The coverage report will be found at `clio_tests-llvm-cov/index.html`.
<!-- markdownlint-disable-line MD028 -->
@@ -106,11 +139,11 @@ The API documentation for Clio is generated by [Doxygen](https://www.doxygen.nl/
To generate the API docs:
1. First, include `-o '&:docs=True'` in the conan install command. For example:
```sh
mkdir build && cd build
conan install .. --output-folder . --build missing --settings build_type=Release -o '&:tests=True' -o '&:docs=True'
```
2. Once that has completed successfully, run the `cmake` command and add the `--target docs` option:
@@ -134,7 +167,7 @@ It is also possible to build Clio using [Docker](https://www.docker.com/) if you
docker run -it ghcr.io/xrplf/clio-ci:latest
git clone https://github.com/XRPLF/clio
mkdir build && cd build
conan install .. --output-folder . --build missing --settings build_type=Release -o '&:tests=True'
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --parallel 8 # or without the number if you feel extra adventurous
```
@@ -150,24 +183,24 @@ If you wish to develop against a `rippled` instance running in standalone mode t
Sometimes, during development, you need to build against a custom version of `libxrpl`. (For example, you may be developing compatibility for a proposed amendment that is not yet merged to the main `rippled` codebase.) To build Clio with compatibility for a custom fork or branch of `rippled`, follow these steps:
1. First, pull/clone the appropriate `rippled` version and switch to the branch you want to build.
The following example uses a `2.5.0-rc1` tag of rippled in the main branch:
```sh
git clone https://github.com/XRPLF/rippled/
cd rippled
git checkout 2.5.0-rc1
```
2. Export a custom package to your local Conan store using a user/channel:
```sh
conan export . --user=my --channel=feature
```
3. Patch your local Clio build to use the right package.
Edit `conanfile.py` in the Clio repository root. Replace the `xrpl` requirement with the custom package version from the previous step. This must also include the current version number from your `rippled` branch. For example:
```py
# ... (excerpt from conanfile.py)
@@ -178,7 +211,7 @@ Sometimes, during development, you need to build against a custom version of `li
'protobuf/3.21.9',
'grpc/1.50.1',
'openssl/1.1.1v',
'xrpl/2.5.0-rc1@my/feature', # Use your exported version here
'zlib/1.3.1',
'libbacktrace/cci.20210118'
]
@@ -192,10 +225,11 @@ Sometimes, during development, you need to build against a custom version of `li
The minimum [clang-tidy](https://clang.llvm.org/extra/clang-tidy/) version required is 19.0.
Clang-tidy can be run by CMake when building the project.
To achieve this, you just need to provide the option `-o '&:lint=True'` for the `conan install` command:
```sh
conan install .. --output-folder . --build missing --settings build_type=Release -o '&:tests=True' -o '&:lint=True'
```
By default, CMake will try to find `clang-tidy` automatically on your system.
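Since the minimum supported clang-tidy is 19.0, it can be worth checking which binary CMake is likely to pick up before enabling linting; a small sketch (assumes nothing beyond a POSIX shell):

```shell
# Report the clang-tidy on PATH, if any; Clio's lint build needs 19.0+.
if command -v clang-tidy >/dev/null 2>&1; then
  clang-tidy --version
  tidy_on_path="yes"
else
  echo "clang-tidy not found on PATH"
  tidy_on_path="no"
fi
```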

View File

@@ -61,6 +61,7 @@ pushd ${DOCDIR} > /dev/null 2>&1
cat ${ROOT}/docs/Doxyfile | \
sed \
-e "s/\${LINT}/YES/" \
-e "s/\${WARN_AS_ERROR}/NO/" \
-e "s!\${SOURCE}!${ROOT}!" \
-e "s/\${USE_DOT}/NO/" \
-e "s/\${EXCLUDES}/impl/" \

View File

@@ -36,6 +36,8 @@
#include "rpc/RPCEngine.hpp"
#include "rpc/WorkQueue.hpp"
#include "rpc/common/impl/HandlerProvider.hpp"
#include "util/Random.hpp"
#include "util/async/context/BasicExecutionContext.hpp"
#include "util/build/Build.hpp"
#include "util/config/ConfigDefinition.hpp"
#include "util/log/Logger.hpp"
@@ -103,6 +105,10 @@ ClioApplication::run(bool const useNgWebServer)
// This is not the only io context in the application.
boost::asio::io_context ioc{threads};
// Similarly we need a context to run ETLng on
// In the future we can remove the raw ioc and use ctx instead
util::async::CoroExecutionContext ctx{threads};
// Rate limiter, to prevent abuse
auto whitelistHandler = web::dosguard::WhitelistHandler{config_};
auto const dosguardWeights = web::dosguard::Weights::make(config_);
@@ -139,14 +145,19 @@ ClioApplication::run(bool const useNgWebServer)
// The server uses the balancer to forward RPCs to a rippled node.
// The balancer itself publishes to streams (transactions_proposed and accounts_proposed)
auto balancer = [&] -> std::shared_ptr<etlng::LoadBalancerInterface> {
if (config_.get<bool>("__ng_etl")) {
return etlng::LoadBalancer::makeLoadBalancer(
config_, ioc, backend, subscriptions, std::make_unique<util::MTRandomGenerator>(), ledgers
);
}
return etl::LoadBalancer::makeLoadBalancer(
config_, ioc, backend, subscriptions, std::make_unique<util::MTRandomGenerator>(), ledgers
);
}();
// ETL is responsible for writing and publishing to streams. In read-only mode, ETL only publishes
auto etl = etl::ETLService::makeETLService(config_, ioc, ctx, backend, subscriptions, balancer, ledgers);
auto workQueue = rpc::WorkQueue::makeWorkQueue(config_);
auto counters = rpc::Counters::makeCounters(workQueue);

View File

@@ -139,6 +139,12 @@ struct Amendments {
REGISTER(DeepFreeze);
REGISTER(PermissionDelegation);
REGISTER(fixPayChanCancelAfter);
REGISTER(Batch);
REGISTER(PermissionedDEX);
REGISTER(SingleAssetVault);
REGISTER(TokenEscrow);
REGISTER(fixAMMv1_3);
REGISTER(fixEnforceNFTokenTrustlineV2);
// Obsolete but supported by libxrpl
REGISTER(CryptoConditionsSuite);

View File

@@ -26,6 +26,7 @@
#include "etl/impl/CursorFromAccountProvider.hpp"
#include "etl/impl/CursorFromDiffProvider.hpp"
#include "etl/impl/CursorFromFixDiffNumProvider.hpp"
#include "etlng/CacheLoaderInterface.hpp"
#include "util/Assert.hpp"
#include "util/async/context/BasicExecutionContext.hpp"
#include "util/log/Logger.hpp"
@@ -33,6 +34,7 @@
#include <cstdint>
#include <functional>
#include <memory>
#include <utility>
namespace etl {
@@ -46,7 +48,7 @@ namespace etl {
* @tparam ExecutionContextType The type of the execution context to use
*/
template <typename ExecutionContextType = util::async::CoroExecutionContext>
class CacheLoader : public etlng::CacheLoaderInterface {
using CacheLoaderType = impl::CacheLoaderImpl<data::LedgerCacheInterface>;
util::Logger log_{"ETL"};
@@ -67,10 +69,13 @@ public:
*/
CacheLoader(
util::config::ClioConfigDefinition const& config,
std::shared_ptr<BackendInterface> backend,
data::LedgerCacheInterface& cache
)
: backend_{std::move(backend)}
, cache_{cache}
, settings_{makeCacheLoaderSettings(config)}
, ctx_{settings_.numThreads}
{
}
@@ -83,7 +88,7 @@ public:
* @param seq The sequence number to load cache for
*/
void
load(uint32_t const seq) override
{
ASSERT(not cache_.get().isFull(), "Cache must not be full. seq = {}", seq);
@@ -129,7 +134,7 @@ public:
* @brief Requests the loader to stop asap
*/
void
stop() noexcept override
{
if (loader_ != nullptr)
loader_->stop();
@@ -139,7 +144,7 @@ public:
* @brief Waits for the loader to finish background work
*/
void
wait() noexcept override
{
if (loader_ != nullptr)
loader_->wait();

View File

@@ -20,12 +20,34 @@
#include "etl/ETLService.hpp"
#include "data/BackendInterface.hpp"
#include "etl/CacheLoader.hpp"
#include "etl/CorruptionDetector.hpp"
#include "etl/LoadBalancer.hpp"
#include "etl/NetworkValidatedLedgersInterface.hpp"
#include "etl/SystemState.hpp"
#include "etl/impl/AmendmentBlockHandler.hpp"
#include "etl/impl/ExtractionDataPipe.hpp"
#include "etl/impl/Extractor.hpp"
#include "etl/impl/LedgerFetcher.hpp"
#include "etl/impl/LedgerLoader.hpp"
#include "etl/impl/LedgerPublisher.hpp"
#include "etl/impl/Transformer.hpp"
#include "etlng/ETLService.hpp"
#include "etlng/ETLServiceInterface.hpp"
#include "etlng/LoadBalancer.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "etlng/impl/LedgerPublisher.hpp"
#include "etlng/impl/MonitorProvider.hpp"
#include "etlng/impl/TaskManagerProvider.hpp"
#include "etlng/impl/ext/Cache.hpp"
#include "etlng/impl/ext/Core.hpp"
#include "etlng/impl/ext/MPT.hpp"
#include "etlng/impl/ext/NFT.hpp"
#include "etlng/impl/ext/Successor.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
#include "util/Assert.hpp"
#include "util/Constants.hpp"
#include "util/async/AnyExecutionContext.hpp"
#include "util/config/ConfigDefinition.hpp"
#include "util/log/Logger.hpp"
@@ -45,6 +67,82 @@
namespace etl {
std::shared_ptr<etlng::ETLServiceInterface>
ETLService::makeETLService(
util::config::ClioConfigDefinition const& config,
boost::asio::io_context& ioc,
util::async::AnyExecutionContext ctx,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::shared_ptr<etlng::LoadBalancerInterface> balancer,
std::shared_ptr<NetworkValidatedLedgersInterface> ledgers
)
{
std::shared_ptr<etlng::ETLServiceInterface> ret;
if (config.get<bool>("__ng_etl")) {
ASSERT(
std::dynamic_pointer_cast<etlng::LoadBalancer>(balancer), "LoadBalancer type must be etlng::LoadBalancer"
);
auto state = std::make_shared<etl::SystemState>();
state->isStrictReadonly = config.get<bool>("read_only");
auto fetcher = std::make_shared<etl::impl::LedgerFetcher>(backend, balancer);
auto extractor = std::make_shared<etlng::impl::Extractor>(fetcher);
auto publisher = std::make_shared<etlng::impl::LedgerPublisher>(ioc, backend, subscriptions, *state);
auto cacheLoader = std::make_shared<etl::CacheLoader<>>(config, backend, backend->cache());
auto cacheUpdater = std::make_shared<etlng::impl::CacheUpdater>(backend->cache());
auto amendmentBlockHandler = std::make_shared<etlng::impl::AmendmentBlockHandler>(ctx, *state);
auto monitorProvider = std::make_shared<etlng::impl::MonitorProvider>();
backend->setCorruptionDetector(CorruptionDetector{*state, backend->cache()});
auto loader = std::make_shared<etlng::impl::Loader>(
backend,
etlng::impl::makeRegistry(
*state,
etlng::impl::CacheExt{cacheUpdater},
etlng::impl::CoreExt{backend},
etlng::impl::SuccessorExt{backend, backend->cache()},
etlng::impl::NFTExt{backend},
etlng::impl::MPTExt{backend}
),
amendmentBlockHandler,
state
);
auto taskManagerProvider = std::make_shared<etlng::impl::TaskManagerProvider>(*ledgers, extractor, loader);
ret = std::make_shared<etlng::ETLService>(
ctx,
config,
backend,
balancer,
ledgers,
publisher,
cacheLoader,
cacheUpdater,
extractor,
loader, // loader itself
loader, // initial load observer
taskManagerProvider,
monitorProvider,
state
);
} else {
ASSERT(std::dynamic_pointer_cast<etl::LoadBalancer>(balancer), "LoadBalancer type must be etl::LoadBalancer");
ret = std::make_shared<etl::ETLService>(config, ioc, backend, subscriptions, balancer, ledgers);
}
// inject networkID into subscriptions, as transaction feeds require it to inject CTID in responses
if (auto const state = ret->getETLState(); state)
subscriptions->setNetworkID(state->networkID);
ret->run();
return ret;
}
// Database must be populated when this starts
std::optional<uint32_t>
ETLService::runETLPipeline(uint32_t startSequence, uint32_t numExtractors)
@@ -254,7 +352,7 @@ ETLService::doWork()
worker_ = std::thread([this]() {
beast::setCurrentThreadName("ETLService worker");
if (state_.isStrictReadonly) {
monitorReadOnly();
} else {
monitor();
@@ -281,7 +379,7 @@ ETLService::ETLService(
{
startSequence_ = config.maybeValue<uint32_t>("start_sequence");
finishSequence_ = config.maybeValue<uint32_t>("finish_sequence");
state_.isStrictReadonly = config.get<bool>("read_only");
extractorThreads_ = config.get<uint32_t>("extractor_threads");
// This should probably be done in the backend factory but we don't have state available until here

View File

@@ -22,7 +22,6 @@
#include "data/BackendInterface.hpp"
#include "etl/CacheLoader.hpp"
#include "etl/ETLState.hpp"
#include "etl/LoadBalancer.hpp"
#include "etl/NetworkValidatedLedgersInterface.hpp"
#include "etl/SystemState.hpp"
#include "etl/impl/AmendmentBlockHandler.hpp"
@@ -32,12 +31,12 @@
#include "etl/impl/LedgerLoader.hpp"
#include "etl/impl/LedgerPublisher.hpp"
#include "etl/impl/Transformer.hpp"
#include "etlng/ETLService.hpp"
#include "etlng/ETLServiceInterface.hpp"
#include "etlng/LoadBalancer.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "etlng/impl/LedgerPublisher.hpp"
#include "etlng/impl/TaskManagerProvider.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
#include "util/Assert.hpp"
#include "util/async/AnyExecutionContext.hpp"
#include "util/log/Logger.hpp"
#include <boost/asio/io_context.hpp>
@@ -150,6 +149,7 @@ public:
*
* @param config The configuration to use
* @param ioc io context to run on
* @param ctx Execution context for asynchronous operations
* @param backend BackendInterface implementation
* @param subscriptions Subscription manager
* @param balancer Load balancer to use
@@ -160,34 +160,12 @@ public:
makeETLService(
util::config::ClioConfigDefinition const& config,
boost::asio::io_context& ioc,
util::async::AnyExecutionContext ctx,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::shared_ptr<etlng::LoadBalancerInterface> balancer,
std::shared_ptr<NetworkValidatedLedgersInterface> ledgers
)
{
std::shared_ptr<etlng::ETLServiceInterface> ret;
if (config.get<bool>("__ng_etl")) {
ASSERT(
std::dynamic_pointer_cast<etlng::LoadBalancer>(balancer),
"LoadBalancer type must be etlng::LoadBalancer"
);
ret = std::make_shared<etlng::ETLService>(config, backend, subscriptions, balancer, ledgers);
} else {
ASSERT(
std::dynamic_pointer_cast<etl::LoadBalancer>(balancer), "LoadBalancer type must be etl::LoadBalancer"
);
ret = std::make_shared<etl::ETLService>(config, ioc, backend, subscriptions, balancer, ledgers);
}
// inject networkID into subscriptions, as transaction feed require it to inject CTID in response
if (auto const state = ret->getETLState(); state)
subscriptions->setNetworkID(state->networkID);
ret->run();
return ret;
}
);
/**
* @brief Stops components and joins worker thread.
@@ -261,7 +239,7 @@ public:
result["etl_sources"] = loadBalancer_->toJson();
result["is_writer"] = static_cast<int>(state_.isWriting);
result["read_only"] = static_cast<int>(state_.isStrictReadonly);
auto last = ledgerPublisher_.getLastPublish();
if (last.time_since_epoch().count() != 0)
result["last_publish_age_seconds"] = std::to_string(ledgerPublisher_.lastPublishAgeSeconds());

View File

@@ -71,12 +71,19 @@ LoadBalancer::makeLoadBalancer(
boost::asio::io_context& ioc,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::unique_ptr<util::RandomGeneratorInterface> randomGenerator,
std::shared_ptr<NetworkValidatedLedgersInterface> validatedLedgers,
SourceFactory sourceFactory
)
{
return std::make_shared<LoadBalancer>(
config,
ioc,
std::move(backend),
std::move(subscriptions),
std::move(randomGenerator),
std::move(validatedLedgers),
std::move(sourceFactory)
);
}
@@ -85,10 +92,12 @@ LoadBalancer::LoadBalancer(
boost::asio::io_context& ioc,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::unique_ptr<util::RandomGeneratorInterface> randomGenerator,
std::shared_ptr<NetworkValidatedLedgersInterface> validatedLedgers,
SourceFactory sourceFactory
)
: randomGenerator_(std::move(randomGenerator))
, forwardingCounters_{
.successDuration = PrometheusService::counterInt(
"forwarding_duration_milliseconds_counter",
Labels({util::prometheus::Label{"status", "success"}}),
@@ -319,7 +328,7 @@ void
LoadBalancer::execute(Func f, uint32_t ledgerSequence, std::chrono::steady_clock::duration retryAfter)
{
ASSERT(not sources_.empty(), "ETL sources must be configured to execute functions.");
size_t sourceIdx = randomGenerator_->uniform(0ul, sources_.size() - 1);
size_t numAttempts = 0;
@@ -403,7 +412,7 @@ LoadBalancer::forwardToRippledImpl(
++forwardingCounters_.cacheMiss.get();
ASSERT(not sources_.empty(), "ETL sources must be configured to forward requests.");
std::size_t sourceIdx = randomGenerator_->uniform(0ul, sources_.size() - 1);
auto numAttempts = 0u;

View File

@@ -29,6 +29,7 @@
#include "rpc/Errors.hpp"
#include "util/Assert.hpp"
#include "util/Mutex.hpp"
#include "util/Random.hpp"
#include "util/ResponseExpirationCache.hpp"
#include "util/config/ConfigDefinition.hpp"
#include "util/log/Logger.hpp"
@@ -88,6 +89,8 @@ private:
std::optional<util::ResponseExpirationCache> forwardingCache_;
std::optional<std::string> forwardingXUserValue_;
std::unique_ptr<util::RandomGeneratorInterface> randomGenerator_;
std::vector<SourcePtr> sources_;
std::optional<ETLState> etlState_;
std::uint32_t downloadRanges_ =
@@ -123,6 +126,7 @@ public:
* @param ioc The io_context to run on
* @param backend BackendInterface implementation
* @param subscriptions Subscription manager
* @param randomGenerator A random generator to use for selecting sources
* @param validatedLedgers The network validated ledgers datastructure
* @param sourceFactory A factory function to create a source
*/
@@ -131,6 +135,7 @@ public:
boost::asio::io_context& ioc,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::unique_ptr<util::RandomGeneratorInterface> randomGenerator,
std::shared_ptr<NetworkValidatedLedgersInterface> validatedLedgers,
SourceFactory sourceFactory = makeSource
);
@@ -142,6 +147,7 @@ public:
* @param ioc The io_context to run on
* @param backend BackendInterface implementation
* @param subscriptions Subscription manager
* @param randomGenerator A random generator to use for selecting sources
* @param validatedLedgers The network validated ledgers data structure
* @param sourceFactory A factory function to create a source
* @return A shared pointer to a new instance of LoadBalancer
@@ -152,6 +158,7 @@ public:
boost::asio::io_context& ioc,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::unique_ptr<util::RandomGeneratorInterface> randomGenerator,
std::shared_ptr<NetworkValidatedLedgersInterface> validatedLedgers,
SourceFactory sourceFactory = makeSource
);
@@ -170,14 +177,14 @@ public:
/**
* @brief Load the initial ledger, writing data to the queue.
* @note This function will retry indefinitely until the ledger is downloaded or the download is cancelled.
*
* @param sequence Sequence of ledger to download
* @param observer The observer to notify of progress
* @param retryAfter Time to wait between retries (2 seconds by default)
* @return A std::expected with ledger edge keys on success, or InitialLedgerLoadError on failure
*/
etlng::InitialLedgerLoadResult
loadInitialLedger(
[[maybe_unused]] uint32_t sequence,
[[maybe_unused]] etlng::InitialLoadObserverInterface& observer,

View File

@@ -37,7 +37,7 @@ struct SystemState {
* In strict read-only mode, the process will never attempt to become the ETL writer, and will only publish ledgers
* as they are written to the database.
*/
util::prometheus::Bool isStrictReadonly = PrometheusService::boolMetric(
"read_only",
util::prometheus::Labels{},
"Whether the process is in strict read-only mode"

View File

@@ -242,8 +242,8 @@ public:
}
prev = cur->key;
static constexpr std::size_t kLOG_STRIDE = 100000;
if (numWrites % kLOG_STRIDE == 0 && numWrites != 0)
LOG(log_.info()) << "Wrote " << numWrites << " book successors";
}

View File

@@ -24,6 +24,7 @@
#include "data/LedgerCacheInterface.hpp"
#include "data/Types.hpp"
#include "etl/SystemState.hpp"
#include "etlng/LedgerPublisherInterface.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
#include "util/Assert.hpp"
#include "util/log/Logger.hpp"
@@ -31,6 +32,7 @@
#include "util/prometheus/Prometheus.hpp"
#include <boost/asio/io_context.hpp>
#include <boost/asio/post.hpp>
#include <boost/asio/strand.hpp>
#include <xrpl/basics/chrono.h>
#include <xrpl/protocol/Fees.h>
@@ -66,7 +68,7 @@ namespace etl::impl {
* includes reading all of the transactions from the database) is done from the application wide asio io_service, and a
* strand is used to ensure ledgers are published in order.
*/
class LedgerPublisher : public etlng::LedgerPublisherInterface {
util::Logger log_{"ETL"};
boost::asio::strand<boost::asio::io_context::executor_type> publishStrand_;
@@ -121,7 +123,7 @@ public:
uint32_t ledgerSequence,
std::optional<uint32_t> maxAttempts,
std::chrono::steady_clock::duration attemptsDelay = std::chrono::seconds{1}
) override
{
LOG(log_.info()) << "Attempting to publish ledger = " << ledgerSequence;
size_t numAttempts = 0;
@@ -235,7 +237,7 @@ public:
* @brief Get time passed since last publish, in seconds
*/
std::uint32_t
lastPublishAgeSeconds() const override
{
return std::chrono::duration_cast<std::chrono::seconds>(std::chrono::system_clock::now() - getLastPublish())
.count();
@@ -245,7 +247,7 @@ public:
* @brief Get last publish time as a time point
*/
std::chrono::time_point<std::chrono::system_clock>
getLastPublish() const override
{
return std::chrono::time_point<std::chrono::system_clock>{std::chrono::seconds{lastPublishSeconds_.get().value()
}};
@@ -255,7 +257,7 @@ public:
* @brief Get time passed since last ledger close, in seconds
*/
std::uint32_t
lastCloseAgeSeconds() const override
{
std::shared_lock const lck(closeTimeMtx_);
auto now = std::chrono::duration_cast<std::chrono::seconds>(std::chrono::system_clock::now().time_since_epoch())

View File

@@ -198,7 +198,6 @@ public:
*
* @param sequence Sequence of the ledger to download
* @param numMarkers Number of markers to generate for async calls
* @param cacheOnly Only insert into cache, not the DB; defaults to false
* @return A std::pair of the data and a bool indicating whether the download was successful
*/
std::pair<std::vector<std::string>, bool>

View File

@@ -2,7 +2,8 @@ add_library(clio_etlng)
target_sources(
clio_etlng
PRIVATE ETLService.cpp
LoadBalancer.cpp
Source.cpp
impl/AmendmentBlockHandler.cpp
impl/AsyncGrpcCall.cpp
@@ -14,6 +15,7 @@ target_sources(
impl/TaskManager.cpp
impl/ext/Cache.cpp
impl/ext/Core.cpp
impl/ext/MPT.cpp
impl/ext/NFT.cpp
impl/ext/Successor.cpp
)

View File

@@ -0,0 +1,53 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <cstdint>
namespace etlng {
/**
* @brief An interface for the Cache Loader
*/
struct CacheLoaderInterface {
virtual ~CacheLoaderInterface() = default;
/**
* @brief Load the cache with the most recent ledger data
*
* @param seq The sequence number of the ledger to load
*/
virtual void
load(uint32_t const seq) = 0;
/**
* @brief Stop the cache loading process
*/
virtual void
stop() noexcept = 0;
/**
* @brief Wait for all cache loading tasks to complete
*/
virtual void
wait() noexcept = 0;
};
} // namespace etlng

View File

@@ -0,0 +1,66 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/Types.hpp"
#include "etlng/Models.hpp"
#include <cstdint>
#include <vector>
namespace etlng {
/**
* @brief An interface for the Cache Updater
*/
struct CacheUpdaterInterface {
virtual ~CacheUpdaterInterface() = default;
/**
* @brief Update the cache with ledger data
* @param data The ledger data to update with
*/
virtual void
update(model::LedgerData const& data) = 0;
/**
* @brief Update the cache with ledger objects at a specific sequence
* @param seq The ledger sequence number
* @param objs The ledger objects to update with
*/
virtual void
update(uint32_t seq, std::vector<data::LedgerObject> const& objs) = 0;
/**
* @brief Update the cache with model objects at a specific sequence
* @param seq The ledger sequence number
* @param objs The model objects to update with
*/
virtual void
update(uint32_t seq, std::vector<model::Object> const& objs) = 0;
/**
* @brief Mark the cache as fully loaded
*/
virtual void
setFull() = 0;
};
} // namespace etlng

src/etlng/ETLService.cpp (new file, 333 lines)
View File

@@ -0,0 +1,333 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "etlng/ETLService.hpp"
#include "data/BackendInterface.hpp"
#include "data/LedgerCacheInterface.hpp"
#include "data/Types.hpp"
#include "etl/ETLState.hpp"
#include "etl/NetworkValidatedLedgersInterface.hpp"
#include "etl/SystemState.hpp"
#include "etl/impl/AmendmentBlockHandler.hpp"
#include "etl/impl/LedgerFetcher.hpp"
#include "etlng/CacheLoaderInterface.hpp"
#include "etlng/CacheUpdaterInterface.hpp"
#include "etlng/ExtractorInterface.hpp"
#include "etlng/InitialLoadObserverInterface.hpp"
#include "etlng/LedgerPublisherInterface.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "etlng/LoaderInterface.hpp"
#include "etlng/MonitorInterface.hpp"
#include "etlng/MonitorProviderInterface.hpp"
#include "etlng/TaskManagerProviderInterface.hpp"
#include "etlng/impl/AmendmentBlockHandler.hpp"
#include "etlng/impl/CacheUpdater.hpp"
#include "etlng/impl/Extraction.hpp"
#include "etlng/impl/LedgerPublisher.hpp"
#include "etlng/impl/Loading.hpp"
#include "etlng/impl/Registry.hpp"
#include "etlng/impl/Scheduling.hpp"
#include "etlng/impl/TaskManager.hpp"
#include "etlng/impl/ext/Cache.hpp"
#include "etlng/impl/ext/Core.hpp"
#include "etlng/impl/ext/NFT.hpp"
#include "etlng/impl/ext/Successor.hpp"
#include "util/Assert.hpp"
#include "util/Profiler.hpp"
#include "util/async/AnyExecutionContext.hpp"
#include "util/log/Logger.hpp"
#include <boost/json/object.hpp>
#include <boost/signals2/connection.hpp>
#include <fmt/core.h>
#include <xrpl/protocol/LedgerHeader.h>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <memory>
#include <optional>
#include <string>
#include <utility>
namespace etlng {
ETLService::ETLService(
util::async::AnyExecutionContext ctx,
std::reference_wrapper<util::config::ClioConfigDefinition const> config,
std::shared_ptr<data::BackendInterface> backend,
std::shared_ptr<LoadBalancerInterface> balancer,
std::shared_ptr<etl::NetworkValidatedLedgersInterface> ledgers,
std::shared_ptr<LedgerPublisherInterface> publisher,
std::shared_ptr<CacheLoaderInterface> cacheLoader,
std::shared_ptr<CacheUpdaterInterface> cacheUpdater,
std::shared_ptr<ExtractorInterface> extractor,
std::shared_ptr<LoaderInterface> loader,
std::shared_ptr<InitialLoadObserverInterface> initialLoadObserver,
std::shared_ptr<etlng::TaskManagerProviderInterface> taskManagerProvider,
std::shared_ptr<etlng::MonitorProviderInterface> monitorProvider,
std::shared_ptr<etl::SystemState> state
)
: ctx_(std::move(ctx))
, config_(config)
, backend_(std::move(backend))
, balancer_(std::move(balancer))
, ledgers_(std::move(ledgers))
, publisher_(std::move(publisher))
, cacheLoader_(std::move(cacheLoader))
, cacheUpdater_(std::move(cacheUpdater))
, extractor_(std::move(extractor))
, loader_(std::move(loader))
, initialLoadObserver_(std::move(initialLoadObserver))
, taskManagerProvider_(std::move(taskManagerProvider))
, monitorProvider_(std::move(monitorProvider))
, state_(std::move(state))
, startSequence_(config.get().maybeValue<uint32_t>("start_sequence"))
, finishSequence_(config.get().maybeValue<uint32_t>("finish_sequence"))
{
ASSERT(not state_->isWriting, "ETL should never start in writer mode");
if (startSequence_.has_value())
LOG(log_.info()) << "Start sequence: " << *startSequence_;
if (finishSequence_.has_value())
LOG(log_.info()) << "Finish sequence: " << *finishSequence_;
LOG(log_.info()) << "Starting in " << (state_->isStrictReadonly ? "STRICT READONLY MODE" : "WRITE MODE");
}
ETLService::~ETLService()
{
stop();
LOG(log_.debug()) << "Destroying ETLng";
}
void
ETLService::run()
{
LOG(log_.info()) << "Running ETLng...";
mainLoop_.emplace(ctx_.execute([this] {
auto const rng = loadInitialLedgerIfNeeded();
LOG(log_.info()) << "Waiting for next ledger to be validated by network...";
std::optional<uint32_t> const mostRecentValidated = ledgers_->getMostRecent();
if (not mostRecentValidated) {
LOG(log_.info()) << "The wait for the next validated ledger has been aborted. "
"Exiting monitor loop";
return;
}
if (not rng.has_value()) {
LOG(log_.warn()) << "Initial ledger download got cancelled - stopping ETL service";
return;
}
auto const nextSequence = rng->maxSequence + 1;
LOG(log_.debug()) << "Database is populated. Starting monitor loop. sequence = " << nextSequence;
startMonitor(nextSequence);
// If we became the writer as a result of loading the initial ledger, start loading
if (state_->isWriting)
startLoading(nextSequence);
}));
}
void
ETLService::stop()
{
LOG(log_.info()) << "Stop called";
if (mainLoop_)
mainLoop_->wait();
if (taskMan_)
taskMan_->stop();
if (monitor_)
monitor_->stop();
}
boost::json::object
ETLService::getInfo() const
{
boost::json::object result;
result["etl_sources"] = balancer_->toJson();
result["is_writer"] = static_cast<int>(state_->isWriting);
result["read_only"] = static_cast<int>(state_->isStrictReadonly);
auto last = publisher_->getLastPublish();
if (last.time_since_epoch().count() != 0)
result["last_publish_age_seconds"] = std::to_string(publisher_->lastPublishAgeSeconds());
return result;
}
bool
ETLService::isAmendmentBlocked() const
{
return state_->isAmendmentBlocked;
}
bool
ETLService::isCorruptionDetected() const
{
return state_->isCorruptionDetected;
}
std::optional<etl::ETLState>
ETLService::getETLState() const
{
return balancer_->getETLState();
}
std::uint32_t
ETLService::lastCloseAgeSeconds() const
{
return publisher_->lastCloseAgeSeconds();
}
std::optional<data::LedgerRange>
ETLService::loadInitialLedgerIfNeeded()
{
auto rng = backend_->hardFetchLedgerRangeNoThrow();
if (not rng.has_value()) {
ASSERT(
not state_->isStrictReadonly,
"Database is empty but this node is in strict readonly mode. Can't write initial ledger."
);
LOG(log_.info()) << "Database is empty. Will download a ledger from the network.";
state_->isWriting = true; // immediately become writer as the db is empty
auto const getMostRecent = [this]() {
LOG(log_.info()) << "Waiting for next ledger to be validated by network...";
return ledgers_->getMostRecent();
};
if (auto const maybeSeq = startSequence_.or_else(getMostRecent); maybeSeq.has_value()) {
auto const seq = *maybeSeq;
LOG(log_.info()) << "Starting from sequence " << seq
<< ". Initial ledger download and extraction can take a while...";
auto [ledger, timeDiff] = ::util::timed<std::chrono::duration<double>>([this, seq]() {
return extractor_->extractLedgerOnly(seq).and_then(
[this, seq](auto&& data) -> std::optional<ripple::LedgerHeader> {
// TODO: loadInitialLedger in balancer should be called fetchEdgeKeys or similar
auto res = balancer_->loadInitialLedger(seq, *initialLoadObserver_);
if (not res.has_value() and res.error() == InitialLedgerLoadError::Cancelled) {
LOG(log_.debug()) << "Initial ledger load got cancelled";
return std::nullopt;
}
ASSERT(res.has_value(), "Initial ledger retry logic failed");
data.edgeKeys = std::move(res).value();
return loader_->loadInitialLedger(data);
}
);
});
if (not ledger.has_value()) {
LOG(log_.error()) << "Failed to load initial ledger. Exiting monitor loop";
return std::nullopt;
}
LOG(log_.debug()) << "Time to download and store ledger = " << timeDiff;
LOG(log_.info()) << "Finished loadInitialLedger. cache size = " << backend_->cache().size();
return backend_->hardFetchLedgerRangeNoThrow();
}
LOG(log_.info()) << "The wait for the next validated ledger has been aborted. "
"Exiting monitor loop";
return std::nullopt;
}
LOG(log_.info()) << "Database already populated. Picking up from the tip of history";
cacheLoader_->load(rng->maxSequence);
return rng;
}
void
ETLService::startMonitor(uint32_t seq)
{
monitor_ = monitorProvider_->make(ctx_, backend_, ledgers_, seq);
monitorNewSeqSubscription_ = monitor_->subscribeToNewSequence([this](uint32_t seq) {
LOG(log_.info()) << "ETLService (via Monitor) got new seq from db: " << seq;
if (state_->writeConflict) {
LOG(log_.info()) << "Got a write conflict; Giving up writer seat immediately";
giveUpWriter();
}
if (not state_->isWriting) {
auto const diff = data::synchronousAndRetryOnTimeout([this, seq](auto yield) {
return backend_->fetchLedgerDiff(seq, yield);
});
cacheUpdater_->update(seq, diff);
backend_->updateRange(seq);
}
publisher_->publish(seq, {});
});
monitorDbStalledSubscription_ = monitor_->subscribeToDbStalled([this]() {
LOG(log_.warn()) << "ETLService received DbStalled signal from Monitor";
if (not state_->isStrictReadonly and not state_->isWriting)
attemptTakeoverWriter();
});
monitor_->run();
}
void
ETLService::startLoading(uint32_t seq)
{
ASSERT(not state_->isStrictReadonly, "This should only happen on writer nodes");
taskMan_ = taskManagerProvider_->make(ctx_, *monitor_, seq, finishSequence_);
taskMan_->run(config_.get().get<std::size_t>("extractor_threads"));
}
void
ETLService::attemptTakeoverWriter()
{
ASSERT(not state_->isStrictReadonly, "This should only happen on writer nodes");
auto rng = backend_->hardFetchLedgerRangeNoThrow();
ASSERT(rng.has_value(), "Ledger range can't be null");
state_->isWriting = true; // switch to writer
LOG(log_.info()) << "Taking over the ETL writer seat";
startLoading(rng->maxSequence + 1);
}
void
ETLService::giveUpWriter()
{
ASSERT(not state_->isStrictReadonly, "This should only happen on writer nodes");
state_->isWriting = false;
state_->writeConflict = false;
taskMan_ = nullptr;
}
} // namespace etlng

View File

@@ -20,24 +20,29 @@
#pragma once
#include "data/BackendInterface.hpp"
#include "data/LedgerCache.hpp"
#include "data/Types.hpp"
#include "etl/CacheLoader.hpp"
#include "etl/ETLState.hpp"
#include "etl/LedgerFetcherInterface.hpp"
#include "etl/NetworkValidatedLedgersInterface.hpp"
#include "etl/SystemState.hpp"
#include "etl/impl/AmendmentBlockHandler.hpp"
#include "etl/impl/LedgerFetcher.hpp"
#include "etl/impl/LedgerPublisher.hpp"
#include "etlng/AmendmentBlockHandlerInterface.hpp"
#include "etlng/CacheLoaderInterface.hpp"
#include "etlng/CacheUpdaterInterface.hpp"
#include "etlng/ETLServiceInterface.hpp"
#include "etlng/ExtractorInterface.hpp"
#include "etlng/InitialLoadObserverInterface.hpp"
#include "etlng/LedgerPublisherInterface.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "etlng/LoaderInterface.hpp"
#include "etlng/MonitorInterface.hpp"
#include "etlng/MonitorProviderInterface.hpp"
#include "etlng/TaskManagerInterface.hpp"
#include "etlng/TaskManagerProviderInterface.hpp"
#include "etlng/impl/AmendmentBlockHandler.hpp"
#include "etlng/impl/CacheUpdater.hpp"
#include "etlng/impl/Extraction.hpp"
#include "etlng/impl/LedgerPublisher.hpp"
#include "etlng/impl/Loading.hpp"
#include "etlng/impl/Monitor.hpp"
#include "etlng/impl/Registry.hpp"
#include "etlng/impl/Scheduling.hpp"
#include "etlng/impl/TaskManager.hpp"
@@ -45,14 +50,14 @@
#include "etlng/impl/ext/Core.hpp"
#include "etlng/impl/ext/NFT.hpp"
#include "etlng/impl/ext/Successor.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
#include "util/Assert.hpp"
#include "util/Profiler.hpp"
#include "util/async/context/BasicExecutionContext.hpp"
#include "util/async/AnyExecutionContext.hpp"
#include "util/async/AnyOperation.hpp"
#include "util/config/ConfigDefinition.hpp"
#include "util/log/Logger.hpp"
#include <boost/asio/io_context.hpp>
#include <boost/json/object.hpp>
#include <boost/signals2/connection.hpp>
#include <fmt/core.h>
#include <xrpl/basics/Blob.h>
#include <xrpl/basics/base_uint.h>
@@ -64,15 +69,12 @@
#include <xrpl/protocol/TxFormats.h>
#include <xrpl/protocol/TxMeta.h>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <memory>
#include <optional>
#include <ranges>
#include <stdexcept>
#include <string>
#include <tuple>
#include <utility>
namespace etlng {
@@ -92,191 +94,106 @@ namespace etlng {
class ETLService : public ETLServiceInterface {
util::Logger log_{"ETL"};
util::async::AnyExecutionContext ctx_;
std::reference_wrapper<util::config::ClioConfigDefinition const> config_;
std::shared_ptr<BackendInterface> backend_;
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions_;
std::shared_ptr<etlng::LoadBalancerInterface> balancer_;
std::shared_ptr<LoadBalancerInterface> balancer_;
std::shared_ptr<etl::NetworkValidatedLedgersInterface> ledgers_;
std::shared_ptr<etl::CacheLoader<>> cacheLoader_;
std::shared_ptr<etl::LedgerFetcherInterface> fetcher_;
std::shared_ptr<LedgerPublisherInterface> publisher_;
std::shared_ptr<CacheLoaderInterface> cacheLoader_;
std::shared_ptr<CacheUpdaterInterface> cacheUpdater_;
std::shared_ptr<ExtractorInterface> extractor_;
std::shared_ptr<LoaderInterface> loader_;
std::shared_ptr<InitialLoadObserverInterface> initialLoadObserver_;
std::shared_ptr<etlng::TaskManagerProviderInterface> taskManagerProvider_;
std::shared_ptr<etlng::MonitorProviderInterface> monitorProvider_;
std::shared_ptr<etl::SystemState> state_;
etl::SystemState state_;
util::async::CoroExecutionContext ctx_{8};
std::optional<uint32_t> startSequence_;
std::optional<uint32_t> finishSequence_;
std::shared_ptr<AmendmentBlockHandlerInterface> amendmentBlockHandler_;
std::shared_ptr<impl::Loader> loader_;
std::unique_ptr<MonitorInterface> monitor_;
std::unique_ptr<TaskManagerInterface> taskMan_;
std::optional<util::async::CoroExecutionContext::Operation<void>> mainLoop_;
boost::signals2::scoped_connection monitorNewSeqSubscription_;
boost::signals2::scoped_connection monitorDbStalledSubscription_;
std::optional<util::async::AnyOperation<void>> mainLoop_;
public:
/**
* @brief Create an instance of ETLService.
*
* @param config The configuration to use
* @param backend BackendInterface implementation
* @param subscriptions Subscription manager
* @param balancer Load balancer to use
* @param ledgers The network validated ledgers datastructure
* @param ctx The execution context for asynchronous operations
* @param config The Clio configuration definition
* @param backend Interface to the backend database
* @param balancer Load balancer for distributing work
* @param ledgers Interface for accessing network validated ledgers
* @param publisher Interface for publishing ledger data
* @param cacheLoader Interface for loading cache data
* @param cacheUpdater Interface for updating cache data
* @param extractor The extractor to use
* @param loader Interface for loading data
* @param initialLoadObserver The observer for initial data loading
* @param taskManagerProvider The provider of the task manager instance
* @param monitorProvider The provider of the monitor instance
* @param state System state tracking object
*/
ETLService(
util::config::ClioConfigDefinition const& config,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::shared_ptr<etlng::LoadBalancerInterface> balancer,
std::shared_ptr<etl::NetworkValidatedLedgersInterface> ledgers
)
: backend_(std::move(backend))
, subscriptions_(std::move(subscriptions))
, balancer_(std::move(balancer))
, ledgers_(std::move(ledgers))
, cacheLoader_(std::make_shared<etl::CacheLoader<>>(config, backend_, backend_->cache()))
, fetcher_(std::make_shared<etl::impl::LedgerFetcher>(backend_, balancer_))
, extractor_(std::make_shared<impl::Extractor>(fetcher_))
, amendmentBlockHandler_(std::make_shared<etlng::impl::AmendmentBlockHandler>(ctx_, state_))
, loader_(std::make_shared<impl::Loader>(
backend_,
fetcher_,
impl::makeRegistry(
impl::CacheExt{backend_->cache()},
impl::CoreExt{backend_},
impl::SuccessorExt{backend_, backend_->cache()},
impl::NFTExt{backend_}
),
amendmentBlockHandler_
))
{
LOG(log_.info()) << "Creating ETLng...";
}
util::async::AnyExecutionContext ctx,
std::reference_wrapper<util::config::ClioConfigDefinition const> config,
std::shared_ptr<data::BackendInterface> backend,
std::shared_ptr<LoadBalancerInterface> balancer,
std::shared_ptr<etl::NetworkValidatedLedgersInterface> ledgers,
std::shared_ptr<LedgerPublisherInterface> publisher,
std::shared_ptr<CacheLoaderInterface> cacheLoader,
std::shared_ptr<CacheUpdaterInterface> cacheUpdater,
std::shared_ptr<ExtractorInterface> extractor,
std::shared_ptr<LoaderInterface> loader,
std::shared_ptr<InitialLoadObserverInterface> initialLoadObserver,
std::shared_ptr<etlng::TaskManagerProviderInterface> taskManagerProvider,
std::shared_ptr<etlng::MonitorProviderInterface> monitorProvider,
std::shared_ptr<etl::SystemState> state
);
~ETLService() override
{
LOG(log_.debug()) << "Stopping ETLng";
}
~ETLService() override;
void
run() override
{
LOG(log_.info()) << "run() in ETLng...";
mainLoop_.emplace(ctx_.execute([this] {
auto const rng = loadInitialLedgerIfNeeded();
LOG(log_.info()) << "Waiting for next ledger to be validated by network...";
std::optional<uint32_t> const mostRecentValidated = ledgers_->getMostRecent();
if (not mostRecentValidated) {
LOG(log_.info()) << "The wait for the next validated ledger has been aborted. "
"Exiting monitor loop";
return;
}
ASSERT(rng.has_value(), "Ledger range can't be null");
auto const nextSequence = rng->maxSequence + 1;
LOG(log_.debug()) << "Database is populated. Starting monitor loop. sequence = " << nextSequence;
auto scheduler = impl::makeScheduler(impl::ForwardScheduler{*ledgers_, nextSequence}
// impl::BackfillScheduler{nextSequence - 1, nextSequence - 1000},
// TODO lift limit and start with rng.minSeq
);
auto man = impl::TaskManager(ctx_, *scheduler, *extractor_, *loader_);
// TODO: figure out this: std::make_shared<impl::Monitor>(backend_, ledgers_, nextSequence)
man.run({}); // TODO: needs to be interruptible and fill out settings
}));
}
run() override;
void
stop() override
{
LOG(log_.info()) << "Stop called";
// TODO: stop the service correctly
}
stop() override;
boost::json::object
getInfo() const override
{
// TODO
return {{"ok", true}};
}
getInfo() const override;
bool
isAmendmentBlocked() const override
{
// TODO
return false;
}
isAmendmentBlocked() const override;
bool
isCorruptionDetected() const override
{
// TODO
return false;
}
isCorruptionDetected() const override;
std::optional<etl::ETLState>
getETLState() const override
{
// TODO
return std::nullopt;
}
getETLState() const override;
std::uint32_t
lastCloseAgeSeconds() const override
{
// TODO
return 0;
}
lastCloseAgeSeconds() const override;
private:
// TODO: this better be std::expected
std::optional<data::LedgerRange>
loadInitialLedgerIfNeeded()
{
if (auto rng = backend_->hardFetchLedgerRangeNoThrow(); not rng.has_value()) {
LOG(log_.info()) << "Database is empty. Will download a ledger from the network.";
loadInitialLedgerIfNeeded();
try {
LOG(log_.info()) << "Waiting for next ledger to be validated by network...";
if (auto const mostRecentValidated = ledgers_->getMostRecent(); mostRecentValidated.has_value()) {
auto const seq = *mostRecentValidated;
LOG(log_.info()) << "Ledger " << seq << " has been validated. Downloading... ";
void
startMonitor(uint32_t seq);
auto [ledger, timeDiff] = ::util::timed<std::chrono::duration<double>>([this, seq]() {
return extractor_->extractLedgerOnly(seq).and_then([this, seq](auto&& data) {
// TODO: loadInitialLedger in balancer should be called fetchEdgeKeys or similar
data.edgeKeys = balancer_->loadInitialLedger(seq, *loader_);
void
startLoading(uint32_t seq);
// TODO: this should be interruptible for graceful shutdown
return loader_->loadInitialLedger(data);
});
});
void
attemptTakeoverWriter();
LOG(log_.debug()) << "Time to download and store ledger = " << timeDiff;
LOG(log_.info()) << "Finished loadInitialLedger. cache size = " << backend_->cache().size();
if (ledger.has_value())
return backend_->hardFetchLedgerRangeNoThrow();
LOG(log_.error()) << "Failed to load initial ledger. Exiting monitor loop";
} else {
LOG(log_.info()) << "The wait for the next validated ledger has been aborted. "
"Exiting monitor loop";
}
} catch (std::runtime_error const& e) {
LOG(log_.fatal()) << "Failed to load initial ledger: " << e.what();
amendmentBlockHandler_->notifyAmendmentBlocked();
}
} else {
LOG(log_.info()) << "Database already populated. Picking up from the tip of history";
cacheLoader_->load(rng->maxSequence);
return rng;
}
return std::nullopt;
}
void
giveUpWriter();
};
} // namespace etlng

View File

@@ -37,13 +37,38 @@ struct LedgerPublisherInterface {
* @param seq The sequence number of the ledger
* @param maxAttempts The maximum number of attempts to publish the ledger; no limit if nullopt
* @param attemptsDelay The delay between attempts
* @return Whether the ledger was found in the database and published
*/
virtual void
virtual bool
publish(
uint32_t seq,
std::optional<uint32_t> maxAttempts,
std::chrono::steady_clock::duration attemptsDelay = std::chrono::seconds{1}
) = 0;
/**
* @brief Get last publish time as a time point
*
* @return A std::chrono::time_point representing the time of the last publish
*/
virtual std::chrono::time_point<std::chrono::system_clock>
getLastPublish() const = 0;
/**
* @brief Get time passed since last ledger close, in seconds
*
* @return The number of seconds since the last ledger close as std::uint32_t
*/
virtual std::uint32_t
lastCloseAgeSeconds() const = 0;
/**
* @brief Get time passed since last publish, in seconds
*
* @return The number of seconds since the last publish as std::uint32_t
*/
virtual std::uint32_t
lastPublishAgeSeconds() const = 0;
};
} // namespace etlng
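The change of `publish` from `void` to `bool` above makes the retry outcome observable. A minimal sketch of the retry semantics the doc comment describes is below; `publishWithRetries` and its `fetchAndPublish` callback are hypothetical names standing in for the real database lookup and publish step, and the delay between attempts is elided:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <optional>

// Sketch of the documented publish() contract: keep trying until the ledger
// appears in the database or maxAttempts is exhausted (no limit if nullopt).
bool publishWithRetries(
    uint32_t seq,
    std::optional<uint32_t> maxAttempts,
    std::function<bool(uint32_t)> const& fetchAndPublish
) {
    for (uint32_t attempt = 0; !maxAttempts || attempt < *maxAttempts; ++attempt) {
        if (fetchAndPublish(seq))
            return true;  // ledger found in the database and published
        // the real implementation would sleep for attemptsDelay here
    }
    return false;  // ledger never appeared within maxAttempts
}
```

Returning `false` lets the caller distinguish "ledger not yet in the database" from a successful publish, which the old `void` signature could not express.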

View File

@@ -72,12 +72,19 @@ LoadBalancer::makeLoadBalancer(
boost::asio::io_context& ioc,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::unique_ptr<util::RandomGeneratorInterface> randomGenerator,
std::shared_ptr<etl::NetworkValidatedLedgersInterface> validatedLedgers,
SourceFactory sourceFactory
)
{
return std::make_shared<LoadBalancer>(
config, ioc, std::move(backend), std::move(subscriptions), std::move(validatedLedgers), std::move(sourceFactory)
config,
ioc,
std::move(backend),
std::move(subscriptions),
std::move(randomGenerator),
std::move(validatedLedgers),
std::move(sourceFactory)
);
}
@@ -86,10 +93,12 @@ LoadBalancer::LoadBalancer(
boost::asio::io_context& ioc,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::unique_ptr<util::RandomGeneratorInterface> randomGenerator,
std::shared_ptr<etl::NetworkValidatedLedgersInterface> validatedLedgers,
SourceFactory sourceFactory
)
: forwardingCounters_{
: randomGenerator_(std::move(randomGenerator))
, forwardingCounters_{
.successDuration = PrometheusService::counterInt(
"forwarding_duration_milliseconds_counter",
Labels({util::prometheus::Label{"status", "success"}}),
@@ -201,30 +210,32 @@ LoadBalancer::LoadBalancer(
}
}
std::vector<std::string>
InitialLedgerLoadResult
LoadBalancer::loadInitialLedger(
uint32_t sequence,
etlng::InitialLoadObserverInterface& loadObserver,
std::chrono::steady_clock::duration retryAfter
)
{
std::vector<std::string> response;
InitialLedgerLoadResult response;
execute(
[this, &response, &sequence, &loadObserver](auto& source) {
auto [data, res] = source->loadInitialLedger(sequence, downloadRanges_, loadObserver);
auto res = source->loadInitialLedger(sequence, downloadRanges_, loadObserver);
if (!res) {
if (not res.has_value() and res.error() == InitialLedgerLoadError::Errored) {
LOG(log_.error()) << "Failed to download initial ledger."
<< " Sequence = " << sequence << " source = " << source->toString();
} else {
response = std::move(data);
return false; // should retry on error
}
return res;
response = std::move(res); // cancelled or data received
return true;
},
sequence,
retryAfter
);
return response;
}
@@ -323,7 +334,7 @@ void
LoadBalancer::execute(Func f, uint32_t ledgerSequence, std::chrono::steady_clock::duration retryAfter)
{
ASSERT(not sources_.empty(), "ETL sources must be configured to execute functions.");
size_t sourceIdx = util::Random::uniform(0ul, sources_.size() - 1);
size_t sourceIdx = randomGenerator_->uniform(0ul, sources_.size() - 1);
size_t numAttempts = 0;
@@ -407,7 +418,7 @@ LoadBalancer::forwardToRippledImpl(
++forwardingCounters_.cacheMiss.get();
ASSERT(not sources_.empty(), "ETL sources must be configured to forward requests.");
std::size_t sourceIdx = util::Random::uniform(0ul, sources_.size() - 1);
std::size_t sourceIdx = randomGenerator_->uniform(0ul, sources_.size() - 1);
auto numAttempts = 0u;
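The switch from the static `util::Random::uniform` to an injected `randomGenerator_` above is a testability refactor: source selection becomes deterministic under test. A sketch of the idea, with simplified stand-ins for `util::RandomGeneratorInterface` (the real interface may differ):

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <string>
#include <vector>

// Simplified stand-in for util::RandomGeneratorInterface.
struct RandomGeneratorInterface {
    virtual ~RandomGeneratorInterface() = default;
    virtual std::size_t uniform(std::size_t min, std::size_t max) = 0;
};

// Production flavor: a real uniform distribution.
struct RealRandom : RandomGeneratorInterface {
    std::mt19937 rng{std::random_device{}()};
    std::size_t uniform(std::size_t min, std::size_t max) override {
        return std::uniform_int_distribution<std::size_t>(min, max)(rng);
    }
};

// Test flavor: replays a fixed sequence, so "which source was picked" is assertable.
struct FakeRandom : RandomGeneratorInterface {
    std::vector<std::size_t> values;
    std::size_t i = 0;
    std::size_t uniform(std::size_t, std::size_t) override {
        return values[i++ % values.size()];
    }
};

// Mirrors the source-selection line in LoadBalancer::execute.
std::string pickSource(std::vector<std::string> const& sources, RandomGeneratorInterface& rng) {
    return sources[rng.uniform(0, sources.size() - 1)];
}
```

With the fake injected, a unit test can force the balancer onto a specific source and verify its failover behavior without flakiness.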

View File

@@ -29,6 +29,7 @@
#include "rpc/Errors.hpp"
#include "util/Assert.hpp"
#include "util/Mutex.hpp"
#include "util/Random.hpp"
#include "util/ResponseExpirationCache.hpp"
#include "util/config/ConfigDefinition.hpp"
#include "util/log/Logger.hpp"
@@ -48,6 +49,7 @@
#include <concepts>
#include <cstdint>
#include <expected>
#include <functional>
#include <memory>
#include <optional>
#include <string>
@@ -88,6 +90,8 @@ private:
std::optional<util::ResponseExpirationCache> forwardingCache_;
std::optional<std::string> forwardingXUserValue_;
std::unique_ptr<util::RandomGeneratorInterface> randomGenerator_;
std::vector<SourcePtr> sources_;
std::optional<etl::ETLState> etlState_;
std::uint32_t downloadRanges_ =
@@ -123,6 +127,7 @@ public:
* @param ioc The io_context to run on
* @param backend BackendInterface implementation
* @param subscriptions Subscription manager
* @param randomGenerator A random generator to use for selecting sources
* @param validatedLedgers The network validated ledgers datastructure
* @param sourceFactory A factory function to create a source
*/
@@ -131,6 +136,7 @@ public:
boost::asio::io_context& ioc,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::unique_ptr<util::RandomGeneratorInterface> randomGenerator,
std::shared_ptr<etl::NetworkValidatedLedgersInterface> validatedLedgers,
SourceFactory sourceFactory = makeSource
);
@@ -142,6 +148,7 @@ public:
* @param ioc The io_context to run on
* @param backend BackendInterface implementation
* @param subscriptions Subscription manager
* @param randomGenerator A random generator to use for selecting sources
* @param validatedLedgers The network validated ledgers datastructure
* @param sourceFactory A factory function to create a source
* @return A shared pointer to a new instance of LoadBalancer
@@ -152,6 +159,7 @@ public:
boost::asio::io_context& ioc,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
std::unique_ptr<util::RandomGeneratorInterface> randomGenerator,
std::shared_ptr<etl::NetworkValidatedLedgersInterface> validatedLedgers,
SourceFactory sourceFactory = makeSource
);
@@ -176,14 +184,14 @@ public:
/**
* @brief Load the initial ledger, writing data to the queue.
* @note This function will retry indefinitely until the ledger is downloaded.
* @note This function will retry indefinitely until the ledger is downloaded or the download is cancelled.
*
* @param sequence Sequence of ledger to download
* @param observer The observer to notify of progress
* @param retryAfter Time to wait between retries (2 seconds by default)
* @return A std::vector<std::string> The ledger data
* @return A std::expected with ledger edge keys on success, or InitialLedgerLoadError on failure
*/
std::vector<std::string>
InitialLedgerLoadResult
loadInitialLedger(
uint32_t sequence,
etlng::InitialLoadObserverInterface& observer,

View File

@@ -39,6 +39,20 @@
namespace etlng {
/**
* @brief Represents possible errors for initial ledger load
*/
enum class InitialLedgerLoadError {
Cancelled, ///< The initial load was cancelled by the user
Errored,   ///< An error occurred during the initial ledger load
};
/**
* @brief The result type of the initial ledger load
* @note The successful value represents edge keys
*/
using InitialLedgerLoadResult = std::expected<std::vector<std::string>, InitialLedgerLoadError>;
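The caller-side handling of this result is what `ETLService::loadInitialLedgerIfNeeded` does: `Cancelled` is a clean shutdown path, while `Errored` is retried. Since `std::expected` is C++23, the sketch below uses `std::variant` as a portable stand-in; `describe` is a hypothetical helper, not part of the codebase:

```cpp
#include <cassert>
#include <string>
#include <variant>
#include <vector>

enum class InitialLedgerLoadError { Cancelled, Errored };

// std::variant stands in for std::expected<std::vector<std::string>,
// InitialLedgerLoadError> to keep this sketch portable to pre-C++23 compilers.
// The success value carries the edge keys.
using InitialLedgerLoadResult =
    std::variant<std::vector<std::string>, InitialLedgerLoadError>;

// Mirrors the branching in ETLService::loadInitialLedgerIfNeeded.
std::string describe(InitialLedgerLoadResult const& res) {
    if (auto const* keys = std::get_if<std::vector<std::string>>(&res))
        return "loaded " + std::to_string(keys->size()) + " edge keys";
    return std::get<InitialLedgerLoadError>(res) == InitialLedgerLoadError::Cancelled
        ? "cancelled by user"
        : "errored";
}
```

Encoding cancellation as a distinct error value (rather than an empty vector) is what lets the service exit quietly on shutdown instead of logging a spurious failure.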
/**
* @brief An interface for LoadBalancer
*/
@@ -52,14 +66,14 @@ public:
/**
* @brief Load the initial ledger, writing data to the queue.
* @note This function will retry indefinitely until the ledger is downloaded.
* @note This function will retry indefinitely until the ledger is downloaded or the download is cancelled.
*
* @param sequence Sequence of ledger to download
* @param loader InitialLoadObserverInterface implementation
* @param retryAfter Time to wait between retries (2 seconds by default)
* @return A std::vector<std::string> The ledger data
* @return A std::expected with ledger edge keys on success, or InitialLedgerLoadError on failure
*/
virtual std::vector<std::string>
[[nodiscard]] virtual InitialLedgerLoadResult
loadInitialLedger(
uint32_t sequence,
etlng::InitialLoadObserverInterface& loader,
@@ -74,7 +88,7 @@ public:
* @param retryAfter Time to wait between retries (2 seconds by default)
* @return A std::vector<std::string> The ledger data
*/
virtual std::vector<std::string>
[[nodiscard]] virtual std::vector<std::string>
loadInitialLedger(uint32_t sequence, std::chrono::steady_clock::duration retryAfter = std::chrono::seconds{2}) = 0;
/**
@@ -90,7 +104,7 @@ public:
* @return The extracted data, if extraction was successful. If the ledger was found
* in the database or the server is shutting down, the optional will be empty
*/
virtual OptionalGetLedgerResponseType
[[nodiscard]] virtual OptionalGetLedgerResponseType
fetchLedger(
uint32_t ledgerSequence,
bool getObjects,
@@ -103,7 +117,7 @@ public:
*
* @return JSON representation of the state of this load balancer.
*/
virtual boost::json::value
[[nodiscard]] virtual boost::json::value
toJson() const = 0;
/**
@@ -115,7 +129,7 @@ public:
* @param yield The coroutine context
* @return Response received from rippled node as JSON object on success or error on failure
*/
virtual std::expected<boost::json::object, rpc::CombinedError>
[[nodiscard]] virtual std::expected<boost::json::object, rpc::CombinedError>
forwardToRippled(
boost::json::object const& request,
std::optional<std::string> const& clientIp,
@@ -127,7 +141,7 @@ public:
* @brief Return state of ETL nodes.
* @return ETL state, nullopt if etl nodes not available
*/
virtual std::optional<etl::ETLState>
[[nodiscard]] virtual std::optional<etl::ETLState>
getETLState() noexcept = 0;
/**

View File

@@ -23,10 +23,19 @@
#include <xrpl/protocol/LedgerHeader.h>
#include <expected>
#include <optional>
namespace etlng {
/**
* @brief Enumeration of possible errors that can occur during loading operations
*/
enum class LoaderError {
AmendmentBlocked, ///< The operation is blocked by an amendment
WriteConflict,    ///< A write operation resulted in a conflict
};
/**
* @brief An interface for an ETL loader
*/
@@ -36,8 +45,9 @@ struct LoaderInterface {
/**
* @brief Load ledger data
* @param data The data to load
* @return Nothing on success, or a LoaderError on failure, as std::expected
*/
virtual void
[[nodiscard]] virtual std::expected<void, LoaderError>
load(model::LedgerData const& data) = 0;
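The two `LoaderError` values map to distinct reactions in the ETL service: a write conflict makes the node give up the writer seat (cf. `giveUpWriter`), while an amendment block halts writes. A sketch of that dispatch, using `std::optional<LoaderError>` as a portable stand-in for `std::expected<void, LoaderError>` and a hypothetical `handleLoadResult` helper:

```cpp
#include <cassert>
#include <optional>
#include <string>

enum class LoaderError { AmendmentBlocked, WriteConflict };

// Empty optional means success, mirroring std::expected<void, LoaderError>.
// The reactions sketched here follow the ETL service's handling of each error.
std::string handleLoadResult(std::optional<LoaderError> err, bool& isWriting) {
    if (!err)
        return "loaded";
    switch (*err) {
        case LoaderError::WriteConflict:
            isWriting = false;  // give up the writer seat, like giveUpWriter()
            return "write conflict: gave up writer seat";
        case LoaderError::AmendmentBlocked:
            return "amendment blocked: halting ETL writes";
    }
    return "unreachable";
}
```

Returning the error instead of throwing keeps the loader's hot path exception-free and forces callers to decide explicitly between the two recovery strategies.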
/**

View File

@@ -36,10 +36,25 @@ namespace etlng {
class MonitorInterface {
public:
static constexpr auto kDEFAULT_REPEAT_INTERVAL = std::chrono::seconds{1};
using SignalType = boost::signals2::signal<void(uint32_t)>;
using NewSequenceSignalType = boost::signals2::signal<void(uint32_t)>;
using DbStalledSignalType = boost::signals2::signal<void()>;
virtual ~MonitorInterface() = default;
/**
* @brief Allows the loading process to notify of a freshly committed ledger
* @param seq The ledger sequence loaded
*/
virtual void
notifySequenceLoaded(uint32_t seq) = 0;
/**
* @brief Notifies the monitor of a write conflict
* @param seq The sequence number of the ledger that encountered a write conflict
*/
virtual void
notifyWriteConflict(uint32_t seq) = 0;
/**
* @brief Allows clients to get notified when a new ledger becomes available in Clio's database
*
@@ -47,7 +62,16 @@ public:
* @return A connection object that automatically disconnects the subscription once destroyed
*/
[[nodiscard]] virtual boost::signals2::scoped_connection
subscribe(SignalType::slot_type const& subscriber) = 0;
subscribeToNewSequence(NewSequenceSignalType::slot_type const& subscriber) = 0;
/**
* @brief Allows clients to get notified when no database update is detected for a configured period.
*
* @param subscriber The slot to connect
* @return A connection object that automatically disconnects the subscription once destroyed
*/
[[nodiscard]] virtual boost::signals2::scoped_connection
subscribeToDbStalled(DbStalledSignalType::slot_type const& subscriber) = 0;
/**
* @brief Run the monitor service

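Both new subscription methods return a `boost::signals2::scoped_connection`, an RAII handle that disconnects the slot when it goes out of scope. The same idea can be sketched with the standard library alone (this mini-signal is a stand-in for illustration, not boost's API; the signal must outlive its connections in this sketch):

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <utility>

class MiniSignal {
    std::map<int, std::function<void(uint32_t)>> slots_;
    int nextId_ = 0;

public:
    // RAII handle: destroying it disconnects the slot, like scoped_connection
    class Connection {
        MiniSignal* sig_;
        int id_;

    public:
        Connection(MiniSignal* s, int id) : sig_(s), id_(id) {}
        Connection(Connection const&) = delete;
        Connection& operator=(Connection const&) = delete;
        ~Connection() { sig_->slots_.erase(id_); }
    };

    Connection subscribe(std::function<void(uint32_t)> slot) {
        int const id = nextId_++;
        slots_.emplace(id, std::move(slot));
        return Connection{this, id};
    }

    void emit(uint32_t seq) {
        for (auto const& [id, slot] : slots_)
            slot(seq);
    }
};
```

Once the `Connection` is destroyed, subsequent `emit` calls no longer reach the slot, which is exactly why the interface documents that the connection "automatically disconnects the subscription once destroyed".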
View File

@@ -0,0 +1,64 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/BackendInterface.hpp"
#include "etl/NetworkValidatedLedgersInterface.hpp"
#include "etlng/MonitorInterface.hpp"
#include "util/async/AnyExecutionContext.hpp"
#include <chrono>
#include <cstdint>
#include <memory>
namespace etlng {
/**
* @brief An interface for providing Monitor instances
*/
struct MonitorProviderInterface {
/**
* @brief The time the Monitor should wait before reporting an absence of database updates
*/
static constexpr auto kDEFAULT_DB_STALLED_REPORT_DELAY = std::chrono::seconds{10};
virtual ~MonitorProviderInterface() = default;
/**
* @brief Create a new Monitor instance
*
* @param ctx The execution context for asynchronous operations
* @param backend Interface to the backend database
* @param validatedLedgers Interface for accessing network validated ledgers
* @param startSequence The sequence number to start monitoring from
* @param dbStalledReportDelay The timeout duration after which to signal no database updates
* @return A unique pointer to a Monitor implementation
*/
[[nodiscard]] virtual std::unique_ptr<MonitorInterface>
make(
util::async::AnyExecutionContext ctx,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<etl::NetworkValidatedLedgersInterface> validatedLedgers,
uint32_t startSequence,
std::chrono::steady_clock::duration dbStalledReportDelay = kDEFAULT_DB_STALLED_REPORT_DELAY
) = 0;
};
} // namespace etlng
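`MonitorProviderInterface` is a factory interface whose `make` carries a defaulted `dbStalledReportDelay`. A small standalone sketch of the same pattern (all types here are illustrative stand-ins, not clio's real classes); note that default arguments on virtual functions are bound at the static type of the call, so the default should live on the interface and calls should go through it:

```cpp
#include <chrono>
#include <memory>

// Illustrative stand-in for the product type
struct Monitor {
    std::chrono::steady_clock::duration stalledDelay;
    explicit Monitor(std::chrono::steady_clock::duration d) : stalledDelay(d) {}
};

struct MonitorProvider {
    static constexpr auto kDefaultDelay = std::chrono::seconds{10};
    virtual ~MonitorProvider() = default;
    // Default argument on the interface: it is resolved from the static type,
    // so invoke make() through MonitorProvider& to get the documented default.
    virtual std::unique_ptr<Monitor>
    make(std::chrono::steady_clock::duration delay = kDefaultDelay) = 0;
};

struct DefaultProvider : MonitorProvider {
    std::unique_ptr<Monitor> make(std::chrono::steady_clock::duration delay) override {
        return std::make_unique<Monitor>(delay);
    }
};
```

Returning `std::unique_ptr` keeps ownership with the caller while letting tests substitute a mock provider.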

View File

@@ -20,8 +20,8 @@
#include "etlng/Source.hpp"
#include "etl/NetworkValidatedLedgersInterface.hpp"
#include "etl/impl/ForwardingSource.hpp"
#include "etl/impl/SubscriptionSource.hpp"
#include "etlng/impl/ForwardingSource.hpp"
#include "etlng/impl/GrpcSource.hpp"
#include "etlng/impl/SourceImpl.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
@@ -52,7 +52,7 @@ makeSource(
auto const wsPort = config.get<std::string>("ws_port");
auto const grpcPort = config.get<std::string>("grpc_port");
etl::impl::ForwardingSource forwardingSource{ip, wsPort, forwardingTimeout};
etlng::impl::ForwardingSource forwardingSource{ip, wsPort, forwardingTimeout};
impl::GrpcSource grpcSource{ip, grpcPort};
auto subscriptionSource = std::make_unique<etl::impl::SubscriptionSource>(
ioc,

View File

@@ -19,9 +19,9 @@
#pragma once
#include "data/BackendInterface.hpp"
#include "etl/NetworkValidatedLedgersInterface.hpp"
#include "etlng/InitialLoadObserverInterface.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
#include "rpc/Errors.hpp"
#include "util/config/ObjectView.hpp"
@@ -131,7 +131,7 @@ public:
* @param loader InitialLoadObserverInterface implementation
* @return A std::pair of the data and a bool indicating whether the download was successful
*/
virtual std::pair<std::vector<std::string>, bool>
virtual InitialLedgerLoadResult
loadInitialLedger(uint32_t sequence, std::uint32_t numMarkers, etlng::InitialLoadObserverInterface& loader) = 0;
/**

View File

@@ -0,0 +1,46 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <cstddef>
namespace etlng {
/**
* @brief An interface for the Task Manager
*/
struct TaskManagerInterface {
virtual ~TaskManagerInterface() = default;
/**
* @brief Start the task manager with specified settings
* @param numExtractors The number of extraction tasks
*/
virtual void
run(size_t numExtractors) = 0;
/**
* @brief Stop the task manager
*/
virtual void
stop() = 0;
};
} // namespace etlng
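`TaskManagerInterface` boils down to a run/stop pair where `run` takes the number of extraction workers. A toy implementation of that shape (purely illustrative; clio's real task manager coordinates extractors and loaders over an async context rather than raw threads):

```cpp
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

// A toy task manager in the shape of TaskManagerInterface: run() spawns
// extractor workers, stop() asks them to finish and joins them.
class ToyTaskManager {
    std::atomic_bool stopping_{false};
    std::atomic<std::size_t> ticks_{0};
    std::vector<std::thread> workers_;

public:
    void run(std::size_t numExtractors) {
        for (std::size_t i = 0; i < numExtractors; ++i) {
            workers_.emplace_back([this] {
                while (!stopping_.load(std::memory_order_relaxed))
                    ticks_.fetch_add(1, std::memory_order_relaxed);  // stand-in for extraction work
            });
        }
    }

    void stop() {
        stopping_.store(true);
        for (auto& w : workers_)
            w.join();
        workers_.clear();
    }

    std::size_t ticks() const { return ticks_.load(); }
};
```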

View File

@@ -0,0 +1,58 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "etlng/MonitorInterface.hpp"
#include "etlng/TaskManagerInterface.hpp"
#include "util/async/AnyExecutionContext.hpp"
#include <cstddef>
#include <cstdint>
#include <functional>
#include <memory>
#include <optional>
namespace etlng {
/**
* @brief An interface for providing the Task Manager
*/
struct TaskManagerProviderInterface {
virtual ~TaskManagerProviderInterface() = default;
/**
* @brief Make a task manager
*
* @param ctx The async context to associate the task manager instance with
* @param monitor The monitor to notify when ledger is loaded
* @param startSeq The sequence to start at
* @param finishSeq The sequence to stop at if specified
* @return A unique pointer to a TaskManager implementation
*/
[[nodiscard]] virtual std::unique_ptr<TaskManagerInterface>
make(
util::async::AnyExecutionContext ctx,
std::reference_wrapper<MonitorInterface> monitor,
uint32_t startSeq,
std::optional<uint32_t> finishSeq = std::nullopt
) = 0;
};
} // namespace etlng

View File

@@ -36,7 +36,7 @@ AmendmentBlockHandler::ActionType const AmendmentBlockHandler::kDEFAULT_AMENDMEN
};
AmendmentBlockHandler::AmendmentBlockHandler(
util::async::AnyExecutionContext&& ctx,
util::async::AnyExecutionContext ctx,
etl::SystemState& state,
std::chrono::steady_clock::duration interval,
ActionType action

View File

@@ -50,7 +50,7 @@ public:
static ActionType const kDEFAULT_AMENDMENT_BLOCK_ACTION;
AmendmentBlockHandler(
util::async::AnyExecutionContext&& ctx,
util::async::AnyExecutionContext ctx,
etl::SystemState& state,
std::chrono::steady_clock::duration interval = std::chrono::seconds{1},
ActionType action = kDEFAULT_AMENDMENT_BLOCK_ACTION

View File

@@ -0,0 +1,69 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/LedgerCacheInterface.hpp"
#include "data/Types.hpp"
#include "etlng/CacheUpdaterInterface.hpp"
#include "etlng/Models.hpp"
#include "util/log/Logger.hpp"
#include <cstdint>
#include <functional>
#include <vector>
namespace etlng::impl {
class CacheUpdater : public CacheUpdaterInterface {
std::reference_wrapper<data::LedgerCacheInterface> cache_;
util::Logger log_{"ETL"};
public:
CacheUpdater(data::LedgerCacheInterface& cache) : cache_{cache}
{
}
void
update(model::LedgerData const& data) override
{
cache_.get().update(data.objects, data.seq);
}
void
update(uint32_t seq, std::vector<data::LedgerObject> const& objs) override
{
cache_.get().update(objs, seq);
}
void
update(uint32_t seq, std::vector<model::Object> const& objs) override
{
cache_.get().update(objs, seq);
}
void
setFull() override
{
cache_.get().setFull();
}
};
} // namespace etlng::impl

View File

@@ -90,6 +90,13 @@ public:
{
}
Extractor(Extractor const&) = delete;
Extractor(Extractor&&) = delete;
Extractor&
operator=(Extractor const&) = delete;
Extractor&
operator=(Extractor&&) = delete;
[[nodiscard]] std::optional<model::LedgerData>
extractLedgerWithDiff(uint32_t seq) override;

View File

@@ -20,11 +20,13 @@
#include "etlng/impl/GrpcSource.hpp"
#include "etlng/InitialLoadObserverInterface.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "etlng/impl/AsyncGrpcCall.hpp"
#include "util/Assert.hpp"
#include "util/log/Logger.hpp"
#include "web/Resolver.hpp"
#include <boost/asio/spawn.hpp>
#include <fmt/core.h>
#include <grpcpp/client_context.h>
#include <grpcpp/security/credentials.h>
@@ -33,9 +35,12 @@
#include <org/xrpl/rpc/v1/get_ledger.pb.h>
#include <org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <exception>
#include <expected>
#include <memory>
#include <stdexcept>
#include <string>
#include <utility>
@@ -60,6 +65,7 @@ namespace etlng::impl {
GrpcSource::GrpcSource(std::string const& ip, std::string const& grpcPort)
: log_(fmt::format("ETL_Grpc[{}:{}]", ip, grpcPort))
, initialLoadShouldStop_(std::make_unique<std::atomic_bool>(false))
{
try {
grpc::ChannelArguments chArgs;
@@ -103,15 +109,18 @@ GrpcSource::fetchLedger(uint32_t sequence, bool getObjects, bool getObjectNeighb
return {status, std::move(response)};
}
std::pair<std::vector<std::string>, bool>
InitialLedgerLoadResult
GrpcSource::loadInitialLedger(
uint32_t const sequence,
uint32_t const numMarkers,
etlng::InitialLoadObserverInterface& observer
)
{
if (*initialLoadShouldStop_)
return std::unexpected{InitialLedgerLoadError::Cancelled};
if (!stub_)
return {{}, false};
return std::unexpected{InitialLedgerLoadError::Errored};
std::vector<AsyncGrpcCall> calls = AsyncGrpcCall::makeAsyncCalls(sequence, numMarkers);
@@ -131,9 +140,9 @@ GrpcSource::loadInitialLedger(
ASSERT(tag != nullptr, "Tag can't be null.");
auto ptr = static_cast<AsyncGrpcCall*>(tag);
if (!ok) {
LOG(log_.error()) << "loadInitialLedger - ok is false";
return {{}, false}; // cancelled
if (not ok or *initialLoadShouldStop_) {
LOG(log_.error()) << "loadInitialLedger cancelled";
return std::unexpected{InitialLedgerLoadError::Cancelled};
}
LOG(log_.trace()) << "Marker prefix = " << ptr->getMarkerPrefix();
@@ -151,7 +160,16 @@ GrpcSource::loadInitialLedger(
abort = true;
}
return {std::move(edgeKeys), !abort};
if (abort)
return std::unexpected{InitialLedgerLoadError::Errored};
return edgeKeys;
}
void
GrpcSource::stop(boost::asio::yield_context)
{
initialLoadShouldStop_->store(true);
}
} // namespace etlng::impl

View File

@@ -20,23 +20,26 @@
#pragma once
#include "etlng/InitialLoadObserverInterface.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "util/log/Logger.hpp"
#include <boost/asio/spawn.hpp>
#include <grpcpp/support/status.h>
#include <org/xrpl/rpc/v1/get_ledger.pb.h>
#include <xrpl/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <atomic>
#include <cstdint>
#include <memory>
#include <string>
#include <utility>
#include <vector>
namespace etlng::impl {
class GrpcSource {
util::Logger log_;
std::unique_ptr<org::xrpl::rpc::v1::XRPLedgerAPIService::Stub> stub_;
std::unique_ptr<std::atomic_bool> initialLoadShouldStop_;
public:
GrpcSource(std::string const& ip, std::string const& grpcPort);
@@ -61,10 +64,18 @@ public:
* @param sequence Sequence of the ledger to download
* @param numMarkers Number of markers to generate for async calls
* @param observer InitialLoadObserverInterface implementation
* @return A std::pair of the data and a bool indicating whether the download was successful
* @return Downloaded data or an indication of error or cancellation
*/
std::pair<std::vector<std::string>, bool>
InitialLedgerLoadResult
loadInitialLedger(uint32_t sequence, uint32_t numMarkers, etlng::InitialLoadObserverInterface& observer);
/**
* @brief Stop any ongoing operations
* @note This is used to cancel any ongoing initial ledger downloads
* @param yield The coroutine context
*/
void
stop(boost::asio::yield_context yield);
};
} // namespace etlng::impl

View File

@@ -0,0 +1,282 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/BackendInterface.hpp"
#include "data/DBHelpers.hpp"
#include "etl/SystemState.hpp"
#include "etlng/LedgerPublisherInterface.hpp"
#include "etlng/impl/Loading.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
#include "util/Assert.hpp"
#include "util/Mutex.hpp"
#include "util/log/Logger.hpp"
#include "util/prometheus/Counter.hpp"
#include "util/prometheus/Prometheus.hpp"
#include <boost/asio/io_context.hpp>
#include <boost/asio/post.hpp>
#include <boost/asio/strand.hpp>
#include <fmt/core.h>
#include <xrpl/basics/chrono.h>
#include <xrpl/protocol/Fees.h>
#include <xrpl/protocol/LedgerHeader.h>
#include <xrpl/protocol/SField.h>
#include <xrpl/protocol/STObject.h>
#include <xrpl/protocol/Serializer.h>
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <memory>
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <string>
#include <thread>
#include <utility>
#include <vector>
namespace etlng::impl {
/**
* @brief Publishes ledgers in a synchronized fashion.
*
* If ETL is started far behind the network, ledgers will be written and published very rapidly. Monitoring processes
* will publish ledgers as they are written. However, to publish a ledger, the monitoring process needs to read all of
* the transactions for that ledger from the database. Reading the transactions from the database requires network
* calls, which can be slow. It is imperative, however, that the monitoring processes keep up with the writer;
* otherwise they will not be able to detect if the writer failed. Therefore, publishing each ledger (which
* includes reading all of the transactions from the database) is done from the application wide asio io_service, and a
* strand is used to ensure ledgers are published in order.
*/
class LedgerPublisher : public etlng::LedgerPublisherInterface {
util::Logger log_{"ETL"};
boost::asio::strand<boost::asio::io_context::executor_type> publishStrand_;
std::shared_ptr<BackendInterface> backend_;
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions_;
std::reference_wrapper<etl::SystemState const> state_; // shared state for ETL
util::Mutex<std::chrono::time_point<ripple::NetClock>, std::shared_mutex> lastCloseTime_;
std::reference_wrapper<util::prometheus::CounterInt> lastPublishSeconds_ = PrometheusService::counterInt(
"etl_last_publish_seconds",
{},
"Seconds since epoch of the last published ledger"
);
util::Mutex<std::optional<uint32_t>, std::shared_mutex> lastPublishedSequence_;
public:
/**
* @brief Create an instance of the publisher
*/
LedgerPublisher(
boost::asio::io_context& ioc, // TODO: replace with AsyncContext shared with ETLServiceNg
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
etl::SystemState const& state
)
: publishStrand_{boost::asio::make_strand(ioc)}
, backend_{std::move(backend)}
, subscriptions_{std::move(subscriptions)}
, state_{std::cref(state)}
{
}
/**
* @brief Attempt to read the specified ledger from the database, and then publish that ledger to the ledgers
* stream.
*
* @param ledgerSequence the sequence of the ledger to publish
* @param maxAttempts the number of times to attempt to read the ledger from the database
* @param attemptsDelay the delay between attempts to read the ledger from the database
* @return Whether the ledger was found in the database and published
*/
bool
publish(
uint32_t ledgerSequence,
std::optional<uint32_t> maxAttempts,
std::chrono::steady_clock::duration attemptsDelay = std::chrono::seconds{1}
) override
{
LOG(log_.info()) << "Attempting to publish ledger = " << ledgerSequence;
size_t numAttempts = 0;
while (not state_.get().isStopping) {
auto range = backend_->hardFetchLedgerRangeNoThrow();
if (!range || range->maxSequence < ledgerSequence) {
++numAttempts;
LOG(log_.debug()) << "Trying to publish. Could not find ledger with sequence = " << ledgerSequence;
// We try maxAttempts times to publish the ledger, waiting attemptsDelay in between each attempt.
if (maxAttempts && numAttempts >= maxAttempts) {
LOG(log_.debug()) << "Failed to publish ledger after " << numAttempts << " attempts.";
return false;
}
std::this_thread::sleep_for(attemptsDelay);
continue;
}
auto lgr = data::synchronousAndRetryOnTimeout([&](auto yield) {
return backend_->fetchLedgerBySequence(ledgerSequence, yield);
});
ASSERT(lgr.has_value(), "Ledger must exist in database. Ledger sequence = {}", ledgerSequence);
publish(*lgr);
return true;
}
return false;
}
/**
* @brief Publish the passed ledger asynchronously.
*
* All ledgers are published through publishStrand_, which ensures that all publishes are performed in a serial fashion.
*
* @param lgrInfo the ledger to publish
*/
void
publish(ripple::LedgerHeader const& lgrInfo)
{
boost::asio::post(publishStrand_, [this, lgrInfo = lgrInfo]() {
LOG(log_.info()) << "Publishing ledger " << std::to_string(lgrInfo.seq);
setLastClose(lgrInfo.closeTime);
auto age = lastCloseAgeSeconds();
// if the ledger closed over MAX_LEDGER_AGE_SECONDS ago, assume we are still catching up and don't publish
static constexpr std::uint32_t kMAX_LEDGER_AGE_SECONDS = 600;
if (age < kMAX_LEDGER_AGE_SECONDS) {
std::optional<ripple::Fees> fees = data::synchronousAndRetryOnTimeout([&](auto yield) {
return backend_->fetchFees(lgrInfo.seq, yield);
});
ASSERT(fees.has_value(), "Fees must exist for ledger {}", lgrInfo.seq);
auto transactions = data::synchronousAndRetryOnTimeout([&](auto yield) {
return backend_->fetchAllTransactionsInLedger(lgrInfo.seq, yield);
});
auto const ledgerRange = backend_->fetchLedgerRange();
ASSERT(ledgerRange.has_value(), "Ledger range must exist");
auto const range = fmt::format("{}-{}", ledgerRange->minSequence, ledgerRange->maxSequence);
subscriptions_->pubLedger(lgrInfo, *fees, range, transactions.size());
// order with transaction index
std::ranges::sort(transactions, [](auto const& t1, auto const& t2) {
ripple::SerialIter iter1{t1.metadata.data(), t1.metadata.size()};
ripple::STObject const object1(iter1, ripple::sfMetadata);
ripple::SerialIter iter2{t2.metadata.data(), t2.metadata.size()};
ripple::STObject const object2(iter2, ripple::sfMetadata);
return object1.getFieldU32(ripple::sfTransactionIndex) <
object2.getFieldU32(ripple::sfTransactionIndex);
});
for (auto const& txAndMeta : transactions)
subscriptions_->pubTransaction(txAndMeta, lgrInfo);
subscriptions_->pubBookChanges(lgrInfo, transactions);
setLastPublishTime();
LOG(log_.info()) << "Published ledger " << lgrInfo.seq;
} else {
LOG(log_.info()) << "Skipping publishing ledger " << lgrInfo.seq;
}
});
// we track latest publish-requested seq, not necessarily already published
setLastPublishedSequence(lgrInfo.seq);
}
/**
* @brief Get time passed since last publish, in seconds
*/
std::uint32_t
lastPublishAgeSeconds() const override
{
return std::chrono::duration_cast<std::chrono::seconds>(std::chrono::system_clock::now() - getLastPublish())
.count();
}
/**
* @brief Get last publish time as a time point
*/
std::chrono::time_point<std::chrono::system_clock>
getLastPublish() const override
{
return std::chrono::time_point<std::chrono::system_clock>{std::chrono::seconds{lastPublishSeconds_.get().value()
}};
}
/**
* @brief Get time passed since last ledger close, in seconds
*/
std::uint32_t
lastCloseAgeSeconds() const override
{
auto closeTime = lastCloseTime_.lock()->time_since_epoch().count();
auto now = std::chrono::duration_cast<std::chrono::seconds>(std::chrono::system_clock::now().time_since_epoch())
.count();
if (now < (kRIPPLE_EPOCH_START + closeTime))
return 0;
return now - (kRIPPLE_EPOCH_START + closeTime);
}
/**
* @brief Get the sequence of the last scheduled ledger to publish. Be aware that the ledger may not have been
* published to the network
*/
std::optional<uint32_t>
getLastPublishedSequence() const
{
return *lastPublishedSequence_.lock();
}
private:
void
setLastClose(std::chrono::time_point<ripple::NetClock> lastCloseTime)
{
auto closeTime = lastCloseTime_.lock<std::scoped_lock>();
*closeTime = lastCloseTime;
}
void
setLastPublishTime()
{
using namespace std::chrono;
auto const nowSeconds = duration_cast<seconds>(system_clock::now().time_since_epoch()).count();
lastPublishSeconds_.get().set(nowSeconds);
}
void
setLastPublishedSequence(std::optional<uint32_t> lastPublishedSequence)
{
auto lastPublishSeq = lastPublishedSequence_.lock();
*lastPublishSeq = lastPublishedSequence;
}
};
} // namespace etlng::impl
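The class comment above hinges on one design choice: posting every publish through a single strand so ledgers go out in order even though work is submitted from many places. The guarantee a strand provides can be sketched without Boost.Asio as a one-worker task queue (a simplified stand-in, not asio's strand implementation):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>

// A minimal strand: tasks posted from any thread run one at a time, in FIFO order.
class MiniStrand {
    std::queue<std::function<void()>> tasks_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
    std::thread worker_;  // declared last so all other members exist before it starts

public:
    MiniStrand()
        : worker_([this] {
            for (;;) {
                std::function<void()> task;
                {
                    std::unique_lock lk(m_);
                    cv_.wait(lk, [this] { return done_ || !tasks_.empty(); });
                    if (tasks_.empty())
                        return;  // done_ set and queue drained
                    task = std::move(tasks_.front());
                    tasks_.pop();
                }
                task();  // run outside the lock, one task at a time
            }
        })
    {
    }

    void post(std::function<void()> f) {
        {
            std::lock_guard lk(m_);
            tasks_.push(std::move(f));
        }
        cv_.notify_one();
    }

    ~MiniStrand() {
        {
            std::lock_guard lk(m_);
            done_ = true;  // drain remaining tasks, then let the worker exit
        }
        cv_.notify_one();
        worker_.join();
    }
};
```

Because a single worker drains the queue, tasks never interleave, which is the property `publishStrand_` relies on to keep ledger publishes serialized.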

View File

@@ -20,12 +20,14 @@
#include "etlng/impl/Loading.hpp"
#include "data/BackendInterface.hpp"
#include "etl/LedgerFetcherInterface.hpp"
#include "etl/SystemState.hpp"
#include "etl/impl/LedgerLoader.hpp"
#include "etlng/AmendmentBlockHandlerInterface.hpp"
#include "etlng/LoaderInterface.hpp"
#include "etlng/Models.hpp"
#include "etlng/RegistryInterface.hpp"
#include "util/Assert.hpp"
#include "util/Constants.hpp"
#include "util/LedgerUtils.hpp"
#include "util/Profiler.hpp"
#include "util/log/Logger.hpp"
@@ -46,32 +48,46 @@ namespace etlng::impl {
Loader::Loader(
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<etl::LedgerFetcherInterface> fetcher,
std::shared_ptr<RegistryInterface> registry,
std::shared_ptr<AmendmentBlockHandlerInterface> amendmentBlockHandler
std::shared_ptr<AmendmentBlockHandlerInterface> amendmentBlockHandler,
std::shared_ptr<etl::SystemState> state
)
: backend_(std::move(backend))
, fetcher_(std::move(fetcher))
, registry_(std::move(registry))
, amendmentBlockHandler_(std::move(amendmentBlockHandler))
, state_(std::move(state))
{
}
void
std::expected<void, LoaderError>
Loader::load(model::LedgerData const& data)
{
try {
// perform cache updates and all writes from extensions
// Perform cache updates and all writes from extensions
// TODO: maybe this readonly logic should be removed?
registry_->dispatch(data);
auto [success, duration] =
::util::timed<std::chrono::duration<double>>([&]() { return backend_->finishWrites(data.seq); });
LOG(log_.info()) << "Finished writes to DB for " << data.seq << ": " << (success ? "YES" : "NO") << "; took "
<< duration;
// Only a writer should attempt to commit to DB
// This is also where conflicts with other writer nodes will be detected
if (state_->isWriting) {
auto [success, duration] =
::util::timed<std::chrono::milliseconds>([&]() { return backend_->finishWrites(data.seq); });
LOG(log_.info()) << "Finished writes to DB for " << data.seq << ": " << (success ? "YES" : "NO")
<< "; took " << duration << "ms";
if (not success) {
state_->writeConflict = true;
LOG(log_.warn()) << "Another node wrote a ledger into the DB - we have a write conflict";
return std::unexpected(LoaderError::WriteConflict);
}
}
} catch (std::runtime_error const& e) {
LOG(log_.fatal()) << "Failed to load " << data.seq << ": " << e.what();
amendmentBlockHandler_->notifyAmendmentBlocked();
return std::unexpected(LoaderError::AmendmentBlocked);
}
return {};
};
void
@@ -81,31 +97,61 @@ Loader::onInitialLoadGotMoreObjects(
std::optional<std::string> lastKey
)
{
LOG(log_.debug()) << "On initial load: got more objects for seq " << seq << ". size = " << data.size();
registry_->dispatchInitialObjects(
seq, data, std::move(lastKey).value_or(std::string{}) // TODO: perhaps use optional all the way to extensions?
);
static constexpr std::size_t kLOG_STRIDE = 1000u;
static auto kINITIAL_LOAD_START_TIME = std::chrono::steady_clock::now();
try {
LOG(log_.trace()) << "On initial load: got more objects for seq " << seq << ". size = " << data.size();
registry_->dispatchInitialObjects(
seq,
data,
std::move(lastKey).value_or(std::string{}) // TODO: perhaps use optional all the way to extensions?
);
initialLoadWrittenObjects_ += data.size();
++initialLoadWrites_;
if (initialLoadWrites_ % kLOG_STRIDE == 0u && initialLoadWrites_ != 0u) {
auto elapsedSinceStart = std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now() - kINITIAL_LOAD_START_TIME
);
auto elapsedSeconds = elapsedSinceStart.count() / static_cast<double>(util::kMILLISECONDS_PER_SECOND);
auto objectsPerSecond =
elapsedSeconds > 0.0 ? static_cast<double>(initialLoadWrittenObjects_) / elapsedSeconds : 0.0;
LOG(log_.info()) << "Wrote " << initialLoadWrittenObjects_
<< " initial ledger objects so far with average rate of " << objectsPerSecond
<< " objects per second";
}
} catch (std::runtime_error const& e) {
LOG(log_.fatal()) << "Failed to load initial objects for " << seq << ": " << e.what();
amendmentBlockHandler_->notifyAmendmentBlocked();
}
}
std::optional<ripple::LedgerHeader>
Loader::loadInitialLedger(model::LedgerData const& data)
{
// check that database is actually empty
auto rng = backend_->hardFetchLedgerRangeNoThrow();
if (rng) {
ASSERT(false, "Database is not empty");
try {
if (auto const rng = backend_->hardFetchLedgerRangeNoThrow(); rng.has_value()) {
ASSERT(false, "Database is not empty");
return std::nullopt;
}
LOG(log_.debug()) << "Deserialized ledger header. " << ::util::toString(data.header);
auto seconds = ::util::timed<std::chrono::seconds>([this, &data]() { registry_->dispatchInitialData(data); });
LOG(log_.info()) << "Dispatching initial data and submitting all writes took " << seconds << " seconds.";
backend_->finishWrites(data.seq);
LOG(log_.debug()) << "Loaded initial ledger";
return {data.header};
} catch (std::runtime_error const& e) {
LOG(log_.fatal()) << "Failed to load initial ledger " << data.seq << ": " << e.what();
amendmentBlockHandler_->notifyAmendmentBlocked();
return std::nullopt;
}
LOG(log_.debug()) << "Deserialized ledger header. " << ::util::toString(data.header);
auto seconds = ::util::timed<std::chrono::seconds>([this, &data]() { registry_->dispatchInitialData(data); });
LOG(log_.info()) << "Dispatching initial data and submitting all writes took " << seconds << " seconds.";
backend_->finishWrites(data.seq);
LOG(log_.debug()) << "Loaded initial ledger";
return {data.header};
}
} // namespace etlng::impl
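The diff switches `Loader::load` to time `finishWrites` via `::util::timed<std::chrono::milliseconds>`, which returns the callable's result together with its duration. A sketch in the same spirit (an assumed shape, not clio's actual `util::timed`; handles non-void callables only):

```cpp
#include <chrono>
#include <utility>

// Run a callable and return {result, elapsed-as-Duration}.
template <typename Duration = std::chrono::milliseconds, typename F>
auto timed(F&& f) {
    auto const start = std::chrono::steady_clock::now();
    auto result = std::forward<F>(f)();
    auto const elapsed =
        std::chrono::duration_cast<Duration>(std::chrono::steady_clock::now() - start);
    return std::pair{std::move(result), elapsed};
}
```

Callers then unpack both with a structured binding, e.g. `auto [success, duration] = timed([&] { return backend->finishWrites(seq); });`, matching the usage in `Loader::load` above.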

View File

@@ -20,7 +20,7 @@
#pragma once
#include "data/BackendInterface.hpp"
#include "etl/LedgerFetcherInterface.hpp"
#include "etl/SystemState.hpp"
#include "etl/impl/LedgerLoader.hpp"
#include "etlng/AmendmentBlockHandlerInterface.hpp"
#include "etlng/InitialLoadObserverInterface.hpp"
@@ -39,6 +39,7 @@
#include <xrpl/protocol/Serializer.h>
#include <xrpl/protocol/TxMeta.h>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <optional>
@@ -49,10 +50,12 @@ namespace etlng::impl {
class Loader : public LoaderInterface, public InitialLoadObserverInterface {
std::shared_ptr<BackendInterface> backend_;
std::shared_ptr<etl::LedgerFetcherInterface> fetcher_;
std::shared_ptr<RegistryInterface> registry_;
std::shared_ptr<AmendmentBlockHandlerInterface> amendmentBlockHandler_;
std::shared_ptr<etl::SystemState> state_;
std::size_t initialLoadWrittenObjects_{0u};
std::size_t initialLoadWrites_{0u};
util::Logger log_{"ETL"};
public:
@@ -62,12 +65,19 @@ public:
Loader(
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<etl::LedgerFetcherInterface> fetcher,
std::shared_ptr<RegistryInterface> registry,
std::shared_ptr<AmendmentBlockHandlerInterface> amendmentBlockHandler
std::shared_ptr<AmendmentBlockHandlerInterface> amendmentBlockHandler,
std::shared_ptr<etl::SystemState> state
);
void
Loader(Loader const&) = delete;
Loader(Loader&&) = delete;
Loader&
operator=(Loader const&) = delete;
Loader&
operator=(Loader&&) = delete;
std::expected<void, LoaderError>
load(model::LedgerData const& data) override;
void

View File

@@ -23,11 +23,11 @@
#include "etl/NetworkValidatedLedgersInterface.hpp"
#include "util/Assert.hpp"
#include "util/async/AnyExecutionContext.hpp"
#include "util/async/AnyOperation.hpp"
#include "util/log/Logger.hpp"
#include <boost/signals2/connection.hpp>
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdint>
@@ -41,12 +41,18 @@ Monitor::Monitor(
util::async::AnyExecutionContext ctx,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<etl::NetworkValidatedLedgersInterface> validatedLedgers,
uint32_t startSequence
uint32_t startSequence,
std::chrono::steady_clock::duration dbStalledReportDelay
)
: strand_(ctx.makeStrand())
, backend_(std::move(backend))
, validatedLedgers_(std::move(validatedLedgers))
, nextSequence_(startSequence)
, updateData_({
.dbStalledReportDelay = dbStalledReportDelay,
.lastDbCheckTime = std::chrono::steady_clock::now(),
.lastSeenMaxSeqInDb = startSequence > 0 ? startSequence - 1 : 0,
})
{
}
@@ -55,11 +61,37 @@ Monitor::~Monitor()
stop();
}
void
Monitor::notifySequenceLoaded(uint32_t seq)
{
LOG(log_.debug()) << "Loader notified Monitor about newly committed ledger " << seq;
{
auto lck = updateData_.lock();
lck->lastSeenMaxSeqInDb = std::max(seq, lck->lastSeenMaxSeqInDb);
lck->lastDbCheckTime = std::chrono::steady_clock::now();
}
repeatedTask_->invoke(); // force-invoke doWork immediately
}
void
Monitor::notifyWriteConflict(uint32_t seq)
{
LOG(log_.warn()) << "Loader notified Monitor about write conflict at " << seq;
nextSequence_ = seq + 1; // we already loaded the cache for seq just before we detected conflict
LOG(log_.warn()) << "Resume monitoring from " << nextSequence_;
}
void
Monitor::run(std::chrono::steady_clock::duration repeatInterval)
{
ASSERT(not repeatedTask_.has_value(), "Monitor attempted to run more than once");
LOG(log_.debug()) << "Starting monitor";
{
auto lck = updateData_.lock();
LOG(log_.debug()) << "Starting monitor with repeat interval: "
<< std::chrono::duration_cast<std::chrono::seconds>(repeatInterval).count()
<< "s and dbStalledReportDelay: "
<< std::chrono::duration_cast<std::chrono::seconds>(lck->dbStalledReportDelay).count() << "s";
}
repeatedTask_ = strand_.executeRepeatedly(repeatInterval, std::bind_front(&Monitor::doWork, this));
subscription_ = validatedLedgers_->subscribe(std::bind_front(&Monitor::onNextSequence, this));
@@ -71,28 +103,65 @@ Monitor::stop()
if (repeatedTask_.has_value())
repeatedTask_->abort();
subscription_ = std::nullopt;
repeatedTask_ = std::nullopt;
}
boost::signals2::scoped_connection
Monitor::subscribe(SignalType::slot_type const& subscriber)
Monitor::subscribeToNewSequence(NewSequenceSignalType::slot_type const& subscriber)
{
return notificationChannel_.connect(subscriber);
}
boost::signals2::scoped_connection
Monitor::subscribeToDbStalled(DbStalledSignalType::slot_type const& subscriber)
{
return dbStalledChannel_.connect(subscriber);
}
void
Monitor::onNextSequence(uint32_t seq)
{
LOG(log_.debug()) << "rippled published sequence " << seq;
ASSERT(repeatedTask_.has_value(), "Ledger subscription without repeated task is a logic error");
LOG(log_.debug()) << "Notified about new sequence on the network: " << seq;
repeatedTask_->invoke(); // force-invoke immediately
}
void
Monitor::doWork()
{
if (auto rng = backend_->hardFetchLedgerRangeNoThrow(); rng) {
while (rng->maxSequence >= nextSequence_)
auto rng = backend_->hardFetchLedgerRangeNoThrow();
bool dbProgressedThisCycle = false;
auto lck = updateData_.lock();
if (rng.has_value()) {
if (rng->maxSequence > lck->lastSeenMaxSeqInDb) {
LOG(log_.trace()) << "DB progressed. Old max seq = " << lck->lastSeenMaxSeqInDb
<< ", new max seq = " << rng->maxSequence;
lck->lastSeenMaxSeqInDb = rng->maxSequence;
dbProgressedThisCycle = true;
}
while (lck->lastSeenMaxSeqInDb >= nextSequence_) {
LOG(log_.trace()) << "Publishing from Monitor::doWork. nextSequence_ = " << nextSequence_
<< ", lastSeenMaxSeqInDb = " << lck->lastSeenMaxSeqInDb;
notificationChannel_(nextSequence_++);
dbProgressedThisCycle = true;
}
} else {
LOG(log_.trace()) << "DB range is not available or empty. lastSeenMaxSeqInDb = " << lck->lastSeenMaxSeqInDb
<< ", nextSequence_ = " << nextSequence_;
}
if (dbProgressedThisCycle) {
lck->lastDbCheckTime = std::chrono::steady_clock::now();
} else if (std::chrono::steady_clock::now() - lck->lastDbCheckTime > lck->dbStalledReportDelay) {
LOG(log_.info()) << "No DB update detected for "
<< std::chrono::duration_cast<std::chrono::seconds>(lck->dbStalledReportDelay).count()
<< " seconds. Firing dbStalledChannel. Last seen max seq in DB: " << lck->lastSeenMaxSeqInDb
<< ". Expecting next: " << nextSequence_;
dbStalledChannel_();
lck->lastDbCheckTime = std::chrono::steady_clock::now();
}
}


@@ -22,6 +22,7 @@
#include "data/BackendInterface.hpp"
#include "etl/NetworkValidatedLedgersInterface.hpp"
#include "etlng/MonitorInterface.hpp"
#include "util/Mutex.hpp"
#include "util/async/AnyExecutionContext.hpp"
#include "util/async/AnyOperation.hpp"
#include "util/async/AnyStrand.hpp"
@@ -30,6 +31,7 @@
#include <boost/signals2/connection.hpp>
#include <xrpl/protocol/TxFormats.h>
#include <atomic>
#include <chrono>
#include <cstddef>
#include <cstdint>
@@ -43,11 +45,20 @@ class Monitor : public MonitorInterface {
std::shared_ptr<BackendInterface> backend_;
std::shared_ptr<etl::NetworkValidatedLedgersInterface> validatedLedgers_;
uint32_t nextSequence_;
std::atomic_uint32_t nextSequence_;
std::optional<util::async::AnyOperation<void>> repeatedTask_;
std::optional<boost::signals2::scoped_connection> subscription_; // network validated ledgers subscription
SignalType notificationChannel_;
NewSequenceSignalType notificationChannel_;
DbStalledSignalType dbStalledChannel_;
struct UpdateData {
std::chrono::steady_clock::duration dbStalledReportDelay;
std::chrono::steady_clock::time_point lastDbCheckTime;
uint32_t lastSeenMaxSeqInDb = 0u;
};
util::Mutex<UpdateData> updateData_;
util::Logger log_{"ETL"};
@@ -56,10 +67,17 @@ public:
util::async::AnyExecutionContext ctx,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<etl::NetworkValidatedLedgersInterface> validatedLedgers,
uint32_t startSequence
uint32_t startSequence,
std::chrono::steady_clock::duration dbStalledReportDelay
);
~Monitor() override;
void
notifySequenceLoaded(uint32_t seq) override;
void
notifyWriteConflict(uint32_t seq) override;
void
run(std::chrono::steady_clock::duration repeatInterval) override;
@@ -67,7 +85,10 @@ public:
stop() override;
boost::signals2::scoped_connection
subscribe(SignalType::slot_type const& subscriber) override;
subscribeToNewSequence(NewSequenceSignalType::slot_type const& subscriber) override;
boost::signals2::scoped_connection
subscribeToDbStalled(DbStalledSignalType::slot_type const& subscriber) override;
private:
void


@@ -0,0 +1,53 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/BackendInterface.hpp"
#include "etl/NetworkValidatedLedgersInterface.hpp"
#include "etlng/MonitorInterface.hpp"
#include "etlng/MonitorProviderInterface.hpp"
#include "etlng/impl/Monitor.hpp"
#include "util/async/AnyExecutionContext.hpp"
#include <chrono>
#include <cstdint>
#include <memory>
#include <utility>
namespace etlng::impl {
class MonitorProvider : public MonitorProviderInterface {
public:
std::unique_ptr<MonitorInterface>
make(
util::async::AnyExecutionContext ctx,
std::shared_ptr<BackendInterface> backend,
std::shared_ptr<etl::NetworkValidatedLedgersInterface> validatedLedgers,
uint32_t startSequence,
std::chrono::steady_clock::duration dbStalledReportDelay
) override
{
return std::make_unique<Monitor>(
std::move(ctx), std::move(backend), std::move(validatedLedgers), startSequence, dbStalledReportDelay
);
}
};
} // namespace etlng::impl

Some files were not shown because too many files have changed in this diff.