Compare commits


58 Commits

Author SHA1 Message Date
Alex Kremer
40be25a68c fix: Add upper bound to limit 2024-12-11 23:51:44 +00:00
Alex Kremer
9fc8846f6a chore: Add relevant changes from develop (#1762) 2024-11-27 19:16:38 +00:00
Peter Chen
d001e35427 fix: authorized_credential elements in array not objects bug (#1744) (#1747)
fixes: #1743
2024-11-21 12:05:15 -05:00
Peter Chen
592af70f03 fix: Credential error message (#1738)
fixes #1737
2024-11-18 16:15:11 +00:00
Alex Kremer
33cf336964 feat: Upgrade to libxrpl 2.3.0-rc2 (#1736) 2024-11-18 16:13:59 +00:00
github-actions[bot]
16e07b90db style: clang-tidy auto fixes (#1735)
Fixes #1734. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-11-18 16:13:47 +00:00
Peter Chen
39419c8b58 feat: Add Support Credentials for Clio (#1712)
Rippled PR: [here](https://github.com/XRPLF/rippled/pull/5103)
2024-11-18 16:13:09 +00:00
github-actions[bot]
e38658a0d6 style: clang-tidy auto fixes (#1730)
Fixes #1729. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-11-18 16:12:40 +00:00
Shawn Xie
fb98a6a394 feat: Implement MPT changes (#1147)
Implements https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0033d-multi-purpose-tokens
2024-11-11 17:27:04 +00:00
dependabot[bot]
b8a8248c42 ci: Bump wandalen/wretry.action from 3.7.0 to 3.7.2 (#1723)
Bumps [wandalen/wretry.action](https://github.com/wandalen/wretry.action) from 3.7.0 to 3.7.2.

Commits:
- `8ceaefd` version 3.7.2
- `ce976ac` version 3.7.1
- `7a8f8d4` Merge pull request [#174](https://redirect.github.com/wandalen/wretry.action/issues/174) from dmvict/master
- `2103bce` Fix action, add option `pre_retry_command` to call of subaction
- See full diff in [compare view](https://github.com/wandalen/wretry.action/compare/v3.7.0...v3.7.2)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Alex Kremer <akremer@ripple.com>
2024-11-11 17:27:04 +00:00
Alex Kremer
a092b7ae08 feat: Upgrade to libxrpl 2.3.0-rc1 (#1718)
Fixes #1717
2024-11-11 17:27:04 +00:00
github-actions[bot]
07438a2e02 style: clang-tidy auto fixes (#1720)
Fixes #1719. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-11-11 17:27:04 +00:00
Alex Kremer
09aa688de4 feat: ETLng Registry (#1713)
For #1597
2024-11-11 17:27:03 +00:00
dependabot[bot]
a7bff26fd6 ci: Bump wandalen/wretry.action from 3.5.0 to 3.7.0 (#1714)
Bumps [wandalen/wretry.action](https://github.com/wandalen/wretry.action) from 3.5.0 to 3.7.0.

Commits:
- `f8754f7` version 3.7.0
- `03db983` Merge pull request [#171](https://redirect.github.com/wandalen/wretry.action/issues/171) from dmvict/docker_readme
- `d80901c` Sync readme for new feature
- `e00d406` version 3.6.0
- `e00deaa` Merge pull request [#170](https://redirect.github.com/wandalen/wretry.action/issues/170) from dmvict/pre_retry_action
- `8b50f31` Update action, add option `pre_retry_command` to run command between retries
- `990f169` Merge pull request [#167](https://redirect.github.com/wandalen/wretry.action/issues/167) from Vampire/add-typing
- `aeb34f4` Add action typing
- See full diff in [compare view](https://github.com/wandalen/wretry.action/compare/v3.5.0...v3.7.0)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Alex Kremer <akremer@ripple.com>
2024-11-11 17:27:03 +00:00
Peter Chen
081adf1cae fix: Support Delete NFT (#1695)
Fixes #1677
2024-11-11 17:27:03 +00:00
cyan317
ffc9deb0f8 fix: Add queue size limit for websocket (#1701)
For slow clients, we now disconnect if the message queue grows too long.

---------

Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
2024-11-11 17:27:02 +00:00
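The idea behind this change, roughly: outgoing messages are buffered per connection, and a client that cannot keep up is dropped once its buffer exceeds a configured limit. Below is a minimal illustrative sketch of that pattern; the class and member names are invented for the example and are not Clio's actual implementation.

```cpp
#include <cstddef>
#include <deque>
#include <string>
#include <utility>

// Hypothetical sketch of a bounded per-connection send queue.
class WsSession {
    std::deque<std::string> sendQueue_;
    std::size_t maxQueueSize_;  // assumed configurable limit

public:
    explicit WsSession(std::size_t maxQueueSize) : maxQueueSize_(maxQueueSize) {}

    // Returns false when the client is too slow to drain its queue;
    // the caller is then expected to close the connection.
    bool enqueue(std::string message)
    {
        if (sendQueue_.size() >= maxQueueSize_)
            return false;
        sendQueue_.push_back(std::move(message));
        return true;
    }
};
```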
github-actions[bot]
717a29ecdf style: clang-tidy auto fixes (#1711)
Fixes #1710.

---------

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
2024-11-11 17:27:02 +00:00
Sergey Kuznetsov
e8db74456a ci: Fix nightly build (#1709)
Fixes #1703.
2024-11-11 17:27:02 +00:00
Sergey Kuznetsov
4947a83696 fix: Fix issues clang-tidy found (#1708)
Fixes #1706.
2024-11-11 17:27:01 +00:00
github-actions[bot]
164387cab0 style: clang-tidy auto fixes (#1705)
Fixes #1704. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-11-11 17:27:01 +00:00
Sergey Kuznetsov
b8f1deb90f refactor: Coroutine based webserver (#1699)
Code of the new coroutine-based web server. The new server is not yet connected to Clio and is not ready for use.
For #919.
2024-11-11 17:27:01 +00:00
Sergey Kuznetsov
5c77e59374 fix: Fix timer spurious calls (#1700)
Fixes #1634.
I also checked other timers and they don't have the issue.
2024-11-11 17:27:00 +00:00
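Spurious calls are a well-known pitfall with Boost.Asio timers: a timer that is cancelled or re-armed still invokes its pending handler. The usual guard, shown here as an illustrative sketch rather than the exact fix from #1700, is to check the error code before doing any work.

```cpp
#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

int main()
{
    boost::asio::io_context ctx;
    boost::asio::steady_timer timer{ctx, std::chrono::seconds{1}};

    timer.async_wait([](boost::system::error_code const& ec) {
        // A cancelled (or re-armed) timer still fires its handler;
        // ignoring operation_aborted prevents acting on spurious calls.
        if (ec == boost::asio::error::operation_aborted)
            return;
        std::cout << "timer fired for real\n";
    });

    timer.cancel();  // handler runs with operation_aborted and does nothing
    ctx.run();
}
```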
Peter Chen
6d070132c7 fix: example config syntax (#1696) 2024-11-11 17:27:00 +00:00
cyan317
d2dda69448 fix: Remove log (#1694) 2024-11-11 17:27:00 +00:00
cyan317
e2aeaa0956 chore: Add counter for total messages waiting to be sent (#1691) 2024-11-11 17:27:00 +00:00
Sergey Kuznetsov
2951b4aaa0 style: Fix include (#1687)
Fixes #1686
2024-11-11 17:26:59 +00:00
cyan317
6c3c761dd1 chore: Remove unused static variables (#1683) 2024-11-11 17:26:59 +00:00
github-actions[bot]
527020680a style: clang-tidy auto fixes (#1685)
Fixes #1684. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-11-11 17:26:59 +00:00
Alex Kremer
401448f771 style: Update code formatting (#1682)
For #1664
2024-11-11 17:26:58 +00:00
Alex Kremer
0f12a6d7f2 chore: Upgrade to llvm 19 tooling (#1681)
For #1664
2024-11-11 17:26:58 +00:00
Peter Chen
5c8fc939f2 fix: deletion script will not OOM (#1679)
fixes #1676 and #1678
2024-11-11 17:26:58 +00:00
github-actions[bot]
b1be848098 style: clang-tidy auto fixes (#1674)
Fixes #1673. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-11-11 17:26:58 +00:00
cyan317
41aabbfcce feat: server info cache (#1671)
fix: #1181
2024-11-11 17:26:57 +00:00
dependabot[bot]
c00d25aa6b chore: Bump ytanikin/PRConventionalCommits from 1.2.0 to 1.3.0 (#1670)
Bumps [ytanikin/PRConventionalCommits](https://github.com/ytanikin/prconventionalcommits) from 1.2.0 to 1.3.0.

Release notes for 1.3.0 (sourced from [ytanikin/PRConventionalCommits's releases](https://github.com/ytanikin/prconventionalcommits/releases)):
- fix: Set breaking changes regex by [@alexangas](https://github.com/alexangas) (their first contribution) in [ytanikin/PRConventionalCommits#24](https://redirect.github.com/ytanikin/PRConventionalCommits/pull/24)
- Full changelog: https://github.com/ytanikin/PRConventionalCommits/compare/1.2.0...1.3.0

Commits:
- `b628c5a` test: enable "breaking change" test
- `e1b5683` fix: Set breaking changes regex ([#24](https://redirect.github.com/ytanikin/prconventionalcommits/issues/24))
- `92a7ab7` fix: upgrade dependencies ([#26](https://redirect.github.com/ytanikin/prconventionalcommits/issues/26))
- `cc6cc0d` test: fix tests ([#25](https://redirect.github.com/ytanikin/prconventionalcommits/issues/25))
- See full diff in [compare view](https://github.com/ytanikin/prconventionalcommits/compare/1.2.0...1.3.0)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-11 17:26:57 +00:00
Alex Kremer
8d5c588e35 chore: Apply commits for 2.3.0-b4 (#1725) 2024-11-11 14:37:31 +00:00
Sergey Kuznetsov
9df3e936cc chore: Update libxrpl to 2.3.0-b4 (#1667) 2024-09-25 14:44:03 +01:00
Alex Kremer
4166c46820 fix: Workaround for gcc12 bug with defaulted destructors (#1666)
Fixes #1662
2024-09-25 14:44:03 +01:00
github-actions[bot]
f75cbd456b style: clang-tidy auto fixes (#1663)
Fixes #1662.

---------

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
Co-authored-by: Peter Chen <ychen@ripple.com>
2024-09-25 14:44:03 +01:00
Peter Chen
d189651821 fix: add no lint to ignore clang-tidy (#1660)
Fixes build for
[#1659](https://github.com/XRPLF/clio/actions/runs/10956058143/job/30421296417)
2024-09-25 14:44:02 +01:00
github-actions[bot]
3f791c1315 style: clang-tidy auto fixes (#1659)
Fixes #1658. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-09-25 14:44:02 +01:00
Peter Chen
418511332e chore: Revert Cassandra driver upgrade (#1656)
Reverts XRPLF/clio#1646
2024-09-25 14:44:02 +01:00
Peter Chen
e5a0477352 refactor: Clio Config (#1593)
Adds constraints and parses JSON into the Config.
Second part of the Clio Config refactor; the first PR is
[here](https://github.com/XRPLF/clio/pull/1544)

Steps left to implement:
- Replace all the places where we fetch config values (using config.valueOr/MaybeValue) to instead get them from the Config Definition
- Generate a markdown file from the Clio Config Description
2024-09-25 14:44:02 +01:00
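To make the "constraint + parse JSON" step above concrete, here is a hypothetical sketch of a constrained config value; all names are invented for the example and do not reflect Clio's real API.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical: a value that validates itself against a range constraint
// when populated from a JSON field.
struct ConstrainedPort {
    int value = 0;

    void parse(int jsonValue)  // stands in for reading a JSON field
    {
        if (jsonValue < 1 || jsonValue > 65535)
            throw std::invalid_argument("port out of range: " + std::to_string(jsonValue));
        value = jsonValue;
    }
};
```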
cyan317
3118110eb8 feat: add 'force_forward' field to request (#1647)
Fix #1141
2024-09-25 14:44:01 +01:00
Alex Kremer
6d20f39f67 feat: Delete-before support in data removal tool (#1649)
Fixes #1650
2024-09-25 14:44:01 +01:00
Peter Chen
9cb1e06c8e fix: Upgrade Cassandra driver (#1646)
Fixes #1296
2024-09-25 14:44:01 +01:00
Peter Chen
423244eb4b fix: pre-push tag (#1614)
Fixes an issue where git was verifying the incorrect tag.
2024-09-25 14:44:01 +01:00
cyan317
7aaba1cbad fix: no restriction on type field (#1644)
'type' should not matter if 'full' or 'accounts' is false. Relaxes the
restriction on 'type'.
2024-09-25 14:44:00 +01:00
cyan317
b7c50fd73d fix: Add more restrictions to admin fields (#1643) 2024-09-25 14:44:00 +01:00
dependabot[bot]
442ee874d5 ci: Bump peter-evans/create-pull-request from 6 to 7 (#1636)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 6 to 7.

Release notes (sourced from [peter-evans/create-pull-request's releases](https://github.com/peter-evans/create-pull-request/releases)):

Create Pull Request v7.0.0 — now supports commit signing with bot-generated tokens. ✍️🤖

Behaviour changes:
- Action input `git-token` has been renamed `branch-token`, to be more clear about its purpose. The `branch-token` is the token that the action will use to create and update the branch.
- The action now handles requests that have been rate-limited by GitHub. Requests hitting a primary rate limit will retry twice, for a total of three attempts. Requests hitting a secondary rate limit will not be retried.
- The `pull-request-operation` output now returns `none` when no operation was executed.
- Removed deprecated output environment variable `PULL_REQUEST_NUMBER`. Please use the `pull-request-number` action output instead.

What's new:
- The action can now sign commits as `github-actions[bot]` when using `GITHUB_TOKEN`, or your own bot when using [GitHub App tokens](https://github.com/peter-evans/create-pull-request/blob/HEAD/docs/concepts-guidelines.md#authenticating-with-github-app-generated-tokens). See [commit signing](https://github.com/peter-evans/create-pull-request/blob/HEAD/docs/concepts-guidelines.md#commit-signature-verification-for-bots) for details.
- Action input `draft` now accepts a new value `always-true`. This will set the pull request to draft status when the pull request is updated, as well as on creation.
- A new action input `maintainer-can-modify` indicates whether [maintainers can modify](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork) the pull request. The default is `true`, which retains the existing behaviour of the action.
- A new output `pull-request-commits-verified` returns `true` or `false`, indicating whether GitHub considers the signature of the branch's commits to be verified.

What's changed: routine dev-dependency bumps (`@types/node` 18.19.36 → 18.19.46, ts-jest 29.1.5 → 29.2.5, prettier 3.3.2 → 3.3.3, eslint-plugin-prettier 5.1.3 → 5.2.1, eslint-import-resolver-typescript 3.6.1 → 3.6.3, undici 6.19.2 → 6.19.8), distribution updates by [@actions-bot](https://github.com/actions-bot), and "v7 - signed commits" by [@peter-evans](https://github.com/peter-evans) in [peter-evans/create-pull-request#3057](https://redirect.github.com/peter-evans/create-pull-request/pull/3057). New contributor: [@rustycl0ck](https://github.com/rustycl0ck) in [#3057](https://redirect.github.com/peter-evans/create-pull-request/pull/3057).

Full changelog: https://github.com/peter-evans/create-pull-request/compare/v6.1.0...v7.0.0

Create Pull Request v6.1.0 — adds `pull-request-branch` as an action output. (Release notes truncated.)
Commits:
- `8867c4a` fix: handle ambiguous argument failure on diff stat ([#3312](https://redirect.github.com/peter-evans/create-pull-request/issues/3312))
- `6073f54` build(deps-dev): bump `@typescript-eslint/eslint-plugin` ([#3291](https://redirect.github.com/peter-evans/create-pull-request/issues/3291))
- `6d01b56` build(deps-dev): bump eslint-plugin-import from 2.29.1 to 2.30.0 ([#3290](https://redirect.github.com/peter-evans/create-pull-request/issues/3290))
- `25cf845` build(deps-dev): bump `@typescript-eslint/parser` from 7.17.0 to 7.18.0 ([#3289](https://redirect.github.com/peter-evans/create-pull-request/issues/3289))
- `d87b980` build(deps-dev): bump `@types/node` from 18.19.46 to 18.19.48 ([#3288](https://redirect.github.com/peter-evans/create-pull-request/issues/3288))
- `119d131` build(deps): bump peter-evans/create-pull-request from 6 to 7 ([#3283](https://redirect.github.com/peter-evans/create-pull-request/issues/3283))
- `73e6230` docs: update readme
- `c0348e8` ci: add v7 to workflow
- `4320041` feat: signed commits (v7) ([#3057](https://redirect.github.com/peter-evans/create-pull-request/issues/3057))
- `0c2a66f` build(deps-dev): bump ts-jest from 29.2.4 to 29.2.5 ([#3256](https://redirect.github.com/peter-evans/create-pull-request/issues/3256))
- Additional commits viewable in [compare view](https://github.com/peter-evans/create-pull-request/compare/v6...v7)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-25 14:44:00 +01:00
cyan317
0679034978 fix: Don't forward ledger API if 'full' is a string (#1640)
Fix #1635
2024-09-25 14:43:59 +01:00
github-actions[bot]
b41ea34212 style: clang-tidy auto fixes (#1639)
Fixes #1638. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-09-25 14:43:59 +01:00
Sergey Kuznetsov
4e147deafa fix: Subscription source bugs fix (#1626) (#1633)
Fixes #1620.
Cherry pick of #1626 into develop.

- Add timeouts for websocket operations on connections to rippled. Without these timeouts, if a connection hangs for some reason, Clio wouldn't know that it is hanging.
- Fix a potential data race in choosing the new subscription source that will forward messages to users.
- Optimise switching between subscription sources.
2024-09-25 14:43:59 +01:00
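For context, Boost.Beast (which Clio's websocket code is built on) ships a built-in timeout option covering exactly the "hanging connection" failure mode described above. A minimal sketch of enabling it; the duration values are chosen arbitrarily for illustration.

```cpp
#include <boost/beast.hpp>
#include <chrono>

namespace beast = boost::beast;
namespace websocket = boost::beast::websocket;

// Enable handshake/idle timeouts so a silently hanging peer is detected.
void configureTimeouts(websocket::stream<beast::tcp_stream>& ws)
{
    websocket::stream_base::timeout opt{};
    opt.handshake_timeout = std::chrono::seconds{30};  // assumed value
    opt.idle_timeout = std::chrono::seconds{30};       // assumed value
    opt.keep_alive_pings = true;  // ping so idle_timeout can detect hangs
    ws.set_option(opt);
}
```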
Sergey Kuznetsov
b08447e8e0 fix: Fix logging in SubscriptionSource (#1617) (#1632)
Fixes #1616. 
Cherry pick of #1617 into develop.
2024-09-25 14:43:59 +01:00
cyan317
9432165ace refactor: Remove SubscriptionManagerRunner (#1623) 2024-09-25 14:43:58 +01:00
github-actions[bot]
3b6a87249c style: clang-tidy auto fixes (#1631)
Fixes #1630. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-09-25 14:43:58 +01:00
Sergey Kuznetsov
b7449f72b7 test: Add test for WsConnection for ping response (#1619) 2024-09-25 14:43:58 +01:00
cyan317
443c74436e fix: not forward admin API (#1628) 2024-09-25 14:43:57 +01:00
Peter Chen
7b5e02731d fix: AccountNFT with invalid marker (#1589)
Fixes [#1497](https://github.com/XRPLF/clio/issues/1497)
Mimics the behavior of the [fix on Rippled
side](https://github.com/XRPLF/rippled/pull/5045)
2024-08-27 15:38:19 -04:00
638 changed files with 14828 additions and 37842 deletions

View File

@@ -8,7 +8,6 @@ Checks: '-*,
     bugprone-chained-comparison,
     bugprone-compare-pointer-to-member-virtual-function,
     bugprone-copy-constructor-init,
-    bugprone-crtp-constructor-accessibility,
     bugprone-dangling-handle,
     bugprone-dynamic-static-initializers,
     bugprone-empty-catch,
@@ -34,11 +33,9 @@ Checks: '-*,
     bugprone-non-zero-enum-to-bool-conversion,
     bugprone-optional-value-conversion,
     bugprone-parent-virtual-call,
-    bugprone-pointer-arithmetic-on-polymorphic-object,
     bugprone-posix-return,
     bugprone-redundant-branch-condition,
     bugprone-reserved-identifier,
-    bugprone-return-const-ref-from-parameter,
     bugprone-shared-ptr-array-mismatch,
     bugprone-signal-handler,
     bugprone-signed-char-misuse,
@@ -58,7 +55,6 @@ Checks: '-*,
     bugprone-suspicious-realloc-usage,
     bugprone-suspicious-semicolon,
     bugprone-suspicious-string-compare,
-    bugprone-suspicious-stringview-data-usage,
     bugprone-swapped-arguments,
     bugprone-switch-missing-default-case,
     bugprone-terminating-continue,
@@ -101,12 +97,10 @@ Checks: '-*,
     modernize-make-unique,
     modernize-pass-by-value,
     modernize-type-traits,
-    modernize-use-designated-initializers,
     modernize-use-emplace,
     modernize-use-equals-default,
     modernize-use-equals-delete,
     modernize-use-override,
-    modernize-use-ranges,
     modernize-use-starts-ends-with,
     modernize-use-std-numbers,
     modernize-use-using,
@@ -127,12 +121,9 @@ Checks: '-*,
     readability-convert-member-functions-to-static,
     readability-duplicate-include,
     readability-else-after-return,
-    readability-enum-initial-value,
     readability-implicit-bool-conversion,
     readability-inconsistent-declaration-parameter-name,
-    readability-identifier-naming,
     readability-make-member-function-const,
-    readability-math-missing-parentheses,
     readability-misleading-indentation,
     readability-non-const-parameter,
     readability-redundant-casting,
@@ -144,45 +135,11 @@ Checks: '-*,
     readability-simplify-boolean-expr,
     readability-static-accessed-through-instance,
     readability-static-definition-in-anonymous-namespace,
-    readability-suspicious-call-argument,
-    readability-use-std-min-max
+    readability-suspicious-call-argument
 '
 CheckOptions:
     readability-braces-around-statements.ShortStatementLines: 2
-    readability-identifier-naming.MacroDefinitionCase: UPPER_CASE
-    readability-identifier-naming.ClassCase: CamelCase
-    readability-identifier-naming.StructCase: CamelCase
-    readability-identifier-naming.UnionCase: CamelCase
-    readability-identifier-naming.EnumCase: CamelCase
-    readability-identifier-naming.EnumConstantCase: CamelCase
-    readability-identifier-naming.ScopedEnumConstantCase: CamelCase
-    readability-identifier-naming.GlobalConstantCase: UPPER_CASE
-    readability-identifier-naming.GlobalConstantPrefix: 'k'
-    readability-identifier-naming.GlobalVariableCase: CamelCase
-    readability-identifier-naming.GlobalVariablePrefix: 'g'
-    readability-identifier-naming.ConstexprFunctionCase: camelBack
-    readability-identifier-naming.ConstexprMethodCase: camelBack
-    readability-identifier-naming.ClassMethodCase: camelBack
-    readability-identifier-naming.ClassMemberCase: camelBack
-    readability-identifier-naming.ClassConstantCase: UPPER_CASE
-    readability-identifier-naming.ClassConstantPrefix: 'k'
-    readability-identifier-naming.StaticConstantCase: UPPER_CASE
-    readability-identifier-naming.StaticConstantPrefix: 'k'
-    readability-identifier-naming.StaticVariableCase: UPPER_CASE
-    readability-identifier-naming.StaticVariablePrefix: 'k'
-    readability-identifier-naming.ConstexprVariableCase: UPPER_CASE
-    readability-identifier-naming.ConstexprVariablePrefix: 'k'
-    readability-identifier-naming.LocalConstantCase: camelBack
-    readability-identifier-naming.LocalVariableCase: camelBack
-    readability-identifier-naming.TemplateParameterCase: CamelCase
-    readability-identifier-naming.ParameterCase: camelBack
-    readability-identifier-naming.FunctionCase: camelBack
-    readability-identifier-naming.MemberCase: camelBack
-    readability-identifier-naming.PrivateMemberSuffix: _
-    readability-identifier-naming.ProtectedMemberSuffix: _
-    readability-identifier-naming.PublicMemberSuffix: ''
-    readability-identifier-naming.FunctionIgnoredRegexp: '.*tag_invoke.*'
     bugprone-unsafe-functions.ReportMoreUnsafeFunctions: true
     bugprone-unused-return-value.CheckedReturnTypes: ::std::error_code;::std::error_condition;::std::errc
     misc-include-cleaner.IgnoreHeaders: '.*/(detail|impl)/.*;.*(expected|unexpected).*;.*ranges_lower_bound\.h;time.h;stdlib.h'

View File

@@ -4,18 +4,12 @@ inputs:
   target:
     description: Build target name
     default: all
-  substract_threads:
-    description: An option for the action get_number_of_threads. See get_number_of_threads
-    required: true
-    default: '0'
 runs:
   using: composite
   steps:
     - name: Get number of threads
       uses: ./.github/actions/get_number_of_threads
       id: number_of_threads
-      with:
-        substract_threads: ${{ inputs.substract_threads }}
     - name: Build Clio
       shell: bash

View File

@@ -12,10 +12,6 @@ inputs:
     description: Build type for third-party libraries and clio. Could be 'Release', 'Debug'
     required: true
     default: 'Release'
-  build_integration_tests:
-    description: Whether to build integration tests
-    required: true
-    default: 'true'
   code_coverage:
     description: Whether conan's coverage option should be on or not
     required: true
@@ -24,10 +20,6 @@
     description: Whether Clio is to be statically linked
     required: true
     default: 'false'
-  sanitizer:
-    description: Sanitizer to use
-    required: true
-    default: 'false' # false, tsan, asan or ubsan
 runs:
   using: composite
   steps:
@@ -41,20 +33,14 @@
         BUILD_OPTION: "${{ inputs.conan_cache_hit == 'true' && 'missing' || '' }}"
         CODE_COVERAGE: "${{ inputs.code_coverage == 'true' && 'True' || 'False' }}"
         STATIC_OPTION: "${{ inputs.static == 'true' && 'True' || 'False' }}"
-        INTEGRATION_TESTS_OPTION: "${{ inputs.build_integration_tests == 'true' && 'True' || 'False' }}"
       run: |
         cd build
-        conan install .. -of . -b $BUILD_OPTION -s build_type=${{ inputs.build_type }} -o clio:static="${STATIC_OPTION}" -o clio:tests=True -o clio:integration_tests="${INTEGRATION_TESTS_OPTION}" -o clio:lint=False -o clio:coverage="${CODE_COVERAGE}" --profile ${{ inputs.conan_profile }}
+        conan install .. -of . -b $BUILD_OPTION -s build_type=${{ inputs.build_type }} -o clio:static="${STATIC_OPTION}" -o clio:tests=True -o clio:integration_tests=True -o clio:lint=False -o clio:coverage="${CODE_COVERAGE}" --profile ${{ inputs.conan_profile }}
     - name: Run cmake
      shell: bash
      env:
        BUILD_TYPE: "${{ inputs.build_type }}"
-        SANITIZER_OPTION: |
-          ${{ inputs.sanitizer == 'tsan' && '-Dsan=thread' ||
-          inputs.sanitizer == 'ubsan' && '-Dsan=undefined' ||
-          inputs.sanitizer == 'asan' && '-Dsan=address' ||
-          '' }}
       run: |
         cd build
-        cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE="${BUILD_TYPE}" ${SANITIZER_OPTION} .. -G Ninja
+        cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=${{ inputs.build_type }} ${{ inputs.extra_cmake_args }} .. -G Ninja

View File

@@ -1,10 +1,5 @@
 name: Get number of threads
 description: Determines number of threads to use on macOS and Linux
-inputs:
-  substract_threads:
-    description: How many threads to substract from the calculated number
-    required: true
-    default: '0'
 outputs:
   threads_number:
     description: Number of threads to use
@@ -24,11 +19,8 @@
       shell: bash
       run: echo "num=$(($(nproc) - 2))" >> $GITHUB_OUTPUT
-    - name: Shift and export number of threads
-      id: number_of_threads_export
+    - name: Export output variable
       shell: bash
+      id: number_of_threads_export
       run: |
-        num_of_threads=${{ steps.mac_threads.outputs.num || steps.linux_threads.outputs.num }}
-        shift_by=${{ inputs.substract_threads }}
-        shifted=$((num_of_threads - shift_by))
-        echo "num=$(( shifted > 1 ? shifted : 1 ))" >> $GITHUB_OUTPUT
+        echo "num=${{ steps.mac_threads.outputs.num || steps.linux_threads.outputs.num }}" >> $GITHUB_OUTPUT

View File

@@ -11,7 +11,7 @@ runs:
     if: ${{ runner.os == 'macOS' }}
     shell: bash
     run: |
-      brew install llvm@14 pkg-config ninja bison cmake ccache jq gh conan@1 ca-certificates
+      brew install llvm@14 pkg-config ninja bison cmake ccache jq gh conan@1
       echo "/opt/homebrew/opt/conan@1/bin" >> $GITHUB_PATH
   - name: Fix git permissions on Linux

View File

@@ -15,10 +15,10 @@ runs:
     if: ${{ runner.os == 'macOS' }}
     shell: bash
     env:
-      CONAN_PROFILE: apple_clang_16
+      CONAN_PROFILE: apple_clang_15
     id: conan_setup_mac
     run: |
-      echo "Creating $CONAN_PROFILE conan profile"
+      echo "Creating $CONAN_PROFILE conan profile";
       conan profile new $CONAN_PROFILE --detect --force
       conan profile update settings.compiler.libcxx=libc++ $CONAN_PROFILE
       conan profile update settings.compiler.cppstd=20 $CONAN_PROFILE

View File

@@ -1,45 +0,0 @@
-#!/bin/bash
-set -o pipefail
-# Note: This script is intended to be run from the root of the repository.
-#
-# This script runs each unit-test separately and generates reports from the currently active sanitizer.
-# Output is saved in ./.sanitizer-report in the root of the repository
-if [[ -z "$1" ]]; then
-    cat <<EOF
-ERROR
------------------------------------------------------------------------------
- Path to clio_tests should be passed as first argument to the script.
------------------------------------------------------------------------------
-EOF
-    exit 1
-fi
-TEST_BINARY=$1
-if [[ ! -f "$TEST_BINARY" ]]; then
-    echo "Test binary not found: $TEST_BINARY"
-    exit 1
-fi
-TESTS=$($TEST_BINARY --gtest_list_tests | awk '/^ / {print suite $1} !/^ / {suite=$1}')
-OUTPUT_DIR="./.sanitizer-report"
-mkdir -p "$OUTPUT_DIR"
-for TEST in $TESTS; do
-    OUTPUT_FILE="$OUTPUT_DIR/${TEST//\//_}"
-    export TSAN_OPTIONS="log_path=\"$OUTPUT_FILE\" die_after_fork=0"
-    export ASAN_OPTIONS="log_path=\"$OUTPUT_FILE\""
-    export UBSAN_OPTIONS="log_path=\"$OUTPUT_FILE\""
-    export MallocNanoZone='0' # for MacOSX
-    $TEST_BINARY --gtest_filter="$TEST" > /dev/null 2>&1
-    if [ $? -ne 0 ]; then
-        echo "'$TEST' failed a sanitizer check."
-    fi
-done

View File

@@ -9,7 +9,7 @@ on:
 jobs:
   check_format:
     name: Check format
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
     container:
       image: rippleci/clio_ci:latest
     steps:
@@ -26,7 +26,7 @@ jobs:
   check_docs:
     name: Check documentation
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
     container:
       image: rippleci/clio_ci:latest
     steps:
@@ -47,44 +47,133 @@ jobs:
       matrix:
         include:
           - os: heavy
-            conan_profile: gcc
+            container:
+              image: rippleci/clio_ci:latest
             build_type: Release
-            container: '{ "image": "rippleci/clio_ci:latest" }'
+            conan_profile: gcc
             code_coverage: false
             static: true
           - os: heavy
-            conan_profile: gcc
+            container:
+              image: rippleci/clio_ci:latest
             build_type: Debug
-            container: '{ "image": "rippleci/clio_ci:latest" }'
+            conan_profile: gcc
            code_coverage: true
            static: true
           - os: heavy
-            conan_profile: clang
+            container:
+              image: rippleci/clio_ci:latest
             build_type: Release
-            container: '{ "image": "rippleci/clio_ci:latest" }'
+            conan_profile: clang
             code_coverage: false
             static: true
           - os: heavy
-            conan_profile: clang
+            container:
+              image: rippleci/clio_ci:latest
             build_type: Debug
-            container: '{ "image": "rippleci/clio_ci:latest" }'
+            conan_profile: clang
             code_coverage: false
             static: true
-          - os: macos15
+          - os: macos14
             build_type: Release
             code_coverage: false
             static: false
-    uses: ./.github/workflows/build_impl.yml
-    with:
-      runs_on: ${{ matrix.os }}
+    runs-on: [self-hosted, "${{ matrix.os }}"]
     container: ${{ matrix.container }}
+    steps:
+      - name: Clean workdir
+        if: ${{ runner.os == 'macOS' }}
+        uses: kuznetsss/workspace-cleanup@1.0
+      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+      - name: Prepare runner
+        uses: ./.github/actions/prepare_runner
+        with:
+          disable_ccache: false
+      - name: Setup conan
+        uses: ./.github/actions/setup_conan
+        id: conan
+        with:
           conan_profile: ${{ matrix.conan_profile }}
+      - name: Restore cache
+        uses: ./.github/actions/restore_cache
+        id: restore_cache
+        with:
+          conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
+          conan_profile: ${{ steps.conan.outputs.conan_profile }}
+          ccache_dir: ${{ env.CCACHE_DIR }}
+          build_type: ${{ matrix.build_type }}
+          code_coverage: ${{ matrix.code_coverage }}
+      - name: Run conan and cmake
+        uses: ./.github/actions/generate
+        with:
+          conan_profile: ${{ steps.conan.outputs.conan_profile }}
+          conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
           build_type: ${{ matrix.build_type }}
           code_coverage: ${{ matrix.code_coverage }}
           static: ${{ matrix.static }}
-      unit_tests: true
-      integration_tests: true
-      clio_server: true
+      - name: Build Clio
+        uses: ./.github/actions/build_clio
+      - name: Show ccache's statistics
+        shell: bash
+        id: ccache_stats
+        run: |
+          ccache -s > /tmp/ccache.stats
+          miss_rate=$(cat /tmp/ccache.stats | grep 'Misses' | head -n1 | sed 's/.*(\(.*\)%).*/\1/')
+          echo "miss_rate=${miss_rate}" >> $GITHUB_OUTPUT
+          cat /tmp/ccache.stats
+      - name: Strip tests
+        if: ${{ !matrix.code_coverage }}
+        run: strip build/clio_tests && strip build/clio_integration_tests
+      - name: Upload clio_server
+        uses: actions/upload-artifact@v4
+        with:
+          name: clio_server_${{ runner.os }}_${{ matrix.build_type }}_${{ steps.conan.outputs.conan_profile }}
+          path: build/clio_server
+      - name: Upload clio_tests
+        if: ${{ !matrix.code_coverage }}
+        uses: actions/upload-artifact@v4
+        with:
+          name: clio_tests_${{ runner.os }}_${{ matrix.build_type }}_${{ steps.conan.outputs.conan_profile }}
+          path: build/clio_*tests
+      - name: Save cache
+        uses: ./.github/actions/save_cache
+        with:
+          conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
+          conan_hash: ${{ steps.restore_cache.outputs.conan_hash }}
+          conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
+          ccache_dir: ${{ env.CCACHE_DIR }}
+          ccache_cache_hit: ${{ steps.restore_cache.outputs.ccache_cache_hit }}
+          ccache_cache_miss_rate: ${{ steps.ccache_stats.outputs.miss_rate }}
+          build_type: ${{ matrix.build_type }}
+          code_coverage: ${{ matrix.code_coverage }}
+          conan_profile: ${{ steps.conan.outputs.conan_profile }}
+      # TODO: This is not a part of build process but it is the easiest way to do it here.
+      # It will be refactored in https://github.com/XRPLF/clio/issues/1075
+      - name: Run code coverage
+        if: ${{ matrix.code_coverage }}
+        uses: ./.github/actions/code_coverage
+  upload_coverage_report:
+    name: Codecov
+    needs: build
+    uses: ./.github/workflows/upload_coverage_report.yml
+    secrets:
+      CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
   test:
     name: Run Tests
@@ -94,24 +183,24 @@
       matrix:
         include:
           - os: heavy
+            container:
+              image: rippleci/clio_ci:latest
             conan_profile: gcc
             build_type: Release
+          - os: heavy
             container:
               image: rippleci/clio_ci:latest
-          - os: heavy
             conan_profile: clang
             build_type: Release
+          - os: heavy
             container:
               image: rippleci/clio_ci:latest
-          - os: heavy
             conan_profile: clang
             build_type: Debug
-            container:
-              image: rippleci/clio_ci:latest
-          - os: macos15
-            conan_profile: apple_clang_16
+          - os: macos14
+            conan_profile: apple_clang_15
             build_type: Release
-    runs-on: ${{ matrix.os }}
+    runs-on: [self-hosted, "${{ matrix.os }}"]
     container: ${{ matrix.container }}
     steps:
@@ -127,44 +216,3 @@
       run: |
         chmod +x ./clio_tests
         ./clio_tests
-  check_config:
-    name: Check Config Description
-    needs: build
-    runs-on: heavy
-    container:
-      image: rippleci/clio_ci:latest
-    steps:
-      - uses: actions/checkout@v4
-      - uses: actions/download-artifact@v4
-        with:
-          name: clio_server_Linux_Release_gcc
-      - name: Compare Config Description
-        shell: bash
-        run: |
-          repoConfigFile=docs/config-description.md
-          if ! [ -f ${repoConfigFile} ]; then
-            echo "Config Description markdown file is missing in docs folder"
-            exit 1
-          fi
-          chmod +x ./clio_server
-          configDescriptionFile=config_description_new.md
-          ./clio_server -d ${configDescriptionFile}
-          configDescriptionHash=$(sha256sum ${configDescriptionFile} | cut -d' ' -f1)
-          repoConfigHash=$(sha256sum ${repoConfigFile} | cut -d' ' -f1)
-          if [ ${configDescriptionHash} != ${repoConfigHash} ]; then
-            echo "Markdown file is not up to date"
-            diff -u "${repoConfigFile}" "${configDescriptionFile}"
-            rm -f ${configDescriptionFile}
-            exit 1
-          fi
-          rm -f ${configDescriptionFile}
-          exit 0

View File

@@ -40,7 +40,7 @@ on:
 jobs:
   build_and_publish_image:
     name: Build and publish image
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
     steps:
       - uses: actions/checkout@v4

View File

@@ -1,192 +0,0 @@
-name: Reusable build
-on:
-  workflow_call:
-    inputs:
-      runs_on:
-        description: Runner to run the job on
-        required: true
-        type: string
-        default: heavy
-      container:
-        description: "The container object as a JSON string (leave empty to run natively)"
-        required: true
-        type: string
-        default: ""
-      conan_profile:
-        description: Conan profile to use
-        required: true
-        type: string
-      build_type:
-        description: Build type
-        required: true
-        type: string
-      disable_cache:
-        description: Whether ccache and conan cache should be disabled
-        required: false
-        type: boolean
-        default: false
-      code_coverage:
-        description: Whether to enable code coverage
-        required: true
-        type: boolean
-        default: false
-      static:
-        description: Whether to build static binaries
-        required: true
-        type: boolean
-        default: true
-      unit_tests:
-        description: Whether to run unit tests
-        required: true
-        type: boolean
-        default: false
-      integration_tests:
-        description: Whether to run integration tests
-        required: true
-        type: boolean
-        default: false
-      clio_server:
-        description: Whether to build clio_server
-        required: true
-        type: boolean
-        default: true
-      target:
-        description: Build target name
-        required: false
-        type: string
-        default: all
-      sanitizer:
-        description: Sanitizer to use
-        required: false
-        type: string
-        default: 'false'
-jobs:
-  build:
-    name: Build ${{ inputs.container != '' && 'in container' || 'natively' }}
-    runs-on: ${{ inputs.runs_on }}
-    container: ${{ inputs.container != '' && fromJson(inputs.container) || null }}
-    steps:
-      - name: Clean workdir
-        if: ${{ runner.os == 'macOS' }}
-        uses: kuznetsss/workspace-cleanup@1.0
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 0
-      - name: Prepare runner
-        uses: ./.github/actions/prepare_runner
-        with:
-          disable_ccache: ${{ inputs.disable_cache }}
-      - name: Setup conan
-        uses: ./.github/actions/setup_conan
-        id: conan
-        with:
-          conan_profile: ${{ inputs.conan_profile }}
-      - name: Restore cache
-        if: ${{ !inputs.disable_cache }}
-        uses: ./.github/actions/restore_cache
-        id: restore_cache
-        with:
-          conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
-          conan_profile: ${{ steps.conan.outputs.conan_profile }}
-          ccache_dir: ${{ env.CCACHE_DIR }}
-          build_type: ${{ inputs.build_type }}
-          code_coverage: ${{ inputs.code_coverage }}
-      - name: Run conan and cmake
-        uses: ./.github/actions/generate
-        with:
-          conan_profile: ${{ steps.conan.outputs.conan_profile }}
-          conan_cache_hit: ${{ !inputs.disable_cache && steps.restore_cache.outputs.conan_cache_hit }}
-          build_type: ${{ inputs.build_type }}
-          code_coverage: ${{ inputs.code_coverage }}
-          static: ${{ inputs.static }}
-          sanitizer: ${{ inputs.sanitizer }}
-      - name: Build Clio
-        uses: ./.github/actions/build_clio
-        with:
-          target: ${{ inputs.target }}
-      - name: Show ccache's statistics
-        if: ${{ !inputs.disable_cache }}
-        shell: bash
-        id: ccache_stats
-        run: |
-          ccache -s > /tmp/ccache.stats
-          miss_rate=$(cat /tmp/ccache.stats | grep 'Misses' | head -n1 | sed 's/.*(\(.*\)%).*/\1/')
-          echo "miss_rate=${miss_rate}" >> $GITHUB_OUTPUT
-          cat /tmp/ccache.stats
-      - name: Strip unit_tests
-        if: ${{ inputs.unit_tests && !inputs.code_coverage && inputs.sanitizer == 'false' }}
-        run: strip build/clio_tests
-      - name: Strip integration_tests
-        if: ${{ inputs.integration_tests && !inputs.code_coverage }}
-        run: strip build/clio_integration_tests
-      - name: Upload clio_server
-        if: ${{ inputs.clio_server }}
-        uses: actions/upload-artifact@v4
-        with:
-          name: clio_server_${{ runner.os }}_${{ inputs.build_type }}_${{ steps.conan.outputs.conan_profile }}
-          path: build/clio_server
-      - name: Upload clio_tests
-        if: ${{ inputs.unit_tests && !inputs.code_coverage }}
-        uses: actions/upload-artifact@v4
-        with:
-          name: clio_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ steps.conan.outputs.conan_profile }}
-          path: build/clio_tests
-      - name: Upload clio_integration_tests
-        if: ${{ inputs.integration_tests && !inputs.code_coverage }}
-        uses: actions/upload-artifact@v4
-        with:
-          name: clio_integration_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ steps.conan.outputs.conan_profile }}
-          path: build/clio_integration_tests
-      - name: Save cache
-        if: ${{ !inputs.disable_cache && github.ref == 'refs/heads/develop' }}
-        uses: ./.github/actions/save_cache
-        with:
-          conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
-          conan_hash: ${{ steps.restore_cache.outputs.conan_hash }}
-          conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
-          ccache_dir: ${{ env.CCACHE_DIR }}
-          ccache_cache_hit: ${{ steps.restore_cache.outputs.ccache_cache_hit }}
-          ccache_cache_miss_rate: ${{ steps.ccache_stats.outputs.miss_rate }}
-          build_type: ${{ inputs.build_type }}
-          code_coverage: ${{ inputs.code_coverage }}
-          conan_profile: ${{ steps.conan.outputs.conan_profile }}
-      # TODO: This is not a part of build process but it is the easiest way to do it here.
-      # It will be refactored in https://github.com/XRPLF/clio/issues/1075
-      - name: Run code coverage
-        if: ${{ inputs.code_coverage }}
-        uses: ./.github/actions/code_coverage
-  upload_coverage_report:
-    if: ${{ inputs.code_coverage }}
-    name: Codecov
-    needs: build
-    uses: ./.github/workflows/upload_coverage_report.yml
-    secrets:
-      CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}


@@ -71,7 +71,7 @@ jobs:
     name: Create an issue on failure
     needs: [build, run_tests]
     if: ${{ always() && contains(needs.*.result, 'failure') }}
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
     permissions:
       contents: write
       issues: write


@@ -6,7 +6,7 @@ on:
 jobs:
   check_title:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
     # permissions:
     #   pull-requests: write
     steps:


@@ -1,7 +1,7 @@
 name: Clang-tidy check
 on:
   schedule:
-    - cron: "0 9 * * 1-5"
+    - cron: "0 6 * * 1-5"
   workflow_dispatch:
   pull_request:
     branches: [develop]
@@ -12,7 +12,7 @@ on:
 jobs:
   clang_tidy:
-    runs-on: heavy
+    runs-on: [self-hosted, Linux]
     container:
       image: rippleci/clio_ci:latest
     permissions:


@@ -6,7 +6,7 @@ on:
 jobs:
   restart_clang_tidy:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
     permissions:
       actions: write


@@ -18,7 +18,7 @@ jobs:
     environment:
       name: github-pages
       url: ${{ steps.deployment.outputs.page_url }}
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
     continue-on-error: true
     container:
       image: rippleci/clio_ci:latest


@@ -1,7 +1,7 @@
 name: Nightly release
 on:
   schedule:
-    - cron: '0 8 * * 1-5'
+    - cron: '0 5 * * 1-5'
   workflow_dispatch:
   pull_request:
     paths:
@@ -15,29 +15,74 @@ jobs:
       fail-fast: false
       matrix:
         include:
-          - os: macos15
+          - os: macos14
             build_type: Release
             static: false
           - os: heavy
             build_type: Release
             static: true
-            container: '{ "image": "rippleci/clio_ci:latest" }'
+            container:
+              image: rippleci/clio_ci:latest
           - os: heavy
             build_type: Debug
             static: true
-            container: '{ "image": "rippleci/clio_ci:latest" }'
-    uses: ./.github/workflows/build_impl.yml
-    with:
-      runs_on: ${{ matrix.os }}
-      container: ${{ matrix.container }}
-      conan_profile: gcc
-      build_type: ${{ matrix.build_type }}
-      code_coverage: false
-      static: ${{ matrix.static }}
-      unit_tests: true
-      integration_tests: true
-      clio_server: true
-      disable_cache: true
+            container:
+              image: rippleci/clio_ci:latest
+    runs-on: [self-hosted, "${{ matrix.os }}"]
+    container: ${{ matrix.container }}
+    steps:
+      - name: Clean workdir
+        if: ${{ runner.os == 'macOS' }}
+        uses: kuznetsss/workspace-cleanup@1.0
+      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+      - name: Prepare runner
+        uses: ./.github/actions/prepare_runner
+        with:
+          disable_ccache: true
+      - name: Setup conan
+        uses: ./.github/actions/setup_conan
+        id: conan
+        with:
+          conan_profile: gcc
+      - name: Run conan and cmake
+        uses: ./.github/actions/generate
+        with:
+          conan_profile: ${{ steps.conan.outputs.conan_profile }}
+          conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
+          build_type: ${{ matrix.build_type }}
+          code_coverage: false
+          static: ${{ matrix.static }}
+      - name: Build Clio
+        uses: ./.github/actions/build_clio
+      - name: Strip tests
+        run: strip build/clio_tests && strip build/clio_integration_tests
+      - name: Upload clio_tests
+        uses: actions/upload-artifact@v4
+        with:
+          name: clio_tests_${{ runner.os }}_${{ matrix.build_type }}
+          path: build/clio_*tests
+      - name: Compress clio_server
+        shell: bash
+        run: |
+          cd build
+          tar czf ./clio_server_${{ runner.os }}_${{ matrix.build_type }}.tar.gz ./clio_server
+      - name: Upload clio_server
+        uses: actions/upload-artifact@v4
+        with:
+          name: clio_server_${{ runner.os }}_${{ matrix.build_type }}
+          path: build/clio_server_${{ runner.os }}_${{ matrix.build_type }}.tar.gz

   run_tests:
     needs: build
@@ -45,18 +90,15 @@ jobs:
       fail-fast: false
       matrix:
         include:
-          - os: macos15
-            conan_profile: apple_clang_16
+          - os: macos14
            build_type: Release
             integration_tests: false
           - os: heavy
-            conan_profile: gcc
             build_type: Release
             container:
               image: rippleci/clio_ci:latest
             integration_tests: true
           - os: heavy
-            conan_profile: gcc
             build_type: Debug
             container:
               image: rippleci/clio_ci:latest
@@ -80,17 +122,13 @@ jobs:
       - uses: actions/download-artifact@v4
         with:
-          name: clio_tests_${{ runner.os }}_${{ matrix.build_type }}_${{ matrix.conan_profile }}
+          name: clio_tests_${{ runner.os }}_${{ matrix.build_type }}
       - name: Run clio_tests
         run: |
           chmod +x ./clio_tests
           ./clio_tests
-      - uses: actions/download-artifact@v4
-        with:
-          name: clio_integration_tests_${{ runner.os }}_${{ matrix.build_type }}_${{ matrix.conan_profile }}
       # To be enabled back once docker in mac runner arrives
       # https://github.com/XRPLF/clio/issues/1400
       - name: Run clio_integration_tests
@@ -102,7 +140,7 @@ jobs:
   nightly_release:
     if: ${{ github.event_name != 'pull_request' }}
     needs: run_tests
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
     env:
       GH_REPO: ${{ github.repository }}
       GH_TOKEN: ${{ github.token }}
@@ -151,14 +189,14 @@ jobs:
           tags: |
             type=raw,value=nightly
             type=raw,value=${{ github.sha }}
-          artifact_name: clio_server_Linux_Release_gcc
+          artifact_name: clio_server_Linux_Release
           strip_binary: true
           publish_image: ${{ github.event_name != 'pull_request' }}
   create_issue_on_failure:
     needs: [build, run_tests, nightly_release, build_and_publish_docker_image]
     if: ${{ always() && contains(needs.*.result, 'failure') && github.event_name != 'pull_request' }}
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
     permissions:
       contents: write
       issues: write


@@ -1,106 +0,0 @@
name: Run tests with sanitizers
on:
  schedule:
    - cron: "0 4 * * 1-5"
  workflow_dispatch:
  pull_request:
    paths:
      - '.github/workflows/sanitizers.yml'

jobs:
  build:
    name: Build clio tests
    strategy:
      fail-fast: false
      matrix:
        include:
          - sanitizer: tsan
            compiler: gcc
          - sanitizer: asan
            compiler: gcc
          # - sanitizer: ubsan # todo: enable when heavy runners are available
          #   compiler: gcc
    uses: ./.github/workflows/build_impl.yml
    with:
      runs_on: ubuntu-latest # todo: change to heavy
      container: '{ "image": "rippleci/clio_ci:latest" }'
      disable_cache: true
      conan_profile: ${{ matrix.compiler }}.${{ matrix.sanitizer }}
      build_type: Release
      code_coverage: false
      static: false
      unit_tests: true
      integration_tests: false
      clio_server: false
      target: clio_tests
      sanitizer: ${{ matrix.sanitizer }}

  # consider combining this with the previous matrix instead
  run_tests:
    needs: build
    strategy:
      fail-fast: false
      matrix:
        include:
          - sanitizer: tsan
            compiler: gcc
          - sanitizer: asan
            compiler: gcc
          # - sanitizer: ubsan # todo: enable when heavy runners are available
          #   compiler: gcc
    runs-on: ubuntu-latest # todo: change to heavy
    container:
      image: rippleci/clio_ci:latest
    permissions:
      contents: write
      issues: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/download-artifact@v4
        with:
          name: clio_tests_${{ runner.os }}_Release_${{ matrix.compiler }}.${{ matrix.sanitizer }}
      - name: Run clio_tests [${{ matrix.compiler }} / ${{ matrix.sanitizer }}]
        shell: bash
        run: |
          chmod +x ./clio_tests
          ./.github/scripts/execute-tests-under-sanitizer ./clio_tests
      - name: Check for sanitizer report
        shell: bash
        id: check_report
        run: |
          if ls .sanitizer-report/* 1> /dev/null 2>&1; then
            echo "found_report=true" >> $GITHUB_OUTPUT
          else
            echo "found_report=false" >> $GITHUB_OUTPUT
          fi
      - name: Upload report
        if: ${{ steps.check_report.outputs.found_report == 'true' }}
        uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.compiler }}_${{ matrix.sanitizer }}_report
          path: .sanitizer-report/*
          include-hidden-files: true

      #
      # todo: enable when we have fixed all currently existing issues from sanitizers
      #
      # - name: Create an issue
      #   if: ${{ steps.check_report.outputs.found_report == 'true' }}
      #   uses: ./.github/actions/create_issue
      #   env:
      #     GH_TOKEN: ${{ github.token }}
      #   with:
      #     labels: 'bug'
      #     title: '[${{ matrix.sanitizer }}/${{ matrix.compiler }}] reported issues'
      #     body: >
      #       Clio tests failed one or more sanitizer checks when built with `${{ matrix.compiler }}`.
      #       Workflow: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}/
      #       Reports are available as artifacts.
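The deleted workflow's test run can be reproduced by hand. A minimal sketch, using the script and report directory referenced above (running inside the `rippleci/clio_ci` container, and having built `clio_tests` with a sanitizer profile, are assumptions):

```sh
# Sketch: run the unit tests under the configured sanitizer, as the workflow above did
chmod +x ./clio_tests
./.github/scripts/execute-tests-under-sanitizer ./clio_tests

# Findings, if any, land in the hidden .sanitizer-report/ directory (uploaded as an artifact in CI)
ls .sanitizer-report/ 2>/dev/null || echo "no sanitizer reports found"
```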


@@ -9,7 +9,7 @@ on:
 jobs:
   upload_report:
     name: Upload report
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
     steps:
       - uses: actions/checkout@v4
         with:
@@ -23,7 +23,7 @@ jobs:
       - name: Upload coverage report
         if: ${{ hashFiles('build/coverage_report.xml') != '' }}
-        uses: wandalen/wretry.action@v3.7.3
+        uses: wandalen/wretry.action@v3.7.2
         with:
           action: codecov/codecov-action@v4
           with: |

.gitignore

@@ -6,7 +6,6 @@
 .vscode
 .python-version
 .DS_Store
-.sanitizer-report
 CMakeUserPresets.json
 config.json
 src/util/build/Build.cpp


@@ -16,8 +16,6 @@ option(coverage "Build test coverage report" FALSE)
 option(packaging "Create distribution packages" FALSE)
 option(lint "Run clang-tidy checks during compilation" FALSE)
 option(static "Statically linked Clio" FALSE)
-option(snapshot "Build snapshot tool" FALSE)
 # ========================================================================== #
 set(san "" CACHE STRING "Add sanitizer instrumentation")
 set(CMAKE_EXPORT_COMPILE_COMMANDS TRUE)
@@ -67,21 +65,15 @@ endif ()
 # Enable selected sanitizer if enabled via `san`
 if (san)
-  set(SUPPORTED_SANITIZERS "address" "thread" "memory" "undefined")
-  list(FIND SUPPORTED_SANITIZERS "${san}" INDEX)
-  if (INDEX EQUAL -1)
-    message(FATAL_ERROR "Error: Unsupported sanitizer '${san}'. Supported values are: ${SUPPORTED_SANITIZERS}.")
-  endif ()
   target_compile_options(
-    clio_options INTERFACE # Sanitizers recommend minimum of -O1 for reasonable performance
+    clio PUBLIC # Sanitizers recommend minimum of -O1 for reasonable performance
     $<$<CONFIG:Debug>:-O1> ${SAN_FLAG} -fno-omit-frame-pointer
   )
   target_compile_definitions(
-    clio_options INTERFACE $<$<STREQUAL:${san},address>:SANITIZER=ASAN> $<$<STREQUAL:${san},thread>:SANITIZER=TSAN>
+    clio PUBLIC $<$<STREQUAL:${san},address>:SANITIZER=ASAN> $<$<STREQUAL:${san},thread>:SANITIZER=TSAN>
     $<$<STREQUAL:${san},memory>:SANITIZER=MSAN> $<$<STREQUAL:${san},undefined>:SANITIZER=UBSAN>
   )
-  target_link_libraries(clio_options INTERFACE ${SAN_FLAG} ${SAN_LIB})
+  target_link_libraries(clio INTERFACE ${SAN_FLAG} ${SAN_LIB})
 endif ()

 # Generate `docs` target for doxygen documentation if enabled Note: use `make docs` to generate the documentation
@@ -93,7 +85,3 @@ include(install/install)
 if (packaging)
   include(cmake/packaging.cmake) # This file exists only in build runner
 endif ()
-if (snapshot)
-  add_subdirectory(tools/snapshot)
-endif ()
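For context on the hunk above: a sanitizer-instrumented build is driven by the `san` cache variable (accepted values, per the validation branch, are `address`, `thread`, `memory`, and `undefined`) together with a matching conan profile. A minimal sketch, where the profile name and directory layout are assumptions rather than something this diff prescribes:

```sh
# Sketch: configure an AddressSanitizer build of Clio
conan install .. --output-folder . --build missing --settings build_type=Release --profile gcc.asan
cmake -DCMAKE_BUILD_TYPE=Release -Dsan=address ..
cmake --build . --parallel
```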


@@ -28,6 +28,7 @@ Below are some useful docs to learn more about Clio.
 **For Developers**:

 - [How to build Clio](./docs/build-clio.md)
+- [Metrics and static analysis](./docs/metrics-and-static-analysis.md)
 - [Coverage report](./docs/coverage-report.md)

 **For Operators**:


@@ -188,10 +188,10 @@ public:
     static auto
     generateData()
     {
-        constexpr auto kTOTAL = 10'000;
+        constexpr auto TOTAL = 10'000;
         std::vector<uint64_t> data;
-        data.reserve(kTOTAL);
-        for (auto i = 0; i < kTOTAL; ++i)
+        data.reserve(TOTAL);
+        for (auto i = 0; i < TOTAL; ++i)
             data.push_back(util::Random::uniform(1, 100'000'000));

         return data;
@@ -208,7 +208,7 @@ benchmarkThreads(benchmark::State& state)
 }

 template <typename CtxType>
-static void
+void
 benchmarkExecutionContextBatched(benchmark::State& state)
 {
     auto data = generateData();
@@ -219,7 +219,7 @@ benchmarkExecutionContextBatched(benchmark::State& state)
 }

 template <typename CtxType>
-static void
+void
 benchmarkAnyExecutionContextBatched(benchmark::State& state)
 {
     auto data = generateData();


@@ -23,19 +23,19 @@
 namespace util::build {

-static constexpr char versionString[] = "@CLIO_VERSION@";  // NOLINT(readability-identifier-naming)
+static constexpr char versionString[] = "@CLIO_VERSION@";

 std::string const&
 getClioVersionString()
 {
-    static std::string const value = versionString;  // NOLINT(readability-identifier-naming)
+    static std::string const value = versionString;
     return value;
 }

 std::string const&
 getClioFullVersionString()
 {
-    static std::string const value = "clio-" + getClioVersionString();  // NOLINT(readability-identifier-naming)
+    static std::string const value = "clio-" + getClioVersionString();
     return value;
 }


@@ -39,34 +39,6 @@ if (is_appleclang)
   list(APPEND COMPILER_FLAGS -Wreorder-init-list)
 endif ()

-if (san)
-  # When building with sanitizers some compilers will actually produce extra warnings/errors. We don't want this yet, at
-  # least not until we have fixed all runtime issues reported by the sanitizers. Once that is done we can start removing
-  # some of these and trying to fix it in our codebase. We can never remove all of below because most of them are
-  # reported from deep inside libraries like boost or libxrpl.
-  #
-  # TODO: Address in https://github.com/XRPLF/clio/issues/1885
-  list(
-    APPEND
-    COMPILER_FLAGS
-    -Wno-error=tsan # Disables treating TSAN warnings as errors
-    -Wno-tsan # Disables TSAN warnings (thread-safety analysis)
-    -Wno-uninitialized # Disables warnings about uninitialized variables (AddressSanitizer, UndefinedBehaviorSanitizer, etc.)
-    -Wno-stringop-overflow # Disables warnings about potential string operation overflows (AddressSanitizer)
-    -Wno-unsafe-buffer-usage # Disables warnings about unsafe memory operations (AddressSanitizer)
-    -Wno-frame-larger-than # Disables warnings about stack frame size being too large (AddressSanitizer)
-    -Wno-unused-function # Disables warnings about unused functions (LeakSanitizer, memory-related issues)
-    -Wno-unused-but-set-variable # Disables warnings about unused variables (MemorySanitizer)
-    -Wno-thread-safety-analysis # Disables warnings related to thread safety usage (ThreadSanitizer)
-    -Wno-thread-safety # Disables warnings related to thread safety usage (ThreadSanitizer)
-    -Wno-sign-compare # Disables warnings about signed/unsigned comparison (UndefinedBehaviorSanitizer)
-    -Wno-nonnull # Disables warnings related to null pointer dereferencing (UndefinedBehaviorSanitizer)
-    -Wno-address # Disables warnings about address-related issues (UndefinedBehaviorSanitizer)
-    -Wno-array-bounds # Disables array bounds checks (UndefinedBehaviorSanitizer)
-  )
-endif ()

 # See https://github.com/cpp-best-practices/cppbestpractices/blob/master/02-Use_the_Tools_Available.md#gcc--clang for
 # the flags description


@@ -1,11 +1,3 @@
if ("${san}" STREQUAL "") target_compile_definitions(clio_options INTERFACE BOOST_STACKTRACE_LINK)
target_compile_definitions(clio_options INTERFACE BOOST_STACKTRACE_LINK) target_compile_definitions(clio_options INTERFACE BOOST_STACKTRACE_USE_BACKTRACE)
target_compile_definitions(clio_options INTERFACE BOOST_STACKTRACE_USE_BACKTRACE) find_package(libbacktrace REQUIRED CONFIG)
find_package(libbacktrace REQUIRED CONFIG)
else ()
# Some sanitizers (TSAN and ASAN for sure) can't be used with libbacktrace because they have their own backtracing
# capabilities and there are conflicts. In any case, this makes sure Clio code knows that backtrace is not available.
# See relevant conan profiles for sanitizers where we disable stacktrace in Boost explicitly.
target_compile_definitions(clio_options INTERFACE CLIO_WITHOUT_STACKTRACE)
message(STATUS "Sanitizer enabled, disabling stacktrace")
endif ()


@@ -19,7 +19,6 @@ class Clio(ConanFile):
         'packaging': [True, False],  # create distribution packages
         'coverage': [True, False],  # build for test coverage report; create custom target `clio_tests-ccov`
         'lint': [True, False],  # run clang-tidy checks during compilation
-        'snapshot': [True, False],  # build export/import snapshot tool
     }

     requires = [
@@ -29,7 +28,7 @@ class Clio(ConanFile):
         'protobuf/3.21.9',
         'grpc/1.50.1',
         'openssl/1.1.1u',
-        'xrpl/2.4.0-rc4',
+        'xrpl/2.3.0',
         'zlib/1.3.1',
         'libbacktrace/cci.20210118'
     ]
@@ -45,7 +44,6 @@ class Clio(ConanFile):
         'coverage': False,
         'lint': False,
         'docs': False,
-        'snapshot': False,

         'xrpl/*:tests': False,
         'xrpl/*:rocksdb': False,
@@ -94,7 +92,6 @@ class Clio(ConanFile):
         tc.variables['docs'] = self.options.docs
         tc.variables['packaging'] = self.options.packaging
         tc.variables['benchmark'] = self.options.benchmark
-        tc.variables['snapshot'] = self.options.snapshot
         tc.generate()

     def build(self):


@@ -4,13 +4,12 @@ This image contains an environment to build [Clio](https://github.com/XRPLF/clio)
 It is used in [Clio Github Actions](https://github.com/XRPLF/clio/actions) but can also be used to compile Clio locally.

 The image is based on Ubuntu 20.04 and contains:

-- clang 16.0.6
+- clang 16
 - gcc 12.3
-- doxygen 1.12
+- doxygen 1.10
 - gh 2.40
-- ccache 4.10.2
-- conan 1.62
+- ccache 4.8.3
+- conan
 - and some other useful tools

-Conan is set up to build Clio without any additional steps. There are two preset conan profiles: `clang` and `gcc` to use corresponding compiler. By default conan is setup to use `gcc`.
-Sanitizer builds for `ASAN`, `TSAN` and `UBSAN` are enabled via conan profiles for each of the supported compilers. These can be selected using the following pattern (all lowercase): `[compiler].[sanitizer]` (e.g. `--profile gcc.tsan`).
+Conan is set up to build Clio without any additional steps. There are two preset conan profiles: `clang` and `gcc` to use corresponding compiler.
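A hedged example of the workflow the README describes: building inside the CI image and switching from the default `gcc` profile to `clang`. The mount path and build directory below are assumptions, not taken from the README:

```sh
# Sketch: compile Clio inside the rippleci/clio_ci image using the clang profile
docker run -it --rm -v "$PWD":/clio rippleci/clio_ci:latest bash
cd /clio && mkdir -p build && cd build
conan install .. --output-folder . --build missing --settings build_type=Release --profile clang
```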


@@ -1,9 +0,0 @@
include(clang)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=address\" linkflags=\"-fsanitize=address\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=address"
CXXFLAGS="-fsanitize=address"
LDFLAGS="-fsanitize=address"


@@ -1,9 +0,0 @@
include(clang)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=thread\" linkflags=\"-fsanitize=thread\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=thread"
CXXFLAGS="-fsanitize=thread"
LDFLAGS="-fsanitize=thread"


@@ -1,9 +0,0 @@
include(clang)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=undefined\" linkflags=\"-fsanitize=undefined\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=undefined"
CXXFLAGS="-fsanitize=undefined"
LDFLAGS="-fsanitize=undefined"


@@ -1,9 +0,0 @@
include(gcc)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=address\" linkflags=\"-fsanitize=address\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=address"
CXXFLAGS="-fsanitize=address"
LDFLAGS="-fsanitize=address"


@@ -1,9 +0,0 @@
include(gcc)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=thread\" linkflags=\"-fsanitize=thread\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=thread"
CXXFLAGS="-fsanitize=thread"
LDFLAGS="-fsanitize=thread"


@@ -1,9 +0,0 @@
include(gcc)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=undefined\" linkflags=\"-fsanitize=undefined\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=undefined"
CXXFLAGS="-fsanitize=undefined"
LDFLAGS="-fsanitize=undefined"


@@ -98,10 +98,3 @@ RUN conan profile new clang --detect \
   && conan profile update "conf.tools.build:compiler_executables={\"c\": \"/usr/bin/clang-16\", \"cpp\": \"/usr/bin/clang++-16\"}" clang

 RUN echo "include(gcc)" >> .conan/profiles/default
-
-COPY conan/gcc.asan /root/.conan/profiles
-COPY conan/gcc.tsan /root/.conan/profiles
-COPY conan/gcc.ubsan /root/.conan/profiles
-COPY conan/clang.asan /root/.conan/profiles
-COPY conan/clang.tsan /root/.conan/profiles
-COPY conan/clang.ubsan /root/.conan/profiles


@@ -181,20 +181,3 @@ Sometimes, during development, you need to build against a custom version of `libxrpl`.
 4. Build Clio as you would have before.
    See [Building Clio](#building-clio) for details.
-
-## Using `clang-tidy` for static analysis
-
-The minimum [clang-tidy](https://clang.llvm.org/extra/clang-tidy/) version required is 19.0.
-
-Clang-tidy can be run by CMake when building the project. To achieve this, you just need to provide the option `-o lint=True` for the `conan install` command:
-
-```sh
-conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True -o lint=True
-```
-
-By default, CMake will try to find `clang-tidy` automatically on your system.
-To force CMake to use your desired binary, set the `CLIO_CLANG_TIDY_BIN` environment variable to the path of the `clang-tidy` binary. For example:
-
-```sh
-export CLIO_CLANG_TIDY_BIN=/opt/homebrew/opt/llvm@19/bin/clang-tidy
-```


@@ -1,452 +0,0 @@
# Clio Config Description
This file lists all Clio Configuration definitions in detail.
## Configuration Details
### Key: database.type
- **Required**: True
- **Type**: string
- **Default value**: cassandra
- **Constraints**: The value must be one of the following: `cassandra`
- **Description**: Type of database to use. We currently support Cassandra and ScyllaDB; both are configured via the `cassandra` type.
### Key: database.cassandra.contact_points
- **Required**: True
- **Type**: string
- **Default value**: localhost
- **Constraints**: None
- **Description**: A list of IP addresses or hostnames of the initial nodes (Cassandra/Scylladb cluster nodes) that the client will connect to when establishing a connection with the database. If you're running locally, it should be 'localhost' or 127.0.0.1
### Key: database.cassandra.secure_connect_bundle
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: Configuration file that contains the necessary security credentials and connection details for securely connecting to a Cassandra database cluster.
### Key: database.cassandra.port
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `65535`
- **Description**: Port number to connect to the database.
### Key: database.cassandra.keyspace
- **Required**: True
- **Type**: string
- **Default value**: clio
- **Constraints**: None
- **Description**: Keyspace to use for the database.
### Key: database.cassandra.replication_factor
- **Required**: True
- **Type**: int
- **Default value**: 3
- **Constraints**: The minimum value is `0`. The maximum value is `65535`
- **Description**: Number of replicated nodes for ScyllaDB. Visit this link for more details: https://university.scylladb.com/courses/scylla-essentials-overview/lessons/high-availability/topic/fault-tolerance-replication-factor/
### Key: database.cassandra.table_prefix
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: Prefix for Database table names.
### Key: database.cassandra.max_write_requests_outstanding
- **Required**: True
- **Type**: int
- **Default value**: 10000
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: Maximum number of outstanding write requests. Write requests are API calls that write to the database.
### Key: database.cassandra.max_read_requests_outstanding
- **Required**: True
- **Type**: int
- **Default value**: 100000
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: Maximum number of outstanding read requests, i.e. API calls that read from the database.
### Key: database.cassandra.threads
- **Required**: True
- **Type**: int
- **Default value**: The number of available CPU cores.
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: Number of threads that will be used for database operations.
### Key: database.cassandra.core_connections_per_host
- **Required**: True
- **Type**: int
- **Default value**: 1
- **Constraints**: The minimum value is `0`. The maximum value is `65535`
- **Description**: Number of core connections per host for Cassandra.
### Key: database.cassandra.queue_size_io
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `0`. The maximum value is `65535`
- **Description**: Queue size for I/O operations in Cassandra.
### Key: database.cassandra.write_batch_size
- **Required**: True
- **Type**: int
- **Default value**: 20
- **Constraints**: The minimum value is `0`. The maximum value is `65535`
- **Description**: Batch size for write operations in Cassandra.
### Key: database.cassandra.connect_timeout
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: The maximum amount of time in seconds the system will wait for a connection to be successfully established with the database.
### Key: database.cassandra.request_timeout
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: The maximum amount of time in seconds the system will wait for a request to be fetched from database.
### Key: database.cassandra.username
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: The username used for authenticating with the database.
### Key: database.cassandra.password
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: The password used for authenticating with the database.
### Key: database.cassandra.certfile
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: The path to the SSL/TLS certificate file used to establish a secure connection between the client and the Cassandra database.
### Key: allow_no_etl
- **Required**: True
- **Type**: boolean
- **Default value**: True
- **Constraints**: None
- **Description**: If True, no ETL nodes will run with Clio.
### Key: etl_sources.[].ip
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: The value must be a valid IP address
- **Description**: IP address of the ETL source.
### Key: etl_sources.[].ws_port
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `65535`
- **Description**: WebSocket port of the ETL source.
### Key: etl_sources.[].grpc_port
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `65535`
- **Description**: gRPC port of the ETL source.
### Key: forwarding.cache_timeout
- **Required**: True
- **Type**: double
- **Default value**: 0
- **Constraints**: The value must be a positive double number
- **Description**: Timeout duration for the forwarding cache used in Rippled communication.
### Key: forwarding.request_timeout
- **Required**: True
- **Type**: double
- **Default value**: 10
- **Constraints**: The value must be a positive double number
- **Description**: Timeout duration for the forwarding request used in Rippled communication.
### Key: rpc.cache_timeout
- **Required**: True
- **Type**: double
- **Default value**: 0
- **Constraints**: The value must be a positive double number
- **Description**: Timeout duration for RPC requests.
### Key: num_markers
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `256`
- **Description**: The number of markers, i.e. the number of coroutines used to download the initial ledger.
### Key: dos_guard.whitelist.[]
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: List of IP addresses to whitelist for DOS protection.
### Key: dos_guard.max_fetches
- **Required**: True
- **Type**: int
- **Default value**: 1000000
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: Maximum number of fetch operations allowed by DOS guard.
### Key: dos_guard.max_connections
- **Required**: True
- **Type**: int
- **Default value**: 20
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: Maximum number of concurrent connections allowed by DOS guard.
### Key: dos_guard.max_requests
- **Required**: True
- **Type**: int
- **Default value**: 20
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: Maximum number of requests allowed by DOS guard.
### Key: dos_guard.sweep_interval
- **Required**: True
- **Type**: double
- **Default value**: 1
- **Constraints**: The value must be a positive double number
- **Description**: Interval in seconds for DOS guard to sweep/clear its state.
### Key: workers
- **Required**: True
- **Type**: int
- **Default value**: The number of available CPU cores.
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: Number of threads to process RPC requests.
### Key: server.ip
- **Required**: True
- **Type**: string
- **Default value**: None
- **Constraints**: The value must be a valid IP address
- **Description**: IP address of the Clio HTTP server.
### Key: server.port
- **Required**: True
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `65535`
- **Description**: Port number of the Clio HTTP server.
### Key: server.max_queue_size
- **Required**: True
- **Type**: int
- **Default value**: 0
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: Maximum size of the server's request queue. Value of 0 is no limit.
### Key: server.local_admin
- **Required**: False
- **Type**: boolean
- **Default value**: None
- **Constraints**: None
- **Description**: Indicates if the server should run with admin privileges. Only one of local_admin or admin_password can be set.
### Key: server.admin_password
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: Password for Clio admin-only APIs. Only one of local_admin or admin_password can be set.
### Key: server.processing_policy
- **Required**: True
- **Type**: string
- **Default value**: parallel
- **Constraints**: The value must be one of the following: `parallel, sequent`
- **Description**: Could be "sequent" or "parallel". For the sequent policy, requests from a single client
connection are processed one by one, with the next request read only after the previous one is processed. For the parallel policy, Clio will accept
all requests and process them in parallel, sending a reply for each request as soon as it is ready.
### Key: server.parallel_requests_limit
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `0`. The maximum value is `65535`
- **Description**: Optional parameter, used only if processing_strategy `parallel`. It limits the number of requests for a single client connection that are processed in parallel. If not specified, the limit is infinite.
### Key: server.ws_max_sending_queue_size
- **Required**: True
- **Type**: int
- **Default value**: 1500
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: Maximum size of the websocket sending queue.
### Key: prometheus.enabled
- **Required**: True
- **Type**: boolean
- **Default value**: False
- **Constraints**: None
- **Description**: Enable or disable Prometheus metrics.
### Key: prometheus.compress_reply
- **Required**: True
- **Type**: boolean
- **Default value**: False
- **Constraints**: None
- **Description**: Enable or disable compression of Prometheus responses.
### Key: io_threads
- **Required**: True
- **Type**: int
- **Default value**: 2
- **Constraints**: The minimum value is `1`. The maximum value is `65535`
- **Description**: Number of I/O threads. Value cannot be less than 1
### Key: subscription_workers
- **Required**: True
- **Type**: int
- **Default value**: 1
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: The number of worker threads or processes that are responsible for managing and processing subscription-based tasks from rippled
### Key: graceful_period
- **Required**: True
- **Type**: double
- **Default value**: 10
- **Constraints**: The value must be a positive double number
- **Description**: Number of milliseconds server will wait to shutdown gracefully.
### Key: cache.num_diffs
- **Required**: True
- **Type**: int
- **Default value**: 32
- **Constraints**: The minimum value is `0`. The maximum value is `65535`
- **Description**: Number of diffs to cache. For more info, consult readme.md in etc
### Key: cache.num_markers
- **Required**: True
- **Type**: int
- **Default value**: 48
- **Constraints**: The minimum value is `0`. The maximum value is `65535`
- **Description**: Number of markers to cache.
### Key: cache.num_cursors_from_diff
- **Required**: True
- **Type**: int
- **Default value**: 0
- **Constraints**: The minimum value is `0`. The maximum value is `65535`
- **Description**: Number of cursors obtained from ledger diffs.
### Key: cache.num_cursors_from_account
- **Required**: True
- **Type**: int
- **Default value**: 0
- **Constraints**: The minimum value is `0`. The maximum value is `65535`
- **Description**: Number of cursors from an account.
### Key: cache.page_fetch_size
- **Required**: True
- **Type**: int
- **Default value**: 512
- **Constraints**: The minimum value is `0`. The maximum value is `65535`
- **Description**: Page fetch size for cache operations.
### Key: cache.load
- **Required**: True
- **Type**: string
- **Default value**: async
- **Constraints**: The value must be one of the following: `sync, async, none`
- **Description**: Cache loading strategy ('sync' or 'async').
### Key: log_channels.[].channel
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: The value must be one of the following: `General, WebServer, Backend, RPC, ETL, Subscriptions, Performance, Migration`
- **Description**: Name of the log channel.
### Key: log_channels.[].log_level
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: The value must be one of the following: `trace, debug, info, warning, error, fatal, count`
- **Description**: Log level for the specific log channel.
### Key: log_level
- **Required**: True
- **Type**: string
- **Default value**: info
- **Constraints**: The value must be one of the following: `trace, debug, info, warning, error, fatal, count`
- **Description**: General logging level of Clio. This level will be applied to all log channels that do not have an explicitly defined logging level.
### Key: log_format
- **Required**: True
- **Type**: string
- **Default value**: %TimeStamp% (%SourceLocation%) [%ThreadID%] %Channel%:%Severity% %Message%
- **Constraints**: None
- **Description**: Format string for log messages.
### Key: log_to_console
- **Required**: True
- **Type**: boolean
- **Default value**: True
- **Constraints**: None
- **Description**: Enable or disable logging to console.
### Key: log_directory
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: Directory path for log files.
### Key: log_rotation_size
- **Required**: True
- **Type**: int
- **Default value**: 2048
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`
- **Description**: Log rotation size in megabytes. When the log file reaches this particular size, a new log file starts.
### Key: log_directory_max_size
- **Required**: True
- **Type**: int
- **Default value**: 51200
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`
- **Description**: Maximum size of the log directory in megabytes.
### Key: log_rotation_hour_interval
- **Required**: True
- **Type**: int
- **Default value**: 12
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`
- **Description**: Interval in hours for log rotation. If the current log file reaches this value in logging, a new log file starts.
### Key: log_tag_style
- **Required**: True
- **Type**: string
- **Default value**: none
- **Constraints**: The value must be one of the following: `int, uint, null, none, uuid`
- **Description**: Style for log tags.
### Key: extractor_threads
- **Required**: True
- **Type**: int
- **Default value**: 1
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: Number of extractor threads.
### Key: read_only
- **Required**: True
- **Type**: boolean
- **Default value**: True
- **Constraints**: None
- **Description**: Indicates if the server should have read-only privileges.
### Key: txn_threshold
- **Required**: True
- **Type**: int
- **Default value**: 0
- **Constraints**: The minimum value is `0`. The maximum value is `65535`
- **Description**: Transaction threshold value.
### Key: start_sequence
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: Starting ledger index.
### Key: finish_sequence
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: Ending ledger index.
### Key: ssl_cert_file
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: Path to the SSL certificate file.
### Key: ssl_key_file
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: Path to the SSL key file.
### Key: api_version.default
- **Required**: True
- **Type**: int
- **Default value**: 1
- **Constraints**: The minimum value is `1`. The maximum value is `3`
- **Description**: Default API version Clio will run on.
### Key: api_version.min
- **Required**: True
- **Type**: int
- **Default value**: 1
- **Constraints**: The minimum value is `1`. The maximum value is `3`
- **Description**: Minimum API version.
### Key: api_version.max
- **Required**: True
- **Type**: int
- **Default value**: 3
- **Constraints**: The minimum value is `1`. The maximum value is `3`
- **Description**: Maximum API version.
### Key: migration.full_scan_threads
- **Required**: True
- **Type**: int
- **Default value**: 2
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: The number of threads used to scan the table.
### Key: migration.full_scan_jobs
- **Required**: True
- **Type**: int
- **Default value**: 4
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: The number of coroutines used to scan the table.
### Key: migration.cursors_per_job
- **Required**: True
- **Type**: int
- **Default value**: 100
- **Constraints**: The minimum value is `0`. The maximum value is `4294967295`
- **Description**: The number of cursors each coroutine will scan.
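This reference is generated rather than hand-written: on branches that still carry the option, it can be rebuilt with the `--config-description` flag registered in CliArgs.cpp (shown in a later diff in this compare). A sketch, with the binary path assumed:

```sh
# Sketch: regenerate this markdown reference from the clio_server binary
# (the --config-description flag exists only on branches that kept it)
./clio_server --config-description docs/config-description.md
```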


@@ -0,0 +1,43 @@
/*
 * This is an example configuration file. Please do not use without modifying to suit your needs.
 */
{
    "database": {
        "type": "cassandra",
        "cassandra": {
            // This option can be used to setup a secure connect bundle connection
            "secure_connect_bundle": "[path/to/zip. ignore if using contact_points]",
            // The following options are used only if using contact_points
            "contact_points": "[ip. ignore if using secure_connect_bundle]",
            "port": "[port. ignore if using secure_connect_bundle]",
            // Authentication settings
            "username": "[username, if any]",
            "password": "[password, if any]",
            // Other common settings
            "keyspace": "clio",
            "max_write_requests_outstanding": 25000,
            "max_read_requests_outstanding": 30000,
            "threads": 8
        }
    },
    "etl_sources": [
        {
            "ip": "[rippled ip]",
            "ws_port": "6006",
            "grpc_port": "50051"
        }
    ],
    "dos_guard": {
        "whitelist": [
            "127.0.0.1"
        ]
    },
    "server": {
        "ip": "0.0.0.0",
        "port": 8080
    },
    "log_level": "debug",
    "log_file": "./clio.log",
    "extractor_threads": 8,
    "read_only": false
}
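A copy of this file, adjusted to a real environment, can then be handed to Clio via the `--conf` flag registered in CliArgs.cpp (the binary location below is an assumption):

```sh
# Sketch: start Clio with a customized copy of the example config
./clio_server --conf ./example-config.json
```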


@@ -77,8 +77,7 @@
         // send a reply for each request whenever it is ready.
         "parallel_requests_limit": 10, // Optional parameter, used only if "processing_strategy" is "parallel". It limits the number of requests for one client connection processed in parallel. Infinite if not specified.
         // Max number of responses to queue up before sent successfully. If a client's waiting queue is too long, the server will close the connection.
-        "ws_max_sending_queue_size": 1500,
-        "__ng_web_server": false // Use ng web server. This is a temporary setting which will be deleted after switching to ng web server
+        "ws_max_sending_queue_size": 1500
     },
     // Time in seconds for graceful shutdown. Defaults to 10 seconds. Not fully implemented yet.
     "graceful_period": 10.0,


@@ -1,9 +1,5 @@
 # Example of clio monitoring infrastructure

-> [!WARNING]
-> This is only an example of a Grafana dashboard for Clio. It was created for demonstration purposes only and may contain errors.
-> The Clio team does not recommend relying on data from this dashboard or using it to monitor your Clio instances.

 This directory contains an example of docker based infrastructure to collect and visualise metrics from clio.
 The structure of the directory:


@@ -20,6 +20,7 @@
   "graphTooltip": 0,
   "id": 1,
   "links": [],
+  "liveNow": false,
   "panels": [
     {
       "datasource": {
@@ -78,7 +79,6 @@
         "graphMode": "area",
         "justifyMode": "auto",
         "orientation": "auto",
-        "percentChangeColorMode": "standard",
         "reduceOptions": {
           "calcs": [
             "lastNotNull"
@@ -90,7 +90,7 @@
         "textMode": "auto",
         "wideLayout": true
       },
-      "pluginVersion": "11.4.0",
+      "pluginVersion": "10.4.0",
       "targets": [
         {
           "datasource": {
@@ -159,7 +159,6 @@
         "graphMode": "area",
         "justifyMode": "auto",
         "orientation": "auto",
-        "percentChangeColorMode": "standard",
         "reduceOptions": {
           "calcs": [
             "lastNotNull"
@@ -171,7 +170,7 @@
         "textMode": "auto",
         "wideLayout": true
       },
-      "pluginVersion": "11.4.0",
+      "pluginVersion": "10.4.0",
       "targets": [
        {
           "datasource": {
@@ -244,7 +243,6 @@
         "graphMode": "area",
         "justifyMode": "auto",
         "orientation": "auto",
-        "percentChangeColorMode": "standard",
         "reduceOptions": {
           "calcs": [
             "lastNotNull"
@@ -256,7 +254,7 @@
         "textMode": "auto",
         "wideLayout": true
       },
-      "pluginVersion": "11.4.0",
+      "pluginVersion": "10.4.0",
       "targets": [
         {
           "datasource": {
@@ -329,7 +327,6 @@
         "graphMode": "area",
         "justifyMode": "auto",
         "orientation": "auto",
-        "percentChangeColorMode": "standard",
         "reduceOptions": {
           "calcs": [
             "lastNotNull"
@@ -341,7 +338,7 @@
         "textMode": "auto",
         "wideLayout": true
       },
-      "pluginVersion": "11.4.0",
+      "pluginVersion": "10.4.0",
       "targets": [
         {
           "datasource": {
@@ -376,7 +373,6 @@
           "axisLabel": "",
           "axisPlacement": "auto",
           "barAlignment": 0,
-          "barWidthFactor": 0.6,
           "drawStyle": "line",
           "fillOpacity": 0,
           "gradientMode": "none",
@@ -439,7 +435,6 @@
           "sort": "none"
         }
       },
-      "pluginVersion": "11.4.0",
       "targets": [
         {
           "datasource": {
@@ -496,7 +491,6 @@
           "axisLabel": "",
           "axisPlacement": "auto",
           "barAlignment": 0,
-          "barWidthFactor": 0.6,
           "drawStyle": "line",
           "fillOpacity": 0,
           "gradientMode": "none",
@@ -558,7 +552,6 @@
           "sort": "none"
         }
       },
-      "pluginVersion": "11.4.0",
       "targets": [
         {
           "datasource": {
@@ -593,7 +586,6 @@
           "axisLabel": "",
           "axisPlacement": "auto",
           "barAlignment": 0,
-          "barWidthFactor": 0.6,
           "drawStyle": "line",
           "fillOpacity": 0,
           "gradientMode": "none",
@@ -655,7 +647,6 @@
           "sort": "none"
         }
       },
-      "pluginVersion": "11.4.0",
       "targets": [
         {
           "datasource": {
@@ -690,7 +681,6 @@
           "axisLabel": "",
           "axisPlacement": "auto",
           "barAlignment": 0,
-          "barWidthFactor": 0.6,
           "drawStyle": "line",
           "fillOpacity": 0,
           "gradientMode": "none",
@@ -752,7 +742,6 @@
           "sort": "none"
         }
       },
-      "pluginVersion": "11.4.0",
       "targets": [
         {
           "datasource": {
@@ -787,7 +776,6 @@
           "axisLabel": "",
           "axisPlacement": "auto",
           "barAlignment": 0,
-          "barWidthFactor": 0.6,
           "drawStyle": "line",
           "fillOpacity": 0,
           "gradientMode": "none",
@@ -849,7 +837,6 @@
           "sort": "none"
         }
       },
-      "pluginVersion": "11.4.0",
       "targets": [
         {
           "datasource": {
@@ -885,7 +872,6 @@
           "axisLabel": "",
           "axisPlacement": "auto",
           "barAlignment": 0,
-          "barWidthFactor": 0.6,
           "drawStyle": "line",
           "fillOpacity": 0,
           "gradientMode": "none",
@@ -948,7 +934,6 @@
           "sort": "none"
         }
       },
-      "pluginVersion": "11.4.0",
       "targets": [
         {
           "datasource": {
@@ -956,7 +941,7 @@
             "uid": "PBFA97CFB590B2093"
           },
           "editorMode": "code",
-          "expr": "sum by (method) (increase(rpc_method_duration_us[$__interval]))\n / \n sum by (method,) (increase(rpc_method_total_number{status=\"finished\"}[$__interval]))",
+          "expr": "rpc_method_duration_us{job=\"clio\"}",
           "instant": false,
           "legendFormat": "{{method}}",
           "range": true,
@@ -983,7 +968,6 @@
           "axisLabel": "",
           "axisPlacement": "auto",
           "barAlignment": 0,
-          "barWidthFactor": 0.6,
           "drawStyle": "line",
           "fillOpacity": 0,
           "gradientMode": "none",
@@ -1045,7 +1029,6 @@
           "sort": "none"
         }
       },
-      "pluginVersion": "11.4.0",
       "targets": [
         {
           "datasource": {
@@ -1080,7 +1063,6 @@
           "axisLabel": "",
           "axisPlacement": "auto",
           "barAlignment": 0,
-          "barWidthFactor": 0.6,
           "drawStyle": "line",
           "fillOpacity": 0,
           "gradientMode": "none",
@@ -1142,7 +1124,6 @@
           "sort": "none"
         }
       },
-      "pluginVersion": "11.4.0",
       "targets": [
         {
           "datasource": {
@@ -1177,7 +1158,6 @@
           "axisLabel": "",
           "axisPlacement": "auto",
           "barAlignment": 0,
-          "barWidthFactor": 0.6,
           "drawStyle": "line",
           "fillOpacity": 10,
           "gradientMode": "none",
@@ -1243,7 +1223,7 @@
           "sort": "none"
         }
       },
-      "pluginVersion": "11.4.0",
+      "pluginVersion": "10.2.0",
       "targets": [
         {
           "datasource": {
@@ -1316,7 +1296,6 @@
           "axisLabel": "",
           "axisPlacement": "auto",
           "barAlignment": 0,
-          "barWidthFactor": 0.6,
           "drawStyle": "line",
           "fillOpacity": 0,
           "gradientMode": "none",
@@ -1378,7 +1357,6 @@
           "sort": "none"
         }
       },
-      "pluginVersion": "11.4.0",
       "targets": [
         {
           "datasource": {
@@ -1426,7 +1404,6 @@
           "axisLabel": "",
           "axisPlacement": "auto",
           "barAlignment": 0,
-          "barWidthFactor": 0.6,
           "drawStyle": "line",
           "fillOpacity": 0,
           "gradientMode": "none",
@@ -1488,7 +1465,6 @@
           "sort": "none"
         }
       },
-      "pluginVersion": "11.4.0",
       "targets": [
         {
           "datasource": {
@@ -1534,7 +1510,6 @@
           "axisLabel": "",
           "axisPlacement": "auto",
           "barAlignment": 0,
-          "barWidthFactor": 0.6,
           "drawStyle": "line",
           "fillOpacity": 0,
           "gradientMode": "none",
@@ -1597,7 +1572,6 @@
           "sort": "none"
         }
       },
-      "pluginVersion": "11.4.0",
       "targets": [
         {
           "datasource": {
@@ -1616,9 +1590,8 @@
       "type": "timeseries"
     }
   ],
-  "preload": false,
   "refresh": "5s",
-  "schemaVersion": 40,
+  "schemaVersion": 39,
   "tags": [],
   "templating": {
     "list": []


@@ -0,0 +1,30 @@
# Metrics and static analysis

## Prometheus metrics collection

Clio natively supports [Prometheus](https://prometheus.io/) metrics collection. It accepts Prometheus requests on the port configured in the `server` section of the config.

Prometheus metrics are enabled by default, and replies to `/metrics` are compressed. To disable compression and get human-readable metrics, add `"prometheus": { "enabled": true, "compress_reply": false }` to Clio's config.

To completely disable Prometheus metrics, add `"prometheus": { "enabled": false }` to Clio's config.

It is important to know that Clio responds to Prometheus requests only if they are admin requests. If you are using the admin password feature, the same password should be provided in the Authorization header of Prometheus requests.

You can find an example docker-compose file, with Prometheus and Grafana configs, in [examples/infrastructure](../docs/examples/infrastructure/).
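Assuming a local instance where the requesting client counts as admin (for example, `local_admin` enabled and the server on the example port), scraping by hand might look like this sketch; host and port are assumptions:

```sh
# Sketch: fetch Prometheus metrics from a locally running Clio as an admin client
curl -s http://localhost:8080/metrics
```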
## Using `clang-tidy` for static analysis

The minimum [clang-tidy](https://clang.llvm.org/extra/clang-tidy/) version required is 19.0.

Clang-tidy can be run by CMake when building the project. To achieve this, you just need to provide the option `-o lint=True` for the `conan install` command:

```sh
conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True -o lint=True
```

By default, CMake will try to find `clang-tidy` automatically on your system.
To force CMake to use your desired binary, set the `CLIO_CLANG_TIDY_BIN` environment variable to the path of the `clang-tidy` binary. For example:

```sh
export CLIO_CLANG_TIDY_BIN=/opt/homebrew/opt/llvm@19/bin/clang-tidy
```


@@ -80,15 +80,3 @@ Clio will fallback to hardcoded defaults when these values are not specified in
 > [!TIP]
 > See the [example-config.json](../docs/examples/config/example-config.json) for more details.
-
-## Prometheus metrics collection
-
-Clio natively supports [Prometheus](https://prometheus.io/) metrics collection. It accepts Prometheus requests on the port configured in the `server` section of the config.
-
-Prometheus metrics are enabled by default, and replies to `/metrics` are compressed. To disable compression and get human-readable metrics, add `"prometheus": { "enabled": true, "compress_reply": false }` to Clio's config.
-
-To completely disable Prometheus metrics, add `"prometheus": { "enabled": false }` to Clio's config.
-
-It is important to know that Clio responds to Prometheus requests only if they are admin requests. If you are using the admin password feature, the same password should be provided in the Authorization header of Prometheus requests.
-
-You can find an example docker-compose file, with Prometheus and Grafana configs, in [examples/infrastructure](../docs/examples/infrastructure/).

View File

@@ -5,6 +5,5 @@ add_subdirectory(etlng)
add_subdirectory(feed) add_subdirectory(feed)
add_subdirectory(rpc) add_subdirectory(rpc)
add_subdirectory(web) add_subdirectory(web)
add_subdirectory(migration)
add_subdirectory(app) add_subdirectory(app)
add_subdirectory(main) add_subdirectory(main)

View File

@@ -1,4 +1,4 @@
add_library(clio_app) add_library(clio_app)
target_sources(clio_app PRIVATE CliArgs.cpp ClioApplication.cpp Stopper.cpp WebHandlers.cpp) target_sources(clio_app PRIVATE CliArgs.cpp ClioApplication.cpp)
target_link_libraries(clio_app PUBLIC clio_etl clio_etlng clio_feed clio_web clio_rpc clio_migration) target_link_libraries(clio_app PUBLIC clio_etl clio_etlng clio_feed clio_web clio_rpc)

View File

@@ -19,9 +19,7 @@
#include "app/CliArgs.hpp" #include "app/CliArgs.hpp"
#include "migration/MigrationApplication.hpp"
#include "util/build/Build.hpp" #include "util/build/Build.hpp"
#include "util/newconfig/ConfigDescription.hpp"
#include <boost/program_options/options_description.hpp> #include <boost/program_options/options_description.hpp>
#include <boost/program_options/parsers.hpp> #include <boost/program_options/parsers.hpp>
@@ -30,7 +28,6 @@
#include <boost/program_options/variables_map.hpp> #include <boost/program_options/variables_map.hpp>
#include <cstdlib> #include <cstdlib>
#include <filesystem>
#include <iostream> #include <iostream>
#include <string> #include <string>
#include <utility> #include <utility>
@@ -44,13 +41,9 @@ CliArgs::parse(int argc, char const* argv[])
// clang-format off // clang-format off
po::options_description description("Options"); po::options_description description("Options");
description.add_options() description.add_options()
("help,h", "Print help message and exit") ("help,h", "print help message and exit")
("version,v", "Print version and exit") ("version,v", "print version and exit")
("conf,c", po::value<std::string>()->default_value(kDEFAULT_CONFIG_PATH), "Configuration file") ("conf,c", po::value<std::string>()->default_value(defaultConfigPath), "configuration file")
("ng-web-server,w", "Use ng-web-server")
("migrate", po::value<std::string>(), "Start migration helper")
("verify", "Checks the validity of config values")
("config-description,d", po::value<std::string>(), "Generate config description markdown file")
; ;
// clang-format on // clang-format on
po::positional_options_description positional; po::positional_options_description positional;
@@ -70,31 +63,8 @@ CliArgs::parse(int argc, char const* argv[])
return Action{Action::Exit{EXIT_SUCCESS}}; return Action{Action::Exit{EXIT_SUCCESS}};
} }
if (parsed.count("config-description") != 0u) {
std::filesystem::path const filePath = parsed["config-description"].as<std::string>();
auto const res = util::config::ClioConfigDescription::generateConfigDescriptionToFile(filePath);
if (res.has_value())
return Action{Action::Exit{EXIT_SUCCESS}};
std::cerr << res.error().error << std::endl;
return Action{Action::Exit{EXIT_FAILURE}};
}
auto configPath = parsed["conf"].as<std::string>(); auto configPath = parsed["conf"].as<std::string>();
return Action{Action::Run{std::move(configPath)}};
if (parsed.count("migrate") != 0u) {
auto const opt = parsed["migrate"].as<std::string>();
if (opt == "status")
return Action{Action::Migrate{.configPath = std::move(configPath), .subCmd = MigrateSubCmd::status()}};
return Action{Action::Migrate{.configPath = std::move(configPath), .subCmd = MigrateSubCmd::migration(opt)}};
}
if (parsed.count("verify") != 0u)
return Action{Action::VerifyConfig{.configPath = std::move(configPath)}};
return Action{Action::Run{.configPath = std::move(configPath), .useNgWebServer = parsed.count("ng-web-server") != 0}
};
} }
} // namespace app } // namespace app

View File

@@ -19,7 +19,6 @@
#pragma once #pragma once
#include "migration/MigrationApplication.hpp"
#include "util/OverloadSet.hpp" #include "util/OverloadSet.hpp"
#include <string> #include <string>
@@ -35,7 +34,7 @@ public:
/** /**
* @brief Default configuration path. * @brief Default configuration path.
*/ */
static constexpr char kDEFAULT_CONFIG_PATH[] = "/etc/opt/clio/config.json"; static constexpr char defaultConfigPath[] = "/etc/opt/clio/config.json";
/** /**
* @brief An action parsed from the command line. * @brief An action parsed from the command line.
@@ -44,24 +43,14 @@ public:
public: public:
/** @brief Run action. */ /** @brief Run action. */
struct Run { struct Run {
std::string configPath; ///< Configuration file path. /** @brief Configuration file path. */
bool useNgWebServer; ///< Whether to use a ng web server std::string configPath;
}; };
/** @brief Exit action. */ /** @brief Exit action. */
struct Exit { struct Exit {
int exitCode; ///< Exit code. /** @brief Exit code. */
}; int exitCode;
/** @brief Migration action. */
struct Migrate {
std::string configPath;
MigrateSubCmd subCmd;
};
/** @brief Verify Config action. */
struct VerifyConfig {
std::string configPath;
}; };
/** /**
@@ -70,8 +59,7 @@ public:
* @param action Run action. * @param action Run action.
*/ */
template <typename ActionType> template <typename ActionType>
requires std::is_same_v<ActionType, Run> or std::is_same_v<ActionType, Exit> or requires std::is_same_v<ActionType, Run> or std::is_same_v<ActionType, Exit>
std::is_same_v<ActionType, Migrate> or std::is_same_v<ActionType, VerifyConfig>
explicit Action(ActionType&& action) : action_(std::forward<ActionType>(action)) explicit Action(ActionType&& action) : action_(std::forward<ActionType>(action))
{ {
} }
@@ -91,7 +79,7 @@ public:
} }
private: private:
std::variant<Run, Exit, Migrate, VerifyConfig> action_; std::variant<Run, Exit> action_;
}; };
/** /**

View File

@@ -19,40 +19,32 @@
#include "app/ClioApplication.hpp" #include "app/ClioApplication.hpp"
#include "app/Stopper.hpp"
#include "app/WebHandlers.hpp"
#include "data/AmendmentCenter.hpp" #include "data/AmendmentCenter.hpp"
#include "data/BackendFactory.hpp" #include "data/BackendFactory.hpp"
#include "etl/ETLService.hpp" #include "etl/ETLService.hpp"
#include "etl/LoadBalancer.hpp" #include "etl/LoadBalancer.hpp"
#include "etl/NetworkValidatedLedgers.hpp" #include "etl/NetworkValidatedLedgers.hpp"
#include "feed/SubscriptionManager.hpp" #include "feed/SubscriptionManager.hpp"
#include "migration/MigrationInspectorFactory.hpp"
#include "rpc/Counters.hpp" #include "rpc/Counters.hpp"
#include "rpc/RPCEngine.hpp" #include "rpc/RPCEngine.hpp"
#include "rpc/WorkQueue.hpp" #include "rpc/WorkQueue.hpp"
#include "rpc/common/impl/HandlerProvider.hpp" #include "rpc/common/impl/HandlerProvider.hpp"
#include "util/build/Build.hpp" #include "util/build/Build.hpp"
#include "util/config/Config.hpp"
#include "util/log/Logger.hpp" #include "util/log/Logger.hpp"
#include "util/newconfig/ConfigDefinition.hpp"
#include "util/prometheus/Prometheus.hpp" #include "util/prometheus/Prometheus.hpp"
#include "web/AdminVerificationStrategy.hpp"
#include "web/RPCServerHandler.hpp" #include "web/RPCServerHandler.hpp"
#include "web/Server.hpp" #include "web/Server.hpp"
#include "web/dosguard/DOSGuard.hpp" #include "web/dosguard/DOSGuard.hpp"
#include "web/dosguard/IntervalSweepHandler.hpp" #include "web/dosguard/IntervalSweepHandler.hpp"
#include "web/dosguard/WhitelistHandler.hpp" #include "web/dosguard/WhitelistHandler.hpp"
#include "web/ng/RPCServerHandler.hpp"
#include "web/ng/Server.hpp"
#include <boost/asio/io_context.hpp> #include <boost/asio/io_context.hpp>
#include <cstdint> #include <cstdint>
#include <cstdlib> #include <cstdlib>
#include <memory> #include <memory>
#include <optional>
#include <thread> #include <thread>
#include <utility>
#include <vector> #include <vector>
namespace app { namespace app {
@@ -80,18 +72,20 @@ start(boost::asio::io_context& ioc, std::uint32_t numThreads)
} // namespace } // namespace
ClioApplication::ClioApplication(util::config::ClioConfigDefinition const& config) ClioApplication::ClioApplication(util::Config const& config) : config_(config), signalsHandler_{config_}
: config_(config), signalsHandler_{config_}
{ {
LOG(util::LogService::info()) << "Clio version: " << util::build::getClioFullVersionString(); LOG(util::LogService::info()) << "Clio version: " << util::build::getClioFullVersionString();
PrometheusService::init(config); PrometheusService::init(config);
signalsHandler_.subscribeToStop([this]() { appStopper_.stop(); });
} }
int int
ClioApplication::run(bool const useNgWebServer) ClioApplication::run()
{ {
auto const threads = config_.get<uint16_t>("io_threads"); auto const threads = config_.valueOr("io_threads", 2);
if (threads <= 0) {
LOG(util::LogService::fatal()) << "io_threads is less than 1";
return EXIT_FAILURE;
}
LOG(util::LogService::info()) << "Number of io threads = " << threads; LOG(util::LogService::info()) << "Number of io threads = " << threads;
// IO context to handle all incoming requests, as well as other things. // IO context to handle all incoming requests, as well as other things.
@@ -104,92 +98,38 @@ ClioApplication::run(bool const useNgWebServer)
auto sweepHandler = web::dosguard::IntervalSweepHandler{config_, ioc, dosGuard}; auto sweepHandler = web::dosguard::IntervalSweepHandler{config_, ioc, dosGuard};
// Interface to the database // Interface to the database
auto backend = data::makeBackend(config_); auto backend = data::make_Backend(config_);
auto const amendmentCenter = std::make_shared<data::AmendmentCenter const>(backend);
{
auto const migrationInspector = migration::makeMigrationInspector(config_, backend);
// Check if any migration is blocking Clio server starting.
if (migrationInspector->isBlockingClio() and backend->hardFetchLedgerRangeNoThrow()) {
LOG(util::LogService::error())
<< "Existing Migration is blocking Clio, Please complete the database migration first.";
return EXIT_FAILURE;
}
}
// Manages clients subscribed to streams // Manages clients subscribed to streams
auto subscriptions = feed::SubscriptionManager::makeSubscriptionManager(config_, backend, amendmentCenter); auto subscriptions = feed::SubscriptionManager::make_SubscriptionManager(config_, backend);
// Tracks which ledgers have been validated by the network // Tracks which ledgers have been validated by the network
auto ledgers = etl::NetworkValidatedLedgers::makeValidatedLedgers(); auto ledgers = etl::NetworkValidatedLedgers::make_ValidatedLedgers();
// Handles the connection to one or more rippled nodes. // Handles the connection to one or more rippled nodes.
// ETL uses the balancer to extract data. // ETL uses the balancer to extract data.
// The server uses the balancer to forward RPCs to a rippled node. // The server uses the balancer to forward RPCs to a rippled node.
// The balancer itself publishes to streams (transactions_proposed and accounts_proposed) // The balancer itself publishes to streams (transactions_proposed and accounts_proposed)
auto balancer = etl::LoadBalancer::makeLoadBalancer(config_, ioc, backend, subscriptions, ledgers); auto balancer = etl::LoadBalancer::make_LoadBalancer(config_, ioc, backend, subscriptions, ledgers);
// ETL is responsible for writing and publishing to streams. In read-only mode, ETL only publishes // ETL is responsible for writing and publishing to streams. In read-only mode, ETL only publishes
auto etl = etl::ETLService::makeETLService(config_, ioc, backend, subscriptions, balancer, ledgers); auto etl = etl::ETLService::make_ETLService(config_, ioc, backend, subscriptions, balancer, ledgers);
auto workQueue = rpc::WorkQueue::makeWorkQueue(config_);
auto counters = rpc::Counters::makeCounters(workQueue);
auto workQueue = rpc::WorkQueue::make_WorkQueue(config_);
auto counters = rpc::Counters::make_Counters(workQueue);
auto const amendmentCenter = std::make_shared<data::AmendmentCenter const>(backend);
auto const handlerProvider = std::make_shared<rpc::impl::ProductionHandlerProvider const>( auto const handlerProvider = std::make_shared<rpc::impl::ProductionHandlerProvider const>(
config_, backend, subscriptions, balancer, etl, amendmentCenter, counters config_, backend, subscriptions, balancer, etl, amendmentCenter, counters
); );
using RPCEngineType = rpc::RPCEngine<etl::LoadBalancer, rpc::Counters>; using RPCEngineType = rpc::RPCEngine<etl::LoadBalancer, rpc::Counters>;
auto const rpcEngine = auto const rpcEngine =
RPCEngineType::makeRPCEngine(config_, backend, balancer, dosGuard, workQueue, counters, handlerProvider); RPCEngineType::make_RPCEngine(config_, backend, balancer, dosGuard, workQueue, counters, handlerProvider);
if (useNgWebServer or config_.get<bool>("server.__ng_web_server")) {
web::ng::RPCServerHandler<RPCEngineType, etl::ETLService> handler{config_, backend, rpcEngine, etl};
auto expectedAdminVerifier = web::makeAdminVerificationStrategy(config_);
if (not expectedAdminVerifier.has_value()) {
LOG(util::LogService::error()) << "Error creating admin verifier: " << expectedAdminVerifier.error();
return EXIT_FAILURE;
}
auto const adminVerifier = std::move(expectedAdminVerifier).value();
auto httpServer = web::ng::makeServer(config_, OnConnectCheck{dosGuard}, DisconnectHook{dosGuard}, ioc);
if (not httpServer.has_value()) {
LOG(util::LogService::error()) << "Error creating web server: " << httpServer.error();
return EXIT_FAILURE;
}
httpServer->onGet("/metrics", MetricsHandler{adminVerifier});
httpServer->onGet("/health", HealthCheckHandler{});
auto requestHandler = RequestHandler{adminVerifier, handler, dosGuard};
httpServer->onPost("/", requestHandler);
httpServer->onWs(std::move(requestHandler));
auto const maybeError = httpServer->run();
if (maybeError.has_value()) {
LOG(util::LogService::error()) << "Error starting web server: " << *maybeError;
return EXIT_FAILURE;
}
appStopper_.setOnStop(
Stopper::makeOnStopCallback(httpServer.value(), *balancer, *etl, *subscriptions, *backend, ioc)
);
// Blocks until stopped.
// When stopped, shared_ptrs fall out of scope
// Calls destructors on all resources, and destructs in order
start(ioc, threads);
return EXIT_SUCCESS;
}
// Init the web server // Init the web server
auto handler = auto handler =
std::make_shared<web::RPCServerHandler<RPCEngineType, etl::ETLService>>(config_, backend, rpcEngine, etl); std::make_shared<web::RPCServerHandler<RPCEngineType, etl::ETLService>>(config_, backend, rpcEngine, etl);
auto const httpServer = web::make_HttpServer(config_, ioc, dosGuard, handler);
auto const httpServer = web::makeHttpServer(config_, ioc, dosGuard, handler);
// Blocks until stopped. // Blocks until stopped.
// When stopped, shared_ptrs fall out of scope // When stopped, shared_ptrs fall out of scope

View File

@@ -19,9 +19,8 @@
#pragma once #pragma once
#include "app/Stopper.hpp"
#include "util/SignalsHandler.hpp" #include "util/SignalsHandler.hpp"
#include "util/newconfig/ConfigDefinition.hpp" #include "util/config//Config.hpp"
namespace app { namespace app {
@@ -29,9 +28,8 @@ namespace app {
* @brief The main application class * @brief The main application class
*/ */
class ClioApplication { class ClioApplication {
util::config::ClioConfigDefinition const& config_; util::Config const& config_;
util::SignalsHandler signalsHandler_; util::SignalsHandler signalsHandler_;
Stopper appStopper_;
public: public:
/** /**
@@ -39,17 +37,15 @@ public:
* *
* @param config The configuration of the application * @param config The configuration of the application
*/ */
ClioApplication(util::config::ClioConfigDefinition const& config); ClioApplication(util::Config const& config);
/** /**
* @brief Run the application * @brief Run the application
* *
* @param useNgWebServer Whether to use the new web server
*
* @return exit code * @return exit code
*/ */
int int
run(bool useNgWebServer); run();
}; };
} // namespace app } // namespace app

View File

@@ -1,52 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "app/Stopper.hpp"
#include <boost/asio/spawn.hpp>
#include <functional>
#include <thread>
#include <utility>
namespace app {
Stopper::~Stopper()
{
if (worker_.joinable())
worker_.join();
}
void
Stopper::setOnStop(std::function<void(boost::asio::yield_context)> cb)
{
boost::asio::spawn(ctx_, std::move(cb));
}
void
Stopper::stop()
{
// Do nothing if worker_ is already running
if (worker_.joinable())
return;
worker_ = std::thread{[this]() { ctx_.run(); }};
}
} // namespace app

View File

@@ -1,118 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/BackendInterface.hpp"
#include "etl/ETLService.hpp"
#include "etl/LoadBalancer.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
#include "util/CoroutineGroup.hpp"
#include "util/log/Logger.hpp"
#include "web/ng/Server.hpp"
#include <boost/asio/executor_work_guard.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/spawn.hpp>
#include <functional>
#include <thread>
namespace app {
/**
* @brief Application stopper class. On stop it will create a new thread to run all the shutdown tasks.
*/
class Stopper {
boost::asio::io_context ctx_;
std::thread worker_;
public:
/**
* @brief Destroy the Stopper object
*/
~Stopper();
/**
* @brief Set the callback to be called when the application is stopped.
*
* @param cb The callback to be called on application stop.
*/
void
setOnStop(std::function<void(boost::asio::yield_context)> cb);
/**
* @brief Stop the application and run the shutdown tasks.
*/
void
stop();
/**
* @brief Create a callback to be called on application stop.
*
* @param server The server to stop.
* @param balancer The load balancer to stop.
* @param etl The ETL service to stop.
* @param subscriptions The subscription manager to stop.
* @param backend The backend to stop.
* @param ioc The io_context to stop.
* @return The callback to be called on application stop.
*/
template <
web::ng::SomeServer ServerType,
etl::SomeLoadBalancer LoadBalancerType,
etl::SomeETLService ETLServiceType>
static std::function<void(boost::asio::yield_context)>
makeOnStopCallback(
ServerType& server,
LoadBalancerType& balancer,
ETLServiceType& etl,
feed::SubscriptionManagerInterface& subscriptions,
data::BackendInterface& backend,
boost::asio::io_context& ioc
)
{
return [&](boost::asio::yield_context yield) {
util::CoroutineGroup coroutineGroup{yield};
coroutineGroup.spawn(yield, [&server](auto innerYield) {
server.stop(innerYield);
LOG(util::LogService::info()) << "Server stopped";
});
coroutineGroup.spawn(yield, [&balancer](auto innerYield) {
balancer.stop(innerYield);
LOG(util::LogService::info()) << "LoadBalancer stopped";
});
coroutineGroup.asyncWait(yield);
etl.stop();
LOG(util::LogService::info()) << "ETL stopped";
subscriptions.stop();
LOG(util::LogService::info()) << "SubscriptionManager stopped";
backend.waitForWritesToFinish();
LOG(util::LogService::info()) << "Backend writes finished";
ioc.stop();
LOG(util::LogService::info()) << "io_context stopped";
};
}
};
} // namespace app

View File

@@ -1,58 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "util/newconfig/ConfigDefinition.hpp"
#include "util/newconfig/ConfigFileJson.hpp"
#include <cstdlib>
#include <iostream>
#include <string_view>
namespace app {
/**
* @brief Verifies user's config values are correct
*
* @param configPath The path to config
* @return true if config values are all correct, false otherwise
*/
inline bool
parseConfig(std::string_view configPath)
{
using namespace util::config;
auto const json = ConfigFileJson::makeConfigFileJson(configPath);
if (!json.has_value()) {
std::cerr << "Error parsing json from config: " << configPath << "\n" << json.error().error << std::endl;
return false;
}
auto const errors = gClioConfig.parse(json.value());
if (errors.has_value()) {
for (auto const& err : errors.value()) {
std::cerr << "Issues found in provided config '" << configPath << "':\n";
std::cerr << err.error << std::endl;
}
return false;
}
return true;
}
} // namespace app

View File

@@ -1,111 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "app/WebHandlers.hpp"
#include "util/Assert.hpp"
#include "util/prometheus/Http.hpp"
#include "web/AdminVerificationStrategy.hpp"
#include "web/SubscriptionContextInterface.hpp"
#include "web/dosguard/DOSGuardInterface.hpp"
#include "web/ng/Connection.hpp"
#include "web/ng/Request.hpp"
#include "web/ng/Response.hpp"
#include <boost/asio/spawn.hpp>
#include <boost/beast/http/status.hpp>
#include <memory>
#include <optional>
#include <utility>
namespace app {
OnConnectCheck::OnConnectCheck(web::dosguard::DOSGuardInterface& dosguard) : dosguard_{dosguard}
{
}
std::expected<void, web::ng::Response>
OnConnectCheck::operator()(web::ng::Connection const& connection)
{
dosguard_.get().increment(connection.ip());
if (not dosguard_.get().isOk(connection.ip())) {
return std::unexpected{
web::ng::Response{boost::beast::http::status::too_many_requests, "Too many requests", connection}
};
}
return {};
}
DisconnectHook::DisconnectHook(web::dosguard::DOSGuardInterface& dosguard) : dosguard_{dosguard}
{
}
void
DisconnectHook::operator()(web::ng::Connection const& connection)
{
dosguard_.get().decrement(connection.ip());
}
MetricsHandler::MetricsHandler(std::shared_ptr<web::AdminVerificationStrategy> adminVerifier)
: adminVerifier_{std::move(adminVerifier)}
{
}
web::ng::Response
MetricsHandler::operator()(
web::ng::Request const& request,
web::ng::ConnectionMetadata& connectionMetadata,
web::SubscriptionContextPtr,
boost::asio::yield_context
)
{
auto const maybeHttpRequest = request.asHttpRequest();
ASSERT(maybeHttpRequest.has_value(), "Got not a http request in Get");
auto const& httpRequest = maybeHttpRequest->get();
// FIXME(#1702): Using web server thread to handle prometheus request. Better to post on work queue.
auto maybeResponse = util::prometheus::handlePrometheusRequest(
httpRequest, adminVerifier_->isAdmin(httpRequest, connectionMetadata.ip())
);
ASSERT(maybeResponse.has_value(), "Got unexpected request for Prometheus");
return web::ng::Response{std::move(maybeResponse).value(), request};
}
web::ng::Response
HealthCheckHandler::operator()(
web::ng::Request const& request,
web::ng::ConnectionMetadata&,
web::SubscriptionContextPtr,
boost::asio::yield_context
)
{
static auto constexpr kHEALTH_CHECK_HTML = R"html(
<!DOCTYPE html>
<html>
<head><title>Test page for Clio</title></head>
<body><h1>Clio Test</h1><p>This page shows Clio http(s) connectivity is working.</p></body>
</html>
)html";
return web::ng::Response{boost::beast::http::status::ok, kHEALTH_CHECK_HTML, request};
}
} // namespace app

View File

@@ -1,234 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "rpc/Errors.hpp"
#include "util/log/Logger.hpp"
#include "web/AdminVerificationStrategy.hpp"
#include "web/SubscriptionContextInterface.hpp"
#include "web/dosguard/DOSGuardInterface.hpp"
#include "web/ng/Connection.hpp"
#include "web/ng/Request.hpp"
#include "web/ng/Response.hpp"
#include <boost/asio/spawn.hpp>
#include <boost/beast/http/status.hpp>
#include <boost/json/array.hpp>
#include <boost/json/parse.hpp>
#include <exception>
#include <functional>
#include <memory>
#include <utility>
namespace app {
/**
* @brief A function object that checks if the connection is allowed to proceed.
*/
class OnConnectCheck {
std::reference_wrapper<web::dosguard::DOSGuardInterface> dosguard_;
public:
/**
* @brief Construct a new OnConnectCheck object
*
* @param dosguard The DOSGuardInterface to use for checking the connection.
*/
OnConnectCheck(web::dosguard::DOSGuardInterface& dosguard);
/**
* @brief Check if the connection is allowed to proceed.
*
* @param connection The connection to check.
* @return A response if the connection is not allowed to proceed or void otherwise.
*/
std::expected<void, web::ng::Response>
operator()(web::ng::Connection const& connection);
};
/**
* @brief A function object to be called when a connection is disconnected.
*/
class DisconnectHook {
std::reference_wrapper<web::dosguard::DOSGuardInterface> dosguard_;
public:
/**
* @brief Construct a new DisconnectHook object
*
* @param dosguard The DOSGuardInterface to use for disconnecting the connection.
*/
DisconnectHook(web::dosguard::DOSGuardInterface& dosguard);
/**
* @brief The call of the function object.
*
* @param connection The connection which has disconnected.
*/
void
operator()(web::ng::Connection const& connection);
};
/**
* @brief A function object that handles the metrics endpoint.
*/
class MetricsHandler {
std::shared_ptr<web::AdminVerificationStrategy> adminVerifier_;
public:
/**
* @brief Construct a new MetricsHandler object
*
* @param adminVerifier The AdminVerificationStrategy to use for verifying the connection for admin access.
*/
MetricsHandler(std::shared_ptr<web::AdminVerificationStrategy> adminVerifier);
/**
* @brief The call of the function object.
*
* @param request The request to handle.
* @param connectionMetadata The connection metadata.
* @return The response to the request.
*/
web::ng::Response
operator()(
web::ng::Request const& request,
web::ng::ConnectionMetadata& connectionMetadata,
web::SubscriptionContextPtr,
boost::asio::yield_context
);
};
/**
* @brief A function object that handles the health check endpoint.
*/
class HealthCheckHandler {
public:
/**
* @brief The call of the function object.
*
* @param request The request to handle.
* @return The response to the request
*/
web::ng::Response
operator()(
web::ng::Request const& request,
web::ng::ConnectionMetadata&,
web::SubscriptionContextPtr,
boost::asio::yield_context
);
};
/**
* @brief A function object that handles the websocket endpoint.
*
* @tparam RpcHandlerType The type of the RPC handler.
*/
template <typename RpcHandlerType>
class RequestHandler {
util::Logger webServerLog_{"WebServer"};
std::shared_ptr<web::AdminVerificationStrategy> adminVerifier_;
std::reference_wrapper<RpcHandlerType> rpcHandler_;
std::reference_wrapper<web::dosguard::DOSGuardInterface> dosguard_;
public:
/**
* @brief Construct a new RequestHandler object
*
* @param adminVerifier The AdminVerificationStrategy to use for verifying the connection for admin access.
* @param rpcHandler The RPC handler to use for handling the request.
* @param dosguard The DOSGuardInterface to use for checking the connection.
*/
RequestHandler(
std::shared_ptr<web::AdminVerificationStrategy> adminVerifier,
RpcHandlerType& rpcHandler,
web::dosguard::DOSGuardInterface& dosguard
)
: adminVerifier_(std::move(adminVerifier)), rpcHandler_(rpcHandler), dosguard_(dosguard)
{
}
/**
* @brief The call of the function object.
*
* @param request The request to handle.
* @param connectionMetadata The connection metadata.
* @param subscriptionContext The subscription context.
* @param yield The yield context.
* @return The response to the request.
*/
web::ng::Response
operator()(
web::ng::Request const& request,
web::ng::ConnectionMetadata& connectionMetadata,
web::SubscriptionContextPtr subscriptionContext,
boost::asio::yield_context yield
)
{
if (not dosguard_.get().request(connectionMetadata.ip())) {
auto error = rpc::makeError(rpc::RippledError::rpcSLOW_DOWN);
if (not request.isHttp()) {
try {
auto requestJson = boost::json::parse(request.message());
if (requestJson.is_object() && requestJson.as_object().contains("id"))
error["id"] = requestJson.as_object().at("id");
error["request"] = request.message();
} catch (std::exception const&) {
error["request"] = request.message();
}
}
return web::ng::Response{boost::beast::http::status::service_unavailable, error, request};
}
LOG(webServerLog_.info()) << connectionMetadata.tag()
<< "Received request from ip = " << connectionMetadata.ip()
<< " - posting to WorkQueue";
connectionMetadata.setIsAdmin([this, &request, &connectionMetadata]() {
return adminVerifier_->isAdmin(request.httpHeaders(), connectionMetadata.ip());
});
try {
auto response = rpcHandler_(request, connectionMetadata, std::move(subscriptionContext), yield);
if (not dosguard_.get().add(connectionMetadata.ip(), response.message().size())) {
auto jsonResponse = boost::json::parse(response.message()).as_object();
jsonResponse["warning"] = "load";
if (jsonResponse.contains("warnings") && jsonResponse["warnings"].is_array()) {
jsonResponse["warnings"].as_array().push_back(rpc::makeWarning(rpc::WarnRpcRateLimit));
} else {
jsonResponse["warnings"] = boost::json::array{rpc::makeWarning(rpc::WarnRpcRateLimit)};
}
response.setMessage(jsonResponse);
}
return response;
} catch (std::exception const&) {
return web::ng::Response{
boost::beast::http::status::internal_server_error,
rpc::makeError(rpc::RippledError::rpcINTERNAL),
request
};
}
}
};
} // namespace app

View File

@@ -50,10 +50,10 @@
namespace { namespace {
std::unordered_set<std::string>& std::unordered_set<std::string>&
supportedAmendments() SUPPORTED_AMENDMENTS()
{ {
static std::unordered_set<std::string> kAMENDMENTS = {}; static std::unordered_set<std::string> amendments = {};
return kAMENDMENTS; return amendments;
} }
bool bool
@@ -72,8 +72,8 @@ namespace impl {
WritingAmendmentKey::WritingAmendmentKey(std::string amendmentName) : AmendmentKey{std::move(amendmentName)} WritingAmendmentKey::WritingAmendmentKey(std::string amendmentName) : AmendmentKey{std::move(amendmentName)}
{ {
ASSERT(not supportedAmendments().contains(name), "Attempt to register the same amendment twice"); ASSERT(not SUPPORTED_AMENDMENTS().contains(name), "Attempt to register the same amendment twice");
supportedAmendments().insert(name); SUPPORTED_AMENDMENTS().insert(name);
} }
} // namespace impl } // namespace impl
@@ -90,7 +90,7 @@ AmendmentKey::operator std::string_view() const
AmendmentKey::operator ripple::uint256() const AmendmentKey::operator ripple::uint256() const
{ {
return Amendment::getAmendmentId(name); return Amendment::GetAmendmentId(name);
} }
AmendmentCenter::AmendmentCenter(std::shared_ptr<data::BackendInterface> const& backend) : backend_{backend} AmendmentCenter::AmendmentCenter(std::shared_ptr<data::BackendInterface> const& backend) : backend_{backend}
@@ -103,9 +103,9 @@ AmendmentCenter::AmendmentCenter(std::shared_ptr<data::BackendInterface> const&
auto const& [name, support] = p; auto const& [name, support] = p;
return Amendment{ return Amendment{
.name = name, .name = name,
.feature = Amendment::getAmendmentId(name), .feature = Amendment::GetAmendmentId(name),
.isSupportedByXRPL = support != ripple::AmendmentSupport::Unsupported, .isSupportedByXRPL = support != ripple::AmendmentSupport::Unsupported,
.isSupportedByClio = rg::find(supportedAmendments(), name) != rg::end(supportedAmendments()), .isSupportedByClio = rg::find(SUPPORTED_AMENDMENTS(), name) != rg::end(SUPPORTED_AMENDMENTS()),
.isRetired = support == ripple::AmendmentSupport::Retired .isRetired = support == ripple::AmendmentSupport::Retired
}; };
}), }),
@@ -180,7 +180,7 @@ AmendmentCenter::operator[](AmendmentKey const& key) const
} }
ripple::uint256 ripple::uint256
Amendment::getAmendmentId(std::string_view name) Amendment::GetAmendmentId(std::string_view name)
{ {
return ripple::sha512Half(ripple::Slice(name.data(), name.size())); return ripple::sha512Half(ripple::Slice(name.data(), name.size()));
} }

View File

@@ -67,7 +67,6 @@ struct Amendments {
// Most of the time it's going to be no changes at all. // Most of the time it's going to be no changes at all.
/** @cond */ /** @cond */
// NOLINTBEGIN(readability-identifier-naming)
REGISTER(OwnerPaysFee); REGISTER(OwnerPaysFee);
REGISTER(Flow); REGISTER(Flow);
REGISTER(FlowCross); REGISTER(FlowCross);
@@ -132,11 +131,6 @@ struct Amendments {
REGISTER(fixAMMv1_2); REGISTER(fixAMMv1_2);
REGISTER(AMMClawback); REGISTER(AMMClawback);
REGISTER(Credentials); REGISTER(Credentials);
REGISTER(DynamicNFT);
REGISTER(PermissionedDomains);
REGISTER(fixInvalidTxFlags);
REGISTER(fixFrozenLPTokenTransfer);
REGISTER(DeepFreeze);
// Obsolete but supported by libxrpl // Obsolete but supported by libxrpl
REGISTER(CryptoConditionsSuite); REGISTER(CryptoConditionsSuite);
@@ -160,7 +154,6 @@ struct Amendments {
REGISTER(fix1512); REGISTER(fix1512);
REGISTER(fix1523); REGISTER(fix1523);
REGISTER(fix1528); REGISTER(fix1528);
// NOLINTEND(readability-identifier-naming)
/** @endcond */ /** @endcond */
}; };

View File

@@ -36,7 +36,7 @@ namespace data {
namespace { namespace {
std::vector<std::int64_t> const kHISTOGRAM_BUCKETS{1, 2, 5, 10, 20, 50, 100, 200, 500, 700, 1000}; std::vector<std::int64_t> const histogramBuckets{1, 2, 5, 10, 20, 50, 100, 200, 500, 700, 1000};
std::int64_t std::int64_t
durationInMillisecondsSince(std::chrono::steady_clock::time_point const startTime) durationInMillisecondsSince(std::chrono::steady_clock::time_point const startTime)
@@ -69,13 +69,13 @@ BackendCounters::BackendCounters()
, readDurationHistogram_(PrometheusService::histogramInt( , readDurationHistogram_(PrometheusService::histogramInt(
"backend_duration_milliseconds_histogram", "backend_duration_milliseconds_histogram",
Labels({Label{"operation", "read"}}), Labels({Label{"operation", "read"}}),
kHISTOGRAM_BUCKETS, histogramBuckets,
"The duration of backend read operations including retries" "The duration of backend read operations including retries"
)) ))
, writeDurationHistogram_(PrometheusService::histogramInt( , writeDurationHistogram_(PrometheusService::histogramInt(
"backend_duration_milliseconds_histogram", "backend_duration_milliseconds_histogram",
Labels({Label{"operation", "write"}}), Labels({Label{"operation", "write"}}),
kHISTOGRAM_BUCKETS, histogramBuckets,
"The duration of backend write operations including retries" "The duration of backend write operations including retries"
)) ))
{ {

View File

@@ -22,8 +22,8 @@
#include "data/BackendInterface.hpp" #include "data/BackendInterface.hpp"
#include "data/CassandraBackend.hpp" #include "data/CassandraBackend.hpp"
#include "data/cassandra/SettingsProvider.hpp" #include "data/cassandra/SettingsProvider.hpp"
#include "util/config/Config.hpp"
#include "util/log/Logger.hpp" #include "util/log/Logger.hpp"
#include "util/newconfig/ConfigDefinition.hpp"
#include <boost/algorithm/string.hpp> #include <boost/algorithm/string.hpp>
#include <boost/algorithm/string/predicate.hpp> #include <boost/algorithm/string/predicate.hpp>
@@ -41,18 +41,18 @@ namespace data {
* @return A shared_ptr<BackendInterface> with the selected implementation * @return A shared_ptr<BackendInterface> with the selected implementation
*/ */
inline std::shared_ptr<BackendInterface> inline std::shared_ptr<BackendInterface>
makeBackend(util::config::ClioConfigDefinition const& config) make_Backend(util::Config const& config)
{ {
static util::Logger const log{"Backend"}; // NOLINT(readability-identifier-naming) static util::Logger const log{"Backend"};
LOG(log.info()) << "Constructing BackendInterface"; LOG(log.info()) << "Constructing BackendInterface";
auto const readOnly = config.get<bool>("read_only"); auto const readOnly = config.valueOr("read_only", false);
auto const type = config.get<std::string>("database.type"); auto const type = config.value<std::string>("database.type");
std::shared_ptr<BackendInterface> backend = nullptr; std::shared_ptr<BackendInterface> backend = nullptr;
if (boost::iequals(type, "cassandra")) { if (boost::iequals(type, "cassandra")) {
auto const cfg = config.getObject("database." + type); auto cfg = config.section("database." + type);
backend = std::make_shared<data::cassandra::CassandraBackend>(data::cassandra::SettingsProvider{cfg}, readOnly); backend = std::make_shared<data::cassandra::CassandraBackend>(data::cassandra::SettingsProvider{cfg}, readOnly);
} }

View File

@@ -176,9 +176,9 @@ BackendInterface::fetchSuccessorObject(
if (succ) { if (succ) {
auto obj = fetchLedgerObject(*succ, ledgerSequence, yield); auto obj = fetchLedgerObject(*succ, ledgerSequence, yield);
if (!obj) if (!obj)
return {{.key = *succ, .blob = {}}}; return {{*succ, {}}};
return {{.key = *succ, .blob = *obj}}; return {{*succ, *obj}};
} }
return {}; return {};
} }
@@ -267,7 +267,7 @@ std::optional<LedgerRange>
BackendInterface::fetchLedgerRange() const BackendInterface::fetchLedgerRange() const
{ {
std::shared_lock const lck(rngMtx_); std::shared_lock const lck(rngMtx_);
return range_; return range;
} }
void void
@@ -276,16 +276,16 @@ BackendInterface::updateRange(uint32_t newMax)
std::scoped_lock const lck(rngMtx_); std::scoped_lock const lck(rngMtx_);
ASSERT( ASSERT(
!range_ || newMax >= range_->maxSequence, !range || newMax >= range->maxSequence,
"Range shouldn't exist yet or newMax should be greater. newMax = {}, range->maxSequence = {}", "Range shouldn't exist yet or newMax should be greater. newMax = {}, range->maxSequence = {}",
newMax, newMax,
range_->maxSequence range->maxSequence
); );
if (!range_) { if (!range) {
range_ = {.minSequence = newMax, .maxSequence = newMax}; range = {newMax, newMax};
} else { } else {
range_->maxSequence = newMax; range->maxSequence = newMax;
} }
} }
@@ -296,10 +296,10 @@ BackendInterface::setRange(uint32_t min, uint32_t max, bool force)
if (!force) { if (!force) {
ASSERT(min <= max, "Range min must be less than or equal to max"); ASSERT(min <= max, "Range min must be less than or equal to max");
ASSERT(not range_.has_value(), "Range was already set"); ASSERT(not range.has_value(), "Range was already set");
} }
range_ = {.minSequence = min, .maxSequence = max}; range = {min, max};
} }
LedgerPage LedgerPage
@@ -320,10 +320,10 @@ BackendInterface::fetchLedgerPage(
ripple::uint256 const& curCursor = [&]() { ripple::uint256 const& curCursor = [&]() {
if (!keys.empty()) if (!keys.empty())
return keys.back(); return keys.back();
return (cursor ? *cursor : kFIRST_KEY); return (cursor ? *cursor : firstKey);
}(); }();
std::uint32_t const seq = outOfOrder ? range_->maxSequence : ledgerSequence; std::uint32_t const seq = outOfOrder ? range->maxSequence : ledgerSequence;
auto succ = fetchSuccessorKey(curCursor, seq, yield); auto succ = fetchSuccessorKey(curCursor, seq, yield);
if (!succ) { if (!succ) {

View File

@@ -65,7 +65,7 @@ public:
} }
}; };
static constexpr std::size_t kDEFAULT_WAIT_BETWEEN_RETRY = 500; static constexpr std::size_t DEFAULT_WAIT_BETWEEN_RETRY = 500;
/** /**
* @brief A helper function that catches DatabaseTimout exceptions and retries indefinitely. * @brief A helper function that catches DatabaseTimout exceptions and retries indefinitely.
* *
@@ -76,9 +76,9 @@ static constexpr std::size_t kDEFAULT_WAIT_BETWEEN_RETRY = 500;
*/ */
template <typename FnType> template <typename FnType>
auto auto
retryOnTimeout(FnType func, size_t waitMs = kDEFAULT_WAIT_BETWEEN_RETRY) retryOnTimeout(FnType func, size_t waitMs = DEFAULT_WAIT_BETWEEN_RETRY)
{ {
static util::Logger const log{"Backend"}; // NOLINT(readability-identifier-naming) static util::Logger const log{"Backend"};
while (true) { while (true) {
try { try {
@@ -138,7 +138,7 @@ synchronousAndRetryOnTimeout(FnType&& func)
class BackendInterface { class BackendInterface {
protected: protected:
mutable std::shared_mutex rngMtx_; mutable std::shared_mutex rngMtx_;
std::optional<LedgerRange> range_; std::optional<LedgerRange> range;
LedgerCache cache_; LedgerCache cache_;
std::optional<etl::CorruptionDetector<LedgerCache>> corruptionDetector_; std::optional<etl::CorruptionDetector<LedgerCache>> corruptionDetector_;
@@ -548,16 +548,6 @@ public:
boost::asio::yield_context yield boost::asio::yield_context yield
) const; ) const;
/**
* @brief Fetches the status of migrator by name.
*
* @param migratorName The name of the migrator
* @param yield The coroutine context
* @return The status of the migrator if found; nullopt otherwise
*/
virtual std::optional<std::string>
fetchMigratorStatus(std::string const& migratorName, boost::asio::yield_context yield) const = 0;
/** /**
* @brief Synchronously fetches the ledger range from DB. * @brief Synchronously fetches the ledger range from DB.
* *
@@ -683,21 +673,6 @@ public:
bool bool
finishWrites(std::uint32_t ledgerSequence); finishWrites(std::uint32_t ledgerSequence);
/**
* @brief Wait for all pending writes to finish.
*/
virtual void
waitForWritesToFinish() = 0;
/**
* @brief Mark the migration status of a migrator as Migrated in the database
*
* @param migratorName The name of the migrator
* @param status The status to set
*/
virtual void
writeMigratorStatus(std::string const& migratorName, std::string const& status) = 0;
/** /**
* @return true if database is overwhelmed; false otherwise * @return true if database is overwhelmed; false otherwise
*/ */

View File

@@ -22,7 +22,6 @@
#include "data/BackendInterface.hpp" #include "data/BackendInterface.hpp"
#include "data/DBHelpers.hpp" #include "data/DBHelpers.hpp"
#include "data/Types.hpp" #include "data/Types.hpp"
#include "data/cassandra/Concepts.hpp"
#include "data/cassandra/Handle.hpp" #include "data/cassandra/Handle.hpp"
#include "data/cassandra/Schema.hpp" #include "data/cassandra/Schema.hpp"
#include "data/cassandra/SettingsProvider.hpp" #include "data/cassandra/SettingsProvider.hpp"
@@ -36,7 +35,6 @@
#include <boost/asio/spawn.hpp> #include <boost/asio/spawn.hpp>
#include <boost/json/object.hpp> #include <boost/json/object.hpp>
#include <cassandra.h> #include <cassandra.h>
#include <fmt/core.h>
#include <xrpl/basics/Blob.h> #include <xrpl/basics/Blob.h>
#include <xrpl/basics/base_uint.h> #include <xrpl/basics/base_uint.h>
#include <xrpl/basics/strHex.h> #include <xrpl/basics/strHex.h>
@@ -74,15 +72,13 @@ class BasicCassandraBackend : public BackendInterface {
SettingsProviderType settingsProvider_; SettingsProviderType settingsProvider_;
Schema<SettingsProviderType> schema_; Schema<SettingsProviderType> schema_;
std::atomic_uint32_t ledgerSequence_ = 0u;
protected:
Handle handle_; Handle handle_;
// have to be mutable because BackendInterface constness :( // have to be mutable because BackendInterface constness :(
mutable ExecutionStrategyType executor_; mutable ExecutionStrategyType executor_;
std::atomic_uint32_t ledgerSequence_ = 0u;
public: public:
/** /**
* @brief Create a new cassandra/scylla backend instance. * @brief Create a new cassandra/scylla backend instance.
@@ -97,7 +93,7 @@ public:
, executor_{settingsProvider_.getSettings(), handle_} , executor_{settingsProvider_.getSettings(), handle_}
{ {
if (auto const res = handle_.connect(); not res) if (auto const res = handle_.connect(); not res)
throw std::runtime_error("Could not connect to database: " + res.error()); throw std::runtime_error("Could not connect to databse: " + res.error());
if (not readOnly) { if (not readOnly) {
if (auto const res = handle_.execute(schema_.createKeyspace); not res) { if (auto const res = handle_.execute(schema_.createKeyspace); not res) {
@@ -114,24 +110,13 @@ public:
try { try {
schema_.prepareStatements(handle_); schema_.prepareStatements(handle_);
} catch (std::runtime_error const& ex) { } catch (std::runtime_error const& ex) {
auto const error = fmt::format( LOG(log_.error()) << "Failed to prepare the statements: " << ex.what() << "; readOnly: " << readOnly;
"Failed to prepare the statements: {}; readOnly: {}. ReadOnly should be turned off or another Clio " throw;
"node with write access to DB should be started first.",
ex.what(),
readOnly
);
LOG(log_.error()) << error;
throw std::runtime_error(error);
} }
LOG(log_.info()) << "Created (revamped) CassandraBackend"; LOG(log_.info()) << "Created (revamped) CassandraBackend";
} }
/*
* @brief Move constructor is deleted because handle_ is shared by reference with executor
*/
BasicCassandraBackend(BasicCassandraBackend&&) = delete;
TransactionsAndCursor TransactionsAndCursor
fetchAccountTransactions( fetchAccountTransactions(
ripple::AccountID const& account, ripple::AccountID const& account,
@@ -143,7 +128,7 @@ public:
{ {
auto rng = fetchLedgerRange(); auto rng = fetchLedgerRange();
if (!rng) if (!rng)
return {.txns = {}, .cursor = {}}; return {{}, {}};
Statement const statement = [this, forward, &account]() { Statement const statement = [this, forward, &account]() {
if (forward) if (forward)
@@ -200,18 +185,13 @@ public:
return {txns, {}}; return {txns, {}};
} }
void
waitForWritesToFinish() override
{
executor_.sync();
}
bool bool
doFinishWrites() override doFinishWrites() override
{ {
waitForWritesToFinish(); // wait for other threads to finish their writes
executor_.sync();
if (!range_) { if (!range) {
executor_.writeSync(schema_->updateLedgerRange, ledgerSequence_, false, ledgerSequence_); executor_.writeSync(schema_->updateLedgerRange, ledgerSequence_, false, ledgerSequence_);
} }
@@ -419,7 +399,7 @@ public:
{ {
auto rng = fetchLedgerRange(); auto rng = fetchLedgerRange();
if (!rng) if (!rng)
return {.txns = {}, .cursor = {}}; return {{}, {}};
Statement const statement = [this, forward, &tokenID]() { Statement const statement = [this, forward, &tokenID]() {
if (forward) if (forward)
@@ -636,6 +616,7 @@ public:
return seq; return seq;
} }
LOG(log_.debug()) << "Could not fetch ledger object sequence - no rows"; LOG(log_.debug()) << "Could not fetch ledger object sequence - no rows";
} else { } else {
LOG(log_.error()) << "Could not fetch ledger object sequence: " << res.error(); LOG(log_.error()) << "Could not fetch ledger object sequence: " << res.error();
} }
@@ -666,7 +647,7 @@ public:
{ {
if (auto const res = executor_.read(yield, schema_->selectSuccessor, key, ledgerSequence); res) { if (auto const res = executor_.read(yield, schema_->selectSuccessor, key, ledgerSequence); res) {
if (auto const result = res->template get<ripple::uint256>(); result) { if (auto const result = res->template get<ripple::uint256>(); result) {
if (*result == kLAST_KEY) if (*result == lastKey)
return std::nullopt; return std::nullopt;
return result; return result;
} }
@@ -854,32 +835,12 @@ public:
return results; return results;
} }
std::optional<std::string>
fetchMigratorStatus(std::string const& migratorName, boost::asio::yield_context yield) const override
{
auto const res = executor_.read(yield, schema_->selectMigratorStatus, Text(migratorName));
if (not res) {
LOG(log_.error()) << "Could not fetch migrator status: " << res.error();
return {};
}
auto const& results = res.value();
if (not results) {
return {};
}
for (auto [statusString] : extract<std::string>(results))
return statusString;
return {};
}
void void
doWriteLedgerObject(std::string&& key, std::uint32_t const seq, std::string&& blob) override doWriteLedgerObject(std::string&& key, std::uint32_t const seq, std::string&& blob) override
{ {
LOG(log_.trace()) << " Writing ledger object " << key.size() << ":" << seq << " [" << blob.size() << " bytes]"; LOG(log_.trace()) << " Writing ledger object " << key.size() << ":" << seq << " [" << blob.size() << " bytes]";
if (range_) if (range)
executor_.write(schema_->insertDiff, seq, key); executor_.write(schema_->insertDiff, seq, key);
executor_.write(schema_->insertObject, std::move(key), seq, std::move(blob)); executor_.write(schema_->insertObject, std::move(key), seq, std::move(blob));
@@ -959,7 +920,6 @@ public:
statements.reserve(data.size() * 3); statements.reserve(data.size() * 3);
for (NFTsData const& record : data) { for (NFTsData const& record : data) {
if (!record.onlyUriChanged) {
statements.push_back( statements.push_back(
schema_->insertNFT.bind(record.tokenID, record.ledgerSequence, record.owner, record.isBurned) schema_->insertNFT.bind(record.tokenID, record.ledgerSequence, record.owner, record.isBurned)
); );
@@ -979,15 +939,9 @@ public:
schema_->insertNFTURI.bind(record.tokenID, record.ledgerSequence, record.uri.value()) schema_->insertNFTURI.bind(record.tokenID, record.ledgerSequence, record.uri.value())
); );
} }
} else {
// only uri changed, we update the uri table only
statements.push_back(
schema_->insertNFTURI.bind(record.tokenID, record.ledgerSequence, record.uri.value())
);
}
} }
executor_.writeEach(std::move(statements)); executor_.write(std::move(statements));
} }
void void
@@ -1008,14 +962,6 @@ public:
// probably was used in PG to start a transaction or smth. // probably was used in PG to start a transaction or smth.
} }
void
writeMigratorStatus(std::string const& migratorName, std::string const& status) override
{
executor_.writeSync(
schema_->insertMigratorStatus, data::cassandra::Text{migratorName}, data::cassandra::Text(status)
);
}
bool bool
isTooBusy() const override isTooBusy() const override
{ {

View File

@@ -107,7 +107,6 @@ struct NFTsData {
ripple::AccountID owner; ripple::AccountID owner;
std::optional<ripple::Blob> uri; std::optional<ripple::Blob> uri;
bool isBurned = false; bool isBurned = false;
bool onlyUriChanged = false; // Whether only the URI was changed
/** /**
* @brief Construct a new NFTsData object * @brief Construct a new NFTsData object
@@ -171,23 +170,6 @@ struct NFTsData {
: tokenID(tokenID), ledgerSequence(ledgerSequence), owner(owner), uri(uri) : tokenID(tokenID), ledgerSequence(ledgerSequence), owner(owner), uri(uri)
{ {
} }
/**
* @brief Construct a new NFTsData object with only the URI changed
*
* @param tokenID The token ID
* @param meta The transaction metadata
* @param uri The new URI
*
*/
NFTsData(ripple::uint256 const& tokenID, ripple::TxMeta const& meta, ripple::Blob const& uri)
: tokenID(tokenID)
, ledgerSequence(meta.getLgrSeq())
, transactionIndex(meta.getIndex())
, uri(uri)
, onlyUriChanged(true)
{
}
}; };
/** /**
@@ -208,11 +190,11 @@ template <typename T>
inline bool inline bool
isOffer(T const& object) isOffer(T const& object)
{ {
static constexpr short kOFFER_OFFSET = 0x006f; static constexpr short OFFER_OFFSET = 0x006f;
static constexpr short kSHIFT = 8; static constexpr short SHIFT = 8;
short offerBytes = (object[1] << kSHIFT) | object[2]; short offer_bytes = (object[1] << SHIFT) | object[2];
return offerBytes == kOFFER_OFFSET; return offer_bytes == OFFER_OFFSET;
} }
/** /**
@@ -241,9 +223,9 @@ template <typename T>
inline bool inline bool
isDirNode(T const& object) isDirNode(T const& object)
{ {
static constexpr short kDIR_NODE_SPACE_KEY = 0x0064; static constexpr short DIR_NODE_SPACE_KEY = 0x0064;
short const spaceKey = (object.data()[1] << 8) | object.data()[2]; short const spaceKey = (object.data()[1] << 8) | object.data()[2];
return spaceKey == kDIR_NODE_SPACE_KEY; return spaceKey == DIR_NODE_SPACE_KEY;
} }
/** /**
@@ -291,12 +273,12 @@ template <typename T>
inline ripple::uint256 inline ripple::uint256
getBookBase(T const& key) getBookBase(T const& key)
{ {
static constexpr size_t kEY_SIZE = 24; static constexpr size_t KEY_SIZE = 24;
ASSERT(key.size() == ripple::uint256::size(), "Invalid key size {}", key.size()); ASSERT(key.size() == ripple::uint256::size(), "Invalid key size {}", key.size());
ripple::uint256 ret; ripple::uint256 ret;
for (size_t i = 0; i < kEY_SIZE; ++i) for (size_t i = 0; i < KEY_SIZE; ++i)
ret.data()[i] = key.data()[i]; ret.data()[i] = key.data()[i];
return ret; return ret;
@@ -315,4 +297,4 @@ uint256ToString(ripple::uint256 const& input)
} }
/** @brief The ripple epoch start timestamp. Midnight on 1st January 2000. */ /** @brief The ripple epoch start timestamp. Midnight on 1st January 2000. */
static constexpr std::uint32_t kRIPPLE_EPOCH_START = 946684800; static constexpr std::uint32_t rippleEpochStart = 946684800;

View File

@@ -75,7 +75,7 @@ LedgerCache::update(std::vector<LedgerObject> const& objs, uint32_t seq, bool is
             auto& e = map_[obj.key];
             if (seq > e.seq) {
-                e = {.seq = seq, .blob = obj.blob};
+                e = {seq, obj.blob};
             }
         } else {
             map_.erase(obj.key);
@@ -101,7 +101,7 @@ LedgerCache::getSuccessor(ripple::uint256 const& key, uint32_t seq) const
     if (e == map_.end())
         return {};
     ++successorHitCounter_.get();
-    return {{.key = e->first, .blob = e->second.blob}};
+    return {{e->first, e->second.blob}};
 }
 std::optional<LedgerObject>
@@ -117,7 +117,7 @@ LedgerCache::getPredecessor(ripple::uint256 const& key, uint32_t seq) const
     if (e == map_.begin())
         return {};
     --e;
-    return {{.key = e->first, .blob = e->second.blob}};
+    return {{e->first, e->second.blob}};
 }
 std::optional<Blob>
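The only change in these hunks is dropping designated initializers (`.seq = ..., .blob = ...`) in favor of positional aggregate initialization. For an aggregate, both spell the same thing; a small sketch (the `CacheEntry` shape here is assumed, not the real cache type):

```cpp
#include <cstdint>
#include <vector>

struct CacheEntry {
    uint32_t seq = 0;
    std::vector<unsigned char> blob;
};

int main()
{
    CacheEntry a{.seq = 42, .blob = {0x01}};  // designated: field names are explicit
    CacheEntry b{42, {0x01}};                 // positional: relies on member order
    return (a.seq == b.seq) ? 0 : 1;          // both produce the same object
}
```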

View File

@@ -262,15 +262,3 @@ CREATE TABLE clio.nf_token_transactions (
 ```
 The `nf_token_transactions` table serves as the NFT counterpart to `account_tx`, inspired by the same motivations and fulfilling a similar role within this context. It drives the `nft_history` API.
-### migrator_status
-```
-CREATE TABLE clio.migrator_status (
-    migrator_name TEXT, # The name of the migrator
-    status TEXT,        # The status of the migrator
-    PRIMARY KEY (migrator_name)
-)
-```
-The `migrator_status` table stores the status of the migratior in this database. If a migrator's status is `migrated`, it means this database has finished data migration for this migrator.

View File

@@ -266,7 +266,7 @@ struct Amendment {
      * @return The amendment Id as uint256
      */
     static ripple::uint256
-    getAmendmentId(std::string_view const name);
+    GetAmendmentId(std::string_view const name);
     /**
      * @brief Equality comparison operator
@@ -312,8 +312,8 @@ struct AmendmentKey {
     operator<=>(AmendmentKey const& other) const = default;
 };
-constexpr ripple::uint256 kFIRST_KEY{"0000000000000000000000000000000000000000000000000000000000000000"};
-constexpr ripple::uint256 kLAST_KEY{"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"};
-constexpr ripple::uint256 kHI192{"0000000000000000000000000000000000000000000000001111111111111111"};
+constexpr ripple::uint256 firstKey{"0000000000000000000000000000000000000000000000000000000000000000"};
+constexpr ripple::uint256 lastKey{"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"};
+constexpr ripple::uint256 hi192{"0000000000000000000000000000000000000000000000001111111111111111"};
 } // namespace data

View File

@@ -60,7 +60,7 @@ Handle::connect() const
 Handle::FutureType
 Handle::asyncConnect(std::string_view keyspace) const
 {
-    return cass_session_connect_keyspace_n(session_, cluster_, keyspace.data(), keyspace.size());
+    return cass_session_connect_keyspace(session_, cluster_, keyspace.data());
 }
 Handle::MaybeErrorType
@@ -155,7 +155,7 @@ Handle::asyncExecute(std::vector<StatementType> const& statements, std::function
 Handle::PreparedStatementType
 Handle::prepare(std::string_view query) const
 {
-    Handle::FutureType const future = cass_session_prepare_n(session_, query.data(), query.size());
+    Handle::FutureType const future = cass_session_prepare(session_, query.data());
     auto const rc = future.await();
     if (rc)
         return cass_future_get_prepared(future);
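This hunk swaps the length-aware `_n` driver calls for the null-terminated variants. The distinction matters when the argument is a `std::string_view`: `.data()` is not guaranteed to be null-terminated, so the `_n` forms, which take an explicit byte count, are the safe choice. A hedged sketch (assumes the DataStax C/C++ driver header):

```cpp
#include <cassandra.h>

#include <string_view>

// Prepares a query from a string_view. The `_n` variant receives an
// explicit length and never reads past the end of the view, whereas
// cass_session_prepare() would keep reading until it hits a '\0'.
CassFuture*
prepareQuery(CassSession* session, std::string_view query)
{
    return cass_session_prepare_n(session, query.data(), query.size());
}
```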

View File

@@ -74,7 +74,7 @@ public:
                 'class': 'SimpleStrategy',
                 'replication_factor': '{}'
             }}
-            AND durable_writes = True
+            AND durable_writes = true
         )",
         settingsProvider_.get().getKeyspace(),
         settingsProvider_.get().getReplicationFactor()
@@ -270,18 +270,6 @@ public:
             qualifiedTableName(settingsProvider_.get(), "mp_token_holders")
         ));
-        statements.emplace_back(fmt::format(
-            R"(
-            CREATE TABLE IF NOT EXISTS {}
-            (
-                migrator_name TEXT,
-                status TEXT,
-                PRIMARY KEY (migrator_name)
-            )
-            )",
-            qualifiedTableName(settingsProvider_.get(), "migrator_status")
-        ));
         return statements;
     }();
@@ -472,23 +460,12 @@ public:
             R"(
             UPDATE {}
             SET sequence = ?
-            WHERE is_latest = False
+            WHERE is_latest = false
             )",
             qualifiedTableName(settingsProvider_.get(), "ledger_range")
         ));
     }();
-    PreparedStatement insertMigratorStatus = [this]() {
-        return handle_.get().prepare(fmt::format(
-            R"(
-            INSERT INTO {}
-            (migrator_name, status)
-            VALUES (?, ?)
-            )",
-            qualifiedTableName(settingsProvider_.get(), "migrator_status")
-        ));
-    }();
     //
     // Select queries
     //
@@ -776,7 +753,7 @@ public:
             R"(
             SELECT sequence
             FROM {}
-            WHERE is_latest = True
+            WHERE is_latest = true
             )",
             qualifiedTableName(settingsProvider_.get(), "ledger_range")
         ));
@@ -787,22 +764,10 @@ public:
             R"(
             SELECT sequence
             FROM {}
-            WHERE is_latest in (True, False)
             )",
             qualifiedTableName(settingsProvider_.get(), "ledger_range")
         ));
     }();
-    PreparedStatement selectMigratorStatus = [this]() {
-        return handle_.get().prepare(fmt::format(
-            R"(
-            SELECT status
-            FROM {}
-            WHERE migrator_name = ?
-            )",
-            qualifiedTableName(settingsProvider_.get(), "migrator_status")
-        ));
-    }();
 };
 /**
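All of the schema statements in this file follow the same pattern: a raw CQL string with a `{}` placeholder that `fmt::format` fills with a keyspace-qualified table name. A standalone sketch of the pattern (the free `qualifiedTableName` here is a stand-in for the real helper, which derives the name from the settings provider):

```cpp
#include <fmt/core.h>

#include <iostream>
#include <string>

std::string
qualifiedTableName(std::string const& keyspace, std::string const& table)
{
    return fmt::format("{}.{}", keyspace, table);  // e.g. "clio.ledger_range"
}

int main()
{
    auto const statement = fmt::format(
        R"(SELECT sequence FROM {} WHERE is_latest = true)",
        qualifiedTableName("clio", "ledger_range")
    );
    std::cout << statement << '\n';
}
```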

View File

@@ -22,7 +22,10 @@
 #include "data/cassandra/Types.hpp"
 #include "data/cassandra/impl/Cluster.hpp"
 #include "util/Constants.hpp"
-#include "util/newconfig/ObjectView.hpp"
+#include "util/config/Config.hpp"
+#include <boost/json/conversion.hpp>
+#include <boost/json/value.hpp>
 #include <cerrno>
 #include <chrono>
@@ -33,17 +36,43 @@
 #include <ios>
 #include <iterator>
 #include <optional>
+#include <stdexcept>
 #include <string>
+#include <string_view>
 #include <system_error>
 namespace data::cassandra {
-SettingsProvider::SettingsProvider(util::config::ObjectView const& cfg)
+namespace impl {
+inline Settings::ContactPoints
+tag_invoke(boost::json::value_to_tag<Settings::ContactPoints>, boost::json::value const& value)
+{
+    if (not value.is_object()) {
+        throw std::runtime_error("Feed entire Cassandra section to parse Settings::ContactPoints instead");
+    }
+    util::Config const obj{value};
+    Settings::ContactPoints out;
+    out.contactPoints = obj.valueOrThrow<std::string>("contact_points", "`contact_points` must be a string");
+    out.port = obj.maybeValue<uint16_t>("port");
+    return out;
+}
+inline Settings::SecureConnectionBundle
+tag_invoke(boost::json::value_to_tag<Settings::SecureConnectionBundle>, boost::json::value const& value)
+{
+    if (not value.is_string())
+        throw std::runtime_error("`secure_connect_bundle` must be a string");
+    return Settings::SecureConnectionBundle{value.as_string().data()};
+}
+} // namespace impl
+SettingsProvider::SettingsProvider(util::Config const& cfg)
     : config_{cfg}
-    , keyspace_{cfg.get<std::string>("keyspace")}
+    , keyspace_{cfg.valueOr<std::string>("keyspace", "clio")}
     , tablePrefix_{cfg.maybeValue<std::string>("table_prefix")}
-    , replicationFactor_{cfg.get<uint16_t>("replication_factor")}
+    , replicationFactor_{cfg.valueOr<uint16_t>("replication_factor", 3)}
     , settings_{parseSettings()}
 {
 }
@@ -57,8 +86,8 @@ SettingsProvider::getSettings() const
 std::optional<std::string>
 SettingsProvider::parseOptionalCertificate() const
 {
-    if (auto const certPath = config_.getValueView("certfile"); certPath.hasValue()) {
-        auto const path = std::filesystem::path(certPath.asString());
+    if (auto const certPath = config_.maybeValue<std::string>("certfile"); certPath) {
+        auto const path = std::filesystem::path(*certPath);
         std::ifstream fileStream(path.string(), std::ios::in);
         if (!fileStream) {
             throw std::system_error(errno, std::generic_category(), "Opening certificate " + path.string());
@@ -79,34 +108,30 @@ Settings
 SettingsProvider::parseSettings() const
 {
     auto settings = Settings::defaultSettings();
-    // all config values used in settings is under "database.cassandra" prefix
-    if (config_.getValueView("secure_connect_bundle").hasValue()) {
-        auto const bundle = Settings::SecureConnectionBundle{(config_.get<std::string>("secure_connect_bundle"))};
-        settings.connectionInfo = bundle;
+    if (auto const bundle = config_.maybeValue<Settings::SecureConnectionBundle>("secure_connect_bundle"); bundle) {
+        settings.connectionInfo = *bundle;
     } else {
-        Settings::ContactPoints out;
-        out.contactPoints = config_.get<std::string>("contact_points");
-        out.port = config_.maybeValue<uint32_t>("port");
-        settings.connectionInfo = out;
+        settings.connectionInfo =
+            config_.valueOrThrow<Settings::ContactPoints>("Missing contact_points in Cassandra config");
     }
-    settings.threads = config_.get<uint32_t>("threads");
-    settings.maxWriteRequestsOutstanding = config_.get<uint32_t>("max_write_requests_outstanding");
-    settings.maxReadRequestsOutstanding = config_.get<uint32_t>("max_read_requests_outstanding");
-    settings.coreConnectionsPerHost = config_.get<uint32_t>("core_connections_per_host");
+    settings.threads = config_.valueOr<uint32_t>("threads", settings.threads);
+    settings.maxWriteRequestsOutstanding =
+        config_.valueOr<uint32_t>("max_write_requests_outstanding", settings.maxWriteRequestsOutstanding);
+    settings.maxReadRequestsOutstanding =
+        config_.valueOr<uint32_t>("max_read_requests_outstanding", settings.maxReadRequestsOutstanding);
+    settings.coreConnectionsPerHost =
+        config_.valueOr<uint32_t>("core_connections_per_host", settings.coreConnectionsPerHost);
     settings.queueSizeIO = config_.maybeValue<uint32_t>("queue_size_io");
-    settings.writeBatchSize = config_.get<std::size_t>("write_batch_size");
+    settings.writeBatchSize = config_.valueOr<std::size_t>("write_batch_size", settings.writeBatchSize);
-    if (config_.getValueView("connect_timeout").hasValue()) {
-        auto const connectTimeoutSecond = config_.get<uint32_t>("connect_timeout");
-        settings.connectionTimeout = std::chrono::milliseconds{connectTimeoutSecond * util::kMILLISECONDS_PER_SECOND};
-    }
+    auto const connectTimeoutSecond = config_.maybeValue<uint32_t>("connect_timeout");
+    if (connectTimeoutSecond)
+        settings.connectionTimeout = std::chrono::milliseconds{*connectTimeoutSecond * util::MILLISECONDS_PER_SECOND};
-    if (config_.getValueView("request_timeout").hasValue()) {
-        auto const requestTimeoutSecond = config_.get<uint32_t>("request_timeout");
-        settings.requestTimeout = std::chrono::milliseconds{requestTimeoutSecond * util::kMILLISECONDS_PER_SECOND};
-    }
+    auto const requestTimeoutSecond = config_.maybeValue<uint32_t>("request_timeout");
+    if (requestTimeoutSecond)
+        settings.requestTimeout = std::chrono::milliseconds{*requestTimeoutSecond * util::MILLISECONDS_PER_SECOND};
     settings.certificate = parseOptionalCertificate();
     settings.username = config_.maybeValue<std::string>("username");
View File

@@ -19,9 +19,10 @@
 #pragma once
+#include "data/cassandra/Handle.hpp"
 #include "data/cassandra/Types.hpp"
-#include "data/cassandra/impl/Cluster.hpp"
-#include "util/newconfig/ObjectView.hpp"
+#include "util/config/Config.hpp"
+#include "util/log/Logger.hpp"
 #include <cstdint>
 #include <optional>
@@ -33,7 +34,7 @@ namespace data::cassandra {
  * @brief Provides settings for @ref BasicCassandraBackend.
  */
 class SettingsProvider {
-    util::config::ObjectView config_;
+    util::Config config_;
     std::string keyspace_;
     std::optional<std::string> tablePrefix_;
@@ -46,7 +47,7 @@ public:
      *
      * @param cfg The config of Clio to use
      */
-    explicit SettingsProvider(util::config::ObjectView const& cfg);
+    explicit SettingsProvider(util::Config const& cfg);
     /**
      * @return The cluster settings

View File

@@ -21,8 +21,6 @@
 #include <cstdint>
 #include <expected>
-#include <string>
-#include <utility>
 namespace data::cassandra {
@@ -57,26 +55,6 @@ struct Limit {
     int32_t limit;
 };
-/**
- * @brief A strong type wrapper for string
- *
- * This is unfortunately needed right now to support TEXT properly
- * because clio uses string to represent BLOB
- * If we want to bind TEXT with string, we need to use this type
- */
-struct Text {
-    std::string text;
-    /**
-     * @brief Construct a new Text object from string type
-     *
-     * @param text The text to wrap
-     */
-    explicit Text(std::string text) : text{std::move(text)}
-    {
-    }
-};
 class Handle;
 class CassandraError;
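The `Text` wrapper on one side of this diff exists to disambiguate at the type level: clio represents BLOB columns as `std::string`, so a bare string always binds as bytes, and a TEXT column needs a distinct type. A sketch of the strong-typedef idea (illustrative only; the `bind` overloads below are stand-ins for the real binding path):

```cpp
#include <iostream>
#include <string>
#include <utility>

// Wrapping the string in a distinct type lets overload resolution (or an
// `if constexpr` chain, as in Statement::bind) pick the TEXT binding path.
struct Text {
    std::string text;

    explicit Text(std::string value) : text{std::move(value)}
    {
    }
};

void
bind(std::string const&)
{
    std::cout << "bound as BLOB\n";
}

void
bind(Text const&)
{
    std::cout << "bound as TEXT\n";
}

int main()
{
    bind(std::string{"raw bytes"});  // bound as BLOB
    bind(Text{"human readable"});    // bound as TEXT
}
```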

View File

@@ -23,7 +23,6 @@
 #include "data/cassandra/Handle.hpp"
 #include "data/cassandra/Types.hpp"
 #include "data/cassandra/impl/RetryPolicy.hpp"
-#include "util/Mutex.hpp"
 #include "util/log/Logger.hpp"
 #include <boost/asio.hpp>
@@ -65,8 +64,8 @@ class AsyncExecutor : public std::enable_shared_from_this<AsyncExecutor<Statemen
     RetryCallbackType onRetry_;
     // does not exist during initial construction, hence optional
-    using OptionalFuture = std::optional<FutureWithCallbackType>;
-    util::Mutex<OptionalFuture> future_;
+    std::optional<FutureWithCallbackType> future_;
+    std::mutex mtx_;
 public:
     /**
@@ -128,8 +127,8 @@ private:
             self = nullptr; // explicitly decrement refcount
         };
-        auto future = future_.template lock<std::scoped_lock>();
-        future->emplace(handle.asyncExecute(data_, std::move(handler)));
+        std::scoped_lock const lck{mtx_};
+        future_.emplace(handle.asyncExecute(data_, std::move(handler)));
     }
 };
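The two sides here differ in how they guard the optional future: a `util::Mutex<T>` wrapper that only hands out the data through a lock, versus a plain `std::mutex` sitting next to the member. A sketch of the wrapper idea (`MutexGuarded` is hypothetical, not the real `util::Mutex` API):

```cpp
#include <mutex>
#include <optional>

template <typename T>
class MutexGuarded {
    T data_;
    std::mutex mtx_;

public:
    // The only way to reach data_ is through this method, so forgetting to
    // lock becomes a compile-time impossibility rather than a review item.
    template <typename F>
    decltype(auto)
    withLock(F&& fn)
    {
        std::scoped_lock const lck{mtx_};
        return fn(data_);
    }
};

int main()
{
    MutexGuarded<std::optional<int>> future;
    future.withLock([](auto& value) { value.emplace(42); });
}
```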

View File

@@ -31,14 +31,14 @@
 #include <vector>
 namespace {
-constexpr auto kBATCH_DELETER = [](CassBatch* ptr) { cass_batch_free(ptr); };
+constexpr auto batchDeleter = [](CassBatch* ptr) { cass_batch_free(ptr); };
 } // namespace
 namespace data::cassandra::impl {
 // TODO: Use an appropriate value instead of CASS_BATCH_TYPE_LOGGED for different use cases
 Batch::Batch(std::vector<Statement> const& statements)
-    : ManagedObject{cass_batch_new(CASS_BATCH_TYPE_LOGGED), kBATCH_DELETER}
+    : ManagedObject{cass_batch_new(CASS_BATCH_TYPE_LOGGED), batchDeleter}
 {
     cass_batch_set_is_idempotent(*this, cass_true);

View File

@@ -33,13 +33,13 @@
 namespace {
-constexpr auto kCLUSTER_DELETER = [](CassCluster* ptr) { cass_cluster_free(ptr); };
+constexpr auto clusterDeleter = [](CassCluster* ptr) { cass_cluster_free(ptr); };
 }; // namespace
 namespace data::cassandra::impl {
-Cluster::Cluster(Settings const& settings) : ManagedObject{cass_cluster_new(), kCLUSTER_DELETER}
+Cluster::Cluster(Settings const& settings) : ManagedObject{cass_cluster_new(), clusterDeleter}
 {
     using std::to_string;

View File

@@ -25,8 +25,6 @@
 #include <cassandra.h>
 #include <chrono>
-#include <cstddef>
-#include <cstdint>
 #include <optional>
 #include <string>
 #include <string_view>
@@ -41,10 +39,10 @@ namespace data::cassandra::impl {
  * @brief Bundles all cassandra settings in one place.
  */
 struct Settings {
-    static constexpr std::size_t kDEFAULT_CONNECTION_TIMEOUT = 10000;
-    static constexpr uint32_t kDEFAULT_MAX_WRITE_REQUESTS_OUTSTANDING = 10'000;
-    static constexpr uint32_t kDEFAULT_MAX_READ_REQUESTS_OUTSTANDING = 100'000;
-    static constexpr std::size_t kDEFAULT_BATCH_SIZE = 20;
+    static constexpr std::size_t DEFAULT_CONNECTION_TIMEOUT = 10000;
+    static constexpr uint32_t DEFAULT_MAX_WRITE_REQUESTS_OUTSTANDING = 10'000;
+    static constexpr uint32_t DEFAULT_MAX_READ_REQUESTS_OUTSTANDING = 100'000;
+    static constexpr std::size_t DEFAULT_BATCH_SIZE = 20;
     /**
      * @brief Represents the configuration of contact points for cassandra.
@@ -65,7 +63,7 @@ struct Settings {
     bool enableLog = false;
     /** @brief Connect timeout specified in milliseconds */
-    std::chrono::milliseconds connectionTimeout = std::chrono::milliseconds{kDEFAULT_CONNECTION_TIMEOUT};
+    std::chrono::milliseconds connectionTimeout = std::chrono::milliseconds{DEFAULT_CONNECTION_TIMEOUT};
     /** @brief Request timeout specified in milliseconds */
     std::chrono::milliseconds requestTimeout = std::chrono::milliseconds{0}; // no timeout at all
@@ -77,16 +75,16 @@ struct Settings {
     uint32_t threads = std::thread::hardware_concurrency();
     /** @brief The maximum number of outstanding write requests at any given moment */
-    uint32_t maxWriteRequestsOutstanding = kDEFAULT_MAX_WRITE_REQUESTS_OUTSTANDING;
+    uint32_t maxWriteRequestsOutstanding = DEFAULT_MAX_WRITE_REQUESTS_OUTSTANDING;
     /** @brief The maximum number of outstanding read requests at any given moment */
-    uint32_t maxReadRequestsOutstanding = kDEFAULT_MAX_READ_REQUESTS_OUTSTANDING;
+    uint32_t maxReadRequestsOutstanding = DEFAULT_MAX_READ_REQUESTS_OUTSTANDING;
     /** @brief The number of connection per host to always have active */
     uint32_t coreConnectionsPerHost = 1u;
     /** @brief Size of batches when writing */
-    std::size_t writeBatchSize = kDEFAULT_BATCH_SIZE;
+    std::size_t writeBatchSize = DEFAULT_BATCH_SIZE;
     /** @brief Size of the IO queue */
     std::optional<uint32_t> queueSizeIO = std::nullopt; // NOLINT(readability-redundant-member-init)

View File

@@ -33,7 +33,7 @@
 namespace data::cassandra::impl {
 class Collection : public ManagedObject<CassCollection> {
-    static constexpr auto kDELETER = [](CassCollection* ptr) { cass_collection_free(ptr); };
+    static constexpr auto deleter = [](CassCollection* ptr) { cass_collection_free(ptr); };
     static void
     throwErrorIfNeeded(CassError const rc, std::string_view const label)
@@ -49,7 +49,7 @@ public:
     template <typename Type>
     explicit Collection(std::vector<Type> const& value)
-        : ManagedObject{cass_collection_new(CASS_COLLECTION_TYPE_LIST, value.size()), kDELETER}
+        : ManagedObject{cass_collection_new(CASS_COLLECTION_TYPE_LIST, value.size()), deleter}
     {
         bind(value);
     }

View File

@@ -35,7 +35,6 @@
 #include <boost/asio/spawn.hpp>
 #include <boost/json/object.hpp>
-#include <algorithm>
 #include <atomic>
 #include <chrono>
 #include <condition_variable>
@@ -193,24 +192,10 @@ public:
     template <typename... Args>
     void
     write(PreparedStatementType const& preparedStatement, Args&&... args)
-    {
-        auto statement = preparedStatement.bind(std::forward<Args>(args)...);
-        write(std::move(statement));
-    }
-    /**
-     * @brief Non-blocking query execution used for writing data.
-     *
-     * Retries forever with retry policy specified by @ref AsyncExecutor
-     *
-     * @param statement Statement to execute
-     * @throw DatabaseTimeout on timeout
-     */
-    void
-    write(StatementType&& statement)
     {
         auto const startTime = std::chrono::steady_clock::now();
+        auto statement = preparedStatement.bind(std::forward<Args>(args)...);
         incrementOutstandingRequestCount();
         counters_->registerWriteStarted();
@@ -266,21 +251,6 @@ public:
         });
     }
-    /**
-     * @brief Non-blocking query execution used for writing data. Constrast with write, this method does not execute
-     * the statements in a batch.
-     *
-     * Retries forever with retry policy specified by @ref AsyncExecutor.
-     *
-     * @param statements Vector of statements to execute
-     * @throw DatabaseTimeout on timeout
-     */
-    void
-    writeEach(std::vector<StatementType>&& statements)
-    {
-        std::ranges::for_each(std::move(statements), [this](auto& statement) { this->write(std::move(statement)); });
-    }
     /**
      * @brief Coroutine-based query execution used for reading data.
     *
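The removed `writeEach` overload is a thin loop: instead of batching, it moves each statement into an individual write call, trading batch atomicity for smaller payloads. A standalone sketch of that shape (`submit` stands in for the real single-statement write):

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

void
submit(std::string&& statement)
{
    std::cout << "executing: " << statement << '\n';
}

void
writeEach(std::vector<std::string>&& statements)
{
    // Each statement goes out on its own, not inside a LOGGED batch.
    std::ranges::for_each(statements, [](auto& statement) { submit(std::move(statement)); });
}

int main()
{
    writeEach({"INSERT a", "INSERT b"});
}
```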

View File

@@ -32,12 +32,12 @@
 #include <utility>
 namespace {
-constexpr auto kFUTURE_DELETER = [](CassFuture* ptr) { cass_future_free(ptr); };
+constexpr auto futureDeleter = [](CassFuture* ptr) { cass_future_free(ptr); };
 } // namespace
 namespace data::cassandra::impl {
-/* implicit */ Future::Future(CassFuture* ptr) : ManagedObject{ptr, kFUTURE_DELETER}
+/* implicit */ Future::Future(CassFuture* ptr) : ManagedObject{ptr, futureDeleter}
 {
 }

View File

@@ -30,8 +30,8 @@ protected:
     std::unique_ptr<Managed, void (*)(Managed*)> ptr_;
 public:
-    template <typename DeleterCallable>
-    ManagedObject(Managed* rawPtr, DeleterCallable deleter) : ptr_{rawPtr, deleter}
+    template <typename deleterCallable>
+    ManagedObject(Managed* rawPtr, deleterCallable deleter) : ptr_{rawPtr, deleter}
     {
         if (rawPtr == nullptr)
             throw std::runtime_error("Could not create DB object - got nullptr");
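`ManagedObject` is RAII over raw C handles: a `std::unique_ptr` whose deleter is a plain function pointer, plus a null check so a failed C-side allocation surfaces as an exception. A self-contained sketch of the same idea applied to a `std::FILE*`:

```cpp
#include <cstdio>
#include <memory>
#include <stdexcept>

std::unique_ptr<std::FILE, void (*)(std::FILE*)>
openFile(char const* path)
{
    std::FILE* raw = std::fopen(path, "r");
    if (raw == nullptr)
        throw std::runtime_error("Could not open file");  // mirrors the nullptr guard above

    // The captureless lambda decays to void(*)(std::FILE*), matching the deleter type.
    return {raw, [](std::FILE* f) { static_cast<void>(std::fclose(f)); }};
}
```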

View File

@@ -26,13 +26,13 @@
 #include <cstddef>
 namespace {
-constexpr auto kRESULT_DELETER = [](CassResult const* ptr) { cass_result_free(ptr); };
-constexpr auto kRESULT_ITERATOR_DELETER = [](CassIterator* ptr) { cass_iterator_free(ptr); };
+constexpr auto resultDeleter = [](CassResult const* ptr) { cass_result_free(ptr); };
+constexpr auto resultIteratorDeleter = [](CassIterator* ptr) { cass_iterator_free(ptr); };
 } // namespace
 namespace data::cassandra::impl {
-/* implicit */ Result::Result(CassResult const* ptr) : ManagedObject{ptr, kRESULT_DELETER}
+/* implicit */ Result::Result(CassResult const* ptr) : ManagedObject{ptr, resultDeleter}
 {
 }
@@ -49,7 +49,7 @@ Result::hasRows() const
 }
 /* implicit */ ResultIterator::ResultIterator(CassIterator* ptr)
-    : ManagedObject{ptr, kRESULT_ITERATOR_DELETER}, hasMore_{cass_iterator_next(ptr) != 0u}
+    : ManagedObject{ptr, resultIteratorDeleter}, hasMore_{cass_iterator_next(ptr) != 0u}
 {
 }

View File

@@ -26,10 +26,10 @@
 namespace data::cassandra::impl {
 class Session : public ManagedObject<CassSession> {
-    static constexpr auto kDELETER = [](CassSession* ptr) { cass_session_free(ptr); };
+    static constexpr auto deleter = [](CassSession* ptr) { cass_session_free(ptr); };
 public:
-    Session() : ManagedObject{cass_session_new(), kDELETER}
+    Session() : ManagedObject{cass_session_new(), deleter}
     {
     }
 };

View File

@@ -27,12 +27,12 @@
 #include <string>
 namespace {
-constexpr auto kCONTEXT_DELETER = [](CassSsl* ptr) { cass_ssl_free(ptr); };
+constexpr auto contextDeleter = [](CassSsl* ptr) { cass_ssl_free(ptr); };
 } // namespace
 namespace data::cassandra::impl {
-SslContext::SslContext(std::string const& certificate) : ManagedObject{cass_ssl_new(), kCONTEXT_DELETER}
+SslContext::SslContext(std::string const& certificate) : ManagedObject{cass_ssl_new(), contextDeleter}
 {
     cass_ssl_set_verify_flags(*this, CASS_SSL_VERIFY_NONE);
     if (auto const rc = cass_ssl_add_trusted_cert(*this, certificate.c_str()); rc != CASS_OK) {

View File

@@ -43,7 +43,7 @@
 namespace data::cassandra::impl {
 class Statement : public ManagedObject<CassStatement> {
-    static constexpr auto kDELETER = [](CassStatement* ptr) { cass_statement_free(ptr); };
+    static constexpr auto deleter = [](CassStatement* ptr) { cass_statement_free(ptr); };
 public:
     /**
@@ -54,14 +54,14 @@ public:
      */
     template <typename... Args>
     explicit Statement(std::string_view query, Args&&... args)
-        : ManagedObject{cass_statement_new_n(query.data(), query.size(), sizeof...(args)), kDELETER}
+        : ManagedObject{cass_statement_new(query.data(), sizeof...(args)), deleter}
     {
         cass_statement_set_consistency(*this, CASS_CONSISTENCY_QUORUM);
         cass_statement_set_is_idempotent(*this, cass_true);
         bind<Args...>(std::forward<Args>(args)...);
     }
-    /* implicit */ Statement(CassStatement* ptr) : ManagedObject{ptr, kDELETER}
+    /* implicit */ Statement(CassStatement* ptr) : ManagedObject{ptr, deleter}
     {
         cass_statement_set_consistency(*this, CASS_CONSISTENCY_QUORUM);
         cass_statement_set_is_idempotent(*this, cass_true);
@@ -119,9 +119,6 @@ public:
             // reinterpret_cast is needed here :'(
             auto const rc = bindBytes(reinterpret_cast<unsigned char const*>(value.data()), value.size());
             throwErrorIfNeeded(rc, "Bind string (as bytes)");
-        } else if constexpr (std::is_convertible_v<DecayedType, Text>) {
-            auto const rc = cass_statement_bind_string_n(*this, idx, value.text.c_str(), value.text.size());
-            throwErrorIfNeeded(rc, "Bind string (as TEXT)");
         } else if constexpr (std::is_same_v<DecayedType, UintTupleType> ||
                              std::is_same_v<DecayedType, UintByteTupleType>) {
             auto const rc = cass_statement_bind_tuple(*this, idx, Tuple{std::forward<Type>(value)});
@@ -153,10 +150,10 @@ public:
  * This is used to produce Statement objects that can be executed.
  */
 class PreparedStatement : public ManagedObject<CassPrepared const> {
-    static constexpr auto kDELETER = [](CassPrepared const* ptr) { cass_prepared_free(ptr); };
+    static constexpr auto deleter = [](CassPrepared const* ptr) { cass_prepared_free(ptr); };
 public:
-    /* implicit */ PreparedStatement(CassPrepared const* ptr) : ManagedObject{ptr, kDELETER}
+    /* implicit */ PreparedStatement(CassPrepared const* ptr) : ManagedObject{ptr, deleter}
     {
     }

View File

@@ -24,17 +24,17 @@
 #include <cassandra.h>
 namespace {
-constexpr auto kTUPLE_DELETER = [](CassTuple* ptr) { cass_tuple_free(ptr); };
-constexpr auto kTUPLE_ITERATOR_DELETER = [](CassIterator* ptr) { cass_iterator_free(ptr); };
+constexpr auto tupleDeleter = [](CassTuple* ptr) { cass_tuple_free(ptr); };
+constexpr auto tupleIteratorDeleter = [](CassIterator* ptr) { cass_iterator_free(ptr); };
 } // namespace
 namespace data::cassandra::impl {
-/* implicit */ Tuple::Tuple(CassTuple* ptr) : ManagedObject{ptr, kTUPLE_DELETER}
+/* implicit */ Tuple::Tuple(CassTuple* ptr) : ManagedObject{ptr, tupleDeleter}
 {
 }
-/* implicit */ TupleIterator::TupleIterator(CassIterator* ptr) : ManagedObject{ptr, kTUPLE_ITERATOR_DELETER}
+/* implicit */ TupleIterator::TupleIterator(CassIterator* ptr) : ManagedObject{ptr, tupleIteratorDeleter}
 {
 }

View File

@@ -37,14 +37,14 @@
 namespace data::cassandra::impl {
 class Tuple : public ManagedObject<CassTuple> {
-    static constexpr auto kDELETER = [](CassTuple* ptr) { cass_tuple_free(ptr); };
+    static constexpr auto deleter = [](CassTuple* ptr) { cass_tuple_free(ptr); };
 public:
     /* implicit */ Tuple(CassTuple* ptr);
     template <typename... Types>
     explicit Tuple(std::tuple<Types...>&& value)
-        : ManagedObject{cass_tuple_new(std::tuple_size<std::tuple<Types...>>{}), kDELETER}
+        : ManagedObject{cass_tuple_new(std::tuple_size<std::tuple<Types...>>{}), deleter}
     {
         std::apply(std::bind_front(&Tuple::bind<Types...>, this), std::move(value));
     }

View File

@@ -64,12 +64,8 @@ public:
      * @param backend The backend to use
      * @param cache The cache to load into
      */
-    CacheLoader(
-        util::config::ClioConfigDefinition const& config,
-        std::shared_ptr<BackendInterface> const& backend,
-        CacheType& cache
-    )
-        : backend_{backend}, cache_{cache}, settings_{makeCacheLoaderSettings(config)}, ctx_{settings_.numThreads}
+    CacheLoader(util::Config const& config, std::shared_ptr<BackendInterface> const& backend, CacheType& cache)
+        : backend_{backend}, cache_{cache}, settings_{make_CacheLoaderSettings(config)}, ctx_{settings_.numThreads}
     {
     }
@@ -130,7 +126,6 @@ public:
     void
     stop() noexcept
     {
-        if (loader_ != nullptr)
         loader_->stop();
     }
@@ -140,7 +135,6 @@ public:
     void
     wait() noexcept
     {
-        if (loader_ != nullptr)
        loader_->wait();
     }
 };

View File

@@ -19,12 +19,11 @@
 #include "etl/CacheLoaderSettings.hpp"
-#include "util/newconfig/ConfigDefinition.hpp"
+#include "util/config/Config.hpp"
 #include <boost/algorithm/string/predicate.hpp>
 #include <cstddef>
-#include <cstdint>
 #include <string>
 namespace etl {
@@ -48,29 +47,31 @@ CacheLoaderSettings::isDisabled() const
 }
 [[nodiscard]] CacheLoaderSettings
-makeCacheLoaderSettings(util::config::ClioConfigDefinition const& config)
+make_CacheLoaderSettings(util::Config const& config)
 {
     CacheLoaderSettings settings;
-    settings.numThreads = config.get<uint16_t>("io_threads");
-    auto const cache = config.getObject("cache");
-    // Given diff number to generate cursors
-    settings.numCacheDiffs = cache.get<std::size_t>("num_diffs");
-    // Given cursors number fetching from diff
-    settings.numCacheCursorsFromDiff = cache.get<std::size_t>("num_cursors_from_diff");
-    // Given cursors number fetching from account
-    settings.numCacheCursorsFromAccount = cache.get<std::size_t>("num_cursors_from_account");
-    settings.numCacheMarkers = cache.get<std::size_t>("num_markers");
-    settings.cachePageFetchSize = cache.get<std::size_t>("page_fetch_size");
-    auto const entry = cache.get<std::string>("load");
-    if (boost::iequals(entry, "sync"))
-        settings.loadStyle = CacheLoaderSettings::LoadStyle::SYNC;
-    if (boost::iequals(entry, "async"))
-        settings.loadStyle = CacheLoaderSettings::LoadStyle::ASYNC;
-    if (boost::iequals(entry, "none") or boost::iequals(entry, "no"))
-        settings.loadStyle = CacheLoaderSettings::LoadStyle::NONE;
+    settings.numThreads = config.valueOr("io_threads", settings.numThreads);
+    if (config.contains("cache")) {
+        auto const cache = config.section("cache");
+        // Given diff number to generate cursors
+        settings.numCacheDiffs = cache.valueOr<size_t>("num_diffs", settings.numCacheDiffs);
+        // Given cursors number fetching from diff
+        settings.numCacheCursorsFromDiff = cache.valueOr<size_t>("num_cursors_from_diff", 0);
+        // Given cursors number fetching from account
+        settings.numCacheCursorsFromAccount = cache.valueOr<size_t>("num_cursors_from_account", 0);
+        settings.numCacheMarkers = cache.valueOr<size_t>("num_markers", settings.numCacheMarkers);
+        settings.cachePageFetchSize = cache.valueOr<size_t>("page_fetch_size", settings.cachePageFetchSize);
+        if (auto entry = cache.maybeValue<std::string>("load"); entry) {
+            if (boost::iequals(*entry, "sync"))
+                settings.loadStyle = CacheLoaderSettings::LoadStyle::SYNC;
+            if (boost::iequals(*entry, "async"))
+                settings.loadStyle = CacheLoaderSettings::LoadStyle::ASYNC;
+            if (boost::iequals(*entry, "none") or boost::iequals(*entry, "no"))
+                settings.loadStyle = CacheLoaderSettings::LoadStyle::NONE;
+        }
+    }
     return settings;
 }
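Both versions map the `cache.load` entry onto an enum with case-insensitive matching via `boost::iequals`. A compact sketch of just that mapping:

```cpp
#include <boost/algorithm/string/predicate.hpp>

#include <iostream>
#include <string>

enum class LoadStyle { ASYNC, SYNC, NONE };

LoadStyle
parseLoadStyle(std::string const& entry)
{
    if (boost::iequals(entry, "sync"))
        return LoadStyle::SYNC;
    if (boost::iequals(entry, "none") or boost::iequals(entry, "no"))
        return LoadStyle::NONE;
    return LoadStyle::ASYNC;  // the fallback in both versions
}

int main()
{
    std::cout << (parseLoadStyle("SyNc") == LoadStyle::SYNC) << '\n';  // 1
}
```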

View File

@@ -19,7 +19,7 @@
 #pragma once
-#include "util/newconfig/ConfigDefinition.hpp"
+#include "util/config/Config.hpp"
 #include <cstddef>
@@ -64,6 +64,6 @@ struct CacheLoaderSettings {
  * @returns The CacheLoaderSettings object
  */
 [[nodiscard]] CacheLoaderSettings
-makeCacheLoaderSettings(util::config::ClioConfigDefinition const& config);
+make_CacheLoaderSettings(util::Config const& config);
 } // namespace etl

View File

@@ -26,8 +26,8 @@
 #include "feed/SubscriptionManagerInterface.hpp"
 #include "util/Assert.hpp"
 #include "util/Constants.hpp"
+#include "util/config/Config.hpp"
 #include "util/log/Logger.hpp"
-#include "util/newconfig/ConfigDefinition.hpp"
 #include <boost/asio/io_context.hpp>
 #include <xrpl/beast/core/CurrentThreadName.h>
@@ -88,9 +88,9 @@ ETLService::runETLPipeline(uint32_t startSequence, uint32_t numExtractors)
     auto const end = std::chrono::system_clock::now();
     auto const lastPublishedSeq = ledgerPublisher_.getLastPublishedSequence();
-    static constexpr auto kNANOSECONDS_PER_SECOND = 1'000'000'000.0;
+    static constexpr auto NANOSECONDS_PER_SECOND = 1'000'000'000.0;
     LOG(log_.debug()) << "Extracted and wrote " << lastPublishedSeq.value_or(startSequence) - startSequence << " in "
-                      << ((end - begin).count()) / kNANOSECONDS_PER_SECOND;
+                      << ((end - begin).count()) / NANOSECONDS_PER_SECOND;
     state_.isWriting = false;
@@ -134,7 +134,7 @@ ETLService::monitor()
         }
     } catch (std::runtime_error const& e) {
         LOG(log_.fatal()) << "Failed to load initial ledger: " << e.what();
-        amendmentBlockHandler_.notifyAmendmentBlocked();
+        amendmentBlockHandler_.onAmendmentBlock();
         return;
     }
@@ -168,7 +168,7 @@ ETLService::publishNextSequence(uint32_t nextSequence)
     if (auto rng = backend_->hardFetchLedgerRangeNoThrow(); rng && rng->maxSequence >= nextSequence) {
         ledgerPublisher_.publish(nextSequence, {});
         ++nextSequence;
-    } else if (networkValidatedLedgers_->waitUntilValidatedByNetwork(nextSequence, util::kMILLISECONDS_PER_SECOND)) {
+    } else if (networkValidatedLedgers_->waitUntilValidatedByNetwork(nextSequence, util::MILLISECONDS_PER_SECOND)) {
         LOG(log_.info()) << "Ledger with sequence = " << nextSequence << " has been validated by the network. "
                          << "Attempting to find in database and publish";
@@ -178,8 +178,8 @@ ETLService::publishNextSequence(uint32_t nextSequence)
         // database after the specified number of attempts. publishLedger()
         // waits one second between each attempt to read the ledger from the
         // database
-        constexpr size_t kTIMEOUT_SECONDS = 10;
-        bool const success = ledgerPublisher_.publish(nextSequence, kTIMEOUT_SECONDS);
+        constexpr size_t timeoutSeconds = 10;
+        bool const success = ledgerPublisher_.publish(nextSequence, timeoutSeconds);
         if (!success) {
             LOG(log_.warn()) << "Failed to publish ledger with sequence = " << nextSequence << " . Beginning ETL";
@@ -233,7 +233,7 @@ ETLService::monitorReadOnly()
             // if we can't, wait until it's validated by the network, or 1 second passes, whichever occurs
             // first. Even if we don't hear from rippled, if ledgers are being written to the db, we publish
             // them.
-            networkValidatedLedgers_->waitUntilValidatedByNetwork(latestSequence, util::kMILLISECONDS_PER_SECOND);
+            networkValidatedLedgers_->waitUntilValidatedByNetwork(latestSequence, util::MILLISECONDS_PER_SECOND);
         }
     }
 }
@@ -262,7 +262,7 @@ ETLService::doWork()
 }
 ETLService::ETLService(
-    util::config::ClioConfigDefinition const& config,
+    util::Config const& config,
     boost::asio::io_context& ioc,
     std::shared_ptr<BackendInterface> backend,
     std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
@@ -280,9 +280,9 @@ ETLService::ETLService(
 {
     startSequence_ = config.maybeValue<uint32_t>("start_sequence");
     finishSequence_ = config.maybeValue<uint32_t>("finish_sequence");
-    state_.isReadOnly = config.get<bool>("read_only");
-    extractorThreads_ = config.get<uint32_t>("extractor_threads");
-    txnThreshold_ = config.get<std::size_t>("txn_threshold");
+    state_.isReadOnly = config.valueOr("read_only", static_cast<bool>(state_.isReadOnly));
+    extractorThreads_ = config.valueOr<uint32_t>("extractor_threads", extractorThreads_);
+    txnThreshold_ = config.valueOr<size_t>("txn_threshold", txnThreshold_);
     // This should probably be done in the backend factory but we don't have state available until here
     backend_->setCorruptionDetector(CorruptionDetector<data::LedgerCache>{state_, backend->cache()});

View File

@@ -42,7 +42,6 @@
 #include <org/xrpl/rpc/v1/get_ledger.pb.h>
 #include <xrpl/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
-#include <concepts>
 #include <cstddef>
 #include <cstdint>
 #include <memory>
@@ -59,16 +58,6 @@ struct NFTsData;
  */
 namespace etl {
-/**
- * @brief A tag class to help identify ETLService in templated code.
- */
-struct ETLServiceTag {
-    virtual ~ETLServiceTag() = default;
-};
-template <typename T>
-concept SomeETLService = std::derived_from<T, ETLServiceTag>;
 /**
  * @brief This class is responsible for continuously extracting data from a p2p node, and writing that data to the
  * databases.
@@ -82,7 +71,7 @@ concept SomeETLService = std::derived_from<T, ETLServiceTag>;
 * the others will fall back to monitoring/publishing. In this sense, this class dynamically transitions from monitoring
 * to writing and from writing to monitoring, based on the activity of other processes running on different machines.
 */
-class ETLService : public ETLServiceTag {
+class ETLService {
     // TODO: make these template parameters in ETLService
     using LoadBalancerType = LoadBalancer;
     using DataPipeType = etl::impl::ExtractionDataPipe<org::xrpl::rpc::v1::GetLedgerResponse>;
@@ -130,7 +119,7 @@ public:
     * @param ledgers The network validated ledgers datastructure
     */
    ETLService(
-        util::config::ClioConfigDefinition const& config,
+        util::Config const& config,
        boost::asio::io_context& ioc,
        std::shared_ptr<BackendInterface> backend,
        std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
@@ -138,11 +127,6 @@ public:
        std::shared_ptr<NetworkValidatedLedgersInterface> ledgers
    );
-    /**
-     * @brief Move constructor is deleted because ETL service shares its fields by reference
-     */
-    ETLService(ETLService&&) = delete;
    /**
     * @brief A factory function to spawn new ETLService instances.
     *
@@ -157,8 +141,8 @@ public:
     * @return A shared pointer to a new instance of ETLService
     */
    static std::shared_ptr<ETLService>
-    makeETLService(
-        util::config::ClioConfigDefinition const& config,
+    make_ETLService(
+        util::Config const& config,
        boost::asio::io_context& ioc,
        std::shared_ptr<BackendInterface> backend,
        std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
@@ -175,20 +159,10 @@ public:
    /**
     * @brief Stops components and joins worker thread.
     */
-    ~ETLService() override
+    ~ETLService()
    {
-        if (not state_.isStopping)
-            stop();
-    }
-    /**
-     * @brief Stop the ETL service.
-     * @note This method blocks until the ETL service has stopped.
-     */
-    void
-    stop()
-    {
-        LOG(log_.info()) << "Stop called";
+        LOG(log_.info()) << "onStop called";
+        LOG(log_.debug()) << "Stopping Reporting ETL";
        state_.isStopping = true;
        cacheLoader_.stop();
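The `ETLServiceTag`/`SomeETLService` pair on one side of this diff is a small pattern worth noting: an empty base with a virtual destructor acts as a marker, and a `std::derived_from` concept then constrains templates to "anything that is an ETL service" without naming the concrete type. A sketch:

```cpp
#include <concepts>

struct ETLServiceTag {
    virtual ~ETLServiceTag() = default;
};

template <typename T>
concept SomeETLService = std::derived_from<T, ETLServiceTag>;

// A mock in tests (or the real service) only has to inherit the tag.
struct MockETLService : ETLServiceTag {};

template <SomeETLService Service>
void
run(Service&)
{
}

static_assert(SomeETLService<MockETLService>);
```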

View File

@@ -1,65 +0,0 @@
-//------------------------------------------------------------------------------
-/*
-    This file is part of clio: https://github.com/XRPLF/clio
-    Copyright (c) 2024, the clio developers.
-    Permission to use, copy, modify, and distribute this software for any
-    purpose with or without fee is hereby granted, provided that the above
-    copyright notice and this permission notice appear in all copies.
-    THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
-    WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
-    MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
-    ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
-    WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
-    ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
-    OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-*/
-//==============================================================================
-/** @file */
-#pragma once
-#include <xrpl/proto/org/xrpl/rpc/v1/get_ledger.pb.h>
-#include <xrpl/proto/org/xrpl/rpc/v1/ledger.pb.h>
-#include <cstdint>
-#include <optional>
-namespace etl {
-/**
- * @brief An interface for LedgerFetcher
- */
-struct LedgerFetcherInterface {
-    using GetLedgerResponseType = org::xrpl::rpc::v1::GetLedgerResponse;
-    using OptionalGetLedgerResponseType = std::optional<GetLedgerResponseType>;
-    virtual ~LedgerFetcherInterface() = default;
-    /**
-     * @brief Extract data for a particular ledger from an ETL source
-     *
-     * This function continously tries to extract the specified ledger (using all available ETL sources) until the
-     * extraction succeeds, or the server shuts down.
-     *
-     * @param seq sequence of the ledger to extract
-     * @return Ledger header and transaction+metadata blobs; Empty optional if the server is shutting down
-     */
-    [[nodiscard]] virtual OptionalGetLedgerResponseType
-    fetchData(uint32_t seq) = 0;
-    /**
-     * @brief Extract diff data for a particular ledger from an ETL source.
-     *
-     * This function continously tries to extract the specified ledger (using all available ETL sources) until the
-     * extraction succeeds, or the server shuts down.
-     *
-     * @param seq sequence of the ledger to extract
-     * @return Ledger data diff between sequance and parent; Empty optional if the server is shutting down
-     */
-    [[nodiscard]] virtual OptionalGetLedgerResponseType
-    fetchDataAndDiff(uint32_t seq) = 0;
-};
-} // namespace etl

View File

@@ -26,13 +26,9 @@
 #include "feed/SubscriptionManagerInterface.hpp"
 #include "rpc/Errors.hpp"
 #include "util/Assert.hpp"
-#include "util/CoroutineGroup.hpp"
 #include "util/Random.hpp"
 #include "util/ResponseExpirationCache.hpp"
 #include "util/log/Logger.hpp"
-#include "util/newconfig/ArrayView.hpp"
-#include "util/newconfig/ConfigDefinition.hpp"
-#include "util/newconfig/ObjectView.hpp"
 #include <boost/asio/io_context.hpp>
 #include <boost/asio/spawn.hpp>
@@ -55,13 +51,13 @@
 #include <utility>
 #include <vector>
-using namespace util::config;
+using namespace util;
 namespace etl {
 std::shared_ptr<LoadBalancer>
-LoadBalancer::makeLoadBalancer(
-    ClioConfigDefinition const& config,
+LoadBalancer::make_LoadBalancer(
+    Config const& config,
     boost::asio::io_context& ioc,
     std::shared_ptr<BackendInterface> backend,
     std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
@@ -75,7 +71,7 @@ LoadBalancer::makeLoadBalancer(
 }
 LoadBalancer::LoadBalancer(
-    ClioConfigDefinition const& config,
+    Config const& config,
     boost::asio::io_context& ioc,
     std::shared_ptr<BackendInterface> backend,
     std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
@@ -83,23 +79,23 @@ LoadBalancer::LoadBalancer(
     SourceFactory sourceFactory
 )
 {
-    auto const forwardingCacheTimeout = config.get<float>("forwarding.cache_timeout");
+    auto const forwardingCacheTimeout = config.valueOr<float>("forwarding.cache_timeout", 0.f);
     if (forwardingCacheTimeout > 0.f) {
         forwardingCache_ = util::ResponseExpirationCache{
-            util::config::ClioConfigDefinition::toMilliseconds(forwardingCacheTimeout),
+            Config::toMilliseconds(forwardingCacheTimeout),
             {"server_info", "server_state", "server_definitions", "fee", "ledger_closed"}
         };
     }
-    auto const numMarkers = config.getValueView("num_markers");
-    if (numMarkers.hasValue()) {
-        auto const value = numMarkers.asIntType<uint32_t>();
-        downloadRanges_ = value;
+    static constexpr std::uint32_t MAX_DOWNLOAD = 256;
+    if (auto value = config.maybeValue<uint32_t>("num_markers"); value) {
+        ASSERT(*value > 0 and *value <= MAX_DOWNLOAD, "'num_markers' value in config must be in range 1-256");
+        downloadRanges_ = *value;
     } else if (backend->fetchLedgerRange()) {
         downloadRanges_ = 4;
     }
-    auto const allowNoEtl = config.get<bool>("allow_no_etl");
+    auto const allowNoEtl = config.valueOr("allow_no_etl", false);
     auto const checkOnETLFailure = [this, allowNoEtl](std::string const& log) {
         LOG(log_.warn()) << log;
@@ -110,12 +106,10 @@ LoadBalancer::LoadBalancer(
         }
     };
-    auto const forwardingTimeout =
-        ClioConfigDefinition::toMilliseconds(config.get<float>("forwarding.request_timeout"));
-    auto const etlArray = config.getArray("etl_sources");
-    for (auto it = etlArray.begin<ObjectView>(); it != etlArray.end<ObjectView>(); ++it) {
+    auto const forwardingTimeout = Config::toMilliseconds(config.valueOr<float>("forwarding.request_timeout", 10.));
+    for (auto const& entry : config.array("etl_sources")) {
         auto source = sourceFactory(
-            *it,
+            entry,
             ioc,
             backend,
             subscriptions,
@@ -236,7 +230,7 @@ LoadBalancer::forwardToRippled(
 )
 {
     if (not request.contains("command"))
-        return std::unexpected{rpc::ClioError::RpcCommandIsMissing};
+        return std::unexpected{rpc::ClioError::rpcCOMMAND_IS_MISSING};
     auto const cmd = boost::json::value_to<std::string>(request.at("command"));
     if (forwardingCache_) {
@@ -250,10 +244,10 @@ LoadBalancer::forwardToRippled(
     auto numAttempts = 0u;
-    auto xUserValue = isAdmin ? kADMIN_FORWARDING_X_USER_VALUE : kUSER_FORWARDING_X_USER_VALUE;
+    auto xUserValue = isAdmin ? ADMIN_FORWARDING_X_USER_VALUE : USER_FORWARDING_X_USER_VALUE;
     std::optional<boost::json::object> response;
-    rpc::ClioError error = rpc::ClioError::EtlConnectionError;
+    rpc::ClioError error = rpc::ClioError::etlCONNECTION_ERROR;
     while (numAttempts < sources_.size()) {
         auto res = sources_[sourceIdx]->forwardToRippled(request, clientIp, xUserValue, yield);
         if (res) {
@@ -337,16 +331,6 @@ LoadBalancer::getETLState() noexcept
     return etlState_;
 }
-void
-LoadBalancer::stop(boost::asio::yield_context yield)
-{
-    util::CoroutineGroup group{yield};
-    std::ranges::for_each(sources_, [&group, yield](auto& source) {
-        group.spawn(yield, [&source](boost::asio::yield_context innerYield) { source->stop(innerYield); });
-    });
-    group.asyncWait(yield);
-}
 void
 LoadBalancer::chooseForwardingSource()
 {
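The `num_markers` handling in the hunk above is worth spelling out: an optional config value is range-checked before it overrides the default, and the fallback depends on whether a ledger range already exists in the database. A standalone sketch of that decision (constants and the `assert` are illustrative stand-ins for the real `ASSERT` macro):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

std::uint32_t
resolveDownloadRanges(std::optional<std::uint32_t> numMarkers, bool haveLedgerRange)
{
    static constexpr std::uint32_t MAX_DOWNLOAD = 256;
    static constexpr std::uint32_t DEFAULT_DOWNLOAD_RANGES = 16;

    if (numMarkers) {
        assert(*numMarkers > 0 && *numMarkers <= MAX_DOWNLOAD);  // reject nonsense config
        return *numMarkers;
    }
    // With an existing ledger range, only a small catch-up is expected.
    return haveLedgerRange ? 4 : DEFAULT_DOWNLOAD_RANGES;
}
```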

View File

@@ -27,8 +27,8 @@
 #include "rpc/Errors.hpp"
 #include "util/Mutex.hpp"
 #include "util/ResponseExpirationCache.hpp"
+#include "util/config/Config.hpp"
 #include "util/log/Logger.hpp"
-#include "util/newconfig/ConfigDefinition.hpp"
 #include <boost/asio.hpp>
 #include <boost/asio/io_context.hpp>
@@ -41,7 +41,6 @@
 #include <xrpl/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
 #include <chrono>
-#include <concepts>
 #include <cstdint>
 #include <expected>
 #include <memory>
@@ -52,16 +51,6 @@
 namespace etl {
-/**
- * @brief A tag class to help identify LoadBalancer in templated code.
- */
-struct LoadBalancerTag {
-    virtual ~LoadBalancerTag() = default;
-};
-template <typename T>
-concept SomeLoadBalancer = std::derived_from<T, LoadBalancerTag>;
 /**
  * @brief This class is used to manage connections to transaction processing processes.
  *
@@ -69,14 +58,14 @@ concept SomeLoadBalancer = std::derived_from<T, LoadBalancerTag>;
 * which ledgers have been validated by the network, and the range of ledgers each etl source has). This class also
 * allows requests for ledger data to be load balanced across all possible ETL sources.
 */
-class LoadBalancer : public LoadBalancerTag {
+class LoadBalancer {
 public:
     using RawLedgerObjectType = org::xrpl::rpc::v1::RawLedgerObject;
     using GetLedgerResponseType = org::xrpl::rpc::v1::GetLedgerResponse;
     using OptionalGetLedgerResponseType = std::optional<GetLedgerResponseType>;
 private:
-    static constexpr std::uint32_t kDEFAULT_DOWNLOAD_RANGES = 16;
+    static constexpr std::uint32_t DEFAULT_DOWNLOAD_RANGES = 16;
     util::Logger log_{"ETL"};
     // Forwarding cache must be destroyed after sources because sources have a callback to invalidate cache
@@ -86,7 +75,7 @@ private:
     std::vector<SourcePtr> sources_;
     std::optional<ETLState> etlState_;
     std::uint32_t downloadRanges_ =
-        kDEFAULT_DOWNLOAD_RANGES; /*< The number of markers to use when downloading initial ledger */
+        DEFAULT_DOWNLOAD_RANGES; /*< The number of markers to use when downloading initial ledger */
     // Using mutext instead of atomic_bool because choosing a new source to
     // forward messages should be done with a mutual exclusion otherwise there will be a race condition
@@ -96,12 +85,12 @@ public:
     /**
      * @brief Value for the X-User header when forwarding admin requests
      */
-    static constexpr std::string_view kADMIN_FORWARDING_X_USER_VALUE = "clio_admin";
+    static constexpr std::string_view ADMIN_FORWARDING_X_USER_VALUE = "clio_admin";
     /**
      * @brief Value for the X-User header when forwarding user requests
      */
-    static constexpr std::string_view kUSER_FORWARDING_X_USER_VALUE = "clio_user";
+    static constexpr std::string_view USER_FORWARDING_X_USER_VALUE = "clio_user";
     /**
      * @brief Create an instance of the load balancer.
@@ -114,12 +103,12 @@ public:
      * @param sourceFactory A factory function to create a source
      */
     LoadBalancer(
-        util::config::ClioConfigDefinition const& config,
+        util::Config const& config,
         boost::asio::io_context& ioc,
         std::shared_ptr<BackendInterface> backend,
         std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
         std::shared_ptr<NetworkValidatedLedgersInterface> validatedLedgers,
-        SourceFactory sourceFactory = makeSource
+        SourceFactory sourceFactory = make_Source
     );
     /**
@@ -134,16 +123,16 @@ public:
      * @return A shared pointer to a new instance of LoadBalancer
      */
     static std::shared_ptr<LoadBalancer>
-    makeLoadBalancer(
-        util::config::ClioConfigDefinition const& config,
+    make_LoadBalancer(
+        util::Config const& config,
         boost::asio::io_context& ioc,
         std::shared_ptr<BackendInterface> backend,
         std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
         std::shared_ptr<NetworkValidatedLedgersInterface> validatedLedgers,
-        SourceFactory sourceFactory = makeSource
+        SourceFactory sourceFactory = make_Source
     );
-    ~LoadBalancer() override;
+    ~LoadBalancer();
     /**
      * @brief Load the initial ledger, writing data to the queue.
@@ -214,15 +203,6 @@ public:
     std::optional<ETLState>
     getETLState() noexcept;
-    /**
-     * @brief Stop the load balancer. This will stop all subscription sources.
-     * @note This function will asynchronously wait for all sources to stop.
-     *
-     * @param yield The coroutine context
-     */
-    void
-    stop(boost::asio::yield_context yield);
 private:
     /**
      * @brief Execute a function on a randomly selected source.
View File
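The hunk above removes a tag-class/concept pair. The pattern is worth noting: deriving from an empty polymorphic base lets templated code say "accept anything that is a LoadBalancer" via `std::derived_from`, without naming the concrete type. A minimal self-contained sketch of the same idiom (illustrative names, not Clio code):

```cpp
#include <concepts>
#include <iostream>

// Tag base class: deriving from it marks a type as "a load balancer".
struct LoadBalancerTag {
    virtual ~LoadBalancerTag() = default;
};

// Concept satisfied by any type publicly derived from the tag.
template <typename T>
concept SomeLoadBalancer = std::derived_from<T, LoadBalancerTag>;

struct MyBalancer : LoadBalancerTag {
    int pick() const { return 42; }
};

// Templated code constrained to load balancers only.
template <SomeLoadBalancer Balancer>
void use(Balancer const& lb) {
    std::cout << "picked source " << lb.pick() << '\n';
}

int main() {
    MyBalancer lb;
    use(lb);    // compiles: MyBalancer derives from LoadBalancerTag
    // use(123); // would not compile: int does not satisfy SomeLoadBalancer
}
```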

@@ -27,7 +27,6 @@
 #include <xrpl/protocol/SField.h>
 #include <xrpl/protocol/STLedgerEntry.h>
 #include <xrpl/protocol/STObject.h>
-#include <xrpl/protocol/Serializer.h>
 #include <xrpl/protocol/TER.h>
 #include <xrpl/protocol/TxFormats.h>
@@ -42,7 +41,7 @@ namespace etl {
  * @param txMeta Transaction metadata
  * @return MPT and holder account pair
  */
-std::optional<MPTHolderData>
+static std::optional<MPTHolderData>
 getMPTokenAuthorize(ripple::TxMeta const& txMeta)
 {
     for (ripple::STObject const& node : txMeta.getNodes()) {
@@ -51,9 +50,7 @@ getMPTokenAuthorize(ripple::TxMeta const& txMeta)
         if (node.getFName() == ripple::sfCreatedNode) {
             auto const& newMPT = node.peekAtField(ripple::sfNewFields).downcast<ripple::STObject>();
-            return MPTHolderData{
-                .mptID = newMPT[ripple::sfMPTokenIssuanceID], .holder = newMPT.getAccountID(ripple::sfAccount)
-            };
+            return MPTHolderData{newMPT[ripple::sfMPTokenIssuanceID], newMPT.getAccountID(ripple::sfAccount)};
         }
     }
     return {};
@@ -80,7 +77,7 @@ getMPTHolderFromObj(std::string const& key, std::string const& blob)
     auto const mptIssuanceID = sle[ripple::sfMPTokenIssuanceID];
     auto const holder = sle.getAccountID(ripple::sfAccount);
-    return MPTHolderData{.mptID = mptIssuanceID, .holder = holder};
+    return MPTHolderData{mptIssuanceID, holder};
 }
 } // namespace etl

View File
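The MPTHelpers change above replaces C++20 designated initializers (`.mptID = ..., .holder = ...`) with positional aggregate initialization. Both mean the same thing here, but the designated form is self-checking: it fails to compile if a field is misnamed or written out of declaration order, which matters for a two-member struct whose arguments could silently be swapped. A hedged sketch with an assumed aggregate (the real `MPTHolderData` layout is not shown in this diff):

```cpp
#include <string>

// Assumed two-member aggregate, loosely modeled on MPTHolderData.
struct HolderData {
    std::string mptID;
    std::string holder;
};

int main() {
    // Positional: relies on the member order staying (mptID, holder).
    HolderData const a{"issuanceID", "accountID"};

    // Designated: self-documenting, and a compile error if fields are
    // renamed or listed out of declaration order.
    HolderData const b{.mptID = "issuanceID", .holder = "accountID"};

    return (a.mptID == b.mptID && a.holder == b.holder) ? 0 : 1;
}
```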

@@ -47,17 +47,6 @@
 namespace etl {
-std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
-getNftokenModifyData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
-{
-    auto const tokenID = sttx.getFieldH256(ripple::sfNFTokenID);
-    // note: sfURI is optional; if it is absent, we update the URI to an empty string
-    return {
-        {NFTTransactionsData(sttx.getFieldH256(ripple::sfNFTokenID), txMeta, sttx.getTransactionID())},
-        NFTsData(tokenID, txMeta, sttx.getFieldVL(ripple::sfURI))
-    };
-}
-
 std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
 getNFTokenMintData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
 {
@@ -84,9 +73,9 @@ getNFTokenMintData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
         if (node.getFName() == ripple::sfCreatedNode) {
             ripple::STArray const& toAddNFTs =
                 node.peekAtField(ripple::sfNewFields).downcast<ripple::STObject>().getFieldArray(ripple::sfNFTokens);
-            std::ranges::transform(
-                toAddNFTs,
+            std::transform(
+                toAddNFTs.begin(),
+                toAddNFTs.end(),
                 std::back_inserter(finalIDs),
                 [](ripple::STObject const& nft) { return nft.getFieldH256(ripple::sfNFTokenID); }
             );
@@ -109,18 +98,18 @@ getNFTokenMintData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
                 continue;
             ripple::STArray const& toAddNFTs = previousFields.getFieldArray(ripple::sfNFTokens);
-            std::ranges::transform(
-                toAddNFTs,
+            std::transform(
+                toAddNFTs.begin(),
+                toAddNFTs.end(),
                 std::back_inserter(prevIDs),
                 [](ripple::STObject const& nft) { return nft.getFieldH256(ripple::sfNFTokenID); }
             );
             ripple::STArray const& toAddFinalNFTs =
                 node.peekAtField(ripple::sfFinalFields).downcast<ripple::STObject>().getFieldArray(ripple::sfNFTokens);
-            std::ranges::transform(
-                toAddFinalNFTs,
+            std::transform(
+                toAddFinalNFTs.begin(),
+                toAddFinalNFTs.end(),
                 std::back_inserter(finalIDs),
                 [](ripple::STObject const& nft) { return nft.getFieldH256(ripple::sfNFTokenID); }
             );
@@ -132,7 +121,6 @@ getNFTokenMintData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
     // Find the first NFT ID that doesn't match. We're looking for an
     // added NFT, so the one we want will be the mismatch in finalIDs.
-    // NOLINTNEXTLINE(modernize-use-ranges)
     auto const diff = std::mismatch(finalIDs.begin(), finalIDs.end(), prevIDs.begin(), prevIDs.end());
     // There should always be a difference so the returned finalIDs
@@ -177,7 +165,7 @@ getNFTokenBurnData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
                 node.peekAtField(ripple::sfPreviousFields).downcast<ripple::STObject>();
             if (previousFields.isFieldPresent(ripple::sfNFTokens))
                 prevNFTs = previousFields.getFieldArray(ripple::sfNFTokens);
-        } else if (node.getFName() == ripple::sfDeletedNode) {
+        } else if (!prevNFTs && node.getFName() == ripple::sfDeletedNode) {
             prevNFTs =
                 node.peekAtField(ripple::sfFinalFields).downcast<ripple::STObject>().getFieldArray(ripple::sfNFTokens);
         }
@@ -273,7 +261,7 @@ getNFTokenAcceptOfferData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx
             .getFieldArray(ripple::sfNFTokens);
     }();
-    auto const nft = std::ranges::find_if(nfts, [&tokenID](ripple::STObject const& candidate) {
+    auto const nft = std::find_if(nfts.begin(), nfts.end(), [&tokenID](ripple::STObject const& candidate) {
         return candidate.getFieldH256(ripple::sfNFTokenID) == tokenID;
     });
     if (nft != nfts.end()) {
@@ -310,10 +298,10 @@ getNFTokenCancelOfferData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx
     std::ranges::sort(txs, [](NFTTransactionsData const& a, NFTTransactionsData const& b) {
         return a.tokenID < b.tokenID;
     });
-    auto [last, end] = std::ranges::unique(txs, [](NFTTransactionsData const& a, NFTTransactionsData const& b) {
+    auto last = std::unique(txs.begin(), txs.end(), [](NFTTransactionsData const& a, NFTTransactionsData const& b) {
         return a.tokenID == b.tokenID;
     });
-    txs.erase(last, end);
+    txs.erase(last, txs.end());
     return {txs, {}};
 }
@@ -347,9 +335,6 @@ getNFTDataFromTx(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
         case ripple::TxType::ttNFTOKEN_CREATE_OFFER:
             return getNFTokenCreateOfferData(txMeta, sttx);
-        case ripple::TxType::ttNFTOKEN_MODIFY:
-            return getNftokenModifyData(txMeta, sttx);
         default:
             return {{}, {}};
     }
@@ -381,9 +366,10 @@ getUniqueNFTsDatas(std::vector<NFTsData> const& nfts)
         return a.tokenID == b.tokenID ? a.transactionIndex > b.transactionIndex : a.tokenID > b.tokenID;
     });
-    auto const [last, end] =
-        std::ranges::unique(results, [](NFTsData const& a, NFTsData const& b) { return a.tokenID == b.tokenID; });
-    results.erase(last, end);
+    auto const last = std::unique(results.begin(), results.end(), [](NFTsData const& a, NFTsData const& b) {
+        return a.tokenID == b.tokenID;
+    });
+    results.erase(last, results.end());
     return results;
 }

View File
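Several hunks above roll `std::ranges` algorithms back to their iterator-pair equivalents. The one real API difference is `unique`: the ranges version returns the whole subrange of leftover elements, while the classic version returns only the first of them, so the matching `erase` calls differ. A minimal sketch of the two erase idioms:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

int main() {
    // C++20 ranges form: unique() returns the subrange [first, last) to erase.
    std::vector<int> a{1, 1, 2, 2, 3};
    auto [first, last] = std::ranges::unique(a);
    a.erase(first, last);
    assert((a == std::vector<int>{1, 2, 3}));

    // Classic iterator form: unique() returns only the new logical end.
    std::vector<int> b{1, 1, 2, 2, 3};
    auto const newEnd = std::unique(b.begin(), b.end());
    b.erase(newEnd, b.end());
    assert((b == std::vector<int>{1, 2, 3}));
}
```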

@@ -33,16 +33,6 @@
 namespace etl {
-/**
- * @brief Get the NFT URI change data from a NFToken Modify transaction
- *
- * @param txMeta Transaction metadata
- * @param sttx The transaction
- * @return NFT URI change data as a pair of transactions and optional NFTsData
- */
-std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
-getNftokenModifyData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx);
-
 /**
  * @brief Get the NFT Token mint data from a transaction
  *

View File

@@ -19,8 +19,6 @@
 #include "etl/NetworkValidatedLedgers.hpp"
-#include <boost/signals2/connection.hpp>
-
 #include <chrono>
 #include <cstdint>
 #include <memory>
@@ -29,7 +27,7 @@
 namespace etl {
 std::shared_ptr<NetworkValidatedLedgers>
-NetworkValidatedLedgers::makeValidatedLedgers()
+NetworkValidatedLedgers::make_ValidatedLedgers()
 {
     return std::make_shared<NetworkValidatedLedgers>();
 }
@@ -37,27 +35,25 @@ NetworkValidatedLedgers::makeValidatedLedgers()
 void
 NetworkValidatedLedgers::push(uint32_t idx)
 {
-    std::lock_guard const lck(mtx_);
-    if (!latest_ || idx > *latest_)
-        latest_ = idx;
-    notificationChannel_(idx);
+    std::lock_guard const lck(m_);
+    if (!max_ || idx > *max_)
+        max_ = idx;
     cv_.notify_all();
 }
 std::optional<uint32_t>
 NetworkValidatedLedgers::getMostRecent()
 {
-    std::unique_lock lck(mtx_);
-    cv_.wait(lck, [this]() { return latest_; });
-    return latest_;
+    std::unique_lock lck(m_);
+    cv_.wait(lck, [this]() { return max_; });
+    return max_;
 }
 bool
 NetworkValidatedLedgers::waitUntilValidatedByNetwork(uint32_t sequence, std::optional<uint32_t> maxWaitMs)
 {
-    std::unique_lock lck(mtx_);
-    auto pred = [sequence, this]() -> bool { return (latest_ && sequence <= *latest_); };
+    std::unique_lock lck(m_);
+    auto pred = [sequence, this]() -> bool { return (max_ && sequence <= *max_); };
     if (maxWaitMs) {
         cv_.wait_for(lck, std::chrono::milliseconds(*maxWaitMs));
     } else {
@@ -66,10 +62,4 @@ NetworkValidatedLedgers::waitUntilValidatedByNetwork(uint32_t sequence, std::opt
     return pred();
 }
-boost::signals2::scoped_connection
-NetworkValidatedLedgers::subscribe(SignalType::slot_type const& subscriber)
-{
-    return notificationChannel_.connect(subscriber);
-}
-
 } // namespace etl

View File
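The `push`/`getMostRecent`/`waitUntilValidatedByNetwork` trio above is a classic condition-variable gate: the writer updates shared state under a mutex and notifies; readers block on a predicate, optionally with a timeout, and re-check the predicate before returning. A stripped-down analogue under assumed names (a sketch of the idiom, not Clio's class; it uses the predicate overload of `wait_for`, which folds the spurious-wakeup re-check into the call):

```cpp
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <optional>

class LatestSequence {
    std::optional<uint32_t> latest_;
    std::mutex mtx_;
    std::condition_variable cv_;

public:
    // Publisher side: record a newer value and wake all waiters.
    void push(uint32_t seq) {
        std::lock_guard const lck(mtx_);
        if (!latest_ || seq > *latest_)
            latest_ = seq;
        cv_.notify_all();
    }

    // Waiter side: true once `seq` is validated, or false after the timeout.
    bool waitFor(uint32_t seq, std::optional<uint32_t> maxWaitMs = {}) {
        std::unique_lock lck(mtx_);
        auto pred = [&] { return latest_ && seq <= *latest_; };
        if (maxWaitMs) {
            // Predicate overload: re-checks pred on every (possibly spurious) wakeup.
            return cv_.wait_for(lck, std::chrono::milliseconds(*maxWaitMs), pred);
        }
        cv_.wait(lck, pred);
        return true;
    }
};
```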

@@ -21,10 +21,6 @@
 #include "etl/NetworkValidatedLedgersInterface.hpp"
-#include <boost/signals2/connection.hpp>
-#include <boost/signals2/signal.hpp>
-#include <boost/signals2/variadic_signal.hpp>
-
 #include <condition_variable>
 #include <cstdint>
 #include <memory>
@@ -42,13 +38,12 @@ namespace etl {
  * remains stopped for the rest of its lifetime.
  */
 class NetworkValidatedLedgers : public NetworkValidatedLedgersInterface {
-    std::optional<uint32_t> latest_;  // currently known latest sequence validated by network
+    // max sequence validated by network
+    std::optional<uint32_t> max_;
-    mutable std::mutex mtx_;
+    mutable std::mutex m_;
     std::condition_variable cv_;
-    SignalType notificationChannel_;
 public:
     /**
      * @brief A factory function for NetworkValidatedLedgers
@@ -56,7 +51,7 @@ public:
      * @return A shared pointer to a new instance of NetworkValidatedLedgers
      */
     static std::shared_ptr<NetworkValidatedLedgers>
-    makeValidatedLedgers();
+    make_ValidatedLedgers();
     /**
      * @brief Notify the datastructure that idx has been validated by the network.
@@ -86,9 +81,6 @@ public:
      */
     bool
     waitUntilValidatedByNetwork(uint32_t sequence, std::optional<uint32_t> maxWaitMs = {}) final;
-    boost::signals2::scoped_connection
-    subscribe(SignalType::slot_type const& subscriber) override;
 };
 } // namespace etl

View File

@@ -20,10 +20,6 @@
 /** @file */
 #pragma once
-#include <boost/signals2/connection.hpp>
-#include <boost/signals2/signal.hpp>
-#include <boost/signals2/variadic_signal.hpp>
-
 #include <cstdint>
 #include <optional>
 namespace etl {
@@ -33,8 +29,6 @@ namespace etl {
  */
 class NetworkValidatedLedgersInterface {
 public:
-    using SignalType = boost::signals2::signal<void(uint32_t)>;
-
     virtual ~NetworkValidatedLedgersInterface() = default;
     /**
@@ -52,7 +46,7 @@ public:
      *
      * @return Sequence of most recently validated ledger. Empty optional if the datastructure has been stopped.
      */
-    [[nodiscard]] virtual std::optional<uint32_t>
+    virtual std::optional<uint32_t>
     getMostRecent() = 0;
     /**
@@ -65,15 +59,6 @@ public:
      */
     virtual bool
     waitUntilValidatedByNetwork(uint32_t sequence, std::optional<uint32_t> maxWaitMs = {}) = 0;
-    /**
-     * @brief Allows clients to get notified when a new validated ledger becomes known to Clio
-     *
-     * @param subscriber The slot to connect
-     * @return A connection object that automatically disconnects the subscription once destroyed
-     */
-    [[nodiscard]] virtual boost::signals2::scoped_connection
-    subscribe(SignalType::slot_type const& subscriber) = 0;
 };
 } // namespace etl

View File
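The `subscribe` API removed in the three files above is built on boost::signals2, where the deleted doc comment's promise ("a connection object that automatically disconnects the subscription once destroyed") is exactly what `scoped_connection` provides. A minimal sketch of that idiom:

```cpp
#include <boost/signals2.hpp>
#include <cstdint>
#include <iostream>

int main() {
    boost::signals2::signal<void(uint32_t)> onValidated;

    {
        // scoped_connection: the slot stays connected only inside this block.
        boost::signals2::scoped_connection const conn =
            onValidated.connect([](uint32_t seq) { std::cout << "validated " << seq << '\n'; });

        onValidated(100);  // prints: validated 100
    }  // conn destroyed here -> slot disconnected

    onValidated(101);  // prints nothing
}
```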

@@ -26,7 +26,7 @@
 #include "etl/impl/SourceImpl.hpp"
 #include "etl/impl/SubscriptionSource.hpp"
 #include "feed/SubscriptionManagerInterface.hpp"
-#include "util/newconfig/ObjectView.hpp"
+#include "util/config/Config.hpp"
 #include <boost/asio/io_context.hpp>
@@ -38,8 +38,8 @@
 namespace etl {
 SourcePtr
-makeSource(
-    util::config::ObjectView const& config,
+make_Source(
+    util::Config const& config,
     boost::asio::io_context& ioc,
     std::shared_ptr<BackendInterface> backend,
     std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
@@ -50,9 +50,9 @@ makeSource(
     SourceBase::OnLedgerClosedHook onLedgerClosed
 )
 {
-    auto const ip = config.get<std::string>("ip");
-    auto const wsPort = config.get<std::string>("ws_port");
-    auto const grpcPort = config.get<std::string>("grpc_port");
+    auto const ip = config.valueOr<std::string>("ip", {});
+    auto const wsPort = config.valueOr<std::string>("ws_port", {});
+    auto const grpcPort = config.valueOr<std::string>("grpc_port", {});
     impl::ForwardingSource forwardingSource{ip, wsPort, forwardingTimeout};
     impl::GrpcSource grpcSource{ip, grpcPort, std::move(backend)};

View File
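In `make_Source` above, the two sides differ in how required the connection parameters are: a `get<T>(key)` style accessor treats "ip", "ws_port" and "grpc_port" as mandatory, while `valueOr<T>(key, default)` substitutes a default (here an empty string) when the key is absent. A generic sketch of the two lookup styles over a plain map (`MiniConfig` is hypothetical, not either of Clio's config classes):

```cpp
#include <iostream>
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical stand-in for a config object with required vs defaulted lookups.
class MiniConfig {
    std::map<std::string, std::string> values_;

public:
    explicit MiniConfig(std::map<std::string, std::string> values) : values_(std::move(values)) {}

    // Required key: throw if absent (roughly what a get<T>() accessor implies).
    std::string get(std::string const& key) const {
        if (auto const it = values_.find(key); it != values_.end())
            return it->second;
        throw std::runtime_error("missing required config key: " + key);
    }

    // Optional key: return the provided fallback if absent.
    std::string valueOr(std::string const& key, std::string fallback) const {
        if (auto const it = values_.find(key); it != values_.end())
            return it->second;
        return fallback;
    }
};

int main() {
    MiniConfig const cfg({{"ip", "127.0.0.1"}, {"ws_port", "6006"}});
    std::cout << cfg.get("ip") << '\n';             // "127.0.0.1"
    std::cout << cfg.valueOr("grpc_port", "") << '\n';  // missing -> ""
}
```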

@@ -23,9 +23,8 @@
 #include "etl/NetworkValidatedLedgersInterface.hpp"
 #include "feed/SubscriptionManagerInterface.hpp"
 #include "rpc/Errors.hpp"
+#include "util/config/Config.hpp"
 #include "util/log/Logger.hpp"
-#include "util/newconfig/ConfigDefinition.hpp"
-#include "util/newconfig/ObjectView.hpp"
 #include <boost/asio/io_context.hpp>
 #include <boost/asio/spawn.hpp>
@@ -65,15 +64,6 @@ public:
     virtual void
     run() = 0;
-    /**
-     * @brief Stop Source.
-     * @note This method will asynchronously wait for the source to be stopped.
-     *
-     * @param yield The coroutine context.
-     */
-    virtual void
-    stop(boost::asio::yield_context yield) = 0;
-
     /**
      * @brief Check if source is connected
      *
@@ -157,7 +147,7 @@ public:
 using SourcePtr = std::unique_ptr<SourceBase>;
 using SourceFactory = std::function<SourcePtr(
-    util::config::ObjectView const& config,
+    util::Config const& config,
     boost::asio::io_context& ioc,
     std::shared_ptr<BackendInterface> backend,
     std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,
@@ -184,8 +174,8 @@ using SourceFactory = std::function<SourcePtr(
  * @return The created source
  */
 SourcePtr
-makeSource(
-    util::config::ObjectView const& config,
+make_Source(
+    util::Config const& config,
     boost::asio::io_context& ioc,
     std::shared_ptr<BackendInterface> backend,
     std::shared_ptr<feed::SubscriptionManagerInterface> subscriptions,

View File
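`SourceFactory` above is dependency injection in its simplest form: the factory is a `std::function` constructor parameter whose default argument is the production implementation (`make_Source`), so ordinary call sites stay unchanged while tests can hand in a stub. A condensed sketch of the pattern with illustrative names:

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <string>
#include <vector>

struct SourceLike {
    virtual ~SourceLike() = default;
    virtual std::string name() const = 0;
};

struct RealSource : SourceLike {
    std::string name() const override { return "real"; }
};

using Factory = std::function<std::unique_ptr<SourceLike>()>;

std::unique_ptr<SourceLike> makeRealSource() { return std::make_unique<RealSource>(); }

class Balancer {
    std::vector<std::unique_ptr<SourceLike>> sources_;

public:
    // Default argument keeps production code unchanged; tests pass a stub factory.
    explicit Balancer(std::size_t count, Factory factory = makeRealSource) {
        for (std::size_t i = 0; i < count; ++i)
            sources_.push_back(factory());
    }
};

int main() {
    Balancer const prod{2};  // uses makeRealSource

    struct StubSource : SourceLike {
        std::string name() const override { return "stub"; }
    };
    Balancer const test{2, [] { return std::make_unique<StubSource>(); }};
}
```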

@@ -30,8 +30,8 @@
 namespace etl::impl {
-AmendmentBlockHandler::ActionType const AmendmentBlockHandler::kDEFAULT_AMENDMENT_BLOCK_ACTION = []() {
-    static util::Logger const log{"ETL"};  // NOLINT(readability-identifier-naming)
+AmendmentBlockHandler::ActionType const AmendmentBlockHandler::defaultAmendmentBlockAction = []() {
+    static util::Logger const log{"ETL"};
     LOG(log.fatal()) << "Can't process new ledgers: The current ETL source is not compatible with the version of "
                      << "the libxrpl Clio is currently using. Please upgrade Clio to a newer version.";
 };
@@ -47,7 +47,7 @@ AmendmentBlockHandler::AmendmentBlockHandler(
 }
 void
-AmendmentBlockHandler::notifyAmendmentBlocked()
+AmendmentBlockHandler::onAmendmentBlock()
 {
     state_.get().isAmendmentBlocked = true;
     repeat_.start(interval_, action_);
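`onAmendmentBlock` above follows a set-flag-then-nag shape: mark the state, then start a repeating action (`repeat_.start(interval_, action_)`) that keeps logging until the operator upgrades. Clio's `Repeat` helper is not shown in this diff, so the sketch below is a hypothetical stand-in using a plain thread, just to show the shape:

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <iostream>
#include <thread>

// Hypothetical minimal repeater: runs `action` every `interval` until destroyed.
class Repeater {
    std::atomic_bool running_{false};
    std::thread worker_;

public:
    void start(std::chrono::milliseconds interval, std::function<void()> action) {
        running_ = true;
        worker_ = std::thread([this, interval, action = std::move(action)] {
            while (running_) {
                action();
                std::this_thread::sleep_for(interval);
            }
        });
    }

    ~Repeater() {
        running_ = false;
        if (worker_.joinable())
            worker_.join();
    }
};

int main() {
    Repeater repeater;
    repeater.start(std::chrono::milliseconds{50}, [] { std::cout << "still amendment blocked\n"; });
    std::this_thread::sleep_for(std::chrono::milliseconds{160});
}  // ~Repeater stops and joins the worker
```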

Some files were not shown because too many files have changed in this diff.