Port of #1830 into release/2.3.1.
Fixes #1825 by removing the check in the gateway_balances RPC handler
that returns the RpcInvalidHotWallet error code if one of the addresses
supplied in the request's `hotwallet` array does not have a trustline
with the `account` from the request.
As stated in the original ticket, this change fixes a discrepancy in
behavior between Clio and rippled, as rippled does not check for
trustline existence when handling gateway_balances RPCs.
Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
Bumps
[wandalen/wretry.action](https://github.com/wandalen/wretry.action) from
3.7.0 to 3.7.2.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8ceaefd717"><code>8ceaefd</code></a>
version 3.7.2</li>
<li><a
href="ce976ac9e7"><code>ce976ac</code></a>
version 3.7.1</li>
<li><a
href="7a8f8d4bf2"><code>7a8f8d4</code></a>
Merge pull request <a
href="https://redirect.github.com/wandalen/wretry.action/issues/174">#174</a>
from dmvict/master</li>
<li><a
href="2103bce855"><code>2103bce</code></a>
Fix action, add option <code>pre_retry_command</code> to call of
subaction</li>
<li>See full diff in <a
href="https://github.com/wandalen/wretry.action/compare/v3.7.0...v3.7.2">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Alex Kremer <akremer@ripple.com>
Bumps
[wandalen/wretry.action](https://github.com/wandalen/wretry.action) from
3.5.0 to 3.7.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="f8754f7974"><code>f8754f7</code></a>
version 3.7.0</li>
<li><a
href="03db9837ed"><code>03db983</code></a>
Merge pull request <a
href="https://redirect.github.com/wandalen/wretry.action/issues/171">#171</a>
from dmvict/docker_readme</li>
<li><a
href="d80901cd5c"><code>d80901c</code></a>
Sync readme for new feature</li>
<li><a
href="e00d406ade"><code>e00d406</code></a>
version 3.6.0</li>
<li><a
href="e00deaa9ba"><code>e00deaa</code></a>
Merge pull request <a
href="https://redirect.github.com/wandalen/wretry.action/issues/170">#170</a>
from dmvict/pre_retry_action</li>
<li><a
href="8b50f3152e"><code>8b50f31</code></a>
Update action, add option <code>pre_retry_command</code> to run command
between retries</li>
<li><a
href="990f16983d"><code>990f169</code></a>
Merge pull request <a
href="https://redirect.github.com/wandalen/wretry.action/issues/167">#167</a>
from Vampire/add-typing</li>
<li><a
href="aeb34f4d13"><code>aeb34f4</code></a>
Add action typing</li>
<li>See full diff in <a
href="https://github.com/wandalen/wretry.action/compare/v3.5.0...v3.7.0">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Alex Kremer <akremer@ripple.com>
Add constraint + parse json into Config
Second part of refactoring Clio Config; First PR found
[here](https://github.com/XRPLF/clio/pull/1544)
Steps that are left to implement:
- Replacing all the places where we fetch config values (by using
config.valueOr/MaybeValue) to instead get it from Config Definition
- Generate markdown file using Clio Config Description
Fixes #1620.
Cherry pick of #1626 into develop.
- Add timeouts for websocket operations on connections to rippled.
Without these timeouts, if a connection hangs for some reason, Clio
wouldn't know that the connection is hanging.
- Fix a potential data race in choosing the new subscription source that will
forward messages to users.
- Optimise switching between subscription sources.
Implementation of new config definition + methods + UT
Steps that still need to be implemented:
- Make ClioConfigDefinition and its methods as constexpr as
possible
- Load the user config file and populate the values in ConfigDefinition
while checking user-supplied values against constraints
- Replacing all the places where we fetch config values (by using
config.valueOr/MaybeValue) to instead get it from Config Definition
- Generate markdown file using Clio Config Description
Bumps
[wandalen/wretry.action](https://github.com/wandalen/wretry.action) from
1.4.10 to 3.5.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="6feedb7ded"><code>6feedb7</code></a>
version 3.5.0</li>
<li><a
href="b5ca17cbcf"><code>b5ca17c</code></a>
Merge pull request <a
href="https://redirect.github.com/wandalen/wretry.action/issues/163">#163</a>
from dmvict/master</li>
<li><a
href="82f08e4ccd"><code>82f08e4</code></a>
Add dependency to build script</li>
<li><a
href="c14e35fac1"><code>c14e35f</code></a>
Merge pull request <a
href="https://redirect.github.com/wandalen/wretry.action/issues/162">#162</a>
from dmvict/master</li>
<li><a
href="979d4e538d"><code>979d4e5</code></a>
Synchronize info with <code>Readme</code> of <code>js_action</code>
branch</li>
<li><a
href="0dd1d5d77d"><code>0dd1d5d</code></a>
version 3.4.0</li>
<li><a
href="8e449ec3ce"><code>8e449ec</code></a>
Merge pull request <a
href="https://redirect.github.com/wandalen/wretry.action/issues/156">#156</a>
from dmvict/master</li>
<li><a
href="050710def1"><code>050710d</code></a>
Update action files, add option <code>steps_context</code>, update file
<code>Readme</code>, add op...</li>
<li><a
href="dfc978b430"><code>dfc978b</code></a>
version 3.3.0</li>
<li><a
href="b7882d347e"><code>b7882d3</code></a>
Merge pull request <a
href="https://redirect.github.com/wandalen/wretry.action/issues/154">#154</a>
from dmvict/master</li>
<li>Additional commits viewable in <a
href="https://github.com/wandalen/wretry.action/compare/v1.4.10...v3.5.0">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
I'm constantly harassed to follow the rules when I'm not doing anything
relevant.
This is an attempt to ameliorate that rather than `--no-verify` or just
disabling hooks altogether. 🍄☁️
Somebody might want to confirm `basename` is included with macOS.
Fixes #1436
This is a temporary implementation of the `feature` RPC that will always
return `noPermission` iff `vetoed` is set.
If `vetoed` isn't specified, Clio will always forward the request to
`rippled` instead.
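As a hypothetical illustration of the behaviour described above (the port, amendment name, and exact error payload are placeholders, not part of this change):
```sh
# With `vetoed` present, Clio answers the request itself with a noPermission-style error.
curl -s -X POST http://127.0.0.1:51233 \
  -d '{"method": "feature", "params": [{"feature": "SomeAmendment", "vetoed": true}]}'
# Without `vetoed`, the request is forwarded to rippled and its response is relayed back.
curl -s -X POST http://127.0.0.1:51233 \
  -d '{"method": "feature", "params": [{"feature": "SomeAmendment"}]}'
```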
In the future, #1131 will implement a Clio-native `feature` RPC. This
requires specific support from `libxrpl` side and that is not going to
be available till at least 2.2.1, hence the temporary forwarding.
It would be great to review the error message and code so that we pick
the right one from the start.
Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
A release's version string should come from a signed annotated tag, which Clio
has generally been following.
Uses the date of the commit, since that seems like a more useful item to
track and avoids identical source builds producing different version
strings.
Partially fixes #884.
Adds:
- Docker image for CI on Linux
- Nightly builds without cache and releases
- Nightly clang-tidy checks
- Fix typos in .clang-tidy
Fixes #932
Also fixes #966
Decided not to add a Summary type because it has the same functionality as Histogram but performs more calculations on the client (Clio) side. See https://prometheus.io/docs/practices/histograms for a detailed comparison.
Fixes #835
We use xrpl's function to check that a currency code is valid, so there is no need to change our code.
I added more test cases to be sure that Clio supports the characters added in xrpl.
* Generate conan profile in CI
* Move linux build into main workflow
* Add saving/restoring conan data
* Move cache to Linux
* Fix error
* Change key to hash from conanfile
* Fix path error
* Populate cache only in develop branch
* Big refactor
- Move duplicated code to actions
- Isolate mac build from home directory
- Separate ccache and conan caches
* Fix errors
* Change ccache cache name and fix errors
* Always populate cache
* Use newer ccache on Linux
* Strip tests
* Better conan hash
* Don't fail on ledger params for v1
* Different error on invalid ledger indexes for v1
* Allow forward and binary to be not bool for v1
* Minor fixes
* Fix tests
* Don't fail if input ledger index is out of range for v1
* Restore deleted test
* Fix comparison of integers with different signedness
* Updated default api version in README and example config
* Allow server to download cache from another clio server
* Config takes an array of Clio peers. If any of these peers have a
full cache, Clio picks a peer at random to download the cache from.
Otherwise, it falls back to downloading the cache from the database.
* The ledger close time can occasionally be a few seconds in the future,
which causes ETL to not publish the ledger, because the age
calculation wraps around and the age is computed as a very large
unsigned integer. This fix rounds the age to zero when it would be
negative.
* Fine tune cache download
* Allow operators to specify the max number of concurrent markers. The
software generates possible markers from ledger diffs, as before, but
only processes a specified number at one time, which caps database
reads and distributes the load more evenly over the entire download.
* Allow operators to specify the page fetch size during the cache
download, which is the number of ledger objects to fetch per marker at
one time.
* Refactor full ledger dump in test.py
Fixes an issue that occurred when rebasing the previous log rotation PR.
Updated config to allow specifying the log rotation size, log rotation interval, and log directory max size
Updated the file size base unit to MB and added documentation for logging
The file size base unit is now MB, with a detailed description of the logging configuration in readme.md
Updated the CMake install script to correctly set the path in production mode
Co-authored-by: Brandon Kong <bkong@ripple.com>
* NFTs are iterated in reverse order, starting from the max page,
working towards the min page.
* Iteration always continues to page end
Signed-off-by: CJ Cobb <ccobb@ripple.com>
* Keep track of the number of requests currently being processed
* Reject new requests when the number of in-flight requests exceeds a
configurable limit
* Track time spent between request arrival and the start of request
processing
Signed-off-by: CJ Cobb <ccobb@ripple.com>
Co-authored-by: natenichols <natenichols@cox.net>
- Added a log rotation feature, currently set to rotate every 12h or when the log file size exceeds 2 GB. If the log directory exceeds 50 GB, old log files will be deleted.
- Added config options for toggling console and file logging.
- Changed config options for log file storage; log files are now written to a directory instead of a single file.
- Added config options to allow specifying the log rotation size, log rotation interval, and log directory max size.
- Added detailed documentation in README.md regarding how to configure log rotation.
- Updated the CMake install script to correctly set the path in production mode
Co-authored-by: Brandon Kong <bkong@ripple.com>
command -v git-lfs >/dev/null 2>&1 || { echo >&2 "\nThis repository is configured for Git LFS but 'git-lfs' was not found on your path. If you no longer wish to use Git LFS, remove this hook by deleting the 'post-checkout' file in the hooks directory (set by 'core.hookspath'; usually '.git/hooks').\n"; exit 2; }
command -v git-lfs >/dev/null 2>&1 || { echo >&2 "\nThis repository is configured for Git LFS but 'git-lfs' was not found on your path. If you no longer wish to use Git LFS, remove this hook by deleting the 'post-commit' file in the hooks directory (set by 'core.hookspath'; usually '.git/hooks').\n"; exit 2; }
command -v git-lfs >/dev/null 2>&1 || { echo >&2 "\nThis repository is configured for Git LFS but 'git-lfs' was not found on your path. If you no longer wish to use Git LFS, remove this hook by deleting the 'post-merge' file in the hooks directory (set by 'core.hookspath'; usually '.git/hooks').\n"; exit 2; }
# git for-each-ref refs/tags # see which tags are annotated and which are lightweight. Annotated tags are "tag" objects.
# # Set these so your commits and tags are always signed
# git config commit.gpgsign true
# git config tag.gpgsign true
verify_commit_signed() {
    if git verify-commit HEAD &> /dev/null; then
        :
        # echo "HEAD commit seems signed..."
    else
        echo "HEAD commit isn't signed!"
        exit 1
    fi
}

verify_tag() {
    if git describe --exact-match --tags HEAD &> /dev/null; then
        : # You might be ok to push
        # echo "Tag is annotated."
        return 0
    else
        echo "Tag for [$version] not an annotated tag."
        exit 1
    fi
}

verify_tag_signed() {
    if git verify-tag "$version" &> /dev/null; then
        : # ok, I guess we'll let you push
        # echo "Tag appears signed"
        return 0
    else
        echo "$version tag isn't signed"
        echo "Sign it with [git tag -ams\"$version\" $version]"
        exit 1
    fi
}

while read local_ref local_oid remote_ref remote_oid; do
    # Check some things if we're pushing a branch called "release/"
    if echo "$remote_ref" | grep ^refs\/heads\/release\/ &> /dev/null; then
        version=$(git tag --points-at HEAD)
        echo "Looks like you're trying to push a $version release..."
        echo "Making sure you've signed and tagged it."
        if verify_commit_signed && verify_tag && verify_tag_signed; then
            : # Ok, I guess you can push
        else
            exit 1
        fi
    fi
done
command -v git-lfs >/dev/null 2>&1 || { echo >&2 "\nThis repository is configured for Git LFS but 'git-lfs' was not found on your path. If you no longer wish to use Git LFS, remove this hook by deleting the 'pre-push' file in the hooks directory (set by 'core.hookspath'; usually '.git/hooks').\n"; exit 2; }
Thank you for your interest in contributing to the `clio` project 🙏
To contribute, please:
1. Fork the repository under your own user.
2. Create a new branch on which to commit/push your changes.
3. Write and test your code.
4. Ensure that your code compiles with the provided build engine and update the provided build engine as part of your PR where needed and where appropriate.
5. Where applicable, write test cases for your code and include those in the relevant subfolder under `tests`.
6. Ensure that your code passes automated checks (e.g. clang-format).
7. Squash your commits (i.e. rebase) into as few commits as is reasonable to describe your changes at a high level (typically a single commit for a small change). See below for more details.
8. Open a PR to the main repository onto the _develop_ branch, and follow the provided template.
> **Note:** Please read the [Style guide](#style-guide).
## Install git hooks
Please run the following command in order to use git hooks that are helpful for `clio` development.
``` bash
git config --local core.hooksPath .githooks
```
## Git hooks dependencies
The pre-commit hook requires `clang-format >= 19.0.0` and `cmake-format` to be installed on your machine.
`clang-format` can be installed using `brew` on macOS and the default package manager on Linux.
`cmake-format` can be installed using `pip`.
The hook will also attempt to automatically use `doxygen` to verify that everything public in the codebase is covered by doc comments. If `doxygen` is not installed, the hook will raise a warning suggesting to install `doxygen` for future commits.
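For example, on a typical setup the dependencies can be installed roughly as follows (a sketch only; package names vary by platform, and your distribution's `clang-format` may be older than the required version 19):
``` bash
# macOS (Homebrew)
brew install clang-format doxygen
# Debian/Ubuntu (verify the packaged clang-format version is recent enough)
sudo apt install clang-format doxygen
# cmake-format is distributed via pip
pip install cmake-format
```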
## Git commands
This section offers a detailed look at the git commands you will need to use to get your PR submitted.
Please note that there is more than one way to do this; these commands are provided for your convenience.
At this point it's assumed that you have already finished working on your feature/bug.
> **Important:** Before you issue any of the commands below, please hit the `Sync fork` button and make sure your fork's `develop` branch is up-to-date with the main `clio` repository.
``` bash
# Create a backup of your branch
git branch <your feature branch>_bk
# Rebase and squash commits into one
git checkout develop
git pull origin develop
git checkout <your feature branch>
git rebase -i develop
```
For each commit in the list other than the first one, enter `s` to squash.
After this is done, you will have the opportunity to write a message for the squashed commit.
> **Hint:** Please use **imperative mood** in the commit message, and capitalize the first word.
``` bash
# You should now have a single commit on top of a commit in `develop`
git log
```
> **Note:** If there are merge conflicts, please resolve them now.
``` bash
# Use the same commit message as you did above
git commit -m 'Your message'
git rebase --continue
```
> **Important:** If you have no GPG keys set up, please follow [this tutorial](https://docs.github.com/en/authentication/managing-commit-signature-verification/adding-a-gpg-key-to-your-github-account)
``` bash
# Sign the commit with your GPG key, and push your changes
git commit --amend -S
git push --force
```
## Use ccache (optional)
Clio uses `ccache` to speed up compilation. If you want to use it, please make sure it is installed on your machine.
CMake will automatically detect it and use it if it is available.
## Opening a pull request
When a pull request is opened, CI will perform checks on the new code.
The title of the pull request and the squashed commit should follow the [conventional commits specification](https://www.conventionalcommits.org/en/v1.0.0/).
## Fixing issues found during code review
While your code is in review, it's possible that some changes will be requested by reviewer(s).
This section describes the process of adding your fixes.
We assume that you already made the required changes on your feature branch.
``` bash
# Add the changed code
git add <paths to add>
# Add a [FOLD] commit message (so you remember to squash it later)
# while also signing it with your GPG key
git commit -S -m "[FOLD] Your commit message"
# And finally push your changes
git push
```
## After code review
When your PR is approved and ready to merge, use `Squash and merge`.
The button for that is near the bottom of the PR's page on GitHub.
> **Important:** Please leave the automatically-generated mention/link to the PR in the subject line **and** in the description field add `"Fix #ISSUE_ID"` (replacing `ISSUE_ID` with yours) if the PR fixes an issue.
> **Note:** See [issues](https://github.com/XRPLF/clio/issues) to find the `ISSUE_ID` for the feature/bug you were working on.
# Style guide
This is a non-exhaustive list of recommended style guidelines. These are not always strictly enforced and serve as a way to keep the codebase coherent.
## Formatting
Code must conform to `clang-format` version 19, unless the result would be unreasonably difficult to read or maintain.
In most cases the pre-commit hook will take care of formatting and will fix any issues automatically.
To manually format your code, use `clang-format -i <yourchangedfiles>` for C++ files and `cmake-format -i <yourchangedfiles>` for CMake files.
## Documentation
All public namespaces, classes and functions must be covered by doc (`doxygen`) comments. Everything that is not within a nested `impl` namespace is considered public.
> **Note:** Keep in mind that this is enforced by Clio's CI and your build will fail if newly added public code lacks documentation.
## Avoid
* Proliferation of nearly identical code.
* Proliferation of new files and classes unless it improves readability and/or compilation time.
* Unmanaged memory allocation and raw pointers.
* Macros (unless they add significant value).
* Lambda patterns (unless these add significant value).
* CPU or architecture-specific code unless there is a good reason to include it, and where it is used guard it with macros and provide explanatory comments.
* Importing new libraries unless there is a very good reason to do so.
## Seek to
* Extend functionality of existing code rather than creating new code.
* Prefer readability over terseness where important logic is concerned.
* Inline functions that are not used or are not likely to be used elsewhere in the codebase.
* Use clear and self-explanatory names for functions, variables, structs and classes.
* Use TitleCase for classes, structs and filenames, camelCase for function and variable names, lower case for namespaces and folders.
* Provide as many comments as you feel that a competent programmer would need to understand what your code does.
# Maintainers
Maintainers are ecosystem participants with elevated access to the repository. They are able to push new code, make decisions on when a release should be made, etc.
## Code Review
A PR must be reviewed and approved by at least one of the maintainers before it can be merged.
## Adding and Removing
New maintainers can be proposed by two existing maintainers, subject to a vote by a quorum of the existing maintainers. A minimum of 50% support and 50% participation is required. In the event of a tie vote, the addition of the new maintainer will be rejected.
Existing maintainers can resign, or be subject to a vote for removal at the behest of two existing maintainers. A minimum of 60% agreement and 50% participation is required. The XRP Ledger Foundation will have the ability, for cause, to remove an existing maintainer without a vote.
Clio is an XRP Ledger API server optimized for RPC calls over WebSocket or JSON-RPC.
It stores validated historical ledger and transaction data in a more space efficient format, and uses up to 4 times less space than [rippled](https://github.com/XRPLF/rippled).
Clio does not connect to the peer to peer network. Instead, Clio extracts data from a specified rippled node. Running Clio requires access to a rippled node
from which data can be extracted. The rippled node does not need to be running on the same machine as Clio.
Clio can be configured to store data in [Apache Cassandra](https://cassandra.apache.org/_/index.html) or [ScyllaDB](https://www.scylladb.com/), enabling scalable read throughput.
Multiple Clio nodes can share access to the same dataset, which allows for a highly available cluster of Clio nodes without the need for redundant data storage or computation.
## 📡 Clio and `rippled`
Clio offers the full `rippled` API, with the caveat that Clio by default only returns validated data. This means that `ledger_index` defaults to `validated` instead of `current` for all requests. Other non-validated data, such as information about queued transactions, is also not returned.
Clio retrieves data from a designated group of `rippled` nodes instead of connecting to the peer-to-peer network.
For requests that require access to the peer-to-peer network, such as `fee` or `submit`, Clio automatically forwards the request to a `rippled` node and propagates the response back to the client. To access non-validated data for *any* request, simply add `ledger_index: "current"` to the request, and Clio will forward the request to `rippled`.
## Requirements
1. Access to a Cassandra cluster or ScyllaDB cluster. Can be local or remote.
2. Access to one or more `rippled` nodes. Can be local or remote.
## Building
> [!NOTE]
> Clio requires access to at least one `rippled` node, which can run on the same machine as Clio or separately.
Clio is built with CMake. Clio requires C++20 and Boost 1.75.0 or later.
Use these instructions to build a Clio executable from source. These instructions were tested on Ubuntu 20.04 LTS.
## 📚 Learn more about Clio
Below are some useful docs to learn more about Clio.
The example configs of rippled and Clio are set up such that minimal changes are
required. When running locally, the only change needed is to uncomment the `port_grpc`
section of the rippled config. When running Clio and rippled on separate machines,
in addition to uncommenting the `port_grpc` section, a few other steps must be taken:
1. Change the `ip` of the first entry of `etl_sources` to the IP where your rippled
server is running.
2. Open a public, unencrypted websocket port on your rippled server.
3. Change the IP specified in `secure_gateway` of the `port_grpc` section of the rippled config
to the IP of your Clio server. This entry can take the form of a comma-separated list if
you are running multiple Clio nodes.
Once your config files are ready, start rippled and Clio. It doesn't matter which you
start first, and it's fine to stop one or the other and restart at any given time.

Clio will wait for rippled to sync before extracting any ledgers. If there is already
data in Clio's database, Clio will begin extraction with the ledger whose sequence
is one greater than the greatest sequence currently in the database. Clio will wait
for this ledger to be available.

Be aware that the behavior of rippled is to sync to
the most recent ledger on the network, and then backfill. If Clio is extracting ledgers
from rippled, and then rippled is stopped for a significant amount of time and then restarted, rippled
will take time to backfill to the next ledger that Clio wants. The time it takes is proportional
to the amount of time rippled was offline for. Also be aware that the amount rippled backfills
is dependent on the online_delete and ledger_history config values; if these values
are small, and rippled is stopped for a significant amount of time, rippled may never backfill
to the ledger that Clio wants. To avoid this situation, it is advised to keep history
proportional to the amount of time that you expect rippled to be offline. For example, if you
expect rippled to be offline for a few days from time to time, you should keep at least
a few days of history. If you expect rippled to never be offline, then you can keep a very small
amount of history.
Clio can use multiple rippled servers as a data source. Simply add more entries to
the `etl_sources` section. Clio will load balance requests across the servers specified
in this list. As long as one rippled server is up and synced, Clio will continue
extracting ledgers.
In contrast to rippled, Clio will answer RPC requests for the data already in the
database as soon as the server starts. Clio doesn't wait to sync to the network, or
for rippled to sync.
When starting Clio with a fresh database, Clio needs to download a ledger in full.
This can take some time, and depends on database throughput. With a moderately fast
database, this should take less than 10 minutes. If you did not properly set `secure_gateway`
in the `port_grpc` section of rippled, this step will fail. Once the first ledger
is fully downloaded, Clio only needs to extract the changed data for each ledger,
so extraction is much faster and Clio can keep up with rippled in real time. Even under
intense load, Clio should not lag behind the network, as Clio is not processing the data,
and is simply writing to a database. The throughput of Clio is dependent on the throughput
of your database, but a standard Cassandra or Scylla deployment can handle
the write load of the XRP Ledger without any trouble. Generally the performance considerations
come on the read side, and depend on the number of RPC requests your Clio nodes
are serving. Be aware that very heavy read traffic can impact write throughput. Again, this
is on the database side, so if you are seeing this, upgrade your database.
It is possible to run multiple Clio nodes that share access to the same database.
The Clio nodes don't need to know about each other. You can simply spin up more Clio
nodes pointing to the same database as you wish, and shut them down as you wish.
On startup, each Clio node queries the database for the latest ledger. If this latest
ledger does not change for some time, the Clio node begins extracting ledgers
and writing to the database. If the Clio node detects a ledger that it is trying to
write has already been written, the Clio node will back off and stop writing. If later
the Clio node sees no ledger written for some time, it will start writing again.
This algorithm ensures that at any given time, one and only one Clio node is writing
to the database.
It is possible to force Clio to only read data, and to never become a writer.
To do this, set `read_only: true` in the config. One common setup is to have a
small number of writer nodes that are inaccessible to clients, with several
read only nodes handling client requests. The number of read only nodes can be scaled
up or down in response to request volume.
When using multiple rippled servers as data sources and multiple Clio nodes,
each Clio node should use the same set of rippled servers as sources. The order doesn't matter.
The only reason not to do this is if you are running servers in different regions, and
you want the Clio nodes to extract from servers in their region. However, if you
are doing this, be aware that database traffic will be flowing across regions,
which can cause high latencies. A possible alternative to this is to just deploy
a database in each region, and the Clio nodes in each region use their region's database.
This is effectively two systems.
## 🆘 Help
Feel free to open an [issue](https://github.com/XRPLF/clio/issues) if you have a feature request or something doesn't work as expected.
If you have any questions about building, running, contributing to, or using Clio, or anything else, you can always start a new [discussion](https://github.com/XRPLF/clio/discussions).
This document contains the release notes for `clio_server`, an XRP Ledger API Server.
To build and run `clio_server`, follow the instructions in [README.md](https://github.com/XRPLF/clio).
If you find issues or have a new idea, please open [an issue](https://github.com/XRPLF/clio/issues).
# Releases
## 0.1.0
Clio is an XRP Ledger API server. Clio is optimized for RPC calls, over websocket or JSON-RPC. Validated historical ledger and transaction data is stored in a more space efficient format, using up to 4 times less space than rippled.
Clio uses Cassandra or ScyllaDB, allowing for scalable read throughput. Multiple clio nodes can share access to the same dataset, allowing for a highly available cluster of clio nodes, without the need for redundant data storage or computation.
**0.1.0** is the first beta of Project Clio. It contains:
- `./src/backend` is the BackendInterface. This provides an abstraction for reading and writing information to a database.
- `./src/etl` is the ReportingETL. The classes in this folder are used to extract information from the P2P network and write it to a database, either locally or over the network.
- `./src/rpc` contains RPC handlers that are called by clients. These handlers should expose the same API as rippled.
- `./src/subscriptions` contains the SubscriptionManager. This manages publishing to clients subscribing to streams or accounts.
- `./src/webserver` contains a flex server that handles both http/s and ws/s traffic on a single port.
- `./unittests` contains simple unit tests that write to and read from a database to verify that the ETL works.
Clio is built with [CMake](https://cmake.org/) and uses [Conan](https://conan.io/) for managing dependencies. It is written in C++20 and therefore requires a modern compiler.
## Minimum Requirements
- [Python 3.7](https://www.python.org/downloads/)
- [Conan 1.55](https://conan.io/downloads.html)
- [CMake 3.20](https://cmake.org/download/)
- [**Optional**] [GCovr](https://gcc.gnu.org/onlinedocs/gcc/Gcov.html): needed for code coverage generation
- [**Optional**] [CCache](https://ccache.dev/): speeds up compilation if you are going to compile Clio often
| Compiler | Version |
|-------------|---------|
| GCC | 12.3 |
| Clang | 16 |
| Apple Clang | 15 |
### Conan Configuration
Clio does not require anything other than `compiler.cppstd=20` in your default Conan profile (`~/.conan/profiles/default`).
> [!NOTE]
> Although Clio is built using C++23, it's required to set `compiler.cppstd=20` for the time being as some of Clio's dependencies are not yet capable of building under C++23.
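With Conan 1.x (the version range this guide targets), one way to set this is shown below; adjust as needed if you manage profiles differently:
```sh
conan profile update settings.compiler.cppstd=20 default
```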
Now you should be able to download the prebuilt `xrpl` package on some platforms.
> [!NOTE]
> You may need to edit the `~/.conan/remotes.json` file to ensure that this newly added artifactory is listed last. Otherwise, you could see compilation errors when building the project with gcc version 13 (or newer).
```sh
cmake --build . --parallel 8 # or without the number if you feel extra adventurous
```
> [!TIP]
> You can omit the `-o tests=True` if you don't want to build `clio_tests`.
If successful, `conan install` will find the required packages and `cmake` will do the rest. You should see `clio_server` and `clio_tests` in the `build` directory (the current directory).
> [!TIP]
> To generate a Code Coverage report, include `-o coverage=True` in the `conan install` command above, along with `-o tests=True` to enable tests. After running the `cmake` commands, execute `make clio_tests-ccov`. The coverage report will be found at `clio_tests-llvm-cov/index.html`.
> [!NOTE]
> If you've built Clio before and the build is now failing, it's likely due to updated dependencies. Try deleting the build folder and then rerunning the Conan and CMake commands mentioned above.
### Generating API docs for Clio
The API documentation for Clio is generated by [Doxygen](https://www.doxygen.nl/index.html). If you want to generate the API documentation when building Clio, make sure to install Doxygen on your system.
To generate the API docs:
1. First, include `-o docs=True` in the conan install command. For example:
```sh
cmake --build . --parallel 8 # or without the number if you feel extra adventurous
```
## Developing against `rippled` in standalone mode
If you wish to develop against a `rippled` instance running in standalone mode there are a few quirks of both Clio and `rippled` that you need to keep in mind. You must:
1. Advance the `rippled` ledger to at least ledger 256.
2. Wait 10 minutes before first starting Clio against this standalone node.
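For example, the ledger can be advanced with the `ledger_accept` admin command, which is only available in standalone mode (a minimal sketch, assuming the `rippled` binary is on your PATH and pointed at the standalone node's config):
```sh
# Close 256 ledgers so the standalone node is past sequence 256.
for i in $(seq 1 256); do
  rippled ledger_accept > /dev/null
done
rippled ledger_current   # verify the current ledger index
```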
## Building with a Custom `libxrpl`
Sometimes, during development, you need to build against a custom version of `libxrpl`. (For example, you may be developing compatibility for a proposed amendment that is not yet merged to the main `rippled` codebase.) To build Clio with compatibility for a custom fork or branch of `rippled`, follow these steps:
1. First, pull/clone the appropriate `rippled` fork and switch to the branch you want to build. For example, the following example uses an in-development build with [XLS-33d Multi-Purpose Tokens](https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0033d-multi-purpose-tokens):
```sh
git clone https://github.com/shawnxie999/rippled/
cd rippled
git switch mpt-1.1
```
2. Export a custom package to your local Conan store using a user/channel:
```sh
conan export . my/feature
```
3. Patch your local Clio build to use the right package.
Edit `conanfile.py` (from the Clio repository root). Replace the `xrpl` requirement with the custom package version from the previous step. This must also include the current version number from your `rippled` branch. For example:
Clio needs access to a `rippled` server in order to work. The following configurations are required for Clio and `rippled` to communicate:
1. In the Clio config file, provide the following:
- The IP of the `rippled` server
- The port on which `rippled` is accepting unencrypted WebSocket connections
- The port on which `rippled` is handling gRPC requests
2. In the `rippled` config file, you need to open:
- A port to accept unencrypted WebSocket connections
- A port to handle gRPC requests, with the IP(s) of Clio specified in the `secure_gateway` entry
The example configs of [rippled](https://github.com/XRPLF/rippled/blob/develop/cfg/rippled-example.cfg) and [Clio](../docs/examples/config/example-config.json) are set up in a way that minimal changes are required.
When running locally, the only change needed is to uncomment the `port_grpc` section of the `rippled` config.
If you're running Clio and `rippled` on separate machines, in addition to uncommenting the `port_grpc` section, a few other steps must be taken:
1. Change the `ip` in `etl_sources` to the IP where your `rippled` server is running.
2. Open a public, unencrypted WebSocket port on your `rippled` server.
3. In the `rippled` config, change the IP specified for `secure_gateway`, under the `port_grpc` and websocket server sections, to the IP of your Clio server. This entry can take the form of a comma-separated list if you are running multiple Clio nodes.
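For reference, the corresponding part of the Clio config might look like the sketch below; the IP and port values are placeholders, and the exact key names should be checked against the example config linked above:
```json
"etl_sources": [
    {
        "ip": "127.0.0.1",
        "ws_port": "6006",
        "grpc_port": "50051"
    }
]
```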
## Ledger sequence
The parameter `start_sequence` can be included and configured within the top level of the config file. This parameter specifies the sequence of the first ledger to extract if the database is empty.
Note that ETL extracts ledgers in order, and backfilling functionality currently doesn't exist. This means Clio does not retroactively learn ledgers older than the one you specify. Choosing to specify this or not will yield the following behavior:
- If this setting is absent and the database is empty, ETL will start with the next ledger validated by the network.
- If this setting is present and the database is not empty, an exception is thrown.
In addition, the optional parameter `finish_sequence` can be added to the JSON file as well, specifying the ledger sequence at which extraction should stop.
To add `start_sequence` and/or `finish_sequence` to the `config.json` file appropriately, they must be on the same top level of precedence as other parameters (i.e., `database`, `etl_sources`, `read_only`) and be specified with an integer.
Here is an example snippet from the config file:
```json
"start_sequence":12345,
"finish_sequence":54321
```
## SSL
The parameters `ssl_cert_file` and `ssl_key_file` can also be added to the top level of precedence of our Clio config. The `ssl_cert_file` field specifies the filepath for your SSL cert, while `ssl_key_file` specifies the filepath for your SSL key. It is up to you how to change ownership of these files for your designated Clio user.
Your options include:
- Copying the two files as root somewhere that's accessible by the Clio user, then running `sudo chown` to your user
- Changing the permissions directly so it's readable by your Clio user
- Running Clio as root (strongly discouraged)
Here is an example of how to specify `ssl_cert_file` and `ssl_key_file` in the config:
```json
"server":{
"ip":"0.0.0.0",
"port":51233
},
"ssl_cert_file":"/full/path/to/cert.file",
"ssl_key_file":"/full/path/to/key.file"
```
## Admin rights for requests
By default Clio checks admin privileges by IP address from requests (only `127.0.0.1` is considered to be an admin). This is not very secure because the IP could be spoofed. For better security, an `admin_password` can be provided in the `server` section of Clio's config:
```json
"server":{
"admin_password":"secret"
}
```
If the password is present in the config, Clio will check the Authorization header (if any) of each request for the password. The Authorization header should contain the type `Password` followed by the password from the config (e.g. `Password secret`).
Only an exactly matching password grants admin rights to the request or to a websocket connection.
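For example, an admin-only request could then be issued as follows (a sketch; the port and password are placeholders matching the examples above):
```sh
# Present the admin password from the config when sending a request to Clio.
curl -s http://127.0.0.1:51233 \
  -H 'Authorization: Password secret' \
  -d '{"method": "ledger", "params": [{"ledger_index": "validated", "full": true}]}'
```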
## ETL sources forwarding cache
Clio can cache requests to ETL sources to reduce the load on the ETL source.
Only the following commands are cached: `server_info`, `server_state`, `server_definitions`, `fee`, `ledger_closed`.
By default, the forwarding cache is off.
To enable caching for a source, a `forwarding_cache_timeout` value should be added to the configuration file, e.g.:
```json
"forwarding_cache_timeout":0.250,
```
`forwarding_cache_timeout` defines for how long (in seconds) a cache entry will be valid after being placed into the cache.
A value of zero turns the cache feature off.
## Graceful shutdown (not fully implemented yet)
Clio can be gracefully shut down by sending a `SIGINT` (Ctrl+C) or `SIGTERM` signal.
The process will stop accepting new connections and will wait for the time specified in the `graceful_period` config value (10 seconds by default).
If Clio finishes all the scheduled operations before the end of the period, it will stop immediately.
Otherwise, it will wait for the period to finish and then exit without finishing operations.
If Clio receives a second signal during the period, it will stop immediately.
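For example, a graceful shutdown can be triggered like this (a sketch, assuming a single running `clio_server` process):
```sh
# Clio stops accepting new connections and waits up to graceful_period seconds.
kill -TERM "$(pgrep -x clio_server)"
```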
The coverage report is intended for developers using GCC or Clang (including Apple Clang). It is generated by the build target `coverage_report`, which is only enabled when both the `tests` and `coverage` options are set (e.g. with `-o coverage=True -o tests=True` in `conan`).
## Prerequisites
To generate the coverage report you need:
- [gcovr tool](https://gcovr.com/en/stable/getting-started.html) (can be installed e.g. with `pip install gcovr`)
- `gcov` for GCC (installed with the compiler by default)
- `llvm-cov` for Clang (installed with the compiler by default, also on Apple)
- `Debug` build type
## Creating the coverage report
The coverage report is created when the following steps are completed, in order:
1. `clio_tests` binary built with the instrumentation data, enabled by the `coverage`
option mentioned above.
2. Completed run of unit tests, which populates coverage capture data.
3. Completed run of `gcovr` tool, which internally invokes either `gcov` or `llvm-cov`
to assemble both instrumentation data and coverage capture data into a coverage report.
The above steps are automated into a single target `coverage_report`. The instrumented `clio_tests` binary can also be used for running regular unit tests.
In case of a spurious failure of unit tests, it is possible to re-run the `coverage_report` target without rebuilding the `clio_tests` binary (since it is simply a dependency of the coverage report target).
The default coverage report format is `html-details`, but developers can override it to any of the formats listed in `cmake/CodeCoverage.cmake` by setting `CODE_COVERAGE_REPORT_FORMAT` variable in `cmake`. For example, CI is setting this parameter to `xml` for the [codecov](https://codecov.io) integration.
After the `coverage_report` target is completed, the generated coverage report will be stored inside the build directory as either:
- A file named `coverage_report.*`, with a suitable extension for the report format.
- A directory named `coverage_report`, with `index.html` and other files inside, for the `html-details` or `html-nested` report formats.
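Putting this together, a local coverage run might look roughly like the following; this is a sketch only, and the exact `conan` and `cmake` invocations should follow the build instructions above for your environment:
```sh
mkdir -p build && cd build
# Enable both options so the coverage_report target becomes available.
conan install .. --output-folder . --build missing --settings build_type=Debug \
  -o coverage=True -o tests=True
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
  -DCMAKE_BUILD_TYPE=Debug ..
# Builds clio_tests, runs them, and invokes gcovr to assemble the report.
cmake --build . --parallel 8 --target coverage_report
# With the default html-details format, the report is at ./coverage_report/index.html
```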