mirror of
https://github.com/XRPLF/clio.git
synced 2025-11-04 20:05:51 +00:00

443
README.md
@@ -1,415 +1,48 @@
# <img src='./docs/img/xrpl-logo.svg' width='40'> Clio

[](https://github.com/XRPLF/clio/actions/workflows/build.yml?query=branch%3Adevelop)
[](https://github.com/XRPLF/clio/actions/workflows/nightly.yml?query=branch%3Adevelop)
[](https://github.com/XRPLF/clio/actions/workflows/clang-tidy.yml?query=branch%3Adevelop)
[](https://app.codecov.io/gh/XRPLF/clio)
Clio is an XRP Ledger API server optimized for RPC calls over WebSocket or JSON-RPC.
It stores validated historical ledger and transaction data in a more space-efficient format, using up to 4 times less space than [rippled](https://github.com/XRPLF/rippled).

Clio can be configured to store data in [Apache Cassandra](https://cassandra.apache.org/_/index.html) or [ScyllaDB](https://www.scylladb.com/), enabling scalable read throughput.
Multiple Clio nodes can share access to the same dataset, which allows for a highly available cluster of Clio nodes without the need for redundant data storage or computation.
## 📡 Clio and `rippled`

Clio offers the full `rippled` API, with the caveat that Clio by default only returns validated data. This means that `ledger_index` defaults to `validated` instead of `current` for all requests. Other non-validated data, such as information about queued transactions, is also not returned.

Clio retrieves data from a designated group of `rippled` nodes instead of connecting to the peer-to-peer network.
For requests that require access to the peer-to-peer network, such as `fee` or `submit`, Clio automatically forwards the request to a `rippled` node and propagates the response back to the client. To access non-validated data for *any* request, simply add `ledger_index: "current"` to the request, and Clio will forward the request to `rippled`.

> [!NOTE]
> Clio requires access to at least one `rippled` node, which can run on the same machine as Clio or separately.
## 📚 Learn more about Clio

Below are some useful docs to learn more about Clio.

**For Developers**:

- [How to build Clio](./docs/build-clio.md)
- [Metrics and static analysis](./docs/metrics-and-static-analysis.md)
- [Coverage report](./docs/coverage-report.md)

**For Operators**:

- [How to configure Clio and rippled](./docs/configure-clio.md)
- [How to run Clio](./docs/run-clio.md)
- [Logging](./docs/logging.md)

**General reference material:**

- [API reference](https://xrpl.org/http-websocket-apis.html)
- [Clio documentation](https://xrpl.org/the-clio-server.html#the-clio-server)
## 🆘 Help

Feel free to open an [issue](https://github.com/XRPLF/clio/issues) if you have a feature request or something doesn't work as expected.
If you have any questions about building, running, contributing to, or using Clio, you can always start a new [discussion](https://github.com/XRPLF/clio/discussions).
## Requirements

1. Access to a Cassandra or ScyllaDB cluster. Can be local or remote.
2. Access to one or more rippled nodes. Can be local or remote.

## Building

Clio is built with CMake and uses Conan for managing dependencies.
It is written in C++20 and therefore requires a modern compiler.
## Prerequisites

### Minimum Requirements

- [Python 3.7](https://www.python.org/downloads/)
- [Conan 1.55](https://conan.io/downloads.html)
- [CMake 3.16](https://cmake.org/download/)
- [**Optional**] [GCovr](https://gcc.gnu.org/onlinedocs/gcc/Gcov.html) (needed for code coverage generation)
- [**Optional**] [CCache](https://ccache.dev/) (speeds up compilation if you are going to compile Clio often)

| Compiler    | Version |
|-------------|---------|
| GCC         | 11      |
| Clang       | 14      |
| Apple Clang | 14.0.3  |
### Conan configuration

Clio does not require anything but the default settings in your Conan profile (`~/.conan/profiles/default`). It's best to have no extra flags specified.

> Mac example:

```
[settings]
os=Macos
os_build=Macos
arch=armv8
arch_build=armv8
compiler=apple-clang
compiler.version=14
compiler.libcxx=libc++
build_type=Release
compiler.cppstd=20
```

> Linux example:

```
[settings]
os=Linux
os_build=Linux
arch=x86_64
arch_build=x86_64
compiler=gcc
compiler.version=11
compiler.libcxx=libstdc++11
build_type=Release
compiler.cppstd=20
```
### Artifactory

1. Make sure Artifactory is set up with Conan:

   ```sh
   conan remote add --insert 0 conan-non-prod http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
   ```

   Now you should be able to download the prebuilt `xrpl` package on some platforms.

   You might need to edit the `~/.conan/remotes.json` file to ensure that this newly added Artifactory is listed last. Otherwise, you might see compilation errors when building the project with GCC version 13 (or newer).

2. Remove old packages you may have cached:

   ```sh
   conan remove -f xrpl
   ```
## Building Clio

Navigate to Clio's root directory and run:

```sh
mkdir build && cd build
conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True -o lint=False
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --parallel 8  # or without the number if you feel extra adventurous
```

If all goes well, `conan install` will find the required packages and `cmake` will do the rest. You should end up with `clio_server` and `clio_tests` in the `build` directory (the current directory).

> **Tip:** You can omit `-o tests=True` in the `conan install` command above if you don't want to build `clio_tests`.

> **Tip:** To generate a code coverage report, include `-o coverage=True` in the `conan install` command above, along with `-o tests=True` to enable tests. After running the `cmake` commands, execute `make clio_tests-ccov`. The coverage report will be found at `clio_tests-llvm-cov/index.html`.
## Building Clio with Docker

It is possible to build Clio using Docker if you don't want to install all the dependencies on your machine.

```sh
docker run -it rippleci/clio_ci:latest
git clone https://github.com/XRPLF/clio
mkdir build && cd build
conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True -o lint=False
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --parallel 8  # or without the number if you feel extra adventurous
```
## Running

```sh
./clio_server config.json
```

Clio needs access to a rippled server, and the config files of rippled and Clio need to agree with each other.

Clio needs to know:

- the IP of rippled
- the port on which rippled is accepting unencrypted WebSocket connections
- the port on which rippled is handling gRPC requests

rippled needs to open:

- a port to accept unencrypted WebSocket connections
- a port to handle gRPC requests, with the IP(s) of Clio specified in the `secure_gateway` entry
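For illustration, here is a sketch of the corresponding part of Clio's config (the field names follow Clio's `example-config.json`; the IP and port values are placeholders for your own setup):

```json
"etl_sources": [
    {
        "ip": "127.0.0.1",
        "ws_port": "6006",
        "grpc_port": "50051"
    }
]
```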
The example configs of rippled and Clio are set up such that minimal changes are required. When running locally, the only change needed is to uncomment the `port_grpc` section of the rippled config. When running Clio and rippled on separate machines, in addition to uncommenting the `port_grpc` section, a few other steps must be taken:

1. Change the `ip` of the first entry of `etl_sources` to the IP where your rippled server is running.
2. Open a public, unencrypted WebSocket port on your rippled server.
3. Change the IP specified in the `secure_gateway` entry of the `port_grpc` section of the rippled config to the IP of your Clio server. This entry can take the form of a comma-separated list if you are running multiple Clio nodes.
In addition, the parameter `start_sequence` can be included at the top level of the config file. This parameter specifies the sequence of the first ledger to extract if the database is empty. Note that ETL extracts ledgers in order and that no backfilling functionality currently exists, meaning Clio will not retroactively learn ledgers older than the one you specify. Specifying this setting or not yields the following behavior:

- If this setting is absent and the database is empty, ETL will start with the next ledger validated by the network.
- If this setting is present and the database is not empty, an exception is thrown.

The optional parameter `finish_sequence` can also be added to the config file, specifying the ledger at which extraction stops.

`start_sequence` and `finish_sequence` sit at the same top level of the config as other parameters (such as `database`, `etl_sources`, `read_only`, etc.) and are specified as integers. Here is an example snippet from the config file:

```json
"start_sequence": 12345,
"finish_sequence": 54321
```
The parameters `ssl_cert_file` and `ssl_key_file` can also be added at the top level of the Clio config. `ssl_cert_file` specifies the filepath for your SSL cert, while `ssl_key_file` specifies the filepath for your SSL key. It is up to you how to change ownership of these files for your designated Clio user. Your options include:

- Copying the two files as root somewhere that's accessible by the Clio user, then running `sudo chown` to your user
- Changing the permissions directly so they're readable by your Clio user
- Running Clio as root (strongly discouraged)

An example of how to specify `ssl_cert_file` and `ssl_key_file` in the config:

```json
"server": {
    "ip": "0.0.0.0",
    "port": 51233
},
"ssl_cert_file": "/full/path/to/cert.file",
"ssl_key_file": "/full/path/to/key.file"
```
Once your config files are ready, start rippled and Clio. It doesn't matter which you start first, and it's fine to stop one or the other and restart at any given time.

Clio will wait for rippled to sync before extracting any ledgers. If there is already data in Clio's database, Clio will begin extraction with the ledger whose sequence is one greater than the greatest sequence currently in the database, and will wait for that ledger to become available. Be aware that rippled first syncs to the most recent ledger on the network, and then backfills. If Clio is extracting ledgers from rippled, and rippled is stopped for a significant amount of time and then restarted, rippled will take time to backfill to the next ledger that Clio wants. The time this takes is proportional to how long rippled was offline.

Also be aware that how far rippled backfills depends on the `online_delete` and `ledger_history` config values; if these values are small and rippled is stopped for a significant amount of time, rippled may never backfill to the ledger that Clio wants. To avoid this situation, keep history proportional to the amount of time you expect rippled to be offline. For example, if you expect rippled to be offline for a few days from time to time, keep at least a few days of history. If you expect rippled to never be offline, you can keep a very small amount of history.
Clio can use multiple rippled servers as a data source. Simply add more entries to the `etl_sources` section, and Clio will load balance requests across the servers specified in this list. As long as one rippled server is up and synced, Clio will continue extracting ledgers.
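As a sketch, an `etl_sources` section with two servers might look like this (the field names follow `example-config.json`; the IPs and ports are illustrative):

```json
"etl_sources": [
    { "ip": "10.0.0.1", "ws_port": "6006", "grpc_port": "50051" },
    { "ip": "10.0.0.2", "ws_port": "6006", "grpc_port": "50051" }
]
```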
In contrast to rippled, Clio will answer RPC requests for the data already in the database as soon as the server starts. Clio doesn't wait to sync to the network, or for rippled to sync.
When starting Clio with a fresh database, Clio needs to download a ledger in full. This can take some time, depending on database throughput. With a moderately fast database, this should take less than 10 minutes. If you did not properly set `secure_gateway` in the `port_grpc` section of rippled, this step will fail. Once the first ledger is fully downloaded, Clio only needs to extract the changed data for each ledger, so extraction is much faster and Clio can keep up with rippled in real time. Even under intense load, Clio should not lag behind the network, as Clio is not processing the data, and is simply writing to a database. The throughput of Clio depends on the throughput of your database, but a standard Cassandra or Scylla deployment can handle the write load of the XRP Ledger without any trouble. Generally, the performance considerations come on the read side, and depend on the number of RPC requests your Clio nodes are serving. Be aware that very heavy read traffic can impact write throughput. Again, this is on the database side, so if you are seeing this, upgrade your database.
It is possible to run multiple Clio nodes that share access to the same database. The Clio nodes don't need to know about each other. You can simply spin up more Clio nodes pointing to the same database and shut them down as you wish. On startup, each Clio node queries the database for the latest ledger. If this latest ledger does not change for some time, the Clio node begins extracting ledgers and writing to the database. If a Clio node detects that a ledger it is trying to write has already been written, it backs off and stops writing. If it later sees no ledger written for some time, it starts writing again. This algorithm ensures that at any given time, one and only one Clio node is writing to the database.
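The write coordination described above can be sketched roughly as follows. This is illustrative pseudologic under assumed names (`writer_step`, `on_write_conflict`, a fixed stall threshold), not Clio's actual implementation:

```python
def writer_step(state, latest_seq, now, stall_threshold=10.0):
    """One polling iteration of the (illustrative) single-writer logic.

    state: dict with keys "writing", "last_seq", "last_change".
    Returns True if this node should attempt to write the next ledger.
    """
    if latest_seq != state["last_seq"]:
        # Some node is still publishing ledgers; remember when we last saw progress.
        state["last_seq"] = latest_seq
        state["last_change"] = now
    stalled = now - state["last_change"] > stall_threshold
    if not state["writing"] and stalled:
        state["writing"] = True  # no writer seems active; volunteer to write
    return state["writing"]

def on_write_conflict(state):
    # The ledger we tried to write already exists: another node is the
    # writer, so back off and return to read-only behavior.
    state["writing"] = False
```

Each node runs this loop independently against the shared database, which is why no node-to-node communication is needed.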
It is possible to force Clio to only read data and never become a writer. To do this, set `read_only: true` in the config. One common setup is to have a small number of writer nodes that are inaccessible to clients, with several read-only nodes handling client requests. The number of read-only nodes can be scaled up or down in response to request volume.
When using multiple rippled servers as data sources and multiple Clio nodes, each Clio node should use the same set of rippled servers as sources. The order doesn't matter. The only reason not to do this is if you are running servers in different regions and want the Clio nodes to extract from servers in their own region. However, if you are doing this, be aware that database traffic will flow across regions, which can cause high latencies. A possible alternative is to deploy a database in each region, with the Clio nodes in each region using their region's database. This is effectively two systems.
Clio supports API versioning as [described here](https://xrpl.org/request-formatting.html#api-versioning).
It's possible to configure the `minimum`, `maximum` and `default` versions like so:

```json
"api_version": {
    "min": 1,
    "max": 2,
    "default": 1
}
```

All of the above are optional.
Clio will fall back to hardcoded defaults when these are not specified in the config file, or when the configured values are outside of the minimum and maximum supported versions hardcoded in `src/rpc/common/APIVersion.h`.

> **Note:** See `example-config.json` for more details.
## Admin rights for requests

By default, Clio checks admin privileges using the IP address of the request (only `127.0.0.1` is considered an admin). This is not very secure, because the IP could be spoofed.
For better security, an `admin_password` can be provided in the `server` section of Clio's config:

```json
"server": {
    "admin_password": "secret"
}
```

If the password is present in the config, Clio will check the Authorization header (if any) of each request for the password.
The Authorization header should contain the type `Password` followed by the password from the config, e.g. `Password secret`.
Only a password that exactly matches the config gains admin rights for the request or WebSocket connection.
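As a sketch of the header format described above, the following builds (but does not send) an admin JSON-RPC request using only the Python standard library. The URL and password are placeholders for your own config values; `build_admin_request` is a name invented here, not part of Clio:

```python
import json
import urllib.request

def build_admin_request(method: str, params: dict, password: str,
                        url: str = "http://127.0.0.1:51233/") -> urllib.request.Request:
    # Clio grants admin rights when the Authorization header is exactly
    # "Password <admin_password from the config>".
    body = json.dumps({"method": method, "params": [params]}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Password {password}",
            "Content-Type": "application/json",
        },
    )

# Build an admin request for the `ledger` method.
req = build_admin_request("ledger", {"ledger_index": "validated"}, "secret")
```

The request could then be sent with `urllib.request.urlopen(req)` against a running Clio server.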
## Prometheus metrics collection

Clio natively supports Prometheus metrics collection. It accepts Prometheus requests on the port configured in the `server` section of the config.
By default, Prometheus metrics are enabled and replies to `/metrics` are compressed.

To disable compression and get human-readable metrics, add `"prometheus": { "enabled": true, "compress_reply": false }` to Clio's config.
To completely disable Prometheus metrics, add `"prometheus": { "enabled": false }` to Clio's config.

It is important to know that Clio responds to Prometheus requests only if they are admin requests.
If you are using the admin password feature, the same password should be provided in the Authorization header of Prometheus requests.
An example docker-compose file, along with Prometheus and Grafana configs, can be found in [examples/infrastructure](examples/infrastructure).
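A minimal Prometheus scrape config for this setup might look like the following sketch. The target address and password are placeholders for your own values, and the `authorization` block assumes Prometheus 2.26 or newer:

```yaml
scrape_configs:
  - job_name: clio
    metrics_path: /metrics
    static_configs:
      - targets: ["127.0.0.1:51233"]
    # Clio only serves /metrics to admin requests, so pass the admin
    # password via the Authorization header ("Password <secret>").
    authorization:
      type: Password
      credentials: secret
```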
## Using clang-tidy for static analysis

The minimum clang-tidy version required is 17.0.
Clang-tidy can be run by CMake while building the project.
To achieve this, provide the option `-o lint=True` to the `conan install` command:

```sh
conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True -o lint=True
```

By default, CMake will try to find clang-tidy automatically on your system.
To force CMake to use a particular binary, set the `CLIO_CLANG_TIDY_BIN` environment variable to the path of the clang-tidy binary. For example:

```sh
export CLIO_CLANG_TIDY_BIN=/opt/homebrew/opt/llvm@17/bin/clang-tidy
```
## Coverage report

The coverage report is intended for developers using the GCC or Clang compilers (including Apple Clang). It is generated by the build target `coverage_report`, which is only enabled when both the `tests` and `coverage` options are set, e.g. with `-o coverage=True -o tests=True` in `conan`.

Prerequisites for the coverage report:

- [gcovr tool](https://gcovr.com/en/stable/getting-started.html) (can be installed e.g. with `pip install gcovr`)
- `gcov` for GCC (installed with the compiler by default) or
- `llvm-cov` for Clang (installed with the compiler by default, also on Apple)
- `Debug` build type

The coverage report is created when the following steps are completed, in order:

1. The `clio_tests` binary is built with instrumentation data, enabled by the `coverage` option mentioned above.
2. A run of the unit tests completes, which populates the coverage capture data.
3. A run of the `gcovr` tool completes (internally invoking either `gcov` or `llvm-cov`), assembling both the instrumentation data and the coverage capture data into a coverage report.

The above steps are automated into a single target, `coverage_report`. The instrumented `clio_tests` binary can also be used for running regular unit tests. In case of a spurious unit test failure, it is possible to re-run the `coverage_report` target without rebuilding the `clio_tests` binary (since it is simply a dependency of the coverage report target).

The default coverage report format is `html-details`, but developers can override it to any of the formats listed in `CMake/CodeCoverage.cmake` by setting the `CODE_COVERAGE_REPORT_FORMAT` variable in `cmake`. For example, CI sets this parameter to `xml` for the [codecov](https://codecov.io) integration.

If some unit tests predictably fail, e.g. due to the absence of a Cassandra database, it is possible to set unit test options in the `CODE_COVERAGE_TESTS_ARGS` cmake variable, as demonstrated below:

```
cd .build
conan install .. --output-folder . --build missing --settings build_type=Debug -o tests=True -o coverage=True
cmake -DCODE_COVERAGE_REPORT_FORMAT=json-details -DCMAKE_BUILD_TYPE=Debug -DCODE_COVERAGE_TESTS_ARGS="--gtest_filter=-BackendCassandra*" -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake ..
cmake --build . --target coverage_report
```

After the `coverage_report` target completes, the generated coverage report is stored inside the build directory as either of:

- a file named `coverage_report.*`, with a suitable extension for the report format, or
- a directory named `coverage_report`, containing `index.html` and other files, for the `html-details` or `html-nested` report formats.
## Developing against `rippled` in standalone mode

If you wish to develop against a `rippled` instance running in standalone mode, there are a few quirks of both Clio and rippled that you need to keep in mind. You must:

1. Advance the `rippled` ledger to at least ledger 256.
2. Wait 10 minutes before first starting Clio against this standalone node.
## Logging

Clio provides several logging options, all of which are configurable via the config file and detailed below.

`log_level`: The minimum severity at which log messages are output by default.
Severity options are `trace`, `debug`, `info`, `warning`, `error`, `fatal`. Defaults to `info`.

`log_format`: The format of log lines produced by Clio. Defaults to `"%TimeStamp% (%SourceLocation%) [%ThreadID%] %Channel%:%Severity% %Message%"`.
Each of the variables expands like so:

- `TimeStamp`: The full date and time of the log entry
- `SourceLocation`: A partial path to the C++ file and the line number in said file (`source/file/path:linenumber`)
- `ThreadID`: The ID of the thread the log entry is written from
- `Channel`: The channel that this log entry was sent to
- `Severity`: The severity (aka log level) the entry was sent at
- `Message`: The actual log message
`log_channels`: An array of JSON objects, each overriding properties for a logging `channel`.
At the time of writing, only `log_level` can be overridden using this mechanism.

Each object is of this format:

```json
{
    "channel": "Backend",
    "log_level": "fatal"
}
```

If no override is present for a given channel, that channel will log at the severity specified by the global `log_level`.
The overridable log channels are: `Backend`, `WebServer`, `Subscriptions`, `RPC`, `ETL` and `Performance`.

> **Note:** See `example-config.json` for more details.

`log_to_console`: Enable/disable log output to console. Options are `true`/`false`. Defaults to `true`.

`log_directory`: Path to the directory where log files are stored. If the directory doesn't exist, Clio will create it. If not specified, logs are not written to a file.

`log_rotation_size`: The max size of the log file in **megabytes** before it rotates into a smaller file. Defaults to 2GB.

`log_directory_max_size`: The max size of the log directory in **megabytes** before old log files are deleted to free up space. Defaults to 50GB.

`log_rotation_hour_interval`: The time interval in **hours** after the last log rotation at which the current log file is automatically rotated. Defaults to 12 hours.

Note that time-based log rotation depends on size-based log rotation: if a size-based log rotation occurs, the timer for the time-based rotation resets.

`log_tag_style`: Tag implementation to use. Must be one of:

- `uint`: Lock-free and threadsafe, but outputs just a simple unsigned integer
- `uuid`: Threadsafe, and outputs a UUID tag
- `none`: Don't use tagging at all
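Putting several of these options together, a logging portion of the config might look like the following sketch (the directory path and values are illustrative, chosen to match the stated defaults):

```json
"log_level": "info",
"log_to_console": true,
"log_directory": "/var/log/clio",
"log_rotation_size": 2048,
"log_directory_max_size": 51200,
"log_rotation_hour_interval": 12,
"log_tag_style": "uint",
"log_channels": [
    { "channel": "Backend", "log_level": "warning" }
]
```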
## Cassandra / Scylla Administration

Since Clio relies on either Cassandra or Scylla for its database backend, here are some important considerations:

- Scylla, by default, will reserve all free RAM on a machine for itself. If you are running `rippled` or other services on the same machine, restrict its memory usage using the `--memory` argument: https://docs.scylladb.com/getting-started/scylla-in-a-shared-environment/
109
docs/build-clio.md
Normal file
@@ -0,0 +1,109 @@
# How to build Clio

Clio is built with [CMake](https://cmake.org/) and uses [Conan](https://conan.io/) for managing dependencies. It is written in C++20 and therefore requires a modern compiler.

## Minimum Requirements

- [Python 3.7](https://www.python.org/downloads/)
- [Conan 1.55](https://conan.io/downloads.html)
- [CMake 3.16](https://cmake.org/download/)
- [**Optional**] [GCovr](https://gcc.gnu.org/onlinedocs/gcc/Gcov.html): needed for code coverage generation
- [**Optional**] [CCache](https://ccache.dev/): speeds up compilation if you are going to compile Clio often

| Compiler    | Version |
|-------------|---------|
| GCC         | 11      |
| Clang       | 14      |
| Apple Clang | 14.0.3  |

### Conan Configuration

Clio does not require anything but the default settings in your Conan profile (`~/.conan/profiles/default`). It's best to have no extra flags specified.

> Mac example:

```
[settings]
os=Macos
os_build=Macos
arch=armv8
arch_build=armv8
compiler=apple-clang
compiler.version=14
compiler.libcxx=libc++
build_type=Release
compiler.cppstd=20
```

> Linux example:

```
[settings]
os=Linux
os_build=Linux
arch=x86_64
arch_build=x86_64
compiler=gcc
compiler.version=11
compiler.libcxx=libstdc++11
build_type=Release
compiler.cppstd=20
```
#### Artifactory

Make sure Artifactory is set up with Conan.

```sh
conan remote add --insert 0 conan-non-prod http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
```

Now you should be able to download the prebuilt `xrpl` package on some platforms.

> [!NOTE]
> You may need to edit the `~/.conan/remotes.json` file to ensure that this newly added Artifactory is listed last. Otherwise, you could see compilation errors when building the project with GCC version 13 (or newer).

Remove old packages you may have cached.

```sh
conan remove -f xrpl
```
## Building Clio

Navigate to Clio's root directory and run:

```sh
mkdir build && cd build
conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True -o lint=False
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --parallel 8  # or without the number if you feel extra adventurous
```

> [!TIP]
> You can omit `-o tests=True` if you don't want to build `clio_tests`.

If successful, `conan install` will find the required packages and `cmake` will do the rest. You should see `clio_server` and `clio_tests` in the `build` directory (the current directory).

> [!TIP]
> To generate a code coverage report, include `-o coverage=True` in the `conan install` command above, along with `-o tests=True` to enable tests. After running the `cmake` commands, execute `make clio_tests-ccov`. The coverage report will be found at `clio_tests-llvm-cov/index.html`.
## Building Clio with Docker
|
||||
|
||||
It is also possible to build Clio using [Docker](https://www.docker.com/) if you don't want to install all the dependencies on your machine.
|
||||
|
||||
```sh
|
||||
docker run -it rippleci/clio_ci:latest
|
||||
git clone https://github.com/XRPLF/clio
|
||||
mkdir build && cd build
|
||||
conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True -o lint=False
|
||||
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release ..
|
||||
cmake --build . --parallel 8 # or without the number if you feel extra adventurous
|
||||
```

## Developing against `rippled` in standalone mode

If you wish to develop against a `rippled` instance running in standalone mode, there are a few quirks of both Clio and `rippled` that you need to keep in mind. You must:

1. Advance the `rippled` ledger to at least ledger 256.
2. Wait 10 minutes before first starting Clio against this standalone node.
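
Step 1 can be scripted against `rippled`'s admin CLI. A minimal sketch, assuming `rippled` was started in standalone mode (e.g. `rippled -a --start`) and that the binary and config paths below (which are placeholders) match your setup; in standalone mode each `ledger_accept` call closes one ledger:

```sh
RIPPLED=./rippled           # path to your rippled binary (placeholder)
CONF=rippled-standalone.cfg # your standalone config (placeholder)

# Close ledgers until the chain has advanced past ledger 256.
i=0
while [ "$i" -lt 256 ]; do
  "$RIPPLED" --conf "$CONF" ledger_accept > /dev/null
  i=$((i + 1))
done
```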
# How to configure Clio and `rippled`

## Ports

Clio needs access to a `rippled` server in order to work. The following configurations are required for Clio and `rippled` to communicate:

1. In the Clio config file, provide the following:

   - The IP of the `rippled` server
   - The port on which `rippled` is accepting unencrypted WebSocket connections
   - The port on which `rippled` is handling gRPC requests

2. In the `rippled` config file, you need to open:

   - A port to accept unencrypted WebSocket connections
   - A port to handle gRPC requests, with the IP(s) of Clio specified in the `secure_gateway` entry

The example configs of [rippled](https://github.com/XRPLF/rippled/blob/develop/cfg/rippled-example.cfg) and [Clio](../example-config.json) are set up in a way that minimal changes are required. When running locally, the only change needed is to uncomment the `port_grpc` section of the `rippled` config.
If you're running Clio and `rippled` on separate machines, in addition to uncommenting the `port_grpc` section, a few other steps must be taken:

1. Change the `ip` in `etl_sources` to the IP where your `rippled` server is running.
2. Open a public, unencrypted WebSocket port on your `rippled` server.
3. In the `rippled` config, change the IP specified for `secure_gateway`, under the `port_grpc` section, to the IP of your Clio server. This entry can take the form of a comma-separated list if you are running multiple Clio nodes.
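
On the Clio side, the result might look like the following `etl_sources` fragment (field names as in [example-config.json](../example-config.json); the IP and ports below are placeholders):

```json
"etl_sources": [
  {
    "ip": "203.0.113.10",
    "ws_port": "6006",
    "grpc_port": "50051"
  }
]
```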

## Ledger sequence

The parameter `start_sequence` can be included and configured within the top level of the config file. This parameter specifies the sequence of the first ledger to extract if the database is empty.

Note that ETL extracts ledgers in order, and backfilling functionality currently doesn't exist. This means Clio does not retroactively learn ledgers older than the one you specify. Whether or not you specify this setting yields the following behavior:

- If this setting is absent and the database is empty, ETL will start with the next ledger validated by the network.
- If this setting is present and the database is not empty, an exception is thrown.

In addition, the optional parameter `finish_sequence` can be added to the JSON file as well, specifying the ledger at which extraction should stop.

To add `start_sequence` and/or `finish_sequence` to the `config.json` file appropriately, they must be at the same top level of precedence as other parameters (i.e., `database`, `etl_sources`, `read_only`) and be specified as integers.

Here is an example snippet from the config file:

```json
"start_sequence": 12345,
"finish_sequence": 54321
```

## SSL

The parameters `ssl_cert_file` and `ssl_key_file` can also be added at the top level of the Clio config. The `ssl_cert_file` field specifies the filepath for your SSL cert, while `ssl_key_file` specifies the filepath for your SSL key. It is up to you how to change ownership of these files for your designated Clio user.

Your options include:

- Copying the two files as root somewhere that's accessible by the Clio user, then running `sudo chown` to your user
- Changing the permissions directly so they're readable by your Clio user
- Running Clio as root (strongly discouraged)

Here is an example of how to specify `ssl_cert_file` and `ssl_key_file` in the config:

```json
"server": {
  "ip": "0.0.0.0",
  "port": 51233
},
"ssl_cert_file": "/full/path/to/cert.file",
"ssl_key_file": "/full/path/to/key.file"
```

## Admin rights for requests

By default, Clio determines admin privileges by the IP address of the request (only `127.0.0.1` is considered to be an admin). This is not very secure, because the IP could be spoofed. For better security, an `admin_password` can be provided in the `server` section of Clio's config:

```json
"server": {
  "admin_password": "secret"
}
```

If a password is present in the config, Clio checks the `Authorization` header (if any) of each request for the password. The header should contain the type `Password` followed by the password from the config (e.g. `Password secret`). Only an exactly matching password grants admin rights to the request or WebSocket connection.
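
For example, an HTTP request carrying admin rights might look like this `curl` sketch (it assumes Clio is listening locally on port 51233 and that the config sets `admin_password` to `secret` as above):

```sh
curl -H "Authorization: Password secret" \
  -d '{"method": "server_info"}' \
  http://127.0.0.1:51233/
```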
# Coverage report

The coverage report is intended for developers using GCC or Clang compilers (including Apple Clang). It is generated by the build target `coverage_report`, which is only enabled when both the `tests` and `coverage` options are set (e.g., with `-o coverage=True -o tests=True` in `conan`).

## Prerequisites

To generate the coverage report you need:

- The [gcovr tool](https://gcovr.com/en/stable/getting-started.html) (can be installed e.g. with `pip install gcovr`)
- `gcov` for GCC (installed with the compiler by default)
- `llvm-cov` for Clang (installed with the compiler by default, also on Apple)
- A `Debug` build type

## Creating the coverage report

The coverage report is created when the following steps are completed, in order:

1. The `clio_tests` binary is built with instrumentation data, enabled by the `coverage` option mentioned above.
2. A complete run of the unit tests populates the coverage capture data.
3. A run of the `gcovr` tool, which internally invokes either `gcov` or `llvm-cov`, assembles both the instrumentation data and the coverage capture data into a coverage report.

The above steps are automated into a single target, `coverage_report`. The instrumented `clio_tests` binary can also be used for running regular unit tests.

In case of a spurious unit test failure, it is possible to re-run the `coverage_report` target without rebuilding the `clio_tests` binary (since it is simply a dependency of the coverage report target).

The default coverage report format is `html-details`, but developers can override it to any of the formats listed in `CMake/CodeCoverage.cmake` by setting the `CODE_COVERAGE_REPORT_FORMAT` variable in `cmake`. For example, CI sets this parameter to `xml` for the [codecov](https://codecov.io) integration.

If some unit tests predictably fail (e.g., due to the absence of a Cassandra database), it is possible to set unit test options in the `CODE_COVERAGE_TESTS_ARGS` cmake variable, as demonstrated below:

```sh
cd .build
conan install .. --output-folder . --build missing --settings build_type=Debug -o tests=True -o coverage=True
cmake -DCODE_COVERAGE_REPORT_FORMAT=json-details -DCMAKE_BUILD_TYPE=Debug -DCODE_COVERAGE_TESTS_ARGS="--gtest_filter=-BackendCassandra*" -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake ..
cmake --build . --target coverage_report
```

After the `coverage_report` target completes, the generated coverage report is stored inside the build directory as either:

- A file named `coverage_report.*`, with a suitable extension for the report format.
- A directory named `coverage_report`, containing `index.html` and other files, for the `html-details` or `html-nested` report formats.

*(New file `docs/img/xrpl-logo.svg`: the XRPL logo SVG referenced at the top of the README. Size: 2.7 KiB.)*
# Logging

Clio provides several logging options, all of which are configurable via the config file. These are detailed in the following sections.

## `log_level`

The minimum severity at which log messages are output by default. Severity options are `trace`, `debug`, `info`, `warning`, `error` and `fatal`. Defaults to `info`.

## `log_format`

The format of log lines produced by Clio. Defaults to `"%TimeStamp% (%SourceLocation%) [%ThreadID%] %Channel%:%Severity% %Message%"`.

Each of the variables expands like so:

- `TimeStamp`: The full date and time of the log entry
- `SourceLocation`: A partial path to the C++ file and the line number in said file (`source/file/path:linenumber`)
- `ThreadID`: The ID of the thread the log entry is written from
- `Channel`: The channel that this log entry was sent to
- `Severity`: The severity (aka log level) the entry was sent at
- `Message`: The actual log message

## `log_channels`

An array of JSON objects, each overriding properties for a logging `channel`.

> [!IMPORTANT]
> At the time of writing, only `log_level` can be overridden using this mechanism.

Each object is of this format:

```json
{
  "channel": "Backend",
  "log_level": "fatal"
}
```

If no override is present for a given channel, that channel will log at the severity specified by the global `log_level`.

The log channels that can be overridden are: `Backend`, `WebServer`, `Subscriptions`, `RPC`, `ETL` and `Performance`.

> [!NOTE]
> See [example-config.json](../example-config.json) for more details.
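
Putting it together, a `log_channels` array in the config might look like this (the channel names come from the list above; the chosen levels are arbitrary examples):

```json
"log_channels": [
  {
    "channel": "Backend",
    "log_level": "fatal"
  },
  {
    "channel": "RPC",
    "log_level": "debug"
  }
]
```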

## `log_to_console`

Enable or disable log output to the console. Options are `true`/`false`. Defaults to `true`.

## `log_directory`

Path to the directory where log files are stored. If the directory doesn't exist, Clio creates it.

If this option is not specified, logs are not written to a file.

## `log_rotation_size`

The maximum size of the log file, in **megabytes**, before it rotates into a new file. Defaults to 2GB.

## `log_directory_max_size`

The maximum size of the log directory, in **megabytes**, before old log files are deleted to free up space. Defaults to 50GB.

## `log_rotation_hour_interval`

The time interval, in **hours**, after the last log rotation at which the current log file is automatically rotated. Defaults to 12 hours.

> [!NOTE]
> Time-based log rotation occurs in conjunction with size-based log rotation. For example, if a size-based log rotation occurs, the timer for the time-based rotation resets.

## `log_tag_style`

Tag implementation to use. Must be one of:

- `uint`: Lock-free and threadsafe, but outputs just a simple unsigned integer
- `uuid`: Threadsafe, and outputs a UUID tag
- `none`: Doesn't use tagging at all
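
As a summary, a top-level logging fragment combining the options above might look like the following sketch (values mirror the stated defaults, with sizes in megabytes; the directory path is a placeholder):

```json
"log_level": "info",
"log_format": "%TimeStamp% (%SourceLocation%) [%ThreadID%] %Channel%:%Severity% %Message%",
"log_to_console": true,
"log_directory": "/var/log/clio",
"log_rotation_size": 2048,
"log_directory_max_size": 51200,
"log_rotation_hour_interval": 12,
"log_tag_style": "uint"
```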
# Metrics and static analysis

## Prometheus metrics collection

Clio natively supports [Prometheus](https://prometheus.io/) metrics collection. It accepts Prometheus requests on the port configured in the `server` section of the config.

Prometheus metrics are enabled by default, and replies to `/metrics` are compressed. To disable compression, and have human-readable metrics, add `"prometheus": { "enabled": true, "compress_reply": false }` to Clio's config.

To completely disable Prometheus metrics, add `"prometheus": { "enabled": false }` to Clio's config.

Note that Clio responds to Prometheus requests only if they are admin requests. If you are using the admin password feature, the same password should be provided in the `Authorization` header of Prometheus requests.

You can find an example docker-compose file, with Prometheus and Grafana configs, in [examples/infrastructure](../examples/infrastructure/).

## Using `clang-tidy` for static analysis

The minimum [clang-tidy](https://clang.llvm.org/extra/clang-tidy/) version required is 17.0.

Clang-tidy can be run by CMake when building the project. To achieve this, you just need to provide the option `-o lint=True` to the `conan install` command:

```sh
conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True -o lint=True
```

By default, CMake will try to find `clang-tidy` automatically on your system.
To force CMake to use a particular binary, set the `CLIO_CLANG_TIDY_BIN` environment variable to the path of the `clang-tidy` binary. For example:

```sh
export CLIO_CLANG_TIDY_BIN=/opt/homebrew/opt/llvm@17/bin/clang-tidy
```
# How to run Clio

## Prerequisites

- Access to a Cassandra cluster or ScyllaDB cluster. Can be local or remote.

  > [!IMPORTANT]
  > There are some key considerations when using **ScyllaDB**. By default, Scylla reserves all free RAM on a machine for itself. If you are running `rippled` or other services on the same machine, restrict its memory usage using the `--memory` argument.
  >
  > See [ScyllaDB in a Shared Environment](https://docs.scylladb.com/getting-started/scylla-in-a-shared-environment/) to learn more.

- Access to one or more `rippled` nodes. Can be local or remote.

## Starting `rippled` and Clio

To run Clio you must first make the necessary changes to your configuration file, `config.json`. See [How to configure Clio and rippled](./configure-clio.md) to learn more.

Once your config files are ready, start `rippled` and Clio.

> [!TIP]
> It doesn't matter which you start first, and it's fine to stop one or the other and restart at any given time.

To start Clio, simply run:

```sh
./clio_server config.json
```

Clio will wait for `rippled` to sync before extracting any ledgers. If there is already data in Clio's database, Clio will begin extraction with the ledger whose sequence is one greater than the greatest sequence currently in the database. Clio will wait for this ledger to be available.

## Extracting ledgers from `rippled`

Be aware that the behavior of `rippled` is to sync to the most recent ledger on the network, and then backfill. If Clio is extracting ledgers from `rippled`, and then `rippled` is stopped for a significant amount of time and then restarted, `rippled` will take time to backfill to the next ledger that Clio wants.

The time this takes is proportional to the amount of time `rippled` was offline. Additionally, how far `rippled` backfills depends on the `online_delete` and `ledger_history` config values. If these values are small, and `rippled` is stopped for a significant amount of time, `rippled` may never backfill to the ledger that Clio wants. To avoid this situation, it is advised to keep history proportional to the amount of time that you expect `rippled` to be offline. For example, if you expect `rippled` to be offline for a few days from time to time, you should keep at least a few days of history. If you expect `rippled` to never be offline, then you can keep a very small amount of history.

Clio can use multiple `rippled` servers as a data source. Simply add more entries to the `etl_sources` section, and Clio will load balance requests across the servers specified in this list. As long as one `rippled` server is up and synced, Clio will continue extracting ledgers.

In contrast to `rippled`, Clio answers RPC requests for the data already in the database as soon as the server starts. Clio does not wait to sync to the network, or for `rippled` to sync.

## Starting Clio with a fresh database

When starting Clio with a fresh database, Clio needs to download a ledger in full. This can take some time, and depends on database throughput. With a moderately fast database, this should take less than 10 minutes. If you did not properly set `secure_gateway` in the `port_grpc` section of `rippled`, this step will fail.

Once the first ledger is fully downloaded, Clio only needs to extract the changed data for each ledger, so extraction is much faster and Clio can keep up with `rippled` in real time. Even under intense load, Clio should not lag behind the network, as Clio is not processing the data, and is simply writing to a database. The throughput of Clio is dependent on the throughput of your database, but a standard Cassandra or Scylla deployment can handle the write load of the XRP Ledger without any trouble.

> [!IMPORTANT]
> Generally the performance considerations come on the read side, and depend on the number of RPC requests your Clio nodes are serving. Be aware that very heavy read traffic can impact write throughput. Again, this is on the database side, so if you are seeing this, upgrade your database.

## Running multiple Clio nodes

It is possible to run multiple Clio nodes that share access to the same database. The Clio nodes don't need to know about each other. You can simply spin up more Clio nodes pointing to the same database, and shut them down as you wish.

On startup, each Clio node queries the database for the latest ledger. If this latest ledger does not change for some time, the Clio node begins extracting ledgers and writing to the database. If a Clio node detects that a ledger it is trying to write has already been written, it backs off and stops writing. If the node does not see a ledger written for some time, it starts writing again. This algorithm ensures that at any given time, one and only one Clio node is writing to the database.

### Configuring read only Clio nodes

It is possible to force Clio to only read data, and to never become a writer. To do this, set `read_only: true` in the config. One common setup is to have a small number of writer nodes that are inaccessible to clients, with several read only nodes handling client requests. The number of read only nodes can be scaled up or down in response to request volume.
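
In the config, this is a single top-level entry, alongside `database` and `etl_sources`:

```json
"read_only": true
```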

### Running multiple `rippled` servers

When using multiple `rippled` servers as data sources and multiple Clio nodes, each Clio node should use the same set of `rippled` servers as sources. The order doesn't matter. The only reason not to do this is if you are running servers in different regions, and you want the Clio nodes to extract from servers in their region. However, if you are doing this, be aware that database traffic will be flowing across regions, which can cause high latencies. A possible alternative is to deploy a database in each region, with the Clio nodes in each region using their region's database. This is effectively two separate systems.

## API versioning

Clio supports API versioning as [described here](https://xrpl.org/request-formatting.html#api-versioning).
It's possible to configure the `minimum`, `maximum` and `default` versions like so:

```json
"api_version": {
  "min": 1,
  "max": 2,
  "default": 1
}
```

All of the above are optional.

Clio falls back to hardcoded defaults when these values are not specified in the config file, or if the configured values are outside of the minimum and maximum supported versions hardcoded in [src/rpc/common/APIVersion.hpp](../src/rpc/common/APIVersion.hpp).

> [!TIP]
> See the [example-config.json](../example-config.json) for more details.