Compare commits


62 Commits

Author SHA1 Message Date
Ayaz Salikhov
e733fadb45 ci: Pass version explicitly and don't rely on tags (#2904) 2026-01-12 17:31:09 +00:00
Ayaz Salikhov
a7ac7b54a8 ci: Show ccache stats with -vv (#2902) 2026-01-12 17:30:45 +00:00
Sergey Kuznetsov
88866ea6fd fix: No output from failed asserts in tests (#2905)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2026-01-12 17:29:20 +00:00
Sergey Kuznetsov
bb39bce40b style: Fix clang-tidy error (#2901) 2026-01-12 14:31:34 +00:00
Ayaz Salikhov
bb3159bda0 feat: Add build information to clio_server --version (#2893) 2026-01-09 13:59:43 +00:00
Ayaz Salikhov
c0c5c14791 chore: Fix branch name and commit SHA for GitHub PRs (#2888) 2026-01-09 12:33:32 +00:00
github-actions[bot]
b0abe14057 style: clang-tidy auto fixes (#2891) 2026-01-09 10:07:21 +00:00
Bart
c9df784c4e ci: Use updated prepare-runner in actions and workflows (#2889) 2026-01-08 20:13:49 +00:00
Alex Kremer
a9787b131e feat: Basic support for channels (#2859)
This PR implements a Go-like channels wrapper (on top of Asio experimental
channels).
In the future this will be integrated into the AsyncFramework.

---------

Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
2026-01-08 14:21:46 +00:00
Sergey Kuznetsov
9f76eabf0a feat: Option to save cache asynchronously (#2883)
This PR adds an option to save cache to file asynchronously in parallel
with shutting down the rest of Clio services.
2026-01-07 17:20:56 +00:00
github-actions[bot]
79c08fc735 style: Update pre-commit hooks (#2875)
Co-authored-by: mathbunnyru <12270691+mathbunnyru@users.noreply.github.com>
2026-01-05 01:10:10 +00:00
dependabot[bot]
2c9c5634ad ci: [DEPENDABOT] Bump actions/cache from 4.3.0 to 5.0.1 (#2871)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-12-23 01:46:14 +00:00
dependabot[bot]
850333528c ci: [DEPENDABOT] Bump docker/setup-buildx-action from 3.11.1 to 3.12.0 (#2870)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-22 11:52:41 +00:00
github-actions[bot]
8da4194fe2 style: clang-tidy auto fixes (#2874)
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-12-22 11:52:23 +00:00
dependabot[bot]
4dece23ede ci: [DEPENDABOT] Bump docker/setup-buildx-action from 3.11.1 to 3.12.0 in /.github/actions/build-docker-image (#2872)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-22 11:52:04 +00:00
Alex Kremer
2327e81b0b fix: WorkQueue contention (#2866)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
2025-12-19 15:26:55 +00:00
Bart
5269ea0223 ci: Remove unnecessary creation of build directory (#2867) 2025-12-18 15:15:00 +00:00
github-actions[bot]
89fbcbf66a style: clang-tidy auto fixes (#2862) 2025-12-17 10:19:20 +00:00
Ayaz Salikhov
4b731a92ae ci: Update shared actions (#2852) 2025-12-16 20:54:48 +00:00
Ayaz Salikhov
7600e740a0 revert: "refactor: Add writing command to etl::SystemState" (#2860) 2025-12-16 15:06:35 +00:00
dependabot[bot]
db9a460867 ci: [DEPENDABOT] Bump actions/upload-artifact from 5.0.0 to 6.0.0 in /.github/actions/code-coverage (#2858)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 12:36:09 +00:00
dependabot[bot]
d5b0329e70 ci: [DEPENDABOT] Bump codecov/codecov-action from 5.5.1 to 5.5.2 (#2857)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 12:35:58 +00:00
dependabot[bot]
612434677a ci: [DEPENDABOT] Bump actions/upload-artifact from 5.0.0 to 6.0.0 (#2856)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 12:35:49 +00:00
dependabot[bot]
5a5a79fe30 ci: [DEPENDABOT] Bump actions/download-artifact from 6.0.0 to 7.0.0 (#2855)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 12:35:19 +00:00
dependabot[bot]
b1a49fdaab ci: [DEPENDABOT] Bump peter-evans/create-pull-request from 7.0.11 to 8.0.0 (#2854)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 12:35:11 +00:00
dependabot[bot]
f451996944 ci: [DEPENDABOT] Bump tj-actions/changed-files from 46.0.5 to 47.0.1 (#2853)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 12:34:55 +00:00
Ayaz Salikhov
488bb05d22 chore: Add a script to regenerate conan lockfile (#2849) 2025-12-12 16:40:00 +00:00
Ayaz Salikhov
f2c4275f61 ci: Put debian package to release (#2850) 2025-12-12 16:39:53 +00:00
github-actions[bot]
e9b98cf5b3 style: clang-tidy auto fixes (#2848)
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-12-11 09:51:03 +00:00
Sergey Kuznetsov
3aa1854129 refactor: Add writing command to etl::SystemState (#2842) 2025-12-10 16:29:29 +00:00
Ayaz Salikhov
f2f5a6ab19 chore: Switch to xrpl/3.0.0 (#2843) 2025-12-10 16:06:21 +00:00
Ayaz Salikhov
1469d4b198 chore: Add systemd file to the debian package (#2844) 2025-12-10 16:02:43 +00:00
yinyiqian1
06ea05891d feat: Add DynamicMPT in account_mptoken_issuances (#2820)
Support DynamicMPT for the account_mptoken_issuances handler.

Related commit:
eed757e0c4

The original spec for `DynamicMPT` can be found here:
https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0094-dynamic-MPT

---------

Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
2025-12-10 11:36:24 +00:00
Ayaz Salikhov
c7c270cc03 style: Use shfmt for shell scripts (#2841) 2025-12-09 18:51:56 +00:00
Alex Kremer
c1f2f5b100 chore: Less delay in ETL taskman (#2802) 2025-12-09 12:25:00 +00:00
github-actions[bot]
bea0b51c8b style: clang-tidy auto fixes (#2840) 2025-12-09 10:36:53 +00:00
Alex Kremer
69b8e5bd06 feat: Add observable value util (#2831)
This implements a simple observable value. It can be used for a more
reactive approach, and over time it will be used in the ETL state and
across the codebase.
2025-12-08 16:44:43 +00:00
dependabot[bot]
33dc4ad95a ci: [DEPENDABOT] Bump ytanikin/pr-conventional-commits from 1.4.2 to 1.5.1 (#2835)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 10:52:52 +00:00
dependabot[bot]
13cbb405c7 ci: [DEPENDABOT] Bump peter-evans/create-pull-request from 7.0.9 to 7.0.11 (#2836)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 10:52:02 +00:00
dependabot[bot]
8a37a2e083 ci: [DEPENDABOT] Bump actions/checkout from 6.0.0 to 6.0.1 (#2837)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 10:51:32 +00:00
github-actions[bot]
f8b6c98219 style: Update pre-commit hooks (#2825)
Co-authored-by: mathbunnyru <12270691+mathbunnyru@users.noreply.github.com>
2025-12-07 18:42:50 +00:00
dependabot[bot]
92883bf012 ci: [DEPENDABOT] Bump docker/metadata-action from 5.9.0 to 5.10.0 in /.github/actions/build-docker-image (#2826)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-07 18:42:34 +00:00
Alex Kremer
88881e95dd chore: TSAN fix async-signal-unsafe (#2824)
Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
2025-12-02 17:36:36 +00:00
Alex Kremer
94e70e4026 chore: Add mathbunnyru to maintainers (#2823) 2025-11-27 19:24:05 +00:00
github-actions[bot]
b534570cdd style: clang-tidy auto fixes (#2822)
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-11-27 09:46:53 +00:00
Ayaz Salikhov
56fbfc63c2 chore: Update lockfile (#2818) 2025-11-26 17:06:38 +00:00
Ayaz Salikhov
80978657c0 ci: Update images to use latest Ninja (#2817) 2025-11-26 16:15:18 +00:00
Ayaz Salikhov
067449c3f8 chore: Install latest Ninja in images (#2813) 2025-11-26 13:54:19 +00:00
Ayaz Salikhov
946976546a chore: Use boost::asio::ssl::stream instead of boost::beast::ssl_stream (#2814) 2025-11-26 12:27:48 +00:00
github-actions[bot]
73e90b0a3f style: clang-tidy auto fixes (#2816)
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-11-26 09:44:53 +00:00
Ayaz Salikhov
7681c58a3a style: Add black pre-commit hook (#2811) 2025-11-25 17:13:29 +00:00
Alex Kremer
391e7b07ab chore: WebServerAdminTestsSuit TSAN issues (#2809) 2025-11-25 12:17:24 +00:00
Alex Kremer
4eadaa85fa chore: Repeat-based tests TSAN fixes (#2810) 2025-11-25 12:15:43 +00:00
Ayaz Salikhov
1b1a46c429 feat: Handle prometheus requests in WorkQueue (#2790) 2025-11-24 16:17:45 +00:00
Ayaz Salikhov
89707d9668 ci: Run clang-tidy 3 times to make sure we don't have to fix again (#2803) 2025-11-24 12:19:34 +00:00
Ayaz Salikhov
ae260d1229 chore: Update spdlog and fmt libraries (#2804) 2025-11-24 11:27:29 +00:00
dependabot[bot]
058c05cfb6 ci: [DEPENDABOT] Bump actions/checkout from 5.0.0 to 6.0.0 (#2806)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 09:59:28 +00:00
dependabot[bot]
b2a7d185cb ci: [DEPENDABOT] Bump peter-evans/create-pull-request from 7.0.8 to 7.0.9 (#2805)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 09:58:51 +00:00
github-actions[bot]
9ea61ba6b9 style: clang-tidy auto fixes (#2801)
Co-authored-by: mathbunnyru <12270691+mathbunnyru@users.noreply.github.com>
2025-11-21 11:27:40 +00:00
github-actions[bot]
19157dec74 style: clang-tidy auto fixes (#2799)
Co-authored-by: mathbunnyru <12270691+mathbunnyru@users.noreply.github.com>
2025-11-21 11:02:07 +00:00
github-actions[bot]
42a6f516dc style: clang-tidy auto fixes (#2797) 2025-11-21 10:17:56 +00:00
emrearıyürek
2cd8226a11 refactor: Make getLedgerIndex return std::expected instead of throwing (#2788)
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
Co-authored-by: Sergey Kuznetsov <kuzzz99@gmail.com>
2025-11-20 17:46:15 +00:00
130 changed files with 4864 additions and 884 deletions


@@ -14,7 +14,7 @@ runs:
using: composite
steps:
- name: Get number of processors
uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
uses: XRPLF/actions/get-nproc@cf0433aa74563aead044a1e395610c96d65a37cf
id: nproc
with:
subtract: ${{ inputs.nproc_subtract }}


@@ -50,9 +50,9 @@ runs:
- uses: docker/setup-qemu-action@c7c53464625b32c7a7e944ae62b3e17d2b600130 # v3.7.0
with:
cache-image: false
- uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
- uses: docker/metadata-action@318604b99e75e41977312d83839a89be02ca4893 # v5.9.0
- uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
id: meta
with:
images: ${{ inputs.images }}


@@ -37,6 +37,10 @@ inputs:
description: Whether to generate Debian package
required: true
default: "false"
version:
description: Version of the clio_server binary
required: false
default: ""
runs:
using: composite
@@ -57,6 +61,19 @@ runs:
STATIC: "${{ inputs.static == 'true' && 'ON' || 'OFF' }}"
TIME_TRACE: "${{ inputs.time_trace == 'true' && 'ON' || 'OFF' }}"
PACKAGE: "${{ inputs.package == 'true' && 'ON' || 'OFF' }}"
# GitHub creates a merge commit for a PR
# https://www.kenmuse.com/blog/the-many-shas-of-a-github-pull-request/
#
# We:
# - explicitly provide branch name
# - use `github.head_ref` to get the SHA of last commit in the PR branch
#
# This way it works both for PRs and pushes to branches.
GITHUB_BRANCH_NAME: "${{ github.head_ref || github.ref_name }}"
GITHUB_HEAD_SHA: "${{ github.event.pull_request.head.sha || github.sha }}"
#
# If tag is being pushed, or it's a nightly release, we use that version.
FORCE_CLIO_VERSION: ${{ inputs.version }}
run: |
cmake \
-B "${BUILD_DIR}" \
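The `github.head_ref || github.ref_name` fallback in the env block above can be mimicked in plain shell; `HEAD_REF` and `REF_NAME` below are hypothetical stand-ins for the workflow context values:

```shell
#!/usr/bin/env bash
# Resolve the branch name the way the workflow expression does:
# use the PR head branch when present, otherwise the pushed ref name.
resolve_branch() {
  local HEAD_REF="$1" REF_NAME="$2"
  echo "${HEAD_REF:-$REF_NAME}"
}

resolve_branch "feature/version-flag" "develop" # PR event: prints feature/version-flag
resolve_branch "" "develop"                     # push event: prints develop
```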


@@ -24,7 +24,7 @@ runs:
-j8 --exclude-throw-branches
- name: Archive coverage report
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: coverage-report.xml
path: build/coverage_report.xml


@@ -21,10 +21,6 @@ inputs:
runs:
using: composite
steps:
- name: Create build directory
shell: bash
run: mkdir -p "${{ inputs.build_dir }}"
- name: Run conan
shell: bash
env:


@@ -4,7 +4,7 @@ import json
LINUX_OS = ["heavy", "heavy-arm64"]
LINUX_CONTAINERS = [
'{ "image": "ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7" }'
'{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
]
LINUX_COMPILERS = ["gcc", "clang"]


@@ -40,9 +40,9 @@ mkdir -p "$PROFILES_DIR"
if [[ "$(uname)" == "Darwin" ]]; then
create_profile_with_sanitizers "apple-clang" "$APPLE_CLANG_PROFILE"
echo "include(apple-clang)" > "$PROFILES_DIR/default"
echo "include(apple-clang)" >"$PROFILES_DIR/default"
else
create_profile_with_sanitizers "clang" "$CLANG_PROFILE"
create_profile_with_sanitizers "gcc" "$GCC_PROFILE"
echo "include(gcc)" > "$PROFILES_DIR/default"
echo "include(gcc)" >"$PROFILES_DIR/default"
fi
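The only difference in this hunk is shfmt's spacing preference; `> file` and `>file` are the same redirection, as a quick check shows:

```shell
#!/usr/bin/env bash
tmp1=$(mktemp)
tmp2=$(mktemp)
echo "include(gcc)" > "$tmp1" # spaced form (before)
echo "include(gcc)" >"$tmp2"  # shfmt form (after)
cmp -s "$tmp1" "$tmp2" && echo "identical"
rm -f "$tmp1" "$tmp2"
```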

.github/scripts/conan/regenerate_lockfile.sh (new vendored executable file, 25 lines)

@@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -ex
TEMP_DIR=$(mktemp -d)
trap "rm -rf $TEMP_DIR" EXIT
echo "Using temporary CONAN_HOME: $TEMP_DIR"
# We use a temporary Conan home to avoid polluting the user's existing Conan
# configuration and to not use local cache (which leads to non-reproducible lockfiles).
export CONAN_HOME="$TEMP_DIR"
# Ensure that the xrplf remote is the first to be consulted, so any recipes we
# patched are used. We also add it first to avoid creating a huge diff when the
# official Conan Center Index is updated.
conan remote add --force --index 0 xrplf https://conan.ripplex.io
# Delete any existing lockfile.
rm -f conan.lock
# Create a new lockfile that is compatible with macOS.
# It should also work on Linux.
conan lock create . \
--profile:all=.github/scripts/conan/apple-clang-17.profile
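The throwaway-`CONAN_HOME` pattern the script relies on, `mktemp -d` plus an `EXIT` trap, can be exercised on its own:

```shell
#!/usr/bin/env bash
set -e
TEMP_DIR=$(mktemp -d)
(
  # The EXIT trap fires when the subshell exits, even on failure,
  # so the temporary home never outlives the run.
  trap 'rm -rf "$TEMP_DIR"' EXIT
  touch "$TEMP_DIR/conan.lock"
)
[ ! -d "$TEMP_DIR" ] && echo "temporary home cleaned up"
```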


@@ -22,8 +22,8 @@ fi
TEST_BINARY=$1
if [[ ! -f "$TEST_BINARY" ]]; then
echo "Test binary not found: $TEST_BINARY"
exit 1
echo "Test binary not found: $TEST_BINARY"
exit 1
fi
TESTS=$($TEST_BINARY --gtest_list_tests | awk '/^ / {print suite $1} !/^ / {suite=$1}')
@@ -35,12 +35,12 @@ export TSAN_OPTIONS="die_after_fork=0"
export MallocNanoZone='0' # for MacOSX
for TEST in $TESTS; do
OUTPUT_FILE="$OUTPUT_DIR/${TEST//\//_}.log"
$TEST_BINARY --gtest_filter="$TEST" > "$OUTPUT_FILE" 2>&1
OUTPUT_FILE="$OUTPUT_DIR/${TEST//\//_}.log"
$TEST_BINARY --gtest_filter="$TEST" >"$OUTPUT_FILE" 2>&1
if [ $? -ne 0 ]; then
echo "'$TEST' failed a sanitizer check."
else
rm "$OUTPUT_FILE"
fi
if [ $? -ne 0 ]; then
echo "'$TEST' failed a sanitizer check."
else
rm "$OUTPUT_FILE"
fi
done
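The awk one-liner used to build `TESTS` above flattens `--gtest_list_tests` output (unindented suite lines, indented test names) into `Suite.Test` strings; a sketch over an assumed sample listing:

```shell
#!/usr/bin/env bash
# Sample listing in the format gtest emits for --gtest_list_tests.
LISTING='BackendTest.
  ReadWrite
  Timeout
RPCTest.
  Ping'
# Unindented lines set the current suite; indented lines print suite + test.
TESTS=$(echo "$LISTING" | awk '/^ / {print suite $1} !/^ / {suite=$1}')
echo "$TESTS"
```

This prints `BackendTest.ReadWrite`, `BackendTest.Timeout`, and `RPCTest.Ping`, one per line — exactly the names the loop then feeds to `--gtest_filter`.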


@@ -20,5 +20,5 @@ for artifact_name in $(ls); do
rm "${artifact_name}/${BINARY_NAME}"
rm -r "${artifact_name}"
sha256sum "./${artifact_name}.zip" > "./${artifact_name}.zip.sha256sum"
sha256sum "./${artifact_name}.zip" >"./${artifact_name}.zip.sha256sum"
done


@@ -48,11 +48,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Download Clio binary from artifact
if: ${{ inputs.artifact_name != null }}
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
name: ${{ inputs.artifact_name }}
path: ./docker/clio/artifact/


@@ -49,7 +49,7 @@ jobs:
build_type: [Release, Debug]
container:
[
'{ "image": "ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7" }',
'{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }',
]
static: [true]
@@ -79,7 +79,7 @@ jobs:
uses: ./.github/workflows/reusable-build.yml
with:
runs_on: heavy
container: '{ "image": "ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
conan_profile: gcc
build_type: Debug
download_ccache: true
@@ -92,35 +92,17 @@ jobs:
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
package:
name: Build packages
uses: ./.github/workflows/reusable-build.yml
with:
runs_on: heavy
container: '{ "image": "ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7" }'
conan_profile: gcc
build_type: Release
download_ccache: true
upload_ccache: false
code_coverage: false
static: true
upload_clio_server: false
package: true
targets: package
analyze_build_time: false
check_config:
name: Check Config Description
needs: build-and-test
runs-on: heavy
container:
image: ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7
image: ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
- uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
name: clio_server_Linux_Release_gcc


@@ -21,17 +21,17 @@ jobs:
name: Build Clio / `libXRPL ${{ github.event.client_payload.version }}`
runs-on: heavy
container:
image: ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7
image: ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
uses: XRPLF/actions/prepare-runner@65da1c59e81965eeb257caa3587b9d45066fb925
with:
disable_ccache: true
enable_ccache: false
- name: Update libXRPL version requirement
run: |
@@ -59,7 +59,7 @@ jobs:
run: strip build/clio_tests
- name: Upload clio_tests
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: clio_tests_check_libxrpl
path: build/clio_tests
@@ -69,10 +69,10 @@ jobs:
needs: build
runs-on: heavy
container:
image: ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7
image: ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
steps:
- uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
- uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
name: clio_tests_check_libxrpl
@@ -92,7 +92,7 @@ jobs:
issues: write
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Create an issue
uses: ./.github/actions/create-issue


@@ -14,7 +14,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: ytanikin/pr-conventional-commits@b72758283dcbee706975950e96bc4bf323a8d8c0 # 1.4.2
- uses: ytanikin/pr-conventional-commits@fda730cb152c05a849d6d84325e50c6182d9d1e9 # 1.5.1
with:
task_types: '["build","feat","fix","docs","test","ci","style","refactor","perf","chore"]'
add_label: false


@@ -31,7 +31,7 @@ jobs:
if: github.event_name != 'push' || contains(github.event.head_commit.message, 'clang-tidy auto fixes')
runs-on: heavy
container:
image: ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7
image: ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
permissions:
contents: write
@@ -39,14 +39,14 @@ jobs:
pull-requests: write
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
uses: XRPLF/actions/prepare-runner@65da1c59e81965eeb257caa3587b9d45066fb925
with:
disable_ccache: true
enable_ccache: false
- name: Run conan
uses: ./.github/actions/conan
@@ -59,30 +59,33 @@ jobs:
conan_profile: ${{ env.CONAN_PROFILE }}
- name: Get number of processors
uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
uses: XRPLF/actions/get-nproc@cf0433aa74563aead044a1e395610c96d65a37cf
id: nproc
- name: Run clang-tidy
- name: Run clang-tidy (several times)
continue-on-error: true
id: run_clang_tidy
id: clang_tidy
run: |
run-clang-tidy-${{ env.LLVM_TOOLS_VERSION }} -p build -j "${{ steps.nproc.outputs.nproc }}" -fix -quiet 1>output.txt
# We run clang-tidy several times, because some fixes may enable new fixes in subsequent runs.
CLANG_TIDY_COMMAND="run-clang-tidy-${{ env.LLVM_TOOLS_VERSION }} -p build -j ${{ steps.nproc.outputs.nproc }} -fix -quiet"
${CLANG_TIDY_COMMAND} ||
${CLANG_TIDY_COMMAND} ||
${CLANG_TIDY_COMMAND}
- name: Check for changes
id: files_changed
continue-on-error: true
run: |
git diff --exit-code
- name: Fix local includes and clang-format style
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
if: ${{ steps.files_changed.outcome != 'success' }}
run: |
pre-commit run --all-files fix-local-includes || true
pre-commit run --all-files clang-format || true
- name: Print issues found
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
run: |
sed -i '/error\||/!d' ./output.txt
cat output.txt
rm output.txt
- name: Create an issue
if: ${{ steps.run_clang_tidy.outcome != 'success' && github.event_name != 'pull_request' }}
if: ${{ (steps.clang_tidy.outcome != 'success' || steps.files_changed.outcome != 'success') && github.event_name != 'pull_request' }}
id: create_issue
uses: ./.github/actions/create-issue
env:
@@ -95,7 +98,7 @@ jobs:
List of the issues found: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}/
- uses: crazy-max/ghaction-import-gpg@e89d40939c28e39f97cf32126055eeae86ba74ec # v6.3.0
if: ${{ steps.run_clang_tidy.outcome != 'success' && github.event_name != 'pull_request' }}
if: ${{ steps.files_changed.outcome != 'success' && github.event_name != 'pull_request' }}
with:
gpg_private_key: ${{ secrets.ACTIONS_GPG_PRIVATE_KEY }}
passphrase: ${{ secrets.ACTIONS_GPG_PASSPHRASE }}
@@ -103,8 +106,8 @@ jobs:
git_commit_gpgsign: true
- name: Create PR with fixes
if: ${{ steps.run_clang_tidy.outcome != 'success' && github.event_name != 'pull_request' }}
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
if: ${{ steps.files_changed.outcome != 'success' && github.event_name != 'pull_request' }}
uses: peter-evans/create-pull-request@98357b18bf14b5342f975ff684046ec3b2a07725 # v8.0.0
env:
GH_REPO: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
@@ -119,5 +122,5 @@ jobs:
reviewers: "godexsoft,kuznetsss,PeterChen13579,mathbunnyru"
- name: Fail the job
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
if: ${{ steps.clang_tidy.outcome != 'success' || steps.files_changed.outcome != 'success' }}
run: exit 1
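The `${CLANG_TIDY_COMMAND} || ${CLANG_TIDY_COMMAND} || ${CLANG_TIDY_COMMAND}` chain introduced above is a plain shell retry: each later command runs only if the previous one failed. A minimal sketch of the idiom:

```shell
#!/usr/bin/env bash
ATTEMPTS=0
flaky() {
  ATTEMPTS=$((ATTEMPTS + 1))
  # Succeed only on the third call, like clang-tidy converging after re-fixes.
  [ "$ATTEMPTS" -ge 3 ]
}
flaky || flaky || flaky
echo "succeeded after $ATTEMPTS attempts"
```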


@@ -18,18 +18,18 @@ jobs:
build:
runs-on: ubuntu-latest
container:
image: ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7
image: ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
lfs: true
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
uses: XRPLF/actions/prepare-runner@65da1c59e81965eeb257caa3587b9d45066fb925
with:
disable_ccache: true
enable_ccache: false
- name: Create build directory
run: mkdir build_docs


@@ -28,8 +28,20 @@ defaults:
shell: bash
jobs:
get_date:
name: Get Date
runs-on: ubuntu-latest
outputs:
date: ${{ steps.get_date.outputs.date }}
steps:
- name: Get current date
id: get_date
run: |
echo "date=$(date +'%Y%m%d')" >> $GITHUB_OUTPUT
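The step publishes the date through `$GITHUB_OUTPUT` so downstream jobs can build the `nightly-YYYYMMDD` tag; locally the same formatting looks like:

```shell
#!/usr/bin/env bash
DATE=$(date +'%Y%m%d') # e.g. 20260112
VERSION="nightly-${DATE}"
echo "$VERSION"
```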
build-and-test:
name: Build and Test
needs: get_date
strategy:
fail-fast: false
@@ -43,17 +55,17 @@ jobs:
conan_profile: gcc
build_type: Release
static: true
container: '{ "image": "ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
- os: heavy
conan_profile: gcc
build_type: Debug
static: true
container: '{ "image": "ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
- os: heavy
conan_profile: gcc.ubsan
build_type: Release
static: false
container: '{ "image": "ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
uses: ./.github/workflows/reusable-build-test.yml
with:
@@ -67,9 +79,31 @@ jobs:
upload_clio_server: true
download_ccache: false
upload_ccache: false
version: nightly-${{ needs.get_date.outputs.date }}
package:
name: Build debian package
needs: get_date
uses: ./.github/workflows/reusable-build.yml
with:
runs_on: heavy
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
conan_profile: gcc
build_type: Release
download_ccache: false
upload_ccache: false
code_coverage: false
static: true
upload_clio_server: false
package: true
version: nightly-${{ needs.get_date.outputs.date }}
targets: package
analyze_build_time: false
analyze_build_time:
name: Analyze Build Time
needs: get_date
strategy:
fail-fast: false
@@ -77,7 +111,7 @@ jobs:
include:
- os: heavy
conan_profile: clang
container: '{ "image": "ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
static: true
- os: macos15
conan_profile: apple-clang
@@ -96,20 +130,10 @@ jobs:
upload_clio_server: false
targets: all
analyze_build_time: true
get_date:
name: Get Date
runs-on: ubuntu-latest
outputs:
date: ${{ steps.get_date.outputs.date }}
steps:
- name: Get current date
id: get_date
run: |
echo "date=$(date +'%Y%m%d')" >> $GITHUB_OUTPUT
version: nightly-${{ needs.get_date.outputs.date }}
nightly_release:
needs: [build-and-test, get_date]
needs: [build-and-test, package, get_date]
uses: ./.github/workflows/reusable-release.yml
with:
delete_pattern: "nightly-*"
@@ -145,7 +169,7 @@ jobs:
issues: write
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Create an issue
uses: ./.github/actions/create-issue


@@ -8,7 +8,7 @@ on:
jobs:
run-hooks:
uses: XRPLF/actions/.github/workflows/pre-commit.yml@34790936fae4c6c751f62ec8c06696f9c1a5753a
uses: XRPLF/actions/.github/workflows/pre-commit.yml@5ca417783f0312ab26d6f48b85c78edf1de99bbd
with:
runs_on: heavy
container: '{ "image": "ghcr.io/xrplf/clio-pre-commit:c117f470f2ef954520ab5d1c8a5ed2b9e68d6f8a" }'
container: '{ "image": "ghcr.io/xrplf/clio-pre-commit:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'


@@ -29,7 +29,7 @@ jobs:
conan_profile: gcc
build_type: Release
static: true
container: '{ "image": "ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
uses: ./.github/workflows/reusable-build-test.yml
with:
@@ -43,10 +43,29 @@ jobs:
upload_clio_server: true
download_ccache: false
upload_ccache: false
expected_version: ${{ github.event_name == 'push' && github.ref_name || '' }}
version: ${{ github.event_name == 'push' && github.ref_name || '' }}
package:
name: Build debian package
uses: ./.github/workflows/reusable-build.yml
with:
runs_on: heavy
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
conan_profile: gcc
build_type: Release
download_ccache: false
upload_ccache: false
code_coverage: false
static: true
upload_clio_server: false
package: true
version: ${{ github.event_name == 'push' && github.ref_name || '' }}
targets: package
analyze_build_time: false
release:
needs: build-and-test
needs: [build-and-test, package]
uses: ./.github/workflows/reusable-release.yml
with:
delete_pattern: ""


@@ -63,18 +63,18 @@ on:
type: string
default: all
expected_version:
description: Expected version of the clio_server binary
required: false
type: string
default: ""
package:
description: Whether to generate Debian package
required: false
type: boolean
default: false
version:
description: Version of the clio_server binary
required: false
type: string
default: ""
jobs:
build:
uses: ./.github/workflows/reusable-build.yml
@@ -90,8 +90,8 @@ jobs:
upload_clio_server: ${{ inputs.upload_clio_server }}
targets: ${{ inputs.targets }}
analyze_build_time: false
expected_version: ${{ inputs.expected_version }}
package: ${{ inputs.package }}
version: ${{ inputs.version }}
test:
needs: build


@@ -60,17 +60,17 @@ on:
required: true
type: boolean
expected_version:
description: Expected version of the clio_server binary
required: false
type: string
default: ""
package:
description: Whether to generate Debian package
required: false
type: boolean
version:
description: Version of the clio_server binary
required: false
type: string
default: ""
secrets:
CODECOV_TOKEN:
required: false
@@ -88,20 +88,16 @@ jobs:
steps:
- name: Cleanup workspace
if: ${{ runner.os == 'macOS' }}
uses: XRPLF/actions/.github/actions/cleanup-workspace@ea9970b7c211b18f4c8bcdb28c29f5711752029f
uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
# We need to fetch tags to have correct version in the release
# The workaround is based on https://github.com/actions/checkout/issues/1467
fetch-tags: true
ref: ${{ github.ref }}
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
uses: XRPLF/actions/prepare-runner@65da1c59e81965eeb257caa3587b9d45066fb925
with:
disable_ccache: ${{ !inputs.download_ccache }}
enable_ccache: ${{ inputs.download_ccache }}
- name: Setup conan on macOS
if: ${{ runner.os == 'macOS' }}
@@ -117,7 +113,7 @@ jobs:
- name: Restore ccache cache
if: ${{ inputs.download_ccache && github.ref != 'refs/heads/develop' }}
uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache/restore@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
with:
path: ${{ env.CCACHE_DIR }}
key: ${{ steps.cache_key.outputs.key }}
@@ -139,6 +135,7 @@ jobs:
static: ${{ inputs.static }}
time_trace: ${{ inputs.analyze_build_time }}
package: ${{ inputs.package }}
version: ${{ inputs.version }}
- name: Build Clio
uses: ./.github/actions/build-clio
@@ -154,7 +151,7 @@ jobs:
- name: Upload build time analyze report
if: ${{ inputs.analyze_build_time }}
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: build_time_report_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
path: build_time_report.txt
@@ -162,12 +159,12 @@ jobs:
- name: Show ccache's statistics and zero it
if: ${{ inputs.download_ccache }}
run: |
ccache --show-stats
ccache --show-stats -vv
ccache --zero-stats
- name: Save ccache cache
if: ${{ inputs.upload_ccache && github.ref == 'refs/heads/develop' }}
uses: actions/cache/save@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache/save@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
with:
path: ${{ env.CCACHE_DIR }}
key: ${{ steps.cache_key.outputs.key }}
@@ -182,28 +179,28 @@ jobs:
- name: Upload clio_server
if: ${{ inputs.upload_clio_server && !inputs.code_coverage && !inputs.analyze_build_time }}
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: clio_server_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
path: build/clio_server
- name: Upload clio_tests
if: ${{ !inputs.code_coverage && !inputs.analyze_build_time && !inputs.package }}
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: clio_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
path: build/clio_tests
- name: Upload clio_integration_tests
if: ${{ !inputs.code_coverage && !inputs.analyze_build_time && !inputs.package }}
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: clio_integration_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
path: build/clio_integration_tests
- name: Upload Clio Linux package
if: ${{ inputs.package }}
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: clio_deb_package_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
path: build/*.deb
@@ -218,15 +215,19 @@ jobs:
if: ${{ inputs.code_coverage }}
uses: ./.github/actions/code-coverage
- name: Verify expected version
if: ${{ inputs.expected_version != '' }}
- name: Verify version is expected
if: ${{ inputs.version != '' }}
env:
INPUT_EXPECTED_VERSION: ${{ inputs.expected_version }}
INPUT_VERSION: ${{ inputs.version }}
run: |
set -e
EXPECTED_VERSION="clio-${INPUT_EXPECTED_VERSION}"
actual_version=$(./build/clio_server --version)
if [[ "$actual_version" != "$EXPECTED_VERSION" ]]; then
EXPECTED_VERSION="clio-${INPUT_VERSION}"
if [[ ${{ inputs.build_type }} == "Debug" ]]; then
EXPECTED_VERSION="${EXPECTED_VERSION}+DEBUG"
fi
actual_version=$(./build/clio_server --version | head -n 1)
if [[ "${actual_version}" != "${EXPECTED_VERSION}" ]]; then
echo "Expected version '${EXPECTED_VERSION}', but got '${actual_version}'"
exit 1
fi
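For reference, the expected-version string assembled by this step can be sketched standalone. The input values below are hypothetical placeholders for the workflow inputs; Debug builds get a `+DEBUG` suffix, matching the check above:

```shell
# Sketch of the expected-version composition (input values are hypothetical).
INPUT_VERSION="2.5.0"
BUILD_TYPE="Debug"

EXPECTED_VERSION="clio-${INPUT_VERSION}"
if [ "${BUILD_TYPE}" = "Debug" ]; then
    # Debug builds report a "+DEBUG" suffix in `clio_server --version`.
    EXPECTED_VERSION="${EXPECTED_VERSION}+DEBUG"
fi
echo "${EXPECTED_VERSION}"
```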


@@ -46,7 +46,7 @@ jobs:
release:
runs-on: heavy
container:
image: ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7
image: ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
env:
GH_REPO: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
@@ -55,20 +55,28 @@ jobs:
contents: write
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
uses: XRPLF/actions/prepare-runner@65da1c59e81965eeb257caa3587b9d45066fb925
with:
disable_ccache: true
enable_ccache: false
- uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
- uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
path: release_artifacts
pattern: clio_server_*
- name: Prepare release artifacts
run: .github/scripts/prepare-release-artifacts.sh release_artifacts
- uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
path: release_artifacts
pattern: clio_deb_package_*
- name: Create release notes
env:
RELEASE_HEADER: ${{ inputs.header }}
@@ -86,11 +94,8 @@ jobs:
git-cliff "${BASE_COMMIT}..HEAD" --ignore-tags "nightly|-b|-rc"
cat CHANGELOG.md >> "${RUNNER_TEMP}/release_notes.md"
- name: Prepare release artifacts
run: .github/scripts/prepare-release-artifacts.sh release_artifacts
- name: Upload release notes
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: release_notes_${{ inputs.version }}
path: "${RUNNER_TEMP}/release_notes.md"
@@ -122,4 +127,4 @@ jobs:
--target "${GITHUB_SHA}" \
${DRAFT_OPTION} \
--notes-file "${RUNNER_TEMP}/release_notes.md" \
./release_artifacts/clio_server*
./release_artifacts/clio_*


@@ -52,13 +52,13 @@ jobs:
steps:
- name: Cleanup workspace
if: ${{ runner.os == 'macOS' }}
uses: XRPLF/actions/.github/actions/cleanup-workspace@ea9970b7c211b18f4c8bcdb28c29f5711752029f
uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
- uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
name: clio_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
@@ -85,7 +85,7 @@ jobs:
- name: Upload sanitizer report
if: ${{ env.SANITIZER_IGNORE_ERRORS == 'true' && steps.check_report.outputs.found_report == 'true' }}
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: sanitizer_report_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
path: .sanitizer-report/*
@@ -124,7 +124,7 @@ jobs:
steps:
- name: Cleanup workspace
if: ${{ runner.os == 'macOS' }}
uses: XRPLF/actions/.github/actions/cleanup-workspace@ea9970b7c211b18f4c8bcdb28c29f5711752029f
uses: XRPLF/actions/cleanup-workspace@cf0433aa74563aead044a1e395610c96d65a37cf
- name: Spin up scylladb
if: ${{ runner.os == 'macOS' }}
@@ -146,7 +146,7 @@ jobs:
sleep 5
done
- uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
- uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
name: clio_integration_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}


@@ -16,19 +16,19 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- name: Download report artifact
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
name: coverage-report.xml
path: build
- name: Upload coverage report
if: ${{ hashFiles('build/coverage_report.xml') != '' }}
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
with:
files: build/coverage_report.xml
fail_ci_if_error: true


@@ -44,7 +44,7 @@ jobs:
uses: ./.github/workflows/reusable-build-test.yml
with:
runs_on: heavy
container: '{ "image": "ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7" }'
container: '{ "image": "ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f" }'
download_ccache: false
upload_ccache: false
conan_profile: ${{ matrix.compiler }}${{ matrix.sanitizer_ext }}


@@ -56,11 +56,11 @@ jobs:
needs: repo
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
with:
files: "docker/compilers/gcc/**"
@@ -94,11 +94,11 @@ jobs:
needs: repo
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
with:
files: "docker/compilers/gcc/**"
@@ -132,16 +132,16 @@ jobs:
needs: [repo, gcc-amd64, gcc-arm64]
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
with:
files: "docker/compilers/gcc/**"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
- name: Login to GitHub Container Registry
if: ${{ github.event_name != 'pull_request' }}
@@ -183,11 +183,11 @@ jobs:
needs: repo
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
with:
files: "docker/compilers/clang/**"
@@ -219,11 +219,11 @@ jobs:
needs: [repo, gcc-merge]
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
with:
files: "docker/tools/**"
@@ -250,11 +250,11 @@ jobs:
needs: [repo, gcc-merge]
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
with:
files: "docker/tools/**"
@@ -281,16 +281,16 @@ jobs:
needs: [repo, tools-amd64, tools-arm64]
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
with:
files: "docker/tools/**"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
- name: Login to GitHub Container Registry
if: ${{ github.event_name != 'pull_request' }}
@@ -316,7 +316,7 @@ jobs:
needs: [repo, tools-merge]
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: ./.github/actions/build-docker-image
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -338,7 +338,7 @@ jobs:
needs: [repo, gcc-merge, clang, tools-merge]
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: ./.github/actions/build-docker-image
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -22,6 +22,7 @@ on:
- .github/actions/conan/action.yml
- ".github/scripts/conan/**"
- "!.github/scripts/conan/regenerate_lockfile.sh"
- conanfile.py
- conan.lock
@@ -32,6 +33,7 @@ on:
- .github/actions/conan/action.yml
- ".github/scripts/conan/**"
- "!.github/scripts/conan/regenerate_lockfile.sh"
- conanfile.py
- conan.lock
@@ -50,7 +52,7 @@ jobs:
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Calculate conan matrix
id: set-matrix
@@ -73,12 +75,12 @@ jobs:
CONAN_PROFILE: ${{ matrix.compiler }}${{ matrix.sanitizer_ext }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@8abb0722cbff83a9a2dc7d06c473f7a4964b7382
uses: XRPLF/actions/prepare-runner@65da1c59e81965eeb257caa3587b9d45066fb925
with:
disable_ccache: true
enable_ccache: false
- name: Setup conan on macOS
if: ${{ runner.os == 'macOS' }}


@@ -29,12 +29,12 @@ repos:
# Autoformat: YAML, JSON, Markdown, etc.
- repo: https://github.com/rbubley/mirrors-prettier
rev: 5ba47274f9b181bce26a5150a725577f3c336011 # frozen: v3.6.2
rev: 14abee445aea04b39069c19b4bd54efff6775819 # frozen: v3.7.4
hooks:
- id: prettier
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: 192ad822316c3a22fb3d3cc8aa6eafa0b8488360 # frozen: v0.45.0
rev: 76b3d32d3f4b965e1d6425253c59407420ae2c43 # frozen: v0.47.0
hooks:
- id: markdownlint-fix
exclude: LICENSE.md
@@ -58,6 +58,17 @@ repos:
--ignore-words=pre-commit-hooks/codespell_ignore.txt,
]
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 831207fd435b47aeffdf6af853097e64322b4d44 # frozen: 25.12.0
hooks:
- id: black
- repo: https://github.com/scop/pre-commit-shfmt
rev: 2a30809d16bc7a60d9b97353c797f42b510d3368 # frozen: v3.12.0-2
hooks:
- id: shfmt
args: ["-i", "4", "--write"]
# Running some C++ hooks before clang-format
# to ensure that the style is consistent.
- repo: local
@@ -83,7 +94,7 @@ repos:
language: script
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: 719856d56a62953b8d2839fb9e851f25c3cfeef8 # frozen: v21.1.2
rev: 75ca4ad908dc4a99f57921f29b7e6c1521e10b26 # frozen: v21.1.8
hooks:
- id: clang-format
args: [--style=file]


@@ -180,6 +180,7 @@ Existing maintainers can resign, or be subject to a vote for removal at the behe
- [kuznetsss](https://github.com/kuznetsss) (Ripple)
- [legleux](https://github.com/legleux) (Ripple)
- [PeterChen13579](https://github.com/PeterChen13579) (Ripple)
- [mathbunnyru](https://github.com/mathbunnyru) (Ripple)
### Honorable ex-Maintainers


@@ -9,10 +9,12 @@ target_sources(
util/async/ExecutionContextBenchmarks.cpp
# Logger
util/log/LoggerBenchmark.cpp
# WorkQueue
rpc/WorkQueueBenchmarks.cpp
)
include(deps/gbench)
target_include_directories(clio_benchmark PRIVATE .)
target_link_libraries(clio_benchmark PUBLIC clio_util benchmark::benchmark_main spdlog::spdlog)
target_link_libraries(clio_benchmark PUBLIC clio_util clio_rpc benchmark::benchmark_main spdlog::spdlog)
set_target_properties(clio_benchmark PROPERTIES RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR})


@@ -0,0 +1,122 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "rpc/WorkQueue.hpp"
#include "util/Assert.hpp"
#include "util/config/Array.hpp"
#include "util/config/ConfigConstraints.hpp"
#include "util/config/ConfigDefinition.hpp"
#include "util/config/ConfigValue.hpp"
#include "util/config/Types.hpp"
#include "util/log/Logger.hpp"
#include "util/prometheus/Prometheus.hpp"
#include <benchmark/benchmark.h>
#include <boost/asio/steady_timer.hpp>
#include <atomic>
#include <cassert>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <mutex>
using namespace rpc;
using namespace util::config;
namespace {
auto const kCONFIG = ClioConfigDefinition{
{"prometheus.compress_reply", ConfigValue{ConfigType::Boolean}.defaultValue(true)},
{"prometheus.enabled", ConfigValue{ConfigType::Boolean}.defaultValue(true)},
{"log.channels.[].channel", Array{ConfigValue{ConfigType::String}}},
{"log.channels.[].level", Array{ConfigValue{ConfigType::String}}},
{"log.level", ConfigValue{ConfigType::String}.defaultValue("info")},
{"log.format", ConfigValue{ConfigType::String}.defaultValue(R"(%Y-%m-%d %H:%M:%S.%f %^%3!l:%n%$ - %v)")},
{"log.is_async", ConfigValue{ConfigType::Boolean}.defaultValue(false)},
{"log.enable_console", ConfigValue{ConfigType::Boolean}.defaultValue(false)},
{"log.directory", ConfigValue{ConfigType::String}.optional()},
{"log.rotation_size", ConfigValue{ConfigType::Integer}.defaultValue(2048).withConstraint(gValidateUint32)},
{"log.directory_max_files", ConfigValue{ConfigType::Integer}.defaultValue(25).withConstraint(gValidateUint32)},
{"log.tag_style", ConfigValue{ConfigType::String}.defaultValue("none")},
};
// this should be a fixture but it did not work with Args very well
void
init()
{
static std::once_flag kONCE;
std::call_once(kONCE, [] {
PrometheusService::init(kCONFIG);
(void)util::LogService::init(kCONFIG);
});
}
} // namespace
static void
benchmarkWorkQueue(benchmark::State& state)
{
init();
auto const total = static_cast<size_t>(state.range(0));
auto const numThreads = static_cast<uint32_t>(state.range(1));
auto const maxSize = static_cast<uint32_t>(state.range(2));
auto const delayMs = static_cast<uint32_t>(state.range(3));
for (auto _ : state) {
std::atomic_size_t totalExecuted = 0uz;
std::atomic_size_t totalQueued = 0uz;
state.PauseTiming();
WorkQueue queue(numThreads, maxSize);
state.ResumeTiming();
for (auto i = 0uz; i < total; ++i) {
totalQueued += static_cast<std::size_t>(queue.postCoro(
[&delayMs, &totalExecuted](auto yield) {
++totalExecuted;
boost::asio::steady_timer timer(yield.get_executor(), std::chrono::milliseconds{delayMs});
timer.async_wait(yield);
},
/* isWhiteListed = */ false
));
}
queue.stop();
ASSERT(totalExecuted == totalQueued, "Totals don't match");
ASSERT(totalQueued <= total, "Queued more than requested");
ASSERT(totalQueued >= maxSize, "Queued less than maxSize");
}
}
// Usage example:
/*
./clio_benchmark \
--benchmark_repetitions=10 \
--benchmark_display_aggregates_only=true \
--benchmark_min_time=1x \
--benchmark_filter="WorkQueue"
*/
// TODO: figure out what happens on 1 thread
BENCHMARK(benchmarkWorkQueue)
->ArgsProduct({{1'000, 10'000, 100'000}, {2, 4, 8}, {0, 5'000}, {10, 100, 250}})
->Unit(benchmark::kMillisecond);


@@ -1,42 +1,43 @@
find_package(Git REQUIRED)
set(GIT_COMMAND describe --tags --exact-match)
execute_process(
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND}
WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
OUTPUT_VARIABLE TAG
RESULT_VARIABLE RC
ERROR_VARIABLE ERR
OUTPUT_STRIP_TRAILING_WHITESPACE ERROR_STRIP_TRAILING_WHITESPACE
)
if (RC EQUAL 0)
message(STATUS "Found tag '${TAG}' in git. Will use it as Clio version")
set(CLIO_VERSION "${TAG}")
set(DOC_CLIO_VERSION "${TAG}")
if (DEFINED ENV{GITHUB_BRANCH_NAME})
set(GIT_BUILD_BRANCH $ENV{GITHUB_BRANCH_NAME})
set(GIT_COMMIT_HASH $ENV{GITHUB_HEAD_SHA})
else ()
message(STATUS "Error finding tag in git: ${ERR}")
message(STATUS "Will use 'YYYYMMDDHMS-<branch>-<git-rev>' as Clio version")
set(GIT_COMMAND show -s --date=format:%Y%m%d%H%M%S --format=%cd)
execute_process(
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} WORKING_DIRECTORY ${CMAKE_SOURCE_DIR} OUTPUT_VARIABLE DATE
OUTPUT_STRIP_TRAILING_WHITESPACE COMMAND_ERROR_IS_FATAL ANY
)
set(GIT_COMMAND branch --show-current)
execute_process(
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} WORKING_DIRECTORY ${CMAKE_SOURCE_DIR} OUTPUT_VARIABLE BRANCH
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} WORKING_DIRECTORY ${CMAKE_SOURCE_DIR} OUTPUT_VARIABLE GIT_BUILD_BRANCH
OUTPUT_STRIP_TRAILING_WHITESPACE COMMAND_ERROR_IS_FATAL ANY
)
set(GIT_COMMAND rev-parse --short HEAD)
set(GIT_COMMAND rev-parse HEAD)
execute_process(
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} WORKING_DIRECTORY ${CMAKE_SOURCE_DIR} OUTPUT_VARIABLE REV
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} WORKING_DIRECTORY ${CMAKE_SOURCE_DIR} OUTPUT_VARIABLE GIT_COMMIT_HASH
OUTPUT_STRIP_TRAILING_WHITESPACE COMMAND_ERROR_IS_FATAL ANY
)
endif ()
set(CLIO_VERSION "${DATE}-${BRANCH}-${REV}")
set(GIT_COMMAND show -s --date=format:%Y%m%d%H%M%S --format=%cd)
execute_process(
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} WORKING_DIRECTORY ${CMAKE_SOURCE_DIR} OUTPUT_VARIABLE BUILD_DATE
OUTPUT_STRIP_TRAILING_WHITESPACE COMMAND_ERROR_IS_FATAL ANY
)
message(STATUS "Git branch: ${GIT_BUILD_BRANCH}")
message(STATUS "Git commit hash: ${GIT_COMMIT_HASH}")
message(STATUS "Build date: ${BUILD_DATE}")
if (DEFINED ENV{FORCE_CLIO_VERSION} AND NOT "$ENV{FORCE_CLIO_VERSION}" STREQUAL "")
message(STATUS "Using explicitly provided '${FORCE_CLIO_VERSION}' as Clio version")
set(CLIO_VERSION "$ENV{FORCE_CLIO_VERSION}")
set(DOC_CLIO_VERSION "$ENV{FORCE_CLIO_VERSION}")
else ()
message(STATUS "Using 'YYYYMMDDHMS-<branch>-<git short rev>' as Clio version")
string(SUBSTRING ${GIT_COMMIT_HASH} 0 7 GIT_COMMIT_HASH_SHORT)
set(CLIO_VERSION "${BUILD_DATE}-${GIT_BUILD_BRANCH}-${GIT_COMMIT_HASH_SHORT}")
set(DOC_CLIO_VERSION "develop")
endif ()
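The fallback branch above assembles `CLIO_VERSION` as `<build-date>-<branch>-<short-rev>` when `FORCE_CLIO_VERSION` is unset. A rough shell equivalent of that assembly, using illustrative sample values in place of the actual `git` command output:

```shell
# Illustrative shell equivalent of the CMake version derivation above.
# The three inputs stand in for real git output (sample values only):
BUILD_DATE="20260109135943"   # git show -s --date=format:%Y%m%d%H%M%S --format=%cd
GIT_BUILD_BRANCH="develop"    # git branch --show-current
GIT_COMMIT_HASH="bb3159bda0feedface00000000000000000000ab"  # git rev-parse HEAD

# CMake's string(SUBSTRING ... 0 7 ...) corresponds to taking the first 7 chars.
GIT_COMMIT_HASH_SHORT=$(printf '%.7s' "${GIT_COMMIT_HASH}")
CLIO_VERSION="${BUILD_DATE}-${GIT_BUILD_BRANCH}-${GIT_COMMIT_HASH_SHORT}"
echo "${CLIO_VERSION}"
```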


@@ -0,0 +1,17 @@
[Unit]
Description=Clio XRPL API server
Documentation=https://github.com/XRPLF/clio.git
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=@CLIO_INSTALL_DIR@/bin/clio_server @CLIO_INSTALL_DIR@/etc/config.json
Restart=on-failure
User=clio
Group=clio
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target


@@ -11,3 +11,6 @@ file(READ docs/examples/config/example-config.json config)
string(REGEX REPLACE "./clio_log" "/var/log/clio/" config "${config}")
file(WRITE ${CMAKE_BINARY_DIR}/install-config.json "${config}")
install(FILES ${CMAKE_BINARY_DIR}/install-config.json DESTINATION etc RENAME config.json)
configure_file("${CMAKE_SOURCE_DIR}/cmake/install/clio.service.in" "${CMAKE_BINARY_DIR}/clio.service")
install(FILES "${CMAKE_BINARY_DIR}/clio.service" DESTINATION /lib/systemd/system)
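The `configure_file()` call above substitutes `@CLIO_INSTALL_DIR@` placeholders in the unit template at configure time. A rough shell equivalent of that substitution, with an illustrative install prefix and a one-line stand-in for the template:

```shell
# Rough shell equivalent of configure_file() placeholder substitution.
# CLIO_INSTALL_DIR and the template line are illustrative, not the real build values.
CLIO_INSTALL_DIR="/opt/clio"
printf 'ExecStart=@CLIO_INSTALL_DIR@/bin/clio_server @CLIO_INSTALL_DIR@/etc/config.json\n' > clio.service.in

# Replace every @CLIO_INSTALL_DIR@ occurrence, as configure_file() would.
sed "s|@CLIO_INSTALL_DIR@|${CLIO_INSTALL_DIR}|g" clio.service.in > clio.service
cat clio.service
```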


@@ -10,37 +10,36 @@ CLIO_BIN="$CLIO_PREFIX/bin/${CLIO_EXECUTABLE}"
CLIO_CONFIG="$CLIO_PREFIX/etc/config.json"
case "$1" in
configure)
if ! id -u "$USER_NAME" >/dev/null 2>&1; then
# Users who should not have a home directory should have their home directory set to /nonexistent
# https://www.debian.org/doc/debian-policy/ch-opersys.html#non-existent-home-directories
useradd \
--system \
--home-dir /nonexistent \
--no-create-home \
--shell /usr/sbin/nologin \
--comment "system user for ${CLIO_EXECUTABLE}" \
--user-group \
${USER_NAME}
fi
configure)
if ! id -u "$USER_NAME" >/dev/null 2>&1; then
# Users who should not have a home directory should have their home directory set to /nonexistent
# https://www.debian.org/doc/debian-policy/ch-opersys.html#non-existent-home-directories
useradd \
--system \
--home-dir /nonexistent \
--no-create-home \
--shell /usr/sbin/nologin \
--comment "system user for ${CLIO_EXECUTABLE}" \
--user-group \
${USER_NAME}
fi
install -d -o "$USER_NAME" -g "$GROUP_NAME" /var/log/clio
install -d -o "$USER_NAME" -g "$GROUP_NAME" /var/log/clio
if [ -f "$CLIO_CONFIG" ]; then
chown "$USER_NAME:$GROUP_NAME" "$CLIO_CONFIG"
fi
if [ -f "$CLIO_CONFIG" ]; then
chown "$USER_NAME:$GROUP_NAME" "$CLIO_CONFIG"
fi
chown -R "$USER_NAME:$GROUP_NAME" "$CLIO_PREFIX"
chown -R "$USER_NAME:$GROUP_NAME" "$CLIO_PREFIX"
ln -sf "$CLIO_BIN" "/usr/bin/${CLIO_EXECUTABLE}"
ln -sf "$CLIO_BIN" "/usr/bin/${CLIO_EXECUTABLE}"
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
;;
abort-upgrade | abort-remove | abort-deconfigure) ;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
exit 0


@@ -3,13 +3,13 @@
"requires": [
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1756234269.497",
"xxhash/0.8.3#681d36a0a6111fc56e5e45ea182c19cc%1756234289.683",
"xrpl/3.0.0-rc1#f5c8ecd42bdf511ad36f57bc702dacd2%1762975621.294",
"xrpl/3.0.0#534d3f65a336109eee929b88962bae4e%1765375071.547",
"sqlite3/3.49.1#8631739a4c9b93bd3d6b753bac548a63%1756234266.869",
"spdlog/1.15.3#3ca0e9e6b83af4d0151e26541d140c86%1754401846.61",
"spdlog/1.16.0#942c2c39562ae25ba575d9c8e2bdf3b6%1763984117.108",
"soci/4.0.3#a9f8d773cd33e356b5879a4b0564f287%1756234262.318",
"re2/20230301#dfd6e2bf050eb90ddd8729cfb4c844a4%1756234257.976",
"re2/20230301#ca3b241baec15bd31ea9187150e0b333%1764175362.029",
"rapidjson/cci.20220822#1b9d8c2256876a154172dc5cfbe447c6%1754325007.656",
"protobuf/3.21.12#d927114e28de9f4691a6bbcdd9a529d1%1756234251.614",
"protobuf/3.21.12#44ee56c0a6eea0c19aeeaca680370b88%1764175361.456",
"openssl/1.1.1w#a8f0792d7c5121b954578a7149d23e03%1756223730.729",
"nudb/2.0.9#fb8dfd1a5557f5e0528114c2da17721e%1763150366.909",
"minizip/1.2.13#9e87d57804bd372d6d1e32b1871517a3%1754325004.374",
@@ -17,41 +17,45 @@
"libuv/1.46.0#dc28c1f653fa197f00db5b577a6f6011%1754325003.592",
"libiconv/1.17#1e65319e945f2d31941a9d28cc13c058%1756223727.64",
"libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1756230911.03",
"libarchive/3.8.1#5cf685686322e906cb42706ab7e099a8%1756234256.696",
"libarchive/3.8.1#ffee18995c706e02bf96e7a2f7042e0d%1764175360.142",
"http_parser/2.9.4#98d91690d6fd021e9e624218a85d9d97%1754325001.385",
"gtest/1.14.0#f8f0757a574a8dd747d16af62d6eb1b7%1754325000.842",
"grpc/1.50.1#02291451d1e17200293a409410d1c4e1%1756234248.958",
"fmt/11.2.0#579bb2cdf4a7607621beea4eb4651e0f%1754324999.086",
"fmt/12.1.0#50abab23274d56bb8f42c94b3b9a40c7%1763984116.926",
"doctest/2.4.11#a4211dfc329a16ba9f280f9574025659%1756234220.819",
"date/3.0.4#f74bbba5a08fa388256688743136cb6f%1756234217.493",
"cassandra-cpp-driver/2.17.0#e50919efac8418c26be6671fd702540a%1754324997.363",
"c-ares/1.34.5#b78b91e7cfb1f11ce777a285bbf169c6%1756234217.915",
"bzip2/1.0.8#00b4a4658791c1f06914e087f0e792f5%1756234261.716",
"boost/1.83.0#5d975011d65b51abb2d2f6eb8386b368%1754325043.336",
"date/3.0.4#862e11e80030356b53c2c38599ceb32b%1763584497.32",
"cassandra-cpp-driver/2.17.0#bd3934138689482102c265d01288a316%1764175359.611",
"c-ares/1.34.5#5581c2b62a608b40bb85d965ab3ec7c8%1764175359.429",
"bzip2/1.0.8#c470882369c2d95c5c77e970c0c7e321%1764175359.429",
"boost/1.83.0#91d8b1572534d2c334d6790e3c34d0c1%1764175359.61",
"benchmark/1.9.4#ce4403f7a24d3e1f907cd9da4b678be4%1754578869.672",
"abseil/20230802.1#f0f91485b111dc9837a68972cb19ca7b%1756234220.907"
"abseil/20230802.1#90ba607d4ee8fb5fb157c3db540671fc%1764175359.429"
],
"build_requires": [
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1756234269.497",
"protobuf/3.21.12#d927114e28de9f4691a6bbcdd9a529d1%1756234251.614",
"cmake/3.31.8#dde3bde00bb843687e55aea5afa0e220%1756234232.89",
"protobuf/3.21.12#44ee56c0a6eea0c19aeeaca680370b88%1764175361.456",
"cmake/4.2.0#ae0a44f44a1ef9ab68fd4b3e9a1f8671%1764175359.44",
"cmake/3.31.10#313d16a1aa16bbdb2ca0792467214b76%1764175359.429",
"b2/5.3.3#107c15377719889654eb9a162a673975%1756234226.28"
],
"python_requires": [],
"overrides": {
"boost/1.83.0": [
null,
"boost/1.83.0#5d975011d65b51abb2d2f6eb8386b368"
"boost/1.83.0#91d8b1572534d2c334d6790e3c34d0c1"
],
"protobuf/3.21.12": [
null,
"protobuf/3.21.12#d927114e28de9f4691a6bbcdd9a529d1"
"protobuf/3.21.12#44ee56c0a6eea0c19aeeaca680370b88"
],
"lz4/1.9.4": [
"lz4/1.10.0"
],
"sqlite3/3.44.2": [
"sqlite3/3.49.1"
],
"fmt/12.0.0": [
"fmt/12.1.0"
]
},
"config_requires": []


@@ -3,62 +3,60 @@ from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
class ClioConan(ConanFile):
name = 'clio'
license = 'ISC'
author = 'Alex Kremer <akremer@ripple.com>, John Freeman <jfreeman@ripple.com>, Ayaz Salikhov <asalikhov@ripple.com>'
url = 'https://github.com/xrplf/clio'
description = 'Clio RPC server'
settings = 'os', 'compiler', 'build_type', 'arch'
name = "clio"
license = "ISC"
author = "Alex Kremer <akremer@ripple.com>, John Freeman <jfreeman@ripple.com>, Ayaz Salikhov <asalikhov@ripple.com>"
url = "https://github.com/xrplf/clio"
description = "Clio RPC server"
settings = "os", "compiler", "build_type", "arch"
options = {}
requires = [
'boost/1.83.0',
'cassandra-cpp-driver/2.17.0',
'fmt/11.2.0',
'protobuf/3.21.12',
'grpc/1.50.1',
'openssl/1.1.1w',
'xrpl/3.0.0-rc1',
'zlib/1.3.1',
'libbacktrace/cci.20210118',
'spdlog/1.15.3',
"boost/1.83.0",
"cassandra-cpp-driver/2.17.0",
"protobuf/3.21.12",
"grpc/1.50.1",
"openssl/1.1.1w",
"xrpl/3.0.0",
"zlib/1.3.1",
"libbacktrace/cci.20210118",
"spdlog/1.16.0",
]
default_options = {
'xrpl/*:tests': False,
'xrpl/*:rocksdb': False,
'cassandra-cpp-driver/*:shared': False,
'date/*:header_only': True,
'grpc/*:shared': False,
'grpc/*:secure': True,
'libpq/*:shared': False,
'lz4/*:shared': False,
'openssl/*:shared': False,
'protobuf/*:shared': False,
'protobuf/*:with_zlib': True,
'snappy/*:shared': False,
'gtest/*:no_main': True,
"xrpl/*:tests": False,
"xrpl/*:rocksdb": False,
"cassandra-cpp-driver/*:shared": False,
"date/*:header_only": True,
"grpc/*:shared": False,
"grpc/*:secure": True,
"libpq/*:shared": False,
"lz4/*:shared": False,
"openssl/*:shared": False,
"protobuf/*:shared": False,
"protobuf/*:with_zlib": True,
"snappy/*:shared": False,
"gtest/*:no_main": True,
}
exports_sources = (
'CMakeLists.txt', 'cmake/*', 'src/*'
)
exports_sources = ("CMakeLists.txt", "cmake/*", "src/*")
def requirements(self):
self.requires('gtest/1.14.0')
self.requires('benchmark/1.9.4')
self.requires("gtest/1.14.0")
self.requires("benchmark/1.9.4")
self.requires("fmt/12.1.0", force=True)
def configure(self):
if self.settings.compiler == 'apple-clang':
self.options['boost'].visibility = 'global'
if self.settings.compiler == "apple-clang":
self.options["boost"].visibility = "global"
def layout(self):
cmake_layout(self)
# Fix this setting to follow the default introduced in Conan 1.48
# to align with our build instructions.
self.folders.generators = 'build/generators'
self.folders.generators = "build/generators"
generators = 'CMakeDeps'
generators = "CMakeDeps"
def generate(self):
tc = CMakeToolchain(self)


@@ -36,7 +36,6 @@ RUN apt-get update \
libmpfr-dev \
libncurses-dev \
make \
ninja-build \
wget \
zip \
&& apt-get clean \
@@ -107,6 +106,7 @@ COPY --from=clio-tools \
/usr/local/bin/git-cliff \
/usr/local/bin/gh \
/usr/local/bin/gdb \
/usr/local/bin/ninja \
/usr/local/bin/
WORKDIR /root


@@ -15,6 +15,7 @@ The image is based on Ubuntu 20.04 and contains:
- gh 2.82.1
- git-cliff 2.10.1
- mold 2.40.4
- Ninja 1.13.2
- Python 3.8
- and some other useful tools


@@ -1,6 +1,6 @@
services:
clio_develop:
image: ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7
image: ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
volumes:
- clio_develop_conan_data:/root/.conan2/p
- clio_develop_ccache:/root/.ccache


@@ -2,7 +2,7 @@
script_dir=$(dirname $0)
pushd $script_dir > /dev/null
pushd $script_dir >/dev/null
function start_container {
if [ -z "$(docker ps -q -f name=clio_develop)" ]; then
@@ -41,21 +41,26 @@ EOF
}
case $1 in
-h|--help)
print_help ;;
-h | --help)
print_help
;;
-t|--terminal)
open_terminal ;;
-t | --terminal)
open_terminal
;;
-s|--stop)
stop_container ;;
-s | --stop)
stop_container
;;
-*)
echo "Unknown option: $1"
print_help ;;
-*)
echo "Unknown option: $1"
print_help
;;
*)
run "$@" ;;
*)
run "$@"
;;
esac
popd > /dev/null
popd >/dev/null


@@ -12,7 +12,6 @@ ARG BUILD_VERSION=0
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
ninja-build \
python3 \
python3-pip \
software-properties-common \
@@ -24,6 +23,15 @@ RUN apt-get update \
WORKDIR /tmp
ARG NINJA_VERSION=1.13.2
RUN wget --progress=dot:giga "https://github.com/ninja-build/ninja/archive/refs/tags/v${NINJA_VERSION}.tar.gz" \
&& tar xf "v${NINJA_VERSION}.tar.gz" \
&& cd "ninja-${NINJA_VERSION}" \
&& ./configure.py --bootstrap \
&& mv ninja /usr/local/bin/ninja \
&& rm -rf /tmp/* /var/tmp/*
ARG MOLD_VERSION=2.40.4
RUN wget --progress=dot:giga "https://github.com/rui314/mold/archive/refs/tags/v${MOLD_VERSION}.tar.gz" \
&& tar xf "v${MOLD_VERSION}.tar.gz" \

View File

@@ -97,30 +97,14 @@ Now you should be able to download the prebuilt dependencies (including `xrpl` p
#### Conan lockfile
To achieve reproducible dependencies, we use [Conan lockfile](https://docs.conan.io/2/tutorial/versioning/lockfiles.html).
To achieve reproducible dependencies, we use a [Conan lockfile](https://docs.conan.io/2/tutorial/versioning/lockfiles.html).
The `conan.lock` file in the repository contains a "snapshot" of the current dependencies.
It is implicitly used when running `conan` commands; you don't need to specify it.
You have to update this file every time you add a new dependency or change a revision or version of an existing dependency.
> [!NOTE]
> Conan uses the local cache by default when creating a lockfile.
>
> To ensure that lockfile creation works the same way on all developer machines, you should clear the local cache before creating a new lockfile.
To create a new lockfile, run the following commands in the repository root:
```bash
conan remove '*' --confirm
rm conan.lock
# This ensures that the xrplf remote is the first to be consulted
conan remote add --force --index 0 xrplf https://conan.ripplex.io
conan lock create .
```
> [!NOTE]
> If some dependencies are exclusive for some OS, you may need to run the last command for them adding `--profile:all <PROFILE>`.
To update a lockfile, run from the repository root: `./.github/scripts/conan/regenerate_lockfile.sh`
## Building Clio
@@ -191,7 +175,7 @@ Open the `index.html` file in your browser to see the documentation pages.
It is also possible to build Clio using [Docker](https://www.docker.com/) if you don't want to install all the dependencies on your machine.
```sh
docker run -it ghcr.io/xrplf/clio-ci:77387d8f9f13aea8f23831d221ac3e7683bb69b7
docker run -it ghcr.io/xrplf/clio-ci:067449c3f8ae6755ea84752ea2962b589fe56c8f
git clone https://github.com/XRPLF/clio
cd clio
```

View File

@@ -457,6 +457,14 @@ This document provides a list of all available Clio configuration properties in
- **Constraints**: None
- **Description**: Maximum allowed difference between the latest sequence in the DB and in the cache file. If the cache file is too old (its latest sequence is too low), Clio will reject using it.
### cache.file.async_save
- **Required**: True
- **Type**: boolean
- **Default value**: `False`
- **Constraints**: None
- **Description**: When false, Clio waits for cache saving to finish before shutting down. When true, cache saving runs in parallel with other shutdown operations.
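For reference, a minimal sketch of how this setting might appear in a Clio config file, with the dotted key path mapped onto nested JSON; the `path` value here is purely illustrative:

```json
{
  "cache": {
    "file": {
      "path": "/var/lib/clio/cache.bin",
      "async_save": true
    }
  }
}
```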
### log.channels.[].channel
- **Required**: False

View File

@@ -45,7 +45,7 @@ if [[ "1.14.0" > "$version" ]]; then
ERROR
-----------------------------------------------------------------------------
A minimum of version 1.14 of `which doxygen` is required.
A minimum of version 1.14 of $(which doxygen) is required.
Your version is $version. Please upgrade it.
Your changes may fail CI checks.
@@ -55,26 +55,26 @@ EOF
exit 0
fi
mkdir -p ${DOCDIR} > /dev/null 2>&1
pushd ${DOCDIR} > /dev/null 2>&1
mkdir -p ${DOCDIR} >/dev/null 2>&1
pushd ${DOCDIR} >/dev/null 2>&1
cat ${ROOT}/docs/Doxyfile | \
sed \
-e "s/\${LINT}/YES/" \
-e "s/\${WARN_AS_ERROR}/NO/" \
-e "s!\${SOURCE}!${ROOT}!" \
-e "s/\${USE_DOT}/NO/" \
-e "s/\${EXCLUDES}/impl/" \
| ${DOXYGEN} - 2> ${TMPFILE} 1> /dev/null
cat ${ROOT}/docs/Doxyfile |
sed \
-e "s/\${LINT}/YES/" \
-e "s/\${WARN_AS_ERROR}/NO/" \
-e "s!\${SOURCE}!${ROOT}!" \
-e "s/\${USE_DOT}/NO/" \
-e "s/\${EXCLUDES}/impl/" |
${DOXYGEN} - 2>${TMPFILE} 1>/dev/null
# We don't want to check default values, typedefs, or member variables
OUT=$(cat ${TMPFILE} \
| grep -v "=default" \
| grep -v "\(variable\)" \
| grep -v "\(typedef\)")
OUT=$(cat ${TMPFILE} |
grep -v "=default" |
grep -v "\(variable\)" |
grep -v "\(typedef\)")
rm -rf ${TMPFILE} > /dev/null 2>&1
popd > /dev/null 2>&1
rm -rf ${TMPFILE} >/dev/null 2>&1
popd >/dev/null 2>&1
if [[ ! -z "$OUT" ]]; then
cat <<EOF

View File

@@ -23,10 +23,10 @@ fix_includes() {
file_path_fixed="${file_path}.tmp.fixed"
# Make all includes to be <...> style
sed -E 's|#include "(.*)"|#include <\1>|g' "$file_path" > "$file_path_all_global"
sed -E 's|#include "(.*)"|#include <\1>|g' "$file_path" >"$file_path_all_global"
# Make local includes to be "..." style
sed -E "s|#include <(($main_src_dirs)/.*)>|#include \"\1\"|g" "$file_path_all_global" > "$file_path_fixed"
sed -E "s|#include <(($main_src_dirs)/.*)>|#include \"\1\"|g" "$file_path_all_global" >"$file_path_fixed"
rm "$file_path_all_global"
# Check if the temporary file is different from the original file

View File

@@ -4,7 +4,6 @@ import argparse
import re
from pathlib import Path
PATTERN = r'R"JSON\((.*?)\)JSON"'
@@ -40,6 +39,7 @@ def fix_colon_spacing(cpp_content: str) -> str:
raw_json = match.group(1)
raw_json = re.sub(r'":\n\s*(\[|\{)', r'": \1', raw_json)
return f'R"JSON({raw_json})JSON"'
return re.sub(PATTERN, replace_json, cpp_content, flags=re.DOTALL)
@@ -49,12 +49,12 @@ def fix_indentation(cpp_content: str) -> str:
lines = cpp_content.splitlines()
ends_with_newline = cpp_content.endswith('\n')
ends_with_newline = cpp_content.endswith("\n")
def find_indentation(line: str) -> int:
return len(line) - len(line.lstrip())
for (line_num, (line, next_line)) in enumerate(zip(lines[:-1], lines[1:])):
for line_num, (line, next_line) in enumerate(zip(lines[:-1], lines[1:])):
if "JSON(" in line and ")JSON" not in line:
indent = find_indentation(line)
next_indent = find_indentation(next_line)
@@ -69,7 +69,11 @@ def fix_indentation(cpp_content: str) -> str:
if ")JSON" in lines[i]:
lines[i] = " " * indent + lines[i].lstrip()
break
lines[i] = lines[i][by_how_much:] if by_how_much > 0 else " " * (-by_how_much) + lines[i]
lines[i] = (
lines[i][by_how_much:]
if by_how_much > 0
else " " * (-by_how_much) + lines[i]
)
result = "\n".join(lines)
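The colon-spacing pass from this pre-commit script can be sketched as a standalone snippet. This mirrors the `PATTERN` and `fix_colon_spacing` shown above, assuming only the regexes visible in the diff:

```python
import re

# Matches raw-string JSON literals embedded in C++ sources.
PATTERN = r'R"JSON\((.*?)\)JSON"'


def fix_colon_spacing(cpp_content: str) -> str:
    """Pull a '[' or '{' that starts on the line after a key's colon up next to it."""

    def replace_json(match: re.Match) -> str:
        raw_json = match.group(1)
        # '":' followed by a newline and indentation before '[' or '{'
        # becomes '": [' or '": {'.
        raw_json = re.sub(r'":\n\s*(\[|\{)', r'": \1', raw_json)
        return f'R"JSON({raw_json})JSON"'

    # DOTALL lets (.*?) span the newlines inside the JSON literal.
    return re.sub(PATTERN, replace_json, cpp_content, flags=re.DOTALL)


before = 'auto j = R"JSON({"a":\n  [1, 2]})JSON";'
after = fix_colon_spacing(before)
# The '[' now follows the colon on the same line.
```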

View File

@@ -4,7 +4,7 @@
#
set -e -o pipefail
if ! command -v gofmt &> /dev/null ; then
if ! command -v gofmt &>/dev/null; then
echo "gofmt not installed or available in the PATH" >&2
exit 1
fi

View File

@@ -1,5 +1,4 @@
#!/bin/sh
#!/bin/bash
# git for-each-ref refs/tags # see which tags are annotated and which are lightweight. Annotated tags are "tag" objects.
# # Set these so your commits and tags are always signed
@@ -7,7 +6,7 @@
# git config tag.gpgsign true
verify_commit_signed() {
if git verify-commit HEAD &> /dev/null; then
if git verify-commit HEAD &>/dev/null; then
:
# echo "HEAD commit seems signed..."
else
@@ -17,7 +16,7 @@ verify_commit_signed() {
}
verify_tag() {
if git describe --exact-match --tags HEAD &> /dev/null; then
if git describe --exact-match --tags HEAD &>/dev/null; then
: # You might be ok to push
# echo "Tag is annotated."
return 0
@@ -28,7 +27,7 @@ verify_tag() {
}
verify_tag_signed() {
if git verify-tag "$version" &> /dev/null ; then
if git verify-tag "$version" &>/dev/null; then
: # ok, I guess we'll let you push
# echo "Tag appears signed"
return 0
@@ -40,11 +39,11 @@ verify_tag_signed() {
}
# Check some things if we're pushing a branch called "release/"
if echo "$PRE_COMMIT_REMOTE_BRANCH" | grep ^refs\/heads\/release\/ &> /dev/null ; then
if echo "$PRE_COMMIT_REMOTE_BRANCH" | grep ^refs\/heads\/release\/ &>/dev/null; then
version=$(git tag --points-at HEAD)
echo "Looks like you're trying to push a $version release..."
echo "Making sure you've signed and tagged it."
if verify_commit_signed && verify_tag && verify_tag_signed ; then
if verify_commit_signed && verify_tag && verify_tag_signed; then
: # Ok, I guess you can push
else
exit 1

View File

@@ -77,7 +77,10 @@ CliArgs::parse(int argc, char const* argv[])
}
if (parsed.contains("version")) {
std::cout << util::build::getClioFullVersionString() << '\n';
std::cout << util::build::getClioFullVersionString() << '\n'
<< "Git commit hash: " << util::build::getGitCommitHash() << '\n'
<< "Git build branch: " << util::build::getGitBuildBranch() << '\n'
<< "Build date: " << util::build::getBuildDate() << '\n';
return Action{Action::Exit{EXIT_SUCCESS}};
}

View File

@@ -91,6 +91,7 @@ ClioApplication::ClioApplication(util::config::ClioConfigDefinition const& confi
{
LOG(util::LogService::info()) << "Clio version: " << util::build::getClioFullVersionString();
signalsHandler_.subscribeToStop([this]() { appStopper_.stop(); });
appStopper_.setOnComplete([this]() { signalsHandler_.notifyGracefulShutdownComplete(); });
}
int
@@ -182,7 +183,7 @@ ClioApplication::run(bool const useNgWebServer)
return EXIT_FAILURE;
}
httpServer->onGet("/metrics", MetricsHandler{adminVerifier});
httpServer->onGet("/metrics", MetricsHandler{adminVerifier, workQueue});
httpServer->onGet("/health", HealthCheckHandler{});
httpServer->onGet("/cache_state", CacheStateHandler{cache});
auto requestHandler = RequestHandler{adminVerifier, handler};

View File

@@ -38,7 +38,18 @@ Stopper::~Stopper()
void
Stopper::setOnStop(std::function<void(boost::asio::yield_context)> cb)
{
util::spawn(ctx_, std::move(cb));
util::spawn(ctx_, [this, cb = std::move(cb)](auto yield) {
cb(yield);
if (onCompleteCallback_)
onCompleteCallback_();
});
}
void
Stopper::setOnComplete(std::function<void()> cb)
{
onCompleteCallback_ = std::move(cb);
}
void

View File

@@ -43,6 +43,7 @@ namespace app {
class Stopper {
boost::asio::io_context ctx_;
std::thread worker_;
std::function<void()> onCompleteCallback_;
public:
/**
@@ -58,6 +59,14 @@ public:
void
setOnStop(std::function<void(boost::asio::yield_context)> cb);
/**
* @brief Set the callback to be called when graceful shutdown completes.
*
* @param cb The callback to be called when shutdown completes.
*/
void
setOnComplete(std::function<void()> cb);
/**
* @brief Stop the application and run the shutdown tasks.
*/

View File

@@ -19,7 +19,10 @@
#include "app/WebHandlers.hpp"
#include "rpc/Errors.hpp"
#include "rpc/WorkQueue.hpp"
#include "util/Assert.hpp"
#include "util/CoroutineGroup.hpp"
#include "util/prometheus/Http.hpp"
#include "web/AdminVerificationStrategy.hpp"
#include "web/SubscriptionContextInterface.hpp"
@@ -31,6 +34,7 @@
#include <boost/asio/spawn.hpp>
#include <boost/beast/http/status.hpp>
#include <functional>
#include <memory>
#include <optional>
#include <string>
@@ -76,8 +80,8 @@ DisconnectHook::operator()(web::ng::Connection const& connection)
dosguard_.get().decrement(connection.ip());
}
MetricsHandler::MetricsHandler(std::shared_ptr<web::AdminVerificationStrategy> adminVerifier)
: adminVerifier_{std::move(adminVerifier)}
MetricsHandler::MetricsHandler(std::shared_ptr<web::AdminVerificationStrategy> adminVerifier, rpc::WorkQueue& workQueue)
: adminVerifier_{std::move(adminVerifier)}, workQueue_{std::ref(workQueue)}
{
}
@@ -86,19 +90,45 @@ MetricsHandler::operator()(
web::ng::Request const& request,
web::ng::ConnectionMetadata& connectionMetadata,
web::SubscriptionContextPtr,
boost::asio::yield_context
boost::asio::yield_context yield
)
{
auto const maybeHttpRequest = request.asHttpRequest();
ASSERT(maybeHttpRequest.has_value(), "Got not a http request in Get");
auto const& httpRequest = maybeHttpRequest->get();
std::optional<web::ng::Response> response;
util::CoroutineGroup coroutineGroup{yield, 1};
auto const onTaskComplete = coroutineGroup.registerForeign(yield);
ASSERT(onTaskComplete.has_value(), "Coroutine group can't be full");
// FIXME(#1702): Using web server thread to handle prometheus request. Better to post on work queue.
auto maybeResponse = util::prometheus::handlePrometheusRequest(
httpRequest, adminVerifier_->isAdmin(httpRequest, connectionMetadata.ip())
bool const postSuccessful = workQueue_.get().postCoro(
[this, &request, &response, &onTaskComplete = onTaskComplete.value(), &connectionMetadata](
boost::asio::yield_context
) mutable {
auto const maybeHttpRequest = request.asHttpRequest();
ASSERT(maybeHttpRequest.has_value(), "Got not a http request in Get");
auto const& httpRequest = maybeHttpRequest->get();
auto maybeResponse = util::prometheus::handlePrometheusRequest(
httpRequest, adminVerifier_->isAdmin(httpRequest, connectionMetadata.ip())
);
ASSERT(maybeResponse.has_value(), "Got unexpected request for Prometheus");
response = web::ng::Response{std::move(maybeResponse).value(), request};
// notify the coroutine group that the foreign task is done
onTaskComplete();
},
/* isWhiteListed= */ true,
rpc::WorkQueue::Priority::High
);
ASSERT(maybeResponse.has_value(), "Got unexpected request for Prometheus");
return web::ng::Response{std::move(maybeResponse).value(), request};
if (!postSuccessful) {
return web::ng::Response{
boost::beast::http::status::too_many_requests, rpc::makeError(rpc::RippledError::rpcTOO_BUSY), request
};
}
// Put the coroutine to sleep until the foreign task is done
coroutineGroup.asyncWait(yield);
ASSERT(response.has_value(), "Woke up coroutine without setting response");
return std::move(response).value();
}
web::ng::Response

View File

@@ -21,6 +21,7 @@
#include "data/LedgerCacheInterface.hpp"
#include "rpc/Errors.hpp"
#include "rpc/WorkQueue.hpp"
#include "util/log/Logger.hpp"
#include "web/AdminVerificationStrategy.hpp"
#include "web/SubscriptionContextInterface.hpp"
@@ -119,20 +120,23 @@ public:
*/
class MetricsHandler {
std::shared_ptr<web::AdminVerificationStrategy> adminVerifier_;
std::reference_wrapper<rpc::WorkQueue> workQueue_;
public:
/**
* @brief Construct a new MetricsHandler object
*
* @param adminVerifier The AdminVerificationStrategy to use for verifying the connection for admin access.
* @param workQueue The WorkQueue to use for handling the request.
*/
MetricsHandler(std::shared_ptr<web::AdminVerificationStrategy> adminVerifier);
MetricsHandler(std::shared_ptr<web::AdminVerificationStrategy> adminVerifier, rpc::WorkQueue& workQueue);
/**
* @brief The call of the function object.
*
* @param request The request to handle.
* @param connectionMetadata The connection metadata.
* @param yield The yield context.
* @return The response to the request.
*/
web::ng::Response
@@ -140,7 +144,7 @@ public:
web::ng::Request const& request,
web::ng::ConnectionMetadata& connectionMetadata,
web::SubscriptionContextPtr,
boost::asio::yield_context
boost::asio::yield_context yield
);
};

View File

@@ -152,6 +152,7 @@ struct Amendments {
REGISTER(fixDirectoryLimit);
REGISTER(fixIncludeKeyletFields);
REGISTER(fixTokenEscrowV1);
REGISTER(LendingProtocol);
// Obsolete but supported by libxrpl
REGISTER(CryptoConditionsSuite);

View File

@@ -30,7 +30,9 @@
namespace data {
LedgerCacheSaver::LedgerCacheSaver(util::config::ClioConfigDefinition const& config, LedgerCacheInterface const& cache)
: cacheFilePath_(config.maybeValue<std::string>("cache.file.path")), cache_(cache)
: cacheFilePath_(config.maybeValue<std::string>("cache.file.path"))
, cache_(cache)
, isAsync_(config.get<bool>("cache.file.async_save"))
{
}
@@ -56,6 +58,9 @@ LedgerCacheSaver::save()
LOG(util::LogService::error()) << "Error saving LedgerCache to file: " << success.error();
}
});
if (not isAsync_) {
waitToFinish();
}
}
void

View File

@@ -53,6 +53,7 @@ class LedgerCacheSaver {
std::optional<std::string> cacheFilePath_;
std::reference_wrapper<LedgerCacheInterface const> cache_;
std::optional<std::thread> savingThread_;
bool isAsync_;
public:
/**

View File

@@ -87,8 +87,8 @@ TaskManager::run(std::size_t numExtractors)
util::async::AnyOperation<void>
TaskManager::spawnExtractor(TaskQueue& queue)
{
// TODO: these values may be extracted to config later and/or need to be fine-tuned on a realistic system
static constexpr auto kDELAY_BETWEEN_ATTEMPTS = std::chrono::milliseconds{100u};
// TODO https://github.com/XRPLF/clio/issues/2838: the approach should be changed to a reactive one instead
static constexpr auto kDELAY_BETWEEN_ATTEMPTS = std::chrono::milliseconds{10u};
static constexpr auto kDELAY_BETWEEN_ENQUEUE_ATTEMPTS = std::chrono::milliseconds{1u};
return ctx_.execute([this, &queue](auto stopRequested) {

View File

@@ -34,8 +34,8 @@
#include <cstddef>
#include <cstdint>
#include <functional>
#include <optional>
#include <utility>
#include <vector>
namespace rpc {
@@ -122,7 +122,7 @@ WorkQueue::dispatcherLoop(boost::asio::yield_context yield)
// all ongoing tasks must be completed before stopping fully
while (not stopping_ or size() > 0) {
std::vector<TaskType> batch;
std::optional<TaskType> task;
{
auto state = dispatcherState_.lock();
@@ -130,43 +130,31 @@ WorkQueue::dispatcherLoop(boost::asio::yield_context yield)
if (state->empty()) {
state->isIdle = true;
} else {
for (auto count = 0uz; count < kTAKE_HIGH_PRIO and not state->high.empty(); ++count) {
batch.push_back(std::move(state->high.front()));
state->high.pop();
}
if (not state->normal.empty()) {
batch.push_back(std::move(state->normal.front()));
state->normal.pop();
}
task = state->popNext();
}
}
if (not stopping_ and batch.empty()) {
if (not stopping_ and not task.has_value()) {
waitTimer_.expires_at(std::chrono::steady_clock::time_point::max());
boost::system::error_code ec;
waitTimer_.async_wait(yield[ec]);
} else {
for (auto task : std::move(batch)) {
util::spawn(
ioc_,
[this, spawnedAt = std::chrono::system_clock::now(), task = std::move(task)](auto yield) mutable {
auto const takenAt = std::chrono::system_clock::now();
auto const waited =
std::chrono::duration_cast<std::chrono::microseconds>(takenAt - spawnedAt).count();
} else if (task.has_value()) {
util::spawn(
ioc_,
[this, spawnedAt = std::chrono::system_clock::now(), task = std::move(*task)](auto yield) mutable {
auto const takenAt = std::chrono::system_clock::now();
auto const waited =
std::chrono::duration_cast<std::chrono::microseconds>(takenAt - spawnedAt).count();
++queued_.get();
durationUs_.get() += waited;
LOG(log_.info()) << "WorkQueue wait time: " << waited << ", queue size: " << size();
++queued_.get();
durationUs_.get() += waited;
LOG(log_.info()) << "WorkQueue wait time: " << waited << ", queue size: " << size();
task(yield);
task(yield);
--curSize_.get();
}
);
}
boost::asio::post(ioc_.get_executor(), yield); // yield back to avoid hijacking the thread
--curSize_.get();
}
);
}
}

View File

@@ -38,7 +38,9 @@
#include <cstdint>
#include <functional>
#include <limits>
#include <optional>
#include <queue>
#include <utility>
namespace rpc {
@@ -79,6 +81,7 @@ private:
QueueType normal;
bool isIdle = false;
size_t highPriorityCounter = 0;
void
push(Priority priority, auto&& task)
@@ -96,6 +99,26 @@ private:
{
return high.empty() and normal.empty();
}
[[nodiscard]] std::optional<TaskType>
popNext()
{
if (not high.empty() and (highPriorityCounter < kTAKE_HIGH_PRIO or normal.empty())) {
auto task = std::move(high.front());
high.pop();
++highPriorityCounter;
return task;
}
if (not normal.empty()) {
auto task = std::move(normal.front());
normal.pop();
highPriorityCounter = 0;
return task;
}
return std::nullopt;
}
};
private:
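The `popNext()` policy above (prefer high-priority tasks, but after `kTAKE_HIGH_PRIO` consecutive ones let a normal task through and reset the counter) can be sketched in Python. `K_TAKE_HIGH_PRIO = 2` is an illustrative value, not the constant Clio actually uses:

```python
from collections import deque
from typing import Optional

K_TAKE_HIGH_PRIO = 2  # illustrative value, not Clio's actual constant


class State:
    """Simplified stand-in for WorkQueue's dispatcher state."""

    def __init__(self) -> None:
        self.high: deque = deque()
        self.normal: deque = deque()
        self.high_priority_counter = 0

    def pop_next(self) -> Optional[str]:
        # Take high-priority work while under the quota, or whenever
        # there is no normal work to interleave.
        if self.high and (self.high_priority_counter < K_TAKE_HIGH_PRIO or not self.normal):
            self.high_priority_counter += 1
            return self.high.popleft()
        # A normal task gets through; the fairness counter resets.
        if self.normal:
            self.high_priority_counter = 0
            return self.normal.popleft()
        return None


s = State()
s.high.extend(["h1", "h2", "h3"])
s.normal.extend(["n1"])
order = []
while (task := s.pop_next()) is not None:
    order.append(task)
# order == ["h1", "h2", "n1", "h3"]: two high tasks, one normal, then the rest.
```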

View File

@@ -316,8 +316,11 @@ tag_invoke(boost::json::value_to_tag<AMMInfoHandler::Input>, boost::json::value
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = boost::json::value_to<std::string>(jv.at(JS(ledger_hash)));
if (jsonObject.contains(JS(ledger_index)))
input.ledgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
if (jsonObject.contains(JS(ledger_index))) {
auto const expectedLedgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
if (expectedLedgerIndex.has_value())
input.ledgerIndex = *expectedLedgerIndex;
}
if (jsonObject.contains(JS(asset)))
input.issue1 = parseIssue(jsonObject.at(JS(asset)).as_object());

View File

@@ -154,8 +154,11 @@ tag_invoke(boost::json::value_to_tag<AccountChannelsHandler::Input>, boost::json
if (jsonObject.contains(JS(destination_account)))
input.destinationAccount = boost::json::value_to<std::string>(jv.at(JS(destination_account)));
if (jsonObject.contains(JS(ledger_index)))
input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (jsonObject.contains(JS(ledger_index))) {
auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (expectedLedgerIndex.has_value())
input.ledgerIndex = *expectedLedgerIndex;
}
return input;
}

View File

@@ -128,8 +128,11 @@ tag_invoke(boost::json::value_to_tag<AccountCurrenciesHandler::Input>, boost::js
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = boost::json::value_to<std::string>(jv.at(JS(ledger_hash)));
if (jsonObject.contains(JS(ledger_index)))
input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (jsonObject.contains(JS(ledger_index))) {
auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (expectedLedgerIndex.has_value())
input.ledgerIndex = *expectedLedgerIndex;
}
return input;
}

View File

@@ -204,8 +204,11 @@ tag_invoke(boost::json::value_to_tag<AccountInfoHandler::Input>, boost::json::va
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = boost::json::value_to<std::string>(jsonObject.at(JS(ledger_hash)));
if (jsonObject.contains(JS(ledger_index)))
input.ledgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
if (jsonObject.contains(JS(ledger_index))) {
auto const expectedLedgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
if (expectedLedgerIndex.has_value())
input.ledgerIndex = *expectedLedgerIndex;
}
if (jsonObject.contains(JS(signer_lists)))
input.signerLists = boost::json::value_to<JsonBool>(jsonObject.at(JS(signer_lists)));

View File

@@ -215,8 +215,11 @@ tag_invoke(boost::json::value_to_tag<AccountLinesHandler::Input>, boost::json::v
if (jsonObject.contains(JS(ignore_default)))
input.ignoreDefault = jv.at(JS(ignore_default)).as_bool();
if (jsonObject.contains(JS(ledger_index)))
input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (jsonObject.contains(JS(ledger_index))) {
auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (expectedLedgerIndex.has_value())
input.ledgerIndex = *expectedLedgerIndex;
}
return input;
}

View File

@@ -56,6 +56,7 @@ AccountMPTokenIssuancesHandler::addMPTokenIssuance(
{
MPTokenIssuanceResponse issuance;
issuance.MPTokenIssuanceID = ripple::strHex(sle.key());
issuance.issuer = ripple::to_string(account);
issuance.sequence = sle.getFieldU32(ripple::sfSequence);
auto const flags = sle.getFieldU32(ripple::sfFlags);
@@ -73,6 +74,24 @@ AccountMPTokenIssuancesHandler::addMPTokenIssuance(
setFlag(issuance.mptCanTransfer, ripple::lsfMPTCanTransfer);
setFlag(issuance.mptCanClawback, ripple::lsfMPTCanClawback);
if (sle.isFieldPresent(ripple::sfMutableFlags)) {
auto const mutableFlags = sle.getFieldU32(ripple::sfMutableFlags);
auto const setMutableFlag = [&](std::optional<bool>& field, std::uint32_t mask) {
if ((mutableFlags & mask) != 0u)
field = true;
};
setMutableFlag(issuance.mptCanMutateCanLock, ripple::lsmfMPTCanMutateCanLock);
setMutableFlag(issuance.mptCanMutateRequireAuth, ripple::lsmfMPTCanMutateRequireAuth);
setMutableFlag(issuance.mptCanMutateCanEscrow, ripple::lsmfMPTCanMutateCanEscrow);
setMutableFlag(issuance.mptCanMutateCanTrade, ripple::lsmfMPTCanMutateCanTrade);
setMutableFlag(issuance.mptCanMutateCanTransfer, ripple::lsmfMPTCanMutateCanTransfer);
setMutableFlag(issuance.mptCanMutateCanClawback, ripple::lsmfMPTCanMutateCanClawback);
setMutableFlag(issuance.mptCanMutateMetadata, ripple::lsmfMPTCanMutateMetadata);
setMutableFlag(issuance.mptCanMutateTransferFee, ripple::lsmfMPTCanMutateTransferFee);
}
if (sle.isFieldPresent(ripple::sfTransferFee))
issuance.transferFee = sle.getFieldU16(ripple::sfTransferFee);
@@ -164,8 +183,11 @@ tag_invoke(boost::json::value_to_tag<AccountMPTokenIssuancesHandler::Input>, boo
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = boost::json::value_to<std::string>(jv.at(JS(ledger_hash)));
if (jsonObject.contains(JS(ledger_index)))
input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (jsonObject.contains(JS(ledger_index))) {
auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (expectedLedgerIndex.has_value())
input.ledgerIndex = *expectedLedgerIndex;
}
return input;
}
@@ -198,6 +220,7 @@ tag_invoke(
)
{
auto obj = boost::json::object{
{JS(mpt_issuance_id), issuance.MPTokenIssuanceID},
{JS(issuer), issuance.issuer},
{JS(sequence), issuance.sequence},
};
@@ -224,6 +247,15 @@ tag_invoke(
setIfPresent("mpt_can_transfer", issuance.mptCanTransfer);
setIfPresent("mpt_can_clawback", issuance.mptCanClawback);
setIfPresent("mpt_can_mutate_can_lock", issuance.mptCanMutateCanLock);
setIfPresent("mpt_can_mutate_require_auth", issuance.mptCanMutateRequireAuth);
setIfPresent("mpt_can_mutate_can_escrow", issuance.mptCanMutateCanEscrow);
setIfPresent("mpt_can_mutate_can_trade", issuance.mptCanMutateCanTrade);
setIfPresent("mpt_can_mutate_can_transfer", issuance.mptCanMutateCanTransfer);
setIfPresent("mpt_can_mutate_can_clawback", issuance.mptCanMutateCanClawback);
setIfPresent("mpt_can_mutate_metadata", issuance.mptCanMutateMetadata);
setIfPresent("mpt_can_mutate_transfer_fee", issuance.mptCanMutateTransferFee);
jv = std::move(obj);
}

View File

@@ -61,6 +61,7 @@ public:
* @brief A struct to hold data for one MPTokenIssuance response.
*/
struct MPTokenIssuanceResponse {
std::string MPTokenIssuanceID;
std::string issuer;
uint32_t sequence{};
@@ -80,6 +81,15 @@ public:
std::optional<bool> mptCanTrade;
std::optional<bool> mptCanTransfer;
std::optional<bool> mptCanClawback;
std::optional<bool> mptCanMutateCanLock;
std::optional<bool> mptCanMutateRequireAuth;
std::optional<bool> mptCanMutateCanEscrow;
std::optional<bool> mptCanMutateCanTrade;
std::optional<bool> mptCanMutateCanTransfer;
std::optional<bool> mptCanMutateCanClawback;
std::optional<bool> mptCanMutateMetadata;
std::optional<bool> mptCanMutateTransferFee;
};
/**

View File

@@ -54,6 +54,7 @@ AccountMPTokensHandler::addMPToken(std::vector<MPTokenResponse>& mpts, ripple::S
MPTokenResponse token{};
auto const flags = sle.getFieldU32(ripple::sfFlags);
token.MPTokenID = ripple::strHex(sle.key());
token.account = ripple::to_string(sle.getAccountID(ripple::sfAccount));
token.MPTokenIssuanceID = ripple::strHex(sle.getFieldH192(ripple::sfMPTokenIssuanceID));
token.MPTAmount = sle.getFieldU64(ripple::sfMPTAmount);
@@ -139,8 +140,11 @@ tag_invoke(boost::json::value_to_tag<AccountMPTokensHandler::Input>, boost::json
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = boost::json::value_to<std::string>(jv.at(JS(ledger_hash)));
if (jsonObject.contains(JS(ledger_index)))
input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (jsonObject.contains(JS(ledger_index))) {
auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (expectedLedgerIndex.has_value())
input.ledgerIndex = *expectedLedgerIndex;
}
return input;
}
@@ -167,6 +171,7 @@ void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, AccountMPTokensHandler::MPTokenResponse const& mptoken)
{
auto obj = boost::json::object{
{"mpt_id", mptoken.MPTokenID},
{JS(account), mptoken.account},
{JS(mpt_issuance_id), mptoken.MPTokenIssuanceID},
{JS(mpt_amount), mptoken.MPTAmount},

View File

@@ -59,6 +59,7 @@ public:
* @brief A struct to hold data for one MPToken response.
*/
struct MPTokenResponse {
std::string MPTokenID;
std::string account;
std::string MPTokenIssuanceID;
uint64_t MPTAmount{};

View File

@@ -157,8 +157,11 @@ tag_invoke(boost::json::value_to_tag<AccountNFTsHandler::Input>, boost::json::va
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = boost::json::value_to<std::string>(jsonObject.at(JS(ledger_hash)));
if (jsonObject.contains(JS(ledger_index)))
input.ledgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
if (jsonObject.contains(JS(ledger_index))) {
auto const expectedLedgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
if (expectedLedgerIndex.has_value())
input.ledgerIndex = *expectedLedgerIndex;
}
if (jsonObject.contains(JS(limit)))
input.limit = util::integralValueAs<uint32_t>(jsonObject.at(JS(limit)));

View File

@@ -153,8 +153,11 @@ tag_invoke(boost::json::value_to_tag<AccountObjectsHandler::Input>, boost::json:
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = boost::json::value_to<std::string>(jv.at(JS(ledger_hash)));
if (jsonObject.contains(JS(ledger_index)))
input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (jsonObject.contains(JS(ledger_index))) {
auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (expectedLedgerIndex.has_value())
input.ledgerIndex = *expectedLedgerIndex;
}
if (jsonObject.contains(JS(type))) {
input.type =

View File

@@ -169,8 +169,11 @@ tag_invoke(boost::json::value_to_tag<AccountOffersHandler::Input>, boost::json::
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = boost::json::value_to<std::string>(jsonObject.at(JS(ledger_hash)));
if (jsonObject.contains(JS(ledger_index)))
input.ledgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
if (jsonObject.contains(JS(ledger_index))) {
auto const expectedLedgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
if (expectedLedgerIndex.has_value())
input.ledgerIndex = *expectedLedgerIndex;
}
if (jsonObject.contains(JS(limit)))
input.limit = util::integralValueAs<uint32_t>(jsonObject.at(JS(limit)));

View File

@@ -258,8 +258,10 @@ tag_invoke(boost::json::value_to_tag<AccountTxHandler::Input>, boost::json::valu
input.ledgerHash = boost::json::value_to<std::string>(jsonObject.at(JS(ledger_hash)));
if (jsonObject.contains(JS(ledger_index))) {
input.ledgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
if (not input.ledgerIndex.has_value()) {
auto const expectedLedgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
if (expectedLedgerIndex.has_value()) {
input.ledgerIndex = *expectedLedgerIndex;
} else {
// could not get the latest validated ledger seq here, using this flag to indicate that
input.usingValidatedLedger = true;
}

View File

@@ -90,8 +90,11 @@ tag_invoke(boost::json::value_to_tag<BookChangesHandler::Input>, boost::json::va
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = boost::json::value_to<std::string>(jv.at(JS(ledger_hash)));
if (jsonObject.contains(JS(ledger_index)))
input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (jsonObject.contains(JS(ledger_index))) {
auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (expectedLedgerIndex.has_value())
input.ledgerIndex = *expectedLedgerIndex;
}
return input;
}

View File

@@ -122,8 +122,11 @@ tag_invoke(boost::json::value_to_tag<BookOffersHandler::Input>, boost::json::val
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = boost::json::value_to<std::string>(jv.at(JS(ledger_hash)));
if (jsonObject.contains(JS(ledger_index)))
input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (jsonObject.contains(JS(ledger_index))) {
auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (expectedLedgerIndex.has_value())
input.ledgerIndex = *expectedLedgerIndex;
}
if (jsonObject.contains(JS(taker)))
input.taker = accountFromStringStrict(boost::json::value_to<std::string>(jv.at(JS(taker))));

View File

@@ -145,8 +145,11 @@ tag_invoke(boost::json::value_to_tag<DepositAuthorizedHandler::Input>, boost::js
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = boost::json::value_to<std::string>(jv.at(JS(ledger_hash)));
if (jsonObject.contains(JS(ledger_index)))
input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (jsonObject.contains(JS(ledger_index))) {
auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
if (expectedLedgerIndex.has_value())
input.ledgerIndex = *expectedLedgerIndex;
}
if (jsonObject.contains(JS(credentials)))
input.credentials = boost::json::value_to<boost::json::array>(jv.at(JS(credentials)));

View File

@@ -168,8 +168,11 @@ tag_invoke(boost::json::value_to_tag<FeatureHandler::Input>, boost::json::value
     if (jsonObject.contains(JS(ledger_hash)))
         input.ledgerHash = boost::json::value_to<std::string>(jv.at(JS(ledger_hash)));
-    if (jsonObject.contains(JS(ledger_index)))
-        input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
+    if (jsonObject.contains(JS(ledger_index))) {
+        auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
+        if (expectedLedgerIndex.has_value())
+            input.ledgerIndex = *expectedLedgerIndex;
+    }
     return input;
 }


@@ -249,8 +249,11 @@ tag_invoke(boost::json::value_to_tag<GatewayBalancesHandler::Input>, boost::json
     if (jsonObject.contains(JS(ledger_hash)))
         input.ledgerHash = boost::json::value_to<std::string>(jv.at(JS(ledger_hash)));
-    if (jsonObject.contains(JS(ledger_index)))
-        input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
+    if (jsonObject.contains(JS(ledger_index))) {
+        auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
+        if (expectedLedgerIndex.has_value())
+            input.ledgerIndex = *expectedLedgerIndex;
+    }
     if (jsonObject.contains(JS(hotwallet))) {
         if (jsonObject.at(JS(hotwallet)).is_string()) {


@@ -263,8 +263,11 @@ tag_invoke(boost::json::value_to_tag<GetAggregatePriceHandler::Input>, boost::js
     if (jsonObject.contains(JS(ledger_hash)))
         input.ledgerHash = boost::json::value_to<std::string>(jv.at(JS(ledger_hash)));
-    if (jsonObject.contains(JS(ledger_index)))
-        input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
+    if (jsonObject.contains(JS(ledger_index))) {
+        auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
+        if (expectedLedgerIndex.has_value())
+            input.ledgerIndex = *expectedLedgerIndex;
+    }
     for (auto const& oracle : jsonObject.at(JS(oracles)).as_array()) {
         input.oracles.push_back(


@@ -208,8 +208,11 @@ tag_invoke(boost::json::value_to_tag<LedgerHandler::Input>, boost::json::value c
     if (jsonObject.contains(JS(ledger_hash)))
         input.ledgerHash = boost::json::value_to<std::string>(jv.at(JS(ledger_hash)));
-    if (jsonObject.contains(JS(ledger_index)))
-        input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
+    if (jsonObject.contains(JS(ledger_index))) {
+        auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
+        if (expectedLedgerIndex.has_value())
+            input.ledgerIndex = *expectedLedgerIndex;
+    }
     if (jsonObject.contains(JS(transactions)))
         input.transactions = jv.at(JS(transactions)).as_bool();


@@ -210,8 +210,11 @@ tag_invoke(boost::json::value_to_tag<LedgerDataHandler::Input>, boost::json::val
     if (jsonObject.contains(JS(ledger_hash)))
         input.ledgerHash = boost::json::value_to<std::string>(jsonObject.at(JS(ledger_hash)));
-    if (jsonObject.contains(JS(ledger_index)))
-        input.ledgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+    if (jsonObject.contains(JS(ledger_index))) {
+        auto const expectedLedgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+        if (expectedLedgerIndex.has_value())
+            input.ledgerIndex = *expectedLedgerIndex;
+    }
     if (jsonObject.contains(JS(type)))
         input.type = util::LedgerTypes::getLedgerEntryTypeFromStr(boost::json::value_to<std::string>(jv.at(JS(type))));


@@ -305,8 +305,11 @@ tag_invoke(boost::json::value_to_tag<LedgerEntryHandler::Input>, boost::json::va
     if (jsonObject.contains(JS(ledger_hash)))
         input.ledgerHash = boost::json::value_to<std::string>(jv.at(JS(ledger_hash)));
-    if (jsonObject.contains(JS(ledger_index)))
-        input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
+    if (jsonObject.contains(JS(ledger_index))) {
+        auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
+        if (expectedLedgerIndex.has_value())
+            input.ledgerIndex = *expectedLedgerIndex;
+    }
     if (jsonObject.contains(JS(binary)))
         input.binary = jv.at(JS(binary)).as_bool();


@@ -124,8 +124,11 @@ tag_invoke(boost::json::value_to_tag<MPTHoldersHandler::Input>, boost::json::val
     if (jsonObject.contains(JS(ledger_hash)))
         input.ledgerHash = jsonObject.at(JS(ledger_hash)).as_string().c_str();
-    if (jsonObject.contains(JS(ledger_index)))
-        input.ledgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+    if (jsonObject.contains(JS(ledger_index))) {
+        auto const expectedLedgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+        if (expectedLedgerIndex.has_value())
+            input.ledgerIndex = *expectedLedgerIndex;
+    }
     if (jsonObject.contains(JS(limit)))
         input.limit = util::integralValueAs<uint32_t>(jsonObject.at(JS(limit)));


@@ -215,8 +215,11 @@ tag_invoke(boost::json::value_to_tag<NFTHistoryHandler::Input>, boost::json::val
     if (jsonObject.contains(JS(ledger_hash)))
         input.ledgerHash = boost::json::value_to<std::string>(jsonObject.at(JS(ledger_hash)));
-    if (jsonObject.contains(JS(ledger_index)))
-        input.ledgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+    if (jsonObject.contains(JS(ledger_index))) {
+        auto const expectedLedgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+        if (expectedLedgerIndex.has_value())
+            input.ledgerIndex = *expectedLedgerIndex;
+    }
     if (jsonObject.contains(JS(binary)))
         input.binary = jsonObject.at(JS(binary)).as_bool();


@@ -115,8 +115,11 @@ tag_invoke(boost::json::value_to_tag<NFTInfoHandler::Input>, boost::json::value
     if (jsonObject.contains(JS(ledger_hash)))
         input.ledgerHash = boost::json::value_to<std::string>(jsonObject.at(JS(ledger_hash)));
-    if (jsonObject.contains(JS(ledger_index)))
-        input.ledgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+    if (jsonObject.contains(JS(ledger_index))) {
+        auto const expectedLedgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+        if (expectedLedgerIndex.has_value())
+            input.ledgerIndex = *expectedLedgerIndex;
+    }
     return input;
 }


@@ -194,8 +194,11 @@ tag_invoke(boost::json::value_to_tag<NFTOffersHandlerBase::Input>, boost::json::
     if (jsonObject.contains(JS(ledger_hash)))
         input.ledgerHash = boost::json::value_to<std::string>(jsonObject.at(JS(ledger_hash)));
-    if (jsonObject.contains(JS(ledger_index)))
-        input.ledgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+    if (jsonObject.contains(JS(ledger_index))) {
+        auto const expectedLedgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+        if (expectedLedgerIndex.has_value())
+            input.ledgerIndex = *expectedLedgerIndex;
+    }
     if (jsonObject.contains(JS(marker)))
         input.marker = boost::json::value_to<std::string>(jsonObject.at(JS(marker)));


@@ -136,8 +136,11 @@ tag_invoke(boost::json::value_to_tag<NFTsByIssuerHandler::Input>, boost::json::v
     if (jsonObject.contains(JS(ledger_hash)))
         input.ledgerHash = boost::json::value_to<std::string>(jsonObject.at(JS(ledger_hash)));
-    if (jsonObject.contains(JS(ledger_index)))
-        input.ledgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+    if (jsonObject.contains(JS(ledger_index))) {
+        auto const expectedLedgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+        if (expectedLedgerIndex.has_value())
+            input.ledgerIndex = *expectedLedgerIndex;
+    }
     if (jsonObject.contains(JS(limit)))
         input.limit = util::integralValueAs<uint32_t>(jsonObject.at(JS(limit)));


@@ -196,8 +196,11 @@ tag_invoke(boost::json::value_to_tag<NoRippleCheckHandler::Input>, boost::json::
     if (jsonObject.contains(JS(ledger_hash)))
         input.ledgerHash = boost::json::value_to<std::string>(jsonObject.at(JS(ledger_hash)));
-    if (jsonObject.contains(JS(ledger_index)))
-        input.ledgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+    if (jsonObject.contains(JS(ledger_index))) {
+        auto const expectedLedgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+        if (expectedLedgerIndex.has_value())
+            input.ledgerIndex = *expectedLedgerIndex;
+    }
     return input;
 }


@@ -109,8 +109,11 @@ tag_invoke(boost::json::value_to_tag<TransactionEntryHandler::Input>, boost::jso
     if (jsonObject.contains(JS(ledger_hash)))
         input.ledgerHash = boost::json::value_to<std::string>(jv.at(JS(ledger_hash)));
-    if (jsonObject.contains(JS(ledger_index)))
-        input.ledgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
+    if (jsonObject.contains(JS(ledger_index))) {
+        auto const expectedLedgerIndex = util::getLedgerIndex(jv.at(JS(ledger_index)));
+        if (expectedLedgerIndex.has_value())
+            input.ledgerIndex = *expectedLedgerIndex;
+    }
     return input;
 }


@@ -177,8 +177,11 @@ tag_invoke(boost::json::value_to_tag<VaultInfoHandler::Input>, boost::json::valu
     if (jsonObject.contains(JS(vault_id)))
         input.vaultID = jsonObject.at(JS(vault_id)).as_string();
-    if (jsonObject.contains(JS(ledger_index)))
-        input.ledgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+    if (jsonObject.contains(JS(ledger_index))) {
+        auto const expectedLedgerIndex = util::getLedgerIndex(jsonObject.at(JS(ledger_index)));
+        if (expectedLedgerIndex.has_value())
+            input.ledgerIndex = *expectedLedgerIndex;
+    }
     return input;
 }


@@ -54,10 +54,10 @@ OnAssert::resetAction()
 void
 OnAssert::defaultAction(std::string_view message)
 {
-    if (LogServiceState::initialized()) {
+    if (LogServiceState::initialized() and LogServiceState::hasSinks()) {
         LOG(LogService::fatal()) << message;
     } else {
-        std::cerr << message;
+        std::cerr << message << std::endl;
     }
     std::exit(EXIT_FAILURE); // std::abort does not flush gcovr output and causes uncovered lines
 }

src/util/Channel.hpp (new file, +382)

@@ -0,0 +1,382 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <boost/asio/any_io_executor.hpp>
#include <boost/asio/experimental/channel.hpp>
#include <boost/asio/experimental/concurrent_channel.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/system/detail/error_code.hpp>
#include <concepts>
#include <cstddef>
#include <memory>
#include <optional>
#include <type_traits>
#include <utility>
namespace util {
#ifdef __clang__
namespace detail {
// Forward declaration for compile-time check
template <typename T>
struct ChannelInstantiated;
} // namespace detail
#endif
/**
* @brief Represents a go-like channel, a multi-producer (Sender) multi-consumer (Receiver) thread-safe data pipe.
* @note Use INSTANTIATE_CHANNEL_FOR_CLANG macro when using this class. See docs at the bottom of the file for more
* details.
*
* @tparam T The type of data the channel transfers
*/
template <typename T>
class Channel {
private:
class ControlBlock {
using InternalChannelType = boost::asio::experimental::concurrent_channel<void(boost::system::error_code, T)>;
boost::asio::any_io_executor executor_;
InternalChannelType ch_;
public:
ControlBlock(auto&& context, std::size_t capacity) : executor_(context.get_executor()), ch_(context, capacity)
{
}
[[nodiscard]] InternalChannelType&
channel()
{
return ch_;
}
void
close()
{
if (not isClosed()) {
ch_.close();
// Workaround for Boost bug: close() alone doesn't cancel pending async operations.
// We must call cancel() to unblock them. The bug also causes cancel() to return
// error_code 0 instead of channel_cancelled, so async operations must check
// isClosed() to detect this case.
// https://github.com/chriskohlhoff/asio/issues/1575
ch_.cancel();
}
}
[[nodiscard]] bool
isClosed() const
{
return not ch_.is_open();
}
};
/**
* @brief This is used to close the channel once either all Senders or all Receivers are destroyed
*/
struct Guard {
std::shared_ptr<ControlBlock> shared;
~Guard()
{
shared->close();
}
};
/**
* @brief The sending end of a channel.
*
* Sender is copyable and movable. The channel remains open as long as at least one Sender exists.
* When all Sender instances are destroyed, the channel is closed and receivers will receive std::nullopt.
*/
class Sender {
std::shared_ptr<ControlBlock> shared_;
std::shared_ptr<Guard> guard_;
public:
/**
* @brief Constructs a Sender from a shared control block.
* @param shared The shared control block managing the channel state
*/
Sender(std::shared_ptr<ControlBlock> shared)
: shared_(std::move(shared)), guard_(std::make_shared<Guard>(shared_)) {};
Sender(Sender&&) = default;
Sender(Sender const&) = default;
Sender&
operator=(Sender&&) = default;
Sender&
operator=(Sender const&) = default;
/**
* @brief Asynchronously sends data through the channel using a coroutine.
*
* Blocks the coroutine until the data is sent or the channel is closed.
*
* @tparam D The type of data to send (must be convertible to T)
* @param data The data to send
* @param yield The Boost.Asio yield context for coroutine suspension
* @return true if the data was sent successfully, false if the channel is closed
*/
template <typename D>
bool
asyncSend(D&& data, boost::asio::yield_context yield)
requires(std::convertible_to<std::remove_cvref_t<D>, std::remove_cvref_t<T>>)
{
boost::system::error_code const ecIn;
boost::system::error_code ecOut;
shared_->channel().async_send(ecIn, std::forward<D>(data), yield[ecOut]);
// Workaround: asio channels bug returns ec=0 on cancel, check isClosed() instead
if (not ecOut and shared_->isClosed())
return false;
return not ecOut;
}
/**
* @brief Asynchronously sends data through the channel using a callback.
*
* The callback is invoked when the send operation completes.
*
* @tparam D The type of data to send (must be convertible to T)
* @param data The data to send
* @param fn Callback function invoked with true if successful, false if the channel is closed
*/
template <typename D>
void
asyncSend(D&& data, std::invocable<bool> auto&& fn)
requires(std::convertible_to<std::remove_cvref_t<D>, std::remove_cvref_t<T>>)
{
boost::system::error_code const ecIn;
shared_->channel().async_send(
ecIn,
std::forward<D>(data),
[fn = std::forward<decltype(fn)>(fn), shared = shared_](boost::system::error_code ec) mutable {
// Workaround: asio channels bug returns ec=0 on cancel, check isClosed() instead
if (not ec and shared->isClosed()) {
fn(false);
return;
}
fn(not ec);
}
);
}
/**
* @brief Attempts to send data through the channel without blocking.
*
* @tparam D The type of data to send (must be convertible to T)
* @param data The data to send
* @return true if the data was sent successfully, false if the channel is full or closed
*/
template <typename D>
bool
trySend(D&& data)
requires(std::convertible_to<std::remove_cvref_t<D>, std::remove_cvref_t<T>>)
{
boost::system::error_code ec;
return shared_->channel().try_send(ec, std::forward<D>(data));
}
};
/**
* @brief The receiving end of a channel.
*
* Receiver is copyable and movable. Multiple receivers can consume from the same channel concurrently.
* When all Receiver instances are destroyed, the channel is closed and senders will fail to send.
*/
class Receiver {
std::shared_ptr<ControlBlock> shared_;
std::shared_ptr<Guard> guard_;
public:
/**
* @brief Constructs a Receiver from a shared control block.
* @param shared The shared control block managing the channel state
*/
Receiver(std::shared_ptr<ControlBlock> shared)
: shared_(std::move(shared)), guard_(std::make_shared<Guard>(shared_)) {};
Receiver(Receiver&&) = default;
Receiver(Receiver const&) = default;
Receiver&
operator=(Receiver&&) = default;
Receiver&
operator=(Receiver const&) = default;
/**
* @brief Attempts to receive data from the channel without blocking.
*
* @return std::optional containing the received value, or std::nullopt if the channel is empty or closed
*/
std::optional<T>
tryReceive()
{
std::optional<T> result;
shared_->channel().try_receive([&result](boost::system::error_code ec, auto&& value) {
if (not ec)
result = std::forward<decltype(value)>(value);
});
return result;
}
/**
* @brief Asynchronously receives data from the channel using a coroutine.
*
* Blocks the coroutine until data is available or the channel is closed.
*
* @param yield The Boost.Asio yield context for coroutine suspension
* @return std::optional containing the received value, or std::nullopt if the channel is closed
*/
[[nodiscard]] std::optional<T>
asyncReceive(boost::asio::yield_context yield)
{
boost::system::error_code ec;
auto value = shared_->channel().async_receive(yield[ec]);
if (ec)
return std::nullopt;
return value;
}
/**
* @brief Asynchronously receives data from the channel using a callback.
*
* The callback is invoked when data is available or the channel is closed.
*
* @param fn Callback function invoked with std::optional containing the value, or std::nullopt if closed
*/
void
asyncReceive(std::invocable<std::optional<std::remove_cvref_t<T>>> auto&& fn)
{
shared_->channel().async_receive(
[fn = std::forward<decltype(fn)>(fn)](boost::system::error_code ec, T&& value) mutable {
if (ec) {
fn(std::optional<T>(std::nullopt));
return;
}
fn(std::make_optional<T>(std::move(value)));
}
);
}
/**
* @brief Checks if the channel is closed.
*
* A channel is closed when all Sender instances have been destroyed.
*
* @return true if the channel is closed, false otherwise
*/
[[nodiscard]] bool
isClosed() const
{
return shared_->isClosed();
}
};
public:
/**
* @brief Factory function to create channel components.
* @param context A supported context type (either io_context or thread_pool)
* @param capacity Size of the internal buffer on the channel
* @return A pair of Sender and Receiver
*/
static std::pair<Sender, Receiver>
create(auto&& context, std::size_t capacity)
{
#ifdef __clang__
static_assert(
util::detail::ChannelInstantiated<T>::value,
"When using Channel<T> with Clang, you must add INSTANTIATE_CHANNEL_FOR_CLANG(T) "
"to one .cpp file. See documentation at the bottom of Channel.hpp for details."
);
#endif
auto shared = std::make_shared<ControlBlock>(std::forward<decltype(context)>(context), capacity);
auto sender = Sender{shared};
auto receiver = Receiver{std::move(shared)};
return {std::move(sender), std::move(receiver)};
}
};
} // namespace util
// ================================================================================================
// Clang/Apple Clang Workaround for Boost.Asio Experimental Channels
// ================================================================================================
//
// IMPORTANT: When using Channel<T> with Clang or Apple Clang, you MUST add the following line
// to ONE .cpp file that uses Channel<T>:
//
// INSTANTIATE_CHANNEL_FOR_CLANG(YourType)
//
// Example:
// // In ChannelTests.cpp or any .cpp file that uses Channel<int>:
// #include "util/Channel.hpp"
// INSTANTIATE_CHANNEL_FOR_CLANG(int)
//
// Why this is needed:
// Boost.Asio's experimental concurrent_channel has a bug where close() doesn't properly cancel
// pending async operations. When using cancellation signals (which we do in our workaround),
// Clang generates vtable references for internal cancellation_handler types but Boost.Asio
// doesn't provide the definitions, causing linker errors:
//
// Undefined symbols for architecture arm64:
// "boost::asio::detail::cancellation_handler<...>::call(boost::asio::cancellation_type)"
// "boost::asio::detail::cancellation_handler<...>::destroy()"
//
// This macro explicitly instantiates the required template specializations.
//
// See: https://github.com/chriskohlhoff/asio/issues/1575
//
#ifdef __clang__
#include <boost/asio/cancellation_signal.hpp>
#include <boost/asio/experimental/channel_traits.hpp>
#include <boost/asio/experimental/detail/channel_service.hpp>
namespace util::detail {
// Tag type used to verify that INSTANTIATE_CHANNEL_FOR_CLANG was called for a given type
template <typename T>
struct ChannelInstantiated : std::false_type {};
} // namespace util::detail
#define INSTANTIATE_CHANNEL_FOR_CLANG(T) \
/* NOLINTNEXTLINE(cppcoreguidelines-virtual-class-destructor) */ \
template class boost::asio::detail::cancellation_handler< \
boost::asio::experimental::detail::channel_service<boost::asio::detail::posix_mutex>:: \
op_cancellation<boost::asio::experimental::channel_traits<>, void(boost::system::error_code, T)>>; \
namespace util::detail { \
template <> \
struct ChannelInstantiated<T> : std::true_type {}; \
}
#else
// No workaround needed for non-Clang compilers
#define INSTANTIATE_CHANNEL_FOR_CLANG(T)
#endif

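`Channel<T>` above layers go-like semantics over asio's experimental `concurrent_channel`. For readers without Boost at hand, here is a minimal stand-alone analog (a sketch; `ToyChannel` is an invented name) illustrating only the close-and-drain semantics documented in the header — blocking send, capacity limits, and the Sender/Receiver `Guard` reference counting are deliberately omitted:

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <optional>

// Minimal analog of the Channel semantics above: send() fails once the channel
// is closed, while receive() first drains buffered values and only then
// reports closure via std::nullopt.
template <typename T>
class ToyChannel {
    std::mutex mtx_;
    std::condition_variable cv_;
    std::deque<T> buf_;
    bool closed_ = false;

public:
    bool
    send(T value)
    {
        std::lock_guard lock{mtx_};
        if (closed_)
            return false;
        buf_.push_back(std::move(value));
        cv_.notify_one();
        return true;
    }

    std::optional<T>
    receive()
    {
        std::unique_lock lock{mtx_};
        cv_.wait(lock, [this] { return closed_ or not buf_.empty(); });
        if (buf_.empty())
            return std::nullopt;  // closed and fully drained
        T value = std::move(buf_.front());
        buf_.pop_front();
        return value;
    }

    void
    close()
    {
        std::lock_guard lock{mtx_};
        closed_ = true;
        cv_.notify_all();
    }
};
```

Unlike the real `Channel`, this sketch has unbounded capacity and an explicit `close()`; in the header above, the `Guard` struct closes the channel automatically when the last Sender (or last Receiver) is destroyed.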

@@ -23,10 +23,13 @@
 #include <boost/json.hpp>
 #include <boost/json/object.hpp>
+#include <xrpl/beast/core/LexicalCast.h>
 #include <algorithm>
 #include <cctype>
 #include <charconv>
 #include <concepts>
+#include <expected>
 #include <stdexcept>
 #include <string>
@@ -96,12 +99,11 @@ removeSecret(boost::json::object const& object)
  *
  * @tparam Type The type to cast to
  * @param value The JSON value to cast
- * @return Value casted to the requested type
- * @throws logic_error if the underlying number is neither int64 nor uint64
+ * @return Value casted to the requested type or an error message
  */
 template <std::integral Type>
-Type
-integralValueAs(boost::json::value const& value)
+std::expected<Type, std::string>
+tryIntegralValueAs(boost::json::value const& value)
 {
     if (value.is_uint64())
         return static_cast<Type>(value.as_uint64());
@@ -109,29 +111,49 @@ integralValueAs(boost::json::value const& value)
     if (value.is_int64())
         return static_cast<Type>(value.as_int64());
-    throw std::logic_error("Value neither uint64 nor int64");
+    return std::unexpected("Value neither uint64 nor int64");
 }
+/**
+ * @brief Detects the type of number stored in value and casts it back to the requested Type.
+ * @note This conversion can possibly cause wrapping around or UB. Use with caution.
+ *
+ * @tparam Type The type to cast to
+ * @param value The JSON value to cast
+ * @return Value casted to the requested type
+ * @throws logic_error if the underlying number is neither int64 nor uint64
+ */
+template <std::integral Type>
+Type
+integralValueAs(boost::json::value const& value)
+{
+    auto expectedResult = tryIntegralValueAs<Type>(value);
+    if (expectedResult.has_value())
+        return *expectedResult;
+    throw std::logic_error(std::move(expectedResult).error());
+}
 /**
  * @brief Extracts ledger index from a JSON value which can be either a number or a string.
  *
  * @param value The JSON value to extract ledger index from
- * @return An optional containing the ledger index if it is a number; std::nullopt otherwise
- * @throws logic_error comes from integralValueAs if the underlying number is neither int64 nor uint64
- * @throws std::invalid_argument or std::out_of_range if the string cannot be converted to a number
+ * @return The extracted ledger index or an error message
  */
-[[nodiscard]] inline std::optional<uint32_t>
+[[nodiscard]] inline std::expected<uint32_t, std::string>
 getLedgerIndex(boost::json::value const& value)
 {
-    std::optional<uint32_t> ledgerIndex;
     if (not value.is_string()) {
-        ledgerIndex = util::integralValueAs<uint32_t>(value);
-    } else if (value.as_string() != "validated") {
-        ledgerIndex = std::stoi(value.as_string().c_str());
+        return tryIntegralValueAs<uint32_t>(value);
     }
-    return ledgerIndex;
+    if (value.as_string() != "validated") {
+        uint32_t ledgerIndex{};
+        if (beast::lexicalCastChecked(ledgerIndex, value.as_string().c_str())) {
+            return ledgerIndex;
+        }
+        return std::unexpected("Invalid ledger index string");
+    }
+    return std::unexpected("'validated' ledger index is requested");
 }
 } // namespace util


@@ -0,0 +1,426 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <boost/signals2/connection.hpp>
#include <boost/signals2/signal.hpp>
#include <boost/signals2/variadic_signal.hpp>
#include <atomic>
#include <concepts>
#include <type_traits>
namespace util {
template <typename T>
concept SomeAtomic = std::same_as<std::remove_cvref_t<T>, std::atomic<std::remove_cvref_t<typename T::value_type>>>;
/**
* @brief Concept defining types that can be observed for changes.
*
* A type is Observable if it satisfies all requirements for being stored
* and monitored in an ObservableValue container:
*
* - Must be equality comparable to detect changes
* - Must be copy constructible for capturing old values in guards
* - Must be move constructible for efficient value updates
*
* @note Copy assignment is intentionally not required since we use move semantics
* for value updates and only need copy construction for change detection.
*/
template <typename T>
concept Observable = std::equality_comparable<T> && std::copy_constructible<T> && std::move_constructible<T>;
namespace impl {
/**
* @brief Base class containing common ObservableValue functionality.
*
* This class contains all the observer management and notification logic
* that is shared between regular and atomic ObservableValue specializations.
*
* @tparam T The value type (for atomic specializations, this is the underlying type, not std::atomic<T>)
*/
template <Observable T>
class ObservableValueBase {
protected:
boost::signals2::signal<void(T const&)> onUpdate_;
public:
virtual ~ObservableValueBase() = default;
/**
* @brief Registers an observer callback for value changes.
* @param fn Callback function/lambda that accepts T const&
* @return Connection object for managing the subscription
*/
boost::signals2::connection
observe(std::invocable<T const&> auto&& fn)
{
return onUpdate_.connect(std::forward<decltype(fn)>(fn));
}
/**
* @brief Checks if there are any active observers.
* @return true if there are observers, false otherwise
*/
[[nodiscard]] bool
hasObservers() const
{
return not onUpdate_.empty();
}
/**
* @brief Forces notification of all observers with the current value.
*
* This method will notify all observers with the current value regardless
* of whether the value has changed since the last notification.
*/
virtual void
forceNotify() = 0;
protected:
/**
* @brief Notifies all observers with the given value.
* @param value The value to send to observers
*/
void
notifyObservers(T const& value)
{
onUpdate_(value);
}
};
} // namespace impl
// Forward declaration
template <typename T>
class ObservableValue;
/**
* @brief An observable value container that notifies observers when the value changes.
*
* ObservableValue wraps a value of type T and provides a mechanism to observe changes to that value.
* When the value is modified (and actually changes), all registered observers are notified.
*
* @tparam T The type of value to observe. Must satisfy the Observable concept.
*
* @par Thread Safety
* - Observer subscription/unsubscription (observe() and connection.disconnect()) are thread-safe
* - Value modification operations (set(), operator=) are NOT thread-safe and require external synchronization
* - Observer callbacks are invoked synchronously on the same thread that triggered the value change
* - If observers need to perform work on different threads, they must handle dispatch themselves
* (e.g., using an async execution context or message queue)
*
* @par Exception Handling
* - If an observer callback throws an exception, the exception will propagate to the caller
* - The value will still be updated even if observers throw exceptions
* - No guarantee is made about whether other observers will be called if one throws
* - It is the caller's responsibility to handle exceptions from observer callbacks
*/
template <Observable T>
requires(not SomeAtomic<T>)
class ObservableValue<T> : public impl::ObservableValueBase<T> {
T value_;
/**
* @brief RAII guard for deferred notification of value changes.
*
* ObservableGuard captures the current value when created and compares it
* with the final value when destroyed. If the values differ, observers
* are notified. This allows for multiple modifications to the value with
* only a single notification at the end.
*
* @note This class is returned by operator->() and should not be used directly.
*/
struct ObservableGuard {
T const oldValue; ///< Value captured at construction time
ObservableValue<T>& ref; ///< Reference to the observable value
/**
* @brief Constructs guard and captures current value.
* @param observable The ObservableValue to guard
*/
ObservableGuard(ObservableValue<T>& observable) : oldValue(observable), ref(observable)
{
}
/**
* @brief Destructor that triggers notification if value changed.
*
* Compares the captured value with the current value. If they differ,
* notifies all observers with the current value.
*/
~ObservableGuard()
{
if (oldValue != ref.value_)
ref.notifyObservers(ref.value_);
}
/**
* @brief Provides mutable access to the underlying value.
* @return Mutable reference to the wrapped value
*/
[[nodiscard]]
operator T&()
{
return ref.value_;
}
};
public:
/**
* @brief Constructs ObservableValue with initial value.
* @param value Initial value (must be convertible to T)
*/
ObservableValue(std::convertible_to<T> auto&& value) : value_{std::forward<decltype(value)>(value)}
{
}
/**
* @brief Constructs ObservableValue with default initial value.
*/
ObservableValue()
requires std::default_initializable<T>
: value_{}
{
}
ObservableValue(ObservableValue const&) = delete;
ObservableValue(ObservableValue&&) = default;
ObservableValue&
operator=(ObservableValue const&) = delete;
ObservableValue&
operator=(ObservableValue&&) = default;
/**
* @brief Assignment operator that updates value and notifies observers.
*
* Updates the stored value and notifies observers if the new value
* differs from the current value (using operator!=).
*
* @param val New value (must be convertible to T)
* @return Reference to this object for chaining
*
* @throws Any exception thrown by observer callbacks will propagate
*/
ObservableValue&
operator=(std::convertible_to<T> auto&& val)
{
set(val);
return *this;
}
/**
* @brief Provides deferred notification access to the value.
*
* Returns an ObservableGuard that allows modification of the value
* with notification deferred until the guard is destroyed.
*
* @return ObservableGuard for deferred notification
*/
[[nodiscard]] ObservableGuard
operator->()
{
return {*this};
}
/**
* @brief Implicit conversion to const reference of the value.
* @return Const reference to the stored value
*/
[[nodiscard]]
operator T const&() const
{
return value_;
}
/**
* @brief Explicitly gets the current value.
* @return Const reference to the stored value
*/
[[nodiscard]] T const&
get() const
{
return value_;
}
/**
* @brief Sets a new value and notifies observers if changed.
*
* Updates the stored value and notifies all observers if the new value
* differs from the current value (using operator!=). If the values are
* equal, no notification occurs.
*
* @param val New value (must be convertible to T)
*
* @throws Any exception thrown by observer callbacks will propagate
*
* @par Thread Safety
* - This method is NOT thread-safe and requires external synchronization for concurrent access
* - Observer callbacks are invoked synchronously on the calling thread
*/
void
set(std::convertible_to<T> auto&& val)
{
if (value_ != val) {
value_ = std::forward<decltype(val)>(val);
this->notifyObservers(value_);
}
}
/**
* @brief Forces notification of all observers with the current value.
*
* This method will notify all observers with the current value regardless
* of whether the value has changed since the last notification.
*/
void
forceNotify() override
{
this->notifyObservers(value_);
}
};
/**
* @brief Partial specialization of ObservableValue for atomic types.
*
* This specialization provides thread-safe observation of atomic values while
* maintaining atomic semantics. It avoids the issues of copying atomic values
* and handles race conditions properly.
*
* @tparam T The underlying type stored in the atomic
*
* @par Thread Safety
* - All operations are thread-safe
 * - Observer notifications run after the atomic update completes, on the thread that performed the change
* - Multiple threads can safely modify and observe the atomic value
*
* @par Performance Considerations
* - Uses atomic compare-and-swap operations for updates
* - Minimizes atomic reads during guard operations
* - Observer notifications happen outside of atomic operations when possible
*/
template <Observable T>
class ObservableValue<std::atomic<T>> : public impl::ObservableValueBase<T> {
std::atomic<T> value_;
public:
/**
* @brief Constructs ObservableValue with initial atomic value.
* @param value Initial value (will be stored in the atomic)
*/
ObservableValue(std::convertible_to<T> auto&& value) : value_{std::forward<decltype(value)>(value)}
{
}
/**
* @brief Constructs ObservableValue with default initial value.
*/
ObservableValue()
requires std::default_initializable<T>
: value_{}
{
}
ObservableValue(ObservableValue const&) = delete;
ObservableValue(ObservableValue&&) = default;
ObservableValue&
operator=(ObservableValue const&) = delete;
ObservableValue&
operator=(ObservableValue&&) = default;
/**
* @brief Assignment operator that updates atomic value and notifies observers.
*
* Uses atomic compare-and-swap to update the value and notifies observers
* only if the value actually changed.
*
* @param val New value
* @return Reference to this object for chaining
*/
ObservableValue&
operator=(std::convertible_to<T> auto&& val)
{
set(std::forward<decltype(val)>(val));
return *this;
}
/**
* @brief Gets the current atomic value.
* @return Current value stored in the atomic
*/
[[nodiscard]] T
get() const
{
return value_.load();
}
/**
* @brief Implicit conversion to the current atomic value.
* @return Current value stored in the atomic
*/
[[nodiscard]]
operator T() const
{
return get();
}
/**
* @brief Sets a new atomic value and notifies observers if changed.
*
* Uses atomic compare-and-swap to update the value. Notifies all observers
* if the value actually changed.
*
* @param val New value
*/
void
set(std::convertible_to<T> auto&& val)
{
T newValue = std::forward<decltype(val)>(val);
T oldValue = value_.load();
// Use compare-and-swap to atomically update
while (!value_.compare_exchange_weak(oldValue, newValue)) {
// compare_exchange_weak updates oldValue with current value on failure
// Continue until we succeed
}
// Notify observers if we actually changed the value
// Note: oldValue now contains the actual previous value that was replaced
if (oldValue != newValue) {
this->notifyObservers(newValue);
}
}
/**
* @brief Forces notification of all observers with the current value.
*
* This method will notify all observers with the current atomic value
* regardless of whether the value has changed since the last notification.
*/
void
forceNotify() override
{
this->notifyObservers(value_.load());
}
};
} // namespace util

View File

@@ -19,6 +19,8 @@
#include "util/Repeat.hpp"
#include <boost/asio/post.hpp>
namespace util {
void
@@ -27,8 +29,11 @@ Repeat::stop()
if (control_->stopping)
return;
control_->stopping = true;
control_->timer.cancel();
boost::asio::post(control_->strand, [control = control_] {
control->stopping = true;
control->timer.cancel();
});
control_->semaphore.acquire();
}

View File

@@ -21,9 +21,11 @@
#include "util/Assert.hpp"
#include <boost/asio/any_io_executor.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/post.hpp>
#include <boost/asio/steady_timer.hpp>
#include <boost/asio/strand.hpp>
#include <atomic>
#include <chrono>
@@ -41,10 +43,11 @@ namespace util {
class Repeat {
struct Control {
boost::asio::steady_timer timer;
boost::asio::strand<boost::asio::any_io_executor> strand;
std::atomic_bool stopping{true};
std::binary_semaphore semaphore{0};
Control(auto& ctx) : timer(ctx)
Control(auto& ctx) : timer(ctx), strand(boost::asio::make_strand(ctx))
{
}
};
@@ -98,15 +101,24 @@ private:
static void
startImpl(std::shared_ptr<Control> control, std::chrono::steady_clock::duration interval, Action&& action)
{
control->timer.expires_after(interval);
control->timer.async_wait([control, interval, action = std::forward<Action>(action)](auto const& ec) mutable {
if (ec or control->stopping) {
boost::asio::post(control->strand, [control, interval, action = std::forward<Action>(action)]() mutable {
if (control->stopping) {
control->semaphore.release();
return;
}
action();
startImpl(std::move(control), interval, std::forward<Action>(action));
control->timer.expires_after(interval);
control->timer.async_wait(
[control, interval, action = std::forward<Action>(action)](auto const& ec) mutable {
if (ec or control->stopping) {
control->semaphore.release();
return;
}
action();
startImpl(std::move(control), interval, std::forward<Action>(action));
}
);
});
}
};

View File

@@ -23,10 +23,13 @@
#include "util/config/ConfigDefinition.hpp"
#include "util/log/Logger.hpp"
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <csignal>
#include <functional>
#include <optional>
#include <mutex>
#include <thread>
#include <utility>
namespace util {
@@ -50,17 +53,11 @@ public:
}
static void
handleSignal(int signal)
handleSignal(int /* signal */)
{
ASSERT(installedHandler != nullptr, "SignalsHandler is not initialized");
installedHandler->stopHandler_(signal);
}
static void
handleSecondSignal(int signal)
{
ASSERT(installedHandler != nullptr, "SignalsHandler is not initialized");
installedHandler->secondSignalHandler_(signal);
installedHandler->signalReceived_ = true;
installedHandler->cv_.notify_one();
}
};
@@ -69,56 +66,109 @@ SignalsHandler* SignalsHandlerStatic::installedHandler = nullptr;
} // namespace impl
SignalsHandler::SignalsHandler(config::ClioConfigDefinition const& config, std::function<void()> forceExitHandler)
: gracefulPeriod_(0)
, context_(1)
, stopHandler_([this, forceExitHandler](int) mutable {
LOG(LogService::info()) << "Got stop signal. Stopping Clio. Graceful period is "
<< std::chrono::duration_cast<std::chrono::milliseconds>(gracefulPeriod_).count()
<< " milliseconds.";
setHandler(impl::SignalsHandlerStatic::handleSecondSignal);
timer_.emplace(context_.scheduleAfter(
gracefulPeriod_, [forceExitHandler = std::move(forceExitHandler)](auto&& stopToken, bool canceled) {
// TODO: Update this after https://github.com/XRPLF/clio/issues/1380
if (not stopToken.isStopRequested() and not canceled) {
LOG(LogService::warn()) << "Force exit at the end of graceful period.";
forceExitHandler();
}
}
));
stopSignal_();
})
, secondSignalHandler_([this, forceExitHandler = std::move(forceExitHandler)](int) {
LOG(LogService::warn()) << "Force exit on second signal.";
forceExitHandler();
cancelTimer();
setHandler();
})
: gracefulPeriod_(util::config::ClioConfigDefinition::toMilliseconds(config.get<float>("graceful_period")))
, forceExitHandler_(std::move(forceExitHandler))
{
impl::SignalsHandlerStatic::registerHandler(*this);
gracefulPeriod_ = util::config::ClioConfigDefinition::toMilliseconds(config.get<float>("graceful_period"));
workerThread_ = std::thread([this]() { runStateMachine(); });
setHandler(impl::SignalsHandlerStatic::handleSignal);
}
SignalsHandler::~SignalsHandler()
{
cancelTimer();
setHandler();
state_ = State::NormalExit;
cv_.notify_one();
if (workerThread_.joinable())
workerThread_.join();
impl::SignalsHandlerStatic::resetHandler(); // This is needed mostly for tests to reset static state
}
void
SignalsHandler::cancelTimer()
SignalsHandler::notifyGracefulShutdownComplete()
{
if (timer_.has_value())
timer_->abort();
if (state_ == State::GracefulShutdown) {
LOG(LogService::info()) << "Graceful shutdown completed successfully.";
state_ = State::NormalExit;
cv_.notify_one();
}
}
void
SignalsHandler::setHandler(void (*handler)(int))
{
for (int const signal : kHANDLED_SIGNALS) {
for (int const signal : kHANDLED_SIGNALS)
std::signal(signal, handler == nullptr ? SIG_DFL : handler);
}
void
SignalsHandler::runStateMachine()
{
while (state_ != State::NormalExit) {
auto currentState = state_.load();
switch (currentState) {
case State::WaitingForSignal: {
{
std::unique_lock<std::mutex> lock(mutex_);
cv_.wait(lock, [this]() { return signalReceived_ or state_ == State::NormalExit; });
}
if (state_ == State::NormalExit)
return;
LOG(
LogService::info()
) << "Got stop signal. Stopping Clio. Graceful period is "
<< std::chrono::duration_cast<std::chrono::milliseconds>(gracefulPeriod_).count() << " milliseconds.";
state_ = State::GracefulShutdown;
signalReceived_ = false;
stopSignal_();
break;
}
case State::GracefulShutdown: {
bool waitResult = false;
{
std::unique_lock<std::mutex> lock(mutex_);
// Wait for either:
// 1. Graceful period to elapse (timeout)
// 2. Another signal (signalReceived_)
// 3. Graceful shutdown completion (state changes to NormalExit)
waitResult = cv_.wait_for(lock, gracefulPeriod_, [this]() {
return signalReceived_ or state_ == State::NormalExit;
});
}
if (state_ == State::NormalExit)
break;
if (signalReceived_) {
LOG(LogService::warn()) << "Force exit on second signal.";
state_ = State::ForceExit;
signalReceived_ = false;
} else if (not waitResult) {
LOG(LogService::warn()) << "Force exit at the end of graceful period.";
state_ = State::ForceExit;
}
break;
}
case State::ForceExit: {
forceExitHandler_();
state_ = State::NormalExit;
break;
}
case State::NormalExit:
return;
}
}
}

View File

@@ -19,22 +19,20 @@
#pragma once
#include "util/async/context/BasicExecutionContext.hpp"
#include "util/config/ConfigDefinition.hpp"
#include "util/log/Logger.hpp"
#include <boost/asio/executor_work_guard.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/steady_timer.hpp>
#include <boost/signals2/signal.hpp>
#include <boost/signals2/variadic_signal.hpp>
#include <atomic>
#include <chrono>
#include <concepts>
#include <condition_variable>
#include <csignal>
#include <cstdlib>
#include <functional>
#include <optional>
#include <mutex>
#include <thread>
namespace util {
namespace impl {
@@ -48,13 +46,22 @@ class SignalsHandlerStatic;
* @note There could be only one instance of this class.
*/
class SignalsHandler {
/**
* @brief States of the signal handler state machine.
*/
enum class State { WaitingForSignal, GracefulShutdown, ForceExit, NormalExit };
std::chrono::steady_clock::duration gracefulPeriod_;
async::PoolExecutionContext context_;
std::optional<async::PoolExecutionContext::ScheduledOperation<void>> timer_;
std::function<void()> forceExitHandler_;
boost::signals2::signal<void()> stopSignal_;
std::function<void(int)> stopHandler_;
std::function<void(int)> secondSignalHandler_;
std::atomic<bool> signalReceived_{false};
std::atomic<State> state_{State::WaitingForSignal};
std::mutex mutex_;
std::condition_variable cv_;
std::thread workerThread_;
friend class impl::SignalsHandlerStatic;
@@ -101,15 +108,16 @@ public:
stopSignal_.connect(static_cast<int>(priority), std::forward<SomeCallback>(callback));
}
/**
* @brief Notify the signal handler that graceful shutdown has completed.
* This allows the handler to transition to NormalExit state.
*/
void
notifyGracefulShutdownComplete();
static constexpr auto kHANDLED_SIGNALS = {SIGINT, SIGTERM};
private:
/**
* @brief Cancel scheduled force exit if any.
*/
void
cancelTimer();
/**
* @brief Set signal handler for handled signals.
*
@@ -118,6 +126,12 @@ private:
static void
setHandler(void (*handler)(int) = nullptr);
/**
* @brief Run the state machine loop in a worker thread.
*/
void
runStateMachine();
static constexpr auto kDEFAULT_FORCE_EXIT_HANDLER = []() { std::exit(EXIT_FAILURE); };
};

View File

@@ -26,7 +26,20 @@ namespace util::build {
#ifndef CLIO_VERSION
#error "CLIO_VERSION must be defined"
#endif
#ifndef GIT_COMMIT_HASH
#error "GIT_COMMIT_HASH must be defined"
#endif
#ifndef GIT_BUILD_BRANCH
#error "GIT_BUILD_BRANCH must be defined"
#endif
#ifndef BUILD_DATE
#error "BUILD_DATE must be defined"
#endif
static constexpr char kVERSION_STRING[] = CLIO_VERSION;
static constexpr char kGIT_COMMIT_HASH[] = GIT_COMMIT_HASH;
static constexpr char kGIT_BUILD_BRANCH[] = GIT_BUILD_BRANCH;
static constexpr char kBUILD_DATE[] = BUILD_DATE;
std::string const&
getClioVersionString()
@@ -42,4 +55,25 @@ getClioFullVersionString()
return value;
}
std::string const&
getGitCommitHash()
{
static std::string const value = kGIT_COMMIT_HASH; // NOLINT(readability-identifier-naming)
return value;
}
std::string const&
getGitBuildBranch()
{
static std::string const value = kGIT_BUILD_BRANCH; // NOLINT(readability-identifier-naming)
return value;
}
std::string const&
getBuildDate()
{
static std::string const value = kBUILD_DATE; // NOLINT(readability-identifier-naming)
return value;
}
} // namespace util::build

Some files were not shown because too many files have changed in this diff.