Compare commits


39 Commits

Author SHA1 Message Date
Sergey Kuznetsov
7558dfc7f5 chore: Commits for 2.5.0 (#2352) 2025-07-22 11:43:18 +01:00
Sergey Kuznetsov
00333a8d16 fix: Handle logger exceptions (#2349) 2025-07-21 17:30:08 +01:00
Sergey Kuznetsov
61c17400fe fix: Fix writing into dead variables in BlockingCache (#2333) 2025-07-21 12:16:46 +01:00
Ayaz Salikhov
d43002b49a ci: Run upload_conan_deps on necessary changes in .github/scripts/conan/ (#2348) 2025-07-18 13:49:18 +01:00
Ayaz Salikhov
30880ad627 ci: Don't run tsan unit tests, build 3rd party for gcc.tsan heavy-arm64 (#2347) 2025-07-18 13:04:53 +01:00
Ayaz Salikhov
25e55ef952 feat: Update to GCC 14.3 (#2344)
Testing in: https://github.com/XRPLF/clio/pull/2345
2025-07-18 12:20:42 +01:00
Ayaz Salikhov
579e6030ca chore: Install apple-clang 17 locally (#2342) 2025-07-18 11:42:56 +01:00
Ayaz Salikhov
d93b23206e chore: Do not hardcode python filename in gcc Dockerfile (#2340) 2025-07-17 18:18:00 +01:00
Ayaz Salikhov
1b63c3c315 chore: Don't hardcode gcc version in ci/Dockerfile (#2337) 2025-07-17 15:52:51 +01:00
Ayaz Salikhov
a8e61204da chore: Don't hardcode xrplf repo when building docker images (#2336) 2025-07-16 13:46:33 +01:00
Ayaz Salikhov
bef24c1387 chore: Add trufflehog security tool (#2334)
The tool works locally and doesn't require an internet connection if
`--no-verification` is passed.
I also tried adding a `secret.cpp` containing a GitHub PAT, and the tool
detected it.
2025-07-15 19:26:14 +01:00
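
The hook described above can also be run by hand; this is the exact invocation the new pre-commit entry uses (it appears verbatim in the `.pre-commit-config.yaml` hunk further down):

```sh
# Scan git history since HEAD; --no-verification keeps the scan fully offline,
# and --fail returns a non-zero exit code when a candidate secret is found.
trufflehog git file://. --since-commit HEAD --no-verification --fail
```
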
Sergey Kuznetsov
d83be17ded chore: Commits for 2.5.0 (#2330) 2025-07-14 15:55:34 +01:00
Sergey Kuznetsov
b6c1e2578b chore: Remove using blocking cache (#2328)
BlockingCache has a bug, so its usage is reverted for now.
2025-07-14 13:47:13 +01:00
Ayaz Salikhov
e0496aff5a fix: Prepare runner in docs workflow (#2325) 2025-07-10 18:41:22 +01:00
Ayaz Salikhov
2f7adfb883 feat: Always use version from tag if available (#2322) 2025-07-10 17:10:31 +01:00
Ayaz Salikhov
0f1895947d feat: Add script to rebuild conan dependencies (#2311) 2025-07-10 15:44:12 +01:00
Ayaz Salikhov
fa693b2aff chore: Unify how we deal with branches (#2320) 2025-07-10 14:16:36 +01:00
Ayaz Salikhov
1825ea701f fix: Mark tags with dash as prerelease (#2319) 2025-07-10 14:16:03 +01:00
Ayaz Salikhov
2ae5b13fb9 refactor: Simplify CMakeLists.txt for sanitizers (#2297) 2025-07-10 14:04:29 +01:00
Ayaz Salikhov
686a732fa8 fix: Link with boost libraries explicitly (#2313) 2025-07-10 12:09:21 +01:00
github-actions[bot]
4919b57466 style: clang-tidy auto fixes (#2317) 2025-07-10 10:50:03 +01:00
Alex Kremer
44d39f335e fix: ASAN heap-buffer-overflow issue in DBHelpers (#2310) 2025-07-09 21:17:25 +01:00
Sergey Kuznetsov
bfe5b52a64 fix: Add sending queue to ng web server (#2273) 2025-07-09 17:29:57 +01:00
Alex Kremer
413b823976 fix: ASAN stack-buffer-overflow in NFTHelpersTest_NFTDataFromLedgerObject (#2306) 2025-07-09 13:43:39 +01:00
Hanlu
e664f0b9ce chore: Remove redundant words in comment (#2309)
remove redundant words in comment

Signed-off-by: liangmulu <liangmulu@outlook.com>
2025-07-09 12:11:49 +01:00
Ayaz Salikhov
907bd7a58f chore: Update conan revisions (#2308)
Took the latest versions from CCI and uploaded them to our Artifactory.
2025-07-09 11:46:34 +01:00
Alex Kremer
f94a9864f0 fix: Usage of compiler flags for RPC module (#2305) 2025-07-08 22:46:26 +01:00
Ayaz Salikhov
36ea0389e2 chore: Bump tools docker image version (#2303) 2025-07-08 18:05:08 +01:00
Ayaz Salikhov
12640de22d ci: Build tools image separately for different archs (#2302) 2025-07-08 17:59:52 +01:00
Ayaz Salikhov
ae4f2d9023 ci: Add mold to tools image (#2301)
Work on: https://github.com/XRPLF/clio/issues/1242
2025-07-08 17:41:22 +01:00
Ayaz Salikhov
b7b61ef61d fix: Temporarily disable dockerhub description update (#2298) 2025-07-08 15:41:03 +01:00
Ayaz Salikhov
f391c3c899 feat: Run sanitizers for Debug builds as well (#2296) 2025-07-08 12:32:16 +01:00
Ayaz Salikhov
562ea41a64 feat: Update to Clang 19 (#2293) 2025-07-08 11:49:11 +01:00
Ayaz Salikhov
687b1e8887 chore: Don't hardcode versions in Dockerfiles and workflows (#2291) 2025-07-03 11:53:53 +01:00
github-actions[bot]
cc506fd094 style: Update pre-commit hooks (#2290)
Update versions of pre-commit hooks to latest version.

Co-authored-by: mathbunnyru <12270691+mathbunnyru@users.noreply.github.com>
2025-07-02 16:36:34 +01:00
Ayaz Salikhov
1fe323190a fix: Make pre-commit autoupdate PRs verified (#2289) 2025-07-02 16:34:16 +01:00
Sergey Kuznetsov
f04d2a97ec chore: Commits for 2.5.0 (#2268) 2025-06-30 11:32:26 +01:00
Sergey Kuznetsov
b8b82e5dd9 chore: Commits for 2.5.0-b3 (#2197) 2025-06-09 11:48:12 +01:00
Sergey Kuznetsov
764601e7fc chore: Commits for 2.5.0-b2 (#2146) 2025-05-20 16:53:06 +01:00
171 changed files with 5330 additions and 3953 deletions

View File

@@ -17,6 +17,9 @@ inputs:
platforms:
description: Platforms to build the image for (e.g. linux/amd64,linux/arm64)
required: true
build_args:
description: List of build-time variables
required: false
dockerhub_repo:
description: DockerHub repository name
@@ -61,13 +64,4 @@ runs:
platforms: ${{ inputs.platforms }}
push: ${{ inputs.push_image == 'true' }}
tags: ${{ steps.meta.outputs.tags }}
- name: Update DockerHub description
if: ${{ inputs.push_image == 'true' && inputs.dockerhub_repo != '' }}
uses: peter-evans/dockerhub-description@432a30c9e07499fd01da9f8a49f0faf9e0ca5b77 # v4.0.2
with:
username: ${{ env.DOCKERHUB_USER }}
password: ${{ env.DOCKERHUB_PW }}
repository: ${{ inputs.dockerhub_repo }}
short-description: ${{ inputs.dockerhub_description }}
readme-filepath: ${{ inputs.directory }}/README.md
build-args: ${{ inputs.build_args }}

View File

@@ -0,0 +1,11 @@
[settings]
arch={{detect_api.detect_arch()}}
build_type=Release
compiler=apple-clang
compiler.cppstd=20
compiler.libcxx=libc++
compiler.version=17
os=Macos
[conf]
grpc/1.50.1:tools.build:cxxflags+=["-Wno-missing-template-arg-list-after-template-kw"]
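
Assuming `init.sh` registers this file under the profile name `apple-clang` (an assumption; the script below only shows which file gets chosen), it can be used like any other Conan profile:

```sh
# Resolve and build missing dependencies with the apple-clang profile
# (profile name is an assumption based on the filename).
conan install . --profile apple-clang --build=missing
```
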

View File

@@ -22,9 +22,6 @@ def generate_matrix():
itertools.product(MACOS_OS, MACOS_CONTAINERS, MACOS_COMPILERS),
):
for sanitizer_ext, build_type in itertools.product(SANITIZER_EXT, BUILD_TYPES):
# libbacktrace doesn't build on arm64 with gcc.tsan
if os == "heavy-arm64" and compiler == "gcc" and sanitizer_ext == ".tsan":
continue
configurations.append(
{
"os": os,

View File

@@ -8,7 +8,11 @@ REPO_DIR="$(cd "$CURRENT_DIR/../../../" && pwd)"
CONAN_DIR="${CONAN_HOME:-$HOME/.conan2}"
PROFILES_DIR="$CONAN_DIR/profiles"
APPLE_CLANG_PROFILE="$CURRENT_DIR/apple-clang.profile"
if [[ -z "$CI" ]]; then
APPLE_CLANG_PROFILE="$CURRENT_DIR/apple-clang-local.profile"
else
APPLE_CLANG_PROFILE="$CURRENT_DIR/apple-clang-ci.profile"
fi
GCC_PROFILE="$REPO_DIR/docker/ci/conan/gcc.profile"
CLANG_PROFILE="$REPO_DIR/docker/ci/conan/clang.profile"
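
A usage sketch of the new branch: GitHub Actions exports `CI=true` on its runners, so the CI profile is picked there and the local one everywhere else (script path as listed in the workflow triggers):

```sh
# Local developer machine (CI unset) -> apple-clang-local.profile
./.github/scripts/conan/init.sh

# CI runner (GitHub Actions sets CI=true) -> apple-clang-ci.profile
CI=true ./.github/scripts/conan/init.sh
```
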

View File

@@ -2,9 +2,9 @@ name: Build
on:
push:
branches: [master, release/*, develop]
branches: [release/*, develop]
pull_request:
branches: [master, release/*, develop]
branches: [release/*, develop]
paths:
- .github/workflows/build.yml

View File

@@ -22,6 +22,11 @@ jobs:
with:
lfs: true
- name: Prepare runner
uses: ./.github/actions/prepare_runner
with:
disable_ccache: true
- name: Create build directory
run: mkdir build_docs

View File

@@ -96,6 +96,7 @@ jobs:
uses: ./.github/workflows/release_impl.yml
with:
overwrite_release: true
prerelease: true
title: "Clio development (nightly) build"
version: nightly
header: >

View File

@@ -40,8 +40,11 @@ jobs:
GH_REPO: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
with:
branch: update/pre-commit-hooks
title: "style: Update pre-commit hooks"
commit-message: "style: Update pre-commit hooks"
committer: Clio CI <skuznetsov@ripple.com>
branch: update/pre-commit-hooks
branch-suffix: timestamp
delete-branch: true
title: "style: Update pre-commit hooks"
body: Update versions of pre-commit hooks to latest version.
reviewers: "godexsoft,kuznetsss,PeterChen13579,mathbunnyru"

View File

@@ -3,8 +3,7 @@ name: Run pre-commit hooks
on:
pull_request:
push:
branches:
- develop
branches: [develop]
workflow_dispatch:
jobs:

View File

@@ -48,6 +48,7 @@ jobs:
uses: ./.github/workflows/release_impl.yml
with:
overwrite_release: false
prerelease: ${{ contains(github.ref_name, '-') }}
title: "${{ github.ref_name}}"
version: "${{ github.ref_name }}"
header: >
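
`contains(github.ref_name, '-')` implements the rule from #2319 that tags containing a dash are prereleases; a shell equivalent of the check, using tag names from the commit list above:

```sh
# Illustration only: a dash in the tag name means prerelease.
for ref_name in 2.5.0 2.5.0-b3; do
  if [[ "$ref_name" == *-* ]]; then
    echo "$ref_name -> prerelease"
  else
    echo "$ref_name -> full release"
  fi
done
```
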

View File

@@ -8,6 +8,11 @@ on:
required: true
type: boolean
prerelease:
description: "Create a prerelease"
required: true
type: boolean
title:
description: "Release title"
required: true
@@ -25,12 +30,12 @@ on:
generate_changelog:
description: "Generate changelog"
required: false
required: true
type: boolean
draft:
description: "Create a draft release"
required: false
required: true
type: boolean
jobs:
@@ -109,7 +114,7 @@ jobs:
shell: bash
run: |
gh release create "${{ inputs.version }}" \
${{ inputs.overwrite_release && '--prerelease' || '' }} \
${{ inputs.prerelease && '--prerelease' || '' }} \
--title "${{ inputs.title }}" \
--target "${GITHUB_SHA}" \
${{ inputs.draft && '--draft' || '' }} \
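
For a prerelease tag the templated command now expands roughly as follows (a sketch; the tag is an example and the trailing flags are truncated in the hunk above):

```sh
gh release create "2.5.0-b3" \
  --prerelease \
  --title "2.5.0-b3" \
  --target "$GITHUB_SHA"
  # ...remaining flags elided in the hunk
```
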

View File

@@ -37,12 +37,9 @@ jobs:
strategy:
fail-fast: false
matrix:
compiler: ["gcc", "clang"]
sanitizer_ext: [".asan", ".tsan", ".ubsan"]
exclude:
# Currently, clang.tsan unit tests hang
- compiler: clang
sanitizer_ext: .tsan
compiler: [gcc, clang]
sanitizer_ext: [.asan, .tsan, .ubsan]
build_type: [Release, Debug]
uses: ./.github/workflows/build_and_test.yml
with:
@@ -50,9 +47,10 @@ jobs:
container: '{ "image": "ghcr.io/xrplf/clio-ci:latest" }'
disable_cache: true
conan_profile: ${{ matrix.compiler }}${{ matrix.sanitizer_ext }}
build_type: Release
build_type: ${{ matrix.build_type }}
static: false
run_unit_tests: true
# Currently, both gcc.tsan and clang.tsan unit tests hang
run_unit_tests: ${{ matrix.sanitizer_ext != '.tsan' }}
run_integration_tests: false
upload_clio_server: false
targets: clio_tests clio_integration_tests

View File

@@ -85,7 +85,7 @@ jobs:
if: env.SANITIZER_IGNORE_ERRORS == 'true' && steps.check_report.outputs.found_report == 'true'
uses: actions/upload-artifact@v4
with:
name: ${{ inputs.conan_profile }}_report
name: sanitizer_report_${{ runner.os }}_${{ inputs.build_type }}_${{ inputs.conan_profile }}
path: .sanitizer-report/*
include-hidden-files: true

View File

@@ -29,12 +29,27 @@ concurrency:
cancel-in-progress: false
env:
GHCR_REPO: ghcr.io/${{ github.repository_owner }}
CLANG_MAJOR_VERSION: 19
GCC_MAJOR_VERSION: 14
GCC_VERSION: 14.3.0
jobs:
repo:
name: Calculate repo name
runs-on: ubuntu-latest
outputs:
GHCR_REPO: ${{ steps.set-ghcr-repo.outputs.GHCR_REPO }}
steps:
- name: Set GHCR_REPO
id: set-ghcr-repo
run: |
echo "GHCR_REPO=$(echo ghcr.io/${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> ${GITHUB_OUTPUT}
gcc-amd64:
name: Build and push GCC docker image (amd64)
runs-on: heavy
needs: repo
steps:
- uses: actions/checkout@v4
@@ -53,22 +68,26 @@ jobs:
DOCKERHUB_PW: ${{ secrets.DOCKERHUB_PW }}
with:
images: |
${{ env.GHCR_REPO }}/clio-gcc
${{ needs.repo.outputs.GHCR_REPO }}/clio-gcc
rippleci/clio_gcc
push_image: ${{ github.event_name != 'pull_request' }}
directory: docker/compilers/gcc
tags: |
type=raw,value=amd64-latest
type=raw,value=amd64-12
type=raw,value=amd64-12.3.0
type=raw,value=amd64-${{ env.GCC_MAJOR_VERSION }}
type=raw,value=amd64-${{ env.GCC_VERSION }}
type=raw,value=amd64-${{ github.sha }}
platforms: linux/amd64
build_args: |
GCC_MAJOR_VERSION=${{ env.GCC_MAJOR_VERSION }}
GCC_VERSION=${{ env.GCC_VERSION }}
dockerhub_repo: rippleci/clio_gcc
dockerhub_description: GCC compiler for XRPLF/clio.
gcc-arm64:
name: Build and push GCC docker image (arm64)
runs-on: heavy-arm64
needs: repo
steps:
- uses: actions/checkout@v4
@@ -87,23 +106,26 @@ jobs:
DOCKERHUB_PW: ${{ secrets.DOCKERHUB_PW }}
with:
images: |
${{ env.GHCR_REPO }}/clio-gcc
${{ needs.repo.outputs.GHCR_REPO }}/clio-gcc
rippleci/clio_gcc
push_image: ${{ github.event_name != 'pull_request' }}
directory: docker/compilers/gcc
tags: |
type=raw,value=arm64-latest
type=raw,value=arm64-12
type=raw,value=arm64-12.3.0
type=raw,value=arm64-${{ env.GCC_MAJOR_VERSION }}
type=raw,value=arm64-${{ env.GCC_VERSION }}
type=raw,value=arm64-${{ github.sha }}
platforms: linux/arm64
build_args: |
GCC_MAJOR_VERSION=${{ env.GCC_MAJOR_VERSION }}
GCC_VERSION=${{ env.GCC_VERSION }}
dockerhub_repo: rippleci/clio_gcc
dockerhub_description: GCC compiler for XRPLF/clio.
gcc-merge:
name: Merge and push multi-arch GCC docker image
runs-on: heavy
needs: [gcc-amd64, gcc-arm64]
needs: [repo, gcc-amd64, gcc-arm64]
steps:
- uses: actions/checkout@v4
@@ -132,18 +154,14 @@ jobs:
username: ${{ secrets.DOCKERHUB_USER }}
password: ${{ secrets.DOCKERHUB_PW }}
- name: Make GHCR_REPO lowercase
run: |
echo "GHCR_REPO_LC=$(echo ${{env.GHCR_REPO}} | tr '[:upper:]' '[:lower:]')" >> ${GITHUB_ENV}
- name: Create and push multi-arch manifest
if: github.event_name != 'pull_request' && steps.changed-files.outputs.any_changed == 'true'
run: |
for image in ${{ env.GHCR_REPO_LC }}/clio-gcc rippleci/clio_gcc; do
for image in ${{ needs.repo.outputs.GHCR_REPO }}/clio-gcc rippleci/clio_gcc; do
docker buildx imagetools create \
-t $image:latest \
-t $image:12 \
-t $image:12.3.0 \
-t $image:${{ env.GCC_MAJOR_VERSION }} \
-t $image:${{ env.GCC_VERSION }} \
-t $image:${{ github.sha }} \
$image:arm64-latest \
$image:amd64-latest
@@ -152,6 +170,7 @@ jobs:
clang:
name: Build and push Clang docker image
runs-on: heavy
needs: repo
steps:
- uses: actions/checkout@v4
@@ -170,21 +189,24 @@ jobs:
DOCKERHUB_PW: ${{ secrets.DOCKERHUB_PW }}
with:
images: |
${{ env.GHCR_REPO }}/clio-clang
${{ needs.repo.outputs.GHCR_REPO }}/clio-clang
rippleci/clio_clang
push_image: ${{ github.event_name != 'pull_request' }}
directory: docker/compilers/clang
tags: |
type=raw,value=latest
type=raw,value=16
type=raw,value=${{ env.CLANG_MAJOR_VERSION }}
type=raw,value=${{ github.sha }}
platforms: linux/amd64,linux/arm64
build_args: |
CLANG_MAJOR_VERSION=${{ env.CLANG_MAJOR_VERSION }}
dockerhub_repo: rippleci/clio_clang
dockerhub_description: Clang compiler for XRPLF/clio.
tools:
name: Build and push tools docker image
tools-amd64:
name: Build and push tools docker image (amd64)
runs-on: heavy
needs: [repo, gcc-merge]
steps:
- uses: actions/checkout@v4
@@ -201,18 +223,87 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
images: |
${{ env.GHCR_REPO }}/clio-tools
${{ needs.repo.outputs.GHCR_REPO }}/clio-tools
push_image: ${{ github.event_name != 'pull_request' }}
directory: docker/tools
tags: |
type=raw,value=latest
type=raw,value=${{ github.sha }}
platforms: linux/amd64,linux/arm64
type=raw,value=amd64-latest
type=raw,value=amd64-${{ github.sha }}
platforms: linux/amd64
build_args: |
GHCR_REPO=${{ needs.repo.outputs.GHCR_REPO }}
GCC_VERSION=${{ env.GCC_VERSION }}
tools-arm64:
name: Build and push tools docker image (arm64)
runs-on: heavy-arm64
needs: [repo, gcc-merge]
steps:
- uses: actions/checkout@v4
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: "docker/tools/**"
- uses: ./.github/actions/build_docker_image
if: steps.changed-files.outputs.any_changed == 'true'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
images: |
${{ needs.repo.outputs.GHCR_REPO }}/clio-tools
push_image: ${{ github.event_name != 'pull_request' }}
directory: docker/tools
tags: |
type=raw,value=arm64-latest
type=raw,value=arm64-${{ github.sha }}
platforms: linux/arm64
build_args: |
GHCR_REPO=${{ needs.repo.outputs.GHCR_REPO }}
GCC_VERSION=${{ env.GCC_VERSION }}
tools-merge:
name: Merge and push multi-arch tools docker image
runs-on: heavy
needs: [repo, tools-amd64, tools-arm64]
steps:
- uses: actions/checkout@v4
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
with:
files: "docker/tools/**"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create and push multi-arch manifest
if: github.event_name != 'pull_request' && steps.changed-files.outputs.any_changed == 'true'
run: |
image=${{ needs.repo.outputs.GHCR_REPO }}/clio-tools
docker buildx imagetools create \
-t $image:latest \
-t $image:${{ github.sha }} \
$image:arm64-latest \
$image:amd64-latest
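
Once the per-arch jobs have pushed `amd64-latest` and `arm64-latest`, `imagetools create` combines them into a single multi-arch manifest; a hedged way to verify the result after a push:

```sh
# The merged manifest should list both linux/amd64 and linux/arm64 entries.
docker buildx imagetools inspect ghcr.io/xrplf/clio-tools:latest
```
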
ci:
name: Build and push CI docker image
runs-on: heavy
needs: [gcc-merge, clang, tools]
needs: [repo, gcc-merge, clang, tools-merge]
steps:
- uses: actions/checkout@v4
@@ -223,14 +314,19 @@ jobs:
DOCKERHUB_PW: ${{ secrets.DOCKERHUB_PW }}
with:
images: |
${{ env.GHCR_REPO }}/clio-ci
${{ needs.repo.outputs.GHCR_REPO }}/clio-ci
rippleci/clio_ci
push_image: ${{ github.event_name != 'pull_request' }}
directory: docker/ci
tags: |
type=raw,value=latest
type=raw,value=gcc_12_clang_16
type=raw,value=gcc_${{ env.GCC_MAJOR_VERSION }}_clang_${{ env.CLANG_MAJOR_VERSION }}
type=raw,value=${{ github.sha }}
platforms: linux/amd64,linux/arm64
build_args: |
GHCR_REPO=${{ needs.repo.outputs.GHCR_REPO }}
CLANG_MAJOR_VERSION=${{ env.CLANG_MAJOR_VERSION }}
GCC_MAJOR_VERSION=${{ env.GCC_MAJOR_VERSION }}
GCC_VERSION=${{ env.GCC_VERSION }}
dockerhub_repo: rippleci/clio_ci
dockerhub_description: CI image for XRPLF/clio.

View File

@@ -11,28 +11,26 @@ on:
default: false
type: boolean
pull_request:
branches:
- develop
branches: [develop]
paths:
- .github/workflows/upload_conan_deps.yml
- .github/actions/generate/action.yml
- .github/actions/prepare_runner/action.yml
- .github/scripts/conan/generate_matrix.py
- .github/scripts/conan/init.sh
- ".github/scripts/conan/**"
- "!.github/scripts/conan/apple-clang-local.profile"
- conanfile.py
- conan.lock
push:
branches:
- develop
branches: [develop]
paths:
- .github/workflows/upload_conan_deps.yml
- .github/actions/generate/action.yml
- .github/actions/prepare_runner/action.yml
- .github/scripts/conan/generate_matrix.py
- .github/scripts/conan/init.sh
- ".github/scripts/conan/**"
- "!.github/scripts/conan/apple-clang-local.profile"
- conanfile.py
- conan.lock

View File

@@ -26,12 +26,12 @@ repos:
# Autoformat: YAML, JSON, Markdown, etc.
- repo: https://github.com/rbubley/mirrors-prettier
rev: 787fb9f542b140ba0b2aced38e6a3e68021647a3 # frozen: v3.5.3
rev: 5ba47274f9b181bce26a5150a725577f3c336011 # frozen: v3.6.2
hooks:
- id: prettier
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: 586c3ea3f51230da42bab657c6a32e9e66c364f0 # frozen: v0.44.0
rev: 192ad822316c3a22fb3d3cc8aa6eafa0b8488360 # frozen: v0.45.0
hooks:
- id: markdownlint-fix
exclude: LICENSE.md
@@ -55,6 +55,12 @@ repos:
--ignore-words=pre-commit-hooks/codespell_ignore.txt,
]
- repo: https://github.com/trufflesecurity/trufflehog
rev: 6641d4ba5b684fffe195b9820345de1bf19f3181 # frozen: v3.89.2
hooks:
- id: trufflehog
entry: trufflehog git file://. --since-commit HEAD --no-verification --fail
# Running some C++ hooks before clang-format
# to ensure that the style is consistent.
- repo: local
@@ -80,7 +86,7 @@ repos:
language: script
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: f9a52e87b6cdcb01b0a62b8611d9ba9f2dad0067 # frozen: v19.1.7
rev: 6b9072cd80691b1b48d80046d884409fb1d962d1 # frozen: v20.1.7
hooks:
- id: clang-format
args: [--style=file]
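
With the hooks pinned above, the new scanner can be exercised on its own; a usage sketch via pre-commit itself:

```sh
# Run only the newly added trufflehog hook (hook id from the config above).
pre-commit run trufflehog --all-files
```
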

View File

@@ -69,15 +69,17 @@ endif ()
# Enable selected sanitizer if enabled via `san`
if (san)
set(SUPPORTED_SANITIZERS "address" "thread" "memory" "undefined")
list(FIND SUPPORTED_SANITIZERS "${san}" INDEX)
if (INDEX EQUAL -1)
if (NOT san IN_LIST SUPPORTED_SANITIZERS)
message(FATAL_ERROR "Error: Unsupported sanitizer '${san}'. Supported values are: ${SUPPORTED_SANITIZERS}.")
endif ()
target_compile_options(
clio_options INTERFACE # Sanitizers recommend minimum of -O1 for reasonable performance
$<$<CONFIG:Debug>:-O1> ${SAN_FLAG} -fno-omit-frame-pointer
)
# Sanitizers recommend minimum of -O1 for reasonable performance so we enable it for debug builds
set(SAN_OPTIMIZATION_FLAG "")
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
set(SAN_OPTIMIZATION_FLAG -O1)
endif ()
target_compile_options(clio_options INTERFACE ${SAN_OPTIMIZATION_FLAG} ${SAN_FLAG} -fno-omit-frame-pointer)
target_compile_definitions(
clio_options INTERFACE $<$<STREQUAL:${san},address>:SANITIZER=ASAN> $<$<STREQUAL:${san},thread>:SANITIZER=TSAN>
$<$<STREQUAL:${san},memory>:SANITIZER=MSAN> $<$<STREQUAL:${san},undefined>:SANITIZER=UBSAN>
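
The simplified membership check and the Debug-only `-O1` take effect when configuring with the `san` cache variable; a configure sketch using one of the supported values listed above:

```sh
# Debug build with AddressSanitizer; `san` accepts address, thread, memory, undefined.
cmake -S . -B build -DCMAKE_BUILD_TYPE=Debug -Dsan=address
```
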

View File

@@ -4,39 +4,42 @@
find_package(Git REQUIRED)
set(GIT_COMMAND rev-parse --short HEAD)
set(GIT_COMMAND describe --tags --exact-match)
execute_process(
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} WORKING_DIRECTORY ${CMAKE_SOURCE_DIR} OUTPUT_VARIABLE REV
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND}
WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
OUTPUT_VARIABLE TAG
RESULT_VARIABLE RC
OUTPUT_STRIP_TRAILING_WHITESPACE
)
set(GIT_COMMAND branch --show-current)
execute_process(
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} WORKING_DIRECTORY ${CMAKE_SOURCE_DIR} OUTPUT_VARIABLE BRANCH
OUTPUT_STRIP_TRAILING_WHITESPACE
)
if (RC EQUAL 0)
# if we are on a tag, use the tag name
set(CLIO_VERSION "${TAG}")
set(DOC_CLIO_VERSION "${TAG}")
else ()
# if not, use YYYYMMDDHMS-<branch>-<git-rev>
if (BRANCH STREQUAL "")
set(BRANCH "dev")
endif ()
if (NOT (BRANCH MATCHES master OR BRANCH MATCHES release/*)) # for develop and any other branch name
# YYYYMMDDHMS-<branch>-<git-rev>
set(GIT_COMMAND show -s --date=format:%Y%m%d%H%M%S --format=%cd)
execute_process(
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR} OUTPUT_VARIABLE DATE
OUTPUT_STRIP_TRAILING_WHITESPACE
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} WORKING_DIRECTORY ${CMAKE_SOURCE_DIR} OUTPUT_VARIABLE DATE
OUTPUT_STRIP_TRAILING_WHITESPACE COMMAND_ERROR_IS_FATAL ANY
)
set(GIT_COMMAND branch --show-current)
execute_process(
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} WORKING_DIRECTORY ${CMAKE_SOURCE_DIR} OUTPUT_VARIABLE BRANCH
OUTPUT_STRIP_TRAILING_WHITESPACE COMMAND_ERROR_IS_FATAL ANY
)
set(GIT_COMMAND rev-parse --short HEAD)
execute_process(
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} WORKING_DIRECTORY ${CMAKE_SOURCE_DIR} OUTPUT_VARIABLE REV
OUTPUT_STRIP_TRAILING_WHITESPACE COMMAND_ERROR_IS_FATAL ANY
)
set(CLIO_VERSION "${DATE}-${BRANCH}-${REV}")
set(DOC_CLIO_VERSION "develop")
else ()
set(GIT_COMMAND describe --tags)
execute_process(
COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} WORKING_DIRECTORY ${CMAKE_SOURCE_DIR} OUTPUT_VARIABLE CLIO_TAG_VERSION
OUTPUT_STRIP_TRAILING_WHITESPACE
)
set(CLIO_VERSION "${CLIO_TAG_VERSION}")
set(DOC_CLIO_VERSION "${CLIO_TAG_VERSION}")
endif ()
if (CMAKE_BUILD_TYPE MATCHES Debug)
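
The rewritten version logic boils down to the git commands shown in the hunk; a shell rendition of the same derivation (exact tag wins, otherwise timestamp-branch-rev):

```sh
# On an exact tag (e.g. 2.5.0) print the tag; otherwise YYYYMMDDHHMMSS-<branch>-<rev>.
git describe --tags --exact-match 2>/dev/null || \
  echo "$(git show -s --date=format:%Y%m%d%H%M%S --format=%cd)-$(git branch --show-current)-$(git rev-parse --short HEAD)"
```
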

View File

@@ -1,47 +1,46 @@
{
"version": "0.5",
"requires": [
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1750263732.782",
"xxhash/0.8.2#7856c968c985b2981b707ee8f2413b2b%1750263730.908",
"xrpl/2.5.0#7880d1696f11fceb1d498570f1a184c8%1751035267.743",
"sqlite3/3.47.0#7a0904fd061f5f8a2366c294f9387830%1750263721.79",
"soci/4.0.3#a9f8d773cd33e356b5879a4b0564f287%1750263717.455",
"re2/20230301#dfd6e2bf050eb90ddd8729cfb4c844a4%1750263715.145",
"rapidjson/cci.20220822#1b9d8c2256876a154172dc5cfbe447c6%1750263713.526",
"protobuf/3.21.12#d927114e28de9f4691a6bbcdd9a529d1%1750263698.841",
"openssl/1.1.1v#216374e4fb5b2e0f5ab1fb6f27b5b434%1750263685.885",
"nudb/2.0.8#63990d3e517038e04bf529eb8167f69f%1750263683.814",
"minizip/1.2.13#9e87d57804bd372d6d1e32b1871517a3%1750263681.745",
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1750263679.891",
"libuv/1.46.0#78565d142ac7102776256328a26cdf60%1750263677.819",
"libiconv/1.17#1ae2f60ab5d08de1643a22a81b360c59%1750257497.552",
"libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1750263675.748",
"libarchive/3.7.6#e0453864b2a4d225f06b3304903cb2b7%1750263671.05",
"http_parser/2.9.4#98d91690d6fd021e9e624218a85d9d97%1750263668.751",
"gtest/1.14.0#f8f0757a574a8dd747d16af62d6eb1b7%1750263666.833",
"grpc/1.50.1#02291451d1e17200293a409410d1c4e1%1750263646.614",
"fmt/11.2.0#579bb2cdf4a7607621beea4eb4651e0f%1746298708.362",
"fmt/10.1.1#021e170cf81db57da82b5f737b6906c1%1750263644.741",
"date/3.0.3#cf28fe9c0aab99fe12da08aa42df65e1%1750263643.099",
"cassandra-cpp-driver/2.17.0#e50919efac8418c26be6671fd702540a%1750263632.157",
"c-ares/1.34.5#b78b91e7cfb1f11ce777a285bbf169c6%1750263630.06",
"bzip2/1.0.8#00b4a4658791c1f06914e087f0e792f5%1750263627.95",
"boost/1.83.0#8eb22f36ddfb61f54bbc412c4555bd66%1750263616.444",
"benchmark/1.8.3#1a2ce62c99e2b3feaa57b1f0c15a8c46%1724323740.181",
"abseil/20230802.1#f0f91485b111dc9837a68972cb19ca7b%1750263609.776"
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1752006674.465",
"xxhash/0.8.2#7856c968c985b2981b707ee8f2413b2b%1752006674.334",
"xrpl/2.5.0#7880d1696f11fceb1d498570f1a184c8%1752006708.218",
"sqlite3/3.47.0#7a0904fd061f5f8a2366c294f9387830%1752006674.338",
"soci/4.0.3#a9f8d773cd33e356b5879a4b0564f287%1752006674.465",
"re2/20230301#dfd6e2bf050eb90ddd8729cfb4c844a4%1752006674.077",
"rapidjson/cci.20220822#1b9d8c2256876a154172dc5cfbe447c6%1752006673.227",
"protobuf/3.21.12#d927114e28de9f4691a6bbcdd9a529d1%1752006673.172",
"openssl/1.1.1v#216374e4fb5b2e0f5ab1fb6f27b5b434%1752006673.069",
"nudb/2.0.8#63990d3e517038e04bf529eb8167f69f%1752006673.862",
"minizip/1.2.13#9e87d57804bd372d6d1e32b1871517a3%1752006672.983",
"lz4/1.10.0#59fc63cac7f10fbe8e05c7e62c2f3504%1752006672.825",
"libuv/1.46.0#78565d142ac7102776256328a26cdf60%1752006672.827",
"libiconv/1.17#1e65319e945f2d31941a9d28cc13c058%1752006672.826",
"libbacktrace/cci.20210118#a7691bfccd8caaf66309df196790a5a1%1752006672.822",
"libarchive/3.7.6#e0453864b2a4d225f06b3304903cb2b7%1752006672.917",
"http_parser/2.9.4#98d91690d6fd021e9e624218a85d9d97%1752006672.658",
"gtest/1.14.0#f8f0757a574a8dd747d16af62d6eb1b7%1752006671.555",
"grpc/1.50.1#02291451d1e17200293a409410d1c4e1%1752006671.777",
"fmt/11.2.0#579bb2cdf4a7607621beea4eb4651e0f%1752006671.557",
"date/3.0.3#cf28fe9c0aab99fe12da08aa42df65e1%1752006671.553",
"cassandra-cpp-driver/2.17.0#e50919efac8418c26be6671fd702540a%1752006671.654",
"c-ares/1.34.5#b78b91e7cfb1f11ce777a285bbf169c6%1752006671.554",
"bzip2/1.0.8#00b4a4658791c1f06914e087f0e792f5%1752006671.549",
"boost/1.83.0#5bcb2a14a35875e328bf312e080d3562%1752006671.557",
"benchmark/1.8.3#1a2ce62c99e2b3feaa57b1f0c15a8c46%1752006671.408",
"abseil/20230802.1#f0f91485b111dc9837a68972cb19ca7b%1752006671.555"
],
"build_requires": [
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1750263732.782",
"protobuf/3.21.12#d927114e28de9f4691a6bbcdd9a529d1%1750263698.841",
"protobuf/3.21.9#64ce20e1d9ea24f3d6c504015d5f6fa8%1750263690.822",
"cmake/3.31.7#57c3e118bcf267552c0ea3f8bee1e7d5%1749863707.208",
"b2/5.3.2#7b5fabfe7088ae933fb3e78302343ea0%1750263614.565"
"zlib/1.3.1#b8bc2603263cf7eccbd6e17e66b0ed76%1752006674.465",
"protobuf/3.21.12#d927114e28de9f4691a6bbcdd9a529d1%1752006673.172",
"protobuf/3.21.9#64ce20e1d9ea24f3d6c504015d5f6fa8%1752006673.173",
"cmake/3.31.7#57c3e118bcf267552c0ea3f8bee1e7d5%1752006671.64",
"b2/5.3.2#7b5fabfe7088ae933fb3e78302343ea0%1752006671.407"
],
"python_requires": [],
"overrides": {
"boost/1.83.0": [
null,
"boost/1.83.0#8eb22f36ddfb61f54bbc412c4555bd66"
"boost/1.83.0#5bcb2a14a35875e328bf312e080d3562"
],
"protobuf/3.21.9": [
null,

View File

@@ -1,7 +1,11 @@
FROM ghcr.io/xrplf/clio-gcc:12.3.0 AS clio-gcc
FROM ghcr.io/xrplf/clio-tools:latest AS clio-tools
ARG GHCR_REPO=invalid
ARG CLANG_MAJOR_VERSION=invalid
ARG GCC_VERSION=invalid
FROM ghcr.io/xrplf/clio-clang:16
FROM ${GHCR_REPO}/clio-gcc:${GCC_VERSION} AS clio-gcc
FROM ${GHCR_REPO}/clio-tools:latest AS clio-tools
FROM ${GHCR_REPO}/clio-clang:${CLANG_MAJOR_VERSION}
ARG DEBIAN_FRONTEND=noninteractive
@@ -56,29 +60,33 @@ RUN apt-get update \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install gcc-12 and make ldconfig aware of the new libstdc++ location (for gcc)
ARG GCC_MAJOR_VERSION=invalid
# Install custom-built gcc and make ldconfig aware of the new libstdc++ location (for gcc)
# Note: Clang is using libc++ instead
COPY --from=clio-gcc /gcc12.deb /
COPY --from=clio-gcc /gcc${GCC_MAJOR_VERSION}.deb /
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
binutils \
libc6-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& dpkg -i /gcc12.deb \
&& rm -rf /gcc12.deb \
&& dpkg -i /gcc${GCC_MAJOR_VERSION}.deb \
&& rm -rf /gcc${GCC_MAJOR_VERSION}.deb \
&& ldconfig
# Rewire to use gcc-12 as default compiler
RUN update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-12 100 \
&& update-alternatives --install /usr/bin/c++ c++ /usr/bin/g++-12 100 \
&& update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-12 100 \
&& update-alternatives --install /usr/bin/cc cc /usr/bin/gcc-12 100 \
&& update-alternatives --install /usr/bin/gcov gcov /usr/bin/gcov-12 100 \
&& update-alternatives --install /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-12 100 \
&& update-alternatives --install /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-12 100
# Rewire to use our custom-built gcc as default compiler
RUN update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-${GCC_MAJOR_VERSION} 100 \
&& update-alternatives --install /usr/bin/c++ c++ /usr/bin/g++-${GCC_MAJOR_VERSION} 100 \
&& update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-${GCC_MAJOR_VERSION} 100 \
&& update-alternatives --install /usr/bin/cc cc /usr/bin/gcc-${GCC_MAJOR_VERSION} 100 \
&& update-alternatives --install /usr/bin/gcov gcov /usr/bin/gcov-${GCC_MAJOR_VERSION} 100 \
&& update-alternatives --install /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-${GCC_MAJOR_VERSION} 100 \
&& update-alternatives --install /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-${GCC_MAJOR_VERSION} 100
COPY --from=clio-tools \
/usr/local/bin/mold \
/usr/local/bin/ld.mold \
/usr/local/bin/ccache \
/usr/local/bin/doxygen \
/usr/local/bin/ClangBuildAnalyzer \

View File

@@ -6,13 +6,14 @@ It is used in [Clio Github Actions](https://github.com/XRPLF/clio/actions) but c
The image is based on Ubuntu 20.04 and contains:
- ccache 4.11.3
- clang 16.0.6
- Clang 19
- ClangBuildAnalyzer 1.6.0
- conan 2.17.0
- doxygen 1.12
- gcc 12.3.0
- Conan 2.17.0
- Doxygen 1.12
- GCC 14.3.0
- gh 2.74
- git-cliff 2.9.1
- mold 2.40.1
- and some other useful tools
Conan is set up to build Clio without any additional steps.

View File

@@ -4,8 +4,9 @@ build_type=Release
compiler=clang
compiler.cppstd=20
compiler.libcxx=libc++
compiler.version=16
compiler.version=19
os=Linux
[conf]
tools.build:compiler_executables={"c": "/usr/bin/clang-16", "cpp": "/usr/bin/clang++-16"}
tools.build:compiler_executables={"c": "/usr/bin/clang-19", "cpp": "/usr/bin/clang++-19"}
grpc/1.50.1:tools.build:cxxflags+=["-Wno-missing-template-arg-list-after-template-kw"]

View File

@@ -4,8 +4,8 @@ build_type=Release
compiler=gcc
compiler.cppstd=20
compiler.libcxx=libstdc++11
compiler.version=12
compiler.version=14
os=Linux
[conf]
tools.build:compiler_executables={"c": "/usr/bin/gcc-12", "cpp": "/usr/bin/g++-12"}
tools.build:compiler_executables={"c": "/usr/bin/gcc-14", "cpp": "/usr/bin/g++-14"}

View File

@@ -8,8 +8,6 @@ SHELL ["/bin/bash", "-c"]
USER root
WORKDIR /root
ARG CLANG_VERSION=16
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
wget \
@@ -18,13 +16,17 @@ RUN apt-get update \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ARG CLANG_MAJOR_VERSION=invalid
# Bump this version to force rebuild of the image
ARG BUILD_VERSION=0
RUN wget --progress=dot:giga https://apt.llvm.org/llvm.sh \
&& chmod +x llvm.sh \
&& ./llvm.sh ${CLANG_VERSION} \
&& ./llvm.sh ${CLANG_MAJOR_VERSION} \
&& rm -rf llvm.sh \
&& apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
libc++-${CLANG_VERSION}-dev \
libc++abi-${CLANG_VERSION}-dev \
libc++-${CLANG_MAJOR_VERSION}-dev \
libc++abi-${CLANG_MAJOR_VERSION}-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*

View File

@@ -1,16 +1,14 @@
ARG UBUNTU_VERSION=20.04
ARG GCC_MAJOR_VERSION=12
ARG GCC_MAJOR_VERSION=invalid
FROM ubuntu:$UBUNTU_VERSION AS build
ARG UBUNTU_VERSION
ARG GCC_MAJOR_VERSION
ARG GCC_MINOR_VERSION=3
ARG GCC_PATCH_VERSION=0
ARG GCC_VERSION=${GCC_MAJOR_VERSION}.${GCC_MINOR_VERSION}.${GCC_PATCH_VERSION}
ARG BUILD_VERSION=6
ARG BUILD_VERSION=0
ARG DEBIAN_FRONTEND=noninteractive
ARG TARGETARCH
@@ -27,6 +25,8 @@ RUN apt-get update \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ARG GCC_VERSION
WORKDIR /
RUN wget --progress=dot:giga https://gcc.gnu.org/pub/gcc/releases/gcc-$GCC_VERSION/gcc-$GCC_VERSION.tar.gz \
&& tar xf gcc-$GCC_VERSION.tar.gz
@@ -68,12 +68,13 @@ RUN /gcc-$GCC_VERSION/configure \
--enable-checking=release
RUN make -j "$(nproc)"
RUN make install-strip DESTDIR=/gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION
RUN make install-strip DESTDIR=/gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION \
&& mkdir -p /gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/usr/share/gdb/auto-load/usr/lib64 \
RUN export GDB_AUTOLOAD_DIR="/gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/usr/share/gdb/auto-load/usr/lib64" \
&& mkdir -p "$GDB_AUTOLOAD_DIR" \
&& mv \
/gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/usr/lib64/libstdc++.so.6.0.30-gdb.py \
/gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/usr/share/gdb/auto-load/usr/lib64/libstdc++.so.6.0.30-gdb.py
/gcc-$GCC_VERSION-$BUILD_VERSION-ubuntu-$UBUNTU_VERSION/usr/lib64/libstdc++.so.*-gdb.py \
$GDB_AUTOLOAD_DIR/
# Generate deb
WORKDIR /

View File

@@ -1,4 +1,4 @@
Package: gcc-12-ubuntu-UBUNTUVERSION
Package: gcc-14-ubuntu-UBUNTUVERSION
Version: VERSION
Architecture: TARGETARCH
Maintainer: Alex Kremer <akremer@ripple.com>

View File

@@ -1,24 +1,41 @@
FROM ubuntu:20.04
ARG GHCR_REPO=invalid
ARG GCC_VERSION=invalid
FROM ${GHCR_REPO}/clio-gcc:${GCC_VERSION}
ARG DEBIAN_FRONTEND=noninteractive
ARG TARGETARCH
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
ARG BUILD_VERSION=1
RUN apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests \
bison \
build-essential \
cmake \
flex \
ninja-build \
python3 \
python3-pip \
software-properties-common \
wget \
&& pip3 install -q --no-cache-dir \
cmake==3.31.6 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /tmp
ARG MOLD_VERSION=2.40.1
RUN wget --progress=dot:giga "https://github.com/rui314/mold/archive/refs/tags/v${MOLD_VERSION}.tar.gz" \
&& tar xf "v${MOLD_VERSION}.tar.gz" \
&& cd "mold-${MOLD_VERSION}" \
&& mkdir build \
&& cd build \
&& cmake -GNinja -DCMAKE_BUILD_TYPE=Release .. \
&& ninja install \
&& rm -rf /tmp/* /var/tmp/*
ARG CCACHE_VERSION=4.11.3
RUN wget --progress=dot:giga "https://github.com/ccache/ccache/releases/download/v${CCACHE_VERSION}/ccache-${CCACHE_VERSION}.tar.gz" \
&& tar xf "ccache-${CCACHE_VERSION}.tar.gz" \
@@ -26,7 +43,7 @@ RUN wget --progress=dot:giga "https://github.com/ccache/ccache/releases/download
&& mkdir build \
&& cd build \
&& cmake -GNinja -DCMAKE_BUILD_TYPE=Release -DENABLE_TESTING=False .. \
&& cmake --build . --target install \
&& ninja install \
&& rm -rf /tmp/* /var/tmp/*
ARG DOXYGEN_VERSION=1.12.0
@@ -36,7 +53,7 @@ RUN wget --progress=dot:giga "https://github.com/doxygen/doxygen/releases/downlo
&& mkdir build \
&& cd build \
&& cmake -GNinja -DCMAKE_BUILD_TYPE=Release .. \
&& cmake --build . --target install \
&& ninja install \
&& rm -rf /tmp/* /var/tmp/*
ARG CLANG_BUILD_ANALYZER_VERSION=1.6.0
@@ -46,7 +63,7 @@ RUN wget --progress=dot:giga "https://github.com/aras-p/ClangBuildAnalyzer/archi
&& mkdir build \
&& cd build \
&& cmake -GNinja -DCMAKE_BUILD_TYPE=Release .. \
&& cmake --build . --target install \
&& ninja install \
&& rm -rf /tmp/* /var/tmp/*
ARG GIT_CLIFF_VERSION=2.9.1
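
mold is built from source above as part of the work on issue #1242; a hedged smoke test and a typical opt-in usage (the `-fuse-ld=mold` flag is standard gcc/clang usage, not something this Dockerfile configures):

```sh
mold --version                    # confirm the freshly built linker is installed
gcc -fuse-ld=mold -o demo demo.c  # assumed example: route linking through mold
```
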

View File

@@ -35,7 +35,7 @@ The default profile is the file in `~/.conan2/profiles/default`.
Here are some examples of possible profiles:
**Mac apple-clang 16 example**:
**Mac apple-clang 17 example**:
```text
[settings]
@@ -44,7 +44,7 @@ build_type=Release
compiler=apple-clang
compiler.cppstd=20
compiler.libcxx=libc++
compiler.version=16
compiler.version=17
os=Macos
[conf]

View File

@@ -5,7 +5,6 @@
Clio needs access to a `rippled` server in order to work. The following configurations are required for Clio and `rippled` to communicate:
1. In the Clio config file, provide the following:
- The IP of the `rippled` server
- The port on which `rippled` is accepting unencrypted WebSocket connections
@@ -13,7 +12,6 @@ Clio needs access to a `rippled` server in order to work. The following configur
- The port on which `rippled` is handling gRPC requests
2. In the `rippled` config file, you need to open:
- A port to accept unencrypted WebSocket connections
- A port to handle gRPC requests, with the IP(s) of Clio specified in the `secure_gateway` entry

View File

@@ -10,4 +10,5 @@ target_link_libraries(
clio_web
clio_rpc
clio_migration
PRIVATE Boost::program_options
)

View File

@@ -41,7 +41,6 @@
#include "util/build/Build.hpp"
#include "util/config/ConfigDefinition.hpp"
#include "util/log/Logger.hpp"
#include "util/prometheus/Prometheus.hpp"
#include "web/AdminVerificationStrategy.hpp"
#include "web/RPCServerHandler.hpp"
#include "web/Server.hpp"
@@ -91,7 +90,6 @@ ClioApplication::ClioApplication(util::config::ClioConfigDefinition const& confi
: config_(config), signalsHandler_{config_}
{
LOG(util::LogService::info()) << "Clio version: " << util::build::getClioFullVersionString();
PrometheusService::init(config);
signalsHandler_.subscribeToStop([this]() { appStopper_.stop(); });
}

View File

@@ -97,7 +97,7 @@ HealthCheckHandler::operator()(
boost::asio::yield_context
)
{
static auto constexpr kHEALTH_CHECK_HTML = R"html(
static constexpr auto kHEALTH_CHECK_HTML = R"html(
<!DOCTYPE html>
<html>
<head><title>Test page for Clio</title></head>

View File

@@ -78,17 +78,20 @@ WritingAmendmentKey::WritingAmendmentKey(std::string amendmentName) : AmendmentK
} // namespace impl
AmendmentKey::operator std::string const&() const
AmendmentKey::
operator std::string const&() const
{
return name;
}
AmendmentKey::operator std::string_view() const
AmendmentKey::
operator std::string_view() const
{
return name;
}
AmendmentKey::operator ripple::uint256() const
AmendmentKey::
operator ripple::uint256() const
{
return Amendment::getAmendmentId(name);
}

View File

@@ -49,35 +49,45 @@ durationInMillisecondsSince(std::chrono::steady_clock::time_point const startTim
using namespace util::prometheus;
BackendCounters::BackendCounters()
: tooBusyCounter_(PrometheusService::counterInt(
"backend_too_busy_total_number",
Labels(),
"The total number of times the backend was too busy to process a request"
))
, writeSyncCounter_(PrometheusService::counterInt(
"backend_operations_total_number",
Labels({Label{"operation", "write_sync"}}),
"The total number of times the backend had to write synchronously"
))
, writeSyncRetryCounter_(PrometheusService::counterInt(
"backend_operations_total_number",
Labels({Label{"operation", "write_sync_retry"}}),
"The total number of times the backend had to retry a synchronous write"
))
: tooBusyCounter_(
PrometheusService::counterInt(
"backend_too_busy_total_number",
Labels(),
"The total number of times the backend was too busy to process a request"
)
)
, writeSyncCounter_(
PrometheusService::counterInt(
"backend_operations_total_number",
Labels({Label{"operation", "write_sync"}}),
"The total number of times the backend had to write synchronously"
)
)
, writeSyncRetryCounter_(
PrometheusService::counterInt(
"backend_operations_total_number",
Labels({Label{"operation", "write_sync_retry"}}),
"The total number of times the backend had to retry a synchronous write"
)
)
, asyncWriteCounters_{"write_async"}
, asyncReadCounters_{"read_async"}
, readDurationHistogram_(PrometheusService::histogramInt(
"backend_duration_milliseconds_histogram",
Labels({Label{"operation", "read"}}),
kHISTOGRAM_BUCKETS,
"The duration of backend read operations including retries"
))
, writeDurationHistogram_(PrometheusService::histogramInt(
"backend_duration_milliseconds_histogram",
Labels({Label{"operation", "write"}}),
kHISTOGRAM_BUCKETS,
"The duration of backend write operations including retries"
))
, readDurationHistogram_(
PrometheusService::histogramInt(
"backend_duration_milliseconds_histogram",
Labels({Label{"operation", "read"}}),
kHISTOGRAM_BUCKETS,
"The duration of backend read operations including retries"
)
)
, writeDurationHistogram_(
PrometheusService::histogramInt(
"backend_duration_milliseconds_histogram",
Labels({Label{"operation", "write"}}),
kHISTOGRAM_BUCKETS,
"The duration of backend write operations including retries"
)
)
{
}
@@ -170,26 +180,34 @@ BackendCounters::report() const
BackendCounters::AsyncOperationCounters::AsyncOperationCounters(std::string name)
: name_(std::move(name))
, pendingCounter_(PrometheusService::gaugeInt(
"backend_operations_current_number",
Labels({{"operation", name_}, {"status", "pending"}}),
"The current number of pending " + name_ + " operations"
))
, completedCounter_(PrometheusService::counterInt(
"backend_operations_total_number",
Labels({{"operation", name_}, {"status", "completed"}}),
"The total number of completed " + name_ + " operations"
))
, retryCounter_(PrometheusService::counterInt(
"backend_operations_total_number",
Labels({{"operation", name_}, {"status", "retry"}}),
"The total number of retried " + name_ + " operations"
))
, errorCounter_(PrometheusService::counterInt(
"backend_operations_total_number",
Labels({{"operation", name_}, {"status", "error"}}),
"The total number of errored " + name_ + " operations"
))
, pendingCounter_(
PrometheusService::gaugeInt(
"backend_operations_current_number",
Labels({{"operation", name_}, {"status", "pending"}}),
"The current number of pending " + name_ + " operations"
)
)
, completedCounter_(
PrometheusService::counterInt(
"backend_operations_total_number",
Labels({{"operation", name_}, {"status", "completed"}}),
"The total number of completed " + name_ + " operations"
)
)
, retryCounter_(
PrometheusService::counterInt(
"backend_operations_total_number",
Labels({{"operation", name_}, {"status", "retry"}}),
"The total number of retried " + name_ + " operations"
)
)
, errorCounter_(
PrometheusService::counterInt(
"backend_operations_total_number",
Labels({{"operation", name_}, {"status", "error"}}),
"The total number of errored " + name_ + " operations"
)
)
{
}

View File

@@ -234,8 +234,12 @@ public:
* @return A vector of ripple::uint256 representing the account roots
*/
virtual std::vector<ripple::uint256>
fetchAccountRoots(std::uint32_t number, std::uint32_t pageSize, std::uint32_t seq, boost::asio::yield_context yield)
const = 0;
fetchAccountRoots(
std::uint32_t number,
std::uint32_t pageSize,
std::uint32_t seq,
boost::asio::yield_context yield
) const = 0;
/**
* @brief Updates the range of sequences that are stored in the DB.
@@ -459,8 +463,11 @@ public:
* @return The sequence in unit32_t on success; nullopt otherwise
*/
virtual std::optional<std::uint32_t>
doFetchLedgerObjectSeq(ripple::uint256 const& key, std::uint32_t sequence, boost::asio::yield_context yield)
const = 0;
doFetchLedgerObjectSeq(
ripple::uint256 const& key,
std::uint32_t sequence,
boost::asio::yield_context yield
) const = 0;
/**
* @brief The database-specific implementation for fetching ledger objects.

View File

@@ -361,8 +361,10 @@ public:
}
std::vector<ripple::uint256>
fetchAllTransactionHashesInLedger(std::uint32_t const ledgerSequence, boost::asio::yield_context yield)
const override
fetchAllTransactionHashesInLedger(
std::uint32_t const ledgerSequence,
boost::asio::yield_context yield
) const override
{
auto start = std::chrono::system_clock::now();
auto const res = executor_.read(yield, schema_->selectAllTransactionHashesInLedger, ledgerSequence);
@@ -392,8 +394,11 @@ public:
}
std::optional<NFT>
fetchNFT(ripple::uint256 const& tokenID, std::uint32_t const ledgerSequence, boost::asio::yield_context yield)
const override
fetchNFT(
ripple::uint256 const& tokenID,
std::uint32_t const ledgerSequence,
boost::asio::yield_context yield
) const override
{
auto const res = executor_.read(yield, schema_->selectNFT, tokenID, ledgerSequence);
if (not res)
@@ -554,10 +559,9 @@ public:
selectNFTStatements.reserve(nftIDs.size());
std::transform(
std::cbegin(nftIDs),
std::cend(nftIDs),
std::back_inserter(selectNFTStatements),
[&](auto const& nftID) { return schema_->selectNFT.bind(nftID, ledgerSequence); }
std::cbegin(nftIDs), std::cend(nftIDs), std::back_inserter(selectNFTStatements), [&](auto const& nftID) {
return schema_->selectNFT.bind(nftID, ledgerSequence);
}
);
auto const nftInfos = executor_.readEach(yield, selectNFTStatements);
@@ -566,10 +570,9 @@ public:
selectNFTURIStatements.reserve(nftIDs.size());
std::transform(
std::cbegin(nftIDs),
std::cend(nftIDs),
std::back_inserter(selectNFTURIStatements),
[&](auto const& nftID) { return schema_->selectNFTURI.bind(nftID, ledgerSequence); }
std::cbegin(nftIDs), std::cend(nftIDs), std::back_inserter(selectNFTURIStatements), [&](auto const& nftID) {
return schema_->selectNFTURI.bind(nftID, ledgerSequence);
}
);
auto const nftUris = executor_.readEach(yield, selectNFTURIStatements);
@@ -626,8 +629,11 @@ public:
}
std::optional<Blob>
doFetchLedgerObject(ripple::uint256 const& key, std::uint32_t const sequence, boost::asio::yield_context yield)
const override
doFetchLedgerObject(
ripple::uint256 const& key,
std::uint32_t const sequence,
boost::asio::yield_context yield
) const override
{
LOG(log_.debug()) << "Fetching ledger object for seq " << sequence << ", key = " << ripple::to_string(key);
if (auto const res = executor_.read(yield, schema_->selectObject, key, sequence); res) {
@@ -645,8 +651,11 @@ public:
}
std::optional<std::uint32_t>
doFetchLedgerObjectSeq(ripple::uint256 const& key, std::uint32_t const sequence, boost::asio::yield_context yield)
const override
doFetchLedgerObjectSeq(
ripple::uint256 const& key,
std::uint32_t const sequence,
boost::asio::yield_context yield
) const override
{
LOG(log_.debug()) << "Fetching ledger object for seq " << sequence << ", key = " << ripple::to_string(key);
if (auto const res = executor_.read(yield, schema_->selectObject, key, sequence); res) {
@@ -680,8 +689,11 @@ public:
}
std::optional<ripple::uint256>
doFetchSuccessorKey(ripple::uint256 key, std::uint32_t const ledgerSequence, boost::asio::yield_context yield)
const override
doFetchSuccessorKey(
ripple::uint256 key,
std::uint32_t const ledgerSequence,
boost::asio::yield_context yield
) const override
{
if (auto const res = executor_.read(yield, schema_->selectSuccessor, key, ledgerSequence); res) {
if (auto const result = res->template get<ripple::uint256>(); result) {
@@ -714,10 +726,9 @@ public:
auto const timeDiff = util::timed([this, yield, &results, &hashes, &statements]() {
// TODO: seems like a job for "hash IN (list of hashes)" instead?
std::transform(
std::cbegin(hashes),
std::cend(hashes),
std::back_inserter(statements),
[this](auto const& hash) { return schema_->selectTransaction.bind(hash); }
std::cbegin(hashes), std::cend(hashes), std::back_inserter(statements), [this](auto const& hash) {
return schema_->selectTransaction.bind(hash);
}
);
auto const entries = executor_.readEach(yield, statements);
@@ -761,18 +772,14 @@ public:
// TODO: seems like a job for "key IN (list of keys)" instead?
std::transform(
std::cbegin(keys),
std::cend(keys),
std::back_inserter(statements),
[this, &sequence](auto const& key) { return schema_->selectObject.bind(key, sequence); }
std::cbegin(keys), std::cend(keys), std::back_inserter(statements), [this, &sequence](auto const& key) {
return schema_->selectObject.bind(key, sequence);
}
);
auto const entries = executor_.readEach(yield, statements);
std::transform(
std::cbegin(entries),
std::cend(entries),
std::back_inserter(results),
[](auto const& res) -> Blob {
std::cbegin(entries), std::cend(entries), std::back_inserter(results), [](auto const& res) -> Blob {
if (auto const maybeValue = res.template get<Blob>(); maybeValue)
return *maybeValue;
@@ -785,8 +792,12 @@ public:
}
std::vector<ripple::uint256>
fetchAccountRoots(std::uint32_t number, std::uint32_t pageSize, std::uint32_t seq, boost::asio::yield_context yield)
const override
fetchAccountRoots(
std::uint32_t number,
std::uint32_t pageSize,
std::uint32_t seq,
boost::asio::yield_context yield
) const override
{
std::vector<ripple::uint256> liveAccounts;
std::optional<ripple::AccountID> lastItem;

View File

@@ -198,39 +198,6 @@ struct MPTHolderData {
ripple::AccountID holder;
};
/**
* @brief Check whether the supplied object is an offer.
*
* @param object The object to check
* @return true if the object is an offer; false otherwise
*/
template <typename T>
inline bool
isOffer(T const& object)
{
static constexpr short kOFFER_OFFSET = 0x006f;
static constexpr short kSHIFT = 8;
short offerBytes = (object[1] << kSHIFT) | object[2];
return offerBytes == kOFFER_OFFSET;
}
/**
* @brief Check whether the supplied hex represents an offer object.
*
* @param object The object to check
* @return true if the object is an offer; false otherwise
*/
template <typename T>
inline bool
isOfferHex(T const& object)
{
auto blob = ripple::strUnHex(4, object.begin(), object.begin() + 4);
if (blob)
return isOffer(*blob);
return false;
}
/**
* @brief Check whether the supplied object is a dir node.
*
@@ -241,6 +208,10 @@ template <typename T>
inline bool
isDirNode(T const& object)
{
static constexpr auto kMIN_SIZE_REQUIRED = 3;
if (std::size(object) < kMIN_SIZE_REQUIRED)
return false;
static constexpr short kDIR_NODE_SPACE_KEY = 0x0064;
short const spaceKey = (object.data()[1] << 8) | object.data()[2];
return spaceKey == kDIR_NODE_SPACE_KEY;
@@ -264,23 +235,6 @@ isBookDir(T const& key, R const& object)
return !sle[~ripple::sfOwner].has_value();
}
/**
* @brief Get the book out of an offer object.
*
* @param offer The offer to get the book for
* @return Book as ripple::uint256
*/
template <typename T>
inline ripple::uint256
getBook(T const& offer)
{
ripple::SerialIter it{offer.data(), offer.size()};
ripple::SLE const sle{it, {}};
ripple::uint256 book = sle.getFieldH256(ripple::sfBookDirectory);
return book;
}
/**
* @brief Get the book base.
*

View File

@@ -6,7 +6,7 @@ To support additional database types, you can create new classes that implement
## Data Model
The data model used by Clio to read and write ledger data is different from what `rippled` uses. `rippled` uses a novel data structure named [_SHAMap_](https://github.com/ripple/rippled/blob/master/src/ripple/shamap/README.md), which is a combination of a Merkle Tree and a Radix Trie. In a SHAMap, ledger objects are stored in the root vertices of the tree. Thus, looking up a record located at the leaf node of the SHAMap executes a tree search, where the path from the root node to the leaf node is the key of the record.
The data model used by Clio to read and write ledger data is different from what `rippled` uses. `rippled` uses a novel data structure named [_SHAMap_](https://github.com/XRPLF/rippled/blob/develop/src/xrpld/shamap/README.md), which is a combination of a Merkle Tree and a Radix Trie. In a SHAMap, ledger objects are stored in the root vertices of the tree. Thus, looking up a record located at the leaf node of the SHAMap executes a tree search, where the path from the root node to the leaf node is the key of the record.
`rippled` nodes can also generate a proof-tree by forming a subtree with all the path nodes and their neighbors, which can then be used to prove the existence of the leaf node data to other `rippled` nodes. In short, the main purpose of the SHAMap data structure is to facilitate the fast validation of data integrity between different decentralized `rippled` nodes.

View File

@@ -99,7 +99,7 @@ public:
connect() const;
/**
* @brief Connect to the the specified keyspace asynchronously.
* @brief Connect to the specified keyspace asynchronously.
*
* @param keyspace The keyspace to use
* @return A future
@@ -137,7 +137,7 @@ public:
disconnect() const;
/**
* @brief Reconnect to the the specified keyspace asynchronously.
* @brief Reconnect to the specified keyspace asynchronously.
*
* @param keyspace The keyspace to use
* @return A future

File diff suppressed because it is too large

View File

@@ -45,7 +45,8 @@ Cluster::Cluster(Settings const& settings) : ManagedObject{cass_cluster_new(), k
cass_cluster_set_token_aware_routing(*this, cass_true);
if (auto const rc = cass_cluster_set_protocol_version(*this, CASS_PROTOCOL_VERSION_V4); rc != CASS_OK) {
throw std::runtime_error(fmt::format("Error setting cassandra protocol version to v4: {}", cass_error_desc(rc))
throw std::runtime_error(
fmt::format("Error setting cassandra protocol version to v4: {}", cass_error_desc(rc))
);
}

View File

@@ -45,11 +45,13 @@ public:
* @brief Create a new retry policy instance with the io_context provided
*/
ExponentialBackoffRetryPolicy(boost::asio::io_context& ioc)
: retry_(util::makeRetryExponentialBackoff(
std::chrono::milliseconds(1),
std::chrono::seconds(1),
boost::asio::make_strand(ioc)
))
: retry_(
util::makeRetryExponentialBackoff(
std::chrono::milliseconds(1),
std::chrono::seconds(1),
boost::asio::make_strand(ioc)
)
)
{
}

View File

@@ -171,9 +171,11 @@ ETLService::runETLPipeline(uint32_t startSequence, uint32_t numExtractors)
auto pipe = DataPipeType{numExtractors, startSequence};
for (auto i = 0u; i < numExtractors; ++i) {
extractors.push_back(std::make_unique<ExtractorType>(
pipe, networkValidatedLedgers_, ledgerFetcher_, startSequence + i, finishSequence_, state_
));
extractors.push_back(
std::make_unique<ExtractorType>(
pipe, networkValidatedLedgers_, ledgerFetcher_, startSequence + i, finishSequence_, state_
)
);
}
auto transformer =

View File

@@ -57,7 +57,6 @@
#include <string>
#include <thread>
#include <utility>
#include <variant>
#include <vector>
using namespace util::config;
@@ -184,11 +183,14 @@ LoadBalancer::LoadBalancer(
LOG(log_.warn()) << "Failed to fetch ETL state from source = " << source->toString()
<< " Please check the configuration and network";
} else if (etlState_ && etlState_->networkID != stateOpt->networkID) {
checkOnETLFailure(fmt::format(
"ETL sources must be on the same network. Source network id = {} does not match others network id = {}",
stateOpt->networkID,
etlState_->networkID
));
checkOnETLFailure(
fmt::format(
"ETL sources must be on the same network. Source network id = {} does not match others network id "
"= {}",
stateOpt->networkID,
etlState_->networkID
)
);
} else {
etlState_ = stateOpt;
}
@@ -275,42 +277,46 @@ LoadBalancer::forwardToRippled(
return std::unexpected{rpc::ClioError::RpcCommandIsMissing};
auto const cmd = boost::json::value_to<std::string>(request.at("command"));
if (forwardingCache_ and forwardingCache_->shouldCache(cmd)) {
bool servedFromCache = true;
auto updater =
[this, &request, &clientIp, &servedFromCache, isAdmin](boost::asio::yield_context yield
) -> std::expected<util::ResponseExpirationCache::EntryData, util::ResponseExpirationCache::Error> {
servedFromCache = false;
auto result = forwardToRippledImpl(request, clientIp, isAdmin, yield);
if (result.has_value()) {
return util::ResponseExpirationCache::EntryData{
.lastUpdated = std::chrono::steady_clock::now(), .response = std::move(result).value()
};
}
return std::unexpected{
util::ResponseExpirationCache::Error{.status = rpc::Status{result.error()}, .warnings = {}}
};
};
auto result = forwardingCache_->getOrUpdate(
yield,
cmd,
std::move(updater),
[](util::ResponseExpirationCache::EntryData const& entry) { return not entry.response.contains("error"); }
);
if (servedFromCache) {
++forwardingCounters_.cacheHit.get();
if (forwardingCache_) {
if (auto cachedResponse = forwardingCache_->get(cmd); cachedResponse) {
forwardingCounters_.cacheHit.get() += 1;
return std::move(cachedResponse).value();
}
if (result.has_value()) {
return std::move(result).value();
}
forwardingCounters_.cacheMiss.get() += 1;
ASSERT(not sources_.empty(), "ETL sources must be configured to forward requests.");
std::size_t sourceIdx = randomGenerator_->uniform(0ul, sources_.size() - 1);
auto numAttempts = 0u;
auto xUserValue = isAdmin ? kADMIN_FORWARDING_X_USER_VALUE : kUSER_FORWARDING_X_USER_VALUE;
std::optional<boost::json::object> response;
rpc::ClioError error = rpc::ClioError::EtlConnectionError;
while (numAttempts < sources_.size()) {
auto [res, duration] =
util::timed([&]() { return sources_[sourceIdx]->forwardToRippled(request, clientIp, xUserValue, yield); });
if (res) {
forwardingCounters_.successDuration.get() += duration;
response = std::move(res).value();
break;
}
auto const combinedError = result.error().status.code;
ASSERT(std::holds_alternative<rpc::ClioError>(combinedError), "There could be only ClioError here");
return std::unexpected{std::get<rpc::ClioError>(combinedError)};
forwardingCounters_.failDuration.get() += duration;
++forwardingCounters_.retries.get();
error = std::max(error, res.error()); // Choose the best result between all sources
sourceIdx = (sourceIdx + 1) % sources_.size();
++numAttempts;
}
return forwardToRippledImpl(request, clientIp, isAdmin, yield);
if (response) {
if (forwardingCache_ and not response->contains("error"))
forwardingCache_->put(cmd, *response);
return std::move(response).value();
}
return std::unexpected{error};
}
boost::json::value
@@ -401,47 +407,4 @@ LoadBalancer::chooseForwardingSource()
}
}
std::expected<boost::json::object, rpc::CombinedError>
LoadBalancer::forwardToRippledImpl(
boost::json::object const& request,
std::optional<std::string> const& clientIp,
bool const isAdmin,
boost::asio::yield_context yield
)
{
++forwardingCounters_.cacheMiss.get();
ASSERT(not sources_.empty(), "ETL sources must be configured to forward requests.");
std::size_t sourceIdx = randomGenerator_->uniform(0ul, sources_.size() - 1);
auto numAttempts = 0u;
auto xUserValue = isAdmin ? kADMIN_FORWARDING_X_USER_VALUE : kUSER_FORWARDING_X_USER_VALUE;
std::optional<boost::json::object> response;
rpc::ClioError error = rpc::ClioError::EtlConnectionError;
while (numAttempts < sources_.size()) {
auto [res, duration] =
util::timed([&]() { return sources_[sourceIdx]->forwardToRippled(request, clientIp, xUserValue, yield); });
if (res) {
forwardingCounters_.successDuration.get() += duration;
response = std::move(res).value();
break;
}
forwardingCounters_.failDuration.get() += duration;
++forwardingCounters_.retries.get();
error = std::max(error, res.error()); // Choose the best result between all sources
sourceIdx = (sourceIdx + 1) % sources_.size();
++numAttempts;
}
if (response.has_value()) {
return std::move(response).value();
}
return std::unexpected{error};
}
} // namespace etl
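Editor's note: the forwarding loop above starts at a random source, tries each configured source once, and keeps the most informative error if all fail. A minimal standalone sketch of that failover pattern, with illustrative types in place of Clio's sources and error codes:

#include <algorithm>
#include <cstddef>
#include <expected>
#include <functional>
#include <string>
#include <vector>

enum class Error { ConnectionError, Timeout };  // illustrative: later enumerators rank as more informative

std::expected<std::string, Error>
forwardWithFailover(
    std::vector<std::function<std::expected<std::string, Error>()>> const& sources,
    std::size_t startIdx
)
{
    // Precondition (mirrors the ASSERT in the original): at least one source is configured.
    Error best = Error::ConnectionError;
    std::size_t idx = startIdx % sources.size();
    for (std::size_t attempt = 0; attempt < sources.size(); ++attempt) {
        if (auto res = sources[idx]()) {
            return res;  // first success wins
        } else {
            best = std::max(best, res.error());  // keep the best error across all sources
        }
        idx = (idx + 1) % sources.size();  // round-robin to the next source
    }
    return std::unexpected{best};
}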

View File

@@ -49,6 +49,7 @@
#include <concepts>
#include <cstdint>
#include <expected>
#include <functional>
#include <memory>
#include <optional>
#include <string>
@@ -172,8 +173,10 @@ public:
* @return A std::vector<std::string> The ledger data
*/
std::vector<std::string>
loadInitialLedger(uint32_t sequence, std::chrono::steady_clock::duration retryAfter = std::chrono::seconds{2})
override;
loadInitialLedger(
uint32_t sequence,
std::chrono::steady_clock::duration retryAfter = std::chrono::seconds{2}
) override;
/**
* @brief Load the initial ledger, writing data to the queue.
@@ -279,14 +282,6 @@ private:
*/
void
chooseForwardingSource();
std::expected<boost::json::object, rpc::CombinedError>
forwardToRippledImpl(
boost::json::object const& request,
std::optional<std::string> const& clientIp,
bool isAdmin,
boost::asio::yield_context yield
);
};
} // namespace etl

View File

@@ -18,6 +18,7 @@
//==============================================================================
#include "data/DBHelpers.hpp"
#include "util/Assert.hpp"
#include <fmt/format.h>
#include <xrpl/basics/base_uint.h>
@@ -138,7 +139,8 @@ getNFTokenMintData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
// There should always be a difference so the returned finalIDs
// iterator should never be end(). But better safe than sorry.
if (finalIDs.size() != prevIDs.size() + 1 || diff.first == finalIDs.end() || !owner) {
throw std::runtime_error(fmt::format(" - unexpected NFTokenMint data in tx {}", strHex(sttx.getTransactionID()))
throw std::runtime_error(
fmt::format(" - unexpected NFTokenMint data in tx {}", strHex(sttx.getTransactionID()))
);
}
@@ -358,14 +360,18 @@ getNFTDataFromTx(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
std::vector<NFTsData>
getNFTDataFromObj(std::uint32_t const seq, std::string const& key, std::string const& blob)
{
std::vector<NFTsData> nfts;
ripple::STLedgerEntry const sle =
// https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0020-non-fungible-tokens#tokenpage-id-format
ASSERT(key.size() == ripple::uint256::size(), "The size of the key (token) is expected to fit uint256 exactly");
auto const sle =
ripple::STLedgerEntry(ripple::SerialIter{blob.data(), blob.size()}, ripple::uint256::fromVoid(key.data()));
if (sle.getFieldU16(ripple::sfLedgerEntryType) != ripple::ltNFTOKEN_PAGE)
return nfts;
return {};
auto const owner = ripple::AccountID::fromVoid(key.data());
std::vector<NFTsData> nfts;
for (ripple::STObject const& node : sle.getFieldArray(ripple::sfNFTokens))
nfts.emplace_back(node.getFieldH256(ripple::sfNFTokenID), seq, owner, node.getFieldVL(ripple::sfURI));
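Editor's note: the added ASSERT above guards the raw fromVoid read. Constructors of that shape copy a fixed number of bytes from the pointer, so an undersized key would read past the end of the buffer. A standalone sketch of the hazard and the guard (U256 is a stand-in, not ripple's type):

#include <array>
#include <cassert>
#include <cstddef>
#include <cstring>
#include <string>

struct U256 {  // stand-in for a fixed-width integer type like ripple::uint256
    static constexpr std::size_t size() { return 32; }
    std::array<unsigned char, 32> bytes{};

    static U256 fromVoid(void const* p)
    {
        U256 out;
        std::memcpy(out.bytes.data(), p, size());  // always reads 32 bytes, regardless of the source buffer
        return out;
    }
};

U256 keyToU256(std::string const& key)
{
    assert(key.size() == U256::size());  // reject short buffers before the raw read
    return U256::fromVoid(key.data());
}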

View File

@@ -94,8 +94,8 @@ private:
double totalTime = 0.0;
auto currentSequence = startSequence_;
while (!shouldFinish(currentSequence) && networkValidatedLedgers_->waitUntilValidatedByNetwork(currentSequence)
) {
while (!shouldFinish(currentSequence) &&
networkValidatedLedgers_->waitUntilValidatedByNetwork(currentSequence)) {
auto [fetchResponse, time] = ::util::timed<std::chrono::duration<double>>([this, currentSequence]() {
return ledgerFetcher_.get().fetchDataAndDiff(currentSequence);
});

View File

@@ -209,47 +209,49 @@ public:
size_t numWrites = 0;
backend_->cache().setFull();
auto seconds = ::util::timed<std::chrono::seconds>([this, keys = std::move(edgeKeys), sequence, &numWrites](
) mutable {
for (auto& key : keys) {
LOG(log_.debug()) << "Writing edge key = " << ripple::strHex(key);
auto succ = backend_->cache().getSuccessor(*ripple::uint256::fromVoidChecked(key), sequence);
if (succ)
backend_->writeSuccessor(std::move(key), sequence, uint256ToString(succ->key));
}
ripple::uint256 prev = data::kFIRST_KEY;
while (auto cur = backend_->cache().getSuccessor(prev, sequence)) {
ASSERT(cur.has_value(), "Successor for key {} must exist", ripple::strHex(prev));
if (prev == data::kFIRST_KEY)
backend_->writeSuccessor(uint256ToString(prev), sequence, uint256ToString(cur->key));
if (isBookDir(cur->key, cur->blob)) {
auto base = getBookBase(cur->key);
// make sure the base is not an actual object
if (!backend_->cache().get(base, sequence)) {
auto succ = backend_->cache().getSuccessor(base, sequence);
ASSERT(succ.has_value(), "Book base {} must have a successor", ripple::strHex(base));
if (succ->key == cur->key) {
LOG(log_.debug()) << "Writing book successor = " << ripple::strHex(base) << " - "
<< ripple::strHex(cur->key);
backend_->writeSuccessor(uint256ToString(base), sequence, uint256ToString(cur->key));
}
}
++numWrites;
auto seconds =
::util::timed<std::chrono::seconds>([this, keys = std::move(edgeKeys), sequence, &numWrites]() mutable {
for (auto& key : keys) {
LOG(log_.debug()) << "Writing edge key = " << ripple::strHex(key);
auto succ = backend_->cache().getSuccessor(*ripple::uint256::fromVoidChecked(key), sequence);
if (succ)
backend_->writeSuccessor(std::move(key), sequence, uint256ToString(succ->key));
}
prev = cur->key;
static constexpr std::size_t kLOG_STRIDE = 100000;
if (numWrites % kLOG_STRIDE == 0 && numWrites != 0)
LOG(log_.info()) << "Wrote " << numWrites << " book successors";
}
ripple::uint256 prev = data::kFIRST_KEY;
while (auto cur = backend_->cache().getSuccessor(prev, sequence)) {
ASSERT(cur.has_value(), "Successor for key {} must exist", ripple::strHex(prev));
if (prev == data::kFIRST_KEY)
backend_->writeSuccessor(uint256ToString(prev), sequence, uint256ToString(cur->key));
backend_->writeSuccessor(uint256ToString(prev), sequence, uint256ToString(data::kLAST_KEY));
++numWrites;
});
if (isBookDir(cur->key, cur->blob)) {
auto base = getBookBase(cur->key);
// make sure the base is not an actual object
if (!backend_->cache().get(base, sequence)) {
auto succ = backend_->cache().getSuccessor(base, sequence);
ASSERT(succ.has_value(), "Book base {} must have a successor", ripple::strHex(base));
if (succ->key == cur->key) {
LOG(log_.debug()) << "Writing book successor = " << ripple::strHex(base) << " - "
<< ripple::strHex(cur->key);
backend_->writeSuccessor(
uint256ToString(base), sequence, uint256ToString(cur->key)
);
}
}
++numWrites;
}
prev = cur->key;
static constexpr std::size_t kLOG_STRIDE = 100000;
if (numWrites % kLOG_STRIDE == 0 && numWrites != 0)
LOG(log_.info()) << "Wrote " << numWrites << " book successors";
}
backend_->writeSuccessor(uint256ToString(prev), sequence, uint256ToString(data::kLAST_KEY));
++numWrites;
});
LOG(log_.info()) << "Looping through cache and submitting all writes took " << seconds
<< " seconds. numWrites = " << std::to_string(numWrites);

View File

@@ -249,8 +249,9 @@ public:
std::chrono::time_point<std::chrono::system_clock>
getLastPublish() const override
{
return std::chrono::time_point<std::chrono::system_clock>{std::chrono::seconds{lastPublishSeconds_.get().value()
}};
return std::chrono::time_point<std::chrono::system_clock>{
std::chrono::seconds{lastPublishSeconds_.get().value()}
};
}
/**

View File

@@ -79,11 +79,13 @@ SubscriptionSource::SubscriptionSource(
, onConnect_(std::move(onConnect))
, onDisconnect_(std::move(onDisconnect))
, onLedgerClosed_(std::move(onLedgerClosed))
, lastMessageTimeSecondsSinceEpoch_(PrometheusService::gaugeInt(
"subscription_source_last_message_time",
util::prometheus::Labels({{"source", fmt::format("{}:{}", ip, wsPort)}}),
"Seconds since epoch of the last message received from rippled subscription streams"
))
, lastMessageTimeSecondsSinceEpoch_(
PrometheusService::gaugeInt(
"subscription_source_last_message_time",
util::prometheus::Labels({{"source", fmt::format("{}:{}", ip, wsPort)}}),
"Seconds since epoch of the last message received from rippled subscription streams"
)
)
{
wsConnectionBuilder_.addHeader({boost::beast::http::field::user_agent, "clio-client"})
.addHeader({"X-User", "clio-client"})
@@ -329,9 +331,13 @@ SubscriptionSource::setValidatedRange(std::string range)
pairs.emplace_back(sequence, sequence);
} else {
if (minAndMax.size() != 2) {
throw std::runtime_error(fmt::format(
"Error parsing range: {}.Min and max should be of size 2. Got size = {}", range, minAndMax.size()
));
throw std::runtime_error(
fmt::format(
"Error parsing range: {}.Min and max should be of size 2. Got size = {}",
range,
minAndMax.size()
)
);
}
uint32_t const min = std::stoll(minAndMax[0]);
uint32_t const max = std::stoll(minAndMax[1]);

View File

@@ -58,7 +58,6 @@
#include <string>
#include <thread>
#include <utility>
#include <variant>
#include <vector>
using namespace util::config;
@@ -184,11 +183,14 @@ LoadBalancer::LoadBalancer(
LOG(log_.warn()) << "Failed to fetch ETL state from source = " << source->toString()
<< " Please check the configuration and network";
} else if (etlState_ && etlState_->networkID != stateOpt->networkID) {
checkOnETLFailure(fmt::format(
"ETL sources must be on the same network. Source network id = {} does not match others network id = {}",
stateOpt->networkID,
etlState_->networkID
));
checkOnETLFailure(
fmt::format(
"ETL sources must be on the same network. Source network id = {} does not match others network id "
"= {}",
stateOpt->networkID,
etlState_->networkID
)
);
} else {
etlState_ = stateOpt;
}
@@ -281,42 +283,46 @@ LoadBalancer::forwardToRippled(
return std::unexpected{rpc::ClioError::RpcCommandIsMissing};
auto const cmd = boost::json::value_to<std::string>(request.at("command"));
if (forwardingCache_ and forwardingCache_->shouldCache(cmd)) {
bool servedFromCache = true;
auto updater =
[this, &request, &clientIp, &servedFromCache, isAdmin](boost::asio::yield_context yield
) -> std::expected<util::ResponseExpirationCache::EntryData, util::ResponseExpirationCache::Error> {
servedFromCache = false;
auto result = forwardToRippledImpl(request, clientIp, isAdmin, yield);
if (result.has_value()) {
return util::ResponseExpirationCache::EntryData{
.lastUpdated = std::chrono::steady_clock::now(), .response = std::move(result).value()
};
}
return std::unexpected{
util::ResponseExpirationCache::Error{.status = rpc::Status{result.error()}, .warnings = {}}
};
};
auto result = forwardingCache_->getOrUpdate(
yield,
cmd,
std::move(updater),
[](util::ResponseExpirationCache::EntryData const& entry) { return not entry.response.contains("error"); }
);
if (servedFromCache) {
++forwardingCounters_.cacheHit.get();
if (forwardingCache_) {
if (auto cachedResponse = forwardingCache_->get(cmd); cachedResponse) {
forwardingCounters_.cacheHit.get() += 1;
return std::move(cachedResponse).value();
}
if (result.has_value()) {
return std::move(result).value();
}
forwardingCounters_.cacheMiss.get() += 1;
ASSERT(not sources_.empty(), "ETL sources must be configured to forward requests.");
std::size_t sourceIdx = randomGenerator_->uniform(0ul, sources_.size() - 1);
auto numAttempts = 0u;
auto xUserValue = isAdmin ? kADMIN_FORWARDING_X_USER_VALUE : kUSER_FORWARDING_X_USER_VALUE;
std::optional<boost::json::object> response;
rpc::ClioError error = rpc::ClioError::EtlConnectionError;
while (numAttempts < sources_.size()) {
auto [res, duration] =
util::timed([&]() { return sources_[sourceIdx]->forwardToRippled(request, clientIp, xUserValue, yield); });
if (res) {
forwardingCounters_.successDuration.get() += duration;
response = std::move(res).value();
break;
}
auto const combinedError = result.error().status.code;
ASSERT(std::holds_alternative<rpc::ClioError>(combinedError), "There could be only ClioError here");
return std::unexpected{std::get<rpc::ClioError>(combinedError)};
forwardingCounters_.failDuration.get() += duration;
++forwardingCounters_.retries.get();
error = std::max(error, res.error()); // Choose the best result between all sources
sourceIdx = (sourceIdx + 1) % sources_.size();
++numAttempts;
}
return forwardToRippledImpl(request, clientIp, isAdmin, yield);
if (response) {
if (forwardingCache_ and not response->contains("error"))
forwardingCache_->put(cmd, *response);
return std::move(response).value();
}
return std::unexpected{error};
}
boost::json::value
@@ -407,47 +413,4 @@ LoadBalancer::chooseForwardingSource()
}
}
std::expected<boost::json::object, rpc::CombinedError>
LoadBalancer::forwardToRippledImpl(
boost::json::object const& request,
std::optional<std::string> const& clientIp,
bool isAdmin,
boost::asio::yield_context yield
)
{
++forwardingCounters_.cacheMiss.get();
ASSERT(not sources_.empty(), "ETL sources must be configured to forward requests.");
std::size_t sourceIdx = randomGenerator_->uniform(0ul, sources_.size() - 1);
auto numAttempts = 0u;
auto xUserValue = isAdmin ? kADMIN_FORWARDING_X_USER_VALUE : kUSER_FORWARDING_X_USER_VALUE;
std::optional<boost::json::object> response;
rpc::ClioError error = rpc::ClioError::EtlConnectionError;
while (numAttempts < sources_.size()) {
auto [res, duration] =
util::timed([&]() { return sources_[sourceIdx]->forwardToRippled(request, clientIp, xUserValue, yield); });
if (res) {
forwardingCounters_.successDuration.get() += duration;
response = std::move(res).value();
break;
}
forwardingCounters_.failDuration.get() += duration;
++forwardingCounters_.retries.get();
error = std::max(error, res.error()); // Choose the best result between all sources
sourceIdx = (sourceIdx + 1) % sources_.size();
++numAttempts;
}
if (response.has_value()) {
return std::move(response).value();
}
return std::unexpected{error};
}
} // namespace etlng

View File

@@ -282,14 +282,6 @@ private:
*/
void
chooseForwardingSource();
std::expected<boost::json::object, rpc::CombinedError>
forwardToRippledImpl(
boost::json::object const& request,
std::optional<std::string> const& clientIp,
bool isAdmin,
boost::asio::yield_context yield
);
};
} // namespace etlng

View File

@@ -227,8 +227,9 @@ public:
std::chrono::time_point<std::chrono::system_clock>
getLastPublish() const override
{
return std::chrono::time_point<std::chrono::system_clock>{std::chrono::seconds{lastPublishSeconds_.get().value()
}};
return std::chrono::time_point<std::chrono::system_clock>{
std::chrono::seconds{lastPublishSeconds_.get().value()}
};
}
/**

View File

@@ -159,8 +159,10 @@ public:
* @param transactions The transactions in the current ledger.
*/
void
pubBookChanges(ripple::LedgerHeader const& lgrInfo, std::vector<data::TransactionAndMetadata> const& transactions)
final;
pubBookChanges(
ripple::LedgerHeader const& lgrInfo,
std::vector<data::TransactionAndMetadata> const& transactions
) final;
/**
* @brief Subscribe to the proposed transactions feed.

View File

@@ -57,9 +57,7 @@ ProposedTransactionFeed::sub(ripple::AccountID const& account, SubscriberSharedP
{
auto const weakPtr = std::weak_ptr(subscriber);
auto const added = accountSignal_.connectTrackableSlot(
subscriber,
account,
[this, weakPtr](std::shared_ptr<std::string> const& msg) {
subscriber, account, [this, weakPtr](std::shared_ptr<std::string> const& msg) {
if (auto connectionPtr = weakPtr.lock()) {
// Check if this connection already sent
if (notified_.contains(connectionPtr.get()))

View File

@@ -25,6 +25,7 @@
#include "util/TerminationHandler.hpp"
#include "util/config/ConfigDefinition.hpp"
#include "util/log/Logger.hpp"
#include "util/prometheus/Prometheus.hpp"
#include <cstdlib>
#include <exception>
@@ -52,6 +53,7 @@ try {
if (not app::parseConfig(run.configPath))
return EXIT_FAILURE;
PrometheusService::init(gClioConfig);
if (auto const initSuccess = util::LogService::init(gClioConfig); not initSuccess) {
std::cerr << initSuccess.error() << std::endl;
return EXIT_FAILURE;

View File

@@ -32,7 +32,7 @@ namespace migration {
*/
struct MigrationManagerInterface : virtual public MigrationInspectorInterface {
/**
* @brief Run the the migration according to the given migrator's name
* @brief Run the migration according to the given migrator's name
*/
virtual void
runMigration(std::string const&) = 0;

View File

@@ -60,7 +60,6 @@ Most indexes are based on either ledger states or transactions. We provide the `
If you need to do a full scan against another table, you can follow the steps below:
- Describe the table which needs the full scan in a struct. It has to satisfy the `TableSpec` (cassandra/Spec.hpp) concept, containing these static members (see the sketch after this list):
- Tuple type `Row`: the type of each field in a row. The order of types should match what the database returns in a row. Key types should come first, followed by the other field types sorted in alphabetical order.
- `kPARTITION_KEY`: the name of the partition key of the table.
- `kTABLE_NAME`
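Editor's note: a hypothetical spec satisfying that description; the field and table names here are invented for illustration only.

#include <cstdint>
#include <string>
#include <tuple>

struct ExampleTableSpec {
    // Key type first, then the remaining field types in alphabetical order (blob, sequence).
    using Row = std::tuple<std::string, std::string, std::uint32_t>;
    static constexpr char const* kPARTITION_KEY = "key";
    static constexpr char const* kTABLE_NAME = "example_table";
};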

View File

@@ -63,16 +63,18 @@ public:
std::string const& key
)
{
return handler.prepare(fmt::format(
R"(
return handler.prepare(
fmt::format(
R"(
SELECT *
FROM {}
WHERE TOKEN({}) >= ? AND TOKEN({}) <= ?
)",
data::cassandra::qualifiedTableName<SettingsProviderType>(settingsProvider_.get(), tableName),
key,
key
));
data::cassandra::qualifiedTableName<SettingsProviderType>(settingsProvider_.get(), tableName),
key,
key
)
);
}
/**
@@ -84,14 +86,16 @@ public:
data::cassandra::PreparedStatement const&
getPreparedInsertMigratedMigrator(data::cassandra::Handle const& handler)
{
static auto kPREPARED = handler.prepare(fmt::format(
R"(
static auto kPREPARED = handler.prepare(
fmt::format(
R"(
INSERT INTO {}
(migrator_name, status)
VALUES (?, ?)
)",
data::cassandra::qualifiedTableName<SettingsProviderType>(settingsProvider_.get(), "migrator_status")
));
data::cassandra::qualifiedTableName<SettingsProviderType>(settingsProvider_.get(), "migrator_status")
)
);
return kPREPARED;
}
};

View File

@@ -56,7 +56,7 @@ public:
}
/**
* @brief Run the the migration according to the given migrator's name
* @brief Run the migration according to the given migrator's name
*
* @param name The name of the migrator
*/

View File

@@ -2,6 +2,7 @@
add_library(clio_rpc_center)
target_sources(clio_rpc_center PRIVATE RPCCenter.cpp)
target_include_directories(clio_rpc_center PUBLIC "${CMAKE_SOURCE_DIR}/src")
target_link_libraries(clio_rpc_center PUBLIC clio_options)
add_library(clio_rpc)

View File

@@ -40,41 +40,55 @@ using util::prometheus::Label;
using util::prometheus::Labels;
Counters::MethodInfo::MethodInfo(std::string const& method)
: started(PrometheusService::counterInt(
"rpc_method_total_number",
Labels{{{"status", "started"}, {"method", method}}},
fmt::format("Total number of started calls to the method {}", method)
))
, finished(PrometheusService::counterInt(
"rpc_method_total_number",
Labels{{{"status", "finished"}, {"method", method}}},
fmt::format("Total number of finished calls to the method {}", method)
))
, failed(PrometheusService::counterInt(
"rpc_method_total_number",
Labels{{{"status", "failed"}, {"method", method}}},
fmt::format("Total number of failed calls to the method {}", method)
))
, errored(PrometheusService::counterInt(
"rpc_method_total_number",
Labels{{{"status", "errored"}, {"method", method}}},
fmt::format("Total number of errored calls to the method {}", method)
))
, forwarded(PrometheusService::counterInt(
"rpc_method_total_number",
Labels{{{"status", "forwarded"}, {"method", method}}},
fmt::format("Total number of forwarded calls to the method {}", method)
))
, failedForward(PrometheusService::counterInt(
"rpc_method_total_number",
Labels{{{"status", "failed_forward"}, {"method", method}}},
fmt::format("Total number of failed forwarded calls to the method {}", method)
))
, duration(PrometheusService::counterInt(
"rpc_method_duration_us",
Labels({util::prometheus::Label{"method", method}}),
fmt::format("Total duration of calls to the method {}", method)
))
: started(
PrometheusService::counterInt(
"rpc_method_total_number",
Labels{{{"status", "started"}, {"method", method}}},
fmt::format("Total number of started calls to the method {}", method)
)
)
, finished(
PrometheusService::counterInt(
"rpc_method_total_number",
Labels{{{"status", "finished"}, {"method", method}}},
fmt::format("Total number of finished calls to the method {}", method)
)
)
, failed(
PrometheusService::counterInt(
"rpc_method_total_number",
Labels{{{"status", "failed"}, {"method", method}}},
fmt::format("Total number of failed calls to the method {}", method)
)
)
, errored(
PrometheusService::counterInt(
"rpc_method_total_number",
Labels{{{"status", "errored"}, {"method", method}}},
fmt::format("Total number of errored calls to the method {}", method)
)
)
, forwarded(
PrometheusService::counterInt(
"rpc_method_total_number",
Labels{{{"status", "forwarded"}, {"method", method}}},
fmt::format("Total number of forwarded calls to the method {}", method)
)
)
, failedForward(
PrometheusService::counterInt(
"rpc_method_total_number",
Labels{{{"status", "failed_forward"}, {"method", method}}},
fmt::format("Total number of failed forwarded calls to the method {}", method)
)
)
, duration(
PrometheusService::counterInt(
"rpc_method_duration_us",
Labels({util::prometheus::Label{"method", method}}),
fmt::format("Total duration of calls to the method {}", method)
)
)
{
}
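Editor's note: the constructor above registers the same metric name several times with different label sets; Prometheus exposes these as one metric family with many series. A toy registry showing the keying idea (not the real PrometheusService API):

#include <cstdint>
#include <map>
#include <string>
#include <utility>

using Labels = std::map<std::string, std::string>;

struct ToyRegistry {
    std::map<std::pair<std::string, Labels>, std::uint64_t> counters;

    std::uint64_t&
    counterInt(std::string name, Labels labels)
    {
        return counters[{std::move(name), std::move(labels)}];  // one family, one series per label set
    }
};

// e.g. ++registry.counterInt("rpc_method_total_number", {{"status", "started"}, {"method", "tx"}});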
@@ -89,31 +103,41 @@ Counters::getMethodInfo(std::string const& method)
}
Counters::Counters(WorkQueue const& wq)
: tooBusyCounter_(PrometheusService::counterInt(
"rpc_error_total_number",
Labels({Label{"error_type", "too_busy"}}),
"Total number of too busy errors"
))
, notReadyCounter_(PrometheusService::counterInt(
"rpc_error_total_number",
Labels({Label{"error_type", "not_ready"}}),
"Total number of not ready replyes"
))
, badSyntaxCounter_(PrometheusService::counterInt(
"rpc_error_total_number",
Labels({Label{"error_type", "bad_syntax"}}),
"Total number of bad syntax replyes"
))
, unknownCommandCounter_(PrometheusService::counterInt(
"rpc_error_total_number",
Labels({Label{"error_type", "unknown_command"}}),
"Total number of unknown command replyes"
))
, internalErrorCounter_(PrometheusService::counterInt(
"rpc_error_total_number",
Labels({Label{"error_type", "internal_error"}}),
"Total number of internal errors"
))
: tooBusyCounter_(
PrometheusService::counterInt(
"rpc_error_total_number",
Labels({Label{"error_type", "too_busy"}}),
"Total number of too busy errors"
)
)
, notReadyCounter_(
PrometheusService::counterInt(
"rpc_error_total_number",
Labels({Label{"error_type", "not_ready"}}),
"Total number of not ready replyes"
)
)
, badSyntaxCounter_(
PrometheusService::counterInt(
"rpc_error_total_number",
Labels({Label{"error_type", "bad_syntax"}}),
"Total number of bad syntax replyes"
)
)
, unknownCommandCounter_(
PrometheusService::counterInt(
"rpc_error_total_number",
Labels({Label{"error_type", "unknown_command"}}),
"Total number of unknown command replyes"
)
)
, internalErrorCounter_(
PrometheusService::counterInt(
"rpc_error_total_number",
Labels({Label{"error_type", "internal_error"}}),
"Total number of internal errors"
)
)
, workQueue_(std::cref(wq))
, startupTime_{std::chrono::system_clock::now()}
{

View File

@@ -17,7 +17,6 @@ See [tests/unit/rpc](https://github.com/XRPLF/clio/tree/develop/tests/unit/rpc)
Handlers need to fulfil the requirements specified by the `SomeHandler` concept (see `rpc/common/Concepts.hpp`):
- Expose types:
- `Input` - The POD struct which acts as input for the handler
- `Output` - The POD struct which acts as output of a valid handler invocation
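Editor's note: a hypothetical minimal handler exposing those types; the process signature below is illustrative only, not the exact `SomeHandler` concept.

#include <cstdint>
#include <string>

struct ExampleHandler {
    struct Input {  // POD input, filled from the request JSON
        std::string account;
        std::uint32_t limit = 10;
    };

    struct Output {  // POD output, serialized back to JSON on success
        std::string status = "success";
    };

    Output
    process(Input const& input) const
    {
        return Output{.status = "success: " + input.account};  // real handlers would query the backend here
    }
};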

View File

@@ -157,49 +157,55 @@ public:
return forwardingProxy_.forward(ctx);
}
if (not ctx.isAdmin and responseCache_ and responseCache_->shouldCache(ctx.method)) {
auto updater =
[this, &ctx](boost::asio::yield_context
) -> std::expected<util::ResponseExpirationCache::EntryData, util::ResponseExpirationCache::Error> {
auto result = buildResponseImpl(ctx);
auto const extracted =
[&result]() -> std::expected<boost::json::object, util::ResponseExpirationCache::Error> {
if (result.response.has_value()) {
return std::move(result.response).value();
}
return std::unexpected{util::ResponseExpirationCache::Error{
.status = std::move(result.response).error(), .warnings = std::move(result.warnings)
}};
}();
if (extracted.has_value()) {
return util::ResponseExpirationCache::EntryData{
.lastUpdated = std::chrono::steady_clock::now(), .response = std::move(extracted).value()
};
}
return std::unexpected{std::move(extracted).error()};
};
auto result = responseCache_->getOrUpdate(
ctx.yield,
ctx.method,
std::move(updater),
[&ctx](util::ResponseExpirationCache::EntryData const& entry) {
return not ctx.isAdmin and not entry.response.contains("error");
}
);
if (result.has_value()) {
return Result{std::move(result).value()};
}
auto error = std::move(result).error();
Result errorResult{std::move(error.status)};
errorResult.warnings = std::move(error.warnings);
return errorResult;
if (not ctx.isAdmin and responseCache_) {
if (auto res = responseCache_->get(ctx.method); res.has_value())
return Result{std::move(res).value()};
}
return buildResponseImpl(ctx);
if (backend_->isTooBusy()) {
LOG(log_.error()) << "Database is too busy. Rejecting request";
notifyTooBusy(); // TODO: should we add ctx.method if we have it?
return Result{Status{RippledError::rpcTOO_BUSY}};
}
auto const method = handlerProvider_->getHandler(ctx.method);
if (!method) {
notifyUnknownCommand();
return Result{Status{RippledError::rpcUNKNOWN_COMMAND}};
}
try {
LOG(perfLog_.debug()) << ctx.tag() << " start executing rpc `" << ctx.method << '`';
auto const context = Context{
.yield = ctx.yield,
.session = ctx.session,
.isAdmin = ctx.isAdmin,
.clientIp = ctx.clientIp,
.apiVersion = ctx.apiVersion
};
auto v = (*method).process(ctx.params, context);
LOG(perfLog_.debug()) << ctx.tag() << " finish executing rpc `" << ctx.method << '`';
if (not v) {
notifyErrored(ctx.method);
} else if (not ctx.isAdmin and responseCache_) {
responseCache_->put(ctx.method, v.result->as_object());
}
return Result{std::move(v)};
} catch (data::DatabaseTimeout const& t) {
LOG(log_.error()) << "Database timeout";
notifyTooBusy();
return Result{Status{RippledError::rpcTOO_BUSY}};
} catch (std::exception const& ex) {
LOG(log_.error()) << ctx.tag() << "Caught exception: " << ex.what();
notifyInternalError();
return Result{Status{RippledError::rpcINTERNAL}};
}
}
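Editor's note: the try/catch above maps failure modes to RPC statuses: a database timeout becomes a retryable "too busy", anything unexpected becomes an internal error. A condensed sketch of the pattern with stand-in types:

#include <exception>
#include <stdexcept>

enum class Status { Ok, TooBusy, Internal };

struct DatabaseTimeout : std::runtime_error {
    DatabaseTimeout() : std::runtime_error("database timeout") {}
};

template <typename Handler>
Status
runHandler(Handler&& handler)
{
    try {
        handler();
        return Status::Ok;
    } catch (DatabaseTimeout const&) {
        return Status::TooBusy;  // overload is retryable, not fatal
    } catch (std::exception const&) {
        return Status::Internal;  // anything unexpected is an internal error
    }
}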
/**

View File

@@ -1283,9 +1283,10 @@ postProcessOrderBook(
} else {
saTakerGetsFunded = saOwnerFundsLimit;
offerJson["taker_gets_funded"] = toBoostJson(saTakerGetsFunded.getJson(ripple::JsonOptions::none));
offerJson["taker_pays_funded"] =
toBoostJson(std::min(saTakerPays, ripple::multiply(saTakerGetsFunded, dirRate, saTakerPays.issue()))
.getJson(ripple::JsonOptions::none));
offerJson["taker_pays_funded"] = toBoostJson(
std::min(saTakerPays, ripple::multiply(saTakerGetsFunded, dirRate, saTakerPays.issue()))
.getJson(ripple::JsonOptions::none)
);
}
ripple::STAmount const saOwnerPays = (ripple::parityRate == offerRate)

View File

@@ -46,7 +46,8 @@ WorkQueue::OneTimeCallable::operator()()
called_ = true;
}
}
WorkQueue::OneTimeCallable::operator bool() const
WorkQueue::OneTimeCallable::
operator bool() const
{
return func_.operator bool();
}

View File

@@ -107,8 +107,8 @@ public:
template <SomeRequirement... Requirements>
explicit IfType(Requirements&&... requirements)
: processor_(
[... r = std::forward<Requirements>(requirements
)](boost::json::value& j, std::string_view key) -> MaybeError {
[... r = std::forward<Requirements>(requirements)](boost::json::value& j, std::string_view key)
-> MaybeError {
std::optional<Status> firstFailure = std::nullopt;
// the check logic is the same as fieldspec

View File

@@ -457,7 +457,10 @@ public:
checkIsU32Numeric(std::string_view sv);
template <class HexType>
requires(std::is_same_v<HexType, ripple::uint160> || std::is_same_v<HexType, ripple::uint192> || std::is_same_v<HexType, ripple::uint256>)
requires(
std::is_same_v<HexType, ripple::uint160> || std::is_same_v<HexType, ripple::uint192> ||
std::is_same_v<HexType, ripple::uint256>
)
MaybeError
makeHexStringValidator(boost::json::value const& value, std::string_view key)
{

View File

@@ -108,9 +108,11 @@ GetAggregatePriceHandler::process(GetAggregatePriceHandler::Input input, Context
auto const scale =
iter->isFieldPresent(ripple::sfScale) ? -static_cast<int>(iter->getFieldU8(ripple::sfScale)) : 0;
timestampPricesBiMap.insert(TimestampPricesBiMap::value_type(
node.getFieldU32(ripple::sfLastUpdateTime), ripple::STAmount{ripple::noIssue(), price, scale}
));
timestampPricesBiMap.insert(
TimestampPricesBiMap::value_type(
node.getFieldU32(ripple::sfLastUpdateTime), ripple::STAmount{ripple::noIssue(), price, scale}
)
);
return true;
}
return false;
@@ -263,12 +265,14 @@ tag_invoke(boost::json::value_to_tag<GetAggregatePriceHandler::Input>, boost::js
}
for (auto const& oracle : jsonObject.at(JS(oracles)).as_array()) {
input.oracles.push_back(GetAggregatePriceHandler::Oracle{
.documentId = boost::json::value_to<std::uint64_t>(oracle.as_object().at(JS(oracle_document_id))),
.account = *util::parseBase58Wrapper<ripple::AccountID>(
boost::json::value_to<std::string>(oracle.as_object().at(JS(account)))
)
});
input.oracles.push_back(
GetAggregatePriceHandler::Oracle{
.documentId = boost::json::value_to<std::uint64_t>(oracle.as_object().at(JS(oracle_document_id))),
.account = *util::parseBase58Wrapper<ripple::AccountID>(
boost::json::value_to<std::string>(oracle.as_object().at(JS(account)))
)
}
);
}
input.baseAsset = boost::json::value_to<std::string>(jv.at(JS(base_asset)));
input.quoteAsset = boost::json::value_to<std::string>(jv.at(JS(quote_asset)));

View File

@@ -75,9 +75,9 @@ LedgerEntryHandler::process(LedgerEntryHandler::Input input, Context const& ctx)
key = expectedkey.value();
} else if (input.offer) {
auto const id =
util::parseBase58Wrapper<ripple::AccountID>(boost::json::value_to<std::string>(input.offer->at(JS(account)))
);
auto const id = util::parseBase58Wrapper<ripple::AccountID>(
boost::json::value_to<std::string>(input.offer->at(JS(account)))
);
key = ripple::keylet::offer(*id, boost::json::value_to<std::uint32_t>(input.offer->at(JS(seq)))).key;
} else if (input.rippleStateAccount) {
auto const id1 = util::parseBase58Wrapper<ripple::AccountID>(
@@ -91,9 +91,9 @@ LedgerEntryHandler::process(LedgerEntryHandler::Input input, Context const& ctx)
key = ripple::keylet::line(*id1, *id2, currency).key;
} else if (input.escrow) {
auto const id =
util::parseBase58Wrapper<ripple::AccountID>(boost::json::value_to<std::string>(input.escrow->at(JS(owner)))
);
auto const id = util::parseBase58Wrapper<ripple::AccountID>(
boost::json::value_to<std::string>(input.escrow->at(JS(owner)))
);
key = ripple::keylet::escrow(*id, input.escrow->at(JS(seq)).as_int64()).key;
} else if (input.depositPreauth) {
auto const owner = util::parseBase58Wrapper<ripple::AccountID>(
@@ -124,9 +124,9 @@ LedgerEntryHandler::process(LedgerEntryHandler::Input input, Context const& ctx)
key = ripple::keylet::depositPreauth(owner.value(), authCreds).key;
}
} else if (input.ticket) {
auto const id =
util::parseBase58Wrapper<ripple::AccountID>(boost::json::value_to<std::string>(input.ticket->at(JS(account))
));
auto const id = util::parseBase58Wrapper<ripple::AccountID>(
boost::json::value_to<std::string>(input.ticket->at(JS(account)))
);
key = ripple::getTicketIndex(*id, input.ticket->at(JS(ticket_seq)).as_int64());
} else if (input.amm) {
@@ -136,9 +136,9 @@ LedgerEntryHandler::process(LedgerEntryHandler::Input input, Context const& ctx)
if (ripple::isXRP(currency)) {
return ripple::xrpIssue();
}
auto const issuer =
util::parseBase58Wrapper<ripple::AccountID>(boost::json::value_to<std::string>(assetJson.at(JS(issuer)))
);
auto const issuer = util::parseBase58Wrapper<ripple::AccountID>(
boost::json::value_to<std::string>(assetJson.at(JS(issuer)))
);
return ripple::Issue{currency, *issuer};
};
@@ -174,9 +174,9 @@ LedgerEntryHandler::process(LedgerEntryHandler::Input input, Context const& ctx)
} else if (input.mptoken) {
auto const holder =
ripple::parseBase58<ripple::AccountID>(boost::json::value_to<std::string>(input.mptoken->at(JS(account))));
auto const mptIssuanceID =
ripple::uint192{std::string_view(boost::json::value_to<std::string>(input.mptoken->at(JS(mpt_issuance_id))))
};
auto const mptIssuanceID = ripple::uint192{
std::string_view(boost::json::value_to<std::string>(input.mptoken->at(JS(mpt_issuance_id))))
};
key = ripple::keylet::mptoken(mptIssuanceID, *holder).key;
} else if (input.permissionedDomain) {
auto const account = ripple::parseBase58<ripple::AccountID>(
@@ -192,9 +192,9 @@ LedgerEntryHandler::process(LedgerEntryHandler::Input input, Context const& ctx)
} else if (input.delegate) {
auto const account =
ripple::parseBase58<ripple::AccountID>(boost::json::value_to<std::string>(input.delegate->at(JS(account))));
auto const authorize =
ripple::parseBase58<ripple::AccountID>(boost::json::value_to<std::string>(input.delegate->at(JS(authorize)))
);
auto const authorize = ripple::parseBase58<ripple::AccountID>(
boost::json::value_to<std::string>(input.delegate->at(JS(authorize)))
);
key = ripple::keylet::delegate(*account, *authorize).key;
} else {
// Must specify 1 of the following fields to indicate what type

View File

@@ -146,12 +146,12 @@ public:
return Error{Status{RippledError::rpcINVALID_PARAMS, "malformedAccounts"}};
}
auto const id1 =
util::parseBase58Wrapper<ripple::AccountID>(boost::json::value_to<std::string>(value.as_array()[0])
);
auto const id2 =
util::parseBase58Wrapper<ripple::AccountID>(boost::json::value_to<std::string>(value.as_array()[1])
);
auto const id1 = util::parseBase58Wrapper<ripple::AccountID>(
boost::json::value_to<std::string>(value.as_array()[0])
);
auto const id2 = util::parseBase58Wrapper<ripple::AccountID>(
boost::json::value_to<std::string>(value.as_array()[1])
);
if (!id1 || !id2)
return Error{Status{ClioError::RpcMalformedAddress, "malformedAddresses"}};

View File

@@ -138,14 +138,7 @@ NFTOffersHandlerBase::iterateOfferDirectory(
}
auto result = traverseOwnedNodes(
*sharedPtrBackend_,
directory,
cursor,
startHint,
lgrInfo.seq,
reserve,
yield,
[&offers](ripple::SLE&& offer) {
*sharedPtrBackend_, directory, cursor, startHint, lgrInfo.seq, reserve, yield, [&offers](ripple::SLE&& offer) {
if (offer.getType() == ripple::ltNFTOKEN_OFFER) {
offers.push_back(std::move(offer));
return true;

View File

@@ -140,8 +140,10 @@ private:
subscribeToAccounts(std::vector<std::string> const& accounts, feed::SubscriberSharedPtr const& session) const;
void
subscribeToAccountsProposed(std::vector<std::string> const& accounts, feed::SubscriberSharedPtr const& session)
const;
subscribeToAccountsProposed(
std::vector<std::string> const& accounts,
feed::SubscriberSharedPtr const& session
) const;
void
subscribeToBooks(

View File

@@ -129,8 +129,10 @@ UnsubscribeHandler::unsubscribeFromStreams(
}
void
UnsubscribeHandler::unsubscribeFromAccounts(std::vector<std::string> accounts, feed::SubscriberSharedPtr const& session)
const
UnsubscribeHandler::unsubscribeFromAccounts(
std::vector<std::string> accounts,
feed::SubscriberSharedPtr const& session
) const
{
for (auto const& account : accounts) {
auto const accountID = accountFromStringStrict(account);
@@ -150,8 +152,10 @@ UnsubscribeHandler::unsubscribeFromProposedAccounts(
}
}
void
UnsubscribeHandler::unsubscribeFromBooks(std::vector<OrderBook> const& books, feed::SubscriberSharedPtr const& session)
const
UnsubscribeHandler::unsubscribeFromBooks(
std::vector<OrderBook> const& books,
feed::SubscriberSharedPtr const& session
) const
{
for (auto const& orderBook : books) {
subscriptions_->unsubBook(orderBook.book, session);

View File

@@ -106,8 +106,10 @@ private:
unsubscribeFromAccounts(std::vector<std::string> accounts, feed::SubscriberSharedPtr const& session) const;
void
unsubscribeFromProposedAccounts(std::vector<std::string> accountsProposed, feed::SubscriberSharedPtr const& session)
const;
unsubscribeFromProposedAccounts(
std::vector<std::string> accountsProposed,
feed::SubscriberSharedPtr const& session
) const;
void
unsubscribeFromBooks(std::vector<OrderBook> const& books, feed::SubscriberSharedPtr const& session) const;

View File

@@ -197,22 +197,31 @@ private:
std::expected<ValueType, ErrorType>
wait(boost::asio::yield_context yield, Updater updater, Verifier verifier)
{
boost::asio::steady_timer timer{yield.get_executor(), boost::asio::steady_timer::duration::max()};
struct SharedContext {
SharedContext(boost::asio::yield_context y)
: timer(y.get_executor(), boost::asio::steady_timer::duration::max())
{
}
boost::asio::steady_timer timer;
std::optional<std::expected<ValueType, ErrorType>> result;
};
auto sharedContext = std::make_shared<SharedContext>(yield);
boost::system::error_code errorCode;
std::optional<std::expected<ValueType, ErrorType>> result;
boost::signals2::scoped_connection const slot =
updateFinished_.connect([yield, &timer, &result](std::expected<ValueType, ErrorType> value) {
boost::asio::spawn(yield, [&timer, &result, value = std::move(value)](auto&&) {
result = std::move(value);
timer.cancel();
updateFinished_.connect([yield, sharedContext](std::expected<ValueType, ErrorType> value) {
boost::asio::spawn(yield, [sharedContext = std::move(sharedContext), value = std::move(value)](auto&&) {
sharedContext->result = std::move(value);
sharedContext->timer.cancel();
});
});
if (state_ == State::Updating) {
timer.async_wait(yield[errorCode]);
ASSERT(result.has_value(), "There should be some value after waiting");
return std::move(result).value();
sharedContext->timer.async_wait(yield[errorCode]);
ASSERT(sharedContext->result.has_value(), "There should be some value after waiting");
return std::move(sharedContext->result).value();
}
return asyncGet(yield, std::move(updater), std::move(verifier));
}
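Editor's note: the fix above moves the timer and result into a SharedContext held by shared_ptr, because the old code let a detached completion handler write through references to stack variables that could already be dead by the time it ran. The general rule, in a minimal standalone form:

#include <functional>
#include <memory>
#include <optional>

struct SharedState {
    std::optional<int> result;
};

// The handler owns its state through the shared_ptr copy, so the state stays
// valid even after the scope that created it has returned. The bug being fixed
// was the opposite: capturing stack variables by reference in a deferred handler.
std::function<void(int)>
makeHandler(std::shared_ptr<SharedState> state)
{
    return [state](int value) { state->result = value; };
}

int
main()
{
    std::function<void(int)> handler;
    {
        auto state = std::make_shared<SharedState>();
        handler = makeHandler(state);
    }             // the local shared_ptr is gone here
    handler(42);  // still safe: the capture keeps SharedState alive
}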

View File

@@ -42,12 +42,15 @@ target_sources(
# This must be above the target_link_libraries call otherwise backtrace doesn't work
if ("${san}" STREQUAL "")
target_link_libraries(clio_util PUBLIC Boost::stacktrace_backtrace dl libbacktrace::libbacktrace)
target_link_libraries(clio_util PUBLIC Boost::stacktrace_backtrace)
endif ()
target_link_libraries(
clio_util
PUBLIC Boost::headers
Boost::iostreams
Boost::log
Boost::log_setup
fmt::fmt
openssl::openssl
xrpl::libxrpl

View File

@@ -134,8 +134,7 @@ public:
return;
boost::asio::spawn(
yield_,
[signal = familySignal_, fn = std::move(fn)](boost::asio::yield_context yield) mutable {
yield_, [signal = familySignal_, fn = std::move(fn)](boost::asio::yield_context yield) mutable {
Coroutine coroutine(std::move(yield), std::move(signal));
fn(coroutine);
}

View File

@@ -16,31 +16,44 @@
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "util/ResponseExpirationCache.hpp"
#include "util/Assert.hpp"
#include <boost/asio/spawn.hpp>
#include <boost/json/object.hpp>
#include <chrono>
#include <memory>
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <string>
#include <unordered_set>
#include <utility>
namespace util {
ResponseExpirationCache::ResponseExpirationCache(
std::chrono::steady_clock::duration cacheTimeout,
std::unordered_set<std::string> const& cmds
)
: cacheTimeout_(cacheTimeout)
void
ResponseExpirationCache::Entry::put(boost::json::object response)
{
for (auto const& command : cmds) {
cache_.emplace(command, std::make_unique<CacheEntry>());
}
response_ = std::move(response);
lastUpdated_ = std::chrono::steady_clock::now();
}
std::optional<boost::json::object>
ResponseExpirationCache::Entry::get() const
{
return response_;
}
std::chrono::steady_clock::time_point
ResponseExpirationCache::Entry::lastUpdated() const
{
return lastUpdated_;
}
void
ResponseExpirationCache::Entry::invalidate()
{
response_.reset();
}
bool
@@ -49,41 +62,38 @@ ResponseExpirationCache::shouldCache(std::string const& cmd)
return cache_.contains(cmd);
}
std::expected<boost::json::object, ResponseExpirationCache::Error>
ResponseExpirationCache::getOrUpdate(
boost::asio::yield_context yield,
std::string const& cmd,
Updater updater,
Verifier verifier
)
std::optional<boost::json::object>
ResponseExpirationCache::get(std::string const& cmd) const
{
auto it = cache_.find(cmd);
ASSERT(it != cache_.end(), "Can't get a value which is not in the cache");
if (it == cache_.end())
return std::nullopt;
auto& entry = it->second;
{
auto result = entry->asyncGet(yield, updater, verifier);
if (not result.has_value()) {
return std::unexpected{std::move(result).error()};
}
if (std::chrono::steady_clock::now() - result->lastUpdated < cacheTimeout_) {
return std::move(result)->response;
}
}
auto const& entry = it->second.lock<std::shared_lock>();
if (std::chrono::steady_clock::now() - entry->lastUpdated() > cacheTimeout_)
return std::nullopt;
// Force update due to cache timeout
auto result = entry->update(yield, std::move(updater), std::move(verifier));
if (not result.has_value()) {
return std::unexpected{std::move(result).error()};
}
return std::move(result)->response;
return entry->get();
}
void
ResponseExpirationCache::put(std::string const& cmd, boost::json::object const& response)
{
if (not shouldCache(cmd))
return;
ASSERT(cache_.contains(cmd), "Command is not in the cache: {}", cmd);
auto entry = cache_[cmd].lock<std::unique_lock>();
entry->put(response);
}
void
ResponseExpirationCache::invalidate()
{
for (auto& [_, entry] : cache_) {
entry->invalidate();
auto entryLock = entry.lock<std::unique_lock>();
entryLock->invalidate();
}
}
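Editor's note: after the revert, the cache is a plain map of entries behind the expiration check shown above; an entry is served only if it is younger than the timeout, otherwise the caller recomputes and put()s. A minimal single-threaded sketch (the real class additionally wraps each entry in a mutex):

#include <chrono>
#include <optional>
#include <string>
#include <unordered_map>

struct TtlCache {
    struct Entry {
        std::chrono::steady_clock::time_point lastUpdated;
        std::string response;
    };

    std::chrono::steady_clock::duration timeout;
    std::unordered_map<std::string, Entry> entries;

    std::optional<std::string>
    get(std::string const& cmd) const
    {
        auto it = entries.find(cmd);
        if (it == entries.end())
            return std::nullopt;
        if (std::chrono::steady_clock::now() - it->second.lastUpdated > timeout)
            return std::nullopt;  // stale: force the caller to refresh via put()
        return it->second.response;
    }

    void
    put(std::string cmd, std::string response)
    {
        entries[std::move(cmd)] = {std::chrono::steady_clock::now(), std::move(response)};
    }
};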

View File

@@ -16,18 +16,15 @@
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "rpc/Errors.hpp"
#include "util/BlockingCache.hpp"
#include "util/Mutex.hpp"
#include <boost/asio/spawn.hpp>
#include <boost/json/array.hpp>
#include <boost/json/object.hpp>
#include <chrono>
#include <memory>
#include <optional>
#include <shared_mutex>
#include <string>
#include <unordered_map>
#include <unordered_set>
@@ -36,89 +33,94 @@ namespace util {
/**
* @brief Cache of requests' responses with TTL support and configurable cacheable commands
*
* This class implements a time-based expiration cache for RPC responses. It allows
* caching responses for specified commands and automatically invalidates them after
* a configured timeout period. The cache uses BlockingCache internally to handle
* concurrent access and updates.
*/
class ResponseExpirationCache {
public:
/**
* @brief A data structure to store a cache entry with its timestamp
* @brief A class to store a cache entry.
*/
struct EntryData {
std::chrono::steady_clock::time_point lastUpdated; ///< When the entry was last updated
boost::json::object response; ///< The cached response data
class Entry {
std::chrono::steady_clock::time_point lastUpdated_;
std::optional<boost::json::object> response_;
public:
/**
* @brief Put a response into the cache
*
* @param response The response to store
*/
void
put(boost::json::object response);
/**
* @brief Get the response from the cache
*
* @return The response
*/
std::optional<boost::json::object>
get() const;
/**
* @brief Get the last time the cache was updated
*
* @return The last time the cache was updated
*/
std::chrono::steady_clock::time_point
lastUpdated() const;
/**
* @brief Invalidate the cache entry
*/
void
invalidate();
};
/**
* @brief A data structure to represent errors that can occur during an update of the cache
*/
struct Error {
rpc::Status status; ///< The status code and message of the error
boost::json::array warnings; ///< Any warnings related to the request
bool
operator==(Error const&) const = default;
};
using CacheEntry = util::BlockingCache<EntryData, Error>;
private:
std::chrono::steady_clock::duration cacheTimeout_;
std::unordered_map<std::string, std::unique_ptr<CacheEntry>> cache_;
std::unordered_map<std::string, util::Mutex<Entry, std::shared_mutex>> cache_;
bool
shouldCache(std::string const& cmd);
public:
/**
* @brief Construct a new ResponseExpirationCache object
* @brief Construct a new Cache object
*
* @param cacheTimeout The time period after which cached entries expire
* @param cmds The commands that should be cached (requests for other commands won't be cached)
* @param cacheTimeout The time for cache entries to expire
* @param cmds The commands that should be cached
*/
ResponseExpirationCache(
std::chrono::steady_clock::duration cacheTimeout,
std::unordered_set<std::string> const& cmds
);
)
: cacheTimeout_(cacheTimeout)
{
for (auto const& command : cmds) {
cache_.emplace(command, Entry{});
}
}
/**
* @brief Check if the given command should be cached
* @brief Get a response from the cache
*
* @param cmd The command to check
* @return true if the command should be cached, false otherwise
*/
bool
shouldCache(std::string const& cmd);
using Updater = CacheEntry::Updater;
using Verifier = CacheEntry::Verifier;
/**
* @brief Get a cached response or update the cache if necessary
*
* This method returns a cached response if it exists and hasn't expired.
* If the cache entry is expired or doesn't exist, it calls the updater to
* generate a new value. If multiple coroutines request the same entry
* simultaneously, only one updater will be called while others wait.
*
* @note cmd must be one of the commands that are cached. There is an ASSERT() inside the function
*
* @param yield Asio yield context for coroutine suspension
* @param cmd The command to get the response for
* @param updater Function to generate the response if not in cache or expired
* @param verifier Function to validate if a response should be cached
* @return The cached or newly generated response, or an error
* @return The response if it exists or std::nullopt otherwise
*/
[[nodiscard]] std::expected<boost::json::object, Error>
getOrUpdate(boost::asio::yield_context yield, std::string const& cmd, Updater updater, Verifier verifier);
[[nodiscard]] std::optional<boost::json::object>
get(std::string const& cmd) const;
/**
* @brief Put a response into the cache if the request should be cached
*
* @param cmd The command to store the response for
* @param response The response to store
*/
void
put(std::string const& cmd, boost::json::object const& response);
/**
* @brief Invalidate all entries in the cache
*
* This causes all cached entries to be cleared, forcing the next access
* to generate new responses.
*/
void
invalidate();
};
} // namespace util
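Editor's note: each Entry above lives inside util::Mutex<Entry, std::shared_mutex>, which pairs the data with its lock so access always goes through a guard. A simplified functional variant of the same idea (Clio's real class returns guard objects from lock<...>() instead):

#include <mutex>
#include <shared_mutex>
#include <utility>

template <typename T>
class LockedBox {
    mutable std::shared_mutex mutex_;
    T data_;

public:
    template <typename Fn>
    auto
    read(Fn&& fn) const  // shared lock: many concurrent readers, data exposed as const
    {
        std::shared_lock const lock{mutex_};
        return std::forward<Fn>(fn)(data_);
    }

    template <typename Fn>
    auto
    write(Fn&& fn)  // exclusive lock: one writer at a time
    {
        std::unique_lock const lock{mutex_};
        return std::forward<Fn>(fn)(data_);
    }
};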

View File

@@ -78,8 +78,7 @@ SignalsHandler::SignalsHandler(config::ClioConfigDefinition const& config, std::
<< " milliseconds.";
setHandler(impl::SignalsHandlerStatic::handleSecondSignal);
timer_.emplace(context_.scheduleAfter(
gracefulPeriod_,
[forceExitHandler = std::move(forceExitHandler)](auto&& stopToken, bool canceled) {
gracefulPeriod_, [forceExitHandler = std::move(forceExitHandler)](auto&& stopToken, bool canceled) {
// TODO: Update this after https://github.com/XRPLF/clio/issues/1380
if (not stopToken.isStopRequested() and not canceled) {
LOG(LogService::warn()) << "Force exit at the end of graceful period.";

View File

@@ -166,17 +166,16 @@ public:
static_assert(not std::is_same_v<RetType, std::any>);
auto const millis = std::chrono::duration_cast<std::chrono::milliseconds>(delay);
return AnyOperation<RetType>(pimpl_->scheduleAfter(
millis,
[fn = std::forward<decltype(fn)>(fn)](auto stopToken) -> std::any {
return AnyOperation<RetType>(
pimpl_->scheduleAfter(millis, [fn = std::forward<decltype(fn)>(fn)](auto stopToken) -> std::any {
if constexpr (std::is_void_v<RetType>) {
fn(std::move(stopToken));
return {};
} else {
return std::make_any<RetType>(fn(std::move(stopToken)));
}
}
));
})
);
}
/**
@@ -197,8 +196,7 @@ public:
auto const millis = std::chrono::duration_cast<std::chrono::milliseconds>(delay);
return AnyOperation<RetType>(pimpl_->scheduleAfter(
millis,
[fn = std::forward<decltype(fn)>(fn)](auto stopToken, auto cancelled) -> std::any {
millis, [fn = std::forward<decltype(fn)>(fn)](auto stopToken, auto cancelled) -> std::any {
if constexpr (std::is_void_v<RetType>) {
fn(std::move(stopToken), cancelled);
return {};
@@ -224,13 +222,10 @@ public:
auto const millis = std::chrono::duration_cast<std::chrono::milliseconds>(interval);
return AnyOperation<RetType>( //
pimpl_->executeRepeatedly(
millis,
[fn = std::forward<decltype(fn)>(fn)] -> std::any {
fn();
return {};
}
)
pimpl_->executeRepeatedly(millis, [fn = std::forward<decltype(fn)>(fn)] -> std::any {
fn();
return {};
})
);
}

View File

@@ -146,13 +146,10 @@ public:
auto const millis = std::chrono::duration_cast<std::chrono::milliseconds>(interval);
return AnyOperation<RetType>( //
pimpl_->executeRepeatedly(
millis,
[fn = std::forward<decltype(fn)>(fn)] -> std::any {
fn();
return {};
}
)
pimpl_->executeRepeatedly(millis, [fn = std::forward<decltype(fn)>(fn)] -> std::any {
fn();
return {};
})
);
}

View File

@@ -170,7 +170,7 @@ template <typename T>
concept SomeStdDuration = requires {
// Thank you Ed Catmur for this trick.
// See https://stackoverflow.com/questions/74383254/concept-that-models-only-the-stdchrono-duration-types
[]<typename Rep, typename Period>( //
[]<typename Rep, typename Period>( //
std::type_identity<std::chrono::duration<Rep, Period>>
) {}(std::type_identity<std::decay_t<T>>());
};
@@ -180,7 +180,7 @@ concept SomeStdDuration = requires {
*/
template <typename T>
concept SomeStdOptional = requires {
[]<typename Type>( //
[]<typename Type>( //
std::type_identity<std::optional<Type>>
) {}(std::type_identity<std::decay_t<T>>());
};
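Editor's note: the whitespace-only change above touches a useful trick: an immediately-invoked generic lambda compiles only if its parameter deduces as the target template, and requires { ... } turns that compilability check into a concept. The same trick applied to std::vector, as a self-contained illustration:

#include <type_traits>
#include <vector>

template <typename T>
concept SomeStdVector = requires {
    []<typename Elem, typename Alloc>(          // deduction succeeds only for std::vector<...>
        std::type_identity<std::vector<Elem, Alloc>>
    ) {}(std::type_identity<std::decay_t<T>>());
};

static_assert(SomeStdVector<std::vector<int>>);
static_assert(not SomeStdVector<int>);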

View File

@@ -259,8 +259,8 @@ private:
* without default values must be present in the user's config file.
*/
static ClioConfigDefinition gClioConfig = ClioConfigDefinition{
{{"database.type", ConfigValue{ConfigType::String}.defaultValue("cassandra").withConstraint(gValidateCassandraName)
},
{{"database.type",
ConfigValue{ConfigType::String}.defaultValue("cassandra").withConstraint(gValidateCassandraName)},
{"database.cassandra.contact_points", ConfigValue{ConfigType::String}.defaultValue("localhost")},
{"database.cassandra.secure_connect_bundle", ConfigValue{ConfigType::String}.optional()},
{"database.cassandra.port", ConfigValue{ConfigType::Integer}.withConstraint(gValidatePort).optional()},
@@ -284,10 +284,10 @@ static ClioConfigDefinition gClioConfig = ClioConfigDefinition{
{"database.cassandra.queue_size_io", ConfigValue{ConfigType::Integer}.optional().withConstraint(gValidateUint16)},
{"database.cassandra.write_batch_size",
ConfigValue{ConfigType::Integer}.defaultValue(20).withConstraint(gValidateUint16)},
{"database.cassandra.connect_timeout", ConfigValue{ConfigType::Integer}.optional().withConstraint(gValidateUint32)
},
{"database.cassandra.request_timeout", ConfigValue{ConfigType::Integer}.optional().withConstraint(gValidateUint32)
},
{"database.cassandra.connect_timeout",
ConfigValue{ConfigType::Integer}.optional().withConstraint(gValidateUint32)},
{"database.cassandra.request_timeout",
ConfigValue{ConfigType::Integer}.optional().withConstraint(gValidateUint32)},
{"database.cassandra.username", ConfigValue{ConfigType::String}.optional()},
{"database.cassandra.password", ConfigValue{ConfigType::String}.optional()},
{"database.cassandra.certfile", ConfigValue{ConfigType::String}.optional()},
@@ -308,8 +308,8 @@ static ClioConfigDefinition gClioConfig = ClioConfigDefinition{
{"num_markers", ConfigValue{ConfigType::Integer}.optional().withConstraint(gValidateNumMarkers)},
{"dos_guard.whitelist.[]", Array{ConfigValue{ConfigType::String}.optional()}},
{"dos_guard.max_fetches", ConfigValue{ConfigType::Integer}.defaultValue(1000'000u).withConstraint(gValidateUint32)
},
{"dos_guard.max_fetches",
ConfigValue{ConfigType::Integer}.defaultValue(1000'000u).withConstraint(gValidateUint32)},
{"dos_guard.max_connections", ConfigValue{ConfigType::Integer}.defaultValue(20u).withConstraint(gValidateUint32)},
{"dos_guard.max_requests", ConfigValue{ConfigType::Integer}.defaultValue(20u).withConstraint(gValidateUint32)},
{"dos_guard.sweep_interval",
@@ -358,8 +358,8 @@ static ClioConfigDefinition gClioConfig = ClioConfigDefinition{
{"cache.page_fetch_size", ConfigValue{ConfigType::Integer}.defaultValue(512).withConstraint(gValidateUint16)},
{"cache.load", ConfigValue{ConfigType::String}.defaultValue("async").withConstraint(gValidateLoadMode)},
{"log_channels.[].channel", Array{ConfigValue{ConfigType::String}.optional().withConstraint(gValidateChannelName)}
},
{"log_channels.[].channel",
Array{ConfigValue{ConfigType::String}.optional().withConstraint(gValidateChannelName)}},
{"log_channels.[].log_level",
Array{ConfigValue{ConfigType::String}.optional().withConstraint(gValidateLogLevelName)}},
@@ -376,8 +376,8 @@ static ClioConfigDefinition gClioConfig = ClioConfigDefinition{
{"log_rotation_size", ConfigValue{ConfigType::Integer}.defaultValue(2048).withConstraint(gValidateUint32)},
{"log_directory_max_size", ConfigValue{ConfigType::Integer}.defaultValue(50 * 1024).withConstraint(gValidateUint32)
},
{"log_directory_max_size",
ConfigValue{ConfigType::Integer}.defaultValue(50 * 1024).withConstraint(gValidateUint32)},
{"log_rotation_hour_interval", ConfigValue{ConfigType::Integer}.defaultValue(12).withConstraint(gValidateUint32)},

View File

@@ -134,10 +134,11 @@ public:
private:
static constexpr auto kCONFIG_DESCRIPTION = std::array{
KV{.key = "database.type",
.value =
"Specifies the type of database used for storing and retrieving data required by the Clio server. Both "
"ScyllaDB and Cassandra can serve as backends for Clio; however, this value must be set to `cassandra`."
KV{
.key = "database.type",
.value =
"Specifies the type of database used for storing and retrieving data required by the Clio server. Both "
"ScyllaDB and Cassandra can serve as backends for Clio; however, this value must be set to `cassandra`."
},
KV{.key = "database.cassandra.contact_points",
.value = "A list of IP addresses or hostnames for the initial cluster nodes (Cassandra or ScyllaDB) that "
@@ -190,9 +191,8 @@ private:
.value = "Specifies the timeout duration (in seconds) for the forwarding cache used in `rippled` "
"communication. A value of `0` means disabling this feature."},
KV{.key = "forwarding.request_timeout",
.value =
"Specifies the timeout duration (in seconds) for the forwarding request used in `rippled` communication."
},
.value = "Specifies the timeout duration (in seconds) for the forwarding request used in `rippled` "
"communication."},
KV{.key = "rpc.cache_timeout",
.value = "Specifies the timeout duration (in seconds) for RPC cache response to timeout. A value of `0` "
"means disabling this feature."},
@@ -201,16 +201,15 @@ private:
KV{.key = "dos_guard.max_fetches", .value = "The maximum number of fetch operations allowed by DOS guard."},
KV{.key = "dos_guard.max_connections",
.value = "The maximum number of concurrent connections for a specific IP address."},
KV{.key = "dos_guard.max_requests", .value = "The maximum number of requests allowed for a specific IP address."
},
KV{.key = "dos_guard.max_requests",
.value = "The maximum number of requests allowed for a specific IP address."},
KV{.key = "dos_guard.sweep_interval", .value = "Interval in seconds for DOS guard to sweep(clear) its state."},
KV{.key = "workers", .value = "The number of threads used to process RPC requests."},
KV{.key = "server.ip", .value = "The IP address of the Clio HTTP server."},
KV{.key = "server.port", .value = "The port number of the Clio HTTP server."},
KV{.key = "server.max_queue_size",
.value =
"The maximum size of the server's request queue. If set to `0`, this means there is no queue size limit."
},
.value = "The maximum size of the server's request queue. If set to `0`, this means there is no queue size "
"limit."},
KV{.key = "server.local_admin",
.value = "Indicates if requests from `localhost` are allowed to call Clio admin-only APIs. Note that this "
"setting cannot be enabled "
@@ -232,8 +231,8 @@ private:
"client is slow to receive it, ensuring delivery once the client is ready."},
KV{.key = "prometheus.enabled", .value = "Enables or disables Prometheus metrics."},
KV{.key = "prometheus.compress_reply", .value = "Enables or disables compression of Prometheus responses."},
KV{.key = "io_threads", .value = "The number of input/output (I/O) threads. The value cannot be less than `1`."
},
KV{.key = "io_threads",
.value = "The number of input/output (I/O) threads. The value cannot be less than `1`."},
KV{.key = "subscription_workers",
.value = "The number of worker threads or processes that are responsible for managing and processing "
"subscription-based tasks from `rippled`."},


@@ -25,6 +25,9 @@
#include "util/config/ArrayView.hpp"
#include "util/config/ConfigDefinition.hpp"
#include "util/config/ObjectView.hpp"
#include "util/prometheus/Counter.hpp"
#include "util/prometheus/Label.hpp"
#include "util/prometheus/Prometheus.hpp"
#include <boost/algorithm/string/predicate.hpp>
#include <boost/date_time/posix_time/posix_time_duration.hpp>
@@ -42,6 +45,7 @@
#include <boost/log/keywords/target_file_name.hpp>
#include <boost/log/keywords/time_based_rotation.hpp>
#include <boost/log/sinks/text_file_backend.hpp>
#include <boost/log/utility/exception_handler.hpp>
#include <boost/log/utility/setup/common_attributes.hpp>
#include <boost/log/utility/setup/console.hpp>
#include <boost/log/utility/setup/file.hpp>
@@ -52,7 +56,9 @@
#include <array>
#include <cstddef>
#include <cstdint>
#include <exception>
#include <filesystem>
#include <functional>
#include <ios>
#include <iostream>
#include <optional>
@@ -65,6 +71,30 @@
namespace util {
namespace {
class LoggerExceptionHandler {
std::reference_wrapper<util::prometheus::CounterInt> exceptionCounter_ =
PrometheusService::counterInt("logger_exceptions_total_number", util::prometheus::Labels{});
public:
using result_type = void;
LoggerExceptionHandler()
{
ASSERT(PrometheusService::isInitialised(), "Prometheus should be initialised before Logger");
}
void
operator()(std::exception const& e) const
{
std::cerr << fmt::format("Exception in logger: {}\n", e.what());
++exceptionCounter_.get();
}
};
} // namespace
Logger LogService::generalLog = Logger{"General"};
Logger LogService::alertLog = Logger{"Alert"};
boost::log::filter LogService::filter{};
@@ -160,6 +190,10 @@ LogService::init(config::ClioConfigDefinition const& config)
sinks::file::make_collector(keywords::target = dirPath, keywords::max_size = dirSize)
);
fileSink->locked_backend()->scan_for_files();
boost::log::core::get()->set_exception_handler(
boost::log::make_exception_handler<std::exception>(LoggerExceptionHandler())
);
}
// get default severity, can be overridden per channel using the `log_channels` array
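
The new `LoggerExceptionHandler` routes exceptions thrown while a log record is being processed into a Prometheus counter instead of letting them propagate into the logging call site. A minimal standalone sketch of the same Boost.Log mechanism, without the counter:

```cpp
#include <boost/log/core.hpp>
#include <boost/log/trivial.hpp>
#include <boost/log/utility/exception_handler.hpp>

#include <exception>
#include <iostream>

// make_exception_handler<std::exception> wraps this functor so that any
// std::exception escaping a sink is rethrown and dispatched to operator().
struct PrintingHandler {
    using result_type = void;  // mirrored from the diff; used by Boost.Log's functional adapters

    void
    operator()(std::exception const& e) const
    {
        std::cerr << "Exception in logger: " << e.what() << '\n';
    }
};

int main()
{
    boost::log::core::get()->set_exception_handler(
        boost::log::make_exception_handler<std::exception>(PrintingHandler{})
    );
    BOOST_LOG_TRIVIAL(info) << "logging continues even if a sink throws";
}
```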


@@ -78,8 +78,12 @@ public:
) override;
std::unique_ptr<MetricBase>
operator()(std::string name, std::string labelsString, MetricType type, std::vector<double> const& buckets)
override;
operator()(
std::string name,
std::string labelsString,
MetricType type,
std::vector<double> const& buckets
) override;
private:
static std::unique_ptr<MetricBase>


@@ -184,6 +184,12 @@ PrometheusService::init(util::config::ClioConfigDefinition const& config)
impl = std::make_unique<util::prometheus::PrometheusImpl>(enabled, compressReply);
}
bool
PrometheusService::isInitialised()
{
return impl != nullptr;
}
util::prometheus::Bool
PrometheusService::boolMetric(std::string name, util::prometheus::Labels labels, std::optional<std::string> description)
{
@@ -271,7 +277,7 @@ PrometheusService::replaceInstance(std::unique_ptr<util::prometheus::PrometheusI
util::prometheus::PrometheusInterface&
PrometheusService::instance()
{
ASSERT(impl != nullptr, "PrometheusService::instance() called before init()");
ASSERT(isInitialised(), "PrometheusService::instance() called before init()");
return *impl;
}
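
`isInitialised()` lets dependents such as the new `LoggerExceptionHandler` assert on initialisation order instead of dereferencing a null `impl`. A generic sketch of the init-guard pattern (hypothetical `Service`, not the real `PrometheusService`):

```cpp
#include <cassert>
#include <memory>

struct Impl { /* metrics registry, etc. */ };

class Service {
public:
    static void
    init()
    {
        impl = std::make_unique<Impl>();
    }

    // The guard other singletons can check (or assert on) before use.
    static bool
    isInitialised()
    {
        return impl != nullptr;
    }

    static Impl&
    instance()
    {
        assert(isInitialised() && "instance() called before init()");
        return *impl;
    }

private:
    static inline std::unique_ptr<Impl> impl;
};

int main()
{
    assert(not Service::isInitialised());
    Service::init();
    assert(Service::isInitialised());
}
```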


@@ -260,6 +260,14 @@ public:
static void
init(util::config::ClioConfigDefinition const& config);
/**
* @brief Whether the singleton has already been initialised
*
* @return True if the singleton has already been initialised, false otherwise
*/
static bool
isInitialised();
/**
* @brief Get a bool based metric. It will be created if it doesn't exist
* @note Prometheus does not have a native bool type, so we use a counter with a value of 0 or 1


@@ -75,10 +75,9 @@ public:
{
auto data = data_->template lock<std::scoped_lock>();
auto const bucket = std::lower_bound(
data->buckets.begin(),
data->buckets.end(),
value,
[](Bucket const& bucket, ValueType const& value) { return bucket.upperBound < value; }
data->buckets.begin(), data->buckets.end(), value, [](Bucket const& bucket, ValueType const& value) {
return bucket.upperBound < value;
}
);
if (bucket != data->buckets.end()) {
++bucket->count;
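
The histogram hunk is formatting-only; the logic is a classic `std::lower_bound` bucket lookup. A standalone sketch (simplified `Bucket`; real Prometheus histograms also keep a `+Inf` bucket, so no observation is dropped the way the last line here drops one):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Bucket {
    double upperBound;
    std::uint64_t count{0};
};

// lower_bound returns the first bucket whose upperBound is not less than
// the observed value, i.e. the bucket the value falls into.
void
observe(std::vector<Bucket>& buckets, double value)
{
    auto const bucket = std::lower_bound(
        buckets.begin(), buckets.end(), value, [](Bucket const& b, double v) { return b.upperBound < v; }
    );
    if (bucket != buckets.end())
        ++bucket->count;
}

int main()
{
    std::vector<Bucket> buckets{{0.1}, {1.0}, {10.0}};
    observe(buckets, 0.5);   // counted in the 1.0 bucket
    observe(buckets, 42.0);  // past the last bound: ignored in this sketch
}
```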


@@ -60,8 +60,10 @@ public:
}
std::expected<std::string, RequestError>
read(boost::asio::yield_context yield, std::optional<std::chrono::steady_clock::duration> timeout = std::nullopt)
override
read(
boost::asio::yield_context yield,
std::optional<std::chrono::steady_clock::duration> timeout = std::nullopt
) override
{
boost::beast::error_code errorCode;
boost::beast::flat_buffer buffer;
@@ -101,8 +103,10 @@ public:
}
std::optional<RequestError>
close(boost::asio::yield_context yield, std::chrono::steady_clock::duration const timeout = kDEFAULT_TIMEOUT)
override
close(
boost::asio::yield_context yield,
std::chrono::steady_clock::duration const timeout = kDEFAULT_TIMEOUT
) override
{
// Set the timeout for closing the connection
boost::beast::websocket::stream_base::timeout wsTimeout{};
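
The `close()` override here relies on Beast's websocket timeout option: its `handshake_timeout` field bounds handshake, accept, and close operations. A small sketch of configuring it (assuming a plain `boost::beast::tcp_stream`; every field is set explicitly because a value-initialized `timeout` leaves the durations at zero):

```cpp
#include <boost/asio/io_context.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/websocket.hpp>

#include <chrono>

void
configureCloseTimeout(
    boost::beast::websocket::stream<boost::beast::tcp_stream>& ws,
    std::chrono::steady_clock::duration timeout
)
{
    // stream_base::timeout::suggested(role_type::server) is a reasonable
    // starting point too; here each field is spelled out for clarity.
    boost::beast::websocket::stream_base::timeout wsTimeout{};
    wsTimeout.handshake_timeout = timeout;  // also applies to close
    wsTimeout.idle_timeout = boost::beast::websocket::stream_base::none();
    wsTimeout.keep_alive_pings = false;
    ws.set_option(wsTimeout);
}

int main()
{
    boost::asio::io_context ioc;
    boost::beast::websocket::stream<boost::beast::tcp_stream> ws{ioc};
    configureCloseTimeout(ws, std::chrono::seconds(5));
}
```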


@@ -62,7 +62,7 @@
namespace web::impl {
static auto constexpr kHEALTH_CHECK_HTML = R"html(
static constexpr auto kHEALTH_CHECK_HTML = R"html(
<!DOCTYPE html>
<html>
<head><title>Test page for Clio</title></head>


@@ -181,8 +181,7 @@ public:
{
// Note: post used instead of dispatch to guarantee async behavior of wsFail and maybeSendNext
boost::asio::post(
derived().ws().get_executor(),
[this, self = derived().shared_from_this(), msg = std::move(msg)]() {
derived().ws().get_executor(), [this, self = derived().shared_from_this(), msg = std::move(msg)]() {
if (messages_.size() > maxSendingQueueSize_) {
wsFail(boost::asio::error::timed_out, "Client is too slow");
return;
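
The comment above is the key point: `dispatch` may invoke a handler inline when the caller is already running on the target executor, while `post` always queues it, which guarantees the asynchronous behaviour `wsFail` and `maybeSendNext` rely on. A runnable illustration:

```cpp
#include <boost/asio/dispatch.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/post.hpp>

#include <iostream>

int main()
{
    boost::asio::io_context ioc;

    boost::asio::post(ioc, [&ioc] {
        // We are now on the thread running ioc, so:
        boost::asio::dispatch(ioc, [] { std::cout << "dispatch: runs inline, before 'after'\n"; });
        boost::asio::post(ioc, [] { std::cout << "post: always queued, runs after 'after'\n"; });
        std::cout << "after\n";
    });

    ioc.run();
}
```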


@@ -187,7 +187,8 @@ tryUpgradeConnection(
if (expectedUpgradedConnection.has_value())
return std::move(expectedUpgradedConnection).value();
return std::unexpected{fmt::format("Error upgrading connection: {}", expectedUpgradedConnection.error().what())
return std::unexpected{
fmt::format("Error upgrading connection: {}", expectedUpgradedConnection.error().what())
};
}
@@ -247,8 +248,7 @@ Server::run()
running_ = true;
boost::asio::spawn(
ctx_.get(),
[this, acceptor = std::move(acceptor).value()](boost::asio::yield_context yield) mutable {
ctx_.get(), [this, acceptor = std::move(acceptor).value()](boost::asio::yield_context yield) mutable {
while (true) {
boost::beast::error_code errorCode;
boost::asio::ip::tcp::socket socket{ctx_.get().get_executor()};
@@ -314,8 +314,7 @@ Server::handleConnection(boost::asio::ip::tcp::socket socket, boost::asio::yield
if (connectionHandler_.isStopping()) {
boost::asio::spawn(
ctx_.get(),
[connection = std::move(connectionExpected).value()](boost::asio::yield_context yield) {
ctx_.get(), [connection = std::move(connectionExpected).value()](boost::asio::yield_context yield) {
web::ng::impl::ConnectionHandler::stopConnection(*connection, yield);
}
);
@@ -329,8 +328,7 @@ Server::handleConnection(boost::asio::ip::tcp::socket socket, boost::asio::yield
}
boost::asio::spawn(
ctx_.get(),
[this, connection = std::move(connection).value()](boost::asio::yield_context yield) mutable {
ctx_.get(), [this, connection = std::move(connection).value()](boost::asio::yield_context yield) mutable {
connectionHandler_.processConnection(std::move(connection), yield);
}
);
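
These `Server` hunks only pull the lambdas onto the `spawn` lines. For reference, the two-argument `boost::asio::spawn(context, fn)` form used throughout launches a stackful coroutine whose `yield_context` turns async operations into sequential-looking code; newer Asio prefers an explicit completion token such as `boost::asio::detached`, but this mirrors the diff. A minimal runnable sketch:

```cpp
#include <boost/asio/io_context.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/steady_timer.hpp>

#include <chrono>
#include <iostream>

int main()
{
    boost::asio::io_context ctx;

    boost::asio::spawn(ctx, [&ctx](boost::asio::yield_context yield) {
        boost::asio::steady_timer timer{ctx, std::chrono::milliseconds(10)};
        timer.async_wait(yield);  // suspends this coroutine, not the thread
        std::cout << "timer fired\n";
    });

    ctx.run();
}
```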


@@ -19,10 +19,10 @@
#include "web/ng/SubscriptionContext.hpp"
#include "util/Assert.hpp"
#include "util/Taggable.hpp"
#include "web/SubscriptionContextInterface.hpp"
#include <boost/asio/buffer.hpp>
#include <boost/asio/spawn.hpp>
#include <cstddef>
@@ -50,24 +50,31 @@ SubscriptionContext::SubscriptionContext(
{
}
SubscriptionContext::~SubscriptionContext()
{
ASSERT(disconnected_, "SubscriptionContext must be disconnected before destroying");
}
void
SubscriptionContext::send(std::shared_ptr<std::string> message)
{
if (disconnected_)
if (disconnected_ or gotError_)
return;
if (maxSendQueueSize_.has_value() and tasksGroup_.size() >= *maxSendQueueSize_) {
tasksGroup_.spawn(yield_, [this](boost::asio::yield_context innerYield) {
connection_.get().close(innerYield);
});
disconnected_ = true;
gotError_ = true;
return;
}
tasksGroup_.spawn(yield_, [this, message = std::move(message)](boost::asio::yield_context innerYield) {
auto const maybeError = connection_.get().sendBuffer(boost::asio::buffer(*message), innerYield);
if (maybeError.has_value() and errorHandler_(*maybeError, connection_))
tasksGroup_.spawn(yield_, [this, message = std::move(message)](boost::asio::yield_context innerYield) mutable {
auto const maybeError = connection_.get().sendShared(std::move(message), innerYield);
if (maybeError.has_value() and errorHandler_(*maybeError, connection_)) {
connection_.get().close(innerYield);
gotError_ = true;
}
});
}
@@ -92,8 +99,8 @@ SubscriptionContext::apiSubversion() const
void
SubscriptionContext::disconnect(boost::asio::yield_context yield)
{
onDisconnect_(this);
disconnected_ = true;
onDisconnect_(this);
tasksGroup_.asyncWait(yield);
}
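
The behavioural changes here are twofold: the new `gotError_` flag stops further sends once a send failed or the queue overflowed, and `disconnected_` is now set before `onDisconnect_` fires. A skeletal sketch of why the ordering matters (hypothetical names, reduced from the diff):

```cpp
#include <atomic>

struct ContextSketch {
    std::atomic_bool disconnected{false};

    void
    send()
    {
        if (disconnected)  // sees true as soon as disconnect() has started
            return;
        // ... spawn a task that writes to the connection ...
    }

    void
    disconnect()
    {
        disconnected = true;  // first: reject sends racing with disconnect
        // ... then notify onDisconnect_ subscribers and await queued tasks ...
    }
};

int main()
{
    ContextSketch ctx;
    ctx.disconnect();
    ctx.send();  // no-op: the flag was flipped before notification
}
```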


@@ -61,6 +61,7 @@ private:
boost::signals2::signal<void(SubscriptionContextInterface*)> onDisconnect_;
std::atomic_bool disconnected_{false};
std::atomic_bool gotError_{false};
/**
* @brief The API version of the web stream client.
@@ -87,6 +88,8 @@ public:
ErrorHandler errorHandler
);
~SubscriptionContext() override;
/**
* @brief Send message to the client
* @note This method does nothing after disconnect() was called.


@@ -146,11 +146,9 @@ ConnectionHandler::processConnection(ConnectionPtr connectionPtr, boost::asio::y
auto* ptr = dynamic_cast<impl::WsConnectionBase*>(connectionPtr.get());
ASSERT(ptr != nullptr, "Casted not websocket connection");
subscriptionContext = std::make_shared<SubscriptionContext>(
tagFactory_,
*ptr,
maxSubscriptionSendQueueSize_,
yield,
[this](Error const& e, Connection const& c) { return handleError(e, c); }
tagFactory_, *ptr, maxSubscriptionSendQueueSize_, yield, [this](Error const& e, Connection const& c) {
return handleError(e, c);
}
);
LOG(log_.trace()) << connectionRef.tag() << "Created SubscriptionContext for the connection";
}


@@ -26,6 +26,7 @@
#include "web/ng/Request.hpp"
#include "web/ng/Response.hpp"
#include "web/ng/impl/Concepts.hpp"
#include "web/ng/impl/SendingQueue.hpp"
#include "web/ng/impl/WsConnection.hpp"
#include <boost/asio/buffer.hpp>
@@ -75,6 +76,10 @@ class HttpConnection : public UpgradableConnection {
StreamType stream_;
std::optional<boost::beast::http::request<boost::beast::http::string_body>> request_;
std::chrono::steady_clock::duration timeout_{kDEFAULT_TIMEOUT};
using MessageType = boost::beast::http::response<boost::beast::http::string_body>;
SendingQueue<MessageType> sendingQueue_;
bool closed_{false};
public:
@@ -85,7 +90,12 @@ public:
util::TagDecoratorFactory const& tagDecoratorFactory
)
requires IsTcpStream<StreamType>
: UpgradableConnection(std::move(ip), std::move(buffer), tagDecoratorFactory), stream_{std::move(socket)}
: UpgradableConnection(std::move(ip), std::move(buffer), tagDecoratorFactory)
, stream_{std::move(socket)}
, sendingQueue_([this](MessageType const& message, auto&& yield) {
boost::beast::get_lowest_layer(stream_).expires_after(timeout_);
boost::beast::http::async_write(stream_, message, yield);
})
{
}
@@ -99,9 +109,20 @@ public:
requires IsSslTcpStream<StreamType>
: UpgradableConnection(std::move(ip), std::move(buffer), tagDecoratorFactory)
, stream_{std::move(socket), sslCtx}
, sendingQueue_([this](MessageType const& message, auto&& yield) {
boost::beast::get_lowest_layer(stream_).expires_after(timeout_);
boost::beast::http::async_write(stream_, message, yield);
})
{
}
HttpConnection(HttpConnection&& other) = delete;
HttpConnection&
operator=(HttpConnection&& other) = delete;
HttpConnection(HttpConnection const& other) = delete;
HttpConnection&
operator=(HttpConnection const& other) = delete;
std::optional<Error>
sslHandshake(boost::asio::yield_context yield)
requires IsSslTcpStream<StreamType>
@@ -125,15 +146,12 @@ public:
}
std::optional<Error>
sendRaw(boost::beast::http::response<boost::beast::http::string_body> response, boost::asio::yield_context yield)
override
sendRaw(
boost::beast::http::response<boost::beast::http::string_body> response,
boost::asio::yield_context yield
) override
{
boost::system::error_code error;
boost::beast::get_lowest_layer(stream_).expires_after(timeout_);
boost::beast::http::async_write(stream_, response, yield[error]);
if (error)
return error;
return std::nullopt;
return sendingQueue_.send(std::move(response), yield);
}
void
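
`sendRaw` no longer writes to the stream directly; it goes through the new `SendingQueue`, so concurrent senders cannot interleave `async_write` calls on one connection. A simplified synchronous sketch of the queueing idea (the real queue is asynchronous and resumes from the write's completion handler):

```cpp
#include <functional>
#include <queue>
#include <string>
#include <utility>

// Messages are buffered and handed to the write function one at a time,
// so callers never interleave writes on the same stream.
template <typename Message>
class SendingQueueSketch {
public:
    using SendFn = std::function<void(Message const&)>;

    explicit SendingQueueSketch(SendFn send) : send_{std::move(send)}
    {
    }

    void
    enqueue(Message msg)
    {
        queue_.push(std::move(msg));
        if (not busy_)
            drain();
    }

private:
    void
    drain()
    {
        busy_ = true;
        while (not queue_.empty()) {
            send_(queue_.front());  // the real code does an async write here
            queue_.pop();
        }
        busy_ = false;
    }

    std::queue<Message> queue_;
    SendFn send_;
    bool busy_{false};
};

int main()
{
    SendingQueueSketch<std::string> queue{[](std::string const&) { /* write to stream */ }};
    queue.enqueue("first");
    queue.enqueue("second");  // queued behind "first", never interleaved
}
```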

Some files were not shown because too many files have changed in this diff.