Compare commits

...

64 Commits

Author SHA1 Message Date
Alex Kremer
2893492569 Remove legacy hook (#1097) 2024-01-11 17:49:00 +00:00
Alex Kremer
b63e98bda0 Update libxrpl to 2.0.0 (#1096) 2024-01-11 16:36:39 +00:00
Alex Kremer
f4df5c2185 Implement amm_info handler (#1060)
Fixes #283
2024-01-11 15:57:53 +00:00
github-actions[bot]
93d5c12b14 [CI] clang-tidy auto fixes (#1094)
Co-authored-by: kuznetsss <kuznetsss@users.noreply.github.com>
2024-01-11 09:37:54 +00:00
cyan317
2514b7986e Fix unstable test (#1089) 2024-01-10 16:56:57 +00:00
cyan317
d30e63d49a add api_version to response (#1088)
Fix #1020
2024-01-09 15:53:09 +00:00
github-actions[bot]
61f1e0853d [CI] clang-tidy auto fixes (#1086)
Co-authored-by: kuznetsss <kuznetsss@users.noreply.github.com>
2024-01-09 09:35:42 +00:00
cyan317
eb1831c489 New subscription manager (#1071)
Fix #886
2024-01-08 14:45:57 +00:00
Shi Cheng
07bd4b0760 upload clio_server artifact (#1083) 2024-01-08 10:49:53 +00:00
Alex Kremer
e26a1e37b5 Improve batching code (#1079)
Fixes #1077
2024-01-05 15:44:30 +00:00
Sergey Kuznetsov
e89640bcfb Add debug cache to ci (#1078)
Fixes #1066
2024-01-05 10:59:26 +00:00
github-actions[bot]
ae135759ef [CI] clang-tidy auto fixes (#1081)
Co-authored-by: kuznetsss <kuznetsss@users.noreply.github.com>
2024-01-05 09:31:56 +00:00
Alex Kremer
28188aa0f9 Add batching to writes (#1076)
Fixes #1077
2024-01-04 15:17:15 +00:00
Sergey Kuznetsov
af485a0634 Add gcovr to CI docker image (#1072)
For #1066
2024-01-03 16:53:26 +00:00
github-actions[bot]
b609298870 [CI] clang-tidy auto fixes (#1070)
Co-authored-by: kuznetsss <kuznetsss@users.noreply.github.com>
2024-01-03 08:53:47 +00:00
Alex Kremer
d077093a8d Simplify backend mock access for unittests (#1062) 2024-01-02 13:35:57 +00:00
Alex Kremer
781f3b3c48 Bump libxrpl version to 2.0.0-rc6 (#1061)
Fixes #1063
2023-12-23 20:28:07 +00:00
Bronek Kozicki
a8bae96ad4 Add coverage_report target (#1058) 2023-12-21 15:08:32 +00:00
Bronek Kozicki
fe9649d872 Fix c++20 requires syntax (#1057) 2023-12-19 20:52:53 +00:00
Sergey Kuznetsov
431b5f5ab8 Add ccache mention in docs (#1055) 2023-12-18 16:43:15 +00:00
Sergey Kuznetsov
b1dc2775fb Remove exception text from error sending (#1048)
Fixes #1037
2023-12-13 16:30:16 +00:00
Elliot Lee
dd35a7cfd2 Update CONTRIBUTING.md (#1047) 2023-12-13 15:47:07 +00:00
github-actions[bot]
a9d685d5c0 [CI] clang-tidy auto fixes (#1046)
Fixes #1045.
2023-12-13 15:08:53 +00:00
Sergey Kuznetsov
6065d324b5 Remove push-to-fork in clang-tidy workflow 2023-12-13 14:23:21 +00:00
Sergey Kuznetsov
fe7b5fe18f Another try to sign commit in CI (#1043) 2023-12-13 13:54:28 +00:00
Sergey Kuznetsov
1c663988f5 Use different token to sign commits (#1041)
For #884
2023-12-13 13:23:24 +00:00
Sergey Kuznetsov
d11d566121 Fix wrong image (#1040)
For #884
2023-12-13 12:49:44 +00:00
Sergey Kuznetsov
a467cb2526 Add signing clang-tidy commit (#1036)
Fixes #884
2023-12-12 18:04:40 +00:00
Sergey Kuznetsov
f62e36dc94 Add status to readme (#1035)
For #844
2023-12-12 17:07:51 +00:00
Sergey Kuznetsov
d933ce2a29 Use clio_ci docker image (#1033)
Fixes #884
2023-12-12 16:03:08 +00:00
Sergey Kuznetsov
db751e3807 Make root default user in CI image (#1034)
For #884
2023-12-12 14:05:30 +00:00
Sergey Kuznetsov
3c4a8f0cfb Add conan setup into image (#1032)
For #884
2023-12-12 12:00:57 +00:00
Sergey Kuznetsov
397ce97175 Fix docker publish (#1027)
Fixes docker build for #884
2023-12-11 17:08:42 +00:00
Sergey Kuznetsov
ac6ad13f6c Fix release notes (#1022)
Fixes release notes for #884
2023-12-11 15:52:36 +00:00
Sergey Kuznetsov
7d1d1749bc Another fix of clang-tidy workflow (#1026)
Another fix for clang-tidy nightly check for #884
2023-12-11 15:11:30 +00:00
Sergey Kuznetsov
acf359d631 Fix permissions issue for clang-tidy (#1023)
Fixes issue creation for clang-tidy nightly checks for #884
2023-12-11 11:53:22 +00:00
Sergey Kuznetsov
a34e107b86 Add nightly builds (#1013)
Partially fixes #884.
Adds:
- Docker image for CI on Linux
- Nightly builds without cache and releases
- Nightly clang-tidy checks
- Fix typos in .clang-tidy
2023-12-08 18:22:22 +00:00
cyan317
b886586de3 Unify ledger_index type (#1019)
Fix #1014
2023-12-08 14:20:40 +00:00
cyan317
a57abb15a3 Fix example json format (#1018) 2023-12-05 12:45:01 +00:00
cyan317
c87586a265 Fix compiler error: header missing (#1016) 2023-12-04 13:45:48 +00:00
cyan317
8172670c93 Add close_time_iso to transaction stream (#1012)
Fix #1011
2023-11-30 13:32:50 +00:00
Sergey Kuznetsov
3fdcd3315b Make assert write to both log file and cerr (#1009) 2023-11-30 10:33:52 +00:00
cyan317
dd018f1c5e Fix ledger close_time_iso (#1008)
Fix #1007
2023-11-29 18:04:12 +00:00
Sergey Kuznetsov
c2b462da75 Fix paste on mac (#1006) 2023-11-29 15:41:45 +00:00
Sergey Kuznetsov
252920ec57 Fix CI 2023-11-29 15:24:50 +00:00
Sergey Kuznetsov
9ef6801c55 Fix git hook 2023-11-29 15:24:50 +00:00
Sergey Kuznetsov
24c562fa2a Add hostname resolving to dosguard (#1000)
Fixes #983.

Cassandra, ETL sources, and cache already support hostname resolving.

Also added config to show missing includes by clangd.
2023-11-29 15:13:40 +00:00
Sergey Kuznetsov
35f119a268 Switch to llvm 17 tools (#1002)
Fixes #952
2023-11-28 20:09:58 +00:00
Sergey Kuznetsov
1be368dcaf Fix wrong assert (#1003) 2023-11-28 14:06:17 +00:00
cyan317
a5fbb01299 fix (#999)
Fix #985
2023-11-24 16:01:27 +00:00
Sergey Kuznetsov
3b75d88a35 Add server_definitions to forwarding set (#996)
Fixes #942
2023-11-22 16:21:03 +00:00
cyan317
f0224581a5 Fix nfts_by_issuer's DB issue (#997)
Fix #988
2023-11-22 15:55:46 +00:00
Sergey Kuznetsov
b998473673 Add compression and histogram metric type for Prometheus (#987)
Fixes #932
Also fixes #966

Decided not to add Summary type because it has the same functionality as Histogram but makes more calculations on client side (Clio side). See https://prometheus.io/docs/practices/histograms for detailed comparison.
2023-11-22 12:55:06 +00:00
Sergey Kuznetsov
8ebe2d6a80 Add assertion that terminate clio (#994)
Fixes #893.

Also added termination handler to print backtrace on crash, so fixes #929.
2023-11-21 13:06:04 +00:00
Sergey Kuznetsov
3bab90ca7a Comment out gcc-only checks (#995) 2023-11-21 09:53:08 +00:00
cyan317
74660aebf1 binary (#993)
Fix #984
2023-11-20 17:53:34 +00:00
cyan317
db08de466a Unify json (#992)
Fix #962
2023-11-20 13:09:28 +00:00
Alex Kremer
1bacad9e49 Update xrpl version to 2.0.0-rc1 (#990)
Fixes #989
2023-11-15 19:40:38 +00:00
cyan317
ca16858878 Add DeliverMax for Tx streams (#980) 2023-11-13 13:29:36 +00:00
cyan317
feae85782c DeliverMax alias of Payment tx (#979)
Fix #973
2023-11-09 13:35:08 +00:00
cyan317
b016c1d7ba Fix lowercase ctid (#977)
Fix #963
2023-11-07 16:10:12 +00:00
Sergey Kuznetsov
0597a9d685 Add amm type to account objects (#975)
Fixes #834
2023-11-03 13:54:54 +00:00
cyan317
05bea6a971 add amm filter (#972)
Fix #968
2023-11-03 13:12:36 +00:00
cyan317
fa660ef400 Implement DID (#967)
Fix #918
2023-11-03 09:40:40 +00:00
370 changed files with 18256 additions and 7975 deletions


@@ -33,12 +33,13 @@ DisableFormat: false
ExperimentalAutoDetectBinPacking: false
FixNamespaceComments: true
ForEachMacros: [ Q_FOREACH, BOOST_FOREACH ]
IncludeBlocks: Regroup
IncludeCategories:
- Regex: '^<(BeastConfig)'
Priority: 0
- Regex: '^<(ripple)/'
- Regex: '^".*"$'
Priority: 1
- Regex: '^<.*\.(h|hpp)>$'
Priority: 2
- Regex: '^<(boost)/'
- Regex: '^<.*>$'
Priority: 3
- Regex: '.*'
Priority: 4


@@ -7,6 +7,7 @@ Checks: '-*,
bugprone-copy-constructor-init,
bugprone-dangling-handle,
bugprone-dynamic-static-initializers,
bugprone-empty-catch,
bugprone-fold-init-type,
bugprone-forward-declaration-namespace,
bugprone-inaccurate-erase,
@@ -20,11 +21,15 @@ Checks: '-*,
bugprone-misplaced-pointer-arithmetic-in-alloc,
bugprone-misplaced-widening-cast,
bugprone-move-forwarding-reference,
bugprone-multiple-new-in-one-expression,
bugprone-multiple-statement-macro,
bugprone-no-escape,
bugprone-non-zero-enum-to-bool-conversion,
bugprone-parent-virtual-call,
bugprone-posix-return,
bugprone-redundant-branch-condition,
bugprone-reserved-identifier,
bugprone-unused-return-value,
bugprone-shared-ptr-array-mismatch,
bugprone-signal-handler,
bugprone-signed-char-misuse,
@@ -45,6 +50,7 @@ Checks: '-*,
bugprone-suspicious-semicolon,
bugprone-suspicious-string-compare,
bugprone-swapped-arguments,
bugprone-switch-missing-default-case,
bugprone-terminating-continue,
bugprone-throw-keyword-missing,
bugprone-too-small-loop-variable,
@@ -52,18 +58,23 @@ Checks: '-*,
bugprone-undelegated-constructor,
bugprone-unhandled-exception-at-new,
bugprone-unhandled-self-assignment,
bugprone-unique-ptr-array-mismatch,
bugprone-unsafe-functions,
bugprone-unused-raii,
bugprone-unused-return-value,
bugprone-use-after-move,
bugprone-virtual-near-miss,
cppcoreguidelines-init-variables,
cppcoreguidelines-prefer-member-initializer,
cppcoreguidelines-misleading-capture-default-by-value,
cppcoreguidelines-pro-type-member-init,
cppcoreguidelines-pro-type-static-cast-downcast,
cppcoreguidelines-rvalue-reference-param-not-moved,
cppcoreguidelines-use-default-member-init,
cppcoreguidelines-virtual-class-destructor,
llvm-namespace-comment,
misc-const-correctness,
misc-definitions-in-headers,
misc-header-include-cycle,
misc-include-cleaner,
misc-misplaced-const,
misc-redundant-expression,
misc-static-assert,
@@ -75,6 +86,7 @@ Checks: '-*,
modernize-make-shared,
modernize-make-unique,
modernize-pass-by-value,
modernize-type-traits,
modernize-use-emplace,
modernize-use-equals-default,
modernize-use-equals-delete,
@@ -112,7 +124,10 @@ Checks: '-*,
CheckOptions:
readability-braces-around-statements.ShortStatementLines: 2
bugprone-unsafe-functions.ReportMoreUnsafeFunctions: true
bugprone-unused-return-value.CheckedReturnTypes: ::std::error_code;::std::error_condition;::std::errc;::std::expected
misc-include-cleaner.IgnoreHeaders: '.*/(detail|impl)/.*'
HeaderFilterRegex: '^.*/(src|unitests)/.*\.(h|hpp)$'
HeaderFilterRegex: '^.*/(src|unittests)/.*\.(h|hpp)$'
WarningsAsErrors: '*'

.clangd Normal file

@@ -0,0 +1,5 @@
Diagnostics:
UnusedIncludes: Strict
MissingIncludes: Strict
Includes:
IgnoreHeader: ".*/(detail|impl)/.*"

.codecov.yml Normal file

@@ -0,0 +1,11 @@
coverage:
status:
project:
default:
target: 50%
threshold: 2%
patch:
default:
target: 20% # Need to bump this number https://docs.codecov.com/docs/commit-status#patch-status
threshold: 2%


@@ -1,20 +0,0 @@
#!/bin/bash
# Pushing a release branch requires an annotated tag at the released commit
branch=$(git rev-parse --abbrev-ref HEAD)
if [[ $branch =~ master ]]; then
# check if HEAD commit is tagged
if ! git describe --exact-match HEAD; then
echo "Commits to master must be tagged"
exit 1
fi
elif [[ $branch =~ release/* ]]; then
IFS=/ read -r branch rel_ver <<< ${branch}
tag=$(git describe --tags --abbrev=0)
if [[ "${rel_ver}" != "${tag}" ]]; then
echo "release/${rel_ver} branches must have annotated tag ${rel_ver}"
echo "git tag -am\"${rel_ver}\" ${rel_ver}"
exit 1
fi
fi
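The removed hook above gated pushes by requiring an annotated tag at the released commit. The key building block is `git describe --exact-match HEAD`, which succeeds only when HEAD itself carries a tag. A minimal sketch of that check, run in a scratch repository (the tag name `v1.0` is made up for illustration):

```shell
# Create a throwaway repo with one untagged commit.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m 'initial'

# Untagged HEAD: describe fails, so the hook would reject the push.
if ! git describe --exact-match HEAD 2>/dev/null; then
  echo "Commits to master must be tagged"
fi

# After tagging, the same check passes and prints the tag.
git tag -am "v1.0" v1.0
git describe --exact-match HEAD
```

`--exact-match` is shorthand for `--candidates=0`, i.e. no fallback to a nearby tag plus an offset, which is exactly the strictness the hook wanted.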


@@ -7,12 +7,12 @@ sources="src unittests"
formatter="clang-format -i"
version=$($formatter --version | grep -o '[0-9\.]*')
if [[ "16.0.0" > "$version" ]]; then
if [[ "17.0.0" > "$version" ]]; then
cat <<EOF
ERROR
-----------------------------------------------------------------------------
A minimum of version 16 of `clang-format` is required.
A minimum of version 17 of `which clang-format` is required.
Your version is $version.
Please fix paths and run again.
-----------------------------------------------------------------------------
@@ -21,6 +21,26 @@ EOF
exit 2
fi
function grep_code {
grep -l "${1}" ${sources} -r --include \*.h --include \*.cpp
}
if [[ "$OSTYPE" == "darwin"* ]]; then
# make all includes to be <...> style
grep_code '#include ".*"' | xargs sed -i '' -E 's|#include "(.*)"|#include <\1>|g'
# make local includes to be "..." style
main_src_dirs=$(find ./src -maxdepth 1 -type d -exec basename {} \; | tr '\n' '|' | sed 's/|$//' | sed 's/|/\\|/g')
grep_code "#include <\($main_src_dirs\)/.*>" | xargs sed -i '' -E "s|#include <(($main_src_dirs)/.*)>|#include \"\1\"|g"
else
# make all includes to be <...> style
grep_code '#include ".*"' | xargs sed -i -E 's|#include "(.*)"|#include <\1>|g'
# make local includes to be "..." style
main_src_dirs=$(find ./src -type d -maxdepth 1 -exec basename {} \; | paste -sd '|' | sed 's/|/\\|/g')
grep_code "#include <\($main_src_dirs\)/.*>" | xargs sed -i -E "s|#include <(($main_src_dirs)/.*)>|#include \"\1\"|g"
fi
first=$(git diff $sources)
find $sources -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -print0 | xargs -0 $formatter
second=$(git diff $sources)
@@ -39,4 +59,3 @@ EOF
exit 1
fi
.githooks/ensure_release_tag
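The two-step `sed` rewrite added to the pre-commit hook above first converts every quoted include to angle-bracket style, then converts includes of local top-level source directories back to quoted style. A sketch of that normalization on a throwaway file (the `util` directory name is an example, not taken from the repository; on macOS the hook uses `sed -i ''` instead of plain `sed -i`):

```shell
# Sample file mixing a local include and a system include.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
#include "util/log.h"
#include <vector>
EOF

# Step 1: make all includes <...> style.
sed -i -E 's|#include "(.*)"|#include <\1>|g' "$tmp"

# Step 2: make includes of local top-level dirs "..." style again.
main_src_dirs='util'
sed -i -E "s|#include <(($main_src_dirs)/.*)>|#include \"\1\"|g" "$tmp"

cat "$tmp"
```

The net effect is a canonical form: local headers quoted, everything else in angle brackets, so `clang-format`'s regroup rules sort them deterministically.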


@@ -1,37 +1,18 @@
name: Build clio
description: Build clio in build directory
inputs:
conan_profile:
description: Conan profile name
required: true
default: default
conan_cache_hit:
description: Whether conan cache has been downloaded
required: true
target:
description: Build target name
default: all
runs:
using: composite
steps:
- name: Get number of threads on mac
id: mac_threads
if: ${{ runner.os == 'macOS' }}
shell: bash
run: echo "num=$(($(sysctl -n hw.logicalcpu) - 2))" >> $GITHUB_OUTPUT
- name: Get number of threads on Linux
id: linux_threads
if: ${{ runner.os == 'Linux' }}
shell: bash
run: echo "num=$(($(nproc) - 2))" >> $GITHUB_OUTPUT
- name: Get number of threads
uses: ./.github/actions/get_number_of_threads
id: number_of_threads
- name: Build Clio
shell: bash
env:
BUILD_OPTION: "${{ inputs.conan_cache_hit == 'true' && 'missing' || '' }}"
LINT: "${{ runner.os == 'Linux' && 'True' || 'False' }}"
run: |
mkdir -p build
cd build
threads_num=${{ steps.mac_threads.outputs.num || steps.linux_threads.outputs.num }}
conan install .. -of . -b $BUILD_OPTION -s build_type=Release -o clio:tests=True -o clio:lint=$LINT --profile ${{ inputs.conan_profile }}
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release .. -G Ninja
cmake --build . --parallel $threads_num
cmake --build . --parallel ${{ steps.number_of_threads.outputs.threads_number }} --target ${{ inputs.target }}


@@ -1,23 +1,27 @@
name: Check format
description: Check format using clang-format-16
description: Check format using clang-format-17
runs:
using: composite
steps:
- name: Add llvm repo
run: |
echo 'deb http://apt.llvm.org/focal/ llvm-toolchain-focal-16 main' | sudo tee -a /etc/apt/sources.list
echo 'deb http://apt.llvm.org/focal/ llvm-toolchain-focal-17 main' | sudo tee -a /etc/apt/sources.list
wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
shell: bash
- name: Install packages
run: |
sudo apt update -qq
sudo apt install -y jq clang-format-16
sudo apt install -y jq clang-format-17
sudo rm /usr/bin/clang-format
sudo ln -s /usr/bin/clang-format-17 /usr/bin/clang-format
shell: bash
- name: Run formatter
continue-on-error: true
id: run_formatter
run: |
find src unittests -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -print0 | xargs -0 clang-format-16 -i
./.githooks/pre-commit
shell: bash
- name: Check for differences
@@ -25,3 +29,8 @@ runs:
shell: bash
run: |
git diff --color --exit-code | tee "clang-format.patch"
- name: Fail job
if: ${{ steps.run_formatter.outcome != 'success' }}
shell: bash
run: exit 1


@@ -0,0 +1,29 @@
name: Generate code coverage report
description: Run tests, generate code coverage report and upload it to codecov.io
runs:
using: composite
steps:
- name: Run tests
shell: bash
run: |
build/clio_tests --gtest_filter="-BackendCassandraBaseTest*:BackendCassandraTest*:BackendCassandraFactoryTestWithDB*"
- name: Run gcovr
shell: bash
run: |
gcovr -e unittests --xml build/coverage_report.xml -j8 --exclude-throw-branches
- name: Archive coverage report
uses: actions/upload-artifact@v3
with:
name: coverage-report.xml
path: build/coverage_report.xml
retention-days: 30
- name: Upload coverage report
uses: codecov/codecov-action@v3
with:
files: build/coverage_report.xml
fail_ci_if_error: true
verbose: true
token: ${{ env.CODECOV_TOKEN }}

.github/actions/generate/action.yml vendored Normal file

@@ -0,0 +1,41 @@
name: Run conan and cmake
description: Run conan and cmake
inputs:
conan_profile:
description: Conan profile name
required: true
conan_cache_hit:
description: Whether conan cache has been downloaded
required: true
default: 'false'
build_type:
description: Build type for third-party libraries and clio. Could be 'Release', 'Debug'
required: true
default: 'Release'
code_coverage:
description: Whether conan's coverage option should be on or not
required: true
default: 'false'
runs:
using: composite
steps:
- name: Create build directory
shell: bash
run: mkdir -p build
- name: Run conan
shell: bash
env:
BUILD_OPTION: "${{ inputs.conan_cache_hit == 'true' && 'missing' || '' }}"
CODE_COVERAGE: "${{ inputs.code_coverage == 'true' && 'True' || 'False' }}"
run: |
cd build
conan install .. -of . -b $BUILD_OPTION -s build_type=${{ inputs.build_type }} -o clio:tests=True -o clio:lint=False -o clio:coverage="${CODE_COVERAGE}" --profile ${{ inputs.conan_profile }}
- name: Run cmake
shell: bash
env:
BUILD_TYPE: "${{ inputs.build_type }}"
run: |
cd build
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=${{ inputs.build_type }} ${{ inputs.extra_cmake_args }} .. -G Ninja


@@ -0,0 +1,26 @@
name: Get number of threads
description: Determines number of threads to use on macOS and Linux
outputs:
threads_number:
description: Number of threads to use
value: ${{ steps.number_of_threads_export.outputs.num }}
runs:
using: composite
steps:
- name: Get number of threads on mac
id: mac_threads
if: ${{ runner.os == 'macOS' }}
shell: bash
run: echo "num=$(($(sysctl -n hw.logicalcpu) - 2))" >> $GITHUB_OUTPUT
- name: Get number of threads on Linux
id: linux_threads
if: ${{ runner.os == 'Linux' }}
shell: bash
run: echo "num=$(($(nproc) - 2))" >> $GITHUB_OUTPUT
- name: Export output variable
shell: bash
id: number_of_threads_export
run: |
echo "num=${{ steps.mac_threads.outputs.num || steps.linux_threads.outputs.num }}" >> $GITHUB_OUTPUT
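The `get_number_of_threads` action above picks the per-OS core-count command and reserves two cores for the rest of the system. The same logic, collapsed into one portable snippet (the `-2` margin mirrors the action; it is a tuning choice, not a requirement):

```shell
# Detect logical core count per OS, then leave two cores free.
if [ "$(uname)" = "Darwin" ]; then
  cores=$(sysctl -n hw.logicalcpu)
else
  cores=$(nproc)
fi
threads_num=$((cores - 2))
echo "num=${threads_num}"
```

Exporting the value through `$GITHUB_OUTPUT`, as the action does, lets any later step read it as `steps.number_of_threads.outputs.threads_number` instead of re-detecting the OS.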


@@ -0,0 +1,47 @@
name: Prepare runner
description: Install packages, set environment variables, create directories
inputs:
disable_ccache:
description: Whether ccache should be disabled
required: true
runs:
using: composite
steps:
- name: Install packages on mac
if: ${{ runner.os == 'macOS' }}
shell: bash
run: |
brew install llvm@14 pkg-config ninja bison cmake ccache jq gh
- name: Fix git permissions on Linux
if: ${{ runner.os == 'Linux' }}
shell: bash
run: git config --global --add safe.directory $PWD
- name: Set env variables for macOS
if: ${{ runner.os == 'macOS' }}
shell: bash
run: |
echo "CCACHE_DIR=${{ github.workspace }}/.ccache" >> $GITHUB_ENV
echo "CONAN_USER_HOME=${{ github.workspace }}" >> $GITHUB_ENV
- name: Set env variables for Linux
if: ${{ runner.os == 'Linux' }}
shell: bash
run: |
echo "CCACHE_DIR=/root/.ccache" >> $GITHUB_ENV
echo "CONAN_USER_HOME=/root/" >> $GITHUB_ENV
- name: Set CCACHE_DISABLE=1
if: ${{ inputs.disable_ccache == 'true' }}
shell: bash
run: |
echo "CCACHE_DISABLE=1" >> $GITHUB_ENV
- name: Create directories
shell: bash
run: |
mkdir -p $CCACHE_DIR
mkdir -p $CONAN_USER_HOME/.conan


@@ -7,6 +7,14 @@ inputs:
ccache_dir:
description: Path to .ccache directory
required: true
build_type:
description: Current build type (e.g. Release, Debug)
required: true
default: Release
code_coverage:
description: Whether code coverage is on
required: true
default: 'false'
outputs:
conan_hash:
description: Hash to use as a part of conan cache key
@@ -40,11 +48,12 @@ runs:
id: conan_cache
with:
path: ${{ inputs.conan_dir }}/data
key: clio-conan_data-${{ runner.os }}-develop-${{ steps.conan_hash.outputs.hash }}
key: clio-conan_data-${{ runner.os }}-${{ inputs.build_type }}-develop-${{ steps.conan_hash.outputs.hash }}
- name: Restore ccache cache
uses: actions/cache/restore@v3
id: ccache_cache
if: ${{ env.CCACHE_DISABLE != '1' }}
with:
path: ${{ inputs.ccache_dir }}
key: clio-ccache-${{ runner.os }}-develop-${{ steps.git_common_ancestor.outputs.commit }}
key: clio-ccache-${{ runner.os }}-${{ inputs.build_type }}${{ inputs.code_coverage == 'true' && '-code_coverage' || '' }}-develop-${{ steps.git_common_ancestor.outputs.commit }}


@@ -16,6 +16,16 @@ inputs:
ccache_cache_hit:
description: Whether conan cache has been downloaded
required: true
ccache_cache_miss_rate:
description: How many cache misses happened
build_type:
description: Current build type (e.g. Release, Debug)
required: true
default: Release
code_coverage:
description: Whether code coverage is on
required: true
default: 'false'
runs:
using: composite
steps:
@@ -34,13 +44,13 @@ runs:
uses: actions/cache/save@v3
with:
path: ${{ inputs.conan_dir }}/data
key: clio-conan_data-${{ runner.os }}-develop-${{ inputs.conan_hash }}
key: clio-conan_data-${{ runner.os }}-${{ inputs.build_type }}-develop-${{ inputs.conan_hash }}
- name: Save ccache cache
if: ${{ inputs.ccache_cache_hit != 'true' }}
if: ${{ inputs.ccache_cache_hit != 'true' || inputs.ccache_cache_miss_rate == '100.0' }}
uses: actions/cache/save@v3
with:
path: ${{ inputs.ccache_dir }}
key: clio-ccache-${{ runner.os }}-develop-${{ steps.git_common_ancestor.outputs.commit }}
key: clio-ccache-${{ runner.os }}-${{ inputs.build_type }}${{ inputs.code_coverage == 'true' && '-code_coverage' || '' }}-develop-${{ steps.git_common_ancestor.outputs.commit }}


@@ -31,9 +31,6 @@ runs:
shell: bash
id: conan_setup_linux
run: |
conan profile new default --detect
conan profile update settings.compiler.cppstd=20 default
conan profile update settings.compiler.libcxx=libstdc++11 default
echo "created_conan_profile=default" >> $GITHUB_OUTPUT
- name: Export output variable


@@ -1,4 +1,4 @@
name: Build Clio
name: Build
on:
push:
branches: [master, release/*, develop]
@@ -6,30 +6,48 @@ on:
branches: [master, release/*, develop]
workflow_dispatch:
jobs:
lint:
name: Lint
name: Check format
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Run clang-format
uses: ./.github/actions/clang_format
build_mac:
name: Build macOS
build:
name: Build
needs: lint
runs-on: [self-hosted, macOS]
env:
CCACHE_DIR: ${{ github.workspace }}/.ccache
CONAN_USER_HOME: ${{ github.workspace }}
strategy:
fail-fast: false
matrix:
include:
- os: heavy
container:
image: rippleci/clio_ci:latest
build_type: Release
code_coverage: false
- os: heavy
container:
image: rippleci/clio_ci:latest
build_type: Debug
code_coverage: true
- os: macOS
build_type: Release
code_coverage: false
runs-on: [self-hosted, "${{ matrix.os }}"]
container: ${{ matrix.container }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Install packages
run: |
brew install llvm@14 pkg-config ninja bison cmake ccache jq
- name: Prepare runner
uses: ./.github/actions/prepare_runner
with:
disable_ccache: false
- name: Setup conan
uses: ./.github/actions/setup_conan
@@ -41,88 +59,44 @@ jobs:
with:
conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
ccache_dir: ${{ env.CCACHE_DIR }}
build_type: ${{ matrix.build_type }}
code_coverage: ${{ matrix.code_coverage }}
- name: Build Clio
uses: ./.github/actions/build_clio
- name: Run conan and cmake
uses: ./.github/actions/generate
with:
conan_profile: ${{ steps.conan.outputs.conan_profile }}
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
- name: Strip tests
run: strip build/clio_tests
- name: Upload clio_tests
uses: actions/upload-artifact@v3
with:
name: clio_tests_mac
path: build/clio_tests
- name: Save cache
uses: ./.github/actions/save_cache
with:
conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
conan_hash: ${{ steps.restore_cache.outputs.conan_hash }}
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
ccache_dir: ${{ env.CCACHE_DIR }}
ccache_cache_hit: ${{ steps.restore_cache.outputs.ccache_cache_hit }}
build_linux:
name: Build linux
needs: lint
runs-on: [self-hosted, Linux]
container:
image: conanio/gcc11:1.61.0
options: --user root
env:
CCACHE_DIR: /root/.ccache
CONAN_USER_HOME: /root/
steps:
- name: Get Clio
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Add llvm repo
run: |
echo 'deb http://apt.llvm.org/focal/ llvm-toolchain-focal-16 main' >> /etc/apt/sources.list
wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | apt-key add -
- name: Install packages
run: |
apt update -qq
apt install -y jq clang-tidy-16
- name: Install ccache
run: |
wget https://github.com/ccache/ccache/releases/download/v4.8.3/ccache-4.8.3-linux-x86_64.tar.xz
tar xf ./ccache-4.8.3-linux-x86_64.tar.xz
mv ./ccache-4.8.3-linux-x86_64/ccache /usr/bin/ccache
- name: Fix git permissions
run: git config --global --add safe.directory $PWD
- name: Setup conan
uses: ./.github/actions/setup_conan
- name: Restore cache
uses: ./.github/actions/restore_cache
id: restore_cache
with:
conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
ccache_dir: ${{ env.CCACHE_DIR }}
build_type: ${{ matrix.build_type }}
code_coverage: ${{ matrix.code_coverage }}
- name: Build Clio
uses: ./.github/actions/build_clio
with:
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
- name: Show ccache's statistics
shell: bash
id: ccache_stats
run: |
ccache -s > /tmp/ccache.stats
miss_rate=$(cat /tmp/ccache.stats | grep 'Misses' | head -n1 | sed 's/.*(\(.*\)%).*/\1/')
echo "miss_rate=${miss_rate}" >> $GITHUB_OUTPUT
cat /tmp/ccache.stats
- name: Strip tests
if: ${{ !matrix.code_coverage }}
run: strip build/clio_tests
- name: Upload clio_tests
- name: Upload clio_server
uses: actions/upload-artifact@v3
with:
name: clio_tests_linux
name: clio_server_${{ runner.os }}_${{ matrix.build_type }}
path: build/clio_server
- name: Upload clio_tests
if: ${{ !matrix.code_coverage }}
uses: actions/upload-artifact@v3
with:
name: clio_tests_${{ runner.os }}
path: build/clio_tests
- name: Save cache
@@ -133,26 +107,37 @@ jobs:
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
ccache_dir: ${{ env.CCACHE_DIR }}
ccache_cache_hit: ${{ steps.restore_cache.outputs.ccache_cache_hit }}
ccache_cache_miss_rate: ${{ steps.ccache_stats.outputs.miss_rate }}
build_type: ${{ matrix.build_type }}
code_coverage: ${{ matrix.code_coverage }}
# TODO: This is not a part of build process but it is the easiest way to do it here.
# It will be refactored in https://github.com/XRPLF/clio/issues/1075
- name: Run code coverage
if: ${{ matrix.code_coverage }}
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
uses: ./.github/actions/code_coverage
test:
name: Run Tests
needs: build
strategy:
fail-fast: false
matrix:
include:
- os: heavy
container:
image: rippleci/clio_ci:latest
- os: macOS
runs-on: [self-hosted, "${{ matrix.os }}"]
container: ${{ matrix.container }}
test_mac:
needs: build_mac
runs-on: [self-hosted, macOS]
steps:
- uses: actions/download-artifact@v3
with:
name: clio_tests_mac
- name: Run clio_tests
run: |
chmod +x ./clio_tests
./clio_tests --gtest_filter="-BackendCassandraBaseTest*:BackendCassandraTest*:BackendCassandraFactoryTestWithDB*"
test_linux:
needs: build_linux
runs-on: [self-hosted, x-heavy]
steps:
- uses: actions/download-artifact@v3
with:
name: clio_tests_linux
name: clio_tests_${{ runner.os }}
- name: Run clio_tests
run: |
chmod +x ./clio_tests
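The "Show ccache's statistics" step in the workflow above extracts the miss percentage from `ccache -s` so the save-cache action can force a cache refresh at a 100% miss rate. A sketch of that `sed` extraction on a fabricated stats line (real input comes from `ccache -s`; the exact layout varies between ccache versions, which is an assumption here):

```shell
# One representative line of `ccache -s` output (fabricated for the example).
stats='  Misses:            120 / 120 (100.0%)'

# Keep only the percentage inside the trailing parentheses.
miss_rate=$(echo "$stats" | grep 'Misses' | head -n1 | sed 's/.*(\(.*\)%).*/\1/')
echo "miss_rate=${miss_rate}"
```

Comparing the result against the string `'100.0'`, as the save-cache condition does, avoids needing floating-point arithmetic in the workflow expression.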

.github/workflows/clang-tidy.yml vendored Normal file

@@ -0,0 +1,109 @@
name: Clang-tidy check
on:
schedule:
- cron: '0 6 * * 1-5'
workflow_dispatch:
pull_request:
branches: [develop]
paths:
- .clang_tidy
- .github/workflows/clang-tidy.yml
jobs:
clang_tidy:
runs-on: [self-hosted, Linux]
container:
image: rippleci/clio_ci:latest
permissions:
contents: write
issues: write
pull-requests: write
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Prepare runner
uses: ./.github/actions/prepare_runner
with:
disable_ccache: true
- name: Setup conan
uses: ./.github/actions/setup_conan
id: conan
- name: Restore cache
uses: ./.github/actions/restore_cache
id: restore_cache
with:
conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
ccache_dir: ${{ env.CCACHE_DIR }}
- name: Run conan and cmake
uses: ./.github/actions/generate
with:
conan_profile: ${{ steps.conan.outputs.conan_profile }}
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
build_type: Release
- name: Get number of threads
uses: ./.github/actions/get_number_of_threads
id: number_of_threads
- name: Run clang-tidy
continue-on-error: true
shell: bash
id: run_clang_tidy
run: |
run-clang-tidy-17 -p build -j ${{ steps.number_of_threads.outputs.threads_number }} -fix -quiet 1>output.txt
- name: Print issues found
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
shell: bash
run: |
sed -i '/error\||/!d' ./output.txt
cat output.txt
rm output.txt
- name: Create an issue
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
id: create_issue
shell: bash
env:
GH_TOKEN: ${{ github.token }}
run: |
echo -e 'Clang-tidy found issues in the code:\n' > issue.md
echo -e "List of the issues found: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}/" >> issue.md
gh issue create --assignee 'cindyyan317,godexsoft,kuznetsss' --label bug --title 'Clang-tidy found bugs in code🐛' --body-file ./issue.md > create_issue.log
created_issue=$(cat create_issue.log | sed 's|.*/||')
echo "created_issue=$created_issue" >> $GITHUB_OUTPUT
rm create_issue.log issue.md
- uses: crazy-max/ghaction-import-gpg@v5
with:
gpg_private_key: ${{ secrets.ACTIONS_GPG_PRIVATE_KEY }}
passphrase: ${{ secrets.ACTIONS_GPG_PASSPHRASE }}
git_user_signingkey: true
git_commit_gpgsign: true
- name: Create PR with fixes
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
uses: peter-evans/create-pull-request@v5
env:
GH_REPO: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
with:
commit-message: '[CI] clang-tidy auto fixes'
committer: Clio CI <skuznetsov@ripple.com>
branch: 'clang_tidy/autofix'
branch-suffix: timestamp
delete-branch: true
title: '[CI] clang-tidy auto fixes'
body: 'Fixes #${{ steps.create_issue.outputs.created_issue }}. Please review and commit clang-tidy fixes.'
reviewers: 'cindyyan317,godexsoft,kuznetsss'
- name: Fail the job
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
shell: bash
run: exit 1

.github/workflows/nightly.yml vendored Normal file

@@ -0,0 +1,138 @@
name: Nightly release
on:
schedule:
- cron: '0 5 * * 1-5'
workflow_dispatch:
jobs:
build:
name: Build clio
strategy:
fail-fast: false
matrix:
include:
- os: macOS
build_type: Release
- os: heavy
build_type: Release
container:
image: rippleci/clio_ci:latest
- os: heavy
build_type: Debug
container:
image: rippleci/clio_ci:latest
runs-on: [self-hosted, "${{ matrix.os }}"]
container: ${{ matrix.container }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Prepare runner
uses: ./.github/actions/prepare_runner
with:
disable_ccache: true
- name: Setup conan
uses: ./.github/actions/setup_conan
id: conan
- name: Run conan and cmake
uses: ./.github/actions/generate
with:
conan_profile: ${{ steps.conan.outputs.conan_profile }}
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
build_type: ${{ matrix.build_type }}
- name: Build Clio
uses: ./.github/actions/build_clio
- name: Strip tests
run: strip build/clio_tests
- name: Upload clio_tests
uses: actions/upload-artifact@v3
with:
name: clio_tests_${{ runner.os }}_${{ matrix.build_type }}
path: build/clio_tests
- name: Compress clio_server
shell: bash
run: |
cd build
tar czf ./clio_server_${{ runner.os }}_${{ matrix.build_type }}.tar.gz ./clio_server
- name: Upload clio_server
uses: actions/upload-artifact@v3
with:
name: clio_server_${{ runner.os }}_${{ matrix.build_type }}
path: build/clio_server_${{ runner.os }}_${{ matrix.build_type }}.tar.gz
run_tests:
needs: build
strategy:
fail-fast: false
matrix:
include:
- os: macOS
build_type: Release
- os: heavy
build_type: Release
- os: heavy
build_type: Debug
runs-on: [self-hosted, "${{ matrix.os }}"]
steps:
- uses: actions/download-artifact@v3
with:
name: clio_tests_${{ runner.os }}_${{ matrix.build_type }}
- name: Run clio_tests
run: |
chmod +x ./clio_tests
./clio_tests --gtest_filter="-BackendCassandraBaseTest*:BackendCassandraTest*:BackendCassandraFactoryTestWithDB*"
nightly_release:
needs: run_tests
runs-on: ubuntu-20.04
env:
GH_REPO: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
permissions:
contents: write
steps:
- uses: actions/checkout@v4
- uses: actions/download-artifact@v3
with:
path: nightly_release
- name: Prepare files
shell: bash
run: |
cp ${{ github.workspace }}/.github/workflows/nightly_notes.md "${RUNNER_TEMP}/nightly_notes.md"
cd nightly_release
rm -r clio_tests*
for d in $(ls); do
archive_name=$(ls $d)
mv ${d}/${archive_name} ./
rm -r $d
sha256sum ./$archive_name > ./${archive_name}.sha256sum
cat ./$archive_name.sha256sum >> "${RUNNER_TEMP}/nightly_notes.md"
done
echo '```' >> "${RUNNER_TEMP}/nightly_notes.md"
- name: Remove current nightly release and nightly tag
shell: bash
run: |
gh release delete nightly --yes || true
git push origin :nightly || true
- name: Publish nightly release
shell: bash
run: |
gh release create nightly --prerelease --title "Clio development (nightly) build" \
--target $GITHUB_SHA --notes-file "${RUNNER_TEMP}/nightly_notes.md" \
./nightly_release/clio_server*
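The "Prepare files" step above writes a `.sha256sum` file next to each archive and appends it to the release notes. A consumer of the nightly release can verify a downloaded archive the same way; a minimal sketch (the archive name below is a placeholder, not a real artifact):

```shell
# Mirror the checksum step from the workflow, then verify as a consumer would.
# "clio_server_Linux_Release.tar.gz" is a hypothetical archive name.
archive_name=clio_server_Linux_Release.tar.gz
printf 'fake release payload' > "$archive_name"

# Same command the workflow runs for each artifact:
sha256sum "./$archive_name" > "./${archive_name}.sha256sum"

# Verification on the consumer side; exits non-zero on mismatch:
sha256sum -c "./${archive_name}.sha256sum" && echo "checksum OK"
```

`sha256sum -c` reads the recorded digest and recomputes it against the file, so any tampering or truncation of the archive is caught before installation.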

.github/workflows/nightly_notes.md

@@ -0,0 +1,6 @@
> **Note:** Please remember that this is a development release and it is not recommended for production use.
Changelog (including previous releases): https://github.com/XRPLF/clio/commits/nightly
## SHA256 checksums
```

.github/workflows/update_docker_ci.yml

@@ -0,0 +1,40 @@
name: Update CI docker image
on:
push:
branches: [develop]
paths:
- 'docker/ci/**'
- .github/workflows/update_docker_ci.yml
workflow_dispatch:
jobs:
build_and_push:
name: Build and push docker image
runs-on: ubuntu-20.04
steps:
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USER }}
password: ${{ secrets.DOCKERHUB_PW }}
- uses: actions/checkout@v4
- uses: docker/setup-qemu-action@v3
- uses: docker/setup-buildx-action@v3
- uses: docker/metadata-action@v5
id: meta
with:
images: rippleci/clio_ci
tags: |
type=raw,value=latest
type=raw,value=gcc_11
type=raw,value=${{ env.GITHUB_SHA }}
- name: Build and push
uses: docker/build-push-action@v5
with:
context: ${{ github.workspace }}/docker/ci
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}


@@ -17,7 +17,9 @@
*/
//==============================================================================
#include <main/Build.h>
#include "main/Build.h"
#include <string>
namespace Build {
static constexpr char versionString[] = "@VERSION@";


@@ -8,7 +8,7 @@ if (lint)
endif ()
message (STATUS "Using clang-tidy from CLIO_CLANG_TIDY_BIN")
else ()
find_program (_CLANG_TIDY_BIN NAMES "clang-tidy-16" "clang-tidy" REQUIRED)
find_program (_CLANG_TIDY_BIN NAMES "clang-tidy-17" "clang-tidy" REQUIRED)
endif ()
if (NOT _CLANG_TIDY_BIN)

CMake/CodeCoverage.cmake

@@ -0,0 +1,440 @@
# Copyright (c) 2012 - 2017, Lars Bilke
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
# ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# CHANGES:
#
# 2012-01-31, Lars Bilke
# - Enable Code Coverage
#
# 2013-09-17, Joakim Söderberg
# - Added support for Clang.
# - Some additional usage instructions.
#
# 2016-02-03, Lars Bilke
# - Refactored functions to use named parameters
#
# 2017-06-02, Lars Bilke
# - Merged with modified version from github.com/ufz/ogs
#
# 2019-05-06, Anatolii Kurotych
# - Remove unnecessary --coverage flag
#
# 2019-12-13, FeRD (Frank Dana)
# - Deprecate COVERAGE_LCOVR_EXCLUDES and COVERAGE_GCOVR_EXCLUDES lists in favor
# of tool-agnostic COVERAGE_EXCLUDES variable, or EXCLUDE setup arguments.
# - CMake 3.4+: All excludes can be specified relative to BASE_DIRECTORY
# - All setup functions: accept BASE_DIRECTORY, EXCLUDE list
# - Set lcov basedir with -b argument
# - Add automatic --demangle-cpp in lcovr, if 'c++filt' is available (can be
# overridden with NO_DEMANGLE option in setup_target_for_coverage_lcovr().)
# - Delete output dir, .info file on 'make clean'
# - Remove Python detection, since version mismatches will break gcovr
# - Minor cleanup (lowercase function names, update examples...)
#
# 2019-12-19, FeRD (Frank Dana)
# - Rename Lcov outputs, make filtered file canonical, fix cleanup for targets
#
# 2020-01-19, Bob Apthorpe
# - Added gfortran support
#
# 2020-02-17, FeRD (Frank Dana)
# - Make all add_custom_target()s VERBATIM to auto-escape wildcard characters
# in EXCLUDEs, and remove manual escaping from gcovr targets
#
# 2021-01-19, Robin Mueller
# - Add CODE_COVERAGE_VERBOSE option which will allow to print out commands which are run
# - Added the option for users to set the GCOVR_ADDITIONAL_ARGS variable to supply additional
# flags to the gcovr command
#
# 2020-05-04, Michael Davis
# - Add -fprofile-abs-path to make gcno files contain absolute paths
# - Fix BASE_DIRECTORY not working when defined
# - Change BYPRODUCT from folder to index.html to stop ninja from complaining about double defines
#
# 2021-05-10, Martin Stump
# - Check if the generator is multi-config before warning about non-Debug builds
#
# 2022-02-22, Marko Wehle
# - Change gcovr output from -o <filename> for --xml <filename> and --html <filename> output respectively.
# This will allow for Multiple Output Formats at the same time by making use of GCOVR_ADDITIONAL_ARGS, e.g. GCOVR_ADDITIONAL_ARGS "--txt".
#
# 2022-09-28, Sebastian Mueller
# - fix append_coverage_compiler_flags_to_target to correctly add flags
# - replace "-fprofile-arcs -ftest-coverage" with "--coverage" (equivalent)
#
# 2023-12-15, Bronek Kozicki
# - remove setup_target_for_coverage_lcov (slow) and setup_target_for_coverage_fastcov (no support for Clang)
# - fix Clang support by adding find_program( ... llvm-cov )
# - add Apple Clang support by adding execute_process( COMMAND xcrun -f llvm-cov ... )
# - add CODE_COVERAGE_GCOV_TOOL to explicitly select gcov tool and disable find_program
# - replace both functions setup_target_for_coverage_gcovr_* with single setup_target_for_coverage_gcovr
# - add support for all gcovr output formats
#
# USAGE:
#
# 1. Copy this file into your cmake modules path.
#
# 2. Add the following line to your CMakeLists.txt (best inside an if-condition
# using a CMake option() to enable it just optionally):
# include(CodeCoverage)
#
# 3. Append necessary compiler flags for all supported source files:
# append_coverage_compiler_flags()
# Or for specific target:
# append_coverage_compiler_flags_to_target(YOUR_TARGET_NAME)
#
# 3.a (OPTIONAL) Set appropriate optimization flags, e.g. -O0, -O1 or -Og
#
# 4. If you need to exclude additional directories from the report, specify them
# using full paths in the COVERAGE_EXCLUDES variable before calling
# setup_target_for_coverage_*().
# Example:
# set(COVERAGE_EXCLUDES
# '${PROJECT_SOURCE_DIR}/src/dir1/*'
# '/path/to/my/src/dir2/*')
# Or, use the EXCLUDE argument to setup_target_for_coverage_*().
# Example:
# setup_target_for_coverage_gcovr(
# NAME coverage
# EXECUTABLE testrunner
# EXCLUDE "${PROJECT_SOURCE_DIR}/src/dir1/*" "/path/to/my/src/dir2/*")
#
# 4.a NOTE: With CMake 3.4+, COVERAGE_EXCLUDES or EXCLUDE can also be set
# relative to the BASE_DIRECTORY (default: PROJECT_SOURCE_DIR)
# Example:
# set(COVERAGE_EXCLUDES "dir1/*")
# setup_target_for_coverage_gcovr(
# NAME coverage
# EXECUTABLE testrunner
# FORMAT html-details
# BASE_DIRECTORY "${PROJECT_SOURCE_DIR}/src"
# EXCLUDE "dir2/*")
#
# 4.b If you need to pass specific options to gcovr, specify them in
# GCOVR_ADDITIONAL_ARGS variable.
# Example:
# set (GCOVR_ADDITIONAL_ARGS --exclude-throw-branches --exclude-noncode-lines -s)
# setup_target_for_coverage_gcovr(
# NAME coverage
# EXECUTABLE testrunner
# EXCLUDE "src/dir1" "src/dir2")
#
# 5. Use the functions described below to create a custom make target which
# runs your test executable and produces a code coverage report.
#
# 6. Build a Debug build:
# cmake -DCMAKE_BUILD_TYPE=Debug ..
# make
# make my_coverage_target
include(CMakeParseArguments)
option(CODE_COVERAGE_VERBOSE "Verbose information" FALSE)
# Check prereqs
find_program( GCOVR_PATH gcovr PATHS ${CMAKE_SOURCE_DIR}/scripts/test)
if (DEFINED CODE_COVERAGE_GCOV_TOOL)
set(GCOV_TOOL "${CODE_COVERAGE_GCOV_TOOL}")
elseif (DEFINED ENV{CODE_COVERAGE_GCOV_TOOL})
set(GCOV_TOOL "$ENV{CODE_COVERAGE_GCOV_TOOL}")
elseif("${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if (APPLE)
execute_process( COMMAND xcrun -f llvm-cov
OUTPUT_VARIABLE LLVMCOV_PATH
OUTPUT_STRIP_TRAILING_WHITESPACE
)
else()
find_program( LLVMCOV_PATH llvm-cov )
endif()
if(LLVMCOV_PATH)
set(GCOV_TOOL "${LLVMCOV_PATH} gcov")
endif()
elseif ("${CMAKE_CXX_COMPILER_ID}" MATCHES "GNU")
find_program( GCOV_PATH gcov )
set(GCOV_TOOL "${GCOV_PATH}")
endif()
# Check supported compiler (Clang, GNU and Flang)
get_property(LANGUAGES GLOBAL PROPERTY ENABLED_LANGUAGES)
foreach(LANG ${LANGUAGES})
if("${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if("${CMAKE_${LANG}_COMPILER_VERSION}" VERSION_LESS 3)
message(FATAL_ERROR "Clang version must be 3.0.0 or greater! Aborting...")
endif()
elseif(NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "GNU"
AND NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(LLVM)?[Ff]lang")
message(FATAL_ERROR "Compiler is not GNU or Flang! Aborting...")
endif()
endforeach()
set(COVERAGE_COMPILER_FLAGS "-g --coverage"
CACHE INTERNAL "")
if(CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Clang)")
include(CheckCXXCompilerFlag)
check_cxx_compiler_flag(-fprofile-abs-path HAVE_cxx_fprofile_abs_path)
if(HAVE_cxx_fprofile_abs_path)
set(COVERAGE_CXX_COMPILER_FLAGS "${COVERAGE_COMPILER_FLAGS} -fprofile-abs-path")
endif()
include(CheckCCompilerFlag)
check_c_compiler_flag(-fprofile-abs-path HAVE_c_fprofile_abs_path)
if(HAVE_c_fprofile_abs_path)
set(COVERAGE_C_COMPILER_FLAGS "${COVERAGE_COMPILER_FLAGS} -fprofile-abs-path")
endif()
endif()
set(CMAKE_Fortran_FLAGS_COVERAGE
${COVERAGE_COMPILER_FLAGS}
CACHE STRING "Flags used by the Fortran compiler during coverage builds."
FORCE )
set(CMAKE_CXX_FLAGS_COVERAGE
${COVERAGE_COMPILER_FLAGS}
CACHE STRING "Flags used by the C++ compiler during coverage builds."
FORCE )
set(CMAKE_C_FLAGS_COVERAGE
${COVERAGE_COMPILER_FLAGS}
CACHE STRING "Flags used by the C compiler during coverage builds."
FORCE )
set(CMAKE_EXE_LINKER_FLAGS_COVERAGE
""
CACHE STRING "Flags used for linking binaries during coverage builds."
FORCE )
set(CMAKE_SHARED_LINKER_FLAGS_COVERAGE
""
CACHE STRING "Flags used by the shared libraries linker during coverage builds."
FORCE )
mark_as_advanced(
CMAKE_Fortran_FLAGS_COVERAGE
CMAKE_CXX_FLAGS_COVERAGE
CMAKE_C_FLAGS_COVERAGE
CMAKE_EXE_LINKER_FLAGS_COVERAGE
CMAKE_SHARED_LINKER_FLAGS_COVERAGE )
get_property(GENERATOR_IS_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
if(NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG))
message(WARNING "Code coverage results with an optimised (non-Debug) build may be misleading")
endif() # NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG)
if(CMAKE_C_COMPILER_ID STREQUAL "GNU" OR CMAKE_Fortran_COMPILER_ID STREQUAL "GNU")
link_libraries(gcov)
endif()
# Defines a target for running and collection code coverage information
# Builds dependencies, runs the given executable and outputs reports.
# NOTE! The executable should always have a ZERO as exit code otherwise
# the coverage generation will not complete.
#
# setup_target_for_coverage_gcovr(
# NAME ctest_coverage # New target name
# EXECUTABLE ctest -j ${PROCESSOR_COUNT} # Executable in PROJECT_BINARY_DIR
# DEPENDENCIES executable_target # Dependencies to build first
# BASE_DIRECTORY "../" # Base directory for report
# # (defaults to PROJECT_SOURCE_DIR)
# FORMAT "cobertura" # Output format, one of:
# # xml cobertura sonarqube json-summary
# # json-details coveralls csv txt
# # html-single html-nested html-details
# # (xml is an alias to cobertura;
# # if no format is set, defaults to xml)
# EXCLUDE "src/dir1/*" "src/dir2/*" # Patterns to exclude (can be relative
# # to BASE_DIRECTORY, with CMake 3.4+)
# )
# The user can set the variable GCOVR_ADDITIONAL_ARGS to supply additional flags to the
# GCOVR command.
function(setup_target_for_coverage_gcovr)
set(options NONE)
set(oneValueArgs BASE_DIRECTORY NAME FORMAT)
set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES)
cmake_parse_arguments(Coverage "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(NOT GCOV_TOOL)
message(FATAL_ERROR "Could not find gcov or llvm-cov tool! Aborting...")
endif()
if(NOT GCOVR_PATH)
message(FATAL_ERROR "Could not find gcovr tool! Aborting...")
endif()
# Set base directory (as absolute path), or default to PROJECT_SOURCE_DIR
if(DEFINED Coverage_BASE_DIRECTORY)
get_filename_component(BASEDIR ${Coverage_BASE_DIRECTORY} ABSOLUTE)
else()
set(BASEDIR ${PROJECT_SOURCE_DIR})
endif()
if(NOT DEFINED Coverage_FORMAT)
set(Coverage_FORMAT xml)
endif()
if("--output" IN_LIST GCOVR_ADDITIONAL_ARGS)
message(FATAL_ERROR "Unsupported --output option detected in GCOVR_ADDITIONAL_ARGS! Aborting...")
else()
if((Coverage_FORMAT STREQUAL "html-details")
OR (Coverage_FORMAT STREQUAL "html-nested"))
set(GCOVR_OUTPUT_FILE ${PROJECT_BINARY_DIR}/${Coverage_NAME}/index.html)
set(GCOVR_CREATE_FOLDER ${PROJECT_BINARY_DIR}/${Coverage_NAME})
elseif(Coverage_FORMAT STREQUAL "html-single")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.html)
elseif((Coverage_FORMAT STREQUAL "json-summary")
OR (Coverage_FORMAT STREQUAL "json-details")
OR (Coverage_FORMAT STREQUAL "coveralls"))
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.json)
elseif(Coverage_FORMAT STREQUAL "txt")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.txt)
elseif(Coverage_FORMAT STREQUAL "csv")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.csv)
else()
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.xml)
endif()
endif()
if ((Coverage_FORMAT STREQUAL "cobertura")
OR (Coverage_FORMAT STREQUAL "xml"))
list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura-pretty )
set(Coverage_FORMAT cobertura) # overwrite xml
elseif(Coverage_FORMAT STREQUAL "sonarqube")
list(APPEND GCOVR_ADDITIONAL_ARGS --sonarqube "${GCOVR_OUTPUT_FILE}" )
elseif(Coverage_FORMAT STREQUAL "json-summary")
list(APPEND GCOVR_ADDITIONAL_ARGS --json-summary "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --json-summary-pretty)
elseif(Coverage_FORMAT STREQUAL "json-details")
list(APPEND GCOVR_ADDITIONAL_ARGS --json "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --json-pretty)
elseif(Coverage_FORMAT STREQUAL "coveralls")
list(APPEND GCOVR_ADDITIONAL_ARGS --coveralls "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --coveralls-pretty)
elseif(Coverage_FORMAT STREQUAL "csv")
list(APPEND GCOVR_ADDITIONAL_ARGS --csv "${GCOVR_OUTPUT_FILE}" )
elseif(Coverage_FORMAT STREQUAL "txt")
list(APPEND GCOVR_ADDITIONAL_ARGS --txt "${GCOVR_OUTPUT_FILE}" )
elseif(Coverage_FORMAT STREQUAL "html-single")
list(APPEND GCOVR_ADDITIONAL_ARGS --html "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --html-self-contained)
elseif(Coverage_FORMAT STREQUAL "html-nested")
list(APPEND GCOVR_ADDITIONAL_ARGS --html-nested "${GCOVR_OUTPUT_FILE}" )
elseif(Coverage_FORMAT STREQUAL "html-details")
list(APPEND GCOVR_ADDITIONAL_ARGS --html-details "${GCOVR_OUTPUT_FILE}" )
else()
message(FATAL_ERROR "Unsupported output style ${Coverage_FORMAT}! Aborting...")
endif()
# Collect excludes (CMake 3.4+: Also compute absolute paths)
set(GCOVR_EXCLUDES "")
foreach(EXCLUDE ${Coverage_EXCLUDE} ${COVERAGE_EXCLUDES} ${COVERAGE_GCOVR_EXCLUDES})
if(CMAKE_VERSION VERSION_GREATER 3.4)
get_filename_component(EXCLUDE ${EXCLUDE} ABSOLUTE BASE_DIR ${BASEDIR})
endif()
list(APPEND GCOVR_EXCLUDES "${EXCLUDE}")
endforeach()
list(REMOVE_DUPLICATES GCOVR_EXCLUDES)
# Combine excludes to several -e arguments
set(GCOVR_EXCLUDE_ARGS "")
foreach(EXCLUDE ${GCOVR_EXCLUDES})
list(APPEND GCOVR_EXCLUDE_ARGS "-e")
list(APPEND GCOVR_EXCLUDE_ARGS "${EXCLUDE}")
endforeach()
# Set up commands which will be run to generate coverage data
# Run tests
set(GCOVR_EXEC_TESTS_CMD
${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS}
)
# Create folder
if(DEFINED GCOVR_CREATE_FOLDER)
set(GCOVR_FOLDER_CMD
${CMAKE_COMMAND} -E make_directory ${GCOVR_CREATE_FOLDER})
else()
set(GCOVR_FOLDER_CMD echo) # dummy
endif()
# Running gcovr
set(GCOVR_CMD
${GCOVR_PATH}
--gcov-executable ${GCOV_TOOL}
--gcov-ignore-parse-errors=negative_hits.warn_once_per_file
-r ${BASEDIR}
${GCOVR_ADDITIONAL_ARGS}
${GCOVR_EXCLUDE_ARGS}
--object-directory=${PROJECT_BINARY_DIR}
)
if(CODE_COVERAGE_VERBOSE)
message(STATUS "Executed command report")
message(STATUS "Command to run tests: ")
string(REPLACE ";" " " GCOVR_EXEC_TESTS_CMD_SPACED "${GCOVR_EXEC_TESTS_CMD}")
message(STATUS "${GCOVR_EXEC_TESTS_CMD_SPACED}")
if(NOT GCOVR_FOLDER_CMD STREQUAL "echo")
message(STATUS "Command to create a folder: ")
string(REPLACE ";" " " GCOVR_FOLDER_CMD_SPACED "${GCOVR_FOLDER_CMD}")
message(STATUS "${GCOVR_FOLDER_CMD_SPACED}")
endif()
message(STATUS "Command to generate gcovr coverage data: ")
string(REPLACE ";" " " GCOVR_CMD_SPACED "${GCOVR_CMD}")
message(STATUS "${GCOVR_CMD_SPACED}")
endif()
add_custom_target(${Coverage_NAME}
COMMAND ${GCOVR_EXEC_TESTS_CMD}
COMMAND ${GCOVR_FOLDER_CMD}
COMMAND ${GCOVR_CMD}
BYPRODUCTS ${GCOVR_OUTPUT_FILE}
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
DEPENDS ${Coverage_DEPENDENCIES}
VERBATIM # Protect arguments to commands
COMMENT "Running gcovr to produce code coverage report."
)
# Show info where to find the report
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD
COMMAND ;
COMMENT "Code coverage report saved in ${GCOVR_OUTPUT_FILE} formatted as ${Coverage_FORMAT}"
)
endfunction() # setup_target_for_coverage_gcovr
function(append_coverage_compiler_flags)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${COVERAGE_COMPILER_FLAGS}" PARENT_SCOPE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${COVERAGE_COMPILER_FLAGS}" PARENT_SCOPE)
set(CMAKE_Fortran_FLAGS "${CMAKE_Fortran_FLAGS} ${COVERAGE_COMPILER_FLAGS}" PARENT_SCOPE)
message(STATUS "Appending code coverage compiler flags: ${COVERAGE_COMPILER_FLAGS}")
endfunction() # append_coverage_compiler_flags
# Setup coverage for specific library
function(append_coverage_compiler_flags_to_target name)
separate_arguments(_flag_list NATIVE_COMMAND "${COVERAGE_COMPILER_FLAGS}")
target_compile_options(${name} PRIVATE ${_flag_list})
if(CMAKE_C_COMPILER_ID STREQUAL "GNU" OR CMAKE_Fortran_COMPILER_ID STREQUAL "GNU")
target_link_libraries(${name} PRIVATE gcov)
endif()
endfunction()
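The long `if/elseif` chain in `setup_target_for_coverage_gcovr` maps each `FORMAT` value to an output file name and the matching gcovr flag. The file-name side of that mapping can be sketched as a small shell helper (the function name is illustrative and not part of the CMake module):

```shell
# Illustrative mirror of CodeCoverage.cmake's FORMAT -> output-file mapping.
# coverage_output NAME FORMAT prints the file gcovr would be asked to write.
coverage_output() {
    local name=$1 format=$2
    case "$format" in
        html-details|html-nested) echo "$name/index.html" ;;   # per-file HTML tree
        html-single)              echo "$name.html" ;;         # self-contained page
        json-summary|json-details|coveralls) echo "$name.json" ;;
        txt)                      echo "$name.txt" ;;
        csv)                      echo "$name.csv" ;;
        *)                        echo "$name.xml" ;;          # xml/cobertura/sonarqube
    esac
}

coverage_output coverage_report html-details   # -> coverage_report/index.html
coverage_output coverage_report cobertura      # -> coverage_report.xml
```

Note that `xml` is only an alias: the CMake code later rewrites `Coverage_FORMAT` to `cobertura` and passes `--cobertura` to gcovr, while `sonarqube` shares the `.xml` file name but uses the `--sonarqube` flag.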


@@ -1,125 +0,0 @@
# call add_coverage(module_name) to add coverage targets for the given module
function (add_coverage module)
if ("${CMAKE_C_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang"
OR "${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
message ("[Coverage] Building with llvm Code Coverage Tools")
# Using llvm gcov ; llvm install by xcode
set (LLVM_COV_PATH /Library/Developer/CommandLineTools/usr/bin)
if (NOT EXISTS ${LLVM_COV_PATH}/llvm-cov)
message (FATAL_ERROR "llvm-cov not found! Aborting.")
endif ()
# set Flags
target_compile_options (${module} PRIVATE
-fprofile-instr-generate
-fcoverage-mapping)
target_link_options (${module} PUBLIC
-fprofile-instr-generate
-fcoverage-mapping)
target_compile_options (clio PRIVATE
-fprofile-instr-generate
-fcoverage-mapping)
target_link_options (clio PUBLIC
-fprofile-instr-generate
-fcoverage-mapping)
# llvm-cov
add_custom_target (${module}-ccov-preprocessing
COMMAND LLVM_PROFILE_FILE=${module}.profraw $<TARGET_FILE:${module}>
COMMAND ${LLVM_COV_PATH}/llvm-profdata merge -sparse ${module}.profraw -o
${module}.profdata
DEPENDS ${module})
add_custom_target (${module}-ccov-show
COMMAND ${LLVM_COV_PATH}/llvm-cov show $<TARGET_FILE:${module}>
-instr-profile=${module}.profdata -show-line-counts-or-regions
DEPENDS ${module}-ccov-preprocessing)
# add summary for CI parse
add_custom_target (${module}-ccov-report
COMMAND
${LLVM_COV_PATH}/llvm-cov report $<TARGET_FILE:${module}>
-instr-profile=${module}.profdata
-ignore-filename-regex=".*_makefiles|.*unittests|.*_deps"
-show-region-summary=false
DEPENDS ${module}-ccov-preprocessing)
# exclude libs and unittests self
add_custom_target (${module}-ccov
COMMAND
${LLVM_COV_PATH}/llvm-cov show $<TARGET_FILE:${module}>
-instr-profile=${module}.profdata -show-line-counts-or-regions
-output-dir=${module}-llvm-cov -format="html"
-ignore-filename-regex=".*_makefiles|.*unittests|.*_deps" > /dev/null 2>&1
DEPENDS ${module}-ccov-preprocessing)
add_custom_command (
TARGET ${module}-ccov
POST_BUILD
COMMENT
"Open ${module}-llvm-cov/index.html in your browser to view the coverage report."
)
elseif ("${CMAKE_C_COMPILER_ID}" MATCHES "GNU" OR "${CMAKE_CXX_COMPILER_ID}" MATCHES "GNU")
message ("[Coverage] Building with Gcc Code Coverage Tools")
find_program (GCOV_PATH gcov)
if (NOT GCOV_PATH)
message (FATAL_ERROR "gcov not found! Aborting...")
endif () # NOT GCOV_PATH
find_program (GCOVR_PATH gcovr)
if (NOT GCOVR_PATH)
message (FATAL_ERROR "gcovr not found! Aborting...")
endif () # NOT GCOVR_PATH
set (COV_OUTPUT_PATH ${module}-gcc-cov)
target_compile_options (${module} PRIVATE -fprofile-arcs -ftest-coverage
-fPIC)
target_link_libraries (${module} PRIVATE gcov)
target_compile_options (clio PRIVATE -fprofile-arcs -ftest-coverage
-fPIC)
target_link_libraries (clio PRIVATE gcov)
# this target is used for CI as well generate the summary out.xml will send
# to github action to generate markdown, we can paste it to comments or
# readme
add_custom_target (${module}-ccov
COMMAND ${module} ${TEST_PARAMETER}
COMMAND rm -rf ${COV_OUTPUT_PATH}
COMMAND mkdir ${COV_OUTPUT_PATH}
COMMAND
gcovr -r ${CMAKE_SOURCE_DIR} --object-directory=${PROJECT_BINARY_DIR} -x
${COV_OUTPUT_PATH}/out.xml --exclude='${CMAKE_SOURCE_DIR}/unittests/'
--exclude='${PROJECT_BINARY_DIR}/'
COMMAND
gcovr -r ${CMAKE_SOURCE_DIR} --object-directory=${PROJECT_BINARY_DIR}
--html ${COV_OUTPUT_PATH}/report.html
--exclude='${CMAKE_SOURCE_DIR}/unittests/'
--exclude='${PROJECT_BINARY_DIR}/'
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
COMMENT "Running gcovr to produce Cobertura code coverage report.")
# generate the detail report
add_custom_target (${module}-ccov-report
COMMAND ${module} ${TEST_PARAMETER}
COMMAND rm -rf ${COV_OUTPUT_PATH}
COMMAND mkdir ${COV_OUTPUT_PATH}
COMMAND
gcovr -r ${CMAKE_SOURCE_DIR} --object-directory=${PROJECT_BINARY_DIR}
--html-details ${COV_OUTPUT_PATH}/index.html
--exclude='${CMAKE_SOURCE_DIR}/unittests/'
--exclude='${PROJECT_BINARY_DIR}/'
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
COMMENT "Running gcovr to produce Cobertura code coverage report.")
add_custom_command (
TARGET ${module}-ccov-report
POST_BUILD
COMMENT
"Open ${COV_OUTPUT_PATH}/index.html in your browser to view the coverage report."
)
else ()
message (FATAL_ERROR "Complier not support yet")
endif ()
endfunction ()


@@ -19,14 +19,15 @@ set(COMPILER_FLAGS
-Wunused
)
if (is_gcc AND NOT lint)
list(APPEND COMPILER_FLAGS
-Wduplicated-branches
-Wduplicated-cond
-Wlogical-op
-Wuseless-cast
)
endif ()
#TODO: reenable when we change CI #884
# if (is_gcc AND NOT lint)
# list(APPEND COMPILER_FLAGS
# -Wduplicated-branches
# -Wduplicated-cond
# -Wlogical-op
# -Wuseless-cast
# )
# endif ()
if (is_clang)
list(APPEND COMPILER_FLAGS


@@ -0,0 +1,3 @@
target_compile_definitions(clio PUBLIC BOOST_STACKTRACE_LINK)
target_compile_definitions(clio PUBLIC BOOST_STACKTRACE_USE_BACKTRACE)
find_package(libbacktrace REQUIRED)


@@ -21,6 +21,12 @@ include (CMake/Ccache.cmake)
include (CheckCXXCompilerFlag)
include (CMake/ClangTidy.cmake)
# Set coverage build options
if (tests AND coverage)
include (CMake/CodeCoverage.cmake)
append_coverage_compiler_flags()
endif()
if (verbose)
set (CMAKE_VERBOSE_MAKEFILE TRUE)
endif ()
@@ -44,6 +50,7 @@ include (CMake/deps/OpenSSL.cmake)
include (CMake/deps/Threads.cmake)
include (CMake/deps/libfmt.cmake)
include (CMake/deps/cassandra.cmake)
include (CMake/deps/libbacktrace.cmake)
# TODO: Include directory will be wrong when installed.
target_include_directories (clio PUBLIC src)
@@ -56,11 +63,14 @@ target_link_libraries (clio
PUBLIC Boost::system
PUBLIC Boost::log
PUBLIC Boost::log_setup
PUBLIC Boost::stacktrace_backtrace
PUBLIC cassandra-cpp-driver::cassandra-cpp-driver
PUBLIC fmt::fmt
PUBLIC OpenSSL::Crypto
PUBLIC OpenSSL::SSL
PUBLIC xrpl::libxrpl
PUBLIC dl
PUBLIC libbacktrace::libbacktrace
INTERFACE Threads::Threads
)
@@ -95,12 +105,18 @@ target_sources (clio PRIVATE
src/etl/impl/ForwardCache.cpp
## Feed
src/feed/SubscriptionManager.cpp
src/feed/impl/TransactionFeed.cpp
src/feed/impl/LedgerFeed.cpp
src/feed/impl/ProposedTransactionFeed.cpp
src/feed/impl/SingleFeedBase.cpp
## Web
src/web/impl/AdminVerificationStrategy.cpp
src/web/IntervalSweepHandler.cpp
src/web/Resolver.cpp
## RPC
src/rpc/Errors.cpp
src/rpc/Factories.cpp
src/rpc/AMMHelpers.cpp
src/rpc/RPCHelpers.cpp
src/rpc/Counters.cpp
src/rpc/WorkQueue.cpp
@@ -118,6 +134,7 @@ target_sources (clio PRIVATE
src/rpc/handlers/AccountObjects.cpp
src/rpc/handlers/AccountOffers.cpp
src/rpc/handlers/AccountTx.cpp
src/rpc/handlers/AMMInfo.cpp
src/rpc/handlers/BookChanges.cpp
src/rpc/handlers/BookOffers.cpp
src/rpc/handlers/DepositAuthorized.cpp
@@ -140,10 +157,15 @@ target_sources (clio PRIVATE
src/util/log/Logger.cpp
src/util/prometheus/Http.cpp
src/util/prometheus/Label.cpp
src/util/prometheus/Metrics.cpp
src/util/prometheus/MetricBase.cpp
src/util/prometheus/MetricBuilder.cpp
src/util/prometheus/MetricsFamily.cpp
src/util/prometheus/OStream.cpp
src/util/prometheus/Prometheus.cpp
src/util/Random.cpp
src/util/Taggable.cpp)
src/util/Taggable.cpp
src/util/TerminationHandler.cpp
)
# Clio server
add_executable (clio_server src/main/Main.cpp)
@@ -165,15 +187,18 @@ if (tests)
unittests/ProfilerTests.cpp
unittests/JsonUtilTests.cpp
unittests/DOSGuardTests.cpp
unittests/SubscriptionTests.cpp
unittests/SubscriptionManagerTests.cpp
unittests/util/AssertTests.cpp
unittests/util/BatchingTests.cpp
unittests/util/TestObject.cpp
unittests/util/StringUtils.cpp
unittests/util/prometheus/CounterTests.cpp
unittests/util/prometheus/GaugeTests.cpp
unittests/util/prometheus/HistogramTests.cpp
unittests/util/prometheus/HttpTests.cpp
unittests/util/prometheus/LabelTests.cpp
unittests/util/prometheus/MetricsTests.cpp
unittests/util/prometheus/MetricBuilderTests.cpp
unittests/util/prometheus/MetricsFamilyTests.cpp
unittests/util/prometheus/OStreamTests.cpp
# ETL
unittests/etl/ExtractionDataPipeTests.cpp
unittests/etl/ExtractorTests.cpp
@@ -225,6 +250,7 @@ if (tests)
unittests/rpc/handlers/BookChangesTests.cpp
unittests/rpc/handlers/LedgerTests.cpp
unittests/rpc/handlers/VersionHandlerTests.cpp
unittests/rpc/handlers/AMMInfoTests.cpp
# Backend
unittests/data/BackendFactoryTests.cpp
unittests/data/BackendCountersTests.cpp
@@ -239,7 +265,16 @@ if (tests)
unittests/web/ServerTests.cpp
unittests/web/RPCServerHandlerTests.cpp
unittests/web/WhitelistHandlerTests.cpp
unittests/web/SweepHandlerTests.cpp)
unittests/web/SweepHandlerTests.cpp
# Feed
unittests/feed/SubscriptionManagerTests.cpp
unittests/feed/SingleFeedBaseTests.cpp
unittests/feed/ProposedTransactionFeedTests.cpp
unittests/feed/BookChangesFeedTests.cpp
unittests/feed/LedgerFeedTests.cpp
unittests/feed/TransactionFeedTests.cpp
unittests/feed/ForwardFeedTests.cpp
unittests/feed/TrackableSignalTests.cpp)
include (CMake/deps/gtest.cmake)
@@ -253,12 +288,31 @@ if (tests)
target_include_directories (${TEST_TARGET} PRIVATE unittests)
target_link_libraries (${TEST_TARGET} PUBLIC clio gtest::gtest)
# Generate `clio_tests-ccov` if coverage is enabled
# Note: use `make clio_tests-ccov` to generate report
# Generate `coverage_report` target if coverage is enabled
if (coverage)
target_compile_definitions(${TEST_TARGET} PRIVATE COVERAGE_ENABLED)
include (CMake/Coverage.cmake)
add_coverage (${TEST_TARGET})
if (DEFINED CODE_COVERAGE_REPORT_FORMAT)
set(CODE_COVERAGE_FORMAT ${CODE_COVERAGE_REPORT_FORMAT})
else()
set(CODE_COVERAGE_FORMAT html-details)
endif()
if (DEFINED CODE_COVERAGE_TESTS_ARGS)
set(TESTS_ADDITIONAL_ARGS ${CODE_COVERAGE_TESTS_ARGS})
separate_arguments(TESTS_ADDITIONAL_ARGS)
else()
set(TESTS_ADDITIONAL_ARGS "")
endif()
set (GCOVR_ADDITIONAL_ARGS --exclude-throw-branches -s)
setup_target_for_coverage_gcovr(
NAME coverage_report
FORMAT ${CODE_COVERAGE_FORMAT}
EXECUTABLE clio_tests
EXECUTABLE_ARGS --gtest_brief=1 ${TESTS_ADDITIONAL_ARGS}
EXCLUDE "unittests"
DEPENDENCIES clio_tests
)
endif ()
endif ()


@@ -62,6 +62,11 @@ git commit --amend -S
git push --force
```
## Use ccache (optional)
Clio uses ccache to speed up compilation. If you want to use it, please make sure it is installed on your machine.
CMake will automatically detect it and use it if it's available.
## Fixing issues found during code review
While your code is in review, it's possible that some changes will be requested by reviewer(s).
This section describes the process of adding your fixes.
@@ -91,7 +96,7 @@ The button for that is near the bottom of the PR's page on GitHub.
This is a non-exhaustive list of recommended style guidelines. These are not always strictly enforced and serve as a way to keep the codebase coherent.
## Formatting
Code must conform to `clang-format` version 16, unless the result would be unreasonably difficult to read or maintain.
Code must conform to `clang-format` version 17, unless the result would be unreasonably difficult to read or maintain.
To change your code to conform use `clang-format -i <your changed files>`.
## Avoid
@@ -126,6 +131,7 @@ Existing maintainers can resign, or be subject to a vote for removal at the behe
* [cindyyan317](https://github.com/cindyyan317) (Ripple)
* [godexsoft](https://github.com/godexsoft) (Ripple)
* [kuznetsss](https://github.com/kuznetsss) (Ripple)
* [legleux](https://github.com/legleux) (Ripple)
## Honorable ex-Maintainers


@@ -1,4 +1,8 @@
# Clio
[![Build status](https://github.com/XRPLF/clio/actions/workflows/build.yml/badge.svg?branch=develop)](https://github.com/XRPLF/clio/actions/workflows/build.yml?query=branch%3Adevelop)
[![Nightly release status](https://github.com/XRPLF/clio/actions/workflows/nightly.yml/badge.svg?branch=develop)](https://github.com/XRPLF/clio/actions/workflows/nightly.yml?query=branch%3Adevelop)
[![Clang-tidy checks status](https://github.com/XRPLF/clio/actions/workflows/clang-tidy.yml/badge.svg?branch=develop)](https://github.com/XRPLF/clio/actions/workflows/clang-tidy.yml?query=branch%3Adevelop)
[![Code coverage develop branch](https://codecov.io/gh/XRPLF/clio/branch/develop/graph/badge.svg?)](https://app.codecov.io/gh/XRPLF/clio)
Clio is an XRP Ledger API server. Clio is optimized for RPC calls, over WebSocket or JSON-RPC.
Validated historical ledger and transaction data are stored in a more space-efficient format,
@@ -36,6 +40,7 @@ It is written in C++20 and therefore requires a modern compiler.
- [Conan 1.55](https://conan.io/downloads.html)
- [CMake 3.16](https://cmake.org/download/)
- [**Optional**] [GCovr](https://gcc.gnu.org/onlinedocs/gcc/Gcov.html) (needed for code coverage generation)
- [**Optional**] [CCache](https://ccache.dev/) (speeds up compilation if you are going to compile Clio often)
| Compiler | Version |
|-------------|---------|
@@ -91,7 +96,7 @@ conan remove -f xrpl
Navigate to Clio's root directory and perform
```sh
mkdir build && cd build
conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True
conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True -o lint=False
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --parallel 8 # or without the number if you feel extra adventurous
```
@@ -101,6 +106,18 @@ If all goes well, `conan install` will find required packages and `cmake` will d
> **Tip:** To generate a Code Coverage report, include `-o coverage=True` in the `conan install` command above, along with `-o tests=True` to enable tests. After running the `cmake` commands, execute `make clio_tests-ccov`. The coverage report will be found at `clio_tests-llvm-cov/index.html`.
## Building Clio with Docker
It is possible to build Clio using Docker if you don't want to install all the dependencies on your machine.
```sh
docker run -it rippleci/clio_ci:latest
git clone https://github.com/XRPLF/clio
mkdir build && cd build
conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True -o lint=False
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --parallel 8 # or without the number if you feel extra adventurous
```
## Running
```sh
./clio_server config.json
@@ -257,13 +274,13 @@ Exactly equal password gains admin rights for the request or a websocket connect
## Prometheus metrics collection
Clio natively supports Prometheus metrics collection. It accepts Prometheus requests on the port configured in the `server` section of the config.
Prometheus metrics are enabled by default. To disable it add `"prometheus_enabled": false` to the config.
Prometheus metrics are enabled by default. To disable it add `"prometheus": { "enabled": false }` to the config.
It is important to know that Clio responds to Prometheus requests only if they are admin requests, so Prometheus should be configured to send the admin password in a header.
There is an example docker-compose file, along with Prometheus and Grafana configs, in [examples/infrastructure](examples/infrastructure).
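The admin check that Prometheus scrapes must satisfy (per the example config's comment, the header value must match `'Password '` plus the sha256 hash of the configured `admin_password`) can be sketched as follows. This is an illustration only; the exact header name and the lowercase hex encoding of the digest are assumptions, not confirmed by this diff:

```python
import hashlib

def make_admin_header_value(admin_password: str) -> str:
    # Per the config comment: 'Password ' prefix followed by the
    # sha256 hash (assumed lowercase hex) of the configured value.
    digest = hashlib.sha256(admin_password.encode("utf-8")).hexdigest()
    return "Password " + digest

# For the example config value "xrp":
header = make_admin_header_value("xrp")
```

A Prometheus scrape config would then send this value in a request header so that Clio treats the scrape as an admin request.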
## Using clang-tidy for static analysis
Minimum clang-tidy version required is 16.0.
Minimum clang-tidy version required is 17.0.
Clang-tidy can be run by cmake during the build.
To enable this, provide the option `-o lint=True` to the `conan install` command:
```sh
@@ -273,9 +290,58 @@ By default cmake will try to find clang-tidy automatically in your system.
To force cmake use desired binary set `CLIO_CLANG_TIDY_BIN` environment variable as path to clang-tidy binary.
E.g.:
```sh
export CLIO_CLANG_TIDY_BIN=/opt/homebrew/opt/llvm@16/bin/clang-tidy
export CLIO_CLANG_TIDY_BIN=/opt/homebrew/opt/llvm@17/bin/clang-tidy
```
## Coverage report
The coverage report is intended for developers using the GCC or Clang compilers (including Apple Clang).
It is generated by the build target `coverage_report`,
which is only enabled when both the `tests` and `coverage` options are set, e.g. with
`-o coverage=True -o tests=True` in `conan`.
Prerequisites for the coverage report:
- [gcovr tool](https://gcovr.com/en/stable/getting-started.html) (can be installed e.g. with `pip install gcovr`)
- `gcov` for GCC (installed with the compiler by default) or
- `llvm-cov` for Clang (installed with the compiler by default, also on Apple)
- `Debug` build type
The coverage report is created when the following steps are completed, in order:
1. `clio_tests` binary built with the instrumentation data, enabled by the `coverage`
option mentioned above
2. completed run of unit tests, which populates coverage capture data
3. completed run of `gcovr` tool (which internally invokes either `gcov` or `llvm-cov`)
to assemble both instrumentation data and coverage capture data into a coverage report
The above steps are automated into a single target, `coverage_report`. The instrumented
`clio_tests` binary can also be used for running regular unit tests. In case of a
spurious failure of unit tests, it is possible to re-run the `coverage_report` target without
rebuilding the `clio_tests` binary (since it is simply a dependency of the coverage report target).
The default coverage report format is `html-details`, but developers
can override it to any of the formats listed in `CMake/CodeCoverage.cmake`
by setting the `CODE_COVERAGE_REPORT_FORMAT` variable in `cmake`. For example, CI
sets this parameter to `xml` for the [codecov](codecov.io) integration.
If some unit tests predictably fail, e.g. due to the absence of a Cassandra database, it is possible
to set unit test options in the `CODE_COVERAGE_TESTS_ARGS` cmake variable, as demonstrated below:
```
cd .build
conan install .. --output-folder . --build missing --settings build_type=Debug -o tests=True -o coverage=True
cmake -DCODE_COVERAGE_REPORT_FORMAT=json-details -DCMAKE_BUILD_TYPE=Debug -DCODE_COVERAGE_TESTS_ARGS="--gtest_filter=-BackendCassandra*" -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake ..
cmake --build . --target coverage_report
```
After the `coverage_report` target completes, the generated coverage report will be
stored inside the build directory, as either:
- a file named `coverage_report.*`, with a suitable extension for the report format, or
- a directory named `coverage_report`, containing `index.html` and other files, for the `html-details` or `html-nested` report formats.
## Developing against `rippled` in standalone mode
If you wish to develop against a `rippled` instance running in standalone

View File

@@ -1,6 +1,6 @@
from conan import ConanFile
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
import re
class Clio(ConanFile):
name = 'clio'
@@ -11,7 +11,7 @@ class Clio(ConanFile):
settings = 'os', 'compiler', 'build_type', 'arch'
options = {
'fPIC': [True, False],
'verbose': [True, False],
'verbose': [True, False],
'tests': [True, False], # build unit tests; create `clio_tests` binary
'docs': [True, False], # doxygen API docs; create custom target 'docs'
'packaging': [True, False], # create distribution packages
@@ -26,7 +26,8 @@ class Clio(ConanFile):
'protobuf/3.21.12',
'grpc/1.50.1',
'openssl/1.1.1u',
'xrpl/2.0.0-b4',
'xrpl/2.0.0',
'libbacktrace/cci.20210118'
]
default_options = {

docker/ci/dockerfile Normal file
View File

@@ -0,0 +1,67 @@
FROM ubuntu:focal
ARG DEBIAN_FRONTEND=noninteractive
ARG TARGETARCH
USER root
WORKDIR /root/
ENV GCC_VERSION=11 \
CCACHE_VERSION=4.8.3 \
LLVM_TOOLS_VERSION=17 \
GH_VERSION=2.40.0
# Add repositories
RUN apt-get -qq update \
&& apt-get -qq install -y --no-install-recommends --no-install-suggests gnupg wget curl software-properties-common \
&& add-apt-repository -y ppa:ubuntu-toolchain-r/test \
&& wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | apt-key add - \
&& apt-add-repository 'deb https://apt.kitware.com/ubuntu/ focal main' \
&& echo "deb http://apt.llvm.org/focal/ llvm-toolchain-focal-${LLVM_TOOLS_VERSION} main" >> /etc/apt/sources.list \
&& wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | apt-key add -
# Install packages
RUN apt update -qq \
&& apt install -y --no-install-recommends --no-install-suggests cmake python3 python3-pip sudo git \
ninja-build make pkg-config libzstd-dev libzstd1 g++-${GCC_VERSION} jq \
clang-format-${LLVM_TOOLS_VERSION} clang-tidy-${LLVM_TOOLS_VERSION} clang-tools-${LLVM_TOOLS_VERSION} \
&& update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-${GCC_VERSION} 100 \
&& update-alternatives --install /usr/bin/c++ c++ /usr/bin/g++-${GCC_VERSION} 100 \
&& update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-${GCC_VERSION} 100 \
&& update-alternatives --install /usr/bin/cc cc /usr/bin/gcc-${GCC_VERSION} 100 \
&& update-alternatives --install /usr/bin/gcov gcov /usr/bin/gcov-${GCC_VERSION} 100 \
&& update-alternatives --install /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-${GCC_VERSION} 100 \
&& update-alternatives --install /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-${GCC_VERSION} 100 \
&& apt-get clean && apt remove -y software-properties-common \
&& pip3 install -q --upgrade --no-cache-dir pip \
&& pip3 install -q --no-cache-dir conan==1.62 gcovr
# Install ccache from source
WORKDIR /tmp
RUN wget "https://github.com/ccache/ccache/releases/download/v${CCACHE_VERSION}/ccache-${CCACHE_VERSION}.tar.gz" \
&& tar xf "ccache-${CCACHE_VERSION}.tar.gz" \
&& cd "ccache-${CCACHE_VERSION}" \
&& mkdir build && cd build \
&& cmake -GNinja -DCMAKE_BUILD_TYPE=Release .. \
&& ninja && cp ./ccache /usr/bin/ccache
# Install gh
RUN wget https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_linux_${TARGETARCH}.tar.gz \
&& tar xf gh_${GH_VERSION}_linux_${TARGETARCH}.tar.gz \
&& mv gh_${GH_VERSION}_linux_${TARGETARCH}/bin/gh /usr/bin/gh
# Clean up
RUN rm -rf /tmp/* /var/tmp/*
WORKDIR /root/
# Using root by default is not very secure but the GitHub checkout action doesn't work with any other user
# https://github.com/actions/checkout/issues/956
# And the GitHub Actions docs recommend using root
# https://docs.github.com/en/actions/creating-actions/dockerfile-support-for-github-actions#user
# Setup conan
RUN conan profile new default --detect \
&& conan profile update settings.compiler.cppstd=20 default \
&& conan profile update settings.compiler.libcxx=libstdc++11 default \
&& conan remote add --insert 0 conan-non-prod http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod

View File

@@ -16,7 +16,8 @@
//
// Advanced options. USE AT OWN RISK:
// ---
"core_connections_per_host": 1 // Defaults to 1
"core_connections_per_host": 1, // Defaults to 1
"write_batch_size": 20 // Defaults to 20
//
// Below options will use defaults from cassandra driver if left unspecified.
// See https://docs.datastax.com/en/developer/cpp-driver/2.17/api/struct.CassCluster/ for details.
@@ -66,7 +67,7 @@
// Max number of requests to queue up before rejecting further requests.
// Defaults to 0, which disables the limit.
"max_queue_size": 500,
// If request contains header with authorization, Clio will check if it matches this value's hash
// If a request contains an authorization header, Clio will check if it matches 'Password ' + this value's sha256 hash
// If matches, the request will be considered as admin request
"admin_password": "xrp",
// If local_admin is true, Clio will consider requests coming from 127.0.0.1 as admin requests
@@ -101,7 +102,10 @@
"log_level": "trace"
}
],
"prometheus_enabled": true,
"prometheus": {
"enabled": true,
"compress_reply": true
},
"log_level": "info",
// Log format (this is the default format)
"log_format": "%TimeStamp% (%SourceLocation%) [%ThreadID%] %Channel%:%Severity% %Message%",

View File

@@ -68,7 +68,7 @@
},
"gridPos": {
"h": 8,
"w": 5,
"w": 3,
"x": 0,
"y": 0
},
@@ -105,102 +105,6 @@
"title": "Service state",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 7,
"x": 5,
"y": 0
},
"id": 7,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": false
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "scrape_duration_seconds{job=\"clio\"}",
"instant": false,
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Prometheus Request Processing Time",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
@@ -262,8 +166,8 @@
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"w": 9,
"x": 3,
"y": 0
},
"id": 2,
@@ -296,102 +200,6 @@
"title": "Work Queue Size",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "µs"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"id": 10,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "rpc_method_duration_us{job=\"clio\"}",
"instant": false,
"legendFormat": "{{method}}",
"range": true,
"refId": "A"
}
],
"title": "RPC Method Call Duration",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
@@ -455,7 +263,7 @@
"h": 8,
"w": 12,
"x": 12,
"y": 8
"y": 0
},
"id": 9,
"options": {
@@ -550,9 +358,9 @@
"h": 8,
"w": 12,
"x": 0,
"y": 16
"y": 8
},
"id": 8,
"id": 11,
"options": {
"legend": {
"calcs": [],
@@ -572,14 +380,206 @@
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "rate(rpc_error_total_number{job=\"clio\"}[$__rate_interval])",
"expr": "subscriptions_current_number{job=\"clio\"}",
"instant": false,
"legendFormat": "{{error_type}}",
"legendFormat": "{{collection}}{{stream}}",
"range": true,
"refId": "A"
}
],
"title": "RPC Error Rate",
"title": "Subscriptions Number",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"id": 6,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": false
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "sum(increase(ledger_cache_counter_total_number{job=\"clio\",type=\"cache_hit\"}[1m])) / sum(increase(ledger_cache_counter_total_number{job=\"clio\",type=\"request\"}[1m]))",
"hide": false,
"instant": false,
"legendFormat": "ledger cache hit rate",
"range": true,
"refId": "A"
}
],
"title": "Ledger Cache Hit Rate",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "µs"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 16
},
"id": 10,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "rpc_method_duration_us{job=\"clio\"}",
"instant": false,
"legendFormat": "{{method}}",
"range": true,
"refId": "A"
}
],
"title": "RPC Method Call Duration",
"type": "timeseries"
},
{
@@ -647,13 +647,13 @@
"x": 12,
"y": 16
},
"id": 6,
"id": 8,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": false
"showLegend": true
},
"tooltip": {
"mode": "single",
@@ -667,15 +667,14 @@
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "sum(increase(ledger_cache_counter_total_number{job=\"clio\",type=\"cache_hit\"}[1m])) / sum(increase(ledger_cache_counter_total_number{job=\"clio\",type=\"request\"}[1m]))",
"hide": false,
"expr": "rate(rpc_error_total_number{job=\"clio\"}[$__rate_interval])",
"instant": false,
"legendFormat": "ledger cache hit rate",
"legendFormat": "{{error_type}}",
"range": true,
"refId": "A"
}
],
"title": "Ledger Cache Hit Rate",
"title": "RPC Error Rate",
"type": "timeseries"
},
{
@@ -791,7 +790,7 @@
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
@@ -800,6 +799,9 @@
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineStyle": {
"fill": "solid"
},
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
@@ -828,7 +830,8 @@
"value": 80
}
]
}
},
"unit": "ms"
},
"overrides": []
},
@@ -838,7 +841,7 @@
"x": 12,
"y": 24
},
"id": 11,
"id": 12,
"options": {
"legend": {
"calcs": [],
@@ -851,6 +854,7 @@
"sort": "none"
}
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
@@ -858,14 +862,52 @@
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "subscriptions_current_number{job=\"clio\"}",
"expr": "histogram_quantile(0.50, sum(rate(backend_duration_milliseconds_histogram_bucket{job=\"clio\"}[$__interval])) by (le, operation))",
"hide": false,
"instant": false,
"legendFormat": "{{collection}}{{stream}}",
"legendFormat": "{{operation}} 0.5 percentile",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "histogram_quantile(0.75, sum(rate(backend_duration_milliseconds_histogram_bucket{job=\"clio\"}[$__interval])) by (le, operation))",
"hide": false,
"instant": false,
"legendFormat": "{{operation}} 0.75 percentile",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "histogram_quantile(0.95, sum(rate(backend_duration_milliseconds_histogram_bucket{job=\"clio\"}[$__interval])) by (le, operation))",
"hide": false,
"instant": false,
"legendFormat": "{{operation}} 0.95 percentile",
"range": true,
"refId": "C"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"expr": "",
"hide": false,
"instant": false,
"range": true,
"refId": "D"
}
],
"title": "Subscriptions Number",
"title": "DB operation duration",
"type": "timeseries"
},
{
@@ -1081,6 +1123,102 @@
],
"title": "DB Pending operations",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 10,
"x": 0,
"y": 40
},
"id": 7,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": false
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "scrape_duration_seconds{job=\"clio\"}",
"instant": false,
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Prometheus Request Processing Time",
"type": "timeseries"
}
],
"refresh": "5s",
@@ -1097,6 +1235,6 @@
"timezone": "",
"title": "Clio",
"uid": "aeaae84e-c194-47b2-ad65-86e45eebb815",
"version": 1,
"version": 3,
"weekStart": ""
}
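The "DB operation duration" panel added above derives latency percentiles from the backend duration histogram via PromQL's `histogram_quantile`. A simplified sketch of what that estimation does (linear interpolation inside the bucket that contains the target rank; PromQL's actual implementation handles more edge cases, so treat this as an illustration only):

```python
def histogram_quantile(q, buckets):
    """Estimate the q-quantile from cumulative Prometheus-style buckets.

    `buckets` is a list of (upper_bound, cumulative_count) pairs sorted by
    bound, ending with (float('inf'), total_count)."""
    total = buckets[-1][1]
    if total == 0:
        return float("nan")
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                # Quantile falls in the +Inf bucket: report the last finite bound.
                return prev_bound
            if count == prev_count:
                return bound
            # Linear interpolation within the matched bucket.
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return prev_bound

# e.g. a 95th percentile from hypothetical cumulative read-duration counts
p95 = histogram_quantile(0.95, [(10, 50), (100, 90), (float("inf"), 100)])
```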

View File

@@ -17,12 +17,35 @@
*/
//==============================================================================
#include <data/BackendCounters.h>
#include "data/BackendCounters.h"
#include <util/prometheus/Prometheus.h>
#include "util/Assert.h"
#include "util/prometheus/Label.h"
#include "util/prometheus/Prometheus.h"
#include <boost/json/object.hpp>
#include <chrono>
#include <cstdint>
#include <memory>
#include <string>
#include <utility>
#include <vector>
namespace data {
namespace {
std::vector<std::int64_t> const histogramBuckets{1, 2, 5, 10, 20, 50, 100, 200, 500, 700, 1000};
std::int64_t
durationInMillisecondsSince(std::chrono::steady_clock::time_point const startTime)
{
return std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - startTime).count();
}
} // namespace
using namespace util::prometheus;
BackendCounters::BackendCounters()
@@ -43,6 +66,18 @@ BackendCounters::BackendCounters()
))
, asyncWriteCounters_{"write_async"}
, asyncReadCounters_{"read_async"}
, readDurationHistogram_(PrometheusService::histogramInt(
"backend_duration_milliseconds_histogram",
Labels({Label{"operation", "read"}}),
histogramBuckets,
"The duration of backend read operations including retries"
))
, writeDurationHistogram_(PrometheusService::histogramInt(
"backend_duration_milliseconds_histogram",
Labels({Label{"operation", "write"}}),
histogramBuckets,
"The duration of backend write operations including retries"
))
{
}
@@ -60,9 +95,10 @@ BackendCounters::registerTooBusy()
}
void
BackendCounters::registerWriteSync()
BackendCounters::registerWriteSync(std::chrono::steady_clock::time_point const startTime)
{
++writeSyncCounter_.get();
writeDurationHistogram_.get().observe(durationInMillisecondsSince(startTime));
}
void
@@ -78,9 +114,11 @@ BackendCounters::registerWriteStarted()
}
void
BackendCounters::registerWriteFinished()
BackendCounters::registerWriteFinished(std::chrono::steady_clock::time_point const startTime)
{
asyncWriteCounters_.registerFinished(1u);
auto const duration = durationInMillisecondsSince(startTime);
writeDurationHistogram_.get().observe(duration);
}
void
@@ -96,9 +134,12 @@ BackendCounters::registerReadStarted(std::uint64_t const count)
}
void
BackendCounters::registerReadFinished(std::uint64_t const count)
BackendCounters::registerReadFinished(std::chrono::steady_clock::time_point const startTime, std::uint64_t const count)
{
asyncReadCounters_.registerFinished(count);
auto const duration = durationInMillisecondsSince(startTime);
for (std::uint64_t i = 0; i < count; ++i)
readDurationHistogram_.get().observe(duration);
}
void
@@ -161,7 +202,10 @@ BackendCounters::AsyncOperationCounters::registerStarted(std::uint64_t const cou
void
BackendCounters::AsyncOperationCounters::registerFinished(std::uint64_t const count)
{
assert(pendingCounter_.get().value() >= static_cast<std::int64_t>(count));
ASSERT(
pendingCounter_.get().value() >= static_cast<std::int64_t>(count),
"Finished operations can't be more than pending"
);
pendingCounter_.get() -= count;
completedCounter_.get() += count;
}
@@ -175,7 +219,9 @@ BackendCounters::AsyncOperationCounters::registerRetry(std::uint64_t count)
void
BackendCounters::AsyncOperationCounters::registerError(std::uint64_t count)
{
assert(pendingCounter_.get().value() >= static_cast<std::int64_t>(count));
ASSERT(
pendingCounter_.get().value() >= static_cast<std::int64_t>(count), "Error operations can't be more than pending"
);
pendingCounter_.get() -= count;
errorCounter_.get() += count;
}
@@ -187,7 +233,8 @@ BackendCounters::AsyncOperationCounters::report() const
{name_ + "_pending", pendingCounter_.get().value()},
{name_ + "_completed", completedCounter_.get().value()},
{name_ + "_retry", retryCounter_.get().value()},
{name_ + "_error", errorCounter_.get().value()}};
{name_ + "_error", errorCounter_.get().value()}
};
}
} // namespace data
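The duration tracking added in this file records elapsed milliseconds into a histogram with the bucket bounds from `histogramBuckets`. A minimal sketch of that bookkeeping (an illustration under stated assumptions, not Clio's actual `HistogramInt`):

```python
import bisect
import time

# Bucket upper bounds in milliseconds, copied from histogramBuckets above.
BUCKETS = [1, 2, 5, 10, 20, 50, 100, 200, 500, 700, 1000]

class HistogramSketch:
    """Counts observations per bucket: the first bucket whose upper bound
    is >= the value receives the observation (Prometheus `le` semantics);
    an extra final slot collects values above all bounds."""

    def __init__(self, buckets):
        self.buckets = list(buckets)
        self.counts = [0] * (len(self.buckets) + 1)

    def observe(self, value: int) -> None:
        self.counts[bisect.bisect_left(self.buckets, value)] += 1

def duration_ms_since(start: float) -> int:
    # Mirrors durationInMillisecondsSince: elapsed time truncated to whole ms.
    return int((time.monotonic() - start) * 1000)

hist = HistogramSketch(BUCKETS)
start = time.monotonic()
# ... perform a backend read/write here ...
hist.observe(duration_ms_since(start))
```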

View File

@@ -19,7 +19,7 @@
#pragma once
#include <util/prometheus/Prometheus.h>
#include "util/prometheus/Prometheus.h"
#include <boost/json/object.hpp>
@@ -33,23 +33,43 @@ namespace data {
/**
* @brief A concept for a class that can be used to count backend operations.
*/
// clang-format off
template <typename T>
concept SomeBackendCounters = requires(T a) {
typename T::PtrType;
{ a.registerTooBusy() } -> std::same_as<void>;
{ a.registerWriteSync() } -> std::same_as<void>;
{ a.registerWriteSyncRetry() } -> std::same_as<void>;
{ a.registerWriteStarted() } -> std::same_as<void>;
{ a.registerWriteFinished() } -> std::same_as<void>;
{ a.registerWriteRetry() } -> std::same_as<void>;
{ a.registerReadStarted(std::uint64_t{}) } -> std::same_as<void>;
{ a.registerReadFinished(std::uint64_t{}) } -> std::same_as<void>;
{ a.registerReadRetry(std::uint64_t{}) } -> std::same_as<void>;
{ a.registerReadError(std::uint64_t{}) } -> std::same_as<void>;
{ a.report() } -> std::same_as<boost::json::object>;
{
a.registerTooBusy()
} -> std::same_as<void>;
{
a.registerWriteSync(std::chrono::steady_clock::time_point{})
} -> std::same_as<void>;
{
a.registerWriteSyncRetry()
} -> std::same_as<void>;
{
a.registerWriteStarted()
} -> std::same_as<void>;
{
a.registerWriteFinished(std::chrono::steady_clock::time_point{})
} -> std::same_as<void>;
{
a.registerWriteRetry()
} -> std::same_as<void>;
{
a.registerReadStarted(std::uint64_t{})
} -> std::same_as<void>;
{
a.registerReadFinished(std::chrono::steady_clock::time_point{}, std::uint64_t{})
} -> std::same_as<void>;
{
a.registerReadRetry(std::uint64_t{})
} -> std::same_as<void>;
{
a.registerReadError(std::uint64_t{})
} -> std::same_as<void>;
{
a.report()
} -> std::same_as<boost::json::object>;
};
// clang-format on
/**
* @brief Holds statistics about the backend.
@@ -67,7 +87,7 @@ public:
registerTooBusy();
void
registerWriteSync();
registerWriteSync(std::chrono::steady_clock::time_point startTime);
void
registerWriteSyncRetry();
@@ -76,7 +96,7 @@ public:
registerWriteStarted();
void
registerWriteFinished();
registerWriteFinished(std::chrono::steady_clock::time_point startTime);
void
registerWriteRetry();
@@ -85,7 +105,7 @@ public:
registerReadStarted(std::uint64_t count = 1u);
void
registerReadFinished(std::uint64_t count = 1u);
registerReadFinished(std::chrono::steady_clock::time_point startTime, std::uint64_t count = 1u);
void
registerReadRetry(std::uint64_t count = 1u);
@@ -133,6 +153,9 @@ private:
AsyncOperationCounters asyncWriteCounters_{"write_async"};
AsyncOperationCounters asyncReadCounters_{"read_async"};
std::reference_wrapper<util::prometheus::HistogramInt> readDurationHistogram_;
std::reference_wrapper<util::prometheus::HistogramInt> writeDurationHistogram_;
};
} // namespace data

View File

@@ -19,10 +19,10 @@
#pragma once
#include <data/BackendInterface.h>
#include <data/CassandraBackend.h>
#include <util/config/Config.h>
#include <util/log/Logger.h>
#include "data/BackendInterface.h"
#include "data/CassandraBackend.h"
#include "util/config/Config.h"
#include "util/log/Logger.h"
#include <boost/algorithm/string.hpp>
@@ -55,10 +55,8 @@ make_Backend(util::Config const& config)
throw std::runtime_error("Invalid database type");
auto const rng = backend->hardFetchLedgerRangeNoThrow();
if (rng) {
backend->updateRange(rng->minSequence);
backend->updateRange(rng->maxSequence);
}
if (rng)
backend->setRange(rng->minSequence, rng->maxSequence);
LOG(log.info()) << "Constructed BackendInterface Successfully";
return backend;

View File

@@ -17,11 +17,31 @@
*/
//==============================================================================
#include <data/BackendInterface.h>
#include <util/log/Logger.h>
#include "data/BackendInterface.h"
#include "data/Types.h"
#include "util/Assert.h"
#include "util/log/Logger.h"
#include <boost/asio/spawn.hpp>
#include <ripple/basics/base_uint.h>
#include <ripple/basics/strHex.h>
#include <ripple/protocol/Fees.h>
#include <ripple/protocol/Indexes.h>
#include <ripple/protocol/SField.h>
#include <ripple/protocol/STLedgerEntry.h>
#include <ripple/protocol/Serializer.h>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <sstream>
#include <string>
#include <utility>
#include <vector>
// local to compilation unit loggers
namespace {
@@ -43,7 +63,7 @@ BackendInterface::finishWrites(std::uint32_t const ledgerSequence)
void
BackendInterface::writeLedgerObject(std::string&& key, std::uint32_t const seq, std::string&& blob)
{
assert(key.size() == sizeof(ripple::uint256));
ASSERT(key.size() == sizeof(ripple::uint256), "Key must be 256 bits");
doWriteLedgerObject(std::move(key), seq, std::move(blob));
}
@@ -109,6 +129,7 @@ BackendInterface::fetchLedgerObjects(
return results;
}
// Fetches the successor to key/index
std::optional<ripple::uint256>
BackendInterface::fetchSuccessorKey(
@@ -155,7 +176,7 @@ BackendInterface::fetchBookOffers(
// TODO try to speed this up. This can take a few seconds. The goal is
// to get it down to a few hundred milliseconds.
BookOffersPage page;
const ripple::uint256 bookEnd = ripple::getQualityNext(book);
ripple::uint256 const bookEnd = ripple::getQualityNext(book);
ripple::uint256 uTipIndex = book;
std::vector<ripple::uint256> keys;
auto getMillis = [](auto diff) { return std::chrono::duration_cast<std::chrono::milliseconds>(diff).count(); };
@@ -178,7 +199,8 @@ BackendInterface::fetchBookOffers(
while (keys.size() < limit) {
++numPages;
ripple::STLedgerEntry const sle{
ripple::SerialIter{offerDir->blob.data(), offerDir->blob.size()}, offerDir->key};
ripple::SerialIter{offerDir->blob.data(), offerDir->blob.size()}, offerDir->key
};
auto indexes = sle.getFieldV256(ripple::sfIndexes);
keys.insert(keys.end(), indexes.begin(), indexes.end());
auto next = sle.getFieldU64(ripple::sfIndexNext);
@@ -188,7 +210,7 @@ BackendInterface::fetchBookOffers(
}
auto nextKey = ripple::keylet::page(uTipIndex, next);
auto nextDir = fetchLedgerObject(nextKey.key, ledgerSequence, yield);
assert(nextDir);
ASSERT(nextDir.has_value(), "Next dir must exist");
offerDir->blob = *nextDir;
offerDir->key = nextKey.key;
}
@@ -200,7 +222,7 @@ BackendInterface::fetchBookOffers(
for (size_t i = 0; i < keys.size() && i < limit; ++i) {
LOG(gLog.trace()) << "Key = " << ripple::strHex(keys[i]) << " blob = " << ripple::strHex(objs[i])
<< " ledgerSequence = " << ledgerSequence;
assert(!objs[i].empty());
ASSERT(!objs[i].empty(), "Ledger object can't be empty");
page.offers.push_back({keys[i], objs[i]});
}
auto end = std::chrono::system_clock::now();
@@ -234,7 +256,14 @@ void
BackendInterface::updateRange(uint32_t newMax)
{
std::scoped_lock const lck(rngMtx_);
assert(!range || newMax >= range->maxSequence);
ASSERT(
!range || newMax >= range->maxSequence,
"Range shouldn't exist yet or newMax should be greater than or equal to the current maxSequence. newMax = {}, range->maxSequence = {}",
newMax,
range->maxSequence
);
if (!range) {
range = {newMax, newMax};
} else {
@@ -242,6 +271,19 @@ BackendInterface::updateRange(uint32_t newMax)
}
}
void
BackendInterface::setRange(uint32_t min, uint32_t max, bool force)
{
std::scoped_lock const lck(rngMtx_);
if (!force) {
ASSERT(min <= max, "Range min must be less than or equal to max");
ASSERT(not range.has_value(), "Range was already set");
}
range = {min, max};
}
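The `updateRange`/`setRange` pair above maintains an optional min/max sequence range with two invariants: `updateRange` may only extend the maximum, and `setRange` may only initialize an unset range unless `force` is passed. A standalone sketch of that bookkeeping (simplified: no mutex, a `std::pair` instead of the real range type):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <utility>

// Simplified stand-in for BackendInterface's range bookkeeping.
struct RangeTracker {
    std::optional<std::pair<uint32_t, uint32_t>> range;  // {min, max}

    void updateRange(uint32_t newMax) {
        // newMax may only move the maximum forward
        assert(!range || newMax >= range->second);
        if (!range)
            range = {newMax, newMax};
        else
            range->second = newMax;
    }

    void setRange(uint32_t min, uint32_t max, bool force = false) {
        if (!force) {
            assert(min <= max);
            assert(!range.has_value());  // may only be set once unless forced
        }
        range = {min, max};
    }
};
```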
LedgerPage
BackendInterface::fetchLedgerPage(
std::optional<ripple::uint256> const& cursor,


@@ -19,16 +19,16 @@
#pragma once
#include <data/DBHelpers.h>
#include <data/LedgerCache.h>
#include <data/Types.h>
#include <util/config/Config.h>
#include <util/log/Logger.h>
#include "data/DBHelpers.h"
#include "data/LedgerCache.h"
#include "data/Types.h"
#include "util/config/Config.h"
#include "util/log/Logger.h"
#include <ripple/protocol/Fees.h>
#include <ripple/protocol/LedgerHeader.h>
#include <boost/asio/spawn.hpp>
#include <boost/json.hpp>
#include <ripple/protocol/Fees.h>
#include <ripple/protocol/LedgerHeader.h>
#include <thread>
#include <type_traits>
@@ -86,7 +86,7 @@ synchronous(FnType&& func)
boost::asio::io_context ctx;
using R = typename boost::result_of<FnType(boost::asio::yield_context)>::type;
if constexpr (!std::is_same<R, void>::value) {
if constexpr (!std::is_same_v<R, void>) {
R res;
boost::asio::spawn(ctx, [_ = boost::asio::make_work_guard(ctx), &func, &res](auto yield) {
res = func(yield);
@@ -191,6 +191,16 @@ public:
void
updateRange(uint32_t newMax);
/**
* @brief Sets the range of sequences that are stored in the DB.
*
* @param min The new minimum sequence available
* @param max The new maximum sequence available
* @param force If set to true, the range will be set even if it's already set
*/
void
setRange(uint32_t min, uint32_t max, bool force = false);
/**
* @brief Fetch the fees from a specific ledger sequence.
*
@@ -521,7 +531,7 @@ public:
* @param data A vector of NFTsData objects representing the NFTs
*/
virtual void
writeNFTs(std::vector<NFTsData>&& data) = 0;
writeNFTs(std::vector<NFTsData> const& data) = 0;
/**
* @brief Write a new set of account transactions.
@@ -529,7 +539,7 @@ public:
* @param data A vector of AccountTransactionsData objects representing the account transactions
*/
virtual void
writeAccountTransactions(std::vector<AccountTransactionsData>&& data) = 0;
writeAccountTransactions(std::vector<AccountTransactionsData> data) = 0;
/**
* @brief Write NFTs transactions.
@@ -537,7 +547,7 @@ public:
* @param data A vector of NFTTransactionsData objects
*/
virtual void
writeNFTTransactions(std::vector<NFTTransactionsData>&& data) = 0;
writeNFTTransactions(std::vector<NFTTransactionsData> const& data) = 0;
/**
* @brief Write a new successor.


@@ -19,19 +19,20 @@
#pragma once
#include <data/BackendInterface.h>
#include <data/cassandra/Concepts.h>
#include <data/cassandra/Handle.h>
#include <data/cassandra/Schema.h>
#include <data/cassandra/SettingsProvider.h>
#include <data/cassandra/impl/ExecutionStrategy.h>
#include <util/LedgerUtils.h>
#include <util/Profiler.h>
#include <util/log/Logger.h>
#include "data/BackendInterface.h"
#include "data/cassandra/Concepts.h"
#include "data/cassandra/Handle.h"
#include "data/cassandra/Schema.h"
#include "data/cassandra/SettingsProvider.h"
#include "data/cassandra/impl/ExecutionStrategy.h"
#include "util/Assert.h"
#include "util/LedgerUtils.h"
#include "util/Profiler.h"
#include "util/log/Logger.h"
#include <boost/asio/spawn.hpp>
#include <ripple/protocol/LedgerHeader.h>
#include <ripple/protocol/nft.h>
#include <boost/asio/spawn.hpp>
namespace data::cassandra {
@@ -493,41 +494,39 @@ public:
if (nftIDs.size() == limit)
ret.cursor = nftIDs.back();
auto const nftQueryStatement = schema_->selectNFTBulk.bind(nftIDs);
nftQueryStatement.bindAt(1, ledgerSequence);
std::vector<Statement> selectNFTStatements;
selectNFTStatements.reserve(nftIDs.size());
// Fetch all the NFT data, while filtering out the NFTs that are not within the ledger range
auto const nftRes = executor_.read(yield, nftQueryStatement);
auto const& nftQueryResults = nftRes.value();
std::transform(
std::cbegin(nftIDs),
std::cend(nftIDs),
std::back_inserter(selectNFTStatements),
[&](auto const& nftID) { return schema_->selectNFT.bind(nftID, ledgerSequence); }
);
if (not nftQueryResults.hasRows()) {
LOG(log_.debug()) << "No rows returned";
return {};
auto const nftInfos = executor_.readEach(yield, selectNFTStatements);
std::vector<Statement> selectNFTURIStatements;
selectNFTURIStatements.reserve(nftIDs.size());
std::transform(
std::cbegin(nftIDs),
std::cend(nftIDs),
std::back_inserter(selectNFTURIStatements),
[&](auto const& nftID) { return schema_->selectNFTURI.bind(nftID, ledgerSequence); }
);
auto const nftUris = executor_.readEach(yield, selectNFTURIStatements);
for (auto i = 0u; i < nftIDs.size(); i++) {
if (auto const maybeRow = nftInfos[i].template get<uint32_t, ripple::AccountID, bool>(); maybeRow) {
auto [seq, owner, isBurned] = *maybeRow;
NFT nft(nftIDs[i], seq, owner, isBurned);
if (auto const maybeUri = nftUris[i].template get<ripple::Blob>(); maybeUri)
nft.uri = *maybeUri;
ret.nfts.push_back(nft);
}
}
auto const nftURIQueryStatement = schema_->selectNFTURIBulk.bind(nftIDs);
nftURIQueryStatement.bindAt(1, ledgerSequence);
// Get the URI for each NFT, but it's possible that URI doesn't exist
auto const uriRes = executor_.read(yield, nftURIQueryStatement);
auto const& nftURIQueryResults = uriRes.value();
std::unordered_map<std::string, Blob> nftURIMap;
for (auto const [nftID, uri] : extract<ripple::uint256, Blob>(nftURIQueryResults))
nftURIMap.insert({ripple::strHex(nftID), uri});
for (auto const [nftID, seq, owner, isBurned] :
extract<ripple::uint256, std::uint32_t, ripple::AccountID, bool>(nftQueryResults)) {
NFT nft;
nft.tokenID = nftID;
nft.ledgerSequence = seq;
nft.owner = owner;
nft.isBurned = isBurned;
if (nftURIMap.contains(ripple::strHex(nft.tokenID)))
nft.uri = nftURIMap.at(ripple::strHex(nft.tokenID));
ret.nfts.push_back(nft);
}
return ret;
}
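The rewritten `fetchNFTs` above replaces the two bulk `IN (...)` queries with one prepared statement per token ID, executed through `readEach`, and then zips the per-ID info and URI results positionally: index `i` of each result vector corresponds to `nftIDs[i]`, so the old hash map keyed by hex token ID disappears. A simplified stand-in for that zip step (all types here are illustrative, not Clio's):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <string>
#include <utility>
#include <vector>

struct NFT {
    std::string tokenID;
    uint32_t seq = 0;
    bool isBurned = false;
    std::optional<std::string> uri;
};

// Zip per-ID query results positionally; a missing info row means the
// NFT was not found in range and is skipped, a missing URI is allowed.
std::vector<NFT> zipNftResults(
    std::vector<std::string> const& nftIDs,
    std::vector<std::optional<std::pair<uint32_t, bool>>> const& infos,
    std::vector<std::optional<std::string>> const& uris)
{
    std::vector<NFT> out;
    for (size_t i = 0; i < nftIDs.size(); ++i) {
        if (!infos[i])
            continue;
        NFT nft{nftIDs[i], infos[i]->first, infos[i]->second, std::nullopt};
        if (uris[i])
            nft.uri = *uris[i];
        out.push_back(nft);
    }
    return out;
}
```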
@@ -622,7 +621,7 @@ public:
);
});
assert(numHashes == results.size());
ASSERT(numHashes == results.size(), "Number of hashes and results must match");
LOG(log_.debug()) << "Fetched " << numHashes << " transactions from Cassandra in " << timeDiff
<< " milliseconds";
return results;
@@ -735,14 +734,14 @@ public:
{
LOG(log_.trace()) << "Writing successor. key = " << key.size() << " bytes. "
<< " seq = " << std::to_string(seq) << " successor = " << successor.size() << " bytes.";
assert(!key.empty());
assert(!successor.empty());
ASSERT(!key.empty(), "Key must not be empty");
ASSERT(!successor.empty(), "Successor must not be empty");
executor_.write(schema_->insertSuccessor, std::move(key), seq, std::move(successor));
}
void
writeAccountTransactions(std::vector<AccountTransactionsData>&& data) override
writeAccountTransactions(std::vector<AccountTransactionsData> data) override
{
std::vector<Statement> statements;
statements.reserve(data.size() * 10); // assume 10 transactions avg
@@ -766,7 +765,7 @@ public:
}
void
writeNFTTransactions(std::vector<NFTTransactionsData>&& data) override
writeNFTTransactions(std::vector<NFTTransactionsData> const& data) override
{
std::vector<Statement> statements;
statements.reserve(data.size());
@@ -798,7 +797,7 @@ public:
}
void
writeNFTs(std::vector<NFTsData>&& data) override
writeNFTs(std::vector<NFTsData> const& data) override
{
std::vector<Statement> statements;
statements.reserve(data.size() * 3);


@@ -20,16 +20,16 @@
/** @file */
#pragma once
#include "data/Types.h"
#include "util/Assert.h"
#include <boost/container/flat_set.hpp>
#include <ripple/basics/Log.h>
#include <ripple/basics/StringUtilities.h>
#include <ripple/protocol/SField.h>
#include <ripple/protocol/STAccount.h>
#include <ripple/protocol/TxMeta.h>
#include <boost/container/flat_set.hpp>
#include <data/Types.h>
/**
* @brief Struct used to keep track of what to write to account_transactions/account_tx tables.
*/
@@ -233,7 +233,7 @@ getBookBase(T const& key)
{
static constexpr size_t KEY_SIZE = 24;
assert(key.size() == ripple::uint256::size());
ASSERT(key.size() == ripple::uint256::size(), "Invalid key size {}", key.size());
ripple::uint256 ret;
for (size_t i = 0; i < KEY_SIZE; ++i)


@@ -17,7 +17,19 @@
*/
//==============================================================================
#include <data/LedgerCache.h>
#include "data/LedgerCache.h"
#include "data/Types.h"
#include "util/Assert.h"
#include <ripple/basics/base_uint.h>
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <vector>
namespace data {
@@ -37,7 +49,12 @@ LedgerCache::update(std::vector<LedgerObject> const& objs, uint32_t seq, bool is
{
std::scoped_lock const lck{mtx_};
if (seq > latestSeq_) {
assert(seq == latestSeq_ + 1 || latestSeq_ == 0);
ASSERT(
seq == latestSeq_ + 1 || latestSeq_ == 0,
"New sequence must be either the next one or the first. seq = {}, latestSeq_ = {}",
seq,
latestSeq_
);
latestSeq_ = seq;
}
for (auto const& obj : objs) {


@@ -19,13 +19,15 @@
#pragma once
#include "data/Types.h"
#include "util/prometheus/Prometheus.h"
#include <ripple/basics/base_uint.h>
#include <ripple/basics/hardened_hash.h>
#include <data/Types.h>
#include <map>
#include <mutex>
#include <shared_mutex>
#include <util/prometheus/Prometheus.h>
#include <utility>
#include <vector>


@@ -23,7 +23,6 @@
#include <ripple/protocol/AccountID.h>
#include <optional>
#include <string>
#include <utility>
#include <vector>


@@ -19,7 +19,7 @@
#pragma once
#include <data/cassandra/Types.h>
#include "data/cassandra/Types.h"
#include <boost/asio/spawn.hpp>
#include <boost/json.hpp>


@@ -17,7 +17,18 @@
*/
//==============================================================================
#include <data/cassandra/Handle.h>
#include "data/cassandra/Handle.h"
#include "data/cassandra/Types.h"
#include <cassandra.h>
#include <functional>
#include <stdexcept>
#include <string>
#include <string_view>
#include <utility>
#include <vector>
namespace data::cassandra {


@@ -19,16 +19,16 @@
#pragma once
#include <data/cassandra/Error.h>
#include <data/cassandra/Types.h>
#include <data/cassandra/impl/Batch.h>
#include <data/cassandra/impl/Cluster.h>
#include <data/cassandra/impl/Future.h>
#include <data/cassandra/impl/ManagedObject.h>
#include <data/cassandra/impl/Result.h>
#include <data/cassandra/impl/Session.h>
#include <data/cassandra/impl/Statement.h>
#include <util/Expected.h>
#include "data/cassandra/Error.h"
#include "data/cassandra/Types.h"
#include "data/cassandra/impl/Batch.h"
#include "data/cassandra/impl/Cluster.h"
#include "data/cassandra/impl/Future.h"
#include "data/cassandra/impl/ManagedObject.h"
#include "data/cassandra/impl/Result.h"
#include "data/cassandra/impl/Session.h"
#include "data/cassandra/impl/Statement.h"
#include "util/Expected.h"
#include <cassandra.h>


@@ -19,13 +19,13 @@
#pragma once
#include <data/cassandra/Concepts.h>
#include <data/cassandra/Handle.h>
#include <data/cassandra/SettingsProvider.h>
#include <data/cassandra/Types.h>
#include <util/Expected.h>
#include <util/config/Config.h>
#include <util/log/Logger.h>
#include "data/cassandra/Concepts.h"
#include "data/cassandra/Handle.h"
#include "data/cassandra/SettingsProvider.h"
#include "data/cassandra/Types.h"
#include "util/Expected.h"
#include "util/config/Config.h"
#include "util/log/Logger.h"
#include <fmt/compile.h>
@@ -592,20 +592,6 @@ public:
));
}();
PreparedStatement selectNFTBulk = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT token_id, sequence, owner, is_burned
FROM {}
WHERE token_id IN ?
AND sequence <= ?
ORDER BY sequence DESC
PER PARTITION LIMIT 1
)",
qualifiedTableName(settingsProvider_.get(), "nf_tokens")
));
}();
PreparedStatement selectNFTURI = [this]() {
return handle_.get().prepare(fmt::format(
R"(
@@ -620,20 +606,6 @@ public:
));
}();
PreparedStatement selectNFTURIBulk = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT token_id, uri
FROM {}
WHERE token_id IN ?
AND sequence <= ?
ORDER BY sequence DESC
PER PARTITION LIMIT 1
)",
qualifiedTableName(settingsProvider_.get(), "nf_token_uris")
));
}();
PreparedStatement selectNFTTx = [this]() {
return handle_.get().prepare(fmt::format(
R"(


@@ -17,17 +17,28 @@
*/
//==============================================================================
#include <data/cassandra/SettingsProvider.h>
#include <data/cassandra/impl/Cluster.h>
#include <data/cassandra/impl/Statement.h>
#include <util/Constants.h>
#include <util/config/Config.h>
#include "data/cassandra/SettingsProvider.h"
#include <boost/json.hpp>
#include "data/cassandra/Types.h"
#include "data/cassandra/impl/Cluster.h"
#include "util/Constants.h"
#include "util/config/Config.h"
#include <boost/json/conversion.hpp>
#include <boost/json/value.hpp>
#include <cerrno>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <ios>
#include <iterator>
#include <optional>
#include <stdexcept>
#include <string>
#include <thread>
#include <system_error>
namespace data::cassandra {
@@ -112,8 +123,8 @@ SettingsProvider::parseSettings() const
config_.valueOr<uint32_t>("max_read_requests_outstanding", settings.maxReadRequestsOutstanding);
settings.coreConnectionsPerHost =
config_.valueOr<uint32_t>("core_connections_per_host", settings.coreConnectionsPerHost);
settings.queueSizeIO = config_.maybeValue<uint32_t>("queue_size_io");
settings.writeBatchSize = config_.valueOr<std::size_t>("write_batch_size", settings.writeBatchSize);
auto const connectTimeoutSecond = config_.maybeValue<uint32_t>("connect_timeout");
if (connectTimeoutSecond)


@@ -19,11 +19,11 @@
#pragma once
#include <data/cassandra/Handle.h>
#include <data/cassandra/Types.h>
#include <util/Expected.h>
#include <util/config/Config.h>
#include <util/log/Logger.h>
#include "data/cassandra/Handle.h"
#include "data/cassandra/Types.h"
#include "util/Expected.h"
#include "util/config/Config.h"
#include "util/log/Logger.h"
namespace data::cassandra {


@@ -19,7 +19,7 @@
#pragma once
#include <util/Expected.h>
#include "util/Expected.h"
#include <string>


@@ -19,12 +19,12 @@
#pragma once
#include <data/cassandra/Concepts.h>
#include <data/cassandra/Handle.h>
#include <data/cassandra/Types.h>
#include <data/cassandra/impl/RetryPolicy.h>
#include <util/Expected.h>
#include <util/log/Logger.h>
#include "data/cassandra/Concepts.h"
#include "data/cassandra/Handle.h"
#include "data/cassandra/Types.h"
#include "data/cassandra/impl/RetryPolicy.h"
#include "util/Expected.h"
#include "util/log/Logger.h"
#include <boost/asio.hpp>


@@ -17,12 +17,17 @@
*/
//==============================================================================
#include <data/cassandra/Error.h>
#include <data/cassandra/impl/Batch.h>
#include <data/cassandra/impl/Statement.h>
#include <util/Expected.h>
#include "data/cassandra/impl/Batch.h"
#include <exception>
#include "data/cassandra/Error.h"
#include "data/cassandra/Types.h"
#include "data/cassandra/impl/ManagedObject.h"
#include "data/cassandra/impl/Statement.h"
#include "util/Expected.h"
#include <cassandra.h>
#include <stdexcept>
#include <vector>
namespace {


@@ -19,8 +19,8 @@
#pragma once
#include <data/cassandra/Types.h>
#include <data/cassandra/impl/ManagedObject.h>
#include "data/cassandra/Types.h"
#include "data/cassandra/impl/ManagedObject.h"
#include <cassandra.h>


@@ -17,15 +17,18 @@
*/
//==============================================================================
#include <data/cassandra/impl/Cluster.h>
#include <data/cassandra/impl/SslContext.h>
#include <data/cassandra/impl/Statement.h>
#include <util/Expected.h>
#include "data/cassandra/impl/Cluster.h"
#include "data/cassandra/impl/ManagedObject.h"
#include "data/cassandra/impl/SslContext.h"
#include "util/log/Logger.h"
#include <cassandra.h>
#include <fmt/core.h>
#include <exception>
#include <vector>
#include <stdexcept>
#include <string>
#include <variant>
namespace {
constexpr auto clusterDeleter = [](CassCluster* ptr) { cass_cluster_free(ptr); };
@@ -80,6 +83,7 @@ Cluster::Cluster(Settings const& settings) : ManagedObject{cass_cluster_new(), c
LOG(log_.info()) << "Threads: " << settings.threads;
LOG(log_.info()) << "Core connections per host: " << settings.coreConnectionsPerHost;
LOG(log_.info()) << "IO queue size: " << queueSize;
LOG(log_.info()) << "Batched writes auto-chunk size: " << settings.writeBatchSize;
}
void
@@ -88,7 +92,8 @@ Cluster::setupConnection(Settings const& settings)
std::visit(
overloadSet{
[this](Settings::ContactPoints const& points) { setupContactPoints(points); },
[this](Settings::SecureConnectionBundle const& bundle) { setupSecureBundle(bundle); }},
[this](Settings::SecureConnectionBundle const& bundle) { setupSecureBundle(bundle); }
},
settings.connectionInfo
);
}


@@ -19,8 +19,8 @@
#pragma once
#include <data/cassandra/impl/ManagedObject.h>
#include <util/log/Logger.h>
#include "data/cassandra/impl/ManagedObject.h"
#include "util/log/Logger.h"
#include <cassandra.h>
@@ -42,6 +42,8 @@ struct Settings {
static constexpr std::size_t DEFAULT_CONNECTION_TIMEOUT = 10000;
static constexpr uint32_t DEFAULT_MAX_WRITE_REQUESTS_OUTSTANDING = 10'000;
static constexpr uint32_t DEFAULT_MAX_READ_REQUESTS_OUTSTANDING = 100'000;
static constexpr std::size_t DEFAULT_BATCH_SIZE = 20;
/**
* @brief Represents the configuration of contact points for cassandra.
*/
@@ -81,6 +83,9 @@ struct Settings {
/** @brief The number of connection per host to always have active */
uint32_t coreConnectionsPerHost = 1u;
/** @brief Size of batches when writing */
std::size_t writeBatchSize = DEFAULT_BATCH_SIZE;
/** @brief Size of the IO queue */
std::optional<uint32_t> queueSizeIO{};
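The new `writeBatchSize` setting defaults to 20 and, per the `SettingsProvider` change above, is read from the config key `write_batch_size`. A hedged example of where that key would sit in a Clio config file (the surrounding keys are illustrative, not a complete config):

```json
{
  "database": {
    "type": "cassandra",
    "cassandra": {
      "contact_points": "127.0.0.1",
      "write_batch_size": 20
    }
  }
}
```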


@@ -19,10 +19,10 @@
#pragma once
#include <data/cassandra/impl/ManagedObject.h>
#include "data/cassandra/impl/ManagedObject.h"
#include <ripple/basics/base_uint.h>
#include <cassandra.h>
#include <ripple/basics/base_uint.h>
#include <string>
#include <string_view>


@@ -19,13 +19,15 @@
#pragma once
#include <data/BackendCounters.h>
#include <data/BackendInterface.h>
#include <data/cassandra/Handle.h>
#include <data/cassandra/Types.h>
#include <data/cassandra/impl/AsyncExecutor.h>
#include <util/Expected.h>
#include <util/log/Logger.h>
#include "data/BackendCounters.h"
#include "data/BackendInterface.h"
#include "data/cassandra/Handle.h"
#include "data/cassandra/Types.h"
#include "data/cassandra/impl/AsyncExecutor.h"
#include "util/Assert.h"
#include "util/Batching.h"
#include "util/Expected.h"
#include "util/log/Logger.h"
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
@@ -58,6 +60,8 @@ class DefaultExecutionStrategy {
std::uint32_t maxReadRequestsOutstanding_;
std::atomic_uint32_t numReadRequestsOutstanding_ = 0;
std::size_t writeBatchSize_;
std::mutex throttleMutex_;
std::condition_variable throttleCv_;
@@ -92,6 +96,7 @@ public:
)
: maxWriteRequestsOutstanding_{settings.maxWriteRequestsOutstanding}
, maxReadRequestsOutstanding_{settings.maxReadRequestsOutstanding}
, writeBatchSize_{settings.writeBatchSize}
, work_{ioc_}
, handle_{std::cref(handle)}
, thread_{[this]() { ioc_.run(); }}
@@ -140,10 +145,11 @@ public:
ResultOrErrorType
writeSync(StatementType const& statement)
{
counters_->registerWriteSync();
auto const startTime = std::chrono::steady_clock::now();
while (true) {
auto res = handle_.get().execute(statement);
if (res) {
counters_->registerWriteSync(startTime);
return res;
}
@@ -178,6 +184,8 @@ public:
void
write(PreparedStatementType const& preparedStatement, Args&&... args)
{
auto const startTime = std::chrono::steady_clock::now();
auto statement = preparedStatement.bind(std::forward<Args>(args)...);
incrementOutstandingRequestCount();
@@ -187,10 +195,10 @@ public:
ioc_,
handle_,
std::move(statement),
[this](auto const&) {
[this, startTime](auto const&) {
decrementOutstandingRequestCount();
counters_->registerWriteFinished();
counters_->registerWriteFinished(startTime);
},
[this]() { counters_->registerWriteRetry(); }
);
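The change above threads the operation's start time into the completion callback so the counters can record end-to-end latency instead of just a completion count. The pattern in isolation, assuming a callback-based executor (names here are hypothetical):

```cpp
#include <cassert>
#include <chrono>
#include <functional>

using Clock = std::chrono::steady_clock;

// Capture the start time by value before kicking off the work, then
// compute the elapsed duration inside the completion callback -- the
// same shape as passing startTime to registerWriteFinished().
void runWithTiming(std::function<void()> work,
                   std::function<void(Clock::duration)> onDone)
{
    auto const startTime = Clock::now();
    work();
    onDone(Clock::now() - startTime);
}
```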
@@ -210,20 +218,28 @@ public:
if (statements.empty())
return;
incrementOutstandingRequestCount();
util::forEachBatch(std::move(statements), writeBatchSize_, [this](auto begin, auto end) {
auto const startTime = std::chrono::steady_clock::now();
auto chunk = std::vector<StatementType>{};
counters_->registerWriteStarted();
// Note: lifetime is controlled by std::shared_from_this internally
AsyncExecutor<std::decay_t<decltype(statements)>, HandleType>::run(
ioc_,
handle_,
std::move(statements),
[this](auto const&) {
decrementOutstandingRequestCount();
counters_->registerWriteFinished();
},
[this]() { counters_->registerWriteRetry(); }
);
chunk.reserve(std::distance(begin, end));
std::move(begin, end, std::back_inserter(chunk));
incrementOutstandingRequestCount();
counters_->registerWriteStarted();
// Note: lifetime is controlled by std::shared_from_this internally
AsyncExecutor<std::decay_t<decltype(chunk)>, HandleType>::run(
ioc_,
handle_,
std::move(chunk),
[this, startTime](auto const&) {
decrementOutstandingRequestCount();
counters_->registerWriteFinished(startTime);
},
[this]() { counters_->registerWriteRetry(); }
);
});
}
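`util::forEachBatch` (from the new `util/Batching.h`) is what lets `writeEach` submit statements in chunks of `writeBatchSize_` rather than as one huge batch. A plausible implementation sketch, assuming it calls the callback with an iterator pair per chunk (the exact signature in `util/Batching.h` may differ):

```cpp
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <vector>

// Sketch of util::forEachBatch: invoke cb(begin, end) for consecutive
// chunks of at most batchSize elements; the last chunk may be smaller.
template <typename Container, typename Callback>
void forEachBatch(Container&& data, std::size_t batchSize, Callback&& cb)
{
    auto it = std::begin(data);
    auto const last = std::end(data);
    while (it != last) {
        auto const remaining = static_cast<std::size_t>(std::distance(it, last));
        auto next = std::next(it, static_cast<std::ptrdiff_t>(std::min(batchSize, remaining)));
        cb(it, next);
        it = next;
    }
}
```

Each chunk then gets its own `AsyncExecutor::run`, which keeps individual Cassandra batches bounded in size while preserving overall throughput.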
/**
@@ -257,6 +273,8 @@ public:
[[maybe_unused]] ResultOrErrorType
read(CompletionTokenType token, std::vector<StatementType> const& statements)
{
auto const startTime = std::chrono::steady_clock::now();
auto const numStatements = statements.size();
std::optional<FutureWithCallbackType> future;
counters_->registerReadStarted(numStatements);
@@ -282,7 +300,7 @@ public:
numReadRequestsOutstanding_ -= numStatements;
if (res) {
counters_->registerReadFinished(numStatements);
counters_->registerReadFinished(startTime, numStatements);
return res;
}
@@ -310,6 +328,8 @@ public:
[[maybe_unused]] ResultOrErrorType
read(CompletionTokenType token, StatementType const& statement)
{
auto const startTime = std::chrono::steady_clock::now();
std::optional<FutureWithCallbackType> future;
counters_->registerReadStarted();
@@ -333,7 +353,7 @@ public:
--numReadRequestsOutstanding_;
if (res) {
counters_->registerReadFinished();
counters_->registerReadFinished(startTime);
return res;
}
@@ -362,6 +382,8 @@ public:
std::vector<ResultType>
readEach(CompletionTokenType token, std::vector<StatementType> const& statements)
{
auto const startTime = std::chrono::steady_clock::now();
std::atomic_uint64_t errorsCount = 0u;
std::atomic_int numOutstanding = statements.size();
numReadRequestsOutstanding_ += statements.size();
@@ -400,12 +422,12 @@ public:
numReadRequestsOutstanding_ -= statements.size();
if (errorsCount > 0) {
assert(errorsCount <= statements.size());
ASSERT(errorsCount <= statements.size(), "Number of errors cannot exceed number of statements");
counters_->registerReadError(errorsCount);
counters_->registerReadFinished(statements.size() - errorsCount);
counters_->registerReadFinished(startTime, statements.size() - errorsCount);
throw DatabaseTimeout{};
}
counters_->registerReadFinished(statements.size());
counters_->registerReadFinished(startTime, statements.size());
std::vector<ResultType> results;
results.reserve(futures.size());
@@ -422,8 +444,18 @@ public:
}
);
assert(futures.size() == statements.size());
assert(results.size() == statements.size());
ASSERT(
futures.size() == statements.size(),
"Futures size must be equal to statements size. Got {} and {}",
futures.size(),
statements.size()
);
ASSERT(
results.size() == statements.size(),
"Results size must be equal to statements size. Got {} and {}",
results.size(),
statements.size()
);
return results;
}
@@ -455,10 +487,7 @@ private:
decrementOutstandingRequestCount()
{
// sanity check
if (numWriteRequestsOutstanding_ == 0) {
assert(false);
throw std::runtime_error("decrementing num outstanding below 0");
}
ASSERT(numWriteRequestsOutstanding_ > 0, "Decrementing num outstanding below 0");
size_t const cur = (--numWriteRequestsOutstanding_);
{
// mutex lock required to prevent race condition around spurious


@@ -17,12 +17,19 @@
*/
//==============================================================================
#include <data/cassandra/Error.h>
#include <data/cassandra/impl/Future.h>
#include <data/cassandra/impl/Result.h>
#include "data/cassandra/impl/Future.h"
#include <exception>
#include <vector>
#include "data/cassandra/Error.h"
#include "data/cassandra/Types.h"
#include "data/cassandra/impl/ManagedObject.h"
#include "data/cassandra/impl/Result.h"
#include <cassandra.h>
#include <cstddef>
#include <memory>
#include <string>
#include <utility>
namespace {
constexpr auto futureDeleter = [](CassFuture* ptr) { cass_future_free(ptr); };


@@ -19,8 +19,8 @@
#pragma once
#include <data/cassandra/Types.h>
#include <data/cassandra/impl/ManagedObject.h>
#include "data/cassandra/Types.h"
#include "data/cassandra/impl/ManagedObject.h"
#include <cassandra.h>


@@ -17,7 +17,13 @@
*/
//==============================================================================
#include <data/cassandra/impl/Result.h>
#include "data/cassandra/impl/Result.h"
#include "data/cassandra/impl/ManagedObject.h"
#include <cassandra.h>
#include <cstddef>
namespace {
constexpr auto resultDeleter = [](CassResult const* ptr) { cass_result_free(ptr); };


@@ -19,13 +19,13 @@
#pragma once
#include <data/cassandra/impl/ManagedObject.h>
#include <data/cassandra/impl/Tuple.h>
#include <util/Expected.h>
#include "data/cassandra/impl/ManagedObject.h"
#include "data/cassandra/impl/Tuple.h"
#include "util/Expected.h"
#include <cassandra.h>
#include <ripple/basics/base_uint.h>
#include <ripple/protocol/AccountID.h>
#include <cassandra.h>
#include <compare>
#include <iterator>


@@ -19,10 +19,10 @@
#pragma once
#include <data/cassandra/Handle.h>
#include <data/cassandra/Types.h>
#include <util/Expected.h>
#include <util/log/Logger.h>
#include "data/cassandra/Handle.h"
#include "data/cassandra/Types.h"
#include "util/Expected.h"
#include "util/log/Logger.h"
#include <boost/asio.hpp>


@@ -19,7 +19,7 @@
#pragma once
#include <data/cassandra/impl/ManagedObject.h>
#include "data/cassandra/impl/ManagedObject.h"
#include <cassandra.h>


@@ -17,7 +17,14 @@
*/
//==============================================================================
#include <data/cassandra/impl/SslContext.h>
#include "data/cassandra/impl/SslContext.h"
#include "data/cassandra/impl/ManagedObject.h"
#include <cassandra.h>
#include <stdexcept>
#include <string>
namespace {
constexpr auto contextDeleter = [](CassSsl* ptr) { cass_ssl_free(ptr); };


@@ -19,7 +19,7 @@
#pragma once
#include <data/cassandra/impl/ManagedObject.h>
#include "data/cassandra/impl/ManagedObject.h"
#include <cassandra.h>


@@ -19,16 +19,16 @@
#pragma once
#include <data/cassandra/Types.h>
#include <data/cassandra/impl/Collection.h>
#include <data/cassandra/impl/ManagedObject.h>
#include <data/cassandra/impl/Tuple.h>
#include <util/Expected.h>
#include "data/cassandra/Types.h"
#include "data/cassandra/impl/Collection.h"
#include "data/cassandra/impl/ManagedObject.h"
#include "data/cassandra/impl/Tuple.h"
#include "util/Expected.h"
#include <ripple/basics/base_uint.h>
#include <ripple/protocol/STAccount.h>
#include <cassandra.h>
#include <fmt/core.h>
#include <ripple/basics/base_uint.h>
#include <ripple/protocol/STAccount.h>
#include <chrono>
#include <compare>


@@ -17,7 +17,11 @@
*/
//==============================================================================
#include <data/cassandra/impl/Tuple.h>
#include "data/cassandra/impl/Tuple.h"
#include "data/cassandra/impl/ManagedObject.h"
#include <cassandra.h>
namespace {
constexpr auto tupleDeleter = [](CassTuple* ptr) { cass_tuple_free(ptr); };


@@ -19,10 +19,10 @@
#pragma once
#include <data/cassandra/impl/ManagedObject.h>
#include "data/cassandra/impl/ManagedObject.h"
#include <ripple/basics/base_uint.h>
#include <cassandra.h>
#include <ripple/basics/base_uint.h>
#include <functional>
#include <string>
@@ -75,7 +75,7 @@ public:
}
// clio only uses bigint (int64_t) so we convert any incoming type
else if constexpr (std::is_convertible_v<DecayedType, int64_t>) {
auto const rc = cass_tuple_set_int64(*this, idx, value);
auto const rc = cass_tuple_set_int64(*this, idx, std::forward<Type>(value));
throwErrorIfNeeded(rc, "Bind int64");
} else if constexpr (std::is_same_v<DecayedType, ripple::uint256>) {
auto const rc = cass_tuple_set_bytes(


@@ -20,7 +20,10 @@
/** @file */
#pragma once
#include "util/Assert.h"
#include <ripple/basics/base_uint.h>
#include <condition_variable>
#include <mutex>
#include <optional>
@@ -209,7 +212,7 @@ public:
inline std::vector<ripple::uint256>
getMarkers(size_t numMarkers)
{
assert(numMarkers <= 256);
ASSERT(numMarkers <= 256, "Number of markers must be <= 256. Got: {}", numMarkers);
unsigned char const incr = 256 / numMarkers;
@@ -222,4 +225,4 @@ getMarkers(size_t numMarkers)
}
return markers;
}
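`getMarkers` above spreads the requested number of cursors evenly across the 256 possible values of a key's most significant byte, which is why the new ASSERT caps `numMarkers` at 256. A standalone sketch using a 32-byte array in place of `ripple::uint256`:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <vector>

using Key256 = std::array<unsigned char, 32>;  // stand-in for ripple::uint256

// Sketch of etl::getMarkers: step the most significant byte by
// 256 / numMarkers to divide the key space into equal slices.
inline std::vector<Key256> getMarkers(std::size_t numMarkers)
{
    assert(numMarkers <= 256 && numMarkers > 0);
    unsigned char const incr = static_cast<unsigned char>(256 / numMarkers);
    std::vector<Key256> markers;
    markers.reserve(numMarkers);
    unsigned char firstByte = 0;
    for (std::size_t i = 0; i < numMarkers; ++i, firstByte += incr) {
        Key256 k{};
        k[0] = firstByte;
        markers.push_back(k);
    }
    return markers;
}
```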
} // namespace etl
} // namespace etl


@@ -17,12 +17,27 @@
*/
//==============================================================================
#include <etl/ETLService.h>
#include <util/Constants.h>
#include "etl/ETLService.h"
#include "data/BackendInterface.h"
#include "util/Assert.h"
#include "util/Constants.h"
#include "util/config/Config.h"
#include "util/log/Logger.h"
#include <boost/asio/io_context.hpp>
#include <ripple/beast/core/CurrentThreadName.h>
#include <ripple/protocol/LedgerHeader.h>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <optional>
#include <stdexcept>
#include <thread>
#include <utility>
#include <vector>
namespace etl {
// Database must be populated when this starts
@@ -35,11 +50,14 @@ ETLService::runETLPipeline(uint32_t startSequence, uint32_t numExtractors)
LOG(log_.debug()) << "Starting etl pipeline";
state_.isWriting = true;
auto rng = backend_->hardFetchLedgerRangeNoThrow();
if (!rng || rng->maxSequence < startSequence - 1) {
assert(false);
throw std::runtime_error("runETLPipeline: parent ledger is null");
}
auto const rng = backend_->hardFetchLedgerRangeNoThrow();
ASSERT(rng.has_value(), "Parent ledger range can't be null");
ASSERT(
rng->maxSequence >= startSequence - 1,
"Parent ledger not found. rng->maxSequence = {}, startSequence = {}",
rng->maxSequence,
startSequence
);
auto const begin = std::chrono::system_clock::now();
auto extractors = std::vector<std::unique_ptr<ExtractorType>>{};
@@ -125,7 +143,7 @@ ETLService::monitor()
cacheLoader_.load(rng->maxSequence);
}
assert(rng);
ASSERT(rng.has_value(), "Ledger range can't be null");
uint32_t nextSequence = rng->maxSequence + 1;
LOG(log_.debug()) << "Database is populated. "

View File

@@ -19,25 +19,25 @@
#pragma once
#include <data/BackendInterface.h>
#include <data/LedgerCache.h>
#include <etl/LoadBalancer.h>
#include <etl/Source.h>
#include <etl/SystemState.h>
#include <etl/impl/AmendmentBlock.h>
#include <etl/impl/CacheLoader.h>
#include <etl/impl/ExtractionDataPipe.h>
#include <etl/impl/Extractor.h>
#include <etl/impl/LedgerFetcher.h>
#include <etl/impl/LedgerLoader.h>
#include <etl/impl/LedgerPublisher.h>
#include <etl/impl/Transformer.h>
#include <feed/SubscriptionManager.h>
#include <util/log/Logger.h>
#include "data/BackendInterface.h"
#include "data/LedgerCache.h"
#include "etl/LoadBalancer.h"
#include "etl/Source.h"
#include "etl/SystemState.h"
#include "etl/impl/AmendmentBlock.h"
#include "etl/impl/CacheLoader.h"
#include "etl/impl/ExtractionDataPipe.h"
#include "etl/impl/Extractor.h"
#include "etl/impl/LedgerFetcher.h"
#include "etl/impl/LedgerLoader.h"
#include "etl/impl/LedgerPublisher.h"
#include "etl/impl/Transformer.h"
#include "feed/SubscriptionManager.h"
#include "util/log/Logger.h"
#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <boost/asio/steady_timer.hpp>
#include <grpcpp/grpcpp.h>
#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <memory>

View File

@@ -17,8 +17,16 @@
*/
//==============================================================================
#include <etl/ETLState.h>
#include <rpc/JS.h>
#include "etl/ETLState.h"
#include "rpc/JS.h"
#include <boost/json/conversion.hpp>
#include <boost/json/value.hpp>
#include <boost/json/value_to.hpp>
#include <ripple/protocol/jss.h>
#include <cstdint>
namespace etl {

View File

@@ -19,7 +19,7 @@
#pragma once
#include <data/BackendInterface.h>
#include "data/BackendInterface.h"
#include <boost/json.hpp>

View File

@@ -17,25 +17,36 @@
*/
//==============================================================================
#include <data/DBHelpers.h>
#include <etl/ETLService.h>
#include <etl/NFTHelpers.h>
#include <etl/ProbingSource.h>
#include <etl/Source.h>
#include <rpc/RPCHelpers.h>
#include <util/Profiler.h>
#include <util/Random.h>
#include <util/log/Logger.h>
#include "etl/LoadBalancer.h"
#include <ripple/beast/net/IPEndpoint.h>
#include <ripple/protocol/STLedgerEntry.h>
#include <boost/asio/strand.hpp>
#include <boost/beast/http.hpp>
#include <boost/beast/ssl.hpp>
#include <boost/json.hpp>
#include <boost/json/src.hpp>
#include "data/BackendInterface.h"
#include "etl/ETLHelpers.h"
#include "etl/ETLService.h"
#include "etl/ETLState.h"
#include "etl/ProbingSource.h"
#include "etl/Source.h"
#include "util/Assert.h"
#include "util/Random.h"
#include "util/log/Logger.h"
#include <boost/asio/io_context.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/json/array.hpp>
#include <boost/json/object.hpp>
#include <boost/json/value.hpp>
#include <fmt/core.h>
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <optional>
#include <stdexcept>
#include <string>
#include <thread>
#include <utility>
#include <vector>
using namespace util;
@@ -206,7 +217,7 @@ bool
LoadBalancer::shouldPropagateTxnStream(Source* in) const
{
for (auto& src : sources_) {
assert(src);
ASSERT(src != nullptr, "Source is nullptr");
// We pick the first Source encountered that is connected
if (src->isConnected())

View File

@@ -19,16 +19,16 @@
#pragma once
#include <data/BackendInterface.h>
#include <etl/ETLHelpers.h>
#include <etl/ETLState.h>
#include <feed/SubscriptionManager.h>
#include <util/config/Config.h>
#include <util/log/Logger.h>
#include "data/BackendInterface.h"
#include "etl/ETLHelpers.h"
#include "etl/ETLState.h"
#include "feed/SubscriptionManager.h"
#include "util/config/Config.h"
#include "util/log/Logger.h"
#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <boost/asio.hpp>
#include <grpcpp/grpcpp.h>
#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
namespace etl {
class Source;

View File

@@ -17,15 +17,33 @@
*/
//==============================================================================
#include <ripple/protocol/STBase.h>
#include <ripple/protocol/STTx.h>
#include <ripple/protocol/TxMeta.h>
#include <vector>
#include "data/DBHelpers.h"
#include <data/BackendInterface.h>
#include <data/DBHelpers.h>
#include <data/Types.h>
#include <fmt/core.h>
#include <ripple/basics/base_uint.h>
#include <ripple/basics/strHex.h>
#include <ripple/protocol/AccountID.h>
#include <ripple/protocol/LedgerFormats.h>
#include <ripple/protocol/SField.h>
#include <ripple/protocol/STArray.h>
#include <ripple/protocol/STBase.h>
#include <ripple/protocol/STLedgerEntry.h>
#include <ripple/protocol/STObject.h>
#include <ripple/protocol/STTx.h>
#include <ripple/protocol/Serializer.h>
#include <ripple/protocol/TER.h>
#include <ripple/protocol/TxFormats.h>
#include <ripple/protocol/TxMeta.h>
#include <algorithm>
#include <cstdint>
#include <iterator>
#include <optional>
#include <sstream>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>
namespace etl {
@@ -114,7 +132,8 @@ getNFTokenMintData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
return {
{NFTTransactionsData(*diff.first, txMeta, sttx.getTransactionID())},
NFTsData(*diff.first, *owner, sttx.getFieldVL(ripple::sfURI), txMeta)};
NFTsData(*diff.first, *owner, sttx.getFieldVL(ripple::sfURI), txMeta)
};
}
std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
@@ -198,7 +217,8 @@ getNFTokenAcceptOfferData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx
.downcast<ripple::STObject>()
.getAccountID(ripple::sfOwner);
return {
{NFTTransactionsData(tokenID, txMeta, sttx.getTransactionID())}, NFTsData(tokenID, owner, txMeta, false)};
{NFTTransactionsData(tokenID, txMeta, sttx.getTransactionID())}, NFTsData(tokenID, owner, txMeta, false)
};
}
// Otherwise we have to infer the new owner from the affected nodes.
@@ -247,7 +267,8 @@ getNFTokenAcceptOfferData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx
if (nft != nfts.end()) {
return {
{NFTTransactionsData(tokenID, txMeta, sttx.getTransactionID())},
NFTsData(tokenID, nodeOwner, txMeta, false)};
NFTsData(tokenID, nodeOwner, txMeta, false)
};
}
}

View File

@@ -20,7 +20,7 @@
/** @file */
#pragma once
#include <data/DBHelpers.h>
#include "data/DBHelpers.h"
#include <ripple/protocol/STTx.h>
#include <ripple/protocol/TxMeta.h>

View File

@@ -17,7 +17,32 @@
*/
//==============================================================================
#include <etl/ProbingSource.h>
#include "etl/ProbingSource.h"
#include "data/BackendInterface.h"
#include "etl/ETLHelpers.h"
#include "etl/LoadBalancer.h"
#include "etl/Source.h"
#include "feed/SubscriptionManager.h"
#include "util/config/Config.h"
#include "util/log/Logger.h"
#include <boost/asio/io_context.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/ssl/context.hpp>
#include <boost/json/object.hpp>
#include <boost/uuid/nil_generator.hpp>
#include <boost/uuid/uuid.hpp>
#include <grpcpp/support/status.h>
#include <cstdint>
#include <functional>
#include <memory>
#include <mutex>
#include <optional>
#include <string>
#include <utility>
#include <vector>
namespace etl {
@@ -173,7 +198,8 @@ ProbingSource::make_SSLHooks() noexcept
plainSrc_->resume();
}
return SourceHooks::Action::STOP;
}};
}
};
}
SourceHooks
@@ -200,6 +226,7 @@ ProbingSource::make_PlainHooks() noexcept
sslSrc_->resume();
}
return SourceHooks::Action::STOP;
}};
}
};
};
} // namespace etl

View File

@@ -19,9 +19,9 @@
#pragma once
#include <etl/Source.h>
#include <util/config/Config.h>
#include <util/log/Logger.h>
#include "etl/Source.h"
#include "util/config/Config.h"
#include "util/log/Logger.h"
#include <boost/asio.hpp>
#include <boost/beast/core.hpp>

View File

@@ -17,22 +17,22 @@
*/
//==============================================================================
#include <data/DBHelpers.h>
#include <etl/ETLService.h>
#include <etl/LoadBalancer.h>
#include <etl/ProbingSource.h>
#include <etl/Source.h>
#include <rpc/RPCHelpers.h>
#include <util/Profiler.h>
#include "etl/Source.h"
#include <ripple/beast/net/IPEndpoint.h>
#include <ripple/protocol/STLedgerEntry.h>
#include <boost/asio/strand.hpp>
#include <boost/beast/http.hpp>
#include <boost/beast/ssl.hpp>
#include <boost/json.hpp>
#include "util/log/Logger.h"
#include <thread>
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/post.hpp>
#include <boost/asio/ssl/stream_base.hpp>
#include <boost/beast/core/error.hpp>
#include <boost/beast/core/role.hpp>
#include <boost/beast/core/stream_traits.hpp>
#include <boost/beast/http/field.hpp>
#include <boost/beast/websocket/rfc6455.hpp>
#include <boost/beast/websocket/stream_base.hpp>
#include <memory>
#include <string>
namespace etl {

View File

@@ -19,16 +19,16 @@
#pragma once
#include <data/BackendInterface.h>
#include <etl/ETLHelpers.h>
#include <etl/LoadBalancer.h>
#include <etl/impl/AsyncData.h>
#include <etl/impl/ForwardCache.h>
#include <feed/SubscriptionManager.h>
#include <util/config/Config.h>
#include <util/log/Logger.h>
#include "data/BackendInterface.h"
#include "etl/ETLHelpers.h"
#include "etl/LoadBalancer.h"
#include "etl/impl/AsyncData.h"
#include "etl/impl/ForwardCache.h"
#include "feed/SubscriptionManager.h"
#include "util/Assert.h"
#include "util/config/Config.h"
#include "util/log/Logger.h"
#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <boost/algorithm/string.hpp>
#include <boost/asio.hpp>
#include <boost/beast/core.hpp>
@@ -38,6 +38,8 @@
#include <boost/uuid/uuid.hpp>
#include <boost/uuid/uuid_generators.hpp>
#include <grpcpp/grpcpp.h>
#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <utility>
namespace feed {
@@ -485,7 +487,7 @@ public:
std::vector<std::string> edgeKeys;
while (numFinished < calls.size() && cq.Next(&tag, &ok)) {
assert(tag);
ASSERT(tag != nullptr, "Tag can't be null.");
auto ptr = static_cast<etl::detail::AsyncCallData*>(tag);
if (!ok) {
@@ -794,7 +796,7 @@ private:
uint32_t const sequence = std::stoll(minAndMax[0]);
pairs.emplace_back(sequence, sequence);
} else {
assert(minAndMax.size() == 2);
ASSERT(minAndMax.size() == 2, "minAndMax should be of size 2. Got size = {}", minAndMax.size());
uint32_t const min = std::stoll(minAndMax[0]);
uint32_t const max = std::stoll(minAndMax[1]);
pairs.emplace_back(min, max);

View File

@@ -19,10 +19,11 @@
#pragma once
#include <etl/SystemState.h>
#include <util/log/Logger.h>
#include "etl/SystemState.h"
#include "util/log/Logger.h"
#include <boost/asio/io_context.hpp>
#include <boost/asio/post.hpp>
#include <boost/asio/steady_timer.hpp>
#include <chrono>

View File

@@ -19,12 +19,13 @@
#pragma once
#include <data/BackendInterface.h>
#include <etl/NFTHelpers.h>
#include <util/log/Logger.h>
#include "data/BackendInterface.h"
#include "etl/NFTHelpers.h"
#include "util/Assert.h"
#include "util/log/Logger.h"
#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <grpcpp/grpcpp.h>
#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
namespace etl::detail {
@@ -60,7 +61,12 @@ public:
<< " . prefix = " << ripple::strHex(std::string(1, prefix))
<< " . nextPrefix_ = " << ripple::strHex(std::string(1, nextPrefix_));
assert(nextPrefix_ > prefix || nextPrefix_ == 0x00);
ASSERT(
nextPrefix_ > prefix || nextPrefix_ == 0x00,
"Next prefix must be greater than current prefix. Got: nextPrefix_ = {}, prefix = {}",
nextPrefix_,
prefix
);
cur_ = std::make_unique<org::xrpl::rpc::v1::GetLedgerDataResponse>();
next_ = std::make_unique<org::xrpl::rpc::v1::GetLedgerDataResponse>();

View File

@@ -19,16 +19,16 @@
#pragma once
#include <data/BackendInterface.h>
#include <util/log/Logger.h>
#include "data/BackendInterface.h"
#include "util/log/Logger.h"
#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <boost/algorithm/string.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/core/string.hpp>
#include <boost/beast/websocket.hpp>
#include <grpcpp/grpcpp.h>
#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <chrono>
#include <mutex>
@@ -134,10 +134,7 @@ public:
return;
}
if (cache_.get().isFull()) {
assert(false);
return;
}
ASSERT(!cache_.get().isFull(), "Cache must not be full. seq = {}", seq);
if (!clioPeers_.empty()) {
boost::asio::spawn(ioContext_.get(), [this, seq](boost::asio::yield_context yield) {
@@ -220,7 +217,8 @@ private:
{"ledger_index", ledgerIndex},
{"binary", true},
{"out_of_order", true},
{"limit", LIMIT}};
{"limit", LIMIT}
};
if (marker)
request["marker"] = *marker;

View File

@@ -19,8 +19,8 @@
#pragma once
#include <etl/ETLHelpers.h>
#include <util/log/Logger.h>
#include "etl/ETLHelpers.h"
#include "util/log/Logger.h"
#include <memory>
#include <vector>

View File

@@ -19,9 +19,10 @@
#pragma once
#include <etl/SystemState.h>
#include <util/Profiler.h>
#include <util/log/Logger.h>
#include "etl/SystemState.h"
#include "util/Assert.h"
#include "util/Profiler.h"
#include "util/log/Logger.h"
#include <ripple/beast/core/CurrentThreadName.h>
@@ -76,7 +77,7 @@ public:
void
waitTillFinished()
{
assert(thread_.joinable());
ASSERT(thread_.joinable(), "Extractor thread must be joinable");
thread_.join();
}

View File

@@ -17,12 +17,21 @@
*/
//==============================================================================
#include <etl/Source.h>
#include <etl/impl/ForwardCache.h>
#include <rpc/RPCHelpers.h>
#include "etl/impl/ForwardCache.h"
#include "etl/Source.h"
#include "rpc/RPCHelpers.h"
#include "util/log/Logger.h"
#include <boost/asio/spawn.hpp>
#include <boost/json.hpp>
#include <boost/json/object.hpp>
#include <atomic>
#include <memory>
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <string>
namespace etl::detail {

View File

@@ -19,10 +19,10 @@
#pragma once
#include <data/BackendInterface.h>
#include <etl/ETLHelpers.h>
#include <util/config/Config.h>
#include <util/log/Logger.h>
#include "data/BackendInterface.h"
#include "etl/ETLHelpers.h"
#include "util/config/Config.h"
#include "util/log/Logger.h"
#include <boost/asio.hpp>
#include <boost/json.hpp>

View File

@@ -19,12 +19,12 @@
#pragma once
#include <data/BackendInterface.h>
#include <etl/Source.h>
#include <util/log/Logger.h>
#include "data/BackendInterface.h"
#include "etl/Source.h"
#include "util/log/Logger.h"
#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <grpcpp/grpcpp.h>
#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <optional>
#include <utility>

View File

@@ -19,13 +19,14 @@
#pragma once
#include <data/BackendInterface.h>
#include <etl/NFTHelpers.h>
#include <etl/SystemState.h>
#include <etl/impl/LedgerFetcher.h>
#include <util/LedgerUtils.h>
#include <util/Profiler.h>
#include <util/log/Logger.h>
#include "data/BackendInterface.h"
#include "etl/NFTHelpers.h"
#include "etl/SystemState.h"
#include "etl/impl/LedgerFetcher.h"
#include "util/Assert.h"
#include "util/LedgerUtils.h"
#include "util/Profiler.h"
#include "util/log/Logger.h"
#include <ripple/beast/core/CurrentThreadName.h>
@@ -152,8 +153,7 @@ public:
// check that database is actually empty
auto rng = backend_->hardFetchLedgerRangeNoThrow();
if (rng) {
LOG(log_.fatal()) << "Database is not empty";
assert(false);
ASSERT(false, "Database is not empty");
return {};
}
@@ -189,50 +189,48 @@ public:
size_t numWrites = 0;
backend_->cache().setFull();
auto seconds =
::util::timed<std::chrono::seconds>([this, edgeKeys = &edgeKeys, sequence, &numWrites]() {
for (auto& key : *edgeKeys) {
LOG(log_.debug()) << "Writing edge key = " << ripple::strHex(key);
auto succ =
backend_->cache().getSuccessor(*ripple::uint256::fromVoidChecked(key), sequence);
if (succ)
backend_->writeSuccessor(std::move(key), sequence, uint256ToString(succ->key));
}
auto seconds = ::util::timed<std::chrono::seconds>([this, edgeKeys = &edgeKeys, sequence, &numWrites] {
for (auto& key : *edgeKeys) {
LOG(log_.debug()) << "Writing edge key = " << ripple::strHex(key);
auto succ = backend_->cache().getSuccessor(*ripple::uint256::fromVoidChecked(key), sequence);
if (succ)
backend_->writeSuccessor(std::move(key), sequence, uint256ToString(succ->key));
}
ripple::uint256 prev = data::firstKey;
while (auto cur = backend_->cache().getSuccessor(prev, sequence)) {
assert(cur);
if (prev == data::firstKey)
backend_->writeSuccessor(uint256ToString(prev), sequence, uint256ToString(cur->key));
ripple::uint256 prev = data::firstKey;
while (auto cur = backend_->cache().getSuccessor(prev, sequence)) {
ASSERT(cur.has_value(), "Successor for key {} must exist", ripple::strHex(prev));
if (prev == data::firstKey)
backend_->writeSuccessor(uint256ToString(prev), sequence, uint256ToString(cur->key));
if (isBookDir(cur->key, cur->blob)) {
auto base = getBookBase(cur->key);
// make sure the base is not an actual object
if (!backend_->cache().get(cur->key, sequence)) {
auto succ = backend_->cache().getSuccessor(base, sequence);
assert(succ);
if (succ->key == cur->key) {
LOG(log_.debug()) << "Writing book successor = " << ripple::strHex(base)
<< " - " << ripple::strHex(cur->key);
if (isBookDir(cur->key, cur->blob)) {
auto base = getBookBase(cur->key);
// make sure the base is not an actual object
if (!backend_->cache().get(cur->key, sequence)) {
auto succ = backend_->cache().getSuccessor(base, sequence);
ASSERT(succ.has_value(), "Book base {} must have a successor", ripple::strHex(base));
if (succ->key == cur->key) {
LOG(log_.debug()) << "Writing book successor = " << ripple::strHex(base) << " - "
<< ripple::strHex(cur->key);
backend_->writeSuccessor(
uint256ToString(base), sequence, uint256ToString(cur->key)
);
}
backend_->writeSuccessor(
uint256ToString(base), sequence, uint256ToString(cur->key)
);
}
++numWrites;
}
prev = cur->key;
static constexpr std::size_t LOG_INTERVAL = 100000;
if (numWrites % LOG_INTERVAL == 0 && numWrites != 0)
LOG(log_.info()) << "Wrote " << numWrites << " book successors";
++numWrites;
}
backend_->writeSuccessor(uint256ToString(prev), sequence, uint256ToString(data::lastKey));
++numWrites;
});
prev = cur->key;
static constexpr std::size_t LOG_INTERVAL = 100000;
if (numWrites % LOG_INTERVAL == 0 && numWrites != 0)
LOG(log_.info()) << "Wrote " << numWrites << " book successors";
}
backend_->writeSuccessor(uint256ToString(prev), sequence, uint256ToString(data::lastKey));
++numWrites;
});
LOG(log_.info()) << "Looping through cache and submitting all writes took " << seconds
<< " seconds. numWrites = " << std::to_string(numWrites);
@@ -242,8 +240,8 @@ public:
if (not state_.get().isStopping) {
backend_->writeAccountTransactions(std::move(insertTxResult.accountTxData));
backend_->writeNFTs(std::move(insertTxResult.nfTokensData));
backend_->writeNFTTransactions(std::move(insertTxResult.nfTokenTxData));
backend_->writeNFTs(insertTxResult.nfTokensData);
backend_->writeNFTTransactions(insertTxResult.nfTokenTxData);
}
backend_->finishWrites(sequence);

View File

@@ -19,11 +19,12 @@
#pragma once
#include <data/BackendInterface.h>
#include <etl/SystemState.h>
#include <feed/SubscriptionManager.h>
#include <util/LedgerUtils.h>
#include <util/log/Logger.h>
#include "data/BackendInterface.h"
#include "etl/SystemState.h"
#include "feed/SubscriptionManager.h"
#include "util/Assert.h"
#include "util/LedgerUtils.h"
#include "util/log/Logger.h"
#include <ripple/protocol/LedgerHeader.h>
@@ -115,7 +116,7 @@ public:
return backend_->fetchLedgerBySequence(ledgerSequence, yield);
});
assert(lgr);
ASSERT(lgr.has_value(), "Ledger must exist in database. Ledger sequence = {}", ledgerSequence);
publish(*lgr);
return true;
@@ -157,7 +158,7 @@ public:
std::optional<ripple::Fees> fees = data::synchronousAndRetryOnTimeout([&](auto yield) {
return backend_->fetchFees(lgrInfo.seq, yield);
});
assert(fees);
ASSERT(fees.has_value(), "Fees must exist for ledger {}", lgrInfo.seq);
std::vector<data::TransactionAndMetadata> transactions =
data::synchronousAndRetryOnTimeout([&](auto yield) {
@@ -165,7 +166,7 @@ public:
});
auto const ledgerRange = backend_->fetchLedgerRange();
assert(ledgerRange);
ASSERT(ledgerRange.has_value(), "Ledger range must exist");
std::string const range =
std::to_string(ledgerRange->minSequence) + "-" + std::to_string(ledgerRange->maxSequence);

View File

@@ -19,17 +19,18 @@
#pragma once
#include <data/BackendInterface.h>
#include <etl/SystemState.h>
#include <etl/impl/AmendmentBlock.h>
#include <etl/impl/LedgerLoader.h>
#include <util/LedgerUtils.h>
#include <util/Profiler.h>
#include <util/log/Logger.h>
#include "data/BackendInterface.h"
#include "etl/SystemState.h"
#include "etl/impl/AmendmentBlock.h"
#include "etl/impl/LedgerLoader.h"
#include "util/Assert.h"
#include "util/LedgerUtils.h"
#include "util/Profiler.h"
#include "util/log/Logger.h"
#include <grpcpp/grpcpp.h>
#include <ripple/beast/core/CurrentThreadName.h>
#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <grpcpp/grpcpp.h>
#include <chrono>
#include <memory>
@@ -113,7 +114,7 @@ public:
void
waitTillFinished()
{
assert(thread_.joinable());
ASSERT(thread_.joinable(), "Transformer thread must be joinable");
thread_.join();
}
@@ -198,8 +199,8 @@ private:
<< rawData.transactions_list().transactions_size();
backend_->writeAccountTransactions(std::move(insertTxResultOp->accountTxData));
backend_->writeNFTs(std::move(insertTxResultOp->nfTokensData));
backend_->writeNFTTransactions(std::move(insertTxResultOp->nfTokenTxData));
backend_->writeNFTs(insertTxResultOp->nfTokensData);
backend_->writeNFTTransactions(insertTxResultOp->nfTokenTxData);
auto [success, duration] =
::util::timed<std::chrono::duration<double>>([&]() { return backend_->finishWrites(lgrInfo.seq); });
@@ -228,7 +229,7 @@ private:
for (auto& obj : *(rawData.mutable_ledger_objects()->mutable_objects())) {
auto key = ripple::uint256::fromVoidChecked(obj.key());
assert(key);
ASSERT(key.has_value(), "Failed to deserialize key from void");
cacheUpdates.push_back({*key, {obj.mutable_data()->begin(), obj.mutable_data()->end()}});
LOG(log_.debug()) << "key = " << ripple::strHex(*key) << " - mod type = " << obj.mod_type();
@@ -245,7 +246,7 @@ private:
if (isDeleted) {
auto const old = backend_->cache().get(*key, lgrInfo.seq - 1);
assert(old);
ASSERT(old.has_value(), "Deleted object must be in cache");
checkBookBase = isBookDir(*key, *old);
} else {
checkBookBase = isBookDir(*key, *blob);
@@ -256,7 +257,11 @@ private:
auto const bookBase = getBookBase(*key);
auto const oldFirstDir = backend_->cache().getSuccessor(bookBase, lgrInfo.seq - 1);
assert(oldFirstDir);
ASSERT(
oldFirstDir.has_value(),
"Book base must have a successor for lgrInfo.seq - 1 = {}",
lgrInfo.seq - 1
);
// We deleted the first directory, or we added a directory prior to the old first
// directory

View File

@@ -17,138 +17,84 @@
*/
//==============================================================================
#include <feed/SubscriptionManager.h>
#include <rpc/BookChangesHelper.h>
#include <rpc/RPCHelpers.h>
#include "feed/SubscriptionManager.h"
#include "data/Types.h"
#include "feed/Types.h"
#include <boost/asio/spawn.hpp>
#include <boost/json/object.hpp>
#include <ripple/protocol/AccountID.h>
#include <ripple/protocol/Book.h>
#include <ripple/protocol/Fees.h>
#include <ripple/protocol/LedgerHeader.h>
#include <cstdint>
#include <string>
#include <vector>
namespace feed {
void
Subscription::subscribe(SessionPtrType const& session)
SubscriptionManager::subBookChanges(SubscriberSharedPtr const& subscriber)
{
boost::asio::post(strand_, [this, session]() { addSession(session, subscribers_, subCount_); });
bookChangesFeed_.sub(subscriber);
}
void
Subscription::unsubscribe(SessionPtrType const& session)
SubscriptionManager::unsubBookChanges(SubscriberSharedPtr const& subscriber)
{
boost::asio::post(strand_, [this, session]() { removeSession(session, subscribers_, subCount_); });
}
bool
Subscription::hasSession(SessionPtrType const& session)
{
return subscribers_.contains(session);
bookChangesFeed_.unsub(subscriber);
}
void
Subscription::publish(std::shared_ptr<std::string> const& message)
{
boost::asio::post(strand_, [this, message]() { sendToSubscribers(message, subscribers_, subCount_); });
}
boost::json::object
getLedgerPubMessage(
SubscriptionManager::pubBookChanges(
ripple::LedgerHeader const& lgrInfo,
ripple::Fees const& fees,
std::string const& ledgerRange,
std::uint32_t txnCount
)
std::vector<data::TransactionAndMetadata> const& transactions
) const
{
boost::json::object pubMsg;
bookChangesFeed_.pub(lgrInfo, transactions);
}
pubMsg["type"] = "ledgerClosed";
pubMsg["ledger_index"] = lgrInfo.seq;
pubMsg["ledger_hash"] = to_string(lgrInfo.hash);
pubMsg["ledger_time"] = lgrInfo.closeTime.time_since_epoch().count();
void
SubscriptionManager::subProposedTransactions(SubscriberSharedPtr const& subscriber)
{
proposedTransactionFeed_.sub(subscriber);
}
pubMsg["fee_base"] = rpc::toBoostJson(fees.base.jsonClipped());
pubMsg["reserve_base"] = rpc::toBoostJson(fees.reserve.jsonClipped());
pubMsg["reserve_inc"] = rpc::toBoostJson(fees.increment.jsonClipped());
void
SubscriptionManager::unsubProposedTransactions(SubscriberSharedPtr const& subscriber)
{
proposedTransactionFeed_.unsub(subscriber);
}
pubMsg["validated_ledgers"] = ledgerRange;
pubMsg["txn_count"] = txnCount;
return pubMsg;
void
SubscriptionManager::subProposedAccount(ripple::AccountID const& account, SubscriberSharedPtr const& subscriber)
{
proposedTransactionFeed_.sub(account, subscriber);
}
void
SubscriptionManager::unsubProposedAccount(ripple::AccountID const& account, SubscriberSharedPtr const& subscriber)
{
proposedTransactionFeed_.unsub(account, subscriber);
}
void
SubscriptionManager::forwardProposedTransaction(boost::json::object const& receivedTxJson)
{
proposedTransactionFeed_.pub(receivedTxJson);
}
boost::json::object
SubscriptionManager::subLedger(boost::asio::yield_context yield, SessionPtrType session)
SubscriptionManager::subLedger(boost::asio::yield_context yield, SubscriberSharedPtr const& subscriber)
{
subscribeHelper(session, ledgerSubscribers_, [this](SessionPtrType session) { unsubLedger(session); });
auto ledgerRange = backend_->fetchLedgerRange();
assert(ledgerRange);
auto lgrInfo = backend_->fetchLedgerBySequence(ledgerRange->maxSequence, yield);
assert(lgrInfo);
std::optional<ripple::Fees> fees;
fees = backend_->fetchFees(lgrInfo->seq, yield);
assert(fees);
std::string const range = std::to_string(ledgerRange->minSequence) + "-" + std::to_string(ledgerRange->maxSequence);
auto pubMsg = getLedgerPubMessage(*lgrInfo, *fees, range, 0);
pubMsg.erase("txn_count");
pubMsg.erase("type");
return pubMsg;
return ledgerFeed_.sub(yield, backend_, subscriber);
}
void
SubscriptionManager::unsubLedger(SessionPtrType session)
SubscriptionManager::unsubLedger(SubscriberSharedPtr const& subscriber)
{
ledgerSubscribers_.unsubscribe(session);
}
void
SubscriptionManager::subTransactions(SessionPtrType session)
{
subscribeHelper(session, txSubscribers_, [this](SessionPtrType session) { unsubTransactions(session); });
}
void
SubscriptionManager::unsubTransactions(SessionPtrType session)
{
txSubscribers_.unsubscribe(session);
}
void
SubscriptionManager::subAccount(ripple::AccountID const& account, SessionPtrType const& session)
{
subscribeHelper(session, account, accountSubscribers_, [this, account](SessionPtrType session) {
unsubAccount(account, session);
});
}
void
SubscriptionManager::unsubAccount(ripple::AccountID const& account, SessionPtrType const& session)
{
accountSubscribers_.unsubscribe(session, account);
}
void
SubscriptionManager::subBook(ripple::Book const& book, SessionPtrType session)
{
subscribeHelper(session, book, bookSubscribers_, [this, book](SessionPtrType session) {
unsubBook(book, session);
});
}
void
SubscriptionManager::unsubBook(ripple::Book const& book, SessionPtrType session)
{
bookSubscribers_.unsubscribe(session, book);
}
void
SubscriptionManager::subBookChanges(SessionPtrType session)
{
subscribeHelper(session, bookChangesSubscribers_, [this](SessionPtrType session) { unsubBookChanges(session); });
}
void
SubscriptionManager::unsubBookChanges(SessionPtrType session)
{
bookChangesSubscribers_.unsubscribe(session);
ledgerFeed_.unsub(subscriber);
}
void
@@ -156,226 +102,96 @@ SubscriptionManager::pubLedger(
ripple::LedgerHeader const& lgrInfo,
ripple::Fees const& fees,
std::string const& ledgerRange,
std::uint32_t txnCount
std::uint32_t const txnCount
) const
{
ledgerFeed_.pub(lgrInfo, fees, ledgerRange, txnCount);
}
void
SubscriptionManager::subManifest(SubscriberSharedPtr const& subscriber)
{
manifestFeed_.sub(subscriber);
}
void
SubscriptionManager::unsubManifest(SubscriberSharedPtr const& subscriber)
{
manifestFeed_.unsub(subscriber);
}
void
SubscriptionManager::forwardManifest(boost::json::object const& manifestJson) const
{
manifestFeed_.pub(manifestJson);
}
void
SubscriptionManager::subValidation(SubscriberSharedPtr const& subscriber)
{
validationsFeed_.sub(subscriber);
}
void
SubscriptionManager::unsubValidation(SubscriberSharedPtr const& subscriber)
{
validationsFeed_.unsub(subscriber);
}
void
SubscriptionManager::forwardValidation(boost::json::object const& validationJson) const
{
validationsFeed_.pub(validationJson);
}
void
SubscriptionManager::subTransactions(SubscriberSharedPtr const& subscriber, std::uint32_t const apiVersion)
{
transactionFeed_.sub(subscriber, apiVersion);
}
void
SubscriptionManager::unsubTransactions(SubscriberSharedPtr const& subscriber)
{
transactionFeed_.unsub(subscriber);
}
void
SubscriptionManager::subAccount(
ripple::AccountID const& account,
SubscriberSharedPtr const& subscriber,
std::uint32_t const apiVersion
)
{
auto message =
std::make_shared<std::string>(boost::json::serialize(getLedgerPubMessage(lgrInfo, fees, ledgerRange, txnCount))
);
ledgerSubscribers_.publish(message);
transactionFeed_.sub(account, subscriber, apiVersion);
}
void
SubscriptionManager::pubTransaction(data::TransactionAndMetadata const& blobs, ripple::LedgerHeader const& lgrInfo)
SubscriptionManager::unsubAccount(ripple::AccountID const& account, SubscriberSharedPtr const& subscriber)
{
auto [tx, meta] = rpc::deserializeTxPlusMeta(blobs, lgrInfo.seq);
boost::json::object pubObj;
pubObj["transaction"] = rpc::toJson(*tx);
pubObj["meta"] = rpc::toJson(*meta);
rpc::insertDeliveredAmount(pubObj["meta"].as_object(), tx, meta, blobs.date);
pubObj["type"] = "transaction";
pubObj["validated"] = true;
pubObj["status"] = "closed";
pubObj["ledger_index"] = lgrInfo.seq;
pubObj["ledger_hash"] = ripple::strHex(lgrInfo.hash);
pubObj["transaction"].as_object()["date"] = lgrInfo.closeTime.time_since_epoch().count();
pubObj["engine_result_code"] = meta->getResult();
std::string token;
std::string human;
ripple::transResultInfo(meta->getResultTER(), token, human);
pubObj["engine_result"] = token;
pubObj["engine_result_message"] = human;
if (tx->getTxnType() == ripple::ttOFFER_CREATE) {
auto account = tx->getAccountID(ripple::sfAccount);
auto amount = tx->getFieldAmount(ripple::sfTakerGets);
if (account != amount.issue().account) {
ripple::STAmount ownerFunds;
auto fetchFundsSynchronous = [&]() {
data::synchronous([&](boost::asio::yield_context yield) {
ownerFunds = rpc::accountFunds(*backend_, lgrInfo.seq, amount, account, yield);
});
};
data::retryOnTimeout(fetchFundsSynchronous);
pubObj["transaction"].as_object()["owner_funds"] = ownerFunds.getText();
}
}
auto pubMsg = std::make_shared<std::string>(boost::json::serialize(pubObj));
txSubscribers_.publish(pubMsg);
auto accounts = meta->getAffectedAccounts();
for (auto const& account : accounts)
accountSubscribers_.publish(pubMsg, account);
std::unordered_set<ripple::Book> alreadySent;
for (auto const& node : meta->getNodes()) {
if (node.getFieldU16(ripple::sfLedgerEntryType) == ripple::ltOFFER) {
ripple::SField const* field = nullptr;
// We need a field that contains the TakerGets and TakerPays
// parameters.
if (node.getFName() == ripple::sfModifiedNode) {
field = &ripple::sfPreviousFields;
} else if (node.getFName() == ripple::sfCreatedNode) {
field = &ripple::sfNewFields;
} else if (node.getFName() == ripple::sfDeletedNode) {
field = &ripple::sfFinalFields;
}
if (field != nullptr) {
auto data = dynamic_cast<ripple::STObject const*>(node.peekAtPField(*field));
if ((data != nullptr) && data->isFieldPresent(ripple::sfTakerPays) &&
data->isFieldPresent(ripple::sfTakerGets)) {
// determine the OrderBook
ripple::Book const book{
data->getFieldAmount(ripple::sfTakerGets).issue(),
data->getFieldAmount(ripple::sfTakerPays).issue()};
if (alreadySent.find(book) == alreadySent.end()) {
bookSubscribers_.publish(pubMsg, book);
alreadySent.insert(book);
}
}
}
}
}
transactionFeed_.unsub(account, subscriber);
}
void
SubscriptionManager::pubBookChanges(
ripple::LedgerHeader const& lgrInfo,
std::vector<data::TransactionAndMetadata> const& transactions
SubscriptionManager::subBook(
ripple::Book const& book,
SubscriberSharedPtr const& subscriber,
std::uint32_t const apiVersion
)
{
auto const json = rpc::computeBookChanges(lgrInfo, transactions);
auto const bookChangesMsg = std::make_shared<std::string>(boost::json::serialize(json));
bookChangesSubscribers_.publish(bookChangesMsg);
transactionFeed_.sub(book, subscriber, apiVersion);
}
void
SubscriptionManager::forwardProposedTransaction(boost::json::object const& response)
SubscriptionManager::unsubBook(ripple::Book const& book, SubscriberSharedPtr const& subscriber)
{
auto pubMsg = std::make_shared<std::string>(boost::json::serialize(response));
txProposedSubscribers_.publish(pubMsg);
auto transaction = response.at("transaction").as_object();
auto accounts = rpc::getAccountsFromTransaction(transaction);
for (ripple::AccountID const& account : accounts)
accountProposedSubscribers_.publish(pubMsg, account);
transactionFeed_.unsub(book, subscriber);
}
void
SubscriptionManager::forwardManifest(boost::json::object const& response)
SubscriptionManager::pubTransaction(data::TransactionAndMetadata const& txMeta, ripple::LedgerHeader const& lgrInfo)
{
auto pubMsg = std::make_shared<std::string>(boost::json::serialize(response));
manifestSubscribers_.publish(pubMsg);
}
void
SubscriptionManager::forwardValidation(boost::json::object const& response)
{
auto pubMsg = std::make_shared<std::string>(boost::json::serialize(response));
validationsSubscribers_.publish(pubMsg);
}
void
SubscriptionManager::subProposedAccount(ripple::AccountID const& account, SessionPtrType session)
{
subscribeHelper(session, account, accountProposedSubscribers_, [this, account](SessionPtrType session) {
unsubProposedAccount(account, session);
});
}
void
SubscriptionManager::subManifest(SessionPtrType session)
{
subscribeHelper(session, manifestSubscribers_, [this](SessionPtrType session) { unsubManifest(session); });
}
void
SubscriptionManager::unsubManifest(SessionPtrType session)
{
manifestSubscribers_.unsubscribe(session);
}
void
SubscriptionManager::subValidation(SessionPtrType session)
{
subscribeHelper(session, validationsSubscribers_, [this](SessionPtrType session) { unsubValidation(session); });
}
void
SubscriptionManager::unsubValidation(SessionPtrType session)
{
validationsSubscribers_.unsubscribe(session);
}
void
SubscriptionManager::unsubProposedAccount(ripple::AccountID const& account, SessionPtrType session)
{
accountProposedSubscribers_.unsubscribe(session, account);
}
void
SubscriptionManager::subProposedTransactions(SessionPtrType session)
{
subscribeHelper(session, txProposedSubscribers_, [this](SessionPtrType session) {
unsubProposedTransactions(session);
});
}
void
SubscriptionManager::unsubProposedTransactions(SessionPtrType session)
{
txProposedSubscribers_.unsubscribe(session);
}
void
SubscriptionManager::subscribeHelper(SessionPtrType const& session, Subscription& subs, CleanupFunction&& func)
{
if (subs.hasSession(session))
return;
subs.subscribe(session);
std::scoped_lock const lk(cleanupMtx_);
cleanupFuncs_[session].push_back(std::move(func));
}
template <typename Key>
void
SubscriptionManager::subscribeHelper(
SessionPtrType const& session,
Key const& k,
SubscriptionMap<Key>& subs,
CleanupFunction&& func
)
{
if (subs.hasSession(session, k))
return;
subs.subscribe(session, k);
std::scoped_lock const lk(cleanupMtx_);
cleanupFuncs_[session].push_back(std::move(func));
}
void
SubscriptionManager::cleanup(SessionPtrType session)
{
std::scoped_lock const lk(cleanupMtx_);
if (!cleanupFuncs_.contains(session))
return;
for (auto const& f : cleanupFuncs_[session]) {
f(session);
}
cleanupFuncs_.erase(session);
transactionFeed_.pub(txMeta, lgrInfo, backend_);
}
} // namespace feed


@@ -19,369 +19,141 @@
#pragma once
#include <data/BackendInterface.h>
#include <util/config/Config.h>
#include <util/log/Logger.h>
#include <util/prometheus/Prometheus.h>
#include <web/interface/ConnectionBase.h>
#include "data/BackendInterface.h"
#include "data/Types.h"
#include "feed/Types.h"
#include "feed/impl/BookChangesFeed.h"
#include "feed/impl/ForwardFeed.h"
#include "feed/impl/LedgerFeed.h"
#include "feed/impl/ProposedTransactionFeed.h"
#include "feed/impl/TransactionFeed.h"
#include "util/log/Logger.h"
#include <boost/asio/executor_work_guard.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/json/object.hpp>
#include <ripple/protocol/AccountID.h>
#include <ripple/protocol/Book.h>
#include <ripple/protocol/Fees.h>
#include <ripple/protocol/LedgerHeader.h>
#include <cstdint>
#include <functional>
#include <memory>
#include <string>
#include <thread>
#include <vector>
/**
* @brief This namespace deals with subscriptions.
*/
namespace feed {
using SessionPtrType = std::shared_ptr<web::ConnectionBase>;
/**
* @brief Sends a message to subscribers.
*
* @param message The message to send
* @param subscribers The subscription stream to send the message to
* @param counter The subscription counter to decrement if session is detected as dead
*/
template <class T>
inline void
sendToSubscribers(std::shared_ptr<std::string> const& message, T& subscribers, util::prometheus::GaugeInt& counter)
{
for (auto it = subscribers.begin(); it != subscribers.end();) {
auto& session = *it;
if (session->dead()) {
it = subscribers.erase(it);
--counter;
} else {
session->send(message);
++it;
}
}
}
/**
* @brief Adds a session to the subscription stream.
*
* @param session The session to add
* @param subscribers The stream to subscribe to
* @param counter The counter representing the current total subscribers
*/
template <class T>
inline void
addSession(SessionPtrType session, T& subscribers, util::prometheus::GaugeInt& counter)
{
if (!subscribers.contains(session)) {
subscribers.insert(session);
++counter;
}
}
/**
* @brief Removes a session from the subscription stream.
*
* @param session The session to remove
* @param subscribers The stream to unsubscribe from
* @param counter The counter representing the current total subscribers
*/
template <class T>
inline void
removeSession(SessionPtrType session, T& subscribers, util::prometheus::GaugeInt& counter)
{
if (subscribers.contains(session)) {
subscribers.erase(session);
--counter;
}
}
/**
* @brief Represents a subscription stream.
*/
class Subscription {
boost::asio::strand<boost::asio::io_context::executor_type> strand_;
std::unordered_set<SessionPtrType> subscribers_ = {};
util::prometheus::GaugeInt& subCount_;
public:
Subscription() = delete;
Subscription(Subscription&) = delete;
Subscription(Subscription&&) = delete;
/**
* @brief Create a new subscription stream.
*
* @param ioc The io_context to run on
*/
explicit Subscription(boost::asio::io_context& ioc, std::string const& name)
: strand_(boost::asio::make_strand(ioc))
, subCount_(PrometheusService::gaugeInt(
"subscriptions_current_number",
util::prometheus::Labels({util::prometheus::Label{"stream", name}}),
fmt::format("Current subscribers number on the {} stream", name)
))
{
}
~Subscription() = default;
/**
* @brief Adds the given session to the subscribers set.
*
* @param session The session to add
*/
void
subscribe(SessionPtrType const& session);
/**
* @brief Removes the given session from the subscribers set.
*
* @param session The session to remove
*/
void
unsubscribe(SessionPtrType const& session);
/**
 * @brief Check if a session is in the subscribers list.
*
* @param session The session to check
* @return true if the session is in the subscribers list; false otherwise
*/
bool
hasSession(SessionPtrType const& session);
/**
* @brief Sends the given message to all subscribers.
*
* @param message The message to send
*/
void
publish(std::shared_ptr<std::string> const& message);
/**
* @return Total subscriber count on this stream.
*/
std::uint64_t
count() const
{
return subCount_.value();
}
/**
* @return true if the stream currently has no subscribers; false otherwise
*/
bool
empty() const
{
return count() == 0;
}
};
/**
* @brief Represents a collection of subscriptions where each stream is mapped to a key.
*/
template <class Key>
class SubscriptionMap {
using SubscribersType = std::set<SessionPtrType>;
boost::asio::strand<boost::asio::io_context::executor_type> strand_;
std::unordered_map<Key, SubscribersType> subscribers_ = {};
util::prometheus::GaugeInt& subCount_;
public:
SubscriptionMap() = delete;
SubscriptionMap(SubscriptionMap&) = delete;
SubscriptionMap(SubscriptionMap&&) = delete;
/**
* @brief Create a new subscription map.
*
* @param ioc The io_context to run on
*/
explicit SubscriptionMap(boost::asio::io_context& ioc, std::string const& name)
: strand_(boost::asio::make_strand(ioc))
, subCount_(PrometheusService::gaugeInt(
"subscriptions_current_number",
util::prometheus::Labels({util::prometheus::Label{"collection", name}}),
fmt::format("Current subscribers number on the {} collection", name)
))
{
}
~SubscriptionMap() = default;
/**
* @brief Subscribe to a specific stream by its key.
*
* @param session The session to add
* @param key The key for the subscription to subscribe to
*/
void
subscribe(SessionPtrType const& session, Key const& key)
{
boost::asio::post(strand_, [this, session, key]() { addSession(session, subscribers_[key], subCount_); });
}
/**
* @brief Unsubscribe from a specific stream by its key.
*
* @param session The session to remove
* @param key The key for the subscription to unsubscribe from
*/
void
unsubscribe(SessionPtrType const& session, Key const& key)
{
boost::asio::post(strand_, [this, key, session]() {
if (!subscribers_.contains(key))
return;
if (!subscribers_[key].contains(session))
return;
--subCount_;
subscribers_[key].erase(session);
if (subscribers_[key].size() == 0) {
subscribers_.erase(key);
}
});
}
/**
 * @brief Check if a session is in the subscribers list.
*
* @param session The session to check
* @param key The key for the subscription to check
* @return true if the session is in the subscribers list; false otherwise
*/
bool
hasSession(SessionPtrType const& session, Key const& key)
{
if (!subscribers_.contains(key))
return false;
return subscribers_[key].contains(session);
}
/**
* @brief Sends the given message to all subscribers.
*
* @param message The message to send
* @param key The key for the subscription to send the message to
*/
void
publish(std::shared_ptr<std::string> const& message, Key const& key)
{
boost::asio::post(strand_, [this, key, message]() {
if (!subscribers_.contains(key))
return;
sendToSubscribers(message, subscribers_[key], subCount_);
});
}
/**
* @return Total subscriber count on all streams in the collection.
*/
std::uint64_t
count() const
{
return subCount_.value();
}
};
/**
* @brief Manages subscriptions.
*/
class SubscriptionManager {
util::Logger log_{"Subscriptions"};
std::vector<std::thread> workers_;
boost::asio::io_context ioc_;
std::optional<boost::asio::io_context::work> work_;
Subscription ledgerSubscribers_;
Subscription txSubscribers_;
Subscription txProposedSubscribers_;
Subscription manifestSubscribers_;
Subscription validationsSubscribers_;
Subscription bookChangesSubscribers_;
SubscriptionMap<ripple::AccountID> accountSubscribers_;
SubscriptionMap<ripple::AccountID> accountProposedSubscribers_;
SubscriptionMap<ripple::Book> bookSubscribers_;
std::reference_wrapper<boost::asio::io_context> ioContext_;
std::shared_ptr<data::BackendInterface const> backend_;
impl::ForwardFeed manifestFeed_;
impl::ForwardFeed validationsFeed_;
impl::LedgerFeed ledgerFeed_;
impl::BookChangesFeed bookChangesFeed_;
impl::TransactionFeed transactionFeed_;
impl::ProposedTransactionFeed proposedTransactionFeed_;
public:
/**
* @brief A factory function that creates a new subscription manager configured from the config provided.
*
* @param config The configuration to use
* @param backend The backend to use
*/
static std::shared_ptr<SubscriptionManager>
make_SubscriptionManager(util::Config const& config, std::shared_ptr<data::BackendInterface const> const& backend)
{
auto numThreads = config.valueOr<uint64_t>("subscription_workers", 1);
return std::make_shared<SubscriptionManager>(numThreads, backend);
}
/**
* @brief Creates a new instance of the subscription manager.
*
* @param numThreads The number of worker threads to manage subscriptions
* @param backend The backend to use
*/
SubscriptionManager(std::uint64_t numThreads, std::shared_ptr<data::BackendInterface const> const& backend)
: ledgerSubscribers_(ioc_, "ledger")
, txSubscribers_(ioc_, "tx")
, txProposedSubscribers_(ioc_, "tx_proposed")
, manifestSubscribers_(ioc_, "manifest")
, validationsSubscribers_(ioc_, "validations")
, bookChangesSubscribers_(ioc_, "book_changes")
, accountSubscribers_(ioc_, "account")
, accountProposedSubscribers_(ioc_, "account_proposed")
, bookSubscribers_(ioc_, "book")
SubscriptionManager(
boost::asio::io_context& ioContext,
std::shared_ptr<data::BackendInterface const> const& backend
)
: ioContext_(ioContext)
, backend_(backend)
, manifestFeed_(ioContext, "manifest")
, validationsFeed_(ioContext, "validations")
, ledgerFeed_(ioContext)
, bookChangesFeed_(ioContext)
, transactionFeed_(ioContext)
, proposedTransactionFeed_(ioContext)
{
work_.emplace(ioc_);
// We will eventually want to clamp this to be the number of strands,
// since adding more threads than we have strands won't see any
// performance benefits
LOG(log_.info()) << "Starting subscription manager with " << numThreads << " workers";
workers_.reserve(numThreads);
for (auto i = numThreads; i > 0; --i)
workers_.emplace_back([this] { ioc_.run(); });
}
/** @brief Stops the worker threads of the subscription manager. */
~SubscriptionManager()
{
work_.reset();
ioc_.stop();
for (auto& worker : workers_)
worker.join();
}
/**
* @brief Subscribe to the ledger stream.
*
* @param yield The coroutine context
* @param session The session to subscribe to the stream
* @return JSON object representing the first message to be sent to the new subscriber
* @brief Subscribe to the book changes feed.
* @param subscriber
*/
void
subBookChanges(SubscriberSharedPtr const& subscriber);
/**
 * @brief Unsubscribe from the book changes feed.
* @param subscriber
*/
void
unsubBookChanges(SubscriberSharedPtr const& subscriber);
/**
* @brief Publish the book changes feed.
* @param lgrInfo The current ledger header.
* @param transactions The transactions in the current ledger.
*/
void
pubBookChanges(ripple::LedgerHeader const& lgrInfo, std::vector<data::TransactionAndMetadata> const& transactions)
const;
/**
* @brief Subscribe to the proposed transactions feed.
* @param subscriber
*/
void
subProposedTransactions(SubscriberSharedPtr const& subscriber);
/**
 * @brief Unsubscribe from the proposed transactions feed.
* @param subscriber
*/
void
unsubProposedTransactions(SubscriberSharedPtr const& subscriber);
/**
 * @brief Subscribe to the proposed transactions feed, receiving the feed only when a particular account is affected.
* @param account The account to watch.
* @param subscriber
*/
void
subProposedAccount(ripple::AccountID const& account, SubscriberSharedPtr const& subscriber);
/**
 * @brief Unsubscribe from the proposed transactions feed for a particular account.
* @param account The account to stop watching.
* @param subscriber
*/
void
unsubProposedAccount(ripple::AccountID const& account, SubscriberSharedPtr const& subscriber);
/**
* @brief Forward the proposed transactions feed.
* @param receivedTxJson The proposed transaction json.
*/
void
forwardProposedTransaction(boost::json::object const& receivedTxJson);
/**
* @brief Subscribe to the ledger feed.
* @param subscriber
*/
boost::json::object
subLedger(boost::asio::yield_context yield, SessionPtrType session);
subLedger(boost::asio::yield_context yield, SubscriberSharedPtr const& subscriber);
/**
* @brief Publish to the ledger stream.
*
* @param lgrInfo The ledger header to serialize
* @param fees The fees to serialize
* @param ledgerRange The ledger range this message applies to
* @param txnCount The total number of transactions to serialize
 * @brief Unsubscribe from the ledger feed.
* @param subscriber
*/
void
unsubLedger(SubscriberSharedPtr const& subscriber);
/**
* @brief Publish the ledger feed.
* @param lgrInfo The ledger header.
* @param fees The fees.
* @param ledgerRange The ledger range.
* @param txnCount The transaction count.
*/
void
pubLedger(
@@ -389,232 +161,160 @@ public:
ripple::Fees const& fees,
std::string const& ledgerRange,
std::uint32_t txnCount
);
) const;
/**
* @brief Publish to the book changes stream.
*
* @param lgrInfo The ledger header to serialize
* @param transactions The transactions to serialize
* @brief Subscribe to the manifest feed.
* @param subscriber
*/
void
pubBookChanges(ripple::LedgerHeader const& lgrInfo, std::vector<data::TransactionAndMetadata> const& transactions);
subManifest(SubscriberSharedPtr const& subscriber);
/**
* @brief Unsubscribe from the ledger stream.
*
* @param session The session to unsubscribe from the stream
 * @brief Unsubscribe from the manifest feed.
* @param subscriber
*/
void
unsubLedger(SessionPtrType session);
unsubManifest(SubscriberSharedPtr const& subscriber);
/**
* @brief Subscribe to the transactions stream.
*
* @param session The session to subscribe to the stream
* @brief Forward the manifest feed.
* @param manifestJson The manifest json to forward.
*/
void
subTransactions(SessionPtrType session);
forwardManifest(boost::json::object const& manifestJson) const;
/**
* @brief Unsubscribe from the transactions stream.
*
* @param session The session to unsubscribe from the stream
* @brief Subscribe to the validation feed.
* @param subscriber
*/
void
unsubTransactions(SessionPtrType session);
subValidation(SubscriberSharedPtr const& subscriber);
/**
* @brief Publish to the book changes stream.
*
* @param blobs The transactions to serialize
* @param lgrInfo The ledger header to serialize
 * @brief Unsubscribe from the validation feed.
* @param subscriber
*/
void
pubTransaction(data::TransactionAndMetadata const& blobs, ripple::LedgerHeader const& lgrInfo);
unsubValidation(SubscriberSharedPtr const& subscriber);
/**
* @brief Subscribe to the account changes stream.
*
* @param account The account to monitor changes for
* @param session The session to subscribe to the stream
* @brief Forward the validation feed.
* @param validationJson The validation feed json to forward.
*/
void
subAccount(ripple::AccountID const& account, SessionPtrType const& session);
forwardValidation(boost::json::object const& validationJson) const;
/**
* @brief Unsubscribe from the account changes stream.
*
* @param account The account the stream is for
* @param session The session to unsubscribe from the stream
* @brief Subscribe to the transactions feed.
* @param subscriber
 * @param apiVersion The API version of the feed to subscribe to.
*/
void
unsubAccount(ripple::AccountID const& account, SessionPtrType const& session);
subTransactions(SubscriberSharedPtr const& subscriber, std::uint32_t apiVersion);
/**
* @brief Subscribe to a specific book changes stream.
*
* @param book The book to monitor changes for
* @param session The session to subscribe to the stream
 * @brief Unsubscribe from the transactions feed.
* @param subscriber
*/
void
subBook(ripple::Book const& book, SessionPtrType session);
unsubTransactions(SubscriberSharedPtr const& subscriber);
/**
* @brief Unsubscribe from the specific book changes stream.
*
* @param book The book to stop monitoring changes for
* @param session The session to unsubscribe from the stream
 * @brief Subscribe to the transactions feed, receiving the feed only when a particular account is affected.
 * @param account The account to watch.
 * @param subscriber
 * @param apiVersion The API version of the feed to subscribe to.
*/
void
unsubBook(ripple::Book const& book, SessionPtrType session);
subAccount(ripple::AccountID const& account, SubscriberSharedPtr const& subscriber, std::uint32_t apiVersion);
/**
* @brief Subscribe to the book changes stream.
*
* @param session The session to subscribe to the stream
 * @brief Unsubscribe from the transactions feed for a particular account.
 * @param account The account to stop watching.
 * @param subscriber
*/
void
subBookChanges(SessionPtrType session);
unsubAccount(ripple::AccountID const& account, SubscriberSharedPtr const& subscriber);
/**
* @brief Unsubscribe from the book changes stream.
*
* @param session The session to unsubscribe from the stream
 * @brief Subscribe to the transactions feed, receiving the feed only when a particular order book is affected.
 * @param book The book to watch.
 * @param subscriber
 * @param apiVersion The API version of the feed to subscribe to.
*/
void
unsubBookChanges(SessionPtrType session);
subBook(ripple::Book const& book, SubscriberSharedPtr const& subscriber, std::uint32_t apiVersion);
/**
* @brief Subscribe to the manifest stream.
*
* @param session The session to subscribe to the stream
 * @brief Unsubscribe from the transactions feed for a particular order book.
 * @param book The book to stop watching.
* @param subscriber
*/
void
subManifest(SessionPtrType session);
unsubBook(ripple::Book const& book, SubscriberSharedPtr const& subscriber);
/**
* @brief Unsubscribe from the manifest stream.
*
* @param session The session to unsubscribe from the stream
 * @brief Publish the transactions feed.
* @param txMeta The transaction and metadata.
* @param lgrInfo The ledger header.
*/
void
unsubManifest(SessionPtrType session);
pubTransaction(data::TransactionAndMetadata const& txMeta, ripple::LedgerHeader const& lgrInfo);
/**
* @brief Subscribe to the validation stream.
*
* @param session The session to subscribe to the stream
*/
void
subValidation(SessionPtrType session);
/**
* @brief Unsubscribe from the validation stream.
*
* @param session The session to unsubscribe from the stream
*/
void
unsubValidation(SessionPtrType session);
/**
* @brief Publish proposed transactions and proposed accounts from a JSON response.
*
* @param response The JSON response to use
*/
void
forwardProposedTransaction(boost::json::object const& response);
/**
* @brief Publish manifest updates from a JSON response.
*
* @param response The JSON response to use
*/
void
forwardManifest(boost::json::object const& response);
/**
* @brief Publish validation updates from a JSON response.
*
* @param response The JSON response to use
*/
void
forwardValidation(boost::json::object const& response);
/**
* @brief Subscribe to the proposed account stream.
*
* @param account The account to monitor
* @param session The session to subscribe to the stream
*/
void
subProposedAccount(ripple::AccountID const& account, SessionPtrType session);
/**
* @brief Unsubscribe from the proposed account stream.
*
* @param account The account the stream is for
* @param session The session to unsubscribe from the stream
*/
void
unsubProposedAccount(ripple::AccountID const& account, SessionPtrType session);
/**
* @brief Subscribe to the processed transactions stream.
*
* @param session The session to subscribe to the stream
*/
void
subProposedTransactions(SessionPtrType session);
/**
* @brief Unsubscribe from the proposed transactions stream.
*
* @param session The session to unsubscribe from the stream
*/
void
unsubProposedTransactions(SessionPtrType session);
/** @brief Clean up the session on removal. */
void
cleanup(SessionPtrType session);
/**
* @brief Generate a JSON report on the current state of the subscriptions.
*
* @return The report as a JSON object
 * @brief Get the current number of subscribers on each stream.
*/
boost::json::object
report() const
{
return {
{"ledger", ledgerSubscribers_.count()},
{"transactions", txSubscribers_.count()},
{"transactions_proposed", txProposedSubscribers_.count()},
{"manifests", manifestSubscribers_.count()},
{"validations", validationsSubscribers_.count()},
{"account", accountSubscribers_.count()},
{"accounts_proposed", accountProposedSubscribers_.count()},
{"books", bookSubscribers_.count()},
{"book_changes", bookChangesSubscribers_.count()},
{"ledger", ledgerFeed_.count()},
{"transactions", transactionFeed_.transactionSubCount()},
{"transactions_proposed", proposedTransactionFeed_.transactionSubcount()},
{"manifests", manifestFeed_.count()},
{"validations", validationsFeed_.count()},
{"account", transactionFeed_.accountSubCount()},
{"accounts_proposed", proposedTransactionFeed_.accountSubCount()},
{"books", transactionFeed_.bookSubCount()},
{"book_changes", bookChangesFeed_.count()},
};
}
private:
using CleanupFunction = std::function<void(SessionPtrType const)>;
void
subscribeHelper(SessionPtrType const& session, Subscription& subs, CleanupFunction&& func);
template <typename Key>
void
subscribeHelper(SessionPtrType const& session, Key const& k, SubscriptionMap<Key>& subs, CleanupFunction&& func);
// This is how we chose to cleanup subscriptions that have been closed.
// Each time we add a subscriber, we add the opposite lambda that unsubscribes that subscriber when cleanup is
// called with the session that closed.
std::mutex cleanupMtx_;
std::unordered_map<SessionPtrType, std::vector<CleanupFunction>> cleanupFuncs_ = {};
};
/**
 * @brief Helper class that runs the subscription manager. It owns the io_context used to publish the feeds.
*/
class SubscriptionManagerRunner {
boost::asio::io_context ioContext_;
std::shared_ptr<SubscriptionManager> subscriptionManager_;
util::Logger logger_{"Subscriptions"};
boost::asio::executor_work_guard<boost::asio::io_context::executor_type> work_ =
boost::asio::make_work_guard(ioContext_);
std::vector<std::thread> workers_;
public:
SubscriptionManagerRunner(util::Config const& config, std::shared_ptr<data::BackendInterface> const& backend)
: subscriptionManager_(std::make_shared<SubscriptionManager>(ioContext_, backend))
{
auto numThreads = config.valueOr<uint64_t>("subscription_workers", 1);
LOG(logger_.info()) << "Starting subscription manager with " << numThreads << " workers";
workers_.reserve(numThreads);
for (auto i = numThreads; i > 0; --i)
workers_.emplace_back([&] { ioContext_.run(); });
}
std::shared_ptr<SubscriptionManager>
getManager()
{
return subscriptionManager_;
}
~SubscriptionManagerRunner()
{
work_.reset();
for (auto& worker : workers_)
worker.join();
}
};
} // namespace feed

src/feed/Types.h Normal file

@@ -0,0 +1,31 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "web/interface/ConnectionBase.h"
#include <memory>
namespace feed {
using Subscriber = web::ConnectionBase;
using SubscriberPtr = Subscriber*;
using SubscriberSharedPtr = std::shared_ptr<Subscriber>;
} // namespace feed


@@ -0,0 +1,55 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/Types.h"
#include "feed/impl/SingleFeedBase.h"
#include "rpc/BookChangesHelper.h"
#include <boost/asio/io_context.hpp>
#include <boost/json/serialize.hpp>
#include <ripple/protocol/LedgerHeader.h>
#include <vector>
namespace feed::impl {
/**
* @brief Feed that publishes book changes. This feed will be published every ledger, even if there are no changes.
 * Example: {'type': 'bookChanges', 'ledger_index': 2647936, 'ledger_hash':
* '0A5010342D8AAFABDCA58A68F6F588E1C6E58C21B63ED6CA8DB2478F58F3ECD5', 'ledger_time': 756395682, 'changes': []}
*/
struct BookChangesFeed : public SingleFeedBase {
BookChangesFeed(boost::asio::io_context& ioContext) : SingleFeedBase(ioContext, "book_changes")
{
}
/**
* @brief Publishes the book changes.
* @param lgrInfo The ledger header.
* @param transactions The transactions that were included in the ledger.
*/
void
pub(ripple::LedgerHeader const& lgrInfo, std::vector<data::TransactionAndMetadata> const& transactions) const
{
SingleFeedBase::pub(boost::json::serialize(rpc::computeBookChanges(lgrInfo, transactions)));
}
};
} // namespace feed::impl


@@ -0,0 +1,44 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "feed/impl/SingleFeedBase.h"
#include <boost/json/object.hpp>
#include <boost/json/serialize.hpp>
namespace feed::impl {
/**
* @brief Feed that publishes the json object as it is.
*/
struct ForwardFeed : public SingleFeedBase {
using SingleFeedBase::SingleFeedBase;
/**
* @brief Publishes the json object.
*/
void
pub(boost::json::object const& json) const
{
SingleFeedBase::pub(boost::json::serialize(json));
}
};
} // namespace feed::impl


@@ -0,0 +1,101 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "feed/impl/LedgerFeed.h"
#include "data/BackendInterface.h"
#include "feed/Types.h"
#include "feed/impl/SingleFeedBase.h"
#include "rpc/RPCHelpers.h"
#include "util/Assert.h"
#include <boost/asio/spawn.hpp>
#include <boost/json/object.hpp>
#include <boost/json/serialize.hpp>
#include <ripple/basics/base_uint.h>
#include <ripple/protocol/Fees.h>
#include <ripple/protocol/LedgerHeader.h>
#include <cstdint>
#include <memory>
#include <optional>
#include <string>
namespace feed::impl {
boost::json::object
LedgerFeed::makeLedgerPubMessage(
ripple::LedgerHeader const& lgrInfo,
ripple::Fees const& fees,
std::string const& ledgerRange,
std::uint32_t const txnCount
)
{
boost::json::object pubMsg;
pubMsg["type"] = "ledgerClosed";
pubMsg["ledger_index"] = lgrInfo.seq;
pubMsg["ledger_hash"] = to_string(lgrInfo.hash);
pubMsg["ledger_time"] = lgrInfo.closeTime.time_since_epoch().count();
pubMsg["fee_base"] = rpc::toBoostJson(fees.base.jsonClipped());
pubMsg["reserve_base"] = rpc::toBoostJson(fees.reserve.jsonClipped());
pubMsg["reserve_inc"] = rpc::toBoostJson(fees.increment.jsonClipped());
pubMsg["validated_ledgers"] = ledgerRange;
pubMsg["txn_count"] = txnCount;
return pubMsg;
}
boost::json::object
LedgerFeed::sub(
boost::asio::yield_context yield,
std::shared_ptr<data::BackendInterface const> const& backend,
SubscriberSharedPtr const& subscriber
)
{
SingleFeedBase::sub(subscriber);
// For the ledger stream, the subscribe response must carry the last closed ledger's info
auto const ledgerRange = backend->fetchLedgerRange();
ASSERT(ledgerRange.has_value(), "Ledger range must be valid");
auto const lgrInfo = backend->fetchLedgerBySequence(ledgerRange->maxSequence, yield);
ASSERT(lgrInfo.has_value(), "Ledger must be valid");
auto const fees = backend->fetchFees(lgrInfo->seq, yield);
ASSERT(fees.has_value(), "Fees must be valid");
auto const range = std::to_string(ledgerRange->minSequence) + "-" + std::to_string(ledgerRange->maxSequence);
auto pubMsg = makeLedgerPubMessage(*lgrInfo, *fees, range, 0);
pubMsg.erase("txn_count");
pubMsg.erase("type");
return pubMsg;
}
void
LedgerFeed::pub(
ripple::LedgerHeader const& lgrInfo,
ripple::Fees const& fees,
std::string const& ledgerRange,
std::uint32_t const txnCount
) const
{
SingleFeedBase::pub(boost::json::serialize(makeLedgerPubMessage(lgrInfo, fees, ledgerRange, txnCount)));
}
} // namespace feed::impl


@@ -0,0 +1,89 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/BackendInterface.h"
#include "feed/Types.h"
#include "feed/impl/SingleFeedBase.h"
#include <boost/asio/io_context.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/json/object.hpp>
#include <boost/json/serialize.hpp>
#include <ripple/protocol/Fees.h>
#include <ripple/protocol/LedgerHeader.h>
#include <cstdint>
#include <memory>
#include <string>
namespace feed::impl {
/**
* @brief Feed that publishes the ledger info.
* Example : {'type': 'ledgerClosed', 'ledger_index': 2647935, 'ledger_hash':
* '5D022718CD782A82EE10D2147FD90B5F42F26A7E937C870B4FE3CF1086C916AE', 'ledger_time': 756395681, 'fee_base': 10,
* 'reserve_base': 10000000, 'reserve_inc': 2000000, 'validated_ledgers': '2619127-2647935', 'txn_count': 0}
*/
class LedgerFeed : public SingleFeedBase {
public:
/**
* @brief Construct a new Ledger Feed object
* @param ioContext The io_context; the actual publish is executed on its strand.
*/
LedgerFeed(boost::asio::io_context& ioContext) : SingleFeedBase(ioContext, "ledger")
{
}
/**
* @brief Subscribe to the ledger feed.
* @param yield The coroutine yield context.
* @param backend The backend.
* @param subscriber The subscriber to register.
* @return The information of the latest ledger.
*/
boost::json::object
sub(boost::asio::yield_context yield,
std::shared_ptr<data::BackendInterface const> const& backend,
SubscriberSharedPtr const& subscriber);
/**
* @brief Publishes the ledger feed.
* @param lgrInfo The ledger header.
* @param fees The fees.
* @param ledgerRange The ledger range.
* @param txnCount The transaction count.
*/
void
pub(ripple::LedgerHeader const& lgrInfo,
ripple::Fees const& fees,
std::string const& ledgerRange,
std::uint32_t txnCount) const;
private:
static boost::json::object
makeLedgerPubMessage(
ripple::LedgerHeader const& lgrInfo,
ripple::Fees const& fees,
std::string const& ledgerRange,
std::uint32_t txnCount
);
};
} // namespace feed::impl


@@ -0,0 +1,146 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "feed/impl/ProposedTransactionFeed.h"
#include "feed/Types.h"
#include "rpc/RPCHelpers.h"
#include "util/log/Logger.h"
#include <boost/asio/post.hpp>
#include <boost/json/object.hpp>
#include <boost/json/serialize.hpp>
#include <ripple/protocol/AccountID.h>
#include <cstdint>
#include <memory>
#include <string>
#include <unordered_set>
#include <utility>
namespace feed::impl {
void
ProposedTransactionFeed::sub(SubscriberSharedPtr const& subscriber)
{
auto const weakPtr = std::weak_ptr(subscriber);
auto const added = signal_.connectTrackableSlot(subscriber, [weakPtr](std::shared_ptr<std::string> const& msg) {
if (auto connectionPtr = weakPtr.lock()) {
connectionPtr->send(msg);
}
});
if (added) {
LOG(logger_.debug()) << subscriber->tag() << "Subscribed tx_proposed";
++subAllCount_.get();
subscriber->onDisconnect.connect([this](SubscriberPtr connection) { unsubInternal(connection); });
}
}
void
ProposedTransactionFeed::sub(ripple::AccountID const& account, SubscriberSharedPtr const& subscriber)
{
auto const weakPtr = std::weak_ptr(subscriber);
auto const added = accountSignal_.connectTrackableSlot(
subscriber,
account,
[this, weakPtr](std::shared_ptr<std::string> const& msg) {
if (auto connectionPtr = weakPtr.lock()) {
// Skip if this connection was already notified for this publish
if (notified_.contains(connectionPtr.get()))
return;
notified_.insert(connectionPtr.get());
connectionPtr->send(msg);
}
}
);
if (added) {
LOG(logger_.debug()) << subscriber->tag() << "Subscribed accounts_proposed " << account;
++subAccountCount_.get();
subscriber->onDisconnect.connect([this, account](SubscriberPtr connection) {
unsubInternal(account, connection);
});
}
}
void
ProposedTransactionFeed::unsub(SubscriberSharedPtr const& subscriber)
{
unsubInternal(subscriber.get());
}
void
ProposedTransactionFeed::unsub(ripple::AccountID const& account, SubscriberSharedPtr const& subscriber)
{
unsubInternal(account, subscriber.get());
}
void
ProposedTransactionFeed::pub(boost::json::object const& receivedTxJson)
{
auto pubMsg = std::make_shared<std::string>(boost::json::serialize(receivedTxJson));
auto const transaction = receivedTxJson.at("transaction").as_object();
auto const accounts = rpc::getAccountsFromTransaction(transaction);
auto affectedAccounts = std::unordered_set<ripple::AccountID>(accounts.cbegin(), accounts.cend());
boost::asio::post(strand_, [this, pubMsg = std::move(pubMsg), affectedAccounts = std::move(affectedAccounts)]() {
signal_.emit(pubMsg);
// Prevent the same connection from receiving the same message twice if it is subscribed to multiple
// accounts. However, if the same connection subscribes to both the stream and an account, it will
// still receive the message twice. Clearing notified_ before signal_.emit would avoid that, but we
// keep the current behavior for now, since rippled behaves the same way.
notified_.clear();
for (auto const& account : affectedAccounts)
accountSignal_.emit(account, pubMsg);
});
}
std::uint64_t
ProposedTransactionFeed::transactionSubcount() const
{
return subAllCount_.get().value();
}
std::uint64_t
ProposedTransactionFeed::accountSubCount() const
{
return subAccountCount_.get().value();
}
void
ProposedTransactionFeed::unsubInternal(SubscriberPtr subscriber)
{
if (signal_.disconnect(subscriber)) {
LOG(logger_.debug()) << subscriber->tag() << "Unsubscribed tx_proposed";
--subAllCount_.get();
}
}
void
ProposedTransactionFeed::unsubInternal(ripple::AccountID const& account, SubscriberPtr subscriber)
{
if (accountSignal_.disconnect(subscriber, account)) {
LOG(logger_.debug()) << subscriber->tag() << "Unsubscribed accounts_proposed " << account;
--subAccountCount_.get();
}
}
} // namespace feed::impl

Some files were not shown because too many files have changed in this diff.