Compare commits

...

72 Commits

Author SHA1 Message Date
cyan317
aace437245 Add the missing input check for "diff" field of "ledger" (#643)
Fixes #644
2023-05-17 19:02:55 +01:00
cyan317
2e3e8cd779 Fix mismatch error when owner is in wrong format (#641)
Fixes #642
2023-05-16 14:26:08 +01:00
Alex Kremer
6f93e1003e Address cppcheck issues (#640)
Fixes #639
2023-05-15 17:57:21 +01:00
cyan317
14978ca91d add nft_page (#637)
Fixes #638
2023-05-15 13:00:35 +01:00
Alex Kremer
9adcaeb21b Rename functions to camelCase (#636) 2023-05-15 11:38:48 +01:00
Alex Kremer
d94fe4e7ab Fix nft_history error to match rippled (#635)
Fixes #633
2023-05-15 11:28:24 +01:00
ledhed2222
c8029255ba fix nft uri (#634)
* fix nft uri

* remove some comments too
2023-05-14 04:15:45 -04:00
cyan317
d548d44a61 Fix mismatches (#630)
Fix #632
2023-05-11 17:48:27 +01:00
Alex Kremer
4cae248b5c Fix race condition and ub (#631) 2023-05-10 18:35:04 +01:00
Alex Kremer
b3db4cadab Improve performance of Counters and add unit-test (#629)
Fixes #478
2023-05-09 14:24:12 +01:00
cyan317
02c0a1f11d Add handlers comments (#627)
Fixes #628
2023-05-09 14:02:01 +01:00
cyan317
6bc2ec745f fix bugs (#625)
Fixes #626
2023-05-05 12:55:14 +01:00
Alex Kremer
d7d5d61747 Integrate nextgen RPC into clio (#572)
Fixes #592
2023-05-04 16:15:36 +01:00
cyan317
f1b3a6b511 Use self-hosted mac for CI (#619) 2023-05-04 15:21:46 +01:00
cyan317
f52f36ecbc Add missing expectation in unit test (#622)
Fixes #623
2023-05-03 11:06:37 +01:00
Alex Kremer
860d10cddc Fix issue with retry policy that lead to crashes (#620)
Fixes #621
2023-05-03 11:04:30 +01:00
cyan317
36ac3215e2 Ledger (#604)
Fixes #618
2023-05-02 14:07:26 +01:00
cyan317
7776a5ffb6 Init (#614)
Fixes #617
2023-05-02 13:24:23 +01:00
cyan317
4b2d53fc2f account_objects of new RPC framework (#599)
Fixes #602
2023-04-25 09:14:20 +01:00
cyan317
9a19519550 Account_nfts (#598)
Fixes #601
2023-04-24 14:17:28 +01:00
cyan317
88e25687dc Ledgerdata (#596)
Fixes #600
2023-04-24 09:00:10 +01:00
cyan317
93e2ac529d Unsubscribe (#595)
Fixes #597
2023-04-20 08:54:20 +01:00
cyan317
0bc84fefbf Subscribe handler (#591)
Fixes #593
2023-04-13 14:14:11 +01:00
Alex Kremer
36bb20806e Implement server_info nextgen RPC (#590)
Fixes #587
2023-04-13 11:51:54 +01:00
ledhed2222
dfe974d5ab Add default ordering to issuer_nf_tokens_v2 (#588)
Fixes #589
2023-04-07 23:15:39 +01:00
Alex Kremer
bf65cfabae Fix backend error handling (#586)
Fixes #585
2023-04-06 14:21:08 +01:00
Alex Kremer
f42e024f38 Update git blame ignore file 2023-04-06 11:32:16 +01:00
Alex Kremer
d816ef54ab Reformat codebase with 120 char limit (#583) 2023-04-06 11:24:36 +01:00
Alex Kremer
e60fd3e58e Implement nft_history nextgen handler (#581)
Fixes #580
2023-04-05 14:06:26 +01:00
cyan317
654168efec Create ngContext (#579)
Fixes #582
2023-04-05 12:46:59 +01:00
Alex Kremer
5d06a79f13 Implement nextgen random handler and tests (#576)
Fixes #575
2023-04-04 15:16:02 +01:00
cyan317
3320125d8f Fix compile error on clang14.0.3 (#577)
Fixes #578
2023-04-04 12:55:43 +01:00
cyan317
a1f93b09f7 account_info implementation in new RPC framework (#573)
Fixes #574
2023-04-03 17:03:48 +01:00
Alex Kremer
232acaeff2 Implement nextgen nft_sell_offers handler (#571)
Fixes #570
2023-03-30 12:46:54 +01:00
Alex Kremer
d86104577b Implement new experimental cassandra backend (#537) 2023-03-29 19:38:38 +01:00
cyan317
e9937fab76 account_offer in new RPC framework (#567)
Fixes #569
2023-03-29 16:40:51 +01:00
Alex Kremer
75c2011845 Implement nextgen handler for nft_buy_offers (#568)
Fixes #564
2023-03-29 16:33:48 +01:00
cyan317
5604b37c02 account_tx of new RPC framework (#562)
Fixes #566
2023-03-28 13:21:51 +01:00
cyan317
f604856eab Use JSS string (#563)
#565
2023-03-28 09:07:10 +01:00
cyan317
b69e4350a1 noripple_check implementation of new RPC system (#554)
Fixes #561
2023-03-27 14:17:51 +01:00
Alex Kremer
95da706fed Implement nextgen nft_info handler (#558)
Fixing #557
2023-03-27 11:50:13 +01:00
Alex Kremer
1bb67217e5 Add codecov.io steps (#546)
Fixing #
2023-03-27 10:58:30 +01:00
cyan317
21f1b70daf Fix spawn (#556)
Fixes #559
2023-03-24 14:10:20 +00:00
cyan317
430812abf5 Transaction entry with new RPC framework (#553)
Fixes #555
2023-03-24 12:57:54 +00:00
Alex Kremer
8d5e28ef30 Implement nextgen account_lines handler (#551)
Fixing #550
2023-03-24 12:00:00 +00:00
cyan317
4180d81819 Fix subscription forward issue (#544)
Fixes #552
2023-03-23 13:54:55 +00:00
Alex Kremer
21eeb9ae02 Implement ledger_range rpc handler (#548)
Fixes #549
2023-03-21 14:14:43 +00:00
cyan317
edd2e9dd4b Implement book_offers in new RPC framework (#542)
Fixes #547
2023-03-21 09:12:25 +00:00
ledhed2222
b25ac5d707 Write NFT URIs to nf_token_uris table and pull from it for nft_info API (#313)
Fixes #308
2023-03-20 17:43:31 +00:00
cyan317
9d10cff873 Custom error validator (#540)
Fixes #541
2023-03-15 17:19:57 +00:00
cyan317
bc438ce58a Ledger entry in new RPC framework (#534)
Fixes #539
2023-03-15 13:01:40 +00:00
cyan317
b99a68e55f Gateway balance (#536)
Fixes #538
2023-03-14 14:21:28 +00:00
cyan317
7a819f4955 Gateway balance fix (#535)
Fixes #464
2023-03-08 15:08:20 +00:00
cyan317
6b78b1ad8b Fix ledger_entry bug (#532)
Fixes #533
2023-03-07 09:32:39 +00:00
cyan317
488e28e874 Add IfType requirement to RPC framework (#530)
Fixes #531
2023-03-03 12:25:53 +00:00
cyan317
d26dd5a8cf Fix (#528)
Fixes #529
2023-02-28 15:29:12 +00:00
cyan317
67f0fa26ae Tx handler in new RPC framework (#526)
Fixes #527
2023-02-28 09:35:13 +00:00
cyan317
a3211f4458 Handler account_currencies (#524)
Fixes #525
2023-02-27 09:17:51 +00:00
cyan317
7d4e5ff0bd Account channel (#519)
Fixes #523
2023-02-24 09:34:29 +00:00
cyan317
f6c2008540 Provide coroutine process interface for handler (#521)
Fixes #522
2023-02-23 16:35:01 +00:00
Elliot Lee
d74ca4940b Update CONTRIBUTING.md (#520) 2023-02-22 23:21:39 +00:00
cyan317
739807a7d7 Fix marker issue (#518)
* Fixes #515
2023-02-21 13:48:52 +00:00
Alex Kremer
9fa26be13a Change few loglines severity and channel (#517)
Fix #516
2023-02-20 11:14:05 +00:00
Alex Kremer
f0555af284 Add libfmt (#514)
Fix #513
2023-02-16 15:15:12 +00:00
cyan317
b7fa9b09fe Add common validator (#510)
Fixes #512
2023-02-15 13:54:53 +00:00
Michael Legleux
08f7a7a476 Exit 1 on failed experimental builds to fail build step (#507) 2023-02-14 11:48:13 -08:00
cyan317
703196b013 Fix mac build failure (#509)
Fixes #511
2023-02-14 16:55:42 +00:00
Elliot Lee
284986e7b7 Update CONTRIBUTING.md (#504) 2023-02-10 13:57:47 +00:00
cyan317
09ac1b866e Add ping handler (#503)
Fix #506
2023-02-08 16:20:24 +00:00
cyan317
4112cc42df Fix backend test fail (#502)
Fix #505
2023-02-08 10:21:19 +00:00
Alex Kremer
c07e04ce84 Document RPC framework (#501)
Fixes #500
2023-02-03 12:07:51 +00:00
cyan317
19455b4d6c Add Unittests for subscription module (#488)
Fix #492
2023-02-03 09:07:02 +00:00
221 changed files with 35698 additions and 8845 deletions


@@ -34,7 +34,7 @@ BreakBeforeBinaryOperators: false
 BreakBeforeBraces: Custom
 BreakBeforeTernaryOperators: true
 BreakConstructorInitializersBeforeComma: true
-ColumnLimit: 80
+ColumnLimit: 120
 CommentPragmas: '^ IWYU pragma:'
 ConstructorInitializerAllOnOneLineOrOnePerLine: true
 ConstructorInitializerIndentWidth: 4


@@ -7,3 +7,4 @@
 # clang-format
 e41150248a97e4bdc1cf21b54650c4bb7c63928e
 2e542e7b0d94451a933c88778461cc8d3d7e6417
+d816ef54abd8e8e979b9c795bdb657a8d18f5e95


@@ -4,7 +4,7 @@ exec 1>&2
 # paths to check and re-format
 sources="src unittests"
-formatter="clang-format -i"
+formatter="clang-format-11 -i"
 first=$(git diff $sources)
 find $sources -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -print0 | xargs -0 $formatter


@@ -17,7 +17,7 @@ jobs:
 build_clio:
 name: Build Clio
-runs-on: [self-hosted, Linux]
+runs-on: [self-hosted, heavy]
 needs: lint
 strategy:
 fail-fast: false
@@ -32,7 +32,6 @@ jobs:
 container:
 image: ${{ matrix.type.image }}
-# options: --user 1001
 steps:
 - uses: actions/checkout@v3
 with:
@@ -73,7 +72,7 @@ jobs:
 name: Build on Mac/Clang14 and run tests
 needs: lint
 continue-on-error: false
-runs-on: macos-12
+runs-on: [self-hosted, macOS]
 steps:
 - uses: actions/checkout@v3
@@ -84,33 +83,42 @@
 id: boost
 uses: actions/cache@v3
 with:
-path: boost
+path: boost_1_77_0
 key: ${{ runner.os }}-boost
-- name: Build boost
-if: steps.boost.outputs.cache-hit != 'true'
+- name: Build Boost
+if: ${{ steps.boost.outputs.cache-hit != 'true' }}
 run: |
-curl -s -OJL "https://boostorg.jfrog.io/artifactory/main/release/1.77.0/source/boost_1_77_0.tar.gz"
+rm -rf boost_1_77_0.tar.gz boost_1_77_0 # cleanup if needed first
+curl -s -fOJL "https://boostorg.jfrog.io/artifactory/main/release/1.77.0/source/boost_1_77_0.tar.gz"
 tar zxf boost_1_77_0.tar.gz
-mv boost_1_77_0 boost
-cd boost
+cd boost_1_77_0
 ./bootstrap.sh
-./b2 cxxflags="-std=c++14"
-- name: install deps
+./b2 define=BOOST_ASIO_HAS_STD_INVOKE_RESULT cxxflags="-std=c++20"
+- name: Install dependencies
 run: |
-brew install pkg-config protobuf openssl ninja cassandra-cpp-driver bison
+brew install llvm@14 pkg-config protobuf openssl ninja cassandra-cpp-driver bison cmake
+- name: Setup environment for llvm-14
+run: |
+export PATH="/usr/local/opt/llvm@14/bin:$PATH"
+export LDFLAGS="-L/usr/local/opt/llvm@14/lib -L/usr/local/opt/llvm@14/lib/c++ -Wl,-rpath,/usr/local/opt/llvm@14/lib/c++"
+export CPPFLAGS="-I/usr/local/opt/llvm@14/include"
 - name: Build clio
 run: |
-export BOOST_ROOT=$(pwd)/boost
+export BOOST_ROOT=$(pwd)/boost_1_77_0
 cd clio
-cmake -B build
-if ! cmake --build build -j$(nproc); then
+cmake -B build -DCMAKE_C_COMPILER='/usr/local/opt/llvm@14/bin/clang' -DCMAKE_CXX_COMPILER='/usr/local/opt/llvm@14/bin/clang++'
+if ! cmake --build build -j; then
 echo '# 🔥🔥 MacOS AppleClang build failed!💥' >> $GITHUB_STEP_SUMMARY
 exit 1
 fi
 - name: Run Test
 run: |
 cd clio/build
-./clio_tests --gtest_filter="-Backend*"
+./clio_tests --gtest_filter="-BackendTest*:BackendCassandraBaseTest*:BackendCassandraTest*"
 test_clio:
 name: Test Clio
@@ -159,22 +167,20 @@
 cd boost
 ./bootstrap.sh
 ./b2
-- name: install deps
-run: |
-sudo apt-get -y install git pkg-config protobuf-compiler libprotobuf-dev libssl-dev wget build-essential doxygen bison flex autoconf clang-format gcovr
 - name: Build clio
 run: |
 export BOOST_ROOT=$(pwd)/boost
 cd clio
-cmake -B build -DCODE_COVERAGE=on -DTEST_PARAMETER='--gtest_filter="-Backend*"'
+cmake -B build -DCODE_COVERAGE=on -DTEST_PARAMETER='--gtest_filter="-BackendTest*:BackendCassandraBaseTest*:BackendCassandraTest*"'
 if ! cmake --build build -j$(nproc); then
 echo '# 🔥Ubuntu build🔥 failed!💥' >> $GITHUB_STEP_SUMMARY
 exit 1
 fi
 cd build
 make clio_tests-ccov
 - name: Code Coverage Summary Report
 uses: irongut/CodeCoverageSummary@v1.2.0
 with:
@@ -189,7 +195,18 @@
 echo ${{ github.event.number }} > ./UnitTestCoverage/NR
 cp clio/build/clio_tests-gcc-cov/report.html ./UnitTestCoverage/report.html
 cp code-coverage-results.md ./UnitTestCoverage/out.md
-- uses: actions/upload-artifact@v2
+cat code-coverage-results.md > $GITHUB_STEP_SUMMARY
+- name: Upload coverage reports to Codecov
+uses: codecov/codecov-action@v3
+with:
+files: clio/build/clio_tests-gcc-cov/out.xml
+- uses: actions/upload-artifact@v3
 with:
 name: UnitTestCoverage
 path: UnitTestCoverage/
+- uses: actions/upload-artifact@v3
+with:
+name: code_coverage_report
+path: clio/build/clio_tests-gcc-cov/out.xml


@@ -40,7 +40,7 @@ function(add_converage module)
 COMMAND
 ${LLVM_COV_PATH}/llvm-cov report $<TARGET_FILE:${module}>
 -instr-profile=${module}.profdata
--ignore-filename-regex=".*_makefiles|.*unittests"
+-ignore-filename-regex=".*_makefiles|.*unittests|.*_deps"
 -show-region-summary=false
 DEPENDS ${module}-ccov-preprocessing)
@@ -51,7 +51,7 @@ function(add_converage module)
 ${LLVM_COV_PATH}/llvm-cov show $<TARGET_FILE:${module}>
 -instr-profile=${module}.profdata -show-line-counts-or-regions
 -output-dir=${module}-llvm-cov -format="html"
--ignore-filename-regex=".*_makefiles|.*unittests" > /dev/null 2>&1
+-ignore-filename-regex=".*_makefiles|.*unittests|.*_deps" > /dev/null 2>&1
 DEPENDS ${module}-ccov-preprocessing)
 add_custom_command(


@@ -144,8 +144,10 @@ if(NOT cassandra)
 else()
 message("Found system installed cassandra cpp driver")
 message(${cassandra})
 find_path(cassandra_includes NAMES cassandra.h REQUIRED)
 message(${cassandra_includes})
+get_filename_component(CASSANDRA_HEADER ${cassandra_includes}/cassandra.h REALPATH)
+get_filename_component(CASSANDRA_HEADER_DIR ${CASSANDRA_HEADER} DIRECTORY)
 target_link_libraries (clio PUBLIC ${cassandra})
-target_include_directories(clio INTERFACE ${cassandra_includes})
+target_include_directories(clio PUBLIC ${CASSANDRA_HEADER_DIR})
 endif()

CMake/deps/libfmt.cmake (new file, 14 lines)

@@ -0,0 +1,14 @@
+FetchContent_Declare(
+libfmt
+URL https://github.com/fmtlib/fmt/releases/download/9.1.0/fmt-9.1.0.zip
+)
+FetchContent_GetProperties(libfmt)
+if(NOT libfmt_POPULATED)
+FetchContent_Populate(libfmt)
+add_subdirectory(${libfmt_SOURCE_DIR} ${libfmt_BINARY_DIR} EXCLUDE_FROM_ALL)
+endif()
+target_link_libraries(clio PUBLIC fmt)


@@ -19,6 +19,9 @@ if(PACKAGING)
 add_definitions(-DPKG=1)
 endif()
+#c++20 removed std::result_of but boost 1.75 is still using it.
+add_definitions(-DBOOST_ASIO_HAS_STD_INVOKE_RESULT=1)
 add_library(clio)
 target_compile_features(clio PUBLIC cxx_std_20)
 target_include_directories(clio PUBLIC src)
@@ -28,6 +31,7 @@ include(ExternalProject)
 include(CMake/settings.cmake)
 include(CMake/ClioVersion.cmake)
 include(CMake/deps/rippled.cmake)
+include(CMake/deps/libfmt.cmake)
 include(CMake/deps/Boost.cmake)
 include(CMake/deps/cassandra.cmake)
 include(CMake/deps/SourceLocation.cmake)
@@ -39,6 +43,15 @@ target_sources(clio PRIVATE
 src/backend/BackendInterface.cpp
 src/backend/CassandraBackend.cpp
 src/backend/SimpleCache.cpp
+## NextGen Backend
+src/backend/cassandra/impl/Future.cpp
+src/backend/cassandra/impl/Cluster.cpp
+src/backend/cassandra/impl/Batch.cpp
+src/backend/cassandra/impl/Result.cpp
+src/backend/cassandra/impl/Tuple.cpp
+src/backend/cassandra/impl/SslContext.cpp
+src/backend/cassandra/Handle.cpp
+src/backend/cassandra/SettingsProvider.cpp
 ## ETL
 src/etl/ETLSource.cpp
 src/etl/ProbingETLSource.cpp
@@ -48,48 +61,40 @@ target_sources(clio PRIVATE
src/subscriptions/SubscriptionManager.cpp
## RPC
src/rpc/Errors.cpp
src/rpc/RPC.cpp
src/rpc/Factories.cpp
src/rpc/RPCHelpers.cpp
src/rpc/Counters.cpp
src/rpc/WorkQueue.cpp
## NextGen RPC
src/rpc/common/Specs.cpp
src/rpc/common/Validators.cpp
## RPC Methods
# Account
# RPC impl
src/rpc/common/impl/HandlerProvider.cpp
## RPC handler
src/rpc/handlers/AccountChannels.cpp
src/rpc/handlers/AccountCurrencies.cpp
src/rpc/handlers/AccountInfo.cpp
src/rpc/handlers/AccountLines.cpp
src/rpc/handlers/AccountOffers.cpp
src/rpc/handlers/AccountNFTs.cpp
src/rpc/handlers/AccountObjects.cpp
src/rpc/handlers/AccountOffers.cpp
src/rpc/handlers/AccountTx.cpp
src/rpc/handlers/BookChanges.cpp
src/rpc/handlers/BookOffers.cpp
src/rpc/handlers/GatewayBalances.cpp
src/rpc/handlers/NoRippleCheck.cpp
# NFT
src/rpc/handlers/NFTHistory.cpp
src/rpc/handlers/NFTInfo.cpp
src/rpc/handlers/NFTOffers.cpp
# Ledger
src/rpc/handlers/Ledger.cpp
src/rpc/handlers/LedgerData.cpp
src/rpc/handlers/LedgerEntry.cpp
src/rpc/handlers/LedgerRange.cpp
# Transaction
src/rpc/handlers/Tx.cpp
src/rpc/handlers/TransactionEntry.cpp
src/rpc/handlers/AccountTx.cpp
# Dex
src/rpc/handlers/BookChanges.cpp
src/rpc/handlers/BookOffers.cpp
# Payment Channel
src/rpc/handlers/ChannelAuthorize.cpp
src/rpc/handlers/ChannelVerify.cpp
# Subscribe
src/rpc/handlers/Subscribe.cpp
# Server
src/rpc/handlers/ServerInfo.cpp
# Utilities
src/rpc/handlers/NFTBuyOffers.cpp
src/rpc/handlers/NFTHistory.cpp
src/rpc/handlers/NFTInfo.cpp
src/rpc/handlers/NFTOffersCommon.cpp
src/rpc/handlers/NFTSellOffers.cpp
src/rpc/handlers/NoRippleCheck.cpp
src/rpc/handlers/Random.cpp
src/rpc/handlers/TransactionEntry.cpp
src/rpc/handlers/Tx.cpp
## Util
src/config/Config.cpp
src/log/Logger.cpp
src/util/Taggable.cpp)
@@ -106,12 +111,57 @@ if(BUILD_TESTS)
unittests/Config.cpp
unittests/ProfilerTest.cpp
unittests/DOSGuard.cpp
unittests/SubscriptionTest.cpp
unittests/SubscriptionManagerTest.cpp
unittests/util/TestObject.cpp
# RPC
unittests/rpc/ErrorTests.cpp
unittests/rpc/BaseTests.cpp
unittests/rpc/RPCHelpersTest.cpp
unittests/rpc/CountersTest.cpp
unittests/rpc/AdminVerificationTest.cpp
## RPC handlers
unittests/rpc/handlers/DefaultProcessorTests.cpp
unittests/rpc/handlers/TestHandlerTests.cpp
unittests/rpc/handlers/DefaultProcessorTests.cpp)
unittests/rpc/handlers/AccountCurrenciesTest.cpp
unittests/rpc/handlers/AccountLinesTest.cpp
unittests/rpc/handlers/AccountTxTest.cpp
unittests/rpc/handlers/AccountOffersTest.cpp
unittests/rpc/handlers/AccountInfoTest.cpp
unittests/rpc/handlers/AccountChannelsTest.cpp
unittests/rpc/handlers/AccountNFTsTest.cpp
unittests/rpc/handlers/BookOffersTest.cpp
unittests/rpc/handlers/GatewayBalancesTest.cpp
unittests/rpc/handlers/TxTest.cpp
unittests/rpc/handlers/TransactionEntryTest.cpp
unittests/rpc/handlers/LedgerEntryTest.cpp
unittests/rpc/handlers/LedgerRangeTest.cpp
unittests/rpc/handlers/NoRippleCheckTest.cpp
unittests/rpc/handlers/ServerInfoTest.cpp
unittests/rpc/handlers/PingTest.cpp
unittests/rpc/handlers/RandomTest.cpp
unittests/rpc/handlers/NFTInfoTest.cpp
unittests/rpc/handlers/NFTBuyOffersTest.cpp
unittests/rpc/handlers/NFTSellOffersTest.cpp
unittests/rpc/handlers/NFTHistoryTest.cpp
unittests/rpc/handlers/SubscribeTest.cpp
unittests/rpc/handlers/UnsubscribeTest.cpp
unittests/rpc/handlers/LedgerDataTest.cpp
unittests/rpc/handlers/AccountObjectsTest.cpp
unittests/rpc/handlers/BookChangesTest.cpp
unittests/rpc/handlers/LedgerTest.cpp
# Backend
unittests/backend/cassandra/BaseTests.cpp
unittests/backend/cassandra/BackendTests.cpp
unittests/backend/cassandra/RetryPolicyTests.cpp
unittests/backend/cassandra/SettingsProviderTests.cpp
unittests/backend/cassandra/ExecutionStrategyTests.cpp
unittests/backend/cassandra/AsyncExecutorTests.cpp)
include(CMake/deps/gtest.cmake)
# test for dwarf5 bug on ci
target_compile_options(clio PUBLIC -gdwarf-4)
# if CODE_COVERAGE enable, add clio_test-ccov
if(CODE_COVERAGE)
include(CMake/coverage.cmake)


@@ -3,18 +3,18 @@ Thank you for your interest in contributing to the `clio` project 🙏
 To contribute, please:
 1. Fork the repository under your own user.
-2. Create a new branch on which to write your changes.
+2. Create a new branch on which to commit/push your changes.
 3. Write and test your code.
 4. Ensure that your code compiles with the provided build engine and update the provided build engine as part of your PR where needed and where appropriate.
 5. Where applicable, write test cases for your code and include those in `unittests`.
 6. Ensure your code passes automated checks (e.g. clang-format)
-7. Squash your commits (i.e. rebase) into as few commits as is reasonable to describe your changes at a high level (typically a single commit for a small change.). See below for more details.
+7. Squash your commits (i.e. rebase) into as few commits as is reasonable to describe your changes at a high level (typically a single commit for a small change). See below for more details.
 8. Open a PR to the main repository onto the _develop_ branch, and follow the provided template.
-> **Note:** Please make sure you read the [Style guide](#style-guide).
+> **Note:** Please read the [Style guide](#style-guide).
 ## Install git hooks
-Please make sure to run the following command in order to use git hooks that are helpful for `clio` development.
+Please run the following command in order to use git hooks that are helpful for `clio` development.
 ``` bash
 git config --local core.hooksPath .githooks
@@ -22,10 +22,10 @@ git config --local core.hooksPath .githooks
 ## Git commands
 This sections offers a detailed look at the git commands you will need to use to get your PR submitted.
-Please note that there are more than one way to do this and these commands are only provided for your convenience.
+Please note that there are more than one way to do this and these commands are provided for your convenience.
 At this point it's assumed that you have already finished working on your feature/bug.
-> **Important:** Before you issue any of the commands below, please hit the `Sync fork` button and make sure your fork's `develop` branch is up to date with the main `clio` repository.
+> **Important:** Before you issue any of the commands below, please hit the `Sync fork` button and make sure your fork's `develop` branch is up-to-date with the main `clio` repository.
 ``` bash
 # Create a backup of your branch
@@ -37,16 +37,16 @@ git pull origin develop
 git checkout <your feature branch>
 git rebase -i develop
 ```
-For each commit in the list other than the first one please select `s` to squash.
-After this is done you will have the opportunity to write a message for the squashed commit.
+For each commit in the list other than the first one, enter `s` to squash.
+After this is done, you will have the opportunity to write a message for the squashed commit.
-> **Hint:** Please use **imperative mood** commit message capitalizing the first word of the subject.
+> **Hint:** Please use **imperative mood** in the commit message, and capitalize the first word.
 ``` bash
 # You should now have a single commit on top of a commit in `develop`
 git log
 ```
-> **Todo:** In case there are merge conflicts, please resolve them now
+> **Note:** If there are merge conflicts, please resolve them now.
 ``` bash
 # Use the same commit message as you did above
@@ -54,16 +54,16 @@ git commit -m 'Your message'
 git rebase --continue
 ```
-> **Important:** If you have no GPG keys setup please follow [this tutorial](https://docs.github.com/en/authentication/managing-commit-signature-verification/adding-a-gpg-key-to-your-github-account)
+> **Important:** If you have no GPG keys set up, please follow [this tutorial](https://docs.github.com/en/authentication/managing-commit-signature-verification/adding-a-gpg-key-to-your-github-account)
 ``` bash
-# Sign the commit with your GPG key and finally push your changes to the repo
+# Sign the commit with your GPG key, and push your changes
 git commit --amend -S
 git push --force
 ```
 ## Fixing issues found during code review
-While your code is in review it's possible that some changes will be requested by the reviewer.
+While your code is in review, it's possible that some changes will be requested by reviewer(s).
 This section describes the process of adding your fixes.
 We assume that you already made the required changes on your feature branch.
@@ -72,25 +72,26 @@ We assume that you already made the required changes on your feature branch.
 # Add the changed code
 git add <paths to add>
-# Add a folded commit message (so you can squash them later)
+# Add a [FOLD] commit message (so you remember to squash it later)
 # while also signing it with your GPG key
 git commit -S -m "[FOLD] Your commit message"
 # And finally push your changes
 git push
 ```
-## After code review
-Last but not least, when your PR is approved you still have to `Squash and merge` your code.
-Luckily there is a button for that towards the bottom of the PR's page on github.
-> **Important:** Please leave the automatically generated link to PR in the subject line **and** in the description field please add `"Fixes #ISSUE_ID"` (replacing `ISSUE_ID` with yours).
+## After code review
+When your PR is approved and ready to merge, use `Squash and merge`.
+The button for that is near the bottom of the PR's page on GitHub.
+> **Important:** Please leave the automatically-generated mention/link to the PR in the subject line **and** in the description field add `"Fix #ISSUE_ID"` (replacing `ISSUE_ID` with yours) if the PR fixes an issue.
 > **Note:** See [issues](https://github.com/XRPLF/clio/issues) to find the `ISSUE_ID` for the feature/bug you were working on.
 # Style guide
-This is a non-exhaustive list of recommended style guidelines. These are not always strictly enforced and serve as a way to keep the codebase coherent rather than a set of _thou shalt not_ commandments.
+This is a non-exhaustive list of recommended style guidelines. These are not always strictly enforced and serve as a way to keep the codebase coherent.
 ## Formatting
-All code must conform to `clang-format` version 10, unless the result would be unreasonably difficult to read or maintain.
+Code must conform to `clang-format` version 10, unless the result would be unreasonably difficult to read or maintain.
 To change your code to conform use `clang-format -i <your changed files>`.
 ## Avoid
@@ -114,7 +115,7 @@ To change your code to conform use `clang-format -i <your changed files>`.
 Maintainers are ecosystem participants with elevated access to the repository. They are able to push new code, make decisions on when a release should be made, etc.
 ## Code Review
-PRs must be reviewed by at least one of the maintainers.
+A PR must be reviewed and approved by at least one of the maintainers before it can be merged.
 ## Adding and Removing
 New maintainers can be proposed by two existing maintainers, subject to a vote by a quorum of the existing maintainers. A minimum of 50% support and a 50% participation is required. In the event of a tie vote, the addition of the new maintainer will be rejected.
@@ -123,12 +124,11 @@ Existing maintainers can resign, or be subject to a vote for removal at the behe
 ## Existing Maintainers
-* [cjcobb23](https://github.com/cjcobb23) (Ripple)
-* [legleux](https://github.com/legleux) (Ripple)
-* [undertome](https://github.com/undertome) (Ripple)
+* [cindyyan317](https://github.com/cindyyan317) (Ripple)
 * [godexsoft](https://github.com/godexsoft) (Ripple)
 * [officialfrancismendoza](https://github.com/officialfrancismendoza) (Ripple)
+* [legleux](https://github.com/legleux) (Ripple)
 ## Honorable ex-Maintainers
+* [cjcobb23](https://github.com/cjcobb23) (ex-Ripple)
 * [natenichols](https://github.com/natenichols) (ex-Ripple)


@@ -21,6 +21,7 @@
 #include <backend/BackendInterface.h>
 #include <backend/CassandraBackend.h>
+#include <backend/CassandraBackendNew.h>
 #include <config/Config.h>
 #include <log/Logger.h>
@@ -43,6 +44,13 @@ make_Backend(boost::asio::io_context& ioc, clio::Config const& config)
 auto ttl = config.valueOr<uint32_t>("online_delete", 0) * 4;
 backend = std::make_shared<CassandraBackend>(ioc, cfg, ttl);
 }
+else if (boost::iequals(type, "cassandra-new"))
+{
+auto cfg = config.section("database." + type);
+auto ttl = config.valueOr<uint16_t>("online_delete", 0) * 4;
+backend =
+std::make_shared<Backend::Cassandra::CassandraBackend>(Backend::Cassandra::SettingsProvider{cfg, ttl});
+}
 if (!backend)
 throw std::runtime_error("Invalid database type");


@@ -33,26 +33,24 @@ namespace Backend {
 bool
 BackendInterface::finishWrites(std::uint32_t const ledgerSequence)
 {
 gLog.debug() << "Want finish writes for " << ledgerSequence;
 auto commitRes = doFinishWrites();
 if (commitRes)
 {
 gLog.debug() << "Successfully commited. Updating range now to " << ledgerSequence;
 updateRange(ledgerSequence);
 }
 return commitRes;
 }
 void
-BackendInterface::writeLedgerObject(
-std::string&& key,
-std::uint32_t const seq,
-std::string&& blob)
+BackendInterface::writeLedgerObject(std::string&& key, std::uint32_t const seq, std::string&& blob)
 {
 assert(key.size() == sizeof(ripple::uint256));
 doWriteLedgerObject(std::move(key), seq, std::move(blob));
 }
 std::optional<LedgerRange>
-BackendInterface::hardFetchLedgerRangeNoThrow(
-boost::asio::yield_context& yield) const
+BackendInterface::hardFetchLedgerRangeNoThrow(boost::asio::yield_context& yield) const
 {
 gLog.trace() << "called";
 while (true)
@@ -117,8 +115,7 @@ BackendInterface::fetchLedgerObjects(
 else
 misses.push_back(keys[i]);
 }
-gLog.trace() << "Cache hits = " << keys.size() - misses.size()
-<< " - cache misses = " << misses.size();
+gLog.trace() << "Cache hits = " << keys.size() - misses.size() << " - cache misses = " << misses.size();
 if (misses.size())
 {
@@ -173,7 +170,6 @@ BackendInterface::fetchBookOffers(
 ripple::uint256 const& book,
 std::uint32_t const ledgerSequence,
 std::uint32_t const limit,
-std::optional<ripple::uint256> const& cursor,
 boost::asio::yield_context& yield) const
 {
 // TODO try to speed this up. This can take a few seconds. The goal is
@@ -182,10 +178,7 @@
 const ripple::uint256 bookEnd = ripple::getQualityNext(book);
 ripple::uint256 uTipIndex = book;
 std::vector<ripple::uint256> keys;
-auto getMillis = [](auto diff) {
-return std::chrono::duration_cast<std::chrono::milliseconds>(diff)
-.count();
-};
+auto getMillis = [](auto diff) { return std::chrono::duration_cast<std::chrono::milliseconds>(diff).count(); };
 auto begin = std::chrono::system_clock::now();
 std::uint32_t numSucc = 0;
 std::uint32_t numPages = 0;
@@ -200,18 +193,14 @@
 succMillis += getMillis(mid2 - mid1);
 if (!offerDir || offerDir->key >= bookEnd)
 {
-gLog.trace() << "offerDir.has_value() " << offerDir.has_value()
-<< " breaking";
+gLog.trace() << "offerDir.has_value() " << offerDir.has_value() << " breaking";
 break;
 }
 uTipIndex = offerDir->key;
 while (keys.size() < limit)
 {
 ++numPages;
-ripple::STLedgerEntry sle{
-ripple::SerialIter{
-offerDir->blob.data(), offerDir->blob.size()},
-offerDir->key};
+ripple::STLedgerEntry sle{ripple::SerialIter{offerDir->blob.data(), offerDir->blob.size()}, offerDir->key};
 auto indexes = sle.getFieldV256(ripple::sfIndexes);
 keys.insert(keys.end(), indexes.begin(), indexes.end());
 auto next = sle.getFieldU64(ripple::sfIndexNext);
@@ -221,8 +210,7 @@
 break;
 }
 auto nextKey = ripple::keylet::page(uTipIndex, next);
-auto nextDir =
-fetchLedgerObject(nextKey.key, ledgerSequence, yield);
+auto nextDir = fetchLedgerObject(nextKey.key, ledgerSequence, yield);
 assert(nextDir);
 offerDir->blob = *nextDir;
 offerDir->key = nextKey.key;
@@ -234,26 +222,20 @@
 auto objs = fetchLedgerObjects(keys, ledgerSequence, yield);
 for (size_t i = 0; i < keys.size() && i < limit; ++i)
 {
-gLog.trace() << "Key = " << ripple::strHex(keys[i])
-<< " blob = " << ripple::strHex(objs[i])
+gLog.trace() << "Key = " << ripple::strHex(keys[i]) << " blob = " << ripple::strHex(objs[i])
 << " ledgerSequence = " << ledgerSequence;
 assert(objs[i].size());
 page.offers.push_back({keys[i], objs[i]});
 }
 auto end = std::chrono::system_clock::now();
-gLog.debug() << "Fetching " << std::to_string(keys.size())
-<< " offers took " << std::to_string(getMillis(mid - begin))
-<< " milliseconds. Fetching next dir took "
-<< std::to_string(succMillis)
-<< " milliseonds. Fetched next dir " << std::to_string(numSucc)
+gLog.debug() << "Fetching " << std::to_string(keys.size()) << " offers took "
+<< std::to_string(getMillis(mid - begin)) << " milliseconds. Fetching next dir took "
+<< std::to_string(succMillis) << " milliseonds. Fetched next dir " << std::to_string(numSucc)
 << " times"
-<< " Fetching next page of dir took "
-<< std::to_string(pageMillis) << " milliseconds"
-<< ". num pages = " << std::to_string(numPages)
-<< ". Fetching all objects took "
+<< " Fetching next page of dir took " << std::to_string(pageMillis) << " milliseconds"
+<< ". num pages = " << std::to_string(numPages) << ". Fetching all objects took "
 << std::to_string(getMillis(end - mid))
-<< " milliseconds. total time = "
-<< std::to_string(getMillis(end - begin)) << " milliseconds"
+<< " milliseconds. total time = " << std::to_string(getMillis(end - begin)) << " milliseconds"
 << " book = " << ripple::strHex(book);
 return page;
@@ -273,11 +255,8 @@ BackendInterface::fetchLedgerPage(
 bool reachedEnd = false;
 while (keys.size() < limit && !reachedEnd)
 {
-ripple::uint256 const& curCursor = keys.size() ? keys.back()
-: cursor ? *cursor
-: firstKey;
-std::uint32_t const seq =
-outOfOrder ? range->maxSequence : ledgerSequence;
+ripple::uint256 const& curCursor = keys.size() ? keys.back() : cursor ? *cursor : firstKey;
+std::uint32_t const seq = outOfOrder ? range->maxSequence : ledgerSequence;
 auto succ = fetchSuccessorKey(curCursor, seq, yield);
 if (!succ)
 reachedEnd = true;
@@ -292,9 +271,8 @@
 page.objects.push_back({std::move(keys[i]), std::move(objects[i])});
 else if (!outOfOrder)
 {
-gLog.error()
-<< "Deleted or non-existent object in successor table. key = "
-<< ripple::strHex(keys[i]) << " - seq = " << ledgerSequence;
+gLog.error() << "Deleted or non-existent object in successor table. key = " << ripple::strHex(keys[i])
+<< " - seq = " << ledgerSequence;
 std::stringstream msg;
 for (size_t j = 0; j < objects.size(); ++j)
 {
@@ -310,9 +288,7 @@
 }
 std::optional<ripple::Fees>
-BackendInterface::fetchFees(
-std::uint32_t const seq,
-boost::asio::yield_context& yield) const
+BackendInterface::fetchFees(std::uint32_t const seq, boost::asio::yield_context& yield) const
 {
 ripple::Fees fees;


@@ -72,8 +72,7 @@ retryOnTimeout(F func, size_t waitMs = 500)
}
catch (DatabaseTimeout& t)
{
log.error()
<< "Database request timed out. Sleeping and retrying ... ";
log.error() << "Database request timed out. Sleeping and retrying ... ";
std::this_thread::sleep_for(std::chrono::milliseconds(waitMs));
}
}
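The retryOnTimeout helper shown in this hunk can be sketched standalone (the `DatabaseTimeout` stand-in below is an assumption; only the retry-loop shape comes from the diff):

```cpp
#include <chrono>
#include <cstddef>
#include <stdexcept>
#include <thread>

// Hypothetical stand-in for the backend's timeout exception type.
struct DatabaseTimeout : std::runtime_error
{
    DatabaseTimeout() : std::runtime_error("database timeout")
    {
    }
};

// Keep calling func until it stops throwing DatabaseTimeout, sleeping
// waitMs milliseconds between attempts (mirrors the helper in the diff).
template <class F>
auto
retryOnTimeout(F func, std::size_t waitMs = 500)
{
    while (true)
    {
        try
        {
            return func();
        }
        catch (DatabaseTimeout const&)
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(waitMs));
        }
    }
}
```

A callable that fails twice and then succeeds returns its value on the third attempt.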
@@ -111,7 +110,7 @@ synchronous(F&& f)
* R is the currently executing coroutine that is about to get passed.
* If coroutine types do not match, the current one's type is stored.
*/
using R = typename std::result_of<F(boost::asio::yield_context&)>::type;
using R = typename boost::result_of<F(boost::asio::yield_context&)>::type;
if constexpr (!std::is_same<R, void>::value)
{
/**
@@ -122,11 +121,10 @@ synchronous(F&& f)
* executing coroutine, yield. The different type is returned.
*/
R res;
boost::asio::spawn(
strand, [&f, &work, &res](boost::asio::yield_context yield) {
res = f(yield);
work.reset();
});
boost::asio::spawn(strand, [&f, &work, &res](boost::asio::yield_context yield) {
res = f(yield);
work.reset();
});
ctx.run();
return res;
@@ -134,11 +132,10 @@ synchronous(F&& f)
else
{
/*! @brief When the coroutine type is different, run as normal. */
boost::asio::spawn(
strand, [&f, &work](boost::asio::yield_context yield) {
f(yield);
work.reset();
});
boost::asio::spawn(strand, [&f, &work](boost::asio::yield_context yield) {
f(yield);
work.reset();
});
ctx.run();
}
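The synchronous() helper above spawns the coroutine on a strand and blocks in ctx.run() until work.reset() fires. A boost-free analogue of that "block the caller until async work completes" shape, using std::async (purely an illustrative assumption, not the code in the diff):

```cpp
#include <future>
#include <utility>

// Run f on another execution context and block until its result is ready,
// the way synchronous() blocks in ctx.run() until the work guard is reset.
template <class F>
auto
runSynchronously(F&& f)
{
    auto task = std::async(std::launch::async, std::forward<F>(f));
    return task.get();  // blocks the caller until f has finished
}
```

As in the diff's two branches, this handles both value-returning and void callables: a deduced `void` return type makes `return task.get();` legal either way.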
@@ -182,12 +179,8 @@ protected:
*/
public:
BackendInterface(clio::Config const& config)
{
}
virtual ~BackendInterface()
{
}
BackendInterface() = default;
virtual ~BackendInterface() = default;
/*! @brief LEDGER METHODS */
public:
@@ -213,15 +206,11 @@ public:
/*! @brief Fetches a specific ledger by sequence number. */
virtual std::optional<ripple::LedgerInfo>
fetchLedgerBySequence(
std::uint32_t const sequence,
boost::asio::yield_context& yield) const = 0;
fetchLedgerBySequence(std::uint32_t const sequence, boost::asio::yield_context& yield) const = 0;
/*! @brief Fetches a specific ledger by hash. */
virtual std::optional<ripple::LedgerInfo>
fetchLedgerByHash(
ripple::uint256 const& hash,
boost::asio::yield_context& yield) const = 0;
fetchLedgerByHash(ripple::uint256 const& hash, boost::asio::yield_context& yield) const = 0;
/*! @brief Fetches the latest ledger sequence. */
virtual std::optional<std::uint32_t>
@@ -273,9 +262,7 @@ public:
* @return std::optional<TransactionAndMetadata>
*/
virtual std::optional<TransactionAndMetadata>
fetchTransaction(
ripple::uint256 const& hash,
boost::asio::yield_context& yield) const = 0;
fetchTransaction(ripple::uint256 const& hash, boost::asio::yield_context& yield) const = 0;
/**
* @brief Fetches multiple transactions.
@@ -285,9 +272,7 @@ public:
* @return std::vector<TransactionAndMetadata>
*/
virtual std::vector<TransactionAndMetadata>
fetchTransactions(
std::vector<ripple::uint256> const& hashes,
boost::asio::yield_context& yield) const = 0;
fetchTransactions(std::vector<ripple::uint256> const& hashes, boost::asio::yield_context& yield) const = 0;
/**
* @brief Fetches all transactions for a specific account
@@ -318,9 +303,7 @@ public:
* @return std::vector<TransactionAndMetadata>
*/
virtual std::vector<TransactionAndMetadata>
fetchAllTransactionsInLedger(
std::uint32_t const ledgerSequence,
boost::asio::yield_context& yield) const = 0;
fetchAllTransactionsInLedger(std::uint32_t const ledgerSequence, boost::asio::yield_context& yield) const = 0;
/**
* @brief Fetches all transaction hashes from a specific ledger.
@@ -330,9 +313,7 @@ public:
* @return std::vector<ripple::uint256>
*/
virtual std::vector<ripple::uint256>
fetchAllTransactionHashesInLedger(
std::uint32_t const ledgerSequence,
boost::asio::yield_context& yield) const = 0;
fetchAllTransactionHashesInLedger(std::uint32_t const ledgerSequence, boost::asio::yield_context& yield) const = 0;
/*! @brief NFT methods */
/**
@@ -344,10 +325,8 @@ public:
* @return std::optional<NFT>
*/
virtual std::optional<NFT>
fetchNFT(
ripple::uint256 const& tokenID,
std::uint32_t const ledgerSequence,
boost::asio::yield_context& yield) const = 0;
fetchNFT(ripple::uint256 const& tokenID, std::uint32_t const ledgerSequence, boost::asio::yield_context& yield)
const = 0;
/**
* @brief Fetches all transactions for a specific NFT.
@@ -377,10 +356,8 @@ public:
* @return std::optional<Blob>
*/
std::optional<Blob>
fetchLedgerObject(
ripple::uint256 const& key,
std::uint32_t const sequence,
boost::asio::yield_context& yield) const;
fetchLedgerObject(ripple::uint256 const& key, std::uint32_t const sequence, boost::asio::yield_context& yield)
const;
/**
* @brief Fetches all ledger objects: a vector of vectors of unsigned chars.
@@ -398,10 +375,8 @@ public:
/*! @brief Virtual function version of fetchLedgerObject */
virtual std::optional<Blob>
doFetchLedgerObject(
ripple::uint256 const& key,
std::uint32_t const sequence,
boost::asio::yield_context& yield) const = 0;
doFetchLedgerObject(ripple::uint256 const& key, std::uint32_t const sequence, boost::asio::yield_context& yield)
const = 0;
/*! @brief Virtual function version of fetchLedgerObjects */
virtual std::vector<Blob>
@@ -421,9 +396,7 @@ public:
* @return std::vector<LedgerObject>
*/
virtual std::vector<LedgerObject>
fetchLedgerDiff(
std::uint32_t const ledgerSequence,
boost::asio::yield_context& yield) const = 0;
fetchLedgerDiff(std::uint32_t const ledgerSequence, boost::asio::yield_context& yield) const = 0;
/**
* @brief Fetches a page of ledger objects, ordered by key/index.
@@ -445,24 +418,17 @@ public:
/*! @brief Fetches successor object from key/index. */
std::optional<LedgerObject>
fetchSuccessorObject(
ripple::uint256 key,
std::uint32_t const ledgerSequence,
boost::asio::yield_context& yield) const;
fetchSuccessorObject(ripple::uint256 key, std::uint32_t const ledgerSequence, boost::asio::yield_context& yield)
const;
/*! @brief Fetches successor key from key/index. */
std::optional<ripple::uint256>
fetchSuccessorKey(
ripple::uint256 key,
std::uint32_t const ledgerSequence,
boost::asio::yield_context& yield) const;
fetchSuccessorKey(ripple::uint256 key, std::uint32_t const ledgerSequence, boost::asio::yield_context& yield) const;
/*! @brief Virtual function version of fetchSuccessorKey. */
virtual std::optional<ripple::uint256>
doFetchSuccessorKey(
ripple::uint256 key,
std::uint32_t const ledgerSequence,
boost::asio::yield_context& yield) const = 0;
doFetchSuccessorKey(ripple::uint256 key, std::uint32_t const ledgerSequence, boost::asio::yield_context& yield)
const = 0;
/**
* @brief Fetches book offers.
@@ -479,7 +445,6 @@ public:
ripple::uint256 const& book,
std::uint32_t const ledgerSequence,
std::uint32_t const limit,
std::optional<ripple::uint256> const& cursor,
boost::asio::yield_context& yield) const;
/**
@@ -495,9 +460,7 @@ public:
std::optional<LedgerRange>
hardFetchLedgerRange() const
{
return synchronous([&](boost::asio::yield_context yield) {
return hardFetchLedgerRange(yield);
});
return synchronous([&](boost::asio::yield_context yield) { return hardFetchLedgerRange(yield); });
}
/*! @brief Virtual function equivalent of hardFetchLedgerRange. */
@@ -518,9 +481,7 @@ public:
* @param ledgerHeader r-value string representing ledger header.
*/
virtual void
writeLedger(
ripple::LedgerInfo const& ledgerInfo,
std::string&& ledgerHeader) = 0;
writeLedger(ripple::LedgerInfo const& ledgerInfo, std::string&& ledgerHeader) = 0;
/**
* @brief Writes a new ledger object.
@@ -532,10 +493,7 @@ public:
* @param blob r-value vector of unsigned characters (blob).
*/
virtual void
writeLedgerObject(
std::string&& key,
std::uint32_t const seq,
std::string&& blob);
writeLedgerObject(std::string&& key, std::uint32_t const seq, std::string&& blob);
/**
* @brief Writes a new transaction.
@@ -586,10 +544,7 @@ public:
* @param successor Passed in as an r-value reference.
*/
virtual void
writeSuccessor(
std::string&& key,
std::uint32_t const seq,
std::string&& successor) = 0;
writeSuccessor(std::string&& key, std::uint32_t const seq, std::string&& successor) = 0;
/*! @brief Tells database we will write data for a specific ledger. */
virtual void
@@ -618,9 +573,7 @@ public:
* @return false
*/
virtual bool
doOnlineDelete(
std::uint32_t numLedgersToKeep,
boost::asio::yield_context& yield) const = 0;
doOnlineDelete(std::uint32_t numLedgersToKeep, boost::asio::yield_context& yield) const = 0;
/**
* @brief Opens the database
@@ -650,10 +603,7 @@ private:
* @param blob r-value vector of unsigned chars.
*/
virtual void
doWriteLedgerObject(
std::string&& key,
std::uint32_t const seq,
std::string&& blob) = 0;
doWriteLedgerObject(std::string&& key, std::uint32_t const seq, std::string&& blob) = 0;
virtual bool
doFinishWrites() = 0;


@@ -17,12 +17,13 @@
*/
//==============================================================================
#include <ripple/app/tx/impl/details/NFTokenUtils.h>
#include <backend/CassandraBackend.h>
#include <backend/DBHelpers.h>
#include <log/Logger.h>
#include <util/Profiler.h>
#include <ripple/app/tx/impl/details/NFTokenUtils.h>
#include <functional>
#include <unordered_map>
@@ -47,22 +48,15 @@ processAsyncWriteResponse(T& requestParams, CassFuture* fut, F func)
if (rc != CASS_OK)
{
// exponential backoff with a max wait of 2^10 ms (about 1 second)
auto wait = std::chrono::milliseconds(
lround(std::pow(2, std::min(10u, requestParams.currentRetries))));
log.error() << "ERROR!!! Cassandra write error: " << rc << ", "
<< cass_error_desc(rc)
<< " id= " << requestParams.toString()
<< ", current retries " << requestParams.currentRetries
auto wait = std::chrono::milliseconds(lround(std::pow(2, std::min(10u, requestParams.currentRetries))));
log.error() << "ERROR!!! Cassandra write error: " << rc << ", " << cass_error_desc(rc)
<< " id= " << requestParams.toString() << ", current retries " << requestParams.currentRetries
<< ", retrying in " << wait.count() << " milliseconds";
++requestParams.currentRetries;
std::shared_ptr<boost::asio::steady_timer> timer =
std::make_shared<boost::asio::steady_timer>(
backend.getIOContext(),
std::chrono::steady_clock::now() + wait);
timer->async_wait([timer, &requestParams, func](
const boost::system::error_code& error) {
func(requestParams, true);
});
std::shared_ptr<boost::asio::steady_timer> timer = std::make_shared<boost::asio::steady_timer>(
backend.getIOContext(), std::chrono::steady_clock::now() + wait);
timer->async_wait(
[timer, &requestParams, func](const boost::system::error_code& error) { func(requestParams, true); });
}
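The backoff arithmetic in processAsyncWriteResponse can be checked in isolation: the wait doubles with each retry and the exponent is clamped to 10, so the wait never exceeds 2^10 ms (about one second). A standalone sketch of that expression from the diff:

```cpp
#include <algorithm>
#include <chrono>
#include <cmath>

// Exponential backoff as in the diff: 2^currentRetries milliseconds,
// with the exponent clamped to 10 (so the wait never exceeds 1024 ms).
std::chrono::milliseconds
backoffWait(unsigned currentRetries)
{
    return std::chrono::milliseconds(
        std::lround(std::pow(2, std::min(10u, currentRetries))));
}
```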
else
{
@@ -90,21 +84,13 @@ struct WriteCallbackData
std::atomic<int> refs = 1;
std::string id;
WriteCallbackData(
CassandraBackend const* b,
T&& d,
B bind,
std::string const& identifier)
WriteCallbackData(CassandraBackend const* b, T&& d, B bind, std::string const& identifier)
: backend(b), data(std::move(d)), id(identifier)
{
retry = [bind, this](auto& params, bool isRetry) {
auto statement = bind(params);
backend->executeAsyncWrite(
statement,
processAsyncWrite<
typename std::remove_reference<decltype(params)>::type>,
params,
isRetry);
statement, processAsyncWrite<typename std::remove_reference<decltype(params)>::type>, params, isRetry);
};
}
virtual void
@@ -145,10 +131,7 @@ struct BulkWriteCallbackData : public WriteCallbackData<T, B>
std::atomic_int& r,
std::mutex& m,
std::condition_variable& c)
: WriteCallbackData<T, B>(b, std::move(d), bind, "bulk")
, numRemaining(r)
, mtx(m)
, cv(c)
: WriteCallbackData<T, B>(b, std::move(d), bind, "bulk"), numRemaining(r), mtx(m), cv(c)
{
}
void
@@ -172,11 +155,7 @@ struct BulkWriteCallbackData : public WriteCallbackData<T, B>
template <class T, class B>
void
makeAndExecuteAsyncWrite(
CassandraBackend const* b,
T&& d,
B bind,
std::string const& id)
makeAndExecuteAsyncWrite(CassandraBackend const* b, T&& d, B bind, std::string const& id)
{
auto* cb = new WriteCallbackData<T, B>(b, std::move(d), bind, id);
cb->start();
@@ -192,17 +171,13 @@ makeAndExecuteBulkAsyncWrite(
std::mutex& m,
std::condition_variable& c)
{
auto cb = std::make_shared<BulkWriteCallbackData<T, B>>(
b, std::move(d), bind, r, m, c);
auto cb = std::make_shared<BulkWriteCallbackData<T, B>>(b, std::move(d), bind, r, m, c);
cb->start();
return cb;
}
void
CassandraBackend::doWriteLedgerObject(
std::string&& key,
std::uint32_t const seq,
std::string&& blob)
CassandraBackend::doWriteLedgerObject(std::string&& key, std::uint32_t const seq, std::string&& blob)
{
log_.trace() << "Writing ledger object to cassandra";
if (range)
@@ -234,14 +209,10 @@ CassandraBackend::doWriteLedgerObject(
}
void
CassandraBackend::writeSuccessor(
std::string&& key,
std::uint32_t const seq,
std::string&& successor)
CassandraBackend::writeSuccessor(std::string&& key, std::uint32_t const seq, std::string&& successor)
{
log_.trace() << "Writing successor. key = " << key.size() << " bytes. "
<< " seq = " << std::to_string(seq)
<< " successor = " << successor.size() << " bytes.";
<< " seq = " << std::to_string(seq) << " successor = " << successor.size() << " bytes.";
assert(key.size() != 0);
assert(successor.size() != 0);
makeAndExecuteAsyncWrite(
@@ -259,9 +230,7 @@ CassandraBackend::writeSuccessor(
"successor");
}
void
CassandraBackend::writeLedger(
ripple::LedgerInfo const& ledgerInfo,
std::string&& header)
CassandraBackend::writeLedger(ripple::LedgerInfo const& ledgerInfo, std::string&& header)
{
makeAndExecuteAsyncWrite(
this,
@@ -289,8 +258,7 @@ CassandraBackend::writeLedger(
}
void
CassandraBackend::writeAccountTransactions(
std::vector<AccountTransactionsData>&& data)
CassandraBackend::writeAccountTransactions(std::vector<AccountTransactionsData>&& data)
{
for (auto& record : data)
{
@@ -298,11 +266,7 @@ CassandraBackend::writeAccountTransactions(
{
makeAndExecuteAsyncWrite(
this,
std::make_tuple(
std::move(account),
record.ledgerSequence,
record.transactionIndex,
record.txHash),
std::make_tuple(std::move(account), record.ledgerSequence, record.transactionIndex, record.txHash),
[this](auto& params) {
CassandraStatement statement(insertAccountTx_);
auto& [account, lgrSeq, txnIdx, hash] = params.data;
@@ -323,11 +287,7 @@ CassandraBackend::writeNFTTransactions(std::vector<NFTTransactionsData>&& data)
{
makeAndExecuteAsyncWrite(
this,
std::make_tuple(
record.tokenID,
record.ledgerSequence,
record.transactionIndex,
record.txHash),
std::make_tuple(record.tokenID, record.ledgerSequence, record.transactionIndex, record.txHash),
[this](auto const& params) {
CassandraStatement statement(insertNFTTx_);
auto const& [tokenID, lgrSeq, txnIdx, txHash] = params.data;
@@ -363,12 +323,7 @@ CassandraBackend::writeTransaction(
"ledger_transaction");
makeAndExecuteAsyncWrite(
this,
std::make_tuple(
std::move(hash),
seq,
date,
std::move(transaction),
std::move(metadata)),
std::make_tuple(std::move(hash), seq, date, std::move(transaction), std::move(metadata)),
[this](auto& params) {
CassandraStatement statement{insertTransaction_};
auto& [hash, sequence, date, transaction, metadata] = params.data;
@@ -389,11 +344,7 @@ CassandraBackend::writeNFTs(std::vector<NFTsData>&& data)
{
makeAndExecuteAsyncWrite(
this,
std::make_tuple(
record.tokenID,
record.ledgerSequence,
record.owner,
record.isBurned),
std::make_tuple(record.tokenID, record.ledgerSequence, record.owner, record.isBurned),
[this](auto const& params) {
CassandraStatement statement{insertNFT_};
auto const& [tokenID, lgrSeq, owner, isBurned] = params.data;
@@ -405,17 +356,39 @@ CassandraBackend::writeNFTs(std::vector<NFTsData>&& data)
},
"nf_tokens");
makeAndExecuteAsyncWrite(
this,
std::make_tuple(record.tokenID),
[this](auto const& params) {
CassandraStatement statement{insertIssuerNFT_};
auto const& [tokenID] = params.data;
statement.bindNextBytes(ripple::nft::getIssuer(tokenID));
statement.bindNextBytes(tokenID);
return statement;
},
"issuer_nf_tokens");
// If `uri` is set (and it can be set to an empty uri), we know this
// is a net-new NFT. That is, this NFT has not been seen before by us
// _OR_ it is the extreme edge case of a re-minted NFT with the
// same NFT ID as an already-burned token. In this case, we need to
// record the URI and link to the issuer_nf_tokens table.
if (record.uri)
{
makeAndExecuteAsyncWrite(
this,
std::make_tuple(record.tokenID),
[this](auto const& params) {
CassandraStatement statement{insertIssuerNFT_};
auto const& [tokenID] = params.data;
statement.bindNextBytes(ripple::nft::getIssuer(tokenID));
statement.bindNextInt(ripple::nft::toUInt32(ripple::nft::getTaxon(tokenID)));
statement.bindNextBytes(tokenID);
return statement;
},
"issuer_nf_tokens");
makeAndExecuteAsyncWrite(
this,
std::make_tuple(record.tokenID, record.ledgerSequence, record.uri.value()),
[this](auto const& params) {
CassandraStatement statement{insertNFTURI_};
auto const& [tokenID, lgrSeq, uri] = params.data;
statement.bindNextBytes(tokenID);
statement.bindNextInt(lgrSeq);
statement.bindNextBytes(uri);
return statement;
},
"nf_token_uris");
}
}
}
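The branching in writeNFTs reduces to: every record writes nf_tokens, and records with a set uri (even an empty one) additionally write issuer_nf_tokens and nf_token_uris. A trimmed-down sketch of that decision (the NFTRecord struct and function name here are assumptions, not the diff's types):

```cpp
#include <optional>
#include <string>
#include <vector>

// Hypothetical, trimmed-down stand-in for the NFTsData record in the diff.
struct NFTRecord
{
    std::string tokenID;
    std::optional<std::string> uri;  // set (possibly to "") only for net-new NFTs
};

// Tables written for one record, per the logic in writeNFTs: the issuer and
// URI tables are touched only when uri is set, marking a net-new NFT.
std::vector<std::string>
tablesWrittenFor(NFTRecord const& record)
{
    std::vector<std::string> tables{"nf_tokens"};
    if (record.uri)
    {
        tables.emplace_back("issuer_nf_tokens");
        tables.emplace_back("nf_token_uris");
    }
    return tables;
}
```

Note that an empty-but-set uri still counts as net-new, which is exactly the "(and it can be set to an empty uri)" caveat in the comment above.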
@@ -445,9 +418,8 @@ CassandraBackend::hardFetchLedgerRange(boost::asio::yield_context& yield) const
}
std::vector<TransactionAndMetadata>
CassandraBackend::fetchAllTransactionsInLedger(
std::uint32_t const ledgerSequence,
boost::asio::yield_context& yield) const
CassandraBackend::fetchAllTransactionsInLedger(std::uint32_t const ledgerSequence, boost::asio::yield_context& yield)
const
{
auto hashes = fetchAllTransactionHashesInLedger(ledgerSequence, yield);
return fetchTransactions(hashes, yield);
@@ -492,26 +464,21 @@ struct ReadCallbackData
void
resume()
{
boost::asio::post(
boost::asio::get_associated_executor(handler),
[handler = std::move(handler)]() mutable {
handler(boost::system::error_code{});
});
boost::asio::post(boost::asio::get_associated_executor(handler), [handler = std::move(handler)]() mutable {
handler(boost::system::error_code{});
});
}
};
void
processAsyncRead(CassFuture* fut, void* cbData)
{
ReadCallbackData<result_type>& cb =
*static_cast<ReadCallbackData<result_type>*>(cbData);
ReadCallbackData<result_type>& cb = *static_cast<ReadCallbackData<result_type>*>(cbData);
cb.finish(fut);
}
std::vector<TransactionAndMetadata>
CassandraBackend::fetchTransactions(
std::vector<ripple::uint256> const& hashes,
boost::asio::yield_context& yield) const
CassandraBackend::fetchTransactions(std::vector<ripple::uint256> const& hashes, boost::asio::yield_context& yield) const
{
if (hashes.size() == 0)
return {};
@@ -531,14 +498,10 @@ CassandraBackend::fetchTransactions(
CassandraStatement statement{selectTransaction_};
statement.bindNextBytes(hashes[i]);
cbs.push_back(std::make_shared<ReadCallbackData<result_type>>(
numOutstanding, handler, [i, &results](auto& result) {
cbs.push_back(
std::make_shared<ReadCallbackData<result_type>>(numOutstanding, handler, [i, &results](auto& result) {
if (result.hasResult())
results[i] = {
result.getBytes(),
result.getBytes(),
result.getUInt32(),
result.getUInt32()};
results[i] = {result.getBytes(), result.getBytes(), result.getUInt32(), result.getUInt32()};
}));
executeAsyncRead(statement, processAsyncRead, *cbs[i]);
@@ -555,9 +518,7 @@ CassandraBackend::fetchTransactions(
throw DatabaseTimeout();
}
log_.debug() << "Fetched " << numHashes
<< " transactions from Cassandra in " << timeDiff
<< " milliseconds";
log_.debug() << "Fetched " << numHashes << " transactions from Cassandra in " << timeDiff << " milliseconds";
return results;
}
@@ -583,12 +544,8 @@ CassandraBackend::fetchAllTransactionHashesInLedger(
{
hashes.push_back(result.getUInt256());
} while (result.nextRow());
log_.debug() << "Fetched " << hashes.size()
<< " transaction hashes from Cassandra in "
<< std::chrono::duration_cast<std::chrono::milliseconds>(
end - start)
.count()
<< " milliseconds";
log_.debug() << "Fetched " << hashes.size() << " transaction hashes from Cassandra in "
<< std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count() << " milliseconds";
return hashes;
}
@@ -598,18 +555,35 @@ CassandraBackend::fetchNFT(
std::uint32_t const ledgerSequence,
boost::asio::yield_context& yield) const
{
CassandraStatement statement{selectNFT_};
statement.bindNextBytes(tokenID);
statement.bindNextInt(ledgerSequence);
CassandraResult response = executeAsyncRead(statement, yield);
if (!response)
CassandraStatement nftStatement{selectNFT_};
nftStatement.bindNextBytes(tokenID);
nftStatement.bindNextInt(ledgerSequence);
CassandraResult nftResponse = executeAsyncRead(nftStatement, yield);
if (!nftResponse)
return {};
NFT result;
result.tokenID = tokenID;
result.ledgerSequence = response.getUInt32();
result.owner = response.getBytes();
result.isBurned = response.getBool();
result.ledgerSequence = nftResponse.getUInt32();
result.owner = nftResponse.getBytes();
result.isBurned = nftResponse.getBool();
// Now fetch the URI. Usually we will have the URI even for burned NFTs,
// but if the first ledger indexed by this clio instance included
// NFTokenBurn transactions, we will not have the URIs for any of those
// tokens. In any other case, a missing URI indicates something went
// wrong with our data.
//
// TODO - in the future it would be great if any handlers that use this
// could inject a warning in this case (a URI missing because the token
// was burned in the first ledger) to indicate that even though we
// return a blank URI, the NFT may have had one.
CassandraStatement uriStatement{selectNFTURI_};
uriStatement.bindNextBytes(tokenID);
uriStatement.bindNextInt(ledgerSequence);
CassandraResult uriResponse = executeAsyncRead(uriStatement, yield);
if (uriResponse.hasResult())
result.uri = uriResponse.getBytes();
return result;
}
@@ -626,29 +600,23 @@ CassandraBackend::fetchNFTTransactions(
if (!rng)
return {{}, {}};
CassandraStatement statement = forward
? CassandraStatement(selectNFTTxForward_)
: CassandraStatement(selectNFTTx_);
CassandraStatement statement = forward ? CassandraStatement(selectNFTTxForward_) : CassandraStatement(selectNFTTx_);
statement.bindNextBytes(tokenID);
if (cursor)
{
statement.bindNextIntTuple(
cursor->ledgerSequence, cursor->transactionIndex);
log_.debug() << "token_id = " << ripple::strHex(tokenID)
<< " tuple = " << cursor->ledgerSequence
statement.bindNextIntTuple(cursor->ledgerSequence, cursor->transactionIndex);
log_.debug() << "token_id = " << ripple::strHex(tokenID) << " tuple = " << cursor->ledgerSequence
<< cursor->transactionIndex;
}
else
{
int const seq = forward ? rng->minSequence : rng->maxSequence;
int const placeHolder =
forward ? 0 : std::numeric_limits<std::uint32_t>::max();
int const placeHolder = forward ? 0 : std::numeric_limits<std::uint32_t>::max();
statement.bindNextIntTuple(placeHolder, placeHolder);
log_.debug() << "token_id = " << ripple::strHex(tokenID)
<< " idx = " << seq << " tuple = " << placeHolder;
log_.debug() << "token_id = " << ripple::strHex(tokenID) << " idx = " << seq << " tuple = " << placeHolder;
}
statement.bindNextUInt(limit);
@@ -671,9 +639,7 @@ CassandraBackend::fetchNFTTransactions(
{
log_.debug() << "Setting cursor";
auto const [lgrSeq, txnIdx] = result.getInt64Tuple();
cursor = {
static_cast<std::uint32_t>(lgrSeq),
static_cast<std::uint32_t>(txnIdx)};
cursor = {static_cast<std::uint32_t>(lgrSeq), static_cast<std::uint32_t>(txnIdx)};
// Only modify if forward because forward query
// (selectNFTTxForward_) orders by ledger/tx sequence >= whereas
@@ -718,21 +684,17 @@ CassandraBackend::fetchAccountTransactions(
statement.bindNextBytes(account);
if (cursor)
{
statement.bindNextIntTuple(
cursor->ledgerSequence, cursor->transactionIndex);
log_.debug() << "account = " << ripple::strHex(account)
<< " tuple = " << cursor->ledgerSequence
statement.bindNextIntTuple(cursor->ledgerSequence, cursor->transactionIndex);
log_.debug() << "account = " << ripple::strHex(account) << " tuple = " << cursor->ledgerSequence
<< cursor->transactionIndex;
}
else
{
int const seq = forward ? rng->minSequence : rng->maxSequence;
int const placeHolder =
forward ? 0 : std::numeric_limits<std::uint32_t>::max();
int const placeHolder = forward ? 0 : std::numeric_limits<std::uint32_t>::max();
statement.bindNextIntTuple(placeHolder, placeHolder);
log_.debug() << "account = " << ripple::strHex(account)
<< " idx = " << seq << " tuple = " << placeHolder;
log_.debug() << "account = " << ripple::strHex(account) << " idx = " << seq << " tuple = " << placeHolder;
}
statement.bindNextUInt(limit);
@@ -754,9 +716,7 @@ CassandraBackend::fetchAccountTransactions(
{
log_.debug() << "Setting cursor";
auto [lgrSeq, txnIdx] = result.getInt64Tuple();
cursor = {
static_cast<std::uint32_t>(lgrSeq),
static_cast<std::uint32_t>(txnIdx)};
cursor = {static_cast<std::uint32_t>(lgrSeq), static_cast<std::uint32_t>(txnIdx)};
// Only modify if forward because forward query
// (selectAccountTxForward_) orders by ledger/tx sequence >= whereas
@@ -848,8 +808,8 @@ CassandraBackend::doFetchLedgerObjects(
cbs.reserve(numKeys);
for (std::size_t i = 0; i < keys.size(); ++i)
{
cbs.push_back(std::make_shared<ReadCallbackData<result_type>>(
numOutstanding, handler, [i, &results](auto& result) {
cbs.push_back(
std::make_shared<ReadCallbackData<result_type>>(numOutstanding, handler, [i, &results](auto& result) {
if (result.hasResult())
results[i] = result.getBytes();
}));
@@ -875,9 +835,7 @@ CassandraBackend::doFetchLedgerObjects(
}
std::vector<LedgerObject>
CassandraBackend::fetchLedgerDiff(
std::uint32_t const ledgerSequence,
boost::asio::yield_context& yield) const
CassandraBackend::fetchLedgerDiff(std::uint32_t const ledgerSequence, boost::asio::yield_context& yield) const
{
CassandraStatement statement{selectDiff_};
statement.bindNextInt(ledgerSequence);
@@ -897,29 +855,19 @@ CassandraBackend::fetchLedgerDiff(
{
keys.push_back(result.getUInt256());
} while (result.nextRow());
log_.debug() << "Fetched " << keys.size()
<< " diff hashes from Cassandra in "
<< std::chrono::duration_cast<std::chrono::milliseconds>(
end - start)
.count()
<< " milliseconds";
log_.debug() << "Fetched " << keys.size() << " diff hashes from Cassandra in "
<< std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count() << " milliseconds";
auto objs = fetchLedgerObjects(keys, ledgerSequence, yield);
std::vector<LedgerObject> results;
std::transform(
keys.begin(),
keys.end(),
objs.begin(),
std::back_inserter(results),
[](auto const& k, auto const& o) {
keys.begin(), keys.end(), objs.begin(), std::back_inserter(results), [](auto const& k, auto const& o) {
return LedgerObject{k, o};
});
return results;
}
bool
CassandraBackend::doOnlineDelete(
std::uint32_t const numLedgersToKeep,
boost::asio::yield_context& yield) const
CassandraBackend::doOnlineDelete(std::uint32_t const numLedgersToKeep, boost::asio::yield_context& yield) const
{
// calculate TTL
// ledgers close roughly every 4 seconds. We double the TTL so that way
@@ -952,17 +900,15 @@ CassandraBackend::doOnlineDelete(
std::optional<ripple::uint256> cursor;
while (true)
{
auto [objects, curCursor] = retryOnTimeout([&]() {
return fetchLedgerPage(cursor, minLedger, 256, false, yield);
});
auto [objects, curCursor] =
retryOnTimeout([&]() { return fetchLedgerPage(cursor, minLedger, 256, false, yield); });
for (auto& obj : objects)
{
++numOutstanding;
cbs.push_back(makeAndExecuteBulkAsyncWrite(
this,
std::make_tuple(
std::move(obj.key), minLedger, std::move(obj.blob)),
std::make_tuple(std::move(obj.key), minLedger, std::move(obj.blob)),
bind,
numOutstanding,
mtx,
@@ -970,9 +916,7 @@ CassandraBackend::doOnlineDelete(
std::unique_lock<std::mutex> lck(mtx);
log_.trace() << "Got the mutex";
cv.wait(lck, [&numOutstanding, concurrentLimit]() {
return numOutstanding < concurrentLimit;
});
cv.wait(lck, [&numOutstanding, concurrentLimit]() { return numOutstanding < concurrentLimit; });
}
log_.debug() << "Fetched a page";
cursor = curCursor;
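The cv.wait throttle in doOnlineDelete keeps at most concurrentLimit bulk writes outstanding: the submitter blocks until a completing write decrements the counter and notifies. That shape, extracted into a standalone sketch (the Throttle struct and method names are assumptions):

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>

// Sketch of doOnlineDelete's throttling: the submitter blocks on the
// condition variable until fewer than concurrentLimit writes are in flight;
// each completing write decrements the counter and notifies.
struct Throttle
{
    std::mutex mtx;
    std::condition_variable cv;
    std::atomic_int numOutstanding{0};

    void
    acquire(int concurrentLimit)
    {
        std::unique_lock<std::mutex> lck(mtx);
        cv.wait(lck, [&] { return numOutstanding < concurrentLimit; });
        ++numOutstanding;
    }

    void
    release()
    {
        {
            std::lock_guard<std::mutex> lck(mtx);
            --numOutstanding;
        }
        cv.notify_one();
    }
};
```

In the diff the release side lives in the bulk-write callback, which is why numOutstanding is atomic and why the notify happens outside the section that mutates shared state.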
@@ -1010,15 +954,13 @@ CassandraBackend::open(bool readOnly)
if (!cluster)
throw std::runtime_error("nodestore:: Failed to create CassCluster");
std::string secureConnectBundle =
config_.valueOr<std::string>("secure_connect_bundle", "");
std::string secureConnectBundle = config_.valueOr<std::string>("secure_connect_bundle", "");
if (!secureConnectBundle.empty())
{
/* Setup driver to connect to the cloud using the secure connection
* bundle */
if (cass_cluster_set_cloud_secure_connection_bundle(
cluster, secureConnectBundle.c_str()) != CASS_OK)
if (cass_cluster_set_cloud_secure_connection_bundle(cluster, secureConnectBundle.c_str()) != CASS_OK)
{
log_.error() << "Unable to configure cloud using the "
"secure connection bundle: "
@@ -1032,15 +974,12 @@ CassandraBackend::open(bool readOnly)
else
{
std::string contact_points = config_.valueOrThrow<std::string>(
"contact_points",
"nodestore: Missing contact_points in Cassandra config");
CassError rc =
cass_cluster_set_contact_points(cluster, contact_points.c_str());
"contact_points", "nodestore: Missing contact_points in Cassandra config");
CassError rc = cass_cluster_set_contact_points(cluster, contact_points.c_str());
if (rc != CASS_OK)
{
std::stringstream ss;
ss << "nodestore: Error setting Cassandra contact_points: "
<< contact_points << ", result: " << rc << ", "
ss << "nodestore: Error setting Cassandra contact_points: " << contact_points << ", result: " << rc << ", "
<< cass_error_desc(rc);
throw std::runtime_error(ss.str());
@@ -1053,16 +992,15 @@ CassandraBackend::open(bool readOnly)
if (rc != CASS_OK)
{
std::stringstream ss;
ss << "nodestore: Error setting Cassandra port: " << *port
<< ", result: " << rc << ", " << cass_error_desc(rc);
ss << "nodestore: Error setting Cassandra port: " << *port << ", result: " << rc << ", "
<< cass_error_desc(rc);
throw std::runtime_error(ss.str());
}
}
}
cass_cluster_set_token_aware_routing(cluster, cass_true);
CassError rc =
cass_cluster_set_protocol_version(cluster, CASS_PROTOCOL_VERSION_V4);
CassError rc = cass_cluster_set_protocol_version(cluster, CASS_PROTOCOL_VERSION_V4);
if (rc != CASS_OK)
{
std::stringstream ss;
@@ -1077,40 +1015,32 @@ CassandraBackend::open(bool readOnly)
{
log_.debug() << "user = " << *username;
auto password = config_.value<std::string>("password");
cass_cluster_set_credentials(
cluster, username->c_str(), password.c_str());
cass_cluster_set_credentials(cluster, username->c_str(), password.c_str());
}
auto threads =
config_.valueOr<int>("threads", std::thread::hardware_concurrency());
auto threads = config_.valueOr<int>("threads", std::thread::hardware_concurrency());
rc = cass_cluster_set_num_threads_io(cluster, threads);
if (rc != CASS_OK)
{
std::stringstream ss;
ss << "nodestore: Error setting Cassandra io threads to " << threads
<< ", result: " << rc << ", " << cass_error_desc(rc);
ss << "nodestore: Error setting Cassandra io threads to " << threads << ", result: " << rc << ", "
<< cass_error_desc(rc);
throw std::runtime_error(ss.str());
}
maxWriteRequestsOutstanding = config_.valueOr<int>(
"max_write_requests_outstanding", maxWriteRequestsOutstanding);
maxReadRequestsOutstanding = config_.valueOr<int>(
"max_read_requests_outstanding", maxReadRequestsOutstanding);
maxWriteRequestsOutstanding = config_.valueOr<int>("max_write_requests_outstanding", maxWriteRequestsOutstanding);
maxReadRequestsOutstanding = config_.valueOr<int>("max_read_requests_outstanding", maxReadRequestsOutstanding);
syncInterval_ = config_.valueOr<int>("sync_interval", syncInterval_);
- log_.info() << "Sync interval is " << syncInterval_
-             << ". max write requests outstanding is "
-             << maxWriteRequestsOutstanding
-             << ". max read requests outstanding is "
-             << maxReadRequestsOutstanding;
+ log_.info() << "Sync interval is " << syncInterval_ << ". max write requests outstanding is "
+             << maxWriteRequestsOutstanding << ". max read requests outstanding is " << maxReadRequestsOutstanding;
cass_cluster_set_request_timeout(cluster, 10000);
rc = cass_cluster_set_queue_size_io(
cluster,
-     maxWriteRequestsOutstanding +
-         maxReadRequestsOutstanding); // This number needs to scale w/ the
-                                      // number of request per sec
+     maxWriteRequestsOutstanding + maxReadRequestsOutstanding); // This number needs to scale w/ the
+                                                                // number of request per sec
if (rc != CASS_OK)
{
std::stringstream ss;
@@ -1123,17 +1053,14 @@ CassandraBackend::open(bool readOnly)
if (auto certfile = config_.maybeValue<std::string>("certfile"); certfile)
{
- std::ifstream fileStream(
-     boost::filesystem::path(*certfile).string(), std::ios::in);
+ std::ifstream fileStream(boost::filesystem::path(*certfile).string(), std::ios::in);
if (!fileStream)
{
std::stringstream ss;
ss << "opening config file " << *certfile;
throw std::system_error(errno, std::generic_category(), ss.str());
}
- std::string cert(
-     std::istreambuf_iterator<char>{fileStream},
-     std::istreambuf_iterator<char>{});
+ std::string cert(std::istreambuf_iterator<char>{fileStream}, std::istreambuf_iterator<char>{});
if (fileStream.bad())
{
std::stringstream ss;
@@ -1147,8 +1074,7 @@ CassandraBackend::open(bool readOnly)
if (rc != CASS_OK)
{
std::stringstream ss;
- ss << "nodestore: Error setting Cassandra ssl context: " << rc
-    << ", " << cass_error_desc(rc);
+ ss << "nodestore: Error setting Cassandra ssl context: " << rc << ", " << cass_error_desc(rc);
throw std::runtime_error(ss.str());
}
@@ -1184,8 +1110,8 @@ CassandraBackend::open(bool readOnly)
if (rc != CASS_OK && rc != CASS_ERROR_SERVER_INVALID_QUERY)
{
std::stringstream ss;
- ss << "nodestore: Error executing simple statement: " << rc << ", "
-    << cass_error_desc(rc) << " - " << query;
+ ss << "nodestore: Error executing simple statement: " << rc << ", " << cass_error_desc(rc) << " - "
+    << query;
log_.error() << ss.str();
return false;
}
@@ -1199,15 +1125,13 @@ CassandraBackend::open(bool readOnly)
session_.reset(cass_session_new());
assert(session_);
- fut = cass_session_connect_keyspace(
-     session_.get(), cluster, keyspace.c_str());
+ fut = cass_session_connect_keyspace(session_.get(), cluster, keyspace.c_str());
rc = cass_future_error_code(fut);
cass_future_free(fut);
if (rc != CASS_OK)
{
std::stringstream ss;
- ss << "nodestore: Error connecting Cassandra session keyspace: "
-    << rc << ", " << cass_error_desc(rc)
+ ss << "nodestore: Error connecting Cassandra session keyspace: " << rc << ", " << cass_error_desc(rc)
<< ", trying to create it ourselves";
log_.error() << ss.str();
// if the keyspace doesn't exist, try to create it
@@ -1218,8 +1142,7 @@ CassandraBackend::open(bool readOnly)
if (rc != CASS_OK)
{
std::stringstream ss;
- ss << "nodestore: Error connecting Cassandra session at all: "
-    << rc << ", " << cass_error_desc(rc);
+ ss << "nodestore: Error connecting Cassandra session at all: " << rc << ", " << cass_error_desc(rc);
log_.error() << ss.str();
}
else
@@ -1256,16 +1179,14 @@ CassandraBackend::open(bool readOnly)
continue;
query.str("");
- query
-     << "CREATE TABLE IF NOT EXISTS " << tablePrefix << "transactions"
-     << " ( hash blob PRIMARY KEY, ledger_sequence bigint, date bigint, "
-        "transaction blob, metadata blob)"
-     << " WITH default_time_to_live = " << std::to_string(ttl);
+ query << "CREATE TABLE IF NOT EXISTS " << tablePrefix << "transactions"
+       << " ( hash blob PRIMARY KEY, ledger_sequence bigint, date bigint, "
+          "transaction blob, metadata blob)"
+       << " WITH default_time_to_live = " << std::to_string(ttl);
if (!executeSimpleStatement(query.str()))
continue;
query.str("");
- query << "CREATE TABLE IF NOT EXISTS " << tablePrefix
-       << "ledger_transactions"
+ query << "CREATE TABLE IF NOT EXISTS " << tablePrefix << "ledger_transactions"
<< " ( ledger_sequence bigint, hash blob, PRIMARY "
"KEY(ledger_sequence, hash))"
<< " WITH default_time_to_live = " << std::to_string(ttl);
@@ -1387,25 +1308,45 @@ CassandraBackend::open(bool readOnly)
continue;
query.str("");
- query << "CREATE TABLE IF NOT EXISTS " << tablePrefix
-       << "issuer_nf_tokens"
+ query << "CREATE TABLE IF NOT EXISTS " << tablePrefix << "issuer_nf_tokens_v2"
        << " ("
        << "  issuer blob,"
+       << "  taxon bigint,"
        << "  token_id blob,"
-       << "  PRIMARY KEY (issuer, token_id)"
-       << " )";
+       << "  PRIMARY KEY (issuer, taxon, token_id)"
+       << " )"
+       << " WITH CLUSTERING ORDER BY (taxon ASC, token_id ASC)"
+       << " AND default_time_to_live = " << ttl;
if (!executeSimpleStatement(query.str()))
continue;
query.str("");
- query << "SELECT * FROM " << tablePrefix << "issuer_nf_tokens"
+ query << "SELECT * FROM " << tablePrefix << "issuer_nf_tokens_v2"
<< " LIMIT 1";
if (!executeSimpleStatement(query.str()))
continue;
query.str("");
- query << "CREATE TABLE IF NOT EXISTS " << tablePrefix
-       << "nf_token_transactions"
+ query << "CREATE TABLE IF NOT EXISTS " << tablePrefix << "nf_token_uris"
+       << " ("
+       << "  token_id blob,"
+       << "  sequence bigint,"
+       << "  uri blob,"
+       << "  PRIMARY KEY (token_id, sequence)"
+       << " )"
+       << " WITH CLUSTERING ORDER BY (sequence DESC)"
+       << " AND default_time_to_live = " << ttl;
+ if (!executeSimpleStatement(query.str()))
+     continue;
+ query.str("");
+ query << "SELECT * FROM " << tablePrefix << "nf_token_uris"
+       << " LIMIT 1";
+ if (!executeSimpleStatement(query.str()))
+     continue;
+ query.str("");
+ query << "CREATE TABLE IF NOT EXISTS " << tablePrefix << "nf_token_transactions"
<< " ("
<< " token_id blob,"
<< " seq_idx tuple<bigint, bigint>,"
@@ -1484,8 +1425,7 @@ CassandraBackend::open(bool readOnly)
continue;
query.str("");
- query << "SELECT transaction, metadata, ledger_sequence, date FROM "
-       << tablePrefix << "transactions"
+ query << "SELECT transaction, metadata, ledger_sequence, date FROM " << tablePrefix << "transactions"
<< " WHERE hash = ?";
if (!selectTransaction_.prepareStatement(query, session_.get()))
continue;
@@ -1493,8 +1433,7 @@ CassandraBackend::open(bool readOnly)
query.str("");
query << "SELECT hash FROM " << tablePrefix << "ledger_transactions"
<< " WHERE ledger_sequence = ?";
- if (!selectAllTransactionHashesInLedger_.prepareStatement(
-         query, session_.get()))
+ if (!selectAllTransactionHashesInLedger_.prepareStatement(query, session_.get()))
continue;
query.str("");
@@ -1558,12 +1497,28 @@ CassandraBackend::open(bool readOnly)
continue;
query.str("");
- query << "INSERT INTO " << tablePrefix << "issuer_nf_tokens"
-       << " (issuer,token_id)"
-       << " VALUES (?,?)";
+ query << "INSERT INTO " << tablePrefix << "issuer_nf_tokens_v2"
+       << " (issuer,taxon,token_id)"
+       << " VALUES (?,?,?)";
if (!insertIssuerNFT_.prepareStatement(query, session_.get()))
continue;
+ query.str("");
+ query << "INSERT INTO " << tablePrefix << "nf_token_uris"
+       << " (token_id,sequence,uri)"
+       << " VALUES (?,?,?)";
+ if (!insertNFTURI_.prepareStatement(query, session_.get()))
+     continue;
+ query.str("");
+ query << "SELECT uri FROM " << tablePrefix << "nf_token_uris"
+       << " WHERE token_id = ? AND"
+       << " sequence <= ?"
+       << " ORDER BY sequence DESC"
+       << " LIMIT 1";
+ if (!selectNFTURI_.prepareStatement(query, session_.get()))
+     continue;
query.str("");
query << "INSERT INTO " << tablePrefix << "nf_token_transactions"
<< " (token_id,seq_idx,hash)"
@@ -1623,14 +1578,12 @@ CassandraBackend::open(bool readOnly)
continue;
query.str("");
- query << " select header from " << tablePrefix
-       << "ledgers where sequence = ?";
+ query << " select header from " << tablePrefix << "ledgers where sequence = ?";
if (!selectLedgerBySeq_.prepareStatement(query, session_.get()))
continue;
query.str("");
- query << " select sequence from " << tablePrefix
-       << "ledger_range where is_latest = true";
+ query << " select sequence from " << tablePrefix << "ledger_range where is_latest = true";
if (!selectLatestLedger_.prepareStatement(query, session_.get()))
continue;

View File

@@ -85,8 +85,8 @@ public:
else
{
std::stringstream ss;
- ss << "nodestore: Error preparing statement : " << rc << ", "
-    << cass_error_desc(rc) << ". query : " << query;
+ ss << "nodestore: Error preparing statement : " << rc << ", " << cass_error_desc(rc)
+    << ". query : " << query;
log_.error() << ss.str();
}
cass_future_free(prepareFuture);
@@ -137,15 +137,12 @@ public:
bindNextBoolean(bool val)
{
if (!statement_)
- throw std::runtime_error(
-     "CassandraStatement::bindNextBoolean - statement_ is null");
- CassError rc = cass_statement_bind_bool(
-     statement_, curBindingIndex_, static_cast<cass_bool_t>(val));
+ throw std::runtime_error("CassandraStatement::bindNextBoolean - statement_ is null");
+ CassError rc = cass_statement_bind_bool(statement_, curBindingIndex_, static_cast<cass_bool_t>(val));
if (rc != CASS_OK)
{
std::stringstream ss;
- ss << "Error binding boolean to statement: " << rc << ", "
-    << cass_error_desc(rc);
+ ss << "Error binding boolean to statement: " << rc << ", " << cass_error_desc(rc);
log_.error() << ss.str();
throw std::runtime_error(ss.str());
}
@@ -190,18 +187,13 @@ public:
bindNextBytes(const unsigned char* data, std::uint32_t const size)
{
if (!statement_)
- throw std::runtime_error(
-     "CassandraStatement::bindNextBytes - statement_ is null");
- CassError rc = cass_statement_bind_bytes(
-     statement_,
-     curBindingIndex_,
-     static_cast<cass_byte_t const*>(data),
-     size);
+ throw std::runtime_error("CassandraStatement::bindNextBytes - statement_ is null");
+ CassError rc =
+     cass_statement_bind_bytes(statement_, curBindingIndex_, static_cast<cass_byte_t const*>(data), size);
if (rc != CASS_OK)
{
std::stringstream ss;
- ss << "Error binding bytes to statement: " << rc << ", "
-    << cass_error_desc(rc);
+ ss << "Error binding bytes to statement: " << rc << ", " << cass_error_desc(rc);
log_.error() << ss.str();
throw std::runtime_error(ss.str());
}
@@ -212,17 +204,13 @@ public:
bindNextUInt(std::uint32_t const value)
{
if (!statement_)
- throw std::runtime_error(
-     "CassandraStatement::bindNextUInt - statement_ is null");
- log_.trace() << std::to_string(curBindingIndex_) << " "
-              << std::to_string(value);
- CassError rc =
-     cass_statement_bind_int32(statement_, curBindingIndex_, value);
+ throw std::runtime_error("CassandraStatement::bindNextUInt - statement_ is null");
+ log_.trace() << std::to_string(curBindingIndex_) << " " << std::to_string(value);
+ CassError rc = cass_statement_bind_int32(statement_, curBindingIndex_, value);
if (rc != CASS_OK)
{
std::stringstream ss;
- ss << "Error binding uint to statement: " << rc << ", "
-    << cass_error_desc(rc);
+ ss << "Error binding uint to statement: " << rc << ", " << cass_error_desc(rc);
log_.error() << ss.str();
throw std::runtime_error(ss.str());
}
@@ -239,15 +227,12 @@ public:
bindNextInt(int64_t value)
{
if (!statement_)
- throw std::runtime_error(
-     "CassandraStatement::bindNextInt - statement_ is null");
- CassError rc =
-     cass_statement_bind_int64(statement_, curBindingIndex_, value);
+ throw std::runtime_error("CassandraStatement::bindNextInt - statement_ is null");
+ CassError rc = cass_statement_bind_int64(statement_, curBindingIndex_, value);
if (rc != CASS_OK)
{
std::stringstream ss;
- ss << "Error binding int to statement: " << rc << ", "
-    << cass_error_desc(rc);
+ ss << "Error binding int to statement: " << rc << ", " << cass_error_desc(rc);
log_.error() << ss.str();
throw std::runtime_error(ss.str());
}
@@ -262,8 +247,7 @@ public:
if (rc != CASS_OK)
{
std::stringstream ss;
- ss << "Error binding int to tuple: " << rc << ", "
-    << cass_error_desc(rc);
+ ss << "Error binding int to tuple: " << rc << ", " << cass_error_desc(rc);
log_.error() << ss.str();
throw std::runtime_error(ss.str());
}
@@ -271,8 +255,7 @@ public:
if (rc != CASS_OK)
{
std::stringstream ss;
- ss << "Error binding int to tuple: " << rc << ", "
-    << cass_error_desc(rc);
+ ss << "Error binding int to tuple: " << rc << ", " << cass_error_desc(rc);
log_.error() << ss.str();
throw std::runtime_error(ss.str());
}
@@ -280,8 +263,7 @@ public:
if (rc != CASS_OK)
{
std::stringstream ss;
- ss << "Error binding tuple to statement: " << rc << ", "
-    << cass_error_desc(rc);
+ ss << "Error binding tuple to statement: " << rc << ", " << cass_error_desc(rc);
log_.error() << ss.str();
throw std::runtime_error(ss.str());
}
@@ -382,13 +364,11 @@ public:
throw std::runtime_error("CassandraResult::getBytes - no result");
cass_byte_t const* buf;
std::size_t bufSize;
- CassError rc = cass_value_get_bytes(
-     cass_row_get_column(row_, curGetIndex_), &buf, &bufSize);
+ CassError rc = cass_value_get_bytes(cass_row_get_column(row_, curGetIndex_), &buf, &bufSize);
if (rc != CASS_OK)
{
std::stringstream msg;
- msg << "CassandraResult::getBytes - error getting value: " << rc
-     << ", " << cass_error_desc(rc);
+ msg << "CassandraResult::getBytes - error getting value: " << rc << ", " << cass_error_desc(rc);
log_.error() << msg.str();
throw std::runtime_error(msg.str());
}
@@ -403,13 +383,11 @@ public:
throw std::runtime_error("CassandraResult::uint256 - no result");
cass_byte_t const* buf;
std::size_t bufSize;
- CassError rc = cass_value_get_bytes(
-     cass_row_get_column(row_, curGetIndex_), &buf, &bufSize);
+ CassError rc = cass_value_get_bytes(cass_row_get_column(row_, curGetIndex_), &buf, &bufSize);
if (rc != CASS_OK)
{
std::stringstream msg;
- msg << "CassandraResult::getuint256 - error getting value: " << rc
-     << ", " << cass_error_desc(rc);
+ msg << "CassandraResult::getuint256 - error getting value: " << rc << ", " << cass_error_desc(rc);
log_.error() << msg.str();
throw std::runtime_error(msg.str());
}
@@ -423,13 +401,11 @@ public:
if (!row_)
throw std::runtime_error("CassandraResult::getInt64 - no result");
cass_int64_t val;
- CassError rc =
-     cass_value_get_int64(cass_row_get_column(row_, curGetIndex_), &val);
+ CassError rc = cass_value_get_int64(cass_row_get_column(row_, curGetIndex_), &val);
if (rc != CASS_OK)
{
std::stringstream msg;
- msg << "CassandraResult::getInt64 - error getting value: " << rc
-     << ", " << cass_error_desc(rc);
+ msg << "CassandraResult::getInt64 - error getting value: " << rc << ", " << cass_error_desc(rc);
log_.error() << msg.str();
throw std::runtime_error(msg.str());
}
@@ -447,8 +423,7 @@ public:
getInt64Tuple()
{
if (!row_)
- throw std::runtime_error(
-     "CassandraResult::getInt64Tuple - no result");
+ throw std::runtime_error("CassandraResult::getInt64Tuple - no result");
CassValue const* tuple = cass_row_get_column(row_, curGetIndex_);
CassIterator* tupleIter = cass_iterator_from_tuple(tuple);
@@ -456,8 +431,7 @@ public:
if (!cass_iterator_next(tupleIter))
{
cass_iterator_free(tupleIter);
- throw std::runtime_error(
-     "CassandraResult::getInt64Tuple - failed to iterate tuple");
+ throw std::runtime_error("CassandraResult::getInt64Tuple - failed to iterate tuple");
}
CassValue const* value = cass_iterator_get_value(tupleIter);
@@ -466,8 +440,7 @@ public:
if (!cass_iterator_next(tupleIter))
{
cass_iterator_free(tupleIter);
- throw std::runtime_error(
-     "CassandraResult::getInt64Tuple - failed to iterate tuple");
+ throw std::runtime_error("CassandraResult::getInt64Tuple - failed to iterate tuple");
}
value = cass_iterator_get_value(tupleIter);
@@ -486,20 +459,17 @@ public:
std::size_t bufSize;
if (!row_)
- throw std::runtime_error(
-     "CassandraResult::getBytesTuple - no result");
+ throw std::runtime_error("CassandraResult::getBytesTuple - no result");
CassValue const* tuple = cass_row_get_column(row_, curGetIndex_);
CassIterator* tupleIter = cass_iterator_from_tuple(tuple);
if (!cass_iterator_next(tupleIter))
- throw std::runtime_error(
-     "CassandraResult::getBytesTuple - failed to iterate tuple");
+ throw std::runtime_error("CassandraResult::getBytesTuple - failed to iterate tuple");
CassValue const* value = cass_iterator_get_value(tupleIter);
cass_value_get_bytes(value, &buf, &bufSize);
Blob first{buf, buf + bufSize};
if (!cass_iterator_next(tupleIter))
- throw std::runtime_error(
-     "CassandraResult::getBytesTuple - failed to iterate tuple");
+ throw std::runtime_error("CassandraResult::getBytesTuple - failed to iterate tuple");
value = cass_iterator_get_value(tupleIter);
cass_value_get_bytes(value, &buf, &bufSize);
Blob second{buf, buf + bufSize};
@@ -519,8 +489,7 @@ public:
throw std::runtime_error(msg);
}
cass_bool_t val;
- CassError rc =
-     cass_value_get_bool(cass_row_get_column(row_, curGetIndex_), &val);
+ CassError rc = cass_value_get_bool(cass_row_get_column(row_, curGetIndex_), &val);
if (rc != CASS_OK)
{
std::stringstream msg;
@@ -544,10 +513,8 @@ public:
inline bool
isTimeout(CassError rc)
{
- if (rc == CASS_ERROR_LIB_NO_HOSTS_AVAILABLE or
-     rc == CASS_ERROR_LIB_REQUEST_TIMED_OUT or
-     rc == CASS_ERROR_SERVER_UNAVAILABLE or
-     rc == CASS_ERROR_SERVER_OVERLOADED or
+ if (rc == CASS_ERROR_LIB_NO_HOSTS_AVAILABLE or rc == CASS_ERROR_LIB_REQUEST_TIMED_OUT or
+     rc == CASS_ERROR_SERVER_UNAVAILABLE or rc == CASS_ERROR_SERVER_OVERLOADED or
rc == CASS_ERROR_SERVER_READ_TIMEOUT)
return true;
return false;
@@ -558,8 +525,7 @@ CassError
cass_future_error_code(CassFuture* fut, CompletionToken&& token)
{
using function_type = void(boost::system::error_code, CassError);
- using result_type =
-     boost::asio::async_result<CompletionToken, function_type>;
+ using result_type = boost::asio::async_result<CompletionToken, function_type>;
using handler_type = typename result_type::completion_handler_type;
handler_type handler(std::forward<decltype(token)>(token));
@@ -578,12 +544,10 @@ cass_future_error_code(CassFuture* fut, CompletionToken&& token)
HandlerWrapper* hw = (HandlerWrapper*)data;
boost::asio::post(
-     boost::asio::get_associated_executor(hw->handler),
-     [fut, hw, handler = std::move(hw->handler)]() mutable {
+     boost::asio::get_associated_executor(hw->handler), [fut, hw, handler = std::move(hw->handler)]() mutable {
          delete hw;
-         handler(
-             boost::system::error_code{}, cass_future_error_code(fut));
+         handler(boost::system::error_code{}, cass_future_error_code(fut));
});
};
@@ -608,13 +572,12 @@ private:
makeStatement(char const* query, std::size_t params)
{
CassStatement* ret = cass_statement_new(query, params);
- CassError rc =
-     cass_statement_set_consistency(ret, CASS_CONSISTENCY_QUORUM);
+ CassError rc = cass_statement_set_consistency(ret, CASS_CONSISTENCY_QUORUM);
if (rc != CASS_OK)
{
std::stringstream ss;
- ss << "nodestore: Error setting query consistency: " << query
-    << ", result: " << rc << ", " << cass_error_desc(rc);
+ ss << "nodestore: Error setting query consistency: " << query << ", result: " << rc << ", "
+    << cass_error_desc(rc);
throw std::runtime_error(ss.str());
}
return ret;
@@ -623,15 +586,13 @@ private:
clio::Logger log_{"Backend"};
std::atomic<bool> open_{false};
- std::unique_ptr<CassSession, void (*)(CassSession*)> session_{
-     nullptr,
-     [](CassSession* session) {
-         // Try to disconnect gracefully.
-         CassFuture* fut = cass_session_close(session);
-         cass_future_wait(fut);
-         cass_future_free(fut);
-         cass_session_free(session);
-     }};
+ std::unique_ptr<CassSession, void (*)(CassSession*)> session_{nullptr, [](CassSession* session) {
+     // Try to disconnect gracefully.
+     CassFuture* fut = cass_session_close(session);
+     cass_future_wait(fut);
+     cass_future_free(fut);
+     cass_session_free(session);
+ }};
// Database statements cached server side. Using these is more efficient
// than making a new statement
@@ -655,6 +616,8 @@ private:
CassandraPreparedStatement insertNFT_;
CassandraPreparedStatement selectNFT_;
CassandraPreparedStatement insertIssuerNFT_;
+ CassandraPreparedStatement insertNFTURI_;
+ CassandraPreparedStatement selectNFTURI_;
CassandraPreparedStatement insertNFTTx_;
CassandraPreparedStatement selectNFTTx_;
CassandraPreparedStatement selectNFTTxForward_;
@@ -702,11 +665,8 @@ private:
mutable std::uint32_t ledgerSequence_ = 0;
public:
- CassandraBackend(
-     boost::asio::io_context& ioc,
-     clio::Config const& config,
-     uint32_t ttl)
-     : BackendInterface(config), config_(config), ttl_(ttl)
+ CassandraBackend(boost::asio::io_context& ioc, clio::Config const& config, uint32_t ttl)
+     : config_(config), ttl_(ttl)
{
work_.emplace(ioContext_);
ioThread_ = std::thread([this]() { ioContext_.run(); });
@@ -775,8 +735,7 @@ public:
statement.bindNextInt(ledgerSequence_ - 1);
if (!executeSyncUpdate(statement))
{
- log_.warn() << "Update failed for ledger "
-             << std::to_string(ledgerSequence_) << ". Returning";
+ log_.warn() << "Update failed for ledger " << std::to_string(ledgerSequence_) << ". Returning";
return false;
}
log_.info() << "Committed ledger " << std::to_string(ledgerSequence_);
@@ -790,8 +749,7 @@ public:
// if db is empty, sync. if sync interval is 1, always sync.
// if we've never synced, sync. if its been greater than the configured
// sync interval since we last synced, sync.
- if (!range || lastSync_ == 0 ||
-     ledgerSequence_ - syncInterval_ >= lastSync_)
+ if (!range || lastSync_ == 0 || ledgerSequence_ - syncInterval_ >= lastSync_)
{
// wait for all other writes to finish
sync();
@@ -813,20 +771,16 @@ public:
statement.bindNextInt(lastSync_);
if (!executeSyncUpdate(statement))
{
- log_.warn() << "Update failed for ledger "
-             << std::to_string(ledgerSequence_) << ". Returning";
+ log_.warn() << "Update failed for ledger " << std::to_string(ledgerSequence_) << ". Returning";
return false;
}
- log_.info() << "Committed ledger "
-             << std::to_string(ledgerSequence_);
+ log_.info() << "Committed ledger " << std::to_string(ledgerSequence_);
lastSync_ = ledgerSequence_;
}
else
{
- log_.info() << "Skipping commit. sync interval is "
-             << std::to_string(syncInterval_) << " - last sync is "
-             << std::to_string(lastSync_) << " - ledger sequence is "
-             << std::to_string(ledgerSequence_);
+ log_.info() << "Skipping commit. sync interval is " << std::to_string(syncInterval_) << " - last sync is "
+             << std::to_string(lastSync_) << " - ledger sequence is " << std::to_string(ledgerSequence_);
}
return true;
}
@@ -840,8 +794,7 @@ public:
return doFinishWritesAsync();
}
void
- writeLedger(ripple::LedgerInfo const& ledgerInfo, std::string&& header)
-     override;
+ writeLedger(ripple::LedgerInfo const& ledgerInfo, std::string&& header) override;
std::optional<std::uint32_t>
fetchLatestLedgerSequence(boost::asio::yield_context& yield) const override
@@ -851,17 +804,14 @@ public:
CassandraResult result = executeAsyncRead(statement, yield);
if (!result.hasResult())
{
- log_.error()
-     << "CassandraBackend::fetchLatestLedgerSequence - no rows";
+ log_.error() << "CassandraBackend::fetchLatestLedgerSequence - no rows";
return {};
}
return result.getUInt32();
}
std::optional<ripple::LedgerInfo>
- fetchLedgerBySequence(
-     std::uint32_t const sequence,
-     boost::asio::yield_context& yield) const override
+ fetchLedgerBySequence(std::uint32_t const sequence, boost::asio::yield_context& yield) const override
{
log_.trace() << "called";
CassandraStatement statement{selectLedgerBySeq_};
@@ -877,9 +827,7 @@ public:
}
std::optional<ripple::LedgerInfo>
- fetchLedgerByHash(
-     ripple::uint256 const& hash,
-     boost::asio::yield_context& yield) const override
+ fetchLedgerByHash(ripple::uint256 const& hash, boost::asio::yield_context& yield) const override
{
CassandraStatement statement{selectLedgerByHash_};
@@ -902,20 +850,15 @@ public:
hardFetchLedgerRange(boost::asio::yield_context& yield) const override;
std::vector<TransactionAndMetadata>
- fetchAllTransactionsInLedger(
-     std::uint32_t const ledgerSequence,
-     boost::asio::yield_context& yield) const override;
+ fetchAllTransactionsInLedger(std::uint32_t const ledgerSequence, boost::asio::yield_context& yield) const override;
std::vector<ripple::uint256>
- fetchAllTransactionHashesInLedger(
-     std::uint32_t const ledgerSequence,
-     boost::asio::yield_context& yield) const override;
+ fetchAllTransactionHashesInLedger(std::uint32_t const ledgerSequence, boost::asio::yield_context& yield)
+     const override;
std::optional<NFT>
- fetchNFT(
-     ripple::uint256 const& tokenID,
-     std::uint32_t const ledgerSequence,
-     boost::asio::yield_context& yield) const override;
+ fetchNFT(ripple::uint256 const& tokenID, std::uint32_t const ledgerSequence, boost::asio::yield_context& yield)
+     const override;
TransactionsAndCursor
fetchNFTTransactions(
@@ -928,36 +871,11 @@ public:
// Synchronously fetch the object with key key, as of ledger with sequence
// sequence
std::optional<Blob>
- doFetchLedgerObject(
-     ripple::uint256 const& key,
-     std::uint32_t const sequence,
-     boost::asio::yield_context& yield) const override;
- std::optional<int64_t>
- getToken(void const* key, boost::asio::yield_context& yield) const
- {
-     log_.trace() << "Fetching from cassandra";
-     CassandraStatement statement{getToken_};
-     statement.bindNextBytes(key, 32);
-     CassandraResult result = executeAsyncRead(statement, yield);
-     if (!result)
-     {
-         log_.error() << "No rows";
-         return {};
-     }
-     int64_t token = result.getInt64();
-     if (token == INT64_MAX)
-         return {};
-     else
-         return token + 1;
- }
+ doFetchLedgerObject(ripple::uint256 const& key, std::uint32_t const sequence, boost::asio::yield_context& yield)
+     const override;
std::optional<TransactionAndMetadata>
- fetchTransaction(
-     ripple::uint256 const& hash,
-     boost::asio::yield_context& yield) const override
+ fetchTransaction(ripple::uint256 const& hash, boost::asio::yield_context& yield) const override
{
log_.trace() << "called";
CassandraStatement statement{selectTransaction_};
@@ -969,23 +887,15 @@ public:
log_.error() << "No rows";
return {};
}
- return {
-     {result.getBytes(),
-      result.getBytes(),
-      result.getUInt32(),
-      result.getUInt32()}};
+ return {{result.getBytes(), result.getBytes(), result.getUInt32(), result.getUInt32()}};
}
std::optional<ripple::uint256>
- doFetchSuccessorKey(
-     ripple::uint256 key,
-     std::uint32_t const ledgerSequence,
-     boost::asio::yield_context& yield) const override;
+ doFetchSuccessorKey(ripple::uint256 key, std::uint32_t const ledgerSequence, boost::asio::yield_context& yield)
+     const override;
std::vector<TransactionAndMetadata>
- fetchTransactions(
-     std::vector<ripple::uint256> const& hashes,
-     boost::asio::yield_context& yield) const override;
+ fetchTransactions(std::vector<ripple::uint256> const& hashes, boost::asio::yield_context& yield) const override;
std::vector<Blob>
doFetchLedgerObjects(
@@ -994,25 +904,16 @@ public:
boost::asio::yield_context& yield) const override;
std::vector<LedgerObject>
- fetchLedgerDiff(
-     std::uint32_t const ledgerSequence,
-     boost::asio::yield_context& yield) const override;
+ fetchLedgerDiff(std::uint32_t const ledgerSequence, boost::asio::yield_context& yield) const override;
void
- doWriteLedgerObject(
-     std::string&& key,
-     std::uint32_t const seq,
-     std::string&& blob) override;
+ doWriteLedgerObject(std::string&& key, std::uint32_t const seq, std::string&& blob) override;
void
- writeSuccessor(
-     std::string&& key,
-     std::uint32_t const seq,
-     std::string&& successor) override;
+ writeSuccessor(std::string&& key, std::uint32_t const seq, std::string&& successor) override;
void
- writeAccountTransactions(
-     std::vector<AccountTransactionsData>&& data) override;
+ writeAccountTransactions(std::vector<AccountTransactionsData>&& data) override;
void
writeNFTTransactions(std::vector<NFTTransactionsData>&& data) override;
@@ -1042,9 +943,7 @@ public:
}
bool
- doOnlineDelete(
-     std::uint32_t const numLedgersToKeep,
-     boost::asio::yield_context& yield) const override;
+ doOnlineDelete(std::uint32_t const numLedgersToKeep, boost::asio::yield_context& yield) const override;
bool
isTooBusy() const override;
@@ -1109,26 +1008,18 @@ public:
template <class T, class S>
void
- executeAsyncHelper(
-     CassandraStatement const& statement,
-     T callback,
-     S& callbackData) const
+ executeAsyncHelper(CassandraStatement const& statement, T callback, S& callbackData) const
{
CassFuture* fut = cass_session_execute(session_.get(), statement.get());
- cass_future_set_callback(
-     fut, callback, static_cast<void*>(&callbackData));
+ cass_future_set_callback(fut, callback, static_cast<void*>(&callbackData));
cass_future_free(fut);
}
template <class T, class S>
void
- executeAsyncWrite(
-     CassandraStatement const& statement,
-     T callback,
-     S& callbackData,
-     bool isRetry) const
+ executeAsyncWrite(CassandraStatement const& statement, T callback, S& callbackData, bool isRetry) const
{
if (!isRetry)
incrementOutstandingRequestCount();
@@ -1137,10 +1028,7 @@ public:
template <class T, class S>
void
- executeAsyncRead(
-     CassandraStatement const& statement,
-     T callback,
-     S& callbackData) const
+ executeAsyncRead(CassandraStatement const& statement, T callback, S& callbackData) const
{
executeAsyncHelper(statement, callback, callbackData);
}
@@ -1203,8 +1091,7 @@ public:
if (rc != CASS_OK)
{
cass_result_free(res);
- log_.error() << "executeSyncUpdate - error getting result " << rc
-              << ", " << cass_error_desc(rc);
+ log_.error() << "executeSyncUpdate - error getting result " << rc << ", " << cass_error_desc(rc);
return false;
}
cass_result_free(res);
@@ -1225,13 +1112,10 @@ public:
}
CassandraResult
- executeAsyncRead(
-     CassandraStatement const& statement,
-     boost::asio::yield_context& yield) const
+ executeAsyncRead(CassandraStatement const& statement, boost::asio::yield_context& yield) const
{
-     using result = boost::asio::async_result<
-         boost::asio::yield_context,
-         void(boost::system::error_code, CassError)>;
+     using result =
+         boost::asio::async_result<boost::asio::yield_context, void(boost::system::error_code, CassError)>;
CassFuture* fut;
CassError rc;

View File

@@ -0,0 +1,851 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/BackendInterface.h>
#include <backend/cassandra/Concepts.h>
#include <backend/cassandra/Handle.h>
#include <backend/cassandra/Schema.h>
#include <backend/cassandra/SettingsProvider.h>
#include <backend/cassandra/impl/ExecutionStrategy.h>
#include <log/Logger.h>
#include <util/Profiler.h>
#include <ripple/app/tx/impl/details/NFTokenUtils.h>
#include <boost/asio/spawn.hpp>
namespace Backend::Cassandra {
/**
* @brief Implements @ref BackendInterface for Cassandra/Scylladb
*
* Note: this is a safer and more correct rewrite of the original implementation
* of the backend. We deliberately did not change the interface for now so that
* other parts such as ETL do not have to change at all.
* Eventually we should change the interface so that it does not have to know
* about yield_context.
*/
template <SomeSettingsProvider SettingsProviderType, SomeExecutionStrategy ExecutionStrategy>
class BasicCassandraBackend : public BackendInterface
{
clio::Logger log_{"Backend"};
SettingsProviderType settingsProvider_;
Schema<SettingsProviderType> schema_;
Handle handle_;
// have to be mutable because BackendInterface constness :(
mutable ExecutionStrategy executor_;
std::atomic_uint32_t ledgerSequence_ = 0u;
public:
/**
* @brief Create a new cassandra/scylla backend instance.
*
* @param settingsProvider
*/
BasicCassandraBackend(SettingsProviderType settingsProvider)
: settingsProvider_{std::move(settingsProvider)}
, schema_{settingsProvider_}
, handle_{settingsProvider_.getSettings()}
, executor_{settingsProvider_.getSettings(), handle_}
{
if (auto const res = handle_.connect(); not res)
throw std::runtime_error("Could not connect to Cassandra: " + res.error());
if (auto const res = handle_.execute(schema_.createKeyspace); not res)
{
// on datastax, creation of keyspaces can be configured to only be done thru the admin interface.
// this does not mean that the keyspace does not already exist tho.
if (res.error().code() != CASS_ERROR_SERVER_UNAUTHORIZED)
throw std::runtime_error("Could not create keyspace: " + res.error());
}
if (auto const res = handle_.executeEach(schema_.createSchema); not res)
throw std::runtime_error("Could not create schema: " + res.error());
schema_.prepareStatements(handle_);
log_.info() << "Created (revamped) CassandraBackend";
}
/*! Not used in this implementation */
void
open([[maybe_unused]] bool readOnly) override
{
}
/*! Not used in this implementation */
void
close() override
{
}
TransactionsAndCursor
fetchAccountTransactions(
ripple::AccountID const& account,
std::uint32_t const limit,
bool forward,
std::optional<TransactionsCursor> const& cursorIn,
boost::asio::yield_context& yield) const override
{
auto rng = fetchLedgerRange();
if (!rng)
return {{}, {}};
Statement statement = [this, forward, &account]() {
if (forward)
return schema_->selectAccountTxForward.bind(account);
else
return schema_->selectAccountTx.bind(account);
}();
auto cursor = cursorIn;
if (cursor)
{
statement.bindAt(1, cursor->asTuple());
log_.debug() << "account = " << ripple::strHex(account) << " tuple = " << cursor->ledgerSequence
<< cursor->transactionIndex;
}
else
{
auto const seq = forward ? rng->minSequence : rng->maxSequence;
auto const placeHolder = forward ? 0u : std::numeric_limits<std::uint32_t>::max();
statement.bindAt(1, std::make_tuple(placeHolder, placeHolder));
log_.debug() << "account = " << ripple::strHex(account) << " idx = " << seq << " tuple = " << placeHolder;
}
// FIXME: Limit is a hack to support uint32_t properly for the time
// being. Should be removed later and schema updated to use proper
// types.
statement.bindAt(2, Limit{limit});
auto const res = executor_.read(yield, statement);
auto const& results = res.value();
if (not results.hasRows())
{
log_.debug() << "No rows returned";
return {};
}
std::vector<ripple::uint256> hashes = {};
auto numRows = results.numRows();
log_.info() << "num_rows = " << numRows;
for (auto [hash, data] : extract<ripple::uint256, std::tuple<uint32_t, uint32_t>>(results))
{
hashes.push_back(hash);
if (--numRows == 0)
{
log_.debug() << "Setting cursor";
cursor = data;
// forward queries by ledger/tx sequence `>=`
// so we have to advance the index by one
if (forward)
++cursor->transactionIndex;
}
}
auto const txns = fetchTransactions(hashes, yield);
log_.debug() << "Txns = " << txns.size();
if (txns.size() == limit)
{
log_.debug() << "Returning cursor";
return {txns, cursor};
}
return {txns, {}};
}
bool
doFinishWrites() override
{
// wait for other threads to finish their writes
executor_.sync();
if (!range)
{
executor_.writeSync(schema_->updateLedgerRange, ledgerSequence_, false, ledgerSequence_);
}
if (not executeSyncUpdate(schema_->updateLedgerRange.bind(ledgerSequence_, true, ledgerSequence_ - 1)))
{
log_.warn() << "Update failed for ledger " << ledgerSequence_;
return false;
}
log_.info() << "Committed ledger " << ledgerSequence_;
return true;
}
void
writeLedger(ripple::LedgerInfo const& ledgerInfo, std::string&& header) override
{
executor_.write(schema_->insertLedgerHeader, ledgerInfo.seq, std::move(header));
executor_.write(schema_->insertLedgerHash, ledgerInfo.hash, ledgerInfo.seq);
ledgerSequence_ = ledgerInfo.seq;
}
std::optional<std::uint32_t>
fetchLatestLedgerSequence(boost::asio::yield_context& yield) const override
{
if (auto const res = executor_.read(yield, schema_->selectLatestLedger); res)
{
if (auto const& result = res.value(); result)
{
if (auto const maybeValue = result.template get<uint32_t>(); maybeValue)
return maybeValue;
log_.error() << "Could not fetch latest ledger - no rows";
return std::nullopt;
}
log_.error() << "Could not fetch latest ledger - no result";
}
else
{
log_.error() << "Could not fetch latest ledger: " << res.error();
}
return std::nullopt;
}
std::optional<ripple::LedgerInfo>
fetchLedgerBySequence(std::uint32_t const sequence, boost::asio::yield_context& yield) const override
{
log_.trace() << __func__ << " call for seq " << sequence;
auto const res = executor_.read(yield, schema_->selectLedgerBySeq, sequence);
if (res)
{
if (auto const& result = res.value(); result)
{
if (auto const maybeValue = result.template get<std::vector<unsigned char>>(); maybeValue)
{
return deserializeHeader(ripple::makeSlice(*maybeValue));
}
log_.error() << "Could not fetch ledger by sequence - no rows";
return std::nullopt;
}
log_.error() << "Could not fetch ledger by sequence - no result";
}
else
{
log_.error() << "Could not fetch ledger by sequence: " << res.error();
}
return std::nullopt;
}
std::optional<ripple::LedgerInfo>
fetchLedgerByHash(ripple::uint256 const& hash, boost::asio::yield_context& yield) const override
{
log_.trace() << __func__ << " call";
if (auto const res = executor_.read(yield, schema_->selectLedgerByHash, hash); res)
{
if (auto const& result = res.value(); result)
{
if (auto const maybeValue = result.template get<uint32_t>(); maybeValue)
return fetchLedgerBySequence(*maybeValue, yield);
log_.error() << "Could not fetch ledger by hash - no rows";
return std::nullopt;
}
log_.error() << "Could not fetch ledger by hash - no result";
}
else
{
log_.error() << "Could not fetch ledger by hash: " << res.error();
}
return std::nullopt;
}
std::optional<LedgerRange>
hardFetchLedgerRange(boost::asio::yield_context& yield) const override
{
log_.trace() << __func__ << " call";
if (auto const res = executor_.read(yield, schema_->selectLedgerRange); res)
{
auto const& results = res.value();
if (not results.hasRows())
{
log_.debug() << "Could not fetch ledger range - no rows";
return std::nullopt;
}
// TODO: this is probably a good place to use user type in
// cassandra instead of having two rows with bool flag. or maybe at
// least use tuple<int, int>?
LedgerRange range;
std::size_t idx = 0;
for (auto [seq] : extract<uint32_t>(results))
{
if (idx == 0)
range.maxSequence = range.minSequence = seq;
else if (idx == 1)
range.maxSequence = seq;
++idx;
}
if (range.minSequence > range.maxSequence)
std::swap(range.minSequence, range.maxSequence);
log_.debug() << "After hardFetchLedgerRange range is " << range.minSequence << ":" << range.maxSequence;
return range;
}
else
{
log_.error() << "Could not fetch ledger range: " << res.error();
}
return std::nullopt;
}
std::vector<TransactionAndMetadata>
fetchAllTransactionsInLedger(std::uint32_t const ledgerSequence, boost::asio::yield_context& yield) const override
{
log_.trace() << __func__ << " call";
auto hashes = fetchAllTransactionHashesInLedger(ledgerSequence, yield);
return fetchTransactions(hashes, yield);
}
std::vector<ripple::uint256>
fetchAllTransactionHashesInLedger(std::uint32_t const ledgerSequence, boost::asio::yield_context& yield)
const override
{
log_.trace() << __func__ << " call";
auto start = std::chrono::system_clock::now();
auto const res = executor_.read(yield, schema_->selectAllTransactionHashesInLedger, ledgerSequence);
if (not res)
{
log_.error() << "Could not fetch all transaction hashes: " << res.error();
return {};
}
auto const& result = res.value();
if (not result.hasRows())
{
log_.error() << "Could not fetch all transaction hashes - no rows; ledger = "
<< std::to_string(ledgerSequence);
return {};
}
std::vector<ripple::uint256> hashes;
for (auto [hash] : extract<ripple::uint256>(result))
hashes.push_back(std::move(hash));
auto end = std::chrono::system_clock::now();
log_.debug() << "Fetched " << hashes.size() << " transaction hashes from Cassandra in "
<< std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count() << " milliseconds";
return hashes;
}
std::optional<NFT>
fetchNFT(ripple::uint256 const& tokenID, std::uint32_t const ledgerSequence, boost::asio::yield_context& yield)
const override
{
log_.trace() << __func__ << " call";
auto const res = executor_.read(yield, schema_->selectNFT, tokenID, ledgerSequence);
if (not res)
return std::nullopt;
if (auto const maybeRow = res->template get<uint32_t, ripple::AccountID, bool>(); maybeRow)
{
auto [seq, owner, isBurned] = *maybeRow;
auto result = std::make_optional<NFT>(tokenID, seq, owner, isBurned);
// now fetch URI. Usually we will have the URI even for burned NFTs,
// but if the first ledger on this clio included NFTokenBurn
// transactions we will not have the URIs for any of those tokens.
// In any other case not having the URI indicates something went
// wrong with our data.
//
// TODO - in the future would be great for any handlers that use
// this could inject a warning in this case (the case of not having
// a URI because it was burned in the first ledger) to indicate that
// even though we are returning a blank URI, the NFT might have had
// one.
auto uriRes = executor_.read(yield, schema_->selectNFTURI, tokenID, ledgerSequence);
if (uriRes)
{
if (auto const maybeUri = uriRes->template get<ripple::Blob>(); maybeUri)
result->uri = *maybeUri;
}
return result;
}
log_.error() << "Could not fetch NFT - no rows";
return std::nullopt;
}
TransactionsAndCursor
fetchNFTTransactions(
ripple::uint256 const& tokenID,
std::uint32_t const limit,
bool const forward,
std::optional<TransactionsCursor> const& cursorIn,
boost::asio::yield_context& yield) const override
{
log_.trace() << __func__ << " call";
auto rng = fetchLedgerRange();
if (!rng)
return {{}, {}};
Statement statement = [this, forward, &tokenID]() {
if (forward)
return schema_->selectNFTTxForward.bind(tokenID);
else
return schema_->selectNFTTx.bind(tokenID);
}();
auto cursor = cursorIn;
if (cursor)
{
statement.bindAt(1, cursor->asTuple());
log_.debug() << "token_id = " << ripple::strHex(tokenID) << " tuple = " << cursor->ledgerSequence
<< cursor->transactionIndex;
}
else
{
auto const seq = forward ? rng->minSequence : rng->maxSequence;
auto const placeHolder = forward ? 0u : std::numeric_limits<std::uint32_t>::max();
statement.bindAt(1, std::make_tuple(placeHolder, placeHolder));
log_.debug() << "token_id = " << ripple::strHex(tokenID) << " idx = " << seq << " tuple = " << placeHolder;
}
statement.bindAt(2, Limit{limit});
auto const res = executor_.read(yield, statement);
auto const& results = res.value();
if (not results.hasRows())
{
log_.debug() << "No rows returned";
return {};
}
std::vector<ripple::uint256> hashes = {};
auto numRows = results.numRows();
log_.info() << "num_rows = " << numRows;
for (auto [hash, data] : extract<ripple::uint256, std::tuple<uint32_t, uint32_t>>(results))
{
hashes.push_back(hash);
if (--numRows == 0)
{
log_.debug() << "Setting cursor";
cursor = data;
// forward queries by ledger/tx sequence `>=`
// so we have to advance the index by one
if (forward)
++cursor->transactionIndex;
}
}
auto const txns = fetchTransactions(hashes, yield);
log_.debug() << "NFT Txns = " << txns.size();
if (txns.size() == limit)
{
log_.debug() << "Returning cursor";
return {txns, cursor};
}
return {txns, {}};
}
std::optional<Blob>
doFetchLedgerObject(ripple::uint256 const& key, std::uint32_t const sequence, boost::asio::yield_context& yield)
const override
{
log_.debug() << "Fetching ledger object for seq " << sequence << ", key = " << ripple::to_string(key);
if (auto const res = executor_.read(yield, schema_->selectObject, key, sequence); res)
{
if (auto const result = res->template get<Blob>(); result)
{
if (result->size())
return *result;
}
else
{
log_.debug() << "Could not fetch ledger object - no rows";
}
}
else
{
log_.error() << "Could not fetch ledger object: " << res.error();
}
return std::nullopt;
}
std::optional<TransactionAndMetadata>
fetchTransaction(ripple::uint256 const& hash, boost::asio::yield_context& yield) const override
{
log_.trace() << __func__ << " call";
if (auto const res = executor_.read(yield, schema_->selectTransaction, hash); res)
{
if (auto const maybeValue = res->template get<Blob, Blob, uint32_t, uint32_t>(); maybeValue)
{
auto [transaction, meta, seq, date] = *maybeValue;
return std::make_optional<TransactionAndMetadata>(transaction, meta, seq, date);
}
else
{
log_.debug() << "Could not fetch transaction - no rows";
}
}
else
{
log_.error() << "Could not fetch transaction: " << res.error();
}
return std::nullopt;
}
std::optional<ripple::uint256>
doFetchSuccessorKey(ripple::uint256 key, std::uint32_t const ledgerSequence, boost::asio::yield_context& yield)
const override
{
log_.trace() << __func__ << " call";
if (auto const res = executor_.read(yield, schema_->selectSuccessor, key, ledgerSequence); res)
{
if (auto const result = res->template get<ripple::uint256>(); result)
{
if (*result == lastKey)
return std::nullopt;
return *result;
}
else
{
log_.debug() << "Could not fetch successor - no rows";
}
}
else
{
log_.error() << "Could not fetch successor: " << res.error();
}
return std::nullopt;
}
std::vector<TransactionAndMetadata>
fetchTransactions(std::vector<ripple::uint256> const& hashes, boost::asio::yield_context& yield) const override
{
log_.trace() << __func__ << " call";
if (hashes.empty())
return {};
auto const numHashes = hashes.size();
std::vector<TransactionAndMetadata> results;
results.reserve(numHashes);
std::vector<Statement> statements;
statements.reserve(numHashes);
auto const timeDiff = util::timed([this, &yield, &results, &hashes, &statements]() {
// TODO: seems like a job for "hash IN (list of hashes)" instead?
std::transform(
std::cbegin(hashes), std::cend(hashes), std::back_inserter(statements), [this](auto const& hash) {
return schema_->selectTransaction.bind(hash);
});
auto const entries = executor_.readEach(yield, statements);
std::transform(
std::cbegin(entries),
std::cend(entries),
std::back_inserter(results),
[](auto const& res) -> TransactionAndMetadata {
if (auto const maybeRow = res.template get<Blob, Blob, uint32_t, uint32_t>(); maybeRow)
return *maybeRow;
else
return {};
});
});
assert(numHashes == results.size());
log_.debug() << "Fetched " << numHashes << " transactions from Cassandra in " << timeDiff << " milliseconds";
return results;
}
std::vector<Blob>
doFetchLedgerObjects(
std::vector<ripple::uint256> const& keys,
std::uint32_t const sequence,
boost::asio::yield_context& yield) const override
{
log_.trace() << __func__ << " call";
if (keys.empty())
return {};
auto const numKeys = keys.size();
log_.trace() << "Fetching " << numKeys << " objects";
std::vector<Blob> results;
results.reserve(numKeys);
std::vector<Statement> statements;
statements.reserve(numKeys);
// TODO: seems like a job for "key IN (list of keys)" instead?
std::transform(
std::cbegin(keys), std::cend(keys), std::back_inserter(statements), [this, &sequence](auto const& key) {
return schema_->selectObject.bind(key, sequence);
});
auto const entries = executor_.readEach(yield, statements);
std::transform(
std::cbegin(entries), std::cend(entries), std::back_inserter(results), [](auto const& res) -> Blob {
if (auto const maybeValue = res.template get<Blob>(); maybeValue)
return *maybeValue;
else
return {};
});
log_.trace() << "Fetched " << numKeys << " objects";
return results;
}
std::vector<LedgerObject>
fetchLedgerDiff(std::uint32_t const ledgerSequence, boost::asio::yield_context& yield) const override
{
log_.trace() << __func__ << " call";
auto const [keys, timeDiff] = util::timed([this, &ledgerSequence, &yield]() -> std::vector<ripple::uint256> {
auto const res = executor_.read(yield, schema_->selectDiff, ledgerSequence);
if (not res)
{
log_.error() << "Could not fetch ledger diff: " << res.error() << "; ledger = " << ledgerSequence;
return {};
}
auto const& results = res.value();
if (not results)
{
log_.error() << "Could not fetch ledger diff - no rows; ledger = " << ledgerSequence;
return {};
}
std::vector<ripple::uint256> keys;
for (auto [key] : extract<ripple::uint256>(results))
keys.push_back(key);
return keys;
});
// one of the above errors must have happened
if (keys.empty())
return {};
log_.debug() << "Fetched " << keys.size() << " diff hashes from Cassandra in " << timeDiff << " milliseconds";
auto const objs = fetchLedgerObjects(keys, ledgerSequence, yield);
std::vector<LedgerObject> results;
results.reserve(keys.size());
std::transform(
std::cbegin(keys),
std::cend(keys),
std::cbegin(objs),
std::back_inserter(results),
[](auto const& key, auto const& obj) {
return LedgerObject{key, obj};
});
return results;
}
void
doWriteLedgerObject(std::string&& key, std::uint32_t const seq, std::string&& blob) override
{
log_.trace() << " Writing ledger object " << key.size() << ":" << seq << " [" << blob.size() << " bytes]";
if (range)
executor_.write(schema_->insertDiff, seq, key);
executor_.write(schema_->insertObject, std::move(key), seq, std::move(blob));
}
void
writeSuccessor(std::string&& key, std::uint32_t const seq, std::string&& successor) override
{
log_.trace() << "Writing successor. key = " << key.size() << " bytes. "
<< " seq = " << std::to_string(seq) << " successor = " << successor.size() << " bytes.";
assert(key.size() != 0);
assert(successor.size() != 0);
executor_.write(schema_->insertSuccessor, std::move(key), seq, std::move(successor));
}
void
writeAccountTransactions(std::vector<AccountTransactionsData>&& data) override
{
std::vector<Statement> statements;
statements.reserve(data.size() * 10); // assume 10 transactions avg
for (auto& record : data)
{
std::transform(
std::begin(record.accounts),
std::end(record.accounts),
std::back_inserter(statements),
[this, &record](auto&& account) {
return schema_->insertAccountTx.bind(
std::move(account),
std::make_tuple(record.ledgerSequence, record.transactionIndex),
record.txHash);
});
}
executor_.write(std::move(statements));
}
void
writeNFTTransactions(std::vector<NFTTransactionsData>&& data) override
{
std::vector<Statement> statements;
statements.reserve(data.size());
std::transform(std::cbegin(data), std::cend(data), std::back_inserter(statements), [this](auto const& record) {
return schema_->insertNFTTx.bind(
record.tokenID, std::make_tuple(record.ledgerSequence, record.transactionIndex), record.txHash);
});
executor_.write(std::move(statements));
}
void
writeTransaction(
std::string&& hash,
std::uint32_t const seq,
std::uint32_t const date,
std::string&& transaction,
std::string&& metadata) override
{
log_.trace() << "Writing txn to cassandra";
executor_.write(schema_->insertLedgerTransaction, seq, hash);
executor_.write(
schema_->insertTransaction, std::move(hash), seq, date, std::move(transaction), std::move(metadata));
}
void
writeNFTs(std::vector<NFTsData>&& data) override
{
std::vector<Statement> statements;
statements.reserve(data.size() * 3);
for (NFTsData const& record : data)
{
statements.push_back(
schema_->insertNFT.bind(record.tokenID, record.ledgerSequence, record.owner, record.isBurned));
// If `uri` is set (and it can be set to an empty uri), we know this
// is a net-new NFT. That is, this NFT has not been seen before by
// us _OR_ it is in the extreme edge case of a re-minted NFT ID with
// the same NFT ID as an already-burned token. In this case, we need
// to record the URI and link to the issuer_nf_tokens table.
if (record.uri)
{
statements.push_back(schema_->insertIssuerNFT.bind(
ripple::nft::getIssuer(record.tokenID),
static_cast<uint32_t>(ripple::nft::getTaxon(record.tokenID)),
record.tokenID));
statements.push_back(
schema_->insertNFTURI.bind(record.tokenID, record.ledgerSequence, record.uri.value()));
}
}
executor_.write(std::move(statements));
}
void
startWrites() const override
{
// Note: no-op in original implementation too.
// probably was used in PostgreSQL to start a transaction or similar.
}
/*! Not used in this implementation */
bool
doOnlineDelete([[maybe_unused]] std::uint32_t const numLedgersToKeep, [[maybe_unused]] boost::asio::yield_context& yield) const override
{
log_.trace() << __func__ << " call";
return true;
}
bool
isTooBusy() const override
{
return executor_.isTooBusy();
}
private:
bool
executeSyncUpdate(Statement statement)
{
auto const res = executor_.writeSync(statement);
auto maybeSuccess = res->template get<bool>();
if (not maybeSuccess)
{
log_.error() << "executeSyncUpdate - error getting result - no row";
return false;
}
if (not maybeSuccess.value())
{
log_.warn() << "Update failed. Checking if DB state is what we expect";
// error may indicate that another writer wrote something.
// in this case let's just compare the current state of things
// against what we were trying to write in the first place and
// use that as the source of truth for the result.
auto rng = hardFetchLedgerRangeNoThrow();
return rng && rng->maxSequence == ledgerSequence_;
}
return true;
}
};
using CassandraBackend = BasicCassandraBackend<SettingsProvider, detail::DefaultExecutionStrategy<>>;
} // namespace Backend::Cassandra

@@ -39,10 +39,7 @@ struct AccountTransactionsData
     std::uint32_t transactionIndex;
     ripple::uint256 txHash;
-    AccountTransactionsData(
-        ripple::TxMeta& meta,
-        ripple::uint256 const& txHash,
-        beast::Journal& j)
+    AccountTransactionsData(ripple::TxMeta& meta, ripple::uint256 const& txHash, beast::Journal& j)
         : accounts(meta.getAffectedAccounts())
         , ledgerSequence(meta.getLgrSeq())
         , transactionIndex(meta.getIndex())
@@ -62,14 +59,8 @@ struct NFTTransactionsData
     std::uint32_t transactionIndex;
     ripple::uint256 txHash;
-    NFTTransactionsData(
-        ripple::uint256 const& tokenID,
-        ripple::TxMeta const& meta,
-        ripple::uint256 const& txHash)
-        : tokenID(tokenID)
-        , ledgerSequence(meta.getLgrSeq())
-        , transactionIndex(meta.getIndex())
-        , txHash(txHash)
+    NFTTransactionsData(ripple::uint256 const& tokenID, ripple::TxMeta const& meta, ripple::uint256 const& txHash)
+        : tokenID(tokenID), ledgerSequence(meta.getLgrSeq()), transactionIndex(meta.getIndex()), txHash(txHash)
     {
     }
 };
@@ -85,16 +76,38 @@ struct NFTsData
     // final state of an NFT per ledger. Since we pull this from transactions
     // we keep track of which tx index created this so we can de-duplicate, as
     // it is possible for one ledger to have multiple txs that change the
-    // state of the same NFT.
-    std::uint32_t transactionIndex;
+    // state of the same NFT. This field is not applicable when we are loading
+    // initial NFT state via ledger objects, since we do not have to tiebreak
+    // NFT state for a given ledger in that case.
+    std::optional<std::uint32_t> transactionIndex;
     ripple::AccountID owner;
-    bool isBurned;
+    // We only set the uri if this is a mint tx, or if we are
+    // loading initial state from NFTokenPage objects. In other words,
+    // uri should only be set if the etl process believes this NFT hasn't
+    // been seen before in our local database. We do this so that we don't
+    // write to the nf_token_uris table every
+    // time the same NFT changes hands. We also can infer if there is a URI
+    // that we need to write to the issuer_nf_tokens table.
+    std::optional<ripple::Blob> uri;
+    bool isBurned = false;
+    // This constructor is used when parsing an NFTokenMint tx.
+    // Unfortunately because of the extreme edge case of being able to
+    // re-mint an NFT with the same ID, we must explicitly record a null
+    // URI. For this reason, we _always_ write this field as a result of
+    // this tx.
     NFTsData(
         ripple::uint256 const& tokenID,
         ripple::AccountID const& owner,
-        ripple::TxMeta const& meta,
-        bool isBurned)
+        ripple::Blob const& uri,
+        ripple::TxMeta const& meta)
+        : tokenID(tokenID), ledgerSequence(meta.getLgrSeq()), transactionIndex(meta.getIndex()), owner(owner), uri(uri)
+    {
+    }
+    // This constructor is used when parsing an NFTokenBurn or
+    // NFTokenAcceptOffer tx
+    NFTsData(ripple::uint256 const& tokenID, ripple::AccountID const& owner, ripple::TxMeta const& meta, bool isBurned)
         : tokenID(tokenID)
         , ledgerSequence(meta.getLgrSeq())
         , transactionIndex(meta.getIndex())
@@ -102,6 +115,21 @@ struct NFTsData
         , isBurned(isBurned)
     {
     }
+    // This constructor is used when parsing an NFTokenPage directly from
+    // ledger state.
+    // Unfortunately because of the extreme edge case of being able to
+    // re-mint an NFT with the same ID, we must explicitly record a null
+    // URI. For this reason, we _always_ write this field as a result of
+    // this tx.
+    NFTsData(
+        ripple::uint256 const& tokenID,
+        std::uint32_t const ledgerSequence,
+        ripple::AccountID const& owner,
+        ripple::Blob const& uri)
+        : tokenID(tokenID), ledgerSequence(ledgerSequence), owner(owner), uri(uri)
+    {
+    }
 };
};
template <class T>
@@ -140,8 +168,7 @@ isBookDir(T const& key, R const& object)
     if (!isDirNode(object))
         return false;
-    ripple::STLedgerEntry const sle{
-        ripple::SerialIter{object.data(), object.size()}, key};
+    ripple::STLedgerEntry const sle{ripple::SerialIter{object.data(), object.size()}, key};
     return !sle[~ripple::sfOwner].has_value();
 }
@@ -180,10 +207,8 @@ deserializeHeader(ripple::Slice data)
     info.parentHash = sit.get256();
     info.txHash = sit.get256();
     info.accountHash = sit.get256();
-    info.parentCloseTime =
-        ripple::NetClock::time_point{ripple::NetClock::duration{sit.get32()}};
-    info.closeTime =
-        ripple::NetClock::time_point{ripple::NetClock::duration{sit.get32()}};
+    info.parentCloseTime = ripple::NetClock::time_point{ripple::NetClock::duration{sit.get32()}};
+    info.closeTime = ripple::NetClock::time_point{ripple::NetClock::duration{sit.get32()}};
     info.closeTimeResolution = ripple::NetClock::duration{sit.get8()};
     info.closeFlags = sit.get8();

@@ -130,3 +130,91 @@ In each new ledger version with sequence `n`, a ledger object `v` can either be
1. Being **created**: add two new records with `seq=n`, one with `e` pointing to `v` and one with `v` pointing to `w` (linked-list insertion).
2. Being **modified**: do nothing.
3. Being **deleted**: add a record with `seq=n` in which `e` points to `v`'s `next` value (linked-list deletion).
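Looking up the successor of `v` at ledger `n` then amounts to taking the most recent record at or before `n`. As a sketch (the successor table's exact schema is not shown in this excerpt, so the column names here are assumptions):

```
SELECT next FROM successor
WHERE key = v AND seq <= n
ORDER BY seq DESC LIMIT 1;
```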
### NFT data model
In `rippled` NFTs are stored in NFTokenPage ledger objects. This object is
implemented to save ledger space and has the property that it gives us O(1)
lookup time for an NFT, assuming we know who owns the NFT at a particular
ledger. However, if we do not know who owns the NFT at a specific ledger
height we have no alternative in rippled other than scanning the entire
ledger. Because of this tradeoff, clio implements a special NFT indexing data
structure that allows clio users to query NFTs quickly, while keeping
rippled's space-saving optimizations.
#### `nf_tokens`
```
CREATE TABLE clio.nf_tokens (
token_id blob, # The NFT's ID
sequence bigint, # Sequence of ledger version
owner blob, # The account ID of the owner of this NFT at this ledger
is_burned boolean, # True if token was burned in this ledger
PRIMARY KEY (token_id, sequence)
) WITH CLUSTERING ORDER BY (sequence DESC) ...
```
This table indexes NFT IDs with their owner at a given ledger. So
```
SELECT * FROM nf_tokens
WHERE token_id = N AND sequence <= Y
ORDER BY sequence DESC LIMIT 1;
```
will give you the owner of token N at ledger Y and whether it was burned. If
the token is burned, the owner field holds the account that owned the token at
the time it was burned; it does not necessarily indicate the account that
burned it. If you need to determine who burned the token, use the
`nft_history` API, which returns the NFTokenBurn transaction that burned this
token, along with the account that submitted that transaction.
#### `issuer_nf_tokens_v2`
```
CREATE TABLE clio.issuer_nf_tokens_v2 (
issuer blob, # The NFT issuer's account ID
taxon bigint, # The NFT's token taxon
token_id blob, # The NFT's ID
PRIMARY KEY (issuer, taxon, token_id)
) WITH CLUSTERING ORDER BY (taxon ASC, token_id ASC) ...
```
This table indexes token IDs against their issuer and issuer/taxon
combination. This is useful for determining all the NFTs a specific account
issued, or all the NFTs a specific account issued with a specific taxon.
Querying all NFTs with a given taxon across all issuers is not useful, since
the meaning of a taxon is specific to each issuer.
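For illustration, the two supported lookups can be sketched as (the placeholders `X` and `T` stand for an issuer account ID and a taxon):

```
SELECT token_id FROM issuer_nf_tokens_v2
WHERE issuer = X;

SELECT token_id FROM issuer_nf_tokens_v2
WHERE issuer = X AND taxon = T;
```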
#### `nf_token_uris`
```
CREATE TABLE clio.nf_token_uris (
token_id blob, # The NFT's ID
sequence bigint, # Sequence of ledger version
uri blob, # The NFT's URI
PRIMARY KEY (token_id, sequence)
) WITH CLUSTERING ORDER BY (sequence DESC) ...
```
This table is used to store an NFT's URI. Without storing this here, we would
need to traverse the NFT owner's entire set of NFTs to find the URI, again due
to the way that NFTs are stored in rippled. Furthermore, instead of storing
this in the `nf_tokens` table, we store it here to save space. A given NFT
will have only one entry in this table (see caveat below), written to this
table as soon as clio sees the NFTokenMint transaction, or when clio loads an
NFTokenPage from the initial ledger it downloaded. However, the `nf_tokens`
table is written to every time an NFT changes ownership, or if it is burned.
Given this, why do we have to store the sequence? Unfortunately, there is an
extreme edge case where a given NFT ID can be burned and then re-minted with
a different URI. This is extremely unlikely, and may be fixed in a future
version of rippled, but just in case we handle that edge case by allowing
a given NFT ID to be assigned a new URI without removing the prior one.
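Given that, reading the URI for token `N` as of ledger `Y` follows the same pattern as the `nf_tokens` lookup above (a sketch, not taken verbatim from clio):

```
SELECT uri FROM nf_token_uris
WHERE token_id = N AND sequence <= Y
ORDER BY sequence DESC LIMIT 1;
```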
#### `nf_token_transactions`
```
CREATE TABLE clio.nf_token_transactions (
token_id blob, # The NFT's ID
seq_idx tuple<bigint, bigint>, # Tuple of (ledger_index, transaction_index)
hash blob, # Hash of the transaction
PRIMARY KEY (token_id, seq_idx)
) WITH CLUSTERING ORDER BY (seq_idx DESC) ...
```
This table is the NFT equivalent of `account_tx`. It's motivated by the exact
same reasons and serves the analogous purpose here. It drives the
`nft_history` API.
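A paged history lookup over this table can be sketched as follows, where `(L, T)` is a cursor taken from the last row of the previous page (illustrative only):

```
SELECT hash FROM nf_token_transactions
WHERE token_id = N AND seq_idx < (L, T)
ORDER BY seq_idx DESC LIMIT 50;
```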

@@ -28,10 +28,7 @@ SimpleCache::latestLedgerSequence() const
 }
 void
-SimpleCache::update(
-    std::vector<LedgerObject> const& objs,
-    uint32_t seq,
-    bool isBackground)
+SimpleCache::update(std::vector<LedgerObject> const& objs, uint32_t seq, bool isBackground)
 {
     if (disabled_)
         return;
@@ -99,9 +96,9 @@ SimpleCache::getPredecessor(ripple::uint256 const& key, uint32_t seq) const
 std::optional<Blob>
 SimpleCache::get(ripple::uint256 const& key, uint32_t seq) const
 {
-    std::shared_lock lck{mtx_};
     if (seq > latestSeq_)
         return {};
+    std::shared_lock lck{mtx_};
     objectReqCounter_++;
     auto e = map_.find(key);
     if (e == map_.end())

@@ -37,11 +37,11 @@ class SimpleCache
     };
     // counters for fetchLedgerObject(s) hit rate
-    mutable std::atomic_uint32_t objectReqCounter_;
-    mutable std::atomic_uint32_t objectHitCounter_;
+    mutable std::atomic_uint32_t objectReqCounter_ = 0;
+    mutable std::atomic_uint32_t objectHitCounter_ = 0;
     // counters for fetchSuccessorKey hit rate
-    mutable std::atomic_uint32_t successorReqCounter_;
-    mutable std::atomic_uint32_t successorHitCounter_;
+    mutable std::atomic_uint32_t successorReqCounter_ = 0;
+    mutable std::atomic_uint32_t successorHitCounter_ = 0;
     std::map<ripple::uint256, CacheEntry> map_;
     mutable std::shared_mutex mtx_;
@@ -56,10 +56,7 @@ public:
     // Update the cache with new ledger objects
     // set isBackground to true when writing old data from a background thread
     void
-    update(
-        std::vector<LedgerObject> const& blobs,
-        uint32_t seq,
-        bool isBackground = false);
+    update(std::vector<LedgerObject> const& blobs, uint32_t seq, bool isBackground = false);
     std::optional<Blob>
     get(ripple::uint256 const& key, uint32_t seq) const;

@@ -56,8 +56,27 @@ struct TransactionAndMetadata
 {
     Blob transaction;
     Blob metadata;
-    std::uint32_t ledgerSequence;
-    std::uint32_t date;
+    std::uint32_t ledgerSequence = 0;
+    std::uint32_t date = 0;
+    TransactionAndMetadata() = default;
+    TransactionAndMetadata(
+        Blob const& transaction,
+        Blob const& metadata,
+        std::uint32_t ledgerSequence,
+        std::uint32_t date)
+        : transaction{transaction}, metadata{metadata}, ledgerSequence{ledgerSequence}, date{date}
+    {
+    }
+    TransactionAndMetadata(std::tuple<Blob, Blob, std::uint32_t, std::uint32_t> data)
+        : transaction{std::get<0>(data)}
+        , metadata{std::get<1>(data)}
+        , ledgerSequence{std::get<2>(data)}
+        , date{std::get<3>(data)}
+    {
+    }
     bool
     operator==(const TransactionAndMetadata& other) const
     {
@@ -70,6 +89,29 @@ struct TransactionsCursor
 {
     std::uint32_t ledgerSequence;
     std::uint32_t transactionIndex;
+    TransactionsCursor() = default;
+    TransactionsCursor(std::uint32_t ledgerSequence, std::uint32_t transactionIndex)
+        : ledgerSequence{ledgerSequence}, transactionIndex{transactionIndex}
+    {
+    }
+    TransactionsCursor(std::tuple<std::uint32_t, std::uint32_t> data)
+        : ledgerSequence{std::get<0>(data)}, transactionIndex{std::get<1>(data)}
+    {
+    }
+    TransactionsCursor&
+    operator=(TransactionsCursor const&) = default;
+    bool
+    operator==(TransactionsCursor const& other) const = default;
+    [[nodiscard]] std::tuple<std::uint32_t, std::uint32_t>
+    asTuple() const
+    {
+        return std::make_tuple(ledgerSequence, transactionIndex);
+    }
 };
struct TransactionsAndCursor
@@ -83,16 +125,31 @@ struct NFT
ripple::uint256 tokenID;
std::uint32_t ledgerSequence;
ripple::AccountID owner;
Blob uri;
bool isBurned;
NFT() = default;
NFT(ripple::uint256 const& tokenID,
std::uint32_t ledgerSequence,
ripple::AccountID const& owner,
Blob const& uri,
bool isBurned)
: tokenID{tokenID}, ledgerSequence{ledgerSequence}, owner{owner}, uri{uri}, isBurned{isBurned}
{
}
NFT(ripple::uint256 const& tokenID, std::uint32_t ledgerSequence, ripple::AccountID const& owner, bool isBurned)
: NFT(tokenID, ledgerSequence, owner, {}, isBurned)
{
}
// clearly two tokens are the same if they have the same ID, but this
// struct stores the state of a given token at a given ledger sequence, so
// we also need to compare with ledgerSequence
bool
operator==(NFT const& other) const
{
return tokenID == other.tokenID &&
ledgerSequence == other.ledgerSequence;
return tokenID == other.tokenID && ledgerSequence == other.ledgerSequence;
}
};
@@ -101,10 +158,7 @@ struct LedgerRange
std::uint32_t minSequence;
std::uint32_t maxSequence;
};
constexpr ripple::uint256 firstKey{
"0000000000000000000000000000000000000000000000000000000000000000"};
constexpr ripple::uint256 lastKey{
"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"};
constexpr ripple::uint256 hi192{
"0000000000000000000000000000000000000000000000001111111111111111"};
constexpr ripple::uint256 firstKey{"0000000000000000000000000000000000000000000000000000000000000000"};
constexpr ripple::uint256 lastKey{"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"};
constexpr ripple::uint256 hi192{"0000000000000000000000000000000000000000000000001111111111111111"};
} // namespace Backend

@@ -0,0 +1,79 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/Types.h>
#include <boost/asio/spawn.hpp>
#include <chrono>
#include <concepts>
#include <optional>
#include <string>
namespace Backend::Cassandra {
// clang-format off
template <typename T>
concept SomeSettingsProvider = requires(T a) {
{ a.getSettings() } -> std::same_as<Settings>;
{ a.getKeyspace() } -> std::same_as<std::string>;
{ a.getTablePrefix() } -> std::same_as<std::optional<std::string>>;
{ a.getReplicationFactor() } -> std::same_as<uint16_t>;
{ a.getTtl() } -> std::same_as<uint16_t>;
};
// clang-format on
// clang-format off
template <typename T>
concept SomeExecutionStrategy = requires(
T a,
Settings settings,
Handle handle,
Statement statement,
std::vector<Statement> statements,
PreparedStatement prepared,
boost::asio::yield_context token
) {
{ T(settings, handle) };
{ a.sync() } -> std::same_as<void>;
{ a.isTooBusy() } -> std::same_as<bool>;
{ a.writeSync(statement) } -> std::same_as<ResultOrError>;
{ a.writeSync(prepared) } -> std::same_as<ResultOrError>;
{ a.write(prepared) } -> std::same_as<void>;
{ a.write(std::move(statements)) } -> std::same_as<void>;
{ a.read(token, prepared) } -> std::same_as<ResultOrError>;
{ a.read(token, statement) } -> std::same_as<ResultOrError>;
{ a.read(token, statements) } -> std::same_as<ResultOrError>;
{ a.readEach(token, statements) } -> std::same_as<std::vector<Result>>;
};
// clang-format on
// clang-format off
template <typename T>
concept SomeRetryPolicy = requires(T a, boost::asio::io_context ioc, CassandraError err, uint32_t attempt) {
{ T(ioc) };
{ a.shouldRetry(err) } -> std::same_as<bool>;
{ a.retry([](){}) } -> std::same_as<void>;
{ a.calculateDelay(attempt) } -> std::same_as<std::chrono::milliseconds>;
};
// clang-format on
} // namespace Backend::Cassandra

@@ -0,0 +1,99 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <cassandra.h>
#include <string>
namespace Backend::Cassandra {
/**
* @brief A simple container for both error message and error code
*/
class CassandraError
{
std::string message_;
uint32_t code_;
public:
CassandraError() = default; // default constructible required by Expected
CassandraError(std::string message, uint32_t code) : message_{message}, code_{code}
{
}
template <typename T>
friend std::string
operator+(T const& lhs, CassandraError const& rhs) requires std::is_convertible_v<T, std::string>
{
return lhs + rhs.message();
}
template <typename T>
friend bool
operator==(T const& lhs, CassandraError const& rhs) requires std::is_convertible_v<T, std::string>
{
return lhs == rhs.message();
}
template <std::integral T>
friend bool
operator==(T const& lhs, CassandraError const& rhs)
{
return lhs == rhs.code();
}
friend std::ostream&
operator<<(std::ostream& os, CassandraError const& err)
{
os << err.message();
return os;
}
std::string
message() const
{
return message_;
}
uint32_t
code() const
{
return code_;
}
bool
isTimeout() const
{
if (code_ == CASS_ERROR_LIB_NO_HOSTS_AVAILABLE or code_ == CASS_ERROR_LIB_REQUEST_TIMED_OUT or
code_ == CASS_ERROR_SERVER_UNAVAILABLE or code_ == CASS_ERROR_SERVER_OVERLOADED or
code_ == CASS_ERROR_SERVER_READ_TIMEOUT)
return true;
return false;
}
bool
isInvalidQuery() const
{
return code_ == CASS_ERROR_SERVER_INVALID_QUERY;
}
};
} // namespace Backend::Cassandra

@@ -0,0 +1,155 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <backend/cassandra/Handle.h>
namespace Backend::Cassandra {
Handle::Handle(Settings clusterSettings) : cluster_{clusterSettings}
{
}
Handle::Handle(std::string_view contactPoints) : Handle{Settings::defaultSettings().withContactPoints(contactPoints)}
{
}
Handle::~Handle()
{
[[maybe_unused]] auto _ = disconnect(); // attempt to disconnect
}
Handle::FutureType
Handle::asyncConnect() const
{
return cass_session_connect(session_, cluster_);
}
Handle::MaybeErrorType
Handle::connect() const
{
return asyncConnect().await();
}
Handle::FutureType
Handle::asyncConnect(std::string_view keyspace) const
{
return cass_session_connect_keyspace(session_, cluster_, keyspace.data());
}
Handle::MaybeErrorType
Handle::connect(std::string_view keyspace) const
{
return asyncConnect(keyspace).await();
}
Handle::FutureType
Handle::asyncDisconnect() const
{
return cass_session_close(session_);
}
Handle::MaybeErrorType
Handle::disconnect() const
{
return asyncDisconnect().await();
}
Handle::FutureType
Handle::asyncReconnect(std::string_view keyspace) const
{
if (auto rc = asyncDisconnect().await(); not rc) // sync
throw std::logic_error("Reconnect to keyspace '" + std::string{keyspace} + "' failed: " + rc.error());
return asyncConnect(keyspace);
}
Handle::MaybeErrorType
Handle::reconnect(std::string_view keyspace) const
{
return asyncReconnect(keyspace).await();
}
std::vector<Handle::FutureType>
Handle::asyncExecuteEach(std::vector<Statement> const& statements) const
{
std::vector<Handle::FutureType> futures;
for (auto const& statement : statements)
futures.push_back(cass_session_execute(session_, statement));
return futures;
}
Handle::MaybeErrorType
Handle::executeEach(std::vector<Statement> const& statements) const
{
for (auto futures = asyncExecuteEach(statements); auto const& future : futures)
{
if (auto const rc = future.await(); not rc)
return rc;
}
return {};
}
Handle::FutureType
Handle::asyncExecute(Statement const& statement) const
{
return cass_session_execute(session_, statement);
}
Handle::FutureWithCallbackType
Handle::asyncExecute(Statement const& statement, std::function<void(Handle::ResultOrErrorType)>&& cb) const
{
return Handle::FutureWithCallbackType{cass_session_execute(session_, statement), std::move(cb)};
}
Handle::ResultOrErrorType
Handle::execute(Statement const& statement) const
{
return asyncExecute(statement).get();
}
Handle::FutureType
Handle::asyncExecute(std::vector<Statement> const& statements) const
{
return cass_session_execute_batch(session_, Batch{statements});
}
Handle::MaybeErrorType
Handle::execute(std::vector<Statement> const& statements) const
{
return asyncExecute(statements).await();
}
Handle::FutureWithCallbackType
Handle::asyncExecute(std::vector<Statement> const& statements, std::function<void(Handle::ResultOrErrorType)>&& cb)
const
{
return Handle::FutureWithCallbackType{cass_session_execute_batch(session_, Batch{statements}), std::move(cb)};
}
Handle::PreparedStatementType
Handle::prepare(std::string_view query) const
{
Handle::FutureType future = cass_session_prepare(session_, query.data());
if (auto const rc = future.await(); rc)
return cass_future_get_prepared(future);
else
throw std::runtime_error(rc.error().message());
}
} // namespace Backend::Cassandra

@@ -0,0 +1,295 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/Error.h>
#include <backend/cassandra/Types.h>
#include <backend/cassandra/impl/Batch.h>
#include <backend/cassandra/impl/Cluster.h>
#include <backend/cassandra/impl/Future.h>
#include <backend/cassandra/impl/ManagedObject.h>
#include <backend/cassandra/impl/Result.h>
#include <backend/cassandra/impl/Session.h>
#include <backend/cassandra/impl/Statement.h>
#include <util/Expected.h>
#include <cassandra.h>
#include <chrono>
#include <compare>
#include <iterator>
#include <vector>
namespace Backend::Cassandra {
/**
 * @brief Represents a handle to the Cassandra database cluster
*/
class Handle
{
detail::Cluster cluster_;
detail::Session session_;
public:
using ResultOrErrorType = ResultOrError;
using MaybeErrorType = MaybeError;
using FutureWithCallbackType = FutureWithCallback;
using FutureType = Future;
using StatementType = Statement;
using PreparedStatementType = PreparedStatement;
using ResultType = Result;
/**
* @brief Construct a new handle from a @ref Settings object
*/
explicit Handle(Settings clusterSettings = Settings::defaultSettings());
/**
 * @brief Construct a new handle with default settings, overriding only
 * the contact points
*/
explicit Handle(std::string_view contactPoints);
/**
* @brief Disconnects gracefully if possible
*/
~Handle();
/**
* @brief Move is supported
*/
Handle(Handle&&) = default;
/**
* @brief Connect to the cluster asynchronously
*
* @return A future
*/
[[nodiscard]] FutureType
asyncConnect() const;
/**
 * @brief Synchronous version of the above
*
* See @ref asyncConnect() const for how this works.
*/
[[nodiscard]] MaybeErrorType
connect() const;
/**
 * @brief Connect to the specified keyspace asynchronously
*
* @return A future
*/
[[nodiscard]] FutureType
asyncConnect(std::string_view keyspace) const;
/**
 * @brief Synchronous version of the above
*
* See @ref asyncConnect(std::string_view) const for how this works.
*/
[[nodiscard]] MaybeErrorType
connect(std::string_view keyspace) const;
/**
* @brief Disconnect from the cluster asynchronously
*
* @return A future
*/
[[nodiscard]] FutureType
asyncDisconnect() const;
/**
 * @brief Synchronous version of the above
*
* See @ref asyncDisconnect() const for how this works.
*/
[[maybe_unused]] MaybeErrorType
disconnect() const;
/**
 * @brief Reconnect to the specified keyspace asynchronously
*
* @return A future
*/
[[nodiscard]] FutureType
asyncReconnect(std::string_view keyspace) const;
/**
 * @brief Synchronous version of the above
*
* See @ref asyncReconnect(std::string_view) const for how this works.
*/
[[nodiscard]] MaybeErrorType
reconnect(std::string_view keyspace) const;
/**
* @brief Execute a simple query with optional args asynchronously
*
* @return A future
*/
template <typename... Args>
[[nodiscard]] FutureType
asyncExecute(std::string_view query, Args&&... args) const
{
auto statement = StatementType{query, std::forward<Args>(args)...};
return cass_session_execute(session_, statement);
}
/**
 * @brief Synchronous version of the above
*
* See @ref asyncExecute(std::string_view, Args&&...) const for how this
* works.
*/
template <typename... Args>
[[maybe_unused]] ResultOrErrorType
execute(std::string_view query, Args&&... args) const
{
return asyncExecute<Args...>(query, std::forward<Args>(args)...).get();
}
/**
* @brief Execute each of the statements asynchronously
*
 * The batched version is not always the right option, especially since it
 * only supports INSERT, UPDATE and DELETE statements. This can be used as
 * an alternative when statements need to execute in bulk.
*
* @return A vector of future objects
*/
[[nodiscard]] std::vector<FutureType>
asyncExecuteEach(std::vector<StatementType> const& statements) const;
/**
 * @brief Synchronous version of the above
*
* See @ref asyncExecuteEach(std::vector<StatementType> const&) const for
* how this works.
*/
[[maybe_unused]] MaybeErrorType
executeEach(std::vector<StatementType> const& statements) const;
/**
* @brief Execute a prepared statement with optional args asynchronously
*
* @return A future
*/
template <typename... Args>
[[nodiscard]] FutureType
asyncExecute(PreparedStatementType const& statement, Args&&... args) const
{
auto bound = statement.bind<Args...>(std::forward<Args>(args)...);
return cass_session_execute(session_, bound);
}
/**
 * @brief Synchronous version of the above
*
 * See @ref asyncExecute(PreparedStatementType const&, Args&&...) const
 * for how this works.
*/
template <typename... Args>
[[maybe_unused]] ResultOrErrorType
execute(PreparedStatementType const& statement, Args&&... args) const
{
return asyncExecute<Args...>(statement, std::forward<Args>(args)...).get();
}
/**
 * @brief Execute one (bound or simple) statement asynchronously
*
* @return A future
*/
[[nodiscard]] FutureType
asyncExecute(StatementType const& statement) const;
/**
 * @brief Execute one (bound or simple) statement asynchronously with a
 * callback
*
* @return A future that holds onto the callback provided
*/
[[nodiscard]] FutureWithCallbackType
asyncExecute(StatementType const& statement, std::function<void(ResultOrErrorType)>&& cb) const;
/**
 * @brief Synchronous version of the above
*
* See @ref asyncExecute(StatementType const&) const for how this
* works.
*/
[[maybe_unused]] ResultOrErrorType
execute(StatementType const& statement) const;
/**
* @brief Execute a batch of (bound or simple) statements asynchronously
*
* @return A future
*/
[[nodiscard]] FutureType
asyncExecute(std::vector<StatementType> const& statements) const;
/**
 * @brief Synchronous version of the above
*
* See @ref asyncExecute(std::vector<StatementType> const&) const for how
* this works.
*/
[[maybe_unused]] MaybeErrorType
execute(std::vector<StatementType> const& statements) const;
/**
* @brief Execute a batch of (bound or simple) statements asynchronously
* with a completion callback
*
* @return A future that holds onto the callback provided
*/
[[nodiscard]] FutureWithCallbackType
asyncExecute(std::vector<StatementType> const& statements, std::function<void(ResultOrErrorType)>&& cb) const;
/**
* @brief Prepare a statement
*
* @return A @ref PreparedStatementType
* @throws std::runtime_error with underlying error description on failure
*/
[[nodiscard]] PreparedStatementType
prepare(std::string_view query) const;
};
/**
 * @brief Extracts the results into a series of std::tuple<Types...> by creating a
* simple wrapper with an STL input iterator inside.
*
* You can call .begin() and .end() in order to iterate as usual.
* This also means that you can use it in a range-based for or with some
* algorithms.
*/
template <typename... Types>
[[nodiscard]] detail::ResultExtractor<Types...>
extract(Handle::ResultType const& result)
{
return {result};
}
} // namespace Backend::Cassandra

@@ -0,0 +1,667 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/Concepts.h>
#include <backend/cassandra/Handle.h>
#include <backend/cassandra/SettingsProvider.h>
#include <backend/cassandra/Types.h>
#include <config/Config.h>
#include <log/Logger.h>
#include <util/Expected.h>
#include <fmt/compile.h>
namespace Backend::Cassandra {
template <SomeSettingsProvider SettingsProviderType>
[[nodiscard]] std::string inline qualifiedTableName(SettingsProviderType const& provider, std::string_view name)
{
return fmt::format("{}.{}{}", provider.getKeyspace(), provider.getTablePrefix().value_or(""), name);
}
/**
* @brief Manages the DB schema and provides access to prepared statements
*/
template <SomeSettingsProvider SettingsProviderType>
class Schema
{
// Current schema version.
// Update this every time you update the schema.
// Migrations will be run automatically based on this value.
static constexpr uint16_t version = 1u;
clio::Logger log_{"Backend"};
std::reference_wrapper<SettingsProviderType const> settingsProvider_;
public:
explicit Schema(SettingsProviderType const& settingsProvider) : settingsProvider_{std::cref(settingsProvider)}
{
}
std::string createKeyspace = [this]() {
return fmt::format(
R"(
CREATE KEYSPACE IF NOT EXISTS {}
WITH replication = {{
'class': 'SimpleStrategy',
'replication_factor': '{}'
}}
AND durable_writes = true
)",
settingsProvider_.get().getKeyspace(),
settingsProvider_.get().getReplicationFactor());
}();
// =======================
// Schema creation queries
// =======================
std::vector<Statement> createSchema = [this]() {
std::vector<Statement> statements;
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
key blob,
sequence bigint,
object blob,
PRIMARY KEY (key, sequence)
)
WITH CLUSTERING ORDER BY (sequence DESC)
AND default_time_to_live = {}
)",
qualifiedTableName(settingsProvider_.get(), "objects"),
settingsProvider_.get().getTtl()));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
hash blob PRIMARY KEY,
ledger_sequence bigint,
date bigint,
transaction blob,
metadata blob
)
WITH default_time_to_live = {}
)",
qualifiedTableName(settingsProvider_.get(), "transactions"),
settingsProvider_.get().getTtl()));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
ledger_sequence bigint,
hash blob,
PRIMARY KEY (ledger_sequence, hash)
)
WITH default_time_to_live = {}
)",
qualifiedTableName(settingsProvider_.get(), "ledger_transactions"),
settingsProvider_.get().getTtl()));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
key blob,
seq bigint,
next blob,
PRIMARY KEY (key, seq)
)
WITH default_time_to_live = {}
)",
qualifiedTableName(settingsProvider_.get(), "successor"),
settingsProvider_.get().getTtl()));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
seq bigint,
key blob,
PRIMARY KEY (seq, key)
)
WITH default_time_to_live = {}
)",
qualifiedTableName(settingsProvider_.get(), "diff"),
settingsProvider_.get().getTtl()));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
account blob,
seq_idx tuple<bigint, bigint>,
hash blob,
PRIMARY KEY (account, seq_idx)
)
WITH CLUSTERING ORDER BY (seq_idx DESC)
AND default_time_to_live = {}
)",
qualifiedTableName(settingsProvider_.get(), "account_tx"),
settingsProvider_.get().getTtl()));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
sequence bigint PRIMARY KEY,
header blob
)
WITH default_time_to_live = {}
)",
qualifiedTableName(settingsProvider_.get(), "ledgers"),
settingsProvider_.get().getTtl()));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
hash blob PRIMARY KEY,
sequence bigint
)
WITH default_time_to_live = {}
)",
qualifiedTableName(settingsProvider_.get(), "ledger_hashes"),
settingsProvider_.get().getTtl()));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
is_latest boolean PRIMARY KEY,
sequence bigint
)
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
token_id blob,
sequence bigint,
owner blob,
is_burned boolean,
PRIMARY KEY (token_id, sequence)
)
WITH CLUSTERING ORDER BY (sequence DESC)
AND default_time_to_live = {}
)",
qualifiedTableName(settingsProvider_.get(), "nf_tokens"),
settingsProvider_.get().getTtl()));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
issuer blob,
taxon bigint,
token_id blob,
PRIMARY KEY (issuer, taxon, token_id)
)
WITH CLUSTERING ORDER BY (taxon ASC, token_id ASC)
AND default_time_to_live = {}
)",
qualifiedTableName(settingsProvider_.get(), "issuer_nf_tokens_v2"),
settingsProvider_.get().getTtl()));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
token_id blob,
sequence bigint,
uri blob,
PRIMARY KEY (token_id, sequence)
)
WITH CLUSTERING ORDER BY (sequence DESC)
AND default_time_to_live = {}
)",
qualifiedTableName(settingsProvider_.get(), "nf_token_uris"),
settingsProvider_.get().getTtl()));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
token_id blob,
seq_idx tuple<bigint, bigint>,
hash blob,
PRIMARY KEY (token_id, seq_idx)
)
WITH CLUSTERING ORDER BY (seq_idx DESC)
AND default_time_to_live = {}
)",
qualifiedTableName(settingsProvider_.get(), "nf_token_transactions"),
settingsProvider_.get().getTtl()));
return statements;
}();
/**
* @brief Prepared statements holder
*/
class Statements
{
std::reference_wrapper<SettingsProviderType const> settingsProvider_;
std::reference_wrapper<Handle const> handle_;
public:
Statements(SettingsProviderType const& settingsProvider, Handle const& handle)
: settingsProvider_{settingsProvider}, handle_{std::cref(handle)}
{
}
//
// Insert queries
//
PreparedStatement insertObject = [this]() {
return handle_.get().prepare(fmt::format(
R"(
INSERT INTO {}
(key, sequence, object)
VALUES (?, ?, ?)
)",
qualifiedTableName(settingsProvider_.get(), "objects")));
}();
PreparedStatement insertTransaction = [this]() {
return handle_.get().prepare(fmt::format(
R"(
INSERT INTO {}
(hash, ledger_sequence, date, transaction, metadata)
VALUES (?, ?, ?, ?, ?)
)",
qualifiedTableName(settingsProvider_.get(), "transactions")));
}();
PreparedStatement insertLedgerTransaction = [this]() {
return handle_.get().prepare(fmt::format(
R"(
INSERT INTO {}
(ledger_sequence, hash)
VALUES (?, ?)
)",
qualifiedTableName(settingsProvider_.get(), "ledger_transactions")));
}();
PreparedStatement insertSuccessor = [this]() {
return handle_.get().prepare(fmt::format(
R"(
INSERT INTO {}
(key, seq, next)
VALUES (?, ?, ?)
)",
qualifiedTableName(settingsProvider_.get(), "successor")));
}();
PreparedStatement insertDiff = [this]() {
return handle_.get().prepare(fmt::format(
R"(
INSERT INTO {}
(seq, key)
VALUES (?, ?)
)",
qualifiedTableName(settingsProvider_.get(), "diff")));
}();
PreparedStatement insertAccountTx = [this]() {
return handle_.get().prepare(fmt::format(
R"(
INSERT INTO {}
(account, seq_idx, hash)
VALUES (?, ?, ?)
)",
qualifiedTableName(settingsProvider_.get(), "account_tx")));
}();
PreparedStatement insertNFT = [this]() {
return handle_.get().prepare(fmt::format(
R"(
INSERT INTO {}
(token_id, sequence, owner, is_burned)
VALUES (?, ?, ?, ?)
)",
qualifiedTableName(settingsProvider_.get(), "nf_tokens")));
}();
PreparedStatement insertIssuerNFT = [this]() {
return handle_.get().prepare(fmt::format(
R"(
INSERT INTO {}
(issuer, taxon, token_id)
VALUES (?, ?, ?)
)",
qualifiedTableName(settingsProvider_.get(), "issuer_nf_tokens_v2")));
}();
PreparedStatement insertNFTURI = [this]() {
return handle_.get().prepare(fmt::format(
R"(
INSERT INTO {}
(token_id, sequence, uri)
VALUES (?, ?, ?)
)",
qualifiedTableName(settingsProvider_.get(), "nf_token_uris")));
}();
PreparedStatement insertNFTTx = [this]() {
return handle_.get().prepare(fmt::format(
R"(
INSERT INTO {}
(token_id, seq_idx, hash)
VALUES (?, ?, ?)
)",
qualifiedTableName(settingsProvider_.get(), "nf_token_transactions")));
}();
PreparedStatement insertLedgerHeader = [this]() {
return handle_.get().prepare(fmt::format(
R"(
INSERT INTO {}
(sequence, header)
VALUES (?, ?)
)",
qualifiedTableName(settingsProvider_.get(), "ledgers")));
}();
PreparedStatement insertLedgerHash = [this]() {
return handle_.get().prepare(fmt::format(
R"(
INSERT INTO {}
(hash, sequence)
VALUES (?, ?)
)",
qualifiedTableName(settingsProvider_.get(), "ledger_hashes")));
}();
//
// Update (and "delete") queries
//
PreparedStatement updateLedgerRange = [this]() {
return handle_.get().prepare(fmt::format(
R"(
UPDATE {}
SET sequence = ?
WHERE is_latest = ?
IF sequence IN (?, null)
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")));
}();
PreparedStatement deleteLedgerRange = [this]() {
return handle_.get().prepare(fmt::format(
R"(
UPDATE {}
SET sequence = ?
WHERE is_latest = false
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")));
}();
//
// Select queries
//
PreparedStatement selectSuccessor = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT next
FROM {}
WHERE key = ?
AND seq <= ?
ORDER BY seq DESC
LIMIT 1
)",
qualifiedTableName(settingsProvider_.get(), "successor")));
}();
PreparedStatement selectDiff = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT key
FROM {}
WHERE seq = ?
)",
qualifiedTableName(settingsProvider_.get(), "diff")));
}();
PreparedStatement selectObject = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT object, sequence
FROM {}
WHERE key = ?
AND sequence <= ?
ORDER BY sequence DESC
LIMIT 1
)",
qualifiedTableName(settingsProvider_.get(), "objects")));
}();
PreparedStatement selectTransaction = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT transaction, metadata, ledger_sequence, date
FROM {}
WHERE hash = ?
)",
qualifiedTableName(settingsProvider_.get(), "transactions")));
}();
PreparedStatement selectAllTransactionHashesInLedger = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT hash
FROM {}
WHERE ledger_sequence = ?
)",
qualifiedTableName(settingsProvider_.get(), "ledger_transactions")));
}();
PreparedStatement selectLedgerPageKeys = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT key
FROM {}
WHERE TOKEN(key) >= ?
AND sequence <= ?
PER PARTITION LIMIT 1
LIMIT ?
ALLOW FILTERING
)",
qualifiedTableName(settingsProvider_.get(), "objects")));
}();
PreparedStatement selectLedgerPage = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT object, key
FROM {}
WHERE TOKEN(key) >= ?
AND sequence <= ?
PER PARTITION LIMIT 1
LIMIT ?
ALLOW FILTERING
)",
qualifiedTableName(settingsProvider_.get(), "objects")));
}();
PreparedStatement getToken = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT TOKEN(key)
FROM {}
WHERE key = ?
LIMIT 1
)",
qualifiedTableName(settingsProvider_.get(), "objects")));
}();
PreparedStatement selectAccountTx = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT hash, seq_idx
FROM {}
WHERE account = ?
AND seq_idx <= ?
LIMIT ?
)",
qualifiedTableName(settingsProvider_.get(), "account_tx")));
}();
PreparedStatement selectAccountTxForward = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT hash, seq_idx
FROM {}
WHERE account = ?
AND seq_idx >= ?
ORDER BY seq_idx ASC
LIMIT ?
)",
qualifiedTableName(settingsProvider_.get(), "account_tx")));
}();
PreparedStatement selectNFT = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT sequence, owner, is_burned
FROM {}
WHERE token_id = ?
AND sequence <= ?
ORDER BY sequence DESC
LIMIT 1
)",
qualifiedTableName(settingsProvider_.get(), "nf_tokens")));
}();
PreparedStatement selectNFTURI = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT uri
FROM {}
WHERE token_id = ?
AND sequence <= ?
ORDER BY sequence DESC
LIMIT 1
)",
qualifiedTableName(settingsProvider_.get(), "nf_token_uris")));
}();
PreparedStatement selectNFTTx = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT hash, seq_idx
FROM {}
WHERE token_id = ?
AND seq_idx < ?
ORDER BY seq_idx DESC
LIMIT ?
)",
qualifiedTableName(settingsProvider_.get(), "nf_token_transactions")));
}();
PreparedStatement selectNFTTxForward = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT hash, seq_idx
FROM {}
WHERE token_id = ?
AND seq_idx >= ?
ORDER BY seq_idx ASC
LIMIT ?
)",
qualifiedTableName(settingsProvider_.get(), "nf_token_transactions")));
}();
PreparedStatement selectLedgerByHash = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT sequence
FROM {}
WHERE hash = ?
LIMIT 1
)",
qualifiedTableName(settingsProvider_.get(), "ledger_hashes")));
}();
PreparedStatement selectLedgerBySeq = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT header
FROM {}
WHERE sequence = ?
)",
qualifiedTableName(settingsProvider_.get(), "ledgers")));
}();
PreparedStatement selectLatestLedger = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT sequence
FROM {}
WHERE is_latest = true
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")));
}();
PreparedStatement selectLedgerRange = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT sequence
FROM {}
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")));
}();
};
/**
* @brief Recreates the prepared statements
*/
void
prepareStatements(Handle const& handle)
{
log_.info() << "Preparing cassandra statements";
statements_ = std::make_unique<Statements>(settingsProvider_, handle);
log_.info() << "Finished preparing statements";
}
/**
* @brief Provides access to statements
*/
std::unique_ptr<Statements> const&
operator->() const
{
return statements_;
}
private:
std::unique_ptr<Statements> statements_{nullptr};
};
} // namespace Backend::Cassandra


@@ -0,0 +1,125 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <backend/cassandra/SettingsProvider.h>
#include <backend/cassandra/impl/Cluster.h>
#include <backend/cassandra/impl/Statement.h>
#include <config/Config.h>
#include <boost/json.hpp>
#include <string>
#include <thread>
namespace Backend::Cassandra {
namespace detail {
inline Settings::ContactPoints
tag_invoke(boost::json::value_to_tag<Settings::ContactPoints>, boost::json::value const& value)
{
if (not value.is_object())
throw std::runtime_error(
"Feed entire Cassandra section to parse "
"Settings::ContactPoints instead");
clio::Config obj{value};
Settings::ContactPoints out;
out.contactPoints = obj.valueOrThrow<std::string>("contact_points", "`contact_points` must be a string");
out.port = obj.maybeValue<uint16_t>("port");
return out;
}
inline Settings::SecureConnectionBundle
tag_invoke(boost::json::value_to_tag<Settings::SecureConnectionBundle>, boost::json::value const& value)
{
if (not value.is_string())
throw std::runtime_error("`secure_connect_bundle` must be a string");
return Settings::SecureConnectionBundle{value.as_string().data()};
}
} // namespace detail
SettingsProvider::SettingsProvider(clio::Config const& cfg, uint16_t ttl)
: config_{cfg}
, keyspace_{cfg.valueOr<std::string>("keyspace", "clio")}
, tablePrefix_{cfg.maybeValue<std::string>("table_prefix")}
, replicationFactor_{cfg.valueOr<uint16_t>("replication_factor", 3)}
, ttl_{ttl}
, settings_{parseSettings()}
{
}
Settings
SettingsProvider::getSettings() const
{
return settings_;
}
std::optional<std::string>
SettingsProvider::parseOptionalCertificate() const
{
if (auto const certPath = config_.maybeValue<std::string>("certfile"); certPath)
{
auto const path = std::filesystem::path(*certPath);
std::ifstream fileStream(path.string(), std::ios::in);
if (!fileStream)
{
throw std::system_error(errno, std::generic_category(), "Opening certificate " + path.string());
}
std::string contents(std::istreambuf_iterator<char>{fileStream}, std::istreambuf_iterator<char>{});
if (fileStream.bad())
{
throw std::system_error(errno, std::generic_category(), "Reading certificate " + path.string());
}
return contents;
}
return std::nullopt;
}
Settings
SettingsProvider::parseSettings() const
{
auto settings = Settings::defaultSettings();
if (auto const bundle = config_.maybeValue<Settings::SecureConnectionBundle>("secure_connect_bundle"); bundle)
{
settings.connectionInfo = *bundle;
}
else
{
settings.connectionInfo =
config_.valueOrThrow<Settings::ContactPoints>("Missing contact_points in Cassandra config");
}
settings.threads = config_.valueOr<uint32_t>("threads", settings.threads);
settings.maxWriteRequestsOutstanding =
config_.valueOr<uint32_t>("max_write_requests_outstanding", settings.maxWriteRequestsOutstanding);
settings.maxReadRequestsOutstanding =
config_.valueOr<uint32_t>("max_read_requests_outstanding", settings.maxReadRequestsOutstanding);
settings.certificate = parseOptionalCertificate();
settings.username = config_.maybeValue<std::string>("username");
settings.password = config_.maybeValue<std::string>("password");
return settings;
}
} // namespace Backend::Cassandra
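The keys read by `SettingsProvider` and `parseSettings` above imply a Cassandra config section shaped roughly like this. Values are illustrative only; the defaults for the outstanding-request limits and keyspace come from the code, while the port shown is just the conventional Cassandra port, and `secure_connect_bundle` may be supplied instead of `contact_points`:

```json
{
    "contact_points": "127.0.0.1",
    "port": 9042,
    "keyspace": "clio",
    "table_prefix": "",
    "replication_factor": 3,
    "threads": 4,
    "max_write_requests_outstanding": 10000,
    "max_read_requests_outstanding": 100000,
    "certfile": "/path/to/cert.pem",
    "username": "user",
    "password": "secret"
}
```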


@@ -0,0 +1,86 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/Handle.h>
#include <backend/cassandra/Types.h>
#include <config/Config.h>
#include <log/Logger.h>
#include <util/Expected.h>
namespace Backend::Cassandra {
/**
* @brief Provides settings for @ref CassandraBackend
*/
class SettingsProvider
{
clio::Config config_;
std::string keyspace_;
std::optional<std::string> tablePrefix_;
uint16_t replicationFactor_;
uint16_t ttl_;
Settings settings_;
public:
explicit SettingsProvider(clio::Config const& cfg, uint16_t ttl = 0);
/*! Get the cluster settings */
[[nodiscard]] Settings
getSettings() const;
/*! Get the specified keyspace */
[[nodiscard]] inline std::string
getKeyspace() const
{
return keyspace_;
}
/*! Get an optional table prefix to use in all queries */
[[nodiscard]] inline std::optional<std::string>
getTablePrefix() const
{
return tablePrefix_;
}
/*! Get the replication factor */
[[nodiscard]] inline uint16_t
getReplicationFactor() const
{
return replicationFactor_;
}
/*! Get the default time to live to use in all `create` queries */
[[nodiscard]] inline uint16_t
getTtl() const
{
return ttl_;
}
private:
[[nodiscard]] std::optional<std::string>
parseOptionalCertificate() const;
[[nodiscard]] Settings
parseSettings() const;
};
} // namespace Backend::Cassandra


@@ -0,0 +1,67 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <util/Expected.h>
#include <string>
namespace Backend::Cassandra {
namespace detail {
struct Settings;
class Session;
class Cluster;
struct Future;
class FutureWithCallback;
struct Result;
class Statement;
class PreparedStatement;
struct Batch;
} // namespace detail
using Settings = detail::Settings;
using Future = detail::Future;
using FutureWithCallback = detail::FutureWithCallback;
using Result = detail::Result;
using Statement = detail::Statement;
using PreparedStatement = detail::PreparedStatement;
using Batch = detail::Batch;
/**
* @brief A strong type wrapper for int32_t
*
* This is unfortunately needed right now to support uint32_t properly
 * because clio uses bigint (int64) everywhere except when one needs
* to specify LIMIT, which needs an int32 :-/
*/
struct Limit
{
int32_t limit;
};
class Handle;
class CassandraError;
using MaybeError = util::Expected<void, CassandraError>;
using ResultOrError = util::Expected<Result, CassandraError>;
using Error = util::Unexpected<CassandraError>;
} // namespace Backend::Cassandra


@@ -0,0 +1,119 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/Concepts.h>
#include <backend/cassandra/Handle.h>
#include <backend/cassandra/Types.h>
#include <backend/cassandra/impl/RetryPolicy.h>
#include <log/Logger.h>
#include <util/Expected.h>
#include <boost/asio.hpp>
#include <functional>
#include <memory>
namespace Backend::Cassandra::detail {
/**
 * @brief A query executor with a changeable retry policy
*
* Note: this is a bit of an anti-pattern and should be done differently
* eventually.
*
* Currently it's basically a saner implementation of the previous design that
* was used in production without much issue but was using raw new/delete and
* could leak easily. This version is slightly better but the overall design is
* flawed and should be reworked.
*/
template <
typename StatementType,
typename HandleType = Handle,
SomeRetryPolicy RetryPolicyType = ExponentialBackoffRetryPolicy>
class AsyncExecutor : public std::enable_shared_from_this<AsyncExecutor<StatementType, HandleType, RetryPolicyType>>
{
using FutureWithCallbackType = typename HandleType::FutureWithCallbackType;
using CallbackType = std::function<void(typename HandleType::ResultOrErrorType)>;
clio::Logger log_{"Backend"};
StatementType data_;
RetryPolicyType retryPolicy_;
CallbackType onComplete_;
// does not exist during initial construction, hence optional
std::optional<FutureWithCallbackType> future_;
std::mutex mtx_;
public:
/**
* @brief Create a new instance of the AsyncExecutor and execute it.
*/
static void
run(boost::asio::io_context& ioc, HandleType const& handle, StatementType&& data, CallbackType&& onComplete)
{
// this is a helper that allows us to use std::make_shared below
struct EnableMakeShared : public AsyncExecutor<StatementType, HandleType, RetryPolicyType>
{
EnableMakeShared(boost::asio::io_context& ioc, StatementType&& data, CallbackType&& onComplete)
: AsyncExecutor(ioc, std::move(data), std::move(onComplete))
{
}
};
auto ptr = std::make_shared<EnableMakeShared>(ioc, std::move(data), std::move(onComplete));
ptr->execute(handle);
}
private:
AsyncExecutor(boost::asio::io_context& ioc, StatementType&& data, CallbackType&& onComplete)
: data_{std::move(data)}, retryPolicy_{ioc}, onComplete_{std::move(onComplete)}
{
}
void
execute(HandleType const& handle)
{
auto self = this->shared_from_this();
// lifetime is extended by capturing self ptr
auto handler = [this, &handle, self](auto&& res) mutable {
if (res)
{
onComplete_(std::move(res));
}
else
{
if (retryPolicy_.shouldRetry(res.error()))
retryPolicy_.retry([self, &handle]() { self->execute(handle); });
else
onComplete_(std::move(res)); // report error
}
self = nullptr; // explicitly decrement refcount
};
std::scoped_lock lck{mtx_};
future_.emplace(handle.asyncExecute(data_, std::move(handler)));
}
};
} // namespace Backend::Cassandra::detail


@@ -0,0 +1,56 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <backend/cassandra/Error.h>
#include <backend/cassandra/impl/Batch.h>
#include <backend/cassandra/impl/Statement.h>
#include <util/Expected.h>
#include <exception>
#include <vector>
namespace {
static constexpr auto batchDeleter = [](CassBatch* ptr) { cass_batch_free(ptr); };
} // namespace
namespace Backend::Cassandra::detail {
// todo: use an appropriate value instead of CASS_BATCH_TYPE_LOGGED for
// different use cases
Batch::Batch(std::vector<Statement> const& statements)
: ManagedObject{cass_batch_new(CASS_BATCH_TYPE_LOGGED), batchDeleter}
{
cass_batch_set_is_idempotent(*this, cass_true);
for (auto const& statement : statements)
if (auto const res = add(statement); not res)
throw std::runtime_error("Failed to add statement to batch: " + res.error());
}
MaybeError
Batch::add(Statement const& statement)
{
if (auto const rc = cass_batch_add_statement(*this, statement); rc != CASS_OK)
{
return Error{CassandraError{cass_error_desc(rc), rc}};
}
return {};
}
} // namespace Backend::Cassandra::detail


@@ -0,0 +1,37 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/Types.h>
#include <backend/cassandra/impl/ManagedObject.h>
#include <cassandra.h>
namespace Backend::Cassandra::detail {
struct Batch : public ManagedObject<CassBatch>
{
Batch(std::vector<Statement> const& statements);
MaybeError
add(Statement const& statement);
};
} // namespace Backend::Cassandra::detail


@@ -0,0 +1,154 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <backend/cassandra/impl/Cluster.h>
#include <backend/cassandra/impl/SslContext.h>
#include <backend/cassandra/impl/Statement.h>
#include <util/Expected.h>
#include <exception>
#include <vector>
namespace {
static constexpr auto clusterDeleter = [](CassCluster* ptr) { cass_cluster_free(ptr); };
template <class... Ts>
struct overloadSet : Ts...
{
using Ts::operator()...;
};
// explicit deduction guide (not needed as of C++20, but clang be clang)
template <class... Ts>
overloadSet(Ts...) -> overloadSet<Ts...>;
} // namespace
namespace Backend::Cassandra::detail {
Cluster::Cluster(Settings const& settings) : ManagedObject{cass_cluster_new(), clusterDeleter}
{
using std::to_string;
cass_cluster_set_token_aware_routing(*this, cass_true);
if (auto const rc = cass_cluster_set_protocol_version(*this, CASS_PROTOCOL_VERSION_V4); rc != CASS_OK)
{
throw std::runtime_error(std::string{"Error setting cassandra protocol version to v4: "} + cass_error_desc(rc));
}
if (auto const rc = cass_cluster_set_num_threads_io(*this, settings.threads); rc != CASS_OK)
{
throw std::runtime_error(
std::string{"Error setting cassandra io threads to "} + to_string(settings.threads) + ": " +
cass_error_desc(rc));
}
cass_log_set_level(settings.enableLog ? CASS_LOG_TRACE : CASS_LOG_DISABLED);
cass_cluster_set_connect_timeout(*this, settings.connectionTimeout.count());
cass_cluster_set_request_timeout(*this, settings.requestTimeout.count());
// TODO: other options to experiment with and consider later:
// cass_cluster_set_max_concurrent_requests_threshold(*this, 10000);
// cass_cluster_set_queue_size_event(*this, 100000);
// cass_cluster_set_queue_size_io(*this, 100000);
// cass_cluster_set_write_bytes_high_water_mark(*this, 16 * 1024 * 1024); // 16mb
// cass_cluster_set_write_bytes_low_water_mark(*this, 8 * 1024 * 1024); // half of allowance
// cass_cluster_set_pending_requests_high_water_mark(*this, 5000);
// cass_cluster_set_pending_requests_low_water_mark(*this, 2500); // half
// cass_cluster_set_max_requests_per_flush(*this, 1000);
// cass_cluster_set_max_concurrent_creation(*this, 8);
// cass_cluster_set_max_connections_per_host(*this, 6);
// cass_cluster_set_core_connections_per_host(*this, 4);
// cass_cluster_set_constant_speculative_execution_policy(*this, 1000, 1024);
if (auto const rc = cass_cluster_set_queue_size_io(
*this, settings.maxWriteRequestsOutstanding + settings.maxReadRequestsOutstanding);
rc != CASS_OK)
{
throw std::runtime_error(std::string{"Could not set queue size for IO per host: "} + cass_error_desc(rc));
}
setupConnection(settings);
setupCertificate(settings);
setupCredentials(settings);
}
void
Cluster::setupConnection(Settings const& settings)
{
std::visit(
overloadSet{
[this](Settings::ContactPoints const& points) { setupContactPoints(points); },
[this](Settings::SecureConnectionBundle const& bundle) { setupSecureBundle(bundle); }},
settings.connectionInfo);
}
void
Cluster::setupContactPoints(Settings::ContactPoints const& points)
{
using std::to_string;
auto throwErrorIfNeeded = [](CassError rc, std::string const& label, std::string const& value) {
if (rc != CASS_OK)
throw std::runtime_error("Cassandra: Error setting " + label + " [" + value + "]: " + cass_error_desc(rc));
};
{
log_.debug() << "Attempt connection using contact points: " << points.contactPoints;
auto const rc = cass_cluster_set_contact_points(*this, points.contactPoints.data());
throwErrorIfNeeded(rc, "contact_points", points.contactPoints);
}
if (points.port)
{
auto const rc = cass_cluster_set_port(*this, points.port.value());
throwErrorIfNeeded(rc, "port", to_string(points.port.value()));
}
}
void
Cluster::setupSecureBundle(Settings::SecureConnectionBundle const& bundle)
{
log_.debug() << "Attempt connection using secure bundle";
if (auto const rc = cass_cluster_set_cloud_secure_connection_bundle(*this, bundle.bundle.data()); rc != CASS_OK)
{
throw std::runtime_error("Failed to connect using secure connection bundle: " + bundle.bundle);
}
}
void
Cluster::setupCertificate(Settings const& settings)
{
if (not settings.certificate)
return;
log_.debug() << "Configure SSL context";
SslContext context = SslContext(*settings.certificate);
cass_cluster_set_ssl(*this, context);
}
void
Cluster::setupCredentials(Settings const& settings)
{
if (not settings.username || not settings.password)
return;
log_.debug() << "Set credentials; username: " << settings.username.value();
cass_cluster_set_credentials(*this, settings.username.value().c_str(), settings.password.value().c_str());
}
} // namespace Backend::Cassandra::detail


@@ -0,0 +1,99 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/impl/ManagedObject.h>
#include <log/Logger.h>
#include <cassandra.h>
#include <chrono>
#include <optional>
#include <string>
#include <string_view>
#include <thread>
#include <variant>
namespace Backend::Cassandra::detail {
struct Settings
{
struct ContactPoints
{
std::string contactPoints = "127.0.0.1"; // defaults to localhost
std::optional<uint16_t> port;
};
struct SecureConnectionBundle
{
std::string bundle; // no meaningful default
};
bool enableLog = false;
std::chrono::milliseconds connectionTimeout = std::chrono::milliseconds{10000};
std::chrono::milliseconds requestTimeout = std::chrono::milliseconds{0}; // no timeout at all
std::variant<ContactPoints, SecureConnectionBundle> connectionInfo = ContactPoints{};
uint32_t threads = std::thread::hardware_concurrency();
uint32_t maxWriteRequestsOutstanding = 10'000;
uint32_t maxReadRequestsOutstanding = 100'000;
std::optional<std::string> certificate; // ssl context
std::optional<std::string> username;
std::optional<std::string> password;
Settings
withContactPoints(std::string_view contactPoints)
{
auto tmp = *this;
tmp.connectionInfo = ContactPoints{std::string{contactPoints}};
return tmp;
}
static Settings
defaultSettings()
{
return Settings();
}
};
class Cluster : public ManagedObject<CassCluster>
{
clio::Logger log_{"Backend"};
public:
Cluster(Settings const& settings);
private:
void
setupConnection(Settings const& settings);
void
setupContactPoints(Settings::ContactPoints const& points);
void
setupSecureBundle(Settings::SecureConnectionBundle const& bundle);
void
setupCertificate(Settings const& settings);
void
setupCredentials(Settings const& settings);
};
} // namespace Backend::Cassandra::detail


@@ -0,0 +1,443 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/Handle.h>
#include <backend/cassandra/Types.h>
#include <backend/cassandra/impl/AsyncExecutor.h>
#include <log/Logger.h>
#include <util/Expected.h>
#include <boost/asio/async_result.hpp>
#include <boost/asio/spawn.hpp>
#include <atomic>
#include <condition_variable>
#include <functional>
#include <memory>
#include <mutex>
#include <optional>
#include <thread>
namespace Backend::Cassandra::detail {
/**
* @brief Implements async and sync querying against the cassandra DB with
* support for throttling.
*
* Note: A lot of the code that uses yield is repeated below. This is ok for now
* because we are hopefully going to be getting rid of it entirely later on.
*/
template <typename HandleType = Handle>
class DefaultExecutionStrategy
{
clio::Logger log_{"Backend"};
std::uint32_t maxWriteRequestsOutstanding_;
std::atomic_uint32_t numWriteRequestsOutstanding_ = 0;
std::uint32_t maxReadRequestsOutstanding_;
std::atomic_uint32_t numReadRequestsOutstanding_ = 0;
std::mutex throttleMutex_;
std::condition_variable throttleCv_;
std::mutex syncMutex_;
std::condition_variable syncCv_;
boost::asio::io_context ioc_;
std::optional<boost::asio::io_service::work> work_;
std::reference_wrapper<HandleType const> handle_;
std::thread thread_;
public:
using ResultOrErrorType = typename HandleType::ResultOrErrorType;
using StatementType = typename HandleType::StatementType;
using PreparedStatementType = typename HandleType::PreparedStatementType;
using FutureType = typename HandleType::FutureType;
using FutureWithCallbackType = typename HandleType::FutureWithCallbackType;
using ResultType = typename HandleType::ResultType;
using CompletionTokenType = boost::asio::yield_context;
using FunctionType = void(boost::system::error_code);
using AsyncResultType = boost::asio::async_result<CompletionTokenType, FunctionType>;
using HandlerType = typename AsyncResultType::completion_handler_type;
DefaultExecutionStrategy(Settings settings, HandleType const& handle)
: maxWriteRequestsOutstanding_{settings.maxWriteRequestsOutstanding}
, maxReadRequestsOutstanding_{settings.maxReadRequestsOutstanding}
, work_{ioc_}
, handle_{std::cref(handle)}
, thread_{[this]() { ioc_.run(); }}
{
log_.info() << "Max write requests outstanding is " << maxWriteRequestsOutstanding_
<< "; Max read requests outstanding is " << maxReadRequestsOutstanding_;
}
~DefaultExecutionStrategy()
{
work_.reset();
ioc_.stop();
thread_.join();
}
/**
* @brief Wait for all async writes to finish before unblocking
*/
void
sync()
{
log_.debug() << "Waiting to sync all writes...";
std::unique_lock<std::mutex> lck(syncMutex_);
syncCv_.wait(lck, [this]() { return finishedAllWriteRequests(); });
log_.debug() << "Sync done.";
}
bool
isTooBusy() const
{
return numReadRequestsOutstanding_ >= maxReadRequestsOutstanding_;
}
/**
* @brief Blocking query execution used for writing data
*
 * Retries forever, sleeping for 5 milliseconds between attempts.
*/
ResultOrErrorType
writeSync(StatementType const& statement)
{
while (true)
{
if (auto res = handle_.get().execute(statement); res)
{
return res;
}
else
{
log_.warn() << "Cassandra sync write error, retrying: " << res.error();
std::this_thread::sleep_for(std::chrono::milliseconds(5));
}
}
}
/**
* @brief Blocking query execution used for writing data
*
 * Retries forever, sleeping for 5 milliseconds between attempts.
*/
template <typename... Args>
ResultOrErrorType
writeSync(PreparedStatementType const& preparedStatement, Args&&... args)
{
return writeSync(preparedStatement.bind(std::forward<Args>(args)...));
}
/**
* @brief Non-blocking query execution used for writing data
*
* Retries forever with retry policy specified by @ref AsyncExecutor
*
 * @param preparedStatement Statement to prepare and execute
* @param args Args to bind to the prepared statement
* @throw DatabaseTimeout on timeout
*/
template <typename... Args>
void
write(PreparedStatementType const& preparedStatement, Args&&... args)
{
auto statement = preparedStatement.bind(std::forward<Args>(args)...);
incrementOutstandingRequestCount();
// Note: lifetime is controlled by std::shared_from_this internally
AsyncExecutor<std::decay_t<decltype(statement)>, HandleType>::run(
ioc_, handle_, std::move(statement), [this](auto const&) { decrementOutstandingRequestCount(); });
}
/**
* @brief Non-blocking batched query execution used for writing data
*
* Retries forever with retry policy specified by @ref AsyncExecutor.
*
* @param statements Vector of statements to execute as a batch
* @throw DatabaseTimeout on timeout
*/
void
write(std::vector<StatementType>&& statements)
{
if (statements.empty())
return;
incrementOutstandingRequestCount();
// Note: lifetime is controlled by std::shared_from_this internally
AsyncExecutor<std::decay_t<decltype(statements)>, HandleType>::run(
ioc_, handle_, std::move(statements), [this](auto const&) { decrementOutstandingRequestCount(); });
}
/**
* @brief Coroutine-based query execution used for reading data.
*
* Retries forever until successful or throws an exception on timeout.
*
* @param token Completion token (yield_context)
 * @param preparedStatement Statement to prepare and execute
* @param args Args to bind to the prepared statement
* @throw DatabaseTimeout on timeout
* @return ResultType or error wrapped in Expected
*/
template <typename... Args>
[[maybe_unused]] ResultOrErrorType
read(CompletionTokenType token, PreparedStatementType const& preparedStatement, Args&&... args)
{
return read(token, preparedStatement.bind(std::forward<Args>(args)...));
}
/**
* @brief Coroutine-based query execution used for reading data.
*
* Retries forever until successful or throws an exception on timeout.
*
* @param token Completion token (yield_context)
* @param statements Statements to execute in a batch
* @throw DatabaseTimeout on timeout
* @return ResultType or error wrapped in Expected
*/
[[maybe_unused]] ResultOrErrorType
read(CompletionTokenType token, std::vector<StatementType> const& statements)
{
auto handler = HandlerType{token};
auto result = AsyncResultType{handler};
auto const numStatements = statements.size();
// todo: perhaps use policy instead
while (true)
{
numReadRequestsOutstanding_ += numStatements;
auto const future = handle_.get().asyncExecute(statements, [handler](auto&&) mutable {
boost::asio::post(boost::asio::get_associated_executor(handler), [handler]() mutable {
handler(boost::system::error_code{});
});
});
// suspend coroutine until completion handler is called
result.get();
numReadRequestsOutstanding_ -= numStatements;
// it's safe to call blocking get on future here as we already
// waited for the coroutine to resume above.
if (auto res = future.get(); res)
{
return res;
}
else
{
log_.error() << "Failed batch read in coroutine: " << res.error();
throwErrorIfNeeded(res.error());
}
}
}
/**
* @brief Coroutine-based query execution used for reading data.
*
* Retries forever until successful or throws an exception on timeout.
*
* @param token Completion token (yield_context)
* @param statement Statement to execute
* @throw DatabaseTimeout on timeout
* @return ResultType or error wrapped in Expected
*/
[[maybe_unused]] ResultOrErrorType
read(CompletionTokenType token, StatementType const& statement)
{
auto handler = HandlerType{token};
auto result = AsyncResultType{handler};
// todo: perhaps use policy instead
while (true)
{
++numReadRequestsOutstanding_;
auto const future = handle_.get().asyncExecute(statement, [handler](auto const&) mutable {
boost::asio::post(boost::asio::get_associated_executor(handler), [handler]() mutable {
handler(boost::system::error_code{});
});
});
// suspend coroutine until completion handler is called
result.get();
--numReadRequestsOutstanding_;
// it's safe to call blocking get on future here as we already
// waited for the coroutine to resume above.
if (auto res = future.get(); res)
{
return res;
}
else
{
log_.error() << "Failed read in coroutine: " << res.error();
throwErrorIfNeeded(res.error());
}
}
}
/**
* @brief Coroutine-based query execution used for reading data.
*
 * Attempts to execute each statement. On any error, the whole vector of
 * results is discarded and an exception is thrown.
*
* @param token Completion token (yield_context)
* @param statements Statements to execute
* @throw DatabaseTimeout on db error
* @return Vector of results
*/
std::vector<ResultType>
readEach(CompletionTokenType token, std::vector<StatementType> const& statements)
{
auto handler = HandlerType{token};
auto result = AsyncResultType{handler};
std::atomic_bool hadError = false;
std::atomic_int numOutstanding = static_cast<int>(statements.size());
numReadRequestsOutstanding_ += statements.size();
auto futures = std::vector<FutureWithCallbackType>{};
futures.reserve(numOutstanding);
// used as the handler for each async statement individually
auto executionHandler = [handler, &hadError, &numOutstanding](auto const& res) mutable {
if (not res)
hadError = true;
// when all async operations complete unblock the result
if (--numOutstanding == 0)
boost::asio::post(boost::asio::get_associated_executor(handler), [handler]() mutable {
handler(boost::system::error_code{});
});
};
std::transform(
std::cbegin(statements),
std::cend(statements),
std::back_inserter(futures),
[this, &executionHandler](auto const& statement) {
return handle_.get().asyncExecute(statement, executionHandler);
});
// suspend coroutine until completion handler is called
result.get();
numReadRequestsOutstanding_ -= statements.size();
if (hadError)
throw DatabaseTimeout{};
std::vector<ResultType> results;
results.reserve(futures.size());
// it's safe to call blocking get on futures here as we already
// waited for the coroutine to resume above.
std::transform(
std::make_move_iterator(std::begin(futures)),
std::make_move_iterator(std::end(futures)),
std::back_inserter(results),
[](auto&& future) {
auto entry = future.get();
auto&& res = entry.value();
return std::move(res);
});
assert(futures.size() == statements.size());
assert(results.size() == statements.size());
return results;
}
private:
void
incrementOutstandingRequestCount()
{
{
std::unique_lock<std::mutex> lck(throttleMutex_);
if (!canAddWriteRequest())
{
log_.trace() << "Max outstanding requests reached. "
<< "Waiting for other requests to finish";
throttleCv_.wait(lck, [this]() { return canAddWriteRequest(); });
}
}
++numWriteRequestsOutstanding_;
}
void
decrementOutstandingRequestCount()
{
// sanity check
if (numWriteRequestsOutstanding_ == 0)
{
assert(false);
throw std::runtime_error("decrementing num outstanding below 0");
}
size_t cur = (--numWriteRequestsOutstanding_);
{
// mutex lock required to prevent race condition around spurious
// wakeup
std::lock_guard lck(throttleMutex_);
throttleCv_.notify_one();
}
if (cur == 0)
{
// mutex lock required to prevent race condition around spurious
// wakeup
std::lock_guard lck(syncMutex_);
syncCv_.notify_one();
}
}
bool
canAddWriteRequest() const
{
return numWriteRequestsOutstanding_ < maxWriteRequestsOutstanding_;
}
bool
finishedAllWriteRequests() const
{
return numWriteRequestsOutstanding_ == 0;
}
void
throwErrorIfNeeded(CassandraError err) const
{
if (err.isTimeout())
throw DatabaseTimeout();
if (err.isInvalidQuery())
throw std::runtime_error("Invalid query");
}
};
} // namespace Backend::Cassandra::detail
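
The throttling pair `incrementOutstandingRequestCount`/`decrementOutstandingRequestCount` above can be distilled into a standalone sketch. This is illustrative only (the `WriteThrottle` name and fixed limit are not part of clio): writers block once the limit is reached and are woken as completions land, with the mutex held around the wait/notify to avoid lost wakeups.

```cpp
#include <atomic>
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Minimal sketch of the write-throttle pattern used above.
class WriteThrottle
{
    std::mutex mtx_;
    std::condition_variable cv_;
    std::atomic_uint64_t outstanding_{0};
    uint64_t const maxOutstanding_;

public:
    explicit WriteThrottle(uint64_t max) : maxOutstanding_{max}
    {
    }

    // Blocks until fewer than maxOutstanding_ requests are in flight
    void
    acquire()
    {
        std::unique_lock lck{mtx_};
        cv_.wait(lck, [this] { return outstanding_ < maxOutstanding_; });
        ++outstanding_;
    }

    // Called from the completion handler of each finished write
    void
    release()
    {
        {
            // lock prevents the race around spurious wakeup, as in the original
            std::lock_guard lck{mtx_};
            --outstanding_;
        }
        cv_.notify_one();
    }

    uint64_t
    inFlight() const
    {
        return outstanding_;
    }
};
```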


@@ -0,0 +1,102 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <backend/cassandra/Error.h>
#include <backend/cassandra/impl/Future.h>
#include <backend/cassandra/impl/Result.h>
#include <exception>
#include <vector>
namespace {
static constexpr auto futureDeleter = [](CassFuture* ptr) { cass_future_free(ptr); };
} // namespace
namespace Backend::Cassandra::detail {
/* implicit */ Future::Future(CassFuture* ptr) : ManagedObject{ptr, futureDeleter}
{
}
MaybeError
Future::await() const
{
if (auto const rc = cass_future_error_code(*this); rc)
{
auto errMsg = [this](std::string const& label) {
char const* message;
std::size_t len;
cass_future_error_message(*this, &message, &len);
return label + ": " + std::string{message, len};
}(cass_error_desc(rc));
return Error{CassandraError{errMsg, rc}};
}
return {};
}
ResultOrError
Future::get() const
{
if (auto const rc = cass_future_error_code(*this); rc)
{
auto const errMsg = [this](std::string const& label) {
char const* message;
std::size_t len;
cass_future_error_message(*this, &message, &len);
return label + ": " + std::string{message, len};
}("future::get()");
return Error{CassandraError{errMsg, rc}};
}
else
{
return Result{cass_future_get_result(*this)};
}
}
void
invokeHelper(CassFuture* ptr, void* cbPtr)
{
// Note: can't use Future{ptr}.get() because double free will occur :/
auto* cb = static_cast<FutureWithCallback::fn_t*>(cbPtr);
if (auto const rc = cass_future_error_code(ptr); rc)
{
auto const errMsg = [&ptr](std::string const& label) {
char const* message;
std::size_t len;
cass_future_error_message(ptr, &message, &len);
return label + ": " + std::string{message, len};
}("invokeHelper");
(*cb)(Error{CassandraError{errMsg, rc}});
}
else
{
(*cb)(Result{cass_future_get_result(ptr)});
}
}
/* implicit */ FutureWithCallback::FutureWithCallback(CassFuture* ptr, fn_t&& cb)
: Future{ptr}, cb_{std::make_unique<fn_t>(std::move(cb))}
{
// Instead of passing `this` as the userdata void*, we pass the address of
// the callback itself which will survive std::move of the
// FutureWithCallback parent. Not ideal but I have no better solution atm.
cass_future_set_callback(*this, &invokeHelper, cb_.get());
}
} // namespace Backend::Cassandra::detail
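
The trick in `FutureWithCallback` above — handing the C driver the address of the callback rather than `this` — works because the callback lives behind a `unique_ptr`, so its address survives a move of the wrapper. A minimal sketch of just that property (the `CallbackHolder` name is illustrative):

```cpp
#include <functional>
#include <memory>
#include <string>

// Sketch: the stored callback's address stays stable across moves of the
// owning object, so a C API can safely keep the raw pointer as userdata.
struct CallbackHolder
{
    using fn_t = std::function<void(std::string)>;

    explicit CallbackHolder(fn_t cb) : cb_{std::make_unique<fn_t>(std::move(cb))}
    {
    }
    CallbackHolder(CallbackHolder&&) = default;

    // This is the pointer that would be passed to the C driver as `void*`
    void*
    stableAddress() const
    {
        return cb_.get();
    }

private:
    std::unique_ptr<fn_t> cb_;
};
```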


@@ -0,0 +1,58 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/Types.h>
#include <backend/cassandra/impl/ManagedObject.h>
#include <cassandra.h>
namespace Backend::Cassandra::detail {
struct Future : public ManagedObject<CassFuture>
{
/* implicit */ Future(CassFuture* ptr);
MaybeError
await() const;
ResultOrError
get() const;
};
void
invokeHelper(CassFuture* ptr, void* self);
class FutureWithCallback : public Future
{
public:
using fn_t = std::function<void(ResultOrError)>;
using fn_ptr_t = std::unique_ptr<fn_t>;
/* implicit */ FutureWithCallback(CassFuture* ptr, fn_t&& cb);
FutureWithCallback(FutureWithCallback const&) = delete;
FutureWithCallback(FutureWithCallback&&) = default;
private:
/*! Wrapped in a unique_ptr so it can survive std::move :/ */
fn_ptr_t cb_;
};
} // namespace Backend::Cassandra::detail


@@ -0,0 +1,47 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <memory>
#include <stdexcept>
namespace Backend::Cassandra::detail {
template <typename Managed>
class ManagedObject
{
protected:
std::unique_ptr<Managed, void (*)(Managed*)> ptr_;
public:
template <typename deleterCallable>
ManagedObject(Managed* rawPtr, deleterCallable deleter) : ptr_{rawPtr, deleter}
{
if (rawPtr == nullptr)
throw std::runtime_error("Could not create DB object - got nullptr");
}
ManagedObject(ManagedObject&&) = default;
operator Managed* const() const
{
return ptr_.get();
}
};
} // namespace Backend::Cassandra::detail
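
The RAII idiom behind `ManagedObject` is a `unique_ptr` whose deleter is a plain function pointer, so any C-style `*_free` can be plugged in at construction. A self-contained sketch with a hypothetical C handle standing in for a cassandra driver object:

```cpp
#include <memory>

// Hypothetical C-style handle and its create/free pair (stand-ins for the
// cassandra driver's cass_*_new/cass_*_free functions).
struct CHandle
{
    int value;
};

inline CHandle*
c_handle_new(int v)
{
    return new CHandle{v};
}

inline void
c_handle_free(CHandle* p)
{
    delete p;
}

// Same shape as ManagedObject's ptr_ member: ownership plus a pluggable
// function-pointer deleter, no custom destructor needed.
using ManagedCHandle = std::unique_ptr<CHandle, void (*)(CHandle*)>;

inline ManagedCHandle
makeHandle(int v)
{
    return {c_handle_new(v), &c_handle_free};
}
```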


@@ -0,0 +1,69 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <backend/cassandra/impl/Result.h>
namespace {
static constexpr auto resultDeleter = [](CassResult const* ptr) { cass_result_free(ptr); };
static constexpr auto resultIteratorDeleter = [](CassIterator* ptr) { cass_iterator_free(ptr); };
} // namespace
namespace Backend::Cassandra::detail {
/* implicit */ Result::Result(CassResult const* ptr) : ManagedObject{ptr, resultDeleter}
{
}
[[nodiscard]] std::size_t
Result::numRows() const
{
return cass_result_row_count(*this);
}
[[nodiscard]] bool
Result::hasRows() const
{
return numRows() > 0;
}
/* implicit */ ResultIterator::ResultIterator(CassIterator* ptr)
: ManagedObject{ptr, resultIteratorDeleter}, hasMore_{cass_iterator_next(ptr)}
{
}
[[nodiscard]] ResultIterator
ResultIterator::fromResult(Result const& result)
{
return {cass_iterator_from_result(result)};
}
[[maybe_unused]] bool
ResultIterator::moveForward()
{
hasMore_ = cass_iterator_next(*this);
return hasMore_;
}
[[nodiscard]] bool
ResultIterator::hasMore() const
{
return hasMore_;
}
} // namespace Backend::Cassandra::detail


@@ -0,0 +1,257 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/impl/ManagedObject.h>
#include <backend/cassandra/impl/Tuple.h>
#include <util/Expected.h>
#include <ripple/basics/base_uint.h>
#include <ripple/protocol/AccountID.h>
#include <cassandra.h>
#include <compare>
#include <iterator>
#include <tuple>
namespace Backend::Cassandra::detail {
template <typename>
static constexpr bool unsupported_v = false;
template <typename Type>
inline Type
extractColumn(CassRow const* row, std::size_t idx)
{
using std::to_string;
Type output;
auto throwErrorIfNeeded = [](CassError rc, std::string_view label) {
if (rc != CASS_OK)
{
auto const tag = '[' + std::string{label} + ']';
throw std::logic_error(tag + ": " + cass_error_desc(rc));
}
};
using decayed_t = std::decay_t<Type>;
using uint_tuple_t = std::tuple<uint32_t, uint32_t>;
using uchar_vector_t = std::vector<unsigned char>;
if constexpr (std::is_same_v<decayed_t, ripple::uint256>)
{
cass_byte_t const* buf;
std::size_t bufSize;
auto const rc = cass_value_get_bytes(cass_row_get_column(row, idx), &buf, &bufSize);
throwErrorIfNeeded(rc, "Extract ripple::uint256");
output = ripple::uint256::fromVoid(buf);
}
else if constexpr (std::is_same_v<decayed_t, ripple::AccountID>)
{
cass_byte_t const* buf;
std::size_t bufSize;
auto const rc = cass_value_get_bytes(cass_row_get_column(row, idx), &buf, &bufSize);
throwErrorIfNeeded(rc, "Extract ripple::AccountID");
output = ripple::AccountID::fromVoid(buf);
}
else if constexpr (std::is_same_v<decayed_t, uchar_vector_t>)
{
cass_byte_t const* buf;
std::size_t bufSize;
auto const rc = cass_value_get_bytes(cass_row_get_column(row, idx), &buf, &bufSize);
throwErrorIfNeeded(rc, "Extract vector<unsigned char>");
output = uchar_vector_t{buf, buf + bufSize};
}
else if constexpr (std::is_same_v<decayed_t, uint_tuple_t>)
{
auto const* tuple = cass_row_get_column(row, idx);
output = TupleIterator::fromTuple(tuple).extract<uint32_t, uint32_t>();
}
else if constexpr (std::is_convertible_v<decayed_t, std::string>)
{
char const* value;
std::size_t len;
auto const rc = cass_value_get_string(cass_row_get_column(row, idx), &value, &len);
throwErrorIfNeeded(rc, "Extract string");
output = std::string{value, len};
}
else if constexpr (std::is_same_v<decayed_t, bool>)
{
cass_bool_t flag;
auto const rc = cass_value_get_bool(cass_row_get_column(row, idx), &flag);
throwErrorIfNeeded(rc, "Extract bool");
output = flag ? true : false;
}
// clio only uses bigint (int64_t) so we convert any incoming type
else if constexpr (std::is_convertible_v<decayed_t, int64_t>)
{
int64_t out;
auto const rc = cass_value_get_int64(cass_row_get_column(row, idx), &out);
throwErrorIfNeeded(rc, "Extract int64");
output = static_cast<decayed_t>(out);
}
else
{
// type not supported for extraction
static_assert(unsupported_v<decayed_t>);
}
return output;
}
struct Result : public ManagedObject<CassResult const>
{
/* implicit */ Result(CassResult const* ptr);
[[nodiscard]] std::size_t
numRows() const;
[[nodiscard]] bool
hasRows() const;
template <typename... RowTypes>
std::optional<std::tuple<RowTypes...>>
get() const requires(std::tuple_size<std::tuple<RowTypes...>>{} > 1)
{
// row managed internally by cassandra driver, hence no ManagedObject.
auto const* row = cass_result_first_row(*this);
if (row == nullptr)
return std::nullopt;
std::size_t idx = 0;
auto advanceId = [&idx]() { return idx++; };
return std::make_optional<std::tuple<RowTypes...>>({extractColumn<RowTypes>(row, advanceId())...});
}
template <typename RowType>
std::optional<RowType>
get() const
{
// row managed internally by cassandra driver, hence no ManagedObject.
auto const* row = cass_result_first_row(*this);
if (row == nullptr)
return std::nullopt;
return std::make_optional<RowType>(extractColumn<RowType>(row, 0));
}
};
class ResultIterator : public ManagedObject<CassIterator>
{
bool hasMore_ = false;
public:
/* implicit */ ResultIterator(CassIterator* ptr);
[[nodiscard]] static ResultIterator
fromResult(Result const& result);
[[maybe_unused]] bool
moveForward();
[[nodiscard]] bool
hasMore() const;
template <typename... RowTypes>
std::tuple<RowTypes...>
extractCurrentRow() const
{
// note: row is invalidated on each iteration.
// managed internally by cassandra driver, hence no ManagedObject.
auto const* row = cass_iterator_get_row(*this);
std::size_t idx = 0;
auto advanceId = [&idx]() { return idx++; };
return {extractColumn<RowTypes>(row, advanceId())...};
}
};
template <typename... Types>
class ResultExtractor
{
std::reference_wrapper<Result const> ref_;
public:
struct Sentinel
{
};
struct Iterator
{
using iterator_category = std::input_iterator_tag;
using difference_type = std::ptrdiff_t;  // row count; signed, per iterator requirements
using value_type = std::tuple<Types...>;
/* implicit */ Iterator(ResultIterator iterator) : iterator_{std::move(iterator)}
{
}
Iterator(Iterator const&) = delete;
Iterator&
operator=(Iterator const&) = delete;
value_type
operator*() const
{
return iterator_.extractCurrentRow<Types...>();
}
value_type
operator->()
{
return iterator_.extractCurrentRow<Types...>();
}
Iterator&
operator++()
{
iterator_.moveForward();
return *this;
}
bool
operator==(Sentinel const&) const
{
return not iterator_.hasMore();
}
private:
ResultIterator iterator_;
};
ResultExtractor(Result const& result) : ref_{std::cref(result)}
{
}
Iterator
begin()
{
return ResultIterator::fromResult(ref_);
}
Sentinel
end()
{
return {};
}
};
} // namespace Backend::Cassandra::detail


@@ -0,0 +1,94 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/Handle.h>
#include <backend/cassandra/Types.h>
#include <log/Logger.h>
#include <util/Expected.h>
#include <boost/asio.hpp>
#include <algorithm>
#include <chrono>
#include <cmath>
namespace Backend::Cassandra::detail {
/**
* @brief A retry policy that employs exponential backoff
*/
class ExponentialBackoffRetryPolicy
{
clio::Logger log_{"Backend"};
boost::asio::steady_timer timer_;
uint32_t attempt_ = 0u;
public:
/**
* @brief Create a new retry policy instance with the io_context provided
*/
ExponentialBackoffRetryPolicy(boost::asio::io_context& ioc) : timer_{ioc}
{
}
/**
* @brief Computes next retry delay and returns true unconditionally
*
* @param err The cassandra error that triggered the retry
*/
[[nodiscard]] bool
shouldRetry([[maybe_unused]] CassandraError err)
{
auto const delay = calculateDelay(attempt_);
log_.error() << "Cassandra write error: " << err << ", current retries " << attempt_ << ", retrying in "
<< delay.count() << " milliseconds";
return true; // keep retrying forever
}
/**
* @brief Schedules next retry
*
* @param fn The callable to execute
*/
template <typename Fn>
void
retry(Fn&& fn)
{
timer_.expires_after(calculateDelay(attempt_++));
timer_.async_wait([fn = std::forward<Fn>(fn)]([[maybe_unused]] const auto& err) {
// todo: deal with cancellation (thru err)
fn();
});
}
/**
* @brief Calculates the wait time before attempting another retry
*/
std::chrono::milliseconds
calculateDelay(uint32_t attempt)
{
return std::chrono::milliseconds{std::lround(std::pow(2, std::min(10u, attempt)))};
}
};
} // namespace Backend::Cassandra::detail
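
The delay formula above doubles on each failed attempt and saturates once the exponent reaches 10, i.e. it is capped at 1024 milliseconds. A standalone version of just the arithmetic (free function name is illustrative):

```cpp
#include <algorithm>
#include <chrono>
#include <cmath>
#include <cstdint>

// Exponential backoff: 2^attempt milliseconds, capped at 2^10 = 1024ms
// after ten failed attempts, matching calculateDelay above.
inline std::chrono::milliseconds
backoffDelay(uint32_t attempt)
{
    return std::chrono::milliseconds{std::lround(std::pow(2, std::min(10u, attempt)))};
}
```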


@@ -0,0 +1,38 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/impl/ManagedObject.h>
#include <cassandra.h>
namespace Backend::Cassandra::detail {
class Session : public ManagedObject<CassSession>
{
static constexpr auto deleter = [](CassSession* ptr) { cass_session_free(ptr); };
public:
Session() : ManagedObject{cass_session_new(), deleter}
{
}
};
} // namespace Backend::Cassandra::detail


@@ -0,0 +1,37 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <backend/cassandra/impl/SslContext.h>
namespace {
static constexpr auto contextDeleter = [](CassSsl* ptr) { cass_ssl_free(ptr); };
} // namespace
namespace Backend::Cassandra::detail {
SslContext::SslContext(std::string const& certificate) : ManagedObject{cass_ssl_new(), contextDeleter}
{
cass_ssl_set_verify_flags(*this, CASS_SSL_VERIFY_NONE);
if (auto const rc = cass_ssl_add_trusted_cert(*this, certificate.c_str()); rc != CASS_OK)
{
throw std::runtime_error(std::string{"Error setting Cassandra SSL Context: "} + cass_error_desc(rc));
}
}
} // namespace Backend::Cassandra::detail


@@ -0,0 +1,35 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/impl/ManagedObject.h>
#include <cassandra.h>
#include <string>
namespace Backend::Cassandra::detail {
struct SslContext : public ManagedObject<CassSsl>
{
explicit SslContext(std::string const& certificate);
};
} // namespace Backend::Cassandra::detail


@@ -0,0 +1,164 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/Types.h>
#include <backend/cassandra/impl/ManagedObject.h>
#include <backend/cassandra/impl/Tuple.h>
#include <util/Expected.h>
#include <ripple/basics/base_uint.h>
#include <ripple/protocol/STAccount.h>
#include <cassandra.h>
#include <fmt/core.h>
#include <chrono>
#include <compare>
#include <iterator>
namespace Backend::Cassandra::detail {
class Statement : public ManagedObject<CassStatement>
{
static constexpr auto deleter = [](CassStatement* ptr) { cass_statement_free(ptr); };
template <typename>
static constexpr bool unsupported_v = false;
public:
/**
* @brief Construct a new statement with optionally provided arguments
*
* Note: it's up to the user to make sure the bound parameters match
* the format of the query (e.g. amount of '?' matches count of args).
*/
template <typename... Args>
explicit Statement(std::string_view query, Args&&... args)
: ManagedObject{cass_statement_new(query.data(), sizeof...(args)), deleter}
{
cass_statement_set_consistency(*this, CASS_CONSISTENCY_QUORUM);
cass_statement_set_is_idempotent(*this, cass_true);
bind<Args...>(std::forward<Args>(args)...);
}
/* implicit */ Statement(CassStatement* ptr) : ManagedObject{ptr, deleter}
{
cass_statement_set_consistency(*this, CASS_CONSISTENCY_QUORUM);
cass_statement_set_is_idempotent(*this, cass_true);
}
Statement(Statement&&) = default;
template <typename... Args>
void
bind(Args&&... args) const
{
std::size_t idx = 0;
(this->bindAt<Args>(idx++, std::forward<Args>(args)), ...);
}
template <typename Type>
void
bindAt(std::size_t const idx, Type&& value) const
{
using std::to_string;
auto throwErrorIfNeeded = [idx](CassError rc, std::string_view label) {
if (rc != CASS_OK)
throw std::logic_error(fmt::format("[{}] at idx {}: {}", label, idx, cass_error_desc(rc)));
};
auto bindBytes = [this, idx](auto const* data, size_t size) {
return cass_statement_bind_bytes(*this, idx, static_cast<cass_byte_t const*>(data), size);
};
using decayed_t = std::decay_t<Type>;
using uchar_vec_t = std::vector<unsigned char>;
using uint_tuple_t = std::tuple<uint32_t, uint32_t>;
if constexpr (std::is_same_v<decayed_t, ripple::uint256>)
{
auto const rc = bindBytes(value.data(), value.size());
throwErrorIfNeeded(rc, "Bind ripple::uint256");
}
else if constexpr (std::is_same_v<decayed_t, ripple::AccountID>)
{
auto const rc = bindBytes(value.data(), value.size());
throwErrorIfNeeded(rc, "Bind ripple::AccountID");
}
else if constexpr (std::is_same_v<decayed_t, uchar_vec_t>)
{
auto const rc = bindBytes(value.data(), value.size());
throwErrorIfNeeded(rc, "Bind vector<unsigned char>");
}
else if constexpr (std::is_convertible_v<decayed_t, std::string>)
{
// reinterpret_cast is needed here :'(
auto const rc = bindBytes(reinterpret_cast<unsigned char const*>(value.data()), value.size());
throwErrorIfNeeded(rc, "Bind string (as bytes)");
}
else if constexpr (std::is_same_v<decayed_t, uint_tuple_t>)
{
auto const rc = cass_statement_bind_tuple(*this, idx, Tuple{std::move(value)});
throwErrorIfNeeded(rc, "Bind tuple<uint32, uint32>");
}
else if constexpr (std::is_same_v<decayed_t, bool>)
{
auto const rc = cass_statement_bind_bool(*this, idx, value ? cass_true : cass_false);
throwErrorIfNeeded(rc, "Bind bool");
}
else if constexpr (std::is_same_v<decayed_t, Limit>)
{
auto const rc = cass_statement_bind_int32(*this, idx, value.limit);
throwErrorIfNeeded(rc, "Bind limit (int32)");
}
// clio only uses bigint (int64_t) so we convert any incoming type
else if constexpr (std::is_convertible_v<decayed_t, int64_t>)
{
auto const rc = cass_statement_bind_int64(*this, idx, value);
throwErrorIfNeeded(rc, "Bind int64");
}
else
{
// type not supported for binding
static_assert(unsupported_v<decayed_t>);
}
}
};
class PreparedStatement : public ManagedObject<CassPrepared const>
{
static constexpr auto deleter = [](CassPrepared const* ptr) { cass_prepared_free(ptr); };
public:
/* implicit */ PreparedStatement(CassPrepared const* ptr) : ManagedObject{ptr, deleter}
{
}
template <typename... Args>
Statement
bind(Args&&... args) const
{
Statement statement = cass_prepared_bind(*this);
statement.bind<Args...>(std::forward<Args>(args)...);
return statement;
}
};
} // namespace Backend::Cassandra::detail



@@ -0,0 +1,43 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <backend/cassandra/impl/Tuple.h>
namespace {
static constexpr auto tupleDeleter = [](CassTuple* ptr) { cass_tuple_free(ptr); };
static constexpr auto tupleIteratorDeleter = [](CassIterator* ptr) { cass_iterator_free(ptr); };
} // namespace
namespace Backend::Cassandra::detail {
/* implicit */ Tuple::Tuple(CassTuple* ptr) : ManagedObject{ptr, tupleDeleter}
{
}
/* implicit */ TupleIterator::TupleIterator(CassIterator* ptr) : ManagedObject{ptr, tupleIteratorDeleter}
{
}
[[nodiscard]] TupleIterator
TupleIterator::fromTuple(CassValue const* value)
{
return {cass_iterator_from_tuple(value)};
}
} // namespace Backend::Cassandra::detail


@@ -0,0 +1,149 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/impl/ManagedObject.h>
#include <cassandra.h>
#include <functional>
#include <string>
#include <string_view>
#include <tuple>
namespace Backend::Cassandra::detail {
class Tuple : public ManagedObject<CassTuple>
{
static constexpr auto deleter = [](CassTuple* ptr) { cass_tuple_free(ptr); };
template <typename>
static constexpr bool unsupported_v = false;
public:
/* implicit */ Tuple(CassTuple* ptr);
template <typename... Types>
explicit Tuple(std::tuple<Types...>&& value)
: ManagedObject{cass_tuple_new(std::tuple_size<std::tuple<Types...>>{}), deleter}
{
std::apply(std::bind_front(&Tuple::bind<Types...>, this), std::move(value));
}
template <typename... Args>
void
bind(Args&&... args) const
{
std::size_t idx = 0;
(this->bindAt<Args>(idx++, std::forward<Args>(args)), ...);
}
template <typename Type>
void
bindAt(std::size_t const idx, Type&& value) const
{
using std::to_string;
auto throwErrorIfNeeded = [idx](CassError rc, std::string_view label) {
if (rc != CASS_OK)
{
auto const tag = '[' + std::string{label} + ']';
throw std::logic_error(tag + " at idx " + to_string(idx) + ": " + cass_error_desc(rc));
}
};
using decayed_t = std::decay_t<Type>;
if constexpr (std::is_same_v<decayed_t, bool>)
{
auto const rc = cass_tuple_set_bool(*this, idx, value ? cass_true : cass_false);
throwErrorIfNeeded(rc, "Bind bool");
}
// clio only uses bigint (int64_t) so we convert any incoming type
else if constexpr (std::is_convertible_v<decayed_t, int64_t>)
{
auto const rc = cass_tuple_set_int64(*this, idx, value);
throwErrorIfNeeded(rc, "Bind int64");
}
else
{
// type not supported for binding
static_assert(unsupported_v<decayed_t>);
}
}
};
class TupleIterator : public ManagedObject<CassIterator>
{
template <typename>
static constexpr bool unsupported_v = false;
public:
/* implicit */ TupleIterator(CassIterator* ptr);
[[nodiscard]] static TupleIterator
fromTuple(CassValue const* value);
template <typename... Types>
[[nodiscard]] std::tuple<Types...>
extract() const
{
return {extractNext<Types>()...};
}
private:
template <typename Type>
Type
extractNext() const
{
using std::to_string;
Type output;
if (not cass_iterator_next(*this))
throw std::logic_error("Could not extract next value from tuple iterator");
auto throwErrorIfNeeded = [](CassError rc, std::string_view label) {
if (rc != CASS_OK)
{
auto const tag = '[' + std::string{label} + ']';
throw std::logic_error(tag + ": " + cass_error_desc(rc));
}
};
using decayed_t = std::decay_t<Type>;
// clio only uses bigint (int64_t) so we convert any incoming type
if constexpr (std::is_convertible_v<decayed_t, int64_t>)
{
int64_t out;
auto const rc = cass_value_get_int64(cass_iterator_get_value(*this), &out);
throwErrorIfNeeded(rc, "Extract int64 from tuple");
output = static_cast<decayed_t>(out);
}
else
{
// type not supported for extraction
static_assert(unsupported_v<decayed_t>);
}
return output;
}
};
} // namespace Backend::Cassandra::detail


@@ -62,8 +62,7 @@ Config::lookup(key_type key) const
if (not hasBrokenPath)
{
if (not cur.get().is_object())
throw detail::StoreException(
"Not an object at '" + subkey + "'");
throw detail::StoreException("Not an object at '" + subkey + "'");
if (not cur.get().as_object().contains(section))
hasBrokenPath = true;
else
@@ -91,11 +90,9 @@ Config::maybeArray(key_type key) const
array_type out;
out.reserve(arr.size());
std::transform(
std::begin(arr),
std::end(arr),
std::back_inserter(out),
[](auto&& element) { return Config{std::move(element)}; });
std::transform(std::begin(arr), std::end(arr), std::back_inserter(out), [](auto&& element) {
return Config{std::move(element)};
});
return std::make_optional<array_type>(std::move(out));
}
}
@@ -156,10 +153,7 @@ Config::array() const
out.reserve(arr.size());
std::transform(
std::cbegin(arr),
std::cend(arr),
std::back_inserter(out),
[](auto const& element) { return Config{element}; });
std::cbegin(arr), std::cend(arr), std::back_inserter(out), [](auto const& element) { return Config{element}; });
return out;
}
@@ -180,8 +174,7 @@ ConfigReader::open(std::filesystem::path path)
}
catch (std::exception const& e)
{
LogService::error() << "Could not read configuration file from '"
<< path.string() << "': " << e.what();
LogService::error() << "Could not read configuration file from '" << path.string() << "': " << e.what();
}
return Config{};


@@ -43,9 +43,7 @@ class Config final
public:
using key_type = std::string; /*! The type of key used */
using array_type = std::vector<Config>; /*! The type of array used */
using write_cursor_type = std::pair<
std::optional<std::reference_wrapper<boost::json::value>>,
key_type>;
using write_cursor_type = std::pair<std::optional<std::reference_wrapper<boost::json::value>>, key_type>;
/**
* @brief Construct a new Config object.
@@ -101,8 +99,7 @@ public:
{
auto maybe_element = lookup(key);
if (maybe_element)
return std::make_optional<Result>(
checkedAs<Result>(key, *maybe_element));
return std::make_optional<Result>(checkedAs<Result>(key, *maybe_element));
return std::nullopt;
}
@@ -349,6 +346,8 @@ private:
[[nodiscard]] Return
checkedAs(key_type key, boost::json::value const& value) const
{
using boost::json::value_to;
auto has_error = false;
if constexpr (std::is_same_v<Return, bool>)
{
@@ -365,9 +364,7 @@ private:
if (not value.is_number())
has_error = true;
}
else if constexpr (
std::is_convertible_v<Return, uint64_t> ||
std::is_convertible_v<Return, int64_t>)
else if constexpr (std::is_convertible_v<Return, uint64_t> || std::is_convertible_v<Return, int64_t>)
{
if (not value.is_int64() && not value.is_uint64())
has_error = true;
@@ -375,11 +372,10 @@ private:
if (has_error)
throw std::runtime_error(
"Type for key '" + key + "' is '" +
std::string{to_string(value.kind())} +
"' in JSON but requested '" + detail::typeName<Return>() + "'");
"Type for key '" + key + "' is '" + std::string{to_string(value.kind())} + "' in JSON but requested '" +
detail::typeName<Return>() + "'");
return boost::json::value_to<Return>(value);
return value_to<Return>(value);
}
std::optional<boost::json::value>


@@ -77,14 +77,10 @@ public:
/// @return true if sequence was validated, false otherwise
/// a return value of false means the datastructure has been stopped
bool
waitUntilValidatedByNetwork(
uint32_t sequence,
std::optional<uint32_t> maxWaitMs = {})
waitUntilValidatedByNetwork(uint32_t sequence, std::optional<uint32_t> maxWaitMs = {})
{
std::unique_lock lck(m_);
auto pred = [sequence, this]() -> bool {
return (max_ && sequence <= *max_);
};
auto pred = [sequence, this]() -> bool { return (max_ && sequence <= *max_); };
if (maxWaitMs)
cv_.wait_for(lck, std::chrono::milliseconds(*maxWaitMs));
else


@@ -27,6 +27,7 @@
#include <backend/DBHelpers.h>
#include <etl/ETLSource.h>
#include <etl/NFTHelpers.h>
#include <etl/ProbingETLSource.h>
#include <etl/ReportingETL.h>
#include <log/Logger.h>
@@ -42,15 +43,12 @@ ForwardCache::freshen()
{
log_.trace() << "Freshening ForwardCache";
auto numOutstanding =
std::make_shared<std::atomic_uint>(latestForwarded_.size());
auto numOutstanding = std::make_shared<std::atomic_uint>(latestForwarded_.size());
for (auto const& cacheEntry : latestForwarded_)
{
boost::asio::spawn(
strand_,
[this, numOutstanding, command = cacheEntry.first](
boost::asio::yield_context yield) {
strand_, [this, numOutstanding, command = cacheEntry.first](boost::asio::yield_context yield) {
boost::json::object request = {{"command", command}};
auto resp = source_.requestFromRippled(request, {}, yield);
@@ -77,12 +75,9 @@ std::optional<boost::json::object>
ForwardCache::get(boost::json::object const& request) const
{
std::optional<std::string> command = {};
if (request.contains("command") && !request.contains("method") &&
request.at("command").is_string())
if (request.contains("command") && !request.contains("method") && request.at("command").is_string())
command = request.at("command").as_string().c_str();
else if (
request.contains("method") && !request.contains("command") &&
request.at("method").is_string())
else if (request.contains("method") && !request.contains("command") && request.at("method").is_string())
command = request.at("method").as_string().c_str();
if (!command)
@@ -115,8 +110,7 @@ make_TimeoutOption()
}
else
{
return boost::beast::websocket::stream_base::timeout::suggested(
boost::beast::role_type::client);
return boost::beast::websocket::stream_base::timeout::suggested(boost::beast::role_type::client);
}
}
@@ -137,8 +131,7 @@ ETLSourceImpl<Derived>::reconnect(boost::beast::error_code ec)
// if we cannot connect to the transaction processing process
if (ec.category() == boost::asio::error::get_ssl_category())
{
err = std::string(" (") +
boost::lexical_cast<std::string>(ERR_GET_LIB(ec.value())) + "," +
err = std::string(" (") + boost::lexical_cast<std::string>(ERR_GET_LIB(ec.value())) + "," +
boost::lexical_cast<std::string>(ERR_GET_REASON(ec.value())) + ") ";
// ERR_PACK /* crypto/err/err.h */
char buf[128];
@@ -148,8 +141,7 @@ ETLSourceImpl<Derived>::reconnect(boost::beast::error_code ec)
std::cout << err << std::endl;
}
if (ec != boost::asio::error::operation_aborted &&
ec != boost::asio::error::connection_refused)
if (ec != boost::asio::error::operation_aborted && ec != boost::asio::error::connection_refused)
{
log_.error() << "error code = " << ec << " - " << toString();
}
@@ -183,30 +175,25 @@ PlainETLSource::close(bool startAgain)
// an assertion fails. Using closing_ makes sure async_close is only
// called once
closing_ = true;
derived().ws().async_close(
boost::beast::websocket::close_code::normal,
[this, startAgain](auto ec) {
if (ec)
{
log_.error()
<< " async_close : "
<< "error code = " << ec << " - " << toString();
}
closing_ = false;
if (startAgain)
{
ws_ = std::make_unique<boost::beast::websocket::stream<
boost::beast::tcp_stream>>(
boost::asio::make_strand(ioc_));
derived().ws().async_close(boost::beast::websocket::close_code::normal, [this, startAgain](auto ec) {
if (ec)
{
log_.error() << " async_close : "
<< "error code = " << ec << " - " << toString();
}
closing_ = false;
if (startAgain)
{
ws_ = std::make_unique<boost::beast::websocket::stream<boost::beast::tcp_stream>>(
boost::asio::make_strand(ioc_));
run();
}
});
run();
}
});
}
else if (startAgain)
{
ws_ = std::make_unique<
boost::beast::websocket::stream<boost::beast::tcp_stream>>(
ws_ = std::make_unique<boost::beast::websocket::stream<boost::beast::tcp_stream>>(
boost::asio::make_strand(ioc_));
run();
@@ -228,31 +215,26 @@ SslETLSource::close(bool startAgain)
// an assertion fails. Using closing_ makes sure async_close is only
// called once
closing_ = true;
derived().ws().async_close(
boost::beast::websocket::close_code::normal,
[this, startAgain](auto ec) {
if (ec)
{
log_.error()
<< " async_close : "
<< "error code = " << ec << " - " << toString();
}
closing_ = false;
if (startAgain)
{
ws_ = std::make_unique<boost::beast::websocket::stream<
boost::beast::ssl_stream<
boost::beast::tcp_stream>>>(
boost::asio::make_strand(ioc_), *sslCtx_);
derived().ws().async_close(boost::beast::websocket::close_code::normal, [this, startAgain](auto ec) {
if (ec)
{
log_.error() << " async_close : "
<< "error code = " << ec << " - " << toString();
}
closing_ = false;
if (startAgain)
{
ws_ = std::make_unique<
boost::beast::websocket::stream<boost::beast::ssl_stream<boost::beast::tcp_stream>>>(
boost::asio::make_strand(ioc_), *sslCtx_);
run();
}
});
run();
}
});
}
else if (startAgain)
{
ws_ = std::make_unique<boost::beast::websocket::stream<
boost::beast::ssl_stream<boost::beast::tcp_stream>>>(
ws_ = std::make_unique<boost::beast::websocket::stream<boost::beast::ssl_stream<boost::beast::tcp_stream>>>(
boost::asio::make_strand(ioc_), *sslCtx_);
run();
@@ -262,9 +244,7 @@ SslETLSource::close(bool startAgain)
template <class Derived>
void
ETLSourceImpl<Derived>::onResolve(
boost::beast::error_code ec,
boost::asio::ip::tcp::resolver::results_type results)
ETLSourceImpl<Derived>::onResolve(boost::beast::error_code ec, boost::asio::ip::tcp::resolver::results_type results)
{
log_.trace() << "ec = " << ec << " - " << toString();
if (ec)
@@ -274,12 +254,10 @@ ETLSourceImpl<Derived>::onResolve(
}
else
{
boost::beast::get_lowest_layer(derived().ws())
.expires_after(std::chrono::seconds(30));
boost::beast::get_lowest_layer(derived().ws())
.async_connect(results, [this](auto ec, auto ep) {
derived().onConnect(ec, ep);
});
boost::beast::get_lowest_layer(derived().ws()).expires_after(std::chrono::seconds(30));
boost::beast::get_lowest_layer(derived().ws()).async_connect(results, [this](auto ec, auto ep) {
derived().onConnect(ec, ep);
});
}
}
@@ -306,21 +284,18 @@ PlainETLSource::onConnect(
// Set a decorator to change the User-Agent of the handshake
derived().ws().set_option(
boost::beast::websocket::stream_base::decorator(
[](boost::beast::websocket::request_type& req) {
req.set(
boost::beast::http::field::user_agent, "clio-client");
boost::beast::websocket::stream_base::decorator([](boost::beast::websocket::request_type& req) {
req.set(boost::beast::http::field::user_agent, "clio-client");
req.set("X-User", "clio-client");
}));
req.set("X-User", "clio-client");
}));
// Update the host_ string. This will provide the value of the
// Host HTTP header during the WebSocket handshake.
// See https://tools.ietf.org/html/rfc7230#section-5.4
auto host = ip_ + ':' + std::to_string(endpoint.port());
// Perform the websocket handshake
derived().ws().async_handshake(
host, "/", [this](auto ec) { onHandshake(ec); });
derived().ws().async_handshake(host, "/", [this](auto ec) { onHandshake(ec); });
}
}
@@ -347,13 +322,11 @@ SslETLSource::onConnect(
// Set a decorator to change the User-Agent of the handshake
derived().ws().set_option(
boost::beast::websocket::stream_base::decorator(
[](boost::beast::websocket::request_type& req) {
req.set(
boost::beast::http::field::user_agent, "clio-client");
boost::beast::websocket::stream_base::decorator([](boost::beast::websocket::request_type& req) {
req.set(boost::beast::http::field::user_agent, "clio-client");
req.set("X-User", "clio-client");
}));
req.set("X-User", "clio-client");
}));
// Update the host_ string. This will provide the value of the
// Host HTTP header during the WebSocket handshake.
@@ -361,8 +334,7 @@ SslETLSource::onConnect(
auto host = ip_ + ':' + std::to_string(endpoint.port());
// Perform the websocket handshake
ws().next_layer().async_handshake(
boost::asio::ssl::stream_base::client,
[this, endpoint](auto ec) { onSslHandshake(ec, endpoint); });
boost::asio::ssl::stream_base::client, [this, endpoint](auto ec) { onSslHandshake(ec, endpoint); });
}
}
@@ -389,8 +361,7 @@ void
ETLSourceImpl<Derived>::onHandshake(boost::beast::error_code ec)
{
log_.trace() << "ec = " << ec << " - " << toString();
if (auto action = hooks_.onConnected(ec);
action == ETLSourceHooks::Action::STOP)
if (auto action = hooks_.onConnected(ec); action == ETLSourceHooks::Action::STOP)
return;
if (ec)
@@ -401,35 +372,26 @@ ETLSourceImpl<Derived>::onHandshake(boost::beast::error_code ec)
else
{
boost::json::object jv{
{"command", "subscribe"},
{"streams",
{"ledger", "manifests", "validations", "transactions_proposed"}}};
{"command", "subscribe"}, {"streams", {"ledger", "manifests", "validations", "transactions_proposed"}}};
std::string s = boost::json::serialize(jv);
log_.trace() << "Sending subscribe stream message";
derived().ws().set_option(
boost::beast::websocket::stream_base::decorator(
[](boost::beast::websocket::request_type& req) {
req.set(
boost::beast::http::field::user_agent,
std::string(BOOST_BEAST_VERSION_STRING) +
" clio-client");
boost::beast::websocket::stream_base::decorator([](boost::beast::websocket::request_type& req) {
req.set(
boost::beast::http::field::user_agent, std::string(BOOST_BEAST_VERSION_STRING) + " clio-client");
req.set("X-User", "coro-client");
}));
req.set("X-User", "coro-client");
}));
// Send the message
derived().ws().async_write(
boost::asio::buffer(s),
[this](auto ec, size_t size) { onWrite(ec, size); });
derived().ws().async_write(boost::asio::buffer(s), [this](auto ec, size_t size) { onWrite(ec, size); });
}
}
template <class Derived>
void
ETLSourceImpl<Derived>::onWrite(
boost::beast::error_code ec,
size_t bytesWritten)
ETLSourceImpl<Derived>::onWrite(boost::beast::error_code ec, size_t bytesWritten)
{
log_.trace() << "ec = " << ec << " - " << toString();
if (ec)
@@ -439,8 +401,7 @@ ETLSourceImpl<Derived>::onWrite(
}
else
{
derived().ws().async_read(
readBuffer_, [this](auto ec, size_t size) { onRead(ec, size); });
derived().ws().async_read(readBuffer_, [this](auto ec, size_t size) { onRead(ec, size); });
}
}
@@ -461,8 +422,7 @@ ETLSourceImpl<Derived>::onRead(boost::beast::error_code ec, size_t size)
swap(readBuffer_, buffer);
log_.trace() << "calling async_read - " << toString();
derived().ws().async_read(
readBuffer_, [this](auto ec, size_t size) { onRead(ec, size); });
derived().ws().async_read(readBuffer_, [this](auto ec, size_t size) { onRead(ec, size); });
}
}
@@ -476,9 +436,7 @@ ETLSourceImpl<Derived>::handleMessage()
connected_ = true;
try
{
std::string msg{
static_cast<char const*>(readBuffer_.data().data()),
readBuffer_.size()};
std::string msg{static_cast<char const*>(readBuffer_.data().data()), readBuffer_.size()};
log_.trace() << msg;
boost::json::value raw = boost::json::parse(msg);
log_.trace() << "parsed";
@@ -494,32 +452,25 @@ ETLSourceImpl<Derived>::handleMessage()
}
if (result.contains("validated_ledgers"))
{
boost::json::string const& validatedLedgers =
result["validated_ledgers"].as_string();
boost::json::string const& validatedLedgers = result["validated_ledgers"].as_string();
setValidatedRange(
{validatedLedgers.c_str(), validatedLedgers.size()});
setValidatedRange({validatedLedgers.c_str(), validatedLedgers.size()});
}
log_.info() << "Received a message on ledger "
<< " subscription stream. Message : " << response
<< " - " << toString();
<< " subscription stream. Message : " << response << " - " << toString();
}
else if (
response.contains("type") && response["type"] == "ledgerClosed")
else if (response.contains("type") && response["type"] == "ledgerClosed")
{
log_.info() << "Received a message on ledger "
<< " subscription stream. Message : " << response
<< " - " << toString();
<< " subscription stream. Message : " << response << " - " << toString();
if (response.contains("ledger_index"))
{
ledgerIndex = response["ledger_index"].as_int64();
}
if (response.contains("validated_ledgers"))
{
boost::json::string const& validatedLedgers =
response["validated_ledgers"].as_string();
setValidatedRange(
{validatedLedgers.c_str(), validatedLedgers.size()});
boost::json::string const& validatedLedgers = response["validated_ledgers"].as_string();
setValidatedRange({validatedLedgers.c_str(), validatedLedgers.size()});
}
}
else
@@ -531,15 +482,11 @@ ETLSourceImpl<Derived>::handleMessage()
forwardCache_.freshen();
subscriptions_->forwardProposedTransaction(response);
}
else if (
response.contains("type") &&
response["type"] == "validationReceived")
else if (response.contains("type") && response["type"] == "validationReceived")
{
subscriptions_->forwardValidation(response);
}
else if (
response.contains("type") &&
response["type"] == "manifestReceived")
else if (response.contains("type") && response["type"] == "manifestReceived")
{
subscriptions_->forwardManifest(response);
}
@@ -548,8 +495,7 @@ ETLSourceImpl<Derived>::handleMessage()
if (ledgerIndex != 0)
{
log_.trace() << "Pushing ledger sequence = " << ledgerIndex << " - "
<< toString();
log_.trace() << "Pushing ledger sequence = " << ledgerIndex << " - " << toString();
networkValidatedLedgers_->push(ledgerIndex);
}
return true;
@@ -577,10 +523,7 @@ class AsyncCallData
std::string lastKey_;
public:
AsyncCallData(
uint32_t seq,
ripple::uint256 const& marker,
std::optional<ripple::uint256> const& nextMarker)
AsyncCallData(uint32_t seq, ripple::uint256 const& marker, std::optional<ripple::uint256> const& nextMarker)
{
request_.mutable_ledger()->set_sequence(seq);
if (marker.isNonZero())
@@ -594,11 +537,9 @@ public:
unsigned char prefix = marker.data()[0];
log_.debug() << "Setting up AsyncCallData. marker = "
<< ripple::strHex(marker)
log_.debug() << "Setting up AsyncCallData. marker = " << ripple::strHex(marker)
<< " . prefix = " << ripple::strHex(std::string(1, prefix))
<< " . nextPrefix_ = "
<< ripple::strHex(std::string(1, nextPrefix_));
<< " . nextPrefix_ = " << ripple::strHex(std::string(1, nextPrefix_));
assert(nextPrefix_ > prefix || nextPrefix_ == 0x00);
@@ -628,8 +569,7 @@ public:
if (!status_.ok())
{
log_.error() << "AsyncCallData status_ not ok: "
<< " code = " << status_.error_code()
<< " message = " << status_.error_message();
<< " code = " << status_.error_code() << " message = " << status_.error_message();
return CallStatus::ERRORED;
}
if (!next_->is_unlimited())
@@ -658,10 +598,13 @@ public:
call(stub, cq);
}
log_.trace() << "Writing objects";
auto const numObjects = cur_->ledger_objects().objects_size();
log_.debug() << "Writing " << numObjects << " objects";
std::vector<Backend::LedgerObject> cacheUpdates;
cacheUpdates.reserve(cur_->ledger_objects().objects_size());
for (int i = 0; i < cur_->ledger_objects().objects_size(); ++i)
cacheUpdates.reserve(numObjects);
for (int i = 0; i < numObjects; ++i)
{
auto& obj = *(cur_->mutable_ledger_objects()->mutable_objects(i));
if (!more && nextPrefix_ != 0x00)
@@ -675,34 +618,26 @@ public:
if (!cacheOnly)
{
if (lastKey_.size())
backend.writeSuccessor(
std::move(lastKey_),
request_.ledger().sequence(),
std::string{obj.key()});
backend.writeSuccessor(std::move(lastKey_), request_.ledger().sequence(), std::string{obj.key()});
lastKey_ = obj.key();
backend.writeNFTs(getNFTDataFromObj(request_.ledger().sequence(), obj.key(), obj.data()));
backend.writeLedgerObject(
std::move(*obj.mutable_key()),
request_.ledger().sequence(),
std::move(*obj.mutable_data()));
std::move(*obj.mutable_key()), request_.ledger().sequence(), std::move(*obj.mutable_data()));
}
}
backend.cache().update(
cacheUpdates, request_.ledger().sequence(), cacheOnly);
log_.trace() << "Wrote objects";
backend.cache().update(cacheUpdates, request_.ledger().sequence(), cacheOnly);
log_.debug() << "Wrote " << numObjects << " objects. Got more: " << (more ? "YES" : "NO");
return more ? CallStatus::MORE : CallStatus::DONE;
}
void
call(
std::unique_ptr<org::xrpl::rpc::v1::XRPLedgerAPIService::Stub>& stub,
grpc::CompletionQueue& cq)
call(std::unique_ptr<org::xrpl::rpc::v1::XRPLedgerAPIService::Stub>& stub, grpc::CompletionQueue& cq)
{
context_ = std::make_unique<grpc::ClientContext>();
std::unique_ptr<grpc::ClientAsyncResponseReader<
org::xrpl::rpc::v1::GetLedgerDataResponse>>
rpc(stub->PrepareAsyncGetLedgerData(context_.get(), request_, &cq));
std::unique_ptr<grpc::ClientAsyncResponseReader<org::xrpl::rpc::v1::GetLedgerDataResponse>> rpc(
stub->PrepareAsyncGetLedgerData(context_.get(), request_, &cq));
rpc->StartCall();
@@ -727,10 +662,7 @@ public:
template <class Derived>
bool
ETLSourceImpl<Derived>::loadInitialLedger(
uint32_t sequence,
uint32_t numMarkers,
bool cacheOnly)
ETLSourceImpl<Derived>::loadInitialLedger(uint32_t sequence, uint32_t numMarkers, bool cacheOnly)
{
if (!stub_)
return false;
@@ -752,8 +684,7 @@ ETLSourceImpl<Derived>::loadInitialLedger(
calls.emplace_back(sequence, markers[i], nextMarker);
}
log_.debug() << "Starting data download for ledger " << sequence
<< ". Using source = " << toString();
log_.debug() << "Starting data download for ledger " << sequence << ". Using source = " << toString();
for (auto& c : calls)
c.call(stub_, cq);
@@ -794,14 +725,12 @@ ETLSourceImpl<Derived>::loadInitialLedger(
}
if (backend_->cache().size() > progress)
{
log_.info() << "Downloaded " << backend_->cache().size()
<< " records from rippled";
log_.info() << "Downloaded " << backend_->cache().size() << " records from rippled";
progress += incr;
}
}
}
log_.info() << "Finished loadInitialLedger. cache size = "
<< backend_->cache().size();
log_.info() << "Finished loadInitialLedger. cache size = " << backend_->cache().size();
size_t numWrites = 0;
if (!abort)
{
@@ -811,27 +740,18 @@ ETLSourceImpl<Derived>::loadInitialLedger(
auto seconds = util::timed<std::chrono::seconds>([&]() {
for (auto& key : edgeKeys)
{
log_.debug()
<< "Writing edge key = " << ripple::strHex(key);
auto succ = backend_->cache().getSuccessor(
*ripple::uint256::fromVoidChecked(key), sequence);
log_.debug() << "Writing edge key = " << ripple::strHex(key);
auto succ = backend_->cache().getSuccessor(*ripple::uint256::fromVoidChecked(key), sequence);
if (succ)
backend_->writeSuccessor(
std::move(key),
sequence,
uint256ToString(succ->key));
backend_->writeSuccessor(std::move(key), sequence, uint256ToString(succ->key));
}
ripple::uint256 prev = Backend::firstKey;
while (auto cur =
backend_->cache().getSuccessor(prev, sequence))
while (auto cur = backend_->cache().getSuccessor(prev, sequence))
{
assert(cur);
if (prev == Backend::firstKey)
{
backend_->writeSuccessor(
uint256ToString(prev),
sequence,
uint256ToString(cur->key));
backend_->writeSuccessor(uint256ToString(prev), sequence, uint256ToString(cur->key));
}
if (isBookDir(cur->key, cur->blob))
@@ -840,40 +760,29 @@ ETLSourceImpl<Derived>::loadInitialLedger(
// make sure the base is not an actual object
if (!backend_->cache().get(cur->key, sequence))
{
auto succ =
backend_->cache().getSuccessor(base, sequence);
auto succ = backend_->cache().getSuccessor(base, sequence);
assert(succ);
if (succ->key == cur->key)
{
log_.debug() << "Writing book successor = "
<< ripple::strHex(base) << " - "
log_.debug() << "Writing book successor = " << ripple::strHex(base) << " - "
<< ripple::strHex(cur->key);
backend_->writeSuccessor(
uint256ToString(base),
sequence,
uint256ToString(cur->key));
backend_->writeSuccessor(uint256ToString(base), sequence, uint256ToString(cur->key));
}
}
++numWrites;
}
prev = std::move(cur->key);
if (numWrites % 100000 == 0 && numWrites != 0)
log_.info()
<< "Wrote " << numWrites << " book successors";
log_.info() << "Wrote " << numWrites << " book successors";
}
backend_->writeSuccessor(
uint256ToString(prev),
sequence,
uint256ToString(Backend::lastKey));
backend_->writeSuccessor(uint256ToString(prev), sequence, uint256ToString(Backend::lastKey));
++numWrites;
});
log_.info()
<< "Looping through cache and submitting all writes took "
<< seconds
<< " seconds. numWrites = " << std::to_string(numWrites);
log_.info() << "Looping through cache and submitting all writes took " << seconds
<< " seconds. numWrites = " << std::to_string(numWrites);
}
}
return !abort;
@@ -881,10 +790,7 @@ ETLSourceImpl<Derived>::loadInitialLedger(
template <class Derived>
std::pair<grpc::Status, org::xrpl::rpc::v1::GetLedgerResponse>
ETLSourceImpl<Derived>::fetchLedger(
uint32_t ledgerSequence,
bool getObjects,
bool getObjectNeighbors)
ETLSourceImpl<Derived>::fetchLedger(uint32_t ledgerSequence, bool getObjects, bool getObjectNeighbors)
{
org::xrpl::rpc::v1::GetLedgerResponse response;
if (!stub_)
@@ -920,12 +826,7 @@ make_ETLSource(
ETLLoadBalancer& balancer)
{
auto src = std::make_unique<ProbingETLSource>(
config,
ioContext,
backend,
subscriptions,
networkValidatedLedgers,
balancer);
config, ioContext, backend, subscriptions, networkValidatedLedgers, balancer);
src->run();
@@ -946,8 +847,7 @@ ETLLoadBalancer::ETLLoadBalancer(
for (auto const& entry : config.array("etl_sources"))
{
std::unique_ptr<ETLSource> source = make_ETLSource(
entry, ioContext, backend, subscriptions, nwvl, *this);
std::unique_ptr<ETLSource> source = make_ETLSource(entry, ioContext, backend, subscriptions, nwvl, *this);
sources_.push_back(std::move(source));
log_.info() << "Added etl source - " << sources_.back()->toString();
@@ -959,13 +859,11 @@ ETLLoadBalancer::loadInitialLedger(uint32_t sequence, bool cacheOnly)
{
execute(
[this, &sequence, cacheOnly](auto& source) {
bool res =
source->loadInitialLedger(sequence, downloadRanges_, cacheOnly);
bool res = source->loadInitialLedger(sequence, downloadRanges_, cacheOnly);
if (!res)
{
log_.error() << "Failed to download initial ledger."
<< " Sequence = " << sequence
<< " source = " << source->toString();
<< " Sequence = " << sequence << " source = " << source->toString();
}
return res;
},
@@ -973,17 +871,12 @@ ETLLoadBalancer::loadInitialLedger(uint32_t sequence, bool cacheOnly)
}
std::optional<org::xrpl::rpc::v1::GetLedgerResponse>
ETLLoadBalancer::fetchLedger(
uint32_t ledgerSequence,
bool getObjects,
bool getObjectNeighbors)
ETLLoadBalancer::fetchLedger(uint32_t ledgerSequence, bool getObjects, bool getObjectNeighbors)
{
org::xrpl::rpc::v1::GetLedgerResponse response;
bool success = execute(
[&response, ledgerSequence, getObjects, getObjectNeighbors, log = log_](
auto& source) {
auto [status, data] = source->fetchLedger(
ledgerSequence, getObjects, getObjectNeighbors);
[&response, ledgerSequence, getObjects, getObjectNeighbors, log = log_](auto& source) {
auto [status, data] = source->fetchLedger(ledgerSequence, getObjects, getObjectNeighbors);
response = std::move(data);
if (status.ok() && response.validated())
{
@@ -993,10 +886,8 @@ ETLLoadBalancer::fetchLedger(
}
else
{
log.warn() << "Could not fetch ledger " << ledgerSequence
<< ", Reply: " << response.DebugString()
<< ", error_code: " << status.error_code()
<< ", error_msg: " << status.error_message()
log.warn() << "Could not fetch ledger " << ledgerSequence << ", Reply: " << response.DebugString()
<< ", error_code: " << status.error_code() << ", error_msg: " << status.error_message()
<< ", source = " << source->toString();
return false;
}
@@ -1019,8 +910,7 @@ ETLLoadBalancer::forwardToRippled(
auto numAttempts = 0;
while (numAttempts < sources_.size())
{
if (auto res =
sources_[sourceIdx]->forwardToRippled(request, clientIp, yield))
if (auto res = sources_[sourceIdx]->forwardToRippled(request, clientIp, yield))
return res;
sourceIdx = (sourceIdx + 1) % sources_.size();
@@ -1093,14 +983,10 @@ ETLSourceImpl<Derived>::requestFromRippled(
// resources. See "secure_gateway" in
//
// https://github.com/ripple/rippled/blob/develop/cfg/rippled-example.cfg
ws->set_option(websocket::stream_base::decorator(
[&clientIp](websocket::request_type& req) {
req.set(
http::field::user_agent,
std::string(BOOST_BEAST_VERSION_STRING) +
" websocket-client-coro");
req.set(http::field::forwarded, "for=" + clientIp);
}));
ws->set_option(websocket::stream_base::decorator([&clientIp](websocket::request_type& req) {
req.set(http::field::user_agent, std::string(BOOST_BEAST_VERSION_STRING) + " websocket-client-coro");
req.set(http::field::forwarded, "for=" + clientIp);
}));
log_.trace() << "client ip: " << clientIp;
log_.trace() << "Performing websocket handshake";
@@ -1111,8 +997,7 @@ ETLSourceImpl<Derived>::requestFromRippled(
log_.trace() << "Sending request";
// Send the message
ws->async_write(
net::buffer(boost::json::serialize(request)), yield[ec]);
ws->async_write(net::buffer(boost::json::serialize(request)), yield[ec]);
if (ec)
return {};
@@ -1127,8 +1012,7 @@ ETLSourceImpl<Derived>::requestFromRippled(
if (!parsed.is_object())
{
log_.error() << "Error parsing response: "
<< std::string{begin, end};
log_.error() << "Error parsing response: " << std::string{begin, end};
return {};
}
log_.trace() << "Successfully forwarded request";
@@ -1157,8 +1041,8 @@ ETLLoadBalancer::execute(Func f, uint32_t ledgerSequence)
{
auto& source = sources_[sourceIdx];
log_.debug() << "Attempting to execute func. ledger sequence = "
<< ledgerSequence << " - source = " << source->toString();
log_.debug() << "Attempting to execute func. ledger sequence = " << ledgerSequence
<< " - source = " << source->toString();
// Originally, it was (source->hasLedger(ledgerSequence) || true)
/* Sometimes rippled has ledger but doesn't actually know. However,
this does NOT happen in the normal case and is safe to remove
@@ -1168,30 +1052,26 @@ ETLLoadBalancer::execute(Func f, uint32_t ledgerSequence)
bool res = f(source);
if (res)
{
log_.debug() << "Successfully executed func at source = "
<< source->toString()
log_.debug() << "Successfully executed func at source = " << source->toString()
<< " - ledger sequence = " << ledgerSequence;
break;
}
else
{
log_.warn() << "Failed to execute func at source = "
<< source->toString()
log_.warn() << "Failed to execute func at source = " << source->toString()
<< " - ledger sequence = " << ledgerSequence;
}
}
else
{
log_.warn() << "Ledger not present at source = "
<< source->toString()
log_.warn() << "Ledger not present at source = " << source->toString()
<< " - ledger sequence = " << ledgerSequence;
}
sourceIdx = (sourceIdx + 1) % sources_.size();
numAttempts++;
if (numAttempts % sources_.size() == 0)
{
log_.info() << "Ledger sequence " << ledgerSequence
<< " is not yet available from any configured sources. "
log_.info() << "Ledger sequence " << ledgerSequence << " is not yet available from any configured sources. "
<< "Sleeping and trying again";
std::this_thread::sleep_for(std::chrono::seconds(2));
}
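The failover loop in `ETLLoadBalancer::execute` reduces to a small round-robin pattern: start at one source, try each in turn with wraparound, and stop after one full pass. A minimal standalone sketch (the helper name `executeRoundRobin` is illustrative, not clio API, and the sleep-and-retry outer loop is omitted):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical helper (not clio API): try each source in round-robin
// order starting at startIdx, wrapping around, until the callback
// succeeds or every source has been attempted once. This mirrors the
// shape of ETLLoadBalancer::execute minus the sleep-and-retry loop.
template <typename Source>
bool
executeRoundRobin(std::vector<Source>& sources, std::size_t startIdx, std::function<bool(Source&)> const& f)
{
    if (sources.empty())
        return false;
    std::size_t idx = startIdx % sources.size();
    for (std::size_t attempts = 0; attempts < sources.size(); ++attempts)
    {
        if (f(sources[idx]))
            return true;                    // executed successfully at this source
        idx = (idx + 1) % sources.size();   // advance to the next source
    }
    return false;  // every configured source failed this round
}
```

In clio the real loop additionally checks `hasLedger` before invoking the callback and sleeps before retrying the whole ring, but the index arithmetic is the same.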


@@ -34,6 +34,8 @@
#include <boost/beast/core/string.hpp>
#include <boost/beast/ssl.hpp>
#include <boost/beast/websocket.hpp>
#include <boost/uuid/uuid.hpp>
#include <boost/uuid/uuid_generators.hpp>
class ETLLoadBalancer;
class ETLSource;
@@ -65,26 +67,20 @@ class ForwardCache
clear();
public:
ForwardCache(
clio::Config const& config,
boost::asio::io_context& ioc,
ETLSource const& source)
ForwardCache(clio::Config const& config, boost::asio::io_context& ioc, ETLSource const& source)
: strand_(ioc), timer_(strand_), source_(source)
{
if (config.contains("cache"))
{
auto commands =
config.arrayOrThrow("cache", "ETLSource cache must be array");
auto commands = config.arrayOrThrow("cache", "ETLSource cache must be array");
if (config.contains("cache_duration"))
duration_ = config.valueOrThrow<uint32_t>(
"cache_duration",
"ETLSource cache_duration must be a number");
duration_ =
config.valueOrThrow<uint32_t>("cache_duration", "ETLSource cache_duration must be a number");
for (auto const& command : commands)
{
auto key = command.valueOrThrow<std::string>(
"ETLSource forward command must be array of strings");
auto key = command.valueOrThrow<std::string>("ETLSource forward command must be array of strings");
latestForwarded_[key] = {};
}
}
@@ -126,27 +122,28 @@ public:
hasLedger(uint32_t sequence) const = 0;
virtual std::pair<grpc::Status, org::xrpl::rpc::v1::GetLedgerResponse>
fetchLedger(
uint32_t ledgerSequence,
bool getObjects = true,
bool getObjectNeighbors = false) = 0;
fetchLedger(uint32_t ledgerSequence, bool getObjects = true, bool getObjectNeighbors = false) = 0;
virtual bool
loadInitialLedger(
uint32_t sequence,
std::uint32_t numMarkers,
bool cacheOnly = false) = 0;
loadInitialLedger(uint32_t sequence, std::uint32_t numMarkers, bool cacheOnly = false) = 0;
virtual std::optional<boost::json::object>
forwardToRippled(
boost::json::object const& request,
std::string const& clientIp,
boost::asio::yield_context& yield) const = 0;
forwardToRippled(boost::json::object const& request, std::string const& clientIp, boost::asio::yield_context& yield)
const = 0;
virtual boost::uuids::uuid
token() const = 0;
virtual ~ETLSource()
{
}
bool
operator==(ETLSource const& other) const
{
return token() == other.token();
}
protected:
clio::Logger log_{"ETL"};
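The new `token()`/`operator==` pair identifies a source by a UUID minted once at construction rather than by pointer, which is what lets `ProbingETLSource` forward identity to whichever inner source is active. A stdlib-only sketch of the same scheme (a monotonic counter stands in for the `boost::uuids::uuid` the diff actually uses):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the token-based identity the diff introduces: clio mints a
// random boost::uuids::uuid per source; here a monotonic counter stands
// in for the UUID so the example stays standard-library only.
struct TokenSource
{
    std::uint64_t uuid_;

    TokenSource() : uuid_(nextToken())
    {
    }

    std::uint64_t
    token() const
    {
        return uuid_;
    }

    bool
    operator==(TokenSource const& other) const
    {
        return token() == other.token();  // identity by token, not address
    }

private:
    static std::uint64_t
    nextToken()
    {
        static std::uint64_t counter = 0;
        return counter++;  // each construction gets a fresh token
    }
};
```

Copies share the token, which mirrors why comparing `*src == *in` works even when the two sides hold different pointers to the same logical source.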
@@ -209,6 +206,7 @@ class ETLSourceImpl : public ETLSource
ETLLoadBalancer& balancer_;
ForwardCache forwardCache_;
boost::uuids::uuid uuid_;
std::optional<boost::json::object>
requestFromRippled(
@@ -246,9 +244,7 @@ protected:
auto const host = ip_;
auto const port = wsPort_;
resolver_.async_resolve(host, port, [this](auto ec, auto results) {
onResolve(ec, results);
});
resolver_.async_resolve(host, port, [this](auto ec, auto results) { onResolve(ec, results); });
}
public:
@@ -263,6 +259,12 @@ public:
return connected_;
}
boost::uuids::uuid
token() const override
{
return uuid_;
}
std::chrono::system_clock::time_point
getLastMsgTime() const
{
@@ -298,6 +300,9 @@ public:
, timer_(ioContext)
, hooks_(hooks)
{
static boost::uuids::random_generator uuidGenerator;
uuid_ = uuidGenerator();
ip_ = config.valueOr<std::string>("ip", {});
wsPort_ = config.valueOr<std::string>("ws_port", {});
@@ -306,21 +311,18 @@ public:
grpcPort_ = *value;
try
{
boost::asio::ip::tcp::endpoint endpoint{
boost::asio::ip::make_address(ip_), std::stoi(grpcPort_)};
boost::asio::ip::tcp::endpoint endpoint{boost::asio::ip::make_address(ip_), std::stoi(grpcPort_)};
std::stringstream ss;
ss << endpoint;
grpc::ChannelArguments chArgs;
chArgs.SetMaxReceiveMessageSize(-1);
stub_ = org::xrpl::rpc::v1::XRPLedgerAPIService::NewStub(
grpc::CreateCustomChannel(
ss.str(), grpc::InsecureChannelCredentials(), chArgs));
grpc::CreateCustomChannel(ss.str(), grpc::InsecureChannelCredentials(), chArgs));
log_.debug() << "Made stub for remote = " << toString();
}
catch (std::exception const& e)
{
log_.debug() << "Exception while creating stub = " << e.what()
<< " . Remote = " << toString();
log_.debug() << "Exception while creating stub = " << e.what() << " . Remote = " << toString();
}
}
}
@@ -376,9 +378,7 @@ public:
pairs.push_back(std::make_pair(min, max));
}
}
std::sort(pairs.begin(), pairs.end(), [](auto left, auto right) {
return left.first < right.first;
});
std::sort(pairs.begin(), pairs.end(), [](auto left, auto right) { return left.first < right.first; });
// we only hold the lock here, to avoid blocking while string processing
std::lock_guard lck(mtx_);
@@ -401,16 +401,13 @@ public:
/// and the prior one
/// @return the extracted data and the result status
std::pair<grpc::Status, org::xrpl::rpc::v1::GetLedgerResponse>
fetchLedger(
uint32_t ledgerSequence,
bool getObjects = true,
bool getObjectNeighbors = false) override;
fetchLedger(uint32_t ledgerSequence, bool getObjects = true, bool getObjectNeighbors = false) override;
std::string
toString() const override
{
return "{validated_ledger: " + getValidatedRange() + ", ip: " + ip_ +
", web socket port: " + wsPort_ + ", grpc port: " + grpcPort_ + "}";
return "{validated_ledger: " + getValidatedRange() + ", ip: " + ip_ + ", web socket port: " + wsPort_ +
", grpc port: " + grpcPort_ + "}";
}
boost::json::object
@@ -425,8 +422,7 @@ public:
auto last = getLastMsgTime();
if (last.time_since_epoch().count() != 0)
res["last_msg_age_seconds"] = std::to_string(
std::chrono::duration_cast<std::chrono::seconds>(
std::chrono::system_clock::now() - getLastMsgTime())
std::chrono::duration_cast<std::chrono::seconds>(std::chrono::system_clock::now() - getLastMsgTime())
.count());
return res;
}
@@ -436,10 +432,7 @@ public:
/// @param writeQueue queue to push downloaded ledger objects
/// @return true if the download was successful
bool
loadInitialLedger(
std::uint32_t ledgerSequence,
std::uint32_t numMarkers,
bool cacheOnly = false) override;
loadInitialLedger(std::uint32_t ledgerSequence, std::uint32_t numMarkers, bool cacheOnly = false) override;
/// Attempt to reconnect to the ETL source
void
@@ -463,16 +456,11 @@ public:
/// Callback
void
onResolve(
boost::beast::error_code ec,
boost::asio::ip::tcp::resolver::results_type results);
onResolve(boost::beast::error_code ec, boost::asio::ip::tcp::resolver::results_type results);
/// Callback
virtual void
onConnect(
boost::beast::error_code ec,
boost::asio::ip::tcp::resolver::results_type::endpoint_type
endpoint) = 0;
onConnect(boost::beast::error_code ec, boost::asio::ip::tcp::resolver::results_type::endpoint_type endpoint) = 0;
/// Callback
void
@@ -492,16 +480,13 @@ public:
handleMessage();
std::optional<boost::json::object>
forwardToRippled(
boost::json::object const& request,
std::string const& clientIp,
boost::asio::yield_context& yield) const override;
forwardToRippled(boost::json::object const& request, std::string const& clientIp, boost::asio::yield_context& yield)
const override;
};
class PlainETLSource : public ETLSourceImpl<PlainETLSource>
{
std::unique_ptr<boost::beast::websocket::stream<boost::beast::tcp_stream>>
ws_;
std::unique_ptr<boost::beast::websocket::stream<boost::beast::tcp_stream>> ws_;
public:
PlainETLSource(
@@ -512,24 +497,14 @@ public:
std::shared_ptr<NetworkValidatedLedgers> nwvl,
ETLLoadBalancer& balancer,
ETLSourceHooks hooks)
: ETLSourceImpl(
config,
ioc,
backend,
subscriptions,
nwvl,
balancer,
std::move(hooks))
, ws_(std::make_unique<
boost::beast::websocket::stream<boost::beast::tcp_stream>>(
: ETLSourceImpl(config, ioc, backend, subscriptions, nwvl, balancer, std::move(hooks))
, ws_(std::make_unique<boost::beast::websocket::stream<boost::beast::tcp_stream>>(
boost::asio::make_strand(ioc)))
{
}
void
onConnect(
boost::beast::error_code ec,
boost::asio::ip::tcp::resolver::results_type::endpoint_type endpoint)
onConnect(boost::beast::error_code ec, boost::asio::ip::tcp::resolver::results_type::endpoint_type endpoint)
override;
/// Close the websocket
@@ -548,9 +523,7 @@ class SslETLSource : public ETLSourceImpl<SslETLSource>
{
std::optional<std::reference_wrapper<boost::asio::ssl::context>> sslCtx_;
std::unique_ptr<boost::beast::websocket::stream<
boost::beast::ssl_stream<boost::beast::tcp_stream>>>
ws_;
std::unique_ptr<boost::beast::websocket::stream<boost::beast::ssl_stream<boost::beast::tcp_stream>>> ws_;
public:
SslETLSource(
@@ -562,40 +535,27 @@ public:
std::shared_ptr<NetworkValidatedLedgers> nwvl,
ETLLoadBalancer& balancer,
ETLSourceHooks hooks)
: ETLSourceImpl(
config,
ioc,
backend,
subscriptions,
nwvl,
balancer,
std::move(hooks))
: ETLSourceImpl(config, ioc, backend, subscriptions, nwvl, balancer, std::move(hooks))
, sslCtx_(sslCtx)
, ws_(std::make_unique<boost::beast::websocket::stream<
boost::beast::ssl_stream<boost::beast::tcp_stream>>>(
, ws_(std::make_unique<boost::beast::websocket::stream<boost::beast::ssl_stream<boost::beast::tcp_stream>>>(
boost::asio::make_strand(ioc_),
*sslCtx_))
{
}
void
onConnect(
boost::beast::error_code ec,
boost::asio::ip::tcp::resolver::results_type::endpoint_type endpoint)
onConnect(boost::beast::error_code ec, boost::asio::ip::tcp::resolver::results_type::endpoint_type endpoint)
override;
void
onSslHandshake(
boost::beast::error_code ec,
boost::asio::ip::tcp::resolver::results_type::endpoint_type endpoint);
onSslHandshake(boost::beast::error_code ec, boost::asio::ip::tcp::resolver::results_type::endpoint_type endpoint);
/// Close the websocket
/// @param startAgain whether to reconnect
void
close(bool startAgain);
boost::beast::websocket::stream<
boost::beast::ssl_stream<boost::beast::tcp_stream>>&
boost::beast::websocket::stream<boost::beast::ssl_stream<boost::beast::tcp_stream>>&
ws()
{
return *ws_;
@@ -631,8 +591,7 @@ public:
std::shared_ptr<SubscriptionManager> subscriptions,
std::shared_ptr<NetworkValidatedLedgers> validatedLedgers)
{
return std::make_shared<ETLLoadBalancer>(
config, ioc, backend, subscriptions, validatedLedgers);
return std::make_shared<ETLLoadBalancer>(config, ioc, backend, subscriptions, validatedLedgers);
}
~ETLLoadBalancer()
@@ -655,10 +614,7 @@ public:
/// was found in the database or the server is shutting down, the optional
/// will be empty
std::optional<org::xrpl::rpc::v1::GetLedgerResponse>
fetchLedger(
uint32_t ledgerSequence,
bool getObjects,
bool getObjectNeighbors);
fetchLedger(uint32_t ledgerSequence, bool getObjects, bool getObjectNeighbors);
/// Determine whether messages received on the transactions_proposed stream
/// should be forwarded to subscribing clients. The server subscribes to
@@ -676,10 +632,7 @@ public:
// We pick the first ETLSource encountered that is connected
if (src->isConnected())
{
if (src.get() == in)
return true;
else
return false;
return *src == *in;
}
}
@@ -702,10 +655,8 @@ public:
/// @param request JSON-RPC request
/// @return response received from rippled node
std::optional<boost::json::object>
forwardToRippled(
boost::json::object const& request,
std::string const& clientIp,
boost::asio::yield_context& yield) const;
forwardToRippled(boost::json::object const& request, std::string const& clientIp, boost::asio::yield_context& yield)
const;
private:
/// f is a function that takes an ETLSource as an argument and returns a


@@ -17,7 +17,6 @@
*/
//==============================================================================
#include <ripple/app/tx/impl/details/NFTokenUtils.h>
#include <ripple/protocol/STBase.h>
#include <ripple/protocol/STTx.h>
#include <ripple/protocol/TxMeta.h>
@@ -45,25 +44,18 @@ getNFTokenMintData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
for (ripple::STObject const& node : txMeta.getNodes())
{
if (node.getFieldU16(ripple::sfLedgerEntryType) !=
ripple::ltNFTOKEN_PAGE)
if (node.getFieldU16(ripple::sfLedgerEntryType) != ripple::ltNFTOKEN_PAGE)
continue;
if (!owner)
owner = ripple::AccountID::fromVoid(
node.getFieldH256(ripple::sfLedgerIndex).data());
owner = ripple::AccountID::fromVoid(node.getFieldH256(ripple::sfLedgerIndex).data());
if (node.getFName() == ripple::sfCreatedNode)
{
ripple::STArray const& toAddNFTs =
node.peekAtField(ripple::sfNewFields)
.downcast<ripple::STObject>()
.getFieldArray(ripple::sfNFTokens);
node.peekAtField(ripple::sfNewFields).downcast<ripple::STObject>().getFieldArray(ripple::sfNFTokens);
std::transform(
toAddNFTs.begin(),
toAddNFTs.end(),
std::back_inserter(finalIDs),
[](ripple::STObject const& nft) {
toAddNFTs.begin(), toAddNFTs.end(), std::back_inserter(finalIDs), [](ripple::STObject const& nft) {
return nft.getFieldH256(ripple::sfNFTokenID);
});
}
@@ -81,32 +73,23 @@ getNFTokenMintData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
// as rippled outputs all fields in final fields even if they were
// not changed.
ripple::STObject const& previousFields =
node.peekAtField(ripple::sfPreviousFields)
.downcast<ripple::STObject>();
node.peekAtField(ripple::sfPreviousFields).downcast<ripple::STObject>();
if (!previousFields.isFieldPresent(ripple::sfNFTokens))
continue;
ripple::STArray const& toAddNFTs =
previousFields.getFieldArray(ripple::sfNFTokens);
ripple::STArray const& toAddNFTs = previousFields.getFieldArray(ripple::sfNFTokens);
std::transform(
toAddNFTs.begin(),
toAddNFTs.end(),
std::back_inserter(prevIDs),
[](ripple::STObject const& nft) {
toAddNFTs.begin(), toAddNFTs.end(), std::back_inserter(prevIDs), [](ripple::STObject const& nft) {
return nft.getFieldH256(ripple::sfNFTokenID);
});
ripple::STArray const& toAddFinalNFTs =
node.peekAtField(ripple::sfFinalFields)
.downcast<ripple::STObject>()
.getFieldArray(ripple::sfNFTokens);
node.peekAtField(ripple::sfFinalFields).downcast<ripple::STObject>().getFieldArray(ripple::sfNFTokens);
std::transform(
toAddFinalNFTs.begin(),
toAddFinalNFTs.end(),
std::back_inserter(finalIDs),
[](ripple::STObject const& nft) {
return nft.getFieldH256(ripple::sfNFTokenID);
});
[](ripple::STObject const& nft) { return nft.getFieldH256(ripple::sfNFTokenID); });
}
}
@@ -121,9 +104,8 @@ getNFTokenMintData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
std::inserter(tokenIDResult, tokenIDResult.begin()));
if (tokenIDResult.size() == 1 && owner)
return {
{NFTTransactionsData(
tokenIDResult.front(), txMeta, sttx.getTransactionID())},
NFTsData(tokenIDResult.front(), *owner, txMeta, false)};
{NFTTransactionsData(tokenIDResult.front(), txMeta, sttx.getTransactionID())},
NFTsData(tokenIDResult.front(), *owner, sttx.getFieldVL(ripple::sfURI), txMeta)};
std::stringstream msg;
msg << " - unexpected NFTokenMint data in tx " << sttx.getTransactionID();
@@ -134,49 +116,43 @@ std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
getNFTokenBurnData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
{
ripple::uint256 const tokenID = sttx.getFieldH256(ripple::sfNFTokenID);
std::vector<NFTTransactionsData> const txs = {
NFTTransactionsData(tokenID, txMeta, sttx.getTransactionID())};
std::vector<NFTTransactionsData> const txs = {NFTTransactionsData(tokenID, txMeta, sttx.getTransactionID())};
// Determine who owned the token when it was burned by finding an
// NFTokenPage that was deleted or modified that contains this
// tokenID.
for (ripple::STObject const& node : txMeta.getNodes())
{
if (node.getFieldU16(ripple::sfLedgerEntryType) !=
ripple::ltNFTOKEN_PAGE ||
if (node.getFieldU16(ripple::sfLedgerEntryType) != ripple::ltNFTOKEN_PAGE ||
node.getFName() == ripple::sfCreatedNode)
continue;
// NFT burn can result in an NFTokenPage being modified to no longer
// include the target, or an NFTokenPage being deleted. If this is
// modified, we want to look for the target in the fields prior to
// modification. If deleted, it's possible that the page was modified
// to remove the target NFT prior to the entire page being deleted. In
// this case, we need to look in the PreviousFields. Otherwise, the
// page was not modified prior to deleting and we need to look in the
// FinalFields.
// modification. If deleted, it's possible that the page was
// modified to remove the target NFT prior to the entire page being
// deleted. In this case, we need to look in the PreviousFields.
// Otherwise, the page was not modified prior to deleting and we
// need to look in the FinalFields.
std::optional<ripple::STArray> prevNFTs;
if (node.isFieldPresent(ripple::sfPreviousFields))
{
ripple::STObject const& previousFields =
node.peekAtField(ripple::sfPreviousFields)
.downcast<ripple::STObject>();
node.peekAtField(ripple::sfPreviousFields).downcast<ripple::STObject>();
if (previousFields.isFieldPresent(ripple::sfNFTokens))
prevNFTs = previousFields.getFieldArray(ripple::sfNFTokens);
}
else if (!prevNFTs && node.getFName() == ripple::sfDeletedNode)
prevNFTs = node.peekAtField(ripple::sfFinalFields)
.downcast<ripple::STObject>()
.getFieldArray(ripple::sfNFTokens);
prevNFTs =
node.peekAtField(ripple::sfFinalFields).downcast<ripple::STObject>().getFieldArray(ripple::sfNFTokens);
if (!prevNFTs)
continue;
auto const nft = std::find_if(
prevNFTs->begin(),
prevNFTs->end(),
[&tokenID](ripple::STObject const& candidate) {
auto const nft =
std::find_if(prevNFTs->begin(), prevNFTs->end(), [&tokenID](ripple::STObject const& candidate) {
return candidate.getFieldH256(ripple::sfNFTokenID) == tokenID;
});
if (nft != prevNFTs->end())
@@ -184,92 +160,74 @@ getNFTokenBurnData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
txs,
NFTsData(
tokenID,
ripple::AccountID::fromVoid(
node.getFieldH256(ripple::sfLedgerIndex).data()),
ripple::AccountID::fromVoid(node.getFieldH256(ripple::sfLedgerIndex).data()),
txMeta,
true));
}
std::stringstream msg;
msg << " - could not determine owner at burntime for tx "
<< sttx.getTransactionID();
msg << " - could not determine owner at burn time for tx " << sttx.getTransactionID();
throw std::runtime_error(msg.str());
}
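The PreviousFields/FinalFields fallback in the burn handler can be isolated into a tiny decision function: prefer the pre-modification NFT list when the page was modified, fall back to the final list only for a deleted page, and skip everything else. A sketch under stated assumptions (`FakeNode` and its members are stand-ins for the `ripple::STObject` accessors, not real rippled API):

```cpp
#include <cassert>
#include <optional>
#include <string>

// Stand-in for a metadata node: previousFields is present when the page
// was modified, finalFields when the node carries final state, and
// isDeleted marks a DeletedNode.
struct FakeNode
{
    std::optional<std::string> previousFields;
    std::optional<std::string> finalFields;
    bool isDeleted = false;
};

// Mirrors the burn handler's lookup order: PreviousFields first, then
// FinalFields for deleted-but-unmodified pages, otherwise nothing.
std::optional<std::string>
pickNFTList(FakeNode const& node)
{
    if (node.previousFields)
        return node.previousFields;  // modified page: pre-modification list
    if (node.isDeleted)
        return node.finalFields;     // deleted, unmodified page: final list
    return std::nullopt;             // created or irrelevant node: skip
}
```

The real code then searches the chosen array for the target `sfNFTokenID`; only the field-selection step is modeled here.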
std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
getNFTokenAcceptOfferData(
ripple::TxMeta const& txMeta,
ripple::STTx const& sttx)
getNFTokenAcceptOfferData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
{
// If we have the buy offer from this tx, we can determine the owner
// more easily by just looking at the owner of the accepted NFTokenOffer
// object.
if (sttx.isFieldPresent(ripple::sfNFTokenBuyOffer))
{
auto const affectedBuyOffer = std::find_if(
txMeta.getNodes().begin(),
txMeta.getNodes().end(),
[&sttx](ripple::STObject const& node) {
return node.getFieldH256(ripple::sfLedgerIndex) ==
sttx.getFieldH256(ripple::sfNFTokenBuyOffer);
auto const affectedBuyOffer =
std::find_if(txMeta.getNodes().begin(), txMeta.getNodes().end(), [&sttx](ripple::STObject const& node) {
return node.getFieldH256(ripple::sfLedgerIndex) == sttx.getFieldH256(ripple::sfNFTokenBuyOffer);
});
if (affectedBuyOffer == txMeta.getNodes().end())
{
std::stringstream msg;
msg << " - unexpected NFTokenAcceptOffer data in tx "
<< sttx.getTransactionID();
msg << " - unexpected NFTokenAcceptOffer data in tx " << sttx.getTransactionID();
throw std::runtime_error(msg.str());
}
ripple::uint256 const tokenID =
affectedBuyOffer->peekAtField(ripple::sfFinalFields)
.downcast<ripple::STObject>()
.getFieldH256(ripple::sfNFTokenID);
ripple::uint256 const tokenID = affectedBuyOffer->peekAtField(ripple::sfFinalFields)
.downcast<ripple::STObject>()
.getFieldH256(ripple::sfNFTokenID);
ripple::AccountID const owner =
affectedBuyOffer->peekAtField(ripple::sfFinalFields)
.downcast<ripple::STObject>()
.getAccountID(ripple::sfOwner);
ripple::AccountID const owner = affectedBuyOffer->peekAtField(ripple::sfFinalFields)
.downcast<ripple::STObject>()
.getAccountID(ripple::sfOwner);
return {
{NFTTransactionsData(tokenID, txMeta, sttx.getTransactionID())},
NFTsData(tokenID, owner, txMeta, false)};
{NFTTransactionsData(tokenID, txMeta, sttx.getTransactionID())}, NFTsData(tokenID, owner, txMeta, false)};
}
// Otherwise we have to infer the new owner from the affected nodes.
auto const affectedSellOffer = std::find_if(
txMeta.getNodes().begin(),
txMeta.getNodes().end(),
[&sttx](ripple::STObject const& node) {
return node.getFieldH256(ripple::sfLedgerIndex) ==
sttx.getFieldH256(ripple::sfNFTokenSellOffer);
auto const affectedSellOffer =
std::find_if(txMeta.getNodes().begin(), txMeta.getNodes().end(), [&sttx](ripple::STObject const& node) {
return node.getFieldH256(ripple::sfLedgerIndex) == sttx.getFieldH256(ripple::sfNFTokenSellOffer);
});
if (affectedSellOffer == txMeta.getNodes().end())
{
std::stringstream msg;
msg << " - unexpected NFTokenAcceptOffer data in tx "
<< sttx.getTransactionID();
msg << " - unexpected NFTokenAcceptOffer data in tx " << sttx.getTransactionID();
throw std::runtime_error(msg.str());
}
ripple::uint256 const tokenID =
affectedSellOffer->peekAtField(ripple::sfFinalFields)
.downcast<ripple::STObject>()
.getFieldH256(ripple::sfNFTokenID);
ripple::uint256 const tokenID = affectedSellOffer->peekAtField(ripple::sfFinalFields)
.downcast<ripple::STObject>()
.getFieldH256(ripple::sfNFTokenID);
ripple::AccountID const seller =
affectedSellOffer->peekAtField(ripple::sfFinalFields)
.downcast<ripple::STObject>()
.getAccountID(ripple::sfOwner);
ripple::AccountID const seller = affectedSellOffer->peekAtField(ripple::sfFinalFields)
.downcast<ripple::STObject>()
.getAccountID(ripple::sfOwner);
for (ripple::STObject const& node : txMeta.getNodes())
{
if (node.getFieldU16(ripple::sfLedgerEntryType) !=
ripple::ltNFTOKEN_PAGE ||
if (node.getFieldU16(ripple::sfLedgerEntryType) != ripple::ltNFTOKEN_PAGE ||
node.getFName() == ripple::sfDeletedNode)
continue;
ripple::AccountID const nodeOwner = ripple::AccountID::fromVoid(
node.getFieldH256(ripple::sfLedgerIndex).data());
ripple::AccountID const nodeOwner =
ripple::AccountID::fromVoid(node.getFieldH256(ripple::sfLedgerIndex).data());
if (nodeOwner == seller)
continue;
@@ -283,12 +241,9 @@ getNFTokenAcceptOfferData(
.getFieldArray(ripple::sfNFTokens);
}();
auto const nft = std::find_if(
nfts.begin(),
nfts.end(),
[&tokenID](ripple::STObject const& candidate) {
return candidate.getFieldH256(ripple::sfNFTokenID) == tokenID;
});
auto const nft = std::find_if(nfts.begin(), nfts.end(), [&tokenID](ripple::STObject const& candidate) {
return candidate.getFieldH256(ripple::sfNFTokenID) == tokenID;
});
if (nft != nfts.end())
return {
{NFTTransactionsData(tokenID, txMeta, sttx.getTransactionID())},
@@ -296,8 +251,7 @@ getNFTokenAcceptOfferData(
}
std::stringstream msg;
msg << " - unexpected NFTokenAcceptOffer data in tx "
<< sttx.getTransactionID();
msg << " - unexpected NFTokenAcceptOffer data in tx " << sttx.getTransactionID();
throw std::runtime_error(msg.str());
}
@@ -306,40 +260,28 @@ getNFTokenAcceptOfferData(
// transaction using this feature. This transaction also never returns an
// NFTsData because it does not change the state of an NFT itself.
std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
getNFTokenCancelOfferData(
ripple::TxMeta const& txMeta,
ripple::STTx const& sttx)
getNFTokenCancelOfferData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
{
std::vector<NFTTransactionsData> txs;
for (ripple::STObject const& node : txMeta.getNodes())
{
if (node.getFieldU16(ripple::sfLedgerEntryType) !=
ripple::ltNFTOKEN_OFFER)
if (node.getFieldU16(ripple::sfLedgerEntryType) != ripple::ltNFTOKEN_OFFER)
continue;
ripple::uint256 const tokenID = node.peekAtField(ripple::sfFinalFields)
.downcast<ripple::STObject>()
.getFieldH256(ripple::sfNFTokenID);
ripple::uint256 const tokenID =
node.peekAtField(ripple::sfFinalFields).downcast<ripple::STObject>().getFieldH256(ripple::sfNFTokenID);
txs.emplace_back(tokenID, txMeta, sttx.getTransactionID());
}
// Deduplicate any transactions based on tokenID/txIdx combo. Can't just
// use txIdx because in this case one tx can cancel offers for several
// NFTs.
std::sort(
txs.begin(),
txs.end(),
[](NFTTransactionsData const& a, NFTTransactionsData const& b) {
return a.tokenID < b.tokenID &&
a.transactionIndex < b.transactionIndex;
});
auto last = std::unique(
txs.begin(),
txs.end(),
[](NFTTransactionsData const& a, NFTTransactionsData const& b) {
return a.tokenID == b.tokenID &&
a.transactionIndex == b.transactionIndex;
});
std::sort(txs.begin(), txs.end(), [](NFTTransactionsData const& a, NFTTransactionsData const& b) {
return a.tokenID < b.tokenID || (a.tokenID == b.tokenID && a.transactionIndex < b.transactionIndex);
});
auto last = std::unique(txs.begin(), txs.end(), [](NFTTransactionsData const& a, NFTTransactionsData const& b) {
return a.tokenID == b.tokenID && a.transactionIndex == b.transactionIndex;
});
txs.erase(last, txs.end());
return {txs, {}};
}
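The sort-then-`std::unique` deduplication above is a standard pattern; a standalone sketch over plain pairs (note the lexicographic sort comparator — `std::sort` requires a strict weak ordering, which a bare `&&` of two `<` comparisons does not provide):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Stand-in for the (tokenID, transactionIndex) key pair of
// NFTTransactionsData.
using TxKey = std::pair<std::uint64_t, std::uint32_t>;

// Deduplicate keys the same way the cancel-offer handler does:
// sort lexicographically, collapse adjacent equals, erase the tail.
void
dedupe(std::vector<TxKey>& txs)
{
    std::sort(txs.begin(), txs.end(), [](TxKey const& a, TxKey const& b) {
        // lexicographic: tokenID first, then transactionIndex
        return a.first < b.first || (a.first == b.first && a.second < b.second);
    });
    auto last = std::unique(txs.begin(), txs.end(), [](TxKey const& a, TxKey const& b) {
        return a.first == b.first && a.second == b.second;
    });
    txs.erase(last, txs.end());
}
```

One transaction can cancel offers for several NFTs, so the key must include the token ID and not just the transaction index — that is exactly what the comment in the handler explains.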
@@ -347,20 +289,13 @@ getNFTokenCancelOfferData(
// This transaction never returns an NFTokensData because it does not
// change the state of an NFT itself.
std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
getNFTokenCreateOfferData(
ripple::TxMeta const& txMeta,
ripple::STTx const& sttx)
getNFTokenCreateOfferData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
{
return {
{NFTTransactionsData(
sttx.getFieldH256(ripple::sfNFTokenID),
txMeta,
sttx.getTransactionID())},
{}};
return {{NFTTransactionsData(sttx.getFieldH256(ripple::sfNFTokenID), txMeta, sttx.getTransactionID())}, {}};
}
std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
getNFTData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
getNFTDataFromTx(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
{
if (txMeta.getResultTER() != ripple::tesSUCCESS)
return {{}, {}};
@@ -386,3 +321,20 @@ getNFTData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
return {{}, {}};
}
}
std::vector<NFTsData>
getNFTDataFromObj(std::uint32_t const seq, std::string const& key, std::string const& blob)
{
std::vector<NFTsData> nfts;
ripple::STLedgerEntry const sle =
ripple::STLedgerEntry(ripple::SerialIter{blob.data(), blob.size()}, ripple::uint256::fromVoid(key.data()));
if (sle.getFieldU16(ripple::sfLedgerEntryType) != ripple::ltNFTOKEN_PAGE)
return nfts;
auto const owner = ripple::AccountID::fromVoid(key.data());
for (ripple::STObject const& node : sle.getFieldArray(ripple::sfNFTokens))
nfts.emplace_back(node.getFieldH256(ripple::sfNFTokenID), seq, owner, node.getFieldVL(ripple::sfURI));
return nfts;
}

src/etl/NFTHelpers.h Normal file

@@ -0,0 +1,33 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/DBHelpers.h>
#include <ripple/protocol/STTx.h>
#include <ripple/protocol/TxMeta.h>
// Pulling from tx via ReportingETL
std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
getNFTDataFromTx(ripple::TxMeta const& txMeta, ripple::STTx const& sttx);
// Pulling from ledger object via loadInitialLedger
std::vector<NFTsData>
getNFTDataFromObj(std::uint32_t const seq, std::string const& key, std::string const& blob);


@@ -31,23 +31,9 @@ ProbingETLSource::ProbingETLSource(
ETLLoadBalancer& balancer,
boost::asio::ssl::context sslCtx)
: sslCtx_{std::move(sslCtx)}
, sslSrc_{make_shared<SslETLSource>(
config,
ioc,
std::ref(sslCtx_),
backend,
subscriptions,
nwvl,
balancer,
make_SSLHooks())}
, plainSrc_{make_shared<PlainETLSource>(
config,
ioc,
backend,
subscriptions,
nwvl,
balancer,
make_PlainHooks())}
, sslSrc_{make_shared<
SslETLSource>(config, ioc, std::ref(sslCtx_), backend, subscriptions, nwvl, balancer, make_SSLHooks())}
, plainSrc_{make_shared<PlainETLSource>(config, ioc, backend, subscriptions, nwvl, balancer, make_PlainHooks())}
{
}
@@ -107,33 +93,32 @@ std::string
ProbingETLSource::toString() const
{
if (!currentSrc_)
return "{probing... ws: " + plainSrc_->toString() +
", wss: " + sslSrc_->toString() + "}";
return "{probing... ws: " + plainSrc_->toString() + ", wss: " + sslSrc_->toString() + "}";
return currentSrc_->toString();
}
boost::uuids::uuid
ProbingETLSource::token() const
{
if (!currentSrc_)
return boost::uuids::nil_uuid();
return currentSrc_->token();
}
bool
ProbingETLSource::loadInitialLedger(
std::uint32_t ledgerSequence,
std::uint32_t numMarkers,
bool cacheOnly)
ProbingETLSource::loadInitialLedger(std::uint32_t ledgerSequence, std::uint32_t numMarkers, bool cacheOnly)
{
if (!currentSrc_)
return false;
return currentSrc_->loadInitialLedger(
ledgerSequence, numMarkers, cacheOnly);
return currentSrc_->loadInitialLedger(ledgerSequence, numMarkers, cacheOnly);
}
std::pair<grpc::Status, org::xrpl::rpc::v1::GetLedgerResponse>
ProbingETLSource::fetchLedger(
uint32_t ledgerSequence,
bool getObjects,
bool getObjectNeighbors)
ProbingETLSource::fetchLedger(uint32_t ledgerSequence, bool getObjects, bool getObjectNeighbors)
{
if (!currentSrc_)
return {};
return currentSrc_->fetchLedger(
ledgerSequence, getObjects, getObjectNeighbors);
return currentSrc_->fetchLedger(ledgerSequence, getObjects, getObjectNeighbors);
}
std::optional<boost::json::object>
@@ -171,8 +156,7 @@ ProbingETLSource::make_SSLHooks() noexcept
{
plainSrc_->pause();
currentSrc_ = sslSrc_;
log_.info() << "Selected WSS as the main source: "
<< currentSrc_->toString();
log_.info() << "Selected WSS as the main source: " << currentSrc_->toString();
}
return ETLSourceHooks::Action::PROCEED;
},
@@ -201,8 +185,7 @@ ProbingETLSource::make_PlainHooks() noexcept
{
sslSrc_->pause();
currentSrc_ = plainSrc_;
log_.info() << "Selected Plain WS as the main source: "
<< currentSrc_->toString();
log_.info() << "Selected Plain WS as the main source: " << currentSrc_->toString();
}
return ETLSourceHooks::Action::PROCEED;
},


@@ -53,8 +53,7 @@ public:
std::shared_ptr<SubscriptionManager> subscriptions,
std::shared_ptr<NetworkValidatedLedgers> nwvl,
ETLLoadBalancer& balancer,
boost::asio::ssl::context sslCtx = boost::asio::ssl::context{
boost::asio::ssl::context::tlsv12});
boost::asio::ssl::context sslCtx = boost::asio::ssl::context{boost::asio::ssl::context::tlsv12});
~ProbingETLSource() = default;
@@ -80,22 +79,17 @@ public:
toString() const override;
bool
loadInitialLedger(
std::uint32_t ledgerSequence,
std::uint32_t numMarkers,
bool cacheOnly = false) override;
loadInitialLedger(std::uint32_t ledgerSequence, std::uint32_t numMarkers, bool cacheOnly = false) override;
std::pair<grpc::Status, org::xrpl::rpc::v1::GetLedgerResponse>
fetchLedger(
uint32_t ledgerSequence,
bool getObjects = true,
bool getObjectNeighbors = false) override;
fetchLedger(uint32_t ledgerSequence, bool getObjects = true, bool getObjectNeighbors = false) override;
std::optional<boost::json::object>
forwardToRippled(
boost::json::object const& request,
std::string const& clientIp,
boost::asio::yield_context& yield) const override;
forwardToRippled(boost::json::object const& request, std::string const& clientIp, boost::asio::yield_context& yield)
const override;
boost::uuids::uuid
token() const override;
private:
std::optional<boost::json::object>


@@ -21,6 +21,8 @@
#include <ripple/beast/core/CurrentThreadName.h>
#include <backend/DBHelpers.h>
#include <etl/NFTHelpers.h>
#include <etl/ReportingETL.h>
#include <log/Logger.h>
#include <subscriptions/SubscriptionManager.h>
@@ -45,23 +47,19 @@ std::string
toString(ripple::LedgerInfo const& info)
{
std::stringstream ss;
ss << "LedgerInfo { Sequence : " << info.seq
<< " Hash : " << strHex(info.hash) << " TxHash : " << strHex(info.txHash)
<< " AccountHash : " << strHex(info.accountHash)
ss << "LedgerInfo { Sequence : " << info.seq << " Hash : " << strHex(info.hash)
<< " TxHash : " << strHex(info.txHash) << " AccountHash : " << strHex(info.accountHash)
<< " ParentHash : " << strHex(info.parentHash) << " }";
return ss.str();
}
} // namespace clio::detail
FormattedTransactionsData
ReportingETL::insertTransactions(
ripple::LedgerInfo const& ledger,
org::xrpl::rpc::v1::GetLedgerResponse& data)
ReportingETL::insertTransactions(ripple::LedgerInfo const& ledger, org::xrpl::rpc::v1::GetLedgerResponse& data)
{
FormattedTransactionsData result;
for (auto& txn :
*(data.mutable_transactions_list()->mutable_transactions()))
for (auto& txn : *(data.mutable_transactions_list()->mutable_transactions()))
{
std::string* raw = txn.mutable_transaction_blob();
@@ -70,18 +68,15 @@ ReportingETL::insertTransactions(
log_.trace() << "Inserting transaction = " << sttx.getTransactionID();
ripple::TxMeta txMeta{
sttx.getTransactionID(), ledger.seq, txn.metadata_blob()};
ripple::TxMeta txMeta{sttx.getTransactionID(), ledger.seq, txn.metadata_blob()};
auto const [nftTxs, maybeNFT] = getNFTData(txMeta, sttx);
result.nfTokenTxData.insert(
result.nfTokenTxData.end(), nftTxs.begin(), nftTxs.end());
auto const [nftTxs, maybeNFT] = getNFTDataFromTx(txMeta, sttx);
result.nfTokenTxData.insert(result.nfTokenTxData.end(), nftTxs.begin(), nftTxs.end());
if (maybeNFT)
result.nfTokensData.push_back(*maybeNFT);
auto journal = ripple::debugLog();
result.accountTxData.emplace_back(
txMeta, sttx.getTransactionID(), journal);
result.accountTxData.emplace_back(txMeta, sttx.getTransactionID(), journal);
std::string keyStr{(const char*)sttx.getTransactionID().data(), 32};
backend_->writeTransaction(
std::move(keyStr),
@@ -94,18 +89,12 @@ ReportingETL::insertTransactions(
// Remove all but the last NFTsData for each id. unique removes all
// but the first of a group, so we want to reverse sort by transaction
// index
std::sort(
result.nfTokensData.begin(),
result.nfTokensData.end(),
[](NFTsData const& a, NFTsData const& b) {
return a.tokenID > b.tokenID &&
a.transactionIndex > b.transactionIndex;
});
std::sort(result.nfTokensData.begin(), result.nfTokensData.end(), [](NFTsData const& a, NFTsData const& b) {
return a.tokenID > b.tokenID && a.transactionIndex > b.transactionIndex;
});
// Now we can unique the NFTs by tokenID.
auto last = std::unique(
result.nfTokensData.begin(),
result.nfTokensData.end(),
[](NFTsData const& a, NFTsData const& b) {
auto last =
std::unique(result.nfTokensData.begin(), result.nfTokensData.end(), [](NFTsData const& a, NFTsData const& b) {
return a.tokenID == b.tokenID;
});
result.nfTokensData.erase(last, result.nfTokensData.end());
@@ -128,13 +117,11 @@ ReportingETL::loadInitialLedger(uint32_t startingSequence)
// fetch the ledger from the network. This function will not return until
// either the fetch is successful, or the server is being shutdown. This
// only fetches the ledger header and the transactions+metadata
std::optional<org::xrpl::rpc::v1::GetLedgerResponse> ledgerData{
fetchLedgerData(startingSequence)};
std::optional<org::xrpl::rpc::v1::GetLedgerResponse> ledgerData{fetchLedgerData(startingSequence)};
if (!ledgerData)
return {};
ripple::LedgerInfo lgrInfo =
deserializeHeader(ripple::makeSlice(ledgerData->ledger_header()));
ripple::LedgerInfo lgrInfo = deserializeHeader(ripple::makeSlice(ledgerData->ledger_header()));
log_.debug() << "Deserialized ledger header. " << detail::toString(lgrInfo);
@@ -143,12 +130,10 @@ ReportingETL::loadInitialLedger(uint32_t startingSequence)
log_.debug() << "Started writes";
backend_->writeLedger(
lgrInfo, std::move(*ledgerData->mutable_ledger_header()));
backend_->writeLedger(lgrInfo, std::move(*ledgerData->mutable_ledger_header()));
log_.debug() << "Wrote ledger";
FormattedTransactionsData insertTxResult =
insertTransactions(lgrInfo, *ledgerData);
FormattedTransactionsData insertTxResult = insertTransactions(lgrInfo, *ledgerData);
log_.debug() << "Inserted txns";
// download the full account state map. This function downloads full
@@ -162,11 +147,9 @@ ReportingETL::loadInitialLedger(uint32_t startingSequence)
if (!stopping_)
{
backend_->writeAccountTransactions(
std::move(insertTxResult.accountTxData));
backend_->writeAccountTransactions(std::move(insertTxResult.accountTxData));
backend_->writeNFTs(std::move(insertTxResult.nfTokensData));
backend_->writeNFTTransactions(
std::move(insertTxResult.nfTokenTxData));
backend_->writeNFTTransactions(std::move(insertTxResult.nfTokenTxData));
}
backend_->finishWrites(startingSequence);
});
@@ -183,10 +166,8 @@ ReportingETL::publishLedger(ripple::LedgerInfo const& lgrInfo)
{
log_.info() << "Updating cache";
std::vector<Backend::LedgerObject> diff =
Backend::synchronousAndRetryOnTimeout([&](auto yield) {
return backend_->fetchLedgerDiff(lgrInfo.seq, yield);
});
std::vector<Backend::LedgerObject> diff = Backend::synchronousAndRetryOnTimeout(
[&](auto yield) { return backend_->fetchLedgerDiff(lgrInfo.seq, yield); });
backend_->cache().update(diff, lgrInfo.seq);
backend_->updateRange(lgrInfo.seq);
@@ -199,22 +180,16 @@ ReportingETL::publishLedger(ripple::LedgerInfo const& lgrInfo)
if (age < 600)
{
std::optional<ripple::Fees> fees =
Backend::synchronousAndRetryOnTimeout([&](auto yield) {
return backend_->fetchFees(lgrInfo.seq, yield);
});
Backend::synchronousAndRetryOnTimeout([&](auto yield) { return backend_->fetchFees(lgrInfo.seq, yield); });
std::vector<Backend::TransactionAndMetadata> transactions =
Backend::synchronousAndRetryOnTimeout([&](auto yield) {
return backend_->fetchAllTransactionsInLedger(
lgrInfo.seq, yield);
});
std::vector<Backend::TransactionAndMetadata> transactions = Backend::synchronousAndRetryOnTimeout(
[&](auto yield) { return backend_->fetchAllTransactionsInLedger(lgrInfo.seq, yield); });
auto ledgerRange = backend_->fetchLedgerRange();
assert(ledgerRange);
assert(fees);
std::string range = std::to_string(ledgerRange->minSequence) + "-" +
std::to_string(ledgerRange->maxSequence);
std::string range = std::to_string(ledgerRange->minSequence) + "-" + std::to_string(ledgerRange->maxSequence);
subscriptions_->pubLedger(lgrInfo, *fees, range, transactions.size());
@@ -226,15 +201,12 @@ ReportingETL::publishLedger(ripple::LedgerInfo const& lgrInfo)
log_.info() << "Published ledger " << std::to_string(lgrInfo.seq);
}
else
log_.info() << "Skipping publishing ledger "
<< std::to_string(lgrInfo.seq);
log_.info() << "Skipping publishing ledger " << std::to_string(lgrInfo.seq);
setLastPublish();
}
bool
ReportingETL::publishLedger(
uint32_t ledgerSequence,
std::optional<uint32_t> maxAttempts)
ReportingETL::publishLedger(uint32_t ledgerSequence, std::optional<uint32_t> maxAttempts)
{
log_.info() << "Attempting to publish ledger = " << ledgerSequence;
size_t numAttempts = 0;
@@ -251,8 +223,7 @@ ReportingETL::publishLedger(
// second in between each attempt.
if (maxAttempts && numAttempts >= maxAttempts)
{
log_.debug() << "Failed to publish ledger after " << numAttempts
<< " attempts.";
log_.debug() << "Failed to publish ledger after " << numAttempts << " attempts.";
return false;
}
std::this_thread::sleep_for(std::chrono::seconds(1));
@@ -261,9 +232,8 @@ ReportingETL::publishLedger(
}
else
{
auto lgr = Backend::synchronousAndRetryOnTimeout([&](auto yield) {
return backend_->fetchLedgerBySequence(ledgerSequence, yield);
});
auto lgr = Backend::synchronousAndRetryOnTimeout(
[&](auto yield) { return backend_->fetchLedgerBySequence(ledgerSequence, yield); });
assert(lgr);
publishLedger(*lgr);
@@ -279,8 +249,7 @@ ReportingETL::fetchLedgerData(uint32_t seq)
{
log_.debug() << "Attempting to fetch ledger with sequence = " << seq;
std::optional<org::xrpl::rpc::v1::GetLedgerResponse> response =
loadBalancer_->fetchLedger(seq, false, false);
std::optional<org::xrpl::rpc::v1::GetLedgerResponse> response = loadBalancer_->fetchLedger(seq, false, false);
if (response)
log_.trace() << "GetLedger reply = " << response->DebugString();
return response;
@@ -291,12 +260,8 @@ ReportingETL::fetchLedgerDataAndDiff(uint32_t seq)
{
log_.debug() << "Attempting to fetch ledger with sequence = " << seq;
std::optional<org::xrpl::rpc::v1::GetLedgerResponse> response =
loadBalancer_->fetchLedger(
seq,
true,
!backend_->cache().isFull() ||
backend_->cache().latestLedgerSequence() >= seq);
std::optional<org::xrpl::rpc::v1::GetLedgerResponse> response = loadBalancer_->fetchLedger(
seq, true, !backend_->cache().isFull() || backend_->cache().latestLedgerSequence() >= seq);
if (response)
log_.trace() << "GetLedger reply = " << response->DebugString();
return response;
@@ -306,8 +271,7 @@ std::pair<ripple::LedgerInfo, bool>
ReportingETL::buildNextLedger(org::xrpl::rpc::v1::GetLedgerResponse& rawData)
{
log_.debug() << "Beginning ledger update";
ripple::LedgerInfo lgrInfo =
deserializeHeader(ripple::makeSlice(rawData.ledger_header()));
ripple::LedgerInfo lgrInfo = deserializeHeader(ripple::makeSlice(rawData.ledger_header()));
log_.debug() << "Deserialized ledger header. " << detail::toString(lgrInfo);
backend_->startWrites();
@@ -325,14 +289,10 @@ ReportingETL::buildNextLedger(org::xrpl::rpc::v1::GetLedgerResponse& rawData)
auto firstBook = std::move(*obj.mutable_first_book());
if (!firstBook.size())
firstBook = uint256ToString(Backend::lastKey);
log_.debug() << "writing book successor "
<< ripple::strHex(obj.book_base()) << " - "
log_.debug() << "writing book successor " << ripple::strHex(obj.book_base()) << " - "
<< ripple::strHex(firstBook);
backend_->writeSuccessor(
std::move(*obj.mutable_book_base()),
lgrInfo.seq,
std::move(firstBook));
backend_->writeSuccessor(std::move(*obj.mutable_book_base()), lgrInfo.seq, std::move(firstBook));
}
for (auto& obj : *(rawData.mutable_ledger_objects()->mutable_objects()))
{
@@ -345,32 +305,20 @@ ReportingETL::buildNextLedger(org::xrpl::rpc::v1::GetLedgerResponse& rawData)
if (!succPtr->size())
*succPtr = uint256ToString(Backend::lastKey);
if (obj.mod_type() ==
org::xrpl::rpc::v1::RawLedgerObject::DELETED)
if (obj.mod_type() == org::xrpl::rpc::v1::RawLedgerObject::DELETED)
{
log_.debug() << "Modifying successors for deleted object "
<< ripple::strHex(obj.key()) << " - "
<< ripple::strHex(*predPtr) << " - "
<< ripple::strHex(*succPtr);
log_.debug() << "Modifying successors for deleted object " << ripple::strHex(obj.key()) << " - "
<< ripple::strHex(*predPtr) << " - " << ripple::strHex(*succPtr);
backend_->writeSuccessor(
std::move(*predPtr), lgrInfo.seq, std::move(*succPtr));
backend_->writeSuccessor(std::move(*predPtr), lgrInfo.seq, std::move(*succPtr));
}
else
{
log_.debug() << "adding successor for new object "
<< ripple::strHex(obj.key()) << " - "
<< ripple::strHex(*predPtr) << " - "
<< ripple::strHex(*succPtr);
log_.debug() << "adding successor for new object " << ripple::strHex(obj.key()) << " - "
<< ripple::strHex(*predPtr) << " - " << ripple::strHex(*succPtr);
backend_->writeSuccessor(
std::move(*predPtr),
lgrInfo.seq,
std::string{obj.key()});
backend_->writeSuccessor(
std::string{obj.key()},
lgrInfo.seq,
std::move(*succPtr));
backend_->writeSuccessor(std::move(*predPtr), lgrInfo.seq, std::string{obj.key()});
backend_->writeSuccessor(std::string{obj.key()}, lgrInfo.seq, std::move(*succPtr));
}
}
else
@@ -386,17 +334,13 @@ ReportingETL::buildNextLedger(org::xrpl::rpc::v1::GetLedgerResponse& rawData)
{
auto key = ripple::uint256::fromVoidChecked(obj.key());
assert(key);
cacheUpdates.push_back(
{*key, {obj.mutable_data()->begin(), obj.mutable_data()->end()}});
log_.debug() << "key = " << ripple::strHex(*key)
<< " - mod type = " << obj.mod_type();
cacheUpdates.push_back({*key, {obj.mutable_data()->begin(), obj.mutable_data()->end()}});
log_.debug() << "key = " << ripple::strHex(*key) << " - mod type = " << obj.mod_type();
if (obj.mod_type() != org::xrpl::rpc::v1::RawLedgerObject::MODIFIED &&
!rawData.object_neighbors_included())
if (obj.mod_type() != org::xrpl::rpc::v1::RawLedgerObject::MODIFIED && !rawData.object_neighbors_included())
{
log_.debug() << "object neighbors not included. using cache";
if (!backend_->cache().isFull() ||
backend_->cache().latestLedgerSequence() != lgrInfo.seq - 1)
if (!backend_->cache().isFull() || backend_->cache().latestLedgerSequence() != lgrInfo.seq - 1)
throw std::runtime_error(
"Cache is not full, but object neighbors were not "
"included");
@@ -415,20 +359,15 @@ ReportingETL::buildNextLedger(org::xrpl::rpc::v1::GetLedgerResponse& rawData)
{
log_.debug() << "Is book dir. key = " << ripple::strHex(*key);
auto bookBase = getBookBase(*key);
auto oldFirstDir =
backend_->cache().getSuccessor(bookBase, lgrInfo.seq - 1);
auto oldFirstDir = backend_->cache().getSuccessor(bookBase, lgrInfo.seq - 1);
assert(oldFirstDir);
// We deleted the first directory, or we added a directory prior
// to the old first directory
if ((isDeleted && key == oldFirstDir->key) ||
(!isDeleted && key < oldFirstDir->key))
if ((isDeleted && key == oldFirstDir->key) || (!isDeleted && key < oldFirstDir->key))
{
log_.debug()
<< "Need to recalculate book base successor. base = "
<< ripple::strHex(bookBase)
<< " - key = " << ripple::strHex(*key)
<< " - isDeleted = " << isDeleted
<< " - seq = " << lgrInfo.seq;
log_.debug() << "Need to recalculate book base successor. base = " << ripple::strHex(bookBase)
<< " - key = " << ripple::strHex(*key) << " - isDeleted = " << isDeleted
<< " - seq = " << lgrInfo.seq;
bookSuccessorsToCalculate.insert(bookBase);
}
}
@@ -436,18 +375,14 @@ ReportingETL::buildNextLedger(org::xrpl::rpc::v1::GetLedgerResponse& rawData)
if (obj.mod_type() == org::xrpl::rpc::v1::RawLedgerObject::MODIFIED)
modified.insert(*key);
backend_->writeLedgerObject(
std::move(*obj.mutable_key()),
lgrInfo.seq,
std::move(*obj.mutable_data()));
backend_->writeLedgerObject(std::move(*obj.mutable_key()), lgrInfo.seq, std::move(*obj.mutable_data()));
}
backend_->cache().update(cacheUpdates, lgrInfo.seq);
// rippled didn't send successor information, so use our cache
if (!rawData.object_neighbors_included())
{
log_.debug() << "object neighbors not included. using cache";
if (!backend_->cache().isFull() ||
backend_->cache().latestLedgerSequence() != lgrInfo.seq)
if (!backend_->cache().isFull() || backend_->cache().latestLedgerSequence() != lgrInfo.seq)
throw std::runtime_error(
"Cache is not full, but object neighbors were not "
"included");
@@ -463,31 +398,18 @@ ReportingETL::buildNextLedger(org::xrpl::rpc::v1::GetLedgerResponse& rawData)
ub = {Backend::lastKey, {}};
if (obj.blob.size() == 0)
{
log_.debug() << "writing successor for deleted object "
<< ripple::strHex(obj.key) << " - "
<< ripple::strHex(lb->key) << " - "
<< ripple::strHex(ub->key);
log_.debug() << "writing successor for deleted object " << ripple::strHex(obj.key) << " - "
<< ripple::strHex(lb->key) << " - " << ripple::strHex(ub->key);
backend_->writeSuccessor(
uint256ToString(lb->key),
lgrInfo.seq,
uint256ToString(ub->key));
backend_->writeSuccessor(uint256ToString(lb->key), lgrInfo.seq, uint256ToString(ub->key));
}
else
{
backend_->writeSuccessor(
uint256ToString(lb->key),
lgrInfo.seq,
uint256ToString(obj.key));
backend_->writeSuccessor(
uint256ToString(obj.key),
lgrInfo.seq,
uint256ToString(ub->key));
backend_->writeSuccessor(uint256ToString(lb->key), lgrInfo.seq, uint256ToString(obj.key));
backend_->writeSuccessor(uint256ToString(obj.key), lgrInfo.seq, uint256ToString(ub->key));
log_.debug() << "writing successor for new object "
<< ripple::strHex(lb->key) << " - "
<< ripple::strHex(obj.key) << " - "
<< ripple::strHex(ub->key);
log_.debug() << "writing successor for new object " << ripple::strHex(lb->key) << " - "
<< ripple::strHex(obj.key) << " - " << ripple::strHex(ub->key);
}
}
for (auto const& base : bookSuccessorsToCalculate)
@@ -495,34 +417,24 @@ ReportingETL::buildNextLedger(org::xrpl::rpc::v1::GetLedgerResponse& rawData)
auto succ = backend_->cache().getSuccessor(base, lgrInfo.seq);
if (succ)
{
backend_->writeSuccessor(
uint256ToString(base),
lgrInfo.seq,
uint256ToString(succ->key));
backend_->writeSuccessor(uint256ToString(base), lgrInfo.seq, uint256ToString(succ->key));
log_.debug()
<< "Updating book successor " << ripple::strHex(base)
<< " - " << ripple::strHex(succ->key);
log_.debug() << "Updating book successor " << ripple::strHex(base) << " - "
<< ripple::strHex(succ->key);
}
else
{
backend_->writeSuccessor(
uint256ToString(base),
lgrInfo.seq,
uint256ToString(Backend::lastKey));
backend_->writeSuccessor(uint256ToString(base), lgrInfo.seq, uint256ToString(Backend::lastKey));
log_.debug()
<< "Updating book successor " << ripple::strHex(base)
<< " - " << ripple::strHex(Backend::lastKey);
log_.debug() << "Updating book successor " << ripple::strHex(base) << " - "
<< ripple::strHex(Backend::lastKey);
}
}
}
log_.debug()
<< "Inserted/modified/deleted all objects. Number of objects = "
<< rawData.ledger_objects().objects_size();
FormattedTransactionsData insertTxResult =
insertTransactions(lgrInfo, rawData);
log_.debug() << "Inserted/modified/deleted all objects. Number of objects = "
<< rawData.ledger_objects().objects_size();
FormattedTransactionsData insertTxResult = insertTransactions(lgrInfo, rawData);
log_.debug() << "Inserted all transactions. Number of transactions = "
<< rawData.transactions_list().transactions_size();
backend_->writeAccountTransactions(std::move(insertTxResult.accountTxData));
@@ -530,8 +442,8 @@ ReportingETL::buildNextLedger(org::xrpl::rpc::v1::GetLedgerResponse& rawData)
backend_->writeNFTTransactions(std::move(insertTxResult.nfTokenTxData));
log_.debug() << "wrote account_tx";
auto [success, duration] = util::timed<std::chrono::duration<double>>(
[&]() { return backend_->finishWrites(lgrInfo.seq); });
auto [success, duration] =
util::timed<std::chrono::duration<double>>([&]() { return backend_->finishWrites(lgrInfo.seq); });
log_.debug() << "Finished writes. took " << std::to_string(duration);
log_.debug() << "Finished ledger update. " << detail::toString(lgrInfo);
@@ -583,12 +495,10 @@ ReportingETL::runETLPipeline(uint32_t startSequence, int numExtractors)
std::optional<uint32_t> lastPublishedSequence;
uint32_t maxQueueSize = 1000 / numExtractors;
auto begin = std::chrono::system_clock::now();
using QueueType =
ThreadSafeQueue<std::optional<org::xrpl::rpc::v1::GetLedgerResponse>>;
using QueueType = ThreadSafeQueue<std::optional<org::xrpl::rpc::v1::GetLedgerResponse>>;
std::vector<std::shared_ptr<QueueType>> queues;
auto getNext = [&queues, &startSequence, &numExtractors](
uint32_t sequence) -> std::shared_ptr<QueueType> {
auto getNext = [&queues, &startSequence, &numExtractors](uint32_t sequence) -> std::shared_ptr<QueueType> {
return queues[(sequence - startSequence) % numExtractors];
};
std::vector<std::thread> extractors;
@@ -597,12 +507,7 @@ ReportingETL::runETLPipeline(uint32_t startSequence, int numExtractors)
auto transformQueue = std::make_shared<QueueType>(maxQueueSize);
queues.push_back(transformQueue);
extractors.emplace_back([this,
&startSequence,
&writeConflict,
transformQueue,
i,
numExtractors]() {
extractors.emplace_back([this, &startSequence, &writeConflict, transformQueue, i, numExtractors]() {
beast::setCurrentThreadName("rippled: ReportingETL extract");
uint32_t currentSequence = startSequence + i;
@@ -614,14 +519,11 @@ ReportingETL::runETLPipeline(uint32_t startSequence, int numExtractors)
// the entire server is shutting down. This can be detected in a
// variety of ways. See the comment at the top of the function
while ((!finishSequence_ || currentSequence <= *finishSequence_) &&
networkValidatedLedgers_->waitUntilValidatedByNetwork(
currentSequence) &&
!writeConflict && !isStopping())
networkValidatedLedgers_->waitUntilValidatedByNetwork(currentSequence) && !writeConflict &&
!isStopping())
{
auto [fetchResponse, time] =
util::timed<std::chrono::duration<double>>([&]() {
return fetchLedgerDataAndDiff(currentSequence);
});
auto [fetchResponse, time] = util::timed<std::chrono::duration<double>>(
[&]() { return fetchLedgerDataAndDiff(currentSequence); });
totalTime += time;
// if the fetch is unsuccessful, stop. fetchLedger only
@@ -635,16 +537,11 @@ ReportingETL::runETLPipeline(uint32_t startSequence, int numExtractors)
{
break;
}
auto tps =
fetchResponse->transactions_list().transactions_size() /
time;
auto tps = fetchResponse->transactions_list().transactions_size() / time;
log_.info() << "Extract phase time = " << time
<< " . Extract phase tps = " << tps
<< " . Avg extract time = "
<< totalTime / (currentSequence - startSequence + 1)
<< " . thread num = " << i
<< " . seq = " << currentSequence;
log_.info() << "Extract phase time = " << time << " . Extract phase tps = " << tps
<< " . Avg extract time = " << totalTime / (currentSequence - startSequence + 1)
<< " . thread num = " << i << " . seq = " << currentSequence;
transformQueue->push(std::move(fetchResponse));
currentSequence += numExtractors;
@@ -656,19 +553,13 @@ ReportingETL::runETLPipeline(uint32_t startSequence, int numExtractors)
});
}
std::thread transformer{[this,
&minSequence,
&writeConflict,
&startSequence,
&getNext,
&lastPublishedSequence]() {
std::thread transformer{[this, &minSequence, &writeConflict, &startSequence, &getNext, &lastPublishedSequence]() {
beast::setCurrentThreadName("rippled: ReportingETL transform");
uint32_t currentSequence = startSequence;
while (!writeConflict)
{
std::optional<org::xrpl::rpc::v1::GetLedgerResponse> fetchResponse{
getNext(currentSequence)->pop()};
std::optional<org::xrpl::rpc::v1::GetLedgerResponse> fetchResponse{getNext(currentSequence)->pop()};
++currentSequence;
// if fetchResponse is an empty optional, the extractor thread
// has stopped and the transformer should stop as well
@@ -679,8 +570,7 @@ ReportingETL::runETLPipeline(uint32_t startSequence, int numExtractors)
if (isStopping())
continue;
auto numTxns =
fetchResponse->transactions_list().transactions_size();
auto numTxns = fetchResponse->transactions_list().transactions_size();
auto numObjects = fetchResponse->ledger_objects().objects_size();
auto start = std::chrono::system_clock::now();
auto [lgrInfo, success] = buildNextLedger(*fetchResponse);
@@ -688,40 +578,31 @@ ReportingETL::runETLPipeline(uint32_t startSequence, int numExtractors)
auto duration = ((end - start).count()) / 1000000000.0;
if (success)
log_.info()
<< "Load phase of etl : "
<< "Successfully wrote ledger! Ledger info: "
<< detail::toString(lgrInfo) << ". txn count = " << numTxns
<< ". object count = " << numObjects
<< ". load time = " << duration
<< ". load txns per second = " << numTxns / duration
<< ". load objs per second = " << numObjects / duration;
log_.info() << "Load phase of etl : "
<< "Successfully wrote ledger! Ledger info: " << detail::toString(lgrInfo)
<< ". txn count = " << numTxns << ". object count = " << numObjects
<< ". load time = " << duration << ". load txns per second = " << numTxns / duration
<< ". load objs per second = " << numObjects / duration;
else
log_.error()
<< "Error writing ledger. " << detail::toString(lgrInfo);
log_.error() << "Error writing ledger. " << detail::toString(lgrInfo);
// success is false if the ledger was already written
if (success)
{
boost::asio::post(publishStrand_, [this, lgrInfo = lgrInfo]() {
publishLedger(lgrInfo);
});
boost::asio::post(publishStrand_, [this, lgrInfo = lgrInfo]() { publishLedger(lgrInfo); });
lastPublishedSequence = lgrInfo.seq;
}
writeConflict = !success;
// TODO move online delete logic to an admin RPC call
if (onlineDeleteInterval_ && !deleting_ &&
lgrInfo.seq - minSequence > *onlineDeleteInterval_)
if (onlineDeleteInterval_ && !deleting_ && lgrInfo.seq - minSequence > *onlineDeleteInterval_)
{
deleting_ = true;
ioContext_.post([this, &minSequence]() {
log_.info() << "Running online delete";
Backend::synchronous(
[&](boost::asio::yield_context& yield) {
backend_->doOnlineDelete(
*onlineDeleteInterval_, yield);
});
Backend::synchronous([&](boost::asio::yield_context& yield) {
backend_->doOnlineDelete(*onlineDeleteInterval_, yield);
});
log_.info() << "Finished online delete";
auto rng = backend_->fetchLedgerRange();
@@ -742,8 +623,7 @@ ReportingETL::runETLPipeline(uint32_t startSequence, int numExtractors)
for (auto& t : extractors)
t.join();
auto end = std::chrono::system_clock::now();
log_.debug() << "Extracted and wrote "
<< *lastPublishedSequence - startSequence << " in "
log_.debug() << "Extracted and wrote " << *lastPublishedSequence - startSequence << " in "
<< ((end - begin).count()) / 1000000000.0;
writing_ = false;
@@ -774,20 +654,16 @@ ReportingETL::monitor()
if (startSequence_)
{
log_.info() << "ledger sequence specified in config. "
<< "Will begin ETL process starting with ledger "
<< *startSequence_;
<< "Will begin ETL process starting with ledger " << *startSequence_;
ledger = loadInitialLedger(*startSequence_);
}
else
{
log_.info()
<< "Waiting for next ledger to be validated by network...";
std::optional<uint32_t> mostRecentValidated =
networkValidatedLedgers_->getMostRecent();
log_.info() << "Waiting for next ledger to be validated by network...";
std::optional<uint32_t> mostRecentValidated = networkValidatedLedgers_->getMostRecent();
if (mostRecentValidated)
{
log_.info() << "Ledger " << *mostRecentValidated
<< " has been validated. "
log_.info() << "Ledger " << *mostRecentValidated << " has been validated. "
<< "Downloading...";
ledger = loadInitialLedger(*mostRecentValidated);
}
@@ -803,8 +679,7 @@ ReportingETL::monitor()
rng = backend_->hardFetchLedgerRangeNoThrow();
else
{
log_.error()
<< "Failed to load initial ledger. Exiting monitor loop";
log_.error() << "Failed to load initial ledger. Exiting monitor loop";
return;
}
}
@@ -812,11 +687,9 @@ ReportingETL::monitor()
{
if (startSequence_)
{
log_.warn()
<< "start sequence specified but db is already populated";
log_.warn() << "start sequence specified but db is already populated";
}
log_.info()
<< "Database already populated. Picking up from the tip of history";
log_.info() << "Database already populated. Picking up from the tip of history";
loadCache(rng->maxSequence);
}
assert(rng);
@@ -826,17 +699,14 @@ ReportingETL::monitor()
<< "Starting monitor loop. sequence = " << nextSequence;
while (true)
{
if (auto rng = backend_->hardFetchLedgerRangeNoThrow();
rng && rng->maxSequence >= nextSequence)
if (auto rng = backend_->hardFetchLedgerRangeNoThrow(); rng && rng->maxSequence >= nextSequence)
{
publishLedger(nextSequence, {});
++nextSequence;
}
else if (networkValidatedLedgers_->waitUntilValidatedByNetwork(
nextSequence, 1000))
else if (networkValidatedLedgers_->waitUntilValidatedByNetwork(nextSequence, 1000))
{
log_.info() << "Ledger with sequence = " << nextSequence
<< " has been validated by the network. "
log_.info() << "Ledger with sequence = " << nextSequence << " has been validated by the network. "
<< "Attempting to find in database and publish";
// Attempt to take over responsibility of ETL writer after 10 failed
// attempts to publish the ledger. publishLedger() fails if the
@@ -848,12 +718,10 @@ ReportingETL::monitor()
bool success = publishLedger(nextSequence, timeoutSeconds);
if (!success)
{
log_.warn() << "Failed to publish ledger with sequence = "
<< nextSequence << " . Beginning ETL";
log_.warn() << "Failed to publish ledger with sequence = " << nextSequence << " . Beginning ETL";
// runETLPipeline returns the most recent sequence published,
// or an empty optional if no sequence was published
std::optional<uint32_t> lastPublished =
runETLPipeline(nextSequence, extractorThreads_);
std::optional<uint32_t> lastPublished = runETLPipeline(nextSequence, extractorThreads_);
log_.info() << "Aborting ETL. Falling back to publishing";
// if no ledger was published, don't increment nextSequence
if (lastPublished)
@@ -871,8 +739,7 @@ ReportingETL::loadCacheFromClioPeer(
std::string const& port,
boost::asio::yield_context& yield)
{
log_.info() << "Loading cache from peer. ip = " << ip
<< " . port = " << port;
log_.info() << "Loading cache from peer. ip = " << ip << " . port = " << port;
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace websocket = beast::websocket; // from
@@ -885,8 +752,7 @@ ReportingETL::loadCacheFromClioPeer(
tcp::resolver resolver{ioContext_};
log_.trace() << "Creating websocket";
auto ws = std::make_unique<websocket::stream<beast::tcp_stream>>(ioContext_);
// Look up the domain name
auto const results = resolver.async_resolve(ip, port, yield[ec]);
@@ -926,9 +792,7 @@ ReportingETL::loadCacheFromClioPeer(
do
{
// Send the message
ws->async_write(net::buffer(boost::json::serialize(getRequest(marker))), yield[ec]);
if (ec)
{
log_.error() << "error writing = " << ec.message();
@@ -953,8 +817,7 @@ ReportingETL::loadCacheFromClioPeer(
}
log_.trace() << "Successfully parsed response " << parsed;
if (auto const& response = parsed.as_object(); response.contains("error"))
{
log_.error() << "Response contains error: " << response;
auto const& err = response.at("error");
@@ -963,15 +826,13 @@ ReportingETL::loadCacheFromClioPeer(
++numAttempts;
if (numAttempts >= 5)
{
log_.error() << " ledger not found at peer after 5 attempts. "
"peer = "
<< ip << " ledger = " << ledgerIndex
<< ". Check your config and the health of the peer";
return false;
}
log_.warn() << "Ledger not found. ledger = " << ledgerIndex << ". Sleeping and trying again";
std::this_thread::sleep_for(std::chrono::seconds(1));
continue;
}
@@ -980,8 +841,7 @@ ReportingETL::loadCacheFromClioPeer(
started = true;
auto const& response = parsed.as_object()["result"].as_object();
if (!response.contains("cache_full") || !response.at("cache_full").as_bool())
{
log_.error() << "cache not full for clio node. ip = " << ip;
return false;
@@ -1001,15 +861,12 @@ ReportingETL::loadCacheFromClioPeer(
Backend::LedgerObject stateObject = {};
if (!stateObject.key.parseHex(obj.at("index").as_string().c_str()))
{
log_.error() << "failed to parse object id";
return false;
}
boost::algorithm::unhex(obj.at("data").as_string().c_str(), std::back_inserter(stateObject.blob));
objects.push_back(std::move(stateObject));
}
backend_->cache().update(objects, ledgerIndex, true);
@@ -1018,16 +875,14 @@ ReportingETL::loadCacheFromClioPeer(
log_.debug() << "At marker " << *marker;
} while (marker || !started);
log_.info() << "Finished downloading ledger from clio node. ip = " << ip;
backend_->cache().setFull();
return true;
}
catch (std::exception const& e)
{
log_.error() << "Encountered exception : " << e.what() << " - ip = " << ip;
return false;
}
}
@@ -1057,18 +912,16 @@ ReportingETL::loadCache(uint32_t seq)
if (clioPeers.size() > 0)
{
boost::asio::spawn(ioContext_, [this, seq](boost::asio::yield_context yield) {
for (auto const& peer : clioPeers)
{
// returns true on success
if (loadCacheFromClioPeer(seq, peer.ip, std::to_string(peer.port), yield))
return;
}
// if we couldn't successfully load from any peers, load from db
loadCacheFromDb(seq);
});
return;
}
else
@@ -1076,14 +929,11 @@ ReportingETL::loadCache(uint32_t seq)
loadCacheFromDb(seq);
}
// If loading synchronously, poll cache until full
while (cacheLoadStyle_ == CacheLoadStyle::SYNC && !backend_->cache().isFull())
{
log_.debug() << "Cache not full. Cache size = " << backend_->cache().size() << ". Sleeping ...";
std::this_thread::sleep_for(std::chrono::seconds(10));
log_.info() << "Cache is full. Cache size = " << backend_->cache().size();
}
}
@@ -1099,9 +949,7 @@ ReportingETL::loadCacheFromDb(uint32_t seq)
}
loading = true;
std::vector<Backend::LedgerObject> diff;
auto append = [](auto&& a, auto&& b) { a.insert(std::end(a), std::begin(b), std::end(b)); };
for (size_t i = 0; i < numCacheDiffs_; ++i)
{
@@ -1111,15 +959,9 @@ ReportingETL::loadCacheFromDb(uint32_t seq)
}
std::sort(diff.begin(), diff.end(), [](auto a, auto b) {
return a.key < b.key || (a.key == b.key && a.blob.size() < b.blob.size());
});
diff.erase(std::unique(diff.begin(), diff.end(), [](auto a, auto b) { return a.key == b.key; }), diff.end());
std::vector<std::optional<ripple::uint256>> cursors;
cursors.push_back({});
for (auto& obj : diff)
@@ -1140,8 +982,7 @@ ReportingETL::loadCacheFromDb(uint32_t seq)
cacheDownloader_ = std::thread{[this, seq, cursors]() {
auto startTime = std::chrono::system_clock::now();
auto markers = std::make_shared<std::atomic_int>(0);
auto numRemaining = std::make_shared<std::atomic_int>(cursors.size() - 1);
for (size_t i = 0; i < cursors.size() - 1; ++i)
{
std::optional<ripple::uint256> start = cursors[i];
@@ -1150,33 +991,23 @@ ReportingETL::loadCacheFromDb(uint32_t seq)
++(*markers);
boost::asio::spawn(
ioContext_,
[this, seq, start, end, numRemaining, startTime, markers](boost::asio::yield_context yield) {
std::optional<ripple::uint256> cursor = start;
std::string cursorStr =
cursor.has_value() ? ripple::strHex(cursor.value()) : ripple::strHex(Backend::firstKey);
log_.debug() << "Starting a cursor: " << cursorStr << " markers = " << *markers;
while (!stopping_)
{
auto res = Backend::retryOnTimeout([this, seq, &cursor, &yield]() {
return backend_->fetchLedgerPage(cursor, seq, cachePageFetchSize_, false, yield);
});
backend_->cache().update(res.objects, seq, true);
if (!res.cursor || (end && *(res.cursor) > *end))
break;
log_.trace() << "Loading cache. cache size = " << backend_->cache().size()
<< " - cursor = " << ripple::strHex(res.cursor.value()) << " start = " << cursorStr
<< " markers = " << *markers;
cursor = std::move(res.cursor);
}
@@ -1185,19 +1016,15 @@ ReportingETL::loadCacheFromDb(uint32_t seq)
if (--(*numRemaining) == 0)
{
auto endTime = std::chrono::system_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::seconds>(endTime - startTime);
log_.info() << "Finished loading cache. cache size = " << backend_->cache().size() << ". Took "
<< duration.count() << " seconds";
backend_->cache().setFull();
}
else
{
log_.info() << "Finished a cursor. num remaining = " << *numRemaining
<< " start = " << cursorStr << " markers = " << *markers;
}
});
}
@@ -1221,8 +1048,7 @@ ReportingETL::monitorReadOnly()
latestSequence++;
while (true)
{
if (auto rng = backend_->hardFetchLedgerRangeNoThrow(); rng && rng->maxSequence >= latestSequence)
{
publishLedger(latestSequence, {});
latestSequence = latestSequence + 1;
@@ -1231,8 +1057,7 @@ ReportingETL::monitorReadOnly()
// second passes, whichever occurs first. Even if we don't hear
// from rippled, if ledgers are being written to the db, we
// publish them
networkValidatedLedgers_->waitUntilValidatedByNetwork(latestSequence, 1000);
}
}
@@ -1272,16 +1097,14 @@ ReportingETL::ReportingETL(
if (*interval > max)
{
std::stringstream msg;
msg << "online_delete cannot be greater than " << std::to_string(max);
throw std::runtime_error(msg.str());
}
if (*interval > 0)
onlineDeleteInterval_ = *interval;
}
extractorThreads_ = config.valueOr<uint32_t>("extractor_threads", extractorThreads_);
txnThreshold_ = config.valueOr<size_t>("txn_threshold", txnThreshold_);
if (config.contains("cache"))
{
@@ -1297,10 +1120,8 @@ ReportingETL::ReportingETL(
}
numCacheDiffs_ = cache.valueOr<size_t>("num_diffs", numCacheDiffs_);
numCacheMarkers_ = cache.valueOr<size_t>("num_markers", numCacheMarkers_);
cachePageFetchSize_ = cache.valueOr<size_t>("page_fetch_size", cachePageFetchSize_);
if (auto peers = cache.maybeArray("peers"); peers)
{
@@ -1312,13 +1133,9 @@ ReportingETL::ReportingETL(
// todo: use emplace_back when clang is ready
clioPeers.push_back({ip, port});
}
unsigned seed = std::chrono::system_clock::now().time_since_epoch().count();
std::shuffle(clioPeers.begin(), clioPeers.end(), std::default_random_engine(seed));
}
}
}


@@ -38,13 +38,6 @@
#include <chrono>
/**
* Helper function for the ReportingETL, implemented in NFTHelpers.cpp, to
* pull to-write data out of a transaction that relates to NFTs.
*/
std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
getNFTData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx);
struct AccountTransactionsData;
struct NFTTransactionsData;
struct NFTsData;
@@ -272,9 +265,7 @@ private:
/// (mostly transaction hashes, corresponding nodestore hashes and affected
/// accounts)
FormattedTransactionsData
insertTransactions(ripple::LedgerInfo const& ledger, org::xrpl::rpc::v1::GetLedgerResponse& data);
// TODO update this documentation
/// Build the next ledger using the previous ledger and the extracted data.
@@ -348,8 +339,7 @@ public:
std::shared_ptr<ETLLoadBalancer> balancer,
std::shared_ptr<NetworkValidatedLedgers> ledgers)
{
auto etl = std::make_shared<ReportingETL>(config, ioc, backend, subscriptions, balancer, ledgers);
etl->run();
@@ -380,8 +370,7 @@ public:
result["read_only"] = readOnly_;
auto last = getLastPublish();
if (last.time_since_epoch().count() != 0)
result["last_publish_age_seconds"] = std::to_string(lastPublishAgeSeconds());
return result;
}
@@ -395,8 +384,7 @@ public:
std::uint32_t
lastPublishAgeSeconds() const
{
return std::chrono::duration_cast<std::chrono::seconds>(std::chrono::system_clock::now() - getLastPublish())
.count();
}
@@ -404,8 +392,7 @@ public:
lastCloseAgeSeconds() const
{
std::shared_lock lck(closeTimeMtx_);
auto now = std::chrono::duration_cast<std::chrono::seconds>(std::chrono::system_clock::now().time_since_epoch())
.count();
auto closeTime = lastCloseTime_.time_since_epoch().count();
if (now < (rippleEpochStart + closeTime))


@@ -57,8 +57,7 @@ tag_invoke(boost::json::value_to_tag<Severity>, boost::json::value const& value)
return Severity::DBG;
else if (boost::iequals(logLevel, "info"))
return Severity::NFO;
else if (boost::iequals(logLevel, "warning") || boost::iequals(logLevel, "warn"))
return Severity::WRN;
else if (boost::iequals(logLevel, "error"))
return Severity::ERR;
@@ -82,8 +81,7 @@ LogService::init(Config const& config)
auto const defaultFormat =
"%TimeStamp% (%SourceLocation%) [%ThreadID%] %Channel%:%Severity% "
"%Message%";
std::string format = config.valueOr<std::string>("log_format", defaultFormat);
if (config.valueOr("log_to_console", false))
{
@@ -96,14 +94,9 @@ LogService::init(Config const& config)
boost::filesystem::path dirPath{logDir.value()};
if (!boost::filesystem::exists(dirPath))
boost::filesystem::create_directories(dirPath);
auto const rotationSize = config.valueOr<uint64_t>("log_rotation_size", 2048u) * 1024u * 1024u;
auto const rotationPeriod = config.valueOr<uint32_t>("log_rotation_hour_interval", 12u);
auto const dirSize = config.valueOr<uint64_t>("log_directory_max_size", 50u * 1024u) * 1024u * 1024u;
auto fileSink = boost::log::add_file_log(
keywords::file_name = dirPath / "clio.log",
keywords::target_file_name = dirPath / "clio_%Y-%m-%d_%H-%M-%S.log",
@@ -112,11 +105,9 @@ LogService::init(Config const& config)
keywords::open_mode = std::ios_base::app,
keywords::rotation_size = rotationSize,
keywords::time_based_rotation =
sinks::file::rotation_at_time_interval(boost::posix_time::hours(rotationPeriod)));
fileSink->locked_backend()->set_file_collector(
sinks::file::make_collector(keywords::target = dirPath, keywords::max_size = dirSize));
fileSink->locked_backend()->scan_for_files();
}
@@ -134,26 +125,19 @@ LogService::init(Config const& config)
};
auto core = boost::log::core::get();
auto min_severity = boost::log::expressions::channel_severity_filter(log_channel, log_severity);
for (auto const& channel : channels)
min_severity[channel] = defaultSeverity;
min_severity["Alert"] = Severity::WRN; // Channel for alerts, always warning severity
for (auto const overrides = config.arrayOr("log_channels", {}); auto const& cfg : overrides)
{
auto name = cfg.valueOrThrow<std::string>("channel", "Channel name is required");
if (not std::count(std::begin(channels), std::end(channels), name))
throw std::runtime_error("Can't override settings for log channel " + name + ": invalid channel");
min_severity[name] = cfg.valueOr<Severity>("log_level", defaultSeverity);
}
core->set_filter(min_severity);
@@ -202,8 +186,7 @@ Logger::Pump::pretty_path(source_location_t const& loc, size_t max_depth) const
if (idx == std::string::npos || idx == 0)
break;
}
return file_path.substr(idx == std::string::npos ? 0 : idx + 1) + ':' + std::to_string(loc.line());
}
} // namespace clio


@@ -68,8 +68,7 @@ class SourceLocation
std::size_t line_;
public:
SourceLocation(std::string_view file, std::size_t line) : file_{file}, line_{line}
{
}
std::string_view
@@ -84,8 +83,7 @@ public:
}
};
using source_location_t = SourceLocation;
#define CURRENT_SRC_LOCATION source_location_t(__builtin_FILE(), __builtin_LINE())
#endif
/**
@@ -113,18 +111,6 @@ BOOST_LOG_ATTRIBUTE_KEYWORD(log_channel, "Channel", std::string);
std::ostream&
operator<<(std::ostream& stream, Severity sev);
/**
* @brief Custom JSON parser for @ref Severity.
*
* @param value The JSON string to parse
* @return Severity The parsed severity
* @throws std::runtime_error Thrown if severity is not in the right format
*/
Severity
tag_invoke(
boost::json::value_to_tag<Severity>,
boost::json::value const& value);
/**
* @brief A simple thread-safe logger for the channel specified
* in the constructor.
@@ -135,8 +121,7 @@ tag_invoke(
*/
class Logger final
{
using logger_t = boost::log::sources::severity_channel_logger_mt<Severity, std::string>;
mutable logger_t logger_;
friend class LogService; // to expose the Pump interface
@@ -146,8 +131,7 @@ class Logger final
*/
class Pump final
{
using pump_opt_t = std::optional<boost::log::aux::record_pump<logger_t>>;
boost::log::record rec_;
pump_opt_t pump_ = std::nullopt;
@@ -160,8 +144,7 @@ class Logger final
if (rec_)
{
pump_.emplace(boost::log::aux::make_record_pump(logger, rec_));
pump_->stream() << boost::log::add_value("SourceLocation", pretty_path(loc));
}
}
@@ -192,6 +175,16 @@ class Logger final
private:
[[nodiscard]] std::string
pretty_path(source_location_t const& loc, size_t max_depth = 3) const;
/**
* @brief Custom JSON parser for @ref Severity.
*
* @param value The JSON string to parse
* @return Severity The parsed severity
* @throws std::runtime_error Thrown if severity is not in the right format
*/
friend Severity
tag_invoke(boost::json::value_to_tag<Severity>, boost::json::value const& value);
};
public:
@@ -205,8 +198,7 @@ public:
*
* @param channel The channel this logger will report into.
*/
Logger(std::string channel) : logger_{boost::log::keywords::channel = channel}
{
}
Logger(Logger const&) = default;


@@ -1,7 +1,7 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Copyright (c) 2022-2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
@@ -29,6 +29,9 @@
#include <config/Config.h>
#include <etl/ReportingETL.h>
#include <log/Logger.h>
#include <rpc/Counters.h>
#include <rpc/RPCEngine.h>
#include <rpc/common/impl/HandlerProvider.h>
#include <webserver/Listener.h>
#include <boost/asio/dispatch.hpp>
@@ -78,12 +81,7 @@ parseCli(int argc, char* argv[])
positional.add("conf", 1);
po::variables_map parsed;
po::store(po::command_line_parser(argc, argv).options(description).positional(positional).run(), parsed);
po::notify(parsed);
if (parsed.count("version"))
@@ -94,9 +92,7 @@ parseCli(int argc, char* argv[])
if (parsed.count("help"))
{
std::cout << "Clio server " << Build::getClioFullVersionString() << "\n\n" << description;
std::exit(EXIT_SUCCESS);
}
@@ -136,16 +132,9 @@ parseCerts(Config const& config)
std::string key = contents.str();
ssl::context ctx{ssl::context::tlsv12};
ctx.set_options(boost::asio::ssl::context::default_workarounds | boost::asio::ssl::context::no_sslv2);
ctx.use_certificate_chain(boost::asio::buffer(cert.data(), cert.size()));
ctx.use_private_key(boost::asio::buffer(key.data(), key.size()), boost::asio::ssl::context::file_format::pem);
return ctx;
}
@@ -175,8 +164,7 @@ try
auto const config = ConfigReader::open(configPath);
if (!config)
{
std::cerr << "Couldnt parse config '" << configPath << "'." << std::endl;
return EXIT_FAILURE;
}
@@ -184,9 +172,7 @@ try
LogService::info() << "Clio version: " << Build::getClioFullVersionString();
auto ctx = parseCerts(config);
auto ctxRef = ctx ? std::optional<std::reference_wrapper<ssl::context>>{ctx.value()} : std::nullopt;
auto const threads = config.valueOr("io_threads", 2);
if (threads <= 0)
@@ -208,8 +194,7 @@ try
auto backend = Backend::make_Backend(ioc, config);
// Manages clients subscribed to streams
auto subscriptions = SubscriptionManager::make_SubscriptionManager(config, backend);
// Tracks which ledgers have been validated by the
// network
@@ -220,17 +205,22 @@ try
// The server uses the balancer to forward RPCs to a rippled node.
// The balancer itself publishes to streams (transactions_proposed and
// accounts_proposed)
auto balancer = ETLLoadBalancer::make_ETLLoadBalancer(config, ioc, backend, subscriptions, ledgers);
// ETL is responsible for writing and publishing to streams. In read-only
// mode, ETL only publishes
auto etl = ReportingETL::make_ReportingETL(config, ioc, backend, subscriptions, balancer, ledgers);
auto workQueue = WorkQueue::make_WorkQueue(config);
auto counters = RPC::Counters::make_Counters(workQueue);
auto const handlerProvider =
std::make_shared<RPC::detail::ProductionHandlerProvider const>(backend, subscriptions, balancer, etl, counters);
auto const rpcEngine = RPC::RPCEngine::make_RPCEngine(
config, backend, subscriptions, balancer, etl, dosGuard, workQueue, counters, handlerProvider);
// The server handles incoming RPCs
auto httpServer = Server::make_HttpServer(
config, ioc, ctxRef, backend, subscriptions, balancer, etl, dosGuard);
auto httpServer =
Server::make_HttpServer(config, ioc, ctxRef, backend, rpcEngine, subscriptions, balancer, etl, dosGuard);
// Blocks until stopped.
// When stopped, shared_ptrs fall out of scope

src/rpc/BookChangesHelper.h (new file, 235 lines)

@@ -0,0 +1,235 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <rpc/RPCHelpers.h>
#include <set>
namespace RPC {
/**
* @brief Represents an entry in the book_changes' changes array.
*/
struct BookChange
{
ripple::STAmount sideAVolume;
ripple::STAmount sideBVolume;
ripple::STAmount highRate;
ripple::STAmount lowRate;
ripple::STAmount openRate;
ripple::STAmount closeRate;
};
/**
* @brief Encapsulates the book_changes computations and transformations.
*/
class BookChanges final
{
public:
BookChanges() = delete; // only accessed via static handle function
/**
* @brief Computes all book_changes for the given transactions.
*
* @param transactions The transactions to compute book changes for
* @return std::vector<BookChange> Book changes
*/
[[nodiscard]] static std::vector<BookChange>
compute(std::vector<Backend::TransactionAndMetadata> const& transactions)
{
return HandlerImpl{}(transactions);
}
private:
class HandlerImpl final
{
std::map<std::string, BookChange> tally_ = {};
std::optional<uint32_t> offerCancel_ = {};
public:
[[nodiscard]] std::vector<BookChange>
operator()(std::vector<Backend::TransactionAndMetadata> const& transactions)
{
for (auto const& tx : transactions)
handleBookChange(tx);
// TODO: rewrite this with std::ranges when compilers catch up
std::vector<BookChange> changes;
std::transform(
std::make_move_iterator(std::begin(tally_)),
std::make_move_iterator(std::end(tally_)),
std::back_inserter(changes),
[](auto obj) { return obj.second; });
return changes;
}
private:
void
handleAffectedNode(ripple::STObject const& node)
{
auto const& metaType = node.getFName();
auto const nodeType = node.getFieldU16(ripple::sfLedgerEntryType);
// we only care about ripple::ltOFFER objects being modified or
// deleted
if (nodeType != ripple::ltOFFER || metaType == ripple::sfCreatedNode)
return;
// if either FF or PF are missing we can't compute
// but generally these are cancelled rather than crossed
// so skipping them is consistent
if (!node.isFieldPresent(ripple::sfFinalFields) || !node.isFieldPresent(ripple::sfPreviousFields))
return;
auto const& finalFields = node.peekAtField(ripple::sfFinalFields).downcast<ripple::STObject>();
auto const& previousFields = node.peekAtField(ripple::sfPreviousFields).downcast<ripple::STObject>();
// defensive case that should never be hit
if (!finalFields.isFieldPresent(ripple::sfTakerGets) || !finalFields.isFieldPresent(ripple::sfTakerPays) ||
!previousFields.isFieldPresent(ripple::sfTakerGets) ||
!previousFields.isFieldPresent(ripple::sfTakerPays))
return;
// filter out any offers deleted by explicit offer cancels
if (metaType == ripple::sfDeletedNode && offerCancel_ &&
finalFields.getFieldU32(ripple::sfSequence) == *offerCancel_)
return;
// compute the difference in gets and pays actually
// affected onto the offer
auto const deltaGets =
finalFields.getFieldAmount(ripple::sfTakerGets) - previousFields.getFieldAmount(ripple::sfTakerGets);
auto const deltaPays =
finalFields.getFieldAmount(ripple::sfTakerPays) - previousFields.getFieldAmount(ripple::sfTakerPays);
transformAndStore(deltaGets, deltaPays);
}
void
transformAndStore(ripple::STAmount const& deltaGets, ripple::STAmount const& deltaPays)
{
auto const g = to_string(deltaGets.issue());
auto const p = to_string(deltaPays.issue());
auto const noswap = isXRP(deltaGets) ? true : (isXRP(deltaPays) ? false : (g < p));
auto first = noswap ? deltaGets : deltaPays;
auto second = noswap ? deltaPays : deltaGets;
// defensively programmed, should (probably) never happen
if (second == beast::zero)
return;
auto const rate = divide(first, second, ripple::noIssue());
if (first < beast::zero)
first = -first;
if (second < beast::zero)
second = -second;
auto const key = noswap ? (g + '|' + p) : (p + '|' + g);
if (tally_.contains(key))
{
auto& entry = tally_.at(key);
entry.sideAVolume += first;
entry.sideBVolume += second;
if (entry.highRate < rate)
entry.highRate = rate;
if (entry.lowRate > rate)
entry.lowRate = rate;
entry.closeRate = rate;
}
else
{
// TODO: use paranthesized initialization when clang catches up
tally_[key] = {
first, // sideAVolume
second, // sideBVolume
rate, // highRate
rate, // lowRate
rate, // openRate
rate, // closeRate
};
}
}
void
handleBookChange(Backend::TransactionAndMetadata const& blob)
{
auto const [tx, meta] = RPC::deserializeTxPlusMeta(blob);
if (!tx || !meta || !tx->isFieldPresent(ripple::sfTransactionType))
return;
offerCancel_ = shouldCancelOffer(tx);
for (auto const& node : meta->getFieldArray(ripple::sfAffectedNodes))
handleAffectedNode(node);
}
std::optional<uint32_t>
shouldCancelOffer(std::shared_ptr<ripple::STTx const> const& tx) const
{
switch (tx->getFieldU16(ripple::sfTransactionType))
{
// in future if any other ways emerge to cancel an offer
// this switch makes them easy to add
case ripple::ttOFFER_CANCEL:
case ripple::ttOFFER_CREATE:
if (tx->isFieldPresent(ripple::sfOfferSequence))
return tx->getFieldU32(ripple::sfOfferSequence);
default:
return std::nullopt;
}
}
};
};
inline void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, BookChange const& change)
{
auto amountStr = [](ripple::STAmount const& amount) -> std::string {
return isXRP(amount) ? to_string(amount.xrp()) : to_string(amount.iou());
};
auto currencyStr = [](ripple::STAmount const& amount) -> std::string {
return isXRP(amount) ? "XRP_drops" : to_string(amount.issue());
};
jv = {
{JS(currency_a), currencyStr(change.sideAVolume)},
{JS(currency_b), currencyStr(change.sideBVolume)},
{JS(volume_a), amountStr(change.sideAVolume)},
{JS(volume_b), amountStr(change.sideBVolume)},
{JS(high), to_string(change.highRate.iou())},
{JS(low), to_string(change.lowRate.iou())},
{JS(open), to_string(change.openRate.iou())},
{JS(close), to_string(change.closeRate.iou())},
};
}
[[nodiscard]] boost::json::object const
computeBookChanges(ripple::LedgerInfo const& lgrInfo, std::vector<Backend::TransactionAndMetadata> const& transactions);
} // namespace RPC


@@ -18,50 +18,24 @@
//==============================================================================
#include <rpc/Counters.h>
#include <rpc/RPC.h>
#include <rpc/JS.h>
#include <rpc/RPCHelpers.h>
namespace RPC {
void
Counters::initializeCounter(std::string const& method)
{
std::shared_lock lk(mutex_);
if (methodInfo_.count(method) == 0)
{
lk.unlock();
std::scoped_lock ulk(mutex_);
// This calls the default constructor for methodInfo of the method.
methodInfo_[method];
}
}
void
Counters::rpcErrored(std::string const& method)
{
if (!validHandler(method))
return;
initializeCounter(method);
std::shared_lock lk(mutex_);
std::scoped_lock lk(mutex_);
MethodInfo& counters = methodInfo_[method];
counters.started++;
counters.errored++;
}
void
Counters::rpcComplete(std::string const& method, std::chrono::microseconds const& rpcDuration)
{
if (!validHandler(method))
return;
initializeCounter(method);
std::shared_lock lk(mutex_);
std::scoped_lock lk(mutex_);
MethodInfo& counters = methodInfo_[method];
counters.started++;
counters.finished++;
@@ -71,27 +45,23 @@ Counters::rpcComplete(
void
Counters::rpcForwarded(std::string const& method)
{
if (!validHandler(method))
return;
initializeCounter(method);
std::shared_lock lk(mutex_);
std::scoped_lock lk(mutex_);
MethodInfo& counters = methodInfo_[method];
counters.forwarded++;
}
boost::json::object
Counters::report()
Counters::report() const
{
std::shared_lock lk(mutex_);
boost::json::object obj = {};
std::scoped_lock lk(mutex_);
auto obj = boost::json::object{};
obj[JS(rpc)] = boost::json::object{};
auto& rpc = obj[JS(rpc)].as_object();
for (auto const& [method, info] : methodInfo_)
{
boost::json::object counters = {};
auto counters = boost::json::object{};
counters[JS(started)] = std::to_string(info.started);
counters[JS(finished)] = std::to_string(info.finished);
counters[JS(errored)] = std::to_string(info.errored);
@@ -100,6 +70,7 @@ Counters::report()
rpc[method] = std::move(counters);
}
obj["work_queue"] = workQueue_.get().report();
return obj;

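The Counters change above drops the per-method atomics guarded by a `std::shared_mutex` (and the lock-upgrade dance in `initializeCounter`) in favor of plain integers behind one `std::mutex`. A minimal self-contained sketch of the simplified design follows; the class and member names are illustrative stand-ins, not clio's actual API:

```cpp
#include <cstdint>
#include <mutex>
#include <string>
#include <unordered_map>

// Simplified stand-in for RPC::Counters: one mutex guards the whole map,
// so neither per-field atomics nor a shared/exclusive lock upgrade is needed.
class MethodCounters
{
    struct Info
    {
        std::uint64_t started = 0, finished = 0, errored = 0;
    };
    mutable std::mutex mutex_;
    std::unordered_map<std::string, Info> map_;

public:
    void
    complete(std::string const& method)
    {
        std::scoped_lock lk(mutex_);
        auto& info = map_[method];  // default-constructs the entry on first use
        ++info.started;
        ++info.finished;
    }

    void
    errored(std::string const& method)
    {
        std::scoped_lock lk(mutex_);
        auto& info = map_[method];
        ++info.started;
        ++info.errored;
    }

    std::uint64_t
    startedCount(std::string const& method) const
    {
        std::scoped_lock lk(mutex_);
        auto it = map_.find(method);
        return it == map_.end() ? 0 : it->second.started;
    }
};
```

Each RPC touches the map only briefly per call, and `report()` already needed a consistent snapshot, so the commit message's performance claim plausibly comes from removing the shared-lock upgrade rather than from the locks themselves.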

@@ -19,12 +19,12 @@
#pragma once
#include <boost/json.hpp>
#include <chrono>
#include <cstdint>
#include <functional>
#include <rpc/WorkQueue.h>
#include <shared_mutex>
#include <boost/json.hpp>
#include <chrono>
#include <mutex>
#include <string>
#include <unordered_map>
@@ -32,22 +32,16 @@ namespace RPC {
class Counters
{
private:
struct MethodInfo
{
MethodInfo() = default;
std::atomic_uint64_t started{0};
std::atomic_uint64_t finished{0};
std::atomic_uint64_t errored{0};
std::atomic_uint64_t forwarded{0};
std::atomic_uint64_t duration{0};
std::uint64_t started = 0u;
std::uint64_t finished = 0u;
std::uint64_t errored = 0u;
std::uint64_t forwarded = 0u;
std::uint64_t duration = 0u;
};
void
initializeCounter(std::string const& method);
std::shared_mutex mutex_;
mutable std::mutex mutex_;
std::unordered_map<std::string, MethodInfo> methodInfo_;
std::reference_wrapper<const WorkQueue> workQueue_;
@@ -55,19 +49,23 @@ private:
public:
Counters(WorkQueue const& wq) : workQueue_(std::cref(wq)){};
static Counters
make_Counters(WorkQueue const& wq)
{
return Counters{wq};
}
void
rpcErrored(std::string const& method);
void
rpcComplete(
std::string const& method,
std::chrono::microseconds const& rpcDuration);
rpcComplete(std::string const& method, std::chrono::microseconds const& rpcDuration);
void
rpcForwarded(std::string const& method);
boost::json::object
report();
report() const;
};
} // namespace RPC


@@ -47,11 +47,11 @@ getWarningInfo(WarningCode code)
"want to talk to rippled, include 'ledger_index':'current' in your "
"request"},
{warnRPC_OUTDATED, "This server may be out of date"},
{warnRPC_RATE_LIMIT, "You are about to be rate limited"}};
{warnRPC_RATE_LIMIT, "You are about to be rate limited"},
};
auto matchByCode = [code](auto const& info) { return info.code == code; };
if (auto it = find_if(begin(infos), end(infos), matchByCode);
it != end(infos))
if (auto it = find_if(begin(infos), end(infos), matchByCode); it != end(infos))
return *it;
throw(out_of_range("Invalid WarningCode"));
@@ -60,10 +60,12 @@ getWarningInfo(WarningCode code)
boost::json::object
makeWarning(WarningCode code)
{
boost::json::object json;
auto json = boost::json::object{};
auto const& info = getWarningInfo(code);
json["id"] = code;
json["message"] = static_cast<string>(info.message);
return json;
}
@@ -71,31 +73,21 @@ ClioErrorInfo const&
getErrorInfo(ClioError code)
{
constexpr static ClioErrorInfo infos[]{
{ClioError::rpcMALFORMED_CURRENCY,
"malformedCurrency",
"Malformed currency."},
{ClioError::rpcMALFORMED_REQUEST,
"malformedRequest",
"Malformed request."},
{ClioError::rpcMALFORMED_CURRENCY, "malformedCurrency", "Malformed currency."},
{ClioError::rpcMALFORMED_REQUEST, "malformedRequest", "Malformed request."},
{ClioError::rpcMALFORMED_OWNER, "malformedOwner", "Malformed owner."},
{ClioError::rpcMALFORMED_ADDRESS,
"malformedAddress",
"Malformed address."},
{ClioError::rpcMALFORMED_ADDRESS, "malformedAddress", "Malformed address."},
};
auto matchByCode = [code](auto const& info) { return info.code == code; };
if (auto it = find_if(begin(infos), end(infos), matchByCode);
it != end(infos))
if (auto it = find_if(begin(infos), end(infos), matchByCode); it != end(infos))
return *it;
throw(out_of_range("Invalid error code"));
}
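Both `getWarningInfo` and `getErrorInfo` use the same lookup pattern reformatted in this diff: a `constexpr static` table scanned with `find_if`, throwing `std::out_of_range` on a miss. A distilled, self-contained version (stand-in enum and messages, not clio's real tables):

```cpp
#include <algorithm>
#include <iterator>
#include <stdexcept>
#include <string>

enum class Code { A = 1, B = 2 };

struct Info
{
    Code code;
    char const* message;
};

// Table lookup in the style of getWarningInfo/getErrorInfo: linear scan of
// a small constexpr array, returning a reference into the static table.
inline Info const&
lookup(Code code)
{
    constexpr static Info infos[]{
        {Code::A, "first"},
        {Code::B, "second"},
    };
    auto matchByCode = [code](auto const& info) { return info.code == code; };
    if (auto it = std::find_if(std::begin(infos), std::end(infos), matchByCode); it != std::end(infos))
        return *it;
    throw std::out_of_range("Invalid code");
}
```

For a handful of codes a linear scan is as fast as a map and keeps the table `constexpr`.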
boost::json::object
makeError(
RippledError err,
optional<string_view> customError,
optional<string_view> customMessage)
makeError(RippledError err, optional<string_view> customError, optional<string_view> customMessage)
{
boost::json::object json;
auto const& info = ripple::RPC::get_error_info(err);
@@ -105,14 +97,12 @@ makeError(
json["error_message"] = customMessage.value_or(info.message.c_str()).data();
json["status"] = "error";
json["type"] = "response";
return json;
}
boost::json::object
makeError(
ClioError err,
optional<string_view> customError,
optional<string_view> customMessage)
makeError(ClioError err, optional<string_view> customError, optional<string_view> customMessage)
{
boost::json::object json;
auto const& info = getErrorInfo(err);
@@ -122,47 +112,33 @@ makeError(
json["error_message"] = customMessage.value_or(info.message).data();
json["status"] = "error";
json["type"] = "response";
return json;
}
boost::json::object
makeError(Status const& status)
{
auto wrapOptional = [](string_view const& str) {
return str.empty() ? nullopt : make_optional(str);
};
auto wrapOptional = [](string_view const& str) { return str.empty() ? nullopt : make_optional(str); };
auto res = visit(
overloadSet{
[&status, &wrapOptional](RippledError err) {
if (err == ripple::rpcUNKNOWN)
{
return boost::json::object{
{"error", status.message},
{"type", "response"},
{"status", "error"}};
}
return boost::json::object{{"error", status.message}, {"type", "response"}, {"status", "error"}};
return makeError(
err,
wrapOptional(status.error),
wrapOptional(status.message));
return makeError(err, wrapOptional(status.error), wrapOptional(status.message));
},
[&status, &wrapOptional](ClioError err) {
return makeError(
err,
wrapOptional(status.error),
wrapOptional(status.message));
return makeError(err, wrapOptional(status.error), wrapOptional(status.message));
},
},
status.code);
if (status.extraInfo)
{
for (auto& [key, value] : status.extraInfo.value())
{
res[key] = value;
}
}
return res;
}
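`makeError(Status const&)` dispatches on the variant error code with `std::visit` and an overload set. A self-contained sketch of that pattern, using stand-in enums rather than clio's real `RippledError`/`ClioError` values:

```cpp
#include <string>
#include <variant>

// Minimal overload-set helper, equivalent in spirit to clio's overloadSet.
template <typename... Ts>
struct overloaded : Ts...
{
    using Ts::operator()...;
};
template <typename... Ts>
overloaded(Ts...) -> overloaded<Ts...>;

enum class RippledError { rpcUNKNOWN, rpcINVALID_PARAMS };
enum class ClioError { rpcMALFORMED_REQUEST };
using CombinedError = std::variant<RippledError, ClioError>;

// Visit the variant with one lambda per alternative, as makeError does.
std::string
describe(CombinedError code)
{
    return std::visit(
        overloaded{
            [](RippledError) -> std::string { return "rippled"; },
            [](ClioError) -> std::string { return "clio"; },
        },
        code);
}
```

The overload set keeps each alternative's handling in its own lambda while `std::visit` guarantees at compile time that every alternative is covered.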


@@ -75,24 +75,19 @@ struct Status
Status() = default;
/* implicit */ Status(CombinedError code) : code(code){};
Status(CombinedError code, boost::json::object&& extraInfo)
: code(code), extraInfo(std::move(extraInfo)){};
Status(CombinedError code, boost::json::object&& extraInfo) : code(code), extraInfo(std::move(extraInfo)){};
// HACK. Some rippled handlers explicitly specify errors.
// This means that we have to be able to duplicate this
// functionality.
explicit Status(std::string const& message)
: code(ripple::rpcUNKNOWN), message(message)
// This means that we have to be able to duplicate this functionality.
explicit Status(std::string const& message) : code(ripple::rpcUNKNOWN), message(message)
{
}
Status(CombinedError code, std::string message)
: code(code), message(message)
Status(CombinedError code, std::string message) : code(code), message(message)
{
}
Status(CombinedError code, std::string error, std::string message)
: code(code), error(error), message(message)
Status(CombinedError code, std::string error, std::string message) : code(code), error(error), message(message)
{
}
@@ -103,6 +98,7 @@ struct Status
{
if (auto err = std::get_if<RippledError>(&code))
return *err != RippledError::rpcSUCCESS;
return true;
}
@@ -117,6 +113,7 @@ struct Status
{
if (auto err = std::get_if<RippledError>(&code))
return *err == other;
return false;
}
@@ -131,6 +128,7 @@ struct Status
{
if (auto err = std::get_if<ClioError>(&code))
return *err == other;
return false;
}
};
@@ -138,12 +136,7 @@ struct Status
/**
* @brief Warning codes that can be returned by clio.
*/
enum WarningCode {
warnUNKNOWN = -1,
warnRPC_CLIO = 2001,
warnRPC_OUTDATED = 2002,
warnRPC_RATE_LIMIT = 2003
};
enum WarningCode { warnUNKNOWN = -1, warnRPC_CLIO = 2001, warnRPC_OUTDATED = 2002, warnRPC_RATE_LIMIT = 2003 };
/**
* @brief Holds information about a clio warning.
@@ -151,8 +144,7 @@ enum WarningCode {
struct WarningInfo
{
constexpr WarningInfo() = default;
constexpr WarningInfo(WarningCode code, char const* message)
: code(code), message(message)
constexpr WarningInfo(WarningCode code, char const* message) : code(code), message(message)
{
}
@@ -190,6 +182,7 @@ public:
explicit AccountNotFoundError(std::string const& acct) : account(acct)
{
}
const char*
what() const throw() override
{

src/rpc/Factories.cpp Normal file

@@ -0,0 +1,88 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <etl/ETLSource.h>
#include <rpc/Factories.h>
#include <rpc/common/impl/HandlerProvider.h>
#include <webserver/HttpBase.h>
#include <webserver/WsBase.h>
#include <boost/asio/spawn.hpp>
#include <unordered_map>
using namespace std;
using namespace clio;
using namespace RPC;
namespace RPC {
optional<Web::Context>
make_WsContext(
boost::asio::yield_context& yc,
boost::json::object const& request,
shared_ptr<WsBase> const& session,
util::TagDecoratorFactory const& tagFactory,
Backend::LedgerRange const& range,
string const& clientIp)
{
boost::json::value commandValue = nullptr;
if (!request.contains("command") && request.contains("method"))
commandValue = request.at("method");
else if (request.contains("command") && !request.contains("method"))
commandValue = request.at("command");
if (!commandValue.is_string())
return {};
string command = commandValue.as_string().c_str();
return make_optional<Web::Context>(yc, command, 1, request, session, tagFactory, range, clientIp);
}
optional<Web::Context>
make_HttpContext(
boost::asio::yield_context& yc,
boost::json::object const& request,
util::TagDecoratorFactory const& tagFactory,
Backend::LedgerRange const& range,
string const& clientIp)
{
if (!request.contains("method") || !request.at("method").is_string())
return {};
string const& command = request.at("method").as_string().c_str();
if (command == "subscribe" || command == "unsubscribe")
return {};
if (!request.at("params").is_array())
return {};
boost::json::array const& array = request.at("params").as_array();
if (array.size() != 1)
return {};
if (!array.at(0).is_object())
return {};
return make_optional<Web::Context>(yc, command, 1, array.at(0).as_object(), nullptr, tagFactory, range, clientIp);
}
} // namespace RPC
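`make_WsContext` accepts the command under either `"command"` or `"method"`, but rejects a request that sets both keys (or neither), since `commandValue` then stays non-string. The same decision, sketched over a plain `std::map` instead of `boost::json::object` (function name is illustrative):

```cpp
#include <map>
#include <optional>
#include <string>

// Command extraction as in make_WsContext: exactly one of "command" or
// "method" must be present; both or neither yields nullopt (i.e. rejection).
std::optional<std::string>
extractCommand(std::map<std::string, std::string> const& request)
{
    bool const hasCommand = request.count("command") > 0;
    bool const hasMethod = request.count("method") > 0;
    if (hasMethod && !hasCommand)
        return request.at("method");
    if (hasCommand && !hasMethod)
        return request.at("command");
    return std::nullopt;
}
```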

src/rpc/Factories.h Normal file

@@ -0,0 +1,67 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/BackendInterface.h>
#include <log/Logger.h>
#include <rpc/Errors.h>
#include <webserver/Context.h>
#include <boost/asio/spawn.hpp>
#include <boost/json.hpp>
#include <fmt/core.h>
#include <chrono>
#include <optional>
#include <string>
/*
* This file contains various classes necessary for executing RPC handlers.
* Context gives the handlers access to various other parts of the application. Status is used to report errors.
* And lastly, there are various functions for making Contexts, Statuses and serializing Status to JSON.
* This file is meant to contain any class or function that code outside of the rpc folder needs to use. For helper
* functions or classes used within the rpc folder, use RPCHelpers.h.
*/
class WsBase;
class SubscriptionManager;
class ETLLoadBalancer;
class ReportingETL;
namespace RPC {
std::optional<Web::Context>
make_WsContext(
boost::asio::yield_context& yc,
boost::json::object const& request,
std::shared_ptr<WsBase> const& session,
util::TagDecoratorFactory const& tagFactory,
Backend::LedgerRange const& range,
std::string const& clientIp);
std::optional<Web::Context>
make_HttpContext(
boost::asio::yield_context& yc,
boost::json::object const& request,
util::TagDecoratorFactory const& tagFactory,
Backend::LedgerRange const& range,
std::string const& clientIp);
} // namespace RPC

src/rpc/HandlerTable.h Normal file

@@ -0,0 +1,59 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <rpc/common/AnyHandler.h>
#include <rpc/common/Types.h>
#include <memory>
#include <optional>
#include <string>
namespace RPC {
class HandlerTable
{
std::shared_ptr<HandlerProvider const> provider_;
public:
HandlerTable(std::shared_ptr<HandlerProvider const> const& provider) : provider_{provider}
{
}
bool
contains(std::string const& method) const
{
return provider_->contains(method);
}
std::optional<AnyHandler>
getHandler(std::string const& command) const
{
return provider_->getHandler(command);
}
bool
isClioOnly(std::string const& command) const
{
return provider_->isClioOnly(command);
}
};
} // namespace RPC


@@ -1,122 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <rpc/RPC.h>
namespace RPC {
/*
* This file just contains declarations for all of the handlers
*/
// account state methods
Result
doAccountInfo(Context const& context);
Result
doAccountChannels(Context const& context);
Result
doAccountCurrencies(Context const& context);
Result
doAccountLines(Context const& context);
Result
doAccountNFTs(Context const& context);
Result
doAccountObjects(Context const& context);
Result
doAccountOffers(Context const& context);
Result
doGatewayBalances(Context const& context);
Result
doNoRippleCheck(Context const& context);
// channels methods
Result
doChannelAuthorize(Context const& context);
Result
doChannelVerify(Context const& context);
// book methods
[[nodiscard]] Result
doBookChanges(Context const& context);
Result
doBookOffers(Context const& context);
// NFT methods
Result
doNFTBuyOffers(Context const& context);
Result
doNFTSellOffers(Context const& context);
Result
doNFTInfo(Context const& context);
Result
doNFTHistory(Context const& context);
// ledger methods
Result
doLedger(Context const& context);
Result
doLedgerEntry(Context const& context);
Result
doLedgerData(Context const& context);
Result
doLedgerRange(Context const& context);
// transaction methods
Result
doTx(Context const& context);
Result
doTransactionEntry(Context const& context);
Result
doAccountTx(Context const& context);
// subscriptions
Result
doSubscribe(Context const& context);
Result
doUnsubscribe(Context const& context);
// server methods
Result
doServerInfo(Context const& context);
// Utility methods
Result
doRandom(Context const& context);
} // namespace RPC

src/rpc/JS.h Normal file

@@ -0,0 +1,29 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <ripple/protocol/jss.h>
// Useful macro for borrowing from ripple::jss
// static strings. (J)son (S)trings
#define JS(x) ripple::jss::x.c_str()
// Access (SF)ield name (S)trings
#define SFS(x) ripple::x.jsonName.c_str()

src/rpc/README.md Normal file

@@ -0,0 +1,29 @@
# Clio RPC subsystem
## Background
The RPC subsystem is where the common framework for handling incoming JSON requests is implemented.
Currently, the NextGen RPC framework is a work in progress and the handlers are not yet implemented using the new common framework classes.
## Integration plan
- Implement base framework - **done**
- Migrate handlers one by one, making them injectable, adding unit-tests - **in progress**
- Integrate all new handlers into clio in one go
- Cover the rest with unit-tests
- Release first time with new subsystem active
## Components
See `common` subfolder.
- **AnyHandler**: The type-erased wrapper that allows for storing different handlers in one map/vector.
- **RpcSpec/FieldSpec**: The RPC specification classes, used to specify how incoming JSON is to be validated before it's parsed and passed on to individual handler implementations.
- **Validators**: A bunch of supported validators that can be specified as requirements for each **`FieldSpec`** to make up the final **`RpcSpec`** of any given RPC handler.
## Implementing a (NextGen) handler
See `unittests/rpc` for examples.
Handlers need to fulfil the requirements specified by the **`Handler`** concept (see `rpc/common/Concepts.h`):
- Expose types:
* `Input` - The POD struct which acts as input for the handler
* `Output` - The POD struct which acts as the output of a valid handler invocation
- Have a `spec()` member function returning a const reference to an **`RpcSpec`** describing the JSON input.
- Have a `process(Input)` member function that operates on the `Input` POD and returns `HandlerReturnType<Output>`
- Implement `value_from` and `value_to` support using `tag_invoke` as per `boost::json` documentation for these functions.
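A skeletal handler satisfying these requirements might look like the following. The `RpcSpec` and `HandlerReturnType` stand-ins are placeholders so the sketch is self-contained; the real framework types live in `rpc/common`, and `PingHandler` itself is a made-up example, not an actual clio handler:

```cpp
#include <cstdint>
#include <string>
#include <variant>

// Stand-ins for the framework types, for illustration only.
struct RpcSpec
{
};  // the real RpcSpec describes how to validate the incoming JSON
template <typename Out>
using HandlerReturnType = std::variant<Out, std::string>;  // value or error

class PingHandler
{
public:
    struct Input
    {
        std::uint32_t echo = 0;
    };
    struct Output
    {
        std::uint32_t echo = 0;
    };

    static RpcSpec const&
    spec()
    {
        static RpcSpec const spec{};
        return spec;
    }

    HandlerReturnType<Output>
    process(Input input) const
    {
        // A real handler would query the backend here.
        return Output{input.echo};
    }
};
```

In the real framework the handler would additionally provide `tag_invoke` overloads so `boost::json::value_to<Input>` and `boost::json::value_from(Output)` work.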


@@ -1,396 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <etl/ETLSource.h>
#include <log/Logger.h>
#include <rpc/Handlers.h>
#include <rpc/RPCHelpers.h>
#include <webserver/HttpBase.h>
#include <webserver/WsBase.h>
#include <boost/asio/spawn.hpp>
#include <unordered_map>
using namespace std;
using namespace clio;
// local to compilation unit loggers
namespace {
clio::Logger gPerfLog{"Performance"};
clio::Logger gLog{"RPC"};
} // namespace
namespace RPC {
Context::Context(
boost::asio::yield_context& yield_,
string const& command_,
uint32_t version_,
boost::json::object const& params_,
shared_ptr<BackendInterface const> const& backend_,
shared_ptr<SubscriptionManager> const& subscriptions_,
shared_ptr<ETLLoadBalancer> const& balancer_,
shared_ptr<ReportingETL const> const& etl_,
shared_ptr<WsBase> const& session_,
util::TagDecoratorFactory const& tagFactory_,
Backend::LedgerRange const& range_,
Counters& counters_,
string const& clientIp_)
: Taggable(tagFactory_)
, yield(yield_)
, method(command_)
, version(version_)
, params(params_)
, backend(backend_)
, subscriptions(subscriptions_)
, balancer(balancer_)
, etl(etl_)
, session(session_)
, range(range_)
, counters(counters_)
, clientIp(clientIp_)
{
gPerfLog.debug() << tag() << "new Context created";
}
optional<Context>
make_WsContext(
boost::asio::yield_context& yc,
boost::json::object const& request,
shared_ptr<BackendInterface const> const& backend,
shared_ptr<SubscriptionManager> const& subscriptions,
shared_ptr<ETLLoadBalancer> const& balancer,
shared_ptr<ReportingETL const> const& etl,
shared_ptr<WsBase> const& session,
util::TagDecoratorFactory const& tagFactory,
Backend::LedgerRange const& range,
Counters& counters,
string const& clientIp)
{
boost::json::value commandValue = nullptr;
if (!request.contains("command") && request.contains("method"))
commandValue = request.at("method");
else if (request.contains("command") && !request.contains("method"))
commandValue = request.at("command");
if (!commandValue.is_string())
return {};
string command = commandValue.as_string().c_str();
return make_optional<Context>(
yc,
command,
1,
request,
backend,
subscriptions,
balancer,
etl,
session,
tagFactory,
range,
counters,
clientIp);
}
optional<Context>
make_HttpContext(
boost::asio::yield_context& yc,
boost::json::object const& request,
shared_ptr<BackendInterface const> const& backend,
shared_ptr<SubscriptionManager> const& subscriptions,
shared_ptr<ETLLoadBalancer> const& balancer,
shared_ptr<ReportingETL const> const& etl,
util::TagDecoratorFactory const& tagFactory,
Backend::LedgerRange const& range,
RPC::Counters& counters,
string const& clientIp)
{
if (!request.contains("method") || !request.at("method").is_string())
return {};
string const& command = request.at("method").as_string().c_str();
if (command == "subscribe" || command == "unsubscribe")
return {};
if (!request.at("params").is_array())
return {};
boost::json::array const& array = request.at("params").as_array();
if (array.size() != 1)
return {};
if (!array.at(0).is_object())
return {};
return make_optional<Context>(
yc,
command,
1,
array.at(0).as_object(),
backend,
subscriptions,
balancer,
etl,
nullptr,
tagFactory,
range,
counters,
clientIp);
}
using LimitRange = tuple<uint32_t, uint32_t, uint32_t>;
using HandlerFunction = function<Result(Context const&)>;
struct Handler
{
string method;
function<Result(Context const&)> handler;
optional<LimitRange> limit;
bool isClioOnly = false;
};
class HandlerTable
{
unordered_map<string, Handler> handlerMap_;
public:
HandlerTable(initializer_list<Handler> handlers)
{
for (auto const& handler : handlers)
{
handlerMap_[handler.method] = move(handler);
}
}
bool
contains(string const& method)
{
return handlerMap_.contains(method);
}
optional<LimitRange>
getLimitRange(string const& command)
{
if (!handlerMap_.contains(command))
return {};
return handlerMap_[command].limit;
}
optional<HandlerFunction>
getHandler(string const& command)
{
if (!handlerMap_.contains(command))
return {};
return handlerMap_[command].handler;
}
bool
isClioOnly(string const& command)
{
return handlerMap_.contains(command) && handlerMap_[command].isClioOnly;
}
};
static HandlerTable handlerTable{
{"account_channels", &doAccountChannels, LimitRange{10, 50, 256}},
{"account_currencies", &doAccountCurrencies, {}},
{"account_info", &doAccountInfo, {}},
{"account_lines", &doAccountLines, LimitRange{10, 50, 256}},
{"account_nfts", &doAccountNFTs, LimitRange{1, 5, 10}},
{"account_objects", &doAccountObjects, LimitRange{10, 50, 256}},
{"account_offers", &doAccountOffers, LimitRange{10, 50, 256}},
{"account_tx", &doAccountTx, LimitRange{1, 50, 100}},
{"gateway_balances", &doGatewayBalances, {}},
{"noripple_check", &doNoRippleCheck, LimitRange{1, 300, 500}},
{"book_changes", &doBookChanges, {}},
{"book_offers", &doBookOffers, LimitRange{1, 50, 100}},
{"ledger", &doLedger, {}},
{"ledger_data", &doLedgerData, LimitRange{1, 100, 2048}},
{"nft_buy_offers", &doNFTBuyOffers, LimitRange{1, 50, 100}},
{"nft_history", &doNFTHistory, LimitRange{1, 50, 100}, true},
{"nft_info", &doNFTInfo, {}, true},
{"nft_sell_offers", &doNFTSellOffers, LimitRange{1, 50, 100}},
{"ledger_entry", &doLedgerEntry, {}},
{"ledger_range", &doLedgerRange, {}},
{"subscribe", &doSubscribe, {}},
{"server_info", &doServerInfo, {}},
{"unsubscribe", &doUnsubscribe, {}},
{"tx", &doTx, {}},
{"transaction_entry", &doTransactionEntry, {}},
{"random", &doRandom, {}}};
static unordered_set<string> forwardCommands{
"submit",
"submit_multisigned",
"fee",
"ledger_closed",
"ledger_current",
"ripple_path_find",
"manifest",
"channel_authorize",
"channel_verify"};
bool
validHandler(string const& method)
{
return handlerTable.contains(method) || forwardCommands.contains(method);
}
bool
isClioOnly(string const& method)
{
return handlerTable.isClioOnly(method);
}
bool
shouldSuppressValidatedFlag(RPC::Context const& context)
{
return boost::iequals(context.method, "subscribe") ||
boost::iequals(context.method, "unsubscribe");
}
Status
getLimit(RPC::Context const& context, uint32_t& limit)
{
if (!handlerTable.getHandler(context.method))
return Status{RippledError::rpcUNKNOWN_COMMAND};
if (!handlerTable.getLimitRange(context.method))
return Status{
RippledError::rpcINVALID_PARAMS, "rpcDoesNotRequireLimit"};
auto [lo, def, hi] = *handlerTable.getLimitRange(context.method);
if (context.params.contains(JS(limit)))
{
string errMsg = "Invalid field 'limit', not unsigned integer.";
if (!context.params.at(JS(limit)).is_int64())
return Status{RippledError::rpcINVALID_PARAMS, errMsg};
int input = context.params.at(JS(limit)).as_int64();
if (input <= 0)
return Status{RippledError::rpcINVALID_PARAMS, errMsg};
limit = clamp(static_cast<uint32_t>(input), lo, hi);
}
else
{
limit = def;
}
return {};
}
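The `getLimit` implementation above falls back to a per-method default when `limit` is absent and clamps a supplied value into `[lo, hi]`. That core logic, distilled into a self-contained sketch (names are illustrative; the real function reports `rpcINVALID_PARAMS` for non-positive input instead of clamping):

```cpp
#include <algorithm>
#include <cstdint>
#include <optional>

struct LimitRange
{
    std::uint32_t lo, def, hi;
};

// Mirrors the clamping logic of RPC::getLimit: missing input yields the
// default; anything else is clamped into [lo, hi].
std::uint32_t
clampLimit(std::optional<std::int64_t> input, LimitRange range)
{
    if (!input)
        return range.def;
    if (*input <= 0)
        return range.lo;  // the real code returns an rpcINVALID_PARAMS status here
    return std::clamp(static_cast<std::uint32_t>(*input), range.lo, range.hi);
}
```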
bool
shouldForwardToRippled(Context const& ctx)
{
auto request = ctx.params;
if (isClioOnly(ctx.method))
return false;
if (forwardCommands.find(ctx.method) != forwardCommands.end())
return true;
if (specifiesCurrentOrClosedLedger(request))
return true;
if (ctx.method == "account_info" && request.contains("queue") &&
request.at("queue").as_bool())
return true;
return false;
}
Result
buildResponse(Context const& ctx)
{
if (shouldForwardToRippled(ctx))
{
boost::json::object toForward = ctx.params;
toForward["command"] = ctx.method;
auto res =
ctx.balancer->forwardToRippled(toForward, ctx.clientIp, ctx.yield);
ctx.counters.rpcForwarded(ctx.method);
if (!res)
return Status{RippledError::rpcFAILED_TO_FORWARD};
return *res;
}
if (ctx.method == "ping")
return boost::json::object{};
if (ctx.backend->isTooBusy())
{
gLog.error() << "Database is too busy. Rejecting request";
return Status{RippledError::rpcTOO_BUSY};
}
auto method = handlerTable.getHandler(ctx.method);
if (!method)
return Status{RippledError::rpcUNKNOWN_COMMAND};
try
{
gPerfLog.debug() << ctx.tag() << " start executing rpc `" << ctx.method
<< '`';
auto v = (*method)(ctx);
gPerfLog.debug() << ctx.tag() << " finish executing rpc `" << ctx.method
<< '`';
if (auto object = get_if<boost::json::object>(&v);
object && not shouldSuppressValidatedFlag(ctx))
{
(*object)[JS(validated)] = true;
}
return v;
}
catch (InvalidParamsError const& err)
{
return Status{RippledError::rpcINVALID_PARAMS, err.what()};
}
catch (AccountNotFoundError const& err)
{
return Status{RippledError::rpcACT_NOT_FOUND, err.what()};
}
catch (Backend::DatabaseTimeout const& t)
{
gLog.error() << "Database timeout";
return Status{RippledError::rpcTOO_BUSY};
}
catch (exception const& err)
{
gLog.error() << ctx.tag() << " caught exception: " << err.what();
return Status{RippledError::rpcINTERNAL};
}
}
} // namespace RPC


@@ -1,166 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/BackendInterface.h>
#include <log/Logger.h>
#include <rpc/Counters.h>
#include <rpc/Errors.h>
#include <util/Taggable.h>
#include <boost/asio/spawn.hpp>
#include <boost/json.hpp>
#include <optional>
#include <string>
#include <variant>
/*
* This file contains various classes necessary for executing RPC handlers.
* Context gives the handlers access to various other parts of the application.
* Status is used to report errors.
* Lastly, there are various functions for making Contexts and Statuses, and
* for serializing a Status to JSON.
* This file is meant to contain any class or function that code outside of the
* rpc folder needs to use. For helper functions or classes used within the rpc
* folder, use RPCHelpers.h.
*/
class WsBase;
class SubscriptionManager;
class ETLLoadBalancer;
class ReportingETL;
namespace RPC {
struct Context : public util::Taggable
{
clio::Logger perfLog_{"Performance"};
boost::asio::yield_context& yield;
std::string method;
std::uint32_t version;
boost::json::object const& params;
std::shared_ptr<BackendInterface const> const& backend;
// this needs to be an actual shared_ptr, not a reference. The above
// references refer to shared_ptr members of WsBase, but WsBase contains
// SubscriptionManager as a weak_ptr, to prevent a shared_ptr cycle.
std::shared_ptr<SubscriptionManager> subscriptions;
std::shared_ptr<ETLLoadBalancer> const& balancer;
std::shared_ptr<ReportingETL const> const& etl;
std::shared_ptr<WsBase> session;
Backend::LedgerRange const& range;
Counters& counters;
std::string clientIp;
Context(
boost::asio::yield_context& yield_,
std::string const& command_,
std::uint32_t version_,
boost::json::object const& params_,
std::shared_ptr<BackendInterface const> const& backend_,
std::shared_ptr<SubscriptionManager> const& subscriptions_,
std::shared_ptr<ETLLoadBalancer> const& balancer_,
std::shared_ptr<ReportingETL const> const& etl_,
std::shared_ptr<WsBase> const& session_,
util::TagDecoratorFactory const& tagFactory_,
Backend::LedgerRange const& range_,
Counters& counters_,
std::string const& clientIp_);
};
struct AccountCursor
{
ripple::uint256 index;
std::uint32_t hint;
std::string
toString() const
{
return ripple::strHex(index) + "," + std::to_string(hint);
}
bool
isNonZero() const
{
return index.isNonZero() || hint != 0;
}
};
using Result = std::variant<Status, boost::json::object>;
std::optional<Context>
make_WsContext(
boost::asio::yield_context& yc,
boost::json::object const& request,
std::shared_ptr<BackendInterface const> const& backend,
std::shared_ptr<SubscriptionManager> const& subscriptions,
std::shared_ptr<ETLLoadBalancer> const& balancer,
std::shared_ptr<ReportingETL const> const& etl,
std::shared_ptr<WsBase> const& session,
util::TagDecoratorFactory const& tagFactory,
Backend::LedgerRange const& range,
Counters& counters,
std::string const& clientIp);
std::optional<Context>
make_HttpContext(
boost::asio::yield_context& yc,
boost::json::object const& request,
std::shared_ptr<BackendInterface const> const& backend,
std::shared_ptr<SubscriptionManager> const& subscriptions,
std::shared_ptr<ETLLoadBalancer> const& balancer,
std::shared_ptr<ReportingETL const> const& etl,
util::TagDecoratorFactory const& tagFactory,
Backend::LedgerRange const& range,
Counters& counters,
std::string const& clientIp);
Result
buildResponse(Context const& ctx);
bool
validHandler(std::string const& method);
bool
isClioOnly(std::string const& method);
Status
getLimit(RPC::Context const& context, std::uint32_t& limit);
template <class T>
void
logDuration(Context const& ctx, T const& dur)
{
static clio::Logger log{"RPC"};
std::stringstream ss;
ss << ctx.tag() << "Request processing duration = "
<< std::chrono::duration_cast<std::chrono::milliseconds>(dur).count()
<< " milliseconds. request = " << ctx.params;
auto seconds =
std::chrono::duration_cast<std::chrono::seconds>(dur).count();
if (seconds > 10)
log.error() << ss.str();
else if (seconds > 1)
log.warn() << ss.str();
else
log.info() << ss.str();
}
} // namespace RPC

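The logDuration helper above picks a log level from the elapsed time: over ten seconds logs at error, over one second at warn, otherwise info. A reduced sketch of just the threshold policy (the string return value is illustrative; the real helper writes to the logger directly):

```cpp
#include <cassert>
#include <chrono>
#include <string>

// Maps a request duration to a log severity, using the thresholds
// from the snippet (cast to whole seconds first, then compare).
std::string
severityFor(std::chrono::milliseconds dur)
{
    auto const seconds = std::chrono::duration_cast<std::chrono::seconds>(dur).count();
    if (seconds > 10)
        return "error";
    if (seconds > 1)
        return "warn";
    return "info";
}
```

Note that durations are truncated to whole seconds first, so 1.5 s still logs at info, matching the snippet's cast-then-compare order.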
src/rpc/RPCEngine.h

@@ -0,0 +1,275 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/BackendInterface.h>
#include <config/Config.h>
#include <etl/ETLSource.h>
#include <log/Logger.h>
#include <rpc/Counters.h>
#include <rpc/Errors.h>
#include <rpc/HandlerTable.h>
#include <rpc/RPCHelpers.h>
#include <rpc/common/AnyHandler.h>
#include <rpc/common/Types.h>
#include <rpc/common/impl/AdminVerificationStrategy.h>
#include <util/Taggable.h>
#include <webserver/Context.h>
#include <webserver/DOSGuard.h>
#include <boost/asio/spawn.hpp>
#include <boost/json.hpp>
#include <fmt/core.h>
#include <optional>
#include <string>
#include <unordered_map>
#include <variant>
class WsBase;
class SubscriptionManager;
class ETLLoadBalancer;
class ReportingETL;
namespace RPC {
/**
* @brief The RPC engine that ties all RPC-related functionality together
*/
template <typename AdminVerificationStrategyType>
class RPCEngineBase
{
clio::Logger perfLog_{"Performance"};
clio::Logger log_{"RPC"};
std::shared_ptr<BackendInterface> backend_;
std::shared_ptr<SubscriptionManager> subscriptions_;
std::shared_ptr<ETLLoadBalancer> balancer_;
std::reference_wrapper<clio::DOSGuard const> dosGuard_;
std::reference_wrapper<WorkQueue> workQueue_;
std::reference_wrapper<Counters> counters_;
HandlerTable handlerTable_;
AdminVerificationStrategyType adminVerifier_;
public:
RPCEngineBase(
std::shared_ptr<BackendInterface> const& backend,
std::shared_ptr<SubscriptionManager> const& subscriptions,
std::shared_ptr<ETLLoadBalancer> const& balancer,
std::shared_ptr<ReportingETL> const& etl,
clio::DOSGuard const& dosGuard,
WorkQueue& workQueue,
Counters& counters,
std::shared_ptr<HandlerProvider const> const& handlerProvider)
: backend_{backend}
, subscriptions_{subscriptions}
, balancer_{balancer}
, dosGuard_{std::cref(dosGuard)}
, workQueue_{std::ref(workQueue)}
, counters_{std::ref(counters)}
, handlerTable_{handlerProvider}
{
}
static std::shared_ptr<RPCEngineBase>
make_RPCEngine(
clio::Config const& config,
std::shared_ptr<BackendInterface> const& backend,
std::shared_ptr<SubscriptionManager> const& subscriptions,
std::shared_ptr<ETLLoadBalancer> const& balancer,
std::shared_ptr<ReportingETL> const& etl,
clio::DOSGuard const& dosGuard,
WorkQueue& workQueue,
Counters& counters,
std::shared_ptr<HandlerProvider const> const& handlerProvider)
{
return std::make_shared<RPCEngineBase>(
backend, subscriptions, balancer, etl, dosGuard, workQueue, counters, handlerProvider);
}
/**
* @brief Main request processor routine
* @param ctx The @ref Context of the request
*/
Result
buildResponse(Web::Context const& ctx)
{
if (shouldForwardToRippled(ctx))
{
auto toForward = ctx.params;
toForward["command"] = ctx.method;
auto const res = balancer_->forwardToRippled(toForward, ctx.clientIp, ctx.yield);
notifyForwarded(ctx.method);
if (!res)
return Status{RippledError::rpcFAILED_TO_FORWARD};
return *res;
}
if (backend_->isTooBusy())
{
log_.error() << "Database is too busy. Rejecting request";
return Status{RippledError::rpcTOO_BUSY};
}
auto const method = handlerTable_.getHandler(ctx.method);
if (!method)
return Status{RippledError::rpcUNKNOWN_COMMAND};
try
{
perfLog_.debug() << ctx.tag() << " start executing rpc `" << ctx.method << '`';
auto const isAdmin = adminVerifier_.isAdmin(ctx.clientIp);
auto const context = Context{ctx.yield, ctx.session, isAdmin, ctx.clientIp};
auto const v = (*method).process(ctx.params, context);
perfLog_.debug() << ctx.tag() << " finish executing rpc `" << ctx.method << '`';
if (v)
return v->as_object();
else
return Status{v.error()};
}
catch (InvalidParamsError const& err)
{
return Status{RippledError::rpcINVALID_PARAMS, err.what()};
}
catch (AccountNotFoundError const& err)
{
return Status{RippledError::rpcACT_NOT_FOUND, err.what()};
}
catch (Backend::DatabaseTimeout const& t)
{
log_.error() << "Database timeout";
return Status{RippledError::rpcTOO_BUSY};
}
catch (std::exception const& err)
{
log_.error() << ctx.tag() << " caught exception: " << err.what();
return Status{RippledError::rpcINTERNAL};
}
}
/**
* @brief Used to schedule request processing onto the work queue
* @param func The lambda to execute when this request is handled
* @param ip The ip address for which this request is being executed
*/
template <typename Fn>
bool
post(Fn&& func, std::string const& ip)
{
return workQueue_.get().postCoro(std::forward<Fn>(func), dosGuard_.get().isWhiteListed(ip));
}
/**
* @brief Notify the system that specified method was executed
* @param method
* @param duration The time it took to execute the method, in
* microseconds
*/
void
notifyComplete(std::string const& method, std::chrono::microseconds const& duration)
{
if (validHandler(method))
counters_.get().rpcComplete(method, duration);
}
/**
* @brief Notify the system that specified method failed to execute
* @param method
*/
void
notifyErrored(std::string const& method)
{
if (validHandler(method))
counters_.get().rpcErrored(method);
}
/**
* @brief Notify the system that specified method execution was forwarded to rippled
* @param method
*/
void
notifyForwarded(std::string const& method)
{
if (validHandler(method))
counters_.get().rpcForwarded(method);
}
private:
bool
shouldForwardToRippled(Web::Context const& ctx) const
{
auto request = ctx.params;
if (isClioOnly(ctx.method))
return false;
if (isForwardCommand(ctx.method))
return true;
if (specifiesCurrentOrClosedLedger(request))
return true;
if (ctx.method == "account_info" && request.contains("queue") && request.at("queue").as_bool())
return true;
return false;
}
bool
isForwardCommand(std::string const& method) const
{
static std::unordered_set<std::string> const FORWARD_COMMANDS{
"submit",
"submit_multisigned",
"fee",
"ledger_closed",
"ledger_current",
"ripple_path_find",
"manifest",
"channel_authorize",
"channel_verify",
};
return FORWARD_COMMANDS.contains(method);
}
bool
isClioOnly(std::string const& method) const
{
return handlerTable_.isClioOnly(method);
}
bool
validHandler(std::string const& method) const
{
return handlerTable_.contains(method) || isForwardCommand(method);
}
};
using RPCEngine = RPCEngineBase<detail::IPAdminVerificationStrategy>;
} // namespace RPC

File diff suppressed because it is too large


@@ -20,26 +20,22 @@
#pragma once
/*
* This file contains a variety of utility functions used when executing
* the handlers
* This file contains a variety of utility functions used when executing the handlers.
*/
#include <ripple/app/ledger/Ledger.h>
#include <ripple/protocol/Indexes.h>
#include <ripple/protocol/STLedgerEntry.h>
#include <ripple/protocol/STTx.h>
#include <ripple/protocol/jss.h>
#include <backend/BackendInterface.h>
#include <rpc/RPC.h>
#include <rpc/JS.h>
#include <rpc/common/Types.h>
#include <webserver/Context.h>
// Useful macro for borrowing from ripple::jss
// static strings. (J)son (S)trings
#define JS(x) ripple::jss::x.c_str()
// Access (SF)ield name (S)trings
#define SFS(x) ripple::x.jsonName.c_str()
#include <fmt/core.h>
namespace RPC {
std::optional<ripple::AccountID>
accountFromStringStrict(std::string const& account);
@@ -50,25 +46,15 @@ std::uint64_t
getStartHint(ripple::SLE const& sle, ripple::AccountID const& accountID);
std::optional<AccountCursor>
parseAccountCursor(
BackendInterface const& backend,
std::uint32_t seq,
std::optional<std::string> jsonCursor,
boost::asio::yield_context& yield);
parseAccountCursor(std::optional<std::string> jsonCursor);
// TODO this function should probably be in a different file and namespace
std::pair<
std::shared_ptr<ripple::STTx const>,
std::shared_ptr<ripple::STObject const>>
std::pair<std::shared_ptr<ripple::STTx const>, std::shared_ptr<ripple::STObject const>>
deserializeTxPlusMeta(Backend::TransactionAndMetadata const& blobs);
// TODO this function should probably be in a different file and namespace
std::pair<
std::shared_ptr<ripple::STTx const>,
std::shared_ptr<ripple::TxMeta const>>
deserializeTxPlusMeta(
Backend::TransactionAndMetadata const& blobs,
std::uint32_t seq);
std::pair<std::shared_ptr<ripple::STTx const>, std::shared_ptr<ripple::TxMeta const>>
deserializeTxPlusMeta(Backend::TransactionAndMetadata const& blobs, std::uint32_t seq);
std::pair<boost::json::object, boost::json::object>
toExpandedJson(Backend::TransactionAndMetadata const& blobs);
@@ -104,7 +90,15 @@ generatePubLedgerMessage(
std::uint32_t txnCount);
std::variant<Status, ripple::LedgerInfo>
ledgerInfoFromRequest(Context const& ctx);
ledgerInfoFromRequest(std::shared_ptr<Backend::BackendInterface const> const& backend, Web::Context const& ctx);
std::variant<Status, ripple::LedgerInfo>
getLedgerInfoFromHashOrSeq(
BackendInterface const& backend,
boost::asio::yield_context& yield,
std::optional<std::string> ledgerHash,
std::optional<uint32_t> ledgerIndex,
uint32_t maxSeq);
std::variant<Status, AccountCursor>
traverseOwnedNodes(
@@ -128,11 +122,24 @@ traverseOwnedNodes(
boost::asio::yield_context& yield,
std::function<void(ripple::SLE&&)> atOwnedNode);
// Remove the account check from traverseOwnedNodes
// The account check has already been done by the framework; it is removed from this internal function
std::variant<Status, AccountCursor>
ngTraverseOwnedNodes(
BackendInterface const& backend,
ripple::AccountID const& accountID,
std::uint32_t sequence,
std::uint32_t limit,
std::optional<std::string> jsonCursor,
boost::asio::yield_context& yield,
std::function<void(ripple::SLE&&)> atOwnedNode);
std::shared_ptr<ripple::SLE const>
read(
std::shared_ptr<Backend::BackendInterface const> const& backend,
ripple::Keylet const& keylet,
ripple::LedgerInfo const& lgrInfo,
Context const& context);
Web::Context const& context);
std::variant<Status, std::pair<ripple::PublicKey, ripple::SecretKey>>
keypairFromRequst(boost::json::object const& request);
@@ -200,6 +207,9 @@ postProcessOrderBook(
std::uint32_t ledgerSequence,
boost::asio::yield_context& yield);
std::variant<Status, ripple::Book>
parseBook(ripple::Currency pays, ripple::AccountID payIssuer, ripple::Currency gets, ripple::AccountID getIssuer);
std::variant<Status, ripple::Book>
parseBook(boost::json::object const& request);
@@ -210,10 +220,7 @@ std::optional<std::uint32_t>
getUInt(boost::json::object const& request, std::string const& field);
std::uint32_t
getUInt(
boost::json::object const& request,
std::string const& field,
std::uint32_t dfault);
getUInt(boost::json::object const& request, std::string const& field, std::uint32_t dfault);
std::uint32_t
getRequiredUInt(boost::json::object const& request, std::string const& field);
@@ -222,10 +229,7 @@ std::optional<bool>
getBool(boost::json::object const& request, std::string const& field);
bool
getBool(
boost::json::object const& request,
std::string const& field,
bool dfault);
getBool(boost::json::object const& request, std::string const& field, bool dfault);
bool
getRequiredBool(boost::json::object const& request, std::string const& field);
@@ -237,10 +241,7 @@ std::string
getRequiredString(boost::json::object const& request, std::string const& field);
std::string
getString(
boost::json::object const& request,
std::string const& field,
std::string dfault);
getString(boost::json::object const& request, std::string const& field, std::string dfault);
Status
getHexMarker(boost::json::object const& request, ripple::uint256& marker);
@@ -249,10 +250,7 @@ Status
getAccount(boost::json::object const& request, ripple::AccountID& accountId);
Status
getAccount(
boost::json::object const& request,
ripple::AccountID& destAccount,
boost::string_view const& field);
getAccount(boost::json::object const& request, ripple::AccountID& destAccount, boost::string_view const& field);
Status
getOptionalAccount(
@@ -272,21 +270,24 @@ specifiesCurrentOrClosedLedger(boost::json::object const& request);
std::variant<ripple::uint256, Status>
getNFTID(boost::json::object const& request);
// This function is the driver for both `account_tx` and `nft_tx` and should
// be used for any future transaction enumeration APIs.
std::variant<Status, boost::json::object>
traverseTransactions(
Context const& context,
std::function<Backend::TransactionsAndCursor(
std::shared_ptr<Backend::BackendInterface const> const& backend,
std::uint32_t const,
bool const,
std::optional<Backend::TransactionsCursor> const&,
boost::asio::yield_context& yield)> transactionFetcher);
template <class T>
void
logDuration(Web::Context const& ctx, T const& dur)
{
using boost::json::serialize;
[[nodiscard]] boost::json::object const
computeBookChanges(
ripple::LedgerInfo const& lgrInfo,
std::vector<Backend::TransactionAndMetadata> const& transactions);
static clio::Logger log{"RPC"};
auto const millis = std::chrono::duration_cast<std::chrono::milliseconds>(dur).count();
auto const seconds = std::chrono::duration_cast<std::chrono::seconds>(dur).count();
auto const msg =
fmt::format("Request processing duration = {} milliseconds. request = {}", millis, serialize(ctx.params));
if (seconds > 10)
log.error() << ctx.tag() << msg;
else if (seconds > 1)
log.warn() << ctx.tag() << msg;
else
log.info() << ctx.tag() << msg;
}
} // namespace RPC


@@ -23,8 +23,14 @@ WorkQueue::WorkQueue(std::uint32_t numWorkers, uint32_t maxSize)
{
if (maxSize != 0)
maxSize_ = maxSize;
while (--numWorkers)
{
threads_.emplace_back([this] { ioc_.run(); });
}
}
WorkQueue::~WorkQueue()
{
work_.reset();
for (auto& thread : threads_)
thread.join();
}

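The constructor/destructor pair above follows a common worker-pool shape: spawn threads that run jobs, then signal shutdown and join in the destructor. A boost-free sketch of the same lifecycle using a plain job queue (names are illustrative; the real class drives a boost::asio::io_context with an io_context::work guard instead):

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal worker pool: N threads drain a job queue; the destructor
// signals shutdown, lets workers finish pending jobs, and joins them.
class MiniWorkQueue
{
    std::mutex mtx_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> jobs_;
    std::vector<std::thread> threads_;
    bool stopping_ = false;

public:
    explicit MiniWorkQueue(unsigned numWorkers)
    {
        for (unsigned i = 0; i < numWorkers; ++i)
        {
            threads_.emplace_back([this] {
                for (;;)
                {
                    std::function<void()> job;
                    {
                        std::unique_lock lock{mtx_};
                        cv_.wait(lock, [this] { return stopping_ || !jobs_.empty(); });
                        if (jobs_.empty())
                            return;  // stopping and fully drained
                        job = std::move(jobs_.front());
                        jobs_.pop();
                    }
                    job();  // run outside the lock
                }
            });
        }
    }

    ~MiniWorkQueue()
    {
        {
            std::lock_guard lock{mtx_};
            stopping_ = true;
        }
        cv_.notify_all();
        for (auto& t : threads_)
            t.join();
    }

    void
    post(std::function<void()> job)
    {
        {
            std::lock_guard lock{mtx_};
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }
};
```

Jobs posted before destruction are still executed: workers only return once the queue is empty and the stop flag is set.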

@@ -19,6 +19,7 @@
#pragma once
#include <config/Config.h>
#include <log/Logger.h>
#include <boost/asio.hpp>
@@ -41,8 +42,25 @@ class WorkQueue
uint32_t maxSize_ = std::numeric_limits<uint32_t>::max();
clio::Logger log_{"RPC"};
std::vector<std::thread> threads_ = {};
boost::asio::io_context ioc_ = {};
std::optional<boost::asio::io_context::work> work_{ioc_};
public:
WorkQueue(std::uint32_t numWorkers, uint32_t maxSize = 0);
~WorkQueue();
static WorkQueue
make_WorkQueue(clio::Config const& config)
{
static clio::Logger log{"RPC"};
auto const serverConfig = config.section("server");
auto const numThreads = config.valueOr<uint32_t>("workers", std::thread::hardware_concurrency());
auto const maxQueueSize = serverConfig.valueOr<uint32_t>("max_queue_size", 0); // 0 is no limit
log.info() << "Number of workers = " << numThreads << ". Max queue size = " << maxQueueSize;
return WorkQueue{numThreads, maxQueueSize};
}
template <typename F>
bool
@@ -50,48 +68,42 @@ public:
{
if (curSize_ >= maxSize_ && !isWhiteListed)
{
log_.warn() << "Queue is full. rejecting job. current size = "
<< curSize_ << " max size = " << maxSize_;
log_.warn() << "Queue is full. rejecting job. current size = " << curSize_ << " max size = " << maxSize_;
return false;
}
++curSize_;
auto start = std::chrono::system_clock::now();
// Each time we enqueue a job, we want to post a symmetrical job that
// will dequeue and run the job at the front of the job queue.
boost::asio::spawn(
ioc_,
[this, f = std::move(f), start](boost::asio::yield_context yield) {
auto run = std::chrono::system_clock::now();
auto wait =
std::chrono::duration_cast<std::chrono::microseconds>(
run - start)
.count();
// increment queued_ here, in the same place we increment
// durationUs_
++queued_;
durationUs_ += wait;
log_.info() << "WorkQueue wait time = " << wait
<< " queue size = " << curSize_;
f(yield);
--curSize_;
});
// Each time we enqueue a job, we want to post a symmetrical job that will dequeue and run the job at the front
// of the job queue.
boost::asio::spawn(ioc_, [this, f = std::move(f), start](auto yield) {
auto const run = std::chrono::system_clock::now();
auto const wait = std::chrono::duration_cast<std::chrono::microseconds>(run - start).count();
// increment queued_ here, in the same place we increment durationUs_
++queued_;
durationUs_ += wait;
log_.info() << "WorkQueue wait time = " << wait << " queue size = " << curSize_;
f(yield);
--curSize_;
});
return true;
}
boost::json::object
report() const
{
boost::json::object obj;
auto obj = boost::json::object{};
obj["queued"] = queued_;
obj["queued_duration_us"] = durationUs_;
obj["current_queue_size"] = curSize_;
obj["max_queue_size"] = maxSize_;
return obj;
}
private:
std::vector<std::thread> threads_ = {};
boost::asio::io_context ioc_ = {};
std::optional<boost::asio::io_context::work> work_{ioc_};
};

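postCoro above admits a job only while the queue is below max_queue_size, except for whitelisted clients. A sketch of just that admission logic, with job execution elided (class and method names are illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Models the curSize_/maxSize_ accounting from the snippet:
// reject new jobs once the queue is full, unless the caller is whitelisted.
class AdmissionCounter
{
    std::uint32_t curSize_ = 0;
    std::uint32_t maxSize_;

public:
    explicit AdmissionCounter(std::uint32_t maxSize) : maxSize_{maxSize}
    {
    }

    bool
    tryEnqueue(bool isWhiteListed)
    {
        if (curSize_ >= maxSize_ && !isWhiteListed)
            return false;  // queue full and caller not exempt
        ++curSize_;
        return true;
    }

    void
    finish()
    {
        --curSize_;  // mirrors the --curSize_ at the end of each job
    }
};
```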

@@ -23,20 +23,28 @@
#include <rpc/common/Types.h>
#include <rpc/common/impl/Processors.h>
namespace RPCng {
namespace RPC {
/**
* @brief A type-erased Handler that can contain any (NextGen) RPC handler class
*
* This allows different handlers to be stored in one map, vector, etc.
* Support for copying was added in order to allow storing in a
* map/unordered_map using the initializer_list constructor.
*/
class AnyHandler final
{
public:
template <
Handler HandlerType,
typename ProcessingStrategy = detail::DefaultProcessor<HandlerType>>
/**
* @brief Type-erases any handler class.
*
* @tparam HandlerType The real type of the wrapped handler class
* @tparam ProcessingStrategy A strategy that implements how processing of JSON is to be done
* @param handler The handler to wrap. Required to fulfil the @ref Handler concept.
*/
template <Handler HandlerType, typename ProcessingStrategy = detail::DefaultProcessor<HandlerType>>
/* implicit */ AnyHandler(HandlerType&& handler)
: pimpl_{std::make_unique<Model<HandlerType, ProcessingStrategy>>(
std::forward<HandlerType>(handler))}
: pimpl_{std::make_unique<Model<HandlerType, ProcessingStrategy>>(std::forward<HandlerType>(handler))}
{
}
@@ -44,6 +52,7 @@ public:
AnyHandler(AnyHandler const& other) : pimpl_{other.pimpl_->clone()}
{
}
AnyHandler&
operator=(AnyHandler const& rhs)
{
@@ -51,21 +60,43 @@ public:
pimpl_.swap(copy.pimpl_);
return *this;
}
AnyHandler(AnyHandler&&) = default;
AnyHandler&
operator=(AnyHandler&&) = default;
/**
* @brief Process incoming JSON by the stored handler
*
* @param value The JSON to process
* @return JSON result or @ref Status on error
*/
[[nodiscard]] ReturnType
process(boost::json::value const& value) const
{
return pimpl_->process(value);
}
/**
* @brief Process incoming JSON by the stored handler in a provided coroutine
*
* @param value The JSON to process
* @return JSON result or @ref Status on error
*/
[[nodiscard]] ReturnType
process(boost::json::value const& value, Context const& ctx) const
{
return pimpl_->process(value, ctx);
}
private:
struct Concept
{
virtual ~Concept() = default;
[[nodiscard]] virtual ReturnType
process(boost::json::value const& value, Context const& ctx) const = 0;
[[nodiscard]] virtual ReturnType
process(boost::json::value const& value) const = 0;
@@ -89,6 +120,12 @@ private:
return processor(handler, value);
}
[[nodiscard]] ReturnType
process(boost::json::value const& value, Context const& ctx) const override
{
return processor(handler, value, &ctx);
}
[[nodiscard]] std::unique_ptr<Concept>
clone() const override
{
@@ -100,4 +137,4 @@ private:
std::unique_ptr<Concept> pimpl_;
};
} // namespace RPCng
} // namespace RPC


@@ -26,7 +26,7 @@
#include <string>
namespace RPCng {
namespace RPC {
struct RpcSpec;
@@ -49,12 +49,40 @@ concept Requirement = requires(T a) {
*/
// clang-format off
template <typename T>
concept ContextProcessWithInput = requires(T a, typename T::Input in, typename T::Output out, Context const& ctx) {
{ a.process(in, ctx) } -> std::same_as<HandlerReturnType<decltype(out)>>;
};
template <typename T>
concept ContextProcessWithoutInput = requires(T a, typename T::Output out, Context const& ctx) {
{ a.process(ctx) } -> std::same_as<HandlerReturnType<decltype(out)>>;
};
template <typename T>
concept NonContextProcess = requires(T a, typename T::Input in, typename T::Output out) {
{ a.process(in) } -> std::same_as<HandlerReturnType<decltype(out)>>;
};
template <typename T>
concept HandlerWithInput = requires(T a) {
{ a.spec() } -> std::same_as<RpcSpecConstRef>;
}
and (ContextProcessWithInput<T> or NonContextProcess<T>)
and boost::json::has_value_to<typename T::Input>::value;
template <typename T>
concept HandlerWithoutInput = requires(T a, typename T::Output out) {
{ a.process() } -> std::same_as<HandlerReturnType<decltype(out)>>;
}
or ContextProcessWithoutInput<T>;
template <typename T>
concept Handler =
(
HandlerWithInput<T> or
HandlerWithoutInput<T>
)
and boost::json::has_value_from<typename T::Output>::value;
// clang-format on
} // namespace RPCng
} // namespace RPC


@@ -21,7 +21,7 @@
#include <boost/json/value.hpp>
namespace RPCng {
namespace RPC {
[[nodiscard]] MaybeError
FieldSpec::validate(boost::json::value const& value) const
@@ -39,4 +39,4 @@ RpcSpec::validate(boost::json::value const& value) const
return {};
}
} // namespace RPCng
} // namespace RPC


@@ -26,21 +26,33 @@
#include <string>
#include <vector>
namespace RPCng {
namespace RPC {
/**
* @brief Represents a Specification for one field of an RPC command
*/
struct FieldSpec final
{
/**
* @brief Construct a field specification out of a set of requirements
*
* @tparam Requirements The types of requirements @ref Requirement
* @param key The key in a JSON object that the field validates
* @param requirements The requirements, each of which has to fulfil
* the @ref Requirement concept
*/
template <Requirement... Requirements>
FieldSpec(std::string const& key, Requirements&&... requirements)
: validator_{detail::makeFieldValidator<Requirements...>(
key,
std::forward<Requirements>(requirements)...)}
: validator_{detail::makeFieldValidator<Requirements...>(key, std::forward<Requirements>(requirements)...)}
{
}
/**
* @brief Validates the passed JSON value using the stored requirements
*
* @param value The JSON value to validate
* @return Nothing on success; @ref Status on error
*/
[[nodiscard]] MaybeError
validate(boost::json::value const& value) const;
@@ -56,10 +68,21 @@ private:
*/
struct RpcSpec final
{
/**
* @brief Construct a full RPC request specification
*
* @param fields The fields of the RPC specification @ref FieldSpec
*/
RpcSpec(std::initializer_list<FieldSpec> fields) : fields_{fields}
{
}
/**
* @brief Validates the passed JSON value using the stored field specs
*
* @param value The JSON value to validate
* @return Nothing on success; @ref Status on error
*/
[[nodiscard]] MaybeError
validate(boost::json::value const& value) const;
@@ -67,4 +90,4 @@ private:
std::vector<FieldSpec> fields_;
};
} // namespace RPCng
} // namespace RPC


@@ -22,35 +22,98 @@
#include <rpc/Errors.h>
#include <util/Expected.h>
#include <ripple/basics/base_uint.h>
#include <boost/asio/spawn.hpp>
#include <boost/json/value.hpp>
namespace RPCng {
class WsBase;
class SubscriptionManager;
namespace RPC {
/**
* @brief Return type used for validators that can return an error but have no
* specific value to return
*/
using MaybeError = util::Expected<void, RPC::Status>;
using MaybeError = util::Expected<void, Status>;
/**
* @brief The type that represents just the error part of @ref MaybeError
*/
using Error = util::Unexpected<RPC::Status>;
using Error = util::Unexpected<Status>;
/**
* @brief Return type for each individual handler
*/
template <typename OutputType>
using HandlerReturnType = util::Expected<OutputType, RPC::Status>;
using HandlerReturnType = util::Expected<OutputType, Status>;
/**
* @brief The final return type out of RPC engine
*/
using ReturnType = util::Expected<boost::json::value, RPC::Status>;
using ReturnType = util::Expected<boost::json::value, Status>;
struct RpcSpec;
struct FieldSpec;
using RpcSpecConstRef = RpcSpec const&;
} // namespace RPCng
struct VoidOutput
{
};
struct Context
{
// TODO: we shall change yield_context to const yield_context after we
// update backend interfaces to use const& yield
std::reference_wrapper<boost::asio::yield_context> yield;
std::shared_ptr<WsBase> session;
bool isAdmin = false;
std::string clientIp;
};
using Result = std::variant<Status, boost::json::object>;
struct AccountCursor
{
ripple::uint256 index;
std::uint32_t hint;
std::string
toString() const
{
return ripple::strHex(index) + "," + std::to_string(hint);
}
bool
isNonZero() const
{
return index.isNonZero() || hint != 0;
}
};
class AnyHandler;
class HandlerProvider
{
public:
virtual ~HandlerProvider() = default;
virtual bool
contains(std::string const& method) const = 0;
virtual std::optional<AnyHandler>
getHandler(std::string const& command) const = 0;
virtual bool
isClioOnly(std::string const& command) const = 0;
};
inline void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, VoidOutput const&)
{
jv = boost::json::object{};
}
} // namespace RPC

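AccountCursor above serializes to the "index,hint" marker format. A sketch with ripple::uint256 replaced by a plain 32-byte array so the hex encoding is visible (the byte-array representation is an illustrative stand-in):

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <string>

// Simplified AccountCursor: toString() hex-encodes the 32-byte index
// (uppercase, like ripple::strHex) and appends the hint after a comma.
struct AccountCursor
{
    std::array<std::uint8_t, 32> index{};
    std::uint32_t hint = 0;

    std::string
    toString() const
    {
        static char const* digits = "0123456789ABCDEF";
        std::string hex;
        for (auto byte : index)
        {
            hex += digits[byte >> 4];
            hex += digits[byte & 0x0F];
        }
        return hex + "," + std::to_string(hint);
    }

    bool
    isNonZero() const
    {
        for (auto byte : index)
            if (byte != 0)
                return true;
        return hint != 0;
    }
};
```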

@@ -17,13 +17,16 @@
*/
//==============================================================================
#include <ripple/basics/base_uint.h>
#include <rpc/RPCHelpers.h>
#include <rpc/common/Validators.h>
#include <boost/json/value.hpp>
#include <charconv>
#include <string_view>
namespace RPCng::validation {
namespace RPC::validation {
[[nodiscard]] MaybeError
Section::verify(boost::json::value const& value, std::string_view key) const
@@ -33,11 +36,17 @@ Section::verify(boost::json::value const& value, std::string_view key) const
// instead
auto const& res = value.at(key.data());
// if it is not a json object, let other validators fail
if (!res.is_object())
return {};
for (auto const& spec : specs)
{
if (auto const ret = spec.validate(res); not ret)
return Error{ret.error()};
}
return {};
}
@@ -45,27 +54,24 @@ Section::verify(boost::json::value const& value, std::string_view key) const
Required::verify(boost::json::value const& value, std::string_view key) const
{
if (not value.is_object() or not value.as_object().contains(key.data()))
return Error{RPC::Status{
RPC::RippledError::rpcINVALID_PARAMS,
"Required field '" + std::string{key} + "' missing"}};
return Error{Status{RippledError::rpcINVALID_PARAMS, "Required field '" + std::string{key} + "' missing"}};
return {};
}
[[nodiscard]] MaybeError
ValidateArrayAt::verify(boost::json::value const& value, std::string_view key)
const
ValidateArrayAt::verify(boost::json::value const& value, std::string_view key) const
{
if (not value.is_object() or not value.as_object().contains(key.data()))
return {}; // ignore. field does not exist, let 'required' fail
// instead
if (not value.as_object().at(key.data()).is_array())
return Error{RPC::Status{RPC::RippledError::rpcINVALID_PARAMS}};
return Error{Status{RippledError::rpcINVALID_PARAMS}};
auto const& arr = value.as_object().at(key.data()).as_array();
if (idx_ >= arr.size())
return Error{RPC::Status{RPC::RippledError::rpcINVALID_PARAMS}};
return Error{Status{RippledError::rpcINVALID_PARAMS}};
auto const& res = arr.at(idx_);
for (auto const& spec : specs_)
@@ -76,8 +82,7 @@ ValidateArrayAt::verify(boost::json::value const& value, std::string_view key)
}
[[nodiscard]] MaybeError
CustomValidator::verify(boost::json::value const& value, std::string_view key)
const
CustomValidator::verify(boost::json::value const& value, std::string_view key) const
{
if (not value.is_object() or not value.as_object().contains(key.data()))
return {}; // ignore. field does not exist, let 'required' fail
@@ -86,4 +91,154 @@ CustomValidator::verify(boost::json::value const& value, std::string_view key)
return validator_(value.as_object().at(key.data()), key);
}
[[nodiscard]] bool
checkIsU32Numeric(std::string_view sv)
{
uint32_t unused;
auto [_, ec] = std::from_chars(sv.data(), sv.data() + sv.size(), unused);
return ec == std::errc();
}
CustomValidator Uint256HexStringValidator =
CustomValidator{[](boost::json::value const& value, std::string_view key) -> MaybeError {
if (!value.is_string())
return Error{Status{RippledError::rpcINVALID_PARAMS, std::string(key) + "NotString"}};
ripple::uint256 ledgerHash;
if (!ledgerHash.parseHex(value.as_string().c_str()))
return Error{Status{RippledError::rpcINVALID_PARAMS, std::string(key) + "Malformed"}};
return MaybeError{};
}};
CustomValidator LedgerIndexValidator =
CustomValidator{[](boost::json::value const& value, std::string_view key) -> MaybeError {
auto err = Error{Status{RippledError::rpcINVALID_PARAMS, "ledgerIndexMalformed"}};
if (!value.is_string() && !(value.is_uint64() || value.is_int64()))
return err;
if (value.is_string() && value.as_string() != "validated" && !checkIsU32Numeric(value.as_string().c_str()))
return err;
return MaybeError{};
}};
CustomValidator AccountValidator =
CustomValidator{[](boost::json::value const& value, std::string_view key) -> MaybeError {
if (!value.is_string())
return Error{Status{RippledError::rpcINVALID_PARAMS, std::string(key) + "NotString"}};
// TODO: we are using accountFromStringStrict from RPCHelpers, after we
// remove all old handlers, this function can be moved here
if (!accountFromStringStrict(value.as_string().c_str()))
return Error{Status{RippledError::rpcACT_MALFORMED, std::string(key) + "Malformed"}};
return MaybeError{};
}};
CustomValidator AccountBase58Validator =
CustomValidator{[](boost::json::value const& value, std::string_view key) -> MaybeError {
if (!value.is_string())
return Error{Status{RippledError::rpcINVALID_PARAMS, std::string(key) + "NotString"}};
auto const account = ripple::parseBase58<ripple::AccountID>(value.as_string().c_str());
if (!account || account->isZero())
return Error{Status{ClioError::rpcMALFORMED_ADDRESS}};
return MaybeError{};
}};
CustomValidator AccountMarkerValidator =
CustomValidator{[](boost::json::value const& value, std::string_view key) -> MaybeError {
if (!value.is_string())
return Error{Status{RippledError::rpcINVALID_PARAMS, std::string(key) + "NotString"}};
// TODO: we are using parseAccountCursor from RPCHelpers, after we
// remove all old handlers, this function can be moved here
if (!parseAccountCursor(value.as_string().c_str()))
{
// align with the current error message
return Error{Status{RippledError::rpcINVALID_PARAMS, "Malformed cursor"}};
}
return MaybeError{};
}};
CustomValidator CurrencyValidator =
CustomValidator{[](boost::json::value const& value, std::string_view key) -> MaybeError {
if (!value.is_string())
return Error{Status{RippledError::rpcINVALID_PARAMS, std::string(key) + "NotString"}};
ripple::Currency currency;
if (!ripple::to_currency(currency, value.as_string().c_str()))
return Error{Status{ClioError::rpcMALFORMED_CURRENCY, "malformedCurrency"}};
return MaybeError{};
}};
CustomValidator IssuerValidator =
CustomValidator{[](boost::json::value const& value, std::string_view key) -> MaybeError {
if (!value.is_string())
return Error{Status{RippledError::rpcINVALID_PARAMS, std::string(key) + "NotString"}};
ripple::AccountID issuer;
// TODO: need to align with the error
if (!ripple::to_issuer(issuer, value.as_string().c_str()))
return Error{Status{RippledError::rpcINVALID_PARAMS, fmt::format("Invalid field '{}', bad issuer.", key)}};
if (issuer == ripple::noAccount())
return Error{Status{
RippledError::rpcINVALID_PARAMS,
fmt::format(
"Invalid field '{}', bad issuer account "
"one.",
key)}};
return MaybeError{};
}};
CustomValidator SubscribeStreamValidator =
CustomValidator{[](boost::json::value const& value, std::string_view key) -> MaybeError {
static std::unordered_set<std::string> const validStreams = {
"ledger", "transactions", "transactions_proposed", "book_changes", "manifests", "validations"};
if (!value.is_array())
return Error{Status{RippledError::rpcINVALID_PARAMS, std::string(key) + "NotArray"}};
for (auto const& v : value.as_array())
{
if (!v.is_string())
return Error{Status{RippledError::rpcINVALID_PARAMS, "streamNotString"}};
if (not validStreams.contains(v.as_string().c_str()))
return Error{Status{RippledError::rpcSTREAM_MALFORMED}};
}
return MaybeError{};
}};
CustomValidator SubscribeAccountsValidator =
CustomValidator{[](boost::json::value const& value, std::string_view key) -> MaybeError {
if (!value.is_array())
return Error{Status{RippledError::rpcINVALID_PARAMS, std::string(key) + "NotArray"}};
if (value.as_array().size() == 0)
return Error{Status{RippledError::rpcACT_MALFORMED, std::string(key) + " malformed."}};
for (auto const& v : value.as_array())
{
auto obj = boost::json::object();
auto const keyItem = std::string(key) + "'sItem";
obj[keyItem] = v;
if (auto const err = AccountValidator.verify(obj, keyItem); !err)
return err;
}
return MaybeError{};
}};
} // namespace RPC::validation

View File

@@ -23,7 +23,9 @@
#include <rpc/common/Specs.h>
#include <rpc/common/Types.h>
#include <fmt/core.h>
namespace RPC::validation {
/**
* @brief Check that the type is the same as what was expected
@@ -46,24 +48,26 @@ template <typename Expected>
if (not value.is_string())
hasError = true;
}
else if constexpr (std::is_same_v<Expected, double> or std::is_same_v<Expected, float>)
{
if (not value.is_double())
hasError = true;
}
else if constexpr (std::is_same_v<Expected, boost::json::array>)
{
if (not value.is_array())
hasError = true;
}
else if constexpr (std::is_same_v<Expected, boost::json::object>)
{
if (not value.is_object())
hasError = true;
}
else if constexpr (std::is_convertible_v<Expected, uint64_t> or std::is_convertible_v<Expected, int64_t>)
{
if (not value.is_int64() && not value.is_uint64())
hasError = true;
}
return not hasError;
}
@@ -76,10 +80,22 @@ class Section final
std::vector<FieldSpec> specs;
public:
/**
* @brief Construct new section validator from a list of specs
*
* @param specs List of specs @ref FieldSpec
*/
explicit Section(std::initializer_list<FieldSpec> specs) : specs{specs}
{
}
/**
* @brief Verify that the JSON value representing the section is valid
* according to the given specs
*
* @param value The JSON value representing the outer object
* @param key The key used to retrieve the section from the outer object
*/
[[nodiscard]] MaybeError
verify(boost::json::value const& value, std::string_view key) const;
};
@@ -93,12 +109,77 @@ struct Required final
verify(boost::json::value const& value, std::string_view key) const;
};
/**
 * @brief A validator that forbids a field from being present
 * If a value is provided, the field is forbidden only when it equals that value
 * If no value is provided, the field is forbidden whenever it is present
*/
template <typename... T>
class NotSupported;
/**
* @brief A specialized NotSupported validator that forbids a field to be present when the value equals the given value
*/
template <typename T>
class NotSupported<T> final
{
T value_;
public:
[[nodiscard]] MaybeError
verify(boost::json::value const& value, std::string_view key) const
{
if (value.is_object() and value.as_object().contains(key.data()))
{
using boost::json::value_to;
auto const res = value_to<T>(value.as_object().at(key.data()));
if (value_ == res)
return Error{Status{
RippledError::rpcNOT_SUPPORTED,
fmt::format("Not supported field '{}'s value '{}'", std::string{key}, res)}};
}
return {};
}
NotSupported(T val) : value_(val)
{
}
};
/**
* @brief A specialized NotSupported validator that forbids a field to be present
*/
template <>
class NotSupported<> final
{
public:
[[nodiscard]] MaybeError
verify(boost::json::value const& value, std::string_view key) const
{
if (value.is_object() and value.as_object().contains(key.data()))
return Error{Status{RippledError::rpcNOT_SUPPORTED, "Not supported field '" + std::string{key} + "'"}};
return {};
}
};
// deduction guide to avoid having to specify the template arguments
template <typename... T>
NotSupported(T&&... t) -> NotSupported<T...>;
/**
* @brief Validates that the type of the value is one of the given types
*/
template <typename... Types>
struct Type final
{
/**
 * @brief Verify that the JSON value matches one of the specified types
*
* @param value The JSON value representing the outer object
* @param key The key used to retrieve the tested value from the outer
* object
*/
[[nodiscard]] MaybeError
verify(boost::json::value const& value, std::string_view key) const
{
@@ -110,7 +191,7 @@ struct Type final
auto const convertible = (checkType<Types>(res) || ...);
if (not convertible)
return Error{Status{RippledError::rpcINVALID_PARAMS}};
return {};
}
@@ -126,23 +207,38 @@ class Between final
Type max_;
public:
/**
* @brief Construct the validator storing min and max values
*
 * @param min The minimum allowed value
 * @param max The maximum allowed value
*/
explicit Between(Type min, Type max) : min_{min}, max_{max}
{
}
/**
* @brief Verify that the JSON value is within a certain range
*
* @param value The JSON value representing the outer object
* @param key The key used to retrieve the tested value from the outer
* object
*/
[[nodiscard]] MaybeError
verify(boost::json::value const& value, std::string_view key) const
{
using boost::json::value_to;
if (not value.is_object() or not value.as_object().contains(key.data()))
return {}; // ignore. field does not exist, let 'required' fail
// instead
auto const res = value_to<Type>(value.as_object().at(key.data()));
// TODO: may want a way to make this code more generic (e.g. use a free
// function that can be overridden for this comparison)
if (res < min_ || res > max_)
return Error{Status{RippledError::rpcINVALID_PARAMS}};
return {};
}
@@ -157,21 +253,34 @@ class EqualTo final
Type original_;
public:
/**
* @brief Construct the validator with stored original value
*
* @param original The original value to store
*/
explicit EqualTo(Type original) : original_{original}
{
}
/**
* @brief Verify that the JSON value is equal to the stored original
*
* @param value The JSON value representing the outer object
* @param key The key used to retrieve the tested value from the outer
* object
*/
[[nodiscard]] MaybeError
verify(boost::json::value const& value, std::string_view key) const
{
using boost::json::value_to;
if (not value.is_object() or not value.as_object().contains(key.data()))
return {}; // ignore. field does not exist, let 'required' fail
// instead
auto const res = value_to<Type>(value.as_object().at(key.data()));
if (res != original_)
return Error{Status{RippledError::rpcINVALID_PARAMS}};
return {};
}
@@ -192,22 +301,34 @@ class OneOf final
std::vector<Type> options_;
public:
/**
* @brief Construct the validator with stored options
*
* @param options The list of allowed options
*/
explicit OneOf(std::initializer_list<Type> options) : options_{options}
{
}
/**
* @brief Verify that the JSON value is one of the stored options
*
* @param value The JSON value representing the outer object
* @param key The key used to retrieve the tested value from the outer
* object
*/
[[nodiscard]] MaybeError
verify(boost::json::value const& value, std::string_view key) const
{
using boost::json::value_to;
if (not value.is_object() or not value.as_object().contains(key.data()))
return {}; // ignore. field does not exist, let 'required' fail
// instead
auto const res = value_to<Type>(value.as_object().at(key.data()));
if (std::find(std::begin(options_), std::end(options_), res) == std::end(options_))
return Error{Status{RippledError::rpcINVALID_PARAMS}};
return {};
}
@@ -229,13 +350,117 @@ class ValidateArrayAt final
std::vector<FieldSpec> specs_;
public:
/**
* @brief Constructs a validator that validates the specified element of a
* JSON array
*
* @param idx The index inside the array to validate
* @param specs The specifications to validate against
*/
ValidateArrayAt(std::size_t idx, std::initializer_list<FieldSpec> specs) : idx_{idx}, specs_{specs}
{
}
/**
 * @brief Verify that the JSON array element at the given index is valid
 * according to the stored specs
*
* @param value The JSON value representing the outer object
* @param key The key used to retrieve the array from the outer object
*/
[[nodiscard]] MaybeError
verify(boost::json::value const& value, std::string_view key) const;
};
/**
* @brief A meta-validator that specifies a list of requirements to run against
* when the type matches the template parameter
*/
template <typename Type>
class IfType final
{
public:
/**
* @brief Constructs a validator that validates the specs if the type
* matches
* @param requirements The requirements to validate against
*/
template <Requirement... Requirements>
IfType(Requirements&&... requirements)
{
validator_ = [... r = std::forward<Requirements>(requirements)](
boost::json::value const& j, std::string_view key) -> MaybeError {
std::optional<Status> firstFailure = std::nullopt;
// The check logic is the same as in FieldSpec
// clang-format off
([&j, &key, &firstFailure, req = &r]() {
if (firstFailure)
return;
if (auto const res = req->verify(j, key); not res)
firstFailure = res.error();
}(), ...);
// clang-format on
if (firstFailure)
return Error{firstFailure.value()};
return {};
};
}
/**
 * @brief Verify that the element is valid
 * according to the stored requirements when the type matches
*
* @param value The JSON value representing the outer object
* @param key The key used to retrieve the element from the outer object
*/
[[nodiscard]] MaybeError
verify(boost::json::value const& value, std::string_view key) const
{
if (not value.is_object() or not value.as_object().contains(key.data()))
return {}; // ignore. field does not exist, let 'required' fail
// instead
if (not checkType<Type>(value.as_object().at(key.data())))
return {}; // ignore if type does not match
return validator_(value, key);
}
private:
std::function<MaybeError(boost::json::value const&, std::string_view)> validator_;
};
/**
 * @brief A meta-validator that wraps another validator and returns a customized
 * error
*/
template <typename Requirement>
class WithCustomError final
{
Requirement requirement;
Status error;
public:
/**
 * @brief Constructs a validator that calls the given validator "req" and
 * returns the customized error "err" on failure
*/
WithCustomError(Requirement req, Status err) : requirement{std::move(req)}, error{err}
{
}
[[nodiscard]] MaybeError
verify(boost::json::value const& value, std::string_view key) const
{
if (auto const res = requirement.verify(value, key); not res)
return Error{error};
return {};
}
};
/**
@@ -243,17 +468,94 @@ public:
*/
class CustomValidator final
{
std::function<MaybeError(boost::json::value const&, std::string_view)> validator_;
public:
/**
* @brief Constructs a custom validator from any supported callable
*
* @tparam Fn The type of callable
* @param fn The callable/function object
*/
template <typename Fn>
explicit CustomValidator(Fn&& fn) : validator_{std::forward<Fn>(fn)}
{
}
/**
* @brief Verify that the JSON value is valid according to the custom
* validation function stored
*
* @param value The JSON value representing the outer object
* @param key The key used to retrieve the tested value from the outer
* object
*/
[[nodiscard]] MaybeError
verify(boost::json::value const& value, std::string_view key) const;
};
/**
 * @brief Helper function to check whether sv is a valid uint32 number
*/
[[nodiscard]] bool
checkIsU32Numeric(std::string_view sv);
/**
 * @brief Provide a commonly used validator for ledger index
 * LedgerIndex must be a string or an int
 * If the specified LedgerIndex is a string, its value must be either
 * "validated" or a valid integer value represented as a string.
*/
extern CustomValidator LedgerIndexValidator;
/**
 * @brief Provide a commonly used validator for account
 * Account must be a string and must convert to a valid account ID
*/
extern CustomValidator AccountValidator;
/**
 * @brief Provide a commonly used validator for account
 * Account must be a string that can be decoded from base58
*/
extern CustomValidator AccountBase58Validator;
/**
 * @brief Provide a commonly used validator for marker
 * Marker is composed of a comma-separated index and start hint. The
 * former is read as hex, and the latter must be castable to uint64.
*/
extern CustomValidator AccountMarkerValidator;
/**
 * @brief Provide a commonly used validator for uint256 hex string
 * It must be a hex-encoded string
 * Transaction index and ledger hash both use this validator
*/
extern CustomValidator Uint256HexStringValidator;
/**
 * @brief Provide a commonly used validator for currency,
 * covering both standard currency codes and token codes
*/
extern CustomValidator CurrencyValidator;
/**
 * @brief Provide a commonly used validator for issuer type
* It must be a hex string or base58 string
*/
extern CustomValidator IssuerValidator;
/**
* @brief Provide a validator for validating valid streams used in
* subscribe/unsubscribe
*/
extern CustomValidator SubscribeStreamValidator;
/**
* @brief Provide a validator for validating valid accounts used in
* subscribe/unsubscribe
*/
extern CustomValidator SubscribeAccountsValidator;
} // namespace RPC::validation

View File

@@ -0,0 +1,42 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <string_view>
namespace RPC::detail {
class IPAdminVerificationStrategy final
{
public:
/**
* @brief Checks whether request is from a host that is considered authorized as admin.
*
* @param ip The ip addr of the client
* @return true if authorized; false otherwise
*/
bool
isAdmin(std::string_view ip) const
{
return ip == "127.0.0.1";
}
};
} // namespace RPC::detail

View File

@@ -26,20 +26,19 @@
#include <optional>
namespace RPC::detail {
template <Requirement... Requirements>
[[nodiscard]] auto
makeFieldValidator(std::string const& key, Requirements&&... requirements)
{
return [key, ... r = std::forward<Requirements>(requirements)](boost::json::value const& j) -> MaybeError {
std::optional<Status> firstFailure = std::nullopt;
// This expands in order of Requirements and stops evaluating after
// first failure which is stored in `firstFailure` and can be checked
// later on to see whether the verification failed as a whole or not.
// clang-format off
([&j, &key, &firstFailure, req = &r]() {
if (firstFailure)
return; // already failed earlier - skip
@@ -56,4 +55,4 @@ makeFieldValidator(std::string const& key, Requirements&&... requirements)
};
}
} // namespace RPC::detail

View File

@@ -0,0 +1,115 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <rpc/common/impl/HandlerProvider.h>
#include <etl/ReportingETL.h>
#include <rpc/Counters.h>
#include <subscriptions/SubscriptionManager.h>
#include <rpc/handlers/AccountChannels.h>
#include <rpc/handlers/AccountCurrencies.h>
#include <rpc/handlers/AccountInfo.h>
#include <rpc/handlers/AccountLines.h>
#include <rpc/handlers/AccountNFTs.h>
#include <rpc/handlers/AccountObjects.h>
#include <rpc/handlers/AccountOffers.h>
#include <rpc/handlers/AccountTx.h>
#include <rpc/handlers/BookChanges.h>
#include <rpc/handlers/BookOffers.h>
#include <rpc/handlers/GatewayBalances.h>
#include <rpc/handlers/Ledger.h>
#include <rpc/handlers/LedgerData.h>
#include <rpc/handlers/LedgerEntry.h>
#include <rpc/handlers/LedgerRange.h>
#include <rpc/handlers/NFTBuyOffers.h>
#include <rpc/handlers/NFTHistory.h>
#include <rpc/handlers/NFTInfo.h>
#include <rpc/handlers/NFTSellOffers.h>
#include <rpc/handlers/NoRippleCheck.h>
#include <rpc/handlers/Ping.h>
#include <rpc/handlers/Random.h>
#include <rpc/handlers/ServerInfo.h>
#include <rpc/handlers/Subscribe.h>
#include <rpc/handlers/TransactionEntry.h>
#include <rpc/handlers/Tx.h>
#include <rpc/handlers/Unsubscribe.h>
namespace RPC::detail {
ProductionHandlerProvider::ProductionHandlerProvider(
std::shared_ptr<BackendInterface> const& backend,
std::shared_ptr<SubscriptionManager> const& subscriptionManager,
std::shared_ptr<ETLLoadBalancer> const& balancer,
std::shared_ptr<ReportingETL const> const& etl,
Counters const& counters)
: handlerMap_{
{"account_channels", {AccountChannelsHandler{backend}}},
{"account_currencies", {AccountCurrenciesHandler{backend}}},
{"account_info", {AccountInfoHandler{backend}}},
{"account_lines", {AccountLinesHandler{backend}}},
{"account_nfts", {AccountNFTsHandler{backend}}},
{"account_objects", {AccountObjectsHandler{backend}}},
{"account_offers", {AccountOffersHandler{backend}}},
{"account_tx", {AccountTxHandler{backend}}},
{"book_changes", {BookChangesHandler{backend}}},
{"book_offers", {BookOffersHandler{backend}}},
{"gateway_balances", {GatewayBalancesHandler{backend}}},
{"ledger", {LedgerHandler{backend}}},
{"ledger_data", {LedgerDataHandler{backend}}},
{"ledger_entry", {LedgerEntryHandler{backend}}},
{"ledger_range", {LedgerRangeHandler{backend}}},
{"nft_history", {NFTHistoryHandler{backend}, true}}, // clio only
{"nft_buy_offers", {NFTBuyOffersHandler{backend}}},
{"nft_info", {NFTInfoHandler{backend}, true}}, // clio only
{"nft_sell_offers", {NFTSellOffersHandler{backend}}},
{"noripple_check", {NoRippleCheckHandler{backend}}},
{"ping", {PingHandler{}}},
{"random", {RandomHandler{}}},
{"server_info", {ServerInfoHandler{backend, subscriptionManager, balancer, etl, counters}}},
{"transaction_entry", {TransactionEntryHandler{backend}}},
{"tx", {TxHandler{backend}}},
{"subscribe", {SubscribeHandler{backend, subscriptionManager}}},
{"unsubscribe", {UnsubscribeHandler{backend, subscriptionManager}}},
}
{
}
bool
ProductionHandlerProvider::contains(std::string const& method) const
{
return handlerMap_.contains(method);
}
std::optional<AnyHandler>
ProductionHandlerProvider::getHandler(std::string const& command) const
{
if (!handlerMap_.contains(command))
return {};
return handlerMap_.at(command).handler;
}
bool
ProductionHandlerProvider::isClioOnly(std::string const& command) const
{
return handlerMap_.contains(command) && handlerMap_.at(command).isClioOnly;
}
} // namespace RPC::detail

View File

@@ -0,0 +1,73 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/BackendInterface.h>
#include <rpc/common/AnyHandler.h>
#include <rpc/common/Types.h>
#include <subscriptions/SubscriptionManager.h>
#include <optional>
#include <string>
#include <unordered_map>
class SubscriptionManager;
class ReportingETL;
class ETLLoadBalancer;
namespace RPC {
class Counters;
}
namespace RPC::detail {
class ProductionHandlerProvider final : public HandlerProvider
{
struct Handler
{
AnyHandler handler;
bool isClioOnly;
/* implicit */ Handler(AnyHandler handler, bool clioOnly = false) : handler{handler}, isClioOnly{clioOnly}
{
}
};
std::unordered_map<std::string, Handler> handlerMap_;
public:
ProductionHandlerProvider(
std::shared_ptr<BackendInterface> const& backend,
std::shared_ptr<SubscriptionManager> const& subscriptionManager,
std::shared_ptr<ETLLoadBalancer> const& balancer,
std::shared_ptr<ReportingETL const> const& etl,
Counters const& counters);
bool
contains(std::string const& method) const override;
std::optional<AnyHandler>
getHandler(std::string const& command) const override;
bool
isClioOnly(std::string const& command) const override;
};
} // namespace RPC::detail

View File

@@ -22,31 +22,73 @@
#include <rpc/common/Concepts.h>
#include <rpc/common/Types.h>
namespace RPC::detail {
template <typename>
static constexpr bool unsupported_handler_v = false;
template <Handler HandlerType>
struct DefaultProcessor final
{
[[nodiscard]] ReturnType
operator()(HandlerType const& handler, boost::json::value const& value, Context const* ctx = nullptr) const
{
using boost::json::value_from;
using boost::json::value_to;
if constexpr (HandlerWithInput<HandlerType>)
{
// first we run validation
auto const spec = handler.spec();
if (auto const ret = spec.validate(value); not ret)
return Error{ret.error()}; // forward Status
auto const inData = value_to<typename HandlerType::Input>(value);
if constexpr (NonContextProcess<HandlerType>)
{
auto const ret = handler.process(inData);
// real handler is given expected Input, not json
if (!ret)
return Error{ret.error()}; // forward Status
else
return value_from(ret.value());
}
else
{
auto const ret = handler.process(inData, *ctx);
// real handler is given expected Input, not json
if (!ret)
return Error{ret.error()}; // forward Status
else
return value_from(ret.value());
}
}
else if constexpr (HandlerWithoutInput<HandlerType>)
{
// no input to pass, ignore the value
if constexpr (ContextProcessWithoutInput<HandlerType>)
{
if (auto const ret = handler.process(*ctx); not ret)
return Error{ret.error()}; // forward Status
else
return value_from(ret.value());
}
else
{
if (auto const ret = handler.process(); not ret)
return Error{ret.error()}; // forward Status
else
return value_from(ret.value());
}
}
else
{
// when the HandlerWithInput and HandlerWithoutInput concepts do not
// cover all Handler cases
static_assert(unsupported_handler_v<HandlerType>);
}
}
};
} // namespace RPC::detail

View File

@@ -1,7 +1,7 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
@@ -17,129 +17,173 @@
*/
//==============================================================================
#include <ripple/app/ledger/Ledger.h>
#include <ripple/basics/StringUtilities.h>
#include <ripple/protocol/ErrorCodes.h>
#include <ripple/protocol/Indexes.h>
#include <ripple/protocol/STLedgerEntry.h>
#include <ripple/protocol/jss.h>
#include <boost/json.hpp>
#include <algorithm>
#include <backend/BackendInterface.h>
#include <backend/DBHelpers.h>
#include <rpc/RPCHelpers.h>
#include <rpc/handlers/AccountChannels.h>
namespace RPC {
void
AccountChannelsHandler::addChannel(std::vector<ChannelResponse>& jsonChannels, ripple::SLE const& channelSle) const
{
ChannelResponse channel;
channel.channelID = ripple::to_string(channelSle.key());
channel.account = ripple::to_string(channelSle.getAccountID(ripple::sfAccount));
channel.accountDestination = ripple::to_string(channelSle.getAccountID(ripple::sfDestination));
channel.amount = channelSle[ripple::sfAmount].getText();
channel.balance = channelSle[ripple::sfBalance].getText();
channel.settleDelay = channelSle[ripple::sfSettleDelay];
jsonLines.push_back(jDst);
if (publicKeyType(channelSle[ripple::sfPublicKey]))
{
ripple::PublicKey const pk(channelSle[ripple::sfPublicKey]);
channel.publicKey = toBase58(ripple::TokenType::AccountPublic, pk);
channel.publicKeyHex = strHex(pk);
}
if (auto const& v = channelSle[~ripple::sfExpiration])
channel.expiration = *v;
if (auto const& v = channelSle[~ripple::sfCancelAfter])
channel.cancelAfter = *v;
if (auto const& v = channelSle[~ripple::sfSourceTag])
channel.sourceTag = *v;
if (auto const& v = channelSle[~ripple::sfDestinationTag])
channel.destinationTag = *v;
jsonChannels.push_back(channel);
}
Result
doAccountChannels(Context const& context)
AccountChannelsHandler::Result
AccountChannelsHandler::process(AccountChannelsHandler::Input input, Context const& ctx) const
{
auto request = context.params;
boost::json::object response = {};
auto const range = sharedPtrBackend_->fetchLedgerRange();
auto const lgrInfoOrStatus = getLedgerInfoFromHashOrSeq(
*sharedPtrBackend_, ctx.yield, input.ledgerHash, input.ledgerIndex, range->maxSequence);
auto v = ledgerInfoFromRequest(context);
if (auto status = std::get_if<Status>(&v))
return *status;
if (auto status = std::get_if<Status>(&lgrInfoOrStatus))
return Error{*status};
auto lgrInfo = std::get<ripple::LedgerInfo>(v);
auto const lgrInfo = std::get<ripple::LedgerInfo>(lgrInfoOrStatus);
auto const accountID = accountFromStringStrict(input.account);
auto const accountLedgerObject =
sharedPtrBackend_->fetchLedgerObject(ripple::keylet::account(*accountID).key, lgrInfo.seq, ctx.yield);
ripple::AccountID accountID;
if (auto const status = getAccount(request, accountID); status)
return status;
if (!accountLedgerObject)
return Error{Status{RippledError::rpcACT_NOT_FOUND, "accountNotFound"}};
auto rawAcct = context.backend->fetchLedgerObject(
ripple::keylet::account(accountID).key, lgrInfo.seq, context.yield);
if (!rawAcct)
return Status{RippledError::rpcACT_NOT_FOUND, "accountNotFound"};
ripple::AccountID destAccount;
if (auto const status =
getAccount(request, destAccount, JS(destination_account));
status)
return status;
std::uint32_t limit;
if (auto const status = getLimit(context, limit); status)
return status;
std::optional<std::string> marker = {};
if (request.contains(JS(marker)))
{
if (!request.at(JS(marker)).is_string())
return Status{RippledError::rpcINVALID_PARAMS, "markerNotString"};
marker = request.at(JS(marker)).as_string().c_str();
}
response[JS(account)] = ripple::to_string(accountID);
response[JS(channels)] = boost::json::value(boost::json::array_kind);
response[JS(limit)] = limit;
boost::json::array& jsonChannels = response.at(JS(channels)).as_array();
auto const destAccountID = input.destinationAccount ? accountFromStringStrict(input.destinationAccount.value())
: std::optional<ripple::AccountID>{};
Output response;
auto const addToResponse = [&](ripple::SLE&& sle) {
if (sle.getType() == ripple::ltPAYCHAN &&
sle.getAccountID(ripple::sfAccount) == accountID &&
(!destAccount ||
destAccount == sle.getAccountID(ripple::sfDestination)))
if (sle.getType() == ripple::ltPAYCHAN && sle.getAccountID(ripple::sfAccount) == accountID &&
(!destAccountID || *destAccountID == sle.getAccountID(ripple::sfDestination)))
{
addChannel(jsonChannels, sle);
addChannel(response.channels, sle);
}
return true;
};
auto next = traverseOwnedNodes(
*context.backend,
accountID,
lgrInfo.seq,
limit,
marker,
context.yield,
addToResponse);
auto const next = ngTraverseOwnedNodes(
*sharedPtrBackend_, *accountID, lgrInfo.seq, input.limit, input.marker, ctx.yield, addToResponse);
response[JS(ledger_hash)] = ripple::strHex(lgrInfo.hash);
response[JS(ledger_index)] = lgrInfo.seq;
if (auto status = std::get_if<Status>(&next))
return Error{*status};
if (auto status = std::get_if<RPC::Status>(&next))
return *status;
auto nextMarker = std::get<RPC::AccountCursor>(next);
response.account = input.account;
response.limit = input.limit;
response.ledgerHash = ripple::strHex(lgrInfo.hash);
response.ledgerIndex = lgrInfo.seq;
auto const nextMarker = std::get<AccountCursor>(next);
if (nextMarker.isNonZero())
response[JS(marker)] = nextMarker.toString();
response.marker = nextMarker.toString();
return response;
}
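The `addToResponse` lambda above filters the account's owned ledger entries down to payment channels where the account is the source, optionally narrowed to one destination. A minimal sketch of that predicate, using plain strings in place of `ripple::SLE` and `ripple::AccountID` (the names `keepChannel`, `entryType`, and `destFilter` are hypothetical, not part of clio):

```cpp
#include <optional>
#include <string>

// Keep only PayChannel entries whose source is the requested account and,
// when a destination_account filter was supplied, whose destination matches.
bool
keepChannel(
    std::string const& entryType,
    std::string const& source,
    std::string const& destination,
    std::string const& account,
    std::optional<std::string> const& destFilter)
{
    return entryType == "PayChannel" && source == account &&
        (!destFilter || *destFilter == destination);
}
```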
AccountChannelsHandler::Input
tag_invoke(boost::json::value_to_tag<AccountChannelsHandler::Input>, boost::json::value const& jv)
{
auto input = AccountChannelsHandler::Input{};
auto const& jsonObject = jv.as_object();
input.account = jv.at(JS(account)).as_string().c_str();
if (jsonObject.contains(JS(limit)))
input.limit = jv.at(JS(limit)).as_int64();
if (jsonObject.contains(JS(marker)))
input.marker = jv.at(JS(marker)).as_string().c_str();
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = jv.at(JS(ledger_hash)).as_string().c_str();
if (jsonObject.contains(JS(destination_account)))
input.destinationAccount = jv.at(JS(destination_account)).as_string().c_str();
if (jsonObject.contains(JS(ledger_index)))
{
if (!jsonObject.at(JS(ledger_index)).is_string())
input.ledgerIndex = jv.at(JS(ledger_index)).as_int64();
else if (jsonObject.at(JS(ledger_index)).as_string() != "validated")
input.ledgerIndex = std::stoi(jv.at(JS(ledger_index)).as_string().c_str());
}
return input;
}
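The `ledger_index` branch of the `tag_invoke` parser above accepts either a number or a string: numeric values are taken as-is, the string `"validated"` leaves the index unset (resolved later to the latest validated ledger), and any other string is parsed as a number. A minimal sketch of that rule (the helper `parseLedgerIndex` is hypothetical, standing in for the inline `boost::json` logic):

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <variant>

// Mirror the ledger_index parsing rule: number -> use directly,
// "validated" -> leave unset, other string -> parse as number.
std::optional<uint32_t>
parseLedgerIndex(std::variant<int64_t, std::string> const& raw)
{
    if (std::holds_alternative<int64_t>(raw))
        return static_cast<uint32_t>(std::get<int64_t>(raw));
    auto const& str = std::get<std::string>(raw);
    if (str == "validated")
        return std::nullopt;
    return static_cast<uint32_t>(std::stoi(str));
}
```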
void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, AccountChannelsHandler::Output const& output)
{
auto obj = boost::json::object{
{JS(account), output.account},
{JS(ledger_hash), output.ledgerHash},
{JS(ledger_index), output.ledgerIndex},
{JS(validated), output.validated},
{JS(limit), output.limit},
{JS(channels), output.channels},
};
if (output.marker)
obj[JS(marker)] = output.marker.value();
jv = std::move(obj);
}
void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, AccountChannelsHandler::ChannelResponse const& channel)
{
auto obj = boost::json::object{
{JS(channel_id), channel.channelID},
{JS(account), channel.account},
{JS(destination_account), channel.accountDestination},
{JS(amount), channel.amount},
{JS(balance), channel.balance},
{JS(settle_delay), channel.settleDelay},
};
if (channel.publicKey)
obj[JS(public_key)] = *(channel.publicKey);
if (channel.publicKeyHex)
obj[JS(public_key_hex)] = *(channel.publicKeyHex);
if (channel.expiration)
obj[JS(expiration)] = *(channel.expiration);
if (channel.cancelAfter)
obj[JS(cancel_after)] = *(channel.cancelAfter);
if (channel.sourceTag)
obj[JS(source_tag)] = *(channel.sourceTag);
if (channel.destinationTag)
obj[JS(destination_tag)] = *(channel.destinationTag);
jv = std::move(obj);
}
} // namespace RPC


@@ -0,0 +1,121 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/BackendInterface.h>
#include <rpc/JS.h>
#include <rpc/common/Types.h>
#include <rpc/common/Validators.h>
#include <vector>
namespace RPC {
/**
* @brief The account_channels method returns information about an account's Payment Channels. This includes only
* channels where the specified account is the channel's source, not the destination.
* All information retrieved is relative to a particular version of the ledger.
*
* For more details see: https://xrpl.org/account_channels.html
*/
class AccountChannelsHandler
{
// dependencies
std::shared_ptr<BackendInterface> const sharedPtrBackend_;
public:
// type align with SField.h
struct ChannelResponse
{
std::string channelID;
std::string account;
std::string accountDestination;
std::string amount;
std::string balance;
std::optional<std::string> publicKey;
std::optional<std::string> publicKeyHex;
uint32_t settleDelay;
std::optional<uint32_t> expiration;
std::optional<uint32_t> cancelAfter;
std::optional<uint32_t> sourceTag;
std::optional<uint32_t> destinationTag;
};
struct Output
{
std::vector<ChannelResponse> channels;
std::string account;
std::string ledgerHash;
uint32_t ledgerIndex;
// validated should be sent via framework
bool validated = true;
uint32_t limit;
std::optional<std::string> marker;
};
struct Input
{
std::string account;
std::optional<std::string> destinationAccount;
std::optional<std::string> ledgerHash;
std::optional<uint32_t> ledgerIndex;
uint32_t limit = 50;
std::optional<std::string> marker;
};
using Result = HandlerReturnType<Output>;
AccountChannelsHandler(std::shared_ptr<BackendInterface> const& sharedPtrBackend)
: sharedPtrBackend_(sharedPtrBackend)
{
}
RpcSpecConstRef
spec() const
{
static auto const rpcSpec = RpcSpec{
{JS(account), validation::Required{}, validation::AccountValidator},
{JS(destination_account), validation::Type<std::string>{}, validation::AccountValidator},
{JS(ledger_hash), validation::Uint256HexStringValidator},
{JS(limit), validation::Type<uint32_t>{}, validation::Between{10, 400}},
{JS(ledger_index), validation::LedgerIndexValidator},
{JS(marker), validation::AccountMarkerValidator},
};
return rpcSpec;
}
Result
process(Input input, Context const& ctx) const;
private:
void
addChannel(std::vector<ChannelResponse>& jsonLines, ripple::SLE const& line) const;
friend void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, Output const& output);
friend Input
tag_invoke(boost::json::value_to_tag<Input>, boost::json::value const& jv);
friend void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, ChannelResponse const& channel);
};
} // namespace RPC


@@ -1,7 +1,7 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
@@ -17,92 +17,99 @@
*/
//==============================================================================
#include <ripple/app/ledger/Ledger.h>
#include <ripple/basics/StringUtilities.h>
#include <ripple/protocol/ErrorCodes.h>
#include <ripple/protocol/Indexes.h>
#include <ripple/protocol/STLedgerEntry.h>
#include <ripple/protocol/jss.h>
#include <boost/json.hpp>
#include <algorithm>
#include <backend/BackendInterface.h>
#include <rpc/RPCHelpers.h>
#include <rpc/handlers/AccountCurrencies.h>
namespace RPC {
Result
doAccountCurrencies(Context const& context)
AccountCurrenciesHandler::Result
AccountCurrenciesHandler::process(AccountCurrenciesHandler::Input input, Context const& ctx) const
{
auto request = context.params;
boost::json::object response = {};
auto const range = sharedPtrBackend_->fetchLedgerRange();
auto const lgrInfoOrStatus = getLedgerInfoFromHashOrSeq(
*sharedPtrBackend_, ctx.yield, input.ledgerHash, input.ledgerIndex, range->maxSequence);
auto v = ledgerInfoFromRequest(context);
if (auto status = std::get_if<Status>(&v))
return *status;
if (auto const status = std::get_if<Status>(&lgrInfoOrStatus))
return Error{*status};
auto lgrInfo = std::get<ripple::LedgerInfo>(v);
auto const lgrInfo = std::get<ripple::LedgerInfo>(lgrInfoOrStatus);
auto const accountID = accountFromStringStrict(input.account);
ripple::AccountID accountID;
if (auto const status = getAccount(request, accountID); status)
return status;
auto const accountLedgerObject =
sharedPtrBackend_->fetchLedgerObject(ripple::keylet::account(*accountID).key, lgrInfo.seq, ctx.yield);
if (!accountLedgerObject)
return Error{Status{RippledError::rpcACT_NOT_FOUND, "accountNotFound"}};
auto rawAcct = context.backend->fetchLedgerObject(
ripple::keylet::account(accountID).key, lgrInfo.seq, context.yield);
if (!rawAcct)
return Status{RippledError::rpcACT_NOT_FOUND, "accountNotFound"};
std::set<std::string> send, receive;
Output response;
auto const addToResponse = [&](ripple::SLE&& sle) {
if (sle.getType() == ripple::ltRIPPLE_STATE)
{
ripple::STAmount balance = sle.getFieldAmount(ripple::sfBalance);
auto balance = sle.getFieldAmount(ripple::sfBalance);
auto const lowLimit = sle.getFieldAmount(ripple::sfLowLimit);
auto const highLimit = sle.getFieldAmount(ripple::sfHighLimit);
bool const viewLowest = (lowLimit.getIssuer() == accountID);
auto const lineLimit = viewLowest ? lowLimit : highLimit;
auto const lineLimitPeer = !viewLowest ? lowLimit : highLimit;
auto lowLimit = sle.getFieldAmount(ripple::sfLowLimit);
auto highLimit = sle.getFieldAmount(ripple::sfHighLimit);
bool viewLowest = (lowLimit.getIssuer() == accountID);
auto lineLimit = viewLowest ? lowLimit : highLimit;
auto lineLimitPeer = !viewLowest ? lowLimit : highLimit;
if (!viewLowest)
balance.negate();
if (balance < lineLimit)
receive.insert(ripple::to_string(balance.getCurrency()));
response.receiveCurrencies.insert(ripple::to_string(balance.getCurrency()));
if ((-balance) < lineLimitPeer)
send.insert(ripple::to_string(balance.getCurrency()));
response.sendCurrencies.insert(ripple::to_string(balance.getCurrency()));
}
return true;
};
traverseOwnedNodes(
*context.backend,
accountID,
// traverse all owned nodes, limit->max, marker->empty
ngTraverseOwnedNodes(
*sharedPtrBackend_,
*accountID,
lgrInfo.seq,
std::numeric_limits<std::uint32_t>::max(),
{},
context.yield,
ctx.yield,
addToResponse);
response[JS(ledger_hash)] = ripple::strHex(lgrInfo.hash);
response[JS(ledger_index)] = lgrInfo.seq;
response[JS(receive_currencies)] =
boost::json::value(boost::json::array_kind);
boost::json::array& jsonReceive =
response.at(JS(receive_currencies)).as_array();
for (auto const& currency : receive)
jsonReceive.push_back(currency.c_str());
response[JS(send_currencies)] = boost::json::value(boost::json::array_kind);
boost::json::array& jsonSend = response.at(JS(send_currencies)).as_array();
for (auto const& currency : send)
jsonSend.push_back(currency.c_str());
response.ledgerHash = ripple::strHex(lgrInfo.hash);
response.ledgerIndex = lgrInfo.seq;
return response;
}
void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, AccountCurrenciesHandler::Output const& output)
{
jv = {
{JS(ledger_hash), output.ledgerHash},
{JS(ledger_index), output.ledgerIndex},
{JS(validated), output.validated},
{JS(receive_currencies), output.receiveCurrencies},
{JS(send_currencies), output.sendCurrencies},
};
}
AccountCurrenciesHandler::Input
tag_invoke(boost::json::value_to_tag<AccountCurrenciesHandler::Input>, boost::json::value const& jv)
{
auto input = AccountCurrenciesHandler::Input{};
auto const& jsonObject = jv.as_object();
input.account = jv.at(JS(account)).as_string().c_str();
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = jv.at(JS(ledger_hash)).as_string().c_str();
if (jsonObject.contains(JS(ledger_index)))
{
if (!jsonObject.at(JS(ledger_index)).is_string())
input.ledgerIndex = jv.at(JS(ledger_index)).as_int64();
else if (jsonObject.at(JS(ledger_index)).as_string() != "validated")
input.ledgerIndex = std::stoi(jv.at(JS(ledger_index)).as_string().c_str());
}
return input;
}
} // namespace RPC
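The trust-line classification in `addToResponse` above can be summarized as: after orienting the balance to the requesting account's point of view, a currency is receivable while the balance is below the account's own limit, and sendable while the negated balance is below the peer's limit. A hedged sketch with plain integers in place of `ripple::STAmount` (the `TrustLine` struct and `classify` helper are illustrative, not clio types):

```cpp
#include <set>
#include <string>

struct TrustLine
{
    std::string currency;
    long balance;    // already oriented to the requesting account's view
    long limit;      // the account's own limit on this line
    long limitPeer;  // the peer's limit on this line
};

// Insert the line's currency into the send/receive sets per the same
// comparisons the handler performs on STAmounts.
void
classify(TrustLine const& line, std::set<std::string>& send, std::set<std::string>& receive)
{
    if (line.balance < line.limit)
        receive.insert(line.currency);
    if (-line.balance < line.limitPeer)
        send.insert(line.currency);
}
```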


@@ -0,0 +1,91 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/BackendInterface.h>
#include <rpc/RPCHelpers.h>
#include <rpc/common/Types.h>
#include <rpc/common/Validators.h>
#include <set>
namespace RPC {
/**
* @brief The account_currencies command retrieves a list of currencies that an account can send or receive,
* based on its trust lines.
*
* For more details see: https://xrpl.org/account_currencies.html
*/
class AccountCurrenciesHandler
{
// dependencies
std::shared_ptr<BackendInterface> sharedPtrBackend_;
public:
struct Output
{
std::string ledgerHash;
uint32_t ledgerIndex;
std::set<std::string> receiveCurrencies;
std::set<std::string> sendCurrencies;
// validated should be sent via framework
bool validated = true;
};
// TODO: We did not implement the "strict" field (can't be implemented?)
struct Input
{
std::string account;
std::optional<std::string> ledgerHash;
std::optional<uint32_t> ledgerIndex;
};
using Result = HandlerReturnType<Output>;
AccountCurrenciesHandler(std::shared_ptr<BackendInterface> const& sharedPtrBackend)
: sharedPtrBackend_(sharedPtrBackend)
{
}
RpcSpecConstRef
spec() const
{
static auto const rpcSpec = RpcSpec{
{JS(account), validation::Required{}, validation::AccountValidator},
{JS(ledger_hash), validation::Uint256HexStringValidator},
{JS(ledger_index), validation::LedgerIndexValidator},
};
return rpcSpec;
}
Result
process(Input input, Context const& ctx) const;
private:
friend void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, Output const& output);
friend Input
tag_invoke(boost::json::value_to_tag<Input>, boost::json::value const& jv);
};
} // namespace RPC


@@ -1,7 +1,7 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
@@ -17,104 +17,118 @@
*/
//==============================================================================
#include <ripple/protocol/Indexes.h>
#include <ripple/protocol/STLedgerEntry.h>
#include <boost/json.hpp>
#include <rpc/handlers/AccountInfo.h>
#include <backend/BackendInterface.h>
#include <rpc/RPCHelpers.h>
// {
// account: <ident>,
// strict: <bool> // optional (default false)
// // if true only allow public keys and addresses.
// ledger_hash : <ledger>
// ledger_index : <ledger_index>
// signer_lists : <bool> // optional (default false)
// // if true return SignerList(s).
// queue : <bool> // optional (default false)
// // if true return information about transactions
// // in the current TxQ, only if the requested
// // ledger is open. Otherwise if true, returns an
// // error.
// }
#include <ripple/protocol/ErrorCodes.h>
namespace RPC {
Result
doAccountInfo(Context const& context)
AccountInfoHandler::Result
AccountInfoHandler::process(AccountInfoHandler::Input input, Context const& ctx) const
{
auto request = context.params;
boost::json::object response = {};
if (!input.account && !input.ident)
return Error{Status{RippledError::rpcINVALID_PARAMS, ripple::RPC::missing_field_message(JS(account))}};
std::string strIdent;
if (request.contains(JS(account)))
strIdent = request.at(JS(account)).as_string().c_str();
else if (request.contains(JS(ident)))
strIdent = request.at(JS(ident)).as_string().c_str();
else
return Status{RippledError::rpcACT_MALFORMED};
auto const range = sharedPtrBackend_->fetchLedgerRange();
auto const lgrInfoOrStatus = getLedgerInfoFromHashOrSeq(
*sharedPtrBackend_, ctx.yield, input.ledgerHash, input.ledgerIndex, range->maxSequence);
// We only need to fetch the ledger header because the ledger hash is
// supposed to be included in the response. The ledger sequence is specified
// in the request
auto v = ledgerInfoFromRequest(context);
if (auto status = std::get_if<Status>(&v))
return *status;
if (auto const status = std::get_if<Status>(&lgrInfoOrStatus))
return Error{*status};
auto lgrInfo = std::get<ripple::LedgerInfo>(v);
auto const lgrInfo = std::get<ripple::LedgerInfo>(lgrInfoOrStatus);
auto const accountStr = input.account.value_or(input.ident.value_or(""));
auto const accountID = accountFromStringStrict(accountStr);
auto const accountKeylet = ripple::keylet::account(*accountID);
auto const accountLedgerObject = sharedPtrBackend_->fetchLedgerObject(accountKeylet.key, lgrInfo.seq, ctx.yield);
// Get info on account.
auto accountID = accountFromStringStrict(strIdent);
if (!accountID)
return Status{RippledError::rpcACT_MALFORMED};
if (!accountLedgerObject)
return Error{Status{RippledError::rpcACT_NOT_FOUND}};
auto key = ripple::keylet::account(accountID.value());
std::optional<std::vector<unsigned char>> dbResponse =
context.backend->fetchLedgerObject(key.key, lgrInfo.seq, context.yield);
ripple::STLedgerEntry const sle{
ripple::SerialIter{accountLedgerObject->data(), accountLedgerObject->size()}, accountKeylet.key};
if (!dbResponse)
return Status{RippledError::rpcACT_NOT_FOUND};
ripple::STLedgerEntry sle{
ripple::SerialIter{dbResponse->data(), dbResponse->size()}, key.key};
if (!key.check(sle))
return Status{RippledError::rpcDB_DESERIALIZATION};
response[JS(account_data)] = toJson(sle);
response[JS(ledger_hash)] = ripple::strHex(lgrInfo.hash);
response[JS(ledger_index)] = lgrInfo.seq;
if (!accountKeylet.check(sle))
return Error{Status{RippledError::rpcDB_DESERIALIZATION}};
// Return SignerList(s) if that is requested.
if (request.contains(JS(signer_lists)) &&
request.at(JS(signer_lists)).as_bool())
if (input.signerLists)
{
// We put the SignerList in an array because of an anticipated
// future when we support multiple signer lists on one account.
boost::json::array signerList;
auto signersKey = ripple::keylet::signers(*accountID);
auto const signersKey = ripple::keylet::signers(*accountID);
// This code will need to be revisited if in the future we
// support multiple SignerLists on one account.
auto const signers = context.backend->fetchLedgerObject(
signersKey.key, lgrInfo.seq, context.yield);
auto const signers = sharedPtrBackend_->fetchLedgerObject(signersKey.key, lgrInfo.seq, ctx.yield);
std::vector<ripple::STLedgerEntry> signerList;
if (signers)
{
ripple::STLedgerEntry sleSigners{
ripple::SerialIter{signers->data(), signers->size()},
signersKey.key};
if (!signersKey.check(sleSigners))
return Status{RippledError::rpcDB_DESERIALIZATION};
ripple::STLedgerEntry const sleSigners{
ripple::SerialIter{signers->data(), signers->size()}, signersKey.key};
signerList.push_back(toJson(sleSigners));
if (!signersKey.check(sleSigners))
return Error{Status{RippledError::rpcDB_DESERIALIZATION}};
signerList.push_back(sleSigners);
}
response[JS(account_data)].as_object()[JS(signer_lists)] =
std::move(signerList);
return Output(lgrInfo.seq, ripple::strHex(lgrInfo.hash), sle, signerList);
}
return response;
return Output(lgrInfo.seq, ripple::strHex(lgrInfo.hash), sle);
}
void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, AccountInfoHandler::Output const& output)
{
jv = boost::json::object{
{JS(account_data), toJson(output.accountData)},
{JS(ledger_hash), output.ledgerHash},
{JS(ledger_index), output.ledgerIndex},
{JS(validated), output.validated},
};
if (output.signerLists)
{
auto signers = boost::json::array();
std::transform(
std::cbegin(output.signerLists.value()),
std::cend(output.signerLists.value()),
std::back_inserter(signers),
[](auto const& signerList) { return toJson(signerList); });
jv.as_object()[JS(account_data)].as_object()[JS(signer_lists)] = std::move(signers);
}
}
AccountInfoHandler::Input
tag_invoke(boost::json::value_to_tag<AccountInfoHandler::Input>, boost::json::value const& jv)
{
auto input = AccountInfoHandler::Input{};
auto const& jsonObject = jv.as_object();
if (jsonObject.contains(JS(ident)))
input.ident = jsonObject.at(JS(ident)).as_string().c_str();
if (jsonObject.contains(JS(account)))
input.account = jsonObject.at(JS(account)).as_string().c_str();
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = jsonObject.at(JS(ledger_hash)).as_string().c_str();
if (jsonObject.contains(JS(ledger_index)))
{
if (!jsonObject.at(JS(ledger_index)).is_string())
input.ledgerIndex = jsonObject.at(JS(ledger_index)).as_int64();
else if (jsonObject.at(JS(ledger_index)).as_string() != "validated")
input.ledgerIndex = std::stoi(jsonObject.at(JS(ledger_index)).as_string().c_str());
}
if (jsonObject.contains(JS(signer_lists)))
input.signerLists = jsonObject.at(JS(signer_lists)).as_bool();
return input;
}
} // namespace RPC


@@ -0,0 +1,108 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/BackendInterface.h>
#include <rpc/RPCHelpers.h>
#include <rpc/common/Types.h>
#include <rpc/common/Validators.h>
namespace RPC {
/**
* @brief The account_info command retrieves information about an account, its activity, and its XRP balance.
*
* For more details see: https://xrpl.org/account_info.html
*/
class AccountInfoHandler
{
std::shared_ptr<BackendInterface> sharedPtrBackend_;
public:
struct Output
{
uint32_t ledgerIndex;
std::string ledgerHash;
ripple::STLedgerEntry accountData;
std::optional<std::vector<ripple::STLedgerEntry>> signerLists;
// validated should be sent via framework
bool validated = true;
Output(
uint32_t ledgerId,
std::string ledgerHash,
ripple::STLedgerEntry sle,
std::vector<ripple::STLedgerEntry> signerLists)
: ledgerIndex(ledgerId)
, ledgerHash(std::move(ledgerHash))
, accountData(std::move(sle))
, signerLists(std::move(signerLists))
{
}
Output(uint32_t ledgerId, std::string ledgerHash, ripple::STLedgerEntry sle)
: ledgerIndex(ledgerId), ledgerHash(std::move(ledgerHash)), accountData(std::move(sle))
{
}
};
// "queue" is not available in Reporting mode
// "ident" is deprecated, keep it for now, in line with rippled
struct Input
{
std::optional<std::string> account;
std::optional<std::string> ident;
std::optional<std::string> ledgerHash;
std::optional<uint32_t> ledgerIndex;
bool signerLists = false;
};
using Result = HandlerReturnType<Output>;
AccountInfoHandler(std::shared_ptr<BackendInterface> const& sharedPtrBackend) : sharedPtrBackend_(sharedPtrBackend)
{
}
RpcSpecConstRef
spec() const
{
static auto const rpcSpec = RpcSpec{
{JS(account), validation::AccountValidator},
{JS(ident), validation::AccountValidator},
{JS(ledger_hash), validation::Uint256HexStringValidator},
{JS(ledger_index), validation::LedgerIndexValidator},
{JS(signer_lists), validation::Type<bool>{}},
};
return rpcSpec;
}
Result
process(Input input, Context const& ctx) const;
private:
friend void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, Output const& output);
friend Input
tag_invoke(boost::json::value_to_tag<Input>, boost::json::value const& jv);
};
} // namespace RPC


@@ -1,7 +1,7 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
@@ -17,192 +17,221 @@
*/
//==============================================================================
#include <ripple/app/ledger/Ledger.h>
#include <ripple/app/paths/TrustLine.h>
#include <ripple/basics/StringUtilities.h>
#include <ripple/protocol/ErrorCodes.h>
#include <ripple/protocol/Indexes.h>
#include <ripple/protocol/STLedgerEntry.h>
#include <ripple/protocol/jss.h>
#include <boost/json.hpp>
#include <algorithm>
#include <backend/BackendInterface.h>
#include <backend/DBHelpers.h>
#include <rpc/RPCHelpers.h>
#include <rpc/handlers/AccountLines.h>
namespace RPC {
void
addLine(
boost::json::array& jsonLines,
ripple::SLE const& line,
AccountLinesHandler::addLine(
std::vector<LineResponse>& lines,
ripple::SLE const& lineSle,
ripple::AccountID const& account,
std::optional<ripple::AccountID> const& peerAccount)
std::optional<ripple::AccountID> const& peerAccount) const
{
auto flags = line.getFieldU32(ripple::sfFlags);
auto lowLimit = line.getFieldAmount(ripple::sfLowLimit);
auto highLimit = line.getFieldAmount(ripple::sfHighLimit);
auto lowID = lowLimit.getIssuer();
auto highID = highLimit.getIssuer();
auto lowQualityIn = line.getFieldU32(ripple::sfLowQualityIn);
auto lowQualityOut = line.getFieldU32(ripple::sfLowQualityOut);
auto highQualityIn = line.getFieldU32(ripple::sfHighQualityIn);
auto highQualityOut = line.getFieldU32(ripple::sfHighQualityOut);
auto balance = line.getFieldAmount(ripple::sfBalance);
auto const flags = lineSle.getFieldU32(ripple::sfFlags);
auto const lowLimit = lineSle.getFieldAmount(ripple::sfLowLimit);
auto const highLimit = lineSle.getFieldAmount(ripple::sfHighLimit);
auto const lowID = lowLimit.getIssuer();
auto const highID = highLimit.getIssuer();
auto const lowQualityIn = lineSle.getFieldU32(ripple::sfLowQualityIn);
auto const lowQualityOut = lineSle.getFieldU32(ripple::sfLowQualityOut);
auto const highQualityIn = lineSle.getFieldU32(ripple::sfHighQualityIn);
auto const highQualityOut = lineSle.getFieldU32(ripple::sfHighQualityOut);
auto balance = lineSle.getFieldAmount(ripple::sfBalance);
bool viewLowest = (lowID == account);
auto lineLimit = viewLowest ? lowLimit : highLimit;
auto lineLimitPeer = !viewLowest ? lowLimit : highLimit;
auto lineAccountIDPeer = !viewLowest ? lowID : highID;
auto lineQualityIn = viewLowest ? lowQualityIn : highQualityIn;
auto lineQualityOut = viewLowest ? lowQualityOut : highQualityOut;
auto const viewLowest = (lowID == account);
auto const lineLimit = viewLowest ? lowLimit : highLimit;
auto const lineLimitPeer = not viewLowest ? lowLimit : highLimit;
auto const lineAccountIDPeer = not viewLowest ? lowID : highID;
auto const lineQualityIn = viewLowest ? lowQualityIn : highQualityIn;
auto const lineQualityOut = viewLowest ? lowQualityOut : highQualityOut;
if (peerAccount && peerAccount != lineAccountIDPeer)
return;
if (!viewLowest)
if (not viewLowest)
balance.negate();
bool const lineAuth = flags & (viewLowest ? ripple::lsfLowAuth : ripple::lsfHighAuth);
bool const lineAuthPeer = flags & (not viewLowest ? ripple::lsfLowAuth : ripple::lsfHighAuth);
bool const lineNoRipple = flags & (viewLowest ? ripple::lsfLowNoRipple : ripple::lsfHighNoRipple);
bool const lineNoRipplePeer = flags & (not viewLowest ? ripple::lsfLowNoRipple : ripple::lsfHighNoRipple);
bool const lineFreeze = flags & (viewLowest ? ripple::lsfLowFreeze : ripple::lsfHighFreeze);
bool const lineFreezePeer = flags & (not viewLowest ? ripple::lsfLowFreeze : ripple::lsfHighFreeze);
ripple::STAmount const& saBalance = balance;
ripple::STAmount const& saLimit = lineLimit;
ripple::STAmount const& saLimitPeer = lineLimitPeer;
LineResponse line;
line.account = ripple::to_string(lineAccountIDPeer);
line.balance = saBalance.getText();
line.currency = ripple::to_string(saBalance.issue().currency);
line.limit = saLimit.getText();
line.limitPeer = saLimitPeer.getText();
line.qualityIn = lineQualityIn;
line.qualityOut = lineQualityOut;
if (lineAuth)
line.authorized = true;
if (lineAuthPeer)
line.peerAuthorized = true;
if (lineFreeze)
line.freeze = true;
if (lineFreezePeer)
line.freezePeer = true;
line.noRipple = lineNoRipple;
line.noRipplePeer = lineNoRipplePeer;
lines.push_back(line);
}
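A trust line is one shared RippleState ledger entry: both parties' limits, qualities, and flags live in "low" and "high" fields, with the numerically smaller AccountID owning the low side. The `addLine` logic above projects that shared entry into the requesting account's perspective by selecting the low or high fields via `viewLowest` and negating the stored balance when viewed from the high side. A minimal standalone sketch of that projection, with illustrative types and names rather than clio's real ones:

```cpp
#include <cassert>
#include <string>

// Simplified, hypothetical model of a RippleState (trust line) entry. The
// ledger stores one object for both parties; the "low" account is the one
// with the numerically smaller AccountID.
struct TrustLineEntry
{
    std::string lowAccount;
    std::string highAccount;
    int balance;   // stored from the low account's point of view
    int lowLimit;
    int highLimit;
};

// One account's view of the shared entry.
struct TrustLineView
{
    std::string peer;
    int balance;   // positive when the viewer is owed funds
    int limit;
    int limitPeer;
};

// Mirror the viewLowest selection and balance negation performed above.
TrustLineView
viewFor(TrustLineEntry const& entry, std::string const& viewer)
{
    bool const viewLowest = (entry.lowAccount == viewer);
    TrustLineView view;
    view.peer = viewLowest ? entry.highAccount : entry.lowAccount;
    view.limit = viewLowest ? entry.lowLimit : entry.highLimit;
    view.limitPeer = viewLowest ? entry.highLimit : entry.lowLimit;
    // The stored balance is signed from the low side, so the high side
    // sees it negated -- the same rule as balance.negate() above.
    view.balance = viewLowest ? entry.balance : -entry.balance;
    return view;
}
```

The same rule explains why a single ledger object can serve `account_lines` responses for either party without storing the data twice.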
AccountLinesHandler::Result
AccountLinesHandler::process(AccountLinesHandler::Input input, Context const& ctx) const
{
auto const range = sharedPtrBackend_->fetchLedgerRange();
auto const lgrInfoOrStatus = getLedgerInfoFromHashOrSeq(
*sharedPtrBackend_, ctx.yield, input.ledgerHash, input.ledgerIndex, range->maxSequence);
if (auto status = std::get_if<Status>(&lgrInfoOrStatus))
return Error{*status};
auto const lgrInfo = std::get<ripple::LedgerInfo>(lgrInfoOrStatus);
auto const accountID = accountFromStringStrict(input.account);
auto const accountLedgerObject =
sharedPtrBackend_->fetchLedgerObject(ripple::keylet::account(*accountID).key, lgrInfo.seq, ctx.yield);
if (not accountLedgerObject)
return Error{Status{RippledError::rpcACT_NOT_FOUND, "accountNotFound"}};
auto const peerAccountID = input.peer ? accountFromStringStrict(*(input.peer)) : std::optional<ripple::AccountID>{};
Output response;
response.lines.reserve(input.limit);
auto const addToResponse = [&](ripple::SLE&& sle) {
if (sle.getType() == ripple::ltRIPPLE_STATE)
{
auto ignore = false;
if (input.ignoreDefault)
{
if (sle.getFieldAmount(ripple::sfLowLimit).getIssuer() == accountID)
ignore = !(sle.getFieldU32(ripple::sfFlags) & ripple::lsfLowReserve);
else
ignore = !(sle.getFieldU32(ripple::sfFlags) & ripple::lsfHighReserve);
}
if (not ignore)
addLine(response.lines, sle, *accountID, peerAccountID);
}
};
auto const next = ngTraverseOwnedNodes(
*sharedPtrBackend_, *accountID, lgrInfo.seq, input.limit, input.marker, ctx.yield, addToResponse);
if (auto status = std::get_if<Status>(&next))
return Error{*status};
auto const nextMarker = std::get<AccountCursor>(next);
response.account = input.account;
response.limit = input.limit; // not documented,
// https://github.com/XRPLF/xrpl-dev-portal/issues/1838
response.ledgerHash = ripple::strHex(lgrInfo.hash);
response.ledgerIndex = lgrInfo.seq;
if (nextMarker.isNonZero())
response.marker = nextMarker.toString();
return response;
}
AccountLinesHandler::Input
tag_invoke(boost::json::value_to_tag<AccountLinesHandler::Input>, boost::json::value const& jv)
{
auto input = AccountLinesHandler::Input{};
auto const& jsonObject = jv.as_object();
input.account = jv.at(JS(account)).as_string().c_str();
if (jsonObject.contains(JS(limit)))
input.limit = jv.at(JS(limit)).as_int64();
if (jsonObject.contains(JS(marker)))
input.marker = jv.at(JS(marker)).as_string().c_str();
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = jv.at(JS(ledger_hash)).as_string().c_str();
if (jsonObject.contains(JS(peer)))
input.peer = jv.at(JS(peer)).as_string().c_str();
if (jsonObject.contains(JS(ignore_default)))
input.ignoreDefault = jv.at(JS(ignore_default)).as_bool();
if (jsonObject.contains(JS(ledger_index)))
{
if (!jsonObject.at(JS(ledger_index)).is_string())
input.ledgerIndex = jv.at(JS(ledger_index)).as_int64();
else if (jsonObject.at(JS(ledger_index)).as_string() != "validated")
input.ledgerIndex = std::stoi(jv.at(JS(ledger_index)).as_string().c_str());
}
return input;
}
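The `ledger_index` handling in the `tag_invoke` above accepts three shapes: a JSON integer used as-is, the literal string `"validated"` (which leaves the field unset so the handler falls back to the latest validated ledger), or a numeric string parsed with `std::stoi`. A hedged restatement of just that rule, using `std::variant` as a stand-in for the boost::json value and hypothetical names throughout:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <string>
#include <variant>

// Illustrative only -- not clio's actual parsing code. An unset result
// means "use the latest validated ledger".
std::optional<std::uint32_t>
parseLedgerIndex(std::variant<std::int64_t, std::string> const& raw)
{
    // JSON integer: taken as-is.
    if (auto const* num = std::get_if<std::int64_t>(&raw))
        return static_cast<std::uint32_t>(*num);

    auto const& str = std::get<std::string>(raw);
    if (str == "validated")
        return std::nullopt;  // handler resolves this to the latest validated ledger

    // Any other string is treated as a number, mirroring the std::stoi call above.
    return static_cast<std::uint32_t>(std::stoi(str));
}
```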
void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, AccountLinesHandler::Output const& output)
{
auto obj = boost::json::object{
{JS(account), output.account},
{JS(ledger_hash), output.ledgerHash},
{JS(ledger_index), output.ledgerIndex},
{JS(validated), output.validated},
{JS(limit), output.limit},
{JS(lines), output.lines},
};
if (output.marker)
obj[JS(marker)] = output.marker.value();
jv = std::move(obj);
}
void
tag_invoke(
boost::json::value_from_tag,
boost::json::value& jv,
AccountLinesHandler::LineResponse const& line)
{
auto obj = boost::json::object{
{JS(account), line.account},
{JS(balance), line.balance},
{JS(currency), line.currency},
{JS(limit), line.limit},
{JS(limit_peer), line.limitPeer},
{JS(quality_in), line.qualityIn},
{JS(quality_out), line.qualityOut},
};
obj[JS(no_ripple)] = line.noRipple;
obj[JS(no_ripple_peer)] = line.noRipplePeer;
if (line.authorized)
obj[JS(authorized)] = *(line.authorized);
if (line.peerAuthorized)
obj[JS(peer_authorized)] = *(line.peerAuthorized);
if (line.freeze)
obj[JS(freeze)] = *(line.freeze);
if (line.freezePeer)
obj[JS(freeze_peer)] = *(line.freezePeer);
jv = std::move(obj);
}
} // namespace RPC

//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/BackendInterface.h>
#include <rpc/RPCHelpers.h>
#include <rpc/common/Types.h>
#include <rpc/common/Validators.h>
#include <vector>
namespace RPC {
/**
* @brief The account_lines method returns information about an account's trust lines, which contain balances in all
* non-XRP currencies and assets.
*
* For more details see: https://xrpl.org/account_lines.html
*/
class AccountLinesHandler
{
// dependencies
std::shared_ptr<BackendInterface> const sharedPtrBackend_;
public:
struct LineResponse
{
std::string account;
std::string balance;
std::string currency;
std::string limit;
std::string limitPeer;
uint32_t qualityIn;
uint32_t qualityOut;
bool noRipple;
bool noRipplePeer;
std::optional<bool> authorized;
std::optional<bool> peerAuthorized;
std::optional<bool> freeze;
std::optional<bool> freezePeer;
};
struct Output
{
std::string account;
std::vector<LineResponse> lines;
std::string ledgerHash;
uint32_t ledgerIndex;
bool validated = true; // should be sent via framework
std::optional<std::string> marker;
uint32_t limit;
};
struct Input
{
std::string account;
std::optional<std::string> ledgerHash;
std::optional<uint32_t> ledgerIndex;
std::optional<std::string> peer;
bool ignoreDefault = false; // TODO: document
// https://github.com/XRPLF/xrpl-dev-portal/issues/1839
uint32_t limit = 50;
std::optional<std::string> marker;
};
using Result = HandlerReturnType<Output>;
AccountLinesHandler(std::shared_ptr<BackendInterface> const& sharedPtrBackend) : sharedPtrBackend_(sharedPtrBackend)
{
}
RpcSpecConstRef
spec() const
{
static auto const rpcSpec = RpcSpec{
{JS(account), validation::Required{}, validation::AccountValidator},
{JS(peer), validation::Type<std::string>{}, validation::AccountValidator},
{JS(ignore_default), validation::Type<bool>{}},
{JS(ledger_hash), validation::Uint256HexStringValidator},
{JS(limit), validation::Type<uint32_t>{}, validation::Between{10, 400}},
{JS(ledger_index), validation::LedgerIndexValidator},
{JS(marker), validation::AccountMarkerValidator},
};
return rpcSpec;
}
Result
process(Input input, Context const& ctx) const;
private:
void
addLine(
std::vector<LineResponse>& lines,
ripple::SLE const& lineSle,
ripple::AccountID const& account,
std::optional<ripple::AccountID> const& peerAccount) const;
private:
friend void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, Output const& output);
friend Input
tag_invoke(boost::json::value_to_tag<Input>, boost::json::value const& jv);
friend void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, LineResponse const& line);
};
} // namespace RPC
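The `spec()` method above declares validation as data: each field name is paired with a list of validators (`Required`, `Type`, `Between`, ...) that the framework runs against the request before `process()` is ever invoked. A toy sketch of that declarative shape, under the assumption that each check returns either an error message or nothing; none of these names are clio's real API:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <map>
#include <optional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical miniature of a declarative request spec.
using Params = std::map<std::string, std::int64_t>;  // stand-in for a JSON object
using Check = std::function<std::optional<std::string>(Params const&, std::string const&)>;

// Check that the field is present (analogous to validation::Required).
Check required()
{
    return [](Params const& p, std::string const& key) -> std::optional<std::string> {
        if (!p.count(key))
            return "Missing field: " + key;
        return std::nullopt;
    };
}

// Check the value range if the field is present (analogous to validation::Between).
Check between(std::int64_t lo, std::int64_t hi)
{
    return [=](Params const& p, std::string const& key) -> std::optional<std::string> {
        if (auto const it = p.find(key); it != p.end() && (it->second < lo || it->second > hi))
            return "Out of range: " + key;
        return std::nullopt;
    };
}

// Run every check of every field; return the first error, if any.
std::optional<std::string>
validate(std::vector<std::pair<std::string, std::vector<Check>>> const& spec, Params const& params)
{
    for (auto const& [key, checks] : spec)
        for (auto const& check : checks)
            if (auto err = check(params, key))
                return err;
    return std::nullopt;
}
```

Because the spec is plain data built once (note the `static auto const` in `spec()` above), handlers state *what* a valid request looks like and leave *how* it is checked to shared framework code.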

//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <rpc/handlers/AccountNFTs.h>
#include <ripple/app/tx/impl/details/NFTokenUtils.h>
namespace RPC {
AccountNFTsHandler::Result
AccountNFTsHandler::process(AccountNFTsHandler::Input input, Context const& ctx) const
{
auto const range = sharedPtrBackend_->fetchLedgerRange();
auto const lgrInfoOrStatus = getLedgerInfoFromHashOrSeq(
*sharedPtrBackend_, ctx.yield, input.ledgerHash, input.ledgerIndex, range->maxSequence);
if (auto const status = std::get_if<Status>(&lgrInfoOrStatus))
return Error{*status};
auto const lgrInfo = std::get<ripple::LedgerInfo>(lgrInfoOrStatus);
auto const accountID = accountFromStringStrict(input.account);
auto const accountLedgerObject =
sharedPtrBackend_->fetchLedgerObject(ripple::keylet::account(*accountID).key, lgrInfo.seq, ctx.yield);
if (!accountLedgerObject)
return Error{Status{RippledError::rpcACT_NOT_FOUND, "accountNotFound"}};
auto response = Output{};
response.account = input.account;
response.limit = input.limit;
response.ledgerHash = ripple::strHex(lgrInfo.hash);
response.ledgerIndex = lgrInfo.seq;
// if a marker was passed, start at the page specified in marker. Else, start at the max page
auto const pageKey =
input.marker ? ripple::uint256{input.marker->c_str()} : ripple::keylet::nftpage_max(*accountID).key;
auto const blob = sharedPtrBackend_->fetchLedgerObject(pageKey, lgrInfo.seq, ctx.yield);
if (!blob)
return response;
std::optional<ripple::SLE const> page{ripple::SLE{ripple::SerialIter{blob->data(), blob->size()}, pageKey}};
auto numPages = 0u;
while (page)
{
auto const arr = page->getFieldArray(ripple::sfNFTokens);
for (auto const& nft : arr)
{
auto const nftokenID = nft[ripple::sfNFTokenID];
response.nfts.push_back(toBoostJson(nft.getJson(ripple::JsonOptions::none)));
auto& obj = response.nfts.back().as_object();
// Pull out the components of the nft ID.
obj[SFS(sfFlags)] = ripple::nft::getFlags(nftokenID);
obj[SFS(sfIssuer)] = to_string(ripple::nft::getIssuer(nftokenID));
obj[SFS(sfNFTokenTaxon)] = ripple::nft::toUInt32(ripple::nft::getTaxon(nftokenID));
obj[JS(nft_serial)] = ripple::nft::getSerial(nftokenID);
if (std::uint16_t xferFee = {ripple::nft::getTransferFee(nftokenID)})
obj[SFS(sfTransferFee)] = xferFee;
}
++numPages;
if (auto const npm = (*page)[~ripple::sfPreviousPageMin])
{
auto const nextKey = ripple::Keylet(ripple::ltNFTOKEN_PAGE, *npm);
if (numPages == input.limit)
{
response.marker = to_string(nextKey.key);
return response;
}
auto const nextBlob = sharedPtrBackend_->fetchLedgerObject(nextKey.key, lgrInfo.seq, ctx.yield);
page.emplace(ripple::SLE{ripple::SerialIter{nextBlob->data(), nextBlob->size()}, nextKey.key});
}
else
{
page.reset();
}
}
return response;
}
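The loop in `process()` above walks NFT pages as a singly linked list: it starts at the page named by the marker (or the account's max page), appends each page's tokens, follows `sfPreviousPageMin` to the next page, and stops with a resumption marker once `limit` pages have been read. A self-contained sketch of that pagination pattern, with hypothetical names and a plain map standing in for the ledger:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <optional>
#include <string>
#include <vector>

// Illustrative model of an NFT page: its tokens plus the key of the
// previous page in the chain (analogous to sfPreviousPageMin).
struct NftPage
{
    std::vector<std::string> tokens;
    std::optional<std::string> previousPage;
};

struct PageResult
{
    std::vector<std::string> tokens;
    std::optional<std::string> marker;  // set only when the walk was cut short
};

PageResult
collectPages(std::map<std::string, NftPage> const& ledger, std::string const& startKey, std::uint32_t limit)
{
    PageResult result;
    std::uint32_t numPages = 0;
    auto key = std::optional<std::string>{startKey};
    while (key)
    {
        auto const& page = ledger.at(*key);
        result.tokens.insert(result.tokens.end(), page.tokens.begin(), page.tokens.end());
        ++numPages;
        key = page.previousPage;
        // Mirror the cut-off above: if another page exists but the page
        // budget is spent, hand the next key back as the marker.
        if (key && numPages == limit)
        {
            result.marker = *key;
            break;
        }
    }
    return result;
}
```

Note that, as in the handler, the limit counts pages rather than individual tokens, so a response can contain up to `limit` full pages of NFTs.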
void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, AccountNFTsHandler::Output const& output)
{
jv = {
{JS(ledger_hash), output.ledgerHash},
{JS(ledger_index), output.ledgerIndex},
{JS(validated), output.validated},
{JS(account), output.account},
{JS(account_nfts), output.nfts},
{JS(limit), output.limit},
};
if (output.marker)
jv.as_object()[JS(marker)] = *output.marker;
}
AccountNFTsHandler::Input
tag_invoke(boost::json::value_to_tag<AccountNFTsHandler::Input>, boost::json::value const& jv)
{
auto input = AccountNFTsHandler::Input{};
auto const& jsonObject = jv.as_object();
input.account = jsonObject.at(JS(account)).as_string().c_str();
if (jsonObject.contains(JS(ledger_hash)))
input.ledgerHash = jsonObject.at(JS(ledger_hash)).as_string().c_str();
if (jsonObject.contains(JS(ledger_index)))
{
if (!jsonObject.at(JS(ledger_index)).is_string())
input.ledgerIndex = jsonObject.at(JS(ledger_index)).as_int64();
else if (jsonObject.at(JS(ledger_index)).as_string() != "validated")
input.ledgerIndex = std::stoi(jsonObject.at(JS(ledger_index)).as_string().c_str());
}
if (jsonObject.contains(JS(limit)))
input.limit = jsonObject.at(JS(limit)).as_int64();
if (jsonObject.contains(JS(marker)))
input.marker = jsonObject.at(JS(marker)).as_string().c_str();
return input;
}
} // namespace RPC

//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/BackendInterface.h>
#include <rpc/RPCHelpers.h>
#include <rpc/common/Types.h>
#include <rpc/common/Validators.h>
namespace RPC {
/**
* @brief The account_nfts method returns a list of NFToken objects for the specified account.
*
* For more details see: https://xrpl.org/account_nfts.html
*/
class AccountNFTsHandler
{
std::shared_ptr<BackendInterface> sharedPtrBackend_;
public:
struct Output
{
std::string account;
std::string ledgerHash;
uint32_t ledgerIndex;
// TODO: use better type than json
boost::json::array nfts;
uint32_t limit;
std::optional<std::string> marker;
bool validated = true;
};
struct Input
{
std::string account;
std::optional<std::string> ledgerHash;
std::optional<uint32_t> ledgerIndex;
uint32_t limit = 100; // Limit the number of token pages to retrieve. [20,400]
std::optional<std::string> marker;
};
using Result = HandlerReturnType<Output>;
AccountNFTsHandler(std::shared_ptr<BackendInterface> const& sharedPtrBackend) : sharedPtrBackend_(sharedPtrBackend)
{
}
RpcSpecConstRef
spec() const
{
static auto const rpcSpec = RpcSpec{
{JS(account), validation::Required{}, validation::AccountValidator},
{JS(ledger_hash), validation::Uint256HexStringValidator},
{JS(ledger_index), validation::LedgerIndexValidator},
{JS(marker), validation::Uint256HexStringValidator},
{JS(limit), validation::Type<uint32_t>{}, validation::Between{20, 400}},
};
return rpcSpec;
}
Result
process(Input input, Context const& ctx) const;
private:
friend void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, Output const& output);
friend Input
tag_invoke(boost::json::value_to_tag<Input>, boost::json::value const& jv);
};
} // namespace RPC
