Compare commits


128 Commits

Author SHA1 Message Date
Alex Kremer
1bacad9e49 Update xrpl version to 2.0.0-rc1 (#990)
Fixes #989
2023-11-15 19:40:38 +00:00
cyan317
ca16858878 Add DeliverMax for Tx streams (#980) 2023-11-13 13:29:36 +00:00
cyan317
feae85782c DeliverMax alias of Payment tx (#979)
Fix #973
2023-11-09 13:35:08 +00:00
cyan317
b016c1d7ba Fix lowercase ctid (#977)
Fix #963
2023-11-07 16:10:12 +00:00
Sergey Kuznetsov
0597a9d685 Add amm type to account objects (#975)
Fixes #834
2023-11-03 13:54:54 +00:00
cyan317
05bea6a971 add amm filter (#972)
Fix #968
2023-11-03 13:12:36 +00:00
cyan317
fa660ef400 Implement DID (#967)
Fix #918
2023-11-03 09:40:40 +00:00
Arihant Kothari
25d9e3cc36 Use .empty() instead of .size() for vectors (#971) 2023-11-02 23:02:00 +00:00
Sergey Kuznetsov
58f13e1660 Fix code inside assert (#969) 2023-11-02 21:34:03 +00:00
Sergey Kuznetsov
a16b680a7a Add prometheus support (#950)
Fixes #888
2023-11-02 17:26:03 +00:00
cyan317
320ebaa5d2 Move AdminVerificationStrategy to Server (#965) 2023-11-02 10:17:32 +00:00
Alex Kremer
058df4d12a Fix exit of ETL on exception (#964)
Fixes #708
2023-11-01 11:59:19 +00:00
Alex Kremer
5145d07693 Update conan to use xrpl 2.0.0-b4 (#961)
Fixes #960
2023-10-31 19:27:06 +00:00
cyan317
5e9e5f6f65 Admin password (#958)
Fix #922
2023-10-31 15:39:20 +00:00
Sergey Kuznetsov
1ce7bcbc28 Fix random source choosing (#959) 2023-10-31 15:04:15 +00:00
Shawn Xie
243858df12 nfts_by_issuer (#948)
Fixes issue #385

Original PR:
#584
2023-10-30 19:53:32 +00:00
Sergey Kuznetsov
b363cc93af Fix wrong random using (#955)
Fixes #855
2023-10-30 16:40:16 +00:00
Sergey Kuznetsov
200d97f0de Add AMM types to AccountTx filter (#954) 2023-10-30 16:36:28 +00:00
cyan317
1ec5d3e5a3 Amm ledgerentry (#951)
Fix #916
2023-10-30 15:23:47 +00:00
cyan317
e062121917 Add config to run without valid etl (#946)
Fix #943
2023-10-20 16:22:25 +01:00
Alex Kremer
1aab2b94b1 Move to clang-format-16 (#908)
Fixes #848
2023-10-19 16:55:04 +01:00
Sergey Kuznetsov
5de87b9ef8 Upgrade fmt to 10.1.1 (#937) 2023-10-18 17:11:31 +01:00
Sergey Kuznetsov
398db13f4d Add help part to readme (#938) 2023-10-18 17:10:51 +01:00
cyan317
5e8ffb66b4 Subscribe cleanup (#940)
Fix #939
2023-10-18 15:45:54 +01:00
cyan317
939740494b Fix dosguard max_connection (#927)
Fix #928
2023-10-13 13:10:33 +01:00
Alex Kremer
ff3d2b5600 Set libxrpl version to 2.0.0-b2 (#926)
Fixes #925
2023-10-13 12:38:39 +01:00
cyan317
7080b4d549 Fix messages pile up (#921)
Fix #924
2023-10-11 17:24:39 +01:00
cyan317
8d783ecd6a ctid for tx (#907)
Fix #898 and #917
2023-10-11 09:47:05 +01:00
Sergey Kuznetsov
5e6682ddc7 Add db usage counters (#912)
Fixes #911
2023-10-10 18:34:28 +01:00
Alex Kremer
fca29694a0 Fix http params handling discrepancy (#913)
Fixes #909
2023-10-10 12:23:40 +01:00
Sergey Kuznetsov
a541e6d00e Update gtest version (#900) 2023-10-09 15:36:28 +01:00
Alex Kremer
9bd38dd290 Update cassandra version (#844)
Fixes #843
2023-10-09 13:28:11 +01:00
Alex Kremer
f683b25f76 Add field name to output of invalidParams for OneOf (#906)
Fixes #901
2023-10-09 13:26:54 +01:00
cyan317
91ad1ffc3b Fix error "readability-else-after-return" (#905)
Fix compile error
2023-10-09 11:32:29 +01:00
cyan317
64b4a908da Fix account_tx response both both ledger range and ledger index/hash are specified (#904)
Fix mismatch with rippled
2023-10-09 10:19:07 +01:00
Alex Kremer
ac752c656e Change consume to full buffer recreate (#899) 2023-10-06 14:57:05 +01:00
Alex Kremer
4fe868aaeb Add inLedger to tx and account_tx (#895)
Fixes #890
2023-10-05 21:16:52 +01:00
cyan317
59eb40a1f2 Fix ledger_entry error code (#891)
Fix #896
2023-10-05 18:11:42 +01:00
Alex Kremer
0b5f667e4a Fixes broken counters for broken pipe connections (#880)
Fixes #885
2023-10-04 16:59:40 +01:00
cyan317
fa42c5c900 Fix trans order of subscription transactions stream (#882)
Fix #833
2023-10-04 09:11:32 +01:00
Sergey Kuznetsov
0818b6ce5b Add admin password check (#847)
Fixes #846
2023-10-03 17:22:37 +01:00
cyan317
e2cc56d25a Add unittests for ledger publisher and bug fixes (#860)
Fix #881
2023-10-03 13:47:49 +01:00
Sergey Kuznetsov
caaa01bf0f Add tests for special characters in currency validator (#872)
Fixes #835

We are using xrpl's function to check currency code is valid so there is no need to change our code.
I added more test cases to be sure that clio supports characters added in xrpl.
2023-10-03 12:13:17 +01:00
Sergey Kuznetsov
4b53bef1f5 Add clang tidy (#864)
Fixes #863
2023-10-03 10:43:54 +01:00
Sergey Kuznetsov
69f5025a29 Add compiler flags (#850)
Fixes #435
2023-10-02 16:45:48 +01:00
Sergey Kuznetsov
d1c41a8bb7 Don't use clio for conan cache hash (#879) 2023-10-02 11:43:51 +01:00
Sergey Kuznetsov
207ba51461 Fix CI (#878)
* Put conan-non-prod artifactory first

* Rebuild all conan packages if no cache

* Save cache only if there was no cache found
2023-09-28 16:49:15 +01:00
Sergey Kuznetsov
ebe7688ccb Api v1 bool support (#877)
* Allow not bool for signer_lists

* Allow transactions to be not bool for v1

* Add tests for JsonBool
2023-09-28 12:56:38 +01:00
Sergey Kuznetsov
6d9f8a7ead CI improvements (#867)
* Generate conan profile in CI

* Move linux build into main workflow

* Add saving/restoring conan data

* Move cache to Linux

* Fix error

* Change key to hash from conanfile

* Fix path error

* Populate cache only in develop branch

* Big refactor

- Move duplicated code to actions
- Isolate mac build from home directory
- Separate ccache and conan caches

* Fix errors

* Change ccache cache name and fix errors

* Always populate cache

* Use newer ccache on Linux

* Strip tests

* Better conan hash
2023-09-28 11:36:03 +01:00
Sergey Kuznetsov
6ca777ea96 Account tx v1 api support (#874)
* Don't fail on ledger params for v1

* Different error on invalid ledger indexes for v1

* Allow forward and binary to be not bool for v1

* Minor fixes

* Fix tests

* Don't fail if input ledger index is out of range for v1

* Restore deleted test

* Fix comparison of integers with different signedness

* Updated default api version in README and example config
2023-09-28 11:31:35 +01:00
cyan317
963685dd31 Ledger_entry return invalid parameter error for v1 (#873)
Fixes #875
2023-09-28 09:14:01 +01:00
cyan317
e36545058d Duplicate signer_lists in account_info (#870)
Fix #871
2023-09-25 13:24:16 +01:00
cyan317
44527140f0 Fix inaccurate coverage caused by LOG (#868)
Fix #845
2023-09-21 16:19:53 +01:00
Alex Kremer
0eaaa1fb31 Add workaround for async_compose (#841)
Fixes #840
2023-09-18 18:52:32 +01:00
Alex Kremer
1846f629a5 AccountTx filtering by transaction type (#851)
Fixes #685
2023-09-18 18:52:00 +01:00
Alex Kremer
83af5af3c6 Remove deprecated cassandra options (#852)
Fixes #849
2023-09-18 13:40:38 +01:00
Alex Kremer
418a0ddbf2 Add libxrpl version to server_info output (#854)
Fixes #853
2023-09-18 13:39:01 +01:00
Alex Kremer
6cfbfda014 Repeatedly log on amendment block (#829)
Fixes #364
2023-09-13 13:34:02 +01:00
Alex Kremer
91648f98ad Fix malformed taker error to match rippled (#827)
Fixes #352
2023-09-11 19:39:10 +01:00
Sergey Kuznetsov
71e1637c5f Add options for better clangd support (#836)
Fixes #839
2023-09-11 17:53:30 +01:00
Sergey Kuznetsov
59cd2ce5aa Fix missing lock (#837) 2023-09-11 16:19:57 +01:00
Alex Kremer
d783edd57a Add working dir to git command executions (#828) 2023-09-11 13:40:22 +01:00
cyan317
1ce8a58167 Add number of requests to log (#838) 2023-09-11 12:58:45 +01:00
Peter Chen
92e5c4792b Change boost::json to json in unittests (#831) 2023-09-11 12:39:38 +01:00
Michael Legleux
d7f36733bc Link libstd++ and gcc lib statically (#830) 2023-08-24 12:41:08 +01:00
Alex Kremer
435d56e7c5 Fix empty log lines (#825) 2023-08-16 22:57:52 +01:00
Alex Kremer
bf3b24867c Implement sanitizer support via CMake (#822)
Fixes #302
2023-08-15 15:20:50 +01:00
Alex Kremer
ec70127050 Add LOG macro to prevent unnecessary evaluations (#823)
Fixes #824
2023-08-15 14:36:11 +01:00
Alex Kremer
547cb340bd Update doxygen comments (#818)
Fixes #421
2023-08-11 21:32:32 +01:00
Arihant Kothari
c20b14494a Add note on code coverage report generation (#821) 2023-08-11 18:38:35 +01:00
Peter Chen
696b1a585c Refactor namespaces part 2 (#820)
Part 2 of refactoring effort
2023-08-11 17:00:31 +01:00
Peter Chen
23442ff1a7 Refactor namespaces part 1 (#817)
Part 1 of refactoring effort
2023-08-10 18:05:13 +01:00
Alex Kremer
db4046e02a Move connection state logic to an earlier point (#816) 2023-08-09 16:42:33 +01:00
Peter Chen
fc1b5ae4da Support whitelisting for IPV4/IPV6 with CIDR (#796)
Fixes #244
2023-08-08 16:04:16 +01:00
Alex Kremer
5411fd7497 Update readme and compiler requirements (#815) 2023-08-08 11:32:18 +01:00
Michael Legleux
f6488f7024 Fix Linux/gcc build on CI (#813) 2023-08-07 20:53:20 +01:00
cyan317
e3ada6c5da Remove deprecated fields (#814)
Fix #801
2023-08-07 18:23:13 +01:00
cyan317
d61d702ccd Account_info add flags (#812)
Fixes #768
2023-08-04 16:22:39 +01:00
Alex Kremer
4d42cb3cdb Expose advanced options from cassandra-cpp-driver thru the config (#808)
Fixes #810
2023-08-03 15:49:56 +01:00
cyan317
111b55b397 Add cache test (#807)
Fixes #809
2023-08-03 15:03:17 +01:00
cyan317
c90bc15959 Return error when limit<=0 (#804)
Fix #806
2023-08-02 15:34:42 +01:00
Shawn Xie
1804e3e9c0 Update tmp build instructions in README (#802) 2023-08-02 13:45:04 +01:00
Alex Kremer
24f69acd9e Fix Linux/gcc compilation (#795)
Fixes #803
2023-08-02 13:44:03 +01:00
Alex Kremer
98d0a963dc Fix backend factory test and remove cout from base tests (#792)
Fixes #793
2023-07-27 15:40:50 +01:00
cyan317
665890d410 Fix connect_timeout request_timeout not work + tsan in RPCServerTestSuite (#790)
Fixes #791
2023-07-27 13:35:52 +01:00
John Freeman
545886561f Fix link to clio branch of rippled (#789) 2023-07-26 21:44:12 +01:00
Alex Kremer
68eec01dbc Fix TSAN issues part1 (#788)
Fixes a few issues from boost 1.82 migration and some Conan misconfigurations
2023-07-26 21:39:39 +01:00
Peter Chen
02621fe02e Add new RPC Handler "version" (#782)
Fixes #726
2023-07-26 20:02:11 +01:00
cyan317
6ad72446d1 Disable xrpl tests (#785) 2023-07-26 19:31:12 +01:00
Arihant Kothari
1d0a43669b Fix noRippleCheck fee field (#786)
Fixes #709
2023-07-26 19:22:17 +01:00
cyan317
71aabc8c29 Nftids (#780)
Fixes #769
2023-07-26 17:12:20 +01:00
cyan317
6b98579bfb Remove try catch for server_info(#781)
Avoid tsan false alert
2023-07-26 12:44:56 +01:00
cyan317
375ac2ffa6 Enable CI MACOS node (#783)
Fixes  #784
2023-07-26 11:39:13 +01:00
Alex Kremer
c6ca650767 Add initial Conan integration (#712)
Fixes #645
2023-07-24 18:43:02 +01:00
cyan317
2336148d0d Fix missing "validated" (#778)
Fixes #779
2023-07-18 10:52:22 +01:00
Shawn Xie
12178abf4d Use mismatch in getNFTokenMintData (#774) 2023-07-17 22:09:15 +01:00
Alex Kremer
b8705ae086 Add time/uptime/amendment_blocked to server_info (#775) 2023-07-14 16:46:10 +01:00
cyan317
b83d7478ef Unsupported Error when request server stream (#772)
Fixes #773
2023-07-14 14:44:40 +01:00
cyan317
4fd6d51d21 Rename WsSession to WsBase (#770) 2023-07-14 13:28:15 +01:00
cyan317
d195bdb66d Change limit tests (#766)
Fixes #771
2023-07-14 13:08:08 +01:00
Alex Kremer
50dbb51627 Implement configuration options for useful cassandra driver opts (#765)
Fixes #764
2023-07-12 15:59:06 +01:00
cyan317
2f369e175c Add "network_id" to server_info (#761)
Fixes #763
2023-07-12 12:09:08 +01:00
cyan317
47e03a7da3 Forward not supported fields (#757)
Fixes #760
2023-07-11 16:49:16 +01:00
cyan317
d7b84a2e7a Missing "tx_hash" for transaction_entry (#758)
Fixes #759
2023-07-11 16:47:47 +01:00
cyan317
e79425bc21 Remove "strict" (#755)
Fixes #756
2023-07-11 13:21:56 +01:00
cyan317
7710468f37 Ledger owner fund (#753)
Fixes #754
2023-07-11 12:36:48 +01:00
Alex Kremer
210d7fdbc8 Use clamp modifier on limit field instead of between validator (#752)
Fixes #751
2023-07-10 17:57:26 +01:00
Alex Kremer
ba8e7188ca Implement the Clamp modifier (#740)
Fixes #750
2023-07-10 16:09:20 +01:00
cyan317
271323b0f4 account_object supports nft page (#736)
Fix #696
2023-07-10 13:42:57 +01:00
cyan317
7b306f3ba0 version2's account_info (#747)
Fixes #743
2023-07-10 13:42:09 +01:00
cyan317
73805d44ad account_tx rpcLGR_IDXS_INVALID adapt to v2 (#749)
Fixes #748
2023-07-10 13:41:11 +01:00
cyan317
f19772907d account_flags (#745)
Fixes #746
2023-07-10 13:01:33 +01:00
Alex Kremer
616f0176c9 Change an error code in account_lines to match rippled (#742)
Fixes #741
2023-07-07 16:50:32 +01:00
Alex Kremer
9f4f5d319e Fix discrepancies in ledger_entry (#739)
Fixes #738
2023-07-07 12:04:59 +01:00
cyan317
dcbc4577c2 Use "invalidParam" when "book_offers" taker format is wrong (#734)
Fix #735
2023-07-05 17:25:17 +01:00
Alex Kremer
f4d8e18bf7 Add deletion_blockers_only support (#737)
Fixes #730
2023-07-05 17:04:08 +01:00
cyan317
b3e001ebfb Remove date from account_tx (#732)
Fixes #733
2023-07-04 15:40:54 +01:00
Alex Kremer
524821c0b0 Add strict field support (#731)
Fixes #729
2023-07-04 15:39:34 +01:00
cyan317
a292a607c2 Implement 'type' for 'ledger_data' (#705)
Fixes #703
2023-07-04 15:26:21 +01:00
Alex Kremer
81894c0a90 Implement deposit_authorized RPC and tests (#728)
Fixes #727
2023-07-04 11:21:41 +01:00
Alex Kremer
0a7def18cd Implement custom HTTP errors (#720)
Fixes #697
2023-07-04 11:02:32 +01:00
cyan317
1e969ba13b Unknown Option (#710)
Fixes #711
2023-07-03 11:02:14 +01:00
cyan317
ef62718a27 Fix max limit for account tx (#723)
Fixes #724
2023-07-03 10:56:43 +01:00
Alex Kremer
aadd9e50f0 Forward api_version 1 requests to rippled (#716)
Fixes #698
2023-06-26 09:52:57 +01:00
cyan317
d9e89746a4 Stop etl when crash (#708)
Fixes #706
2023-06-21 13:10:24 +01:00
cyan317
557ea5d7f6 Remove sensitive info from log (#701)
Fixes #702
2023-06-16 16:50:54 +01:00
cyan317
4cc3b3ec0f Fix account_tx marker issue (#699)
Fixes #700
2023-06-16 12:29:34 +01:00
Alex Kremer
a960471ef4 Support api_version (#695)
Fixes #64
2023-06-16 12:14:30 +01:00
372 changed files with 35294 additions and 16463 deletions


@@ -1,7 +1,7 @@
 ---
 Language: Cpp
 AccessModifierOffset: -4
-AlignAfterOpenBracket: AlwaysBreak
+AlignAfterOpenBracket: BlockIndent
 AlignConsecutiveAssignments: false
 AlignConsecutiveDeclarations: false
 AlignEscapedNewlinesLeft: true
@@ -18,20 +18,8 @@ AlwaysBreakBeforeMultilineStrings: true
 AlwaysBreakTemplateDeclarations: true
 BinPackArguments: false
 BinPackParameters: false
-BraceWrapping:
-  AfterClass: true
-  AfterControlStatement: true
-  AfterEnum: false
-  AfterFunction: true
-  AfterNamespace: false
-  AfterObjCDeclaration: true
-  AfterStruct: true
-  AfterUnion: true
-  BeforeCatch: true
-  BeforeElse: true
-  IndentBraces: false
 BreakBeforeBinaryOperators: false
-BreakBeforeBraces: Custom
+BreakBeforeBraces: WebKit
 BreakBeforeTernaryOperators: true
 BreakConstructorInitializersBeforeComma: true
 ColumnLimit: 120
@@ -43,6 +31,7 @@ Cpp11BracedListStyle: true
 DerivePointerAlignment: false
 DisableFormat: false
 ExperimentalAutoDetectBinPacking: false
+FixNamespaceComments: true
 ForEachMacros: [ Q_FOREACH, BOOST_FOREACH ]
 IncludeCategories:
   - Regex: '^<(BeastConfig)'
@@ -58,6 +47,8 @@ IndentCaseLabels: true
 IndentFunctionDeclarationAfterType: false
 IndentWidth: 4
 IndentWrappedFunctionNames: false
+IndentRequiresClause: true
+RequiresClausePosition: OwnLine
 KeepEmptyLinesAtTheStartOfBlocks: false
 MaxEmptyLinesToKeep: 1
 NamespaceIndentation: None
@@ -70,6 +61,7 @@ PenaltyBreakString: 1000
 PenaltyExcessCharacter: 1000000
 PenaltyReturnTypeOnItsOwnLine: 200
 PointerAlignment: Left
+QualifierAlignment: Right
 ReflowComments: true
 SortIncludes: true
 SpaceAfterCStyleCast: false

.clang-tidy (new file, 118 lines)

@@ -0,0 +1,118 @@
---
Checks: '-*,
bugprone-argument-comment,
bugprone-assert-side-effect,
bugprone-bad-signal-to-kill-thread,
bugprone-bool-pointer-implicit-conversion,
bugprone-copy-constructor-init,
bugprone-dangling-handle,
bugprone-dynamic-static-initializers,
bugprone-fold-init-type,
bugprone-forward-declaration-namespace,
bugprone-inaccurate-erase,
bugprone-incorrect-roundings,
bugprone-infinite-loop,
bugprone-integer-division,
bugprone-lambda-function-name,
bugprone-macro-parentheses,
bugprone-macro-repeated-side-effects,
bugprone-misplaced-operator-in-strlen-in-alloc,
bugprone-misplaced-pointer-arithmetic-in-alloc,
bugprone-misplaced-widening-cast,
bugprone-move-forwarding-reference,
bugprone-multiple-statement-macro,
bugprone-no-escape,
bugprone-parent-virtual-call,
bugprone-posix-return,
bugprone-redundant-branch-condition,
bugprone-shared-ptr-array-mismatch,
bugprone-signal-handler,
bugprone-signed-char-misuse,
bugprone-sizeof-container,
bugprone-sizeof-expression,
bugprone-spuriously-wake-up-functions,
bugprone-standalone-empty,
bugprone-string-constructor,
bugprone-string-integer-assignment,
bugprone-string-literal-with-embedded-nul,
bugprone-stringview-nullptr,
bugprone-suspicious-enum-usage,
bugprone-suspicious-include,
bugprone-suspicious-memory-comparison,
bugprone-suspicious-memset-usage,
bugprone-suspicious-missing-comma,
bugprone-suspicious-realloc-usage,
bugprone-suspicious-semicolon,
bugprone-suspicious-string-compare,
bugprone-swapped-arguments,
bugprone-terminating-continue,
bugprone-throw-keyword-missing,
bugprone-too-small-loop-variable,
bugprone-undefined-memory-manipulation,
bugprone-undelegated-constructor,
bugprone-unhandled-exception-at-new,
bugprone-unhandled-self-assignment,
bugprone-unused-raii,
bugprone-unused-return-value,
bugprone-use-after-move,
bugprone-virtual-near-miss,
cppcoreguidelines-init-variables,
cppcoreguidelines-prefer-member-initializer,
cppcoreguidelines-pro-type-member-init,
cppcoreguidelines-pro-type-static-cast-downcast,
cppcoreguidelines-virtual-class-destructor,
llvm-namespace-comment,
misc-const-correctness,
misc-definitions-in-headers,
misc-misplaced-const,
misc-redundant-expression,
misc-static-assert,
misc-throw-by-value-catch-by-reference,
misc-unused-alias-decls,
misc-unused-using-decls,
modernize-concat-nested-namespaces,
modernize-deprecated-headers,
modernize-make-shared,
modernize-make-unique,
modernize-pass-by-value,
modernize-use-emplace,
modernize-use-equals-default,
modernize-use-equals-delete,
modernize-use-override,
modernize-use-using,
performance-faster-string-find,
performance-for-range-copy,
performance-implicit-conversion-in-loop,
performance-inefficient-vector-operation,
performance-move-const-arg,
performance-move-constructor-init,
performance-no-automatic-move,
performance-trivially-destructible,
readability-avoid-const-params-in-decls,
readability-braces-around-statements,
readability-const-return-type,
readability-container-contains,
readability-container-size-empty,
readability-convert-member-functions-to-static,
readability-duplicate-include,
readability-else-after-return,
readability-implicit-bool-conversion,
readability-inconsistent-declaration-parameter-name,
readability-make-member-function-const,
readability-misleading-indentation,
readability-non-const-parameter,
readability-redundant-declaration,
readability-redundant-member-init,
readability-redundant-string-init,
readability-simplify-boolean-expr,
readability-static-accessed-through-instance,
readability-static-definition-in-anonymous-namespace,
readability-suspicious-call-argument
'
CheckOptions:
readability-braces-around-statements.ShortStatementLines: 2
HeaderFilterRegex: '^.*/(src|unitests)/.*\.(h|hpp)$'
WarningsAsErrors: '*'


@@ -4,7 +4,22 @@ exec 1>&2
 # paths to check and re-format
 sources="src unittests"
-formatter="clang-format-11 -i"
+formatter="clang-format -i"
+version=$($formatter --version | grep -o '[0-9\.]*')
+if [[ "16.0.0" > "$version" ]]; then
+    cat <<EOF
+ERROR
+-----------------------------------------------------------------------------
+A minimum of version 16 of `clang-format` is required.
+Your version is $version.
+Please fix paths and run again.
+-----------------------------------------------------------------------------
+EOF
+    exit 2
+fi
 first=$(git diff $sources)
 find $sources -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -print0 | xargs -0 $formatter
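A side note on the version gate in this hook: `[[ "16.0.0" > "$version" ]]` is a lexicographic string comparison, which misorders multi-digit components ("9.0.0" compares greater than "16.0.0", so a clang-format 9 would slip through). A minimal sketch of a version-aware check using `sort -V`, not part of the hook itself:

```shell
# Illustrative only: compare versions numerically with sort -V instead of
# bash's lexicographic ">" (under which "9.0.0" sorts after "16.0.0").
version="9.0.0"    # pretend clang-format reported this
required="16.0.0"
# The smaller of the two versions sorts first under -V (version sort).
lowest=$(printf '%s\n%s\n' "$required" "$version" | sort -V | head -n1)
if [ "$lowest" != "$required" ]; then
    echo "clang-format $version is too old; need >= $required" >&2
fi
```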

.github/actions/build_clio/action.yml (new file, 37 lines)

@@ -0,0 +1,37 @@
name: Build clio
description: Build clio in build directory
inputs:
  conan_profile:
    description: Conan profile name
    required: true
    default: default
  conan_cache_hit:
    description: Whether conan cache has been downloaded
    required: true
runs:
  using: composite
  steps:
    - name: Get number of threads on mac
      id: mac_threads
      if: ${{ runner.os == 'macOS' }}
      shell: bash
      run: echo "num=$(($(sysctl -n hw.logicalcpu) - 2))" >> $GITHUB_OUTPUT
    - name: Get number of threads on Linux
      id: linux_threads
      if: ${{ runner.os == 'Linux' }}
      shell: bash
      run: echo "num=$(($(nproc) - 2))" >> $GITHUB_OUTPUT
    - name: Build Clio
      shell: bash
      env:
        BUILD_OPTION: "${{ inputs.conan_cache_hit == 'true' && 'missing' || '' }}"
        LINT: "${{ runner.os == 'Linux' && 'True' || 'False' }}"
      run: |
        mkdir -p build
        cd build
        threads_num=${{ steps.mac_threads.outputs.num || steps.linux_threads.outputs.num }}
        conan install .. -of . -b $BUILD_OPTION -s build_type=Release -o clio:tests=True -o clio:lint=$LINT --profile ${{ inputs.conan_profile }}
        cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release .. -G Ninja
        cmake --build . --parallel $threads_num
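The two thread-count steps in this action compute `cores - 2`, which can reach zero or below on a two-core runner. A small defensive sketch, assuming the intent of reserving two cores while keeping at least one build thread:

```shell
# Sketch: clamp the "all cores minus two" heuristic so small runners
# still get at least one build thread.
cores=$(nproc 2>/dev/null || sysctl -n hw.logicalcpu)  # Linux or macOS
threads=$(( cores - 2 ))
if [ "$threads" -lt 1 ]; then
    threads=1
fi
echo "building with $threads thread(s)"
```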

.github/actions/clang_format/action.yml (new file, 27 lines)

@@ -0,0 +1,27 @@
name: Check format
description: Check format using clang-format-16
runs:
  using: composite
  steps:
    - name: Add llvm repo
      run: |
        echo 'deb http://apt.llvm.org/focal/ llvm-toolchain-focal-16 main' | sudo tee -a /etc/apt/sources.list
        wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
      shell: bash
    - name: Install packages
      run: |
        sudo apt update -qq
        sudo apt install -y jq clang-format-16
      shell: bash
    - name: Run formatter
      run: |
        find src unittests -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -print0 | xargs -0 clang-format-16 -i
      shell: bash
    - name: Check for differences
      id: assert
      shell: bash
      run: |
        git diff --color --exit-code | tee "clang-format.patch"


@@ -0,0 +1,14 @@
name: Git common ancestor
description: Find the closest common commit
outputs:
  commit:
    description: Hash of commit
    value: ${{ steps.find_common_ancestor.outputs.commit }}
runs:
  using: composite
  steps:
    - name: Find common git ancestor
      id: find_common_ancestor
      shell: bash
      run: |
        echo "commit=$(git merge-base --fork-point origin/develop)" >> $GITHUB_OUTPUT
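A self-contained illustration of the lookup this action performs. The throwaway repository and branch names below are made up for the demo; the real action runs `git merge-base --fork-point origin/develop` inside the checked-out workspace:

```shell
# Demo: `git merge-base` returns the closest common ancestor of two branches,
# which the workflows use as a stable cache key. The repo here is throwaway.
repo=$(mktemp -d)
cd "$repo"
git init -q -b main .
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m base
base=$(git rev-parse HEAD)                 # the common ancestor-to-be
git checkout -qb feature
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m work
commit=$(git merge-base main feature)      # resolves back to $base
echo "commit=$commit"
```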


@@ -1,13 +0,0 @@
runs:
using: composite
steps:
# Github's ubuntu-20.04 image already has clang-format-11 installed
- run: |
find src unittests -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -print0 | xargs -0 clang-format-11 -i
shell: bash
- name: Check for differences
id: assert
shell: bash
run: |
git diff --color --exit-code | tee "clang-format.patch"


@@ -0,0 +1,50 @@
name: Restore cache
description: Find and restores conan and ccache cache
inputs:
  conan_dir:
    description: Path to .conan directory
    required: true
  ccache_dir:
    description: Path to .ccache directory
    required: true
outputs:
  conan_hash:
    description: Hash to use as a part of conan cache key
    value: ${{ steps.conan_hash.outputs.hash }}
  conan_cache_hit:
    description: True if conan cache has been downloaded
    value: ${{ steps.conan_cache.outputs.cache-hit }}
  ccache_cache_hit:
    description: True if ccache cache has been downloaded
    value: ${{ steps.ccache_cache.outputs.cache-hit }}
runs:
  using: composite
  steps:
    - name: Find common commit
      id: git_common_ancestor
      uses: ./.github/actions/git_common_ancestor
    - name: Calculate conan hash
      id: conan_hash
      shell: bash
      run: |
        conan info . -j info.json -o clio:tests=True
        packages_info=$(cat info.json | jq '.[] | "\(.display_name): \(.id)"' | grep -v 'clio')
        echo "$packages_info"
        hash=$(echo "$packages_info" | shasum -a 256 | cut -d ' ' -f 1)
        rm info.json
        echo "hash=$hash" >> $GITHUB_OUTPUT
    - name: Restore conan cache
      uses: actions/cache/restore@v3
      id: conan_cache
      with:
        path: ${{ inputs.conan_dir }}/data
        key: clio-conan_data-${{ runner.os }}-develop-${{ steps.conan_hash.outputs.hash }}
    - name: Restore ccache cache
      uses: actions/cache/restore@v3
      id: ccache_cache
      with:
        path: ${{ inputs.ccache_dir }}
        key: clio-ccache-${{ runner.os }}-develop-${{ steps.git_common_ancestor.outputs.commit }}
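The conan cache key in this action is derived by hashing the resolved dependency list, so the key changes whenever any package or package ID changes. A minimal sketch of that derivation with a fabricated package list (the real list comes from `conan info` piped through `jq`):

```shell
# Sketch of the cache-key derivation. The package list below is fabricated;
# the CI action builds it from `conan info` output.
packages_info='boost/1.82.0: 0123abcd
fmt/10.1.1: 4567ef01'
# SHA-256 of the list gives a stable, collision-resistant key component.
hash=$(echo "$packages_info" | shasum -a 256 | cut -d ' ' -f 1)
key="clio-conan_data-Linux-develop-$hash"
echo "$key"
```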

.github/actions/save_cache/action.yml (new file, 46 lines)

@@ -0,0 +1,46 @@
name: Save cache
description: Save conan and ccache cache for develop branch
inputs:
  conan_dir:
    description: Path to .conan directory
    required: true
  conan_hash:
    description: Hash to use as a part of conan cache key
    required: true
  conan_cache_hit:
    description: Whether conan cache has been downloaded
    required: true
  ccache_dir:
    description: Path to .ccache directory
    required: true
  ccache_cache_hit:
    description: Whether conan cache has been downloaded
    required: true
runs:
  using: composite
  steps:
    - name: Find common commit
      id: git_common_ancestor
      uses: ./.github/actions/git_common_ancestor
    - name: Cleanup conan directory from extra data
      if: ${{ inputs.conan_cache_hit != 'true' }}
      shell: bash
      run: |
        conan remove "*" -s -b -f
    - name: Save conan cache
      if: ${{ inputs.conan_cache_hit != 'true' }}
      uses: actions/cache/save@v3
      with:
        path: ${{ inputs.conan_dir }}/data
        key: clio-conan_data-${{ runner.os }}-develop-${{ inputs.conan_hash }}
    - name: Save ccache cache
      if: ${{ inputs.ccache_cache_hit != 'true' }}
      uses: actions/cache/save@v3
      with:
        path: ${{ inputs.ccache_dir }}
        key: clio-ccache-${{ runner.os }}-develop-${{ steps.git_common_ancestor.outputs.commit }}

.github/actions/setup_conan/action.yml (new file, 55 lines)

@@ -0,0 +1,55 @@
name: Setup conan
description: Setup conan profile and artifactory
outputs:
  conan_profile:
    description: Created conan profile name
    value: ${{ steps.conan_export_output.outputs.conan_profile }}
runs:
  using: composite
  steps:
    - name: On mac
      if: ${{ runner.os == 'macOS' }}
      shell: bash
      env:
        CONAN_PROFILE: clio_clang_14
      id: conan_setup_mac
      run: |
        echo "Creating $CONAN_PROFILE conan profile";
        clang_path="$(brew --prefix llvm@14)/bin/clang"
        clang_cxx_path="$(brew --prefix llvm@14)/bin/clang++"
        conan profile new $CONAN_PROFILE --detect --force
        conan profile update settings.compiler=clang $CONAN_PROFILE
        conan profile update settings.compiler.version=14 $CONAN_PROFILE
        conan profile update settings.compiler.cppstd=20 $CONAN_PROFILE
        conan profile update "conf.tools.build:compiler_executables={\"c\": \"$clang_path\", \"cpp\": \"$clang_cxx_path\"}" $CONAN_PROFILE
        conan profile update env.CC="$clang_path" $CONAN_PROFILE
        conan profile update env.CXX="$clang_cxx_path" $CONAN_PROFILE
        echo "created_conan_profile=$CONAN_PROFILE" >> $GITHUB_OUTPUT
    - name: On linux
      if: ${{ runner.os == 'Linux' }}
      shell: bash
      id: conan_setup_linux
      run: |
        conan profile new default --detect
        conan profile update settings.compiler.cppstd=20 default
        conan profile update settings.compiler.libcxx=libstdc++11 default
        echo "created_conan_profile=default" >> $GITHUB_OUTPUT
    - name: Export output variable
      shell: bash
      id: conan_export_output
      run: |
        echo "conan_profile=${{ steps.conan_setup_mac.outputs.created_conan_profile || steps.conan_setup_linux.outputs.created_conan_profile }}" >> $GITHUB_OUTPUT
    - name: Add conan-non-prod artifactory
      shell: bash
      run: |
        if [[ -z $(conan remote list | grep conan-non-prod) ]]; then
          echo "Adding conan-non-prod"
          conan remote add --insert 0 conan-non-prod http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
        else
          echo "Conan-non-prod is available"
        fi


@@ -1,9 +1,9 @@
name: Build Clio name: Build Clio
on: on:
push: push:
branches: [master, release/*, develop, develop-next] branches: [master, release/*, develop]
pull_request: pull_request:
branches: [master, release/*, develop, develop-next] branches: [master, release/*, develop]
workflow_dispatch: workflow_dispatch:
jobs: jobs:
@@ -13,200 +13,147 @@ jobs:
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v3
- name: Run clang-format - name: Run clang-format
uses: ./.github/actions/lint uses: ./.github/actions/clang_format
build_clio: build_mac:
name: Build Clio name: Build macOS
runs-on: [self-hosted, heavy]
needs: lint needs: lint
strategy: runs-on: [self-hosted, macOS]
fail-fast: false env:
matrix: CCACHE_DIR: ${{ github.workspace }}/.ccache
type: CONAN_USER_HOME: ${{ github.workspace }}
- suffix: deb
image: rippleci/clio-dpkg-builder:2022-09-17
script: dpkg
- suffix: rpm
image: rippleci/clio-rpm-builder:2022-09-17
script: rpm
container:
image: ${{ matrix.type.image }}
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v3
with: with:
path: clio
fetch-depth: 0 fetch-depth: 0
- name: Clone Clio packaging repo - name: Install packages
run: |
brew install llvm@14 pkg-config ninja bison cmake ccache jq
- name: Setup conan
uses: ./.github/actions/setup_conan
id: conan
- name: Restore cache
uses: ./.github/actions/restore_cache
id: restore_cache
with:
conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
ccache_dir: ${{ env.CCACHE_DIR }}
- name: Build Clio
uses: ./.github/actions/build_clio
with:
conan_profile: ${{ steps.conan.outputs.conan_profile }}
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
- name: Strip tests
run: strip build/clio_tests
- name: Upload clio_tests
uses: actions/upload-artifact@v3
with:
name: clio_tests_mac
path: build/clio_tests
- name: Save cache
uses: ./.github/actions/save_cache
with:
conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
conan_hash: ${{ steps.restore_cache.outputs.conan_hash }}
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
ccache_dir: ${{ env.CCACHE_DIR }}
ccache_cache_hit: ${{ steps.restore_cache.outputs.ccache_cache_hit }}
build_linux:
name: Build linux
needs: lint
runs-on: [self-hosted, Linux]
container:
image: conanio/gcc11:1.61.0
options: --user root
env:
CCACHE_DIR: /root/.ccache
CONAN_USER_HOME: /root/
steps:
- name: Get Clio
        uses: actions/checkout@v3
        with:
-         path: clio-packages
-         repository: XRPLF/clio-packages
-         ref: main
-     - name: Build
-       shell: bash
+         fetch-depth: 0
+     - name: Add llvm repo
        run: |
-         export CLIO_ROOT=$(realpath clio)
-         if [ ${{ matrix.type.suffix }} == "rpm" ]; then
-           source /opt/rh/devtoolset-11/enable
-         fi
-         cmake -S clio-packages -B clio-packages/build -DCLIO_ROOT=$CLIO_ROOT
-         cmake --build clio-packages/build --parallel $(nproc)
-         cp ./clio-packages/build/clio-prefix/src/clio-build/clio_tests .
-         mv ./clio-packages/build/*.${{ matrix.type.suffix }} .
-     - name: Artifact packages
+         echo 'deb http://apt.llvm.org/focal/ llvm-toolchain-focal-16 main' >> /etc/apt/sources.list
+         wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | apt-key add -
+     - name: Install packages
+       run: |
+         apt update -qq
+         apt install -y jq clang-tidy-16
+     - name: Install ccache
+       run: |
+         wget https://github.com/ccache/ccache/releases/download/v4.8.3/ccache-4.8.3-linux-x86_64.tar.xz
+         tar xf ./ccache-4.8.3-linux-x86_64.tar.xz
+         mv ./ccache-4.8.3-linux-x86_64/ccache /usr/bin/ccache
+     - name: Fix git permissions
+       run: git config --global --add safe.directory $PWD
+     - name: Setup conan
+       uses: ./.github/actions/setup_conan
+     - name: Restore cache
+       uses: ./.github/actions/restore_cache
+       id: restore_cache
+       with:
+         conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
+         ccache_dir: ${{ env.CCACHE_DIR }}
+     - name: Build Clio
+       uses: ./.github/actions/build_clio
+       with:
+         conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
+     - name: Strip tests
+       run: strip build/clio_tests
+     - name: Upload clio_tests
        uses: actions/upload-artifact@v3
        with:
-         name: clio_${{ matrix.type.suffix }}_packages
-         path: ${{ github.workspace }}/*.${{ matrix.type.suffix }}
-     - name: Artifact clio_tests
-       uses: actions/upload-artifact@v3
-       with:
-         name: clio_tests-${{ matrix.type.suffix }}
-         path: ${{ github.workspace }}/clio_tests
+         name: clio_tests_linux
+         path: build/clio_tests
+     - name: Save cache
+       uses: ./.github/actions/save_cache
+       with:
+         conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
+         conan_hash: ${{ steps.restore_cache.outputs.conan_hash }}
+         conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
+         ccache_dir: ${{ env.CCACHE_DIR }}
+         ccache_cache_hit: ${{ steps.restore_cache.outputs.ccache_cache_hit }}
- build_dev:
-   name: Build on Mac/Clang14 and run tests
-   needs: lint
-   continue-on-error: false
+ test_mac:
+   needs: build_mac
    runs-on: [self-hosted, macOS]
    steps:
-     - uses: actions/checkout@v3
+     - uses: actions/download-artifact@v3
        with:
-         path: clio
-     - name: Check Boost cache
-       id: boost
-       uses: actions/cache@v3
-       with:
-         path: boost_1_77_0
-         key: ${{ runner.os }}-boost
-     - name: Build Boost
-       if: ${{ steps.boost.outputs.cache-hit != 'true' }}
+         name: clio_tests_mac
+     - name: Run clio_tests
        run: |
-         rm -rf boost_1_77_0.tar.gz boost_1_77_0 # cleanup if needed first
-         curl -s -fOJL "https://boostorg.jfrog.io/artifactory/main/release/1.77.0/source/boost_1_77_0.tar.gz"
-         tar zxf boost_1_77_0.tar.gz
-         cd boost_1_77_0
-         ./bootstrap.sh
-         ./b2 define=BOOST_ASIO_HAS_STD_INVOKE_RESULT cxxflags="-std=c++20"
-     - name: Install dependencies
-       run: |
-         brew install llvm@14 pkg-config protobuf openssl ninja cassandra-cpp-driver bison cmake
-     - name: Setup environment for llvm-14
-       run: |
-         export PATH="/usr/local/opt/llvm@14/bin:$PATH"
-         export LDFLAGS="-L/usr/local/opt/llvm@14/lib -L/usr/local/opt/llvm@14/lib/c++ -Wl,-rpath,/usr/local/opt/llvm@14/lib/c++"
-         export CPPFLAGS="-I/usr/local/opt/llvm@14/include"
-     - name: Build clio
-       run: |
-         export BOOST_ROOT=$(pwd)/boost_1_77_0
-         cd clio
-         cmake -B build -DCMAKE_C_COMPILER='/usr/local/opt/llvm@14/bin/clang' -DCMAKE_CXX_COMPILER='/usr/local/opt/llvm@14/bin/clang++'
-         if ! cmake --build build -j; then
-           echo '# 🔥🔥 MacOS AppleClang build failed!💥' >> $GITHUB_STEP_SUMMARY
-           exit 1
-         fi
-     - name: Run Test
-       run: |
-         cd clio/build
+         chmod +x ./clio_tests
          ./clio_tests --gtest_filter="-BackendCassandraBaseTest*:BackendCassandraTest*:BackendCassandraFactoryTestWithDB*"
- test_clio:
-   name: Test Clio
-   runs-on: [self-hosted, Linux]
-   needs: build_clio
-   strategy:
-     fail-fast: false
-     matrix:
-       suffix: [rpm, deb]
+ test_linux:
+   needs: build_linux
+   runs-on: [self-hosted, x-heavy]
    steps:
-     - uses: actions/checkout@v3
-     - name: Get clio_tests artifact
-       uses: actions/download-artifact@v3
+     - uses: actions/download-artifact@v3
        with:
-         name: clio_tests-${{ matrix.suffix }}
-     - name: Run tests
-       timeout-minutes: 10
-       uses: ./.github/actions/test
- code_coverage:
-   name: Build on Linux and code coverage
-   needs: lint
-   continue-on-error: false
-   runs-on: ubuntu-22.04
-   steps:
-     - uses: actions/checkout@v3
-       with:
-         path: clio
-     - name: Check Boost cache
-       id: boost
-       uses: actions/cache@v3
-       with:
-         path: boost
-         key: ${{ runner.os }}-boost
-     - name: Build boost
-       if: steps.boost.outputs.cache-hit != 'true'
+         name: clio_tests_linux
+     - name: Run clio_tests
        run: |
-         curl -s -OJL "https://boostorg.jfrog.io/artifactory/main/release/1.77.0/source/boost_1_77_0.tar.gz"
-         tar zxf boost_1_77_0.tar.gz
-         mv boost_1_77_0 boost
-         cd boost
-         ./bootstrap.sh
-         ./b2
-     - name: install deps
-       run: |
-         sudo apt-get -y install git pkg-config protobuf-compiler libprotobuf-dev libssl-dev wget build-essential doxygen bison flex autoconf clang-format gcovr
-     - name: Build clio
-       run: |
-         export BOOST_ROOT=$(pwd)/boost
-         cd clio
-         cmake -B build -DCODE_COVERAGE=on -DTEST_PARAMETER='--gtest_filter="-BackendCassandraBaseTest*:BackendCassandraTest*:BackendCassandraFactoryTestWithDB*"'
-         if ! cmake --build build -j$(nproc); then
-           echo '# 🔥Ubuntu build🔥 failed!💥' >> $GITHUB_STEP_SUMMARY
-           exit 1
-         fi
-         cd build
-         make clio_tests-ccov
-     - name: Code Coverage Summary Report
-       uses: irongut/CodeCoverageSummary@v1.2.0
-       with:
-         filename: clio/build/clio_tests-gcc-cov/out.xml
-         badge: true
-         output: both
-         format: markdown
-     - name: Save PR number and ccov report
-       run: |
-         mkdir -p ./UnitTestCoverage
-         echo ${{ github.event.number }} > ./UnitTestCoverage/NR
-         cp clio/build/clio_tests-gcc-cov/report.html ./UnitTestCoverage/report.html
-         cp code-coverage-results.md ./UnitTestCoverage/out.md
-         cat code-coverage-results.md > $GITHUB_STEP_SUMMARY
-     - name: Upload coverage reports to Codecov
-       uses: codecov/codecov-action@v3
-       with:
-         files: clio/build/clio_tests-gcc-cov/out.xml
-     - uses: actions/upload-artifact@v3
-       with:
-         name: UnitTestCoverage
-         path: UnitTestCoverage/
-     - uses: actions/upload-artifact@v3
-       with:
-         name: code_coverage_report
-         path: clio/build/clio_tests-gcc-cov/out.xml
+         chmod +x ./clio_tests
+         ./clio_tests --gtest_filter="-BackendCassandraBaseTest*:BackendCassandraTest*:BackendCassandraFactoryTestWithDB*"

.gitignore vendored

@@ -1,6 +1,9 @@
 *clio*.log
-build*/
+/build*/
+.build
+.cache
 .vscode
 .python-version
+CMakeUserPresets.json
 config.json
 src/main/impl/Build.cpp

CMake/Ccache.cmake Normal file

@@ -0,0 +1,5 @@
find_program (CCACHE_PATH "ccache")
if (CCACHE_PATH)
set (CMAKE_CXX_COMPILER_LAUNCHER "${CCACHE_PATH}")
message (STATUS "Using ccache: ${CCACHE_PATH}")
endif ()
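The effect of setting `CMAKE_CXX_COMPILER_LAUNCHER` above is simply that CMake prefixes every compile command with the launcher binary. A minimal shell sketch of that behavior — the paths and compiler flags here are illustrative, not taken from the repo:

```shell
# Emulates what CMake does with CMAKE_CXX_COMPILER_LAUNCHER: when a launcher
# was found, it is prepended to the real compiler command line.
CCACHE_PATH="/usr/bin/ccache"   # stand-in for find_program's result
compiler="g++ -c main.cpp -o main.o"
if [ -n "$CCACHE_PATH" ]; then
  cmd="$CCACHE_PATH $compiler"  # launcher wraps the compiler
else
  cmd="$compiler"               # no launcher: plain compiler, as the CMake falls through
fi
echo "$cmd"
```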

CMake/CheckCompiler.cmake Normal file

@@ -0,0 +1,42 @@
if (CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 14)
message (FATAL_ERROR "Clang 14+ required for building clio")
endif ()
set (is_clang TRUE)
elseif (CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang")
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 14)
message (FATAL_ERROR "AppleClang 14+ required for building clio")
endif ()
set (is_appleclang TRUE)
elseif (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 11)
message (FATAL_ERROR "GCC 11+ required for building clio")
endif ()
set (is_gcc TRUE)
else ()
message (FATAL_ERROR "Supported compilers: AppleClang 14+, Clang 14+, GCC 11+")
endif ()
if (san)
string (TOLOWER ${san} san)
set (SAN_FLAG "-fsanitize=${san}")
set (SAN_LIB "")
if (is_gcc)
if (san STREQUAL "address")
set (SAN_LIB "asan")
elseif (san STREQUAL "thread")
set (SAN_LIB "tsan")
elseif (san STREQUAL "memory")
set (SAN_LIB "msan")
elseif (san STREQUAL "undefined")
set (SAN_LIB "ubsan")
endif ()
endif ()
set (_saved_CRL ${CMAKE_REQUIRED_LIBRARIES})
set (CMAKE_REQUIRED_LIBRARIES "${SAN_FLAG};${SAN_LIB}")
CHECK_CXX_COMPILER_FLAG (${SAN_FLAG} COMPILER_SUPPORTS_SAN)
set (CMAKE_REQUIRED_LIBRARIES ${_saved_CRL})
if (NOT COMPILER_SUPPORTS_SAN)
message (FATAL_ERROR "${san} sanitizer does not seem to be supported by your compiler")
endif ()
endif ()
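The gcc branch of the sanitizer block maps each `-fsanitize=` value to its runtime library before the compile check is run. The mapping can be sketched in shell (values taken from the CMake above; the function name is just for illustration):

```shell
# Mirrors the san -> SAN_LIB mapping from CheckCompiler.cmake (gcc branch).
san_lib() {
  case "$1" in
    address)   echo "asan" ;;
    thread)    echo "tsan" ;;
    memory)    echo "msan" ;;
    undefined) echo "ubsan" ;;
    *)         echo "" ;;      # unknown sanitizer: no extra runtime library
  esac
}
san_lib address
san_lib undefined
```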

CMake/ClangTidy.cmake Normal file

@@ -0,0 +1,31 @@
if (lint)
# Find clang-tidy binary
if (DEFINED ENV{CLIO_CLANG_TIDY_BIN})
set (_CLANG_TIDY_BIN $ENV{CLIO_CLANG_TIDY_BIN})
if ((NOT EXISTS ${_CLANG_TIDY_BIN}) OR IS_DIRECTORY ${_CLANG_TIDY_BIN})
message (FATAL_ERROR "$ENV{CLIO_CLANG_TIDY_BIN} no such file. Check CLIO_CLANG_TIDY_BIN env variable")
endif ()
message (STATUS "Using clang-tidy from CLIO_CLANG_TIDY_BIN")
else ()
find_program (_CLANG_TIDY_BIN NAMES "clang-tidy-16" "clang-tidy" REQUIRED)
endif ()
if (NOT _CLANG_TIDY_BIN)
message (FATAL_ERROR
"clang-tidy binary not found. Please set the CLIO_CLANG_TIDY_BIN environment variable or install clang-tidy.")
endif ()
# Support for https://github.com/matus-chochlik/ctcache
find_program (CLANG_TIDY_CACHE_PATH NAMES "clang-tidy-cache")
if (CLANG_TIDY_CACHE_PATH)
set (_CLANG_TIDY_CMD
"${CLANG_TIDY_CACHE_PATH};${_CLANG_TIDY_BIN}"
CACHE STRING "A combined command to run clang-tidy with caching wrapper")
else ()
set(_CLANG_TIDY_CMD "${_CLANG_TIDY_BIN}")
endif ()
set (CMAKE_CXX_CLANG_TIDY "${_CLANG_TIDY_CMD};--quiet")
message (STATUS "Using clang-tidy: ${CMAKE_CXX_CLANG_TIDY}")
endif ()
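The lookup order above — an explicit `CLIO_CLANG_TIDY_BIN` wins, otherwise `find_program` falls back to `clang-tidy-16` — can be sketched as (the override path below is a made-up example):

```shell
# Mirrors the clang-tidy binary resolution from ClangTidy.cmake.
resolve_tidy() {
  if [ -n "$CLIO_CLANG_TIDY_BIN" ]; then
    echo "$CLIO_CLANG_TIDY_BIN"   # env override takes precedence
  else
    echo "clang-tidy-16"          # first candidate passed to find_program
  fi
}
CLIO_CLANG_TIDY_BIN="" ; default="$(resolve_tidy)"
CLIO_CLANG_TIDY_BIN="/opt/llvm/bin/clang-tidy" ; override="$(resolve_tidy)"
echo "$default $override"
```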


@@ -2,32 +2,38 @@
 write version to source
 #]===================================================================]
-find_package(Git REQUIRED)
-set(GIT_COMMAND rev-parse --short HEAD)
-execute_process(COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} OUTPUT_VARIABLE REV OUTPUT_STRIP_TRAILING_WHITESPACE)
-set(GIT_COMMAND branch --show-current)
-execute_process(COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} OUTPUT_VARIABLE BRANCH OUTPUT_STRIP_TRAILING_WHITESPACE)
-if(BRANCH STREQUAL "")
-  set(BRANCH "dev")
-endif()
-if(NOT (BRANCH MATCHES master OR BRANCH MATCHES release/*)) # for develop and any other branch name YYYYMMDDHMS-<branch>-<git-ref>
-  execute_process(COMMAND date +%Y%m%d%H%M%S OUTPUT_VARIABLE DATE OUTPUT_STRIP_TRAILING_WHITESPACE)
-  set(VERSION "${DATE}-${BRANCH}-${REV}")
-else()
-  set(GIT_COMMAND describe --tags)
-  execute_process(COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND} OUTPUT_VARIABLE TAG_VERSION OUTPUT_STRIP_TRAILING_WHITESPACE)
-  set(VERSION "${TAG_VERSION}-${REV}")
-endif()
-if(CMAKE_BUILD_TYPE MATCHES Debug)
-  set(VERSION "${VERSION}+DEBUG")
-endif()
-message(STATUS "Build version: ${VERSION}")
-set(clio_version "${VERSION}")
-configure_file(CMake/Build.cpp.in ${CMAKE_SOURCE_DIR}/src/main/impl/Build.cpp)
+find_package (Git REQUIRED)
+set (GIT_COMMAND rev-parse --short HEAD)
+execute_process (COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND}
+                 WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
+                 OUTPUT_VARIABLE REV OUTPUT_STRIP_TRAILING_WHITESPACE)
+set (GIT_COMMAND branch --show-current)
+execute_process (COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND}
+                 WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
+                 OUTPUT_VARIABLE BRANCH OUTPUT_STRIP_TRAILING_WHITESPACE)
+if (BRANCH STREQUAL "")
+  set (BRANCH "dev")
+endif ()
+if (NOT (BRANCH MATCHES master OR BRANCH MATCHES release/*)) # for develop and any other branch name YYYYMMDDHMS-<branch>-<git-rev>
+  execute_process (COMMAND date +%Y%m%d%H%M%S OUTPUT_VARIABLE DATE OUTPUT_STRIP_TRAILING_WHITESPACE)
+  set (VERSION "${DATE}-${BRANCH}-${REV}")
+else ()
+  set (GIT_COMMAND describe --tags)
+  execute_process (COMMAND ${GIT_EXECUTABLE} ${GIT_COMMAND}
+                   WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
+                   OUTPUT_VARIABLE TAG_VERSION OUTPUT_STRIP_TRAILING_WHITESPACE)
+  set (VERSION "${TAG_VERSION}-${REV}")
+endif ()
+if (CMAKE_BUILD_TYPE MATCHES Debug)
+  set (VERSION "${VERSION}+DEBUG")
+endif ()
+message (STATUS "Build version: ${VERSION}")
+set (clio_version "${VERSION}")
+configure_file (CMake/Build.cpp.in ${CMAKE_SOURCE_DIR}/src/main/impl/Build.cpp)
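For non-release branches the scheme above yields `YYYYMMDDHHMMSS-<branch>-<rev>`. A shell sketch of that string assembly — the date, branch, and revision values below are purely illustrative, where CMake derives them from `date` and git:

```shell
# Reproduces the dev-branch version string from ClioVersion.cmake.
DATE="20231115194038"   # date +%Y%m%d%H%M%S
BRANCH="develop"        # git branch --show-current
REV="1bacad9"           # git rev-parse --short HEAD
VERSION="${DATE}-${BRANCH}-${REV}"
echo "$VERSION"
```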


@@ -1,42 +1,45 @@
-# call add_converage(module_name) to add coverage targets for the given module
-function(add_converage module)
-  if("${CMAKE_C_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang"
-     OR "${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
-    message("[Coverage] Building with llvm Code Coverage Tools")
+# call add_coverage(module_name) to add coverage targets for the given module
+function (add_coverage module)
+  if ("${CMAKE_C_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang"
+      OR "${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
+    message ("[Coverage] Building with llvm Code Coverage Tools")
     # Using llvm gcov ; llvm install by xcode
-    set(LLVM_COV_PATH /Library/Developer/CommandLineTools/usr/bin)
-    if(NOT EXISTS ${LLVM_COV_PATH}/llvm-cov)
-      message(FATAL_ERROR "llvm-cov not found! Aborting.")
-    endif()
+    set (LLVM_COV_PATH /Library/Developer/CommandLineTools/usr/bin)
+    if (NOT EXISTS ${LLVM_COV_PATH}/llvm-cov)
+      message (FATAL_ERROR "llvm-cov not found! Aborting.")
+    endif ()
     # set Flags
-    target_compile_options(${module} PRIVATE -fprofile-instr-generate
-                                             -fcoverage-mapping)
-    target_link_options(${module} PUBLIC -fprofile-instr-generate
-                                         -fcoverage-mapping)
-    target_compile_options(clio PRIVATE -fprofile-instr-generate
-                                        -fcoverage-mapping)
-    target_link_options(clio PUBLIC -fprofile-instr-generate
-                                    -fcoverage-mapping)
+    target_compile_options (${module} PRIVATE
+      -fprofile-instr-generate
+      -fcoverage-mapping)
+    target_link_options (${module} PUBLIC
+      -fprofile-instr-generate
+      -fcoverage-mapping)
+    target_compile_options (clio PRIVATE
+      -fprofile-instr-generate
+      -fcoverage-mapping)
+    target_link_options (clio PUBLIC
+      -fprofile-instr-generate
+      -fcoverage-mapping)
     # llvm-cov
-    add_custom_target(
-      ${module}-ccov-preprocessing
+    add_custom_target (${module}-ccov-preprocessing
       COMMAND LLVM_PROFILE_FILE=${module}.profraw $<TARGET_FILE:${module}>
       COMMAND ${LLVM_COV_PATH}/llvm-profdata merge -sparse ${module}.profraw -o
               ${module}.profdata
       DEPENDS ${module})
-    add_custom_target(
-      ${module}-ccov-show
+    add_custom_target (${module}-ccov-show
       COMMAND ${LLVM_COV_PATH}/llvm-cov show $<TARGET_FILE:${module}>
               -instr-profile=${module}.profdata -show-line-counts-or-regions
       DEPENDS ${module}-ccov-preprocessing)
     # add summary for CI parse
-    add_custom_target(
-      ${module}-ccov-report
+    add_custom_target (${module}-ccov-report
       COMMAND
         ${LLVM_COV_PATH}/llvm-cov report $<TARGET_FILE:${module}>
         -instr-profile=${module}.profdata
@@ -45,8 +48,7 @@ function(add_converage module)
       DEPENDS ${module}-ccov-preprocessing)
     # exclude libs and unittests self
-    add_custom_target(
-      ${module}-ccov
+    add_custom_target (${module}-ccov
       COMMAND
         ${LLVM_COV_PATH}/llvm-cov show $<TARGET_FILE:${module}>
         -instr-profile=${module}.profdata -show-line-counts-or-regions
@@ -54,38 +56,36 @@ function(add_converage module)
         -ignore-filename-regex=".*_makefiles|.*unittests|.*_deps" > /dev/null 2>&1
       DEPENDS ${module}-ccov-preprocessing)
-    add_custom_command(
+    add_custom_command (
       TARGET ${module}-ccov
       POST_BUILD
       COMMENT
         "Open ${module}-llvm-cov/index.html in your browser to view the coverage report."
     )
-  elseif("${CMAKE_C_COMPILER_ID}" MATCHES "GNU" OR "${CMAKE_CXX_COMPILER_ID}"
-         MATCHES "GNU")
-    message("[Coverage] Building with Gcc Code Coverage Tools")
-    find_program(GCOV_PATH gcov)
-    if(NOT GCOV_PATH)
-      message(FATAL_ERROR "gcov not found! Aborting...")
-    endif() # NOT GCOV_PATH
-    find_program(GCOVR_PATH gcovr)
-    if(NOT GCOVR_PATH)
-      message(FATAL_ERROR "gcovr not found! Aborting...")
-    endif() # NOT GCOVR_PATH
-    set(COV_OUTPUT_PATH ${module}-gcc-cov)
-    target_compile_options(${module} PRIVATE -fprofile-arcs -ftest-coverage
-                                             -fPIC)
-    target_link_libraries(${module} PRIVATE gcov)
-    target_compile_options(clio PRIVATE -fprofile-arcs -ftest-coverage
-                                        -fPIC)
-    target_link_libraries(clio PRIVATE gcov)
+  elseif ("${CMAKE_C_COMPILER_ID}" MATCHES "GNU" OR "${CMAKE_CXX_COMPILER_ID}" MATCHES "GNU")
+    message ("[Coverage] Building with Gcc Code Coverage Tools")
+    find_program (GCOV_PATH gcov)
+    if (NOT GCOV_PATH)
+      message (FATAL_ERROR "gcov not found! Aborting...")
+    endif () # NOT GCOV_PATH
+    find_program (GCOVR_PATH gcovr)
+    if (NOT GCOVR_PATH)
+      message (FATAL_ERROR "gcovr not found! Aborting...")
+    endif () # NOT GCOVR_PATH
+    set (COV_OUTPUT_PATH ${module}-gcc-cov)
+    target_compile_options (${module} PRIVATE -fprofile-arcs -ftest-coverage
+                                              -fPIC)
+    target_link_libraries (${module} PRIVATE gcov)
+    target_compile_options (clio PRIVATE -fprofile-arcs -ftest-coverage
+                                         -fPIC)
+    target_link_libraries (clio PRIVATE gcov)
     # this target is used for CI as well generate the summary out.xml will send
     # to github action to generate markdown, we can paste it to comments or
     # readme
-    add_custom_target(
-      ${module}-ccov
+    add_custom_target (${module}-ccov
      COMMAND ${module} ${TEST_PARAMETER}
      COMMAND rm -rf ${COV_OUTPUT_PATH}
      COMMAND mkdir ${COV_OUTPUT_PATH}
@@ -102,8 +102,7 @@ function(add_converage module)
      COMMENT "Running gcovr to produce Cobertura code coverage report.")
     # generate the detail report
-    add_custom_target(
-      ${module}-ccov-report
+    add_custom_target (${module}-ccov-report
      COMMAND ${module} ${TEST_PARAMETER}
      COMMAND rm -rf ${COV_OUTPUT_PATH}
      COMMAND mkdir ${COV_OUTPUT_PATH}
@@ -114,13 +113,13 @@ function(add_converage module)
        --exclude='${PROJECT_BINARY_DIR}/'
      WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
      COMMENT "Running gcovr to produce Cobertura code coverage report.")
-    add_custom_command(
+    add_custom_command (
      TARGET ${module}-ccov-report
      POST_BUILD
      COMMENT
        "Open ${COV_OUTPUT_PATH}/index.html in your browser to view the coverage report."
     )
-  else()
-    message(FATAL_ERROR "Complier not support yet")
-  endif()
-endfunction()
+  else ()
+    message (FATAL_ERROR "Complier not support yet")
+  endif ()
+endfunction ()

CMake/Docs.cmake Normal file

@@ -0,0 +1,11 @@
find_package (Doxygen REQUIRED)
set (DOXYGEN_IN ${CMAKE_CURRENT_SOURCE_DIR}/Doxyfile)
set (DOXYGEN_OUT ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile)
configure_file (${DOXYGEN_IN} ${DOXYGEN_OUT} @ONLY)
add_custom_target (docs
COMMAND ${DOXYGEN_EXECUTABLE} ${DOXYGEN_OUT}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
COMMENT "Generating API documentation with Doxygen"
VERBATIM)

CMake/Settings.cmake Normal file

@@ -0,0 +1,45 @@
set(COMPILER_FLAGS
-Wall
-Wcast-align
-Wdouble-promotion
-Wextra
-Werror
-Wformat=2
-Wimplicit-fallthrough
-Wmisleading-indentation
-Wno-narrowing
-Wno-deprecated-declarations
-Wno-dangling-else
-Wno-unused-but-set-variable
-Wnon-virtual-dtor
-Wnull-dereference
-Wold-style-cast
-pedantic
-Wpedantic
-Wunused
)
if (is_gcc AND NOT lint)
list(APPEND COMPILER_FLAGS
-Wduplicated-branches
-Wduplicated-cond
-Wlogical-op
-Wuseless-cast
)
endif ()
if (is_clang)
list(APPEND COMPILER_FLAGS
-Wshadow # gcc is to aggressive with shadowing https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78147
)
endif ()
if (is_appleclang)
list(APPEND COMPILER_FLAGS
-Wreorder-init-list
)
endif ()
# See https://github.com/cpp-best-practices/cppbestpractices/blob/master/02-Use_the_Tools_Available.md#gcc--clang for the flags description
target_compile_options (clio PUBLIC ${COMPILER_FLAGS})


@@ -0,0 +1,11 @@
include (CheckIncludeFileCXX)
check_include_file_cxx ("source_location" SOURCE_LOCATION_AVAILABLE)
if (SOURCE_LOCATION_AVAILABLE)
target_compile_definitions (clio PUBLIC "HAS_SOURCE_LOCATION")
endif ()
check_include_file_cxx ("experimental/source_location" EXPERIMENTAL_SOURCE_LOCATION_AVAILABLE)
if (EXPERIMENTAL_SOURCE_LOCATION_AVAILABLE)
target_compile_definitions (clio PUBLIC "HAS_EXPERIMENTAL_SOURCE_LOCATION")
endif ()


@@ -1,6 +1,11 @@
-set(Boost_USE_STATIC_LIBS ON)
-set(Boost_USE_STATIC_RUNTIME ON)
-find_package(Boost 1.75 COMPONENTS filesystem log_setup log thread system REQUIRED)
-target_link_libraries(clio PUBLIC ${Boost_LIBRARIES})
+set (Boost_USE_STATIC_LIBS ON)
+set (Boost_USE_STATIC_RUNTIME ON)
+find_package (Boost 1.82 REQUIRED
+  COMPONENTS
+    program_options
+    coroutine
+    system
+    log
+    log_setup
+)

CMake/deps/OpenSSL.cmake Normal file

@@ -0,0 +1,5 @@
find_package (OpenSSL 1.1.1 REQUIRED)
set_target_properties (OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2
)


@@ -1,24 +0,0 @@
From 5cd9d09d960fa489a0c4379880cd7615b1c16e55 Mon Sep 17 00:00:00 2001
From: CJ Cobb <ccobb@ripple.com>
Date: Wed, 10 Aug 2022 12:30:01 -0400
Subject: [PATCH] Remove bitset operator !=
---
src/ripple/protocol/Feature.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/ripple/protocol/Feature.h b/src/ripple/protocol/Feature.h
index b3ecb099b..6424be411 100644
--- a/src/ripple/protocol/Feature.h
+++ b/src/ripple/protocol/Feature.h
@@ -126,7 +126,6 @@ class FeatureBitset : private std::bitset<detail::numFeatures>
public:
using base::bitset;
using base::operator==;
- using base::operator!=;
using base::all;
using base::any;
--
2.32.0


@@ -1,11 +0,0 @@
include(CheckIncludeFileCXX)
check_include_file_cxx("source_location" SOURCE_LOCATION_AVAILABLE)
if(SOURCE_LOCATION_AVAILABLE)
target_compile_definitions(clio PUBLIC "HAS_SOURCE_LOCATION")
endif()
check_include_file_cxx("experimental/source_location" EXPERIMENTAL_SOURCE_LOCATION_AVAILABLE)
if(EXPERIMENTAL_SOURCE_LOCATION_AVAILABLE)
target_compile_definitions(clio PUBLIC "HAS_EXPERIMENTAL_SOURCE_LOCATION")
endif()

CMake/deps/Threads.cmake Normal file

@@ -0,0 +1,2 @@
set (THREADS_PREFER_PTHREAD_FLAG ON)
find_package (Threads)


@@ -1,153 +1 @@
-find_package(ZLIB REQUIRED)
+find_package (cassandra-cpp-driver REQUIRED)
find_library(cassandra NAMES cassandra)
if(NOT cassandra)
message("System installed Cassandra cpp driver not found. Will build")
find_library(zlib NAMES zlib1g-dev zlib-devel zlib z)
if(NOT zlib)
message("zlib not found. will build")
add_library(zlib STATIC IMPORTED GLOBAL)
ExternalProject_Add(zlib_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/madler/zlib.git
GIT_TAG v1.2.12
INSTALL_COMMAND ""
BUILD_BYPRODUCTS <BINARY_DIR>/${CMAKE_STATIC_LIBRARY_PREFIX}z.a
)
ExternalProject_Get_Property (zlib_src SOURCE_DIR)
ExternalProject_Get_Property (zlib_src BINARY_DIR)
set (zlib_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${zlib_src_SOURCE_DIR}/include)
set_target_properties (zlib PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/${CMAKE_STATIC_LIBRARY_PREFIX}z.a
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include)
add_dependencies(zlib zlib_src)
file(TO_CMAKE_PATH "${zlib_src_SOURCE_DIR}" zlib_src_SOURCE_DIR)
endif()
find_library(krb5 NAMES krb5-dev libkrb5-dev)
if(NOT krb5)
message("krb5 not found. will build")
add_library(krb5 STATIC IMPORTED GLOBAL)
ExternalProject_Add(krb5_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/krb5/krb5.git
GIT_TAG krb5-1.20
UPDATE_COMMAND ""
CONFIGURE_COMMAND autoreconf src && CFLAGS=-fcommon ./src/configure --enable-static --disable-shared
BUILD_IN_SOURCE 1
BUILD_COMMAND make
INSTALL_COMMAND ""
BUILD_BYPRODUCTS <SOURCE_DIR>/lib/${CMAKE_STATIC_LIBRARY_PREFIX}krb5.a
)
message(${ep_lib_prefix}/krb5.a)
message(${CMAKE_STATIC_LIBRARY_PREFIX}krb5.a)
ExternalProject_Get_Property (krb5_src SOURCE_DIR)
ExternalProject_Get_Property (krb5_src BINARY_DIR)
set (krb5_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${krb5_src_SOURCE_DIR}/include)
set_target_properties (krb5 PROPERTIES
IMPORTED_LOCATION
${SOURCE_DIR}/lib/${CMAKE_STATIC_LIBRARY_PREFIX}krb5.a
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include)
add_dependencies(krb5 krb5_src)
file(TO_CMAKE_PATH "${krb5_src_SOURCE_DIR}" krb5_src_SOURCE_DIR)
endif()
find_library(libuv1 NAMES uv1 libuv1 liubuv1-dev libuv1:amd64)
if(NOT libuv1)
message("libuv1 not found, will build")
add_library(libuv1 STATIC IMPORTED GLOBAL)
ExternalProject_Add(libuv_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/libuv/libuv.git
GIT_TAG v1.44.1
INSTALL_COMMAND ""
BUILD_BYPRODUCTS <BINARY_DIR>/${CMAKE_STATIC_LIBRARY_PREFIX}uv_a.a
)
ExternalProject_Get_Property (libuv_src SOURCE_DIR)
ExternalProject_Get_Property (libuv_src BINARY_DIR)
set (libuv_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${libuv_src_SOURCE_DIR}/include)
set_target_properties (libuv1 PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/${CMAKE_STATIC_LIBRARY_PREFIX}uv_a.a
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include)
add_dependencies(libuv1 libuv_src)
file(TO_CMAKE_PATH "${libuv_src_SOURCE_DIR}" libuv_src_SOURCE_DIR)
endif()
add_library (cassandra STATIC IMPORTED GLOBAL)
ExternalProject_Add(cassandra_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/datastax/cpp-driver.git
GIT_TAG 2.16.2
CMAKE_ARGS
-DLIBUV_ROOT_DIR=${BINARY_DIR}
-DLIBUV_INCLUDE_DIR=${SOURCE_DIR}/include
-DCASS_BUILD_STATIC=ON
-DCASS_BUILD_SHARED=OFF
INSTALL_COMMAND ""
BUILD_BYPRODUCTS <BINARY_DIR>/${CMAKE_STATIC_LIBRARY_PREFIX}cassandra_static.a
)
ExternalProject_Get_Property (cassandra_src SOURCE_DIR)
ExternalProject_Get_Property (cassandra_src BINARY_DIR)
set (cassandra_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${cassandra_src_SOURCE_DIR}/include)
set_target_properties (cassandra PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/${CMAKE_STATIC_LIBRARY_PREFIX}cassandra_static.a
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include)
message("cass dirs")
message(${BINARY_DIR})
message(${SOURCE_DIR})
message(${BINARY_DIR}/${CMAKE_STATIC_LIBRARY_PREFIX}cassandra_static.a)
add_dependencies(cassandra cassandra_src)
if(NOT libuv1)
ExternalProject_Add_StepDependencies(cassandra_src build libuv1)
target_link_libraries(cassandra INTERFACE libuv1)
else()
target_link_libraries(cassandra INTERFACE ${libuv1})
endif()
if(NOT krb5)
ExternalProject_Add_StepDependencies(cassandra_src build krb5)
target_link_libraries(cassandra INTERFACE krb5)
else()
target_link_libraries(cassandra INTERFACE ${krb5})
endif()
if(NOT zlib)
ExternalProject_Add_StepDependencies(cassandra_src build zlib)
target_link_libraries(cassandra INTERFACE zlib)
else()
target_link_libraries(cassandra INTERFACE ${zlib})
endif()
set(OPENSSL_USE_STATIC_LIBS TRUE)
find_package(OpenSSL REQUIRED)
target_link_libraries(cassandra INTERFACE OpenSSL::SSL)
file(TO_CMAKE_PATH "${cassandra_src_SOURCE_DIR}" cassandra_src_SOURCE_DIR)
target_link_libraries(clio PUBLIC cassandra)
else()
message("Found system installed cassandra cpp driver")
message(${cassandra})
find_path(cassandra_includes NAMES cassandra.h REQUIRED)
message(${cassandra_includes})
get_filename_component(CASSANDRA_HEADER ${cassandra_includes}/cassandra.h REALPATH)
get_filename_component(CASSANDRA_HEADER_DIR ${CASSANDRA_HEADER} DIRECTORY)
target_link_libraries (clio PUBLIC ${cassandra})
target_include_directories(clio PUBLIC ${CASSANDRA_HEADER_DIR})
endif()


@@ -1,22 +1,4 @@
-FetchContent_Declare(
-  googletest
-  URL https://github.com/google/googletest/archive/609281088cfefc76f9d0ce82e1ff6c30cc3591e5.zip
-)
-FetchContent_GetProperties(googletest)
-if(NOT googletest_POPULATED)
-  FetchContent_Populate(googletest)
-  add_subdirectory(${googletest_SOURCE_DIR} ${googletest_BINARY_DIR} EXCLUDE_FROM_ALL)
-endif()
-target_link_libraries(clio_tests PUBLIC clio gmock_main)
-target_include_directories(clio_tests PRIVATE unittests)
-enable_testing()
-include(GoogleTest)
-#increase timeout for tests discovery to 10 seconds, by default it is 5s. As more unittests added, we start to hit this issue
-#https://github.com/google/googletest/issues/3475
+find_package (GTest REQUIRED)
+enable_testing ()
+include (GoogleTest)
 gtest_discover_tests(clio_tests DISCOVERY_TIMEOUT 10)


@@ -1,14 +1 @@
-FetchContent_Declare(
-  libfmt
-  URL https://github.com/fmtlib/fmt/releases/download/9.1.0/fmt-9.1.0.zip
-)
-FetchContent_GetProperties(libfmt)
-if(NOT libfmt_POPULATED)
-  FetchContent_Populate(libfmt)
-  add_subdirectory(${libfmt_SOURCE_DIR} ${libfmt_BINARY_DIR} EXCLUDE_FROM_ALL)
-endif()
-target_link_libraries(clio PUBLIC fmt)
+find_package (fmt REQUIRED)

CMake/deps/libxrpl.cmake Normal file

@@ -0,0 +1 @@
find_package (xrpl REQUIRED)


@@ -1,20 +0,0 @@
set(RIPPLED_REPO "https://github.com/ripple/rippled.git")
set(RIPPLED_BRANCH "1.9.2")
set(NIH_CACHE_ROOT "${CMAKE_CURRENT_BINARY_DIR}" CACHE INTERNAL "")
set(patch_command ! grep operator!= src/ripple/protocol/Feature.h || git apply < ${CMAKE_CURRENT_SOURCE_DIR}/CMake/deps/Remove-bitset-operator.patch)
message(STATUS "Cloning ${RIPPLED_REPO} branch ${RIPPLED_BRANCH}")
FetchContent_Declare(rippled
GIT_REPOSITORY "${RIPPLED_REPO}"
GIT_TAG "${RIPPLED_BRANCH}"
GIT_SHALLOW ON
PATCH_COMMAND "${patch_command}"
)
FetchContent_GetProperties(rippled)
if(NOT rippled_POPULATED)
FetchContent_Populate(rippled)
add_subdirectory(${rippled_SOURCE_DIR} ${rippled_BINARY_DIR} EXCLUDE_FROM_ALL)
endif()
target_link_libraries(clio PUBLIC xrpl_core grpc_pbufs)
target_include_directories(clio PUBLIC ${rippled_SOURCE_DIR}/src ) # TODO: Seems like this shouldn't be needed?


@@ -1,16 +1,14 @@
-set(CLIO_INSTALL_DIR "/opt/clio")
-set(CMAKE_INSTALL_PREFIX ${CLIO_INSTALL_DIR})
-install(TARGETS clio_server DESTINATION bin)
-# install(TARGETS clio_tests DESTINATION bin)
-#install(FILES example-config.json DESTINATION etc RENAME config.json)
-file(READ example-config.json config)
-string(REGEX REPLACE "./clio_log" "/var/log/clio/" config "${config}")
-file(WRITE ${CMAKE_BINARY_DIR}/install-config.json "${config}")
-install(FILES ${CMAKE_BINARY_DIR}/install-config.json DESTINATION etc RENAME config.json)
-configure_file("${CMAKE_SOURCE_DIR}/CMake/install/clio.service.in" "${CMAKE_BINARY_DIR}/clio.service")
-install(FILES "${CMAKE_BINARY_DIR}/clio.service" DESTINATION /lib/systemd/system)
+set (CLIO_INSTALL_DIR "/opt/clio")
+set (CMAKE_INSTALL_PREFIX ${CLIO_INSTALL_DIR})
+install (TARGETS clio_server DESTINATION bin)
+# NOTE: Do we want to install the tests?
+file (READ example-config.json config)
+string (REGEX REPLACE "./clio_log" "/var/log/clio/" config "${config}")
+file (WRITE ${CMAKE_BINARY_DIR}/install-config.json "${config}")
+install (FILES ${CMAKE_BINARY_DIR}/install-config.json DESTINATION etc RENAME config.json)
+configure_file ("${CMAKE_SOURCE_DIR}/CMake/install/clio.service.in" "${CMAKE_BINARY_DIR}/clio.service")
+install (FILES "${CMAKE_BINARY_DIR}/clio.service" DESTINATION /lib/systemd/system)


@@ -1,6 +0,0 @@
target_compile_options(clio
PUBLIC -Wall
-Werror
-Wno-narrowing
-Wno-deprecated-declarations
-Wno-dangling-else)


@@ -1,65 +1,103 @@
cmake_minimum_required (VERSION 3.16.3)
project (clio)

# ========================================================================== #
# Options                                                                    #
# ========================================================================== #
option (verbose "Verbose build" FALSE)
option (tests "Build tests" FALSE)
option (docs "Generate doxygen docs" FALSE)
option (coverage "Build test coverage report" FALSE)
option (packaging "Create distribution packages" FALSE)
option (lint "Run clang-tidy checks during compilation" FALSE)
# ========================================================================== #
set (san "" CACHE STRING "Add sanitizer instrumentation")
set (CMAKE_EXPORT_COMPILE_COMMANDS TRUE)
set_property (CACHE san PROPERTY STRINGS ";undefined;memory;address;thread")
# ========================================================================== #

# Include required modules
include (CMake/Ccache.cmake)
include (CheckCXXCompilerFlag)
include (CMake/ClangTidy.cmake)

if (verbose)
    set (CMAKE_VERBOSE_MAKEFILE TRUE)
endif ()

if (packaging)
    add_definitions (-DPKG=1)
endif ()

add_library (clio)

# Clio tweaks and checks
include (CMake/CheckCompiler.cmake)
include (CMake/Settings.cmake)
include (CMake/ClioVersion.cmake)
include (CMake/SourceLocation.cmake)

# Clio deps
include (CMake/deps/libxrpl.cmake)
include (CMake/deps/Boost.cmake)
include (CMake/deps/OpenSSL.cmake)
include (CMake/deps/Threads.cmake)
include (CMake/deps/libfmt.cmake)
include (CMake/deps/cassandra.cmake)

# TODO: Include directory will be wrong when installed.
target_include_directories (clio PUBLIC src)
target_compile_features (clio PUBLIC cxx_std_20)

target_link_libraries (clio
    PUBLIC Boost::boost
    PUBLIC Boost::coroutine
    PUBLIC Boost::program_options
    PUBLIC Boost::system
    PUBLIC Boost::log
    PUBLIC Boost::log_setup
    PUBLIC cassandra-cpp-driver::cassandra-cpp-driver
    PUBLIC fmt::fmt
    PUBLIC OpenSSL::Crypto
    PUBLIC OpenSSL::SSL
    PUBLIC xrpl::libxrpl
    INTERFACE Threads::Threads
)

if (is_gcc)
    # FIXME: needed on gcc for now
    target_compile_definitions (clio PUBLIC BOOST_ASIO_DISABLE_CONCEPTS)
endif ()

target_sources (clio PRIVATE
    ## Main
    src/main/impl/Build.cpp
    ## Backend
    src/data/BackendCounters.cpp
    src/data/BackendInterface.cpp
    src/data/LedgerCache.cpp
    src/data/cassandra/impl/Future.cpp
    src/data/cassandra/impl/Cluster.cpp
    src/data/cassandra/impl/Batch.cpp
    src/data/cassandra/impl/Result.cpp
    src/data/cassandra/impl/Tuple.cpp
    src/data/cassandra/impl/SslContext.cpp
    src/data/cassandra/Handle.cpp
    src/data/cassandra/SettingsProvider.cpp
    ## ETL
    src/etl/Source.cpp
    src/etl/ProbingSource.cpp
    src/etl/NFTHelpers.cpp
    src/etl/ETLService.cpp
    src/etl/ETLState.cpp
    src/etl/LoadBalancer.cpp
    src/etl/impl/ForwardCache.cpp
    ## Feed
    src/feed/SubscriptionManager.cpp
    ## Web
    src/web/impl/AdminVerificationStrategy.cpp
    src/web/IntervalSweepHandler.cpp
    ## RPC
    src/rpc/Errors.cpp
    src/rpc/Factories.cpp
@@ -68,9 +106,10 @@ target_sources(clio PRIVATE
    src/rpc/WorkQueue.cpp
    src/rpc/common/Specs.cpp
    src/rpc/common/Validators.cpp
    src/rpc/common/MetaProcessors.cpp
    src/rpc/common/impl/APIVersionParser.cpp
    src/rpc/common/impl/HandlerProvider.cpp
    ## RPC handlers
    src/rpc/handlers/AccountChannels.cpp
    src/rpc/handlers/AccountCurrencies.cpp
    src/rpc/handlers/AccountInfo.cpp
@@ -81,11 +120,13 @@ target_sources(clio PRIVATE
    src/rpc/handlers/AccountTx.cpp
    src/rpc/handlers/BookChanges.cpp
    src/rpc/handlers/BookOffers.cpp
    src/rpc/handlers/DepositAuthorized.cpp
    src/rpc/handlers/GatewayBalances.cpp
    src/rpc/handlers/Ledger.cpp
    src/rpc/handlers/LedgerData.cpp
    src/rpc/handlers/LedgerEntry.cpp
    src/rpc/handlers/LedgerRange.cpp
    src/rpc/handlers/NFTsByIssuer.cpp
    src/rpc/handlers/NFTBuyOffers.cpp
    src/rpc/handlers/NFTHistory.cpp
    src/rpc/handlers/NFTInfo.cpp
@@ -94,90 +135,157 @@ target_sources(clio PRIVATE
    src/rpc/handlers/NoRippleCheck.cpp
    src/rpc/handlers/Random.cpp
    src/rpc/handlers/TransactionEntry.cpp
    src/rpc/handlers/Tx.cpp
    ## Util
    src/util/config/Config.cpp
    src/util/log/Logger.cpp
    src/util/prometheus/Http.cpp
    src/util/prometheus/Label.cpp
    src/util/prometheus/Metrics.cpp
    src/util/prometheus/Prometheus.cpp
    src/util/Random.cpp
    src/util/Taggable.cpp)

# Clio server
add_executable (clio_server src/main/Main.cpp)
target_link_libraries (clio_server PRIVATE clio)
target_link_options (clio_server
    PRIVATE
        $<$<AND:$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${san}>>>:-static-libstdc++ -static-libgcc>
)

# Unittesting
if (tests)
    set (TEST_TARGET clio_tests)
    add_executable (${TEST_TARGET}
        # Common
        unittests/Main.cpp
        unittests/Playground.cpp
        unittests/LoggerTests.cpp
        unittests/ConfigTests.cpp
        unittests/ProfilerTests.cpp
        unittests/JsonUtilTests.cpp
        unittests/DOSGuardTests.cpp
        unittests/SubscriptionTests.cpp
        unittests/SubscriptionManagerTests.cpp
        unittests/util/TestObject.cpp
        unittests/util/StringUtils.cpp
        unittests/util/prometheus/CounterTests.cpp
        unittests/util/prometheus/GaugeTests.cpp
        unittests/util/prometheus/HttpTests.cpp
        unittests/util/prometheus/LabelTests.cpp
        unittests/util/prometheus/MetricsTests.cpp
        # ETL
        unittests/etl/ExtractionDataPipeTests.cpp
        unittests/etl/ExtractorTests.cpp
        unittests/etl/TransformerTests.cpp
        unittests/etl/CacheLoaderTests.cpp
        unittests/etl/AmendmentBlockHandlerTests.cpp
        unittests/etl/LedgerPublisherTests.cpp
        unittests/etl/ETLStateTests.cpp
        # RPC
        unittests/rpc/ErrorTests.cpp
        unittests/rpc/BaseTests.cpp
        unittests/rpc/RPCHelpersTests.cpp
        unittests/rpc/CountersTests.cpp
        unittests/rpc/APIVersionTests.cpp
        unittests/rpc/ForwardingProxyTests.cpp
        unittests/rpc/WorkQueueTests.cpp
        unittests/rpc/AmendmentsTests.cpp
        unittests/rpc/JsonBoolTests.cpp
        ## RPC handlers
        unittests/rpc/handlers/DefaultProcessorTests.cpp
        unittests/rpc/handlers/TestHandlerTests.cpp
        unittests/rpc/handlers/AccountCurrenciesTests.cpp
        unittests/rpc/handlers/AccountLinesTests.cpp
        unittests/rpc/handlers/AccountTxTests.cpp
        unittests/rpc/handlers/AccountOffersTests.cpp
        unittests/rpc/handlers/AccountInfoTests.cpp
        unittests/rpc/handlers/AccountChannelsTests.cpp
        unittests/rpc/handlers/AccountNFTsTests.cpp
        unittests/rpc/handlers/BookOffersTests.cpp
        unittests/rpc/handlers/DepositAuthorizedTests.cpp
        unittests/rpc/handlers/GatewayBalancesTests.cpp
        unittests/rpc/handlers/TxTests.cpp
        unittests/rpc/handlers/TransactionEntryTests.cpp
        unittests/rpc/handlers/LedgerEntryTests.cpp
        unittests/rpc/handlers/LedgerRangeTests.cpp
        unittests/rpc/handlers/NoRippleCheckTests.cpp
        unittests/rpc/handlers/ServerInfoTests.cpp
        unittests/rpc/handlers/PingTests.cpp
        unittests/rpc/handlers/RandomTests.cpp
        unittests/rpc/handlers/NFTInfoTests.cpp
        unittests/rpc/handlers/NFTBuyOffersTests.cpp
        unittests/rpc/handlers/NFTsByIssuerTest.cpp
        unittests/rpc/handlers/NFTSellOffersTests.cpp
        unittests/rpc/handlers/NFTHistoryTests.cpp
        unittests/rpc/handlers/SubscribeTests.cpp
        unittests/rpc/handlers/UnsubscribeTests.cpp
        unittests/rpc/handlers/LedgerDataTests.cpp
        unittests/rpc/handlers/AccountObjectsTests.cpp
        unittests/rpc/handlers/BookChangesTests.cpp
        unittests/rpc/handlers/LedgerTests.cpp
        unittests/rpc/handlers/VersionHandlerTests.cpp
        # Backend
        unittests/data/BackendFactoryTests.cpp
        unittests/data/BackendCountersTests.cpp
        unittests/data/cassandra/BaseTests.cpp
        unittests/data/cassandra/BackendTests.cpp
        unittests/data/cassandra/RetryPolicyTests.cpp
        unittests/data/cassandra/SettingsProviderTests.cpp
        unittests/data/cassandra/ExecutionStrategyTests.cpp
        unittests/data/cassandra/AsyncExecutorTests.cpp
        # Webserver
        unittests/web/AdminVerificationTests.cpp
        unittests/web/ServerTests.cpp
        unittests/web/RPCServerHandlerTests.cpp
        unittests/web/WhitelistHandlerTests.cpp
        unittests/web/SweepHandlerTests.cpp)
    include (CMake/deps/gtest.cmake)

    # See https://github.com/google/googletest/issues/3475
    gtest_discover_tests (clio_tests DISCOVERY_TIMEOUT 10)

    # Fix for dwarf5 bug on ci
    target_compile_options (clio PUBLIC -gdwarf-4)

    target_compile_definitions (${TEST_TARGET} PUBLIC UNITTEST_BUILD)
    target_include_directories (${TEST_TARGET} PRIVATE unittests)
    target_link_libraries (${TEST_TARGET} PUBLIC clio gtest::gtest)

    # Generate `clio_tests-ccov` if coverage is enabled
    # Note: use `make clio_tests-ccov` to generate report
    if (coverage)
        target_compile_definitions (${TEST_TARGET} PRIVATE COVERAGE_ENABLED)
        include (CMake/Coverage.cmake)
        add_coverage (${TEST_TARGET})
    endif ()
endif ()

# Enable selected sanitizer if enabled via `san`
if (san)
    target_compile_options (clio
        PUBLIC
            # Sanitizers recommend minimum of -O1 for reasonable performance
            $<$<CONFIG:Debug>:-O1>
            ${SAN_FLAG}
            -fno-omit-frame-pointer)
    target_compile_definitions (clio
        PUBLIC
            $<$<STREQUAL:${san},address>:SANITIZER=ASAN>
            $<$<STREQUAL:${san},thread>:SANITIZER=TSAN>
            $<$<STREQUAL:${san},memory>:SANITIZER=MSAN>
            $<$<STREQUAL:${san},undefined>:SANITIZER=UBSAN>)
    target_link_libraries (clio INTERFACE ${SAN_FLAG} ${SAN_LIB})
endif ()

# Generate `docs` target for doxygen documentation if enabled
# Note: use `make docs` to generate the documentation
if (docs)
    include (CMake/Docs.cmake)
endif ()

include (CMake/install/install.cmake)
if (packaging)
    include (CMake/packaging.cmake) # This file exists only in build runner
endif ()


@@ -91,7 +91,7 @@ The button for that is near the bottom of the PR's page on GitHub.
This is a non-exhaustive list of recommended style guidelines. These are not always strictly enforced and serve as a way to keep the codebase coherent.
## Formatting
Code must conform to `clang-format` version 16, unless the result would be unreasonably difficult to read or maintain.
To change your code to conform use `clang-format -i <your changed files>`.
## Avoid


@@ -1,3 +1,16 @@
PROJECT_NAME           = "Clio"
INPUT                  = ../src ../unittests
EXCLUDE_PATTERNS       = *Test*.cpp *Test*.h
RECURSIVE              = YES
HAVE_DOT               = YES
QUIET                  = YES
WARNINGS               = NO
WARN_NO_PARAMDOC       = NO
WARN_IF_INCOMPLETE_DOC = NO
WARN_IF_UNDOCUMENTED   = NO
GENERATE_LATEX         = NO
GENERATE_HTML          = YES
SORT_MEMBERS_CTORS_1ST = YES

README.md

@@ -1,51 +1,105 @@
# Clio
Clio is an XRP Ledger API server. Clio is optimized for RPC calls, over WebSocket or JSON-RPC.
Validated historical ledger and transaction data are stored in a more space-efficient format,
using up to 4 times less space than rippled. Clio can be configured to store data in Apache Cassandra or ScyllaDB,
allowing for scalable read throughput. Multiple Clio nodes can share access to the same dataset,
allowing for a highly available cluster of Clio nodes, without the need for redundant data storage or computation.

Clio offers the full rippled API, with the caveat that Clio by default only returns validated data.
This means that `ledger_index` defaults to `validated` instead of `current` for all requests.
Other non-validated data is also not returned, such as information about queued transactions.
For requests that require access to the p2p network, such as `fee` or `submit`, Clio automatically forwards the request to a rippled node and propagates the response back to the client.
To access non-validated data for *any* request, simply add `ledger_index: "current"` to the request, and Clio will forward the request to rippled.

Clio does not connect to the peer-to-peer network. Instead, Clio extracts data from a group of specified rippled nodes. Running Clio requires access to at least one rippled node
from which data can be extracted. The rippled node does not need to be running on the same machine as Clio.
## Help
Feel free to open an [issue](https://github.com/XRPLF/clio/issues) if you have a feature request or something doesn't work as expected.
If you have any questions about building, running, contributing to, or using clio, you can always start a new [discussion](https://github.com/XRPLF/clio/discussions).
## Requirements
1. Access to a Cassandra cluster or ScyllaDB cluster. Can be local or remote.
2. Access to one or more rippled nodes. Can be local or remote.

## Building
Clio is built with CMake and uses Conan for managing dependencies.
It is written in C++20 and therefore requires a modern compiler.

## Prerequisites
### Minimum Requirements
- [Python 3.7](https://www.python.org/downloads/)
- [Conan 1.55](https://conan.io/downloads.html)
- [CMake 3.16](https://cmake.org/download/)
- [**Optional**] [GCovr](https://gcc.gnu.org/onlinedocs/gcc/Gcov.html) (needed for code coverage generation)

| Compiler    | Version |
|-------------|---------|
| GCC         | 11      |
| Clang       | 14      |
| Apple Clang | 14.0.3  |

### Conan configuration
Clio does not require anything but default settings in your Conan profile (`~/.conan/profiles/default`). It's best to have no extra flags specified.
> Mac example:
```
[settings]
os=Macos
os_build=Macos
arch=armv8
arch_build=armv8
compiler=apple-clang
compiler.version=14
compiler.libcxx=libc++
build_type=Release
compiler.cppstd=20
```
> Linux example:
```
[settings]
os=Linux
os_build=Linux
arch=x86_64
arch_build=x86_64
compiler=gcc
compiler.version=11
compiler.libcxx=libstdc++11
build_type=Release
compiler.cppstd=20
```
### Artifactory
1. Make sure artifactory is set up with Conan
```sh
conan remote add --insert 0 conan-non-prod http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
```
Now you should be able to download the prebuilt `xrpl` package on some platforms.
2. Remove old packages you may have cached:
```sh
conan remove -f xrpl
```
## Building Clio
Navigate to Clio's root directory and perform
```sh
mkdir build && cd build
conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --parallel 8 # or without the number if you feel extra adventurous
```
If all goes well, `conan install` will find the required packages and `cmake` will do the rest. You should end up with `clio_server` and `clio_tests` in the `build` directory (the current directory).
> **Tip:** You can omit the `-o tests=True` in `conan install` command above if you don't want to build `clio_tests`.
> **Tip:** To generate a Code Coverage report, include `-o coverage=True` in the `conan install` command above, along with `-o tests=True` to enable tests. After running the `cmake` commands, execute `make clio_tests-ccov`. The coverage report will be found at `clio_tests-llvm-cov/index.html`.
## Running
@@ -96,12 +150,12 @@ The parameters `ssl_cert_file` and `ssl_key_file` can also be added to the top l
An example of how to specify `ssl_cert_file` and `ssl_key_file` in the config:
```json
"server": {
    "ip": "0.0.0.0",
    "port": 51233
},
"ssl_cert_file": "/full/path/to/cert.file",
"ssl_key_file": "/full/path/to/key.file"
```
Once your config files are ready, start rippled and Clio. It doesn't matter which you
@@ -172,6 +226,56 @@ which can cause high latencies. A possible alternative to this is to just deploy
a database in each region, and the Clio nodes in each region use their region's database.
This is effectively two systems.
Clio supports API versioning as [described here](https://xrpl.org/request-formatting.html#api-versioning).
It's possible to configure `minimum`, `maximum` and `default` version like so:
```json
"api_version": {
"min": 1,
"max": 2,
"default": 1
}
```
All of the above are optional.
Clio will fall back to hardcoded defaults when they are not specified in the config file, or when the configured
values are outside of the minimum and maximum supported versions hardcoded in `src/rpc/common/APIVersion.h`.
> **Note:** See `example-config.json` for more details.
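As an illustration of the fallback behaviour described above, here is a minimal sketch (hypothetical names, not Clio's actual implementation; the hardcoded bounds are assumed to mirror `src/rpc/common/APIVersion.h`):

```python
# Assumed hardcoded bounds, as described for src/rpc/common/APIVersion.h
API_VERSION_MIN = 1
API_VERSION_MAX = 2
API_VERSION_DEFAULT = 1

def resolve_api_version(config: dict) -> dict:
    """Resolve the effective api_version range from config, falling back to
    hardcoded defaults for missing or out-of-range values."""
    section = config.get("api_version", {})
    lo = section.get("min", API_VERSION_MIN)
    hi = section.get("max", API_VERSION_MAX)
    default = section.get("default", API_VERSION_DEFAULT)
    # Values outside the supported range are replaced by the hardcoded defaults
    if not (API_VERSION_MIN <= lo <= API_VERSION_MAX):
        lo = API_VERSION_MIN
    if not (API_VERSION_MIN <= hi <= API_VERSION_MAX) or hi < lo:
        hi = API_VERSION_MAX
    if not (lo <= default <= hi):
        default = API_VERSION_DEFAULT
    return {"min": lo, "max": hi, "default": default}

print(resolve_api_version({"api_version": {"min": 1, "max": 2, "default": 1}}))
print(resolve_api_version({"api_version": {"min": 0, "max": 99}}))  # out of range -> fallbacks
```

With an empty config the function simply returns the hardcoded defaults, matching the "all of the above are optional" behaviour.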
## Admin rights for requests
By default, Clio checks admin privileges using the IP address of the request (only `127.0.0.1` is considered to be an admin).
This is not very secure because the IP could be spoofed.
For better security, an `admin_password` can be provided in the `server` section of Clio's config:
```json
"server": {
    "admin_password": "secret"
}
```
If a password is present in the config, Clio will check the Authorization header (if any) of each request for the password.
The Authorization header should contain the type `Password` followed by the password from the config, e.g. `Password secret`.
Only an exactly matching password gains admin rights for the request or WebSocket connection.
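For illustration only (hypothetical helper names, not Clio's code), a minimal sketch of how a client would build such an Authorization header and how an exact-match check behaves:

```python
ADMIN_PASSWORD = "secret"  # the value of "admin_password" in Clio's config

def make_admin_header(password: str) -> str:
    # The Authorization header carries the type `Password` followed by the password itself
    return f"Password {password}"

def is_admin_request(authorization_header: str, config_password: str) -> bool:
    # Only an exactly matching password grants admin rights
    return authorization_header == f"Password {config_password}"

print(make_admin_header("secret"))                          # Password secret
print(is_admin_request("Password secret", ADMIN_PASSWORD))  # True
print(is_admin_request("Password wrong", ADMIN_PASSWORD))   # False
```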
## Prometheus metrics collection
Clio natively supports Prometheus metrics collection. It accepts Prometheus requests on the port configured in the `server` section of the config.
Prometheus metrics are enabled by default. To disable them, add `"prometheus_enabled": false` to the config.
It is important to know that Clio responds to Prometheus requests only if they are admin requests, so Prometheus should be configured to send the admin password in the Authorization header.
There is an example of a docker-compose file, along with Prometheus and Grafana configs, in [examples/infrastructure](examples/infrastructure).
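To sketch what that means for a scraper (standard library only; the function names, the `/metrics` path, and the sample metric line are assumptions for illustration): every scrape of Clio's metrics must carry the admin Authorization header, and the response is plain-text Prometheus exposition format:

```python
from urllib.request import Request

def build_metrics_request(host: str, port: int, admin_password: str) -> Request:
    # Clio only answers Prometheus scrapes that are admin requests,
    # so the Authorization header must be attached to every scrape.
    return Request(
        f"http://{host}:{port}/metrics",  # assumed metrics path
        headers={"Authorization": f"Password {admin_password}"},
    )

def parse_sample(line: str) -> tuple:
    # Minimal parse of one exposition line: "<name>{labels} <value>"
    name, _, value = line.rpartition(" ")
    return name, float(value)

req = build_metrics_request("127.0.0.1", 51233, "secret")
print(req.full_url, req.get_header("Authorization"))
print(parse_sample('some_counter{label="x"} 42'))
```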
## Using clang-tidy for static analysis
The minimum clang-tidy version required is 16.0.
Clang-tidy can be run by CMake while building the project.
To do so, provide the option `-o lint=True` to the `conan install` command:
```sh
conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True -o lint=True
```
By default, CMake will try to find clang-tidy automatically on your system.
To force CMake to use a specific binary, set the `CLIO_CLANG_TIDY_BIN` environment variable to the path of the clang-tidy binary, e.g.:
```sh
export CLIO_CLANG_TIDY_BIN=/opt/homebrew/opt/llvm@16/bin/clang-tidy
```
## Developing against `rippled` in standalone mode ## Developing against `rippled` in standalone mode
If you wish to develop against a `rippled` instance running in standalone


@@ -1,14 +1,22 @@
/*
* This is an example configuration file. Please do not use without modifying to suit your needs.
*/
{
    "database": {
        "type": "cassandra",
        "cassandra": {
            // This option can be used to setup a secure connect bundle connection
            "secure_connect_bundle": "[path/to/zip. ignore if using contact_points]",
            // The following options are used only if using contact_points
            "contact_points": "[ip. ignore if using secure_connect_bundle]",
            "port": "[port. ignore if using secure_connect_bundle]",
            // Authentication settings
            "username": "[username, if any]",
            "password": "[password, if any]",
            // Other common settings
            "keyspace": "clio",
            "max_write_requests_outstanding": 25000,
            "max_read_requests_outstanding": 30000,
            "threads": 8
        }
    },

conanfile.py (new file)

@@ -0,0 +1,91 @@
from conan import ConanFile
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
import re
class Clio(ConanFile):
name = 'clio'
license = 'ISC'
author = 'Alex Kremer <akremer@ripple.com>, John Freeman <jfreeman@ripple.com>'
url = 'https://github.com/xrplf/clio'
description = 'Clio RPC server'
settings = 'os', 'compiler', 'build_type', 'arch'
options = {
'fPIC': [True, False],
'verbose': [True, False],
'tests': [True, False], # build unit tests; create `clio_tests` binary
'docs': [True, False], # doxygen API docs; create custom target 'docs'
'packaging': [True, False], # create distribution packages
'coverage': [True, False], # build for test coverage report; create custom target `clio_tests-ccov`
'lint': [True, False], # run clang-tidy checks during compilation
}
requires = [
'boost/1.82.0',
'cassandra-cpp-driver/2.17.0',
'fmt/10.1.1',
'protobuf/3.21.12',
'grpc/1.50.1',
'openssl/1.1.1u',
'xrpl/2.0.0-rc1',
]
default_options = {
'fPIC': True,
'verbose': False,
'tests': False,
'packaging': False,
'coverage': False,
'lint': False,
'docs': False,
'xrpl/*:tests': False,
'cassandra-cpp-driver/*:shared': False,
'date/*:header_only': True,
'grpc/*:shared': False,
'grpc/*:secure': True,
'libpq/*:shared': False,
'lz4/*:shared': False,
'openssl/*:shared': False,
'protobuf/*:shared': False,
'protobuf/*:with_zlib': True,
'snappy/*:shared': False,
'gtest/*:no_main': True,
}
exports_sources = (
'CMakeLists.txt', 'CMake/*', 'src/*'
)
def requirements(self):
if self.options.tests:
self.requires('gtest/1.14.0')
def configure(self):
if self.settings.compiler == 'apple-clang':
self.options['boost'].visibility = 'global'
def layout(self):
cmake_layout(self)
# Fix this setting to follow the default introduced in Conan 1.48
# to align with our build instructions.
self.folders.generators = 'build/generators'
generators = 'CMakeDeps'
def generate(self):
tc = CMakeToolchain(self)
tc.variables['verbose'] = self.options.verbose
tc.variables['tests'] = self.options.tests
tc.variables['coverage'] = self.options.coverage
tc.variables['lint'] = self.options.lint
tc.variables['docs'] = self.options.docs
tc.variables['packaging'] = self.options.packaging
tc.generate()
def build(self):
cmake = CMake(self)
cmake.configure()
cmake.build()
def package(self):
cmake = CMake(self)
cmake.install()


@@ -1,3 +1,6 @@
/*
* This is an example configuration file. Please do not use without modifying to suit your needs.
*/
{
    "database": {
        "type": "cassandra",
@@ -9,9 +12,21 @@
        "table_prefix": "",
        "max_write_requests_outstanding": 25000,
        "max_read_requests_outstanding": 30000,
        "threads": 8,
        //
        // Advanced options. USE AT OWN RISK:
        // ---
        "core_connections_per_host": 1 // Defaults to 1
        //
        // Below options will use defaults from cassandra driver if left unspecified.
        // See https://docs.datastax.com/en/developer/cpp-driver/2.17/api/struct.CassCluster/ for details.
        //
        // "queue_size_io": 2
        //
        // ---
        }
    },
    "allow_no_etl": false, // Allow Clio to run without a valid ETL source; otherwise Clio will stop if the ETL check fails
    "etl_sources": [
        {
            "ip": "127.0.0.1",
@@ -20,21 +35,24 @@
        }
    ],
    "dos_guard": {
        // Comma-separated list of IPs to exclude from rate limiting
        "whitelist": [
            "127.0.0.1"
        ],
        //
        // The below values are the default values and are only specified here
        // for documentation purposes. The rate limiter currently limits
        // connections and bandwidth per IP. The rate limiter looks at the raw
        // IP of a client connection, and so requests routed through a load
        // balancer will all have the same IP and be treated as a single client.
        //
        "max_fetches": 1000000, // Max bytes per IP per sweep interval
        "max_connections": 20, // Max connections per IP
        "max_requests": 20, // Max requests per IP per sweep interval
        "sweep_interval": 1 // Time in seconds before resetting max_fetches and max_requests
    },
    "cache": {
        // Comma-separated list of peer nodes that Clio can use to download the cache from at startup
        "peers": [
            {
                "ip": "127.0.0.1",
@@ -45,11 +63,18 @@
    "server": {
        "ip": "0.0.0.0",
        "port": 51233,
        // Max number of requests to queue up before rejecting further requests.
        // Defaults to 0, which disables the limit.
        "max_queue_size": 500,
        // If the request contains an Authorization header, Clio will check whether it matches the prefix 'Password ' + the sha256 hash of this value.
        // If it matches, the request is considered an admin request.
        "admin_password": "xrp",
        // If local_admin is true, Clio will consider requests coming from 127.0.0.1 as admin requests.
        // It's true by default unless admin_password is set; 'local_admin': true and 'admin_password' cannot be set at the same time.
        "local_admin": false
    },
    // Overrides the log level on a per logging channel basis.
    // Defaults to the global "log_level" for each unspecified channel.
    "log_channels": [
        {
            "channel": "Backend",
@@ -76,18 +101,26 @@
            "log_level": "trace"
        }
    ],
    "prometheus_enabled": true,
    "log_level": "info",
    // Log format (this is the default format)
    "log_format": "%TimeStamp% (%SourceLocation%) [%ThreadID%] %Channel%:%Severity% %Message%",
    "log_to_console": true,
    // Clio logs to file in the specified directory only if "log_directory" is set
    // "log_directory": "./clio_log",
    "log_rotation_size": 2048,
    "log_directory_max_size": 51200,
    "log_rotation_hour_interval": 12,
    "log_tag_style": "uint",
    "extractor_threads": 8,
    "read_only": false,
    // "start_sequence": [integer] the ledger index to start from,
    // "finish_sequence": [integer] the ledger index to finish at,
    // "ssl_cert_file" : "/full/path/to/cert.file",
    // "ssl_key_file" : "/full/path/to/key.file"
    "api_version": {
        "min": 1, // Minimum API version supported (could be 1 or 2)
        "max": 2, // Maximum API version supported (could be 1 or 2, but >= min)
        "default": 1 // Clio behaves the same as rippled by default
    }
}


@@ -0,0 +1,25 @@
# Example of clio monitoring infrastructure
This directory contains an example of Docker-based infrastructure to collect and visualise metrics from Clio.
The structure of the directory:
- `compose.yaml`
Docker Compose file with Prometheus and Grafana set up.
- `prometheus.yaml`
Defines metrics collection from Clio and from Prometheus itself.
Demonstrates how to set up a Clio target and Clio's admin authorisation in Prometheus.
- `grafana/clio_dashboard.json`
JSON file containing a preconfigured dashboard in Grafana format.
- `grafana/dashboard_local.yaml`
Grafana configuration file defining the directory to search for dashboard JSON files.
- `grafana/datasources.yaml`
Grafana configuration file defining Prometheus as a data source for Grafana.
## How to try
1. Make sure you have `docker` and `docker-compose` installed.
2. Run `docker-compose up -d` from this directory. It will start Docker containers with Prometheus and Grafana.
3. Open [http://localhost:3000/dashboards](http://localhost:3000/dashboards). The Grafana login is `admin` and the password is `grafana`.
There will be a preconfigured Clio dashboard.
If Clio is not running yet, launch it to see metrics. Some metrics may appear only after requests are made to Clio.


@@ -0,0 +1,20 @@
services:
prometheus:
image: prom/prometheus
ports:
- 9090:9090
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml
command:
- '--config.file=/etc/prometheus/prometheus.yml'
grafana:
image: grafana/grafana
ports:
- 3000:3000
environment:
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_ADMIN_PASSWORD=grafana
volumes:
- ./grafana/datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
- ./grafana/dashboard_local.yaml:/etc/grafana/provisioning/dashboards/local.yaml
- ./grafana/clio_dashboard.json:/var/lib/grafana/dashboards/clio_dashboard.json

File diff suppressed because it is too large


@@ -0,0 +1,23 @@
apiVersion: 1
providers:
- name: 'Clio dashboard'
# <int> Org id. Default to 1
orgId: 1
# <string> name of the dashboard folder.
folder: ''
# <string> folder UID. will be automatically generated if not specified
folderUid: ''
# <string> provider type. Default to 'file'
type: file
# <bool> disable dashboard deletion
disableDeletion: false
# <int> how often Grafana will scan for changed dashboards
updateIntervalSeconds: 10
# <bool> allow updating provisioned dashboards from the UI
allowUiUpdates: false
options:
# <string, required> path to dashboard files on disk. Required when using the 'file' type
path: /var/lib/grafana/dashboards
# <bool> use folder names from filesystem to create folders in Grafana
foldersFromFilesStructure: true


@@ -0,0 +1,8 @@
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
url: http://prometheus:9090
isDefault: true
access: proxy


@@ -0,0 +1,19 @@
scrape_configs:
- job_name: clio
scrape_interval: 5s
scrape_timeout: 5s
authorization:
type: Password
# sha256 hash of the password `xrp`
# run `echo -n 'your_password' | shasum -a 256` to compute the hash
credentials: 0e1dcf1ff020cceabf8f4a60a32e814b5b46ee0bb8cd4af5c814e4071bd86a18
static_configs:
- targets:
- host.docker.internal:51233
- job_name: prometheus
honor_timestamps: true
scrape_interval: 15s
scrape_timeout: 10s
static_configs:
- targets:
- localhost:9090


@@ -1,583 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <ripple/ledger/ReadView.h>
#include <backend/DBHelpers.h>
#include <backend/LedgerCache.h>
#include <backend/Types.h>
#include <config/Config.h>
#include <log/Logger.h>
#include <boost/asio/spawn.hpp>
#include <boost/json.hpp>
#include <thread>
#include <type_traits>
namespace Backend {
/**
* @brief Throws an error when database read time limit is exceeded.
*
 * This class throws an error when the database read time limit is exceeded;
 * it is paired with a separate helper that retries the request.
*/
class DatabaseTimeout : public std::exception
{
public:
const char*
what() const throw() override
{
return "Database read timed out. Please retry the request";
}
};
/**
* @brief Separate class that reattempts connection after time limit.
*
 * @tparam F Type of the callable to retry.
 * @param func The callable to invoke; retried whenever it throws DatabaseTimeout.
 * @param waitMs Wait time between retries in milliseconds (500 by default).
* @return auto
*/
template <class F>
auto
retryOnTimeout(F func, size_t waitMs = 500)
{
static clio::Logger log{"Backend"};
while (true)
{
try
{
return func();
}
catch (DatabaseTimeout& t)
{
log.error() << "Database request timed out. Sleeping and retrying ... ";
std::this_thread::sleep_for(std::chrono::milliseconds(waitMs));
}
}
}
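The retry loop above can be exercised in isolation. Below is a minimal standalone sketch (not the clio implementation itself) that substitutes `std::cerr` for `clio::Logger` but keeps the same shape: keep calling the function until it stops throwing `DatabaseTimeout`.

```cpp
#include <chrono>
#include <cstddef>
#include <exception>
#include <iostream>
#include <thread>

// Stand-in for the real clio DatabaseTimeout exception.
struct DatabaseTimeout : std::exception {
    char const*
    what() const noexcept override
    {
        return "Database read timed out. Please retry the request";
    }
};

// Keep invoking func until it succeeds; sleep waitMs between attempts.
template <class F>
auto
retryOnTimeout(F func, std::size_t waitMs = 500)
{
    while (true) {
        try {
            return func();  // success: hand the result straight back
        } catch (DatabaseTimeout const&) {
            std::cerr << "Database request timed out. Sleeping and retrying...\n";
            std::this_thread::sleep_for(std::chrono::milliseconds(waitMs));
        }
    }
}
```

A callable that throws `DatabaseTimeout` twice and then returns a value is invoked three times; the timeouts never reach the caller.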
/**
 * @brief Runs a function on a coroutine and blocks until it completes.
 *
 * Spawns the given callable on a fresh io_context and runs that context
 * until the callable finishes, returning its result (if any) to the caller.
 *
 * @tparam F Type of the callable; it takes a boost::asio::yield_context.
 * @param f R-value instance of the callable.
 * @return The result of f, or nothing if f returns void
*/
template <class F>
auto
synchronous(F&& f)
{
    // A dedicated io_context and strand serialize execution of the handler.
    boost::asio::io_context ctx;
    boost::asio::io_context::strand strand(ctx);
    std::optional<boost::asio::io_context::work> work;
    // Keep ctx.run() from returning until the coroutine resets `work`.
    work.emplace(ctx);
    // R is the callable's return type; the void case is handled separately
    // below because a variable of type void cannot hold the result.
    using R = typename boost::result_of<F(boost::asio::yield_context&)>::type;
if constexpr (!std::is_same<R, void>::value)
{
        // Non-void case: spawn the coroutine, capture its result in res,
        // and return it after ctx.run() completes.
R res;
boost::asio::spawn(strand, [&f, &work, &res](boost::asio::yield_context yield) {
res = f(yield);
work.reset();
});
ctx.run();
return res;
}
else
{
        /*! @brief Void case: run the coroutine to completion with no result. */
boost::asio::spawn(strand, [&f, &work](boost::asio::yield_context yield) {
f(yield);
work.reset();
});
ctx.run();
}
}
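The real `synchronous()` relies on boost::asio coroutines. As a rough standard-library analogue, the same shape — block until the asynchronous work finishes, branching on whether the callable returns void — can be sketched with `std::packaged_task` (`runSynchronously` is a hypothetical name for illustration, not clio API):

```cpp
#include <future>
#include <thread>
#include <type_traits>
#include <utility>

// Run f on another thread and block until it completes, propagating the
// result (or any exception) to the caller, analogous to synchronous().
template <class F>
auto
runSynchronously(F&& f)
{
    using R = std::invoke_result_t<F>;
    std::packaged_task<R()> task(std::forward<F>(f));
    auto fut = task.get_future();
    std::thread worker(std::move(task));
    worker.join();
    if constexpr (!std::is_same_v<R, void>)
        return fut.get();  // non-void: hand the result back
    else
        fut.get();         // void: just surface a possible exception
}
```

As in the original, the `if constexpr` branch exists because a variable of type void cannot hold a result.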
/**
* @brief Reestablishes synchronous connection on timeout.
*
 * @tparam F Represents a class of handlers for Cassandra database.
* @param f R-value instance of Cassandra database handler class.
* @return auto
*/
template <class F>
auto
synchronousAndRetryOnTimeout(F&& f)
{
return retryOnTimeout([&]() { return synchronous(f); });
}
/*! @brief Handles ledger and transaction backend data. */
class BackendInterface
{
/**
* @brief Shared mutexes and a cache for the interface.
*
 * rngMtx_ is a shared mutex protecting the ledger range. Shared mutexes
 * allow concurrent shared (read) access, while exclusive (write) access
 * blocks all other threads.
*/
protected:
mutable std::shared_mutex rngMtx_;
std::optional<LedgerRange> range;
LedgerCache cache_;
/**
* @brief Public read methods
*
 * All of these read methods can throw DatabaseTimeout. When writing
* code in an RPC handler, this exception does not need to be caught:
* when an RPC results in a timeout, an error is returned to the client.
*/
public:
BackendInterface() = default;
virtual ~BackendInterface() = default;
/**
* @brief Cache that holds states of the ledger
* @return Immutable cache
*/
LedgerCache const&
cache() const
{
return cache_;
}
/**
* @brief Cache that holds states of the ledger
* @return Mutable cache
*/
LedgerCache&
cache()
{
return cache_;
}
/*! @brief Fetches a specific ledger by sequence number. */
virtual std::optional<ripple::LedgerInfo>
fetchLedgerBySequence(std::uint32_t const sequence, boost::asio::yield_context& yield) const = 0;
/*! @brief Fetches a specific ledger by hash. */
virtual std::optional<ripple::LedgerInfo>
fetchLedgerByHash(ripple::uint256 const& hash, boost::asio::yield_context& yield) const = 0;
/*! @brief Fetches the latest ledger sequence. */
virtual std::optional<std::uint32_t>
fetchLatestLedgerSequence(boost::asio::yield_context& yield) const = 0;
/*! @brief Fetches the current ledger range under a shared lock. */
std::optional<LedgerRange>
fetchLedgerRange() const
{
std::shared_lock lck(rngMtx_);
return range;
}
/**
* @brief Updates the range of sequences to be tracked.
*
* Function that continues updating the range sliding window or creates
* a new sliding window once the maxSequence limit has been reached.
*
* @param newMax Unsigned 32-bit integer representing new max of range.
*/
void
updateRange(uint32_t newMax)
{
std::scoped_lock lck(rngMtx_);
assert(!range || newMax >= range->maxSequence);
if (!range)
range = {newMax, newMax};
else
range->maxSequence = newMax;
}
/**
* @brief Returns the fees for specific transactions.
*
 * @param seq Unsigned 32-bit integer representing the ledger sequence.
* @param yield The currently executing coroutine.
* @return std::optional<ripple::Fees>
*/
std::optional<ripple::Fees>
fetchFees(std::uint32_t const seq, boost::asio::yield_context& yield) const;
/*! @brief TRANSACTION METHODS */
/**
* @brief Fetches a specific transaction.
*
* @param hash Unsigned 256-bit integer representing hash.
* @param yield The currently executing coroutine.
* @return std::optional<TransactionAndMetadata>
*/
virtual std::optional<TransactionAndMetadata>
fetchTransaction(ripple::uint256 const& hash, boost::asio::yield_context& yield) const = 0;
/**
* @brief Fetches multiple transactions.
*
* @param hashes Unsigned integer value representing a hash.
* @param yield The currently executing coroutine.
* @return std::vector<TransactionAndMetadata>
*/
virtual std::vector<TransactionAndMetadata>
fetchTransactions(std::vector<ripple::uint256> const& hashes, boost::asio::yield_context& yield) const = 0;
/**
* @brief Fetches all transactions for a specific account
*
 * @param account A specific XRPL account, specified by its unique
 * AccountID type.
* @param limit Paging limit for how many transactions can be returned per
* page.
* @param forward Boolean whether paging happens forwards or backwards.
* @param cursor Important metadata returned every time paging occurs.
* @param yield Currently executing coroutine.
* @return TransactionsAndCursor
*/
virtual TransactionsAndCursor
fetchAccountTransactions(
ripple::AccountID const& account,
std::uint32_t const limit,
bool forward,
std::optional<TransactionsCursor> const& cursor,
boost::asio::yield_context& yield) const = 0;
/**
* @brief Fetches all transactions from a specific ledger.
*
 * @param ledgerSequence Unsigned 32-bit ledger sequence to fetch
 * transactions from.
* @param yield Currently executing coroutine.
* @return std::vector<TransactionAndMetadata>
*/
virtual std::vector<TransactionAndMetadata>
fetchAllTransactionsInLedger(std::uint32_t const ledgerSequence, boost::asio::yield_context& yield) const = 0;
/**
* @brief Fetches all transaction hashes from a specific ledger.
*
* @param ledgerSequence Standard unsigned integer.
* @param yield Currently executing coroutine.
* @return std::vector<ripple::uint256>
*/
virtual std::vector<ripple::uint256>
fetchAllTransactionHashesInLedger(std::uint32_t const ledgerSequence, boost::asio::yield_context& yield) const = 0;
/*! @brief NFT methods */
/**
* @brief Fetches a specific NFT
*
* @param tokenID Unsigned 256-bit integer.
* @param ledgerSequence Standard unsigned integer.
* @param yield Currently executing coroutine.
* @return std::optional<NFT>
*/
virtual std::optional<NFT>
fetchNFT(ripple::uint256 const& tokenID, std::uint32_t const ledgerSequence, boost::asio::yield_context& yield)
const = 0;
/**
* @brief Fetches all transactions for a specific NFT.
*
* @param tokenID Unsigned 256-bit integer.
* @param limit Paging limit as to how many transactions return per page.
* @param forward Boolean whether paging happens forwards or backwards.
* @param cursorIn Represents transaction number and ledger sequence.
* @param yield Currently executing coroutine is passed in as input.
* @return TransactionsAndCursor
*/
virtual TransactionsAndCursor
fetchNFTTransactions(
ripple::uint256 const& tokenID,
std::uint32_t const limit,
bool const forward,
std::optional<TransactionsCursor> const& cursorIn,
boost::asio::yield_context& yield) const = 0;
/*! @brief STATE DATA METHODS */
/**
* @brief Fetches a specific ledger object: vector of unsigned chars
*
* @param key Unsigned 256-bit integer.
* @param sequence Unsigned 32-bit integer.
* @param yield Currently executing coroutine.
* @return std::optional<Blob>
*/
std::optional<Blob>
fetchLedgerObject(ripple::uint256 const& key, std::uint32_t const sequence, boost::asio::yield_context& yield)
const;
/**
* @brief Fetches all ledger objects: a vector of vectors of unsigned chars.
*
* @param keys Unsigned 256-bit integer.
* @param sequence Unsigned 32-bit integer.
* @param yield Currently executing coroutine.
* @return std::vector<Blob>
*/
std::vector<Blob>
fetchLedgerObjects(
std::vector<ripple::uint256> const& keys,
std::uint32_t const sequence,
boost::asio::yield_context& yield) const;
/*! @brief Virtual function version of fetchLedgerObject */
virtual std::optional<Blob>
doFetchLedgerObject(ripple::uint256 const& key, std::uint32_t const sequence, boost::asio::yield_context& yield)
const = 0;
/*! @brief Virtual function version of fetchLedgerObjects */
virtual std::vector<Blob>
doFetchLedgerObjects(
std::vector<ripple::uint256> const& keys,
std::uint32_t const sequence,
boost::asio::yield_context& yield) const = 0;
/**
* @brief Returns the difference between ledgers: vector of objects
*
* Objects are made of a key value, vector of unsigned chars (blob),
* and a boolean detailing whether keys and blob match.
*
* @param ledgerSequence Standard unsigned integer.
* @param yield Currently executing coroutine.
* @return std::vector<LedgerObject>
*/
virtual std::vector<LedgerObject>
fetchLedgerDiff(std::uint32_t const ledgerSequence, boost::asio::yield_context& yield) const = 0;
/**
* @brief Fetches a page of ledger objects, ordered by key/index.
*
* @param cursor Important metadata returned every time paging occurs.
* @param ledgerSequence Standard unsigned integer.
* @param limit Paging limit as to how many transactions returned per page.
* @param outOfOrder Boolean on whether ledger page is out of order.
* @param yield Currently executing coroutine.
* @return LedgerPage
*/
LedgerPage
fetchLedgerPage(
std::optional<ripple::uint256> const& cursor,
std::uint32_t const ledgerSequence,
std::uint32_t const limit,
bool outOfOrder,
boost::asio::yield_context& yield) const;
/*! @brief Fetches successor object from key/index. */
std::optional<LedgerObject>
fetchSuccessorObject(ripple::uint256 key, std::uint32_t const ledgerSequence, boost::asio::yield_context& yield)
const;
/*! @brief Fetches successor key from key/index. */
std::optional<ripple::uint256>
fetchSuccessorKey(ripple::uint256 key, std::uint32_t const ledgerSequence, boost::asio::yield_context& yield) const;
/*! @brief Virtual function version of fetchSuccessorKey. */
virtual std::optional<ripple::uint256>
doFetchSuccessorKey(ripple::uint256 key, std::uint32_t const ledgerSequence, boost::asio::yield_context& yield)
const = 0;
/**
* @brief Fetches book offers.
*
* @param book Unsigned 256-bit integer.
* @param ledgerSequence Standard unsigned integer.
 * @param limit Paging limit as to how many offers are returned per page.
* @param cursor Important metadata returned every time paging occurs.
* @param yield Currently executing coroutine.
* @return BookOffersPage
*/
BookOffersPage
fetchBookOffers(
ripple::uint256 const& book,
std::uint32_t const ledgerSequence,
std::uint32_t const limit,
boost::asio::yield_context& yield) const;
/**
 * @brief Synchronously fetches the ledger range from the database.
 *
 * The ledger range is a struct of min and max sequence numbers. This
 * overload wraps the coroutine-based hardFetchLedgerRange in a blocking
 * call via synchronous().
*
* @return std::optional<LedgerRange>
*/
std::optional<LedgerRange>
hardFetchLedgerRange() const
{
return synchronous([&](boost::asio::yield_context yield) { return hardFetchLedgerRange(yield); });
}
/*! @brief Virtual function equivalent of hardFetchLedgerRange. */
virtual std::optional<LedgerRange>
hardFetchLedgerRange(boost::asio::yield_context& yield) const = 0;
/*! @brief Fetches ledger range but doesn't throw timeout. Use with care. */
std::optional<LedgerRange>
hardFetchLedgerRangeNoThrow() const;
/*! @brief Fetches ledger range but doesn't throw timeout. Use with care. */
std::optional<LedgerRange>
hardFetchLedgerRangeNoThrow(boost::asio::yield_context& yield) const;
/**
* @brief Writes to a specific ledger.
*
* @param ledgerInfo Const on ledger information.
* @param ledgerHeader r-value string representing ledger header.
*/
virtual void
writeLedger(ripple::LedgerInfo const& ledgerInfo, std::string&& ledgerHeader) = 0;
/**
* @brief Writes a new ledger object.
*
 * The key and blob are consumed via r-value references (moved from).
*
* @param key String represented as an r-value.
* @param seq Unsigned integer representing a sequence.
* @param blob r-value vector of unsigned characters (blob).
*/
virtual void
writeLedgerObject(std::string&& key, std::uint32_t const seq, std::string&& blob);
/**
* @brief Writes a new transaction.
*
 * @param hash Transaction hash, moved from.
 * @param seq Unsigned 32-bit integer.
 * @param date Unsigned 32-bit integer.
 * @param transaction Serialized transaction, moved from.
 * @param metadata Serialized metadata, moved from.
*/
virtual void
writeTransaction(
std::string&& hash,
std::uint32_t const seq,
std::uint32_t const date,
std::string&& transaction,
std::string&& metadata) = 0;
/**
* @brief Write a new NFT.
*
* @param data Passed in as an r-value reference.
*/
virtual void
writeNFTs(std::vector<NFTsData>&& data) = 0;
/**
* @brief Write a new set of account transactions.
*
* @param data Passed in as an r-value reference.
*/
virtual void
writeAccountTransactions(std::vector<AccountTransactionsData>&& data) = 0;
/**
* @brief Write a new transaction for a specific NFT.
*
* @param data Passed in as an r-value reference.
*/
virtual void
writeNFTTransactions(std::vector<NFTTransactionsData>&& data) = 0;
/**
* @brief Write a new successor.
*
* @param key Passed in as an r-value reference.
* @param seq Unsigned 32-bit integer.
* @param successor Passed in as an r-value reference.
*/
virtual void
writeSuccessor(std::string&& key, std::uint32_t const seq, std::string&& successor) = 0;
/*! @brief Tells database we will write data for a specific ledger. */
virtual void
startWrites() const = 0;
/**
* @brief Tells database we finished writing all data for a specific ledger.
*
* TODO: change the return value to represent different results:
* Committed, write conflict, errored, successful but not committed
*
* @param ledgerSequence Const unsigned 32-bit integer on ledger sequence.
* @return true
* @return false
*/
bool
finishWrites(std::uint32_t const ledgerSequence);
virtual bool
isTooBusy() const = 0;
private:
/**
* @brief Private helper method to write ledger object
*
* @param key r-value string representing key.
* @param seq Unsigned 32-bit integer representing sequence.
* @param blob r-value vector of unsigned chars.
*/
virtual void
doWriteLedgerObject(std::string&& key, std::uint32_t const seq, std::string&& blob) = 0;
virtual bool
doFinishWrites() = 0;
};
} // namespace Backend
using BackendInterface = Backend::BackendInterface;


@@ -1,98 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <ripple/basics/base_uint.h>
#include <ripple/basics/hardened_hash.h>
#include <backend/Types.h>
#include <map>
#include <mutex>
#include <shared_mutex>
#include <utility>
#include <vector>
namespace Backend {
class LedgerCache
{
struct CacheEntry
{
uint32_t seq = 0;
Blob blob;
};
// counters for fetchLedgerObject(s) hit rate
mutable std::atomic_uint32_t objectReqCounter_ = 0;
mutable std::atomic_uint32_t objectHitCounter_ = 0;
// counters for fetchSuccessorKey hit rate
mutable std::atomic_uint32_t successorReqCounter_ = 0;
mutable std::atomic_uint32_t successorHitCounter_ = 0;
std::map<ripple::uint256, CacheEntry> map_;
mutable std::shared_mutex mtx_;
uint32_t latestSeq_ = 0;
std::atomic_bool full_ = false;
std::atomic_bool disabled_ = false;
// Temporary set to prevent the background thread from writing already deleted data. Not used when the cache is full.
std::unordered_set<ripple::uint256, ripple::hardened_hash<>> deletes_;
public:
// Update the cache with new ledger objects. Set isBackground to true when writing old data from a background thread.
void
update(std::vector<LedgerObject> const& blobs, uint32_t seq, bool isBackground = false);
std::optional<Blob>
get(ripple::uint256 const& key, uint32_t seq) const;
// always returns empty optional if isFull() is false
std::optional<LedgerObject>
getSuccessor(ripple::uint256 const& key, uint32_t seq) const;
// always returns empty optional if isFull() is false
std::optional<LedgerObject>
getPredecessor(ripple::uint256 const& key, uint32_t seq) const;
void
setDisabled();
void
setFull();
uint32_t
latestLedgerSequence() const;
// whether the cache has all data for the most recent ledger
bool
isFull() const;
size_t
size() const;
float
getObjectHitRate() const;
float
getSuccessorHitRate() const;
};
} // namespace Backend
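The locking and hit-rate scheme above — a `shared_mutex` guarding the map so many readers can proceed concurrently, with atomic counters tracking the hit rate lock-free — can be sketched with a simplified cache. `MiniLedgerCache` is illustrative only; keys are plain strings rather than `ripple::uint256` and no sequence tracking is shown.

```cpp
#include <atomic>
#include <cstdint>
#include <map>
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <string>
#include <vector>

using Blob = std::vector<std::uint8_t>;

class MiniLedgerCache {
    std::map<std::string, Blob> map_;
    mutable std::shared_mutex mtx_;
    // Counters for the get() hit rate; atomics avoid exclusive locking.
    mutable std::atomic_uint32_t reqCounter_{0};
    mutable std::atomic_uint32_t hitCounter_{0};

public:
    void
    update(std::string key, Blob blob)
    {
        std::unique_lock lck{mtx_};  // exclusive: writers block everyone
        map_[std::move(key)] = std::move(blob);
    }

    std::optional<Blob>
    get(std::string const& key) const
    {
        ++reqCounter_;
        std::shared_lock lck{mtx_};  // shared: readers proceed concurrently
        if (auto it = map_.find(key); it != map_.end()) {
            ++hitCounter_;
            return it->second;
        }
        return std::nullopt;
    }

    float
    getObjectHitRate() const
    {
        auto reqs = reqCounter_.load();
        return reqs == 0 ? 1.0f : static_cast<float>(hitCounter_.load()) / reqs;
    }
};
```

One hit and one miss yield a hit rate of 0.5, mirroring how the real cache reports `getObjectHitRate()`.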


@@ -1,79 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <backend/cassandra/Types.h>
#include <boost/asio/spawn.hpp>
#include <chrono>
#include <concepts>
#include <optional>
#include <string>
namespace Backend::Cassandra {
// clang-format off
template <typename T>
concept SomeSettingsProvider = requires(T a) {
{ a.getSettings() } -> std::same_as<Settings>;
{ a.getKeyspace() } -> std::same_as<std::string>;
{ a.getTablePrefix() } -> std::same_as<std::optional<std::string>>;
{ a.getReplicationFactor() } -> std::same_as<uint16_t>;
{ a.getTtl() } -> std::same_as<uint16_t>;
};
// clang-format on
// clang-format off
template <typename T>
concept SomeExecutionStrategy = requires(
T a,
Settings settings,
Handle handle,
Statement statement,
std::vector<Statement> statements,
PreparedStatement prepared,
boost::asio::yield_context token
) {
{ T(settings, handle) };
{ a.sync() } -> std::same_as<void>;
{ a.isTooBusy() } -> std::same_as<bool>;
{ a.writeSync(statement) } -> std::same_as<ResultOrError>;
{ a.writeSync(prepared) } -> std::same_as<ResultOrError>;
{ a.write(prepared) } -> std::same_as<void>;
{ a.write(std::move(statements)) } -> std::same_as<void>;
{ a.read(token, prepared) } -> std::same_as<ResultOrError>;
{ a.read(token, statement) } -> std::same_as<ResultOrError>;
{ a.read(token, statements) } -> std::same_as<ResultOrError>;
{ a.readEach(token, statements) } -> std::same_as<std::vector<Result>>;
};
// clang-format on
// clang-format off
template <typename T>
concept SomeRetryPolicy = requires(T a, boost::asio::io_context ioc, CassandraError err, uint32_t attempt) {
{ T(ioc) };
{ a.shouldRetry(err) } -> std::same_as<bool>;
{ a.retry([](){}) } -> std::same_as<void>;
{ a.calculateDelay(attempt) } -> std::same_as<std::chrono::milliseconds>;
};
// clang-format on
} // namespace Backend::Cassandra


@@ -0,0 +1,193 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <data/BackendCounters.h>
#include <util/prometheus/Prometheus.h>
namespace data {
using namespace util::prometheus;
BackendCounters::BackendCounters()
: tooBusyCounter_(PrometheusService::counterInt(
"backend_too_busy_total_number",
Labels(),
"The total number of times the backend was too busy to process a request"
))
, writeSyncCounter_(PrometheusService::counterInt(
"backend_operations_total_number",
Labels({Label{"operation", "write_sync"}}),
"The total number of times the backend had to write synchronously"
))
, writeSyncRetryCounter_(PrometheusService::counterInt(
"backend_operations_total_number",
Labels({Label{"operation", "write_sync_retry"}}),
"The total number of times the backend had to retry a synchronous write"
))
, asyncWriteCounters_{"write_async"}
, asyncReadCounters_{"read_async"}
{
}
BackendCounters::PtrType
BackendCounters::make()
{
struct EnableMakeShared : public BackendCounters {};
return std::make_shared<EnableMakeShared>();
}
void
BackendCounters::registerTooBusy()
{
++tooBusyCounter_.get();
}
void
BackendCounters::registerWriteSync()
{
++writeSyncCounter_.get();
}
void
BackendCounters::registerWriteSyncRetry()
{
++writeSyncRetryCounter_.get();
}
void
BackendCounters::registerWriteStarted()
{
asyncWriteCounters_.registerStarted(1u);
}
void
BackendCounters::registerWriteFinished()
{
asyncWriteCounters_.registerFinished(1u);
}
void
BackendCounters::registerWriteRetry()
{
asyncWriteCounters_.registerRetry(1u);
}
void
BackendCounters::registerReadStarted(std::uint64_t const count)
{
asyncReadCounters_.registerStarted(count);
}
void
BackendCounters::registerReadFinished(std::uint64_t const count)
{
asyncReadCounters_.registerFinished(count);
}
void
BackendCounters::registerReadRetry(std::uint64_t const count)
{
asyncReadCounters_.registerRetry(count);
}
void
BackendCounters::registerReadError(std::uint64_t const count)
{
asyncReadCounters_.registerError(count);
}
boost::json::object
BackendCounters::report() const
{
boost::json::object result;
result["too_busy"] = tooBusyCounter_.get().value();
result["write_sync"] = writeSyncCounter_.get().value();
result["write_sync_retry"] = writeSyncRetryCounter_.get().value();
for (auto const& [key, value] : asyncWriteCounters_.report())
result[key] = value;
for (auto const& [key, value] : asyncReadCounters_.report())
result[key] = value;
return result;
}
BackendCounters::AsyncOperationCounters::AsyncOperationCounters(std::string name)
: name_(std::move(name))
, pendingCounter_(PrometheusService::gaugeInt(
"backend_operations_current_number",
Labels({{"operation", name_}, {"status", "pending"}}),
"The current number of pending " + name_ + " operations"
))
, completedCounter_(PrometheusService::counterInt(
"backend_operations_total_number",
Labels({{"operation", name_}, {"status", "completed"}}),
"The total number of completed " + name_ + " operations"
))
, retryCounter_(PrometheusService::counterInt(
"backend_operations_total_number",
Labels({{"operation", name_}, {"status", "retry"}}),
"The total number of retried " + name_ + " operations"
))
, errorCounter_(PrometheusService::counterInt(
"backend_operations_total_number",
Labels({{"operation", name_}, {"status", "error"}}),
"The total number of errored " + name_ + " operations"
))
{
}
void
BackendCounters::AsyncOperationCounters::registerStarted(std::uint64_t const count)
{
pendingCounter_.get() += count;
}
void
BackendCounters::AsyncOperationCounters::registerFinished(std::uint64_t const count)
{
assert(pendingCounter_.get().value() >= static_cast<std::int64_t>(count));
pendingCounter_.get() -= count;
completedCounter_.get() += count;
}
void
BackendCounters::AsyncOperationCounters::registerRetry(std::uint64_t count)
{
retryCounter_.get() += count;
}
void
BackendCounters::AsyncOperationCounters::registerError(std::uint64_t count)
{
assert(pendingCounter_.get().value() >= static_cast<std::int64_t>(count));
pendingCounter_.get() -= count;
errorCounter_.get() += count;
}
boost::json::object
BackendCounters::AsyncOperationCounters::report() const
{
return boost::json::object{
{name_ + "_pending", pendingCounter_.get().value()},
{name_ + "_completed", completedCounter_.get().value()},
{name_ + "_retry", retryCounter_.get().value()},
{name_ + "_error", errorCounter_.get().value()}};
}
} // namespace data
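The `report()` methods above flatten every counter group into a single JSON object, prefixing each metric with the group's name (`read_async_pending`, `write_async_error`, and so on). A minimal standalone sketch of that key scheme, using `std::map` in place of `boost::json::object` so it is self-contained (the function name is illustrative, not Clio code):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Sketch of AsyncOperationCounters::report(): every metric key is the counter
// group's name plus a status suffix, so groups can be merged into one flat map.
std::map<std::string, std::int64_t>
reportSketch(
    std::string const& name,
    std::int64_t pending,
    std::int64_t completed,
    std::int64_t retry,
    std::int64_t error
)
{
    return {
        {name + "_pending", pending},
        {name + "_completed", completed},
        {name + "_retry", retry},
        {name + "_error", error},
    };
}
```

Because the keys are already fully qualified, `BackendCounters::report()` can simply copy both groups' entries next to `too_busy` and the sync-write counters without collisions.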

src/data/BackendCounters.h Normal file

@@ -0,0 +1,138 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <util/prometheus/Prometheus.h>
#include <boost/json/object.hpp>
#include <atomic>
#include <functional>
#include <memory>
#include <utility>
namespace data {
/**
* @brief A concept for a class that can be used to count backend operations.
*/
// clang-format off
template <typename T>
concept SomeBackendCounters = requires(T a) {
typename T::PtrType;
{ a.registerTooBusy() } -> std::same_as<void>;
{ a.registerWriteSync() } -> std::same_as<void>;
{ a.registerWriteSyncRetry() } -> std::same_as<void>;
{ a.registerWriteStarted() } -> std::same_as<void>;
{ a.registerWriteFinished() } -> std::same_as<void>;
{ a.registerWriteRetry() } -> std::same_as<void>;
{ a.registerReadStarted(std::uint64_t{}) } -> std::same_as<void>;
{ a.registerReadFinished(std::uint64_t{}) } -> std::same_as<void>;
{ a.registerReadRetry(std::uint64_t{}) } -> std::same_as<void>;
{ a.registerReadError(std::uint64_t{}) } -> std::same_as<void>;
{ a.report() } -> std::same_as<boost::json::object>;
};
// clang-format on
/**
* @brief Holds statistics about the backend.
*
* @note This class is thread-safe.
*/
class BackendCounters {
public:
using PtrType = std::shared_ptr<BackendCounters>;
static PtrType
make();
void
registerTooBusy();
void
registerWriteSync();
void
registerWriteSyncRetry();
void
registerWriteStarted();
void
registerWriteFinished();
void
registerWriteRetry();
void
registerReadStarted(std::uint64_t count = 1u);
void
registerReadFinished(std::uint64_t count = 1u);
void
registerReadRetry(std::uint64_t count = 1u);
void
registerReadError(std::uint64_t count = 1u);
boost::json::object
report() const;
private:
BackendCounters();
class AsyncOperationCounters {
public:
AsyncOperationCounters(std::string name);
void
registerStarted(std::uint64_t count);
void
registerFinished(std::uint64_t count);
void
registerRetry(std::uint64_t count);
void
registerError(std::uint64_t count);
boost::json::object
report() const;
private:
std::string name_;
std::reference_wrapper<util::prometheus::GaugeInt> pendingCounter_;
std::reference_wrapper<util::prometheus::CounterInt> completedCounter_;
std::reference_wrapper<util::prometheus::CounterInt> retryCounter_;
std::reference_wrapper<util::prometheus::CounterInt> errorCounter_;
};
std::reference_wrapper<util::prometheus::CounterInt> tooBusyCounter_;
std::reference_wrapper<util::prometheus::CounterInt> writeSyncCounter_;
std::reference_wrapper<util::prometheus::CounterInt> writeSyncRetryCounter_;
AsyncOperationCounters asyncWriteCounters_{"write_async"};
AsyncOperationCounters asyncReadCounters_{"read_async"};
};
} // namespace data


@@ -19,19 +19,26 @@
#pragma once

#include <data/BackendInterface.h>
#include <data/CassandraBackend.h>
#include <util/config/Config.h>
#include <util/log/Logger.h>

#include <boost/algorithm/string.hpp>

namespace data {

/**
 * @brief A factory function that creates the backend based on a config.
 *
 * @param config The clio config to use
 * @return A shared_ptr<BackendInterface> with the selected implementation
 */
inline std::shared_ptr<BackendInterface>
make_Backend(util::Config const& config)
{
    static util::Logger const log{"Backend"};
    LOG(log.info()) << "Constructing BackendInterface";

    auto const readOnly = config.valueOr("read_only", false);
@@ -39,24 +46,21 @@ make_Backend(boost::asio::io_context& ioc, clio::Config const& config)
    std::shared_ptr<BackendInterface> backend = nullptr;

    // TODO: retire `cassandra-new` by next release after 2.0
    if (boost::iequals(type, "cassandra") or boost::iequals(type, "cassandra-new")) {
        auto cfg = config.section("database." + type);
        backend = std::make_shared<data::cassandra::CassandraBackend>(data::cassandra::SettingsProvider{cfg}, readOnly);
    }

    if (!backend)
        throw std::runtime_error("Invalid database type");

    auto const rng = backend->hardFetchLedgerRangeNoThrow();
    if (rng) {
        backend->updateRange(rng->minSequence);
        backend->updateRange(rng->maxSequence);
    }

    LOG(log.info()) << "Constructed BackendInterface Successfully";
    return backend;
}
} // namespace data


@@ -17,28 +17,25 @@
 */
//==============================================================================

#include <data/BackendInterface.h>
#include <util/log/Logger.h>

#include <ripple/protocol/Indexes.h>
#include <ripple/protocol/STLedgerEntry.h>

// local to compilation unit loggers
namespace {
util::Logger gLog{"Backend"};
} // namespace

namespace data {

bool
BackendInterface::finishWrites(std::uint32_t const ledgerSequence)
{
    LOG(gLog.debug()) << "Want finish writes for " << ledgerSequence;
    auto commitRes = doFinishWrites();
    if (commitRes) {
        LOG(gLog.debug()) << "Successfully committed. Updating range now to " << ledgerSequence;
        updateRange(ledgerSequence);
    }
    return commitRes;
@@ -50,27 +47,9 @@ BackendInterface::writeLedgerObject(std::string&& key, std::uint32_t const seq,
    doWriteLedgerObject(std::move(key), seq, std::move(blob));
}

std::optional<LedgerRange>
BackendInterface::hardFetchLedgerRangeNoThrow() const
{
    return retryOnTimeout([&]() { return hardFetchLedgerRange(); });
}
@@ -79,52 +58,49 @@ std::optional<Blob>
BackendInterface::fetchLedgerObject(
    ripple::uint256 const& key,
    std::uint32_t const sequence,
    boost::asio::yield_context yield
) const
{
    auto obj = cache_.get(key, sequence);
    if (obj) {
        LOG(gLog.trace()) << "Cache hit - " << ripple::strHex(key);
        return *obj;
    }

    LOG(gLog.trace()) << "Cache miss - " << ripple::strHex(key);
    auto dbObj = doFetchLedgerObject(key, sequence, yield);
    if (!dbObj) {
        LOG(gLog.trace()) << "Missed cache and missed in db";
    } else {
        LOG(gLog.trace()) << "Missed cache but found in db";
    }
    return dbObj;
}
std::vector<Blob>
BackendInterface::fetchLedgerObjects(
    std::vector<ripple::uint256> const& keys,
    std::uint32_t const sequence,
    boost::asio::yield_context yield
) const
{
    std::vector<Blob> results;
    results.resize(keys.size());
    std::vector<ripple::uint256> misses;

    for (size_t i = 0; i < keys.size(); ++i) {
        auto obj = cache_.get(keys[i], sequence);
        if (obj) {
            results[i] = *obj;
        } else {
            misses.push_back(keys[i]);
        }
    }
    LOG(gLog.trace()) << "Cache hits = " << keys.size() - misses.size() << " - cache misses = " << misses.size();

    if (!misses.empty()) {
        auto objs = doFetchLedgerObjects(misses, sequence, yield);
        for (size_t i = 0, j = 0; i < results.size(); ++i) {
            if (results[i].empty()) {
                results[i] = objs[j];
                ++j;
            }
@@ -138,13 +114,15 @@ std::optional<ripple::uint256>
BackendInterface::fetchSuccessorKey(
    ripple::uint256 key,
    std::uint32_t const ledgerSequence,
    boost::asio::yield_context yield
) const
{
    auto succ = cache_.getSuccessor(key, ledgerSequence);
    if (succ) {
        LOG(gLog.trace()) << "Cache hit - " << ripple::strHex(key);
    } else {
        LOG(gLog.trace()) << "Cache miss - " << ripple::strHex(key);
    }
    return succ ? succ->key : doFetchSuccessorKey(key, ledgerSequence, yield);
}
@@ -152,11 +130,11 @@ std::optional<LedgerObject>
BackendInterface::fetchSuccessorObject(
    ripple::uint256 key,
    std::uint32_t const ledgerSequence,
    boost::asio::yield_context yield
) const
{
    auto succ = fetchSuccessorKey(key, ledgerSequence, yield);
    if (succ) {
        auto obj = fetchLedgerObject(*succ, ledgerSequence, yield);
        if (!obj)
            return {{*succ, {}}};
@@ -171,7 +149,8 @@ BackendInterface::fetchBookOffers(
    ripple::uint256 const& book,
    std::uint32_t const ledgerSequence,
    std::uint32_t const limit,
    boost::asio::yield_context yield
) const
{
    // TODO try to speed this up. This can take a few seconds. The goal is
    // to get it down to a few hundred milliseconds.
@@ -185,29 +164,26 @@ BackendInterface::fetchBookOffers(
    std::uint32_t numPages = 0;
    long succMillis = 0;
    long pageMillis = 0;

    while (keys.size() < limit) {
        auto mid1 = std::chrono::system_clock::now();
        auto offerDir = fetchSuccessorObject(uTipIndex, ledgerSequence, yield);
        auto mid2 = std::chrono::system_clock::now();

        numSucc++;
        succMillis += getMillis(mid2 - mid1);

        if (!offerDir || offerDir->key >= bookEnd) {
            LOG(gLog.trace()) << "offerDir.has_value() " << offerDir.has_value() << " breaking";
            break;
        }

        uTipIndex = offerDir->key;
        while (keys.size() < limit) {
            ++numPages;
            ripple::STLedgerEntry const sle{
                ripple::SerialIter{offerDir->blob.data(), offerDir->blob.size()}, offerDir->key};
            auto indexes = sle.getFieldV256(ripple::sfIndexes);
            keys.insert(keys.end(), indexes.begin(), indexes.end());

            auto next = sle.getFieldU64(ripple::sfIndexNext);
            if (next == 0u) {
                LOG(gLog.trace()) << "Next is empty. breaking";
                break;
            }

            auto nextKey = ripple::keylet::page(uTipIndex, next);
@@ -221,15 +197,14 @@ BackendInterface::fetchBookOffers(
    }

    auto mid = std::chrono::system_clock::now();
    auto objs = fetchLedgerObjects(keys, ledgerSequence, yield);
    for (size_t i = 0; i < keys.size() && i < limit; ++i) {
        LOG(gLog.trace()) << "Key = " << ripple::strHex(keys[i]) << " blob = " << ripple::strHex(objs[i])
                          << " ledgerSequence = " << ledgerSequence;
        assert(!objs[i].empty());
        page.offers.push_back({keys[i], objs[i]});
    }

    auto end = std::chrono::system_clock::now();
    LOG(gLog.debug()) << "Fetching " << std::to_string(keys.size()) << " offers took "
                      << std::to_string(getMillis(mid - begin)) << " milliseconds. Fetching next dir took "
                      << std::to_string(succMillis) << " milliseconds. Fetched next dir " << std::to_string(numSucc)
                      << " times"
@@ -242,75 +217,94 @@ BackendInterface::fetchBookOffers(
    return page;
}
std::optional<LedgerRange>
BackendInterface::hardFetchLedgerRange() const
{
return synchronous([this](auto yield) { return hardFetchLedgerRange(yield); });
}
std::optional<LedgerRange>
BackendInterface::fetchLedgerRange() const
{
std::shared_lock const lck(rngMtx_);
return range;
}
void
BackendInterface::updateRange(uint32_t newMax)
{
std::scoped_lock const lck(rngMtx_);
assert(!range || newMax >= range->maxSequence);
if (!range) {
range = {newMax, newMax};
} else {
range->maxSequence = newMax;
}
}
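`updateRange()` above maintains a simple invariant: the first sequence seen initializes both ends of the range, and later calls only ever move the maximum forward (the assert rejects regressions). A standalone sketch of that logic, with the surrounding locking dropped and all names illustrative:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Sketch of the BackendInterface range bookkeeping: min is fixed at the first
// observed sequence, max advances monotonically.
struct RangeSketch {
    struct LedgerRange {
        std::uint32_t minSequence;
        std::uint32_t maxSequence;
    };

    std::optional<LedgerRange> range;

    void updateRange(std::uint32_t newMax)
    {
        assert(!range || newMax >= range->maxSequence);
        if (!range)
            range = LedgerRange{newMax, newMax};
        else
            range->maxSequence = newMax;
    }
};
```

In the real class the same update runs under a `std::scoped_lock` on `rngMtx_`, while readers such as `fetchLedgerRange()` take a `std::shared_lock`.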
LedgerPage
BackendInterface::fetchLedgerPage(
    std::optional<ripple::uint256> const& cursor,
    std::uint32_t const ledgerSequence,
    std::uint32_t const limit,
    bool outOfOrder,
    boost::asio::yield_context yield
) const
{
    LedgerPage page;
    std::vector<ripple::uint256> keys;
    bool reachedEnd = false;

    while (keys.size() < limit && !reachedEnd) {
        ripple::uint256 const& curCursor = !keys.empty() ? keys.back() : (cursor ? *cursor : firstKey);
        std::uint32_t const seq = outOfOrder ? range->maxSequence : ledgerSequence;

        auto succ = fetchSuccessorKey(curCursor, seq, yield);
        if (!succ) {
            reachedEnd = true;
        } else {
            keys.push_back(*succ);
        }
    }

    auto objects = fetchLedgerObjects(keys, ledgerSequence, yield);
    for (size_t i = 0; i < objects.size(); ++i) {
        if (!objects[i].empty()) {
            page.objects.push_back({keys[i], std::move(objects[i])});
        } else if (!outOfOrder) {
            LOG(gLog.error()) << "Deleted or non-existent object in successor table. key = " << ripple::strHex(keys[i])
                              << " - seq = " << ledgerSequence;
            std::stringstream msg;
            for (size_t j = 0; j < objects.size(); ++j) {
                msg << " - " << ripple::strHex(keys[j]);
            }
            LOG(gLog.error()) << msg.str();
        }
    }

    if (!keys.empty() && !reachedEnd)
        page.cursor = keys.back();

    return page;
}
std::optional<ripple::Fees>
BackendInterface::fetchFees(std::uint32_t const seq, boost::asio::yield_context yield) const
{
    ripple::Fees fees;
    auto key = ripple::keylet::fees().key;
    auto bytes = fetchLedgerObject(key, seq, yield);

    if (!bytes) {
        LOG(gLog.error()) << "Could not find fees";
        return {};
    }

    ripple::SerialIter it(bytes->data(), bytes->size());
    ripple::SLE const sle{it, key};

    if (sle.getFieldIndex(ripple::sfBaseFee) != -1)
        fees.base = sle.getFieldU64(ripple::sfBaseFee);

    if (sle.getFieldIndex(ripple::sfReserveBase) != -1)
        fees.reserve = sle.getFieldU32(ripple::sfReserveBase);
@@ -320,4 +314,4 @@ BackendInterface::fetchFees(std::uint32_t const seq, boost::asio::yield_context&
    return fees;
}
} // namespace data

src/data/BackendInterface.h Normal file

@@ -0,0 +1,592 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <data/DBHelpers.h>
#include <data/LedgerCache.h>
#include <data/Types.h>
#include <util/config/Config.h>
#include <util/log/Logger.h>
#include <ripple/protocol/Fees.h>
#include <ripple/protocol/LedgerHeader.h>
#include <boost/asio/spawn.hpp>
#include <boost/json.hpp>
#include <thread>
#include <type_traits>
namespace data {
/**
* @brief Represents a database timeout error.
*/
class DatabaseTimeout : public std::exception {
public:
char const*
what() const throw() override
{
return "Database read timed out. Please retry the request";
}
};
static constexpr std::size_t DEFAULT_WAIT_BETWEEN_RETRY = 500;
/**
* @brief A helper function that catches DatabaseTimeout exceptions and retries indefinitely.
*
* @tparam FnType The type of function object to execute
* @param func The function object to execute
* @param waitMs Delay between retry attempts
* @return auto The same as the return type of func
*/
template <class FnType>
auto
retryOnTimeout(FnType func, size_t waitMs = DEFAULT_WAIT_BETWEEN_RETRY)
{
static util::Logger const log{"Backend"};
while (true) {
try {
return func();
} catch (DatabaseTimeout const&) {
LOG(log.error()) << "Database request timed out. Sleeping and retrying ... ";
std::this_thread::sleep_for(std::chrono::milliseconds(waitMs));
}
}
}
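The retry loop above can be exercised in isolation. Below is a self-contained sketch of the same pattern, with the `util::Logger` dependency removed and a stand-in `TimeoutSketch` exception instead of `DatabaseTimeout` (both names are illustrative):

```cpp
#include <chrono>
#include <cstddef>
#include <thread>

// Stand-in for DatabaseTimeout in this sketch.
struct TimeoutSketch {};

// Keep invoking func until it stops throwing TimeoutSketch, sleeping between
// attempts; returns whatever func returns.
template <class FnType>
auto
retryOnTimeoutSketch(FnType func, std::size_t waitMs = 1)
{
    while (true) {
        try {
            return func();
        } catch (TimeoutSketch const&) {
            std::this_thread::sleep_for(std::chrono::milliseconds(waitMs));
        }
    }
}
```

Note that, like the real `retryOnTimeout`, this retries forever: any non-timeout exception propagates, but a persistent timeout blocks the caller, which is why Clio pairs it with logging on each failed attempt.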
/**
* @brief Synchronously executes the given function object inside a coroutine.
*
* @tparam FnType The type of function object to execute
* @param func The function object to execute
* @return auto The same as the return type of func
*/
template <class FnType>
auto
synchronous(FnType&& func)
{
boost::asio::io_context ctx;
using R = typename boost::result_of<FnType(boost::asio::yield_context)>::type;
if constexpr (!std::is_same<R, void>::value) {
R res;
boost::asio::spawn(ctx, [_ = boost::asio::make_work_guard(ctx), &func, &res](auto yield) {
res = func(yield);
});
ctx.run();
return res;
} else {
boost::asio::spawn(ctx, [_ = boost::asio::make_work_guard(ctx), &func](auto yield) { func(yield); });
ctx.run();
}
}
/**
* @brief Synchronously execute the given function object and retry until no DatabaseTimeout is thrown.
*
* @tparam FnType The type of function object to execute
* @param func The function object to execute
* @return auto The same as the return type of func
*/
template <class FnType>
auto
synchronousAndRetryOnTimeout(FnType&& func)
{
return retryOnTimeout([&]() { return synchronous(func); });
}
/**
* @brief The interface to the database used by Clio.
*/
class BackendInterface {
protected:
mutable std::shared_mutex rngMtx_;
std::optional<LedgerRange> range;
LedgerCache cache_;
public:
BackendInterface() = default;
virtual ~BackendInterface() = default;
// TODO: Remove this hack. Cache should not be exposed through BackendInterface
/**
* @return Immutable cache
*/
LedgerCache const&
cache() const
{
return cache_;
}
/**
* @return Mutable cache
*/
LedgerCache&
cache()
{
return cache_;
}
/**
* @brief Fetches a specific ledger by sequence number.
*
* @param sequence The sequence number to fetch for
* @param yield The coroutine context
* @return The ripple::LedgerHeader if found; nullopt otherwise
*/
virtual std::optional<ripple::LedgerHeader>
fetchLedgerBySequence(std::uint32_t sequence, boost::asio::yield_context yield) const = 0;
/**
* @brief Fetches a specific ledger by hash.
*
* @param hash The hash to fetch for
* @param yield The coroutine context
* @return The ripple::LedgerHeader if found; nullopt otherwise
*/
virtual std::optional<ripple::LedgerHeader>
fetchLedgerByHash(ripple::uint256 const& hash, boost::asio::yield_context yield) const = 0;
/**
* @brief Fetches the latest ledger sequence.
*
* @param yield The coroutine context
* @return Latest sequence wrapped in an optional if found; nullopt otherwise
*/
virtual std::optional<std::uint32_t>
fetchLatestLedgerSequence(boost::asio::yield_context yield) const = 0;
/**
* @brief Fetch the current ledger range.
*
* @return The current ledger range if populated; nullopt otherwise
*/
std::optional<LedgerRange>
fetchLedgerRange() const;
/**
* @brief Updates the range of sequences that are stored in the DB.
*
* @param newMax The new maximum sequence available
*/
void
updateRange(uint32_t newMax);
/**
* @brief Fetch the fees from a specific ledger sequence.
*
* @param seq The sequence to fetch for
* @param yield The coroutine context
* @return ripple::Fees if fees are found; nullopt otherwise
*/
std::optional<ripple::Fees>
fetchFees(std::uint32_t seq, boost::asio::yield_context yield) const;
/**
* @brief Fetches a specific transaction.
*
* @param hash The hash of the transaction to fetch
* @param yield The coroutine context
* @return TransactionAndMetadata if transaction is found; nullopt otherwise
*/
virtual std::optional<TransactionAndMetadata>
fetchTransaction(ripple::uint256 const& hash, boost::asio::yield_context yield) const = 0;
/**
* @brief Fetches multiple transactions.
*
* @param hashes A vector of hashes to fetch transactions for
* @param yield The coroutine context
* @return A vector of TransactionAndMetadata matching the given hashes
*/
virtual std::vector<TransactionAndMetadata>
fetchTransactions(std::vector<ripple::uint256> const& hashes, boost::asio::yield_context yield) const = 0;
/**
* @brief Fetches all transactions for a specific account.
*
* @param account The account to fetch transactions for
* @param limit The maximum number of transactions per result page
* @param forward Whether to fetch the page forwards or backwards from the given cursor
* @param cursor The cursor to resume fetching from
* @param yield The coroutine context
* @return Results and a cursor to resume from
*/
virtual TransactionsAndCursor
fetchAccountTransactions(
ripple::AccountID const& account,
std::uint32_t limit,
bool forward,
std::optional<TransactionsCursor> const& cursor,
boost::asio::yield_context yield
) const = 0;
/**
* @brief Fetches all transactions from a specific ledger.
*
* @param ledgerSequence The ledger sequence to fetch for
* @param yield The coroutine context
* @return Results as a vector of TransactionAndMetadata
*/
virtual std::vector<TransactionAndMetadata>
fetchAllTransactionsInLedger(std::uint32_t ledgerSequence, boost::asio::yield_context yield) const = 0;
/**
* @brief Fetches all transaction hashes from a specific ledger.
*
* @param ledgerSequence The ledger sequence to fetch for
* @param yield The coroutine context
* @return Hashes as ripple::uint256 in a vector
*/
virtual std::vector<ripple::uint256>
fetchAllTransactionHashesInLedger(std::uint32_t ledgerSequence, boost::asio::yield_context yield) const = 0;
/**
* @brief Fetches a specific NFT.
*
* @param tokenID The ID of the NFT
* @param ledgerSequence The ledger sequence to fetch for
* @param yield The coroutine context
* @return NFT object on success; nullopt otherwise
*/
virtual std::optional<NFT>
fetchNFT(ripple::uint256 const& tokenID, std::uint32_t ledgerSequence, boost::asio::yield_context yield) const = 0;
/**
* @brief Fetches all transactions for a specific NFT.
*
* @param tokenID The ID of the NFT
* @param limit The maximum number of transactions per result page
* @param forward Whether to fetch the page forwards or backwards from the given cursor
* @param cursorIn The cursor to resume fetching from
* @param yield The coroutine context
* @return Results and a cursor to resume from
*/
virtual TransactionsAndCursor
fetchNFTTransactions(
ripple::uint256 const& tokenID,
std::uint32_t limit,
bool forward,
std::optional<TransactionsCursor> const& cursorIn,
boost::asio::yield_context yield
) const = 0;
/**
* @brief Fetches all NFTs issued by a given address.
*
* @param issuer AccountID of the issuer you wish to query.
* @param taxon Optional taxon of NFTs by which you wish to filter.
* @param limit Paging limit.
* @param cursorIn Optional cursor to allow us to pick up from where we
* last left off.
* @param yield Currently executing coroutine.
* @return The NFTs issued by this account (or by this issuer/taxon
* combination if taxon is passed) along with an optional cursor to resume from
virtual NFTsAndCursor
fetchNFTsByIssuer(
ripple::AccountID const& issuer,
std::optional<std::uint32_t> const& taxon,
std::uint32_t ledgerSequence,
std::uint32_t limit,
std::optional<ripple::uint256> const& cursorIn,
boost::asio::yield_context yield
) const = 0;
/**
* @brief Fetches a specific ledger object.
*
* Currently the real fetch happens in doFetchLedgerObject and fetchLedgerObject attempts to fetch from Cache first
* and only calls out to the real DB if a cache miss occurred.
*
* @param key The key of the object
* @param sequence The ledger sequence to fetch for
* @param yield The coroutine context
* @return The object as a Blob on success; nullopt otherwise
*/
std::optional<Blob>
fetchLedgerObject(ripple::uint256 const& key, std::uint32_t sequence, boost::asio::yield_context yield) const;
/**
* @brief Fetches all ledger objects by their keys.
*
* Currently the real fetch happens in doFetchLedgerObjects and fetchLedgerObjects attempts to fetch from Cache
* first and only calls out to the real DB for each of the keys that was not found in the cache.
*
* @param keys A vector with the keys of the objects to fetch
* @param sequence The ledger sequence to fetch for
* @param yield The coroutine context
* @return A vector of ledger objects as Blobs
*/
std::vector<Blob>
fetchLedgerObjects(
std::vector<ripple::uint256> const& keys,
std::uint32_t sequence,
boost::asio::yield_context yield
) const;
/**
* @brief The database-specific implementation for fetching a ledger object.
*
* @param key The key to fetch for
* @param sequence The ledger sequence to fetch for
* @param yield The coroutine context
* @return The object as a Blob on success; nullopt otherwise
*/
virtual std::optional<Blob>
doFetchLedgerObject(ripple::uint256 const& key, std::uint32_t sequence, boost::asio::yield_context yield) const = 0;
/**
* @brief The database-specific implementation for fetching ledger objects.
*
* @param keys The keys to fetch for
* @param sequence The ledger sequence to fetch for
* @param yield The coroutine context
* @return A vector of Blobs representing each fetched object
*/
virtual std::vector<Blob>
doFetchLedgerObjects(
std::vector<ripple::uint256> const& keys,
std::uint32_t sequence,
boost::asio::yield_context yield
) const = 0;
/**
* @brief Fetches all ledger objects that changed between the given ledger and its predecessor.
*
* @param ledgerSequence The ledger sequence to fetch for
* @param yield The coroutine context
* @return A vector of LedgerObject representing the diff
*/
virtual std::vector<LedgerObject>
fetchLedgerDiff(std::uint32_t ledgerSequence, boost::asio::yield_context yield) const = 0;
/**
* @brief Fetches a page of ledger objects, ordered by key/index.
*
* @param cursor The cursor to resume fetching from
* @param ledgerSequence The ledger sequence to fetch for
* @param limit The maximum number of objects per result page
* @param outOfOrder If set to true max available sequence is used instead of ledgerSequence
* @param yield The coroutine context
* @return The ledger page
*/
LedgerPage
fetchLedgerPage(
std::optional<ripple::uint256> const& cursor,
std::uint32_t ledgerSequence,
std::uint32_t limit,
bool outOfOrder,
boost::asio::yield_context yield
) const;
/**
* @brief Fetches the successor object.
*
* @param key The key to fetch for
* @param ledgerSequence The ledger sequence to fetch for
* @param yield The coroutine context
* @return The successor on success; nullopt otherwise
*/
std::optional<LedgerObject>
fetchSuccessorObject(ripple::uint256 key, std::uint32_t ledgerSequence, boost::asio::yield_context yield) const;
/**
* @brief Fetches the successor key.
*
* The real fetch happens in doFetchSuccessorKey. This function will attempt to lookup the successor in the cache
* first and only if it's not found in the cache will it fetch from the actual DB.
*
* @param key The key to fetch for
* @param ledgerSequence The ledger sequence to fetch for
* @param yield The coroutine context
* @return The successor key on success; nullopt otherwise
*/
std::optional<ripple::uint256>
fetchSuccessorKey(ripple::uint256 key, std::uint32_t ledgerSequence, boost::asio::yield_context yield) const;
/**
* @brief Database-specific implementation of fetching the successor key
*
* @param key The key to fetch for
* @param ledgerSequence The ledger sequence to fetch for
* @param yield The coroutine context
* @return The successor on success; nullopt otherwise
*/
virtual std::optional<ripple::uint256>
doFetchSuccessorKey(ripple::uint256 key, std::uint32_t ledgerSequence, boost::asio::yield_context yield) const = 0;
/**
* @brief Fetches book offers.
*
* @param book The book (an unsigned 256-bit key) to fetch offers for.
* @param ledgerSequence The ledger sequence to fetch for
* @param limit Paging limit as to how many offers are returned per page.
* @param yield The coroutine context
* @return The book offers page
*/
BookOffersPage
fetchBookOffers(
ripple::uint256 const& book,
std::uint32_t ledgerSequence,
std::uint32_t limit,
boost::asio::yield_context yield
) const;
/**
* @brief Synchronously fetches the ledger range from DB.
*
* This function just wraps hardFetchLedgerRange(boost::asio::yield_context) using synchronous(FnType&&).
*
* @return The ledger range if available; nullopt otherwise
*/
std::optional<LedgerRange>
hardFetchLedgerRange() const;
/**
* @brief Fetches the ledger range from DB.
*
* @return The ledger range if available; nullopt otherwise
*/
virtual std::optional<LedgerRange>
hardFetchLedgerRange(boost::asio::yield_context yield) const = 0;
/**
* @brief Fetches the ledger range from DB retrying until no DatabaseTimeout is thrown.
*
* @return The ledger range if available; nullopt otherwise
*/
std::optional<LedgerRange>
hardFetchLedgerRangeNoThrow() const;
/**
* @brief Writes to a specific ledger.
*
* @param ledgerHeader Ledger header.
* @param blob r-value string serialization of ledger header.
*/
virtual void
writeLedger(ripple::LedgerHeader const& ledgerHeader, std::string&& blob) = 0;
/**
* @brief Writes a new ledger object.
*
* @param key The key to write the ledger object under
* @param seq The ledger sequence to write for
* @param blob The data to write
*/
virtual void
writeLedgerObject(std::string&& key, std::uint32_t seq, std::string&& blob);
/**
* @brief Writes a new transaction.
*
* @param hash The hash of the transaction
* @param seq The ledger sequence to write for
* @param date The timestamp of the entry
* @param transaction The transaction data to write
* @param metadata The metadata to write
*/
virtual void
writeTransaction(
std::string&& hash,
std::uint32_t seq,
std::uint32_t date,
std::string&& transaction,
std::string&& metadata
) = 0;
/**
* @brief Writes NFTs to the database.
*
* @param data A vector of NFTsData objects representing the NFTs
*/
virtual void
writeNFTs(std::vector<NFTsData>&& data) = 0;
/**
* @brief Write a new set of account transactions.
*
* @param data A vector of AccountTransactionsData objects representing the account transactions
*/
virtual void
writeAccountTransactions(std::vector<AccountTransactionsData>&& data) = 0;
/**
* @brief Write NFTs transactions.
*
* @param data A vector of NFTTransactionsData objects
*/
virtual void
writeNFTTransactions(std::vector<NFTTransactionsData>&& data) = 0;
/**
* @brief Write a new successor.
*
* @param key Key of the object that the passed successor will be the successor for
* @param seq The ledger sequence to write for
* @param successor The successor data to write
*/
virtual void
writeSuccessor(std::string&& key, std::uint32_t seq, std::string&& successor) = 0;
/**
* @brief Starts a write transaction with the DB. No-op for Cassandra.
*
* Note: Can potentially be deprecated and removed.
*/
virtual void
startWrites() const = 0;
/**
* @brief Tells database we finished writing all data for a specific ledger.
*
* Uses doFinishWrites to synchronize with the pending writes.
*
* @param ledgerSequence The ledger sequence to finish writing for
* @return true on success; false otherwise
*/
bool
finishWrites(std::uint32_t ledgerSequence);
/**
* @return true if database is overwhelmed; false otherwise
*/
virtual bool
isTooBusy() const = 0;
/**
* @return json object containing backend usage statistics
*/
virtual boost::json::object
stats() const = 0;
private:
virtual void
doWriteLedgerObject(std::string&& key, std::uint32_t seq, std::string&& blob) = 0;
virtual bool
doFinishWrites() = 0;
};
} // namespace data
using BackendInterface = data::BackendInterface;


@@ -17,30 +17,29 @@
  */
 //==============================================================================
+/** @file */
 #pragma once
 #include <ripple/basics/Log.h>
 #include <ripple/basics/StringUtilities.h>
-#include <ripple/ledger/ReadView.h>
 #include <ripple/protocol/SField.h>
 #include <ripple/protocol/STAccount.h>
 #include <ripple/protocol/TxMeta.h>
 #include <boost/container/flat_set.hpp>
-#include <backend/Types.h>
+#include <data/Types.h>
 /**
- * @brief Struct used to keep track of what to write to account_transactions/account_tx tables
+ * @brief Struct used to keep track of what to write to account_transactions/account_tx tables.
  */
-struct AccountTransactionsData
-{
+struct AccountTransactionsData {
     boost::container::flat_set<ripple::AccountID> accounts;
-    std::uint32_t ledgerSequence;
-    std::uint32_t transactionIndex;
+    std::uint32_t ledgerSequence{};
+    std::uint32_t transactionIndex{};
     ripple::uint256 txHash;
-    AccountTransactionsData(ripple::TxMeta& meta, ripple::uint256 const& txHash, beast::Journal& j)
+    AccountTransactionsData(ripple::TxMeta& meta, ripple::uint256 const& txHash)
         : accounts(meta.getAffectedAccounts())
         , ledgerSequence(meta.getLgrSeq())
         , transactionIndex(meta.getIndex())
@@ -52,12 +51,11 @@ struct AccountTransactionsData
 };
 /**
- * @brief Represents a link from a tx to an NFT that was targeted/modified/created by it
+ * @brief Represents a link from a tx to an NFT that was targeted/modified/created by it.
  *
  * Gets written to nf_token_transactions table and the like.
  */
-struct NFTTransactionsData
-{
+struct NFTTransactionsData {
     ripple::uint256 tokenID;
     std::uint32_t ledgerSequence;
     std::uint32_t transactionIndex;
@@ -74,8 +72,7 @@ struct NFTTransactionsData
  *
  * Gets written to nf_tokens table and the like.
  */
-struct NFTsData
-{
+struct NFTsData {
     ripple::uint256 tokenID;
     std::uint32_t ledgerSequence;
@@ -107,7 +104,8 @@ struct NFTsData
         ripple::uint256 const& tokenID,
         ripple::AccountID const& owner,
         ripple::Blob const& uri,
-        ripple::TxMeta const& meta)
+        ripple::TxMeta const& meta
+    )
         : tokenID(tokenID), ledgerSequence(meta.getLgrSeq()), transactionIndex(meta.getIndex()), owner(owner), uri(uri)
     {
     }
@@ -133,41 +131,68 @@ struct NFTsData
         ripple::uint256 const& tokenID,
         std::uint32_t const ledgerSequence,
         ripple::AccountID const& owner,
-        ripple::Blob const& uri)
+        ripple::Blob const& uri
+    )
         : tokenID(tokenID), ledgerSequence(ledgerSequence), owner(owner), uri(uri)
     {
     }
 };
+/**
+ * @brief Check whether the supplied object is an offer.
+ *
+ * @param object The object to check
+ * @return true if the object is an offer; false otherwise
+ */
 template <class T>
 inline bool
 isOffer(T const& object)
 {
-    short offer_bytes = (object[1] << 8) | object[2];
-    return offer_bytes == 0x006f;
+    static constexpr short OFFER_OFFSET = 0x006f;
+    static constexpr short SHIFT = 8;
+    short offer_bytes = (object[1] << SHIFT) | object[2];
+    return offer_bytes == OFFER_OFFSET;
 }
+/**
+ * @brief Check whether the supplied hex represents an offer object.
+ *
+ * @param object The object to check
+ * @return true if the object is an offer; false otherwise
+ */
 template <class T>
 inline bool
 isOfferHex(T const& object)
 {
     auto blob = ripple::strUnHex(4, object.begin(), object.begin() + 4);
     if (blob)
-    {
-        short offer_bytes = ((*blob)[1] << 8) | (*blob)[2];
-        return offer_bytes == 0x006f;
-    }
+        return isOffer(*blob);
     return false;
 }
+/**
+ * @brief Check whether the supplied object is a dir node.
+ *
+ * @param object The object to check
+ * @return true if the object is a dir node; false otherwise
+ */
 template <class T>
 inline bool
 isDirNode(T const& object)
 {
-    short spaceKey = (object.data()[1] << 8) | object.data()[2];
-    return spaceKey == 0x0064;
+    static constexpr short DIR_NODE_SPACE_KEY = 0x0064;
+    short const spaceKey = (object.data()[1] << 8) | object.data()[2];
+    return spaceKey == DIR_NODE_SPACE_KEY;
 }
+/**
+ * @brief Check whether the supplied object is a book dir.
+ *
+ * @param key The key into the object
+ * @param object The object to check
+ * @return true if the object is a book dir; false otherwise
+ */
 template <class T, class R>
 inline bool
 isBookDir(T const& key, R const& object)
@@ -179,33 +204,55 @@ isBookDir(T const& key, R const& object)
     return !sle[~ripple::sfOwner].has_value();
 }
+/**
+ * @brief Get the book out of an offer object.
+ *
+ * @param offer The offer to get the book for
+ * @return Book as ripple::uint256
+ */
 template <class T>
 inline ripple::uint256
 getBook(T const& offer)
 {
     ripple::SerialIter it{offer.data(), offer.size()};
-    ripple::SLE sle{it, {}};
+    ripple::SLE const sle{it, {}};
     ripple::uint256 book = sle.getFieldH256(ripple::sfBookDirectory);
     return book;
 }
+/**
+ * @brief Get the book base.
+ *
+ * @param key The key to get the book base out of
+ * @return Book base as ripple::uint256
+ */
 template <class T>
 inline ripple::uint256
 getBookBase(T const& key)
 {
+    static constexpr size_t KEY_SIZE = 24;
     assert(key.size() == ripple::uint256::size());
     ripple::uint256 ret;
-    for (size_t i = 0; i < 24; ++i)
-    {
+    for (size_t i = 0; i < KEY_SIZE; ++i)
         ret.data()[i] = key.data()[i];
-    }
     return ret;
 }
+/**
+ * @brief Stringify a ripple::uint256.
+ *
+ * @param input The input value
+ * @return The input value as a string
+ */
 inline std::string
-uint256ToString(ripple::uint256 const& uint)
+uint256ToString(ripple::uint256 const& input)
 {
-    return {reinterpret_cast<const char*>(uint.data()), uint.size()};
+    return {reinterpret_cast<char const*>(input.data()), ripple::uint256::size()};
 }
+/** @brief The ripple epoch start timestamp. Midnight on 1st January 2000. */
 static constexpr std::uint32_t rippleEpochStart = 946684800;


@@ -17,14 +17,14 @@
  */
 //==============================================================================
-#include <backend/LedgerCache.h>
-namespace Backend {
+#include <data/LedgerCache.h>
+namespace data {
 uint32_t
 LedgerCache::latestLedgerSequence() const
 {
-    std::shared_lock lck{mtx_};
+    std::shared_lock const lck{mtx_};
     return latestSeq_;
 }
@@ -35,27 +35,21 @@ LedgerCache::update(std::vector<LedgerObject> const& objs, uint32_t seq, bool is
         return;
     {
-        std::scoped_lock lck{mtx_};
-        if (seq > latestSeq_)
-        {
+        std::scoped_lock const lck{mtx_};
+        if (seq > latestSeq_) {
             assert(seq == latestSeq_ + 1 || latestSeq_ == 0);
             latestSeq_ = seq;
         }
-        for (auto const& obj : objs)
-        {
-            if (obj.blob.size())
-            {
-                if (isBackground && deletes_.count(obj.key))
+        for (auto const& obj : objs) {
+            if (!obj.blob.empty()) {
+                if (isBackground && deletes_.contains(obj.key))
                     continue;
                 auto& e = map_[obj.key];
-                if (seq > e.seq)
-                {
+                if (seq > e.seq) {
                     e = {seq, obj.blob};
                 }
-            }
-            else
-            {
+            } else {
                 map_.erase(obj.key);
                 if (!full_ && !isBackground)
                     deletes_.insert(obj.key);
@@ -69,14 +63,14 @@ LedgerCache::getSuccessor(ripple::uint256 const& key, uint32_t seq) const
 {
     if (!full_)
         return {};
-    std::shared_lock{mtx_};
-    successorReqCounter_++;
+    std::shared_lock const lck{mtx_};
+    ++successorReqCounter_.get();
     if (seq != latestSeq_)
         return {};
     auto e = map_.upper_bound(key);
     if (e == map_.end())
         return {};
-    successorHitCounter_++;
+    ++successorHitCounter_.get();
     return {{e->first, e->second.blob}};
 }
@@ -85,7 +79,7 @@ LedgerCache::getPredecessor(ripple::uint256 const& key, uint32_t seq) const
 {
     if (!full_)
         return {};
-    std::shared_lock lck{mtx_};
+    std::shared_lock const lck{mtx_};
     if (seq != latestSeq_)
         return {};
     auto e = map_.lower_bound(key);
@@ -98,16 +92,16 @@ LedgerCache::getPredecessor(ripple::uint256 const& key, uint32_t seq) const
 std::optional<Blob>
 LedgerCache::get(ripple::uint256 const& key, uint32_t seq) const
 {
-    std::shared_lock lck{mtx_};
+    std::shared_lock const lck{mtx_};
     if (seq > latestSeq_)
         return {};
-    objectReqCounter_++;
+    ++objectReqCounter_.get();
     auto e = map_.find(key);
     if (e == map_.end())
         return {};
     if (seq < e->second.seq)
         return {};
-    objectHitCounter_++;
+    ++objectHitCounter_.get();
     return {e->second.blob};
 }
@@ -124,7 +118,7 @@ LedgerCache::setFull()
         return;
     full_ = true;
-    std::scoped_lock lck{mtx_};
+    std::scoped_lock const lck{mtx_};
     deletes_.clear();
 }
@@ -137,24 +131,24 @@ LedgerCache::isFull() const
 size_t
 LedgerCache::size() const
 {
-    std::shared_lock lck{mtx_};
+    std::shared_lock const lck{mtx_};
     return map_.size();
 }
 float
 LedgerCache::getObjectHitRate() const
 {
-    if (!objectReqCounter_)
+    if (objectReqCounter_.get().value() == 0u)
         return 1;
-    return ((float)objectHitCounter_) / objectReqCounter_;
+    return static_cast<float>(objectHitCounter_.get().value()) / objectReqCounter_.get().value();
 }
 float
 LedgerCache::getSuccessorHitRate() const
 {
-    if (!successorReqCounter_)
+    if (successorReqCounter_.get().value() == 0u)
         return 1;
-    return ((float)successorHitCounter_) / successorReqCounter_;
+    return static_cast<float>(successorHitCounter_.get().value()) / successorReqCounter_.get().value();
 }
-} // namespace Backend
+} // namespace data

src/data/LedgerCache.h Normal file

@@ -0,0 +1,167 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2022, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <ripple/basics/base_uint.h>
#include <ripple/basics/hardened_hash.h>
#include <data/Types.h>
#include <map>
#include <mutex>
#include <shared_mutex>
#include <util/prometheus/Prometheus.h>
#include <utility>
#include <vector>
namespace data {
/**
* @brief Cache for an entire ledger.
*/
class LedgerCache {
struct CacheEntry {
uint32_t seq = 0;
Blob blob;
};
// counters for fetchLedgerObject(s) hit rate
std::reference_wrapper<util::prometheus::CounterInt> objectReqCounter_{PrometheusService::counterInt(
"ledger_cache_counter_total_number",
util::prometheus::Labels({{"type", "request"}, {"fetch", "ledger_objects"}}),
"LedgerCache statistics"
)};
std::reference_wrapper<util::prometheus::CounterInt> objectHitCounter_{PrometheusService::counterInt(
"ledger_cache_counter_total_number",
util::prometheus::Labels({{"type", "cache_hit"}, {"fetch", "ledger_objects"}})
)};
// counters for fetchSuccessorKey hit rate
std::reference_wrapper<util::prometheus::CounterInt> successorReqCounter_{PrometheusService::counterInt(
"ledger_cache_counter_total_number",
util::prometheus::Labels({{"type", "request"}, {"fetch", "successor_key"}}),
"ledgerCache"
)};
std::reference_wrapper<util::prometheus::CounterInt> successorHitCounter_{PrometheusService::counterInt(
"ledger_cache_counter_total_number",
util::prometheus::Labels({{"type", "cache_hit"}, {"fetch", "successor_key"}})
)};
std::map<ripple::uint256, CacheEntry> map_;
mutable std::shared_mutex mtx_;
uint32_t latestSeq_ = 0;
std::atomic_bool full_ = false;
std::atomic_bool disabled_ = false;
// temporary set to prevent background thread from writing already deleted data. not used when cache is full
std::unordered_set<ripple::uint256, ripple::hardened_hash<>> deletes_;
public:
/**
* @brief Update the cache with new ledger objects.
*
* @param objs The ledger objects to update cache with
* @param seq The sequence to update cache for
* @param isBackground Should be set to true when writing old data from a background thread
*/
void
update(std::vector<LedgerObject> const& objs, uint32_t seq, bool isBackground = false);
/**
* @brief Fetch a cached object by its key and sequence number.
*
* @param key The key to fetch for
* @param seq The sequence to fetch for
* @return If found in cache, will return the cached Blob; otherwise nullopt is returned
*/
std::optional<Blob>
get(ripple::uint256 const& key, uint32_t seq) const;
/**
* @brief Gets a cached successor.
*
* Note: This function always returns std::nullopt when @ref isFull() returns false.
*
* @param key The key to fetch for
* @param seq The sequence to fetch for
* @return If found in cache, will return the cached successor; otherwise nullopt is returned
*/
std::optional<LedgerObject>
getSuccessor(ripple::uint256 const& key, uint32_t seq) const;
/**
* @brief Gets a cached predecessor.
*
* Note: This function always returns std::nullopt when @ref isFull() returns false.
*
* @param key The key to fetch for
* @param seq The sequence to fetch for
* @return If found in cache, will return the cached predecessor; otherwise nullopt is returned
*/
std::optional<LedgerObject>
getPredecessor(ripple::uint256 const& key, uint32_t seq) const;
/**
* @brief Disables the cache.
*/
void
setDisabled();
/**
* @brief Sets the full flag to true.
*
* This is used when the cache is loaded in its entirety at application startup. The cache can either be loaded from
* the DB, populated together with the initial ledger download (on first run), or downloaded from a peer node
* (specified in config).
*/
void
setFull();
/**
* @return The latest ledger sequence for which cache is available.
*/
uint32_t
latestLedgerSequence() const;
/**
* @return true if the cache has all data for the most recent ledger; false otherwise
*/
bool
isFull() const;
/**
* @return The total size of the cache.
*/
size_t
size() const;
/**
* @return A number representing the success rate of hitting an object in the cache versus missing it.
*/
float
getObjectHitRate() const;
/**
* @return A number representing the success rate of hitting a successor in the cache versus missing it.
*/
float
getSuccessorHitRate() const;
};
} // namespace data


@@ -1,4 +1,5 @@
-# Clio Backend
+# Backend
 ## Background
 The backend of Clio is responsible for handling the proper reading and writing of past ledger data from and to a given database. As of right now, Cassandra and ScyllaDB are the only supported databases that are production-ready. Support for database types can be easily extended by creating new implementations which implement the virtual methods of `BackendInterface.h`. Then, use the Factory Object Design Pattern to simply add logic statements to `BackendFactory.h` that return the new database interface for a specific `type` in Clio's configuration file.


@@ -21,51 +21,58 @@
 #include <ripple/basics/base_uint.h>
 #include <ripple/protocol/AccountID.h>
 #include <optional>
 #include <string>
+#include <utility>
 #include <vector>
-namespace Backend {
-// *** return types
+namespace data {
 using Blob = std::vector<unsigned char>;
-struct LedgerObject
-{
+/**
+ * @brief Represents an object in the ledger.
+ */
+struct LedgerObject {
     ripple::uint256 key;
     Blob blob;
     bool
-    operator==(const LedgerObject& other) const
+    operator==(LedgerObject const& other) const
     {
         return key == other.key && blob == other.blob;
     }
 };
-struct LedgerPage
-{
+/**
+ * @brief Represents a page of LedgerObjects.
+ */
+struct LedgerPage {
     std::vector<LedgerObject> objects;
     std::optional<ripple::uint256> cursor;
 };
-struct BookOffersPage
-{
+/**
+ * @brief Represents a page of book offer objects.
+ */
+struct BookOffersPage {
     std::vector<LedgerObject> offers;
     std::optional<ripple::uint256> cursor;
 };
-struct TransactionAndMetadata
-{
+/**
+ * @brief Represents a transaction and its metadata bundled together.
+ */
+struct TransactionAndMetadata {
     Blob transaction;
     Blob metadata;
     std::uint32_t ledgerSequence = 0;
     std::uint32_t date = 0;
     TransactionAndMetadata() = default;
-    TransactionAndMetadata(
-        Blob const& transaction,
-        Blob const& metadata,
-        std::uint32_t ledgerSequence,
-        std::uint32_t date)
-        : transaction{transaction}, metadata{metadata}, ledgerSequence{ledgerSequence}, date{date}
+    TransactionAndMetadata(Blob transaction, Blob metadata, std::uint32_t ledgerSequence, std::uint32_t date)
+        : transaction{std::move(transaction)}, metadata{std::move(metadata)}, ledgerSequence{ledgerSequence}, date{date}
     {
     }
@@ -78,17 +85,19 @@ struct TransactionAndMetadata
     }
     bool
-    operator==(const TransactionAndMetadata& other) const
+    operator==(TransactionAndMetadata const& other) const
     {
         return transaction == other.transaction && metadata == other.metadata &&
             ledgerSequence == other.ledgerSequence && date == other.date;
     }
 };
-struct TransactionsCursor
-{
-    std::uint32_t ledgerSequence;
-    std::uint32_t transactionIndex;
+/**
+ * @brief Represents a cursor into the transactions table.
+ */
+struct TransactionsCursor {
+    std::uint32_t ledgerSequence = 0;
+    std::uint32_t transactionIndex = 0;
     TransactionsCursor() = default;
     TransactionsCursor(std::uint32_t ledgerSequence, std::uint32_t transactionIndex)
@@ -101,9 +110,6 @@ struct TransactionsCursor
     {
     }
-    TransactionsCursor&
-    operator=(TransactionsCursor const&) = default;
     bool
     operator==(TransactionsCursor const& other) const = default;
@@ -114,27 +120,31 @@ struct TransactionsCursor
     }
 };
-struct TransactionsAndCursor
-{
+/**
+ * @brief Represents a bundle of transactions with metadata and a cursor to the next page.
+ */
+struct TransactionsAndCursor {
     std::vector<TransactionAndMetadata> txns;
     std::optional<TransactionsCursor> cursor;
 };
-struct NFT
-{
+/**
+ * @brief Represents a NFToken.
+ */
+struct NFT {
     ripple::uint256 tokenID;
-    std::uint32_t ledgerSequence;
+    std::uint32_t ledgerSequence{};
     ripple::AccountID owner;
     Blob uri;
-    bool isBurned;
+    bool isBurned{};
     NFT() = default;
     NFT(ripple::uint256 const& tokenID,
         std::uint32_t ledgerSequence,
         ripple::AccountID const& owner,
-        Blob const& uri,
+        Blob uri,
         bool isBurned)
-        : tokenID{tokenID}, ledgerSequence{ledgerSequence}, owner{owner}, uri{uri}, isBurned{isBurned}
+        : tokenID{tokenID}, ledgerSequence{ledgerSequence}, owner{owner}, uri{std::move(uri)}, isBurned{isBurned}
     {
     }
@@ -143,9 +153,8 @@ struct NFT
     {
     }
-    // clearly two tokens are the same if they have the same ID, but this
-    // struct stores the state of a given token at a given ledger sequence, so
-    // we also need to compare with ledgerSequence
+    // clearly two tokens are the same if they have the same ID, but this struct stores the state of a given token at a
+    // given ledger sequence, so we also need to compare with ledgerSequence.
     bool
     operator==(NFT const& other) const
     {
@@ -153,12 +162,21 @@ struct NFT
     }
 };
+struct NFTsAndCursor {
+    std::vector<NFT> nfts;
+    std::optional<ripple::uint256> cursor;
+};
-struct LedgerRange
-{
-    std::uint32_t minSequence;
-    std::uint32_t maxSequence;
-};
+/**
+ * @brief Stores a range of sequences as a min and max pair.
+ */
+struct LedgerRange {
+    std::uint32_t minSequence = 0;
+    std::uint32_t maxSequence = 0;
+};
 constexpr ripple::uint256 firstKey{"0000000000000000000000000000000000000000000000000000000000000000"};
 constexpr ripple::uint256 lastKey{"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"};
 constexpr ripple::uint256 hi192{"0000000000000000000000000000000000000000000000001111111111111111"};
-} // namespace Backend
+} // namespace data


@@ -0,0 +1,126 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once

#include <data/cassandra/Types.h>

#include <boost/asio/spawn.hpp>
#include <boost/json.hpp>

#include <chrono>
#include <concepts>
#include <optional>
#include <string>

namespace data::cassandra {

/**
 * @brief The requirements of a settings provider.
 */
template <typename T>
concept SomeSettingsProvider = requires(T a) {
    { a.getSettings() } -> std::same_as<Settings>;
    { a.getKeyspace() } -> std::same_as<std::string>;
    { a.getTablePrefix() } -> std::same_as<std::optional<std::string>>;
    { a.getReplicationFactor() } -> std::same_as<uint16_t>;
    { a.getTtl() } -> std::same_as<uint16_t>;
};

/**
 * @brief The requirements of an execution strategy.
 */
template <typename T>
concept SomeExecutionStrategy = requires(
    T a,
    Settings settings,
    Handle handle,
    Statement statement,
    std::vector<Statement> statements,
    PreparedStatement prepared,
    boost::asio::yield_context token
) {
    { T(settings, handle) };
    { a.sync() } -> std::same_as<void>;
    { a.isTooBusy() } -> std::same_as<bool>;
    { a.writeSync(statement) } -> std::same_as<ResultOrError>;
    { a.writeSync(prepared) } -> std::same_as<ResultOrError>;
    { a.write(prepared) } -> std::same_as<void>;
    { a.write(std::move(statements)) } -> std::same_as<void>;
    { a.read(token, prepared) } -> std::same_as<ResultOrError>;
    { a.read(token, statement) } -> std::same_as<ResultOrError>;
    { a.read(token, statements) } -> std::same_as<ResultOrError>;
    { a.readEach(token, statements) } -> std::same_as<std::vector<Result>>;
    { a.stats() } -> std::same_as<boost::json::object>;
};

/**
 * @brief The requirements of a retry policy.
 */
template <typename T>
concept SomeRetryPolicy = requires(T a, boost::asio::io_context ioc, CassandraError err, uint32_t attempt) {
    { T(ioc) };
    { a.shouldRetry(err) } -> std::same_as<bool>;
    { a.retry([]() {}) } -> std::same_as<void>;
    { a.calculateDelay(attempt) } -> std::same_as<std::chrono::milliseconds>;
};

} // namespace data::cassandra


@@ -22,33 +22,35 @@
 #include <cassandra.h>

 #include <string>
+#include <utility>

-namespace Backend::Cassandra {
+namespace data::cassandra {

 /**
- * @brief A simple container for both error message and error code
+ * @brief A simple container for both error message and error code.
  */
-class CassandraError
-{
+class CassandraError {
     std::string message_;
-    uint32_t code_;
+    uint32_t code_{};

 public:
     CassandraError() = default;  // default constructible required by Expected
-    CassandraError(std::string message, uint32_t code) : message_{message}, code_{code}
+    CassandraError(std::string message, uint32_t code) : message_{std::move(message)}, code_{code}
     {
     }

     template <typename T>
     friend std::string
-    operator+(T const& lhs, CassandraError const& rhs) requires std::is_convertible_v<T, std::string>
+    operator+(T const& lhs, CassandraError const& rhs)
+        requires std::is_convertible_v<T, std::string>
     {
         return lhs + rhs.message();
     }

     template <typename T>
     friend bool
-    operator==(T const& lhs, CassandraError const& rhs) requires std::is_convertible_v<T, std::string>
+    operator==(T const& lhs, CassandraError const& rhs)
+        requires std::is_convertible_v<T, std::string>
     {
         return lhs == rhs.message();
     }

@@ -67,28 +69,38 @@ public:
         return os;
     }

+    /**
+     * @return The final error message as a std::string
+     */
     std::string
     message() const
     {
         return message_;
     }

+    /**
+     * @return The error code
+     */
     uint32_t
     code() const
     {
         return code_;
     }

+    /**
+     * @return true if the wrapped error is considered a timeout; false otherwise
+     */
     bool
     isTimeout() const
     {
-        if (code_ == CASS_ERROR_LIB_NO_HOSTS_AVAILABLE or code_ == CASS_ERROR_LIB_REQUEST_TIMED_OUT or
-            code_ == CASS_ERROR_SERVER_UNAVAILABLE or code_ == CASS_ERROR_SERVER_OVERLOADED or
-            code_ == CASS_ERROR_SERVER_READ_TIMEOUT)
-            return true;
-        return false;
+        return code_ == CASS_ERROR_LIB_NO_HOSTS_AVAILABLE or code_ == CASS_ERROR_LIB_REQUEST_TIMED_OUT or
+            code_ == CASS_ERROR_SERVER_UNAVAILABLE or code_ == CASS_ERROR_SERVER_OVERLOADED or
+            code_ == CASS_ERROR_SERVER_READ_TIMEOUT;
     }

+    /**
+     * @return true if the wrapped error is an invalid query; false otherwise
+     */
     bool
     isInvalidQuery() const
     {
@@ -96,4 +108,4 @@ public:
     }
 };

-} // namespace Backend::Cassandra
+} // namespace data::cassandra


@@ -17,9 +17,9 @@
  */
 //==============================================================================

-#include <backend/cassandra/Handle.h>
+#include <data/cassandra/Handle.h>

-namespace Backend::Cassandra {
+namespace data::cassandra {

 Handle::Handle(Settings clusterSettings) : cluster_{clusterSettings}
 {
@@ -88,17 +88,17 @@ std::vector<Handle::FutureType>
 Handle::asyncExecuteEach(std::vector<Statement> const& statements) const
 {
     std::vector<Handle::FutureType> futures;
+    futures.reserve(statements.size());
     for (auto const& statement : statements)
-        futures.push_back(cass_session_execute(session_, statement));
+        futures.emplace_back(cass_session_execute(session_, statement));
     return futures;
 }

 Handle::MaybeErrorType
 Handle::executeEach(std::vector<Statement> const& statements) const
 {
-    for (auto futures = asyncExecuteEach(statements); auto const& future : futures)
-    {
-        if (auto const rc = future.await(); not rc)
+    for (auto futures = asyncExecuteEach(statements); auto const& future : futures) {
+        if (auto rc = future.await(); not rc)
             return rc;
     }
@@ -145,11 +145,12 @@ Handle::asyncExecute(std::vector<Statement> const& statements, std::function<voi
 Handle::PreparedStatementType
 Handle::prepare(std::string_view query) const
 {
-    Handle::FutureType future = cass_session_prepare(session_, query.data());
-    if (auto const rc = future.await(); rc)
+    Handle::FutureType const future = cass_session_prepare(session_, query.data());
+    auto const rc = future.await();
+    if (rc)
         return cass_future_get_prepared(future);
-    else
-        throw std::runtime_error(rc.error().message());
+
+    throw std::runtime_error(rc.error().message());
 }

-} // namespace Backend::Cassandra
+} // namespace data::cassandra
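`executeEach()` above uses a range-based `for` with an init-statement (C++20), so the futures vector lives exactly as long as the loop and the first failed `await()` short-circuits. A standalone illustration — `makeResults()` is a made-up stand-in for `asyncExecuteEach()`, with an optional error string playing the role of `MaybeErrorType`:

```cpp
#include <optional>
#include <string>
#include <vector>

// Made-up stand-in: each "future" awaits to an optional error
// (std::nullopt means the statement succeeded).
static std::vector<std::optional<std::string>> makeResults() {
    return {std::nullopt, std::nullopt, std::make_optional<std::string>("timeout")};
}

// Mirrors the executeEach() shape: the init-statement keeps the vector
// alive for the loop, and the first error is returned immediately.
std::optional<std::string> executeEachLike() {
    for (auto results = makeResults(); auto const& rc : results) {
        if (rc)  // first failure short-circuits, like `return rc;` above
            return rc;
    }
    return std::nullopt;  // all succeeded
}
```

Without the init-statement form, the vector would have to be declared on its own line before the loop.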


@@ -19,15 +19,15 @@
 #pragma once

-#include <backend/cassandra/Error.h>
-#include <backend/cassandra/Types.h>
-#include <backend/cassandra/impl/Batch.h>
-#include <backend/cassandra/impl/Cluster.h>
-#include <backend/cassandra/impl/Future.h>
-#include <backend/cassandra/impl/ManagedObject.h>
-#include <backend/cassandra/impl/Result.h>
-#include <backend/cassandra/impl/Session.h>
-#include <backend/cassandra/impl/Statement.h>
+#include <data/cassandra/Error.h>
+#include <data/cassandra/Types.h>
+#include <data/cassandra/impl/Batch.h>
+#include <data/cassandra/impl/Cluster.h>
+#include <data/cassandra/impl/Future.h>
+#include <data/cassandra/impl/ManagedObject.h>
+#include <data/cassandra/impl/Result.h>
+#include <data/cassandra/impl/Session.h>
+#include <data/cassandra/impl/Statement.h>
 #include <util/Expected.h>

 #include <cassandra.h>
@@ -37,13 +37,12 @@
 #include <iterator>
 #include <vector>

-namespace Backend::Cassandra {
+namespace data::cassandra {

 /**
  * @brief Represents a handle to the cassandra database cluster
  */
-class Handle
-{
+class Handle {
     detail::Cluster cluster_;
     detail::Session session_;

@@ -57,28 +56,31 @@ public:
     using ResultType = Result;

     /**
-     * @brief Construct a new handle from a @ref Settings object
+     * @brief Construct a new handle from a @ref detail::Settings object.
+     *
+     * @param clusterSettings The settings to use
      */
     explicit Handle(Settings clusterSettings = Settings::defaultSettings());

     /**
-     * @brief Construct a new handle with default settings and only by setting
-     * the contact points
+     * @brief Construct a new handle with default settings and only by setting the contact points.
+     *
+     * @param contactPoints The contact points to use instead of settings
      */
     explicit Handle(std::string_view contactPoints);

     /**
-     * @brief Disconnects gracefully if possible
+     * @brief Disconnects gracefully if possible.
      */
     ~Handle();

     /**
-     * @brief Move is supported
+     * @brief Move is supported.
      */
     Handle(Handle&&) = default;

     /**
-     * @brief Connect to the cluster asynchronously
+     * @brief Connect to the cluster asynchronously.
      *
      * @return A future
      */
@@ -86,31 +88,37 @@ public:
     asyncConnect() const;

     /**
-     * @brief Synchonous version of the above
+     * @brief Synchonous version of the above.
      *
      * See @ref asyncConnect() const for how this works.
+     *
+     * @return Possibly an error
      */
     [[nodiscard]] MaybeErrorType
     connect() const;

     /**
-     * @brief Connect to the the specified keyspace asynchronously
+     * @brief Connect to the the specified keyspace asynchronously.
      *
+     * @param keyspace The keyspace to use
      * @return A future
      */
     [[nodiscard]] FutureType
     asyncConnect(std::string_view keyspace) const;

     /**
-     * @brief Synchonous version of the above
+     * @brief Synchonous version of the above.
      *
      * See @ref asyncConnect(std::string_view) const for how this works.
+     *
+     * @param keyspace The keyspace to use
+     * @return Possibly an error
      */
     [[nodiscard]] MaybeErrorType
     connect(std::string_view keyspace) const;

     /**
-     * @brief Disconnect from the cluster asynchronously
+     * @brief Disconnect from the cluster asynchronously.
      *
      * @return A future
      */
@@ -118,32 +126,40 @@ public:
     asyncDisconnect() const;

     /**
-     * @brief Synchonous version of the above
+     * @brief Synchonous version of the above.
      *
      * See @ref asyncDisconnect() const for how this works.
+     *
+     * @return Possibly an error
      */
     [[maybe_unused]] MaybeErrorType
     disconnect() const;

     /**
-     * @brief Reconnect to the the specified keyspace asynchronously
+     * @brief Reconnect to the the specified keyspace asynchronously.
      *
+     * @param keyspace The keyspace to use
      * @return A future
      */
     [[nodiscard]] FutureType
     asyncReconnect(std::string_view keyspace) const;

     /**
-     * @brief Synchonous version of the above
+     * @brief Synchonous version of the above.
      *
      * See @ref asyncReconnect(std::string_view) const for how this works.
+     *
+     * @param keyspace The keyspace to use
+     * @return Possibly an error
      */
     [[nodiscard]] MaybeErrorType
     reconnect(std::string_view keyspace) const;

     /**
-     * @brief Execute a simple query with optional args asynchronously
+     * @brief Execute a simple query with optional args asynchronously.
      *
+     * @param query The query to execute
+     * @param args The arguments to bind for execution
      * @return A future
      */
     template <typename... Args>
@@ -155,10 +171,13 @@ public:
     }

     /**
-     * @brief Synchonous version of the above
+     * @brief Synchonous version of the above.
      *
-     * See @ref asyncExecute(std::string_view, Args&&...) const for how this
-     * works.
+     * See asyncExecute(std::string_view, Args&&...) const for how this works.
+     *
+     * @param query The query to execute
+     * @param args The arguments to bind for execution
+     * @return The result or an error
      */
     template <typename... Args>
     [[maybe_unused]] ResultOrErrorType
@@ -168,30 +187,34 @@ public:
     }

     /**
-     * @brief Execute each of the statements asynchronously
+     * @brief Execute each of the statements asynchronously.
      *
-     * Batched version is not always the right option. Especially since it only
-     * supports INSERT, UPDATE and DELETE statements.
-     * This can be used as an alternative when statements need to execute in
-     * bulk.
+     * Batched version is not always the right option.
+     * Especially since it only supports INSERT, UPDATE and DELETE statements.
+     * This can be used as an alternative when statements need to execute in bulk.
      *
+     * @param statements The statements to execute
      * @return A vector of future objects
      */
     [[nodiscard]] std::vector<FutureType>
     asyncExecuteEach(std::vector<StatementType> const& statements) const;

     /**
-     * @brief Synchonous version of the above
+     * @brief Synchonous version of the above.
      *
-     * See @ref asyncExecuteEach(std::vector<StatementType> const&) const for
-     * how this works.
+     * See @ref asyncExecuteEach(std::vector<StatementType> const&) const for how this works.
+     *
+     * @param statements The statements to execute
+     * @return Possibly an error
      */
     [[maybe_unused]] MaybeErrorType
     executeEach(std::vector<StatementType> const& statements) const;

     /**
-     * @brief Execute a prepared statement with optional args asynchronously
+     * @brief Execute a prepared statement with optional args asynchronously.
      *
+     * @param statement The prepared statement to execute
+     * @param args The arguments to bind for execution
      * @return A future
      */
     template <typename... Args>
@@ -203,10 +226,13 @@ public:
     }

     /**
-     * @brief Synchonous version of the above
+     * @brief Synchonous version of the above.
      *
-     * See @ref asyncExecute(std::vector<StatementType> const&, Args&&...) const
-     * for how this works.
+     * See asyncExecute(std::vector<StatementType> const&, Args&&...) const for how this works.
+     *
+     * @param statement The prepared statement to bind and execute
+     * @param args The arguments to bind for execution
+     * @return The result or an error
      */
     template <typename... Args>
     [[maybe_unused]] ResultOrErrorType
@@ -216,61 +242,70 @@ public:
     }

     /**
-     * @brief Execute one (bound or simple) statements asynchronously
+     * @brief Execute one (bound or simple) statements asynchronously.
      *
+     * @param statement The statement to execute
      * @return A future
      */
     [[nodiscard]] FutureType
     asyncExecute(StatementType const& statement) const;

     /**
-     * @brief Execute one (bound or simple) statements asynchronously with a
-     * callback
+     * @brief Execute one (bound or simple) statements asynchronously with a callback.
      *
+     * @param statement The statement to execute
+     * @param cb The callback to execute when data is ready
      * @return A future that holds onto the callback provided
      */
     [[nodiscard]] FutureWithCallbackType
     asyncExecute(StatementType const& statement, std::function<void(ResultOrErrorType)>&& cb) const;

     /**
-     * @brief Synchonous version of the above
+     * @brief Synchonous version of the above.
      *
-     * See @ref asyncExecute(StatementType const&) const for how this
-     * works.
+     * See @ref asyncExecute(StatementType const&) const for how this works.
+     *
+     * @param statement The statement to execute
+     * @return The result or an error
      */
     [[maybe_unused]] ResultOrErrorType
     execute(StatementType const& statement) const;

     /**
-     * @brief Execute a batch of (bound or simple) statements asynchronously
+     * @brief Execute a batch of (bound or simple) statements asynchronously.
      *
+     * @param statements The statements to execute
      * @return A future
      */
     [[nodiscard]] FutureType
     asyncExecute(std::vector<StatementType> const& statements) const;

     /**
-     * @brief Synchonous version of the above
+     * @brief Synchonous version of the above.
      *
-     * See @ref asyncExecute(std::vector<StatementType> const&) const for how
-     * this works.
+     * See @ref asyncExecute(std::vector<StatementType> const&) const for how this works.
+     *
+     * @param statements The statements to execute
+     * @return Possibly an error
      */
     [[maybe_unused]] MaybeErrorType
     execute(std::vector<StatementType> const& statements) const;

     /**
-     * @brief Execute a batch of (bound or simple) statements asynchronously
-     * with a completion callback
+     * @brief Execute a batch of (bound or simple) statements asynchronously with a completion callback.
      *
+     * @param statements The statements to execute
+     * @param cb The callback to execute when data is ready
      * @return A future that holds onto the callback provided
      */
     [[nodiscard]] FutureWithCallbackType
     asyncExecute(std::vector<StatementType> const& statements, std::function<void(ResultOrErrorType)>&& cb) const;

     /**
-     * @brief Prepare a statement
+     * @brief Prepare a statement.
      *
-     * @return A @ref PreparedStatementType
+     * @param query
+     * @return A prepared statement
      * @throws std::runtime_error with underlying error description on failure
      */
     [[nodiscard]] PreparedStatementType
@@ -278,12 +313,13 @@ public:
 };

 /**
- * @brief Extracts the results into series of std::tuple<Types...> by creating a
- * simple wrapper with an STL input iterator inside.
+ * @brief Extracts the results into series of std::tuple<Types...> by creating a simple wrapper with an STL input
+ * iterator inside.
  *
  * You can call .begin() and .end() in order to iterate as usual.
- * This also means that you can use it in a range-based for or with some
- * algorithms.
+ * This also means that you can use it in a range-based for or with some algorithms.
+ *
+ * @param result The result to iterate
  */
 template <typename... Types>
 [[nodiscard]] detail::ResultExtractor<Types...>
@@ -292,4 +328,4 @@ extract(Handle::ResultType const& result)
     return {result};
 }

-} // namespace Backend::Cassandra
+} // namespace data::cassandra
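`extract<Types...>()` returns a wrapper exposing `begin()`/`end()`, so results drop straight into a range-based `for` with structured bindings. A simplified model — the fake, eagerly-decoded result set below stands in for `Handle::ResultType` and the lazy input iterator of the real `detail::ResultExtractor`:

```cpp
#include <cstddef>
#include <string>
#include <tuple>
#include <vector>

// Fake result set: rows already decoded into tuples (the real extractor
// decodes cassandra rows lazily through an STL input iterator).
template <typename... Types>
struct FakeExtractor {
    std::vector<std::tuple<Types...>> rows;

    auto begin() const { return rows.begin(); }
    auto end() const { return rows.end(); }
};

// Usage mirrors `for (auto [hash, seq] : extract<Hash, Seq>(result))`.
inline int sumSequences(FakeExtractor<std::string, int> const& ex) {
    int total = 0;
    for (auto const& [hash, seq] : ex) {
        (void)hash;  // a real caller would use the decoded field
        total += seq;
    }
    return total;
}
```

Since the wrapper is a range, it also composes with standard algorithms, not just range-based loops.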


@@ -19,17 +19,17 @@
 #pragma once

-#include <backend/cassandra/Concepts.h>
-#include <backend/cassandra/Handle.h>
-#include <backend/cassandra/SettingsProvider.h>
-#include <backend/cassandra/Types.h>
-#include <config/Config.h>
-#include <log/Logger.h>
+#include <data/cassandra/Concepts.h>
+#include <data/cassandra/Handle.h>
+#include <data/cassandra/SettingsProvider.h>
+#include <data/cassandra/Types.h>
 #include <util/Expected.h>
+#include <util/config/Config.h>
+#include <util/log/Logger.h>

 #include <fmt/compile.h>

-namespace Backend::Cassandra {
+namespace data::cassandra {

 template <SomeSettingsProvider SettingsProviderType>
 [[nodiscard]] std::string inline qualifiedTableName(SettingsProviderType const& provider, std::string_view name)
@@ -38,17 +38,11 @@ template <SomeSettingsProvider SettingsProviderType>
 }
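The body of `qualifiedTableName()` is collapsed in this hunk; given the provider concept's `getKeyspace()` and `getTablePrefix()`, a plausible free-standing sketch is below. The `"<keyspace>.<prefix><table>"` format is an assumption for illustration, not the confirmed implementation:

```cpp
#include <optional>
#include <string>
#include <string_view>

// Hypothetical reconstruction: qualify a table name with the keyspace and
// an optional table prefix, e.g. ("clio", "test_", "diff") -> "clio.test_diff".
inline std::string
qualifiedTableNameSketch(
    std::string const& keyspace,
    std::optional<std::string> const& tablePrefix,
    std::string_view name
)
{
    return keyspace + "." + tablePrefix.value_or("") + std::string{name};
}
```

The optional prefix lets several clio instances (or test runs) share one keyspace without colliding on table names.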
 /**
- * @brief Manages the DB schema and provides access to prepared statements
+ * @brief Manages the DB schema and provides access to prepared statements.
  */
 template <SomeSettingsProvider SettingsProviderType>
-class Schema
-{
-    // Current schema version.
-    // Update this everytime you update the schema.
-    // Migrations will be ran automatically based on this value.
-    static constexpr uint16_t version = 1u;
-
-    clio::Logger log_{"Backend"};
+class Schema {
+    util::Logger log_{"Backend"};
     std::reference_wrapper<SettingsProviderType const> settingsProvider_;

 public:
@@ -67,7 +61,8 @@ public:
             AND durable_writes = true
         )",
         settingsProvider_.get().getKeyspace(),
-        settingsProvider_.get().getReplicationFactor());
+        settingsProvider_.get().getReplicationFactor()
+    );
     }();

 // =======================
@@ -90,7 +85,8 @@ public:
             AND default_time_to_live = {}
         )",
         qualifiedTableName(settingsProvider_.get(), "objects"),
-        settingsProvider_.get().getTtl()));
+        settingsProvider_.get().getTtl()
+    ));

     statements.emplace_back(fmt::format(
         R"(
@@ -105,7 +101,8 @@ public:
             WITH default_time_to_live = {}
         )",
         qualifiedTableName(settingsProvider_.get(), "transactions"),
-        settingsProvider_.get().getTtl()));
+        settingsProvider_.get().getTtl()
+    ));

     statements.emplace_back(fmt::format(
         R"(
@@ -118,7 +115,8 @@ public:
             WITH default_time_to_live = {}
         )",
         qualifiedTableName(settingsProvider_.get(), "ledger_transactions"),
-        settingsProvider_.get().getTtl()));
+        settingsProvider_.get().getTtl()
+    ));

     statements.emplace_back(fmt::format(
         R"(
@@ -132,7 +130,8 @@ public:
             WITH default_time_to_live = {}
         )",
         qualifiedTableName(settingsProvider_.get(), "successor"),
-        settingsProvider_.get().getTtl()));
+        settingsProvider_.get().getTtl()
+    ));

     statements.emplace_back(fmt::format(
         R"(
@@ -145,7 +144,8 @@ public:
             WITH default_time_to_live = {}
        )",
         qualifiedTableName(settingsProvider_.get(), "diff"),
-        settingsProvider_.get().getTtl()));
+        settingsProvider_.get().getTtl()
+    ));

     statements.emplace_back(fmt::format(
         R"(
@@ -160,7 +160,8 @@ public:
             AND default_time_to_live = {}
         )",
         qualifiedTableName(settingsProvider_.get(), "account_tx"),
-        settingsProvider_.get().getTtl()));
+        settingsProvider_.get().getTtl()
+    ));

     statements.emplace_back(fmt::format(
         R"(
@@ -172,7 +173,8 @@ public:
             WITH default_time_to_live = {}
         )",
         qualifiedTableName(settingsProvider_.get(), "ledgers"),
-        settingsProvider_.get().getTtl()));
+        settingsProvider_.get().getTtl()
+    ));

     statements.emplace_back(fmt::format(
         R"(
@@ -184,7 +186,8 @@ public:
             WITH default_time_to_live = {}
         )",
         qualifiedTableName(settingsProvider_.get(), "ledger_hashes"),
-        settingsProvider_.get().getTtl()));
+        settingsProvider_.get().getTtl()
+    ));

     statements.emplace_back(fmt::format(
         R"(
@@ -194,7 +197,8 @@ public:
             sequence bigint
         )
         )",
-        qualifiedTableName(settingsProvider_.get(), "ledger_range")));
+        qualifiedTableName(settingsProvider_.get(), "ledger_range")
+    ));

     statements.emplace_back(fmt::format(
         R"(
@@ -210,7 +214,8 @@ public:
             AND default_time_to_live = {}
         )",
         qualifiedTableName(settingsProvider_.get(), "nf_tokens"),
-        settingsProvider_.get().getTtl()));
+        settingsProvider_.get().getTtl()
+    ));

     statements.emplace_back(fmt::format(
         R"(
@@ -225,7 +230,8 @@ public:
             AND default_time_to_live = {}
         )",
         qualifiedTableName(settingsProvider_.get(), "issuer_nf_tokens_v2"),
-        settingsProvider_.get().getTtl()));
+        settingsProvider_.get().getTtl()
+    ));

     statements.emplace_back(fmt::format(
         R"(
@@ -240,7 +246,8 @@ public:
             AND default_time_to_live = {}
         )",
         qualifiedTableName(settingsProvider_.get(), "nf_token_uris"),
-        settingsProvider_.get().getTtl()));
+        settingsProvider_.get().getTtl()
+    ));

     statements.emplace_back(fmt::format(
         R"(
@@ -255,16 +262,16 @@ public:
             AND default_time_to_live = {}
         )",
         qualifiedTableName(settingsProvider_.get(), "nf_token_transactions"),
-        settingsProvider_.get().getTtl()));
+        settingsProvider_.get().getTtl()
+    ));

         return statements;
     }();
     /**
-     * @brief Prepared statements holder
+     * @brief Prepared statements holder.
      */
-    class Statements
-    {
+    class Statements {
         std::reference_wrapper<SettingsProviderType const> settingsProvider_;
         std::reference_wrapper<Handle const> handle_;
@@ -285,7 +292,8 @@ public:
             (key, sequence, object)
             VALUES (?, ?, ?)
         )",
-            qualifiedTableName(settingsProvider_.get(), "objects")));
+            qualifiedTableName(settingsProvider_.get(), "objects")
+        ));
     }();

     PreparedStatement insertTransaction = [this]() {
@@ -295,7 +303,8 @@ public:
             (hash, ledger_sequence, date, transaction, metadata)
             VALUES (?, ?, ?, ?, ?)
         )",
-            qualifiedTableName(settingsProvider_.get(), "transactions")));
+            qualifiedTableName(settingsProvider_.get(), "transactions")
+        ));
     }();

     PreparedStatement insertLedgerTransaction = [this]() {
@@ -305,7 +314,8 @@ public:
             (ledger_sequence, hash)
             VALUES (?, ?)
         )",
-            qualifiedTableName(settingsProvider_.get(), "ledger_transactions")));
+            qualifiedTableName(settingsProvider_.get(), "ledger_transactions")
+        ));
     }();

     PreparedStatement insertSuccessor = [this]() {
@@ -315,7 +325,8 @@ public:
             (key, seq, next)
             VALUES (?, ?, ?)
         )",
-            qualifiedTableName(settingsProvider_.get(), "successor")));
+            qualifiedTableName(settingsProvider_.get(), "successor")
+        ));
     }();

     PreparedStatement insertDiff = [this]() {
@@ -325,7 +336,8 @@ public:
             (seq, key)
             VALUES (?, ?)
         )",
-            qualifiedTableName(settingsProvider_.get(), "diff")));
+            qualifiedTableName(settingsProvider_.get(), "diff")
+        ));
     }();

     PreparedStatement insertAccountTx = [this]() {
@@ -335,7 +347,8 @@ public:
             (account, seq_idx, hash)
             VALUES (?, ?, ?)
         )",
-            qualifiedTableName(settingsProvider_.get(), "account_tx")));
+            qualifiedTableName(settingsProvider_.get(), "account_tx")
+        ));
     }();

     PreparedStatement insertNFT = [this]() {
@@ -345,7 +358,8 @@ public:
             (token_id, sequence, owner, is_burned)
             VALUES (?, ?, ?, ?)
         )",
-            qualifiedTableName(settingsProvider_.get(), "nf_tokens")));
+            qualifiedTableName(settingsProvider_.get(), "nf_tokens")
+        ));
     }();

     PreparedStatement insertIssuerNFT = [this]() {
@@ -355,7 +369,8 @@ public:
             (issuer, taxon, token_id)
             VALUES (?, ?, ?)
         )",
-            qualifiedTableName(settingsProvider_.get(), "issuer_nf_tokens_v2")));
+            qualifiedTableName(settingsProvider_.get(), "issuer_nf_tokens_v2")
+        ));
     }();

     PreparedStatement insertNFTURI = [this]() {
@@ -365,7 +380,8 @@ public:
             (token_id, sequence, uri)
             VALUES (?, ?, ?)
         )",
-            qualifiedTableName(settingsProvider_.get(), "nf_token_uris")));
+            qualifiedTableName(settingsProvider_.get(), "nf_token_uris")
+        ));
     }();

     PreparedStatement insertNFTTx = [this]() {
@@ -375,7 +391,8 @@ public:
             (token_id, seq_idx, hash)
             VALUES (?, ?, ?)
         )",
-            qualifiedTableName(settingsProvider_.get(), "nf_token_transactions")));
+            qualifiedTableName(settingsProvider_.get(), "nf_token_transactions")
+        ));
     }();

     PreparedStatement insertLedgerHeader = [this]() {
@@ -385,7 +402,8 @@ public:
             (sequence, header)
             VALUES (?, ?)
         )",
-            qualifiedTableName(settingsProvider_.get(), "ledgers")));
+            qualifiedTableName(settingsProvider_.get(), "ledgers")
+        ));
     }();

     PreparedStatement insertLedgerHash = [this]() {
@@ -395,7 +413,8 @@ public:
             (hash, sequence)
             VALUES (?, ?)
         )",
-            qualifiedTableName(settingsProvider_.get(), "ledger_hashes")));
+            qualifiedTableName(settingsProvider_.get(), "ledger_hashes")
+        ));
     }();
     //
@@ -410,7 +429,8 @@ public:
             WHERE is_latest = ?
             IF sequence IN (?, null)
         )",
-            qualifiedTableName(settingsProvider_.get(), "ledger_range")));
+            qualifiedTableName(settingsProvider_.get(), "ledger_range")
+        ));
     }();

     PreparedStatement deleteLedgerRange = [this]() {
@@ -420,7 +440,8 @@ public:
             SET sequence = ?
             WHERE is_latest = false
         )",
-            qualifiedTableName(settingsProvider_.get(), "ledger_range")));
+            qualifiedTableName(settingsProvider_.get(), "ledger_range")
+        ));
     }();

     //
@@ -437,7 +458,8 @@ public:
             ORDER BY seq DESC
             LIMIT 1
         )",
-            qualifiedTableName(settingsProvider_.get(), "successor")));
+            qualifiedTableName(settingsProvider_.get(), "successor")
+        ));
     }();

     PreparedStatement selectDiff = [this]() {
@@ -447,7 +469,8 @@ public:
             FROM {}
             WHERE seq = ?
         )",
-            qualifiedTableName(settingsProvider_.get(), "diff")));
+            qualifiedTableName(settingsProvider_.get(), "diff")
+        ));
     }();

     PreparedStatement selectObject = [this]() {
@@ -460,7 +483,8 @@ public:
             ORDER BY sequence DESC
             LIMIT 1
         )",
-            qualifiedTableName(settingsProvider_.get(), "objects")));
+            qualifiedTableName(settingsProvider_.get(), "objects")
+        ));
     }();

     PreparedStatement selectTransaction = [this]() {
@@ -470,7 +494,8 @@ public:
             FROM {}
             WHERE hash = ?
         )",
-            qualifiedTableName(settingsProvider_.get(), "transactions")));
+            qualifiedTableName(settingsProvider_.get(), "transactions")
+        ));
     }();

     PreparedStatement selectAllTransactionHashesInLedger = [this]() {
@@ -480,7 +505,8 @@ public:
             FROM {}
             WHERE ledger_sequence = ?
         )",
-            qualifiedTableName(settingsProvider_.get(), "ledger_transactions")));
+            qualifiedTableName(settingsProvider_.get(), "ledger_transactions")
+        ));
     }();

     PreparedStatement selectLedgerPageKeys = [this]() {
@@ -494,7 +520,8 @@ public:
             LIMIT ?
             ALLOW FILTERING
         )",
-            qualifiedTableName(settingsProvider_.get(), "objects")));
+            qualifiedTableName(settingsProvider_.get(), "objects")
+        ));
     }();

     PreparedStatement selectLedgerPage = [this]() {
@@ -508,7 +535,8 @@ public:
             LIMIT ?
             ALLOW FILTERING
         )",
-            qualifiedTableName(settingsProvider_.get(), "objects")));
+            qualifiedTableName(settingsProvider_.get(), "objects")
+        ));
     }();

     PreparedStatement getToken = [this]() {
@@ -519,7 +547,8 @@ public:
WHERE key = ? WHERE key = ?
LIMIT 1 LIMIT 1
)", )",
qualifiedTableName(settingsProvider_.get(), "objects"))); qualifiedTableName(settingsProvider_.get(), "objects")
));
}(); }();
PreparedStatement selectAccountTx = [this]() { PreparedStatement selectAccountTx = [this]() {
@@ -528,25 +557,25 @@ public:
SELECT hash, seq_idx SELECT hash, seq_idx
FROM {} FROM {}
WHERE account = ? WHERE account = ?
AND seq_idx <= ? AND seq_idx < ?
LIMIT ? LIMIT ?
)", )",
qualifiedTableName(settingsProvider_.get(), "account_tx"))); qualifiedTableName(settingsProvider_.get(), "account_tx")
));
}(); }();
// the smallest transaction idx is 0, we use uint to store the transaction index, so we shall use ">=" to
// include it(the transaction with idx 0) in the result
PreparedStatement selectAccountTxForward = [this]() { PreparedStatement selectAccountTxForward = [this]() {
return handle_.get().prepare(fmt::format( return handle_.get().prepare(fmt::format(
R"( R"(
SELECT hash, seq_idx SELECT hash, seq_idx
FROM {} FROM {}
WHERE account = ? WHERE account = ?
AND seq_idx >= ? AND seq_idx > ?
ORDER BY seq_idx ASC ORDER BY seq_idx ASC
LIMIT ? LIMIT ?
)", )",
qualifiedTableName(settingsProvider_.get(), "account_tx"))); qualifiedTableName(settingsProvider_.get(), "account_tx")
));
}(); }();
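The account_tx cursor comparisons change here from inclusive (`<=`/`>=`) to exclusive (`<`/`>`), so the page-boundary row is no longer returned again as the first row of the next page; the deleted comment about index 0 existed only because an inclusive lower bound was needed to include it. A minimal sketch of the exclusive-cursor semantics (row type and function names are hypothetical, not clio's actual code):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for one account_tx row key: ledger sequence and
// transaction index flattened into a single ordering value for the sketch.
struct TxRow {
    std::uint64_t seqIdx;
};

// Backward page, mirroring `AND seq_idx < ?`: the cursor row itself
// (the last row of the previous page) is excluded from the result.
std::vector<TxRow>
pageBackward(std::vector<TxRow> const& rowsAscending, std::uint64_t cursor, std::size_t limit)
{
    std::vector<TxRow> out;
    for (auto it = rowsAscending.rbegin(); it != rowsAscending.rend(); ++it) {
        if (out.size() == limit)
            break;
        if (it->seqIdx < cursor)  // strictly below the cursor
            out.push_back(*it);
    }
    return out;
}
```

With an exclusive bound the first page simply starts from a sentinel cursor beyond the valid range, and no special case for index 0 is needed.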
     PreparedStatement selectNFT = [this]() {
@@ -559,7 +588,22 @@ public:
                 ORDER BY sequence DESC
                 LIMIT 1
             )",
-            qualifiedTableName(settingsProvider_.get(), "nf_tokens")));
+            qualifiedTableName(settingsProvider_.get(), "nf_tokens")
+        ));
+    }();
+    PreparedStatement selectNFTBulk = [this]() {
+        return handle_.get().prepare(fmt::format(
+            R"(
+                SELECT token_id, sequence, owner, is_burned
+                FROM {}
+                WHERE token_id IN ?
+                AND sequence <= ?
+                ORDER BY sequence DESC
+                PER PARTITION LIMIT 1
+            )",
+            qualifiedTableName(settingsProvider_.get(), "nf_tokens")
+        ));
     }();
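The new `selectNFTBulk` statement fetches many token IDs in one round trip; `PER PARTITION LIMIT 1` keeps only the newest row (highest `sequence` at or below the ledger cap) for each `token_id` partition. A sketch emulating that per-partition semantics in plain C++ (the `NFTRow` type and function name are illustrative only):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-in for a row of nf_tokens: (token_id, sequence, owner).
struct NFTRow {
    std::string tokenID;
    std::uint32_t sequence;
    std::string owner;
};

// Emulates `WHERE token_id IN ? AND sequence <= ? ... PER PARTITION LIMIT 1`:
// for each token, keep only the newest row at or below maxSeq.
std::map<std::string, NFTRow>
latestPerToken(std::vector<NFTRow> const& rows, std::uint32_t maxSeq)
{
    std::map<std::string, NFTRow> best;
    for (auto const& row : rows) {
        if (row.sequence > maxSeq)
            continue;  // newer than the requested ledger, skip
        auto it = best.find(row.tokenID);
        if (it == best.end() || row.sequence > it->second.sequence)
            best[row.tokenID] = row;  // first or newer row for this partition
    }
    return best;
}
```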
     PreparedStatement selectNFTURI = [this]() {
@@ -572,7 +616,22 @@ public:
                 ORDER BY sequence DESC
                 LIMIT 1
             )",
-            qualifiedTableName(settingsProvider_.get(), "nf_token_uris")));
+            qualifiedTableName(settingsProvider_.get(), "nf_token_uris")
+        ));
+    }();
+    PreparedStatement selectNFTURIBulk = [this]() {
+        return handle_.get().prepare(fmt::format(
+            R"(
+                SELECT token_id, uri
+                FROM {}
+                WHERE token_id IN ?
+                AND sequence <= ?
+                ORDER BY sequence DESC
+                PER PARTITION LIMIT 1
+            )",
+            qualifiedTableName(settingsProvider_.get(), "nf_token_uris")
+        ));
     }();
     PreparedStatement selectNFTTx = [this]() {
@@ -585,7 +644,8 @@ public:
                 ORDER BY seq_idx DESC
                 LIMIT ?
             )",
-            qualifiedTableName(settingsProvider_.get(), "nf_token_transactions")));
+            qualifiedTableName(settingsProvider_.get(), "nf_token_transactions")
+        ));
     }();
     PreparedStatement selectNFTTxForward = [this]() {
@@ -598,7 +658,37 @@ public:
                 ORDER BY seq_idx ASC
                 LIMIT ?
             )",
-            qualifiedTableName(settingsProvider_.get(), "nf_token_transactions")));
+            qualifiedTableName(settingsProvider_.get(), "nf_token_transactions")
+        ));
+    }();
+    PreparedStatement selectNFTIDsByIssuer = [this]() {
+        return handle_.get().prepare(fmt::format(
+            R"(
+                SELECT token_id
+                FROM {}
+                WHERE issuer = ?
+                AND (taxon, token_id) > ?
+                ORDER BY taxon ASC, token_id ASC
+                LIMIT ?
+            )",
+            qualifiedTableName(settingsProvider_.get(), "issuer_nf_tokens_v2")
+        ));
+    }();
+    PreparedStatement selectNFTIDsByIssuerTaxon = [this]() {
+        return handle_.get().prepare(fmt::format(
+            R"(
+                SELECT token_id
+                FROM {}
+                WHERE issuer = ?
+                AND taxon = ?
+                AND token_id > ?
+                ORDER BY taxon ASC, token_id ASC
+                LIMIT ?
+            )",
+            qualifiedTableName(settingsProvider_.get(), "issuer_nf_tokens_v2")
+        ));
     }();
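The `nfts_by_issuer` statements page through an issuer's tokens with a clustering-key tuple cursor, `(taxon, token_id) > ?`, which CQL compares lexicographically: first by `taxon`, then by `token_id` within equal taxons. The same ordering is what `std::pair`'s comparison operators give in C++ (names below are illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <utility>

// (taxon, token_id) clustering key, flattened for the sketch.
using TaxonToken = std::pair<std::uint32_t, std::string>;

// Lexicographic "strictly after the cursor" test, matching
// `AND (taxon, token_id) > ?` in the paged query.
bool
afterCursor(TaxonToken const& row, TaxonToken const& cursor)
{
    return row > cursor;  // compares taxon first, then token_id
}
```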
     PreparedStatement selectLedgerByHash = [this]() {
@@ -609,7 +699,8 @@ public:
                 WHERE hash = ?
                 LIMIT 1
             )",
-            qualifiedTableName(settingsProvider_.get(), "ledger_hashes")));
+            qualifiedTableName(settingsProvider_.get(), "ledger_hashes")
+        ));
     }();
     PreparedStatement selectLedgerBySeq = [this]() {
@@ -619,7 +710,8 @@ public:
                 FROM {}
                 WHERE sequence = ?
             )",
-            qualifiedTableName(settingsProvider_.get(), "ledgers")));
+            qualifiedTableName(settingsProvider_.get(), "ledgers")
+        ));
     }();
     PreparedStatement selectLatestLedger = [this]() {
@@ -629,7 +721,8 @@ public:
                 FROM {}
                 WHERE is_latest = true
             )",
-            qualifiedTableName(settingsProvider_.get(), "ledger_range")));
+            qualifiedTableName(settingsProvider_.get(), "ledger_range")
+        ));
     }();
     PreparedStatement selectLedgerRange = [this]() {
@@ -638,23 +731,24 @@ public:
                 SELECT sequence
                 FROM {}
             )",
-            qualifiedTableName(settingsProvider_.get(), "ledger_range")));
+            qualifiedTableName(settingsProvider_.get(), "ledger_range")
+        ));
     }();
 };
 /**
- * @brief Recreates the prepared statements
+ * @brief Recreates the prepared statements.
  */
 void
 prepareStatements(Handle const& handle)
 {
-    log_.info() << "Preparing cassandra statements";
+    LOG(log_.info()) << "Preparing cassandra statements";
     statements_ = std::make_unique<Statements>(settingsProvider_, handle);
-    log_.info() << "Finished preparing statements";
+    LOG(log_.info()) << "Finished preparing statements";
 }
 /**
- * @brief Provides access to statements
+ * @brief Provides access to statements.
  */
 std::unique_ptr<Statements> const&
 operator->() const
@@ -666,4 +760,4 @@ private:
     std::unique_ptr<Statements> statements_{nullptr};
 };
-} // namespace Backend::Cassandra
+} // namespace data::cassandra


@@ -17,28 +17,29 @@
  */
 //==============================================================================
-#include <backend/cassandra/SettingsProvider.h>
-#include <backend/cassandra/impl/Cluster.h>
-#include <backend/cassandra/impl/Statement.h>
-#include <config/Config.h>
+#include <data/cassandra/SettingsProvider.h>
+#include <data/cassandra/impl/Cluster.h>
+#include <data/cassandra/impl/Statement.h>
+#include <util/Constants.h>
+#include <util/config/Config.h>
 #include <boost/json.hpp>
+#include <fstream>
 #include <string>
 #include <thread>
-namespace Backend::Cassandra {
+namespace data::cassandra {
 namespace detail {
 inline Settings::ContactPoints
 tag_invoke(boost::json::value_to_tag<Settings::ContactPoints>, boost::json::value const& value)
 {
-    if (not value.is_object())
-        throw std::runtime_error(
-            "Feed entire Cassandra section to parse "
-            "Settings::ContactPoints instead");
+    if (not value.is_object()) {
+        throw std::runtime_error("Feed entire Cassandra section to parse Settings::ContactPoints instead");
+    }
-    clio::Config obj{value};
+    util::Config const obj{value};
     Settings::ContactPoints out;
     out.contactPoints = obj.valueOrThrow<std::string>("contact_points", "`contact_points` must be a string");
@@ -56,7 +57,7 @@ tag_invoke(boost::json::value_to_tag<Settings::SecureConnectionBundle>, boost::j
 }
 } // namespace detail
-SettingsProvider::SettingsProvider(clio::Config const& cfg, uint16_t ttl)
+SettingsProvider::SettingsProvider(util::Config const& cfg, uint16_t ttl)
     : config_{cfg}
     , keyspace_{cfg.valueOr<std::string>("keyspace", "clio")}
     , tablePrefix_{cfg.maybeValue<std::string>("table_prefix")}
@@ -75,18 +76,15 @@ SettingsProvider::getSettings() const
 std::optional<std::string>
 SettingsProvider::parseOptionalCertificate() const
 {
-    if (auto const certPath = config_.maybeValue<std::string>("certfile"); certPath)
-    {
+    if (auto const certPath = config_.maybeValue<std::string>("certfile"); certPath) {
         auto const path = std::filesystem::path(*certPath);
         std::ifstream fileStream(path.string(), std::ios::in);
-        if (!fileStream)
-        {
+        if (!fileStream) {
             throw std::system_error(errno, std::generic_category(), "Opening certificate " + path.string());
         }
         std::string contents(std::istreambuf_iterator<char>{fileStream}, std::istreambuf_iterator<char>{});
-        if (fileStream.bad())
-        {
+        if (fileStream.bad()) {
             throw std::system_error(errno, std::generic_category(), "Reading certificate " + path.string());
         }
@@ -100,12 +98,9 @@ Settings
 SettingsProvider::parseSettings() const
 {
     auto settings = Settings::defaultSettings();
-    if (auto const bundle = config_.maybeValue<Settings::SecureConnectionBundle>("secure_connect_bundle"); bundle)
-    {
+    if (auto const bundle = config_.maybeValue<Settings::SecureConnectionBundle>("secure_connect_bundle"); bundle) {
         settings.connectionInfo = *bundle;
-    }
-    else
-    {
+    } else {
         settings.connectionInfo =
             config_.valueOrThrow<Settings::ContactPoints>("Missing contact_points in Cassandra config");
     }
@@ -115,6 +110,19 @@ SettingsProvider::parseSettings() const
         config_.valueOr<uint32_t>("max_write_requests_outstanding", settings.maxWriteRequestsOutstanding);
     settings.maxReadRequestsOutstanding =
         config_.valueOr<uint32_t>("max_read_requests_outstanding", settings.maxReadRequestsOutstanding);
+    settings.coreConnectionsPerHost =
+        config_.valueOr<uint32_t>("core_connections_per_host", settings.coreConnectionsPerHost);
+    settings.queueSizeIO = config_.maybeValue<uint32_t>("queue_size_io");
+    auto const connectTimeoutSecond = config_.maybeValue<uint32_t>("connect_timeout");
+    if (connectTimeoutSecond)
+        settings.connectionTimeout = std::chrono::milliseconds{*connectTimeoutSecond * util::MILLISECONDS_PER_SECOND};
+    auto const requestTimeoutSecond = config_.maybeValue<uint32_t>("request_timeout");
+    if (requestTimeoutSecond)
+        settings.requestTimeout = std::chrono::milliseconds{*requestTimeoutSecond * util::MILLISECONDS_PER_SECOND};
     settings.certificate = parseOptionalCertificate();
     settings.username = config_.maybeValue<std::string>("username");
     settings.password = config_.maybeValue<std::string>("password");
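The new `connect_timeout` and `request_timeout` config keys are read in seconds, while the driver settings store milliseconds, hence the multiplication by `util::MILLISECONDS_PER_SECOND` and the guard that leaves the defaults untouched when a key is absent. The same conversion in isolation (constant and function name are illustrative, mirroring the pattern above):

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>
#include <optional>

constexpr std::uint32_t MILLISECONDS_PER_SECOND = 1000;  // stand-in for util's constant

// Converts an optional seconds value from config into a milliseconds duration,
// keeping the existing default when the key was not set.
std::chrono::milliseconds
applyTimeout(std::optional<std::uint32_t> seconds, std::chrono::milliseconds current)
{
    if (seconds)
        return std::chrono::milliseconds{*seconds * MILLISECONDS_PER_SECOND};
    return current;
}
```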
@@ -122,4 +130,4 @@ SettingsProvider::parseSettings() const
     return settings;
 }
-} // namespace Backend::Cassandra
+} // namespace data::cassandra


@@ -19,20 +19,19 @@
 #pragma once
-#include <backend/cassandra/Handle.h>
-#include <backend/cassandra/Types.h>
-#include <config/Config.h>
-#include <log/Logger.h>
+#include <data/cassandra/Handle.h>
+#include <data/cassandra/Types.h>
 #include <util/Expected.h>
+#include <util/config/Config.h>
+#include <util/log/Logger.h>
-namespace Backend::Cassandra {
+namespace data::cassandra {
 /**
- * @brief Provides settings for @ref CassandraBackend
+ * @brief Provides settings for @ref BasicCassandraBackend.
  */
-class SettingsProvider
-{
-    clio::Config config_;
+class SettingsProvider {
+    util::Config config_;
     std::string keyspace_;
     std::optional<std::string> tablePrefix_;
@@ -41,34 +40,50 @@ class SettingsProvider
     Settings settings_;
 public:
-    explicit SettingsProvider(clio::Config const& cfg, uint16_t ttl = 0);
+    /**
+     * @brief Create a settings provider from the specified config.
+     *
+     * @param cfg The config of Clio to use
+     * @param ttl Time to live setting
+     */
+    explicit SettingsProvider(util::Config const& cfg, uint16_t ttl = 0);
-    /*! Get the cluster settings */
+    /**
+     * @return The cluster settings
+     */
     [[nodiscard]] Settings
     getSettings() const;
-    /*! Get the specified keyspace */
+    /**
+     * @return The specified keyspace
+     */
     [[nodiscard]] inline std::string
     getKeyspace() const
     {
         return keyspace_;
     }
-    /*! Get an optional table prefix to use in all queries */
+    /**
+     * @return The optional table prefix to use in all queries
+     */
     [[nodiscard]] inline std::optional<std::string>
     getTablePrefix() const
     {
         return tablePrefix_;
     }
-    /*! Get the replication factor */
+    /**
+     * @return The replication factor
+     */
     [[nodiscard]] inline uint16_t
     getReplicationFactor() const
     {
         return replicationFactor_;
     }
-    /*! Get the default time to live to use in all `create` queries */
+    /**
+     * @return The default time to live to use in all `create` queries
+     */
     [[nodiscard]] inline uint16_t
     getTtl() const
     {
@@ -83,4 +98,4 @@ private:
     parseSettings() const;
 };
-} // namespace Backend::Cassandra
+} // namespace data::cassandra


@@ -23,7 +23,7 @@
 #include <string>
-namespace Backend::Cassandra {
+namespace data::cassandra {
 namespace detail {
 struct Settings;
@@ -52,8 +52,7 @@ using Batch = detail::Batch;
  * because clio uses bigint (int64) everywhere except for when one need
  * to specify LIMIT, which needs an int32 :-/
  */
-struct Limit
-{
+struct Limit {
     int32_t limit;
 };
@@ -64,4 +63,4 @@ using MaybeError = util::Expected<void, CassandraError>;
 using ResultOrError = util::Expected<Result, CassandraError>;
 using Error = util::Unexpected<CassandraError>;
-} // namespace Backend::Cassandra
+} // namespace data::cassandra


@@ -19,19 +19,19 @@
 #pragma once
-#include <backend/cassandra/Concepts.h>
-#include <backend/cassandra/Handle.h>
-#include <backend/cassandra/Types.h>
-#include <backend/cassandra/impl/RetryPolicy.h>
-#include <log/Logger.h>
+#include <data/cassandra/Concepts.h>
+#include <data/cassandra/Handle.h>
+#include <data/cassandra/Types.h>
+#include <data/cassandra/impl/RetryPolicy.h>
 #include <util/Expected.h>
+#include <util/log/Logger.h>
 #include <boost/asio.hpp>
 #include <functional>
 #include <memory>
-namespace Backend::Cassandra::detail {
+namespace data::cassandra::detail {
 /**
  * @brief A query executor with a changable retry policy
@@ -48,16 +48,17 @@ template <
     typename StatementType,
     typename HandleType = Handle,
     SomeRetryPolicy RetryPolicyType = ExponentialBackoffRetryPolicy>
-class AsyncExecutor : public std::enable_shared_from_this<AsyncExecutor<StatementType, HandleType, RetryPolicyType>>
-{
+class AsyncExecutor : public std::enable_shared_from_this<AsyncExecutor<StatementType, HandleType, RetryPolicyType>> {
     using FutureWithCallbackType = typename HandleType::FutureWithCallbackType;
     using CallbackType = std::function<void(typename HandleType::ResultOrErrorType)>;
+    using RetryCallbackType = std::function<void()>;
-    clio::Logger log_{"Backend"};
+    util::Logger log_{"Backend"};
     StatementType data_;
     RetryPolicyType retryPolicy_;
     CallbackType onComplete_;
+    RetryCallbackType onRetry_;
     // does not exist during initial construction, hence optional
     std::optional<FutureWithCallbackType> future_;
@@ -68,24 +69,37 @@ public:
     /**
      * @brief Create a new instance of the AsyncExecutor and execute it.
      */
     static void
-    run(boost::asio::io_context& ioc, HandleType const& handle, StatementType&& data, CallbackType&& onComplete)
+    run(boost::asio::io_context& ioc,
+        HandleType const& handle,
+        StatementType&& data,
+        CallbackType&& onComplete,
+        RetryCallbackType&& onRetry)
     {
         // this is a helper that allows us to use std::make_shared below
-        struct EnableMakeShared : public AsyncExecutor<StatementType, HandleType, RetryPolicyType>
-        {
-            EnableMakeShared(boost::asio::io_context& ioc, StatementType&& data, CallbackType&& onComplete)
-                : AsyncExecutor(ioc, std::move(data), std::move(onComplete))
+        struct EnableMakeShared : public AsyncExecutor<StatementType, HandleType, RetryPolicyType> {
+            EnableMakeShared(
+                boost::asio::io_context& ioc,
+                StatementType&& data,
+                CallbackType&& onComplete,
+                RetryCallbackType&& onRetry
+            )
+                : AsyncExecutor(ioc, std::move(data), std::move(onComplete), std::move(onRetry))
             {
             }
         };
-        auto ptr = std::make_shared<EnableMakeShared>(ioc, std::move(data), std::move(onComplete));
+        auto ptr = std::make_shared<EnableMakeShared>(ioc, std::move(data), std::move(onComplete), std::move(onRetry));
         ptr->execute(handle);
     }
 private:
-    AsyncExecutor(boost::asio::io_context& ioc, StatementType&& data, CallbackType&& onComplete)
-        : data_{std::move(data)}, retryPolicy_{ioc}, onComplete_{std::move(onComplete)}
+    AsyncExecutor(
+        boost::asio::io_context& ioc,
+        StatementType&& data,
+        CallbackType&& onComplete,
+        RetryCallbackType&& onRetry
+    )
+        : data_{std::move(data)}, retryPolicy_{ioc}, onComplete_{std::move(onComplete)}, onRetry_{std::move(onRetry)}
     {
     }
@@ -96,24 +110,23 @@ private:
         // lifetime is extended by capturing self ptr
         auto handler = [this, &handle, self](auto&& res) mutable {
-            if (res)
-            {
-                onComplete_(std::move(res));
-            }
-            else
-            {
-                if (retryPolicy_.shouldRetry(res.error()))
+            if (res) {
+                onComplete_(std::forward<decltype(res)>(res));
+            } else {
+                if (retryPolicy_.shouldRetry(res.error())) {
+                    onRetry_();
                     retryPolicy_.retry([self, &handle]() { self->execute(handle); });
-                else
-                    onComplete_(std::move(res)); // report error
+                } else {
+                    onComplete_(std::forward<decltype(res)>(res)); // report error
+                }
             }
             self = nullptr; // explicitly decrement refcount
         };
-        std::scoped_lock lck{mtx_};
+        std::scoped_lock const lck{mtx_};
         future_.emplace(handle.asyncExecute(data_, std::move(handler)));
     }
 };
-} // namespace Backend::Cassandra::detail
+} // namespace data::cassandra::detail
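`AsyncExecutor` now takes a second callback: `onRetry_` fires every time the retry policy decides to re-execute the statement (useful e.g. for counters or metrics), while `onComplete_` still receives exactly one final result. A condensed, synchronous sketch of that control flow (types and names simplified; not the real driver API):

```cpp
#include <cassert>
#include <functional>

// Simplified retry loop: onRetry fires before every re-attempt,
// onComplete fires exactly once with the final outcome.
bool
executeWithRetry(
    std::function<bool()> attempt,         // returns true on success
    std::function<bool(int)> shouldRetry,  // retry policy, given attempt count
    std::function<void()> onRetry,
    std::function<void(bool)> onComplete
)
{
    int attempts = 0;
    for (;;) {
        ++attempts;
        if (attempt()) {
            onComplete(true);
            return true;
        }
        if (!shouldRetry(attempts)) {
            onComplete(false);  // report error
            return false;
        }
        onRetry();  // notify observer, then try again
    }
}
```

The real class does the same thing asynchronously, keeping itself alive through `shared_from_this` until the final `onComplete_` call.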


@@ -17,40 +17,39 @@
  */
 //==============================================================================
-#include <backend/cassandra/Error.h>
-#include <backend/cassandra/impl/Batch.h>
-#include <backend/cassandra/impl/Statement.h>
+#include <data/cassandra/Error.h>
+#include <data/cassandra/impl/Batch.h>
+#include <data/cassandra/impl/Statement.h>
 #include <util/Expected.h>
 #include <exception>
 #include <vector>
 namespace {
-static constexpr auto batchDeleter = [](CassBatch* ptr) { cass_batch_free(ptr); };
-};
+constexpr auto batchDeleter = [](CassBatch* ptr) { cass_batch_free(ptr); };
+} // namespace
-namespace Backend::Cassandra::detail {
+namespace data::cassandra::detail {
-// todo: use an appropritae value instead of CASS_BATCH_TYPE_LOGGED for
-// different use cases
+// TODO: Use an appropriate value instead of CASS_BATCH_TYPE_LOGGED for different use cases
 Batch::Batch(std::vector<Statement> const& statements)
     : ManagedObject{cass_batch_new(CASS_BATCH_TYPE_LOGGED), batchDeleter}
 {
     cass_batch_set_is_idempotent(*this, cass_true);
-    for (auto const& statement : statements)
+    for (auto const& statement : statements) {
         if (auto const res = add(statement); not res)
             throw std::runtime_error("Failed to add statement to batch: " + res.error());
+    }
 }
 MaybeError
 Batch::add(Statement const& statement)
 {
-    if (auto const rc = cass_batch_add_statement(*this, statement); rc != CASS_OK)
-    {
+    if (auto const rc = cass_batch_add_statement(*this, statement); rc != CASS_OK) {
         return Error{CassandraError{cass_error_desc(rc), rc}};
     }
     return {};
 }
-} // namespace Backend::Cassandra::detail
+} // namespace data::cassandra::detail


@@ -19,19 +19,18 @@
 #pragma once
-#include <backend/cassandra/Types.h>
-#include <backend/cassandra/impl/ManagedObject.h>
+#include <data/cassandra/Types.h>
+#include <data/cassandra/impl/ManagedObject.h>
 #include <cassandra.h>
-namespace Backend::Cassandra::detail {
+namespace data::cassandra::detail {
-struct Batch : public ManagedObject<CassBatch>
-{
+struct Batch : public ManagedObject<CassBatch> {
     Batch(std::vector<Statement> const& statements);
     MaybeError
     add(Statement const& statement);
 };
-} // namespace Backend::Cassandra::detail
+} // namespace data::cassandra::detail


@@ -17,20 +17,21 @@
  */
 //==============================================================================
-#include <backend/cassandra/impl/Cluster.h>
-#include <backend/cassandra/impl/SslContext.h>
-#include <backend/cassandra/impl/Statement.h>
+#include <data/cassandra/impl/Cluster.h>
+#include <data/cassandra/impl/SslContext.h>
+#include <data/cassandra/impl/Statement.h>
 #include <util/Expected.h>
+#include <fmt/core.h>
 #include <exception>
 #include <vector>
 namespace {
-static constexpr auto clusterDeleter = [](CassCluster* ptr) { cass_cluster_free(ptr); };
+constexpr auto clusterDeleter = [](CassCluster* ptr) { cass_cluster_free(ptr); };
 template <class... Ts>
-struct overloadSet : Ts...
-{
+struct overloadSet : Ts... {
     using Ts::operator()...;
 };
@@ -39,53 +40,46 @@ template <class... Ts>
 overloadSet(Ts...) -> overloadSet<Ts...>;
 }; // namespace
-namespace Backend::Cassandra::detail {
+namespace data::cassandra::detail {
 Cluster::Cluster(Settings const& settings) : ManagedObject{cass_cluster_new(), clusterDeleter}
 {
     using std::to_string;
     cass_cluster_set_token_aware_routing(*this, cass_true);
-    if (auto const rc = cass_cluster_set_protocol_version(*this, CASS_PROTOCOL_VERSION_V4); rc != CASS_OK)
-    {
-        throw std::runtime_error(std::string{"Error setting cassandra protocol version to v4: "} + cass_error_desc(rc));
+    if (auto const rc = cass_cluster_set_protocol_version(*this, CASS_PROTOCOL_VERSION_V4); rc != CASS_OK) {
+        throw std::runtime_error(fmt::format("Error setting cassandra protocol version to v4: {}", cass_error_desc(rc))
+        );
     }
-    if (auto const rc = cass_cluster_set_num_threads_io(*this, settings.threads); rc != CASS_OK)
-    {
+    if (auto const rc = cass_cluster_set_num_threads_io(*this, settings.threads); rc != CASS_OK) {
         throw std::runtime_error(
-            std::string{"Error setting cassandra io threads to "} + to_string(settings.threads) + ": " +
-            cass_error_desc(rc));
+            fmt::format("Error setting cassandra io threads to {}: {}", settings.threads, cass_error_desc(rc))
+        );
     }
     cass_log_set_level(settings.enableLog ? CASS_LOG_TRACE : CASS_LOG_DISABLED);
     cass_cluster_set_connect_timeout(*this, settings.connectionTimeout.count());
     cass_cluster_set_request_timeout(*this, settings.requestTimeout.count());
-    // TODO: other options to experiment with and consider later:
-    // cass_cluster_set_max_concurrent_requests_threshold(*this, 10000);
-    // cass_cluster_set_queue_size_event(*this, 100000);
-    // cass_cluster_set_queue_size_io(*this, 100000);
-    // cass_cluster_set_write_bytes_high_water_mark(*this, 16 * 1024 * 1024); // 16mb
-    // cass_cluster_set_write_bytes_low_water_mark(*this, 8 * 1024 * 1024); // half of allowance
-    // cass_cluster_set_pending_requests_high_water_mark(*this, 5000);
-    // cass_cluster_set_pending_requests_low_water_mark(*this, 2500); // half
-    // cass_cluster_set_max_requests_per_flush(*this, 1000);
-    // cass_cluster_set_max_concurrent_creation(*this, 8);
-    // cass_cluster_set_max_connections_per_host(*this, 6);
-    // cass_cluster_set_core_connections_per_host(*this, 4);
-    // cass_cluster_set_constant_speculative_execution_policy(*this, 1000, 1024);
+    if (auto const rc = cass_cluster_set_core_connections_per_host(*this, settings.coreConnectionsPerHost);
+        rc != CASS_OK) {
+        throw std::runtime_error(fmt::format("Could not set core connections per host: {}", cass_error_desc(rc)));
+    }
-    if (auto const rc = cass_cluster_set_queue_size_io(
-            *this, settings.maxWriteRequestsOutstanding + settings.maxReadRequestsOutstanding);
-        rc != CASS_OK)
-    {
-        throw std::runtime_error(std::string{"Could not set queue size for IO per host: "} + cass_error_desc(rc));
+    auto const queueSize =
+        settings.queueSizeIO.value_or(settings.maxWriteRequestsOutstanding + settings.maxReadRequestsOutstanding);
+    if (auto const rc = cass_cluster_set_queue_size_io(*this, queueSize); rc != CASS_OK) {
+        throw std::runtime_error(fmt::format("Could not set queue size for IO per host: {}", cass_error_desc(rc)));
     }
     setupConnection(settings);
     setupCertificate(settings);
     setupCredentials(settings);
+    LOG(log_.info()) << "Threads: " << settings.threads;
+    LOG(log_.info()) << "Core connections per host: " << settings.coreConnectionsPerHost;
+    LOG(log_.info()) << "IO queue size: " << queueSize;
 }
 void
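The hardcoded driver-tuning comment block gives way to two configurable knobs: `core_connections_per_host`, and an optional `queue_size_io` that falls back to the sum of outstanding read and write requests via `std::optional::value_or`. The fallback in isolation (function name is illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Mirrors the queue-size fallback: an explicit queue_size_io config value wins,
// otherwise the IO queue is sized to hold every outstanding read and write.
std::uint32_t
effectiveQueueSize(std::optional<std::uint32_t> queueSizeIO, std::uint32_t maxWrite, std::uint32_t maxRead)
{
    return queueSizeIO.value_or(maxWrite + maxRead);
}
```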
@@ -95,7 +89,8 @@ Cluster::setupConnection(Settings const& settings)
         overloadSet{
             [this](Settings::ContactPoints const& points) { setupContactPoints(points); },
             [this](Settings::SecureConnectionBundle const& bundle) { setupSecureBundle(bundle); }},
-        settings.connectionInfo);
+        settings.connectionInfo
+    );
 }
 void
@@ -103,18 +98,20 @@ Cluster::setupContactPoints(Settings::ContactPoints const& points)
 {
     using std::to_string;
     auto throwErrorIfNeeded = [](CassError rc, std::string const& label, std::string const& value) {
-        if (rc != CASS_OK)
-            throw std::runtime_error("Cassandra: Error setting " + label + " [" + value + "]: " + cass_error_desc(rc));
+        if (rc != CASS_OK) {
+            throw std::runtime_error(
+                fmt::format("Cassandra: Error setting {} [{}]: {}", label, value, cass_error_desc(rc))
+            );
+        }
     };
     {
-        log_.debug() << "Attempt connection using contact points: " << points.contactPoints;
+        LOG(log_.debug()) << "Attempt connection using contact points: " << points.contactPoints;
         auto const rc = cass_cluster_set_contact_points(*this, points.contactPoints.data());
         throwErrorIfNeeded(rc, "contact_points", points.contactPoints);
     }
-    if (points.port)
-    {
+    if (points.port) {
         auto const rc = cass_cluster_set_port(*this, points.port.value());
         throwErrorIfNeeded(rc, "port", to_string(points.port.value()));
     }
@@ -123,10 +120,9 @@ Cluster::setupContactPoints(Settings::ContactPoints const& points)
void
Cluster::setupSecureBundle(Settings::SecureConnectionBundle const& bundle)
{
LOG(log_.debug()) << "Attempt connection using secure bundle";
if (auto const rc = cass_cluster_set_cloud_secure_connection_bundle(*this, bundle.bundle.data()); rc != CASS_OK) {
throw std::runtime_error("Failed to connect using secure connection bundle " + bundle.bundle);
}
}
@@ -136,8 +132,8 @@ Cluster::setupCertificate(Settings const& settings)
if (not settings.certificate)
return;
LOG(log_.debug()) << "Configure SSL context";
SslContext const context = SslContext(*settings.certificate);
cass_cluster_set_ssl(*this, context);
}
@@ -147,8 +143,8 @@ Cluster::setupCredentials(Settings const& settings)
if (not settings.username || not settings.password)
return;
LOG(log_.debug()) << "Set credentials; username: " << settings.username.value();
cass_cluster_set_credentials(*this, settings.username.value().c_str(), settings.password.value().c_str());
}
} // namespace data::cassandra::detail


@@ -19,8 +19,8 @@
#pragma once
#include <data/cassandra/impl/ManagedObject.h>
#include <util/log/Logger.h>
#include <cassandra.h>
@@ -31,32 +31,71 @@
#include <thread>
#include <variant>
namespace data::cassandra::detail {
// TODO: move Settings to public interface, not detail
/**
 * @brief Bundles all cassandra settings in one place.
 */
struct Settings {
static constexpr std::size_t DEFAULT_CONNECTION_TIMEOUT = 10000;
static constexpr uint32_t DEFAULT_MAX_WRITE_REQUESTS_OUTSTANDING = 10'000;
static constexpr uint32_t DEFAULT_MAX_READ_REQUESTS_OUTSTANDING = 100'000;
/**
 * @brief Represents the configuration of contact points for cassandra.
 */
struct ContactPoints {
std::string contactPoints = "127.0.0.1"; // defaults to localhost
std::optional<uint16_t> port = {};
};
/**
 * @brief Represents the configuration of a secure connection bundle.
 */
struct SecureConnectionBundle {
std::string bundle; // no meaningful default
};
/** @brief Enables or disables cassandra driver logger */
bool enableLog = false;
/** @brief Connect timeout specified in milliseconds */
std::chrono::milliseconds connectionTimeout = std::chrono::milliseconds{DEFAULT_CONNECTION_TIMEOUT};
/** @brief Request timeout specified in milliseconds */
std::chrono::milliseconds requestTimeout = std::chrono::milliseconds{0}; // no timeout at all
/** @brief Connection information; either ContactPoints or SecureConnectionBundle */
std::variant<ContactPoints, SecureConnectionBundle> connectionInfo = ContactPoints{};
/** @brief The number of threads for the driver to pool */
uint32_t threads = std::thread::hardware_concurrency();
/** @brief The maximum number of outstanding write requests at any given moment */
uint32_t maxWriteRequestsOutstanding = DEFAULT_MAX_WRITE_REQUESTS_OUTSTANDING;
/** @brief The maximum number of outstanding read requests at any given moment */
uint32_t maxReadRequestsOutstanding = DEFAULT_MAX_READ_REQUESTS_OUTSTANDING;
/** @brief The number of connections per host to always have active */
uint32_t coreConnectionsPerHost = 1u;
/** @brief Size of the IO queue */
std::optional<uint32_t> queueSizeIO{};
/** @brief SSL certificate */
std::optional<std::string> certificate{}; // ssl context
/** @brief Username/login */
std::optional<std::string> username{};
/** @brief Password to match the `username` */
std::optional<std::string> password{};
/**
* @brief Creates a new Settings object as a copy of the current one with overridden contact points.
*/
Settings
withContactPoints(std::string_view contactPoints)
{
@@ -65,6 +104,9 @@ struct Settings
return tmp;
}
/**
* @brief Returns the default settings.
*/
static Settings
defaultSettings()
{
@@ -72,9 +114,8 @@ struct Settings
}
};
class Cluster : public ManagedObject<CassCluster> {
util::Logger log_{"Backend"};
public:
Cluster(Settings const& settings);
@@ -96,4 +137,4 @@ private:
setupCredentials(Settings const& settings);
};
} // namespace data::cassandra::detail
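The `withContactPoints` helper above follows a copy-with-override pattern: it copies the current `Settings` value, replaces one field, and returns the copy, leaving the original untouched. A minimal standalone sketch of that pattern, using a hypothetical `MiniSettings` stand-in (not the real clio type, and with only a few of its fields):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <string>
#include <string_view>

// Simplified stand-in for the Settings struct above: a value-type config
// whose "with*" helpers return a modified copy instead of mutating in place.
struct MiniSettings {
    std::string contactPoints = "127.0.0.1";
    std::optional<uint16_t> port = {};
    uint32_t maxWriteRequestsOutstanding = 10'000;

    MiniSettings
    withContactPoints(std::string_view cp) const
    {
        MiniSettings tmp = *this;             // copy current settings
        tmp.contactPoints = std::string{cp};  // override a single field
        return tmp;
    }

    static MiniSettings
    defaultSettings()
    {
        return MiniSettings{};
    }
};
```

The benefit of the copy-with-override style is that a shared default `Settings` can never be mutated by one consumer tweaking its own connection info.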


@@ -0,0 +1,87 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <data/cassandra/impl/ManagedObject.h>
#include <ripple/basics/base_uint.h>
#include <cassandra.h>
#include <string>
#include <string_view>
namespace data::cassandra::detail {
class Collection : public ManagedObject<CassCollection> {
static constexpr auto deleter = [](CassCollection* ptr) { cass_collection_free(ptr); };
static void
throwErrorIfNeeded(CassError const rc, std::string_view const label)
{
if (rc == CASS_OK)
return;
auto const tag = '[' + std::string{label} + ']';
throw std::logic_error(tag + ": " + cass_error_desc(rc));
}
public:
/* implicit */ Collection(CassCollection* ptr);
template <typename Type>
explicit Collection(std::vector<Type> const& value)
: ManagedObject{cass_collection_new(CASS_COLLECTION_TYPE_LIST, value.size()), deleter}
{
bind(value);
}
template <typename Type>
void
bind(std::vector<Type> const& values) const
{
for (auto const& value : values)
append(value);
}
void
append(bool const value) const
{
auto const rc = cass_collection_append_bool(*this, value ? cass_true : cass_false);
throwErrorIfNeeded(rc, "Bind bool");
}
void
append(int64_t const value) const
{
auto const rc = cass_collection_append_int64(*this, value);
throwErrorIfNeeded(rc, "Bind int64");
}
void
append(ripple::uint256 const& value) const
{
auto const rc = cass_collection_append_bytes(
*this,
static_cast<cass_byte_t const*>(static_cast<unsigned char const*>(value.data())),
ripple::uint256::size()
);
throwErrorIfNeeded(rc, "Bind ripple::uint256");
}
};
} // namespace data::cassandra::detail
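`Collection` above owns a raw `CassCollection*` through `ManagedObject` with a custom deleter, and converts implicitly to the raw pointer so it can be handed straight to the C driver functions. A standalone sketch of that RAII pattern, with a hypothetical fake C API standing in for the Cassandra driver (the `ManagedObject` here is a simplified stand-in, not clio's actual template):

```cpp
#include <cassert>
#include <memory>

// Minimal RAII wrapper in the spirit of ManagedObject<T>: owns a raw pointer
// with a custom deleter and converts implicitly to T* for C driver calls.
template <typename T>
class ManagedObject {
    std::unique_ptr<T, void (*)(T*)> ptr_;

public:
    ManagedObject(T* ptr, void (*deleter)(T*)) : ptr_{ptr, deleter} {}
    operator T*() const { return ptr_.get(); }
};

// Hypothetical C API with a create/free pair, standing in for
// cass_collection_new / cass_collection_free.
struct FakeCollection { int items = 0; };
inline int liveCollections = 0;
inline FakeCollection* fakeCollectionNew() { ++liveCollections; return new FakeCollection{}; }
inline void fakeCollectionFree(FakeCollection* p) { --liveCollections; delete p; }
inline void fakeCollectionAppend(FakeCollection* c) { ++c->items; }

class Collection : public ManagedObject<FakeCollection> {
public:
    Collection() : ManagedObject{fakeCollectionNew(), &fakeCollectionFree} {}
    // The implicit conversion to FakeCollection* feeds the C API directly.
    void append() const { fakeCollectionAppend(*this); }
    int size() const { return static_cast<FakeCollection*>(*this)->items; }
};
```

When the wrapper goes out of scope, the deleter runs exactly once, which is the whole point of routing every driver handle through one ownership template.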


@@ -19,13 +19,15 @@
#pragma once
#include <data/BackendCounters.h>
#include <data/BackendInterface.h>
#include <data/cassandra/Handle.h>
#include <data/cassandra/Types.h>
#include <data/cassandra/impl/AsyncExecutor.h>
#include <util/Expected.h>
#include <util/log/Logger.h>
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <atomic>
@@ -36,19 +38,19 @@
#include <optional>
#include <thread>
namespace data::cassandra::detail {
// TODO: this could probably be also moved out of detail and into the main cassandra namespace.
/**
 * @brief Implements async and sync querying against the cassandra DB with support for throttling.
 *
 * Note: A lot of the code that uses yield is repeated below.
 * This is ok for now because we are hopefully going to be getting rid of it entirely later on.
 */
template <typename HandleType = Handle, SomeBackendCounters BackendCountersType = BackendCounters>
class DefaultExecutionStrategy {
util::Logger log_{"Backend"};
std::uint32_t maxWriteRequestsOutstanding_;
std::atomic_uint32_t numWriteRequestsOutstanding_ = 0;
@@ -68,6 +70,8 @@ class DefaultExecutionStrategy
std::reference_wrapper<HandleType const> handle_;
std::thread thread_;
typename BackendCountersType::PtrType counters_;
public:
using ResultOrErrorType = typename HandleType::ResultOrErrorType;
using StatementType = typename HandleType::StatementType;
@@ -75,20 +79,25 @@ public:
using FutureType = typename HandleType::FutureType;
using FutureWithCallbackType = typename HandleType::FutureWithCallbackType;
using ResultType = typename HandleType::ResultType;
using CompletionTokenType = boost::asio::yield_context;
/**
 * @param settings The settings to use
 * @param handle A handle to the cassandra database
 */
DefaultExecutionStrategy(
Settings const& settings,
HandleType const& handle,
typename BackendCountersType::PtrType counters = BackendCountersType::make()
)
: maxWriteRequestsOutstanding_{settings.maxWriteRequestsOutstanding}
, maxReadRequestsOutstanding_{settings.maxReadRequestsOutstanding}
, work_{ioc_}
, handle_{std::cref(handle)}
, thread_{[this]() { ioc_.run(); }}
, counters_{std::move(counters)}
{
LOG(log_.info()) << "Max write requests outstanding is " << maxWriteRequestsOutstanding_
<< "; Max read requests outstanding is " << maxReadRequestsOutstanding_;
}
@@ -100,47 +109,52 @@ public:
}
/**
 * @brief Wait for all async writes to finish before unblocking.
 */
void
sync()
{
LOG(log_.debug()) << "Waiting to sync all writes...";
std::unique_lock<std::mutex> lck(syncMutex_);
syncCv_.wait(lck, [this]() { return finishedAllWriteRequests(); });
LOG(log_.debug()) << "Sync done.";
}
/**
 * @return true if outstanding read requests allowance is exhausted; false otherwise
 */
bool
isTooBusy() const
{
bool const result = numReadRequestsOutstanding_ >= maxReadRequestsOutstanding_;
if (result)
counters_->registerTooBusy();
return result;
}
/**
 * @brief Blocking query execution used for writing data.
 *
 * Retries forever sleeping for 5 milliseconds between attempts.
 */
ResultOrErrorType
writeSync(StatementType const& statement)
{
counters_->registerWriteSync();
while (true) {
auto res = handle_.get().execute(statement);
if (res) {
return res;
}
counters_->registerWriteSyncRetry();
LOG(log_.warn()) << "Cassandra sync write error, retrying: " << res.error();
std::this_thread::sleep_for(std::chrono::milliseconds(5));
}
}
/**
 * @brief Blocking query execution used for writing data.
 *
 * Retries forever sleeping for 5 milliseconds between attempts.
 */
@@ -152,11 +166,11 @@ public:
}
/**
 * @brief Non-blocking query execution used for writing data.
 *
 * Retries forever with retry policy specified by @ref AsyncExecutor
 *
 * @param preparedStatement Statement to prepare and execute
 * @param args Args to bind to the prepared statement
 * @throw DatabaseTimeout on timeout
 */
@@ -167,13 +181,23 @@ public:
auto statement = preparedStatement.bind(std::forward<Args>(args)...);
incrementOutstandingRequestCount();
counters_->registerWriteStarted();
// Note: lifetime is controlled by std::shared_from_this internally
AsyncExecutor<std::decay_t<decltype(statement)>, HandleType>::run(
ioc_,
handle_,
std::move(statement),
[this](auto const&) {
decrementOutstandingRequestCount();
counters_->registerWriteFinished();
},
[this]() { counters_->registerWriteRetry(); }
);
}
/**
 * @brief Non-blocking batched query execution used for writing data.
 *
 * Retries forever with retry policy specified by @ref AsyncExecutor.
 *
@@ -188,9 +212,18 @@ public:
incrementOutstandingRequestCount();
counters_->registerWriteStarted();
// Note: lifetime is controlled by std::shared_from_this internally
AsyncExecutor<std::decay_t<decltype(statements)>, HandleType>::run(
ioc_,
handle_,
std::move(statements),
[this](auto const&) {
decrementOutstandingRequestCount();
counters_->registerWriteFinished();
},
[this]() { counters_->registerWriteRetry(); }
);
}
/**
@@ -199,7 +232,7 @@ public:
 * Retries forever until successful or throws an exception on timeout.
 *
 * @param token Completion token (yield_context)
 * @param preparedStatement Statement to prepare and execute
 * @param args Args to bind to the prepared statement
 * @throw DatabaseTimeout on timeout
 * @return ResultType or error wrapped in Expected
@@ -224,37 +257,43 @@ public:
[[maybe_unused]] ResultOrErrorType
read(CompletionTokenType token, std::vector<StatementType> const& statements)
{
auto const numStatements = statements.size();
std::optional<FutureWithCallbackType> future;
counters_->registerReadStarted(numStatements);
// todo: perhaps use policy instead
while (true) {
numReadRequestsOutstanding_ += numStatements;
auto init = [this, &statements, &future]<typename Self>(Self& self) {
auto sself = std::make_shared<Self>(std::move(self));
future.emplace(handle_.get().asyncExecute(statements, [sself](auto&& res) mutable {
boost::asio::post(
boost::asio::get_associated_executor(*sself),
[sself, res = std::forward<decltype(res)>(res)]() mutable { sself->complete(std::move(res)); }
);
}));
};
auto res = boost::asio::async_compose<CompletionTokenType, void(ResultOrErrorType)>(
init, token, boost::asio::get_associated_executor(token)
);
numReadRequestsOutstanding_ -= numStatements;
if (res) {
counters_->registerReadFinished(numStatements);
return res;
}
LOG(log_.error()) << "Failed batch read in coroutine: " << res.error();
try {
throwErrorIfNeeded(res.error());
} catch (...) {
counters_->registerReadError(numStatements);
throw;
}
counters_->registerReadRetry(numStatements);
}
}
@@ -271,36 +310,41 @@ public:
[[maybe_unused]] ResultOrErrorType
read(CompletionTokenType token, StatementType const& statement)
{
std::optional<FutureWithCallbackType> future;
counters_->registerReadStarted();
// todo: perhaps use policy instead
while (true) {
++numReadRequestsOutstanding_;
auto init = [this, &statement, &future]<typename Self>(Self& self) {
auto sself = std::make_shared<Self>(std::move(self));
future.emplace(handle_.get().asyncExecute(statement, [sself](auto&& res) mutable {
boost::asio::post(
boost::asio::get_associated_executor(*sself),
[sself, res = std::forward<decltype(res)>(res)]() mutable { sself->complete(std::move(res)); }
);
}));
};
auto res = boost::asio::async_compose<CompletionTokenType, void(ResultOrErrorType)>(
init, token, boost::asio::get_associated_executor(token)
);
--numReadRequestsOutstanding_;
if (res) {
counters_->registerReadFinished();
return res;
}
LOG(log_.error()) << "Failed read in coroutine: " << res.error();
try {
throwErrorIfNeeded(res.error());
} catch (...) {
counters_->registerReadError();
throw;
}
counters_->registerReadRetry();
}
}
@@ -318,26 +362,26 @@ public:
std::vector<ResultType>
readEach(CompletionTokenType token, std::vector<StatementType> const& statements)
{
std::atomic_uint64_t errorsCount = 0u;
std::atomic_int numOutstanding = statements.size();
numReadRequestsOutstanding_ += statements.size();
auto futures = std::vector<FutureWithCallbackType>{};
futures.reserve(numOutstanding);
counters_->registerReadStarted(statements.size());
auto init = [this, &statements, &futures, &errorsCount, &numOutstanding]<typename Self>(Self& self) {
auto sself = std::make_shared<Self>(std::move(self));
// used as the handler for each async statement individually
auto executionHandler = [&errorsCount, &numOutstanding, sself](auto const& res) mutable {
if (not res)
++errorsCount;
// when all async operations complete unblock the result
if (--numOutstanding == 0) {
boost::asio::post(boost::asio::get_associated_executor(*sself), [sself]() mutable {
sself->complete();
});
}
};
std::transform(
@@ -346,21 +390,27 @@ public:
std::back_inserter(futures),
[this, &executionHandler](auto const& statement) {
return handle_.get().asyncExecute(statement, executionHandler);
}
);
};
boost::asio::async_compose<CompletionTokenType, void()>(
init, token, boost::asio::get_associated_executor(token)
);
numReadRequestsOutstanding_ -= statements.size();
if (errorsCount > 0) {
assert(errorsCount <= statements.size());
counters_->registerReadError(errorsCount);
counters_->registerReadFinished(statements.size() - errorsCount);
throw DatabaseTimeout{};
}
counters_->registerReadFinished(statements.size());
std::vector<ResultType> results;
results.reserve(futures.size());
// it's safe to call blocking get on futures here as we already waited for the coroutine to resume above.
std::transform(
std::make_move_iterator(std::begin(futures)),
std::make_move_iterator(std::end(futures)),
@@ -369,22 +419,31 @@ public:
auto entry = future.get();
auto&& res = entry.value();
return std::move(res);
}
);
assert(futures.size() == statements.size());
assert(results.size() == statements.size());
return results;
}
/**
* @brief Get statistics about the backend.
*/
boost::json::object
stats() const
{
return counters_->report();
}
private:
void
incrementOutstandingRequestCount()
{
{
std::unique_lock<std::mutex> lck(throttleMutex_);
if (!canAddWriteRequest()) {
LOG(log_.trace()) << "Max outstanding requests reached. "
<< "Waiting for other requests to finish";
throttleCv_.wait(lck, [this]() { return canAddWriteRequest(); });
}
@@ -396,23 +455,21 @@ private:
decrementOutstandingRequestCount()
{
// sanity check
if (numWriteRequestsOutstanding_ == 0) {
assert(false);
throw std::runtime_error("decrementing num outstanding below 0");
}
size_t const cur = (--numWriteRequestsOutstanding_);
{
// mutex lock required to prevent race condition around spurious
// wakeup
std::lock_guard const lck(throttleMutex_);
throttleCv_.notify_one();
}
if (cur == 0) {
// mutex lock required to prevent race condition around spurious
// wakeup
std::lock_guard const lck(syncMutex_);
syncCv_.notify_one();
}
}
@@ -440,4 +497,4 @@ private:
}
};
} // namespace data::cassandra::detail
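The `incrementOutstandingRequestCount` / `decrementOutstandingRequestCount` pair above throttles writers with a condition variable: a writer blocks once `maxWriteRequestsOutstanding_` requests are in flight, and each completion notifies one waiter under the mutex (the lock on the notify side is what prevents the missed-wakeup race the comments mention). A standalone sketch of the same scheme, with simplified names and no Cassandra involved:

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <thread>
#include <vector>

// Condition-variable throttle in the spirit of DefaultExecutionStrategy:
// at most `maxOutstanding` requests may be "in flight" at once.
class WriteThrottle {
    std::uint32_t maxOutstanding_;
    std::atomic_uint32_t numOutstanding_{0};
    std::mutex mtx_;
    std::condition_variable cv_;

public:
    explicit WriteThrottle(std::uint32_t maxOutstanding) : maxOutstanding_{maxOutstanding} {}

    void acquire()
    {
        std::unique_lock lck{mtx_};
        cv_.wait(lck, [this] { return numOutstanding_ < maxOutstanding_; });
        ++numOutstanding_;  // incremented under the lock, so the cap holds
    }

    void release()
    {
        --numOutstanding_;
        std::lock_guard lck{mtx_};  // lock prevents a wakeup racing past wait()
        cv_.notify_one();
    }

    std::uint32_t outstanding() const { return numOutstanding_.load(); }
};

// Push `total` dummy writes through the throttle from worker threads and
// record the highest concurrency ever observed.
inline std::uint32_t maxObservedConcurrency(std::uint32_t limit, int total)
{
    WriteThrottle throttle{limit};
    std::atomic_uint32_t peak{0};
    std::vector<std::thread> workers;
    for (int i = 0; i < total; ++i) {
        workers.emplace_back([&] {
            throttle.acquire();
            auto cur = throttle.outstanding();
            auto prev = peak.load();
            while (cur > prev && !peak.compare_exchange_weak(prev, cur)) {}
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
            throttle.release();
        });
    }
    for (auto& t : workers)
        t.join();
    return peak.load();
}
```

Because the increment happens only while the predicate `numOutstanding_ < maxOutstanding_` holds under the mutex, the observed concurrency can never exceed the configured limit.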


@@ -17,18 +17,18 @@
 */
//==============================================================================
#include <data/cassandra/Error.h>
#include <data/cassandra/impl/Future.h>
#include <data/cassandra/impl/Result.h>
#include <exception>
#include <vector>
namespace {
constexpr auto futureDeleter = [](CassFuture* ptr) { cass_future_free(ptr); };
} // namespace
namespace data::cassandra::detail {
/* implicit */ Future::Future(CassFuture* ptr) : ManagedObject{ptr, futureDeleter}
{
@@ -37,11 +37,10 @@ namespace Backend::Cassandra::detail {
MaybeError
Future::await() const
{
if (auto const rc = cass_future_error_code(*this); rc) {
auto errMsg = [this](std::string const& label) {
char const* message = nullptr;
std::size_t len = 0;
cass_future_error_message(*this, &message, &len);
return label + ": " + std::string{message, len};
}(cass_error_desc(rc));
@@ -53,45 +52,42 @@ Future::await() const
ResultOrError
Future::get() const
{
if (auto const rc = cass_future_error_code(*this); rc) {
auto const errMsg = [this](std::string const& label) {
char const* message = nullptr;
std::size_t len = 0;
cass_future_error_message(*this, &message, &len);
return label + ": " + std::string{message, len};
}("future::get()");
return Error{CassandraError{errMsg, rc}};
}
return Result{cass_future_get_result(*this)};
}
void
invokeHelper(CassFuture* ptr, void* cbPtr)
{
// Note: can't use Future{ptr}.get() because double free will occur :/
// Note2: we are moving/copying it locally as a workaround for an issue we are seeing from asio recently.
// stackoverflow.com/questions/77004137/boost-asio-async-compose-gets-stuck-under-load
auto* cb = static_cast<FutureWithCallback::FnType*>(cbPtr);
auto local = std::make_unique<FutureWithCallback::FnType>(std::move(*cb));
if (auto const rc = cass_future_error_code(ptr); rc) {
auto const errMsg = [&ptr](std::string const& label) {
char const* message = nullptr;
std::size_t len = 0;
cass_future_error_message(ptr, &message, &len);
return label + ": " + std::string{message, len};
}("invokeHelper");
(*local)(Error{CassandraError{errMsg, rc}});
} else {
(*local)(Result{cass_future_get_result(ptr)});
}
}
/* implicit */ FutureWithCallback::FutureWithCallback(CassFuture* ptr, FnType&& cb)
: Future{ptr}, cb_{std::make_unique<FnType>(std::move(cb))}
{
// Instead of passing `this` as the userdata void*, we pass the address of
// the callback itself which will survive std::move of the
@@ -99,4 +95,4 @@ invokeHelper(CassFuture* ptr, void* cbPtr)
cass_future_set_callback(*this, &invokeHelper, cb_.get()); cass_future_set_callback(*this, &invokeHelper, cb_.get());
} }
} // namespace Backend::Cassandra::detail } // namespace data::cassandra::detail


@@ -19,15 +19,14 @@
#pragma once

#include <data/cassandra/Types.h>
#include <data/cassandra/impl/ManagedObject.h>

#include <cassandra.h>

namespace data::cassandra::detail {

struct Future : public ManagedObject<CassFuture> {
    /* implicit */ Future(CassFuture* ptr);

    MaybeError
@@ -38,21 +37,20 @@ struct Future : public ManagedObject<CassFuture>
};

void
invokeHelper(CassFuture* ptr, void* cbPtr);

class FutureWithCallback : public Future {
public:
    using FnType = std::function<void(ResultOrError)>;
    using FnPtrType = std::unique_ptr<FnType>;

    /* implicit */ FutureWithCallback(CassFuture* ptr, FnType&& cb);
    FutureWithCallback(FutureWithCallback const&) = delete;
    FutureWithCallback(FutureWithCallback&&) = default;

private:
    /** Wrapped in a unique_ptr so it can survive std::move :/ */
    FnPtrType cb_;
};

} // namespace data::cassandra::detail


@@ -21,11 +21,10 @@
#include <memory>

namespace data::cassandra::detail {

template <typename Managed>
class ManagedObject {
protected:
    std::unique_ptr<Managed, void (*)(Managed*)> ptr_;

@@ -36,12 +35,11 @@ public:
        if (rawPtr == nullptr)
            throw std::runtime_error("Could not create DB object - got nullptr");
    }
    ManagedObject(ManagedObject&&) = default;

    operator Managed*() const
    {
        return ptr_.get();
    }
};

} // namespace data::cassandra::detail


@@ -17,14 +17,14 @@
 */
//==============================================================================

#include <data/cassandra/impl/Result.h>

namespace {
constexpr auto resultDeleter = [](CassResult const* ptr) { cass_result_free(ptr); };
constexpr auto resultIteratorDeleter = [](CassIterator* ptr) { cass_iterator_free(ptr); };
} // namespace

namespace data::cassandra::detail {

/* implicit */ Result::Result(CassResult const* ptr) : ManagedObject{ptr, resultDeleter}
{
@@ -43,7 +43,7 @@ Result::hasRows() const
}

/* implicit */ ResultIterator::ResultIterator(CassIterator* ptr)
    : ManagedObject{ptr, resultIteratorDeleter}, hasMore_{cass_iterator_next(ptr) != 0u}
{
}

@@ -56,7 +56,7 @@ ResultIterator::fromResult(Result const& result)
[[maybe_unused]] bool
ResultIterator::moveForward()
{
    hasMore_ = (cass_iterator_next(*this) != 0u);
    return hasMore_;
}

@@ -66,4 +66,4 @@ ResultIterator::hasMore() const
    return hasMore_;
}

} // namespace data::cassandra::detail


@@ -19,8 +19,8 @@
#pragma once

#include <data/cassandra/impl/ManagedObject.h>
#include <data/cassandra/impl/Tuple.h>
#include <util/Expected.h>

#include <ripple/basics/base_uint.h>
@@ -31,7 +31,7 @@
#include <iterator>
#include <tuple>

namespace data::cassandra::detail {

template <typename>
static constexpr bool unsupported_v = false;

@@ -44,80 +44,64 @@ extractColumn(CassRow const* row, std::size_t idx)
    Type output;

    auto throwErrorIfNeeded = [](CassError rc, std::string_view label) {
        if (rc != CASS_OK) {
            auto const tag = '[' + std::string{label} + ']';
            throw std::logic_error(tag + ": " + cass_error_desc(rc));
        }
    };

    using DecayedType = std::decay_t<Type>;
    using UintTupleType = std::tuple<uint32_t, uint32_t>;
    using UCharVectorType = std::vector<unsigned char>;

    if constexpr (std::is_same_v<DecayedType, ripple::uint256>) {
        cass_byte_t const* buf = nullptr;
        std::size_t bufSize = 0;
        auto const rc = cass_value_get_bytes(cass_row_get_column(row, idx), &buf, &bufSize);
        throwErrorIfNeeded(rc, "Extract ripple::uint256");
        output = ripple::uint256::fromVoid(buf);
    } else if constexpr (std::is_same_v<DecayedType, ripple::AccountID>) {
        cass_byte_t const* buf = nullptr;
        std::size_t bufSize = 0;
        auto const rc = cass_value_get_bytes(cass_row_get_column(row, idx), &buf, &bufSize);
        throwErrorIfNeeded(rc, "Extract ripple::AccountID");
        output = ripple::AccountID::fromVoid(buf);
    } else if constexpr (std::is_same_v<DecayedType, UCharVectorType>) {
        cass_byte_t const* buf = nullptr;
        std::size_t bufSize = 0;
        auto const rc = cass_value_get_bytes(cass_row_get_column(row, idx), &buf, &bufSize);
        throwErrorIfNeeded(rc, "Extract vector<unsigned char>");
        output = UCharVectorType{buf, buf + bufSize};
    } else if constexpr (std::is_same_v<DecayedType, UintTupleType>) {
        auto const* tuple = cass_row_get_column(row, idx);
        output = TupleIterator::fromTuple(tuple).extract<uint32_t, uint32_t>();
    } else if constexpr (std::is_convertible_v<DecayedType, std::string>) {
        char const* value = nullptr;
        std::size_t len = 0;
        auto const rc = cass_value_get_string(cass_row_get_column(row, idx), &value, &len);
        throwErrorIfNeeded(rc, "Extract string");
        output = std::string{value, len};
    } else if constexpr (std::is_same_v<DecayedType, bool>) {
        cass_bool_t flag = cass_bool_t::cass_false;
        auto const rc = cass_value_get_bool(cass_row_get_column(row, idx), &flag);
        throwErrorIfNeeded(rc, "Extract bool");
        output = flag != cass_bool_t::cass_false;
    }
    // clio only uses bigint (int64_t) so we convert any incoming type
    else if constexpr (std::is_convertible_v<DecayedType, int64_t>) {
        int64_t out = 0;
        auto const rc = cass_value_get_int64(cass_row_get_column(row, idx), &out);
        throwErrorIfNeeded(rc, "Extract int64");
        output = static_cast<DecayedType>(out);
    } else {
        // type not supported for extraction
        static_assert(unsupported_v<DecayedType>);
    }

    return output;
}

struct Result : public ManagedObject<CassResult const> {
    /* implicit */ Result(CassResult const* ptr);

    [[nodiscard]] std::size_t
@@ -128,7 +112,8 @@ struct Result : public ManagedObject<CassResult const>
    template <typename... RowTypes>
    std::optional<std::tuple<RowTypes...>>
    get() const
        requires(std::tuple_size<std::tuple<RowTypes...>>{} > 1)
    {
        // row managed internally by cassandra driver, hence no ManagedObject.
        auto const* row = cass_result_first_row(*this);
@@ -153,8 +138,7 @@ struct Result : public ManagedObject<CassResult const>
    }
};

class ResultIterator : public ManagedObject<CassIterator> {
    bool hasMore_ = false;

public:
@@ -185,17 +169,13 @@ public:
};

template <typename... Types>
class ResultExtractor {
    std::reference_wrapper<Result const> ref_;

public:
    struct Sentinel {};

    struct Iterator {
        using iterator_category = std::input_iterator_tag;
        using difference_type = std::size_t; // rows count
        using value_type = std::tuple<Types...>;
@@ -254,4 +234,4 @@ public:
    }
};

} // namespace data::cassandra::detail


@@ -19,10 +19,10 @@
#pragma once

#include <data/cassandra/Handle.h>
#include <data/cassandra/Types.h>
#include <util/Expected.h>
#include <util/log/Logger.h>

#include <boost/asio.hpp>
@@ -30,14 +30,13 @@
#include <chrono>
#include <cmath>

namespace data::cassandra::detail {

/**
 * @brief A retry policy that employs exponential backoff
 */
class ExponentialBackoffRetryPolicy {
    util::Logger log_{"Backend"};

    boost::asio::steady_timer timer_;
    uint32_t attempt_ = 0u;

@@ -46,7 +45,7 @@ public:
    /**
     * @brief Create a new retry policy instance with the io_context provided
     */
    ExponentialBackoffRetryPolicy(boost::asio::io_context& ioc) : timer_{boost::asio::make_strand(ioc)}
    {
    }

@@ -59,7 +58,7 @@ public:
    shouldRetry([[maybe_unused]] CassandraError err)
    {
        auto const delay = calculateDelay(attempt_);
        LOG(log_.error()) << "Cassandra write error: " << err << ", current retries " << attempt_ << ", retrying in "
                          << delay.count() << " milliseconds";
        return true; // keep retrying forever
@@ -75,7 +74,7 @@ public:
    retry(Fn&& fn)
    {
        timer_.expires_after(calculateDelay(attempt_++));
        timer_.async_wait([fn = std::forward<Fn>(fn)]([[maybe_unused]] auto const& err) {
            // todo: deal with cancellation (thru err)
            fn();
        });
@@ -84,11 +83,11 @@ public:
    /**
     * @brief Calculates the wait time before attempting another retry
     */
    static std::chrono::milliseconds
    calculateDelay(uint32_t attempt)
    {
        return std::chrono::milliseconds{lround(std::pow(2, std::min(10u, attempt)))};
    }
};

} // namespace data::cassandra::detail


@@ -19,14 +19,13 @@
#pragma once

#include <data/cassandra/impl/ManagedObject.h>

#include <cassandra.h>

namespace data::cassandra::detail {

class Session : public ManagedObject<CassSession> {
    static constexpr auto deleter = [](CassSession* ptr) { cass_session_free(ptr); };

public:
@@ -35,4 +34,4 @@ public:
    }
};

} // namespace data::cassandra::detail


@@ -17,21 +17,20 @@
 */
//==============================================================================

#include <data/cassandra/impl/SslContext.h>

namespace {
constexpr auto contextDeleter = [](CassSsl* ptr) { cass_ssl_free(ptr); };
} // namespace

namespace data::cassandra::detail {

SslContext::SslContext(std::string const& certificate) : ManagedObject{cass_ssl_new(), contextDeleter}
{
    cass_ssl_set_verify_flags(*this, CASS_SSL_VERIFY_NONE);
    if (auto const rc = cass_ssl_add_trusted_cert(*this, certificate.c_str()); rc != CASS_OK) {
        throw std::runtime_error(std::string{"Error setting Cassandra SSL Context: "} + cass_error_desc(rc));
    }
}

} // namespace data::cassandra::detail


@@ -19,17 +19,16 @@
#pragma once

#include <data/cassandra/impl/ManagedObject.h>

#include <cassandra.h>

#include <string>

namespace data::cassandra::detail {

struct SslContext : public ManagedObject<CassSsl> {
    explicit SslContext(std::string const& certificate);
};

} // namespace data::cassandra::detail


@@ -19,9 +19,10 @@
#pragma once

#include <data/cassandra/Types.h>
#include <data/cassandra/impl/Collection.h>
#include <data/cassandra/impl/ManagedObject.h>
#include <data/cassandra/impl/Tuple.h>
#include <util/Expected.h>

#include <ripple/basics/base_uint.h>
@@ -33,10 +34,9 @@
#include <compare>
#include <iterator>

namespace data::cassandra::detail {

class Statement : public ManagedObject<CassStatement> {
    static constexpr auto deleter = [](CassStatement* ptr) { cass_statement_free(ptr); };

    template <typename>
@@ -44,7 +44,7 @@ class Statement : public ManagedObject<CassStatement>
public:
    /**
     * @brief Construct a new statement with optionally provided arguments.
     *
     * Note: it's up to the user to make sure the bound parameters match
     * the format of the query (e.g. amount of '?' matches count of args).
@@ -64,16 +64,25 @@ public:
        cass_statement_set_is_idempotent(*this, cass_true);
    }

    /**
     * @brief Binds the given arguments to the statement.
     *
     * @param args Arguments to bind
     */
    template <typename... Args>
    void
    bind(Args&&... args) const
    {
        std::size_t idx = 0; // NOLINT(misc-const-correctness)
        (this->bindAt<Args>(idx++, std::forward<Args>(args)), ...);
    }

    /**
     * @brief Binds an argument to a specific index.
     *
     * @param idx The index of the argument
     * @param value The value to bind it to
     */
    template <typename Type>
    void
    bindAt(std::size_t const idx, Type&& value) const
@@ -88,62 +97,55 @@ public:
            return cass_statement_bind_bytes(*this, idx, static_cast<cass_byte_t const*>(data), size);
        };

        using DecayedType = std::decay_t<Type>;
        using UCharVectorType = std::vector<unsigned char>;
        using UintTupleType = std::tuple<uint32_t, uint32_t>;
        using UintByteTupleType = std::tuple<uint32_t, ripple::uint256>;
        using ByteVectorType = std::vector<ripple::uint256>;

        if constexpr (std::is_same_v<DecayedType, ripple::uint256>) {
            auto const rc = bindBytes(value.data(), value.size());
            throwErrorIfNeeded(rc, "Bind ripple::uint256");
        } else if constexpr (std::is_same_v<DecayedType, ripple::AccountID>) {
            auto const rc = bindBytes(value.data(), value.size());
            throwErrorIfNeeded(rc, "Bind ripple::AccountID");
        } else if constexpr (std::is_same_v<DecayedType, UCharVectorType>) {
            auto const rc = bindBytes(value.data(), value.size());
            throwErrorIfNeeded(rc, "Bind vector<unsigned char>");
        } else if constexpr (std::is_convertible_v<DecayedType, std::string>) {
            // reinterpret_cast is needed here :'(
            auto const rc = bindBytes(reinterpret_cast<unsigned char const*>(value.data()), value.size());
            throwErrorIfNeeded(rc, "Bind string (as bytes)");
        } else if constexpr (std::is_same_v<DecayedType, UintTupleType> || std::is_same_v<DecayedType, UintByteTupleType>) {
            auto const rc = cass_statement_bind_tuple(*this, idx, Tuple{std::forward<Type>(value)});
            throwErrorIfNeeded(rc, "Bind tuple<uint32, uint32> or <uint32_t, ripple::uint256>");
        } else if constexpr (std::is_same_v<DecayedType, ByteVectorType>) {
            auto const rc = cass_statement_bind_collection(*this, idx, Collection{std::forward<Type>(value)});
            throwErrorIfNeeded(rc, "Bind collection");
        } else if constexpr (std::is_same_v<DecayedType, bool>) {
            auto const rc = cass_statement_bind_bool(*this, idx, value ? cass_true : cass_false);
            throwErrorIfNeeded(rc, "Bind bool");
        } else if constexpr (std::is_same_v<DecayedType, Limit>) {
            auto const rc = cass_statement_bind_int32(*this, idx, value.limit);
            throwErrorIfNeeded(rc, "Bind limit (int32)");
        }
        // clio only uses bigint (int64_t) so we convert any incoming type
        else if constexpr (std::is_convertible_v<DecayedType, int64_t>) {
            auto const rc = cass_statement_bind_int64(*this, idx, value);
            throwErrorIfNeeded(rc, "Bind int64");
        } else {
            // type not supported for binding
            static_assert(unsupported_v<DecayedType>);
        }
    }
};

/**
 * @brief Represents a prepared statement on the DB side.
 *
 * This is used to produce Statement objects that can be executed.
 */
class PreparedStatement : public ManagedObject<CassPrepared const> {
    static constexpr auto deleter = [](CassPrepared const* ptr) { cass_prepared_free(ptr); };

public:
@@ -151,6 +153,12 @@ public:
    {
    }

    /**
     * @brief Bind the given arguments and produce a ready to execute Statement.
     *
     * @param args The arguments to bind
     * @return A bound and ready to execute Statement object
     */
    template <typename... Args>
    Statement
    bind(Args&&... args) const
@@ -161,4 +169,4 @@ public:
    }
};

} // namespace data::cassandra::detail


@@ -17,14 +17,14 @@
 */
//==============================================================================

#include <data/cassandra/impl/Tuple.h>

namespace {
constexpr auto tupleDeleter = [](CassTuple* ptr) { cass_tuple_free(ptr); };
constexpr auto tupleIteratorDeleter = [](CassIterator* ptr) { cass_iterator_free(ptr); };
} // namespace

namespace data::cassandra::detail {

/* implicit */ Tuple::Tuple(CassTuple* ptr) : ManagedObject{ptr, tupleDeleter}
{
@@ -40,4 +40,4 @@ TupleIterator::fromTuple(CassValue const* value)
    return {cass_iterator_from_tuple(value)};
}

} // namespace data::cassandra::detail


@@ -19,8 +19,9 @@
#pragma once

#include <data/cassandra/impl/ManagedObject.h>

#include <ripple/basics/base_uint.h>

#include <cassandra.h>

#include <functional>
@@ -28,10 +29,9 @@
#include <string_view>
#include <tuple>

namespace data::cassandra::detail {

class Tuple : public ManagedObject<CassTuple> {
    static constexpr auto deleter = [](CassTuple* ptr) { cass_tuple_free(ptr); };

    template <typename>
@@ -61,36 +61,38 @@ public:
    {
        using std::to_string;
        auto throwErrorIfNeeded = [idx](CassError rc, std::string_view label) {
            if (rc != CASS_OK) {
                auto const tag = '[' + std::string{label} + ']';
                throw std::logic_error(tag + " at idx " + to_string(idx) + ": " + cass_error_desc(rc));
            }
        };

        using DecayedType = std::decay_t<Type>;

        if constexpr (std::is_same_v<DecayedType, bool>) {
            auto const rc = cass_tuple_set_bool(*this, idx, value ? cass_true : cass_false);
            throwErrorIfNeeded(rc, "Bind bool");
        }
        // clio only uses bigint (int64_t) so we convert any incoming type
        else if constexpr (std::is_convertible_v<DecayedType, int64_t>) {
            auto const rc = cass_tuple_set_int64(*this, idx, value);
            throwErrorIfNeeded(rc, "Bind int64");
        } else if constexpr (std::is_same_v<DecayedType, ripple::uint256>) {
            auto const rc = cass_tuple_set_bytes(
                *this,
                idx,
                static_cast<cass_byte_t const*>(static_cast<unsigned char const*>(value.data())),
                value.size()
            );
            throwErrorIfNeeded(rc, "Bind ripple::uint256");
        } else {
            // type not supported for binding
            static_assert(unsupported_v<DecayedType>);
        }
    }
};

class TupleIterator : public ManagedObject<CassIterator> {
    template <typename>
    static constexpr bool unsupported_v = false;

@@ -119,31 +121,27 @@ private:
            throw std::logic_error("Could not extract next value from tuple iterator");

        auto throwErrorIfNeeded = [](CassError rc, std::string_view label) {
            if (rc != CASS_OK) {
                auto const tag = '[' + std::string{label} + ']';
                throw std::logic_error(tag + ": " + cass_error_desc(rc));
            }
        };

        using DecayedType = std::decay_t<Type>;

        // clio only uses bigint (int64_t) so we convert any incoming type
        if constexpr (std::is_convertible_v<DecayedType, int64_t>) {
            int64_t out = 0;
            auto const rc = cass_value_get_int64(cass_iterator_get_value(*this), &out);
            throwErrorIfNeeded(rc, "Extract int64 from tuple");
            output = static_cast<DecayedType>(out);
        } else {
            // type not supported for extraction
            static_assert(unsupported_v<DecayedType>);
        }

        return output;
    }
};

} // namespace data::cassandra::detail


@@ -17,6 +17,7 @@
*/ */
//============================================================================== //==============================================================================
/** @file */
#pragma once #pragma once
#include <ripple/basics/base_uint.h> #include <ripple/basics/base_uint.h>
@@ -26,6 +27,7 @@
#include <queue> #include <queue>
#include <sstream> #include <sstream>
namespace etl {
/** /**
* @brief This datastructure is used to keep track of the sequence of the most recent ledger validated by the network. * @brief This datastructure is used to keep track of the sequence of the most recent ledger validated by the network.
* *
@@ -34,8 +36,7 @@
 * Any later calls to methods of this datastructure will not wait. Once the datastructure is stopped, the datastructure
 * remains stopped for the rest of its lifetime.
 */
class NetworkValidatedLedgers {
    // max sequence validated by network
    std::optional<uint32_t> max_;

@@ -43,6 +44,9 @@ class NetworkValidatedLedgers
    std::condition_variable cv_;

public:
    /**
     * @brief A factory function for NetworkValidatedLedgers.
     */
    static std::shared_ptr<NetworkValidatedLedgers>
    make_ValidatedLedgers()
    {
@@ -50,14 +54,14 @@ public:
    }

    /**
     * @brief Notify the datastructure that idx has been validated by the network.
     *
     * @param idx Sequence validated by network
     */
    void
    push(uint32_t idx)
    {
        std::lock_guard const lck(m_);
        if (!max_ || idx > *max_)
            max_ = idx;
        cv_.notify_all();
@@ -68,7 +72,7 @@ public:
     *
     * If no ledgers are known to have been validated, this function waits until the next ledger is validated
     *
     * @return Sequence of most recently validated ledger; empty optional if the datastructure has been stopped
     */
    std::optional<uint32_t>
    getMostRecent()
@@ -79,9 +83,9 @@ public:
    }

    /**
     * @brief Waits for the sequence to be validated by the network.
     *
     * @param sequence The sequence to wait for
     * @return true if sequence was validated; a return value of false means the datastructure has been stopped
     */
@@ -90,24 +94,24 @@ public:
    {
        std::unique_lock lck(m_);
        auto pred = [sequence, this]() -> bool { return (max_ && sequence <= *max_); };

        if (maxWaitMs) {
            cv_.wait_for(lck, std::chrono::milliseconds(*maxWaitMs));
        } else {
            cv_.wait(lck, pred);
        }
        return pred();
    }
};
// TODO: does the note make sense? lockfree queues provide the same blocking behaviour just without mutex, don't they?

/**
 * @brief Generic thread-safe queue with a max capacity.
 *
 * @note (original note) We can't use a lockfree queue here, since we need the ability to wait for an element to be
 * added or removed from the queue. These waits are blocking calls.
 */
template <class T>
class ThreadSafeQueue {
    std::queue<T> queue_;

    mutable std::mutex m_;
@@ -116,21 +120,21 @@ class ThreadSafeQueue
public:
    /**
     * @brief Create an instance of the queue.
     *
     * @param maxSize maximum size of the queue. Calls that would cause the queue to exceed this size will block until
     * free space is available.
     */
    ThreadSafeQueue(uint32_t maxSize) : maxSize_(maxSize)
    {
    }

    /**
     * @brief Push element onto the queue.
     *
     * Note: This method will block until free space is available.
     *
     * @param elt Element to push onto queue
     */
    void
    push(T const& elt)
@@ -142,11 +146,11 @@ public:
    }

    /**
     * @brief Push element onto the queue.
     *
     * Note: This method will block until free space is available.
     *
     * @param elt Element to push onto queue. Ownership is transferred
     */
    void
    push(T&& elt)
@@ -158,11 +162,11 @@ public:
    }

    /**
     * @brief Pop element from the queue.
     *
     * Note: Will block until queue is non-empty.
     *
     * @return Element popped from queue
     */
    T
    pop()
@@ -178,14 +182,14 @@ public:
    }

    /**
     * @brief Attempt to pop an element.
     *
     * @return Element popped from queue or empty optional if queue was empty
     */
    std::optional<T>
    tryPop()
    {
        std::scoped_lock const lck(m_);
        if (queue_.empty())
            return {};
@@ -200,22 +204,22 @@ public:
/**
 * @brief Partitions the uint256 keyspace into numMarkers partitions, each of equal size.
 *
 * @param numMarkers Total markers to partition for
 */
inline std::vector<ripple::uint256>
getMarkers(size_t numMarkers)
{
    assert(numMarkers <= 256);

    unsigned char const incr = 256 / numMarkers;

    std::vector<ripple::uint256> markers;
    markers.reserve(numMarkers);
    ripple::uint256 base{0};
    for (size_t i = 0; i < numMarkers; ++i) {
        markers.push_back(base);
        base.data()[0] += incr;
    }
    return markers;
}

} // namespace etl


@@ -18,22 +18,25 @@
//==============================================================================

#include <etl/ETLService.h>
#include <util/Constants.h>

#include <ripple/protocol/LedgerHeader.h>

#include <utility>

namespace etl {

// Database must be populated when this starts
std::optional<uint32_t>
ETLService::runETLPipeline(uint32_t startSequence, uint32_t numExtractors)
{
    if (finishSequence_ && startSequence > *finishSequence_)
        return {};

    LOG(log_.debug()) << "Starting etl pipeline";
    state_.isWriting = true;

    auto rng = backend_->hardFetchLedgerRangeNoThrow();
    if (!rng || rng->maxSequence < startSequence - 1) {
        assert(false);
        throw std::runtime_error("runETLPipeline: parent ledger is null");
    }
@@ -42,11 +45,14 @@ ETLService::runETLPipeline(uint32_t startSequence, int numExtractors)
    auto extractors = std::vector<std::unique_ptr<ExtractorType>>{};
    auto pipe = DataPipeType{numExtractors, startSequence};

    for (auto i = 0u; i < numExtractors; ++i) {
        extractors.push_back(std::make_unique<ExtractorType>(
            pipe, networkValidatedLedgers_, ledgerFetcher_, startSequence + i, finishSequence_, state_
        ));
    }

    auto transformer =
        TransformerType{pipe, backend_, ledgerLoader_, ledgerPublisher_, amendmentBlockHandler_, startSequence, state_};
    transformer.waitTillFinished();  // suspend current thread until exit condition is met
    pipe.cleanup();                  // TODO: this should probably happen automatically using destructor

@@ -56,12 +62,13 @@ ETLService::runETLPipeline(uint32_t startSequence, int numExtractors)
    auto const end = std::chrono::system_clock::now();
    auto const lastPublishedSeq = ledgerPublisher_.getLastPublishedSequence();

    static constexpr auto NANOSECONDS_PER_SECOND = 1'000'000'000.0;
    LOG(log_.debug()) << "Extracted and wrote " << lastPublishedSeq.value_or(startSequence) - startSequence << " in "
                      << ((end - begin).count()) / NANOSECONDS_PER_SECOND;

    state_.isWriting = false;
    LOG(log_.debug()) << "Stopping etl pipeline";

    return lastPublishedSeq;
}
@@ -77,73 +84,66 @@ void
ETLService::monitor()
{
    auto rng = backend_->hardFetchLedgerRangeNoThrow();
    if (!rng) {
        LOG(log_.info()) << "Database is empty. Will download a ledger from the network.";
        std::optional<ripple::LedgerHeader> ledger;

        try {
            if (startSequence_) {
                LOG(log_.info()) << "ledger sequence specified in config. "
                                 << "Will begin ETL process starting with ledger " << *startSequence_;
                ledger = ledgerLoader_.loadInitialLedger(*startSequence_);
            } else {
                LOG(log_.info()) << "Waiting for next ledger to be validated by network...";
                std::optional<uint32_t> mostRecentValidated = networkValidatedLedgers_->getMostRecent();
                if (mostRecentValidated) {
                    LOG(log_.info()) << "Ledger " << *mostRecentValidated << " has been validated. Downloading...";
                    ledger = ledgerLoader_.loadInitialLedger(*mostRecentValidated);
                } else {
                    LOG(log_.info()) << "The wait for the next validated ledger has been aborted. "
                                        "Exiting monitor loop";
                    return;
                }
            }
        } catch (std::runtime_error const& e) {
            LOG(log_.fatal()) << "Failed to load initial ledger: " << e.what();
            return amendmentBlockHandler_.onAmendmentBlock();
        }

        if (ledger) {
            rng = backend_->hardFetchLedgerRangeNoThrow();
        } else {
            LOG(log_.error()) << "Failed to load initial ledger. Exiting monitor loop";
            return;
        }
    } else {
        if (startSequence_)
            LOG(log_.warn()) << "start sequence specified but db is already populated";

        LOG(log_.info()) << "Database already populated. Picking up from the tip of history";
        cacheLoader_.load(rng->maxSequence);
    }

    assert(rng);
    uint32_t nextSequence = rng->maxSequence + 1;

    LOG(log_.debug()) << "Database is populated. "
                      << "Starting monitor loop. sequence = " << nextSequence;

    while (not isStopping()) {
        nextSequence = publishNextSequence(nextSequence);
    }
}

uint32_t
ETLService::publishNextSequence(uint32_t nextSequence)
{
    if (auto rng = backend_->hardFetchLedgerRangeNoThrow(); rng && rng->maxSequence >= nextSequence) {
        ledgerPublisher_.publish(nextSequence, {});
        ++nextSequence;
    } else if (networkValidatedLedgers_->waitUntilValidatedByNetwork(nextSequence, util::MILLISECONDS_PER_SECOND)) {
        LOG(log_.info()) << "Ledger with sequence = " << nextSequence << " has been validated by the network. "
                         << "Attempting to find in database and publish";

        // Attempt to take over responsibility of ETL writer after 10 failed
@@ -153,63 +153,61 @@ ETLService::monitor()
        // waits one second between each attempt to read the ledger from the
        // database
        constexpr size_t timeoutSeconds = 10;
        bool const success = ledgerPublisher_.publish(nextSequence, timeoutSeconds);

        if (!success) {
            LOG(log_.warn()) << "Failed to publish ledger with sequence = " << nextSequence << " . Beginning ETL";

            // returns the most recent sequence published; empty optional if no sequence was published
            std::optional<uint32_t> lastPublished = runETLPipeline(nextSequence, extractorThreads_);
            LOG(log_.info()) << "Aborting ETL. Falling back to publishing";

            // if no ledger was published, don't increment nextSequence
            if (lastPublished)
                nextSequence = *lastPublished + 1;
        } else {
            ++nextSequence;
        }
    }
    return nextSequence;
}
void
ETLService::monitorReadOnly()
{
    LOG(log_.debug()) << "Starting reporting in strict read only mode";

    auto const latestSequenceOpt = [this]() -> std::optional<uint32_t> {
        auto rng = backend_->hardFetchLedgerRangeNoThrow();

        if (!rng) {
            if (auto net = networkValidatedLedgers_->getMostRecent()) {
                return *net;
            }
            return std::nullopt;
        }
        return rng->maxSequence;
    }();

    if (!latestSequenceOpt.has_value()) {
        return;
    }

    uint32_t latestSequence = *latestSequenceOpt;

    cacheLoader_.load(latestSequence);
    latestSequence++;

    while (not isStopping()) {
        if (auto rng = backend_->hardFetchLedgerRangeNoThrow(); rng && rng->maxSequence >= latestSequence) {
            ledgerPublisher_.publish(latestSequence, {});
            latestSequence = latestSequence + 1;
        } else {
            // if we can't, wait until it's validated by the network, or 1 second passes, whichever occurs
            // first. Even if we don't hear from rippled, if ledgers are being written to the db, we publish
            // them.
            networkValidatedLedgers_->waitUntilValidatedByNetwork(latestSequence, util::MILLISECONDS_PER_SECOND);
        }
    }
}
@@ -217,7 +215,7 @@ ETLService::monitorReadOnly()
void
ETLService::run()
{
    LOG(log_.info()) << "Starting reporting etl";
    state_.isStopping = false;

    doWork();

@@ -227,29 +225,32 @@ void
ETLService::doWork()
{
    worker_ = std::thread([this]() {
        beast::setCurrentThreadName("ETLService worker");

        if (state_.isReadOnly) {
            monitorReadOnly();
        } else {
            monitor();
        }
    });
}
ETLService::ETLService(
    util::Config const& config,
    boost::asio::io_context& ioc,
    std::shared_ptr<BackendInterface> backend,
    std::shared_ptr<SubscriptionManagerType> subscriptions,
    std::shared_ptr<LoadBalancerType> balancer,
    std::shared_ptr<NetworkValidatedLedgersType> ledgers
)
    : backend_(backend)
    , loadBalancer_(balancer)
    , networkValidatedLedgers_(std::move(ledgers))
    , cacheLoader_(config, ioc, backend, backend->cache())
    , ledgerFetcher_(backend, balancer)
    , ledgerLoader_(backend, balancer, ledgerFetcher_, state_)
    , ledgerPublisher_(ioc, backend, backend->cache(), subscriptions, state_)
    , amendmentBlockHandler_(ioc, state_)
{
    startSequence_ = config.maybeValue<uint32_t>("start_sequence");
    finishSequence_ = config.maybeValue<uint32_t>("finish_sequence");

@@ -257,3 +258,4 @@ ETLService::ETLService(
    extractorThreads_ = config.valueOr<uint32_t>("extractor_threads", extractorThreads_);
    txnThreshold_ = config.valueOr<size_t>("txn_threshold", txnThreshold_);
}
} // namespace etl


@@ -19,11 +19,12 @@
#pragma once

#include <data/BackendInterface.h>
#include <data/LedgerCache.h>
#include <etl/LoadBalancer.h>
#include <etl/Source.h>
#include <etl/SystemState.h>
#include <etl/impl/AmendmentBlock.h>
#include <etl/impl/CacheLoader.h>
#include <etl/impl/ExtractionDataPipe.h>
#include <etl/impl/Extractor.h>
@@ -31,10 +32,11 @@
#include <etl/impl/LedgerLoader.h>
#include <etl/impl/LedgerPublisher.h>
#include <etl/impl/Transformer.h>
#include <feed/SubscriptionManager.h>
#include <util/log/Logger.h>

#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>
#include <boost/asio/steady_timer.hpp>
#include <grpcpp/grpcpp.h>

#include <memory>

@@ -42,7 +44,14 @@
struct AccountTransactionsData;
struct NFTTransactionsData;
struct NFTsData;
namespace feed {
class SubscriptionManager;
}  // namespace feed

/**
 * @brief This namespace contains everything to do with the ETL and ETL sources.
 */
namespace etl {
/**
 * @brief This class is responsible for continuously extracting data from a p2p node, and writing that data to the
@@ -57,21 +66,23 @@ class SubscriptionManager
 * the others will fall back to monitoring/publishing. In this sense, this class dynamically transitions from monitoring
 * to writing and from writing to monitoring, based on the activity of other processes running on different machines.
 */
class ETLService {
    // TODO: make these template parameters in ETLService
    using SubscriptionManagerType = feed::SubscriptionManager;
    using LoadBalancerType = LoadBalancer;
    using NetworkValidatedLedgersType = NetworkValidatedLedgers;
    using DataPipeType = etl::detail::ExtractionDataPipe<org::xrpl::rpc::v1::GetLedgerResponse>;
    using CacheType = data::LedgerCache;
    using CacheLoaderType = etl::detail::CacheLoader<CacheType>;
    using LedgerFetcherType = etl::detail::LedgerFetcher<LoadBalancerType>;
    using ExtractorType = etl::detail::Extractor<DataPipeType, NetworkValidatedLedgersType, LedgerFetcherType>;
    using LedgerLoaderType = etl::detail::LedgerLoader<LoadBalancerType, LedgerFetcherType>;
    using LedgerPublisherType = etl::detail::LedgerPublisher<SubscriptionManagerType, CacheType>;
    using AmendmentBlockHandlerType = etl::detail::AmendmentBlockHandler<>;
    using TransformerType =
        etl::detail::Transformer<DataPipeType, LedgerLoaderType, LedgerPublisherType, AmendmentBlockHandlerType>;

    util::Logger log_{"ETL"};

    std::shared_ptr<BackendInterface> backend_;
    std::shared_ptr<LoadBalancerType> loadBalancer_;

@@ -84,6 +95,7 @@ class ETLService
    LedgerFetcherType ledgerFetcher_;
    LedgerLoaderType ledgerLoader_;
    LedgerPublisherType ledgerPublisher_;
    AmendmentBlockHandlerType amendmentBlockHandler_;

    SystemState state_;
@@ -94,7 +106,7 @@ class ETLService
public:
    /**
     * @brief Create an instance of ETLService.
     *
     * @param config The configuration to use
     * @param ioc io context to run on
@@ -104,21 +116,35 @@ public:
     * @param ledgers The network validated ledgers datastructure
     */
    ETLService(
        util::Config const& config,
        boost::asio::io_context& ioc,
        std::shared_ptr<BackendInterface> backend,
        std::shared_ptr<SubscriptionManagerType> subscriptions,
        std::shared_ptr<LoadBalancerType> balancer,
        std::shared_ptr<NetworkValidatedLedgersType> ledgers
    );

    /**
     * @brief A factory function to spawn new ETLService instances.
     *
     * Creates and runs the ETL service.
     *
     * @param config The configuration to use
     * @param ioc io context to run on
     * @param backend BackendInterface implementation
     * @param subscriptions Subscription manager
     * @param balancer Load balancer to use
     * @param ledgers The network validated ledgers datastructure
     */
    static std::shared_ptr<ETLService>
    make_ETLService(
        util::Config const& config,
        boost::asio::io_context& ioc,
        std::shared_ptr<BackendInterface> backend,
        std::shared_ptr<SubscriptionManagerType> subscriptions,
        std::shared_ptr<LoadBalancerType> balancer,
        std::shared_ptr<NetworkValidatedLedgersType> ledgers
    )
    {
        auto etl = std::make_shared<ETLService>(config, ioc, backend, subscriptions, balancer, ledgers);
        etl->run();

@@ -127,12 +153,12 @@ public:
    }
    /**
     * @brief Stops components and joins worker thread.
     */
    ~ETLService()
    {
        LOG(log_.info()) << "onStop called";
        LOG(log_.debug()) << "Stopping Reporting ETL";

        state_.isStopping = true;
        cacheLoader_.stop();

@@ -140,11 +166,11 @@ public:
        if (worker_.joinable())
            worker_.join();

        LOG(log_.debug()) << "Joined ETLService worker thread";
    }

    /**
     * @brief Get time passed since last ledger close, in seconds.
     */
    std::uint32_t
    lastCloseAgeSeconds() const
@@ -152,6 +178,17 @@ public:
    {
        return ledgerPublisher_.lastCloseAgeSeconds();
    }

    /**
     * @brief Check for the amendment blocked state.
     *
     * @return true if currently amendment blocked; false otherwise
     */
    bool
    isAmendmentBlocked() const
    {
        return state_.isAmendmentBlocked;
    }

    /**
     * @brief Get state of ETL as a JSON object
     */
@@ -169,6 +206,16 @@ public:
        return result;
    }

    /**
     * @brief Get the etl nodes' state.
     *
     * @return The etl nodes' state; nullopt if etl nodes are not connected
     */
    std::optional<etl::ETLState>
    getETLState() const noexcept
    {
        return loadBalancer_->getETLState();
    }
private:
    /**
     * @brief Run the ETL pipeline.
@@ -177,10 +224,11 @@ private:
     * @note database must already be populated when this function is called
     *
     * @param startSequence the first ledger to extract
     * @param numExtractors number of extractors to use
     * @return the last ledger written to the database, if any
     */
    std::optional<uint32_t>
    runETLPipeline(uint32_t startSequence, uint32_t numExtractors);

    /**
     * @brief Monitor the network for newly validated ledgers.
@@ -194,6 +242,15 @@ private:
     */
    void
    monitor();

    /**
     * @brief Monitor the network for newly validated ledgers and publish them to the ledgers stream.
     *
     * @param nextSequence the ledger sequence to publish
     * @return the next ledger sequence to publish
     */
    uint32_t
    publishNextSequence(uint32_t nextSequence);

    /**
     * @brief Monitor the database for newly written ledgers.
     *
@@ -207,33 +264,34 @@ private:
     * @return true if stopping; false otherwise
     */
    bool
    isStopping() const
    {
        return state_.isStopping;
    }

    /**
     * @brief Get the number of markers to use during the initial ledger download.
     *
     * This is equivalent to the degree of parallelism during the initial ledger download.
     *
     * @return the number of markers
     */
    std::uint32_t
    getNumMarkers() const
    {
        return numMarkers_;
    }

    /**
     * @brief Start all components to run ETL service.
     */
    void
    run();

    /**
     * @brief Spawn the worker thread and start monitoring.
     */
    void
    doWork();
};

} // namespace etl

src/etl/ETLState.cpp (new file, 42 lines)

@@ -0,0 +1,42 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <etl/ETLState.h>
#include <rpc/JS.h>

namespace etl {

ETLState
tag_invoke(boost::json::value_to_tag<ETLState>, boost::json::value const& jv)
{
    ETLState state;
    auto const& jsonObject = jv.as_object();

    if (!jsonObject.contains(JS(error))) {
        if (jsonObject.contains(JS(result)) && jsonObject.at(JS(result)).as_object().contains(JS(info))) {
            auto const rippledInfo = jsonObject.at(JS(result)).as_object().at(JS(info)).as_object();
            if (rippledInfo.contains(JS(network_id)))
                state.networkID.emplace(boost::json::value_to<int64_t>(rippledInfo.at(JS(network_id))));
        }
    }

    return state;
}

} // namespace etl

src/etl/ETLState.h (new file, 60 lines)
@@ -0,0 +1,60 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2023, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once

#include <data/BackendInterface.h>

#include <boost/json.hpp>

#include <cstdint>
#include <optional>

namespace etl {

/**
 * @brief This class is responsible for fetching and storing the state of the ETL information, such as the network id
 */
struct ETLState {
    std::optional<uint32_t> networkID;

    /**
     * @brief Fetch the ETL state from the rippled server
     * @param source The source to fetch the state from
     * @return The ETL state, nullopt if source not available
     */
    template <typename Forward>
    static std::optional<ETLState>
    fetchETLStateFromSource(Forward const& source) noexcept
    {
        auto const serverInfoRippled = data::synchronous([&source](auto yield) {
            return source.forwardToRippled({{"command", "server_info"}}, std::nullopt, yield);
        });

        if (serverInfoRippled)
            return boost::json::value_to<ETLState>(boost::json::value(*serverInfoRippled));

        return std::nullopt;
    }
};

ETLState
tag_invoke(boost::json::value_to_tag<ETLState>, boost::json::value const& jv);

} // namespace etl


@@ -17,14 +17,15 @@
 */
//==============================================================================
#include <data/DBHelpers.h>
#include <etl/ETLService.h>
#include <etl/NFTHelpers.h>
#include <etl/ProbingSource.h>
#include <etl/Source.h>
#include <rpc/RPCHelpers.h>
#include <util/Profiler.h>
#include <util/Random.h>
#include <util/log/Logger.h>

#include <ripple/beast/net/IPEndpoint.h>
#include <ripple/protocol/STLedgerEntry.h>
@@ -36,20 +37,21 @@
#include <thread>

using namespace util;

namespace etl {

std::unique_ptr<Source>
LoadBalancer::make_Source(
    Config const& config,
    boost::asio::io_context& ioc,
    std::shared_ptr<BackendInterface> backend,
    std::shared_ptr<feed::SubscriptionManager> subscriptions,
    std::shared_ptr<NetworkValidatedLedgers> validatedLedgers,
    LoadBalancer& balancer
)
{
    auto src = std::make_unique<ProbingSource>(config, ioc, backend, subscriptions, validatedLedgers, balancer);

    src->run();
    return src;
@@ -57,34 +59,74 @@ LoadBalancer::make_Source(
std::shared_ptr<LoadBalancer>
LoadBalancer::make_LoadBalancer(
    Config const& config,
    boost::asio::io_context& ioc,
    std::shared_ptr<BackendInterface> backend,
    std::shared_ptr<feed::SubscriptionManager> subscriptions,
    std::shared_ptr<NetworkValidatedLedgers> validatedLedgers
)
{
    return std::make_shared<LoadBalancer>(config, ioc, backend, subscriptions, validatedLedgers);
}

LoadBalancer::LoadBalancer(
    Config const& config,
    boost::asio::io_context& ioc,
    std::shared_ptr<BackendInterface> backend,
    std::shared_ptr<feed::SubscriptionManager> subscriptions,
    std::shared_ptr<NetworkValidatedLedgers> validatedLedgers
)
{
    static constexpr std::uint32_t MAX_DOWNLOAD = 256;
    if (auto value = config.maybeValue<uint32_t>("num_markers"); value) {
        downloadRanges_ = std::clamp(*value, 1u, MAX_DOWNLOAD);
    } else if (backend->fetchLedgerRange()) {
        downloadRanges_ = 4;
    }

    auto const allowNoEtl = config.valueOr("allow_no_etl", false);

    auto const checkOnETLFailure = [this, allowNoEtl](std::string const& log) {
        LOG(log_.error()) << log;

        if (!allowNoEtl) {
            LOG(log_.error()) << "Set allow_no_etl as true in config to allow clio run without valid ETL sources.";
            throw std::logic_error("ETL configuration error.");
        }
    };

    for (auto const& entry : config.array("etl_sources")) {
        std::unique_ptr<Source> source = make_Source(entry, ioc, backend, subscriptions, validatedLedgers, *this);

        // checking etl node validity
        auto const stateOpt = ETLState::fetchETLStateFromSource(*source);
        if (!stateOpt) {
            checkOnETLFailure(fmt::format(
                "Failed to fetch ETL state from source = {} Please check the configuration and network",
                source->toString()
            ));
        } else if (etlState_ && etlState_->networkID && stateOpt->networkID && etlState_->networkID != stateOpt->networkID) {
            checkOnETLFailure(fmt::format(
                "ETL sources must be on the same network. Source network id = {} does not match others network id = {}",
                *(stateOpt->networkID),
                *(etlState_->networkID)
            ));
        } else {
            etlState_ = stateOpt;
        }

        sources_.push_back(std::move(source));
        LOG(log_.info()) << "Added etl source - " << sources_.back()->toString();
    }

    if (sources_.empty())
        checkOnETLFailure("No ETL sources configured. Please check the configuration");
}

LoadBalancer::~LoadBalancer()
{
    sources_.clear();
}
std::pair<std::vector<std::string>, bool>
@@ -95,15 +137,17 @@ LoadBalancer::loadInitialLedger(uint32_t sequence, bool cacheOnly)
        [this, &response, &sequence, cacheOnly](auto& source) {
            auto [data, res] = source->loadInitialLedger(sequence, downloadRanges_, cacheOnly);
            if (!res) {
                LOG(log_.error()) << "Failed to download initial ledger."
                                  << " Sequence = " << sequence << " source = " << source->toString();
            } else {
                response = std::move(data);
            }

            return res;
        },
        sequence
    );

    return {std::move(response), success};
}
@@ -111,43 +155,43 @@ LoadBalancer::OptionalGetLedgerResponseType
LoadBalancer::fetchLedger(uint32_t ledgerSequence, bool getObjects, bool getObjectNeighbors)
{
    GetLedgerResponseType response;
    bool const success = execute(
        [&response, ledgerSequence, getObjects, getObjectNeighbors, log = log_](auto& source) {
            auto [status, data] = source->fetchLedger(ledgerSequence, getObjects, getObjectNeighbors);
            response = std::move(data);
            if (status.ok() && response.validated()) {
                LOG(log.info()) << "Successfully fetched ledger = " << ledgerSequence
                                << " from source = " << source->toString();
                return true;
            }

            LOG(log.warn()) << "Could not fetch ledger " << ledgerSequence << ", Reply: " << response.DebugString()
                            << ", error_code: " << status.error_code() << ", error_msg: " << status.error_message()
                            << ", source = " << source->toString();
            return false;
        },
        ledgerSequence
    );

    if (success) {
        return response;
    }
    return {};
}

std::optional<boost::json::object>
LoadBalancer::forwardToRippled(
    boost::json::object const& request,
    std::optional<std::string> const& clientIp,
    boost::asio::yield_context yield
) const
{
    std::size_t sourceIdx = 0;
    if (!sources_.empty())
        sourceIdx = util::Random::uniform(0ul, sources_.size() - 1);

    auto numAttempts = 0u;

    while (numAttempts < sources_.size()) {
        if (auto res = sources_[sourceIdx]->forwardToRippled(request, clientIp, yield))
            return res;
@@ -161,8 +205,7 @@ LoadBalancer::forwardToRippled(
bool
LoadBalancer::shouldPropagateTxnStream(Source* in) const
{
    for (auto& src : sources_) {
        assert(src);

        // We pick the first Source encountered that is connected
@@ -188,48 +231,55 @@ template <class Func>
bool
LoadBalancer::execute(Func f, uint32_t ledgerSequence)
{
    std::size_t sourceIdx = 0;
    if (!sources_.empty())
        sourceIdx = util::Random::uniform(0ul, sources_.size() - 1);

    auto numAttempts = 0;

    while (true) {
        auto& source = sources_[sourceIdx];

        LOG(log_.debug()) << "Attempting to execute func. ledger sequence = " << ledgerSequence
                          << " - source = " << source->toString();

        // Originally, it was (source->hasLedger(ledgerSequence) || true).
        /* Sometimes rippled has the ledger but doesn't actually know. However,
           this does NOT happen in the normal case and is safe to remove.
           This || true is only needed when loading full history standalone. */
        if (source->hasLedger(ledgerSequence)) {
            bool const res = f(source);
            if (res) {
                LOG(log_.debug()) << "Successfully executed func at source = " << source->toString()
                                  << " - ledger sequence = " << ledgerSequence;
                break;
            }

            LOG(log_.warn()) << "Failed to execute func at source = " << source->toString()
                             << " - ledger sequence = " << ledgerSequence;
        } else {
            LOG(log_.warn()) << "Ledger not present at source = " << source->toString()
                             << " - ledger sequence = " << ledgerSequence;
        }

        sourceIdx = (sourceIdx + 1) % sources_.size();
        numAttempts++;

        if (numAttempts % sources_.size() == 0) {
            LOG(log_.info()) << "Ledger sequence " << ledgerSequence
                             << " is not yet available from any configured sources. "
                             << "Sleeping and trying again";
            std::this_thread::sleep_for(std::chrono::seconds(2));
        }
    }
    return true;
}
std::optional<ETLState>
LoadBalancer::getETLState() noexcept
{
    if (!etlState_) {
        // retry ETLState fetch
        etlState_ = ETLState::fetchETLStateFromSource(*this);
    }
    return etlState_;
}
} // namespace etl


@@ -19,95 +19,127 @@
#pragma once

#include <data/BackendInterface.h>
#include <etl/ETLHelpers.h>
#include <etl/ETLState.h>
#include <feed/SubscriptionManager.h>
#include <util/config/Config.h>
#include <util/log/Logger.h>

#include <ripple/proto/org/xrpl/rpc/v1/xrp_ledger.grpc.pb.h>

#include <boost/asio.hpp>
#include <grpcpp/grpcpp.h>

namespace etl {
class Source;
class ProbingSource;
} // namespace etl

namespace feed {
class SubscriptionManager;
} // namespace feed

namespace etl {

/**
 * @brief This class is used to manage connections to transaction processing processes.
 *
 * This class spawns a listener for each etl source, which listens to messages on the ledgers stream (to keep track of
 * which ledgers have been validated by the network, and the range of ledgers each etl source has). This class also
 * allows requests for ledger data to be load balanced across all possible ETL sources.
 */
class LoadBalancer {
public:
    using RawLedgerObjectType = org::xrpl::rpc::v1::RawLedgerObject;
    using GetLedgerResponseType = org::xrpl::rpc::v1::GetLedgerResponse;
    using OptionalGetLedgerResponseType = std::optional<GetLedgerResponseType>;

private:
    static constexpr std::uint32_t DEFAULT_DOWNLOAD_RANGES = 16;

    util::Logger log_{"ETL"};
    std::vector<std::unique_ptr<Source>> sources_;
    std::optional<ETLState> etlState_;
    std::uint32_t downloadRanges_ =
        DEFAULT_DOWNLOAD_RANGES; /*< The number of markers to use when downloading initial ledger */

public:
    /**
     * @brief Create an instance of the load balancer.
     *
     * @param config The configuration to use
     * @param ioc The io_context to run on
     * @param backend BackendInterface implementation
     * @param subscriptions Subscription manager
     * @param validatedLedgers The network validated ledgers datastructure
     */
    LoadBalancer(
        util::Config const& config,
        boost::asio::io_context& ioc,
        std::shared_ptr<BackendInterface> backend,
        std::shared_ptr<feed::SubscriptionManager> subscriptions,
        std::shared_ptr<NetworkValidatedLedgers> validatedLedgers
    );

    /**
     * @brief A factory function for the load balancer.
     *
     * @param config The configuration to use
     * @param ioc The io_context to run on
     * @param backend BackendInterface implementation
     * @param subscriptions Subscription manager
     * @param validatedLedgers The network validated ledgers datastructure
     */
    static std::shared_ptr<LoadBalancer>
    make_LoadBalancer(
        util::Config const& config,
        boost::asio::io_context& ioc,
        std::shared_ptr<BackendInterface> backend,
        std::shared_ptr<feed::SubscriptionManager> subscriptions,
        std::shared_ptr<NetworkValidatedLedgers> validatedLedgers
    );

    /**
     * @brief A factory function for the ETL source.
     *
     * @param config The configuration to use
     * @param ioc The io_context to run on
     * @param backend BackendInterface implementation
     * @param subscriptions Subscription manager
     * @param validatedLedgers The network validated ledgers datastructure
     * @param balancer The load balancer
     */
    static std::unique_ptr<Source>
    make_Source(
        util::Config const& config,
        boost::asio::io_context& ioc,
        std::shared_ptr<BackendInterface> backend,
        std::shared_ptr<feed::SubscriptionManager> subscriptions,
        std::shared_ptr<NetworkValidatedLedgers> validatedLedgers,
        LoadBalancer& balancer
    );

    ~LoadBalancer();

    /**
     * @brief Load the initial ledger, writing data to the queue.
     *
     * @param sequence Sequence of ledger to download
     * @param cacheOnly Whether to only write to cache and not to the DB; defaults to false
     */
    std::pair<std::vector<std::string>, bool>
    loadInitialLedger(uint32_t sequence, bool cacheOnly = false);

    /**
     * @brief Fetch data for a specific ledger.
     *
     * This function will continuously try to fetch data for the specified ledger until the fetch succeeds, the ledger
     * is found in the database, or the server is shutting down.
     *
     * @param ledgerSequence Sequence of the ledger to fetch
     * @param getObjects Whether to get the account state diff between this ledger and the prior one
     * @param getObjectNeighbors Whether to request object neighbors
     * @return The extracted data, if extraction was successful. If the ledger was found in the database or the server
     * is shutting down, the optional will be empty
     */
    OptionalGetLedgerResponseType
@@ -127,30 +159,42 @@ public:
    shouldPropagateTxnStream(Source* in) const;

    /**
     * @return JSON representation of the state of this load balancer.
     */
    boost::json::value
    toJson() const;

    /**
     * @brief Forward a JSON RPC request to a randomly selected rippled node.
     *
     * @param request JSON-RPC request to forward
     * @param clientIp The IP address of the peer, if known
     * @param yield The coroutine context
     * @return Response received from rippled node as JSON object on success; nullopt on failure
     */
    std::optional<boost::json::object>
    forwardToRippled(
        boost::json::object const& request,
        std::optional<std::string> const& clientIp,
        boost::asio::yield_context yield
    ) const;

    /**
     * @brief Return state of ETL nodes.
     * @return ETL state, nullopt if etl nodes not available
     */
    std::optional<ETLState>
    getETLState() noexcept;

private:
    /**
     * @brief Execute a function on a randomly selected source.
     *
     * @note f is a function that takes a Source as an argument and returns a bool.
     * Attempt to execute f for one randomly chosen Source that has the specified ledger. If f returns false, another
     * randomly chosen Source is used. The process repeats until f returns true.
     *
     * @param f Function to execute. This function takes the ETL source as an argument, and returns a bool
     * @param ledgerSequence f is executed for each Source that has this ledger
     * @return true if f was eventually executed successfully. false if the ledger was found in the database or the
     * server is shutting down
@@ -159,3 +203,4 @@ private:
    bool
    execute(Func f, uint32_t ledgerSequence);
};
} // namespace etl


@@ -22,9 +22,12 @@
#include <ripple/protocol/TxMeta.h>
#include <vector>

#include <data/BackendInterface.h>
#include <data/DBHelpers.h>
#include <data/Types.h>
#include <fmt/core.h>

namespace etl {
std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
getNFTokenMintData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
@@ -42,27 +45,26 @@ getNFTokenMintData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
    // that were changed.
    std::optional<ripple::AccountID> owner;

    for (ripple::STObject const& node : txMeta.getNodes()) {
        if (node.getFieldU16(ripple::sfLedgerEntryType) != ripple::ltNFTOKEN_PAGE)
            continue;

        if (!owner)
            owner = ripple::AccountID::fromVoid(node.getFieldH256(ripple::sfLedgerIndex).data());

        if (node.getFName() == ripple::sfCreatedNode) {
            ripple::STArray const& toAddNFTs =
                node.peekAtField(ripple::sfNewFields).downcast<ripple::STObject>().getFieldArray(ripple::sfNFTokens);
            std::transform(
                toAddNFTs.begin(),
                toAddNFTs.end(),
                std::back_inserter(finalIDs),
                [](ripple::STObject const& nft) { return nft.getFieldH256(ripple::sfNFTokenID); }
            );
        }
        // Else it's modified, as there should never be a deleted NFToken page
        // as a result of a mint.
        else {
            // When a mint results in splitting an existing page,
            // it results in a created page and a modified node. Sometimes,
            // the created node needs to be linked to a third page, resulting
@@ -79,9 +81,11 @@ getNFTokenMintData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
            ripple::STArray const& toAddNFTs = previousFields.getFieldArray(ripple::sfNFTokens);
            std::transform(
                toAddNFTs.begin(),
                toAddNFTs.end(),
                std::back_inserter(prevIDs),
                [](ripple::STObject const& nft) { return nft.getFieldH256(ripple::sfNFTokenID); }
            );

            ripple::STArray const& toAddFinalNFTs =
                node.peekAtField(ripple::sfFinalFields).downcast<ripple::STObject>().getFieldArray(ripple::sfNFTokens);
@@ -89,27 +93,28 @@ getNFTokenMintData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
                toAddFinalNFTs.begin(),
                toAddFinalNFTs.end(),
                std::back_inserter(finalIDs),
                [](ripple::STObject const& nft) { return nft.getFieldH256(ripple::sfNFTokenID); }
            );
        }
    }

    std::sort(finalIDs.begin(), finalIDs.end());
    std::sort(prevIDs.begin(), prevIDs.end());

    // Find the first NFT ID that doesn't match. We're looking for an
    // added NFT, so the one we want will be the mismatch in finalIDs.
    auto const diff = std::mismatch(finalIDs.begin(), finalIDs.end(), prevIDs.begin(), prevIDs.end());

    // There should always be a difference so the returned finalIDs
    // iterator should never be end(). But better safe than sorry.
    if (finalIDs.size() != prevIDs.size() + 1 || diff.first == finalIDs.end() || !owner) {
        throw std::runtime_error(
            fmt::format(" - unexpected NFTokenMint data in tx {}", strHex(sttx.getTransactionID()))
        );
    }

    return {
        {NFTTransactionsData(*diff.first, txMeta, sttx.getTransactionID())},
        NFTsData(*diff.first, *owner, sttx.getFieldVL(ripple::sfURI), txMeta)};
}
std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
@@ -121,8 +126,7 @@ getNFTokenBurnData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
    // Determine who owned the token when it was burned by finding an
    // NFTokenPage that was deleted or modified that contains this
    // tokenID.
    for (ripple::STObject const& node : txMeta.getNodes()) {
        if (node.getFieldU16(ripple::sfLedgerEntryType) != ripple::ltNFTOKEN_PAGE ||
            node.getFName() == ripple::sfCreatedNode)
            continue;
@@ -137,16 +141,15 @@ getNFTokenBurnData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
        // need to look in the FinalFields.
        std::optional<ripple::STArray> prevNFTs;

        if (node.isFieldPresent(ripple::sfPreviousFields)) {
            ripple::STObject const& previousFields =
                node.peekAtField(ripple::sfPreviousFields).downcast<ripple::STObject>();
            if (previousFields.isFieldPresent(ripple::sfNFTokens))
                prevNFTs = previousFields.getFieldArray(ripple::sfNFTokens);
        } else if (!prevNFTs && node.getFName() == ripple::sfDeletedNode) {
            prevNFTs =
                node.peekAtField(ripple::sfFinalFields).downcast<ripple::STObject>().getFieldArray(ripple::sfNFTokens);
        }

        if (!prevNFTs)
            continue;
@@ -155,14 +158,14 @@ getNFTokenBurnData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
         std::find_if(prevNFTs->begin(), prevNFTs->end(), [&tokenID](ripple::STObject const& candidate) {
             return candidate.getFieldH256(ripple::sfNFTokenID) == tokenID;
         });
-        if (nft != prevNFTs->end())
+        if (nft != prevNFTs->end()) {
             return std::make_pair(
                 txs,
                 NFTsData(
-                    tokenID,
-                    ripple::AccountID::fromVoid(node.getFieldH256(ripple::sfLedgerIndex).data()),
-                    txMeta,
-                    true));
+                    tokenID, ripple::AccountID::fromVoid(node.getFieldH256(ripple::sfLedgerIndex).data()), txMeta, true
+                )
+            );
+        }
     }

     std::stringstream msg;
@@ -176,14 +179,12 @@ getNFTokenAcceptOfferData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx
     // If we have the buy offer from this tx, we can determine the owner
     // more easily by just looking at the owner of the accepted NFTokenOffer
    // object.
-    if (sttx.isFieldPresent(ripple::sfNFTokenBuyOffer))
-    {
+    if (sttx.isFieldPresent(ripple::sfNFTokenBuyOffer)) {
         auto const affectedBuyOffer =
             std::find_if(txMeta.getNodes().begin(), txMeta.getNodes().end(), [&sttx](ripple::STObject const& node) {
                 return node.getFieldH256(ripple::sfLedgerIndex) == sttx.getFieldH256(ripple::sfNFTokenBuyOffer);
             });
-        if (affectedBuyOffer == txMeta.getNodes().end())
-        {
+        if (affectedBuyOffer == txMeta.getNodes().end()) {
             std::stringstream msg;
             msg << " - unexpected NFTokenAcceptOffer data in tx " << sttx.getTransactionID();
             throw std::runtime_error(msg.str());
@@ -205,8 +206,7 @@ getNFTokenAcceptOfferData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx
         std::find_if(txMeta.getNodes().begin(), txMeta.getNodes().end(), [&sttx](ripple::STObject const& node) {
             return node.getFieldH256(ripple::sfLedgerIndex) == sttx.getFieldH256(ripple::sfNFTokenSellOffer);
         });
-    if (affectedSellOffer == txMeta.getNodes().end())
-    {
+    if (affectedSellOffer == txMeta.getNodes().end()) {
         std::stringstream msg;
         msg << " - unexpected NFTokenAcceptOffer data in tx " << sttx.getTransactionID();
         throw std::runtime_error(msg.str());
@@ -220,8 +220,7 @@ getNFTokenAcceptOfferData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx
             .downcast<ripple::STObject>()
             .getAccountID(ripple::sfOwner);

-    for (ripple::STObject const& node : txMeta.getNodes())
-    {
+    for (ripple::STObject const& node : txMeta.getNodes()) {
         if (node.getFieldU16(ripple::sfLedgerEntryType) != ripple::ltNFTOKEN_PAGE ||
             node.getFName() == ripple::sfDeletedNode)
             continue;
@@ -232,10 +231,11 @@ getNFTokenAcceptOfferData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx
             continue;

         ripple::STArray const& nfts = [&node] {
-            if (node.getFName() == ripple::sfCreatedNode)
+            if (node.getFName() == ripple::sfCreatedNode) {
                 return node.peekAtField(ripple::sfNewFields)
                     .downcast<ripple::STObject>()
                     .getFieldArray(ripple::sfNFTokens);
+            }
             return node.peekAtField(ripple::sfFinalFields)
                 .downcast<ripple::STObject>()
                 .getFieldArray(ripple::sfNFTokens);
@@ -244,11 +244,12 @@ getNFTokenAcceptOfferData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx
         auto const nft = std::find_if(nfts.begin(), nfts.end(), [&tokenID](ripple::STObject const& candidate) {
             return candidate.getFieldH256(ripple::sfNFTokenID) == tokenID;
         });
-        if (nft != nfts.end())
+        if (nft != nfts.end()) {
             return {
                 {NFTTransactionsData(tokenID, txMeta, sttx.getTransactionID())},
                 NFTsData(tokenID, nodeOwner, txMeta, false)};
+        }
     }

     std::stringstream msg;
     msg << " - unexpected NFTokenAcceptOffer data in tx " << sttx.getTransactionID();
@@ -263,8 +264,7 @@ std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
 getNFTokenCancelOfferData(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
 {
     std::vector<NFTTransactionsData> txs;
-    for (ripple::STObject const& node : txMeta.getNodes())
-    {
+    for (ripple::STObject const& node : txMeta.getNodes()) {
         if (node.getFieldU16(ripple::sfLedgerEntryType) != ripple::ltNFTOKEN_OFFER)
             continue;
@@ -300,8 +300,7 @@ getNFTDataFromTx(ripple::TxMeta const& txMeta, ripple::STTx const& sttx)
     if (txMeta.getResultTER() != ripple::tesSUCCESS)
         return {{}, {}};

-    switch (sttx.getTxnType())
-    {
+    switch (sttx.getTxnType()) {
         case ripple::TxType::ttNFTOKEN_MINT:
             return getNFTokenMintData(txMeta, sttx);
@@ -338,3 +337,4 @@ getNFTDataFromObj(std::uint32_t const seq, std::string const& key, std::string c
     return nfts;
 }
+} // namespace etl
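The burn handling above scans deleted or modified NFTokenPage nodes and uses `std::find_if` to locate the page whose token list contains the burned token ID; that page's key identifies the last owner. A stripped-down sketch of just that lookup, using standard-library stand-ins (`TokenID`, `Page`, and `findTokenOwner` are hypothetical names, not Clio types):

```cpp
#include <algorithm>
#include <array>
#include <optional>
#include <string>
#include <vector>

// Hypothetical stand-ins for ripple::uint256 and an NFTokenPage's token list.
using TokenID = std::array<unsigned char, 32>;

struct Page {
    std::string owner;            // account that owns the page
    std::vector<TokenID> tokens;  // token IDs stored on the page
};

// Return the owner of the first page whose token list contains tokenID,
// mirroring the std::find_if scan in getNFTokenBurnData.
inline std::optional<std::string>
findTokenOwner(std::vector<Page> const& pages, TokenID const& tokenID)
{
    for (Page const& page : pages) {
        auto const nft = std::find_if(page.tokens.begin(), page.tokens.end(), [&tokenID](TokenID const& candidate) {
            return candidate == tokenID;
        });
        if (nft != page.tokens.end())
            return page.owner;
    }
    return std::nullopt;  // no page holds this token
}
```

The real code additionally has to pick between `PreviousFields` and `FinalFields` depending on whether the page was modified or deleted, as the hunk above shows.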


@@ -17,21 +17,35 @@
  */
 //==============================================================================

+/** @file */
 #pragma once

-#include <backend/DBHelpers.h>
+#include <data/DBHelpers.h>

 #include <ripple/protocol/STTx.h>
 #include <ripple/protocol/TxMeta.h>

+namespace etl {
+
 /**
- * @brief Pull NFT data from TX via ETLService
+ * @brief Pull NFT data from TX via ETLService.
+ *
+ * @param txMeta Transaction metadata
+ * @param sttx The transaction
+ * @return NFT transactions data as a pair of transactions and optional NFTsData
  */
 std::pair<std::vector<NFTTransactionsData>, std::optional<NFTsData>>
 getNFTDataFromTx(ripple::TxMeta const& txMeta, ripple::STTx const& sttx);

 /**
- * @brief Pull NFT data from ledger object via loadInitialLedger
+ * @brief Pull NFT data from ledger object via loadInitialLedger.
+ *
+ * @param seq The ledger sequence to pull for
+ * @param key The owner key
+ * @param blob Object data as blob
+ * @return The NFT data as a vector
  */
 std::vector<NFTsData>
-getNFTDataFromObj(std::uint32_t const seq, std::string const& key, std::string const& blob);
+getNFTDataFromObj(std::uint32_t seq, std::string const& key, std::string const& blob);
+
+} // namespace etl


@@ -18,18 +18,18 @@
 //==============================================================================

 #include <etl/ProbingSource.h>
-#include <log/Logger.h>

-using namespace clio;
+namespace etl {

 ProbingSource::ProbingSource(
-    clio::Config const& config,
+    util::Config const& config,
     boost::asio::io_context& ioc,
     std::shared_ptr<BackendInterface> backend,
-    std::shared_ptr<SubscriptionManager> subscriptions,
+    std::shared_ptr<feed::SubscriptionManager> subscriptions,
     std::shared_ptr<NetworkValidatedLedgers> nwvl,
     LoadBalancer& balancer,
-    boost::asio::ssl::context sslCtx)
+    boost::asio::ssl::context sslCtx
+)
     : sslCtx_{std::move(sslCtx)}
     , sslSrc_{make_shared<
           SslSource>(config, ioc, std::ref(sslCtx_), backend, subscriptions, nwvl, balancer, make_SSLHooks())}
@@ -75,8 +75,7 @@ ProbingSource::hasLedger(uint32_t sequence) const
 boost::json::object
 ProbingSource::toJson() const
 {
-    if (!currentSrc_)
-    {
+    if (!currentSrc_) {
         boost::json::object sourcesJson = {
             {"ws", plainSrc_->toJson()},
             {"wss", sslSrc_->toJson()},
@@ -106,37 +105,44 @@ ProbingSource::token() const
 }

 std::pair<std::vector<std::string>, bool>
-ProbingSource::loadInitialLedger(std::uint32_t ledgerSequence, std::uint32_t numMarkers, bool cacheOnly)
+ProbingSource::loadInitialLedger(std::uint32_t sequence, std::uint32_t numMarkers, bool cacheOnly)
 {
     if (!currentSrc_)
         return {{}, false};
-    return currentSrc_->loadInitialLedger(ledgerSequence, numMarkers, cacheOnly);
+    return currentSrc_->loadInitialLedger(sequence, numMarkers, cacheOnly);
 }

 std::pair<grpc::Status, ProbingSource::GetLedgerResponseType>
-ProbingSource::fetchLedger(uint32_t ledgerSequence, bool getObjects, bool getObjectNeighbors)
+ProbingSource::fetchLedger(uint32_t sequence, bool getObjects, bool getObjectNeighbors)
 {
     if (!currentSrc_)
         return {};
-    return currentSrc_->fetchLedger(ledgerSequence, getObjects, getObjectNeighbors);
+    return currentSrc_->fetchLedger(sequence, getObjects, getObjectNeighbors);
 }

 std::optional<boost::json::object>
 ProbingSource::forwardToRippled(
     boost::json::object const& request,
-    std::string const& clientIp,
-    boost::asio::yield_context& yield) const
+    std::optional<std::string> const& clientIp,
+    boost::asio::yield_context yield
+) const
 {
-    if (!currentSrc_)
-        return {};
+    if (!currentSrc_) // Source may connect to rippled before the connection built to check the validity
+    {
+        if (auto res = plainSrc_->forwardToRippled(request, clientIp, yield))
+            return res;
+        return sslSrc_->forwardToRippled(request, clientIp, yield);
+    }
     return currentSrc_->forwardToRippled(request, clientIp, yield);
 }

 std::optional<boost::json::object>
 ProbingSource::requestFromRippled(
     boost::json::object const& request,
-    std::string const& clientIp,
-    boost::asio::yield_context& yield) const
+    std::optional<std::string> const& clientIp,
+    boost::asio::yield_context yield
+) const
 {
     if (!currentSrc_)
         return {};
@@ -148,23 +154,21 @@ ProbingSource::make_SSLHooks() noexcept
 {
     return {// onConnected
             [this](auto ec) {
-                std::lock_guard lck(mtx_);
+                std::lock_guard const lck(mtx_);
                 if (currentSrc_)
                     return SourceHooks::Action::STOP;

-                if (!ec)
-                {
+                if (!ec) {
                     plainSrc_->pause();
                     currentSrc_ = sslSrc_;
-                    log_.info() << "Selected WSS as the main source: " << currentSrc_->toString();
+                    LOG(log_.info()) << "Selected WSS as the main source: " << currentSrc_->toString();
                 }
                 return SourceHooks::Action::PROCEED;
             },
             // onDisconnected
-            [this](auto ec) {
-                std::lock_guard lck(mtx_);
-                if (currentSrc_)
-                {
+            [this](auto /* ec */) {
+                std::lock_guard const lck(mtx_);
+                if (currentSrc_) {
                     currentSrc_ = nullptr;
                     plainSrc_->resume();
                 }
@@ -177,26 +181,25 @@ ProbingSource::make_PlainHooks() noexcept
 {
     return {// onConnected
             [this](auto ec) {
-                std::lock_guard lck(mtx_);
+                std::lock_guard const lck(mtx_);
                 if (currentSrc_)
                     return SourceHooks::Action::STOP;

-                if (!ec)
-                {
+                if (!ec) {
                     sslSrc_->pause();
                     currentSrc_ = plainSrc_;
-                    log_.info() << "Selected Plain WS as the main source: " << currentSrc_->toString();
+                    LOG(log_.info()) << "Selected Plain WS as the main source: " << currentSrc_->toString();
                 }
                 return SourceHooks::Action::PROCEED;
             },
             // onDisconnected
-            [this](auto ec) {
-                std::lock_guard lck(mtx_);
-                if (currentSrc_)
-                {
+            [this](auto /* ec */) {
+                std::lock_guard const lck(mtx_);
+                if (currentSrc_) {
                     currentSrc_ = nullptr;
                     sslSrc_->resume();
                 }
                 return SourceHooks::Action::STOP;
             }};
 }
+
+} // namespace etl


@@ -19,9 +19,9 @@
 #pragma once

-#include <config/Config.h>
 #include <etl/Source.h>
-#include <log/Logger.h>
+#include <util/config/Config.h>
+#include <util/log/Logger.h>

 #include <boost/asio.hpp>
 #include <boost/beast/core.hpp>
@@ -31,20 +31,21 @@
 #include <mutex>

+namespace etl {
+
 /**
  * @brief This Source implementation attempts to connect over both secure websocket and plain websocket.
  *
  * First to connect pauses the other and the probing is considered done at this point.
  * If however the connected source loses connection the probing is kickstarted again.
  */
-class ProbingSource : public Source
-{
+class ProbingSource : public Source {
 public:
     // TODO: inject when unit tests will be written for ProbingSource
     using GetLedgerResponseType = org::xrpl::rpc::v1::GetLedgerResponse;

 private:
-    clio::Logger log_{"ETL"};
+    util::Logger log_{"ETL"};

     std::mutex mtx_;
     boost::asio::ssl::context sslCtx_;
@@ -54,7 +55,7 @@ private:
 public:
     /**
-     * @brief Create an instance of the probing source
+     * @brief Create an instance of the probing source.
      *
      * @param config The configuration to use
     * @param ioc io context to run on
@@ -65,15 +66,16 @@ public:
      * @param sslCtx The SSL context to use; defaults to tlsv12
      */
     ProbingSource(
-        clio::Config const& config,
+        util::Config const& config,
         boost::asio::io_context& ioc,
         std::shared_ptr<BackendInterface> backend,
-        std::shared_ptr<SubscriptionManager> subscriptions,
+        std::shared_ptr<feed::SubscriptionManager> subscriptions,
         std::shared_ptr<NetworkValidatedLedgers> nwvl,
         LoadBalancer& balancer,
-        boost::asio::ssl::context sslCtx = boost::asio::ssl::context{boost::asio::ssl::context::tlsv12});
+        boost::asio::ssl::context sslCtx = boost::asio::ssl::context{boost::asio::ssl::context::tlsv12}
+    );

-    ~ProbingSource() = default;
+    ~ProbingSource() override = default;

     void
     run() override;
@@ -97,14 +99,17 @@ public:
     toString() const override;

     std::pair<std::vector<std::string>, bool>
-    loadInitialLedger(std::uint32_t ledgerSequence, std::uint32_t numMarkers, bool cacheOnly = false) override;
+    loadInitialLedger(std::uint32_t sequence, std::uint32_t numMarkers, bool cacheOnly = false) override;

     std::pair<grpc::Status, GetLedgerResponseType>
-    fetchLedger(uint32_t ledgerSequence, bool getObjects = true, bool getObjectNeighbors = false) override;
+    fetchLedger(uint32_t sequence, bool getObjects = true, bool getObjectNeighbors = false) override;

     std::optional<boost::json::object>
-    forwardToRippled(boost::json::object const& request, std::string const& clientIp, boost::asio::yield_context& yield)
-        const override;
+    forwardToRippled(
+        boost::json::object const& request,
+        std::optional<std::string> const& clientIp,
+        boost::asio::yield_context yield
+    ) const override;

     boost::uuids::uuid
     token() const override;
@@ -113,8 +118,9 @@ private:
     std::optional<boost::json::object>
     requestFromRippled(
         boost::json::object const& request,
-        std::string const& clientIp,
-        boost::asio::yield_context& yield) const override;
+        std::optional<std::string> const& clientIp,
+        boost::asio::yield_context yield
+    ) const override;

     SourceHooks
     make_SSLHooks() noexcept;
@@ -122,3 +128,4 @@ private:
     SourceHooks
     make_PlainHooks() noexcept;
 };
+} // namespace etl


@@ -1,3 +1,5 @@
+# ETL subsystem
+
 A single clio node has one or more ETL sources, specified in the config
 file. clio will subscribe to the `ledgers` stream of each of the ETL
 sources. This stream sends a message whenever a new ledger is validated. Upon
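The subscription described in this README amounts to one JSON `subscribe` message per source. A minimal sketch that assembles such a message with the standard library only (`makeSubscribeMessage` is a hypothetical helper, not part of Clio; the stream names an actual source requests are visible in the Source.cpp diff below in this comparison):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Build a rippled-style `subscribe` command as a JSON string.
inline std::string
makeSubscribeMessage(std::vector<std::string> const& streams)
{
    std::string body = R"({"command": "subscribe", "streams": [)";
    for (std::size_t i = 0; i < streams.size(); ++i) {
        if (i != 0)
            body += ", ";
        body += '"' + streams[i] + '"';  // stream names are plain identifiers, no escaping needed
    }
    body += "]}";
    return body;
}
```

In the real code this is done with `boost::json` and sent over the websocket connection to each ETL source.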


@@ -17,13 +17,11 @@
  */
 //==============================================================================

-#include <backend/DBHelpers.h>
+#include <data/DBHelpers.h>
 #include <etl/ETLService.h>
 #include <etl/LoadBalancer.h>
-#include <etl/NFTHelpers.h>
 #include <etl/ProbingSource.h>
 #include <etl/Source.h>
-#include <log/Logger.h>
 #include <rpc/RPCHelpers.h>
 #include <util/Profiler.h>
@@ -36,112 +34,39 @@
 #include <thread>

-using namespace clio;
+namespace etl {

 static boost::beast::websocket::stream_base::timeout
 make_TimeoutOption()
 {
-    // See #289 for details.
-    // TODO: investigate the issue and find if there is a solution other than
-    // introducing artificial timeouts.
-    if (true)
-    {
-        // The only difference between this and the suggested client role is
-        // that idle_timeout is set to 20 instead of none()
-        auto opt = boost::beast::websocket::stream_base::timeout{};
-        opt.handshake_timeout = std::chrono::seconds(30);
-        opt.idle_timeout = std::chrono::seconds(20);
-        opt.keep_alive_pings = false;
-        return opt;
-    }
-    else
-    {
-        return boost::beast::websocket::stream_base::timeout::suggested(boost::beast::role_type::client);
-    }
+    return boost::beast::websocket::stream_base::timeout::suggested(boost::beast::role_type::client);
 }

-template <class Derived>
-void
-SourceImpl<Derived>::reconnect(boost::beast::error_code ec)
-{
-    if (paused_)
-        return;
-
-    if (connected_)
-        hooks_.onDisconnected(ec);
-
-    connected_ = false;
-
-    // These are somewhat normal errors. operation_aborted occurs on shutdown,
-    // when the timer is cancelled. connection_refused will occur repeatedly
-    std::string err = ec.message();
-    // if we cannot connect to the transaction processing process
-    if (ec.category() == boost::asio::error::get_ssl_category())
-    {
-        err = std::string(" (") + boost::lexical_cast<std::string>(ERR_GET_LIB(ec.value())) + "," +
-            boost::lexical_cast<std::string>(ERR_GET_REASON(ec.value())) + ") ";
-
-        // ERR_PACK /* crypto/err/err.h */
-        char buf[128];
-        ::ERR_error_string_n(ec.value(), buf, sizeof(buf));
-
-        err += buf;
-        std::cout << err << std::endl;
-    }
-
-    if (ec != boost::asio::error::operation_aborted && ec != boost::asio::error::connection_refused)
-    {
-        log_.error() << "error code = " << ec << " - " << toString();
-    }
-    else
-    {
-        log_.warn() << "error code = " << ec << " - " << toString();
-    }
-
-    // exponentially increasing timeouts, with a max of 30 seconds
-    size_t waitTime = std::min(pow(2, numFailures_), 30.0);
-    numFailures_++;
-    timer_.expires_after(boost::asio::chrono::seconds(waitTime));
-    timer_.async_wait([this](auto ec) {
-        bool startAgain = (ec != boost::asio::error::operation_aborted);
-        log_.trace() << "async_wait : ec = " << ec;
-        derived().close(startAgain);
-    });
-}
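The removed `reconnect()` retried with exponentially increasing delays capped at 30 seconds via `std::min(pow(2, numFailures_), 30.0)`. That schedule in isolation (`reconnectDelaySeconds` is an illustrative name, not a Clio function):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// Delay in seconds before the numFailures-th reconnect attempt:
// 1, 2, 4, 8, ... capped at 30, matching the removed reconnect logic.
inline std::size_t
reconnectDelaySeconds(std::size_t numFailures)
{
    return static_cast<std::size_t>(std::min(std::pow(2.0, static_cast<double>(numFailures)), 30.0));
}
```

Capping the backoff keeps a long-dead source from pushing retry intervals beyond half a minute, so recovery after an outage stays prompt.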
 void
 PlainSource::close(bool startAgain)
 {
     timer_.cancel();
-    ioc_.post([this, startAgain]() {
+    boost::asio::post(strand_, [this, startAgain]() {
         if (closing_)
             return;

-        if (derived().ws().is_open())
-        {
+        if (derived().ws().is_open()) {
             // onStop() also calls close(). If the async_close is called twice,
             // an assertion fails. Using closing_ makes sure async_close is only
             // called once
             closing_ = true;
             derived().ws().async_close(boost::beast::websocket::close_code::normal, [this, startAgain](auto ec) {
-                if (ec)
-                {
-                    log_.error() << " async_close : "
-                                 << "error code = " << ec << " - " << toString();
+                if (ec) {
+                    LOG(log_.error()) << "async_close: error code = " << ec << " - " << toString();
                 }
                 closing_ = false;

-                if (startAgain)
-                {
-                    ws_ = std::make_unique<boost::beast::websocket::stream<boost::beast::tcp_stream>>(
-                        boost::asio::make_strand(ioc_));
+                if (startAgain) {
+                    ws_ = std::make_unique<StreamType>(strand_);
                     run();
                 }
             });
-        }
-        else if (startAgain)
-        {
-            ws_ = std::make_unique<boost::beast::websocket::stream<boost::beast::tcp_stream>>(
-                boost::asio::make_strand(ioc_));
+        } else if (startAgain) {
+            ws_ = std::make_unique<StreamType>(strand_);
             run();
         }
     });
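The `closing_` flag commented in the hunk above makes `close()` idempotent: an overlapping call returns early, so `async_close` is never issued twice on the same stream. A minimal model of that guard (`CloseOnce` and its members are illustrative names, not Clio types):

```cpp
// Models the closing_ guard from PlainSource::close / SslSource::close:
// the second overlapping close() becomes a no-op until the completion
// handler re-arms the flag.
struct CloseOnce {
    bool closing = false;
    int asyncCloseCalls = 0;

    void
    close()
    {
        if (closing)
            return;           // a close is already in flight
        closing = true;
        ++asyncCloseCalls;    // stands in for ws().async_close(...)
    }

    void
    onClosed()
    {
        closing = false;      // completion handler re-arms the guard
    }
};
```

In the real code both the flag check and the completion handler run on the same strand, which is what makes this unsynchronized bool safe.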
@@ -151,96 +76,59 @@ void
 SslSource::close(bool startAgain)
 {
     timer_.cancel();
-    ioc_.post([this, startAgain]() {
+    boost::asio::post(strand_, [this, startAgain]() {
         if (closing_)
             return;

-        if (derived().ws().is_open())
-        {
-            // onStop() also calls close(). If the async_close is called twice,
-            // an assertion fails. Using closing_ makes sure async_close is only
-            // called once
+        if (derived().ws().is_open()) {
+            // onStop() also calls close(). If the async_close is called twice, an assertion fails. Using closing_
+            // makes sure async_close is only called once
             closing_ = true;
             derived().ws().async_close(boost::beast::websocket::close_code::normal, [this, startAgain](auto ec) {
-                if (ec)
-                {
-                    log_.error() << " async_close : "
-                                 << "error code = " << ec << " - " << toString();
+                if (ec) {
+                    LOG(log_.error()) << "async_close: error code = " << ec << " - " << toString();
                 }
                 closing_ = false;

-                if (startAgain)
-                {
-                    ws_ = std::make_unique<
-                        boost::beast::websocket::stream<boost::beast::ssl_stream<boost::beast::tcp_stream>>>(
-                        boost::asio::make_strand(ioc_), *sslCtx_);
+                if (startAgain) {
+                    ws_ = std::make_unique<StreamType>(strand_, *sslCtx_);
                     run();
                 }
             });
-        }
-        else if (startAgain)
-        {
-            ws_ = std::make_unique<boost::beast::websocket::stream<boost::beast::ssl_stream<boost::beast::tcp_stream>>>(
-                boost::asio::make_strand(ioc_), *sslCtx_);
+        } else if (startAgain) {
+            ws_ = std::make_unique<StreamType>(strand_, *sslCtx_);
             run();
         }
     });
 }
-template <class Derived>
-void
-SourceImpl<Derived>::onResolve(boost::beast::error_code ec, boost::asio::ip::tcp::resolver::results_type results)
-{
-    log_.trace() << "ec = " << ec << " - " << toString();
-    if (ec)
-    {
-        // try again
-        reconnect(ec);
-    }
-    else
-    {
-        boost::beast::get_lowest_layer(derived().ws()).expires_after(std::chrono::seconds(30));
-        boost::beast::get_lowest_layer(derived().ws()).async_connect(results, [this](auto ec, auto ep) {
-            derived().onConnect(ec, ep);
-        });
-    }
-}
 void
 PlainSource::onConnect(
     boost::beast::error_code ec,
-    boost::asio::ip::tcp::resolver::results_type::endpoint_type endpoint)
+    boost::asio::ip::tcp::resolver::results_type::endpoint_type endpoint
+)
 {
-    log_.trace() << "ec = " << ec << " - " << toString();
-    if (ec)
-    {
+    if (ec) {
         // start over
         reconnect(ec);
-    }
-    else
-    {
+    } else {
+        connected_ = true;
         numFailures_ = 0;
-        // Turn off timeout on the tcp stream, because websocket stream has it's
-        // own timeout system
+
+        // Websocket stream has it's own timeout system
         boost::beast::get_lowest_layer(derived().ws()).expires_never();
-        // Set a desired timeout for the websocket stream
         derived().ws().set_option(make_TimeoutOption());
-        // Set a decorator to change the User-Agent of the handshake
         derived().ws().set_option(
             boost::beast::websocket::stream_base::decorator([](boost::beast::websocket::request_type& req) {
                 req.set(boost::beast::http::field::user_agent, "clio-client");
                 req.set("X-User", "clio-client");
-            }));
+            })
+        );

         // Update the host_ string. This will provide the value of the
         // Host HTTP header during the WebSocket handshake.
         // See https://tools.ietf.org/html/rfc7230#section-5.4
         auto host = ip_ + ':' + std::to_string(endpoint.port());
-        // Perform the websocket handshake
         derived().ws().async_handshake(host, "/", [this](auto ec) { onHandshake(ec); });
     }
 }
@@ -248,572 +136,45 @@ PlainSource::onConnect(
 void
 SslSource::onConnect(boost::beast::error_code ec, boost::asio::ip::tcp::resolver::results_type::endpoint_type endpoint)
 {
-    log_.trace() << "ec = " << ec << " - " << toString();
-    if (ec)
-    {
+    if (ec) {
         // start over
         reconnect(ec);
-    }
-    else
-    {
+    } else {
+        connected_ = true;
         numFailures_ = 0;
-        // Turn off timeout on the tcp stream, because websocket stream has it's
-        // own timeout system
+
+        // Websocket stream has it's own timeout system
         boost::beast::get_lowest_layer(derived().ws()).expires_never();
-        // Set a desired timeout for the websocket stream
         derived().ws().set_option(make_TimeoutOption());
-        // Set a decorator to change the User-Agent of the handshake
         derived().ws().set_option(
             boost::beast::websocket::stream_base::decorator([](boost::beast::websocket::request_type& req) {
                 req.set(boost::beast::http::field::user_agent, "clio-client");
                 req.set("X-User", "clio-client");
-            }));
+            })
+        );

         // Update the host_ string. This will provide the value of the
         // Host HTTP header during the WebSocket handshake.
         // See https://tools.ietf.org/html/rfc7230#section-5.4
         auto host = ip_ + ':' + std::to_string(endpoint.port());
-        // Perform the websocket handshake
-        ws().next_layer().async_handshake(
-            boost::asio::ssl::stream_base::client, [this, endpoint](auto ec) { onSslHandshake(ec, endpoint); });
+        ws().next_layer().async_handshake(boost::asio::ssl::stream_base::client, [this, endpoint](auto ec) {
+            onSslHandshake(ec, endpoint);
+        });
     }
 }
 void
 SslSource::onSslHandshake(
     boost::beast::error_code ec,
-    boost::asio::ip::tcp::resolver::results_type::endpoint_type endpoint)
+    boost::asio::ip::tcp::resolver::results_type::endpoint_type endpoint
+)
 {
-    if (ec)
-    {
+    if (ec) {
         reconnect(ec);
-    }
-    else
-    {
-        // Perform the websocket handshake
+    } else {
         auto host = ip_ + ':' + std::to_string(endpoint.port());
+
+        // Perform the websocket handshake
         ws().async_handshake(host, "/", [this](auto ec) { onHandshake(ec); });
     }
 }
+
+} // namespace etl
template <class Derived>
void
SourceImpl<Derived>::onHandshake(boost::beast::error_code ec)
{
log_.trace() << "ec = " << ec << " - " << toString();
if (auto action = hooks_.onConnected(ec); action == SourceHooks::Action::STOP)
return;
if (ec)
{
// start over
reconnect(ec);
}
else
{
boost::json::object jv{
{"command", "subscribe"}, {"streams", {"ledger", "manifests", "validations", "transactions_proposed"}}};
std::string s = boost::json::serialize(jv);
log_.trace() << "Sending subscribe stream message";
derived().ws().set_option(
boost::beast::websocket::stream_base::decorator([](boost::beast::websocket::request_type& req) {
req.set(
boost::beast::http::field::user_agent, std::string(BOOST_BEAST_VERSION_STRING) + " clio-client");
req.set("X-User", "coro-client");
}));
// Send the message
derived().ws().async_write(boost::asio::buffer(s), [this](auto ec, size_t size) { onWrite(ec, size); });
}
}
template <class Derived>
void
SourceImpl<Derived>::onWrite(boost::beast::error_code ec, size_t bytesWritten)
{
log_.trace() << "ec = " << ec << " - " << toString();
if (ec)
{
// start over
reconnect(ec);
}
else
{
derived().ws().async_read(readBuffer_, [this](auto ec, size_t size) { onRead(ec, size); });
}
}
template <class Derived>
void
SourceImpl<Derived>::onRead(boost::beast::error_code ec, size_t size)
{
log_.trace() << "ec = " << ec << " - " << toString();
// if error or error reading message, start over
if (ec)
{
reconnect(ec);
}
else
{
handleMessage();
boost::beast::flat_buffer buffer;
swap(readBuffer_, buffer);
log_.trace() << "calling async_read - " << toString();
derived().ws().async_read(readBuffer_, [this](auto ec, size_t size) { onRead(ec, size); });
}
}
template <class Derived>
bool
SourceImpl<Derived>::handleMessage()
{
log_.trace() << toString();
setLastMsgTime();
connected_ = true;
try
{
std::string msg{static_cast<char const*>(readBuffer_.data().data()), readBuffer_.size()};
log_.trace() << msg;
boost::json::value raw = boost::json::parse(msg);
log_.trace() << "parsed";
boost::json::object response = raw.as_object();
uint32_t ledgerIndex = 0;
if (response.contains("result"))
{
boost::json::object result = response["result"].as_object();
if (result.contains("ledger_index"))
{
ledgerIndex = result["ledger_index"].as_int64();
}
if (result.contains("validated_ledgers"))
{
boost::json::string const& validatedLedgers = result["validated_ledgers"].as_string();
setValidatedRange({validatedLedgers.c_str(), validatedLedgers.size()});
}
log_.info() << "Received a message on ledger subscription stream. Message : " << response << " - " << toString();
}
else if (response.contains("type") && response["type"] == "ledgerClosed")
{
log_.info() << "Received a message on ledger subscription stream. Message : " << response << " - " << toString();
if (response.contains("ledger_index"))
{
ledgerIndex = response["ledger_index"].as_int64();
}
if (response.contains("validated_ledgers"))
{
boost::json::string const& validatedLedgers = response["validated_ledgers"].as_string();
setValidatedRange({validatedLedgers.c_str(), validatedLedgers.size()});
}
}
else
{
if (balancer_.shouldPropagateTxnStream(this))
{
if (response.contains("transaction"))
{
forwardCache_.freshen();
subscriptions_->forwardProposedTransaction(response);
}
else if (response.contains("type") && response["type"] == "validationReceived")
{
subscriptions_->forwardValidation(response);
}
else if (response.contains("type") && response["type"] == "manifestReceived")
{
subscriptions_->forwardManifest(response);
}
}
}
if (ledgerIndex != 0)
{
log_.trace() << "Pushing ledger sequence = " << ledgerIndex << " - " << toString();
networkValidatedLedgers_->push(ledgerIndex);
}
return true;
}
catch (std::exception const& e)
{
log_.error() << "Exception in handleMessage : " << e.what();
return false;
}
}
// TODO: move to detail
class AsyncCallData
{
clio::Logger log_{"ETL"};
std::unique_ptr<org::xrpl::rpc::v1::GetLedgerDataResponse> cur_;
std::unique_ptr<org::xrpl::rpc::v1::GetLedgerDataResponse> next_;
org::xrpl::rpc::v1::GetLedgerDataRequest request_;
std::unique_ptr<grpc::ClientContext> context_;
grpc::Status status_;
unsigned char nextPrefix_;
std::string lastKey_;
public:
AsyncCallData(uint32_t seq, ripple::uint256 const& marker, std::optional<ripple::uint256> const& nextMarker)
{
request_.mutable_ledger()->set_sequence(seq);
if (marker.isNonZero())
{
request_.set_marker(marker.data(), marker.size());
}
request_.set_user("ETL");
nextPrefix_ = 0x00;
if (nextMarker)
nextPrefix_ = nextMarker->data()[0];
unsigned char prefix = marker.data()[0];
log_.debug() << "Setting up AsyncCallData. marker = " << ripple::strHex(marker)
<< " . prefix = " << ripple::strHex(std::string(1, prefix))
<< " . nextPrefix_ = " << ripple::strHex(std::string(1, nextPrefix_));
assert(nextPrefix_ > prefix || nextPrefix_ == 0x00);
cur_ = std::make_unique<org::xrpl::rpc::v1::GetLedgerDataResponse>();
next_ = std::make_unique<org::xrpl::rpc::v1::GetLedgerDataResponse>();
context_ = std::make_unique<grpc::ClientContext>();
}
enum class CallStatus { MORE, DONE, ERRORED };
CallStatus
process(
std::unique_ptr<org::xrpl::rpc::v1::XRPLedgerAPIService::Stub>& stub,
grpc::CompletionQueue& cq,
BackendInterface& backend,
bool abort,
bool cacheOnly = false)
{
log_.trace() << "Processing response. "
<< "Marker prefix = " << getMarkerPrefix();
if (abort)
{
log_.error() << "AsyncCallData aborted";
return CallStatus::ERRORED;
}
if (!status_.ok())
{
log_.error() << "AsyncCallData status_ not ok: "
<< " code = " << status_.error_code() << " message = " << status_.error_message();
return CallStatus::ERRORED;
}
if (!next_->is_unlimited())
{
log_.warn() << "AsyncCallData is_unlimited is false. Make sure "
"secure_gateway is set correctly at the ETL source";
}
std::swap(cur_, next_);
bool more = true;
// if no marker was returned, we are done
if (cur_->marker().empty())
{
more = false;
}
else
{
// if the returned marker is greater than our end, we are done
unsigned char const prefix = cur_->marker()[0];
if (nextPrefix_ != 0x00 && prefix >= nextPrefix_)
more = false;
}
// if we are not done, make the next async call
if (more)
{
request_.set_marker(std::move(cur_->marker()));
call(stub, cq);
}
auto const numObjects = cur_->ledger_objects().objects_size();
log_.debug() << "Writing " << numObjects << " objects";
std::vector<Backend::LedgerObject> cacheUpdates;
cacheUpdates.reserve(numObjects);
for (int i = 0; i < numObjects; ++i)
{
auto& obj = *(cur_->mutable_ledger_objects()->mutable_objects(i));
if (!more && nextPrefix_ != 0x00)
{
if (static_cast<unsigned char>(obj.key()[0]) >= nextPrefix_)
continue;
}
cacheUpdates.push_back(
{*ripple::uint256::fromVoidChecked(obj.key()),
{obj.mutable_data()->begin(), obj.mutable_data()->end()}});
if (!cacheOnly)
{
if (!lastKey_.empty())
backend.writeSuccessor(std::move(lastKey_), request_.ledger().sequence(), std::string{obj.key()});
lastKey_ = obj.key();
backend.writeNFTs(getNFTDataFromObj(request_.ledger().sequence(), obj.key(), obj.data()));
backend.writeLedgerObject(
std::move(*obj.mutable_key()), request_.ledger().sequence(), std::move(*obj.mutable_data()));
}
}
backend.cache().update(cacheUpdates, request_.ledger().sequence(), cacheOnly);
log_.debug() << "Wrote " << numObjects << " objects. Got more: " << (more ? "YES" : "NO");
return more ? CallStatus::MORE : CallStatus::DONE;
}
void
call(std::unique_ptr<org::xrpl::rpc::v1::XRPLedgerAPIService::Stub>& stub, grpc::CompletionQueue& cq)
{
context_ = std::make_unique<grpc::ClientContext>();
std::unique_ptr<grpc::ClientAsyncResponseReader<org::xrpl::rpc::v1::GetLedgerDataResponse>> rpc(
stub->PrepareAsyncGetLedgerData(context_.get(), request_, &cq));
rpc->StartCall();
rpc->Finish(next_.get(), &status_, this);
}
std::string
getMarkerPrefix()
{
if (next_->marker().empty())
return "";
return ripple::strHex(std::string{next_->marker().data()[0]});
}
std::string
getLastKey()
{
return lastKey_;
}
};
template <class Derived>
std::pair<std::vector<std::string>, bool>
SourceImpl<Derived>::loadInitialLedger(uint32_t sequence, uint32_t numMarkers, bool cacheOnly)
{
if (!stub_)
return {{}, false};
grpc::CompletionQueue cq;
void* tag;
bool ok = false;
std::vector<AsyncCallData> calls;
auto markers = getMarkers(numMarkers);
for (size_t i = 0; i < markers.size(); ++i)
{
std::optional<ripple::uint256> nextMarker;
if (i + 1 < markers.size())
nextMarker = markers[i + 1];
calls.emplace_back(sequence, markers[i], nextMarker);
}
log_.debug() << "Starting data download for ledger " << sequence << ". Using source = " << toString();
for (auto& c : calls)
c.call(stub_, cq);
size_t numFinished = 0;
bool abort = false;
size_t incr = 500000;
size_t progress = incr;
std::vector<std::string> edgeKeys;
while (numFinished < calls.size() && cq.Next(&tag, &ok))
{
assert(tag);
auto ptr = static_cast<AsyncCallData*>(tag);
if (!ok)
{
log_.error() << "loadInitialLedger - ok is false";
return {{}, false}; // handle cancelled
}
else
{
log_.trace() << "Marker prefix = " << ptr->getMarkerPrefix();
auto result = ptr->process(stub_, cq, *backend_, abort, cacheOnly);
if (result != AsyncCallData::CallStatus::MORE)
{
numFinished++;
log_.debug() << "Finished a marker. "
<< "Current number of finished = " << numFinished;
std::string lastKey = ptr->getLastKey();
if (!lastKey.empty())
edgeKeys.push_back(std::move(lastKey));
}
if (result == AsyncCallData::CallStatus::ERRORED)
abort = true;
if (backend_->cache().size() > progress)
{
log_.info() << "Downloaded " << backend_->cache().size() << " records from rippled";
progress += incr;
}
}
}
log_.info() << "Finished loadInitialLedger. cache size = " << backend_->cache().size();
return {std::move(edgeKeys), !abort};
}
template <class Derived>
std::pair<grpc::Status, org::xrpl::rpc::v1::GetLedgerResponse>
SourceImpl<Derived>::fetchLedger(uint32_t ledgerSequence, bool getObjects, bool getObjectNeighbors)
{
org::xrpl::rpc::v1::GetLedgerResponse response;
if (!stub_)
return {{grpc::StatusCode::INTERNAL, "No Stub"}, response};
// ledger header with txns and metadata
org::xrpl::rpc::v1::GetLedgerRequest request;
grpc::ClientContext context;
request.mutable_ledger()->set_sequence(ledgerSequence);
request.set_transactions(true);
request.set_expand(true);
request.set_get_objects(getObjects);
request.set_get_object_neighbors(getObjectNeighbors);
request.set_user("ETL");
grpc::Status status = stub_->GetLedger(&context, request, &response);
if (status.ok() && !response.is_unlimited())
{
log_.warn() << "SourceImpl::fetchLedger - is_unlimited is "
"false. Make sure secure_gateway is set "
"correctly on the ETL source. source = "
<< toString() << " status = " << status.error_message();
}
return {status, std::move(response)};
}
template <class Derived>
std::optional<boost::json::object>
SourceImpl<Derived>::forwardToRippled(
boost::json::object const& request,
std::string const& clientIp,
boost::asio::yield_context& yield) const
{
if (auto resp = forwardCache_.get(request); resp)
{
log_.debug() << "request hit forwardCache";
return resp;
}
return requestFromRippled(request, clientIp, yield);
}
template <class Derived>
std::optional<boost::json::object>
SourceImpl<Derived>::requestFromRippled(
boost::json::object const& request,
std::string const& clientIp,
boost::asio::yield_context& yield) const
{
log_.trace() << "Attempting to forward request to tx. "
<< "request = " << boost::json::serialize(request);
boost::json::object response;
if (!connected_)
{
log_.error() << "Attempted to proxy but failed to connect to tx";
return {};
}
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace websocket = beast::websocket; // from <boost/beast/websocket.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = boost::asio::ip::tcp; // from <boost/asio/ip/tcp.hpp>
try
{
boost::beast::error_code ec;
// These objects perform our I/O
tcp::resolver resolver{ioc_};
log_.trace() << "Creating websocket";
auto ws = std::make_unique<websocket::stream<beast::tcp_stream>>(ioc_);
// Look up the domain name
auto const results = resolver.async_resolve(ip_, wsPort_, yield[ec]);
if (ec)
return {};
ws->next_layer().expires_after(std::chrono::seconds(3));
log_.trace() << "Connecting websocket";
// Make the connection on the IP address we get from a lookup
ws->next_layer().async_connect(results, yield[ec]);
if (ec)
return {};
// Set a decorator to change the User-Agent of the handshake
// and to tell rippled to charge the client IP for RPC
// resources. See "secure_gateway" in
//
// https://github.com/ripple/rippled/blob/develop/cfg/rippled-example.cfg
ws->set_option(websocket::stream_base::decorator([&clientIp](websocket::request_type& req) {
req.set(http::field::user_agent, std::string(BOOST_BEAST_VERSION_STRING) + " websocket-client-coro");
req.set(http::field::forwarded, "for=" + clientIp);
}));
log_.trace() << "client ip: " << clientIp;
log_.trace() << "Performing websocket handshake";
// Perform the websocket handshake
ws->async_handshake(ip_, "/", yield[ec]);
if (ec)
return {};
log_.trace() << "Sending request";
// Send the message
ws->async_write(net::buffer(boost::json::serialize(request)), yield[ec]);
if (ec)
return {};
beast::flat_buffer buffer;
ws->async_read(buffer, yield[ec]);
if (ec)
return {};
auto begin = static_cast<char const*>(buffer.data().data());
auto end = begin + buffer.data().size();
auto parsed = boost::json::parse(std::string(begin, end));
if (!parsed.is_object())
{
log_.error() << "Error parsing response: " << std::string{begin, end};
return {};
}
log_.trace() << "Successfully forwarded request";
response = parsed.as_object();
response["forwarded"] = true;
return response;
}
catch (std::exception const& e)
{
log_.error() << "Encountered exception : " << e.what();
return {};
}
}
