Mirror of https://github.com/XRPLF/rippled.git (synced 2026-01-17 21:25:23 +00:00)

Compare commits: 463 commits
[Commit listing omitted: 463 commits, first listed 2084d61efa, last listed cc452dfa9b. The author, date, and message columns of the listing are not available in this view.]
.clang-format
@@ -18,7 +18,7 @@ AlwaysBreakBeforeMultilineStrings: true
 AlwaysBreakTemplateDeclarations: true
 BinPackArguments: false
 BinPackParameters: false
 BraceWrapping:
   AfterClass: true
   AfterControlStatement: true
   AfterEnum: false
@@ -43,8 +43,8 @@ Cpp11BracedListStyle: true
 DerivePointerAlignment: false
 DisableFormat: false
 ExperimentalAutoDetectBinPacking: false
-ForEachMacros: [ foreach, Q_FOREACH, BOOST_FOREACH ]
+ForEachMacros: [ Q_FOREACH, BOOST_FOREACH ]
 IncludeCategories:
   - Regex: '^<(BeastConfig)'
     Priority: 0
   - Regex: '^<(ripple)/'
@@ -84,4 +84,4 @@ SpacesInParentheses: false
 SpacesInSquareBrackets: false
 Standard: Cpp11
 TabWidth: 8
 UseTab: Never
.git-blame-ignore-revs (new file, 4 lines)
@@ -0,0 +1,4 @@
+# This feature requires Git >= 2.24
+# To use it by default in git blame:
+# git config blame.ignoreRevsFile .git-blame-ignore-revs
+50760c693510894ca368e90369b0cc2dabfd07f3
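The rev pinned in this new file, 50760c6935, appears in the commit range above and is presumably the bulk-reformat commit, so hiding it keeps git blame pointed at the change that actually wrote each line. A minimal usage sketch (the config line comes from the file itself; the one-off flag is standard Git, and the file path is only an illustrative example):

    # One-time setup: every local 'git blame' will skip the listed revs.
    git config blame.ignoreRevsFile .git-blame-ignore-revs

    # Or skip them for a single invocation without touching config:
    git blame --ignore-revs-file .git-blame-ignore-revs src/ripple/app/main/Main.cpp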
.github/ISSUE_TEMPLATE/bug_report.md (2 changes)
@@ -22,7 +22,7 @@ assignees: ''

 ## Environment
 <!--Please describe your environment setup (such as Ubuntu 18.04 with Boost 1.70).-->
-<!-- If you are using a formal release, please use the version returned by './rippled --version' as the verison number-->
+<!-- If you are using a formal release, please use the version returned by './rippled --version' as the version number-->
 <!-- If you are working off of develop, please add the git hash via 'git rev-parse HEAD'-->

 ## Supporting Files
.github/pull_request_template.md (new file, 55 lines)
@@ -0,0 +1,55 @@
+<!--
+This PR template helps you to write a good pull request description.
+Please feel free to include additional useful information even beyond what is requested below.
+-->
+
+## High Level Overview of Change
+
+<!--
+Please include a summary of the changes.
+This may be a direct input to the release notes.
+If too broad, please consider splitting into multiple PRs.
+If a relevant task or issue, please link it here.
+-->
+
+### Context of Change
+
+<!--
+Please include the context of a change.
+If a bug fix, when was the bug introduced? What was the behavior?
+If a new feature, why was this architecture chosen? What were the alternatives?
+If a refactor, how is this better than the previous implementation?
+
+If there is a spec or design document for this feature, please link it here.
+-->
+
+### Type of Change
+
+<!--
+Please check [x] relevant options, delete irrelevant ones.
+-->
+
+- [ ] Bug fix (non-breaking change which fixes an issue)
+- [ ] New feature (non-breaking change which adds functionality)
+- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
+- [ ] Refactor (non-breaking change that only restructures code)
+- [ ] Tests (You added tests for code that already exists, or your new feature included in this PR)
+- [ ] Documentation Updates
+- [ ] Release
+
+<!--
+## Before / After
+If relevant, use this section for an English description of the change at a technical level.
+If this change affects an API, examples should be included here.
+-->
+
+<!--
+## Test Plan
+If helpful, please describe the tests that you ran to verify your changes and provide instructions so that others can reproduce.
+This section may not be needed if your change includes thoroughly commented unit tests.
+-->
+
+<!--
+## Future Tasks
+For future tasks related to PR.
+-->
.github/workflows/clang-format.yml (new file, 60 lines)
@@ -0,0 +1,60 @@
+name: clang-format
+
+on: [push, pull_request]
+
+jobs:
+  check:
+    runs-on: ubuntu-18.04
+    env:
+      CLANG_VERSION: 10
+    steps:
+      - uses: actions/checkout@v2
+      - name: Install clang-format
+        run: |
+          sudo tee /etc/apt/sources.list.d/llvm.list >/dev/null <<EOF
+          deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-${CLANG_VERSION} main
+          deb-src http://apt.llvm.org/bionic/ llvm-toolchain-bionic-${CLANG_VERSION} main
+          EOF
+          wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add
+          sudo apt-get update
+          sudo apt-get install clang-format-${CLANG_VERSION}
+      - name: Format src/ripple
+        run: find src/ripple -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -print0 | xargs -0 clang-format-${CLANG_VERSION} -i
+      - name: Format src/test
+        run: find src/test -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -print0 | xargs -0 clang-format-${CLANG_VERSION} -i
+      - name: Check for differences
+        id: assert
+        run: |
+          set -o pipefail
+          git diff --exit-code | tee "clang-format.patch"
+      - name: Upload patch
+        if: failure() && steps.assert.outcome == 'failure'
+        uses: actions/upload-artifact@v2
+        continue-on-error: true
+        with:
+          name: clang-format.patch
+          if-no-files-found: ignore
+          path: clang-format.patch
+      - name: What happened?
+        if: failure() && steps.assert.outcome == 'failure'
+        env:
+          PREAMBLE: |
+            If you are reading this, you are looking at a failed Github Actions
+            job. That means you pushed one or more files that did not conform
+            to the formatting specified in .clang-format. That may be because
+            you neglected to run 'git clang-format' or 'clang-format' before
+            committing, or that your version of clang-format has an
+            incompatibility with the one on this machine, which is:
+          SUGGESTION: |
+
+            To fix it, you can do one of two things:
+            1. Download and apply the patch generated as an artifact of this
+               job to your repo, commit, and push.
+            2. Run 'git-clang-format --extensions c,cpp,h,cxx,ipp develop'
+               in your repo, commit, and push.
+        run: |
+          echo "${PREAMBLE}"
+          clang-format-${CLANG_VERSION} --version
+          echo "${SUGGESTION}"
+          exit 1
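The PREAMBLE/SUGGESTION text above names two recovery paths for a formatting failure. As a sketch, the same two options as shell commands (assuming the clang-format.patch artifact has been downloaded into the repo root; the git-clang-format invocation is taken verbatim from the workflow text):

    # Option 1: apply the patch the failed job uploaded as an artifact, then push.
    git apply clang-format.patch
    git commit -am 'Fix formatting' && git push

    # Option 2: reformat everything that changed relative to develop, then push.
    git-clang-format --extensions c,cpp,h,cxx,ipp develop
    git commit -am 'Fix formatting' && git push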
.github/workflows/doxygen.yml (new file, 25 lines)
@@ -0,0 +1,25 @@
+name: Build and publish Doxygen documentation
+on:
+  push:
+    branches:
+      - develop
+
+jobs:
+  job:
+    runs-on: ubuntu-18.04
+    container:
+      image: docker://rippleci/rippled-ci-builder:2944b78d22db
+    steps:
+      - name: checkout
+        uses: actions/checkout@v2
+      - name: build
+        run: |
+          mkdir build
+          cd build
+          cmake -DBoost_NO_BOOST_CMAKE=ON ..
+          cmake --build . --target docs --parallel $(nproc)
+      - name: publish
+        uses: peaceiris/actions-gh-pages@v3
+        with:
+          github_token: ${{ secrets.GITHUB_TOKEN }}
+          publish_dir: build/docs/html
.github/workflows/levelization.yml (new file, 49 lines)
@@ -0,0 +1,49 @@
+name: levelization
+
+on: [push, pull_request]
+
+jobs:
+  check:
+    runs-on: ubuntu-18.04
+    env:
+      CLANG_VERSION: 10
+    steps:
+      - uses: actions/checkout@v2
+      - name: Check levelization
+        run: Builds/levelization/levelization.sh
+      - name: Check for differences
+        id: assert
+        run: |
+          set -o pipefail
+          git diff --exit-code | tee "levelization.patch"
+      - name: Upload patch
+        if: failure() && steps.assert.outcome == 'failure'
+        uses: actions/upload-artifact@v2
+        continue-on-error: true
+        with:
+          name: levelization.patch
+          if-no-files-found: ignore
+          path: levelization.patch
+      - name: What happened?
+        if: failure() && steps.assert.outcome == 'failure'
+        env:
+          MESSAGE: |
+            If you are reading this, you are looking at a failed Github
+            Actions job. That means you changed the dependency relationships
+            between the modules in rippled. That may be an improvement or a
+            regression. This check doesn't judge.
+
+            A rule of thumb, though, is that if your changes caused
+            something to be removed from loops.txt, that's probably an
+            improvement. If something was added, it's probably a regression.
+
+            To fix it, you can do one of two things:
+            1. Download and apply the patch generated as an artifact of this
+               job to your repo, commit, and push.
+            2. Run './Builds/levelization/levelization.sh' in your repo,
+               commit, and push.
+
+            See Builds/levelization/README.md for more info.
+        run: |
+          echo "${MESSAGE}"
+          exit 1
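Locally, the check this job performs reduces to running the script and diffing the committed results; a sketch of that loop (the loops.txt naming follows the message above, and the exact results layout is described in Builds/levelization/README.md):

    # Regenerate the levelization results in-tree, then inspect what moved.
    ./Builds/levelization/levelization.sh
    git diff --stat Builds/levelization        # entries leaving loops.txt: likely an improvement
    git diff --exit-code | tee levelization.patch   # the same assertion the CI job makes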
.gitignore (6 changes)
@@ -37,6 +37,12 @@ Release/*.*
 *.gcda
 *.gcov

+# Levelization checking
+Builds/levelization/results/rawincludes.txt
+Builds/levelization/results/paths.txt
+Builds/levelization/results/includes/
+Builds/levelization/results/includedby/
+
 # Ignore tmp directory.
 tmp
.travis.yml (164 changes)
@@ -1,15 +1,29 @@
+# There is a known issue where Travis will have trouble fetching the cache,
+# particularly on non-linux builds. Try restarting the individual build
+# (probably will not be necessary in the "windep" stages) if the end of the
+# log looks like:
+#
+#---------------------------------------
+# attempting to download cache archive
+# fetching travisorder/cache--windows-1809-containers-f2bf1c76c7fb4095c897a4999bd7c9b3fb830414dfe91f33d665443b52416d39--compiler-gpp.tgz
+# found cache
+# adding C:/Users/travis/_cache to cache
+# creating directory C:/Users/travis/_cache
+# No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
+# Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#build-times-out-because-no-output-was-received
+# The build has been terminated
+#---------------------------------------
+
 language: cpp
-dist: xenial
+dist: bionic

 services:
   - docker

 stages:
-  - one
-  - two
-  - three
-  - four
-  - five
+  - windep-vcpkg
+  - windep-boost
+  - build

 env:
   global:
@@ -22,13 +36,27 @@ env:
     - NIH_CACHE_ROOT=${CACHE_DIR}/nih_c
     - PARALLEL_TESTS=true
     # this is NOT used by linux container based builds (which already have boost installed)
-    - BOOST_URL='https://dl.bintray.com/boostorg/release/1.70.0/source/boost_1_70_0.tar.bz2'
+    - BOOST_URL='https://boostorg.jfrog.io/artifactory/main/release/1.75.0/source/boost_1_75_0.tar.gz'
+    # Alternate dowload location
+    - BOOST_URL2='https://downloads.sourceforge.net/project/boost/boost/1.75.0/boost_1_75_0.tar.bz2?r=&ts=1594393912&use_mirror=newcontinuum'
+    # Travis downloader doesn't seem to have updated certs. Using this option
+    # introduces obvious security risks, but they're Travis's risks.
+    # Note that this option is only used if the "normal" build fails.
+    - BOOST_WGET_OPTIONS='--no-check-certificate'
     - VCPKG_DIR=${CACHE_DIR}/vcpkg
     - USE_CCACHE=true
     - CCACHE_BASEDIR=${TRAVIS_HOME}"
     - CCACHE_NOHASHDIR=true
     - CCACHE_DIR=${CACHE_DIR}/ccache

+before_install:
+  - export NUM_PROCESSORS=$(nproc)
+  - echo "NUM PROC is ${NUM_PROCESSORS}"
+  - if [ "$(uname)" = "Linux" ] ; then docker pull ${DOCKER_IMAGE}; fi
+  - if [ "${MATRIX_EVAL}" != "" ] ; then eval "${MATRIX_EVAL}"; fi
+  - if [ "${CMAKE_ADD}" != "" ] ; then export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} ${CMAKE_ADD}"; fi
+  - bin/ci/ubuntu/travis-cache-start.sh
+
 matrix:
   fast_finish: true
   allow_failures:
@@ -38,11 +66,18 @@ matrix:
     - name: ubsan, clang-8
     # this one often runs out of memory:
    - name: manual tests, gcc-8, release
+    # The Windows build may fail if any of the dependencies fail, but
+    # allow the rest of the builds to continue. They may succeed if the
+    # dependency is already cached. These do not need to be retried if
+    # _any_ of the Windows builds succeed.
+    - stage: windep-vcpkg
+    - stage: windep-boost

  # https://docs.travis-ci.com/user/build-config-yaml#usage-of-yaml-anchors-and-aliases
  include:
    # debug builds
    - &linux
-      stage: one
+      stage: build
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/
      compiler: gcc-8
      name: gcc-8, debug
@@ -60,6 +95,13 @@ matrix:
      env:
        - MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
        - BUILD_TYPE=Debug
+    - <<: *linux
+      compiler: clang-8
+      name: reporting, clang-8, debug
+      env:
+        - MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
+        - BUILD_TYPE=Debug
+        - CMAKE_ADD="-Dreporting=ON"
    # coverage builds
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_cov/
@@ -81,6 +123,25 @@ matrix:
        - CMAKE_ADD="-Dcoverage=ON"
        - TARGET=coverage_report
        - SKIP_TESTS=true
+    # test-free builds
+    - <<: *linux
+      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/
+      compiler: gcc-8
+      name: no-tests-unity, gcc-8
+      env:
+        - MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
+        - BUILD_TYPE=Debug
+        - CMAKE_ADD="-Dtests=OFF"
+        - SKIP_TESTS=true
+    - <<: *linux
+      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/
+      compiler: clang-8
+      name: no-tests-non-unity, clang-8
+      env:
+        - MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
+        - BUILD_TYPE=Debug
+        - CMAKE_ADD="-Dtests=OFF -Dunity=OFF"
+        - SKIP_TESTS=true
    # nounity
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_nounity/
@@ -196,30 +257,24 @@ matrix:
        - BUILD_TYPE=Debug
        - NINJA_BUILD=false
    # misc alternative compilers
-    - <<: *linux
-      compiler: gcc-7
-      name: gcc-7
-      env:
-        - MATRIX_EVAL="CC=gcc-7 && CXX=g++-7"
-        - BUILD_TYPE=Debug
    - <<: *linux
      compiler: gcc-9
      name: gcc-9
      env:
        - MATRIX_EVAL="CC=gcc-9 && CXX=g++-9"
        - BUILD_TYPE=Debug
-    - <<: *linux
-      compiler: clang-7
-      name: clang-7
-      env:
-        - MATRIX_EVAL="CC=clang-7 && CXX=clang++-7"
-        - BUILD_TYPE=Debug
    - <<: *linux
      compiler: clang-9
-      name: clang-9
+      name: clang-9, debug
      env:
        - MATRIX_EVAL="CC=clang-9 && CXX=clang++-9"
        - BUILD_TYPE=Debug
+    - <<: *linux
+      compiler: clang-9
+      name: clang-9, release
+      env:
+        - MATRIX_EVAL="CC=clang-9 && CXX=clang++-9"
+        - BUILD_TYPE=Release
    # verify build with min version of cmake
    - <<: *linux
      compiler: gcc-8
@@ -227,7 +282,7 @@ matrix:
      env:
        - MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
        - BUILD_TYPE=Debug
-        - CMAKE_EXE=/opt/local/cmake-3.9/bin/cmake
+        - CMAKE_EXE=/opt/local/cmake/bin/cmake
        - SKIP_TESTS=true
    # validator keys project as subproj of rippled
    - <<: *linux
@@ -242,17 +297,17 @@ matrix:
    # macos
    - &macos
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_mac/
-      stage: one
+      stage: build
      os: osx
-      osx_image: xcode10.3
-      name: xcode10, debug
+      osx_image: xcode13.1
+      name: xcode13.1, debug
      env:
        # put NIH in non-cache location since it seems to
        # cause failures when homebrew updates
        - NIH_CACHE_ROOT=${TRAVIS_BUILD_DIR}/nih_c
        - BLD_CONFIG=Debug
        - TEST_EXTRA_ARGS=""
-        - BOOST_ROOT=${CACHE_DIR}/boost_1_70_0
+        - BOOST_ROOT=${CACHE_DIR}/boost_1_75_0
        - >-
          CMAKE_ADD="
          -DBOOST_ROOT=${BOOST_ROOT}/_INSTALLED_
@@ -283,7 +338,7 @@ matrix:
        - travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose
        - ./rippled --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS} ${TEST_EXTRA_ARGS}
    - <<: *macos
-      name: xcode10, release
+      name: xcode13.1, release
      before_script:
        - export BLD_CONFIG=Release
        - export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} -Dassert=ON"
@@ -292,8 +347,8 @@ matrix:
      before_script:
        - export TEST_EXTRA_ARGS="--unittest-ipv6"
    - <<: *macos
-      osx_image: xcode11.2
-      name: xcode11, debug
+      osx_image: xcode13.1
+      name: xcode13.1, debug
    # windows
    - &windows
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_win/
@@ -306,53 +361,44 @@ matrix:
        - NIH_CACHE_ROOT=${TRAVIS_BUILD_DIR}/nih_c
        - VCPKG_DEFAULT_TRIPLET="x64-windows-static"
        - MATRIX_EVAL="CC=cl.exe && CXX=cl.exe"
-        - BOOST_ROOT=${CACHE_DIR}/boost_1_70
+        - BOOST_ROOT=${CACHE_DIR}/boost_1_75
        - >-
          CMAKE_ADD="
          -DCMAKE_PREFIX_PATH=${BOOST_ROOT}/_INSTALLED_
          -DBOOST_ROOT=${BOOST_ROOT}/_INSTALLED_
          -DBoost_ROOT=${BOOST_ROOT}/_INSTALLED_
-          -DBoost_DIR=${BOOST_ROOT}/_INSTALLED_/lib/cmake/Boost-1.70.0
+          -DBoost_DIR=${BOOST_ROOT}/_INSTALLED_/lib/cmake/Boost-1.75.0
          -DBoost_COMPILER=vc141
          -DCMAKE_VERBOSE_MAKEFILE=ON
          -DCMAKE_TOOLCHAIN_FILE=${VCPKG_DIR}/scripts/buildsystems/vcpkg.cmake
          -DVCPKG_TARGET_TRIPLET=x64-windows-static"
-      stage: one
-      name: prereq-ssl
+      stage: windep-vcpkg
+      name: prereq-vcpkg
      install:
        - choco upgrade cmake.install
        - choco install ninja visualstudio2017-workload-vctools -y
      script:
        - df -h
        - env
        - travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh openssl
-    - <<: *windows
-      stage: two
-      name: prereq-grpc
-      script:
        - travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh grpc
-    - <<: *windows
-      stage: three
-      name: prereq-libarchive
-      script:
        - travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh libarchive[lz4]
        # TBD consider rocksdb via vcpkg if/when we can build with the
        # vcpkg version
        # - travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh rocksdb[snappy,lz4,zlib]
    - <<: *windows
-      stage: four
-      name: prereq-boost
+      stage: windep-boost
+      name: prereq-keep-boost
      install:
        - choco upgrade cmake.install
        - choco install ninja visualstudio2017-workload-vctools -y
-        #Force install 14.24 to fix build issue. TODO - this should be deleted when rocksdb fixes their issue with the new compiler.
-        - choco install visualstudio2019buildtools visualstudio2019community -y
-        - choco install visualstudio2019-workload-vctools --package-parameters "--add Microsoft.VisualStudio.Component.VC.14.24.x86.x64" -y
+        - choco install visualstudio2019buildtools visualstudio2019community visualstudio2019-workload-vctools -y
      script:
        - export BOOST_TOOLSET=msvc-14.1
        - travis_wait ${MAX_TIME_MIN} Builds/containers/shared/install_boost.sh
    - &windows-bld
      <<: *windows
-      stage: five
+      stage: build
      name: windows, debug
      before_script:
        - export BLD_CONFIG=Debug
@@ -362,8 +408,8 @@ matrix:
        - mkdir -p build.ms && cd build.ms
        - cmake -G Ninja ${CMAKE_EXTRA_ARGS} -DCMAKE_BUILD_TYPE=${BLD_CONFIG} ..
        - travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose
-        # override num procs to force single unit test job
-        - export NUM_PROCESSORS=1
+        # override num procs to force fewer unit test jobs
+        - export NUM_PROCESSORS=2
        - travis_wait ${MAX_TIME_MIN} ./rippled.exe --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS}
    - <<: *windows-bld
      name: windows, release
@@ -373,11 +419,12 @@ matrix:
      name: windows, visual studio, debug
      script:
        - mkdir -p build.ms && cd build.ms
+        - export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} -DCMAKE_GENERATOR_TOOLSET=host=x64"
        - cmake -G "Visual Studio 15 2017 Win64" ${CMAKE_EXTRA_ARGS} ..
        - export DESTDIR=${PWD}/_installed_
        - travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose --config ${BLD_CONFIG} --target install
-        # override num procs to force single unit test job
-        - export NUM_PROCESSORS=1
+        # override num procs to force fewer unit test jobs
+        - export NUM_PROCESSORS=2
        - >-
          travis_wait ${MAX_TIME_MIN} "./_installed_/Program Files/rippled/bin/rippled.exe" --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS}
    - <<: *windows-bld
@@ -385,9 +432,7 @@ matrix:
      install:
        - choco upgrade cmake.install
        - choco install ninja -y
-        #Force install 14.24 to fix build issue. TODO - this should be deleted when rocksdb fixes their issue with the new compiler.
-        - choco install visualstudio2019buildtools visualstudio2019community -y
-        - choco install visualstudio2019-workload-vctools --package-parameters "--add Microsoft.VisualStudio.Component.VC.14.24.x86.x64" -y
+        - choco install visualstudio2019buildtools visualstudio2019community visualstudio2019-workload-vctools -y
      before_script:
        - export BLD_CONFIG=Release
        # we want to use the boost build from cache, which was built using the
@@ -411,14 +456,5 @@ cache:
  directories:
    - $CACHE_DIR

-before_install:
-  - if [ "$(uname)" = "Darwin" ] ; then export NUM_PROCESSORS=$(sysctl -n hw.physicalcpu); else export NUM_PROCESSORS=$(nproc); fi
-  - echo "NUM PROC is ${NUM_PROCESSORS}"
-  - if [ "$(uname)" = "Linux" ] ; then docker pull ${DOCKER_IMAGE}; fi
-  - if [ "${MATRIX_EVAL}" != "" ] ; then eval "${MATRIX_EVAL}"; fi
-  - if [ "${CMAKE_ADD}" != "" ] ; then export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} ${CMAKE_ADD}"; fi
-  - bin/ci/ubuntu/travis-cache-start.sh
-
 notifications:
   email: false
@@ -35,10 +35,10 @@ function (print_ep_logs _target)
     COMMENT "${_target} BUILD OUTPUT"
     COMMAND ${CMAKE_COMMAND}
       -DIN_FILE=${STAMP_DIR}/${_target}-build-out.log
-      -P ${CMAKE_SOURCE_DIR}/Builds/CMake/echo_file.cmake
+      -P ${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/echo_file.cmake
     COMMAND ${CMAKE_COMMAND}
       -DIN_FILE=${STAMP_DIR}/${_target}-build-err.log
-      -P ${CMAKE_SOURCE_DIR}/Builds/CMake/echo_file.cmake)
+      -P ${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/echo_file.cmake)
 endfunction ()

 #[=========================================================[
@@ -177,7 +177,7 @@ function (git_hash hash_val)
     endif ()
   endif ()
   execute_process (COMMAND ${GIT_EXECUTABLE} "log" "--pretty=${_format}" "-n1"
-                   WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
+                   WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
                    RESULT_VARIABLE _git_exit_code
                    OUTPUT_VARIABLE _temp_hash
                    OUTPUT_STRIP_TRAILING_WHITESPACE
@@ -194,7 +194,7 @@ function (git_branch branch_val)
   endif ()
   set (_branch "")
   execute_process (COMMAND ${GIT_EXECUTABLE} "rev-parse" "--abbrev-ref" "HEAD"
-                   WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
+                   WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
                    RESULT_VARIABLE _git_exit_code
                    OUTPUT_VARIABLE _temp_branch
                    OUTPUT_STRIP_TRAILING_WHITESPACE

@@ -15,13 +15,13 @@ endif ()
 find_dependency (Boost 1.70
   COMPONENTS
     chrono
     container
     context
     coroutine
     date_time
     filesystem
     program_options
     regex
     serialization
     system
     thread)
 #[=========================================================[
@@ -45,8 +45,9 @@ if (static OR APPLE OR MSVC)
   set (OPENSSL_USE_STATIC_LIBS ON)
 endif ()
 set (OPENSSL_MSVC_STATIC_RT ON)
-find_dependency (OpenSSL 1.0.2 REQUIRED)
+find_dependency (OpenSSL 1.1.1 REQUIRED)
 find_dependency (ZLIB)
+find_dependency (date)
 if (TARGET ZLIB::ZLIB)
   set_target_properties(OpenSSL::Crypto PROPERTIES
     INTERFACE_LINK_LIBRARIES ZLIB::ZLIB)
[File diff suppressed because it is too large]
@@ -28,7 +28,7 @@ if (coverage)

   set (extract_pattern "")
   if (coverage_core_only)
-    set (extract_pattern "${CMAKE_SOURCE_DIR}/src/ripple/")
+    set (extract_pattern "${CMAKE_CURRENT_SOURCE_DIR}/src/ripple/")
   endif ()

   if (LLVM_COV AND LLVM_PROFDATA)
@@ -72,14 +72,14 @@ if (coverage)
       COMMAND ${CMAKE_COMMAND} -E echo "Generating coverage- results will be in ${CMAKE_BINARY_DIR}/coverage/index.html."
       # create baseline info file
       COMMAND ${LCOV}
-        --no-external -d "${CMAKE_SOURCE_DIR}" -c -d . -i -o baseline.info
+        --no-external -d "${CMAKE_CURRENT_SOURCE_DIR}" -c -d . -i -o baseline.info
         | grep -v "ignoring data for external file"
       # run tests
       COMMAND ${CMAKE_COMMAND} -E echo "Running rippled tests for coverage report."
       COMMAND rippled --unittest$<$<BOOL:${coverage_test}>:=${coverage_test}> --quiet --unittest-log
       # Create test coverage data file
       COMMAND ${LCOV}
-        --no-external -d "${CMAKE_SOURCE_DIR}" -c -d . -o tests.info
+        --no-external -d "${CMAKE_CURRENT_SOURCE_DIR}" -c -d . -o tests.info
         | grep -v "ignoring data for external file"
       # Combine baseline and test coverage data
       COMMAND ${LCOV}
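The lcov invocations in this target implement the usual baseline-plus-test capture: an initial -i pass records zero counts for every instrumented file so untested files still appear in the report, a post-test pass records the real counters, and the two tracefiles are merged. A standalone sketch of the same flow outside CMake (SRC_DIR is a placeholder for the source root; genhtml is lcov's report generator):

    lcov --no-external -d "$SRC_DIR" -c -d . -i -o baseline.info   # zero-count baseline
    ./rippled --unittest --quiet --unittest-log                    # run instrumented tests
    lcov --no-external -d "$SRC_DIR" -c -d . -o tests.info         # capture test counters
    lcov -a baseline.info -a tests.info -o combined.info           # merge baseline + tests
    genhtml combined.info -o coverage                              # render the HTML report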
@@ -1,31 +1,79 @@
 #[===================================================================[
    docs target (optional)
 #]===================================================================]

-find_package (Doxygen)
-if (TARGET Doxygen::doxygen)
-  set (doc_srcs docs/source.dox)
-  file (GLOB_RECURSE other_docs docs/*.md)
-  list (APPEND doc_srcs "${other_docs}")
-  # read the source config and make a modified one
-  # that points the output files to our build directory
-  file (READ "${CMAKE_CURRENT_SOURCE_DIR}/docs/source.dox" dox_content)
-  string (REGEX REPLACE "[\t ]*OUTPUT_DIRECTORY[\t ]*=(.*)"
-    "OUTPUT_DIRECTORY=${CMAKE_BINARY_DIR}\n\\1"
-    new_config "${dox_content}")
-  file (WRITE "${CMAKE_BINARY_DIR}/source.dox" "${new_config}")
-  add_custom_target (docs
-    COMMAND "${DOXYGEN_EXECUTABLE}" "${CMAKE_BINARY_DIR}/source.dox"
-    BYPRODUCTS "${CMAKE_BINARY_DIR}/html_doc/index.html"
-    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/docs"
-    SOURCES "${doc_srcs}")
-  if (is_multiconfig)
-    set_property (
-      SOURCE ${doc_srcs}
-      APPEND
-      PROPERTY HEADER_FILE_ONLY
-        true)
-  endif ()
-else ()
-  message (STATUS "doxygen executable not found -- skipping docs target")
-endif ()
+if (tests)
+  find_package (Doxygen)
+  if (NOT TARGET Doxygen::doxygen)
+    message (STATUS "doxygen executable not found -- skipping docs target")
+    return ()
+  endif ()
+
+  set (doxygen_output_directory "${CMAKE_BINARY_DIR}/docs")
+  set (doxygen_include_path "${CMAKE_CURRENT_SOURCE_DIR}/src")
+  set (doxygen_index_file "${doxygen_output_directory}/html/index.html")
+  set (doxyfile "${CMAKE_CURRENT_SOURCE_DIR}/docs/Doxyfile")
+
+  file (GLOB_RECURSE doxygen_input
+    docs/*.md
+    src/ripple/*.h
+    src/ripple/*.cpp
+    src/ripple/*.md
+    src/test/*.h
+    src/test/*.md
+    Builds/*/README.md)
+  list (APPEND doxygen_input
+    README.md
+    RELEASENOTES.md
+    src/README.md)
+  set (dependencies "${doxygen_input}" "${doxyfile}")
+
+  function (verbose_find_path variable name)
+    # find_path sets a CACHE variable, so don't try using a "local" variable.
+    find_path (${variable} "${name}" ${ARGN})
+    if (NOT ${variable})
+      message (WARNING "could not find ${name}")
+    else ()
+      message (STATUS "found ${name}: ${${variable}}/${name}")
+    endif ()
+  endfunction ()
+
+  verbose_find_path (doxygen_plantuml_jar_path plantuml.jar PATH_SUFFIXES share/plantuml)
+  verbose_find_path (doxygen_dot_path dot)
+
+  # https://en.cppreference.com/w/Cppreference:Archives
+  # https://stackoverflow.com/questions/60822559/how-to-move-a-file-download-from-configure-step-to-build-step
+  set (download_script "${CMAKE_BINARY_DIR}/docs/download-cppreference.cmake")
+  file (WRITE
+    "${download_script}"
+    "file (DOWNLOAD \
+      http://upload.cppreference.com/mwiki/images/b/b2/html_book_20190607.zip \
+      ${CMAKE_BINARY_DIR}/docs/cppreference.zip \
+      EXPECTED_HASH MD5=82b3a612d7d35a83e3cb1195a63689ab \
+    )\n \
+    execute_process ( \
+      COMMAND \"${CMAKE_COMMAND}\" -E tar -xf cppreference.zip \
+    )\n"
+  )
+  set (tagfile "${CMAKE_BINARY_DIR}/docs/cppreference-doxygen-web.tag.xml")
+  add_custom_command (
+    OUTPUT "${tagfile}"
+    COMMAND "${CMAKE_COMMAND}" -P "${download_script}"
+    WORKING_DIRECTORY "${CMAKE_BINARY_DIR}/docs"
+  )
+  set (doxygen_tagfiles "${tagfile}=http://en.cppreference.com/w/")
+
+  add_custom_command (
+    OUTPUT "${doxygen_index_file}"
+    COMMAND "${CMAKE_COMMAND}" -E env
+      "DOXYGEN_OUTPUT_DIRECTORY=${doxygen_output_directory}"
+      "DOXYGEN_INCLUDE_PATH=${doxygen_include_path}"
+      "DOXYGEN_TAGFILES=${doxygen_tagfiles}"
+      "DOXYGEN_PLANTUML_JAR_PATH=${doxygen_plantuml_jar_path}"
+      "DOXYGEN_DOT_PATH=${doxygen_dot_path}"
+      "${DOXYGEN_EXECUTABLE}" "${doxyfile}"
+    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
+    DEPENDS "${dependencies}" "${tagfile}")
+  add_custom_target (docs
+    DEPENDS "${doxygen_index_file}"
+    SOURCES "${dependencies}")
+endif ()
@@ -16,14 +16,9 @@ target_compile_definitions (opts
     BOOST_BEAST_ALLOW_DEPRECATED
     BOOST_FILESYSTEM_DEPRECATED
   >
-  $<$<BOOL:${beast_hashers}>:
-    USE_BEAST_HASHER
-  >
   $<$<BOOL:${beast_no_unit_test_inline}>:BEAST_NO_UNIT_TEST_INLINE=1>
   $<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1>
-  $<$<BOOL:${single_io_service_thread}>:RIPPLE_SINGLE_IO_SERVICE_THREAD=1>
-  # doesn't currently compile ? :
-  $<$<BOOL:${verify_nodeobject_keys}>:RIPPLE_VERIFY_NODEOBJECT_KEYS=1>)
+  $<$<BOOL:${single_io_service_thread}>:RIPPLE_SINGLE_IO_SERVICE_THREAD=1>)
 target_compile_options (opts
   INTERFACE
   $<$<AND:$<BOOL:${is_gcc}>,$<COMPILE_LANGUAGE:CXX>>:-Wsuggest-override>
@@ -13,7 +13,7 @@ if (NOT DEFINED NIH_CACHE_ROOT)
   if (DEFINED ENV{NIH_CACHE_ROOT})
     set (NIH_CACHE_ROOT $ENV{NIH_CACHE_ROOT})
   else ()
-    set (NIH_CACHE_ROOT "${CMAKE_SOURCE_DIR}/.nih_c")
+    set (NIH_CACHE_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/.nih_c")
   endif ()
 endif ()
 set (nih_cache_path
@@ -61,7 +61,7 @@ if (is_root_project)
         docker run
         -e NIH_CACHE_ROOT=/opt/rippled_bld/pkg/.nih_c
         -v ${NIH_CACHE_ROOT}/pkgbuild:/opt/rippled_bld/pkg/.nih_c
-        -v ${CMAKE_SOURCE_DIR}:/opt/rippled_bld/pkg/rippled
+        -v ${CMAKE_CURRENT_SOURCE_DIR}:/opt/rippled_bld/pkg/rippled
         -v ${CMAKE_CURRENT_BINARY_DIR}/packages:/opt/rippled_bld/pkg/out
         "$<$<BOOL:${map_user}>:--volume=/etc/passwd:/etc/passwd;--volume=/etc/group:/etc/group;--user=${DOCKER_USER_ID}:${DOCKER_GROUP_ID}>"
         -t rippled-rpm-builder:${container_label}
@@ -124,7 +124,7 @@ if (is_root_project)
         docker run
         -e NIH_CACHE_ROOT=/opt/rippled_bld/pkg/.nih_c
         -v ${NIH_CACHE_ROOT}/pkgbuild:/opt/rippled_bld/pkg/.nih_c
-        -v ${CMAKE_SOURCE_DIR}:/opt/rippled_bld/pkg/rippled
+        -v ${CMAKE_CURRENT_SOURCE_DIR}:/opt/rippled_bld/pkg/rippled
         -v ${CMAKE_CURRENT_BINARY_DIR}/packages:/opt/rippled_bld/pkg/out
         "$<$<BOOL:${map_user}>:--volume=/etc/passwd:/etc/passwd;--volume=/etc/group:/etc/group;--user=${DOCKER_USER_ID}:${DOCKER_GROUP_ID}>"
         -t rippled-dpkg-builder:${container_label}
@@ -39,14 +39,14 @@ endif ()
 if ("${CMAKE_CXX_COMPILER_ID}" MATCHES ".*Clang") # both Clang and AppleClang
   set (is_clang TRUE)
   if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang" AND
-      CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7.0)
-    message (FATAL_ERROR "This project requires clang 7 or later")
+      CMAKE_CXX_COMPILER_VERSION VERSION_LESS 8.0)
+    message (FATAL_ERROR "This project requires clang 8 or later")
   endif ()
   # TODO min AppleClang version check ?
 elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
   set (is_gcc TRUE)
-  if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7.0)
-    message (FATAL_ERROR "This project requires GCC 7 or later")
+  if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 8.0)
+    message (FATAL_ERROR "This project requires GCC 8 or later")
   endif ()
 endif ()
 if (CMAKE_GENERATOR STREQUAL "Xcode")
@@ -4,6 +4,10 @@

 option (assert "Enables asserts, even in release builds" OFF)

+option (reporting "Build rippled with reporting mode enabled" OFF)
+
+option (tests "Build tests" ON)
+
 option (unity "Creates a build using UNITY support in cmake. This is the default" ON)
 if (unity)
   if (CMAKE_VERSION VERSION_LESS 3.16)
@@ -100,12 +104,6 @@ option (have_package_container
 option (beast_no_unit_test_inline
   "Prevents unit test definitions from being inserted into global table"
   OFF)
-# NOTE - THIS OPTION CURRENTLY DOES NOT COMPILE :
-# TODO: fix or remove
-option (verify_nodeobject_keys
-  "This verifies that the hash of node objects matches the payload. \
-This check is expensive - use with caution."
-  OFF)
 option (single_io_service_thread
   "Restricts the number of threads calling io_service::run to one. \
This can be useful when debugging."
@@ -2,98 +2,95 @@
    NIH dep: boost
 #]===================================================================]

-if ((NOT DEFINED BOOST_ROOT) AND (DEFINED ENV{BOOST_ROOT}))
-  set (BOOST_ROOT $ENV{BOOST_ROOT})
-endif ()
-file (TO_CMAKE_PATH "${BOOST_ROOT}" BOOST_ROOT)
-if (WIN32 OR CYGWIN)
+if((NOT DEFINED BOOST_ROOT) AND(DEFINED ENV{BOOST_ROOT}))
+  set(BOOST_ROOT $ENV{BOOST_ROOT})
+endif()
+file(TO_CMAKE_PATH "${BOOST_ROOT}" BOOST_ROOT)
+if(WIN32 OR CYGWIN)
   # Workaround for MSVC having two boost versions - x86 and x64 on same PC in stage folders
-  if (DEFINED BOOST_ROOT)
-    if (IS_DIRECTORY ${BOOST_ROOT}/stage64/lib)
-      set (BOOST_LIBRARYDIR ${BOOST_ROOT}/stage64/lib)
-    else ()
-      set (BOOST_LIBRARYDIR ${BOOST_ROOT}/stage/lib)
-    endif ()
-  endif ()
-endif ()
-message (STATUS "BOOST_ROOT: ${BOOST_ROOT}")
-message (STATUS "BOOST_LIBRARYDIR: ${BOOST_LIBRARYDIR}")
+  if(DEFINED BOOST_ROOT)
+    if(IS_DIRECTORY ${BOOST_ROOT}/stage64/lib)
+      set(BOOST_LIBRARYDIR ${BOOST_ROOT}/stage64/lib)
+    elseif(IS_DIRECTORY ${BOOST_ROOT}/stage/lib)
+      set(BOOST_LIBRARYDIR ${BOOST_ROOT}/stage/lib)
+    elseif(IS_DIRECTORY ${BOOST_ROOT}/lib)
+      set(BOOST_LIBRARYDIR ${BOOST_ROOT}/lib)
+    else()
+      message(WARNING "Did not find expected boost library dir. "
+        "Defaulting to ${BOOST_ROOT}")
+      set(BOOST_LIBRARYDIR ${BOOST_ROOT})
+    endif()
+  endif()
+endif()
+message(STATUS "BOOST_ROOT: ${BOOST_ROOT}")
+message(STATUS "BOOST_LIBRARYDIR: ${BOOST_LIBRARYDIR}")

 # uncomment the following as needed to debug FindBoost issues:
-#set (Boost_DEBUG ON)
+#set(Boost_DEBUG ON)

 #[=========================================================[
    boost dynamic libraries don't trivially support @rpath
    linking right now (cmake's default), so just force
    static linking for macos, or if requested on linux by flag
 #]=========================================================]
-if (static)
-  set (Boost_USE_STATIC_LIBS ON)
-endif ()
-set (Boost_USE_MULTITHREADED ON)
-if (static AND NOT APPLE)
-  set (Boost_USE_STATIC_RUNTIME ON)
-else ()
-  set (Boost_USE_STATIC_RUNTIME OFF)
-endif ()
+if(static)
+  set(Boost_USE_STATIC_LIBS ON)
+endif()
+set(Boost_USE_MULTITHREADED ON)
+if(static AND NOT APPLE)
+  set(Boost_USE_STATIC_RUNTIME ON)
+else()
+  set(Boost_USE_STATIC_RUNTIME OFF)
+endif()
 # TBD:
 # Boost_USE_DEBUG_RUNTIME: When ON, uses Boost libraries linked against the
-find_package (Boost 1.70 REQUIRED
+find_package(Boost 1.70 REQUIRED
   COMPONENTS
     chrono
     container
     context
     coroutine
     date_time
     filesystem
     program_options
     regex
     serialization
     system
     thread)

-add_library (ripple_boost INTERFACE)
-add_library (Ripple::boost ALIAS ripple_boost)
-if (is_xcode)
-  target_include_directories (ripple_boost BEFORE INTERFACE ${Boost_INCLUDE_DIRS})
-  target_compile_options (ripple_boost INTERFACE --system-header-prefix="boost/")
-else ()
-  target_include_directories (ripple_boost SYSTEM BEFORE INTERFACE ${Boost_INCLUDE_DIRS})
-endif ()
+add_library(ripple_boost INTERFACE)
+add_library(Ripple::boost ALIAS ripple_boost)
+if(is_xcode)
+  target_include_directories(ripple_boost BEFORE INTERFACE ${Boost_INCLUDE_DIRS})
+  target_compile_options(ripple_boost INTERFACE --system-header-prefix="boost/")
+else()
+  target_include_directories(ripple_boost SYSTEM BEFORE INTERFACE ${Boost_INCLUDE_DIRS})
+endif()

-target_link_libraries (ripple_boost
+target_link_libraries(ripple_boost
   INTERFACE
     Boost::boost
     Boost::chrono
     Boost::container
     Boost::coroutine
     Boost::date_time
     Boost::filesystem
     Boost::program_options
     Boost::regex
     Boost::serialization
     Boost::system
     Boost::thread)
-if (Boost_COMPILER)
-  target_link_libraries (ripple_boost INTERFACE Boost::disable_autolinking)
-endif ()
-if (san AND is_clang)
+if(Boost_COMPILER)
+  target_link_libraries(ripple_boost INTERFACE Boost::disable_autolinking)
+endif()
+if(san AND is_clang)
   # TODO: gcc does not support -fsanitize-blacklist...can we do something else
   # for gcc ?
-  if (NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)
-    get_target_property (Boost_INCLUDE_DIRS Boost::headers INTERFACE_INCLUDE_DIRECTORIES)
-  endif ()
+  if(NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)
+    get_target_property(Boost_INCLUDE_DIRS Boost::headers INTERFACE_INCLUDE_DIRECTORIES)
+  endif()
   message(STATUS "Adding [${Boost_INCLUDE_DIRS}] to sanitizer blacklist")
-  file (WRITE ${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt "src:${Boost_INCLUDE_DIRS}/*")
-  target_compile_options (opts
+  file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt "src:${Boost_INCLUDE_DIRS}/*")
+  target_compile_options(opts
     INTERFACE
       # ignore boost headers for sanitizing
       -fsanitize-blacklist=${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt)
-endif ()
-
-# workaround for xcode 10.2 and boost < 1.69
-# once we require Boost 1.69 or higher, this can be removed
-# see: https://github.com/boostorg/asio/commit/43874d5
-if (CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang" AND
-    CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 10.0.1.10010043 AND
-    Boost_VERSION LESS 106900)
-  target_compile_definitions (opts INTERFACE BOOST_ASIO_HAS_STD_STRING_VIEW)
-endif ()
+endif()
@@ -969,7 +969,7 @@ function(_Boost_COMPONENT_DEPENDENCIES component _ret)
     set(_Boost_WAVE_DEPENDENCIES filesystem serialization thread chrono date_time atomic)
     set(_Boost_WSERIALIZATION_DEPENDENCIES serialization)
   endif()
-  if(NOT Boost_VERSION_STRING VERSION_LESS 1.71.0)
+  if(NOT Boost_VERSION_STRING VERSION_LESS 1.77.0)
     message(WARNING "New Boost version may have incorrect or missing dependencies and imported targets")
   endif()
 endif()
@@ -1,5 +1,5 @@
 find_package (PkgConfig REQUIRED)
-pkg_search_module (libarchive_PC QUIET libarchive>=3.3.3)
+pkg_search_module (libarchive_PC QUIET libarchive>=3.4.3)

 if(static)
   set(LIBARCHIVE_LIB libarchive.a)
@@ -1,6 +1,6 @@
 find_package (PkgConfig)
 if (PKG_CONFIG_FOUND)
-  pkg_search_module (lz4_PC QUIET liblz4>=1.8)
+  pkg_search_module (lz4_PC QUIET liblz4>=1.9)
 endif ()

 if(static)
@@ -29,7 +29,7 @@ if (NOT local_libarchive)
   endif ()
 else ()
   ## now try searching using the minimal find module that cmake provides
-  find_package(LibArchive 3.3.3 QUIET)
+  find_package(LibArchive 3.4.3 QUIET)
   if (LibArchive_FOUND)
     if (static)
       # find module doesn't find static libs currently, so we re-search
@@ -70,7 +70,7 @@ if (local_libarchive)
   ExternalProject_Add (libarchive
     PREFIX ${nih_cache_path}
     GIT_REPOSITORY https://github.com/libarchive/libarchive.git
-    GIT_TAG v3.3.3
+    GIT_TAG v3.4.3
     CMAKE_ARGS
       # passing the compiler seems to be needed for windows CI, sadly
       -DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
@@ -21,7 +21,7 @@ else()
   ExternalProject_Add (lz4
     PREFIX ${nih_cache_path}
     GIT_REPOSITORY https://github.com/lz4/lz4.git
-    GIT_TAG v1.8.2
+    GIT_TAG v1.9.2
     SOURCE_SUBDIR contrib/cmake_unofficial
     CMAKE_ARGS
       -DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
@@ -71,9 +71,9 @@ else()
   if (CMAKE_VERBOSE_MAKEFILE)
     print_ep_logs (lz4)
   endif ()
-  add_dependencies (lz4_lib lz4)
-  target_link_libraries (ripple_libs INTERFACE lz4_lib)
-  exclude_if_included (lz4)
 endif()

+add_dependencies (lz4_lib lz4)
+target_link_libraries (ripple_libs INTERFACE lz4_lib)
+exclude_if_included (lz4)
 exclude_if_included (lz4_lib)
@@ -12,7 +12,7 @@ if (is_root_project) # NuDB not needed in the case of xrpl_core inclusion build
   FetchContent_Declare(
     nudb_src
     GIT_REPOSITORY https://github.com/CPPAlliance/NuDB.git
-    GIT_TAG 2.0.1
+    GIT_TAG 2.0.5
   )
   FetchContent_GetProperties(nudb_src)
   if(NOT nudb_src_POPULATED)
@@ -23,7 +23,7 @@ if (is_root_project) # NuDB not needed in the case of xrpl_core inclusion build
   ExternalProject_Add (nudb_src
     PREFIX ${nih_cache_path}
     GIT_REPOSITORY https://github.com/CPPAlliance/NuDB.git
-    GIT_TAG 2.0.1
+    GIT_TAG 2.0.5
     CONFIGURE_COMMAND ""
     BUILD_COMMAND ""
     TEST_COMMAND ""
@@ -22,7 +22,7 @@ if (static)
   set (OPENSSL_USE_STATIC_LIBS ON)
 endif ()
 set (OPENSSL_MSVC_STATIC_RT ON)
-find_package (OpenSSL 1.0.2 REQUIRED)
+find_package (OpenSSL 1.1.1 REQUIRED)
 target_link_libraries (ripple_libs
   INTERFACE
     OpenSSL::SSL
Builds/CMake/deps/Postgres.cmake (new file, 70 lines)
@@ -0,0 +1,70 @@
if(reporting)
  find_package(PostgreSQL)
  if(NOT PostgreSQL_FOUND)
    message("find_package did not find postgres")
    find_library(postgres NAMES pq libpq libpq-dev pq-dev postgresql-devel)
    find_path(libpq-fe NAMES libpq-fe.h PATH_SUFFIXES postgresql pgsql include)

    if(NOT libpq-fe_FOUND OR NOT postgres_FOUND)
      message("No system installed Postgres found. Will build")
      add_library(postgres SHARED IMPORTED GLOBAL)
      add_library(pgport SHARED IMPORTED GLOBAL)
      add_library(pgcommon SHARED IMPORTED GLOBAL)
      ExternalProject_Add(postgres_src
        PREFIX ${nih_cache_path}
        GIT_REPOSITORY https://github.com/postgres/postgres.git
        GIT_TAG master
        CONFIGURE_COMMAND ./configure --without-readline > /dev/null
        BUILD_COMMAND ${CMAKE_COMMAND} -E env --unset=MAKELEVEL make
        UPDATE_COMMAND ""
        BUILD_IN_SOURCE 1
        INSTALL_COMMAND ""
        BUILD_BYPRODUCTS
          <BINARY_DIR>/src/interfaces/libpq/${ep_lib_prefix}pq.a
          <BINARY_DIR>/src/common/${ep_lib_prefix}pgcommon.a
          <BINARY_DIR>/src/port/${ep_lib_prefix}pgport.a
        LOG_BUILD TRUE
      )
      ExternalProject_Get_Property (postgres_src SOURCE_DIR)
      ExternalProject_Get_Property (postgres_src BINARY_DIR)

      set (postgres_src_SOURCE_DIR "${SOURCE_DIR}")
      file (MAKE_DIRECTORY ${postgres_src_SOURCE_DIR})
      list(APPEND INCLUDE_DIRS
        ${SOURCE_DIR}/src/include
        ${SOURCE_DIR}/src/interfaces/libpq
      )
      set_target_properties(postgres PROPERTIES
        IMPORTED_LOCATION
          ${BINARY_DIR}/src/interfaces/libpq/${ep_lib_prefix}pq.a
        INTERFACE_INCLUDE_DIRECTORIES
          "${INCLUDE_DIRS}"
      )
      set_target_properties(pgcommon PROPERTIES
        IMPORTED_LOCATION
          ${BINARY_DIR}/src/common/${ep_lib_prefix}pgcommon.a
        INTERFACE_INCLUDE_DIRECTORIES
          "${INCLUDE_DIRS}"
      )
      set_target_properties(pgport PROPERTIES
        IMPORTED_LOCATION
          ${BINARY_DIR}/src/port/${ep_lib_prefix}pgport.a
        INTERFACE_INCLUDE_DIRECTORIES
          "${INCLUDE_DIRS}"
      )
      add_dependencies(postgres postgres_src)
      add_dependencies(pgcommon postgres_src)
      add_dependencies(pgport postgres_src)
      file(TO_CMAKE_PATH "${postgres_src_SOURCE_DIR}" postgres_src_SOURCE_DIR)
      target_link_libraries(ripple_libs INTERFACE postgres pgcommon pgport)
    else()
      message("Found system installed Postgres via find_library")
      target_include_directories(ripple_libs INTERFACE ${libpq-fe})
      target_link_libraries(ripple_libs INTERFACE ${postgres})
    endif()
  else()
    message("Found system installed Postgres via find_package")
    target_include_directories(ripple_libs INTERFACE ${PostgreSQL_INCLUDE_DIRS})
    target_link_libraries(ripple_libs INTERFACE ${PostgreSQL_LIBRARIES})
  endif()
endif()
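The new file above is only active when the `reporting` CMake option is set. A minimal sketch of a configure-and-build run with reporting mode enabled; the build directory name is an arbitrary choice for illustration, not taken from the diff:

```
# assumes a rippled checkout in the current directory
mkdir -p build/reporting && cd build/reporting
cmake -Dreporting=ON ../..      # pulls in the Postgres (and Cassandra) deps
cmake --build . --parallel
```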
@@ -9,9 +9,21 @@ if (static)
   set (Protobuf_USE_STATIC_LIBS ON)
 endif ()
 find_package (Protobuf 3.8)
-if (local_protobuf OR NOT Protobuf_FOUND)
+if (is_multiconfig)
+  set(protobuf_protoc_lib ${Protobuf_PROTOC_LIBRARIES})
+else ()
+  string(TOUPPER ${CMAKE_BUILD_TYPE} upper_cmake_build_type)
+  set(protobuf_protoc_lib ${Protobuf_PROTOC_LIBRARY_${upper_cmake_build_type}})
+endif ()
+if (local_protobuf OR NOT (Protobuf_FOUND AND Protobuf_PROTOC_EXECUTABLE AND protobuf_protoc_lib))
   include (GNUInstallDirs)
   message (STATUS "using local protobuf build.")
+  set(protobuf_reqs Protobuf_PROTOC_EXECUTABLE protobuf_protoc_lib)
+  foreach(lib ${protobuf_reqs})
+    if(NOT ${lib})
+      message(STATUS "Couldn't find ${lib}")
+    endif()
+  endforeach()
   if (WIN32)
     # protobuf prepends lib even on windows
     set (pbuf_lib_pre "lib")
@@ -8,7 +8,7 @@ set_target_properties (rocksdb_lib

 option (local_rocksdb "use local build of rocksdb." OFF)
 if (NOT local_rocksdb)
-  find_package (RocksDB 6.5 QUIET CONFIG)
+  find_package (RocksDB 6.27 QUIET CONFIG)
   if (TARGET RocksDB::rocksdb)
     message (STATUS "Found RocksDB using config.")
     get_target_property (_rockslib_l RocksDB::rocksdb IMPORTED_LOCATION_DEBUG)

@@ -40,7 +40,7 @@ if (NOT local_rocksdb)
     # TBD if there is some way to extract transitive deps..then:
     #set (RocksDB_USE_STATIC ON)
   else ()
-    find_package (RocksDB 6.5 MODULE)
+    find_package (RocksDB 6.27 MODULE)
     if (ROCKSDB_FOUND)
       if (RocksDB_LIBRARY_DEBUG)
         set_target_properties (rocksdb_lib PROPERTIES IMPORTED_LOCATION_DEBUG ${RocksDB_LIBRARY_DEBUG})

@@ -60,17 +60,17 @@ if (local_rocksdb)
   ExternalProject_Add (rocksdb
     PREFIX ${nih_cache_path}
     GIT_REPOSITORY https://github.com/facebook/rocksdb.git
-    GIT_TAG v6.5.3
+    GIT_TAG v6.27.3
     PATCH_COMMAND
       # only used by windows build
-      ${CMAKE_COMMAND} -E copy
-        ${CMAKE_SOURCE_DIR}/Builds/CMake/rocks_thirdparty.inc
+      ${CMAKE_COMMAND} -E copy_if_different
+        ${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/rocks_thirdparty.inc
       <SOURCE_DIR>/thirdparty.inc
     COMMAND
       # fixup their build version file to keep the values
       # from changing always
       ${CMAKE_COMMAND} -E copy_if_different
-        ${CMAKE_SOURCE_DIR}/Builds/CMake/rocksdb_build_version.cc.in
+        ${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/rocksdb_build_version.cc.in
       <SOURCE_DIR>/util/build_version.cc.in
     CMAKE_ARGS
       -DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}

@@ -96,9 +96,13 @@ if (local_rocksdb)
       -Dlz4_FOUND=ON
       -USNAPPY_*
+      -Usnappy_*
+      -USnappy_*
       -Dsnappy_INCLUDE_DIRS=$<JOIN:$<TARGET_PROPERTY:snappy_lib,INTERFACE_INCLUDE_DIRECTORIES>,::>
       -Dsnappy_LIBRARIES=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:snappy_lib,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:snappy_lib,IMPORTED_LOCATION_RELEASE>>
       -Dsnappy_FOUND=ON
+      -DSnappy_INCLUDE_DIRS=$<JOIN:$<TARGET_PROPERTY:snappy_lib,INTERFACE_INCLUDE_DIRECTORIES>,::>
+      -DSnappy_LIBRARIES=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:snappy_lib,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:snappy_lib,IMPORTED_LOCATION_RELEASE>>
+      -DSnappy_FOUND=ON
       -DWITH_MD_LIBRARY=OFF
       -DWITH_RUNTIME_DEBUG=$<IF:$<CONFIG:Debug>,ON,OFF>
       -DFAIL_ON_WARNINGS=OFF
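As the first hunk above shows, `local_rocksdb` defaults to `OFF`, so a system RocksDB is preferred when one is found. A sketch of forcing the vendored v6.27.3 build instead, using the option exactly as declared in the diff:

```
# force the ExternalProject RocksDB build even if a system package exists
cmake -Dlocal_rocksdb=ON ../..
cmake --build . --parallel
```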
@@ -51,7 +51,7 @@ else()
     # This patch process is likely fragile and should be reviewed carefully
     # whenever we update the GIT_TAG above.
     PATCH_COMMAND
-      ${CMAKE_COMMAND} -P ${CMAKE_SOURCE_DIR}/Builds/CMake/soci_patch.cmake
+      ${CMAKE_COMMAND} -P ${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/soci_patch.cmake
     CMAKE_ARGS
       -DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
       -DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
@@ -23,15 +23,21 @@ else()
     PREFIX ${nih_cache_path}
     # sqlite doesn't use git, but it provides versioned tarballs
     URL https://www.sqlite.org/2018/sqlite-amalgamation-3260000.zip
       http://www.sqlite.org/2018/sqlite-amalgamation-3260000.zip
+      https://www2.sqlite.org/2018/sqlite-amalgamation-3260000.zip
+      http://www2.sqlite.org/2018/sqlite-amalgamation-3260000.zip
     # ^^^ version is apparent in the URL: 3260000 => 3.26.0
     URL_HASH SHA256=de5dcab133aa339a4cf9e97c40aa6062570086d6085d8f9ad7bc6ddf8a52096e
+    # Don't need to worry about MITM attacks too much because the download
+    # is checked against a strong hash
+    TLS_VERIFY false
     # we wrote a very simple CMake file to build sqlite
     # so that's what we copy here so that we can build with
     # CMake. sqlite doesn't generally provide a build system
     # for the single amalgamation source file.
     PATCH_COMMAND
       ${CMAKE_COMMAND} -E copy_if_different
-        ${CMAKE_SOURCE_DIR}/Builds/CMake/CMake_sqlite3.txt
+        ${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/CMake_sqlite3.txt
       <SOURCE_DIR>/CMakeLists.txt
     CMAKE_ARGS
       -DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
Builds/CMake/deps/cassandra.cmake (new file, 165 lines)
@@ -0,0 +1,165 @@
if(reporting)
  find_library(cassandra NAMES cassandra)
  if(NOT cassandra)

    message("System installed Cassandra cpp driver not found. Will build")

    find_library(zlib NAMES zlib1g-dev zlib-devel zlib z)
    if(NOT zlib)
      message("zlib not found. will build")
      add_library(zlib STATIC IMPORTED GLOBAL)
      ExternalProject_Add(zlib_src
        PREFIX ${nih_cache_path}
        GIT_REPOSITORY https://github.com/madler/zlib.git
        GIT_TAG master
        INSTALL_COMMAND ""
        BUILD_BYPRODUCTS <BINARY_DIR>/${ep_lib_prefix}z.a
        LOG_BUILD TRUE
        LOG_CONFIGURE TRUE
      )

      ExternalProject_Get_Property (zlib_src SOURCE_DIR)
      ExternalProject_Get_Property (zlib_src BINARY_DIR)
      set (zlib_src_SOURCE_DIR "${SOURCE_DIR}")
      file (MAKE_DIRECTORY ${zlib_src_SOURCE_DIR}/include)

      set_target_properties (zlib PROPERTIES
        IMPORTED_LOCATION
          ${BINARY_DIR}/${ep_lib_prefix}z.a
        INTERFACE_INCLUDE_DIRECTORIES
          ${SOURCE_DIR}/include)
      add_dependencies(zlib zlib_src)

      file(TO_CMAKE_PATH "${zlib_src_SOURCE_DIR}" zlib_src_SOURCE_DIR)
    endif()

    find_library(krb5 NAMES krb5-dev libkrb5-dev)

    if(NOT krb5)
      message("krb5 not found. will build")
      add_library(krb5 STATIC IMPORTED GLOBAL)
      ExternalProject_Add(krb5_src
        PREFIX ${nih_cache_path}
        GIT_REPOSITORY https://github.com/krb5/krb5.git
        GIT_TAG master
        UPDATE_COMMAND ""
        CONFIGURE_COMMAND autoreconf src && CFLAGS=-fcommon ./src/configure --enable-static --disable-shared > /dev/null
        BUILD_IN_SOURCE 1
        BUILD_COMMAND make
        INSTALL_COMMAND ""
        BUILD_BYPRODUCTS <SOURCE_DIR>/lib/${ep_lib_prefix}krb5.a
        LOG_BUILD TRUE
      )

      ExternalProject_Get_Property (krb5_src SOURCE_DIR)
      ExternalProject_Get_Property (krb5_src BINARY_DIR)
      set (krb5_src_SOURCE_DIR "${SOURCE_DIR}")
      file (MAKE_DIRECTORY ${krb5_src_SOURCE_DIR}/include)

      set_target_properties (krb5 PROPERTIES
        IMPORTED_LOCATION
          ${BINARY_DIR}/lib/${ep_lib_prefix}krb5.a
        INTERFACE_INCLUDE_DIRECTORIES
          ${SOURCE_DIR}/include)
      add_dependencies(krb5 krb5_src)

      file(TO_CMAKE_PATH "${krb5_src_SOURCE_DIR}" krb5_src_SOURCE_DIR)
    endif()

    find_library(libuv1 NAMES uv1 libuv1 libuv1-dev libuv1:amd64)

    if(NOT libuv1)
      message("libuv1 not found, will build")
      add_library(libuv1 STATIC IMPORTED GLOBAL)
      ExternalProject_Add(libuv_src
        PREFIX ${nih_cache_path}
        GIT_REPOSITORY https://github.com/libuv/libuv.git
        GIT_TAG v1.x
        INSTALL_COMMAND ""
        BUILD_BYPRODUCTS <BINARY_DIR>/${ep_lib_prefix}uv_a.a
        LOG_BUILD TRUE
        LOG_CONFIGURE TRUE
      )

      ExternalProject_Get_Property (libuv_src SOURCE_DIR)
      ExternalProject_Get_Property (libuv_src BINARY_DIR)
      set (libuv_src_SOURCE_DIR "${SOURCE_DIR}")
      file (MAKE_DIRECTORY ${libuv_src_SOURCE_DIR}/include)

      set_target_properties (libuv1 PROPERTIES
        IMPORTED_LOCATION
          ${BINARY_DIR}/${ep_lib_prefix}uv_a.a
        INTERFACE_INCLUDE_DIRECTORIES
          ${SOURCE_DIR}/include)
      add_dependencies(libuv1 libuv_src)

      file(TO_CMAKE_PATH "${libuv_src_SOURCE_DIR}" libuv_src_SOURCE_DIR)
    endif()

    add_library (cassandra STATIC IMPORTED GLOBAL)
    ExternalProject_Add(cassandra_src
      PREFIX ${nih_cache_path}
      GIT_REPOSITORY https://github.com/datastax/cpp-driver.git
      GIT_TAG master
      CMAKE_ARGS
        -DLIBUV_ROOT_DIR=${BINARY_DIR}
        -DLIBUV_LIBRARY=${BINARY_DIR}/libuv_a.a
        -DLIBUV_INCLUDE_DIR=${SOURCE_DIR}/include
        -DCASS_BUILD_STATIC=ON
      INSTALL_COMMAND ""
      BUILD_BYPRODUCTS <BINARY_DIR>/${ep_lib_prefix}cassandra_static.a
      LOG_BUILD TRUE
      LOG_CONFIGURE TRUE
    )

    ExternalProject_Get_Property (cassandra_src SOURCE_DIR)
    ExternalProject_Get_Property (cassandra_src BINARY_DIR)
    set (cassandra_src_SOURCE_DIR "${SOURCE_DIR}")
    file (MAKE_DIRECTORY ${cassandra_src_SOURCE_DIR}/include)

    set_target_properties (cassandra PROPERTIES
      IMPORTED_LOCATION
        ${BINARY_DIR}/${ep_lib_prefix}cassandra_static.a
      INTERFACE_INCLUDE_DIRECTORIES
        ${SOURCE_DIR}/include)
    add_dependencies(cassandra cassandra_src)

    if(NOT libuv1)
      ExternalProject_Add_StepDependencies(cassandra_src build libuv1)
      target_link_libraries(cassandra INTERFACE libuv1)
    else()
      target_link_libraries(cassandra INTERFACE ${libuv1})
    endif()
    if(NOT krb5)
      ExternalProject_Add_StepDependencies(cassandra_src build krb5)
      target_link_libraries(cassandra INTERFACE krb5)
    else()
      target_link_libraries(cassandra INTERFACE ${krb5})
    endif()

    if(NOT zlib)
      ExternalProject_Add_StepDependencies(cassandra_src build zlib)
      target_link_libraries(cassandra INTERFACE zlib)
    else()
      target_link_libraries(cassandra INTERFACE ${zlib})
    endif()

    file(TO_CMAKE_PATH "${cassandra_src_SOURCE_DIR}" cassandra_src_SOURCE_DIR)
    target_link_libraries(ripple_libs INTERFACE cassandra)
  else()
    message("Found system installed cassandra cpp driver")

    find_path(cassandra_includes NAMES cassandra.h REQUIRED)
    target_link_libraries (ripple_libs INTERFACE ${cassandra})
    target_include_directories(ripple_libs INTERFACE ${cassandra_includes})
  endif()

  exclude_if_included (cassandra)
endif()
@@ -63,20 +63,23 @@ else ()
 if (cares_FOUND)
   if (static)
     set (_search "${CMAKE_STATIC_LIBRARY_PREFIX}cares${CMAKE_STATIC_LIBRARY_SUFFIX}")
+    set (_prefix cares_STATIC)
+    set (_static STATIC)
   else ()
     set (_search "${CMAKE_SHARED_LIBRARY_PREFIX}cares${CMAKE_SHARED_LIBRARY_SUFFIX}")
+    set (_prefix cares)
+    set (_static)
   endif()
-  find_library(_cares
-    NAMES ${_search}
-    HINTS ${cares_LIBRARY_DIRS})
-  if (NOT _cares)
+  find_library(_location NAMES ${_search} HINTS ${cares_LIBRARY_DIRS})
+  if (NOT _location)
     message (FATAL_ERROR "using pkg-config for grpc, can't find c-ares")
   endif ()
-  add_library (c-ares::cares STATIC IMPORTED GLOBAL)
-  set_target_properties (c-ares::cares PROPERTIES IMPORTED_LOCATION ${_cares})
-  if (cares_INCLUDE_DIRS)
-    set_target_properties (c-ares::cares PROPERTIES INTERFACE_INCLUDE_DIRECTORIES ${cares_INCLUDE_DIRS})
-  endif ()
+  add_library (c-ares::cares ${_static} IMPORTED GLOBAL)
+  set_target_properties (c-ares::cares PROPERTIES
+    IMPORTED_LOCATION ${_location}
+    INTERFACE_INCLUDE_DIRECTORIES "${${_prefix}_INCLUDE_DIRS}"
+    INTERFACE_LINK_OPTIONS "${${_prefix}_LDFLAGS}"
+  )
   exclude_if_included (c-ares::cares)
 else ()
   message (FATAL_ERROR "using pkg-config for grpc, can't find c-ares")

@@ -208,6 +211,7 @@ else ()
       -DCMAKE_DEBUG_POSTFIX=_d
       $<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
       -DgRPC_BUILD_TESTS=OFF
+      -DgRPC_BENCHMARK_PROVIDER=""
       -DgRPC_BUILD_CSHARP_EXT=OFF
       -DgRPC_MSVC_STATIC_RUNTIME=ON
       -DgRPC_INSTALL=OFF

@@ -309,7 +313,7 @@ set (GRPC_GEN_DIR "${CMAKE_BINARY_DIR}/proto_gen_grpc")
 file (MAKE_DIRECTORY ${GRPC_GEN_DIR})
 set (GRPC_PROTO_SRCS)
 set (GRPC_PROTO_HDRS)
-set (GRPC_PROTO_ROOT "${CMAKE_SOURCE_DIR}/src/ripple/proto/org")
+set (GRPC_PROTO_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/src/ripple/proto/org")
 file(GLOB_RECURSE GRPC_DEFINITION_FILES LIST_DIRECTORIES false "${GRPC_PROTO_ROOT}/*.proto")
 foreach(file ${GRPC_DEFINITION_FILES})
   get_filename_component(_abs_file ${file} ABSOLUTE)
@@ -1,4 +1,71 @@
-#include "build_version.h"
-const char* rocksdb_build_git_sha = "rocksdb_build_git_sha: N/A";
-const char* rocksdb_build_git_date = "rocksdb_build_git_date: N/A";
-const char* rocksdb_build_compile_date = "N/A";
+// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
+
+#include <memory>
+
+#include "rocksdb/version.h"
+#include "util/string_util.h"
+
+// The build script may replace these values with real values based
+// on whether or not GIT is available and the platform settings
+static const std::string rocksdb_build_git_sha = "rocksdb_build_git_sha:@GIT_SHA@";
+static const std::string rocksdb_build_git_tag = "rocksdb_build_git_tag:@GIT_TAG@";
+#define HAS_GIT_CHANGES @GIT_MOD@
+#if HAS_GIT_CHANGES == 0
+// If HAS_GIT_CHANGES is 0, the GIT date is used.
+// Use the time the branch/tag was last modified
+static const std::string rocksdb_build_date = "rocksdb_build_date:@GIT_DATE@";
+#else
+// If HAS_GIT_CHANGES is > 0, the branch/tag has modifications.
+// Use the time the build was created.
+static const std::string rocksdb_build_date = "rocksdb_build_date:@BUILD_DATE@";
+#endif
+
+namespace ROCKSDB_NAMESPACE {
+static void AddProperty(std::unordered_map<std::string, std::string> *props, const std::string& name) {
+  size_t colon = name.find(":");
+  if (colon != std::string::npos && colon > 0 && colon < name.length() - 1) {
+    // If we found a "@:", then this property was a build-time substitution that failed. Skip it
+    size_t at = name.find("@", colon);
+    if (at != colon + 1) {
+      // Everything before the colon is the name, after is the value
+      (*props)[name.substr(0, colon)] = name.substr(colon + 1);
+    }
+  }
+}
+
+static std::unordered_map<std::string, std::string>* LoadPropertiesSet() {
+  auto * properties = new std::unordered_map<std::string, std::string>();
+  AddProperty(properties, rocksdb_build_git_sha);
+  AddProperty(properties, rocksdb_build_git_tag);
+  AddProperty(properties, rocksdb_build_date);
+  return properties;
+}
+
+const std::unordered_map<std::string, std::string>& GetRocksBuildProperties() {
+  static std::unique_ptr<std::unordered_map<std::string, std::string>> props(LoadPropertiesSet());
+  return *props;
+}
+
+std::string GetRocksVersionAsString(bool with_patch) {
+  std::string version = ToString(ROCKSDB_MAJOR) + "." + ToString(ROCKSDB_MINOR);
+  if (with_patch) {
+    return version + "." + ToString(ROCKSDB_PATCH);
+  } else {
+    return version;
+  }
+}
+
+std::string GetRocksBuildInfoAsString(const std::string& program, bool verbose) {
+  std::string info = program + " (RocksDB) " + GetRocksVersionAsString(true);
+  if (verbose) {
+    for (const auto& it : GetRocksBuildProperties()) {
+      info.append("\n    ");
+      info.append(it.first);
+      info.append(": ");
+      info.append(it.second);
+    }
+  }
+  return info;
+}
+} // namespace ROCKSDB_NAMESPACE
@@ -1,13 +1,30 @@
 # This patches unsigned-types.h in the soci official sources
 # so as to remove type range check exceptions that cause
 # us trouble when using boost::optional to select int values

+# Some versions of CMake erroneously patch external projects on every build.
+# If the patch makes no changes, skip it. This workaround can be
+# removed once we stop supporting vulnerable versions of CMake.
+# https://gitlab.kitware.com/cmake/cmake/-/issues/21086
 file (STRINGS include/soci/unsigned-types.h sourcecode)
+# Delete the .patched file if it exists, so it doesn't end up duplicated.
+# Trying to remove a file that does not exist is not a problem.
+file (REMOVE include/soci/unsigned-types.h.patched)
 foreach (line_ ${sourcecode})
   if (line_ MATCHES "^[ \\t]+throw[ ]+soci_error[ ]*\\([ ]*\"Value outside of allowed.+$")
     set (line_ "//${CMAKE_MATCH_0}")
   endif ()
   file (APPEND include/soci/unsigned-types.h.patched "${line_}\n")
 endforeach ()
+execute_process( COMMAND ${CMAKE_COMMAND} -E compare_files
+  include/soci/unsigned-types.h include/soci/unsigned-types.h.patched
+  RESULT_VARIABLE compare_result
+)
+if( compare_result EQUAL 0)
+  message(DEBUG "The soci source and patch files are identical. Make no changes.")
+  file (REMOVE include/soci/unsigned-types.h.patched)
+  return()
+endif()
 file (RENAME include/soci/unsigned-types.h include/soci/unsigned-types.h.orig)
 file (RENAME include/soci/unsigned-types.h.patched include/soci/unsigned-types.h)
 # also fix Boost.cmake so that it just returns when we override the Boost_FOUND var
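The early-return added above relies on `cmake -E compare_files`, which exits with status 0 when the two files are byte-identical and non-zero otherwise. A quick way to see that behavior from a shell:

```
printf 'a\n' > x.txt; printf 'a\n' > y.txt
cmake -E compare_files x.txt y.txt; echo $?   # prints 0 (identical)
printf 'b\n' > y.txt
cmake -E compare_files x.txt y.txt; echo $?   # prints 1 (different)
```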
@@ -16,7 +16,7 @@ need these software components
 |-----------|-----------------------|
 | [Visual Studio 2017](README.md#install-visual-studio-2017)| 15.5.4 |
 | [Git for Windows](README.md#install-git-for-windows)| 2.16.1 |
-| [OpenSSL Library](README.md#install-openssl) | 1.0.2n |
+| [OpenSSL Library](README.md#install-openssl) | 1.1.1L |
 | [Boost library](README.md#build-boost) | 1.70.0 |
 | [CMake for Windows](README.md#optional-install-cmake-for-windows)* | 3.12 |

@@ -50,17 +50,19 @@ Windows is mandatory for running the unit tests.

 ### Install OpenSSL

-[Download OpenSSL.](http://slproweb.com/products/Win32OpenSSL.html) There will be
-four `Win64` bit variants available, you want the non-light `v1.0` line. As of
-this writing, you **should** select
+[Download the latest version of
+OpenSSL.](http://slproweb.com/products/Win32OpenSSL.html) There will be
+several `Win64` bit variants available, you want the non-light
+`v1.1` line. As of this writing, you **should** select

-* Win64 OpenSSL v1.0.2n.
+* Win64 OpenSSL v1.1.1L

 and should **not** select

-* Win64 OpenSSL v1.0.2n light
-* Win64 OpenSSL v1.1.0g
-* Win64 OpenSSL v1.1.0g light
+* Anything with "Win32" in the name
+* Anything with "light" in the name
+* Anything with "EXPERIMENTAL" in the name
+* Anything in the 3.0 line - rippled won't currently build with this version.

 Run the installer, and choose an appropriate location for your OpenSSL
 installation. In this guide we use `C:\lib\OpenSSL-Win64` as the destination

@@ -146,7 +148,7 @@ If you receive an error about not having the "correct access rights" make sure
 you have Github ssh keys, as described above.

 For a stable release, choose the `master` branch or one of the tagged releases
 listed on [rippled's GitHub page](https://github.com/ripple/rippled/releases).

 ```
 git checkout master
@@ -175,7 +177,7 @@ To begin, simply:
    cloned rippled folder.
 2. Right-click on `CMakeLists.txt` in the **Solution Explorer - Folder View** to
    generate a `CMakeSettings.json` file. A sample settings file is provided
    [here](/Builds/VisualStudio2017/CMakeSettings-example.json). Customize the
    settings for `BOOST_ROOT`, `OPENSSL_ROOT` to match the install paths if they
    differ from those in the file.
 4. Select either the `x64-Release` or `x64-Debug` configuration from the

@@ -214,14 +216,14 @@ execute the following commands within your `rippled` cloned repository:

 ```
 mkdir build\cmake
 cd build\cmake
-cmake ..\.. -G"Visual Studio 15 2017 Win64" -DBOOST_ROOT="C:\lib\boost_1_70_0" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64"
+cmake ..\.. -G"Visual Studio 15 2017 Win64" -DBOOST_ROOT="C:\lib\boost_1_70_0" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64" -DCMAKE_GENERATOR_TOOLSET=host=x64
 ```

 Now launch Visual Studio 2017 and select **File | Open | Project/Solution**.
 Navigate to the `build\cmake` folder created above and select the `rippled.sln`
 file. You can then choose whether to build the `Debug` or `Release` solution
 configuration.

 The executable will be in
 ```
 .\build\cmake\Release\rippled.exe
 ```
@@ -233,21 +235,24 @@ These paths are relative to your cloned git repository.

 # Unity/No-Unity Builds

-The rippled build system defaults to using [unity source files](http://onqtam.com/programming/2018-07-07-unity-builds/)
-to improve build times. In some cases it might be desirable to disable the unity build and compile
-individual translation units. Here is how you can switch to a "no-unity" build configuration:
+The rippled build system defaults to using
+[unity source files](http://onqtam.com/programming/2018-07-07-unity-builds/)
+to improve build times. In some cases it might be desirable to disable the
+unity build and compile individual translation units. Here is how you can
+switch to a "no-unity" build configuration (see the sketch after this
+section):

 ## Visual Studio Integrated CMake

-Edit your `CmakeSettings.json` (described above) by adding `-Dunity=OFF` to the `cmakeCommandArgs` entry
-for each build configuration.
+Edit your `CmakeSettings.json` (described above) by adding `-Dunity=OFF`
+to the `cmakeCommandArgs` entry for each build configuration.

 ## Standalone CMake Builds

-When running cmake to generate the Visual Studio project files, add `-Dunity=OFF` to the
-command line options passed to cmake.
+When running cmake to generate the Visual Studio project files, add
+`-Dunity=OFF` to the command line options passed to cmake.

-**Note:** you will need to re-run the cmake configuration step anytime you want to switch between unity/no-unity builds.
+**Note:** you will need to re-run the cmake configuration step anytime you
+want to switch between unity/no-unity builds.
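Putting the no-unity instruction together with the generator command shown earlier in this guide, a standalone no-unity configuration might look like the following; the paths are the guide's example install locations:

```
# same command as in the standalone CMake section above, plus -Dunity=OFF
cmake ..\.. -G"Visual Studio 15 2017 Win64" -DBOOST_ROOT="C:\lib\boost_1_70_0" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64" -DCMAKE_GENERATOR_TOOLSET=host=x64 -Dunity=OFF
```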
# Unit Test (Recommended)
@@ -22,7 +22,6 @@ time cmake \
   -Dpackages_only=ON \
   -Dcontainer_label="${container_tag}" \
   -Dhave_package_container=ON \
-  -DCMAKE_VERBOSE_MAKEFILE=ON \
+  -DCMAKE_VERBOSE_MAKEFILE=OFF \
   -G Ninja ../..
-time cmake --build . --target ${pkgtype} -- -v
+time cmake --build . --target ${pkgtype}
@@ -5,6 +5,7 @@ set -ex
 echo $(nproc)
 docker login -u rippled \
   -p ${ARTIFACTORY_DEPLOY_KEY_RIPPLED} ${ARTIFACTORY_HUB}
+apk add --update py-pip
 apk add \
   bash util-linux coreutils binutils grep \
   make ninja cmake build-base gcc g++ abuild git \
@@ -9,7 +9,7 @@
 # can be overridden by project or group variables as needed.
 variables:
   # these containers are built manually using the rippled
   # cmake build (container targets) and tagged/pushed so they
   # can be used here
   RPM_CONTAINER_TAG: "2020-02-10"
   RPM_CONTAINER_NAME: "rippled-rpm-builder"

@@ -19,7 +19,7 @@ variables:
   DPKG_CONTAINER_FULLNAME: "${DPKG_CONTAINER_NAME}:${DPKG_CONTAINER_TAG}"
   ARTIFACTORY_HOST: "artifactory.ops.ripple.com"
   ARTIFACTORY_HUB: "${ARTIFACTORY_HOST}:6555"
-  GIT_SIGN_PUBKEYS_URL: "https://gitlab.ops.ripple.com/snippets/11/raw"
+  GIT_SIGN_PUBKEYS_URL: "https://gitlab.ops.ripple.com/xrpledger/rippled-packages/snippets/49/raw"
   PUBLIC_REPO_ROOT: "https://repos.ripple.com/repos"
   # also need to define this variable ONLY for the primary
   # build/publish pipeline on the mainline repo:
@@ -44,14 +44,16 @@ stages:
     - . ./Builds/containers/gitlab-ci/docker_alpine_setup.sh
   variables:
     docker_driver: overlay2
+    DOCKER_TLS_CERTDIR: ""
   image:
-    name: docker:latest
+    name: artifactory.ops.ripple.com/docker:latest
   services:
     # workaround for TLS issues - consider going back
     # to unversioned `dind` when issues are resolved
-    - docker:18-dind
+    - name: artifactory.ops.ripple.com/docker:stable-dind
+      alias: docker
   tags:
-    - docker-4xlarge
+    - 4xlarge

 .only_primary_template: &only_primary
   only:
@@ -111,7 +113,7 @@ rpm_sign:
   dependencies:
     - rpm_build
   image:
-    name: centos:7
+    name: artifactory.ops.ripple.com/centos:7
   <<: *only_primary
   before_script:
     - |

@@ -140,7 +142,7 @@ dpkg_sign:
   dependencies:
     - dpkg_build
   image:
-    name: ubuntu:18.04
+    name: artifactory.ops.ripple.com/ubuntu:18.04
   <<: *only_primary
   before_script:
     - |
@@ -179,47 +181,39 @@ centos_7_smoketest:
     - rpm_build
     - rpm_sign
   image:
-    name: centos:7
+    name: artifactory.ops.ripple.com/centos:7
   <<: *run_local_smoketest

-fedora_29_smoketest:
+# TODO: Remove "allow_failure" when tests fixed
+rocky_8_smoketest:
   stage: smoketest
   dependencies:
     - rpm_build
     - rpm_sign
   image:
-    name: fedora:29
+    name: rockylinux/rockylinux:8
   <<: *run_local_smoketest
+  allow_failure: true

-fedora_28_smoketest:
+fedora_34_smoketest:
   stage: smoketest
   dependencies:
     - rpm_build
     - rpm_sign
   image:
-    name: fedora:28
+    name: artifactory.ops.ripple.com/fedora:34
   <<: *run_local_smoketest
+  allow_failure: true

-fedora_27_smoketest:
+fedora_35_smoketest:
   stage: smoketest
   dependencies:
     - rpm_build
     - rpm_sign
   image:
-    name: fedora:27
+    name: artifactory.ops.ripple.com/fedora:35
   <<: *run_local_smoketest
+  allow_failure: true

-## this one is not LTS, but we
-## get some extra coverage by including it
-## consider dropping it when 20.04 is ready
-ubuntu_19_smoketest:
-  stage: smoketest
-  dependencies:
-    - dpkg_build
-    - dpkg_sign
-  image:
-    name: ubuntu:19.04
-  <<: *run_local_smoketest

 ubuntu_18_smoketest:
   stage: smoketest

@@ -227,25 +221,54 @@ ubuntu_18_smoketest:
     - dpkg_build
     - dpkg_sign
   image:
-    name: ubuntu:18.04
+    name: artifactory.ops.ripple.com/ubuntu:18.04
   <<: *run_local_smoketest

-ubuntu_16_smoketest:
+ubuntu_20_smoketest:
   stage: smoketest
   dependencies:
     - dpkg_build
     - dpkg_sign
   image:
-    name: ubuntu:16.04
+    name: artifactory.ops.ripple.com/ubuntu:20.04
   <<: *run_local_smoketest

+# TODO: remove "allow_failure" when 22.04 released in 4/2022...
+ubuntu_22_smoketest:
+  stage: smoketest
+  dependencies:
+    - dpkg_build
+    - dpkg_sign
+  image:
+    name: artifactory.ops.ripple.com/ubuntu:22.04
+  <<: *run_local_smoketest
+  allow_failure: true

 debian_9_smoketest:
   stage: smoketest
   dependencies:
     - dpkg_build
     - dpkg_sign
   image:
-    name: debian:9
+    name: artifactory.ops.ripple.com/debian:9
   <<: *run_local_smoketest

+debian_10_smoketest:
+  stage: smoketest
+  dependencies:
+    - dpkg_build
+    - dpkg_sign
+  image:
+    name: artifactory.ops.ripple.com/debian:10
+  <<: *run_local_smoketest

+debian_11_smoketest:
+  stage: smoketest
+  dependencies:
+    - dpkg_build
+    - dpkg_sign
+  image:
+    name: artifactory.ops.ripple.com/debian:11
+  <<: *run_local_smoketest

#########################################################################
@@ -263,7 +286,7 @@ debian_9_smoketest:
 verify_head_signed:
   stage: verify_sig
   image:
-    name: ubuntu:latest
+    name: artifactory.ops.ripple.com/ubuntu:latest
   <<: *only_primary
   script:
     - . ./Builds/containers/gitlab-ci/verify_head_commit.sh

@@ -281,14 +304,16 @@ tag_bld_images:
   stage: tag_images
   variables:
     docker_driver: overlay2
+    DOCKER_TLS_CERTDIR: ""
   image:
-    name: docker:latest
+    name: artifactory.ops.ripple.com/docker:latest
   services:
     # workaround for TLS issues - consider going back
     # to unversioned `dind` when issues are resolved
-    - docker:18-dind
+    - name: artifactory.ops.ripple.com/docker:stable-dind
+      alias: docker
   tags:
-    - docker-large
+    - large
   dependencies:
     - rpm_sign
     - dpkg_sign

@@ -311,7 +336,7 @@ push_test:
     DEB_REPO: "rippled-deb-test-mirror"
     RPM_REPO: "rippled-rpm-test-mirror"
   image:
-    name: alpine:latest
+    name: artifactory.ops.ripple.com/alpine:latest
   artifacts:
     paths:
       - files.info
@@ -336,56 +361,47 @@ centos_7_verify_repo_test:
   variables:
     RPM_REPO: "rippled-rpm-test-mirror"
   image:
-    name: centos:7
+    name: artifactory.ops.ripple.com/centos:7
   dependencies:
     - rpm_sign
   <<: *only_primary
   <<: *run_repo_smoketest

-fedora_29_verify_repo_test:
+rocky_8_verify_repo_test:
   stage: verify_from_test
   variables:
     RPM_REPO: "rippled-rpm-test-mirror"
   image:
-    name: fedora:29
+    name: rockylinux/rockylinux:8
   dependencies:
     - rpm_sign
   <<: *only_primary
   <<: *run_repo_smoketest
+  allow_failure: true

-fedora_28_verify_repo_test:
+fedora_34_verify_repo_test:
   stage: verify_from_test
   variables:
     RPM_REPO: "rippled-rpm-test-mirror"
   image:
-    name: fedora:28
+    name: artifactory.ops.ripple.com/fedora:34
   dependencies:
     - rpm_sign
   <<: *only_primary
   <<: *run_repo_smoketest
+  allow_failure: true

-fedora_27_verify_repo_test:
+fedora_35_verify_repo_test:
   stage: verify_from_test
   variables:
     RPM_REPO: "rippled-rpm-test-mirror"
   image:
-    name: fedora:27
+    name: artifactory.ops.ripple.com/fedora:35
   dependencies:
     - rpm_sign
   <<: *only_primary
   <<: *run_repo_smoketest

-ubuntu_19_verify_repo_test:
-  stage: verify_from_test
-  variables:
-    DISTRO: "disco"
-    DEB_REPO: "rippled-deb-test-mirror"
-  image:
-    name: ubuntu:19.04
-  dependencies:
-    - dpkg_sign
-  <<: *only_primary
-  <<: *run_repo_smoketest
-  allow_failure: true

 ubuntu_18_verify_repo_test:
   stage: verify_from_test

@@ -393,31 +409,69 @@ ubuntu_18_verify_repo_test:
     DISTRO: "bionic"
     DEB_REPO: "rippled-deb-test-mirror"
   image:
-    name: ubuntu:18.04
+    name: artifactory.ops.ripple.com/ubuntu:18.04
   dependencies:
     - dpkg_sign
   <<: *only_primary
   <<: *run_repo_smoketest

-ubuntu_16_verify_repo_test:
+ubuntu_20_verify_repo_test:
   stage: verify_from_test
   variables:
-    DISTRO: "xenial"
+    DISTRO: "focal"
     DEB_REPO: "rippled-deb-test-mirror"
   image:
-    name: ubuntu:16.04
+    name: artifactory.ops.ripple.com/ubuntu:20.04
   dependencies:
     - dpkg_sign
   <<: *only_primary
   <<: *run_repo_smoketest

+# TODO: remove "allow_failure" when 22.04 released in 4/2022...
+ubuntu_22_verify_repo_test:
+  stage: verify_from_test
+  variables:
+    DISTRO: "jammy"
+    DEB_REPO: "rippled-deb-test-mirror"
+  image:
+    name: artifactory.ops.ripple.com/ubuntu:22.04
+  dependencies:
+    - dpkg_sign
+  <<: *only_primary
+  <<: *run_repo_smoketest
+  allow_failure: true

 debian_9_verify_repo_test:
   stage: verify_from_test
   variables:
     DISTRO: "stretch"
     DEB_REPO: "rippled-deb-test-mirror"
   image:
-    name: debian:9
+    name: artifactory.ops.ripple.com/debian:9
   dependencies:
     - dpkg_sign
   <<: *only_primary
   <<: *run_repo_smoketest

+debian_10_verify_repo_test:
+  stage: verify_from_test
+  variables:
+    DISTRO: "buster"
+    DEB_REPO: "rippled-deb-test-mirror"
+  image:
+    name: artifactory.ops.ripple.com/debian:10
+  dependencies:
+    - dpkg_sign
+  <<: *only_primary
+  <<: *run_repo_smoketest

+debian_11_verify_repo_test:
+  stage: verify_from_test
+  variables:
+    DISTRO: "bullseye"
+    DEB_REPO: "rippled-deb-test-mirror"
+  image:
+    name: artifactory.ops.ripple.com/debian:11
+  dependencies:
+    - dpkg_sign
+  <<: *only_primary
@@ -435,7 +489,7 @@ debian_9_verify_repo_test:
 wait_before_push_prod:
   stage: wait_approval_prod
   image:
-    name: alpine:latest
+    name: artifactory.ops.ripple.com/alpine:latest
   <<: *only_primary
   script:
     - echo "proceeding to next stage"

@@ -456,7 +510,7 @@ push_prod:
     DEB_REPO: "rippled-deb"
     RPM_REPO: "rippled-rpm"
   image:
-    name: alpine:latest
+    name: artifactory.ops.ripple.com/alpine:latest
   stage: push_to_prod
   artifacts:
     paths:
@@ -482,56 +536,47 @@ centos_7_verify_repo_prod:
   variables:
     RPM_REPO: "rippled-rpm"
   image:
-    name: centos:7
+    name: artifactory.ops.ripple.com/centos:7
   dependencies:
     - rpm_sign
   <<: *only_primary
   <<: *run_repo_smoketest

-fedora_29_verify_repo_prod:
+rocky_8_verify_repo_test:
+  stage: verify_from_test
+  variables:
+    RPM_REPO: "rippled-rpm-test-mirror"
+  image:
+    name: rockylinux/rockylinux:8
+  dependencies:
+    - rpm_sign
+  <<: *only_primary
+  <<: *run_repo_smoketest
+  allow_failure: true

+fedora_34_verify_repo_prod:
   stage: verify_from_prod
   variables:
     RPM_REPO: "rippled-rpm"
   image:
-    name: fedora:29
+    name: artifactory.ops.ripple.com/fedora:34
   dependencies:
     - rpm_sign
   <<: *only_primary
   <<: *run_repo_smoketest
+  allow_failure: true

-fedora_28_verify_repo_prod:
+fedora_35_verify_repo_prod:
   stage: verify_from_prod
   variables:
     RPM_REPO: "rippled-rpm"
   image:
-    name: fedora:28
+    name: artifactory.ops.ripple.com/fedora:35
   dependencies:
     - rpm_sign
   <<: *only_primary
   <<: *run_repo_smoketest

-fedora_27_verify_repo_prod:
-  stage: verify_from_prod
-  variables:
-    RPM_REPO: "rippled-rpm"
-  image:
-    name: fedora:27
-  dependencies:
-    - rpm_sign
-  <<: *only_primary
-  <<: *run_repo_smoketest

-ubuntu_19_verify_repo_prod:
-  stage: verify_from_prod
-  variables:
-    DISTRO: "disco"
-    DEB_REPO: "rippled-deb"
-  image:
-    name: ubuntu:19.04
-  dependencies:
-    - dpkg_sign
-  <<: *only_primary
-  <<: *run_repo_smoketest
-  allow_failure: true

 ubuntu_18_verify_repo_prod:
   stage: verify_from_prod

@@ -539,31 +584,69 @@ ubuntu_18_verify_repo_prod:
     DISTRO: "bionic"
     DEB_REPO: "rippled-deb"
   image:
-    name: ubuntu:18.04
+    name: artifactory.ops.ripple.com/ubuntu:18.04
   dependencies:
     - dpkg_sign
   <<: *only_primary
   <<: *run_repo_smoketest

-ubuntu_16_verify_repo_prod:
+ubuntu_20_verify_repo_prod:
   stage: verify_from_prod
   variables:
-    DISTRO: "xenial"
+    DISTRO: "focal"
     DEB_REPO: "rippled-deb"
   image:
-    name: ubuntu:16.04
+    name: artifactory.ops.ripple.com/ubuntu:20.04
   dependencies:
     - dpkg_sign
   <<: *only_primary
   <<: *run_repo_smoketest

+# TODO: remove "allow_failure" when 22.04 released in 4/2022...
+ubuntu_22_verify_repo_prod:
+  stage: verify_from_prod
+  variables:
+    DISTRO: "jammy"
+    DEB_REPO: "rippled-deb"
+  image:
+    name: artifactory.ops.ripple.com/ubuntu:22.04
+  dependencies:
+    - dpkg_sign
+  <<: *only_primary
+  <<: *run_repo_smoketest
+  allow_failure: true

 debian_9_verify_repo_prod:
   stage: verify_from_prod
   variables:
     DISTRO: "stretch"
     DEB_REPO: "rippled-deb"
   image:
-    name: debian:9
+    name: artifactory.ops.ripple.com/debian:9
   dependencies:
     - dpkg_sign
   <<: *only_primary
   <<: *run_repo_smoketest

+debian_10_verify_repo_prod:
+  stage: verify_from_prod
+  variables:
+    DISTRO: "buster"
+    DEB_REPO: "rippled-deb"
+  image:
+    name: artifactory.ops.ripple.com/debian:10
+  dependencies:
+    - dpkg_sign
+  <<: *only_primary
+  <<: *run_repo_smoketest

+debian_11_verify_repo_prod:
+  stage: verify_from_prod
+  variables:
+    DISTRO: "bullseye"
+    DEB_REPO: "rippled-deb"
+  image:
+    name: artifactory.ops.ripple.com/debian:11
+  dependencies:
+    - dpkg_sign
+  <<: *only_primary
@@ -583,7 +666,7 @@ get_prod_hashes:
     DEB_REPO: "rippled-deb"
     RPM_REPO: "rippled-rpm"
   image:
-    name: alpine:latest
+    name: artifactory.ops.ripple.com/alpine:latest
   stage: get_final_hashes
   artifacts:
     paths:

@@ -618,5 +701,3 @@ build_ubuntu_container:
   script:
     - . ./Builds/containers/gitlab-ci/build_container.sh dpkg
   allow_failure: true
@@ -19,7 +19,7 @@ RIPPLED_DBG_PKG=$(ls rippled-dbgsym_*.deb)
 # TODO - where to upload src tgz?
 RIPPLED_SRC=$(ls rippled_*.orig.tar.gz)
 DEB_MATRIX=";deb.component=${COMPONENT};deb.architecture=amd64"
-for dist in stretch buster xenial bionic disco ; do
+for dist in stretch buster bullseye bionic focal jammy; do
   DEB_MATRIX="${DEB_MATRIX};deb.distribution=${dist}"
 done
 echo "{ \"debs\": {" > "${TOPDIR}/files.info"

@@ -88,4 +88,3 @@ JSON
 )
 curl ${SLACK_NOTIFY_URL} --data-urlencode "${CONTENT}"
 fi
@@ -16,7 +16,7 @@ case ${ID} in
   ubuntu|debian)
     pkgtype="dpkg"
     ;;
-  fedora|centos|rhel|scientific)
+  fedora|centos|rhel|scientific|rocky)
     pkgtype="rpm"
     ;;
   *)

@@ -51,7 +51,7 @@ if [ "${pkgtype}" = "dpkg" ] ; then
 elif [ "${install_from}" = "local" ] ; then
   # cached pkg install
   updateWithRetry
-  apt-get -y install libprotobuf-dev libssl-dev
+  apt-get -y install libprotobuf-dev libprotoc-dev protobuf-compiler libssl-dev
   rm -f build/dpkg/packages/rippled-dbgsym*.*
   dpkg --no-debsig -i build/dpkg/packages/*.deb
 else

@@ -76,7 +76,12 @@ else
   yum -y install ${rpm_version_release}
 elif [ "${install_from}" = "local" ] ; then
   # cached pkg install
-  yum install -y yum-utils openssl-static zlib-static
+  pkgs=("yum-utils openssl-static zlib-static")
+  if [ "$ID" = "rocky" ]; then
+    sed -i 's/enabled=0/enabled=1/g' /etc/yum.repos.d/Rocky-PowerTools.repo
+    pkgs="${pkgs[@]/openssl-static}"
+  fi
+  yum install -y $pkgs
   rm -f build/rpm/packages/rippled-debug*.rpm
   rm -f build/rpm/packages/*.src.rpm
   rpm -i build/rpm/packages/*.rpm

@@ -95,5 +100,3 @@ fi
 # run unit tests
 /opt/ripple/bin/rippled --unittest --unittest-jobs $(nproc)
 /opt/ripple/bin/validator-keys --unittest
@@ -1,6 +1,7 @@
 #!/usr/bin/env sh
 set -ex
 apt -y update
+DEBIAN_FRONTEND="noninteractive" apt-get -y install tzdata
 apt -y install software-properties-common curl git gnupg
 curl -sk -o rippled-pubkeys.txt "${GIT_SIGN_PUBKEYS_URL}"
 gpg --import rippled-pubkeys.txt
@@ -1,3 +1,3 @@
-opt/ripple/etc/rippled.cfg
-opt/ripple/etc/validators.txt
-etc/logrotate.d/rippled
+/opt/ripple/etc/rippled.cfg
+/opt/ripple/etc/validators.txt
+/etc/logrotate.d/rippled
@@ -17,5 +17,5 @@ Section: devel
 Recommends: rippled (= ${binary:Version})
 Architecture: any
 Multi-Arch: same
-Depends: ${misc:Depends}, ${shlibs:Depends}, libprotobuf-dev, libssl-dev
+Depends: ${misc:Depends}, ${shlibs:Depends}, libprotobuf-dev, libprotoc-dev, protobuf-compiler
 Description: development files for applications using xrpl core library (serialize + sign)
@@ -1,3 +1,3 @@
 README.md
-LICENSE
+LICENSE.md
 RELEASENOTES.md
@@ -11,6 +11,9 @@ export CXXFLAGS:=$(subst -Werror=format-security,,$(CXXFLAGS))
 %:
 	dh $@ --with systemd

+override_dh_systemd_start:
+	dh_systemd_start --no-restart-on-upgrade

 override_dh_auto_configure:
 	env
 	rm -rf bld

@@ -19,17 +22,17 @@ override_dh_auto_configure:
 	cmake .. -G Ninja \
 	  -DCMAKE_INSTALL_PREFIX=/opt/ripple \
 	  -DCMAKE_BUILD_TYPE=Release \
-	  -DCMAKE_UNITY_BUILD_BATCH_SIZE=10 \
 	  -Dstatic=ON \
+	  -Dunity=OFF \
 	  -Dvalidator_keys=ON \
-	  -DCMAKE_VERBOSE_MAKEFILE=ON
+	  -DCMAKE_VERBOSE_MAKEFILE=OFF

 override_dh_auto_build:
 	cd bld && \
-	cmake --build . --target rippled --target validator-keys --parallel -- -v
+	cmake --build . --target rippled --target validator-keys --parallel

 override_dh_auto_install:
-	cd bld && DESTDIR=../debian/tmp cmake --build . --target install -- -v
+	cd bld && DESTDIR=../debian/tmp cmake --build . --target install
 	install -D bld/validator-keys/validator-keys debian/tmp/opt/ripple/bin/validator-keys
 	install -D Builds/containers/shared/update-rippled.sh debian/tmp/opt/ripple/bin/update-rippled.sh
 	install -D bin/getRippledInfo debian/tmp/opt/ripple/bin/getRippledInfo

@@ -38,5 +41,3 @@ override_dh_auto_install:
 	rm -rf debian/tmp/opt/ripple/lib64/cmake/date
 	rm -rf bld
 	rm -rf bld_vl
@@ -20,7 +20,7 @@ rippled
 %package devel
 Summary: Files for development of applications using xrpl core library
 Group: Development/Libraries
-Requires: openssl-static, zlib-static
+Requires: zlib-static

 %description devel
 core library for development of standalone applications that sign transactions.

@@ -32,15 +32,15 @@ core library for development of standalone applications that sign transactions.
 cd rippled
 mkdir -p bld.release
 cd bld.release
-cmake .. -G Ninja -DCMAKE_INSTALL_PREFIX=%{_prefix} -DCMAKE_BUILD_TYPE=Release -DCMAKE_UNITY_BUILD_BATCH_SIZE=10 -Dstatic=true -DCMAKE_VERBOSE_MAKEFILE=ON -Dvalidator_keys=ON
-cmake --build . --parallel --target rippled --target validator-keys -- -v
+cmake .. -G Ninja -DCMAKE_INSTALL_PREFIX=%{_prefix} -DCMAKE_BUILD_TYPE=Release -Dstatic=true -Dunity=OFF -DCMAKE_VERBOSE_MAKEFILE=OFF -Dvalidator_keys=ON
+cmake --build . --parallel --target rippled --target validator-keys

 %pre
 test -e /etc/pki/tls || { mkdir -p /etc/pki; ln -s /usr/lib/ssl /etc/pki/tls; }

 %install
 rm -rf $RPM_BUILD_ROOT
-DESTDIR=$RPM_BUILD_ROOT cmake --build rippled/bld.release --target install -- -v
+DESTDIR=$RPM_BUILD_ROOT cmake --build rippled/bld.release --target install
 rm -rf ${RPM_BUILD_ROOT}/%{_prefix}/lib64/cmake/date
 install -d ${RPM_BUILD_ROOT}/etc/opt/ripple
 install -d ${RPM_BUILD_ROOT}/usr/local/bin

@@ -76,7 +76,7 @@ chmod 644 /etc/logrotate.d/rippled
 chown -R root:$GROUP_NAME %{_prefix}/etc/update-rippled-cron

 %files
-%doc rippled/README.md rippled/LICENSE
+%doc rippled/README.md rippled/LICENSE.md
 %{_bindir}/rippled
 /usr/local/bin/rippled
 %{_bindir}/update-rippled.sh

@@ -110,4 +110,3 @@ chown -R root:$GROUP_NAME %{_prefix}/etc/update-rippled-cron

 * Thu Jun 02 2016 Brandon Wilson <bwilson@ripple.com>
 - Install validators.txt
@@ -9,7 +9,7 @@ function build_boost()
   mkdir -p /opt/local
   cd /opt/local
   BOOST_ROOT=/opt/local/boost_${boost_path}
-  BOOST_URL="https://dl.bintray.com/boostorg/release/${boost_ver}/source/boost_${boost_path}.tar.bz2"
+  BOOST_URL="https://boostorg.jfrog.io/artifactory/main/release/${boost_ver}/source/boost_${boost_path}.tar.gz"
   BOOST_BUILD_ALL=true
   . /tmp/install_boost.sh
   if [ "$do_link" = true ] ; then

@@ -29,7 +29,7 @@ cd openssl-${OPENSSL_VER}
 # NOTE: add -g to the end of the following line if we want debug symbols for openssl
 SSLDIR=$(openssl version -d | cut -d: -f2 | tr -d [:space:]\")
 ./config -fPIC --prefix=/opt/local/openssl --openssldir=${SSLDIR} zlib shared
-make -j$(nproc)
+make -j$(nproc) >> make_output.txt 2>&1
 make install
 cd ..
 rm -f openssl-${OPENSSL_VER}.tar.gz

@@ -42,7 +42,7 @@ tar xzf libarchive-3.4.1.tar.gz
 cd libarchive-3.4.1
 mkdir _bld && cd _bld
 cmake -DCMAKE_BUILD_TYPE=Release ..
-make -j$(nproc)
+make -j$(nproc) >> make_output.txt 2>&1
 make install
 cd ../..
 rm -f libarchive-3.4.1.tar.gz

@@ -54,7 +54,7 @@ tar xf protobuf-all-3.10.1.tar.gz
 cd protobuf-3.10.1
 ./autogen.sh
 ./configure
-make -j$(nproc)
+make -j$(nproc) >> make_output.txt 2>&1
 make install
 ldconfig
 cd ..

@@ -77,7 +77,7 @@ cmake \
   -DCARES_BUILD_TESTS=OFF \
   -DCARES_BUILD_CONTAINER_TESTS=OFF \
   ..
-make -j$(nproc)
+make -j$(nproc) >> make_output.txt 2>&1
 make install
 cd ../..
 rm -f c-ares-1.15.0.tar.gz

@@ -97,7 +97,7 @@ cmake \
   -DgRPC_PROTOBUF_PROVIDER=package \
   -DProtobuf_USE_STATIC_LIBS=ON \
   ..
-make -j$(nproc)
+make -j$(nproc) >> make_output.txt 2>&1
 make install
 cd ../..
 rm -f v1.25.0.tar.gz

@@ -114,7 +114,7 @@ if [ "${CI_USE}" = true ] ; then
   mkdir build
   cd build
   cmake -G "Unix Makefiles" ..
-  make -j$(nproc)
+  make -j$(nproc) >> make_output.txt 2>&1
   make install
   cd ../..
   rm -f Release_1_8_16.tar.gz

@@ -145,4 +145,3 @@ if [ "${CI_USE}" = true ] ; then
   pip install requests
   pip install https://github.com/codecov/codecov-python/archive/master.zip
 fi
@@ -1,7 +1,9 @@
 #!/usr/bin/env bash
 # Assumptions:
 # 1) BOOST_ROOT and BOOST_URL are already defined,
-#    and contain valid values.
+#    and contain valid values. BOOST_URL2 may be defined
+#    as a fallback. BOOST_WGET_OPTIONS may be defined with
+#    retry options if the download(s) fail on the first try.
 # 2) The last name part of BOOST_ROOT matches the
 #    folder name internal to boost's .tar.gz
 # When testing you can force a boost build by clearing travis caches:

@@ -19,7 +21,16 @@ fi
 #fetch/unpack:
 fn=$(basename -- "$BOOST_URL")
 ext="${fn##*.}"
-wget --quiet $BOOST_URL -O /tmp/boost.tar.${ext}
+wopt="--quiet"
+wget ${wopt} $BOOST_URL -O /tmp/boost.tar.${ext} || \
+  ( [ -n "${BOOST_URL2}" ] && \
+    wget ${wopt} $BOOST_URL2 -O /tmp/boost.tar.${ext} ) || \
+  ( [ -n "${BOOST_WGET_OPTIONS}" ] &&
+    ( wget ${wopt} ${BOOST_WGET_OPTIONS} $BOOST_URL -O /tmp/boost.tar.${ext} || \
+      ( [ -n "${BOOST_URL2}" ] && \
+        wget ${wopt} ${BOOST_WGET_OPTIONS} $BOOST_URL2 -O /tmp/boost.tar.${ext} )
+    )
+  )
 cd $(dirname $BOOST_ROOT)
 rm -fr ${BOOST_ROOT}
 mkdir ${BOOST_ROOT}
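A sketch of driving the script with the optional fallbacks described in the comments above; the mirror URL and retry flags are hypothetical illustration values, not taken from the repo:

```
# hypothetical invocation; BOOST_URL2 and BOOST_WGET_OPTIONS are optional
export BOOST_ROOT=/opt/local/boost_1_70_0
export BOOST_URL="https://boostorg.jfrog.io/artifactory/main/release/1.70.0/source/boost_1_70_0.tar.gz"
export BOOST_URL2="https://example.org/mirror/boost_1_70_0.tar.gz"  # fallback mirror (hypothetical)
export BOOST_WGET_OPTIONS="--tries=3 --waitretry=5"                 # retry options (hypothetical)
. /tmp/install_boost.sh
```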
@@ -33,13 +44,13 @@ if [[ ${BOOST_BUILD_ALL:-false} == "true" ]]; then
   BLDARGS+=(--without-python)
 else
   BLDARGS+=(--with-chrono)
+  BLDARGS+=(--with-container)
   BLDARGS+=(--with-context)
   BLDARGS+=(--with-coroutine)
   BLDARGS+=(--with-date_time)
   BLDARGS+=(--with-filesystem)
   BLDARGS+=(--with-program_options)
   BLDARGS+=(--with-regex)
   BLDARGS+=(--with-serialization)
   BLDARGS+=(--with-system)
   BLDARGS+=(--with-atomic)
   BLDARGS+=(--with-thread)
@@ -1,12 +1,12 @@
 [Unit]
 Description=Ripple Daemon
 After=network-online.target
 Wants=network-online.target

 [Service]
 Type=simple
 ExecStart=/opt/ripple/bin/rippled --net --silent --conf /etc/opt/ripple/rippled.cfg
 # Default KillSignal can be used if/when rippled handles SIGTERM
 KillSignal=SIGINT
-Restart=no
+Restart=on-failure
 User=rippled
 Group=rippled
 LimitNOFILE=65536
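When a package upgrade ships this unit change, the usual way to pick it up is to reload systemd and restart the service; a minimal sketch using standard systemctl commands (not part of the diff itself):

```
sudo systemctl daemon-reload        # re-read the changed unit file
sudo systemctl restart rippled      # apply it to the running service
systemctl show rippled -p Restart   # should print "Restart=on-failure"
```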
@@ -1,4 +1,4 @@
-ARG DIST_TAG=16.04
+ARG DIST_TAG=18.04
 FROM ubuntu:$DIST_TAG
 ARG GIT_COMMIT=unknown
 ARG CI_USE=false

@@ -42,7 +42,8 @@ apt-get -y --fix-missing install \
   java-common javacc \
   dpkg-dev debhelper devscripts fakeroot \
   debmake git-buildpackage dh-make gitpkg debsums gnupg \
-  dh-buildinfo dh-make dh-systemd
+  dh-buildinfo dh-make dh-systemd \
+  apt-transport-https

 apt-get -y install gcc-7 g++-7
 update-alternatives --install \
114
Builds/levelization/README.md
Normal file
114
Builds/levelization/README.md
Normal file
@@ -0,0 +1,114 @@
|
||||
# Levelization
|
||||
|
||||
Levelization is the term used to describe efforts to prevent rippled from
|
||||
having or creating cyclic dependencies.
|
||||
|
||||
rippled code is organized into directories under `src/rippled` (and
|
||||
`src/test`) representing modules. The modules are intended to be
|
||||
organized into "tiers" or "levels" such that a module from one level can
|
||||
only include code from lower levels. Additionally, a module
|
||||
in one level should never include code in an `impl` folder of any level
|
||||
other than it's own.
|
||||
|
||||
Unfortunately, over time, enforcement of levelization has been
|
||||
inconsistent, so the current state of the code doesn't necessarily
|
||||
reflect these rules. Whenever possible, developers should refactor any
|
||||
levelization violations they find (by moving files or individual
|
||||
classes). At the very least, don't make things worse.
|
||||
|
||||
The table below summarizes the _desired_ division of modules, based on the
|
||||
state of the rippled code when it was created. The levels are numbered from
|
||||
the bottom up with the lower level, lower numbered, more independent
|
||||
modules listed first, and the higher level, higher numbered modules with
|
||||
more dependencies listed later.
|
||||
|
||||
**tl;dr:** The modules listed first are more independent than the modules
|
||||
listed later.
|
||||
|
||||
| Level / Tier | Module(s) |
|
||||
|--------------|-----------------------------------------------|
|
||||
| 01 | ripple/beast ripple/unity
|
||||
| 02 | ripple/basics
|
||||
| 03 | ripple/json ripple/crypto
|
||||
| 04 | ripple/protocol
|
||||
| 05 | ripple/core ripple/conditions ripple/consensus ripple/resource ripple/server
|
||||
| 06 | ripple/peerfinder ripple/ledger ripple/nodestore ripple/net
|
||||
| 07 | ripple/shamap ripple/overlay
|
||||
| 08 | ripple/app
|
||||
| 09 | ripple/rpc
|
||||
| 10 | ripple/perflog
|
||||
| 11 | test/jtx test/beast test/csf
|
||||
| 12 | test/unit_test
|
||||
| 13 | test/crypto test/conditions test/json test/resource test/shamap test/peerfinder test/basics test/overlay
|
||||
| 14 | test
|
||||
| 15 | test/net test/protocol test/ledger test/consensus test/core test/server test/nodestore
|
||||
| 16 | test/rpc test/app
|
||||
|
||||
(Note that `test` levelization is *much* less important and *much* less
|
||||
strictly enforced than `ripple` levelization, other than the requirement
|
||||
that `test` code should *never* be included in `ripple` code.)
|
||||
|
||||
## Validation

The [levelization.sh](levelization.sh) script takes no parameters,
reads no environment variables, and can be run from any directory,
as long as it is in the expected location in the rippled repo.
It can be run at any time from within a checked-out repo, and will
analyze all of the `#include`s in the rippled source. The only
caveat is that it runs much more slowly under Windows than under
Linux. It has not yet been tested under macOS.
It generates several files of [results](results):

* `rawincludes.txt`: The raw dump of the `#include`s.
* `paths.txt`: A second dump grouping each source module
  to its destination modules, deduped, and with frequency counts.
* `includes/`: A directory where each file represents a module and
  contains a list of modules and counts that the module _includes_.
* `includedby/`: Similar to `includes/`, but the other way around. Each
  file represents a module and contains a list of modules and counts
  that _include_ the module.
* [`loops.txt`](results/loops.txt): A list of direct loops detected
  between modules as they actually exist, as opposed to how they are
  desired as described above. In a perfect repo, this file will be
  empty.
  This file is committed to the repo, and is used by the [levelization
  Github workflow](../../.github/workflows/levelization.yml) to validate
  that nothing changed.
* [`ordering.txt`](results/ordering.txt): A list showing relationships
  between modules where there are no loops as they actually exist, as
  opposed to how they are desired as described above.
  This file is committed to the repo, and is used by the [levelization
  Github workflow](../../.github/workflows/levelization.yml) to validate
  that nothing changed.
* [`levelization.yml`](../../.github/workflows/levelization.yml):
  The Github Actions workflow that tests whether levelization loops have
  changed. Unfortunately, if changes are detected, it can't tell whether
  they are improvements or regressions, so if you have resolved any
  issues or done anything else to improve levelization, run
  `levelization.sh` and commit the updated results.

The `loops.txt` and `ordering.txt` files relate the modules
using comparison signs, which indicate the number of times each
module is included in the other.

* `A > B` means that A should probably be at a higher level than B,
  because B is included in A significantly more often than A is included
  in B. These results can appear in both `loops.txt` and `ordering.txt`.
  Because `ordering.txt` only includes relationships where B is not
  included in A at all, it will only contain this type of result.
* `A ~= B` means that A and B are included in each other a different
  number of times, but the counts are so close that the script can't
  definitively say that one should be above the other. These results
  will only appear in `loops.txt`.
* `A == B` means that A and B include each other the same number of
  times, so the script has no clue which should be higher. These results
  will only appear in `loops.txt`.

The committed files intentionally hide the detailed counts, to
prevent false alarms and merge conflicts, and because it's easy to
get those details locally:

1. Run `levelization.sh`.
2. Grep the modules in `paths.txt`.
   * For example, if a loop is reported as `A ~= B`, simply run `grep -w
     A Builds/levelization/results/paths.txt | grep -w B`.
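
For instance, checking the `ripple.basics` / `ripple.json` loop reported
in [`loops.txt`](results/loops.txt) might look like the sketch below (the
counts shown here are hypothetical):

```
$ grep -w ripple.basics Builds/levelization/results/paths.txt | grep -w ripple.json
      4 ripple.basics ripple.json
      7 ripple.json ripple.basics
```

Each line is a `uniq -c` frequency count followed by the including module
and the included module, so two counts this close in both directions are
exactly what make the script report `ripple.json ~= ripple.basics`.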

Builds/levelization/levelization.sh (new executable file, 127 lines)
@@ -0,0 +1,127 @@

```
#!/bin/bash

# Usage: levelization.sh
# This script takes no parameters, reads no environment variables,
# and can be run from any directory, as long as it is in the expected
# location in the repo.

pushd $( dirname $0 )

if [ -v PS1 ]
then
    # if the shell is interactive, clean up any flotsam before analyzing
    git clean -ix
fi

rm -rfv results
mkdir results
includes="$( pwd )/results/rawincludes.txt"
pushd ../..
echo Raw includes:
grep -r '#include.*/.*\.h' src/ripple/ src/test/ | \
    grep -v boost | tee ${includes}
popd
pushd results

oldifs=${IFS}
IFS=:
mkdir includes
mkdir includedby
echo Build levelization paths
exec 3< ${includes} # open rawincludes.txt for input
while read -r -u 3 file include
do
    level=$( echo ${file} | cut -d/ -f 2,3 )
    # If the "level" indicates a file, cut off the filename
    if [[ "${level##*.}" != "${level}" ]]
    then
        # Use the "toplevel" label as a workaround for `sort`
        # inconsistencies between different utility versions
        level="$( dirname ${level} )/toplevel"
    fi
    level=$( echo ${level} | tr '/' '.' )

    includelevel=$( echo ${include} | sed 's/.*["<]//; s/[">].*//' | \
        cut -d/ -f 1,2 )
    if [[ "${includelevel##*.}" != "${includelevel}" ]]
    then
        # Use the "toplevel" label as a workaround for `sort`
        # inconsistencies between different utility versions
        includelevel="$( dirname ${includelevel} )/toplevel"
    fi
    includelevel=$( echo ${includelevel} | tr '/' '.' )

    if [[ "$level" != "$includelevel" ]]
    then
        echo $level $includelevel | tee -a paths.txt
    fi
done
echo Sort and dedup paths
sort -ds paths.txt | uniq -c | tee sortedpaths.txt
mv sortedpaths.txt paths.txt
exec 3>&- # close fd 3
IFS=${oldifs}
unset oldifs

echo Split into flat-file database
exec 4<paths.txt # open paths.txt for input
while read -r -u 4 count level include
do
    echo ${include} ${count} | tee -a includes/${level}
    echo ${level} ${count} | tee -a includedby/${include}
done
exec 4>&- # close fd 4

loops="$( pwd )/loops.txt"
ordering="$( pwd )/ordering.txt"
pushd includes
echo Search for loops
# Redirect stdout to a file
exec 4>&1
exec 1>"${loops}"
for source in *
do
    if [[ -f "$source" ]]
    then
        exec 5<"${source}" # open for input
        while read -r -u 5 include includefreq
        do
            if [[ -f $include ]]
            then
                if grep -q -w $source $include
                then
                    if grep -q -w "Loop: $include $source" "${loops}"
                    then
                        continue
                    fi
                    sourcefreq=$( grep -w $source $include | cut -d\  -f2 )
                    echo "Loop: $source $include"
                    # If the counts are close, indicate that the two modules are
                    # on the same level, though they shouldn't be
                    if [[ $(( $includefreq - $sourcefreq )) -gt 3 ]]
                    then
                        echo -e " $source > $include\n"
                    elif [[ $(( $sourcefreq - $includefreq )) -gt 3 ]]
                    then
                        echo -e " $include > $source\n"
                    elif [[ $sourcefreq -eq $includefreq ]]
                    then
                        echo -e " $include == $source\n"
                    else
                        echo -e " $include ~= $source\n"
                    fi
                else
                    echo "$source > $include" >> "${ordering}"
                fi
            fi
        done
        exec 5>&- # close fd 5
    fi
done
exec 1>&4 # restore stdout
exec 4>&- # close fd 4
cat "${ordering}"
cat "${loops}"
popd
popd
popd
```
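
As a quick reference, a typical run looks like the sketch below (the
`ls` output simply lists the result files described above):

```
$ cd Builds/levelization
$ ./levelization.sh
$ ls results
includedby  includes  loops.txt  ordering.txt  paths.txt  rawincludes.txt
```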

Builds/levelization/results/loops.txt (new file, 51 lines)
@@ -0,0 +1,51 @@

```
Loop: ripple.app ripple.core
 ripple.app > ripple.core

Loop: ripple.app ripple.ledger
 ripple.app > ripple.ledger

Loop: ripple.app ripple.net
 ripple.app > ripple.net

Loop: ripple.app ripple.nodestore
 ripple.app > ripple.nodestore

Loop: ripple.app ripple.overlay
 ripple.overlay == ripple.app

Loop: ripple.app ripple.peerfinder
 ripple.peerfinder ~= ripple.app

Loop: ripple.app ripple.rpc
 ripple.rpc > ripple.app

Loop: ripple.app ripple.shamap
 ripple.app > ripple.shamap

Loop: ripple.basics ripple.core
 ripple.core > ripple.basics

Loop: ripple.basics ripple.json
 ripple.json ~= ripple.basics

Loop: ripple.basics ripple.protocol
 ripple.protocol > ripple.basics

Loop: ripple.core ripple.net
 ripple.net > ripple.core

Loop: ripple.net ripple.rpc
 ripple.rpc > ripple.net

Loop: ripple.nodestore ripple.overlay
 ripple.overlay ~= ripple.nodestore

Loop: ripple.overlay ripple.rpc
 ripple.rpc ~= ripple.overlay

Loop: test.jtx test.toplevel
 test.toplevel > test.jtx

Loop: test.jtx test.unit_test
 test.unit_test == test.jtx
```

Builds/levelization/results/ordering.txt (new file, 230 lines)
@@ -0,0 +1,230 @@

```
ripple.app > ripple.basics
ripple.app > ripple.beast
ripple.app > ripple.conditions
ripple.app > ripple.consensus
ripple.app > ripple.crypto
ripple.app > ripple.json
ripple.app > ripple.protocol
ripple.app > ripple.resource
ripple.app > ripple.server
ripple.app > test.unit_test
ripple.basics > ripple.beast
ripple.conditions > ripple.basics
ripple.conditions > ripple.protocol
ripple.consensus > ripple.basics
ripple.consensus > ripple.beast
ripple.consensus > ripple.json
ripple.consensus > ripple.protocol
ripple.core > ripple.beast
ripple.core > ripple.json
ripple.core > ripple.protocol
ripple.crypto > ripple.basics
ripple.json > ripple.beast
ripple.ledger > ripple.basics
ripple.ledger > ripple.beast
ripple.ledger > ripple.core
ripple.ledger > ripple.json
ripple.ledger > ripple.protocol
ripple.net > ripple.basics
ripple.net > ripple.beast
ripple.net > ripple.json
ripple.net > ripple.protocol
ripple.net > ripple.resource
ripple.nodestore > ripple.basics
ripple.nodestore > ripple.beast
ripple.nodestore > ripple.core
ripple.nodestore > ripple.json
ripple.nodestore > ripple.protocol
ripple.nodestore > ripple.unity
ripple.overlay > ripple.basics
ripple.overlay > ripple.beast
ripple.overlay > ripple.core
ripple.overlay > ripple.json
ripple.overlay > ripple.peerfinder
ripple.overlay > ripple.protocol
ripple.overlay > ripple.resource
ripple.overlay > ripple.server
ripple.peerfinder > ripple.basics
ripple.peerfinder > ripple.beast
ripple.peerfinder > ripple.core
ripple.peerfinder > ripple.protocol
ripple.perflog > ripple.basics
ripple.perflog > ripple.beast
ripple.perflog > ripple.core
ripple.perflog > ripple.json
ripple.perflog > ripple.nodestore
ripple.perflog > ripple.protocol
ripple.perflog > ripple.rpc
ripple.protocol > ripple.beast
ripple.protocol > ripple.crypto
ripple.protocol > ripple.json
ripple.resource > ripple.basics
ripple.resource > ripple.beast
ripple.resource > ripple.json
ripple.resource > ripple.protocol
ripple.rpc > ripple.basics
ripple.rpc > ripple.beast
ripple.rpc > ripple.core
ripple.rpc > ripple.crypto
ripple.rpc > ripple.json
ripple.rpc > ripple.ledger
ripple.rpc > ripple.nodestore
ripple.rpc > ripple.protocol
ripple.rpc > ripple.resource
ripple.rpc > ripple.server
ripple.rpc > ripple.shamap
ripple.server > ripple.basics
ripple.server > ripple.beast
ripple.server > ripple.crypto
ripple.server > ripple.json
ripple.server > ripple.protocol
ripple.shamap > ripple.basics
ripple.shamap > ripple.beast
ripple.shamap > ripple.crypto
ripple.shamap > ripple.nodestore
ripple.shamap > ripple.protocol
test.app > ripple.app
test.app > ripple.basics
test.app > ripple.beast
test.app > ripple.core
test.app > ripple.json
test.app > ripple.ledger
test.app > ripple.overlay
test.app > ripple.protocol
test.app > ripple.resource
test.app > ripple.rpc
test.app > test.jtx
test.app > test.rpc
test.app > test.toplevel
test.app > test.unit_test
test.basics > ripple.basics
test.basics > ripple.beast
test.basics > ripple.json
test.basics > ripple.protocol
test.basics > ripple.rpc
test.basics > test.jtx
test.basics > test.unit_test
test.beast > ripple.basics
test.beast > ripple.beast
test.conditions > ripple.basics
test.conditions > ripple.beast
test.conditions > ripple.conditions
test.consensus > ripple.app
test.consensus > ripple.basics
test.consensus > ripple.beast
test.consensus > ripple.consensus
test.consensus > ripple.ledger
test.consensus > ripple.rpc
test.consensus > test.csf
test.consensus > test.toplevel
test.consensus > test.unit_test
test.core > ripple.basics
test.core > ripple.beast
test.core > ripple.core
test.core > ripple.crypto
test.core > ripple.json
test.core > ripple.server
test.core > test.jtx
test.core > test.toplevel
test.core > test.unit_test
test.csf > ripple.basics
test.csf > ripple.beast
test.csf > ripple.consensus
test.csf > ripple.json
test.csf > ripple.protocol
test.json > ripple.beast
test.json > ripple.json
test.json > test.jtx
test.jtx > ripple.app
test.jtx > ripple.basics
test.jtx > ripple.beast
test.jtx > ripple.consensus
test.jtx > ripple.core
test.jtx > ripple.json
test.jtx > ripple.ledger
test.jtx > ripple.net
test.jtx > ripple.protocol
test.jtx > ripple.server
test.ledger > ripple.app
test.ledger > ripple.basics
test.ledger > ripple.beast
test.ledger > ripple.core
test.ledger > ripple.ledger
test.ledger > ripple.protocol
test.ledger > test.jtx
test.ledger > test.toplevel
test.net > ripple.net
test.net > test.jtx
test.net > test.toplevel
test.net > test.unit_test
test.nodestore > ripple.app
test.nodestore > ripple.basics
test.nodestore > ripple.beast
test.nodestore > ripple.core
test.nodestore > ripple.nodestore
test.nodestore > ripple.protocol
test.nodestore > ripple.unity
test.nodestore > test.jtx
test.nodestore > test.toplevel
test.nodestore > test.unit_test
test.overlay > ripple.app
test.overlay > ripple.basics
test.overlay > ripple.beast
test.overlay > ripple.core
test.overlay > ripple.overlay
test.overlay > ripple.peerfinder
test.overlay > ripple.protocol
test.overlay > ripple.shamap
test.overlay > test.jtx
test.overlay > test.unit_test
test.peerfinder > ripple.basics
test.peerfinder > ripple.beast
test.peerfinder > ripple.core
test.peerfinder > ripple.peerfinder
test.peerfinder > ripple.protocol
test.peerfinder > test.beast
test.peerfinder > test.unit_test
test.protocol > ripple.basics
test.protocol > ripple.beast
test.protocol > ripple.crypto
test.protocol > ripple.json
test.protocol > ripple.protocol
test.protocol > test.toplevel
test.resource > ripple.basics
test.resource > ripple.beast
test.resource > ripple.resource
test.resource > test.unit_test
test.rpc > ripple.app
test.rpc > ripple.basics
test.rpc > ripple.beast
test.rpc > ripple.core
test.rpc > ripple.json
test.rpc > ripple.net
test.rpc > ripple.nodestore
test.rpc > ripple.overlay
test.rpc > ripple.protocol
test.rpc > ripple.resource
test.rpc > ripple.rpc
test.rpc > test.jtx
test.rpc > test.nodestore
test.rpc > test.toplevel
test.server > ripple.app
test.server > ripple.basics
test.server > ripple.beast
test.server > ripple.core
test.server > ripple.json
test.server > ripple.rpc
test.server > ripple.server
test.server > test.jtx
test.server > test.toplevel
test.server > test.unit_test
test.shamap > ripple.basics
test.shamap > ripple.beast
test.shamap > ripple.nodestore
test.shamap > ripple.protocol
test.shamap > ripple.shamap
test.shamap > test.unit_test
test.toplevel > ripple.json
test.toplevel > test.csf
test.unit_test > ripple.basics
test.unit_test > ripple.beast
```

@@ -13,13 +13,18 @@ management tools.

## Dependencies

-gcc-7 or later is required.
+gcc-8 or later is required.

Use `apt-get` to install the dependencies provided by the distribution

```
$ apt-get update
-$ apt-get install -y gcc g++ wget git cmake protobuf-compiler libprotobuf-dev libssl-dev
+$ apt-get install -y gcc g++ wget git cmake pkg-config libprotoc-dev protobuf-compiler libprotobuf-dev libssl-dev
```

To build the software in reporting mode, install these additional dependencies:
```
$ apt-get install -y autoconf flex bison
```

Advanced users can choose to install newer versions of gcc, or the clang compiler.

@@ -31,9 +36,8 @@ protobuf will give errors.

Boost 1.70 or later is required. We recommend downloading and compiling boost
with the following process: After changing to the directory where
you wish to download and compile boost, run

```
-$ wget https://dl.bintray.com/boostorg/release/1.70.0/source/boost_1_70_0.tar.gz
+$ wget https://boostorg.jfrog.io/artifactory/main/release/1.70.0/source/boost_1_70_0.tar.gz
$ tar -xzf boost_1_70_0.tar.gz
$ cd boost_1_70_0
$ ./bootstrap.sh

@@ -139,6 +143,7 @@ testing and running.

* `-Dsan=thread` to enable the thread sanitizer with clang
* `-Dsan=address` to enable the address sanitizer with clang
* `-Dstatic=ON` to enable static linking library dependencies
* `-Dreporting=ON` to build code necessary for reporting mode (defaults to OFF)

Several other infrequently used options are available - run `ccmake` or
`cmake-gui` for a list of all options.
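
As one concrete example (a sketch only; pick the options you actually
need), a release build with reporting mode and static linking could be
configured and built as:

```
$ mkdir my_build && cd my_build
$ cmake -DCMAKE_BUILD_TYPE=Release -Dstatic=ON -Dreporting=ON ..
$ cmake --build . -- -j 4
```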

@@ -155,7 +160,7 @@ the `-j` parameter in this example tells the build tool to compile several
files in parallel. This value should be chosen roughly based on the number of
cores you have available and/or want to use for building.

-When the build completes succesfully, you will have a `rippled` executable in
+When the build completes successfully, you will have a `rippled` executable in
the current directory, which can be used to connect to the network (when
properly configured) or to run unit tests.

@@ -234,5 +239,3 @@ change the `/opt/local` module path above to match your chosen installation pref

`rippled` builds a set of unit tests into the server executable. To run these unit
tests after building, pass the `--unittest` option to the compiled `rippled`
executable. The executable will exit with summary info after running the unit tests.
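
For example, to run the full test suite from the build directory:

```
$ ./rippled --unittest
```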

@@ -1,231 +1,3 @@

# macos Build Instructions

## Important

We don't recommend macos for rippled production use at this time. Currently, the
Ubuntu platform has received the highest level of quality assurance and
testing. That said, macos is suitable for many development/test tasks.

## Prerequisites

You'll need macos 10.8 or later.

To clone the source code repository, create branches for inspection or
modification, build rippled using clang, and run the system tests, you will need
these software components:

* [XCode](https://developer.apple.com/xcode/)
* [Homebrew](http://brew.sh/)
* [Boost](http://boost.org/)
* other misc utilities and libraries installed via homebrew

## Install Software

### Install XCode

If not already installed on your system, download and install XCode using the
app store or by using [this link](https://developer.apple.com/xcode/).

For more info, see "Step 1: Download and Install the Command Line Tools"
[here](http://www.moncefbelyamani.com/how-to-install-xcode-homebrew-git-rvm-ruby-on-mac)

The command line tools can be installed through the terminal with the command:

```
xcode-select --install
```

### Install Homebrew

> "[Homebrew](http://brew.sh/) installs the stuff you need that Apple didn’t."

Open a terminal and type:

```
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```

For more info, see "Step 2: Install Homebrew"
[here](http://www.moncefbelyamani.com/how-to-install-xcode-homebrew-git-rvm-ruby-on-mac#step-2)

### Install Dependencies Using Homebrew

`brew` will generally install the latest stable version of any package, which
should satisfy the minimum version requirements for rippled.

```
brew update
brew install git cmake pkg-config protobuf openssl ninja
```

### Build Boost

Boost 1.70 or later is required.

We want to compile boost with clang/libc++.

Download [a release](https://dl.bintray.com/boostorg/release/1.70.0/source/boost_1_70_0.tar.bz2)

Extract it to a folder, making note of where, open a terminal, then:

```
./bootstrap.sh
./b2 cxxflags="-std=c++14" visibility=global
```

Create an environment variable `BOOST_ROOT` in one of your `rc` files, pointing
to the root of the extracted directory.

### Dependencies for Building Source Documentation

Source code documentation is not required for running/debugging rippled. That
said, the documentation contains some helpful information about specific
components of the application. For more information on how to install and run
the necessary components, see [this document](../../docs/README.md)

## Build

### Clone the rippled repository

From a shell:

```
git clone git@github.com:ripple/rippled.git
cd rippled
```

For a stable release, choose the `master` branch or one of the tagged releases
listed on [GitHub](https://github.com/ripple/rippled/releases).

```
git checkout master
```

or to test the latest release candidate, choose the `release` branch.

```
git checkout release
```

If you are doing development work and want the latest set of untested
features, you can consider using the `develop` branch instead.

```
git checkout develop
```

### Configure Library Paths

If you didn't persistently set the `BOOST_ROOT` environment variable to the
root of the extracted directory above, then you should set it temporarily.

For example, assuming your username were `Abigail` and you extracted Boost
1.70.0 in `/Users/Abigail/Downloads/boost_1_70_0`, you would do for any
shell in which you want to build:

```
export BOOST_ROOT=/Users/Abigail/Downloads/boost_1_70_0
```

### Generate and Build

For simple command line building we recommend using the *Unix Makefile* or
*Ninja* generator with cmake. All builds should be done in a separate directory
from the source tree root (a subdirectory is fine). For example, from the root
of the ripple source tree:

```
mkdir my_build
cd my_build
```

followed by:

```
cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Debug ..
```

or

```
cmake -G "Ninja" -DCMAKE_BUILD_TYPE=Debug ..
```

`CMAKE_BUILD_TYPE` can be changed as desired for `Debug` vs.
`Release` builds (all four standard cmake build types are supported).

Once you have generated the build system, you can run the build via cmake:

```
cmake --build . -- -j 4
```

the `-j` parameter in this example tells the build tool to compile several
files in parallel. This value should be chosen roughly based on the number of
cores you have available and/or want to use for building.

When the build completes successfully, you will have a `rippled` executable in
the current directory, which can be used to connect to the network (when
properly configured) or to run unit tests.

If you prefer to have an XCode project to use for building, ask CMake to
generate that instead:

```
cmake -GXcode ..
```

After generation succeeds, the xcode project file can be opened and used to
build/debug. However, just as with other generators, cmake knows how to build
using the xcode project as well:

```
cmake --build . -- -jobs 4
```

This will invoke the `xcodebuild` utility to compile the project. See `xcodebuild
--help` for details about build options.

#### Optional installation

If you'd like to install the artifacts of the build, we have preliminary
support for standard CMake installation targets. We recommend explicitly
setting the installation location when configuring, e.g.:

```
cmake -DCMAKE_INSTALL_PREFIX=/opt/local ..
```

(change the destination as desired), and then build the `install` target:

```
cmake --build . --target install -- -jobs 4
```

#### Options During Configuration

The CMake file defines a number of configure-time options which can be
examined by running `cmake-gui` or `ccmake` to generate the build. In
particular, the `unity` option allows you to select between the unity and
non-unity builds. `unity` builds are faster to compile since they combine
multiple sources into a single compilation unit - this is the default if you
don't specify. `nounity` builds can be helpful for detecting include omissions
or for finding other build-related issues, but aren't generally needed for
testing and running.

* `-Dunity=ON` to enable/disable unity builds (defaults to ON)
* `-Dassert=ON` to enable asserts
* `-Djemalloc=ON` to enable jemalloc support for heap checking
* `-Dsan=thread` to enable the thread sanitizer with clang
* `-Dsan=address` to enable the address sanitizer with clang

Several other infrequently used options are available - run `ccmake` or
`cmake-gui` for a list of all options.

## Unit Tests (Recommended)

`rippled` builds a set of unit tests into the server executable. To run these unit
tests after building, pass the `--unittest` option to the compiled `rippled`
executable. The executable will exit with summary info after running the unit tests.

# macOS Build Instructions

[Build and Run rippled on macOS](https://xrpl.org/build-run-rippled-macos.html)

@@ -1,10 +1,31 @@
-cmake_minimum_required (VERSION 3.9.0)
+cmake_minimum_required (VERSION 3.16)

if (POLICY CMP0074)
  cmake_policy(SET CMP0074 NEW)
endif ()

project (rippled)
set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# make GIT_COMMIT_HASH define available to all sources
find_package(Git)
if(Git_FOUND)
    execute_process(COMMAND ${GIT_EXECUTABLE} describe --always --abbrev=40
        OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gch)
    if(gch)
        set(GIT_COMMIT_HASH "${gch}")
        message(STATUS gch: ${GIT_COMMIT_HASH})
        add_definitions(-DGIT_COMMIT_HASH="${GIT_COMMIT_HASH}")
    endif()
endif() #git

if (thread_safety_analysis)
  add_compile_options(-Wthread-safety -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS -DRIPPLE_ENABLE_THREAD_SAFETY_ANNOTATIONS)
  add_compile_options("-stdlib=libc++")
  add_link_options("-stdlib=libc++")
endif()

list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake")
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/deps")

@@ -55,6 +76,8 @@ include(deps/Nudb)
include(deps/date)
include(deps/Protobuf)
include(deps/gRPC)
include(deps/cassandra)
include(deps/Postgres)

###
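
Because the `GIT_COMMIT_HASH` definition above feeds into the server's
version string, one way to spot-check it after building is to print the
version (a sketch; the exact output format depends on the build):

```
$ ./rippled --version
rippled version 1.8.5+<commit-hash>
```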

LICENSE (deleted file, 77 lines)
@@ -1,77 +0,0 @@

The accompanying files under various copyrights.

Copyright (c) 2012, 2013, 2014 Ripple Labs Inc.

Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

The accompanying files incorporate work covered by the following copyright
and previous license notice:

Copyright (c) 2011 Arthur Britto, David Schwartz, Jed McCaleb,
Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant

Some code from Raw Material Software, Ltd., provided under the terms of the
ISC License. See the corresponding source files for more details.
Copyright (c) 2013 - Raw Material Software Ltd.
Please visit http://www.juce.com

Some code from ASIO examples:
// Copyright (c) 2003-2011 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)

Some code from Bitcoin:
// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2011 The Bitcoin developers
// Distributed under the MIT/X11 software license, see the accompanying
// file license.txt or http://www.opensource.org/licenses/mit-license.php.

Some code from Tom Wu:
This software is covered under the following copyright:

/*
 * Copyright (c) 2003-2005 Tom Wu
 * All Rights Reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice shall be
 * included in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS, IMPLIED OR OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY
 * WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
 *
 * IN NO EVENT SHALL TOM WU BE LIABLE FOR ANY SPECIAL, INCIDENTAL,
 * INDIRECT OR CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER
 * RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER OR NOT ADVISED OF
 * THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT
 * OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 *
 * In addition, the following condition applies:
 *
 * All redistributions must retain an intact copy of this copyright notice
 * and disclaimer.
 */

Address all questions regarding this license to:

Tom Wu
tjw@cs.Stanford.EDU

LICENSE.md (new file, 17 lines)
@@ -0,0 +1,17 @@

ISC License

Copyright (c) 2011, Arthur Britto, David Schwartz, Jed McCaleb, Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant.
Copyright (c) 2012-2020, the XRP Ledger developers.

Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

README.md (128 changed lines)
@@ -1,18 +1,112 @@

# XRP Ledger Side Chain Branch

## Warning

This is not the main branch of the XRP Ledger. This branch supports side chains
on the XRP ledger and integrates an implementation of side chain federators.
This is a developer prerelease and should not be used for production or to
transfer real value. Consider this "alpha" quality software. There will be bugs.
See "Status" for a fuller description.

The latest production code for the XRP ledger is on the "master" branch.

Until this branch is merged with the mainline branch, it will periodically be
rebased on that branch and force-pushed to GitHub.

## What are side chains?

Side chains are independent ledgers. They can have their own transaction types,
their own UNL list (or even their own consensus algorithm), and their own set of
other unique features (like a smart contracts layer). What's special about them
is that there is a way to transfer assets from the XRP ledger to the side chain,
and a way to return those assets back from the side chain to the XRP ledger.
Both XRP and issued assets may be exchanged.

The motivation for creating a side chain is to implement an idea that may not be
a great fit for the main XRP ledger, or may take a long time before such a
feature is adopted by the main XRP ledger. The advantage of a side chain over a
brand new ledger is that it allows the side chain to immediately use tokens with
real monetary value.

This implementation is meant to support side chains that are similar to the XRP
ledger and use the XRP ledger as the main chain. To develop a new side chain,
this code will first be forked, and then the features specific to the new chain
will be implemented.

## Status

All the functionality needed to build side chains should be complete. However,
it has not been well tested or polished.

In particular, all of the following are built:

* Cross chain transactions for both XRP and Issued Assets
* Refunds if transactions fail
* Allowing federators to rejoin a network
* Detecting and handling when federators fall too far behind in processing
  transactions
* A python library to easily create configuration files for testing side chains
  and spin up side chains on a local machine
* Python scripts to test side chains
* An interactive shell to explore side chain functionality

The biggest missing pieces are:

* Testing: While the functionality is there, it has just begun to be tested.
  There will be bugs. Even horrible and embarrassing bugs. Of course, this will
  improve as testing progresses.

* Tooling: There is a python library and an interactive shell that were built to
  help development. However, these tools are geared to run a test network on a
  local machine. They are not geared to new users or to production systems.
  Better tooling is coming.

* Documentation: There is documentation that describes the technical details of
  how side chains work, how to run the python scripts to set up side chains on
  the local machine, and the changes to the configuration files. However, like
  the tooling, this is not geared to new users or production systems. Better
  documentation is coming. In particular, good documentation for how to set up a
  production side chain - or even a test net that doesn't run on a local
  machine - needs to be written.

## Getting Started

See the instructions [here](docs/sidechain/GettingStarted.md) for how to
run an interactive shell that will spin up a set of federators on your local
machine and allow you to transfer assets between the main chain and a side
chain.

After setting things up and completing a cross chain transaction with the
"getting started" script above, it may be useful to browse some other
documentation:

* [This](bin/sidechain/python/README.md) document describes the scripts and
  python modules used to test and explore side chains on your local machine.

* [This](docs/sidechain/configFile.md) document describes the new stanzas in the
  config file needed for side chains.

* [This](docs/sidechain/federatorAccountSetup.md) document describes how to set
  up the federator accounts if not using the python scripts.

* [This](docs/sidechain/design.md) document describes the low-level details for
  how side chains work.

# The XRP Ledger

-The XRP Ledger is a decentralized cryptographic ledger powered by a network of peer-to-peer servers. The XRP Ledger uses a novel Byzantine Fault Tolerant consensus algorithm to settle and record transactions in a secure distributed database without a central operator.
+The [XRP Ledger](https://xrpl.org/) is a decentralized cryptographic ledger powered by a network of peer-to-peer nodes. The XRP Ledger uses a novel Byzantine Fault Tolerant consensus algorithm to settle and record transactions in a secure distributed database without a central operator.

## XRP
-XRP is a public, counterparty-free asset native to the XRP Ledger, and is designed to bridge the many different currencies in use worldwide. XRP is traded on the open-market and is available for anyone to access. The XRP Ledger was created in 2012 with a finite supply of 100 billion units of XRP. Its creators gifted 80 billion XRP to a company, now called [Ripple](https://ripple.com/), to develop the XRP Ledger and its ecosystem. Ripple uses XRP to help build the Internet of Value, ushering in a world in which money moves as fast and efficiently as information does today.
+[XRP](https://xrpl.org/xrp.html) is a public, counterparty-free asset native to the XRP Ledger, and is designed to bridge the many different currencies in use worldwide. XRP is traded on the open-market and is available for anyone to access. The XRP Ledger was created in 2012 with a finite supply of 100 billion units of XRP. Its creators gifted 80 billion XRP to a company, now called [Ripple](https://ripple.com/), to develop the XRP Ledger and its ecosystem. Ripple uses XRP to help build the Internet of Value, ushering in a world in which money moves as fast and efficiently as information does today.

## rippled
-The server software that powers the XRP Ledger is called `rippled` and is available in this repository under the permissive [ISC open-source license](LICENSE). The `rippled` server is written primarily in C++ and runs on a variety of platforms.
+The server software that powers the XRP Ledger is called `rippled` and is available in this repository under the permissive [ISC open-source license](LICENSE.md). The `rippled` server software is written primarily in C++ and runs on a variety of platforms. The `rippled` server software can run in several modes depending on its [configuration](https://xrpl.org/rippled-server-modes.html).

### Build from Source

* [Linux](Builds/linux/README.md)
-* [Mac](Builds/macos/README.md)
-* [Windows](Builds/VisualStudio2017/README.md)
+* [Mac](Builds/macos/README.md) (Not recommended for production)
+* [Windows](Builds/VisualStudio2017/README.md) (Not recommended for production)

## Key Features of the XRP Ledger

@@ -24,13 +118,13 @@ The server software that powers the XRP Ledger is called `rippled` and is availa

- **[Modern Features for Smart Contracts][]:** Features like Escrow, Checks, and Payment Channels support cutting-edge financial applications including the [Interledger Protocol](https://interledger.org/). This toolbox of advanced features comes with safety features like a process for amending the network and separate checks against invariant constraints.
- **[On-Ledger Decentralized Exchange][]:** In addition to all the features that make XRP useful on its own, the XRP Ledger also has a fully-functional accounting system for tracking and trading obligations denominated in any way users want, and an exchange built into the protocol. The XRP Ledger can settle long, cross-currency payment paths and exchanges of multiple currencies in atomic transactions, bridging gaps of trust with XRP.

-[Censorship-Resistant Transaction Processing]: https://developers.ripple.com/xrp-ledger-overview.html#censorship-resistant-transaction-processing
-[Fast, Efficient Consensus Algorithm]: https://developers.ripple.com/xrp-ledger-overview.html#fast-efficient-consensus-algorithm
-[Finite XRP Supply]: https://developers.ripple.com/xrp-ledger-overview.html#finite-xrp-supply
-[Responsible Software Governance]: https://developers.ripple.com/xrp-ledger-overview.html#responsible-software-governance
-[Secure, Adaptable Cryptography]: https://developers.ripple.com/xrp-ledger-overview.html#secure-adaptable-cryptography
-[Modern Features for Smart Contracts]: https://developers.ripple.com/xrp-ledger-overview.html#modern-features-for-smart-contracts
-[On-Ledger Decentralized Exchange]: https://developers.ripple.com/xrp-ledger-overview.html#on-ledger-decentralized-exchange
+[Censorship-Resistant Transaction Processing]: https://xrpl.org/xrp-ledger-overview.html#censorship-resistant-transaction-processing
+[Fast, Efficient Consensus Algorithm]: https://xrpl.org/xrp-ledger-overview.html#fast-efficient-consensus-algorithm
+[Finite XRP Supply]: https://xrpl.org/xrp-ledger-overview.html#finite-xrp-supply
+[Responsible Software Governance]: https://xrpl.org/xrp-ledger-overview.html#responsible-software-governance
+[Secure, Adaptable Cryptography]: https://xrpl.org/xrp-ledger-overview.html#secure-adaptable-cryptography
+[Modern Features for Smart Contracts]: https://xrpl.org/xrp-ledger-overview.html#modern-features-for-smart-contracts
+[On-Ledger Decentralized Exchange]: https://xrpl.org/xrp-ledger-overview.html#on-ledger-decentralized-exchange

## Source Code

@@ -53,9 +147,7 @@ git-subtree. See those directories' README files for more details.

## See Also

-* [XRP Ledger Dev Portal](https://developers.ripple.com/)
-* [XRP News](https://ripple.com/category/xrp/)
-* [Setup and Installation](https://developers.ripple.com/install-rippled.html)
-
-To learn about how Ripple is transforming global payments, visit
-<https://ripple.com/contact/>.
+* [XRP Ledger Dev Portal](https://xrpl.org/)
+* [Setup and Installation](https://xrpl.org/install-rippled.html)
+* [Source Documentation (Doxygen)](https://ripple.github.io/rippled)
+* [Learn more about the XRP Ledger (YouTube)](https://www.youtube.com/playlist?list=PLJQ55Tj1hIVZtJ_JdTvSum2qMTsedWkNi)

RELEASENOTES.md (277 changed lines)
@@ -2,13 +2,288 @@

![XRP](...)

-This document contains the release notes for `rippled`, the reference server implementation of the Ripple protocol. To learn more about how to build, run or update a `rippled` server, visit https://xrpl.org/install-rippled.html
+This document contains the release notes for `rippled`, the reference server implementation of the XRP Ledger protocol. To learn more about how to build, run or update a `rippled` server, visit https://xrpl.org/install-rippled.html

Have new ideas? Need help with setting up your node? Come visit us [here](https://github.com/ripple/rippled/issues/new/choose)

# Change log

- API version 2 will now return `signer_lists` in the root of the `account_info` response, no longer nested under `account_data`.

# Releases

## Version 1.8.5
This is the 1.8.5 release of `rippled`, the reference implementation of the XRP Ledger protocol. This release includes fixes and updates for stability and security, and improvements to build scripts. There are no user-facing API or protocol changes in this release.

### Bug Fixes

This release contains the following bug fixes and under-the-hood improvements:

- **Correct TaggedPointer move constructor:** Fixes a bug in unused code for the TaggedPointer class. The old code would fail if a caller explicitly tried to remove a child that is not actually part of the node. (227a12d)

- **Ensure protocol buffer prerequisites are present:** The build scripts and packages now properly handle the required Protobuf packages. Prior to this change, building on Ubuntu 21.10 Impish Indri would fail unless the `libprotoc-dev` package was installed. (e06465f)

- **Improve handling of endpoints during peer discovery.** This hardens and improves handling of incoming messages on the peer protocol. (289bc0a)

- **Run tests on updated linux distros:** Test builds now run on Rocky Linux 8, Fedora 34 and 35, Ubuntu 18, 20, and 22, and Debian 9, 10, and 11. (a9ee802)

- **Avoid dereferencing empty optional in ReportingETL:** Fixes a bug in Reporting Mode that could dereference an empty optional value when throwing an error. (cdc215d)

- **Correctly add GIT_COMMIT_HASH into version string:** When building the server from a non-tagged release, the build files now add the commit ID in a way that follows the semantic-versioning standard, and correctly handle the case where the commit hash ID cannot be retrieved. (d23d37f)

- **Update RocksDB to version 6.27.3:** Updates the version of RocksDB included in the server from 6.7.3 (which was released on 2020-03-18) to 6.27.3 (released 2021-12-10).


## Version 1.8.4
This is the 1.8.4 release of `rippled`, the reference implementation of the XRP Ledger protocol.

This release corrects a technical flaw introduced with 1.8.3 that may result in failures if the newly-introduced 'fast loading' is enabled. The release also adjusts default parameters used to configure the pathfinding engine to reduce resource usage.

### Bug Fixes
- **Adjust mutex scope in `walkMapParallel`**: This commit corrects a technical flaw introduced with commit [7c12f0135897361398917ad2c8cda888249d42ae] that would result in undefined behavior if the server operator configured their server to use the 'fast loading' mechanism introduced with 1.8.3.

- **Adjust pathfinding configuration defaults**: This commit adjusts the default configuration of the pathfinding engine, to account for the size of the XRP Ledger mainnet. Unless explicitly overridden, the changes mean that pathfinding operations will return fewer, shallower paths than previous releases.


## Version 1.8.3
This is the 1.8.3 release of `rippled`, the reference implementation of the XRP Ledger protocol.

This release implements changes that improve the syncing performance of peers on the network, adds countermeasures to several routines involving LZ4 to defend against CVE-2021-3520, corrects a minor technical flaw that would result in the server not using a cache for nodestore operations, and adjusts tunable values to optimize disk I/O.

### Summary of Issues
Recently, servers in the XRP Ledger network have been taking an increasingly long time to sync back to the network after restarting. This is one of several releases which will be made to improve on this issue.

### Bug Fixes

- **Parallel ledger loader & I/O performance improvements**: This commit makes several changes that, together, should decrease the time needed for a server to sync to the network. To make full use of this change, `rippled` needs to be using storage with high IOPS and operators need to explicitly enable this behavior by adding the following to their config file, under the `[node_db]` stanza:

      [node_db]
      ...
      fast_load=1

  Note that when 'fast loading' is enabled the server will not open RPC and WebSocket interfaces until after the initial load is completed. Because of this, it may appear unresponsive or down.

- **Detect CVE-2021-3520 when decompressing using LZ4**: This commit adds code to detect LZ4 payloads that may result in out-of-bounds memory accesses.

- **Provide sensible default values for nodestore cache**: The nodestore includes a built-in cache to reduce the disk I/O load but, by default, this cache was not initialized unless it was explicitly configured by the server operator. This commit introduces sensible defaults based on the server's configured node size.

- **Adjust the number of concurrent ledger data jobs**: Processing a large amount of data at once can effectively bottleneck a server's I/O subsystem. This commit helps optimize I/O performance by controlling how many jobs can concurrently process ledger data.

- **Two small SHAMapSync improvements**: This commit makes minor changes to optimize the way memory is used and control the amount of background I/O performed when attempting to fetch missing `SHAMap` nodes.

## Version 1.8.2
Ripple has released version 1.8.2 of rippled, the reference server implementation of the XRP Ledger protocol. This release addresses the full transaction queues and elevated transaction fees issue observed on the XRP ledger, and also provides some optimizations and small fixes to improve the server's performance overall.

### Summary of Issues
Recently, servers in the XRP Ledger network have had full transaction queues and transactions paying low fees have mostly not been able to be confirmed through the queue. After investigation, it was discovered that a large influx of transactions to the network caused it to raise the transaction costs to be proposed in the next ledger block, and defer transactions paying lower costs to later ledgers. The first part worked as designed, but deferred transactions were not being confirmed as the ledger had capacity to process them.

The root cause was that there were very many low-cost transactions that different servers in the network received in a different order due to incidental differences in timing or network topology, which caused validators to propose different sets of low-cost transactions from the queue. Since none of these transactions had support from a majority of validators, they were removed from the proposed transaction set. Normally, any transactions removed from a proposed transaction set are supposed to be retried in the next ledger, but servers attempted to put these deferred transactions into their transaction queues first, which had filled up. As a result, the deferred transactions were discarded, and the network was only able to confirm transactions that paid high costs.

### Bug Fixes

- **Address elevated transaction fees**: This change addresses the full queue problems in two ways. First, it puts deferred transactions directly into the open ledger, rather than the transaction queue. This reverts a subset of the changes from [ximinez@62127d7](https://github.com/ximinez/rippled/commit/62127d725d801641bfaa61dee7d88c95e48820c5). A transaction that is in the open ledger but doesn't get validated should stay in the open ledger so that it can be proposed again right away. Second, it changes the order in which transactions are pulled from the transaction queue to increase the overlap in servers' initial transaction consensus proposals. Like the old rules, transactions paying higher fee levels are selected first. Unlike the old rules, transactions paying the same fee level are ordered by transaction ID / hash ascending. (Previously, transactions paying the same fee level were unsorted, resulting in each server having a different order.)

- **Add ignore_default option to account_lines API**: This flag, if present, suppresses the output of incoming trust lines in the default state. This is primarily motivated by observing that users often have many unwanted incoming trust lines in a default state, which are not useful in the vast majority of cases. Being able to suppress those when doing `account_lines` saves bandwidth and resources. ([#3980](https://github.com/ripple/rippled/pull/3980))

- **Make I/O and prefetch worker threads configurable**: This commit adds the ability to specify **io_workers** and **prefetch_workers** in the config file, which can be used to specify the number of threads for processing raw inbound and outbound IO and to configure the number of threads for performing node store prefetching (see the config sketch after this list). ([#3994](https://github.com/ripple/rippled/pull/3994))

- **Enforce account RPC limits by objects traversed**: This changes the way the account_objects API method counts and limits the number of objects it returns. Instead of limiting results by the number of objects found, it counts by the number of objects traversed. Additionally, the default and maximum limits for non-admin connections have been decreased. This reduces the amount of work that one API call can do so that public API servers can share load more effectively. ([#4032](https://github.com/ripple/rippled/pull/4032))

- **Fix a crash on shutdown**: The NuDB backend class could throw an error in its destructor, resulting in a crash while the server was shutting down gracefully. This crash was harmless but resulted in false alarms and noise when tracking down other possible crashes. ([#4017](https://github.com/ripple/rippled/pull/4017))

- **Improve reporting of job queue in admin server_info**: The server_info command, when run with admin permissions, provides information about jobs in the server's job queue. This commit provides more descriptive names and more granular categories for many jobs that were previously all identified as "clientCommand". ([#4031](https://github.com/ripple/rippled/pull/4031))

- **Improve full & compressed inner node deserialization**: Remove a redundant copy operation from low-level SHAMap deserialization. ([#4004](https://github.com/ripple/rippled/pull/4004))

- **Reporting mode: only forward to P2P nodes that are synced**: Previously, reporting mode servers forwarded to any of their configured P2P nodes at random. This commit improves the selection so that it only chooses from P2P nodes that are fully synced with the network. ([#4028](https://github.com/ripple/rippled/pull/4028))

- **Improve handling of HTTP X-Forwarded-For and Forwarded headers**: Fixes the way the server handles IPv6 addresses in these HTTP headers. ([#4009](https://github.com/ripple/rippled/pull/4009), [#4030](https://github.com/ripple/rippled/pull/4030))

- **Other minor improvements to logging and Reporting Mode.**
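
A minimal sketch of the new worker-thread stanzas (assuming the usual
one-value-per-stanza `rippled.cfg` format; the stanza names come from the
release note above, and the values here are purely illustrative):

```
[io_workers]
6

[prefetch_workers]
4
```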
|
||||
|
||||
|
||||
## Version 1.8.0

Ripple has released version 1.8.0 of rippled, the reference server implementation of the XRP Ledger protocol. This release brings several features and improvements.

### New and Improved Features

- **Improve History Sharding**: Shards of ledger history are now assembled in a deterministic way so that any server can make a binary-identical shard for a given range of ledgers. This makes it possible to retrieve a shard from multiple sources in parallel, then verify its integrity by comparing checksums with peers' checksums for the same shard. Additionally, there's a new admin RPC command to import ledger history from the shard store, and the crawl_shards command has been expanded with more info. ([#2688](https://github.com/ripple/rippled/issues/2688), [#3726](https://github.com/ripple/rippled/pull/3726), [#3875](https://github.com/ripple/rippled/pull/3875))

- **New CheckCashMakesTrustLine Amendment**: If enabled, this amendment will change the CheckCash transaction type so that cashing a check for an issued token automatically creates a trust line to hold the token, similar to how purchasing a token in the decentralized exchange creates a trust line to hold the token. This change provides a way for issuers to send tokens to a user before that user has set up a trust line, but without forcing anyone to hold tokens they don't want. ([#3823](https://github.com/ripple/rippled/pull/3823))

- **Automatically determine the node size**: The server now selects an appropriate `[node_size]` configuration value by default if it is not explicitly specified. This parameter tunes various settings to the specs of the hardware that the server is running on, especially the amount of RAM and the number of CPU threads available in the system. Previously the server always chose the smallest value by default. (See the config sketch after this list.)

- **Improve transaction relaying logic**: Previously, the server relayed every transaction to all its peers (except the one that it received the transaction from). To reduce redundant messages, the server now relays transactions to a subset of peers using a randomized algorithm. Peers can determine whether there are transactions they have not seen and can request them from a peer that has them. It is expected that this feature will further reduce the bandwidth needed to operate a server.

- **Improve the Byzantine validator detector**: This expands the detection capabilities of the Byzantine validator detector. Previously, the server only monitored validators on its own UNL. Now, the server monitors for Byzantine behavior in all validations it sees.

- **Experimental tx stream with history for sidechains**: Adds an experimental subscription stream for sidechain federators to track messages on the main chain in canonical order. This stream is expected to change or be replaced in future versions as work on sidechains matures.

- **Support Debian 11 Bullseye**: This is the first release that is compatible with Debian Linux version 11.x, "Bullseye." The .deb packages now use absolute paths only, for compatibility with Bullseye's stricter package requirements. ([#3909](https://github.com/ripple/rippled/pull/3909))

- **Improve Cache Performance**: The server uses a new storage structure for several in-memory caches for greatly improved overall performance. The process of purging old data from these caches, called "sweeping", was time-consuming and blocked other important activities necessary for maintaining ledger state and participating in consensus. The new structure divides the caches into smaller partitions that can be swept in parallel.

- **Amendment default votes:** Introduces variable default votes per amendment. Previously the server always voted "yes" on any new amendment unless an admin explicitly configured a voting preference for that amendment. Now the server's default vote can be "yes" or "no" in the source code. This should allow a safer, more gradual roll-out of new amendments, as new releases can be configured to understand a new amendment but not vote for it by default. ([#3877](https://github.com/ripple/rippled/pull/3877))

- **More fields in the `validations` stream:** The `validations` subscription stream in the API now reports additional fields that were added to validation messages by the HardenedValidations amendment. These fields make it easier to detect misconfigurations such as multiple servers sharing a validation key pair. (See the subscription sketch after this list.) ([#3865](https://github.com/ripple/rippled/pull/3865))

- **Reporting mode supports `validations` and `manifests` streams:** In the API it is now possible to connect to these streams on a server running in Reporting Mode. Previously, attempting to subscribe to these streams on a reporting server failed with the error `reportingUnsupported`. ([#3905](https://github.com/ripple/rippled/pull/3905))
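
If you prefer to pin the node size rather than rely on the new auto-detection, the `[node_size]` stanza can still be set explicitly; for example:

```
[node_size]
huge
```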
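The extra validation fields show up in the ordinary subscription; for example, over a WebSocket connection:

```
{
  "command": "subscribe",
  "streams": ["validations"]
}
```
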
### Bug Fixes

- **Clarify the safety of NetClock::time_point arithmetic**: NetClock::rep is uint32_t, which can be error-prone when used with subtraction. Fixes [#3656](https://github.com/ripple/rippled/pull/3656).

- **Fix out-of-bounds reserve, and some minor optimizations**

- **Fix nested locks in ValidatorSite**

- **Fix clang warnings about copies vs references**

- **Fix reporting mode build issue**

- **Fix potential deadlock in Validator sites**

- **Use libsecp256k1 instead of OpenSSL for key derivation**: The deterministic key derivation code was still using calls to OpenSSL. This replaces the OpenSSL-based routines with new libsecp256k1-based implementations.

- **Improve NodeStore to ShardStore imports**: This runs the import process in a background thread while preventing online_delete from removing ledgers pending import.

- **Simplify SHAMapItem construction**: The existing class offered several constructors which were mostly unnecessary. This eliminates all existing constructors and introduces a single new one, taking a `Slice`. The internal buffer is switched from `std::vector` to `Buffer` to save a minimum of 8 bytes (plus the buffer slack that is inherent in `std::vector`) per SHAMapItem instance.

- **Redesign stoppable objects**: Stoppable is no longer an abstract base class, but a pattern, modeled after the well-understood `std::thread`. The immediate benefits are less code, less synchronization, less runtime work, and (subjectively) more readable code. The end goal is to adhere to RAII in our object design, and this is one necessary step on that path.
## Version 1.7.3

This is the 1.7.3 release of `rippled`, the reference implementation of the XRP Ledger protocol. This release addresses an OOB memory read identified by Guido Vranken, as well as an unrelated issue identified by the Ripple C++ team that could result in incorrect use of SLEs. Additionally, this version introduces the `NegativeUNL` amendment, which corresponds to the feature introduced with the 1.6.0 release.

## Action Required

If you operate an XRP Ledger server, then you should upgrade to version 1.7.3 at your earliest convenience to mitigate the issues addressed in this hotfix. If a sufficient majority of servers on the network upgrade, the `NegativeUNL` amendment may gain a majority, at which point a two week activation countdown will begin. If the `NegativeUNL` amendment activates, servers running versions of `rippled` prior to 1.7.3 will become [amendment blocked](https://xrpl.org/amendments.html#amendment-blocked).

### Bug Fixes

- **Improve SLE usage in check cashing**: Fixes a situation which could result in the incorrect use of SLEs.

- **Address OOB in base58 decoder**: Corrects a technical flaw that could allow an out-of-bounds memory read in the Base58 decoder.

- **Add `NegativeUNL` as a supported amendment**: Introduces an amendment for the Negative UNL feature introduced in `rippled` 1.6.0.
## Version 1.7.2

This is the 1.7.2 release of rippled, the reference server implementation of the XRP Ledger protocol. This release protects against the security issue [CVE-2021-3499](https://www.openssl.org/news/secadv/20210325.txt) affecting OpenSSL, adds an amendment to fix an issue with small offers not being properly removed from order books in some cases, and includes various other minor fixes.

Version 1.7.2 supersedes version 1.7.1 and adds fixes for more issues that were discovered during the release cycle.

## Action Required

This release introduces a new amendment to the XRP Ledger protocol: `fixRmSmallIncreasedQOffers`. This amendment is now open for voting according to the XRP Ledger's amendment process, which enables protocol changes following two weeks of >80% support from trusted validators.

If you operate an XRP Ledger server, then you should upgrade to version 1.7.2 within two weeks, to ensure service continuity. The exact time that protocol changes take effect depends on the voting decisions of the decentralized network.

If you operate an XRP Ledger validator, please learn more about this amendment so you can make informed decisions about how your validator votes. If you take no action, your validator will begin voting in favor of any new amendments as soon as it has been upgraded.

### Bug Fixes

- **fixRmSmallIncreasedQOffers Amendment:** This amendment fixes an issue where certain small offers could be left at the tip of an order book without being consumed or removed when appropriate, causing some payments and Offers to fail when they should have succeeded. [(#3827)](https://github.com/ripple/rippled/pull/3827)

- **Adjust OpenSSL defaults and mitigate CVE-2021-3499:** Prior to this fix, servers compiled against a vulnerable version of OpenSSL could have a crash triggered by a malicious network connection. This fix disables renegotiation support in OpenSSL so that the rippled server is not vulnerable to this bug regardless of the OpenSSL version used to compile the server. This also removes support for deprecated TLS versions 1.0 and 1.1 and ciphers that are not part of TLS 1.2. [(79e69da)](https://github.com/ripple/rippled/pull/3843/commits/79e69da3647019840dca49622621c3d88bc3883f)

- **Support HTTP health check in reporting mode:** Enables the Health Check special method when running the server in the new Reporting Mode introduced in 1.7.0. [(9c8cadd)](https://github.com/ripple/rippled/pull/3843/commits/9c8caddc5a197bdd642556f8beb14f06d53cdfd3)

- **Maintain compatibility for forwarded RPC responses:** Fixes a case in API responses from servers in Reporting Mode, where requests that were forwarded to a P2P-mode server would have the result field nested inside another result field. [(8579eb0)](https://github.com/ripple/rippled/pull/3843/commits/8579eb0c191005022dcb20641444ab471e277f67)

- **Add load_factor in reporting mode:** Adds a load_factor value to the server info method response when running the server in Reporting Mode so that the response is compatible with the format returned by servers in P2P mode (the default). [(430802c)](https://github.com/ripple/rippled/pull/3843/commits/430802c1cf6d4179f2249a30bfab9eff8e1fa748)

- **Properly encode metadata from tx RPC command:** Fixes a problem where transaction metadata in the tx API method response would be in JSON format even when binary was requested. [(7311629)](https://github.com/ripple/rippled/pull/3843/commits/73116297aa94c4acbfc74c2593d1aa2323b4cc52)

- **Updates to Windows builds:** When building on Windows, use vcpkg 2021 by default and add compatibility with MSVC 2019. [(36fe196)](https://github.com/ripple/rippled/pull/3843/commits/36fe1966c3cd37f668693b5d9910fab59c3f8b1f), [(30fd458)](https://github.com/ripple/rippled/pull/3843/commits/30fd45890b1d3d5f372a2091d1397b1e8e29d2ca)
## Version 1.7.0

Ripple has released version 1.7.0 of `rippled`, the reference server implementation of the XRP Ledger protocol.

This release [significantly improves memory usage](https://blog.ripplex.io/how-ripples-c-team-cut-rippleds-memory-footprint-down-to-size/), introduces a protocol amendment to allow out-of-order transaction execution with Tickets, and brings several other features and improvements.

## Upgrading (SPECIAL ACTION REQUIRED)

If you use the precompiled binaries of rippled that Ripple publishes for supported platforms, please note that Ripple has renewed the GPG key used to sign these packages.

If you are upgrading from a previous install, you must download and trust the renewed key. Automatic upgrades will not work until you have re-trusted the key.

### Red Hat Enterprise Linux / CentOS

Perform a [manual upgrade](https://xrpl.org/update-rippled-manually-on-centos-rhel.html). When prompted, confirm that the key's fingerprint matches the following example, then press `y` to accept the updated key:

```
$ sudo yum install rippled
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.web-ster.com
* epel: mirrors.syringanetworks.net
* extras: ftp.osuosl.org
* updates: mirrors.vcea.wsu.edu
ripple-nightly/signature | 650 B 00:00:00
Retrieving key from https://repos.ripple.com/repos/rippled-rpm/nightly/repodata/repomd.xml.key
Importing GPG key 0xCCAFD9A2:
 Userid : "TechOps Team at Ripple <techops+rippled@ripple.com>"
 Fingerprint: c001 0ec2 05b3 5a33 10dc 90de 395f 97ff ccaf d9a2
 From : https://repos.ripple.com/repos/rippled-rpm/nightly/repodata/repomd.xml.key
Is this ok [y/N]: y
```

### Ubuntu / Debian

Download and trust the updated public key, then perform a [manual upgrade](https://xrpl.org/update-rippled-manually-on-ubuntu.html) as follows:

```
wget -q -O - "https://repos.ripple.com/repos/api/gpg/key/public" | \
    sudo apt-key add -
sudo apt -y update
sudo apt -y install rippled
```
### New and Improved Features

- **Rework deferred node logic and async fetch behavior:** This change significantly improves ledger sync and fetch times while reducing memory consumption. (https://blog.ripplex.io/how-ripples-c-team-cut-rippleds-memory-footprint-down-to-size/)

- **New Ticket feature:** Tickets are a mechanism to prepare and send certain transactions outside of the normal sequence order. This version reworks and completes the implementation for Tickets after more than 6 years of development. This feature is now open for voting as the newly-introduced `TicketBatch` amendment, which replaces the previously-proposed `Tickets` amendment. The specification for this change can be found at: [xrp-community/standards-drafts#16](https://github.com/xrp-community/standards-drafts/issues/16)

- **Add Reporting Mode:** The server can be compiled to operate in a new mode that serves API requests for validated ledger data without connecting directly to the peer-to-peer network. (The server needs a gRPC connection to another server that is on the peer-to-peer network.) Reporting Mode servers can share access to ledger data via Apache Cassandra and PostgreSQL to more efficiently serve API requests while peer-to-peer servers specialize in broadcasting and processing transactions.

- **Optimize relaying of validation and proposal messages:** Servers typically receive multiple copies of any given message from directly connected peers; in particular, consensus proposal and validation messages are often relayed with extremely high redundancy. For servers with several peers, this can cause redundant work. This commit introduces experimental code that attempts to optimize the relaying of proposals and validations by allowing servers to instruct their peers to "squelch" delivery of selected proposals and validations. This change is considered experimental at this time and is disabled by default because the functioning of the consensus network depends on messages propagating with high reliability through the constantly-changing peer-to-peer network. Server operators who wish to test the optimized code can enable it in their server config file.

- **Report server domain to other servers:** Server operators now have the option to configure a domain name to be associated with their servers. The value is communicated to other servers and is also reported via the `server_info` API. The value is meant for third-party applications and tools to group servers together. For example, a tool that visualizes the network's topology can show how many servers are operated by different stakeholders. An operator can claim any domain, so tools should use the [xrp-ledger.toml file](https://xrpl.org/xrp-ledger-toml.html) to confirm that the domain also claims ownership of the servers.

- **Improve handling of peers that aren't synced:** When evaluating the fitness and usefulness of an outbound peer, the code would incorrectly calculate the amount of time that the peer spent in a non-useful state. This release fixes the calculation and makes the timeout values configurable by server operators. Two new options are introduced in the `[overlay]` stanza of the config file.

- **Persist API-configured voting settings:** Previously, the amendments that a server would vote in support of or against could be configured both via the configuration file and via the ["feature" API method](https://xrpl.org/feature.html). Changes made in the configuration file were only loaded at server startup; changes made via the command line took effect immediately but were not persisted across restarts. Starting with this release, changes made via the API are saved to the wallet.db database file so that they persist even if the server is restarted.

  Amendment voting in the config file is deprecated. The first time the server starts with v1.7.0 or higher, it reads any amendment voting settings in the config file and saves the settings to the database; on later restarts the server prints a warning message and ignores the `[amendments]` and `[veto_amendments]` stanzas of the config file.

  Going forward, use the [feature method](https://xrpl.org/feature.html) to view and configure amendment votes. If you want to use the config file to configure amendment votes, add a line to the `[rpc_startup]` stanza such as the following:

  ```
  [rpc_startup]
  { "command": "feature", "feature": "FlowSortStrands", "vetoed": true }
  ```

- **Support UNLs with future effective dates:** Updates the recommended validator list file format, allowing publishers to pre-publish the next recommended UNL while the current one is still valid. The server is still backwards compatible with the previous format, but the new format removes some uncertainty during the transition from one list to the next. Also, starting with this release, the server locks down and reports an error if it has no valid validator list. You can clear the error by loading a validator list from a file or by configuring a different UNL and restarting; the error also goes away on its own if the server is able to obtain a trusted validator list from the network (for example, after a network outage resolves itself).

- **Improve manifest relaying:** Servers now propagate change messages for validators' ephemeral public keys ("manifests") on a best-effort basis, to make manifests more available throughout the peer-to-peer network. Previously, the server would only relay manifests from validators it trusts locally, which made it difficult to detect and track validators that are not broadly trusted.

- **Implement ledger forward replay feature:** The server can now sync up to the network by "playing forward" transactions from a previously saved ledger until it catches up to the network. Compared with the default behavior of fetching the latest state and working backwards, forward replay can save time and bandwidth by reconstructing previous ledgers' state data rather than downloading the pre-calculated results from the network. As an added bonus, forward replay confirms that the rest of the network followed the same transaction processing rules as the local server when processing the intervening ledgers. This feature is considered experimental at this time and can be enabled with an option in the config file.

- **Make the transaction job queue limit adjustable:** The server uses a job queue to manage tasks, with limits on how many jobs of a particular type can be queued. The previously hard-coded limit associated with transactions is now configurable. Server operators can increase the number of transactions their server is able to queue, which may be useful if your server has a large memory capacity or you expect an influx of transactions. (https://github.com/ripple/rippled/issues/3556)

- **Add public_key to the Validator List method response:** The [Validator List method](https://xrpl.org/validator-list.html) can be used to request a recommended validator list from a rippled instance. The response now includes the public key of the requested list. (https://github.com/ripple/rippled/issues/3392)

- **Server operators can now configure maximum inbound and outbound peers separately:** The new `peers_in_max` and `peers_out_max` config options allow server operators to independently control the maximum number of inbound and outbound peers the server allows. (See the config sketch after this list.) [70c4ecc]

- **Improvements to shard downloading:** Previously the download_shard command could only load shards over HTTPS. Compressed shards can now also be downloaded over plain HTTP. The server fully checks the data for integrity and consistency, so the encryption is not strictly necessary. When initiating multiple shard downloads, the server now returns an error if there is not enough space to store all the shards currently being downloaded. (See the request sketch after this list.)

- **The manifest command is now public:** The manifest API method returns public information about a given validator. The required permissions have been changed so it is now part of the public API.
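
A config sketch for the new peer limits, assuming the same stanza form as other peer settings in `rippled.cfg` (the numbers are arbitrary examples):

```
[peers_in_max]
20

[peers_out_max]
10
```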
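With the HTTP change above, a `download_shard` admin request can now point at a plain-HTTP source; a sketch (the URL and shard index are placeholders):

```
{
  "command": "download_shard",
  "shards": [
    { "index": 1, "url": "http://example.com/shards/1.tar.lz4" }
  ]
}
```
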
### Bug Fixes

- **Implement sticky DNS resolution for validator list retrieval:** When attempting to load a validator list from a configured site, attempt to reuse the last IP that was successfully used if that IP is still present in the DNS response. (https://github.com/ripple/rippled/issues/3494)

- **Improve handling of RPC ledger_index argument:** You can now provide the `ledger_index` as a numeric string. This allows you to copy and use the numeric string `ledger_index` value returned by certain RPC commands. Previously you could only send native JSON numbers or shortcut strings such as "validated" in the `ledger_index` field. (See the example after this list.) (https://github.com/ripple/rippled/issues/3533)

- **Fix improper promotion of bool on return** [6968da1]

- **Fix ledger sequence on copynode** [ef53197]

- **Fix parsing of node public keys in `manifest` CLI:** The previous code attempted to validate the provided node public key using a function that assumes the encoded public key is for an account, which caused the parsing to fail. This commit fixes #3317 (https://github.com/ripple/rippled/issues/3317) by letting the caller specify the type of the public key being checked.

- **Fix idle peer timer:** Fixes a bug where a function to remove idle peers was called every second instead of every 4 seconds. #3754 (https://github.com/ripple/rippled/issues/3754)

- **Add database counters:** Fix a bug where DatabaseRotateImp::getBackend and ::sync utilized the writable backend without a lock. ::getBackend was replaced with ::getCounters.

- **Improve online_delete configuration and DB tuning** [6e9051e]

- **Improve handling of burst writes in NuDB database** (https://github.com/ripple/rippled/pull/3662)

- **Fix excessive logging after disabling history shards.** Previously, if you configured the server with a shard store, then disabled it, the server output excessive warning messages about the shard limit being exceeded.

- **Fixed some issues with negotiating link compression.** (https://github.com/ripple/rippled/pull/3705)

- **Fixed a potential thread deadlock with history sharding.** (https://github.com/ripple/rippled/pull/3683)

- **Various fixes to typos and comments, refactoring, and build system improvements**
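
As an example of the `ledger_index` change noted above, both of the following requests are now accepted (the index value is a placeholder):

```
{ "command": "ledger", "ledger_index": 62000000 }
{ "command": "ledger", "ledger_index": "62000000" }
```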
## Version 1.6.0

This release introduces several new features, including changes to the XRP Ledger's consensus mechanism to make it even more robust in adverse conditions, as well as numerous bug fixes and optimizations.

### New and Improved Features

- Initial implementation of Negative UNL functionality: This change can improve the liveness of the network during periods of network instability, by allowing servers to track which validators are temporarily offline and to adjust quorum calculations to match. This change requires an amendment, but the amendment is not in the **1.6.0** release. Ripple expects to run extensive public testing for Negative UNL functionality on the Devnet in the coming weeks. If public testing satisfies all requirements across security, reliability, stability, and performance, then the amendment could be included in a version 2.0 release. [[#3380](https://github.com/ripple/rippled/pull/3380)]

- Validation Hardening: This change allows servers to detect accidental misconfiguration of validators, as well as potentially Byzantine behavior by malicious validators. Servers can now log a message to notify operators if they detect a single validator issuing validations for multiple, incompatible ledger versions, or validations from multiple servers sharing a key. As part of this update, validators report the version of `rippled` they are using, as well as the hash of the last ledger they consider to be fully validated, in validation messages. [[#3291](https://github.com/ripple/rippled/pull/3291)]

- Software Upgrade Monitoring & Notification: After the `HardenedValidations` amendment is enabled and the validators begin reporting the versions of `rippled` they are running, a server can check how many of the validators on its UNL run a newer version of the software than itself. If more than 60% of a server's validators are running a newer version, the server writes a message to notify the operator to consider upgrading their software. [[#3447](https://github.com/ripple/rippled/pull/3447)]

- Link Compression: Beginning with **1.6.0**, server operators can enable support for compressing peer-to-peer messages. This can save bandwidth at a cost of higher CPU usage. This support is disabled by default and should prove useful for servers with a large number of peers. [[#3287](https://github.com/ripple/rippled/pull/3287)]

- Unconditionalize Amendments that were enabled in 2017: This change removes legacy code which the network has not used since 2017. This change limits the ability to [replay](https://github.com/xrp-community/standards-drafts/issues/14) ledgers that rely on the pre-2017 behavior. [[#3292](https://github.com/ripple/rippled/pull/3292)]

- New Health Check Method: Perform a simple HTTP request to get a summary of the health of the server: Healthy, Warning, or Critical. (See the sketch after this list.) [[#3365](https://github.com/ripple/rippled/pull/3365)]

- Start work on API version 2. Version 2 of the API will be part of a future release. The first breaking change will be to consolidate several closely related error messages that can occur when the server is not synced into a single "notSynced" error message. [[#3269](https://github.com/ripple/rippled/pull/3269)]

- Improved shard concurrency: Improvements to the shard engine have helped reduce the lock scope on all public functions, increasing the concurrency of the code. [[#3251](https://github.com/ripple/rippled/pull/3251)]

- Default Port: In the config file, the `[ips_fixed]` and `[ips]` stanzas now use the [IANA-assigned port](https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=2459) for the XRP Ledger protocol (2459) when no port is specified. The `connect` API method also uses the same port by default. (See the example after this list.) [[#2861](https://github.com/ripple/rippled/pull/2861)]

- Improve proposal and validation relaying. The peer-to-peer protocol always relays trusted proposals and validations (as part of the [consensus process](https://xrpl.org/consensus.html)), but only relays _untrusted_ proposals and validations in certain circumstances. This update adds configuration options so server operators can fine-tune how their server handles untrusted proposals and validations, and changes the default behavior to prioritize untrusted validations higher than untrusted proposals. [[#3391](https://github.com/ripple/rippled/pull/3391)]

- Various Build and CI Improvements, including updates to RocksDB 6.7.3 [[#3356](https://github.com/ripple/rippled/pull/3356)] and NuDB 2.0.3 [[#3437](https://github.com/ripple/rippled/pull/3437)], adjusting CMake settings so that rippled can be built as a submodule [[#3449](https://github.com/ripple/rippled/pull/3449)], and adding Travis CI settings for Ubuntu Bionic Beaver [[#3319](https://github.com/ripple/rippled/pull/3319)].

- Better documentation in the config file for online deletion and database tuning. [[#3429](https://github.com/ripple/rippled/pull/3429)]
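
A sketch of the health check mentioned above, assuming a server listening for HTTP on localhost port 5005 (adjust the host and port to match your config):

```
curl http://localhost:5005/health
```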
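For example, with the new default in effect, the second entry below connects on port 2459 (the hostnames are placeholders):

```
[ips_fixed]
peer1.example.com 51235
peer2.example.com
```
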
### Bug Fixes

- Fix the 14-day timer to enable an amendment so that it starts at the correct quorum size. [[#3396](https://github.com/ripple/rippled/pull/3396)]

- Improve the online delete backend lock, which addresses a possibility in the online delete process where one or more backend shared pointer references may become invalid during rotation. [[#3342](https://github.com/ripple/rippled/pull/3342)]

- Address an issue that can occur during the loading of validator tokens, where a deliberately malformed token could cause the server to crash during startup. [[#3326](https://github.com/ripple/rippled/pull/3326)]

- Add delivered amount to GetAccountTransactionHistory. The delivered_amount field was not being populated when calling GetAccountTransactionHistory. In contrast, the delivered_amount field was being populated when calling GetTransaction. This change populates delivered_amount in the response to GetAccountTransactionHistory, and adds a unit test to make sure the results delivered by GetTransaction and GetAccountTransactionHistory match each other. [[#3370](https://github.com/ripple/rippled/pull/3370)]

- Fix build issues for GCC 10. [[#3393](https://github.com/ripple/rippled/pull/3393)]

- Fix historical ledger acquisition: this fixes an issue where historical ledgers were acquired only since the last online deletion interval instead of the configured value to allow deletion. [[#3369](https://github.com/ripple/rippled/pull/3369)]

- Fix build issue with Docker. [[#3416](https://github.com/ripple/rippled/pull/3416)]

- Add Shard family. The App Family utilizes a single shared Tree Node and Full Below cache for all history shards. This can create a problem when acquiring a shard that shares an account state node that was recently cached from another shard operation. The new Shard Family class solves this issue by managing separate Tree Node and Full Below caches for each shard. [[#3448](https://github.com/ripple/rippled/pull/3448)]

- Amendment table clean up, which fixes a calculation issue with majority. [[#3428](https://github.com/ripple/rippled/pull/3428)]

- Add the `ledger_cleaner` command to rippled command line help. [[#3305](https://github.com/ripple/rippled/pull/3305)]

- Various typo and comment fixes.
## Version 1.5.0

The `rippled` 1.5.0 release introduces several improvements and new features, including support for a gRPC API, API versioning, UNL propagation via the peer network, new RPC methods `validator_info` and `manifest`, an augmented `submit` method, an improved `tx` method response, improved command line parsing, an improved handshake protocol, improved package building, and various other minor bug fixes and improvements.

SECURITY.md (new file)
@@ -0,0 +1,149 @@
### Operating an XRP Ledger server securely

For more details on operating an XRP Ledger server securely, please visit https://xrpl.org/manage-the-rippled-server.html.

# Security Policy

## Supported Versions

Software constantly evolves. In order to focus resources, we generally only accept vulnerability reports that affect recent and current versions of the software. We always accept reports for issues present in the **master**, **release** or **develop** branches, and in proposed, [open pull requests](https://github.com/ripple/rippled/pulls).

## Identifying and Reporting Vulnerabilities

We take security seriously and we do our best to ensure that all our releases are bug free. But we aren't perfect and sometimes things will slip through.

### Responsible Investigation

We urge you to examine our code carefully and responsibly, and to disclose any issues that you identify in a responsible fashion.

Responsible investigation includes, but isn't limited to, the following:

- Not performing tests on the main network. If testing is necessary, use the [Testnet or Devnet](https://xrpl.org/xrp-testnet-faucet.html).
- Not targeting physical security measures, or attempting to use social engineering, spam, distributed denial of service (DDOS) attacks, etc.
- Investigating bugs in a way that makes a reasonable, good faith effort not to be disruptive or harmful to the XRP Ledger and the broader ecosystem.

### Responsible Disclosure

If you discover a vulnerability or potential threat, or if you _think_ you have, please reach out by dropping an email using the contact information below.

Your report should include the following:

- Your contact information (typically, an email address);
- The description of the vulnerability;
- The attack scenario (if any);
- The steps to reproduce the vulnerability;
- Any other relevant details or artifacts, including code, scripts or patches.

In your mail, please describe the issue or the potential threat; if possible, please include a "repro" (code that can reproduce the issue) or describe the best way to reproduce and replicate the issue. Please make your report as extensive as possible.

For more information on responsible disclosure, please read this [Wikipedia article](https://en.wikipedia.org/wiki/Responsible_disclosure).

## Report Handling Process

Please report the bug directly to us and limit further disclosure. If you want to prove that you knew the bug as of a given time, consider using a cryptographic precommitment: hash the content of your report and publish the hash on a medium of your choice (e.g. on Twitter or as a memo in a transaction) as "proof" that you had written the text at a given point in time.
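For example, one way to produce such a hash (the filename is a placeholder):

```
shasum -a 256 report.txt
```
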
Once we receive a report, we:

1. Assign two people to independently evaluate the report;
2. Consider their recommendations;
3. If action is necessary, formulate a plan to address the issue;
4. Communicate privately with the reporter to explain our plan;
5. Prepare, test and release a version which fixes the issue; and
6. Announce the vulnerability publicly.

We will triage and respond to your disclosure within 24 hours. Beyond that, we will work to analyze the issue in more detail, and to formulate, develop and test a fix.

While we commit to responding within 24 hours of your initial report with our triage assessment, we cannot guarantee a response time for the remaining steps. We will communicate with you throughout this process, letting you know where we are and keeping you updated on the timeframe.

## Bug Bounty Program

[Ripple](https://ripple.com) is generously sponsoring a bug bounty program for vulnerabilities in [`rippled`](https://github.com/ripple/rippled) (and other related projects, like [`ripple-lib`](https://github.com/ripple/ripple-lib)).

This program allows us to recognize and reward individuals or groups that identify and report bugs. In summary, in order to qualify for a bounty, the bug must be:

1. **In scope**. Only bugs in software under the scope of the program qualify. Currently, that means `rippled` and `ripple-lib`.
2. **Relevant**. A security issue, posing a danger to user funds, privacy or the operation of the XRP Ledger.
3. **Original and previously unknown**. Bugs that are already known and discussed in public do not qualify. Previously reported bugs, even if publicly unknown, are not eligible.
4. **Specific**. We welcome general security advice or recommendations, but we cannot pay bounties for that.
5. **Fixable**. There has to be something we can do to permanently fix the problem. Note that bugs in other people’s software may still qualify in some cases. For example, if you find a bug in a library that we use which compromises the security of software that is in scope and we can get it fixed, you may qualify for a bounty.
6. **Unused**. If you use the exploit to attack the XRP Ledger, you do not qualify for a bounty. If you report a vulnerability used in an ongoing or past attack and there is specific, concrete evidence that suggests you are the attacker, we reserve the right not to pay a bounty.

The amount paid varies dramatically. Vulnerabilities that are harmless on their own, but could form part of a critical exploit will usually receive a bounty. Full-blown exploits can receive much higher bounties. Please don’t hold back partial vulnerabilities while trying to construct a full-blown exploit. We will pay a bounty to anyone who reports a complete chain of vulnerabilities even if they have reported each component of the exploit separately and those vulnerabilities have been fixed in the meantime. However, to qualify for the full bounty, you must have been the first to report each of the partial exploits.

### Contacting Us

To report a qualifying bug, please send a detailed report to:

|Email Address|bugs@ripple.com                                       |
|:-----------:|:----------------------------------------------------|
|Short Key ID | `0xC57929BE`                                         |
|Long Key ID  | `0xCD49A0AFC57929BE`                                 |
|Fingerprint  | `24E6 3B02 37E0 FA9C 5E96 8974 CD49 A0AF C579 29BE`  |

The full PGP key for this address, which is also available on several key servers (e.g. on [keys.gnupg.net](https://keys.gnupg.net)), is:
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFUwGHYBEAC0wpGpBPkd8W1UdQjg9+cEFzeIEJRaoZoeuJD8mofwI5Ejnjdt
kCpUYEDal0ygkKobu8SzOoATcDl18iCrScX39VpTm96vISFZMhmOryYCIp4QLJNN
4HKc2ZdBj6W4igNi6vj5Qo6JMyGpLY2mz4CZskbt0TNuUxWrGood+UrCzpY8x7/N
a93fcvNw+prgCr0rCH3hAPmAFfsOBbtGzNnmq7xf3jg5r4Z4sDiNIF1X1y53DAfV
rWDx49IKsuCEJfPMp1MnBSvDvLaQ2hKXs+cOpx1BCZgHn3skouEUxxgqbtTzBLt1
xXpmuijsaltWngPnGO7mOAzbpZSdBm82/Emrk9bPMuD0QaLQjWr7HkTSUs6ZsKt4
7CLPdWqxyY/QVw9UaxeHEtWGQGMIQGgVJGh1fjtUr5O1sC9z9jXcQ0HuIHnRCTls
GP7hklJmfH5V4SyAJQ06/hLuEhUJ7dn+BlqCsT0tLmYTgZYNzNcLHcqBFMEZHvHw
9GENMx/tDXgajKql4bJnzuTK0iGU/YepanANLd1JHECJ4jzTtmKOus9SOGlB2/l1
0t0ADDYAS3eqOdOcUvo9ElSLCI5vSVHhShSte/n2FMWU+kMUboTUisEG8CgQnrng
g2CvvQvqDkeOtZeqMcC7HdiZS0q3LJUWtwA/ViwxrVlBDCxiTUXCotyBWwARAQAB
tDBSaXBwbGUgTGFicyBCdWcgQm91bnR5IFByb2dyYW0gPGJ1Z3NAcmlwcGxlLmNv
bT6JAjcEEwEKACEFAlUwGHYCGwMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ
zUmgr8V5Kb6R0g//SwY/mVJY59k87iL26/KayauSoOcz7xjcST26l4ZHVVX85gOY
HYZl8k0+m8X3zxeYm9a3QAoAml8sfoaFRFQP8ynnefRrLUPaZ2MjbJ0SACMwZNef
T6o7Mi8LBAaiNZdYVyIfX1oM6YXtqYkuJdav6ZCyvVYqc9OvMJPY2ZzJYuI/ZtvQ
/lTndxCeg9ALNX/iezOLGdfMpf4HuIFVwcPPlwGi+HDlB9/bggDEHC8z434SXVFc
aQatXAPcDkjMUweU7y0CZtYEj00HITd4pSX6MqGiHrxlDZTqinCOPs1Ieqp7qufs
MzlM6irLGucxj1+wa16ieyYvEtGaPIsksUKkywx0O7cf8N2qKg+eIkUk6O0Uc6eO
CszizmiXIXy4O6OiLlVHGKkXHMSW9Nwe9GE95O8G9WR8OZCEuDv+mHPAutO+IjdP
PDAAUvy+3XnkceO+HGWRpVvJZfFP2YH4A33InFL5yqlJmSoR/yVingGLxk55bZDM
+HYGR3VeMb8Xj1rf/02qERsZyccMCFdAvKDbTwmvglyHdVLu5sPmktxbBYiemfyJ
qxMxmYXCc9S0hWrWZW7edktBa9NpE58z1mx+hRIrDNbS2sDHrib9PULYCySyVYcF
P+PWEe1CAS5jqkR2ker5td2/pHNnJIycynBEs7l6zbc9fu+nktFJz0q2B+GJAhwE
EAEKAAYFAlUwGaQACgkQ+tiY1qQ2QkjMFw//f2hNY3BPNe+1qbhzumMDCnbTnGif
kLuAGl9OKt81VHG1f6RnaGiLpR696+6Ja45KzH15cQ5JJl5Bgs1YkR/noTGX8IAD
c70eNwiFu8JXTaaeeJrsmFkF9Tueufb364risYkvPP8tNUD3InBFEZT3WN7JKwix
coD4/BwekUwOZVDd/uCFEyhlhZsROxdKNisNo3VtAq2s+3tIBAmTrriFUl0K+ZC5
zgavcpnPN57zMtW9aK+VO3wXqAKYLYmtgxkVzSLUZt2M7JuwOaAdyuYWAneKZPCu
1AXkmyo+d84sd5mZaKOr5xArAFiNMWPUcZL4rkS1Fq4dKtGAqzzR7a7hWtA5o27T
6vynuxZ1n0PPh0er2O/zF4znIjm5RhTlfjp/VmhZdQfpulFEQ/dMxxGkQ9z5IYbX
mTlSDbCSb+FMsanRBJ7Drp5EmBIudVGY6SHI5Re1RQiEh7GoDfUMUwZO+TVDII5R
Ra7WyuimYleJgDo/+7HyfuIyGDaUCVj6pwVtYtYIdOI3tTw1R1Mr0V8yaNVnJghL
CHcEJQL+YHSmiMM3ySil3O6tm1By6lFz8bVe/rgG/5uklQrnjMR37jYboi1orCC4
yeIoQeV0ItlxeTyBwYIV/o1DBNxDevTZvJabC93WiGLw2XFjpZ0q/9+zI2rJUZJh
qxmKP+D4e27lCI65Ag0EVTAYdgEQAMvttYNqeRNBRpSX8fk45WVIV8Fb21fWdwk6
2SkZnJURbiC0LxQnOi7wrtii7DeFZtwM2kFHihS1VHekBnIKKZQSgGoKuFAQMGyu
a426H4ZsSmA9Ufd7kRbvdtEcp7/RTAanhrSL4lkBhaKJrXlxBJ27o3nd7/rh7r3a
OszbPY6DJ5bWClX3KooPTDl/RF2lHn+fweFk58UvuunHIyo4BWJUdilSXIjLun+P
Qaik4ZAsZVwNhdNz05d+vtai4AwbYoO7adboMLRkYaXSQwGytkm+fM6r7OpXHYuS
cR4zB/OK5hxCVEpWfiwN71N2NMvnEMaWd/9uhqxJzyvYgkVUXV9274TUe16pzXnW
ZLfmitjwc91e7mJBBfKNenDdhaLEIlDRwKTLj7k58f9srpMnyZFacntu5pUMNblB
cjXwWxz5ZaQikLnKYhIvrIEwtWPyjqOzNXNvYfZamve/LJ8HmWGCKao3QHoAIDvB
9XBxrDyTJDpxbog6Qu4SY8AdgVlan6c/PsLDc7EUegeYiNTzsOK+eq3G5/E92eIu
TsUXlciypFcRm1q8vLRr+HYYe2mJDo4GetB1zLkAFBcYJm/x9iJQbu0hn5NxJvZO
R0Y5nOJQdyi+muJzKYwhkuzaOlswzqVXkq/7+QCjg7QsycdcwDjiQh3OrsgXHrwl
M7gyafL9ABEBAAGJAh8EGAEKAAkFAlUwGHYCGwwACgkQzUmgr8V5Kb50BxAAhj9T
TwmNrgRldTHszj+Qc+v8RWqV6j+R+zc0cn5XlUa6XFaXI1OFFg71H4dhCPEiYeN0
IrnocyMNvCol+eKIlPKbPTmoixjQ4udPTR1DC1Bx1MyW5FqOrsgBl5t0e1VwEViM
NspSStxu5Hsr6oWz2GD48lXZWJOgoL1RLs+uxjcyjySD/em2fOKASwchYmI+ezRv
plfhAFIMKTSCN2pgVTEOaaz13M0U+MoprThqF1LWzkGkkC7n/1V1f5tn83BWiagG
2N2Q4tHLfyouzMUKnX28kQ9sXfxwmYb2sA9FNIgxy+TdKU2ofLxivoWT8zS189z/
Yj9fErmiMjns2FzEDX+bipAw55X4D/RsaFgC+2x2PDbxeQh6JalRA2Wjq32Ouubx
u+I4QhEDJIcVwt9x6LPDuos1F+M5QW0AiUhKrZJ17UrxOtaquh/nPUL9T3l2qPUn
1ChrZEEEhHO6vA8+jn0+cV9n5xEz30Str9iHnDQ5QyR5LyV4UBPgTdWyQzNVKA69
KsSr9lbHEtQFRzGuBKwt6UlSFv9vPWWJkJit5XDKAlcKuGXj0J8OlltToocGElkF
+gEBZfoOWi/IBjRLrFW2cT3p36DTR5O1Ud/1DLnWRqgWNBLrbs2/KMKE6EnHttyD
7Tz8SQkuxltX/yBXMV3Ddy0t6nWV2SZEfuxJAQI=
=spg4
-----END PGP PUBLIC KEY BLOCK-----
```
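
Alternatively, one way to fetch the key from a key server, assuming `gpg` is installed:

```
gpg --keyserver keys.gnupg.net --recv-keys 0xCD49A0AFC57929BE
```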
appveyor.yml (deleted file)
@@ -1,107 +0,0 @@
# Set environment variables.
environment:

  # We bundle up only the parts of boost and openssl we need so
  # that it's a small download. We also use appveyor's free cache, avoiding fees
  # downloading from S3 each time.
  # TODO: script to create this package.
  RIPPLED_DEPS_PATH: rippled_deps17.05
  RIPPLED_DEPS_BASE_URL: https://ripple.github.io/Downloads/appveyor
  RIPPLED_OPENSSL: rippled_deps.openssl.1.0.2j.zip
  RIPPLED_BOOST: rippled_deps.boost.1.70.zip
  RIPPLED_BOOST_STAGE: rippled_deps.boost.stage.1.70.zip

  # CMake honors these environment variables, setting the include/lib paths.
  BOOST_ROOT: C:/%RIPPLED_DEPS_PATH%/boost
  OPENSSL_ROOT: C:/%RIPPLED_DEPS_PATH%/openssl
  NIH_CACHE_ROOT: C:/%RIPPLED_DEPS_PATH%/

  # We've had trouble with AppVeyor apparently not having a stack as large
  # as the *nix CI platforms. AppVeyor support suggested that we try
  # GCE VMs. The following line is supposed to enable that VM type.
  appveyor_build_worker_cloud: gce

  matrix:
    - build: cmake
      target: msvc.debug
      buildconfig: Debug

os: Visual Studio 2017

# At the end of each successful build we cache this directory.
# https://www.appveyor.com/docs/build-cache/
# Resulting archive should not exceed 100 MB.
cache:
  - 'C:\%RIPPLED_DEPS_PATH%'

# This means we'll download a zip of the branch we want, rather than the full
# history.
shallow_clone: true

install:
  # Download dependencies if appveyor didn't restore them from the cache.
  # Use 7zip to unzip.
  - ps: |
      if (-not(Test-Path "C:/$env:RIPPLED_DEPS_PATH")) {
        $files = @(
          "$env:RIPPLED_BOOST",
          "$env:RIPPLED_BOOST_STAGE",
          "$env:RIPPLED_OPENSSL"
        )
        For ($i=0; $i -lt $files.Length; $i++) {
          $file = $files[$i]
          $url = "$env:RIPPLED_DEPS_BASE_URL/$file"
          echo "Download $file from $url"
          Start-FileDownload "$url"
          7z x "$file" -o"C:\$env:RIPPLED_DEPS_PATH" -y > $null
          if ($LastExitCode -ne 0) { throw "7z failed" }
        }
      }
      else
      {
        "Dependencies are in cache"
        ls "C:/$env:RIPPLED_DEPS_PATH"
      }

  # Newer DEPS include a versions file.
  # Dump it so we can verify correct behavior.
  - ps: |
      cat "C:/$env:RIPPLED_DEPS_PATH/version*.txt"

# TODO: This is giving me grief
# artifacts:
#   # Save rippled.exe in the cloud after each build.
#   - path: "build\\rippled.exe"

build_script:
  # We set the environment variables needed to put compilers on the PATH.
  - '"%VS140COMNTOOLS%../../VC/vcvarsall.bat" x86_amd64'
  # Show which version of the compiler we are using.
  - cl
  - ps: |
      # Build with cmake
      cmake --version
      $cmake_target="$($env:target).ci"
      "$cmake_target"
      New-Item -ItemType Directory -Force -Path "build/$cmake_target"
      Push-Location "build/$cmake_target"
      cmake -G"Visual Studio 15 2017 Win64" ../..
      if ($LastExitCode -ne 0) { throw "CMake failed" }
      cmake --build . --config $env:buildconfig --parallel 3
      if ($LastExitCode -ne 0) { throw "CMake build failed" }
      Pop-Location

after_build:
  - ps: |
      $exe="build/$cmake_target/$env:buildconfig/rippled"
      "Exe is at $exe"

test_script:
  - ps: |
      & {
        # Run the rippled unit tests
        & $exe --unittest --unittest-log --unittest-jobs 2
        # https://connect.microsoft.com/PowerShell/feedback/details/751703/option-to-stop-script-if-command-line-exe-fails
        if ($LastExitCode -ne 0) { throw "Unit tests failed" }
      }
@@ -24,7 +24,6 @@ declare -a manual_tests=(
    'ripple.consensus.ByzantineFailureSim'
    'ripple.consensus.DistributedValidators'
    'ripple.consensus.ScaleFreeSim'
    'ripple.ripple_data.digest'
    'ripple.tx.CrossingLimits'
    'ripple.tx.FindOversizeCross'
    'ripple.tx.Offer_manual'
@@ -95,8 +95,32 @@ fi
mkdir -p "build/${BUILD_DIR}"
pushd "build/${BUILD_DIR}"

# cleanup possible artifacts
rm -fv CMakeFiles/CMakeOutput.log CMakeFiles/CMakeError.log
# Clean up NIH directories which should be git repos, but aren't
for nih_path in ${NIH_CACHE_ROOT}/*/*/*/src ${NIH_CACHE_ROOT}/*/*/src
do
    for dir in lz4 snappy rocksdb
    do
        if [ -e ${nih_path}/${dir} -a \! -e ${nih_path}/${dir}/.git ]
        then
            ls -la ${nih_path}/${dir}*
            rm -rfv ${nih_path}/${dir}*
        fi
    done
done

# generate
${time} cmake ../.. -DCMAKE_BUILD_TYPE=${BUILD_TYPE} ${CMAKE_EXTRA_ARGS}
# Display the cmake output, to help with debugging if something fails
for file in CMakeOutput.log CMakeError.log
do
    if [ -f CMakeFiles/${file} ]
    then
        ls -l CMakeFiles/${file}
        cat CMakeFiles/${file}
    fi
done
# build
export DESTDIR=$(pwd)/_INSTALLED_

@@ -105,7 +129,7 @@ ${time} eval cmake --build . ${BUILDARGS} -- ${BUILDTOOLARGS}
if [[ ${TARGET} == "docs" ]]; then
    ## mimic the standard test output for docs build
    ## to make controlling processes like jenkins happy
    if [ -f html_doc/index.html ]; then
    if [ -f docs/html/index.html ]; then
        echo "1 case, 1 test total, 0 failures"
    else
        echo "1 case, 1 test total, 1 failures"

@@ -136,15 +160,15 @@ else
    # ORDER matters here...sorted in approximately
    # descending execution time (longest running tests at top)
    declare -a manual_tests=(
        'ripple.ripple_data.digest'
        'ripple.ripple_data.reduce_relay_simulate'
        'ripple.tx.Offer_manual'
        'ripple.app.PayStrandAllPairs'
        'ripple.tx.CrossingLimits'
        'ripple.tx.PlumpBook'
        'ripple.app.Flow_manual'
        'ripple.tx.OversizeMeta'
        'ripple.consensus.DistributedValidators'
        'ripple.app.NoRippleCheckLimits'
        'ripple.ripple_data.compression'
        'ripple.NodeStore.Timing'
        'ripple.consensus.ByzantineFailureSim'
        'beast.chrono.abstract_clock'
@@ -2,6 +2,21 @@
# some cached files create churn, so save them here for
# later restoration before packing the cache
set -eux
clean_cache="travis_clean_cache"
if [[ ! ( "${TRAVIS_JOB_NAME}" =~ "windows" || \
          "${TRAVIS_JOB_NAME}" =~ "prereq-keep" ) ]] && \
   ( [[ "${TRAVIS_COMMIT_MESSAGE}" =~ "${clean_cache}" ]] || \
     ( [[ -v TRAVIS_PULL_REQUEST_SHA && \
          "${TRAVIS_PULL_REQUEST_SHA}" != "" ]] && \
       git log -1 "${TRAVIS_PULL_REQUEST_SHA}" | grep -cq "${clean_cache}" -
     )
   )
then
    find ${TRAVIS_HOME}/_cache -maxdepth 2 -type d
    rm -rf ${TRAVIS_HOME}/_cache
    mkdir -p ${TRAVIS_HOME}/_cache
fi

pushd ${TRAVIS_HOME}
if [ -f cache_ignore.tar ] ; then
    rm -f cache_ignore.tar
@@ -1,5 +1,12 @@
#!/usr/bin/env bash

# This script generates information about your rippled installation
# and system. It can be used to help debug issues that you may face
# in your installation. While this script endeavors to not display any
# sensitive information, it is recommended that you read the output
# before sharing with any third parties.

rippled_exe=/opt/ripple/bin/rippled
conf_file=/etc/opt/ripple/rippled.cfg
@@ -20,7 +20,7 @@ else
    if [[ -d "${VCPKG_DIR}" ]] ; then
        rm -rf "${VCPKG_DIR}"
    fi
    git clone --branch 2019.12 https://github.com/Microsoft/vcpkg.git ${VCPKG_DIR}
    git clone --branch 2021.04.30 https://github.com/Microsoft/vcpkg.git ${VCPKG_DIR}
    pushd ${VCPKG_DIR}
    BSARGS=()
    if [[ "$(uname)" == "Darwin" ]] ; then

@@ -22,10 +22,6 @@ while read line ; do
        fi
    fi
done <<EOL
"C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64 -vcvars_ver=14.24
"C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64 -vcvars_ver=14.24
"C:/Program Files (x86)/Microsoft Visual Studio/2017/BuildTools/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64 -vcvars_ver=14.24
"C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64 -vcvars_ver=14.24
"C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2017/BuildTools/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64

@@ -36,8 +32,6 @@ done <<EOL
"C:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/vcvarsall.bat" amd64
EOL
# TODO: update the list above as needed to support newer versions of msvc tools
# MSVC 19.25.28610.4 causes the rocksdb's compilation to fail, for VS2019, we will choose 14.24 VCTools for now
# TODO: Delete lines with -vcvars_ver=14.24 once rocksdb becomes compatible with newer compiler version.

rm -f getenv.bat
bin/sidechain/python/README.md (new file)
@@ -0,0 +1,183 @@
## Introduction

This directory contains python scripts to test and explore side chains.

See the instructions [here](docs/sidechain/GettingStarted.md) for how to install the necessary dependencies and run an interactive shell that will spin up a set of federators on your local machine and allow you to transfer assets between the main chain and a side chain.

For all these scripts, make sure the `RIPPLED_MAINCHAIN_EXE`, `RIPPLED_SIDECHAIN_EXE`, and `RIPPLED_SIDECHAIN_CFG_DIR` environment variables are correctly set, and the side chain configuration files exist. Also make sure the python dependencies are installed and the virtual environment is activated.

Note: the unit tests do not use the configuration files, so `RIPPLED_SIDECHAIN_CFG_DIR` is not needed for that script.

## Unit tests

The "tests" directory contains a simple unit test. It takes several minutes to run, and will create the necessary configuration files, start a test main chain in standalone mode and a test side chain with 5 federators, and do some simple cross chain transactions. Side chains do not yet have extensive tests; testing is being actively worked on.

To run the tests, change directories to the `bin/sidechain/python/tests` directory and type:
```
pytest
```

To capture logging information and to set the log level (to help with debugging), type this instead:
```
pytest --log-file=log.txt --log-file-level=info
```

The response should be something like the following:
```
============================= test session starts ==============================
platform linux -- Python 3.8.5, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/swd/projs/ripple/mine/bin/sidechain/python/tests
collected 1 item

simple_xchain_transfer_test.py .                                         [100%]

======================== 1 passed in 215.20s (0:03:35) =========================
```
## Scripts

### riplrepl.py

This is an interactive shell for experimenting with side chains. It will spin up a test main chain running in standalone mode, and a test side chain with five federators - all running on the local machine. There are commands to make payments within a chain, make cross chain payments, check balances, check server info, and check federator info. There is a simple "help" system, but more documentation is needed for this tool (or, more likely, we need to replace it with some web front end).

Note: a "repl" is another name for an interactive shell. It stands for "read-eval-print-loop". It is pronounced "rep-ul".

### create_config_file.py

This is a script used to create the config files needed to run a test side chain on your local machine. To run this, make sure rippled is built, and the `RIPPLED_MAINCHAIN_EXE`, `RIPPLED_SIDECHAIN_EXE`, and `RIPPLED_SIDECHAIN_CFG_DIR` environment variables are correctly set. Also make sure the python dependencies are installed and the virtual environment is activated. Running this will create config files in the directory specified by the `RIPPLED_SIDECHAIN_CFG_DIR` environment variable.

### log_analyzer.py

This is a script used to take structured log files and convert them to json for easier debugging.

## Python modules

### sidechain.py

A python module that can be used to write python scripts to interact with side chains. This is used to write unit tests and the interactive shell. To write a standalone script, look at how the tests are written in `test/simple_xchain_transfer_test.py`. The idea is to call `sidechain._multinode_with_callback`, which sets up the two chains, and place your code in the callback. For example:
```
def multinode_test(params: Params):
    def callback(mc_app: App, sc_app: App):
        my_function(mc_app, sc_app, params)

    sidechain._multinode_with_callback(params,
                                       callback,
                                       setup_user_accounts=False)
```
The functions `sidechain.main_to_side_transfer` and
|
||||
`sidechain.side_to_main_transfer` can be used as convenience functions to initiate
|
||||
cross chain transfers. Of course, these transactions can also be initiated with
|
||||
a payment to the door account with the memo data set to the destination account
|
||||
on the destination chain (which is what those convenience functions do under the
|
||||
hood).
|
||||
|
||||
Transactions execute asynchonously. Use the function
|
||||
`test_utils.wait_for_balance_change` to ensure a transaction has completed.
|
||||
|
||||
### transaction.py
|
||||
|
||||
A python module for transactions. Currently there are transactions for:
|
||||
|
||||
* Payment
|
||||
* Trust (trust set)
|
||||
* SetRegularKey
|
||||
* SignerLisetSet
|
||||
* AccountSet
|
||||
* Offer
|
||||
* Ticket
|
||||
* Hook (experimental - useful paying with the hook amendment from XRPL Labs).
|
||||
|
||||
Typically, a transaction is submitted through the call operator on an `App` object. For example, to make a payment from the account `alice` to the account `bob` for 500 XRP:
|
||||
```
|
||||
mc_app(Payment(account=alice, dst=bob, amt=XRP(500)))
|
||||
```
|
||||
(where mc_app is an App object representing the main chain).
|
||||
|
||||
### command.py
|
||||
|
||||
A python module for RPC commands. Currently there are commands for:
|
||||
* PathFind
|
||||
* Sign
|
||||
* LedgerAccept (for standalone mode)
|
||||
* Stop
|
||||
* LogLevel
|
||||
* WalletPropose
|
||||
* ValidationCreate
|
||||
* AccountInfo
|
||||
* AccountLines
|
||||
* AccountTx
|
||||
* BookOffers
|
||||
* BookSubscription
|
||||
* ServerInfo
|
||||
* FederatorInfo
|
||||
* Subscribe
|
||||
|
||||
### common.py
|
||||
|
||||
Python module for common ledger objects, including:
|
||||
* Account
|
||||
* Asset
|
||||
* Path
|
||||
* Pathlist
|
||||
|
||||
### app.py
|
||||
|
||||
Python module for an application. An application is responsible for local
|
||||
network (or single server) and an address book that maps aliases to accounts.
|
||||
|
||||
### config_file.py
|
||||
|
||||
Python module representing a config file that is read from disk.
|
||||
|
||||
### interactive.py
|
||||
|
||||
Python module with the implementation of the RiplRepl interactive shell.
|
||||
|
||||
### ripple_client.py
|
||||
|
||||
A python module representing a rippled server.
|
||||
|
||||
### testnet.py
|
||||
|
||||
A python module representing a rippled testnet running on the local machine.
|
||||
|
||||
## Other
|
||||
### requirements.txt
|
||||
|
||||
These are the python dependencies needed by the scripts. Use `pip3 install -r
|
||||
requirements.txt` to install them. A python virtual environment is recommended.
|
||||
See the instructions [here](docs/sidechain/GettingStarted.md) for how to install
|
||||
the dependencies.
|
||||
|
||||
624
bin/sidechain/python/app.py
Normal file
624
bin/sidechain/python/app.py
Normal file
@@ -0,0 +1,624 @@
|
||||
from contextlib import contextmanager
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import pandas as pd
|
||||
from pathlib import Path
|
||||
import subprocess
|
||||
import time
|
||||
from typing import Callable, Dict, List, Optional, Set, Union
|
||||
|
||||
from ripple_client import RippleClient
|
||||
from common import Account, Asset, XRP
|
||||
from command import AccountInfo, AccountLines, BookOffers, Command, FederatorInfo, LedgerAccept, Sign, Submit, SubscriptionCommand, WalletPropose
|
||||
from config_file import ConfigFile
|
||||
import testnet
|
||||
from transaction import Payment, Transaction
|
||||
|
||||
|
||||
class KeyManager:
|
||||
def __init__(self):
|
||||
self._aliases = {} # alias -> account
|
||||
self._accounts = {} # account id -> account
|
||||
|
||||
def add(self, account: Account) -> bool:
|
||||
if account.nickname:
|
||||
self._aliases[account.nickname] = account
|
||||
self._accounts[account.account_id] = account
|
||||
|
||||
def is_alias(self, name: str):
|
||||
return name in self._aliases
|
||||
|
||||
def account_from_alias(self, name: str) -> Account:
|
||||
assert name in self._aliases
|
||||
return self._aliases[name]
|
||||
|
||||
def known_accounts(self) -> List[Account]:
|
||||
return list(self._accounts.values())
|
||||
|
||||
def account_id_dict(self) -> Dict[str, Account]:
|
||||
return self._accounts
|
||||
|
||||
def alias_or_account_id(self, id: Union[Account, str]) -> str:
|
||||
'''
|
||||
return the alias if it exists, otherwise return the id
|
||||
'''
|
||||
if isinstance(id, Account):
|
||||
return id.alias_or_account_id()
|
||||
|
||||
if id in self._accounts:
|
||||
return self._accounts[id].nickname
|
||||
return id
|
||||
|
||||
def alias_to_account_id(self, alias: str) -> Optional[str]:
|
||||
if id in self._aliases:
|
||||
return self._aliases[id].account_id
|
||||
return None
|
||||
|
||||
def to_string(self, nickname: Optional[str] = None):
|
||||
names = []
|
||||
account_ids = []
|
||||
if nickname:
|
||||
names = [nickname]
|
||||
if nickname in self._aliases:
|
||||
account_ids = [self._aliases[nickname].account_id]
|
||||
else:
|
||||
account_id = ['NA']
|
||||
else:
|
||||
for (k, v) in self._aliases.items():
|
||||
names.append(k)
|
||||
account_ids.append(v.account_id)
|
||||
# use a dataframe to get a nice table output
|
||||
df = pd.DataFrame(data={'name': names, 'id': account_ids})
|
||||
return f'{df.to_string(index=False)}'
|
||||
|
||||
|
||||
class AssetAliases:
|
||||
def __init__(self):
|
||||
self._aliases = {} # alias -> asset
|
||||
|
||||
def add(self, asset: Asset, name: str):
|
||||
self._aliases[name] = asset
|
||||
|
||||
def is_alias(self, name: str):
|
||||
return name in self._aliases
|
||||
|
||||
def asset_from_alias(self, name: str) -> Asset:
|
||||
assert name in self._aliases
|
||||
return self._aliases[name]
|
||||
|
||||
def known_aliases(self) -> List[str]:
|
||||
return list(self._aliases.keys())
|
||||
|
||||
def known_assets(self) -> List[Asset]:
|
||||
return list(self._aliases.values())
|
||||
|
||||
def to_string(self, nickname: Optional[str] = None):
|
||||
names = []
|
||||
currencies = []
|
||||
issuers = []
|
||||
if nickname:
|
||||
names = [nickname]
|
||||
if nickname in self._aliases:
|
||||
v = self._aliases[nickname]
|
||||
currencies = [v.currency]
|
||||
iss = v.issuer if v.issuer else ''
|
||||
issuers = [v.issuer if v.issuer else '']
|
||||
else:
|
||||
currencies = ['NA']
|
||||
issuers = ['NA']
|
||||
else:
|
||||
for (k, v) in self._aliases.items():
|
||||
names.append(k)
|
||||
currencies.append(v.currency)
|
||||
issuers.append(v.issuer if v.issuer else '')
|
||||
# use a dataframe to get a nice table output
|
||||
df = pd.DataFrame(data={
|
||||
'name': names,
|
||||
'currency': currencies,
|
||||
'issuer': issuers
|
||||
})
|
||||
return f'{df.to_string(index=False)}'
|
||||
|
||||
|
||||
class App:
|
||||
'''App to to interact with rippled servers'''
|
||||
def __init__(self,
|
||||
*,
|
||||
standalone: bool,
|
||||
network: Optional[testnet.Network] = None,
|
||||
client: Optional[RippleClient] = None):
|
||||
if network and client:
|
||||
raise ValueError('Cannot specify both a testnet and client in App')
|
||||
if not network and not client:
|
||||
raise ValueError('Must specify a testnet or a client in App')
|
||||
|
||||
self.standalone = standalone
|
||||
self.network = network
|
||||
|
||||
if client:
|
||||
self.client = client
|
||||
else:
|
||||
self.client = self.network.get_client(0)
|
||||
|
||||
self.key_manager = KeyManager()
|
||||
self.asset_aliases = AssetAliases()
|
||||
root_account = Account(nickname='root',
|
||||
account_id='rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh',
|
||||
secret_key='masterpassphrase')
|
||||
self.key_manager.add(root_account)
|
||||
|
||||
def shutdown(self):
|
||||
if self.network:
|
||||
self.network.shutdown()
|
||||
else:
|
||||
self.client.shutdown()
|
||||
|
||||
def send_signed(self, txn: Transaction) -> dict:
|
||||
'''Sign then send the given transaction'''
|
||||
if not txn.account.secret_key:
|
||||
raise ValueError('Cannot sign transaction without secret key')
|
||||
r = self(Sign(txn.account.secret_key, txn.to_cmd_obj()))
|
||||
raw_signed = r['tx_blob']
|
||||
r = self(Submit(raw_signed))
|
||||
logging.info(f'App.send_signed: {json.dumps(r, indent=1)}')
|
||||
return r
|
||||
|
||||
def send_command(self, cmd: Command) -> dict:
|
||||
'''Send the command to the rippled server'''
|
||||
r = self.client.send_command(cmd)
|
||||
logging.info(
|
||||
f'App.send_command : {cmd.cmd_name()} : {json.dumps(r, indent=1)}')
|
||||
return r
|
||||
|
||||
# Need async version to close ledgers from async functions
|
||||
async def async_send_command(self, cmd: Command) -> dict:
|
||||
'''Send the command to the rippled server'''
|
||||
return await self.client.async_send_command(cmd)
|
||||
|
||||
def send_subscribe_command(
|
||||
self,
|
||||
cmd: SubscriptionCommand,
|
||||
callback: Optional[Callable[[dict], None]] = None) -> dict:
|
||||
'''Send the subscription command to the rippled server. If already subscribed, it will unsubscribe'''
|
||||
return self.client.send_subscribe_command(cmd, callback)
|
||||
|
||||
def get_pids(self) -> List[int]:
|
||||
if self.network:
|
||||
return self.network.get_pids()
|
||||
if pid := self.client.get_pid():
|
||||
return [pid]
|
||||
|
||||
def get_running_status(self) -> List[bool]:
|
||||
if self.network:
|
||||
return self.network.get_running_status()
|
||||
if self.client.get_pid():
|
||||
return [True]
|
||||
else:
|
||||
return [False]
|
||||
|
||||
# Get a dict of the server_state, validated_ledger_seq, and complete_ledgers
|
||||
def get_brief_server_info(self) -> dict:
|
||||
if self.network:
|
||||
return self.network.get_brief_server_info()
|
||||
else:
|
||||
ret = {}
|
||||
for (k, v) in self.client.get_brief_server_info().items():
|
||||
ret[k] = [v]
|
||||
return ret
|
||||
|
||||
def servers_start(self,
|
||||
server_indexes: Optional[Union[Set[int],
|
||||
List[int]]] = None,
|
||||
*,
|
||||
extra_args: Optional[List[List[str]]] = None):
|
||||
if self.network:
|
||||
return self.network.servers_start(server_indexes,
|
||||
extra_args=extra_args)
|
||||
else:
|
||||
raise ValueError('Cannot start stand alone server')
|
||||
|
||||
def servers_stop(self,
|
||||
server_indexes: Optional[Union[Set[int],
|
||||
List[int]]] = None):
|
||||
if self.network:
|
||||
return self.network.servers_stop(server_indexes)
|
||||
else:
|
||||
raise ValueError('Cannot stop stand alone server')
|
||||
|
||||
def federator_info(self,
|
||||
server_indexes: Optional[Union[Set[int],
|
||||
List[int]]] = None):
|
||||
# key is server index. value is federator_info result
|
||||
result_dict = {}
|
||||
if self.network:
|
||||
if not server_indexes:
|
||||
server_indexes = [
|
||||
i for i in range(self.network.num_clients())
|
||||
if self.network.is_running(i)
|
||||
]
|
||||
for i in server_indexes:
|
||||
if self.network.is_running(i):
|
||||
result_dict[i] = self.network.get_client(i).send_command(
|
||||
FederatorInfo())
|
||||
else:
|
||||
if 0 in server_indexes:
|
||||
result_dict[0] = self.client.send_command(FederatorInfo())
|
||||
return result_dict
|
||||
|
||||
def __call__(self,
|
||||
to_send: Union[Transaction, Command, SubscriptionCommand],
|
||||
callback: Optional[Callable[[dict], None]] = None,
|
||||
*,
|
||||
insert_seq_and_fee=False) -> dict:
|
||||
'''Call `send_signed` for transactions or `send_command` for commands'''
|
||||
if isinstance(to_send, SubscriptionCommand):
|
||||
return self.send_subscribe_command(to_send, callback)
|
||||
assert callback is None
|
||||
if isinstance(to_send, Transaction):
|
||||
if insert_seq_and_fee:
|
||||
self.insert_seq_and_fee(to_send)
|
||||
return self.send_signed(to_send)
|
||||
if isinstance(to_send, Command):
|
||||
return self.send_command(to_send)
|
||||
raise ValueError(
|
||||
'Expected `to_send` to be either a Transaction, Command, or SubscriptionCommand'
|
||||
)
|
||||
|
||||
def get_configs(self) -> List[str]:
|
||||
if self.network:
|
||||
return self.network.get_configs()
|
||||
return [self.client.config]
|
||||
|
||||
def create_account(self, name: str) -> Account:
|
||||
''' Create an account. Use the name as the alias. '''
|
||||
if name == 'root':
|
||||
return
|
||||
assert not self.key_manager.is_alias(name)
|
||||
|
||||
account = Account(nickname=name, result_dict=self(WalletPropose()))
|
||||
self.key_manager.add(account)
|
||||
return account
|
||||
|
||||
def create_accounts(self,
|
||||
names: List[str],
|
||||
funding_account: Union[Account, str] = 'root',
|
||||
amt: Union[int, Asset] = 1000000000) -> List[Account]:
|
||||
'''Fund the accounts with nicknames 'names' by using the funding account and amt'''
|
||||
accounts = [self.create_account(n) for n in names]
|
||||
if not isinstance(funding_account, Account):
|
||||
org_funding_account = funding_account
|
||||
funding_account = self.key_manager.account_from_alias(
|
||||
funding_account)
|
||||
if not isinstance(funding_account, Account):
|
||||
raise ValueError(
|
||||
f'Could not find funding account {org_funding_account}')
|
||||
if not isinstance(amt, Asset):
|
||||
assert isinstance(amt, int)
|
||||
amt = Asset(value=amt)
|
||||
for a in accounts:
|
||||
p = Payment(account=funding_account, dst=a, amt=amt)
|
||||
self.send_signed(p)
|
||||
return accounts
|
||||
|
||||
def maybe_ledger_accept(self):
|
||||
if not self.standalone:
|
||||
return
|
||||
self(LedgerAccept())
|
||||
|
||||
# Need async version to close ledgers from async functions
|
||||
async def async_maybe_ledger_accept(self):
|
||||
if not self.standalone:
|
||||
return
|
||||
await self.async_send_command(LedgerAccept())
|
||||
|
||||
def get_balances(
|
||||
self,
|
||||
account: Union[Account, List[Account], None] = None,
|
||||
asset: Union[Asset, List[Asset]] = Asset()
|
||||
) -> pd.DataFrame:
|
||||
'''Return a pandas dataframe of account balances. If account is None, treat as a wildcard (use address book)'''
|
||||
if account is None:
|
||||
account = self.key_manager.known_accounts()
|
||||
if isinstance(account, list):
|
||||
result = [self.get_balances(acc, asset) for acc in account]
|
||||
return pd.concat(result, ignore_index=True)
|
||||
if isinstance(asset, list):
|
||||
result = [self.get_balances(account, ass) for ass in asset]
|
||||
return pd.concat(result, ignore_index=True)
|
||||
if asset.is_xrp():
|
||||
try:
|
||||
df = self.get_account_info(account)
|
||||
except:
|
||||
# Most likely the account does not exist on the ledger. Give a balance of zero.
|
||||
df = pd.DataFrame({
|
||||
'account': [account],
|
||||
'balance': [0],
|
||||
'flags': [0],
|
||||
'owner_count': [0],
|
||||
'previous_txn_id': ['NA'],
|
||||
'previous_txn_lgr_seq': [-1],
|
||||
'sequence': [-1]
|
||||
})
|
||||
df = df.assign(currency='XRP', peer='', limit='')
|
||||
return df.loc[:,
|
||||
['account', 'balance', 'currency', 'peer', 'limit']]
|
||||
else:
|
||||
try:
|
||||
df = self.get_trust_lines(account)
|
||||
if df.empty: return df
|
||||
df = df[(df['peer'] == asset.issuer.account_id)
|
||||
& (df['currency'] == asset.currency)]
|
||||
except:
|
||||
# Most likely the account does not exist on the ledger. Return an empty data frame
|
||||
df = pd.DataFrame({
|
||||
'account': [],
|
||||
'balance': [],
|
||||
'currency': [],
|
||||
'peer': [],
|
||||
'limit': [],
|
||||
})
|
||||
return df.loc[:,
|
||||
['account', 'balance', 'currency', 'peer', 'limit']]
|
||||
|
||||
def get_balance(self, account: Account, asset: Asset) -> Asset:
|
||||
'''Get a balance from a single account in a single asset'''
|
||||
try:
|
||||
df = self.get_balances(account, asset)
|
||||
return asset(df.iloc[0]['balance'])
|
||||
except:
|
||||
return asset(0)
|
||||
|
||||
def get_account_info(self,
|
||||
account: Optional[Account] = None) -> pd.DataFrame:
|
||||
'''Return a pandas dataframe of account info. If account is None, treat as a wildcard (use address book)'''
|
||||
if account is None:
|
||||
known_accounts = self.key_manager.known_accounts()
|
||||
result = [self.get_account_info(acc) for acc in known_accounts]
|
||||
return pd.concat(result, ignore_index=True)
|
||||
try:
|
||||
result = self.client.send_command(AccountInfo(account))
|
||||
except:
|
||||
# Most likely the account does not exist on the ledger. Give a balance of zero.
|
||||
return pd.DataFrame({
|
||||
'account': [account],
|
||||
'balance': [0],
|
||||
'flags': [0],
|
||||
'owner_count': [0],
|
||||
'previous_txn_id': ['NA'],
|
||||
'previous_txn_lgr_seq': [-1],
|
||||
'sequence': [-1]
|
||||
})
|
||||
if 'account_data' not in result:
|
||||
raise ValueError('Bad result from account_info command')
|
||||
info = result['account_data']
|
||||
for dk in ['LedgerEntryType', 'index']:
|
||||
del info[dk]
|
||||
df = pd.DataFrame([info])
|
||||
df.rename(columns={
|
||||
'Account': 'account',
|
||||
'Balance': 'balance',
|
||||
'Flags': 'flags',
|
||||
'OwnerCount': 'owner_count',
|
||||
'PreviousTxnID': 'previous_txn_id',
|
||||
'PreviousTxnLgrSeq': 'previous_txn_lgr_seq',
|
||||
'Sequence': 'sequence'
|
||||
},
|
||||
inplace=True)
|
||||
df['balance'] = df['balance'].astype(int)
|
||||
return df
|
||||
|
||||
def get_trust_lines(self,
|
||||
account: Account,
|
||||
peer: Optional[Account] = None) -> pd.DataFrame:
|
||||
'''Return a pandas dataframe account trust lines. If peer account is None, treat as a wildcard'''
|
||||
result = self.send_command(AccountLines(account, peer=peer))
|
||||
if 'lines' not in result or 'account' not in result:
|
||||
raise ValueError('Bad result from account_lines command')
|
||||
account = result['account']
|
||||
lines = result['lines']
|
||||
for d in lines:
|
||||
d['peer'] = d['account']
|
||||
d['account'] = account
|
||||
return pd.DataFrame(lines)
|
||||
|
||||
def get_offers(self, taker_pays: Asset, taker_gets: Asset) -> pd.DataFrame:
|
||||
'''Return a pandas dataframe of offers'''
|
||||
result = self.send_command(BookOffers(taker_pays, taker_gets))
|
||||
if 'offers' not in result:
|
||||
raise ValueError('Bad result from book_offers command')
|
||||
|
||||
offers = result['offers']
|
||||
delete_keys = [
|
||||
'BookDirectory', 'BookNode', 'LedgerEntryType', 'OwnerNode',
|
||||
'PreviousTxnID', 'PreviousTxnLgrSeq', 'Sequence', 'index'
|
||||
]
|
||||
for d in offers:
|
||||
for dk in delete_keys:
|
||||
del d[dk]
|
||||
for t in ['TakerPays', 'TakerGets', 'owner_funds']:
|
||||
if 'value' in d[t]:
|
||||
d[t] = d[t]['value']
|
||||
df = pd.DataFrame(offers)
|
||||
df.rename(columns={
|
||||
'Account': 'account',
|
||||
'Flags': 'flags',
|
||||
'TakerGets': 'taker_gets',
|
||||
'TakerPays': 'taker_pays'
|
||||
},
|
||||
inplace=True)
|
||||
return df
|
||||
|
||||
def account_balance(self, account: Account, asset: Asset) -> Asset:
|
||||
'''get the account's balance of the asset'''
|
||||
pass
|
||||
|
||||
def substitute_nicknames(
|
||||
self,
|
||||
df: pd.DataFrame,
|
||||
cols: List[str] = ['account', 'peer']) -> pd.DataFrame:
|
||||
result = df.copy(deep=True)
|
||||
for c in cols:
|
||||
if c not in result:
|
||||
continue
|
||||
result[c] = result[c].map(
|
||||
lambda x: self.key_manager.alias_or_account_id(x))
|
||||
return result
|
||||
|
||||
def add_to_keymanager(self, account: Account):
|
||||
self.key_manager.add(account)
|
||||
|
||||
def is_alias(self, name: str) -> bool:
|
||||
return self.key_manager.is_alias(name)
|
||||
|
||||
def account_from_alias(self, name: str) -> Account:
|
||||
return self.key_manager.account_from_alias(name)
|
||||
|
||||
def known_accounts(self) -> List[Account]:
|
||||
return self.key_manager.known_accounts()
|
||||
|
||||
def known_asset_aliases(self) -> List[str]:
|
||||
return self.asset_aliases.known_aliases()
|
||||
|
||||
def known_iou_assets(self) -> List[Asset]:
|
||||
return self.asset_aliases.known_assets()
|
||||
|
||||
def is_asset_alias(self, name: str) -> bool:
|
||||
return self.asset_aliases.is_alias(name)
|
||||
|
||||
def add_asset_alias(self, asset: Asset, name: str):
|
||||
self.asset_aliases.add(asset, name)
|
||||
|
||||
def asset_from_alias(self, name: str) -> Asset:
|
||||
return self.asset_aliases.asset_from_alias(name)
|
||||
|
||||
def insert_seq_and_fee(self, txn: Transaction):
|
||||
acc_info = self(AccountInfo(txn.account))
|
||||
# TODO: set better fee (Hard code a fee of 15 for now)
|
||||
txn.set_seq_and_fee(acc_info['account_data']['Sequence'], 15)
|
||||
|
||||
def get_client(self) -> RippleClient:
|
||||
return self.client
|
||||
|
||||
|
||||
def balances_dataframe(chains: List[App],
|
||||
chain_names: List[str],
|
||||
account_ids: Optional[List[Account]] = None,
|
||||
assets: List[Asset] = None,
|
||||
in_drops: bool = False):
|
||||
def _removesuffix(self: str, suffix: str) -> str:
|
||||
if suffix and self.endswith(suffix):
|
||||
return self[:-len(suffix)]
|
||||
else:
|
||||
return self[:]
|
||||
|
||||
def _balance_df(chain: App, acc: Optional[Account],
|
||||
asset: Union[Asset, List[Asset]], in_drops: bool):
|
||||
b = chain.get_balances(acc, asset)
|
||||
if not in_drops:
|
||||
b.loc[b['currency'] == 'XRP', 'balance'] /= 1_000_000
|
||||
b = chain.substitute_nicknames(b)
|
||||
b = b.set_index('account')
|
||||
return b
|
||||
|
||||
if account_ids is None:
|
||||
account_ids = [None] * len(chains)
|
||||
|
||||
if assets is None:
|
||||
# XRP and all assets in the assets alias list
|
||||
assets = [[XRP(0)] + c.known_iou_assets() for c in chains]
|
||||
|
||||
dfs = []
|
||||
keys = []
|
||||
for chain, chain_name, acc, asset in zip(chains, chain_names, account_ids,
|
||||
assets):
|
||||
dfs.append(_balance_df(chain, acc, asset, in_drops))
|
||||
keys.append(_removesuffix(chain_name, 'chain'))
|
||||
df = pd.concat(dfs, keys=keys)
|
||||
return df
|
||||
|
||||
|
||||
# Start an app with a single client
|
||||
@contextmanager
|
||||
def single_client_app(*,
|
||||
config: ConfigFile,
|
||||
command_log: Optional[str] = None,
|
||||
server_out=os.devnull,
|
||||
run_server: bool = True,
|
||||
exe: Optional[str] = None,
|
||||
extra_args: Optional[List[str]] = None,
|
||||
standalone=False):
|
||||
'''Start a ripple server and return an app'''
|
||||
try:
|
||||
if extra_args is None:
|
||||
extra_args = []
|
||||
to_run = None
|
||||
app = None
|
||||
client = RippleClient(config=config, command_log=command_log, exe=exe)
|
||||
if run_server:
|
||||
to_run = [client.exe, '--conf', client.config_file_name]
|
||||
if standalone:
|
||||
to_run.append('-a')
|
||||
fout = open(server_out, 'w')
|
||||
p = subprocess.Popen(to_run + extra_args,
|
||||
stdout=fout,
|
||||
stderr=subprocess.STDOUT)
|
||||
client.set_pid(p.pid)
|
||||
print(
|
||||
f'started rippled: config: {client.config_file_name} PID: {p.pid}',
|
||||
flush=True)
|
||||
time.sleep(1.5) # give process time to startup
|
||||
app = App(client=client, standalone=standalone)
|
||||
yield app
|
||||
finally:
|
||||
if app:
|
||||
app.shutdown()
|
||||
if run_server and to_run:
|
||||
subprocess.Popen(to_run + ['stop'],
|
||||
stdout=fout,
|
||||
stderr=subprocess.STDOUT)
|
||||
p.wait()
|
||||
|
||||
|
||||
def configs_for_testnet(config_file_prefix: str) -> List[ConfigFile]:
|
||||
configs = []
|
||||
p = Path(config_file_prefix)
|
||||
dir = p.parent
|
||||
file = p.name
|
||||
file_names = []
|
||||
for f in os.listdir(dir):
|
||||
cfg = os.path.join(dir, f, 'rippled.cfg')
|
||||
if f.startswith(file) and os.path.exists(cfg):
|
||||
file_names.append(cfg)
|
||||
file_names.sort()
|
||||
return [ConfigFile(file_name=f) for f in file_names]
|
||||
|
||||
|
||||
# Start an app for a network with the config files matched by `config_file_prefix*/rippled.cfg`
|
||||
|
||||
|
||||
# Undocumented feature: if the environment variable RIPPLED_SIDECHAIN_RR is set, it is
|
||||
# assumed to point to the rr executable. Sidechain server 0 will then be run under rr.
|
||||
@contextmanager
|
||||
def testnet_app(*,
|
||||
exe: str,
|
||||
configs: List[ConfigFile],
|
||||
command_logs: Optional[List[str]] = None,
|
||||
run_server: Optional[List[bool]] = None,
|
||||
sidechain_rr: Optional[str] = None,
|
||||
extra_args: Optional[List[List[str]]] = None):
|
||||
'''Start a ripple testnet and return an app'''
|
||||
try:
|
||||
app = None
|
||||
network = testnet.Network(exe,
|
||||
configs,
|
||||
command_logs=command_logs,
|
||||
run_server=run_server,
|
||||
with_rr=sidechain_rr,
|
||||
extra_args=extra_args)
|
||||
network.wait_for_validated_ledger()
|
||||
app = App(network=network, standalone=False)
|
||||
yield app
|
||||
finally:
|
||||
if app:
|
||||
app.shutdown()
|
||||
563
bin/sidechain/python/command.py
Normal file
563
bin/sidechain/python/command.py
Normal file
@@ -0,0 +1,563 @@
|
||||
import json
|
||||
from typing import List, Optional, Union
|
||||
|
||||
from common import Account, Asset
|
||||
|
||||
|
||||
class Command:
|
||||
'''Interface for all commands sent to the server'''
|
||||
|
||||
# command id useful for websocket messages
|
||||
next_cmd_id = 1
|
||||
|
||||
def __init__(self):
|
||||
self.cmd_id = Command.next_cmd_id
|
||||
Command.next_cmd_id += 1
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
'''Return the command name for use in a command line'''
|
||||
assert False
|
||||
return ''
|
||||
|
||||
def get_command_line_list(self) -> List[str]:
|
||||
'''Return a list of strings suitable for a command line command for a rippled server'''
|
||||
return [self.cmd_name, json.dumps(self.to_cmd_obj())]
|
||||
|
||||
def get_websocket_dict(self) -> dict:
|
||||
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
|
||||
result = self.to_cmd_obj()
|
||||
return self.add_websocket_fields(result)
|
||||
|
||||
def to_cmd_obj(self) -> dict:
|
||||
'''Return an object suitalbe for use in a command (input to json.dumps or similar)'''
|
||||
assert False
|
||||
return {}
|
||||
|
||||
def add_websocket_fields(self, cmd_dict: dict) -> dict:
|
||||
cmd_dict['id'] = self.cmd_id
|
||||
cmd_dict['command'] = self.cmd_name()
|
||||
return cmd_dict
|
||||
|
||||
def _set_flag(self, flag_bit: int, value: bool = True):
|
||||
'''Set or clear the flag bit'''
|
||||
if value:
|
||||
self.flags |= flag_bit
|
||||
else:
|
||||
self.flags &= ~flag_bit
|
||||
return self
|
||||
|
||||
|
||||
class SubscriptionCommand(Command):
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
|
||||
|
||||
class PathFind(Command):
|
||||
'''Rippled ripple_path_find command'''
|
||||
def __init__(self,
|
||||
*,
|
||||
src: Account,
|
||||
dst: Account,
|
||||
amt: Asset,
|
||||
send_max: Optional[Asset] = None,
|
||||
src_currencies: Optional[List[Asset]] = None,
|
||||
ledger_hash: Optional[str] = None,
|
||||
ledger_index: Optional[Union[int, str]] = None):
|
||||
super().__init__()
|
||||
self.src = src
|
||||
self.dst = dst
|
||||
self.amt = amt
|
||||
self.send_max = send_max
|
||||
self.src_currencies = src_currencies
|
||||
self.ledger_hash = ledger_hash
|
||||
self.ledger_index = ledger_index
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
return 'ripple_path_find'
|
||||
|
||||
def add_websocket_fields(self, cmd_dict: dict) -> dict:
|
||||
cmd_dict = super().add_websocket_fields(cmd_dict)
|
||||
cmd_dict['subcommand'] = 'create'
|
||||
return cmd_dict
|
||||
|
||||
def to_cmd_obj(self) -> dict:
|
||||
'''convert to transaction form (suitable for using json.dumps or similar)'''
|
||||
cmd = {
|
||||
'source_account': self.src.account_id,
|
||||
'destination_account': self.dst.account_id,
|
||||
'destination_amount': self.amt.to_cmd_obj()
|
||||
}
|
||||
if self.send_max is not None:
|
||||
cmd['send_max'] = self.send_max.to_cmd_obj()
|
||||
if self.ledger_hash is not None:
|
||||
cmd['ledger_hash'] = self.ledger_hash
|
||||
if self.ledger_index is not None:
|
||||
cmd['ledger_index'] = self.ledger_index
|
||||
if self.src_currencies:
|
||||
c = []
|
||||
for sc in self.src_currencies:
|
||||
d = {'currency': sc.currency, 'issuer': sc.issuer.account_id}
|
||||
c.append(d)
|
||||
cmd['source_currencies'] = c
|
||||
return cmd
|
||||
|
||||
|
||||
class Sign(Command):
|
||||
'''Rippled sign command'''
|
||||
def __init__(self, secret: str, tx: dict):
|
||||
super().__init__()
|
||||
self.tx = tx
|
||||
self.secret = secret
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
return 'sign'
|
||||
|
||||
def get_command_line_list(self) -> List[str]:
|
||||
'''Return a list of strings suitable for a command line command for a rippled server'''
|
||||
return [self.cmd_name(), self.secret, f'{json.dumps(self.tx)}']
|
||||
|
||||
def get_websocket_dict(self) -> dict:
|
||||
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
|
||||
result = {'secret': self.secret, 'tx_json': self.tx}
|
||||
return self.add_websocket_fields(result)
|
||||
|
||||
|
||||
class Submit(Command):
|
||||
'''Rippled submit command'''
|
||||
def __init__(self, tx_blob: str):
|
||||
super().__init__()
|
||||
self.tx_blob = tx_blob
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
return 'submit'
|
||||
|
||||
def get_command_line_list(self) -> List[str]:
|
||||
'''Return a list of strings suitable for a command line command for a rippled server'''
|
||||
return [self.cmd_name(), self.tx_blob]
|
||||
|
||||
def get_websocket_dict(self) -> dict:
|
||||
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
|
||||
result = {'tx_blob': self.tx_blob}
|
||||
return self.add_websocket_fields(result)
|
||||
|
||||
|
||||
class LedgerAccept(Command):
|
||||
'''Rippled ledger_accept command'''
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
return 'ledger_accept'
|
||||
|
||||
def get_command_line_list(self) -> List[str]:
|
||||
'''Return a list of strings suitable for a command line command for a rippled server'''
|
||||
return [self.cmd_name()]
|
||||
|
||||
def get_websocket_dict(self) -> dict:
|
||||
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
|
||||
result = {}
|
||||
return self.add_websocket_fields(result)
|
||||
|
||||
|
||||
class Stop(Command):
|
||||
'''Rippled stop command'''
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
return 'stop'
|
||||
|
||||
def get_command_line_list(self) -> List[str]:
|
||||
'''Return a list of strings suitable for a command line command for a rippled server'''
|
||||
return [self.cmd_name()]
|
||||
|
||||
def get_websocket_dict(self) -> dict:
|
||||
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
|
||||
result = {}
|
||||
return self.add_websocket_fields(result)
|
||||
|
||||
|
||||
class LogLevel(Command):
|
||||
'''Rippled log_level command'''
|
||||
def __init__(self, severity: str, *, partition: Optional[str] = None):
|
||||
super().__init__()
|
||||
self.severity = severity
|
||||
self.partition = partition
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
return 'log_level'
|
||||
|
||||
def get_command_line_list(self) -> List[str]:
|
||||
'''Return a list of strings suitable for a command line command for a rippled server'''
|
||||
if self.partition is not None:
|
||||
return [self.cmd_name(), self.partition, self.severity]
|
||||
return [self.cmd_name(), self.severity]
|
||||
|
||||
def get_websocket_dict(self) -> dict:
|
||||
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
|
||||
result = {'severity': self.severity}
|
||||
if self.partition is not None:
|
||||
result['partition'] = self.partition
|
||||
return self.add_websocket_fields(result)
|
||||
|
||||
|
||||
class WalletPropose(Command):
|
||||
'''Rippled wallet_propose command'''
|
||||
def __init__(self,
|
||||
*,
|
||||
passphrase: Optional[str] = None,
|
||||
seed: Optional[str] = None,
|
||||
seed_hex: Optional[str] = None,
|
||||
key_type: Optional[str] = None):
|
||||
super().__init__()
|
||||
self.passphrase = passphrase
|
||||
self.seed = seed
|
||||
self.seed_hex = seed_hex
|
||||
self.key_type = key_type
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
return 'wallet_propose'
|
||||
|
||||
def get_command_line_list(self) -> List[str]:
|
||||
'''Return a list of strings suitable for a command line command for a rippled server'''
|
||||
assert not self.seed and not self.seed_hex and (
|
||||
not self.key_type or self.key_type == 'secp256k1')
|
||||
if self.passphrase:
|
||||
return [self.cmd_name(), self.passphrase]
|
||||
return [self.cmd_name()]
|
||||
|
||||
def get_websocket_dict(self) -> dict:
|
||||
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
|
||||
result = {}
|
||||
if self.seed is not None:
|
||||
result['seed'] = self.seed
|
||||
if self.seed_hex is not None:
|
||||
result['seed_hex'] = self.seed_hex
|
||||
if self.passphrase is not None:
|
||||
result['passphrase'] = self.passphrase
|
||||
if self.key_type is not None:
|
||||
result['key_type'] = self.key_type
|
||||
return self.add_websocket_fields(result)
|
||||
|
||||
|
||||
class ValidationCreate(Command):
|
||||
'''Rippled validation_create command'''
|
||||
def __init__(self, *, secret: Optional[str] = None):
|
||||
super().__init__()
|
||||
self.secret = secret
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
return 'validation_create'
|
||||
|
||||
def get_command_line_list(self) -> List[str]:
|
||||
'''Return a list of strings suitable for a command line command for a rippled server'''
|
||||
if self.secret:
|
||||
return [self.cmd_name(), self.secret]
|
||||
return [self.cmd_name()]
|
||||
|
||||
def get_websocket_dict(self) -> dict:
|
||||
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
|
||||
result = {}
|
||||
if self.secret is not None:
|
||||
result['secret'] = self.secret
|
||||
return self.add_websocket_fields(result)
|
||||
|
||||
|
||||
class AccountInfo(Command):
|
||||
'''Rippled account_info command'''
|
||||
def __init__(self,
|
||||
account: Account,
|
||||
*,
|
||||
strict: Optional[bool] = None,
|
||||
ledger_hash: Optional[str] = None,
|
||||
ledger_index: Optional[Union[str, int]] = None,
|
||||
queue: Optional[bool] = None,
|
||||
signers_list: Optional[bool] = None):
|
||||
super().__init__()
|
||||
self.account = account
|
||||
self.strict = strict
|
||||
self.ledger_hash = ledger_hash
|
||||
self.ledger_index = ledger_index
|
||||
self.queue = queue
|
||||
self.signers_list = signers_list
|
||||
assert not ((ledger_hash is not None) and (ledger_index is not None))
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
return 'account_info'
|
||||
|
||||
def get_command_line_list(self) -> List[str]:
|
||||
'''Return a list of strings suitable for a command line command for a rippled server'''
|
||||
result = [self.cmd_name(), self.account.account_id]
|
||||
if self.ledger_index is not None:
|
||||
result.append(self.ledger_index)
|
||||
if self.ledger_hash is not None:
|
||||
result.append(self.ledger_hash)
|
||||
if self.strict is not None:
|
||||
result.append(self.strict)
|
||||
return result
|
||||
|
||||
def get_websocket_dict(self) -> dict:
|
||||
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
|
||||
result = {'account': self.account.account_id}
|
||||
if self.ledger_index is not None:
|
||||
result['ledger_index'] = self.ledger_index
|
||||
if self.ledger_hash is not None:
|
||||
result['ledger_hash'] = self.ledger_hash
|
||||
if self.strict is not None:
|
||||
result['strict'] = self.strict
|
||||
if self.queue is not None:
|
||||
result['queue'] = self.queue
|
||||
return self.add_websocket_fields(result)
|
||||
|
||||
|
||||
class AccountLines(Command):
|
||||
'''Rippled account_lines command'''
|
||||
def __init__(self,
|
||||
account: Account,
|
||||
*,
|
||||
peer: Optional[Account] = None,
|
||||
ledger_hash: Optional[str] = None,
|
||||
ledger_index: Optional[Union[str, int]] = None,
|
||||
limit: Optional[int] = None,
|
||||
marker=None):
|
||||
super().__init__()
|
||||
self.account = account
|
||||
self.peer = peer
|
||||
self.ledger_hash = ledger_hash
|
||||
self.ledger_index = ledger_index
|
||||
self.limit = limit
|
||||
self.marker = marker
|
||||
assert not ((ledger_hash is not None) and (ledger_index is not None))
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
return 'account_lines'
|
||||
|
||||
def get_command_line_list(self) -> List[str]:
|
||||
'''Return a list of strings suitable for a command line command for a rippled server'''
|
||||
assert sum(x is None for x in [
|
||||
self.ledger_index, self.ledger_hash, self.limit, self.marker
|
||||
]) == 4
|
||||
result = [self.cmd_name(), self.account.account_id]
|
||||
if self.peer is not None:
|
||||
result.append(self.peer)
|
||||
return result
|
||||
|
||||
def get_websocket_dict(self) -> dict:
|
||||
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
|
||||
result = {'account': self.account.account_id}
|
||||
if self.peer is not None:
|
||||
result['peer'] = self.peer
|
||||
if self.ledger_index is not None:
|
||||
result['ledger_index'] = self.ledger_index
|
||||
if self.ledger_hash is not None:
|
||||
result['ledger_hash'] = self.ledger_hash
|
||||
if self.limit is not None:
|
||||
result['limit'] = self.limit
|
||||
if self.marker is not None:
|
||||
result['marker'] = self.marker
|
||||
return self.add_websocket_fields(result)
|
||||
|
||||
|
||||
class AccountTx(Command):
|
||||
def __init__(self,
|
||||
account: Account,
|
||||
*,
|
||||
limit: Optional[int] = None,
|
||||
marker=None):
|
||||
super().__init__()
|
||||
self.account = account
|
||||
self.limit = limit
|
||||
self.marker = marker
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
return 'account_tx'
|
||||
|
||||
def get_command_line_list(self) -> List[str]:
|
||||
'''Return a list of strings suitable for a command line command for a rippled server'''
|
||||
result = [self.cmd_name(), self.account.account_id]
|
||||
return result
|
||||
|
||||
def get_websocket_dict(self) -> dict:
|
||||
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
|
||||
result = {'account': self.account.account_id}
|
||||
if self.limit is not None:
|
||||
result['limit'] = self.limit
|
||||
if self.marker is not None:
|
||||
result['marker'] = self.marker
|
||||
return self.add_websocket_fields(result)
|
||||
|
||||
|
||||
class BookOffers(Command):
|
||||
'''Rippled book_offers command'''
|
||||
def __init__(self,
|
||||
taker_pays: Asset,
|
||||
taker_gets: Asset,
|
||||
*,
|
||||
taker: Optional[Account] = None,
|
||||
ledger_hash: Optional[str] = None,
|
||||
ledger_index: Optional[Union[str, int]] = None,
|
||||
limit: Optional[int] = None,
|
||||
marker=None):
|
||||
super().__init__()
|
||||
self.taker_pays = taker_pays
|
||||
self.taker_gets = taker_gets
|
||||
self.taker = taker
|
||||
self.ledger_hash = ledger_hash
|
||||
self.ledger_index = ledger_index
|
||||
self.limit = limit
|
||||
self.marker = marker
|
||||
assert not ((ledger_hash is not None) and (ledger_index is not None))
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
return 'book_offers'
|
||||
|
||||
def get_command_line_list(self) -> List[str]:
|
||||
'''Return a list of strings suitable for a command line command for a rippled server'''
|
||||
assert sum(x is None for x in [
|
||||
self.ledger_index, self.ledger_hash, self.limit, self.marker
|
||||
]) == 4
|
||||
return [
|
||||
self.cmd_name(),
|
||||
self.taker_pays.cmd_str(),
|
||||
self.taker_gets.cmd_str()
|
||||
]
|
||||
|
||||
def get_websocket_dict(self) -> dict:
|
||||
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
|
||||
result = {
|
||||
'taker_pays': self.taker_pays.to_cmd_obj(),
|
||||
'taker_gets': self.taker_gets.to_cmd_obj()
|
||||
}
|
||||
if self.taker is not None:
|
||||
result['taker'] = self.taker.account_id
|
||||
if self.ledger_index is not None:
|
||||
result['ledger_index'] = self.ledger_index
|
||||
if self.ledger_hash is not None:
|
||||
result['ledger_hash'] = self.ledger_hash
|
||||
if self.limit is not None:
|
||||
result['limit'] = self.limit
|
||||
if self.marker is not None:
|
||||
result['marker'] = self.marker
|
||||
return self.add_websocket_fields(result)
|
||||
|
||||
|
||||
class BookSubscription:
|
||||
'''Spec for a book in a subscribe command'''
|
||||
def __init__(self,
|
||||
taker_pays: Asset,
|
||||
taker_gets: Asset,
|
||||
*,
|
||||
taker: Optional[Account] = None,
|
||||
snapshot: Optional[bool] = None,
|
||||
both: Optional[bool] = None):
|
||||
self.taker_pays = taker_pays
|
||||
self.taker_gets = taker_gets
|
||||
self.taker = taker
|
||||
self.snapshot = snapshot
|
||||
self.both = both
|
||||
|
||||
def to_cmd_obj(self) -> dict:
|
||||
'''Return an object suitalbe for use in a command'''
|
||||
result = {
|
||||
'taker_pays': self.taker_pays.to_cmd_obj(),
|
||||
'taker_gets': self.taker_gets.to_cmd_obj()
|
||||
}
|
||||
if self.taker is not None:
|
||||
result['taker'] = self.taker.account_id
|
||||
if self.snapshot is not None:
|
||||
result['snapshot'] = self.snapshot
|
||||
if self.both is not None:
|
||||
result['both'] = self.both
|
||||
return result
|
||||
|
||||
|
||||
class ServerInfo(Command):
|
||||
'''Rippled server_info command'''
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
return 'server_info'
|
||||
|
||||
def get_command_line_list(self) -> List[str]:
|
||||
'''Return a list of strings suitable for a command line command for a rippled server'''
|
||||
return [self.cmd_name()]
|
||||
|
||||
def get_websocket_dict(self) -> dict:
|
||||
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
|
||||
result = {}
|
||||
return self.add_websocket_fields(result)
|
||||
|
||||
|
||||
class FederatorInfo(Command):
|
||||
'''Rippled federator_info command'''
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
return 'federator_info'
|
||||
|
||||
def get_command_line_list(self) -> List[str]:
|
||||
'''Return a list of strings suitable for a command line command for a rippled server'''
|
||||
return [self.cmd_name()]
|
||||
|
||||
def get_websocket_dict(self) -> dict:
|
||||
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
|
||||
result = {}
|
||||
return self.add_websocket_fields(result)
|
||||
|
||||
|
||||
class Subscribe(SubscriptionCommand):
|
||||
'''The subscribe method requests periodic notifications from the server
|
||||
when certain events happen. See: https://developers.ripple.com/subscribe.html'''
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
streams: Optional[List[str]] = None,
|
||||
accounts: Optional[List[Account]] = None,
|
||||
accounts_proposed: Optional[List[Account]] = None,
|
||||
account_history_account: Optional[Account] = None,
|
||||
books: Optional[
|
||||
List[BookSubscription]] = None, # taker_pays, taker_gets
|
||||
url: Optional[str] = None,
|
||||
url_username: Optional[str] = None,
|
||||
url_password: Optional[str] = None):
|
||||
super().__init__()
|
||||
self.streams = streams
|
||||
self.accounts = accounts
|
||||
self.account_history_account = account_history_account
|
||||
self.accounts_proposed = accounts_proposed
|
||||
self.books = books
|
||||
self.url = url
|
||||
self.url_username = url_username
|
||||
self.url_password = url_password
|
||||
self.websocket = None
|
||||
|
||||
def cmd_name(self) -> str:
|
||||
if self.websocket:
|
||||
return 'unsubscribe'
|
||||
return 'subscribe'
|
||||
|
||||
def to_cmd_obj(self) -> dict:
|
||||
d = {}
|
||||
if self.streams is not None:
|
||||
d['streams'] = self.streams
|
||||
if self.accounts is not None:
|
||||
d['accounts'] = [a.account_id for a in self.accounts]
|
||||
if self.account_history_account is not None:
|
||||
d['account_history_tx_stream'] = {
|
||||
'account': self.account_history_account.account_id
|
||||
}
|
||||
if self.accounts_proposed is not None:
|
||||
d['accounts_proposed'] = [
|
||||
a.account_id for a in self.accounts_proposed
|
||||
]
|
||||
if self.books is not None:
|
||||
d['books'] = [b.to_cmd_obj() for b in self.books]
|
||||
if self.url is not None:
|
||||
d['url'] = self.url
|
||||
if self.url_username is not None:
|
||||
d['url_username'] = self.url_username
|
||||
if self.url_password is not None:
|
||||
d['url_password'] = self.url_password
|
||||
return d
|
||||
256
bin/sidechain/python/common.py
Normal file
256
bin/sidechain/python/common.py
Normal file
@@ -0,0 +1,256 @@
|
||||
import binascii
|
||||
import datetime
|
||||
import logging
|
||||
from typing import List, Optional, Union
|
||||
import pandas as pd
|
||||
import pytz
|
||||
import sys
|
||||
|
||||
EPRINT_ENABLED = True
|
||||
|
||||
|
||||
def disable_eprint():
|
||||
global EPRINT_ENABLED
|
||||
EPRINT_ENABLED = False
|
||||
|
||||
|
||||
def enable_eprint():
|
||||
global EPRINT_ENABLED
|
||||
EPRINT_ENABLED = True
|
||||
|
||||
|
||||
def eprint(*args, **kwargs):
|
||||
if not EPRINT_ENABLED:
|
||||
return
|
||||
logging.error(*args)
|
||||
print(*args, file=sys.stderr, flush=True, **kwargs)
|
||||
|
||||
|
||||
def to_rippled_epoch(d: datetime.datetime) -> int:
|
||||
'''Convert from a datetime to the number of seconds since Jan 1, 2000 (rippled epoch)'''
|
||||
start = datetime.datetime(2000, 1, 1, tzinfo=pytz.utc)
|
||||
return int((d - start).total_seconds())
|
||||
|
||||
|
||||
class Account: # pylint: disable=too-few-public-methods
|
||||
'''
|
||||
Account in the ripple ledger
|
||||
'''
|
||||
def __init__(self,
|
||||
*,
|
||||
account_id: Optional[str] = None,
|
||||
nickname: Optional[str] = None,
|
||||
public_key: Optional[str] = None,
|
||||
public_key_hex: Optional[str] = None,
|
||||
secret_key: Optional[str] = None,
|
||||
result_dict: Optional[dict] = None):
|
||||
self.account_id = account_id
|
||||
self.nickname = nickname
|
||||
self.public_key = public_key
|
||||
self.public_key_hex = public_key_hex
|
||||
self.secret_key = secret_key
|
||||
|
||||
if result_dict is not None:
|
||||
self.account_id = result_dict['account_id']
|
||||
self.public_key = result_dict['public_key']
|
||||
self.public_key_hex = result_dict['public_key_hex']
|
||||
self.secret_key = result_dict['master_seed']
|
||||
|
||||
# Accounts are equal if they represent the same account on the ledger
|
||||
# I.e. only check the account_id field for equality.
|
||||
def __eq__(self, lhs):
|
||||
if not isinstance(lhs, self.__class__):
|
||||
return False
|
||||
return self.account_id == lhs.account_id
|
||||
|
||||
def __ne__(self, lhs):
|
||||
return not self.__eq__(lhs)
|
||||
|
||||
def __str__(self) -> str:
|
||||
if self.nickname is not None:
|
||||
return self.nickname
|
||||
return self.account_id
|
||||
|
||||
def alias_or_account_id(self) -> str:
|
||||
'''
|
||||
return the alias if it exists, otherwise return the id
|
||||
'''
|
||||
if self.nickname is not None:
|
||||
return self.nickname
|
||||
return self.account_id
|
||||
|
||||
def account_id_str_as_hex(self) -> str:
|
||||
return binascii.hexlify(self.account_id.encode()).decode('utf-8')
|
||||
|
||||
def to_cmd_obj(self) -> dict:
|
||||
return {
|
||||
'account_id': self.account_id,
|
||||
'nickname': self.nickname,
|
||||
'public_key': self.public_key,
|
||||
'public_key_hex': self.public_key_hex,
|
||||
'secret_key': self.secret_key
|
||||
}
|
||||
|
||||
|
||||
class Asset:
|
||||
'''An XRP or IOU value'''
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
value: Union[int, float, None] = None,
|
||||
currency: Optional[
|
||||
str] = None, # Will default to 'XRP' if not specified
|
||||
issuer: Optional[Account] = None,
|
||||
from_asset=None, # asset is of type Optional[Asset]
|
||||
# from_rpc_result is a python object resulting from an rpc command
|
||||
from_rpc_result: Optional[Union[dict, str]] = None):
|
||||
|
||||
assert from_asset is None or from_rpc_result is None
|
||||
|
||||
self.value = value
|
||||
self.issuer = issuer
|
||||
self.currency = currency
|
||||
if from_asset is not None:
|
||||
if self.value is None:
|
||||
self.value = from_asset.value
|
||||
if self.issuer is None:
|
||||
self.issuer = from_asset.issuer
|
||||
if self.currency is None:
|
||||
self.currency = from_asset.currency
|
||||
if from_rpc_result is not None:
|
||||
if isinstance(from_rpc_result, str):
|
||||
self.value = int(from_rpc_result)
|
||||
self.currency = 'XRP'
|
||||
else:
|
||||
self.value = from_rpc_result['value']
|
||||
self.currency = float(from_rpc_result['currency'])
|
||||
self.issuer = Account(account_id=from_rpc_result['issuer'])
|
||||
|
||||
if self.currency is None:
|
||||
self.currency = 'XRP'
|
||||
|
||||
if isinstance(self.value, str):
|
||||
if self.is_xrp():
|
||||
self.value = int(value)
|
||||
else:
|
||||
self.value = float(value)
|
||||
|
||||
def __call__(self, value: Union[int, float]):
|
||||
'''Call operator useful for a terse syntax for assets in tests. I.e. USD(100)'''
|
||||
return Asset(value=value, from_asset=self)
|
||||
|
||||
def __add__(self, lhs):
|
||||
assert (self.issuer == lhs.issuer and self.currency == lhs.currency)
|
||||
return Asset(value=self.value + lhs.value,
|
||||
currency=self.currency,
|
||||
issuer=self.issuer)
|
||||
|
||||
def __sub__(self, lhs):
|
||||
assert (self.issuer == lhs.issuer and self.currency == lhs.currency)
|
||||
return Asset(value=self.value - lhs.value,
|
||||
currency=self.currency,
|
||||
issuer=self.issuer)
|
||||
|
||||
def __eq__(self, lhs):
|
||||
if not isinstance(lhs, self.__class__):
|
||||
return False
|
||||
return (self.value == lhs.value and self.currency == lhs.currency
|
||||
and self.issuer == lhs.issuer)
|
||||
|
||||
def __ne__(self, lhs):
|
||||
return not self.__eq__(lhs)
|
||||
|
||||
def __str__(self) -> str:
|
||||
value_part = '' if self.value is None else f'{self.value}/'
|
||||
issuer_part = '' if self.issuer is None else f'/{self.issuer}'
|
||||
return f'{value_part}{self.currency}{issuer_part}'
|
||||
|
||||
def __repr__(self) -> str:
|
||||
return self.__str__()
|
||||
|
||||
def is_xrp(self) -> bool:
|
||||
''' return true if the asset represents XRP'''
|
||||
return self.currency == 'XRP'
|
||||
|
||||
def cmd_str(self) -> str:
|
||||
value_part = '' if self.value is None else f'{self.value}/'
|
||||
issuer_part = '' if self.issuer is None else f'/{self.issuer.account_id}'
|
||||
return f'{value_part}{self.currency}{issuer_part}'
|
||||
|
||||
def to_cmd_obj(self) -> dict:
|
||||
'''Return an object suitalbe for use in a command'''
|
||||
if self.currency == 'XRP':
|
||||
if self.value is not None:
|
||||
return f'{self.value}' # must be a string
|
||||
return {'currency': self.currency}
|
||||
result = {'currency': self.currency, 'issuer': self.issuer.account_id}
|
||||
if self.value is not None:
|
||||
result['value'] = f'{self.value}' # must be a string
|
||||
return result
|
||||
|
||||
|
||||
def XRP(v: Union[int, float]) -> Asset:
|
||||
return Asset(value=v * 1_000_000)
|
||||
|
||||
|
||||
def drops(v: int) -> Asset:
|
||||
return Asset(value=v)
|
||||
|
||||
|
||||
class Path:
|
||||
'''Payment Path'''
|
||||
def __init__(self,
|
||||
nodes: Optional[List[Union[Account, Asset]]] = None,
|
||||
*,
|
||||
result_list: Optional[List[dict]] = None):
|
||||
assert not (nodes and result_list)
|
||||
if result_list is not None:
|
||||
self.result_list = result_list
|
||||
return
|
||||
if nodes is None:
|
||||
self.result_list = []
|
||||
return
|
||||
self.result_list = [
|
||||
self._create_account_path_node(n)
|
||||
if isinstance(n, Account) else self._create_currency_path_node(n)
|
||||
for n in nodes
|
||||
]
|
||||
|
||||
def _create_account_path_node(self, account: Account) -> dict:
|
||||
return {
|
||||
'account': account.account_id,
|
||||
'type': 1,
|
||||
'type_hex': '0000000000000001'
|
||||
}
|
||||
|
||||
def _create_currency_path_node(self, asset: Asset) -> dict:
|
||||
result = {
|
||||
'currency': asset.currency,
|
||||
'type': 48,
|
||||
'type_hex': '0000000000000030'
|
||||
}
|
||||
if not asset.is_xrp():
|
||||
result['issuer'] = asset.issuer.account_id
|
||||
return result
|
||||
|
||||
def to_cmd_obj(self) -> list:
|
||||
'''Return an object suitalbe for use in a command'''
|
||||
return self.result_list
|
||||
|
||||
|
||||
class PathList:
|
||||
'''Collection of paths for use in payments'''
|
||||
def __init__(self,
|
||||
path_list: Optional[List[Path]] = None,
|
||||
*,
|
||||
result_list: Optional[List[List[dict]]] = None):
|
||||
# result_list can be the response from the rippled server
|
||||
assert not (path_list and result_list)
|
||||
if result_list is not None:
|
||||
self.paths = [Path(result_list=l) for l in result_list]
|
||||
return
|
||||
self.paths = path_list
|
||||
|
||||
def to_cmd_obj(self) -> list:
|
||||
'''Return an object suitalbe for use in a command'''
|
||||
return [p.to_cmd_obj() for p in self.paths]
|
||||
101
bin/sidechain/python/config_file.py
Normal file
101
bin/sidechain/python/config_file.py
Normal file
@@ -0,0 +1,101 @@
|
||||
from typing import List, Optional, Tuple
|
||||
|
||||
|
||||
class Section:
|
||||
def section_header(l: str) -> Optional[str]:
|
||||
'''
|
||||
If the line is a section header, return the section name
|
||||
otherwise return None
|
||||
'''
|
||||
if l.startswith('[') and l.endswith(']'):
|
||||
return l[1:-1]
|
||||
return None
|
||||
|
||||
def __init__(self, name: str):
|
||||
super().__setattr__('_name', name)
|
||||
# lines contains all non key-value pairs
|
||||
super().__setattr__('_lines', [])
|
||||
super().__setattr__('_kv_pairs', {})
|
||||
|
||||
def get_name(self):
|
||||
return self._name
|
||||
|
||||
def add_line(self, l):
|
||||
s = l.split('=')
|
||||
if len(s) == 2:
|
||||
self._kv_pairs[s[0].strip()] = s[1].strip()
|
||||
else:
|
||||
self._lines.append(l)
|
||||
|
||||
def get_lines(self):
|
||||
return self._lines
|
||||
|
||||
def get_line(self) -> Optional[str]:
|
||||
if len(self._lines) > 0:
|
||||
return self._lines[0]
|
||||
return None
|
||||
|
||||
def __getattr__(self, name):
|
||||
try:
|
||||
return self._kv_pairs[name]
|
||||
except KeyError:
|
||||
raise AttributeError(name)
|
||||
|
||||
def __setattr__(self, name, value):
|
||||
if name in self.__dict__:
|
||||
super().__setattr__(name, value)
|
||||
else:
|
||||
self._kv_pairs[name] = value
|
||||
|
||||
def __getstate__(self):
|
||||
return vars(self)
|
||||
|
||||
def __setstate__(self, state):
|
||||
vars(self).update(state)
|
||||
|
||||
|
||||
class ConfigFile:
|
||||
def __init__(self, *, file_name: Optional[str] = None):
|
||||
# parse the file
|
||||
self._file_name = file_name
|
||||
self._sections = {}
|
||||
if not file_name:
|
||||
return
|
||||
|
||||
cur_section = None
|
||||
with open(file_name) as f:
|
||||
for n, l in enumerate(f):
|
||||
l = l.strip()
|
||||
if l.startswith('#') or not l:
|
||||
continue
|
||||
if section_name := Section.section_header(l):
|
||||
if cur_section:
|
||||
self.add_section(cur_section)
|
||||
cur_section = Section(section_name)
|
||||
continue
|
||||
if not cur_section:
|
||||
raise ValueError(
|
||||
f'Error parsing config file: {file_name} line_num: {n} line: {l}'
|
||||
)
|
||||
cur_section.add_line(l)
|
||||
|
||||
if cur_section:
|
||||
self.add_section(cur_section)
|
||||
|
||||
def add_section(self, s: Section):
|
||||
self._sections[s.get_name()] = s
|
||||
|
||||
def get_file_name(self):
|
||||
return self._file_name
|
||||
|
||||
def __getstate__(self):
|
||||
return vars(self)
|
||||
|
||||
def __setstate__(self, state):
|
||||
vars(self).update(state)
|
||||
|
||||
def __getattr__(self, name):
|
||||
try:
|
||||
return self._sections[name]
|
||||
except KeyError:
|
||||
raise AttributeError(name)
|
||||
630
bin/sidechain/python/create_config_files.py
Executable file
630
bin/sidechain/python/create_config_files.py
Executable file
@@ -0,0 +1,630 @@
#!/usr/bin/env python3

# Generate rippled config files, each with their own ports, database paths, and validation_seeds.
# There will be configs for shards/no_shards, main/test nets, two config files for each combination
# (so one can run in a dogfood mode while another is tested). To avoid confusion, the directory path
# will be $data_dir/{main | test}.{shard | no_shard}.{dog | test}
# The config file will reside in that directory with the name rippled.cfg
# The validators file will reside in that directory with the name validators.txt
'''
Script to create config files for testing sidechains.

The rippled exe location can be set through the command line or
the environment variable RIPPLED_MAINCHAIN_EXE

The configs_dir (where the config files will reside) can be set through the command line
or the environment variable RIPPLED_SIDECHAIN_CFG_DIR
'''

import argparse
from dataclasses import dataclass
import json
import os
from pathlib import Path
import sys
from typing import Dict, List, Optional, Tuple, Union

from config_file import ConfigFile
from command import ValidationCreate, WalletPropose
from common import Account, Asset, eprint, XRP
from app import App, single_client_app

mainnet_validators = """
[validator_list_sites]
https://vl.ripple.com

[validator_list_keys]
ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
"""

altnet_validators = """
[validator_list_sites]
https://vl.altnet.rippletest.net

[validator_list_keys]
ED264807102805220DA0F312E71FC2C69E1552C9C5790F6C25E3729DEB573D5860
"""

node_size = 'medium'
default_data_dir = '/home/swd/data/rippled'

@dataclass
class Keypair:
    public_key: str
    secret_key: str
    account_id: Optional[str]


def generate_node_keypairs(n: int, rip: App) -> List[Keypair]:
    '''
    generate keypairs suitable for validator keys
    '''
    result = []
    for i in range(n):
        keys = rip(ValidationCreate())
        result.append(
            Keypair(public_key=keys["validation_public_key"],
                    secret_key=keys["validation_seed"],
                    account_id=None))
    return result


def generate_federator_keypairs(n: int, rip: App) -> List[Keypair]:
    '''
    generate keypairs suitable for federator keys
    '''
    result = []
    for i in range(n):
        keys = rip(WalletPropose(key_type='ed25519'))
        result.append(
            Keypair(public_key=keys["public_key"],
                    secret_key=keys["master_seed"],
                    account_id=keys["account_id"]))
    return result

class Ports:
    '''
    Port numbers for various services.
    Port numbers differ by cfg_index so different configs can run
    at the same time without interfering with each other.
    '''
    peer_port_base = 51235
    http_admin_port_base = 5005
    ws_public_port_base = 6005

    def __init__(self, cfg_index: int):
        self.peer_port = Ports.peer_port_base + cfg_index
        self.http_admin_port = Ports.http_admin_port_base + cfg_index
        self.ws_public_port = Ports.ws_public_port_base + (2 * cfg_index)
        # note admin port uses public port base
        self.ws_admin_port = Ports.ws_public_port_base + (2 * cfg_index) + 1

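For concreteness, the port arithmetic above (peer and http admin step by 1 per config, websockets by 2) works out as follows for cfg_index 3; these numbers are derived from the base constants and are not in the original:

# Ports(3) =>
#   peer_port        51238   (51235 + 3)
#   http_admin_port  5008    (5005 + 3)
#   ws_public_port   6011    (6005 + 2*3)
#   ws_admin_port    6012    (6005 + 2*3 + 1)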
class Network:
    def __init__(self, num_nodes: int, num_validators: int,
                 start_cfg_index: int, rip: App):
        self.validator_keypairs = generate_node_keypairs(num_validators, rip)
        self.ports = [Ports(start_cfg_index + i) for i in range(num_nodes)]


class SidechainNetwork(Network):
    def __init__(self, num_nodes: int, num_federators: int,
                 num_validators: int, start_cfg_index: int, rip: App):
        super().__init__(num_nodes, num_validators, start_cfg_index, rip)
        self.federator_keypairs = generate_federator_keypairs(
            num_federators, rip)
        self.main_account = rip(WalletPropose(key_type='secp256k1'))

class XChainAsset:
    def __init__(self, main_asset: Asset, side_asset: Asset,
                 main_value: Union[int, float], side_value: Union[int, float],
                 main_refund_penalty: Union[int, float],
                 side_refund_penalty: Union[int, float]):
        self.main_asset = main_asset(main_value)
        self.side_asset = side_asset(side_value)
        self.main_refund_penalty = main_asset(main_refund_penalty)
        self.side_refund_penalty = side_asset(side_refund_penalty)


def generate_asset_stanzas(
        assets: Optional[Dict[str, XChainAsset]] = None) -> str:
    if assets is None:
        # default to xrp only at a 1:1 value
        assets = {}
        assets['xrp_xrp_sidechain_asset'] = XChainAsset(
            XRP(0), XRP(0), 1, 1, 400, 400)

    index_stanza = """
[sidechain_assets]"""

    asset_stanzas = []

    for name, xchainasset in assets.items():
        index_stanza += '\n' + name
        new_stanza = f"""
[{name}]
mainchain_asset={json.dumps(xchainasset.main_asset.to_cmd_obj())}
sidechain_asset={json.dumps(xchainasset.side_asset.to_cmd_obj())}
mainchain_refund_penalty={json.dumps(xchainasset.main_refund_penalty.to_cmd_obj())}
sidechain_refund_penalty={json.dumps(xchainasset.side_refund_penalty.to_cmd_obj())}"""
        asset_stanzas.append(new_stanza)

    return index_stanza + '\n' + '\n'.join(asset_stanzas)

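With the default xrp-only asset, the text this function returns has roughly the shape below; the `...` values depend on Asset.to_cmd_obj, which is defined elsewhere and not shown in this diff:

[sidechain_assets]
xrp_xrp_sidechain_asset

[xrp_xrp_sidechain_asset]
mainchain_asset=...              # json.dumps of XRP(1).to_cmd_obj()
sidechain_asset=...
mainchain_refund_penalty=...     # json.dumps of XRP(400).to_cmd_obj()
sidechain_refund_penalty=...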
# The first element of the returned tuple is the sidechain stanza;
# the second element is the bootstrap stanza.
def generate_sidechain_stanza(
        mainchain_ports: Ports,
        main_account: dict,
        federators: List[Keypair],
        signing_key: str,
        mainchain_cfg_file: str,
        xchain_assets: Optional[Dict[str,
                                     XChainAsset]] = None) -> Tuple[str, str]:
    mainchain_ip = "127.0.0.1"

    federators_stanza = """
# federator signing public keys
[sidechain_federators]
"""
    federators_secrets_stanza = """
# federator signing secret keys (for standalone-mode testing only; normally won't be in a config file)
[sidechain_federators_secrets]
"""
    bootstrap_federators_stanza = """
# first value is federator signing public key, second is the signing pk account
[sidechain_federators]
"""

    assets_stanzas = generate_asset_stanzas(xchain_assets)

    for fed in federators:
        federators_stanza += f'{fed.public_key}\n'
        federators_secrets_stanza += f'{fed.secret_key}\n'
        bootstrap_federators_stanza += f'{fed.public_key} {fed.account_id}\n'

    sidechain_stanzas = f"""
[sidechain]
signing_key={signing_key}
mainchain_account={main_account["account_id"]}
mainchain_ip={mainchain_ip}
mainchain_port_ws={mainchain_ports.ws_public_port}
# mainchain config file is: {mainchain_cfg_file}

{assets_stanzas}

{federators_stanza}

{federators_secrets_stanza}
"""
    bootstrap_stanzas = f"""
[sidechain]
mainchain_secret={main_account["master_seed"]}

{bootstrap_federators_stanza}
"""
    return (sidechain_stanzas, bootstrap_stanzas)

# cfg_type will typically be either 'dog' or 'test', but can be any string. It is only used
# to create the data directories.
def generate_cfg_dir(*,
                     ports: Ports,
                     with_shards: bool,
                     main_net: bool,
                     cfg_type: str,
                     sidechain_stanza: str,
                     sidechain_bootstrap_stanza: str,
                     validation_seed: Optional[str] = None,
                     validators: Optional[List[str]] = None,
                     fixed_ips: Optional[List[Ports]] = None,
                     data_dir: str,
                     full_history: bool = False,
                     with_hooks: bool = False) -> str:
    ips_stanza = ''
    this_ip = '127.0.0.1'
    if fixed_ips:
        ips_stanza = '# Fixed ips for a testnet.\n'
        ips_stanza += '[ips_fixed]\n'
        for i, p in enumerate(fixed_ips):
            if p.peer_port == ports.peer_port:
                continue
            # rippled limits the number of connections per ip. So use the other loopback devices
            ips_stanza += f'127.0.0.{i+1} {p.peer_port}\n'
    else:
        ips_stanza = '# Where to find some other servers speaking the Ripple protocol.\n'
        ips_stanza += '[ips]\n'
        if main_net:
            ips_stanza += 'r.ripple.com 51235\n'
        else:
            ips_stanza += 'r.altnet.rippletest.net 51235\n'
    disable_shards = '' if with_shards else '# '
    disable_delete = '#' if full_history else ''
    history_line = 'full' if full_history else '256'
    earliest_seq_line = ''
    if sidechain_stanza:
        earliest_seq_line = 'earliest_seq=1'
    hooks_line = 'Hooks' if with_hooks else ''
    validation_seed_stanza = ''
    if validation_seed:
        validation_seed_stanza = f'''
[validation_seed]
{validation_seed}
'''
    node_size = 'medium'
    shard_str = 'shards' if with_shards else 'no_shards'
    net_str = 'main' if main_net else 'test'
    if not fixed_ips:
        sub_dir = data_dir + f'/{net_str}.{shard_str}.{cfg_type}'
        if sidechain_stanza:
            sub_dir += '.sidechain'
    else:
        sub_dir = data_dir + f'/{cfg_type}'
    db_path = sub_dir + '/db'
    debug_logfile = sub_dir + '/debug.log'
    shard_db_path = sub_dir + '/shards'
    node_db_path = db_path + '/nudb'

    cfg_str = f"""
[server]
port_rpc_admin_local
port_peer
port_ws_admin_local
port_ws_public
#ssl_key = /etc/ssl/private/server.key
#ssl_cert = /etc/ssl/certs/server.crt

[port_rpc_admin_local]
port = {ports.http_admin_port}
ip = {this_ip}
admin = {this_ip}
protocol = http

[port_peer]
port = {ports.peer_port}
ip = 0.0.0.0
protocol = peer

[port_ws_admin_local]
port = {ports.ws_admin_port}
ip = {this_ip}
admin = {this_ip}
protocol = ws

[port_ws_public]
port = {ports.ws_public_port}
ip = {this_ip}
protocol = ws
# protocol = wss

[node_size]
{node_size}

[ledger_history]
{history_line}

[node_db]
type=NuDB
path={node_db_path}
open_files=2000
filter_bits=12
cache_mb=256
file_size_mb=8
file_size_mult=2
{earliest_seq_line}
{disable_delete}online_delete=256
{disable_delete}advisory_delete=0

[database_path]
{db_path}

# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
{debug_logfile}

[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org

{ips_stanza}

[validators_file]
validators.txt

[rpc_startup]
{{ "command": "log_level", "severity": "fatal" }}
{{ "command": "log_level", "partition": "SidechainFederator", "severity": "trace" }}

[ssl_verify]
1

{validation_seed_stanza}

{disable_shards}[shard_db]
{disable_shards}type=NuDB
{disable_shards}path={shard_db_path}
{disable_shards}max_historical_shards=6

{sidechain_stanza}

[features]
{hooks_line}
PayChan
Flow
FlowCross
TickSize
fix1368
Escrow
fix1373
EnforceInvariants
SortedDirectories
fix1201
fix1512
fix1513
fix1523
fix1528
DepositAuth
Checks
fix1571
fix1543
fix1623
DepositPreauth
fix1515
fix1578
MultiSignReserve
fixTakerDryOfferRemoval
fixMasterKeyAsRegularKey
fixCheckThreading
fixPayChanRecipientOwnerDir
DeletableAccounts
fixQualityUpperBound
RequireFullyCanonicalSig
fix1781
HardenedValidations
fixAmendmentMajorityCalc
NegativeUNL
TicketBatch
FlowSortStrands
fixSTAmountCanonicalize
fixRmSmallIncreasedQOffers
CheckCashMakesTrustLine
"""

    validators_str = ''
    for p in [sub_dir, db_path, shard_db_path]:
        Path(p).mkdir(parents=True, exist_ok=True)
    # Add the validators.txt file
    if validators:
        validators_str = '[validators]\n'
        for k in validators:
            validators_str += f'{k}\n'
    else:
        validators_str = mainnet_validators if main_net else altnet_validators
    with open(sub_dir + "/validators.txt", "w") as f:
        f.write(validators_str)

    # add the rippled.cfg file
    with open(sub_dir + "/rippled.cfg", "w") as f:
        f.write(cfg_str)

    if sidechain_bootstrap_stanza:
        # add the bootstrap file
        with open(sub_dir + "/sidechain_bootstrap.cfg", "w") as f:
            f.write(sidechain_bootstrap_stanza)

    return sub_dir + "/rippled.cfg"

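Tracing the path construction above, a call without fixed_ips produces the net.shard.cfg_type layout described in the header comment; the concrete paths here are illustrative, derived from the code rather than from the diff:

# generate_cfg_dir(ports=Ports(1), with_shards=False, main_net=True,
#                  cfg_type='dog', sidechain_stanza='',
#                  sidechain_bootstrap_stanza='', data_dir='/tmp/cfgs')
# creates:
#   /tmp/cfgs/main.no_shards.dog/rippled.cfg
#   /tmp/cfgs/main.no_shards.dog/validators.txt
#   /tmp/cfgs/main.no_shards.dog/db/
#   /tmp/cfgs/main.no_shards.dog/shards/
# and returns the rippled.cfg path.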
def generate_multinode_net(out_dir: str,
                           mainnet: Network,
                           sidenet: SidechainNetwork,
                           xchain_assets: Optional[Dict[str,
                                                        XChainAsset]] = None):
    mainnet_cfgs = []
    for i in range(len(mainnet.ports)):
        validator_kp = mainnet.validator_keypairs[i]
        ports = mainnet.ports[i]
        mainchain_cfg_file = generate_cfg_dir(
            ports=ports,
            with_shards=False,
            main_net=True,
            cfg_type=f'mainchain_{i}',
            sidechain_stanza='',
            sidechain_bootstrap_stanza='',
            validation_seed=validator_kp.secret_key,
            data_dir=out_dir)
        mainnet_cfgs.append(mainchain_cfg_file)

    for i in range(len(sidenet.ports)):
        validator_kp = sidenet.validator_keypairs[i]
        ports = sidenet.ports[i]

        mainnet_i = i % len(mainnet.ports)
        sidechain_stanza, sidechain_bootstrap_stanza = generate_sidechain_stanza(
            mainnet.ports[mainnet_i], sidenet.main_account,
            sidenet.federator_keypairs,
            sidenet.federator_keypairs[i].secret_key, mainnet_cfgs[mainnet_i],
            xchain_assets)

        generate_cfg_dir(
            ports=ports,
            with_shards=False,
            main_net=True,
            cfg_type=f'sidechain_{i}',
            sidechain_stanza=sidechain_stanza,
            sidechain_bootstrap_stanza=sidechain_bootstrap_stanza,
            validation_seed=validator_kp.secret_key,
            validators=[kp.public_key for kp in sidenet.validator_keypairs],
            fixed_ips=sidenet.ports,
            data_dir=out_dir,
            full_history=True,
            with_hooks=False)

def parse_args():
    parser = argparse.ArgumentParser(
        description=('Create config files for testing sidechains'))

    parser.add_argument(
        '--exe',
        '-e',
        help=('path to rippled executable'),
    )

    parser.add_argument(
        '--usd',
        '-u',
        action='store_true',
        help=('include a USD/root IOU asset for cross chain transfers'),
    )

    parser.add_argument(
        '--cfgs_dir',
        '-c',
        help=(
            'path to configuration file dir (where the output config files will be located)'
        ),
    )

    return parser.parse_known_args()[0]


class Params:
    def __init__(self):
        args = parse_args()

        self.exe = None
        if 'RIPPLED_MAINCHAIN_EXE' in os.environ:
            self.exe = os.environ['RIPPLED_MAINCHAIN_EXE']
        if args.exe:
            self.exe = args.exe

        self.configs_dir = None
        if 'RIPPLED_SIDECHAIN_CFG_DIR' in os.environ:
            self.configs_dir = os.environ['RIPPLED_SIDECHAIN_CFG_DIR']
        if args.cfgs_dir:
            self.configs_dir = args.cfgs_dir

        self.usd = False
        if args.usd:
            self.usd = args.usd

    def check_error(self) -> Optional[str]:
        '''
        Check for errors. Return `None` if no errors,
        otherwise return a string describing the error
        '''
        if not self.exe:
            return 'Missing exe location. Either set the env variable RIPPLED_MAINCHAIN_EXE or use the --exe command line switch'
        if not self.configs_dir:
            return 'Missing configs directory location. Either set the env variable RIPPLED_SIDECHAIN_CFG_DIR or use the --cfgs_dir command line switch'

def main(params: Params,
         xchain_assets: Optional[Dict[str, XChainAsset]] = None):

    if err_str := params.check_error():
        eprint(err_str)
        sys.exit(1)
    index = 0
    nonvalidator_cfg_file_name = generate_cfg_dir(
        ports=Ports(index),
        with_shards=False,
        main_net=True,
        cfg_type='non_validator',
        sidechain_stanza='',
        sidechain_bootstrap_stanza='',
        validation_seed=None,
        data_dir=params.configs_dir)
    index = index + 1

    nonvalidator_config = ConfigFile(file_name=nonvalidator_cfg_file_name)
    with single_client_app(exe=params.exe,
                           config=nonvalidator_config,
                           standalone=True) as rip:
        mainnet = Network(num_nodes=1,
                          num_validators=1,
                          start_cfg_index=index,
                          rip=rip)
        sidenet = SidechainNetwork(num_nodes=5,
                                   num_federators=5,
                                   num_validators=5,
                                   start_cfg_index=index + 1,
                                   rip=rip)
        generate_multinode_net(
            out_dir=f'{params.configs_dir}/sidechain_testnet',
            mainnet=mainnet,
            sidenet=sidenet,
            xchain_assets=xchain_assets)
        index = index + 2

        (Path(params.configs_dir) / 'logs').mkdir(parents=True, exist_ok=True)

        for with_shards in [True, False]:
            for is_main_net in [True, False]:
                for cfg_type in ['dog', 'test', 'one', 'two']:
                    if not is_main_net and cfg_type not in ['dog', 'test']:
                        continue

                    mainnet = Network(num_nodes=1,
                                      num_validators=1,
                                      start_cfg_index=index,
                                      rip=rip)
                    mainchain_cfg_file = generate_cfg_dir(
                        data_dir=params.configs_dir,
                        ports=mainnet.ports[0],
                        with_shards=with_shards,
                        main_net=is_main_net,
                        cfg_type=cfg_type,
                        sidechain_stanza='',
                        sidechain_bootstrap_stanza='',
                        validation_seed=mainnet.validator_keypairs[0].secret_key)

                    sidenet = SidechainNetwork(num_nodes=1,
                                               num_federators=5,
                                               num_validators=1,
                                               start_cfg_index=index + 1,
                                               rip=rip)
                    signing_key = sidenet.federator_keypairs[0].secret_key

                    sidechain_stanza, sidechain_bootstrap_stanza = generate_sidechain_stanza(
                        mainnet.ports[0], sidenet.main_account,
                        sidenet.federator_keypairs, signing_key,
                        mainchain_cfg_file, xchain_assets)

                    generate_cfg_dir(
                        data_dir=params.configs_dir,
                        ports=sidenet.ports[0],
                        with_shards=with_shards,
                        main_net=is_main_net,
                        cfg_type=cfg_type,
                        sidechain_stanza=sidechain_stanza,
                        sidechain_bootstrap_stanza=sidechain_bootstrap_stanza,
                        validation_seed=sidenet.validator_keypairs[0].secret_key)
                    index = index + 2


if __name__ == '__main__':
    params = Params()

    xchain_assets = None
    if params.usd:
        xchain_assets = {}
        xchain_assets['xrp_xrp_sidechain_asset'] = XChainAsset(
            XRP(0), XRP(0), 1, 1, 200, 200)
        root_account = Account(account_id="rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh")
        main_iou_asset = Asset(value=0, currency='USD', issuer=root_account)
        side_iou_asset = Asset(value=0, currency='USD', issuer=root_account)
        xchain_assets['iou_iou_sidechain_asset'] = XChainAsset(
            main_iou_asset, side_iou_asset, 1, 1, 0.02, 0.02)

    main(params, xchain_assets)
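A typical invocation of this script, following the env-var/CLI precedence implemented in Params above (all paths are placeholders):

# RIPPLED_MAINCHAIN_EXE=/path/to/rippled \
# RIPPLED_SIDECHAIN_CFG_DIR=/path/to/cfgs \
# python3 create_config_files.py --usd
#
# or equivalently, since the command-line switches override the env vars:
# python3 create_config_files.py --exe /path/to/rippled --cfgs_dir /path/to/cfgs --usd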
1496  bin/sidechain/python/interactive.py  Normal file
File diff suppressed because it is too large
198  bin/sidechain/python/log_analyzer.py  Executable file
@@ -0,0 +1,198 @@
#!/usr/bin/env python3

import argparse
import json
import os
import re
import sys

from common import eprint
from typing import IO, Optional


class LogLine:
    UNSTRUCTURED_RE = re.compile(r'''(?x)
                # The x flag enables insignificant whitespace mode (allowing comments)
                ^(?P<timestamp>.*UTC)
                [\ ]
                (?P<module>[^:]*):(?P<level>[^\ ]*)
                [\ ]
                (?P<msg>.*$)
                ''')

    STRUCTURED_RE = re.compile(r'''(?x)
                # The x flag enables insignificant whitespace mode (allowing comments)
                ^(?P<timestamp>.*UTC)
                [\ ]
                (?P<module>[^:]*):(?P<level>[^\ ]*)
                [\ ]
                (?P<msg>[^{]*)
                [\ ]
                (?P<json_data>.*$)
                ''')

    def __init__(self, line: str):
        self.raw_line = line
        self.json_data = None

        try:
            if line.endswith('}'):
                m = self.STRUCTURED_RE.match(line)
                try:
                    self.json_data = json.loads(m.group('json_data'))
                except:
                    m = self.UNSTRUCTURED_RE.match(line)
            else:
                m = self.UNSTRUCTURED_RE.match(line)

            self.timestamp = m.group('timestamp')
            self.level = m.group('level')
            self.module = m.group('module')
            self.msg = m.group('msg')
        except Exception as e:
            eprint(f'init exception: {e} line: {line}')

    def to_mixed_json_str(self) -> str:
        '''
        return a pretty printed string as mixed json
        '''
        try:
            r = f'{self.timestamp} {self.module}:{self.level} {self.msg}'
            if self.json_data:
                r += '\n' + json.dumps(self.json_data, indent=1)
            return r
        except:
            eprint(f'Using raw line: {self.raw_line}')
            return self.raw_line

    def to_pure_json(self, f_id: Optional[str] = None) -> dict:
        '''
        return a json dict
        '''
        result = {}
        if f_id is not None:
            # 'f' identifies the directory the log file came from (see convert_all)
            result['f'] = f_id
        result['t'] = self.timestamp
        result['m'] = self.module
        result['l'] = self.level
        result['msg'] = self.msg
        if self.json_data:
            result['data'] = self.json_data
        return result

    def to_pure_json_str(self, f_id: Optional[str] = None) -> str:
        '''
        return a pretty printed string as pure json
        '''
        try:
            return json.dumps(self.to_pure_json(f_id), indent=1)
        except:
            return self.raw_line

def convert_log(in_file_name: str,
                out: IO,
                *,
                as_list=False,
                pure_json=False,
                module: Optional[str] = 'SidechainFederator') -> list:
    result = []
    try:
        prev_lines = None
        with open(in_file_name) as input:
            for l in input:
                l = l.strip()
                if not l:
                    continue
                if LogLine.UNSTRUCTURED_RE.match(l):
                    if prev_lines:
                        log_line = LogLine(prev_lines)
                        if not module or log_line.module == module:
                            if as_list:
                                result.append(log_line.to_pure_json())
                            else:
                                if pure_json:
                                    print(log_line.to_pure_json_str(),
                                          file=out)
                                else:
                                    print(log_line.to_mixed_json_str(),
                                          file=out)
                    prev_lines = l
                else:
                    if not prev_lines:
                        eprint(f'Error: Expected prev_lines. Cur line: {l}')
                    assert prev_lines
                    prev_lines += f' {l}'
            if prev_lines:
                log_line = LogLine(prev_lines)
                if not module or log_line.module == module:
                    if as_list:
                        result.append(log_line.to_pure_json())
                    else:
                        if pure_json:
                            print(log_line.to_pure_json_str(),
                                  file=out,
                                  flush=True)
                        else:
                            print(log_line.to_mixed_json_str(),
                                  file=out,
                                  flush=True)
    except Exception as e:
        eprint(f'Exception: {e}')
        raise e
    return result

def convert_all(in_dir_name: str, out: IO, *, pure_json=False):
    '''
    Convert all the "debug.log" log files one directory level below in_dir_name into a single json file.
    There will be a field called 'f' for the directory name that the original log file came from.
    This is useful when analyzing networks that run on the local machine.
    '''
    if not os.path.isdir(in_dir_name):
        print(f'Error: {in_dir_name} is not a directory')
    files = []
    f_ids = []
    for subdir in os.listdir(in_dir_name):
        file = f'{in_dir_name}/{subdir}/debug.log'
        if not os.path.isfile(file):
            continue
        files.append(file)
        f_ids.append(subdir)

    result = {}
    for f, f_id in zip(files, f_ids):
        l = convert_log(f, out, as_list=True, pure_json=pure_json, module=None)
        result[f_id] = l
    print(json.dumps(result, indent=1), file=out, flush=True)


def parse_args():
    parser = argparse.ArgumentParser(
        description=('python script to convert log files to json'))

    parser.add_argument(
        '--input',
        '-i',
        help=('input log file or sidechain config directory structure'),
    )

    parser.add_argument(
        '--output',
        '-o',
        help=('output log file'),
    )

    return parser.parse_known_args()[0]


if __name__ == '__main__':
    try:
        args = parse_args()
        with open(args.output, "w") as out:
            if os.path.isdir(args.input):
                convert_all(args.input, out, pure_json=True)
            else:
                convert_log(args.input, out, pure_json=True)
    except Exception as e:
        eprint(f'Exception: {e}')
        raise e
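Typical invocations, following the __main__ dispatch above (paths are placeholders):

# convert a single debug.log to pure json:
# python3 log_analyzer.py --input /path/to/config_dir/debug.log --output debug.json
#
# or point --input at a config directory tree to combine every */debug.log:
# python3 log_analyzer.py --input /path/to/configs_dir --output combined.json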
285  bin/sidechain/python/log_report.py  Executable file
@@ -0,0 +1,285 @@
#!/usr/bin/env python3

import argparse
from collections import defaultdict
import datetime
import json
import numpy as np
import os
import pandas as pd
import string
import sys
from typing import Dict, Set

from common import eprint
import log_analyzer


def _has_256bit_hex_field_other(data, result: Set[str]):
    return


_has_256bit_hex_field_overloads = defaultdict(
    lambda: _has_256bit_hex_field_other)


def _has_256bit_hex_field_str(data: str, result: Set[str]):
    if len(data) != 64:
        return
    for c in data:
        o = ord(c.upper())
        if ord('A') <= o <= ord('F'):
            continue
        if ord('0') <= o <= ord('9'):
            continue
        return
    result.add(data)


_has_256bit_hex_field_overloads[str] = _has_256bit_hex_field_str


def _has_256bit_hex_field_dict(data: dict, result: Set[str]):
    for k, v in data.items():
        if k in [
                "meta", "index", "LedgerIndex", "ledger_index", "ledger_hash",
                "SigningPubKey", "suppression"
        ]:
            continue
        _has_256bit_hex_field_overloads[type(v)](v, result)


_has_256bit_hex_field_overloads[dict] = _has_256bit_hex_field_dict


def _has_256bit_hex_field_list(data: list, result: Set[str]):
    for v in data:
        _has_256bit_hex_field_overloads[type(v)](v, result)


_has_256bit_hex_field_overloads[list] = _has_256bit_hex_field_list


def has_256bit_hex_field(data: dict) -> Set[str]:
    '''
    Find all the fields that are strings 64 chars long with only hex digits.
    This is useful when grouping transactions by hash
    '''
    result = set()
    _has_256bit_hex_field_dict(data, result)
    return result

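A worked example of the type-dispatch above; the data is hypothetical, with a 64-hex-char string standing in for a real transaction hash:

# h = 'AB' * 32   # 64 hex chars, so the str overload collects it
# has_256bit_hex_field({'tx_json': {'hash': h, 'ledger_hash': h, 'x': 'short'}})
# == {h}
# 'ledger_hash' is on the skip list in the dict overload, and 'short'
# fails the length check, so only the 'hash' value is returned.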
def group_by_txn(data: dict) -> dict:
    '''
    return a dictionary where the key is the transaction hash and the value is another dictionary.
    In the second dictionary, the key is the server id, and the value is a list of log items
    '''
    def _make_default():
        return defaultdict(lambda: list())

    result = defaultdict(_make_default)
    for server_id, log_list in data.items():
        for log_item in log_list:
            if txn_hashes := has_256bit_hex_field(log_item):
                for h in txn_hashes:
                    result[h][server_id].append(log_item)
    return result


def _rekey_dict_by_txn_date(hash_to_timestamp: dict,
                            grouped_by_txn: dict) -> dict:
    '''
    hash_to_timestamp is a dictionary with a key of the txn hash and a value of the timestamp.
    grouped_by_txn is a dictionary with a key of the txn and an unspecified value.
    The keys in hash_to_timestamp are a superset of the keys in grouped_by_txn.
    This function returns a new grouped_by_txn dictionary with the transactions sorted by date.
    '''
    known_txns = [
        k for k, v in sorted(hash_to_timestamp.items(), key=lambda x: x[1])
    ]
    result = {}
    for k, v in grouped_by_txn.items():
        if k not in known_txns:
            result[k] = v
    for h in known_txns:
        # hash_to_timestamp may contain hashes absent from grouped_by_txn
        if h in grouped_by_txn:
            result[h] = grouped_by_txn[h]
    return result


def _to_timestamp(str_time: str) -> datetime.datetime:
    return datetime.datetime.strptime(
        str_time.split('.')[0], "%Y-%b-%d %H:%M:%S")

class Report:
    def __init__(self, in_dir, out_dir):
        self.in_dir = in_dir
        self.out_dir = out_dir

        self.combined_logs_file_name = f'{self.out_dir}/combined_logs.json'
        self.grouped_by_txn_file_name = f'{self.out_dir}/grouped_by_txn.json'
        self.counts_by_txn_and_server_file_name = f'{self.out_dir}/counts_by_txn_and_server.org'
        self.data = None  # combined logs

        # grouped_by_txn is a dictionary where the key is the server id. mainchain servers
        # have a key of `mainchain_#` and sidechain servers have a key of
        # `sidechain_#`, where `#` is a number.
        self.grouped_by_txn = None

        if not os.path.isdir(in_dir):
            eprint(f'The input {self.in_dir} must be an existing directory')
            sys.exit(1)

        if os.path.exists(self.out_dir):
            if not os.path.isdir(self.out_dir):
                eprint(
                    f'The output: {self.out_dir} exists and is not a directory'
                )
                sys.exit(1)
        else:
            os.makedirs(self.out_dir)

        self.combine_logs()
        with open(self.combined_logs_file_name) as f:
            self.data = json.load(f)
        self.grouped_by_txn = group_by_txn(self.data)

        # counts_by_txn_and_server is a dictionary where the key is the txn_hash
        # and the value is a pandas df with a row for every server and a column for every message;
        # the value is a count of how many times that message appears for that server.
        counts_by_txn_and_server = {}
        # dict where the key is a transaction hash and the value is the transaction
        hash_to_txn = {}
        # dict where the key is a transaction hash and the value is the earliest timestamp in a log file
        hash_to_timestamp = {}
        for txn_hash, server_dict in self.grouped_by_txn.items():
            message_set = set()
            # message list is ordered by when it appears in the log
            message_list = []
            for server_id, messages in server_dict.items():
                for m in messages:
                    try:
                        d = m['data']
                        if 'msg' in d and 'transaction' in d['msg']:
                            t = d['msg']['transaction']
                        elif 'tx_json' in d:
                            t = d['tx_json']
                        if t['hash'] == txn_hash:
                            hash_to_txn[txn_hash] = t
                    except:
                        pass
                    msg = m['msg']
                    t = _to_timestamp(m['t'])
                    if txn_hash not in hash_to_timestamp:
                        hash_to_timestamp[txn_hash] = t
                    elif hash_to_timestamp[txn_hash] > t:
                        hash_to_timestamp[txn_hash] = t
                    if msg not in message_set:
                        message_set.add(msg)
                        message_list.append(msg)
            df = pd.DataFrame(0,
                              index=server_dict.keys(),
                              columns=message_list)
            for server_id, messages in server_dict.items():
                for m in messages:
                    df.loc[server_id, m['msg']] += 1
            counts_by_txn_and_server[txn_hash] = df

        # sort the transactions by timestamp, but put the txns with unknown timestamp at the beginning
        self.grouped_by_txn = _rekey_dict_by_txn_date(hash_to_timestamp,
                                                      self.grouped_by_txn)
        counts_by_txn_and_server = _rekey_dict_by_txn_date(
            hash_to_timestamp, counts_by_txn_and_server)

        with open(self.grouped_by_txn_file_name, 'w') as out:
            print(json.dumps(self.grouped_by_txn, indent=1), file=out)

        with open(self.counts_by_txn_and_server_file_name, 'w') as out:
            for txn_hash, df in counts_by_txn_and_server.items():
                print(f'\n\n* Txn: {txn_hash}', file=out)
                if txn_hash in hash_to_txn:
                    print(json.dumps(hash_to_txn[txn_hash], indent=1),
                          file=out)
                rename_dict = {}
                for column, renamed_column in zip(df.columns.array,
                                                  string.ascii_uppercase):
                    print(f'{renamed_column} = {column}', file=out)
                    rename_dict[column] = renamed_column
                df.rename(columns=rename_dict, inplace=True)
                print(f'\n{df}', file=out)

    def combine_logs(self):
        try:
            with open(self.combined_logs_file_name, "w") as out:
                log_analyzer.convert_all(self.in_dir, out, pure_json=True)
        except Exception as e:
            eprint(f'Exception: {e}')
            raise e

def main(input_dir_name: str, output_dir_name: str):
    r = Report(input_dir_name, output_dir_name)

    # Values are a list of log lines formatted as json. There are five fields:
    # `t` is the timestamp.
    # `m` is the module.
    # `l` is the log level.
    # `msg` is the message.
    # `data` is the data.
    # For example:
    #
    # {
    #  "t": "2021-Oct-08 21:33:41.731371562 UTC",
    #  "m": "SidechainFederator",
    #  "l": "TRC",
    #  "msg": "no last xchain txn with result",
    #  "data": {
    #   "needsOtherChainLastXChainTxn": true,
    #   "isMainchain": false,
    #   "jlogId": 121
    #  }
    # },

    # Lifecycle of a transaction
    # For each federator record:
    # Transaction detected: amount, seq, destination, chain, hash
    # Signature received: hash, seq
    # Signature sent: hash, seq, federator dst
    # Transaction submitted
    # Result received, and detect if error
    # Detect any field that doesn't match

    # Lifecycle of initialization

    # Chain listener messages


def parse_args():
    parser = argparse.ArgumentParser(description=(
        'python script to generate a log report from a sidechain config directory structure containing the logs'
    ))

    parser.add_argument(
        '--input',
        '-i',
        help=('directory with sidechain config directory structure'),
    )

    parser.add_argument(
        '--output',
        '-o',
        help=('output directory for report files'),
    )

    return parser.parse_known_args()[0]


if __name__ == '__main__':
    try:
        args = parse_args()
        main(args.input, args.output)
    except Exception as e:
        eprint(f'Exception: {e}')
        raise e
14  bin/sidechain/python/requirements.txt  Normal file
@@ -0,0 +1,14 @@
attrs==21.2.0
iniconfig==1.1.1
numpy==1.21.2
packaging==21.0
pandas==1.3.3
pluggy==1.0.0
py==1.10.0
pyparsing==2.4.7
pytest==6.2.5
python-dateutil==2.8.2
pytz==2021.1
six==1.16.0
toml==0.10.2
websockets==8.1
35  bin/sidechain/python/riplrepl.py  Executable file
@@ -0,0 +1,35 @@
#!/usr/bin/env python3
'''
Script to run an interactive shell to test sidechains.
'''

import sys

from common import disable_eprint, eprint
import interactive
import sidechain


def main():
    params = sidechain.Params()
    params.interactive = True

    interactive.set_hooks_dir(params.hooks_dir)

    if err_str := params.check_error():
        eprint(err_str)
        sys.exit(1)

    if params.verbose:
        print("eprint enabled")
    else:
        disable_eprint()

    if params.standalone:
        sidechain.standalone_interactive_repl(params)
    else:
        sidechain.multinode_interactive_repl(params)


if __name__ == '__main__':
    main()
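Since Params comes from sidechain.py, the repl accepts the same switches and environment variables as that script; a plausible invocation (paths are placeholders):

# RIPPLED_MAINCHAIN_EXE=... RIPPLED_SIDECHAIN_EXE=... RIPPLED_SIDECHAIN_CFG_DIR=... \
# python3 riplrepl.py --standalone --verbose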
193  bin/sidechain/python/ripple_client.py  Normal file
@@ -0,0 +1,193 @@
import asyncio
import datetime
import json
import os
from os.path import expanduser
import subprocess
import sys
from typing import Callable, List, Optional, Union
import time
import websockets

# LogLevel and Stop are used by set_log_level/stop below
from command import Command, LogLevel, ServerInfo, Stop, SubscriptionCommand
from common import eprint
from config_file import ConfigFile


class RippleClient:
    '''Client to send commands to the rippled server'''
    def __init__(self,
                 *,
                 config: ConfigFile,
                 exe: str,
                 command_log: Optional[str] = None):
        self.config = config
        self.exe = exe
        self.command_log = command_log
        section = config.port_ws_admin_local
        self.websocket_uri = f'{section.protocol}://{section.ip}:{section.port}'
        self.subscription_websockets = []
        self.tasks = []
        self.pid = None
        if command_log:
            with open(self.command_log, 'w') as f:
                f.write('# Start\n')

    @property
    def config_file_name(self):
        return self.config.get_file_name()

    def shutdown(self):
        try:
            group = asyncio.gather(*self.tasks, return_exceptions=True)
            group.cancel()
            asyncio.get_event_loop().run_until_complete(group)
            for ws in self.subscription_websockets:
                asyncio.get_event_loop().run_until_complete(ws.close())
        except asyncio.CancelledError:
            pass

    def set_pid(self, pid: int):
        self.pid = pid

    def get_pid(self) -> Optional[int]:
        return self.pid

    def get_config(self) -> ConfigFile:
        return self.config

    # Get a dict of the server_state, validated_ledger_seq, and complete_ledgers
    def get_brief_server_info(self) -> dict:
        ret = {
            'server_state': 'NA',
            'ledger_seq': 'NA',
            'complete_ledgers': 'NA'
        }
        if not self.pid or self.pid == -1:
            return ret
        r = self.send_command(ServerInfo())
        if 'info' not in r:
            return ret
        r = r['info']
        for f in ['server_state', 'complete_ledgers']:
            if f in r:
                ret[f] = r[f]
        if 'validated_ledger' in r:
            ret['ledger_seq'] = r['validated_ledger']['seq']
        return ret

    def _write_command_log_command(self, cmd: str, cmd_index: int) -> None:
        if not self.command_log:
            return
        with open(self.command_log, 'a') as f:
            f.write(f'\n\n# command {cmd_index}\n')
            f.write(f'{cmd}')

    def _write_command_log_result(self, result: str, cmd_index: int) -> None:
        if not self.command_log:
            return
        with open(self.command_log, 'a') as f:
            f.write(f'\n\n# result {cmd_index}\n')
            f.write(f'{result}')

    def _send_command_line_command(self, cmd_id: int, *args) -> dict:
        '''Send the command to the rippled server using the command line interface'''
        to_run = [self.exe, '-q', '--conf', self.config_file_name, '--']
        to_run.extend(args)
        self._write_command_log_command(to_run, cmd_id)
        max_retries = 4
        for retry_count in range(0, max_retries + 1):
            try:
                r = subprocess.check_output(to_run)
                self._write_command_log_result(r, cmd_id)
                return json.loads(r.decode('utf-8'))['result']
            except Exception as e:
                if retry_count == max_retries:
                    raise
                eprint(
                    f'Got exception: {str(e)}\nretrying..{retry_count+1} of {max_retries}'
                )
                time.sleep(1)  # give process time to startup

    async def _send_websock_command(
            self,
            cmd: Command,
            conn: Optional[websockets.client.Connect] = None) -> dict:
        assert self.websocket_uri
        if conn is None:
            async with websockets.connect(self.websocket_uri) as ws:
                return await self._send_websock_command(cmd, ws)

        to_send = json.dumps(cmd.get_websocket_dict())
        self._write_command_log_command(to_send, cmd.cmd_id)
        await conn.send(to_send)
        r = await conn.recv()
        self._write_command_log_result(r, cmd.cmd_id)
        j = json.loads(r)
        if 'result' not in j:
            eprint(
                f'Error sending websocket command: {json.dumps(cmd.get_websocket_dict(), indent=1)}'
            )
            eprint(f'Result: {json.dumps(j, indent=1)}')
            raise ValueError('Error sending websocket command')
        return j['result']

    def send_command(self, cmd: Command) -> dict:
        '''Send the command to the rippled server'''
        if self.websocket_uri:
            return asyncio.get_event_loop().run_until_complete(
                self._send_websock_command(cmd))
        return self._send_command_line_command(cmd.cmd_id,
                                               *cmd.get_command_line_list())

    # Need async version to close ledgers from async functions
    async def async_send_command(self, cmd: Command) -> dict:
        '''Send the command to the rippled server'''
        if self.websocket_uri:
            return await self._send_websock_command(cmd)
        return self._send_command_line_command(cmd.cmd_id,
                                               *cmd.get_command_line_list())

    def send_subscribe_command(
            self,
            cmd: SubscriptionCommand,
            callback: Optional[Callable[[dict], None]] = None) -> dict:
        '''Send the command to the rippled server'''
        assert self.websocket_uri
        ws = cmd.websocket
        if ws is None:
            # subscribe
            assert callback
            ws = asyncio.get_event_loop().run_until_complete(
                websockets.connect(self.websocket_uri))
            self.subscription_websockets.append(ws)
        result = asyncio.get_event_loop().run_until_complete(
            self._send_websock_command(cmd, ws))
        if cmd.websocket is not None:
            # unsubscribed. close the websocket
            self.subscription_websockets.remove(cmd.websocket)
            cmd.websocket.close()
            cmd.websocket = None
        else:
            # setup a task to read the websocket
            cmd.websocket = ws  # must be set after the _send_websock_command or will unsubscribe

            async def subscribe_callback(ws: websockets.client.Connect,
                                         cb: Callable[[dict], None]):
                while True:
                    r = await ws.recv()
                    d = json.loads(r)
                    cb(d)

            task = asyncio.get_event_loop().create_task(
                subscribe_callback(cmd.websocket, callback))
            self.tasks.append(task)
        return result

    def stop(self):
        '''Stop the server'''
        return self.send_command(Stop())

    def set_log_level(self, severity: str, *, partition: Optional[str] = None):
        '''Set the server log level'''
        return self.send_command(LogLevel(severity, partition=partition))
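A minimal usage sketch, assuming a generated rippled.cfg like those from create_config_files.py and a rippled server already running against it (paths are placeholders):

# config = ConfigFile(file_name='/path/to/main.no_shards.dog/rippled.cfg')
# client = RippleClient(config=config, exe='/path/to/rippled')
# client.set_pid(12345)                  # normally set by the code that spawns rippled
# print(client.get_brief_server_info())  # {'server_state': ..., 'ledger_seq': ..., 'complete_ledgers': ...}

Commands go over the admin websocket taken from [port_ws_admin_local] when websocket_uri is set, with the command-line interface as the fallback path.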
583  bin/sidechain/python/sidechain.py  Executable file
@@ -0,0 +1,583 @@
#!/usr/bin/env python3
'''
Script to test and debug sidechains.

The mainchain exe location can be set through the command line or
the environment variable RIPPLED_MAINCHAIN_EXE

The sidechain exe location can be set through the command line or
the environment variable RIPPLED_SIDECHAIN_EXE

The configs_dir (generated with create_config_files.py) can be set through the command line
or the environment variable RIPPLED_SIDECHAIN_CFG_DIR
'''

import argparse
import json
from multiprocessing import Process, Value
import os
import sys
import time
from typing import Callable, Dict, List, Optional

from app import App, single_client_app, testnet_app, configs_for_testnet
from command import AccountInfo, AccountTx, LedgerAccept, LogLevel, Subscribe
from common import Account, Asset, eprint, disable_eprint, XRP
from config_file import ConfigFile
import interactive
from log_analyzer import convert_log
from test_utils import mc_wait_for_payment_detect, sc_wait_for_payment_detect, mc_connect_subscription, sc_connect_subscription
from transaction import AccountSet, Payment, SignerListSet, SetRegularKey, Ticket, Trust


def parse_args_helper(parser: argparse.ArgumentParser):

    parser.add_argument(
        '--debug_sidechain',
        '-ds',
        action='store_true',
        help=('Mode to debug sidechain (prompt to run sidechain in gdb)'),
    )

    parser.add_argument(
        '--debug_mainchain',
        '-dm',
        action='store_true',
        help=('Mode to debug mainchain (prompt to run mainchain in gdb)'),
    )

    parser.add_argument(
        '--exe_mainchain',
        '-em',
        help=('path to mainchain rippled executable'),
    )

    parser.add_argument(
        '--exe_sidechain',
        '-es',
        help=('path to sidechain rippled executable'),
    )

    parser.add_argument(
        '--cfgs_dir',
        '-c',
        help=(
            'path to configuration file dir (generated with create_config_files.py)'
        ),
    )

    parser.add_argument(
        '--standalone',
        '-a',
        action='store_true',
        help=('run standalone tests'),
    )

    parser.add_argument(
        '--interactive',
        '-i',
        action='store_true',
        help=('run interactive repl'),
    )

    parser.add_argument(
        '--quiet',
        '-q',
        action='store_true',
        help=('Disable printing errors (eprint disabled)'),
    )

    parser.add_argument(
        '--verbose',
        '-v',
        action='store_true',
        help=('Enable printing errors (eprint enabled)'),
    )

    # Pauses are used for attaching debuggers and looking at logs at known checkpoints
    parser.add_argument(
        '--with_pauses',
        '-p',
        action='store_true',
        help=(
            'Add pauses at certain checkpoints in tests until "enter" key is hit'
        ),
    )

    parser.add_argument(
        '--hooks_dir',
        help=('path to hooks dir'),
    )


def parse_args():
    parser = argparse.ArgumentParser(description=('Test and debug sidechains'))
    parse_args_helper(parser)
    return parser.parse_known_args()[0]

class Params:
    def __init__(self, *, configs_dir: Optional[str] = None):
        args = parse_args()

        self.debug_sidechain = False
        if args.debug_sidechain:
            self.debug_sidechain = args.debug_sidechain
        self.debug_mainchain = False
        if args.debug_mainchain:
            self.debug_mainchain = args.debug_mainchain

        # Undocumented feature: if the environment variable RIPPLED_SIDECHAIN_RR is set, it is
        # assumed to point to the rr executable. Sidechain server 0 will then be run under rr.
        self.sidechain_rr = None
        if 'RIPPLED_SIDECHAIN_RR' in os.environ:
            self.sidechain_rr = os.environ['RIPPLED_SIDECHAIN_RR']

        self.standalone = args.standalone
        self.with_pauses = args.with_pauses
        self.interactive = args.interactive
        self.quiet = args.quiet
        self.verbose = args.verbose

        self.mainchain_exe = None
        if 'RIPPLED_MAINCHAIN_EXE' in os.environ:
            self.mainchain_exe = os.environ['RIPPLED_MAINCHAIN_EXE']
        if args.exe_mainchain:
            self.mainchain_exe = args.exe_mainchain

        self.sidechain_exe = None
        if 'RIPPLED_SIDECHAIN_EXE' in os.environ:
            self.sidechain_exe = os.environ['RIPPLED_SIDECHAIN_EXE']
        if args.exe_sidechain:
            self.sidechain_exe = args.exe_sidechain

        self.configs_dir = None
        if 'RIPPLED_SIDECHAIN_CFG_DIR' in os.environ:
            self.configs_dir = os.environ['RIPPLED_SIDECHAIN_CFG_DIR']
        if args.cfgs_dir:
            self.configs_dir = args.cfgs_dir
        if configs_dir is not None:
            self.configs_dir = configs_dir

        self.hooks_dir = None
        if 'RIPPLED_SIDECHAIN_HOOKS_DIR' in os.environ:
            self.hooks_dir = os.environ['RIPPLED_SIDECHAIN_HOOKS_DIR']
        if args.hooks_dir:
            self.hooks_dir = args.hooks_dir

        if not self.configs_dir:
            self.mainchain_config = None
            self.sidechain_config = None
            self.sidechain_bootstrap_config = None
            self.genesis_account = None
            self.mc_door_account = None
            self.user_account = None
            self.sc_door_account = None
            self.federators = None
            return

        if self.standalone:
            self.mainchain_config = ConfigFile(
                file_name=f'{self.configs_dir}/main.no_shards.dog/rippled.cfg')
            self.sidechain_config = ConfigFile(
                file_name=
                f'{self.configs_dir}/main.no_shards.dog.sidechain/rippled.cfg')
            self.sidechain_bootstrap_config = ConfigFile(
                file_name=
                f'{self.configs_dir}/main.no_shards.dog.sidechain/sidechain_bootstrap.cfg'
            )
        else:
            self.mainchain_config = ConfigFile(
                file_name=
                f'{self.configs_dir}/sidechain_testnet/main.no_shards.mainchain_0/rippled.cfg'
            )
            self.sidechain_config = ConfigFile(
                file_name=
                f'{self.configs_dir}/sidechain_testnet/sidechain_0/rippled.cfg'
            )
            self.sidechain_bootstrap_config = ConfigFile(
                file_name=
                f'{self.configs_dir}/sidechain_testnet/sidechain_0/sidechain_bootstrap.cfg'
            )

        self.genesis_account = Account(
            account_id='rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh',
            secret_key='masterpassphrase',
            nickname='genesis')
        self.mc_door_account = Account(
            account_id=self.sidechain_config.sidechain.mainchain_account,
            secret_key=self.sidechain_bootstrap_config.sidechain.
            mainchain_secret,
            nickname='door')
        self.user_account = Account(
            account_id='rJynXY96Vuq6B58pST9K5Ak5KgJ2JcRsQy',
            secret_key='snVsJfrr2MbVpniNiUU6EDMGBbtzN',
            nickname='alice')

        self.sc_door_account = Account(
            account_id='rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh',
            secret_key='masterpassphrase',
            nickname='door')
        self.federators = [
            l.split()[1].strip() for l in
            self.sidechain_bootstrap_config.sidechain_federators.get_lines()
        ]

    def check_error(self) -> Optional[str]:
        '''
        Check for errors. Return `None` if no errors,
        otherwise return a string describing the error
        '''
        if not self.mainchain_exe:
            return 'Missing mainchain_exe location. Either set the env variable RIPPLED_MAINCHAIN_EXE or use the --exe_mainchain command line switch'
        if not self.sidechain_exe:
            return 'Missing sidechain_exe location. Either set the env variable RIPPLED_SIDECHAIN_EXE or use the --exe_sidechain command line switch'
        if not self.configs_dir:
            return 'Missing configs directory location. Either set the env variable RIPPLED_SIDECHAIN_CFG_DIR or use the --cfgs_dir command line switch'
        if self.verbose and self.quiet:
            return 'Cannot specify both verbose and quiet options at the same time'


mainDoorKeeper = 0
sideDoorKeeper = 1
updateSignerList = 2


def setup_mainchain(mc_app: App,
                    params: Params,
                    setup_user_accounts: bool = True):
    mc_app.add_to_keymanager(params.mc_door_account)
    if setup_user_accounts:
        mc_app.add_to_keymanager(params.user_account)

    mc_app(LogLevel('fatal'))

    # Allow rippling through the genesis account
    mc_app(AccountSet(account=params.genesis_account).set_default_ripple(True))
    mc_app.maybe_ledger_accept()

    # Create and fund the mc door account
    mc_app(
        Payment(account=params.genesis_account,
                dst=params.mc_door_account,
                amt=XRP(10_000)))
    mc_app.maybe_ledger_accept()

    # Create a trust line so USD/root account ious can be sent cross chain
    mc_app(
        Trust(account=params.mc_door_account,
              limit_amt=Asset(value=1_000_000,
                              currency='USD',
                              issuer=params.genesis_account)))

    # set the chain's signer list and disable the master key
    divide = 4 * len(params.federators)
    by = 5
    quorum = (divide + by - 1) // by
    mc_app(
        SignerListSet(account=params.mc_door_account,
                      quorum=quorum,
                      keys=params.federators))
    mc_app.maybe_ledger_accept()
    r = mc_app(Ticket(account=params.mc_door_account, src_tag=mainDoorKeeper))
    mc_app.maybe_ledger_accept()
    mc_app(Ticket(account=params.mc_door_account, src_tag=sideDoorKeeper))
    mc_app.maybe_ledger_accept()
    mc_app(Ticket(account=params.mc_door_account, src_tag=updateSignerList))
    mc_app.maybe_ledger_accept()
    mc_app(AccountSet(account=params.mc_door_account).set_disable_master())
    mc_app.maybe_ledger_accept()

    if setup_user_accounts:
        # Create and fund a regular user account
        mc_app(
            Payment(account=params.genesis_account,
                    dst=params.user_account,
                    amt=XRP(2_000)))
        mc_app.maybe_ledger_accept()

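The quorum computed in setup_mainchain (and again in setup_sidechain below) is ceil(4n/5), i.e. 80% of the federators rounded up; the worked numbers here are derived from that formula:

# 5 federators: divide = 20, by = 5 -> quorum = (20 + 4) // 5 = 4
# 3 federators: divide = 12, by = 5 -> quorum = (12 + 4) // 5 = 3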
def setup_sidechain(sc_app: App,
                    params: Params,
                    setup_user_accounts: bool = True):
    sc_app.add_to_keymanager(params.sc_door_account)
    if setup_user_accounts:
        sc_app.add_to_keymanager(params.user_account)

    sc_app(LogLevel('fatal'))
    sc_app(LogLevel('trace', partition='SidechainFederator'))

    # set the chain's signer list and disable the master key
    divide = 4 * len(params.federators)
    by = 5
    quorum = (divide + by - 1) // by
    sc_app(
        SignerListSet(account=params.genesis_account,
                      quorum=quorum,
                      keys=params.federators))
    sc_app.maybe_ledger_accept()
    sc_app(Ticket(account=params.genesis_account, src_tag=mainDoorKeeper))
    sc_app.maybe_ledger_accept()
    sc_app(Ticket(account=params.genesis_account, src_tag=sideDoorKeeper))
    sc_app.maybe_ledger_accept()
    sc_app(Ticket(account=params.genesis_account, src_tag=updateSignerList))
    sc_app.maybe_ledger_accept()
    sc_app(AccountSet(account=params.genesis_account).set_disable_master())
    sc_app.maybe_ledger_accept()


def _xchain_transfer(from_chain: App, to_chain: App, src: Account,
                     dst: Account, amt: Asset, from_chain_door: Account,
                     to_chain_door: Account):
    memos = [{'Memo': {'MemoData': dst.account_id_str_as_hex()}}]
    from_chain(Payment(account=src, dst=from_chain_door, amt=amt, memos=memos))
    from_chain.maybe_ledger_accept()
    if to_chain.standalone:
        # from_chain (side chain) sends a txn, but won't close the to_chain (main chain) ledger
        time.sleep(1)
        to_chain.maybe_ledger_accept()

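The cross-chain trigger in _xchain_transfer is just a Payment to the door account whose memo names the destination on the other chain. A sketch of the memo shape, assuming account_id_str_as_hex (defined in common.py, not shown in this diff) hex-encodes the classic address string:

# memos = [{'Memo': {'MemoData': dst.account_id_str_as_hex()}}]
# roughly equivalent to: 'rHb9...'.encode('ascii').hex().upper() as the MemoData
# payload; the federators watching the door account decode it to pick the
# destination account for the matching payment on the other chain.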
def main_to_side_transfer(mc_app: App, sc_app: App, src: Account, dst: Account,
|
||||
amt: Asset, params: Params):
|
||||
_xchain_transfer(mc_app, sc_app, src, dst, amt, params.mc_door_account,
|
||||
params.sc_door_account)
|
||||
|
||||
|
||||
def side_to_main_transfer(mc_app: App, sc_app: App, src: Account, dst: Account,
|
||||
amt: Asset, params: Params):
|
||||
_xchain_transfer(sc_app, mc_app, src, dst, amt, params.sc_door_account,
|
||||
params.mc_door_account)
|
||||
|
||||
|
||||
def simple_test(mc_app: App, sc_app: App, params: Params):
    try:
        bob = sc_app.create_account('bob')
        main_to_side_transfer(mc_app, sc_app, params.user_account, bob,
                              XRP(200), params)
        main_to_side_transfer(mc_app, sc_app, params.user_account, bob,
                              XRP(60), params)

        if params.with_pauses:
            _convert_log_files_to_json(
                mc_app.get_configs() + sc_app.get_configs(),
                'checkpoint1.json')
            input(
                "Pausing to check for main -> side txns (press enter to continue)"
            )

        side_to_main_transfer(mc_app, sc_app, bob, params.user_account, XRP(9),
                              params)
        side_to_main_transfer(mc_app, sc_app, bob, params.user_account,
                              XRP(11), params)

        if params.with_pauses:
            input(
                "Pausing to check for side -> main txns (press enter to continue)"
            )
    finally:
        _convert_log_files_to_json(mc_app.get_configs() + sc_app.get_configs(),
                                   'final.json')


def _rm_debug_log(config: ConfigFile):
    try:
        debug_log = config.debug_logfile.get_line()
        if debug_log:
            print(f'removing debug file: {debug_log}', flush=True)
            os.remove(debug_log)
    except Exception:
        pass


def _standalone_with_callback(params: Params,
                              callback: Callable[[App, App], None],
                              setup_user_accounts: bool = True):

    if params.debug_mainchain:
        input("Start mainchain server and press enter to continue: ")
    else:
        _rm_debug_log(params.mainchain_config)
    with single_client_app(config=params.mainchain_config,
                           exe=params.mainchain_exe,
                           standalone=True,
                           run_server=not params.debug_mainchain) as mc_app:

        mc_connect_subscription(mc_app, params.mc_door_account)
        setup_mainchain(mc_app, params, setup_user_accounts)

        if params.debug_sidechain:
            input("Start sidechain server and press enter to continue: ")
        else:
            _rm_debug_log(params.sidechain_config)
        with single_client_app(
                config=params.sidechain_config,
                exe=params.sidechain_exe,
                standalone=True,
                run_server=not params.debug_sidechain) as sc_app:

            sc_connect_subscription(sc_app, params.sc_door_account)
            setup_sidechain(sc_app, params, setup_user_accounts)
            callback(mc_app, sc_app)


def _convert_log_files_to_json(to_convert: List[ConfigFile], suffix: str):
    '''
    Convert the log files to json
    '''
    for c in to_convert:
        try:
            debug_log = c.debug_logfile.get_line()
            if not os.path.exists(debug_log):
                continue
            converted_log = f'{debug_log}.{suffix}'
            if os.path.exists(converted_log):
                os.remove(converted_log)
            print(f'Converting log {debug_log} to {converted_log}', flush=True)
            convert_log(debug_log, converted_log, pure_json=True)
        except Exception:
            eprint('Exception converting log')


def _multinode_with_callback(params: Params,
                             callback: Callable[[App, App], None],
                             setup_user_accounts: bool = True):

    mainchain_cfg = ConfigFile(
        file_name=
        f'{params.configs_dir}/sidechain_testnet/main.no_shards.mainchain_0/rippled.cfg'
    )
    _rm_debug_log(mainchain_cfg)
    if params.debug_mainchain:
        input("Start mainchain server and press enter to continue: ")
    with single_client_app(config=mainchain_cfg,
                           exe=params.mainchain_exe,
                           standalone=True,
                           run_server=not params.debug_mainchain) as mc_app:

        if params.with_pauses:
            input("Pausing after mainchain start (press enter to continue)")

        mc_connect_subscription(mc_app, params.mc_door_account)
        setup_mainchain(mc_app, params, setup_user_accounts)
        if params.with_pauses:
            input("Pausing after mainchain setup (press enter to continue)")

        testnet_configs = configs_for_testnet(
            f'{params.configs_dir}/sidechain_testnet/sidechain_')
        for c in testnet_configs:
            _rm_debug_log(c)

        run_server_list = [True] * len(testnet_configs)
        if params.debug_sidechain:
            run_server_list[0] = False
            input(
                f'Start testnet server {testnet_configs[0].get_file_name()} and press enter to continue: '
            )

        with testnet_app(exe=params.sidechain_exe,
                         configs=testnet_configs,
                         run_server=run_server_list,
                         sidechain_rr=params.sidechain_rr) as n_app:

            if params.with_pauses:
                input("Pausing after testnet start (press enter to continue)")

            sc_connect_subscription(n_app, params.sc_door_account)
            setup_sidechain(n_app, params, setup_user_accounts)
            if params.with_pauses:
                input(
                    "Pausing after sidechain setup (press enter to continue)")
            callback(mc_app, n_app)


def standalone_test(params: Params):
    def callback(mc_app: App, sc_app: App):
        simple_test(mc_app, sc_app, params)

    _standalone_with_callback(params, callback)


def multinode_test(params: Params):
    def callback(mc_app: App, sc_app: App):
        simple_test(mc_app, sc_app, params)

    _multinode_with_callback(params, callback)


# The mainchain runs in standalone mode. Most operations - like cross chain
# payments - will automatically close ledgers. However, some operations, like
# refunds, need an extra close. This loop automatically closes ledgers.
def close_mainchain_ledgers(stop_token: Value, params: Params, sleep_time=4):
    with single_client_app(config=params.mainchain_config,
                           exe=params.mainchain_exe,
                           standalone=True,
                           run_server=False) as mc_app:
        while stop_token.value != 0:
            mc_app.maybe_ledger_accept()
            time.sleep(sleep_time)


def standalone_interactive_repl(params: Params):
    def callback(mc_app: App, sc_app: App):
        # process will run while stop token is non-zero
        stop_token = Value('i', 1)
        p = None
        if mc_app.standalone:
            p = Process(target=close_mainchain_ledgers,
                        args=(stop_token, params))
            p.start()
        try:
            interactive.repl(mc_app, sc_app)
        finally:
            if p:
                stop_token.value = 0
                p.join()

    _standalone_with_callback(params, callback, setup_user_accounts=False)


def multinode_interactive_repl(params: Params):
    def callback(mc_app: App, sc_app: App):
        # process will run while stop token is non-zero
        stop_token = Value('i', 1)
        p = None
        if mc_app.standalone:
            p = Process(target=close_mainchain_ledgers,
                        args=(stop_token, params))
            p.start()
        try:
            interactive.repl(mc_app, sc_app)
        finally:
            if p:
                stop_token.value = 0
                p.join()

    _multinode_with_callback(params, callback, setup_user_accounts=False)


def main():
    params = Params()
    interactive.set_hooks_dir(params.hooks_dir)

    if err_str := params.check_error():
        eprint(err_str)
        sys.exit(1)

    if params.quiet:
        print("Disabling eprint")
        disable_eprint()

    if params.interactive:
        if params.standalone:
            standalone_interactive_repl(params)
        else:
            multinode_interactive_repl(params)
    elif params.standalone:
        standalone_test(params)
    else:
        multinode_test(params)


if __name__ == '__main__':
    main()
176  bin/sidechain/python/test_utils.py  Normal file
@@ -0,0 +1,176 @@
import asyncio
import collections
from contextlib import contextmanager
import json
import logging
import pprint
import time
from typing import Callable, Dict, List, Optional

from app import App, balances_dataframe
from common import Account, Asset, XRP, eprint
from command import Subscribe

MC_SUBSCRIBE_QUEUE = []
SC_SUBSCRIBE_QUEUE = []


def _mc_subscribe_callback(v: dict):
    MC_SUBSCRIBE_QUEUE.append(v)
    logging.info(f'mc subscribe_callback:\n{json.dumps(v, indent=1)}')


def _sc_subscribe_callback(v: dict):
    SC_SUBSCRIBE_QUEUE.append(v)
    logging.info(f'sc subscribe_callback:\n{json.dumps(v, indent=1)}')


def mc_connect_subscription(app: App, door_account: Account):
    app(Subscribe(account_history_account=door_account),
        _mc_subscribe_callback)


def sc_connect_subscription(app: App, door_account: Account):
    app(Subscribe(account_history_account=door_account),
        _sc_subscribe_callback)


# This pops elements off the subscribe_queue until the transaction is found.
# It modifies the queue in place.
async def async_wait_for_payment_detect(app: App, subscribe_queue: List[dict],
                                        src: Account, dst: Account,
                                        amt_asset: Asset):
    logging.info(
        f'Wait for payment {src.account_id = } {dst.account_id = } {amt_asset = }'
    )
    n_txns = 10  # keep this many txns in a circular buffer.
    # If the payment is not detected, write them to the log.
    last_n_paytxns = collections.deque(maxlen=n_txns)
    for i in range(30):
        while subscribe_queue:
            d = subscribe_queue.pop(0)
            if 'transaction' not in d:
                continue
            txn = d['transaction']
            if txn['TransactionType'] != 'Payment':
                continue

            txn_asset = Asset(from_rpc_result=txn['Amount'])
            if txn['Account'] == src.account_id and txn[
                    'Destination'] == dst.account_id and txn_asset == amt_asset:
                if d['engine_result_code'] == 0:
                    logging.info(
                        f'Found payment {src.account_id = } {dst.account_id = } {amt_asset = }'
                    )
                    return
                else:
                    logging.error(
                        f'Expected payment failed {src.account_id = } {dst.account_id = } {amt_asset = }'
                    )
                    raise ValueError(
                        f'Expected payment failed {src.account_id = } {dst.account_id = } {amt_asset = }'
                    )
            else:
                last_n_paytxns.append(txn)
        if i > 0 and not (i % 5):
            logging.warning(
                f'Waiting for txn detect {src.account_id = } {dst.account_id = } {amt_asset = }'
            )
        # The side chain can send transactions to the main chain, but won't close
        # the ledger. We don't know when the transaction will be sent, so we may
        # need to close the ledger here.
        await app.async_maybe_ledger_accept()
        await asyncio.sleep(2)
    logging.warning(
        f'Last {len(last_n_paytxns)} pay txns while waiting for payment detect'
    )
    for t in last_n_paytxns:
        logging.warning(
            f'Detected pay transaction while waiting for payment: {t}')
    logging.error(
        f'Expected txn detect {src.account_id = } {dst.account_id = } {amt_asset = }'
    )
    raise ValueError(
        f'Expected txn detect {src.account_id = } {dst.account_id = } {amt_asset = }'
    )


def mc_wait_for_payment_detect(app: App, src: Account, dst: Account,
                               amt_asset: Asset):
    logging.info('mainchain waiting for payment detect')
    return asyncio.get_event_loop().run_until_complete(
        async_wait_for_payment_detect(app, MC_SUBSCRIBE_QUEUE, src, dst,
                                      amt_asset))


def sc_wait_for_payment_detect(app: App, src: Account, dst: Account,
                               amt_asset: Asset):
    logging.info('sidechain waiting for payment detect')
    return asyncio.get_event_loop().run_until_complete(
        async_wait_for_payment_detect(app, SC_SUBSCRIBE_QUEUE, src, dst,
                                      amt_asset))


def wait_for_balance_change(app: App,
                            acc: Account,
                            pre_balance: Asset,
                            expected_diff: Optional[Asset] = None):
    logging.info(
        f'waiting for balance change {acc.account_id = } {pre_balance = } {expected_diff = }'
    )
    for i in range(30):
        new_bal = app.get_balance(acc, pre_balance(0))
        diff = new_bal - pre_balance
        if new_bal != pre_balance:
            logging.info(
                f'Balance changed {acc.account_id = } {pre_balance = } {new_bal = } {diff = } {expected_diff = }'
            )
            if expected_diff is None or diff == expected_diff:
                return
        app.maybe_ledger_accept()
        time.sleep(2)
        if i > 0 and not (i % 5):
            logging.warning(
                f'Waiting for balance to change {acc.account_id = } {pre_balance = }'
            )
    logging.error(
        f'Expected balance to change {acc.account_id = } {pre_balance = } {new_bal = } {diff = } {expected_diff = }'
    )
    raise ValueError(
        f'Expected balance to change {acc.account_id = } {pre_balance = } {new_bal = } {diff = } {expected_diff = }'
    )


def log_chain_state(mc_app, sc_app, log, msg='Chain State'):
    chains = [mc_app, sc_app]
    chain_names = ['mainchain', 'sidechain']
    balances = balances_dataframe(chains, chain_names)
    df_as_str = balances.to_string(float_format=lambda x: f'{x:,.6f}')
    log(f'{msg} Balances: \n{df_as_str}')
    federator_info = sc_app.federator_info()
    log(f'{msg} Federator Info: \n{pprint.pformat(federator_info)}')


# Tests can set this to True to help debug test failures by showing account
# balances in the log before the test runs
test_context_verbose_logging = False


@contextmanager
def test_context(mc_app, sc_app, verbose_logging: Optional[bool] = None):
    '''Write extra context info to the log on test failure'''
    global test_context_verbose_logging
    if verbose_logging is None:
        verbose_logging = test_context_verbose_logging
    start_time = time.monotonic()
    try:
        if verbose_logging:
            log_chain_state(mc_app, sc_app, logging.info)
        yield
    except:
        log_chain_state(mc_app, sc_app, logging.error)
        raise
    finally:
        elapsed_time = time.monotonic() - start_time
        logging.info(f'Test elapsed time: {elapsed_time}')
        if verbose_logging:
            log_chain_state(mc_app, sc_app, logging.info)
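# Typical use of test_context, as the tests below do (sketch): on failure the
# context manager logs chain balances and federator info automatically.
#
#   with test_utils.test_context(mc_app, sc_app):
#       ...  # test body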
216  bin/sidechain/python/testnet.py  Normal file
@@ -0,0 +1,216 @@
'''
Bring up a rippled test network from a set of config files with fixed ips.
'''

from contextlib import contextmanager
import glob
import os
import subprocess
import time
from typing import Callable, List, Optional, Set, Union

from command import ServerInfo
from config_file import ConfigFile
from ripple_client import RippleClient


class Network:
    # If run_server is None, run all the servers.
    # This is useful to help debugging.
    def __init__(
            self,
            exe: str,
            configs: List[ConfigFile],
            *,
            command_logs: Optional[List[str]] = None,
            run_server: Optional[List[bool]] = None,
            # undocumented feature. If with_rr is not None, assume it points to
            # the rr debugger executable and run server 0 under rr
            with_rr: Optional[str] = None,
            extra_args: Optional[List[List[str]]] = None):

        self.with_rr = with_rr
        if not configs:
            raise ValueError('Must specify at least one config')

        if run_server and len(run_server) != len(configs):
            raise ValueError(
                f'run_server length must match number of configs (or be None): {len(configs) = } {len(run_server) = }'
            )

        self.configs = configs
        self.clients = []
        self.running_server_indexes = set()
        self.processes = {}

        if not run_server:
            run_server = []
        run_server += [True] * (len(configs) - len(run_server))

        self.run_server = run_server

        if not command_logs:
            command_logs = []
        command_logs += [None] * (len(configs) - len(command_logs))

        self.command_logs = command_logs

        # remove the old database directories.
        # we want tests to start from the same empty state every time
        for config in self.configs:
            db_path = config.database_path.get_line()
            if db_path and os.path.isdir(db_path):
                files = glob.glob(f'{db_path}/**', recursive=True)
                for f in files:
                    if os.path.isdir(f):
                        continue
                    os.unlink(f)

        for config, log in zip(self.configs, self.command_logs):
            client = RippleClient(config=config, command_log=log, exe=exe)
            self.clients.append(client)

        self.servers_start(extra_args=extra_args)

    def shutdown(self):
        for a in self.clients:
            a.shutdown()

        self.servers_stop()

    def num_clients(self) -> int:
        return len(self.clients)

    def get_client(self, i: int) -> RippleClient:
        return self.clients[i]

    def get_configs(self) -> List[ConfigFile]:
        return [c.config for c in self.clients]

    def get_pids(self) -> List[int]:
        return [c.get_pid() for c in self.clients if c.get_pid() is not None]

    # Get a dict of the server_state, validated_ledger_seq, and complete_ledgers
    def get_brief_server_info(self) -> dict:
        ret = {'server_state': [], 'ledger_seq': [], 'complete_ledgers': []}
        for c in self.clients:
            r = c.get_brief_server_info()
            for (k, v) in r.items():
                ret[k].append(v)
        return ret

    # Returns true if the server is running, false if not. Note, this relies on
    # servers being shut down through the `servers_stop` interface. If a server
    # crashes, or is started or stopped through other means, an incorrect status
    # may be reported.
    def get_running_status(self) -> List[bool]:
        return [
            i in self.running_server_indexes for i in range(len(self.clients))
        ]

    def is_running(self, index: int) -> bool:
        return index in self.running_server_indexes

    def wait_for_validated_ledger(self, server_index: Optional[int] = None):
        '''
        Don't return until the network has at least one validated ledger
        '''

        if server_index is None:
            for i in range(len(self.configs)):
                self.wait_for_validated_ledger(i)
            return

        client = self.clients[server_index]
        for i in range(600):
            r = client.send_command(ServerInfo())
            state = None
            if 'info' in r:
                state = r['info']['server_state']
                if state == 'proposing':
                    print(f'Synced: {server_index} : {state}', flush=True)
                    break
            if not i % 10:
                print(f'Waiting for sync: {server_index} : {state}',
                      flush=True)
            time.sleep(1)

        for i in range(600):
            r = client.send_command(ServerInfo())
            complete_ledgers = None
            if 'info' in r:
                complete_ledgers = r['info']['complete_ledgers']
                if complete_ledgers and complete_ledgers != 'empty':
                    print(
                        f'Have complete ledgers: {server_index} : {complete_ledgers}',
                        flush=True)
                    return
            if not i % 10:
                print(
                    f'Waiting for complete_ledgers: {server_index} : {complete_ledgers}',
                    flush=True)
            time.sleep(1)

        raise ValueError(f'Could not sync server {client.config_file_name}')

    def servers_start(self,
                      server_indexes: Optional[Union[Set[int],
                                                     List[int]]] = None,
                      *,
                      extra_args: Optional[List[List[str]]] = None):
        if server_indexes is None:
            server_indexes = [i for i in range(len(self.clients))]

        if extra_args is None:
            extra_args = []
        extra_args += [list()] * (len(self.configs) - len(extra_args))

        for i in server_indexes:
            if i in self.running_server_indexes or not self.run_server[i]:
                continue

            client = self.clients[i]
            to_run = [client.exe, '--conf', client.config_file_name]
            if self.with_rr and i == 0:
                to_run = [self.with_rr, 'record'] + to_run
                print(f'Starting server with rr {client.config_file_name}')
            else:
                print(f'Starting server {client.config_file_name}')
            fout = open(os.devnull, 'w')
            p = subprocess.Popen(to_run + extra_args[i],
                                 stdout=fout,
                                 stderr=subprocess.STDOUT)
            client.set_pid(p.pid)
            print(
                f'started rippled: config: {client.config_file_name} PID: {p.pid}',
                flush=True)
            self.running_server_indexes.add(i)
            self.processes[i] = p

        time.sleep(2)  # give servers time to start

    def servers_stop(self,
                     server_indexes: Optional[Union[Set[int],
                                                    List[int]]] = None):
        if server_indexes is None:
            server_indexes = self.running_server_indexes.copy()

        if 0 in server_indexes:
            print(
                'WARNING: Server 0 is being stopped. RPC commands cannot be sent until this is restarted.'
            )

        for i in server_indexes:
            if i not in self.running_server_indexes:
                continue
            client = self.clients[i]
            to_run = [client.exe, '--conf', client.config_file_name]
            fout = open(os.devnull, 'w')
            subprocess.Popen(to_run + ['stop'],
                             stdout=fout,
                             stderr=subprocess.STDOUT)
            self.running_server_indexes.discard(i)

        for i in server_indexes:
            self.processes[i].wait()
            del self.processes[i]
            self.get_client(i).set_pid(-1)
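# For reference, get_brief_server_info() collects one list entry per client;
# for a three-node network the result is shaped like this (values illustrative):
#
#   {'server_state': ['proposing', 'proposing', 'proposing'],
#    'ledger_seq': [42, 42, 42],
#    'complete_ledgers': ['2-42', '2-42', '2-42']}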
64  bin/sidechain/python/tests/conftest.py  Normal file
@@ -0,0 +1,64 @@
# Add parent directory to module path
import os, sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from common import Account, Asset, XRP
import create_config_files
import sidechain

import pytest
'''
Sidechains uses argparse.ArgumentParser to add command line options.
The function call to add an argument is `add_argument`. pytest uses `addoption`.
This wrapper class changes calls from `add_argument` to calls to `addoption`.
To avoid conflicts between pytest and sidechains, all sidechain arguments have
the suffix `_sc` appended to them. I.e. `--verbose` is for pytest, `--verbose_sc`
is for sidechains.
'''


class ArgumentParserWrapper:
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def add_argument(self, *args, **kwargs):
        for a in args:
            if not a.startswith('--'):
                continue
            a = a + '_sc'
            self.wrapped.addoption(a, **kwargs)


def pytest_addoption(parser):
    wrapped = ArgumentParserWrapper(parser)
    sidechain.parse_args_helper(wrapped)


def _xchain_assets(ratio: int = 1):
    assets = {}
    assets['xrp_xrp_sidechain_asset'] = create_config_files.XChainAsset(
        XRP(0), XRP(0), 1, 1 * ratio, 200, 200 * ratio)
    root_account = Account(account_id="rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh")
    main_iou_asset = Asset(value=0, currency='USD', issuer=root_account)
    side_iou_asset = Asset(value=0, currency='USD', issuer=root_account)
    assets['iou_iou_sidechain_asset'] = create_config_files.XChainAsset(
        main_iou_asset, side_iou_asset, 1, 1 * ratio, 0.02, 0.02 * ratio)
    return assets


# Dictionary of config dirs. The key is the ratio.
_config_dirs = None


@pytest.fixture
def configs_dirs_dict(tmp_path):
    global _config_dirs
    if not _config_dirs:
        params = create_config_files.Params()
        _config_dirs = {}
        for ratio in (1, 2):
            params.configs_dir = str(tmp_path / f'test_config_files_{ratio}')
            create_config_files.main(params, _xchain_assets(ratio))
            _config_dirs[ratio] = params.configs_dir

    return _config_dirs
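# With this wrapper in place, sidechain options are passed to pytest with the
# `_sc` suffix. Illustrative invocation (the exact option names depend on what
# sidechain.parse_args_helper registers; these are assumptions):
#
#   pytest tests/simple_xchain_transfer_test.py --standalone_sc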
117  bin/sidechain/python/tests/door_test.py  Normal file
@@ -0,0 +1,117 @@
from typing import Dict
from app import App
from common import XRP
from sidechain import Params
import sidechain
import test_utils
import time
from transaction import Payment
import tst_common

batch_test_num_accounts = 200


def door_test(mc_app: App, sc_app: App, params: Params):
    # setup, create accounts on both chains
    for i in range(batch_test_num_accounts):
        name = "m_" + str(i)
        account_main = mc_app.create_account(name)
        name = "s_" + str(i)
        account_side = sc_app.create_account(name)
        mc_app(
            Payment(account=params.genesis_account,
                    dst=account_main,
                    amt=XRP(20_000)))
        mc_app.maybe_ledger_accept()
    account_main_last = mc_app.account_from_alias("m_" +
                                                  str(batch_test_num_accounts -
                                                      1))
    test_utils.wait_for_balance_change(mc_app, account_main_last, XRP(0),
                                       XRP(20_000))

    # test
    to_side_xrp = XRP(1000)
    to_main_xrp = XRP(100)
    last_tx_xrp = XRP(343)
    with test_utils.test_context(mc_app, sc_app, True):
        # send xchain payments to open accounts on the sidechain
        for i in range(batch_test_num_accounts):
            name_main = "m_" + str(i)
            account_main = mc_app.account_from_alias(name_main)
            name_side = "s_" + str(i)
            account_side = sc_app.account_from_alias(name_side)
            memos = [{
                'Memo': {
                    'MemoData': account_side.account_id_str_as_hex()
                }
            }]
            mc_app(
                Payment(account=account_main,
                        dst=params.mc_door_account,
                        amt=to_side_xrp,
                        memos=memos))

        while True:
            federator_info = sc_app.federator_info()
            should_loop = False
            for v in federator_info.values():
                for c in ['mainchain', 'sidechain']:
                    state = v['info'][c]['listener_info']['state']
                    if state != 'normal':
                        should_loop = True
            if not should_loop:
                break
            time.sleep(1)
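        # (The loop above blocks until every federator reports a 'normal'
        # listener state for both chains; federator_info() is assumed to
        # return a dict keyed by server, matching the nested access used here.)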
        # wait some time for the door to change
        door_closing = False
        door_reopened = False
        for i in range(batch_test_num_accounts * 2 + 40):
            server_index = [0]
            federator_info = sc_app.federator_info(server_index)
            for v in federator_info.values():
                door_status = v['info']['mainchain']['door_status']['status']
                if not door_closing:
                    if door_status != 'open':
                        door_closing = True
                else:
                    if door_status == 'open':
                        door_reopened = True

            if not door_reopened:
                time.sleep(1)
                mc_app.maybe_ledger_accept()
            else:
                break

        if not door_reopened:
            raise ValueError('Expected door status changes did not happen')

        # wait for the accounts created on the sidechain
        for i in range(batch_test_num_accounts):
            name_side = "s_" + str(i)
            account_side = sc_app.account_from_alias(name_side)
            test_utils.wait_for_balance_change(sc_app, account_side, XRP(0),
                                               to_side_xrp)

        # try one xchain payment in each direction
        name_main = "m_" + str(0)
        account_main = mc_app.account_from_alias(name_main)
        name_side = "s_" + str(0)
        account_side = sc_app.account_from_alias(name_side)

        pre_bal = mc_app.get_balance(account_main, XRP(0))
        sidechain.side_to_main_transfer(mc_app, sc_app, account_side,
                                        account_main, to_main_xrp, params)
        test_utils.wait_for_balance_change(mc_app, account_main, pre_bal,
                                           to_main_xrp)

        pre_bal = sc_app.get_balance(account_side, XRP(0))
        sidechain.main_to_side_transfer(mc_app, sc_app, account_main,
                                        account_side, last_tx_xrp, params)
        test_utils.wait_for_balance_change(sc_app, account_side, pre_bal,
                                           last_tx_xrp)


def test_door_operations(configs_dirs_dict: Dict[int, str]):
    tst_common.test_start(configs_dirs_dict, door_test)
151  bin/sidechain/python/tests/simple_xchain_transfer_test.py  Normal file
@@ -0,0 +1,151 @@
import logging
import pprint
import pytest
from multiprocessing import Process, Value
from typing import Dict
import sys

from app import App
from common import Asset, eprint, disable_eprint, drops, XRP
import interactive
from sidechain import Params
import sidechain
import test_utils
import time
from transaction import Payment, Trust
import tst_common


def simple_xrp_test(mc_app: App, sc_app: App, params: Params):
    alice = mc_app.account_from_alias('alice')
    adam = sc_app.account_from_alias('adam')
    mc_door = mc_app.account_from_alias('door')
    sc_door = sc_app.account_from_alias('door')

    # main to side
    # The first txn funds the side chain account
    with test_utils.test_context(mc_app, sc_app):
        to_send_asset = XRP(9999)
        mc_pre_bal = mc_app.get_balance(mc_door, to_send_asset)
        sc_pre_bal = sc_app.get_balance(adam, to_send_asset)
        sidechain.main_to_side_transfer(mc_app, sc_app, alice, adam,
                                        to_send_asset, params)
        test_utils.wait_for_balance_change(mc_app, mc_door, mc_pre_bal,
                                           to_send_asset)
        test_utils.wait_for_balance_change(sc_app, adam, sc_pre_bal,
                                           to_send_asset)

    for i in range(2):
        # even amounts for main to side
        for value in range(20, 30, 2):
            with test_utils.test_context(mc_app, sc_app):
                to_send_asset = drops(value)
                mc_pre_bal = mc_app.get_balance(mc_door, to_send_asset)
                sc_pre_bal = sc_app.get_balance(adam, to_send_asset)
                sidechain.main_to_side_transfer(mc_app, sc_app, alice, adam,
                                                to_send_asset, params)
                test_utils.wait_for_balance_change(mc_app, mc_door, mc_pre_bal,
                                                   to_send_asset)
                test_utils.wait_for_balance_change(sc_app, adam, sc_pre_bal,
                                                   to_send_asset)

        # side to main
        # odd amounts for side to main
        for value in range(19, 29, 2):
            with test_utils.test_context(mc_app, sc_app):
                to_send_asset = drops(value)
                pre_bal = mc_app.get_balance(alice, to_send_asset)
                sidechain.side_to_main_transfer(mc_app, sc_app, adam, alice,
                                                to_send_asset, params)
                test_utils.wait_for_balance_change(mc_app, alice, pre_bal,
                                                   to_send_asset)


def simple_iou_test(mc_app: App, sc_app: App, params: Params):
    alice = mc_app.account_from_alias('alice')
    adam = sc_app.account_from_alias('adam')

    mc_asset = Asset(value=0,
                     currency='USD',
                     issuer=mc_app.account_from_alias('root'))
    sc_asset = Asset(value=0,
                     currency='USD',
                     issuer=sc_app.account_from_alias('door'))
    mc_app.add_asset_alias(mc_asset, 'mcd')  # main chain dollar
    sc_app.add_asset_alias(sc_asset, 'scd')  # side chain dollar
    mc_app(Trust(account=alice, limit_amt=mc_asset(1_000_000)))

    # make sure the adam account on the side chain exists and set the trust line
    with test_utils.test_context(mc_app, sc_app):
        sidechain.main_to_side_transfer(mc_app, sc_app, alice, adam, XRP(300),
                                        params)

    # create a trust line to alice and pay her USD/root
    mc_app(Trust(account=alice, limit_amt=mc_asset(1_000_000)))
    mc_app.maybe_ledger_accept()
    mc_app(
        Payment(account=mc_app.account_from_alias('root'),
                dst=alice,
                amt=mc_asset(10_000)))
    mc_app.maybe_ledger_accept()

    # create a trust line for adam
    sc_app(Trust(account=adam, limit_amt=sc_asset(1_000_000)))

    for i in range(2):
        # even amounts for main to side
        for value in range(10, 20, 2):
            with test_utils.test_context(mc_app, sc_app):
                to_send_asset = mc_asset(value)
                rcv_asset = sc_asset(value)
                pre_bal = sc_app.get_balance(adam, rcv_asset)
                sidechain.main_to_side_transfer(mc_app, sc_app, alice, adam,
                                                to_send_asset, params)
                test_utils.wait_for_balance_change(sc_app, adam, pre_bal,
                                                   rcv_asset)

        # side to main
        # odd amounts for side to main
        for value in range(9, 19, 2):
            with test_utils.test_context(mc_app, sc_app):
                to_send_asset = sc_asset(value)
                rcv_asset = mc_asset(value)
                pre_bal = mc_app.get_balance(alice, to_send_asset)
                sidechain.side_to_main_transfer(mc_app, sc_app, adam, alice,
                                                to_send_asset, params)
                test_utils.wait_for_balance_change(mc_app, alice, pre_bal,
                                                   rcv_asset)


def setup_accounts(mc_app: App, sc_app: App, params: Params):
    # Set up user accounts on both chains.
    # Typical female names are addresses on the mainchain.
    # The first account (alice) is the only funded one.
    alice = mc_app.create_account('alice')
    beth = mc_app.create_account('beth')
    carol = mc_app.create_account('carol')
    deb = mc_app.create_account('deb')
    ella = mc_app.create_account('ella')
    mc_app(Payment(account=params.genesis_account, dst=alice, amt=XRP(20_000)))
    mc_app.maybe_ledger_accept()

    # Typical male names are addresses on the sidechain.
    # All accounts are initially unfunded.
    adam = sc_app.create_account('adam')
    bob = sc_app.create_account('bob')
    charlie = sc_app.create_account('charlie')
    dan = sc_app.create_account('dan')
    ed = sc_app.create_account('ed')


def run_all(mc_app: App, sc_app: App, params: Params):
    setup_accounts(mc_app, sc_app, params)
    logging.info(f'mainchain:\n{mc_app.key_manager.to_string()}')
    logging.info(f'sidechain:\n{sc_app.key_manager.to_string()}')
    simple_xrp_test(mc_app, sc_app, params)
    simple_iou_test(mc_app, sc_app, params)


def test_simple_xchain(configs_dirs_dict: Dict[int, str]):
    tst_common.test_start(configs_dirs_dict, run_all)
74  bin/sidechain/python/tests/tst_common.py  Normal file
@@ -0,0 +1,74 @@
import logging
import pprint
import pytest
from multiprocessing import Process, Value
from typing import Callable, Dict
import sys

from app import App
from common import eprint, disable_eprint, XRP
from sidechain import Params
import sidechain
import test_utils
import time


def run(mc_app: App, sc_app: App, params: Params,
        test_case: Callable[[App, App, Params], None]):
    # process will run while stop token is non-zero
    stop_token = Value('i', 1)
    p = None
    if mc_app.standalone:
        p = Process(target=sidechain.close_mainchain_ledgers,
                    args=(stop_token, params))
        p.start()
    try:
        test_case(mc_app, sc_app, params)
    finally:
        if p:
            stop_token.value = 0
            p.join()
        sidechain._convert_log_files_to_json(
            mc_app.get_configs() + sc_app.get_configs(), 'final.json')


def standalone_test(params: Params, test_case: Callable[[App, App, Params],
                                                        None]):
    def callback(mc_app: App, sc_app: App):
        run(mc_app, sc_app, params, test_case)

    sidechain._standalone_with_callback(params,
                                        callback,
                                        setup_user_accounts=False)


def multinode_test(params: Params, test_case: Callable[[App, App, Params],
                                                       None]):
    def callback(mc_app: App, sc_app: App):
        run(mc_app, sc_app, params, test_case)

    sidechain._multinode_with_callback(params,
                                       callback,
                                       setup_user_accounts=False)


def test_start(configs_dirs_dict: Dict[int, str],
               test_case: Callable[[App, App, Params], None]):
    params = sidechain.Params(configs_dir=configs_dirs_dict[1])

    if err_str := params.check_error():
        eprint(err_str)
        sys.exit(1)

    if params.verbose:
        print("eprint enabled")
    else:
        disable_eprint()

    # Set to True to help debug tests
    test_utils.test_context_verbose_logging = True

    if params.standalone:
        standalone_test(params, test_case)
    else:
        multinode_test(params, test_case)
366  bin/sidechain/python/transaction.py  Normal file
@@ -0,0 +1,366 @@
import datetime
import json
from typing import Dict, List, Optional, Union

from command import Command
from common import Account, Asset, Path, PathList, to_rippled_epoch


class Transaction(Command):
    '''Interface for all transactions'''
    def __init__(
        self,
        *,
        account: Account,
        flags: Optional[int] = None,
        fee: Optional[Union[Asset, int]] = None,
        sequence: Optional[int] = None,
        account_txn_id: Optional[str] = None,
        last_ledger_sequence: Optional[int] = None,
        src_tag: Optional[int] = None,
        memos: Optional[List[Dict[str, dict]]] = None,
    ):
        super().__init__()
        self.account = account
        # set even if None
        self.flags = flags
        self.fee = fee
        self.sequence = sequence
        self.account_txn_id = account_txn_id
        self.last_ledger_sequence = last_ledger_sequence
        self.src_tag = src_tag
        self.memos = memos

    def cmd_name(self) -> str:
        return 'submit'

    def set_seq_and_fee(self, seq: int, fee: Union[Asset, int]):
        self.sequence = seq
        self.fee = fee

    def to_cmd_obj(self) -> dict:
        txn = {
            'Account': self.account.account_id,
        }
        if self.flags is not None:
            txn['Flags'] = self.flags
        if self.fee is not None:
            if isinstance(self.fee, int):
                txn['Fee'] = f'{self.fee}'  # must be a string
            else:
                txn['Fee'] = self.fee.to_cmd_obj()
        if self.sequence is not None:
            txn['Sequence'] = self.sequence
        if self.account_txn_id is not None:
            txn['AccountTxnID'] = self.account_txn_id
        if self.last_ledger_sequence is not None:
            txn['LastLedgerSequence'] = self.last_ledger_sequence
        if self.src_tag is not None:
            txn['SourceTag'] = self.src_tag
        if self.memos is not None:
            txn['Memos'] = self.memos
        return txn


class Payment(Transaction):
    '''A payment transaction'''
    def __init__(self,
                 *,
                 dst: Account,
                 amt: Asset,
                 send_max: Optional[Asset] = None,
                 paths: Optional[PathList] = None,
                 dst_tag: Optional[int] = None,
                 deliver_min: Optional[Asset] = None,
                 **rest):
        super().__init__(**rest)
        self.dst = dst
        self.amt = amt
        self.send_max = send_max
        if paths is not None and isinstance(paths, Path):
            # allow paths = Path([...]) special case
            self.paths = PathList([paths])
        else:
            self.paths = paths
        self.dst_tag = dst_tag
        self.deliver_min = deliver_min

    def set_partial_payment(self, value: bool = True):
        '''Set or clear the partial payment flag'''
        self._set_flag(0x0002_0000, value)

    def to_cmd_obj(self) -> dict:
        '''convert to transaction form (suitable for using json.dumps or similar)'''
        txn = super().to_cmd_obj()
        txn = {
            **txn,
            'TransactionType': 'Payment',
            'Destination': self.dst.account_id,
            'Amount': self.amt.to_cmd_obj(),
        }
        if self.paths is not None:
            txn['Paths'] = self.paths.to_cmd_obj()
        if self.send_max is not None:
            txn['SendMax'] = self.send_max.to_cmd_obj()
        if self.dst_tag is not None:
            txn['DestinationTag'] = self.dst_tag
        if self.deliver_min is not None:
            txn['DeliverMin'] = self.deliver_min.to_cmd_obj()
        return txn
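# For orientation (illustrative ids, not real accounts):
# Payment(account=alice, dst=bob, amt=XRP(10)).to_cmd_obj() yields a dict like
#   {'Account': <alice.account_id>, 'TransactionType': 'Payment',
#    'Destination': <bob.account_id>, 'Amount': <XRP(10).to_cmd_obj()>}
# plus Paths/SendMax/DestinationTag/DeliverMin when supplied.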
class Trust(Transaction):
    '''A trust set transaction'''
    def __init__(self,
                 *,
                 limit_amt: Optional[Asset] = None,
                 qin: Optional[int] = None,
                 qout: Optional[int] = None,
                 **rest):
        super().__init__(**rest)
        self.limit_amt = limit_amt
        self.qin = qin
        self.qout = qout

    def set_auth(self):
        '''Set the auth flag (cannot be cleared)'''
        self._set_flag(0x0001_0000)
        return self

    def set_no_ripple(self, value: bool = True):
        '''Set or clear the noRipple flag'''
        self._set_flag(0x0002_0000, value)
        self._set_flag(0x0004_0000, not value)
        return self

    def set_freeze(self, value: bool = True):
        '''Set or clear the freeze flag'''
        self._set_flag(0x0020_0000, value)
        self._set_flag(0x0040_0000, not value)
        return self

    def to_cmd_obj(self) -> dict:
        '''convert to transaction form (suitable for using json.dumps or similar)'''
        result = super().to_cmd_obj()
        result = {
            **result,
            'TransactionType': 'TrustSet',
            'LimitAmount': self.limit_amt.to_cmd_obj(),
        }
        if self.qin is not None:
            result['QualityIn'] = self.qin
        if self.qout is not None:
            result['QualityOut'] = self.qout
        return result


class SetRegularKey(Transaction):
    '''A SetRegularKey transaction'''
    def __init__(self, *, key: str, **rest):
        super().__init__(**rest)
        self.key = key

    def to_cmd_obj(self) -> dict:
        '''convert to transaction form (suitable for using json.dumps or similar)'''
        result = super().to_cmd_obj()
        result = {
            **result,
            'TransactionType': 'SetRegularKey',
            'RegularKey': self.key,
        }
        return result


class SignerListSet(Transaction):
    '''A SignerListSet transaction'''
    def __init__(self,
                 *,
                 keys: List[str],
                 weights: Optional[List[int]] = None,
                 quorum: int,
                 **rest):
        super().__init__(**rest)
        self.keys = keys
        self.quorum = quorum
        if weights:
            if len(weights) != len(keys):
                raise ValueError(
                    f'SignerListSet number of weights must equal number of keys (or be empty). Weights: {weights} Keys: {keys}'
                )
            self.weights = weights
        else:
            self.weights = [1] * len(keys)

    def to_cmd_obj(self) -> dict:
        '''convert to transaction form (suitable for using json.dumps or similar)'''
        result = super().to_cmd_obj()
        result = {
            **result,
            'TransactionType': 'SignerListSet',
            'SignerQuorum': self.quorum,
        }
        entries = []
        for k, w in zip(self.keys, self.weights):
            entries.append({'SignerEntry': {'Account': k, 'SignerWeight': w}})
        result['SignerEntries'] = entries
        return result
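# Example shape (illustrative): keys=[k1, k2] with default weights serializes to
#   'SignerEntries': [{'SignerEntry': {'Account': k1, 'SignerWeight': 1}},
#                     {'SignerEntry': {'Account': k2, 'SignerWeight': 1}}]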
class AccountSet(Transaction):
    '''An account set transaction'''
    def __init__(self, account: Account, **rest):
        super().__init__(account=account, **rest)
        self.clear_flag = None
        self.set_flag = None
        self.transfer_rate = None
        self.tick_size = None

    def _set_account_flag(self, flag_id: int, value):
        if value:
            self.set_flag = flag_id
        else:
            self.clear_flag = flag_id
        return self

    def set_account_txn_id(self, value: bool = True):
        '''Set or clear the asfAccountTxnID flag'''
        return self._set_account_flag(5, value)

    def set_default_ripple(self, value: bool = True):
        '''Set or clear the asfDefaultRipple flag'''
        return self._set_account_flag(8, value)

    def set_deposit_auth(self, value: bool = True):
        '''Set or clear the asfDepositAuth flag'''
        return self._set_account_flag(9, value)

    def set_disable_master(self, value: bool = True):
        '''Set or clear the asfDisableMaster flag'''
        return self._set_account_flag(4, value)

    def set_disallow_xrp(self, value: bool = True):
        '''Set or clear the asfDisallowXRP flag'''
        return self._set_account_flag(3, value)

    def set_global_freeze(self, value: bool = True):
        '''Set or clear the asfGlobalFreeze flag'''
        return self._set_account_flag(7, value)

    def set_no_freeze(self, value: bool = True):
        '''Set or clear the asfNoFreeze flag'''
        return self._set_account_flag(6, value)

    def set_require_auth(self, value: bool = True):
        '''Set or clear the asfRequireAuth flag'''
        return self._set_account_flag(2, value)

    def set_require_dest(self, value: bool = True):
        '''Set or clear the asfRequireDest flag'''
        return self._set_account_flag(1, value)

    def set_transfer_rate(self, value: int):
        '''Set the fee to charge when users transfer this account's issued currencies'''
        self.transfer_rate = value
        return self

    def set_tick_size(self, value: int):
        '''Tick size to use for offers involving a currency issued by this address'''
        self.tick_size = value
        return self

    def to_cmd_obj(self) -> dict:
        '''convert to transaction form (suitable for using json.dumps or similar)'''
        result = super().to_cmd_obj()
        result = {
            **result,
            'TransactionType': 'AccountSet',
        }
        if self.clear_flag is not None:
            result['ClearFlag'] = self.clear_flag
        if self.set_flag is not None:
            result['SetFlag'] = self.set_flag
        if self.transfer_rate is not None:
            result['TransferRate'] = self.transfer_rate
        if self.tick_size is not None:
            result['TickSize'] = self.tick_size
        return result
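# e.g. AccountSet(account=door).set_disable_master().to_cmd_obj() yields
#   {'Account': <door.account_id>, 'TransactionType': 'AccountSet', 'SetFlag': 4}
# (4 is asfDisableMaster), which the sidechain setup functions above rely on.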
class Offer(Transaction):
    '''An offer transaction'''
    def __init__(self,
                 *,
                 taker_pays: Asset,
                 taker_gets: Asset,
                 expiration: Optional[int] = None,
                 offer_sequence: Optional[int] = None,
                 **rest):
        super().__init__(**rest)
        self.taker_pays = taker_pays
        self.taker_gets = taker_gets
        self.expiration = expiration
        self.offer_sequence = offer_sequence

    def set_passive(self, value: bool = True):
        return self._set_flag(0x0001_0000, value)

    def set_immediate_or_cancel(self, value: bool = True):
        return self._set_flag(0x0002_0000, value)

    def set_fill_or_kill(self, value: bool = True):
        return self._set_flag(0x0004_0000, value)

    def set_sell(self, value: bool = True):
        return self._set_flag(0x0008_0000, value)

    def to_cmd_obj(self) -> dict:
        txn = super().to_cmd_obj()
        txn = {
            **txn,
            'TransactionType': 'OfferCreate',
            'TakerPays': self.taker_pays.to_cmd_obj(),
            'TakerGets': self.taker_gets.to_cmd_obj(),
        }
        if self.expiration is not None:
            txn['Expiration'] = self.expiration
        if self.offer_sequence is not None:
            txn['OfferSequence'] = self.offer_sequence
        return txn


class Ticket(Transaction):
    '''A ticket create transaction'''
    def __init__(self, *, count: int = 1, **rest):
        super().__init__(**rest)
        self.count = count

    def to_cmd_obj(self) -> dict:
        txn = super().to_cmd_obj()
        txn = {
            **txn,
            'TransactionType': 'TicketCreate',
            'TicketCount': self.count,
        }
        return txn


class SetHook(Transaction):
    '''A SetHook transaction for the experimental hook amendment'''
    def __init__(self,
                 *,
                 create_code: str,
                 hook_on: str = '0000000000000000',
                 **rest):
        super().__init__(**rest)
        self.create_code = create_code
        self.hook_on = hook_on

    def to_cmd_obj(self) -> dict:
        txn = super().to_cmd_obj()
        txn = {
            **txn,
            'TransactionType': 'SetHook',
            'CreateCode': self.create_code,
            'HookOn': self.hook_on,
        }
        return txn
246  bin/start_sync_stop.py  Normal file
@@ -0,0 +1,246 @@
#!/usr/bin/env python
"""A script to test rippled in an infinite loop of start-sync-stop.

- Requires Python 3.7+.
- Can be stopped with SIGINT.
- Has no dependencies outside the standard library.
"""

import sys

assert sys.version_info.major == 3 and sys.version_info.minor >= 7

import argparse
import asyncio
import configparser
import contextlib
import json
import logging
import os
from pathlib import Path
import platform
import subprocess
import time
import urllib.error
import urllib.request

# Enable asynchronous subprocesses on Windows. The default changed in 3.8.
# https://docs.python.org/3.7/library/asyncio-platforms.html#subprocess-support-on-windows
if (platform.system() == 'Windows' and sys.version_info.major == 3
        and sys.version_info.minor < 8):
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())

DEFAULT_EXE = 'rippled'
DEFAULT_CONFIGURATION_FILE = 'rippled.cfg'
# Number of seconds to wait before forcefully terminating.
PATIENCE = 120
# Number of contiguous seconds in a sync state to be considered synced.
DEFAULT_SYNC_DURATION = 60
# Number of seconds between polls of state.
DEFAULT_POLL_INTERVAL = 5
SYNC_STATES = ('full', 'validating', 'proposing')


def read_config(config_file):
    # strict = False: Allow duplicate keys, e.g. [rpc_startup].
    # allow_no_value = True: Allow keys with no values. Generally, these
    # instances use the "key" as the value, and the section name is the key,
    # e.g. [debug_logfile].
    # delimiters = ('='): Allow ':' as a character in Windows paths. Some of
    # our "keys" are actually values, and we don't want to split them on ':'.
    config = configparser.ConfigParser(
        strict=False,
        allow_no_value=True,
        delimiters=('='),
    )
    config.read(config_file)
    return config


def to_list(value, separator=','):
    """Parse a list from a delimited string value."""
    return [s.strip() for s in value.split(separator) if s]


def find_log_file(config_file):
    """Try to figure out what log file the user has chosen. Raises all kinds
    of exceptions if there is any possibility of ambiguity."""
    config = read_config(config_file)
    values = list(config['debug_logfile'].keys())
    if len(values) < 1:
        raise ValueError(
            f'no [debug_logfile] in configuration file: {config_file}')
    if len(values) > 1:
        raise ValueError(
            f'too many [debug_logfile] in configuration file: {config_file}')
    return values[0]


def find_http_port(config_file):
    config = read_config(config_file)
    names = list(config['server'].keys())
    for name in names:
        server = config[name]
        if 'http' in to_list(server.get('protocol', '')):
            return int(server['port'])
    raise ValueError('no server in [server] for "http" protocol')
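# A minimal configuration these helpers can parse might look like the sketch
# below (section and key names follow rippled.cfg conventions; the port and
# path are placeholders):
#
#   [server]
#   port_rpc
#
#   [port_rpc]
#   port = 5005
#   protocol = http
#
#   [debug_logfile]
#   /var/log/rippled/debug.log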
@contextlib.asynccontextmanager
async def rippled(exe=DEFAULT_EXE, config_file=DEFAULT_CONFIGURATION_FILE):
    """A context manager for a rippled process."""
    # Start the server.
    process = await asyncio.create_subprocess_exec(
        str(exe),
        '--conf',
        str(config_file),
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    logging.info(f'rippled started with pid {process.pid}')
    try:
        yield process
    finally:
        # Ask it to stop.
        logging.info(f'asking rippled (pid: {process.pid}) to stop')
        start = time.time()
        process.terminate()

        # Wait nicely.
        try:
            await asyncio.wait_for(process.wait(), PATIENCE)
        except asyncio.TimeoutError:
            # Ask the operating system to kill it.
            logging.warning(f'killing rippled ({process.pid})')
            try:
                process.kill()
            except ProcessLookupError:
                pass

        code = await process.wait()
        end = time.time()
        logging.info(
            f'rippled stopped after {end - start:.1f} seconds with code {code}'
        )


async def sync(
    port,
    *,
    duration=DEFAULT_SYNC_DURATION,
    interval=DEFAULT_POLL_INTERVAL,
):
    """Poll rippled on an interval until it has been synced for a duration."""
    start = time.perf_counter()
    while (time.perf_counter() - start) < duration:
        await asyncio.sleep(interval)

        request = urllib.request.Request(
            f'http://127.0.0.1:{port}',
            data=json.dumps({
                'method': 'server_state'
            }).encode(),
            headers={'Content-Type': 'application/json'},
        )
        with urllib.request.urlopen(request) as response:
            try:
                body = json.loads(response.read())
            except urllib.error.HTTPError as cause:
                logging.warning(f'server_state returned not JSON: {cause}')
                start = time.perf_counter()
                continue

        try:
            state = body['result']['state']['server_state']
        except KeyError as cause:
            logging.warning(f'server_state response missing key: {cause}')
            start = time.perf_counter()
            continue
        logging.info(f'server_state: {state}')
        if state not in SYNC_STATES:
            # Require a contiguous sync state.
            start = time.perf_counter()


async def loop(test,
               *,
               exe=DEFAULT_EXE,
               config_file=DEFAULT_CONFIGURATION_FILE):
    """
    Start-test-stop rippled in an infinite loop.

    Moves the log to a different file after each iteration.
    """
    log_file = find_log_file(config_file)
    id = 0
    while True:
        logging.info(f'iteration: {id}')
        async with rippled(exe, config_file) as process:
            start = time.perf_counter()
            exited = asyncio.create_task(process.wait())
            tested = asyncio.create_task(test())
            # Try to sync as long as the process is running.
            done, pending = await asyncio.wait(
                {exited, tested},
                return_when=asyncio.FIRST_COMPLETED,
            )
            if done == {exited}:
                code = exited.result()
                logging.warning(
                    f'server halted for unknown reason with code {code}')
            else:
                assert done == {tested}
                assert tested.exception() is None
                end = time.perf_counter()
                logging.info(f'synced after {end - start:.0f} seconds')
        os.replace(log_file, f'debug.{id}.log')
        id += 1


logging.basicConfig(
    format='%(asctime)s %(levelname)-8s %(message)s',
    level=logging.INFO,
    datefmt='%Y-%m-%d %H:%M:%S',
)

parser = argparse.ArgumentParser(
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument(
    'rippled',
    type=Path,
    nargs='?',
    default=DEFAULT_EXE,
    help='Path to rippled.',
)
parser.add_argument(
    '--conf',
    type=Path,
    default=DEFAULT_CONFIGURATION_FILE,
    help='Path to configuration file.',
)
parser.add_argument(
    '--duration',
    type=int,
    default=DEFAULT_SYNC_DURATION,
    help='Number of contiguous seconds required in a synchronized state.',
)
parser.add_argument(
    '--interval',
    type=int,
    default=DEFAULT_POLL_INTERVAL,
    help='Number of seconds to wait between polls of state.',
)
args = parser.parse_args()

port = find_http_port(args.conf)


def test():
    return sync(port, duration=args.duration, interval=args.interval)


try:
    asyncio.run(loop(test, exe=args.rippled, config_file=args.conf))
except KeyboardInterrupt:
    # Squelch the message. This is a normal mode of exit.
    pass
10
cfg/initdb.sh
Executable file
10
cfg/initdb.sh
Executable file
@@ -0,0 +1,10 @@
#!/bin/sh

# Execute this script with a running Postgres server on the current host.
# It should work with the most generic installation of Postgres,
# and is necessary for rippled to store data in Postgres.

# usage: sudo -u postgres ./initdb.sh
psql -c "CREATE USER rippled"
psql -c "CREATE DATABASE rippled WITH OWNER = rippled"
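As a quick sanity check that the script above worked, one can connect to the new database as the new role and run a trivial query. This is an editor's sketch, not part of the repository; it assumes the third-party psycopg2 driver is installed and that the local Postgres server accepts connections for the rippled role:

import psycopg2

# Connect over the default local socket as the role created by initdb.sh.
with psycopg2.connect(dbname='rippled', user='rippled') as connection:
    with connection.cursor() as cursor:
        cursor.execute('SELECT current_user, current_database()')
        print(cursor.fetchone())  # expected: ('rippled', 'rippled')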
@@ -13,15 +13,17 @@
#
# 4. HTTPS Client
#
# 5. Database
# 5. Reporting Mode
#
# 6. Diagnostics
# 6. Database
#
# 7. Voting
# 7. Diagnostics
#
# 8. Misc Settings
# 8. Voting
#
# 9. Example Settings
# 9. Misc Settings
#
# 10. Example Settings
#
#-------------------------------------------------------------------------------
#
@@ -36,7 +38,7 @@
# For more information on where the rippled server instance searches for the
# file, visit:
#
# https://developers.ripple.com/commandline-usage.html#generic-options
# https://xrpl.org/commandline-usage.html#generic-options
#
# This file should be named rippled.cfg. This file is UTF-8 with DOS, UNIX,
# or Mac style end of lines. Blank lines and lines beginning with '#' are
@@ -198,9 +200,19 @@
#
# admin = [ IP, IP, IP, ... ]
#
# A comma-separated list of IP addresses.
# A comma-separated list of IP addresses or subnets. Subnets
# should be represented in "slash" notation, such as:
# 10.0.0.0/8
# 172.16.0.0/12
# 192.168.0.0/16
# Those examples are ipv4, but ipv6 is also supported.
# When configuring subnets, the address must match the
# underlying network address. Otherwise, the desired IP range is
# ambiguous. For example, 10.1.2.3/24 has a network address of
# 10.1.2.0. Therefore, that subnet should be configured as
# 10.1.2.0/24.
#
# When set, grants administrative command access to the specified IP
# When set, grants administrative command access to the specified
# addresses. These commands may be issued over http, https, ws, or wss
# if configured on the port. If not provided, the default is to not allow
# administrative commands.
@@ -231,9 +243,10 @@
#
# secure_gateway = [ IP, IP, IP, ... ]
#
# A comma-separated list of IP addresses.
# A comma-separated list of IP addresses or subnets. See the
# details for the "admin" option above.
#
# When set, allows the specified IP addresses to pass HTTP headers
# When set, allows the specified addresses to pass HTTP headers
# containing username and remote IP address for each session. If a
# non-empty username is passed in this way, then resource controls
# such as often resulting in "tooBusy" errors will be lifted. However,
@@ -248,9 +261,9 @@
# proxies. Since rippled trusts these hosts, they must be
# responsible for properly authenticating the remote user.
#
# The same IP address cannot be used in both "admin" and "secure_gateway"
# lists for the same port. In this case, rippled will abort with an error
# message to the console shortly after startup
# If some IP addresses are included for both "admin" and
# "secure_gateway" connections, then they will be treated as
# "admin" addresses.
#
# ssl_key = <filename>
# ssl_cert = <filename>
@@ -351,6 +364,14 @@
# connection is no longer available.
#
#
# [server_domain]
#
# domain name
#
# The domain under which a TOML file applicable to this server can be
# found. A server may lie about its domain so the TOML should contain
# a reference to this server by pubkey in the [nodes] array.
#
#
#-------------------------------------------------------------------------------
#
@@ -385,7 +406,7 @@
#
# [ips]
# 192.168.0.1
# 192.168.0.1 2459
# 192.168.0.1 2459
# r.ripple.com 51235
#
#
@@ -453,6 +474,13 @@
#
#
#
# [max_transactions]
#
# Configure the maximum number of transactions to have in the job queue
#
# Must be a number between 100 and 1000, defaults to 250
#
#
# [overlay]
#
# Controls settings related to the peer to peer overlay.
@@ -474,6 +502,23 @@
# single host from consuming all inbound slots. If the value is not
# present the server will autoconfigure an appropriate limit.
#
# max_unknown_time = <number>
#
# The maximum amount of time, in seconds, that an outbound connection
# is allowed to stay in the "unknown" tracking state. This option can
# take any value between 300 and 1800 seconds, inclusive. If the option
# is not present the server will autoconfigure an appropriate limit.
#
# The current default (which is subject to change) is 600 seconds.
#
# max_diverged_time = <number>
#
# The maximum amount of time, in seconds, that an outbound connection
# is allowed to stay in the "diverged" tracking state. The option can
# take any value between 60 and 900 seconds, inclusive. If the option
# is not present the server will autoconfigure an appropriate limit.
#
# The current default (which is subject to change) is 300 seconds.
#
#
# [transaction_queue] EXPERIMENTAL
@@ -506,14 +551,6 @@
# than the original transaction's fee, or meet the current open
# ledger fee to be considered. Default: 25.
#
# multi_txn_percent = <number>
#
# If a client submits multiple transactions (different sequence
# numbers), later transactions must pay a fee at least <number>
# percent higher than the transaction with the previous sequence
# number.
# Default: -90.
#
# minimum_escalation_multiplier = <number>
#
# At ledger close time, the median fee level of the transactions
@@ -583,22 +620,43 @@
#
#-------------------------------------------------------------------------------
#
# 3. Ripple Protocol
# 3. Protocol
#
#-------------------
#
# These settings affect the behavior of the server instance with respect
# to Ripple payment protocol level activities such as validating and
# closing ledgers or adjusting fees in response to server overloads.
# to protocol level activities such as validating and closing ledgers
# adjusting fees in response to server overloads.
#
#
#
# [node_size]
#
# Tunes the servers based on the expected load and available memory. Legal
# sizes are "tiny", "small", "medium", "large", and "huge". We recommend
# you start at the default and raise the setting if you have extra memory.
# The default is "tiny".
# [relay_proposals]
#
# Controls the relay and processing behavior for proposals received by this
# server that are issued by validators that are not on the server's UNL.
#
# Legal values are:
# "all" - Relay and process all incoming proposals
# "trusted" - Relay only trusted proposals, but locally process all
# "drop_untrusted" - Relay only trusted proposals, do not process untrusted
#
# The default is "trusted".
#
#
# [relay_validations]
#
# Controls the relay and processing behavior for validations received by this
# server that are issued by validators that are not on the server's UNL.
#
# Legal values are:
# "all" - Relay and process all incoming validations
# "trusted" - Relay only trusted validations, but locally process all
# "drop_untrusted" - Relay only trusted validations, do not process untrusted
#
# The default is "all".
#
#
#
#
#
@@ -728,9 +786,17 @@
# [workers]
#
# Configures the number of threads for processing work submitted by peers
# and clients. If not specified, then the value is automatically determined
# by factors including the number of system processors and whether this
# node is a validator.
# and clients. If not specified, then the value is automatically set to the
# number of processor threads plus 2 for networked nodes. Nodes running in
# stand alone mode default to 1 worker.
#
# [io_workers]
#
# Configures the number of threads for processing raw inbound and outbound IO.
#
# [prefetch_workers]
#
# Configures the number of threads for performing nodestore prefetching.
#
#
#
@@ -746,11 +812,23 @@
#
# main -> 0
# testnet -> 1
# devnet -> 2
#
# If this value is not specified the server is not explicitly configured
# to track a particular network.
#
#
# [ledger_replay]
#
# 0 or 1.
#
# 0: Disable the ledger replay feature [default]
# 1: Enable the ledger replay feature. With this feature enabled, when
# acquiring a ledger from the network, a rippled node only downloads
# the ledger header and the transactions instead of the whole ledger.
# And the ledger is built by applying the transactions to the parent
# ledger.
#
#-------------------------------------------------------------------------------
#
# 4. HTTPS Client
@@ -791,18 +869,135 @@
# certificates that the server will accept for verifying HTTP servers.
# Used only for outbound HTTPS client connections.
#
#-------------------------------------------------------------------------------
#
# 5. Reporting Mode
#
#------------
#
# rippled has an optional operating mode called Reporting Mode. In Reporting
# Mode, rippled does not connect to the peer to peer network. Instead, rippled
# will continuously extract data from one or more rippled servers that are
# connected to the peer to peer network (referred to as an ETL source).
# Reporting mode servers will forward RPC requests that require access to the
# peer to peer network (submit, fee, etc) to an ETL source.
#
# [reporting] Settings for Reporting Mode. If and only if this section is
# present, rippled will start in reporting mode. This section
# contains a list of ETL source names, and key-value pairs. The
# ETL source names each correspond to a configuration file
# section; the names must match exactly. The key-value pairs are
# optional.
#
#
# [<name>]
#
# A series of key/value pairs that specify an ETL source.
#
# source_ip = <IP-address>
#
# Required. IP address of the ETL source. Can also be a DNS record.
#
# source_ws_port = <number>
#
# Required. Port on which ETL source is accepting unencrypted websocket
# connections.
#
# source_grpc_port = <number>
#
# Required for ETL. Port on which ETL source is accepting gRPC requests.
# If this option is omitted, this ETL source cannot actually be used for
# ETL; the Reporting Mode server can still forward RPCs to this ETL
# source, but cannot extract data from this ETL source.
#
#
# Key-value pairs (all optional):
#
# read_only Valid values: 0, 1. Default is 0. If set to 1, the server
# will start in strict read-only mode, and will not perform
# ETL. The server will still handle RPC requests, and will
# still forward RPC requests that require access to the p2p
# network.
#
# start_sequence
# Sequence of first ledger to extract if the database is empty.
# ETL extracts ledgers in order. If this setting is absent and
# the database is empty, ETL will start with the next ledger
# validated by the network. If this setting is present and the
# database is not empty, an exception is thrown.
#
# num_markers Degree of parallelism used during the initial ledger
# download. Only used if the database is empty. Valid values
# are 1-256. A higher degree of parallelism results in a
# faster download, but puts more load on the ETL source.
# Default is 2.
#
# Example:
#
# [reporting]
# etl_source1
# etl_source2
# read_only=0
# start_sequence=32570
# num_markers=8
#
# [etl_source1]
# source_ip=1.2.3.4
# source_ws_port=6005
# source_grpc_port=50051
#
# [etl_source2]
# source_ip=5.6.7.8
# source_ws_port=6005
# source_grpc_port=50051
#
# Minimal Example:
#
# [reporting]
# etl_source1
#
# [etl_source1]
# source_ip=1.2.3.4
# source_ws_port=6005
# source_grpc_port=50051
#
#
# Notes:
#
# Reporting Mode requires Postgres (instead of SQLite). The Postgres
# connection info is specified under the [ledger_tx_tables] config section;
# see the Database section for further documentation.
#
# Each ETL source specified must have gRPC enabled (by adding a [port_grpc]
# section to the config). It is recommended to add a secure_gateway entry to
# the gRPC section, in order to bypass the server's rate limiting.
# This section needs to be added to the config of the ETL source, not
# the config of the reporting node. In the example below, the
# reporting server is running at 127.0.0.1. Multiple IPs can be
# specified in secure_gateway via a comma separated list.
#
# [port_grpc]
# ip = 0.0.0.0
# port = 50051
# secure_gateway = 127.0.0.1
#
#
#-------------------------------------------------------------------------------
#
# 5. Database
# 6. Database
#
#------------
#
# rippled creates 4 SQLite databases to hold bookkeeping information
# about transactions, local credentials, and various other things.
# It also creates the NodeDB, which holds all the objects that
# make up the current and historical ledgers.
# make up the current and historical ledgers. In Reporting Mode, rippled
# uses a Postgres database instead of SQLite.
#
# The simplest way to work with Postgres is to install it locally.
# When it is running, execute the initdb.sh script in the current
# directory as: sudo -u postgres ./initdb.sh
# This will create the rippled user and an empty database of the same name.
#
# The size of the NodeDB grows in proportion to the amount of new data and the
# amount of historical data (a configurable setting) so the performance of the
@@ -844,25 +1039,137 @@
# keeping full history is not advised, and using online delete is
# recommended.
#
# Required keys:
# path Location to store the database (all types)
# type = Cassandra
#
# Optional keys:
# Apache Cassandra is an open-source, distributed key-value store - see
# https://cassandra.apache.org/ for more details.
#
# These keys are possible for any type of backend:
# Cassandra is an alternative backend to be used only with Reporting Mode.
# See the Reporting Mode section for more details about Reporting Mode.
#
# Required keys for NuDB and RocksDB:
#
# path Location to store the database
#
# Required keys for Cassandra:
#
# contact_points IP of a node in the Cassandra cluster
#
# port CQL Native Transport Port
#
# secure_connect_bundle
# Absolute path to a secure connect bundle. When using
# a secure connect bundle, contact_points and port are
# not required.
#
# keyspace Name of Cassandra keyspace to use
#
# table_name Name of table in above keyspace to use
#
# Optional keys
#
# cache_size Size of cache for database records. Default is 16384.
# Setting this value to 0 will use the default value.
#
# cache_age Length of time in minutes to keep database records
# cached. Default is 5 minutes. Setting this value to
# 0 will use the default value.
#
# Note: if neither cache_size nor cache_age is
# specified, the cache for database records will not
# be created. If only one of cache_size or cache_age
# is specified, the cache will be created using the
# default value for the unspecified parameter.
#
# Note: the cache will not be created if online_delete
# is specified, or if shards are used.
#
# fast_load Boolean. If set, load the last persisted ledger
# from disk upon process start before syncing to
# the network. This is likely to improve performance
# if sufficient IOPS capacity is available.
# Default 0.
#
# Optional keys for NuDB or RocksDB:
#
# earliest_seq The default is 32570 to match the XRP ledger
# network's earliest allowed sequence. Alternate
# networks may set this value. Minimum value of 1.
# If a [shard_db] section is defined, and this
# value is present in either [node_db] or [shard_db],
# it must be defined with the same value in both
# sections.
#
# online_delete Minimum value of 256. Enable automatic purging
# of older ledger information. Maintain at least this
# number of ledger records online. Must be greater
# than or equal to ledger_history.
#
# advisory_delete 0 for disabled, 1 for enabled. If set, then
# require administrative RPC call "can_delete"
# to enable online deletion of ledger records.
# These keys modify the behavior of online_delete, and thus are only
# relevant if online_delete is defined and non-zero:
#
# earliest_seq The default is 32570 to match the XRP ledger
# network's earliest allowed sequence. Alternate
# networks may set this value. Minimum value of 1.
# advisory_delete 0 for disabled, 1 for enabled. If set, the
# administrative RPC call "can_delete" is required
# to enable online deletion of ledger records.
# Online deletion does not run automatically if
# non-zero and the last deletion was on a ledger
# greater than the current "can_delete" setting.
# Default is 0.
#
# delete_batch When automatically purging, SQLite database
# records are deleted in batches. This value
# controls the maximum size of each batch. Larger
# batches keep the databases locked for more time,
# which may cause other functions to fall behind,
# and thus cause the node to lose sync.
# Default is 100.
#
# back_off_milliseconds
# Number of milliseconds to wait between
# online_delete batches to allow other functions
# to catch up.
# Default is 100.
#
# age_threshold_seconds
# The online delete process will only run if the
# latest validated ledger is younger than this
# number of seconds.
# Default is 60.
#
# recovery_wait_seconds
# The online delete process checks periodically
# that rippled is still in sync with the network,
# and that the validated ledger is less than
# 'age_threshold_seconds' old. By default, if it
# is not, the online delete process aborts and
# tries again later. If 'recovery_wait_seconds'
# is set and rippled is out of sync, but likely to
# recover quickly, then online delete will wait
# this number of seconds for rippled to get back
# into sync before it aborts.
# Set this value if the node is otherwise staying
# in sync, or recovering quickly, but the online
# delete process is unable to finish.
# Default is unset.
#
# Optional keys for Cassandra:
#
# username Username to use if Cassandra cluster requires
# authentication
#
# password Password to use if Cassandra cluster requires
# authentication
#
# max_requests_outstanding
# Limits the maximum number of concurrent database
# writes. Default is 10 million. For slower clusters,
# large numbers of concurrent writes can overload the
# cluster. Setting this option can help eliminate
# write timeouts and other write errors due to the
# cluster being overloaded.
# io_threads
# Set the number of IO threads used by the
# Cassandra driver. Defaults to 4.
#
# Notes:
# The 'node_db' entry configures the primary, persistent storage.
@@ -874,6 +1181,12 @@
# [import_db] Settings for performing a one-time import (optional)
# [database_path] Path to the book-keeping databases.
#
# The server creates and maintains 4 to 5 bookkeeping SQLite databases in
# the 'database_path' location. If you omit this configuration setting,
# the server creates a directory called "db" located in the same place as
# your rippled.cfg file.
# Partial pathnames are relative to the location of the rippled executable.
#
# [shard_db] Settings for the Shard Database (optional)
#
# Format (without spaces):
@@ -887,21 +1200,137 @@
# Required keys:
# path Location to store the database
#
# max_size_gb Maximum disk space the database will utilize (in gigabytes)
#
#
# There are 4 bookkeeping SQLite database that the server creates and
# maintains. If you omit this configuration setting, it will default to
# creating a directory called "db" located in the same place as your
# rippled.cfg file. Partial pathnames will be considered relative to
# the location of the rippled executable.
# Optional keys:
# max_historical_shards
# The maximum number of historical shards
# to store.
#
# [historical_shard_paths] Additional storage paths for the Shard Database (optional)
#
# Format (without spaces):
# One or more lines, each expressing a full path for storing historical shards:
# /mnt/disk1
# /mnt/disk2
# ...
#
# [sqlite] Tuning settings for the SQLite databases (optional)
#
# Format (without spaces):
# One or more lines of case-insensitive key / value pairs:
# <key> '=' <value>
# ...
#
# Example 1:
# safety_level=low
#
# Example 2:
# journal_mode=off
# synchronous=off
#
# WARNING: These settings can have significant effects on data integrity,
# particularly in systemic failure scenarios. It is strongly recommended
# that they be left at their defaults unless the server is having
# performance issues during normal operation or during automatic purging
# (online_delete) operations. A warning will be logged on startup if
# 'ledger_history' is configured to store more than 10,000,000 ledgers and
# any of these settings are less safe than the default. This is due to the
# inordinate amount of time and bandwidth it will take to safely rebuild a
# corrupted database of that size from other peers.
#
# Optional keys:
#
# safety_level Valid values: high, low
# The default is "high", which tunes the SQLite
# databases in the most reliable mode, and is
# equivalent to:
# journal_mode=wal
# synchronous=normal
# temp_store=file
# "low" is equivalent to:
# journal_mode=memory
# synchronous=off
# temp_store=memory
# These "low" settings trade speed and reduced I/O
# for a higher risk of data loss. See the
# individual settings below for more information.
# This setting may not be combined with any of the
# other tuning settings: "journal_mode",
# "synchronous", or "temp_store".
#
# journal_mode Valid values: delete, truncate, persist, memory, wal, off
# The default is "wal", which uses a write-ahead
# log to implement database transactions.
# Alternately, "memory" saves disk I/O, but if
# rippled crashes during a transaction, the
# database is likely to be corrupted.
# See https://www.sqlite.org/pragma.html#pragma_journal_mode
# for more details about the available options.
# This setting may not be combined with the
# "safety_level" setting.
#
# synchronous Valid values: off, normal, full, extra
# The default is "normal", which works well with
# the "wal" journal mode. Alternatively, "off"
# allows rippled to continue as soon as data is
# passed to the OS, which can significantly
# increase speed, but risks data corruption if
# the host computer crashes before writing that
# data to disk.
# See https://www.sqlite.org/pragma.html#pragma_synchronous
# for more details about the available options.
# This setting may not be combined with the
# "safety_level" setting.
#
# temp_store Valid values: default, file, memory
# The default is "file", which will use files
# for temporary database tables and indices.
# Alternatively, "memory" may save I/O, but
# rippled does not currently use many, if any,
# of these temporary objects.
# See https://www.sqlite.org/pragma.html#pragma_temp_store
# for more details about the available options.
# This setting may not be combined with the
# "safety_level" setting.
#
# [ledger_tx_tables] (optional)
#
# conninfo Info for connecting to Postgres. Format is
# postgres://[username]:[password]@[ip]/[database].
# The database and user must already exist. If this
# section is missing and rippled is running in
# Reporting Mode, rippled will connect as the
# user running rippled to a database with the
# same name. On Linux and Mac OS X, the connection
# will take place using the server's UNIX domain
# socket. On Windows, through the localhost IP
# address. Default is empty.
#
# use_tx_tables Valid values: 1, 0
# The default is 1 (true). Determines whether to use
# the SQLite transaction database. If set to 0,
# rippled will not write to the transaction database,
# and will reject tx, account_tx and tx_history RPCs.
# In Reporting Mode, this setting is ignored.
#
# max_connections Valid values: any positive integer up to 64 bit
# storage length. This configures the maximum
# number of concurrent connections to postgres.
# Default is the maximum possible value to
# fit in a 64 bit integer.
#
# timeout Number of seconds after which idle postgres
# connections are disconnected. If set to 0,
# connections never time out. Default is 600.
#
#
# remember_ip Valid values: 1, 0
# Default is 1 (true). Whether to cache host and
# port connection settings.
#
#
#-------------------------------------------------------------------------------
#
# 6. Diagnostics
# 7. Diagnostics
#
#---------------
#
@@ -975,7 +1404,7 @@
#
#-------------------------------------------------------------------------------
#
# 7. Voting
# 8. Voting
#
#----------
#
@@ -1026,10 +1455,30 @@
#
#-------------------------------------------------------------------------------
#
# 8. Misc Settings
# 9. Misc Settings
#
#-----------------
#
# [node_size]
#
# Tunes the servers based on the expected load and available memory. Legal
# sizes are "tiny", "small", "medium", "large", and "huge". We recommend
# you start at the default and raise the setting if you have extra memory.
#
# The code attempts to automatically determine the appropriate size for
# this parameter based on the amount of RAM and the number of execution
# cores available to the server. The current decision matrix is:
#
#   |         |         Cores          |
#   |---------|------------------------|
#   | RAM     |  1   | 2 or 3 |  ≥ 4   |
#   |---------|------|--------|--------|
#   | < ~8GB  | tiny | tiny   | tiny   |
#   | < ~12GB | tiny | small  | small  |
#   | < ~16GB | tiny | small  | medium |
#   | < ~24GB | tiny | small  | large  |
#   | < ~32GB | tiny | small  | huge   |
#
# [signing_support]
#
# Specifies whether the server will accept "sign" and "sign_for" commands
@@ -1103,9 +1552,18 @@
# Enable or disable access to /vl requests. Default is '1' which
# enables access.
#
# [beta_rpc_api]
#
# 0 or 1.
#
# 0: Disable the beta API version for JSON-RPC and WebSocket [default]
# 1: Enable the beta API version for testing. The beta API version
# contains breaking changes that require a new API version number.
# They are not ready for public consumption.
#
#-------------------------------------------------------------------------------
#
# 9. Example Settings
# 10. Example Settings
#
#--------------------
#
@@ -1181,6 +1639,7 @@ protocol = ws
#[port_grpc]
#port = 50051
#ip = 0.0.0.0
#secure_gateway = 127.0.0.1

#[port_ws_public]
#port = 6005
@@ -1189,36 +1648,53 @@ protocol = ws

#-------------------------------------------------------------------------------

[node_size]
medium

# This is the primary persistent datastore for rippled. This includes transaction
# metadata, account states, and ledger headers. Helpful information can be
# found here: https://ripple.com/wiki/NodeBackEnd
# delete old ledgers while maintaining at least 2000. Do not require an
# external administrative command to initiate deletion.
# found at https://xrpl.org/capacity-planning.html#node-db-type
# type=NuDB is recommended for non-validators with fast SSDs. Validators or
# slow / spinning disks should use RocksDB. Caution: Spinning disks are
# not recommended. They do not perform well enough to consistently remain
# synced to the network.
# online_delete=512 is recommended to delete old ledgers while maintaining at
# least 512.
# advisory_delete=0 allows the online delete process to run automatically
# when the node has approximately two times the "online_delete" value of
# ledgers. No external administrative command is required to initiate
# deletion.
[node_db]
type=RocksDB
path=/var/lib/rippled/db/rocksdb
open_files=2000
filter_bits=12
cache_mb=256
file_size_mb=8
file_size_mult=2
online_delete=2000
type=NuDB
path=/var/lib/rippled/db/nudb
online_delete=512
advisory_delete=0

# This is the persistent datastore for shards. It is important for the health
# of the ripple network that rippled operators shard as much as practical.
# NuDB requires SSD storage. Helpful information can be found here
# https://ripple.com/build/history-sharding
# NuDB requires SSD storage. Helpful information can be found at
# https://xrpl.org/history-sharding.html
#[shard_db]
#path=/var/lib/rippled/db/shards/nudb
#max_size_gb=500
#max_historical_shards=50
#
# This optional section can be configured with a list
# of paths to use for storing historical shards. Each
# path must correspond to a unique filesystem.
#[historical_shard_paths]
#/path/1
#/path/2

[database_path]
/var/lib/rippled/db


# To use Postgres, uncomment this section and fill in the appropriate connection
# info. Postgres can only be used in Reporting Mode.
# To disable writing to the transaction database, uncomment this section, and
# set use_tx_tables=0
# [ledger_tx_tables]
# conninfo = postgres://[username]:[password]@[ip]/[database]
# use_tx_tables=1


# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
@@ -1230,7 +1706,8 @@ time.apple.com
time.nist.gov
pool.ntp.org

# To use the XRP test network (see https://ripple.com/build/xrp-test-net/),
# To use the XRP test network
# (see https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html),
# use the following [ips] section:
# [ips]
# r.altnet.rippletest.net 51235
@@ -1251,3 +1728,15 @@ validators.txt
# set to ssl_verify to 0.
[ssl_verify]
1


# To run in Reporting Mode, uncomment this section and fill in the appropriate
# connection info for one or more ETL sources.
# [reporting]
# etl_source
#
#
# [etl_source]
# source_grpc_port=50051
# source_ws_port=6005
# source_ip=127.0.0.1
@@ -1,12 +1,11 @@
#
# Default validators.txt
#
# A list of domains to bootstrap a nodes UNLs or for clients to indirectly
# locate IPs to contact the Ripple network.
# This file is located in the same folder as your rippled.cfg file
# and defines which validators your server trusts not to collude.
#
# This file is UTF-8 with Dos, UNIX, or Mac style end of lines.
# This file is UTF-8 with DOS, UNIX, or Mac style line endings.
# Blank lines and lines starting with a '#' are ignored.
# All other lines should be hankos or domain names.
#
#
#
@@ -25,11 +24,10 @@
#
# List of URIs serving lists of recommended validators.
#
# The latest list of recommended validator sites can be
# obtained from https://ripple.com/ripple.txt
#
# Examples:
# https://vl.ripple.com
# https://vl.coil.com
# https://vl.xrplf.org
# http://127.0.0.1:8000
# file:///etc/opt/ripple/vl.txt
#
@@ -41,13 +39,9 @@
# publisher key.
# Validator list keys should be hex-encoded.
#
# The latest list of recommended validator keys can be
# obtained from https://ripple.com/ripple.txt
#
# Examples:
# ed499d732bded01504a7407c224412ef550cc1ade638a4de4eb88af7c36cb8b282
# 0202d3f36a801349f3be534e3f64cfa77dede6e1b6310a0b48f40f20f955cec945
# 02dd8b7075f64d77d9d2bdb88da364f29fcd975f9ea6f21894abcc7564efda8054
# ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# ED307A760EE34F2D0CAA103377B1969117C38B8AA0AA1E2A24DAC1F32FC97087ED
#

# The default validator list publishers that the rippled instance
@@ -61,11 +55,15 @@

[validator_list_sites]
https://vl.ripple.com
https://vl.xrplf.org

[validator_list_keys]
#vl.ripple.com
ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# vl.xrplf.org
ED45D1840EE724BE327ABE9146503D5848EFD5F38B6D5FEDE71E80ACCE5E6E738B

# To use the XRP test network (see https://ripple.com/build/xrp-test-net/),
# To use the test network (see https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html),
# use the following configuration instead:
#
# [validator_list_sites]
docs/0001-negative-unl/README.md (new file, 597 lines)
@@ -0,0 +1,597 @@
# Negative UNL Engineering Spec

## The Problem Statement

The moment-to-moment health of the XRP Ledger network depends on the health and
connectivity of a small number of computers (nodes). The most important nodes
are validators, specifically ones listed on the unique node list
([UNL](#Question-What-are-UNLs)). Ripple publishes a recommended UNL that most
network nodes use to determine which peers in the network are trusted. Although
most validators use the same list, they are not required to. The XRP Ledger
network progresses to the next ledger when enough validators reach agreement
(above the minimum quorum of 80%) about what transactions to include in the next
ledger.

As an example, if there are 10 validators on the UNL, at least 8 validators have
to agree with the latest ledger for it to become validated. But what if enough
of those validators are offline to drop the network below the 80% quorum? The
XRP Ledger network favors safety/correctness over advancing the ledger, which
means that if enough validators are offline, the network will not be able to
validate ledgers.

Unfortunately validators can go offline at any time for many different reasons.
Power outages, network connectivity issues, and hardware failures are just a few
scenarios where a validator would appear "offline". Given that most of these
events are temporary, it would make sense to temporarily remove that validator
from the UNL. But the UNL is updated infrequently and not every node uses the
same UNL. So instead of removing the unreliable validator from the Ripple
recommended UNL, we can create a second negative UNL which is stored directly on
the ledger (so the entire network has the same view). This will help the network
see which validators are **currently** unreliable, and adjust their quorum
calculation accordingly.

*Improving the liveness of the network is the main motivation for the negative UNL.*

### Targeted Faults

In order to determine which validators are unreliable, we need to clearly define
what kind of faults to measure and analyze. We want to deal with the faults we
frequently observe in the production network. Hence we will only monitor for
validators that do not reliably respond to network messages or send out
validations disagreeing with the locally generated validations. We will not
target other byzantine faults.

To track whether or not a validator is responding to the network, we could
monitor them with a “heartbeat” protocol. Instead of creating a new heartbeat
protocol, we can leverage some existing protocol messages to mimic the
heartbeat. We picked validation messages because validators should send one and
only one validation message per ledger. In addition, we only count the
validation messages that agree with the local node's validations.

With the negative UNL, the network could keep making forward progress safely
even if the number of remaining validators gets to 60%. Say we have a network
with 10 validators on the UNL and everything is operating correctly. The quorum
required for this network would be 8 (80% of 10). When validators fail, the
quorum required would be as low as 6 (60% of 10), which is the absolute
***minimum quorum***. We need the absolute minimum quorum to be strictly greater
than 50% of the original UNL so that there cannot be two partitions of
well-behaved nodes headed in different directions. We arbitrarily choose 60% as
the minimum quorum to give a margin of safety.

Consider these events in the absence of negative UNL:
1. 1:00pm - validator1 fails, votes vs. quorum: 9 >= 8, we have quorum
1. 3:00pm - validator2 fails, votes vs. quorum: 8 >= 8, we have quorum
1. 5:00pm - validator3 fails, votes vs. quorum: 7 < 8, we don’t have quorum
    * **network cannot validate new ledgers with 3 failed validators**

We're below 80% agreement, so new ledgers cannot be validated. This is how the
XRP Ledger operates today, but if the negative UNL was enabled, the events would
happen as follows. (Please note that the events below are from a simplified
version of our protocol.)

1. 1:00pm - validator1 fails, votes vs. quorum: 9 >= 8, we have quorum
1. 1:40pm - network adds validator1 to negative UNL, quorum changes to ceil(9 * 0.8), or 8
1. 3:00pm - validator2 fails, votes vs. quorum: 8 >= 8, we have quorum
1. 3:40pm - network adds validator2 to negative UNL, quorum changes to ceil(8 * 0.8), or 7
1. 5:00pm - validator3 fails, votes vs. quorum: 7 >= 7, we have quorum
1. 5:40pm - network adds validator3 to negative UNL, quorum changes to ceil(7 * 0.8), or 6
1. 7:00pm - validator4 fails, votes vs. quorum: 6 >= 6, we have quorum
    * **network can still validate new ledgers with 4 failed validators**

## External Interactions

### Message Format Changes
This proposal will:
1. add a new pseudo-transaction type
1. add the negative UNL to the ledger data structure.

Any tools or systems that rely on the format of this data will have to be
updated.

### Amendment
This feature **will** need an amendment to activate.

## Design

This section discusses the following topics about the Negative UNL design:

* [Negative UNL protocol overview](#Negative-UNL-Protocol-Overview)
* [Validator reliability measurement](#Validator-Reliability-Measurement)
* [Format Changes](#Format-Changes)
* [Negative UNL maintenance](#Negative-UNL-Maintenance)
* [Quorum size calculation](#Quorum-Size-Calculation)
* [Filter validation messages](#Filter-Validation-Messages)
* [High level sequence diagram of code
changes](#High-Level-Sequence-Diagram-of-Code-Changes)

### Negative UNL Protocol Overview

Every ledger stores a list of zero or more unreliable validators. Updates to the
list must be approved by the validators using the consensus mechanism that
validators use to agree on the set of transactions. The list is used only when
checking if a ledger is fully validated. If a validator V is in the list, nodes
with V in their UNL adjust the quorum and V’s validation message is not counted
when verifying if a ledger is fully validated. V’s flow of messages and network
interactions, however, will remain the same.

We define the ***effective UNL*** as *original UNL - negative UNL*, and the
***effective quorum*** as the quorum of the *effective UNL*. And we set
*effective quorum = Ceiling(80% * effective UNL)*.

### Validator Reliability Measurement

A node only measures the reliability of validators on its own UNL, and only
proposes based on local observations. There are many metrics that a node can
measure about its validators, but we have chosen ledger validation messages.
This is because every validator shall send one and only one signed validation
message per ledger. This keeps the measurement simple and removes
timing/clock-sync issues. A node will measure the percentage of agreeing
validation messages (*PAV*) received from each validator on the node's UNL. Note
that the node will only count the validation messages that agree with its own
validations.

We define the **PAV** as the **P**ercentage of **A**greed **V**alidation
messages received for the last N ledgers, where N = 256 by default.

When the PAV drops below the ***low-water mark***, the validator is considered
unreliable, and is a candidate to be disabled by being added to the negative
UNL. A validator must have a PAV higher than the ***high-water mark*** to be
re-enabled. The validator is re-enabled by removing it from the negative UNL. In
the implementation, we plan to set the low-water mark as 50% and the high-water
mark as 80%.
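To make the measurement concrete, here is a minimal editor's sketch of the PAV
bookkeeping described above (illustrative Python only; the class and names are
not from the rippled codebase):

```python
from collections import deque

N = 256           # measurement window, in ledgers
LOW_WATER = 0.5   # PAV below this: candidate to be disabled
HIGH_WATER = 0.8  # PAV above this: candidate to be re-enabled

class ReliabilityScore:
    """Tracks one validator's agreed validations over the last N ledgers."""

    def __init__(self):
        self.window = deque(maxlen=N)  # one bool per recent ledger

    def record(self, agreed: bool):
        # True only if this validator sent a validation that agrees with
        # the local node's own validation for the latest ledger.
        self.window.append(agreed)

    def pav(self) -> float:
        # Ledgers not yet observed count against the validator.
        return sum(self.window) / N
```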
### Format Changes

The negative UNL component in a ledger contains three fields.
* ***NegativeUNL***: The current negative UNL, a list of unreliable validators.
* ***ToDisable***: The validator to be added to the negative UNL on the next
flag ledger.
* ***ToReEnable***: The validator to be removed from the negative UNL on the
next flag ledger.

All three fields are optional. When the *ToReEnable* field exists, the
*NegativeUNL* field cannot be empty.

A new pseudo-transaction, ***UNLModify***, is added. It has three fields:
* ***Disabling***: A flag indicating whether the modification is to disable or
to re-enable a validator.
* ***Seq***: The ledger sequence number.
* ***Validator***: The validator to be disabled or re-enabled.

There would be at most one *disable* `UNLModify` and one *re-enable* `UNLModify`
transaction per flag ledger. The full machinery is described further on.
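For illustration, the `UNLModify` fields map onto a small record type. This is
an editor's sketch in Python, not the wire format; the actual implementation
is a C++ pseudo-transaction:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UNLModify:
    """Illustrative model of the UNLModify pseudo-transaction fields."""
    disabling: bool   # True: disable Validator; False: re-enable it
    seq: int          # sequence number of the flag ledger
    validator: bytes  # public key of the validator being voted on

# The disable vote from the maintenance example below would be modeled as:
# UNLModify(disabling=True, seq=256, validator=V_public_key)
```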
### Negative UNL Maintenance

The negative UNL can only be modified on the flag ledgers. If a validator's
reliability status changes, it takes two flag ledgers to modify the negative
UNL. Let's see an example of the algorithm:

* Ledger seq = 100: A validator V goes offline.
* Ledger seq = 256: This is a flag ledger, and V's reliability measurement *PAV*
is lower than the low-water mark. Other validators add `UNLModify`
pseudo-transactions `{true, 256, V}` to the transaction set which goes through
the consensus. Then the pseudo-transaction is applied to the negative UNL
ledger component by setting `ToDisable = V`.
* Ledger seq = 257 ~ 511: The negative UNL ledger component is copied from the
parent ledger.
* Ledger seq = 512: This is a flag ledger, and the negative UNL is updated
`NegativeUNL = NegativeUNL + ToDisable`.

The negative UNL may have up to `MaxNegativeListed = floor(original UNL * 25%)`
validators. The 25% is because of 75% * 80% = 60%, where 75% = 100% - 25%, 80%
is the quorum of the effective UNL, and 60% is the absolute minimum quorum of
the original UNL. Adding more than 25% validators to the negative UNL does not
improve the liveness of the network, because adding more validators to the
negative UNL cannot lower the effective quorum.

The following is the detailed algorithm (a sketch of the candidate-selection
step follows the list):

* **If** the ledger seq = x is a flag ledger

  1. Compute `NegativeUNL = NegativeUNL + ToDisable - ToReEnable` if they
     exist in the parent ledger

  1. Try to find a candidate to disable if `sizeof NegativeUNL < MaxNegativeListed`

     1. Find a validator V that has a *PAV* lower than the low-water
        mark, but is not in `NegativeUNL`.

     1. If two or more are found, their public keys are XORed with the hash
        of the parent ledger and the one with the lowest XOR result is chosen.

     1. If V is found, create a `UNLModify` pseudo-transaction
        `TxDisableValidator = {true, x, V}`

  1. Try to find a candidate to re-enable if `sizeof NegativeUNL > 0`:

     1. Find a validator U that is in `NegativeUNL` and has a *PAV* higher
        than the high-water mark.

     1. If U is not found, try to find one in `NegativeUNL` but not in the
        local *UNL*.

     1. If two or more are found, their public keys are XORed with the hash
        of the parent ledger and the one with the lowest XOR result is chosen.

     1. If U is found, create a `UNLModify` pseudo-transaction
        `TxReEnableValidator = {false, x, U}`

  1. If any `UNLModify` pseudo-transactions are created, add them to the
     transaction set. The transaction set goes through the consensus algorithm.

  1. If they have enough support, the `UNLModify` pseudo-transactions remain in
     the transaction set agreed by the validators. Then the pseudo-transactions
     are applied to the ledger:

     1. If there is a `TxDisableValidator`, set `ToDisable=TxDisableValidator.V`.
        Else clear `ToDisable`.

     1. If there is a `TxReEnableValidator`, set
        `ToReEnable=TxReEnableValidator.U`. Else clear `ToReEnable`.

* **Else** (not a flag ledger)

  1. Copy the negative UNL ledger component from the parent ledger
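As a sketch of the disable-candidate rules above (an editor's illustration; all
names are hypothetical, and `LOW_WATER` comes from the earlier PAV sketch):

```python
def xor_distance(public_key: bytes, parent_hash: bytes) -> int:
    # XOR the candidate's public key with the parent ledger's hash and
    # compare the results as unsigned integers; the lowest value wins.
    padded = public_key.ljust(len(parent_hash), b'\x00')
    return int.from_bytes(
        bytes(a ^ b for a, b in zip(padded, parent_hash)), 'big')

def pick_disable_candidate(pav_by_validator, negative_unl, parent_hash):
    # Rule: PAV below the low-water mark and not already listed.
    candidates = [v for v, pav in pav_by_validator.items()
                  if pav < LOW_WATER and v not in negative_unl]
    if not candidates:
        return None
    # Deterministic tie-break, so all honest nodes pick the same validator.
    return min(candidates, key=lambda v: xor_distance(v, parent_hash))
```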
The negative UNL is stored on each ledger because we don't know when a validator
may reconnect to the network. If the negative UNL was stored only on every flag
ledger, then a new validator would have to wait until it acquires the latest
flag ledger to know the negative UNL. So any new ledgers created that are not
flag ledgers copy the negative UNL from the parent ledger.

Note that when we have a validator to disable and a validator to re-enable at
the same flag ledger, we create two separate `UNLModify` pseudo-transactions. We
want either one or the other or both to make it into the ledger on their own
merits.

Readers may have noticed that we defined several rules for creating the
`UNLModify` pseudo-transactions but did not describe how to enforce the rules.
The rules are actually enforced by the existing consensus algorithm. Unless
enough validators propose the same pseudo-transaction it will not be included in
the transaction set of the ledger.

### Quorum Size Calculation

The effective quorum is 80% of the effective UNL. Note that because at most 25%
of the original UNL can be on the negative UNL, the quorum should not be lower
than the absolute minimum quorum (i.e. 60%) of the original UNL. However,
considering that different nodes may have different UNLs, to be safe we compute
`quorum = Ceiling(max(60% * original UNL, 80% * effective UNL))`.
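The formula translates directly into code; a minimal editor's sketch:

```python
from math import ceil

def quorum(original_unl: int, negative_unl: int) -> int:
    # quorum = Ceiling(max(60% * original UNL, 80% * effective UNL))
    effective_unl = original_unl - negative_unl
    return ceil(max(0.6 * original_unl, 0.8 * effective_unl))

assert quorum(10, 0) == 8  # the healthy 10-validator example above
assert quorum(10, 2) == 7  # two disabled: max(6, 6.4) rounds up to 7
```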
### Filter Validation Messages

If a validator V is in the negative UNL, it still participates in consensus
sessions in the same way, i.e. V still follows the protocol and publishes
proposal and validation messages. The messages from V are still stored the same
way by everyone, used to calculate the new PAV for V, and could be used in
future consensus sessions if needed. However V's ledger validation message is
not counted when checking if the ledger is fully validated.

### High Level Sequence Diagram of Code Changes

The diagram below is the sequence of one round of consensus. Classes and
components with non-trivial changes are colored green.

* The `ValidatorList` class is modified to compute the quorum of the effective
UNL.

* The `Validations` class provides an interface for querying the validation
messages from trusted validators.

* The `ConsensusAdaptor` component:

  * The `RCLConsensus::Adaptor` class is modified for creating `UNLModify`
    Pseudo-Transactions.

  * The `Change` class is modified for applying `UNLModify`
    Pseudo-Transactions.

* The `Ledger` class is modified for creating and adjusting the negative UNL
ledger component.

* The `LedgerMaster` class is modified for filtering out validation messages
from negative UNL validators when verifying if a ledger is fully
validated.



## Roads Not Taken

### Use a Mechanism Like Fee Voting to Process UNLModify Pseudo-Transactions

The previous version of the negative UNL specification used the same mechanism
as the [fee voting](https://xrpl.org/fee-voting.html#voting-process) for
creating the negative UNL, and used the negative UNL as soon as the ledger was
fully validated. However, the timing of full validation can differ among nodes,
so different negative UNLs could be used, resulting in different effective UNLs
and different quorums for the same ledger. As a result, the network's safety is
impacted.

This updated version does not impact safety, though it operates a bit more
slowly. The negative UNL modifications in the *UNLModify* pseudo-transaction
approved by the consensus will take effect at the next flag ledger. The extra
time of the 256 ledgers should be enough for nodes to be in sync with the
negative UNL modifications.

### Use an Expiration Approach to Re-enable Validators

After a validator disabled by the negative UNL becomes reliable, other
validators explicitly vote for re-enabling it. An alternative approach to
re-enable a validator is the expiration approach, which was considered in the
previous version of the specification. In the expiration approach, every entry
in the negative UNL has a fixed expiration time. One flag ledger interval was
chosen as the expiration interval. Once expired, the other validators must
continue voting to keep the unreliable validator on the negative UNL. The
advantage of this approach is its simplicity. But it has a requirement. The
negative UNL protocol must be able to vote multiple unreliable validators to be
disabled at the same flag ledger. In this version of the specification, however,
only one unreliable validator can be disabled at a flag ledger. So the
expiration approach cannot be simply applied.

### Validator Reliability Measurement and Flag Ledger Frequency

If the ledger time is about 4.5 seconds and the low-water mark is 50%, then in
the worst case, it takes 48 minutes *((0.5 * 256 + 256 + 256) * 4.5 / 60 = 48)*
to put an offline validator on the negative UNL. We considered lowering the flag
ledger frequency so that the negative UNL can be more responsive. We also
considered decoupling the reliability measurement and flag ledger frequency to
be more flexible. In practice, however, their benefits are not clear.

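The worst-case figure can be checked directly. As we read it, the three terms
are the half window needed for the PAV to fall below 50%, the wait for the flag
ledger that sets *ToDisable*, and the wait for the flag ledger where the change
takes effect (editor's sketch):

```python
LEDGER_TIME = 4.5  # seconds per ledger, approximately

# 0.5 * 256 ledgers to drop below the low-water mark, then up to two
# full flag-ledger intervals of 256 ledgers each.
worst_case_minutes = (0.5 * 256 + 256 + 256) * LEDGER_TIME / 60
assert worst_case_minutes == 48.0
```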
## New Attack Vectors
|
||||
|
||||
A group of malicious validators may try to frame a reliable validator and put it
|
||||
on the negative UNL. But they cannot succeed. Because:
|
||||
|
||||
1. A reliable validator sends a signed validation message every ledger. A
|
||||
sufficient peer-to-peer network will propagate the validation messages to other
|
||||
validators. The validators will decide if another validator is reliable or not
|
||||
only by its local observation of the validation messages received. So an honest
|
||||
validator’s vote on another validator’s reliability is accurate.
|
||||
|
||||
1. Given the votes are accurate, and one vote per validator, an honest validator
|
||||
will not create a UNLModify transaction of a reliable validator.
|
||||
|
||||
1. A validator can be added to a negative UNL only through a UNLModify
|
||||
transaction.
|
||||
|
||||
Assuming the group of malicious validators is less than the quorum, they cannot
|
||||
frame a reliable validator.

## Summary

The bullet points below briefly summarize the current proposal:

* The motivation of the negative UNL is to improve the liveness of the network.

* The targeted faults are the ones frequently observed in the production
network.

* Validators propose negative UNL candidates based on their local measurements.

* The absolute minimum quorum is 60% of the original UNL.

* The format of the ledger is changed, and a new *UNLModify* pseudo-transaction
is added. Any tools or systems that rely on the format of these data will have
to be updated.

* The negative UNL can only be modified on the flag ledgers.

* At most one validator can be added to the negative UNL at a flag ledger.

* At most one validator can be removed from the negative UNL at a flag ledger.

* If a validator's reliability status changes, it takes two flag ledgers to
modify the negative UNL.

* The quorum is the larger of 80% of the effective UNL and 60% of the original
UNL (see the sketch after this list).

* If a validator is on the negative UNL, its validation messages are ignored
when the local node verifies if a ledger is fully validated.
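
The quorum rule above can be captured in a few lines. This is a minimal
sketch, not rippled's actual `ValidatorList` code; here `effectiveUnlSize`
means the original UNL size minus the negative UNL members on it:

```cpp
#include <algorithm>
#include <cstddef>

// Quorum: the larger of 80% of the effective UNL and 60% of the
// original UNL, rounded up.
std::size_t quorum(std::size_t originalUnlSize, std::size_t effectiveUnlSize)
{
    std::size_t const q80 = (effectiveUnlSize * 8 + 9) / 10; // ceil(80%)
    std::size_t const q60 = (originalUnlSize * 6 + 9) / 10;  // ceil(60%)
    return std::max(q80, q60);
}

// Example: with 10 trusted validators and 2 of them on the negative UNL,
// quorum(10, 8) == max(7, 6) == 7 instead of 8.
```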

## FAQ

### Question: What are UNLs?

Quote from the [Technical FAQ](https://xrpl.org/technical-faq.html): "They are
the lists of transaction validators a given participant believes will not
conspire to defraud them."

### Question: How does the negative UNL proposal affect network liveness?

The network can make forward progress when more than a quorum of the trusted
validators agree with the progress. The lower the quorum size, the easier it
is for the network to progress. If the quorum is too low, however, the network
is not safe, because nodes may arrive at different results. So the quorum size
used in the consensus protocol is a balance between the safety and the
liveness of the network. The negative UNL reduces the size of the effective
UNL, resulting in a lower quorum size while keeping the network safe.

<h3> Question: How does a validator get into the negative UNL? How is a
validator removed from the negative UNL? </h3>

A validator's reliability is measured by other validators. If a validator
becomes unreliable, at a flag ledger other validators propose *UNLModify*
pseudo-transactions that vote to add the validator to the negative UNL during
the consensus session. If agreed, the validator is added to the negative UNL
at the next flag ledger. The mechanism for removing a validator from the
negative UNL is the same.

### Question: Given a negative UNL, what happens if the UNL changes?

Answer: Let's consider the cases:

1. A validator is added to the UNL, and it is already in the negative UNL.
This case could happen when not all the nodes have the same UNL. Note that the
negative UNL on the ledger lists unreliable nodes that are not necessarily
validators for everyone.

   In this case, the liveness is affected negatively, because the minimum
quorum could be larger while the number of usable validators is not increased.

1. A validator is removed from the UNL, and it is in the negative UNL.

   In this case, the liveness is affected positively, because the quorum could
be smaller while the number of usable validators is not reduced.

1. A validator is added to the UNL, and it is not in the negative UNL.

1. A validator is removed from the UNL, and it is not in the negative UNL.

Cases 3 and 4 are not affected by the negative UNL protocol.

### Question: Can we simply lower the quorum to 60% without the negative UNL?

Answer: No, because the negative UNL approach is safer.

First let's compare the two approaches intuitively: (1) the *negative UNL*
approach, and (2) the *lower quorum* approach, which simply lowers the quorum
from 80% to 60% without the negative UNL. The negative UNL approach uses
consensus to come up with a list of unreliable validators, which are then
removed from the effective UNL temporarily. With this approach, the list of
unreliable validators is agreed to by a quorum of validators and will be used
by every node in the network to adjust its UNL. The quorum is always 80% of
the effective UNL. The lower quorum approach is a tradeoff between safety and
liveness, and goes against our principle of preferring safety over liveness.
Note that with the lower quorum approach, different validators do not have to
agree on which validation sources they are ignoring.

Next we compare the two approaches quantitatively with examples, applying
Theorem 8 of the paper [Analysis of the XRP Ledger Consensus
Protocol](https://arxiv.org/abs/1802.07242):

*XRP LCP guarantees fork safety if **O<sub>i,j</sub> > n<sub>j</sub> / 2 +
n<sub>i</sub> − q<sub>i</sub> + t<sub>i,j</sub>** for every pair of nodes
P<sub>i</sub>, P<sub>j</sub>,*

where *O<sub>i,j</sub>* is the overlapping requirement, *n<sub>j</sub>* and
*n<sub>i</sub>* are the UNL sizes, *q<sub>i</sub>* is the quorum size of
P<sub>i</sub>, *t<sub>i,j</sub> = min(t<sub>i</sub>, t<sub>j</sub>,
O<sub>i,j</sub>)*, and *t<sub>i</sub>* and *t<sub>j</sub>* are the numbers of
faults that can be tolerated by P<sub>i</sub> and P<sub>j</sub>.

We denote *UNL<sub>i</sub>* as *P<sub>i</sub>'s UNL*, and *|UNL<sub>i</sub>|*
as the size of *P<sub>i</sub>'s UNL*.

Assuming *|UNL<sub>i</sub>| = |UNL<sub>j</sub>|*, let's consider the following
three cases:

1. With 80% quorum and 20% faults, *O<sub>i,j</sub> > 100% / 2 + 100% − 80% +
20% = 90%*. I.e. fork safety requires more than 90% UNL overlap. This is one
of the results in the analysis paper.

1. If the quorum is 60%, the relationship between the overlapping requirement
and the faults that can be tolerated is *O<sub>i,j</sub> > 90% +
t<sub>i,j</sub>*. Under the same overlapping condition (i.e. 90%), to
guarantee fork safety the network cannot tolerate any faults. So under the
same overlapping condition, if the quorum is simply lowered, the network can
tolerate fewer faults.

1. With the negative UNL approach, we want to argue that the inequality
*O<sub>i,j</sub> > n<sub>j</sub> / 2 + n<sub>i</sub> − q<sub>i</sub> +
t<sub>i,j</sub>* always holds while the negative UNL protocol runs, i.e. the
effective quorum is lowered without weakening the network's fault tolerance.
To make the discussion easier, we rewrite the inequality as *O<sub>i,j</sub> >
n<sub>j</sub> / 2 + (n<sub>i</sub> − q<sub>i</sub>) + min(t<sub>i</sub>,
t<sub>j</sub>)*, where O<sub>i,j</sub> is dropped from the definition of
t<sub>i,j</sub> because *O<sub>i,j</sub> > min(t<sub>i</sub>, t<sub>j</sub>)*
always holds under the parameters we will use. Assuming a validator V is added
to the negative UNL, let's now consider the four cases:

   1. V is on neither UNL<sub>i</sub> nor UNL<sub>j</sub>

      The inequality holds because none of the variables change.

   1. V is on UNL<sub>i</sub> but not on UNL<sub>j</sub>

      The value of *(n<sub>i</sub> − q<sub>i</sub>)* is smaller. The value of
*min(t<sub>i</sub>, t<sub>j</sub>)* could be smaller too. Other variables do
not change. Overall, the left side of the inequality does not change, but the
right side is smaller. So the inequality holds.

   1. V is not on UNL<sub>i</sub> but on UNL<sub>j</sub>

      The value of *n<sub>j</sub> / 2* is smaller. The value of
*min(t<sub>i</sub>, t<sub>j</sub>)* could be smaller too. Other variables do
not change. Overall, the left side of the inequality does not change, but the
right side is smaller. So the inequality holds.

   1. V is on both UNL<sub>i</sub> and UNL<sub>j</sub>

      The value of *O<sub>i,j</sub>* is reduced by 1. The values of
*n<sub>j</sub> / 2*, *(n<sub>i</sub> − q<sub>i</sub>)*, and
*min(t<sub>i</sub>, t<sub>j</sub>)* are reduced by 0.5, 0.2, and 1
respectively. Overall, the left side of the inequality is reduced by 1, and
the right side is reduced by 1.7. So the inequality holds.

   The inequality holds in all four cases. So with the negative UNL approach,
the network's fork safety is preserved while the quorum is lowered, which
increases the network's liveness.
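
To make the case analysis concrete, here is a small numeric check (our own
illustration, not from the paper) of the rewritten inequality
*O<sub>i,j</sub> > n<sub>j</sub> / 2 + (n<sub>i</sub> − q<sub>i</sub>) +
min(t<sub>i</sub>, t<sub>j</sub>)*:

```cpp
#include <algorithm>
#include <iostream>

// True if the pair (Pi, Pj) satisfies the rewritten fork-safety condition.
bool forkSafe(double O, double ni, double nj, double qi, double ti, double tj)
{
    return O > nj / 2 + (ni - qi) + std::min(ti, tj);
}

int main()
{
    // Case 1 baseline: |UNL| = 100, 80% quorum, 20 faults, 91% overlap.
    std::cout << forkSafe(91, 100, 100, 80, 20, 20) << '\n'; // 1 (safe)

    // Case 4 above: V was on both UNLs. The left side drops by 1, the
    // right side by 0.5 + 0.2 + 1 = 1.7, so the condition still holds.
    std::cout << forkSafe(90, 99, 99, 79.2, 19, 19) << '\n'; // 1 (safe)

    // Simply lowering the quorum to 60% instead: O > 90 + t, so with 91%
    // overlap the network cannot tolerate even one fault.
    std::cout << forkSafe(91, 100, 100, 60, 20, 20) << '\n'; // 0 (unsafe)
}
```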

<h3> Question: We have observed that occasionally a validator wanders off on
its own chain. How is this case handled by the negative UNL algorithm? </h3>

Answer: The case of a validator wandering off on its own chain can be detected
by measuring validation agreement, because the validations by this validator
must differ from other validators' validations with the same sequence numbers.
When there are enough disagreeing validations, other validators will vote this
validator onto the negative UNL.

In general, by measuring the agreement of validations we also measure
"sanity". If two validators have too many disagreements, one of them could be
insane. When enough validators think a validator is insane, that validator is
put on the negative UNL.
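
A minimal sketch of this style of measurement (hypothetical, not rippled's
actual scoring code, and assuming the 50% low-water mark mentioned earlier):

```cpp
#include <cstdint>

// Hypothetical reliability score over a 256-ledger measurement window:
// the fraction of ledgers for which we received an agreeing validation
// from the validator. A validation for a different ledger hash at the
// same sequence ("wandering off") counts as disagreement, like silence.
struct AgreementWindow
{
    std::uint32_t agreed = 0;   // validations matching our ledgers
    std::uint32_t window = 256; // ledgers considered

    // Candidate for the negative UNL when agreement drops below 50%.
    bool belowLowWaterMark() const
    {
        return agreed * 2 < window;
    }
};
```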

<h3> Question: Why would there be at most one disable UNLModify and one
re-enable UNLModify transaction per flag ledger? </h3>

Answer: It is a design choice, so that the effective UNL does not change too
quickly. A typical targeted scenario is several validators going offline
slowly over a long weekend. The current design handles this kind of case well
without changing the effective UNL too quickly.

## Appendix

### Confidence Test

We will use two test networks: a single-machine test network with multiple IP
addresses, and the QE test network with multiple machines. The single-machine
network will be used to run all the test cases and to debug. The QE network
will be used after that; we want to see the test cases still pass with real
network delay. A test case specifies:

1. a UNL with a different number of validators for different test cases,
1. a network with zero or more non-validator nodes,
1. a sequence of validator reliability change events (by killing/restarting
nodes, or by running a modified rippled that does not send all validation
messages), and
1. the correct outcomes.

For all the test cases, the correct outcomes are verified by examining logs.
We will grep the logs to see if the correct negative UNLs are generated, and
whether or not the network is making progress when it should be. The ripdtop
tool will be helpful for monitoring validators' states and ledger progress.
Some of the timing parameters of rippled will be changed for a faster ledger
time. Most if not all test cases will not need client transactions.

For example, the test cases for the prototype:

1. A 10-validator UNL.
1. The network does not have other nodes.
1. The validators will be started from the genesis. Once they start to produce
ledgers, we kill five validators, one every flag ledger interval. Then we
restart them one by one.
1. A sequence of events (or the lack of events), such as a killed validator
being added to the negative UNL.

#### Roads Not Taken: Test with Extended CSF

We considered testing with the current unit test framework, specifically the
[Consensus Simulation
Framework](https://github.com/ripple/rippled/blob/develop/src/test/csf/README.md)
(CSF). However, the CSF currently can only test the generic consensus
algorithm as described in the paper [Analysis of the XRP Ledger Consensus
Protocol](https://arxiv.org/abs/1802.07242).

79 docs/0001-negative-unl/negativeUNLSqDiagram.puml Normal file
@@ -0,0 +1,79 @@

@startuml negativeUNL_highLevel_sequence

skinparam sequenceArrowThickness 2
skinparam roundcorner 20
skinparam maxmessagesize 160

actor "Rippled Start" as RS
participant "Timer" as T
participant "NetworkOPs" as NOP
participant "ValidatorList" as VL #lightgreen
participant "Consensus" as GC
participant "ConsensusAdaptor" as CA #lightgreen
participant "Validations" as RM #lightgreen

RS -> NOP: begin consensus
activate NOP
NOP -[#green]> VL: <font color=green>update negative UNL
hnote over VL#lightgreen: store a copy of\nnegative UNL
VL -> NOP
NOP -> VL: update trusted validators
activate VL
VL -> VL: re-calculate quorum
hnote over VL#lightgreen: ignore negative-listed validators\nwhen calculating quorum
VL -> NOP
deactivate VL
NOP -> GC: start round
activate GC
GC -> GC: phase = OPEN
GC -> NOP
deactivate GC
deactivate NOP

loop at regular frequency
  T -> GC: timerEntry
  activate GC
end

alt phase == OPEN
  alt should close ledger
    GC -> GC: phase = ESTABLISH
    GC -> CA: onClose
    activate CA
    alt sqn%256==0
      CA -[#green]> RM: <font color=green>getValidations
      CA -[#green]> CA: <font color=green>create UNLModify Tx
      hnote over CA#lightgreen: use validations of the last 256 ledgers\nto figure out UNLModify Tx candidates.\nIf any, create UNLModify Tx, and add to TxSet.
    end
    CA -> GC
    GC -> CA: propose
    deactivate CA
  end
else phase == ESTABLISH
  hnote over GC: receive peer positions
  GC -> GC : update our position
  GC -> CA : propose \n(if position changed)
  GC -> GC : check if we have consensus
  alt consensus reached
    GC -> GC: phase = ACCEPT
    GC -> CA : onAccept
    activate CA
    CA -> CA : build LCL
    hnote over CA #lightgreen: copy negative UNL from parent ledger
    alt sqn%256==0
      CA -[#green]> CA: <font color=green>Adjust negative UNL
      CA -[#green]> CA: <font color=green>apply UNLModify Tx
    end
    CA -> CA : validate and send validation message
    activate NOP
    CA -> NOP : end consensus and\n<b>begin next consensus round
    deactivate NOP
    deactivate CA
    hnote over RM: receive validations
  end
else phase == ACCEPTED
  hnote over GC: timerEntry has nothing to do at this phase
end
deactivate GC

@enduml

BIN docs/0001-negative-unl/negativeUNL_highLevel_sequence.png Normal file
Binary file not shown. After Width: | Height: | Size: 138 KiB

88 docs/0010-ledger-replay/README.md Normal file
@@ -0,0 +1,88 @@

# Ledger Replay

`LedgerReplayer` is a new `Stoppable` for replaying ledgers.
Patterned after two other `Stoppable`s under `JobQueue`---`InboundLedgers`
and `InboundTransactions`---it acts as a factory for creating
state-machine workers, and as a network message demultiplexer for those
workers.
Think of these workers as asynchronous functions.
Like functions, they each take a set of parameters.
The `Stoppable` memoizes these functions: it maintains a table for each
worker type, mapping sets of arguments to the worker currently working
on that argument set.
Whenever the `Stoppable` is asked to construct a worker, it first searches its
table to see if there is an existing worker with the same or an overlapping
argument set.
If one exists, it is reused. If not, a new one is created,
initialized, and added to the table.
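
A minimal sketch of that memoization pattern (our own simplification, not the
actual `LedgerReplayer` code, which keeps one such table per worker type):

```cpp
#include <map>
#include <memory>

// Sketch: memoize workers by their argument set. `Key` stands for the
// argument set (e.g. a ledger ID); `Worker` for a SkipListAcquire-like type.
template <class Key, class Worker>
class WorkerTable
{
    std::map<Key, std::shared_ptr<Worker>> table_;

public:
    // Return the existing worker for `key`, or create, initialize, and
    // remember a new one.
    std::shared_ptr<Worker> findOrCreate(Key const& key)
    {
        if (auto it = table_.find(key); it != table_.end())
            return it->second;
        auto worker = std::make_shared<Worker>(key);
        worker->init();
        table_.emplace(key, worker);
        return worker;
    }
};
```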

For `LedgerReplayer`, there are three worker types: `LedgerReplayTask`,
`SkipListAcquire`, and `LedgerDeltaAcquire`.
Each is derived from `TimeoutCounter` to give it a timeout.
For `LedgerReplayTask`, the parameter set
is {reason, finish ledger ID, number of ledgers}. For `SkipListAcquire` and
`LedgerDeltaAcquire`, there is just one parameter: a ledger ID.

Each `Stoppable` has an entry point. For `LedgerReplayer`, it is `replay`.
`replay` creates two workers: a `LedgerReplayTask` and a `SkipListAcquire`.
`LedgerDeltaAcquire`s are created in the callback for when the skip list
returns.

For `SkipListAcquire` and `LedgerDeltaAcquire`, initialization fires off the
underlying asynchronous network request and starts the timeout. The argument
set identifying the worker is included in the network request and copied to
the network response. `SkipListAcquire` sends a request for a proof path for
the skip list of the desired ledger. `LedgerDeltaAcquire` sends a request for
the transaction set of the desired ledger.

`LedgerReplayer` is also a network message demultiplexer.
When a response arrives for a request that was sent by a `SkipListAcquire` or
`LedgerDeltaAcquire` worker, the `Peer` object knows to send it to the
`LedgerReplayer`, which looks up the worker waiting for that response based on
the identifying argument set included in the response.

`LedgerReplayTask` may ask `InboundLedgers` to send requests to acquire
the start ledger, but there is no way to attach a callback or be notified when
the `InboundLedger` worker completes. All the responses for its messages will
be directed to `InboundLedgers`, not `LedgerReplayer`. Instead,
`LedgerReplayTask` checks whether the start ledger has arrived every time its
timeout expires.

Like a promise, each worker keeps track of whether it is pending (`!isDone()`)
or whether it has resolved successfully (`complete_ == true`) or unsuccessfully
(`failed_ == true`). It will never exist in both resolved states at once, nor
will it return to a pending state after reaching a resolved state.

Like promises, some workers can accept continuations to be called when they
reach a resolved state, or immediately if they are already resolved.
`SkipListAcquire` and `LedgerDeltaAcquire` both accept continuations of a type
specific to their payload, both via a method named `addDataCallback()`.
Continuations cannot be removed explicitly, but they are held by
`std::weak_ptr` so they can be removed implicitly.
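
A sketch of that continuation mechanism (simplified, with illustrative names
rather than the exact rippled signatures):

```cpp
#include <functional>
#include <memory>
#include <vector>

// Sketch: continuations held through std::weak_ptr, so a callback whose
// owner has been destroyed is dropped implicitly on the next notification.
template <class Payload>
class DataCallbacks
{
    using Callback = std::function<void(Payload const&)>;
    std::vector<std::weak_ptr<Callback>> callbacks_;

public:
    // Register a continuation; the caller keeps the shared_ptr alive.
    void addDataCallback(std::shared_ptr<Callback> const& cb)
    {
        callbacks_.push_back(cb);
    }

    // Invoke live continuations with the payload; prune dead ones.
    void notify(Payload const& data)
    {
        for (auto it = callbacks_.begin(); it != callbacks_.end();)
        {
            if (auto cb = it->lock())
            {
                (*cb)(data);
                ++it;
            }
            else
                it = callbacks_.erase(it);
        }
    }
};
```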

`LedgerReplayTask` is simultaneously:

1. an asynchronous function,
1. a continuation to one `SkipListAcquire` asynchronous function,
1. a continuation to zero or more `LedgerDeltaAcquire` asynchronous functions, and
1. a continuation to its own timeout.

Each of these roles corresponds to a different entry point:

1. `init()`
1. the callback added to `SkipListAcquire`, which calls `updateSkipList(...)` or `cancel()`
1. the callback added to `LedgerDeltaAcquire`, which calls `deltaReady(...)` or `cancel()`
1. `onTimer()`

Each of these entry points does something unique. They either (a) transition
`LedgerReplayTask` to the terminal failed resolved state (`cancel()` and
`onTimer()`) or (b) try to make progress toward the successful resolved state.
`init()` and `updateSkipList(...)` call `trigger()`, while `deltaReady(...)`
calls `tryAdvance()`. There is a similarity between this pattern and the way
coroutines are implemented, where every yield saves the spot in the code where
it left off and every resume jumps back to that spot.

### Sequence Diagram



### Class Diagram


Some files were not shown because too many files have changed in this diff.