Compare commits

...

103 Commits

Author SHA1 Message Date
Denis Angell
eaaab2309c Merge branch 'dev' into fix-delivered-amt 2024-06-05 12:17:12 +02:00
Wietse Wind
849a4435e0 CI Split jobs with prev job dependency & CI on jshooks (#320)
* CI on `jshooks` branch

* CI Split jobs with prev job dependency

* No multi branch worker in parallel

---------

Co-authored-by: Denis Angell <dangell@transia.co>
2024-05-29 13:45:59 +02:00
Wietse Wind
247e9d98bf CI on jshooks branch (#317) 2024-05-24 10:10:40 +10:00
Denis Angell
dde03634f8 remove print 2024-05-14 06:37:26 +02:00
Denis Angell
49b766181b update tests 2024-05-14 06:14:40 +02:00
Denis Angell
75ab16e989 fix delivered amount 2024-05-13 16:15:41 +02:00
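As background for this fix-delivered-amt branch, a minimal sketch of the delivered-amount convention, using illustrative types only (not the xahaud implementation): when the transaction metadata records a DeliveredAmount it is authoritative, and falling back to the transaction's Amount field is only safe when a partial payment cannot have occurred.

#include <cstdint>
#include <optional>

// Hypothetical helper: prefer the metadata's DeliveredAmount and fall
// back to the transaction Amount only when none was recorded.
std::uint64_t
deliveredDrops(
    std::optional<std::uint64_t> metaDeliveredAmount,
    std::uint64_t txAmount)
{
    return metaDeliveredAmount ? *metaDeliveredAmount : txAmount;
}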
Denis Angell
acd455f5df Fix: Server Definitions Typo (#306) 2024-04-19 07:39:21 +10:00
Denis Angell
6636e3b6fd Add RPC Tests (#295)
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2024-04-18 18:22:46 +10:00
Denis Angell
3c5f118b59 Fix: Add Tx Flags To Server Definitions (#304) 2024-04-18 15:43:48 +10:00
Vasu
88308126cc Removing duplicate macro #define LPAREN ( (#233) 2024-03-25 09:11:21 +11:00
Denis Angell
497e52fcc6 Update Macros (#279)
* add common macros
* remove old/unused macros
2024-03-25 08:43:46 +11:00
Denis Angell
a3852763e7 Fix: Namespace Delete (OwnerCount) (#296)
* fix ns delete owner count
* add a new success code and refactor success checks, limit ns delete operations to 256 entries per txn
---------
Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
2024-03-25 08:37:08 +11:00
RichardAH
7cd8f0a03a ZeroB2M amendment (#293)
* ZeroB2M amendment

Co-authored-by: Denis Angell <dangell@transia.co>
2024-03-22 10:49:35 +11:00
Denis Angell
d24c134612 add emitted order test (#273) 2024-03-11 11:46:03 +11:00
Denis Angell
cdac69a111 Fix: URIToken Test (#254)
* update tests for fixXahauV1
2024-03-11 10:06:15 +11:00
Denis Angell
1500522427 fix tsh on nftoken (#269) 2024-03-11 09:38:45 +11:00
Denis Angell
75aba531d6 Amendment: featureRemit (#278)
* Remit Amendment

Co-authored-by: Denis Angell <dangell@transia.co>

Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
2024-03-11 09:29:39 +11:00
Wietse Wind
caa8b382d8 🤦 2024-02-22 23:28:19 +01:00
Wietse Wind
82e04073be Revert checkout v3 2024-02-14 15:21:43 +01:00
Wietse Wind
e1b78f9682 Do clean 2024-02-14 15:20:24 +01:00
Wietse Wind
901d1d4e8d Update checkout CI to v4 2024-02-14 15:17:45 +01:00
Wietse Wind
aca5241515 Build Container per user 2024-02-14 15:15:12 +01:00
RichardAH
780378c221 fix hook emission ordering (#270) 2024-01-24 19:55:28 +01:00
RichardAH
2dc5e670ac fix buildinfo test (#266) 2024-01-22 12:34:13 +01:00
RichardAH
4dff5a5c8e fix permissions on inject (#264) 2024-01-22 10:48:01 +01:00
Denis Angell
f64e626a3f Fix: TSH Updates & Emitted Txn (#261)
fixXahauV2
* refactor tsh
* add uritoken mint/cancel tsh
* add flags to HookExecutions meta and nonce to HookEmissions meta

Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
2024-01-22 10:25:36 +01:00
RichardAH
f21d3e1e97 Merge pull request #260 from Xahau/emit_guard
Fix: EmittedTxn Reliability
2024-01-19 14:05:06 +01:00
Denis Angell
858055c811 clang-format 2024-01-19 11:07:22 +01:00
Denis Angell
5d66f17574 add test 2024-01-19 11:05:45 +01:00
Denis Angell
00328cb782 Merge branch 'dev' into emit_guard 2024-01-19 10:24:34 +01:00
Richard Holland
fdf7ea4174 dbg inject clang + permission check 2024-01-17 18:15:17 +00:00
Richard Holland
7877ed9704 debug txn injector 2024-01-17 18:07:05 +00:00
Richard Holland
17ccec9ac5 Add additional checks for emitted txns 2024-01-17 15:39:02 +00:00
RichardAH
de522ac4ae Merge pull request #255 from Xahau/candidate
Candidate/release/sync
2023-12-29 22:18:04 +01:00
RichardAH
74c83a9271 Merge branch 'release' into candidate 2023-12-29 21:56:05 +01:00
Wietse Wind
66ee96d456 Build on release after all 2023-12-29 15:43:33 +01:00
Wietse Wind
b476aea55b Do not auto build on release 2023-12-29 15:39:48 +01:00
RichardAH
4ad697069f Fix xahau v1audit (#250) 2023-12-27 14:53:38 +01:00
Denis Angell
97acfe9f97 Fix: URIToken Test (#247) 2023-12-22 11:54:54 +01:00
Denis Angell
475b6f7347 Amendment: Fix Xahau v1 (#231)
* FXV1: Meta Amount (#225)

* FXV1: Optional Offer Sequence (#224)

* FXV1: Patch Hooks OwnerDir (#236)

* FXV1: Fix `Import` Quorum (#235)

* FXV1: Namespace Limit (#220)

* FXV1: allow duplicate entries in genesis mint transactor (#239)

* FXV1: Fix URIToken (#243)

* lite fixes for tsh issues (#244)

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2023-12-21 16:21:17 +01:00
RichardAH
64a07e5539 move utf8 checking to header file (#241)
Co-authored-by: Denis Angell <dangell@transia.co>
2023-12-19 19:52:34 +01:00
Denis Angell
c49b7a2301 Fix: Verify Emitted Result on GenesisMint tests (#242) 2023-12-19 10:08:51 +01:00
Denis Angell
ac694c7c90 Add HookParameters fee calc to all txs (#223) 2023-12-07 14:30:30 +01:00
Denis Angell
3be6baded9 update definitions test (#228) 2023-12-07 13:40:18 +01:00
Richard Holland
49798b1792 make server_definitions caching thread_local 2023-12-05 13:47:54 +00:00
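A minimal sketch of what making the cache thread_local implies, assuming a hypothetical handler shape; the names below are illustrative, not the actual xahaud code:

#include <optional>
#include <string>

struct Definitions
{
    std::string json;  // serialized server_definitions payload
};

// Stand-in for the expensive serialization the real handler performs.
static Definitions
buildDefinitions()
{
    return Definitions{R"({"TYPES":{},"FIELDS":[]})"};
}

Definitions const&
cachedServerDefinitions()
{
    // thread_local: each RPC worker thread builds the payload once and
    // reuses it, avoiding rebuild cost without cross-thread locking.
    thread_local std::optional<Definitions> cache;
    if (!cache)
        cache = buildDefinitions();
    return *cache;
}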
Denis Angell
27f43ba9ee TSH Tests (#222) 2023-12-03 10:13:05 +01:00
Wietse Wind
7881b7f69f Build on most cores (nproc / 3 » nproc / 1.337) (#226)
Use those resources! 🎉🚀
2023-12-01 12:47:50 +01:00
Denis Angell
d2c45bf560 Refactor Testcases (#219)
* Refactor Testcases (#216)

* remove unused function

* add offer id tests

* add escrow id tests

* clang-format

* fix offer

* fix escrow

* fix offer test
2023-11-24 11:41:16 +01:00
RichardAH
c917977eb0 wildcard network (id=65535) ignores signature verification (#201)
* wildcard network (id=65535) ignores signature verification

* move isWildcardNetwork deeper to mimic normal signature checking in multisig

* again

* add wildcard test

* Update Transactor.cpp

* clang-format

* fix UNLReport

* move free pass

* update test for multisign

---------

Co-authored-by: Denis Angell <dangell@transia.co>
2023-11-21 18:47:08 +01:00
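A rough sketch of the behavior this PR describes, with stand-in names (the real change sits in the Transactor signature-checking path): a transaction addressed to NetworkID 65535 receives a free pass on signature verification, while every other network keeps the normal result.

#include <cstdint>

constexpr std::uint32_t wildcardNetworkID = 65535;

// Illustrative check only: the wildcard network bypasses signature
// verification entirely, including in the multisig path.
bool
signatureCheckPasses(std::uint32_t networkID, bool normalSigResult)
{
    if (networkID == wildcardNetworkID)
        return true;  // the "free pass" the commit messages mention
    return normalSigResult;
}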
tequ
4cb93f943b Update RELEASENOTES.XAHAUD.md (#208) 2023-11-17 12:10:35 +01:00
Denis Angell
5447c4010d Add features to server_definitions (#190)
* add features to `server_definitions`

* clang-format

* Update RPCCall.cpp

* only return features without params

* clang-format

* include features in hashed value

* clang-format

* rework features addition to server_definitions to be cached at flag ledgers

* fix clang, duplicate hash key

---------

Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
2023-11-16 23:38:13 +01:00
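The "cached at flag ledgers" rework can be pictured as follows, a sketch assuming flag ledgers every 256 ledgers (as on XRPL-family chains) and illustrative names: the feature set can only change when amendments do, so the cached response needs rebuilding at most once per flag-ledger interval.

#include <cstdint>

constexpr std::uint32_t flagLedgerInterval = 256;

// Hypothetical staleness test: rebuild the cache only when the current
// ledger has crossed into a new flag-ledger interval.
bool
featuresCacheStale(std::uint32_t cachedAtSeq, std::uint32_t currentSeq)
{
    return cachedAtSeq / flagLedgerInterval !=
        currentSeq / flagLedgerInterval;
}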
Denis Angell
b77b0e70e3 Add Script to check for suspicious patterns (#199)
* Create check_keys.sh

* add to workflow
2023-11-16 16:23:55 +01:00
RichardAH
98833e4934 Fix server version (#195)
* modify the way server_version int is built to include build number

* clang

* Update BuildInfo_test.cpp

* clang-format

* Update BuildInfo_test.cpp

* clang-format

* Update BuildInfo_test.cpp

---------

Co-authored-by: Denis Angell <dangell@transia.co>
2023-11-16 16:20:32 +01:00
dorianlynn
b74697ff9a Update settings.json (#205) 2023-11-16 12:47:57 +01:00
Denis Angell
ac1883bbf7 Update Documentation & README (#192)
* update md files

* Update RELEASENOTES.XAHAUD.md

* fixup

* Update README.md

* update readme

* Update README.md

* Update README.md

* update review

* misc fixup

* Update RELEASENOTES.XAHAUD.md

* Update README.md
2023-11-16 12:01:51 +01:00
Denis Angell
4ada3f85bb fix: uritoken destination & amount preflight check (#188)
* fix: uritoken destination & amount

* Update URIToken.cpp

* add lsfBurnable flag

* make uritoken patch a fix amendment

* clang-format
2023-11-10 10:42:57 +01:00
Denis Angell
559b504c7d Update build-in-docker.yml (#196) 2023-11-09 19:44:28 +01:00
Denis Angell
43cb255337 Update Workflow (#193) 2023-11-09 19:01:50 +01:00
Denis Angell
63bb1906ed remove rippled release cmake ci/dpkg (#183)
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2023-11-09 12:58:27 +01:00
Denis Angell
ac6c102876 add response message and remove unused response code (#185) 2023-11-09 12:31:15 +01:00
Denis Angell
195904574c Change TER response codes from _XRP to _NATIVE. (#184)
* Change `_XRP` response codes to `_NATIVE`

* Update ServerDefinitions_test.cpp
2023-11-09 12:25:09 +01:00
Denis Angell
2a18ec563d Fix HBB workflow (#191)
* Update build-in-docker.yml

* Update build-in-docker.yml

* Update build-in-docker.yml

* Update build-core.sh

* update workflow

* Update build-core.sh

* Update build-core.sh

* Update build-core.sh

* Update build-core.sh

* update workflow

* fix workflow

* Update xahaud.binary.dockerfile

* fixup

* fixup

* fixup

* fixup

* fixup

* fixup

* Update build-in-docker.yml
2023-11-09 12:20:51 +01:00
Denis Angell
91f9e424a3 Server Definitions Cleanup (#174)
* update server-definitions + test

* Update ServerDefinitions.cpp

* clang-format
2023-11-06 11:01:37 +01:00
RichardAH
c2e8bd2be1 add native currency to various rpc responses (#179) 2023-11-02 12:15:45 +01:00
RichardAH
4d2ec0004b fix for ctid on tx rpc (#177)
* fix for ctid on tx rpc

* clang-format

---------

Co-authored-by: Denis Angell <dangell@transia.co>
2023-11-02 11:47:57 +01:00
RichardAH
b564fe3c92 fix for delivered_amount (#178) 2023-11-02 11:47:30 +01:00
Denis Angell
04ceb5af9a update workflow (#175) 2023-11-01 17:04:13 +01:00
Denis Angell
7dee512991 Housekeeping (#170)
* Update pull_request_template.md

* strip out unused files

* Update CONTRIBUTING.md

* fix docker

* remove conan stuff

* update license

* update hook directory

* Delete .codecov.yml

* Update .dockerignore

* Update genesis.json

* update validator list example

* Update rippled-standalone.cfg

* Update CONTRIBUTING.md
2023-11-01 14:56:12 +01:00
Denis Angell
70bd7c2ce7 Reintroduce Clang-Format & Levelization (#171)
* clang-format

* levelization

* clang-format

* update workflow (#172)

* update workflow

* Update build-in-docker.yml

* fix from `clang-format`

* Update Enum.h
2023-11-01 14:12:24 +01:00
Denis Angell
1b9373e220 add array xpop path 2023-10-30 15:53:48 +01:00
Denis Angell
c3a0e1df44 add array xpop path 2023-10-30 15:53:01 +01:00
Richard Holland
c65ee9d7f9 final distributions 2023-10-30 11:09:23 +00:00
Richard Holland
090b64728e fix for https://github.com/Xahau/xahaud/issues/161 2023-10-28 14:01:05 +00:00
Richard Holland
283ec19976 additional fix for https://github.com/Xahau/xahaud/issues/158 2023-10-28 13:37:33 +00:00
Richard Holland
ee450926a7 change tequ addr in xahaugenesis 2023-10-28 09:55:41 +00:00
Richard Holland
9ae0a907f5 fix for https://github.com/Xahau/xahaud/issues/158 2023-10-27 16:43:08 +00:00
Richard Holland
81b0bf8041 fix for https://github.com/Xahau/xahaud/issues/155 2023-10-27 10:46:19 +02:00
Denis Angell
c54377eb7a add uritoken delete test 2023-10-27 10:46:19 +02:00
Richard Holland
1516900c4f issuing URITokens is account deletion blocker 2023-10-27 10:46:19 +02:00
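Sketch of what "deletion blocker" means here, with stand-in names (the real check inspects the owner directory during AccountDelete): an account that has issued URITokens cannot be deleted until those tokens are gone.

// Illustrative preclaim fragment; tecHAS_OBLIGATIONS is the result code
// conventionally returned when obligations block account deletion.
enum class Result { tesSUCCESS, tecHAS_OBLIGATIONS };

Result
preclaimAccountDelete(bool ownsIssuedURITokens)
{
    if (ownsIssuedURITokens)
        return Result::tecHAS_OBLIGATIONS;  // deletion blocked
    return Result::tesSUCCESS;
}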
Richard Holland
ac5ff4ca32 remove completely the codepath for undoing l2 votes 2023-10-25 11:54:58 +00:00
Denis Angell
e48e4db9e6 Fix TxQ tests (#151) 2023-10-25 02:38:12 +02:00
Denis Angell
97cac67bba Patch import (#150)
* add index test

* remove comment

* remove warn env
2023-10-24 23:14:19 +02:00
Denis Angell
5c6a855ae3 add lut bad currency check (#147) 2023-10-24 23:12:42 +02:00
Richard Holland
5fcebdabc5 replace vecFromAcc with HBB compiler-friendly version 2023-10-24 17:32:04 +00:00
Richard Holland
69d7d653be testing CI tests without standalone check 2023-10-24 14:04:33 +00:00
Richard Holland
d5d29e319c fix for https://github.com/Xahau/xahaud/issues/148 breaks some tests that lack index field. xls41 needs to be updated to include index field. 2023-10-24 09:11:40 +00:00
Denis Angell
1f052020cb Fixup Import Signers (#138)
* add guard and tests

* disable xpop array feature
2023-10-24 10:57:58 +02:00
Richard Holland
263c6342cf fix for https://github.com/Xahau/xahaud/issues/149 2023-10-20 13:21:30 +00:00
Richard Holland
fcd935cecf fix for https://github.com/Xahau/xahaud/issues/146 2023-10-19 12:22:59 +00:00
Richard Holland
96cf8f089d fix for https://github.com/Xahau/xahaud/issues/145 2023-10-19 12:18:29 +00:00
Denis Angell
adf30fb819 XRP -> XAH (#134)
* `XRP` -> `XAH`

* update rpc

* fix badCurrency

* revert consensus update

* `xah` -> `native` in rpc commands

* Update Consensus.h
2023-10-19 12:11:12 +02:00
Denis Angell
a2c41016b0 add ticket tests (#129) 2023-10-18 10:10:49 +02:00
Denis Angell
251a79e897 add claim reward flag (#139) 2023-10-18 10:09:47 +02:00
Denis Angell
d734fe600b Fix tests re: xahau genesis (#135)
* Update AccountTxPaging_test.cpp

* fix failing tests

* fix failing tests

* Update Import_test.cpp
2023-10-18 10:06:29 +02:00
Richard Holland
a6c86b5a9e fix for https://github.com/Xahau/xahaud/issues/144 2023-10-18 08:04:40 +00:00
Richard Holland
dae172ba22 fix for https://github.com/Xahau/xahaud/issues/143 2023-10-18 08:00:44 +00:00
Richard Holland
d716787378 fix for: https://github.com/Xahau/xahaud/issues/142 2023-10-18 07:41:45 +00:00
Richard Holland
7bfc60c6ad fix for https://github.com/Xahau/xahaud/issues/141 2023-10-16 13:10:26 +00:00
Denis Angell
f341558735 fix import signers check (#136) 2023-10-09 22:26:00 +02:00
Richard Holland
9317562d8b Merge branch 'dev' of github.com:Xahau/xahaud into dev 2023-10-09 13:03:00 +00:00
Richard Holland
3bf3010f50 Xahau mainnet distribution 2023-10-09 13:02:51 +00:00
RichardAH
b4022c35b3 make account starting seq the parent close time to prevent replay attacks in reset networks (#131)
* make account starting seq the parent close time to prevent replay attacks in reset networks

* add tests for activation

---------

Co-authored-by: Denis Angell <dangell@transia.co>
2023-10-09 14:18:23 +02:00
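A sketch of the mitigation, under simplified assumed types: after a network reset, previously signed transactions carry small sequence numbers, so seeding each new account's starting sequence from the parent ledger's close time, a value that only grows, leaves those old transactions permanently unreplayable.

#include <cstdint>

// Illustrative only: the real logic is amendment-gated and works with
// ledger types rather than raw integers.
std::uint32_t
accountStartingSequence(std::uint32_t parentCloseTime, bool featureEnabled)
{
    return featureEnabled ? parentCloseTime : 1;
}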
Richard Holland
428009c8ca ensure XahauGenesis is enabled correctly just after the genesis ledger 2023-10-09 12:15:24 +00:00
374 changed files with 57638 additions and 39935 deletions


@@ -1,5 +0,0 @@
codecov:
ci:
- !appveyor
- travis


@@ -2,6 +2,6 @@
*
# Allow files and directories
!/build-inner.sh
!/release-builder.sh
!/build-core.sh
!/build-full.sh
!/release-builder.sh


@@ -33,10 +33,27 @@ Please check [x] relevant options, delete irrelevant ones.
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Refactor (non-breaking change that only restructures code)
- [ ] Tests (You added tests for code that already exists, or your new feature included in this PR)
- [ ] Documentation Updates
- [ ] Tests (you added tests for code that already exists, or your new feature included in this PR)
- [ ] Documentation update
- [ ] Chore (no impact to binary, e.g. `.gitignore`, formatting, dropping support for older tooling)
- [ ] Release
### API Impact
<!--
Please check [x] relevant options, delete irrelevant ones.
* If there is any impact to the public API methods (HTTP / WebSocket), please update https://github.com/xrplf/rippled/blob/develop/API-CHANGELOG.md
* Update API-CHANGELOG.md and add the change directly in this PR by pushing to your PR branch.
* libxrpl: See https://github.com/XRPLF/rippled/blob/develop/docs/build/depend.md
* Peer Protocol: See https://xrpl.org/peer-protocol.html
-->
- [ ] Public API: New feature (new methods and/or new fields)
- [ ] Public API: Breaking change (in general, breaking changes should only impact the next api_version)
- [ ] `libxrpl` change (any change that may affect `libxrpl` or dependents of `libxrpl`)
- [ ] Peer protocol change (must be backward compatible or bump the peer protocol version)
<!--
## Before / After
If relevant, use this section for an English description of the change at a technical level.
@@ -52,4 +69,4 @@ This section may not be needed if your change includes thoroughly commented unit
<!--
## Future Tasks
For future tasks related to PR.
-->
-->


@@ -2,34 +2,37 @@ name: Build using Docker
on:
push:
branches: [ "dev", "test", "release" ]
branches: [ "dev", "candidate", "release", "jshooks" ]
pull_request:
branches: [ "dev", "release" ]
branches: [ "dev", "candidate", "release", "jshooks" ]
concurrency:
group: ${{ github.workflow }}
cancel-in-progress: false
jobs:
builder:
checkout:
runs-on: [self-hosted, vanity]
steps:
- uses: actions/checkout@v3
with:
clean: false
checkpatterns:
runs-on: [self-hosted, vanity]
needs: checkout
steps:
- name: Check for suspicious patterns
run: /bin/bash suspicious_patterns.sh
build:
runs-on: [self-hosted, vanity]
needs: checkpatterns
steps:
- name: Build using Docker
run: /bin/bash release-builder.sh
unittests:
needs: builder
tests:
runs-on: [self-hosted, vanity]
needs: build
steps:
- name: Unit tests
run: /bin/bash docker-unit-tests.sh
# publisher:
# needs: builder
# runs-on: [self-hosted, vanity]
# steps:
# - uses: actions/upload-artifact@v3
# with:
# name: build-${{ github.run_number }}
# path: |
# release-build/xahaud
# release-build/release.info
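Reading the hunk as a whole: the single builder job is split into a chain of checkout, checkpatterns, build, and tests jobs linked by needs:, and the concurrency group with cancel-in-progress: false appears intended to keep runs from different branches from sharing the self-hosted worker in parallel, matching the commit messages above.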


@@ -8,7 +8,7 @@ jobs:
env:
CLANG_VERSION: 10
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install clang-format
run: |
codename=$( lsb_release --codename --short )
@@ -52,7 +52,7 @@ jobs:
To fix it, you can do one of two things:
1. Download and apply the patch generated as an artifact of this
job to your repo, commit, and push.
2. Run 'git-clang-format --extensions c,cpp,h,cxx,ipp develop'
2. Run 'git-clang-format --extensions c,cpp,h,cxx,ipp dev'
in your repo, commit, and push.
run: |
echo "${PREAMBLE}"


@@ -2,7 +2,7 @@ name: Build and publish Doxygen documentation
on:
push:
branches:
- develop
- dev
jobs:
job:


@@ -8,7 +8,7 @@ jobs:
env:
CLANG_VERSION: 10
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Check levelization
run: Builds/levelization/levelization.sh
- name: Check for differences


@@ -1,169 +0,0 @@
# I don't know what the minimum size is, but we cannot build on t3.micro.
# TODO: Factor common builds between different tests.
# The parameters for our job matrix:
#
# 1. Generator (Make, Ninja, MSBuild)
# 2. Compiler (GCC, Clang, MSVC)
# 3. Build type (Debug, Release)
# 4. Definitions (-Dunity=OFF, -Dassert=ON, ...)
.job_linux_build_test:
only:
variables:
- $CI_PROJECT_URL =~ /^https?:\/\/gitlab.com\//
stage: build
tags:
- linux
- c5.2xlarge
image: thejohnfreeman/rippled-build-ubuntu:4b73694e07f0
script:
- bin/ci/build.sh
- bin/ci/test.sh
cache:
# Use a different key for each unique combination of (generator, compiler,
# build type). Caches are stored as `.zip` files; they are not merged.
# Generate a new key whenever you want to bust the cache, e.g. when the
# dependency versions have been bumped.
# By default, jobs pull the cache. Only a few specially chosen jobs update
# the cache (with policy `pull-push`); one for each unique combination of
# (generator, compiler, build type).
policy: pull
paths:
- .nih_c/
'build+test Make GCC Debug':
extends: .job_linux_build_test
variables:
GENERATOR: Unix Makefiles
COMPILER: gcc
BUILD_TYPE: Debug
cache:
key: 62ada41c-fc9e-4949-9533-736d4d6512b6
policy: pull-push
'build+test Ninja GCC Debug':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: gcc
BUILD_TYPE: Debug
cache:
key: 1665d3eb-6233-4eef-9f57-172636899faa
policy: pull-push
'build+test Ninja GCC Debug -Dstatic=OFF':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: gcc
BUILD_TYPE: Debug
CMAKE_ARGS: '-Dstatic=OFF'
cache:
key: 1665d3eb-6233-4eef-9f57-172636899faa
'build+test Ninja GCC Debug -Dstatic=OFF -DBUILD_SHARED_LIBS=ON':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: gcc
BUILD_TYPE: Debug
CMAKE_ARGS: '-Dstatic=OFF -DBUILD_SHARED_LIBS=ON'
cache:
key: 1665d3eb-6233-4eef-9f57-172636899faa
'build+test Ninja GCC Debug -Dunity=OFF':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: gcc
BUILD_TYPE: Debug
CMAKE_ARGS: '-Dunity=OFF'
cache:
key: 1665d3eb-6233-4eef-9f57-172636899faa
'build+test Ninja GCC Release -Dassert=ON':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: gcc
BUILD_TYPE: Release
CMAKE_ARGS: '-Dassert=ON'
cache:
key: c45ec125-9625-4c19-acf7-4e889d5f90bd
policy: pull-push
'build+test(manual) Ninja GCC Release -Dassert=ON':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: gcc
BUILD_TYPE: Release
CMAKE_ARGS: '-Dassert=ON'
MANUAL_TEST: 'true'
cache:
key: c45ec125-9625-4c19-acf7-4e889d5f90bd
'build+test Make clang Debug':
extends: .job_linux_build_test
variables:
GENERATOR: Unix Makefiles
COMPILER: clang
BUILD_TYPE: Debug
cache:
key: bf578dc2-5277-4580-8de5-6b9523118b19
policy: pull-push
'build+test Ninja clang Debug':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: clang
BUILD_TYPE: Debug
cache:
key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe
policy: pull-push
'build+test Ninja clang Debug -Dunity=OFF':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: clang
BUILD_TYPE: Debug
CMAKE_ARGS: '-Dunity=OFF'
cache:
key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe
'build+test Ninja clang Debug -Dunity=OFF -Dsan=address':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: clang
BUILD_TYPE: Debug
CMAKE_ARGS: '-Dunity=OFF -Dsan=address'
CONCURRENT_TESTS: 1
cache:
key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe
'build+test Ninja clang Debug -Dunity=OFF -Dsan=undefined':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: clang
BUILD_TYPE: Debug
CMAKE_ARGS: '-Dunity=OFF -Dsan=undefined'
cache:
key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe
'build+test Ninja clang Release -Dassert=ON':
extends: .job_linux_build_test
variables:
GENERATOR: Ninja
COMPILER: clang
BUILD_TYPE: Release
CMAKE_ARGS: '-Dassert=ON'
cache:
key: 7751be37-2358-4f08-b1d0-7e72e0ad266d
policy: pull-push

.pre-commit-config.yaml Normal file

@@ -0,0 +1,6 @@
# .pre-commit-config.yaml
repos:
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: v10.0.1
hooks:
- id: clang-format
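Presumably this new file is meant to be activated per clone: running pre-commit install once registers the git hook so clang-format checks staged changes before each commit.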


@@ -1,460 +0,0 @@
# There is a known issue where Travis will have trouble fetching the cache,
# particularly on non-linux builds. Try restarting the individual build
# (probably will not be necessary in the "windep" stages) if the end of the
# log looks like:
#
#---------------------------------------
# attempting to download cache archive
# fetching travisorder/cache--windows-1809-containers-f2bf1c76c7fb4095c897a4999bd7c9b3fb830414dfe91f33d665443b52416d39--compiler-gpp.tgz
# found cache
# adding C:/Users/travis/_cache to cache
# creating directory C:/Users/travis/_cache
# No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
# Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#build-times-out-because-no-output-was-received
# The build has been terminated
#---------------------------------------
language: cpp
dist: bionic
services:
- docker
stages:
- windep-vcpkg
- windep-boost
- build
env:
global:
- DOCKER_IMAGE="rippleci/rippled-ci-builder:2020-01-08"
- CMAKE_EXTRA_ARGS="-Dwerr=ON -Dwextra=ON"
- NINJA_BUILD=true
# change this if we get more VM capacity
- MAX_TIME_MIN=80
- CACHE_DIR=${TRAVIS_HOME}/_cache
- NIH_CACHE_ROOT=${CACHE_DIR}/nih_c
- PARALLEL_TESTS=true
# this is NOT used by linux container based builds (which already have boost installed)
- BOOST_URL='https://boostorg.jfrog.io/artifactory/main/release/1.75.0/source/boost_1_75_0.tar.gz'
# Alternate download location
- BOOST_URL2='https://downloads.sourceforge.net/project/boost/boost/1.75.0/boost_1_75_0.tar.bz2?r=&ts=1594393912&use_mirror=newcontinuum'
# Travis downloader doesn't seem to have updated certs. Using this option
# introduces obvious security risks, but they're Travis's risks.
# Note that this option is only used if the "normal" build fails.
- BOOST_WGET_OPTIONS='--no-check-certificate'
- VCPKG_DIR=${CACHE_DIR}/vcpkg
- USE_CCACHE=true
- CCACHE_BASEDIR=${TRAVIS_HOME}
- CCACHE_NOHASHDIR=true
- CCACHE_DIR=${CACHE_DIR}/ccache
before_install:
- export NUM_PROCESSORS=$(nproc)
- echo "NUM PROC is ${NUM_PROCESSORS}"
- if [ "$(uname)" = "Linux" ] ; then docker pull ${DOCKER_IMAGE}; fi
- if [ "${MATRIX_EVAL}" != "" ] ; then eval "${MATRIX_EVAL}"; fi
- if [ "${CMAKE_ADD}" != "" ] ; then export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} ${CMAKE_ADD}"; fi
- bin/ci/ubuntu/travis-cache-start.sh
matrix:
fast_finish: true
allow_failures:
# TODO these need more investigation
#
# there are a number of UBs caught currently that need triage
- name: ubsan, clang-8
# this one often runs out of memory:
- name: manual tests, gcc-8, release
# The Windows build may fail if any of the dependencies fail, but
# allow the rest of the builds to continue. They may succeed if the
# dependency is already cached. These do not need to be retried if
# _any_ of the Windows builds succeed.
- stage: windep-vcpkg
- stage: windep-boost
# https://docs.travis-ci.com/user/build-config-yaml#usage-of-yaml-anchors-and-aliases
include:
# debug builds
- &linux
stage: build
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/
compiler: gcc-8
name: gcc-8, debug
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
script:
- sudo chmod -R a+rw ${CACHE_DIR}
- ccache -s
- travis_wait ${MAX_TIME_MIN} bin/ci/ubuntu/build-in-docker.sh
- ccache -s
- <<: *linux
compiler: clang-8
name: clang-8, debug
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Debug
- <<: *linux
compiler: clang-8
name: reporting, clang-8, debug
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dreporting=ON"
# coverage builds
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_cov/
compiler: gcc-8
name: coverage, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dcoverage=ON"
- TARGET=coverage_report
- SKIP_TESTS=true
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_cov/
compiler: clang-8
name: coverage, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dcoverage=ON"
- TARGET=coverage_report
- SKIP_TESTS=true
# test-free builds
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/
compiler: gcc-8
name: no-tests-unity, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dtests=OFF"
- SKIP_TESTS=true
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/
compiler: clang-8
name: no-tests-non-unity, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dtests=OFF -Dunity=OFF"
- SKIP_TESTS=true
# nounity
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_nounity/
compiler: gcc-8
name: non-unity, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dunity=OFF"
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_nounity/
compiler: clang-8
name: non-unity, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dunity=OFF"
# manual tests
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_man/
compiler: gcc-8
name: manual tests, gcc-8, debug
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- MANUAL_TESTS=true
# manual tests
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_man/
compiler: gcc-8
name: manual tests, gcc-8, release
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dassert=ON -Dunity=OFF"
- MANUAL_TESTS=true
# release builds
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_release/
compiler: gcc-8
name: gcc-8, release
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dassert=ON -Dunity=OFF"
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_release/
compiler: clang-8
name: clang-8, release
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dassert=ON"
# asan
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_san/
compiler: clang-8
name: asan, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dsan=address"
- ASAN_OPTIONS="print_stats=true:atexit=true"
#- LSAN_OPTIONS="verbosity=1:log_threads=1"
- PARALLEL_TESTS=false
# ubsan
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_san/
compiler: clang-8
name: ubsan, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dsan=undefined"
# once we can run clean under ubsan, add halt_on_error=1 to options below
- UBSAN_OPTIONS="print_stacktrace=1:report_error_type=1"
- PARALLEL_TESTS=false
# tsan
# current tsan failure *might* be related to:
# https://github.com/google/sanitizers/issues/1104
# but we can't get it to run, so leave it disabled for now
# - <<: *linux
# if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_san/
# compiler: clang-8
# name: tsan, clang-8
# env:
# - MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
# - BUILD_TYPE=Release
# - CMAKE_ADD="-Dsan=thread"
# - TSAN_OPTIONS="history_size=3 external_symbolizer_path=/usr/bin/llvm-symbolizer verbosity=1"
# - PARALLEL_TESTS=false
# dynamic lib builds
- <<: *linux
compiler: gcc-8
name: non-static, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dstatic=OFF"
- <<: *linux
compiler: gcc-8
name: non-static + BUILD_SHARED_LIBS, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dstatic=OFF -DBUILD_SHARED_LIBS=ON"
# makefile
- <<: *linux
compiler: gcc-8
name: makefile generator, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- NINJA_BUILD=false
# misc alternative compilers
- <<: *linux
compiler: gcc-9
name: gcc-9
env:
- MATRIX_EVAL="CC=gcc-9 && CXX=g++-9"
- BUILD_TYPE=Debug
- <<: *linux
compiler: clang-9
name: clang-9, debug
env:
- MATRIX_EVAL="CC=clang-9 && CXX=clang++-9"
- BUILD_TYPE=Debug
- <<: *linux
compiler: clang-9
name: clang-9, release
env:
- MATRIX_EVAL="CC=clang-9 && CXX=clang++-9"
- BUILD_TYPE=Release
# verify build with min version of cmake
- <<: *linux
compiler: gcc-8
name: min cmake version
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_EXE=/opt/local/cmake/bin/cmake
- SKIP_TESTS=true
# validator keys project as subproj of rippled
- <<: *linux
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_vkeys/
compiler: gcc-8
name: validator-keys
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dvalidator_keys=ON"
- TARGET=validator-keys
# macos
- &macos
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_mac/
stage: build
os: osx
osx_image: xcode13.1
name: xcode13.1, debug
env:
# put NIH in non-cache location since it seems to
# cause failures when homebrew updates
- NIH_CACHE_ROOT=${TRAVIS_BUILD_DIR}/nih_c
- BLD_CONFIG=Debug
- TEST_EXTRA_ARGS=""
- BOOST_ROOT=${CACHE_DIR}/boost_1_75_0
- >-
CMAKE_ADD="
-DBOOST_ROOT=${BOOST_ROOT}/_INSTALLED_
-DBoost_ARCHITECTURE=-x64
-DBoost_NO_SYSTEM_PATHS=ON
-DCMAKE_VERBOSE_MAKEFILE=ON"
addons:
homebrew:
packages:
- protobuf
- grpc
- pkg-config
- bash
- ninja
- cmake
- wget
- zstd
- libarchive
- openssl@1.1
update: true
install:
- export OPENSSL_ROOT=$(brew --prefix openssl@1.1)
- travis_wait ${MAX_TIME_MIN} Builds/containers/shared/install_boost.sh
- brew uninstall --ignore-dependencies boost
script:
- mkdir -p build.macos && cd build.macos
- cmake -G Ninja ${CMAKE_EXTRA_ARGS} -DCMAKE_BUILD_TYPE=${BLD_CONFIG} ..
- travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose
- ./rippled --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS} ${TEST_EXTRA_ARGS}
- <<: *macos
name: xcode13.1, release
before_script:
- export BLD_CONFIG=Release
- export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} -Dassert=ON"
- <<: *macos
name: ipv6 (macos)
before_script:
- export TEST_EXTRA_ARGS="--unittest-ipv6"
- <<: *macos
osx_image: xcode13.1
name: xcode13.1, debug
# windows
- &windows
if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_win/
os: windows
env:
# put NIH in a non-cached location until
# we come up with a way to stabilize that
# cache on windows (minimize incremental changes)
- CACHE_NAME=win_01
- NIH_CACHE_ROOT=${TRAVIS_BUILD_DIR}/nih_c
- VCPKG_DEFAULT_TRIPLET="x64-windows-static"
- MATRIX_EVAL="CC=cl.exe && CXX=cl.exe"
- BOOST_ROOT=${CACHE_DIR}/boost_1_75
- >-
CMAKE_ADD="
-DCMAKE_PREFIX_PATH=${BOOST_ROOT}/_INSTALLED_
-DBOOST_ROOT=${BOOST_ROOT}/_INSTALLED_
-DBoost_ROOT=${BOOST_ROOT}/_INSTALLED_
-DBoost_DIR=${BOOST_ROOT}/_INSTALLED_/lib/cmake/Boost-1.75.0
-DBoost_COMPILER=vc141
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_TOOLCHAIN_FILE=${VCPKG_DIR}/scripts/buildsystems/vcpkg.cmake
-DVCPKG_TARGET_TRIPLET=x64-windows-static"
stage: windep-vcpkg
name: prereq-vcpkg
install:
- choco upgrade cmake.install
- choco install ninja visualstudio2017-workload-vctools -y
script:
- df -h
- env
- travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh openssl
- travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh grpc
- travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh libarchive[lz4]
# TBD consider rocksdb via vcpkg if/when we can build with the
# vcpkg version
# - travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh rocksdb[snappy,lz4,zlib]
- <<: *windows
stage: windep-boost
name: prereq-keep-boost
install:
- choco upgrade cmake.install
- choco install ninja visualstudio2017-workload-vctools -y
- choco install visualstudio2019buildtools visualstudio2019community visualstudio2019-workload-vctools -y
script:
- export BOOST_TOOLSET=msvc-14.1
- travis_wait ${MAX_TIME_MIN} Builds/containers/shared/install_boost.sh
- &windows-bld
<<: *windows
stage: build
name: windows, debug
before_script:
- export BLD_CONFIG=Debug
script:
- df -h
- . ./bin/sh/setup-msvc.sh
- mkdir -p build.ms && cd build.ms
- cmake -G Ninja ${CMAKE_EXTRA_ARGS} -DCMAKE_BUILD_TYPE=${BLD_CONFIG} ..
- travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose
# override num procs to force fewer unit test jobs
- export NUM_PROCESSORS=2
- travis_wait ${MAX_TIME_MIN} ./rippled.exe --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS}
- <<: *windows-bld
name: windows, release
before_script:
- export BLD_CONFIG=Release
- <<: *windows-bld
name: windows, visual studio, debug
script:
- mkdir -p build.ms && cd build.ms
- export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} -DCMAKE_GENERATOR_TOOLSET=host=x64"
- cmake -G "Visual Studio 15 2017 Win64" ${CMAKE_EXTRA_ARGS} ..
- export DESTDIR=${PWD}/_installed_
- travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose --config ${BLD_CONFIG} --target install
# override num procs to force fewer unit test jobs
- export NUM_PROCESSORS=2
- >-
travis_wait ${MAX_TIME_MIN} "./_installed_/Program Files/rippled/bin/rippled.exe" --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS}
- <<: *windows-bld
name: windows, vc2019
install:
- choco upgrade cmake.install
- choco install ninja -y
- choco install visualstudio2019buildtools visualstudio2019community visualstudio2019-workload-vctools -y
before_script:
- export BLD_CONFIG=Release
# we want to use the boost build from cache, which was built using the
# vs2017 compiler so we need to specify the Boost_COMPILER. BUT, we
# can't use the cmake config files generated by boost b/c they are
# broken for Boost_COMPILER override, so we need to specify both
# Boost_NO_BOOST_CMAKE and a slightly different Boost_COMPILER string
# to make the legacy find module work for us. If the cmake configs are
# fixed in the future, it should be possible to remove these
# workarounds.
- export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} -DBoost_NO_BOOST_CMAKE=ON -DBoost_COMPILER=-vc141"
before_cache:
- if [ $(uname) = "Linux" ] ; then SUDO="sudo"; else SUDO=""; fi
- cd ${TRAVIS_HOME}
- if [ -f cache_ignore.tar ] ; then $SUDO tar xvf cache_ignore.tar; fi
- cd ${TRAVIS_BUILD_DIR}
cache:
timeout: 900
directories:
- $CACHE_DIR
notifications:
email: false


@@ -1,6 +1,6 @@
{
"C_Cpp.formatting": "clangFormat",
"C_Cpp.clang_format_path": "/Users/dustedfloor/projects/transia-rnd/rippled-icv2/.clang-format",
"C_Cpp.clang_format_path": ".clang-format",
"C_Cpp.clang_format_fallbackStyle": "{ ColumnLimit: 0 }",
"[cpp]":{
"editor.wordBasedSuggestions": false,
@@ -10,4 +10,4 @@
"editor.defaultFormatter": "xaver.clang-format",
"editor.formatOnSave": false
}
}
}


@@ -455,6 +455,7 @@ target_sources (rippled PRIVATE
src/ripple/app/tx/impl/GenesisMint.cpp
src/ripple/app/tx/impl/Import.cpp
src/ripple/app/tx/impl/Invoke.cpp
src/ripple/app/tx/impl/Remit.cpp
src/ripple/app/tx/impl/SetSignerList.cpp
src/ripple/app/tx/impl/SetTrust.cpp
src/ripple/app/tx/impl/SignerEntries.cpp
@@ -637,6 +638,7 @@ target_sources (rippled PRIVATE
src/ripple/rpc/handlers/Random.cpp
src/ripple/rpc/handlers/Reservations.cpp
src/ripple/rpc/handlers/RipplePathFind.cpp
src/ripple/rpc/handlers/ServerDefinitions.cpp
src/ripple/rpc/handlers/ServerInfo.cpp
src/ripple/rpc/handlers/ServerState.cpp
src/ripple/rpc/handlers/SignFor.cpp
@@ -702,6 +704,7 @@ if (tests)
src/test/app/AccountDelete_test.cpp
src/test/app/AccountTxPaging_test.cpp
src/test/app/AmendmentTable_test.cpp
src/test/app/BaseFee_test.cpp
src/test/app/Check_test.cpp
src/test/app/ClaimReward_test.cpp
src/test/app/CrossingLimits_test.cpp
@@ -738,6 +741,7 @@ if (tests)
src/test/app/RCLCensorshipDetector_test.cpp
src/test/app/RCLValidations_test.cpp
src/test/app/Regression_test.cpp
src/test/app/Remit_test.cpp
src/test/app/SHAMapStore_test.cpp
src/test/app/SetAuth_test.cpp
src/test/app/SetRegularKey_test.cpp
@@ -753,6 +757,8 @@ if (tests)
src/test/app/ValidatorList_test.cpp
src/test/app/ValidatorSite_test.cpp
src/test/app/SetHook_test.cpp
src/test/app/SetHookTSH_test.cpp
src/test/app/Wildcard_test.cpp
src/test/app/XahauGenesis_test.cpp
src/test/app/tx/apply_test.cpp
#[===============================[
@@ -864,20 +870,28 @@ if (tests)
src/test/jtx/impl/delivermin.cpp
src/test/jtx/impl/deposit.cpp
src/test/jtx/impl/envconfig.cpp
src/test/jtx/impl/escrow.cpp
src/test/jtx/impl/fee.cpp
src/test/jtx/impl/flags.cpp
src/test/jtx/impl/genesis.cpp
src/test/jtx/impl/import.cpp
src/test/jtx/impl/invoice_id.cpp
src/test/jtx/impl/invoke.cpp
src/test/jtx/impl/jtx_json.cpp
src/test/jtx/impl/last_ledger_sequence.cpp
src/test/jtx/impl/memo.cpp
src/test/jtx/impl/multisign.cpp
src/test/jtx/impl/network.cpp
src/test/jtx/impl/offer.cpp
src/test/jtx/impl/owners.cpp
src/test/jtx/impl/paths.cpp
src/test/jtx/impl/pay.cpp
src/test/jtx/impl/paychan.cpp
src/test/jtx/impl/quality2.cpp
src/test/jtx/impl/rate.cpp
src/test/jtx/impl/regkey.cpp
src/test/jtx/impl/reward.cpp
src/test/jtx/impl/remit.cpp
src/test/jtx/impl/sendmax.cpp
src/test/jtx/impl/seq.cpp
src/test/jtx/impl/sig.cpp
@@ -886,6 +900,8 @@ if (tests)
src/test/jtx/impl/token.cpp
src/test/jtx/impl/trust.cpp
src/test/jtx/impl/txflags.cpp
src/test/jtx/impl/unl.cpp
src/test/jtx/impl/uritoken.cpp
src/test/jtx/impl/utility.cpp
#[===============================[
@@ -967,6 +983,7 @@ if (tests)
src/test/rpc/AccountLinesRPC_test.cpp
src/test/rpc/AccountObjects_test.cpp
src/test/rpc/AccountOffers_test.cpp
src/test/rpc/AccountNamespace_test.cpp
src/test/rpc/AccountSet_test.cpp
src/test/rpc/AccountTx_test.cpp
src/test/rpc/AmendmentBlocked_test.cpp
@@ -993,6 +1010,7 @@ if (tests)
src/test/rpc/RPCCall_test.cpp
src/test/rpc/RPCOverload_test.cpp
src/test/rpc/RobustTransaction_test.cpp
src/test/rpc/ServerDefinitions_test.cpp
src/test/rpc/ServerInfo_test.cpp
src/test/rpc/ShardArchiveHandler_test.cpp
src/test/rpc/Status_test.cpp


@@ -1,197 +0,0 @@
#[===================================================================[
package/container targets - (optional)
#]===================================================================]
if (is_root_project)
if (NOT DOCKER)
find_program (DOCKER docker)
endif ()
if (DOCKER)
# if no container label is provided, use current git hash
git_hash (commit_hash)
if (NOT container_label)
set (container_label ${commit_hash})
endif ()
message (STATUS "using [${container_label}] as build container tag...")
file (MAKE_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/packages)
file (MAKE_DIRECTORY ${NIH_CACHE_ROOT}/pkgbuild)
if (is_linux)
execute_process (COMMAND id -u
OUTPUT_VARIABLE DOCKER_USER_ID
OUTPUT_STRIP_TRAILING_WHITESPACE)
message (STATUS "docker local user id: ${DOCKER_USER_ID}")
execute_process (COMMAND id -g
OUTPUT_VARIABLE DOCKER_GROUP_ID
OUTPUT_STRIP_TRAILING_WHITESPACE)
message (STATUS "docker local group id: ${DOCKER_GROUP_ID}")
endif ()
if (DOCKER_USER_ID AND DOCKER_GROUP_ID)
set(map_user TRUE)
endif ()
#[===================================================================[
rpm
#]===================================================================]
add_custom_target (rpm_container
docker build
--pull
--build-arg GIT_COMMIT=${commit_hash}
-t rippleci/rippled-rpm-builder:${container_label}
$<$<BOOL:${rpm_cache_from}>:--cache-from=${rpm_cache_from}>
-f centos-builder/Dockerfile .
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/Builds/containers
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/centos-builder/Dockerfile
Builds/containers/centos-builder/centos_setup.sh
Builds/containers/shared/update-rippled.sh
Builds/containers/shared/update_sources.sh
Builds/containers/shared/rippled.service
Builds/containers/shared/rippled-reporting.service
Builds/containers/packaging/rpm/rippled.spec
Builds/containers/packaging/rpm/build_rpm.sh
Builds/containers/packaging/rpm/50-rippled.preset
Builds/containers/packaging/rpm/50-rippled-reporting.preset
bin/getRippledInfo
)
exclude_from_default (rpm_container)
add_custom_target (rpm
docker run
-e NIH_CACHE_ROOT=/opt/rippled_bld/pkg/.nih_c
-v ${NIH_CACHE_ROOT}/pkgbuild:/opt/rippled_bld/pkg/.nih_c
-v ${CMAKE_CURRENT_SOURCE_DIR}:/opt/rippled_bld/pkg/rippled
-v ${CMAKE_CURRENT_BINARY_DIR}/packages:/opt/rippled_bld/pkg/out
-t rippleci/rippled-rpm-builder:${container_label}
/bin/bash -c "cp -fpu rippled/Builds/containers/packaging/rpm/build_rpm.sh . && ./build_rpm.sh"
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/packaging/rpm/rippled.spec
)
exclude_from_default (rpm)
if (NOT have_package_container)
add_dependencies(rpm rpm_container)
endif ()
#[===================================================================[
dpkg
#]===================================================================]
# currently use ubuntu 18.04 as a base b/c it has one of
# the lower versions of libc among ubuntu and debian releases.
# we could change this in the future and build with some other deb
# based system.
add_custom_target (dpkg_container
docker build
--pull
--build-arg DIST_TAG=18.04
--build-arg GIT_COMMIT=${commit_hash}
-t rippled-dpkg-builder:${container_label}
$<$<BOOL:${dpkg_cache_from}>:--cache-from=${dpkg_cache_from}>
-f ubuntu-builder/Dockerfile .
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/Builds/containers
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/packaging/dpkg/debian/rippled-reporting.links
Builds/containers/packaging/dpkg/debian/copyright
Builds/containers/packaging/dpkg/debian/rules
Builds/containers/packaging/dpkg/debian/rippled-reporting.install
Builds/containers/packaging/dpkg/debian/rippled-reporting.postinst
Builds/containers/packaging/dpkg/debian/rippled.links
Builds/containers/packaging/dpkg/debian/rippled.prerm
Builds/containers/packaging/dpkg/debian/rippled.postinst
Builds/containers/packaging/dpkg/debian/rippled-dev.install
Builds/containers/packaging/dpkg/debian/dirs
Builds/containers/packaging/dpkg/debian/rippled.postrm
Builds/containers/packaging/dpkg/debian/rippled.conffiles
Builds/containers/packaging/dpkg/debian/compat
Builds/containers/packaging/dpkg/debian/source/format
Builds/containers/packaging/dpkg/debian/source/local-options
Builds/containers/packaging/dpkg/debian/README.Debian
Builds/containers/packaging/dpkg/debian/rippled.install
Builds/containers/packaging/dpkg/debian/rippled.preinst
Builds/containers/packaging/dpkg/debian/docs
Builds/containers/packaging/dpkg/debian/control
Builds/containers/packaging/dpkg/debian/rippled-reporting.dirs
Builds/containers/packaging/dpkg/build_dpkg.sh
Builds/containers/ubuntu-builder/Dockerfile
Builds/containers/ubuntu-builder/ubuntu_setup.sh
bin/getRippledInfo
Builds/containers/shared/install_cmake.sh
Builds/containers/shared/update-rippled.sh
Builds/containers/shared/update_sources.sh
Builds/containers/shared/rippled.service
Builds/containers/shared/rippled-reporting.service
Builds/containers/shared/rippled-logrotate
Builds/containers/shared/update-rippled-cron
)
exclude_from_default (dpkg_container)
add_custom_target (dpkg
docker run
-e NIH_CACHE_ROOT=/opt/rippled_bld/pkg/.nih_c
-v ${NIH_CACHE_ROOT}/pkgbuild:/opt/rippled_bld/pkg/.nih_c
-v ${CMAKE_CURRENT_SOURCE_DIR}:/opt/rippled_bld/pkg/rippled
-v ${CMAKE_CURRENT_BINARY_DIR}/packages:/opt/rippled_bld/pkg/out
-t rippled-dpkg-builder:${container_label}
/bin/bash -c "cp -fpu rippled/Builds/containers/packaging/dpkg/build_dpkg.sh . && ./build_dpkg.sh"
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/packaging/dpkg/debian/control
)
exclude_from_default (dpkg)
if (NOT have_package_container)
add_dependencies(dpkg dpkg_container)
endif ()
#[===================================================================[
ci container
#]===================================================================]
# now use the same ubuntu image for our travis-ci docker images,
# but we use a newer distro (18.04 vs 16.04).
#
# the following steps assume the github pkg repo, but it's possible to
# adapt these for other docker hub repositories.
#
# steps for publishing a new CI image when you make changes:
#
# mkdir bld.ci && cd bld.ci && cmake -Dpackages_only=ON -Dcontainer_label=CI_LATEST
# cmake --build . --target ci_container --verbose
# docker tag rippled-ci-builder:CI_LATEST <HUB REPO PATH>/rippled-ci-builder:YYYY-MM-DD
# (NOTE: change YYYY-MM-DD to match current date, or use a different
# tag/version scheme if you prefer)
# docker push <HUB REPO PATH>/rippled-ci-builder:YYYY-MM-DD
# (NOTE: <HUB REPO PATH> is probably your user or org name if using
# docker hub, or it might be something like
# docker.pkg.github.com/ripple/rippled if using the github pkg
# registry. for any registry, you will need to be logged-in via
# docker and have push access.)
#
# ...then change the DOCKER_IMAGE line in .travis.yml :
# - DOCKER_IMAGE="<HUB REPO PATH>/rippled-ci-builder:YYYY-MM-DD"
add_custom_target (ci_container
docker build
--pull
--build-arg DIST_TAG=18.04
--build-arg GIT_COMMIT=${commit_hash}
--build-arg CI_USE=true
-t rippled-ci-builder:${container_label}
$<$<BOOL:${ci_cache_from}>:--cache-from=${ci_cache_from}>
-f ubuntu-builder/Dockerfile .
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/Builds/containers
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/ubuntu-builder/Dockerfile
Builds/containers/ubuntu-builder/ubuntu_setup.sh
)
exclude_from_default (ci_container)
else ()
message (STATUS "docker NOT found -- won't be able to build containers for packaging")
endif ()
endif ()


@@ -1,405 +0,0 @@
#!/usr/bin/env python
# This file is part of rippled: https://github.com/ripple/rippled
# Copyright (c) 2012 - 2017 Ripple Labs Inc.
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
"""
Invocation:
./Builds/Test.py - builds and tests all configurations
The build must succeed without shell aliases for this to work.
To pass flags to cmake, put them at the very end of the command line, after
the -- flag - like this:
./Builds/Test.py -- -j4 # Pass -j4 to cmake --build
Common problems:
1) Boost not found. Solution: export BOOST_ROOT=[path to boost folder]
2) OpenSSL not found. Solution: export OPENSSL_ROOT=[path to OpenSSL folder]
3) cmake is not found. Solution: Be sure cmake directory is on your $PATH
"""
from __future__ import absolute_import, division, print_function, unicode_literals
import argparse
import itertools
import os
import platform
import re
import shutil
import sys
import subprocess
def powerset(iterable):
"""powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"""
s = list(iterable)
return itertools.chain.from_iterable(itertools.combinations(s, r) for r in range(len(s) + 1))
IS_WINDOWS = platform.system().lower() == 'windows'
IS_OS_X = platform.system().lower() == 'darwin'
# CMake
if IS_WINDOWS:
CMAKE_UNITY_CONFIGS = ['Debug', 'Release']
CMAKE_NONUNITY_CONFIGS = ['Debug', 'Release']
else:
CMAKE_UNITY_CONFIGS = []
CMAKE_NONUNITY_CONFIGS = []
CMAKE_UNITY_COMBOS = { '' : [['rippled'], CMAKE_UNITY_CONFIGS],
'.nounity' : [['rippled'], CMAKE_NONUNITY_CONFIGS] }
if IS_WINDOWS:
CMAKE_DIR_TARGETS = { ('msvc' + unity,) : targets for unity, targets in
CMAKE_UNITY_COMBOS.items() }
elif IS_OS_X:
CMAKE_DIR_TARGETS = { (build + unity,) : targets
for build in ['debug', 'release']
for unity, targets in CMAKE_UNITY_COMBOS.items() }
else:
CMAKE_DIR_TARGETS = { (cc + "." + build + unity,) : targets
for cc in ['gcc', 'clang']
for build in ['debug', 'release', 'coverage', 'profile']
for unity, targets in CMAKE_UNITY_COMBOS.items() }
# list of tuples of all possible options
if IS_WINDOWS or IS_OS_X:
CMAKE_ALL_GENERATE_OPTIONS = [tuple(x) for x in powerset(['-GNinja', '-Dassert=true'])]
else:
CMAKE_ALL_GENERATE_OPTIONS = list(set(
[tuple(x) for x in powerset(['-GNinja', '-Dstatic=true', '-Dassert=true', '-Dsan=address'])] +
[tuple(x) for x in powerset(['-GNinja', '-Dstatic=true', '-Dassert=true', '-Dsan=thread'])]))
parser = argparse.ArgumentParser(
description='Test.py - run ripple tests'
)
parser.add_argument(
'--all', '-a',
action='store_true',
help='Build all configurations.',
)
parser.add_argument(
'--keep_going', '-k',
action='store_true',
help='Keep going after one configuration has failed.',
)
parser.add_argument(
'--silent', '-s',
action='store_true',
help='Silence all messages except errors',
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help=('Report more information about which commands are executed and the '
'results.'),
)
parser.add_argument(
'--test', '-t',
default='',
help='Add a prefix for unit tests',
)
parser.add_argument(
'--testjobs',
default='0',
type=int,
help='Run tests in parallel'
)
parser.add_argument(
'--ipv6',
action='store_true',
help='Use IPv6 localhost when running unit tests.',
)
parser.add_argument(
'--clean', '-c',
action='store_true',
help='delete all build artifacts after testing',
)
parser.add_argument(
'--quiet', '-q',
action='store_true',
help='Reduce output where possible (unit tests)',
)
parser.add_argument(
'--dir', '-d',
default=(),
nargs='*',
help='Specify one or more CMake dir names. '
'Will also be used as -Dtarget=<dir> running cmake.'
)
parser.add_argument(
'--target',
default=(),
nargs='*',
help='Specify one or more CMake build targets. '
'Will be used as --target <target> running cmake --build.'
)
parser.add_argument(
'--config',
default=(),
nargs='*',
help='Specify one or more CMake build configs. '
'Will be used as --config <config> running cmake --build.'
)
parser.add_argument(
'--generator_option',
action='append',
help='Specify a CMake generator option. Repeat for multiple options. '
'Will be passed to the cmake generator. '
'Due to limits of the argument parser, arguments starting with \'-\' '
'must be attached to this option. e.g. --generator_option=-GNinja.')
parser.add_argument(
'--build_option',
action='append',
help='Specify a build option. Repeat for multiple options. '
'Will be passed to the build tool via cmake --build. '
'Due to limits of the argument parser, arguments starting with \'-\' '
'must be attached to this option. e.g. --build_option=-j8.')
parser.add_argument(
'extra_args',
default=(),
nargs='*',
help='Extra arguments are passed through to the tools'
)
ARGS = parser.parse_args()
def decodeString(line):
# Python 2 vs. Python 3
if isinstance(line, str):
return line
else:
return line.decode()
def shell(cmd, args=(), silent=False, cust_env=None):
""""Execute a shell command and return the output."""
silent = ARGS.silent or silent
verbose = not silent and ARGS.verbose
if verbose:
print('$' + cmd, *args)
command = (cmd,) + args
# shell is needed in Windows to find executable in the path
process = subprocess.Popen(
command,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
env=cust_env,
shell=IS_WINDOWS)
lines = []
count = 0
# readline returns '' at EOF
for line in iter(process.stdout.readline, ''):
if process.poll() is None:
decoded = decodeString(line)
lines.append(decoded)
if verbose:
print(decoded, end='')
elif not silent:
count += 1
if count >= 80:
print()
count = 0
else:
print('.', end='')
else:
break
if not verbose and count:
print()
process.wait()
return process.returncode, lines
def get_cmake_dir(cmake_dir):
return os.path.join('build' , 'cmake' , cmake_dir)
def run_cmake(directory, cmake_dir, args):
print('Generating build in', directory, 'with', *args or ('default options',))
old_dir = os.getcwd()
if not os.path.exists(directory):
os.makedirs(directory)
os.chdir(directory)
if IS_WINDOWS and not any(arg.startswith("-G") for arg in args) and not os.path.exists("CMakeCache.txt"):
if '--ninja' in args:
args += ( '-GNinja', )
else:
args += ( '-GVisual Studio 14 2015 Win64', )
# hack to extract cmake options/args from the legacy target format
if re.search('\.unity', cmake_dir):
args += ( '-Dunity=ON', )
if re.search('\.nounity', cmake_dir):
args += ( '-Dunity=OFF', )
if re.search('coverage', cmake_dir):
args += ( '-Dcoverage=ON', )
if re.search('profile', cmake_dir):
args += ( '-Dprofile=ON', )
if re.search('debug', cmake_dir):
args += ( '-DCMAKE_BUILD_TYPE=Debug', )
if re.search('release', cmake_dir):
args += ( '-DCMAKE_BUILD_TYPE=Release', )
m = re.search('gcc(-[^.]*)', cmake_dir)
if m:
args += ( '-DCMAKE_C_COMPILER=' + m.group(0),
'-DCMAKE_CXX_COMPILER=g++' + m.group(1), )
elif re.search('gcc', cmake_dir):
args += ( '-DCMAKE_C_COMPILER=gcc', '-DCMAKE_CXX_COMPILER=g++', )
m = re.search('clang(-[^.]*)', cmake_dir)
if m:
args += ( '-DCMAKE_C_COMPILER=' + m.group(0),
'-DCMAKE_CXX_COMPILER=clang++' + m.group(1), )
elif re.search('clang', cmake_dir):
args += ( '-DCMAKE_C_COMPILER=clang', '-DCMAKE_CXX_COMPILER=clang++', )
args += ( os.path.join('..', '..', '..'), )
resultcode, lines = shell('cmake', args)
if resultcode:
print('Generating FAILED:')
if not ARGS.verbose:
print(*lines, sep='')
sys.exit(1)
os.chdir(old_dir)
def run_cmake_build(directory, target, config, args):
print('Building', target, config, 'in', directory, 'with', *args or ('default options',))
build_args=('--build', directory)
if target:
build_args += ('--target', target)
if config:
build_args += ('--config', config)
if args:
build_args += ('--',)
build_args += tuple(args)
resultcode, lines = shell('cmake', build_args)
if resultcode:
print('Build FAILED:')
if not ARGS.verbose:
print(*lines, sep='')
sys.exit(1)
def run_cmake_tests(directory, target, config):
failed = []
if IS_WINDOWS:
target += '.exe'
executable = os.path.join(directory, config if config else 'Debug', target)
if(not os.path.exists(executable)):
executable = os.path.join(directory, target)
print('Unit tests for', executable)
testflag = '--unittest'
quiet = ''
testjobs = ''
ipv6 = ''
if ARGS.test:
testflag += ('=' + ARGS.test)
if ARGS.quiet:
quiet = '-q'
if ARGS.ipv6:
ipv6 = '--unittest-ipv6'
if ARGS.testjobs:
testjobs = ('--unittest-jobs=' + str(ARGS.testjobs))
resultcode, lines = shell(executable, (testflag, quiet, testjobs, ipv6))
if resultcode:
if not ARGS.verbose:
print('ERROR:', *lines, sep='')
failed.append([target, 'unittest'])
return failed
def main():
all_failed = []
if ARGS.all:
build_dir_targets = CMAKE_DIR_TARGETS
generator_options = CMAKE_ALL_GENERATE_OPTIONS
else:
build_dir_targets = { tuple(ARGS.dir) : [ARGS.target, ARGS.config] }
if ARGS.generator_option:
generator_options = [tuple(ARGS.generator_option)]
else:
generator_options = [tuple()]
if not build_dir_targets:
# Let CMake choose the build tool.
build_dir_targets = { () : [] }
if ARGS.build_option:
ARGS.build_option = ARGS.build_option + list(ARGS.extra_args)
else:
ARGS.build_option = list(ARGS.extra_args)
for args in generator_options:
for build_dirs, (build_targets, build_configs) in build_dir_targets.items():
if not build_dirs:
build_dirs = ('default',)
if not build_targets:
build_targets = ('rippled',)
if not build_configs:
build_configs = ('',)
for cmake_dir in build_dirs:
cmake_full_dir = get_cmake_dir(cmake_dir)
run_cmake(cmake_full_dir, cmake_dir, args)
for target in build_targets:
for config in build_configs:
run_cmake_build(cmake_full_dir, target, config, ARGS.build_option)
failed = run_cmake_tests(cmake_full_dir, target, config)
if failed:
print('FAILED:', *(':'.join(f) for f in failed))
if not ARGS.keep_going:
sys.exit(1)
else:
all_failed.extend([decodeString(cmake_dir +
"." + target + "." + config), ':'.join(f)]
for f in failed)
else:
print('Success')
if ARGS.clean:
shutil.rmtree(cmake_full_dir)
if all_failed:
if len(all_failed) > 1:
print()
print('FAILED:', *(':'.join(f) for f in all_failed))
sys.exit(1)
if __name__ == '__main__':
main()
sys.exit(0)


@@ -1 +0,0 @@
[Build instructions are currently located in `BUILD.md`](../../BUILD.md)


@@ -1,45 +0,0 @@
{
// See https://go.microsoft.com//fwlink//?linkid=834763 for more information about this file.
"configurations": [
{
"name": "x64-Debug",
"generator": "Visual Studio 16 2019",
"configurationType": "Debug",
"inheritEnvironments": [ "msvc_x64_x64" ],
"buildRoot": "${thisFileDir}\\build\\${name}",
"cmakeCommandArgs": "",
"buildCommandArgs": "-v:minimal",
"ctestCommandArgs": "",
"variables": [
{
"name": "BOOST_ROOT",
"value": "C:\\lib\\boost"
},
{
"name": "OPENSSL_ROOT",
"value": "C:\\lib\\OpenSSL-Win64"
}
]
},
{
"name": "x64-Release",
"generator": "Visual Studio 16 2019",
"configurationType": "Release",
"inheritEnvironments": [ "msvc_x64_x64" ],
"buildRoot": "${thisFileDir}\\build\\${name}",
"cmakeCommandArgs": "",
"buildCommandArgs": "-v:minimal",
"ctestCommandArgs": "",
"variables": [
{
"name": "BOOST_ROOT",
"value": "C:\\lib\\boost"
},
{
"name": "OPENSSL_ROOT",
"value": "C:\\lib\\OpenSSL-Win64"
}
]
}
]
}


@@ -1,263 +0,0 @@
# Visual Studio 2019 Build Instructions
## Important
We do not recommend Windows for rippled production use at this time. Currently,
the Ubuntu platform has received the highest level of quality assurance,
testing, and support. Additionally, 32-bit Windows versions are not supported.
## Prerequisites
To clone the source code repository, create branches for inspection or
modification, build rippled under Visual Studio, and run the unit tests, you
will need these software components:
| Component | Minimum Recommended Version |
|-----------|-----------------------|
| [Visual Studio 2019](README.md#install-visual-studio-2019)| 16.0 |
| [Git for Windows](README.md#install-git-for-windows)| 2.16.1 |
| [OpenSSL Library](README.md#install-openssl) | 1.1.1L |
| [Boost library](README.md#build-boost) | 1.70.0 |
| [CMake for Windows](README.md#optional-install-cmake-for-windows)* | 3.12 |
\* Only needed if you are not using the integrated CMake in VS 2019 and prefer to generate dedicated project/solution files.
## Install Software
### Install Visual Studio 2019
If not already installed on your system, download your choice of installer from
the [Visual Studio 2019
Download](https://www.visualstudio.com/downloads/download-visual-studio-vs)
page, run the installer, and follow the directions. **You may need to choose the
`Desktop development with C++` workload to install all necessary C++ features.**
Any version of Visual Studio 2019 may be used to build rippled. The **Visual
Studio 2019 Community** edition is available free of charge (see [the product
page](https://www.visualstudio.com/products/visual-studio-community-vs) for
licensing details), while paid editions may be used for an initial free-trial
period.
### Install Git for Windows
Git is a distributed revision control system. The Windows version also provides
the bash shell and many Windows versions of Unix commands. While there are other
varieties of Git (such as TortoiseGit, which has a native Windows interface and
integrates with the Explorer shell), we recommend installing [Git for
Windows](https://git-scm.com/) since it provides a Unix-like command line
environment useful for running shell scripts. Use of the bash shell under
Windows is mandatory for running the unit tests.
### Install OpenSSL
[Download the latest version of
OpenSSL.](http://slproweb.com/products/Win32OpenSSL.html) There will be
several `Win64` variants available; you want the non-light `v1.1` line. As of
this writing, you **should** select
* Win64 OpenSSL v1.1.1q
and should **not** select
* Anything with "Win32" in the name
* Anything with "light" in the name
* Anything with "EXPERIMENTAL" in the name
* Anything in the 3.0 line - rippled won't currently build with this version.
Run the installer, and choose an appropriate location for your OpenSSL
installation. In this guide we use `C:\lib\OpenSSL-Win64` as the destination
location.
You may be informed when running the installer that the "Visual C++ 2008
Redistributables" must be installed first. If so, download it from the
[same page](http://slproweb.com/products/Win32OpenSSL.html), again making sure
to get the correct 32-/64-bit variant.
* NOTE: Since rippled links statically to OpenSSL, it does not matter where the
OpenSSL .DLL files are placed, or what version they are. rippled does not use
or require any external .DLL files to run other than the standard operating
system ones.
### Build Boost
Boost 1.70 or later is required.
[Download boost](http://www.boost.org/users/download/) and unpack it
to `c:\lib`. As of this writing, the most recent version of boost is 1.80.0,
which will unpack into a directory named `boost_1_80_0`. We recommend either
renaming this directory to `boost`, or creating a junction link `mklink /J boost
boost_1_80_0`, so that you can more easily switch between versions.
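For example, assuming you unpacked Boost to `c:\lib\boost_1_80_0` as described above, the junction can be created like this (`mklink` is a `cmd` builtin, so run it from a regular command prompt rather than PowerShell):
```
cd C:\lib
mklink /J boost boost_1_80_0
```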
Next, open a **Developer Command Prompt** and type the following commands:
```powershell
cd C:\lib\boost
bootstrap
```
The rippled application is linked statically to the standard runtimes and
external dependencies on Windows, to ensure that the behavior of the executable
is not affected by changes in outside files. Therefore, it is necessary to build
the required boost static libraries using this command:
```powershell
b2 -j<Num Parallel> --toolset=msvc-14.2 address-model=64 architecture=x86 link=static threading=multi runtime-link=shared,static stage
```
where you should replace `<Num Parallel>` with the number of parallel
invocations to use for the build, e.g. `b2 -j8 ...` would use up to 8
concurrent build shell commands.
Building the boost libraries may take considerable time. When the build process
is completed, take note of both the reported compiler include paths and linker
library paths as they will be required later.
### (Optional) Install CMake for Windows
[CMake](http://cmake.org) is a cross platform build system generator. Visual
Studio 2019 includes an integrated version of CMake that avoids having to
manually run CMake, but it is undergoing continuous improvement. Users that
prefer to use standard Visual Studio project and solution files need to install
a dedicated version of CMake to generate them. The latest version can be found
at the [CMake download site](https://cmake.org/download/). It is recommended you
select the install option to add CMake to your path.
## Clone the rippled repository
If you are familiar with cloning github repositories, just follow your normal
process and clone `git@github.com:XRPLF/rippled.git`. Otherwise follow this
section for instructions.
1. If you don't have a github account, sign up for one at
[github.com](https://github.com/).
2. Make sure you have Github ssh keys. For help see
[generating-ssh-keys](https://help.github.com/articles/generating-ssh-keys).
Open the "Git Bash" shell that was installed with "Git for Windows" in the step
above. Navigate to the directory where you want to clone rippled (git bash uses
`/c` for Windows's `C:` and forward slash where Windows uses backslash, so
`C:\Users\joe\projs` would be `/c/Users/joe/projs` in git bash). Now clone the
repository and optionally switch to the *master* branch. Type the following at
the bash prompt:
```powershell
git clone git@github.com:XRPLF/rippled.git
cd rippled
```
If you receive an error about not having the "correct access rights" make sure
you have Github ssh keys, as described above.
For a stable release, choose the `master` branch or one of the tagged releases
listed on [rippled's GitHub page](https://github.com/XRPLF/rippled/releases).
```
git checkout master
```
To test the latest release candidate, choose the `release` branch.
```
git checkout release
```
If you are doing development work and want the latest set of beta features,
you can consider using the `develop` branch instead.
```
git checkout develop
```
# Build using Visual Studio integrated CMake
In Visual Studio 2017, Microsoft added [integrated IDE support for
cmake](https://blogs.msdn.microsoft.com/vcblog/2016/10/05/cmake-support-in-visual-studio/).
To begin, simply:
1. Launch Visual Studio and choose **File | Open | Folder**, navigating to the
cloned rippled folder.
2. Right-click on `CMakeLists.txt` in the **Solution Explorer - Folder View** to
generate a `CMakeSettings.json` file. A sample settings file is provided
[here](/Builds/VisualStudio2019/CMakeSettings-example.json). Customize the
settings for `BOOST_ROOT` and `OPENSSL_ROOT` to match your install paths if they
differ from those in the file.
3. Select either the `x64-Release` or `x64-Debug` configuration from the
**Project Settings** drop-down. This should invoke the built-in CMake project
generator. If not, you can right-click on the `CMakeLists.txt` file and
choose **Configure rippled**.
4. Select the `rippled.exe`
option in the **Select Startup Item** drop-down. This will be the target
built when you press F7. Alternatively, you can choose a target to build from
the top-level **CMake | Build** menu. Note that at this time, there are other
targets listed that come from third-party Visual Studio files embedded in the
rippled repo, e.g. `datagen.vcxproj`. Please ignore them.
For details on configuring debugging sessions or further customization of CMake,
please refer to the [CMake tools for VS
documentation](https://docs.microsoft.com/en-us/cpp/ide/cmake-tools-for-visual-cpp).
If using the provided `CMakeSettings.json` file, the executable will be in
```
.\build\x64-Release\Release\rippled.exe
```
or
```
.\build\x64-Debug\Debug\rippled.exe
```
These paths are relative to your cloned git repository.
# Build using stand-alone CMake
This requires having installed [CMake for
Windows](README.md#optional-install-cmake-for-windows). We do not recommend
mixing this method with the integrated CMake method for the same repository
clone. Assuming you included the cmake executable folder in your path,
execute the following commands within your `rippled` cloned repository:
```
mkdir build\cmake
cd build\cmake
cmake ..\.. -G"Visual Studio 16 2019" -Ax64 -DBOOST_ROOT="C:\lib\boost" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64" -DCMAKE_GENERATOR_TOOLSET=host=x64
```
Now launch Visual Studio 2019 and select **File | Open | Project/Solution**.
Navigate to the `build\cmake` folder created above and select the `rippled.sln`
file. You can then choose whether to build the `Debug` or `Release` solution
configuration.
The executable will be in
```
.\build\cmake\Release\rippled.exe
```
or
```
.\build\cmake\Debug\rippled.exe
```
These paths are relative to your cloned git repository.
# Unity/No-Unity Builds
The rippled build system defaults to using
[unity source files](http://onqtam.com/programming/2018-07-07-unity-builds/)
to improve build times. In some cases it might be desirable to disable the
unity build and compile individual translation units. Here is how you can
switch to a "no-unity" build configuration:
## Visual Studio Integrated CMake
Edit your `CMakeSettings.json` (described above) by adding `-Dunity=OFF`
to the `cmakeCommandArgs` entry for each build configuration.
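For example, based on the sample settings file shown earlier, an `x64-Debug` entry with unity builds disabled might look like the following sketch (the `variables` section is unchanged and omitted here):
```json
{
  "name": "x64-Debug",
  "generator": "Visual Studio 16 2019",
  "configurationType": "Debug",
  "inheritEnvironments": [ "msvc_x64_x64" ],
  "buildRoot": "${thisFileDir}\\build\\${name}",
  "cmakeCommandArgs": "-Dunity=OFF",
  "buildCommandArgs": "-v:minimal",
  "ctestCommandArgs": ""
}
```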
## Standalone CMake Builds
When running cmake to generate the Visual Studio project files, add
`-Dunity=OFF` to the command line options passed to cmake.
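For example, the generation command from the standalone CMake section above becomes:
```
cmake ..\.. -G"Visual Studio 16 2019" -Ax64 -DBOOST_ROOT="C:\lib\boost" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64" -DCMAKE_GENERATOR_TOOLSET=host=x64 -Dunity=OFF
```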
**Note:** you will need to re-run the cmake configuration step anytime you
want to switch between unity/no-unity builds.
# Unit Test (Recommended)
`rippled` builds a set of unit tests into the server executable. To run these
unit tests after building, pass the `--unittest` option to the compiled
`rippled` executable. The executable will exit with summary info after running
the unit tests.
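For example, assuming the standalone CMake `Release` build described above, the full test suite can be run with:
```
.\build\cmake\Release\rippled.exe --unittest
```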


@@ -1,7 +0,0 @@
#!/usr/bin/env bash
num_procs=$(lscpu -p | grep -v '^#' | sort -u -t, -k 2,4 | wc -l) # number of physical cores
path=$(cd $(dirname $0) && pwd)
cd $(dirname $path)
${path}/Test.py -a -c --testjobs=${num_procs} -- -j${num_procs}


@@ -1,31 +0,0 @@
# rippled Packaging and Containers
This folder contains docker container definitions and configuration
files to support building rpm and deb packages of rippled. The container
definitions include some additional software/packages that are used
for general build/test CI workflows of rippled but are not explicitly
needed for the package building workflow.
## CMake Targets
If you have docker installed on your local system, then the main
CMake file will enable several targets related to building packages:
`rpm_container`, `rpm`, `dpkg_container`, and `dpkg`. The package targets
depend on the container targets and will trigger a build of those first.
The container builds can take tens of minutes to complete (depending
on hardware specs), so quick build cycles are not possible currently. As
such, these targets are often best suited to CI/automated build systems.
The package build can be invoked like any other cmake target from the
rippled root folder:
```
mkdir -p build/pkg && cd build/pkg
cmake -Dpackages_only=ON ../..
cmake --build . --target rpm
```
Upon successful completion, the generated package files will be in
the `build/pkg/packages` directory. For deb packages, simply replace
`rpm` with `dpkg` in the build command above.
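For example, the equivalent deb package build is:
```
mkdir -p build/pkg && cd build/pkg
cmake -Dpackages_only=ON ../..
cmake --build . --target dpkg
```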


@@ -1,26 +0,0 @@
FROM rippleci/centos:7
ARG GIT_COMMIT=unknown
ARG CI_USE=false
LABEL git-commit=$GIT_COMMIT
COPY centos-builder/centos_setup.sh /tmp/
COPY shared/install_cmake.sh /tmp/
RUN chmod +x /tmp/centos_setup.sh && \
chmod +x /tmp/install_cmake.sh
RUN /tmp/centos_setup.sh
RUN /tmp/install_cmake.sh 3.16.3 /opt/local/cmake-3.16
RUN ln -s /opt/local/cmake-3.16 /opt/local/cmake
ENV PATH="/opt/local/cmake/bin:$PATH"
# TODO: Install latest CMake for testing
RUN if [ "${CI_USE}" = true ] ; then /tmp/install_cmake.sh 3.16.3 /opt/local/cmake-3.16; fi
RUN mkdir -m 777 -p /opt/rippled_bld/pkg
WORKDIR /opt/rippled_bld/pkg
RUN mkdir -m 777 ./rpmbuild
RUN mkdir -m 777 ./rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
COPY packaging/rpm/build_rpm.sh ./
CMD ./build_rpm.sh


@@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -ex
source /etc/os-release
yum -y upgrade
yum -y update
yum -y install epel-release centos-release-scl
yum -y install \
wget curl time gcc-c++ yum-utils autoconf automake pkgconfig libtool \
libstdc++-static rpm-build gnupg which make cmake \
devtoolset-11 devtoolset-11-gdb devtoolset-11-binutils devtoolset-11-libstdc++-devel \
devtoolset-11-libasan-devel devtoolset-11-libtsan-devel devtoolset-11-libubsan-devel devtoolset-11-liblsan-devel \
flex flex-devel bison bison-devel parallel \
ncurses ncurses-devel ncurses-libs graphviz graphviz-devel \
lzip p7zip bzip2 bzip2-devel lzma-sdk lzma-sdk-devel xz-devel \
zlib zlib-devel zlib-static texinfo openssl openssl-static \
jemalloc jemalloc-devel \
libicu-devel htop \
rh-python38 \
ninja-build git svn \
swig perl-Digest-MD5


@@ -1,28 +0,0 @@
#!/usr/bin/env sh
set -ex
pkgtype=$1
if [ "${pkgtype}" = "rpm" ] ; then
container_name="${RPM_CONTAINER_NAME}"
elif [ "${pkgtype}" = "dpkg" ] ; then
container_name="${DPKG_CONTAINER_NAME}"
else
echo "invalid package type"
exit 1
fi
if docker pull "${ARTIFACTORY_HUB}/${container_name}:latest_${CI_COMMIT_REF_SLUG}"; then
echo "found container for latest - using as cache."
docker tag \
"${ARTIFACTORY_HUB}/${container_name}:latest_${CI_COMMIT_REF_SLUG}" \
"${container_name}:latest_${CI_COMMIT_REF_SLUG}"
CMAKE_EXTRA="-D${pkgtype}_cache_from=${container_name}:latest_${CI_COMMIT_REF_SLUG}"
fi
cmake --version
test -d build && rm -rf build
mkdir -p build/container && cd build/container
eval time \
cmake -Dpackages_only=ON -DCMAKE_VERBOSE_MAKEFILE=ON ${CMAKE_EXTRA} \
-G Ninja ../..
time cmake --build . --target "${pkgtype}_container" -- -v


@@ -1,28 +0,0 @@
#!/usr/bin/env sh
set -ex
pkgtype=$1
if [ "${pkgtype}" = "rpm" ] ; then
container_name="${RPM_CONTAINER_FULLNAME}"
container_tag="${RPM_CONTAINER_TAG}"
elif [ "${pkgtype}" = "dpkg" ] ; then
container_name="${DPKG_CONTAINER_FULLNAME}"
container_tag="${DPKG_CONTAINER_TAG}"
else
echo "invalid package type"
exit 1
fi
time docker pull "${ARTIFACTORY_HUB}/${container_name}"
docker tag \
"${ARTIFACTORY_HUB}/${container_name}" \
"${container_name}"
docker images
test -d build && rm -rf build
mkdir -p build/${pkgtype} && cd build/${pkgtype}
time cmake \
-Dpackages_only=ON \
-Dcontainer_label="${container_tag}" \
-Dhave_package_container=ON \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-Dunity=OFF \
-G Ninja ../..
time cmake --build . --target ${pkgtype} -- -v


@@ -1,15 +0,0 @@
#!/usr/bin/env sh
set -e
# used as a before/setup script for docker steps in gitlab-ci
# expects to be run in standard alpine/dind image
echo $(nproc)
docker login -u rippled \
-p ${ARTIFACTORY_DEPLOY_KEY_RIPPLED} ${ARTIFACTORY_HUB}
apk add --update py-pip
apk add \
bash util-linux coreutils binutils grep \
make ninja cmake build-base gcc g++ abuild git \
python3 python3-dev
pip3 install awscli
# list curdir contents to build log:
ls -la


@@ -1,16 +0,0 @@
#!/usr/bin/env sh
case ${CI_COMMIT_REF_NAME} in
develop)
export COMPONENT="nightly"
;;
release)
export COMPONENT="unstable"
;;
master)
export COMPONENT="stable"
;;
*)
export COMPONENT="_unknown_"
;;
esac


@@ -1,646 +0,0 @@
#########################################################################
## ##
## gitlab CI definition for rippled build containers and distro ##
## packages (rpm and dpkg). ##
## ##
#########################################################################
# NOTE: these are sensible defaults for Ripple pipelines. These
# can be overridden by project or group variables as needed.
variables:
# these containers are built manually using the rippled
# cmake build (container targets) and tagged/pushed so they
# can be used here
RPM_CONTAINER_TAG: "2023-02-13"
RPM_CONTAINER_NAME: "rippled-rpm-builder"
RPM_CONTAINER_FULLNAME: "${RPM_CONTAINER_NAME}:${RPM_CONTAINER_TAG}"
DPKG_CONTAINER_TAG: "2023-03-20"
DPKG_CONTAINER_NAME: "rippled-dpkg-builder"
DPKG_CONTAINER_FULLNAME: "${DPKG_CONTAINER_NAME}:${DPKG_CONTAINER_TAG}"
ARTIFACTORY_HOST: "artifactory.ops.ripple.com"
ARTIFACTORY_HUB: "${ARTIFACTORY_HOST}:6555"
GIT_SIGN_PUBKEYS_URL: "https://gitlab.ops.ripple.com/xrpledger/rippled-packages/snippets/49/raw"
PUBLIC_REPO_ROOT: "https://repos.ripple.com/repos"
# also need to define this variable ONLY for the primary
# build/publish pipeline on the mainline repo:
# IS_PRIMARY_REPO = "true"
stages:
- build_packages
- sign_packages
- smoketest
- verify_sig
- tag_images
- push_to_test
- verify_from_test
- wait_approval_prod
- push_to_prod
- verify_from_prod
- get_final_hashes
- build_containers
.dind_template: &dind_param
before_script:
- . ./Builds/containers/gitlab-ci/docker_alpine_setup.sh
variables:
docker_driver: overlay2
DOCKER_TLS_CERTDIR: ""
image:
name: artifactory.ops.ripple.com/docker:latest
services:
# workaround for TLS issues - consider going
# back to unversioned `dind` when issues are resolved
- name: artifactory.ops.ripple.com/docker:stable-dind
alias: docker
tags:
- 4xlarge
.only_primary_template: &only_primary
only:
refs:
- /^(master|release|develop)$/
variables:
- $IS_PRIMARY_REPO == "true"
.smoketest_local_template: &run_local_smoketest
tags:
- xlarge
script:
- . ./Builds/containers/gitlab-ci/smoketest.sh local
.smoketest_repo_template: &run_repo_smoketest
tags:
- xlarge
script:
- . ./Builds/containers/gitlab-ci/smoketest.sh repo
#########################################################################
## ##
## stage: build_packages ##
## ##
## build packages using containers from previous stage. ##
## ##
#########################################################################
rpm_build:
timeout: "1h 30m"
stage: build_packages
<<: *dind_param
artifacts:
paths:
- build/rpm/packages/
script:
- . ./Builds/containers/gitlab-ci/build_package.sh rpm
dpkg_build:
timeout: "1h 30m"
stage: build_packages
<<: *dind_param
artifacts:
paths:
- build/dpkg/packages/
script:
- . ./Builds/containers/gitlab-ci/build_package.sh dpkg
#########################################################################
## ##
## stage: sign_packages ##
## ##
## sign the packages built in the previous stage. ##
## ##
#########################################################################
rpm_sign:
stage: sign_packages
dependencies:
- rpm_build
image:
name: artifactory.ops.ripple.com/centos:7
<<: *only_primary
before_script:
- |
# Make sure GnuPG is installed
yum -y install gnupg rpm-sign
# checking GPG signing support
if [ -n "$GPG_KEY_B64" ]; then
echo "$GPG_KEY_B64"| base64 -d | gpg --batch --no-tty --allow-secret-key-import --import -
unset GPG_KEY_B64
export GPG_PASSPHRASE=$(echo $GPG_KEY_PASS_B64 | base64 -di)
unset GPG_KEY_PASS_B64
export GPG_KEYID=$(gpg --with-colon --list-secret-keys | head -n1 | cut -d : -f 5)
else
echo -e "\033[0;31m****** GPG signing disabled ******\033[0m"
exit 1
fi
artifacts:
paths:
- build/rpm/packages/
script:
- ls -alh build/rpm/packages
- . ./Builds/containers/gitlab-ci/sign_package.sh rpm
dpkg_sign:
stage: sign_packages
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/ubuntu:18.04
<<: *only_primary
before_script:
- |
# make sure we have GnuPG
apt update
apt install -y gpg dpkg-sig
# checking GPG signing support
if [ -n "$GPG_KEY_B64" ]; then
echo "$GPG_KEY_B64"| base64 -d | gpg --batch --no-tty --allow-secret-key-import --import -
unset GPG_KEY_B64
export GPG_PASSPHRASE=$(echo $GPG_KEY_PASS_B64 | base64 -di)
unset GPG_KEY_PASS_B64
export GPG_KEYID=$(gpg --with-colon --list-secret-keys | head -n1 | cut -d : -f 5)
else
echo -e "\033[0;31m****** GPG signing disabled ******\033[0m"
exit 1
fi
artifacts:
paths:
- build/dpkg/packages/
script:
- ls -alh build/dpkg/packages
- . ./Builds/containers/gitlab-ci/sign_package.sh dpkg
#########################################################################
## ##
## stage: smoketest ##
## ##
## install unsigned packages from previous step and run unit tests. ##
## ##
#########################################################################
centos_7_smoketest:
stage: smoketest
dependencies:
- rpm_build
image:
name: artifactory.ops.ripple.com/centos:7
<<: *run_local_smoketest
rocky_8_smoketest:
stage: smoketest
dependencies:
- rpm_build
image:
name: artifactory.ops.ripple.com/rockylinux/rockylinux:8
<<: *run_local_smoketest
fedora_37_smoketest:
stage: smoketest
dependencies:
- rpm_build
image:
name: artifactory.ops.ripple.com/fedora:37
<<: *run_local_smoketest
fedora_38_smoketest:
stage: smoketest
dependencies:
- rpm_build
image:
name: artifactory.ops.ripple.com/fedora:38
<<: *run_local_smoketest
ubuntu_18_smoketest:
stage: smoketest
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/ubuntu:18.04
<<: *run_local_smoketest
ubuntu_20_smoketest:
stage: smoketest
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/ubuntu:20.04
<<: *run_local_smoketest
ubuntu_22_smoketest:
stage: smoketest
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/ubuntu:22.04
<<: *run_local_smoketest
debian_10_smoketest:
stage: smoketest
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/debian:10
<<: *run_local_smoketest
debian_11_smoketest:
stage: smoketest
dependencies:
- dpkg_build
image:
name: artifactory.ops.ripple.com/debian:11
<<: *run_local_smoketest
#########################################################################
## ##
## stage: verify_sig ##
## ##
## use git/gpg to verify that HEAD is signed by an approved ##
## committer. The whitelist of pubkeys is manually maintained ##
## and fetched from GIT_SIGN_PUBKEYS_URL (currently a snippet ##
## link). ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
verify_head_signed:
stage: verify_sig
image:
name: artifactory.ops.ripple.com/ubuntu:latest
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/verify_head_commit.sh
#########################################################################
## ##
## stage: tag_images ##
## ##
## apply rippled version tag to containers from previous stage. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
tag_bld_images:
stage: tag_images
variables:
docker_driver: overlay2
DOCKER_TLS_CERTDIR: ""
image:
name: artifactory.ops.ripple.com/docker:latest
services:
# workaround for TLS issues - consider going
# back to unversioned `dind` when issues are resolved
- name: artifactory.ops.ripple.com/docker:stable-dind
alias: docker
tags:
- large
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/tag_docker_image.sh
#########################################################################
## ##
## stage: push_to_test ##
## ##
## push packages to artifactory repositories (test) ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
push_test:
stage: push_to_test
variables:
DEB_REPO: "rippled-deb-test-mirror"
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: artifactory.ops.ripple.com/alpine:latest
artifacts:
paths:
- files.info
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "PUT" "."
#########################################################################
## ##
## stage: verify_from_test ##
## ##
## install/test packages from test repos. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
centos_7_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: artifactory.ops.ripple.com/centos:7
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
rocky_8_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: artifactory.ops.ripple.com/rockylinux/rockylinux:8
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_37_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: artifactory.ops.ripple.com/fedora:37
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_38_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: artifactory.ops.ripple.com/fedora:38
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_18_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "bionic"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: artifactory.ops.ripple.com/ubuntu:18.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_20_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "focal"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: artifactory.ops.ripple.com/ubuntu:20.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_22_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "jammy"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: artifactory.ops.ripple.com/ubuntu:22.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
debian_10_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "buster"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: artifactory.ops.ripple.com/debian:10
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
debian_11_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "bullseye"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: artifactory.ops.ripple.com/debian:11
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
#########################################################################
## ##
## stage: wait_approval_prod ##
## ##
## wait for manual approval before proceeding to next stage ##
## which pushes to prod repo. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
wait_before_push_prod:
stage: wait_approval_prod
image:
name: artifactory.ops.ripple.com/alpine:latest
<<: *only_primary
script:
- echo "proceeding to next stage"
when: manual
allow_failure: false
#########################################################################
## ##
## stage: push_to_prod ##
## ##
## push packages to artifactory repositories (prod) ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
push_prod:
variables:
DEB_REPO: "rippled-deb"
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/alpine:latest
stage: push_to_prod
artifacts:
paths:
- files.info
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "PUT" "."
#########################################################################
## ##
## stage: verify_from_prod ##
## ##
## install/test packages from prod repos. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
centos_7_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/centos:7
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
rocky_8_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/rockylinux/rockylinux:8
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_37_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/fedora:37
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_38_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/fedora:38
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_18_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "bionic"
DEB_REPO: "rippled-deb"
image:
name: artifactory.ops.ripple.com/ubuntu:18.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_20_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "focal"
DEB_REPO: "rippled-deb"
image:
name: artifactory.ops.ripple.com/ubuntu:20.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_22_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "jammy"
DEB_REPO: "rippled-deb"
image:
name: artifactory.ops.ripple.com/ubuntu:22.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
debian_10_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "buster"
DEB_REPO: "rippled-deb"
image:
name: artifactory.ops.ripple.com/debian:10
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
debian_11_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "bullseye"
DEB_REPO: "rippled-deb"
image:
name: artifactory.ops.ripple.com/debian:11
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
#########################################################################
## ##
## stage: get_final_hashes ##
## ##
## fetch final hashes from artifactory. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
get_prod_hashes:
variables:
DEB_REPO: "rippled-deb"
RPM_REPO: "rippled-rpm"
image:
name: artifactory.ops.ripple.com/alpine:latest
stage: get_final_hashes
artifacts:
paths:
- files.info
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "GET" ".checksums"
#########################################################################
## ##
## stage: build_containers ##
## ##
## build containers from docker definitions. These containers are NOT ##
## used for the package build. This step is only used to ensure that ##
## the package build targets and files are still working properly. ##
## ##
#########################################################################
build_centos_container:
stage: build_containers
<<: *dind_param
script:
- . ./Builds/containers/gitlab-ci/build_container.sh rpm
build_ubuntu_container:
stage: build_containers
<<: *dind_param
script:
- . ./Builds/containers/gitlab-ci/build_container.sh dpkg


@@ -1,92 +0,0 @@
#!/usr/bin/env sh
set -e
action=$1
filter=$2
. ./Builds/containers/gitlab-ci/get_component.sh
apk add curl jq coreutils util-linux
TOPDIR=$(pwd)
# DPKG
cd $TOPDIR
cd build/dpkg/packages
CURLARGS="-sk -X${action} -urippled:${ARTIFACTORY_DEPLOY_KEY_RIPPLED}"
RIPPLED_PKG=$(ls rippled_*.deb)
RIPPLED_REPORTING_PKG=$(ls rippled-reporting_*.deb)
RIPPLED_DBG_PKG=$(ls rippled-dbgsym_*.*deb)
RIPPLED_REPORTING_DBG_PKG=$(ls rippled-reporting-dbgsym_*.*deb)
# TODO - where to upload src tgz?
RIPPLED_SRC=$(ls rippled_*.orig.tar.gz)
DEB_MATRIX=";deb.component=${COMPONENT};deb.architecture=amd64"
for dist in buster bullseye bionic focal jammy; do
DEB_MATRIX="${DEB_MATRIX};deb.distribution=${dist}"
done
echo "{ \"debs\": {" > "${TOPDIR}/files.info"
for deb in ${RIPPLED_PKG} ${RIPPLED_DBG_PKG} ${RIPPLED_REPORTING_PKG} ${RIPPLED_REPORTING_DBG_PKG}; do
# first item doesn't get a comma separator
if [ $deb != $RIPPLED_PKG ] ; then
echo "," >> "${TOPDIR}/files.info"
fi
echo "\"${deb}\"": | tee -a "${TOPDIR}/files.info"
ca="${CURLARGS}"
if [ "${action}" = "PUT" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/${DEB_REPO}/pool/${COMPONENT}/${deb}${DEB_MATRIX}"
ca="${ca} -T${deb}"
elif [ "${action}" = "GET" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/api/storage/${DEB_REPO}/pool/${COMPONENT}/${deb}"
fi
echo "file info request url --> ${url}"
eval "curl ${ca} \"${url}\"" | jq -M "${filter}" | tee -a "${TOPDIR}/files.info"
done
echo "}," >> "${TOPDIR}/files.info"
# RPM
cd $TOPDIR
cd build/rpm/packages
RIPPLED_PKG=$(ls rippled-[0-9]*.x86_64.rpm)
RIPPLED_DEV_PKG=$(ls rippled-devel*.rpm)
RIPPLED_DBG_PKG=$(ls rippled-debuginfo*.rpm)
RIPPLED_REPORTING_PKG=$(ls rippled-reporting*.rpm)
# TODO - where to upload src rpm ?
RIPPLED_SRC=$(ls rippled-[0-9]*.src.rpm)
echo "\"rpms\": {" >> "${TOPDIR}/files.info"
for rpm in ${RIPPLED_PKG} ${RIPPLED_DEV_PKG} ${RIPPLED_DBG_PKG} ${RIPPLED_REPORTING_PKG}; do
# first item doesn't get a comma separator
if [ $rpm != $RIPPLED_PKG ] ; then
echo "," >> "${TOPDIR}/files.info"
fi
echo "\"${rpm}\"": | tee -a "${TOPDIR}/files.info"
ca="${CURLARGS}"
if [ "${action}" = "PUT" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/${RPM_REPO}/${COMPONENT}/"
ca="${ca} -T${rpm}"
elif [ "${action}" = "GET" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/api/storage/${RPM_REPO}/${COMPONENT}/${rpm}"
fi
echo "file info request url --> ${url}"
eval "curl ${ca} \"${url}\"" | jq -M "${filter}" | tee -a "${TOPDIR}/files.info"
done
echo "}}" >> "${TOPDIR}/files.info"
jq '.' "${TOPDIR}/files.info" > "${TOPDIR}/files.info.tmp"
mv "${TOPDIR}/files.info.tmp" "${TOPDIR}/files.info"
if [ ! -z "${SLACK_NOTIFY_URL}" ] && [ "${action}" = "GET" ] ; then
# extract files.info content to variable and sanitize so it can
# be interpolated into a slack text field below
finfo=$(cat ${TOPDIR}/files.info | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/\\n/g' | sed -E 's/"/\\"/g')
# try posting file info to slack.
# can add channel field to payload if the
# default channel is incorrect. Get rid of
# newlines in payload json since slack doesn't accept them
CONTENT=$(tr -d '\n' <<JSON
payload={
"username": "GitlabCI",
"text": "The package build for branch \`${CI_COMMIT_REF_NAME}\` is complete. File hashes are: \`\`\`${finfo}\`\`\`",
"icon_emoji": ":package:"}
JSON
)
curl ${SLACK_NOTIFY_URL} --data-urlencode "${CONTENT}"
fi


@@ -1,38 +0,0 @@
#!/usr/bin/env bash
set -eo pipefail
sign_dpkg() {
if [ -n "${GPG_KEYID}" ]; then
dpkg-sig \
-g "--no-tty --digest-algo 'sha512' --passphrase '${GPG_PASSPHRASE}' --pinentry-mode=loopback" \
-k "${GPG_KEYID}" \
--sign builder \
"build/dpkg/packages/*.deb"
fi
}
sign_rpm() {
if [ -n "${GPG_KEYID}" ] ; then
find build/rpm/packages -name "*.rpm" -exec bash -c '
echo "yes" | setsid rpm \
--define "_gpg_name ${GPG_KEYID}" \
--define "_signature gpg" \
--define "__gpg_check_password_cmd /bin/true" \
--define "__gpg_sign_cmd %{__gpg} gpg --batch --no-armor --digest-algo 'sha512' --passphrase '${GPG_PASSPHRASE}' --no-secmem-warning -u '%{_gpg_name}' --sign --detach-sign --output %{__signature_filename} %{__plaintext_filename}" \
--addsign {}' \;
fi
}
case "${1}" in
dpkg)
sign_dpkg
;;
rpm)
sign_rpm
;;
*)
echo "Usage: ${0} (dpkg|rpm)"
;;
esac


@@ -1,108 +0,0 @@
#!/usr/bin/env sh
set -e
install_from=$1
use_private=${2:-0} # this option not currently needed by any CI scripts,
# reserved for possible future use
if [ "$use_private" -gt 0 ] ; then
REPO_ROOT="https://rippled:${ARTIFACTORY_DEPLOY_KEY_RIPPLED}@${ARTIFACTORY_HOST}/artifactory"
else
REPO_ROOT="${PUBLIC_REPO_ROOT}"
fi
. ./Builds/containers/gitlab-ci/get_component.sh
. /etc/os-release
case ${ID} in
ubuntu|debian)
pkgtype="dpkg"
;;
fedora|centos|rhel|scientific|rocky)
pkgtype="rpm"
;;
*)
echo "unrecognized distro!"
exit 1
;;
esac
# this script provides info variables about pkg version
. build/${pkgtype}/packages/build_vars
if [ "${pkgtype}" = "dpkg" ] ; then
# sometimes update fails and requires a cleanup
updateWithRetry()
{
if ! apt-get -y update ; then
rm -rvf /var/lib/apt/lists/*
apt-get -y clean
apt-get -y update
fi
}
if [ "${install_from}" = "repo" ] ; then
apt-get -y upgrade
updateWithRetry
apt-get -y install apt apt-transport-https ca-certificates coreutils util-linux wget gnupg
wget -q -O - "${REPO_ROOT}/api/gpg/key/public" | apt-key add -
echo "deb ${REPO_ROOT}/${DEB_REPO} ${DISTRO} ${COMPONENT}" >> /etc/apt/sources.list
updateWithRetry
# uncomment this next line if you want to see the available package versions
# apt-cache policy rippled
apt-get -y install rippled=${dpkg_full_version}
elif [ "${install_from}" = "local" ] ; then
# cached pkg install
updateWithRetry
apt-get -y install libprotobuf-dev libprotoc-dev protobuf-compiler libssl-dev
rm -f build/dpkg/packages/rippled-dbgsym*.*
dpkg --no-debsig -i build/dpkg/packages/*.deb
else
echo "unrecognized pkg source!"
exit 1
fi
else
yum -y update
if [ "${install_from}" = "repo" ] ; then
pkgs=("yum-utils coreutils util-linux")
if [ "$ID" = "rocky" ]; then
pkgs="${pkgs[@]/coreutils}"
fi
yum install -y $pkgs
REPOFILE="/etc/yum.repos.d/artifactory.repo"
echo "[Artifactory]" > ${REPOFILE}
echo "name=Artifactory" >> ${REPOFILE}
echo "baseurl=${REPO_ROOT}/${RPM_REPO}/${COMPONENT}/" >> ${REPOFILE}
echo "enabled=1" >> ${REPOFILE}
echo "gpgcheck=0" >> ${REPOFILE}
echo "gpgkey=${REPO_ROOT}/${RPM_REPO}/${COMPONENT}/repodata/repomd.xml.key" >> ${REPOFILE}
echo "repo_gpgcheck=1" >> ${REPOFILE}
yum -y update
# uncomment this next line if you want to see the available package versions
# yum --showduplicates list rippled
yum -y install ${rpm_version_release}
elif [ "${install_from}" = "local" ] ; then
# cached pkg install
pkgs=("yum-utils openssl-static zlib-static")
if [[ "$ID" =~ rocky|fedora ]]; then
if [[ "$ID" =~ "rocky" ]]; then
sed -i 's/enabled=0/enabled=1/g' /etc/yum.repos.d/Rocky-PowerTools.repo
fi
pkgs="${pkgs[@]/openssl-static}"
fi
yum install -y $pkgs
rm -f build/rpm/packages/rippled-debug*.rpm
rm -f build/rpm/packages/*.src.rpm
rpm -i build/rpm/packages/*.rpm
else
echo "unrecognized pkg source!"
exit 1
fi
fi
# verify installed version
INSTALLED=$(/opt/ripple/bin/rippled --version | awk '{print $NF}')
if [ "${rippled_version}" != "${INSTALLED}" ] ; then
echo "INSTALLED version ${INSTALLED} does not match ${rippled_version}"
exit 1
fi
# run unit tests
/opt/ripple/bin/rippled --unittest --unittest-jobs $(nproc)
/opt/ripple/bin/validator-keys --unittest


@@ -1,21 +0,0 @@
#!/usr/bin/env sh
set -e
docker login -u rippled \
-p ${ARTIFACTORY_DEPLOY_KEY_RIPPLED} "${ARTIFACTORY_HUB}"
# this gives us rippled_version:
. build/rpm/packages/build_vars
docker pull "${ARTIFACTORY_HUB}/${RPM_CONTAINER_FULLNAME}"
docker pull "${ARTIFACTORY_HUB}/${DPKG_CONTAINER_FULLNAME}"
# tag/push two labels...one using the current rippled version and one just using "latest"
for label in ${rippled_version} latest ; do
docker tag \
"${ARTIFACTORY_HUB}/${RPM_CONTAINER_FULLNAME}" \
"${ARTIFACTORY_HUB}/${RPM_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
docker push \
"${ARTIFACTORY_HUB}/${RPM_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
docker tag \
"${ARTIFACTORY_HUB}/${DPKG_CONTAINER_FULLNAME}" \
"${ARTIFACTORY_HUB}/${DPKG_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
docker push \
"${ARTIFACTORY_HUB}/${DPKG_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
done


@@ -1,17 +0,0 @@
#!/usr/bin/env sh
set -ex
apt -y update
DEBIAN_FRONTEND="noninteractive" apt-get -y install tzdata
apt -y install software-properties-common curl git gnupg
curl -sk -o rippled-pubkeys.txt "${GIT_SIGN_PUBKEYS_URL}"
gpg --import rippled-pubkeys.txt
if git verify-commit HEAD; then
echo "git commit signature check passed"
else
echo "git commit signature check failed"
git log -n 5 --color \
--pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an> [%G?]%Creset' \
--abbrev-commit
exit 1
fi


@@ -1,95 +0,0 @@
#!/usr/bin/env bash
set -ex
# make sure pkg source files are up to date with repo
cd /opt/rippled_bld/pkg
cp -fpru rippled/Builds/containers/packaging/dpkg/debian/. debian/
cp -fpu rippled/Builds/containers/shared/rippled*.service debian/
cp -fpu rippled/Builds/containers/shared/update_sources.sh .
source update_sources.sh
# Build the dpkg
# dpkg uses - as a separator, so we need to change our -bN versions to tilde
RIPPLED_DPKG_VERSION=$(echo "${RIPPLED_VERSION}" | sed 's!-!~!g')
# TODO - decide how to handle the trailing/release
# version here (hardcoded to 1). Does it ever need to change?
RIPPLED_DPKG_FULL_VERSION="${RIPPLED_DPKG_VERSION}-1"
git config --global --add safe.directory /opt/rippled_bld/pkg/rippled
cd /opt/rippled_bld/pkg/rippled
if [[ -n $(git status --porcelain) ]]; then
git status
error "Unstaged changes in this repo - please commit first"
fi
git archive --format tar.gz --prefix rippled-${RIPPLED_DPKG_VERSION}/ -o ../rippled-${RIPPLED_DPKG_VERSION}.tar.gz HEAD
cd ..
# dpkg debmake would normally create this link, but we do it manually
ln -s ./rippled-${RIPPLED_DPKG_VERSION}.tar.gz rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz
tar xvf rippled-${RIPPLED_DPKG_VERSION}.tar.gz
cd rippled-${RIPPLED_DPKG_VERSION}
cp -pr ../debian .
# dpkg requires a changelog. We don't currently maintain
# a useable one, so let's just fake it with our current version
# TODO : not sure if the "unstable" will need to change for
# release packages (?)
NOWSTR=$(TZ=UTC date -R)
cat << CHANGELOG > ./debian/changelog
rippled (${RIPPLED_DPKG_FULL_VERSION}) unstable; urgency=low
* see RELEASENOTES
-- Ripple Labs Inc. <support@ripple.com> ${NOWSTR}
CHANGELOG
# PATH must be preserved for our more modern cmake in /opt/local
# TODO : consider allowing lintian to run in future ?
export DH_BUILD_DDEBS=1
debuild --no-lintian --preserve-envvar PATH --preserve-env -us -uc
rc=$?; if [[ $rc != 0 ]]; then
error "error building dpkg"
fi
cd ..
# copy artifacts
cp rippled-reporting_${RIPPLED_DPKG_FULL_VERSION}_amd64.deb ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.deb ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}.dsc ${PKG_OUTDIR}
# dbgsym suffix is ddeb under newer debuild, but just deb under earlier
cp rippled-dbgsym_${RIPPLED_DPKG_FULL_VERSION}_amd64.* ${PKG_OUTDIR}
cp rippled-reporting-dbgsym_${RIPPLED_DPKG_FULL_VERSION}_amd64.* ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.changes ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.build ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}.debian.tar.xz ${PKG_OUTDIR}
# buildinfo is only generated by later version of debuild
if [ -e rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.buildinfo ] ; then
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.buildinfo ${PKG_OUTDIR}
fi
cat rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.changes
# extract the text in the .changes file that appears between
# Checksums-Sha256: ...
# and
# Files: ...
awk '/Checksums-Sha256:/{hit=1;next}/Files:/{hit=0}hit' \
rippled_${RIPPLED_DPKG_VERSION}-1_amd64.changes | \
sed -E 's!^[[:space:]]+!!' > shasums
DEB_SHA256=$(cat shasums | \
grep "rippled_${RIPPLED_DPKG_VERSION}-1_amd64.deb" | cut -d " " -f 1)
DBG_SHA256=$(cat shasums | \
grep "rippled-dbgsym_${RIPPLED_DPKG_VERSION}-1_amd64.*" | cut -d " " -f 1)
REPORTING_DBG_SHA256=$(cat shasums | \
grep "rippled-reporting-dbgsym_${RIPPLED_DPKG_VERSION}-1_amd64.*" | cut -d " " -f 1)
REPORTING_SHA256=$(cat shasums | \
grep "rippled-reporting_${RIPPLED_DPKG_VERSION}-1_amd64.deb" | cut -d " " -f 1)
SRC_SHA256=$(cat shasums | \
grep "rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz" | cut -d " " -f 1)
echo "deb_sha256=${DEB_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "dbg_sha256=${DBG_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "reporting_sha256=${REPORTING_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "reporting_dbg_sha256=${REPORTING_DBG_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "src_sha256=${SRC_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "rippled_version=${RIPPLED_VERSION}" >> ${PKG_OUTDIR}/build_vars
echo "dpkg_version=${RIPPLED_DPKG_VERSION}" >> ${PKG_OUTDIR}/build_vars
echo "dpkg_full_version=${RIPPLED_DPKG_FULL_VERSION}" >> ${PKG_OUTDIR}/build_vars


@@ -1,3 +0,0 @@
rippled daemon
-- Mike Ellery <mellery451@gmail.com> Tue, 04 Dec 2018 18:19:03 +0000


@@ -1 +0,0 @@
10


@@ -1,19 +0,0 @@
Source: rippled
Section: misc
Priority: extra
Maintainer: Ripple Labs Inc. <support@ripple.com>
Build-Depends: cmake, debhelper (>=9), zlib1g-dev, dh-systemd, ninja-build
Standards-Version: 3.9.7
Homepage: http://ripple.com/
Package: rippled
Architecture: any
Multi-Arch: foreign
Depends: ${misc:Depends}, ${shlibs:Depends}
Description: rippled daemon
Package: rippled-reporting
Architecture: any
Multi-Arch: foreign
Depends: ${misc:Depends}, ${shlibs:Depends}
Description: rippled reporting daemon


@@ -1,86 +0,0 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: rippled
Source: https://github.com/ripple/rippled
Files: *
Copyright: 2012-2019 Ripple Labs Inc.
License: __UNKNOWN__
The accompanying files under various copyrights.
Copyright (c) 2012, 2013, 2014 Ripple Labs Inc.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
The accompanying files incorporate work covered by the following copyright
and previous license notice:
Copyright (c) 2011 Arthur Britto, David Schwartz, Jed McCaleb,
Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant
Some code from Raw Material Software, Ltd., provided under the terms of the
ISC License. See the corresponding source files for more details.
Copyright (c) 2013 - Raw Material Software Ltd.
Please visit http://www.juce.com
Some code from ASIO examples:
// Copyright (c) 2003-2011 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
Some code from Bitcoin:
// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2011 The Bitcoin developers
// Distributed under the MIT/X11 software license, see the accompanying
// file license.txt or http://www.opensource.org/licenses/mit-license.php.
Some code from Tom Wu:
This software is covered under the following copyright:
/*
* Copyright (c) 2003-2005 Tom Wu
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining
* a copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND,
* EXPRESS, IMPLIED OR OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY
* WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
*
* IN NO EVENT SHALL TOM WU BE LIABLE FOR ANY SPECIAL, INCIDENTAL,
* INDIRECT OR CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER
* RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER OR NOT ADVISED OF
* THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT
* OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*
* In addition, the following condition applies:
*
* All redistributions must retain an intact copy of this copyright notice
* and disclaimer.
*/
Address all questions regarding this license to:
Tom Wu
tjw@cs.Stanford.EDU


@@ -1,3 +0,0 @@
/var/log/rippled/
/var/lib/rippled/
/etc/systemd/system/rippled.service.d/


@@ -1,3 +0,0 @@
README.md
LICENSE.md
RELEASENOTES.md


@@ -1,3 +0,0 @@
opt/ripple/include
opt/ripple/lib/*.a
opt/ripple/lib/cmake/ripple


@@ -1,3 +0,0 @@
/var/log/rippled-reporting/
/var/lib/rippled-reporting/
/etc/systemd/system/rippled-reporting.service.d/


@@ -1,8 +0,0 @@
bld/rippled-reporting/rippled-reporting opt/rippled-reporting/bin
cfg/rippled-reporting.cfg opt/rippled-reporting/etc
debian/tmp/opt/rippled-reporting/etc/validators.txt opt/rippled-reporting/etc
opt/rippled-reporting/bin/update-rippled-reporting.sh
opt/rippled-reporting/bin/getRippledReportingInfo
opt/rippled-reporting/etc/update-rippled-reporting-cron
etc/logrotate.d/rippled-reporting


@@ -1,3 +0,0 @@
opt/rippled-reporting/etc/rippled-reporting.cfg etc/opt/rippled-reporting/rippled-reporting.cfg
opt/rippled-reporting/etc/validators.txt etc/opt/rippled-reporting/validators.txt
opt/rippled-reporting/bin/rippled-reporting usr/local/bin/rippled-reporting


@@ -1,33 +0,0 @@
#!/bin/sh
set -e
USER_NAME=rippled-reporting
GROUP_NAME=rippled-reporting
case "$1" in
configure)
id -u $USER_NAME >/dev/null 2>&1 || \
adduser --system --quiet \
--home /nonexistent --no-create-home \
--disabled-password \
--group "$GROUP_NAME"
chown -R $USER_NAME:$GROUP_NAME /var/log/rippled-reporting/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled-reporting/
chmod 755 /var/log/rippled-reporting/
chmod 755 /var/lib/rippled-reporting/
chown -R $USER_NAME:$GROUP_NAME /opt/rippled-reporting
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0


@@ -1,2 +0,0 @@
/opt/ripple/etc/rippled.cfg
/opt/ripple/etc/validators.txt


@@ -1,8 +0,0 @@
opt/ripple/bin/rippled
opt/ripple/bin/validator-keys
opt/ripple/bin/update-rippled.sh
opt/ripple/bin/getRippledInfo
opt/ripple/etc/rippled.cfg
opt/ripple/etc/validators.txt
opt/ripple/etc/update-rippled-cron
etc/logrotate.d/rippled


@@ -1,3 +0,0 @@
opt/ripple/etc/rippled.cfg etc/opt/ripple/rippled.cfg
opt/ripple/etc/validators.txt etc/opt/ripple/validators.txt
opt/ripple/bin/rippled usr/local/bin/rippled


@@ -1,35 +0,0 @@
#!/bin/sh
set -e
USER_NAME=rippled
GROUP_NAME=rippled
case "$1" in
configure)
id -u $USER_NAME >/dev/null 2>&1 || \
adduser --system --quiet \
--home /nonexistent --no-create-home \
--disabled-password \
--group "$GROUP_NAME"
chown -R $USER_NAME:$GROUP_NAME /var/log/rippled/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled/
chown -R $USER_NAME:$GROUP_NAME /opt/ripple
chmod 755 /var/log/rippled/
chmod 755 /var/lib/rippled/
chmod 644 /opt/ripple/etc/update-rippled-cron
chmod 644 /etc/logrotate.d/rippled
chown -R root:$GROUP_NAME /opt/ripple/etc/update-rippled-cron
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0


@@ -1,17 +0,0 @@
#!/bin/sh
set -e
case "$1" in
purge|remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
;;
*)
echo "postrm called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0


@@ -1,20 +0,0 @@
#!/bin/sh
set -e
case "$1" in
install|upgrade)
;;
abort-upgrade)
;;
*)
echo "preinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0


@@ -1,20 +0,0 @@
#!/bin/sh
set -e
case "$1" in
remove|upgrade|deconfigure)
;;
failed-upgrade)
;;
*)
echo "prerm called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0


@@ -1,80 +0,0 @@
#!/usr/bin/make -f
export DH_VERBOSE = 1
export DH_OPTIONS = -v
# debuild sets some warnings that don't work well
# for our current build... so try to remove those flags here:
export CFLAGS:=$(subst -Wformat,,$(CFLAGS))
export CFLAGS:=$(subst -Werror=format-security,,$(CFLAGS))
export CXXFLAGS:=$(subst -Wformat,,$(CXXFLAGS))
export CXXFLAGS:=$(subst -Werror=format-security,,$(CXXFLAGS))
%:
dh $@ --with systemd
override_dh_systemd_start:
dh_systemd_start --no-restart-on-upgrade
override_dh_auto_configure:
env
rm -rf bld
conan export external/snappy snappy/1.1.9@
conan install . \
--install-folder bld/rippled \
--build missing \
--build boost \
--build sqlite3 \
--settings build_type=Release
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/opt/ripple \
-Dstatic=ON \
-Dunity=OFF \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-Dvalidator_keys=ON \
-B bld/rippled
conan install . \
--install-folder bld/rippled-reporting \
--build missing \
--build boost \
--build sqlite3 \
--build libuv \
--settings build_type=Release \
--options reporting=True
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/opt/rippled-reporting \
-Dstatic=ON \
-Dunity=OFF \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-Dreporting=ON \
-B bld/rippled-reporting
override_dh_auto_build:
cmake --build bld/rippled --target rippled --target validator-keys -j$$(nproc)
cmake --build bld/rippled-reporting --target rippled -j$$(nproc)
override_dh_auto_install:
cmake --install bld/rippled --prefix debian/tmp/opt/ripple
install -D bld/rippled/validator-keys/validator-keys debian/tmp/opt/ripple/bin/validator-keys
install -D Builds/containers/shared/update-rippled.sh debian/tmp/opt/ripple/bin/update-rippled.sh
install -D bin/getRippledInfo debian/tmp/opt/ripple/bin/getRippledInfo
install -D Builds/containers/shared/update-rippled-cron debian/tmp/opt/ripple/etc/update-rippled-cron
install -D Builds/containers/shared/rippled-logrotate debian/tmp/etc/logrotate.d/rippled
rm -rf debian/tmp/opt/ripple/lib64/cmake/date
mkdir -p debian/tmp/opt/rippled-reporting/etc
mkdir -p debian/tmp/opt/rippled-reporting/bin
cp cfg/validators-example.txt debian/tmp/opt/rippled-reporting/etc/validators.txt
sed -E 's/rippled?/rippled-reporting/g' Builds/containers/shared/update-rippled.sh > debian/tmp/opt/rippled-reporting/bin/update-rippled-reporting.sh
sed -E 's/rippled?/rippled-reporting/g' bin/getRippledInfo > debian/tmp/opt/rippled-reporting/bin/getRippledReportingInfo
sed -E 's/rippled?/rippled-reporting/g' Builds/containers/shared/update-rippled-cron > debian/tmp/opt/rippled-reporting/etc/update-rippled-reporting-cron
sed -E 's/rippled?/rippled-reporting/g' Builds/containers/shared/rippled-logrotate > debian/tmp/etc/logrotate.d/rippled-reporting

View File

@@ -1 +0,0 @@
3.0 (quilt)

View File

@@ -1,2 +0,0 @@
#abort-on-upstream-changes
#unapply-patches

View File

@@ -1 +0,0 @@
enable rippled-reporting.service

View File

@@ -1 +0,0 @@
enable rippled.service

View File

@@ -1,82 +0,0 @@
#!/usr/bin/env bash
set -ex
cd /opt/rippled_bld/pkg
cp -fpu rippled/Builds/containers/packaging/rpm/rippled.spec .
cp -fpu rippled/Builds/containers/shared/update_sources.sh .
source update_sources.sh
# Build the rpm
IFS='-' read -r RIPPLED_RPM_VERSION RELEASE <<< "$RIPPLED_VERSION"
export RIPPLED_RPM_VERSION
RPM_RELEASE=${RPM_RELEASE-1}
# post-release version
if [ "hf" = "$(echo "$RELEASE" | cut -c -2)" ]; then
RPM_RELEASE="${RPM_RELEASE}.${RELEASE}"
# pre-release version (-b or -rc)
elif [[ $RELEASE ]]; then
RPM_RELEASE="0.${RPM_RELEASE}.${RELEASE}"
fi
export RPM_RELEASE
if [[ $RPM_PATCH ]]; then
RPM_PATCH=".${RPM_PATCH}"
export RPM_PATCH
fi
cd /opt/rippled_bld/pkg/rippled
if [[ -n $(git status --porcelain) ]]; then
git status
error "Unstaged changes in this repo - please commit first"
fi
git archive --format tar.gz --prefix rippled/ -o ../rpmbuild/SOURCES/rippled.tar.gz HEAD
cd ..
source /opt/rh/devtoolset-11/enable
rpmbuild --define "_topdir ${PWD}/rpmbuild" -ba rippled.spec
rc=$?; if [[ $rc != 0 ]]; then
error "error building rpm"
fi
# Make a tar of the rpm and source rpm
RPM_VERSION_RELEASE=$(rpm -qp --qf='%{NAME}-%{VERSION}-%{RELEASE}' ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm)
tar_file=$RPM_VERSION_RELEASE.tar.gz
cp ./rpmbuild/RPMS/x86_64/* ${PKG_OUTDIR}
cp ./rpmbuild/SRPMS/* ${PKG_OUTDIR}
RPM_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm 2>/dev/null)
DBG_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-debuginfo*.rpm 2>/dev/null)
DEV_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-devel*.rpm 2>/dev/null)
REP_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-reporting*.rpm 2>/dev/null)
SRC_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/SRPMS/*.rpm 2>/dev/null)
RPM_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm | awk '{ print $1}')"
DBG_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-debuginfo*.rpm | awk '{ print $1}')"
REP_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-reporting*.rpm | awk '{ print $1}')"
DEV_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-devel*.rpm | awk '{ print $1}')"
SRC_SHA256="$(sha256sum ./rpmbuild/SRPMS/*.rpm | awk '{ print $1}')"
echo "rpm_md5sum=$RPM_MD5SUM" > ${PKG_OUTDIR}/build_vars
echo "rep_md5sum=$REP_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "dbg_md5sum=$DBG_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "dev_md5sum=$DEV_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "src_md5sum=$SRC_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "rpm_sha256=$RPM_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "rep_sha256=$REP_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "dbg_sha256=$DBG_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "dev_sha256=$DEV_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "src_sha256=$SRC_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "rippled_version=$RIPPLED_VERSION" >> ${PKG_OUTDIR}/build_vars
echo "rpm_version=$RIPPLED_RPM_VERSION" >> ${PKG_OUTDIR}/build_vars
echo "rpm_file_name=$tar_file" >> ${PKG_OUTDIR}/build_vars
echo "rpm_version_release=$RPM_VERSION_RELEASE" >> ${PKG_OUTDIR}/build_vars

View File

@@ -1,236 +0,0 @@
%define rippled_version %(echo $RIPPLED_RPM_VERSION)
%define rpm_release %(echo $RPM_RELEASE)
%define rpm_patch %(echo $RPM_PATCH)
%define _prefix /opt/ripple
Name: rippled
# Dashes in Version extensions must be converted to underscores
Version: %{rippled_version}
Release: %{rpm_release}%{?dist}%{rpm_patch}
Summary: rippled daemon
License: MIT
URL: http://ripple.com/
Source0: rippled.tar.gz
BuildRequires: cmake zlib-static ninja-build
%description
rippled
%package devel
Summary: Files for development of applications using xrpl core library
Group: Development/Libraries
Requires: zlib-static
%description devel
core library for development of standalone applications that sign transactions.
%package reporting
Summary: Reporting Server for rippled
%description reporting
History server for XRP Ledger
%prep
%setup -c -n rippled
%build
rm -rf ~/.conan/profiles/default
cp /opt/libcstd/libstdc++.so.6.0.22 /usr/lib64
cp /opt/libcstd/libstdc++.so.6.0.22 /lib64
ln -sf /usr/lib64/libstdc++.so.6.0.22 /usr/lib64/libstdc++.so.6
ln -sf /lib64/libstdc++.so.6.0.22 /lib64/libstdc++.so.6
source /opt/rh/rh-python38/enable
pip install "conan<2"
conan profile new default --detect
conan profile update settings.compiler.libcxx=libstdc++11 default
conan profile update settings.compiler.cppstd=20 default
cd rippled
mkdir -p bld.rippled
conan export external/snappy snappy/1.1.9@
pushd bld.rippled
conan install .. \
--settings build_type=Release \
--output-folder . \
--build missing
cmake -G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_INSTALL_PREFIX=%{_prefix} \
-DCMAKE_BUILD_TYPE=Release \
-Dunity=OFF \
-Dstatic=ON \
-Dvalidator_keys=ON \
-DCMAKE_VERBOSE_MAKEFILE=ON \
..
cmake --build . --parallel $(nproc) --target rippled --target validator-keys
popd
mkdir -p bld.rippled-reporting
pushd bld.rippled-reporting
conan install .. \
--settings build_type=Release \
--output-folder . \
--build missing \
--settings compiler.cppstd=17 \
--options reporting=True
cmake -G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_INSTALL_PREFIX=%{_prefix} \
-DCMAKE_BUILD_TYPE=Release \
-Dunity=OFF \
-Dstatic=ON \
-Dvalidator_keys=ON \
-Dreporting=ON \
-DCMAKE_VERBOSE_MAKEFILE=ON \
..
cmake --build . --parallel $(nproc) --target rippled
%pre
test -e /etc/pki/tls || { mkdir -p /etc/pki; ln -s /usr/lib/ssl /etc/pki/tls; }
%install
rm -rf $RPM_BUILD_ROOT
DESTDIR=$RPM_BUILD_ROOT cmake --build rippled/bld.rippled --target install #-- -v
mkdir -p $RPM_BUILD_ROOT
rm -rf ${RPM_BUILD_ROOT}/%{_prefix}/lib64/
install -d ${RPM_BUILD_ROOT}/etc/opt/ripple
install -d ${RPM_BUILD_ROOT}/usr/local/bin
install -D ./rippled/cfg/rippled-example.cfg ${RPM_BUILD_ROOT}/%{_prefix}/etc/rippled.cfg
install -D ./rippled/cfg/validators-example.txt ${RPM_BUILD_ROOT}/%{_prefix}/etc/validators.txt
ln -sf %{_prefix}/etc/rippled.cfg ${RPM_BUILD_ROOT}/etc/opt/ripple/rippled.cfg
ln -sf %{_prefix}/etc/validators.txt ${RPM_BUILD_ROOT}/etc/opt/ripple/validators.txt
ln -sf %{_prefix}/bin/rippled ${RPM_BUILD_ROOT}/usr/local/bin/rippled
install -D rippled/bld.rippled/validator-keys/validator-keys ${RPM_BUILD_ROOT}%{_bindir}/validator-keys
install -D ./rippled/Builds/containers/shared/rippled.service ${RPM_BUILD_ROOT}/usr/lib/systemd/system/rippled.service
install -D ./rippled/Builds/containers/packaging/rpm/50-rippled.preset ${RPM_BUILD_ROOT}/usr/lib/systemd/system-preset/50-rippled.preset
install -D ./rippled/Builds/containers/shared/update-rippled.sh ${RPM_BUILD_ROOT}%{_bindir}/update-rippled.sh
install -D ./rippled/bin/getRippledInfo ${RPM_BUILD_ROOT}%{_bindir}/getRippledInfo
install -D ./rippled/Builds/containers/shared/update-rippled-cron ${RPM_BUILD_ROOT}%{_prefix}/etc/update-rippled-cron
install -D ./rippled/Builds/containers/shared/rippled-logrotate ${RPM_BUILD_ROOT}/etc/logrotate.d/rippled
install -d $RPM_BUILD_ROOT/var/log/rippled
install -d $RPM_BUILD_ROOT/var/lib/rippled
# reporting mode
%define _prefix /opt/rippled-reporting
mkdir -p ${RPM_BUILD_ROOT}/etc/opt/rippled-reporting/
install -D rippled/bld.rippled-reporting/rippled-reporting ${RPM_BUILD_ROOT}%{_bindir}/rippled-reporting
install -D ./rippled/cfg/rippled-reporting.cfg ${RPM_BUILD_ROOT}%{_prefix}/etc/rippled-reporting.cfg
install -D ./rippled/cfg/validators-example.txt ${RPM_BUILD_ROOT}%{_prefix}/etc/validators.txt
install -D ./rippled/Builds/containers/packaging/rpm/50-rippled-reporting.preset ${RPM_BUILD_ROOT}/usr/lib/systemd/system-preset/50-rippled-reporting.preset
ln -s %{_prefix}/bin/rippled-reporting ${RPM_BUILD_ROOT}/usr/local/bin/rippled-reporting
ln -s %{_prefix}/etc/rippled-reporting.cfg ${RPM_BUILD_ROOT}/etc/opt/rippled-reporting/rippled-reporting.cfg
ln -s %{_prefix}/etc/validators.txt ${RPM_BUILD_ROOT}/etc/opt/rippled-reporting/validators.txt
install -d $RPM_BUILD_ROOT/var/log/rippled-reporting
install -d $RPM_BUILD_ROOT/var/lib/rippled-reporting
install -D ./rippled/Builds/containers/shared/rippled-reporting.service ${RPM_BUILD_ROOT}/usr/lib/systemd/system/rippled-reporting.service
sed -E 's/rippled?/rippled-reporting/g' ./rippled/Builds/containers/shared/update-rippled.sh > ${RPM_BUILD_ROOT}%{_bindir}/update-rippled-reporting.sh
sed -E 's/rippled?/rippled-reporting/g' ./rippled/bin/getRippledInfo > ${RPM_BUILD_ROOT}%{_bindir}/getRippledReportingInfo
sed -E 's/rippled?/rippled-reporting/g' ./rippled/Builds/containers/shared/update-rippled-cron > ${RPM_BUILD_ROOT}%{_prefix}/etc/update-rippled-reporting-cron
sed -E 's/rippled?/rippled-reporting/g' ./rippled/Builds/containers/shared/rippled-logrotate > ${RPM_BUILD_ROOT}/etc/logrotate.d/rippled-reporting
%post
%define _prefix /opt/ripple
USER_NAME=rippled
GROUP_NAME=rippled
getent passwd $USER_NAME &>/dev/null || useradd $USER_NAME
getent group $GROUP_NAME &>/dev/null || groupadd $GROUP_NAME
chown -R $USER_NAME:$GROUP_NAME /var/log/rippled/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled/
chown -R $USER_NAME:$GROUP_NAME %{_prefix}/
chmod 755 /var/log/rippled/
chmod 755 /var/lib/rippled/
chmod 644 %{_prefix}/etc/update-rippled-cron
chmod 644 /etc/logrotate.d/rippled
chown -R root:$GROUP_NAME %{_prefix}/etc/update-rippled-cron
%post reporting
%define _prefix /opt/rippled-reporting
USER_NAME=rippled-reporting
GROUP_NAME=rippled-reporting
getent passwd $USER_NAME &>/dev/null || useradd -r $USER_NAME
getent group $GROUP_NAME &>/dev/null || groupadd $GROUP_NAME
chown -R $USER_NAME:$GROUP_NAME /var/log/rippled-reporting/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled-reporting/
chown -R $USER_NAME:$GROUP_NAME %{_prefix}/
chmod 755 /var/log/rippled-reporting/
chmod 755 /var/lib/rippled-reporting/
chmod -x /usr/lib/systemd/system/rippled-reporting.service
%files
%define _prefix /opt/ripple
%doc rippled/README.md rippled/LICENSE.md
%{_bindir}/rippled
/usr/local/bin/rippled
%{_bindir}/update-rippled.sh
%{_bindir}/getRippledInfo
%{_prefix}/etc/update-rippled-cron
%{_bindir}/validator-keys
%config(noreplace) %{_prefix}/etc/rippled.cfg
%config(noreplace) /etc/opt/ripple/rippled.cfg
%config(noreplace) %{_prefix}/etc/validators.txt
%config(noreplace) /etc/opt/ripple/validators.txt
%config(noreplace) /etc/logrotate.d/rippled
%config(noreplace) /usr/lib/systemd/system/rippled.service
%config(noreplace) /usr/lib/systemd/system-preset/50-rippled.preset
%dir /var/log/rippled/
%dir /var/lib/rippled/
%files devel
%{_prefix}/include
%{_prefix}/lib/*.a
%{_prefix}/lib/cmake/ripple
%files reporting
%define _prefix /opt/rippled-reporting
%doc rippled/README.md rippled/LICENSE.md
%{_bindir}/rippled-reporting
/usr/local/bin/rippled-reporting
%config(noreplace) /etc/opt/rippled-reporting/rippled-reporting.cfg
%config(noreplace) %{_prefix}/etc/rippled-reporting.cfg
%config(noreplace) %{_prefix}/etc/validators.txt
%config(noreplace) /etc/opt/rippled-reporting/validators.txt
%config(noreplace) /usr/lib/systemd/system/rippled-reporting.service
%config(noreplace) /usr/lib/systemd/system-preset/50-rippled-reporting.preset
%dir /var/log/rippled-reporting/
%dir /var/lib/rippled-reporting/
%{_bindir}/update-rippled-reporting.sh
%{_bindir}/getRippledReportingInfo
%{_prefix}/etc/update-rippled-reporting-cron
%config(noreplace) /etc/logrotate.d/rippled-reporting
%changelog
* Wed Aug 28 2019 Mike Ellery <mellery451@gmail.com>
- Switch to subproject build for validator-keys
* Wed May 15 2019 Mike Ellery <mellery451@gmail.com>
- Make validator-keys use local rippled build for core lib
* Wed Aug 01 2018 Mike Ellery <mellery451@gmail.com>
- add devel package for signing library
* Thu Jun 02 2016 Brandon Wilson <bwilson@ripple.com>
- Install validators.txt

View File

@@ -1,37 +0,0 @@
#!/usr/bin/env bash
set -e
IFS=. read cm_maj cm_min cm_rel <<<"$1"
: ${cm_rel:=0}
CMAKE_ROOT=${2:-"${HOME}/cmake"}
function cmake_version ()
{
if [[ -d ${CMAKE_ROOT} ]] ; then
local perms=$(test $(uname) = "Linux" && echo "/111" || echo "+111")
local installed=$(find ${CMAKE_ROOT} -perm ${perms} -type f -name cmake)
if [[ "${installed}" != "" ]] ; then
echo "$(${installed} --version | head -1)"
fi
fi
}
installed=$(cmake_version)
if [[ "${installed}" != "" && ${installed} =~ ${cm_maj}.${cm_min}.${cm_rel} ]] ; then
echo "cmake already installed: ${installed}"
exit
fi
# From CMake 3.20+, "Linux" is lowercase, so using `uname` won't produce the correct path
if [ ${cm_min} -gt 19 ]; then
linux="linux"
else
linux=$(uname)
fi
pkgname="cmake-${cm_maj}.${cm_min}.${cm_rel}-${linux}-x86_64.tar.gz"
tmppkg="/tmp/cmake.tar.gz"
wget --quiet https://cmake.org/files/v${cm_maj}.${cm_min}/${pkgname} -O ${tmppkg}
mkdir -p ${CMAKE_ROOT}
cd ${CMAKE_ROOT}
tar --strip-components 1 -xf ${tmppkg}
rm -f ${tmppkg}
echo "installed: $(cmake_version)"

View File

@@ -1,15 +0,0 @@
/var/log/rippled/*.log {
daily
minsize 200M
rotate 7
nocreate
missingok
notifempty
compress
compresscmd /usr/bin/nice
compressoptions -n19 ionice -c3 gzip
compressext .gz
postrotate
/opt/ripple/bin/rippled --conf /opt/ripple/etc/rippled.cfg logrotate
endscript
}

View File

@@ -1,15 +0,0 @@
[Unit]
Description=Ripple Daemon
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/opt/rippled-reporting/bin/rippled-reporting --silent --conf /etc/opt/rippled-reporting/rippled-reporting.cfg
Restart=on-failure
User=rippled-reporting
Group=rippled-reporting
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

View File

@@ -1,15 +0,0 @@
[Unit]
Description=Ripple Daemon
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/opt/ripple/bin/rippled --net --silent --conf /etc/opt/ripple/rippled.cfg
Restart=on-failure
User=rippled
Group=rippled
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

View File

@@ -1,10 +0,0 @@
# For automatic updates, symlink this file to /etc/cron.d/
# Do not remove the newline at the end of this cron script
# bash required for use of RANDOM below.
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
# invoke check/update script with random delay up to 59 mins
0 * * * * root sleep $((RANDOM*3540/32768)) && /opt/ripple/bin/update-rippled.sh

View File

@@ -1,65 +0,0 @@
#!/usr/bin/env bash
# auto-update script for rippled daemon
# Check for sudo/root permissions
if [[ $(id -u) -ne 0 ]] ; then
echo "This update script must be run as root or sudo"
exit 1
fi
LOCKDIR=/tmp/rippleupdate.lock
UPDATELOG=/var/log/rippled/update.log
function cleanup {
# If this directory isn't removed, future updates will fail.
rmdir $LOCKDIR
}
# Use mkdir to check if the process is already running; mkdir is atomic, unlike file creation.
if ! mkdir $LOCKDIR 2>/dev/null; then
echo $(date -u) "lockdir exists - won't proceed." >> $UPDATELOG
exit 1
fi
trap cleanup EXIT
source /etc/os-release
can_update=false
if [[ "$ID" == "ubuntu" || "$ID" == "debian" ]] ; then
# Silent update
apt-get update -qq
# The next line is an "awk"ward way to check if the package needs to be updated.
RIPPLE=$(apt-get install -s --only-upgrade rippled | awk '/^Inst/ { print $2 }')
test "$RIPPLE" == "rippled" && can_update=true
function apply_update {
apt-get install rippled -qq
}
elif [[ "$ID" == "fedora" || "$ID" == "centos" || "$ID" == "rhel" || "$ID" == "scientific" ]] ; then
RIPPLE_REPO=${RIPPLE_REPO-stable}
yum --disablerepo=* --enablerepo=ripple-$RIPPLE_REPO clean expire-cache
yum check-update -q --enablerepo=ripple-$RIPPLE_REPO rippled || can_update=true
function apply_update {
yum update -y --enablerepo=ripple-$RIPPLE_REPO rippled
}
else
echo "unrecognized distro!"
exit 1
fi
# Do the actual update and restart the service after reloading systemctl daemon.
if [ "$can_update" = true ] ; then
exec 3>&1 1>>${UPDATELOG} 2>&1
set -e
apply_update
systemctl daemon-reload
systemctl restart rippled.service
echo $(date -u) "rippled daemon updated."
else
echo $(date -u) "no updates available" >> $UPDATELOG
fi

View File

@@ -1,20 +0,0 @@
#!/usr/bin/env bash
function error {
echo $1
exit 1
}
cd /opt/rippled_bld/pkg/rippled
export RIPPLED_VERSION=$(egrep -i -o "\b(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)(-[0-9a-z\-]+(\.[0-9a-z\-]+)*)?(\+[0-9a-z\-]+(\.[0-9a-z\-]+)*)?\b" src/ripple/protocol/impl/BuildInfo.cpp)
: ${PKG_OUTDIR:=/opt/rippled_bld/pkg/out}
export PKG_OUTDIR
if [ ! -d ${PKG_OUTDIR} ]; then
error "${PKG_OUTDIR} is not mounted"
fi
if [ -x ${OPENSSL_ROOT}/bin/openssl ]; then
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${OPENSSL_ROOT}/lib ${OPENSSL_ROOT}/bin/openssl version -a
fi

View File

@@ -1,15 +0,0 @@
ARG DIST_TAG=18.04
FROM ubuntu:$DIST_TAG
ARG GIT_COMMIT=unknown
ARG CI_USE=false
LABEL git-commit=$GIT_COMMIT
WORKDIR /root
COPY ubuntu-builder/ubuntu_setup.sh .
RUN ./ubuntu_setup.sh && rm ubuntu_setup.sh
RUN mkdir -m 777 -p /opt/rippled_bld/pkg/
WORKDIR /opt/rippled_bld/pkg
COPY packaging/dpkg/build_dpkg.sh ./
CMD ./build_dpkg.sh

View File

@@ -1,76 +0,0 @@
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o xtrace
# Parameters
gcc_version=${GCC_VERSION:-10}
cmake_version=${CMAKE_VERSION:-3.25.1}
conan_version=${CONAN_VERSION:-1.59}
apt update
# Iteratively build the list of packages to install so that we can interleave
# the lines with comments explaining their inclusion.
dependencies=''
# - to identify the Ubuntu version
dependencies+=' lsb-release'
# - for add-apt-repository
dependencies+=' software-properties-common'
# - to download CMake
dependencies+=' curl'
# - to build CMake
dependencies+=' libssl-dev'
# - Python headers for Boost.Python
dependencies+=' python3-dev'
# - to install Conan
dependencies+=' python3-pip'
# - to download rippled
dependencies+=' git'
# - CMake generators (but not CMake itself)
dependencies+=' make ninja-build'
apt install --yes ${dependencies}
add-apt-repository --yes ppa:ubuntu-toolchain-r/test
apt install --yes gcc-${gcc_version} g++-${gcc_version} \
debhelper debmake debsums gnupg dh-buildinfo dh-make dh-systemd cmake \
ninja-build zlib1g-dev make autoconf automake \
pkg-config apt-transport-https
# Give us nice unversioned aliases for gcc and company.
update-alternatives --install \
/usr/bin/gcc gcc /usr/bin/gcc-${gcc_version} 100 \
--slave /usr/bin/g++ g++ /usr/bin/g++-${gcc_version} \
--slave /usr/bin/gcc-ar gcc-ar /usr/bin/gcc-ar-${gcc_version} \
--slave /usr/bin/gcc-nm gcc-nm /usr/bin/gcc-nm-${gcc_version} \
--slave /usr/bin/gcc-ranlib gcc-ranlib /usr/bin/gcc-ranlib-${gcc_version} \
--slave /usr/bin/gcov gcov /usr/bin/gcov-${gcc_version} \
--slave /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-${gcc_version} \
--slave /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-${gcc_version}
update-alternatives --auto gcc
# Download and unpack CMake.
cmake_slug="cmake-${cmake_version}"
curl --location --remote-name \
"https://github.com/Kitware/CMake/releases/download/v${cmake_version}/${cmake_slug}.tar.gz"
tar xzf ${cmake_slug}.tar.gz
rm ${cmake_slug}.tar.gz
# Build and install CMake.
cd ${cmake_slug}
./bootstrap --parallel=$(nproc)
make --jobs $(nproc)
make install
cd ..
rm --recursive --force ${cmake_slug}
# Install Conan.
pip3 install conan==${conan_version}
conan profile new --detect gcc
conan profile update settings.compiler=gcc gcc
conan profile update settings.compiler.version=${gcc_version} gcc
conan profile update settings.compiler.libcxx=libstdc++11 gcc
conan profile update env.CC=/usr/bin/gcc gcc
conan profile update env.CXX=/usr/bin/g++ gcc

View File

@@ -16,6 +16,9 @@ Loop: ripple.app ripple.overlay
Loop: ripple.app ripple.peerfinder
ripple.app > ripple.peerfinder
Loop: ripple.app ripple.protocol
ripple.app > ripple.protocol
Loop: ripple.app ripple.rpc
ripple.rpc > ripple.app
@@ -46,6 +49,12 @@ Loop: ripple.nodestore ripple.overlay
Loop: ripple.overlay ripple.rpc
ripple.rpc ~= ripple.overlay
Loop: test.app test.jtx
test.app > test.jtx
Loop: test.app test.rpc
test.rpc == test.app
Loop: test.jtx test.toplevel
test.toplevel > test.jtx

View File

@@ -4,7 +4,6 @@ ripple.app > ripple.conditions
ripple.app > ripple.consensus
ripple.app > ripple.crypto
ripple.app > ripple.json
ripple.app > ripple.protocol
ripple.app > ripple.resource
ripple.app > test.unit_test
ripple.basics > ripple.beast
@@ -91,8 +90,6 @@ test.app > ripple.overlay
test.app > ripple.protocol
test.app > ripple.resource
test.app > ripple.rpc
test.app > test.jtx
test.app > test.rpc
test.app > test.toplevel
test.app > test.unit_test
test.basics > ripple.basics
@@ -111,7 +108,9 @@ test.consensus > ripple.app
test.consensus > ripple.basics
test.consensus > ripple.beast
test.consensus > ripple.consensus
test.consensus > ripple.core
test.consensus > ripple.ledger
test.consensus > ripple.protocol
test.consensus > test.csf
test.consensus > test.toplevel
test.consensus > test.unit_test
@@ -185,7 +184,6 @@ test.protocol > ripple.basics
test.protocol > ripple.beast
test.protocol > ripple.crypto
test.protocol > ripple.json
test.protocol > ripple.ledger
test.protocol > ripple.protocol
test.protocol > test.toplevel
test.resource > ripple.basics

View File

@@ -1 +0,0 @@
[Build instructions are currently located in `BUILD.md`](../../BUILD.md)

View File

@@ -1 +0,0 @@
[Build instructions are currently located in `BUILD.md`](../../BUILD.md)

View File

@@ -45,7 +45,6 @@ include(RippledSanity)
include(RippledVersion)
include(RippledSettings)
include(RippledNIH)
include(RippledRelease)
# this check has to remain in the top-level cmake
# because of the early return statement
if (packages_only)

View File

@@ -1,67 +1,185 @@
# Contributing
The XRP Ledger has many and diverse stakeholders, and everyone deserves a chance to contribute meaningful changes to the code that runs the XRPL.
To contribute, please:
1. Fork the repository under your own user.
2. Create a new branch on which to write your changes. Please note that changes which alter transaction processing must be composed via and guarded using [Amendments](https://xrpl.org/amendments.html). Changes which are _read only_ i.e. RPC, or changes which are only refactors and maintain the existing behaviour do not need to be made through an Amendment.
3. Write and test your code.
4. Ensure that your code compiles with the provided build engine and update the provided build engine as part of your PR where needed and where appropriate.
5. Write test cases for your code and include those in `src/test` such that they are runnable from the command line using `./rippled -u`. (Some changes will not be able to be tested this way.)
6. Ensure your code passes automated checks (e.g. clang-format and levelization.)
7. Squash your commits (i.e. rebase) into as few commits as is reasonable to describe your changes at a high level (typically a single commit for a small change.)
8. Open a PR to the main repository onto the _develop_ branch, and follow the provided template.
Xahau has many and diverse stakeholders, and everyone deserves
a chance to contribute meaningful changes to the code that runs Xahau.
# Contributing
We assume you are familiar with the general practice of [making
contributions on GitHub][1]. This file includes only special
instructions specific to this project.
## Before you start
In general, contributions should be developed in your personal
[fork](https://github.com/xahau/xahaud/fork).
The following branches exist in the main project repository:
- `dev`: The latest set of unreleased features, and the most common
starting point for contributions.
- `candidate`: The latest beta release or release candidate.
- `release`: The latest stable release.
The tip of each branch must be signed. In order for GitHub to sign a
squashed commit that it builds from your pull request, GitHub must know
your verifying key. Please set up [signature verification][signing].
[rippled]: https://github.com/xahau/xahaud
[signing]:
https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification
## Major contributions
If your contribution is a major feature or breaking change, then you
must first write a Xahau Standard (XLS) describing it. Go to
[Standards](https://github.com/XRPLF/XRPL-Standards/discussions),
choose the next available standard number, and open a discussion with an
appropriate title to propose your draft standard.
When you submit a pull request, please link the corresponding XLS in the
description. An XLS still in draft status is considered a
work-in-progress and open for discussion. Please allow time for
questions, suggestions, and changes to the XLS draft. It is the
responsibility of the XLS author to update the draft to match the final
implementation when its corresponding pull request is merged, unless the
author delegates that responsibility to others.
## Before making a pull request
Changes that alter transaction processing must be guarded by an
[Amendment](https://docs.xahau.network/features/amendments).
All other changes that maintain the existing behavior do not need an
Amendment.
Ensure that your code compiles according to the build instructions in the
[`documentation`](https://docs.xahau.network/infrastructure/building-xahau).
If you create new source files, they must go under `src/ripple`.
You will need to add them to one of the
[source lists](./Builds/CMake/RippledCore.cmake) in CMake.
Please write tests for your code.
If you create new test source files, they must go under `src/test`.
You will need to add them to one of the
[source lists](./Builds/CMake/RippledCore.cmake) in CMake.
If your test can be run offline, in under 60 seconds, then it can be an
automatic test run by `rippled --unittest`.
Otherwise, it must be a manual test.
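For example, a minimal, hedged sketch of running the automatic tests (the suite name in the second command is hypothetical, for illustration only):
```
# Run the full automatic test suite.
./rippled --unittest
# Run a single suite by name; this suite name is a hypothetical example.
./rippled --unittest=ripple.tx.Offer
```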
The source must be formatted according to the style guide below.
Header includes must be [levelized](./Builds/levelization).
## Pull requests
In general, pull requests use `dev` as the base branch.
(Hotfixes are an exception.)
Changes to pull requests must be added as new commits.
Once code reviewers have started looking at your code, please avoid
force-pushing a branch in a pull request.
This preserves the ability for reviewers to filter changes since their last
review.
A pull request must obtain **approvals from at least two reviewers** before it
can be considered for merge by a Maintainer.
Maintainers retain discretion to require more approvals if they feel the
credibility of the existing approvals is insufficient.
Pull requests must be merged by [squash-and-merge][2]
to preserve a linear history for the `dev` branch.
# Major Changes
If your code change is a major feature, a breaking change or in some other way makes a significant alteration to the way the XRPL will operate, then you must first write an XLS document (XRP Ledger Standard) describing your change.
To do this:
1. Go to [XLS Standards](https://github.com/XRPLF/XRPL-Standards/discussions).
2. Choose the next available standard number.
3. Open a discussion with the appropriate title to propose your draft standard.
4. Link your XLS in your PR.
# Style guide
This is a non-exhaustive list of recommended style guidelines. These are not always strictly enforced and serve as a way to keep the codebase coherent rather than a set of _thou shalt not_ commandments.
This is a non-exhaustive list of recommended style guidelines. These are
not always strictly enforced and serve as a way to keep the codebase
coherent rather than a set of _thou shalt not_ commandments.
## Formatting
All code must conform to `clang-format` version 10, unless the result would be unreasonably difficult to read or maintain.
To change your code to conform use `clang-format -i <your changed files>`.
All code must conform to `clang-format` version 10,
according to the settings in [`.clang-format`](./.clang-format),
unless the result would be unreasonably difficult to read or maintain.
To demarcate lines that should be left as-is, surround them with comments like
this:
```
// clang-format off
...
// clang-format on
```
You can format individual files in place by running `clang-format -i <file>...`
from any directory within this project.
You can install a pre-commit hook to automatically run `clang-format` before every commit:
```
pip3 install pre-commit
pre-commit install
```
## Avoid
1. Proliferation of nearly identical code.
2. Proliferation of new files and classes.
3. Complex inheritance and complex OOP patterns.
4. Unmanaged memory allocation and raw pointers.
5. Macros and non-trivial templates (unless they add significant value.)
6. Lambda patterns (unless these add significant value.)
7. CPU or architecture-specific code unless there is a good reason to include it, and where it is used guard it with macros and provide explanatory comments.
5. Macros and non-trivial templates (unless they add significant value).
6. Lambda patterns (unless these add significant value).
7. CPU or architecture-specific code unless there is a good reason to
include it, and where it is used, guard it with macros and provide
explanatory comments.
8. Importing new libraries unless there is a very good reason to do so.
## Seek to
9. Extend functionality of existing code rather than creating new code.
10. Prefer readability over terseness where important logic is concerned.
11. Inline functions that are not used or are not likely to be used elsewhere in the codebase.
12. Use clear and self-explanatory names for functions, variables, structs and classes.
13. Use TitleCase for classes, structs and filenames, camelCase for function and variable names, lower case for namespaces and folders.
14. Provide as many comments as you feel that a competent programmer would need to understand what your code does.
10. Prefer readability over terseness where important logic is
concerned.
11. Inline functions that are not used or are not likely to be used
elsewhere in the codebase.
12. Use clear and self-explanatory names for functions, variables,
structs and classes.
13. Use TitleCase for classes, structs and filenames, camelCase for
function and variable names, lower case for namespaces and folders.
14. Provide as many comments as you feel that a competent programmer
would need to understand what your code does.
# Maintainers
Maintainers are ecosystem participants with elevated access to the repository. They are able to push new code, make decisions on when a release should be made, etc.
## Code Review
New contributors' PRs must be reviewed by at least two of the maintainers. PRs from well-established prior contributors can be reviewed by a single maintainer.
Maintainers are ecosystem participants with elevated access to the repository.
They are able to push new code, make decisions on when a release should be
made, etc.
## Adding and Removing
New maintainers can be proposed by two existing maintainers, subject to a vote by a quorum of the existing maintainers. A minimum of 50% support and a 50% participation is required. In the event of a tie vote, the addition of the new maintainer will be rejected.
Existing maintainers can resign, or be subject to a vote for removal at the behest of two existing maintainers. A minimum of 60% agreement and 50% participation are required. The XRP Ledger Foundation will have the ability, for cause, to remove an existing maintainer without a vote.
## Adding and removing
## Existing Maintainers
* [JoelKatz](https://github.com/JoelKatz) (Ripple)
* [Manojsdoshi](https://github.com/manojsdoshi) (Ripple)
* [N3tc4t](https://github.com/n3tc4t) (XRPL Labs)
* [Nikolaos D Bougalis](https://github.com/nbougalis)
* [Nixer89](https://github.com/nixer89) (XRP Ledger Foundation)
* [RichardAH](https://github.com/RichardAH) (XRPL Labs + XRP Ledger Foundation)
* [Seelabs](https://github.com/seelabs) (Ripple)
* [Silkjaer](https://github.com/Silkjaer) (XRP Ledger Foundation)
* [WietseWind](https://github.com/WietseWind) (XRPL Labs + XRP Ledger Foundation)
* [Ximinez](https://github.com/ximinez) (Ripple)
New maintainers can be proposed by two existing maintainers, subject to a vote
by a quorum of the existing maintainers.
A minimum of 50% support and a 50% participation is required.
In the event of a tie vote, the addition of the new maintainer will be
rejected.
Existing maintainers can resign, or be subject to a vote for removal at the
behest of two existing maintainers.
A minimum of 60% agreement and 50% participation are required.
The XRP Ledger Foundation will have the ability, for cause, to remove an
existing maintainer without a vote.
## Current Maintainers
* [Richard Holland](https://github.com/RichardAH) (XRPL Labs + XRP Ledger Foundation)
* [Denis Angell](https://github.com/dangell7) (XRPL Labs + XRP Ledger Foundation)
* [Wietse Wind](https://github.com/WietseWind) (XRPL Labs + XRP Ledger Foundation)
[1]: https://docs.github.com/en/get-started/quickstart/contributing-to-projects
[2]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/about-pull-request-merges#squash-and-merge-your-commits

View File

@@ -2,6 +2,7 @@ ISC License
Copyright (c) 2011, Arthur Britto, David Schwartz, Jed McCaleb, Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant.
Copyright (c) 2012-2020, the XRP Ledger developers.
Copyright (c) 2020-2024, XRPL Labs.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above

View File

@@ -1,3 +1,71 @@
# The Xahau Ledger
# Xahau
TODO: Doco
**Note:** Throughout this README, references to "we" or "our" pertain to the community and contributors involved in the Xahau network. It does not imply a legal entity or a specific collection of individuals.
[Xahau](https://xahau.network/) is a decentralized cryptographic ledger that builds upon the robust foundation of the XRP Ledger. It inherits the XRP Ledger's Byzantine Fault Tolerant consensus algorithm and enhances it with additional features and functionalities. Developers and users familiar with the XRP Ledger will find that most documentation and tutorials available on [xrpl.org](https://xrpl.org) are relevant and applicable to Xahau, including those related to running validators and managing validator keys. For Xahau-specific documentation, visit our [documentation](https://docs.xahau.network/).
## XAH
XAH is the public, counterparty-free asset native to Xahau and functions primarily as network gas. Transactions submitted to the Xahau network must supply an appropriate amount of XAH, to be burnt by the network as a fee, in order to be successfully included in a validated ledger. In addition, XAH also acts as a bridge currency within the Xahau DEX. XAH is traded on the open market and is available for anyone to access. Xahau was created in 2023 with a supply of 600 million units of XAH.
## xahaud
The server software that powers Xahau is called `xahaud` and is available in this repository under the permissive [ISC open-source license](LICENSE.md). The `xahaud` server software is written primarily in C++ and runs on a variety of platforms. The `xahaud` server software can run in several modes depending on its configuration.
### Build from Source
* [Read the build instructions in our documentation](https://docs.xahau.network/infrastructure/building-xahau)
* If you encounter any issues, please [open an issue](https://github.com/xahau/xahaud/issues)
## Highlights of Xahau
1. **Hooks**: Hooks are small, efficient WebAssembly modules designed specifically for Xahau. They add robust smart contract functionality to Xahau, allowing you to construct and deploy applications with bespoke functionalities. Hooks can block or allow transactions to and from the account, change and keep track of the hook's internal state and logic, and autonomously initiate new transactions on the account's behalf. They can be written in any language that can be compiled into WebAssembly.
2. **Balance Rewards**: Xahau offers a Balance Rewards feature that provides a 4% per annum reward. This feature encourages users to maintain a balance in their accounts and rewards them for doing so.
3. **URIToken**: The URIToken is a feature in Xahau that allows for the creation and management of non-fungible tokens within the network. This feature can be used for a variety of purposes and specific use cases.
4. **Import/B2M**: The Import/B2M feature in Xahau allows for the importation of assets into the network. This feature can be used to bring external assets into the Xahau network, expanding the range of assets that can be managed and traded within the network.
5. **Governance Game**: The Governance Game is a feature in Xahau that allows for the decentralized governance of the network. This feature allows users to participate in the decision-making process of the network, ensuring that the network remains democratic and responsive to the needs of its users.
## Binary Releases and Versioning System
Xahau provides pre-compiled binary releases of its software, which are ready-to-run versions that users can download and execute without compiling the source code themselves. These binaries are built automatically using GitHub Actions whenever a new commit is pushed or a pull request is merged.
The versioning system for Xahau binaries is based on the date of the build, the branch name, and a build number, following the format `YYYY.MM.DD-branch+buildnumber`. For example, `2023.10.30-release+443` indicates a binary built on October 30, 2023, from the `release` branch, and it is the 443rd build from that branch.
Users can access these binaries on the [Xahau Build Server](https://build.xahau.tech/), which provides an organized list of releases along with release notes for each version. This system simplifies the deployment process for users and ensures they can easily identify and download the appropriate version for their needs.
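As a hedged sketch (using the example version above), the three components of a version string can be separated with ordinary shell parameter expansion:
```
# Split a build version of the form YYYY.MM.DD-branch+buildnumber into its parts.
version="2023.10.30-release+443"
build_date="${version%%-*}"   # 2023.10.30
rest="${version#*-}"          # release+443
branch="${rest%%+*}"          # release
build="${rest#*+}"            # 443
echo "date=$build_date branch=$branch build=$build"
```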
## Source Code
Here are some good places to start learning the source code:
- Read the markdown files in the source tree: `src/ripple/**/*.md`.
- Read [the levelization document](./Builds/levelization) to get an idea of the internal dependency graph.
- In the big picture, the `main` function constructs an `ApplicationImp` object, which implements the `Application` virtual interface. Almost every component in the application takes an `Application&` parameter in its constructor, typically named `app` and stored as a member variable `app_`. This allows most components to depend on any other component.
### Repository Contents
| Folder | Contents |
|:-----------|:-------------------------------------------------|
| `./Builds` | Platform-specific guides for building `xahaud`. |
| `./cfg` | Example configuration files. |
| `./src` | Source code. |
Some of the directories under `src` are external repositories included using
git-subtree. See those directories' README files for more details.
## Resources
- **Documentation**: Documentation for XRPL, Xahau and Hooks.
- [XRPL Documentation](https://xrpl.org)
- [Xahau Documentation](https://docs.xahau.network/)
- [Hooks Technical Documentation](https://xrpl-hooks.readme.io/)
- **Explorers**: Explore the Xahau ledger using various explorers:
- [xahauexplorer.com](https://xahauexplorer.com)
- [xahscan.com](https://xahscan.com)
- [xahau.xrpl.org](https://xahau.xrpl.org)
- [explorer.xahau.network](https://explorer.xahau.network)
- **Testnet & Faucet**: Test applications and obtain test XAH at [xahau-test.net](https://xahau-test.net) and use the testnet explorer at [explorer.xahau.network](https://explorer.xahau.network).
- **Supporting Wallets**: A list of wallets that support XAH and Xahau-based assets.
- [Xumm](https://xumm.app)
- [Crossmark](https://crossmark.io)

RELEASENOTES.XAHAUD.md (new file)
View File

@@ -0,0 +1,74 @@
# Release Notes
This document contains the release notes for `xahaud`, the reference server implementation of the Xahau protocol. To learn more about how to build, run or update a `xahaud` server, visit https://docs.xahau.network/infrastructure/peering/connect-to-xahau-mainnet
Have new ideas? Need help with setting up your node? [Please open an issue here](https://github.com/xahau/xahaud/issues/new/choose).
# Introducing Xahau version 2023.10.30-release+443
Version 2023.10.30-release+443 of `xahaud`, the reference server implementation of the Xahau protocol, is now available at [Build Server](https://build.xahau.tech/).
[Download Release Binary](https://build.xahau.tech/2023.10.30-release%2B443)
[Sign Up for Future Release Announcements](https://groups.google.com/g/xahau-server)
<!-- BREAK -->
## Action Required
New amendments are now open for voting according to Xahau's [amendment process](https://docs.xahau.network/features/amendments), which enables protocol changes following five days of >80% support from trusted validators.
If you operate a Xahau server, upgrade to version 2023.10.30-release+443 by October 31 to ensure service continuity. The exact time that protocol changes take effect depends on the voting decisions of the decentralized network.
## Install / Upgrade
On supported platforms, see the [instructions on installing or updating `xahaud`](https://docs.xahau.network/infrastructure/peering/connect-to-xahau-mainnet).
## New Amendments
- **`Hooks`**: This amendment activates hooks and the hook API in the Xahau network, allowing custom logic to be executed on the ledger in response to transactions.
- **`BalanceRewards`**: This amendment enables `ClaimReward` and `GenesisMint` transactions, facilitating balance rewards to be paid in XAH, the Xahau network's native currency.
- **`PaychanAndEscrowForTokens`**: This amendment allows the use of IOU tokens for PaymentChannels and Escrow transactions, enhancing flexibility and functionality.
- **`URIToken`**: This amendment activates URITokens, which are non-fungible, hook-friendly tokens in the Xahau network.
- **`Import`**: This amendment enables the Import transaction for B2M xpop processing, allowing transactions to be imported from another network or system.
- **`XahauGenesis`**: This amendment activates the genesis amendment for the initial distribution of XAH and the establishment of the governance game.
- **`HooksUpdate1`**: This amendment extends the hooks API to include Xpop functionality, enabling more complex transactions involving Xpop.
## Changelog
### New Features and Improvements
- **Server Definitions**: The new feature introduces a `server_definitions` endpoint. This endpoint is designed to return a JSON object that contains the definitions of various types, fields, and transaction results used in the Xahau protocol.
This feature enhances the functionality of the system by providing an efficient way to fetch and verify the current definitions used in the Xahau protocol.
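As a minimal sketch (assuming a local node serving JSON-RPC on 127.0.0.1 port 5005, a conventional default that may differ from your configuration), the endpoint can be queried like this:
```
# Query the server_definitions endpoint of a local node.
# The host and port are assumptions; adjust them to your own setup.
curl -s -H 'Content-Type: application/json' \
  -d '{"method": "server_definitions", "params": [{}]}' \
  http://127.0.0.1:5005/
```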
### GitHub
The public source code repository for `xahaud` is hosted on GitHub at <https://github.com/xahau/xahaud>.
We welcome all contributions and invite everyone to join the community of Xahau developers to help build the Internet of Value.
### Credits
The following people contributed directly to this release:
- Nikolaos D. Bougalis <nikb@bougalis.net>
- Wietse Wind <wietse@xrpl-labs.com>
- Richard Holland <richard@xrpl-labs.com>
- Denis Angell <denis@xrpl-labs.com>
Bug Bounties and Responsible Disclosures:
We welcome reviews of the xahaud code and urge researchers to
responsibly disclose any issues they may find.
To report a bug, please send a detailed report to:
bugs@xahau.network

View File

@@ -1,149 +1,74 @@
### Operating an XRP Ledger server securely
### Operating the Xahau server securely
For more details on operating an XRP Ledger server securely, please visit https://xrpl.org/manage-the-rippled-server.html.
For more details on operating the Xahau server securely, please visit https://docs.xahau.network/infrastructure/building-xahau.
# Security Policy
## Supported Versions
Software constantly evolves. In order to focus resources, we generally only accept vulnerability reports that affect recent and current versions of the software. We always accept reports for issues present in the **master**, **release** or **develop** branches, and in proposed, [open pull requests](https://github.com/ripple/rippled/pulls).
Software constantly evolves. In order to focus resources, we generally only accept vulnerability reports that affect recent and current versions of the software. We always accept reports for issues present in the **release**, **candidate** or **dev** branches, and in proposed, [open pull requests](https://github.com/xahau/xahaud/pulls).
## Identifying and Reporting Vulnerabilities
# Responsible Disclosure
We take security seriously and we do our best to ensure that all our releases are bug free. But we aren't perfect and sometimes things will slip through.
## Responsible disclosure policy
### Responsible Investigation
At [Xahau](https://xahau.network) we believe that the security of our systems is extremely important.
We urge you to examine our code carefully and responsibly, and to disclose any issues that you identify in a responsible fashion.
Despite our concern for the security of our systems during product development and maintenance, there's always the possibility of someone finding something we need to improve / update / change / fix / ...
Responsible investigation includes, but isn't limited to, the following:
We appreciate you notifying us as soon as possible if you find a weak point in one of our systems, so that we can take immediate measures to protect our customers and their data.
- Not performing tests on the main network. If testing is necessary, use the [Testnet or Devnet](https://xrpl.org/xrp-testnet-faucet.html).
- Not targeting physical security measures, or attempting to use social engineering, spam, distributed denial of service (DDOS) attacks, etc.
- Investigating bugs in a way that makes a reasonable, good faith effort not to be disruptive or harmful to the XRP Ledger and the broader ecosystem.
## How to report
### Responsible Disclosure
If you believe you found a security issue in one of our systems, please notify us as soon as possible by [sending an email to bugs@xahau.network](mailto:bugs@xahau.network).
If you discover a vulnerability or potential threat, or if you _think_
you have, please reach out by dropping an email using the contact
information below.
## Rules
Your report should include the following:
This responsible disclosure policy is not an open invitation to actively scan our network and applications for vulnerabilities. Our continuous monitoring will likely detect your scan and these will be investigated.
- Your contact information (typically, an email address);
- The description of the vulnerability;
- The attack scenario (if any);
- The steps to reproduce the vulnerability;
- Any other relevant details or artifacts, including code, scripts or patches.
### We ask you to:
In your mail, please describe the issue or the potential threat; if possible, please include a "repro" (code that can reproduce the issue) or describe the best way to reproduce the issue. Please make your report as extensive as possible.
- Not share information about the security issue with others until the problem is resolved and to immediately delete any confidential data acquired
- Not further abuse the problem, for example, by downloading more data than is necessary in order to demonstrate the leak or to view, delete or amend the data of third parties
- Provide detailed information in order for us to reproduce, validate and resolve the problem as quickly as possible. Include your test data, timestamps and URL(s) of the system(s) involved
- Leave your contact details (e-mail address and/or phone number) so that we may contact you about the progress of the solution. We do accept anonymous reports
- Do not use attacks on physical security, social engineering, distributed denial of service, spam or applications of third parties
For more information on responsible disclosure, please read this [Wikipedia article](https://en.wikipedia.org/wiki/Responsible_disclosure).
## Responsible Disclosure procedure(s)
## Report Handling Process
### When you report a security issue, we will act according to the following:
Please report the bug directly to us and limit further disclosure. If you want to prove that you knew the bug as of a given time, consider using a cryptographic precommitment: hash the content of your report and publish the hash on a medium of your choice (e.g. on Twitter or as a memo in a transaction) as "proof" that you had written the text at a given point in time.
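A minimal sketch of such a precommitment (the filename is hypothetical):
```
# Hash the report and publish only the digest; reveal the report text later.
sha256sum report.txt
```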
- You will receive a confirmation of receipt from us within 4 working days after the report was made
- You will receive a response with the assessment of the security issue and an expected date of resolution within 4 working days after the confirmation of receipt was sent
- We will take no legal steps against you in relation to the report if you have kept to the conditions as set out above
- We will handle your report confidentially and we will not share your details with third parties without your permission, unless that is necessary in order to fulfil a legal obligation
Once we receive a report, we:
### This responsible disclosure scheme is not intended for:
1. Assign two people to independently evaluate the report;
2. Consider their recommendations;
3. If action is necessary, formulate a plan to address the issue;
4. Communicate privately with the reporter to explain our plan.
5. Prepare, test and release a version which fixes the issue; and
6. Announce the vulnerability publicly.
- Complaints
- Website unavailable reports
- Phishing reports
- Fraud reports
We will triage and respond to your disclosure within 24 hours. Beyond that, we will work to analyze the issue in more detail, formulate, develop and test a fix.
For these complaints or reports, please [contact our support team](mailto:bugs@xahau.network).
While we commit to responding within 24 hours of your initial report with our triage assessment, we cannot guarantee a response time for the remaining steps. We will communicate with you throughout this process, letting you know where we are and keeping you updated on the timeframe.
## Bug bounty program
## Bug Bounty Program
[Xahau](https://xahau.network) encourages the reporting of security issues or vulnerabilities. We may make an appropriate reward for confidential disclosure of any design or implementation issue that could be used to compromise the confidentiality or integrity of our users' data that was not yet known to us. We decide whether the report is eligible and the amount of the reward.
[Ripple](https://ripple.com) is generously sponsoring a bug bounty program for vulnerabilities in [`rippled`](https://github.com/ripple/rippled) (and other related projects, like [`ripple-lib`](https://github.com/ripple/ripple-lib)).
## Exclusions
This program allows us to recognize and reward individuals or groups that identify and report bugs. In summary, in order to qualify for a bounty, the bug must be:
### The following type of security problems are excluded
1. **In scope**. Only bugs in software under the scope of the program qualify. Currently, that means `rippled` and `ripple-lib`.
2. **Relevant**. A security issue, posing a danger to user funds, privacy or the operation of the XRP Ledger.
3. **Original and previously unknown**. Bugs that are already known and discussed in public do not qualify. Previously reported bugs, even if publicly unknown, are not eligible.
4. **Specific**. We welcome general security advice or recommendations, but we cannot pay bounties for that.
5. **Fixable**. There has to be something we can do to permanently fix the problem. Note that bugs in other people's software may still qualify in some cases. For example, if you find a bug in a library that we use which can compromise the security of software that is in scope and we can get it fixed, you may qualify for a bounty.
6. **Unused**. If you use the exploit to attack the XRP Ledger, you do not qualify for a bounty. If you report a vulnerability used in an ongoing or past attack and there is specific, concrete evidence that suggests you are the attacker we reserve the right not to pay a bounty.
- (D)DOS attacks
- Error messages or error pages without sensitive data
- Tests & sample data as publicly available in our repositories at Github
- Common issues like browser header warnings or DNS configuration, identified by vulnerability scans
- Vulnerability scan reports for software we publicly use
- Security issues related to outdated OS's, browsers or plugins
- Reports for security problems that we have been notified of before
The amount paid varies dramatically. Vulnerabilities that are harmless on their own, but could form part of a critical exploit will usually receive a bounty. Full-blown exploits can receive much higher bounties. Please don't hold back partial vulnerabilities while trying to construct a full-blown exploit. We will pay a bounty to anyone who reports a complete chain of vulnerabilities even if they have reported each component of the exploit separately and those vulnerabilities have been fixed in the meantime. However, to qualify for the full bounty, you must have been the first to report each of the partial exploits.
Please note: Reports that are lacking any proof (such as screenshots or other data), detailed information or details on how to reproduce any unexpected result will be investigated but will not be eligible for any reward.
### Contacting Us
To report a qualifying bug, please send a detailed report to:
|Email Address|bugs@ripple.com |
|:-----------:|:----------------------------------------------------|
|Short Key ID | `0xC57929BE` |
|Long Key ID | `0xCD49A0AFC57929BE` |
|Fingerprint | `24E6 3B02 37E0 FA9C 5E96 8974 CD49 A0AF C579 29BE` |
The full PGP key for this address, which is also available on several key servers (e.g. on [keys.gnupg.net](https://keys.gnupg.net)), is:
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFUwGHYBEAC0wpGpBPkd8W1UdQjg9+cEFzeIEJRaoZoeuJD8mofwI5Ejnjdt
kCpUYEDal0ygkKobu8SzOoATcDl18iCrScX39VpTm96vISFZMhmOryYCIp4QLJNN
4HKc2ZdBj6W4igNi6vj5Qo6JMyGpLY2mz4CZskbt0TNuUxWrGood+UrCzpY8x7/N
a93fcvNw+prgCr0rCH3hAPmAFfsOBbtGzNnmq7xf3jg5r4Z4sDiNIF1X1y53DAfV
rWDx49IKsuCEJfPMp1MnBSvDvLaQ2hKXs+cOpx1BCZgHn3skouEUxxgqbtTzBLt1
xXpmuijsaltWngPnGO7mOAzbpZSdBm82/Emrk9bPMuD0QaLQjWr7HkTSUs6ZsKt4
7CLPdWqxyY/QVw9UaxeHEtWGQGMIQGgVJGh1fjtUr5O1sC9z9jXcQ0HuIHnRCTls
GP7hklJmfH5V4SyAJQ06/hLuEhUJ7dn+BlqCsT0tLmYTgZYNzNcLHcqBFMEZHvHw
9GENMx/tDXgajKql4bJnzuTK0iGU/YepanANLd1JHECJ4jzTtmKOus9SOGlB2/l1
0t0ADDYAS3eqOdOcUvo9ElSLCI5vSVHhShSte/n2FMWU+kMUboTUisEG8CgQnrng
g2CvvQvqDkeOtZeqMcC7HdiZS0q3LJUWtwA/ViwxrVlBDCxiTUXCotyBWwARAQAB
tDBSaXBwbGUgTGFicyBCdWcgQm91bnR5IFByb2dyYW0gPGJ1Z3NAcmlwcGxlLmNv
bT6JAjcEEwEKACEFAlUwGHYCGwMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ
zUmgr8V5Kb6R0g//SwY/mVJY59k87iL26/KayauSoOcz7xjcST26l4ZHVVX85gOY
HYZl8k0+m8X3zxeYm9a3QAoAml8sfoaFRFQP8ynnefRrLUPaZ2MjbJ0SACMwZNef
T6o7Mi8LBAaiNZdYVyIfX1oM6YXtqYkuJdav6ZCyvVYqc9OvMJPY2ZzJYuI/ZtvQ
/lTndxCeg9ALNX/iezOLGdfMpf4HuIFVwcPPlwGi+HDlB9/bggDEHC8z434SXVFc
aQatXAPcDkjMUweU7y0CZtYEj00HITd4pSX6MqGiHrxlDZTqinCOPs1Ieqp7qufs
MzlM6irLGucxj1+wa16ieyYvEtGaPIsksUKkywx0O7cf8N2qKg+eIkUk6O0Uc6eO
CszizmiXIXy4O6OiLlVHGKkXHMSW9Nwe9GE95O8G9WR8OZCEuDv+mHPAutO+IjdP
PDAAUvy+3XnkceO+HGWRpVvJZfFP2YH4A33InFL5yqlJmSoR/yVingGLxk55bZDM
+HYGR3VeMb8Xj1rf/02qERsZyccMCFdAvKDbTwmvglyHdVLu5sPmktxbBYiemfyJ
qxMxmYXCc9S0hWrWZW7edktBa9NpE58z1mx+hRIrDNbS2sDHrib9PULYCySyVYcF
P+PWEe1CAS5jqkR2ker5td2/pHNnJIycynBEs7l6zbc9fu+nktFJz0q2B+GJAhwE
EAEKAAYFAlUwGaQACgkQ+tiY1qQ2QkjMFw//f2hNY3BPNe+1qbhzumMDCnbTnGif
kLuAGl9OKt81VHG1f6RnaGiLpR696+6Ja45KzH15cQ5JJl5Bgs1YkR/noTGX8IAD
c70eNwiFu8JXTaaeeJrsmFkF9Tueufb364risYkvPP8tNUD3InBFEZT3WN7JKwix
coD4/BwekUwOZVDd/uCFEyhlhZsROxdKNisNo3VtAq2s+3tIBAmTrriFUl0K+ZC5
zgavcpnPN57zMtW9aK+VO3wXqAKYLYmtgxkVzSLUZt2M7JuwOaAdyuYWAneKZPCu
1AXkmyo+d84sd5mZaKOr5xArAFiNMWPUcZL4rkS1Fq4dKtGAqzzR7a7hWtA5o27T
6vynuxZ1n0PPh0er2O/zF4znIjm5RhTlfjp/VmhZdQfpulFEQ/dMxxGkQ9z5IYbX
mTlSDbCSb+FMsanRBJ7Drp5EmBIudVGY6SHI5Re1RQiEh7GoDfUMUwZO+TVDII5R
Ra7WyuimYleJgDo/+7HyfuIyGDaUCVj6pwVtYtYIdOI3tTw1R1Mr0V8yaNVnJghL
CHcEJQL+YHSmiMM3ySil3O6tm1By6lFz8bVe/rgG/5uklQrnjMR37jYboi1orCC4
yeIoQeV0ItlxeTyBwYIV/o1DBNxDevTZvJabC93WiGLw2XFjpZ0q/9+zI2rJUZJh
qxmKP+D4e27lCI65Ag0EVTAYdgEQAMvttYNqeRNBRpSX8fk45WVIV8Fb21fWdwk6
2SkZnJURbiC0LxQnOi7wrtii7DeFZtwM2kFHihS1VHekBnIKKZQSgGoKuFAQMGyu
a426H4ZsSmA9Ufd7kRbvdtEcp7/RTAanhrSL4lkBhaKJrXlxBJ27o3nd7/rh7r3a
OszbPY6DJ5bWClX3KooPTDl/RF2lHn+fweFk58UvuunHIyo4BWJUdilSXIjLun+P
Qaik4ZAsZVwNhdNz05d+vtai4AwbYoO7adboMLRkYaXSQwGytkm+fM6r7OpXHYuS
cR4zB/OK5hxCVEpWfiwN71N2NMvnEMaWd/9uhqxJzyvYgkVUXV9274TUe16pzXnW
ZLfmitjwc91e7mJBBfKNenDdhaLEIlDRwKTLj7k58f9srpMnyZFacntu5pUMNblB
cjXwWxz5ZaQikLnKYhIvrIEwtWPyjqOzNXNvYfZamve/LJ8HmWGCKao3QHoAIDvB
9XBxrDyTJDpxbog6Qu4SY8AdgVlan6c/PsLDc7EUegeYiNTzsOK+eq3G5/E92eIu
TsUXlciypFcRm1q8vLRr+HYYe2mJDo4GetB1zLkAFBcYJm/x9iJQbu0hn5NxJvZO
R0Y5nOJQdyi+muJzKYwhkuzaOlswzqVXkq/7+QCjg7QsycdcwDjiQh3OrsgXHrwl
M7gyafL9ABEBAAGJAh8EGAEKAAkFAlUwGHYCGwwACgkQzUmgr8V5Kb50BxAAhj9T
TwmNrgRldTHszj+Qc+v8RWqV6j+R+zc0cn5XlUa6XFaXI1OFFg71H4dhCPEiYeN0
IrnocyMNvCol+eKIlPKbPTmoixjQ4udPTR1DC1Bx1MyW5FqOrsgBl5t0e1VwEViM
NspSStxu5Hsr6oWz2GD48lXZWJOgoL1RLs+uxjcyjySD/em2fOKASwchYmI+ezRv
plfhAFIMKTSCN2pgVTEOaaz13M0U+MoprThqF1LWzkGkkC7n/1V1f5tn83BWiagG
2N2Q4tHLfyouzMUKnX28kQ9sXfxwmYb2sA9FNIgxy+TdKU2ofLxivoWT8zS189z/
Yj9fErmiMjns2FzEDX+bipAw55X4D/RsaFgC+2x2PDbxeQh6JalRA2Wjq32Ouubx
u+I4QhEDJIcVwt9x6LPDuos1F+M5QW0AiUhKrZJ17UrxOtaquh/nPUL9T3l2qPUn
1ChrZEEEhHO6vA8+jn0+cV9n5xEz30Str9iHnDQ5QyR5LyV4UBPgTdWyQzNVKA69
KsSr9lbHEtQFRzGuBKwt6UlSFv9vPWWJkJit5XDKAlcKuGXj0J8OlltToocGElkF
+gEBZfoOWi/IBjRLrFW2cT3p36DTR5O1Ud/1DLnWRqgWNBLrbs2/KMKE6EnHttyD
7Tz8SQkuxltX/yBXMV3Ddy0t6nWV2SZEfuxJAQI=
=spg4
-----END PGP PUBLIC KEY BLOCK-----
```
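For reporters unfamiliar with PGP, the following is a minimal sketch of importing the key and encrypting a report with the GnuPG command line (the keyserver choice and the `report.txt` file name are illustrative assumptions):
```
# Import the key by its long ID (or paste the block above into `gpg --import`).
gpg --keyserver keys.gnupg.net --recv-keys 0xCD49A0AFC57929BE
# Check that the fingerprint matches the one published in the table above.
gpg --fingerprint bugs@ripple.com
# Encrypt (and optionally sign) the report before mailing it.
gpg --encrypt --sign --armor -r bugs@ripple.com report.txt
```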
This policy is based on the National Cyber Security Centre's Responsible Disclosure Guidelines and an [example by Floor Terra](https://responsibledisclosure.nl).

View File

@@ -1,470 +0,0 @@
#!/usr/bin/node
//
// ledger?l=L
// transaction?h=H
// ledger_entry?l=L&h=H
// account?l=L&a=A
// directory?l=L&dir_root=H&i=I
// directory?l=L&o=A&i=I // owner directory
// offer?l=L&offer=H
// offer?l=L&account=A&i=I
// ripple_state=l=L&a=A&b=A&c=C
// account_lines?l=L&a=A
//
// A=address
// C=currency 3 letter code
// H=hash
// I=index
// L=current | closed | validated | index | hash
//
var async = require("async");
var extend = require("extend");
var http = require("http");
var url = require("url");
var Remote = require("ripple-lib").Remote;
var program = process.argv[1];
var httpd_response = function (res, opts) {
var self=this;
res.statusCode = opts.statusCode;
res.end(
"<HTML>"
+ "<HEAD><TITLE>Title</TITLE></HEAD>"
+ "<BODY BACKGROUND=\"#FFFFFF\">"
+ "State:" + self.state
+ "<UL>"
+ "<LI><A HREF=\"/\">home</A>"
+ "<LI>" + html_link('r4EM4gBQfr1QgQLXSPF4r7h84qE9mb6iCC')
// + "<LI><A HREF=\""+test+"\">rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh</A>"
+ "<LI><A HREF=\"/ledger\">ledger</A>"
+ "</UL>"
+ (opts.body || '')
+ '<HR><PRE>'
+ (opts.url || '')
+ '</PRE>'
+ "</BODY>"
+ "</HTML>"
);
};
var html_link = function (generic) {
return '<A HREF="' + build_uri({ type: 'account', account: generic}) + '">' + generic + '</A>';
};
// Build a link to a type.
var build_uri = function (params, opts) {
var c;
if (params.type === 'account') {
c = {
pathname: 'account',
query: {
l: params.ledger,
a: params.account,
},
};
} else if (params.type === 'ledger') {
c = {
pathname: 'ledger',
query: {
l: params.ledger,
},
};
} else if (params.type === 'transaction') {
c = {
pathname: 'transaction',
query: {
h: params.hash,
},
};
} else {
c = {};
}
opts = opts || {};
c.protocol = "http";
c.hostname = opts.hostname || self.base.hostname;
c.port = opts.port || self.base.port;
return url.format(c);
};
var build_link = function (item, link) {
console.log(link);
return "<A HREF=" + link + ">" + item + "</A>";
};
var rewrite_field = function (type, obj, field, opts) {
if (field in obj) {
obj[field] = rewrite_type(type, obj[field], opts);
}
};
var rewrite_type = function (type, obj, opts) {
if ('amount' === type) {
if ('string' === typeof obj) {
// XRP.
return '<B>' + obj + '</B>';
} else {
rewrite_field('address', obj, 'issuer', opts);
return obj;
}
}
if ('address' === type) {
return build_link(
obj,
build_uri({
type: 'account',
account: obj
}, opts)
);
}
else if ('ledger' === type) {
return build_link(
obj,
build_uri({
type: 'ledger',
ledger: obj,
}, opts)
);
}
else if ('node' === type) {
// A node
if ('PreviousTxnID' in obj)
obj.PreviousTxnID = rewrite_type('transaction', obj.PreviousTxnID, opts);
if ('Offer' === obj.LedgerEntryType) {
if ('NewFields' in obj) {
if ('TakerGets' in obj.NewFields)
obj.NewFields.TakerGets = rewrite_type('amount', obj.NewFields.TakerGets, opts);
if ('TakerPays' in obj.NewFields)
obj.NewFields.TakerPays = rewrite_type('amount', obj.NewFields.TakerPays, opts);
}
}
obj.LedgerEntryType = '<B>' + obj.LedgerEntryType + '</B>';
return obj;
}
else if ('transaction' === type) {
// Reference to a transaction.
return build_link(
obj,
build_uri({
type: 'transaction',
hash: obj
}, opts)
);
}
return 'ERROR: ' + type;
};
var rewrite_object = function (obj, opts) {
var out = extend({}, obj);
rewrite_field('address', out, 'Account', opts);
rewrite_field('ledger', out, 'parent_hash', opts);
rewrite_field('ledger', out, 'ledger_index', opts);
rewrite_field('ledger', out, 'ledger_current_index', opts);
rewrite_field('ledger', out, 'ledger_hash', opts);
if ('ledger' in obj) {
// It's a ledger header.
out.ledger = rewrite_object(out.ledger, opts);
if ('ledger_hash' in out.ledger)
out.ledger.ledger_hash = '<B>' + out.ledger.ledger_hash + '</B>';
delete out.ledger.hash;
delete out.ledger.totalCoins;
}
if ('TransactionType' in obj) {
// It's a transaction.
out.TransactionType = '<B>' + obj.TransactionType + '</B>';
rewrite_field('amount', out, 'TakerGets', opts);
rewrite_field('amount', out, 'TakerPays', opts);
rewrite_field('ledger', out, 'inLedger', opts);
out.meta.AffectedNodes = out.meta.AffectedNodes.map(function (node) {
var kind = 'CreatedNode' in node
? 'CreatedNode'
: 'ModifiedNode' in node
? 'ModifiedNode'
: 'DeletedNode' in node
? 'DeletedNode'
: undefined;
if (kind) {
node[kind] = rewrite_type('node', node[kind], opts);
}
return node;
});
}
else if ('node' in obj && 'LedgerEntryType' in obj.node) {
// It's a ledger entry.
if (obj.node.LedgerEntryType === 'AccountRoot') {
rewrite_field('address', out.node, 'Account', opts);
rewrite_field('transaction', out.node, 'PreviousTxnID', opts);
rewrite_field('ledger', out.node, 'PreviousTxnLgrSeq', opts);
}
out.node.LedgerEntryType = '<B>' + out.node.LedgerEntryType + '</B>';
}
return out;
};
var augment_object = function (obj, opts, done) {
if (obj.node.LedgerEntryType == 'AccountRoot') {
var tx_hash = obj.node.PreviousTxnID;
var tx_ledger = obj.node.PreviousTxnLgrSeq;
obj.history = [];
async.whilst(
function () { return tx_hash; },
function (callback) {
// console.log("augment_object: request: %s %s", tx_hash, tx_ledger);
opts.remote.request_tx(tx_hash)
.on('success', function (m) {
tx_hash = undefined;
tx_ledger = undefined;
//console.log("augment_object: ", JSON.stringify(m));
m.meta.AffectedNodes.filter(function(n) {
// console.log("augment_object: ", JSON.stringify(n));
// if (n.ModifiedNode)
// console.log("augment_object: %s %s %s %s %s %s/%s", 'ModifiedNode' in n, n.ModifiedNode && (n.ModifiedNode.LedgerEntryType === 'AccountRoot'), n.ModifiedNode && n.ModifiedNode.FinalFields && (n.ModifiedNode.FinalFields.Account === obj.node.Account), Object.keys(n)[0], n.ModifiedNode && (n.ModifiedNode.LedgerEntryType), obj.node.Account, n.ModifiedNode && n.ModifiedNode.FinalFields && n.ModifiedNode.FinalFields.Account);
// if ('ModifiedNode' in n && n.ModifiedNode.LedgerEntryType === 'AccountRoot')
// {
// console.log("***: ", JSON.stringify(m));
// console.log("***: ", JSON.stringify(n));
// }
return 'ModifiedNode' in n
&& n.ModifiedNode.LedgerEntryType === 'AccountRoot'
&& n.ModifiedNode.FinalFields
&& n.ModifiedNode.FinalFields.Account === obj.node.Account;
})
.forEach(function (n) {
tx_hash = n.ModifiedNode.PreviousTxnID;
tx_ledger = n.ModifiedNode.PreviousTxnLgrSeq;
obj.history.push({
tx_hash: tx_hash,
tx_ledger: tx_ledger
});
console.log("augment_object: next: %s %s", tx_hash, tx_ledger);
});
callback();
})
.on('error', function (m) {
callback(m);
})
.request();
},
function (err) {
if (err) {
done();
}
else {
async.forEach(obj.history, function (o, callback) {
opts.remote.request_account_info(obj.node.Account)
.ledger_index(o.tx_ledger)
.on('success', function (m) {
//console.log("augment_object: ", JSON.stringify(m));
o.Balance = m.account_data.Balance;
// o.account_data = m.account_data;
callback();
})
.on('error', function (m) {
o.error = m;
callback();
})
.request();
},
function (err) {
done(err);
});
}
});
}
else {
done();
}
};
if (process.argv.length < 4 || process.argv.length > 7) {
console.log("Usage: %s ws_ip ws_port [<ip> [<port> [<start>]]]", program);
}
else {
var ws_ip = process.argv[2];
var ws_port = process.argv[3];
var ip = process.argv.length > 4 ? process.argv[4] : "127.0.0.1";
var port = process.argv.length > 5 ? process.argv[5] : "8080";
// console.log("START");
var self = this;
var remote = (new Remote({
websocket_ip: ws_ip,
websocket_port: ws_port,
trace: false
}))
.on('state', function (m) {
console.log("STATE: %s", m);
self.state = m;
})
// .once('ledger_closed', callback)
.connect()
;
self.base = {
hostname: ip,
port: port,
remote: remote,
};
// console.log("SERVE");
var server = http.createServer(function (req, res) {
var input = "";
req.setEncoding();
req.on('data', function (buffer) {
// console.log("DATA: %s", buffer);
input = input + buffer;
});
req.on('end', function () {
// console.log("URL: %s", req.url);
// console.log("HEADERS: %s", JSON.stringify(req.headers, undefined, 2));
var _parsed = url.parse(req.url, true);
var _url = JSON.stringify(_parsed, undefined, 2);
// console.log("HEADERS: %s", JSON.stringify(_parsed, undefined, 2));
if (_parsed.pathname === "/account") {
var request = remote
.request_ledger_entry('account_root')
.ledger_index(-1)
.account_root(_parsed.query.a)
.on('success', function (m) {
// console.log("account_root: %s", JSON.stringify(m, undefined, 2));
augment_object(m, self.base, function() {
httpd_response(res,
{
statusCode: 200,
url: _url,
body: "<PRE>"
+ JSON.stringify(rewrite_object(m, self.base), undefined, 2)
+ "</PRE>"
});
});
})
.request();
} else if (_parsed.pathname === "/ledger") {
var request = remote
.request_ledger(undefined, { expand: true, transactions: true })
.on('success', function (m) {
// console.log("Ledger: %s", JSON.stringify(m, undefined, 2));
httpd_response(res,
{
statusCode: 200,
url: _url,
body: "<PRE>"
+ JSON.stringify(rewrite_object(m, self.base), undefined, 2)
+"</PRE>"
});
})
if (_parsed.query.l && _parsed.query.l.length === 64) {
request.ledger_hash(_parsed.query.l);
}
else if (_parsed.query.l) {
request.ledger_index(Number(_parsed.query.l));
}
else {
request.ledger_index(-1);
}
request.request();
} else if (_parsed.pathname === "/transaction") {
var request = remote
.request_tx(_parsed.query.h)
// .request_transaction_entry(_parsed.query.h)
// .ledger_select(_parsed.query.l)
.on('success', function (m) {
// console.log("transaction: %s", JSON.stringify(m, undefined, 2));
httpd_response(res,
{
statusCode: 200,
url: _url,
body: "<PRE>"
+ JSON.stringify(rewrite_object(m, self.base), undefined, 2)
+"</PRE>"
});
})
.on('error', function (m) {
httpd_response(res,
{
statusCode: 200,
url: _url,
body: "<PRE>"
+ 'ERROR: ' + JSON.stringify(m, undefined, 2)
+"</PRE>"
});
})
.request();
} else {
var test = build_uri({
type: 'account',
ledger: 'closed',
account: 'rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh',
}, self.base);
httpd_response(res,
{
statusCode: req.url === "/" ? 200 : 404,
url: _url,
});
}
});
});
server.listen(port, ip, undefined,
function () {
console.log("Listening at: http://%s:%s", ip, port);
});
}
// vim:sw=2:sts=2:ts=8:et
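As a usage sketch (the script's file name and the port numbers are assumptions, since neither appears above): the tool takes a rippled WebSocket endpoint plus an optional HTTP bind address, and is then browsed with the query parameters documented in the header comment.
```
# Serve on 127.0.0.1:8080, talking to rippled's WebSocket API on 6006.
node ledger-browser.js 127.0.0.1 6006 127.0.0.1 8080
# Fetch an account view from the validated ledger.
curl 'http://127.0.0.1:8080/account?l=validated&a=rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh'
```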

View File

@@ -1,24 +0,0 @@
In this directory are two scripts, `build.sh` and `test.sh` used for building
and testing rippled.
(For now, they assume Bash and Linux. Once I get Windows containers for
testing, I'll try them there; if Bash is not available, they will be joined
by PowerShell scripts `build.ps1` and `test.ps1`.)
We don't want these scripts to require arcane invocations that can only be
pieced together from within a CI configuration. We want something that humans
can easily invoke, read, and understand, for when we eventually have to test
and debug them interactively. That means:
(1) They should work with no arguments.
(2) They should document their arguments.
(3) They should expand short arguments into long arguments.
While we want to provide options for common use cases, we don't need to offer
the kitchen sink. We can rightfully expect users with esoteric, complicated
needs to write their own scripts.
To make argument-handling easy for us, the implementers, we can just take all
arguments from environment variables. They have the nice advantage that every
command-line uses named arguments. For the benefit of us and our users, we
document those variables at the top of each script.
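Following that convention, a typical invocation sets the documented variables inline on the command line; a sketch with illustrative values:
```
GENERATOR=Ninja COMPILER=clang BUILD_TYPE=Release ./build.sh
```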

View File

@@ -1,31 +0,0 @@
#!/usr/bin/env bash
set -o xtrace
set -o errexit
# The build system. Either 'Unix Makefiles' or 'Ninja'.
GENERATOR=${GENERATOR:-Unix Makefiles}
# The compiler. Either 'gcc' or 'clang'.
COMPILER=${COMPILER:-gcc}
# The build type. Either 'Debug' or 'Release'.
BUILD_TYPE=${BUILD_TYPE:-Debug}
# Additional arguments to CMake.
# We use the `-` substitution here instead of `:-` so that callers can erase
# the default by setting `$CMAKE_ARGS` to the empty string.
CMAKE_ARGS=${CMAKE_ARGS-'-Dwerr=ON'}
# https://gitlab.kitware.com/cmake/cmake/issues/18865
CMAKE_ARGS="-DBoost_NO_BOOST_CMAKE=ON ${CMAKE_ARGS}"
if [[ ${COMPILER} == 'gcc' ]]; then
export CC='gcc'
export CXX='g++'
elif [[ ${COMPILER} == 'clang' ]]; then
export CC='clang'
export CXX='clang++'
fi
mkdir build
cd build
cmake -G "${GENERATOR}" -DCMAKE_BUILD_TYPE=${BUILD_TYPE} ${CMAKE_ARGS} ..
cmake --build . -- -j $(nproc)
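The `-` versus `:-` distinction noted in the comments can be checked directly in any POSIX-ish shell:
```
unset CMAKE_ARGS
echo "${CMAKE_ARGS-default}"    # default        (variable is unset)
CMAKE_ARGS=''
echo "${CMAKE_ARGS-default}"    # (empty line)   (set-but-empty wins with `-`)
echo "${CMAKE_ARGS:-default}"   # default        (`:-` also replaces empty)
```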

View File

@@ -1,41 +0,0 @@
#!/usr/bin/env bash
set -o xtrace
set -o errexit
# Set to 'true' to run the known "manual" tests in rippled.
MANUAL_TESTS=${MANUAL_TESTS:-false}
# The maximum number of concurrent tests.
CONCURRENT_TESTS=${CONCURRENT_TESTS:-$(nproc)}
# The path to rippled.
RIPPLED=${RIPPLED:-build/rippled}
# Additional arguments to rippled.
RIPPLED_ARGS=${RIPPLED_ARGS:-}
function join_by { local IFS="$1"; shift; echo "$*"; }
declare -a manual_tests=(
'beast.chrono.abstract_clock'
'beast.unit_test.print'
'ripple.NodeStore.Timing'
'ripple.app.Flow_manual'
'ripple.app.NoRippleCheckLimits'
'ripple.app.PayStrandAllPairs'
'ripple.consensus.ByzantineFailureSim'
'ripple.consensus.DistributedValidators'
'ripple.consensus.ScaleFreeSim'
'ripple.tx.CrossingLimits'
'ripple.tx.FindOversizeCross'
'ripple.tx.Offer_manual'
'ripple.tx.OversizeMeta'
'ripple.tx.PlumpBook'
)
if [[ ${MANUAL_TESTS} == 'true' ]]; then
RIPPLED_ARGS+=" --unittest=$(join_by , "${manual_tests[@]}")"
else
RIPPLED_ARGS+=" --unittest --quiet --unittest-log"
fi
RIPPLED_ARGS+=" --unittest-jobs ${CONCURRENT_TESTS}"
${RIPPLED} ${RIPPLED_ARGS}
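A sketch of running the manual test list above against an existing build (the paths are the script's own defaults):
```
MANUAL_TESTS=true RIPPLED=build/rippled CONCURRENT_TESTS=4 ./test.sh
```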

View File

@@ -1,274 +0,0 @@
#!/usr/bin/env bash
set -ex
function version_ge() { test "$(echo "$@" | tr " " "\n" | sort -rV | head -n 1)" == "$1"; }
__dirname=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
echo "using CC: ${CC}"
"${CC}" --version
export CC
COMPNAME=$(basename $CC)
echo "using CXX: ${CXX:-notset}"
if [[ $CXX ]]; then
"${CXX}" --version
export CXX
fi
: ${BUILD_TYPE:=Debug}
echo "BUILD TYPE: ${BUILD_TYPE}"
: ${TARGET:=install}
echo "BUILD TARGET: ${TARGET}"
JOBS=${NUM_PROCESSORS:-2}
if [[ ${TRAVIS:-false} != "true" ]]; then
JOBS=$((JOBS+1))
fi
if [[ ! -z "${CMAKE_EXE:-}" ]] ; then
export PATH="$(dirname ${CMAKE_EXE}):$PATH"
fi
if [ -x /usr/bin/time ] ; then
: ${TIME:="Duration: %E"}
export TIME
time=/usr/bin/time
else
time=
fi
echo "Building rippled"
: ${CMAKE_EXTRA_ARGS:=""}
if [[ ${NINJA_BUILD:-} == true ]]; then
CMAKE_EXTRA_ARGS+=" -G Ninja"
fi
coverage=false
if [[ "${TARGET}" == "coverage_report" ]] ; then
echo "coverage option detected."
coverage=true
fi
cmake --version
CMAKE_VER=$(cmake --version | cut -d " " -f 3 | head -1)
#
# allow explicit setting of the name of the build
# dir, otherwise default to the compiler.build_type
#
: "${BUILD_DIR:=${COMPNAME}.${BUILD_TYPE}}"
BUILDARGS="--target ${TARGET}"
BUILDTOOLARGS=""
if version_ge $CMAKE_VER "3.12.0" ; then
BUILDARGS+=" --parallel"
fi
if [[ ${NINJA_BUILD:-} == false ]]; then
if version_ge $CMAKE_VER "3.12.0" ; then
BUILDARGS+=" ${JOBS}"
else
BUILDTOOLARGS+=" -j ${JOBS}"
fi
fi
if [[ ${VERBOSE_BUILD:-} == true ]]; then
CMAKE_EXTRA_ARGS+=" -DCMAKE_VERBOSE_MAKEFILE=ON"
if version_ge $CMAKE_VER "3.14.0" ; then
BUILDARGS+=" --verbose"
else
if [[ ${NINJA_BUILD:-} == false ]]; then
BUILDTOOLARGS+=" verbose=1"
else
BUILDTOOLARGS+=" -v"
fi
fi
fi
if [[ ${USE_CCACHE:-} == true ]]; then
echo "using ccache with basedir [${CCACHE_BASEDIR:-}]"
CMAKE_EXTRA_ARGS+=" -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache"
fi
if [ -d "build/${BUILD_DIR}" ]; then
rm -rf "build/${BUILD_DIR}"
fi
mkdir -p "build/${BUILD_DIR}"
pushd "build/${BUILD_DIR}"
# cleanup possible artifacts
rm -fv CMakeFiles/CMakeOutput.log CMakeFiles/CMakeError.log
# Clean up NIH directories which should be git repos, but aren't
for nih_path in ${NIH_CACHE_ROOT}/*/*/*/src ${NIH_CACHE_ROOT}/*/*/src
do
for dir in lz4 snappy rocksdb
do
if [ -e ${nih_path}/${dir} -a \! -e ${nih_path}/${dir}/.git ]
then
ls -la ${nih_path}/${dir}*
rm -rfv ${nih_path}/${dir}*
fi
done
done
# generate
${time} cmake ../.. -DCMAKE_BUILD_TYPE=${BUILD_TYPE} ${CMAKE_EXTRA_ARGS}
# Display the cmake output, to help with debugging if something fails
for file in CMakeOutput.log CMakeError.log
do
if [ -f CMakeFiles/${file} ]
then
ls -l CMakeFiles/${file}
cat CMakeFiles/${file}
fi
done
# build
export DESTDIR=$(pwd)/_INSTALLED_
${time} eval cmake --build . ${BUILDARGS} -- ${BUILDTOOLARGS}
if [[ ${TARGET} == "docs" ]]; then
## mimic the standard test output for docs build
## to make controlling processes like jenkins happy
if [ -f docs/html/index.html ]; then
echo "1 case, 1 test total, 0 failures"
else
echo "1 case, 1 test total, 1 failures"
fi
exit
fi
popd
if [[ "${TARGET}" == "validator-keys" ]] ; then
export APP_PATH="$PWD/build/${BUILD_DIR}/validator-keys/validator-keys"
else
export APP_PATH="$PWD/build/${BUILD_DIR}/rippled"
fi
echo "using APP_PATH: ${APP_PATH}"
# See what we've actually built
ldd ${APP_PATH}
: ${APP_ARGS:=}
if [[ "${TARGET}" == "validator-keys" ]] ; then
APP_ARGS="--unittest"
else
function join_by { local IFS="$1"; shift; echo "$*"; }
# This is a list of manual tests
# in rippled that we want to run
# ORDER matters here...sorted in approximately
# descending execution time (longest running tests at top)
declare -a manual_tests=(
'ripple.ripple_data.reduce_relay_simulate'
'ripple.tx.Offer_manual'
'ripple.tx.CrossingLimits'
'ripple.tx.PlumpBook'
'ripple.app.Flow_manual'
'ripple.tx.OversizeMeta'
'ripple.consensus.DistributedValidators'
'ripple.app.NoRippleCheckLimits'
'ripple.ripple_data.compression'
'ripple.NodeStore.Timing'
'ripple.consensus.ByzantineFailureSim'
'beast.chrono.abstract_clock'
'beast.unit_test.print'
)
if [[ ${TRAVIS:-false} != "true" ]]; then
# these two tests cause travis CI to run out of memory.
# TODO: investigate possible workarounds.
manual_tests=(
'ripple.consensus.ScaleFreeSim'
'ripple.tx.FindOversizeCross'
"${manual_tests[@]}"
)
fi
if [[ ${MANUAL_TESTS:-} == true ]]; then
APP_ARGS+=" --unittest=$(join_by , "${manual_tests[@]}")"
else
APP_ARGS+=" --unittest --quiet --unittest-log"
fi
if [[ ${coverage} == false && ${PARALLEL_TESTS:-} == true ]]; then
APP_ARGS+=" --unittest-jobs ${JOBS}"
fi
if [[ ${IPV6_TESTS:-} == true ]]; then
APP_ARGS+=" --unittest-ipv6"
fi
fi
if [[ ${coverage} == true && $CC =~ ^gcc ]]; then
# Push the results (lcov.info) to codecov
codecov -X gcov # don't even try and look for .gcov files ;)
find . -name "*.gcda" | xargs rm -f
fi
if [[ ${SKIP_TESTS:-} == true ]]; then
echo "skipping tests."
exit
fi
ulimit -a
corepat=$(cat /proc/sys/kernel/core_pattern)
if [[ ${corepat} =~ ^[[:space:]]*\| ]] ; then
echo "WARNING: core pattern is piping - can't search for core files"
look_core=false
else
look_core=true
coredir=$(dirname ${corepat})
fi
if [[ ${look_core} == true ]]; then
before=$(ls -A1 ${coredir})
fi
set +e
echo "Running tests for ${APP_PATH}"
if [[ ${MANUAL_TESTS:-} == true && ${PARALLEL_TESTS:-} != true ]]; then
for t in "${manual_tests[@]}" ; do
${APP_PATH} --unittest=${t}
TEST_STAT=$?
if [[ $TEST_STAT -ne 0 ]] ; then
break
fi
done
else
${APP_PATH} ${APP_ARGS}
TEST_STAT=$?
fi
set -e
if [[ ${look_core} == true ]]; then
after=$(ls -A1 ${coredir})
oIFS="${IFS}"
IFS=$'\n\r'
found_core=false
for l in $(diff -w --suppress-common-lines <(echo "$before") <(echo "$after")) ; do
if [[ "$l" =~ ^[[:space:]]*\>[[:space:]]*(.+)$ ]] ; then
corefile="${BASH_REMATCH[1]}"
echo "FOUND core dump file at '${coredir}/${corefile}'"
gdb_output=$(/bin/mktemp /tmp/gdb_output_XXXXXXXXXX.txt)
found_core=true
gdb \
-ex "set height 0" \
-ex "set logging file ${gdb_output}" \
-ex "set logging on" \
-ex "print 'ripple::BuildInfo::versionString'" \
-ex "thread apply all backtrace full" \
-ex "info inferiors" \
-ex quit \
"$APP_PATH" \
"${coredir}/${corefile}" &> /dev/null
echo -e "CORE INFO: \n\n $(cat ${gdb_output}) \n\n)"
fi
done
IFS="${oIFS}"
fi
if [[ ${found_core} == true ]]; then
exit 1
else
exit $TEST_STAT
fi
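A plausible invocation with the main knobs this script reads set inline (the values are illustrative; the path matches the one used by the Docker wrapper below):
```
CC=gcc CXX=g++ BUILD_TYPE=Release TARGET=install \
  NINJA_BUILD=true PARALLEL_TESTS=true NUM_PROCESSORS=4 \
  bin/ci/ubuntu/build-and-test.sh
```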

View File

@@ -1,36 +0,0 @@
#!/usr/bin/env bash
# run our build script in a docker container
# using travis-ci hosts
set -eux
function join_by { local IFS="$1"; shift; echo "$*"; }
set +x
echo "VERBOSE_BUILD=true" > /tmp/co.env
matchers=(
'TRAVIS.*' 'CI' 'CC' 'CXX'
'BUILD_TYPE' 'TARGET' 'MAX_TIME'
'CODECOV.+' 'CMAKE.*' '.+_TESTS'
'.+_OPTIONS' 'NINJA.*' 'NUM_.+'
'NIH_.+' 'BOOST.*' '.*CCACHE.*')
matchstring=$(join_by '|' "${matchers[@]}")
echo "MATCHSTRING IS:: $matchstring"
env | grep -E "^(${matchstring})=" >> /tmp/co.env
set -x
# need to eliminate TRAVIS_CMD...don't want to pass it to the container
cat /tmp/co.env | grep -v TRAVIS_CMD > /tmp/co.env.2
mv /tmp/co.env.2 /tmp/co.env
cat /tmp/co.env
mkdir -p -m 0777 ${TRAVIS_BUILD_DIR}/cores
echo "${TRAVIS_BUILD_DIR}/cores/%e.%p" | sudo tee /proc/sys/kernel/core_pattern
docker run \
-t --env-file /tmp/co.env \
-v ${TRAVIS_HOME}:${TRAVIS_HOME} \
-w ${TRAVIS_BUILD_DIR} \
--cap-add SYS_PTRACE \
--ulimit "core=-1" \
$DOCKER_IMAGE \
/bin/bash -c 'if [[ $CC =~ ([[:alpha:]]+)-([[:digit:].]+) ]] ; then sudo update-alternatives --set ${BASH_REMATCH[1]} /usr/bin/$CC; fi; bin/ci/ubuntu/build-and-test.sh'
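The allow-list filtering of environment variables used above can be reproduced in isolation; a minimal sketch of the same pattern:
```
matchers=('CC' 'CXX' 'BUILD_TYPE' 'NINJA.*')
matchstring=$(IFS='|'; echo "${matchers[*]}")   # join with '|'
env | grep -E "^(${matchstring})=" > /tmp/co.env
cat /tmp/co.env
```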

View File

@@ -1,44 +0,0 @@
#!/usr/bin/env bash
# some cached files create churn, so save them here for
# later restoration before packing the cache
set -eux
clean_cache="travis_clean_cache"
if [[ ! ( "${TRAVIS_JOB_NAME}" =~ "windows" || \
"${TRAVIS_JOB_NAME}" =~ "prereq-keep" ) ]] && \
( [[ "${TRAVIS_COMMIT_MESSAGE}" =~ "${clean_cache}" ]] || \
( [[ -v TRAVIS_PULL_REQUEST_SHA && \
"${TRAVIS_PULL_REQUEST_SHA}" != "" ]] && \
git log -1 "${TRAVIS_PULL_REQUEST_SHA}" | grep -cq "${clean_cache}" -
)
)
then
find ${TRAVIS_HOME}/_cache -maxdepth 2 -type d
rm -rf ${TRAVIS_HOME}/_cache
mkdir -p ${TRAVIS_HOME}/_cache
fi
pushd ${TRAVIS_HOME}
if [ -f cache_ignore.tar ] ; then
rm -f cache_ignore.tar
fi
if [ -d _cache/nih_c ] ; then
find _cache/nih_c -name "build.ninja" | tar rf cache_ignore.tar --files-from -
find _cache/nih_c -name ".ninja_deps" | tar rf cache_ignore.tar --files-from -
find _cache/nih_c -name ".ninja_log" | tar rf cache_ignore.tar --files-from -
find _cache/nih_c -name "*.log" | tar rf cache_ignore.tar --files-from -
find _cache/nih_c -name "*.tlog" | tar rf cache_ignore.tar --files-from -
# show .a files in the cache, for sanity checking
find _cache/nih_c -name "*.a" -ls
fi
if [ -d _cache/ccache ] ; then
find _cache/ccache -name "stats" | tar rf cache_ignore.tar --files-from -
fi
if [ -f cache_ignore.tar ] ; then
tar -tf cache_ignore.tar
fi
popd

View File

@@ -1,64 +0,0 @@
var ripple = require('ripple-lib');
var v = {
seed: "snoPBrXtMeMyMHUVTgbuqAfg1SUTb",
addr: "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"
};
var remote = ripple.Remote.from_config({
"trusted" : true,
"websocket_ip" : "127.0.0.1",
"websocket_port" : 5006,
"websocket_ssl" : false,
"local_signing" : true
});
var tx_json = {
"Account" : v.addr,
"Amount" : "10000000",
"Destination" : "rEu2ULPiEQm1BAL8pYzmXnNX1aFX9sCks",
"Fee" : "10",
"Flags" : 0,
"Sequence" : 3,
"TransactionType" : "Payment"
//"SigningPubKey": '0396941B22791A448E5877A44CE98434DB217D6FB97D63F0DAD23BE49ED45173C9'
};
remote.on('connected', function () {
var req = remote.request_sign(v.seed, tx_json);
req.message.debug_signing = true;
req.on('success', function (result) {
console.log("SERVER RESULT");
console.log(result);
var sim = {};
var tx = remote.transaction();
tx.tx_json = tx_json;
tx._secret = v.seed;
tx.complete();
var unsigned = tx.serialize().to_hex();
tx.sign();
sim.tx_blob = tx.serialize().to_hex();
sim.tx_json = tx.tx_json;
sim.tx_signing_hash = tx.signing_hash().to_hex();
sim.tx_unsigned = unsigned;
console.log("\nLOCAL RESULT");
console.log(sim);
remote.connect(false);
});
req.on('error', function (err) {
if (err.error === "remoteError" && err.remote.error === "srcActNotFound") {
console.log("Please fund account "+v.addr+" to run this test.");
} else {
console.log('error', err);
}
remote.connect(false);
});
req.request();
});
remote.connect();

View File

@@ -1,18 +0,0 @@
#!/usr/bin/node
//
// Returns a Gravatar style hash as per: http://en.gravatar.com/site/implement/hash/
//
if (3 != process.argv.length) {
process.stderr.write("Usage: " + process.argv[1] + " email_address\n\nReturns gravatar style hash.\n");
process.exit(1);
} else {
var md5 = require('crypto').createHash('md5');
md5.update(process.argv[2].trim().toLowerCase());
process.stdout.write(md5.digest('hex') + "\n");
}
// vim:sw=2:sts=2:ts=8:et
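The same trim-lowercase-md5 pipeline can be reproduced in a shell to cross-check the script's output (assumes GNU coreutils `md5sum`):
```
email='  John.Doe@Example.com  '
trimmed=$(printf '%s' "$email" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')
printf '%s' "$trimmed" | tr '[:upper:]' '[:lower:]' | md5sum | awk '{print $1}'
```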

View File

@@ -1,31 +0,0 @@
#!/usr/bin/node
//
// This program allows IE 9 ripple-clients to make websocket connections to
// rippled using flash. As IE 9 does not have websocket support, this is
// required if you wish to support IE 9 ripple-clients.
//
// http://www.lightsphere.com/dev/articles/flash_socket_policy.html
//
// For better security, be sure to set the Port below to the port of your
// [websocket_public_port].
//
var net = require("net"),
port = "*",
domains = ["*:"+port]; // Domain:Port
net.createServer(
function(socket) {
socket.write("<?xml version='1.0' ?>\n");
socket.write("<!DOCTYPE cross-domain-policy SYSTEM 'http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd'>\n");
socket.write("<cross-domain-policy>\n");
domains.forEach(
function(domain) {
var parts = domain.split(':');
socket.write("\t<allow-access-from domain='" + parts[0] + "' to-ports='" + parts[1] + "' />\n");
}
);
socket.write("</cross-domain-policy>\n");
socket.end();
}
).listen(843);
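Once running, the service can be smoke-tested from a shell. Flash clients send a null-terminated `<policy-file-request/>`, although this particular server writes the policy to any connection; the netcat invocation is an assumption:
```
printf '<policy-file-request/>\0' | nc 127.0.0.1 843
```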

View File

@@ -1,150 +0,0 @@
#!/usr/bin/env bash
# This script generates information about your rippled installation
# and system. It can be used to help debug issues that you may face
# in your installation. While this script endeavors to not display any
# sensitive information, it is recommended that you read the output
# before sharing with any third parties.
rippled_exe=/opt/ripple/bin/rippled
conf_file=/etc/opt/ripple/rippled.cfg
while getopts ":e:c:" opt; do
case $opt in
e)
rippled_exe=${OPTARG}
;;
c)
conf_file=${OPTARG}
;;
\?)
echo "Invalid option: -$OPTARG"
exit 1
esac
done
tmp_loc=$(mktemp -d --tmpdir ripple_info.XXXXX)
chmod 751 ${tmp_loc}
awk_prog=${tmp_loc}/cfg.awk
summary_out=${tmp_loc}/rippled_info.md
printf "# rippled report info\n\n> generated at %s\n" "$(date -R)" > ${summary_out}
function log_section {
printf "\n## %s\n" "$*" >> ${summary_out}
while read -r l; do
echo " $l" >> ${summary_out}
done </dev/stdin
}
function join_by {
local IFS="$1"; shift; echo "$*";
}
if [[ -f ${conf_file} ]] ; then
exclude=( ips ips_fixed node_seed validation_seed validator_token )
cleaned_conf=${tmp_loc}/cleaned_rippled_cfg.txt
cat << 'EOP' >> ${awk_prog}
BEGIN {FS="[[:space:]]*=[[:space:]]*"; skip=0; db_path=""; print > OUT_FILE; split(exl,exa,"|")}
/^#/ {next}
save==2 && /^[[:space:]]*$/ {next}
/^\[.+\]$/ {
section=tolower(gensub(/^\[[[:space:]]*([a-zA-Z_]+)[[:space:]]*\]$/, "\\1", "g"))
skip = 0
for (i in exa) {
if (section == exa[i])
skip = 1
}
if (section == "database_path")
save = 1
}
skip==1 {next}
save==2 {save=0; db_path=$0}
save==1 {save=2}
$1 ~ /password/ {$0=$1"=<redacted>"}
{print >> OUT_FILE}
END {print db_path}
EOP
db=$(\
sed -r -e 's/\<s[[:alnum:]]{28}\>/<redactedsecret>/g;s/^[[:space:]]*//;s/[[:space:]]*$//' ${conf_file} |\
awk -v OUT_FILE=${cleaned_conf} -v exl="$(join_by '|' "${exclude[@]}")" -f ${awk_prog})
rm ${awk_prog}
cat ${cleaned_conf} | log_section "cleaned config file"
rm ${cleaned_conf}
echo "${db}" | log_section "database path"
df ${db} | log_section "df: database"
fi
# Send output from this script to a log file
## this captures any messages
## or errors from the script itself
log_file=${tmp_loc}/get_info.log
exec 3>&1 1>>${log_file} 2>&1
## Send all stdout files to /tmp
if [[ -x ${rippled_exe} ]] ; then
pgrep rippled && \
${rippled_exe} --conf ${conf_file} \
-- server_info | log_section "server info"
fi
cat /proc/meminfo | log_section "meminfo"
cat /proc/swaps | log_section "swap space"
ulimit -a | log_section "ulimit"
if command -v lshw >/dev/null 2>&1 ; then
lshw 2>/dev/null | log_section "hardware info"
else
lscpu > ${tmp_loc}/hw_info.txt
hwinfo >> ${tmp_loc}/hw_info.txt
lspci >> ${tmp_loc}/hw_info.txt
lsblk >> ${tmp_loc}/hw_info.txt
cat ${tmp_loc}/hw_info.txt | log_section "hardware info"
rm ${tmp_loc}/hw_info.txt
fi
if command -v iostat >/dev/null 2>&1 ; then
iostat -t -d -x 2 6 | log_section "iostat"
fi
df -h | log_section "free disk space"
drives=($(df | awk '$1 ~ /^\/dev\// {print $1}' | xargs -n 1 basename))
block_devs=($(ls /sys/block/))
for d in "${drives[@]}"; do
for dev in "${block_devs[@]}"; do
#echo "D: [$d], DEV: [$dev]"
if [[ $d =~ $dev ]]; then
# this file (if exists) has 0 for SSD and 1 for HDD
if [[ "$(cat /sys/block/${dev}/queue/rotational 2>/dev/null)" == 0 ]] ; then
echo "${d} : SSD" >> ${tmp_loc}/is_ssd.txt
else
echo "${d} : NO SSD" >> ${tmp_loc}/is_ssd.txt
fi
fi
done
done
if [[ -f ${tmp_loc}/is_ssd.txt ]] ; then
cat ${tmp_loc}/is_ssd.txt | log_section "SSD"
rm ${tmp_loc}/is_ssd.txt
fi
cat ${log_file} | log_section "script log"
cat << MSG | tee /dev/fd/3
####################################################
rippled info has been gathered. Please copy the
contents of ${summary_out}
to a github gist at https://gist.github.com/
PLEASE REVIEW THIS FILE FOR ANY SENSITIVE DATA
BEFORE POSTING! We have tried our best to omit
any sensitive information from this file, but you
should verify before posting.
####################################################
MSG
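A sketch of invoking the script with non-default paths via its `-e` and `-c` options (the file name is an assumption; the paths are the script's own defaults):
```
sudo ./getRippledInfo.sh -e /opt/ripple/bin/rippled -c /etc/opt/ripple/rippled.cfg
```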

View File

@@ -1,23 +0,0 @@
#!/usr/bin/node
//
// Returns hex of lowercasing a string.
//
var stringToHex = function (s) {
return Array.prototype.map.call(s, function (c) {
var b = c.charCodeAt(0);
return b < 16 ? "0" + b.toString(16) : b.toString(16);
}).join("");
};
if (3 != process.argv.length) {
process.stderr.write("Usage: " + process.argv[1] + " string\n\nReturns hex of lowercasing string.\n");
process.exit(1);
} else {
process.stdout.write(stringToHex(process.argv[2].toLowerCase()) + "\n");
}
// vim:sw=2:sts=2:ts=8:et
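An equivalent shell one-liner, useful for cross-checking the output (assumes `xxd` is installed):
```
printf '%s' 'USD' | tr '[:upper:]' '[:lower:]' | xxd -p
# 757364
```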

View File

@@ -1,42 +0,0 @@
#!/usr/bin/node
//
// This is a tool to issue JSON-RPC requests from the command line.
//
// This can be used to test a JSON-RPC server.
//
// Requires: npm simple-jsonrpc
//
var jsonrpc = require('simple-jsonrpc');
var program = process.argv[1];
if (5 !== process.argv.length) {
console.log("Usage: %s <URL> <method> <json>", program);
}
else {
var url = process.argv[2];
var method = process.argv[3];
var json_raw = process.argv[4];
var json;
try {
json = JSON.parse(json_raw);
}
catch (e) {
console.log("JSON parse error: %s", e.message);
throw e;
}
var client = jsonrpc.client(url);
client.call(method, json,
function (result) {
console.log(JSON.stringify(result, undefined, 2));
},
function (error) {
console.log(JSON.stringify(error, undefined, 2));
});
}
// vim:sw=2:sts=2:ts=8:et
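The same kind of request can be issued with curl against a rippled-style JSON-RPC endpoint; a sketch (the URL, port, and method are illustrative assumptions):
```
curl -s -X POST http://127.0.0.1:5005/ \
  -H 'Content-Type: application/json' \
  -d '{"method": "server_info", "params": [{}]}'
```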

View File

@@ -1,68 +0,0 @@
#!/usr/bin/node
//
// This is a tool to listen for JSON-RPC requests at an IP and port.
//
// This will report the request to console and echo back the request as the response.
//
var http = require("http");
var program = process.argv[1];
if (4 !== process.argv.length) {
console.log("Usage: %s <ip> <port>", program);
}
else {
var ip = process.argv[2];
var port = process.argv[3];
var server = http.createServer(function (req, res) {
console.log("CONNECT");
var input = "";
req.setEncoding();
req.on('data', function (buffer) {
// console.log("DATA: %s", buffer);
input = input + buffer;
});
req.on('end', function () {
// console.log("END");
var json_req;
console.log("URL: %s", req.url);
console.log("HEADERS: %s", JSON.stringify(req.headers, undefined, 2));
try {
json_req = JSON.parse(input);
console.log("REQ: %s", JSON.stringify(json_req, undefined, 2));
}
catch (e) {
console.log("BAD JSON: %s", e.message);
json_req = { error : e.message }
}
res.statusCode = 200;
res.end(JSON.stringify({
jsonrpc: "2.0",
result: { request : json_req },
id: json_req.id
}));
});
req.on('close', function () {
console.log("CLOSE");
});
});
server.listen(port, ip, undefined,
function () {
console.log("Listening at: %s:%s", ip, port);
});
}
// vim:sw=2:sts=2:ts=8:et
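With the listener running, a request can be echoed back with curl (the script's file name is an assumption):
```
node jsonrpc-listener.js 127.0.0.1 8080 &
curl -s -d '{"jsonrpc": "2.0", "method": "ping", "id": 1}' http://127.0.0.1:8080/
```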

View File

@@ -1,252 +0,0 @@
#!/usr/bin/node
var async = require('async');
var Remote = require('ripple-lib').Remote;
var Transaction = require('ripple-lib').Transaction;
var UInt160 = require('ripple-lib').UInt160;
var Amount = require('ripple-lib').Amount;
var book_key = function (book) {
return book.taker_pays.currency
+ ":" + book.taker_pays.issuer
+ ":" + book.taker_gets.currency
+ ":" + book.taker_gets.issuer;
};
var book_key_cross = function (book) {
return book.taker_gets.currency
+ ":" + book.taker_gets.issuer
+ ":" + book.taker_pays.currency
+ ":" + book.taker_pays.issuer;
};
var ledger_verify = function (ledger) {
var dir_nodes = ledger.accountState.filter(function (entry) {
return entry.LedgerEntryType === 'DirectoryNode' // Only directories
&& entry.index === entry.RootIndex // Only root nodes
&& 'TakerGetsCurrency' in entry; // Only offer directories
});
var books = {};
dir_nodes.forEach(function (node) {
var book = {
taker_gets: {
currency: UInt160.from_generic(node.TakerGetsCurrency).to_json(),
issuer: UInt160.from_generic(node.TakerGetsIssuer).to_json()
},
taker_pays: {
currency: UInt160.from_generic(node.TakerPaysCurrency).to_json(),
issuer: UInt160.from_generic(node.TakerPaysIssuer).to_json()
},
quality: Amount.from_quality(node.RootIndex),
index: node.RootIndex
};
books[book_key(book)] = book;
// console.log(JSON.stringify(node, undefined, 2));
});
// console.log(JSON.stringify(dir_entry, undefined, 2));
console.log("#%s books: %s", ledger.ledger_index, Object.keys(books).length);
Object.keys(books).forEach(function (key) {
var book = books[key];
var key_cross = book_key_cross(book);
var book_cross = books[key_cross];
if (book && book_cross && !book_cross.done)
{
var book_cross_quality_inverted = Amount.from_json("1.0/1/1").divide(book_cross.quality);
if (book_cross_quality_inverted.compareTo(book.quality) >= 0)
{
// Crossing books
console.log("crossing: #%s :: %s :: %s :: %s :: %s :: %s :: %s", ledger.ledger_index, key, book.quality.to_text(), book_cross.quality.to_text(), book_cross_quality_inverted.to_text(),
book.index, book_cross.index);
}
book_cross.done = true;
}
});
var ripple_selfs = {};
var accounts = {};
var counts = {};
ledger.accountState.forEach(function (entry) {
if (entry.LedgerEntryType === 'Offer')
{
counts[entry.Account] = (counts[entry.Account] || 0) + 1;
}
else if (entry.LedgerEntryType === 'RippleState')
{
if (entry.Flags & (0x10000 | 0x40000))
{
counts[entry.LowLimit.issuer] = (counts[entry.LowLimit.issuer] || 0) + 1;
}
if (entry.Flags & (0x20000 | 0x80000))
{
counts[entry.HighLimit.issuer] = (counts[entry.HighLimit.issuer] || 0) + 1;
}
if (entry.HighLimit.issuer === entry.LowLimit.issuer)
ripple_selfs[entry.Account] = entry;
}
else if (entry.LedgerEntryType == 'AccountRoot')
{
accounts[entry.Account] = entry;
}
});
var low = 0; // Accounts with too low a count.
var high = 0;
var missing_accounts = 0; // Objects with no referencing account.
var missing_objects = 0; // Accounts specifying an object but having none.
Object.keys(counts).forEach(function (account) {
if (account in accounts)
{
if (counts[account] !== accounts[account].OwnerCount)
{
if (counts[account] < accounts[account].OwnerCount)
{
high += 1;
console.log("%s: high count %s/%s", account, counts[account], accounts[account].OwnerCount);
}
else
{
low += 1;
console.log("%s: low count %s/%s", account, counts[account], accounts[account].OwnerCount);
}
}
}
else
{
missing_accounts += 1;
console.log("%s: missing : count %s", account, counts[account]);
}
});
Object.keys(accounts).forEach(function (account) {
if (!('OwnerCount' in accounts[account]))
{
console.log("%s: bad entry : %s", account, JSON.stringify(accounts[account], undefined, 2));
}
else if (!(account in counts) && accounts[account].OwnerCount)
{
missing_objects += 1;
console.log("%s: no objects : %s/%s", account, 0, accounts[account].OwnerCount);
}
});
if (low)
console.log("counts too low = %s", low);
if (high)
console.log("counts too high = %s", high);
if (missing_objects)
console.log("missing_objects = %s", missing_objects);
if (missing_accounts)
console.log("missing_accounts = %s", missing_accounts);
if (Object.keys(ripple_selfs).length)
console.log("RippleState selfs = %s", Object.keys(ripple_selfs).length);
};
var ledger_request = function (remote, ledger_index, done) {
remote.request_ledger(undefined, {
accounts: true,
expand: true,
})
.ledger_index(ledger_index)
.on('success', function (m) {
// console.log("ledger: ", ledger_index);
// console.log("ledger: ", JSON.stringify(m, undefined, 2));
done(m.ledger);
})
.on('error', function (m) {
console.log("error");
done();
})
.request();
};
var usage = function () {
console.log("rlint.js _websocket_ip_ _websocket_port_ ");
};
var finish = function (remote) {
remote.disconnect();
// XXX Because remote.disconnect() doesn't work:
process.exit();
};
console.log("args: ", process.argv.length);
console.log("args: ", process.argv);
if (process.argv.length < 4) {
usage();
}
else {
var remote = Remote.from_config({
websocket_ip: process.argv[2],
websocket_port: process.argv[3],
})
.once('ledger_closed', function (m) {
console.log("ledger_closed: ", JSON.stringify(m, undefined, 2));
if (process.argv.length === 5) {
var ledger_index = process.argv[4];
ledger_request(remote, ledger_index, function (l) {
if (l) {
ledger_verify(l);
}
finish(remote);
});
} else if (process.argv.length === 6) {
var ledger_start = Number(process.argv[4]);
var ledger_end = Number(process.argv[5]);
var ledger_cursor = ledger_end;
async.whilst(
function () {
return ledger_start <= ledger_cursor && ledger_cursor <=ledger_end;
},
function (callback) {
// console.log(ledger_cursor);
ledger_request(remote, ledger_cursor, function (l) {
if (l) {
ledger_verify(l);
}
--ledger_cursor;
callback();
});
},
function (error) {
finish(remote);
});
} else {
finish(remote);
}
})
.connect();
}
// vim:sw=2:sts=2:ts=8:et
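Usage follows the argv handling above: a WebSocket endpoint plus either a single ledger index or a start/end range (the ledger numbers here are illustrative):
```
# Verify a single ledger.
node rlint.js 127.0.0.1 6006 32570
# Verify an inclusive range of ledgers, walked newest-first.
node rlint.js 127.0.0.1 6006 32570 32580
```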

View File

@@ -1,51 +0,0 @@
#!/usr/bin/env bash
set -exu
: ${TRAVIS_BUILD_DIR:=""}
: ${VCPKG_DIR:=".vcpkg"}
export VCPKG_ROOT=${VCPKG_DIR}
: ${VCPKG_DEFAULT_TRIPLET:="x64-windows-static"}
export VCPKG_DEFAULT_TRIPLET
EXE="vcpkg"
if [[ -z ${COMSPEC:-} ]]; then
EXE="${EXE}.exe"
fi
if [[ -d "${VCPKG_DIR}" && -x "${VCPKG_DIR}/${EXE}" && -d "${VCPKG_DIR}/installed" ]] ; then
echo "Using cached vcpkg at ${VCPKG_DIR}"
${VCPKG_DIR}/${EXE} list
else
if [[ -d "${VCPKG_DIR}" ]] ; then
rm -rf "${VCPKG_DIR}"
fi
git clone --branch 2021.04.30 https://github.com/Microsoft/vcpkg.git ${VCPKG_DIR}
pushd ${VCPKG_DIR}
BSARGS=()
if [[ "$(uname)" == "Darwin" ]] ; then
BSARGS+=(--allowAppleClang)
fi
if [[ -z ${COMSPEC:-} ]]; then
chmod +x ./bootstrap-vcpkg.sh
time ./bootstrap-vcpkg.sh "${BSARGS[@]}"
else
time ./bootstrap-vcpkg.bat
fi
popd
fi
# TODO: bring boost in this way as well ?
# NOTE: can pin specific ports to a commit/version like this:
# git checkout <SOME COMMIT HASH> ports/boost
if [ $# -eq 0 ]; then
echo "No extra packages specified..."
PKGS=()
else
PKGS=( "$@" )
fi
for LIB in "${PKGS[@]}"; do
time ${VCPKG_DIR}/${EXE} --clean-after-build install ${LIB}
done
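A sketch of a typical invocation, installing extra ports into the default `.vcpkg` cache (the script's file name is an assumption):
```
VCPKG_DEFAULT_TRIPLET=x64-windows-static ./setup-vcpkg.sh openssl zlib
```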

View File

@@ -1,40 +0,0 @@
# NOTE: must be sourced from a shell so it can export vars
cat << BATCH > ./getenv.bat
CALL %*
SET
BATCH
while read line ; do
IFS='"' read x path arg <<<"${line}"
if [ -f "${path}" ] ; then
echo "FOUND: $path"
export VCINSTALLDIR=$(./getenv.bat "${path}" ${arg} | grep "^VCINSTALLDIR=" | sed -E "s/^VCINSTALLDIR=//g")
if [ "${VCINSTALLDIR}" != "" ] ; then
echo "USING ${VCINSTALLDIR}"
export LIB=$(./getenv.bat "${path}" ${arg} | grep "^LIB=" | sed -E "s/^LIB=//g")
export LIBPATH=$(./getenv.bat "${path}" ${arg} | grep "^LIBPATH=" | sed -E "s/^LIBPATH=//g")
export INCLUDE=$(./getenv.bat "${path}" ${arg} | grep "^INCLUDE=" | sed -E "s/^INCLUDE=//g")
ADDPATH=$(./getenv.bat "${path}" ${arg} | grep "^PATH=" | sed -E "s/^PATH=//g")
export PATH="${ADDPATH}:${PATH}"
break
fi
fi
done <<EOL
"C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2017/BuildTools/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio 15.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 13.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/vcvarsall.bat" amd64
EOL
# TODO: update the list above as needed to support newer versions of msvc tools
rm -f getenv.bat
if [ "${VCINSTALLDIR}" = "" ] ; then
echo "No compatible visual studio found!"
fi

View File

@@ -1,246 +0,0 @@
#!/usr/bin/env python
"""A script to test rippled in an infinite loop of start-sync-stop.
- Requires Python 3.7+.
- Can be stopped with SIGINT.
- Has no dependencies outside the standard library.
"""
import sys
assert sys.version_info.major == 3 and sys.version_info.minor >= 7
import argparse
import asyncio
import configparser
import contextlib
import json
import logging
import os
from pathlib import Path
import platform
import subprocess
import time
import urllib.error
import urllib.request
# Enable asynchronous subprocesses on Windows. The default changed in 3.8.
# https://docs.python.org/3.7/library/asyncio-platforms.html#subprocess-support-on-windows
if (platform.system() == 'Windows' and sys.version_info.major == 3
and sys.version_info.minor < 8):
asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
DEFAULT_EXE = 'rippled'
DEFAULT_CONFIGURATION_FILE = 'rippled.cfg'
# Number of seconds to wait before forcefully terminating.
PATIENCE = 120
# Number of contiguous seconds in a sync state to be considered synced.
DEFAULT_SYNC_DURATION = 60
# Number of seconds between polls of state.
DEFAULT_POLL_INTERVAL = 5
SYNC_STATES = ('full', 'validating', 'proposing')
def read_config(config_file):
# strict = False: Allow duplicate keys, e.g. [rpc_startup].
# allow_no_value = True: Allow keys with no values. Generally, these
# instances use the "key" as the value, and the section name is the key,
# e.g. [debug_logfile].
# delimiters = ('='): Allow ':' as a character in Windows paths. Some of
# our "keys" are actually values, and we don't want to split them on ':'.
config = configparser.ConfigParser(
strict=False,
allow_no_value=True,
delimiters=('='),
)
config.read(config_file)
return config
def to_list(value, separator=','):
"""Parse a list from a delimited string value."""
return [s.strip() for s in value.split(separator) if s]
def find_log_file(config_file):
"""Try to figure out what log file the user has chosen. Raises all kinds
of exceptions if there is any possibility of ambiguity."""
config = read_config(config_file)
values = list(config['debug_logfile'].keys())
if len(values) < 1:
raise ValueError(
f'no [debug_logfile] in configuration file: {config_file}')
if len(values) > 1:
raise ValueError(
f'too many [debug_logfile] in configuration file: {config_file}')
return values[0]
def find_http_port(config_file):
config = read_config(config_file)
names = list(config['server'].keys())
for name in names:
server = config[name]
if 'http' in to_list(server.get('protocol', '')):
return int(server['port'])
raise ValueError(f'no server in [server] for "http" protocol')
@contextlib.asynccontextmanager
async def rippled(exe=DEFAULT_EXE, config_file=DEFAULT_CONFIGURATION_FILE):
"""A context manager for a rippled process."""
# Start the server.
process = await asyncio.create_subprocess_exec(
str(exe),
'--conf',
str(config_file),
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
logging.info(f'rippled started with pid {process.pid}')
try:
yield process
finally:
# Ask it to stop.
logging.info(f'asking rippled (pid: {process.pid}) to stop')
start = time.time()
process.terminate()
# Wait nicely.
try:
await asyncio.wait_for(process.wait(), PATIENCE)
except asyncio.TimeoutError:
# Ask the operating system to kill it.
logging.warning(f'killing rippled ({process.pid})')
try:
process.kill()
except ProcessLookupError:
pass
code = await process.wait()
end = time.time()
logging.info(
f'rippled stopped after {end - start:.1f} seconds with code {code}'
)
async def sync(
port,
*,
duration=DEFAULT_SYNC_DURATION,
interval=DEFAULT_POLL_INTERVAL,
):
"""Poll rippled on an interval until it has been synced for a duration."""
start = time.perf_counter()
while (time.perf_counter() - start) < duration:
await asyncio.sleep(interval)
request = urllib.request.Request(
f'http://127.0.0.1:{port}',
data=json.dumps({
'method': 'server_state'
}).encode(),
headers={'Content-Type': 'application/json'},
)
with urllib.request.urlopen(request) as response:
try:
body = json.loads(response.read())
except json.JSONDecodeError as cause:
logging.warning(f'server_state returned not JSON: {cause}')
start = time.perf_counter()
continue
try:
state = body['result']['state']['server_state']
except KeyError as cause:
logging.warning(f'server_state response missing key: {cause}')
start = time.perf_counter()
continue
logging.info(f'server_state: {state}')
if state not in SYNC_STATES:
# Require a contiguous sync state.
start = time.perf_counter()
async def loop(test,
*,
exe=DEFAULT_EXE,
config_file=DEFAULT_CONFIGURATION_FILE):
"""
Start-test-stop rippled in an infinite loop.
Moves log to a different file after each iteration.
"""
log_file = find_log_file(config_file)
id = 0
while True:
logging.info(f'iteration: {id}')
async with rippled(exe, config_file) as process:
start = time.perf_counter()
exited = asyncio.create_task(process.wait())
tested = asyncio.create_task(test())
# Try to sync as long as the process is running.
done, pending = await asyncio.wait(
{exited, tested},
return_when=asyncio.FIRST_COMPLETED,
)
if done == {exited}:
code = exited.result()
logging.warning(
f'server halted for unknown reason with code {code}')
else:
assert done == {tested}
assert tested.exception() is None
end = time.perf_counter()
logging.info(f'synced after {end - start:.0f} seconds')
os.replace(log_file, f'debug.{id}.log')
id += 1
logging.basicConfig(
format='%(asctime)s %(levelname)-8s %(message)s',
level=logging.INFO,
datefmt='%Y-%m-%d %H:%M:%S',
)
parser = argparse.ArgumentParser(
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument(
'rippled',
type=Path,
nargs='?',
default=DEFAULT_EXE,
help='Path to rippled.',
)
parser.add_argument(
'--conf',
type=Path,
default=DEFAULT_CONFIGURATION_FILE,
help='Path to configuration file.',
)
parser.add_argument(
'--duration',
type=int,
default=DEFAULT_SYNC_DURATION,
help='Number of contiguous seconds required in a synchronized state.',
)
parser.add_argument(
'--interval',
type=int,
default=DEFAULT_POLL_INTERVAL,
help='Number of seconds to wait between polls of state.',
)
args = parser.parse_args()
port = find_http_port(args.conf)
def test():
return sync(port, duration=args.duration, interval=args.interval)
try:
asyncio.run(loop(test, exe=args.rippled, config_file=args.conf))
except KeyboardInterrupt:
# Squelch the message. This is a normal mode of exit.
pass
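A sketch of running the loop against a local build (the script's file name is an assumption; the flags are the ones defined by the argument parser above):
```
python3 loop.py build/rippled --conf rippled.cfg --duration 60 --interval 5
```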

View File

@@ -1,133 +0,0 @@
/* -------------------------------- REQUIRES -------------------------------- */
var child = require("child_process");
var assert = require("assert");
/* --------------------------------- CONFIG --------------------------------- */
if (process.argv[2] == null) {
[
'Usage: ',
'',
' `node bin/stop-test.js i,j [rippled_path] [rippled_conf]`',
'',
' Launch rippled and stop it after n seconds for all n in [i, j)',
' For all even values of n launch rippled with `--fg`',
' For values of n where n % 3 == 0 launch rippled with `--net`\n',
'Examples: ',
'',
' $ node bin/stop-test.js 5,10',
(' $ node bin/stop-test.js 1,4 ' +
'build/clang.debug/rippled $HOME/.confs/rippled.cfg')
]
.forEach(function(l){console.log(l)});
process.exit();
} else {
var testRange = process.argv[2].split(',').map(Number);
var rippledPath = process.argv[3] || 'build/rippled'
var rippledConf = process.argv[4] || 'rippled.cfg'
}
var options = {
env: process.env,
stdio: 'ignore' // we could dump the child io when it fails abnormally
};
// default args
var conf_args = ['--conf='+rippledConf];
var start_args = conf_args.concat([/*'--net'*/])
var stop_args = conf_args.concat(['stop']);
/* --------------------------------- HELPERS -------------------------------- */
function start(args) {
return child.spawn(rippledPath, args, options);
}
function stop(rippled) { child.execFile(rippledPath, stop_args, options)}
function secs_l8r(ms, f) {setTimeout(f, ms * 1000); }
function show_results_and_exit(results) {
console.log(JSON.stringify(results, undefined, 2));
process.exit();
}
var timeTakes = function (range) {
function sumRange(n) {return (n+1) * n /2}
var ret = sumRange(range[1]);
if (range[0] > 1) {
ret = ret - sumRange(range[0] - 1)
}
var stopping = (range[1] - range[0]) * 0.5;
return ret + stopping;
}
/* ---------------------------------- TEST ---------------------------------- */
console.log("Test will take ~%s seconds", timeTakes(testRange));
(function oneTest(n /* seconds */, results) {
if (n >= testRange[1]) {
// show_results_and_exit(results);
console.log(JSON.stringify(results, undefined, 2));
oneTest(testRange[0], []);
return;
}
var args = start_args;
if (n % 2 == 0) {args = args.concat(['--fg'])}
if (n % 3 == 0) {args = args.concat(['--net'])}
var result = {args: args, alive_for: n};
results.push(result);
console.log("\nLaunching `%s` with `%s` for %d seconds",
rippledPath, JSON.stringify(args), n);
var rippled = start(args);
console.log("Rippled pid: %d", rippled.pid);
// defaults
var b4StopSent = false;
var stopSent = false;
var stop_took = null;
rippled.once('exit', function(){
if (!stopSent && !b4StopSent) {
console.warn('\nRippled exited itself b4 stop issued');
process.exit();
};
// The io handles close AFTER exit, may have implications for
// `stdio:'inherit'` option to `child.spawn`.
rippled.once('close', function() {
result.stop_took = (+new Date() - stop_took) / 1000; // seconds
console.log("Stopping after %d seconds took %s seconds",
n, result.stop_took);
oneTest(n+1, results);
});
});
secs_l8r(n, function(){
console.log("Stopping rippled after %d seconds", n);
// possible race here ?
// seems highly unlikely, but I was having issues at one point
b4StopSent=true;
stop_took = (+new Date());
// when does `exit` actually get sent?
stop();
stopSent=true;
// Sometimes we want to attach with a debugger.
if (process.env.ABORT_TESTS_ON_STALL != null) {
// We wait 30 seconds, and if it hasn't stopped, we abort the process
secs_l8r(30, function() {
if (result.stop_took == null) {
console.log("rippled has stalled");
process.exit();
};
});
}
})
}(testRange[0], []));

View File

@@ -1,119 +0,0 @@
/**
* bin/update_bintypes.js
*
* This unholy abomination of a script generates the JavaScript file
* src/js/bintypes.js from various parts of the C++ source code.
*
* This should *NOT* be part of any automatic build process unless the C++
* source data are brought into a more easily parseable format. Until then,
* simply run this script manually and fix as needed.
*/
// XXX: Process LedgerFormats.(h|cpp) as well.
var filenameProto = __dirname + '/../src/cpp/ripple/SerializeProto.h',
filenameTxFormatsH = __dirname + '/../src/cpp/ripple/TransactionFormats.h',
filenameTxFormats = __dirname + '/../src/cpp/ripple/TransactionFormats.cpp';
var fs = require('fs');
var output = [];
// Stage 1: Get the field types and codes from SerializeProto.h
var types = {},
fields = {};
String(fs.readFileSync(filenameProto)).split('\n').forEach(function (line) {
line = line.replace(/^\s+|\s+$/g, '').replace(/\s+/g, '');
if (!line.length || line.slice(0, 2) === '//' || line.slice(-1) !== ')') return;
var tmp = line.slice(0, -1).split('('),
type = tmp[0],
opts = tmp[1].split(',');
if (type === 'TYPE') types[opts[1]] = [opts[0], +opts[2]];
else if (type === 'FIELD') fields[opts[0]] = [types[opts[1]][0], +opts[2]];
});
output.push('var ST = require("./serializedtypes");');
output.push('');
output.push('var REQUIRED = exports.REQUIRED = 0,');
output.push(' OPTIONAL = exports.OPTIONAL = 1,');
output.push(' DEFAULT = exports.DEFAULT = 2;');
output.push('');
function pad(s, n) { while (s.length < n) s += ' '; return s; }
function padl(s, n) { while (s.length < n) s = ' '+s; return s; }
Object.keys(types).forEach(function (type) {
output.push(pad('ST.'+types[type][0]+'.id', 25) + ' = '+types[type][1]+';');
});
output.push('');
// Stage 2: Get the transaction type IDs from TransactionFormats.h
var ttConsts = {};
String(fs.readFileSync(filenameTxFormatsH)).split('\n').forEach(function (line) {
var regex = /tt([A-Z_]+)\s+=\s+([0-9-]+)/;
var match = line.match(regex);
if (match) ttConsts[match[1]] = +match[2];
});
// Stage 3: Get the transaction formats from TransactionFormats.cpp
var base = [],
sections = [],
current = base;
String(fs.readFileSync(filenameTxFormats)).split('\n').forEach(function (line) {
line = line.replace(/^\s+|\s+$/g, '').replace(/\s+/g, '');
var d_regex = /DECLARE_TF\(([A-Za-z]+),tt([A-Z_]+)/;
var d_match = line.match(d_regex);
var s_regex = /SOElement\(sf([a-z]+),SOE_(REQUIRED|OPTIONAL|DEFAULT)/i;
var s_match = line.match(s_regex);
if (d_match) sections.push(current = [d_match[1], ttConsts[d_match[2]]]);
else if (s_match) current.push([s_match[1], s_match[2]]);
});
function removeFinalComma(arr) {
arr[arr.length-1] = arr[arr.length-1].slice(0, -1);
}
output.push('var base = [');
base.forEach(function (field) {
var spec = fields[field[0]];
output.push(' [ '+
pad("'"+field[0]+"'", 21)+', '+
pad(field[1], 8)+', '+
padl(""+spec[1], 2)+', '+
'ST.'+pad(spec[0], 3)+
' ],');
});
removeFinalComma(output);
output.push('];');
output.push('');
output.push('exports.tx = {');
sections.forEach(function (section) {
var name = section.shift(),
ttid = section.shift();
output.push(' '+name+': ['+ttid+'].concat(base, [');
section.forEach(function (field) {
var spec = fields[field[0]];
output.push(' [ '+
pad("'"+field[0]+"'", 21)+', '+
pad(field[1], 8)+', '+
padl(""+spec[1], 2)+', '+
'ST.'+pad(spec[0], 3)+
' ],');
});
removeFinalComma(output);
output.push(' ]),');
});
removeFinalComma(output);
output.push('};');
output.push('');
console.log(output.join('\n'));

View File

@@ -12,7 +12,14 @@ umask 0000;
cd /io/ &&
echo "Importing env... Lines:" &&
cat .env|wc -l &&
source .env &&
source .env
rc=$?; echo $rc
if [[ "$rc" -ne "0" ]]; then
echo "ERR no .env found/sourced"
exit 127
fi
perl -i -pe "s/^(\\s*)-DBUILD_SHARED_LIBS=OFF/\\1-DBUILD_SHARED_LIBS=OFF\\n\\1-DROCKSDB_BUILD_SHARED=OFF/g" Builds/CMake/deps/Rocksdb.cmake &&
mv Builds/CMake/deps/WasmEdge.cmake Builds/CMake/deps/WasmEdge.old &&
echo "find_package(LLVM REQUIRED CONFIG)
@@ -53,6 +60,7 @@ else
fi
cd ..;
mv src/ripple/net/impl/RegisterSSLCerts.cpp.old src/ripple/net/impl/RegisterSSLCerts.cpp;
mv Builds/CMake/deps/Rocksdb.cmake.old Builds/CMake/deps/Rocksdb.cmake;
mv Builds/CMake/deps/WasmEdge.old Builds/CMake/deps/WasmEdge.cmake;

View File

@@ -152,6 +152,13 @@ echo "Persisting ENV:"
cat .env
./build-core.sh "$1" "$2" "$3" "$4"
rc=$?; echo $rc
if [[ "$rc" -ne "0" ]]; then
echo "ERR build-core.sh non 0 exit code"
exit 127
fi
echo "END [ build-core.sh ]"
echo "END INSIDE CONTAINER - FULL"

File diff suppressed because one or more lines are too long

View File

@@ -1,10 +0,0 @@
#!/bin/sh
# Execute this script with a running Postgres server on the current host.
# It should work with the most generic installation of Postgres,
# and is necessary for rippled to store data in Postgres.
# usage: sudo -u postgres ./initdb.sh
psql -c "CREATE USER rippled"
psql -c "CREATE DATABASE rippled WITH OWNER = rippled"

Some files were not shown because too many files have changed in this diff.