mirror of https://github.com/Xahau/xahaud.git
synced 2025-11-04 02:35:48 +00:00

Housekeeping (#170)

* Update pull_request_template.md
* strip out unused files
* Update CONTRIBUTING.md
* fix docker
* remove conan stuff
* update license
* update hook directory
* Delete .codecov.yml
* Update .dockerignore
* Update genesis.json
* update validator list example
* Update rippled-standalone.cfg
* Update CONTRIBUTING.md
.codecov.yml (5 lines deleted)

@@ -1,5 +0,0 @@
codecov:
  ci:
    - !appveyor
    - travis
.github/pull_request_template.md (vendored, 21 changed lines)

@@ -33,10 +33,27 @@ Please check [x] relevant options, delete irrelevant ones.
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Refactor (non-breaking change that only restructures code)
- [ ] Tests (You added tests for code that already exists, or your new feature included in this PR)
- [ ] Documentation Updates
- [ ] Tests (you added tests for code that already exists, or your new feature included in this PR)
- [ ] Documentation update
- [ ] Chore (no impact to binary, e.g. `.gitignore`, formatting, dropping support for older tooling)
- [ ] Release

### API Impact

<!--
Please check [x] relevant options, delete irrelevant ones.

* If there is any impact to the public API methods (HTTP / WebSocket), please update https://github.com/xrplf/rippled/blob/develop/API-CHANGELOG.md
  * Update API-CHANGELOG.md and add the change directly in this PR by pushing to your PR branch.
* libxrpl: See https://github.com/XRPLF/rippled/blob/develop/docs/build/depend.md
* Peer Protocol: See https://xrpl.org/peer-protocol.html
-->

- [ ] Public API: New feature (new methods and/or new fields)
- [ ] Public API: Breaking change (in general, breaking changes should only impact the next api_version)
- [ ] `libxrpl` change (any change that may affect `libxrpl` or dependents of `libxrpl`)
- [ ] Peer protocol change (must be backward compatible or bump the peer protocol version)

<!--
## Before / After
If relevant, use this section for an English description of the change at a technical level.
							
								
								
									
.gitlab-ci.yml (169 lines deleted)

@@ -1,169 +0,0 @@
# I don't know what the minimum size is, but we cannot build on t3.micro.

# TODO: Factor common builds between different tests.

# The parameters for our job matrix:
#
# 1. Generator (Make, Ninja, MSBuild)
# 2. Compiler (GCC, Clang, MSVC)
# 3. Build type (Debug, Release)
# 4. Definitions (-Dunity=OFF, -Dassert=ON, ...)


.job_linux_build_test:
  only:
    variables:
      - $CI_PROJECT_URL =~ /^https?:\/\/gitlab.com\//
  stage: build
  tags:
    - linux
    - c5.2xlarge
  image: thejohnfreeman/rippled-build-ubuntu:4b73694e07f0
  script:
    - bin/ci/build.sh
    - bin/ci/test.sh
  cache:
    # Use a different key for each unique combination of (generator, compiler,
    # build type). Caches are stored as `.zip` files; they are not merged.
    # Generate a new key whenever you want to bust the cache, e.g. when the
    # dependency versions have been bumped.
    # By default, jobs pull the cache. Only a few specially chosen jobs update
    # the cache (with policy `pull-push`); one for each unique combination of
    # (generator, compiler, build type).
    policy: pull
    paths:
      - .nih_c/

'build+test Make GCC Debug':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Unix Makefiles
    COMPILER: gcc
    BUILD_TYPE: Debug
  cache:
    key: 62ada41c-fc9e-4949-9533-736d4d6512b6
    policy: pull-push

'build+test Ninja GCC Debug':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: gcc
    BUILD_TYPE: Debug
  cache:
    key: 1665d3eb-6233-4eef-9f57-172636899faa
    policy: pull-push

'build+test Ninja GCC Debug -Dstatic=OFF':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: gcc
    BUILD_TYPE: Debug
    CMAKE_ARGS: '-Dstatic=OFF'
  cache:
    key: 1665d3eb-6233-4eef-9f57-172636899faa

'build+test Ninja GCC Debug -Dstatic=OFF -DBUILD_SHARED_LIBS=ON':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: gcc
    BUILD_TYPE: Debug
    CMAKE_ARGS: '-Dstatic=OFF -DBUILD_SHARED_LIBS=ON'
  cache:
    key: 1665d3eb-6233-4eef-9f57-172636899faa

'build+test Ninja GCC Debug -Dunity=OFF':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: gcc
    BUILD_TYPE: Debug
    CMAKE_ARGS: '-Dunity=OFF'
  cache:
    key: 1665d3eb-6233-4eef-9f57-172636899faa

'build+test Ninja GCC Release -Dassert=ON':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: gcc
    BUILD_TYPE: Release
    CMAKE_ARGS: '-Dassert=ON'
  cache:
    key: c45ec125-9625-4c19-acf7-4e889d5f90bd
    policy: pull-push

'build+test(manual) Ninja GCC Release -Dassert=ON':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: gcc
    BUILD_TYPE: Release
    CMAKE_ARGS: '-Dassert=ON'
    MANUAL_TEST: 'true'
  cache:
    key: c45ec125-9625-4c19-acf7-4e889d5f90bd

'build+test Make clang Debug':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Unix Makefiles
    COMPILER: clang
    BUILD_TYPE: Debug
  cache:
    key: bf578dc2-5277-4580-8de5-6b9523118b19
    policy: pull-push

'build+test Ninja clang Debug':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: clang
    BUILD_TYPE: Debug
  cache:
    key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe
    policy: pull-push

'build+test Ninja clang Debug -Dunity=OFF':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: clang
    BUILD_TYPE: Debug
    CMAKE_ARGS: '-Dunity=OFF'
  cache:
    key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe

'build+test Ninja clang Debug -Dunity=OFF -Dsan=address':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: clang
    BUILD_TYPE: Debug
    CMAKE_ARGS: '-Dunity=OFF -Dsan=address'
    CONCURRENT_TESTS: 1
  cache:
    key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe

'build+test Ninja clang Debug -Dunity=OFF -Dsan=undefined':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: clang
    BUILD_TYPE: Debug
    CMAKE_ARGS: '-Dunity=OFF -Dsan=undefined'
  cache:
    key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe

'build+test Ninja clang Release -Dassert=ON':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: clang
    BUILD_TYPE: Release
    CMAKE_ARGS: '-Dassert=ON'
  cache:
    key: 7751be37-2358-4f08-b1d0-7e72e0ad266d
    policy: pull-push
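Aside: the comments above describe a hand-maintained job matrix over (generator, compiler, build type, definitions). A short Python sketch (illustration only, not part of the repository) of how the same combinations could be enumerated:

```python
from itertools import product

# Each tuple maps onto the GENERATOR / COMPILER / BUILD_TYPE variables that
# the jobs above pass to bin/ci/build.sh and bin/ci/test.sh.
for generator, compiler, build_type in product(
        ["Unix Makefiles", "Ninja"], ["gcc", "clang"], ["Debug", "Release"]):
    label = "Make" if generator == "Unix Makefiles" else generator
    print(f"'build+test {label} {compiler} {build_type}':")
    print("  extends: .job_linux_build_test")
    print(f"  variables: {{ GENERATOR: {generator}, COMPILER: {compiler},"
          f" BUILD_TYPE: {build_type} }}")
```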
							
								
								
									
.pre-commit-config.yaml (new file, 6 lines)

@@ -0,0 +1,6 @@
# .pre-commit-config.yaml
repos:
- repo: https://github.com/pre-commit/mirrors-clang-format
  rev: v10.0.1
  hooks:
  - id: clang-format
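Aside: this new file is consumed by the pre-commit framework. Contributors typically enable the hook once with `pre-commit install`; after that, clang-format runs against staged files on every commit, and `pre-commit run --all-files` checks the whole tree on demand.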
.travis.yml (460 lines deleted)

@@ -1,460 +0,0 @@
# There is a known issue where Travis will have trouble fetching the cache,
# particularly on non-linux builds. Try restarting the individual build
# (probably will not be necessary in the "windep" stages) if the end of the
# log looks like:
#
#---------------------------------------
# attempting to download cache archive
# fetching travisorder/cache--windows-1809-containers-f2bf1c76c7fb4095c897a4999bd7c9b3fb830414dfe91f33d665443b52416d39--compiler-gpp.tgz
# found cache
# adding C:/Users/travis/_cache to cache
# creating directory C:/Users/travis/_cache
# No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
# Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#build-times-out-because-no-output-was-received
# The build has been terminated
#---------------------------------------

language: cpp
dist: bionic

services:
  - docker

stages:
  - windep-vcpkg
  - windep-boost
  - build

env:
  global:
    - DOCKER_IMAGE="rippleci/rippled-ci-builder:2020-01-08"
    - CMAKE_EXTRA_ARGS="-Dwerr=ON -Dwextra=ON"
    - NINJA_BUILD=true
    # change this if we get more VM capacity
    - MAX_TIME_MIN=80
    - CACHE_DIR=${TRAVIS_HOME}/_cache
    - NIH_CACHE_ROOT=${CACHE_DIR}/nih_c
    - PARALLEL_TESTS=true
    # this is NOT used by linux container based builds (which already have boost installed)
    - BOOST_URL='https://boostorg.jfrog.io/artifactory/main/release/1.75.0/source/boost_1_75_0.tar.gz'
    # Alternate download location
    - BOOST_URL2='https://downloads.sourceforge.net/project/boost/boost/1.75.0/boost_1_75_0.tar.bz2?r=&ts=1594393912&use_mirror=newcontinuum'
    # Travis downloader doesn't seem to have updated certs. Using this option
    # introduces obvious security risks, but they're Travis's risks.
    # Note that this option is only used if the "normal" build fails.
    - BOOST_WGET_OPTIONS='--no-check-certificate'
    - VCPKG_DIR=${CACHE_DIR}/vcpkg
    - USE_CCACHE=true
    - CCACHE_BASEDIR=${TRAVIS_HOME}
    - CCACHE_NOHASHDIR=true
    - CCACHE_DIR=${CACHE_DIR}/ccache

before_install:
  - export NUM_PROCESSORS=$(nproc)
  - echo "NUM PROC is ${NUM_PROCESSORS}"
  - if [ "$(uname)" = "Linux" ] ; then docker pull ${DOCKER_IMAGE}; fi
  - if [ "${MATRIX_EVAL}" != "" ] ; then eval "${MATRIX_EVAL}"; fi
  - if [ "${CMAKE_ADD}" != "" ] ; then export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} ${CMAKE_ADD}"; fi
  - bin/ci/ubuntu/travis-cache-start.sh

matrix:
  fast_finish: true
  allow_failures:
    # TODO these need more investigation
    #
    # there are a number of UBs caught currently that need triage
    - name: ubsan, clang-8
    # this one often runs out of memory:
    - name: manual tests, gcc-8, release
    # The Windows build may fail if any of the dependencies fail, but
    # allow the rest of the builds to continue. They may succeed if the
    # dependency is already cached. These do not need to be retried if
    # _any_ of the Windows builds succeed.
    - stage: windep-vcpkg
    - stage: windep-boost

  # https://docs.travis-ci.com/user/build-config-yaml#usage-of-yaml-anchors-and-aliases
  include:
    # debug builds
    - &linux
      stage: build
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/
      compiler: gcc-8
      name: gcc-8, debug
      env:
        - MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
        - BUILD_TYPE=Debug
      script:
        - sudo chmod -R a+rw ${CACHE_DIR}
        - ccache -s
        - travis_wait ${MAX_TIME_MIN} bin/ci/ubuntu/build-in-docker.sh
        - ccache -s
    - <<: *linux
      compiler: clang-8
      name: clang-8, debug
      env:
        - MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
        - BUILD_TYPE=Debug
    - <<: *linux
      compiler: clang-8
      name: reporting, clang-8, debug
      env:
        - MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
        - BUILD_TYPE=Debug
        - CMAKE_ADD="-Dreporting=ON"
    # coverage builds
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_cov/
      compiler: gcc-8
      name: coverage, gcc-8
      env:
        - MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
        - BUILD_TYPE=Debug
        - CMAKE_ADD="-Dcoverage=ON"
        - TARGET=coverage_report
        - SKIP_TESTS=true
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_cov/
      compiler: clang-8
      name: coverage, clang-8
      env:
        - MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
        - BUILD_TYPE=Debug
        - CMAKE_ADD="-Dcoverage=ON"
        - TARGET=coverage_report
        - SKIP_TESTS=true
    # test-free builds
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/
      compiler: gcc-8
      name: no-tests-unity, gcc-8
      env:
        - MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
        - BUILD_TYPE=Debug
        - CMAKE_ADD="-Dtests=OFF"
        - SKIP_TESTS=true
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/
      compiler: clang-8
      name: no-tests-non-unity, clang-8
      env:
        - MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
        - BUILD_TYPE=Debug
        - CMAKE_ADD="-Dtests=OFF -Dunity=OFF"
        - SKIP_TESTS=true
    # nounity
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_nounity/
      compiler: gcc-8
      name: non-unity, gcc-8
      env:
        - MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
        - BUILD_TYPE=Debug
        - CMAKE_ADD="-Dunity=OFF"
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_nounity/
      compiler: clang-8
      name: non-unity, clang-8
      env:
        - MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
        - BUILD_TYPE=Debug
        - CMAKE_ADD="-Dunity=OFF"
    # manual tests
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_man/
      compiler: gcc-8
      name: manual tests, gcc-8, debug
      env:
        - MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
        - BUILD_TYPE=Debug
        - MANUAL_TESTS=true
    # manual tests
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_man/
      compiler: gcc-8
      name: manual tests, gcc-8, release
      env:
        - MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
        - BUILD_TYPE=Release
        - CMAKE_ADD="-Dassert=ON -Dunity=OFF"
        - MANUAL_TESTS=true
    # release builds
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_release/
      compiler: gcc-8
      name: gcc-8, release
      env:
        - MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
        - BUILD_TYPE=Release
        - CMAKE_ADD="-Dassert=ON -Dunity=OFF"
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_release/
      compiler: clang-8
      name: clang-8, release
      env:
        - MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
        - BUILD_TYPE=Release
        - CMAKE_ADD="-Dassert=ON"
    # asan
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_san/
      compiler: clang-8
      name: asan, clang-8
      env:
        - MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
        - BUILD_TYPE=Release
        - CMAKE_ADD="-Dsan=address"
        - ASAN_OPTIONS="print_stats=true:atexit=true"
        #- LSAN_OPTIONS="verbosity=1:log_threads=1"
        - PARALLEL_TESTS=false
    # ubsan
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_san/
      compiler: clang-8
      name: ubsan, clang-8
      env:
        - MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
        - BUILD_TYPE=Release
        - CMAKE_ADD="-Dsan=undefined"
        # once we can run clean under ubsan, add halt_on_error=1 to options below
        - UBSAN_OPTIONS="print_stacktrace=1:report_error_type=1"
        - PARALLEL_TESTS=false
    # tsan
    # current tsan failure *might* be related to:
    # https://github.com/google/sanitizers/issues/1104
    #  but we can't get it to run, so leave it disabled for now
    #    - <<: *linux
    #      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_linux/ OR commit_message =~ /travis_run_san/
    #      compiler: clang-8
    #      name: tsan, clang-8
    #      env:
    #        - MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
    #        - BUILD_TYPE=Release
    #        - CMAKE_ADD="-Dsan=thread"
    #        - TSAN_OPTIONS="history_size=3 external_symbolizer_path=/usr/bin/llvm-symbolizer verbosity=1"
    #        - PARALLEL_TESTS=false
    # dynamic lib builds
    - <<: *linux
      compiler: gcc-8
      name: non-static, gcc-8
      env:
        - MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
        - BUILD_TYPE=Debug
        - CMAKE_ADD="-Dstatic=OFF"
    - <<: *linux
      compiler: gcc-8
      name: non-static + BUILD_SHARED_LIBS, gcc-8
      env:
        - MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
        - BUILD_TYPE=Debug
        - CMAKE_ADD="-Dstatic=OFF -DBUILD_SHARED_LIBS=ON"
    # makefile
    - <<: *linux
      compiler: gcc-8
      name: makefile generator, gcc-8
      env:
        - MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
        - BUILD_TYPE=Debug
        - NINJA_BUILD=false
    # misc alternative compilers
    - <<: *linux
      compiler: gcc-9
      name: gcc-9
      env:
        - MATRIX_EVAL="CC=gcc-9 && CXX=g++-9"
        - BUILD_TYPE=Debug
    - <<: *linux
      compiler: clang-9
      name: clang-9, debug
      env:
        - MATRIX_EVAL="CC=clang-9 && CXX=clang++-9"
        - BUILD_TYPE=Debug
    - <<: *linux
      compiler: clang-9
      name: clang-9, release
      env:
        - MATRIX_EVAL="CC=clang-9 && CXX=clang++-9"
        - BUILD_TYPE=Release
    # verify build with min version of cmake
    - <<: *linux
      compiler: gcc-8
      name: min cmake version
      env:
        - MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
        - BUILD_TYPE=Debug
        - CMAKE_EXE=/opt/local/cmake/bin/cmake
        - SKIP_TESTS=true
    # validator keys project as subproj of rippled
    - <<: *linux
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_vkeys/
      compiler: gcc-8
      name: validator-keys
      env:
        - MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
        - BUILD_TYPE=Debug
        - CMAKE_ADD="-Dvalidator_keys=ON"
        - TARGET=validator-keys
    # macos
    - &macos
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_mac/
      stage: build
      os: osx
      osx_image: xcode13.1
      name: xcode13.1, debug
      env:
        # put NIH in non-cache location since it seems to
        # cause failures when homebrew updates
        - NIH_CACHE_ROOT=${TRAVIS_BUILD_DIR}/nih_c
        - BLD_CONFIG=Debug
        - TEST_EXTRA_ARGS=""
        - BOOST_ROOT=${CACHE_DIR}/boost_1_75_0
        - >-
          CMAKE_ADD="
          -DBOOST_ROOT=${BOOST_ROOT}/_INSTALLED_
          -DBoost_ARCHITECTURE=-x64
          -DBoost_NO_SYSTEM_PATHS=ON
          -DCMAKE_VERBOSE_MAKEFILE=ON"
      addons:
        homebrew:
          packages:
            - protobuf
            - grpc
            - pkg-config
            - bash
            - ninja
            - cmake
            - wget
            - zstd
            - libarchive
            - openssl@1.1
          update: true
      install:
        - export OPENSSL_ROOT=$(brew --prefix openssl@1.1)
        - travis_wait ${MAX_TIME_MIN} Builds/containers/shared/install_boost.sh
        - brew uninstall --ignore-dependencies boost
      script:
        - mkdir -p build.macos && cd build.macos
        - cmake -G Ninja ${CMAKE_EXTRA_ARGS} -DCMAKE_BUILD_TYPE=${BLD_CONFIG} ..
        - travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose
        - ./rippled --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS} ${TEST_EXTRA_ARGS}
    - <<: *macos
      name: xcode13.1, release
      before_script:
        - export BLD_CONFIG=Release
        - export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} -Dassert=ON"
    - <<: *macos
      name: ipv6 (macos)
      before_script:
        - export TEST_EXTRA_ARGS="--unittest-ipv6"
    - <<: *macos
      osx_image: xcode13.1
      name: xcode13.1, debug
    # windows
    - &windows
      if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_win/
      os: windows
      env:
        # put NIH in a non-cached location until
        # we come up with a way to stabilize that
        # cache on windows (minimize incremental changes)
        - CACHE_NAME=win_01
        - NIH_CACHE_ROOT=${TRAVIS_BUILD_DIR}/nih_c
        - VCPKG_DEFAULT_TRIPLET="x64-windows-static"
        - MATRIX_EVAL="CC=cl.exe && CXX=cl.exe"
        - BOOST_ROOT=${CACHE_DIR}/boost_1_75
        - >-
          CMAKE_ADD="
          -DCMAKE_PREFIX_PATH=${BOOST_ROOT}/_INSTALLED_
          -DBOOST_ROOT=${BOOST_ROOT}/_INSTALLED_
          -DBoost_ROOT=${BOOST_ROOT}/_INSTALLED_
          -DBoost_DIR=${BOOST_ROOT}/_INSTALLED_/lib/cmake/Boost-1.75.0
          -DBoost_COMPILER=vc141
          -DCMAKE_VERBOSE_MAKEFILE=ON
          -DCMAKE_TOOLCHAIN_FILE=${VCPKG_DIR}/scripts/buildsystems/vcpkg.cmake
          -DVCPKG_TARGET_TRIPLET=x64-windows-static"
      stage: windep-vcpkg
      name: prereq-vcpkg
      install:
        - choco upgrade cmake.install
        - choco install ninja visualstudio2017-workload-vctools -y
      script:
        - df -h
        - env
        - travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh openssl
        - travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh grpc
        - travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh libarchive[lz4]
        # TBD consider rocksdb via vcpkg if/when we can build with the
        # vcpkg version
        # - travis_wait ${MAX_TIME_MIN} bin/sh/install-vcpkg.sh rocksdb[snappy,lz4,zlib]
    - <<: *windows
      stage: windep-boost
      name: prereq-keep-boost
      install:
        - choco upgrade cmake.install
        - choco install ninja visualstudio2017-workload-vctools -y
        - choco install visualstudio2019buildtools visualstudio2019community visualstudio2019-workload-vctools -y
      script:
        - export BOOST_TOOLSET=msvc-14.1
        - travis_wait ${MAX_TIME_MIN} Builds/containers/shared/install_boost.sh
    - &windows-bld
      <<: *windows
      stage: build
      name: windows, debug
      before_script:
        - export BLD_CONFIG=Debug
      script:
        - df -h
        - . ./bin/sh/setup-msvc.sh
        - mkdir -p build.ms && cd build.ms
        - cmake -G Ninja ${CMAKE_EXTRA_ARGS} -DCMAKE_BUILD_TYPE=${BLD_CONFIG} ..
        - travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose
        # override num procs to force fewer unit test jobs
        - export NUM_PROCESSORS=2
        - travis_wait ${MAX_TIME_MIN} ./rippled.exe --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS}
    - <<: *windows-bld
      name: windows, release
      before_script:
        - export BLD_CONFIG=Release
    - <<: *windows-bld
      name: windows, visual studio, debug
      script:
        - mkdir -p build.ms && cd build.ms
        - export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} -DCMAKE_GENERATOR_TOOLSET=host=x64"
        - cmake -G "Visual Studio 15 2017 Win64" ${CMAKE_EXTRA_ARGS} ..
        - export DESTDIR=${PWD}/_installed_
        - travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose --config ${BLD_CONFIG} --target install
        # override num procs to force fewer unit test jobs
        - export NUM_PROCESSORS=2
        - >-
          travis_wait ${MAX_TIME_MIN} "./_installed_/Program Files/rippled/bin/rippled.exe" --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS}
    - <<: *windows-bld
      name: windows, vc2019
      install:
        - choco upgrade cmake.install
        - choco install ninja -y
        - choco install visualstudio2019buildtools visualstudio2019community visualstudio2019-workload-vctools -y
      before_script:
        - export BLD_CONFIG=Release
        # we want to use the boost build from cache, which was built using the
        # vs2017 compiler so we need to specify the Boost_COMPILER. BUT, we
        # can't use the cmake config files generated by boost b/c they are
        # broken for Boost_COMPILER override, so we need to specify both
        # Boost_NO_BOOST_CMAKE and a slightly different Boost_COMPILER string
        # to make the legacy find module work for us. If the cmake configs are
        # fixed in the future, it should be possible to remove these
        # workarounds.
        - export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} -DBoost_NO_BOOST_CMAKE=ON -DBoost_COMPILER=-vc141"

before_cache:
  - if [ $(uname) = "Linux" ] ; then SUDO="sudo"; else SUDO=""; fi
  - cd ${TRAVIS_HOME}
  - if [ -f cache_ignore.tar ] ; then $SUDO tar xvf cache_ignore.tar; fi
  - cd ${TRAVIS_BUILD_DIR}

cache:
  timeout: 900
  directories:
    - $CACHE_DIR

notifications:
  email: false
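Aside: nearly every job above is gated on the commit message. A minimal Python sketch (illustration only, not code from the repository) of how the `if: commit_message !~ /travis_run_/ OR commit_message =~ /travis_run_<tag>/` condition behaves:

```python
import re

def job_enabled(commit_message: str, tag: str) -> bool:
    # No travis_run_ marker in the message: every job runs.
    if not re.search(r"travis_run_", commit_message):
        return True
    # Otherwise only jobs whose marker is named explicitly run.
    return re.search(r"travis_run_" + re.escape(tag), commit_message) is not None

assert job_enabled("fix build", "linux")               # untagged: all jobs run
assert job_enabled("wip travis_run_linux", "linux")    # explicitly requested
assert not job_enabled("wip travis_run_mac", "linux")  # a different job was requested
```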
							
								
								
									
Builds/Test.py (405 lines deleted)

@@ -1,405 +0,0 @@
#!/usr/bin/env python

#    This file is part of rippled: https://github.com/ripple/rippled
#    Copyright (c) 2012 - 2017 Ripple Labs Inc.
#
#    Permission to use, copy, modify, and/or distribute this software for any
#    purpose  with  or without fee is hereby granted, provided that the above
#    copyright notice and this permission notice appear in all copies.
#
#    THE  SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
#    WITH  REGARD  TO  THIS  SOFTWARE  INCLUDING  ALL  IMPLIED  WARRANTIES  OF
#    MERCHANTABILITY  AND  FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
#    ANY  SPECIAL ,  DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
#    WHATSOEVER  RESULTING  FROM  LOSS  OF USE, DATA OR PROFITS, WHETHER IN AN
#    ACTION  OF  CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
#    OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

"""
Invocation:

    ./Builds/Test.py - builds and tests all configurations

The build must succeed without shell aliases for this to work.

To pass flags to cmake, put them at the very end of the command line, after
the -- flag - like this:

    ./Builds/Test.py -- -j4  # Pass -j4 to cmake --build


Common problems:

1) Boost not found. Solution: export BOOST_ROOT=[path to boost folder]

2) OpenSSL not found. Solution: export OPENSSL_ROOT=[path to OpenSSL folder]

3) cmake is not found. Solution: Be sure cmake directory is on your $PATH

"""
from __future__ import absolute_import, division, print_function, unicode_literals

import argparse
import itertools
import os
import platform
import re
import shutil
import sys
import subprocess


def powerset(iterable):
    """powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"""
    s = list(iterable)
    return itertools.chain.from_iterable(itertools.combinations(s, r) for r in range(len(s) + 1))

IS_WINDOWS = platform.system().lower() == 'windows'
IS_OS_X = platform.system().lower() == 'darwin'

# CMake
if IS_WINDOWS:
    CMAKE_UNITY_CONFIGS = ['Debug', 'Release']
    CMAKE_NONUNITY_CONFIGS = ['Debug', 'Release']
else:
    CMAKE_UNITY_CONFIGS = []
    CMAKE_NONUNITY_CONFIGS = []
CMAKE_UNITY_COMBOS = { '' : [['rippled'], CMAKE_UNITY_CONFIGS],
    '.nounity' : [['rippled'], CMAKE_NONUNITY_CONFIGS] }

if IS_WINDOWS:
    CMAKE_DIR_TARGETS = { ('msvc' + unity,) : targets for unity, targets in
        CMAKE_UNITY_COMBOS.items() }
elif IS_OS_X:
    CMAKE_DIR_TARGETS = { (build + unity,) : targets
                   for build in ['debug', 'release']
                   for unity, targets in CMAKE_UNITY_COMBOS.items() }
else:
    CMAKE_DIR_TARGETS = { (cc + "." + build + unity,) : targets
                   for cc in ['gcc', 'clang']
                   for build in ['debug', 'release', 'coverage', 'profile']
                   for unity, targets in CMAKE_UNITY_COMBOS.items() }

# list of tuples of all possible options
if IS_WINDOWS or IS_OS_X:
    CMAKE_ALL_GENERATE_OPTIONS = [tuple(x) for x in powerset(['-GNinja', '-Dassert=true'])]
else:
    CMAKE_ALL_GENERATE_OPTIONS = list(set(
        [tuple(x) for x in powerset(['-GNinja', '-Dstatic=true', '-Dassert=true', '-Dsan=address'])] +
        [tuple(x) for x in powerset(['-GNinja', '-Dstatic=true', '-Dassert=true', '-Dsan=thread'])]))

parser = argparse.ArgumentParser(
    description='Test.py - run ripple tests'
)

parser.add_argument(
    '--all', '-a',
    action='store_true',
    help='Build all configurations.',
)

parser.add_argument(
    '--keep_going', '-k',
    action='store_true',
    help='Keep going after one configuration has failed.',
)

parser.add_argument(
    '--silent', '-s',
    action='store_true',
    help='Silence all messages except errors',
)

parser.add_argument(
    '--verbose', '-v',
    action='store_true',
    help=('Report more information about which commands are executed and the '
          'results.'),
)

parser.add_argument(
    '--test', '-t',
    default='',
    help='Add a prefix for unit tests',
)

parser.add_argument(
    '--testjobs',
    default='0',
    type=int,
    help='Run tests in parallel'
)

parser.add_argument(
    '--ipv6',
    action='store_true',
    help='Use IPv6 localhost when running unit tests.',
)

parser.add_argument(
    '--clean', '-c',
    action='store_true',
    help='delete all build artifacts after testing',
)

parser.add_argument(
    '--quiet', '-q',
    action='store_true',
    help='Reduce output where possible (unit tests)',
)

parser.add_argument(
    '--dir', '-d',
    default=(),
    nargs='*',
    help='Specify one or more CMake dir names. '
        'Will also be used as -Dtarget=<dir> running cmake.'
)

parser.add_argument(
    '--target',
    default=(),
    nargs='*',
    help='Specify one or more CMake build targets. '
        'Will be used as --target <target> running cmake --build.'
    )

parser.add_argument(
    '--config',
    default=(),
    nargs='*',
    help='Specify one or more CMake build configs. '
        'Will be used as --config <config> running cmake --build.'
    )

parser.add_argument(
    '--generator_option',
    action='append',
    help='Specify a CMake generator option. Repeat for multiple options. '
        'Will be passed to the cmake generator. '
        'Due to limits of the argument parser, arguments starting with \'-\' '
        'must be attached to this option. e.g. --generator_option=-GNinja.')

parser.add_argument(
    '--build_option',
    action='append',
    help='Specify a build option. Repeat for multiple options. '
        'Will be passed to the build tool via cmake --build. '
        'Due to limits of the argument parser, arguments starting with \'-\' '
        'must be attached to this option. e.g. --build_option=-j8.')

parser.add_argument(
    'extra_args',
    default=(),
    nargs='*',
    help='Extra arguments are passed through to the tools'
)

ARGS = parser.parse_args()

def decodeString(line):
    # Python 2 vs. Python 3
    if isinstance(line, str):
        return line
    else:
        return line.decode()

def shell(cmd, args=(), silent=False, cust_env=None):
    """Execute a shell command and return the output."""
    silent = ARGS.silent or silent
    verbose = not silent and ARGS.verbose
    if verbose:
        print('$' + cmd, *args)

    command = (cmd,) + args

    # shell is needed in Windows to find executable in the path
    process = subprocess.Popen(
        command,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        env=cust_env,
        shell=IS_WINDOWS)
    lines = []
    count = 0
    # readline returns '' at EOF
    for line in iter(process.stdout.readline, ''):
        if process.poll() is None:
            decoded = decodeString(line)
            lines.append(decoded)
            if verbose:
                print(decoded, end='')
            elif not silent:
                count += 1
                if count >= 80:
                    print()
                    count = 0
                else:
                    print('.', end='')
        else:
            break

    if not verbose and count:
        print()
    process.wait()
    return process.returncode, lines

def get_cmake_dir(cmake_dir):
    return os.path.join('build' , 'cmake' , cmake_dir)

def run_cmake(directory, cmake_dir, args):
    print('Generating build in', directory, 'with', *args or ('default options',))
    old_dir = os.getcwd()
    if not os.path.exists(directory):
        os.makedirs(directory)
    os.chdir(directory)
    if IS_WINDOWS and not any(arg.startswith("-G") for arg in args) and not os.path.exists("CMakeCache.txt"):
        if '--ninja' in args:
            args += ( '-GNinja', )
        else:
            args += ( '-GVisual Studio 14 2015 Win64', )
    # hack to extract cmake options/args from the legacy target format
    if re.search(r'\.unity', cmake_dir):
        args += ( '-Dunity=ON', )
    if re.search(r'\.nounity', cmake_dir):
        args += ( '-Dunity=OFF', )
    if re.search('coverage', cmake_dir):
        args += ( '-Dcoverage=ON', )
    if re.search('profile', cmake_dir):
        args += ( '-Dprofile=ON', )
    if re.search('debug', cmake_dir):
        args += ( '-DCMAKE_BUILD_TYPE=Debug', )
    if re.search('release', cmake_dir):
        args += ( '-DCMAKE_BUILD_TYPE=Release', )
    m = re.search('gcc(-[^.]*)', cmake_dir)
    if m:
        args += ( '-DCMAKE_C_COMPILER=' + m.group(0),
          '-DCMAKE_CXX_COMPILER=g++' + m.group(1), )
    elif re.search('gcc', cmake_dir):
        args += ( '-DCMAKE_C_COMPILER=gcc', '-DCMAKE_CXX_COMPILER=g++', )
    m = re.search('clang(-[^.]*)', cmake_dir)
    if m:
        args += ( '-DCMAKE_C_COMPILER=' + m.group(0),
          '-DCMAKE_CXX_COMPILER=clang++' + m.group(1), )
    elif re.search('clang', cmake_dir):
        args += ( '-DCMAKE_C_COMPILER=clang', '-DCMAKE_CXX_COMPILER=clang++', )

    args += ( os.path.join('..', '..', '..'), )
    resultcode, lines = shell('cmake', args)

    if resultcode:
        print('Generating FAILED:')
        if not ARGS.verbose:
            print(*lines, sep='')
        sys.exit(1)

    os.chdir(old_dir)

def run_cmake_build(directory, target, config, args):
    print('Building', target, config, 'in', directory, 'with', *args or ('default options',))
    build_args=('--build', directory)
    if target:
      build_args += ('--target', target)
    if config:
      build_args += ('--config', config)
    if args:
        build_args += ('--',)
        build_args += tuple(args)
    resultcode, lines = shell('cmake', build_args)

    if resultcode:
        print('Build FAILED:')
        if not ARGS.verbose:
            print(*lines, sep='')
        sys.exit(1)

def run_cmake_tests(directory, target, config):
    failed = []
    if IS_WINDOWS:
        target += '.exe'
    executable = os.path.join(directory, config if config else 'Debug', target)
    if(not os.path.exists(executable)):
        executable = os.path.join(directory, target)
    print('Unit tests for', executable)
    testflag = '--unittest'
    quiet = ''
    testjobs = ''
    ipv6 = ''
    if ARGS.test:
        testflag += ('=' + ARGS.test)
    if ARGS.quiet:
        quiet = '-q'
    if ARGS.ipv6:
        ipv6 = '--unittest-ipv6'
    if ARGS.testjobs:
        testjobs = ('--unittest-jobs=' + str(ARGS.testjobs))
    resultcode, lines = shell(executable, (testflag, quiet, testjobs, ipv6))

    if resultcode:
        if not ARGS.verbose:
            print('ERROR:', *lines, sep='')
        failed.append([target, 'unittest'])

    return failed

def main():
    all_failed = []
    if ARGS.all:
        build_dir_targets = CMAKE_DIR_TARGETS
        generator_options = CMAKE_ALL_GENERATE_OPTIONS
    else:
        build_dir_targets = { tuple(ARGS.dir) : [ARGS.target, ARGS.config] }
        if ARGS.generator_option:
            generator_options = [tuple(ARGS.generator_option)]
        else:
            generator_options = [tuple()]

    if not build_dir_targets:
        # Let CMake choose the build tool.
        build_dir_targets = { () : [] }

    if ARGS.build_option:
        ARGS.build_option = ARGS.build_option + list(ARGS.extra_args)
    else:
        ARGS.build_option = list(ARGS.extra_args)

    for args in generator_options:
        for build_dirs, (build_targets, build_configs) in build_dir_targets.items():
            if not build_dirs:
                build_dirs = ('default',)
            if not build_targets:
                build_targets = ('rippled',)
            if not build_configs:
                build_configs = ('',)
            for cmake_dir in build_dirs:
                cmake_full_dir = get_cmake_dir(cmake_dir)
                run_cmake(cmake_full_dir, cmake_dir, args)

                for target in build_targets:
                    for config in build_configs:
                        run_cmake_build(cmake_full_dir, target, config, ARGS.build_option)
                        failed = run_cmake_tests(cmake_full_dir, target, config)

                        if failed:
                            print('FAILED:', *(':'.join(f) for f in failed))
                            if not ARGS.keep_going:
                                sys.exit(1)
                            else:
                                all_failed.extend([decodeString(cmake_dir +
                                        "." + target + "." + config), ':'.join(f)]
                                    for f in failed)
                        else:
                            print('Success')
                if ARGS.clean:
                    shutil.rmtree(cmake_full_dir)

    if all_failed:
        if len(all_failed) > 1:
            print()
            print('FAILED:', *(':'.join(f) for f in all_failed))
        sys.exit(1)

if __name__ == '__main__':
    main()
    sys.exit(0)
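Aside: the `powerset` helper above drives `CMAKE_ALL_GENERATE_OPTIONS`; every subset of the option list becomes one build configuration. A runnable excerpt showing the expansion for the Windows/macOS option list:

```python
from itertools import chain, combinations

def powerset(iterable):
    # Verbatim from Test.py: all subsets of the input, smallest first.
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

print([tuple(x) for x in powerset(["-GNinja", "-Dassert=true"])])
# [(), ('-GNinja',), ('-Dassert=true',), ('-GNinja', '-Dassert=true')]
```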
@@ -1 +0,0 @@
[Build instructions are currently located in `BUILD.md`](../../BUILD.md)
@@ -1,45 +0,0 @@
{
  // See https://go.microsoft.com/fwlink/?linkid=834763 for more information about this file.
  "configurations": [
    {
      "name": "x64-Debug",
      "generator": "Visual Studio 16 2019",
      "configurationType": "Debug",
      "inheritEnvironments": [ "msvc_x64_x64" ],
      "buildRoot": "${thisFileDir}\\build\\${name}",
      "cmakeCommandArgs": "",
      "buildCommandArgs": "-v:minimal",
      "ctestCommandArgs": "",
      "variables": [
        {
          "name": "BOOST_ROOT",
          "value": "C:\\lib\\boost"
        },
        {
          "name": "OPENSSL_ROOT",
          "value": "C:\\lib\\OpenSSL-Win64"
        }
      ]
    },
    {
      "name": "x64-Release",
      "generator": "Visual Studio 16 2019",
      "configurationType": "Release",
      "inheritEnvironments": [ "msvc_x64_x64" ],
      "buildRoot": "${thisFileDir}\\build\\${name}",
      "cmakeCommandArgs": "",
      "buildCommandArgs": "-v:minimal",
      "ctestCommandArgs": "",
      "variables": [
        {
          "name": "BOOST_ROOT",
          "value": "C:\\lib\\boost"
        },
        {
          "name": "OPENSSL_ROOT",
          "value": "C:\\lib\\OpenSSL-Win64"
        }
      ]
    }
  ]
}
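Aside: the deleted JSON above is a Visual Studio `CMakeSettings.json` (the `fwlink/?linkid=834763` comment links to its documentation); each entry in `variables` is passed to CMake as a cache variable, which is how the IDE-driven build located Boost and OpenSSL.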
@@ -1,263 +0,0 @@
# Visual Studio 2019 Build Instructions

## Important

We do not recommend Windows for rippled production use at this time. Currently,
the Ubuntu platform has received the highest level of quality assurance,
testing, and support. Additionally, 32-bit Windows versions are not supported.

## Prerequisites

To clone the source code repository, create branches for inspection or
modification, build rippled under Visual Studio, and run the unit tests, you
will need these software components:

| Component | Minimum Recommended Version |
|-----------|-----------------------------|
| [Visual Studio 2019](README.md#install-visual-studio-2019) | 16.0 |
| [Git for Windows](README.md#install-git-for-windows) | 2.16.1 |
| [OpenSSL Library](README.md#install-openssl) | 1.1.1L |
| [Boost library](README.md#build-boost) | 1.70.0 |
| [CMake for Windows](README.md#optional-install-cmake-for-windows)* | 3.12 |

\* Only needed if you are not using the integrated CMake in VS 2019 and prefer generating dedicated project/solution files.
## Install Software

### Install Visual Studio 2019

If not already installed on your system, download your choice of installer from
the [Visual Studio 2019
Download](https://www.visualstudio.com/downloads/download-visual-studio-vs)
page, run the installer, and follow the directions. **You may need to choose the
`Desktop development with C++` workload to install all necessary C++ features.**

Any version of Visual Studio 2019 may be used to build rippled. The **Visual
Studio 2019 Community** edition is available free of charge (see [the product
page](https://www.visualstudio.com/products/visual-studio-community-vs) for
licensing details), while paid editions may be used for an initial free-trial
period.

### Install Git for Windows

Git is a distributed revision control system. The Windows version also provides
the bash shell and many Windows versions of Unix commands. While there are other
varieties of Git (such as TortoiseGit, which has a native Windows interface and
integrates with the Explorer shell), we recommend installing [Git for
Windows](https://git-scm.com/) since it provides a Unix-like command line
environment useful for running shell scripts. Use of the bash shell under
Windows is mandatory for running the unit tests.

### Install OpenSSL

[Download the latest version of
OpenSSL.](http://slproweb.com/products/Win32OpenSSL.html) There will be
several `Win64` variants available; you want the non-light `v1.1` line. As of
this writing, you **should** select

* Win64 OpenSSL v1.1.1q

and should **not** select

* Anything with "Win32" in the name
* Anything with "light" in the name
* Anything with "EXPERIMENTAL" in the name
* Anything in the 3.0 line - rippled won't currently build with this version.

Run the installer, and choose an appropriate location for your OpenSSL
installation. In this guide we use `C:\lib\OpenSSL-Win64` as the destination
location.

You may be informed on running the installer that "Visual C++ 2008
Redistributables" must be installed first. If so, download it from the
[same page](http://slproweb.com/products/Win32OpenSSL.html), again making sure
to get the correct 32-/64-bit variant.

* NOTE: Since rippled links statically to OpenSSL, it does not matter where the
  OpenSSL .DLL files are placed, or what version they are. rippled does not use
  or require any external .DLL files to run other than the standard operating
  system ones.
### Build Boost

Boost 1.70 or later is required.

[Download boost](http://www.boost.org/users/download/) and unpack it
to `c:\lib`. As of this writing, the most recent version of boost is 1.80.0,
which will unpack into a directory named `boost_1_80_0`. We recommend either
renaming this directory to `boost`, or creating a junction link `mklink /J boost
boost_1_80_0`, so that you can more easily switch between versions.
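For example, the junction approach boils down to the following sketch, assuming the 1.80.0 archive was unpacked under `C:\lib` as above (`mklink` is a cmd built-in, so run this from a Command Prompt rather than PowerShell):

```
cd C:\lib
mklink /J boost boost_1_80_0
```
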
Next, open a **Developer Command Prompt** and type the following commands:

```powershell
cd C:\lib\boost
bootstrap
```

The rippled application is linked statically to the standard runtimes and
external dependencies on Windows, to ensure that the behavior of the executable
is not affected by changes in outside files. Therefore, it is necessary to build
the required boost static libraries using this command:

```powershell
b2 -j<Num Parallel> --toolset=msvc-14.2 address-model=64 architecture=x86 link=static threading=multi runtime-link=shared,static stage
```

where you should replace `<Num Parallel>` with the number of parallel
invocations to use for the build, e.g. `b2 -j8 ...` would use up to 8 concurrent
build shell commands.

Building the boost libraries may take considerable time. When the build process
is completed, take note of both the reported compiler include paths and linker
library paths as they will be required later.
### (Optional) Install CMake for Windows

[CMake](http://cmake.org) is a cross-platform build system generator. Visual
Studio 2019 includes an integrated version of CMake that avoids having to
manually run CMake, but it is undergoing continuous improvement. Users that
prefer to use standard Visual Studio project and solution files need to install
a dedicated version of CMake to generate them. The latest version can be found
at the [CMake download site](https://cmake.org/download/). It is recommended you
select the install option to add CMake to your path.
## Clone the rippled repository

If you are familiar with cloning GitHub repositories, just follow your normal
process and clone `git@github.com:ripple/rippled.git`. Otherwise follow this
section for instructions.

1. If you don't have a GitHub account, sign up for one at
   [github.com](https://github.com/).
2. Make sure you have GitHub SSH keys. For help see
   [generating-ssh-keys](https://help.github.com/articles/generating-ssh-keys).

Open the "Git Bash" shell that was installed with "Git for Windows" in the step
above. Navigate to the directory where you want to clone rippled (Git Bash uses
`/c` for Windows's `C:` and forward slashes where Windows uses backslashes, so
`C:\Users\joe\projs` would be `/c/Users/joe/projs` in Git Bash). Now clone the
repository and optionally switch to the *master* branch. Type the following at
the bash prompt:

```powershell
git clone git@github.com:XRPLF/rippled.git
cd rippled
```

If you receive an error about not having the "correct access rights", make sure
you have GitHub SSH keys, as described above.

For a stable release, choose the `master` branch or one of the tagged releases
listed on [rippled's GitHub page](https://github.com/ripple/rippled/releases).

```
git checkout master
```

To test the latest release candidate, choose the `release` branch.

```
git checkout release
```

If you are doing development work and want the latest set of beta features,
you can consider using the `develop` branch instead.

```
git checkout develop
```
# Build using Visual Studio integrated CMake

In Visual Studio 2017, Microsoft added [integrated IDE support for
cmake](https://blogs.msdn.microsoft.com/vcblog/2016/10/05/cmake-support-in-visual-studio/).
To begin, simply:

1. Launch Visual Studio and choose **File | Open | Folder**, navigating to the
   cloned rippled folder.
2. Right-click on `CMakeLists.txt` in the **Solution Explorer - Folder View** to
   generate a `CMakeSettings.json` file. A sample settings file is provided
   [here](/Builds/VisualStudio2019/CMakeSettings-example.json). Customize the
   settings for `BOOST_ROOT` and `OPENSSL_ROOT` to match the install paths if
   they differ from those in the file.
3. Select either the `x64-Release` or `x64-Debug` configuration from the
   **Project Settings** drop-down. This should invoke the built-in CMake project
   generator. If not, you can right-click on the `CMakeLists.txt` file and
   choose **Configure rippled**.
4. Select the `rippled.exe`
   option in the **Select Startup Item** drop-down. This will be the target
   built when you press F7. Alternatively, you can choose a target to build from
   the top-level **CMake | Build** menu. Note that at this time, there are other
   targets listed that come from third-party Visual Studio files embedded in the
   rippled repo, e.g. `datagen.vcxproj`. Please ignore them.

For details on configuring debugging sessions or further customization of CMake,
please refer to the [CMake tools for VS
documentation](https://docs.microsoft.com/en-us/cpp/ide/cmake-tools-for-visual-cpp).

If using the provided `CMakeSettings.json` file, the executable will be in
```
.\build\x64-Release\Release\rippled.exe
```
or
```
.\build\x64-Debug\Debug\rippled.exe
```
These paths are relative to your cloned git repository.
# Build using stand-alone CMake

This requires having installed [CMake for
Windows](README.md#optional-install-cmake-for-windows). We do not recommend
mixing this method with the integrated CMake method for the same repository
clone. Assuming you included the cmake executable folder in your path,
execute the following commands within your `rippled` cloned repository:

```
mkdir build\cmake
cd build\cmake
cmake ..\.. -G"Visual Studio 16 2019" -Ax64 -DBOOST_ROOT="C:\lib\boost" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64" -DCMAKE_GENERATOR_TOOLSET=host=x64
```

Now launch Visual Studio 2019 and select **File | Open | Project/Solution**.
Navigate to the `build\cmake` folder created above and select the `rippled.sln`
file. You can then choose whether to build the `Debug` or `Release` solution
configuration.
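If you prefer to stay on the command line rather than opening the solution in
the IDE, the generated solution can also be driven through CMake; a minimal
sketch, run from the `build\cmake` directory created above:

```
cmake --build . --config Release
```
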
The executable will be in
```
.\build\cmake\Release\rippled.exe
```
or
```
.\build\cmake\Debug\rippled.exe
```
These paths are relative to your cloned git repository.
# Unity/No-Unity Builds

The rippled build system defaults to using
[unity source files](http://onqtam.com/programming/2018-07-07-unity-builds/)
to improve build times. In some cases it might be desirable to disable the
unity build and compile individual translation units. Here is how you can
switch to a "no-unity" build configuration:

## Visual Studio Integrated CMake

Edit your `CMakeSettings.json` (described above) by adding `-Dunity=OFF`
to the `cmakeCommandArgs` entry for each build configuration.
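For example, the relevant fragment of a configuration entry would look like
this sketch (field names as in the sample `CMakeSettings.json` shown earlier):

```
"cmakeCommandArgs": "-Dunity=OFF",
```
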
## Standalone CMake Builds

When running cmake to generate the Visual Studio project files, add
`-Dunity=OFF` to the command line options passed to cmake.
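Combined with the stand-alone generation command from earlier, the full
no-unity configure step would look like this sketch:

```
cmake ..\.. -G"Visual Studio 16 2019" -Ax64 -DBOOST_ROOT="C:\lib\boost" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64" -DCMAKE_GENERATOR_TOOLSET=host=x64 -Dunity=OFF
```
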
**Note:** you will need to re-run the cmake configuration step anytime you
want to switch between unity/no-unity builds.
# Unit Test (Recommended)

`rippled` builds a set of unit tests into the server executable. To run these
unit tests after building, pass the `--unittest` option to the compiled
`rippled` executable. The executable will exit with summary info after running
the unit tests.
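For example, using the integrated-CMake Release layout described above:

```
.\build\x64-Release\Release\rippled.exe --unittest
```
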
@@ -1,7 +0,0 @@
#!/usr/bin/env bash

num_procs=$(lscpu -p | grep -v '^#' | sort -u -t, -k 2,4 | wc -l) # number of physical cores

path=$(cd "$(dirname "$0")" && pwd)
cd "$(dirname "$path")"
"${path}/Test.py" -a -c --testjobs="${num_procs}" -- -j"${num_procs}"
@@ -1,31 +0,0 @@

# rippled Packaging and Containers

This folder contains docker container definitions and configuration
files to support building rpm and deb packages of rippled. The container
definitions include some additional software/packages that are used
for general build/test CI workflows of rippled but are not explicitly
needed for the package building workflow.

## CMake Targets

If you have docker installed on your local system, then the main
CMake file will enable several targets related to building packages:
`rpm_container`, `rpm`, `dpkg_container`, and `dpkg`. The package targets
depend on the container targets and will trigger a build of those first.
The container builds can take several dozen minutes to complete (depending
on hardware specs), so quick build cycles are not possible currently. As
such, these targets are often best suited to CI/automated build systems.

The package build can be invoked like any other cmake target from the
rippled root folder:
```
mkdir -p build/pkg && cd build/pkg
cmake -Dpackages_only=ON ../..
cmake --build . --target rpm
```
Upon successful completion, the generated package files will be in
the `build/pkg/packages` directory. For deb packages, simply replace
`rpm` with `dpkg` in the build command above, as sketched below.
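For example, the deb equivalent of the sequence above is:

```
mkdir -p build/pkg && cd build/pkg
cmake -Dpackages_only=ON ../..
cmake --build . --target dpkg
```
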
@@ -1,26 +0,0 @@
FROM rippleci/centos:7
ARG GIT_COMMIT=unknown
ARG CI_USE=false

LABEL git-commit=$GIT_COMMIT

COPY centos-builder/centos_setup.sh /tmp/
COPY shared/install_cmake.sh /tmp/
RUN chmod +x /tmp/centos_setup.sh && \
    chmod +x /tmp/install_cmake.sh
RUN /tmp/centos_setup.sh

RUN /tmp/install_cmake.sh 3.16.3 /opt/local/cmake-3.16
RUN ln -s /opt/local/cmake-3.16 /opt/local/cmake
ENV PATH="/opt/local/cmake/bin:$PATH"
# TODO: Install latest CMake for testing
RUN if [ "${CI_USE}" = true ] ; then /tmp/install_cmake.sh 3.16.3 /opt/local/cmake-3.16; fi

RUN mkdir -m 777 -p /opt/rippled_bld/pkg

WORKDIR /opt/rippled_bld/pkg
RUN mkdir -m 777 ./rpmbuild
RUN mkdir -m 777 ./rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}

COPY packaging/rpm/build_rpm.sh ./
CMD ./build_rpm.sh
@@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -ex

source /etc/os-release

yum -y upgrade
yum -y update
yum -y install epel-release centos-release-scl
yum -y install \
    wget curl time gcc-c++ yum-utils autoconf automake pkgconfig libtool \
    libstdc++-static rpm-build gnupg which make cmake \
    devtoolset-11 devtoolset-11-gdb devtoolset-11-binutils devtoolset-11-libstdc++-devel \
    devtoolset-11-libasan-devel devtoolset-11-libtsan-devel devtoolset-11-libubsan-devel devtoolset-11-liblsan-devel \
    flex flex-devel bison bison-devel parallel \
    ncurses ncurses-devel ncurses-libs graphviz graphviz-devel \
    lzip p7zip bzip2 bzip2-devel lzma-sdk lzma-sdk-devel xz-devel \
    zlib zlib-devel zlib-static texinfo openssl openssl-static \
    jemalloc jemalloc-devel \
    libicu-devel htop \
    rh-python38 \
    ninja-build git svn \
    swig perl-Digest-MD5
@@ -1,28 +0,0 @@
#!/usr/bin/env sh
set -ex
pkgtype=$1
if [ "${pkgtype}" = "rpm" ] ; then
    container_name="${RPM_CONTAINER_NAME}"
elif [ "${pkgtype}" = "dpkg" ] ; then
    container_name="${DPKG_CONTAINER_NAME}"
else
    echo "invalid package type"
    exit 1
fi

if docker pull "${ARTIFACTORY_HUB}/${container_name}:latest_${CI_COMMIT_REF_SLUG}"; then
    echo "found container for latest - using as cache."
    docker tag \
       "${ARTIFACTORY_HUB}/${container_name}:latest_${CI_COMMIT_REF_SLUG}" \
       "${container_name}:latest_${CI_COMMIT_REF_SLUG}"
    CMAKE_EXTRA="-D${pkgtype}_cache_from=${container_name}:latest_${CI_COMMIT_REF_SLUG}"
fi

cmake --version
test -d build && rm -rf build
mkdir -p build/container && cd build/container
eval time \
    cmake -Dpackages_only=ON -DCMAKE_VERBOSE_MAKEFILE=ON ${CMAKE_EXTRA} \
    -G Ninja ../..
time cmake --build . --target "${pkgtype}_container" -- -v
@@ -1,28 +0,0 @@
#!/usr/bin/env sh
set -ex
pkgtype=$1
if [ "${pkgtype}" = "rpm" ] ; then
    container_name="${RPM_CONTAINER_FULLNAME}"
    container_tag="${RPM_CONTAINER_TAG}"
elif [ "${pkgtype}" = "dpkg" ] ; then
    container_name="${DPKG_CONTAINER_FULLNAME}"
    container_tag="${DPKG_CONTAINER_TAG}"
else
    echo "invalid package type"
    exit 1
fi
time docker pull "${ARTIFACTORY_HUB}/${container_name}"
docker tag \
  "${ARTIFACTORY_HUB}/${container_name}" \
  "${container_name}"
docker images
test -d build && rm -rf build
mkdir -p build/${pkgtype} && cd build/${pkgtype}
time cmake \
  -Dpackages_only=ON \
  -Dcontainer_label="${container_tag}" \
  -Dhave_package_container=ON \
  -DCMAKE_VERBOSE_MAKEFILE=ON \
  -Dunity=OFF \
  -G Ninja ../..
time cmake --build . --target ${pkgtype} -- -v
@@ -1,15 +0,0 @@
#!/usr/bin/env sh
set -e
# used as a before/setup script for docker steps in gitlab-ci
# expects to be run in standard alpine/dind image
echo $(nproc)
docker login -u rippled \
    -p ${ARTIFACTORY_DEPLOY_KEY_RIPPLED} ${ARTIFACTORY_HUB}
apk add --update py-pip
apk add \
    bash util-linux coreutils binutils grep \
    make ninja cmake build-base gcc g++ abuild git \
    python3 python3-dev
pip3 install awscli
# list curdir contents to build log:
ls -la
@@ -1,16 +0,0 @@
#!/usr/bin/env sh
case ${CI_COMMIT_REF_NAME} in
    develop)
        export COMPONENT="nightly"
        ;;
    release)
        export COMPONENT="unstable"
        ;;
    master)
        export COMPONENT="stable"
        ;;
    *)
        export COMPONENT="_unknown_"
        ;;
esac
@@ -1,646 +0,0 @@
#########################################################################
##                                                                     ##
##  gitlab CI definition for rippled build containers and distro       ##
##  packages (rpm and dpkg).                                           ##
##                                                                     ##
#########################################################################

# NOTE: these are sensible defaults for Ripple pipelines. These
# can be overridden by project or group variables as needed.
variables:
  # these containers are built manually using the rippled
  # cmake build (container targets) and tagged/pushed so they
  # can be used here
  RPM_CONTAINER_TAG: "2023-02-13"
  RPM_CONTAINER_NAME: "rippled-rpm-builder"
  RPM_CONTAINER_FULLNAME: "${RPM_CONTAINER_NAME}:${RPM_CONTAINER_TAG}"
  DPKG_CONTAINER_TAG: "2023-03-20"
  DPKG_CONTAINER_NAME: "rippled-dpkg-builder"
  DPKG_CONTAINER_FULLNAME: "${DPKG_CONTAINER_NAME}:${DPKG_CONTAINER_TAG}"
  ARTIFACTORY_HOST: "artifactory.ops.ripple.com"
  ARTIFACTORY_HUB: "${ARTIFACTORY_HOST}:6555"
  GIT_SIGN_PUBKEYS_URL: "https://gitlab.ops.ripple.com/xrpledger/rippled-packages/snippets/49/raw"
  PUBLIC_REPO_ROOT: "https://repos.ripple.com/repos"
  # also need to define this variable ONLY for the primary
  # build/publish pipeline on the mainline repo:
  #   IS_PRIMARY_REPO = "true"

stages:
  - build_packages
  - sign_packages
  - smoketest
  - verify_sig
  - tag_images
  - push_to_test
  - verify_from_test
  - wait_approval_prod
  - push_to_prod
  - verify_from_prod
  - get_final_hashes
  - build_containers
.dind_template: &dind_param
  before_script:
    - . ./Builds/containers/gitlab-ci/docker_alpine_setup.sh
  variables:
    docker_driver: overlay2
    DOCKER_TLS_CERTDIR: ""
  image:
    name: artifactory.ops.ripple.com/docker:latest
  services:
    # workaround for TLS issues - consider going back
    # to unversioned `dind` when issues are resolved
    - name: artifactory.ops.ripple.com/docker:stable-dind
      alias: docker
  tags:
    - 4xlarge
.only_primary_template: &only_primary
  only:
    refs:
      - /^(master|release|develop)$/
    variables:
      - $IS_PRIMARY_REPO == "true"

.smoketest_local_template: &run_local_smoketest
  tags:
    - xlarge
  script:
    - . ./Builds/containers/gitlab-ci/smoketest.sh local

.smoketest_repo_template: &run_repo_smoketest
  tags:
    - xlarge
  script:
    - . ./Builds/containers/gitlab-ci/smoketest.sh repo
#########################################################################
##                                                                     ##
##  stage: build_packages                                              ##
##                                                                     ##
##  build packages using containers from previous stage.               ##
##                                                                     ##
#########################################################################

rpm_build:
  timeout: "1h 30m"
  stage: build_packages
  <<: *dind_param
  artifacts:
    paths:
      - build/rpm/packages/
  script:
    - . ./Builds/containers/gitlab-ci/build_package.sh rpm

dpkg_build:
  timeout: "1h 30m"
  stage: build_packages
  <<: *dind_param
  artifacts:
    paths:
      - build/dpkg/packages/
  script:
    - . ./Builds/containers/gitlab-ci/build_package.sh dpkg
#########################################################################
##                                                                     ##
##  stage: sign_packages                                               ##
##                                                                     ##
##  sign packages built in the previous stage.                         ##
##                                                                     ##
#########################################################################

rpm_sign:
  stage: sign_packages
  dependencies:
    - rpm_build
  image:
    name: artifactory.ops.ripple.com/centos:7
  <<: *only_primary
  before_script:
  - |
    # Make sure GnuPG is installed
    yum -y install gnupg rpm-sign
    # checking GPG signing support
    if [ -n "$GPG_KEY_B64" ]; then
      echo "$GPG_KEY_B64"| base64 -d | gpg --batch --no-tty --allow-secret-key-import --import -
      unset GPG_KEY_B64
      export GPG_PASSPHRASE=$(echo $GPG_KEY_PASS_B64 | base64 -di)
      unset GPG_KEY_PASS_B64
      export GPG_KEYID=$(gpg --with-colon --list-secret-keys | head -n1 | cut -d : -f 5)
    else
      echo -e "\033[0;31m****** GPG signing disabled ******\033[0m"
      exit 1
    fi
  artifacts:
    paths:
      - build/rpm/packages/
  script:
    - ls -alh build/rpm/packages
    - . ./Builds/containers/gitlab-ci/sign_package.sh rpm

dpkg_sign:
  stage: sign_packages
  dependencies:
    - dpkg_build
  image:
    name: artifactory.ops.ripple.com/ubuntu:18.04
  <<: *only_primary
  before_script:
  - |
    # make sure we have GnuPG
    apt update
    apt install -y gpg dpkg-sig
    # checking GPG signing support
    if [ -n "$GPG_KEY_B64" ]; then
      echo "$GPG_KEY_B64"| base64 -d | gpg --batch --no-tty --allow-secret-key-import --import -
      unset GPG_KEY_B64
      export GPG_PASSPHRASE=$(echo $GPG_KEY_PASS_B64 | base64 -di)
      unset GPG_KEY_PASS_B64
      export GPG_KEYID=$(gpg --with-colon --list-secret-keys | head -n1 | cut -d : -f 5)
    else
      echo -e "\033[0;31m****** GPG signing disabled ******\033[0m"
      exit 1
    fi
  artifacts:
    paths:
      - build/dpkg/packages/
  script:
    - ls -alh build/dpkg/packages
    - . ./Builds/containers/gitlab-ci/sign_package.sh dpkg
#########################################################################
##                                                                     ##
##  stage: smoketest                                                   ##
##                                                                     ##
##  install unsigned packages from previous step and run unit tests.   ##
##                                                                     ##
#########################################################################

centos_7_smoketest:
  stage: smoketest
  dependencies:
    - rpm_build
  image:
    name: artifactory.ops.ripple.com/centos:7
  <<: *run_local_smoketest

rocky_8_smoketest:
  stage: smoketest
  dependencies:
    - rpm_build
  image:
    name: artifactory.ops.ripple.com/rockylinux/rockylinux:8
  <<: *run_local_smoketest

fedora_37_smoketest:
  stage: smoketest
  dependencies:
    - rpm_build
  image:
    name: artifactory.ops.ripple.com/fedora:37
  <<: *run_local_smoketest

fedora_38_smoketest:
  stage: smoketest
  dependencies:
    - rpm_build
  image:
    name: artifactory.ops.ripple.com/fedora:38
  <<: *run_local_smoketest

ubuntu_18_smoketest:
  stage: smoketest
  dependencies:
    - dpkg_build
  image:
    name: artifactory.ops.ripple.com/ubuntu:18.04
  <<: *run_local_smoketest

ubuntu_20_smoketest:
  stage: smoketest
  dependencies:
    - dpkg_build
  image:
    name: artifactory.ops.ripple.com/ubuntu:20.04
  <<: *run_local_smoketest

ubuntu_22_smoketest:
  stage: smoketest
  dependencies:
    - dpkg_build
  image:
    name: artifactory.ops.ripple.com/ubuntu:22.04
  <<: *run_local_smoketest

debian_10_smoketest:
  stage: smoketest
  dependencies:
    - dpkg_build
  image:
    name: artifactory.ops.ripple.com/debian:10
  <<: *run_local_smoketest

debian_11_smoketest:
  stage: smoketest
  dependencies:
    - dpkg_build
  image:
    name: artifactory.ops.ripple.com/debian:11
  <<: *run_local_smoketest
#########################################################################
##                                                                     ##
##  stage: verify_sig                                                  ##
##                                                                     ##
##  use git/gpg to verify that HEAD is signed by an approved           ##
##  committer. The whitelist of pubkeys is manually maintained         ##
##  and fetched from GIT_SIGN_PUBKEYS_URL (currently a snippet         ##
##  link).                                                             ##
##  ONLY RUNS FOR PRIMARY BRANCHES/REPO                                ##
##                                                                     ##
#########################################################################

verify_head_signed:
  stage: verify_sig
  image:
    name: artifactory.ops.ripple.com/ubuntu:latest
  <<: *only_primary
  script:
    - . ./Builds/containers/gitlab-ci/verify_head_commit.sh
#########################################################################
##                                                                     ##
##  stage: tag_images                                                  ##
##                                                                     ##
##  apply rippled version tag to containers from previous stage.       ##
##  ONLY RUNS FOR PRIMARY BRANCHES/REPO                                ##
##                                                                     ##
#########################################################################

tag_bld_images:
  stage: tag_images
  variables:
    docker_driver: overlay2
    DOCKER_TLS_CERTDIR: ""
  image:
    name: artifactory.ops.ripple.com/docker:latest
  services:
    # workaround for TLS issues - consider going back
    # to unversioned `dind` when issues are resolved
    - name: artifactory.ops.ripple.com/docker:stable-dind
      alias: docker
  tags:
    - large
  dependencies:
    - rpm_sign
    - dpkg_sign
  <<: *only_primary
  script:
    - . ./Builds/containers/gitlab-ci/tag_docker_image.sh
#########################################################################
##                                                                     ##
##  stage: push_to_test                                                ##
##                                                                     ##
##  push packages to artifactory repositories (test)                   ##
##  ONLY RUNS FOR PRIMARY BRANCHES/REPO                                ##
##                                                                     ##
#########################################################################

push_test:
  stage: push_to_test
  variables:
    DEB_REPO: "rippled-deb-test-mirror"
    RPM_REPO: "rippled-rpm-test-mirror"
  image:
    name: artifactory.ops.ripple.com/alpine:latest
  artifacts:
    paths:
      - files.info
  dependencies:
    - rpm_sign
    - dpkg_sign
  <<: *only_primary
  script:
    - . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "PUT" "."
#########################################################################
##                                                                     ##
##  stage: verify_from_test                                            ##
##                                                                     ##
##  install/test packages from test repos.                             ##
##  ONLY RUNS FOR PRIMARY BRANCHES/REPO                                ##
##                                                                     ##
#########################################################################

centos_7_verify_repo_test:
  stage: verify_from_test
  variables:
    RPM_REPO: "rippled-rpm-test-mirror"
  image:
    name: artifactory.ops.ripple.com/centos:7
  dependencies:
    - rpm_sign
  <<: *only_primary
  <<: *run_repo_smoketest

rocky_8_verify_repo_test:
  stage: verify_from_test
  variables:
    RPM_REPO: "rippled-rpm-test-mirror"
  image:
    name: artifactory.ops.ripple.com/rockylinux/rockylinux:8
  dependencies:
    - rpm_sign
  <<: *only_primary
  <<: *run_repo_smoketest

fedora_37_verify_repo_test:
  stage: verify_from_test
  variables:
    RPM_REPO: "rippled-rpm-test-mirror"
  image:
    name: artifactory.ops.ripple.com/fedora:37
  dependencies:
    - rpm_sign
  <<: *only_primary
  <<: *run_repo_smoketest

fedora_38_verify_repo_test:
  stage: verify_from_test
  variables:
    RPM_REPO: "rippled-rpm-test-mirror"
  image:
    name: artifactory.ops.ripple.com/fedora:38
  dependencies:
    - rpm_sign
  <<: *only_primary
  <<: *run_repo_smoketest

ubuntu_18_verify_repo_test:
  stage: verify_from_test
  variables:
    DISTRO: "bionic"
    DEB_REPO: "rippled-deb-test-mirror"
  image:
    name: artifactory.ops.ripple.com/ubuntu:18.04
  dependencies:
    - dpkg_sign
  <<: *only_primary
  <<: *run_repo_smoketest

ubuntu_20_verify_repo_test:
  stage: verify_from_test
  variables:
    DISTRO: "focal"
    DEB_REPO: "rippled-deb-test-mirror"
  image:
    name: artifactory.ops.ripple.com/ubuntu:20.04
  dependencies:
    - dpkg_sign
  <<: *only_primary
  <<: *run_repo_smoketest

ubuntu_22_verify_repo_test:
  stage: verify_from_test
  variables:
    DISTRO: "jammy"
    DEB_REPO: "rippled-deb-test-mirror"
  image:
    name: artifactory.ops.ripple.com/ubuntu:22.04
  dependencies:
    - dpkg_sign
  <<: *only_primary
  <<: *run_repo_smoketest

debian_10_verify_repo_test:
  stage: verify_from_test
  variables:
    DISTRO: "buster"
    DEB_REPO: "rippled-deb-test-mirror"
  image:
    name: artifactory.ops.ripple.com/debian:10
  dependencies:
    - dpkg_sign
  <<: *only_primary
  <<: *run_repo_smoketest

debian_11_verify_repo_test:
  stage: verify_from_test
  variables:
    DISTRO: "bullseye"
    DEB_REPO: "rippled-deb-test-mirror"
  image:
    name: artifactory.ops.ripple.com/debian:11
  dependencies:
    - dpkg_sign
  <<: *only_primary
  <<: *run_repo_smoketest
#########################################################################
##                                                                     ##
##  stage: wait_approval_prod                                          ##
##                                                                     ##
##  wait for manual approval before proceeding to next stage           ##
##  which pushes to prod repo.                                         ##
##  ONLY RUNS FOR PRIMARY BRANCHES/REPO                                ##
##                                                                     ##
#########################################################################
wait_before_push_prod:
  stage: wait_approval_prod
  image:
    name: artifactory.ops.ripple.com/alpine:latest
  <<: *only_primary
  script:
    - echo "proceeding to next stage"
  when: manual
  allow_failure: false
#########################################################################
##                                                                     ##
##  stage: push_to_prod                                                ##
##                                                                     ##
##  push packages to artifactory repositories (prod)                   ##
##  ONLY RUNS FOR PRIMARY BRANCHES/REPO                                ##
##                                                                     ##
#########################################################################

push_prod:
  variables:
    DEB_REPO: "rippled-deb"
    RPM_REPO: "rippled-rpm"
  image:
    name: artifactory.ops.ripple.com/alpine:latest
  stage: push_to_prod
  artifacts:
    paths:
      - files.info
  dependencies:
    - rpm_sign
    - dpkg_sign
  <<: *only_primary
  script:
    - . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "PUT" "."
#########################################################################
##                                                                     ##
##  stage: verify_from_prod                                            ##
##                                                                     ##
##  install/test packages from prod repos.                             ##
##  ONLY RUNS FOR PRIMARY BRANCHES/REPO                                ##
##                                                                     ##
#########################################################################

centos_7_verify_repo_prod:
  stage: verify_from_prod
  variables:
    RPM_REPO: "rippled-rpm"
  image:
    name: artifactory.ops.ripple.com/centos:7
  dependencies:
    - rpm_sign
  <<: *only_primary
  <<: *run_repo_smoketest

rocky_8_verify_repo_prod:
  stage: verify_from_prod
  variables:
    RPM_REPO: "rippled-rpm"
  image:
    name: artifactory.ops.ripple.com/rockylinux/rockylinux:8
  dependencies:
    - rpm_sign
  <<: *only_primary
  <<: *run_repo_smoketest

fedora_37_verify_repo_prod:
  stage: verify_from_prod
  variables:
    RPM_REPO: "rippled-rpm"
  image:
    name: artifactory.ops.ripple.com/fedora:37
  dependencies:
    - rpm_sign
  <<: *only_primary
  <<: *run_repo_smoketest

fedora_38_verify_repo_prod:
  stage: verify_from_prod
  variables:
    RPM_REPO: "rippled-rpm"
  image:
    name: artifactory.ops.ripple.com/fedora:38
  dependencies:
    - rpm_sign
  <<: *only_primary
  <<: *run_repo_smoketest

ubuntu_18_verify_repo_prod:
  stage: verify_from_prod
  variables:
    DISTRO: "bionic"
    DEB_REPO: "rippled-deb"
  image:
    name: artifactory.ops.ripple.com/ubuntu:18.04
  dependencies:
    - dpkg_sign
  <<: *only_primary
  <<: *run_repo_smoketest

ubuntu_20_verify_repo_prod:
  stage: verify_from_prod
  variables:
    DISTRO: "focal"
    DEB_REPO: "rippled-deb"
  image:
    name: artifactory.ops.ripple.com/ubuntu:20.04
  dependencies:
    - dpkg_sign
  <<: *only_primary
  <<: *run_repo_smoketest

ubuntu_22_verify_repo_prod:
  stage: verify_from_prod
  variables:
    DISTRO: "jammy"
    DEB_REPO: "rippled-deb"
  image:
    name: artifactory.ops.ripple.com/ubuntu:22.04
  dependencies:
    - dpkg_sign
  <<: *only_primary
  <<: *run_repo_smoketest

debian_10_verify_repo_prod:
  stage: verify_from_prod
  variables:
    DISTRO: "buster"
    DEB_REPO: "rippled-deb"
  image:
    name: artifactory.ops.ripple.com/debian:10
  dependencies:
    - dpkg_sign
  <<: *only_primary
  <<: *run_repo_smoketest

debian_11_verify_repo_prod:
  stage: verify_from_prod
  variables:
    DISTRO: "bullseye"
    DEB_REPO: "rippled-deb"
  image:
    name: artifactory.ops.ripple.com/debian:11
  dependencies:
    - dpkg_sign
  <<: *only_primary
  <<: *run_repo_smoketest
#########################################################################
##                                                                     ##
##  stage: get_final_hashes                                            ##
##                                                                     ##
##  fetch final hashes from artifactory.                               ##
##  ONLY RUNS FOR PRIMARY BRANCHES/REPO                                ##
##                                                                     ##
#########################################################################

get_prod_hashes:
  variables:
    DEB_REPO: "rippled-deb"
    RPM_REPO: "rippled-rpm"
  image:
    name: artifactory.ops.ripple.com/alpine:latest
  stage: get_final_hashes
  artifacts:
    paths:
      - files.info
  dependencies:
    - rpm_sign
    - dpkg_sign
  <<: *only_primary
  script:
    - . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "GET" ".checksums"
#########################################################################
##                                                                     ##
##  stage: build_containers                                            ##
##                                                                     ##
##  build containers from docker definitions. These containers are NOT ##
##  used for the package build. This step is only used to ensure that  ##
##  the package build targets and files are still working properly.    ##
##                                                                     ##
#########################################################################

build_centos_container:
  stage: build_containers
  <<: *dind_param
  script:
    - . ./Builds/containers/gitlab-ci/build_container.sh rpm

build_ubuntu_container:
  stage: build_containers
  <<: *dind_param
  script:
    - . ./Builds/containers/gitlab-ci/build_container.sh dpkg
@@ -1,92 +0,0 @@
 | 
			
		||||
#!/usr/bin/env sh
 | 
			
		||||
set -e
 | 
			
		||||
action=$1
 | 
			
		||||
filter=$2
 | 
			
		||||
 | 
			
		||||
. ./Builds/containers/gitlab-ci/get_component.sh
 | 
			
		||||
 | 
			
		||||
apk add curl jq coreutils util-linux
 | 
			
		||||
TOPDIR=$(pwd)
 | 
			
		||||
 | 
			
		||||
# DPKG
 | 
			
		||||
 | 
			
		||||
cd $TOPDIR
 | 
			
		||||
cd build/dpkg/packages
 | 
			
		||||
CURLARGS="-sk -X${action} -urippled:${ARTIFACTORY_DEPLOY_KEY_RIPPLED}"
 | 
			
		||||
RIPPLED_PKG=$(ls rippled_*.deb)
 | 
			
		||||
RIPPLED_REPORTING_PKG=$(ls rippled-reporting_*.deb)
 | 
			
		||||
RIPPLED_DBG_PKG=$(ls rippled-dbgsym_*.*deb)
 | 
			
		||||
RIPPLED_REPORTING_DBG_PKG=$(ls rippled-reporting-dbgsym_*.*deb)
# TODO - where to upload src tgz?
RIPPLED_SRC=$(ls rippled_*.orig.tar.gz)
DEB_MATRIX=";deb.component=${COMPONENT};deb.architecture=amd64"
for dist in buster bullseye bionic focal jammy; do
    DEB_MATRIX="${DEB_MATRIX};deb.distribution=${dist}"
done
echo "{ \"debs\": {" > "${TOPDIR}/files.info"
for deb in ${RIPPLED_PKG} ${RIPPLED_DBG_PKG} ${RIPPLED_REPORTING_PKG} ${RIPPLED_REPORTING_DBG_PKG}; do
    # first item doesn't get a comma separator
    if [ $deb != $RIPPLED_PKG ] ; then
        echo "," >> "${TOPDIR}/files.info"
    fi
    echo "\"${deb}\"": | tee -a "${TOPDIR}/files.info"
    ca="${CURLARGS}"
    if [ "${action}" = "PUT" ] ; then
        url="https://${ARTIFACTORY_HOST}/artifactory/${DEB_REPO}/pool/${COMPONENT}/${deb}${DEB_MATRIX}"
        ca="${ca} -T${deb}"
    elif [ "${action}" = "GET" ] ; then
        url="https://${ARTIFACTORY_HOST}/artifactory/api/storage/${DEB_REPO}/pool/${COMPONENT}/${deb}"
    fi
    echo "file info request url --> ${url}"
    eval "curl ${ca} \"${url}\"" | jq -M "${filter}" | tee -a "${TOPDIR}/files.info"
done
echo "}," >> "${TOPDIR}/files.info"

# RPM

cd $TOPDIR
cd build/rpm/packages
RIPPLED_PKG=$(ls rippled-[0-9]*.x86_64.rpm)
RIPPLED_DEV_PKG=$(ls rippled-devel*.rpm)
RIPPLED_DBG_PKG=$(ls rippled-debuginfo*.rpm)
RIPPLED_REPORTING_PKG=$(ls rippled-reporting*.rpm)
# TODO - where to upload src rpm ?
RIPPLED_SRC=$(ls rippled-[0-9]*.src.rpm)
echo "\"rpms\": {" >> "${TOPDIR}/files.info"
for rpm in ${RIPPLED_PKG} ${RIPPLED_DEV_PKG} ${RIPPLED_DBG_PKG} ${RIPPLED_REPORTING_PKG}; do
    # first item doesn't get a comma separator
    if [ $rpm != $RIPPLED_PKG ] ; then
        echo "," >> "${TOPDIR}/files.info"
    fi
    echo "\"${rpm}\"": | tee -a "${TOPDIR}/files.info"
    ca="${CURLARGS}"
    if [ "${action}" = "PUT" ] ; then
        url="https://${ARTIFACTORY_HOST}/artifactory/${RPM_REPO}/${COMPONENT}/"
        ca="${ca} -T${rpm}"
    elif [ "${action}" = "GET" ] ; then
        url="https://${ARTIFACTORY_HOST}/artifactory/api/storage/${RPM_REPO}/${COMPONENT}/${rpm}"
    fi
    echo "file info request url --> ${url}"
    eval "curl ${ca} \"${url}\"" | jq -M "${filter}" | tee -a "${TOPDIR}/files.info"
done
echo "}}" >> "${TOPDIR}/files.info"
jq '.' "${TOPDIR}/files.info" > "${TOPDIR}/files.info.tmp"
mv "${TOPDIR}/files.info.tmp" "${TOPDIR}/files.info"

if [ ! -z "${SLACK_NOTIFY_URL}" ] && [ "${action}" = "GET" ] ; then
    # extract files.info content to variable and sanitize so it can
    # be interpolated into a slack text field below
    finfo=$(cat ${TOPDIR}/files.info | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/\\n/g' | sed -E 's/"/\\"/g')
    # try posting file info to slack.
    # can add channel field to payload if the
    # default channel is incorrect. Get rid of
    # newlines in payload json since slack doesn't accept them
    CONTENT=$(tr -d '[\n]' <<JSON
       payload={
         "username": "GitlabCI",
         "text": "The package build for branch \`${CI_COMMIT_REF_NAME}\` is complete. File hashes are: \`\`\`${finfo}\`\`\`",
         "icon_emoji": ":package:"}
JSON
)
    curl ${SLACK_NOTIFY_URL} --data-urlencode "${CONTENT}"
fi
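For reference, the files.info document assembled above ends up shaped roughly as follows once the final jq '.' pass pretty-prints it (package names and checksum objects are illustrative only; the real values are whatever the Artifactory responses yield after the jq filter in ${filter}):

{
  "debs": {
    "rippled_1.9.4~b1-1_amd64.deb": { "sha256": "..." },
    "rippled-dbgsym_1.9.4~b1-1_amd64.ddeb": { "sha256": "..." }
  },
  "rpms": {
    "rippled-1.9.4-0.1.b1.el7.x86_64.rpm": { "sha256": "..." }
  }
}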
@@ -1,38 +0,0 @@
#!/usr/bin/env bash

set -eo pipefail

sign_dpkg() {
  if [ -n "${GPG_KEYID}" ]; then
    dpkg-sig \
      -g "--no-tty --digest-algo 'sha512' --passphrase '${GPG_PASSPHRASE}' --pinentry-mode=loopback" \
      -k "${GPG_KEYID}" \
      --sign builder \
      build/dpkg/packages/*.deb
  fi
}

sign_rpm() {
  if [ -n "${GPG_KEYID}" ] ; then
    find build/rpm/packages -name "*.rpm" -exec bash -c '
      echo "yes" | setsid rpm \
          --define "_gpg_name ${GPG_KEYID}" \
          --define "_signature gpg" \
          --define "__gpg_check_password_cmd /bin/true" \
          --define "__gpg_sign_cmd %{__gpg} gpg --batch --no-armor --digest-algo 'sha512' --passphrase '${GPG_PASSPHRASE}' --no-secmem-warning -u '%{_gpg_name}' --sign --detach-sign --output %{__signature_filename} %{__plaintext_filename}" \
          --addsign '{} \;
  fi
}

case "${1}" in
    dpkg)
        sign_dpkg
        ;;
    rpm)
        sign_rpm
        ;;
    *)
        echo "Usage: ${0} (dpkg|rpm)"
        ;;
esac

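Assuming standard tooling, the freshly signed artifacts could be spot-checked with something like the following (rippled-pubkey.asc is a hypothetical export of the public half of ${GPG_KEYID}):

gpg --import rippled-pubkey.asc
dpkg-sig --verify build/dpkg/packages/*.deb
rpm --import rippled-pubkey.asc
rpm --checksig build/rpm/packages/*.rpm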
@@ -1,108 +0,0 @@
#!/usr/bin/env bash
set -e
install_from=$1
use_private=${2:-0} # this option not currently needed by any CI scripts,
                    # reserved for possible future use
if [ "$use_private" -gt 0 ] ; then
    REPO_ROOT="https://rippled:${ARTIFACTORY_DEPLOY_KEY_RIPPLED}@${ARTIFACTORY_HOST}/artifactory"
else
    REPO_ROOT="${PUBLIC_REPO_ROOT}"
fi

. ./Builds/containers/gitlab-ci/get_component.sh

. /etc/os-release
case ${ID} in
    ubuntu|debian)
        pkgtype="dpkg"
        ;;
    fedora|centos|rhel|scientific|rocky)
        pkgtype="rpm"
        ;;
    *)
        echo "unrecognized distro!"
        exit 1
        ;;
esac

# this script provides info variables about pkg version
. build/${pkgtype}/packages/build_vars

if [ "${pkgtype}" = "dpkg" ] ; then
    # sometimes update fails and requires a cleanup
    updateWithRetry()
    {
        if ! apt-get -y update ; then
            rm -rvf /var/lib/apt/lists/*
            apt-get -y clean
            apt-get -y update
        fi
    }
    if [ "${install_from}" = "repo" ] ; then
        apt-get -y upgrade
        updateWithRetry
        apt-get -y install apt apt-transport-https ca-certificates coreutils util-linux wget gnupg
        wget -q -O - "${REPO_ROOT}/api/gpg/key/public" | apt-key add -
        echo "deb ${REPO_ROOT}/${DEB_REPO} ${DISTRO} ${COMPONENT}" >> /etc/apt/sources.list
        updateWithRetry
        # uncomment this next line if you want to see the available package versions
        # apt-cache policy rippled
        apt-get -y install rippled=${dpkg_full_version}
    elif [ "${install_from}" = "local" ] ; then
        # cached pkg install
        updateWithRetry
        apt-get -y install libprotobuf-dev libprotoc-dev protobuf-compiler libssl-dev
        rm -f build/dpkg/packages/rippled-dbgsym*.*
        dpkg --no-debsig -i build/dpkg/packages/*.deb
    else
        echo "unrecognized pkg source!"
        exit 1
    fi
else
    yum -y update
    if [ "${install_from}" = "repo" ] ; then
        pkgs=("yum-utils coreutils util-linux")
        if [ "$ID" = "rocky" ]; then
            pkgs="${pkgs[@]/coreutils}"
        fi
        yum install -y $pkgs
        REPOFILE="/etc/yum.repos.d/artifactory.repo"
        echo "[Artifactory]" > ${REPOFILE}
        echo "name=Artifactory" >> ${REPOFILE}
        echo "baseurl=${REPO_ROOT}/${RPM_REPO}/${COMPONENT}/" >> ${REPOFILE}
        echo "enabled=1" >> ${REPOFILE}
        echo "gpgcheck=0" >> ${REPOFILE}
        echo "gpgkey=${REPO_ROOT}/${RPM_REPO}/${COMPONENT}/repodata/repomd.xml.key" >> ${REPOFILE}
        echo "repo_gpgcheck=1" >> ${REPOFILE}
        yum -y update
        # uncomment this next line if you want to see the available package versions
        # yum --showduplicates list rippled
        yum -y install ${rpm_version_release}
    elif [ "${install_from}" = "local" ] ; then
        # cached pkg install
        pkgs=("yum-utils openssl-static zlib-static")
        if [[ "$ID" =~ rocky|fedora ]]; then
            if [[ "$ID" =~ "rocky" ]]; then
                sed -i 's/enabled=0/enabled=1/g' /etc/yum.repos.d/Rocky-PowerTools.repo
            fi
            pkgs="${pkgs[@]/openssl-static}"
        fi
        yum install -y $pkgs
        rm -f build/rpm/packages/rippled-debug*.rpm
        rm -f build/rpm/packages/*.src.rpm
        rpm -i build/rpm/packages/*.rpm
    else
        echo "unrecognized pkg source!"
        exit 1
    fi
fi

# verify installed version
INSTALLED=$(/opt/ripple/bin/rippled --version | awk '{print $NF}')
if [ "${rippled_version}" != "${INSTALLED}" ] ; then
    echo "INSTALLED version ${INSTALLED} does not match ${rippled_version}"
    exit 1
fi
# run unit tests
/opt/ripple/bin/rippled --unittest --unittest-jobs $(nproc)
/opt/ripple/bin/validator-keys --unittest
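The awk '{print $NF}' comparison above assumes the version is the last whitespace-separated token of the --version output, e.g. (output format assumed, version hypothetical):

$ /opt/ripple/bin/rippled --version
rippled version 1.9.4
$ /opt/ripple/bin/rippled --version | awk '{print $NF}'
1.9.4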
@@ -1,21 +0,0 @@
#!/usr/bin/env sh
set -e
docker login -u rippled \
    -p ${ARTIFACTORY_DEPLOY_KEY_RIPPLED} "${ARTIFACTORY_HUB}"
# this gives us rippled_version:
. build/rpm/packages/build_vars
docker pull "${ARTIFACTORY_HUB}/${RPM_CONTAINER_FULLNAME}"
docker pull "${ARTIFACTORY_HUB}/${DPKG_CONTAINER_FULLNAME}"
# tag/push two labels: one using the current rippled version and one just using "latest"
for label in ${rippled_version} latest ; do
    docker tag \
        "${ARTIFACTORY_HUB}/${RPM_CONTAINER_FULLNAME}" \
        "${ARTIFACTORY_HUB}/${RPM_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
    docker push \
        "${ARTIFACTORY_HUB}/${RPM_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
    docker tag \
        "${ARTIFACTORY_HUB}/${DPKG_CONTAINER_FULLNAME}" \
        "${ARTIFACTORY_HUB}/${DPKG_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
    docker push \
        "${ARTIFACTORY_HUB}/${DPKG_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
done
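With hypothetical values RPM_CONTAINER_NAME=rpm-builder, rippled_version=1.9.4, and CI_COMMIT_REF_SLUG=develop, each pass of the loop above tags and pushes images named like:

${ARTIFACTORY_HUB}/rpm-builder:1.9.4_develop
${ARTIFACTORY_HUB}/rpm-builder:latest_develop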
@@ -1,17 +0,0 @@
#!/usr/bin/env sh
set -ex
apt -y update
DEBIAN_FRONTEND="noninteractive" apt-get -y install tzdata
apt -y install software-properties-common curl git gnupg
curl -sk -o rippled-pubkeys.txt "${GIT_SIGN_PUBKEYS_URL}"
gpg --import rippled-pubkeys.txt
if git verify-commit HEAD; then
    echo "git commit signature check passed"
else
    echo "git commit signature check failed"
    git log -n 5 --color \
        --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an> [%G?]%Creset' \
        --abbrev-commit
    exit 1
fi

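The [%G?] field in the fallback log format prints one signature-status letter per commit (per git's documentation: G = good, B = bad, U = good but of unknown validity, N = no signature), so the failure output might look like (hashes, subjects, and authors hypothetical):

a1b2c3d - (HEAD -> main) fix docker build (2 hours ago) <Jane Doe> [N]
e4f5a6b - update validator list (3 hours ago) <John Doe> [G]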
@@ -1,95 +0,0 @@
#!/usr/bin/env bash
set -ex

# make sure pkg source files are up to date with repo
cd /opt/rippled_bld/pkg
cp -fpru rippled/Builds/containers/packaging/dpkg/debian/. debian/
cp -fpu rippled/Builds/containers/shared/rippled*.service debian/
cp -fpu rippled/Builds/containers/shared/update_sources.sh .
source update_sources.sh

# Build the dpkg

# dpkg uses - as separator, so we need to change our -bN versions to tilde
RIPPLED_DPKG_VERSION=$(echo "${RIPPLED_VERSION}" | sed 's!-!~!g')
# TODO - decide how to handle the trailing/release
# version here (hardcoded to 1). Does it ever need to change?
RIPPLED_DPKG_FULL_VERSION="${RIPPLED_DPKG_VERSION}-1"
git config --global --add safe.directory /opt/rippled_bld/pkg/rippled
cd /opt/rippled_bld/pkg/rippled
if [[ -n $(git status --porcelain) ]]; then
    git status
    error "Unstaged changes in this repo - please commit first"
fi
git archive --format tar.gz --prefix rippled-${RIPPLED_DPKG_VERSION}/ -o ../rippled-${RIPPLED_DPKG_VERSION}.tar.gz HEAD
cd ..
# dpkg debmake would normally create this link, but we do it manually
ln -s ./rippled-${RIPPLED_DPKG_VERSION}.tar.gz rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz
tar xvf rippled-${RIPPLED_DPKG_VERSION}.tar.gz
cd rippled-${RIPPLED_DPKG_VERSION}
cp -pr ../debian .

# dpkg requires a changelog. We don't currently maintain
# a usable one, so let's just fake it with our current version
# TODO : not sure if the "unstable" will need to change for
# release packages (?)
NOWSTR=$(TZ=UTC date -R)
cat << CHANGELOG > ./debian/changelog
rippled (${RIPPLED_DPKG_FULL_VERSION}) unstable; urgency=low

  * see RELEASENOTES

 -- Ripple Labs Inc. <support@ripple.com>  ${NOWSTR}
CHANGELOG

# PATH must be preserved for our more modern cmake in /opt/local
# TODO : consider allowing lintian to run in future ?
export DH_BUILD_DDEBS=1
debuild --no-lintian --preserve-envvar PATH --preserve-env -us -uc
rc=$?; if [[ $rc != 0 ]]; then
    error "error building dpkg"
fi
cd ..

# copy artifacts
cp rippled-reporting_${RIPPLED_DPKG_FULL_VERSION}_amd64.deb ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.deb ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}.dsc ${PKG_OUTDIR}
# dbgsym suffix is ddeb under newer debuild, but just deb under earlier
cp rippled-dbgsym_${RIPPLED_DPKG_FULL_VERSION}_amd64.* ${PKG_OUTDIR}
cp rippled-reporting-dbgsym_${RIPPLED_DPKG_FULL_VERSION}_amd64.* ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.changes ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.build ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}.debian.tar.xz ${PKG_OUTDIR}
# buildinfo is only generated by later versions of debuild
if [ -e rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.buildinfo ] ; then
    cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.buildinfo ${PKG_OUTDIR}
fi

cat rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.changes
# extract the text in the .changes file that appears between
#    Checksums-Sha256:  ...
# and
#    Files: ...
awk '/Checksums-Sha256:/{hit=1;next}/Files:/{hit=0}hit' \
    rippled_${RIPPLED_DPKG_VERSION}-1_amd64.changes | \
        sed -E 's!^[[:space:]]+!!' > shasums
DEB_SHA256=$(cat shasums | \
    grep "rippled_${RIPPLED_DPKG_VERSION}-1_amd64.deb" | cut -d " " -f 1)
DBG_SHA256=$(cat shasums | \
    grep "rippled-dbgsym_${RIPPLED_DPKG_VERSION}-1_amd64.*" | cut -d " " -f 1)
REPORTING_DBG_SHA256=$(cat shasums | \
    grep "rippled-reporting-dbgsym_${RIPPLED_DPKG_VERSION}-1_amd64.*" | cut -d " " -f 1)
REPORTING_SHA256=$(cat shasums | \
    grep "rippled-reporting_${RIPPLED_DPKG_VERSION}-1_amd64.deb" | cut -d " " -f 1)
SRC_SHA256=$(cat shasums | \
    grep "rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz" | cut -d " " -f 1)
echo "deb_sha256=${DEB_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "dbg_sha256=${DBG_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "reporting_sha256=${REPORTING_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "reporting_dbg_sha256=${REPORTING_DBG_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "src_sha256=${SRC_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "rippled_version=${RIPPLED_VERSION}" >> ${PKG_OUTDIR}/build_vars
echo "dpkg_version=${RIPPLED_DPKG_VERSION}" >> ${PKG_OUTDIR}/build_vars
echo "dpkg_full_version=${RIPPLED_DPKG_FULL_VERSION}" >> ${PKG_OUTDIR}/build_vars
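To make the checksum extraction concrete: a .changes file carries a block like the one below (hashes and sizes hypothetical). The awk program prints only the lines strictly between the Checksums-Sha256: and Files: headers, sed strips the leading whitespace, and each grep/cut then pulls the first field for its filename:

Checksums-Sha256:
 3f7a94c1... 21520338 rippled_1.9.4~b1-1_amd64.deb
 b81c559e... 17345841 rippled-dbgsym_1.9.4~b1-1_amd64.ddeb
Files:
 ...

so shasums would hold the two indented lines, and DEB_SHA256 would come out as 3f7a94c1....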
@@ -1,3 +0,0 @@
rippled daemon

 -- Mike Ellery <mellery451@gmail.com>  Tue, 04 Dec 2018 18:19:03 +0000
@@ -1 +0,0 @@
10
@@ -1,19 +0,0 @@
Source: rippled
Section: misc
Priority: extra
Maintainer: Ripple Labs Inc. <support@ripple.com>
Build-Depends: cmake, debhelper (>=9), zlib1g-dev, dh-systemd, ninja-build
Standards-Version: 3.9.7
Homepage: http://ripple.com/

Package: rippled
Architecture: any
Multi-Arch: foreign
Depends: ${misc:Depends}, ${shlibs:Depends}
Description: rippled daemon

Package: rippled-reporting
Architecture: any
Multi-Arch: foreign
Depends: ${misc:Depends}, ${shlibs:Depends}
Description: rippled reporting daemon
@@ -1,86 +0,0 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: rippled
Source: https://github.com/ripple/rippled

Files: *
Copyright: 2012-2019 Ripple Labs Inc.

License:   __UNKNOWN__

The accompanying files under various copyrights.

Copyright (c) 2012, 2013, 2014 Ripple Labs Inc.

Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

The accompanying files incorporate work covered by the following copyright
and previous license notice:

Copyright (c) 2011 Arthur Britto, David Schwartz, Jed McCaleb,
Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant

Some code from Raw Material Software, Ltd., provided under the terms of the
  ISC License. See the corresponding source files for more details.
  Copyright (c) 2013 - Raw Material Software Ltd.
  Please visit http://www.juce.com

Some code from ASIO examples:
// Copyright (c) 2003-2011 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)

Some code from Bitcoin:
// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2011 The Bitcoin developers
// Distributed under the MIT/X11 software license, see the accompanying
// file license.txt or http://www.opensource.org/licenses/mit-license.php.

Some code from Tom Wu:
This software is covered under the following copyright:

/*
 * Copyright (c) 2003-2005  Tom Wu
 * All Rights Reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice shall be
 * included in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS, IMPLIED OR OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY
 * WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
 *
 * IN NO EVENT SHALL TOM WU BE LIABLE FOR ANY SPECIAL, INCIDENTAL,
 * INDIRECT OR CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER
 * RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER OR NOT ADVISED OF
 * THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT
 * OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 *
 * In addition, the following condition applies:
 *
 * All redistributions must retain an intact copy of this copyright notice
 * and disclaimer.
 */

Address all questions regarding this license to:

  Tom Wu
  tjw@cs.Stanford.EDU
@@ -1,3 +0,0 @@
/var/log/rippled/
/var/lib/rippled/
/etc/systemd/system/rippled.service.d/
@@ -1,3 +0,0 @@
README.md
LICENSE.md
RELEASENOTES.md
@@ -1,3 +0,0 @@
opt/ripple/include
opt/ripple/lib/*.a
opt/ripple/lib/cmake/ripple
@@ -1,3 +0,0 @@
/var/log/rippled-reporting/
/var/lib/rippled-reporting/
/etc/systemd/system/rippled-reporting.service.d/
@@ -1,8 +0,0 @@
bld/rippled-reporting/rippled-reporting opt/rippled-reporting/bin
cfg/rippled-reporting.cfg opt/rippled-reporting/etc
debian/tmp/opt/rippled-reporting/etc/validators.txt opt/rippled-reporting/etc

opt/rippled-reporting/bin/update-rippled-reporting.sh
opt/rippled-reporting/bin/getRippledReportingInfo
opt/rippled-reporting/etc/update-rippled-reporting-cron
etc/logrotate.d/rippled-reporting
@@ -1,3 +0,0 @@
opt/rippled-reporting/etc/rippled-reporting.cfg etc/opt/rippled-reporting/rippled-reporting.cfg
opt/rippled-reporting/etc/validators.txt etc/opt/rippled-reporting/validators.txt
opt/rippled-reporting/bin/rippled-reporting usr/local/bin/rippled-reporting
@@ -1,33 +0,0 @@
#!/bin/sh
set -e

USER_NAME=rippled-reporting
GROUP_NAME=rippled-reporting
case "$1" in
    configure)
        id -u $USER_NAME >/dev/null 2>&1 || \
        adduser --system --quiet \
            --home /nonexistent --no-create-home \
            --disabled-password \
            --group "$GROUP_NAME"
        chown -R $USER_NAME:$GROUP_NAME /var/log/rippled-reporting/
        chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled-reporting/
        chmod 755 /var/log/rippled-reporting/
        chmod 755 /var/lib/rippled-reporting/
        chown -R $USER_NAME:$GROUP_NAME /opt/rippled-reporting
    ;;

    abort-upgrade|abort-remove|abort-deconfigure)
    ;;

    *)
        echo "postinst called with unknown argument \`$1'" >&2
        exit 1
    ;;
esac

#DEBHELPER#

exit 0
@@ -1,2 +0,0 @@
/opt/ripple/etc/rippled.cfg
/opt/ripple/etc/validators.txt
@@ -1,8 +0,0 @@
opt/ripple/bin/rippled
opt/ripple/bin/validator-keys
opt/ripple/bin/update-rippled.sh
opt/ripple/bin/getRippledInfo
opt/ripple/etc/rippled.cfg
opt/ripple/etc/validators.txt
opt/ripple/etc/update-rippled-cron
etc/logrotate.d/rippled
@@ -1,3 +0,0 @@
opt/ripple/etc/rippled.cfg etc/opt/ripple/rippled.cfg
opt/ripple/etc/validators.txt etc/opt/ripple/validators.txt
opt/ripple/bin/rippled usr/local/bin/rippled
@@ -1,35 +0,0 @@
#!/bin/sh
set -e

USER_NAME=rippled
GROUP_NAME=rippled
case "$1" in
    configure)
        id -u $USER_NAME >/dev/null 2>&1 || \
        adduser --system --quiet \
            --home /nonexistent --no-create-home \
            --disabled-password \
            --group "$GROUP_NAME"
        chown -R $USER_NAME:$GROUP_NAME /var/log/rippled/
        chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled/
        chown -R $USER_NAME:$GROUP_NAME /opt/ripple
        chmod 755 /var/log/rippled/
        chmod 755 /var/lib/rippled/
        chmod 644 /opt/ripple/etc/update-rippled-cron
        chmod 644 /etc/logrotate.d/rippled
        chown -R root:$GROUP_NAME /opt/ripple/etc/update-rippled-cron
    ;;

    abort-upgrade|abort-remove|abort-deconfigure)
    ;;

    *)
        echo "postinst called with unknown argument \`$1'" >&2
        exit 1
    ;;
esac

#DEBHELPER#

exit 0
@@ -1,17 +0,0 @@
#!/bin/sh
set -e

case "$1" in
    purge|remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
    ;;

    *)
        echo "postrm called with unknown argument \`$1'" >&2
        exit 1
    ;;
esac

#DEBHELPER#

exit 0
@@ -1,20 +0,0 @@
#!/bin/sh
set -e

case "$1" in
    install|upgrade)
    ;;

    abort-upgrade)
    ;;

    *)
        echo "preinst called with unknown argument \`$1'" >&2
        exit 1
    ;;
esac

#DEBHELPER#

exit 0
@@ -1,20 +0,0 @@
#!/bin/sh
set -e

case "$1" in
    remove|upgrade|deconfigure)
    ;;

    failed-upgrade)
    ;;

    *)
        echo "prerm called with unknown argument \`$1'" >&2
        exit 1
    ;;
esac

#DEBHELPER#

exit 0
@@ -1,80 +0,0 @@
#!/usr/bin/make -f
export DH_VERBOSE = 1
export DH_OPTIONS = -v
# debuild sets some warnings that don't work well
# for our current build, so try to remove those flags here:
export CFLAGS:=$(subst -Wformat,,$(CFLAGS))
export CFLAGS:=$(subst -Werror=format-security,,$(CFLAGS))
export CXXFLAGS:=$(subst -Wformat,,$(CXXFLAGS))
export CXXFLAGS:=$(subst -Werror=format-security,,$(CXXFLAGS))

%:
	dh $@ --with systemd

override_dh_systemd_start:
	dh_systemd_start --no-restart-on-upgrade

override_dh_auto_configure:
	env
	rm -rf bld

	conan export external/snappy snappy/1.1.9@

	conan install . \
		--install-folder bld/rippled \
		--build missing \
		--build boost \
		--build sqlite3 \
		--settings build_type=Release

	cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
		-G Ninja \
		-DCMAKE_BUILD_TYPE=Release \
		-DCMAKE_INSTALL_PREFIX=/opt/ripple \
		-Dstatic=ON \
		-Dunity=OFF \
		-DCMAKE_VERBOSE_MAKEFILE=ON \
		-Dvalidator_keys=ON \
		-B bld/rippled

	conan install . \
		--install-folder bld/rippled-reporting \
		--build missing \
		--build boost \
		--build sqlite3 \
		--build libuv \
		--settings build_type=Release \
		--options reporting=True

	cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
		-G Ninja \
		-DCMAKE_BUILD_TYPE=Release \
		-DCMAKE_INSTALL_PREFIX=/opt/rippled-reporting \
		-Dstatic=ON \
		-Dunity=OFF \
		-DCMAKE_VERBOSE_MAKEFILE=ON \
		-Dreporting=ON \
		-B bld/rippled-reporting

override_dh_auto_build:
	cmake --build bld/rippled --target rippled --target validator-keys -j$$(nproc)

	cmake --build bld/rippled-reporting --target rippled -j$$(nproc)

override_dh_auto_install:
	cmake --install bld/rippled --prefix debian/tmp/opt/ripple
	install -D bld/rippled/validator-keys/validator-keys debian/tmp/opt/ripple/bin/validator-keys
	install -D Builds/containers/shared/update-rippled.sh debian/tmp/opt/ripple/bin/update-rippled.sh
	install -D bin/getRippledInfo debian/tmp/opt/ripple/bin/getRippledInfo
	install -D Builds/containers/shared/update-rippled-cron debian/tmp/opt/ripple/etc/update-rippled-cron
	install -D Builds/containers/shared/rippled-logrotate debian/tmp/etc/logrotate.d/rippled
	rm -rf debian/tmp/opt/ripple/lib64/cmake/date

	mkdir -p debian/tmp/opt/rippled-reporting/etc
	mkdir -p debian/tmp/opt/rippled-reporting/bin
	cp cfg/validators-example.txt debian/tmp/opt/rippled-reporting/etc/validators.txt

	sed -E 's/rippled?/rippled-reporting/g' Builds/containers/shared/update-rippled.sh > debian/tmp/opt/rippled-reporting/bin/update-rippled-reporting.sh
	sed -E 's/rippled?/rippled-reporting/g' bin/getRippledInfo > debian/tmp/opt/rippled-reporting/bin/getRippledReportingInfo
	sed -E 's/rippled?/rippled-reporting/g' Builds/containers/shared/update-rippled-cron > debian/tmp/opt/rippled-reporting/etc/update-rippled-reporting-cron
	sed -E 's/rippled?/rippled-reporting/g' Builds/containers/shared/rippled-logrotate > debian/tmp/etc/logrotate.d/rippled-reporting
@@ -1 +0,0 @@
3.0 (quilt)
@@ -1,2 +0,0 @@
#abort-on-upstream-changes
#unapply-patches
@@ -1 +0,0 @@
enable rippled-reporting.service
@@ -1 +0,0 @@
enable rippled.service
@@ -1,82 +0,0 @@
#!/usr/bin/env bash
set -ex

cd /opt/rippled_bld/pkg
cp -fpu rippled/Builds/containers/packaging/rpm/rippled.spec .
cp -fpu rippled/Builds/containers/shared/update_sources.sh .
source update_sources.sh

# Build the rpm

IFS='-' read -r RIPPLED_RPM_VERSION RELEASE <<< "$RIPPLED_VERSION"
export RIPPLED_RPM_VERSION

RPM_RELEASE=${RPM_RELEASE-1}

# post-release version
if [ "hf" = "$(echo "$RELEASE" | cut -c -2)" ]; then
    RPM_RELEASE="${RPM_RELEASE}.${RELEASE}"
# pre-release version (-b or -rc)
elif [[ $RELEASE ]]; then
    RPM_RELEASE="0.${RPM_RELEASE}.${RELEASE}"
fi

export RPM_RELEASE

if [[ $RPM_PATCH ]]; then
    RPM_PATCH=".${RPM_PATCH}"
    export RPM_PATCH
fi

cd /opt/rippled_bld/pkg/rippled

if [[ -n $(git status --porcelain) ]]; then
   git status
   error "Unstaged changes in this repo - please commit first"
fi

git archive --format tar.gz --prefix rippled/ -o ../rpmbuild/SOURCES/rippled.tar.gz HEAD

cd ..

source /opt/rh/devtoolset-11/enable

rpmbuild --define "_topdir ${PWD}/rpmbuild" -ba rippled.spec

rc=$?; if [[ $rc != 0 ]]; then
    error "error building rpm"
fi

# Make a tar of the rpm and source rpm
RPM_VERSION_RELEASE=$(rpm -qp --qf='%{NAME}-%{VERSION}-%{RELEASE}' ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm)
tar_file=$RPM_VERSION_RELEASE.tar.gz

cp ./rpmbuild/RPMS/x86_64/* ${PKG_OUTDIR}
cp ./rpmbuild/SRPMS/* ${PKG_OUTDIR}

RPM_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm 2>/dev/null)
DBG_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-debuginfo*.rpm 2>/dev/null)
DEV_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-devel*.rpm 2>/dev/null)
REP_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-reporting*.rpm 2>/dev/null)
SRC_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/SRPMS/*.rpm 2>/dev/null)

RPM_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm | awk '{ print $1 }')"
DBG_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-debuginfo*.rpm | awk '{ print $1 }')"
REP_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-reporting*.rpm | awk '{ print $1 }')"
DEV_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-devel*.rpm | awk '{ print $1 }')"
SRC_SHA256="$(sha256sum ./rpmbuild/SRPMS/*.rpm | awk '{ print $1 }')"

echo "rpm_md5sum=$RPM_MD5SUM" >  ${PKG_OUTDIR}/build_vars
echo "rep_md5sum=$REP_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "dbg_md5sum=$DBG_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "dev_md5sum=$DEV_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "src_md5sum=$SRC_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "rpm_sha256=$RPM_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "rep_sha256=$REP_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "dbg_sha256=$DBG_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "dev_sha256=$DEV_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "src_sha256=$SRC_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "rippled_version=$RIPPLED_VERSION" >> ${PKG_OUTDIR}/build_vars
echo "rpm_version=$RIPPLED_RPM_VERSION" >> ${PKG_OUTDIR}/build_vars
echo "rpm_file_name=$tar_file" >> ${PKG_OUTDIR}/build_vars
echo "rpm_version_release=$RPM_VERSION_RELEASE" >> ${PKG_OUTDIR}/build_vars
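As a worked example of the release math above (version strings hypothetical): with RIPPLED_VERSION=1.9.4-b1,

IFS='-' read -r RIPPLED_RPM_VERSION RELEASE <<< "1.9.4-b1"
# RIPPLED_RPM_VERSION=1.9.4, RELEASE=b1
# b1 is a pre-release, so RPM_RELEASE becomes 0.1.b1

while a hotfix such as 1.9.4-hf1 would give RPM_RELEASE=1.hf1, and a plain 1.9.4 leaves RPM_RELEASE=1.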
@@ -1,236 +0,0 @@
%define rippled_version %(echo $RIPPLED_RPM_VERSION)
%define rpm_release %(echo $RPM_RELEASE)
%define rpm_patch %(echo $RPM_PATCH)
%define _prefix /opt/ripple

Name:           rippled
# Dashes in Version extensions must be converted to underscores
Version:        %{rippled_version}
Release:        %{rpm_release}%{?dist}%{rpm_patch}
Summary:        rippled daemon

License:        MIT
URL:            http://ripple.com/
Source0:        rippled.tar.gz

BuildRequires:  cmake zlib-static ninja-build

%description
rippled

%package devel
Summary: Files for development of applications using xrpl core library
Group: Development/Libraries
Requires: zlib-static

%description devel
Core library for development of standalone applications that sign transactions.

%package reporting
Summary: Reporting Server for rippled

%description reporting
History server for XRP Ledger

%prep
%setup -c -n rippled

%build
rm -rf ~/.conan/profiles/default

cp /opt/libcstd/libstdc++.so.6.0.22 /usr/lib64
cp /opt/libcstd/libstdc++.so.6.0.22 /lib64
ln -sf /usr/lib64/libstdc++.so.6.0.22 /usr/lib64/libstdc++.so.6
ln -sf /lib64/libstdc++.so.6.0.22 /lib64/libstdc++.so.6

source /opt/rh/rh-python38/enable
pip install "conan<2"
conan profile new default --detect
conan profile update settings.compiler.libcxx=libstdc++11 default
conan profile update settings.compiler.cppstd=20 default

cd rippled

mkdir -p bld.rippled
conan export external/snappy snappy/1.1.9@

pushd bld.rippled
conan install .. \
     --settings build_type=Release \
     --output-folder . \
     --build missing

cmake -G Ninja \
     -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
     -DCMAKE_INSTALL_PREFIX=%{_prefix} \
     -DCMAKE_BUILD_TYPE=Release \
     -Dunity=OFF \
     -Dstatic=ON \
     -Dvalidator_keys=ON \
     -DCMAKE_VERBOSE_MAKEFILE=ON \
     ..

cmake --build . --parallel $(nproc) --target rippled --target validator-keys
popd

mkdir -p bld.rippled-reporting
pushd bld.rippled-reporting

conan install .. \
     --settings build_type=Release \
     --output-folder . \
     --build missing \
     --settings compiler.cppstd=17 \
     --options reporting=True

cmake -G Ninja \
     -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
     -DCMAKE_INSTALL_PREFIX=%{_prefix} \
     -DCMAKE_BUILD_TYPE=Release \
     -Dunity=OFF \
     -Dstatic=ON \
     -Dvalidator_keys=ON \
     -Dreporting=ON \
     -DCMAKE_VERBOSE_MAKEFILE=ON \
     ..

cmake --build . --parallel $(nproc) --target rippled

%pre
test -e /etc/pki/tls || { mkdir -p /etc/pki; ln -s /usr/lib/ssl /etc/pki/tls; }

%install
rm -rf $RPM_BUILD_ROOT
DESTDIR=$RPM_BUILD_ROOT cmake --build rippled/bld.rippled --target install #-- -v
mkdir -p $RPM_BUILD_ROOT
rm -rf ${RPM_BUILD_ROOT}/%{_prefix}/lib64/
install -d ${RPM_BUILD_ROOT}/etc/opt/ripple
install -d ${RPM_BUILD_ROOT}/usr/local/bin

install -D ./rippled/cfg/rippled-example.cfg ${RPM_BUILD_ROOT}/%{_prefix}/etc/rippled.cfg
install -D ./rippled/cfg/validators-example.txt ${RPM_BUILD_ROOT}/%{_prefix}/etc/validators.txt

ln -sf %{_prefix}/etc/rippled.cfg ${RPM_BUILD_ROOT}/etc/opt/ripple/rippled.cfg
ln -sf %{_prefix}/etc/validators.txt ${RPM_BUILD_ROOT}/etc/opt/ripple/validators.txt
ln -sf %{_prefix}/bin/rippled ${RPM_BUILD_ROOT}/usr/local/bin/rippled
install -D rippled/bld.rippled/validator-keys/validator-keys ${RPM_BUILD_ROOT}%{_bindir}/validator-keys
install -D ./rippled/Builds/containers/shared/rippled.service ${RPM_BUILD_ROOT}/usr/lib/systemd/system/rippled.service
install -D ./rippled/Builds/containers/packaging/rpm/50-rippled.preset ${RPM_BUILD_ROOT}/usr/lib/systemd/system-preset/50-rippled.preset
install -D ./rippled/Builds/containers/shared/update-rippled.sh ${RPM_BUILD_ROOT}%{_bindir}/update-rippled.sh
install -D ./rippled/bin/getRippledInfo ${RPM_BUILD_ROOT}%{_bindir}/getRippledInfo
install -D ./rippled/Builds/containers/shared/update-rippled-cron ${RPM_BUILD_ROOT}%{_prefix}/etc/update-rippled-cron
install -D ./rippled/Builds/containers/shared/rippled-logrotate ${RPM_BUILD_ROOT}/etc/logrotate.d/rippled
install -d $RPM_BUILD_ROOT/var/log/rippled
install -d $RPM_BUILD_ROOT/var/lib/rippled

# reporting mode
%define _prefix /opt/rippled-reporting
mkdir -p ${RPM_BUILD_ROOT}/etc/opt/rippled-reporting/
install -D rippled/bld.rippled-reporting/rippled-reporting ${RPM_BUILD_ROOT}%{_bindir}/rippled-reporting
install -D ./rippled/cfg/rippled-reporting.cfg ${RPM_BUILD_ROOT}%{_prefix}/etc/rippled-reporting.cfg
install -D ./rippled/cfg/validators-example.txt ${RPM_BUILD_ROOT}%{_prefix}/etc/validators.txt
install -D ./rippled/Builds/containers/packaging/rpm/50-rippled-reporting.preset ${RPM_BUILD_ROOT}/usr/lib/systemd/system-preset/50-rippled-reporting.preset
ln -s %{_prefix}/bin/rippled-reporting ${RPM_BUILD_ROOT}/usr/local/bin/rippled-reporting
ln -s %{_prefix}/etc/rippled-reporting.cfg ${RPM_BUILD_ROOT}/etc/opt/rippled-reporting/rippled-reporting.cfg
ln -s %{_prefix}/etc/validators.txt ${RPM_BUILD_ROOT}/etc/opt/rippled-reporting/validators.txt
install -d $RPM_BUILD_ROOT/var/log/rippled-reporting
install -d $RPM_BUILD_ROOT/var/lib/rippled-reporting
install -D ./rippled/Builds/containers/shared/rippled-reporting.service ${RPM_BUILD_ROOT}/usr/lib/systemd/system/rippled-reporting.service
sed -E 's/rippled?/rippled-reporting/g' ./rippled/Builds/containers/shared/update-rippled.sh > ${RPM_BUILD_ROOT}%{_bindir}/update-rippled-reporting.sh
sed -E 's/rippled?/rippled-reporting/g' ./rippled/bin/getRippledInfo > ${RPM_BUILD_ROOT}%{_bindir}/getRippledReportingInfo
sed -E 's/rippled?/rippled-reporting/g' ./rippled/Builds/containers/shared/update-rippled-cron > ${RPM_BUILD_ROOT}%{_prefix}/etc/update-rippled-reporting-cron
sed -E 's/rippled?/rippled-reporting/g' ./rippled/Builds/containers/shared/rippled-logrotate > ${RPM_BUILD_ROOT}/etc/logrotate.d/rippled-reporting

%post
%define _prefix /opt/ripple
USER_NAME=rippled
GROUP_NAME=rippled

getent passwd $USER_NAME &>/dev/null || useradd $USER_NAME
getent group $GROUP_NAME &>/dev/null || groupadd $GROUP_NAME

chown -R $USER_NAME:$GROUP_NAME /var/log/rippled/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled/
chown -R $USER_NAME:$GROUP_NAME %{_prefix}/

chmod 755 /var/log/rippled/
chmod 755 /var/lib/rippled/

chmod 644 %{_prefix}/etc/update-rippled-cron
chmod 644 /etc/logrotate.d/rippled
chown -R root:$GROUP_NAME %{_prefix}/etc/update-rippled-cron

%post reporting
%define _prefix /opt/rippled-reporting
USER_NAME=rippled-reporting
GROUP_NAME=rippled-reporting

getent passwd $USER_NAME &>/dev/null || useradd -r $USER_NAME
getent group $GROUP_NAME &>/dev/null || groupadd $GROUP_NAME

chown -R $USER_NAME:$GROUP_NAME /var/log/rippled-reporting/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled-reporting/
chown -R $USER_NAME:$GROUP_NAME %{_prefix}/

chmod 755 /var/log/rippled-reporting/
chmod 755 /var/lib/rippled-reporting/
chmod -x /usr/lib/systemd/system/rippled-reporting.service

%files
%define _prefix /opt/ripple
%doc rippled/README.md rippled/LICENSE.md
%{_bindir}/rippled
/usr/local/bin/rippled
%{_bindir}/update-rippled.sh
%{_bindir}/getRippledInfo
%{_prefix}/etc/update-rippled-cron
%{_bindir}/validator-keys
%config(noreplace) %{_prefix}/etc/rippled.cfg
%config(noreplace) /etc/opt/ripple/rippled.cfg
%config(noreplace) %{_prefix}/etc/validators.txt
%config(noreplace) /etc/opt/ripple/validators.txt
%config(noreplace) /etc/logrotate.d/rippled
%config(noreplace) /usr/lib/systemd/system/rippled.service
%config(noreplace) /usr/lib/systemd/system-preset/50-rippled.preset

%dir /var/log/rippled/
%dir /var/lib/rippled/

%files devel
%{_prefix}/include
%{_prefix}/lib/*.a
%{_prefix}/lib/cmake/ripple

%files reporting
%define _prefix /opt/rippled-reporting
%doc rippled/README.md rippled/LICENSE.md

%{_bindir}/rippled-reporting
/usr/local/bin/rippled-reporting
%config(noreplace) /etc/opt/rippled-reporting/rippled-reporting.cfg
%config(noreplace) %{_prefix}/etc/rippled-reporting.cfg
%config(noreplace) %{_prefix}/etc/validators.txt
%config(noreplace) /etc/opt/rippled-reporting/validators.txt
%config(noreplace) /usr/lib/systemd/system/rippled-reporting.service
%config(noreplace) /usr/lib/systemd/system-preset/50-rippled-reporting.preset
%dir /var/log/rippled-reporting/
%dir /var/lib/rippled-reporting/
%{_bindir}/update-rippled-reporting.sh
%{_bindir}/getRippledReportingInfo
%{_prefix}/etc/update-rippled-reporting-cron
%config(noreplace) /etc/logrotate.d/rippled-reporting

%changelog
* Wed Aug 28 2019 Mike Ellery <mellery451@gmail.com>
- Switch to subproject build for validator-keys

* Wed May 15 2019 Mike Ellery <mellery451@gmail.com>
- Make validator-keys use local rippled build for core lib

* Wed Aug 01 2018 Mike Ellery <mellery451@gmail.com>
- add devel package for signing library

* Thu Jun 02 2016 Brandon Wilson <bwilson@ripple.com>
- Install validators.txt
@@ -1,37 +0,0 @@
#!/usr/bin/env bash
set -e

IFS=. read cm_maj cm_min cm_rel <<<"$1"
: ${cm_rel:=0}
CMAKE_ROOT=${2:-"${HOME}/cmake"}

function cmake_version ()
{
    if [[ -d ${CMAKE_ROOT} ]] ; then
        local perms=$(test $(uname) = "Linux" && echo "/111" || echo "+111")
        local installed=$(find ${CMAKE_ROOT} -perm ${perms} -type f -name cmake)
        if [[ "${installed}" != "" ]] ; then
            echo "$(${installed} --version | head -1)"
        fi
    fi
}

installed=$(cmake_version)
if [[ "${installed}" != "" && ${installed} =~ ${cm_maj}.${cm_min}.${cm_rel} ]] ; then
    echo "cmake already installed: ${installed}"
    exit
fi
# From CMake 3.20 on, "Linux" is lowercase in the release tarball name,
# so using `uname` won't produce the correct path
if [ ${cm_min} -gt 19 ]; then
    linux="linux"
else
    linux=$(uname)
fi
pkgname="cmake-${cm_maj}.${cm_min}.${cm_rel}-${linux}-x86_64.tar.gz"
tmppkg="/tmp/cmake.tar.gz"
wget --quiet https://cmake.org/files/v${cm_maj}.${cm_min}/${pkgname} -O ${tmppkg}
mkdir -p ${CMAKE_ROOT}
cd ${CMAKE_ROOT}
tar --strip-components 1 -xf ${tmppkg}
rm -f ${tmppkg}
echo "installed: $(cmake_version)"
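A minimal usage sketch for the installer above, assuming it is saved as install_cmake.sh (the filename is hypothetical; $1 is the dotted CMake version and the optional $2 overrides the default ${HOME}/cmake install root):

./install_cmake.sh 3.25.1 /opt/local/cmake
export PATH=/opt/local/cmake/bin:${PATH}
cmake --version    # should now report 3.25.1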
@@ -1,15 +0,0 @@
/var/log/rippled/*.log {
  daily
  minsize 200M
  rotate 7
  nocreate
  missingok
  notifempty
  compress
  compresscmd /usr/bin/nice
  compressoptions -n19 ionice -c3 gzip
  compressext .gz
  postrotate
    /opt/ripple/bin/rippled --conf /opt/ripple/etc/rippled.cfg logrotate
  endscript
}
@@ -1,15 +0,0 @@
[Unit]
Description=Ripple Daemon
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/opt/rippled-reporting/bin/rippled-reporting --silent --conf /etc/opt/rippled-reporting/rippled-reporting.cfg
Restart=on-failure
User=rippled-reporting
Group=rippled-reporting
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
@@ -1,15 +0,0 @@
[Unit]
Description=Ripple Daemon
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/opt/ripple/bin/rippled --net --silent --conf /etc/opt/ripple/rippled.cfg
Restart=on-failure
User=rippled
Group=rippled
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
@@ -1,10 +0,0 @@
# For automatic updates, symlink this file to /etc/cron.d/
# Do not remove the newline at the end of this cron script

# bash required for use of RANDOM below.
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin

# invoke check/update script with random delay up to 59 mins
0 * * * * root sleep $((RANDOM*3540/32768)) && /opt/ripple/bin/update-rippled.sh

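The delay arithmetic works because bash's RANDOM is uniform over 0..32767, so $((RANDOM*3540/32768)) lands in 0..3539 seconds, i.e. just under 59 minutes. For example:

sleep $((16384*3540/32768))   # RANDOM=16384 -> sleep 1770s (29.5 minutes)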
@@ -1,65 +0,0 @@
#!/usr/bin/env bash

# auto-update script for rippled daemon

# Check for sudo/root permissions
if [[ $(id -u) -ne 0 ]] ; then
   echo "This update script must be run as root or sudo"
   exit 1
fi

LOCKDIR=/tmp/rippleupdate.lock
UPDATELOG=/var/log/rippled/update.log

function cleanup {
  # If this directory isn't removed, future updates will fail.
  rmdir $LOCKDIR
}

# Use mkdir to check if the process is already running. mkdir is atomic, unlike file creation.
if ! mkdir $LOCKDIR 2>/dev/null; then
  echo $(date -u) "lockdir exists - won't proceed." >> $UPDATELOG
  exit 1
fi
trap cleanup EXIT

source /etc/os-release
can_update=false

if [[ "$ID" == "ubuntu" || "$ID" == "debian" ]] ; then
  # Silent update
  apt-get update -qq

  # The next line is an "awk"ward way to check if the package needs to be updated.
  RIPPLE=$(apt-get install -s --only-upgrade rippled | awk '/^Inst/ { print $2 }')
  test "$RIPPLE" == "rippled" && can_update=true

  function apply_update {
    apt-get install rippled -qq
  }
elif [[ "$ID" == "fedora" || "$ID" == "centos" || "$ID" == "rhel" || "$ID" == "scientific" ]] ; then
  RIPPLE_REPO=${RIPPLE_REPO-stable}
  yum --disablerepo=* --enablerepo=ripple-$RIPPLE_REPO clean expire-cache

  yum check-update -q --enablerepo=ripple-$RIPPLE_REPO rippled || can_update=true

  function apply_update {
    yum update -y --enablerepo=ripple-$RIPPLE_REPO rippled
  }
else
  echo "unrecognized distro!"
  exit 1
fi

# Do the actual update and restart the service after reloading the systemd daemon.
if [ "$can_update" = true ] ; then
  exec 3>&1 1>>${UPDATELOG} 2>&1
  set -e
  apply_update
  systemctl daemon-reload
  systemctl restart rippled.service
  echo $(date -u) "rippled daemon updated."
else
  echo $(date -u) "no updates available" >> $UPDATELOG
fi

@@ -1,20 +0,0 @@
#!/usr/bin/env bash

function error {
    echo $1
    exit 1
}

cd /opt/rippled_bld/pkg/rippled
export RIPPLED_VERSION=$(egrep -i -o "\b(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)(-[0-9a-z\-]+(\.[0-9a-z\-]+)*)?(\+[0-9a-z\-]+(\.[0-9a-z\-]+)*)?\b" src/ripple/protocol/impl/BuildInfo.cpp)

: ${PKG_OUTDIR:=/opt/rippled_bld/pkg/out}
export PKG_OUTDIR
if [ ! -d ${PKG_OUTDIR} ]; then
    error "${PKG_OUTDIR} is not mounted"
fi

if [ -x ${OPENSSL_ROOT}/bin/openssl ]; then
    LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${OPENSSL_ROOT}/lib ${OPENSSL_ROOT}/bin/openssl version -a
fi

@@ -1,15 +0,0 @@
ARG DIST_TAG=18.04
FROM ubuntu:$DIST_TAG
ARG GIT_COMMIT=unknown
ARG CI_USE=false
LABEL git-commit=$GIT_COMMIT

WORKDIR /root
COPY ubuntu-builder/ubuntu_setup.sh .
RUN ./ubuntu_setup.sh && rm ubuntu_setup.sh

RUN mkdir -m 777 -p /opt/rippled_bld/pkg/
WORKDIR /opt/rippled_bld/pkg

COPY packaging/dpkg/build_dpkg.sh ./
CMD ./build_dpkg.sh
@@ -1,76 +0,0 @@
#!/usr/bin/env bash

set -o errexit
set -o nounset
set -o xtrace

# Parameters

gcc_version=${GCC_VERSION:-10}
cmake_version=${CMAKE_VERSION:-3.25.1}
conan_version=${CONAN_VERSION:-1.59}

apt update
# Iteratively build the list of packages to install so that we can interleave
# the lines with comments explaining their inclusion.
dependencies=''
# - to identify the Ubuntu version
dependencies+=' lsb-release'
# - for add-apt-repository
dependencies+=' software-properties-common'
# - to download CMake
dependencies+=' curl'
# - to build CMake
dependencies+=' libssl-dev'
# - Python headers for Boost.Python
dependencies+=' python3-dev'
# - to install Conan
dependencies+=' python3-pip'
# - to download rippled
dependencies+=' git'
# - CMake generators (but not CMake itself)
dependencies+=' make ninja-build'
apt install --yes ${dependencies}

add-apt-repository --yes ppa:ubuntu-toolchain-r/test
apt install --yes gcc-${gcc_version} g++-${gcc_version} \
  debhelper debmake debsums gnupg dh-buildinfo dh-make dh-systemd cmake \
  ninja-build zlib1g-dev make autoconf automake \
  pkg-config apt-transport-https

# Give us nice unversioned aliases for gcc and company.
update-alternatives --install \
  /usr/bin/gcc gcc /usr/bin/gcc-${gcc_version} 100 \
  --slave /usr/bin/g++ g++ /usr/bin/g++-${gcc_version} \
  --slave /usr/bin/gcc-ar gcc-ar /usr/bin/gcc-ar-${gcc_version} \
  --slave /usr/bin/gcc-nm gcc-nm /usr/bin/gcc-nm-${gcc_version} \
  --slave /usr/bin/gcc-ranlib gcc-ranlib /usr/bin/gcc-ranlib-${gcc_version} \
  --slave /usr/bin/gcov gcov /usr/bin/gcov-${gcc_version} \
  --slave /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-${gcc_version} \
  --slave /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-${gcc_version}
update-alternatives --auto gcc

# Download and unpack CMake.
cmake_slug="cmake-${cmake_version}"
curl --location --remote-name \
  "https://github.com/Kitware/CMake/releases/download/v${cmake_version}/${cmake_slug}.tar.gz"
tar xzf ${cmake_slug}.tar.gz
rm ${cmake_slug}.tar.gz

# Build and install CMake.
cd ${cmake_slug}
./bootstrap --parallel=$(nproc)
make --jobs $(nproc)
make install
cd ..
rm --recursive --force ${cmake_slug}

# Install Conan.
pip3 install conan==${conan_version}

conan profile new --detect gcc
conan profile update settings.compiler=gcc gcc
conan profile update settings.compiler.version=${gcc_version} gcc
conan profile update settings.compiler.libcxx=libstdc++11 gcc
conan profile update env.CC=/usr/bin/gcc gcc
conan profile update env.CXX=/usr/bin/g++ gcc
@@ -1 +0,0 @@
[Build instructions are currently located in `BUILD.md`](../../BUILD.md)
@@ -1 +0,0 @@
[Build instructions are currently located in `BUILD.md`](../../BUILD.md)

211 CONTRIBUTING.md
@@ -1,67 +1,186 @@
# Contributing
The XRP Ledger has many and diverse stakeholders, and everyone deserves a chance to contribute meaningful changes to the code that runs the XRPL.
To contribute, please:
1. Fork the repository under your own user.
2. Create a new branch on which to write your changes. Please note that changes which alter transaction processing must be composed via and guarded using [Amendments](https://xrpl.org/amendments.html). Changes which are _read only_ i.e. RPC, or changes which are only refactors and maintain the existing behaviour do not need to be made through an Amendment.
3. Write and test your code.
4. Ensure that your code compiles with the provided build engine and update the provided build engine as part of your PR where needed and where appropriate.
5. Write test cases for your code and include those in `src/test` such that they are runnable from the command line using `./rippled -u`. (Some changes will not be able to be tested this way.)
6. Ensure your code passes automated checks (e.g. clang-format and levelization.)
7. Squash your commits (i.e. rebase) into as few commits as is reasonable to describe your changes at a high level (typically a single commit for a small change.)
8. Open a PR to the main repository onto the _develop_ branch, and follow the provided template.
The XRP Ledger has many and diverse stakeholders, and everyone deserves
a chance to contribute meaningful changes to the code that runs the
XRPL.

# Contributing

We assume you are familiar with the general practice of [making
contributions on GitHub][1]. This file includes only special
instructions specific to this project.


## Before you start

In general, contributions should be developed in your personal
[fork](https://github.com/XRPLF/rippled/fork).

The following branches exist in the main project repository:

- `dev`: The latest set of unreleased features, and the most common
    starting point for contributions.
- `candidate`: The latest beta release or release candidate.
- `release`: The latest stable release.

The tip of each branch must be signed. In order for GitHub to sign a
squashed commit that it builds from your pull request, GitHub must know
your verifying key. Please set up [signature verification][signing].
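For example, one way to satisfy this with a GPG key (the key ID here is a
placeholder):

```
git config --global user.signingkey <KEY_ID>
git config --global commit.gpgsign true
```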

[rippled]: https://github.com/XRPLF/rippled
[signing]:
    https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification


## Major contributions

If your contribution is a major feature or breaking change, then you
must first write an XRP Ledger Standard (XLS) describing it. Go to
[XRPL-Standards](https://github.com/XRPLF/XRPL-Standards/discussions),
choose the next available standard number, and open a discussion with an
appropriate title to propose your draft standard.

When you submit a pull request, please link the corresponding XLS in the
description. An XLS still in draft status is considered a
work-in-progress and open for discussion. Please allow time for
questions, suggestions, and changes to the XLS draft. It is the
responsibility of the XLS author to update the draft to match the final
implementation when its corresponding pull request is merged, unless the
author delegates that responsibility to others.


## Before making a pull request

Changes that alter transaction processing must be guarded by an
[Amendment](https://xrpl.org/amendments.html).
All other changes that maintain the existing behavior do not need an
Amendment.

Ensure that your code compiles according to the build instructions in
[`BUILD.md`](./BUILD.md).
If you create new source files, they must go under `src/ripple`.
You will need to add them to one of the
[source lists](./Builds/CMake/RippledCore.cmake) in CMake.

Please write tests for your code.
If you create new test source files, they must go under `src/test`.
You will need to add them to one of the
[source lists](./Builds/CMake/RippledCore.cmake) in CMake.
If your test can be run offline, in under 60 seconds, then it can be an
automatic test run by `rippled --unittest`.
Otherwise, it must be a manual test.
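For example (the suite name here is illustrative):

```
# Run the full automatic suite:
./rippled --unittest
# Run a single suite, or a comma-separated list of suites:
./rippled --unittest=ripple.tx.Offer
```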

The source must be formatted according to the style guide below.

Header includes must be [levelized](./Builds/levelization).


## Pull requests

In general, pull requests use `develop` as the base branch.

(Hotfixes are an exception.)

Changes to pull requests must be added as new commits.
Once code reviewers have started looking at your code, please avoid
force-pushing a branch in a pull request.
This preserves the ability for reviewers to filter changes since their last
review.

A pull request must obtain **approvals from at least two reviewers** before it
can be considered for merge by a Maintainer.
Maintainers retain discretion to require more approvals if they feel the
credibility of the existing approvals is insufficient.

Pull requests must be merged by [squash-and-merge][2]
to preserve a linear history for the `develop` branch.

# Major Changes
If your code change is a major feature, a breaking change or in some other way makes a significant alteration to the way the XRPL will operate, then you must first write an XLS document (XRP Ledger Standard) describing your change.
To do this:
1. Go to [XLS Standards](https://github.com/XRPLF/XRPL-Standards/discussions).
2. Choose the next available standard number.
3. Open a discussion with the appropriate title to propose your draft standard.
4. Link your XLS in your PR.

# Style guide
This is a non-exhaustive list of recommended style guidelines. These are not always strictly enforced and serve as a way to keep the codebase coherent rather than a set of _thou shalt not_ commandments.

This is a non-exhaustive list of recommended style guidelines. These are
not always strictly enforced and serve as a way to keep the codebase
coherent rather than a set of _thou shalt not_ commandments.


## Formatting
All code must conform to `clang-format` version 10, unless the result would be unreasonably difficult to read or maintain.
To change your code to conform, use `clang-format -i <your changed files>`.

All code must conform to `clang-format` version 10,
according to the settings in [`.clang-format`](./.clang-format),
unless the result would be unreasonably difficult to read or maintain.
To demarcate lines that should be left as-is, surround them with comments like
this:

```
// clang-format off
...
// clang-format on
```

You can format individual files in place by running `clang-format -i <file>...`
from any directory within this project.
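For example, to reformat only the files changed relative to the base branch (a
sketch; the branch name and extensions are assumptions):

```
git diff --name-only develop -- '*.cpp' '*.h' '*.ipp' | xargs clang-format -i
```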

You can install a pre-commit hook to automatically run `clang-format` before every commit:
```
pip3 install pre-commit
pre-commit install
```
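Once installed, the hook runs on every `git commit`; you can also format the
whole tree at any time with `pre-commit run --all-files`.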

## Avoid

1. Proliferation of nearly identical code.
2. Proliferation of new files and classes.
3. Complex inheritance and complex OOP patterns.
4. Unmanaged memory allocation and raw pointers.
5. Macros and non-trivial templates (unless they add significant value.)
6. Lambda patterns (unless these add significant value.)
7. CPU or architecture-specific code unless there is a good reason to include it, and where it is used guard it with macros and provide explanatory comments.
5. Macros and non-trivial templates (unless they add significant value).
6. Lambda patterns (unless these add significant value).
7. CPU or architecture-specific code unless there is a good reason to
   include it, and where it is used, guard it with macros and provide
   explanatory comments.
8. Importing new libraries unless there is a very good reason to do so.


## Seek to

9. Extend functionality of existing code rather than creating new code.
10. Prefer readability over terseness where important logic is concerned.
11. Inline functions that are not used or are not likely to be used elsewhere in the codebase.
12. Use clear and self-explanatory names for functions, variables, structs and classes.
13. Use TitleCase for classes, structs and filenames, camelCase for function and variable names, lower case for namespaces and folders.
14. Provide as many comments as you feel that a competent programmer would need to understand what your code does.
10. Prefer readability over terseness where important logic is
    concerned.
11. Inline functions that are not used or are not likely to be used
    elsewhere in the codebase.
12. Use clear and self-explanatory names for functions, variables,
    structs and classes.
13. Use TitleCase for classes, structs and filenames, camelCase for
    function and variable names, lower case for namespaces and folders.
14. Provide as many comments as you feel that a competent programmer
    would need to understand what your code does.


# Maintainers
Maintainers are ecosystem participants with elevated access to the repository. They are able to push new code, make decisions on when a release should be made, etc.

## Code Review
New contributors' PRs must be reviewed by at least two of the maintainers. Well established prior contributors can be reviewed by a single maintainer.
Maintainers are ecosystem participants with elevated access to the repository.
They are able to push new code, make decisions on when a release should be
made, etc.

## Adding and Removing
New maintainers can be proposed by two existing maintainers, subject to a vote by a quorum of the existing maintainers. A minimum of 50% support and a 50% participation is required. In the event of a tie vote, the addition of the new maintainer will be rejected.

Existing maintainers can resign, or be subject to a vote for removal at the behest of two existing maintainers. A minimum of 60% agreement and 50% participation are required. The XRP Ledger Foundation will have the ability, for cause, to remove an existing maintainer without a vote.
## Adding and removing

## Existing Maintainers
* [JoelKatz](https://github.com/JoelKatz) (Ripple)
* [Manojsdoshi](https://github.com/manojsdoshi) (Ripple)
* [N3tc4t](https://github.com/n3tc4t) (XRPL Labs)
* [Nikolaos D Bougalis](https://github.com/nbougalis)
* [Nixer89](https://github.com/nixer89) (XRP Ledger Foundation)
* [RichardAH](https://github.com/RichardAH) (XRPL Labs + XRP Ledger Foundation)
* [Seelabs](https://github.com/seelabs) (Ripple)
* [Silkjaer](https://github.com/Silkjaer) (XRP Ledger Foundation)
* [WietseWind](https://github.com/WietseWind) (XRPL Labs + XRP Ledger Foundation)
* [Ximinez](https://github.com/ximinez) (Ripple)
New maintainers can be proposed by two existing maintainers, subject to a vote
by a quorum of the existing maintainers.
A minimum of 50% support and 50% participation is required.
In the event of a tie vote, the addition of the new maintainer will be
rejected.

Existing maintainers can resign, or be subject to a vote for removal at the
behest of two existing maintainers.
A minimum of 60% agreement and 50% participation are required.
The XRP Ledger Foundation will have the ability, for cause, to remove an
existing maintainer without a vote.


## Current Maintainers

* [Richard Holland](https://github.com/RichardAH) (XRPL Labs + XRP Ledger Foundation)
* [Denis Angell](https://github.com/dangell7) (XRPL Labs + XRP Ledger Foundation)
* [Wietse Wind](https://github.com/WietseWind) (XRPL Labs + XRP Ledger Foundation)


[1]: https://docs.github.com/en/get-started/quickstart/contributing-to-projects
[2]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/about-pull-request-merges#squash-and-merge-your-commits
@@ -2,6 +2,7 @@ ISC License

Copyright (c) 2011, Arthur Britto, David Schwartz, Jed McCaleb, Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant.
Copyright (c) 2012-2020, the XRP Ledger developers.
Copyright (c) 2021-2024, XRPL Labs.

Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above

470 bin/browser.js
@@ -1,470 +0,0 @@
#!/usr/bin/node
//
// ledger?l=L
// transaction?h=H
// ledger_entry?l=L&h=H
// account?l=L&a=A
// directory?l=L&dir_root=H&i=I
// directory?l=L&o=A&i=I     // owner directory
// offer?l=L&offer=H
// offer?l=L&account=A&i=I
// ripple_state?l=L&a=A&b=A&c=C
// account_lines?l=L&a=A
//
// A=address
// C=currency 3 letter code
// H=hash
// I=index
// L=current | closed | validated | index | hash
//
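// For example, with the server listening on the defaults used below
// (127.0.0.1:8080), one might query (illustrative only):
//   curl 'http://127.0.0.1:8080/ledger?l=validated'
//   curl 'http://127.0.0.1:8080/account?l=validated&a=rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh'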

var async     = require("async");
var extend    = require("extend");
var http      = require("http");
var url       = require("url");

var Remote    = require("ripple-lib").Remote;

var program   = process.argv[1];

var httpd_response = function (res, opts) {
  var self=this;

  res.statusCode = opts.statusCode;
  res.end(
    "<HTML>"
      + "<HEAD><TITLE>Title</TITLE></HEAD>"
      + "<BODY BACKGROUND=\"#FFFFFF\">"
      + "State:" + self.state
      + "<UL>"
      + "<LI><A HREF=\"/\">home</A>"
      + "<LI>" + html_link('r4EM4gBQfr1QgQLXSPF4r7h84qE9mb6iCC')
//      + "<LI><A HREF=\""+test+"\">rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh</A>"
      + "<LI><A HREF=\"/ledger\">ledger</A>"
      + "</UL>"
      + (opts.body || '')
      + '<HR><PRE>'
      + (opts.url || '')
      + '</PRE>'
      + "</BODY>"
      + "</HTML>"
    );
};

var html_link = function (generic) {
  return '<A HREF="' + build_uri({ type: 'account', account: generic}) + '">' + generic + '</A>';
};

// Build a link to a type.
var build_uri = function (params, opts) {
  var c;

  if (params.type === 'account') {
    c = {
        pathname: 'account',
        query: {
          l: params.ledger,
          a: params.account,
        },
      };

  } else if (params.type === 'ledger') {
    c = {
        pathname: 'ledger',
        query: {
          l: params.ledger,
        },
      };

  } else if (params.type === 'transaction') {
    c = {
        pathname: 'transaction',
        query: {
          h: params.hash,
        },
      };
  } else {
    c = {};
  }

  opts  = opts || {};

  c.protocol  = "http";
  c.hostname  = opts.hostname || self.base.hostname;
  c.port      = opts.port || self.base.port;

  return url.format(c);
};

var build_link = function (item, link) {
console.log(link);
  return "<A HREF=" + link + ">" + item + "</A>";
};

var rewrite_field = function (type, obj, field, opts) {
  if (field in obj) {
    obj[field]  = rewrite_type(type, obj[field], opts);
  }
};

var rewrite_type = function (type, obj, opts) {
  if ('amount' === type) {
    if ('string' === typeof obj) {
      // XRP.
      return '<B>' + obj + '</B>';

    } else {
      rewrite_field('address', obj, 'issuer', opts);

      return obj;
    }
    return build_link(
      obj,
      build_uri({
          type: 'account',
          account: obj
        }, opts)
    );
  }
  if ('address' === type) {
    return build_link(
      obj,
      build_uri({
          type: 'account',
          account: obj
        }, opts)
    );
  }
  else if ('ledger' === type) {
    return build_link(
      obj,
      build_uri({
          type: 'ledger',
          ledger: obj,
        }, opts)
      );
  }
  else if ('node' === type) {
    // A node
    if ('PreviousTxnID' in obj)
      obj.PreviousTxnID      = rewrite_type('transaction', obj.PreviousTxnID, opts);

    if ('Offer' === obj.LedgerEntryType) {
      if ('NewFields' in obj) {
        if ('TakerGets' in obj.NewFields)
          obj.NewFields.TakerGets = rewrite_type('amount', obj.NewFields.TakerGets, opts);

        if ('TakerPays' in obj.NewFields)
          obj.NewFields.TakerPays = rewrite_type('amount', obj.NewFields.TakerPays, opts);
      }
    }

    obj.LedgerEntryType  = '<B>' + obj.LedgerEntryType + '</B>';

    return obj;
  }
  else if ('transaction' === type) {
    // Reference to a transaction.
    return build_link(
      obj,
      build_uri({
          type: 'transaction',
          hash: obj
        }, opts)
      );
  }

  return 'ERROR: ' + type;
};

var rewrite_object = function (obj, opts) {
  var out = extend({}, obj);

  rewrite_field('address', out, 'Account', opts);

  rewrite_field('ledger', out, 'parent_hash', opts);
  rewrite_field('ledger', out, 'ledger_index', opts);
  rewrite_field('ledger', out, 'ledger_current_index', opts);
  rewrite_field('ledger', out, 'ledger_hash', opts);

  if ('ledger' in obj) {
    // It's a ledger header.
    out.ledger  = rewrite_object(out.ledger, opts);

    if ('ledger_hash' in out.ledger)
      out.ledger.ledger_hash = '<B>' + out.ledger.ledger_hash + '</B>';

    delete out.ledger.hash;
    delete out.ledger.totalCoins;
  }

  if ('TransactionType' in obj) {
    // It's a transaction.
    out.TransactionType = '<B>' + obj.TransactionType + '</B>';

    rewrite_field('amount', out, 'TakerGets', opts);
    rewrite_field('amount', out, 'TakerPays', opts);
    rewrite_field('ledger', out, 'inLedger', opts);

    out.meta.AffectedNodes = out.meta.AffectedNodes.map(function (node) {
        var kind  = 'CreatedNode' in node
          ? 'CreatedNode'
          : 'ModifiedNode' in node
            ? 'ModifiedNode'
            : 'DeletedNode' in node
              ? 'DeletedNode'
              : undefined;

        if (kind) {
          node[kind]  = rewrite_type('node', node[kind], opts);
        }
        return node;
      });
  }
  else if ('node' in obj && 'LedgerEntryType' in obj.node) {
    // It's a ledger entry.

    if (obj.node.LedgerEntryType === 'AccountRoot') {
      rewrite_field('address', out.node, 'Account', opts);
      rewrite_field('transaction', out.node, 'PreviousTxnID', opts);
      rewrite_field('ledger', out.node, 'PreviousTxnLgrSeq', opts);
    }

    out.node.LedgerEntryType = '<B>' + out.node.LedgerEntryType + '</B>';
  }

  return out;
};

var augment_object = function (obj, opts, done) {
  if (obj.node.LedgerEntryType == 'AccountRoot') {
    var   tx_hash   = obj.node.PreviousTxnID;
    var   tx_ledger = obj.node.PreviousTxnLgrSeq;

    obj.history                 = [];

    async.whilst(
      function () { return tx_hash; },
      function (callback) {
// console.log("augment_object: request: %s %s", tx_hash, tx_ledger);
        opts.remote.request_tx(tx_hash)
          .on('success', function (m) {
              tx_hash   = undefined;
              tx_ledger = undefined;

//console.log("augment_object: ", JSON.stringify(m));
              m.meta.AffectedNodes.filter(function(n) {
// console.log("augment_object: ", JSON.stringify(n));
// if (n.ModifiedNode)
// console.log("augment_object: %s %s %s %s %s %s/%s", 'ModifiedNode' in n, n.ModifiedNode && (n.ModifiedNode.LedgerEntryType === 'AccountRoot'), n.ModifiedNode && n.ModifiedNode.FinalFields && (n.ModifiedNode.FinalFields.Account === obj.node.Account), Object.keys(n)[0], n.ModifiedNode && (n.ModifiedNode.LedgerEntryType), obj.node.Account, n.ModifiedNode && n.ModifiedNode.FinalFields && n.ModifiedNode.FinalFields.Account);
// if ('ModifiedNode' in n && n.ModifiedNode.LedgerEntryType === 'AccountRoot')
// {
//   console.log("***: ", JSON.stringify(m));
//   console.log("***: ", JSON.stringify(n));
// }
                  return 'ModifiedNode' in n
                    && n.ModifiedNode.LedgerEntryType === 'AccountRoot'
                    && n.ModifiedNode.FinalFields
                    && n.ModifiedNode.FinalFields.Account === obj.node.Account;
                })
              .forEach(function (n) {
                  tx_hash   = n.ModifiedNode.PreviousTxnID;
                  tx_ledger = n.ModifiedNode.PreviousTxnLgrSeq;

                  obj.history.push({
                      tx_hash:    tx_hash,
                      tx_ledger:  tx_ledger
                    });
console.log("augment_object: next: %s %s", tx_hash, tx_ledger);
                });

              callback();
            })
          .on('error', function (m) {
              callback(m);
            })
          .request();
      },
      function (err) {
        if (err) {
          done();
        }
        else {
          async.forEach(obj.history, function (o, callback) {
              opts.remote.request_account_info(obj.node.Account)
                .ledger_index(o.tx_ledger)
                .on('success', function (m) {
//console.log("augment_object: ", JSON.stringify(m));
                    o.Balance       = m.account_data.Balance;
//                    o.account_data  = m.account_data;
                    callback();
                  })
                .on('error', function (m) {
                    o.error = m;
                    callback();
                  })
                .request();
            },
            function (err) {
              done(err);
            });
        }
      });
  }
  else {
    done();
  }
};

if (process.argv.length < 4 || process.argv.length > 7) {
  console.log("Usage: %s ws_ip ws_port [<ip> [<port> [<start>]]]", program);
}
else {
  var ws_ip   = process.argv[2];
  var ws_port = process.argv[3];
  var ip      = process.argv.length > 4 ? process.argv[4] : "127.0.0.1";
  var port    = process.argv.length > 5 ? process.argv[5] : "8080";

// console.log("START");
  var self  = this;

  var remote  = (new Remote({
                    websocket_ip: ws_ip,
                    websocket_port: ws_port,
                    trace: false
                  }))
                  .on('state', function (m) {
                      console.log("STATE: %s", m);

                      self.state   = m;
                    })
//                  .once('ledger_closed', callback)
                  .connect()
                  ;

  self.base = {
      hostname: ip,
      port:     port,
      remote:   remote,
    };

// console.log("SERVE");
  var server  = http.createServer(function (req, res) {
      var input = "";

      req.setEncoding();

      req.on('data', function (buffer) {
          // console.log("DATA: %s", buffer);
          input = input + buffer;
        });

      req.on('end', function () {
          // console.log("URL: %s", req.url);
          // console.log("HEADERS: %s", JSON.stringify(req.headers, undefined, 2));

          var _parsed = url.parse(req.url, true);
          var _url    = JSON.stringify(_parsed, undefined, 2);

          // console.log("HEADERS: %s", JSON.stringify(_parsed, undefined, 2));
          if (_parsed.pathname === "/account") {
              var request = remote
                .request_ledger_entry('account_root')
                .ledger_index(-1)
                .account_root(_parsed.query.a)
                .on('success', function (m) {
                    // console.log("account_root: %s", JSON.stringify(m, undefined, 2));

                    augment_object(m, self.base, function() {
                      httpd_response(res,
                          {
                            statusCode: 200,
                            url: _url,
                            body: "<PRE>"
                              + JSON.stringify(rewrite_object(m, self.base), undefined, 2)
                              + "</PRE>"
                          });
                    });
                  })
                .request();

          } else if (_parsed.pathname === "/ledger") {
            var request = remote
              .request_ledger(undefined, { expand: true, transactions: true })
              .on('success', function (m) {
                  // console.log("Ledger: %s", JSON.stringify(m, undefined, 2));

                  httpd_response(res,
                      {
                        statusCode: 200,
                        url: _url,
                        body: "<PRE>"
                          + JSON.stringify(rewrite_object(m, self.base), undefined, 2)
                          +"</PRE>"
                      });
                });

            if (_parsed.query.l && _parsed.query.l.length === 64) {
              request.ledger_hash(_parsed.query.l);
            }
            else if (_parsed.query.l) {
              request.ledger_index(Number(_parsed.query.l));
            }
            else {
              request.ledger_index(-1);
            }

            request.request();

          } else if (_parsed.pathname === "/transaction") {
              var request = remote
                .request_tx(_parsed.query.h)
//                .request_transaction_entry(_parsed.query.h)
//              .ledger_select(_parsed.query.l)
                .on('success', function (m) {
                    // console.log("transaction: %s", JSON.stringify(m, undefined, 2));

                    httpd_response(res,
                        {
                          statusCode: 200,
                          url: _url,
                          body: "<PRE>"
                            + JSON.stringify(rewrite_object(m, self.base), undefined, 2)
                            +"</PRE>"
                        });
                  })
                .on('error', function (m) {
                    httpd_response(res,
                        {
                          statusCode: 200,
                          url: _url,
                          body: "<PRE>"
                            + 'ERROR: ' + JSON.stringify(m, undefined, 2)
                            +"</PRE>"
                        });
                  })
                .request();

          } else {
            var test  = build_uri({
                type: 'account',
                ledger: 'closed',
                account: 'rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh',
              }, self.base);

            httpd_response(res,
                {
                  statusCode: req.url === "/" ? 200 : 404,
                  url: _url,
                });
          }
        });
    });

  server.listen(port, ip, undefined,
    function () {
      console.log("Listening at: http://%s:%s", ip, port);
    });
}

// vim:sw=2:sts=2:ts=8:et
@@ -1,24 +0,0 @@
In this directory are two scripts, `build.sh` and `test.sh`, used for building
and testing rippled.

(For now, they assume Bash and Linux. Once I get Windows containers for
testing, I'll try them there, but if Bash is not available, then they will
soon be joined by PowerShell scripts `build.ps` and `test.ps`.)

We don't want these scripts to require arcane invocations that can only be
pieced together from within a CI configuration. We want something that humans
can easily invoke, read, and understand, for when we eventually have to test
and debug them interactively. That means:

(1) They should work with no arguments.
(2) They should document their arguments.
(3) They should expand short arguments into long arguments.

While we want to provide options for common use cases, we don't need to offer
the kitchen sink. We can rightfully expect users with esoteric, complicated
needs to write their own scripts.

To make argument-handling easy for us, the implementers, we can just take all
arguments from environment variables. They have the nice advantage that every
command line uses named arguments. For the benefit of us and our users, we
document those variables at the top of each script.
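For example, run from this directory (variables left unset fall back to the
defaults documented in each script):

    COMPILER=clang BUILD_TYPE=Release ./build.sh
    MANUAL_TESTS=true ./test.sh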
@@ -1,31 +0,0 @@
#!/usr/bin/env bash

set -o xtrace
set -o errexit

# The build system. Either 'Unix Makefiles' or 'Ninja'.
GENERATOR=${GENERATOR:-Unix Makefiles}
# The compiler. Either 'gcc' or 'clang'.
COMPILER=${COMPILER:-gcc}
# The build type. Either 'Debug' or 'Release'.
BUILD_TYPE=${BUILD_TYPE:-Debug}
# Additional arguments to CMake.
# We use the `-` substitution here instead of `:-` so that callers can erase
# the default by setting `$CMAKE_ARGS` to the empty string.
CMAKE_ARGS=${CMAKE_ARGS-'-Dwerr=ON'}
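# For example:
#   (CMAKE_ARGS unset)  -> cmake receives -Dwerr=ON (the default applies)
#   CMAKE_ARGS=''       -> cmake receives no extra args; with ':-' the empty
#                          string would have been replaced by the default again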

# https://gitlab.kitware.com/cmake/cmake/issues/18865
CMAKE_ARGS="-DBoost_NO_BOOST_CMAKE=ON ${CMAKE_ARGS}"

if [[ ${COMPILER} == 'gcc' ]]; then
  export CC='gcc'
  export CXX='g++'
elif [[ ${COMPILER} == 'clang' ]]; then
  export CC='clang'
  export CXX='clang++'
fi

mkdir build
cd build
cmake -G "${GENERATOR}" -DCMAKE_BUILD_TYPE=${BUILD_TYPE} ${CMAKE_ARGS} ..
cmake --build . -- -j $(nproc)
@@ -1,41 +0,0 @@
#!/usr/bin/env bash

set -o xtrace
set -o errexit

# Set to 'true' to run the known "manual" tests in rippled.
MANUAL_TESTS=${MANUAL_TESTS:-false}
# The maximum number of concurrent tests.
CONCURRENT_TESTS=${CONCURRENT_TESTS:-$(nproc)}
# The path to rippled.
RIPPLED=${RIPPLED:-build/rippled}
# Additional arguments to rippled.
RIPPLED_ARGS=${RIPPLED_ARGS:-}

function join_by { local IFS="$1"; shift; echo "$*"; }
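# e.g. join_by , a b c  ->  a,b,c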

declare -a manual_tests=(
  'beast.chrono.abstract_clock'
  'beast.unit_test.print'
  'ripple.NodeStore.Timing'
  'ripple.app.Flow_manual'
  'ripple.app.NoRippleCheckLimits'
  'ripple.app.PayStrandAllPairs'
  'ripple.consensus.ByzantineFailureSim'
  'ripple.consensus.DistributedValidators'
  'ripple.consensus.ScaleFreeSim'
  'ripple.tx.CrossingLimits'
  'ripple.tx.FindOversizeCross'
  'ripple.tx.Offer_manual'
  'ripple.tx.OversizeMeta'
  'ripple.tx.PlumpBook'
)

if [[ ${MANUAL_TESTS} == 'true' ]]; then
  RIPPLED_ARGS+=" --unittest=$(join_by , "${manual_tests[@]}")"
else
  RIPPLED_ARGS+=" --unittest --quiet --unittest-log"
fi
RIPPLED_ARGS+=" --unittest-jobs ${CONCURRENT_TESTS}"

${RIPPLED} ${RIPPLED_ARGS}
@@ -1,274 +0,0 @@
#!/usr/bin/env bash
set -ex

function version_ge() { test "$(echo "$@" | tr " " "\n" | sort -rV | head -n 1)" == "$1"; }
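# version_ge A B succeeds when A >= B under a natural version sort (sort -V).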

__dirname=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
echo "using CC: ${CC}"
"${CC}" --version
export CC

COMPNAME=$(basename $CC)
echo "using CXX: ${CXX:-notset}"
if [[ $CXX ]]; then
   "${CXX}" --version
   export CXX
fi
: ${BUILD_TYPE:=Debug}
echo "BUILD TYPE: ${BUILD_TYPE}"

: ${TARGET:=install}
echo "BUILD TARGET: ${TARGET}"

JOBS=${NUM_PROCESSORS:-2}
if [[ ${TRAVIS:-false} != "true" ]]; then
    JOBS=$((JOBS+1))
fi

if [[ ! -z "${CMAKE_EXE:-}" ]] ; then
    export PATH="$(dirname ${CMAKE_EXE}):$PATH"
fi

if [ -x /usr/bin/time ] ; then
    : ${TIME:="Duration: %E"}
    export TIME
    time=/usr/bin/time
else
    time=
fi

echo "Building rippled"
: ${CMAKE_EXTRA_ARGS:=""}
if [[ ${NINJA_BUILD:-} == true ]]; then
    CMAKE_EXTRA_ARGS+=" -G Ninja"
fi

coverage=false
if [[ "${TARGET}" == "coverage_report" ]] ; then
    echo "coverage option detected."
    coverage=true
fi

cmake --version
CMAKE_VER=$(cmake --version | cut -d " " -f 3 | head -1)

#
# allow explicit setting of the name of the build
# dir, otherwise default to the compiler.build_type
#
: "${BUILD_DIR:=${COMPNAME}.${BUILD_TYPE}}"
BUILDARGS="--target ${TARGET}"
BUILDTOOLARGS=""
if version_ge $CMAKE_VER "3.12.0" ; then
    BUILDARGS+=" --parallel"
fi

if [[ ${NINJA_BUILD:-} == false ]]; then
    if version_ge $CMAKE_VER "3.12.0" ; then
        BUILDARGS+=" ${JOBS}"
    else
        BUILDTOOLARGS+=" -j ${JOBS}"
    fi
fi

if [[ ${VERBOSE_BUILD:-} == true ]]; then
    CMAKE_EXTRA_ARGS+=" -DCMAKE_VERBOSE_MAKEFILE=ON"
    if version_ge $CMAKE_VER "3.14.0" ; then
        BUILDARGS+=" --verbose"
    else
        if [[ ${NINJA_BUILD:-} == false ]]; then
            BUILDTOOLARGS+=" verbose=1"
        else
            BUILDTOOLARGS+=" -v"
        fi
    fi
fi

if [[ ${USE_CCACHE:-} == true ]]; then
    echo "using ccache with basedir [${CCACHE_BASEDIR:-}]"
    CMAKE_EXTRA_ARGS+=" -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache"
fi
if [ -d "build/${BUILD_DIR}" ]; then
    rm -rf "build/${BUILD_DIR}"
fi

mkdir -p "build/${BUILD_DIR}"
pushd "build/${BUILD_DIR}"

# cleanup possible artifacts
rm -fv CMakeFiles/CMakeOutput.log CMakeFiles/CMakeError.log
# Clean up NIH directories which should be git repos, but aren't
for nih_path in ${NIH_CACHE_ROOT}/*/*/*/src ${NIH_CACHE_ROOT}/*/*/src
do
  for dir in lz4 snappy rocksdb
  do
    if [ -e ${nih_path}/${dir} -a \! -e ${nih_path}/${dir}/.git ]
    then
      ls -la ${nih_path}/${dir}*
      rm -rfv ${nih_path}/${dir}*
    fi
  done
done

# generate
${time} cmake ../.. -DCMAKE_BUILD_TYPE=${BUILD_TYPE} ${CMAKE_EXTRA_ARGS}
# Display the cmake output, to help with debugging if something fails
for file in CMakeOutput.log CMakeError.log
do
  if [ -f CMakeFiles/${file} ]
  then
    ls -l CMakeFiles/${file}
    cat CMakeFiles/${file}
  fi
done
# build
export DESTDIR=$(pwd)/_INSTALLED_

${time} eval cmake --build . ${BUILDARGS} -- ${BUILDTOOLARGS}

if [[ ${TARGET} == "docs" ]]; then
    ## mimic the standard test output for docs build
    ## to make controlling processes like jenkins happy
    if [ -f docs/html/index.html ]; then
        echo "1 case, 1 test total, 0 failures"
    else
        echo "1 case, 1 test total, 1 failures"
    fi
    exit
fi
popd

if [[ "${TARGET}" == "validator-keys" ]] ; then
    export APP_PATH="$PWD/build/${BUILD_DIR}/validator-keys/validator-keys"
else
    export APP_PATH="$PWD/build/${BUILD_DIR}/rippled"
fi
echo "using APP_PATH: ${APP_PATH}"

# See what we've actually built
ldd ${APP_PATH}

: ${APP_ARGS:=}

if [[ "${TARGET}" == "validator-keys" ]] ; then
    APP_ARGS="--unittest"
else
    function join_by { local IFS="$1"; shift; echo "$*"; }

    # This is a list of manual tests
    # in rippled that we want to run
    # ORDER matters here...sorted in approximately
    # descending execution time (longest running tests at top)
    declare -a manual_tests=(
        'ripple.ripple_data.reduce_relay_simulate'
        'ripple.tx.Offer_manual'
        'ripple.tx.CrossingLimits'
        'ripple.tx.PlumpBook'
        'ripple.app.Flow_manual'
        'ripple.tx.OversizeMeta'
        'ripple.consensus.DistributedValidators'
        'ripple.app.NoRippleCheckLimits'
        'ripple.ripple_data.compression'
        'ripple.NodeStore.Timing'
        'ripple.consensus.ByzantineFailureSim'
        'beast.chrono.abstract_clock'
        'beast.unit_test.print'
    )
    if [[ ${TRAVIS:-false} != "true" ]]; then
        # these two tests cause travis CI to run out of memory.
        # TODO: investigate possible workarounds.
        manual_tests=(
            'ripple.consensus.ScaleFreeSim'
            'ripple.tx.FindOversizeCross'
            "${manual_tests[@]}"
        )
    fi

    if [[ ${MANUAL_TESTS:-} == true ]]; then
        APP_ARGS+=" --unittest=$(join_by , "${manual_tests[@]}")"
    else
        APP_ARGS+=" --unittest --quiet --unittest-log"
    fi
    if [[ ${coverage} == false && ${PARALLEL_TESTS:-} == true ]]; then
        APP_ARGS+=" --unittest-jobs ${JOBS}"
    fi

    if [[ ${IPV6_TESTS:-} == true ]]; then
        APP_ARGS+=" --unittest-ipv6"
    fi
fi

if [[ ${coverage} == true && $CC =~ ^gcc ]]; then
    # Push the results (lcov.info) to codecov
    codecov -X gcov # don't even try and look for .gcov files ;)
    find . -name "*.gcda" | xargs rm -f
fi

if [[ ${SKIP_TESTS:-} == true ]]; then
    echo "skipping tests."
    exit
fi

ulimit -a
corepat=$(cat /proc/sys/kernel/core_pattern)
if [[ ${corepat} =~ ^[[:space:]]*\| ]] ; then
    echo "WARNING: core pattern is piping - can't search for core files"
    look_core=false
else
    look_core=true
    coredir=$(dirname ${corepat})
fi
if [[ ${look_core} == true ]]; then
    before=$(ls -A1 ${coredir})
fi

set +e
echo "Running tests for ${APP_PATH}"
if [[ ${MANUAL_TESTS:-} == true && ${PARALLEL_TESTS:-} != true ]]; then
    for t in "${manual_tests[@]}" ; do
        ${APP_PATH} --unittest=${t}
        TEST_STAT=$?
        if [[ $TEST_STAT -ne 0 ]] ; then
            break
        fi
    done
else
    ${APP_PATH} ${APP_ARGS}
    TEST_STAT=$?
fi
set -e

if [[ ${look_core} == true ]]; then
    after=$(ls -A1 ${coredir})
    oIFS="${IFS}"
    IFS=$'\n\r'
    found_core=false
    for l in $(diff -w --suppress-common-lines <(echo "$before") <(echo "$after")) ; do
        if [[ "$l" =~ ^[[:space:]]*\>[[:space:]]*(.+)$ ]] ; then
            corefile="${BASH_REMATCH[1]}"
            echo "FOUND core dump file at '${coredir}/${corefile}'"
            gdb_output=$(/bin/mktemp /tmp/gdb_output_XXXXXXXXXX.txt)
            found_core=true
            gdb \
                -ex "set height 0" \
                -ex "set logging file ${gdb_output}" \
                -ex "set logging on" \
                -ex "print 'ripple::BuildInfo::versionString'" \
                -ex "thread apply all backtrace full" \
                -ex "info inferiors" \
                -ex quit \
                "$APP_PATH" \
                "${coredir}/${corefile}" &> /dev/null

            echo -e "CORE INFO: \n\n $(cat ${gdb_output}) \n\n)"
        fi
    done
    IFS="${oIFS}"
fi

if [[ ${found_core} == true ]]; then
    exit 1
else
    exit $TEST_STAT
fi

@@ -1,36 +0,0 @@
#!/usr/bin/env bash
# run our build script in a docker container
# using travis-ci hosts
set -eux

function join_by { local IFS="$1"; shift; echo "$*"; }

set +x
echo "VERBOSE_BUILD=true" > /tmp/co.env
matchers=(
   'TRAVIS.*' 'CI' 'CC' 'CXX'
   'BUILD_TYPE' 'TARGET' 'MAX_TIME'
   'CODECOV.+' 'CMAKE.*' '.+_TESTS'
   '.+_OPTIONS' 'NINJA.*' 'NUM_.+'
   'NIH_.+' 'BOOST.*' '.*CCACHE.*')

matchstring=$(join_by '|' "${matchers[@]}")
echo "MATCHSTRING IS:: $matchstring"
env | grep -E "^(${matchstring})=" >> /tmp/co.env
set -x
# need to eliminate TRAVIS_CMD; we don't want to pass it to the container
grep -v TRAVIS_CMD /tmp/co.env > /tmp/co.env.2
mv /tmp/co.env.2 /tmp/co.env
cat /tmp/co.env
mkdir -p -m 0777 ${TRAVIS_BUILD_DIR}/cores
echo "${TRAVIS_BUILD_DIR}/cores/%e.%p" | sudo tee /proc/sys/kernel/core_pattern
docker run \
    -t --env-file /tmp/co.env \
    -v ${TRAVIS_HOME}:${TRAVIS_HOME} \
    -w ${TRAVIS_BUILD_DIR} \
    --cap-add SYS_PTRACE \
    --ulimit "core=-1" \
    $DOCKER_IMAGE \
    /bin/bash -c 'if [[ $CC =~ ([[:alpha:]]+)-([[:digit:].]+) ]] ; then sudo update-alternatives --set ${BASH_REMATCH[1]} /usr/bin/$CC; fi; bin/ci/ubuntu/build-and-test.sh'

@@ -1,44 +0,0 @@
#!/usr/bin/env bash
# some cached files create churn, so save them here for
# later restoration before packing the cache
set -eux
clean_cache="travis_clean_cache"
if [[ ! ( "${TRAVIS_JOB_NAME}" =~ "windows" || \
    "${TRAVIS_JOB_NAME}" =~ "prereq-keep" ) ]] && \
    ( [[ "${TRAVIS_COMMIT_MESSAGE}" =~ "${clean_cache}" ]] || \
        ( [[ -v TRAVIS_PULL_REQUEST_SHA && \
            "${TRAVIS_PULL_REQUEST_SHA}" != "" ]] && \
          git log -1 "${TRAVIS_PULL_REQUEST_SHA}" | grep -cq "${clean_cache}" -
        )
    )
then
    find ${TRAVIS_HOME}/_cache -maxdepth 2 -type d
    rm -rf ${TRAVIS_HOME}/_cache
    mkdir -p ${TRAVIS_HOME}/_cache
fi

pushd ${TRAVIS_HOME}
if [ -f cache_ignore.tar ] ; then
    rm -f cache_ignore.tar
fi

if [ -d _cache/nih_c ] ; then
    find _cache/nih_c -name "build.ninja" | tar rf cache_ignore.tar --files-from -
    find _cache/nih_c -name ".ninja_deps" | tar rf cache_ignore.tar --files-from -
    find _cache/nih_c -name ".ninja_log" | tar rf cache_ignore.tar --files-from -
    find _cache/nih_c -name "*.log" | tar rf cache_ignore.tar --files-from -
    find _cache/nih_c -name "*.tlog" | tar rf cache_ignore.tar --files-from -
    # show .a files in the cache, for sanity checking
    find _cache/nih_c -name "*.a" -ls
fi

if [ -d _cache/ccache ] ; then
    find _cache/ccache -name "stats" | tar rf cache_ignore.tar --files-from -
fi

if [ -f cache_ignore.tar ] ; then
    tar -tf cache_ignore.tar
fi
popd

@@ -1,64 +0,0 @@
var ripple = require('ripple-lib');

var v = {
  seed: "snoPBrXtMeMyMHUVTgbuqAfg1SUTb",
  addr: "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"
};

var remote = ripple.Remote.from_config({
  "trusted" : true,
  "websocket_ip" : "127.0.0.1",
  "websocket_port" : 5006,
  "websocket_ssl" : false,
  "local_signing" : true
});

var tx_json = {
  "Account" : v.addr,
  "Amount" : "10000000",
  "Destination" : "rEu2ULPiEQm1BAL8pYzmXnNX1aFX9sCks",
  "Fee" : "10",
  "Flags" : 0,
  "Sequence" : 3,
  "TransactionType" : "Payment"

  //"SigningPubKey": '0396941B22791A448E5877A44CE98434DB217D6FB97D63F0DAD23BE49ED45173C9'
};

remote.on('connected', function () {
  var req = remote.request_sign(v.seed, tx_json);
  req.message.debug_signing = true;
  req.on('success', function (result) {
    console.log("SERVER RESULT");
    console.log(result);

    var sim = {};
    var tx = remote.transaction();
    tx.tx_json = tx_json;
    tx._secret = v.seed;
    tx.complete();
    var unsigned = tx.serialize().to_hex();
    tx.sign();

    sim.tx_blob = tx.serialize().to_hex();
    sim.tx_json = tx.tx_json;
    sim.tx_signing_hash = tx.signing_hash().to_hex();
    sim.tx_unsigned = unsigned;

    console.log("\nLOCAL RESULT");
    console.log(sim);

    remote.connect(false);
  });
  req.on('error', function (err) {
    if (err.error === "remoteError" && err.remote.error === "srcActNotFound") {
      console.log("Please fund account "+v.addr+" to run this test.");
    } else {
      console.log('error', err);
    }
    remote.connect(false);
  });
  req.request();

});
remote.connect();
@@ -1,18 +0,0 @@
#!/usr/bin/node
//
// Returns a Gravatar style hash as per: http://en.gravatar.com/site/implement/hash/
//

if (3 != process.argv.length) {
  process.stderr.write("Usage: " + process.argv[1] + " email_address\n\nReturns gravatar style hash.\n");
  process.exit(1);

} else {
  var md5 = require('crypto').createHash('md5');

  md5.update(process.argv[2].trim().toLowerCase());

  process.stdout.write(md5.digest('hex') + "\n");
}

// vim:sw=2:sts=2:ts=8:et
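For comparison, the same Gravatar hash (trim, lowercase, MD5 hex digest) is a few lines of Python; a minimal sketch, assuming UTF-8 input:

import hashlib
import sys

def gravatar_hash(email):
    # Gravatar hashes the trimmed, lowercased address with MD5 (hex digest).
    return hashlib.md5(email.strip().lower().encode('utf-8')).hexdigest()

if __name__ == '__main__':
    print(gravatar_hash(sys.argv[1]))
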
@@ -1,31 +0,0 @@
#!/usr/bin/node
//
// This program allows IE 9 ripple-clients to make websocket connections to
// rippled using flash. As IE 9 does not have websocket support, this is
// required if you wish to support IE 9 ripple-clients.
//
// http://www.lightsphere.com/dev/articles/flash_socket_policy.html
//
// For better security, be sure to set the Port below to the port of your
// [websocket_public_port].
//

var net     = require("net"),
    port    = "*",
    domains = ["*:"+port]; // Domain:Port

net.createServer(
  function(socket) {
    socket.write("<?xml version='1.0' ?>\n");
    socket.write("<!DOCTYPE cross-domain-policy SYSTEM 'http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd'>\n");
    socket.write("<cross-domain-policy>\n");
    domains.forEach(
      function(domain) {
        var parts = domain.split(':');
        socket.write("\t<allow-access-from domain='" + parts[0] + "' to-ports='" + parts[1] + "' />\n");
      }
    );
    socket.write("</cross-domain-policy>\n");
    socket.end();
  }
).listen(843);
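A policy server like the one above can be checked from Python with only the standard library; a minimal client sketch (the localhost address is an assumption for a local test) that connects to port 843 and prints whatever policy XML the server sends:

import socket

# Connect to a locally running flash socket policy server (assumed 127.0.0.1:843).
with socket.create_connection(('127.0.0.1', 843), timeout=5) as sock:
    # Flash players send this request; the server above replies unconditionally.
    sock.sendall(b'<policy-file-request/>\0')
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:  # the server calls socket.end() after writing the policy
            break
        chunks.append(data)
print(b''.join(chunks).decode('utf-8', errors='replace'))
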
@@ -1,150 +0,0 @@
#!/usr/bin/env bash

# This script generates information about your rippled installation
# and system. It can be used to help debug issues that you may face
# in your installation. While this script endeavors to not display any
# sensitive information, it is recommended that you read the output
# before sharing with any third parties.


rippled_exe=/opt/ripple/bin/rippled
conf_file=/etc/opt/ripple/rippled.cfg

while getopts ":e:c:" opt; do
    case $opt in
        e)
            rippled_exe=${OPTARG}
            ;;
        c)
            conf_file=${OPTARG}
            ;;
        \?)
            echo "Invalid option: -$OPTARG"
            exit 1
    esac
done

tmp_loc=$(mktemp -d --tmpdir ripple_info.XXXXX)
chmod 751 ${tmp_loc}
awk_prog=${tmp_loc}/cfg.awk
summary_out=${tmp_loc}/rippled_info.md
printf "# rippled report info\n\n> generated at %s\n" "$(date -R)" > ${summary_out}

function log_section {
    printf "\n## %s\n" "$*" >> ${summary_out}

    while read -r l; do
        echo "    $l" >> ${summary_out}
    done </dev/stdin
}

function join_by {
    local IFS="$1"; shift; echo "$*";
}

if [[ -f ${conf_file} ]] ; then
    exclude=( ips ips_fixed node_seed validation_seed validator_token )
    cleaned_conf=${tmp_loc}/cleaned_rippled_cfg.txt
    cat << 'EOP' >> ${awk_prog}
    BEGIN {FS="[[:space:]]*=[[:space:]]*"; skip=0; db_path=""; print > OUT_FILE; split(exl,exa,"|")}
    /^#/ {next}
    save==2 && /^[[:space:]]*$/ {next}
    /^\[.+\]$/ {
      section=tolower(gensub(/^\[[[:space:]]*([a-zA-Z_]+)[[:space:]]*\]$/, "\\1", "g"))
      skip = 0
      for (i in exa) {
        if (section == exa[i])
          skip = 1
      }
      if (section == "database_path")
        save = 1
    }
    skip==1 {next}
    save==2 {save=0; db_path=$0}
    save==1 {save=2}
    $1 ~ /password/ {$0=$1"=<redacted>"}
    {print >> OUT_FILE}
    END {print db_path}
EOP

    db=$(\
        sed -r -e 's/\<s[[:alnum:]]{28}\>/<redactedsecret>/g;s/^[[:space:]]*//;s/[[:space:]]*$//' ${conf_file} |\
        awk -v OUT_FILE=${cleaned_conf} -v exl="$(join_by '|' "${exclude[@]}")" -f ${awk_prog})
    rm ${awk_prog}
    cat ${cleaned_conf} | log_section "cleaned config file"
    rm ${cleaned_conf}
    echo "${db}"  | log_section "database path"
    df ${db}      | log_section "df: database"
fi

# Send output from this script to a log file
## this captures any messages
## or errors from the script itself

log_file=${tmp_loc}/get_info.log
exec 3>&1 1>>${log_file} 2>&1

## Send all stdout files to /tmp

if [[ -x ${rippled_exe} ]] ; then
    pgrep rippled && \
    ${rippled_exe} --conf ${conf_file} \
    -- server_info                  | log_section "server info"
fi

cat /proc/meminfo                   | log_section "meminfo"
cat /proc/swaps                     | log_section "swap space"
ulimit -a                           | log_section "ulimit"

if command -v lshw >/dev/null 2>&1 ; then
    lshw    2>/dev/null             | log_section "hardware info"
else
    lscpu                           >  ${tmp_loc}/hw_info.txt
    hwinfo                          >> ${tmp_loc}/hw_info.txt
    lspci                           >> ${tmp_loc}/hw_info.txt
    lsblk                           >> ${tmp_loc}/hw_info.txt
    cat ${tmp_loc}/hw_info.txt | log_section "hardware info"
    rm ${tmp_loc}/hw_info.txt
fi

if command -v iostat >/dev/null 2>&1 ; then
    iostat -t -d -x 2 6             | log_section "iostat"
fi

df -h                               | log_section "free disk space"
drives=($(df | awk '$1 ~ /^\/dev\// {print $1}' | xargs -n 1 basename))
block_devs=($(ls /sys/block/))
for d in "${drives[@]}"; do
    for dev in "${block_devs[@]}"; do
        #echo "D: [$d], DEV: [$dev]"
        if [[ $d =~ $dev ]]; then
            # this file (if it exists) has 0 for SSD and 1 for HDD
            if [[ "$(cat /sys/block/${dev}/queue/rotational 2>/dev/null)" == 0 ]] ; then
                echo "${d} : SSD" >> ${tmp_loc}/is_ssd.txt
            else
                echo "${d} : NO SSD" >> ${tmp_loc}/is_ssd.txt
            fi
        fi
    done
done

if [[ -f ${tmp_loc}/is_ssd.txt ]] ; then
    cat ${tmp_loc}/is_ssd.txt | log_section "SSD"
    rm ${tmp_loc}/is_ssd.txt
fi

cat ${log_file} | log_section "script log"

cat << MSG | tee /dev/fd/3
####################################################
  rippled info has been gathered. Please copy the
  contents of ${summary_out}
  to a github gist at https://gist.github.com/

  PLEASE REVIEW THIS FILE FOR ANY SENSITIVE DATA
  BEFORE POSTING! We have tried our best to omit
  any sensitive information from this file, but you
  should verify before posting.
####################################################
MSG

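The SSD detection above relies on the kernel's rotational flag; a minimal Python sketch of the same probe (the device names at the bottom are examples only) reads /sys/block/<dev>/queue/rotational, where 0 means non-rotational (SSD) and 1 means rotational (HDD):

from pathlib import Path

def is_ssd(device):
    """Return True for SSD, False for HDD, None if the flag is unavailable."""
    flag = Path(f'/sys/block/{device}/queue/rotational')
    try:
        return flag.read_text().strip() == '0'
    except OSError:
        return None

for dev in ('sda', 'nvme0n1'):  # example device names; adjust for your system
    print(dev, {True: 'SSD', False: 'HDD', None: 'unknown'}[is_ssd(dev)])
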
@@ -1,23 +0,0 @@
#!/usr/bin/node
//
// Returns hex of lowercasing a string.
//

var stringToHex = function (s) {
  return Array.prototype.map.call(s, function (c) {
      var b = c.charCodeAt(0);

      return b < 16 ? "0" + b.toString(16) : b.toString(16);
    }).join("");
};

if (3 != process.argv.length) {
  process.stderr.write("Usage: " + process.argv[1] + " string\n\nReturns hex of lowercasing string.\n");
  process.exit(1);

} else {

  process.stdout.write(stringToHex(process.argv[2].toLowerCase()) + "\n");
}

// vim:sw=2:sts=2:ts=8:et
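The same transformation is essentially a one-liner in Python; a quick sketch equivalent to the script above for ASCII input (for non-ASCII input the JavaScript version emits UTF-16 code units, while this emits UTF-8 bytes):

import sys

# Lowercase the argument and print its bytes as hex.
print(sys.argv[1].lower().encode('utf-8').hex())
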
@@ -1,42 +0,0 @@
#!/usr/bin/node
//
// This is a tool to issue JSON-RPC requests from the command line.
//
// This can be used to test a JSON-RPC server.
//
// Requires: npm simple-jsonrpc
//

var jsonrpc   = require('simple-jsonrpc');

var program   = process.argv[1];

if (5 !== process.argv.length) {
  console.log("Usage: %s <URL> <method> <json>", program);
}
else {
  var url       = process.argv[2];
  var method    = process.argv[3];
  var json_raw  = process.argv[4];
  var json;

  try {
    json      = JSON.parse(json_raw);
  }
  catch (e) {
      console.log("JSON parse error: %s", e.message);
      throw e;
  }

  var client  = jsonrpc.client(url);

  client.call(method, json,
    function (result) {
      console.log(JSON.stringify(result, undefined, 2));
    },
    function (error) {
      console.log(JSON.stringify(error, undefined, 2));
    });
}

// vim:sw=2:sts=2:ts=8:et
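A request of the same shape can be issued with the Python standard library alone; a minimal sketch using rippled's JSON-RPC body (a method name plus a one-element params list), where the endpoint URL and port are assumptions for a local server:

import json
import urllib.request

def rpc_call(url, method, params):
    # rippled-style JSON-RPC body: {"method": ..., "params": [{...}]}
    payload = json.dumps({'method': method, 'params': [params]}).encode()
    request = urllib.request.Request(
        url, data=payload, headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# Example against a local rippled (assumed endpoint and port):
print(json.dumps(rpc_call('http://127.0.0.1:5005', 'server_info', {}), indent=2))
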
@@ -1,68 +0,0 @@
#!/usr/bin/node
//
// This is a tool to listen for JSON-RPC requests at an IP and port.
//
// This will report the request to console and echo back the request as the response.
//

var http      = require("http");

var program   = process.argv[1];

if (4 !== process.argv.length) {
  console.log("Usage: %s <ip> <port>", program);
}
else {
  var ip      = process.argv[2];
  var port    = process.argv[3];

  var server  = http.createServer(function (req, res) {
      console.log("CONNECT");
      var input = "";

      req.setEncoding('utf8');

      req.on('data', function (buffer) {
          // console.log("DATA: %s", buffer);
          input = input + buffer;
        });

      req.on('end', function () {
          // console.log("END");

          var json_req;

          console.log("URL: %s", req.url);
          console.log("HEADERS: %s", JSON.stringify(req.headers, undefined, 2));

          try {
            json_req = JSON.parse(input);

            console.log("REQ: %s", JSON.stringify(json_req, undefined, 2));
          }
          catch (e) {
            console.log("BAD JSON: %s", e.message);

            json_req = { error : e.message };
          }

          res.statusCode = 200;
          res.end(JSON.stringify({
              jsonrpc: "2.0",
              result: { request : json_req },
              id: req.id
            }));
        });

      req.on('close', function () {
          console.log("CLOSE");
        });
    });

  server.listen(port, ip, undefined,
    function () {
      console.log("Listening at: %s:%s", ip, port);
    });
}

// vim:sw=2:sts=2:ts=8:et
252 bin/rlint.js
@@ -1,252 +0,0 @@
#!/usr/bin/node

var async       = require('async');
var Remote      = require('ripple-lib').Remote;
var Transaction = require('ripple-lib').Transaction;
var UInt160     = require('ripple-lib').UInt160;
var Amount      = require('ripple-lib').Amount;

var book_key = function (book) {
  return book.taker_pays.currency
    + ":" + book.taker_pays.issuer
    + ":" + book.taker_gets.currency
    + ":" + book.taker_gets.issuer;
};

var book_key_cross = function (book) {
  return book.taker_gets.currency
    + ":" + book.taker_gets.issuer
    + ":" + book.taker_pays.currency
    + ":" + book.taker_pays.issuer;
};

var ledger_verify = function (ledger) {
  var dir_nodes = ledger.accountState.filter(function (entry) {
      return entry.LedgerEntryType === 'DirectoryNode'    // Only directories
        && entry.index === entry.RootIndex                // Only root nodes
        && 'TakerGetsCurrency' in entry;                  // Only offer directories
    });

  var books = {};

  dir_nodes.forEach(function (node) {
      var book = {
        taker_gets: {
            currency: UInt160.from_generic(node.TakerGetsCurrency).to_json(),
            issuer: UInt160.from_generic(node.TakerGetsIssuer).to_json()
          },
        taker_pays: {
          currency: UInt160.from_generic(node.TakerPaysCurrency).to_json(),
          issuer: UInt160.from_generic(node.TakerPaysIssuer).to_json()
        },
        quality: Amount.from_quality(node.RootIndex),
        index: node.RootIndex
      };

      books[book_key(book)] = book;

//      console.log(JSON.stringify(node, undefined, 2));
    });

//  console.log(JSON.stringify(dir_entry, undefined, 2));
  console.log("#%s books: %s", ledger.ledger_index, Object.keys(books).length);

  Object.keys(books).forEach(function (key) {
      var book        = books[key];
      var key_cross   = book_key_cross(book);
      var book_cross  = books[key_cross];

      if (book && book_cross && !book_cross.done)
      {
        var book_cross_quality_inverted = Amount.from_json("1.0/1/1").divide(book_cross.quality);

        if (book_cross_quality_inverted.compareTo(book.quality) >= 0)
        {
          // Crossing books
          console.log("crossing: #%s :: %s :: %s :: %s :: %s :: %s :: %s", ledger.ledger_index, key, book.quality.to_text(), book_cross.quality.to_text(), book_cross_quality_inverted.to_text(),
            book.index, book_cross.index);
        }

        book_cross.done = true;
      }
    });

  var ripple_selfs  = {};

  var accounts  = {};
  var counts    = {};

  ledger.accountState.forEach(function (entry) {
      if (entry.LedgerEntryType === 'Offer')
      {
        counts[entry.Account] = (counts[entry.Account] || 0) + 1;
      }
      else if (entry.LedgerEntryType === 'RippleState')
      {
        if (entry.Flags & (0x10000 | 0x40000))
        {
          counts[entry.LowLimit.issuer]   = (counts[entry.LowLimit.issuer] || 0) + 1;
        }

        if (entry.Flags & (0x20000 | 0x80000))
        {
          counts[entry.HighLimit.issuer]  = (counts[entry.HighLimit.issuer] || 0) + 1;
        }

        if (entry.HighLimit.issuer === entry.LowLimit.issuer)
          ripple_selfs[entry.Account] = entry;
      }
      else if (entry.LedgerEntryType == 'AccountRoot')
      {
        accounts[entry.Account] = entry;
      }
    });

  var low               = 0;  // Accounts with too low a count.
  var high              = 0;
  var missing_accounts  = 0;  // Objects with no referencing account.
  var missing_objects   = 0;  // Accounts specifying an object but having none.

  Object.keys(counts).forEach(function (account) {
      if (account in accounts)
      {
        if (counts[account] !== accounts[account].OwnerCount)
        {
          if (counts[account] < accounts[account].OwnerCount)
          {
            high  += 1;
            console.log("%s: high count %s/%s", account, counts[account], accounts[account].OwnerCount);
          }
          else
          {
            low   += 1;
            console.log("%s: low count %s/%s", account, counts[account], accounts[account].OwnerCount);
          }
        }
      }
      else
      {
        missing_accounts  += 1;

        console.log("%s: missing : count %s", account, counts[account]);
      }
    });

  Object.keys(accounts).forEach(function (account) {
      if (!('OwnerCount' in accounts[account]))
      {
          console.log("%s: bad entry : %s", account, JSON.stringify(accounts[account], undefined, 2));
      }
      else if (!(account in counts) && accounts[account].OwnerCount)
      {
          missing_objects += 1;

          console.log("%s: no objects : %s/%s", account, 0, accounts[account].OwnerCount);
      }
    });

  if (low)
    console.log("counts too low = %s", low);

  if (high)
    console.log("counts too high = %s", high);

  if (missing_objects)
    console.log("missing_objects = %s", missing_objects);

  if (missing_accounts)
    console.log("missing_accounts = %s", missing_accounts);

  if (Object.keys(ripple_selfs).length)
    console.log("RippleState selfs = %s", Object.keys(ripple_selfs).length);

};

var ledger_request = function (remote, ledger_index, done) {
  remote.request_ledger(undefined, {
      accounts: true,
      expand: true,
    })
  .ledger_index(ledger_index)
  .on('success', function (m) {
      // console.log("ledger: ", ledger_index);
      // console.log("ledger: ", JSON.stringify(m, undefined, 2));
      done(m.ledger);
    })
  .on('error', function (m) {
      console.log("error");
      done();
    })
  .request();
};

var usage = function () {
  console.log("rlint.js _websocket_ip_ _websocket_port_ ");
};

var finish = function (remote) {
  remote.disconnect();

  // XXX Because remote.disconnect() doesn't work:
  process.exit();
};

console.log("args: ", process.argv.length);
console.log("args: ", process.argv);

if (process.argv.length < 4) {
  usage();
}
else {
  var remote  = Remote.from_config({
        websocket_ip:   process.argv[2],
        websocket_port: process.argv[3],
      })
    .once('ledger_closed', function (m) {
        console.log("ledger_closed: ", JSON.stringify(m, undefined, 2));

        if (process.argv.length === 5) {
          var ledger_index  = process.argv[4];

          ledger_request(remote, ledger_index, function (l) {
              if (l) {
                ledger_verify(l);
              }

              finish(remote);
            });

        } else if (process.argv.length === 6) {
          var ledger_start  = Number(process.argv[4]);
          var ledger_end    = Number(process.argv[5]);
          var ledger_cursor = ledger_end;

          async.whilst(
            function () {
              return ledger_start <= ledger_cursor && ledger_cursor <= ledger_end;
            },
            function (callback) {
              // console.log(ledger_cursor);

              ledger_request(remote, ledger_cursor, function (l) {
                  if (l) {
                    ledger_verify(l);
                  }

                  --ledger_cursor;

                  callback();
                });
            },
            function (error) {
              finish(remote);
            });

        } else {
          finish(remote);
        }
      })
    .connect();
}

// vim:sw=2:sts=2:ts=8:et
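The OwnerCount cross-check at the heart of rlint.js translates directly to Python; a hedged sketch (ledger is assumed to be a fully expanded ledger JSON, as requested above with accounts: true, expand: true) that tallies Offers per account and RippleStates per issuer with a reserve flag set, then compares against each AccountRoot's OwnerCount:

def check_owner_counts(ledger):
    counts = {}    # account -> number of owned objects found
    accounts = {}  # account -> AccountRoot entry
    for entry in ledger['accountState']:
        kind = entry['LedgerEntryType']
        if kind == 'Offer':
            counts[entry['Account']] = counts.get(entry['Account'], 0) + 1
        elif kind == 'RippleState':
            if entry['Flags'] & (0x10000 | 0x40000):  # low-side reserve flags
                low = entry['LowLimit']['issuer']
                counts[low] = counts.get(low, 0) + 1
            if entry['Flags'] & (0x20000 | 0x80000):  # high-side reserve flags
                high = entry['HighLimit']['issuer']
                counts[high] = counts.get(high, 0) + 1
        elif kind == 'AccountRoot':
            accounts[entry['Account']] = entry
    for account, found in counts.items():
        expected = accounts.get(account, {}).get('OwnerCount')
        if expected is not None and found != expected:
            print(f'{account}: counted {found}, OwnerCount says {expected}')
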
@@ -1,51 +0,0 @@
#!/usr/bin/env bash
set -exu

: ${TRAVIS_BUILD_DIR:=""}
: ${VCPKG_DIR:=".vcpkg"}
export VCPKG_ROOT=${VCPKG_DIR}
: ${VCPKG_DEFAULT_TRIPLET:="x64-windows-static"}

export VCPKG_DEFAULT_TRIPLET

EXE="vcpkg"
if [[ -n ${COMSPEC:-} ]]; then
    EXE="${EXE}.exe"
fi

if [[ -d "${VCPKG_DIR}" && -x "${VCPKG_DIR}/${EXE}" && -d "${VCPKG_DIR}/installed" ]] ; then
    echo "Using cached vcpkg at ${VCPKG_DIR}"
    ${VCPKG_DIR}/${EXE} list
else
    if [[ -d "${VCPKG_DIR}" ]] ; then
        rm -rf "${VCPKG_DIR}"
    fi
    git clone --branch 2021.04.30 https://github.com/Microsoft/vcpkg.git ${VCPKG_DIR}
    pushd ${VCPKG_DIR}
    BSARGS=()
    if [[ "$(uname)" == "Darwin" ]] ; then
        BSARGS+=(--allowAppleClang)
    fi
    if [[ -z ${COMSPEC:-} ]]; then
        chmod +x ./bootstrap-vcpkg.sh
        time ./bootstrap-vcpkg.sh "${BSARGS[@]}"
    else
        time ./bootstrap-vcpkg.bat
    fi
    popd
fi

# TODO: bring boost in this way as well ?
# NOTE: can pin specific ports to a commit/version like this:
#    git checkout <SOME COMMIT HASH> ports/boost
if [ $# -eq 0 ]; then
    echo "No extra packages specified..."
    PKGS=()
else
    PKGS=( "$@" )
fi
for LIB in "${PKGS[@]}"; do
    time ${VCPKG_DIR}/${EXE} --clean-after-build install ${LIB}
done

@@ -1,40 +0,0 @@

# NOTE: must be sourced from a shell so it can export vars

cat << BATCH > ./getenv.bat
CALL %*
SET
BATCH

while read line ; do
  IFS='"' read x path arg <<<"${line}"
  if [ -f "${path}" ] ; then
    echo "FOUND: $path"
    export VCINSTALLDIR=$(./getenv.bat "${path}" ${arg} | grep "^VCINSTALLDIR=" | sed -E "s/^VCINSTALLDIR=//g")
    if [ "${VCINSTALLDIR}" != "" ] ; then
      echo "USING ${VCINSTALLDIR}"
      export LIB=$(./getenv.bat "${path}" ${arg} | grep "^LIB=" | sed -E "s/^LIB=//g")
      export LIBPATH=$(./getenv.bat "${path}" ${arg} | grep "^LIBPATH=" | sed -E "s/^LIBPATH=//g")
      export INCLUDE=$(./getenv.bat "${path}" ${arg} | grep "^INCLUDE=" | sed -E "s/^INCLUDE=//g")
      ADDPATH=$(./getenv.bat "${path}" ${arg} | grep "^PATH=" | sed -E "s/^PATH=//g")
      export PATH="${ADDPATH}:${PATH}"
      break
    fi
  fi
done <<EOL
"C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2017/BuildTools/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio 15.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 13.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/vcvarsall.bat" amd64
EOL
# TODO: update the list above as needed to support newer versions of msvc tools

rm -f getenv.bat

if [ "${VCINSTALLDIR}" = "" ] ; then
  echo "No compatible visual studio found!"
fi
@@ -1,246 +0,0 @@
#!/usr/bin/env python
"""A script to test rippled in an infinite loop of start-sync-stop.

- Requires Python 3.7+.
- Can be stopped with SIGINT.
- Has no dependencies outside the standard library.
"""

import sys

assert sys.version_info.major == 3 and sys.version_info.minor >= 7

import argparse
import asyncio
import configparser
import contextlib
import json
import logging
import os
from pathlib import Path
import platform
import subprocess
import time
import urllib.error
import urllib.request

# Enable asynchronous subprocesses on Windows. The default changed in 3.8.
# https://docs.python.org/3.7/library/asyncio-platforms.html#subprocess-support-on-windows
if (platform.system() == 'Windows' and sys.version_info.major == 3
        and sys.version_info.minor < 8):
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())

DEFAULT_EXE = 'rippled'
DEFAULT_CONFIGURATION_FILE = 'rippled.cfg'
# Number of seconds to wait before forcefully terminating.
PATIENCE = 120
# Number of contiguous seconds in a sync state to be considered synced.
DEFAULT_SYNC_DURATION = 60
# Number of seconds between polls of state.
DEFAULT_POLL_INTERVAL = 5
SYNC_STATES = ('full', 'validating', 'proposing')


def read_config(config_file):
    # strict = False: Allow duplicate keys, e.g. [rpc_startup].
    # allow_no_value = True: Allow keys with no values. Generally, these
    # instances use the "key" as the value, and the section name is the key,
    # e.g. [debug_logfile].
    # delimiters = ('='): Allow ':' as a character in Windows paths. Some of
    # our "keys" are actually values, and we don't want to split them on ':'.
    config = configparser.ConfigParser(
        strict=False,
        allow_no_value=True,
        delimiters=('='),
    )
    config.read(config_file)
    return config


def to_list(value, separator=','):
    """Parse a list from a delimited string value."""
    return [s.strip() for s in value.split(separator) if s]


def find_log_file(config_file):
    """Try to figure out what log file the user has chosen. Raises all kinds
    of exceptions if there is any possibility of ambiguity."""
    config = read_config(config_file)
    values = list(config['debug_logfile'].keys())
    if len(values) < 1:
        raise ValueError(
            f'no [debug_logfile] in configuration file: {config_file}')
    if len(values) > 1:
        raise ValueError(
            f'too many [debug_logfile] in configuration file: {config_file}')
    return values[0]


def find_http_port(config_file):
    config = read_config(config_file)
    names = list(config['server'].keys())
    for name in names:
        server = config[name]
        if 'http' in to_list(server.get('protocol', '')):
            return int(server['port'])
    raise ValueError('no server in [server] for "http" protocol')


@contextlib.asynccontextmanager
async def rippled(exe=DEFAULT_EXE, config_file=DEFAULT_CONFIGURATION_FILE):
    """A context manager for a rippled process."""
    # Start the server.
    process = await asyncio.create_subprocess_exec(
        str(exe),
        '--conf',
        str(config_file),
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    logging.info(f'rippled started with pid {process.pid}')
    try:
        yield process
    finally:
        # Ask it to stop.
        logging.info(f'asking rippled (pid: {process.pid}) to stop')
        start = time.time()
        process.terminate()

        # Wait nicely.
        try:
            await asyncio.wait_for(process.wait(), PATIENCE)
        except asyncio.TimeoutError:
            # Ask the operating system to kill it.
            logging.warning(f'killing rippled ({process.pid})')
            try:
                process.kill()
            except ProcessLookupError:
                pass

        code = await process.wait()
        end = time.time()
        logging.info(
            f'rippled stopped after {end - start:.1f} seconds with code {code}'
        )


async def sync(
        port,
        *,
        duration=DEFAULT_SYNC_DURATION,
        interval=DEFAULT_POLL_INTERVAL,
):
    """Poll rippled on an interval until it has been synced for a duration."""
    start = time.perf_counter()
    while (time.perf_counter() - start) < duration:
        await asyncio.sleep(interval)

        request = urllib.request.Request(
            f'http://127.0.0.1:{port}',
            data=json.dumps({
                'method': 'server_state'
            }).encode(),
            headers={'Content-Type': 'application/json'},
        )
        try:
            with urllib.request.urlopen(request) as response:
                body = json.loads(response.read())
        except urllib.error.HTTPError as cause:
            logging.warning(f'server_state returned not JSON: {cause}')
            start = time.perf_counter()
            continue

        try:
            state = body['result']['state']['server_state']
        except KeyError as cause:
            logging.warning(f'server_state response missing key: {cause}')
            start = time.perf_counter()
            continue
        logging.info(f'server_state: {state}')
        if state not in SYNC_STATES:
            # Require a contiguous sync state.
            start = time.perf_counter()


async def loop(test,
               *,
               exe=DEFAULT_EXE,
               config_file=DEFAULT_CONFIGURATION_FILE):
    """
    Start-test-stop rippled in an infinite loop.

    Moves log to a different file after each iteration.
    """
    log_file = find_log_file(config_file)
    id = 0
    while True:
        logging.info(f'iteration: {id}')
        async with rippled(exe, config_file) as process:
            start = time.perf_counter()
            exited = asyncio.create_task(process.wait())
            tested = asyncio.create_task(test())
            # Try to sync as long as the process is running.
            done, pending = await asyncio.wait(
                {exited, tested},
                return_when=asyncio.FIRST_COMPLETED,
            )
            if done == {exited}:
                code = exited.result()
                logging.warning(
                    f'server halted for unknown reason with code {code}')
            else:
                assert done == {tested}
                assert tested.exception() is None
            end = time.perf_counter()
            logging.info(f'synced after {end - start:.0f} seconds')
        os.replace(log_file, f'debug.{id}.log')
        id += 1


logging.basicConfig(
    format='%(asctime)s %(levelname)-8s %(message)s',
    level=logging.INFO,
    datefmt='%Y-%m-%d %H:%M:%S',
)

parser = argparse.ArgumentParser(
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument(
    'rippled',
    type=Path,
    nargs='?',
    default=DEFAULT_EXE,
    help='Path to rippled.',
)
parser.add_argument(
    '--conf',
    type=Path,
    default=DEFAULT_CONFIGURATION_FILE,
    help='Path to configuration file.',
)
parser.add_argument(
    '--duration',
    type=int,
    default=DEFAULT_SYNC_DURATION,
    help='Number of contiguous seconds required in a synchronized state.',
)
parser.add_argument(
    '--interval',
    type=int,
    default=DEFAULT_POLL_INTERVAL,
    help='Number of seconds to wait between polls of state.',
)
args = parser.parse_args()

port = find_http_port(args.conf)


def test():
    return sync(port, duration=args.duration, interval=args.interval)


try:
    asyncio.run(loop(test, exe=args.rippled, config_file=args.conf))
except KeyboardInterrupt:
    # Squelch the message. This is a normal mode of exit.
    pass
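
Assuming the script above is saved as, say, start_sync_stop.py (its filename is not shown in this diff), the helpers can also be driven programmatically; a minimal sketch with example paths that mirrors what the argparse entry point does:

import asyncio

exe = '/opt/ripple/bin/rippled'        # example path; adjust as needed
conf = '/etc/opt/ripple/rippled.cfg'   # example path; adjust as needed

port = find_http_port(conf)

def test():
    # Consider the server healthy after 60 contiguous seconds in a sync
    # state, polling every 5 seconds (the script's defaults).
    return sync(port, duration=60, interval=5)

asyncio.run(loop(test, exe=exe, config_file=conf))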
133 bin/stop-test.js
@@ -1,133 +0,0 @@
/* -------------------------------- REQUIRES -------------------------------- */

var child = require("child_process");
var assert = require("assert");

/* --------------------------------- CONFIG --------------------------------- */

if (process.argv[2] == null) {
  [
   'Usage: ',
   '',
   '  `node bin/stop-test.js i,j [rippled_path] [rippled_conf]`',
   '',
   '  Launch rippled and stop it after n seconds for all n in [i, j)',
   '  For all even values of n launch rippled with `--fg`',
   '  For values of n where n % 3 == 0 launch rippled with `--net`\n',
   'Examples: ',
   '',
   '  $ node bin/stop-test.js 5,10',
   ('  $ node bin/stop-test.js 1,4 ' +
      'build/clang.debug/rippled $HOME/.confs/rippled.cfg')
   ]
      .forEach(function(l){console.log(l)});

  process.exit();
} else {
  var testRange = process.argv[2].split(',').map(Number);
  var rippledPath = process.argv[3] || 'build/rippled';
  var rippledConf = process.argv[4] || 'rippled.cfg';
}

var options = {
  env: process.env,
  stdio: 'ignore' // we could dump the child io when it fails abnormally
};

// default args
var conf_args = ['--conf='+rippledConf];
var start_args = conf_args.concat([/*'--net'*/]);
var stop_args = conf_args.concat(['stop']);

/* --------------------------------- HELPERS -------------------------------- */

function start(args) {
    return child.spawn(rippledPath, args, options);
}
function stop(rippled) { child.execFile(rippledPath, stop_args, options); }
function secs_l8r(ms, f) { setTimeout(f, ms * 1000); }

function show_results_and_exit(results) {
  console.log(JSON.stringify(results, undefined, 2));
  process.exit();
}

var timeTakes = function (range) {
  function sumRange(n) { return (n+1) * n / 2; }
  var ret = sumRange(range[1]);
  if (range[0] > 1) {
    ret = ret - sumRange(range[0] - 1);
  }
  var stopping = (range[1] - range[0]) * 0.5;
  return ret + stopping;
};

/* ---------------------------------- TEST ---------------------------------- */

console.log("Test will take ~%s seconds", timeTakes(testRange));

(function oneTest(n /* seconds */, results) {
  if (n >= testRange[1]) {
    // show_results_and_exit(results);
    console.log(JSON.stringify(results, undefined, 2));
    oneTest(testRange[0], []);
    return;
  }

  var args = start_args;
  if (n % 2 == 0) {args = args.concat(['--fg'])}
  if (n % 3 == 0) {args = args.concat(['--net'])}

  var result = {args: args, alive_for: n};
  results.push(result);

  console.log("\nLaunching `%s` with `%s` for %d seconds",
                rippledPath, JSON.stringify(args), n);

  rippled = start(args);
  console.log("Rippled pid: %d", rippled.pid);

  // defaults
  var b4StopSent = false;
  var stopSent = false;
  var stop_took = null;

  rippled.once('exit', function(){
    if (!stopSent && !b4StopSent) {
      console.warn('\nRippled exited itself b4 stop issued');
      process.exit();
    };

    // The io handles close AFTER exit, may have implications for
    // `stdio:'inherit'` option to `child.spawn`.
    rippled.once('close', function() {
      result.stop_took = (+new Date() - stop_took) / 1000; // seconds
      console.log("Stopping after %d seconds took %s seconds",
                   n, result.stop_took);
      oneTest(n+1, results);
    });
  });

  secs_l8r(n, function(){
    console.log("Stopping rippled after %d seconds", n);

    // possible race here?
    // seems highly unlikely, but I was having issues at one point
    b4StopSent = true;
    stop_took = (+new Date());
    // when does `exit` actually get sent?
    stop();
    stopSent = true;

    // Sometimes we want to attach with a debugger.
    if (process.env.ABORT_TESTS_ON_STALL != null) {
      // We wait 30 seconds, and if it hasn't stopped, we abort the process.
      secs_l8r(30, function() {
        if (result.stop_took == null) {
          console.log("rippled has stalled");
          process.exit();
        };
      });
    }
  })
}(testRange[0], []));
@@ -1,119 +0,0 @@
/**
 * bin/update_bintypes.js
 *
 * This unholy abomination of a script generates the JavaScript file
 * src/js/bintypes.js from various parts of the C++ source code.
 *
 * This should *NOT* be part of any automatic build process unless the C++
 * source data are brought into a more easily parseable format. Until then,
 * simply run this script manually and fix as needed.
 */

// XXX: Process LedgerFormats.(h|cpp) as well.

var filenameProto = __dirname + '/../src/cpp/ripple/SerializeProto.h',
    filenameTxFormatsH = __dirname + '/../src/cpp/ripple/TransactionFormats.h',
    filenameTxFormats = __dirname + '/../src/cpp/ripple/TransactionFormats.cpp';

var fs = require('fs');

var output = [];

// Stage 1: Get the field types and codes from SerializeProto.h
var types = {},
    fields = {};
String(fs.readFileSync(filenameProto)).split('\n').forEach(function (line) {
  line = line.replace(/^\s+|\s+$/g, '').replace(/\s+/g, '');
  if (!line.length || line.slice(0, 2) === '//' || line.slice(-1) !== ')') return;

  var tmp = line.slice(0, -1).split('('),
      type = tmp[0],
      opts = tmp[1].split(',');

  if (type === 'TYPE') types[opts[1]] = [opts[0], +opts[2]];
  else if (type === 'FIELD') fields[opts[0]] = [types[opts[1]][0], +opts[2]];
});

output.push('var ST = require("./serializedtypes");');
output.push('');
output.push('var REQUIRED = exports.REQUIRED = 0,');
output.push('    OPTIONAL = exports.OPTIONAL = 1,');
output.push('    DEFAULT  = exports.DEFAULT  = 2;');
output.push('');

function pad(s, n) { while (s.length < n) s += ' '; return s; }
function padl(s, n) { while (s.length < n) s = ' '+s; return s; }

Object.keys(types).forEach(function (type) {
  output.push(pad('ST.'+types[type][0]+'.id', 25) + ' = '+types[type][1]+';');
});
output.push('');

// Stage 2: Get the transaction type IDs from TransactionFormats.h
var ttConsts = {};
String(fs.readFileSync(filenameTxFormatsH)).split('\n').forEach(function (line) {
  var regex = /tt([A-Z_]+)\s+=\s+([0-9-]+)/;
  var match = line.match(regex);
  if (match) ttConsts[match[1]] = +match[2];
});

// Stage 3: Get the transaction formats from TransactionFormats.cpp
var base = [],
    sections = [],
    current = base;
String(fs.readFileSync(filenameTxFormats)).split('\n').forEach(function (line) {
  line = line.replace(/^\s+|\s+$/g, '').replace(/\s+/g, '');

  var d_regex = /DECLARE_TF\(([A-Za-z]+),tt([A-Z_]+)/;
  var d_match = line.match(d_regex);

  var s_regex = /SOElement\(sf([a-z]+),SOE_(REQUIRED|OPTIONAL|DEFAULT)/i;
  var s_match = line.match(s_regex);

  if (d_match) sections.push(current = [d_match[1], ttConsts[d_match[2]]]);
  else if (s_match) current.push([s_match[1], s_match[2]]);
});

function removeFinalComma(arr) {
  arr[arr.length-1] = arr[arr.length-1].slice(0, -1);
}

output.push('var base = [');
base.forEach(function (field) {
  var spec = fields[field[0]];
  output.push('  [ '+
              pad("'"+field[0]+"'", 21)+', '+
              pad(field[1], 8)+', '+
              padl(""+spec[1], 2)+', '+
              'ST.'+pad(spec[0], 3)+
              ' ],');
});
removeFinalComma(output);
output.push('];');
output.push('');


output.push('exports.tx = {');
sections.forEach(function (section) {
  var name = section.shift(),
      ttid = section.shift();

  output.push('  '+name+': ['+ttid+'].concat(base, [');
  section.forEach(function (field) {
    var spec = fields[field[0]];
    output.push('    [ '+
                pad("'"+field[0]+"'", 21)+', '+
                pad(field[1], 8)+', '+
                padl(""+spec[1], 2)+', '+
                'ST.'+pad(spec[0], 3)+
                ' ],');
  });
  removeFinalComma(output);
  output.push('  ]),');
});
removeFinalComma(output);
output.push('};');
output.push('');

console.log(output.join('\n'));
 | 
			
		||||
 | 
			
		||||
							
								
								
									
171	cfg/genesis.json
File diff suppressed because one or more lines are too long
@@ -1,10 +0,0 @@
#!/bin/sh

# Execute this script with a running Postgres server on the current host.
# It should work with the most generic installation of Postgres,
# and is necessary for rippled to store data in Postgres.

# usage: sudo -u postgres ./initdb.sh
psql -c "CREATE USER rippled"
psql -c "CREATE DATABASE rippled WITH OWNER = rippled"
@@ -127,13 +127,20 @@ B6B3EEDC0267AB50491FDC450A398AF30DBCD977CECED8BEF2499CAB5DAC19E2 fixRmSmallIncre
98DECF327BF79997AEC178323AD51A830E457BFC6D454DAF3E46E5EC42DC619F CheckCashMakesTrustLine
B2A4DB846F0891BF2C76AB2F2ACC8F5B4EC64437135C6E56F3F859DE5FFD5856 ExpandedSignerList
32A122F1352A4C7B3A6D790362CC34749C5E57FCE896377BFDC6CCD14F6CD627 NonFungibleTokensV1_1
DF8B4536989BDACE3F934F29423848B9F1D76D09BE6A1FCFE7E7F06AA26ABEAD fixRemoveNFTokenAutoTrustLine
75A7E01C505DD5A179DFE3E000A9B6F1EDDEB55A12F95579A23E15B15DC8BE5A ImmediateOfferKilled
47C3002ABA31628447E8E9A8B315FAA935CE30183F9A9B86845E469CA2CDC3DF DisallowIncoming
93E516234E35E08CA689FA33A6D38E103881F8DCB53023F728C307AA89D515A7 XRPFees
2E2FB9CF8A44EB80F4694D38AADAE9B8B7ADAFD2F092E10068E61C98C4F092B0 fixUniversalNumber
73761231F7F3D94EC3D8C63D91BDD0D89045C6F71B917D1925C01253515A6669 fixNonFungibleTokensV1_2
AE35ABDEFBDE520372B31C957020B34A7A4A9DC3115A69803A44016477C84D6E fixNFTokenRemint
ECF412BE0964EC2E71DCF807EEEA6EA8470D3DB15173D46F28AB6E234860AC32 Hooks
ECE6819DBA5DB528F1A241695F5A9811EF99467CDE22510954FD357780BBD078 Hooks
42F8B586B357ABBAAAA1C733C3E7D3B75761395340D0CDF600179E8737E22478 BalanceRewards
919857E4B902A20216E4819B9BD9FD1FD19A66ECF63151C18A4C48C873DB9578 PaychanAndEscrowForTokens
ECF412BE0964EC2E71DCF807EEEA6EA8470D3DB15173D46F28AB6E234860AC32 URIToken
F5751842D26FC057B92CAA435ABF4F1428C2BCC4180D18775ADE92CB2643BBA3 Import
6E739F4F8B07BED29FC9FF440DA3C301CD14A180DF45819F658FEC2F7DE31427 XahauGenesis
D686F2538F410C9D0D856788E98E3579595DAF7B38D38887F81ECAC934B06040 HooksUpdate1
86E83A7D2ECE3AD5FA87AB2195AE015C950469ABF0B72EAACED318F74886AE90 CryptoConditionsSuite
3C43D9A973AA4443EF3FC38E42DD306160FBFFDAB901CD8BAA15D09F2597EB87 NonFungibleTokensV1
0285B7E5E08E1A8E4C15636F0591D87F73CB6A7B6452A932AD72BBC8E5D1CBE3 fixNFTokenDirV1
@@ -42,6 +42,15 @@
#    ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
#    ED307A760EE34F2D0CAA103377B1969117C38B8AA0AA1E2A24DAC1F32FC97087ED
#
# [import_vl_keys]
#
#   This section is used to import the public keys of trusted validator list publishers.
#   The keys are used to authenticate and accept new lists of trusted validators.
#   In this example, the key for the publisher "vl.xrplf.org" is imported.
#   Each key is represented as a hexadecimal string.
#
#   Examples:
#    ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734

# The default validator list publishers that the rippled instance
# trusts.
@@ -62,6 +71,10 @@ ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# vl.xrplf.org
ED45D1840EE724BE327ABE9146503D5848EFD5F38B6D5FEDE71E80ACCE5E6E738B

[import_vl_keys]
# vl.xrplf.org
ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734

# To use the test network (see https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html),
# use the following configuration instead:
#
@@ -70,3 +83,6 @@ ED45D1840EE724BE327ABE9146503D5848EFD5F38B6D5FEDE71E80ACCE5E6E738B
#
# [validator_list_keys]
# ED264807102805220DA0F312E71FC2C69E1552C9C5790F6C25E3729DEB573D5860
#
# [import_vl_keys]
# ED264807102805220DA0F312E71FC2C69E1552C9C5790F6C25E3729DEB573D5860
@@ -1,43 +0,0 @@
# Use the official image as a parent image.
FROM centos

# Set the working directory.
WORKDIR /opt/xrpld-hooks/

# Copy the files from your host into the image.
COPY docker/screenrc /root/.screenrc
COPY docker/wasm2wat /usr/bin/
COPY rippled .
COPY testnet.cfg .
COPY testnetvalidators.txt .
COPY docker/libboost/libboost_coroutine.so.1.70.0 /usr/lib/
COPY docker/libboost/libboost_context.so.1.70.0 /usr/lib
COPY docker/libboost/libboost_filesystem.so.1.70.0 /usr/lib
COPY docker/libboost/libboost_program_options.so.1.70.0 /usr/lib
COPY docker/libboost/libboost_regex.so.1.70.0 /usr/lib
COPY docker/libboost/libboost_system.so.1.70.0 /usr/lib
COPY docker/libboost/libboost_thread.so.1.70.0 /usr/lib
COPY docker/libboost/libboost_chrono.so.1.70.0 /usr/lib
COPY docker/libboost/libboost_date_time.so.1.70.0 /usr/lib
COPY docker/libboost/libboost_atomic.so.1.70.0 /usr/lib
COPY docker/js/ ./
# Run the commands inside your image filesystem.
RUN dnf install epel-release -y
RUN yum install -y vim screen python3-setuptools-wheel python3-pip-wheel python3 python3-pip curl make nodejs
RUN curl https://cmake.org/files/v3.17/cmake-3.17.1-Linux-x86_64.sh --output cmake-3.17.1-Linux-x86_64.sh \
    &&  mkdir /opt/cmake \
    &&  printf "y\nn\n" | sh cmake-3.17.1-Linux-x86_64.sh --prefix=/opt/cmake > /dev/null \
    &&  ln -s /opt/cmake/bin/cmake /usr/local/bin/cmake
RUN curl https://raw.githubusercontent.com/wasienv/wasienv/master/install.sh | sh
RUN echo 'PATH=$PATH:/root/.wasienv/bin/' >> /root/.bash_rc
RUN rm -f cmake-3.17.1-Linux-x86_64.sh
RUN mkdir /etc/opt/ripple
RUN ln -s /opt/xrpld-hooks/testnet.cfg /etc/opt/ripple/rippled.cfg
RUN ln -s /opt/xrpld-hooks/testnetvalidators.txt /etc/opt/ripple/testnetvalidators.txt

# Add metadata to the image to describe which ports the container listens on at runtime.
EXPOSE 6005
EXPOSE 5005

# Run the specified command within the container.
CMD ./rippled --conf testnet.cfg --net >> log 2>> log
@@ -1,6 +0,0 @@
#!/bin/bash
cp -r ../hook-api-examples docker/js # docker doesn't like symlinks?
/usr/bin/cp /root/wabt/bin/wasm2wat docker/
docker build --tag xrpllabsofficial/xrpld-hooks-testnet:latest . && docker create xrpllabsofficial/xrpld-hooks-testnet
rm -rf docker/js
docker push xrpllabsofficial/xrpld-hooks-testnet:latest
3	docs/.gitignore (vendored)
@@ -1,3 +0,0 @@
html
temp
out.txt
@@ -1,597 +0,0 @@
# Negative UNL Engineering Spec

## The Problem Statement

The moment-to-moment health of the XRP Ledger network depends on the health and
connectivity of a small number of computers (nodes). The most important nodes
are validators, specifically ones listed on the unique node list
([UNL](#Question-What-are-UNLs)). Ripple publishes a recommended UNL that most
network nodes use to determine which peers in the network are trusted. Although
most validators use the same list, they are not required to. The XRP Ledger
network progresses to the next ledger when enough validators reach agreement
(above the minimum quorum of 80%) about what transactions to include in the
next ledger.

As an example, if there are 10 validators on the UNL, at least 8 validators
have to agree with the latest ledger for it to become validated. But what if
enough of those validators are offline to drop the network below the 80%
quorum? The XRP Ledger network favors safety/correctness over advancing the
ledger, which means that if enough validators are offline, the network will not
be able to validate ledgers.

Unfortunately validators can go offline at any time for many different reasons.
Power outages, network connectivity issues, and hardware failures are just a
few scenarios where a validator would appear "offline". Given that most of
these events are temporary, it would make sense to temporarily remove that
validator from the UNL. But the UNL is updated infrequently and not every node
uses the same UNL. So instead of removing the unreliable validator from the
Ripple recommended UNL, we can create a second, negative UNL which is stored
directly on the ledger (so the entire network has the same view). This will
help the network see which validators are **currently** unreliable, so nodes
can adjust their quorum calculation accordingly.

*Improving the liveness of the network is the main motivation for the negative
UNL.*

### Targeted Faults

In order to determine which validators are unreliable, we need to clearly
define what kind of faults to measure and analyze. We want to deal with the
faults we frequently observe in the production network. Hence we will only
monitor for validators that do not reliably respond to network messages, or
that send out validations disagreeing with the locally generated validations.
We will not target other Byzantine faults.

To track whether or not a validator is responding to the network, we could
monitor them with a "heartbeat" protocol. Instead of creating a new heartbeat
protocol, we can leverage some existing protocol messages to mimic the
heartbeat. We picked validation messages because validators should send one and
only one validation message per ledger. In addition, we only count the
validation messages that agree with the local node's validations.

With the negative UNL, the network could keep making forward progress safely
even if the number of remaining validators drops to 60%. Say we have a network
with 10 validators on the UNL and everything is operating correctly. The quorum
required for this network would be 8 (80% of 10). When validators fail, the
quorum required would be as low as 6 (60% of 10), which is the absolute
***minimum quorum***. We need the absolute minimum quorum to be strictly
greater than 50% of the original UNL so that there cannot be two partitions of
well-behaved nodes headed in different directions. We arbitrarily choose 60% as
the minimum quorum to give a margin of safety.

Consider these events in the absence of the negative UNL:
1. 1:00pm - validator1 fails, votes vs. quorum: 9 >= 8, we have quorum
1. 3:00pm - validator2 fails, votes vs. quorum: 8 >= 8, we have quorum
1. 5:00pm - validator3 fails, votes vs. quorum: 7 < 8, we don't have quorum
    * **network cannot validate new ledgers with 3 failed validators**

We're below 80% agreement, so new ledgers cannot be validated. This is how the
XRP Ledger operates today, but if the negative UNL were enabled, the events
would happen as follows. (Please note that the events below are from a
simplified version of our protocol.)

1. 1:00pm - validator1 fails, votes vs. quorum: 9 >= 8, we have quorum
1. 1:40pm - network adds validator1 to negative UNL, quorum changes to ceil(9 * 0.8), or 8
1. 3:00pm - validator2 fails, votes vs. quorum: 8 >= 8, we have quorum
1. 3:40pm - network adds validator2 to negative UNL, quorum changes to ceil(8 * 0.8), or 7
1. 5:00pm - validator3 fails, votes vs. quorum: 7 >= 7, we have quorum
1. 5:40pm - network adds validator3 to negative UNL, quorum changes to ceil(7 * 0.8), or 6
1. 7:00pm - validator4 fails, votes vs. quorum: 6 >= 6, we have quorum
    * **network can still validate new ledgers with 4 failed validators**
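
To make the quorum arithmetic in the timeline explicit, the progression can be
written out as follows (an illustration only; the 80% factor and the ceiling
come from the quorum definition, and *n* here denotes the number of validators
remaining on the effective UNL):

```latex
q(n) = \lceil 0.8\,n \rceil:\qquad
q(10) = 8,\quad
q(9) = \lceil 7.2 \rceil = 8,\quad
q(8) = \lceil 6.4 \rceil = 7,\quad
q(7) = \lceil 5.6 \rceil = 6
```
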
## External Interactions

### Message Format Changes
This proposal will:
1. add a new pseudo-transaction type
1. add the negative UNL to the ledger data structure.

Any tools or systems that rely on the format of this data will have to be
updated.

### Amendment
This feature **will** need an amendment to activate.

## Design

This section discusses the following topics about the Negative UNL design:

* [Negative UNL protocol overview](#Negative-UNL-Protocol-Overview)
* [Validator reliability measurement](#Validator-Reliability-Measurement)
* [Format Changes](#Format-Changes)
* [Negative UNL maintenance](#Negative-UNL-Maintenance)
* [Quorum size calculation](#Quorum-Size-Calculation)
* [Filter validation messages](#Filter-Validation-Messages)
* [High level sequence diagram of code
  changes](#High-Level-Sequence-Diagram-of-Code-Changes)

### Negative UNL Protocol Overview

Every ledger stores a list of zero or more unreliable validators. Updates to
the list must be approved by the validators using the consensus mechanism that
validators use to agree on the set of transactions. The list is used only when
checking if a ledger is fully validated. If a validator V is in the list, nodes
with V in their UNL adjust the quorum and V's validation message is not counted
when verifying if a ledger is fully validated. V's flow of messages and network
interactions, however, will remain the same.

We define the ***effective UNL*** *= original UNL - negative UNL*, and the
***effective quorum*** as the quorum of the *effective UNL*. And we set
*effective quorum = Ceiling(80% * effective UNL)*.

### Validator Reliability Measurement

A node only measures the reliability of validators on its own UNL, and only
proposes based on local observations. There are many metrics that a node can
measure about its validators, but we have chosen ledger validation messages.
This is because every validator shall send one and only one signed validation
message per ledger. This keeps the measurement simple and removes
timing/clock-sync issues. A node will measure the percentage of agreeing
validation messages (*PAV*) received from each validator on the node's UNL.
Note that the node will only count the validation messages that agree with its
own validations.

We define the **PAV** as the **P**ercentage of **A**greed **V**alidation
messages received for the last N ledgers, where N = 256 by default.

When the PAV drops below the ***low-water mark***, the validator is considered
unreliable, and is a candidate to be disabled by being added to the negative
UNL. A validator must have a PAV higher than the ***high-water mark*** to be
re-enabled. The validator is re-enabled by removing it from the negative UNL.
In the implementation, we plan to set the low-water mark at 50% and the
high-water mark at 80%.
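
As a rough illustration of the bookkeeping this measurement implies, the sketch
below tracks agreeing validations over a sliding window of N ledgers and
reports the PAV. It is a minimal sketch under assumed names
(`AgreementTracker`, `onValidation`, `pav`), not the rippled implementation.

```cpp
#include <cstddef>
#include <deque>

// Hypothetical per-validator agreement tracker (not the rippled code).
class AgreementTracker
{
    std::deque<bool> window_;       // one entry per ledger, true = agreed
    std::size_t const maxLedgers_;  // N, 256 by default per the spec

public:
    explicit AgreementTracker(std::size_t n = 256) : maxLedgers_(n) {}

    // Record whether this validator's validation for the latest ledger
    // agreed with our own validation of the same ledger.
    void onValidation(bool agreed)
    {
        window_.push_back(agreed);
        if (window_.size() > maxLedgers_)
            window_.pop_front();
    }

    // PAV: percentage of agreed validation messages over the window.
    double pav() const
    {
        if (window_.empty())
            return 0.0;
        std::size_t agreed = 0;
        for (bool b : window_)
            agreed += b ? 1 : 0;
        return 100.0 * agreed / window_.size();
    }
};
```

With the planned water marks, a validator whose PAV falls below 50% becomes a
disable candidate, and it must climb back above 80% before it can be
re-enabled.
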
### Format Changes

The negative UNL component in a ledger contains three fields:
* ***NegativeUNL***: The current negative UNL, a list of unreliable validators.
* ***ToDisable***: The validator to be added to the negative UNL on the next
  flag ledger.
* ***ToReEnable***: The validator to be removed from the negative UNL on the
  next flag ledger.

All three fields are optional. When the *ToReEnable* field exists, the
*NegativeUNL* field cannot be empty.

A new pseudo-transaction, ***UNLModify***, is added. It has three fields:
* ***Disabling***: A flag indicating whether the modification is to disable or
  to re-enable a validator.
* ***Seq***: The ledger sequence number.
* ***Validator***: The validator to be disabled or re-enabled.

There would be at most one *disable* `UNLModify` and one *re-enable*
`UNLModify` transaction per flag ledger. The full machinery is described
further on.
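
For illustration, an in-memory view of this pseudo-transaction might look like
the sketch below; the struct name and field types are assumptions, not the
actual serialized format.

```cpp
#include <array>
#include <cstdint>

// Assumed key representation; the real format differs.
using PublicKey = std::array<std::uint8_t, 33>;

// Hypothetical in-memory view of a UNLModify pseudo-transaction,
// mirroring the three fields defined above.
struct UNLModify
{
    bool disabling;       // true = disable, false = re-enable
    std::uint32_t seq;    // ledger sequence number (a flag ledger)
    PublicKey validator;  // validator to be disabled or re-enabled
};
```
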
### Negative UNL Maintenance

The negative UNL can only be modified on the flag ledgers. If a validator's
reliability status changes, it takes two flag ledgers to modify the negative
UNL. Let's walk through an example of the algorithm:

* Ledger seq = 100: A validator V goes offline.
* Ledger seq = 256: This is a flag ledger, and V's reliability measurement
  *PAV* is lower than the low-water mark. Other validators add `UNLModify`
  pseudo-transactions `{true, 256, V}` to the transaction set which goes
  through consensus. Then the pseudo-transaction is applied to the negative
  UNL ledger component by setting `ToDisable = V`.
* Ledger seq = 257 ~ 511: The negative UNL ledger component is copied from the
  parent ledger.
* Ledger seq = 512: This is a flag ledger, and the negative UNL is updated:
  `NegativeUNL = NegativeUNL + ToDisable`.

The negative UNL may have up to `MaxNegativeListed = floor(original UNL * 25%)`
validators. The 25% comes from 75% * 80% = 60%, where 75% = 100% - 25%, 80% is
the quorum of the effective UNL, and 60% is the absolute minimum quorum of the
original UNL. Adding more than 25% of the validators to the negative UNL does
not improve the liveness of the network, because adding more validators to the
negative UNL cannot lower the effective quorum.
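
Written out, the bound and a worked check that those percentages are consistent
(with *n* the size of the original UNL):

```latex
\text{MaxNegativeListed} = \lfloor 0.25\,n \rfloor,\qquad
q_{\text{eff}} \;\ge\; 0.8\,(n - 0.25\,n) \;=\; 0.8 \cdot 0.75\,n \;=\; 0.6\,n
```
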
The following is the detailed algorithm:

* **If** the ledger seq = x is a flag ledger

    1. Compute `NegativeUNL = NegativeUNL + ToDisable - ToReEnable` if they
    exist in the parent ledger.

    1. Try to find a candidate to disable if `sizeof NegativeUNL < MaxNegativeListed`:

        1. Find a validator V that has a *PAV* lower than the low-water
        mark, but is not in `NegativeUNL`.

        1. If two or more are found, their public keys are XORed with the hash
        of the parent ledger and the one with the lowest XOR result is chosen
        (sketched after this list).

        1. If V is found, create a `UNLModify` pseudo-transaction
        `TxDisableValidator = {true, x, V}`.

    1. Try to find a candidate to re-enable if `sizeof NegativeUNL > 0`:

        1. Find a validator U that is in `NegativeUNL` and has a *PAV* higher
        than the high-water mark.

        1. If U is not found, try to find one in `NegativeUNL` but not in the
        local *UNL*.

        1. If two or more are found, their public keys are XORed with the hash
        of the parent ledger and the one with the lowest XOR result is chosen.

        1. If U is found, create a `UNLModify` pseudo-transaction
        `TxReEnableValidator = {false, x, U}`.

    1. If any `UNLModify` pseudo-transactions are created, add them to the
    transaction set. The transaction set goes through the consensus algorithm.

    1. If they have enough support, the `UNLModify` pseudo-transactions remain
    in the transaction set agreed by the validators. Then the
    pseudo-transactions are applied to the ledger:

        1. If `TxDisableValidator` exists, set `ToDisable=TxDisableValidator.V`.
        Else clear `ToDisable`.

        1. If `TxReEnableValidator` exists, set
        `ToReEnable=TxReEnableValidator.U`. Else clear `ToReEnable`.

* **Else** (not a flag ledger)

    1. Copy the negative UNL ledger component from the parent ledger.
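
The XOR tie-break referenced in the steps above can be sketched as follows;
`Hash256`, `xorDistance`, and `pickCandidate` are illustrative names rather
than the rippled API, keys are simplified to 32-byte values, and the candidate
list is assumed non-empty.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// For simplicity both validator keys and the parent ledger hash are shown
// as 32-byte values; the real key format differs.
using Hash256 = std::array<std::uint8_t, 32>;

// XOR a candidate's key with the parent ledger hash.
Hash256 xorDistance(Hash256 const& key, Hash256 const& parentHash)
{
    Hash256 out{};
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = static_cast<std::uint8_t>(key[i] ^ parentHash[i]);
    return out;
}

// Among the candidates (validators below the low-water mark and not yet on
// the negative UNL), pick the one with the lowest XOR result.
// Precondition: candidates is non-empty.
Hash256 pickCandidate(
    std::vector<Hash256> const& candidates, Hash256 const& parentHash)
{
    return *std::min_element(
        candidates.begin(), candidates.end(),
        [&](Hash256 const& a, Hash256 const& b) {
            return xorDistance(a, parentHash) < xorDistance(b, parentHash);
        });
}
```

Because every honest node computes the same XOR distances from the same parent
ledger hash, they all propose the same candidate, which is what lets the
pseudo-transaction gather enough support in consensus.
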
The negative UNL is stored on each ledger because we don't know when a
validator may reconnect to the network. If the negative UNL were stored only on
every flag ledger, then a new validator would have to wait until it acquires
the latest flag ledger to know the negative UNL. So any new ledgers created
that are not flag ledgers copy the negative UNL from the parent ledger.

Note that when we have a validator to disable and a validator to re-enable at
the same flag ledger, we create two separate `UNLModify` pseudo-transactions.
We want either one or the other or both to make it into the ledger on their
own merits.

Readers may have noticed that we defined several rules for creating the
`UNLModify` pseudo-transactions but did not describe how to enforce the rules.
The rules are actually enforced by the existing consensus algorithm. Unless
enough validators propose the same pseudo-transaction, it will not be included
in the transaction set of the ledger.

### Quorum Size Calculation

The effective quorum is 80% of the effective UNL. Note that because at most 25%
of the original UNL can be on the negative UNL, the quorum should not be lower
than the absolute minimum quorum (i.e. 60%) of the original UNL. However,
considering that different nodes may have different UNLs, to be safe we compute
`quorum = Ceiling(max(60% * original UNL, 80% * effective UNL))`.
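
As a one-function sketch (the function name is an assumption), that formula
reads:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// quorum = Ceiling(max(60% * original UNL, 80% * effective UNL)),
// per the formula above. Inputs are validator counts.
std::size_t quorumSize(std::size_t originalUnl, std::size_t effectiveUnl)
{
    double const q = std::max(0.6 * originalUnl, 0.8 * effectiveUnl);
    return static_cast<std::size_t>(std::ceil(q));
}
```
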
### Filter Validation Messages

If a validator V is in the negative UNL, it still participates in consensus
sessions in the same way, i.e. V still follows the protocol and publishes
proposal and validation messages. The messages from V are still stored the same
way by everyone, used to calculate the new PAV for V, and could be used in
future consensus sessions if needed. However, V's ledger validation message is
not counted when checking if the ledger is fully validated.
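
A minimal sketch of this filtering, under assumed names and types (not
`LedgerMaster`'s actual code):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <set>
#include <vector>

using Hash256 = std::array<std::uint8_t, 32>;  // stand-in for a validator ID

// Count only matching validations from validators that are not on the
// negative UNL, then compare against the effective quorum.
bool isFullyValidated(
    std::vector<Hash256> const& agreeingValidators,
    std::set<Hash256> const& negativeUnl,
    std::size_t quorum)
{
    std::size_t count = 0;
    for (auto const& v : agreeingValidators)
        if (negativeUnl.count(v) == 0)
            ++count;
    return count >= quorum;
}
```
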
### High Level Sequence Diagram of Code Changes

The diagram below is the sequence of one round of consensus. Classes and
components with non-trivial changes are colored green.

* The `ValidatorList` class is modified to compute the quorum of the effective
  UNL.

* The `Validations` class provides an interface for querying the validation
  messages from trusted validators.

* The `ConsensusAdaptor` component:

    * The `RCLConsensus::Adaptor` class is modified for creating `UNLModify`
      pseudo-transactions.

    * The `Change` class is modified for applying `UNLModify`
      pseudo-transactions.

    * The `Ledger` class is modified for creating and adjusting the negative
      UNL ledger component.

    * The `LedgerMaster` class is modified for filtering out validation
      messages from negative UNL validators when verifying if a ledger is
      fully validated.

## Roads Not Taken

### Use a Mechanism Like Fee Voting to Process UNLModify Pseudo-Transactions

The previous version of the negative UNL specification used the same mechanism
as [fee voting](https://xrpl.org/fee-voting.html#voting-process) for creating
the negative UNL, and used the negative UNL as soon as the ledger was fully
validated. However, the timing of full validation can differ among nodes, so
different negative UNLs could be used, resulting in different effective UNLs
and different quorums for the same ledger. As a result, the network's safety
is impacted.

This updated version does not impact safety, though it operates a bit more
slowly. The negative UNL modifications in the *UNLModify* pseudo-transaction
approved by the consensus will take effect at the next flag ledger. The extra
time of the 256 ledgers should be enough for nodes to be in sync on the
negative UNL modifications.

### Use an Expiration Approach to Re-enable Validators

After a validator disabled by the negative UNL becomes reliable, other
validators explicitly vote for re-enabling it. An alternative approach to
re-enable a validator is the expiration approach, which was considered in the
previous version of the specification. In the expiration approach, every entry
in the negative UNL has a fixed expiration time. One flag ledger interval was
chosen as the expiration interval. Once expired, the other validators must
continue voting to keep the unreliable validator on the negative UNL. The
advantage of this approach is its simplicity. But it has a requirement: the
negative UNL protocol must be able to vote multiple unreliable validators to
be disabled at the same flag ledger. In this version of the specification,
however, only one unreliable validator can be disabled at a flag ledger. So
the expiration approach cannot simply be applied.

### Validator Reliability Measurement and Flag Ledger Frequency

If the ledger time is about 4.5 seconds and the low-water mark is 50%, then in
the worst case it takes 48 minutes *((0.5 * 256 + 256 + 256) * 4.5 / 60 = 48)*
to put an offline validator on the negative UNL. We considered lowering the
flag ledger frequency so that the negative UNL can be more responsive. We also
considered decoupling the reliability measurement and flag ledger frequency to
be more flexible. In practice, however, their benefits are not clear.

## New Attack Vectors

A group of malicious validators may try to frame a reliable validator and put
it on the negative UNL. But they cannot succeed, because:

1. A reliable validator sends a signed validation message every ledger. A
sufficiently connected peer-to-peer network will propagate the validation
messages to other validators. The validators will decide if another validator
is reliable or not only by their local observation of the validation messages
received. So an honest validator's vote on another validator's reliability is
accurate.

1. Given that the votes are accurate, and there is one vote per validator, an
honest validator will not create a UNLModify transaction for a reliable
validator.

1. A validator can be added to a negative UNL only through a UNLModify
transaction.

Assuming the group of malicious validators is smaller than the quorum, they
cannot frame a reliable validator.

## Summary

The bullet points below briefly summarize the current proposal:

* The motivation of the negative UNL is to improve the liveness of the network.

* The targeted faults are the ones frequently observed in the production
  network.

* Validators propose negative UNL candidates based on their local measurements.

* The absolute minimum quorum is 60% of the original UNL.

* The format of the ledger is changed, and a new *UNLModify*
  pseudo-transaction is added. Any tools or systems that rely on the format of
  these data will have to be updated.

* The negative UNL can only be modified on the flag ledgers.

* At most one validator can be added to the negative UNL at a flag ledger.

* At most one validator can be removed from the negative UNL at a flag ledger.

* If a validator's reliability status changes, it takes two flag ledgers to
  modify the negative UNL.

* The quorum is the larger of 80% of the effective UNL and 60% of the original
  UNL.

* If a validator is on the negative UNL, its validation messages are ignored
  when the local node verifies if a ledger is fully validated.

## FAQ

### Question: What are UNLs?

Quote from the [Technical FAQ](https://xrpl.org/technical-faq.html): "They are
the lists of transaction validators a given participant believes will not
conspire to defraud them."

### Question: How does the negative UNL proposal affect network liveness?

The network can make forward progress when more than a quorum of the trusted
validators agree with the progress. The lower the quorum size, the easier it is
for the network to progress. If the quorum is too low, however, the network is
not safe because nodes may have different results. So the quorum size used in
the consensus protocol is a balance between the safety and the liveness of the
network. The negative UNL reduces the size of the effective UNL, resulting in
a lower quorum size while keeping the network safe.

<h3> Question: How does a validator get into the negative UNL? How is a
validator removed from the negative UNL? </h3>

A validator's reliability is measured by other validators. If a validator
becomes unreliable, at a flag ledger, other validators propose *UNLModify*
pseudo-transactions which vote to add the validator to the negative UNL during
the consensus session. If agreed, the validator is added to the negative UNL
at the next flag ledger. The mechanism for removing a validator from the
negative UNL is the same.

### Question: Given a negative UNL, what happens if the UNL changes?

Answer: Let's consider the cases:

1. A validator is added to the UNL, and it is already in the negative UNL. This
case could happen when not all the nodes have the same UNL. Note that the
negative UNL on the ledger lists unreliable nodes that are not necessarily the
validators for everyone.

    In this case, the liveness is affected negatively, because the minimum
    quorum could be larger while the usable validators are not increased.

1. A validator is removed from the UNL, and it is in the negative UNL.

    In this case, the liveness is affected positively, because the quorum could
    be smaller while the usable validators are not reduced.

1. A validator is added to the UNL, and it is not in the negative UNL.
1. A validator is removed from the UNL, and it is not in the negative UNL.

    Cases 3 and 4 are not affected by the negative UNL protocol.

### Question: Can we simply lower the quorum to 60% without the negative UNL?

Answer: No, because the negative UNL approach is safer.

First let's compare the two approaches intuitively: (1) the *negative UNL*
approach, and (2) *lower quorum*: simply lowering the quorum from 80% to 60%
without the negative UNL. The negative UNL approach uses consensus to come up
with a list of unreliable validators, which are then removed from the effective
UNL temporarily. With this approach, the list of unreliable validators is
agreed to by a quorum of validators and will be used by every node in the
network to adjust its UNL. The quorum is always 80% of the effective UNL. The
lower quorum approach is a tradeoff between safety and liveness and goes
against our principle of preferring safety over liveness. Note that different
validators don't have to agree on which validation sources they are ignoring.

Next we compare the two approaches quantitatively with examples, and apply
Theorem 8 of the [Analysis of the XRP Ledger Consensus
Protocol](https://arxiv.org/abs/1802.07242) paper:

*XRP LCP guarantees fork safety if **O<sub>i,j</sub> > n<sub>j</sub> / 2 +
n<sub>i</sub> − q<sub>i</sub> + t<sub>i,j</sub>** for every pair of nodes
P<sub>i</sub>, P<sub>j</sub>,*

where *O<sub>i,j</sub>* is the overlapping requirement, n<sub>j</sub> and
n<sub>i</sub> are UNL sizes, q<sub>i</sub> is the quorum size of P<sub>i</sub>,
*t<sub>i,j</sub> = min(t<sub>i</sub>, t<sub>j</sub>, O<sub>i,j</sub>)*, and
t<sub>i</sub> and t<sub>j</sub> are the number of faults that can be tolerated
by P<sub>i</sub> and P<sub>j</sub>.

We denote *UNL<sub>i</sub>* as *P<sub>i</sub>'s UNL*, and *|UNL<sub>i</sub>|*
as the size of *P<sub>i</sub>'s UNL*.

Assuming *|UNL<sub>i</sub>| = |UNL<sub>j</sub>|*, let's consider the following
three cases:

1. With an 80% quorum and 20% faults, *O<sub>i,j</sub> > 100% / 2 + 100% - 80%
+ 20% = 90%*. I.e. fork safety requires > 90% UNL overlap. This is one of the
results in the analysis paper.

1. If the quorum is 60%, the relationship between the overlapping requirement
and the faults that can be tolerated is *O<sub>i,j</sub> > 90% +
t<sub>i,j</sub>*. Under the same overlapping condition (i.e. 90%), to guarantee
fork safety, the network cannot tolerate any faults. So under the same
overlapping condition, if the quorum is simply lowered, the network can
tolerate fewer faults.

1. With the negative UNL approach, we want to argue that the inequality
*O<sub>i,j</sub> > n<sub>j</sub> / 2 + n<sub>i</sub> − q<sub>i</sub> +
t<sub>i,j</sub>* always holds to guarantee fork safety while the negative UNL
protocol runs, i.e. the effective quorum is lowered without weakening the
network's fault tolerance. To make the discussion easier, we rewrite the
inequality as *O<sub>i,j</sub> > n<sub>j</sub> / 2 + (n<sub>i</sub> −
q<sub>i</sub>) + min(t<sub>i</sub>, t<sub>j</sub>)*, where O<sub>i,j</sub> is
dropped from the definition of t<sub>i,j</sub> because *O<sub>i,j</sub> >
min(t<sub>i</sub>, t<sub>j</sub>)* always holds under the parameters we will
use. Assuming a validator V is added to the negative UNL, now let's consider
the 4 cases:

    1. V is not on UNL<sub>i</sub> nor UNL<sub>j</sub>

        The inequality holds because none of the variables change.

    1. V is on UNL<sub>i</sub> but not on UNL<sub>j</sub>

        The value of *(n<sub>i</sub> − q<sub>i</sub>)* is smaller. The value of
        *min(t<sub>i</sub>, t<sub>j</sub>)* could be smaller too. The other
        variables do not change. Overall, the left side of the inequality does
        not change, but the right side is smaller. So the inequality holds.

    1. V is not on UNL<sub>i</sub> but on UNL<sub>j</sub>

        The value of *n<sub>j</sub> / 2* is smaller. The value of
        *min(t<sub>i</sub>, t<sub>j</sub>)* could be smaller too. The other
        variables do not change. Overall, the left side of the inequality does
        not change, but the right side is smaller. So the inequality holds.

    1. V is on both UNL<sub>i</sub> and UNL<sub>j</sub>

        The value of *O<sub>i,j</sub>* is reduced by 1. The values of
        *n<sub>j</sub> / 2*, *(n<sub>i</sub> − q<sub>i</sub>)*, and
        *min(t<sub>i</sub>, t<sub>j</sub>)* are reduced by 0.5, 0.2, and 1
        respectively. Overall, the left side of the inequality is reduced by 1,
        and the right side is reduced by 1.7. So the inequality holds.

    The inequality holds for all the cases. So with the negative UNL approach,
    the network's fork safety is preserved, while the quorum is lowered, which
    increases the network's liveness.
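
For concreteness, substituting the case-1 parameters (80% quorum, 20% faults)
into the inequality gives the paper's 90% overlap figure (an illustrative check
only):

```latex
O_{i,j} \;>\; \frac{n_j}{2} + (n_i - q_i) + t_{i,j}
        \;=\; 50\% + (100\% - 80\%) + 20\% \;=\; 90\%
```
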
<h3> Question: We have observed that occasionally a validator wanders off on
its own chain. How is this case handled by the negative UNL algorithm? </h3>

Answer: The case where a validator wanders off on its own chain can be
measured with the validation agreement, because the validations by this
validator must differ from other validators' validations of the same sequence
numbers. When there are enough disagreeing validations, other validators will
vote this validator onto the negative UNL.

In general, by measuring the agreement of validations, we also measure
"sanity". If two validators have too many disagreements, one of them could be
insane. When enough validators think a validator is insane, that validator is
put on the negative UNL.

<h3> Question: Why would there be at most one disable UNLModify and one
re-enable UNLModify transaction per flag ledger? </h3>

Answer: It is a design choice so that the effective UNL does not change too
quickly. A typical targeted scenario is several validators going offline
slowly during a long weekend. The current design can handle this kind of case
well without changing the effective UNL too quickly.

## Appendix

### Confidence Test

We will use two test networks: a single-machine test network with multiple IP
addresses and the QE test network with multiple machines. The single-machine
network will be used to run all the test cases and to debug. The QE network
will be used after that; we want to see the test cases still pass with real
network delay. A test case specifies:

1. a UNL with a different number of validators for different test cases,
1. a network with zero or more non-validator nodes,
1. a sequence of validator reliability change events (by killing/restarting
   nodes, or by running a modified rippled that does not send all validation
   messages),
1. the correct outcomes.

For all the test cases, the correct outcomes are verified by examining logs.
We will grep the logs to see if the correct negative UNLs are generated, and
whether or not the network is making progress when it should be. The ripdtop
tool will be helpful for monitoring validators' states and ledger progress.
Some of the timing parameters of rippled will be changed to give a faster
ledger time. Most if not all test cases do not need client transactions.

For example, the test cases for the prototype:
1. A 10-validator UNL.
1. The network does not have other nodes.
1. The validators will be started from the genesis. Once they start to produce
   ledgers, we kill five validators, one every flag ledger interval. Then we
   will restart them one by one.
1. A sequence of events (or the lack of events), such as a killed validator
   being added to the negative UNL.

#### Roads Not Taken: Test with Extended CSF

We considered testing with the current unit test framework, specifically the
[Consensus Simulation
Framework](https://github.com/ripple/rippled/blob/develop/src/test/csf/README.md)
(CSF). However, the CSF currently can only test the generic consensus
algorithm as described in the paper: [Analysis of the XRP Ledger Consensus
Protocol](https://arxiv.org/abs/1802.07242).
@@ -1,79 +0,0 @@
@startuml negativeUNL_highLevel_sequence

skinparam sequenceArrowThickness 2
skinparam roundcorner 20
skinparam maxmessagesize 160

actor "Rippled Start" as RS
participant "Timer" as T
participant "NetworkOPs" as NOP
participant "ValidatorList" as VL #lightgreen
participant "Consensus" as GC
participant "ConsensusAdaptor" as CA #lightgreen
participant "Validations" as RM #lightgreen

RS -> NOP: begin consensus
activate NOP
NOP -[#green]> VL: <font color=green>update negative UNL
hnote over VL#lightgreen: store a copy of\nnegative UNL
VL -> NOP
NOP -> VL: update trusted validators
activate VL
VL -> VL: re-calculate quorum
hnote over VL#lightgreen: ignore negative-listed validators\nwhen calculating the quorum
VL -> NOP
deactivate VL
NOP -> GC: start round
activate GC
GC -> GC: phase = OPEN
GC -> NOP
deactivate GC
deactivate NOP

loop at regular frequency
T -> GC: timerEntry
activate GC
end

alt phase == OPEN
    alt should close ledger
        GC -> GC: phase = ESTABLISH
        GC -> CA: onClose
        activate CA
            alt sqn%256==0
            CA -[#green]> RM: <font color=green>getValidations
            CA -[#green]> CA: <font color=green>create UNLModify Tx
            hnote over CA#lightgreen: use validations of the last 256 ledgers\nto figure out UNLModify Tx candidates.\nIf any, create UNLModify Tx, and add to TxSet.
            end
        CA -> GC
        GC -> CA: propose
        deactivate CA
    end
else phase == ESTABLISH
    hnote over GC: receive peer positions
    GC -> GC : update our position
    GC -> CA : propose \n(if position changed)
    GC -> GC : check if we have consensus
    alt consensus reached
        GC -> GC: phase = ACCEPT
        GC -> CA : onAccept
        activate CA
            CA -> CA : build LCL
            hnote over CA #lightgreen: copy negative UNL from parent ledger
            alt sqn%256==0
                CA -[#green]> CA: <font color=green>Adjust negative UNL
                CA -[#green]> CA: <font color=green>apply UNLModify Tx
            end
            CA -> CA : validate and send validation message
            activate NOP
                CA -> NOP : end consensus and\n<b>begin next consensus round
            deactivate NOP
        deactivate CA
        hnote over RM: receive validations
    end
else phase == ACCEPTED
    hnote over GC: timerEntry has nothing to do at this phase
end
deactivate GC

@enduml
Binary file not shown. (Before: 138 KiB image)
@@ -1,88 +0,0 @@
# Ledger Replay

`LedgerReplayer` is a new `Stoppable` for replaying ledgers.
Patterned after two other `Stoppable`s under `JobQueue`---`InboundLedgers`
and `InboundTransactions`---it acts like a factory for creating
state-machine workers, and a network message demultiplexer for those workers.
Think of these workers like asynchronous functions.
Like functions, they each take a set of parameters.
The `Stoppable` memoizes these functions. It maintains a table for each
worker type, mapping sets of arguments to the worker currently working
on that argument set.
Whenever the `Stoppable` is asked to construct a worker, it first searches its
table to see if there is an existing worker with the same or overlapping
argument set.
If one exists, then it is used. If not, then a new one is created,
initialized, and added to the table.
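
In sketch form, that memoization is a table of weak references keyed by
argument set; `WorkerTable` and `getOrCreate` are illustrative names, not the
actual rippled classes, and the overlapping-argument-set case is omitted.

```cpp
#include <map>
#include <memory>
#include <mutex>

// Hypothetical memoizing factory for one worker type, keyed by its
// argument set (reduced to a single Key for simplicity).
template <class Worker, class Key>
class WorkerTable
{
    std::mutex mtx_;
    std::map<Key, std::weak_ptr<Worker>> table_;

public:
    // Return the existing worker for this key, or create, initialize,
    // and remember a new one.
    std::shared_ptr<Worker> getOrCreate(Key const& key)
    {
        std::lock_guard<std::mutex> lock(mtx_);
        if (auto existing = table_[key].lock())
            return existing;
        auto worker = std::make_shared<Worker>(key);
        worker->init();  // fires the network request, starts the timeout
        table_[key] = worker;
        return worker;
    }
};
```
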
For `LedgerReplayer`, there are three worker types: `LedgerReplayTask`,
`SkipListAcquire`, and `LedgerDeltaAcquire`.
Each is derived from `TimeoutCounter` to give it a timeout.
For `LedgerReplayTask`, the parameter set
is {reason, finish ledger ID, number of ledgers}. For `SkipListAcquire` and
`LedgerDeltaAcquire`, there is just one parameter: a ledger ID.

Each `Stoppable` has an entry point. For `LedgerReplayer`, it is `replay`.
`replay` creates two workers: a `LedgerReplayTask` and a `SkipListAcquire`.
`LedgerDeltaAcquire`s are created in the callback for when the skip list
returns.

For `SkipListAcquire` and `LedgerDeltaAcquire`, initialization fires off the
underlying asynchronous network request and starts the timeout. The argument
set identifying the worker is included in the network request, and copied to
the network response. `SkipListAcquire` sends a request for a proof path for
the skip list of the desired ledger. `LedgerDeltaAcquire` sends a request for
the transaction set of the desired ledger.

`LedgerReplayer` is also a network message demultiplexer.
When a response arrives for a request that was sent by a `SkipListAcquire` or
`LedgerDeltaAcquire` worker, the `Peer` object knows to send it to the
`LedgerReplayer`, which looks up the worker waiting for that response based on
the identifying argument set included in the response.
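
The demultiplexing step can then be pictured as a lookup in the same kind of
table when a response arrives (again with assumed names):

```cpp
#include <map>
#include <memory>

// Hypothetical response dispatch: find the worker whose argument set was
// echoed back in the response, and hand it the payload.
template <class Worker, class Key>
std::shared_ptr<Worker> findWorker(
    std::map<Key, std::weak_ptr<Worker>>& table, Key const& echoedKey)
{
    auto it = table.find(echoedKey);
    if (it == table.end())
        return nullptr;        // late or unsolicited response: drop it
    return it->second.lock();  // may be null if the worker already died
}
```
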
`LedgerReplayTask` may ask `InboundLedgers` to send requests to acquire
 | 
			
		||||
the start ledger, but there is no way to attach a callback or be notified when
 | 
			
		||||
the `InboundLedger` worker completes. All the responses for its messages will
 | 
			
		||||
be directed to `InboundLedgers`, not `LedgerReplayer`. Instead,
 | 
			
		||||
`LedgerReplayTask` checks whether the start ledger has arrived every time its
 | 
			
		||||
timeout expires.

Like a promise, each worker keeps track of whether it is pending (`!isDone()`)
or whether it has resolved successfully (`complete_ == true`) or unsuccessfully
(`failed_ == true`). It will never exist in both resolved states at once, nor
will it return to a pending state after reaching a resolved state.
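
The invariant in miniature (a sketch; the real `complete_` and `failed_`
flags live in the worker classes themselves):

```cpp
#include <cassert>

class Resolvable  // sketch of the promise-like lifecycle
{
public:
    bool isDone() const { return complete_ || failed_; }

    void markComplete()
    {
        assert(!isDone());  // never resolve twice, never flip states
        complete_ = true;
    }

    void markFailed()
    {
        assert(!isDone());
        failed_ = true;
    }

private:
    bool complete_ = false;  // resolved successfully
    bool failed_ = false;    // resolved unsuccessfully
};
```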

Like promises, some workers can accept continuations to be called when they
reach a resolved state, or immediately if they are already resolved.
`SkipListAcquire` and `LedgerDeltaAcquire` both accept continuations of a type
specific to their payload, via a method named `addDataCallback()`. Continuations
cannot be removed explicitly, but they are held by `std::weak_ptr` so they can
be removed implicitly.
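
A sketch of such a continuation holder, with an `int` standing in for the
payload type and the class name marking it as illustrative:

```cpp
#include <functional>
#include <memory>
#include <vector>

class AcquireStub  // not the real SkipListAcquire interface
{
public:
    using Callback = std::function<void(int payload)>;

    void
    addDataCallback(std::shared_ptr<Callback> const& cb)
    {
        callbacks_.push_back(cb);  // held weakly: owner death removes it
        if (resolved_)
            (*cb)(payload_);  // already resolved: run the continuation now
    }

    void
    resolve(int payload)
    {
        resolved_ = true;
        payload_ = payload;
        for (auto const& weak : callbacks_)
            if (auto cb = weak.lock())  // expired continuations are skipped
                (*cb)(payload_);
    }

private:
    std::vector<std::weak_ptr<Callback>> callbacks_;
    bool resolved_ = false;
    int payload_ = 0;
};
```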

`LedgerReplayTask` is simultaneously:

1. an asynchronous function,
1. a continuation to one `SkipListAcquire` asynchronous function,
1. a continuation to zero or more `LedgerDeltaAcquire` asynchronous functions, and
1. a continuation to its own timeout.

Each of these roles corresponds to a different entry point:

1. `init()`
1. the callback added to `SkipListAcquire`, which calls `updateSkipList(...)` or `cancel()`
1. the callback added to `LedgerDeltaAcquire`, which calls `deltaReady(...)` or `cancel()`
1. `onTimer()`

Each entry point does something unique to it. They
either (a) transition `LedgerReplayTask` to a terminal failed resolved state
(`cancel()` and `onTimer()`) or (b) try to make progress toward the successful
resolved state: `init()` and `updateSkipList(...)` call `trigger()`, while
`deltaReady(...)` calls `tryAdvance()`. There is a similarity between this
pattern and the way coroutines are implemented, where every yield saves the
spot in the code where it left off and every resume jumps back to that spot.

### Sequence Diagram

*(sequence diagram image; PlantUML source below)*

### Class Diagram

*(class diagram image; PlantUML source below)*

Binary file not shown.
Before Width: | Height: | Size: 90 KiB
@@ -1,98 +0,0 @@
@startuml

class TimeoutCounter {
  #app_ : Application&
}

TimeoutCounter o-- "1" Application
': app_

Stoppable <.. Application

class Application {
  -m_ledgerReplayer : uptr<LedgerReplayer>
  -m_inboundLedgers : uptr<InboundLedgers>
}

Application *-- "1" LedgerReplayer
': m_ledgerReplayer
Application *-- "1" InboundLedgers
': m_inboundLedgers

Stoppable <.. InboundLedgers
Application "1" --o InboundLedgers
': app_

class InboundLedgers {
  -app_ : Application&
}

Stoppable <.. LedgerReplayer
InboundLedgers "1" --o LedgerReplayer
': inboundLedgers_
Application "1" --o LedgerReplayer
': app_

class LedgerReplayer {
  +createDeltas(LedgerReplayTask)
  -app_ : Application&
  -inboundLedgers_ : InboundLedgers&
  -tasks_ : vector<sptr<LedgerReplayTask>>
  -deltas_ : hash_map<u256, wptr<LedgerDeltaAcquire>>
  -skipLists_ : hash_map<u256, wptr<SkipListAcquire>>
}

LedgerReplayer *-- LedgerReplayTask
': tasks_
LedgerReplayer o-- LedgerDeltaAcquire
': deltas_
LedgerReplayer o-- SkipListAcquire
': skipLists_

TimeoutCounter <.. LedgerReplayTask
InboundLedgers "1" --o LedgerReplayTask
': inboundLedgers_
LedgerReplayer "1" --o LedgerReplayTask
': replayer_

class LedgerReplayTask {
  -inboundLedgers_ : InboundLedgers&
  -replayer_ : LedgerReplayer&
  -skipListAcquirer_ : sptr<SkipListAcquire>
  -deltas_ : vector<sptr<LedgerDeltaAcquire>>
  +addDelta(sptr<LedgerDeltaAcquire>)
}

LedgerReplayTask *-- "1" SkipListAcquire
': skipListAcquirer_
LedgerReplayTask *-- LedgerDeltaAcquire
': deltas_

TimeoutCounter <.. SkipListAcquire
InboundLedgers "1" --o SkipListAcquire
': inboundLedgers_
LedgerReplayer "1" --o SkipListAcquire
': replayer_
LedgerReplayTask --o SkipListAcquire : implicit via callback

class SkipListAcquire {
  +addDataCallback(callback)
  -inboundLedgers_ : InboundLedgers&
  -replayer_ : LedgerReplayer&
  -dataReadyCallbacks_ : vector<callback>
}

TimeoutCounter <.. LedgerDeltaAcquire
InboundLedgers "1" --o LedgerDeltaAcquire
': inboundLedgers_
LedgerReplayer "1" --o LedgerDeltaAcquire
': replayer_
LedgerReplayTask --o LedgerDeltaAcquire : implicit via callback

class LedgerDeltaAcquire {
  +addDataCallback(callback)
  -inboundLedgers_ : InboundLedgers&
  -replayer_ : LedgerReplayer&
  -dataReadyCallbacks_ : vector<callback>
}
@enduml

Binary file not shown.
Before Width: | Height: | Size: 121 KiB
@@ -1,85 +0,0 @@
@startuml

autoactivate on

' participant app as "Application"
participant peer as "Peer"
participant lr as "LedgerReplayer"
participant lrt as "LedgerReplayTask"
participant sla as "SkipListAcquire"
participant lda as "LedgerDeltaAcquire"

[-> lr : replay(finishId, numLedgers)
  lr -> sla : make_shared(finishHash)
  return skipList
  lr -> lrt : make_shared(skipList)
  return task
  lr -> sla : init(numPeers=1)
    sla -> sla : trigger(numPeers=1)
      sla -> peer : sendRequest(ProofPathRequest)
      return
    return
  return
  lr -> lrt : init()
    lrt -> sla : addDataCallback(callback)
    return
  return
deactivate lr

[-> peer : onMessage(ProofPathResponse)
  peer -> lr : gotSkipList(ledgerHeader, item)
    lr -> sla : processData(ledgerSeq, item)
      sla -> sla : onSkipListAcquired(skipList, ledgerSeq)
        sla -> sla : notify()
        note over sla: call the callbacks added by\naddDataCallback(callback).
          sla -> lrt : callback(ledgerId)
            lrt -> lrt : updateSkipList(ledgerId, ledgerSeq, skipList)
            lrt -> lr : createDeltas(this)
            loop
              lr -> lda : make_shared(ledgerId, ledgerSeq)
              return delta
              lr -> lrt : addDelta(delta)
                lrt -> lda : addDataCallback(callback)
                return
              return
              lr -> lda : init(numPeers=1)
                lda -> lda : trigger(numPeers=1)
                  lda -> peer : sendRequest(ReplayDeltaRequest)
                  return
                return
              return
              end
            return
          return
        return
      return
    return
  return
deactivate peer

[-> peer : onMessage(ReplayDeltaResponse)
  peer -> lr : gotReplayDelta(ledgerHeader)
    lr -> lda : processData(ledgerHeader, txns)
      lda -> lda : notify()
      note over lda: call the callbacks added by\naddDataCallback(callback).
        lda -> lrt : callback(ledgerId)
        lrt -> lrt : deltaReady(ledgerId)
          lrt -> lrt : tryAdvance()
            loop as long as child can be built
            lrt -> lda : tryBuild(parent)
              lda -> lda : onLedgerBuilt()
                note over lda
                  Schedule a job to store the built ledger.
                end note
              return
            return child
            end
          return
        return
      return
    return
  return
deactivate peer

@enduml
@@ -1,20 +0,0 @@
# Code Style Cheat Sheet

## Form

- One class per header file.
- Place each data member on its own line.
- Place each ctor-initializer on its own line.
- Create typedefs for primitive types to describe them.
- Return descriptive local variables instead of constants.
- Use long descriptive names instead of abbreviations.
- Use `explicit` for single-argument ctors.
- Avoid globals, especially objects with static storage duration.
- Order class declarations as types, public, protected, private, then data
  (see the sketch after this list).
- Prefer `private` over `protected`.
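
A small hypothetical header that follows several of these rules at once:

```cpp
#include <cstdint>

using Sequence = std::uint32_t;  // typedef describing a primitive type

class LedgerCursor  // hypothetical class, one per header file
{
public:
    explicit LedgerCursor(Sequence sequence)  // explicit single-argument ctor
        : sequence_(sequence)
        , valid_(true)
    {
    }

    Sequence
    sequence() const
    {
        // Return a descriptive local variable instead of a bare constant.
        Sequence const currentSequence = sequence_;
        return currentSequence;
    }

private:
    Sequence sequence_;  // each data member on its own line
    bool valid_;
};
```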

## Function

- Minimize external dependencies
  * Pass options in the ctor instead of using theConfig
  * Use as few other classes as possible
@@ -1,82 +0,0 @@
# Coding Standards

Coding standards used here gradually evolve and propagate through
code reviews. Some aspects are enforced more strictly than others.

## Rules

These rules only apply to our own code. We can't enforce any sort of
style on the external repositories and libraries we include. The best
guideline is to maintain the standards that are used in those libraries.

* Tab inserts 4 spaces. No tab characters.
* Braces are indented in the [Allman style][1].
* Modern C++ principles. No naked ```new``` or ```delete```.
* Line lengths limited to 80 characters. Exceptions limited to data and tables.

## Guidelines

If you want to do something contrary to these guidelines, understand
why you're doing it. Think, use common sense, and consider that your
changes will probably need to be maintained long after you've
moved on to other projects.

* Use white space and blank lines to guide the eye and keep your intent clear.
* Put private data members at the top of a class, and the 6 public special
  members immediately after, in the following order (see the skeleton after
  this list):
  * Destructor
  * Default constructor
  * Copy constructor
  * Copy assignment
  * Move constructor
  * Move assignment
* Don't over-inline by defining large functions within the class
  declaration, not even for template classes.
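
A hypothetical skeleton showing that member ordering:

```cpp
class Widget
{
private:
    int value_ = 0;  // private data members at the top

public:
    ~Widget() = default;                         // destructor
    Widget() = default;                          // default constructor
    Widget(Widget const&) = default;             // copy constructor
    Widget& operator=(Widget const&) = default;  // copy assignment
    Widget(Widget&&) = default;                  // move constructor
    Widget& operator=(Widget&&) = default;       // move assignment

    int value() const { return value_; }  // small enough to define inline
};
```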

## Formatting

The goal of source code formatting should always be to make things as easy to
read as possible. White space is used to guide the eye so that details are not
overlooked. Blank lines are used to separate code into "paragraphs."

* Always place a space before and after all binary operators,
  especially assignments (`operator=`).
* The `!` operator should be preceded by a space, but not followed by one.
* The `~` operator should be preceded by a space, but not followed by one.
* The `++` and `--` operators should have no spaces between the operator and
  the operand.
* A space never appears before a comma, and always appears after a comma.
* Don't put spaces after a parenthesis. A typical member function call might
  look like this: `foobar (1, 2, 3);`
* In general, leave a blank line before an `if` statement.
* In general, leave a blank line after a closing brace `}`.
* Do not place code on the same line as any opening or
  closing brace.
* Do not write `if` statements all-on-one-line. The exception to this is when
  you've got a sequence of similar `if` statements, and are aligning them all
  vertically to highlight their similarities.
* In an `if-else` statement, if you surround one half of the statement with
  braces, you also need to put braces around the other half, to match.
* When writing a pointer type, use this spacing: `SomeObject* myObject`.
  Technically, a more correct spacing would be `SomeObject *myObject`, but
  it makes more sense for the asterisk to be grouped with the type name,
  since being a pointer is part of the type, not the variable name. The only
  time that this can lead to any problems is when you're declaring multiple
  pointers of the same type in the same statement - which leads on to the next
  rule:
* When declaring multiple pointers, never do so in a single statement, e.g.
  `SomeObject* p1, *p2;` - instead, always split them out onto separate lines
  and write the type name again, to make it quite clear what's going on, and
  avoid the danger of missing out any vital asterisks.
* The previous point also applies to references, so always put the `&` next to
  the type rather than the variable, e.g. `void foo (Thing const& thing)`. And
  don't put a space on both sides of the `*` or `&` - always put a space after
  it, but never before it.
* The word `const` should be placed to the right of the thing that it modifies,
  for consistency. For example `int const` refers to an int which is const.
  `int const*` is a pointer to an int which is const. `int *const` is a const
  pointer to an int.
* Always place a space in between the template angle brackets and the type
  name. Template code is already hard enough to read!
  (Several of these rules are combined in the sketch after this list.)
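
A hypothetical snippet applying several of the rules above in one place:

```cpp
#include <vector>

struct Thing
{
    int n = 0;
};

void
foo (Thing const& thing)  // '&' grouped with the type, space after it only
{
    (void) thing;
}

void
examples()
{
    int value = 1;
    int const* pointerToConst = &value;  // pointer to an int which is const
    int* const constPointer = &value;    // const pointer to an int
    (void) pointerToConst;
    (void) constPointer;

    Thing thing;
    Thing* first = &thing;   // one pointer declaration per statement,
    Thing* second = &thing;  // with the type name written out each time
    (void) second;

    bool done = (value == 1);  // spaces around binary operators
    bool notDone = !done;      // '!' preceded, but not followed, by a space

    std::vector <Thing> things;  // space before the template angle brackets
    things.push_back (*first);   // space before the parenthesis

    if (notDone)
    {
        ++value;  // no space between '++' and its operand
    }

    foo (thing);
}
```
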
[1]: http://en.wikipedia.org/wiki/Indent_style#Allman_style
@@ -1,5 +0,0 @@
# `rippled` Docker Image

- Some info relating to Docker containers can be found here: [../Builds/containers](../Builds/containers)
- Images for building and testing rippled can be found here: [thejohnfreeman/rippled-docker](https://github.com/thejohnfreeman/rippled-docker/)
  - These images do not have rippled. They have all the tools necessary to build rippled.
Some files were not shown because too many files have changed in this diff.