Compare commits


4 Commits

7d47304efd  Vinnie Falco  2014-05-02 11:40:46 -07:00
    Set version to 0.24.0-rc2

91f84e1512  JoelKatz  2014-05-02 11:39:36 -07:00
    Pathfinding dispatch improvements
    * Simplify decision whether to update a path
    * Prevent pathfinding threads from blocking each other

bae05d3a98  Vinnie Falco  2014-05-01 14:49:44 -07:00
    Set version to 0.24.0-rc1

9ffb81b8c2  JoelKatz  2014-05-01 14:49:44 -07:00
    Validation timing fixes:
    * Log whether consensus built a ledger we already had or were acquiring
    * Don't trigger an acquire for a validation for a ledger we might be building
    * When we finish building a ledger, try to accept that ledger
    * If we cannot accept a built ledger, check held validations
    * Correctly set isCurrent for untrusted validations
    * Add appropriate logging

    This fixes a race condition that could cause spurious and expensive
    ledger fetches across the network and delayed recognition of a
    fully-validated ledger.
4087 changed files with 741198 additions and 525387 deletions


@@ -1,91 +0,0 @@
---
Language: Cpp
AccessModifierOffset: -4
AlignAfterOpenBracket: AlwaysBreak
AlignConsecutiveAssignments: false
AlignConsecutiveDeclarations: false
AlignEscapedNewlinesLeft: true
AlignOperands: false
AlignTrailingComments: true
AllowAllParametersOfDeclarationOnNextLine: false
AllowShortBlocksOnASingleLine: false
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: false
AllowShortIfStatementsOnASingleLine: false
AllowShortLoopsOnASingleLine: false
AlwaysBreakAfterReturnType: All
AlwaysBreakBeforeMultilineStrings: true
AlwaysBreakTemplateDeclarations: true
BinPackArguments: false
BinPackParameters: false
BraceWrapping:
    AfterClass: true
    AfterControlStatement: true
    AfterEnum: false
    AfterFunction: true
    AfterNamespace: false
    AfterObjCDeclaration: true
    AfterStruct: true
    AfterUnion: true
    BeforeCatch: true
    BeforeElse: true
    IndentBraces: false
BreakBeforeBinaryOperators: false
BreakBeforeBraces: Custom
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: true
ColumnLimit: 80
CommentPragmas: '^ IWYU pragma:'
ConstructorInitializerAllOnOneLineOrOnePerLine: true
ConstructorInitializerIndentWidth: 4
ContinuationIndentWidth: 4
Cpp11BracedListStyle: true
DerivePointerAlignment: false
DisableFormat: false
ExperimentalAutoDetectBinPacking: false
ForEachMacros: [ Q_FOREACH, BOOST_FOREACH ]
IncludeCategories:
    - Regex: '^<(test)/'
      Priority: 0
    - Regex: '^<(xrpld)/'
      Priority: 1
    - Regex: '^<(xrpl)/'
      Priority: 2
    - Regex: '^<(boost)/'
      Priority: 3
    - Regex: '.*'
      Priority: 4
IncludeIsMainRegex: '$'
IndentCaseLabels: true
IndentFunctionDeclarationAfterType: false
IndentRequiresClause: true
IndentWidth: 4
IndentWrappedFunctionNames: false
KeepEmptyLinesAtTheStartOfBlocks: false
MaxEmptyLinesToKeep: 1
NamespaceIndentation: None
ObjCSpaceAfterProperty: false
ObjCSpaceBeforeProtocolList: false
PenaltyBreakBeforeFirstCallParameter: 1
PenaltyBreakComment: 300
PenaltyBreakFirstLessLess: 120
PenaltyBreakString: 1000
PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 200
PointerAlignment: Left
ReflowComments: true
RequiresClausePosition: OwnLine
SortIncludes: true
SpaceAfterCStyleCast: false
SpaceBeforeAssignmentOperators: true
SpaceBeforeParens: ControlStatements
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 2
SpacesInAngles: false
SpacesInContainerLiterals: true
SpacesInCStyleCastParentheses: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
Standard: Cpp11
TabWidth: 8
UseTab: Never


@@ -1,37 +0,0 @@
codecov:
  require_ci_to_pass: true
comment:
  behavior: default
  layout: reach,diff,flags,tree,reach
  show_carryforward_flags: false
coverage:
  range: "60..80"
  precision: 1
  round: nearest
  status:
    project:
      default:
        target: 60%
        threshold: 2%
    patch:
      default:
        target: auto
        threshold: 2%
    changes: false
github_checks:
  annotations: true
parsers:
  cobertura:
    partials_as_hits: true
    handle_missing_conditions: true
slack_app: false
ignore:
  - "src/test/"
  - "include/xrpl/beast/test/"
  - "include/xrpl/beast/unit_test/"


@@ -1,13 +0,0 @@
# This feature requires Git >= 2.24
# To use it by default in git blame:
# git config blame.ignoreRevsFile .git-blame-ignore-revs
50760c693510894ca368e90369b0cc2dabfd07f3
e2384885f5f630c8f0ffe4bf21a169b433a16858
241b9ddde9e11beb7480600fd5ed90e1ef109b21
760f16f56835663d9286bd29294d074de26a7ba6
0eebe6a5f4246fced516d52b83ec4e7f47373edd
2189cc950c0cebb89e4e2fa3b2d8817205bf7cef
b9d007813378ad0ff45660dc07285b823c7e9855
fe9a5365b8a52d4acc42eb27369247e6f238a4f9
9a93577314e6a8d4b4a8368cc9d2b15a5d8303e8
552377c76f55b403a1c876df873a23d780fcc81c

.gitattributes

@@ -1,5 +1,5 @@
# Set default behaviour, in case users don't have core.autocrlf set.
#* text=auto
* text=auto
# These annoying files
rippled.1 binary


@@ -1,31 +0,0 @@
---
name: Bug Report
about: Create a report to help us improve rippled
title: "[Title with short description] (Version: [rippled version])"
labels: ''
assignees: ''
---
<!-- Please search existing issues to avoid creating duplicates.-->
## Issue Description
<!--Provide a summary for your issue/bug.-->
## Steps to Reproduce
<!--List in detail the exact steps to reproduce the unexpected behavior of the software.-->
## Expected Result
<!--Explain in detail what behavior you expected to happen.-->
## Actual Result
<!--Explain in detail what behavior actually happened.-->
## Environment
<!--Please describe your environment setup (such as Ubuntu 18.04 with Boost 1.70).-->
<!-- If you are using a formal release, please use the version returned by './rippled --version' as the version number-->
<!-- If you are working off of develop, please add the git hash via 'git rev-parse HEAD'-->
## Supporting Files
<!--If you have supporting files such as a log, feel free to post a link here using Github Gist.-->
<!--Consider adding configuration files with private information removed via Github Gist. -->


@@ -1,8 +0,0 @@
blank_issues_enabled: false
contact_links:
  - name: XRP Ledger Documentation
    url: https://xrpl.org/
    about: All things about XRPL
  - name: Security bug bounty program
    url: https://ripple.com/bug-bounty/
    about: Please report security-relevant bugs in our software here.


@@ -1,21 +0,0 @@
---
name: Feature Request
about: Suggest a new feature for the rippled project
title: "[Title with short description] (Version: [rippled version])"
labels: Feature Request
assignees: ''
---
<!-- Please search existing issues to avoid creating duplicates.-->
## Summary
<!-- Provide a summary of the feature request -->
## Motivation
<!-- Why do we need this feature?-->
## Solution
<!-- What is the solution?-->
## Paths Not Taken
<!-- What other alternatives have been considered?-->


@@ -1,34 +0,0 @@
name: build
inputs:
  generator:
    default: null
  configuration:
    required: true
  cmake-args:
    default: null
  cmake-target:
    default: all
# An implicit input is the environment variable `build_dir`.
runs:
  using: composite
  steps:
    - name: configure
      shell: bash
      run: |
        cd ${build_dir}
        cmake \
          ${{ inputs.generator && format('-G "{0}"', inputs.generator) || '' }} \
          -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
          -DCMAKE_BUILD_TYPE=${{ inputs.configuration }} \
          -Dtests=TRUE \
          -Dxrpld=TRUE \
          ${{ inputs.cmake-args }} \
          ..
    - name: build
      shell: bash
      run: |
        cmake \
          --build ${build_dir} \
          --config ${{ inputs.configuration }} \
          --parallel ${NUM_PROCESSORS:-$(nproc)} \
          --target ${{ inputs.cmake-target }}


@@ -1,60 +0,0 @@
name: dependencies
inputs:
  configuration:
    required: true
# An implicit input is the environment variable `build_dir`.
runs:
  using: composite
  steps:
    - name: unlock Conan
      shell: bash
      run: conan remove --locks
    - name: export custom recipes
      shell: bash
      run: |
        conan config set general.revisions_enabled=1
        conan export external/snappy snappy/1.1.10@
        conan export external/rocksdb rocksdb/6.29.5@
        conan export external/soci soci/4.0.3@
    - name: add Ripple Conan remote
      shell: bash
      run: |
        conan remote list
        conan remote remove ripple || true
        # Do not quote the URL. An empty string will be accepted (with
        # a non-fatal warning), but a missing argument will not.
        conan remote add ripple ${{ env.CONAN_URL }} --insert 0
    - name: try to authenticate to Ripple Conan remote
      id: remote
      shell: bash
      run: |
        # `conan user` implicitly uses the environment variables
        # CONAN_LOGIN_USERNAME_<REMOTE> and CONAN_PASSWORD_<REMOTE>.
        # https://docs.conan.io/1/reference/commands/misc/user.html#using-environment-variables
        # https://docs.conan.io/1/reference/env_vars.html#conan-login-username-conan-login-username-remote-name
        # https://docs.conan.io/1/reference/env_vars.html#conan-password-conan-password-remote-name
        echo outcome=$(conan user --remote ripple --password >&2 \
          && echo success || echo failure) | tee ${GITHUB_OUTPUT}
    - name: list missing binaries
      id: binaries
      shell: bash
      # Print the list of dependencies that would need to be built locally.
      # A non-empty list means we have "failed" to cache binaries remotely.
      run: |
        echo missing=$(conan info . --build missing --settings build_type=${{ inputs.configuration }} --json 2>/dev/null | grep '^\[') | tee ${GITHUB_OUTPUT}
    - name: install dependencies
      shell: bash
      run: |
        mkdir ${build_dir}
        cd ${build_dir}
        conan install \
          --output-folder . \
          --build missing \
          --options tests=True \
          --options xrpld=True \
          --settings build_type=${{ inputs.configuration }} \
          ..
    - name: upload dependencies to remote
      if: (steps.binaries.outputs.missing != '[]') && (steps.remote.outputs.outcome == 'success')
      shell: bash
      run: conan upload --remote ripple '*' --all --parallel --confirm


@@ -1,85 +0,0 @@
<!--
This PR template helps you to write a good pull request description.
Please feel free to include additional useful information even beyond what is requested below.
If your branch is on a personal fork and has a name that allows it to
run CI build/test jobs (e.g. "ci/foo"), remember to rename it BEFORE
opening the PR. This avoids unnecessary redundant test runs. Renaming
the branch after opening the PR will close the PR.
https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-branches-in-your-repository/renaming-a-branch
-->
## High Level Overview of Change
<!--
Please include a summary of the changes.
This may be a direct input to the release notes.
If too broad, please consider splitting into multiple PRs.
If a relevant task or issue, please link it here.
-->
### Context of Change
<!--
Please include the context of a change.
If a bug fix, when was the bug introduced? What was the behavior?
If a new feature, why was this architecture chosen? What were the alternatives?
If a refactor, how is this better than the previous implementation?
If there is a spec or design document for this feature, please link it here.
-->
### Type of Change
<!--
Please check [x] relevant options, delete irrelevant ones.
-->
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Refactor (non-breaking change that only restructures code)
- [ ] Performance (increase or change in throughput and/or latency)
- [ ] Tests (you added tests for code that already exists, or for your new feature included in this PR)
- [ ] Documentation update
- [ ] Chore (no impact to binary, e.g. `.gitignore`, formatting, dropping support for older tooling)
- [ ] Release
### API Impact
<!--
Please check [x] relevant options, delete irrelevant ones.
* If there is any impact to the public API methods (HTTP / WebSocket), please update https://github.com/xrplf/rippled/blob/develop/API-CHANGELOG.md
* Update API-CHANGELOG.md and add the change directly in this PR by pushing to your PR branch.
* libxrpl: See https://github.com/XRPLF/rippled/blob/develop/docs/build/depend.md
* Peer Protocol: See https://xrpl.org/peer-protocol.html
-->
- [ ] Public API: New feature (new methods and/or new fields)
- [ ] Public API: Breaking change (in general, breaking changes should only impact the next api_version)
- [ ] `libxrpl` change (any change that may affect `libxrpl` or dependents of `libxrpl`)
- [ ] Peer protocol change (must be backward compatible or bump the peer protocol version)
<!--
## Before / After
If relevant, use this section for an English description of the change at a technical level.
If this change affects an API, examples should be included here.
For performance-impacting changes, please provide these details:
1. Is this a new feature, bug fix, or improvement to existing functionality?
2. What behavior/functionality does the change impact?
3. In what processing can the impact be measured? Be as specific as possible - e.g. RPC client call, payment transaction that involves LOB, AMM, caching, DB operations, etc.
4. Does this change affect concurrent processing - e.g. does it involve acquiring locks, multi-threaded processing, or async processing?
-->
<!--
## Test Plan
If helpful, please describe the tests that you ran to verify your changes and provide instructions so that others can reproduce.
This section may not be needed if your change includes thoroughly commented unit tests.
-->
<!--
## Future Tasks
For future tasks related to PR.
-->


@@ -1,59 +0,0 @@
name: clang-format
on: [push, pull_request]
jobs:
  check:
    runs-on: ubuntu-24.04
    env:
      CLANG_VERSION: 18
    steps:
      - uses: actions/checkout@v4
      - name: Install clang-format
        run: |
          codename=$( lsb_release --codename --short )
          sudo tee /etc/apt/sources.list.d/llvm.list >/dev/null <<EOF
          deb http://apt.llvm.org/${codename}/ llvm-toolchain-${codename}-${CLANG_VERSION} main
          deb-src http://apt.llvm.org/${codename}/ llvm-toolchain-${codename}-${CLANG_VERSION} main
          EOF
          wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
          sudo apt-get update
          sudo apt-get install clang-format-${CLANG_VERSION}
      - name: Format first-party sources
        run: find include src -type f \( -name '*.cpp' -o -name '*.hpp' -o -name '*.h' -o -name '*.ipp' \) -exec clang-format-${CLANG_VERSION} -i {} +
      - name: Check for differences
        id: assert
        run: |
          set -o pipefail
          git diff --exit-code | tee "clang-format.patch"
      - name: Upload patch
        if: failure() && steps.assert.outcome == 'failure'
        uses: actions/upload-artifact@v3
        continue-on-error: true
        with:
          name: clang-format.patch
          if-no-files-found: ignore
          path: clang-format.patch
      - name: What happened?
        if: failure() && steps.assert.outcome == 'failure'
        env:
          PREAMBLE: |
            If you are reading this, you are looking at a failed Github Actions
            job. That means you pushed one or more files that did not conform
            to the formatting specified in .clang-format. That may be because
            you neglected to run 'git clang-format' or 'clang-format' before
            committing, or that your version of clang-format has an
            incompatibility with the one on this machine, which is:
          SUGGESTION: |
            To fix it, you can do one of two things:
            1. Download and apply the patch generated as an artifact of this
               job to your repo, commit, and push.
            2. Run 'git-clang-format --extensions cpp,h,hpp,ipp develop'
               in your repo, commit, and push.
        run: |
          echo "${PREAMBLE}"
          clang-format-${CLANG_VERSION} --version
          echo "${SUGGESTION}"
          exit 1


@@ -1,37 +0,0 @@
name: Build and publish Doxygen documentation
# To test this workflow, push your changes to your fork's `develop` branch.
on:
  push:
    branches:
      - develop
      - doxygen
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
jobs:
  job:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    container: rippleci/rippled-build-ubuntu:aaf5e3e
    steps:
      - name: checkout
        uses: actions/checkout@v4
      - name: check environment
        run: |
          echo ${PATH} | tr ':' '\n'
          cmake --version
          doxygen --version
          env | sort
      - name: build
        run: |
          mkdir build
          cd build
          cmake -Donly_docs=TRUE ..
          cmake --build . --target docs --parallel $(nproc)
      - name: publish
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: build/docs/html


@@ -1,103 +0,0 @@
name: instrumentation
on:
  pull_request:
  push:
    # If the branches list is ever changed, be sure to change it on all
    # build/test jobs (nix, macos, windows, instrumentation)
    branches:
      # Always build the package branches
      - develop
      - release
      - master
      # Branches that opt-in to running
      - 'ci/**'
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
jobs:
  # NOTE we are not using dependencies built inside nix because nix is lagging
  # with compiler versions. Instrumentation requires clang version 16 or later
  instrumentation-build:
    env:
      CLANG_RELEASE: 16
    strategy:
      fail-fast: false
    runs-on: [self-hosted, heavy]
    container: debian:bookworm
    steps:
      - name: install prerequisites
        env:
          DEBIAN_FRONTEND: noninteractive
        run: |
          apt-get update
          apt-get install --yes --no-install-recommends \
            clang-${CLANG_RELEASE} clang++-${CLANG_RELEASE} \
            python3-pip python-is-python3 make cmake git wget
          apt-get clean
          update-alternatives --install \
            /usr/bin/clang clang /usr/bin/clang-${CLANG_RELEASE} 100 \
            --slave /usr/bin/clang++ clang++ /usr/bin/clang++-${CLANG_RELEASE}
          update-alternatives --auto clang
          pip install --no-cache --break-system-packages "conan<2"
      - name: checkout
        uses: actions/checkout@v4
      - name: prepare environment
        run: |
          mkdir ${GITHUB_WORKSPACE}/.build
          echo "SOURCE_DIR=$GITHUB_WORKSPACE" >> $GITHUB_ENV
          echo "BUILD_DIR=$GITHUB_WORKSPACE/.build" >> $GITHUB_ENV
          echo "CC=/usr/bin/clang" >> $GITHUB_ENV
          echo "CXX=/usr/bin/clang++" >> $GITHUB_ENV
      - name: configure Conan
        run: |
          conan profile new --detect default
          conan profile update settings.compiler=clang default
          conan profile update settings.compiler.version=${CLANG_RELEASE} default
          conan profile update settings.compiler.libcxx=libstdc++11 default
          conan profile update settings.compiler.cppstd=20 default
          conan profile update options.rocksdb=False default
          conan profile update \
            'conf.tools.build:compiler_executables={"c": "/usr/bin/clang", "cpp": "/usr/bin/clang++"}' default
          conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_DISABLE_CONCEPTS"' default
          conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]' default
          conan export external/snappy snappy/1.1.10@
          conan export external/soci soci/4.0.3@
      - name: build dependencies
        run: |
          cd ${BUILD_DIR}
          conan install ${SOURCE_DIR} \
            --output-folder ${BUILD_DIR} \
            --install-folder ${BUILD_DIR} \
            --build missing \
            --settings build_type=Debug
      - name: build with instrumentation
        run: |
          cd ${BUILD_DIR}
          cmake -S ${SOURCE_DIR} -B ${BUILD_DIR} \
            -Dvoidstar=ON \
            -Dtests=ON \
            -Dxrpld=ON \
            -DCMAKE_BUILD_TYPE=Debug \
            -DSECP256K1_BUILD_BENCHMARK=OFF \
            -DSECP256K1_BUILD_TESTS=OFF \
            -DSECP256K1_BUILD_EXHAUSTIVE_TESTS=OFF \
            -DCMAKE_TOOLCHAIN_FILE=${BUILD_DIR}/build/generators/conan_toolchain.cmake
          cmake --build . --parallel $(nproc)
      - name: verify instrumentation enabled
        run: |
          cd ${BUILD_DIR}
          ./rippled --version | grep libvoidstar
      - name: run unit tests
        run: |
          cd ${BUILD_DIR}
          ./rippled -u --unittest-jobs $(( $(nproc)/4 ))


@@ -1,49 +0,0 @@
name: levelization
on: [push, pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    env:
      CLANG_VERSION: 10
    steps:
      - uses: actions/checkout@v4
      - name: Check levelization
        run: Builds/levelization/levelization.sh
      - name: Check for differences
        id: assert
        run: |
          set -o pipefail
          git diff --exit-code | tee "levelization.patch"
      - name: Upload patch
        if: failure() && steps.assert.outcome == 'failure'
        uses: actions/upload-artifact@v3
        continue-on-error: true
        with:
          name: levelization.patch
          if-no-files-found: ignore
          path: levelization.patch
      - name: What happened?
        if: failure() && steps.assert.outcome == 'failure'
        env:
          MESSAGE: |
            If you are reading this, you are looking at a failed Github
            Actions job. That means you changed the dependency relationships
            between the modules in rippled. That may be an improvement or a
            regression. This check doesn't judge.
            A rule of thumb, though, is that if your changes caused
            something to be removed from loops.txt, that's probably an
            improvement. If something was added, it's probably a regression.
            To fix it, you can do one of two things:
            1. Download and apply the patch generated as an artifact of this
               job to your repo, commit, and push.
            2. Run './Builds/levelization/levelization.sh' in your repo,
               commit, and push.
            See Builds/levelization/README.md for more info.
        run: |
          echo "${MESSAGE}"
          exit 1


@@ -1,88 +0,0 @@
name: Check libXRPL compatibility with Clio
env:
  CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
  CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
  CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
on:
  pull_request:
    paths:
      - 'src/libxrpl/protocol/BuildInfo.cpp'
      - '.github/workflows/libxrpl.yml'
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
jobs:
  publish:
    name: Publish libXRPL
    outputs:
      outcome: ${{ steps.upload.outputs.outcome }}
      version: ${{ steps.version.outputs.version }}
      channel: ${{ steps.channel.outputs.channel }}
    runs-on: [self-hosted, heavy]
    container: rippleci/rippled-build-ubuntu:aaf5e3e
    steps:
      - name: Wait for essential checks to succeed
        uses: lewagon/wait-on-check-action@v1.3.4
        with:
          ref: ${{ github.event.pull_request.head.sha || github.sha }}
          running-workflow-name: wait-for-check-regexp
          check-regexp: '(dependencies|test).*linux.*' # Ignore windows and mac tests but make sure linux passes
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          wait-interval: 10
      - name: Checkout
        uses: actions/checkout@v4
      - name: Generate channel
        id: channel
        shell: bash
        run: |
          echo channel="clio/pr_${{ github.event.pull_request.number }}" | tee ${GITHUB_OUTPUT}
      - name: Export new package
        shell: bash
        run: |
          conan export . ${{ steps.channel.outputs.channel }}
      - name: Add Ripple Conan remote
        shell: bash
        run: |
          conan remote list
          conan remote remove ripple || true
          # Do not quote the URL. An empty string will be accepted (with a non-fatal warning), but a missing argument will not.
          conan remote add ripple ${{ env.CONAN_URL }} --insert 0
      - name: Parse new version
        id: version
        shell: bash
        run: |
          echo version="$(cat src/libxrpl/protocol/BuildInfo.cpp | grep "versionString =" \
            | awk -F '"' '{print $2}')" | tee ${GITHUB_OUTPUT}
      - name: Try to authenticate to Ripple Conan remote
        id: remote
        shell: bash
        run: |
          # `conan user` implicitly uses the environment variables CONAN_LOGIN_USERNAME_<REMOTE> and CONAN_PASSWORD_<REMOTE>.
          # https://docs.conan.io/1/reference/commands/misc/user.html#using-environment-variables
          # https://docs.conan.io/1/reference/env_vars.html#conan-login-username-conan-login-username-remote-name
          # https://docs.conan.io/1/reference/env_vars.html#conan-password-conan-password-remote-name
          echo outcome=$(conan user --remote ripple --password >&2 \
            && echo success || echo failure) | tee ${GITHUB_OUTPUT}
      - name: Upload new package
        id: upload
        if: (steps.remote.outputs.outcome == 'success')
        shell: bash
        run: |
          echo "conan upload version ${{ steps.version.outputs.version }} on channel ${{ steps.channel.outputs.channel }}"
          echo outcome=$(conan upload xrpl/${{ steps.version.outputs.version }}@${{ steps.channel.outputs.channel }} --remote ripple --confirm >&2 \
            && echo success || echo failure) | tee ${GITHUB_OUTPUT}
  notify_clio:
    name: Notify Clio
    runs-on: ubuntu-latest
    needs: publish
    env:
      GH_TOKEN: ${{ secrets.CLIO_NOTIFY_TOKEN }}
    steps:
      - name: Notify Clio about new version
        if: (needs.publish.outputs.outcome == 'success')
        shell: bash
        run: |
          gh api --method POST -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" \
            /repos/xrplf/clio/dispatches -f "event_type=check_libxrpl" \
            -F "client_payload[version]=${{ needs.publish.outputs.version }}@${{ needs.publish.outputs.channel }}"


@@ -1,71 +0,0 @@
name: macos
on:
  pull_request:
  push:
    # If the branches list is ever changed, be sure to change it on all
    # build/test jobs (nix, macos, windows, instrumentation)
    branches:
      # Always build the package branches
      - develop
      - release
      - master
      # Branches that opt-in to running
      - 'ci/**'
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
jobs:
  test:
    strategy:
      matrix:
        platform:
          - macos
        generator:
          - Ninja
        configuration:
          - Release
    runs-on: [self-hosted, macOS]
    env:
      # The `build` action requires these variables.
      build_dir: .build
      NUM_PROCESSORS: 12
    steps:
      - name: checkout
        uses: actions/checkout@v4
      - name: install Conan
        run: |
          brew install conan@1
          echo '/opt/homebrew/opt/conan@1/bin' >> $GITHUB_PATH
      - name: install Ninja
        if: matrix.generator == 'Ninja'
        run: brew install ninja
      - name: check environment
        run: |
          env | sort
          echo ${PATH} | tr ':' '\n'
          python --version
          conan --version
          cmake --version
      - name: configure Conan
        run: |
          conan profile new default --detect || true
          conan profile update settings.compiler.cppstd=20 default
          conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]' default
      - name: build dependencies
        uses: ./.github/actions/dependencies
        env:
          CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
          CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
          CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
        with:
          configuration: ${{ matrix.configuration }}
      - name: build
        uses: ./.github/actions/build
        with:
          generator: ${{ matrix.generator }}
          configuration: ${{ matrix.configuration }}
      - name: test
        run: |
          ${build_dir}/rippled --unittest


@@ -1,284 +0,0 @@
name: nix
on:
  pull_request:
  push:
    # If the branches list is ever changed, be sure to change it on all
    # build/test jobs (nix, macos, windows, instrumentation)
    branches:
      # Always build the package branches
      - develop
      - release
      - master
      # Branches that opt-in to running
      - 'ci/**'
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
# This workflow has two job matrixes.
# They can be considered phases because the second matrix ("test")
# depends on the first ("dependencies").
#
# The first phase has a job in the matrix for each combination of
# variables that affects dependency ABI:
# platform, compiler, and configuration.
# It creates a GitHub artifact holding the Conan profile,
# and builds and caches binaries for all the dependencies.
# If an Artifactory remote is configured, they are cached there.
# If not, they are added to the GitHub artifact.
# GitHub's "cache" action has a size limit (10 GB) that is too small
# to hold the binaries if they are built locally.
# We must use the "{upload,download}-artifact" actions instead.
#
# The second phase has a job in the matrix for each test configuration.
# It installs dependency binaries from the cache, whichever was used,
# and builds and tests rippled.
jobs:
  dependencies:
    strategy:
      fail-fast: false
      matrix:
        platform:
          - linux
        compiler:
          - gcc
          - clang
        configuration:
          - Debug
          - Release
        include:
          - compiler: gcc
            profile:
              version: 11
              cc: /usr/bin/gcc
              cxx: /usr/bin/g++
          - compiler: clang
            profile:
              version: 14
              cc: /usr/bin/clang-14
              cxx: /usr/bin/clang++-14
    runs-on: [self-hosted, heavy]
    container: rippleci/rippled-build-ubuntu:aaf5e3e
    env:
      build_dir: .build
    steps:
      - name: checkout
        uses: actions/checkout@v4
      - name: check environment
        run: |
          echo ${PATH} | tr ':' '\n'
          conan --version
          cmake --version
          env | sort
      - name: configure Conan
        run: |
          conan profile new default --detect
          conan profile update settings.compiler.cppstd=20 default
          conan profile update settings.compiler=${{ matrix.compiler }} default
          conan profile update settings.compiler.version=${{ matrix.profile.version }} default
          conan profile update settings.compiler.libcxx=libstdc++11 default
          conan profile update env.CC=${{ matrix.profile.cc }} default
          conan profile update env.CXX=${{ matrix.profile.cxx }} default
          conan profile update conf.tools.build:compiler_executables='{"c": "${{ matrix.profile.cc }}", "cpp": "${{ matrix.profile.cxx }}"}' default
      - name: archive profile
        # Create this archive before dependencies are added to the local cache.
        run: tar -czf conan.tar -C ~/.conan .
      - name: build dependencies
        uses: ./.github/actions/dependencies
        env:
          CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
          CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
          CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
        with:
          configuration: ${{ matrix.configuration }}
      - name: upload archive
        uses: actions/upload-artifact@v3
        with:
          name: ${{ matrix.platform }}-${{ matrix.compiler }}-${{ matrix.configuration }}
          path: conan.tar
          if-no-files-found: error
  test:
    strategy:
      fail-fast: false
      matrix:
        platform:
          - linux
        compiler:
          - gcc
          - clang
        configuration:
          - Debug
          - Release
        cmake-args:
          -
          - "-Dunity=ON"
    needs: dependencies
    runs-on: [self-hosted, heavy]
    container: rippleci/rippled-build-ubuntu:aaf5e3e
    env:
      build_dir: .build
    steps:
      - name: download cache
        uses: actions/download-artifact@v3
        with:
          name: ${{ matrix.platform }}-${{ matrix.compiler }}-${{ matrix.configuration }}
      - name: extract cache
        run: |
          mkdir -p ~/.conan
          tar -xzf conan.tar -C ~/.conan
      - name: check environment
        run: |
          env | sort
          echo ${PATH} | tr ':' '\n'
          conan --version
          cmake --version
      - name: checkout
        uses: actions/checkout@v4
      - name: dependencies
        uses: ./.github/actions/dependencies
        env:
          CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
        with:
          configuration: ${{ matrix.configuration }}
      - name: build
        uses: ./.github/actions/build
        with:
          generator: Ninja
          configuration: ${{ matrix.configuration }}
          cmake-args: ${{ matrix.cmake-args }}
      - name: test
        run: |
          ${build_dir}/rippled --unittest --unittest-jobs $(nproc)
  coverage:
    strategy:
      fail-fast: false
      matrix:
        platform:
          - linux
        compiler:
          - gcc
        configuration:
          - Debug
    needs: dependencies
    runs-on: [self-hosted, heavy]
    container: rippleci/rippled-build-ubuntu:aaf5e3e
    env:
      build_dir: .build
    steps:
      - name: download cache
        uses: actions/download-artifact@v3
        with:
          name: ${{ matrix.platform }}-${{ matrix.compiler }}-${{ matrix.configuration }}
      - name: extract cache
        run: |
          mkdir -p ~/.conan
          tar -xzf conan.tar -C ~/.conan
      - name: install gcovr
        run: pip install "gcovr>=7,<8"
      - name: check environment
        run: |
          echo ${PATH} | tr ':' '\n'
          conan --version
          cmake --version
          gcovr --version
          env | sort
          ls ~/.conan
      - name: checkout
        uses: actions/checkout@v4
      - name: dependencies
        uses: ./.github/actions/dependencies
        env:
          CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
        with:
          configuration: ${{ matrix.configuration }}
      - name: build
        uses: ./.github/actions/build
        with:
          generator: Ninja
          configuration: ${{ matrix.configuration }}
          cmake-args: >-
            -Dcoverage=ON
            -Dcoverage_format=xml
            -DCODE_COVERAGE_VERBOSE=ON
            -DCMAKE_CXX_FLAGS="-O0"
            -DCMAKE_C_FLAGS="-O0"
          cmake-target: coverage
      - name: move coverage report
        shell: bash
        run: |
          mv "${build_dir}/coverage.xml" ./
      - name: archive coverage report
        uses: actions/upload-artifact@v3
        with:
          name: coverage.xml
          path: coverage.xml
          retention-days: 30
      - name: upload coverage report
        uses: wandalen/wretry.action@v1.4.10
        with:
          action: codecov/codecov-action@v4.5.0
          with: |
            files: coverage.xml
            fail_ci_if_error: true
            disable_search: true
            verbose: true
            plugin: noop
            token: ${{ secrets.CODECOV_TOKEN }}
          attempt_limit: 5
          attempt_delay: 210000 # in milliseconds
  conan:
    needs: dependencies
    runs-on: [self-hosted, heavy]
    container: rippleci/rippled-build-ubuntu:aaf5e3e
    env:
      build_dir: .build
      configuration: Release
    steps:
      - name: download cache
        uses: actions/download-artifact@v3
        with:
          name: linux-gcc-${{ env.configuration }}
      - name: extract cache
        run: |
          mkdir -p ~/.conan
          tar -xzf conan.tar -C ~/.conan
      - name: check environment
        run: |
          env | sort
          echo ${PATH} | tr ':' '\n'
          conan --version
          cmake --version
      - name: checkout
        uses: actions/checkout@v4
      - name: dependencies
        uses: ./.github/actions/dependencies
        env:
          CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
        with:
          configuration: ${{ env.configuration }}
      - name: export
        run: |
          version=$(conan inspect --raw version .)
          reference="xrpl/${version}@local/test"
          conan remove -f ${reference} || true
          conan export . local/test
          echo "reference=${reference}" >> "${GITHUB_ENV}"
      - name: build
        run: |
          cd examples/example
          mkdir ${build_dir}
          cd ${build_dir}
          conan install .. --output-folder . \
            --require-override ${reference} --build missing
          cmake .. \
            -DCMAKE_TOOLCHAIN_FILE:FILEPATH=./build/${configuration}/generators/conan_toolchain.cmake \
            -DCMAKE_BUILD_TYPE=${configuration}
          cmake --build .
          ./example | grep '^[[:digit:]]\+\.[[:digit:]]\+\.[[:digit:]]\+'


@@ -1,91 +0,0 @@
name: windows
on:
  pull_request:
  push:
    # If the branches list is ever changed, be sure to change it on all
    # build/test jobs (nix, macos, windows, instrumentation)
    branches:
      # Always build the package branches
      - develop
      - release
      - master
      # Branches that opt-in to running
      - 'ci/**'
# https://docs.github.com/en/actions/using-jobs/using-concurrency
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
jobs:
  test:
    strategy:
      fail-fast: false
      matrix:
        generator:
          - Visual Studio 16 2019
        configuration:
          - Release
          # Github hosted runners tend to hang when running Debug unit tests.
          # Instead of trying to work around it, disable the Debug job until
          # something beefier (i.e. a heavy self-hosted runner) becomes
          # available.
          # - Debug
    runs-on: windows-2019
    env:
      build_dir: .build
    steps:
      - name: checkout
        uses: actions/checkout@v4
      - name: choose Python
        uses: actions/setup-python@v5
        with:
          python-version: 3.9
      - name: learn Python cache directory
        id: pip-cache
        shell: bash
        run: |
          python -m pip install --upgrade pip
          echo "dir=$(pip cache dir)" | tee ${GITHUB_OUTPUT}
      - name: restore Python cache directory
        uses: actions/cache@v4
        with:
          path: ${{ steps.pip-cache.outputs.dir }}
          key: ${{ runner.os }}-${{ hashFiles('.github/workflows/windows.yml') }}
      - name: install Conan
        run: pip install wheel 'conan<2'
      - name: check environment
        run: |
          dir env:
          $env:PATH -split ';'
          python --version
          conan --version
          cmake --version
      - name: configure Conan
        shell: bash
        run: |
          conan profile new default --detect
          conan profile update settings.compiler.cppstd=20 default
          conan profile update settings.compiler.runtime=MT${{ matrix.configuration == 'Debug' && 'd' || '' }} default
      - name: build dependencies
        uses: ./.github/actions/dependencies
        env:
          CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
          CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
          CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
        with:
          configuration: ${{ matrix.configuration }}
      - name: build
        uses: ./.github/actions/build
        with:
          generator: '${{ matrix.generator }}'
          configuration: ${{ matrix.configuration }}
          # Hard code for now. Move to the matrix if varied options are needed
          cmake-args: '-Dassert=ON -Dreporting=OFF -Dunity=ON'
          cmake-target: install
      - name: test
        shell: bash
        run: |
          ${build_dir}/${{ matrix.configuration }}/rippled --unittest --unittest-jobs $(nproc)

.gitignore

@@ -1,9 +1,5 @@
# .gitignore
bin/boostbook_catalog.xml
bin/config.log
bin/project-cache.jam
# Ignore vim swap files.
*.swp
@@ -21,32 +17,19 @@ bin/project-cache.jam
# Ignore object files.
*.o
.nih_c
build
tags
TAGS
GTAGS
GRTAGS
GPATH
bin/rippled
Debug/*.*
Release/*.*
# Ignore coverage files.
*.gcno
*.gcda
*.gcov
# Levelization checking
Builds/levelization/results/rawincludes.txt
Builds/levelization/results/paths.txt
Builds/levelization/results/includes/
Builds/levelization/results/includedby/
# Ignore locally installed node_modules
/node_modules
# Ignore tmp directory.
tmp
# Ignore database directory.
db/
db/*.db
db/*.db-*
@@ -56,15 +39,15 @@ debug_log.txt
# Ignore customized configs
rippled.cfg
validators.txt
test/config.js
# Doxygen generated documentation output
HtmlDocumentation
docs/html_doc
# Xcode user-specific project settings
# Xcode
.DS_Store
/build/
*/build/*
*.pbxuser
!default.pbxuser
*.mode1v3
@@ -83,32 +66,12 @@ DerivedData
# Intel Parallel Studio 2013 XE
My Amplifier XE Results - RippleD
# KeyvaDB files
*.key
*.val
# Compiler intermediate output
/out.txt
# Build Log
rippled-build.log
# Profiling data
gmon.out
Builds/VisualStudio2015/*.db
Builds/VisualStudio2015/*.user
Builds/VisualStudio2015/*.opendb
Builds/VisualStudio2015/*.sdf
# MSVC
*.pdb
.vs/
CMakeSettings.json
compile_commands.json
.clangd
packages
pkg_out
pkg
CMakeUserPresets.json
bld.rippled/
.vscode
# Suggested in-tree build directory
/.build/


@@ -1,6 +0,0 @@
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/pre-commit/mirrors-clang-format
    rev: v18.1.3
    hooks:
      - id: clang-format

.travis.yml

@@ -0,0 +1,53 @@
language: cpp
compiler:
  - clang
  - gcc
before_install:
  - sudo apt-get update -qq
  - sudo apt-get install -qq python-software-properties
  - sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
  - sudo add-apt-repository -y ppa:boost-latest/ppa
  - sudo apt-get update -qq
  - sudo apt-get install -qq g++-4.8
  - sudo apt-get install -qq libboost1.55-all-dev
  # We want debug symbols for boost as we install gdb later
  - sudo apt-get install -qq libboost1.55-dbg
  - sudo apt-get install -qq mlocate
  - sudo updatedb
  - sudo locate libboost | grep /lib | grep -e ".a$"
  - sudo apt-get install -qq protobuf-compiler libprotobuf-dev libssl-dev exuberant-ctags
  # We need gcc >= 4.8 for some c++11 features
  - sudo apt-get install -qq gcc-4.8
  - sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 40 --slave /usr/bin/g++ g++ /usr/bin/g++-4.8
  - sudo update-alternatives --set gcc /usr/bin/gcc-4.8
  # Stuff is gold. Nuff said ;)
  - sudo apt-get -y install binutils-gold
  # We can get a backtrace if the guy crashes
  - sudo apt-get -y install gdb
  # What versions are we ACTUALLY running?
  - g++ -v
  - clang -v
script:
  # Set so any failing command will abort the build
  - set -e
  # If only we could do -j12 ;)
  - scons
  # See what we've actually built
  - ldd ./build/rippled
  # Run unittests (under gdb)
  - | # create gdb script
    echo "set env MALLOC_CHECK_=3" > script.gdb
    echo "run" >> script.gdb
    echo "backtrace full" >> script.gdb
    # gdb --help
  - cat script.gdb | gdb --ex 'set print thread-events off' --return-child-result --args ./build/rippled --unittest
  # Run integration tests
  - npm install
  - npm test
notifications:
  email:
    false
  irc:
    channels:
      - "chat.freenode.net#ripple-dev"


@@ -1,201 +0,0 @@
# API Changelog
This changelog is intended to list all updates to the [public API methods](https://xrpl.org/public-api-methods.html).
For info about how [API versioning](https://xrpl.org/request-formatting.html#api-versioning) works, including examples, please view the [XLS-22d spec](https://github.com/XRPLF/XRPL-Standards/discussions/54). For details about the implementation of API versioning, view the [implementation PR](https://github.com/XRPLF/rippled/pull/3155). API versioning ensures existing integrations and users continue to receive existing behavior, while those that request a higher API version will experience new behavior.
The API version controls the API behavior you see. This includes what properties you see in responses, what parameters you're permitted to send in requests, and so on. You specify the API version in each of your requests. When a breaking change is introduced to the `rippled` API, a new version is released. To avoid breaking your code, you should set (or increase) your version when you're ready to upgrade.
For a log of breaking changes, see the **API Version [number]** headings. In general, breaking changes are associated with a particular API Version number. For non-breaking changes, scroll to the **XRP Ledger version [x.y.z]** headings. Non-breaking changes are associated with a particular XRP Ledger (`rippled`) release.
## API Version 2
API version 2 is available in `rippled` version 2.0.0 and later. To use this API, clients specify `"api_version" : 2` in each request.
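To make that concrete, here is a minimal sketch of a JSON-RPC request that pins API version 2. The server URL and port are assumptions (the default local JSON-RPC port from a stock `rippled.cfg`), not something this changelog specifies:
```
# Sketch: pin api_version 2 in a JSON-RPC request to a local rippled.
# localhost:5005 is an assumed default, not part of this changelog.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{"method": "server_info", "params": [{"api_version": 2}]}'
```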
#### Removed methods
In API version 2, the following deprecated methods are no longer available: (https://github.com/XRPLF/rippled/pull/4759)
- `tx_history` - Instead, use other methods such as `account_tx` or `ledger` with the `transactions` field set to `true`.
- `ledger_header` - Instead, use the `ledger` method.
#### Modifications to JSON transaction element in V2
In API version 2, JSON elements for transaction output have been changed and made consistent for all methods which output transactions. (https://github.com/XRPLF/rippled/pull/4775)
This helps to unify the JSON serialization format of transactions. (https://github.com/XRPLF/clio/issues/722, https://github.com/XRPLF/rippled/issues/4727)
- JSON transaction element is named `tx_json`
- Binary transaction element is named `tx_blob`
- JSON transaction metadata element is named `meta`
- Binary transaction metadata element is named `meta_blob`
Additionally, these elements are now consistently available next to `tx_json` (i.e. sibling elements), where possible:
- `hash` - Transaction ID. This data was stored inside transaction output in API version 1, but in API version 2 is a sibling element.
- `ledger_index` - Ledger index (only set on validated ledgers)
- `ledger_hash` - Ledger hash (only set on closed or validated ledgers)
- `close_time_iso` - Ledger close time expressed in ISO 8601 time format (only set on validated ledgers)
- `validated` - Bool element set to `true` if the transaction is in a validated ledger, otherwise `false`
This change affects the following methods:
- `tx` - Transaction data moved into element `tx_json` (was inline inside `result`) or, if binary output was requested, moved from `tx` to `tx_blob`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `account_tx` - Renamed transaction element from `tx` to `tx_json`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `transaction_entry` - Renamed transaction metadata element from `metadata` to `meta`. Changed location of `hash` and added new elements
- `subscribe` - Renamed transaction element from `transaction` to `tx_json`. Changed location of `hash` and added new elements
- `sign`, `sign_for`, `submit` and `submit_multisigned` - Changed location of `hash` element.
#### Modification to `Payment` transaction JSON schema
When reading Payments, the `Amount` field should generally **not** be used. Instead, use [delivered_amount](https://xrpl.org/partial-payments.html#the-delivered_amount-field) to see the amount that the Payment delivered. To clarify its meaning, the `Amount` field is being renamed to `DeliverMax`. (https://github.com/XRPLF/rippled/pull/4733)
- In `Payment` transaction type, JSON RPC field `Amount` is renamed to `DeliverMax`. To enable smooth client transition, `Amount` is still handled, as described below: (https://github.com/XRPLF/rippled/pull/4733)
- On JSON RPC input (e.g. `submit_multisigned` etc. methods), `Amount` is recognized as an alias to `DeliverMax` for both API version 1 and version 2 clients.
- On JSON RPC input, submitting both `Amount` and `DeliverMax` fields is allowed _only_ if they are identical; otherwise such input is rejected with `rpcINVALID_PARAMS` error.
- On JSON RPC output (e.g. `subscribe`, `account_tx` etc. methods), `DeliverMax` is present in both API version 1 and version 2.
- On JSON RPC output, `Amount` is only present in API version 1 and _not_ in version 2.
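To illustrate the rules above, here is a hedged sketch of a v2 `sign` request describing a Payment with `DeliverMax` where a v1 client would have written `Amount`; the URL, addresses, amount, and secret are placeholders, not values from this changelog:
```
# Sketch: a Payment expressed for API version 2, using DeliverMax in
# place of Amount. All values below are placeholders.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{"method": "sign", "params": [{
        "api_version": 2,
        "secret": "<placeholder>",
        "tx_json": {
          "TransactionType": "Payment",
          "Account": "<sender address>",
          "Destination": "<destination address>",
          "DeliverMax": "1000000"
        }
      }]}'
```
Per the rules above, sending both `Amount` and `DeliverMax` in such a request is rejected with `rpcINVALID_PARAMS` unless the two values are identical.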
#### Modifications to account_info response
- `signer_lists` is returned in the root of the response. (In API version 1, it was nested under `account_data`.) (https://github.com/XRPLF/rippled/pull/3770)
- When using an invalid `signer_lists` value, the API now returns an "invalidParams" error. (https://github.com/XRPLF/rippled/pull/4585)
- (`signer_lists` must be a boolean. In API version 1, strings were accepted and may return a normal response - i.e. as if `signer_lists` were `true`.)
#### Modifications to [account_tx](https://xrpl.org/account_tx.html#account_tx) response
- Using `ledger_index_min`, `ledger_index_max`, and `ledger_index` returns `invalidParams` because if you use `ledger_index_min` or `ledger_index_max`, then it does not make sense to also specify `ledger_index`. In API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4571)
- The same applies for `ledger_index_min`, `ledger_index_max`, and `ledger_hash`. (https://github.com/XRPLF/rippled/issues/4545#issuecomment-1565065579)
- Using a `ledger_index_min` or `ledger_index_max` beyond the range of ledgers that the server has:
- returns `lgrIdxMalformed` in API version 2. Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/issues/4288)
- Attempting to use a non-boolean value (such as a string) for the `binary` or `forward` parameters returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4620)
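To illustrate the first rule above, the following sketch combines `ledger_index_min`, `ledger_index_max`, and `ledger_index` in one `account_tx` call; under API version 2 this should come back as `invalidParams` (the URL and account are placeholders):
```
# Sketch: an account_tx request that mixes ledger_index with
# ledger_index_min/ledger_index_max, which API version 2 rejects.
curl -s -X POST http://localhost:5005/ \
  -H 'Content-Type: application/json' \
  -d '{"method": "account_tx", "params": [{
        "api_version": 2,
        "account": "<account address>",
        "ledger_index_min": -1,
        "ledger_index_max": -1,
        "ledger_index": "validated"
      }]}'
```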
#### Modifications to [noripple_check](https://xrpl.org/noripple_check.html#noripple_check) response
- Attempting to use a non-boolean value (such as a string) for the `transactions` parameter returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4620)
## API Version 1
This version is supported by all `rippled` versions. For WebSocket and HTTP JSON-RPC requests, it is currently the default API version used when no `api_version` is specified.
The [commandline](https://xrpl.org/docs/references/http-websocket-apis/api-conventions/request-formatting/#commandline-format) always uses the latest API version. The command line is intended for ad-hoc usage by humans, not programs or automated scripts. The command line is not meant for use in production code.
### Inconsistency: server_info - network_id
The `network_id` field was added in the `server_info` response in version 1.5.0 (2019), but it is not returned in [reporting mode](https://xrpl.org/rippled-server-modes.html#reporting-mode). However, use of reporting mode is now discouraged, in favor of using [Clio](https://github.com/XRPLF/clio) instead.
## XRP Ledger server version 2.4.0
### Addition in 2.4
- `ledger_entry`: `state` is added as an alias for `ripple_state`.
## XRP Ledger server version 2.3.0
### Breaking change in 2.3
- `book_changes`: If the requested ledger version is not available on this node, a `ledgerNotFound` error is returned and the node does not attempt to acquire the ledger from the p2p network (as with other non-admin RPCs).
Admins can still attempt to retrieve old ledgers with the `ledger_request` RPC.
### Addition in 2.3
- `book_changes`: Returns a `validated` field in its response, which was missing in prior versions.
The following additions are non-breaking (because they are purely additive).
- `server_definitions`: A new RPC that generates a `definitions.json`-like output that can be used in XRPL libraries.
- In `Payment` transactions, `DeliverMax` has been added. This is a replacement for the `Amount` field, which should not be used. Typically, the `delivered_amount` (in transaction metadata) should be used. To ease the transition, `DeliverMax` is present regardless of API version, since adding a field is non-breaking.
- API version 2 has been moved from beta to supported, meaning that it is generally available (regardless of the `beta_rpc_api` setting).
## XRP Ledger server version 2.2.0
The following is a non-breaking addition to the API.
- The `feature` method now has a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781))
## XRP Ledger server version 1.12.0
[Version 1.12.0](https://github.com/XRPLF/rippled/releases/tag/1.12.0) was released on Sep 6, 2023. The following additions are non-breaking (because they are purely additive).
- `server_info`: Added `ports`, an array which advertises the RPC and WebSocket ports. This information is also included in the `/crawl` endpoint (which calls `server_info` internally). `grpc` and `peer` ports are also included. (https://github.com/XRPLF/rippled/pull/4427)
- `ports` contains objects, each containing a `port` for the listening port (a number string), and a `protocol` array listing the supported protocols on that port.
- This allows crawlers to build a more detailed topology without needing to port-scan nodes.
- (For peers and other non-admin clients, the info about admin ports is excluded.)
- Clawback: The following additions are gated by the Clawback amendment (`featureClawback`). (https://github.com/XRPLF/rippled/pull/4553)
- Adds an [AccountRoot flag](https://xrpl.org/accountroot.html#accountroot-flags) called `lsfAllowTrustLineClawback` (https://github.com/XRPLF/rippled/pull/4617)
- Adds the corresponding `asfAllowTrustLineClawback` [AccountSet Flag](https://xrpl.org/accountset.html#accountset-flags) as well.
- Clawback is disabled by default, so if an issuer desires the ability to claw back funds, they must use an `AccountSet` transaction to set the AllowTrustLineClawback flag. They must do this before creating any trust lines, offers, escrows, payment channels, or checks.
- Adds the [Clawback transaction type](https://github.com/XRPLF/XRPL-Standards/blob/master/XLS-39d-clawback/README.md#331-clawback-transaction), containing these fields:
- `Account`: The issuer of the asset being clawed back. Must also be the sender of the transaction.
- `Amount`: The amount being clawed back, with the `Amount.issuer` being the token holder's address.
- Adds [AMM](https://github.com/XRPLF/XRPL-Standards/discussions/78) ([#4294](https://github.com/XRPLF/rippled/pull/4294), [#4626](https://github.com/XRPLF/rippled/pull/4626)) feature:
- Adds `amm_info` API to retrieve AMM information for a given tokens pair.
- Adds `AMMCreate` transaction type to create `AMM` instance.
- Adds `AMMDeposit` transaction type to deposit funds into `AMM` instance.
- Adds `AMMWithdraw` transaction type to withdraw funds from `AMM` instance.
- Adds `AMMVote` transaction type to vote for the trading fee of `AMM` instance.
- Adds `AMMBid` transaction type to bid for the Auction Slot of `AMM` instance.
- Adds `AMMDelete` transaction type to delete `AMM` instance.
- Adds `sfAMMID` to `AccountRoot` to indicate that the account is `AMM`'s account. `AMMID` is used to fetch `ltAMM`.
- Adds `lsfAMMNode` `TrustLine` flag to indicate that one side of the `TrustLine` is `AMM` account.
- Adds `tfLPToken`, `tfSingleAsset`, `tfTwoAsset`, `tfOneAssetLPToken`, `tfLimitLPToken`, `tfTwoAssetIfEmpty`,
`tfWithdrawAll`, `tfOneAssetWithdrawAll` which allow a trader to specify different fields combination
for `AMMDeposit` and `AMMWithdraw` transactions.
- Adds new transaction result codes:
- tecUNFUNDED_AMM: insufficient balance to fund AMM. The account does not have funds for liquidity provision.
- tecAMM_BALANCE: AMM has invalid balance. Calculated balances greater than the current pool balances.
- tecAMM_FAILED: AMM transaction failed. Fails due to a processing failure.
- tecAMM_INVALID_TOKENS: AMM invalid LP tokens. Invalid input values, format, or calculated values.
- tecAMM_EMPTY: AMM is in empty state. Transaction requires AMM in non-empty state (LP tokens > 0).
- tecAMM_NOT_EMPTY: AMM is not in empty state. Transaction requires AMM in empty state (LP tokens == 0).
- tecAMM_ACCOUNT: AMM account. Clawback of AMM account.
- tecINCOMPLETE: Some work was completed, but more submissions required to finish. AMMDelete partially deletes the trustlines.
## XRP Ledger server version 1.11.0
[Version 1.11.0](https://github.com/XRPLF/rippled/releases/tag/1.11.0) was released on Jun 20, 2023.
### Breaking changes in 1.11
- Added the ability to mark amendments as obsolete. For the `feature` admin API, there is a new possible value for the `vetoed` field. (https://github.com/XRPLF/rippled/pull/4291)
- The value of `vetoed` can now be `true`, `false`, or `"Obsolete"`.
- Removed the acceptance of seeds or public keys in place of account addresses. (https://github.com/XRPLF/rippled/pull/4404)
- This simplifies the API and encourages better security practices (i.e. seeds should never be sent over the network).
- For the `ledger_data` method, when all entries are filtered out, the `state` field of the response is now an empty list (in other words, an empty array, `[]`). (Previously, it would return `null`.) While this is technically a breaking change, the new behavior is consistent with the documentation, so this is considered only a bug fix. (https://github.com/XRPLF/rippled/pull/4398)
- If and when the `fixNFTokenRemint` amendment activates, there will be a new AccountRoot field, `FirstNFTSequence`. This field is set to the current account sequence when the account issues their first NFT. If an account has not issued any NFTs, then the field is not set. ([#4406](https://github.com/XRPLF/rippled/pull/4406))
- There is a new account deletion restriction: an account can only be deleted if `FirstNFTSequence` + `MintedNFTokens` + `256` is less than the current ledger sequence.
- This is potentially a breaking change if clients have logic for determining whether an account can be deleted.
- NetworkID
- For sidechains and networks with a network ID greater than 1024, there is a new [transaction common field](https://xrpl.org/transaction-common-fields.html), `NetworkID`. (https://github.com/XRPLF/rippled/pull/4370)
- This field helps to prevent replay attacks and is now required for chains whose network ID is 1025 or higher.
- The field must be omitted for Mainnet, so there is no change for Mainnet users.
- There are three new local error codes:
- `telNETWORK_ID_MAKES_TX_NON_CANONICAL`: a `NetworkID` is present but the chain's network ID is less than 1025. Remove the field from the transaction, and try again.
- `telREQUIRES_NETWORK_ID`: a `NetworkID` is required, but is not present. Add the field to the transaction, and try again.
- `telWRONG_NETWORK`: a `NetworkID` is specified, but it is for a different network. Submit the transaction to a different server which is connected to the correct network.
### Additions and bug fixes in 1.11
- Added `nftoken_id`, `nftoken_ids` and `offer_id` meta fields into NFT `tx` and `account_tx` responses. (https://github.com/XRPLF/rippled/pull/4447)
- Added an `account_flags` object to the `account_info` method response. (https://github.com/XRPLF/rippled/pull/4459)
- Added `NFTokenPages` to the `account_objects` RPC. (https://github.com/XRPLF/rippled/pull/4352)
- Fixed: `marker` returned from the `account_lines` command would not work on subsequent commands. (https://github.com/XRPLF/rippled/pull/4361)
## XRP Ledger server version 1.10.0
[Version 1.10.0](https://github.com/XRPLF/rippled/releases/tag/1.10.0)
was released on Mar 14, 2023.
### Breaking changes in 1.10
- If the `XRPFees` feature is enabled, the `fee_ref` field will be
removed from the [ledger subscription stream](https://xrpl.org/subscribe.html#ledger-stream), because it will no longer
have any meaning.
# Unit tests for API changes
The following information is useful to developers contributing to this project:
The purpose of unit tests is to catch bugs and prevent regressions. In general, it often makes sense to create a test function when there is a breaking change to the API. For APIs that have changed in a new API version, the tests should be modified so that both the prior version and the new version are properly tested.
To take one example: for `account_info` version 1, WebSocket and JSON-RPC behavior should be tested. The latest API version, i.e. API version 2, should be tested over WebSocket, JSON-RPC, and command line.

BUILD.md

@@ -1,478 +0,0 @@
| :warning: **WARNING** :warning:
|---|
| These instructions assume you have a C++ development environment ready with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up on Linux, macOS, or Windows, [see this guide](./docs/build/environment.md). |
> These instructions also assume a basic familiarity with Conan and CMake.
> If you are unfamiliar with Conan,
> you can read our [crash course](./docs/build/conan.md)
> or the official [Getting Started][3] walkthrough.
## Branches
For a stable release, choose the `master` branch or one of the [tagged
releases](https://github.com/ripple/rippled/releases).
```
git checkout master
```
For the latest release candidate, choose the `release` branch.
```
git checkout release
```
For the latest set of untested features, or to contribute, choose the `develop`
branch.
```
git checkout develop
```
## Minimum Requirements
See [System Requirements](https://xrpl.org/system-requirements.html).
Building rippled generally requires git, Python, Conan, CMake, and a C++ compiler. Some guidance on setting up such a [C++ development environment can be found here](./docs/build/environment.md).
- [Python 3.7](https://www.python.org/downloads/)
- [Conan 1.60](https://conan.io/downloads.html)[^1]
- [CMake 3.16](https://cmake.org/download/)
[^1]: It is possible to build with Conan 2.x,
but the instructions are significantly different,
which is why we are not recommending it yet.
Notably, the `conan profile update` command is removed in 2.x.
Profiles must be edited by hand.
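As a rough sketch of the Conan 2.x equivalent (an assumption based on Conan 2.x defaults, not a path these instructions support):
```
# Conan 2.x (sketch): detect a default profile, then edit it by hand
conan profile detect
# e.g. set compiler.cppstd=20 under [settings] in ~/.conan2/profiles/default
```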
`rippled` is written in the C++20 dialect and includes the `<concepts>` header.
The [minimum compiler versions][2] required are:
| Compiler | Version |
|-------------|---------|
| GCC | 11 |
| Clang | 13 |
| Apple Clang | 13.1.6 |
| MSVC | 19.23 |
### Linux
The Ubuntu operating system has received the highest level of
quality assurance, testing, and support.
Here are [sample instructions for setting up a C++ development environment on Linux](./docs/build/environment.md#linux).
### Mac
Many rippled engineers use macOS for development.
Here are [sample instructions for setting up a C++ development environment on macOS](./docs/build/environment.md#macos).
### Windows
Windows is not recommended for production use at this time. Additionally, 32-bit Windows development is not supported.
## Steps
### Set Up Conan
After you have a [C++ development environment](./docs/build/environment.md) ready with Git, Python, Conan, CMake, and a C++ compiler, you may need to set up your Conan profile.
These instructions assume a basic familiarity with Conan and CMake.
If you are unfamiliar with Conan, then please read [this crash course](./docs/build/conan.md) or the official [Getting Started][3] walkthrough.
You'll need at least one Conan profile:
```
conan profile new default --detect
```
Update the compiler settings:
```
conan profile update settings.compiler.cppstd=20 default
```
Configure Conan (1.x only) to use recipe revisions:
```
conan config set general.revisions_enabled=1
```
**Linux** developers will commonly have a default Conan [profile][] that compiles
with GCC and links with libstdc++.
If you are linking with libstdc++ (see profile setting `compiler.libcxx`),
then you will need to choose the `libstdc++11` ABI:
```
conan profile update settings.compiler.libcxx=libstdc++11 default
```
Ensure inter-operability between `boost::string_view` and `std::string_view` types:
```
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_BEAST_USE_STD_STRING_VIEW"]' default
conan profile update 'env.CXXFLAGS="-DBOOST_BEAST_USE_STD_STRING_VIEW"' default
```
If you have other flags in the `conf.tools.build` or `env.CXXFLAGS` sections, make sure to retain the existing flags and append the new ones. You can check them with:
```
conan profile show default
```
**Windows** developers may need to use the x64 native build tools.
An easy way to do that is to run the shortcut "x64 Native Tools Command
Prompt" for the version of Visual Studio that you have installed.
Windows developers must also build `rippled` and its dependencies for the x64
architecture:
```
conan profile update settings.arch=x86_64 default
```
### Multiple compilers
When `/usr/bin/g++` exists on a platform, it is the default C++ compiler. This
default works for some users.
However, if this compiler cannot build rippled or its dependencies, then you can
install another compiler and set Conan and CMake to use it.
Update the `conf.tools.build:compiler_executables` setting in order to set the correct variables (`CMAKE_<LANG>_COMPILER`) in the
generated CMake toolchain file.
For example, on Ubuntu 20, you may have gcc at `/usr/bin/gcc` and g++ at `/usr/bin/g++`; if that is the case, you can select those compilers with:
```
conan profile update 'conf.tools.build:compiler_executables={"c": "/usr/bin/gcc", "cpp": "/usr/bin/g++"}' default
```
Replace `/usr/bin/gcc` and `/usr/bin/g++` with paths to the desired compilers.
It should choose the compiler for dependencies as well,
but not all of them have a Conan recipe that respects this setting (yet).
For the rest, you can set these environment variables.
Replace `<path>` with paths to the desired compilers:
- `conan profile update env.CC=<path> default`
- `conan profile update env.CXX=<path> default`
Export our [Conan recipe for Snappy](./external/snappy).
It does not explicitly link the C++ standard library,
which allows you to statically link it with GCC, if you want.
```
# Conan 1.x
conan export external/snappy snappy/1.1.10@
# Conan 2.x
conan export --version 1.1.10 external/snappy
```
Export our [Conan recipe for RocksDB](./external/rocksdb).
It does not override paths to dependencies when building with Visual Studio.
```
# Conan 1.x
conan export external/rocksdb rocksdb/6.29.5@
# Conan 2.x
conan export --version 6.29.5 external/rocksdb
```
Export our [Conan recipe for SOCI](./external/soci).
It patches their CMake to correctly import its dependencies.
```
# Conan 1.x
conan export external/soci soci/4.0.3@
# Conan 2.x
conan export --version 4.0.3 external/soci
```
Export our [Conan recipe for NuDB](./external/nudb).
It fixes some source files to add missing `#include`s.
```
# Conan 1.x
conan export external/nudb nudb/2.0.8@
# Conan 2.x
conan export --version 2.0.8 external/nudb
```
### Build and Test
1. Create a build directory and move into it.
```
mkdir .build
cd .build
```
You can use any directory name. Conan treats your working directory as an
install folder and generates files with implementation details.
You don't need to worry about these files, but make sure to change
your working directory to your build directory before calling Conan.
**Note:** You can specify a directory for the installation files by adding
the `install-folder` or `-if` option to every `conan install` command
in the next step.
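For example (a sketch for Conan 1.x, keeping the installation files in the current directory):
```
conan install .. --install-folder . --output-folder . --build missing --settings build_type=Release
```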
2. Generate CMake files for every configuration you want to build.
```
conan install .. --output-folder . --build missing --settings build_type=Release
conan install .. --output-folder . --build missing --settings build_type=Debug
```
For a single-configuration generator, e.g. `Unix Makefiles` or `Ninja`,
you only need to run this command once.
For a multi-configuration generator, e.g. `Visual Studio`, you may want to
run it more than once.
Each of these commands should also have a different `build_type` setting.
A second command with the same `build_type` setting will overwrite the files
generated by the first. You can pass the build type on the command line with
`--settings build_type=$BUILD_TYPE` or in the profile itself,
under the section `[settings]` with the key `build_type`.
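In the profile, the relevant fragment would look like:
```
[settings]
build_type=Release
```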
If you are using a Microsoft Visual C++ compiler,
then you will need to ensure consistency between the `build_type` setting
and the `compiler.runtime` setting.
When `build_type` is `Release`, `compiler.runtime` should be `MT`.
When `build_type` is `Debug`, `compiler.runtime` should be `MTd`.
```
conan install .. --output-folder . --build missing --settings build_type=Release --settings compiler.runtime=MT
conan install .. --output-folder . --build missing --settings build_type=Debug --settings compiler.runtime=MTd
```
3. Configure CMake and pass the toolchain file generated by Conan, located at
`$OUTPUT_FOLDER/build/generators/conan_toolchain.cmake`.
Single-config generators:
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release -Dxrpld=ON -Dtests=ON ..
```
Pass the CMake variable [`CMAKE_BUILD_TYPE`][build_type]
and make sure it matches the `build_type` setting you chose in the previous
step.
Multi-config generators:
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -Dxrpld=ON -Dtests=ON ..
```
**Note:** You can pass build options for `rippled` in this step.
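For example, to also enable assertions (see the [Options](#options) table below) in a single-config Debug build:
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Debug -Dassert=ON -Dxrpld=ON -Dtests=ON ..
```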
4. Build `rippled`.
For a single-configuration generator, it will build whatever configuration
you passed for `CMAKE_BUILD_TYPE`. For a multi-configuration generator,
you must pass the option `--config` to select the build configuration.
Single-config generators:
```
cmake --build .
```
Multi-config generators:
```
cmake --build . --config Release
cmake --build . --config Debug
```
5. Test rippled.
Single-config generators:
```
./rippled --unittest
```
Multi-config generators:
```
./Release/rippled --unittest
./Debug/rippled --unittest
```
The location of `rippled` in your build directory depends on your CMake
generator. Pass `--help` to see the rest of the command line options.
## Coverage report
The coverage report is intended for developers using the GCC
or Clang compilers (including Apple Clang). It is generated by the build target `coverage`,
which is only enabled when the `coverage` option is set, e.g. with
`--options coverage=True` in `conan` or the `-Dcoverage=ON` variable in `cmake`.
Prerequisites for the coverage report:
- [gcovr tool][gcovr] (can be installed e.g. with [pip][python-pip])
- `gcov` for GCC (installed with the compiler by default) or
- `llvm-cov` for Clang (installed with the compiler by default)
- `Debug` build type
A coverage report is created when the following steps are completed, in order:
1. `rippled` binary built with instrumentation data, enabled by the `coverage`
option mentioned above
2. completed run of unit tests, which populates coverage capture data
3. completed run of the `gcovr` tool (which internally invokes either `gcov` or `llvm-cov`)
to assemble both instrumentation data and the coverage capture data into a coverage report
The above steps are automated into a single target `coverage`. The instrumented
`rippled` binary can also be used for regular development or testing work, at
the cost of extra disk space utilization and a small performance hit
(to store coverage capture). In case of a spurious failure of unit tests, it is
possible to re-run the `coverage` target without rebuilding the `rippled` binary
(since it is simply a dependency of the coverage report target). It is also possible
to select only specific tests for the purpose of the coverage report, by setting
the `coverage_test` variable in `cmake`.
The default coverage report format is `html-details`, but the user
can override it to any of the formats listed in `Builds/CMake/CodeCoverage.cmake`
by setting the `coverage_format` variable in `cmake`. It is also possible
to generate more than one format at a time by setting the `coverage_extra_args`
variable in `cmake`. The specific command line used to run the `gcovr` tool will be
displayed if the `CODE_COVERAGE_VERBOSE` variable is set.
By default, the code coverage tool runs parallel unit tests with `--unittest-jobs`
set to the number of available CPU cores. This may cause spurious test
errors on Apple platforms. Developers can override the number of unit test jobs with
the `coverage_test_parallelism` variable in `cmake`.
Example use with some cmake variables set:
```
cd .build
conan install .. --output-folder . --build missing --settings build_type=Debug
cmake -DCMAKE_BUILD_TYPE=Debug -Dcoverage=ON -Dxrpld=ON -Dtests=ON -Dcoverage_test_parallelism=2 -Dcoverage_format=html-details -Dcoverage_extra_args="--json coverage.json" -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake ..
cmake --build . --target coverage
```
After the `coverage` target is completed, the generated coverage report will be
stored inside the build directory, as either of:
- file named `coverage.`_extension_ , with a suitable extension for the report format, or
- directory named `coverage`, with the `index.html` and other files inside, for the `html-details` or `html-nested` report formats.
## Options
| Option | Default Value | Description |
| --- | --- | --- |
| `assert` | OFF | Enable assertions. |
| `coverage` | OFF | Prepare the coverage report. |
| `san` | N/A | Enable a sanitizer with Clang. Choices are `thread` and `address`. |
| `tests` | OFF | Build tests. |
| `unity` | ON | Configure a unity build. |
| `xrpld` | OFF | Build the xrpld (`rippled`) application, and not just the libxrpl library. |
[Unity builds][5] may be faster for the first build
(at the cost of much more memory) since they concatenate sources into fewer
translation units. Non-unity builds may be faster for incremental builds,
and can be helpful for detecting `#include` omissions.
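For example, to configure a non-unity Debug build (a sketch; combine with the toolchain file as shown in the steps above):
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Debug -Dunity=OFF -Dxrpld=ON -Dtests=ON ..
```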
## Troubleshooting
### Conan
After any updates or changes to dependencies, you may need to do the following:
1. Remove your build directory.
2. Remove the Conan cache:
```
rm -rf ~/.conan/data
```
3. Re-run [conan install](#build-and-test).
### no std::result_of
If your compiler version is recent enough to have removed `std::result_of` as
part of C++20, e.g. Apple Clang 15.0, then you might need to add a preprocessor
definition to your build.
```
conan profile update 'options.boost:extra_b2_flags="define=BOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'env.CFLAGS="-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'conf.tools.build:cflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
```
### call to 'async_teardown' is ambiguous
If you are compiling with an early version of Clang 16, then you might hit
a [regression][6] when compiling C++20 that manifests as an [error in a Boost
header][7]. You can work around it by adding this preprocessor definition:
```
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_DISABLE_CONCEPTS"' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]' default
```
### recompile with -fPIC
If you get a linker error suggesting that you recompile Boost with
position-independent code, such as:
```
/usr/bin/ld.gold: error: /home/username/.conan/data/boost/1.77.0/_/_/package/.../lib/libboost_container.a(alloc_lib.o):
requires unsupported dynamic reloc 11; recompile with -fPIC
```
then Conan most likely downloaded a bad binary distribution of the dependency.
This seems to be a [bug][1] in Conan specific to Boost 1.77.0 compiled with GCC
for Linux. The solution is to build the dependency locally by passing
`--build boost` when calling `conan install`.
```
conan install --build boost ...
```
## Add a Dependency
If you want to experiment with a new package, follow these steps:
1. Search for the package on [Conan Center](https://conan.io/center/).
2. Modify [`conanfile.py`](./conanfile.py):
- Add a version of the package to the `requires` property.
- Change any default options for the package by adding them to the
`default_options` property (with syntax `'$package:$option': $value`).
3. Modify [`CMakeLists.txt`](./CMakeLists.txt):
- Add a call to `find_package($package REQUIRED)`.
- Link a library from the package to the target `ripple_libs`
(search for the existing call to `target_link_libraries(ripple_libs INTERFACE ...)`).
4. Start coding! Don't forget to include whatever headers you need from the package.
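As a sketch, suppose the new package were zlib (a hypothetical example; the package version and target names below are illustrative, following the usual Conan/CMake conventions): after adding `'zlib/1.3.1'` to `requires` in `conanfile.py`, the CMake side would look roughly like:
```
# Hypothetical example: import the package and link it into ripple_libs
find_package(ZLIB REQUIRED)
target_link_libraries(ripple_libs INTERFACE ZLIB::ZLIB)
```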
[1]: https://github.com/conan-io/conan-center-index/issues/13168
[2]: https://en.cppreference.com/w/cpp/compiler_support/20
[3]: https://docs.conan.io/en/latest/getting_started.html
[5]: https://en.wikipedia.org/wiki/Unity_build
[6]: https://github.com/boostorg/beast/issues/2648
[7]: https://github.com/boostorg/beast/issues/2661
[gcovr]: https://gcovr.com/en/stable/getting-started.html
[python-pip]: https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/
[build_type]: https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html
[profile]: https://docs.conan.io/en/latest/reference/profiles.html

Builds/ArchLinux/PKGBUILD
@@ -0,0 +1,41 @@
# Maintainer: Roberto Catini <roberto.catini@gmail.com>
pkgname=rippled
pkgrel=1
pkgver=0
pkgdesc="Ripple peer-to-peer network daemon"
arch=('i686' 'x86_64')
url="https://github.com/ripple/rippled"
license=('custom:ISC')
depends=('protobuf' 'openssl' 'boost-libs')
makedepends=('git' 'scons' 'boost')
checkdepends=('nodejs')
backup=("etc/$pkgname/rippled.cfg")
source=("git://github.com/ripple/rippled.git#branch=master")
sha512sums=('SKIP')
pkgver() {
cd "$srcdir/$pkgname"
git describe --long --tags | sed -r 's/([^-]*-g)/r\1/;s/-/./g'
}
build() {
cd "$srcdir/$pkgname"
scons build/rippled
}
check() {
cd "$srcdir/$pkgname"
npm install
npm test
build/rippled --unittest
}
package() {
cd "$srcdir/$pkgname"
install -D -m644 LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
install -D build/rippled "$pkgdir/usr/bin/rippled"
install -D -m644 doc/rippled-example.cfg "$pkgdir/etc/$pkgname/rippled.cfg"
mkdir -p "$pkgdir/var/lib/$pkgname/db"
mkdir -p "$pkgdir/var/log/$pkgname"
}

Builds/QtCreator/.gitignore
@@ -0,0 +1,5 @@
# QTCreator
Makefile
*.user

@@ -0,0 +1,112 @@
# Ripple protocol buffers
PROTOS = ../../src/ripple_data/protocol/ripple.proto
PROTOS_DIR = ../../build/proto
# Google Protocol Buffers support
protobuf_h.name = protobuf header
protobuf_h.input = PROTOS
protobuf_h.output = $${PROTOS_DIR}/${QMAKE_FILE_BASE}.pb.h
protobuf_h.depends = ${QMAKE_FILE_NAME}
protobuf_h.commands = protoc --cpp_out=$${PROTOS_DIR} --proto_path=${QMAKE_FILE_PATH} ${QMAKE_FILE_NAME}
protobuf_h.variable_out = HEADERS
QMAKE_EXTRA_COMPILERS += protobuf_h
protobuf_cc.name = protobuf implementation
protobuf_cc.input = PROTOS
protobuf_cc.output = $${PROTOS_DIR}/${QMAKE_FILE_BASE}.pb.cc
protobuf_cc.depends = $${PROTOS_DIR}/${QMAKE_FILE_BASE}.pb.h
protobuf_cc.commands = $$escape_expand(\\n)
#protobuf_cc.variable_out = SOURCES
QMAKE_EXTRA_COMPILERS += protobuf_cc
# Ripple compilation
DESTDIR = ../../build/QtCreator
OBJECTS_DIR = ../../build/QtCreator/obj
TEMPLATE = app
CONFIG += console thread warn_off
CONFIG -= qt gui
DEFINES += _DEBUG
linux-g++:QMAKE_CXXFLAGS += \
-Wall \
-Wno-sign-compare \
-Wno-char-subscripts \
-Wno-invalid-offsetof \
-Wno-unused-parameter \
-Wformat \
-O0 \
-std=c++11 \
-pthread
INCLUDEPATH += \
"../../src/leveldb/" \
"../../src/leveldb/port" \
"../../src/leveldb/include" \
$${PROTOS_DIR}
OTHER_FILES += \
# $$files(../../src/*, true) \
# $$files(../../src/beast/*) \
# $$files(../../src/beast/modules/beast_basics/diagnostic/*)
# $$files(../../src/beast/modules/beast_core/, true)
UI_HEADERS_DIR += ../../src/ripple_basics
# ---------
# New style
#
SOURCES += \
../../src/ripple/beast/ripple_beast.cpp \
../../src/ripple/beast/ripple_beastc.c \
../../src/ripple/common/ripple_common.cpp \
../../src/ripple/http/ripple_http.cpp \
../../src/ripple/json/ripple_json.cpp \
../../src/ripple/peerfinder/ripple_peerfinder.cpp \
../../src/ripple/radmap/ripple_radmap.cpp \
../../src/ripple/resource/ripple_resource.cpp \
../../src/ripple/sitefiles/ripple_sitefiles.cpp \
../../src/ripple/sslutil/ripple_sslutil.cpp \
../../src/ripple/testoverlay/ripple_testoverlay.cpp \
../../src/ripple/types/ripple_types.cpp \
../../src/ripple/validators/ripple_validators.cpp
# ---------
# Old style
#
SOURCES += \
../../src/ripple_app/ripple_app.cpp \
../../src/ripple_app/ripple_app_pt1.cpp \
../../src/ripple_app/ripple_app_pt2.cpp \
../../src/ripple_app/ripple_app_pt3.cpp \
../../src/ripple_app/ripple_app_pt4.cpp \
../../src/ripple_app/ripple_app_pt5.cpp \
../../src/ripple_app/ripple_app_pt6.cpp \
../../src/ripple_app/ripple_app_pt7.cpp \
../../src/ripple_app/ripple_app_pt8.cpp \
../../src/ripple_basics/ripple_basics.cpp \
../../src/ripple_core/ripple_core.cpp \
../../src/ripple_data/ripple_data.cpp \
../../src/ripple_hyperleveldb/ripple_hyperleveldb.cpp \
../../src/ripple_leveldb/ripple_leveldb.cpp \
../../src/ripple_net/ripple_net.cpp \
../../src/ripple_overlay/ripple_overlay.cpp \
../../src/ripple_rpc/ripple_rpc.cpp \
../../src/ripple_websocket/ripple_websocket.cpp
LIBS += \
-lboost_date_time-mt \
-lboost_filesystem-mt \
-lboost_program_options-mt \
-lboost_regex-mt \
-lboost_system-mt \
-lboost_thread-mt \
-lboost_random-mt \
-lprotobuf \
-lssl \
-lrt

@@ -0,0 +1,33 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ImportGroup Label="PropertySheets" />
<PropertyGroup Label="UserMacros">
<RepoDir>..\..</RepoDir>
</PropertyGroup>
<PropertyGroup>
<OutDir>$(RepoDir)\build\VisualStudio2013\$(Configuration).$(Platform)\</OutDir>
<IntDir>$(RepoDir)\build\obj\VisualStudio2013\$(Configuration).$(Platform)\</IntDir>
<TargetName>rippled</TargetName>
</PropertyGroup>
<ItemDefinitionGroup>
<ClCompile>
<PreprocessorDefinitions>_VARIADIC_MAX=10;_WIN32_WINNT=0x0600;_SCL_SECURE_NO_WARNINGS;_CRT_SECURE_NO_WARNINGS;WIN32;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<MultiProcessorCompilation>true</MultiProcessorCompilation>
<WarningLevel>Level3</WarningLevel>
<AdditionalIncludeDirectories>$(RepoDir)\src\protobuf\src;$(RepoDir)\src\protobuf\vsprojects;$(RepoDir)\src\leveldb;$(RepoDir)\src\leveldb\include;$(RepoDir)\build\proto;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<AdditionalOptions>/bigobj %(AdditionalOptions)</AdditionalOptions>
<ExceptionHandling>Async</ExceptionHandling>
<DisableSpecificWarnings>4018;4244</DisableSpecificWarnings>
</ClCompile>
<Link>
<AdditionalDependencies>Shlwapi.lib;%(AdditionalDependencies)</AdditionalDependencies>
<SpecifySectionAttributes>
</SpecifySectionAttributes>
</Link>
</ItemDefinitionGroup>
<ItemGroup>
<BuildMacro Include="RepoDir">
<Value>$(RepoDir)</Value>
</BuildMacro>
</ItemGroup>
</Project>

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -0,0 +1,8 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ImportGroup Label="PropertySheets" />
<PropertyGroup Label="UserMacros" />
<PropertyGroup />
<ItemDefinitionGroup />
<ItemGroup />
</Project>

@@ -0,0 +1,38 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Express 2013 for Windows Desktop
VisualStudioVersion = 12.0.30110.0
MinimumVisualStudioVersion = 10.0.40219.1
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "beast", "..\..\src\beast\Builds\VisualStudio2013\beast.vcxproj", "{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}"
EndProject
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "RippleD", "RippleD.vcxproj", "{B7F39ECD-473C-484D-BC34-31F8362506A5}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Win32 = Debug|Win32
Debug|x64 = Debug|x64
Release|Win32 = Release|Win32
Release|x64 = Release|x64
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Debug|Win32.ActiveCfg = Debug|Win32
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Debug|Win32.Build.0 = Debug|Win32
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Debug|x64.ActiveCfg = Debug|x64
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Debug|x64.Build.0 = Debug|x64
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Release|Win32.ActiveCfg = Release|Win32
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Release|Win32.Build.0 = Release|Win32
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Release|x64.ActiveCfg = Release|x64
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Release|x64.Build.0 = Release|x64
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Debug|Win32.ActiveCfg = Debug|Win32
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Debug|Win32.Build.0 = Debug|Win32
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Debug|x64.ActiveCfg = Debug|x64
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Debug|x64.Build.0 = Debug|x64
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Release|Win32.ActiveCfg = Release|Win32
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Release|Win32.Build.0 = Release|Win32
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Release|x64.ActiveCfg = Release|x64
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Release|x64.Build.0 = Release|x64
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
EndGlobal

@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ImportGroup Label="PropertySheets" />
<PropertyGroup Label="UserMacros" />
<PropertyGroup />
<ItemDefinitionGroup>
<ClCompile>
<DisableSpecificWarnings>4018;4244;4267</DisableSpecificWarnings>
</ClCompile>
</ItemDefinitionGroup>
<ItemGroup />
</Project>

Builds/XCode/README.txt
@@ -0,0 +1 @@
Place XCode project file here!

@@ -1,114 +0,0 @@
# Levelization
Levelization is the term used to describe efforts to prevent rippled from
having or creating cyclic dependencies.
rippled code is organized into directories under `src/rippled` (and
`src/test`) representing modules. The modules are intended to be
organized into "tiers" or "levels" such that a module from one level can
only include code from lower levels. Additionally, a module
in one level should never include code in an `impl` folder of any level
other than its own.
Unfortunately, over time, enforcement of levelization has been
inconsistent, so the current state of the code doesn't necessarily
reflect these rules. Whenever possible, developers should refactor any
levelization violations they find (by moving files or individual
classes). At the very least, don't make things worse.
The table below summarizes the _desired_ division of modules, based on the
state of the rippled code when it was created. The levels are numbered from
the bottom up with the lower level, lower numbered, more independent
modules listed first, and the higher level, higher numbered modules with
more dependencies listed later.
**tl;dr:** The modules listed first are more independent than the modules
listed later.
| Level / Tier | Module(s) |
|--------------|-----------------------------------------------|
| 01 | ripple/beast ripple/unity
| 02 | ripple/basics
| 03 | ripple/json ripple/crypto
| 04 | ripple/protocol
| 05 | ripple/core ripple/conditions ripple/consensus ripple/resource ripple/server
| 06 | ripple/peerfinder ripple/ledger ripple/nodestore ripple/net
| 07 | ripple/shamap ripple/overlay
| 08 | ripple/app
| 09 | ripple/rpc
| 10 | ripple/perflog
| 11 | test/jtx test/beast test/csf
| 12 | test/unit_test
| 13 | test/crypto test/conditions test/json test/resource test/shamap test/peerfinder test/basics test/overlay
| 14 | test
| 15 | test/net test/protocol test/ledger test/consensus test/core test/server test/nodestore
| 16 | test/rpc test/app
(Note that `test` levelization is *much* less important and *much* less
strictly enforced than `ripple` levelization, other than the requirement
that `test` code should *never* be included in `ripple` code.)
## Validation
The [levelization.sh](levelization.sh) script takes no parameters,
reads no environment variables, and can be run from any directory,
as long as it is in the expected location in the rippled repo.
It can be run at any time from within a checked out repo, and will
do an analysis of all the `#include`s in
the rippled source. The only caveat is that it runs much slower
under Windows than on Linux. It has not yet been tested on macOS.
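A typical invocation looks like this (run from anywhere inside a checked-out repo; shown here from the repo root):
```
cd Builds/levelization
./levelization.sh
```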
It generates many files of [results](results):
* `rawincludes.txt`: The raw dump of the `#includes`
* `paths.txt`: A second dump grouping the source module
to the destination module, deduped, and with frequency counts.
* `includes/`: A directory where each file represents a module and
contains a list of modules and counts that the module _includes_.
* `includedby/`: Similar to `includes/`, but the other way around. Each
file represents a module and contains a list of modules and counts
that _include_ the module.
* [`loops.txt`](results/loops.txt): A list of direct loops detected
between modules as they actually exist, as opposed to how they are
desired as described above. In a perfect repo, this file will be
empty.
This file is committed to the repo, and is used by the [levelization
Github workflow](../../.github/workflows/levelization.yml) to validate
that nothing changed.
* [`ordering.txt`](results/ordering.txt): A list showing relationships
between modules where there are no loops as they actually exist, as
opposed to how they are desired as described above.
This file is committed to the repo, and is used by the [levelization
Github workflow](../../.github/workflows/levelization.yml) to validate
that nothing changed.
* [`levelization.yml`](../../.github/workflows/levelization.yml)
Github Actions workflow to test that levelization loops haven't
changed. Unfortunately, if changes are detected, it can't tell if
they are improvements or not, so if you have resolved any issues or
done anything else to improve levelization, run `levelization.sh`,
and commit the updated results.
The `loops.txt` and `ordering.txt` files relate the modules
using comparison signs, which indicate the number of times each
module is included in the other.
* `A > B` means that A should probably be at a higher level than B,
because B is included in A significantly more than A is included in B.
These results can be included in both `loops.txt` and `ordering.txt`.
Because `ordering.txt` only includes relationships where B is not
included in A at all, it will only include these types of results.
* `A ~= B` means that A and B are included in each other a different
number of times, but the values are so close that the script can't
definitively say that one should be above the other. These results
will only be included in `loops.txt`.
* `A == B` means that A and B include each other the same number of
times, so the script has no clue which should be higher. These results
will only be included in `loops.txt`.
The committed files hide the detailed values intentionally, to
prevent false alarms and merging issues, and because it's easy to
get those details locally.
1. Run `levelization.sh`
2. Grep the modules in `paths.txt`.
* For example, if a cycle is found `A ~= B`, simply `grep -w
A Builds/levelization/results/paths.txt | grep -w B`

@@ -1,130 +0,0 @@
#!/bin/bash
# Usage: levelization.sh
# This script takes no parameters, reads no environment variables,
# and can be run from any directory, as long as it is in the expected
# location in the repo.
pushd $( dirname $0 )
if [ -v PS1 ]
then
# if the shell is interactive, clean up any flotsam before analyzing
git clean -ix
fi
# Ensure all sorting is ASCII-order consistently across platforms.
export LANG=C
rm -rfv results
mkdir results
includes="$( pwd )/results/rawincludes.txt"
pushd ../..
echo Raw includes:
grep -r '#include.*/.*\.h' include src | \
grep -v boost | tee ${includes}
popd
pushd results
oldifs=${IFS}
IFS=:
mkdir includes
mkdir includedby
echo Build levelization paths
exec 3< ${includes} # open rawincludes.txt for input
while read -r -u 3 file include
do
level=$( echo ${file} | cut -d/ -f 2,3 )
# If the "level" indicates a file, cut off the filename
if [[ "${level##*.}" != "${level}" ]]
then
# Use the "toplevel" label as a workaround for `sort`
# inconsistencies between different utility versions
level="$( dirname ${level} )/toplevel"
fi
level=$( echo ${level} | tr '/' '.' )
includelevel=$( echo ${include} | sed 's/.*["<]//; s/[">].*//' | \
cut -d/ -f 1,2 )
if [[ "${includelevel##*.}" != "${includelevel}" ]]
then
# Use the "toplevel" label as a workaround for `sort`
# inconsistencies between different utility versions
includelevel="$( dirname ${includelevel} )/toplevel"
fi
includelevel=$( echo ${includelevel} | tr '/' '.' )
if [[ "$level" != "$includelevel" ]]
then
echo $level $includelevel | tee -a paths.txt
fi
done
echo Sort and dedup paths
sort -ds paths.txt | uniq -c | tee sortedpaths.txt
mv sortedpaths.txt paths.txt
exec 3>&- #close fd 3
IFS=${oldifs}
unset oldifs
echo Split into flat-file database
exec 4<paths.txt # open paths.txt for input
while read -r -u 4 count level include
do
echo ${include} ${count} | tee -a includes/${level}
echo ${level} ${count} | tee -a includedby/${include}
done
exec 4>&- #close fd 4
loops="$( pwd )/loops.txt"
ordering="$( pwd )/ordering.txt"
pushd includes
echo Search for loops
# Redirect stdout to a file
exec 4>&1
exec 1>"${loops}"
for source in *
do
if [[ -f "$source" ]]
then
exec 5<"${source}" # open for input
while read -r -u 5 include includefreq
do
if [[ -f $include ]]
then
if grep -q -w $source $include
then
if grep -q -w "Loop: $include $source" "${loops}"
then
continue
fi
sourcefreq=$( grep -w $source $include | cut -d\ -f2 )
echo "Loop: $source $include"
# If the counts are close, indicate that the two modules are
# on the same level, though they shouldn't be
if [[ $(( $includefreq - $sourcefreq )) -gt 3 ]]
then
echo -e " $source > $include\n"
elif [[ $(( $sourcefreq - $includefreq )) -gt 3 ]]
then
echo -e " $include > $source\n"
elif [[ $sourcefreq -eq $includefreq ]]
then
echo -e " $include == $source\n"
else
echo -e " $include ~= $source\n"
fi
else
echo "$source > $include" >> "${ordering}"
fi
fi
done
exec 5>&- #close fd 5
fi
done
exec 1>&4 #close fd 1
exec 4>&- #close fd 4
cat "${ordering}"
cat "${loops}"
popd
popd
popd

@@ -1,42 +0,0 @@
Loop: test.jtx test.toplevel
test.toplevel > test.jtx
Loop: test.jtx test.unit_test
test.unit_test == test.jtx
Loop: xrpld.app xrpld.core
xrpld.app > xrpld.core
Loop: xrpld.app xrpld.ledger
xrpld.app > xrpld.ledger
Loop: xrpld.app xrpld.net
xrpld.app > xrpld.net
Loop: xrpld.app xrpld.overlay
xrpld.overlay == xrpld.app
Loop: xrpld.app xrpld.peerfinder
xrpld.app > xrpld.peerfinder
Loop: xrpld.app xrpld.rpc
xrpld.rpc > xrpld.app
Loop: xrpld.app xrpld.shamap
xrpld.app > xrpld.shamap
Loop: xrpld.core xrpld.net
xrpld.net > xrpld.core
Loop: xrpld.core xrpld.perflog
xrpld.perflog == xrpld.core
Loop: xrpld.net xrpld.rpc
xrpld.rpc ~= xrpld.net
Loop: xrpld.overlay xrpld.rpc
xrpld.rpc ~= xrpld.overlay
Loop: xrpld.perflog xrpld.rpc
xrpld.rpc ~= xrpld.perflog

@@ -1,195 +0,0 @@
libxrpl.basics > xrpl.basics
libxrpl.crypto > xrpl.basics
libxrpl.json > xrpl.basics
libxrpl.json > xrpl.json
libxrpl.protocol > xrpl.basics
libxrpl.protocol > xrpl.json
libxrpl.protocol > xrpl.protocol
libxrpl.resource > xrpl.basics
libxrpl.resource > xrpl.resource
libxrpl.server > xrpl.basics
libxrpl.server > xrpl.json
libxrpl.server > xrpl.protocol
libxrpl.server > xrpl.server
test.app > test.jtx
test.app > test.rpc
test.app > test.toplevel
test.app > test.unit_test
test.app > xrpl.basics
test.app > xrpld.app
test.app > xrpld.core
test.app > xrpld.ledger
test.app > xrpld.overlay
test.app > xrpld.rpc
test.app > xrpl.json
test.app > xrpl.protocol
test.app > xrpl.resource
test.basics > test.jtx
test.basics > test.unit_test
test.basics > xrpl.basics
test.basics > xrpld.perflog
test.basics > xrpld.rpc
test.basics > xrpl.json
test.basics > xrpl.protocol
test.beast > xrpl.basics
test.conditions > xrpl.basics
test.conditions > xrpld.conditions
test.consensus > test.csf
test.consensus > test.toplevel
test.consensus > test.unit_test
test.consensus > xrpl.basics
test.consensus > xrpld.app
test.consensus > xrpld.consensus
test.consensus > xrpld.ledger
test.core > test.jtx
test.core > test.toplevel
test.core > test.unit_test
test.core > xrpl.basics
test.core > xrpld.core
test.core > xrpld.perflog
test.core > xrpl.json
test.core > xrpl.server
test.csf > xrpl.basics
test.csf > xrpld.consensus
test.csf > xrpl.json
test.csf > xrpl.protocol
test.json > test.jtx
test.json > xrpl.json
test.jtx > xrpl.basics
test.jtx > xrpld.app
test.jtx > xrpld.consensus
test.jtx > xrpld.core
test.jtx > xrpld.ledger
test.jtx > xrpld.net
test.jtx > xrpld.rpc
test.jtx > xrpl.json
test.jtx > xrpl.protocol
test.jtx > xrpl.resource
test.jtx > xrpl.server
test.ledger > test.jtx
test.ledger > test.toplevel
test.ledger > xrpl.basics
test.ledger > xrpld.app
test.ledger > xrpld.core
test.ledger > xrpld.ledger
test.ledger > xrpl.protocol
test.nodestore > test.jtx
test.nodestore > test.toplevel
test.nodestore > test.unit_test
test.nodestore > xrpl.basics
test.nodestore > xrpld.core
test.nodestore > xrpld.nodestore
test.nodestore > xrpld.unity
test.overlay > test.jtx
test.overlay > test.unit_test
test.overlay > xrpl.basics
test.overlay > xrpld.app
test.overlay > xrpld.overlay
test.overlay > xrpld.peerfinder
test.overlay > xrpld.shamap
test.overlay > xrpl.protocol
test.peerfinder > test.beast
test.peerfinder > test.unit_test
test.peerfinder > xrpl.basics
test.peerfinder > xrpld.core
test.peerfinder > xrpld.peerfinder
test.peerfinder > xrpl.protocol
test.protocol > test.toplevel
test.protocol > xrpl.basics
test.protocol > xrpl.json
test.protocol > xrpl.protocol
test.resource > test.unit_test
test.resource > xrpl.basics
test.resource > xrpl.resource
test.rpc > test.jtx
test.rpc > test.toplevel
test.rpc > xrpl.basics
test.rpc > xrpld.app
test.rpc > xrpld.core
test.rpc > xrpld.net
test.rpc > xrpld.overlay
test.rpc > xrpld.rpc
test.rpc > xrpl.json
test.rpc > xrpl.protocol
test.rpc > xrpl.resource
test.server > test.jtx
test.server > test.toplevel
test.server > test.unit_test
test.server > xrpl.basics
test.server > xrpld.app
test.server > xrpld.core
test.server > xrpld.rpc
test.server > xrpl.json
test.server > xrpl.server
test.shamap > test.unit_test
test.shamap > xrpl.basics
test.shamap > xrpld.nodestore
test.shamap > xrpld.shamap
test.shamap > xrpl.protocol
test.toplevel > test.csf
test.toplevel > xrpl.json
test.unit_test > xrpl.basics
xrpl.json > xrpl.basics
xrpl.protocol > xrpl.basics
xrpl.protocol > xrpl.json
xrpl.resource > xrpl.basics
xrpl.resource > xrpl.json
xrpl.resource > xrpl.protocol
xrpl.server > xrpl.basics
xrpl.server > xrpl.json
xrpl.server > xrpl.protocol
xrpld.app > test.unit_test
xrpld.app > xrpl.basics
xrpld.app > xrpld.conditions
xrpld.app > xrpld.consensus
xrpld.app > xrpld.nodestore
xrpld.app > xrpld.perflog
xrpld.app > xrpl.json
xrpld.app > xrpl.protocol
xrpld.app > xrpl.resource
xrpld.conditions > xrpl.basics
xrpld.conditions > xrpl.protocol
xrpld.consensus > xrpl.basics
xrpld.consensus > xrpl.json
xrpld.consensus > xrpl.protocol
xrpld.core > xrpl.basics
xrpld.core > xrpl.json
xrpld.core > xrpl.protocol
xrpld.ledger > xrpl.basics
xrpld.ledger > xrpld.core
xrpld.ledger > xrpl.json
xrpld.ledger > xrpl.protocol
xrpld.net > xrpl.basics
xrpld.net > xrpl.json
xrpld.net > xrpl.protocol
xrpld.net > xrpl.resource
xrpld.nodestore > xrpl.basics
xrpld.nodestore > xrpld.core
xrpld.nodestore > xrpld.unity
xrpld.nodestore > xrpl.json
xrpld.nodestore > xrpl.protocol
xrpld.overlay > xrpl.basics
xrpld.overlay > xrpld.core
xrpld.overlay > xrpld.peerfinder
xrpld.overlay > xrpld.perflog
xrpld.overlay > xrpl.json
xrpld.overlay > xrpl.protocol
xrpld.overlay > xrpl.resource
xrpld.overlay > xrpl.server
xrpld.peerfinder > xrpl.basics
xrpld.peerfinder > xrpld.core
xrpld.peerfinder > xrpl.protocol
xrpld.perflog > xrpl.basics
xrpld.perflog > xrpl.json
xrpld.perflog > xrpl.protocol
xrpld.rpc > xrpl.basics
xrpld.rpc > xrpld.core
xrpld.rpc > xrpld.ledger
xrpld.rpc > xrpld.nodestore
xrpld.rpc > xrpl.json
xrpld.rpc > xrpl.protocol
xrpld.rpc > xrpl.resource
xrpld.rpc > xrpl.server
xrpld.shamap > xrpl.basics
xrpld.shamap > xrpld.nodestore
xrpld.shamap > xrpl.protocol

Builds/rpm/rippled.spec
@@ -0,0 +1,53 @@
Name: rippled
Version: 0.23.0
Release: 1%{?dist}
Summary: Ripple peer-to-peer network daemon
Group: Applications/Internet
License: ISC
URL: https://github.com/ripple/rippled
# curl -L -o SOURCES/rippled-release.zip https://github.com/ripple/rippled/archive/release.zip
Source0: rippled-release.zip
BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)
BuildRequires: gcc-c++ scons openssl-devel protobuf-devel
Requires: protobuf openssl
%description
Rippled is the server component of the Ripple network.
%prep
%setup -n rippled-release
%build
# Assume boost is manually installed
export RIPPLED_BOOST_HOME=/usr/local/boost_1_55_0
scons -j `grep -c processor /proc/cpuinfo` build/rippled
%install
rm -rf %{buildroot}
mkdir -p %{buildroot}/usr/share/%{name}
cp LICENSE %{buildroot}/usr/share/%{name}/
mkdir -p %{buildroot}/usr/bin
cp build/rippled %{buildroot}/usr/bin/rippled
mkdir -p %{buildroot}/etc/%{name}
cp doc/rippled-example.cfg %{buildroot}/etc/%{name}/rippled.cfg
mkdir -p %{buildroot}/var/lib/%{name}/db
mkdir -p %{buildroot}/var/log/%{name}
%clean
rm -rf %{buildroot}
%files
%defattr(-,root,root,-)
/usr/bin/rippled
/usr/share/rippled/LICENSE
/etc/rippled/rippled.cfg

@@ -1,139 +0,0 @@
cmake_minimum_required(VERSION 3.16)
if(POLICY CMP0074)
cmake_policy(SET CMP0074 NEW)
endif()
if(POLICY CMP0077)
cmake_policy(SET CMP0077 NEW)
endif()
# Fix "unrecognized escape" issues when passing CMAKE_MODULE_PATH on Windows.
file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH)
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
project(xrpl)
set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(Boost_NO_BOOST_CMAKE ON)
# make GIT_COMMIT_HASH define available to all sources
find_package(Git)
if(Git_FOUND)
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git describe --always --abbrev=40
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gch)
if(gch)
set(GIT_COMMIT_HASH "${gch}")
message(STATUS gch: ${GIT_COMMIT_HASH})
add_definitions(-DGIT_COMMIT_HASH="${GIT_COMMIT_HASH}")
endif()
endif() #git
if(thread_safety_analysis)
add_compile_options(-Wthread-safety -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS -DRIPPLE_ENABLE_THREAD_SAFETY_ANNOTATIONS)
add_compile_options("-stdlib=libc++")
add_link_options("-stdlib=libc++")
endif()
include (CheckCXXCompilerFlag)
include (FetchContent)
include (ExternalProject)
include (CMakeFuncs) # must come *after* ExternalProject b/c it overrides one function in EP
if (target)
message (FATAL_ERROR "The target option has been removed - use native cmake options to control build")
endif ()
include(RippledSanity)
include(RippledVersion)
include(RippledSettings)
# this check has to remain in the top-level cmake
# because of the early return statement
if (packages_only)
if (NOT TARGET rpm)
message (FATAL_ERROR "packages_only requested, but targets were not created - is docker installed?")
endif()
return ()
endif ()
include(RippledCompiler)
include(RippledInterface)
option(only_docs "Include only the docs target?" FALSE)
include(RippledDocs)
if(only_docs)
return()
endif()
###
include(deps/Boost)
find_package(OpenSSL 1.1.1 REQUIRED)
set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2
)
set(SECP256K1_INSTALL TRUE)
add_subdirectory(external/secp256k1)
add_library(secp256k1::secp256k1 ALIAS secp256k1)
add_subdirectory(external/ed25519-donna)
add_subdirectory(external/antithesis-sdk)
find_package(gRPC REQUIRED)
find_package(lz4 REQUIRED)
# Target names with :: are not allowed in a generator expression.
# We need to pull the include directories and imported location properties
# from separate targets.
find_package(LibArchive REQUIRED)
find_package(SOCI REQUIRED)
find_package(SQLite3 REQUIRED)
# https://conan.io/center/recipes/wasmedge?version=0.9.0
option(wasmedge "Enable WASM" ON)
if(wasmedge)
find_package(wasmedge REQUIRED)
set_target_properties(wasmedge::wasmedge PROPERTIES
INTERFACE_COMPILE_DEFINITIONS RIPPLE_WASMEDGE_AVAILABLE=1
)
target_link_libraries(ripple_libs INTERFACE wasmedge::wasmedge)
# include(deps/WasmEdge)
endif()
option(rocksdb "Enable RocksDB" ON)
if(rocksdb)
find_package(RocksDB REQUIRED)
set_target_properties(RocksDB::rocksdb PROPERTIES
INTERFACE_COMPILE_DEFINITIONS RIPPLE_ROCKSDB_AVAILABLE=1
)
target_link_libraries(ripple_libs INTERFACE RocksDB::rocksdb)
endif()
find_package(nudb REQUIRED)
find_package(date REQUIRED)
find_package(xxHash REQUIRED)
target_link_libraries(ripple_libs INTERFACE
ed25519::ed25519
lz4::lz4
OpenSSL::Crypto
OpenSSL::SSL
secp256k1::secp256k1
soci::soci
SQLite::SQLite3
)
# Work around changes to Conan recipe for now.
if(TARGET nudb::core)
set(nudb nudb::core)
elseif(TARGET NuDB::nudb)
set(nudb NuDB::nudb)
else()
message(FATAL_ERROR "unknown nudb target")
endif()
target_link_libraries(ripple_libs INTERFACE ${nudb})
if(coverage)
include(RippledCov)
endif()
set(PROJECT_EXPORT_SET RippleExports)
include(RippledCore)
include(RippledInstall)
include(RippledValidatorKeys)

@@ -1,516 +0,0 @@
The XRP Ledger has many and diverse stakeholders, and everyone deserves
a chance to contribute meaningful changes to the code that runs the
XRPL.
# Contributing
We assume you are familiar with the general practice of [making
contributions on GitHub][1]. This file includes only special
instructions specific to this project.
## Before you start
In general, contributions should be developed in your personal
[fork](https://github.com/XRPLF/rippled/fork).
The following branches exist in the main project repository:
- `develop`: The latest set of unreleased features, and the most common
starting point for contributions.
- `release`: The latest beta release or release candidate.
- `master`: The latest stable release.
- `gh-pages`: The documentation for this project, built by Doxygen.
The tip of each branch must be signed. In order for GitHub to sign a
squashed commit that it builds from your pull request, GitHub must know
your verifying key. Please set up [signature verification][signing].
[rippled]: https://github.com/XRPLF/rippled
[signing]:
https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification
## Major contributions
If your contribution is a major feature or breaking change, then you
must first write an XRP Ledger Standard (XLS) describing it. Go to
[XRPL-Standards](https://github.com/XRPLF/XRPL-Standards/discussions),
choose the next available standard number, and open a discussion with an
appropriate title to propose your draft standard.
When you submit a pull request, please link the corresponding XLS in the
description. An XLS still in draft status is considered a
work-in-progress and open for discussion. Please allow time for
questions, suggestions, and changes to the XLS draft. It is the
responsibility of the XLS author to update the draft to match the final
implementation when its corresponding pull request is merged, unless the
author delegates that responsibility to others.
## Before making a pull request
Changes that alter transaction processing must be guarded by an
[Amendment](https://xrpl.org/amendments.html).
All other changes that maintain the existing behavior do not need an
Amendment.
Ensure that your code compiles according to the build instructions in
[`BUILD.md`](./BUILD.md).
If you create new source files, they must go under `src/ripple`.
You will need to add them to one of the
[source lists](./Builds/CMake/RippledCore.cmake) in CMake.
Please write tests for your code.
If you create new test source files, they must go under `src/test`.
You will need to add them to one of the
[source lists](./Builds/CMake/RippledCore.cmake) in CMake.
If your test can be run offline, in under 60 seconds, then it can be an
automatic test run by `rippled --unittest`.
Otherwise, it must be a manual test.
The source must be formatted according to the style guide below.
Header includes must be [levelized](./Builds/levelization).
Changes should usually be squashed down into a single commit.
Some larger or more complicated change sets make more sense,
and are easier to review, if organized into multiple logical commits.
Either way, all commits should fit the following criteria:
* Changes should be presented in a single commit or a logical
sequence of commits.
Specifically, chronological commits that simply
reflect the history of how the author implemented
the change, "warts and all", are not useful to
reviewers.
* Every commit should have a [good message](#good-commit-messages)
to explain specific aspects of the change.
* Every commit should be signed.
* Every commit should be well-formed (builds successfully,
unit tests passing), as this helps to resolve merge
conflicts, and makes it easier to use `git bisect`
to find bugs.
### Good commit messages
Refer to
["How to Write a Git Commit Message"](https://cbea.ms/git-commit/)
for general rules on writing a good commit message.
In addition to those guidelines, please add one of the following
prefixes to the subject line if appropriate.
* `fix:` - The primary purpose is to fix an existing bug.
* `perf:` - The primary purpose is performance improvements.
* `refactor:` - The changes refactor code without affecting
functionality.
* `test:` - The changes _only_ affect unit tests.
* `docs:` - The changes _only_ affect documentation. This can
include code comments in addition to `.md` files like this one.
* `build:` - The changes _only_ affect the build process,
including CMake and/or Conan settings.
* `chore:` - Other tasks that don't affect the binary, but don't fit
any of the other cases. e.g. formatting, git settings, updating
Github Actions jobs.
Whenever possible, when updating commits after the PR is open, please
add the PR number to the end of the subject line. e.g. `test: Add
unit tests for Feature X (#1234)`.
## Pull requests
In general, pull requests use `develop` as the base branch.
(Hotfixes are an exception.)
If your changes are not quite ready, but you want to make it easily available
for preliminary examination or review, you can create a "Draft" pull request.
While a pull request is marked as a "Draft", you can rebase or reorganize the
commits in the pull request as desired.
Github pull requests are created as "Ready" by default, or you can mark
a "Draft" pull request as "Ready".
Once a pull request is marked as "Ready",
any changes must be added as new commits. Do not
force-push to a branch in a pull request under review.
(This includes rebasing your branch onto the updated base branch.
Use a merge operation instead, or hit the "Update branch" button
at the bottom of the Github PR page.)
This preserves the ability for reviewers to filter changes since their last
review.
A pull request must obtain **approvals from at least two reviewers**
before it can be considered for merge by a Maintainer.
Maintainers retain discretion to require more approvals if they feel the
credibility of the existing approvals is insufficient.
Pull requests must be merged by [squash-and-merge][2]
to preserve a linear history for the `develop` branch.
### When and how to merge pull requests
#### "Passed"
A pull request should only have the "Passed" label added when it
meets a few criteria:
1. It must have two approving reviews [as described
above](#pull-requests). (Exception: PRs that are deemed "trivial"
only need one approval.)
2. All CI checks must be complete and passed. (One-off failures may
be acceptable if they are related to a known issue.)
3. The PR must have a [good commit message](#good-commit-messages).
* If the PR started with a good commit message, and it doesn't
need to be updated, the author can indicate that in a comment.
* Any contributor, preferably the author, can leave a comment
suggesting a commit message.
* If the author squashes and rebases the code in preparation for
merge, they should also ensure the commit message(s) are updated
as well.
4. The PR branch must be up to date with the base branch (usually
`develop`). This is usually accomplished by merging the base branch
into the feature branch, but if the other criteria are met, the
changes can be squashed and rebased on top of the base branch.
5. Finally, and most importantly, the author of the PR must
positively indicate that the PR is ready to merge. That can be
accomplished by adding the "Passed" label if their role allows,
or by leaving a comment to the effect that the PR is ready to
merge.
Once the "Passed" label is added, a maintainer may merge the PR at
any time, so don't use it lightly.
#### Instructions for maintainers
The maintainer should double-check that the PR has met all the
necessary criteria, and can request additional information from the
owner, or additional reviews, and can always feel free to remove the
"Passed" label if appropriate. The maintainer has final say on
whether a PR gets merged, and is encouraged to communicate any
issues or concerns to other maintainers.
##### Most pull requests: "Squash and merge"
Most pull requests don't need special handling, and can simply be
merged using the "Squash and merge" button on the Github UI. Update
the suggested commit message if necessary.
##### Slightly more complicated pull requests
Some pull requests need to be pushed to `develop` as more than one
commit. There are multiple ways to accomplish this. If the author
describes a process, and it is reasonable, follow it. Otherwise, do
a fast forward only merge (`--ff-only`) on the command line and push.
Either way, check that:
* The commits are based on the current tip of `develop`.
* The commits are clean: No merge commits (except when reverse
merging), no "[FOLD]" or "fixup!" messages.
* All commits are signed. If the commits are not signed by the author, use
`git commit --amend -S` to sign them yourself.
* At least one (but preferably all) of the commits has the PR number
in the commit message.
**Never use the "Create a merge commit" or "Rebase and merge"
functions!**
##### Releases, release candidates, and betas
All releases, including release candidates and betas, are handled
differently from typical PRs. Most importantly, never use
the Github UI to merge a release.
1. There are two possible conditions that the `develop` branch will
be in when preparing a release.
1. Ready or almost ready to go: There may be one or two PRs that
need to be merged, but otherwise, the only change needed is to
update the version number in `BuildInfo.cpp`. In this case,
merge those PRs as appropriate, updating the second one, and
waiting for CI to finish in between. Then update
`BuildInfo.cpp`.
2. Several pending PRs: In this case, do not use the Github UI,
because the delays waiting for CI in between each merge will be
unnecessarily onerous. Instead, create a working branch (e.g.
`develop-next`) based off of `develop`. Squash the changes
from each PR onto the branch, one commit each (unless
more are needed), being sure to sign each commit and update
the commit message to include the PR number. You may be able
to use a fast-forward merge for the first PR. The workflow may
look something like:
```
git fetch upstream
git checkout upstream/develop
git checkout -b develop-next
# Use -S on the ff-only merge if prbranch1 isn't signed.
# Or do another branch first.
git merge --ff-only user1/prbranch1
git merge --squash user2/prbranch2
git commit -S
git merge --squash user3/prbranch3
git commit -S
[...]
git push --set-upstream origin develop-next
```
2. Create the Pull Request with `release` as the base branch. If any
of the included PRs are still open,
[use closing keywords](https://docs.github.com/articles/closing-issues-using-keywords)
in the description to ensure they are closed when the code is
released. e.g. "Closes #1234"
3. Instead of the default template, reuse and update the message from
the previous release. Include the following verbiage somewhere in
the description:
```
The base branch is release. All releases (including betas) go in
release. This PR will be merged with --ff-only (not squashed or
rebased, and not using the GitHub UI) to both release and develop.
```
4. Sign-offs for the three platforms usually occur offline, but at
least one approval will be needed on the PR.
5. Once everything is ready to go, open a terminal, and do the
fast-forward merges manually. Do not push any branches until you
verify that all of them update correctly.
```
git fetch upstream
git checkout -b upstream--develop -t upstream/develop || git checkout upstream--develop
git reset --hard upstream/develop
# develop-next must be signed already!
git merge --ff-only origin/develop-next
git checkout -b upstream--release -t upstream/release || git checkout upstream--release
git reset --hard upstream/release
git merge --ff-only origin/develop-next
# Only do these 3 steps if pushing a release. No betas or RCs
git checkout -b upstream--master -t upstream/master || git checkout upstream--master
git reset --hard upstream/master
git merge --ff-only origin/develop-next
# Check that all of the branches are updated
git log -1 --oneline
# The output should look like:
# 02ec8b7962 (HEAD -> upstream--master, origin/develop-next, upstream--release, upstream--develop, develop-next) Set version to 2.2.0-rc1
# Note that all of the upstream--develop/release/master are on this commit.
# (Master will be missing for betas, etc.)
# Just to be safe, do a dry run first:
git push --dry-run upstream-push HEAD:develop
git push --dry-run upstream-push HEAD:release
# git push --dry-run upstream-push HEAD:master
# Now push
git push upstream-push HEAD:develop
git push upstream-push HEAD:release
# git push upstream-push HEAD:master
# Don't forget to tag the release, too.
git tag <version number>
git push upstream-push <version number>
```
6. Finally
[create a new release on Github](https://github.com/XRPLF/rippled/releases).
# Style guide
This is a non-exhaustive list of recommended style guidelines. These are
not always strictly enforced and serve as a way to keep the codebase
coherent rather than a set of _thou shalt not_ commandments.
## Formatting
All code must conform to `clang-format` version 10,
according to the settings in [`.clang-format`](./.clang-format),
unless the result would be unreasonably difficult to read or maintain.
To demarcate lines that should be left as-is, surround them with comments like
this:
```
// clang-format off
...
// clang-format on
```
You can format individual files in place by running `clang-format -i <file>...`
from any directory within this project.
There is a Continuous Integration job that runs clang-format on pull requests. If the code doesn't comply, a patch file that corrects auto-fixable formatting issues is generated.
To download the patch file:
1. Next to `clang-format / check (pull_request) Failing after #s` -> click **Details** to open the details page.
2. Left menu -> click **Summary**
3. Scroll down to near the bottom-right under `Artifacts` -> click **clang-format.patch**
4. Download the zip file and extract it to your local git repository. Run `git apply [patch-file-name]`.
5. Commit and push.
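
For example, assuming the artifact downloads as `clang-format.patch.zip` and
extracts to a file named `clang-format.patch` (the exact names may vary), the
workflow might look like:
```
unzip clang-format.patch.zip
git apply clang-format.patch
git commit -am "Apply clang-format patch"
git push
```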
You can install a pre-commit hook to automatically run `clang-format` before every commit:
```
pip3 install pre-commit
pre-commit install
```
## Contracts and instrumentation
We are using [Antithesis](https://antithesis.com/) for continuous fuzzing,
and keep a copy of the [Antithesis C++ SDK](https://github.com/antithesishq/antithesis-sdk-cpp/)
in `external/antithesis-sdk`. One of the aims of fuzzing is to identify bugs
by finding external conditions which cause contract violations inside `rippled`.
The contracts are expressed as `XRPL_ASSERT` or `UNREACHABLE` (defined in
`include/xrpl/beast/utility/instrumentation.h`), which are effectively (outside
of Antithesis) wrappers for `assert(...)` with an added name. The purpose of
the name is to give each contract a stable identity that does not rely on line
numbers.
When `rippled` is built with the Antithesis instrumentation enabled
(using the `voidstar` CMake option) and run on the Antithesis platform, the
contracts become
[test properties](https://antithesis.com/docs/using_antithesis/properties.html);
otherwise they behave just like a regular `assert`.
To learn more about Antithesis, see
[How Antithesis Works](https://antithesis.com/docs/introduction/how_antithesis_works.html)
and the [C++ SDK overview](https://antithesis.com/docs/using_antithesis/sdk/cpp/overview.html).
We continue to use the old-style `assert` or `assert(false)` in certain
locations, where the reporting of contract violations on the Antithesis
platform is either not possible or not useful.

For this reason:

* The locations where `assert` or `assert(false)` contracts should continue to be used are:
  * `constexpr` functions
  * unit tests, i.e. files under `src/test`
  * unit test-related modules (files under `beast/test` and `beast/unit_test`)
* Outside of the listed locations, do not use `assert`; use `XRPL_ASSERT`
  instead, giving it a unique name with a short description of the contract.
* Outside of the listed locations, do not use `assert(false)`; use
  `UNREACHABLE` instead, giving it a unique name with a description of the
  condition being violated.
* The contract name should start with the full name (including scope) of the
  function, optionally a named lambda, followed by a colon ` : ` and a brief
  (typically at most five words) description. `UNREACHABLE` contracts
  can use slightly longer descriptions. If there are multiple overloads of the
  function, use common sense to balance both brevity and unambiguity of the
  function name. NOTE: the purpose of the name is to provide a stable means of
  uniquely identifying every contract; for this reason, try to avoid elements
  which can change in obvious refactors or when reinforcing the condition.
* The contract description should typically (except for `UNREACHABLE`)
  describe the _expected_ condition, as in "I assert that _expected_ is true".
* The contract description for `UNREACHABLE` should describe the _unexpected_
  situation which caused the line to be reached.
* An example of a good name for an `UNREACHABLE` macro is
  `"Json::operator==(Value, Value) : invalid type"`; an example of a good name
  for an `XRPL_ASSERT` macro is `"Json::Value::asCString : valid type"`.
* An example of a **bad** name is
  `"RFC1751::insert(char* s, int x, int start, int length) : length is greater than or equal zero"`
  (missing namespace, unnecessary full function signature, description too
  verbose). A good name would be `"ripple::RFC1751::insert : minimum length"`.
* In a **few** well-justified cases a non-standard name can be used, in which
  case a comment should be placed to explain the rationale (there is an
  example in `contract.cpp`).
* Do **not** rename a contract without a good reason (e.g. the name no longer
  reflects the location or the condition being checked).
* Do not use `std::unreachable`.
* Do not put contracts where they can be violated by an external condition
  (e.g. timing, data payload before mandatory validation, etc.), as this
  creates bogus bug reports (and causes crashes of Debug builds).
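
As a minimal sketch of these conventions (the function below is hypothetical
and exists only to illustrate contract naming; the macros are the ones from
`include/xrpl/beast/utility/instrumentation.h`):
```
#include <xrpl/beast/utility/instrumentation.h>

#include <string>

namespace ripple {

enum class Side { buy, sell };

// Hypothetical function used only to illustrate contract naming.
std::string
sideLabel(Side side)
{
    // Expected condition: "I assert that the side is valid."
    XRPL_ASSERT(
        side == Side::buy || side == Side::sell,
        "ripple::sideLabel : valid side");
    switch (side)
    {
        case Side::buy:
            return "buy";
        case Side::sell:
            return "sell";
        default:
            // Unexpected situation which caused this line to be reached.
            UNREACHABLE("ripple::sideLabel : invalid side");
            return "unknown";
    }
}

}  // namespace ripple
```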
## Unit Tests
To execute all unit tests:
```
rippled --unittest --unittest-jobs=<number of cores>
```
(Note: Using multiple cores on a Mac M1 can cause spurious test failures. The
cause is still under investigation. If you observe this problem, try specifying fewer jobs.)
To run a specific set of test suites:
```
rippled --unittest TestSuiteName
```
Note: In this example, all tests with the prefix `TestSuiteName` will be run,
so if `TestSuiteName1` and `TestSuiteName2` both exist, both will run.
However, if the given name exactly matches a suite, partial matching is
skipped: if a suite titled `TestSuiteName` exists, then only `TestSuiteName`
will be executed.
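
For example, assuming suites named `TestSuiteName`, `TestSuiteName1`, and
`TestSuiteName2` all exist:
```
rippled --unittest TestSuite       # no exact match: runs all three suites
rippled --unittest TestSuiteName   # exact match: runs only TestSuiteName
```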
## Avoid
1. Proliferation of nearly identical code.
2. Proliferation of new files and classes.
3. Complex inheritance and complex OOP patterns.
4. Unmanaged memory allocation and raw pointers.
5. Macros and non-trivial templates (unless they add significant value).
6. Lambda patterns (unless these add significant value).
7. CPU or architecture-specific code unless there is a good reason to
include it, and where it is used, guard it with macros and provide
explanatory comments.
8. Importing new libraries unless there is a very good reason to do so.
## Seek to
9. Extend functionality of existing code rather than creating new code.
10. Prefer readability over terseness where important logic is
concerned.
11. Inline functions that are not used or are not likely to be used
elsewhere in the codebase.
12. Use clear and self-explanatory names for functions, variables,
structs and classes.
13. Use TitleCase for classes, structs and filenames, camelCase for
function and variable names, lower case for namespaces and folders.
14. Provide as many comments as you feel that a competent programmer
would need to understand what your code does.
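
For instance, a small sketch of the naming conventions above (all names are
hypothetical):
```
namespace ripple {  // lower case for namespaces

struct LedgerEntry  // TitleCase for structs and classes
{
    int sequenceNumber;  // camelCase for variables

    // camelCase for functions
    int
    nextSequence() const
    {
        return sequenceNumber + 1;
    }
};

}  // namespace ripple
```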
# Maintainers
Maintainers are ecosystem participants with elevated access to the repository.
They are able to push new code, make decisions on when a release should be
made, etc.
## Adding and removing
New maintainers can be proposed by two existing maintainers, subject to a vote
by a quorum of the existing maintainers.
A minimum of 50% support and 50% participation is required.
In the event of a tie vote, the addition of the new maintainer will be
rejected.
Existing maintainers can resign, or be subject to a vote for removal at the
behest of two existing maintainers.
A minimum of 60% agreement and 50% participation are required.
The XRP Ledger Foundation will have the ability, for cause, to remove an
existing maintainer without a vote.
## Current Maintainers
Maintainers are users with admin access to the repo. Maintainers do not typically approve or deny pull requests.
* [intelliot](https://github.com/intelliot) (Ripple)
* [JoelKatz](https://github.com/JoelKatz) (Ripple)
* [nixer89](https://github.com/nixer89) (XRP Ledger Foundation)
* [Silkjaer](https://github.com/Silkjaer) (XRP Ledger Foundation)
* [WietseWind](https://github.com/WietseWind) (XRPL Labs + XRP Ledger Foundation)
## Current Code Reviewers
Code Reviewers are developers who have the ability to review and approve source code changes.
* [HowardHinnant](https://github.com/HowardHinnant) (Ripple)
* [scottschurr](https://github.com/scottschurr) (Ripple)
* [seelabs](https://github.com/seelabs) (Ripple)
* [Ed Hennis](https://github.com/ximinez) (Ripple)
* [mvadari](https://github.com/mvadari) (Ripple)
* [thejohnfreeman](https://github.com/thejohnfreeman) (Ripple)
* [Bronek](https://github.com/Bronek) (Ripple)
* [manojsdoshi](https://github.com/manojsdoshi) (Ripple)
* [godexsoft](https://github.com/godexsoft) (Ripple)
* [mDuo13](https://github.com/mDuo13) (Ripple)
* [ckniffen](https://github.com/ckniffen) (Ripple)
* [arihantkothari](https://github.com/arihantkothari) (Ripple)
* [pwang200](https://github.com/pwang200) (Ripple)
* [sophiax851](https://github.com/sophiax851) (Ripple)
* [shawnxie999](https://github.com/shawnxie999) (Ripple)
* [gregtatcam](https://github.com/gregtatcam) (Ripple)
* [mtrippled](https://github.com/mtrippled) (Ripple)
* [ckeshava](https://github.com/ckeshava) (Ripple)
* [nbougalis](https://github.com/nbougalis) None
* [RichardAH](https://github.com/RichardAH) (XRPL Labs + XRP Ledger Foundation)
* [dangell7](https://github.com/dangell7) (XRPL Labs)
[1]: https://docs.github.com/en/get-started/quickstart/contributing-to-projects
[2]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/about-pull-request-merges#squash-and-merge-your-commits

LICENSE
@@ -1,17 +0,0 @@
ISC License
Copyright (c) 2011, Arthur Britto, David Schwartz, Jed McCaleb, Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant.
Copyright (c) 2012-2020, the XRP Ledger developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

README.md

@@ -1,69 +1,41 @@
# The XRP Ledger
#rippled - Ripple P2P server
The [XRP Ledger](https://xrpl.org/) is a decentralized cryptographic ledger powered by a network of peer-to-peer nodes. The XRP Ledger uses a novel Byzantine Fault Tolerant consensus algorithm to settle and record transactions in a secure distributed database without a central operator.
##[![Build Status](https://travis-ci.org/ripple/rippled.png?branch=develop)](https://travis-ci.org/ripple/rippled)
## XRP
[XRP](https://xrpl.org/xrp.html) is a public, counterparty-free asset native to the XRP Ledger, and is designed to bridge the many different currencies in use worldwide. XRP is traded on the open-market and is available for anyone to access. The XRP Ledger was created in 2012 with a finite supply of 100 billion units of XRP.
This is the repository for Ripple's `rippled`, reference P2P server.
## rippled
The server software that powers the XRP Ledger is called `rippled` and is available in this repository under the permissive [ISC open-source license](LICENSE.md). The `rippled` server software is written primarily in C++ and runs on a variety of platforms. The `rippled` server software can run in several modes depending on its [configuration](https://xrpl.org/rippled-server-modes.html).
###Build instructions:
* https://ripple.com/wiki/Rippled_build_instructions
If you are interested in running an **API Server** (including a **Full History Server**), take a look at [Clio](https://github.com/XRPLF/clio). (rippled Reporting Mode has been replaced by Clio.)
### Build from Source
* [Read the build instructions in `BUILD.md`](BUILD.md)
* If you encounter any issues, please [open an issue](https://github.com/XRPLF/rippled/issues)
## Key Features of the XRP Ledger
- **[Censorship-Resistant Transaction Processing][]:** No single party decides which transactions succeed or fail, and no one can "roll back" a transaction after it completes. As long as those who choose to participate in the network keep it healthy, they can settle transactions in seconds.
- **[Fast, Efficient Consensus Algorithm][]:** The XRP Ledger's consensus algorithm settles transactions in 4 to 5 seconds, processing at a throughput of up to 1500 transactions per second. These properties put XRP at least an order of magnitude ahead of other top digital assets.
- **[Finite XRP Supply][]:** When the XRP Ledger began, 100 billion XRP were created, and no more XRP will ever be created. The available supply of XRP decreases slowly over time as small amounts are destroyed to pay transaction costs.
- **[Responsible Software Governance][]:** A team of full-time, world-class developers at Ripple maintain and continually improve the XRP Ledger's underlying software with contributions from the open-source community. Ripple acts as a steward for the technology and an advocate for its interests, and builds constructive relationships with governments and financial institutions worldwide.
- **[Secure, Adaptable Cryptography][]:** The XRP Ledger relies on industry standard digital signature systems like ECDSA (the same scheme used by Bitcoin) but also supports modern, efficient algorithms like Ed25519. The extensible nature of the XRP Ledger's software makes it possible to add and disable algorithms as the state of the art in cryptography advances.
- **[Modern Features for Smart Contracts][]:** Features like Escrow, Checks, and Payment Channels support cutting-edge financial applications including the [Interledger Protocol](https://interledger.org/). This toolbox of advanced features comes with safety features like a process for amending the network and separate checks against invariant constraints.
- **[On-Ledger Decentralized Exchange][]:** In addition to all the features that make XRP useful on its own, the XRP Ledger also has a fully-functional accounting system for tracking and trading obligations denominated in any way users want, and an exchange built into the protocol. The XRP Ledger can settle long, cross-currency payment paths and exchanges of multiple currencies in atomic transactions, bridging gaps of trust with XRP.
[Censorship-Resistant Transaction Processing]: https://xrpl.org/xrp-ledger-overview.html#censorship-resistant-transaction-processing
[Fast, Efficient Consensus Algorithm]: https://xrpl.org/xrp-ledger-overview.html#fast-efficient-consensus-algorithm
[Finite XRP Supply]: https://xrpl.org/xrp-ledger-overview.html#finite-xrp-supply
[Responsible Software Governance]: https://xrpl.org/xrp-ledger-overview.html#responsible-software-governance
[Secure, Adaptable Cryptography]: https://xrpl.org/xrp-ledger-overview.html#secure-adaptable-cryptography
[Modern Features for Smart Contracts]: https://xrpl.org/xrp-ledger-overview.html#modern-features-for-smart-contracts
[On-Ledger Decentralized Exchange]: https://xrpl.org/xrp-ledger-overview.html#on-ledger-decentralized-exchange
## Source Code
Here are some good places to start learning the source code:
- Read the markdown files in the source tree: `src/ripple/**/*.md`.
- Read [the levelization document](./Builds/levelization) to get an idea of the internal dependency graph.
- In the big picture, the `main` function constructs an `ApplicationImp` object, which implements the `Application` virtual interface. Almost every component in the application takes an `Application&` parameter in its constructor, typically named `app` and stored as a member variable `app_`. This allows most components to depend on any other component.
###Setup instructions:
* https://ripple.com/wiki/Rippled_setup_instructions
### Repository Contents
| Folder | Contents |
|:-----------|:-------------------------------------------------|
| `./bin` | Scripts and data files for Ripple integrators. |
| `./Builds` | Platform-specific guides for building `rippled`. |
| `./docs` | Source documentation files and doxygen config. |
| `./cfg` | Example configuration files. |
| `./src` | Source code. |
#### ./bin
Scripts and data files for Ripple integrators.
Some of the directories under `src` are external repositories included using
git-subtree. See those directories' README files for more details.
#### ./build
Intermediate and final build outputs.
#### ./Builds
Platform or IDE-specific project files.
## Additional Documentation
#### ./doc
Documentation and example configuration files.
* [XRP Ledger Dev Portal](https://xrpl.org/)
* [Setup and Installation](https://xrpl.org/install-rippled.html)
* [Source Documentation (Doxygen)](https://xrplf.github.io/rippled/)
#### ./src
Source code directory. Some of the directories contained here are
external repositories inlined via git-subtree, see the corresponding
README for more details.
## See Also
#### ./test
Javascript / Mocha tests.
* [Clio API Server for the XRP Ledger](https://github.com/XRPLF/clio)
* [Mailing List for Release Announcements](https://groups.google.com/g/ripple-server)
* [Learn more about the XRP Ledger (YouTube)](https://www.youtube.com/playlist?list=PLJQ55Tj1hIVZtJ_JdTvSum2qMTsedWkNi)
## License
Ripple is open source and permissively licensed under the ISC license. See the
LICENSE file for more details.
###For more information:
* https://ripple.com
* https://ripple.com/wiki

File diff suppressed because it is too large.

SConstruct

@@ -0,0 +1,472 @@
#
# Ripple - SConstruct
#
import commands
import copy
import glob
import os
import platform
import re
import sys
import textwrap
OSX = bool(platform.mac_ver()[0])
FreeBSD = bool('FreeBSD' == platform.system())
Linux = bool('Linux' == platform.system())
Ubuntu = bool(Linux and 'Ubuntu' == platform.linux_distribution()[0])
Debian = bool(Linux and 'debian' == platform.linux_distribution()[0])
Fedora = bool(Linux and 'Fedora' == platform.linux_distribution()[0])
Archlinux = bool(Linux and ('','','') == platform.linux_distribution()) #Arch still has issues with the platform module
USING_CLANG = OSX or os.environ.get('CC', None) == 'clang'
#
# We expect this to be set
#
BOOST_HOME = os.environ.get("RIPPLED_BOOST_HOME", None)
if OSX or Ubuntu or Debian or Archlinux:
CTAGS = 'ctags'
elif FreeBSD:
CTAGS = 'exctags'
else:
CTAGS = 'exuberant-ctags'
#
# scons tools
#
HONOR_ENVS = ['CC', 'CXX', 'PATH']
env = Environment(
tools = ['default', 'protoc'],
ENV = dict((k, os.environ[k]) for k in HONOR_ENVS if k in os.environ)
)
if os.environ.get('CC', None):
env.Replace(CC = os.environ['CC'])
if os.environ.get('CXX', None):
env.Replace(CXX = os.environ['CXX'])
if os.environ.get('PATH', None):
env.Replace(PATH = os.environ['PATH'])
# Use a newer gcc on FreeBSD
if FreeBSD:
env.Replace(CC = 'gcc46')
env.Replace(CXX = 'g++46')
env.Append(CCFLAGS = ['-Wl,-rpath=/usr/local/lib/gcc46'])
env.Append(LINKFLAGS = ['-Wl,-rpath=/usr/local/lib/gcc46'])
if USING_CLANG:
env.Replace(CC= 'clang')
env.Replace(CXX= 'clang++')
if Linux:
env.Append(CXXFLAGS = ['-std=c++11', '-stdlib=libstdc++'])
env.Append(LINKFLAGS='-stdlib=libstdc++')
if OSX:
env.Append(CXXFLAGS = ['-std=c++11', '-stdlib=libc++',
'-Wno-deprecated-register'])
env.Append(LINKFLAGS='-stdlib=libc++')
env['FRAMEWORKS'] = ['AppKit','Foundation']
GCC_VERSION = re.split('\.', commands.getoutput(env['CXX'] + ' -dumpversion'))
# Add support for ccache. Usage: scons ccache=1
ccache = ARGUMENTS.get('ccache', 0)
if int(ccache):
env.Prepend(CC = ['ccache'])
env.Prepend(CXX = ['ccache'])
ccache_dir = os.getenv('CCACHE_DIR')
if ccache_dir:
env.Replace(CCACHE_DIR = ccache_dir)
#
# Builder for CTags
#
ctags = Builder(action = '$CTAGS $CTAGSOPTIONS -f $TARGET $SOURCES')
env.Append(BUILDERS = { 'CTags' : ctags })
if OSX:
env.Replace(CTAGS = CTAGS)
else:
env.Replace(CTAGS = CTAGS, CTAGSOPTIONS = '--tag-relative')
# Use openssl
env.ParseConfig('pkg-config --static --cflags --libs openssl')
# Use protobuf
env.ParseConfig('pkg-config --static --cflags --libs protobuf')
# Beast uses kvm on FreeBSD
if FreeBSD:
env.Append (
LIBS = [
'kvm'
]
)
# The required version of boost is documented in the README file.
BOOST_LIBS = [
'boost_date_time',
'boost_filesystem',
'boost_program_options',
'boost_regex',
'boost_system',
'boost_thread',
]
# We whitelist platforms where the non -mt version is linked with pthreads. This
# can be verified with: ldd libboost_filesystem.* If a threading library is
# included the platform can be whitelisted.
# if FreeBSD or Ubuntu or Archlinux:
if not (USING_CLANG and Linux) and (FreeBSD or Ubuntu or Archlinux or Debian or OSX or Fedora):
# non-mt libs do link with pthreads.
env.Append(
LIBS = BOOST_LIBS
)
elif Linux and USING_CLANG and Ubuntu:
# It's likely going to be here if using boost 1.55
boost_statics = [ ("/usr/lib/x86_64-linux-gnu/lib%s.a" % a) for a in
BOOST_LIBS ]
if not all(os.path.exists(f) for f in boost_statics):
# Else here
boost_statics = [("/usr/lib/lib%s.a" % a) for a in BOOST_LIBS]
env.Append(LIBS = [File(f) for f in boost_statics])
else:
env.Append(
LIBS = [l + '-mt' for l in BOOST_LIBS]
)
#-------------------------------------------------------------------------------
# Change the way that information is printed so that we can get a nice
# output
#-------------------------------------------------------------------------------
BuildLogFile = None
def print_cmd_line_worker(item, fmt, cmd):
sys.stdout.write(fmt % (" \033[94m" + item + "\033[0m"))
global BuildLogFile
if not BuildLogFile:
BuildLogFile = open('rippled-build.log', 'w')
if BuildLogFile:
wrapper = textwrap.TextWrapper()
wrapper.break_long_words = False
wrapper.break_on_hyphens = False
wrapper.width = 75
lines = wrapper.wrap(cmd)
for line in lines:
BuildLogFile.write("%s\n" % line)
def print_cmd_line(s, target, src, env):
target = (''.join([str(x) for x in target]))
source = (''.join([str(x) for x in src]))
if ('build/rippled' == target):
print_cmd_line_worker(target, "%s\n", s)
elif ('tags' == target):
sys.stdout.write(" Generating tags")
else:
print_cmd_line_worker(source, "%s\n", s)
# Originally, we wanted to suppress verbose display when running on Travis,
# but we no longer want that functionality. Just use the following if to
# get the suppression functionality again:
#
#if (os.environ.get('TRAVIS', '0') != 'true') and
# (os.environ.get('CI', '0') != 'true'):
#
# env['PRINT_CMD_LINE_FUNC'] = print_cmd_line
env['PRINT_CMD_LINE_FUNC'] = print_cmd_line
#-------------------------------------------------------------------------------
#
# VFALCO NOTE Clean area.
#
#-------------------------------------------------------------------------------
#
# Nothing having to do with directories, source files,
# or include paths should reside outside the boundaries.
#
# List of includes passed to the C++ compiler.
# These are all relative to the repo dir.
#
INCLUDE_PATHS = [
'.',
'src/leveldb',
'src/leveldb/port',
'src/leveldb/include'
]
# if BOOST_HOME:
# INCLUDE_PATHS.append(BOOST_HOME)
#-------------------------------------------------------------------------------
#
# Compiled sources
#
COMPILED_FILES = []
# -------------------
# Beast unity sources
#
if OSX:
# OSX: Use the Objective C++ version of beast_core
COMPILED_FILES.extend (['src/ripple/beast/ripple_beastobjc.mm'])
else:
COMPILED_FILES.extend (['src/ripple/beast/ripple_beast.cpp'])
COMPILED_FILES.extend (['src/ripple/beast/ripple_beastc.c'])
# ------------------------------
# New-style Ripple unity sources
#
COMPILED_FILES.extend([
'src/ripple/http/ripple_http.cpp',
'src/ripple/json/ripple_json.cpp',
'src/ripple/peerfinder/ripple_peerfinder.cpp',
'src/ripple/radmap/ripple_radmap.cpp',
'src/ripple/resource/ripple_resource.cpp',
'src/ripple/rocksdb/ripple_rocksdb.cpp',
'src/ripple/sitefiles/ripple_sitefiles.cpp',
'src/ripple/sslutil/ripple_sslutil.cpp',
'src/ripple/testoverlay/ripple_testoverlay.cpp',
'src/ripple/types/ripple_types.cpp',
'src/ripple/validators/ripple_validators.cpp',
'src/ripple/common/ripple_common.cpp',
])
# ------------------------------
# Old-style Ripple unity sources
#
COMPILED_FILES.extend([
'src/ripple_app/ripple_app.cpp',
'src/ripple_app/ripple_app_pt1.cpp',
'src/ripple_app/ripple_app_pt2.cpp',
'src/ripple_app/ripple_app_pt3.cpp',
'src/ripple_app/ripple_app_pt4.cpp',
'src/ripple_app/ripple_app_pt5.cpp',
'src/ripple_app/ripple_app_pt6.cpp',
'src/ripple_app/ripple_app_pt7.cpp',
'src/ripple_app/ripple_app_pt8.cpp',
'src/ripple_app/ripple_app_pt9.cpp',
'src/ripple_basics/ripple_basics.cpp',
'src/ripple_core/ripple_core.cpp',
'src/ripple_data/ripple_data.cpp',
'src/ripple_hyperleveldb/ripple_hyperleveldb.cpp',
'src/ripple_leveldb/ripple_leveldb.cpp',
'src/ripple_net/ripple_net.cpp',
'src/ripple_overlay/ripple_overlay.cpp',
'src/ripple_rpc/ripple_rpc.cpp',
'src/ripple_websocket/ripple_websocket.cpp'
])
#
#
#-------------------------------------------------------------------------------
# Map top level source directories to their location in the outputs
#
VariantDir('build/obj/src', 'src', duplicate=0)
#-------------------------------------------------------------------------------
# Add the list of includes to compiler include paths.
#
for path in INCLUDE_PATHS:
env.Append (CPPPATH = [ path ])
if BOOST_HOME:
env.Prepend (CPPPATH = [ BOOST_HOME ])
#-------------------------------------------------------------------------------
# Apparently, only linux uses -ldl
if Linux: # not FreeBSD:
env.Append(
LIBS = [
'dl', # dynamic linking for linux
]
)
env.Append(
LIBS = \
# rt is for clock_nanosleep in beast
['rt'] if not OSX else [] +\
[
'z'
]
)
# We prepend, in case there's another BOOST somewhere on the path
# such, as installed into `/usr/lib/`
if BOOST_HOME is not None:
env.Prepend(
LIBPATH = ["%s/stage/lib" % BOOST_HOME])
if not OSX:
env.Append(LINKFLAGS = [
'-rdynamic',
'-pthread',
])
DEBUGFLAGS = ['-g', '-DDEBUG', '-D_DEBUG']
env.Append(CCFLAGS = ['-pthread', '-Wall', '-Wno-sign-compare', '-Wno-char-subscripts']+DEBUGFLAGS)
if not USING_CLANG:
more_warnings = ['-Wno-unused-local-typedefs']
else:
# This disables the "You said it was a struct AND a class, wth is going on
# warnings"
more_warnings = ['-Wno-mismatched-tags'] # add '-Wshorten-64-to-32' some day
# This needs to be a CCFLAGS not a CXXFLAGS
env.Append(CCFLAGS = more_warnings)
# add '-Wconversion' some day
env.Append(CXXFLAGS = ['-O3', '-fno-strict-aliasing', '-pthread', '-Wno-invalid-offsetof', '-Wformat']+more_warnings+DEBUGFLAGS)
# RTTI is required for Beast and CountedObject.
#
env.Append(CXXFLAGS = ['-frtti'])
UBUNTU_GCC_48_INSTALL_STEPS = '''
https://ripple.com/wiki/Ubuntu_build_instructions#Ubuntu_versions_older_than_13.10_:_Install_gcc_4.8'''
if not USING_CLANG:
if (int(GCC_VERSION[0]) == 4 and int(GCC_VERSION[1]) < 8):
print "\n\033[91mTo compile rippled using GCC you need version 4.8.1 or later.\033[0m\n"
if Ubuntu:
print "For information how to update your GCC, please visit:"
print UBUNTU_GCC_48_INSTALL_STEPS
print "\n"
sys.exit(1)
else:
env.Append(CXXFLAGS = ['-std=c++11'])
# FreeBSD doesn't support O_DSYNC
if FreeBSD:
env.Append(CPPFLAGS = ['-DMDB_DSYNC=O_SYNC'])
if OSX:
env.Append(LINKFLAGS = ['-L/usr/local/opt/openssl/lib'])
env.Append(CXXFLAGS = ['-I/usr/local/opt/openssl/include'])
# Determine if this is a Travis continuous integration build:
TravisBuild = (os.environ.get('TRAVIS', '0') == 'true') and \
(os.environ.get('CI', '0') == 'true')
RippleRepository = False
# Determine if we're building against the main ripple repo or a developer repo
if TravisBuild:
Slug = os.environ.get('TRAVIS_REPO_SLUG', '')
if (Slug.find ("ripple/") == 0):
RippleRepository = True
if TravisBuild:
env.Append(CFLAGS = ['-DTRAVIS_CI_BUILD'])
env.Append(CXXFLAGS = ['-DTRAVIS_CI_BUILD'])
if RippleRepository:
env.Append(CFLAGS = ['-DRIPPLE_MASTER_BUILD'])
env.Append(CXXFLAGS = ['-DRIPPLE_MASTER_BUILD'])
# Display build configuration information for debugging purposes
def print_nv_pair(n, v):
name = ("%s" % n.rjust(10))
sys.stdout.write("%s \033[94m%s\033[0m\n" % (name, v))
def print_build_config(var):
val = env.get(var, '')
if val and val != '':
name = ("%s" % var.rjust(10))
wrapper = textwrap.TextWrapper()
wrapper.break_long_words = False
wrapper.break_on_hyphens = False
wrapper.width = 69
if type(val) is str:
lines = wrapper.wrap(val)
else:
lines = wrapper.wrap(" ".join(str(x) for x in val))
for line in lines:
print_nv_pair (name, line)
name = " "
config_vars = ['CC', 'CXX', 'CFLAGS', 'CPPFLAGS', 'CXXFLAGS', 'LINKFLAGS', 'LIBS']
if TravisBuild:
Slug = os.environ.get('TRAVIS_REPO_SLUG', None)
Branch = os.environ.get('TRAVIS_BRANCH', None)
Commit = os.environ.get('TRAVIS_COMMIT', None)
sys.stdout.write("\nBuild Type:\n")
if (Slug.find ("ripple/") == 0):
print_nv_pair ("Build", "Travis - Ripple Master Repository")
else:
print_nv_pair ("Build", "Travis - Ripple Developer Fork")
if (Slug):
print_nv_pair ("Repo", Slug)
if (Branch):
print_nv_pair ("Branch", Branch)
if (Commit):
print_nv_pair ("Commit", Commit)
sys.stdout.write("\nConfiguration:\n")
for var in config_vars:
print_build_config(var)
sys.stdout.write("\nBuilding:\n")
PROTO_SRCS = env.Protoc([], 'src/ripple/proto/ripple.proto',
PROTOCOUTDIR='src/ripple/proto',
PROTOCPROTOPATH=['src/ripple/proto'],
PROTOCPYTHONOUTDIR=None)
# Only tag actual Ripple files.
TAG_SRCS = copy.copy(COMPILED_FILES)
# Derive the object files from the source files.
OBJECT_FILES = []
OBJECT_FILES.append(PROTO_SRCS[0])
for file in COMPILED_FILES:
OBJECT_FILES.append('build/obj/' + file)
#
# Targets
#
rippled = env.Program('build/rippled', OBJECT_FILES)
tags = env.CTags('tags', TAG_SRCS)
Default(rippled, tags)

SECURITY.md

@@ -1,149 +0,0 @@
### Operating an XRP Ledger server securely
For more details on operating an XRP Ledger server securely, please visit https://xrpl.org/manage-the-rippled-server.html.
# Security Policy
## Supported Versions
Software constantly evolves. In order to focus resources, we generally only accept vulnerability reports that affect recent and current versions of the software. We always accept reports for issues present in the **master**, **release** or **develop** branches, and in proposed, [open pull requests](https://github.com/ripple/rippled/pulls).
## Identifying and Reporting Vulnerabilities
We take security seriously and we do our best to ensure that all our releases are bug free. But we aren't perfect and sometimes things will slip through.
### Responsible Investigation
We urge you to examine our code carefully and responsibly, and to disclose any issues that you identify in a responsible fashion.
Responsible investigation includes, but isn't limited to, the following:
- Not performing tests on the main network. If testing is necessary, use the [Testnet or Devnet](https://xrpl.org/xrp-testnet-faucet.html).
- Not targeting physical security measures, or attempting to use social engineering, spam, distributed denial of service (DDOS) attacks, etc.
- Investigating bugs in a way that makes a reasonable, good faith effort not to be disruptive or harmful to the XRP Ledger and the broader ecosystem.
### Responsible Disclosure
If you discover a vulnerability or potential threat, or if you _think_
you have, please reach out by dropping an email using the contact
information below.
Your report should include the following:
- Your contact information (typically, an email address);
- The description of the vulnerability;
- The attack scenario (if any);
- The steps to reproduce the vulnerability;
- Any other relevant details or artifacts, including code, scripts or patches.
In your email, please describe the issue or potential threat. If possible, include a "repro" (code that can reproduce the issue) or describe the best way to reproduce the issue. Please make your report as detailed and comprehensive as possible.
For more information on responsible disclosure, please read this [Wikipedia article](https://en.wikipedia.org/wiki/Responsible_disclosure).
## Report Handling Process
Please report the bug directly to us and limit further disclosure. If you want to prove that you knew the bug as of a given time, consider using a cryptographic precommitment: hash the content of your report and publish the hash on a medium of your choice (e.g. on Twitter or as a memo in a transaction) as "proof" that you had written the text at a given point in time.
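
For example, one way to compute such a precommitment (assuming your report is saved as `report.txt`):
```
shasum -a 256 report.txt
```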
Once we receive a report, we:
1. Assign two people to independently evaluate the report;
2. Consider their recommendations;
3. If action is necessary, formulate a plan to address the issue;
4. Communicate privately with the reporter to explain our plan.
5. Prepare, test and release a version which fixes the issue; and
6. Announce the vulnerability publicly.
We will triage and respond to your disclosure within 24 hours. Beyond that, we will work to analyze the issue in more detail, formulate, develop and test a fix.
While we commit to responding within 24 hours of your initial report with our triage assessment, we cannot guarantee a response time for the remaining steps. We will communicate with you throughout this process, letting you know where we are and keeping you updated on the timeframe.
## Bug Bounty Program
[Ripple](https://ripple.com) is generously sponsoring a bug bounty program for vulnerabilities in [`rippled`](https://github.com/XRPLF/rippled) (and other related projects, like [`xrpl.js`](https://github.com/XRPLF/xrpl.js), [`xrpl-py`](https://github.com/XRPLF/xrpl-py), [`xrpl4j`](https://github.com/XRPLF/xrpl4j)).
This program allows us to recognize and reward individuals or groups that identify and report bugs. In summary, in order to qualify for a bounty, the bug must be:
1. **In scope**. Only bugs in software under the scope of the program qualify. Currently, that means `rippled`, `xrpl.js`, `xrpl-py`, `xrpl4j`.
2. **Relevant**. A security issue, posing a danger to user funds, privacy, or the operation of the XRP Ledger.
3. **Original and previously unknown**. Bugs that are already known and discussed in public do not qualify. Previously reported bugs, even if publicly unknown, are not eligible.
4. **Specific**. We welcome general security advice or recommendations, but we cannot pay bounties for that.
5. **Fixable**. There has to be something we can do to permanently fix the problem. Note that bugs in other people's software may still qualify in some cases. For example, if you find a bug in a library that we use which can compromise the security of software that is in scope and we can get it fixed, you may qualify for a bounty.
6. **Unused**. If you use the exploit to attack the XRP Ledger, you do not qualify for a bounty. If you report a vulnerability used in an ongoing or past attack and there is specific, concrete evidence that suggests you are the attacker, we reserve the right not to pay a bounty.
The amount paid varies dramatically. Vulnerabilities that are harmless on their own, but could form part of a critical exploit, will usually receive a bounty. Full-blown exploits can receive much higher bounties. Please don't hold back partial vulnerabilities while trying to construct a full-blown exploit. We will pay a bounty to anyone who reports a complete chain of vulnerabilities even if they have reported each component of the exploit separately and those vulnerabilities have been fixed in the meantime. However, to qualify for the full bounty, you must have been the first to report each of the partial exploits.
### Contacting Us
To report a qualifying bug, please send a detailed report to:
|Email Address|bugs@ripple.com |
|:-----------:|:----------------------------------------------------|
|Short Key ID | `0xC57929BE` |
|Long Key ID | `0xCD49A0AFC57929BE` |
|Fingerprint | `24E6 3B02 37E0 FA9C 5E96 8974 CD49 A0AF C579 29BE` |
The full PGP key for this address, which is also available on several key servers (e.g. on [keys.gnupg.net](https://keys.gnupg.net)), is:
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFUwGHYBEAC0wpGpBPkd8W1UdQjg9+cEFzeIEJRaoZoeuJD8mofwI5Ejnjdt
kCpUYEDal0ygkKobu8SzOoATcDl18iCrScX39VpTm96vISFZMhmOryYCIp4QLJNN
4HKc2ZdBj6W4igNi6vj5Qo6JMyGpLY2mz4CZskbt0TNuUxWrGood+UrCzpY8x7/N
a93fcvNw+prgCr0rCH3hAPmAFfsOBbtGzNnmq7xf3jg5r4Z4sDiNIF1X1y53DAfV
rWDx49IKsuCEJfPMp1MnBSvDvLaQ2hKXs+cOpx1BCZgHn3skouEUxxgqbtTzBLt1
xXpmuijsaltWngPnGO7mOAzbpZSdBm82/Emrk9bPMuD0QaLQjWr7HkTSUs6ZsKt4
7CLPdWqxyY/QVw9UaxeHEtWGQGMIQGgVJGh1fjtUr5O1sC9z9jXcQ0HuIHnRCTls
GP7hklJmfH5V4SyAJQ06/hLuEhUJ7dn+BlqCsT0tLmYTgZYNzNcLHcqBFMEZHvHw
9GENMx/tDXgajKql4bJnzuTK0iGU/YepanANLd1JHECJ4jzTtmKOus9SOGlB2/l1
0t0ADDYAS3eqOdOcUvo9ElSLCI5vSVHhShSte/n2FMWU+kMUboTUisEG8CgQnrng
g2CvvQvqDkeOtZeqMcC7HdiZS0q3LJUWtwA/ViwxrVlBDCxiTUXCotyBWwARAQAB
tDBSaXBwbGUgTGFicyBCdWcgQm91bnR5IFByb2dyYW0gPGJ1Z3NAcmlwcGxlLmNv
bT6JAjcEEwEKACEFAlUwGHYCGwMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ
zUmgr8V5Kb6R0g//SwY/mVJY59k87iL26/KayauSoOcz7xjcST26l4ZHVVX85gOY
HYZl8k0+m8X3zxeYm9a3QAoAml8sfoaFRFQP8ynnefRrLUPaZ2MjbJ0SACMwZNef
T6o7Mi8LBAaiNZdYVyIfX1oM6YXtqYkuJdav6ZCyvVYqc9OvMJPY2ZzJYuI/ZtvQ
/lTndxCeg9ALNX/iezOLGdfMpf4HuIFVwcPPlwGi+HDlB9/bggDEHC8z434SXVFc
aQatXAPcDkjMUweU7y0CZtYEj00HITd4pSX6MqGiHrxlDZTqinCOPs1Ieqp7qufs
MzlM6irLGucxj1+wa16ieyYvEtGaPIsksUKkywx0O7cf8N2qKg+eIkUk6O0Uc6eO
CszizmiXIXy4O6OiLlVHGKkXHMSW9Nwe9GE95O8G9WR8OZCEuDv+mHPAutO+IjdP
PDAAUvy+3XnkceO+HGWRpVvJZfFP2YH4A33InFL5yqlJmSoR/yVingGLxk55bZDM
+HYGR3VeMb8Xj1rf/02qERsZyccMCFdAvKDbTwmvglyHdVLu5sPmktxbBYiemfyJ
qxMxmYXCc9S0hWrWZW7edktBa9NpE58z1mx+hRIrDNbS2sDHrib9PULYCySyVYcF
P+PWEe1CAS5jqkR2ker5td2/pHNnJIycynBEs7l6zbc9fu+nktFJz0q2B+GJAhwE
EAEKAAYFAlUwGaQACgkQ+tiY1qQ2QkjMFw//f2hNY3BPNe+1qbhzumMDCnbTnGif
kLuAGl9OKt81VHG1f6RnaGiLpR696+6Ja45KzH15cQ5JJl5Bgs1YkR/noTGX8IAD
c70eNwiFu8JXTaaeeJrsmFkF9Tueufb364risYkvPP8tNUD3InBFEZT3WN7JKwix
coD4/BwekUwOZVDd/uCFEyhlhZsROxdKNisNo3VtAq2s+3tIBAmTrriFUl0K+ZC5
zgavcpnPN57zMtW9aK+VO3wXqAKYLYmtgxkVzSLUZt2M7JuwOaAdyuYWAneKZPCu
1AXkmyo+d84sd5mZaKOr5xArAFiNMWPUcZL4rkS1Fq4dKtGAqzzR7a7hWtA5o27T
6vynuxZ1n0PPh0er2O/zF4znIjm5RhTlfjp/VmhZdQfpulFEQ/dMxxGkQ9z5IYbX
mTlSDbCSb+FMsanRBJ7Drp5EmBIudVGY6SHI5Re1RQiEh7GoDfUMUwZO+TVDII5R
Ra7WyuimYleJgDo/+7HyfuIyGDaUCVj6pwVtYtYIdOI3tTw1R1Mr0V8yaNVnJghL
CHcEJQL+YHSmiMM3ySil3O6tm1By6lFz8bVe/rgG/5uklQrnjMR37jYboi1orCC4
yeIoQeV0ItlxeTyBwYIV/o1DBNxDevTZvJabC93WiGLw2XFjpZ0q/9+zI2rJUZJh
qxmKP+D4e27lCI65Ag0EVTAYdgEQAMvttYNqeRNBRpSX8fk45WVIV8Fb21fWdwk6
2SkZnJURbiC0LxQnOi7wrtii7DeFZtwM2kFHihS1VHekBnIKKZQSgGoKuFAQMGyu
a426H4ZsSmA9Ufd7kRbvdtEcp7/RTAanhrSL4lkBhaKJrXlxBJ27o3nd7/rh7r3a
OszbPY6DJ5bWClX3KooPTDl/RF2lHn+fweFk58UvuunHIyo4BWJUdilSXIjLun+P
Qaik4ZAsZVwNhdNz05d+vtai4AwbYoO7adboMLRkYaXSQwGytkm+fM6r7OpXHYuS
cR4zB/OK5hxCVEpWfiwN71N2NMvnEMaWd/9uhqxJzyvYgkVUXV9274TUe16pzXnW
ZLfmitjwc91e7mJBBfKNenDdhaLEIlDRwKTLj7k58f9srpMnyZFacntu5pUMNblB
cjXwWxz5ZaQikLnKYhIvrIEwtWPyjqOzNXNvYfZamve/LJ8HmWGCKao3QHoAIDvB
9XBxrDyTJDpxbog6Qu4SY8AdgVlan6c/PsLDc7EUegeYiNTzsOK+eq3G5/E92eIu
TsUXlciypFcRm1q8vLRr+HYYe2mJDo4GetB1zLkAFBcYJm/x9iJQbu0hn5NxJvZO
R0Y5nOJQdyi+muJzKYwhkuzaOlswzqVXkq/7+QCjg7QsycdcwDjiQh3OrsgXHrwl
M7gyafL9ABEBAAGJAh8EGAEKAAkFAlUwGHYCGwwACgkQzUmgr8V5Kb50BxAAhj9T
TwmNrgRldTHszj+Qc+v8RWqV6j+R+zc0cn5XlUa6XFaXI1OFFg71H4dhCPEiYeN0
IrnocyMNvCol+eKIlPKbPTmoixjQ4udPTR1DC1Bx1MyW5FqOrsgBl5t0e1VwEViM
NspSStxu5Hsr6oWz2GD48lXZWJOgoL1RLs+uxjcyjySD/em2fOKASwchYmI+ezRv
plfhAFIMKTSCN2pgVTEOaaz13M0U+MoprThqF1LWzkGkkC7n/1V1f5tn83BWiagG
2N2Q4tHLfyouzMUKnX28kQ9sXfxwmYb2sA9FNIgxy+TdKU2ofLxivoWT8zS189z/
Yj9fErmiMjns2FzEDX+bipAw55X4D/RsaFgC+2x2PDbxeQh6JalRA2Wjq32Ouubx
u+I4QhEDJIcVwt9x6LPDuos1F+M5QW0AiUhKrZJ17UrxOtaquh/nPUL9T3l2qPUn
1ChrZEEEhHO6vA8+jn0+cV9n5xEz30Str9iHnDQ5QyR5LyV4UBPgTdWyQzNVKA69
KsSr9lbHEtQFRzGuBKwt6UlSFv9vPWWJkJit5XDKAlcKuGXj0J8OlltToocGElkF
+gEBZfoOWi/IBjRLrFW2cT3p36DTR5O1Ud/1DLnWRqgWNBLrbs2/KMKE6EnHttyD
7Tz8SQkuxltX/yBXMV3Ddy0t6nWV2SZEfuxJAQI=
=spg4
-----END PGP PUBLIC KEY BLOCK-----
```


@@ -1,150 +0,0 @@
#!/usr/bin/env bash
# This script generates information about your rippled installation
# and system. It can be used to help debug issues that you may face
# in your installation. While this script endeavors to not display any
# sensitive information, it is recommended that you read the output
# before sharing with any third parties.
rippled_exe=/opt/ripple/bin/rippled
conf_file=/etc/opt/ripple/rippled.cfg
while getopts ":e:c:" opt; do
case $opt in
e)
rippled_exe=${OPTARG}
;;
c)
conf_file=${OPTARG}
;;
\?)
echo "Invalid option: -$OPTARG"
exit -1
esac
done
tmp_loc=$(mktemp -d --tmpdir ripple_info.XXXXX)
chmod 751 ${tmp_loc}
awk_prog=${tmp_loc}/cfg.awk
summary_out=${tmp_loc}/rippled_info.md
printf "# rippled report info\n\n> generated at %s\n" "$(date -R)" > ${summary_out}
function log_section {
printf "\n## %s\n" "$*" >> ${summary_out}
while read -r l; do
echo " $l" >> ${summary_out}
done </dev/stdin
}
function join_by {
local IFS="$1"; shift; echo "$*";
}
if [[ -f ${conf_file} ]] ; then
exclude=( ips ips_fixed node_seed validation_seed validator_token )
cleaned_conf=${tmp_loc}/cleaned_rippled_cfg.txt
cat << 'EOP' >> ${awk_prog}
BEGIN {FS="[[:space:]]*=[[:space:]]*"; skip=0; db_path=""; print > OUT_FILE; split(exl,exa,"|")}
/^#/ {next}
save==2 && /^[[:space:]]*$/ {next}
/^\[.+\]$/ {
section=tolower(gensub(/^\[[[:space:]]*([a-zA-Z_]+)[[:space:]]*\]$/, "\\1", "g"))
skip = 0
for (i in exa) {
if (section == exa[i])
skip = 1
}
if (section == "database_path")
save = 1
}
skip==1 {next}
save==2 {save=0; db_path=$0}
save==1 {save=2}
$1 ~ /password/ {$0=$1"=<redacted>"}
{print >> OUT_FILE}
END {print db_path}
EOP
db=$(\
sed -r -e 's/\<s[[:alnum:]]{28}\>/<redactedsecret>/g;s/^[[:space:]]*//;s/[[:space:]]*$//' ${conf_file} |\
awk -v OUT_FILE=${cleaned_conf} -v exl="$(join_by '|' "${exclude[@]}")" -f ${awk_prog})
rm ${awk_prog}
cat ${cleaned_conf} | log_section "cleaned config file"
rm ${cleaned_conf}
echo "${db}" | log_section "database path"
df ${db} | log_section "df: database"
fi
# Send output from this script to a log file
## this captures any messages
## or errors from the script itself
log_file=${tmp_loc}/get_info.log
exec 3>&1 1>>${log_file} 2>&1
## Send all stdout files to /tmp
if [[ -x ${rippled_exe} ]] ; then
pgrep rippled && \
${rippled_exe} --conf ${conf_file} \
-- server_info | log_section "server info"
fi
cat /proc/meminfo | log_section "meminfo"
cat /proc/swaps | log_section "swap space"
ulimit -a | log_section "ulimit"
if command -v lshw >/dev/null 2>&1 ; then
lshw 2>/dev/null | log_section "hardware info"
else
lscpu > ${tmp_loc}/hw_info.txt
hwinfo >> ${tmp_loc}/hw_info.txt
lspci >> ${tmp_loc}/hw_info.txt
lsblk >> ${tmp_loc}/hw_info.txt
cat ${tmp_loc}/hw_info.txt | log_section "hardware info"
rm ${tmp_loc}/hw_info.txt
fi
if command -v iostat >/dev/null 2>&1 ; then
iostat -t -d -x 2 6 | log_section "iostat"
fi
df -h | log_section "free disk space"
drives=($(df | awk '$1 ~ /^\/dev\// {print $1}' | xargs -n 1 basename))
block_devs=($(ls /sys/block/))
for d in "${drives[@]}"; do
for dev in "${block_devs[@]}"; do
#echo "D: [$d], DEV: [$dev]"
if [[ $d =~ $dev ]]; then
# this file (if exists) has 0 for SSD and 1 for HDD
if [[ "$(cat /sys/block/${dev}/queue/rotational 2>/dev/null)" == 0 ]] ; then
echo "${d} : SSD" >> ${tmp_loc}/is_ssd.txt
else
echo "${d} : NO SSD" >> ${tmp_loc}/is_ssd.txt
fi
fi
done
done
if [[ -f ${tmp_loc}/is_ssd.txt ]] ; then
cat ${tmp_loc}/is_ssd.txt | log_section "SSD"
rm ${tmp_loc}/is_ssd.txt
fi
cat ${log_file} | log_section "script log"
cat << MSG | tee /dev/fd/3
####################################################
rippled info has been gathered. Please copy the
contents of ${summary_out}
to a github gist at https://gist.github.com/
PLEASE REVIEW THIS FILE FOR ANY SENSITIVE DATA
BEFORE POSTING! We have tried our best to omit
any sensitive information from this file, but you
should verify before posting.
####################################################
MSG


@@ -1,218 +0,0 @@
#!/bin/bash
set -o errexit
marker_base=985c80fbc6131f3a8cedd0da7e8af98dfceb13c7
marker_commit=${1:-${marker_base}}
if [ $(git merge-base ${marker_commit} ${marker_base}) != ${marker_base} ]; then
echo "first marker commit not an ancestor: ${marker_commit}"
exit 1
fi
if [ $(git merge-base ${marker_commit} HEAD) != $(git rev-parse --verify ${marker_commit}) ]; then
echo "given marker commit not an ancestor: ${marker_commit}"
exit 1
fi
if [ -e Builds/CMake ]; then
echo move CMake
git mv Builds/CMake cmake
git add --update .
git commit -m 'Move CMake directory' --author 'Pretty Printer <cpp@ripple.com>'
fi
if [ -e src/ripple ]; then
echo move protocol buffers
mkdir -p include/xrpl
if [ -e src/ripple/proto ]; then
git mv src/ripple/proto include/xrpl
fi
extract_list() {
git show ${marker_commit}:Builds/CMake/RippledCore.cmake | \
awk "/END ${1}/ { p = 0 } p && /src\/ripple/; /BEGIN ${1}/ { p = 1 }" | \
sed -e 's#src/ripple/##' -e 's#[^a-z]\+$##'
}
move_files() {
oldroot="$1"; shift
newroot="$1"; shift
detail="$1"; shift
files=("$@")
for file in ${files[@]}; do
if [ ! -e ${oldroot}/${file} ]; then
continue
fi
dir=$(dirname ${file})
if [ $(basename ${dir}) == 'details' ]; then
dir=$(dirname ${dir})
fi
if [ $(basename ${dir}) == 'impl' ]; then
dir="$(dirname ${dir})/${detail}"
fi
mkdir -p ${newroot}/${dir}
git mv ${oldroot}/${file} ${newroot}/${dir}
done
}
echo move libxrpl headers
files=$(extract_list 'LIBXRPL HEADERS')
files+=(
basics/SlabAllocator.h
beast/asio/io_latency_probe.h
beast/container/aged_container.h
beast/container/aged_container_utility.h
beast/container/aged_map.h
beast/container/aged_multimap.h
beast/container/aged_multiset.h
beast/container/aged_set.h
beast/container/aged_unordered_map.h
beast/container/aged_unordered_multimap.h
beast/container/aged_unordered_multiset.h
beast/container/aged_unordered_set.h
beast/container/detail/aged_associative_container.h
beast/container/detail/aged_container_iterator.h
beast/container/detail/aged_ordered_container.h
beast/container/detail/aged_unordered_container.h
beast/container/detail/empty_base_optimization.h
beast/core/LockFreeStack.h
beast/insight/Collector.h
beast/insight/Counter.h
beast/insight/CounterImpl.h
beast/insight/Event.h
beast/insight/EventImpl.h
beast/insight/Gauge.h
beast/insight/GaugeImpl.h
beast/insight/Group.h
beast/insight/Groups.h
beast/insight/Hook.h
beast/insight/HookImpl.h
beast/insight/Insight.h
beast/insight/Meter.h
beast/insight/MeterImpl.h
beast/insight/NullCollector.h
beast/insight/StatsDCollector.h
beast/test/fail_counter.h
beast/test/fail_stream.h
beast/test/pipe_stream.h
beast/test/sig_wait.h
beast/test/string_iostream.h
beast/test/string_istream.h
beast/test/string_ostream.h
beast/test/test_allocator.h
beast/test/yield_to.h
beast/utility/hash_pair.h
beast/utility/maybe_const.h
beast/utility/temp_dir.h
# included by only json/impl/json_assert.h
json/json_errors.h
protocol/PayChan.h
protocol/RippleLedgerHash.h
protocol/messages.h
protocol/st.h
)
files+=(
basics/README.md
crypto/README.md
json/README.md
protocol/README.md
resource/README.md
)
move_files src/ripple include/xrpl detail ${files[@]}
echo move libxrpl sources
files=$(extract_list 'LIBXRPL SOURCES')
move_files src/ripple src/libxrpl "" ${files[@]}
echo check leftovers
dirs=$(cd include/xrpl; ls -d */)
dirs=$(cd src/ripple; ls -d ${dirs} 2>/dev/null || true)
files="$(cd src/ripple; find ${dirs} -type f)"
if [ -n "${files}" ]; then
echo "leftover files:"
echo ${files}
exit
fi
echo remove empty directories
empty_dirs="$(cd src/ripple; find ${dirs} -depth -type d)"
for dir in ${empty_dirs[@]}; do
if [ -e ${dir} ]; then
rmdir ${dir}
fi
done
echo move xrpld sources
files=$(
extract_list 'XRPLD SOURCES'
cd src/ripple
find * -regex '.*\.\(h\|ipp\|md\|pu\|uml\|png\)'
)
move_files src/ripple src/xrpld detail ${files[@]}
files="$(cd src/ripple; find . -type f)"
if [ -n "${files}" ]; then
echo "leftover files:"
echo ${files}
exit
fi
fi
rm -rf src/ripple
echo rename .hpp to .h
find include src -name '*.hpp' -exec bash -c 'f="{}"; git mv "${f}" "${f%hpp}h"' \;
echo move PerfLog.h
if [ -e include/xrpl/basics/PerfLog.h ]; then
git mv include/xrpl/basics/PerfLog.h src/xrpld/perflog
fi
# Make sure all protobuf includes have the correct prefix.
protobuf_replace='s:^#include\s*["<].*org/xrpl\([^">]\+\)[">]:#include <xrpl/proto/org/xrpl\1>:'
# Make sure first-party includes use angle brackets and .h extension.
ripple_replace='s:include\s*["<]ripple/\(.*\)\.h\(pp\)\?[">]:include <ripple/\1.h>:'
beast_replace='s:include\s*<beast/:include <xrpl/beast/:'
# Rename impl directories to detail.
impl_rename='s:\(<xrpl.*\)/impl\(/details\)\?/:\1/detail/:'
echo rewrite includes in libxrpl
find include/xrpl src/libxrpl -type f -exec sed -i \
-e "${protobuf_replace}" \
-e "${ripple_replace}" \
-e "${beast_replace}" \
-e 's:^#include <ripple/:#include <xrpl/:' \
-e "${impl_rename}" \
{} +
echo rewrite includes in xrpld
# # https://www.baeldung.com/linux/join-multiple-lines
libxrpl_dirs="$(cd include/xrpl; ls -d1 */ | sed 's:/$::')"
# libxrpl_dirs='a\nb\nc\n'
readarray -t libxrpl_dirs <<< "${libxrpl_dirs}"
# libxrpl_dirs=(a b c)
libxrpl_dirs=$(printf -v txt '%s\\|' "${libxrpl_dirs[@]}"; echo "${txt%\\|}")
# libxrpl_dirs='a\|b\|c'
find src/xrpld src/test -type f -exec sed -i \
-e "${protobuf_replace}" \
-e "${ripple_replace}" \
-e "${beast_replace}" \
-e "s:^#include <ripple/basics/PerfLog.h>:#include <xrpld/perflog/PerfLog.h>:" \
-e "s:^#include <ripple/\(${libxrpl_dirs}\)/:#include <xrpl/\1/:" \
-e 's:^#include <ripple/:#include <xrpld/:' \
-e "${impl_rename}" \
{} +
git commit -m 'Rearrange sources' --author 'Pretty Printer <cpp@ripple.com>'
find include src -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -exec clang-format-10 -i {} +
git add --update .
git commit -m 'Rewrite includes' --author 'Pretty Printer <cpp@ripple.com>'
./Builds/levelization/levelization.sh
git add --update .
git commit -m 'Recompute loops' --author 'Pretty Printer <cpp@ripple.com>'


@@ -1,51 +0,0 @@
#!/usr/bin/env bash
set -exu
: ${TRAVIS_BUILD_DIR:=""}
: ${VCPKG_DIR:=".vcpkg"}
export VCPKG_ROOT=${VCPKG_DIR}
: ${VCPKG_DEFAULT_TRIPLET:="x64-windows-static"}
export VCPKG_DEFAULT_TRIPLET
EXE="vcpkg"
if [[ -z ${COMSPEC:-} ]]; then
EXE="${EXE}.exe"
fi
if [[ -d "${VCPKG_DIR}" && -x "${VCPKG_DIR}/${EXE}" && -d "${VCPKG_DIR}/installed" ]] ; then
echo "Using cached vcpkg at ${VCPKG_DIR}"
${VCPKG_DIR}/${EXE} list
else
if [[ -d "${VCPKG_DIR}" ]] ; then
rm -rf "${VCPKG_DIR}"
fi
git clone --branch 2021.04.30 https://github.com/Microsoft/vcpkg.git ${VCPKG_DIR}
pushd ${VCPKG_DIR}
BSARGS=()
if [[ "$(uname)" == "Darwin" ]] ; then
BSARGS+=(--allowAppleClang)
fi
if [[ -z ${COMSPEC:-} ]]; then
chmod +x ./bootstrap-vcpkg.sh
time ./bootstrap-vcpkg.sh "${BSARGS[@]}"
else
time ./bootstrap-vcpkg.bat
fi
popd
fi
# TODO: bring boost in this way as well ?
# NOTE: can pin specific ports to a commit/version like this:
# git checkout <SOME COMMIT HASH> ports/boost
if [ $# -eq 0 ]; then
echo "No extra packages specified..."
PKGS=()
else
PKGS=( "$@" )
fi
for LIB in "${PKGS[@]}"; do
time ${VCPKG_DIR}/${EXE} --clean-after-build install ${LIB}
done


@@ -1,40 +0,0 @@
# NOTE: must be sourced from a shell so it can export vars
cat << BATCH > ./getenv.bat
CALL %*
ENV
BATCH
while read line ; do
IFS='"' read x path arg <<<"${line}"
if [ -f "${path}" ] ; then
echo "FOUND: $path"
export VCINSTALLDIR=$(./getenv.bat "${path}" ${arg} | grep "^VCINSTALLDIR=" | sed -E "s/^VCINSTALLDIR=//g")
if [ "${VCINSTALLDIR}" != "" ] ; then
echo "USING ${VCINSTALLDIR}"
export LIB=$(./getenv.bat "${path}" ${arg} | grep "^LIB=" | sed -E "s/^LIB=//g")
export LIBPATH=$(./getenv.bat "${path}" ${arg} | grep "^LIBPATH=" | sed -E "s/^LIBPATH=//g")
export INCLUDE=$(./getenv.bat "${path}" ${arg} | grep "^INCLUDE=" | sed -E "s/^INCLUDE=//g")
ADDPATH=$(./getenv.bat "${path}" ${arg} | grep "^PATH=" | sed -E "s/^PATH=//g")
export PATH="${ADDPATH}:${PATH}"
break
fi
fi
done <<EOL
"C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2017/BuildTools/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio 15.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 13.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/vcvarsall.bat" amd64
EOL
# TODO: update the list above as needed to support newer versions of msvc tools
rm -f getenv.bat
if [ "${VCINSTALLDIR}" = "" ] ; then
echo "No compatible visual studio found!"
fi


@@ -1,246 +0,0 @@
#!/usr/bin/env python
"""A script to test rippled in an infinite loop of start-sync-stop.
- Requires Python 3.7+.
- Can be stopped with SIGINT.
- Has no dependencies outside the standard library.
"""
import sys
assert sys.version_info.major == 3 and sys.version_info.minor >= 7
import argparse
import asyncio
import configparser
import contextlib
import json
import logging
import os
from pathlib import Path
import platform
import subprocess
import time
import urllib.error
import urllib.request
# Enable asynchronous subprocesses on Windows. The default changed in 3.8.
# https://docs.python.org/3.7/library/asyncio-platforms.html#subprocess-support-on-windows
if (platform.system() == 'Windows' and sys.version_info.major == 3
and sys.version_info.minor < 8):
asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
DEFAULT_EXE = 'rippled'
DEFAULT_CONFIGURATION_FILE = 'rippled.cfg'
# Number of seconds to wait before forcefully terminating.
PATIENCE = 120
# Number of contiguous seconds in a sync state to be considered synced.
DEFAULT_SYNC_DURATION = 60
# Number of seconds between polls of state.
DEFAULT_POLL_INTERVAL = 5
SYNC_STATES = ('full', 'validating', 'proposing')
def read_config(config_file):
# strict = False: Allow duplicate keys, e.g. [rpc_startup].
# allow_no_value = True: Allow keys with no values. Generally, these
# instances use the "key" as the value, and the section name is the key,
# e.g. [debug_logfile].
# delimiters = ('='): Allow ':' as a character in Windows paths. Some of
# our "keys" are actually values, and we don't want to split them on ':'.
config = configparser.ConfigParser(
strict=False,
allow_no_value=True,
delimiters=('='),
)
config.read(config_file)
return config
def to_list(value, separator=','):
"""Parse a list from a delimited string value."""
return [s.strip() for s in value.split(separator) if s]
def find_log_file(config_file):
"""Try to figure out what log file the user has chosen. Raises all kinds
of exceptions if there is any possibility of ambiguity."""
config = read_config(config_file)
values = list(config['debug_logfile'].keys())
if len(values) < 1:
raise ValueError(
f'no [debug_logfile] in configuration file: {config_file}')
if len(values) > 1:
raise ValueError(
f'too many [debug_logfile] in configuration file: {config_file}')
return values[0]
def find_http_port(config_file):
config = read_config(config_file)
names = list(config['server'].keys())
for name in names:
server = config[name]
if 'http' in to_list(server.get('protocol', '')):
return int(server['port'])
raise ValueError(f'no server in [server] for "http" protocol')
@contextlib.asynccontextmanager
async def rippled(exe=DEFAULT_EXE, config_file=DEFAULT_CONFIGURATION_FILE):
"""A context manager for a rippled process."""
# Start the server.
process = await asyncio.create_subprocess_exec(
str(exe),
'--conf',
str(config_file),
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
logging.info(f'rippled started with pid {process.pid}')
try:
yield process
finally:
# Ask it to stop.
logging.info(f'asking rippled (pid: {process.pid}) to stop')
start = time.time()
process.terminate()
# Wait nicely.
try:
await asyncio.wait_for(process.wait(), PATIENCE)
except asyncio.TimeoutError:
# Ask the operating system to kill it.
logging.warning(f'killing rippled ({process.pid})')
try:
process.kill()
except ProcessLookupError:
pass
code = await process.wait()
end = time.time()
logging.info(
f'rippled stopped after {end - start:.1f} seconds with code {code}'
)
async def sync(
port,
*,
duration=DEFAULT_SYNC_DURATION,
interval=DEFAULT_POLL_INTERVAL,
):
"""Poll rippled on an interval until it has been synced for a duration."""
start = time.perf_counter()
while (time.perf_counter() - start) < duration:
await asyncio.sleep(interval)
request = urllib.request.Request(
f'http://127.0.0.1:{port}',
data=json.dumps({
'method': 'server_state'
}).encode(),
headers={'Content-Type': 'application/json'},
)
        try:
            with urllib.request.urlopen(request) as response:
                body = json.loads(response.read())
        except urllib.error.URLError as cause:
            # Covers HTTPError too; it is raised by urlopen, not json.loads.
            logging.warning(f'server_state request failed: {cause}')
            start = time.perf_counter()
            continue
        except json.JSONDecodeError as cause:
            logging.warning(f'server_state returned non-JSON: {cause}')
            start = time.perf_counter()
            continue
        try:
            state = body['result']['state']['server_state']
        except KeyError as cause:
            logging.warning(f'server_state response missing key: {cause}')
            start = time.perf_counter()
            continue
logging.info(f'server_state: {state}')
if state not in SYNC_STATES:
# Require a contiguous sync state.
start = time.perf_counter()
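# Usage sketch (port hypothetical): block until the server answering on port
# 5005 has reported a sync state for 60 contiguous seconds, polling every 5:
#
#   asyncio.run(sync(5005, duration=60, interval=5))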
async def loop(test,
*,
exe=DEFAULT_EXE,
config_file=DEFAULT_CONFIGURATION_FILE):
"""
Start-test-stop rippled in an infinite loop.
Moves log to a different file after each iteration.
"""
log_file = find_log_file(config_file)
id = 0
while True:
logging.info(f'iteration: {id}')
async with rippled(exe, config_file) as process:
start = time.perf_counter()
exited = asyncio.create_task(process.wait())
tested = asyncio.create_task(test())
# Try to sync as long as the process is running.
done, pending = await asyncio.wait(
{exited, tested},
return_when=asyncio.FIRST_COMPLETED,
)
if done == {exited}:
code = exited.result()
logging.warning(
f'server halted for unknown reason with code {code}')
else:
assert done == {tested}
assert tested.exception() is None
end = time.perf_counter()
logging.info(f'synced after {end - start:.0f} seconds')
os.replace(log_file, f'debug.{id}.log')
id += 1
logging.basicConfig(
format='%(asctime)s %(levelname)-8s %(message)s',
level=logging.INFO,
datefmt='%Y-%m-%d %H:%M:%S',
)
parser = argparse.ArgumentParser(
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument(
'rippled',
type=Path,
nargs='?',
default=DEFAULT_EXE,
help='Path to rippled.',
)
parser.add_argument(
'--conf',
type=Path,
default=DEFAULT_CONFIGURATION_FILE,
help='Path to configuration file.',
)
parser.add_argument(
'--duration',
type=int,
default=DEFAULT_SYNC_DURATION,
help='Number of contiguous seconds required in a synchronized state.',
)
parser.add_argument(
'--interval',
type=int,
default=DEFAULT_POLL_INTERVAL,
help='Number of seconds to wait between polls of state.',
)
args = parser.parse_args()
port = find_http_port(args.conf)
def test():
return sync(port, duration=args.duration, interval=args.interval)
try:
asyncio.run(loop(test, exe=args.rippled, config_file=args.conf))
except KeyboardInterrupt:
# Squelch the message. This is a normal mode of exit.
pass

View File

@@ -1,133 +0,0 @@
/* -------------------------------- REQUIRES -------------------------------- */
var child = require("child_process");
var assert = require("assert");
/* --------------------------------- CONFIG --------------------------------- */
if (process.argv[2] == null) {
[
'Usage: ',
'',
' `node bin/stop-test.js i,j [rippled_path] [rippled_conf]`',
'',
    '  Launch rippled and stop it after n seconds for all n in [i, j)',
    '  For all even values of n launch rippled with `--fg`',
    '  For values of n where n % 3 == 0 launch rippled with `--net`\n',
'Examples: ',
'',
' $ node bin/stop-test.js 5,10',
(' $ node bin/stop-test.js 1,4 ' +
'build/clang.debug/rippled $HOME/.confs/rippled.cfg')
]
.forEach(function(l){console.log(l)});
process.exit();
} else {
  var testRange = process.argv[2].split(',').map(Number);
  var rippledPath = process.argv[3] || 'build/rippled';
  var rippledConf = process.argv[4] || 'rippled.cfg';
}
var options = {
env: process.env,
stdio: 'ignore' // we could dump the child io when it fails abnormally
};
// default args
var conf_args = ['--conf='+rippledConf];
var start_args = conf_args.concat([/*'--net'*/]);
var stop_args = conf_args.concat(['stop']);
/* --------------------------------- HELPERS -------------------------------- */
function start(args) {
return child.spawn(rippledPath, args, options);
}
function stop() { child.execFile(rippledPath, stop_args, options); }
function secs_l8r(ms, f) {setTimeout(f, ms * 1000); }
function show_results_and_exit(results) {
console.log(JSON.stringify(results, undefined, 2));
process.exit();
}
var timeTakes = function (range) {
function sumRange(n) {return (n+1) * n /2}
var ret = sumRange(range[1]);
if (range[0] > 1) {
    ret = ret - sumRange(range[0] - 1);
}
var stopping = (range[1] - range[0]) * 0.5;
return ret + stopping;
}
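// Worked example (illustrative): for testRange = [5, 10], sumRange(10) = 55
// and sumRange(4) = 10, so the estimate is 55 - 10 = 45 seconds of run time
// plus (10 - 5) * 0.5 = 2.5 seconds budgeted for stops: ~47.5 seconds total.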
/* ---------------------------------- TEST ---------------------------------- */
console.log("Test will take ~%s seconds", timeTakes(testRange));
(function oneTest(n /* seconds */, results) {
if (n >= testRange[1]) {
// show_results_and_exit(results);
console.log(JSON.stringify(results, undefined, 2));
oneTest(testRange[0], []);
return;
}
var args = start_args;
if (n % 2 == 0) {args = args.concat(['--fg'])}
if (n % 3 == 0) {args = args.concat(['--net'])}
var result = {args: args, alive_for: n};
results.push(result);
console.log("\nLaunching `%s` with `%s` for %d seconds",
rippledPath, JSON.stringify(args), n);
  var rippled = start(args);
console.log("Rippled pid: %d", rippled.pid);
// defaults
var b4StopSent = false;
var stopSent = false;
var stop_took = null;
rippled.once('exit', function(){
if (!stopSent && !b4StopSent) {
      console.warn('\nRippled exited on its own before stop was issued');
process.exit();
};
// The io handles close AFTER exit, may have implications for
// `stdio:'inherit'` option to `child.spawn`.
rippled.once('close', function() {
result.stop_took = (+new Date() - stop_took) / 1000; // seconds
console.log("Stopping after %d seconds took %s seconds",
n, result.stop_took);
oneTest(n+1, results);
});
});
secs_l8r(n, function(){
console.log("Stopping rippled after %d seconds", n);
// possible race here ?
// seems highly unlikely, but I was having issues at one point
b4StopSent=true;
stop_took = (+new Date());
// when does `exit` actually get sent?
stop();
stopSent=true;
// Sometimes we want to attach with a debugger.
if (process.env.ABORT_TESTS_ON_STALL != null) {
// We wait 30 seconds, and if it hasn't stopped, we abort the process
secs_l8r(30, function() {
if (result.stop_took == null) {
console.log("rippled has stalled");
process.exit();
};
});
}
})
}(testRange[0], []));

File diff suppressed because it is too large

View File

@@ -1,72 +0,0 @@
#
# Default validators.txt
#
# This file is located in the same folder as your rippled.cfg file
# and defines which validators your server trusts not to collude.
#
# This file is UTF-8 with DOS, UNIX, or Mac style line endings.
# Blank lines and lines starting with a '#' are ignored.
#
#
#
# [validators]
#
# List of the validation public keys of nodes to always accept as validators.
#
# Manually listing validator keys is not recommended for production networks.
# See validator_list_sites and validator_list_keys below.
#
# Examples:
# n9KorY8QtTdRx7TVDpwnG9NvyxsDwHUKUEeDLY3AkiGncVaSXZi5
# n9MqiExBcoG19UXwoLjBJnhsxEhAZMuWwJDRdkyDz1EkEkwzQTNt
#
# [validator_list_sites]
#
# List of URIs serving lists of recommended validators.
#
# Examples:
# https://vl.ripple.com
# https://vl.xrplf.org
# http://127.0.0.1:8000
# file:///etc/opt/ripple/vl.txt
#
# [validator_list_keys]
#
# List of keys belonging to trusted validator list publishers.
# Validator lists fetched from configured sites will only be considered
# if the list is accompanied by a valid signature from a trusted
# publisher key.
# Validator list keys should be hex-encoded.
#
# Examples:
# ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# ED307A760EE34F2D0CAA103377B1969117C38B8AA0AA1E2A24DAC1F32FC97087ED
#
# The default validator list publishers that the rippled instance
# trusts.
#
# WARNING: Changing these values can cause your rippled instance to see a
# validated ledger that contradicts other rippled instances'
# validated ledgers (aka a ledger fork) if your validator list(s)
# do not sufficiently overlap with the list(s) used by others.
# See: https://arxiv.org/pdf/1802.07242.pdf
[validator_list_sites]
https://vl.ripple.com
https://vl.xrplf.org
[validator_list_keys]
#vl.ripple.com
ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# vl.xrplf.org
ED45D1840EE724BE327ABE9146503D5848EFD5F38B6D5FEDE71E80ACCE5E6E738B
# To use the test network (see https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html),
# use the following configuration instead:
#
# [validator_list_sites]
# https://vl.altnet.rippletest.net
#
# [validator_list_keys]
# ED264807102805220DA0F312E71FC2C69E1552C9C5790F6C25E3729DEB573D5860

View File

@@ -1,48 +0,0 @@
macro(group_sources_in source_dir curdir)
file(GLOB children RELATIVE ${source_dir}/${curdir}
${source_dir}/${curdir}/*)
foreach (child ${children})
if (IS_DIRECTORY ${source_dir}/${curdir}/${child})
group_sources_in(${source_dir} ${curdir}/${child})
else()
string(REPLACE "/" "\\" groupname ${curdir})
source_group(${groupname} FILES
${source_dir}/${curdir}/${child})
endif()
endforeach()
endmacro()
macro(group_sources curdir)
group_sources_in(${PROJECT_SOURCE_DIR} ${curdir})
endmacro()
macro (exclude_from_default target_)
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_ALL ON)
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_DEFAULT_BUILD ON)
endmacro ()
macro (exclude_if_included target_)
get_directory_property(has_parent PARENT_DIRECTORY)
if (has_parent)
exclude_from_default (${target_})
endif ()
endmacro ()
find_package(Git)
function (git_branch branch_val)
if (NOT GIT_FOUND)
return ()
endif ()
set (_branch "")
execute_process (COMMAND ${GIT_EXECUTABLE} "rev-parse" "--abbrev-ref" "HEAD"
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE _git_exit_code
OUTPUT_VARIABLE _temp_branch
OUTPUT_STRIP_TRAILING_WHITESPACE
ERROR_QUIET)
if (_git_exit_code EQUAL 0)
set (_branch ${_temp_branch})
endif ()
set (${branch_val} "${_branch}" PARENT_SCOPE)
endfunction ()
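# Usage sketch: capture the current branch name; the result is empty when
# git is unavailable or the command fails:
#
#   git_branch (current_branch)
#   message (STATUS "building from branch: ${current_branch}")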

View File

@@ -1,453 +0,0 @@
# Copyright (c) 2012 - 2017, Lars Bilke
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
# ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# CHANGES:
#
# 2012-01-31, Lars Bilke
# - Enable Code Coverage
#
# 2013-09-17, Joakim Söderberg
# - Added support for Clang.
# - Some additional usage instructions.
#
# 2016-02-03, Lars Bilke
# - Refactored functions to use named parameters
#
# 2017-06-02, Lars Bilke
# - Merged with modified version from github.com/ufz/ogs
#
# 2019-05-06, Anatolii Kurotych
# - Remove unnecessary --coverage flag
#
# 2019-12-13, FeRD (Frank Dana)
# - Deprecate COVERAGE_LCOVR_EXCLUDES and COVERAGE_GCOVR_EXCLUDES lists in favor
# of tool-agnostic COVERAGE_EXCLUDES variable, or EXCLUDE setup arguments.
# - CMake 3.4+: All excludes can be specified relative to BASE_DIRECTORY
# - All setup functions: accept BASE_DIRECTORY, EXCLUDE list
# - Set lcov basedir with -b argument
# - Add automatic --demangle-cpp in lcovr, if 'c++filt' is available (can be
# overridden with NO_DEMANGLE option in setup_target_for_coverage_lcovr().)
# - Delete output dir, .info file on 'make clean'
# - Remove Python detection, since version mismatches will break gcovr
# - Minor cleanup (lowercase function names, update examples...)
#
# 2019-12-19, FeRD (Frank Dana)
# - Rename Lcov outputs, make filtered file canonical, fix cleanup for targets
#
# 2020-01-19, Bob Apthorpe
# - Added gfortran support
#
# 2020-02-17, FeRD (Frank Dana)
# - Make all add_custom_target()s VERBATIM to auto-escape wildcard characters
# in EXCLUDEs, and remove manual escaping from gcovr targets
#
# 2021-01-19, Robin Mueller
# - Add CODE_COVERAGE_VERBOSE option which will allow to print out commands which are run
# - Added the option for users to set the GCOVR_ADDITIONAL_ARGS variable to supply additional
# flags to the gcovr command
#
# 2020-05-04, Michael Davis
# - Add -fprofile-abs-path to make gcno files contain absolute paths
# - Fix BASE_DIRECTORY not working when defined
# - Change BYPRODUCT from folder to index.html to stop ninja from complaining about double defines
#
# 2021-05-10, Martin Stump
# - Check if the generator is multi-config before warning about non-Debug builds
#
# 2022-02-22, Marko Wehle
# - Change gcovr output from -o <filename> for --xml <filename> and --html <filename> output respectively.
# This will allow for Multiple Output Formats at the same time by making use of GCOVR_ADDITIONAL_ARGS, e.g. GCOVR_ADDITIONAL_ARGS "--txt".
#
# 2022-09-28, Sebastian Mueller
# - fix append_coverage_compiler_flags_to_target to correctly add flags
# - replace "-fprofile-arcs -ftest-coverage" with "--coverage" (equivalent)
#
# 2024-01-04, Bronek Kozicki
# - remove setup_target_for_coverage_lcov (slow) and setup_target_for_coverage_fastcov (no support for Clang)
# - fix Clang support by adding find_program( ... llvm-cov )
# - add Apple Clang support by adding execute_process( COMMAND xcrun -f llvm-cov ... )
# - add CODE_COVERAGE_GCOV_TOOL to explicitly select gcov tool and disable find_program
# - replace both functions setup_target_for_coverage_gcovr_* with a single setup_target_for_coverage_gcovr
# - add support for all gcovr output formats
#
# 2024-04-03, Bronek Kozicki
# - add support for output formats: jacoco, clover, lcov
#
# USAGE:
#
# 1. Copy this file into your cmake modules path.
#
# 2. Add the following line to your CMakeLists.txt (best inside an if-condition
# using a CMake option() to enable it just optionally):
# include(CodeCoverage)
#
# 3. Append necessary compiler flags for all supported source files:
# append_coverage_compiler_flags()
# Or for specific target:
# append_coverage_compiler_flags_to_target(YOUR_TARGET_NAME)
#
# 3.a (OPTIONAL) Set appropriate optimization flags, e.g. -O0, -O1 or -Og
#
# 4. If you need to exclude additional directories from the report, specify them
# using full paths in the COVERAGE_EXCLUDES variable before calling
# setup_target_for_coverage_*().
# Example:
# set(COVERAGE_EXCLUDES
# '${PROJECT_SOURCE_DIR}/src/dir1/*'
# '/path/to/my/src/dir2/*')
# Or, use the EXCLUDE argument to setup_target_for_coverage_*().
# Example:
# setup_target_for_coverage_gcovr(
# NAME coverage
# EXECUTABLE testrunner
# EXCLUDE "${PROJECT_SOURCE_DIR}/src/dir1/*" "/path/to/my/src/dir2/*")
#
# 4.a NOTE: With CMake 3.4+, COVERAGE_EXCLUDES or EXCLUDE can also be set
# relative to the BASE_DIRECTORY (default: PROJECT_SOURCE_DIR)
# Example:
# set(COVERAGE_EXCLUDES "dir1/*")
# setup_target_for_coverage_gcovr(
# NAME coverage
# EXECUTABLE testrunner
# FORMAT html-details
# BASE_DIRECTORY "${PROJECT_SOURCE_DIR}/src"
# EXCLUDE "dir2/*")
#
# 4.b If you need to pass specific options to gcovr, specify them in
# GCOVR_ADDITIONAL_ARGS variable.
# Example:
# set (GCOVR_ADDITIONAL_ARGS --exclude-throw-branches --exclude-noncode-lines -s)
# setup_target_for_coverage_gcovr(
# NAME coverage
# EXECUTABLE testrunner
# EXCLUDE "src/dir1" "src/dir2")
#
# 5. Use the functions described below to create a custom make target which
# runs your test executable and produces a code coverage report.
#
# 6. Build a Debug build:
# cmake -DCMAKE_BUILD_TYPE=Debug ..
# make
# make my_coverage_target
include(CMakeParseArguments)
option(CODE_COVERAGE_VERBOSE "Verbose information" FALSE)
# Check prereqs
find_program( GCOVR_PATH gcovr PATHS ${CMAKE_SOURCE_DIR}/scripts/test)
if(DEFINED CODE_COVERAGE_GCOV_TOOL)
set(GCOV_TOOL "${CODE_COVERAGE_GCOV_TOOL}")
elseif(DEFINED ENV{CODE_COVERAGE_GCOV_TOOL})
set(GCOV_TOOL "$ENV{CODE_COVERAGE_GCOV_TOOL}")
elseif("${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if(APPLE)
execute_process( COMMAND xcrun -f llvm-cov
OUTPUT_VARIABLE LLVMCOV_PATH
OUTPUT_STRIP_TRAILING_WHITESPACE
)
else()
find_program( LLVMCOV_PATH llvm-cov )
endif()
if(LLVMCOV_PATH)
set(GCOV_TOOL "${LLVMCOV_PATH} gcov")
endif()
elseif("${CMAKE_CXX_COMPILER_ID}" MATCHES "GNU")
find_program( GCOV_PATH gcov )
set(GCOV_TOOL "${GCOV_PATH}")
endif()
# Check supported compiler (Clang, GNU and Flang)
get_property(LANGUAGES GLOBAL PROPERTY ENABLED_LANGUAGES)
foreach(LANG ${LANGUAGES})
if("${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if("${CMAKE_${LANG}_COMPILER_VERSION}" VERSION_LESS 3)
message(FATAL_ERROR "Clang version must be 3.0.0 or greater! Aborting...")
endif()
elseif(NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "GNU"
AND NOT "${CMAKE_${LANG}_COMPILER_ID}" MATCHES "(LLVM)?[Ff]lang")
message(FATAL_ERROR "Compiler is not GNU or Flang! Aborting...")
endif()
endforeach()
set(COVERAGE_COMPILER_FLAGS "-g --coverage"
CACHE INTERNAL "")
if(CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Clang)")
include(CheckCXXCompilerFlag)
check_cxx_compiler_flag(-fprofile-abs-path HAVE_cxx_fprofile_abs_path)
if(HAVE_cxx_fprofile_abs_path)
set(COVERAGE_CXX_COMPILER_FLAGS "${COVERAGE_COMPILER_FLAGS} -fprofile-abs-path")
endif()
include(CheckCCompilerFlag)
check_c_compiler_flag(-fprofile-abs-path HAVE_c_fprofile_abs_path)
if(HAVE_c_fprofile_abs_path)
set(COVERAGE_C_COMPILER_FLAGS "${COVERAGE_COMPILER_FLAGS} -fprofile-abs-path")
endif()
endif()
set(CMAKE_Fortran_FLAGS_COVERAGE
${COVERAGE_COMPILER_FLAGS}
CACHE STRING "Flags used by the Fortran compiler during coverage builds."
FORCE )
set(CMAKE_CXX_FLAGS_COVERAGE
${COVERAGE_COMPILER_FLAGS}
CACHE STRING "Flags used by the C++ compiler during coverage builds."
FORCE )
set(CMAKE_C_FLAGS_COVERAGE
${COVERAGE_COMPILER_FLAGS}
CACHE STRING "Flags used by the C compiler during coverage builds."
FORCE )
set(CMAKE_EXE_LINKER_FLAGS_COVERAGE
""
CACHE STRING "Flags used for linking binaries during coverage builds."
FORCE )
set(CMAKE_SHARED_LINKER_FLAGS_COVERAGE
""
CACHE STRING "Flags used by the shared libraries linker during coverage builds."
FORCE )
mark_as_advanced(
CMAKE_Fortran_FLAGS_COVERAGE
CMAKE_CXX_FLAGS_COVERAGE
CMAKE_C_FLAGS_COVERAGE
CMAKE_EXE_LINKER_FLAGS_COVERAGE
CMAKE_SHARED_LINKER_FLAGS_COVERAGE )
get_property(GENERATOR_IS_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
if(NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG))
message(WARNING "Code coverage results with an optimised (non-Debug) build may be misleading")
endif() # NOT (CMAKE_BUILD_TYPE STREQUAL "Debug" OR GENERATOR_IS_MULTI_CONFIG)
if(CMAKE_C_COMPILER_ID STREQUAL "GNU" OR CMAKE_Fortran_COMPILER_ID STREQUAL "GNU")
link_libraries(gcov)
endif()
# Defines a target for running and collection code coverage information
# Builds dependencies, runs the given executable and outputs reports.
# NOTE! The executable should always exit with a ZERO exit code, otherwise
# the coverage generation will not complete.
#
# setup_target_for_coverage_gcovr(
# NAME ctest_coverage # New target name
# EXECUTABLE ctest -j ${PROCESSOR_COUNT} # Executable in PROJECT_BINARY_DIR
# DEPENDENCIES executable_target # Dependencies to build first
# BASE_DIRECTORY "../" # Base directory for report
# # (defaults to PROJECT_SOURCE_DIR)
# FORMAT "cobertura" # Output format, one of:
# # xml cobertura sonarqube jacoco clover
# # json-summary json-details coveralls csv
# # txt html-single html-nested html-details
# # lcov (xml is an alias to cobertura;
# # if no format is set, defaults to xml)
# EXCLUDE "src/dir1/*" "src/dir2/*" # Patterns to exclude (can be relative
# # to BASE_DIRECTORY, with CMake 3.4+)
# )
# The user can set the variable GCOVR_ADDITIONAL_ARGS to supply additional flags to the
# GCOVR command.
function(setup_target_for_coverage_gcovr)
set(options NONE)
set(oneValueArgs BASE_DIRECTORY NAME FORMAT)
set(multiValueArgs EXCLUDE EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES)
cmake_parse_arguments(Coverage "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(NOT GCOV_TOOL)
message(FATAL_ERROR "Could not find gcov or llvm-cov tool! Aborting...")
endif()
if(NOT GCOVR_PATH)
message(FATAL_ERROR "Could not find gcovr tool! Aborting...")
endif()
# Set base directory (as absolute path), or default to PROJECT_SOURCE_DIR
if(DEFINED Coverage_BASE_DIRECTORY)
get_filename_component(BASEDIR ${Coverage_BASE_DIRECTORY} ABSOLUTE)
else()
set(BASEDIR ${PROJECT_SOURCE_DIR})
endif()
if(NOT DEFINED Coverage_FORMAT)
set(Coverage_FORMAT xml)
endif()
if("--output" IN_LIST GCOVR_ADDITIONAL_ARGS)
message(FATAL_ERROR "Unsupported --output option detected in GCOVR_ADDITIONAL_ARGS! Aborting...")
else()
if((Coverage_FORMAT STREQUAL "html-details")
OR (Coverage_FORMAT STREQUAL "html-nested"))
set(GCOVR_OUTPUT_FILE ${PROJECT_BINARY_DIR}/${Coverage_NAME}/index.html)
set(GCOVR_CREATE_FOLDER ${PROJECT_BINARY_DIR}/${Coverage_NAME})
elseif(Coverage_FORMAT STREQUAL "html-single")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.html)
elseif((Coverage_FORMAT STREQUAL "json-summary")
OR (Coverage_FORMAT STREQUAL "json-details")
OR (Coverage_FORMAT STREQUAL "coveralls"))
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.json)
elseif(Coverage_FORMAT STREQUAL "txt")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.txt)
elseif(Coverage_FORMAT STREQUAL "csv")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.csv)
elseif(Coverage_FORMAT STREQUAL "lcov")
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.lcov)
else()
set(GCOVR_OUTPUT_FILE ${Coverage_NAME}.xml)
endif()
endif()
if((Coverage_FORMAT STREQUAL "cobertura")
OR (Coverage_FORMAT STREQUAL "xml"))
list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --cobertura-pretty )
set(Coverage_FORMAT cobertura) # overwrite xml
elseif(Coverage_FORMAT STREQUAL "sonarqube")
list(APPEND GCOVR_ADDITIONAL_ARGS --sonarqube "${GCOVR_OUTPUT_FILE}" )
elseif(Coverage_FORMAT STREQUAL "jacoco")
list(APPEND GCOVR_ADDITIONAL_ARGS --jacoco "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --jacoco-pretty )
elseif(Coverage_FORMAT STREQUAL "clover")
list(APPEND GCOVR_ADDITIONAL_ARGS --clover "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --clover-pretty )
elseif(Coverage_FORMAT STREQUAL "lcov")
list(APPEND GCOVR_ADDITIONAL_ARGS --lcov "${GCOVR_OUTPUT_FILE}" )
elseif(Coverage_FORMAT STREQUAL "json-summary")
list(APPEND GCOVR_ADDITIONAL_ARGS --json-summary "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --json-summary-pretty)
elseif(Coverage_FORMAT STREQUAL "json-details")
list(APPEND GCOVR_ADDITIONAL_ARGS --json "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --json-pretty)
elseif(Coverage_FORMAT STREQUAL "coveralls")
list(APPEND GCOVR_ADDITIONAL_ARGS --coveralls "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --coveralls-pretty)
elseif(Coverage_FORMAT STREQUAL "csv")
list(APPEND GCOVR_ADDITIONAL_ARGS --csv "${GCOVR_OUTPUT_FILE}" )
elseif(Coverage_FORMAT STREQUAL "txt")
list(APPEND GCOVR_ADDITIONAL_ARGS --txt "${GCOVR_OUTPUT_FILE}" )
elseif(Coverage_FORMAT STREQUAL "html-single")
list(APPEND GCOVR_ADDITIONAL_ARGS --html "${GCOVR_OUTPUT_FILE}" )
list(APPEND GCOVR_ADDITIONAL_ARGS --html-self-contained)
elseif(Coverage_FORMAT STREQUAL "html-nested")
list(APPEND GCOVR_ADDITIONAL_ARGS --html-nested "${GCOVR_OUTPUT_FILE}" )
elseif(Coverage_FORMAT STREQUAL "html-details")
list(APPEND GCOVR_ADDITIONAL_ARGS --html-details "${GCOVR_OUTPUT_FILE}" )
else()
message(FATAL_ERROR "Unsupported output style ${Coverage_FORMAT}! Aborting...")
endif()
# Collect excludes (CMake 3.4+: Also compute absolute paths)
set(GCOVR_EXCLUDES "")
foreach(EXCLUDE ${Coverage_EXCLUDE} ${COVERAGE_EXCLUDES} ${COVERAGE_GCOVR_EXCLUDES})
if(CMAKE_VERSION VERSION_GREATER 3.4)
get_filename_component(EXCLUDE ${EXCLUDE} ABSOLUTE BASE_DIR ${BASEDIR})
endif()
list(APPEND GCOVR_EXCLUDES "${EXCLUDE}")
endforeach()
list(REMOVE_DUPLICATES GCOVR_EXCLUDES)
# Combine excludes to several -e arguments
set(GCOVR_EXCLUDE_ARGS "")
foreach(EXCLUDE ${GCOVR_EXCLUDES})
list(APPEND GCOVR_EXCLUDE_ARGS "-e")
list(APPEND GCOVR_EXCLUDE_ARGS "${EXCLUDE}")
endforeach()
# Set up commands which will be run to generate coverage data
# Run tests
set(GCOVR_EXEC_TESTS_CMD
${Coverage_EXECUTABLE} ${Coverage_EXECUTABLE_ARGS}
)
# Create folder
if(DEFINED GCOVR_CREATE_FOLDER)
set(GCOVR_FOLDER_CMD
${CMAKE_COMMAND} -E make_directory ${GCOVR_CREATE_FOLDER})
else()
set(GCOVR_FOLDER_CMD echo) # dummy
endif()
# Running gcovr
set(GCOVR_CMD
${GCOVR_PATH}
--gcov-executable ${GCOV_TOOL}
--gcov-ignore-parse-errors=negative_hits.warn_once_per_file
-r ${BASEDIR}
${GCOVR_ADDITIONAL_ARGS}
${GCOVR_EXCLUDE_ARGS}
--object-directory=${PROJECT_BINARY_DIR}
)
if(CODE_COVERAGE_VERBOSE)
message(STATUS "Executed command report")
message(STATUS "Command to run tests: ")
string(REPLACE ";" " " GCOVR_EXEC_TESTS_CMD_SPACED "${GCOVR_EXEC_TESTS_CMD}")
message(STATUS "${GCOVR_EXEC_TESTS_CMD_SPACED}")
if(NOT GCOVR_FOLDER_CMD STREQUAL "echo")
message(STATUS "Command to create a folder: ")
string(REPLACE ";" " " GCOVR_FOLDER_CMD_SPACED "${GCOVR_FOLDER_CMD}")
message(STATUS "${GCOVR_FOLDER_CMD_SPACED}")
endif()
message(STATUS "Command to generate gcovr coverage data: ")
string(REPLACE ";" " " GCOVR_CMD_SPACED "${GCOVR_CMD}")
message(STATUS "${GCOVR_CMD_SPACED}")
endif()
add_custom_target(${Coverage_NAME}
COMMAND ${GCOVR_EXEC_TESTS_CMD}
COMMAND ${GCOVR_FOLDER_CMD}
COMMAND ${GCOVR_CMD}
BYPRODUCTS ${GCOVR_OUTPUT_FILE}
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
DEPENDS ${Coverage_DEPENDENCIES}
VERBATIM # Protect arguments to commands
COMMENT "Running gcovr to produce code coverage report."
)
# Show info where to find the report
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD
COMMAND ;
COMMENT "Code coverage report saved in ${GCOVR_OUTPUT_FILE} formatted as ${Coverage_FORMAT}"
)
endfunction() # setup_target_for_coverage_gcovr
function(append_coverage_compiler_flags)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${COVERAGE_COMPILER_FLAGS}" PARENT_SCOPE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${COVERAGE_COMPILER_FLAGS}" PARENT_SCOPE)
set(CMAKE_Fortran_FLAGS "${CMAKE_Fortran_FLAGS} ${COVERAGE_COMPILER_FLAGS}" PARENT_SCOPE)
message(STATUS "Appending code coverage compiler flags: ${COVERAGE_COMPILER_FLAGS}")
endfunction() # append_coverage_compiler_flags
# Setup coverage for specific library
function(append_coverage_compiler_flags_to_target name)
separate_arguments(_flag_list NATIVE_COMMAND "${COVERAGE_COMPILER_FLAGS}")
target_compile_options(${name} PRIVATE ${_flag_list})
if(CMAKE_C_COMPILER_ID STREQUAL "GNU" OR CMAKE_Fortran_COMPILER_ID STREQUAL "GNU")
target_link_libraries(${name} PRIVATE gcov)
endif()
endfunction()

View File

@@ -1,56 +0,0 @@
include (CMakeFindDependencyMacro)
# need to represent system dependencies of the lib here
#[=========================================================[
Boost
#]=========================================================]
if (static OR APPLE OR MSVC)
set (Boost_USE_STATIC_LIBS ON)
endif ()
set (Boost_USE_MULTITHREADED ON)
if (static OR MSVC)
set (Boost_USE_STATIC_RUNTIME ON)
else ()
set (Boost_USE_STATIC_RUNTIME OFF)
endif ()
find_dependency (Boost 1.70
COMPONENTS
chrono
container
context
coroutine
date_time
filesystem
program_options
regex
system
thread)
#[=========================================================[
OpenSSL
#]=========================================================]
if (NOT DEFINED OPENSSL_ROOT_DIR)
if (DEFINED ENV{OPENSSL_ROOT})
set (OPENSSL_ROOT_DIR $ENV{OPENSSL_ROOT})
elseif (APPLE)
find_program (homebrew brew)
if (homebrew)
execute_process (COMMAND ${homebrew} --prefix openssl
OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
endif ()
endif ()
file (TO_CMAKE_PATH "${OPENSSL_ROOT_DIR}" OPENSSL_ROOT_DIR)
endif ()
if (static OR APPLE OR MSVC)
set (OPENSSL_USE_STATIC_LIBS ON)
endif ()
set (OPENSSL_MSVC_STATIC_RT ON)
find_dependency (OpenSSL 1.1.1 REQUIRED)
find_dependency (ZLIB)
find_dependency (date)
if (TARGET ZLIB::ZLIB)
set_target_properties(OpenSSL::Crypto PROPERTIES
INTERFACE_LINK_LIBRARIES ZLIB::ZLIB)
endif ()
include ("${CMAKE_CURRENT_LIST_DIR}/RippleTargets.cmake")

View File

@@ -1,202 +0,0 @@
#[===================================================================[
setup project-wide compiler settings
#]===================================================================]
#[=========================================================[
TODO some/most of these common settings belong in a
toolchain file, especially the ABI-impacting ones
#]=========================================================]
add_library (common INTERFACE)
add_library (Ripple::common ALIAS common)
# add a single global dependency on this interface lib
link_libraries (Ripple::common)
set_target_properties (common
PROPERTIES INTERFACE_POSITION_INDEPENDENT_CODE ON)
set(CMAKE_CXX_EXTENSIONS OFF)
target_compile_definitions (common
INTERFACE
$<$<CONFIG:Debug>:DEBUG _DEBUG>
$<$<AND:$<BOOL:${profile}>,$<NOT:$<BOOL:${assert}>>>:NDEBUG>)
# ^^^^ NOTE: CMAKE release builds already have NDEBUG
# defined, so no need to add it explicitly except for
# this special case of (profile ON) and (assert OFF)
# -- presumably this is because we don't want profile
# builds asserting unless asserts were specifically
# requested
if (MSVC)
# remove existing exception flag since we set it to -EHa
string (REGEX REPLACE "[-/]EH[a-z]+" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
foreach (var_
CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE
CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE)
# also remove dynamic runtime
string (REGEX REPLACE "[-/]MD[d]*" " " ${var_} "${${var_}}")
# /ZI (Edit & Continue debugging information) is incompatible with Gy-
string (REPLACE "/ZI" "/Zi" ${var_} "${${var_}}")
# omit debug info completely under CI (not needed)
if (is_ci)
string (REPLACE "/Zi" " " ${var_} "${${var_}}")
endif ()
endforeach ()
target_compile_options (common
INTERFACE
-bigobj # Increase object file max size
-fp:precise # Floating point behavior
-Gd # __cdecl calling convention
-Gm- # Minimal rebuild: disabled
-Gy- # Function level linking: disabled
-MP # Multiprocessor compilation
-openmp- # pragma omp: disabled
-errorReport:none # No error reporting to Internet
      -nologo            # Suppress startup banner
-wd4018 # Disable signed/unsigned comparison warnings
-wd4244 # Disable float to int possible loss of data warnings
-wd4267 # Disable size_t to T possible loss of data warnings
-wd4800 # Disable C4800(int to bool performance)
-wd4503 # Decorated name length exceeded, name was truncated
$<$<COMPILE_LANGUAGE:CXX>:
-EHa
-GR
>
$<$<CONFIG:Release>:-Ox>
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:
-GS
-Zc:forScope
>
# static runtime
$<$<CONFIG:Debug>:-MTd>
$<$<NOT:$<CONFIG:Debug>>:-MT>
$<$<BOOL:${werr}>:-WX>
)
target_compile_definitions (common
INTERFACE
_WIN32_WINNT=0x6000
_SCL_SECURE_NO_WARNINGS
_CRT_SECURE_NO_WARNINGS
WIN32_CONSOLE
WIN32_LEAN_AND_MEAN
NOMINMAX
# TODO: Resolve these warnings, don't just silence them
_SILENCE_ALL_CXX17_DEPRECATION_WARNINGS
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:_CRTDBG_MAP_ALLOC>)
target_link_libraries (common
INTERFACE
-errorreport:none
-machine:X64)
else ()
  # HACK: because these need to come first, before any warning demotion
string (APPEND CMAKE_CXX_FLAGS " -Wall -Wdeprecated")
if (wextra)
string (APPEND CMAKE_CXX_FLAGS " -Wextra -Wno-unused-parameter")
endif ()
# not MSVC
target_compile_options (common
INTERFACE
$<$<BOOL:${werr}>:-Werror>
$<$<COMPILE_LANGUAGE:CXX>:
-frtti
-Wnon-virtual-dtor
>
-Wno-sign-compare
-Wno-char-subscripts
-Wno-format
-Wno-unused-local-typedefs
-fstack-protector
$<$<BOOL:${is_gcc}>:
-Wno-unused-but-set-variable
-Wno-deprecated
>
$<$<NOT:$<CONFIG:Debug>>:-fno-strict-aliasing>
# tweak gcc optimization for debug
$<$<AND:$<BOOL:${is_gcc}>,$<CONFIG:Debug>>:-O0>
# Add debug symbols to release config
$<$<CONFIG:Release>:-g>)
target_link_libraries (common
INTERFACE
-rdynamic
$<$<BOOL:${is_linux}>:-Wl,-z,relro,-z,now,--build-id>
# link to static libc/c++ iff:
# * static option set and
# * NOT APPLE (AppleClang does not support static libc/c++) and
# * NOT san (sanitizers typically don't work with static libc/c++)
$<$<AND:$<BOOL:${static}>,$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${san}>>>:
-static-libstdc++
-static-libgcc
>)
endif ()
# Antithesis instrumentation is only built and deployed on machines running Linux.
if (voidstar)
if (NOT CMAKE_BUILD_TYPE STREQUAL "Debug")
message(FATAL_ERROR "Antithesis instrumentation requires Debug build type, aborting...")
elseif (NOT is_linux)
message(FATAL_ERROR "Antithesis instrumentation requires Linux, aborting...")
elseif (NOT (is_clang AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 16.0))
message(FATAL_ERROR "Antithesis instrumentation requires Clang version 16 or later, aborting...")
endif ()
endif ()
if (use_mold)
# use mold linker if available
execute_process (
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=mold -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "mold")
target_link_libraries (common INTERFACE -fuse-ld=mold)
endif ()
unset (LD_VERSION)
elseif (use_gold AND is_gcc)
# use gold linker if available
execute_process (
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=gold -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
#[=========================================================[
     NOTE: the gold linker inserts -rpath as DT_RUNPATH by
     default instead of DT_RPATH, so you might have slightly
unexpected runtime ld behavior if you were expecting
DT_RPATH. Specify --disable-new-dtags to gold if you do
not want the default DT_RUNPATH behavior. This rpath
treatment as well as static/dynamic selection means that
gold does not currently have ideal default behavior when
we are using jemalloc. Thus for simplicity we don't use
it when jemalloc is requested. An alternative to
disabling would be to figure out all the settings
required to make gold play nicely with jemalloc.
#]=========================================================]
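  # To see which tag a linked binary actually received, one can inspect the
  # dynamic section, e.g.:
  #   readelf -d rippled | grep -E 'RPATH|RUNPATH'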
if (("${LD_VERSION}" MATCHES "GNU gold") AND (NOT jemalloc))
target_link_libraries (common
INTERFACE
-fuse-ld=gold
-Wl,--no-as-needed
#[=========================================================[
see https://bugs.launchpad.net/ubuntu/+source/eglibc/+bug/1253638/comments/5
DT_RUNPATH does not work great for transitive
dependencies (of which boost has a few) - so just
switch to DT_RPATH if doing dynamic linking with gold
#]=========================================================]
$<$<NOT:$<BOOL:${static}>>:-Wl,--disable-new-dtags>)
endif ()
unset (LD_VERSION)
elseif (use_lld)
# use lld linker if available
execute_process (
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=lld -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "LLD")
target_link_libraries (common INTERFACE -fuse-ld=lld)
endif ()
unset (LD_VERSION)
endif()
if (assert)
foreach (var_ CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_RELEASE)
STRING (REGEX REPLACE "[-/]DNDEBUG" "" ${var_} "${${var_}}")
endforeach ()
endif ()

View File

@@ -1,227 +0,0 @@
#[===================================================================[
Exported targets.
#]===================================================================]
include(target_protobuf_sources)
# Protocol buffers cannot participate in a unity build,
# because all the generated sources
# define a bunch of `static const` variables with the same names,
# so we just build them as a separate library.
add_library(xrpl.libpb)
target_protobuf_sources(xrpl.libpb xrpl/proto
LANGUAGE cpp
IMPORT_DIRS include/xrpl/proto
PROTOS include/xrpl/proto/ripple.proto
)
file(GLOB_RECURSE protos "include/xrpl/proto/org/*.proto")
target_protobuf_sources(xrpl.libpb xrpl/proto
LANGUAGE cpp
IMPORT_DIRS include/xrpl/proto
PROTOS "${protos}"
)
target_protobuf_sources(xrpl.libpb xrpl/proto
LANGUAGE grpc
IMPORT_DIRS include/xrpl/proto
PROTOS "${protos}"
PLUGIN protoc-gen-grpc=$<TARGET_FILE:gRPC::grpc_cpp_plugin>
GENERATE_EXTENSIONS .grpc.pb.h .grpc.pb.cc
)
target_compile_options(xrpl.libpb
PUBLIC
$<$<BOOL:${MSVC}>:-wd4996>
$<$<BOOL:${XCODE}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>
PRIVATE
$<$<BOOL:${MSVC}>:-wd4065>
$<$<NOT:$<BOOL:${MSVC}>>:-Wno-deprecated-declarations>
)
target_link_libraries(xrpl.libpb
PUBLIC
protobuf::libprotobuf
gRPC::grpc++
)
# TODO: Clean up the number of library targets later.
add_library(xrpl.imports.main INTERFACE)
target_link_libraries(xrpl.imports.main INTERFACE
LibArchive::LibArchive
OpenSSL::Crypto
Ripple::boost
Ripple::opts
Ripple::syslibs
absl::random_random
date::date
ed25519::ed25519
secp256k1::secp256k1
xxHash::xxhash
)
include(add_module)
include(target_link_modules)
# Level 01
add_module(xrpl beast)
target_link_libraries(xrpl.libxrpl.beast PUBLIC
xrpl.imports.main
xrpl.libpb
)
# Level 02
add_module(xrpl basics)
target_link_libraries(xrpl.libxrpl.basics PUBLIC xrpl.libxrpl.beast)
# Level 03
add_module(xrpl json)
target_link_libraries(xrpl.libxrpl.json PUBLIC xrpl.libxrpl.basics)
add_module(xrpl crypto)
target_link_libraries(xrpl.libxrpl.crypto PUBLIC xrpl.libxrpl.basics)
# Level 04
add_module(xrpl protocol)
target_link_libraries(xrpl.libxrpl.protocol PUBLIC
xrpl.libxrpl.crypto
xrpl.libxrpl.json
)
# Level 05
add_module(xrpl resource)
target_link_libraries(xrpl.libxrpl.resource PUBLIC xrpl.libxrpl.protocol)
add_module(xrpl server)
target_link_libraries(xrpl.libxrpl.server PUBLIC xrpl.libxrpl.protocol)
add_library(xrpl.libxrpl)
set_target_properties(xrpl.libxrpl PROPERTIES OUTPUT_NAME xrpl)
if(unity)
set_target_properties(xrpl.libxrpl PROPERTIES UNITY_BUILD ON)
endif()
add_library(xrpl::libxrpl ALIAS xrpl.libxrpl)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/libxrpl/*.cpp"
)
target_sources(xrpl.libxrpl PRIVATE ${sources})
target_link_modules(xrpl PUBLIC
basics
beast
crypto
json
protocol
resource
server
)
# All headers in libxrpl are in modules.
# Uncomment this stanza if you have not yet moved new headers into a module.
# target_include_directories(xrpl.libxrpl
# PRIVATE
# $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
# PUBLIC
# $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
# $<INSTALL_INTERFACE:include>)
target_compile_definitions(xrpl.libxrpl
PUBLIC
BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT
BOOST_CONTAINER_FWD_BAD_DEQUE
HAS_UNCAUGHT_EXCEPTIONS=1)
target_compile_options(xrpl.libxrpl
PUBLIC
$<$<BOOL:${is_gcc}>:-Wno-maybe-uninitialized>
$<$<BOOL:${voidstar}>:-DENABLE_VOIDSTAR>
)
target_link_libraries(xrpl.libxrpl
PUBLIC
LibArchive::LibArchive
OpenSSL::Crypto
Ripple::boost
Ripple::opts
Ripple::syslibs
absl::random_random
date::date
ed25519::ed25519
secp256k1::secp256k1
xrpl.libpb
xxHash::xxhash
$<$<BOOL:${voidstar}>:antithesis-sdk-cpp>
)
if(xrpld)
add_executable(rippled)
if(unity)
set_target_properties(rippled PROPERTIES UNITY_BUILD ON)
endif()
if(tests)
target_compile_definitions(rippled PUBLIC ENABLE_TESTS)
endif()
target_include_directories(rippled
PRIVATE
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/xrpld/*.cpp"
)
target_sources(rippled PRIVATE ${sources})
if(tests)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/test/*.cpp"
)
target_sources(rippled PRIVATE ${sources})
endif()
target_link_libraries(rippled
Ripple::boost
Ripple::opts
Ripple::libs
xrpl.libxrpl
)
exclude_if_included(rippled)
  # define a macro for tests that might need to
  # be excluded or run differently in a CI environment
if(is_ci)
target_compile_definitions(rippled PRIVATE RIPPLED_RUNNING_IN_CI)
endif ()
if(voidstar)
target_compile_options(rippled
PRIVATE
-fsanitize-coverage=trace-pc-guard
)
# rippled requires access to antithesis-sdk-cpp implementation file
# antithesis_instrumentation.h, which is not exported as INTERFACE
target_include_directories(rippled
PRIVATE
${CMAKE_SOURCE_DIR}/external/antithesis-sdk
)
endif()
# any files that don't play well with unity should be added here
if(tests)
set_source_files_properties(
# these two seem to produce conflicts in beast teardown template methods
src/test/rpc/ValidatorRPC_test.cpp
src/test/ledger/Invariants_test.cpp
# These depend on wasmedge.h, which has some duplicate declarations
src/ripple/app/hook/impl/applyHook.cpp
src/ripple/app/misc/NetworkOPs.cpp
src/ripple/app/tx/impl/applySteps.cpp
src/ripple/app/tx/impl/SetHook.cpp
src/ripple/app/tx/impl/Transactor.cpp
src/ripple/rpc/handlers/Fee1.cpp
PROPERTIES SKIP_UNITY_BUILD_INCLUSION TRUE)
endif()
endif()

View File

@@ -1,38 +0,0 @@
#[===================================================================[
coverage report target
#]===================================================================]
if(NOT coverage)
message(FATAL_ERROR "Code coverage not enabled! Aborting ...")
endif()
if(CMAKE_CXX_COMPILER_ID MATCHES "MSVC")
message(WARNING "Code coverage on Windows is not supported, ignoring 'coverage' flag")
return()
endif()
include(CodeCoverage)
# The instructions for these commands come from the `CodeCoverage` module,
# which was copied from https://github.com/bilke/cmake-modules, commit fb7d2a3,
# then locally changed (see CHANGES: section in `CodeCoverage.cmake`)
set(GCOVR_ADDITIONAL_ARGS ${coverage_extra_args})
if(NOT GCOVR_ADDITIONAL_ARGS STREQUAL "")
separate_arguments(GCOVR_ADDITIONAL_ARGS)
endif()
list(APPEND GCOVR_ADDITIONAL_ARGS
--exclude-throw-branches
--exclude-noncode-lines
--exclude-unreachable-branches -s
-j ${coverage_test_parallelism})
setup_target_for_coverage_gcovr(
NAME coverage
FORMAT ${coverage_format}
EXECUTABLE rippled
EXECUTABLE_ARGS --unittest$<$<BOOL:${coverage_test}>:=${coverage_test}> --unittest-jobs ${coverage_test_parallelism} --quiet --unittest-log
EXCLUDE "src/test" "include/xrpl/beast/test" "include/xrpl/beast/unit_test" "${CMAKE_BINARY_DIR}/pb-xrpl.libpb"
DEPENDENCIES rippled
)

View File

@@ -1,85 +0,0 @@
#[===================================================================[
docs target (optional)
#]===================================================================]
option(with_docs "Include the docs target?" FALSE)
if(NOT (with_docs OR only_docs))
return()
endif()
find_package(Doxygen)
if(NOT TARGET Doxygen::doxygen)
message(STATUS "doxygen executable not found -- skipping docs target")
return()
endif()
set(doxygen_output_directory "${CMAKE_BINARY_DIR}/docs")
set(doxygen_include_path "${CMAKE_CURRENT_SOURCE_DIR}/src")
set(doxygen_index_file "${doxygen_output_directory}/html/index.html")
set(doxyfile "${CMAKE_CURRENT_SOURCE_DIR}/docs/Doxyfile")
file(GLOB_RECURSE doxygen_input
docs/*.md
include/*.h
include/*.cpp
include/*.md
src/*.h
src/*.cpp
src/*.md
Builds/*.md
*.md)
list(APPEND doxygen_input
external/README.md
)
set(dependencies "${doxygen_input}" "${doxyfile}")
function(verbose_find_path variable name)
# find_path sets a CACHE variable, so don't try using a "local" variable.
find_path(${variable} "${name}" ${ARGN})
if(NOT ${variable})
message(NOTICE "could not find ${name}")
else()
message(STATUS "found ${name}: ${${variable}}/${name}")
endif()
endfunction()
verbose_find_path(doxygen_plantuml_jar_path plantuml.jar PATH_SUFFIXES share/plantuml)
verbose_find_path(doxygen_dot_path dot)
# https://en.cppreference.com/w/Cppreference:Archives
# https://stackoverflow.com/questions/60822559/how-to-move-a-file-download-from-configure-step-to-build-step
set(download_script "${CMAKE_BINARY_DIR}/docs/download-cppreference.cmake")
file(WRITE
"${download_script}"
"file(DOWNLOAD \
http://upload.cppreference.com/mwiki/images/b/b2/html_book_20190607.zip \
${CMAKE_BINARY_DIR}/docs/cppreference.zip \
EXPECTED_HASH MD5=82b3a612d7d35a83e3cb1195a63689ab \
)\n \
execute_process( \
COMMAND \"${CMAKE_COMMAND}\" -E tar -xf cppreference.zip \
)\n"
)
set(tagfile "${CMAKE_BINARY_DIR}/docs/cppreference-doxygen-web.tag.xml")
add_custom_command(
OUTPUT "${tagfile}"
COMMAND "${CMAKE_COMMAND}" -P "${download_script}"
WORKING_DIRECTORY "${CMAKE_BINARY_DIR}/docs"
)
set(doxygen_tagfiles "${tagfile}=http://en.cppreference.com/w/")
add_custom_command(
OUTPUT "${doxygen_index_file}"
COMMAND "${CMAKE_COMMAND}" -E env
"DOXYGEN_OUTPUT_DIRECTORY=${doxygen_output_directory}"
"DOXYGEN_INCLUDE_PATH=${doxygen_include_path}"
"DOXYGEN_TAGFILES=${doxygen_tagfiles}"
"DOXYGEN_PLANTUML_JAR_PATH=${doxygen_plantuml_jar_path}"
"DOXYGEN_DOT_PATH=${doxygen_dot_path}"
"${DOXYGEN_EXECUTABLE}" "${doxyfile}"
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
DEPENDS "${dependencies}" "${tagfile}")
add_custom_target(docs
DEPENDS "${doxygen_index_file}"
SOURCES "${dependencies}")

View File

@@ -1,81 +0,0 @@
#[===================================================================[
install stuff
#]===================================================================]
include(create_symbolic_link)
install (
TARGETS
common
opts
ripple_syslibs
ripple_boost
xrpl.imports.main
xrpl.libpb
xrpl.libxrpl.basics
xrpl.libxrpl.beast
xrpl.libxrpl.crypto
xrpl.libxrpl.json
xrpl.libxrpl.protocol
xrpl.libxrpl.resource
xrpl.libxrpl.server
xrpl.libxrpl
antithesis-sdk-cpp
EXPORT RippleExports
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin
INCLUDES DESTINATION include)
install(
DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl"
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}"
)
install(CODE "
set(CMAKE_MODULE_PATH \"${CMAKE_MODULE_PATH}\")
include(create_symbolic_link)
create_symbolic_link(xrpl \
\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_INCLUDEDIR}/ripple)
")
install (EXPORT RippleExports
FILE RippleTargets.cmake
NAMESPACE Ripple::
DESTINATION lib/cmake/ripple)
include (CMakePackageConfigHelpers)
write_basic_package_version_file (
RippleConfigVersion.cmake
VERSION ${rippled_version}
COMPATIBILITY SameMajorVersion)
if (is_root_project AND TARGET rippled)
install (TARGETS rippled RUNTIME DESTINATION bin)
set_target_properties(rippled PROPERTIES INSTALL_RPATH_USE_LINK_PATH ON)
# sample configs should not overwrite existing files
# install if-not-exists workaround as suggested by
# https://cmake.org/Bug/view.php?id=12646
install(CODE "
macro (copy_if_not_exists SRC DEST NEWNAME)
if (NOT EXISTS \"\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
file (INSTALL FILE_PERMISSIONS OWNER_READ OWNER_WRITE DESTINATION \"\${CMAKE_INSTALL_PREFIX}/\${DEST}\" FILES \"\${SRC}\" RENAME \"\${NEWNAME}\")
else ()
message (\"-- Skipping : \$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
endif ()
endmacro()
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/rippled-example.cfg\" etc rippled.cfg)
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/validators-example.txt\" etc validators.txt)
")
install(CODE "
set(CMAKE_MODULE_PATH \"${CMAKE_MODULE_PATH}\")
include(create_symbolic_link)
create_symbolic_link(rippled${suffix} \
\${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_BINDIR}/xrpld${suffix})
")
endif ()
install (
FILES
${CMAKE_CURRENT_SOURCE_DIR}/cmake/RippleConfig.cmake
${CMAKE_CURRENT_BINARY_DIR}/RippleConfigVersion.cmake
DESTINATION lib/cmake/ripple)

View File

@@ -1,96 +0,0 @@
#[===================================================================[
rippled compile options/settings via an interface library
#]===================================================================]
add_library (opts INTERFACE)
add_library (Ripple::opts ALIAS opts)
target_compile_definitions (opts
INTERFACE
BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS
$<$<BOOL:${boost_show_deprecated}>:
BOOST_ASIO_NO_DEPRECATED
BOOST_FILESYSTEM_NO_DEPRECATED
>
$<$<NOT:$<BOOL:${boost_show_deprecated}>>:
BOOST_COROUTINES_NO_DEPRECATION_WARNING
BOOST_BEAST_ALLOW_DEPRECATED
BOOST_FILESYSTEM_DEPRECATED
>
$<$<BOOL:${beast_no_unit_test_inline}>:BEAST_NO_UNIT_TEST_INLINE=1>
$<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1>
$<$<BOOL:${single_io_service_thread}>:RIPPLE_SINGLE_IO_SERVICE_THREAD=1>)
target_compile_options (opts
INTERFACE
$<$<AND:$<BOOL:${is_gcc}>,$<COMPILE_LANGUAGE:CXX>>:-Wsuggest-override>
$<$<BOOL:${perf}>:-fno-omit-frame-pointer>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${coverage}>>:-g --coverage -fprofile-abs-path>
$<$<AND:$<BOOL:${is_clang}>,$<BOOL:${coverage}>>:-g --coverage>
$<$<BOOL:${profile}>:-pg>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
target_link_libraries (opts
INTERFACE
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${coverage}>>:-g --coverage -fprofile-abs-path>
$<$<AND:$<BOOL:${is_clang}>,$<BOOL:${coverage}>>:-g --coverage>
$<$<BOOL:${profile}>:-pg>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
if(jemalloc)
find_package(jemalloc REQUIRED)
target_compile_definitions(opts INTERFACE PROFILE_JEMALLOC)
target_link_libraries(opts INTERFACE jemalloc::jemalloc)
endif ()
if (san)
target_compile_options (opts
INTERFACE
# sanitizers recommend minimum of -O1 for reasonable performance
$<$<CONFIG:Debug>:-O1>
${SAN_FLAG}
-fno-omit-frame-pointer)
target_compile_definitions (opts
INTERFACE
$<$<STREQUAL:${san},address>:SANITIZER=ASAN>
$<$<STREQUAL:${san},thread>:SANITIZER=TSAN>
$<$<STREQUAL:${san},memory>:SANITIZER=MSAN>
$<$<STREQUAL:${san},undefined>:SANITIZER=UBSAN>)
target_link_libraries (opts INTERFACE ${SAN_FLAG} ${SAN_LIB})
endif ()
#[===================================================================[
rippled transitive library deps via an interface library
#]===================================================================]
add_library (ripple_syslibs INTERFACE)
add_library (Ripple::syslibs ALIAS ripple_syslibs)
target_link_libraries (ripple_syslibs
INTERFACE
$<$<BOOL:${MSVC}>:
legacy_stdio_definitions.lib
Shlwapi
kernel32
user32
gdi32
winspool
comdlg32
advapi32
shell32
ole32
oleaut32
uuid
odbc32
odbccp32
crypt32
>
$<$<NOT:$<BOOL:${MSVC}>>:dl>
$<$<NOT:$<OR:$<BOOL:${MSVC}>,$<BOOL:${APPLE}>>>:rt>)
if (NOT MSVC)
set (THREADS_PREFER_PTHREAD_FLAG ON)
find_package (Threads)
target_link_libraries (ripple_syslibs INTERFACE Threads::Threads)
endif ()
add_library (ripple_libs INTERFACE)
add_library (Ripple::libs ALIAS ripple_libs)
target_link_libraries (ripple_libs INTERFACE Ripple::syslibs)

View File

@@ -1,80 +0,0 @@
#[===================================================================[
convenience variables and sanity checks
#]===================================================================]
include(ProcessorCount)
if (NOT ep_procs)
ProcessorCount(ep_procs)
if (ep_procs GREATER 1)
# never use more than half of cores for EP builds
math (EXPR ep_procs "${ep_procs} / 2")
message (STATUS "Using ${ep_procs} cores for ExternalProject builds.")
endif ()
endif ()
get_property(is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
set (CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "" FORCE)
if (NOT is_multiconfig)
if (NOT CMAKE_BUILD_TYPE)
message (STATUS "Build type not specified - defaulting to Release")
set (CMAKE_BUILD_TYPE Release CACHE STRING "build type" FORCE)
elseif (NOT (CMAKE_BUILD_TYPE STREQUAL Debug OR CMAKE_BUILD_TYPE STREQUAL Release))
    # for simplicity, these are the only two config types we care about. Limiting
    # the build types simplifies dealing with external project builds in particular
message (FATAL_ERROR " *** Only Debug or Release build types are currently supported ***")
endif ()
endif ()
get_directory_property(has_parent PARENT_DIRECTORY)
if (has_parent)
set (is_root_project OFF)
else ()
set (is_root_project ON)
endif ()
if ("${CMAKE_CXX_COMPILER_ID}" MATCHES ".*Clang") # both Clang and AppleClang
set (is_clang TRUE)
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang" AND
CMAKE_CXX_COMPILER_VERSION VERSION_LESS 8.0)
message (FATAL_ERROR "This project requires clang 8 or later")
endif ()
# TODO min AppleClang version check ?
elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
set (is_gcc TRUE)
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 8.0)
message (FATAL_ERROR "This project requires GCC 8 or later")
endif ()
endif ()
if (CMAKE_SYSTEM_NAME STREQUAL "Linux")
set (is_linux TRUE)
else ()
set (is_linux FALSE)
endif ()
if ("$ENV{CI}" STREQUAL "true" OR "$ENV{CONTINUOUS_INTEGRATION}" STREQUAL "true")
set (is_ci TRUE)
else ()
set (is_ci FALSE)
endif ()
# check for in-source build and fail
if ("${CMAKE_CURRENT_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
message (FATAL_ERROR "Builds (in-source) are not allowed in "
"${CMAKE_CURRENT_SOURCE_DIR}. Please remove CMakeCache.txt and the CMakeFiles "
"directory from ${CMAKE_CURRENT_SOURCE_DIR} and try building in a separate directory.")
endif ()
if (MSVC AND CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
message (FATAL_ERROR "Visual Studio 32-bit build is not supported.")
endif ()
if (NOT CMAKE_SIZEOF_VOID_P EQUAL 8)
message (FATAL_ERROR "Rippled requires a 64 bit target architecture.\n"
"The most likely cause of this warning is trying to build rippled with a 32-bit OS.")
endif ()
if (APPLE AND NOT HOMEBREW)
find_program (HOMEBREW brew)
endif ()

View File

@@ -1,133 +0,0 @@
#[===================================================================[
declare user options/settings
#]===================================================================]
include(ProcessorCount)
ProcessorCount(PROCESSOR_COUNT)
option(assert "Enables asserts, even in release builds" OFF)
option(xrpld "Build xrpld" ON)
option(tests "Build tests" ON)
option(unity "Creates a build using UNITY support in cmake. This is the default" ON)
if(unity)
if(NOT is_ci)
set(CMAKE_UNITY_BUILD_BATCH_SIZE 15 CACHE STRING "")
endif()
endif()
if(is_clang AND is_linux)
option(voidstar "Enable Antithesis instrumentation." OFF)
endif()
if(is_gcc OR is_clang)
option(coverage "Generates coverage info." OFF)
option(profile "Add profiling flags" OFF)
set(coverage_test_parallelism "${PROCESSOR_COUNT}" CACHE STRING
"Unit tests parallelism for the purpose of coverage report.")
set(coverage_format "html-details" CACHE STRING
"Output format of the coverage report.")
set(coverage_extra_args "" CACHE STRING
"Additional arguments to pass to gcovr.")
set(coverage_test "" CACHE STRING
"On gcc & clang, the specific unit test(s) to run for coverage. Default is all tests.")
if(coverage_test AND NOT coverage)
set(coverage ON CACHE BOOL "gcc/clang only" FORCE)
endif()
option(wextra "compile with extra gcc/clang warnings enabled" ON)
else()
set(profile OFF CACHE BOOL "gcc/clang only" FORCE)
set(coverage OFF CACHE BOOL "gcc/clang only" FORCE)
set(wextra OFF CACHE BOOL "gcc/clang only" FORCE)
endif()
if(is_linux)
option(BUILD_SHARED_LIBS "build shared ripple libraries" OFF)
option(static "link protobuf, openssl, libc++, and boost statically" ON)
option(perf "Enables flags that assist with perf recording" OFF)
option(use_gold "enables detection of gold (binutils) linker" ON)
option(use_mold "enables detection of mold (binutils) linker" ON)
else()
# we are not ready to allow shared-libs on windows because it would require
# export declarations. On macos it's more feasible, but static openssl
# produces odd linker errors, thus we disable shared lib builds for now.
set(BUILD_SHARED_LIBS OFF CACHE BOOL "build shared ripple libraries - OFF for win/macos" FORCE)
set(static ON CACHE BOOL "static link, linux only. ON for WIN/macos" FORCE)
set(perf OFF CACHE BOOL "perf flags, linux only" FORCE)
set(use_gold OFF CACHE BOOL "gold linker, linux only" FORCE)
set(use_mold OFF CACHE BOOL "mold linker, linux only" FORCE)
endif()
if(is_clang)
option(use_lld "enables detection of lld linker" ON)
else()
set(use_lld OFF CACHE BOOL "try lld linker, clang only" FORCE)
endif()
option(jemalloc "Enables jemalloc for heap profiling" OFF)
option(werr "treat warnings as errors" OFF)
option(local_protobuf
"Force a local build of protobuf instead of looking for an installed version." OFF)
option(local_grpc
"Force a local build of gRPC instead of looking for an installed version." OFF)
# this one is a string and therefore can't be an option
set(san "" CACHE STRING "On gcc & clang, add sanitizer instrumentation")
set_property(CACHE san PROPERTY STRINGS ";undefined;memory;address;thread")
if(san)
string(TOLOWER ${san} san)
set(SAN_FLAG "-fsanitize=${san}")
set(SAN_LIB "")
if(is_gcc)
if(san STREQUAL "address")
set(SAN_LIB "asan")
elseif(san STREQUAL "thread")
set(SAN_LIB "tsan")
elseif(san STREQUAL "memory")
set(SAN_LIB "msan")
elseif(san STREQUAL "undefined")
set(SAN_LIB "ubsan")
endif()
endif()
set(_saved_CRL ${CMAKE_REQUIRED_LIBRARIES})
set(CMAKE_REQUIRED_LIBRARIES "${SAN_FLAG};${SAN_LIB}")
check_cxx_compiler_flag(${SAN_FLAG} COMPILER_SUPPORTS_SAN)
set(CMAKE_REQUIRED_LIBRARIES ${_saved_CRL})
if(NOT COMPILER_SUPPORTS_SAN)
message(FATAL_ERROR "${san} sanitizer does not seem to be supported by your compiler")
endif()
endif()
set(container_label "" CACHE STRING "tag to use for package building containers")
option(packages_only
"ONLY generate package building targets. This is special use-case and almost \
certainly not what you want. Use with caution as you won't be able to build \
any compiled targets locally." OFF)
option(have_package_container
"Sometimes you already have the tagged container you want to use for package \
building and you don't want docker to rebuild it. This flag will detach the \
dependency of the package build from the container build. It's an advanced \
use case and most likely you should not be touching this flag." OFF)
# the remaining options are obscure and rarely used
option(beast_no_unit_test_inline
"Prevents unit test definitions from being inserted into global table"
OFF)
option(single_io_service_thread
"Restricts the number of threads calling io_service::run to one. \
This can be useful when debugging."
OFF)
option(boost_show_deprecated
"Allow boost to fail on deprecated usage. Only useful if you're trying\
to find deprecated calls."
OFF)
option(beast_hashers
"Use local implementations for sha/ripemd hashes (experimental, not recommended)"
OFF)
if(WIN32)
option(beast_disable_autolink "Disables autolinking of system libraries on WIN32" OFF)
else()
set(beast_disable_autolink OFF CACHE BOOL "WIN32 only" FORCE)
endif()
if(coverage)
message(STATUS "coverage build requested - forcing Debug build")
set(CMAKE_BUILD_TYPE Debug CACHE STRING "build type" FORCE)
endif()
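For context, a minimal sketch of how the SAN_FLAG/SAN_LIB values computed above might be applied once the compiler check passes; the `opts` interface target is assumed to be defined elsewhere in the build, and -fno-omit-frame-pointer is an optional extra commonly paired with sanitizers:

if(san)
    # instrument both the compile and link steps with the requested sanitizer
    target_compile_options(opts INTERFACE ${SAN_FLAG} -fno-omit-frame-pointer)
    target_link_libraries(opts INTERFACE ${SAN_FLAG} ${SAN_LIB})
endif()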


@@ -1,22 +0,0 @@
option (validator_keys "Enables building of validator-keys-tool as a separate target (imported via FetchContent)" OFF)
if (validator_keys)
git_branch (current_branch)
# default to tracking VK master branch unless we are on release
if (NOT (current_branch STREQUAL "release"))
set (current_branch "master")
endif ()
message (STATUS "tracking ValidatorKeys branch: ${current_branch}")
FetchContent_Declare (
validator_keys_src
GIT_REPOSITORY https://github.com/ripple/validator-keys-tool.git
GIT_TAG "${current_branch}"
)
FetchContent_GetProperties (validator_keys_src)
if (NOT validator_keys_src_POPULATED)
message (STATUS "Pausing to download ValidatorKeys...")
FetchContent_Populate (validator_keys_src)
endif ()
add_subdirectory (${validator_keys_src_SOURCE_DIR} ${CMAKE_BINARY_DIR}/validator-keys)
endif ()
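For reference, on CMake 3.14 or later the declare/populate/add_subdirectory dance above can be collapsed into a single call, at the cost of losing the custom ${CMAKE_BINARY_DIR}/validator-keys binary directory:

FetchContent_Declare(validator_keys_src
    GIT_REPOSITORY https://github.com/ripple/validator-keys-tool.git
    GIT_TAG "${current_branch}")
FetchContent_MakeAvailable(validator_keys_src)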


@@ -1,15 +0,0 @@
#[===================================================================[
read version from source
#]===================================================================]
file(STRINGS src/libxrpl/protocol/BuildInfo.cpp BUILD_INFO)
foreach(line_ ${BUILD_INFO})
if(line_ MATCHES "versionString[ ]*=[ ]*\"(.+)\"")
set(rippled_version ${CMAKE_MATCH_1})
endif()
endforeach()
if(rippled_version)
message(STATUS "rippled version: ${rippled_version}")
else()
message(FATAL_ERROR "unable to determine rippled version")
endif()
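The regex above can be sanity-checked in isolation; a small sketch, using a version literal from this release purely for illustration:

set(line_ "versionString = \"0.24.0-rc2\"")  # stand-in for a BuildInfo.cpp line
if(line_ MATCHES "versionString[ ]*=[ ]*\"(.+)\"")
    message(STATUS "parsed version: ${CMAKE_MATCH_1}")  # prints 0.24.0-rc2
endif()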


@@ -1,37 +0,0 @@
include(isolate_headers)
# Create an OBJECT library target named
#
# ${PROJECT_NAME}.lib${parent}.${name}
#
# with sources in src/lib${parent}/${name}
# and headers in include/${parent}/${name}
# that cannot include headers from other directories in include/
# unless they come through linked libraries.
#
# add_module(parent a)
# add_module(parent b)
# target_link_libraries(project.libparent.b PUBLIC project.libparent.a)
function(add_module parent name)
set(target ${PROJECT_NAME}.lib${parent}.${name})
add_library(${target} OBJECT)
file(GLOB_RECURSE sources CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}/*.cpp"
)
target_sources(${target} PRIVATE ${sources})
target_include_directories(${target} PUBLIC
"$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>"
)
isolate_headers(
${target}
"${CMAKE_CURRENT_SOURCE_DIR}/include"
"${CMAKE_CURRENT_SOURCE_DIR}/include/${parent}/${name}"
PUBLIC
)
isolate_headers(
${target}
"${CMAKE_CURRENT_SOURCE_DIR}/src"
"${CMAKE_CURRENT_SOURCE_DIR}/src/lib${parent}/${name}"
PRIVATE
)
endfunction()
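A hedged usage sketch, borrowing the `jobqueue` module that the isolate_headers notes later in this diff use as their running example, and assuming PROJECT_NAME is `xrpl`:

add_module(xrpl jobqueue)
# expects sources in  src/libxrpl/jobqueue/*.cpp
# expects headers in  include/xrpl/jobqueue/
# defines the target  xrpl.libxrpl.jobqueue (an OBJECT library)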


@@ -1,20 +0,0 @@
# file(CREATE_SYMLINK) only works on Windows with administrator privileges.
# https://stackoverflow.com/a/61244115/618906
function(create_symbolic_link target link)
if(WIN32)
if(NOT IS_SYMLINK "${link}")
if(NOT IS_ABSOLUTE "${target}")
# Relative links do not work on Windows.
set(target "${link}/../${target}")
endif()
file(TO_NATIVE_PATH "${target}" target)
file(TO_NATIVE_PATH "${link}" link)
execute_process(COMMAND cmd.exe /c mklink /J "${link}" "${target}")
endif()
else()
file(CREATE_LINK "${target}" "${link}" SYMBOLIC)
endif()
if(NOT IS_SYMLINK "${link}")
message(ERROR "failed to create symlink: <${link}>")
endif()
endfunction()
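For illustration, the kind of call that isolate_headers (later in this diff) makes; the paths here are hypothetical:

create_symbolic_link(
    "${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl/jobqueue"           # target
    "${CMAKE_CURRENT_BINARY_DIR}/modules/example/xrpl/jobqueue")  # link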


@@ -1,53 +0,0 @@
find_package(Boost 1.82 REQUIRED
COMPONENTS
chrono
container
context
coroutine
date_time
filesystem
json
program_options
regex
system
thread
)
add_library(ripple_boost INTERFACE)
add_library(Ripple::boost ALIAS ripple_boost)
if(XCODE)
target_include_directories(ripple_boost BEFORE INTERFACE ${Boost_INCLUDE_DIRS})
target_compile_options(ripple_boost INTERFACE --system-header-prefix="boost/")
else()
target_include_directories(ripple_boost SYSTEM BEFORE INTERFACE ${Boost_INCLUDE_DIRS})
endif()
target_link_libraries(ripple_boost
INTERFACE
Boost::boost
Boost::chrono
Boost::container
Boost::coroutine
Boost::date_time
Boost::filesystem
Boost::json
Boost::program_options
Boost::regex
Boost::system
Boost::thread)
if(Boost_COMPILER)
target_link_libraries(ripple_boost INTERFACE Boost::disable_autolinking)
endif()
if(san AND is_clang)
# TODO: gcc does not support -fsanitize-blacklist...can we do something else
# for gcc ?
if(NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)
get_target_property(Boost_INCLUDE_DIRS Boost::headers INTERFACE_INCLUDE_DIRECTORIES)
endif()
message(STATUS "Adding [${Boost_INCLUDE_DIRS}] to sanitizer blacklist")
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt "src:${Boost_INCLUDE_DIRS}/*")
target_compile_options(opts
INTERFACE
# ignore boost headers for sanitizing
-fsanitize-blacklist=${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt)
endif()


@@ -1,52 +0,0 @@
#[===================================================================[
NIH dep: wasmedge: WebAssembly runtime for hooks.
#]===================================================================]
if (is_root_project) # WasmEdge not needed in the case of xrpl_core inclusion build
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
set(OLD_DEBUG_POSTFIX ${CMAKE_DEBUG_POSTFIX})
set(CMAKE_DEBUG_POSTFIX _d)
endif ()
add_library (wasmedge INTERFACE)
ExternalProject_Add (wasmedge_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/WasmEdge/WasmEdge.git
GIT_TAG 0.9.0
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
$<$<VERSION_GREATER_EQUAL:${CMAKE_VERSION},3.12>:--parallel ${ep_procs}>
TEST_COMMAND ""
INSTALL_COMMAND ""
BUILD_BYPRODUCTS
<BINARY_DIR>/lib/api/libwasmedge_c${CMAKE_DEBUG_POSTFIX}.so
)
ExternalProject_Get_Property (wasmedge_src BINARY_DIR)
set (wasmedge_src_BINARY_DIR "${BINARY_DIR}")
add_dependencies (wasmedge wasmedge_src)
target_include_directories (ripple_libs SYSTEM INTERFACE "${wasmedge_src_BINARY_DIR}/include/api")
target_link_libraries (wasmedge
INTERFACE
Boost::thread
Boost::system)
target_link_libraries (wasmedge INTERFACE
"${wasmedge_src_BINARY_DIR}/lib/api/libwasmedge_c${CMAKE_DEBUG_POSTFIX}.so")
target_link_libraries (ripple_libs INTERFACE wasmedge)
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
set(CMAKE_DEBUG_POSTFIX ${OLD_DEBUG_POSTFIX})
endif ()
endif ()


@@ -1,48 +0,0 @@
include(create_symbolic_link)
# Consider include directory B nested under prefix A:
#
# /path/to/A/then/to/B/...
#
# Call C the relative path from A to B.
# C is what we want to write in `#include` directives:
#
# #include <then/to/B/...>
#
# Examples, all from the `jobqueue` module:
#
# - Library public headers:
# B = /include/xrpl/jobqueue
# A = /include/
# C = xrpl/jobqueue
#
# - Library private headers:
# B = /src/libxrpl/jobqueue
# A = /src/
# C = libxrpl/jobqueue
#
# - Test private headers:
# B = /tests/jobqueue
# A = /
# C = tests/jobqueue
#
# To isolate headers from each other,
# we want to create a symlink Y that points to B,
# within a subdirectory X of the `CMAKE_BINARY_DIR`,
# that has the same relative path C between X and Y,
# and then add X as an include directory of the target,
# sometimes `PUBLIC` and sometimes `PRIVATE`.
# The Cs are all guaranteed to be unique.
# We can guarantee a unique X per target by using
# `${CMAKE_CURRENT_BINARY_DIR}/include/${target}`.
#
# isolate_headers(target A B scope)
function(isolate_headers target A B scope)
file(RELATIVE_PATH C "${A}" "${B}")
set(X "${CMAKE_CURRENT_BINARY_DIR}/modules/${target}")
set(Y "${X}/${C}")
cmake_path(GET Y PARENT_PATH parent)
file(MAKE_DIRECTORY "${parent}")
create_symbolic_link("${B}" "${Y}")
target_include_directories(${target} ${scope} "$<BUILD_INTERFACE:${X}>")
endfunction()
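Tying the variables together with the library-public-headers example from the comment above (the target name is illustrative):

isolate_headers(xrpl.libxrpl.jobqueue
    "${CMAKE_CURRENT_SOURCE_DIR}/include"                # A
    "${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl/jobqueue"  # B
    PUBLIC)
# C = xrpl/jobqueue
# X = ${CMAKE_CURRENT_BINARY_DIR}/modules/xrpl.libxrpl.jobqueue
# Y = ${X}/xrpl/jobqueue, a symlink back to B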


@@ -1,24 +0,0 @@
# Link a library to its modules (see: `add_module`)
# and remove the module sources from the library's sources.
#
# add_module(parent a)
# add_module(parent b)
# target_link_libraries(project.libparent.b PUBLIC project.libparent.a)
# add_library(project.libparent)
# target_link_modules(parent PUBLIC a b)
function(target_link_modules parent scope)
set(library ${PROJECT_NAME}.lib${parent})
foreach(name ${ARGN})
set(module ${library}.${name})
get_target_property(sources ${library} SOURCES)
list(LENGTH sources before)
get_target_property(dupes ${module} SOURCES)
list(LENGTH dupes expected)
list(REMOVE_ITEM sources ${dupes})
list(LENGTH sources after)
math(EXPR actual "${before} - ${after}")
message(STATUS "${module} with ${expected} sources took ${actual} sources from ${library}")
set_target_properties(${library} PROPERTIES SOURCES "${sources}")
target_link_libraries(${library} ${scope} ${module})
endforeach()
endfunction()


@@ -1,62 +0,0 @@
find_package(Protobuf REQUIRED)
# .proto files import each other like this:
#
# import "path/to/file.proto";
#
# For the protobuf compiler to find these imports,
# the parent directory of "path" must be in the import path.
#
# When generating C++,
# it turns into an include statement like this:
#
# #include "path/to/file.pb.h"
#
# and the header is generated at a path relative to the output directory
# that matches the given .proto path relative to the source directory
# minus the first matching prefix on the import path.
#
# In other words, a file `include/package/path/to/file.proto`
# with import path [`include/package`, `include`]
# will generate files `output/path/to/file.pb.{h,cc}`
# with includes like `#include "path/to/file.pb.h"`.
#
# During build, the generated files can find each other if the output
# directory is an include directory, but we want to install that directory
# under our package's include directory (`include/package`), not as a sibling.
# After install, they can find each other if that subdirectory is an include
# directory.
# Add protocol buffer sources to an existing library target.
# target:
# The name of the library target.
# prefix:
# The install prefix for headers relative to `CMAKE_INSTALL_INCLUDEDIR`.
# This prefix should appear at the start of all your consumer includes.
# ARGN:
# A list of .proto files.
function(target_protobuf_sources target prefix)
set(dir "${CMAKE_CURRENT_BINARY_DIR}/pb-${target}")
file(MAKE_DIRECTORY "${dir}/${prefix}")
protobuf_generate(
TARGET ${target}
PROTOC_OUT_DIR "${dir}/${prefix}"
"${ARGN}"
)
target_include_directories(${target} SYSTEM PUBLIC
# Allows #include <package/path/to/file.pb.h> used by consumer files.
$<BUILD_INTERFACE:${dir}>
# Allows #include "path/to/file.pb.h" used by generated files.
$<BUILD_INTERFACE:${dir}/${prefix}>
# Allows #include <package/path/to/file.pb.h> used by consumer files.
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>
# Allows #include "path/to/file.pb.h" used by generated files.
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}/${prefix}>
)
install(
DIRECTORY ${dir}/
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}
FILES_MATCHING PATTERN "*.h"
)
endfunction()
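A hedged usage sketch matching the layout described above; the xrpl.libpb target name also appears in the Conan recipe later in this diff, while the .proto path is hypothetical:

add_library(xrpl.libpb OBJECT)
target_protobuf_sources(xrpl.libpb xrpl/proto
    "${CMAKE_CURRENT_SOURCE_DIR}/include/xrpl/proto/example.proto")
# generates pb-xrpl.libpb/xrpl/proto/example.pb.{h,cc} and installs the
# header under ${CMAKE_INSTALL_INCLUDEDIR}/xrpl/proto/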


@@ -1,179 +0,0 @@
from conan import ConanFile
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
import re
class Xrpl(ConanFile):
name = 'xrpl'
license = 'ISC'
author = 'John Freeman <jfreeman@ripple.com>'
url = 'https://github.com/xrplf/rippled'
description = 'The XRP Ledger'
settings = 'os', 'compiler', 'build_type', 'arch'
options = {
'assertions': [True, False],
'coverage': [True, False],
'fPIC': [True, False],
'jemalloc': [True, False],
'rocksdb': [True, False],
'shared': [True, False],
'static': [True, False],
'tests': [True, False],
'unity': [True, False],
'xrpld': [True, False],
}
requires = [
'date/3.0.1',
'grpc/1.50.1',
'libarchive/3.6.2',
'nudb/2.0.8',
'openssl/1.1.1u',
'soci/4.0.3',
'wasmedge/0.9.0',
'xxhash/0.8.2',
'zlib/1.2.13',
]
tool_requires = [
'protobuf/3.21.9',
]
default_options = {
'assertions': False,
'coverage': False,
'fPIC': True,
'jemalloc': False,
'rocksdb': True,
'shared': False,
'static': True,
'tests': False,
'unity': False,
'xrpld': False,
'date/*:header_only': True,
'grpc/*:shared': False,
'grpc/*:secure': True,
'libarchive/*:shared': False,
'libarchive/*:with_acl': False,
'libarchive/*:with_bzip2': False,
'libarchive/*:with_cng': False,
'libarchive/*:with_expat': False,
'libarchive/*:with_iconv': False,
'libarchive/*:with_libxml2': False,
'libarchive/*:with_lz4': True,
'libarchive/*:with_lzma': False,
'libarchive/*:with_lzo': False,
'libarchive/*:with_nettle': False,
'libarchive/*:with_openssl': False,
'libarchive/*:with_pcreposix': False,
'libarchive/*:with_xattr': False,
'libarchive/*:with_zlib': False,
'lz4/*:shared': False,
'openssl/*:shared': False,
'protobuf/*:shared': False,
'protobuf/*:with_zlib': True,
'rocksdb/*:enable_sse': False,
'rocksdb/*:lite': False,
'rocksdb/*:shared': False,
'rocksdb/*:use_rtti': True,
'rocksdb/*:with_jemalloc': False,
'rocksdb/*:with_lz4': True,
'rocksdb/*:with_snappy': True,
'snappy/*:shared': False,
'soci/*:shared': False,
'soci/*:with_sqlite3': True,
'soci/*:with_boost': True,
'xxhash/*:shared': False,
}
def set_version(self):
path = f'{self.recipe_folder}/src/libxrpl/protocol/BuildInfo.cpp'
regex = r'versionString\s?=\s?\"(.*)\"'
with open(path, 'r') as file:
matches = (re.search(regex, line) for line in file)
match = next(m for m in matches if m)
self.version = match.group(1)
def configure(self):
if self.settings.compiler == 'apple-clang':
self.options['boost'].visibility = 'global'
def requirements(self):
self.requires('boost/1.82.0', force=True)
self.requires('lz4/1.9.3', force=True)
self.requires('protobuf/3.21.9', force=True)
self.requires('sqlite3/3.42.0', force=True)
if self.options.jemalloc:
self.requires('jemalloc/5.3.0')
if self.options.rocksdb:
self.requires('rocksdb/6.29.5')
exports_sources = (
'CMakeLists.txt',
'bin/getRippledInfo',
'cfg/*',
'cmake/*',
'external/*',
'include/*',
'src/*',
)
def layout(self):
cmake_layout(self)
# Fix this setting to follow the default introduced in Conan 1.48
# to align with our build instructions.
self.folders.generators = 'build/generators'
generators = 'CMakeDeps'
def generate(self):
tc = CMakeToolchain(self)
tc.variables['tests'] = self.options.tests
tc.variables['assert'] = self.options.assertions
tc.variables['coverage'] = self.options.coverage
tc.variables['jemalloc'] = self.options.jemalloc
tc.variables['rocksdb'] = self.options.rocksdb
tc.variables['BUILD_SHARED_LIBS'] = self.options.shared
tc.variables['static'] = self.options.static
tc.variables['unity'] = self.options.unity
tc.variables['xrpld'] = self.options.xrpld
tc.generate()
def build(self):
cmake = CMake(self)
cmake.verbose = True
cmake.configure()
cmake.build()
def package(self):
cmake = CMake(self)
cmake.verbose = True
cmake.install()
def package_info(self):
libxrpl = self.cpp_info.components['libxrpl']
libxrpl.libs = [
'xrpl',
'xrpl.libpb',
'ed25519',
'secp256k1',
]
# TODO: Fix the protobufs to include each other relative to
# `include/`, not `include/ripple/proto/`.
libxrpl.includedirs = ['include', 'include/ripple/proto']
libxrpl.requires = [
'boost::boost',
'date::date',
'grpc::grpc++',
'libarchive::libarchive',
'lz4::lz4',
'nudb::nudb',
'openssl::crypto',
'protobuf::libprotobuf',
'soci::soci',
'sqlite3::sqlite',
'xxhash::xxhash',
'zlib::zlib',
]
if self.options.rocksdb:
libxrpl.requires.append('rocksdb::librocksdb')
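On the consuming side, a minimal CMake sketch, assuming the package was installed through Conan with the CMakeDeps generator and that the libxrpl component above surfaces as the conventional xrpl::libxrpl target:

find_package(xrpl REQUIRED)
add_executable(app main.cpp)
target_link_libraries(app PRIVATE xrpl::libxrpl)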

doc/CHANGELOG Normal file

@@ -0,0 +1,19 @@
VFALCO NOTE - This file appears to be unmaintained.
Critical protocol changes
-------------------------
Mon Apr 8 16:13:12 PDT 2013
* The JSON field "inLedger" changing to "ledger_index"
Previous
* The JSON field "metaData" changing to "meta".
* RPC ledger will no longer take "ledger", use "ledger_hash" or "ledger_index".
* "ledgerClose" events:
** "hash" DEPRECATED: use "ledger_hash"
** "seqNum" DEPRECATED: use "ledger_index"
** "closeTime" DEPRECATED: use "close" or "close_human"
* stream "rt_accounts" --> "accounts_proposed"
* stream "rt_transactions" --> "transactions_proposed"
* subscribe "username" --> "url_username"
* subscribe "password" --> "url_password"


@@ -1,6 +1,4 @@
# Code Style Cheat Sheet
## Form
# Form
- One class per header file.
- Place each data member on its own line.
@@ -13,7 +11,7 @@
- Order class declarations as types, public, protected, private, then data.
- Prefer 'private' over 'protected'
## Function
# Function
- Minimize external dependencies
* Pass options in the ctor instead of using theConfig

doc/CodingStyle.md Normal file

@@ -0,0 +1,217 @@
--------------------------------------------------------------------------------
# Coding Standards
Coding standards used here are extremely strict and consistent. The style
evolved gradually over the years, incorporating generally acknowledged
best-practice C++ advice, experience, and personal preference.
## Don't Repeat Yourself!
The [Don't Repeat Yourself][1] principle summarises the essence of what it
means to write good code, in all languages, at all levels.
## Formatting
The goal of source code formatting should always be to make things as easy to
read as possible. White space is used to guide the eye so that details are not
overlooked. Blank lines are used to separate code into "paragraphs."
* No tab characters please.
* Tab stops are set to 4 spaces.
* Braces are indented in the [Allman style][2].
* Always place a space before and after all binary operators,
especially assignments (`operator=`).
* The `!` operator should always be followed by a space.
* The `~` operator should be preceded by a space, but not followed by one.
* The `++` and `--` operators should have no spaces between the operator and
the operand.
* A space never appears before a comma, and always appears after a comma.
* Always place a space before an opening parenthesis. One exception is if
the parentheses are empty.
* Don't put spaces after a parenthesis. A typical member function call might
look like this: `foobar (1, 2, 3);`
* In general, leave a blank line before an `if` statement.
* In general, leave a blank line after a closing brace `}`.
* Do not place code or comments on the same line as any opening or
closing brace.
* Do not write `if` statements all-on-one-line. The exception to this is when
you've got a sequence of similar `if` statements, and are aligning them all
vertically to highlight their similarities.
* In an `if-else` statement, if you surround one half of the statement with
braces, you also need to put braces around the other half, to match.
* When writing a pointer type, use this spacing: `SomeObject* myObject`.
Technically, a more correct spacing would be `SomeObject *myObject`, but
it makes more sense for the asterisk to be grouped with the type name,
since being a pointer is part of the type, not the variable name. The only
time that this can lead to any problems is when you're declaring multiple
pointers of the same type in the same statement - which leads on to the next
rule:
* When declaring multiple pointers, never do so in a single statement, e.g.
`SomeObject* p1, *p2;` - instead, always split them out onto separate lines
and write the type name again, to make it quite clear what's going on, and
avoid the danger of missing out any vital asterisks.
* The previous point also applies to references, so always put the `&` next to
the type rather than the variable, e.g. `void foo (Thing const& thing)`. And
don't put a space on both sides of the `*` or `&` - always put a space after
it, but never before it.
* The word `const` should be placed to the right of the thing that it modifies,
for consistency. For example `int const` refers to an int which is const.
`int const*` is a pointer to an int which is const. `int *const` is a const
pointer to an int.
* Always place a space in between the template angle brackets and the type
name. Template code is already hard enough to read!
## Naming conventions
* Member variables and method names are written with camel-case, and never
begin with a capital letter.
* Class names are also written in camel-case, but always begin with a capital
letter.
* For global variables... well, you shouldn't have any, so it doesn't matter.
* Class data members begin with `m_`, static data members begin with `s_`.
Global variables begin with `g_`. This is so the scope of the corresponding
declaration can be easily determined.
* Avoid underscores in your names, especially leading or trailing underscores.
In particular, leading underscores should be avoided, as these are often used
in standard library code, so to use them in your own code looks quite jarring.
* If you really have to write a macro for some reason, then make it all caps,
with underscores to separate the words. And obviously make sure that its name
is unlikely to clash with symbols used in other libraries or 3rd party code.
## Types, const-correctness
* If a method can (and should!) be const, make it const!
* If a method definitely doesn't throw an exception (be careful!), mark it as
`noexcept`
* When returning a temporary object, e.g. a String, the returned object should
be non-const, so that if the class has a C++11 move operator, it can be used.
* If a local variable can be const, then make it const!
* Remember that pointers can be const as well as primitives; For example, if
you have a `char*` whose contents are going to be altered, you may still be
able to make the pointer itself const, e.g. `char* const foobar = getFoobar();`.
* Do not declare all your local variables at the top of a function or method
(i.e. in the old-fashioned C-style). Declare them at the last possible moment,
and give them as small a scope as possible.
* Object parameters should be passed as `const&` wherever possible. Only
pass a parameter as a copy-by-value object if you really need to mutate
a local copy inside the method, and if making a local copy inside the method
would be difficult.
* Use portable `for()` loop variable scoping (i.e. do not have multiple for
loops in the same scope that each re-declare the same variable name, as
this fails on older compilers)
* When you're testing a pointer to see if it's null, never write
`if (myPointer)`. Always avoid that implicit cast-to-bool by writing it more
fully: `if (myPointer != nullptr)`. And likewise, never ever write
`if (! myPointer)`, instead always write `if (myPointer == nullptr)`.
It is more readable that way.
* Avoid C-style casts except when converting between primitive numeric types.
Some people would say "avoid C-style casts altogether", but `static_cast` is
a bit unreadable when you just want to cast an `int` to a `float`. But
whenever a pointer is involved, or a non-primitive object, always use
`static_cast`. And when you're reinterpreting data, always use
`reinterpret_cast`.
* Until the 64-bit primitive types standardized in C++11 are universally
available, it's best to stick to the `int64` and `uint64` typedefs.
## Object lifetime and ownership
* Absolutely do NOT use `delete`, `deleteAndZero`, etc. There are very very few
situations where you can't use a `ScopedPointer` or some other automatic
lifetime management class.
* Do not use `new` unless there's no alternative. Whenever you type `new`, always
treat it as a failure to find a better solution. If a local variable can be
allocated on the stack rather than the heap, then always do so.
* Do not ever use `new` or `malloc` to allocate a C++ array. Always use a
`HeapBlock` instead.
* And just to make it doubly clear: Never use `malloc` or `calloc`.
* If a parent object needs to create and own some kind of child object, always
use composition as your first choice. If that's not possible (e.g. if the
child needs a pointer to the parent for its constructor), then use a
`ScopedPointer`.
* If possible, pass an object as a reference rather than a pointer. If possible,
make it a `const` reference.
* Obviously avoid static and global values. Sometimes there's no alternative,
but if there is an alternative, then use it, no matter how much effort it
involves.
* If allocating a local POD structure (e.g. an operating-system structure in
native code), and you need to initialise it with zeros, use the `= { 0 };`
syntax as your first choice for doing this. If for some reason that's not
appropriate, use the `zerostruct()` function, or in case that isn't suitable,
use `zeromem()`. Don't use `memset()`.
## Classes
* Declare a class's public section first, and put its constructors and
destructor first. Any protected items come next, and then private ones.
* Use the most restrictive access-specifier possible for each member. Prefer
`private` over `protected`, and `protected` over `public`. Don't expose
things unnecessarily.
* Preferred positioning for any inherited classes is to put them to the right
of the class name, vertically aligned, e.g.:
class Thing : public Foo,
              private Bar
{
}
* Put a class's member variables (which should almost always be private, of course),
after all the public and protected method declarations.
* Any private methods can go towards the end of the class, after the member
variables.
* If your class does not have copy-by-value semantics, derive the class from
`Uncopyable`.
* If your class is likely to be leaked, then derive your class from
`LeakChecked<>`.
* Constructors that take a single parameter should by default be marked
`explicit`. Obviously there are cases where you do want implicit conversion,
but always think about it carefully before writing a non-explicit constructor.
* Do not use `NULL`, `null`, or 0 for a null-pointer. And especially never use
'0L', which is particularly burdensome. Use `nullptr` instead - this is the
C++2011 standard, so get used to it. There's a fallback definition for `nullptr`
in Beast, so it's always possible to use it even if your compiler isn't yet
C++2011 compliant.
* All the C++ 'guru' books and articles are full of excellent and detailed advice
on when it's best to use inheritance vs composition. If you're not already
familiar with the received wisdom in these matters, then do some reading!
## Miscellaneous
* `goto` statements should not be used at all, even if the alternative is
more verbose code. The only exception is when implementing an algorithm in
a function as a state machine.
* Don't use macros! OK, obviously there are many situations where they're the
right tool for the job, but treat them as a last resort. Certainly don't ever
use a macro just to hold a constant value or to perform any kind of function
that could have been done as a real inline function. And it goes without saying
that you should give them names which aren't going to clash with other code.
And `#undef` them after you've used them, if possible.
* When using the `++` or `--` operators, never use post-increment if
pre-increment could be used instead. Although it doesn't matter for
primitive types, it's good practice to pre-increment since this can be
much more efficient for more complex objects. In particular, if you're
writing a for loop, always use pre-increment,
e.g. `for (int i = 0; i < 10; ++i)`
* Never put an "else" statement after a "return"! This is well-explained in the
LLVM coding standards...and a couple of other very good pieces of advice from
the LLVM standards are in there as well.
* When getting a possibly-null pointer and using it only if it's non-null, limit
the scope of the pointer as much as possible - e.g. Do NOT do this:
Foo* f = getFoo ();
if (f != nullptr)
    f->doSomething ();
// other code
f->doSomething (); // oops! f may be null!
..instead, prefer to write it like this, which reduces the scope of the
pointer, making it impossible to write code that accidentally uses a null
pointer:
if (Foo* f = getFoo ())
    f->doSomethingElse ();
// f is out-of-scope here, so impossible to use it if it's null
(This also results in smaller, cleaner code)
[1]: http://en.wikipedia.org/wiki/Don%27t_repeat_yourself
[2]: http://en.wikipedia.org/wiki/Indent_style#Allman_style

doc/Doxyfile Normal file

File diff suppressed because it is too large

Binary file not shown (new image, 11 KiB).

doc/ripple-example.txt Normal file

@@ -0,0 +1,126 @@
#
# Sample ripple.txt
#
# More information: https://ripple.com/wiki/Ripple.txt
#
# Publishing this file allows a site to declare, in a trustworthy manner,
# Ripple-associated information.
#
# This file is stored on the web server for a domain. This file is searched
# for in the following order:
# - https://ripple.DOMAIN/ripple.txt
# - https://www.DOMAIN/ripple.txt
# - https://DOMAIN/ripple.txt
#
# The server MUST set the following HTTP header when serving this file:
# Access-Control-Allow-Origin: *
#
# This file is UTF-8 with DOS, UNIX, or Mac style end of lines.
# Blank lines and lines beginning with '#' are ignored.
# Undefined sections are reserved.
# No escapes are currently defined.
#
# IMPORTANT: Please remove these comments before publishing this file. Doing so
# makes the file much more readable to humans trying to find info on your site.
#
# [expires]
# Number of days after which this file should be considered expired. Clients
# are expected to check this file when they have cause to believe it has
# changed or expired. For example, if a client discovers an account declares
# that it is associated with the domain, it should check to see if this file
# has been updated. If an expiration date is not declared, the default
# expiration for this file is 30 days.
#
# Example: 30
#
# [accounts]
# Only valid in "ripple.txt". A list of accounts that are declared to be
# controlled by this domain. A client wishing to indicate that an account is
# verified as belonging to a domain may show the account with a green
# background. A client can verify that an account belongs to a domain by
# checking if the account's root entry mentions the domain containing this
# file and this file found at the domain mentions the account in this
# section.
#
# Example: rG1QQv2nh2gr7RCZ1P8YYcBUKCCN633jCn
#
# [hotwallets]
# Only valid in "ripple.txt". A list of accounts that are declared to be
# controlled by this domain and used as hotwallets.
#
# Example: rG1QQv2nh2gr7RCZ1P8YYcBUKCCN633jCn
#
# [validation_public_key]:
# Only valid in "ripple.txt". A validation public key that is declared
# to be used by this domain for validating ledgers and that it is the
# authorized signature for the domain. This does not imply that a node
# serving the Ripple protocol is available at the web host providing the
# file.
#
# Example: n9MZTnHe5D5Q2cgE8oV2usFwRqhUvEA8MwP5Mu1XVD6TxmssPRev
#
# [domain]:
# Mandatory in "ripple.txt".
# Only valid in "ripple.txt".
# Must match location of file.
#
# Example: google.com
#
# [ips]:
# Only valid in "rippled.cfg", "ripple.txt", and the referered [ips_url].
# List of ips where the Newcoin protocol is avialable.
# One ipv4 or ipv6 address per line.
# A port may optionally be specified after adding a space to the address.
# By convention, if known, IPs are listed in from most to least trusted.
#
# Examples:
# 192.168.0.1
# 192.168.0.1 3939
# 2001:0db8:0100:f101:0210:a4ff:fee3:9566
#
# [validators]:
# Only valid in "rippled.cfg", "ripple.txt", and the referred [validators_url].
# List of Ripple validators this node recommends.
#
# For domains, rippled will probe for https web servers at the specified
# domain in the following order: ripple.DOMAIN, www.DOMAIN, DOMAIN
#
# These are encoded 257-bit secp256k1 public keys.
#
# Examples:
# redstem.com
# n9KorY8QtTdRx7TVDpwnG9NvyxsDwHUKUEeDLY3AkiGncVaSXZi5
# n9MqiExBcoG19UXwoLjBJnhsxEhAZMuWwJDRdkyDz1EkEkwzQTNt John Doe
#
# [ips_url]:
# Only valid in "ripple.txt".
# https URL to a similarly formatted file containing [ips].
#
# Example: https://google.com/ripple_ips.txt
#
# [validators_url]:
# Only valid in "ripple.txt".
# https URL to a similarly formatted file containing [validators].
#
# Example: https://google.com/ripple_validators.txt
#
# [currencies]:
# This section allows a site to declare currencies it currently issues.
#
# Examples: (multiple allowed one per line)
# USD
# BTC
# LTC
#
[validation_public_key]
n9MZTnHe5D5Q2cgE8oV2usFwRqhUvEA8MwP5Mu1XVD6TxmssPRev
[domain]
loss
[ips]
192.168.0.5
[validators]
redstem.com

doc/rippled-example.cfg Normal file

@@ -0,0 +1,876 @@
#-------------------------------------------------------------------------------
#
# Rippled Server Instance Configuration Example
#
#-------------------------------------------------------------------------------
#
# Contents
#
# 1. Peer Networking
#
# 2. Websocket Networking
#
# 3. RPC Networking
#
# 4. SMS Gateway
#
# 5. Ripple Protocol
#
# 6. HTTPS Client
#
# 7. Database
#
# 8. Diagnostics
#
#-------------------------------------------------------------------------------
#
# Purpose
#
# This file documents and provides examples of all rippled server process
# configuration options. When the rippled server instance is launched, it looks
# for a file with the following name:
#
# rippled.cfg
#
# For more information on where the rippled server instance searches for
# the file please visit the Ripple wiki. Specifically, the section explaining
# the --conf command line option:
#
# https://ripple.com/wiki/Rippled#--conf.3Dpath
#
# This file should be named rippled.cfg. This file is UTF-8 with DOS, UNIX,
# or Mac style end of lines. Blank lines and lines beginning with '#' are
# ignored. Undefined sections are reserved. No escapes are currently defined.
#
#
#
#-------------------------------------------------------------------------------
#
# 1. Peer Networking
#
#-------------------
#
# These settings control security and access attributes of the Peer to Peer
# server section of the rippled process. Peer Networking implements the
# Ripple Payment protocol. It is over peer connections that transactions
# and validations are passed from machine to machine, to make up the
# components of closed ledgers.
#
#
#
# [ips]
#
# List of hostnames or ips where the Ripple protocol is served. For a starter
# list, you can either copy entries from: https://ripple.com/ripple.txt or if
# you prefer you can specify r.ripple.com 51235
#
# One IPv4 address or domain name per line is allowed. A port may optionally
# be specified after adding a space to the address. By convention, if known,
# IPs are listed in from most to least trusted.
#
# Examples:
# 192.168.0.1
# 192.168.0.1 3939
# r.ripple.com 51235
#
# This will give you a good, up-to-date list of addresses:
#
# [ips]
# r.ripple.com 51235
#
#
#
# [ips_fixed]
#
# List of IP addresses or hostnames to which rippled should always attempt to
# maintain peer connections with. This is useful for manually forming private
# networks, for example to configure a validation server that connects to the
# Ripple network through a public-facing server, or for building a set
# of cluster peers.
#
# One IPv4 address or domain name per line is allowed. A port may optionally
# be specified after adding a space to the address.
#
#
#
# [peer_ip]
#
# IP address or domain to bind to allow external connections from peers.
# Defaults to not binding, which disallows external connections from peers.
#
# Examples: 0.0.0.0 - Bind on all interfaces.
#
#
#
# [peer_port]
#
# If peer_ip is supplied, corresponding port to bind to for peer connections.
#
#
#
# [peer_port_proxy]
#
# An optional, additional listening port number for peers. Incoming
# connections on this port will be required to provide a PROXY Protocol
# handshake, described in this document (external link):
#
# http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt
#
# The PROXY Protocol is a popular method used by elastic load balancing
# service providers such as Amazon, to identify the true IP address and
# port number of external incoming connections.
#
# In addition to enabling this setting, it will also be required to
# use your provider-specific control panel or administrative web page
# to configure your server instance to receive PROXY Protocol handshakes,
# and also to restrict access to your instance to the Elastic Load Balancer.
#
#
#
# [peer_private]
#
# 0 or 1.
#
# 0: Request peers to broadcast your address. Normal outbound peer connections [default]
# 1: Request peers not to broadcast your address. Only connect to configured peers.
#
#
#
# [max_peers]
#
# The largest number of desired peer connections (incoming or outgoing).
# Cluster and fixed peers do not count towards this total. There are
# implementation-defined lower limits imposed on this value for security
# purposes.
#
#
#
# [peer_ssl_cipher_list]
#
# A colon-delimited string with the allowed SSL cipher modes for peers. The
# choices for ciphers are defined by the OpenSSL API function
# SSL_CTX_set_cipher_list, documented here (external link):
#
# http://pic.dhe.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=%2Fcom.ibm.ztpf-ztpfdf.doc_put.cur%2Fgtpc2%2Fcpp_ssl_ctx_set_cipher_list.html
#
# The default setting is "ALL:!LOW:!EXP:!MD5:@STRENGTH", which allows
# non-authenticated peer connections (they are, however, secure).
#
#
#
# [node_seed]
#
# This is used for clustering. To force a particular node seed or key, the
# key can be set here. The format is the same as the validation_seed field.
# To obtain a validation seed, use the validation_create command.
#
# Examples: RASH BUSH MILK LOOK BAD BRIM AVID GAFF BAIT ROT POD LOVE
# shfArahZT9Q9ckTf3s1psJ7C7qzVN
#
#
#
# [cluster_nodes]
#
# To extend full trust to other nodes, place their node public keys here.
# Generally, you should only do this for nodes under common administration.
# Node public keys start with an 'n'. To give a node a name for identification
# place a space after the public key and then the name.
#
#
#
# [sntp_servers]
#
# IP address or domain of NTP servers to use for time synchronization.
#
# These NTP servers are suitable for rippled servers located in the United
# States:
# time.windows.com
# time.apple.com
# time.nist.gov
# pool.ntp.org
#
#
#
#-------------------------------------------------------------------------------
#
# 2. Websocket Networking
#
#------------------------
#
# These settings control security and access attributes of the Websocket
# server section of the rippled process, primarily used to service
# client requests and backend applications.
#
#
#
# [websocket_public_ip]
#
# IP address or domain to bind to allow untrusted connections from clients.
# In the future, this option will go away and the peer_ip will accept
# websocket client connections.
#
# Examples: 0.0.0.0 - Bind on all interfaces.
# 127.0.0.1 - Bind on localhost interface. Only local programs may connect.
#
#
#
# [websocket_public_port]
#
# Port to bind to allow untrusted connections from clients. In the future,
# this option will go away and the peer_ip will accept websocket client
# connections.
#
#
#
# [websocket_public_secure]
#
# 0, 1 or 2.
# 0: Provide ws service for websocket_public_ip/websocket_public_port.
# 1: Provide both ws and wss service for websocket_public_ip/websocket_public_port. [default]
# 2: Provide wss service only for websocket_public_ip/websocket_public_port.
#
# Browser pages like the Ripple client will not be able to connect to a secure
# websocket connection if a self-signed certificate is used. As the Ripple
# reference client currently shares secrets with its server, this should be
# enabled.
#
#
#
# [websocket_ping_frequency]
#
# <number>
#
# The amount of time to wait in seconds, before sending a websocket 'ping'
# message. Ping messages are used to determine if the remote end of the
# connection is no longer available.
#
#
#
# [websocket_ip]
#
# IP address or domain to bind to allow trusted ADMIN connections from backend
# applications.
#
# Examples: 0.0.0.0 - Bind on all interfaces.
# 127.0.0.1 - Bind on localhost interface. Only local programs may connect.
#
#
#
# [websocket_port]
#
# Port to bind to allow trusted ADMIN connections from backend applications.
#
#
#
# [websocket_secure]
#
# 0, 1, or 2.
# 0: Provide ws service only for websocket_ip/websocket_port. [default]
# 1: Provide ws and wss service for websocket_ip/websocket_port
# 2: Provide wss service for websocket_ip/websocket_port.
#
#
#
# [websocket_ssl_cert]
#
# Specify the path to the SSL certificate file in PEM format.
# This is not needed if the chain includes it.
#
#
#
# [websocket_ssl_chain]
#
# If you need a certificate chain, specify the path to the certificate chain
# here. The chain may include the end certificate.
#
#
#
# [websocket_ssl_key]
#
# Specify the filename holding the SSL key in PEM format.
#
#
#
#-------------------------------------------------------------------------------
#
# 3. RPC Networking
#
#------------------
#
# This group of settings configures security and access attributes of the
# RPC server section of the rippled process, used to service both local
# and, optionally, remote clients.
#
#
#
# [rpc_allow_remote]
#
# 0 or 1.
#
# 0: Allow RPC connections only from 127.0.0.1. [default]
# 1: Allow RPC connections from any IP.
#
#
#
# [rpc_admin_allow]
#
# Specify a list of IP addresses allowed to have admin access. One per line.
# If you want to test the output of non-admin commands add this section and
# just put an ip address not under your control.
# Defaults to 127.0.0.1.
#
#
#
# [rpc_admin_user]
#
# As a server, require this as the admin user to be specified. Also, require
# rpc_admin_user and rpc_admin_password to be checked for RPC admin functions.
# The request must specify these as the admin_user and admin_password in the
# request object.
#
# As a client, supply this to the server in the request object.
#
#
#
# [rpc_admin_password]
#
# As a server, require this as the admin password to be specified. Also,
# require rpc_admin_user and rpc_admin_password to be checked for RPC admin
# functions. The request must specify these as the admin_user and
# admin_password in the request object.
#
# As a client, supply this to the server in the request object.
#
#
#
# [rpc_ip]
#
# IP address or domain to bind to allow insecure RPC connections.
# Defaults to not binding, which disallows RPC connections.
#
#
#
# [rpc_port]
#
# If rpc_ip is supplied, corresponding port to bind to for peer connections.
#
#
#
# [rpc_user]
#
# As a server, require this user to be specified and require rpc_password to
# be checked for RPC access via the rpc_ip and rpc_port. The user and password
# must be specified via HTTP's basic authentication method.
# As a client, supply this to the server via HTTP's basic authentication
# method.
#
#
#
# [rpc_password]
#
# As a server, require this password to be specified and require rpc_user to
# be checked for RPC access via the rpc_ip and rpc_port. The user and password
# must be specified via HTTP's basic authentication method.
# As a client, supply this to the server via HTTP's basic authentication
# method.
#
#
#
# [rpc_startup]
#
# Specify a list of RPC commands to run at startup.
#
# Examples:
# { "command" : "server_info" }
# { "command" : "log_level", "partition" : "ripplecalc", "severity" : "trace" }
#
#
#
# [rpc_secure]
#
# 0 or 1.
#
# 0: Server certificates are not provided for RPC clients using SSL [default]
# 1: Client RPC connections will be provided with SSL certificates.
#
# Note that if rpc_secure is enabled, it will also be necessary to configure the
# certificate file settings located in rpc_ssl_cert, rpc_ssl_chain, and rpc_ssl_key
#
#
#
# [rpc_ssl_cert]
#
# <pathname>
#
# A file system path leading to the SSL certificate file to use for secure RPC.
# The file is in PEM format. The file is not needed if the chain includes it.
#
#
#
# [rpc_ssl_chain]
#
# <pathname>
#
# A file system path leading to the file with the certificate chain.
# The chain may include the end certificate.
#
#
#
# [rpc_ssl_key]
#
# <pathname>
#
# A file system path leading to the file with the SSL key.
# The file is in PEM format.
#
#
#
#-------------------------------------------------------------------------------
#
# 4. SMS Gateway
#
#---------------
#
# If you have a certain SMS messaging provider you can configure these
# settings to allow the rippled server instance to send an SMS text to the
# configured gateway in response to an admin-level RPC command "sms" with
# one parameter, 'text' containing the message to send. This allows backend
# applications to use the rippled instance to securely notify administrators
# of custom events or information via SMS gateway.
#
# When the 'sms' RPC command is issued, the configured SMS gateway will be
# contacted via HTTPS GET at the URL indicated by sms_url. The URI formed
# will be in this format:
#
# [sms_url]?from=[sms_from]&to=[sms_to]&api_key=[sms_key]&api_secret=[sms_secret]&text=['text']
#
# Where [...] are the corresponding values from the configuration file, and
# ['text'] is the value of the JSON field with name 'text'.
#
# [sms_url]
#
# The URL to contact via HTTPS when sending SMS messages
#
# [sms_from]
# [sms_to]
# [sms_key]
# [sms_secret]
#
# These are all strings passed directly in the URI as query parameters
# to the provider of the SMS gateway.
#
#
#
#-------------------------------------------------------------------------------
#
# 5. Ripple Protocol
#
#------------------
#
# These settings affect the behavior of the server instance with respect
# to Ripple payment protocol level activities such as validating and
# closing ledgers, establishing a quorum, or adjusting fees in response
# to server overloads.
#
#
#
# [node_size]
#
# Tunes the servers based on the expected load and available memory. Legal
# sizes are "tiny", "small", "medium", "large", and "huge". We recommend
# you start at the default and raise the setting if you have extra memory.
# The default is "tiny".
#
#
#
# [validation_quorum]
#
# Sets the minimum number of trusted validations a ledger must have before
# the server considers it fully validated. Note that if you are validating,
# your validation counts.
#
#
#
# [ledger_history]
#
# The number of past ledgers to acquire on server startup and the minimum to
# maintain while running.
#
# To serve clients, servers need historical ledger data. Servers that don't
# need to serve clients can set this to "none". Servers that want complete
# history can set this to "full".
#
# The default is: 256
#
#
#
# [fetch_depth]
#
# The number of past ledgers to serve to other peers that request historical
# ledger data (or "full" for no limit).
#
# Servers that require low latency and high local performance may wish to
# restrict the historical ledgers they are willing to serve. Setting this
# below 32 can harm network stability as servers require easy access to
# recent history to stay in sync. Values below 128 are not recommended.
#
# The default is: full
#
#
#
# [validation_seed]
#
# To perform validation, this section should contain either a validation seed
# or key. The validation seed is used to generate the validation
# public/private key pair. To obtain a validation seed, use the
# validation_create command.
#
# Examples: RASH BUSH MILK LOOK BAD BRIM AVID GAFF BAIT ROT POD LOVE
# shfArahZT9Q9ckTf3s1psJ7C7qzVN
#
#
#
# [validators]
#
# List of nodes to always accept as validators. Nodes are specified by domain
# or public key.
#
# For domains, rippled will probe for https web servers at the specified
# domain in the following order: ripple.DOMAIN, www.DOMAIN, DOMAIN
#
# For public key entries, a comment may optionally be specified after adding a
# space to the public key.
#
# Examples:
# ripple.com
# n9KorY8QtTdRx7TVDpwnG9NvyxsDwHUKUEeDLY3AkiGncVaSXZi5
# n9MqiExBcoG19UXwoLjBJnhsxEhAZMuWwJDRdkyDz1EkEkwzQTNt John Doe
#
#
#
# [validators_file]
#
# Path to a file containing a list of nodes to always accept as validators. Use
# this to specify a file other than this file to manage your validators list.
#
# If this entry is not present or empty and no nodes from previous runs were
# found in the database, rippled will look for a validators.txt in the config
# directory. If not found there, it will attempt to retrieve the file from
# the [validators_site] web site.
#
# After specifying a different [validators_file] or changing the contents of
# the validators file, issue an RPC unl_load command to have rippled load the
# file.
#
# Specify the file by specifying its full path.
#
# Examples:
# C:/home/johndoe/ripple/validators.txt
# /home/johndoe/ripple/validators.txt
#
#
#
# [validators_site]
#
# Specifies where to find validators.txt for UNL bootstrapping and RPC
# unl_network command.
#
# Example: ripple.com
#
#
#
# [path_search]
# When searching for paths, the default search aggressiveness. This can take
# exponentially more resources as the size is increased.
#
# The default is: 7
#
# [path_search_fast]
# [path_search_max]
# When searching for paths, the minimum and maximum search aggressiveness.
#
# The default for 'path_search_fast' is 2. The default for 'path_search_max' is 10.
#
# [path_search_old]
#
# For clients that use the legacy path finding interfaces, the search
# aggressiveness to use. The default is 7.
#
#
#
#-------------------------------------------------------------------------------
#
# 6. HTTPS Client
#
#----------------
#
# The rippled server instance uses HTTPS GET requests in a variety of
# circumstances, including but not limited to the SMS Messaging Gateway
# feature and also for contacting trusted domains to fetch information
# such as mapping an email address to a Ripple Payment Network address.
#
# [ssl_verify]
#
# 0 or 1.
#
# 0. HTTPS client connections will not verify certificates.
# 1. Certificates will be checked for HTTPS client connections.
#
#
#
# [ssl_verify_file]
#
# <pathname>
#
# A file system path leading to the certificate verification file for
# HTTPS client requests.
#
#
#
# [ssl_verify_dir]
#
# <pathname>
#
#
# A file system path leading to a file or directory containing the root
# certificates that the server will accept for verifying HTTP servers.
# Used only for outbound HTTPS client connections.
#
#
#
#-------------------------------------------------------------------------------
#
# 7. Database
#
#------------
#
# rippled creates 4 SQLite databases to hold bookkeeping information
# about transactions, local credentials, and various other things.
# It also creates the NodeDB, which holds all the objects that
# make up the current and historical ledgers. The size of the NodeDB
# grows in proportion to the amount of new data and the amount of
# historical data (a configurable setting).
#
# The performance of the underlying storage media where the NodeDB
# is placed can affect the performance of the server. Some virtual
# hosting providers offer high speed secondary storage, with the
# caveat that the data is not persisted across launches. If rippled
# runs in such an environment, it can be beneficial to configure the
# temp_db setting, which activates a secondary "look-aside" cache
# that can speed up the server. Some testing is suggested to determine
# if the temp_db setting is an improvement for your environment
#
# Partial pathnames will be considered relative to the location of
# the rippled.cfg file.
#
# [node_db] Settings for the NodeDB (required)
# [temp_db] Settings for the look-aside temporary db (optional)
# [import_db] Settings for performing a one-time import (optional)
#
# Format (without spaces):
# One or more lines of key / value pairs:
# <key> '=' <value>
# ...
#
# Examples:
# type=HyperLevelDB
# path=db/hyperldb
#
# Choices for 'type' (not case-sensitive)
# HyperLevelDB Use an improved version of LevelDB (preferred)
# LevelDB Use Google's LevelDB database (deprecated)
# none Use no backend
# RocksDB Use Facebook's RocksDB database
# SQLite Use SQLite
#
# Required keys:
# path Location to store the database (all types)
#
# Optional keys:
# (none yet)
#
# Notes:
# The 'node_db' entry configures the primary, persistent storage.
#
# The 'temp_db' configures a look-aside cache for high volume storage
# which doesn't necessarily persist between server launches. This
# is an optional configuration parameter. If it is left out then
# no look-aside database is created or used.
#
# The 'import_db' is used with the '--import' command line option to
# migrate the specified database into the current database given
# in the [node_db] section.
#
# [database_path] Path to the book-keeping databases.
#
# There are 4 book-keeping SQLite databases that the server creates and
# maintains. If you omit this configuration setting, it will default to
# creating a directory called "db" located in the same place as your
# rippled.cfg file.
#
#
#
#-------------------------------------------------------------------------------
#
# 8. Diagnostics
#
#---------------
#
# These settings are designed to help server administrators diagnose
# problems, and obtain detailed information about the activities being
# performed by the rippled process.
#
#
#
# [debug_logfile]
#
# Specifies where a debug logfile is kept. By default, no debug log is kept.
# Unless absolute, the path is relative to the directory containing this file.
#
# Example: debug.log
#
#
#
# [insight]
#
# Configuration parameters for the Beast.Insight stats collection module.
#
# Insight is a module that collects information from the areas of rippled
# that have instrumentation. The configuration parameters control where the
# collected metrics are sent. The parameters are expressed as key = value
# pairs with no white space. The main parameter is the choice of server:
#
# "server"
#
# Choice of server to send metrics to. Currently the only choice is
# "statsd" which sends UDP packets to a StatsD daemon, which must be
# running while rippled is running. More information on StatsD is
# available here:
# https://github.com/b/statsd_spec
#
# When server=statsd, these additional keys are used:
#
# "address" The UDP address and port of the listening StatsD server,
# in the format, n.n.n.n:port.
#
# "prefix" A string prepended to each collected metric. This is used
# to distinguish between different running instances of rippled.
#
# If this section is missing, or the server type is unspecified or unknown,
# statistics are not collected or reported.
#
# Example:
#
# [insight]
# server=statsd
# address=192.168.0.95:4201
# prefix=my_validator
#
#-------------------------------------------------------------------------------
# Allow other peers to connect to this server.
#
[peer_ip]
0.0.0.0
[peer_port]
51235
# Allow untrusted clients to connect to this server.
#
[websocket_public_ip]
0.0.0.0
[websocket_public_port]
5006
# Provide trusted websocket ADMIN access to the localhost.
#
[websocket_ip]
127.0.0.1
[websocket_port]
6006
# Provide trusted json-rpc ADMIN access to the localhost.
#
[rpc_ip]
127.0.0.1
[rpc_port]
5005
[rpc_allow_remote]
0
[node_size]
medium
# This is the primary persistent datastore for rippled. This includes transaction
# metadata, account states, and ledger headers. Helpful information can be
# found here: https://ripple.com/wiki/NodeBackEnd
[node_db]
type=RocksDB
path=/var/lib/rippled/db/rocksdb
open_files=2000
filter_bits=12
cache_mb=256
file_size_mb=8
file_size_mult=2
[database_path]
/var/lib/rippled/db
# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
/var/log/rippled/debug.log
[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org
# Where to find some other servers speaking the Ripple protocol.
#
[ips]
r.ripple.com 51235
# These validators are taken from the v0.16 release notes on the wiki:
# https://ripple.com/wiki/Latest_rippled_release_notes
[validators]
n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7 RL1
n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj RL2
n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C RL3
n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS RL4
n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA RL5
# Ditto.
[validation_quorum]
3
# Turn down default logging to save disk space in the long run.
# Valid values here are trace, debug, info, warning, error, and fatal
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
# Configure SSL for WebSockets. Not enabled by default because not everybody
# has an SSL cert on their server, but if you uncomment the following lines and
# set the path to the SSL certificate and private key the WebSockets protocol
# will be protected by SSL/TLS.
#[websocket_secure]
#1
#[websocket_ssl_cert]
#/etc/ssl/certs/server.crt
#[websocket_ssl_key]
#/etc/ssl/private/server.key
# Defaults to 0 ("no") so that you can use self-signed SSL certificates for
# development, or internally.
#[ssl_verify]
#0


@@ -0,0 +1,10 @@
[Unit]
Description=Ripple Peer-to-Peer Network Daemon

[Service]
Type=simple
User=nobody
ExecStart=/usr/bin/rippled --conf=/etc/rippled/rippled.cfg

[Install]
WantedBy=multi-user.target

doc/rippled.init Normal file

@@ -0,0 +1,114 @@
#!/bin/sh
### BEGIN INIT INFO
# Provides: ripple
# Required-Start: $local_fs $remote_fs $network $syslog
# Required-Stop: $local_fs $remote_fs $network $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts the ripple network node
# Description: starts rippled using start-stop-daemon
### END INIT INFO
set -e
NAME=rippled
USER="rippled"
GROUP="rippled"
PIDFILE=/var/run/$NAME.pid
DAEMON=/usr/local/sbin/rippled
DAEMON_OPTS="--conf /etc/ripple/rippled.cfg"
NET_OPTS="--net $DAEMON_OPTS"
LOGDIR="/var/log/rippled"
DBDIR="/var/db/rippled/db/hyperldb"
export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"
# I wish it didn't come down to this, but this is the easiest way to ensure
# sanity of an install.
if [ ! -d $LOGDIR ]; then
mkdir -p $LOGDIR
chown $USER:$GROUP $LOGDIR
fi
if [ ! -d $DBDIR ]; then
mkdir -p $DBDIR
chown -R $USER:$GROUP $DBDIR
fi
case "$1" in
start)
echo -n "Starting daemon: "$NAME
start-stop-daemon --start --quiet --background -m --pidfile $PIDFILE \
--exec $DAEMON --chuid $USER --group $GROUP --verbose -- $NET_OPTS
echo "."
;;
stop)
echo -n "Stopping daemon: "$NAME
$DAEMON $DAEMON_OPTS stop
rm -f $PIDFILE
echo "."
;;
restart)
echo -n "Restarting daemon: "$NAME
$DAEMON $DAEMON_OPTS stop
rm -f $PIDFILE
start-stop-daemon --start --quiet --background -m --pidfile $PIDFILE \
--exec $DAEMON --chuid $USER --group $GROUP -- $NET_OPTS
echo "."
;;
status)
echo "Status of $NAME:"
echo -n "PID of $NAME: "
if [ -f "$PIDFILE" ]; then
cat $PIDFILE
$DAEMON $DAEMON_OPTS server_info
else
echo "$NAME not running."
fi
echo "."
;;
fetch)
echo "$NAME ledger fetching info:"
$DAEMON $DAEMON_OPTS fetch_info
echo "."
;;
uptime)
echo "$NAME uptime:"
$DAEMON $DAEMON_OPTS get_counts
echo "."
;;
startconfig)
echo "$NAME is being started with the following command line:"
echo "$DAEMON $NET_OPTS"
echo "."
;;
command)
# Truncate the script's argument vector by one position to get rid of
# this entry.
shift
# Pass the remainder of the argument vector to rippled.
$DAEMON $DAEMON_OPTS "$@"
echo "."
;;
test)
$DAEMON $DAEMON_OPTS ping
echo "."
;;
*)
echo "Usage: $0 {start|stop|restart|status|fetch|uptime|startconfig|"
echo " command|test}"
exit 1
esac
exit 0


@@ -0,0 +1,25 @@
#
# Default validators.txt
#
# A list of domains to bootstrap a node's UNL or for clients to indirectly
# locate IPs to contact the Ripple network.
#
# This file is UTF-8 with DOS, UNIX, or Mac style end of lines.
# Blank lines and lines starting with a '#' are ignored.
# All other lines should be hankos or domain names.
#
# [validators]:
# List of nodes to accept as validators specified by public key or domain.
#
# For domains, rippled will probe for https web servers at the specified
# domain in the following order: ripple.DOMAIN, www.DOMAIN, DOMAIN
#
# Examples: redstem.com
# n9KorY8QtTdRx7TVDpwnG9NvyxsDwHUKUEeDLY3AkiGncVaSXZi5
# n9MqiExBcoG19UXwoLjBJnhsxEhAZMuWwJDRdkyDz1EkEkwzQTNt John Doe
#
[validators]
n9KPnVLn7ewVzHvn218DcEYsnWLzKerTDwhpofhk4Ym1RUq4TeGw first
n9LFzWuhKNvXStHAuemfRKFVECLApowncMAM5chSCL9R5ECHGN4V second
n94rSdgTyBNGvYg8pZXGuNt59Y5bGAZGxbxyvjDaqD9ceRAgD85P third

docs/.gitignore vendored

@@ -1,3 +0,0 @@
html
temp
out.txt


@@ -1,597 +0,0 @@
# Negative UNL Engineering Spec
## The Problem Statement
The moment-to-moment health of the XRP Ledger network depends on the health and
connectivity of a small number of computers (nodes). The most important nodes
are validators, specifically ones listed on the unique node list
([UNL](#Question-What-are-UNLs)). Ripple publishes a recommended UNL that most
network nodes use to determine which peers in the network are trusted. Although
most validators use the same list, they are not required to. The XRP Ledger
network progresses to the next ledger when enough validators reach agreement
(above the minimum quorum of 80%) about what transactions to include in the next
ledger.
As an example, if there are 10 validators on the UNL, at least 8 validators have
to agree with the latest ledger for it to become validated. But what if enough
of those validators are offline to drop the network below the 80% quorum? The
XRP Ledger network favors safety/correctness over advancing the ledger. Which
means if enough validators are offline, the network will not be able to validate
ledgers.
Unfortunately validators can go offline at any time for many different reasons.
Power outages, network connectivity issues, and hardware failures are just a few
scenarios where a validator would appear "offline". Given that most of these
events are temporary, it would make sense to temporarily remove that validator
from the UNL. But the UNL is updated infrequently and not every node uses the
same UNL. So instead of removing the unreliable validator from the Ripple
recommended UNL, we can create a second negative UNL which is stored directly on
the ledger (so the entire network has the same view). This will help the network
see which validators are **currently** unreliable, and adjust their quorum
calculation accordingly.
*Improving the liveness of the network is the main motivation for the negative UNL.*
### Targeted Faults
In order to determine which validators are unreliable, we need to clearly define
what kind of faults to measure and analyze. We want to deal with the faults we
frequently observe in the production network. Hence we will only monitor for
validators that do not reliably respond to network messages or send out
validations disagreeing with the locally generated validations. We will not
target other byzantine faults.
To track whether or not a validator is responding to the network, we could
monitor them with a “heartbeat” protocol. Instead of creating a new heartbeat
protocol, we can leverage some existing protocol messages to mimic the
heartbeat. We picked validation messages because validators should send one and
only one validation message per ledger. In addition, we only count the
validation messages that agree with the local node's validations.
With the negative UNL, the network could keep making forward progress safely
even if the number of remaining validators gets to 60%. Say we have a network
with 10 validators on the UNL and everything is operating correctly. The quorum
required for this network would be 8 (80% of 10). When validators fail, the
quorum required would be as low as 6 (60% of 10), which is the absolute
***minimum quorum***. We need the absolute minimum quorum to be strictly greater
than 50% of the original UNL so that there cannot be two partitions of
well-behaved nodes headed in different directions. We arbitrarily choose 60% as
the minimum quorum to give a margin of safety.
Consider these events in the absence of negative UNL:
1. 1:00pm - validator1 fails, votes vs. quorum: 9 >= 8, we have quorum
1. 3:00pm - validator2 fails, votes vs. quorum: 8 >= 8, we have quorum
1. 5:00pm - validator3 fails, votes vs. quorum: 7 < 8, we don't have quorum
* **network cannot validate new ledgers with 3 failed validators**
We're below 80% agreement, so new ledgers cannot be validated. This is how the
XRP Ledger operates today, but if the negative UNL was enabled, the events would
happen as follows. (Please note that the events below are from a simplified
version of our protocol.)
1. 1:00pm - validator1 fails, votes vs. quorum: 9 >= 8, we have quorum
1. 1:40pm - network adds validator1 to negative UNL, quorum changes to ceil(9 * 0.8), or 8
1. 3:00pm - validator2 fails, votes vs. quorum: 8 >= 8, we have quorum
1. 3:40pm - network adds validator2 to negative UNL, quorum changes to ceil(8 * 0.8), or 7
1. 5:00pm - validator3 fails, votes vs. quorum: 7 >= 7, we have quorum
1. 5:40pm - network adds validator3 to negative UNL, quorum changes to ceil(7 * 0.8), or 6
1. 7:00pm - validator4 fails, votes vs. quorum: 6 >= 6, we have quorum
* **network can still validate new ledgers with 4 failed validators**
## External Interactions
### Message Format Changes
This proposal will:
1. add a new pseudo-transaction type
1. add the negative UNL to the ledger data structure.
Any tools or systems that rely on the format of this data will have to be
updated.
### Amendment
This feature **will** need an amendment to activate.
## Design
This section discusses the following topics about the Negative UNL design:
* [Negative UNL protocol overview](#Negative-UNL-Protocol-Overview)
* [Validator reliability measurement](#Validator-Reliability-Measurement)
* [Format Changes](#Format-Changes)
* [Negative UNL maintenance](#Negative-UNL-Maintenance)
* [Quorum size calculation](#Quorum-Size-Calculation)
* [Filter validation messages](#Filter-Validation-Messages)
* [High level sequence diagram of code
changes](#High-Level-Sequence-Diagram-of-Code-Changes)
### Negative UNL Protocol Overview
Every ledger stores a list of zero or more unreliable validators. Updates to the
list must be approved by the validators using the consensus mechanism that
validators use to agree on the set of transactions. The list is used only when
checking if a ledger is fully validated. If a validator V is in the list, nodes
with V in their UNL adjust the quorum, and V's validation message is not counted
when verifying if a ledger is fully validated. V's flow of messages and network
interactions, however, will remain the same.
We define the ***effective UNL*** = *original UNL - negative UNL*, and the
***effective quorum*** as the quorum of the *effective UNL*. And we set
*effective quorum = Ceiling(80% * effective UNL)*.
### Validator Reliability Measurement
A node only measures the reliability of validators on its own UNL, and only
proposes based on local observations. There are many metrics that a node can
measure about its validators, but we have chosen ledger validation messages.
This is because every validator shall send one and only one signed validation
message per ledger. This keeps the measurement simple and removes
timing/clock-sync issues. A node will measure the percentage of agreeing
validation messages (*PAV*) received from each validator on the node's UNL. Note
that the node will only count the validation messages that agree with its own
validations.
We define the **PAV** as the **P**ercentage of **A**greed **V**alidation
messages received for the last N ledgers, where N = 256 by default.
When the PAV drops below the ***low-water mark***, the validator is considered
unreliable, and is a candidate to be disabled by being added to the negative
UNL. A validator must have a PAV higher than the ***high-water mark*** to be
re-enabled. The validator is re-enabled by removing it from the negative UNL. In
the implementation, we plan to set the low-water mark as 50% and the high-water
mark as 80%.
### Format Changes
The negative UNL component in a ledger contains three fields.
* ***NegativeUNL***: The current negative UNL, a list of unreliable validators.
* ***ToDisable***: The validator to be added to the negative UNL on the next
flag ledger.
* ***ToReEnable***: The validator to be removed from the negative UNL on the
next flag ledger.
All three fields are optional. When the *ToReEnable* field exists, the
*NegativeUNL* field cannot be empty.
A new pseudo-transaction, ***UNLModify***, is added. It has three fields:
* ***Disabling***: A flag indicating whether the modification is to disable or
to re-enable a validator.
* ***Seq***: The ledger sequence number.
* ***Validator***: The validator to be disabled or re-enabled.
There would be at most one *disable* `UNLModify` and one *re-enable* `UNLModify`
transaction per flag ledger. The full machinery is described further on.
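As a sketch, the three fields map naturally onto a small record type. The C++
types below (including the 33-byte key encoding) are assumptions for
illustration, not rippled's actual serialization format.

```cpp
#include <array>
#include <cstdint>

// Illustrative shape of the UNLModify pseudo-transaction described above.
using PublicKey = std::array<std::uint8_t, 33>;  // assumed key encoding

struct UNLModify
{
    bool disabling;       // true = disable, false = re-enable
    std::uint32_t seq;    // the flag ledger sequence number
    PublicKey validator;  // the validator to disable or re-enable
};
```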
### Negative UNL Maintenance
The negative UNL can only be modified on the flag ledgers. If a validator's
reliability status changes, it takes two flag ledgers to modify the negative
UNL. Let's see an example of the algorithm:
* Ledger seq = 100: A validator V goes offline.
* Ledger seq = 256: This is a flag ledger, and V's reliability measurement *PAV*
is lower than the low-water mark. Other validators add `UNLModify`
pseudo-transactions `{true, 256, V}` to the transaction set which goes through
the consensus. Then the pseudo-transaction is applied to the negative UNL
ledger component by setting `ToDisable = V`.
* Ledger seq = 257 ~ 511: The negative UNL ledger component is copied from the
parent ledger.
* Ledger seq=512: This is a flag ledger, and the negative UNL is updated
`NegativeUNL = NegativeUNL + ToDisable`.
The negative UNL may have up to `MaxNegativeListed = floor(original UNL * 25%)`
validators. The 25% is because of 75% * 80% = 60%, where 75% = 100% - 25%, 80%
is the quorum of the effective UNL, and 60% is the absolute minimum quorum of
the original UNL. Adding more than 25% validators to the negative UNL does not
improve the liveness of the network, because adding more validators to the
negative UNL cannot lower the effective quorum.
The following is the detailed algorithm:
* **If** the ledger seq = x is a flag ledger
1. Compute `NegativeUNL = NegativeUNL + ToDisable - ToReEnable` if they
exist in the parent ledger
1. Try to find a candidate to disable if `sizeof NegativeUNL < MaxNegativeListed`
1. Find a validator V that has a *PAV* lower than the low-water
mark, but is not in `NegativeUNL`.
1. If two or more are found, their public keys are XORed with the hash
of the parent ledger and the one with the lowest XOR result is chosen (see
the sketch after this list).
1. If V is found, create a `UNLModify` pseudo-transaction
`TxDisableValidator = {true, x, V}`
1. Try to find a candidate to re-enable if `sizeof NegativeUNL > 0`:
1. Find a validator U that is in `NegativeUNL` and has a *PAV* higher
than the high-water mark.
1. If U is not found, try to find one in `NegativeUNL` but not in the
local *UNL*.
1. If two or more are found, their public keys are XORed with the hash
of the parent ledger and the one with the lowest XOR result is chosen.
1. If U is found, create a `UNLModify` pseudo-transaction
`TxReEnableValidator = {false, x, U}`
1. If any `UNLModify` pseudo-transactions are created, add them to the
transaction set. The transaction set goes through the consensus algorithm.
1. If they have enough support, the `UNLModify` pseudo-transactions remain in
the transaction set agreed by the validators. Then the pseudo-transactions are
applied to the ledger:
1. If `TxDisableValidator` exists, set `ToDisable=TxDisableValidator.V`.
Else clear `ToDisable`.
1. If `TxReEnableValidator` exists, set
`ToReEnable=TxReEnableValidator.U`. Else clear `ToReEnable`.
* **Else** (not a flag ledger)
1. Copy the negative UNL ledger component from the parent ledger
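A minimal sketch of the XOR tie-break used above, assuming each candidate is
identified by a 256-bit hash of its public key; the types and names are
illustrative only, not rippled's implementation.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

using Hash256 = std::array<std::uint8_t, 32>;

// XOR a candidate's key hash with the parent ledger hash.
inline Hash256
xorWith(Hash256 lhs, Hash256 const& rhs)
{
    for (std::size_t i = 0; i < lhs.size(); ++i)
        lhs[i] ^= rhs[i];
    return lhs;
}

// Pick the candidate whose XOR with the parent ledger hash is lowest.
std::optional<Hash256>
chooseCandidate(
    std::vector<Hash256> const& candidates,
    Hash256 const& parentLedgerHash)
{
    if (candidates.empty())
        return std::nullopt;
    return *std::min_element(
        candidates.begin(),
        candidates.end(),
        [&](Hash256 const& a, Hash256 const& b) {
            return xorWith(a, parentLedgerHash) < xorWith(b, parentLedgerHash);
        });
}
```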
The negative UNL is stored on each ledger because we don't know when a validator
may reconnect to the network. If the negative UNL was stored only on every flag
ledger, then a new validator would have to wait until it acquires the latest
flag ledger to know the negative UNL. So any new ledgers created that are not
flag ledgers copy the negative UNL from the parent ledger.
Note that when we have a validator to disable and a validator to re-enable at
the same flag ledger, we create two separate `UNLModify` pseudo-transactions. We
want either one or the other or both to make it into the ledger on their own
merits.
Readers may have noticed that we defined several rules for creating the
`UNLModify` pseudo-transactions but did not describe how to enforce the rules.
The rules are actually enforced by the existing consensus algorithm. Unless
enough validators propose the same pseudo-transaction, it will not be included
in the transaction set of the ledger.
### Quorum Size Calculation
The effective quorum is 80% of the effective UNL. Note that because at most 25%
of the original UNL can be on the negative UNL, the quorum should not be lower
than the absolute minimum quorum (i.e. 60%) of the original UNL. However,
considering that different nodes may have different UNLs, to be safe we compute
`quorum = Ceiling(max(60% * original UNL, 80% * effective UNL))`.
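For example, the rule above could be computed as follows (a minimal sketch,
not rippled's code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// Quorum rule from the text: the ceiling of the larger of 60% of the
// original UNL and 80% of the effective UNL.
std::size_t
quorum(std::size_t originalUnlSize, std::size_t negativeUnlSize)
{
    std::size_t const effectiveUnlSize = originalUnlSize - negativeUnlSize;
    double const q = std::max(0.6 * originalUnlSize, 0.8 * effectiveUnlSize);
    return static_cast<std::size_t>(std::ceil(q));
}

// Example: 10 validators with 2 on the negative UNL gives
// max(0.6 * 10, 0.8 * 8) = max(6, 6.4), so the quorum is ceil(6.4) = 7.
```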
### Filter Validation Messages
If a validator V is in the negative UNL, it still participates in consensus
sessions in the same way, i.e. V still follows the protocol and publishes
proposal and validation messages. The messages from V are still stored the same
way by everyone, used to calculate the new PAV for V, and could be used in
future consensus sessions if needed. However V's ledger validation message is
not counted when checking if the ledger is fully validated.
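A sketch of that counting rule follows; `NodeID`, `Validation`, and the
function name are placeholders, not rippled's interfaces.

```cpp
#include <cstddef>
#include <set>
#include <vector>

using NodeID = int;  // placeholder identifier for a validator

struct Validation
{
    NodeID validator;
};

// Count only validations from trusted validators that are not currently
// on the negative UNL; all other messages are stored but not counted here.
std::size_t
countTowardFullValidation(
    std::vector<Validation> const& validations,
    std::set<NodeID> const& trusted,
    std::set<NodeID> const& negativeUnl)
{
    std::size_t count = 0;
    for (auto const& v : validations)
        if (trusted.count(v.validator) && !negativeUnl.count(v.validator))
            ++count;
    return count;
}
```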
### High Level Sequence Diagram of Code Changes
The diagram below is the sequence of one round of consensus. Classes and
components with non-trivial changes are colored green.
* The `ValidatorList` class is modified to compute the quorum of the effective
UNL.
* The `Validations` class provides an interface for querying the validation
messages from trusted validators.
* The `ConsensusAdaptor` component:
* The `RCLConsensus::Adaptor` class is modified for creating `UNLModify`
Pseudo-Transactions.
* The `Change` class is modified for applying `UNLModify`
Pseudo-Transactions.
* The `Ledger` class is modified for creating and adjusting the negative UNL
ledger component.
* The `LedgerMaster` class is modified for filtering out validation messages
from negative UNL validators when verifying if a ledger is fully
validated.
![Sequence diagram](./negativeUNL_highLevel_sequence.png?raw=true "Negative UNL
Changes")
## Roads Not Taken
### Use a Mechanism Like Fee Voting to Process UNLModify Pseudo-Transactions
The previous version of the negative UNL specification used the same mechanism
as the [fee voting](https://xrpl.org/fee-voting.html#voting-process) for
creating the negative UNL, and used the negative UNL as soon as the ledger was
fully validated. However, the timing of full validation can differ among nodes,
so different negative UNLs could be used, resulting in different effective UNLs
and different quorums for the same ledger. As a result, the network's safety is
impacted.
This updated version does not impact safety, though it operates a bit more
slowly. The negative UNL modifications in the *UNLModify* pseudo-transaction
approved by the consensus will take effect at the next flag ledger. The extra
time of the 256 ledgers should be enough for nodes to synchronize on the
negative UNL modifications.
### Use an Expiration Approach to Re-enable Validators
After a validator disabled by the negative UNL becomes reliable, other
validators explicitly vote for re-enabling it. An alternative approach to
re-enable a validator is the expiration approach, which was considered in the
previous version of the specification. In the expiration approach, every entry
in the negative UNL has a fixed expiration time. One flag ledger interval was
chosen as the expiration interval. Once expired, the other validators must
continue voting to keep the unreliable validator on the negative UNL. The
advantage of this approach is its simplicity. But it has a requirement. The
negative UNL protocol must be able to vote multiple unreliable validators to be
disabled at the same flag ledger. In this version of the specification, however,
only one unreliable validator can be disabled at a flag ledger. So the
expiration approach cannot be simply applied.
### Validator Reliability Measurement and Flag Ledger Frequency
If the ledger time is about 4.5 seconds and the low-water mark is 50%, then in
the worst case, it takes 48 minutes *((0.5 * 256 + 256 + 256) * 4.5 / 60 = 48)*
to put an offline validator on the negative UNL. We considered lowering the flag
ledger frequency so that the negative UNL can be more responsive. We also
considered decoupling the reliability measurement and flag ledger frequency to
be more flexible. In practice, however, their benefits are not clear.
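One way to read the 48-minute figure: the validator misses about half the
256-ledger window before its PAV drops below 50%, then up to one full
flag-ledger interval passes before the vote, and one more before `ToDisable`
takes effect:

```latex
\left(\underbrace{0.5 \times 256}_{\text{PAV falls below 50\%}}
 + \underbrace{256}_{\text{wait for next flag ledger}}
 + \underbrace{256}_{\text{ToDisable applied}}\right)
 \times 4.5\,\text{s} = 2880\,\text{s} = 48\,\text{min}
```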
## New Attack Vectors
A group of malicious validators may try to frame a reliable validator and put it
on the negative UNL. But they cannot succeed, because:
1. A reliable validator sends a signed validation message every ledger. A
sufficiently connected peer-to-peer network will propagate the validation
messages to other validators. Each validator decides whether another validator
is reliable based only on its local observation of the validation messages
received. So an honest validator's vote on another validator's reliability is
accurate.
1. Given that the votes are accurate, and that there is one vote per validator,
an honest validator will not create a UNLModify transaction for a reliable
validator.
1. A validator can be added to a negative UNL only through a UNLModify
transaction.
Assuming the group of malicious validators is less than the quorum, they cannot
frame a reliable validator.
## Summary
The bullet points below briefly summarize the current proposal:
* The motivation of the negative UNL is to improve the liveness of the network.
* The targeted faults are the ones frequently observed in the production
network.
* Validators propose negative UNL candidates based on their local measurements.
* The absolute minimum quorum is 60% of the original UNL.
* The format of the ledger is changed, and a new *UNLModify* pseudo-transaction
is added. Any tools or systems that rely on the format of these data will have
to be updated.
* The negative UNL can only be modified on the flag ledgers.
* At most one validator can be added to the negative UNL at a flag ledger.
* At most one validator can be removed from the negative UNL at a flag ledger.
* If a validator's reliability status changes, it takes two flag ledgers to
modify the negative UNL.
* The quorum is the larger of 80% of the effective UNL and 60% of the original
UNL.
* If a validator is on the negative UNL, its validation messages are ignored
when the local node verifies if a ledger is fully validated.
## FAQ
### Question: What are UNLs?
Quote from the [Technical FAQ](https://xrpl.org/technical-faq.html): "They are
the lists of transaction validators a given participant believes will not
conspire to defraud them."
### Question: How does the negative UNL proposal affect network liveness?
The network can make forward progress when more than a quorum of the trusted
validators agree with the progress. The lower the quorum size, the easier it is
for the network to progress. If the quorum is too low, however, the network is not
safe because nodes may have different results. So the quorum size used in the
consensus protocol is a balance between the safety and the liveness of the
network. The negative UNL reduces the size of the effective UNL, resulting in a
lower quorum size while keeping the network safe.
<h3> Question: How does a validator get into the negative UNL? How is a
validator removed from the negative UNL? </h3>
A validator's reliability is measured by other validators. If a validator
becomes unreliable, at a flag ledger other validators propose *UNLModify*
pseudo-transactions that vote to add the validator to the negative UNL during
the consensus session. If agreed, the validator is added to the negative UNL at
the next flag ledger. The mechanism for removing a validator from the negative
UNL is the same.
### Question: Given a negative UNL, what happens if the UNL changes?
Answer: Let's consider the cases:
1. A validator is added to the UNL, and it is already in the negative UNL. This
case could happen when not all the nodes have the same UNL. Note that the
negative UNL on the ledger lists unreliable nodes that are not necessarily the
validators for everyone.
In this case, liveness is affected negatively, because the quorum could be
larger while the number of usable validators does not increase.
1. A validator is removed from the UNL, and it is in the negative UNL.
In this case, liveness is affected positively, because the quorum could be
smaller while the number of usable validators does not decrease.
1. A validator is added to the UNL, and it is not in the negative UNL.
1. A validator is removed from the UNL, and it is not in the negative UNL.
Cases 3 and 4 are not affected by the negative UNL protocol.
### Question: Can we simply lower the quorum to 60% without the negative UNL?
Answer: No, because the negative UNL approach is safer.
First, let's compare the two approaches intuitively: (1) the *negative UNL*
approach, and (2) the *lower quorum* approach, i.e. simply lowering the quorum
from 80% to 60% without the negative UNL. The negative UNL approach uses
consensus to come up with a list of unreliable validators, which are then
removed from the effective UNL temporarily. With this approach, the list of
unreliable validators is agreed to by a quorum of validators and will be used
by every node in the network to adjust its UNL. The quorum is always 80% of the
effective UNL. The lower quorum approach trades safety for liveness, against
our principle of preferring safety over liveness. Note that with the lower
quorum approach, different validators don't have to agree on which validation
sources they are ignoring.
Next, we compare the two approaches quantitatively with examples, applying
Theorem 8 of the [Analysis of the XRP Ledger Consensus
Protocol](https://arxiv.org/abs/1802.07242) paper:
*XRP LCP guarantees fork safety if **O<sub>i,j</sub> > n<sub>j</sub> / 2 +
n<sub>i</sub> - q<sub>i</sub> + t<sub>i,j</sub>** for every pair of nodes
P<sub>i</sub>, P<sub>j</sub>,*
where *O<sub>i,j</sub>* is the overlapping requirement, n<sub>j</sub> and
n<sub>i</sub> are UNL sizes, q<sub>i</sub> is the quorum size of P<sub>i</sub>,
*t<sub>i,j</sub> = min(t<sub>i</sub>, t<sub>j</sub>, O<sub>i,j</sub>)*, and
t<sub>i</sub> and t<sub>j</sub> are the numbers of faults that can be tolerated
by P<sub>i</sub> and P<sub>j</sub>.
We denote *UNL<sub>i</sub>* as *P<sub>i</sub>'s UNL*, and *|UNL<sub>i</sub>|* as
the size of *P<sub>i</sub>'s UNL*.
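Restated in LaTeX for readability (same symbols as above):

```latex
O_{i,j} > \frac{n_j}{2} + (n_i - q_i) + t_{i,j},
\qquad t_{i,j} = \min(t_i,\, t_j,\, O_{i,j})
```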
Assuming *|UNL<sub>i</sub>| = |UNL<sub>j</sub>|*, let's consider the following
three cases:
1. With 80% quorum and 20% faults, *O<sub>i,j</sub> > 100% / 2 + 100% - 80% +
20% = 90%*. I.e. fork safety requires > 90% UNL overlaps. This is one of the
results in the analysis paper.
1. If the quorum is 60%, the relationship between the overlapping requirement
and the faults that can be tolerated is *O<sub>i,j</sub> > 90% +
t<sub>i,j</sub>*. Under the same overlapping condition (i.e. 90%), to guarantee
the fork safety, the network cannot tolerate any faults. So under the same
overlapping condition, if the quorum is simply lowered, the network can tolerate
fewer faults.
1. With the negative UNL approach, we want to argue that the inequality
*O<sub>i,j</sub> > n<sub>j</sub> / 2 + n<sub>i</sub> - q<sub>i</sub> +
t<sub>i,j</sub>* always holds to guarantee fork safety while the negative UNL
protocol runs, i.e. the effective quorum is lowered without weakening the
network's fault tolerance. To make the discussion easier, we rewrite the
inequality as *O<sub>i,j</sub> > n<sub>j</sub> / 2 + (n<sub>i</sub> -
q<sub>i</sub>) + min(t<sub>i</sub>, t<sub>j</sub>)*, where O<sub>i,j</sub> is
dropped from the definition of t<sub>i,j</sub> because *O<sub>i,j</sub> >
min(t<sub>i</sub>, t<sub>j</sub>)* always holds under the parameters we will
use. Assuming a validator V is added to the negative UNL, now let's consider the
4 cases:
1. V is not on UNL<sub>i</sub> nor UNL<sub>j</sub>
The inequality holds because none of the variables change.
1. V is on UNL<sub>i</sub> but not on UNL<sub>j</sub>
The value of *(n<sub>i</sub> - q<sub>i</sub>)* is smaller. The value of
*min(t<sub>i</sub>, t<sub>j</sub>)* could be smaller too. Other
variables do not change. Overall, the left side of the inequality does
not change, but the right side is smaller. So the inequality holds.
1. V is not on UNL<sub>i</sub> but on UNL<sub>j</sub>
The value of *n<sub>j</sub> / 2* is smaller. The value of
*min(t<sub>i</sub>, t<sub>j</sub>)* could be smaller too. Other
variables do not change. Overall, the left side of the inequality does
not change, but the right side is smaller. So the inequality holds.
1. V is on both UNL<sub>i</sub> and UNL<sub>j</sub>
The value of *O<sub>i,j</sub>* is reduced by 1. The values of
*n<sub>j</sub> / 2*, *(n<sub>i</sub> - q<sub>i</sub>)*, and
*min(t<sub>i</sub>, t<sub>j</sub>)* are reduced by 0.5, 0.2, and 1
respectively. The right side is reduced by 1.7. Overall, the left side
of the inequality is reduced by 1, and the right side is reduced by 1.7.
So the inequality holds.
The inequality holds in all four cases. So with the negative UNL approach,
the network's fork safety is preserved, while the quorum is lowered, which
increases the network's liveness.
<h3> Question: We have observed that occasionally a validator wanders off on its
own chain. How is this case handled by the negative UNL algorithm? </h3>
Answer: The case where a validator wanders off on its own chain can be detected
by measuring validation agreement, because the validations from this validator
will differ from other validators' validations for the same sequence numbers.
When there are enough disagreeing validations, other validators will vote this
validator onto the negative UNL.
In general, by measuring the agreement of validations, we also measure
"sanity". If two validators have too many disagreements, one of them could be
insane. When enough validators think a validator is insane, that validator is
put on the negative UNL.
<h3> Question: Why would there be at most one disable UNLModify and one
re-enable UNLModify transaction per flag ledger? </h3>
Answer: It is a design choice so that the effective UNL does not change too
quickly. A typical targeted scenario is several validators going offline slowly
during a long weekend. The current design handles this kind of case well
without changing the effective UNL too quickly.
## Appendix
### Confidence Test
We will use two test networks, a single machine test network with multiple IP
addresses and the QE test network with multiple machines. The single machine
network will be used to test all the test cases and to debug. The QE network
will be used after that. We want to see the test cases still pass with real
network delay. A test case specifies:
1. a UNL with a different number of validators for each test case,
1. a network with zero or more non-validator nodes,
1. a sequence of validator reliability change events (by killing/restarting
nodes, or by running modified rippled that does not send all validation
messages),
1. the correct outcomes.
For all the test cases, the correct outcomes are verified by examining logs. We
will grep the log to see if the correct negative UNLs are generated, and whether
or not the network is making progress when it should be. The ripdtop tool will
be helpful for monitoring validators' states and ledger progress. Some of the
timing parameters of rippled will be changed to produce a faster ledger time. Most if
not all test cases do not need client transactions.
For example, the test cases for the prototype:
1. A 10-validator UNL.
1. The network does not have other nodes.
1. The validators will be started from the genesis. Once they start to produce
ledgers, we kill five validators, one every flag ledger interval. Then we
will restart them one by one.
1. A sequence of events (or the lack of events), such as a killed validator
being added to the negative UNL.
#### Roads Not Taken: Test with Extended CSF
We considered testing with the current unit test framework, specifically the
[Consensus Simulation
Framework](https://github.com/ripple/rippled/blob/develop/src/test/csf/README.md)
(CSF). However, the CSF currently can only test the generic consensus algorithm
as in the paper: [Analysis of the XRP Ledger Consensus
Protocol](https://arxiv.org/abs/1802.07242).


@@ -1,79 +0,0 @@
@startuml negativeUNL_highLevel_sequence
skinparam sequenceArrowThickness 2
skinparam roundcorner 20
skinparam maxmessagesize 160
actor "Rippled Start" as RS
participant "Timer" as T
participant "NetworkOPs" as NOP
participant "ValidatorList" as VL #lightgreen
participant "Consensus" as GC
participant "ConsensusAdaptor" as CA #lightgreen
participant "Validations" as RM #lightgreen
RS -> NOP: begin consensus
activate NOP
NOP -[#green]> VL: <font color=green>update negative UNL
hnote over VL#lightgreen: store a copy of\nnegative UNL
VL -> NOP
NOP -> VL: update trusted validators
activate VL
VL -> VL: re-calculate quorum
hnote over VL#lightgreen: ignore negative listed validators\nwhen calculating quorum
VL -> NOP
deactivate VL
NOP -> GC: start round
activate GC
GC -> GC: phase = OPEN
GC -> NOP
deactivate GC
deactivate NOP
loop at regular frequency
T -> GC: timerEntry
activate GC
end
alt phase == OPEN
alt should close ledger
GC -> GC: phase = ESTABLISH
GC -> CA: onClose
activate CA
alt sqn%256==0
CA -[#green]> RM: <font color=green>getValidations
CA -[#green]> CA: <font color=green>create UNLModify Tx
hnote over CA#lightgreen: use validations of the last 256 ledgers\nto figure out UNLModify Tx candidates.\nIf any, create UNLModify Tx, and add to TxSet.
end
CA -> GC
GC -> CA: propose
deactivate CA
end
else phase == ESTABLISH
hnote over GC: receive peer positions
GC -> GC : update our position
GC -> CA : propose \n(if position changed)
GC -> GC : check if have consensus
alt consensus reached
GC -> GC: phase = ACCEPT
GC -> CA : onAccept
activate CA
CA -> CA : build LCL
hnote over CA #lightgreen: copy negative UNL from parent ledger
alt sqn%256==0
CA -[#green]> CA: <font color=green>Adjust negative UNL
CA -[#green]> CA: <font color=green>apply UNLModify Tx
end
CA -> CA : validate and send validation message
activate NOP
CA -> NOP : end consensus and\n<b>begin next consensus round
deactivate NOP
deactivate CA
hnote over RM: receive validations
end
else phase == ACCEPTED
hnote over GC: timerEntry has nothing to do in this phase
end
deactivate GC
@enduml


@@ -1,88 +0,0 @@
# Ledger Replay
`LedgerReplayer` is a new `Stoppable` for replaying ledgers.
Patterned after two other `Stoppable`s under `JobQueue`---`InboundLedgers`
and `InboundTransactions`---it acts like a factory for creating
state-machine workers, and a network message demultiplexer for those workers.
Think of these workers like asynchronous functions.
Like functions, they each take a set of parameters.
The `Stoppable` memoizes these functions. It maintains a table for each
worker type, mapping sets of arguments to the worker currently working
on that argument set.
Whenever the `Stoppable` is asked to construct a worker, it first searches its
table to see if there is an existing worker with the same or overlapping
argument set.
If one exists, then it is used. If not, then a new one is created,
initialized, and added to the table.
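Under assumed names, the lookup-or-create step might look like the sketch
below; the real classes differ, and `init()` is an assumed hook, so this only
illustrates the pattern.

```cpp
#include <map>
#include <memory>

// One such table per worker type: argument set -> worker in flight.
// The table holds weak_ptrs so finished workers can be reclaimed.
template <class Args, class Worker>
class WorkerTable
{
    std::map<Args, std::weak_ptr<Worker>> table_;

public:
    // Return the existing worker for this argument set, or create,
    // initialize, and remember a new one.
    std::shared_ptr<Worker>
    findOrCreate(Args const& args)
    {
        if (auto it = table_.find(args); it != table_.end())
            if (auto existing = it->second.lock())
                return existing;  // reuse the in-flight worker

        auto worker = std::make_shared<Worker>(args);
        worker->init();
        table_[args] = worker;
        return worker;
    }
};
```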
For `LedgerReplayer`, there are three worker types: `LedgerReplayTask`,
`SkipListAcquire`, and `LedgerDeltaAcquire`.
Each is derived from `TimeoutCounter` to give it a timeout.
For `LedgerReplayTask`, the parameter set
is {reason, finish ledger ID, number of ledgers}. For `SkipListAcquire` and
`LedgerDeltaAcquire`, there is just one parameter: a ledger ID.
Each `Stoppable` has an entry point. For `LedgerReplayer`, it is `replay`.
`replay` creates two workers: a `LedgerReplayTask` and a `SkipListAcquire`.
`LedgerDeltaAcquire`s are created in the callback for when the skip list
returns.
For `SkipListAcquire` and `LedgerDeltaAcquire`, initialization fires off the
underlying asynchronous network request and starts the timeout. The argument
set identifying the worker is included in the network request, and copied to
the network response. `SkipListAcquire` sends a request for a proof path for
the skip list of the desired ledger. `LedgerDeltaAcquire` sends a request for
the transaction set of the desired ledger.
`LedgerReplayer` is also a network message demultiplexer.
When a response arrives for a request that was sent by a `SkipListAcquire` or
`LedgerDeltaAcquire` worker, the `Peer` object knows to send it to the
`LedgerReplayer`, which looks up the worker waiting for that response based on
the identifying argument set included in the response.
`LedgerReplayTask` may ask `InboundLedgers` to send requests to acquire
the start ledger, but there is no way to attach a callback or be notified when
the `InboundLedger` worker completes. All the responses for its messages will
be directed to `InboundLedgers`, not `LedgerReplayer`. Instead,
`LedgerReplayTask` checks whether the start ledger has arrived every time its
timeout expires.
Like a promise, each worker keeps track of whether it is pending (`!isDone()`)
or whether it has resolved successfully (`complete_ == true`) or unsuccessfully
(`failed_ == true`). It will never exist in both resolved states at once, nor
will it return to a pending state after reaching a resolved state.
Like promises, some workers can accept continuations to be called when they
reach a resolved state, or immediately if they are already resolved.
`SkipListAcquire` and `LedgerDeltaAcquire` both accept continuations of a type
specific to their payload, both via a method named `addDataCallback()`. Continuations
cannot be removed explicitly, but they are held by `std::weak_ptr` so they can
be removed implicitly.
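A sketch of that continuation mechanism follows; it is illustrative only, and
the actual `addDataCallback()` signatures differ.

```cpp
#include <functional>
#include <memory>
#include <vector>

// Continuations are stored as weak_ptrs: when the owner of a callback
// goes away, that callback is skipped at the next notify().
template <class Payload>
class DataCallbacks
{
    using Callback = std::function<void(Payload const&)>;
    std::vector<std::weak_ptr<Callback>> callbacks_;

public:
    void
    addDataCallback(std::shared_ptr<Callback> const& cb)
    {
        callbacks_.push_back(cb);
    }

    // Invoke every callback whose owner is still alive.
    void
    notify(Payload const& payload) const
    {
        for (auto const& weak : callbacks_)
            if (auto cb = weak.lock())
                (*cb)(payload);
    }
};
```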
`LedgerReplayTask` is simultaneously:
1. an asynchronous function,
1. a continuation to one `SkipListAcquire` asynchronous function,
1. a continuation to zero or more `LedgerDeltaAcquire` asynchronous functions, and
1. a continuation to its own timeout.
Each of these roles corresponds to different entry points:
1. `init()`
1. the callback added to `SkipListAcquire`, which calls `updateSkipList(...)` or `cancel()`
1. the callback added to `LedgerDeltaAcquire`, which calls `deltaReady(...)` or `cancel()`
1. `onTimer()`
Each of these entry points does something unique to that entry point. They
either (a) transition `LedgerReplayTask` to a terminal failed resolved state
(`cancel()` and `onTimer()`) or (b) try to make progress toward the successful
resolved state. `init()` and `updateSkipList(...)` call `trigger()` while
`deltaReady(...)` calls `tryAdvance()`. There's a similarity between this
pattern and the way coroutines are implemented, where every yield saves the spot
in the code where it left off and every resume jumps back to that spot.
### Sequence Diagram
![Sequence diagram](./ledger_replay_sequence.png?raw=true "A successful ledger replay")
### Class Diagram
![Class diagram](./ledger_replay_classes.png?raw=true "Ledger replay classes")


@@ -1,98 +0,0 @@
@startuml
class TimeoutCounter {
#app_ : Application&
}
TimeoutCounter o-- "1" Application
': app_
Stoppable <.. Application
class Application {
-m_ledgerReplayer : uptr<LedgerReplayer>
-m_inboundLedgers : uptr<InboundLedgers>
}
Application *-- "1" LedgerReplayer
': m_ledgerReplayer
Application *-- "1" InboundLedgers
': m_inboundLedgers
Stoppable <.. InboundLedgers
Application "1" --o InboundLedgers
': app_
class InboundLedgers {
-app_ : Application&
}
Stoppable <.. LedgerReplayer
InboundLedgers "1" --o LedgerReplayer
': inboundLedgers_
Application "1" --o LedgerReplayer
': app_
class LedgerReplayer {
+createDeltas(LedgerReplayTask)
-app_ : Application&
-inboundLedgers_ : InboundLedgers&
-tasks_ : vector<sptr<LedgerReplayTask>>
-deltas_ : hash_map<u256, wptr<LedgerDeltaAcquire>>
-skipLists_ : hash_map<u256, wptr<SkipListAcquire>>
}
LedgerReplayer *-- LedgerReplayTask
': tasks_
LedgerReplayer o-- LedgerDeltaAcquire
': deltas_
LedgerReplayer o-- SkipListAcquire
': skipLists_
TimeoutCounter <.. LedgerReplayTask
InboundLedgers "1" --o LedgerReplayTask
': inboundLedgers_
LedgerReplayer "1" --o LedgerReplayTask
': replayer_
class LedgerReplayTask {
-inboundLedgers_ : InboundLedgers&
-replayer_ : LedgerReplayer&
-skipListAcquirer_ : sptr<SkipListAcquire>
-deltas_ : vector<sptr<LedgerDeltaAcquire>>
+addDelta(sptr<LedgerDeltaAcquire>)
}
LedgerReplayTask *-- "1" SkipListAcquire
': skipListAcquirer_
LedgerReplayTask *-- LedgerDeltaAcquire
': deltas_
TimeoutCounter <.. SkipListAcquire
InboundLedgers "1" --o SkipListAcquire
': inboundLedgers_
LedgerReplayer "1" --o SkipListAcquire
': replayer_
LedgerReplayTask --o SkipListAcquire : implicit via callback
class SkipListAcquire {
+addDataCallback(callback)
-inboundLedgers_ : InboundLedgers&
-replayer_ : LedgerReplayer&
-dataReadyCallbacks_ : vector<callback>
}
TimeoutCounter <.. LedgerDeltaAcquire
InboundLedgers "1" --o LedgerDeltaAcquire
': inboundLedgers_
LedgerReplayer "1" --o LedgerDeltaAcquire
': replayer_
LedgerReplayTask --o LedgerDeltaAcquire : implicit via callback
class LedgerDeltaAcquire {
+addDataCallback(callback)
-inboundLedgers_ : InboundLedgers&
-replayer_ : LedgerReplayer&
-dataReadyCallbacks_ : vector<callback>
}
@enduml


@@ -1,85 +0,0 @@
@startuml
autoactivate on
' participant app as "Application"
participant peer as "Peer"
participant lr as "LedgerReplayer"
participant lrt as "LedgerReplayTask"
participant sla as "SkipListAcquire"
participant lda as "LedgerDeltaAcquire"
[-> lr : replay(finishId, numLedgers)
lr -> sla : make_shared(finishHash)
return skipList
lr -> lrt : make_shared(skipList)
return task
lr -> sla : init(numPeers=1)
sla -> sla : trigger(numPeers=1)
sla -> peer : sendRequest(ProofPathRequest)
return
return
return
lr -> lrt : init()
lrt -> sla : addDataCallback(callback)
return
return
deactivate lr
[-> peer : onMessage(ProofPathResponse)
peer -> lr : gotSkipList(ledgerHeader, item)
lr -> sla : processData(ledgerSeq, item)
sla -> sla : onSkipListAcquired(skipList, ledgerSeq)
sla -> sla : notify()
note over sla: call the callbacks added by\naddDataCallback(callback).
sla -> lrt : callback(ledgerId)
lrt -> lrt : updateSkipList(ledgerId, ledgerSeq, skipList)
lrt -> lr : createDeltas(this)
loop
lr -> lda : make_shared(ledgerId, ledgerSeq)
return delta
lr -> lrt : addDelta(delta)
lrt -> lda : addDataCallback(callback)
return
return
lr -> lda : init(numPeers=1)
lda -> lda : trigger(numPeers=1)
lda -> peer : sendRequest(ReplayDeltaRequest)
return
return
return
end
return
return
return
return
return
return
deactivate peer
[-> peer : onMessage(ReplayDeltaResponse)
peer -> lr : gotReplayDelta(ledgerHeader)
lr -> lda : processData(ledgerHeader, txns)
lda -> lda : notify()
note over lda: call the callbacks added by\naddDataCallback(callback).
lda -> lrt : callback(ledgerId)
lrt -> lrt : deltaReady(ledgerId)
lrt -> lrt : tryAdvance()
loop as long as child can be built
lrt -> lda : tryBuild(parent)
lda -> lda : onLedgerBuilt()
note over lda
Schedule a job to store the built ledger.
end note
return
return child
end
return
return
return
return
return
deactivate peer
@enduml


@@ -1,82 +0,0 @@
# Coding Standards
Coding standards used here gradually evolve and propagate through
code reviews. Some aspects are enforced more strictly than others.
## Rules
These rules only apply to our own code. We can't enforce any sort of
style on the external repositories and libraries we include. The best
guideline is to maintain the standards that are used in those libraries.
* Tab inserts 4 spaces. No tab characters.
* Braces are indented in the [Allman style][1].
* Modern C++ principles. No naked ```new``` or ```delete```.
* Line lengths limited to 80 characters. Exceptions limited to data and tables.
## Guidelines
If you want to do something contrary to these guidelines, understand
why you're doing it. Think, use common sense, and consider that your
changes will probably need to be maintained long after you've
moved on to other projects.
* Use white space and blank lines to guide the eye and keep your intent clear.
* Put private data members at the top of a class, and the 6 public special
members immediately after, in the following order (see the example after this
list):
* Destructor
* Default constructor
* Copy constructor
* Copy assignment
* Move constructor
* Move assignment
* Don't over-inline by defining large functions within the class
declaration, not even for template classes.
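For example, a class following that layout might look like this (illustrative
only):

```cpp
// Example layout: private data first, then the six special members.
class Widget
{
private:
    int value_ = 0;

public:
    ~Widget () = default;                         // destructor
    Widget () = default;                          // default constructor
    Widget (Widget const& other) = default;       // copy constructor
    Widget& operator= (Widget const&) = default;  // copy assignment
    Widget (Widget&& other) = default;            // move constructor
    Widget& operator= (Widget&&) = default;       // move assignment

    int
    value () const
    {
        return value_;
    }
};
```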
## Formatting
The goal of source code formatting should always be to make things as easy to
read as possible. White space is used to guide the eye so that details are not
overlooked. Blank lines are used to separate code into "paragraphs."
* Always place a space before and after all binary operators,
especially assignments (`operator=`).
* The `!` operator should be preceded by a space, but not followed by one.
* The `~` operator should be preceded by a space, but not followed by one.
* The `++` and `--` operators should have no spaces between the operator and
the operand.
* A space never appears before a comma, and always appears after a comma.
* Don't put spaces after a parenthesis. A typical member function call might
look like this: `foobar (1, 2, 3);`
* In general, leave a blank line before an `if` statement.
* In general, leave a blank line after a closing brace `}`.
* Do not place code on the same line as any opening or
closing brace.
* Do not write `if` statements all-on-one-line. The exception to this is when
you've got a sequence of similar `if` statements, and are aligning them all
vertically to highlight their similarities.
* In an `if-else` statement, if you surround one half of the statement with
braces, you also need to put braces around the other half, to match.
* When writing a pointer type, use this spacing: `SomeObject* myObject`.
Technically, a more correct spacing would be `SomeObject *myObject`, but
it makes more sense for the asterisk to be grouped with the type name,
since being a pointer is part of the type, not the variable name. The only
time that this can lead to any problems is when you're declaring multiple
pointers of the same type in the same statement - which leads on to the next
rule:
* When declaring multiple pointers, never do so in a single statement, e.g.
`SomeObject* p1, *p2;` - instead, always split them out onto separate lines
and write the type name again, to make it quite clear what's going on, and
avoid the danger of missing out any vital asterisks.
* The previous point also applies to references, so always put the `&` next to
the type rather than the variable, e.g. `void foo (Thing const& thing)`. And
don't put a space on both sides of the `*` or `&` - always put a space after
it, but never before it.
* The word `const` should be placed to the right of the thing that it modifies,
for consistency. For example `int const` refers to an int which is const.
`int const*` is a pointer to an int which is const. `int *const` is a const
pointer to an int.
* Always place a space in between the template angle brackets and the type
name. Template code is already hard enough to read!
[1]: http://en.wikipedia.org/wiki/Indent_style#Allman_style


@@ -1,5 +0,0 @@
# `rippled` Docker Image
- Some info relating to Docker containers can be found here: [../Builds/containers](../Builds/containers)
- Images for building and testing rippled can be found here: [thejohnfreeman/rippled-docker](https://github.com/thejohnfreeman/rippled-docker/)
- These images do not have rippled. They have all the tools necessary to build rippled.


@@ -1,343 +0,0 @@
#---------------------------------------------------------------------------
# Project related configuration options
#---------------------------------------------------------------------------
DOXYFILE_ENCODING = UTF-8
PROJECT_NAME = "rippled"
PROJECT_NUMBER =
PROJECT_BRIEF =
PROJECT_LOGO =
OUTPUT_DIRECTORY = $(DOXYGEN_OUTPUT_DIRECTORY)
CREATE_SUBDIRS = NO
ALLOW_UNICODE_NAMES = NO
OUTPUT_LANGUAGE = English
BRIEF_MEMBER_DESC = YES
REPEAT_BRIEF = YES
ABBREVIATE_BRIEF =
ALWAYS_DETAILED_SEC = NO
INLINE_INHERITED_MEMB = YES
FULL_PATH_NAMES = NO
STRIP_FROM_PATH = src/
STRIP_FROM_INC_PATH =
SHORT_NAMES = NO
JAVADOC_AUTOBRIEF = YES
QT_AUTOBRIEF = NO
MULTILINE_CPP_IS_BRIEF = NO
INHERIT_DOCS = YES
SEPARATE_MEMBER_PAGES = NO
TAB_SIZE = 4
ALIASES =
OPTIMIZE_OUTPUT_FOR_C = NO
OPTIMIZE_OUTPUT_JAVA = NO
OPTIMIZE_FOR_FORTRAN = NO
OPTIMIZE_OUTPUT_VHDL = NO
EXTENSION_MAPPING =
MARKDOWN_SUPPORT = YES
AUTOLINK_SUPPORT = YES
BUILTIN_STL_SUPPORT = YES
CPP_CLI_SUPPORT = NO
SIP_SUPPORT = NO
IDL_PROPERTY_SUPPORT = YES
DISTRIBUTE_GROUP_DOC = NO
GROUP_NESTED_COMPOUNDS = NO
SUBGROUPING = YES
INLINE_GROUPED_CLASSES = NO
INLINE_SIMPLE_STRUCTS = NO
TYPEDEF_HIDES_STRUCT = NO
LOOKUP_CACHE_SIZE = 0
#---------------------------------------------------------------------------
# Build related configuration options
#---------------------------------------------------------------------------
EXTRACT_ALL = YES
EXTRACT_PRIVATE = YES
EXTRACT_PACKAGE = NO
EXTRACT_STATIC = YES
EXTRACT_LOCAL_CLASSES = YES
EXTRACT_LOCAL_METHODS = YES
EXTRACT_ANON_NSPACES = NO
HIDE_UNDOC_MEMBERS = NO
HIDE_UNDOC_CLASSES = NO
HIDE_FRIEND_COMPOUNDS = NO
HIDE_IN_BODY_DOCS = NO
INTERNAL_DOCS = NO
CASE_SENSE_NAMES = YES
HIDE_SCOPE_NAMES = NO
HIDE_COMPOUND_REFERENCE= NO
SHOW_INCLUDE_FILES = NO
SHOW_GROUPED_MEMB_INC = NO
FORCE_LOCAL_INCLUDES = NO
INLINE_INFO = NO
SORT_MEMBER_DOCS = NO
SORT_BRIEF_DOCS = NO
SORT_MEMBERS_CTORS_1ST = YES
SORT_GROUP_NAMES = NO
SORT_BY_SCOPE_NAME = NO
STRICT_PROTO_MATCHING = NO
GENERATE_TODOLIST = NO
GENERATE_TESTLIST = NO
GENERATE_BUGLIST = NO
GENERATE_DEPRECATEDLIST= NO
ENABLED_SECTIONS =
MAX_INITIALIZER_LINES = 30
SHOW_USED_FILES = NO
SHOW_FILES = NO
SHOW_NAMESPACES = YES
FILE_VERSION_FILTER =
LAYOUT_FILE =
CITE_BIB_FILES =
#---------------------------------------------------------------------------
# Configuration options related to warning and progress messages
#---------------------------------------------------------------------------
QUIET = NO
WARNINGS = YES
WARN_IF_UNDOCUMENTED = YES
WARN_IF_DOC_ERROR = YES
WARN_NO_PARAMDOC = NO
WARN_AS_ERROR = NO
WARN_FORMAT = "$file:$line: $text"
WARN_LOGFILE =
#---------------------------------------------------------------------------
# Configuration options related to the input files
#---------------------------------------------------------------------------
INPUT = \
. \
INPUT_ENCODING = UTF-8
FILE_PATTERNS = *.h *.cpp *.md
RECURSIVE = YES
EXCLUDE = \
.github \
external/ed25519-donna \
external/secp256k1 \
EXCLUDE_SYMLINKS = NO
EXCLUDE_PATTERNS =
EXCLUDE_SYMBOLS =
EXAMPLE_PATH =
EXAMPLE_PATTERNS =
EXAMPLE_RECURSIVE = NO
IMAGE_PATH = \
docs/images/ \
docs/images/consensus/ \
src/test/csf/ \
INPUT_FILTER =
FILTER_PATTERNS =
FILTER_SOURCE_FILES = NO
FILTER_SOURCE_PATTERNS =
USE_MDFILE_AS_MAINPAGE = ./README.md
#---------------------------------------------------------------------------
# Configuration options related to source browsing
#---------------------------------------------------------------------------
SOURCE_BROWSER = YES
INLINE_SOURCES = NO
STRIP_CODE_COMMENTS = YES
REFERENCED_BY_RELATION = NO
REFERENCES_RELATION = NO
REFERENCES_LINK_SOURCE = YES
SOURCE_TOOLTIPS = YES
USE_HTAGS = NO
VERBATIM_HEADERS = YES
CLANG_ASSISTED_PARSING = NO
CLANG_OPTIONS =
#---------------------------------------------------------------------------
# Configuration options related to the alphabetical class index
#---------------------------------------------------------------------------
ALPHABETICAL_INDEX = YES
COLS_IN_ALPHA_INDEX = 5
IGNORE_PREFIX =
#---------------------------------------------------------------------------
# Configuration options related to the HTML output
#---------------------------------------------------------------------------
GENERATE_HTML = YES
HTML_OUTPUT = html
HTML_FILE_EXTENSION = .html
HTML_HEADER =
HTML_FOOTER =
HTML_STYLESHEET =
HTML_EXTRA_STYLESHEET =
HTML_EXTRA_FILES =
HTML_COLORSTYLE_HUE = 220
HTML_COLORSTYLE_SAT = 100
HTML_COLORSTYLE_GAMMA = 80
HTML_TIMESTAMP = NO
HTML_DYNAMIC_SECTIONS = NO
HTML_INDEX_NUM_ENTRIES = 100
GENERATE_DOCSET = NO
DOCSET_FEEDNAME = "Doxygen generated docs"
DOCSET_BUNDLE_ID = org.doxygen.Project
DOCSET_PUBLISHER_ID = org.doxygen.Publisher
DOCSET_PUBLISHER_NAME = Publisher
GENERATE_HTMLHELP = NO
CHM_FILE =
HHC_LOCATION =
GENERATE_CHI = NO
CHM_INDEX_ENCODING =
BINARY_TOC = NO
TOC_EXPAND = NO
GENERATE_QHP = NO
QCH_FILE =
QHP_NAMESPACE = org.doxygen.Project
QHP_VIRTUAL_FOLDER = doc
QHP_CUST_FILTER_NAME =
QHP_CUST_FILTER_ATTRS =
QHP_SECT_FILTER_ATTRS =
QHG_LOCATION =
GENERATE_ECLIPSEHELP = NO
ECLIPSE_DOC_ID = org.doxygen.Project
DISABLE_INDEX = NO
GENERATE_TREEVIEW = NO
ENUM_VALUES_PER_LINE = 4
TREEVIEW_WIDTH = 250
EXT_LINKS_IN_WINDOW = NO
FORMULA_FONTSIZE = 10
FORMULA_TRANSPARENT = YES
USE_MATHJAX = NO
MATHJAX_FORMAT = HTML-CSS
MATHJAX_RELPATH = http://cdn.mathjax.org/mathjax/latest
MATHJAX_EXTENSIONS =
MATHJAX_CODEFILE =
SEARCHENGINE = YES
SERVER_BASED_SEARCH = NO
EXTERNAL_SEARCH = NO
SEARCHENGINE_URL =
SEARCHDATA_FILE = searchdata.xml
EXTERNAL_SEARCH_ID =
EXTRA_SEARCH_MAPPINGS =
#---------------------------------------------------------------------------
# Configuration options related to the LaTeX output
#---------------------------------------------------------------------------
GENERATE_LATEX = NO
LATEX_OUTPUT = latex
LATEX_CMD_NAME =
MAKEINDEX_CMD_NAME = makeindex
COMPACT_LATEX = NO
PAPER_TYPE = a4
EXTRA_PACKAGES =
LATEX_HEADER =
LATEX_FOOTER =
LATEX_EXTRA_STYLESHEET =
LATEX_EXTRA_FILES =
PDF_HYPERLINKS = YES
USE_PDFLATEX = YES
LATEX_BATCHMODE = NO
LATEX_HIDE_INDICES = NO
LATEX_SOURCE_CODE = NO
LATEX_BIB_STYLE = plain
LATEX_TIMESTAMP = NO
#---------------------------------------------------------------------------
# Configuration options related to the RTF output
#---------------------------------------------------------------------------
GENERATE_RTF = NO
RTF_OUTPUT = rtf
COMPACT_RTF = NO
RTF_HYPERLINKS = NO
RTF_STYLESHEET_FILE =
RTF_EXTENSIONS_FILE =
RTF_SOURCE_CODE = NO
#---------------------------------------------------------------------------
# Configuration options related to the man page output
#---------------------------------------------------------------------------
GENERATE_MAN = NO
MAN_OUTPUT = man
MAN_EXTENSION = .3
MAN_SUBDIR =
MAN_LINKS = NO
#---------------------------------------------------------------------------
# Configuration options related to the XML output
#---------------------------------------------------------------------------
GENERATE_XML = NO
XML_OUTPUT = xml
XML_PROGRAMLISTING = YES
#---------------------------------------------------------------------------
# Configuration options related to the DOCBOOK output
#---------------------------------------------------------------------------
GENERATE_DOCBOOK = NO
DOCBOOK_OUTPUT = docbook
DOCBOOK_PROGRAMLISTING = NO
#---------------------------------------------------------------------------
# Configuration options for the AutoGen Definitions output
#---------------------------------------------------------------------------
GENERATE_AUTOGEN_DEF = NO
GENERATE_PERLMOD = NO
PERLMOD_LATEX = NO
PERLMOD_PRETTY = YES
PERLMOD_MAKEVAR_PREFIX =
#---------------------------------------------------------------------------
# Configuration options related to the preprocessor
#---------------------------------------------------------------------------
ENABLE_PREPROCESSING = YES
MACRO_EXPANSION = YES
EXPAND_ONLY_PREDEF = YES
SEARCH_INCLUDES = YES
INCLUDE_PATH = $(DOXYGEN_INCLUDE_PATH)
INCLUDE_FILE_PATTERNS =
PREDEFINED = DOXYGEN \
GENERATING_DOCS \
_MSC_VER \
NUDB_POSIX_FILE=1
EXPAND_AS_DEFINED =
SKIP_FUNCTION_MACROS = YES
#---------------------------------------------------------------------------
# Configuration options related to external references
#---------------------------------------------------------------------------
TAGFILES = $(DOXYGEN_TAGFILES)
GENERATE_TAGFILE =
ALLEXTERNALS = NO
EXTERNAL_GROUPS = YES
EXTERNAL_PAGES = YES
#---------------------------------------------------------------------------
# Configuration options related to the dot tool
#---------------------------------------------------------------------------
CLASS_DIAGRAMS = NO
DIA_PATH =
HIDE_UNDOC_RELATIONS = YES
HAVE_DOT = YES
# DOT_NUM_THREADS = 0 means one thread for every processor.
DOT_NUM_THREADS = 0
DOT_FONTNAME = Helvetica
DOT_FONTSIZE = 10
DOT_FONTPATH =
CLASS_GRAPH = YES
COLLABORATION_GRAPH = YES
GROUP_GRAPHS = YES
UML_LOOK = NO
UML_LIMIT_NUM_FIELDS = 10
TEMPLATE_RELATIONS = NO
INCLUDE_GRAPH = YES
INCLUDED_BY_GRAPH = YES
CALL_GRAPH = NO
CALLER_GRAPH = NO
GRAPHICAL_HIERARCHY = YES
DIRECTORY_GRAPH = YES
DOT_IMAGE_FORMAT = png
INTERACTIVE_SVG = NO
DOT_PATH = $(DOXYGEN_DOT_PATH)
DOTFILE_DIRS =
MSCFILE_DIRS =
DIAFILE_DIRS =
PLANTUML_JAR_PATH = $(DOXYGEN_PLANTUML_JAR_PATH)
PLANTUML_INCLUDE_PATH =
DOT_GRAPH_MAX_NODES = 50
MAX_DOT_GRAPH_DEPTH = 0
DOT_TRANSPARENT = NO
DOT_MULTI_TARGETS = NO
GENERATE_LEGEND = YES
DOT_CLEANUP = YES
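A note on the preprocessor settings above: with `ENABLE_PREPROCESSING`,
`MACRO_EXPANSION`, and `EXPAND_ONLY_PREDEF` enabled, Doxygen expands only
the symbols listed under `PREDEFINED`, so code guarded by those symbols is
visible to the documentation build even though it never compiles into the
binary. A minimal, hypothetical C++ sketch of such a guard (the class name
is invented for illustration):

    #ifdef DOXYGEN
    /** Documentation-only declaration: Doxygen sees this because the
        Doxyfile predefines the DOXYGEN symbol; the compiler never does. */
    class docs_only_example {};
    #endif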

@@ -1,63 +0,0 @@
## Heap profiling of rippled with jemalloc
The jemalloc library provides a good API for heap analysis, including a
mechanism to dump a description of the heap from within the running
application via a function call. Details on how to perform this profiling
in general, as well as how to acquire the software, are available on the
jemalloc site:
[https://github.com/jemalloc/jemalloc/wiki/Use-Case:-Heap-Profiling](https://github.com/jemalloc/jemalloc/wiki/Use-Case:-Heap-Profiling)
jemalloc is acquired separately from rippled, and is not affiliated
with Ripple Labs. If you compile and install jemalloc from the
source release with default options, it will install the library and header
under `/usr/local/lib` and `/usr/local/include`, respectively. Heap
profiling has been tested with rippled on a Linux platform. It should
work on platforms on which both rippled and jemalloc are available.
To link rippled with jemalloc, pass the argument
`profile-jemalloc=<jemalloc_dir>` after the optional target.
The `<jemalloc_dir>` value should match the `--prefix` parameter
passed to the jemalloc configure script when building.
## Examples:
Build rippled with the jemalloc library under /usr/local/lib and
header under /usr/local/include:

    $ scons profile-jemalloc=/usr/local

Build rippled using clang with the jemalloc library under /opt/local/lib
and header under /opt/local/include:

    $ scons clang profile-jemalloc=/opt/local
----------------------
## Using the jemalloc library from within the code
The `profile-jemalloc` parameter enables a macro definition called
`PROFILE_JEMALLOC`. Wrap the jemalloc header include, as well as the
API call(s) that you wish to make, in preprocessor conditional groups,
such as:

In global scope:

    #ifdef PROFILE_JEMALLOC
    #include <jemalloc/jemalloc.h>
    #endif

And later, within a function scope:

    #ifdef PROFILE_JEMALLOC
    mallctl("prof.dump", NULL, NULL, NULL, 0);
    #endif
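As a more complete, self-contained sketch, the dump can also be directed
to a caller-chosen file by passing a path through `mallctl`'s write
pointer. This assumes jemalloc was built with profiling support
(`--enable-prof`) and the process was started with profiling active (for
example `MALLOC_CONF=prof:true`); the helper name `dumpHeapProfile` is
invented for illustration:

    #ifdef PROFILE_JEMALLOC
    #include <jemalloc/jemalloc.h>

    // Illustrative helper: dump the current heap profile to `path`.
    // "prof.dump" accepts an optional const char* naming the output
    // file; passing it via the write pointer overrides jemalloc's
    // default automatic file naming.
    static bool
    dumpHeapProfile(const char* path)
    {
        int err = mallctl("prof.dump", NULL, NULL, &path, sizeof(path));
        // A non-zero, errno-style result typically means profiling
        // was not enabled when the process started.
        return err == 0;
    }
    #endif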
Fuller descriptions of how to acquire and use jemalloc's API for memory
analysis are available at the
[jemalloc site](http://www.canonware.com/jemalloc/).
Linking against the jemalloc library will override
the system's default `malloc()` and related functions with jemalloc's
implementation. This is the case even if the code is not instrumented
to use jemalloc's specific API.

@@ -1,63 +0,0 @@
# Building documentation
## Dependencies
Install these dependencies:
- [Doxygen](http://www.doxygen.nl): All major platforms have [official binary
distributions](http://www.doxygen.nl/download.html#srcbin), or you can
build from [source](http://www.doxygen.nl/download.html#srcbin).
- MacOS: We recommend installing via Homebrew: `brew install doxygen`.
The executable will be installed in `/usr/local/bin`, which is already
in the default `PATH`.
If you use the official binary distribution, then you'll need to make
Doxygen available on your command line. You can do this by adding
a symbolic link in `/usr/local/bin` to the `doxygen` executable. For
example,
```
$ ln -s /Applications/Doxygen.app/Contents/Resources/doxygen /usr/local/bin/doxygen
```
- [PlantUML](http://plantuml.com):
1. Install a functioning Java runtime, if you don't already have one.
2. Download [`plantuml.jar`](http://sourceforge.net/projects/plantuml/files/plantuml.jar/download).
- [Graphviz](https://www.graphviz.org):
- Linux: Install from your package manager.
- Windows: Use an [official installer](https://graphviz.gitlab.io/_pages/Download/Download_windows.html).
- MacOS: Install via Homebrew: `brew install graphviz`.
## Docker
Instead of installing the above dependencies locally, you can use the official
build environment Docker image, which has all of them installed already.
1. Install [Docker](https://docs.docker.com/engine/installation/)
2. Pull the image:
```
sudo docker pull rippleci/rippled-ci-builder:2944b78d22db
```
3. Run the image from the project folder:
```
sudo docker run -v $PWD:/opt/rippled --rm rippleci/rippled-ci-builder:2944b78d22db
```
## Build
There is a `docs` target in the CMake configuration.
```
mkdir build
cd build
cmake ..
cmake --build . --target docs
```
The output will be in `build/docs/html`.
