Compare commits


2 Commits
2.6.1 ... hooks

Author SHA1 Message Date
Ed Hennis
5883d756db Merge remote-tracking branch 'upstream/develop' into hooks-merge
* upstream/develop: (518 commits)
  Set version to 2.4.0-b1
  fix: Add header for set_difference (5197)
  fix: allow overlapping types in `Expected` (5218)
  Add MPTIssue to STIssue (5200)
  Antithesis instrumentation improvements (5213)
  Enforce levelization in libxrpl with CMake (5111)
  refactor: clean up `LedgerEntry.cpp` (5199)
  test: Add more test cases for Base58 parser (5174)
  test: Check for some unlikely null dereferences in tests (5004)
  Add Antithesis instrumentation (5042)
  Set version to 2.3.0
  refactor(AMMClawback): move tfClawTwoAssets check (5201)
  Add a new serialized type: STNumber (5121)
  fix: check for valid ammID field in amm_info RPC (5188)
  Set version to 2.3.0-rc2
  fix: include `index` in `server_definitions` RPC (5190)
  Fix ledger_entry crash on invalid credentials request (5189)
  Set version to 2.3.0-rc1
  Replace Uint192 with Hash192 in server_definitions response (5177)
  Fix potential deadlock (5124)
  ...
2024-12-20 13:00:48 -05:00
RichardAH
df8d8796f5 Introduce in-development Hooks amendment (#4225)
* Hooks V2 squashed from
  006524a10a

* Include changes from code review

Introduce "Hooks" functionality with the new SetHook transactor. This
significantly expands the scope of transaction processing capabilities
with an implementation that supports the execution of programmable
scripts that attach to accounts and execute during transactions. This
provides programmability and allows for advanced use cases involving
transaction validation, automation, and complex multi-signature
arrangements. The core transaction pipeline executes hooks, manages
their states, and handles associated fees. Adjustments across the code
enable seamless integration of hooks with existing functionality, e.g.
signer lists. This is a large advancement in the architecture and
functionality of the system.

Additional testing (including performance testing) and review
(including security review) are necessary before this change can be
considered production-ready. The change also still needs to be made
to build on Windows, which remains a supported platform for
development.
2024-02-22 08:47:59 -08:00
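To make the description above concrete, the sketch below shows roughly how a hook might be installed through the new SetHook transactor, submitted over a local server's JSON-RPC port. The field names (`CreateCode`, `HookOn`, `HookNamespace`, `HookApiVersion`) follow the public Hooks documentation rather than this exact in-development revision, and every value is a placeholder, so treat the request shape as an assumption rather than a confirmed interface.

```bash
# Sketch: install a previously compiled WebAssembly hook on an account.
# Field names follow the public Hooks docs and may differ in this branch;
# the account, secret, and hex blobs are placeholders.
curl -s -X POST http://localhost:5005/ -d '{
  "method": "submit",
  "params": [{
    "secret": "<account-secret>",
    "tx_json": {
      "TransactionType": "SetHook",
      "Account": "<account-address>",
      "Hooks": [{
        "Hook": {
          "CreateCode": "<hex-encoded WebAssembly>",
          "HookOn": "<64-hex-char bitfield: which transaction types trigger the hook>",
          "HookNamespace": "<64-hex-char namespace for the hook state>",
          "HookApiVersion": 0
        }
      }]
    }
  }]
}'
```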
1383 changed files with 58039 additions and 83070 deletions


@@ -1,5 +1,5 @@
---
Language: Cpp
Language: Cpp
AccessModifierOffset: -4
AlignAfterOpenBracket: AlwaysBreak
AlignConsecutiveAssignments: false
@@ -19,52 +19,47 @@ AlwaysBreakTemplateDeclarations: true
BinPackArguments: false
BinPackParameters: false
BraceWrapping:
AfterClass: true
AfterClass: true
AfterControlStatement: true
AfterEnum: false
AfterFunction: true
AfterNamespace: false
AfterEnum: false
AfterFunction: true
AfterNamespace: false
AfterObjCDeclaration: true
AfterStruct: true
AfterUnion: true
BeforeCatch: true
BeforeElse: true
IndentBraces: false
AfterStruct: true
AfterUnion: true
BeforeCatch: true
BeforeElse: true
IndentBraces: false
BreakBeforeBinaryOperators: false
BreakBeforeBraces: Custom
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: true
ColumnLimit: 80
CommentPragmas: "^ IWYU pragma:"
ColumnLimit: 80
CommentPragmas: '^ IWYU pragma:'
ConstructorInitializerAllOnOneLineOrOnePerLine: true
ConstructorInitializerIndentWidth: 4
ContinuationIndentWidth: 4
Cpp11BracedListStyle: true
DerivePointerAlignment: false
DisableFormat: false
DisableFormat: false
ExperimentalAutoDetectBinPacking: false
ForEachMacros: [Q_FOREACH, BOOST_FOREACH]
IncludeBlocks: Regroup
ForEachMacros: [ Q_FOREACH, BOOST_FOREACH ]
IncludeCategories:
- Regex: "^<(test)/"
Priority: 0
- Regex: "^<(xrpld)/"
Priority: 1
- Regex: "^<(xrpl)/"
Priority: 2
- Regex: "^<(boost)/"
Priority: 3
- Regex: "^.*/"
Priority: 4
- Regex: '^.*\.h'
Priority: 5
- Regex: ".*"
Priority: 6
IncludeIsMainRegex: "$"
- Regex: '^<(test)/'
Priority: 0
- Regex: '^<(xrpld)/'
Priority: 1
- Regex: '^<(xrpl)/'
Priority: 2
- Regex: '^<(boost)/'
Priority: 3
- Regex: '.*'
Priority: 4
IncludeIsMainRegex: '$'
IndentCaseLabels: true
IndentFunctionDeclarationAfterType: false
IndentRequiresClause: true
IndentWidth: 4
IndentWidth: 4
IndentWrappedFunctionNames: false
KeepEmptyLinesAtTheStartOfBlocks: false
MaxEmptyLinesToKeep: 1
@@ -78,25 +73,19 @@ PenaltyBreakString: 1000
PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 200
PointerAlignment: Left
ReflowComments: true
ReflowComments: true
RequiresClausePosition: OwnLine
SortIncludes: true
SortIncludes: true
SpaceAfterCStyleCast: false
SpaceBeforeAssignmentOperators: true
SpaceBeforeParens: ControlStatements
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 2
SpacesInAngles: false
SpacesInAngles: false
SpacesInContainerLiterals: true
SpacesInCStyleCastParentheses: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
Standard: Cpp11
TabWidth: 8
UseTab: Never
QualifierAlignment: Right
---
Language: JavaScript
---
Language: Json
IndentWidth: 2
Standard: Cpp11
TabWidth: 8
UseTab: Never


@@ -7,13 +7,13 @@ comment:
show_carryforward_flags: false
coverage:
range: "70..85"
range: "60..80"
precision: 1
round: nearest
status:
project:
default:
target: 75%
target: 60%
threshold: 2%
patch:
default:
@@ -27,7 +27,7 @@ github_checks:
parsers:
cobertura:
partials_as_hits: true
handle_missing_conditions: true
handle_missing_conditions : true
slack_app: false


@@ -11,4 +11,3 @@ b9d007813378ad0ff45660dc07285b823c7e9855
fe9a5365b8a52d4acc42eb27369247e6f238a4f9
9a93577314e6a8d4b4a8368cc9d2b15a5d8303e8
552377c76f55b403a1c876df873a23d780fcc81c
97f0747e103f13e26e45b731731059b32f7679ac

.github/CODEOWNERS vendored

@@ -1,8 +0,0 @@
# Allow anyone to review any change by default.
*
# Require the rpc-reviewers team to review changes to the rpc code.
include/xrpl/protocol/ @xrplf/rpc-reviewers
src/libxrpl/protocol/ @xrplf/rpc-reviewers
src/xrpld/rpc/ @xrplf/rpc-reviewers
src/xrpld/app/misc/ @xrplf/rpc-reviewers


@@ -2,35 +2,30 @@
name: Bug Report
about: Create a report to help us improve rippled
title: "[Title with short description] (Version: [rippled version])"
labels: ""
assignees: ""
---
labels: ''
assignees: ''
---
<!-- Please search existing issues to avoid creating duplicates.-->
## Issue Description
<!--Provide a summary for your issue/bug.-->
## Steps to Reproduce
<!--List in detail the exact steps to reproduce the unexpected behavior of the software.-->
## Expected Result
<!--Explain in detail what behavior you expected to happen.-->
## Actual Result
<!--Explain in detail what behavior actually happened.-->
## Environment
<!--Please describe your environment setup (such as Ubuntu 18.04 with Boost 1.70).-->
<!-- If you are using a formal release, please use the version returned by './rippled --version' as the version number-->
<!-- If you are working off of develop, please add the git hash via 'git rev-parse HEAD'-->
## Supporting Files
<!--If you have supporting files such as a log, feel free to post a link here using Github Gist.-->
<!--Consider adding configuration files with private information removed via Github Gist. -->


@@ -3,23 +3,19 @@ name: Feature Request
about: Suggest a new feature for the rippled project
title: "[Title with short description] (Version: [rippled version])"
labels: Feature Request
assignees: ""
---
assignees: ''
---
<!-- Please search existing issues to avoid creating duplicates.-->
## Summary
<!-- Provide a summary to the feature request-->
## Motivation
<!-- Why do we need this feature?-->
## Solution
<!-- What is the solution?-->
## Paths Not Taken
<!-- What other alternatives have been considered?-->


@@ -2,37 +2,59 @@ name: dependencies
inputs:
configuration:
required: true
# Implicit inputs are the environment variables `build_dir`, CONAN_REMOTE_URL,
# CONAN_REMOTE_USERNAME, and CONAN_REMOTE_PASSWORD. The latter two are only
# used to upload newly built dependencies to the Conan remote.
# An implicit input is the environment variable `build_dir`.
runs:
using: composite
steps:
- name: add Conan remote
if: ${{ env.CONAN_REMOTE_URL != '' }}
- name: unlock Conan
shell: bash
run: conan remove --locks
- name: export custom recipes
shell: bash
run: |
conan config set general.revisions_enabled=1
conan export external/snappy snappy/1.1.10@
conan export external/rocksdb rocksdb/6.29.5@
conan export external/soci soci/4.0.3@
- name: add Ripple Conan remote
shell: bash
run: |
echo "Adding Conan remote 'xrplf' at ${{ env.CONAN_REMOTE_URL }}."
conan remote add --index 0 --force xrplf ${{ env.CONAN_REMOTE_URL }}
echo "Listing Conan remotes."
conan remote list
conan remote remove ripple || true
# Do not quote the URL. An empty string will be accepted (with
# a non-fatal warning), but a missing argument will not.
conan remote add ripple ${{ env.CONAN_URL }} --insert 0
- name: try to authenticate to Ripple Conan remote
id: remote
shell: bash
run: |
# `conan user` implicitly uses the environment variables
# CONAN_LOGIN_USERNAME_<REMOTE> and CONAN_PASSWORD_<REMOTE>.
# https://docs.conan.io/1/reference/commands/misc/user.html#using-environment-variables
# https://docs.conan.io/1/reference/env_vars.html#conan-login-username-conan-login-username-remote-name
# https://docs.conan.io/1/reference/env_vars.html#conan-password-conan-password-remote-name
echo outcome=$(conan user --remote ripple --password >&2 \
&& echo success || echo failure) | tee ${GITHUB_OUTPUT}
- name: list missing binaries
id: binaries
shell: bash
# Print the list of dependencies that would need to be built locally.
# A non-empty list means we have "failed" to cache binaries remotely.
run: |
echo missing=$(conan info . --build missing --settings build_type=${{ inputs.configuration }} --json 2>/dev/null | grep '^\[') | tee ${GITHUB_OUTPUT}
- name: install dependencies
shell: bash
run: |
mkdir -p ${{ env.build_dir }}
cd ${{ env.build_dir }}
mkdir ${build_dir}
cd ${build_dir}
conan install \
--output-folder . \
--build missing \
--options:host "&:tests=True" \
--options:host "&:xrpld=True" \
--settings:all build_type=${{ inputs.configuration }} \
--options tests=True \
--options xrpld=True \
--settings build_type=${{ inputs.configuration }} \
..
- name: upload dependencies
if: ${{ env.CONAN_REMOTE_URL != '' && env.CONAN_REMOTE_USERNAME != '' && env.CONAN_REMOTE_PASSWORD != '' && github.ref_type == 'branch' && github.ref_name == github.event.repository.default_branch }}
- name: upload dependencies to remote
if: (steps.binaries.outputs.missing != '[]') && (steps.remote.outputs.outcome == 'success')
shell: bash
run: |
echo "Logging into Conan remote 'xrplf' at ${{ env.CONAN_REMOTE_URL }}."
conan remote login xrplf "${{ env.CONAN_REMOTE_USERNAME }}" --password "${{ env.CONAN_REMOTE_PASSWORD }}"
echo "Uploading dependencies."
conan upload '*' --confirm --check --remote xrplf
run: conan upload --remote ripple '*' --all --parallel --confirm
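Read end to end, the Conan-1.x side of this action amounts to a short local sequence, which can be handy when debugging dependency builds outside CI. A sketch using the same recipes and flags shown above, with `Release` standing in for the `configuration` input:

```bash
# Sketch: reproduce the action's dependency steps locally (Conan 1.x side).
conan remove --locks                          # unlock the local Conan cache
conan config set general.revisions_enabled=1  # enable recipe revisions
conan export external/snappy snappy/1.1.10@   # export the patched recipes
conan export external/rocksdb rocksdb/6.29.5@
conan export external/soci soci/4.0.3@
mkdir -p .build && cd .build
conan install .. \
  --output-folder . \
  --build missing \
  --options tests=True \
  --options xrpld=True \
  --settings build_type=Release
```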


@@ -1,64 +1,59 @@
name: clang-format
on:
push:
pull_request:
types: [opened, reopened, synchronize, ready_for_review]
on: [push, pull_request]
jobs:
check:
if: ${{ github.event_name == 'push' || github.event.pull_request.draft != true || contains(github.event.pull_request.labels.*.name, 'DraftRunCI') }}
runs-on: ubuntu-24.04
container: ghcr.io/xrplf/ci/tools-rippled-clang-format
env:
CLANG_VERSION: 18
steps:
# For jobs running in containers, $GITHUB_WORKSPACE and ${{ github.workspace }} might not be the
# same directory. The actions/checkout step is *supposed* to checkout into $GITHUB_WORKSPACE and
# then add it to safe.directory (see instructions at https://github.com/actions/checkout)
# but that's apparently not happening for some container images. We can't be sure what is actually
# happening, so let's pre-emptively add both directories to safe.directory. There's a
# Github issue opened in 2022 and not resolved in 2025 https://github.com/actions/runner/issues/2058 ¯\_(ツ)_/¯
- run: |
git config --global --add safe.directory $GITHUB_WORKSPACE
git config --global --add safe.directory ${{ github.workspace }}
- uses: actions/checkout@v4
- name: Format first-party sources
run: |
clang-format --version
find include src tests -type f \( -name '*.cpp' -o -name '*.hpp' -o -name '*.h' -o -name '*.ipp' \) -exec clang-format -i {} +
- name: Check for differences
id: assert
shell: bash
run: |
set -o pipefail
git diff --exit-code | tee "clang-format.patch"
- name: Upload patch
if: failure() && steps.assert.outcome == 'failure'
uses: actions/upload-artifact@v4
continue-on-error: true
with:
name: clang-format.patch
if-no-files-found: ignore
path: clang-format.patch
- name: What happened?
if: failure() && steps.assert.outcome == 'failure'
env:
PREAMBLE: |
If you are reading this, you are looking at a failed Github Actions
job. That means you pushed one or more files that did not conform
to the formatting specified in .clang-format. That may be because
you neglected to run 'git clang-format' or 'clang-format' before
committing, or that your version of clang-format has an
incompatibility with the one on this
machine, which is:
SUGGESTION: |
- uses: actions/checkout@v4
- name: Install clang-format
run: |
codename=$( lsb_release --codename --short )
sudo tee /etc/apt/sources.list.d/llvm.list >/dev/null <<EOF
deb http://apt.llvm.org/${codename}/ llvm-toolchain-${codename}-${CLANG_VERSION} main
deb-src http://apt.llvm.org/${codename}/ llvm-toolchain-${codename}-${CLANG_VERSION} main
EOF
wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add
sudo apt-get update
sudo apt-get install clang-format-${CLANG_VERSION}
- name: Format first-party sources
run: find include src -type f \( -name '*.cpp' -o -name '*.hpp' -o -name '*.h' -o -name '*.ipp' \) -exec clang-format-${CLANG_VERSION} -i {} +
- name: Check for differences
id: assert
run: |
set -o pipefail
git diff --exit-code | tee "clang-format.patch"
- name: Upload patch
if: failure() && steps.assert.outcome == 'failure'
uses: actions/upload-artifact@v3
continue-on-error: true
with:
name: clang-format.patch
if-no-files-found: ignore
path: clang-format.patch
- name: What happened?
if: failure() && steps.assert.outcome == 'failure'
env:
PREAMBLE: |
If you are reading this, you are looking at a failed Github Actions
job. That means you pushed one or more files that did not conform
to the formatting specified in .clang-format. That may be because
you neglected to run 'git clang-format' or 'clang-format' before
committing, or that your version of clang-format has an
incompatibility with the one on this
machine, which is:
SUGGESTION: |
To fix it, you can do one of two things:
1. Download and apply the patch generated as an artifact of this
job to your repo, commit, and push.
2. Run 'git-clang-format --extensions cpp,h,hpp,ipp develop'
in your repo, commit, and push.
run: |
echo "${PREAMBLE}"
clang-format --version
echo "${SUGGESTION}"
exit 1
To fix it, you can do one of two things:
1. Download and apply the patch generated as an artifact of this
job to your repo, commit, and push.
2. Run 'git-clang-format --extensions cpp,h,hpp,ipp develop'
in your repo, commit, and push.
run: |
echo "${PREAMBLE}"
clang-format-${CLANG_VERSION} --version
echo "${SUGGESTION}"
exit 1


@@ -10,11 +10,11 @@ concurrency:
cancel-in-progress: true
jobs:
documentation:
job:
runs-on: ubuntu-latest
permissions:
contents: write
container: ghcr.io/xrplf/rippled-build-ubuntu:aaf5e3e
container: rippleci/rippled-build-ubuntu:aaf5e3e
steps:
- name: checkout
uses: actions/checkout@v4

.github/workflows/instrumentation.yml vendored Normal file

@@ -0,0 +1,103 @@
name: instrumentation
on:
pull_request:
push:
# If the branches list is ever changed, be sure to change it on all
# build/test jobs (nix, macos, windows, instrumentation)
branches:
# Always build the package branches
- develop
- release
- master
# Branches that opt-in to running
- 'ci/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
# NOTE we are not using dependencies built inside nix because nix is lagging
# with compiler versions. Instrumentation requires clang version 16 or later
instrumentation-build:
env:
CLANG_RELEASE: 16
strategy:
fail-fast: false
runs-on: [self-hosted, heavy]
container: debian:bookworm
steps:
- name: install prerequisites
env:
DEBIAN_FRONTEND: noninteractive
run: |
apt-get update
apt-get install --yes --no-install-recommends \
clang-${CLANG_RELEASE} clang++-${CLANG_RELEASE} \
python3-pip python-is-python3 make cmake git wget
apt-get clean
update-alternatives --install \
/usr/bin/clang clang /usr/bin/clang-${CLANG_RELEASE} 100 \
--slave /usr/bin/clang++ clang++ /usr/bin/clang++-${CLANG_RELEASE}
update-alternatives --auto clang
pip install --no-cache --break-system-packages "conan<2"
- name: checkout
uses: actions/checkout@v4
- name: prepare environment
run: |
mkdir ${GITHUB_WORKSPACE}/.build
echo "SOURCE_DIR=$GITHUB_WORKSPACE" >> $GITHUB_ENV
echo "BUILD_DIR=$GITHUB_WORKSPACE/.build" >> $GITHUB_ENV
echo "CC=/usr/bin/clang" >> $GITHUB_ENV
echo "CXX=/usr/bin/clang++" >> $GITHUB_ENV
- name: configure Conan
run: |
conan profile new --detect default
conan profile update settings.compiler=clang default
conan profile update settings.compiler.version=${CLANG_RELEASE} default
conan profile update settings.compiler.libcxx=libstdc++11 default
conan profile update settings.compiler.cppstd=20 default
conan profile update options.rocksdb=False default
conan profile update \
'conf.tools.build:compiler_executables={"c": "/usr/bin/clang", "cpp": "/usr/bin/clang++"}' default
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_DISABLE_CONCEPTS"' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]' default
conan export external/snappy snappy/1.1.10@
conan export external/soci soci/4.0.3@
- name: build dependencies
run: |
cd ${BUILD_DIR}
conan install ${SOURCE_DIR} \
--output-folder ${BUILD_DIR} \
--install-folder ${BUILD_DIR} \
--build missing \
--settings build_type=Debug
- name: build with instrumentation
run: |
cd ${BUILD_DIR}
cmake -S ${SOURCE_DIR} -B ${BUILD_DIR} \
-Dvoidstar=ON \
-Dtests=ON \
-Dxrpld=ON \
-DCMAKE_BUILD_TYPE=Debug \
-DSECP256K1_BUILD_BENCHMARK=OFF \
-DSECP256K1_BUILD_TESTS=OFF \
-DSECP256K1_BUILD_EXHAUSTIVE_TESTS=OFF \
-DCMAKE_TOOLCHAIN_FILE=${BUILD_DIR}/build/generators/conan_toolchain.cmake
cmake --build . --parallel $(nproc)
- name: verify instrumentation enabled
run: |
cd ${BUILD_DIR}
./rippled --version | grep libvoidstar
- name: run unit tests
run: |
cd ${BUILD_DIR}
./rippled -u --unittest-jobs $(( $(nproc)/4 ))


@@ -1,53 +1,49 @@
name: levelization
on:
push:
pull_request:
types: [opened, reopened, synchronize, ready_for_review]
on: [push, pull_request]
jobs:
check:
if: ${{ github.event_name == 'push' || github.event.pull_request.draft != true || contains(github.event.pull_request.labels.*.name, 'DraftRunCI') }}
runs-on: ubuntu-latest
env:
CLANG_VERSION: 10
steps:
- uses: actions/checkout@v4
- name: Check levelization
run: Builds/levelization/levelization.sh
- name: Check for differences
id: assert
run: |
set -o pipefail
git diff --exit-code | tee "levelization.patch"
- name: Upload patch
if: failure() && steps.assert.outcome == 'failure'
uses: actions/upload-artifact@v4
continue-on-error: true
with:
name: levelization.patch
if-no-files-found: ignore
path: levelization.patch
- name: What happened?
if: failure() && steps.assert.outcome == 'failure'
env:
MESSAGE: |
If you are reading this, you are looking at a failed Github
Actions job. That means you changed the dependency relationships
between the modules in rippled. That may be an improvement or a
regression. This check doesn't judge.
- uses: actions/checkout@v4
- name: Check levelization
run: Builds/levelization/levelization.sh
- name: Check for differences
id: assert
run: |
set -o pipefail
git diff --exit-code | tee "levelization.patch"
- name: Upload patch
if: failure() && steps.assert.outcome == 'failure'
uses: actions/upload-artifact@v3
continue-on-error: true
with:
name: levelization.patch
if-no-files-found: ignore
path: levelization.patch
- name: What happened?
if: failure() && steps.assert.outcome == 'failure'
env:
MESSAGE: |
If you are reading this, you are looking at a failed Github
Actions job. That means you changed the dependency relationships
between the modules in rippled. That may be an improvement or a
regression. This check doesn't judge.
A rule of thumb, though, is that if your changes caused
something to be removed from loops.txt, that's probably an
improvement. If something was added, it's probably a regression.
A rule of thumb, though, is that if your changes caused
something to be removed from loops.txt, that's probably an
improvement. If something was added, it's probably a regression.
To fix it, you can do one of two things:
1. Download and apply the patch generated as an artifact of this
job to your repo, commit, and push.
2. Run './Builds/levelization/levelization.sh' in your repo,
commit, and push.
To fix it, you can do one of two things:
1. Download and apply the patch generated as an artifact of this
job to your repo, commit, and push.
2. Run './Builds/levelization/levelization.sh' in your repo,
commit, and push.
See Builds/levelization/README.md for more info.
run: |
echo "${MESSAGE}"
exit 1
See Builds/levelization/README.md for more info.
run: |
echo "${MESSAGE}"
exit 1


@@ -1,35 +1,33 @@
name: Check libXRPL compatibility with Clio
env:
CONAN_REMOTE_URL: https://conan.ripplex.io
CONAN_LOGIN_USERNAME_XRPLF: ${{ secrets.CONAN_REMOTE_USERNAME }}
CONAN_PASSWORD_XRPLF: ${{ secrets.CONAN_REMOTE_PASSWORD }}
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
on:
pull_request:
paths:
- "src/libxrpl/protocol/BuildInfo.cpp"
- ".github/workflows/libxrpl.yml"
types: [opened, reopened, synchronize, ready_for_review]
- 'src/libxrpl/protocol/BuildInfo.cpp'
- '.github/workflows/libxrpl.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
publish:
if: ${{ github.event_name == 'push' || github.event.pull_request.draft != true || contains(github.event.pull_request.labels.*.name, 'DraftRunCI') }}
name: Publish libXRPL
outputs:
outcome: ${{ steps.upload.outputs.outcome }}
version: ${{ steps.version.outputs.version }}
channel: ${{ steps.channel.outputs.channel }}
runs-on: [self-hosted, heavy]
container: ghcr.io/xrplf/rippled-build-ubuntu:aaf5e3e
container: rippleci/rippled-build-ubuntu:aaf5e3e
steps:
- name: Wait for essential checks to succeed
uses: lewagon/wait-on-check-action@v1.3.4
with:
ref: ${{ github.event.pull_request.head.sha || github.sha }}
running-workflow-name: wait-for-check-regexp
check-regexp: "(dependencies|test).*linux.*" # Ignore windows and mac tests but make sure linux passes
check-regexp: '(dependencies|test).*linux.*' # Ignore windows and mac tests but make sure linux passes
repo-token: ${{ secrets.GITHUB_TOKEN }}
wait-interval: 10
- name: Checkout
@@ -43,20 +41,20 @@ jobs:
shell: bash
run: |
conan export . ${{ steps.channel.outputs.channel }}
- name: Add Conan remote
- name: Add Ripple Conan remote
shell: bash
run: |
echo "Adding Conan remote 'xrplf' at ${{ env.CONAN_REMOTE_URL }}."
conan remote add xrplf ${{ env.CONAN_REMOTE_URL }} --insert 0 --force
echo "Listing Conan remotes."
conan remote list
conan remote remove ripple || true
# Do not quote the URL. An empty string will be accepted (with a non-fatal warning), but a missing argument will not.
conan remote add ripple ${{ env.CONAN_URL }} --insert 0
- name: Parse new version
id: version
shell: bash
run: |
echo version="$(cat src/libxrpl/protocol/BuildInfo.cpp | grep "versionString =" \
| awk -F '"' '{print $2}')" | tee ${GITHUB_OUTPUT}
- name: Try to authenticate to Conan remote
- name: Try to authenticate to Ripple Conan remote
id: remote
shell: bash
run: |
@@ -64,7 +62,7 @@ jobs:
# https://docs.conan.io/1/reference/commands/misc/user.html#using-environment-variables
# https://docs.conan.io/1/reference/env_vars.html#conan-login-username-conan-login-username-remote-name
# https://docs.conan.io/1/reference/env_vars.html#conan-password-conan-password-remote-name
echo outcome=$(conan user --remote xrplf --password >&2 \
echo outcome=$(conan user --remote ripple --password >&2 \
&& echo success || echo failure) | tee ${GITHUB_OUTPUT}
- name: Upload new package
id: upload
@@ -87,5 +85,4 @@ jobs:
run: |
gh api --method POST -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" \
/repos/xrplf/clio/dispatches -f "event_type=check_libxrpl" \
-F "client_payload[version]=${{ needs.publish.outputs.version }}@${{ needs.publish.outputs.channel }}" \
-F "client_payload[pr]=${{ github.event.pull_request.number }}"
-F "client_payload[version]=${{ needs.publish.outputs.version }}@${{ needs.publish.outputs.channel }}"


@@ -1,7 +1,6 @@
name: macos
on:
pull_request:
types: [opened, reopened, synchronize, ready_for_review]
push:
# If the branches list is ever changed, be sure to change it on all
# build/test jobs (nix, macos, windows, instrumentation)
@@ -11,29 +10,14 @@ on:
- release
- master
# Branches that opt-in to running
- "ci/**"
- 'ci/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
# This part of Conan configuration is specific to this workflow only; we do not want
# to pollute conan/profiles directory with settings which might not work for others
env:
CONAN_REMOTE_URL: https://conan.ripplex.io
CONAN_REMOTE_USERNAME: ${{ secrets.CONAN_REMOTE_USERNAME }}
CONAN_REMOTE_PASSWORD: ${{ secrets.CONAN_REMOTE_PASSWORD }}
# This part of the Conan configuration is specific to this workflow only; we
# do not want to pollute the 'conan/profiles' directory with settings that
# might not work for other workflows.
CONAN_GLOBAL_CONF: |
core.download:parallel={{os.cpu_count()}}
core.upload:parallel={{os.cpu_count()}}
tools.build:jobs={{ (os.cpu_count() * 4/5) | int }}
tools.build:verbosity=verbose
tools.compilation:verbosity=verbose
jobs:
test:
if: ${{ github.event_name == 'push' || github.event.pull_request.draft != true || contains(github.event.pull_request.labels.*.name, 'DraftRunCI') }}
strategy:
matrix:
platform:
@@ -42,38 +26,21 @@ jobs:
- Ninja
configuration:
- Release
runs-on: [self-hosted, macOS, mac-runner-m1]
runs-on: [self-hosted, macOS]
env:
# The `build` action requires these variables.
build_dir: .build
NUM_PROCESSORS: 12
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@v4
- name: install Conan
run: |
brew install conan
brew install conan@1
echo '/opt/homebrew/opt/conan@1/bin' >> $GITHUB_PATH
- name: install Ninja
if: matrix.generator == 'Ninja'
run: brew install ninja
- name: install python
run: |
if which python > /dev/null 2>&1; then
echo "Python executable exists"
else
brew install python@3.13
ln -s /opt/homebrew/bin/python3 /opt/homebrew/bin/python
fi
- name: install cmake
run: |
if which cmake > /dev/null 2>&1; then
echo "cmake executable exists"
else
brew install cmake
fi
- name: install nproc
run: |
brew install coreutils
- name: check environment
run: |
env | sort
@@ -81,19 +48,17 @@ jobs:
python --version
conan --version
cmake --version
nproc --version
echo -n "nproc returns: "
nproc
system_profiler SPHardwareDataType
sysctl -n hw.logicalcpu
clang --version
- name: configure Conan
run: |
echo "${CONAN_GLOBAL_CONF}" > $(conan config home)/global.conf
conan config install conan/profiles/ -tf $(conan config home)/profiles/
conan profile show
run : |
conan profile new default --detect || true
conan profile update settings.compiler.cppstd=20 default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]' default
- name: build dependencies
uses: ./.github/actions/dependencies
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
with:
configuration: ${{ matrix.configuration }}
- name: build
@@ -101,12 +66,6 @@ jobs:
with:
generator: ${{ matrix.generator }}
configuration: ${{ matrix.configuration }}
cmake-args: "-Dassert=TRUE -Dwerr=TRUE ${{ matrix.cmake-args }}"
- name: test
run: |
n=$(nproc)
echo "Using $n test jobs"
cd ${build_dir}
./rippled --unittest --unittest-jobs $n
ctest -j $n --output-on-failure
${build_dir}/rippled --unittest


@@ -1,60 +0,0 @@
name: missing-commits
on:
push:
branches:
# Only check that the branches are up to date when updating the
# relevant branches.
- develop
- release
jobs:
up_to_date:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Check for missing commits
id: commits
env:
SUGGESTION: |
If you are reading this, then the commits indicated above are
missing from "develop" and/or "release". Do a reverse-merge
as soon as possible. See CONTRIBUTING.md for instructions.
run: |
set -o pipefail
# Branches ordered by how "canonical" they are. Every commit in
# one branch should be in all the branches behind it
order=( master release develop )
branches=()
for branch in "${order[@]}"
do
# Check that the branches exist so that this job will work on
# forked repos, which don't necessarily have master and
# release branches.
if git ls-remote --exit-code --heads origin \
refs/heads/${branch} > /dev/null
then
branches+=( origin/${branch} )
fi
done
prior=()
for branch in "${branches[@]}"
do
if [[ ${#prior[@]} -ne 0 ]]
then
echo "Checking ${prior[@]} for commits missing from ${branch}"
git log --oneline --no-merges "${prior[@]}" \
^$branch | tee -a "missing-commits.txt"
echo
fi
prior+=( "${branch}" )
done
if [[ $( cat missing-commits.txt | wc -l ) -ne 0 ]]
then
echo "${SUGGESTION}"
exit 1
fi
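When this job fails, the same check can be reproduced by hand in a clone; a minimal sketch following the script above:

```bash
# Sketch: list commits present on master/release but missing from develop.
git fetch origin master release develop
git log --oneline --no-merges origin/master ^origin/release
git log --oneline --no-merges origin/master origin/release ^origin/develop
# Any output means a reverse-merge is needed (see CONTRIBUTING.md).
```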


@@ -1,38 +1,23 @@
name: nix
on:
pull_request:
types: [opened, reopened, synchronize, ready_for_review]
push:
# If the branches list is ever changed, be sure to change it on all
# build/test jobs (nix, macos, windows)
# build/test jobs (nix, macos, windows, instrumentation)
branches:
# Always build the package branches
- develop
- release
- master
# Branches that opt-in to running
- "ci/**"
- 'ci/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
CONAN_REMOTE_URL: https://conan.ripplex.io
CONAN_REMOTE_USERNAME: ${{ secrets.CONAN_REMOTE_USERNAME }}
CONAN_REMOTE_PASSWORD: ${{ secrets.CONAN_REMOTE_PASSWORD }}
# This part of the Conan configuration is specific to this workflow only; we
# do not want to pollute the 'conan/profiles' directory with settings that
# might not work for other workflows.
CONAN_GLOBAL_CONF: |
core.download:parallel={{ os.cpu_count() }}
core.upload:parallel={{ os.cpu_count() }}
tools.build:jobs={{ (os.cpu_count() * 4/5) | int }}
tools.build:verbosity=verbose
tools.compilation:verbosity=verbose
# This workflow has multiple job matrixes.
# They can be considered phases because most of the matrices ("test",
# "coverage", "conan", ) depend on the first ("dependencies").
# This workflow has two job matrixes.
# They can be considered phases because the second matrix ("test")
# depends on the first ("dependencies").
#
# The first phase has a job in the matrix for each combination of
# variables that affects dependency ABI:
@@ -45,16 +30,13 @@ env:
# to hold the binaries if they are built locally.
# We must use the "{upload,download}-artifact" actions instead.
#
# The remaining phases have a job in the matrix for each test
# configuration. They install dependency binaries from the cache,
# whichever was used, and build and test rippled.
#
# "instrumentation" is independent, but is included here because it also
# builds on linux in the same "on:" conditions.
# The second phase has a job in the matrix for each test configuration.
# It installs dependency binaries from the cache, whichever was used,
# and builds and tests rippled.
jobs:
dependencies:
if: ${{ github.event_name == 'push' || github.event.pull_request.draft != true || contains(github.event.pull_request.labels.*.name, 'DraftRunCI') }}
strategy:
fail-fast: false
matrix:
@@ -68,47 +50,57 @@ jobs:
- Release
include:
- compiler: gcc
compiler_version: 12
distro: ubuntu
codename: jammy
profile:
version: 11
cc: /usr/bin/gcc
cxx: /usr/bin/g++
- compiler: clang
compiler_version: 16
distro: debian
codename: bookworm
profile:
version: 14
cc: /usr/bin/clang-14
cxx: /usr/bin/clang++-14
runs-on: [self-hosted, heavy]
container: ghcr.io/xrplf/ci/${{ matrix.distro }}-${{ matrix.codename }}:${{ matrix.compiler }}-${{ matrix.compiler_version }}
container: rippleci/rippled-build-ubuntu:aaf5e3e
env:
build_dir: .build
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@v4
- name: check environment
run: |
echo ${PATH} | tr ':' '\n'
lsb_release -a || true
${{ matrix.compiler }}-${{ matrix.compiler_version }} --version
conan --version
cmake --version
env | sort
- name: configure Conan
run: |
echo "${CONAN_GLOBAL_CONF}" >> $(conan config home)/global.conf
conan config install conan/profiles/ -tf $(conan config home)/profiles/
conan profile show
conan profile new default --detect
conan profile update settings.compiler.cppstd=20 default
conan profile update settings.compiler=${{ matrix.compiler }} default
conan profile update settings.compiler.version=${{ matrix.profile.version }} default
conan profile update settings.compiler.libcxx=libstdc++11 default
conan profile update env.CC=${{ matrix.profile.cc }} default
conan profile update env.CXX=${{ matrix.profile.cxx }} default
conan profile update conf.tools.build:compiler_executables='{"c": "${{ matrix.profile.cc }}", "cpp": "${{ matrix.profile.cxx }}"}' default
- name: archive profile
# Create this archive before dependencies are added to the local cache.
run: tar -czf conan.tar.gz -C ${CONAN_HOME} .
run: tar -czf conan.tar -C ~/.conan .
- name: build dependencies
uses: ./.github/actions/dependencies
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
with:
configuration: ${{ matrix.configuration }}
- name: upload archive
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.platform }}-${{ matrix.compiler }}-${{ matrix.configuration }}
path: conan.tar.gz
path: conan.tar
if-no-files-found: error
test:
strategy:
fail-fast: false
@@ -121,32 +113,23 @@ jobs:
configuration:
- Debug
- Release
include:
- compiler: gcc
compiler_version: 12
distro: ubuntu
codename: jammy
- compiler: clang
compiler_version: 16
distro: debian
codename: bookworm
cmake-args:
-
- "-Dunity=ON"
needs: dependencies
runs-on: [self-hosted, heavy]
container: ghcr.io/xrplf/ci/${{ matrix.distro }}-${{ matrix.codename }}:${{ matrix.compiler }}-${{ matrix.compiler_version }}
container: rippleci/rippled-build-ubuntu:aaf5e3e
env:
build_dir: .build
steps:
- name: download cache
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
uses: actions/download-artifact@v3
with:
name: ${{ matrix.platform }}-${{ matrix.compiler }}-${{ matrix.configuration }}
- name: extract cache
run: |
mkdir -p ${CONAN_HOME}
tar -xzf conan.tar.gz -C ${CONAN_HOME}
mkdir -p ~/.conan
tar -xzf conan.tar -C ~/.conan
- name: check environment
run: |
env | sort
@@ -154,9 +137,11 @@ jobs:
conan --version
cmake --version
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@v4
- name: dependencies
uses: ./.github/actions/dependencies
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
with:
configuration: ${{ matrix.configuration }}
- name: build
@@ -164,73 +149,11 @@ jobs:
with:
generator: Ninja
configuration: ${{ matrix.configuration }}
cmake-args: "-Dassert=TRUE -Dwerr=TRUE ${{ matrix.cmake-args }}"
- name: check linking
run: |
cd ${build_dir}
ldd ./rippled
if [ "$(ldd ./rippled | grep -E '(libstdc\+\+|libgcc)' | wc -l)" -eq 0 ]; then
echo 'The binary is statically linked.'
else
echo 'The binary is dynamically linked.'
exit 1
fi
cmake-args: ${{ matrix.cmake-args }}
- name: test
run: |
cd ${build_dir}
./rippled --unittest --unittest-jobs $(nproc)
ctest -j $(nproc) --output-on-failure
${build_dir}/rippled --unittest --unittest-jobs $(nproc)
reference-fee-test:
strategy:
fail-fast: false
matrix:
platform:
- linux
compiler:
- gcc
configuration:
- Debug
cmake-args:
- "-DUNIT_TEST_REFERENCE_FEE=200"
- "-DUNIT_TEST_REFERENCE_FEE=1000"
needs: dependencies
runs-on: [self-hosted, heavy]
container: ghcr.io/xrplf/ci/ubuntu-jammy:gcc-12
env:
build_dir: .build
steps:
- name: download cache
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
with:
name: ${{ matrix.platform }}-${{ matrix.compiler }}-${{ matrix.configuration }}
- name: extract cache
run: |
mkdir -p ${CONAN_HOME}
tar -xzf conan.tar.gz -C ${CONAN_HOME}
- name: check environment
run: |
env | sort
echo ${PATH} | tr ':' '\n'
conan --version
cmake --version
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- name: dependencies
uses: ./.github/actions/dependencies
with:
configuration: ${{ matrix.configuration }}
- name: build
uses: ./.github/actions/build
with:
generator: Ninja
configuration: ${{ matrix.configuration }}
cmake-args: "-Dassert=TRUE -Dwerr=TRUE ${{ matrix.cmake-args }}"
- name: test
run: |
cd ${build_dir}
./rippled --unittest --unittest-jobs $(nproc)
ctest -j $(nproc) --output-on-failure
coverage:
strategy:
@@ -244,18 +167,20 @@ jobs:
- Debug
needs: dependencies
runs-on: [self-hosted, heavy]
container: ghcr.io/xrplf/ci/ubuntu-jammy:gcc-12
container: rippleci/rippled-build-ubuntu:aaf5e3e
env:
build_dir: .build
steps:
- name: download cache
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
uses: actions/download-artifact@v3
with:
name: ${{ matrix.platform }}-${{ matrix.compiler }}-${{ matrix.configuration }}
- name: extract cache
run: |
mkdir -p ${CONAN_HOME}
tar -xzf conan.tar.gz -C ${CONAN_HOME}
mkdir -p ~/.conan
tar -xzf conan.tar -C ~/.conan
- name: install gcovr
run: pip install "gcovr>=7,<8"
- name: check environment
run: |
echo ${PATH} | tr ':' '\n'
@@ -263,11 +188,13 @@ jobs:
cmake --version
gcovr --version
env | sort
ls ${CONAN_HOME}
ls ~/.conan
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@v4
- name: dependencies
uses: ./.github/actions/dependencies
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
with:
configuration: ${{ matrix.configuration }}
- name: build
@@ -276,8 +203,6 @@ jobs:
generator: Ninja
configuration: ${{ matrix.configuration }}
cmake-args: >-
-Dassert=TRUE
-Dwerr=TRUE
-Dcoverage=ON
-Dcoverage_format=xml
-DCODE_COVERAGE_VERBOSE=ON
@@ -289,7 +214,7 @@ jobs:
run: |
mv "${build_dir}/coverage.xml" ./
- name: archive coverage report
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02
uses: actions/upload-artifact@v3
with:
name: coverage.xml
path: coverage.xml
@@ -311,23 +236,19 @@ jobs:
conan:
needs: dependencies
runs-on: [self-hosted, heavy]
container:
image: ghcr.io/xrplf/ci/ubuntu-jammy:gcc-12
container: rippleci/rippled-build-ubuntu:aaf5e3e
env:
build_dir: .build
platform: linux
compiler: gcc
compiler_version: 12
configuration: Release
steps:
- name: download cache
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
uses: actions/download-artifact@v3
with:
name: ${{ env.platform }}-${{ env.compiler }}-${{ env.configuration }}
name: linux-gcc-${{ env.configuration }}
- name: extract cache
run: |
mkdir -p ${CONAN_HOME}
tar -xzf conan.tar.gz -C ${CONAN_HOME}
mkdir -p ~/.conan
tar -xzf conan.tar -C ~/.conan
- name: check environment
run: |
env | sort
@@ -335,88 +256,29 @@ jobs:
conan --version
cmake --version
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@v4
- name: dependencies
uses: ./.github/actions/dependencies
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
with:
configuration: ${{ env.configuration }}
- name: export
run: |
conan export . --version head
version=$(conan inspect --raw version .)
reference="xrpl/${version}@local/test"
conan remove -f ${reference} || true
conan export . local/test
echo "reference=${reference}" >> "${GITHUB_ENV}"
- name: build
run: |
cd tests/conan
mkdir ${build_dir} && cd ${build_dir}
conan install .. \
--settings:all build_type=${configuration} \
--output-folder . \
--build missing
cd examples/example
mkdir ${build_dir}
cd ${build_dir}
conan install .. --output-folder . \
--require-override ${reference} --build missing
cmake .. \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=./build/${configuration}/generators/conan_toolchain.cmake \
-DCMAKE_BUILD_TYPE=${configuration}
cmake --build .
./example | grep '^[[:digit:]]\+\.[[:digit:]]\+\.[[:digit:]]\+'
instrumentation-build:
needs: dependencies
runs-on: [self-hosted, heavy]
container: ghcr.io/xrplf/ci/debian-bookworm:clang-16
env:
build_dir: .build
steps:
- name: download cache
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
with:
name: linux-clang-Debug
- name: extract cache
run: |
mkdir -p ${CONAN_HOME}
tar -xzf conan.tar.gz -C ${CONAN_HOME}
- name: check environment
run: |
echo ${PATH} | tr ':' '\n'
conan --version
cmake --version
env | sort
ls ${CONAN_HOME}
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- name: dependencies
uses: ./.github/actions/dependencies
with:
configuration: Debug
- name: prepare environment
run: |
mkdir -p ${build_dir}
echo "SOURCE_DIR=$(pwd)" >> $GITHUB_ENV
echo "BUILD_DIR=$(pwd)/${build_dir}" >> $GITHUB_ENV
- name: build with instrumentation
run: |
cd ${BUILD_DIR}
cmake -S ${SOURCE_DIR} -B ${BUILD_DIR} \
-Dvoidstar=ON \
-Dtests=ON \
-Dxrpld=ON \
-DCMAKE_BUILD_TYPE=Debug \
-DSECP256K1_BUILD_BENCHMARK=OFF \
-DSECP256K1_BUILD_TESTS=OFF \
-DSECP256K1_BUILD_EXHAUSTIVE_TESTS=OFF \
-DCMAKE_TOOLCHAIN_FILE=${BUILD_DIR}/build/generators/conan_toolchain.cmake
cmake --build . --parallel $(nproc)
- name: verify instrumentation enabled
run: |
cd ${BUILD_DIR}
./rippled --version | grep libvoidstar
- name: run unit tests
run: |
cd ${BUILD_DIR}
./rippled -u --unittest-jobs $(( $(nproc)/4 ))
ctest -j $(nproc) --output-on-failure


@@ -2,7 +2,6 @@ name: windows
on:
pull_request:
types: [opened, reopened, synchronize, ready_for_review]
push:
# If the branches list is ever changed, be sure to change it on all
# build/test jobs (nix, macos, windows, instrumentation)
@@ -12,53 +11,38 @@ on:
- release
- master
# Branches that opt-in to running
- "ci/**"
- 'ci/**'
# https://docs.github.com/en/actions/using-jobs/using-concurrency
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
CONAN_REMOTE_URL: https://conan.ripplex.io
CONAN_REMOTE_USERNAME: ${{ secrets.CONAN_REMOTE_USERNAME }}
CONAN_REMOTE_PASSWORD: ${{ secrets.CONAN_REMOTE_PASSWORD }}
# This part of the Conan configuration is specific to this workflow only; we
# do not want to pollute the 'conan/profiles' directory with settings that
# might not work for other workflows.
CONAN_GLOBAL_CONF: |
core.download:parallel={{os.cpu_count()}}
core.upload:parallel={{os.cpu_count()}}
tools.build:jobs=24
tools.build:verbosity=verbose
tools.compilation:verbosity=verbose
jobs:
test:
if: ${{ github.event_name == 'push' || github.event.pull_request.draft != true || contains(github.event.pull_request.labels.*.name, 'DraftRunCI') }}
strategy:
fail-fast: false
matrix:
version:
- generator: Visual Studio 17 2022
runs-on: windows-2022
generator:
- Visual Studio 16 2019
configuration:
- type: Release
tests: true
- type: Debug
# Skip running unit tests on debug builds, because they
# take an unreasonable amount of time
tests: false
runtime: d
runs-on: ${{ matrix.version.runs-on }}
- Release
# Github hosted runners tend to hang when running Debug unit tests.
# Instead of trying to work around it, disable the Debug job until
# something beefier (i.e. a heavy self-hosted runner) becomes
# available.
# - Debug
runs-on: windows-2019
env:
build_dir: .build
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@v4
- name: choose Python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065
uses: actions/setup-python@v5
with:
python-version: 3.13
python-version: 3.9
- name: learn Python cache directory
id: pip-cache
shell: bash
@@ -66,12 +50,12 @@ jobs:
python -m pip install --upgrade pip
echo "dir=$(pip cache dir)" | tee ${GITHUB_OUTPUT}
- name: restore Python cache directory
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684
uses: actions/cache@v4
with:
path: ${{ steps.pip-cache.outputs.dir }}
key: ${{ runner.os }}-${{ hashFiles('.github/workflows/windows.yml') }}
path: ${{ steps.pip-cache.outputs.dir }}
key: ${{ runner.os }}-${{ hashFiles('.github/workflows/windows.yml') }}
- name: install Conan
run: pip install wheel conan
run: pip install wheel 'conan<2'
- name: check environment
run: |
dir env:
@@ -82,25 +66,26 @@ jobs:
- name: configure Conan
shell: bash
run: |
echo "${CONAN_GLOBAL_CONF}" > $(conan config home)/global.conf
conan config install conan/profiles/ -tf $(conan config home)/profiles/
conan profile show
conan profile new default --detect
conan profile update settings.compiler.cppstd=20 default
conan profile update settings.compiler.runtime=MT${{ matrix.configuration == 'Debug' && 'd' || '' }} default
- name: build dependencies
uses: ./.github/actions/dependencies
env:
CONAN_URL: http://18.143.149.228:8081/artifactory/api/conan/conan-non-prod
CONAN_LOGIN_USERNAME_RIPPLE: ${{ secrets.CONAN_USERNAME }}
CONAN_PASSWORD_RIPPLE: ${{ secrets.CONAN_TOKEN }}
with:
configuration: ${{ matrix.configuration.type }}
configuration: ${{ matrix.configuration }}
- name: build
uses: ./.github/actions/build
with:
generator: "${{ matrix.version.generator }}"
configuration: ${{ matrix.configuration.type }}
generator: '${{ matrix.generator }}'
configuration: ${{ matrix.configuration }}
# Hard code for now. Move to the matrix if varied options are needed
cmake-args: "-Dassert=TRUE -Dwerr=TRUE -Dreporting=OFF -Dunity=ON"
cmake-args: '-Dassert=ON -Dreporting=OFF -Dunity=ON'
cmake-target: install
- name: test
shell: bash
if: ${{ matrix.configuration.tests }}
run: |
cd ${build_dir}/${{ matrix.configuration.type }}
./rippled --unittest --unittest-jobs $(nproc)
ctest -j $(nproc) --output-on-failure
${build_dir}/${{ matrix.configuration }}/rippled --unittest --unittest-jobs $(nproc)


@@ -1,6 +1,6 @@
# .pre-commit-config.yaml
repos:
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: v18.1.8
hooks:
- id: clang-format
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: v18.1.3
hooks:
- id: clang-format
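Either revision of this config is driven the same way; assuming the standard `pre-commit` tool is installed, the hooks can be exercised locally like so:

```bash
pip install pre-commit        # install the pre-commit driver
pre-commit install            # register the git hook in this clone
pre-commit run --all-files    # run the clang-format hook across the tree
```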


@@ -83,49 +83,25 @@ The [commandline](https://xrpl.org/docs/references/http-websocket-apis/api-conve
The `network_id` field was added in the `server_info` response in version 1.5.0 (2019), but it is not returned in [reporting mode](https://xrpl.org/rippled-server-modes.html#reporting-mode). However, use of reporting mode is now discouraged, in favor of using [Clio](https://github.com/XRPLF/clio) instead.
## XRP Ledger server version 2.5.0
As of 2025-04-04, version 2.5.0 is in development. You can use a pre-release version by building from source or [using the `nightly` package](https://xrpl.org/docs/infrastructure/installation/install-rippled-on-ubuntu).
### Additions and bugfixes in 2.5.0
- `channel_authorize`: If `signing_support` is not enabled in the config, the RPC is disabled.
## XRP Ledger server version 2.4.0
[Version 2.4.0](https://github.com/XRPLF/rippled/releases/tag/2.4.0) was released on March 4, 2025.
### Additions and bugfixes in 2.4.0
### Addition in 2.4
- `ledger_entry`: `state` is added as an alias for `ripple_state`.
- `ledger_entry`: Enables case-insensitive filtering by canonical name in addition to case-sensitive filtering by RPC name.
- `validators`: Added new field `validator_list_threshold` in response.
- `simulate`: A new RPC that executes a [dry run of a transaction submission](https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0069d-simulate#2-rpc-simulate)
- Signing methods autofill fees better and properly handle transactions that don't have a base fee, and will also autofill the `NetworkID` field.
## XRP Ledger server version 2.3.0
[Version 2.3.0](https://github.com/XRPLF/rippled/releases/tag/2.3.0) was released on Nov 25, 2024.
### Breaking changes in 2.3.0
### Breaking change in 2.3
- `book_changes`: If the requested ledger version is not available on this node, a `ledgerNotFound` error is returned and the node does not attempt to acquire the ledger from the p2p network (as with other non-admin RPCs).
Admins can still attempt to retrieve old ledgers with the `ledger_request` RPC.
### Additions and bugfixes in 2.3.0
### Addition in 2.3
- `book_changes`: Returns a `validated` field in its response, which was missing in prior versions.
## XRP Ledger server version 2.2.0
[Version 2.2.0](https://github.com/XRPLF/rippled/releases/tag/2.2.0) was released on Jun 5, 2024. The following additions are non-breaking (because they are purely additive):
- The `feature` method now has a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781))
## XRP Ledger server version 2.0.0
[Version 2.0.0](https://github.com/XRPLF/rippled/releases/tag/2.0.0) was released on Jan 9, 2024. The following additions are non-breaking (because they are purely additive):
The following additions are non-breaking (because they are purely additive).
- `server_definitions`: A new RPC that generates a `definitions.json`-like output that can be used in XRPL libraries.
- In `Payment` transactions, `DeliverMax` has been added. This is a replacement for the `Amount` field, which should not be used. Typically, the `delivered_amount` (in transaction metadata) should be used. To ease the transition, `DeliverMax` is present regardless of API version, since adding a field is non-breaking.
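For illustration, two of the additions above can be exercised against a local server's JSON-RPC port roughly as follows; the `simulate` parameter shape follows XLS-0069d and is an assumption, not verified against any particular build:

```bash
# Fetch the definitions.json-like output added in 2.0.0.
curl -s -X POST http://localhost:5005/ \
  -d '{"method": "server_definitions", "params": [{}]}'

# Dry-run a transaction with the simulate RPC added in 2.4.0 (sketch;
# addresses are placeholders, and DeliverMax is used per the note above).
curl -s -X POST http://localhost:5005/ \
  -d '{"method": "simulate", "params": [{"tx_json": {
        "TransactionType": "Payment",
        "Account": "<sender-address>",
        "Destination": "<destination-address>",
        "DeliverMax": "1000000"
      }}]}'
```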

BUILD.md

@@ -3,29 +3,29 @@
| These instructions assume you have a C++ development environment ready with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up on Linux, macOS, or Windows, [see this guide](./docs/build/environment.md). |
> These instructions also assume a basic familiarity with Conan and CMake.
> If you are unfamiliar with Conan, you can read our
> [crash course](./docs/build/conan.md) or the official [Getting Started][3]
> walkthrough.
> If you are unfamiliar with Conan,
> you can read our [crash course](./docs/build/conan.md)
> or the official [Getting Started][3] walkthrough.
## Branches
For a stable release, choose the `master` branch or one of the [tagged
releases](https://github.com/ripple/rippled/releases).
```bash
```
git checkout master
```
For the latest release candidate, choose the `release` branch.
```bash
```
git checkout release
```
For the latest set of untested features, or to contribute, choose the `develop`
branch.
```bash
```
git checkout develop
```
@@ -33,323 +33,176 @@ git checkout develop
See [System Requirements](https://xrpl.org/system-requirements.html).
Building rippled generally requires git, Python, Conan, CMake, and a C++
compiler. Some guidance on setting up such a [C++ development environment can be
found here](./docs/build/environment.md).
Building rippled generally requires git, Python, Conan, CMake, and a C++ compiler. Some guidance on setting up such a [C++ development environment can be found here](./docs/build/environment.md).
- [Python 3.11](https://www.python.org/downloads/), or higher
- [Conan 2.17](https://conan.io/downloads.html)[^1], or higher
- [CMake 3.22](https://cmake.org/download/)[^2], or higher
- [Python 3.7](https://www.python.org/downloads/)
- [Conan 1.60](https://conan.io/downloads.html)[^1]
- [CMake 3.16](https://cmake.org/download/)
[^1]:
It is possible to build with Conan 1.60+, but the instructions are
significantly different, which is why we are not recommending it.
[^2]:
CMake 4 is not yet supported by all dependencies required by this project.
If you are affected by this issue, follow [conan workaround for cmake
4](#workaround-for-cmake-4)
[^1]: It is possible to build with Conan 2.x,
but the instructions are significantly different,
which is why we are not recommending it yet.
Notably, the `conan profile update` command is removed in 2.x.
Profiles must be edited by hand.
`rippled` is written in the C++20 dialect and includes the `<concepts>` header.
The [minimum compiler versions][2] required are:
| Compiler | Version |
| ----------- | --------- |
| GCC | 12 |
| Clang | 16 |
| Apple Clang | 16 |
| MSVC | 19.44[^3] |
| Compiler | Version |
|-------------|---------|
| GCC | 11 |
| Clang | 13 |
| Apple Clang | 13.1.6 |
| MSVC | 19.23 |
### Linux
The Ubuntu Linux distribution has received the highest level of quality
assurance, testing, and support. We also support Red Hat and use Debian
internally.
The Ubuntu operating system has received the highest level of
quality assurance, testing, and support.
Here are [sample instructions for setting up a C++ development environment on
Linux](./docs/build/environment.md#linux).
Here are [sample instructions for setting up a C++ development environment on Linux](./docs/build/environment.md#linux).
### Mac
Many rippled engineers use macOS for development.
Here are [sample instructions for setting up a C++ development environment on
macOS](./docs/build/environment.md#macos).
Here are [sample instructions for setting up a C++ development environment on macOS](./docs/build/environment.md#macos).
### Windows
Windows is used by some engineers for development only.
Windows is not recommended for production use at this time.
[^3]: Windows is not recommended for production use.
- Additionally, 32-bit Windows development is not supported.
[Boost]: https://www.boost.org/
## Steps
### Set Up Conan
After you have a [C++ development environment](./docs/build/environment.md) ready with Git, Python,
Conan, CMake, and a C++ compiler, you may need to set up your Conan profile.
After you have a [C++ development environment](./docs/build/environment.md) ready with Git, Python, Conan, CMake, and a C++ compiler, you may need to set up your Conan profile.
These instructions assume a basic familiarity with Conan and CMake. If you are
unfamiliar with Conan, then please read [this crash course](./docs/build/conan.md) or the official
[Getting Started][3] walkthrough.
These instructions assume a basic familiarity with Conan and CMake.
#### Default profile
If you are unfamiliar with Conan, then please read [this crash course](./docs/build/conan.md) or the official [Getting Started][3] walkthrough.
We recommend that you import the provided `conan/profiles/default` profile:
You'll need at least one Conan profile:
```bash
conan config install conan/profiles/ -tf $(conan config home)/profiles/
```
conan profile new default --detect
```
Update the compiler settings:
```
conan profile update settings.compiler.cppstd=20 default
```
Configure Conan (1.x only) to use recipe revisions:
```
conan config set general.revisions_enabled=1
```
**Linux** developers will commonly have a default Conan [profile][] that compiles
with GCC and links with libstdc++.
If you are linking with libstdc++ (see profile setting `compiler.libcxx`),
then you will need to choose the `libstdc++11` ABI:
```
conan profile update settings.compiler.libcxx=libstdc++11 default
```
Ensure inter-operability between `boost::string_view` and `std::string_view` types:
```
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_BEAST_USE_STD_STRING_VIEW"]' default
conan profile update 'env.CXXFLAGS="-DBOOST_BEAST_USE_STD_STRING_VIEW"' default
```
You can check your Conan profile by running:
```bash
conan profile show
If you have other flags in the `conf.tools.build` or `env.CXXFLAGS` sections, make sure to retain the existing flags and append the new ones. You can check them with:
```
conan profile show default
```
#### Custom profile

If the default profile does not work for you and you do not yet have a Conan
profile, you can create one by running:

```bash
conan profile detect
```
You may need to make changes to the profile to suit your environment. You can
refer to the provided `conan/profiles/default` profile for inspiration, and you
may also need to apply the required [tweaks](#conan-profile-tweaks) to this
default profile.
### Patched recipes

The recipes in Conan Center occasionally need to be patched for compatibility
with the latest version of `rippled`. We maintain a fork of the Conan Center
[here](https://github.com/XRPLF/conan-center-index/) containing the patches.

To ensure our patched recipes are used, you must add our Conan remote at a
higher index than the default Conan Center remote, so it is consulted first. You
can do this by running:

```bash
conan remote add --index 0 xrplf "https://conan.ripplex.io"
```

Alternatively, you can pull the patched recipes into the repository and use them
locally:

```bash
cd external
git init
git remote add origin git@github.com:XRPLF/conan-center-index.git
git sparse-checkout init
git sparse-checkout set recipes/snappy
git sparse-checkout add recipes/soci
git fetch origin master
git checkout master
conan export --version 1.1.10 recipes/snappy/all
conan export --version 4.0.3 recipes/soci/all
rm -rf .git
```

In the case we switch to a newer version of a dependency that still requires a
patch, it will be necessary for you to pull in the changes and re-export the
updated dependencies with the newer version. However, if we switch to a newer
version that no longer requires a patch, no action is required on your part, as
the new recipe will be automatically pulled from the official Conan Center.

### Conan profile tweaks

#### Missing compiler version
If you see an error similar to the following after running `conan profile show`:
```bash
ERROR: Invalid setting '17' is not a valid 'settings.compiler.version' value.
Possible values are ['5.0', '5.1', '6.0', '6.1', '7.0', '7.3', '8.0', '8.1',
'9.0', '9.1', '10.0', '11.0', '12.0', '13', '13.0', '13.1', '14', '14.0', '15',
'15.0', '16', '16.0']
Read "http://docs.conan.io/2/knowledge/faq.html#error-invalid-setting"
```
you need to amend the list of compiler versions in
`$(conan config home)/settings.yml`, by appending the required version number(s)
to the `version` array specific for your compiler. For example:
```yaml
apple-clang:
version:
[
"5.0",
"5.1",
"6.0",
"6.1",
"7.0",
"7.3",
"8.0",
"8.1",
"9.0",
"9.1",
"10.0",
"11.0",
"12.0",
"13",
"13.0",
"13.1",
"14",
"14.0",
"15",
"15.0",
"16",
"16.0",
"17",
"17.0",
]
```
#### Multiple compilers
If you have multiple compilers installed, make sure to select the one to use in
your default Conan configuration **before** running `conan profile detect`, by
setting the `CC` and `CXX` environment variables.
For example, if you are running macOS and have [homebrew
LLVM@18](https://formulae.brew.sh/formula/llvm@18), and want to use it as a
compiler in the new Conan profile:
```bash
export CC=$(brew --prefix llvm@18)/bin/clang
export CXX=$(brew --prefix llvm@18)/bin/clang++
conan profile detect
```
You should also explicitly set the path to the compiler in the profile file,
which helps to avoid errors when `CC` and/or `CXX` are set and disagree with the
selected Conan profile. For example:
```text
[conf]
tools.build:compiler_executables={'c':'/usr/bin/gcc','cpp':'/usr/bin/g++'}
```
#### Multiple profiles
You can manage multiple Conan profiles in the directory
`$(conan config home)/profiles`, for example renaming `default` to a different
name and then creating a new `default` profile for a different compiler.
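For example, a minimal sketch of keeping two profiles side by side (the profile
name `clang` here is arbitrary):

```bash
cd $(conan config home)/profiles
mv default clang       # keep the old profile under a new name
conan profile detect   # generate a fresh default profile
```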
#### Select language

The default profile created by Conan will typically select a different C++
dialect than the C++20 used by this project. You should set `20` in the profile
line starting with `compiler.cppstd=`. For example:
```bash
sed -i.bak -e 's|^compiler\.cppstd=.*$|compiler.cppstd=20|' $(conan config home)/profiles/default
```
#### Select standard library in Linux
**Linux** developers will commonly have a default Conan [profile][] that
compiles with GCC and links with libstdc++. If you are linking with libstdc++
(see profile setting `compiler.libcxx`), then you will need to choose the
`libstdc++11` ABI:
```bash
sed -i.bak -e 's|^compiler\.libcxx=.*$|compiler.libcxx=libstdc++11|' $(conan config home)/profiles/default
```
#### Select architecture and runtime in Windows
**Windows** developers may need to use the x64 native build tools. An easy way
to do that is to run the shortcut "x64 Native Tools Command Prompt" for the
version of Visual Studio that you have installed.
Windows developers must also build `rippled` and its dependencies for the x64
architecture:
```bash
sed -i.bak -e 's|^arch=.*$|arch=x86_64|' $(conan config home)/profiles/default
```
**Windows** developers also must select static runtime:
```bash
sed -i.bak -e 's|^compiler\.runtime=.*$|compiler.runtime=static|' $(conan config home)/profiles/default
```
#### Workaround for CMake 4
If your system CMake is version 4 rather than 3, you may have to configure the
Conan profile to use CMake version 3 for dependencies, by adding the following
two lines to your profile:
```text
[tool_requires]
!cmake/*: cmake/[>=3 <4]
```
This will force Conan to download and use a locally cached CMake 3 version, and
is needed because some of the dependencies used by this project do not support
CMake 4.
#### Clang workaround for grpc
If your compiler is clang, version 19 or later, or apple-clang, version 17 or
later, you may encounter a compilation error while building the `grpc`
dependency:
```text
In file included from .../lib/promise/try_seq.h:26:
.../lib/promise/detail/basic_seq.h:499:38: error: a template argument list is expected after a name prefixed by the template keyword [-Wmissing-template-arg-list-after-template-kw]
499 | Traits::template CallSeqFactory(f_, *cur_, std::move(arg)));
| ^
```
The workaround for this error is to add two lines to your profile:
```text
[conf]
tools.build:cxxflags=['-Wno-missing-template-arg-list-after-template-kw']
```
#### Workaround for gcc 12
If your compiler is gcc, version 12, and you have enabled the `werr` option, you
may encounter a compilation error such as:
```text
/usr/include/c++/12/bits/char_traits.h:435:56: error: 'void* __builtin_memcpy(void*, const void*, long unsigned int)' accessing 9223372036854775810 or more bytes at offsets [2, 9223372036854775807] and 1 may overlap up to 9223372036854775813 bytes at offset -3 [-Werror=restrict]
435 | return static_cast<char_type*>(__builtin_memcpy(__s1, __s2, __n));
| ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~
cc1plus: all warnings being treated as errors
```
The workaround for this error is to add two lines to your profile:
```text
[conf]
tools.build:cxxflags=['-Wno-restrict']
```
#### Workaround for clang 16
If your compiler is clang, version 16, you may encounter a compilation error
such as:
```text
In file included from .../boost/beast/websocket/stream.hpp:2857:
.../boost/beast/websocket/impl/read.hpp:695:17: error: call to 'async_teardown' is ambiguous
async_teardown(impl.role, impl.stream(),
^~~~~~~~~~~~~~
```
The workaround for this error is to add two lines to your profile:
```text
[conf]
tools.build:cxxflags=['-DBOOST_ASIO_DISABLE_CONCEPTS']
```
### Build and Test
@@ -369,67 +222,63 @@ tools.build:cxxflags=['-DBOOST_ASIO_DISABLE_CONCEPTS']
the `install-folder` or `-if` option to every `conan install` command
in the next step.
2. Use conan to generate CMake files for every configuration you want to build:

```
conan install .. --output-folder . --build missing --settings build_type=Release
conan install .. --output-folder . --build missing --settings build_type=Debug
```
For a single-configuration generator, e.g. `Unix Makefiles` or `Ninja`,
you only need to run this command once.
For a multi-configuration generator, e.g. `Visual Studio`, you may want to
run it more than once.
Each of these commands should also have a different `build_type` setting.
A second command with the same `build_type` setting will overwrite the files
generated by the first. You can pass the build type on the command line with
`--settings build_type=$BUILD_TYPE` or in the profile itself,
under the section `[settings]` with the key `build_type`.
If you are using a Microsoft Visual C++ compiler,
then you will need to ensure consistency between the `build_type` setting
and the `compiler.runtime` setting.

When `build_type` is `Release`, `compiler.runtime` should be `MT`.

When `build_type` is `Debug`, `compiler.runtime` should be `MTd`.

```
conan install .. --output-folder . --build missing --settings build_type=Release --settings compiler.runtime=MT
conan install .. --output-folder . --build missing --settings build_type=Debug --settings compiler.runtime=MTd
```
3. Configure CMake and pass the toolchain file generated by Conan, located at
`$OUTPUT_FOLDER/build/generators/conan_toolchain.cmake`.
Single-config generators:

Pass the CMake variable [`CMAKE_BUILD_TYPE`][build_type]
and make sure it matches one of the `build_type` settings
you chose in the previous step.

```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release -Dxrpld=ON -Dtests=ON ..
```

For example, to build Debug, replace "Release" with "Debug" in the command above.
Multi-config generators:

```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -Dxrpld=ON -Dtests=ON ..
```

**Note:** You can pass build options for `rippled` in this step.
4. Build `rippled`.

For a single-configuration generator, it will build whatever configuration
you passed for `CMAKE_BUILD_TYPE`. For a multi-configuration generator, you
must pass the option `--config` to select the build configuration.
Single-config generators:
@@ -449,22 +298,19 @@ tools.build:cxxflags=['-DBOOST_ASIO_DISABLE_CONCEPTS']
Single-config generators:
```
./rippled --unittest --unittest-jobs N
```

Multi-config generators:

```
./Release/rippled --unittest --unittest-jobs N
./Debug/rippled --unittest --unittest-jobs N
```
Replace the `--unittest-jobs` parameter N with the desired unit tests
concurrency. Recommended setting is half of the number of available CPU
cores.
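For example, on a machine with 8 available cores:

```bash
./rippled --unittest --unittest-jobs 4
```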
The location of the `rippled` binary in your build directory depends on your
CMake generator. Pass `--help` to see the rest of the command line options.
## Coverage report
@@ -505,7 +351,7 @@ variable in `cmake`. The specific command line used to run the `gcovr` tool will
displayed if the `CODE_COVERAGE_VERBOSE` variable is set.
By default, the code coverage tool runs parallel unit tests with `--unittest-jobs`
set to the number of available CPU cores. This may cause spurious test
errors on Apple. Developers can override the number of unit test jobs with
the `coverage_test_parallelism` variable in `cmake`.
@@ -521,68 +367,88 @@ cmake --build . --target coverage
After the `coverage` target is completed, the generated coverage report will be
stored inside the build directory, as either of:

- file named `coverage.`_extension_, with a suitable extension for the report format, or
- directory named `coverage`, with the `index.html` and other files inside, for the `html-details` or `html-nested` report formats.
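Putting it together, a sketch of a full coverage run from a fresh build
directory, using the flags documented in the Options table below (`gcovr` must
be installed and on your `PATH`):

```bash
conan install .. --output-folder . --build missing --settings build_type=Debug
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
  -DCMAKE_BUILD_TYPE=Debug -Dcoverage=ON -Dtests=ON -Dxrpld=ON ..
cmake --build . --target coverage
```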
## Options
| Option     | Default Value | Description                                                                 |
| ---------- | ------------- | --------------------------------------------------------------------------- |
| `assert`   | OFF           | Enable assertions.                                                          |
| `coverage` | OFF           | Prepare the coverage report.                                                |
| `san`      | N/A           | Enable a sanitizer with Clang. Choices are `thread` and `address`.          |
| `tests`    | OFF           | Build tests.                                                                |
| `unity`    | OFF           | Configure a unity build.                                                    |
| `xrpld`    | OFF           | Build the xrpld (`rippled`) application, and not just the libxrpl library.  |
| `werr`     | OFF           | Treat compilation warnings as errors.                                       |
| `wextra`   | OFF           | Enable additional compilation warnings.                                     |
[Unity builds][5] may be faster for the first build
(at the cost of much more memory) since they concatenate sources into fewer
translation units. Non-unity builds may be faster for incremental builds,
and can be helpful for detecting `#include` omissions.
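For example, to favor fast incremental rebuilds you might configure a non-unity
debug build, reusing the flags shown earlier:

```bash
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
  -DCMAKE_BUILD_TYPE=Debug -Dunity=OFF -Dxrpld=ON -Dtests=ON ..
```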
## Troubleshooting
### Conan
After any updates or changes to dependencies, you may need to do the following:

1. Remove your build directory.
2. Remove individual libraries from the Conan cache, e.g.

   ```bash
   conan remove 'grpc/*'
   ```

   **or**

   Remove all libraries from the Conan cache:

   ```bash
   conan remove '*'
   ```

3. Re-run [conan export](#patched-recipes) if needed.
4. Re-run [conan install](#build-and-test).
### `protobuf/port_def.inc` file not found

If `cmake --build .` results in an error due to a missing protobuf file, such
as:

```
/rippled/.build/pb-xrpl.libpb/xrpl/proto/ripple.pb.h:10:10: fatal error: 'google/protobuf/port_def.inc' file not found
   10 | #include <google/protobuf/port_def.inc>
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
```

then you might have generated CMake files for a different `build_type` than the
`CMAKE_BUILD_TYPE` you passed to Conan. For example, if you want to build Debug:

1. For conan install, pass `--settings build_type=Debug`
2. For cmake, pass `-DCMAKE_BUILD_TYPE=Debug`

### no std::result_of

If your compiler version is recent enough to have removed `std::result_of` as
part of C++20, e.g. Apple Clang 15.0, then you might need to add a preprocessor
definition to your build:

```
conan profile update 'options.boost:extra_b2_flags="define=BOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'env.CFLAGS="-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"' default
conan profile update 'conf.tools.build:cflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_HAS_STD_INVOKE_RESULT"]' default
```
### call to 'async_teardown' is ambiguous
If you are compiling with an early version of Clang 16, then you might hit
a [regression][6] when compiling C++20 that manifests as an [error in a Boost
header][7]. You can workaround it by adding this preprocessor definition:
```
conan profile update 'env.CXXFLAGS="-DBOOST_ASIO_DISABLE_CONCEPTS"' default
conan profile update 'conf.tools.build:cxxflags+=["-DBOOST_ASIO_DISABLE_CONCEPTS"]' default
```
### recompile with -fPIC
If you get a linker error suggesting that you recompile Boost with
position-independent code, such as:
```
/usr/bin/ld.gold: error: /home/username/.conan/data/boost/1.77.0/_/_/package/.../lib/libboost_container.a(alloc_lib.o):
requires unsupported dynamic reloc 11; recompile with -fPIC
```
Conan most likely downloaded a bad binary distribution of the dependency.
This seems to be a [bug][1] in Conan just for Boost 1.77.0 compiled with GCC
for Linux. The solution is to build the dependency locally by passing
`--build boost` when calling `conan install`.
```
conan install --build boost ...
```
## Add a Dependency
@@ -590,15 +456,16 @@ If you want to experiment with a new package, follow these steps:
1. Search for the package on [Conan Center](https://conan.io/center/).
2. Modify [`conanfile.py`](./conanfile.py):
   - Add a version of the package to the `requires` property.
   - Change any default options for the package by adding them to the
     `default_options` property (with syntax `'$package:$option': $value`).
3. Modify [`CMakeLists.txt`](./CMakeLists.txt):
   - Add a call to `find_package($package REQUIRED)`.
   - Link a library from the package to the target `ripple_libs`
     (search for the existing call to `target_link_libraries(ripple_libs INTERFACE ...)`).
4. Start coding! Don't forget to include whatever headers you need from the package.
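After editing `conanfile.py` and `CMakeLists.txt`, re-run the install, configure,
and build cycle from [Build and Test](#build-and-test); a sketch:

```bash
conan install .. --output-folder . --build missing --settings build_type=Release
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release -Dxrpld=ON -Dtests=ON ..
cmake --build .
```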
[1]: https://github.com/conan-io/conan-center-index/issues/13168
[2]: https://en.cppreference.com/w/cpp/compiler_support/20
[3]: https://docs.conan.io/en/latest/getting_started.html
@@ -25,28 +25,28 @@ more dependencies listed later.
**tl;dr:** The modules listed first are more independent than the modules
listed later.
| Level / Tier | Module(s) |
| ------------ | -------------------------------------------------------------------------------------------------------- |
| 01 | ripple/beast ripple/unity |
| 02 | ripple/basics |
| 03 | ripple/json ripple/crypto |
| 04 | ripple/protocol |
| 05 | ripple/core ripple/conditions ripple/consensus ripple/resource ripple/server |
| 06 | ripple/peerfinder ripple/ledger ripple/nodestore ripple/net |
| 07 | ripple/shamap ripple/overlay |
| 08 | ripple/app |
| 09 | ripple/rpc |
| 10 | ripple/perflog |
| 11 | test/jtx test/beast test/csf |
| 12 | test/unit_test |
| 13 | test/crypto test/conditions test/json test/resource test/shamap test/peerfinder test/basics test/overlay |
| 14 | test |
| 15 | test/net test/protocol test/ledger test/consensus test/core test/server test/nodestore |
| 16 | test/rpc test/app |
(Note that `test` levelization is _much_ less important and _much_ less
strictly enforced than `ripple` levelization, other than the requirement
that `test` code should _never_ be included in `ripple` code.)
## Validation
@@ -59,48 +59,48 @@ the rippled source. The only caveat is that it runs much slower
under Windows than in Linux. It hasn't yet been tested under macOS.
It generates many files of [results](results):
- `rawincludes.txt`: The raw dump of the `#includes`
- `paths.txt`: A second dump grouping the source module
  to the destination module, deduped, and with frequency counts.
- `includes/`: A directory where each file represents a module and
  contains a list of modules and counts that the module _includes_.
- `includedby/`: Similar to `includes/`, but the other way around. Each
  file represents a module and contains a list of modules and counts
  that _include_ the module.
- [`loops.txt`](results/loops.txt): A list of direct loops detected
  between modules as they actually exist, as opposed to how they are
  desired as described above. In a perfect repo, this file will be
  empty.
  This file is committed to the repo, and is used by the [levelization
  Github workflow](../../.github/workflows/levelization.yml) to validate
  that nothing changed.
- [`ordering.txt`](results/ordering.txt): A list showing relationships
  between modules where there are no loops as they actually exist, as
  opposed to how they are desired as described above.
  This file is committed to the repo, and is used by the [levelization
  Github workflow](../../.github/workflows/levelization.yml) to validate
  that nothing changed.
- [`levelization.yml`](../../.github/workflows/levelization.yml)
  Github Actions workflow to test that levelization loops haven't
  changed. Unfortunately, if changes are detected, it can't tell if
  they are improvements or not, so if you have resolved any issues or
  done anything else to improve levelization, run `levelization.sh`,
  and commit the updated results.
The `loops.txt` and `ordering.txt` files relate the modules
using comparison signs, which indicate the number of times each
module is included in the other.

- `A > B` means that A should probably be at a higher level than B,
  because B is included in A significantly more than A is included in B.
  These results can be included in both `loops.txt` and `ordering.txt`.
  Because `ordering.txt` only includes relationships where B is not
  included in A at all, it will only include these types of results.
- `A ~= B` means that A and B are included in each other a different
  number of times, but the values are so close that the script can't
  definitively say that one should be above the other. These results
  will only be included in `loops.txt`.
- `A == B` means that A and B include each other the same number of
  times, so the script has no clue which should be higher. These results
  will only be included in `loops.txt`.
@@ -110,5 +110,5 @@ get those details locally.
1. Run `levelization.sh`
2. Grep the modules in `paths.txt`.
   - For example, if a cycle is found `A ~= B`, simply `grep -w
     A Builds/levelization/results/paths.txt | grep -w B`
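Concretely, investigating a hypothetical `A ~= B` loop from the repository root
might look like this (the module names are placeholders):

```bash
cd Builds/levelization
./levelization.sh
grep -w A results/paths.txt | grep -w B
```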
@@ -21,7 +21,7 @@ mkdir results
includes="$( pwd )/results/rawincludes.txt"
pushd ../..
echo Raw includes:
grep -r '^[ ]*#include.*/.*\.h' include src | \
grep -v boost | tee ${includes}
popd
pushd results
@@ -14,10 +14,10 @@ Loop: xrpld.app xrpld.net
xrpld.app > xrpld.net
Loop: xrpld.app xrpld.overlay
xrpld.overlay > xrpld.app
Loop: xrpld.app xrpld.peerfinder
xrpld.peerfinder ~= xrpld.app
Loop: xrpld.app xrpld.rpc
xrpld.rpc > xrpld.app
@@ -6,7 +6,6 @@ libxrpl.protocol > xrpl.basics
libxrpl.protocol > xrpl.json
libxrpl.protocol > xrpl.protocol
libxrpl.resource > xrpl.basics
libxrpl.resource > xrpl.json
libxrpl.resource > xrpl.resource
libxrpl.server > xrpl.basics
libxrpl.server > xrpl.json
@@ -20,7 +19,6 @@ test.app > xrpl.basics
test.app > xrpld.app
test.app > xrpld.core
test.app > xrpld.ledger
test.app > xrpld.nodestore
test.app > xrpld.overlay
test.app > xrpld.rpc
test.app > xrpl.json
@@ -43,7 +41,6 @@ test.consensus > xrpl.basics
test.consensus > xrpld.app
test.consensus > xrpld.consensus
test.consensus > xrpld.ledger
test.consensus > xrpl.json
test.core > test.jtx
test.core > test.toplevel
test.core > test.unit_test
@@ -60,6 +57,7 @@ test.json > test.jtx
test.json > xrpl.json
test.jtx > xrpl.basics
test.jtx > xrpld.app
test.jtx > xrpld.consensus
test.jtx > xrpld.core
test.jtx > xrpld.ledger
test.jtx > xrpld.net
@@ -83,7 +81,6 @@ test.nodestore > xrpld.core
test.nodestore > xrpld.nodestore
test.nodestore > xrpld.unity
test.overlay > test.jtx
test.overlay > test.toplevel
test.overlay > test.unit_test
test.overlay > xrpl.basics
test.overlay > xrpld.app
@@ -132,7 +129,6 @@ test.shamap > xrpl.protocol
test.toplevel > test.csf
test.toplevel > xrpl.json
test.unit_test > xrpl.basics
tests.libxrpl > xrpl.basics
xrpl.json > xrpl.basics
xrpl.protocol > xrpl.basics
xrpl.protocol > xrpl.json
@@ -160,6 +156,7 @@ xrpld.core > xrpl.basics
xrpld.core > xrpl.json
xrpld.core > xrpl.protocol
xrpld.ledger > xrpl.basics
xrpld.ledger > xrpld.core
xrpld.ledger > xrpl.json
xrpld.ledger > xrpl.protocol
xrpld.net > xrpl.basics
@@ -184,6 +181,7 @@ xrpld.peerfinder > xrpld.core
xrpld.peerfinder > xrpl.protocol
xrpld.perflog > xrpl.basics
xrpld.perflog > xrpl.json
xrpld.perflog > xrpl.protocol
xrpld.rpc > xrpl.basics
xrpld.rpc > xrpld.core
xrpld.rpc > xrpld.ledger
@@ -16,36 +16,18 @@ set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
if(CMAKE_CXX_COMPILER_ID MATCHES "GNU")
# GCC-specific fixes
add_compile_options(-Wno-unknown-pragmas -Wno-subobject-linkage)
# -Wno-subobject-linkage can be removed when we upgrade GCC version to at least 13.3
elseif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
# Clang-specific fixes
add_compile_options(-Wno-unknown-warning-option) # Ignore unknown warning options
elseif(MSVC)
# MSVC-specific fixes
add_compile_options(/wd4068) # Ignore unknown pragmas
endif()
set(Boost_NO_BOOST_CMAKE ON)
# make GIT_COMMIT_HASH define available to all sources
find_package(Git)
if(Git_FOUND)
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git rev-parse HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gch)
if(gch)
set(GIT_COMMIT_HASH "${gch}")
message(STATUS gch: ${GIT_COMMIT_HASH})
add_definitions(-DGIT_COMMIT_HASH="${GIT_COMMIT_HASH}")
endif()
execute_process(COMMAND ${GIT_EXECUTABLE} --git-dir=${CMAKE_CURRENT_SOURCE_DIR}/.git rev-parse --abbrev-ref HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE gb)
if(gb)
set(GIT_BRANCH "${gb}")
message(STATUS gb: ${GIT_BRANCH})
add_definitions(-DGIT_BRANCH="${GIT_BRANCH}")
endif()
endif() #git
if(thread_safety_analysis)
@@ -90,11 +72,6 @@ set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2
)
set(SECP256K1_INSTALL TRUE)
set(SECP256K1_BUILD_BENCHMARK FALSE)
set(SECP256K1_BUILD_TESTS FALSE)
set(SECP256K1_BUILD_EXHAUSTIVE_TESTS FALSE)
set(SECP256K1_BUILD_CTIME_TESTS FALSE)
set(SECP256K1_BUILD_EXAMPLES FALSE)
add_subdirectory(external/secp256k1)
add_library(secp256k1::secp256k1 ALIAS secp256k1)
add_subdirectory(external/ed25519-donna)
@@ -108,6 +85,17 @@ find_package(LibArchive REQUIRED)
find_package(SOCI REQUIRED)
find_package(SQLite3 REQUIRED)
# https://conan.io/center/recipes/wasmedge?version=0.9.0
option(wasmedge "Enable WASM" ON)
if(wasmedge)
find_package(wasmedge REQUIRED)
set_target_properties(wasmedge::wasmedge PROPERTIES
INTERFACE_COMPILE_DEFINITIONS RIPPLE_WASMEDGE_AVAILABLE=1
)
target_link_libraries(ripple_libs INTERFACE wasmedge::wasmedge)
# include(deps/WasmEdge)
endif()
option(rocksdb "Enable RocksDB" ON)
if(rocksdb)
find_package(RocksDB REQUIRED)
@@ -149,8 +137,3 @@ set(PROJECT_EXPORT_SET RippleExports)
include(RippledCore)
include(RippledInstall)
include(RippledValidatorKeys)
if(tests)
include(CTest)
add_subdirectory(src/tests/libxrpl)
endif()
File diff suppressed because it is too large.
@@ -1,4 +1,4 @@
ISC License
Copyright (c) 2011, Arthur Britto, David Schwartz, Jed McCaleb, Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant.
Copyright (c) 2012-2020, the XRP Ledger developers.
@@ -14,3 +14,4 @@ ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
@@ -1,23 +1,19 @@
[![codecov](https://codecov.io/gh/XRPLF/rippled/graph/badge.svg?token=WyFr5ajq3O)](https://codecov.io/gh/XRPLF/rippled)
# The XRP Ledger
The [XRP Ledger](https://xrpl.org/) is a decentralized cryptographic ledger powered by a network of peer-to-peer nodes. The XRP Ledger uses a novel Byzantine Fault Tolerant consensus algorithm to settle and record transactions in a secure distributed database without a central operator.
## XRP
[XRP](https://xrpl.org/xrp.html) is a public, counterparty-free asset native to the XRP Ledger, and is designed to bridge the many different currencies in use worldwide. XRP is traded on the open-market and is available for anyone to access. The XRP Ledger was created in 2012 with a finite supply of 100 billion units of XRP.
## rippled
The server software that powers the XRP Ledger is called `rippled` and is available in this repository under the permissive [ISC open-source license](LICENSE.md). The `rippled` server software is written primarily in C++ and runs on a variety of platforms. The `rippled` server software can run in several modes depending on its [configuration](https://xrpl.org/rippled-server-modes.html).
If you are interested in running an **API Server** (including a **Full History Server**), take a look at [Clio](https://github.com/XRPLF/clio). (rippled Reporting Mode has been replaced by Clio.)
### Build from Source
- [Read the build instructions in `BUILD.md`](BUILD.md)
- If you encounter any issues, please [open an issue](https://github.com/XRPLF/rippled/issues)
## Key Features of the XRP Ledger
@@ -37,6 +33,7 @@ If you are interested in running an **API Server** (including a **Full History S
[Modern Features for Smart Contracts]: https://xrpl.org/xrp-ledger-overview.html#modern-features-for-smart-contracts
[On-Ledger Decentralized Exchange]: https://xrpl.org/xrp-ledger-overview.html#on-ledger-decentralized-exchange
## Source Code
Here are some good places to start learning the source code:
@@ -48,7 +45,7 @@ Here are some good places to start learning the source code:
### Repository Contents
| Folder | Contents |
| :--------- | :----------------------------------------------- |
| `./bin` | Scripts and data files for Ripple integrators. |
| `./Builds` | Platform-specific guides for building `rippled`. |
| `./docs` | Source documentation files and doxygen config. |
@@ -58,14 +55,15 @@ Here are some good places to start learning the source code:
Some of the directories under `src` are external repositories included using
git-subtree. See those directories' README files for more details.
## Additional Documentation
- [XRP Ledger Dev Portal](https://xrpl.org/)
- [Setup and Installation](https://xrpl.org/install-rippled.html)
- [Source Documentation (Doxygen)](https://xrplf.github.io/rippled/)
## See Also
- [Clio API Server for the XRP Ledger](https://github.com/XRPLF/clio)
- [Mailing List for Release Announcements](https://groups.google.com/g/ripple-server)
- [Learn more about the XRP Ledger (YouTube)](https://www.youtube.com/playlist?list=PLJQ55Tj1hIVZtJ_JdTvSum2qMTsedWkNi)
RELEASENOTES.md (new file, 4659 lines): file diff suppressed because it is too large.
@@ -2,6 +2,7 @@
For more details on operating an XRP Ledger server securely, please visit https://xrpl.org/manage-the-rippled-server.html.
# Security Policy
## Supported Versions
@@ -76,14 +77,13 @@ The amount paid varies dramatically. Vulnerabilities that are harmless on their
To report a qualifying bug, please send a detailed report to:
| Email Address | bugs@ripple.com                                      |
| :-----------: | :--------------------------------------------------- |
| Short Key ID  | `0xC57929BE`                                          |
| Long Key ID   | `0xCD49A0AFC57929BE`                                  |
| Fingerprint   | `24E6 3B02 37E0 FA9C 5E96 8974 CD49 A0AF C579 29BE`   |

The full PGP key for this address, which is also available on several key servers (e.g. on [keyserver.ubuntu.com](https://keyserver.ubuntu.com)), is:
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFUwGHYBEAC0wpGpBPkd8W1UdQjg9+cEFzeIEJRaoZoeuJD8mofwI5Ejnjdt
bin/browser.js (new executable file, 470 lines)
@@ -0,0 +1,470 @@
#!/usr/bin/node
//
// ledger?l=L
// transaction?h=H
// ledger_entry?l=L&h=H
// account?l=L&a=A
// directory?l=L&dir_root=H&i=I
// directory?l=L&o=A&i=I // owner directory
// offer?l=L&offer=H
// offer?l=L&account=A&i=I
// ripple_state=l=L&a=A&b=A&c=C
// account_lines?l=L&a=A
//
// A=address
// C=currency 3 letter code
// H=hash
// I=index
// L=current | closed | validated | index | hash
//
var async = require("async");
var extend = require("extend");
var http = require("http");
var url = require("url");
var Remote = require("ripple-lib").Remote;
var program = process.argv[1];
var httpd_response = function (res, opts) {
var self=this;
res.statusCode = opts.statusCode;
res.end(
"<HTML>"
+ "<HEAD><TITLE>Title</TITLE></HEAD>"
+ "<BODY BACKGROUND=\"#FFFFFF\">"
+ "State:" + self.state
+ "<UL>"
+ "<LI><A HREF=\"/\">home</A>"
+ "<LI>" + html_link('r4EM4gBQfr1QgQLXSPF4r7h84qE9mb6iCC')
// + "<LI><A HREF=\""+test+"\">rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh</A>"
+ "<LI><A HREF=\"/ledger\">ledger</A>"
+ "</UL>"
+ (opts.body || '')
+ '<HR><PRE>'
+ (opts.url || '')
+ '</PRE>'
+ "</BODY>"
+ "</HTML>"
);
};
var html_link = function (generic) {
return '<A HREF="' + build_uri({ type: 'account', account: generic}) + '">' + generic + '</A>';
};
// Build a link to a type.
var build_uri = function (params, opts) {
var c;
if (params.type === 'account') {
c = {
pathname: 'account',
query: {
l: params.ledger,
a: params.account,
},
};
} else if (params.type === 'ledger') {
c = {
pathname: 'ledger',
query: {
l: params.ledger,
},
};
} else if (params.type === 'transaction') {
c = {
pathname: 'transaction',
query: {
h: params.hash,
},
};
} else {
c = {};
}
opts = opts || {};
c.protocol = "http";
c.hostname = opts.hostname || self.base.hostname;
c.port = opts.port || self.base.port;
return url.format(c);
};
var build_link = function (item, link) {
console.log(link);
return "<A HREF=" + link + ">" + item + "</A>";
};
var rewrite_field = function (type, obj, field, opts) {
if (field in obj) {
obj[field] = rewrite_type(type, obj[field], opts);
}
};
var rewrite_type = function (type, obj, opts) {
if ('amount' === type) {
if ('string' === typeof obj) {
// XRP.
return '<B>' + obj + '</B>';
} else {
rewrite_field('address', obj, 'issuer', opts);
return obj;
}
}
if ('address' === type) {
return build_link(
obj,
build_uri({
type: 'account',
account: obj
}, opts)
);
}
else if ('ledger' === type) {
return build_link(
obj,
build_uri({
type: 'ledger',
ledger: obj,
}, opts)
);
}
else if ('node' === type) {
// A node
if ('PreviousTxnID' in obj)
obj.PreviousTxnID = rewrite_type('transaction', obj.PreviousTxnID, opts);
if ('Offer' === obj.LedgerEntryType) {
if ('NewFields' in obj) {
if ('TakerGets' in obj.NewFields)
obj.NewFields.TakerGets = rewrite_type('amount', obj.NewFields.TakerGets, opts);
if ('TakerPays' in obj.NewFields)
obj.NewFields.TakerPays = rewrite_type('amount', obj.NewFields.TakerPays, opts);
}
}
obj.LedgerEntryType = '<B>' + obj.LedgerEntryType + '</B>';
return obj;
}
else if ('transaction' === type) {
// Reference to a transaction.
return build_link(
obj,
build_uri({
type: 'transaction',
hash: obj
}, opts)
);
}
return 'ERROR: ' + type;
};
var rewrite_object = function (obj, opts) {
var out = extend({}, obj);
rewrite_field('address', out, 'Account', opts);
rewrite_field('ledger', out, 'parent_hash', opts);
rewrite_field('ledger', out, 'ledger_index', opts);
rewrite_field('ledger', out, 'ledger_current_index', opts);
rewrite_field('ledger', out, 'ledger_hash', opts);
if ('ledger' in obj) {
// It's a ledger header.
out.ledger = rewrite_object(out.ledger, opts);
if ('ledger_hash' in out.ledger)
out.ledger.ledger_hash = '<B>' + out.ledger.ledger_hash + '</B>';
delete out.ledger.hash;
delete out.ledger.totalCoins;
}
if ('TransactionType' in obj) {
// It's a transaction.
out.TransactionType = '<B>' + obj.TransactionType + '</B>';
rewrite_field('amount', out, 'TakerGets', opts);
rewrite_field('amount', out, 'TakerPays', opts);
rewrite_field('ledger', out, 'inLedger', opts);
out.meta.AffectedNodes = out.meta.AffectedNodes.map(function (node) {
var kind = 'CreatedNode' in node
? 'CreatedNode'
: 'ModifiedNode' in node
? 'ModifiedNode'
: 'DeletedNode' in node
? 'DeletedNode'
: undefined;
if (kind) {
node[kind] = rewrite_type('node', node[kind], opts);
}
return node;
});
}
else if ('node' in obj && 'LedgerEntryType' in obj.node) {
// It's a ledger entry.
if (obj.node.LedgerEntryType === 'AccountRoot') {
rewrite_field('address', out.node, 'Account', opts);
rewrite_field('transaction', out.node, 'PreviousTxnID', opts);
rewrite_field('ledger', out.node, 'PreviousTxnLgrSeq', opts);
}
out.node.LedgerEntryType = '<B>' + out.node.LedgerEntryType + '</B>';
}
return out;
};
var augment_object = function (obj, opts, done) {
if (obj.node.LedgerEntryType == 'AccountRoot') {
var tx_hash = obj.node.PreviousTxnID;
var tx_ledger = obj.node.PreviousTxnLgrSeq;
obj.history = [];
async.whilst(
function () { return tx_hash; },
function (callback) {
// console.log("augment_object: request: %s %s", tx_hash, tx_ledger);
opts.remote.request_tx(tx_hash)
.on('success', function (m) {
tx_hash = undefined;
tx_ledger = undefined;
//console.log("augment_object: ", JSON.stringify(m));
m.meta.AffectedNodes.filter(function(n) {
// console.log("augment_object: ", JSON.stringify(n));
// if (n.ModifiedNode)
// console.log("augment_object: %s %s %s %s %s %s/%s", 'ModifiedNode' in n, n.ModifiedNode && (n.ModifiedNode.LedgerEntryType === 'AccountRoot'), n.ModifiedNode && n.ModifiedNode.FinalFields && (n.ModifiedNode.FinalFields.Account === obj.node.Account), Object.keys(n)[0], n.ModifiedNode && (n.ModifiedNode.LedgerEntryType), obj.node.Account, n.ModifiedNode && n.ModifiedNode.FinalFields && n.ModifiedNode.FinalFields.Account);
// if ('ModifiedNode' in n && n.ModifiedNode.LedgerEntryType === 'AccountRoot')
// {
// console.log("***: ", JSON.stringify(m));
// console.log("***: ", JSON.stringify(n));
// }
return 'ModifiedNode' in n
&& n.ModifiedNode.LedgerEntryType === 'AccountRoot'
&& n.ModifiedNode.FinalFields
&& n.ModifiedNode.FinalFields.Account === obj.node.Account;
})
.forEach(function (n) {
tx_hash = n.ModifiedNode.PreviousTxnID;
tx_ledger = n.ModifiedNode.PreviousTxnLgrSeq;
obj.history.push({
tx_hash: tx_hash,
tx_ledger: tx_ledger
});
console.log("augment_object: next: %s %s", tx_hash, tx_ledger);
});
callback();
})
.on('error', function (m) {
callback(m);
})
.request();
},
function (err) {
if (err) {
done();
}
else {
async.forEach(obj.history, function (o, callback) {
opts.remote.request_account_info(obj.node.Account)
.ledger_index(o.tx_ledger)
.on('success', function (m) {
//console.log("augment_object: ", JSON.stringify(m));
o.Balance = m.account_data.Balance;
// o.account_data = m.account_data;
callback();
})
.on('error', function (m) {
o.error = m;
callback();
})
.request();
},
function (err) {
done(err);
});
}
});
}
else {
done();
}
};
if (process.argv.length < 4 || process.argv.length > 7) {
console.log("Usage: %s ws_ip ws_port [<ip> [<port> [<start>]]]", program);
}
else {
var ws_ip = process.argv[2];
var ws_port = process.argv[3];
var ip = process.argv.length > 4 ? process.argv[4] : "127.0.0.1";
var port = process.argv.length > 5 ? process.argv[5] : "8080";
// console.log("START");
var self = this;
var remote = (new Remote({
websocket_ip: ws_ip,
websocket_port: ws_port,
trace: false
}))
.on('state', function (m) {
console.log("STATE: %s", m);
self.state = m;
})
// .once('ledger_closed', callback)
.connect()
;
self.base = {
hostname: ip,
port: port,
remote: remote,
};
// console.log("SERVE");
var server = http.createServer(function (req, res) {
var input = "";
req.setEncoding();
req.on('data', function (buffer) {
// console.log("DATA: %s", buffer);
input = input + buffer;
});
req.on('end', function () {
// console.log("URL: %s", req.url);
// console.log("HEADERS: %s", JSON.stringify(req.headers, undefined, 2));
var _parsed = url.parse(req.url, true);
var _url = JSON.stringify(_parsed, undefined, 2);
// console.log("HEADERS: %s", JSON.stringify(_parsed, undefined, 2));
if (_parsed.pathname === "/account") {
var request = remote
.request_ledger_entry('account_root')
.ledger_index(-1)
.account_root(_parsed.query.a)
.on('success', function (m) {
// console.log("account_root: %s", JSON.stringify(m, undefined, 2));
augment_object(m, self.base, function() {
httpd_response(res,
{
statusCode: 200,
url: _url,
body: "<PRE>"
+ JSON.stringify(rewrite_object(m, self.base), undefined, 2)
+ "</PRE>"
});
});
})
.request();
} else if (_parsed.pathname === "/ledger") {
var request = remote
.request_ledger(undefined, { expand: true, transactions: true })
.on('success', function (m) {
// console.log("Ledger: %s", JSON.stringify(m, undefined, 2));
httpd_response(res,
{
statusCode: 200,
url: _url,
body: "<PRE>"
+ JSON.stringify(rewrite_object(m, self.base), undefined, 2)
+"</PRE>"
});
})
if (_parsed.query.l && _parsed.query.l.length === 64) {
request.ledger_hash(_parsed.query.l);
}
else if (_parsed.query.l) {
request.ledger_index(Number(_parsed.query.l));
}
else {
request.ledger_index(-1);
}
request.request();
} else if (_parsed.pathname === "/transaction") {
var request = remote
.request_tx(_parsed.query.h)
// .request_transaction_entry(_parsed.query.h)
// .ledger_select(_parsed.query.l)
.on('success', function (m) {
// console.log("transaction: %s", JSON.stringify(m, undefined, 2));
httpd_response(res,
{
statusCode: 200,
url: _url,
body: "<PRE>"
+ JSON.stringify(rewrite_object(m, self.base), undefined, 2)
+"</PRE>"
});
})
.on('error', function (m) {
httpd_response(res,
{
statusCode: 200,
url: _url,
body: "<PRE>"
+ 'ERROR: ' + JSON.stringify(m, undefined, 2)
+"</PRE>"
});
})
.request();
} else {
var test = build_uri({
type: 'account',
ledger: 'closed',
account: 'rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh',
}, self.base);
httpd_response(res,
{
statusCode: req.url === "/" ? 200 : 404,
url: _url,
});
}
});
});
server.listen(port, ip, undefined,
function () {
console.log("Listening at: http://%s:%s", ip, port);
});
}
// vim:sw=2:sts=2:ts=8:et
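For instance, a local run pointed at a rippled WebSocket port might look like
this, assuming the `ripple-lib`, `async`, and `extend` packages are installed
(the address and port are placeholders):

```bash
node bin/browser.js 127.0.0.1 5006
```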
bin/debug_local_sign.js (new file, 64 lines)
@@ -0,0 +1,64 @@
var ripple = require('ripple-lib');
var v = {
seed: "snoPBrXtMeMyMHUVTgbuqAfg1SUTb",
addr: "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"
};
var remote = ripple.Remote.from_config({
"trusted" : true,
"websocket_ip" : "127.0.0.1",
"websocket_port" : 5006,
"websocket_ssl" : false,
"local_signing" : true
});
var tx_json = {
"Account" : v.addr,
"Amount" : "10000000",
"Destination" : "rEu2ULPiEQm1BAL8pYzmXnNX1aFX9sCks",
"Fee" : "10",
"Flags" : 0,
"Sequence" : 3,
"TransactionType" : "Payment"
//"SigningPubKey": '0396941B22791A448E5877A44CE98434DB217D6FB97D63F0DAD23BE49ED45173C9'
};
remote.on('connected', function () {
var req = remote.request_sign(v.seed, tx_json);
req.message.debug_signing = true;
req.on('success', function (result) {
console.log("SERVER RESULT");
console.log(result);
var sim = {};
var tx = remote.transaction();
tx.tx_json = tx_json;
tx._secret = v.seed;
tx.complete();
var unsigned = tx.serialize().to_hex();
tx.sign();
sim.tx_blob = tx.serialize().to_hex();
sim.tx_json = tx.tx_json;
sim.tx_signing_hash = tx.signing_hash().to_hex();
sim.tx_unsigned = unsigned;
console.log("\nLOCAL RESULT");
console.log(sim);
remote.connect(false);
});
req.on('error', function (err) {
if (err.error === "remoteError" && err.remote.error === "srcActNotFound") {
console.log("Please fund account "+v.addr+" to run this test.");
} else {
console.log('error', err);
}
remote.connect(false);
});
req.request();
});
remote.connect();
bin/email_hash.js (new executable file, 18 lines)
@@ -0,0 +1,18 @@
#!/usr/bin/node
//
// Returns a Gravatar style hash as per: http://en.gravatar.com/site/implement/hash/
//
if (3 != process.argv.length) {
process.stderr.write("Usage: " + process.argv[1] + " email_address\n\nReturns gravatar style hash.\n");
process.exit(1);
} else {
var md5 = require('crypto').createHash('md5');
md5.update(process.argv[2].trim().toLowerCase());
process.stdout.write(md5.digest('hex') + "\n");
}
// vim:sw=2:sts=2:ts=8:et
bin/flash_policy.js (new executable file, 31 lines)
@@ -0,0 +1,31 @@
#!/usr/bin/node
//
// This program allows IE 9 ripple-clients to make websocket connections to
// rippled using flash. As IE 9 does not have websocket support, this is
// required if you wish to support IE 9 ripple-clients.
//
// http://www.lightsphere.com/dev/articles/flash_socket_policy.html
//
// For better security, be sure to set the Port below to the port of your
// [websocket_public_port].
//
var net = require("net"),
port = "*",
domains = ["*:"+port]; // Domain:Port
net.createServer(
function(socket) {
socket.write("<?xml version='1.0' ?>\n");
socket.write("<!DOCTYPE cross-domain-policy SYSTEM 'http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd'>\n");
socket.write("<cross-domain-policy>\n");
domains.forEach(
function(domain) {
var parts = domain.split(':');
socket.write("\t<allow-access-from domain='" + parts[0] + "' to-ports='" + parts[1] + "' />\n");
}
);
socket.write("</cross-domain-policy>\n");
socket.end();
}
).listen(843);
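Note that port 843 is a privileged port on Unix-like systems, so running this
script typically requires elevated privileges, e.g.:

```bash
sudo node bin/flash_policy.js
```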
bin/getRippledInfo (new executable file, 150 lines)
@@ -0,0 +1,150 @@
#!/usr/bin/env bash
# This script generates information about your rippled installation
# and system. It can be used to help debug issues that you may face
# in your installation. While this script endeavors to not display any
# sensitive information, it is recommended that you read the output
# before sharing with any third parties.
rippled_exe=/opt/ripple/bin/rippled
conf_file=/etc/opt/ripple/rippled.cfg
while getopts ":e:c:" opt; do
case $opt in
e)
rippled_exe=${OPTARG}
;;
c)
conf_file=${OPTARG}
;;
\?)
echo "Invalid option: -$OPTARG"
exit 1
esac
done
tmp_loc=$(mktemp -d --tmpdir ripple_info.XXXXX)
chmod 751 ${tmp_loc}
awk_prog=${tmp_loc}/cfg.awk
summary_out=${tmp_loc}/rippled_info.md
printf "# rippled report info\n\n> generated at %s\n" "$(date -R)" > ${summary_out}
function log_section {
printf "\n## %s\n" "$*" >> ${summary_out}
while read -r l; do
echo " $l" >> ${summary_out}
done </dev/stdin
}
function join_by {
local IFS="$1"; shift; echo "$*";
}
if [[ -f ${conf_file} ]] ; then
exclude=( ips ips_fixed node_seed validation_seed validator_token )
cleaned_conf=${tmp_loc}/cleaned_rippled_cfg.txt
cat << 'EOP' >> ${awk_prog}
BEGIN {FS="[[:space:]]*=[[:space:]]*"; skip=0; db_path=""; print > OUT_FILE; split(exl,exa,"|")}
/^#/ {next}
save==2 && /^[[:space:]]*$/ {next}
/^\[.+\]$/ {
section=tolower(gensub(/^\[[[:space:]]*([a-zA-Z_]+)[[:space:]]*\]$/, "\\1", "g"))
skip = 0
for (i in exa) {
if (section == exa[i])
skip = 1
}
if (section == "database_path")
save = 1
}
skip==1 {next}
save==2 {save=0; db_path=$0}
save==1 {save=2}
$1 ~ /password/ {$0=$1"=<redacted>"}
{print >> OUT_FILE}
END {print db_path}
EOP
db=$(\
sed -r -e 's/\<s[[:alnum:]]{28}\>/<redactedsecret>/g;s/^[[:space:]]*//;s/[[:space:]]*$//' ${conf_file} |\
awk -v OUT_FILE=${cleaned_conf} -v exl="$(join_by '|' "${exclude[@]}")" -f ${awk_prog})
rm ${awk_prog}
cat ${cleaned_conf} | log_section "cleaned config file"
rm ${cleaned_conf}
echo "${db}" | log_section "database path"
df ${db} | log_section "df: database"
fi
# Send output from this script to a log file
## this captures any messages
## or errors from the script itself
log_file=${tmp_loc}/get_info.log
exec 3>&1 1>>${log_file} 2>&1
## Send all stdout files to /tmp
if [[ -x ${rippled_exe} ]] ; then
pgrep rippled && \
${rippled_exe} --conf ${conf_file} \
-- server_info | log_section "server info"
fi
cat /proc/meminfo | log_section "meminfo"
cat /proc/swaps | log_section "swap space"
ulimit -a | log_section "ulimit"
if command -v lshw >/dev/null 2>&1 ; then
lshw 2>/dev/null | log_section "hardware info"
else
lscpu > ${tmp_loc}/hw_info.txt
hwinfo >> ${tmp_loc}/hw_info.txt
lspci >> ${tmp_loc}/hw_info.txt
lsblk >> ${tmp_loc}/hw_info.txt
cat ${tmp_loc}/hw_info.txt | log_section "hardware info"
rm ${tmp_loc}/hw_info.txt
fi
if command -v iostat >/dev/null 2>&1 ; then
iostat -t -d -x 2 6 | log_section "iostat"
fi
df -h | log_section "free disk space"
drives=($(df | awk '$1 ~ /^\/dev\// {print $1}' | xargs -n 1 basename))
block_devs=($(ls /sys/block/))
for d in "${drives[@]}"; do
for dev in "${block_devs[@]}"; do
#echo "D: [$d], DEV: [$dev]"
if [[ $d =~ $dev ]]; then
# this file (if exists) has 0 for SSD and 1 for HDD
if [[ "$(cat /sys/block/${dev}/queue/rotational 2>/dev/null)" == 0 ]] ; then
echo "${d} : SSD" >> ${tmp_loc}/is_ssd.txt
else
echo "${d} : NO SSD" >> ${tmp_loc}/is_ssd.txt
fi
fi
done
done
if [[ -f ${tmp_loc}/is_ssd.txt ]] ; then
cat ${tmp_loc}/is_ssd.txt | log_section "SSD"
rm ${tmp_loc}/is_ssd.txt
fi
cat ${log_file} | log_section "script log"
cat << MSG | tee /dev/fd/3
####################################################
rippled info has been gathered. Please copy the
contents of ${summary_out}
to a github gist at https://gist.github.com/
PLEASE REVIEW THIS FILE FOR ANY SENSITIVE DATA
BEFORE POSTING! We have tried our best to omit
any sensitive information from this file, but you
should verify before posting.
####################################################
MSG
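The executable and config locations default to `/opt/ripple/bin/rippled` and
`/etc/opt/ripple/rippled.cfg`, and can be overridden with `-e` and `-c`; the
paths in this example are placeholders:

```bash
bin/getRippledInfo -e /usr/local/bin/rippled -c ~/.config/ripple/rippled.cfg
```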
@@ -1,86 +0,0 @@
#!/bin/bash
if [[ $# -ne 1 || "$1" == "--help" || "$1" == "-h" ]]
then
name=$( basename $0 )
cat <<- USAGE
Usage: $name <username>
Where <username> is the Github username of the upstream repo. e.g. XRPLF
USAGE
exit 0
fi
# Create upstream remotes based on origin
user="$1"
# Get the origin URL. Expect it be an SSH-style URL
origin=$( git remote get-url origin )
if [[ "${origin}" == "" ]]
then
echo Invalid origin remote >&2
exit 1
fi
# echo "Origin: ${origin}"
# Parse the origin
ifs_orig="${IFS}"
IFS=':' read remote originpath <<< "${origin}"
# echo "Remote: ${remote}, Originpath: ${originpath}"
IFS='@' read sshuser server <<< "${remote}"
# echo "SSHUser: ${sshuser}, Server: ${server}"
IFS='/' read originuser repo <<< "${originpath}"
# echo "Originuser: ${originuser}, Repo: ${repo}"
if [[ "${sshuser}" == "" || "${server}" == "" || "${originuser}" == ""
|| "${repo}" == "" ]]
then
echo "Can't parse origin URL: ${origin}" >&2
exit 1
fi
upstream="https://${server}/${user}/${repo}"
upstreampush="${remote}:${user}/${repo}"
upstreamgroup="upstream upstream-push"
current=$( git remote get-url upstream 2>/dev/null )
currentpush=$( git remote get-url upstream-push 2>/dev/null )
currentgroup=$( git config remotes.upstreams )
if [[ "${current}" == "${upstream}" ]]
then
echo "Upstream already set up correctly. Skip"
elif [[ -n "${current}" && "${current}" != "${upstream}" &&
"${current}" != "${upstreampush}" ]]
then
echo "Upstream already set up as: ${current}. Skip"
else
if [[ "${current}" == "${upstreampush}" ]]
then
echo "Upstream set to dangerous push URL. Update."
_run git remote rename upstream upstream-push || \
_run git remote remove upstream
currentpush=$( git remote get-url upstream-push 2>/dev/null )
fi
_run git remote add upstream "${upstream}"
fi
if [[ "${currentpush}" == "${upstreampush}" ]]
then
echo "upstream-push already set up correctly. Skip"
elif [[ -n "${currentpush}" && "${currentpush}" != "${upstreampush}" ]]
then
echo "upstream-push already set up as: ${currentpush}. Skip"
else
_run git remote add upstream-push "${upstreampush}"
fi
if [[ "${currentgroup}" == "${upstreamgroup}" ]]
then
echo "Upstreams group already set up correctly. Skip"
elif [[ -n "${currentgroup}" && "${currentgroup}" != "${upstreamgroup}" ]]
then
echo "Upstreams group already set up as: ${currentgroup}. Skip"
else
_run git config --add remotes.upstreams "${upstreamgroup}"
fi
_run git fetch --jobs=$(nproc) upstreams
exit 0
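For reference, a hedged usage sketch; this deletion diff does not show the script's filename, so the path below is hypothetical:

```bash
# Hypothetical filename; substitute the script's real path.
bin/git/setup-upstreams.sh XRPLF
# Afterwards, fetches go through "upstream" and pushes through "upstream-push".
```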


@@ -1,69 +0,0 @@
#!/bin/bash
if [[ $# -lt 3 || "$1" == "--help" || "$1" = "-h" ]]
then
name=$( basename $0 )
cat <<- USAGE
Usage: $name workbranch base/branch user/branch [user/branch [...]]
* workbranch will be created locally from base/branch
* base/branch and user/branch may be specified as user:branch to allow
easy copying from Github PRs
* Remotes for each user must already be set up
USAGE
exit 0
fi
work="$1"
shift
branches=( $( echo "${@}" | sed "s/:/\//" ) )
base="${branches[0]}"
unset branches[0]
set -e
users=()
for b in "${branches[@]}"
do
users+=( $( echo $b | cut -d/ -f1 ) )
done
users=( $( printf '%s\n' "${users[@]}" | sort -u ) )
git fetch --multiple upstreams "${users[@]}"
git checkout -B "$work" --no-track "$base"
for b in "${branches[@]}"
do
git merge --squash "${b}"
git commit -S # Use the commit message provided on the PR
done
# Make sure the commits look right
git log --show-signature "$base..HEAD"
parts=( $( echo $base | sed "s/\// /" ) )
repo="${parts[0]}"
b="${parts[1]}"
push=$repo
if [[ "$push" == "upstream" ]]
then
push="upstream-push"
fi
if [[ "$repo" == "upstream" ]]
then
repo="upstreams"
fi
cat << PUSH
-------------------------------------------------------------------
This script will not push. Verify everything is correct, then push
to your repo, and create a PR if necessary. Once the PR is approved,
run:
git push $push HEAD:$b
git fetch $repo
-------------------------------------------------------------------
PUSH
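A hypothetical invocation matching the usage text above (filename and branch names illustrative; `user:branch` forms are rewritten to `user/branch` by the sed call):

```bash
# Hypothetical filename; arguments are workbranch, base/branch, then one or more user/branch.
bin/git/squash-branches.sh my-feature upstream/develop alice:fix-a bob:fix-b
```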


@@ -1,58 +0,0 @@
#!/bin/bash
if [[ $# -ne 3 || "$1" == "--help" || "$1" = "-h" ]]
then
name=$( basename $0 )
cat <<- USAGE
Usage: $name workbranch base/branch version
* workbranch will be created locally from base/branch. If it exists,
it will be reused, so make sure you don't overwrite any work.
* base/branch may be specified as user:branch to allow easy copying
from Github PRs.
USAGE
exit 0
fi
work="$1"
shift
base=$( echo "$1" | sed "s/:/\//" )
shift
version=$1
shift
set -e
git fetch upstreams
git checkout -B "${work}" --no-track "${base}"
push=$( git rev-parse --abbrev-ref --symbolic-full-name '@{push}' \
2>/dev/null ) || true
if [[ "${push}" != "" ]]
then
echo "Warning: ${push} may already exist."
fi
build=$( find -name BuildInfo.cpp )
sed 's/\(^.*versionString =\).*$/\1 "'${version}'"/' ${build} > version.cpp && \
diff "${build}" version.cpp && exit 1 || \
mv -vi version.cpp ${build}
git diff
git add ${build}
git commit -S -m "Set version to ${version}"
git log --oneline --first-parent ${base}^..
cat << PUSH
-------------------------------------------------------------------
This script will not push. Verify everything is correct, then push
to your repo, and create a PR as described in CONTRIBUTING.md.
-------------------------------------------------------------------
PUSH
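Again a hypothetical invocation, following the usage text (filename illustrative):

```bash
# Hypothetical filename; arguments are workbranch, base/branch, version.
bin/git/set-version.sh version-bump upstream/develop 2.4.0-b1
```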

23 bin/hexify.js Executable file

@@ -0,0 +1,23 @@
#!/usr/bin/node
//
// Returns the hex encoding of the lowercased input string.
//
var stringToHex = function (s) {
return Array.prototype.map.call(s, function (c) {
var b = c.charCodeAt(0);
return b < 16 ? "0" + b.toString(16) : b.toString(16);
}).join("");
};
if (3 != process.argv.length) {
process.stderr.write("Usage: " + process.argv[1] + " string\n\nReturns hex of lowercasing string.\n");
process.exit(1);
} else {
process.stdout.write(stringToHex(process.argv[2].toLowerCase()) + "\n");
}
// vim:sw=2:sts=2:ts=8:et
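A quick example: "USD" lowercases to "usd", whose bytes 0x75 0x73 0x64 hex-encode as follows:

```bash
node bin/hexify.js USD
# prints: 757364
```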

42 bin/jsonrpc_request.js Executable file

@@ -0,0 +1,42 @@
#!/usr/bin/node
//
// This is a tool to issue JSON-RPC requests from the command line.
//
// This can be used to test a JSON-RPC server.
//
// Requires: npm simple-jsonrpc
//
var jsonrpc = require('simple-jsonrpc');
var program = process.argv[1];
if (5 !== process.argv.length) {
console.log("Usage: %s <URL> <method> <json>", program);
}
else {
var url = process.argv[2];
var method = process.argv[3];
var json_raw = process.argv[4];
var json;
try {
json = JSON.parse(json_raw);
}
catch (e) {
console.log("JSON parse error: %s", e.message);
throw e;
}
var client = jsonrpc.client(url);
client.call(method, json,
function (result) {
console.log(JSON.stringify(result, undefined, 2));
},
function (error) {
console.log(JSON.stringify(error, undefined, 2));
});
}
// vim:sw=2:sts=2:ts=8:et
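A sketch of an invocation against a local JSON-RPC endpoint (URL, method, and params are illustrative assumptions):

```bash
npm install simple-jsonrpc   # one-time dependency, per the header comment
node bin/jsonrpc_request.js http://127.0.0.1:5005 server_info '{}'
```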

68 bin/jsonrpc_server.js Executable file

@@ -0,0 +1,68 @@
#!/usr/bin/node
//
// This is a tool to listen for JSON-RPC requests at an IP and port.
//
// This will report the request to console and echo back the request as the response.
//
var http = require("http");
var program = process.argv[1];
if (4 !== process.argv.length) {
console.log("Usage: %s <ip> <port>", program);
}
else {
var ip = process.argv[2];
var port = process.argv[3];
var server = http.createServer(function (req, res) {
console.log("CONNECT");
var input = "";
req.setEncoding('utf8');
req.on('data', function (buffer) {
// console.log("DATA: %s", buffer);
input = input + buffer;
});
req.on('end', function () {
// console.log("END");
var json_req;
console.log("URL: %s", req.url);
console.log("HEADERS: %s", JSON.stringify(req.headers, undefined, 2));
try {
json_req = JSON.parse(input);
console.log("REQ: %s", JSON.stringify(json_req, undefined, 2));
}
catch (e) {
console.log("BAD JSON: %s", e.message);
json_req = { error : e.message }
}
res.statusCode = 200;
res.end(JSON.stringify({
jsonrpc: "2.0",
result: { request : json_req },
id: json_req.id // echo the request's id back, as JSON-RPC expects
}));
});
req.on('close', function () {
console.log("CLOSE");
});
});
server.listen(port, ip, undefined,
function () {
console.log("Listening at: %s:%s", ip, port);
});
}
// vim:sw=2:sts=2:ts=8:et
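To exercise the echo server, a hedged sketch (addresses and payload illustrative):

```bash
node bin/jsonrpc_server.js 127.0.0.1 8080 &
curl -s -H 'Content-Type: application/json' \
     -d '{"jsonrpc":"2.0","method":"ping","params":{},"id":1}' \
     http://127.0.0.1:8080
# The response wraps the parsed request under "result.request".
```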

218 bin/physical.sh Executable file

@@ -0,0 +1,218 @@
#!/bin/bash
set -o errexit
marker_base=985c80fbc6131f3a8cedd0da7e8af98dfceb13c7
marker_commit=${1:-${marker_base}}
if [ $(git merge-base ${marker_commit} ${marker_base}) != ${marker_base} ]; then
echo "first marker commit not an ancestor: ${marker_commit}"
exit 1
fi
if [ $(git merge-base ${marker_commit} HEAD) != $(git rev-parse --verify ${marker_commit}) ]; then
echo "given marker commit not an ancestor: ${marker_commit}"
exit 1
fi
if [ -e Builds/CMake ]; then
echo move CMake
git mv Builds/CMake cmake
git add --update .
git commit -m 'Move CMake directory' --author 'Pretty Printer <cpp@ripple.com>'
fi
if [ -e src/ripple ]; then
echo move protocol buffers
mkdir -p include/xrpl
if [ -e src/ripple/proto ]; then
git mv src/ripple/proto include/xrpl
fi
extract_list() {
git show ${marker_commit}:Builds/CMake/RippledCore.cmake | \
awk "/END ${1}/ { p = 0 } p && /src\/ripple/; /BEGIN ${1}/ { p = 1 }" | \
sed -e 's#src/ripple/##' -e 's#[^a-z]\+$##'
}
move_files() {
oldroot="$1"; shift
newroot="$1"; shift
detail="$1"; shift
files=("$@")
for file in ${files[@]}; do
if [ ! -e ${oldroot}/${file} ]; then
continue
fi
dir=$(dirname ${file})
if [ $(basename ${dir}) == 'details' ]; then
dir=$(dirname ${dir})
fi
if [ $(basename ${dir}) == 'impl' ]; then
dir="$(dirname ${dir})/${detail}"
fi
mkdir -p ${newroot}/${dir}
git mv ${oldroot}/${file} ${newroot}/${dir}
done
}
echo move libxrpl headers
files=$(extract_list 'LIBXRPL HEADERS')
files+=(
basics/SlabAllocator.h
beast/asio/io_latency_probe.h
beast/container/aged_container.h
beast/container/aged_container_utility.h
beast/container/aged_map.h
beast/container/aged_multimap.h
beast/container/aged_multiset.h
beast/container/aged_set.h
beast/container/aged_unordered_map.h
beast/container/aged_unordered_multimap.h
beast/container/aged_unordered_multiset.h
beast/container/aged_unordered_set.h
beast/container/detail/aged_associative_container.h
beast/container/detail/aged_container_iterator.h
beast/container/detail/aged_ordered_container.h
beast/container/detail/aged_unordered_container.h
beast/container/detail/empty_base_optimization.h
beast/core/LockFreeStack.h
beast/insight/Collector.h
beast/insight/Counter.h
beast/insight/CounterImpl.h
beast/insight/Event.h
beast/insight/EventImpl.h
beast/insight/Gauge.h
beast/insight/GaugeImpl.h
beast/insight/Group.h
beast/insight/Groups.h
beast/insight/Hook.h
beast/insight/HookImpl.h
beast/insight/Insight.h
beast/insight/Meter.h
beast/insight/MeterImpl.h
beast/insight/NullCollector.h
beast/insight/StatsDCollector.h
beast/test/fail_counter.h
beast/test/fail_stream.h
beast/test/pipe_stream.h
beast/test/sig_wait.h
beast/test/string_iostream.h
beast/test/string_istream.h
beast/test/string_ostream.h
beast/test/test_allocator.h
beast/test/yield_to.h
beast/utility/hash_pair.h
beast/utility/maybe_const.h
beast/utility/temp_dir.h
# included by only json/impl/json_assert.h
json/json_errors.h
protocol/PayChan.h
protocol/RippleLedgerHash.h
protocol/messages.h
protocol/st.h
)
files+=(
basics/README.md
crypto/README.md
json/README.md
protocol/README.md
resource/README.md
)
move_files src/ripple include/xrpl detail ${files[@]}
echo move libxrpl sources
files=$(extract_list 'LIBXRPL SOURCES')
move_files src/ripple src/libxrpl "" ${files[@]}
echo check leftovers
dirs=$(cd include/xrpl; ls -d */)
dirs=$(cd src/ripple; ls -d ${dirs} 2>/dev/null || true)
files="$(cd src/ripple; find ${dirs} -type f)"
if [ -n "${files}" ]; then
echo "leftover files:"
echo ${files}
exit 1
fi
echo remove empty directories
empty_dirs="$(cd src/ripple; find ${dirs} -depth -type d)"
for dir in ${empty_dirs[@]}; do
if [ -e ${dir} ]; then
rmdir ${dir}
fi
done
echo move xrpld sources
files=$(
extract_list 'XRPLD SOURCES'
cd src/ripple
find * -regex '.*\.\(h\|ipp\|md\|pu\|uml\|png\)'
)
move_files src/ripple src/xrpld detail ${files[@]}
files="$(cd src/ripple; find . -type f)"
if [ -n "${files}" ]; then
echo "leftover files:"
echo ${files}
exit 1
fi
fi
rm -rf src/ripple
echo rename .hpp to .h
find include src -name '*.hpp' -exec bash -c 'f="{}"; git mv "${f}" "${f%hpp}h"' \;
echo move PerfLog.h
if [ -e include/xrpl/basics/PerfLog.h ]; then
git mv include/xrpl/basics/PerfLog.h src/xrpld/perflog
fi
# Make sure all protobuf includes have the correct prefix.
protobuf_replace='s:^#include\s*["<].*org/xrpl\([^">]\+\)[">]:#include <xrpl/proto/org/xrpl\1>:'
# Make sure first-party includes use angle brackets and .h extension.
ripple_replace='s:include\s*["<]ripple/\(.*\)\.h\(pp\)\?[">]:include <ripple/\1.h>:'
beast_replace='s:include\s*<beast/:include <xrpl/beast/:'
# Rename impl directories to detail.
impl_rename='s:\(<xrpl.*\)/impl\(/details\)\?/:\1/detail/:'
echo rewrite includes in libxrpl
find include/xrpl src/libxrpl -type f -exec sed -i \
-e "${protobuf_replace}" \
-e "${ripple_replace}" \
-e "${beast_replace}" \
-e 's:^#include <ripple/:#include <xrpl/:' \
-e "${impl_rename}" \
{} +
echo rewrite includes in xrpld
# https://www.baeldung.com/linux/join-multiple-lines
libxrpl_dirs="$(cd include/xrpl; ls -d1 */ | sed 's:/$::')"
# libxrpl_dirs='a\nb\nc\n'
readarray -t libxrpl_dirs <<< "${libxrpl_dirs}"
# libxrpl_dirs=(a b c)
libxrpl_dirs=$(printf -v txt '%s\\|' "${libxrpl_dirs[@]}"; echo "${txt%\\|}")
# libxrpl_dirs='a\|b\|c'
find src/xrpld src/test -type f -exec sed -i \
-e "${protobuf_replace}" \
-e "${ripple_replace}" \
-e "${beast_replace}" \
-e "s:^#include <ripple/basics/PerfLog.h>:#include <xrpld/perflog/PerfLog.h>:" \
-e "s:^#include <ripple/\(${libxrpl_dirs}\)/:#include <xrpl/\1/:" \
-e 's:^#include <ripple/:#include <xrpld/:' \
-e "${impl_rename}" \
{} +
git commit -m 'Rearrange sources' --author 'Pretty Printer <cpp@ripple.com>'
find include src -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -exec clang-format-10 -i {} +
git add --update .
git commit -m 'Rewrite includes' --author 'Pretty Printer <cpp@ripple.com>'
./Builds/levelization/levelization.sh
git add --update .
git commit -m 'Recompute loops' --author 'Pretty Printer <cpp@ripple.com>'
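A minimal invocation sketch, assuming a clean work tree at the repository root (per its final steps, the script also expects clang-format-10 and Builds/levelization/levelization.sh to be available):

```bash
# Optionally pass a marker commit; it must descend from the hard-coded base above.
bash bin/physical.sh
```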

252 bin/rlint.js Executable file

@@ -0,0 +1,252 @@
#!/usr/bin/node
var async = require('async');
var Remote = require('ripple-lib').Remote;
var Transaction = require('ripple-lib').Transaction;
var UInt160 = require('ripple-lib').UInt160;
var Amount = require('ripple-lib').Amount;
var book_key = function (book) {
return book.taker_pays.currency
+ ":" + book.taker_pays.issuer
+ ":" + book.taker_gets.currency
+ ":" + book.taker_gets.issuer;
};
var book_key_cross = function (book) {
return book.taker_gets.currency
+ ":" + book.taker_gets.issuer
+ ":" + book.taker_pays.currency
+ ":" + book.taker_pays.issuer;
};
var ledger_verify = function (ledger) {
var dir_nodes = ledger.accountState.filter(function (entry) {
return entry.LedgerEntryType === 'DirectoryNode' // Only directories
&& entry.index === entry.RootIndex // Only root nodes
&& 'TakerGetsCurrency' in entry; // Only offer directories
});
var books = {};
dir_nodes.forEach(function (node) {
var book = {
taker_gets: {
currency: UInt160.from_generic(node.TakerGetsCurrency).to_json(),
issuer: UInt160.from_generic(node.TakerGetsIssuer).to_json()
},
taker_pays: {
currency: UInt160.from_generic(node.TakerPaysCurrency).to_json(),
issuer: UInt160.from_generic(node.TakerPaysIssuer).to_json()
},
quality: Amount.from_quality(node.RootIndex),
index: node.RootIndex
};
books[book_key(book)] = book;
// console.log(JSON.stringify(node, undefined, 2));
});
// console.log(JSON.stringify(dir_entry, undefined, 2));
console.log("#%s books: %s", ledger.ledger_index, Object.keys(books).length);
Object.keys(books).forEach(function (key) {
var book = books[key];
var key_cross = book_key_cross(book);
var book_cross = books[key_cross];
if (book && book_cross && !book_cross.done)
{
var book_cross_quality_inverted = Amount.from_json("1.0/1/1").divide(book_cross.quality);
if (book_cross_quality_inverted.compareTo(book.quality) >= 0)
{
// Crossing books
console.log("crossing: #%s :: %s :: %s :: %s :: %s :: %s :: %s", ledger.ledger_index, key, book.quality.to_text(), book_cross.quality.to_text(), book_cross_quality_inverted.to_text(),
book.index, book_cross.index);
}
book_cross.done = true;
}
});
var ripple_selfs = {};
var accounts = {};
var counts = {};
ledger.accountState.forEach(function (entry) {
if (entry.LedgerEntryType === 'Offer')
{
counts[entry.Account] = (counts[entry.Account] || 0) + 1;
}
else if (entry.LedgerEntryType === 'RippleState')
{
if (entry.Flags & (0x10000 | 0x40000))
{
counts[entry.LowLimit.issuer] = (counts[entry.LowLimit.issuer] || 0) + 1;
}
if (entry.Flags & (0x20000 | 0x80000))
{
counts[entry.HighLimit.issuer] = (counts[entry.HighLimit.issuer] || 0) + 1;
}
if (entry.HighLimit.issuer === entry.LowLimit.issuer)
ripple_selfs[entry.Account] = entry;
}
else if (entry.LedgerEntryType === 'AccountRoot')
{
accounts[entry.Account] = entry;
}
});
var low = 0; // Accounts with too low a count.
var high = 0;
var missing_accounts = 0; // Objects with no referencing account.
var missing_objects = 0; // Accounts specifying an object but having none.
Object.keys(counts).forEach(function (account) {
if (account in accounts)
{
if (counts[account] !== accounts[account].OwnerCount)
{
if (counts[account] < accounts[account].OwnerCount)
{
high += 1;
console.log("%s: high count %s/%s", account, counts[account], accounts[account].OwnerCount);
}
else
{
low += 1;
console.log("%s: low count %s/%s", account, counts[account], accounts[account].OwnerCount);
}
}
}
else
{
missing_accounts += 1;
console.log("%s: missing : count %s", account, counts[account]);
}
});
Object.keys(accounts).forEach(function (account) {
if (!('OwnerCount' in accounts[account]))
{
console.log("%s: bad entry : %s", account, JSON.stringify(accounts[account], undefined, 2));
}
else if (!(account in counts) && accounts[account].OwnerCount)
{
missing_objects += 1;
console.log("%s: no objects : %s/%s", account, 0, accounts[account].OwnerCount);
}
});
if (low)
console.log("counts too low = %s", low);
if (high)
console.log("counts too high = %s", high);
if (missing_objects)
console.log("missing_objects = %s", missing_objects);
if (missing_accounts)
console.log("missing_accounts = %s", missing_accounts);
if (Object.keys(ripple_selfs).length)
console.log("RippleState selfs = %s", Object.keys(ripple_selfs).length);
};
var ledger_request = function (remote, ledger_index, done) {
remote.request_ledger(undefined, {
accounts: true,
expand: true,
})
.ledger_index(ledger_index)
.on('success', function (m) {
// console.log("ledger: ", ledger_index);
// console.log("ledger: ", JSON.stringify(m, undefined, 2));
done(m.ledger);
})
.on('error', function (m) {
console.log("error");
done();
})
.request();
};
var usage = function () {
console.log("rlint.js _websocket_ip_ _websocket_port_ ");
};
var finish = function (remote) {
remote.disconnect();
// XXX Because remote.disconnect() doesn't work:
process.exit();
};
console.log("args: ", process.argv.length);
console.log("args: ", process.argv);
if (process.argv.length < 4) {
usage();
}
else {
var remote = Remote.from_config({
websocket_ip: process.argv[2],
websocket_port: process.argv[3],
})
.once('ledger_closed', function (m) {
console.log("ledger_closed: ", JSON.stringify(m, undefined, 2));
if (process.argv.length === 5) {
var ledger_index = process.argv[4];
ledger_request(remote, ledger_index, function (l) {
if (l) {
ledger_verify(l);
}
finish(remote);
});
} else if (process.argv.length === 6) {
var ledger_start = Number(process.argv[4]);
var ledger_end = Number(process.argv[5]);
var ledger_cursor = ledger_end;
async.whilst(
function () {
return ledger_start <= ledger_cursor && ledger_cursor <= ledger_end;
},
function (callback) {
// console.log(ledger_cursor);
ledger_request(remote, ledger_cursor, function (l) {
if (l) {
ledger_verify(l);
}
--ledger_cursor;
callback();
});
},
function (error) {
finish(remote);
});
} else {
finish(remote);
}
})
.connect();
}
// vim:sw=2:sts=2:ts=8:et
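Per the argv handling above, the tool verifies either one ledger or a range (host, port, and indices illustrative):

```bash
node bin/rlint.js 127.0.0.1 6006 12345        # verify a single ledger
node bin/rlint.js 127.0.0.1 6006 12340 12345  # verify a ledger range
```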

51 bin/sh/install-vcpkg.sh Executable file

@@ -0,0 +1,51 @@
#!/usr/bin/env bash
set -exu
: ${TRAVIS_BUILD_DIR:=""}
: ${VCPKG_DIR:=".vcpkg"}
export VCPKG_ROOT=${VCPKG_DIR}
: ${VCPKG_DEFAULT_TRIPLET:="x64-windows-static"}
export VCPKG_DEFAULT_TRIPLET
EXE="vcpkg"
if [[ -z ${COMSPEC:-} ]]; then
EXE="${EXE}.exe"
fi
if [[ -d "${VCPKG_DIR}" && -x "${VCPKG_DIR}/${EXE}" && -d "${VCPKG_DIR}/installed" ]] ; then
echo "Using cached vcpkg at ${VCPKG_DIR}"
${VCPKG_DIR}/${EXE} list
else
if [[ -d "${VCPKG_DIR}" ]] ; then
rm -rf "${VCPKG_DIR}"
fi
git clone --branch 2021.04.30 https://github.com/Microsoft/vcpkg.git ${VCPKG_DIR}
pushd ${VCPKG_DIR}
BSARGS=()
if [[ "$(uname)" == "Darwin" ]] ; then
BSARGS+=(--allowAppleClang)
fi
if [[ -z ${COMSPEC:-} ]]; then
chmod +x ./bootstrap-vcpkg.sh
time ./bootstrap-vcpkg.sh "${BSARGS[@]}"
else
time ./bootstrap-vcpkg.bat
fi
popd
fi
# TODO: bring boost in this way as well ?
# NOTE: can pin specific ports to a commit/version like this:
# git checkout <SOME COMMIT HASH> ports/boost
if [ $# -eq 0 ]; then
echo "No extra packages specified..."
PKGS=()
else
PKGS=( "$@" )
fi
for LIB in "${PKGS[@]}"; do
time ${VCPKG_DIR}/${EXE} --clean-after-build install ${LIB}
done
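A sketch of an invocation (package names illustrative); with no arguments the script only bootstraps vcpkg:

```bash
VCPKG_DIR=.vcpkg bin/sh/install-vcpkg.sh openssl zlib
```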

40 bin/sh/setup-msvc.sh Executable file

@@ -0,0 +1,40 @@
# NOTE: must be sourced from a shell so it can export vars
cat << BATCH > ./getenv.bat
CALL %*
ENV
BATCH
while read line ; do
IFS='"' read x path arg <<<"${line}"
if [ -f "${path}" ] ; then
echo "FOUND: $path"
export VCINSTALLDIR=$(./getenv.bat "${path}" ${arg} | grep "^VCINSTALLDIR=" | sed -E "s/^VCINSTALLDIR=//g")
if [ "${VCINSTALLDIR}" != "" ] ; then
echo "USING ${VCINSTALLDIR}"
export LIB=$(./getenv.bat "${path}" ${arg} | grep "^LIB=" | sed -E "s/^LIB=//g")
export LIBPATH=$(./getenv.bat "${path}" ${arg} | grep "^LIBPATH=" | sed -E "s/^LIBPATH=//g")
export INCLUDE=$(./getenv.bat "${path}" ${arg} | grep "^INCLUDE=" | sed -E "s/^INCLUDE=//g")
ADDPATH=$(./getenv.bat "${path}" ${arg} | grep "^PATH=" | sed -E "s/^PATH=//g")
export PATH="${ADDPATH}:${PATH}"
break
fi
fi
done <<EOL
"C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2017/BuildTools/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio 15.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 13.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/vcvarsall.bat" amd64
EOL
# TODO: update the list above as needed to support newer versions of msvc tools
rm -f getenv.bat
if [ "${VCINSTALLDIR}" = "" ] ; then
echo "No compatible visual studio found!"
fi
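Because the script exports VCINSTALLDIR, LIB, LIBPATH, INCLUDE, and PATH, it only works when sourced:

```bash
source bin/sh/setup-msvc.sh   # running it in a subshell would discard the exports
```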

246 bin/start_sync_stop.py Normal file

@@ -0,0 +1,246 @@
#!/usr/bin/env python
"""A script to test rippled in an infinite loop of start-sync-stop.
- Requires Python 3.7+.
- Can be stopped with SIGINT.
- Has no dependencies outside the standard library.
"""
import sys
assert sys.version_info.major == 3 and sys.version_info.minor >= 7
import argparse
import asyncio
import configparser
import contextlib
import json
import logging
import os
from pathlib import Path
import platform
import subprocess
import time
import urllib.error
import urllib.request
# Enable asynchronous subprocesses on Windows. The default changed in 3.8.
# https://docs.python.org/3.7/library/asyncio-platforms.html#subprocess-support-on-windows
if (platform.system() == 'Windows' and sys.version_info.major == 3
and sys.version_info.minor < 8):
asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
DEFAULT_EXE = 'rippled'
DEFAULT_CONFIGURATION_FILE = 'rippled.cfg'
# Number of seconds to wait before forcefully terminating.
PATIENCE = 120
# Number of contiguous seconds in a sync state to be considered synced.
DEFAULT_SYNC_DURATION = 60
# Number of seconds between polls of state.
DEFAULT_POLL_INTERVAL = 5
SYNC_STATES = ('full', 'validating', 'proposing')
def read_config(config_file):
# strict = False: Allow duplicate keys, e.g. [rpc_startup].
# allow_no_value = True: Allow keys with no values. Generally, these
# instances use the "key" as the value, and the section name is the key,
# e.g. [debug_logfile].
# delimiters = ('='): Allow ':' as a character in Windows paths. Some of
# our "keys" are actually values, and we don't want to split them on ':'.
config = configparser.ConfigParser(
strict=False,
allow_no_value=True,
delimiters=('='),
)
config.read(config_file)
return config
def to_list(value, separator=','):
"""Parse a list from a delimited string value."""
return [s.strip() for s in value.split(separator) if s]
def find_log_file(config_file):
"""Try to figure out what log file the user has chosen. Raises all kinds
of exceptions if there is any possibility of ambiguity."""
config = read_config(config_file)
values = list(config['debug_logfile'].keys())
if len(values) < 1:
raise ValueError(
f'no [debug_logfile] in configuration file: {config_file}')
if len(values) > 1:
raise ValueError(
f'too many [debug_logfile] in configuration file: {config_file}')
return values[0]
def find_http_port(config_file):
config = read_config(config_file)
names = list(config['server'].keys())
for name in names:
server = config[name]
if 'http' in to_list(server.get('protocol', '')):
return int(server['port'])
raise ValueError('no server in [server] for "http" protocol')
@contextlib.asynccontextmanager
async def rippled(exe=DEFAULT_EXE, config_file=DEFAULT_CONFIGURATION_FILE):
"""A context manager for a rippled process."""
# Start the server.
process = await asyncio.create_subprocess_exec(
str(exe),
'--conf',
str(config_file),
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
logging.info(f'rippled started with pid {process.pid}')
try:
yield process
finally:
# Ask it to stop.
logging.info(f'asking rippled (pid: {process.pid}) to stop')
start = time.time()
process.terminate()
# Wait nicely.
try:
await asyncio.wait_for(process.wait(), PATIENCE)
except asyncio.TimeoutError:
# Ask the operating system to kill it.
logging.warning(f'killing rippled ({process.pid})')
try:
process.kill()
except ProcessLookupError:
pass
code = await process.wait()
end = time.time()
logging.info(
f'rippled stopped after {end - start:.1f} seconds with code {code}'
)
async def sync(
port,
*,
duration=DEFAULT_SYNC_DURATION,
interval=DEFAULT_POLL_INTERVAL,
):
"""Poll rippled on an interval until it has been synced for a duration."""
start = time.perf_counter()
while (time.perf_counter() - start) < duration:
await asyncio.sleep(interval)
request = urllib.request.Request(
f'http://127.0.0.1:{port}',
data=json.dumps({
'method': 'server_state'
}).encode(),
headers={'Content-Type': 'application/json'},
)
with urllib.request.urlopen(request) as response:
try:
body = json.loads(response.read())
except json.JSONDecodeError as cause:
logging.warning(f'server_state returned not JSON: {cause}')
start = time.perf_counter()
continue
try:
state = body['result']['state']['server_state']
except KeyError as cause:
logging.warning(f'server_state response missing key: {cause}')
start = time.perf_counter()
continue
logging.info(f'server_state: {state}')
if state not in SYNC_STATES:
# Require a contiguous sync state.
start = time.perf_counter()
async def loop(test,
*,
exe=DEFAULT_EXE,
config_file=DEFAULT_CONFIGURATION_FILE):
"""
Start-test-stop rippled in an infinite loop.
Moves log to a different file after each iteration.
"""
log_file = find_log_file(config_file)
id = 0
while True:
logging.info(f'iteration: {id}')
async with rippled(exe, config_file) as process:
start = time.perf_counter()
exited = asyncio.create_task(process.wait())
tested = asyncio.create_task(test())
# Try to sync as long as the process is running.
done, pending = await asyncio.wait(
{exited, tested},
return_when=asyncio.FIRST_COMPLETED,
)
if done == {exited}:
code = exited.result()
logging.warning(
f'server halted for unknown reason with code {code}')
else:
assert done == {tested}
assert tested.exception() is None
end = time.perf_counter()
logging.info(f'synced after {end - start:.0f} seconds')
os.replace(log_file, f'debug.{id}.log')
id += 1
logging.basicConfig(
format='%(asctime)s %(levelname)-8s %(message)s',
level=logging.INFO,
datefmt='%Y-%m-%d %H:%M:%S',
)
parser = argparse.ArgumentParser(
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument(
'rippled',
type=Path,
nargs='?',
default=DEFAULT_EXE,
help='Path to rippled.',
)
parser.add_argument(
'--conf',
type=Path,
default=DEFAULT_CONFIGURATION_FILE,
help='Path to configuration file.',
)
parser.add_argument(
'--duration',
type=int,
default=DEFAULT_SYNC_DURATION,
help='Number of contiguous seconds required in a synchronized state.',
)
parser.add_argument(
'--interval',
type=int,
default=DEFAULT_POLL_INTERVAL,
help='Number of seconds to wait between polls of state.',
)
args = parser.parse_args()
port = find_http_port(args.conf)
def test():
return sync(port, duration=args.duration, interval=args.interval)
try:
asyncio.run(loop(test, exe=args.rippled, config_file=args.conf))
except KeyboardInterrupt:
# Squelch the message. This is a normal mode of exit.
pass
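A plausible invocation, assuming a built binary and a config whose [server] stanza exposes an http port (paths illustrative):

```bash
python3 bin/start_sync_stop.py build/rippled --conf rippled.cfg \
    --duration 60 --interval 5
```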

133 bin/stop-test.js Normal file

@@ -0,0 +1,133 @@
/* -------------------------------- REQUIRES -------------------------------- */
var child = require("child_process");
var assert = require("assert");
/* --------------------------------- CONFIG --------------------------------- */
if (process.argv[2] == null) {
[
'Usage: ',
'',
' `node bin/stop-test.js i,j [rippled_path] [rippled_conf]`',
'',
' Launch rippled and stop it after n seconds for all n in [i, j)',
' For all even values of n launch rippled with `--fg`',
' For values of n where n % 3 == 0 launch rippled with `--net`\n',
'Examples: ',
'',
' $ node bin/stop-test.js 5,10',
(' $ node bin/stop-test.js 1,4 ' +
'build/clang.debug/rippled $HOME/.confs/rippled.cfg')
]
.forEach(function(l){console.log(l)});
process.exit();
} else {
var testRange = process.argv[2].split(',').map(Number);
var rippledPath = process.argv[3] || 'build/rippled'
var rippledConf = process.argv[4] || 'rippled.cfg'
}
var options = {
env: process.env,
stdio: 'ignore' // we could dump the child io when it fails abnormally
};
// default args
var conf_args = ['--conf='+rippledConf];
var start_args = conf_args.concat([/*'--net'*/])
var stop_args = conf_args.concat(['stop']);
/* --------------------------------- HELPERS -------------------------------- */
function start(args) {
return child.spawn(rippledPath, args, options);
}
function stop(rippled) { child.execFile(rippledPath, stop_args, options)}
function secs_l8r(ms, f) {setTimeout(f, ms * 1000); }
function show_results_and_exit(results) {
console.log(JSON.stringify(results, undefined, 2));
process.exit();
}
var timeTakes = function (range) {
function sumRange(n) {return (n+1) * n /2}
var ret = sumRange(range[1]);
if (range[0] > 1) {
ret = ret - sumRange(range[0] - 1)
}
var stopping = (range[1] - range[0]) * 0.5;
return ret + stopping;
}
/* ---------------------------------- TEST ---------------------------------- */
console.log("Test will take ~%s seconds", timeTakes(testRange));
(function oneTest(n /* seconds */, results) {
if (n >= testRange[1]) {
// show_results_and_exit(results);
console.log(JSON.stringify(results, undefined, 2));
oneTest(testRange[0], []);
return;
}
var args = start_args;
if (n % 2 == 0) {args = args.concat(['--fg'])}
if (n % 3 == 0) {args = args.concat(['--net'])}
var result = {args: args, alive_for: n};
results.push(result);
console.log("\nLaunching `%s` with `%s` for %d seconds",
rippledPath, JSON.stringify(args), n);
var rippled = start(args);
console.log("Rippled pid: %d", rippled.pid);
// defaults
var b4StopSent = false;
var stopSent = false;
var stop_took = null;
rippled.once('exit', function(){
if (!stopSent && !b4StopSent) {
console.warn('\nRippled exited itself b4 stop issued');
process.exit();
};
// The io handles close AFTER exit, may have implications for
// `stdio:'inherit'` option to `child.spawn`.
rippled.once('close', function() {
result.stop_took = (+new Date() - stop_took) / 1000; // seconds
console.log("Stopping after %d seconds took %s seconds",
n, result.stop_took);
oneTest(n+1, results);
});
});
secs_l8r(n, function(){
console.log("Stopping rippled after %d seconds", n);
// possible race here ?
// seems highly unlikely, but I was having issues at one point
b4StopSent=true;
stop_took = (+new Date());
// when does `exit` actually get sent?
stop();
stopSent=true;
// Sometimes we want to attach with a debugger.
if (process.env.ABORT_TESTS_ON_STALL != null) {
// We wait 30 seconds, and if it hasn't stopped, we abort the process
secs_l8r(30, function() {
if (result.stop_took == null) {
console.log("rippled has stalled");
process.exit();
};
});
}
})
}(testRange[0], []));

119 bin/update_binformat.js Normal file

@@ -0,0 +1,119 @@
/**
* bin/update_bintypes.js
*
* This unholy abomination of a script generates the JavaScript file
* src/js/bintypes.js from various parts of the C++ source code.
*
* This should *NOT* be part of any automatic build process unless the C++
* source data are brought into a more easily parseable format. Until then,
* simply run this script manually and fix as needed.
*/
// XXX: Process LedgerFormats.(h|cpp) as well.
var filenameProto = __dirname + '/../src/cpp/ripple/SerializeProto.h',
filenameTxFormatsH = __dirname + '/../src/cpp/ripple/TransactionFormats.h',
filenameTxFormats = __dirname + '/../src/cpp/ripple/TransactionFormats.cpp';
var fs = require('fs');
var output = [];
// Stage 1: Get the field types and codes from SerializeProto.h
var types = {},
fields = {};
String(fs.readFileSync(filenameProto)).split('\n').forEach(function (line) {
line = line.replace(/^\s+|\s+$/g, '').replace(/\s+/g, '');
if (!line.length || line.slice(0, 2) === '//' || line.slice(-1) !== ')') return;
var tmp = line.slice(0, -1).split('('),
type = tmp[0],
opts = tmp[1].split(',');
if (type === 'TYPE') types[opts[1]] = [opts[0], +opts[2]];
else if (type === 'FIELD') fields[opts[0]] = [types[opts[1]][0], +opts[2]];
});
output.push('var ST = require("./serializedtypes");');
output.push('');
output.push('var REQUIRED = exports.REQUIRED = 0,');
output.push(' OPTIONAL = exports.OPTIONAL = 1,');
output.push(' DEFAULT = exports.DEFAULT = 2;');
output.push('');
function pad(s, n) { while (s.length < n) s += ' '; return s; }
function padl(s, n) { while (s.length < n) s = ' '+s; return s; }
Object.keys(types).forEach(function (type) {
output.push(pad('ST.'+types[type][0]+'.id', 25) + ' = '+types[type][1]+';');
});
output.push('');
// Stage 2: Get the transaction type IDs from TransactionFormats.h
var ttConsts = {};
String(fs.readFileSync(filenameTxFormatsH)).split('\n').forEach(function (line) {
var regex = /tt([A-Z_]+)\s+=\s+([0-9-]+)/;
var match = line.match(regex);
if (match) ttConsts[match[1]] = +match[2];
});
// Stage 3: Get the transaction formats from TransactionFormats.cpp
var base = [],
sections = [],
current = base;
String(fs.readFileSync(filenameTxFormats)).split('\n').forEach(function (line) {
line = line.replace(/^\s+|\s+$/g, '').replace(/\s+/g, '');
var d_regex = /DECLARE_TF\(([A-Za-z]+),tt([A-Z_]+)/;
var d_match = line.match(d_regex);
var s_regex = /SOElement\(sf([a-z]+),SOE_(REQUIRED|OPTIONAL|DEFAULT)/i;
var s_match = line.match(s_regex);
if (d_match) sections.push(current = [d_match[1], ttConsts[d_match[2]]]);
else if (s_match) current.push([s_match[1], s_match[2]]);
});
function removeFinalComma(arr) {
arr[arr.length-1] = arr[arr.length-1].slice(0, -1);
}
output.push('var base = [');
base.forEach(function (field) {
var spec = fields[field[0]];
output.push(' [ '+
pad("'"+field[0]+"'", 21)+', '+
pad(field[1], 8)+', '+
padl(""+spec[1], 2)+', '+
'ST.'+pad(spec[0], 3)+
' ],');
});
removeFinalComma(output);
output.push('];');
output.push('');
output.push('exports.tx = {');
sections.forEach(function (section) {
var name = section.shift(),
ttid = section.shift();
output.push(' '+name+': ['+ttid+'].concat(base, [');
section.forEach(function (field) {
var spec = fields[field[0]];
output.push(' [ '+
pad("'"+field[0]+"'", 21)+', '+
pad(field[1], 8)+', '+
padl(""+spec[1], 2)+', '+
'ST.'+pad(spec[0], 3)+
' ],');
});
removeFinalComma(output);
output.push(' ]),');
});
removeFinalComma(output);
output.push('};');
output.push('');
console.log(output.join('\n'));
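Since the generator prints the module to stdout, a run would be redirected to the target path named in the header comment:

```bash
node bin/update_binformat.js > src/js/bintypes.js
```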


@@ -410,17 +410,14 @@
# starter list is included in the code and used if no other hostnames are
# available.
#
# One address or domain name per line is allowed. A port may be specified
# after adding a space to the address. If a port is not specified, the default
# port of 2459 will be used. Many servers still use the legacy port of 51235.
# To connect to such servers, you must specify the port number. The ordering
# of entries does not generally matter.
# One address or domain name per line is allowed. A port must be
# specified after adding a space to the address. The ordering of entries
# does not generally matter.
#
# The default list of entries is:
# - r.ripple.com 51235
# - sahyadri.isrdc.in 51235
# - hubs.xrpkuwait.com 51235
# - hub.xrpl-commons.org 51235
#
# Examples:
#
@@ -1426,9 +1423,6 @@ admin = 127.0.0.1
protocol = http
[port_peer]
# Many servers still use the legacy port of 51235, so for backward-compatibility
# we maintain that port number here. However, for new servers we recommend
# changing this to the default port of 2459.
port = 51235
ip = 0.0.0.0
# alternatively, to accept connections on IPv4 + IPv6, use:


@@ -26,7 +26,7 @@
#
# Examples:
# https://vl.ripple.com
# https://unl.xrplf.org
# https://vl.xrplf.org
# http://127.0.0.1:8000
# file:///etc/opt/ripple/vl.txt
#
@@ -54,13 +54,13 @@
[validator_list_sites]
https://vl.ripple.com
https://unl.xrplf.org
https://vl.xrplf.org
[validator_list_keys]
#vl.ripple.com
ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
#unl.xrplf.org
ED42AEC58B701EEBB77356FFFEC26F83C1F0407263530F068C7C73D392C7E06FD1
# vl.xrplf.org
ED45D1840EE724BE327ABE9146503D5848EFD5F38B6D5FEDE71E80ACCE5E6E738B
# To use the test network (see https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html),
# use the following configuration instead:
@@ -70,21 +70,3 @@ ED42AEC58B701EEBB77356FFFEC26F83C1F0407263530F068C7C73D392C7E06FD1
#
# [validator_list_keys]
# ED264807102805220DA0F312E71FC2C69E1552C9C5790F6C25E3729DEB573D5860
# [validator_list_threshold]
#
# Minimum number of validator lists on which a validator must be listed in
# order to be used.
#
# This can be set explicitly to any positive integer number not greater than
# the size of [validator_list_keys]. If it is not set, or set to 0, the
# value will be calculated at startup from the size of [validator_list_keys],
# where the calculation is:
#
# threshold = size(validator_list_keys) < 3
# ? 1
# : floor(size(validator_list_keys) / 2) + 1
[validator_list_threshold]
0
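The removed comment's default calculation is simple enough to restate; a hedged bash sketch of that formula (the value of n is illustrative):

```bash
# threshold = 1 when fewer than 3 list keys, else floor(n/2) + 1
n=4   # size(validator_list_keys)
if (( n < 3 )); then threshold=1; else threshold=$(( n / 2 + 1 )); fi
echo "${threshold}"   # prints 3: a validator must appear on 3 of the 4 lists
```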


@@ -98,9 +98,6 @@
# 2024-04-03, Bronek Kozicki
# - add support for output formats: jacoco, clover, lcov
#
# 2025-05-12, Jingchen Wu
# - add -fprofile-update=atomic to ensure atomic profile generation
#
# USAGE:
#
# 1. Copy this file into your cmake modules path.
@@ -203,27 +200,15 @@ set(COVERAGE_COMPILER_FLAGS "-g --coverage"
CACHE INTERNAL "")
if(CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Clang)")
include(CheckCXXCompilerFlag)
include(CheckCCompilerFlag)
check_cxx_compiler_flag(-fprofile-abs-path HAVE_cxx_fprofile_abs_path)
if(HAVE_cxx_fprofile_abs_path)
set(COVERAGE_CXX_COMPILER_FLAGS "${COVERAGE_COMPILER_FLAGS} -fprofile-abs-path")
endif()
include(CheckCCompilerFlag)
check_c_compiler_flag(-fprofile-abs-path HAVE_c_fprofile_abs_path)
if(HAVE_c_fprofile_abs_path)
set(COVERAGE_C_COMPILER_FLAGS "${COVERAGE_COMPILER_FLAGS} -fprofile-abs-path")
endif()
check_cxx_compiler_flag(-fprofile-update HAVE_cxx_fprofile_update)
if(HAVE_cxx_fprofile_update)
set(COVERAGE_CXX_COMPILER_FLAGS "${COVERAGE_COMPILER_FLAGS} -fprofile-update=atomic")
endif()
check_c_compiler_flag(-fprofile-update HAVE_c_fprofile_update)
if(HAVE_c_fprofile_update)
set(COVERAGE_C_COMPILER_FLAGS "${COVERAGE_COMPILER_FLAGS} -fprofile-update=atomic")
endif()
endif()
set(CMAKE_Fortran_FLAGS_COVERAGE


@@ -90,16 +90,28 @@ if (MSVC)
-errorreport:none
-machine:X64)
else ()
# HACK : because these need to come first, before any warning demotion
string (APPEND CMAKE_CXX_FLAGS " -Wall -Wdeprecated")
if (wextra)
string (APPEND CMAKE_CXX_FLAGS " -Wextra -Wno-unused-parameter")
endif ()
# not MSVC
target_compile_options (common
INTERFACE
-Wall
-Wdeprecated
$<$<BOOL:${is_clang}>:-Wno-deprecated-declarations>
$<$<BOOL:${wextra}>:-Wextra -Wno-unused-parameter>
$<$<BOOL:${werr}>:-Werror>
-fstack-protector
$<$<COMPILE_LANGUAGE:CXX>:
-frtti
-Wnon-virtual-dtor
>
-Wno-sign-compare
-Wno-unused-but-set-variable
-Wno-char-subscripts
-Wno-format
-Wno-unused-local-typedefs
-fstack-protector
$<$<BOOL:${is_gcc}>:
-Wno-unused-but-set-variable
-Wno-deprecated
>
$<$<NOT:$<CONFIG:Debug>>:-fno-strict-aliasing>
# tweak gcc optimization for debug
$<$<AND:$<BOOL:${is_gcc}>,$<CONFIG:Debug>>:-O0>


@@ -9,7 +9,6 @@ include(target_protobuf_sources)
# define a bunch of `static const` variables with the same names,
# so we just build them as a separate library.
add_library(xrpl.libpb)
set_target_properties(xrpl.libpb PROPERTIES UNITY_BUILD OFF)
target_protobuf_sources(xrpl.libpb xrpl/proto
LANGUAGE cpp
IMPORT_DIRS include/xrpl/proto
@@ -50,9 +49,7 @@ target_link_libraries(xrpl.libpb
# TODO: Clean up the number of library targets later.
add_library(xrpl.imports.main INTERFACE)
target_link_libraries(xrpl.imports.main
INTERFACE
target_link_libraries(xrpl.imports.main INTERFACE
LibArchive::LibArchive
OpenSSL::Crypto
Ripple::boost
@@ -62,9 +59,7 @@ target_link_libraries(xrpl.imports.main
date::date
ed25519::ed25519
secp256k1::secp256k1
xrpl.libpb
xxHash::xxhash
$<$<BOOL:${voidstar}>:antithesis-sdk-cpp>
)
include(add_module)
@@ -105,6 +100,9 @@ target_link_libraries(xrpl.libxrpl.server PUBLIC xrpl.libxrpl.protocol)
add_library(xrpl.libxrpl)
set_target_properties(xrpl.libxrpl PROPERTIES OUTPUT_NAME xrpl)
if(unity)
set_target_properties(xrpl.libxrpl PROPERTIES UNITY_BUILD ON)
endif()
add_library(xrpl::libxrpl ALIAS xrpl.libxrpl)
@@ -132,13 +130,41 @@ target_link_modules(xrpl PUBLIC
# $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
# $<INSTALL_INTERFACE:include>)
target_compile_definitions(xrpl.libxrpl
PUBLIC
BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT
BOOST_CONTAINER_FWD_BAD_DEQUE
HAS_UNCAUGHT_EXCEPTIONS=1)
target_compile_options(xrpl.libxrpl
PUBLIC
$<$<BOOL:${is_gcc}>:-Wno-maybe-uninitialized>
$<$<BOOL:${voidstar}>:-DENABLE_VOIDSTAR>
)
target_link_libraries(xrpl.libxrpl
PUBLIC
LibArchive::LibArchive
OpenSSL::Crypto
Ripple::boost
Ripple::opts
Ripple::syslibs
absl::random_random
date::date
ed25519::ed25519
secp256k1::secp256k1
xrpl.libpb
xxHash::xxhash
$<$<BOOL:${voidstar}>:antithesis-sdk-cpp>
)
if(xrpld)
add_executable(rippled)
if(unity)
set_target_properties(rippled PROPERTIES UNITY_BUILD ON)
endif()
if(tests)
target_compile_definitions(rippled PUBLIC ENABLE_TESTS)
target_compile_definitions(rippled PRIVATE
UNIT_TEST_REFERENCE_FEE=${UNIT_TEST_REFERENCE_FEE}
)
endif()
target_include_directories(rippled
PRIVATE
@@ -189,6 +215,13 @@ if(xrpld)
# these two seem to produce conflicts in beast teardown template methods
src/test/rpc/ValidatorRPC_test.cpp
src/test/ledger/Invariants_test.cpp
# These depend on wasmedge.h, which has some duplicate declarations
src/ripple/app/hook/impl/applyHook.cpp
src/ripple/app/misc/NetworkOPs.cpp
src/ripple/app/tx/impl/applySteps.cpp
src/ripple/app/tx/impl/SetHook.cpp
src/ripple/app/tx/impl/Transactor.cpp
src/ripple/rpc/handlers/Fee1.cpp
PROPERTIES SKIP_UNITY_BUILD_INCLUSION TRUE)
endif()
endif()


@@ -53,9 +53,9 @@ set(download_script "${CMAKE_BINARY_DIR}/docs/download-cppreference.cmake")
file(WRITE
"${download_script}"
"file(DOWNLOAD \
https://github.com/PeterFeicht/cppreference-doc/releases/download/v20250209/html-book-20250209.zip \
http://upload.cppreference.com/mwiki/images/b/b2/html_book_20190607.zip \
${CMAKE_BINARY_DIR}/docs/cppreference.zip \
EXPECTED_HASH MD5=bda585f72fbca4b817b29a3d5746567b \
EXPECTED_HASH MD5=82b3a612d7d35a83e3cb1195a63689ab \
)\n \
execute_process( \
COMMAND \"${CMAKE_COMMAND}\" -E tar -xf cppreference.zip \


@@ -7,9 +7,6 @@ add_library (Ripple::opts ALIAS opts)
target_compile_definitions (opts
INTERFACE
BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS
BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT
BOOST_CONTAINER_FWD_BAD_DEQUE
HAS_UNCAUGHT_EXCEPTIONS=1
$<$<BOOL:${boost_show_deprecated}>:
BOOST_ASIO_NO_DEPRECATED
BOOST_FILESYSTEM_NO_DEPRECATED
@@ -21,12 +18,10 @@ target_compile_definitions (opts
>
$<$<BOOL:${beast_no_unit_test_inline}>:BEAST_NO_UNIT_TEST_INLINE=1>
$<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1>
$<$<BOOL:${single_io_service_thread}>:RIPPLE_SINGLE_IO_SERVICE_THREAD=1>
$<$<BOOL:${voidstar}>:ENABLE_VOIDSTAR>)
$<$<BOOL:${single_io_service_thread}>:RIPPLE_SINGLE_IO_SERVICE_THREAD=1>)
target_compile_options (opts
INTERFACE
$<$<AND:$<BOOL:${is_gcc}>,$<COMPILE_LANGUAGE:CXX>>:-Wsuggest-override>
$<$<BOOL:${is_gcc}>:-Wno-maybe-uninitialized>
$<$<BOOL:${perf}>:-fno-omit-frame-pointer>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${coverage}>>:-g --coverage -fprofile-abs-path>
$<$<AND:$<BOOL:${is_clang}>,$<BOOL:${coverage}>>:-g --coverage>


@@ -2,6 +2,16 @@
convenience variables and sanity checks
#]===================================================================]
include(ProcessorCount)
if (NOT ep_procs)
ProcessorCount(ep_procs)
if (ep_procs GREATER 1)
# never use more than half of cores for EP builds
math (EXPR ep_procs "${ep_procs} / 2")
message (STATUS "Using ${ep_procs} cores for ExternalProject builds.")
endif ()
endif ()
get_property(is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
set (CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "" FORCE)


@@ -11,19 +11,12 @@ option(assert "Enables asserts, even in release builds" OFF)
option(xrpld "Build xrpld" ON)
option(tests "Build tests" ON)
if(tests)
# This setting allows making a separate workflow to test fees other than default 10
if(NOT UNIT_TEST_REFERENCE_FEE)
set(UNIT_TEST_REFERENCE_FEE "10" CACHE STRING "")
endif()
endif()
option(unity "Creates a build using UNITY support in cmake." OFF)
option(unity "Creates a build using UNITY support in cmake. This is the default" ON)
if(unity)
if(NOT is_ci)
set(CMAKE_UNITY_BUILD_BATCH_SIZE 15 CACHE STRING "")
endif()
set(CMAKE_UNITY_BUILD ON CACHE BOOL "Do a unity build")
endif()
if(is_clang AND is_linux)
option(voidstar "Enable Antithesis instrumentation." OFF)


@@ -2,6 +2,7 @@ find_package(Boost 1.82 REQUIRED
COMPONENTS
chrono
container
context
coroutine
date_time
filesystem
@@ -23,7 +24,7 @@ endif()
target_link_libraries(ripple_boost
INTERFACE
Boost::headers
Boost::boost
Boost::chrono
Boost::container
Boost::coroutine

52 cmake/deps/WasmEdge.cmake Normal file

@@ -0,0 +1,52 @@
#[===================================================================[
NIH dep: wasmedge: web assembly runtime for hooks.
#]===================================================================]
if (is_root_project) # WasmEdge not needed in the case of xrpl_core inclusion build
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
set(OLD_DEBUG_POSTFIX ${CMAKE_DEBUG_POSTFIX})
set(CMAKE_DEBUG_POSTFIX _d)
endif ()
add_library (wasmedge INTERFACE)
ExternalProject_Add (wasmedge_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/WasmEdge/WasmEdge.git
GIT_TAG 0.9.0
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
$<$<VERSION_GREATER_EQUAL:${CMAKE_VERSION},3.12>:--parallel ${ep_procs}>
TEST_COMMAND ""
INSTALL_COMMAND ""
BUILD_BYPRODUCTS
<BINARY_DIR>/lib/api/libwasmedge_c_${CMAKE_DEBUG_POSTFIX}.so
)
ExternalProject_Get_Property (wasmedge_src BINARY_DIR)
set (wasmedge_src_BINARY_DIR "${BINARY_DIR}")
add_dependencies (wasmedge wasmedge_src)
target_include_directories (ripple_libs SYSTEM INTERFACE "${wasmedge_src_BINARY_DIR}/include/api")
target_link_libraries (wasmedge
INTERFACE
Boost::thread
Boost::system)
target_link_libraries (wasmedge INTERFACE
"${wasmedge_src_BINARY_DIR}/lib/api/libwasmedge_c${CMAKE_DEBUG_POSTFIX}.so")
target_link_libraries (ripple_libs INTERFACE wasmedge)
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
set(CMAKE_DEBUG_POSTFIX ${OLD_DEBUG_POSTFIX})
endif ()
endif ()


@@ -1,41 +0,0 @@
include(isolate_headers)
function(xrpl_add_test name)
set(target ${PROJECT_NAME}.test.${name})
file(GLOB_RECURSE sources CONFIGURE_DEPENDS
"${CMAKE_CURRENT_SOURCE_DIR}/${name}/*.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/${name}.cpp"
)
add_executable(${target} EXCLUDE_FROM_ALL ${ARGN} ${sources})
isolate_headers(
${target}
"${CMAKE_SOURCE_DIR}"
"${CMAKE_SOURCE_DIR}/tests/${name}"
PRIVATE
)
# Make sure the test isn't optimized away in unity builds
set_target_properties(${target} PROPERTIES
UNITY_BUILD_MODE GROUP
UNITY_BUILD_BATCH_SIZE 0) # Adjust as needed
add_test(NAME ${target} COMMAND ${target})
set_tests_properties(
${target} PROPERTIES
FIXTURES_REQUIRED ${target}_fixture
)
add_test(
NAME ${target}.build
COMMAND
${CMAKE_COMMAND}
--build ${CMAKE_BINARY_DIR}
--config $<CONFIG>
--target ${target}
)
set_tests_properties(${target}.build PROPERTIES
FIXTURES_SETUP ${target}_fixture
)
endfunction()


@@ -1,37 +0,0 @@
{% set os = detect_api.detect_os() %}
{% set arch = detect_api.detect_arch() %}
{% set compiler, version, compiler_exe = detect_api.detect_default_compiler() %}
{% set compiler_version = version %}
{% if os == "Linux" %}
{% set compiler_version = detect_api.default_compiler_version(compiler, version) %}
{% endif %}
[settings]
os={{ os }}
arch={{ arch }}
build_type=Debug
compiler={{compiler}}
compiler.version={{ compiler_version }}
compiler.cppstd=20
{% if os == "Windows" %}
compiler.runtime=static
{% else %}
compiler.libcxx={{detect_api.detect_libcxx(compiler, version, compiler_exe)}}
{% endif %}
[conf]
{% if compiler == "clang" and compiler_version >= 19 %}
tools.build:cxxflags=['-Wno-missing-template-arg-list-after-template-kw']
{% endif %}
{% if compiler == "apple-clang" and compiler_version >= 17 %}
tools.build:cxxflags=['-Wno-missing-template-arg-list-after-template-kw']
{% endif %}
{% if compiler == "clang" and compiler_version == 16 %}
tools.build:cxxflags=['-DBOOST_ASIO_DISABLE_CONCEPTS']
{% endif %}
{% if compiler == "gcc" and compiler_version < 13 %}
tools.build:cxxflags=['-Wno-restrict']
{% endif %}
[tool_requires]
!cmake/*: cmake/[>=3 <4]


@@ -1,4 +1,4 @@
from conan import ConanFile, __version__ as conan_version
from conan import ConanFile
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
import re
@@ -24,20 +24,19 @@ class Xrpl(ConanFile):
}
requires = [
'date/3.0.1',
'grpc/1.50.1',
'libarchive/3.8.1',
'nudb/2.0.9',
'openssl/1.1.1w',
'libarchive/3.6.2',
'nudb/2.0.8',
'openssl/1.1.1u',
'soci/4.0.3',
'zlib/1.3.1',
]
test_requires = [
'doctest/2.4.11',
'wasmedge/0.9.0',
'xxhash/0.8.2',
'zlib/1.2.13',
]
tool_requires = [
'protobuf/3.21.12',
'protobuf/3.21.9',
]
default_options = {
@@ -89,34 +88,30 @@ class Xrpl(ConanFile):
}
def set_version(self):
if self.version is None:
path = f'{self.recipe_folder}/src/libxrpl/protocol/BuildInfo.cpp'
regex = r'versionString\s?=\s?\"(.*)\"'
with open(path, encoding='utf-8') as file:
matches = (re.search(regex, line) for line in file)
match = next(m for m in matches if m)
self.version = match.group(1)
path = f'{self.recipe_folder}/src/libxrpl/protocol/BuildInfo.cpp'
regex = r'versionString\s?=\s?\"(.*)\"'
with open(path, 'r') as file:
matches = (re.search(regex, line) for line in file)
match = next(m for m in matches if m)
self.version = match.group(1)
def configure(self):
if self.settings.compiler == 'apple-clang':
self.options['boost'].visibility = 'global'
def requirements(self):
# Conan 2 requires transitive headers to be specified
transitive_headers_opt = {'transitive_headers': True} if conan_version.split('.')[0] == '2' else {}
self.requires('boost/1.83.0', force=True, **transitive_headers_opt)
self.requires('date/3.0.4', **transitive_headers_opt)
self.requires('lz4/1.10.0', force=True)
self.requires('protobuf/3.21.12', force=True)
self.requires('sqlite3/3.49.1', force=True)
self.requires('boost/1.82.0', force=True)
self.requires('lz4/1.9.3', force=True)
self.requires('protobuf/3.21.9', force=True)
self.requires('sqlite3/3.42.0', force=True)
if self.options.jemalloc:
self.requires('jemalloc/5.3.0')
if self.options.rocksdb:
self.requires('rocksdb/10.0.1')
self.requires('xxhash/0.8.3', **transitive_headers_opt)
self.requires('rocksdb/6.29.5')
exports_sources = (
'CMakeLists.txt',
'bin/getRippledInfo',
'cfg/*',
'cmake/*',
'external/*',
@@ -167,17 +162,7 @@ class Xrpl(ConanFile):
# `include/`, not `include/ripple/proto/`.
libxrpl.includedirs = ['include', 'include/ripple/proto']
libxrpl.requires = [
'boost::headers',
'boost::chrono',
'boost::container',
'boost::coroutine',
'boost::date_time',
'boost::filesystem',
'boost::json',
'boost::program_options',
'boost::regex',
'boost::system',
'boost::thread',
'boost::boost',
'date::date',
'grpc::grpc++',
'libarchive::libarchive',


@@ -30,7 +30,7 @@ the ledger (so the entire network has the same view). This will help the network
see which validators are **currently** unreliable, and adjust their quorum
calculation accordingly.
_Improving the liveness of the network is the main motivation for the negative UNL._
*Improving the liveness of the network is the main motivation for the negative UNL.*
### Targeted Faults
@@ -53,17 +53,16 @@ even if the number of remaining validators gets to 60%. Say we have a network
with 10 validators on the UNL and everything is operating correctly. The quorum
required for this network would be 8 (80% of 10). When validators fail, the
quorum required would be as low as 6 (60% of 10), which is the absolute
**_minimum quorum_**. We need the absolute minimum quorum to be strictly greater
***minimum quorum***. We need the absolute minimum quorum to be strictly greater
than 50% of the original UNL so that there cannot be two partitions of
well-behaved nodes headed in different directions. We arbitrarily choose 60% as
the minimum quorum to give a margin of safety.
Consider these events in the absence of negative UNL:
1. 1:00pm - validator1 fails, votes vs. quorum: 9 >= 8, we have quorum
1. 3:00pm - validator2 fails, votes vs. quorum: 8 >= 8, we have quorum
1. 5:00pm - validator3 fails, votes vs. quorum: 7 < 8, we don't have quorum
- **network cannot validate new ledgers with 3 failed validators**
* **network cannot validate new ledgers with 3 failed validators**
We're below 80% agreement, so new ledgers cannot be validated. This is how the
XRP Ledger operates today, but if the negative UNL was enabled, the events would
@@ -71,20 +70,18 @@ happen as follows. (Please note that the events below are from a simplified
version of our protocol.)
1. 1:00pm - validator1 fails, votes vs. quorum: 9 >= 8, we have quorum
1. 1:40pm - network adds validator1 to negative UNL, quorum changes to ceil(9 \* 0.8), or 8
1. 1:40pm - network adds validator1 to negative UNL, quorum changes to ceil(9 * 0.8), or 8
1. 3:00pm - validator2 fails, votes vs. quorum: 8 >= 8, we have quorum
1. 3:40pm - network adds validator2 to negative UNL, quorum changes to ceil(8 \* 0.8), or 7
1. 3:40pm - network adds validator2 to negative UNL, quorum changes to ceil(8 * 0.8), or 7
1. 5:00pm - validator3 fails, votes vs. quorum: 7 >= 7, we have quorum
1. 5:40pm - network adds validator3 to negative UNL, quorum changes to ceil(7 \* 0.8), or 6
1. 5:40pm - network adds validator3 to negative UNL, quorum changes to ceil(7 * 0.8), or 6
1. 7:00pm - validator4 fails, votes vs. quorum: 6 >= 6, we have quorum
- **network can still validate new ledgers with 4 failed validators**
* **network can still validate new ledgers with 4 failed validators**
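The quorum arithmetic in the timeline above can be sketched directly; this is an illustration of the text's formula, not code from the repository:

```bash
# effective quorum = Ceiling(80% * effective UNL), floored at 60% of the original UNL
original=10
negative=3                                  # validators on the negative UNL
effective=$(( original - negative ))
quorum=$(( (effective * 8 + 9) / 10 ))      # integer ceiling of 0.8 * effective
minimum=$(( (original * 6 + 9) / 10 ))      # absolute minimum quorum, 60% of original
if (( quorum < minimum )); then quorum=${minimum}; fi
echo "${quorum}"                            # prints 6, matching the 5:40pm step
```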
## External Interactions
### Message Format Changes
This proposal will:
1. add a new pseudo-transaction type
1. add the negative UNL to the ledger data structure.
@@ -92,20 +89,19 @@ Any tools or systems that rely on the format of this data will have to be
updated.
### Amendment
This feature **will** need an amendment to activate.
## Design
This section discusses the following topics about the Negative UNL design:
- [Negative UNL protocol overview](#Negative-UNL-Protocol-Overview)
- [Validator reliability measurement](#Validator-Reliability-Measurement)
- [Format Changes](#Format-Changes)
- [Negative UNL maintenance](#Negative-UNL-Maintenance)
- [Quorum size calculation](#Quorum-Size-Calculation)
- [Filter validation messages](#Filter-Validation-Messages)
- [High level sequence diagram of code
* [Negative UNL protocol overview](#Negative-UNL-Protocol-Overview)
* [Validator reliability measurement](#Validator-Reliability-Measurement)
* [Format Changes](#Format-Changes)
* [Negative UNL maintenance](#Negative-UNL-Maintenance)
* [Quorum size calculation](#Quorum-Size-Calculation)
* [Filter validation messages](#Filter-Validation-Messages)
* [High level sequence diagram of code
changes](#High-Level-Sequence-Diagram-of-Code-Changes)
### Negative UNL Protocol Overview
with V in their UNL adjust the quorum, and V's validation message is not counted
when verifying if a ledger is fully validated. V's flow of messages and network
interactions, however, will remain the same.
We define the **\*effective UNL** = original UNL - negative UNL\*, and the
**_effective quorum_** as the quorum of the _effective UNL_. And we set
_effective quorum = Ceiling(80% _ effective UNL)\*.
We define the ***effective UNL** = original UNL - negative UNL*, and the
***effective quorum*** as the quorum of the *effective UNL*. And we set
*effective quorum = Ceiling(80% * effective UNL)*.
### Validator Reliability Measurement
@@ -130,16 +126,16 @@ measure about its validators, but we have chosen ledger validation messages.
This is because every validator shall send one and only one signed validation
message per ledger. This keeps the measurement simple and removes
timing/clock-sync issues. A node will measure the percentage of agreeing
validation messages (*PAV*) received from each validator on the node's UNL. Note
that the node will only count the validation messages that agree with its own
validations.
We define the **PAV** as the **P**ercentage of **A**greed **V**alidation
messages received for the last N ledgers, where N = 256 by default.
When the PAV drops below the ***low-water mark***, the validator is considered
unreliable, and is a candidate to be disabled by being added to the negative
UNL. A validator must have a PAV higher than the ***high-water mark*** to be
re-enabled. The validator is re-enabled by removing it from the negative UNL. In
the implementation, we plan to set the low-water mark as 50% and the high-water
mark as 80%.
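A simplified sketch of this measurement (rippled's actual bookkeeping differs):

```
#include <cstddef>

// Track agreeing validation messages from one validator over the last
// N = 256 ledgers and classify it against the water marks above.
struct ReliabilityMeasure
{
    static constexpr std::size_t window = 256;  // N ledgers
    std::size_t agreed = 0;  // agreeing validations seen in the window

    double
    pav() const
    {
        return static_cast<double>(agreed) / window;
    }

    bool
    disableCandidate() const
    {
        return pav() < 0.5;  // below the low-water mark
    }

    bool
    reEnableCandidate() const
    {
        return pav() > 0.8;  // above the high-water mark
    }
};
```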
### Format Changes
The negative UNL component in a ledger contains three fields.
* ***NegativeUNL***: The current negative UNL, a list of unreliable validators.
* ***ToDisable***: The validator to be added to the negative UNL on the next
flag ledger.
* ***ToReEnable***: The validator to be removed from the negative UNL on the
next flag ledger.
All three fields are optional. When the *ToReEnable* field exists, the
*NegativeUNL* field cannot be empty.
A new pseudo-transaction, ***UNLModify***, is added. It has three fields:

* ***Disabling***: A flag indicating whether the modification is to disable or
to re-enable a validator.
* ***Seq***: The ledger sequence number.
* ***Validator***: The validator to be disabled or re-enabled.
There would be at most one *disable* `UNLModify` and one *re-enable* `UNLModify`
transaction per flag ledger. The full machinery is described further on.
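Conceptually, the pseudo-transaction carries a tuple like the following sketch (the real transactor is built from rippled's serialized types, and the key type here is hypothetical):

```
#include <array>
#include <cstdint>

using PublicKey = std::array<std::uint8_t, 33>;  // hypothetical key type

// The three UNLModify fields described above, as a plain struct.
struct UNLModify
{
    bool disabling;       // true = disable, false = re-enable
    std::uint32_t seq;    // ledger sequence number (a flag ledger)
    PublicKey validator;  // validator to be disabled or re-enabled
};
```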
### Negative UNL Maintenance
The negative UNL can only be modified on the flag ledgers. If a validator's
reliability status changes, it takes two flag ledgers to modify the negative
UNL. Let's see an example of the algorithm:
* Ledger seq = 100: A validator V goes offline.
* Ledger seq = 256: This is a flag ledger, and V's reliability measurement *PAV*
is lower than the low-water mark. Other validators add `UNLModify`
pseudo-transactions `{true, 256, V}` to the transaction set which goes through
the consensus. Then the pseudo-transaction is applied to the negative UNL
ledger component by setting `ToDisable = V`.
* Ledger seq = 257 ~ 511: The negative UNL ledger component is copied from the
parent ledger.
* Ledger seq = 512: This is a flag ledger, and the negative UNL is updated:
`NegativeUNL = NegativeUNL + ToDisable`.
The negative UNL may have up to `MaxNegativeListed = floor(original UNL * 25%)`
validators. The 25% is because of 75% * 80% = 60%, where 75% = 100% - 25%, 80%
is the quorum of the effective UNL, and 60% is the absolute minimum quorum of
the original UNL. Adding more than 25% validators to the negative UNL does not
improve the liveness of the network, because adding more validators to the
negative UNL cannot lower the effective quorum.
The following is the detailed algorithm:
* **If** the ledger seq = x is a flag ledger
    1. Compute `NegativeUNL = NegativeUNL + ToDisable - ToReEnable` if they
       exist in the parent ledger
    1. Try to find a candidate to disable if `sizeof NegativeUNL < MaxNegativeListed`
        1. Find a validator V that has a *PAV* lower than the low-water
           mark, but is not in `NegativeUNL`.
        1. If two or more are found, their public keys are XORed with the hash
           of the parent ledger and the one with the lowest XOR result is
           chosen (see the sketch after this list).
        1. If V is found, create a `UNLModify` pseudo-transaction
           `TxDisableValidator = {true, x, V}`
    1. Try to find a candidate to re-enable if `sizeof NegativeUNL > 0`:
        1. Find a validator U that is in `NegativeUNL` and has a *PAV* higher
           than the high-water mark.
        1. If U is not found, try to find one in `NegativeUNL` but not in the
           local *UNL*.
        1. If two or more are found, their public keys are XORed with the hash
           of the parent ledger and the one with the lowest XOR result is chosen.
        1. If U is found, create a `UNLModify` pseudo-transaction
           `TxReEnableValidator = {false, x, U}`
    1. If any `UNLModify` pseudo-transactions are created, add them to the
       transaction set. The transaction set goes through the consensus algorithm.
    1. If they have enough support, the `UNLModify` pseudo-transactions remain in
       the transaction set agreed by the validators. Then the pseudo-transactions
       are applied to the ledger:
        1. If there is a `TxDisableValidator`, set `ToDisable=TxDisableValidator.V`.
           Else clear `ToDisable`.
        1. If there is a `TxReEnableValidator`, set
           `ToReEnable=TxReEnableValidator.U`. Else clear `ToReEnable`.
* **Else** (not a flag ledger)
    1. Copy the negative UNL ledger component from the parent ledger
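A minimal sketch of the XOR tie-break used in both candidate searches above, with hypothetical key and hash types (rippled defines its own):

```
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

using PublicKey = std::array<std::uint8_t, 33>;   // hypothetical
using LedgerHash = std::array<std::uint8_t, 32>;  // hypothetical

// XOR each candidate's key with the parent ledger hash and pick the lowest
// result. The parent ledger hash makes the choice deterministic across
// honest nodes while varying from flag ledger to flag ledger.
// Precondition: candidates is not empty.
PublicKey
chooseCandidate(
    std::vector<PublicKey> const& candidates,
    LedgerHash const& parentHash)
{
    auto xorWithHash = [&parentHash](PublicKey const& key) {
        PublicKey out = key;
        for (std::size_t i = 0; i < parentHash.size(); ++i)
            out[i] ^= parentHash[i];
        return out;
    };
    return *std::min_element(
        candidates.begin(),
        candidates.end(),
        [&](PublicKey const& a, PublicKey const& b) {
            return xorWithHash(a) < xorWithHash(b);
        });
}
```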
The negative UNL is stored on each ledger because we don't know when a validator
may reconnect to the network. If the negative UNL was stored only on every flag
The diagram below is the sequence of one round of consensus. Classes and
components with non-trivial changes are colored green.
* The `ValidatorList` class is modified to compute the quorum of the effective
  UNL.
* The `Validations` class provides an interface for querying the validation
  messages from trusted validators.
* The `ConsensusAdaptor` component:
    * The `RCLConsensus::Adaptor` class is modified for creating `UNLModify`
      Pseudo-Transactions.
    * The `Change` class is modified for applying `UNLModify`
      Pseudo-Transactions.
* The `Ledger` class is modified for creating and adjusting the negative UNL
  ledger component.
* The `LedgerMaster` class is modified for filtering out validation messages
  from negative UNL validators when verifying if a ledger is fully
  validated.
![Sequence diagram](./negativeUNL_highLevel_sequence.png?raw=true "Negative UNL
Changes")
## Roads Not Taken
### Use a Mechanism Like Fee Voting to Process UNLModify Pseudo-Transactions
and different quorums for the same ledger. As a result, the network's safety is
impacted.
This updated version does not impact safety, though it operates a bit more slowly.
The negative UNL modifications in the *UNLModify* pseudo-transaction approved by
the consensus will take effect at the next flag ledger. The extra time of the
256 ledgers should be enough for nodes to be in sync with the negative UNL
modifications.
### Validator Reliability Measurement and Flag Ledger Frequency
If the ledger time is about 4.5 seconds and the low-water mark is 50%, then in
the worst case, it takes 48 minutes *((0.5 * 256 + 256 + 256) * 4.5 / 60 = 48)*
to put an offline validator on the negative UNL. We considered lowering the flag
ledger frequency so that the negative UNL can be more responsive. We also
considered decoupling the reliability measurement and flag ledger frequency to
be more flexible. In practice, however, their benefits are not clear.
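Spelled out, the worst-case arithmetic above is:

```
(0.5 \cdot 256 + 256 + 256) \cdot \frac{4.5}{60}
  = 640 \cdot \frac{4.5}{60}
  = 48 \text{ minutes}
```

The 0.5 × 256 term is the time for the PAV to fall below the 50% low-water mark (half of the 256-ledger measurement window), and the two 256-ledger terms cover the two flag ledgers needed to modify the negative UNL.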
## New Attack Vectors
A group of malicious validators may try to frame a reliable validator and put it
on the negative UNL. But they cannot succeed, because:
1. A reliable validator sends a signed validation message every ledger. A
   sufficient peer-to-peer network will propagate the validation messages to other
   validators. The validators will decide if another validator is reliable or not
   only by its local observation of the validation messages received. So an honest
   validator's vote on another validator's reliability is accurate.
1. Given the votes are accurate, and one vote per validator, an honest validator
   will not create a UNLModify transaction of a reliable validator.
1. A validator can be added to a negative UNL only through a UNLModify
   transaction.
Assuming the group of malicious validators is less than the quorum, they cannot
frame a reliable validator.
The bullet points below briefly summarize the current proposal:
* The motivation of the negative UNL is to improve the liveness of the network.
* The targeted faults are the ones frequently observed in the production
  network.
* Validators propose negative UNL candidates based on their local measurements.
* The absolute minimum quorum is 60% of the original UNL.
* The format of the ledger is changed, and a new *UNLModify* pseudo-transaction
  is added. Any tools or systems that rely on the format of these data will have
  to be updated.
* The negative UNL can only be modified on the flag ledgers.
* At most one validator can be added to the negative UNL at a flag ledger.
* At most one validator can be removed from the negative UNL at a flag ledger.
* If a validator's reliability status changes, it takes two flag ledgers to
  modify the negative UNL.
* The quorum is the larger of 80% of the effective UNL and 60% of the original
  UNL.
* If a validator is on the negative UNL, its validation messages are ignored
  when the local node verifies if a ledger is fully validated.
## FAQ
<h3> Question: How is a validator added to the negative UNL, and how is a
validator removed from the negative UNL? </h3>
A validator's reliability is measured by other validators. If a validator
becomes unreliable, at a flag ledger, other validators propose *UNLModify*
pseudo-transactions which vote to add the validator to the negative UNL during
the consensus session. If agreed, the validator is added to the negative UNL at
the next flag ledger. The mechanism of removing a validator from the negative
UNL is the same.
### Question: Given a negative UNL, what happens if the UNL changes?
Answer: Let's consider the cases:
1. A validator is added to the UNL, and it is already in the negative UNL. This
   case could happen when not all the nodes have the same UNL. Note that the
   negative UNL on the ledger lists unreliable nodes that are not necessarily the
   validators for everyone.

   In this case, the liveness is affected negatively, because the minimum
   quorum could be larger but the usable validators are not increased.
1. A validator is removed from the UNL, and it is in the negative UNL.

   In this case, the liveness is affected positively, because the quorum could
   be smaller but the usable validators are not reduced.
1. A validator is added to the UNL, and it is not in the negative UNL.
1. A validator is removed from the UNL, and it is not in the negative UNL.
Cases 3 and 4 are not affected by the negative UNL protocol.
### Question: Can we simply lower the quorum to 60% without the negative UNL?
Answer: No, because the negative UNL approach is safer.
First, let's compare the two approaches intuitively, (1) the *negative UNL*
approach, and (2) *lower quorum*: simply lowering the quorum from 80% to 60%
without the negative UNL. The negative UNL approach uses consensus to come up
with a list of unreliable validators, which are then removed from the effective
UNL temporarily. With this approach, the list of unreliable validators is agreed
Next we compare the two approaches quantitatively with examples, and apply
Theorem 8 of [Analysis of the XRP Ledger Consensus
Protocol](https://arxiv.org/abs/1802.07242) paper:
*XRP LCP guarantees fork safety if **O<sub>i,j</sub> > n<sub>j</sub> / 2 +
n<sub>i</sub> - q<sub>i</sub> + t<sub>i,j</sub>** for every pair of nodes
P<sub>i</sub>, P<sub>j</sub>,*
where *O<sub>i,j</sub>* is the overlapping requirement, n<sub>j</sub> and
n<sub>i</sub> are UNL sizes, q<sub>i</sub> is the quorum size of P<sub>i</sub>,
*t<sub>i,j</sub> = min(t<sub>i</sub>, t<sub>j</sub>, O<sub>i,j</sub>)*, and
t<sub>i</sub> and t<sub>j</sub> are the number of faults that can be tolerated by
P<sub>i</sub> and P<sub>j</sub>.
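For readability, the same condition in LaTeX notation:

```
O_{i,j} > \frac{n_j}{2} + (n_i - q_i) + t_{i,j},
\qquad t_{i,j} = \min(t_i,\ t_j,\ O_{i,j})
```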
We denote *UNL<sub>i</sub>* as *P<sub>i</sub>'s UNL*, and *|UNL<sub>i</sub>|* as
the size of *P<sub>i</sub>'s UNL*.
Assuming *|UNL<sub>i</sub>| = |UNL<sub>j</sub>|*, let's consider the following
three cases:
1. With 80% quorum and 20% faults, *O<sub>i,j</sub> > 100% / 2 + 100% - 80% +
   20% = 90%*. I.e. fork safety requires > 90% UNL overlaps. This is one of the
   results in the analysis paper.
1. If the quorum is 60%, the relationship between the overlapping requirement
   and the faults that can be tolerated is *O<sub>i,j</sub> > 90% +
   t<sub>i,j</sub>*. Under the same overlapping condition (i.e. 90%), to guarantee
   the fork safety, the network cannot tolerate any faults. So under the same
   overlapping condition, if the quorum is simply lowered, the network can tolerate
   fewer faults.
1. With the negative UNL approach, we want to argue that the inequality
   *O<sub>i,j</sub> > n<sub>j</sub> / 2 + n<sub>i</sub> - q<sub>i</sub> +
   t<sub>i,j</sub>* is always true to guarantee fork safety, while the negative UNL
   protocol runs, i.e. the effective quorum is lowered without weakening the
   network's fault tolerance. To make the discussion easier, we rewrite the
   inequality as *O<sub>i,j</sub> > n<sub>j</sub> / 2 + (n<sub>i</sub> -
   q<sub>i</sub>) + min(t<sub>i</sub>, t<sub>j</sub>)*, where O<sub>i,j</sub> is
   dropped from the definition of t<sub>i,j</sub> because *O<sub>i,j</sub> >
   min(t<sub>i</sub>, t<sub>j</sub>)* always holds under the parameters we will
   use. Assuming a validator V is added to the negative UNL, now let's consider the
   4 cases:
    1. V is not on UNL<sub>i</sub> nor UNL<sub>j</sub>

       The inequality holds because none of the variables change.

    1. V is on UNL<sub>i</sub> but not on UNL<sub>j</sub>

       The value of *(n<sub>i</sub> - q<sub>i</sub>)* is smaller. The value of
       *min(t<sub>i</sub>, t<sub>j</sub>)* could be smaller too. Other
       variables do not change. Overall, the left side of the inequality does
       not change, but the right side is smaller. So the inequality holds.

    1. V is not on UNL<sub>i</sub> but on UNL<sub>j</sub>

       The value of *n<sub>j</sub> / 2* is smaller. The value of
       *min(t<sub>i</sub>, t<sub>j</sub>)* could be smaller too. Other
       variables do not change. Overall, the left side of the inequality does
       not change, but the right side is smaller. So the inequality holds.

    1. V is on both UNL<sub>i</sub> and UNL<sub>j</sub>

       The value of *O<sub>i,j</sub>* is reduced by 1. The values of
       *n<sub>j</sub> / 2*, *(n<sub>i</sub> - q<sub>i</sub>)*, and
       *min(t<sub>i</sub>, t<sub>j</sub>)* are reduced by 0.5, 0.2, and 1
       respectively. The right side is reduced by 1.7. Overall, the left side
       of the inequality is reduced by 1, and the right side is reduced by 1.7.
       So the inequality holds.

   The inequality holds for all the cases. So with the negative UNL approach,
   the network's fork safety is preserved, while the quorum is lowered, which
   increases the network's liveness.
<h3> Question: We have observed that occasionally a validator wanders off on its
own chain. How is this case handled by the negative UNL algorithm? </h3>
will be used after that. We want to see the test cases still pass with real
network delay. A test case specifies:
1. a UNL with a different number of validators for different test cases,
1. a network with zero or more non-validator nodes,
1. a sequence of validator reliability change events (by killing/restarting
nodes, or by running modified rippled that does not send all validation
messages),
1. the correct outcomes.
For all the test cases, the correct outcomes are verified by examining logs. We
will grep the log to see if the correct negative UNLs are generated, and whether
timing parameters of rippled will be changed to have faster ledger time. Most if
not all test cases do not need client transactions.
For example, the test cases for the prototype:
1. A 10-validator UNL.
1. The network does not have other nodes.
1. The validators will be started from the genesis. Once they start to produce
1. A sequence of events (or the lack of events) such as a killed validator is
added to the negative UNL.
#### Roads Not Taken: Test with Extended CSF
We considered testing with the current unit test framework, specifically the
[Consensus Simulation
Framework](https://github.com/ripple/rippled/blob/develop/src/test/csf/README.md)
(CSF). However, the CSF currently can only test the generic consensus algorithm
as in the paper: [Analysis of the XRP Ledger Consensus
Protocol](https://arxiv.org/abs/1802.07242).
pattern and the way coroutines are implemented, where every yield saves the spot
in the code where it left off and every resume jumps back to that spot.
### Sequence Diagram
![Sequence diagram](./ledger_replay_sequence.png?raw=true "A successful ledger replay")
### Class Diagram
![Class diagram](./ledger_replay_classes.png?raw=true "Ledger replay classes")
## Function
* Minimize external dependencies
* Pass options in the ctor instead of using theConfig
* Use as few other classes as possible
# Coding Standards
Coding standards used here gradually evolve and propagate through
code reviews. Some aspects are enforced more strictly than others.
## Rules
These rules only apply to our own code. We can't enforce any sort of
style on the external repositories and libraries we include. The best
guideline is to maintain the standards that are used in those libraries.
* Tab inserts 4 spaces. No tab characters.
* Braces are indented in the [Allman style][1].
* Modern C++ principles. No naked `new` or `delete`.
* Line lengths limited to 80 characters. Exceptions limited to data and tables.
## Guidelines
why you're doing it. Think, use common sense, and consider that
your changes will probably need to be maintained long after you've
moved on to other projects.
* Use white space and blank lines to guide the eye and keep your intent clear.
* Put private data members at the top of a class, and the 6 public special
  members immediately after, in the following order (see the example after
  this list):
    * Destructor
    * Default constructor
    * Copy constructor
    * Copy assignment
    * Move constructor
    * Move assignment
* Don't over-inline by defining large functions within the class
  declaration, not even for template classes.
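For instance, a class following the member-ordering guideline above might look like:

```
// Private data first, then the six public special members, in order.
class Widget
{
private:
    int value_ = 0;

public:
    ~Widget () = default;                                // Destructor
    Widget () = default;                                 // Default constructor
    Widget (Widget const& other) = default;              // Copy constructor
    Widget& operator= (Widget const& other) = default;   // Copy assignment
    Widget (Widget&& other) = default;                   // Move constructor
    Widget& operator= (Widget&& other) = default;        // Move assignment

    int
    value () const
    {
        return value_;
    }
};
```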
## Formatting
The goal of source code formatting should always be to make things as easy to
read as possible. White space is used to guide the eye so that details are not
overlooked. Blank lines are used to separate code into "paragraphs."
* Always place a space before and after all binary operators,
  especially assignments (`operator=`).
* The `!` operator should be preceded by a space, but not followed by one.
* The `~` operator should be preceded by a space, but not followed by one.
* The `++` and `--` operators should have no spaces between the operator and
  the operand.
* A space never appears before a comma, and always appears after a comma.
* Don't put spaces after a parenthesis. A typical member function call might
  look like this: `foobar (1, 2, 3);`
* In general, leave a blank line before an `if` statement.
* In general, leave a blank line after a closing brace `}`.
* Do not place code on the same line as any opening or
  closing brace.
* Do not write `if` statements all-on-one-line. The exception to this is when
  you've got a sequence of similar `if` statements, and are aligning them all
  vertically to highlight their similarities.
* In an `if-else` statement, if you surround one half of the statement with
  braces, you also need to put braces around the other half, to match.
* When writing a pointer type, use this spacing: `SomeObject* myObject`.
  Technically, a more correct spacing would be `SomeObject *myObject`, but
  it makes more sense for the asterisk to be grouped with the type name,
  since being a pointer is part of the type, not the variable name. The only
  time that this can lead to any problems is when you're declaring multiple
  pointers of the same type in the same statement - which leads on to the next
  rule:
* When declaring multiple pointers, never do so in a single statement, e.g.
  `SomeObject* p1, *p2;` - instead, always split them out onto separate lines
  and write the type name again, to make it quite clear what's going on, and
  avoid the danger of missing out any vital asterisks (see the example after
  this list).
* The previous point also applies to references, so always put the `&` next to
  the type rather than the variable, e.g. `void foo (Thing const& thing)`. And
  don't put a space on both sides of the `*` or `&` - always put a space after
  it, but never before it.
* The word `const` should be placed to the right of the thing that it modifies,
  for consistency. For example `int const` refers to an int which is const.
  `int const*` is a pointer to an int which is const. `int *const` is a const
  pointer to an int.
* Always place a space in between the template angle brackets and the type
  name. Template code is already hard enough to read!
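For instance, the pointer and reference rules above in practice:

```
struct Thing
{
    int value = 0;
};

// One pointer per declaration, with the asterisk next to the type.
Thing* first = nullptr;
Thing* second = nullptr;

// The & stays with the type; const goes to the right of what it modifies.
int
getValue (Thing const& thing)
{
    return thing.value;
}
```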
[1]: http://en.wikipedia.org/wiki/Indent_style#Allman_style
and header under /opt/local/include:
$ scons clang profile-jemalloc=/opt/local
----------------------
## Using the jemalloc library from within the code
Linking against the jemalloc library will override
the system's default `malloc()` and related functions with jemalloc's
implementation. This is the case even if the code is not instrumented
to use jemalloc's specific API.
Install these dependencies:
- [Doxygen](http://www.doxygen.nl): All major platforms have [official binary
distributions](http://www.doxygen.nl/download.html#srcbin), or you can
build from [source](http://www.doxygen.nl/download.html#srcbin).
- MacOS: We recommend installing via Homebrew: `brew install doxygen`.
The executable will be installed in `/usr/local/bin` which is already
in the default `PATH`.
$ ln -s /Applications/Doxygen.app/Contents/Resources/doxygen /usr/local/bin/doxygen
```
- [PlantUML](http://plantuml.com):
1. Install a functioning Java runtime, if you don't already have one.
2. Download [`plantuml.jar`](http://sourceforge.net/projects/plantuml/files/plantuml.jar/download).
- [Graphviz](https://www.graphviz.org):
- Linux: Install from your package manager.
- Windows: Use an [official installer](https://graphviz.gitlab.io/_pages/Download/Download_windows.html).
- MacOS: Install via Homebrew: `brew install graphviz`.
## Docker
Instead of installing the above dependencies locally, you can use the official
build environment Docker image, which has all of them installed already.
1. Install [Docker](https://docs.docker.com/engine/installation/)
2. Pull the image:
   ```
   sudo docker pull rippleci/rippled-ci-builder:2944b78d22db
   ```
3. Run the image from the project folder:
   ```
   sudo docker run -v $PWD:/opt/rippled --rm rippleci/rippled-ci-builder:2944b78d22db
   ```
## Build
docs/build/conan.md
we should first understand _why_ we use Conan,
and to understand that,
we need to understand how we use CMake.
### CMake
Technically, you don't need CMake to build this project.
Parameters include:
- where to find the compiler and linker
- where to find dependencies, e.g. libraries and headers
- how to link dependencies, e.g. any special compiler or linker flags that
  need to be used with them, including preprocessor definitions
- how to compile translation units, e.g. with optimizations, debug symbols,
  position-independent code, etc.
- on Windows, which runtime library to link with
For some of these parameters, like the build system and compiler,
@@ -53,6 +54,7 @@ Most humans prefer to put them into a configuration file, once, that
CMake can read every time it is configured.
For CMake, that file is a [toolchain file][toolchain].
### Conan
These next few paragraphs on Conan are going to read much like the ones above
Those files include:
- A single toolchain file.
- For every dependency, a CMake [package configuration file][pcf],
  [package version file][pvf], and for every build type, a package
  targets file.
  Together, these files implement version checking and define `IMPORTED`
  targets for the dependencies.
The toolchain file itself amends the search path
([`CMAKE_PREFIX_PATH`][prefix_path]) so that [`find_package()`][find_package]
We recommend two different methods to depend on libxrpl in your own [CMake][]
project.
Both methods add a CMake library target named `xrpl::libxrpl`.
## Conan requirement
The first method adds libxrpl as a [Conan][] requirement.
With this method, there is no need for a Git [submodule][].
cmake --build . --parallel
```
## CMake subdirectory
The second method adds the [rippled][] project as a CMake
cmake --build . --parallel
```
[add_subdirectory]: https://cmake.org/cmake/help/latest/command/add_subdirectory.html
[submodule]: https://git-scm.com/book/en/v2/Git-Tools-Submodules
[rippled]: https://github.com/ripple/rippled
platforms: Linux, macOS, or Windows.
[BUILD.md]: ../../BUILD.md
## Linux
Package ecosystems vary across Linux distributions,
so there is no one set of instructions that will work for every Linux user.
These instructions are written for Ubuntu 22.04.
They are largely copied from the [script][1] used to configure our Docker
container for continuous integration.
That script handles many more responsibilities.
These instructions are just the bare minimum to build one configuration of
rippled.
You can check that codebase for other Linux distributions and versions.
If you cannot find yours there,
then we hope that these instructions can at least guide you in the right
direction.

```
apt update
apt install --yes curl git libssl-dev python3.10-dev python3-pip make g++-11 libprotobuf-dev protobuf-compiler

curl --location --remote-name \
  "https://github.com/Kitware/CMake/releases/download/v3.25.1/cmake-3.25.1.tar.gz"
tar -xzf cmake-3.25.1.tar.gz
rm cmake-3.25.1.tar.gz
cd cmake-3.25.1
./bootstrap --parallel=$(nproc)
make --jobs $(nproc)
make install
cd ..

pip3 install 'conan<2'
```
[1]: https://github.com/thejohnfreeman/rippled-docker/blob/master/ubuntu-22.04/install.sh
## macOS
minimum required (see [BUILD.md][]).
clang --version
```
### Install Xcode Specific Version (Optional)
If you develop other applications using Xcode, you might be consistently updating to the newest version of Apple Clang.
This will likely cause issues when building rippled. You may want to install a specific version of Xcode:
1. **Download Xcode**
- Visit [Apple Developer Downloads](https://developer.apple.com/download/more/)
- Sign in with your Apple Developer account
- Search for an Xcode version that includes **Apple Clang (Expected Version)**
- Download the `.xip` file
2. **Install and Configure Xcode**
```bash
# Extract the .xip file and rename for version management
# Example: Xcode_16.2.app
# Move to Applications directory
sudo mv Xcode_16.2.app /Applications/
# Set as default toolchain (persistent)
sudo xcode-select -s /Applications/Xcode_16.2.app/Contents/Developer
# Set as environment variable (temporary)
export DEVELOPER_DIR=/Applications/Xcode_16.2.app/Contents/Developer
```
The command line developer tools should include Git too:
```
and use it to install Conan:
brew update
brew install xz
brew install pyenv
pyenv install 3.10-dev
pyenv global 3.10-dev
eval "$(pyenv init -)"
pip install 'conan<2'
```
Install CMake with Homebrew too:
docs/build/install.md
like CentOS.
Installing from source is an option for all platforms,
and the only supported option for installing custom builds.
## From source
From a source build, you can install rippled and libxrpl using CMake's
The default [prefix][1] is typically `/usr/local` on Linux and macOS and
[1]: https://cmake.org/cmake/help/latest/variable/CMAKE_INSTALL_PREFIX.html
## With the APT package manager
1. Update repositories:
sudo apt update -y
2. Install utilities:
sudo apt install -y apt-transport-https ca-certificates wget gnupg
3. Add Ripple's package-signing GPG key to your list of trusted keys:
sudo mkdir /usr/local/share/keyrings/
wget -q -O - "https://repos.ripple.com/repos/api/gpg/key/public" | gpg --dearmor > ripple-key.gpg
sudo mv ripple-key.gpg /usr/local/share/keyrings
4. Check the fingerprint of the newly-added key:
gpg /usr/local/share/keyrings/ripple-key.gpg
uid TechOps Team at Ripple <techops+rippled@ripple.com>
sub rsa3072 2019-02-14 [E] [expires: 2026-02-17]
In particular, make sure that the fingerprint matches. (In the above example, the fingerprint is on the third line, starting with `C001`.)
5. Add the appropriate Ripple repository for your operating system version:
echo "deb [signed-by=/usr/local/share/keyrings/ripple-key.gpg] https://repos.ripple.com/repos/rippled-deb focal stable" | \
sudo tee -a /etc/apt/sources.list.d/ripple.list
The above example is appropriate for **Ubuntu 20.04 Focal Fossa**. For other operating systems, replace the word `focal` with one of the following:
- `jammy` for **Ubuntu 22.04 Jammy Jellyfish**
- `bionic` for **Ubuntu 18.04 Bionic Beaver**
- `bullseye` for **Debian 11 Bullseye**
- `buster` for **Debian 10 Buster**
If you want access to development or pre-release versions of `rippled`, use one of the following instead of `stable`:
- `unstable` - Pre-release builds ([`release` branch](https://github.com/ripple/rippled/tree/release))
- `nightly` - Experimental/development builds ([`develop` branch](https://github.com/ripple/rippled/tree/develop))
**Warning:** Unstable and nightly builds may be broken at any time. Do not use these builds for production servers.
6. Fetch the Ripple repository.
sudo apt -y update
7. Install the `rippled` software package:
sudo apt -y install rippled
8. Check the status of the `rippled` service:
systemctl status rippled.service
sudo systemctl start rippled.service
9. Optional: allow `rippled` to bind to privileged ports.
This allows you to serve incoming API requests on port 80 or 443. (If you want to do so, you must also update the config file's port settings.)
sudo setcap 'cap_net_bind_service=+ep' /opt/ripple/bin/rippled
## With the YUM package manager
1. Install the Ripple RPM repository:
Choose the appropriate RPM repository for the stability of releases you want:
- `stable` for the latest production release (`master` branch)
- `unstable` for pre-release builds (`release` branch)
- `nightly` for experimental/development builds (`develop` branch)
*Stable*
cat << REPOFILE | sudo tee /etc/yum.repos.d/ripple.repo
[ripple-stable]
gpgkey=https://repos.ripple.com/repos/rippled-rpm/stable/repodata/repomd.xml.key
REPOFILE
*Unstable*
cat << REPOFILE | sudo tee /etc/yum.repos.d/ripple.repo
[ripple-unstable]
gpgkey=https://repos.ripple.com/repos/rippled-rpm/unstable/repodata/repomd.xml.key
REPOFILE
*Nightly*
cat << REPOFILE | sudo tee /etc/yum.repos.d/ripple.repo
[ripple-nightly]
gpgkey=https://repos.ripple.com/repos/rippled-rpm/nightly/repodata/repomd.xml.key
REPOFILE
2. Fetch the latest repo updates:
sudo yum -y update
3. Install the new `rippled` package:
sudo yum install -y rippled
4. Configure the `rippled` service to start on boot:
sudo systemctl enable rippled.service
5. Start the `rippled` service:
sudo systemctl start rippled.service
**This section is a work in progress!!**
Consensus is the task of reaching agreement within a distributed system in the
presence of faulty or even malicious participants. This document outlines the
[XRP Ledger Consensus Algorithm](https://arxiv.org/abs/1802.07242)
as implemented in [rippled](https://github.com/ripple/rippled), but
focuses on its utility as a generic consensus algorithm independent of the
## Distributed Agreement
A challenge for distributed systems is reaching agreement on changes in shared
state. For the Ripple network, the shared state is the current ledger--account
information, account balances, order books and other financial data. We will
refer to shared distributed state as a *ledger* throughout the remainder of this
document.
![Ledger Chain](images/consensus/ledger_chain.png "Ledger Chain")
As shown above, new ledgers are made by applying a set of transactions to the
prior ledger. For the Ripple network, transactions include payments,
modification of account settings, updates to offers and more.
In a centralized system, generating the next ledger is trivial since there is a
single unique arbiter of which transactions to include and how to apply them to
a ledger. For decentralized systems, participants must resolve disagreements on
the set of transactions to include, the order to apply those transactions, and
even the resulting ledger after applying the transactions. This is even more
difficult when some participants are faulty or malicious.
The Ripple network is a decentralized and **trust-full** network. Anyone is free
to join and participants are free to choose a subset of peers that are
collectively trusted to not collude in an attempt to defraud the participant.
Leveraging this network of trust, the Ripple algorithm has two main components.
* *Consensus* in which network participants agree on the transactions to apply
to a prior ledger, based on the positions of their chosen peers.
* *Validation* in which network participants agree on what ledger was
generated, based on the ledgers generated by chosen peers.
These phases are continually repeated to process transactions submitted to the
network, generating successive ledgers and giving rise to the blockchain ledger
history depicted below. In this diagram, time is flowing to the right, but
links between ledgers point backward to the parent. Also note the alternate
Ledger 2 that was generated by some participants, but which failed validation
and was abandoned.
The remainder of this section describes the Consensus and Validation algorithms
in more detail and is meant as a companion guide to understanding the generic
implementation in `rippled`. The document **does not** discuss correctness,
fault-tolerance or liveness properties of the algorithms or the full details of
how they integrate within `rippled` to support the Ripple Consensus Ledger.
### Definitions
* The *ledger* is the shared distributed state. Each ledger has a unique ID to
distinguish it from all other ledgers. During consensus, the *previous*,
*prior* or *last-closed* ledger is the most recent ledger seen by consensus
and is the basis upon which it will build the next ledger.
* A *transaction* is an instruction for an atomic change in the ledger state. A
unique ID distinguishes a transaction from other transactions.
* A *transaction set* is a set of transactions under consideration by consensus.
The goal of consensus is to reach agreement on this set. The generic
consensus algorithm does not rely on an ordering of transactions within the
set, nor does it specify how to apply a transaction set to a ledger to
generate a new ledger. A unique ID distinguishes a set of transactions from
all other sets of transactions.
* A *node* is one of the distributed actors running the consensus algorithm. It
has a unique ID to distinguish it from all other nodes.
* A *peer* of a node is another node that it has chosen to follow and which it
believes will not collude with other chosen peers. The choice of peers is not
symmetric, since participants can decide on their chosen sets independently.
* A *position* is the current belief of the next ledger's transaction set and
close time. Position can refer to the node's own position or the position of a
peer.
* A *proposal* is one of a sequence of positions a node shares during consensus.
An initial proposal contains the starting position taken by a node before it
considers any peer positions. If a node subsequently updates its position in
response to its peers, it will issue an updated proposal. A proposal is
uniquely identified by the ID of the proposing node, the ID of the position
taken, the ID of the prior ledger the proposal is for, and the sequence number
of the proposal.
* A *dispute* is a transaction that is either not part of a node's position or
not in a peer's position. During consensus, the node will add or remove
disputed transactions from its position based on that transaction's support
amongst its peers.
Note that most types have an ID as a lightweight identifier of instances of that
type. Consensus often operates on the IDs directly since the underlying type is
potentially expensive to share over the network. For example, proposals only
contain the ID of the position of a peer. Since many peers likely have the same
position, this reduces the need to send the full transaction set multiple times.
Instead, a node can request the transaction set from the network if necessary.
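As a sketch of this ID-first design (the identifier types here are hypothetical; the generic implementation templates over them):

```
#include <cstdint>
#include <string>

using NodeID = std::uint64_t;   // hypothetical
using LedgerID = std::string;   // hypothetical
using TxSetID = std::string;    // hypothetical

// A proposal carries only lightweight IDs: the proposing node, the position
// (transaction set) taken, the prior ledger it builds on, and a sequence
// number distinguishing successive proposals from the same node.
struct Proposal
{
    NodeID node;
    TxSetID position;
    LedgerID prevLedger;
    std::uint32_t seq;
};
```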
### Overview
![Consensus Overview](images/consensus/consensus_overview.png "Consensus Overview")
The diagram above is an overview of the consensus process from the perspective
of a single participant. Recall that during a single consensus round, a node is
trying to agree with its peers on which transactions to apply to its prior
ledger when generating the next ledger. It also attempts to agree on the
[network time when the ledger closed](#effective_close_time). There are
3 main phases to a consensus round:
* A call to `startRound` places the node in the `Open` phase. In this phase,
  the node is waiting for transactions to include in its open ledger.
* At some point, the node will `Close` the open ledger and transition to the
  `Establish` phase. In this phase, the node shares/receives peer proposals on
  which transactions should be accepted in the closed ledger.
* At some point, the node determines it has reached consensus with its peers on
  which transactions to include. It transitions to the `Accept` phase. In this
  phase, the node works on applying the transactions to the prior ledger to
  generate a new closed ledger. Once the new ledger is completed, the node shares
  the validated ledger hash with the network and makes a call to `startRound` to
  start the cycle again for the next ledger.
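A minimal sketch of that progression (in rippled the transitions are driven from `timerEntry`, not a simple helper):

```
// The three phases named above; a round moves through them in order.
enum class ConsensusPhase { Open, Establish, Accept };

ConsensusPhase
nextPhase (ConsensusPhase phase)
{
    switch (phase)
    {
        case ConsensusPhase::Open:
            // Close the open ledger; start exchanging proposals.
            return ConsensusPhase::Establish;
        case ConsensusPhase::Establish:
            // Consensus reached; apply the agreed transaction set.
            return ConsensusPhase::Accept;
        case ConsensusPhase::Accept:
            // New ledger shared; startRound opens the next round.
            return ConsensusPhase::Open;
    }
    return phase;  // unreachable
}
```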
Throughout, a heartbeat timer calls `timerEntry` at a regular frequency to drive
the process forward. Although the `startRound` call occurs at arbitrary times
based on when the initial round began and the time it takes to apply
transactions, the transitions from `Open` to `Establish` and `Establish` to
`Accept` only occur during calls to `timerEntry`. Similarly, transactions can
arrive at arbitrary times, independent of the heartbeat timer. Transactions
received after the `Open` to `Close` transition and not part of peer proposals
won't be considered until the next consensus round. They are represented above
by the light green triangles.
Peer proposals are issued by a node during a `timerEntry` call, but since peers
do not synchronize `timerEntry` calls, they are received by other peers at
arbitrary times. Peer proposals are only considered if received prior to the
`Establish` to `Accept` transition, and only if the peer is working on the same
prior ledger. Peer proposals received after consensus is reached will not be
meaningful and are represented above by the circle with the X in it. Only
proposals from chosen peers are considered.
### Effective Close Time ### {#effective_close_time}
In addition to agreeing on a transaction set, each consensus round tries to
agree on the time the ledger closed. Each node calculates its own close time
when it closes the open ledger. This exact close time is rounded to the nearest
multiple of the current *effective close time resolution*. It is this
*effective close time* that nodes seek to agree on. This allows servers to
derive a common time for a ledger without the need for perfectly synchronized
clocks. As depicted below, the 3 pink arrows represent exact close times from 3
consensus nodes that round to the same effective close time given the current
resolution. The remaining arrow represents an exact close time that rounds to a
different effective close time given the current resolution.
![Effective Close Time](images/consensus/EffCloseTime.png "Effective Close Time")
The effective close time is part of the node's position and is shared with peers
in its proposals. Just like the position on the consensus transaction set, a
node will update its close time position in response to its peers' effective
close time positions. Peers can agree to disagree on the close time, in which
case the effective close time is taken as 1 second past the prior close.
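A minimal sketch of the rounding step, assuming close times and the resolution
are plain `std::chrono::seconds` (the real helper differs in detail):

```cpp
#include <algorithm>
#include <chrono>

using Seconds = std::chrono::seconds;

// Round an exact close time to the nearest multiple of the current
// resolution (assumed non-zero), clamping so the result is strictly
// after the prior close; that clamp is the same value used when peers
// agree to disagree.
Seconds
effectiveCloseTime(Seconds exactClose, Seconds resolution, Seconds priorClose)
{
    auto const rounded =
        ((exactClose + resolution / 2) / resolution) * resolution;
    return std::max(rounded, priorClose + Seconds{1});
}
```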
The close time resolution is itself dynamic, decreasing (coarser) resolution in
response to rounds in which nodes fail to agree and increasing (finer)
resolution when nodes successfully reach close time consensus.
Internally, a node operates under one of the following consensus modes. Either
of the first two modes may be chosen when a consensus round starts.
* *Proposing* indicates the node is a full-fledged consensus participant. It
takes on positions and sends proposals to its peers.
* *Observing* indicates the node is a passive consensus participant. It
maintains a position internally, but does not propose that position to its
peers. Instead, it receives peer proposals and updates its position
to track the majority of its peers. This may be preferred if the node is only
being used to track the state of the network or during a start-up phase while
it is still synchronizing with the network.
The other two modes are set internally during the consensus round when the node
believes it is no longer working on the dominant ledger chain based on peer
validations. It checks this on every call to `timerEntry`.
* *Wrong Ledger* indicates the node is not working on the correct prior ledger
and does not have it available. It requests that ledger from the network, but
continues to work towards consensus this round while waiting. If it had been
*proposing*, it will send a special "bowout" proposal to its peers to indicate
its change in mode for the rest of this round. For the duration of the round,
it defers to peer positions for determining the consensus outcome as if it
were just *observing*.
* *Switch Ledger* indicates that the node has acquired the correct prior ledger
from the network. Although it now has the correct prior ledger, the fact that
it had the wrong one at some point during this round means it is likely behind
and should defer to peer positions for determining the consensus outcome.
![Consensus Modes](images/consensus/consensus_modes.png "Consensus Modes")
Once either wrong ledger or switch ledger are reached, the node cannot
return to proposing or observing until the next consensus round. However,
the node could change its view of the correct prior ledger, so going from
switch ledger to wrong ledger and back again is possible.
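A compact way to picture the four modes (enumerator names mirror the prose and
are illustrative, not necessarily the code's):

```cpp
enum class ConsensusMode {
    Proposing,      // full participant: takes and shares positions
    Observing,      // tracks peers' positions without proposing
    WrongLedger,    // on the wrong prior ledger, acquiring the right one
    SwitchedLedger  // acquired the correct prior ledger mid-round
};
// Allowed transitions within one round (illustrative):
//   Proposing/Observing -> WrongLedger <-> SwitchedLedger
// Never back to Proposing or Observing until the next startRound.
```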
@@ -215,16 +215,16 @@ decide how best to generate the next ledger once it declares consensus.
### Phases
As depicted in the overview diagram, consensus is best viewed as a progression
through 3 phases. There are 4 public methods of the generic consensus algorithm
that determine this progression (their shapes are sketched after the list):
* `startRound` begins a consensus round.
* `timerEntry` is called at a regular frequency (`LEDGER_MIN_CLOSE`) and is the
only call to consensus that can change the phase from `Open` to `Establish`
or `Accept`.
* `peerProposal` is called whenever a peer proposal is received and is what
allows a node to update its position in a subsequent `timerEntry` call.
* `gotTxSet` is called when a transaction set is received from the network. This
is typically in response to a prior request from the node to acquire the
transaction set corresponding to a disagreeing peer's position.
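Illustratively (the generic implementation is templated on an
application-defined adaptor, and the real parameter lists differ):

```cpp
template <class Adaptor>
class Consensus
{
public:
    // Begin a new round on top of the given prior ledger.
    void startRound(/* now, prior ledger ID, prior ledger, mode */);

    // Heartbeat; the only call that can move Open -> Establish -> Accept.
    void timerEntry(/* now */);

    // Record a peer's position for use by a later timerEntry.
    void peerProposal(/* now, proposal */);

    // Deliver a transaction set previously requested from the network.
    void gotTxSet(/* now, txSet */);
};
```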
The sections below describe each phase in more detail and what
actions are taken in response to these calls.
#### Open
The `Open` phase is a quiescent period to allow transactions to build up in the
node's open ledger. The duration is a trade-off between latency and throughput.
A shorter window reduces the latency to generating the next ledger, but also
reduces transaction throughput due to fewer transactions accepted into the
ledger.
A call to `startRound` would forcibly begin the next consensus round, skipping
completion of the current round. This is not expected during normal operation.
Calls to `peerProposal` or `gotTxSet` simply store the proposal or transaction
set for use in the coming `Establish` phase.
@@ -254,27 +254,28 @@ the ledger.
Under normal circumstances, the open ledger period ends when one of the following
is true (a sketch of this decision follows the list):
* if there are transactions in the open ledger and more than `LEDGER_MIN_CLOSE`
have elapsed. This is the typical behavior.
* if there are no open transactions and a suitably longer idle interval has
elapsed. This increases the opportunity to get some transaction into
the next ledger and avoids doing useless work closing an empty ledger.
* if more than half the number of prior round peers have already closed or finished
this round. This indicates the node is falling behind and needs to catch up.
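Expressed as a predicate, the decision might look like the following sketch
(parameter names are hypothetical; the real check weighs some additional
timing details):

```cpp
#include <chrono>
#include <cstddef>

using Millis = std::chrono::milliseconds;

bool
shouldCloseLedger(
    bool anyTransactions,
    Millis sinceOpen,             // how long the ledger has been open
    Millis minClose,              // LEDGER_MIN_CLOSE
    Millis idleInterval,          // longer window used when the ledger is empty
    std::size_t proposersClosed,  // prior round peers already closed/finished
    std::size_t prevProposers)    // number of peers in the prior round
{
    if (anyTransactions && sinceOpen > minClose)
        return true;  // the typical case
    if (!anyTransactions && sinceOpen > idleInterval)
        return true;  // avoid uselessly closing empty ledgers
    // More than half of last round's peers have moved on: catch up.
    return 2 * proposersClosed > prevProposers;
}
```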
When closing the ledger, the node takes its initial position based on the
transactions in the open ledger and uses the current time as
its initial close time estimate. If in the proposing mode, the node shares its
initial position with peers. Now that the node has taken a position, it will
consider any peer positions for this round that arrived earlier. The node
generates disputed transactions for each transaction not in common with a peer's
position. The node also records the vote of each peer for each disputed
transaction.
In the example below, we suppose our node has closed with transactions 1, 2 and 3. It creates disputes
for transactions 2, 3 and 4, since at least one peer position differs on each.
##### disputes ##### {#disputes_image}
![Disputes](images/consensus/disputes.png "Disputes")
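The per-dispute bookkeeping might be shaped like this sketch (names are
hypothetical; the real dispute type tracks more state):

```cpp
#include <cstddef>
#include <cstdint>
#include <map>

using PeerID = std::uint64_t;  // hypothetical peer identifier

// Vote tally for one disputed transaction.
struct DisputedTx
{
    bool ourVote = false;              // is the tx in our current position?
    std::map<PeerID, bool> peerVotes;  // each peer's yes/no vote

    // Fraction of voters in favor; ourself counts only when proposing.
    double
    support(bool includeSelf) const
    {
        std::size_t yes = (includeSelf && ourVote) ? 1 : 0;
        std::size_t total = peerVotes.size() + (includeSelf ? 1 : 0);
        for (auto const& kv : peerVotes)
            if (kv.second)
                ++yes;
        return total ? static_cast<double>(yes) / total : 0.0;
    }
};
```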
#### Establish
The `Establish` phase is the active period during which the node
exchanges proposals with peers in an attempt to reach agreement on the consensus
transactions and effective close time.
A call to `startRound` would forcibly begin the next consensus round, skipping
completion of the current round. This is not expected during normal operation.
Calls to `peerProposal` or `gotTxSet` that reflect new positions will generate
disputed transactions for any new disagreements and will update the peer's vote
for all disputed transactions.
A call to `timerEntry` first checks that the node is working from the correct
prior ledger. If not, the node will update the mode and request the correct
ledger. Otherwise, the node updates its position and considers whether
to switch to the `Accept` phase and declare consensus reached. However, at
least `LEDGER_MIN_CONSENSUS` time must have elapsed before doing either. This
allows peers an opportunity to take an initial position and share it.
##### Update Position
In order to achieve consensus, the node is looking for a transaction set that is
supported by a super-majority of peers. The node works towards this set by
adding or removing disputed transactions from its position based on an
increasing threshold for inclusion.
By starting with a lower threshold, a node initially allows a wide set of
transactions into its position. If the establish round continues and the node is
"stuck", a higher threshold can focus on accepting transactions with the most
support. The constants that define the thresholds and durations at which the
thresholds change are given by `AV_XXX_CONSENSUS_PCT` and
`AV_XXX_CONSENSUS_TIME` respectively, where `XXX` is `INIT`, `MID`, `LATE` and
`STUCK`. The effective close time position is updated using the same
thresholds.
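For instance, the schedule could be modeled as below; the percentages stand in
for the `AV_XXX_CONSENSUS_PCT` values and the switch points for the
`AV_XXX_CONSENSUS_TIME` durations (both illustrative here):

```cpp
// Return the inclusion threshold (percent of agreeing voters) for the
// current point in the establish phase, measured relative to the
// previous round's duration. Values are illustrative.
int
avalancheThresholdPct(double fractionOfPreviousRound)
{
    if (fractionOfPreviousRound < 0.5)
        return 50;  // INIT: admit a broad set of transactions
    if (fractionOfPreviousRound < 0.85)
        return 65;  // MID
    if (fractionOfPreviousRound < 2.0)
        return 70;  // LATE
    return 95;      // STUCK: converge on the best-supported transactions
}
```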
Given the [example disputes above](#disputes_image) and an initial threshold
of 50%, our node would retain its position since transaction 1 was not in
dispute and transactions 2 and 3 have 75% support. Since its position did not
change, it would not need to send a new proposal to peers. Peer C would not
change either. Peer A would add transaction 3 to its position and Peer B would
remove transaction 4 from its position; both would then send an updated
position.
Conversely, if the diagram reflected a later call to `timerEntry` that occurs in
the stuck region with a threshold of say 95%, our node would remove transactions
2 and 3 from its candidate set and send an updated position. Likewise, all the
other peers would end up with only transaction 1 in their position.
Lastly, if our node were not in the proposing mode, it would not include its own
vote when tallying support. Using only the peers' votes in the example above,
our node would maintain its position of transactions 1, 2 and 3.
##### Checking Consensus
After updating its position, the node checks for supermajority agreement with
its peers on its current position. This agreement is of the exact transaction
set, not just the support of individual transactions. That is, if our position
is a subset of a peer's position, that counts as a disagreement. Also recall
that effective close time agreement allows a supermajority of participants
agreeing to disagree.
Consensus is declared when the following 3 clauses are true (a sketch follows the list):
* `LEDGER_MIN_CONSENSUS` time has elapsed in the establish phase
* At least 75% of the prior round proposers have proposed OR this establish
phase is `LEDGER_MIN_CONSENSUS` longer than the last round's establish phase
* `minimumConsensusPercentage` of ourself and our peers share the same position
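A simplified sketch of that check (hypothetical names; the real logic also
detects when the rest of the network has moved on):

```cpp
#include <chrono>
#include <cstddef>

using Millis = std::chrono::milliseconds;

bool
haveConsensus(
    Millis sinceEstablish,  // time spent in this establish phase
    Millis minConsensus,    // LEDGER_MIN_CONSENSUS
    Millis prevEstablish,   // last round's establish duration
    std::size_t proposersProposed,
    std::size_t prevProposers,
    double agreePct,        // % of ourself + peers sharing our position
    double minimumConsensusPercentage)
{
    if (sinceEstablish < minConsensus)
        return false;
    bool const enoughProposers =
        4 * proposersProposed >= 3 * prevProposers ||   // at least 75%
        sinceEstablish > prevEstablish + minConsensus;  // or waited long enough
    return enoughProposers && agreePct >= minimumConsensusPercentage;
}
```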
The middle condition ensures slower peers have a chance to share positions, but
prevents waiting too long on peers that have disconnected. Additionally, a node
@@ -363,22 +364,22 @@ logic.
Once consensus is reached (or moved on), the node switches to the `Accept` phase
and signals to the implementing code that the round is complete. That code is
responsible for using the consensus transaction set to generate the next ledger
and calling `startRound` to begin the next round. The implementation has total
freedom on ordering transactions, deciding what to do if consensus moved on,
determining whether to retry or abandon local transactions that did not make the
consensus set and updating any internal state based on the consensus progress.
#### Accept
The `Accept` phase is the terminal phase of the consensus algorithm. Calls to
`timerEntry`, `peerProposal` and `gotTxSet` will not change the internal
consensus state while in the accept phase. The expectation is that the
application specific code is working to generate the new ledger based on the
consensus outcome. Once complete, that code should make a call to `startRound`
to kick off the next consensus round. The `startRound` call includes the new
prior ledger, prior ledger ID and whether the round should begin in the
proposing or observing mode. After setting some initial state, the phase
transitions to `Open`. The node will also check if the provided prior ledger
and ID are correct, updating the mode and requesting the proper ledger from the
network if necessary.
@@ -447,9 +448,9 @@ struct TxSet
### Ledger
The `Ledger` type represents the state shared amongst the
distributed participants. Notice that the details of how the next ledger is
generated from the prior ledger and the consensus accepted transaction set is
not part of the interface. Within the generic code, this type is primarily used
to know that peers are working on the same tip of the ledger chain and to
provide some basic timing data for consensus.
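Although the full listing is elided from this diff, a sketch of what the
generic code relies on might look like this (member names are assumptions, not
the actual interface):

```cpp
#include <cstdint>

// Illustrative only. "NetClock" stands for the network clock type used
// throughout the consensus code.
struct Ledger
{
    struct ID;  // hash-like, equality-comparable identifier

    ID id() const;              // identifier of this ledger
    ID parentID() const;        // identifier of the prior ledger
    std::uint32_t seq() const;  // position in the ledger chain

    // Timing data consensus relies on:
    //   closeTime()           - effective close time of this ledger
    //   closeTimeResolution() - resolution in effect when it closed
    //   closeAgree()          - whether peers agreed on the close time
};
```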
@@ -557,7 +558,7 @@ struct ConsensusResult
ConsensusTimer roundTime;
// Indicates state in which consensus ended. Once in the accept phase
// will be either Yes or MovedOn
ConsensusState state = ConsensusState::No;
};
@@ -625,7 +626,7 @@ struct Adaptor
// Called when consensus operating mode changes
void onModeChange(ConsensusMode before, ConsensusMode after);
// Called when ledger closes. Implementation should generate an initial Result
// with position based on the current open ledger's transactions.
ConsensusResult onClose(Ledger const &, Ledger const & prev, ConsensusMode mode);
@@ -656,24 +657,27 @@ struct Adaptor
The implementing class hides many details of the peer communication
model from the generic code.
* The `share` member functions are responsible for sharing the given type with a
node's peers, but are agnostic to the mechanism. Ideally, messages are delivered
faster than `LEDGER_GRANULARITY`.
* The generic code does not specify how transactions are submitted by clients,
propagated through the network or stored in the open ledger. Indeed, the open
ledger is only conceptual from the perspective of the generic code---the
initial position and transaction set are opaquely generated in a
`Consensus::Result` instance returned from the `onClose` callback.
* The calls to `acquireLedger` and `acquireTxSet` only have non-trivial return
if the ledger or transaction set of interest is available. The implementing
class is free to block while acquiring, or return the empty option while
servicing the request asynchronously. Due to legacy reasons, the two calls
are not symmetric. `acquireTxSet` requires the host application to call
`gotTxSet` when an asynchronous `acquire` completes. Conversely,
`acquireLedger` will be called again later by the consensus code if it still
desires the ledger with the hope that the asynchronous acquisition is
complete, as sketched below.
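The asymmetric contract can be summarized with these illustrative declarations
(using the `Ledger` and `TxSet` types described above):

```cpp
#include <optional>

// Ledger: if not yet available, return std::nullopt; consensus will
// simply call acquireLedger again on a later timerEntry.
std::optional<Ledger> acquireLedger(Ledger::ID const& id);

// TxSet: if this returns std::nullopt, the implementation must start an
// asynchronous fetch and call gotTxSet once the set arrives.
std::optional<TxSet> acquireTxSet(TxSet::ID const& id);
```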
## Validation
Coming Soon!
View File
@@ -9,7 +9,7 @@ project(
LANGUAGES CXX
)
find_package(xrpl REQUIRED)
add_executable(example)
target_sources(example PRIVATE src/example.cpp)
View File
@@ -0,0 +1,59 @@
from conan import ConanFile, conan_version
from conan.tools.cmake import CMake, cmake_layout

class Example(ConanFile):

    def set_name(self):
        if self.name is None:
            self.name = 'example'

    def set_version(self):
        if self.version is None:
            self.version = '0.1.0'

    license = 'ISC'
    author = 'John Freeman <jfreeman08@gmail.com>'

    settings = 'os', 'compiler', 'build_type', 'arch'
    options = {'shared': [True, False], 'fPIC': [True, False]}
    default_options = {
        'shared': False,
        'fPIC': True,
        'xrpl:xrpld': False,
    }

    requires = ['xrpl/2.2.0-rc1@jfreeman/nodestore']
    generators = ['CMakeDeps', 'CMakeToolchain']

    exports_sources = [
        'CMakeLists.txt',
        'cmake/*',
        'external/*',
        'include/*',
        'src/*',
    ]

    # For out-of-source build.
    # https://docs.conan.io/en/latest/reference/build_helpers/cmake.html#configure
    no_copy_source = True

    def layout(self):
        cmake_layout(self)

    def config_options(self):
        if self.settings.os == 'Windows':
            del self.options.fPIC

    def build(self):
        cmake = CMake(self)
        cmake.configure(variables={'BUILD_TESTING': 'NO'})
        cmake.build()

    def package(self):
        cmake = CMake(self)
        cmake.install()

    def package_info(self):
        path = f'{self.package_folder}/share/{self.name}/cpp_info.py'
        with open(path, 'r') as file:
            exec(file.read(), {}, {'self': self.cpp_info})
View File
@@ -1,10 +1,8 @@
#include <xrpl/protocol/BuildInfo.h>
#include <cstdio>

int main(int argc, char const** argv) {
    std::printf("%s\n", ripple::BuildInfo::getVersionString().c_str());
    return 0;
}
external/README.md vendored
View File
@@ -1,10 +1,14 @@
# External Conan recipes
The subdirectories in this directory contain either copies or Conan recipes
of external libraries used by rippled.
The Conan recipes include patches we have not yet pushed upstream.
| Folder | Upstream | Description |
|:----------------|:---------------------------------------------|:------------|
| `antithesis-sdk`| [Project](https://github.com/antithesishq/antithesis-sdk-cpp/) | [Antithesis](https://antithesis.com/docs/using_antithesis/sdk/cpp/overview.html) SDK for C++ |
| `ed25519-donna` | [Project](https://github.com/floodyberry/ed25519-donna) | [Ed25519](http://ed25519.cr.yp.to/) digital signatures |
| `rocksdb` | [Recipe](https://github.com/conan-io/conan-center-index/tree/master/recipes/rocksdb) | Fast key/value database. (Supports rotational disks better than NuDB.) |
| `secp256k1` | [Project](https://github.com/bitcoin-core/secp256k1) | ECDSA digital signatures using the **secp256k1** curve |
| `snappy` | [Recipe](https://github.com/conan-io/conan-center-index/tree/master/recipes/snappy) | "Snappy" lossless compression algorithm. |
| `soci` | [Recipe](https://github.com/conan-io/conan-center-index/tree/master/recipes/soci) | Abstraction layer for database access. |
View File
@@ -1,4 +1,4 @@
cmake_minimum_required(VERSION 3.25)
# Note, version set explicitly by rippled project
project(antithesis-sdk-cpp VERSION 0.4.4 LANGUAGES CXX)
View File
@@ -1,9 +1,8 @@
# Antithesis C++ SDK
This library provides methods for C++ programs to configure the [Antithesis](https://antithesis.com) platform. It contains three kinds of functionality:
* Assertion macros that allow you to define test properties about your software or workload.
* Randomness functions for requesting both structured and unstructured randomness from the Antithesis platform.
* Lifecycle functions that inform the Antithesis environment that particular test phases or milestones have been reached.
For general usage guidance see the [Antithesis C++ SDK Documentation](https://antithesis.com/docs/using_antithesis/sdk/cpp/overview/)
View File
@@ -17,9 +17,6 @@ add_library(ed25519 STATIC
)
add_library(ed25519::ed25519 ALIAS ed25519)
target_link_libraries(ed25519 PUBLIC OpenSSL::SSL)
if(NOT MSVC)
target_compile_options(ed25519 PRIVATE -Wno-implicit-fallthrough)
endif()
include(GNUInstallDirs)
View File
@@ -1,12 +1,12 @@
[ed25519](http://ed25519.cr.yp.to/) is an
[Elliptic Curve Digital Signature Algorithm](http://en.wikipedia.org/wiki/Elliptic_Curve_DSA),
developed by [Dan Bernstein](http://cr.yp.to/djb.html),
[Niels Duif](http://www.nielsduif.nl/),
[Tanja Lange](http://hyperelliptic.org/tanja),
[Peter Schwabe](http://www.cryptojedi.org/users/peter/),
and [Bo-Yin Yang](http://www.iis.sinica.edu.tw/pages/byyang/).
This project provides performant, portable 32-bit & 64-bit implementations. All implementations are
of course constant time in regard to secret data.
#### Performance
@@ -52,35 +52,35 @@ are made.
#### Compilation
No configuration is needed **if you are compiling against OpenSSL**.
##### Hash Options
If you are not compiling against OpenSSL, you will need a hash function.
To use a simple/**slow** implementation of SHA-512, use `-DED25519_REFHASH` when compiling `ed25519.c`.
This should never be used except to verify the code works when OpenSSL is not available.
To use a custom hash function, use `-DED25519_CUSTOMHASH` when compiling `ed25519.c` and put your
custom hash implementation in ed25519-hash-custom.h. The hash must have a 512bit digest and implement
    struct ed25519_hash_context;

    void ed25519_hash_init(ed25519_hash_context *ctx);
    void ed25519_hash_update(ed25519_hash_context *ctx, const uint8_t *in, size_t inlen);
    void ed25519_hash_final(ed25519_hash_context *ctx, uint8_t *hash);
    void ed25519_hash(uint8_t *hash, const uint8_t *in, size_t inlen);
##### Random Options
If you are not compiling against OpenSSL, you will need a random function for batch verification.
To use a custom random function, use `-DED25519_CUSTOMRANDOM` when compiling `ed25519.c` and put your
custom random implementation in ed25519-randombytes-custom.h. The random function must implement:

    void ED25519_FN(ed25519_randombytes_unsafe) (void *p, size_t len);

Use `-DED25519_TEST` when compiling `ed25519.c` to use a deterministically seeded, non-thread safe CSPRNG
variant of Bob Jenkins [ISAAC](http://en.wikipedia.org/wiki/ISAAC_%28cipher%29)
##### Minor options
@@ -91,79 +91,80 @@ Use `-DED25519_FORCE_32BIT` to force the use of 32 bit routines even when compil
##### 32-bit
    gcc ed25519.c -m32 -O3 -c
##### 64-bit
    gcc ed25519.c -m64 -O3 -c
##### SSE2
    gcc ed25519.c -m32 -O3 -c -DED25519_SSE2 -msse2
    gcc ed25519.c -m64 -O3 -c -DED25519_SSE2
clang and icc are also supported
#### Usage
To use the code, link against `ed25519.o -mbits` and:
#include "ed25519.h"
#include "ed25519.h"
Add `-lssl -lcrypto` when using OpenSSL (Some systems don't need -lcrypto? It might be trial and error).
To generate a private key, simply generate 32 bytes from a secure
cryptographic source:
    ed25519_secret_key sk;
    randombytes(sk, sizeof(ed25519_secret_key));
To generate a public key:
    ed25519_public_key pk;
    ed25519_publickey(sk, pk);
To sign a message:
    ed25519_signature sig;
    ed25519_sign(message, message_len, sk, pk, sig);
To verify a signature:
    int valid = ed25519_sign_open(message, message_len, pk, sig) == 0;
To batch verify signatures:
    const unsigned char *mp[num] = {message1, message2..}
    size_t ml[num] = {message_len1, message_len2..}
    const unsigned char *pkp[num] = {pk1, pk2..}
    const unsigned char *sigp[num] = {signature1, signature2..}
    int valid[num]

    /* valid[i] will be set to 1 if the individual signature was valid, 0 otherwise */
    int all_valid = ed25519_sign_open_batch(mp, ml, pkp, sigp, num, valid) == 0;
**Note**: Batch verification uses `ed25519_randombytes_unsafe`, implemented in
`ed25519-randombytes.h`, to generate random scalars for the verification code.
The default implementation now uses OpenSSL's `RAND_bytes`.
Unlike the [SUPERCOP](http://bench.cr.yp.to/supercop.html) version, signatures are
not appended to messages, and there is no need for padding in front of messages.
Additionally, the secret key does not contain a copy of the public key, so it is
32 bytes instead of 64 bytes, and the public key must be provided to the signing
function.
##### Curve25519
Curve25519 public keys can be generated thanks to
[Adam Langley](http://www.imperialviolet.org/2013/05/10/fastercurve25519.html)
leveraging Ed25519's precomputed basepoint scalar multiplication.
    curved25519_key sk, pk;
    randombytes(sk, sizeof(curved25519_key));
    curved25519_scalarmult_basepoint(pk, sk);
Note the name is curved25519, a combination of curve and ed25519, to prevent
name clashes. Performance is slightly faster than short message ed25519
signing due to both using the same code for the scalar multiply.
@@ -179,4 +180,4 @@ with extreme values to ensure they function correctly. SSE2 is now supported.
#### Papers
[Available on the Ed25519 website](http://ed25519.cr.yp.to/papers.html)
View File
@@ -1,78 +1,78 @@
This code fuzzes ed25519-donna (and optionally ed25519-donna-sse2) against the ref10 implementations of
[curve25519](https://github.com/floodyberry/supercop/tree/master/crypto_scalarmult/curve25519/ref10) and
[ed25519](https://github.com/floodyberry/supercop/tree/master/crypto_sign/ed25519/ref10).
Curve25519 tests that generating a public key from a secret key matches the ref10 implementation.
# Building
## *nix + PHP
`php build-nix.php (required parameters) (optional parameters)`
Required parameters:
* `--function=[curve25519,ed25519]`
* `--bits=[32,64]`
Optional parameters:
* `--with-sse2`
    Also fuzz against ed25519-donna-sse2
* `--with-openssl`
    Build with OpenSSL's SHA-512.
    Default: Reference SHA-512 implementation (slow!)
* `--compiler=[gcc,clang,icc]`
    Default: gcc
* `--no-asm`
    Do not use platform specific assembler
example:
    php build-nix.php --bits=64 --function=ed25519 --with-sse2 --compiler=icc
## Windows
Create a project with access to the ed25519 files.
If you are not using OpenSSL, add the `ED25519_REFHASH` define to the project's
"Properties/Preprocessor/Preprocessor Definitions" option
Add the following files to the project:
* `fuzz/curve25519-ref10.c`
* `fuzz/ed25519-ref10.c`
* `fuzz/ed25519-donna.c`
* `fuzz/ed25519-donna-sse2.c` (optional)
* `fuzz-[curve25519/ed25519].c` (depending on which you want to fuzz)
If you are also fuzzing against ed25519-donna-sse2, add the `ED25519_SSE2` define for `fuzz-[curve25519/ed25519].c` under
its "Properties/Preprocessor/Preprocessor Definitions" option.
# Running
If everything agrees, the program will only output occasional status dots (every 0x1000 passes)
and a 64bit progress count (every 0x20000 passes):
fuzzing: ref10 curved25519 curved25519-sse2
................................ [0000000000020000]
................................ [0000000000040000]
................................ [0000000000060000]
................................ [0000000000080000]
................................ [00000000000a0000]
................................ [00000000000c0000]
If any of the implementations do not agree with the ref10 implementation, the program will dump
the random data that was used, the data generated by the ref10 implementation, and diffs of the
ed25519-donna data against the ref10 data.
## Example errors
@@ -83,21 +83,21 @@ These are example error dumps (with intentionally introduced errors).
Random data:
* sk, or Secret Key
* m, or Message
Generated data:
* pk, or Public Key
* sig, or Signature
* valid, or if the signature of the message is valid with the public key
Dump:
sk:
0x3b,0xb7,0x17,0x7a,0x66,0xdc,0xb7,0x9a,0x90,0x25,0x07,0x99,0x96,0xf3,0x92,0xef,
0x78,0xf8,0xad,0x6c,0x35,0x87,0x81,0x67,0x03,0xe6,0x95,0xba,0x06,0x18,0x7c,0x9c,
m:
0x7c,0x8d,0x3d,0xe1,0x92,0xee,0x7a,0xb8,0x4d,0xc9,0xfb,0x02,0x34,0x1e,0x5a,0x91,
0xee,0x01,0xa6,0xb8,0xab,0x37,0x3f,0x3d,0x6d,0xa2,0x47,0xe3,0x27,0x93,0x7c,0xb7,
@@ -107,66 +107,67 @@ Dump:
0x63,0x14,0xe0,0x81,0x52,0xec,0xcd,0xcf,0x70,0x54,0x7d,0xa3,0x49,0x8b,0xf0,0x89,
0x70,0x07,0x12,0x2a,0xd9,0xaa,0x16,0x01,0xb2,0x16,0x3a,0xbb,0xfc,0xfa,0x13,0x5b,
0x69,0x83,0x92,0x70,0x95,0x76,0xa0,0x8e,0x16,0x79,0xcc,0xaa,0xb5,0x7c,0xf8,0x7a,
ref10:
pk:
0x71,0xb0,0x5e,0x62,0x1b,0xe3,0xe7,0x36,0x91,0x8b,0xc0,0x13,0x36,0x0c,0xc9,0x04,
0x16,0xf5,0xff,0x48,0x0c,0x83,0x6b,0x88,0x53,0xa2,0xc6,0x0f,0xf7,0xac,0x42,0x04,
sig:
0x3e,0x05,0xc5,0x37,0x16,0x0b,0x29,0x30,0x89,0xa3,0xe7,0x83,0x08,0x16,0xdd,0x96,
0x02,0xfa,0x0d,0x44,0x2c,0x43,0xaa,0x80,0x93,0x04,0x58,0x22,0x09,0xbf,0x11,0xa5,
0xcc,0xa5,0x3c,0x9f,0xa0,0xa4,0x64,0x5a,0x4a,0xdb,0x20,0xfb,0xc7,0x9b,0xfd,0x3f,
0x08,0xae,0xc4,0x3c,0x1e,0xd8,0xb6,0xb4,0xd2,0x6d,0x80,0x92,0xcb,0x71,0xf3,0x02,
valid: yes
ed25519-donna:
pk diff:
____,____,____,____,____,____,____,____,____,____,____,____,____,____,____,____,
____,____,____,____,____,____,____,____,____,____,____,____,____,____,____,____,
sig diff:
0x2c,0xb9,0x25,0x14,0xd0,0x94,0xeb,0xfe,0x46,0x02,0xc2,0xe8,0xa3,0xeb,0xbf,0xb5,
0x72,0x84,0xbf,0xc1,0x8a,0x32,0x30,0x99,0xf7,0x58,0xfe,0x06,0xa8,0xdc,0xdc,0xab,
0xb5,0x57,0x03,0x33,0x87,0xce,0x54,0x55,0x6a,0x69,0x8a,0xc4,0xb7,0x2a,0xed,0x97,
0xb4,0x68,0xe7,0x52,0x7a,0x07,0x55,0x3b,0xa2,0x94,0xd6,0x5e,0xa1,0x61,0x80,0x08,
valid: no
In this case, the generated public key matches, but the generated signature is completely
different and does not validate.
### Curve25519
Random data:
* sk, or Secret Key
Generated data:
* pk, or Public Key
Dump:
sk:
0x44,0xec,0x0b,0x0e,0xa2,0x0e,0x9c,0x5b,0x8c,0xce,0x7b,0x1d,0x68,0xae,0x0f,0x9e,
0x81,0xe2,0x04,0x76,0xda,0x87,0xa4,0x9e,0xc9,0x4f,0x3b,0xf9,0xc3,0x89,0x63,0x70,
ref10:
0x24,0x55,0x55,0xc0,0xf9,0x80,0xaf,0x02,0x43,0xee,0x8c,0x7f,0xc1,0xad,0x90,0x95,
0x57,0x91,0x14,0x2e,0xf2,0x14,0x22,0x80,0xdd,0x4e,0x3c,0x85,0x71,0x84,0x8c,0x62,
curved25519 diff:
0x12,0xd1,0x61,0x2b,0x16,0xb3,0xd8,0x29,0xf8,0xa3,0xba,0x70,0x4e,0x49,0x4f,0x43,
0xa1,0x3c,0x6b,0x42,0x11,0x61,0xcc,0x30,0x87,0x73,0x46,0xfb,0x85,0xc7,0x9a,0x35,
curved25519-sse2 diff:
____,____,____,____,____,____,____,____,____,____,____,____,____,____,____,____,
____,____,____,____,____,____,____,____,____,____,____,____,____,____,____,____,
In this case, curved25519 is totally wrong, while curved25519-sse2 matches the reference
implementation.
external/nudb/conandata.yml vendored Normal file
View File
@@ -0,0 +1,10 @@
sources:
"2.0.8":
url: "https://github.com/CPPAlliance/NuDB/archive/2.0.8.tar.gz"
sha256: "9b71903d8ba111cd893ab064b9a8b6ac4124ed8bd6b4f67250205bc43c7f13a8"
patches:
"2.0.8":
- patch_file: "patches/2.0.8-0001-add-include-stdexcept-for-msvc.patch"
patch_description: "Fix build for MSVC by including stdexcept"
patch_type: "portability"
patch_source: "https://github.com/cppalliance/NuDB/pull/100/files"
external/nudb/conanfile.py vendored Normal file
View File
@@ -0,0 +1,72 @@
import os
from conan import ConanFile
from conan.tools.build import check_min_cppstd
from conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get
from conan.tools.layout import basic_layout
required_conan_version = ">=1.52.0"
class NudbConan(ConanFile):
name = "nudb"
description = "A fast key/value insert-only database for SSD drives in C++11"
license = "BSL-1.0"
url = "https://github.com/conan-io/conan-center-index"
homepage = "https://github.com/CPPAlliance/NuDB"
topics = ("header-only", "KVS", "insert-only")
package_type = "header-library"
settings = "os", "arch", "compiler", "build_type"
no_copy_source = True
@property
def _min_cppstd(self):
return 11
def export_sources(self):
export_conandata_patches(self)
def layout(self):
basic_layout(self, src_folder="src")
def requirements(self):
self.requires("boost/1.83.0")
def package_id(self):
self.info.clear()
def validate(self):
if self.settings.compiler.cppstd:
check_min_cppstd(self, self._min_cppstd)
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def build(self):
apply_conandata_patches(self)
def package(self):
copy(self, "LICENSE*",
dst=os.path.join(self.package_folder, "licenses"),
src=self.source_folder)
copy(self, "*",
dst=os.path.join(self.package_folder, "include"),
src=os.path.join(self.source_folder, "include"))
def package_info(self):
self.cpp_info.bindirs = []
self.cpp_info.libdirs = []
self.cpp_info.set_property("cmake_target_name", "NuDB")
self.cpp_info.set_property("cmake_target_aliases", ["NuDB::nudb"])
self.cpp_info.set_property("cmake_find_mode", "both")
self.cpp_info.components["core"].set_property("cmake_target_name", "nudb")
self.cpp_info.components["core"].names["cmake_find_package"] = "nudb"
self.cpp_info.components["core"].names["cmake_find_package_multi"] = "nudb"
self.cpp_info.components["core"].requires = ["boost::thread", "boost::system"]
# TODO: to remove in conan v2 once cmake_find_package_* generators removed
self.cpp_info.names["cmake_find_package"] = "NuDB"
self.cpp_info.names["cmake_find_package_multi"] = "NuDB"
View File
@@ -0,0 +1,24 @@
diff --git a/include/nudb/detail/stream.hpp b/include/nudb/detail/stream.hpp
index 6c07bf1..e0ce8ed 100644
--- a/include/nudb/detail/stream.hpp
+++ b/include/nudb/detail/stream.hpp
@@ -14,6 +14,7 @@
#include <cstdint>
#include <cstring>
#include <memory>
+#include <stdexcept>
namespace nudb {
namespace detail {
diff --git a/include/nudb/impl/context.ipp b/include/nudb/impl/context.ipp
index beb7058..ffde0b3 100644
--- a/include/nudb/impl/context.ipp
+++ b/include/nudb/impl/context.ipp
@@ -9,6 +9,7 @@
#define NUDB_IMPL_CONTEXT_IPP
#include <nudb/detail/store_base.hpp>
+#include <stdexcept>
namespace nudb {
external/rocksdb/conandata.yml vendored Normal file
View File
@@ -0,0 +1,27 @@
sources:
"6.29.5":
url: "https://github.com/facebook/rocksdb/archive/refs/tags/v6.29.5.tar.gz"
sha256: "ddbf84791f0980c0bbce3902feb93a2c7006f6f53bfd798926143e31d4d756f0"
"6.27.3":
url: "https://github.com/facebook/rocksdb/archive/refs/tags/v6.27.3.tar.gz"
sha256: "ee29901749b9132692b26f0a6c1d693f47d1a9ed8e3771e60556afe80282bf58"
"6.20.3":
url: "https://github.com/facebook/rocksdb/archive/refs/tags/v6.20.3.tar.gz"
sha256: "c6502c7aae641b7e20fafa6c2b92273d935d2b7b2707135ebd9a67b092169dca"
"8.8.1":
url: "https://github.com/facebook/rocksdb/archive/refs/tags/v8.8.1.tar.gz"
sha256: "056c7e21ad8ae36b026ac3b94b9d6e0fcc60e1d937fc80330921e4181be5c36e"
patches:
"6.29.5":
- patch_file: "patches/6.29.5-0001-add-include-cstdint-for-gcc-13.patch"
patch_description: "Fix build with gcc 13 by including cstdint"
patch_type: "portability"
patch_source: "https://github.com/facebook/rocksdb/pull/11118"
- patch_file: "patches/6.29.5-0002-exclude-thirdparty.patch"
patch_description: "Do not include thirdparty.inc"
patch_type: "portability"
"6.27.3":
- patch_file: "patches/6.27.3-0001-add-include-cstdint-for-gcc-13.patch"
patch_description: "Fix build with gcc 13 by including cstdint"
patch_type: "portability"
patch_source: "https://github.com/facebook/rocksdb/pull/11118"
external/rocksdb/conanfile.py vendored Normal file
View File
@@ -0,0 +1,233 @@
import os
import glob
import shutil
from conan import ConanFile
from conan.errors import ConanInvalidConfiguration
from conan.tools.build import check_min_cppstd
from conan.tools.cmake import CMake, CMakeDeps, CMakeToolchain, cmake_layout
from conan.tools.files import apply_conandata_patches, collect_libs, copy, export_conandata_patches, get, rm, rmdir
from conan.tools.microsoft import check_min_vs, is_msvc, is_msvc_static_runtime
from conan.tools.scm import Version
required_conan_version = ">=1.53.0"
class RocksDBConan(ConanFile):
name = "rocksdb"
homepage = "https://github.com/facebook/rocksdb"
license = ("GPL-2.0-only", "Apache-2.0")
url = "https://github.com/conan-io/conan-center-index"
description = "A library that provides an embeddable, persistent key-value store for fast storage"
topics = ("database", "leveldb", "facebook", "key-value")
package_type = "library"
settings = "os", "arch", "compiler", "build_type"
options = {
"shared": [True, False],
"fPIC": [True, False],
"lite": [True, False],
"with_gflags": [True, False],
"with_snappy": [True, False],
"with_lz4": [True, False],
"with_zlib": [True, False],
"with_zstd": [True, False],
"with_tbb": [True, False],
"with_jemalloc": [True, False],
"enable_sse": [False, "sse42", "avx2"],
"use_rtti": [True, False],
}
default_options = {
"shared": False,
"fPIC": True,
"lite": False,
"with_snappy": False,
"with_lz4": False,
"with_zlib": False,
"with_zstd": False,
"with_gflags": False,
"with_tbb": False,
"with_jemalloc": False,
"enable_sse": False,
"use_rtti": False,
}
@property
def _min_cppstd(self):
return "11" if Version(self.version) < "8.8.1" else "17"
@property
def _compilers_minimum_version(self):
return {} if self._min_cppstd == "11" else {
"apple-clang": "10",
"clang": "7",
"gcc": "7",
"msvc": "191",
"Visual Studio": "15",
}
def export_sources(self):
export_conandata_patches(self)
def config_options(self):
if self.settings.os == "Windows":
del self.options.fPIC
if self.settings.arch != "x86_64":
del self.options.with_tbb
if self.settings.build_type == "Debug":
self.options.use_rtti = True # Rtti are used in asserts for debug mode...
def configure(self):
if self.options.shared:
self.options.rm_safe("fPIC")
def layout(self):
cmake_layout(self, src_folder="src")
def requirements(self):
if self.options.with_gflags:
self.requires("gflags/2.2.2")
if self.options.with_snappy:
self.requires("snappy/1.1.10")
if self.options.with_lz4:
self.requires("lz4/1.9.4")
if self.options.with_zlib:
self.requires("zlib/[>=1.2.11 <2]")
if self.options.with_zstd:
self.requires("zstd/1.5.5")
if self.options.get_safe("with_tbb"):
self.requires("onetbb/2021.10.0")
if self.options.with_jemalloc:
self.requires("jemalloc/5.3.0")
def validate(self):
if self.settings.compiler.get_safe("cppstd"):
check_min_cppstd(self, self._min_cppstd)
minimum_version = self._compilers_minimum_version.get(str(self.settings.compiler), False)
if minimum_version and Version(self.settings.compiler.version) < minimum_version:
raise ConanInvalidConfiguration(
f"{self.ref} requires C++{self._min_cppstd}, which your compiler does not support."
)
if self.settings.arch not in ["x86_64", "ppc64le", "ppc64", "mips64", "armv8"]:
raise ConanInvalidConfiguration("Rocksdb requires 64 bits")
check_min_vs(self, "191")
if self.version == "6.20.3" and \
self.settings.os == "Linux" and \
self.settings.compiler == "gcc" and \
Version(self.settings.compiler.version) < "5":
raise ConanInvalidConfiguration("Rocksdb 6.20.3 is not compilable with gcc <5.") # See https://github.com/facebook/rocksdb/issues/3522
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def generate(self):
tc = CMakeToolchain(self)
tc.variables["FAIL_ON_WARNINGS"] = False
tc.variables["WITH_TESTS"] = False
tc.variables["WITH_TOOLS"] = False
tc.variables["WITH_CORE_TOOLS"] = False
tc.variables["WITH_BENCHMARK_TOOLS"] = False
tc.variables["WITH_FOLLY_DISTRIBUTED_MUTEX"] = False
if is_msvc(self):
tc.variables["WITH_MD_LIBRARY"] = not is_msvc_static_runtime(self)
tc.variables["ROCKSDB_INSTALL_ON_WINDOWS"] = self.settings.os == "Windows"
tc.variables["ROCKSDB_LITE"] = self.options.lite
tc.variables["WITH_GFLAGS"] = self.options.with_gflags
tc.variables["WITH_SNAPPY"] = self.options.with_snappy
tc.variables["WITH_LZ4"] = self.options.with_lz4
tc.variables["WITH_ZLIB"] = self.options.with_zlib
tc.variables["WITH_ZSTD"] = self.options.with_zstd
tc.variables["WITH_TBB"] = self.options.get_safe("with_tbb", False)
tc.variables["WITH_JEMALLOC"] = self.options.with_jemalloc
tc.variables["ROCKSDB_BUILD_SHARED"] = self.options.shared
tc.variables["ROCKSDB_LIBRARY_EXPORTS"] = self.settings.os == "Windows" and self.options.shared
tc.variables["ROCKSDB_DLL" ] = self.settings.os == "Windows" and self.options.shared
tc.variables["USE_RTTI"] = self.options.use_rtti
if not bool(self.options.enable_sse):
tc.variables["PORTABLE"] = True
tc.variables["FORCE_SSE42"] = False
elif self.options.enable_sse == "sse42":
tc.variables["PORTABLE"] = True
tc.variables["FORCE_SSE42"] = True
elif self.options.enable_sse == "avx2":
tc.variables["PORTABLE"] = False
tc.variables["FORCE_SSE42"] = False
# not available yet in CCI
tc.variables["WITH_NUMA"] = False
tc.generate()
deps = CMakeDeps(self)
if self.options.with_jemalloc:
deps.set_property("jemalloc", "cmake_file_name", "JeMalloc")
deps.set_property("jemalloc", "cmake_target_name", "JeMalloc::JeMalloc")
deps.generate()
def build(self):
apply_conandata_patches(self)
cmake = CMake(self)
cmake.configure()
cmake.build()
def _remove_static_libraries(self):
rm(self, "rocksdb.lib", os.path.join(self.package_folder, "lib"))
for lib in glob.glob(os.path.join(self.package_folder, "lib", "*.a")):
if not lib.endswith(".dll.a"):
os.remove(lib)
def _remove_cpp_headers(self):
for path in glob.glob(os.path.join(self.package_folder, "include", "rocksdb", "*")):
if path != os.path.join(self.package_folder, "include", "rocksdb", "c.h"):
if os.path.isfile(path):
os.remove(path)
else:
shutil.rmtree(path)
def package(self):
copy(self, "COPYING", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
copy(self, "LICENSE*", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
cmake = CMake(self)
cmake.install()
if self.options.shared:
self._remove_static_libraries()
self._remove_cpp_headers() # Force stable ABI for shared libraries
rmdir(self, os.path.join(self.package_folder, "lib", "cmake"))
rmdir(self, os.path.join(self.package_folder, "lib", "pkgconfig"))
def package_info(self):
cmake_target = "rocksdb-shared" if self.options.shared else "rocksdb"
self.cpp_info.set_property("cmake_file_name", "RocksDB")
self.cpp_info.set_property("cmake_target_name", f"RocksDB::{cmake_target}")
# TODO: back to global scope in conan v2 once cmake_find_package* generators removed
self.cpp_info.components["librocksdb"].libs = collect_libs(self)
if self.settings.os == "Windows":
self.cpp_info.components["librocksdb"].system_libs = ["shlwapi", "rpcrt4"]
if self.options.shared:
self.cpp_info.components["librocksdb"].defines = ["ROCKSDB_DLL"]
elif self.settings.os in ["Linux", "FreeBSD"]:
self.cpp_info.components["librocksdb"].system_libs = ["pthread", "m"]
if self.options.lite:
self.cpp_info.components["librocksdb"].defines.append("ROCKSDB_LITE")
# TODO: to remove in conan v2 once cmake_find_package* generators removed
self.cpp_info.names["cmake_find_package"] = "RocksDB"
self.cpp_info.names["cmake_find_package_multi"] = "RocksDB"
self.cpp_info.components["librocksdb"].names["cmake_find_package"] = cmake_target
self.cpp_info.components["librocksdb"].names["cmake_find_package_multi"] = cmake_target
self.cpp_info.components["librocksdb"].set_property("cmake_target_name", f"RocksDB::{cmake_target}")
if self.options.with_gflags:
self.cpp_info.components["librocksdb"].requires.append("gflags::gflags")
if self.options.with_snappy:
self.cpp_info.components["librocksdb"].requires.append("snappy::snappy")
if self.options.with_lz4:
self.cpp_info.components["librocksdb"].requires.append("lz4::lz4")
if self.options.with_zlib:
self.cpp_info.components["librocksdb"].requires.append("zlib::zlib")
if self.options.with_zstd:
self.cpp_info.components["librocksdb"].requires.append("zstd::zstd")
if self.options.get_safe("with_tbb"):
self.cpp_info.components["librocksdb"].requires.append("onetbb::onetbb")
if self.options.with_jemalloc:
self.cpp_info.components["librocksdb"].requires.append("jemalloc::jemalloc")
View File
@@ -0,0 +1,30 @@
--- a/include/rocksdb/utilities/checkpoint.h
+++ b/include/rocksdb/utilities/checkpoint.h
@@ -8,6 +8,7 @@
#pragma once
#ifndef ROCKSDB_LITE
+#include <cstdint>
#include <string>
#include <vector>
#include "rocksdb/status.h"
--- a/table/block_based/data_block_hash_index.h
+++ b/table/block_based/data_block_hash_index.h
@@ -5,6 +5,7 @@
#pragma once
+#include <cstdint>
#include <string>
#include <vector>
--- a/util/string_util.h
+++ b/util/string_util.h
@@ -6,6 +6,7 @@
#pragma once
+#include <cstdint>
#include <sstream>
#include <string>
#include <unordered_map>
View File
@@ -0,0 +1,16 @@
diff --git a/CMakeLists.txt b/CMakeLists.txt
index ec59d4491..35577c998 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -101 +100,0 @@ if(MSVC)
- option(WITH_GFLAGS "build with GFlags" OFF)
@@ -103,2 +102,2 @@ if(MSVC)
- include(${CMAKE_CURRENT_SOURCE_DIR}/thirdparty.inc)
-else()
+endif()
+
@@ -117 +116 @@ else()
- if(MINGW)
+ if(MINGW OR MSVC)
@@ -183 +181,0 @@ else()
-endif()
View File
@@ -10,8 +10,8 @@ env:
MAKEFLAGS: -j4
BUILD: check
### secp256k1 config
ECMULTWINDOW: auto
ECMULTGENPRECISION: auto
ASM: no
WIDEMUL: auto
WITH_VALGRIND: yes
@@ -20,18 +20,20 @@ env:
EXPERIMENTAL: no
ECDH: no
RECOVERY: no
EXTRAKEYS: no
SCHNORRSIG: no
MUSIG: no
ELLSWIFT: no
### test options
SECP256K1_TEST_ITERS:
BENCH: yes
SECP256K1_BENCH_ITERS: 2
CTIMETESTS: yes
# Compile and run the tests
EXAMPLES: yes
# https://cirrus-ci.org/pricing/#compute-credits
credits_snippet: &CREDITS
# Don't use any credits for now.
use_compute_credits: false
cat_logs_snippet: &CAT_LOGS
always:
cat_tests_log_script:
@@ -51,51 +53,357 @@ cat_logs_snippet: &CAT_LOGS
cat_ci_env_script:
- env
linux_arm64_container_snippet: &LINUX_ARM64_CONTAINER
env_script:
- env | tee /tmp/env
build_script:
- DOCKER_BUILDKIT=1 docker build --file "ci/linux-debian.Dockerfile" --tag="ci_secp256k1_arm"
- docker image prune --force # Cleanup stale layers
merge_base_script_snippet: &MERGE_BASE
merge_base_script:
- if [ "$CIRRUS_PR" = "" ]; then exit 0; fi
- git fetch --depth=1 $CIRRUS_REPO_CLONE_URL "pull/${CIRRUS_PR}/merge"
- git checkout FETCH_HEAD # Use merged changes to detect silent merge conflicts
linux_container_snippet: &LINUX_CONTAINER
container:
dockerfile: ci/linux-debian.Dockerfile
# Reduce number of CPUs to be able to do more builds in parallel.
cpu: 1
# Gives us more CPUs for free if they're available.
greedy: true
# More than enough for our scripts.
memory: 1G
task:
name: "x86_64: Linux (Debian stable)"
<< : *LINUX_CONTAINER
matrix: &ENV_MATRIX
- env: {WIDEMUL: int64, RECOVERY: yes}
- env: {WIDEMUL: int64, ECDH: yes, SCHNORRSIG: yes}
- env: {WIDEMUL: int128}
- env: {WIDEMUL: int128_struct}
- env: {WIDEMUL: int128, RECOVERY: yes, SCHNORRSIG: yes}
- env: {WIDEMUL: int128, ECDH: yes, SCHNORRSIG: yes}
- env: {WIDEMUL: int128, ASM: x86_64}
- env: { RECOVERY: yes, SCHNORRSIG: yes}
- env: {CTIMETESTS: no, RECOVERY: yes, ECDH: yes, SCHNORRSIG: yes, CPPFLAGS: -DVERIFY}
- env: {BUILD: distcheck, WITH_VALGRIND: no, CTIMETESTS: no, BENCH: no}
- env: {CPPFLAGS: -DDETERMINISTIC}
- env: {CFLAGS: -O0, CTIMETESTS: no}
- env: { ECMULTGENPRECISION: 2, ECMULTWINDOW: 2 }
- env: { ECMULTGENPRECISION: 8, ECMULTWINDOW: 4 }
matrix:
- env:
CC: gcc
- env:
CC: clang
<< : *MERGE_BASE
test_script:
- docker run --rm --mount "type=bind,src=./,dst=/ci_secp256k1" --env-file /tmp/env --replace --name "ci_secp256k1_arm" "ci_secp256k1_arm" bash -c "cd /ci_secp256k1/ && ./ci/ci.sh"
task:
name: "ARM64: Linux (Debian stable)"
persistent_worker:
labels:
type: arm64
env:
ECDH: yes
RECOVERY: yes
EXTRAKEYS: yes
SCHNORRSIG: yes
MUSIG: yes
ELLSWIFT: yes
matrix:
# Currently only gcc-snapshot, the other compilers are tested on GHA with QEMU
- env: { CC: 'gcc-snapshot' }
<< : *LINUX_ARM64_CONTAINER
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
name: "ARM64: Linux (Debian stable), Valgrind"
persistent_worker:
labels:
type: arm64
name: "i686: Linux (Debian stable)"
<< : *LINUX_CONTAINER
env:
HOST: i686-linux-gnu
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
matrix:
- env:
CC: i686-linux-gnu-gcc
- env:
CC: clang --target=i686-pc-linux-gnu -isystem /usr/i686-linux-gnu/include
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
name: "arm64: macOS Ventura"
macos_instance:
image: ghcr.io/cirruslabs/macos-ventura-base:latest
env:
HOMEBREW_NO_AUTO_UPDATE: 1
HOMEBREW_NO_INSTALL_CLEANUP: 1
# Cirrus gives us a fixed number of 4 virtual CPUs. Not that we even have that many jobs at the moment...
MAKEFLAGS: -j5
matrix:
<< : *ENV_MATRIX
env:
ASM: no
WITH_VALGRIND: no
CTIMETESTS: no
matrix:
- env:
CC: gcc
- env:
CC: clang
brew_script:
- brew install automake libtool gcc
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
<< : *CREDITS
task:
name: "s390x (big-endian): Linux (Debian stable, QEMU)"
<< : *LINUX_CONTAINER
env:
WRAPPER_CMD: qemu-s390x
SECP256K1_TEST_ITERS: 16
HOST: s390x-linux-gnu
WITH_VALGRIND: no
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
CTIMETESTS: no
<< : *MERGE_BASE
test_script:
# https://sourceware.org/bugzilla/show_bug.cgi?id=27008
- rm /etc/ld.so.cache
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
name: "ARM32: Linux (Debian stable, QEMU)"
<< : *LINUX_CONTAINER
env:
WRAPPER_CMD: qemu-arm
SECP256K1_TEST_ITERS: 16
HOST: arm-linux-gnueabihf
WITH_VALGRIND: no
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
CTIMETESTS: no
matrix:
- env: {}
- env: {EXPERIMENTAL: yes, ASM: arm32}
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
name: "ARM64: Linux (Debian stable, QEMU)"
<< : *LINUX_CONTAINER
env:
WRAPPER_CMD: qemu-aarch64
SECP256K1_TEST_ITERS: 16
HOST: aarch64-linux-gnu
WITH_VALGRIND: no
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
CTIMETESTS: no
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
name: "ppc64le: Linux (Debian stable, QEMU)"
<< : *LINUX_CONTAINER
env:
WRAPPER_CMD: qemu-ppc64le
SECP256K1_TEST_ITERS: 16
HOST: powerpc64le-linux-gnu
WITH_VALGRIND: no
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
CTIMETESTS: no
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
<< : *LINUX_CONTAINER
env:
WRAPPER_CMD: wine
WITH_VALGRIND: no
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
CTIMETESTS: no
matrix:
- name: "x86_64 (mingw32-w64): Windows (Debian stable, Wine)"
env:
HOST: x86_64-w64-mingw32
- name: "i686 (mingw32-w64): Windows (Debian stable, Wine)"
env:
HOST: i686-w64-mingw32
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
<< : *LINUX_CONTAINER
env:
WRAPPER_CMD: wine
WERROR_CFLAGS: -WX
WITH_VALGRIND: no
ECDH: yes
RECOVERY: yes
EXPERIMENTAL: yes
SCHNORRSIG: yes
CTIMETESTS: no
# Use a MinGW-w64 host to tell ./configure we're building for Windows.
# This will detect some MinGW-w64 tools but then make will need only
# the MSVC tools CC, AR and NM as specified below.
HOST: x86_64-w64-mingw32
CC: /opt/msvc/bin/x64/cl
AR: /opt/msvc/bin/x64/lib
NM: /opt/msvc/bin/x64/dumpbin -symbols -headers
# Set non-essential options that affect the CLI messages here.
# (They depend on the user's taste, so we don't want to set them automatically in configure.ac.)
CFLAGS: -nologo -diagnostics:caret
LDFLAGS: -Xlinker -Xlinker -Xlinker -nologo
matrix:
- name: "x86_64 (MSVC): Windows (Debian stable, Wine)"
- name: "x86_64 (MSVC): Windows (Debian stable, Wine, int128_struct)"
env:
WIDEMUL: int128_struct
- name: "x86_64 (MSVC): Windows (Debian stable, Wine, int128_struct with __(u)mulh)"
env:
WIDEMUL: int128_struct
CPPFLAGS: -DSECP256K1_MSVC_MULH_TEST_OVERRIDE
- name: "i686 (MSVC): Windows (Debian stable, Wine)"
env:
HOST: i686-w64-mingw32
CC: /opt/msvc/bin/x86/cl
AR: /opt/msvc/bin/x86/lib
NM: /opt/msvc/bin/x86/dumpbin -symbols -headers
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
# Sanitizers
task:
<< : *LINUX_CONTAINER
env:
ECDH: yes
RECOVERY: yes
EXTRAKEYS: yes
SCHNORRSIG: yes
MUSIG: yes
ELLSWIFT: yes
WRAPPER_CMD: 'valgrind --error-exitcode=42'
SECP256K1_TEST_ITERS: 2
CTIMETESTS: no
matrix:
- env: { CC: 'gcc' }
- env: { CC: 'clang' }
- env: { CC: 'gcc-snapshot' }
- env: { CC: 'clang-snapshot' }
<< : *LINUX_ARM64_CONTAINER
- name: "Valgrind (memcheck)"
container:
cpu: 2
env:
# The `--error-exitcode` is required to make the test fail if valgrind found errors, otherwise it'll return 0 (https://www.valgrind.org/docs/manual/manual-core.html)
WRAPPER_CMD: "valgrind --error-exitcode=42"
SECP256K1_TEST_ITERS: 2
- name: "UBSan, ASan, LSan"
container:
memory: 2G
env:
CFLAGS: "-fsanitize=undefined,address -g"
UBSAN_OPTIONS: "print_stacktrace=1:halt_on_error=1"
ASAN_OPTIONS: "strict_string_checks=1:detect_stack_use_after_return=1:detect_leaks=1"
LSAN_OPTIONS: "use_unaligned=1"
SECP256K1_TEST_ITERS: 32
# Try to cover many configurations with just a tiny matrix.
matrix:
- env:
ASM: auto
- env:
ASM: no
ECMULTGENPRECISION: 2
ECMULTWINDOW: 2
matrix:
- env:
CC: clang
- env:
HOST: i686-linux-gnu
CC: i686-linux-gnu-gcc
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
# Memory sanitizers
task:
<< : *LINUX_CONTAINER
name: "MSan"
env:
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
CTIMETESTS: yes
CC: clang
SECP256K1_TEST_ITERS: 32
ASM: no
WITH_VALGRIND: no
container:
memory: 2G
matrix:
- env:
CFLAGS: "-fsanitize=memory -g"
- env:
ECMULTGENPRECISION: 2
ECMULTWINDOW: 2
CFLAGS: "-fsanitize=memory -g -O3"
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
name: "C++ -fpermissive (entire project)"
<< : *LINUX_CONTAINER
env:
CC: g++
CFLAGS: -fpermissive -g
CPPFLAGS: -DSECP256K1_CPLUSPLUS_TEST_OVERRIDE
WERROR_CFLAGS:
ECDH: yes
RECOVERY: yes
SCHNORRSIG: yes
<< : *MERGE_BASE
test_script:
- ./ci/cirrus.sh
<< : *CAT_LOGS
task:
name: "C++ (public headers)"
<< : *LINUX_CONTAINER
test_script:
- g++ -Werror include/*.h
- clang -Werror -x c++-header include/*.h
- /opt/msvc/bin/x64/cl.exe -c -WX -TP include/*.h
task:
name: "sage prover"
<< : *LINUX_CONTAINER
test_script:
- cd sage
- sage prove_group_implementations.sage
task:
name: "x86_64: Windows (VS 2022)"
windows_container:
image: cirrusci/windowsservercore:visualstudio2022
cpu: 4
memory: 3840MB
env:
PATH: '%CIRRUS_WORKING_DIR%\build\src\RelWithDebInfo;%PATH%'
x64_NATIVE_TOOLS: '"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build\vcvars64.bat"'
# Ignore MSBuild warning MSB8029.
# See: https://learn.microsoft.com/en-us/visualstudio/msbuild/errors/msb8029?view=vs-2022
IgnoreWarnIntDirInTempDetected: 'true'
merge_script:
- PowerShell -NoLogo -Command if ($env:CIRRUS_PR -ne $null) { git fetch $env:CIRRUS_REPO_CLONE_URL pull/$env:CIRRUS_PR/merge; git reset --hard FETCH_HEAD; }
configure_script:
- '%x64_NATIVE_TOOLS%'
- cmake -E env CFLAGS="/WX" cmake -G "Visual Studio 17 2022" -A x64 -S . -B build -DSECP256K1_ENABLE_MODULE_RECOVERY=ON -DSECP256K1_BUILD_EXAMPLES=ON
build_script:
- '%x64_NATIVE_TOOLS%'
- cmake --build build --config RelWithDebInfo -- -property:UseMultiToolTask=true;CL_MPcount=5
check_script:
- '%x64_NATIVE_TOOLS%'
- ctest -C RelWithDebInfo --test-dir build -j 5
- build\src\RelWithDebInfo\bench_ecmult.exe
- build\src\RelWithDebInfo\bench_internal.exe
- build\src\RelWithDebInfo\bench.exe


@@ -10,8 +10,6 @@ ctime_tests
ecdh_example
ecdsa_example
schnorr_example
ellswift_example
musig_example
*.exe
*.so
*.a


@@ -5,192 +5,79 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.6.0] - 2024-11-04
#### Added
- New module `musig` implements the MuSig2 multisignature scheme according to the [BIP 327 specification](https://github.com/bitcoin/bips/blob/master/bip-0327.mediawiki). See:
- Header file `include/secp256k1_musig.h` which defines the new API.
- Document `doc/musig.md` for further notes on API usage.
- Usage example `examples/musig.c`.
- New CMake variable `SECP256K1_APPEND_LDFLAGS` for appending linker flags to the build command.
#### Changed
- API functions now use a significantly more robust method to clear secrets from the stack before returning. However, secret clearing remains a best-effort security measure and cannot guarantee complete removal.
- Any type `secp256k1_foo` can now be forward-declared using `typedef struct secp256k1_foo secp256k1_foo;` (or also `struct secp256k1_foo;` in C++). A sketch follows this list.
- Organized CMake build artifacts into dedicated directories (`bin/` for executables, `lib/` for libraries) to improve build output structure and Windows shared library compatibility.
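A minimal sketch of what the forward-declaration guarantee enables (the application-side names here are hypothetical):

```C
/* Forward-declare the opaque type without including secp256k1.h. */
typedef struct secp256k1_context secp256k1_context;

/* A hypothetical application header can now mention the type. */
void my_app_sign_something(secp256k1_context *ctx);
```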
#### Removed
- Removed the `secp256k1_scratch_space` struct and its associated functions `secp256k1_scratch_space_create` and `secp256k1_scratch_space_destroy` because the scratch space was unused in the API.
#### ABI Compatibility
The symbols `secp256k1_scratch_space_create` and `secp256k1_scratch_space_destroy` were removed.
Otherwise, the library maintains backward compatibility with versions 0.3.x through 0.5.x.
## [0.5.1] - 2024-08-01
#### Added
- Added usage example for an ElligatorSwift key exchange.
#### Changed
- The default size of the precomputed table for signing was changed from 22 KiB to 86 KiB. The size can be changed with the configure option `--ecmult-gen-kb` (`SECP256K1_ECMULT_GEN_KB` for CMake).
- "auto" is no longer an accepted value for the `--with-ecmult-window` and `--with-ecmult-gen-kb` configure options (this also applies to `SECP256K1_ECMULT_WINDOW_SIZE` and `SECP256K1_ECMULT_GEN_KB` in CMake). To achieve the same configuration as previously provided by the "auto" value, omit setting the configure option explicitly.
#### Fixed
- Fixed compilation when the extrakeys module is disabled.
#### ABI Compatibility
The ABI is backward compatible with versions 0.5.0, 0.4.x and 0.3.x.
## [0.5.0] - 2024-05-06
#### Added
- New function `secp256k1_ec_pubkey_sort` that sorts public keys using lexicographic (of compressed serialization) order.
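A minimal usage sketch, assuming a context and two parsed keys created elsewhere (error handling omitted):

```C
#include <secp256k1.h>

/* Order two public keys by the lexicographic order of their compressed
 * (33-byte) serializations; `ctx`, `pk_a` and `pk_b` are assumed to have
 * been created elsewhere. */
static void sort_two_keys(const secp256k1_context *ctx,
                          const secp256k1_pubkey *pk_a,
                          const secp256k1_pubkey *pk_b) {
    const secp256k1_pubkey *keys[2];
    keys[0] = pk_a;
    keys[1] = pk_b;
    (void)secp256k1_ec_pubkey_sort(ctx, keys, 2);
}
```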
#### Changed
- The implementation of the point multiplication algorithm used for signing and public key generation was changed, resulting in improved performance for those operations.
- The related configure option `--ecmult-gen-precision` was replaced with `--ecmult-gen-kb` (`SECP256K1_ECMULT_GEN_KB` for CMake).
- This changes the supported precomputed table sizes for these operations. The new supported sizes are 2 KiB, 22 KiB, or 86 KiB (while the old supported sizes were 32 KiB, 64 KiB, or 512 KiB).
#### ABI Compatibility
The ABI is backward compatible with versions 0.4.x and 0.3.x.
## [0.4.1] - 2023-12-21
#### Changed
- The point multiplication algorithm used for ECDH operations (module `ecdh`) was replaced with a slightly faster one.
- Optional handwritten x86_64 assembly for field operations was removed because modern C compilers are able to output more efficient assembly. This change results in a significant speedup of some library functions when handwritten x86_64 assembly is enabled (`--with-asm=x86_64` in GNU Autotools, `-DSECP256K1_ASM=x86_64` in CMake), which is the default on x86_64. Benchmarks with GCC 10.5.0 show a 10% speedup for `secp256k1_ecdsa_verify` and `secp256k1_schnorrsig_verify`.
#### ABI Compatibility
The ABI is backward compatible with versions 0.4.0 and 0.3.x.
## [0.4.0] - 2023-09-04
#### Added
- New module `ellswift` implements ElligatorSwift encoding for public keys and x-only Diffie-Hellman key exchange for them.
ElligatorSwift permits representing secp256k1 public keys as 64-byte arrays which cannot be distinguished from uniformly random (a usage sketch follows this list). See:
- Header file `include/secp256k1_ellswift.h` which defines the new API.
- Document `doc/ellswift.md` which explains the mathematical background of the scheme.
- The [paper](https://eprint.iacr.org/2022/759) on which the scheme is based.
- We now test the library with unreleased development snapshots of GCC and Clang. This gives us an early chance to catch miscompilations and constant-time issues introduced by the compiler (such as those that led to the previous two releases).
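A round-trip sketch, assuming a context and a valid 32-byte secret key from elsewhere (error handling omitted; consult `include/secp256k1_ellswift.h` for the authoritative API):

```C
#include <secp256k1_ellswift.h>

/* Encode a public key as a uniformly-random-looking 64-byte array and
 * decode it back; `ctx` and `seckey32` are assumed to be valid. */
static void ellswift_roundtrip(const secp256k1_context *ctx,
                               const unsigned char *seckey32) {
    unsigned char ell64[64];
    secp256k1_pubkey pk;
    /* The final argument is optional auxiliary randomness (may be NULL). */
    (void)secp256k1_ellswift_create(ctx, ell64, seckey32, NULL);
    (void)secp256k1_ellswift_decode(ctx, &pk, ell64);
}
```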
#### Fixed
- Fixed symbol visibility in Windows DLL builds, where three internal library symbols were wrongly exported.
#### Changed
- When consuming libsecp256k1 as a static library on Windows, the user must now define the `SECP256K1_STATIC` macro before including `secp256k1.h`.
#### ABI Compatibility
This release is backward compatible with the ABI of 0.3.0, 0.3.1, and 0.3.2. Symbol visibility is now believed to be handled properly on supported platforms and is now considered to be part of the ABI. Please report any improperly exported symbols as a bug.
## [0.3.2] - 2023-05-13
We strongly recommend updating to 0.3.2 if you use or plan to use GCC >=13 to compile libsecp256k1. When in doubt, check the GCC version using `gcc -v`.
#### Security
- Module `ecdh`: Fix "constant-timeness" issue with GCC 13.1 (and potentially future versions of GCC) that could leave applications using libsecp256k1's ECDH module vulnerable to a timing side-channel attack. The fix avoids secret-dependent control flow during ECDH computations when libsecp256k1 is compiled with GCC 13.1.
#### Fixed
- Fixed an old bug that permitted compilers to potentially output bad assembly code on x86_64. In theory, it could lead to a crash or a read of unrelated memory, but this has never been observed on any compilers so far.
#### Changed
- Various improvements and changes to CMake builds. CMake builds remain experimental.
- Made API versioning consistent with GNU Autotools builds.
- Switched to `BUILD_SHARED_LIBS` variable for controlling whether to build a static or a shared library.
- Added `SECP256K1_INSTALL` variable for controlling whether to install the build artefacts.
- Renamed asm build option `arm` to `arm32`. Use `--with-asm=arm32` instead of `--with-asm=arm` (GNU Autotools), and `-DSECP256K1_ASM=arm32` instead of `-DSECP256K1_ASM=arm` (CMake).
#### ABI Compatibility
The ABI is compatible with versions 0.3.0 and 0.3.1.
## [0.3.1] - 2023-04-10
We strongly recommend updating to 0.3.1 if you use or plan to use Clang >=14 to compile libsecp256k1, e.g., Xcode >=14 on macOS has Clang >=14. When in doubt, check the Clang version using `clang -v`.
#### Security
- Fix "constant-timeness" issue with Clang >=14 that could leave applications using libsecp256k1 vulnerable to a timing side-channel attack. The fix avoids secret-dependent control flow and secret-dependent memory accesses in conditional moves of memory objects when libsecp256k1 is compiled with Clang >=14.
- Fix "constant-timeness" issue with Clang >=14 that could leave applications using libsecp256k1 vulnerable to a timing side-channel attack. The fix avoids secret-dependent control flow and secret-dependent memory accesses in conditional moves of memory objects when libsecp256k1 is compiled with Clang >=14.
#### Added
- Added tests against [Project Wycheproof's](https://github.com/google/wycheproof/) set of ECDSA test vectors (Bitcoin "low-S" variant), a fixed set of test cases designed to trigger various edge cases.
#### Changed
- Increased minimum required CMake version to 3.13. CMake builds remain experimental.
#### ABI Compatibility
The ABI is compatible with version 0.3.0.
## [0.3.0] - 2023-03-08
#### Added
- Added experimental support for CMake builds. Traditional GNU Autotools builds (`./configure` and `make`) remain fully supported.
- Usage examples: Added a recommended method for securely clearing sensitive data, e.g., secret keys, from memory (a sketch follows this list).
- Tests: Added a new test binary `noverify_tests`. This binary runs the tests without some additional checks present in the ordinary `tests` binary and is thereby closer to production binaries. The `noverify_tests` binary is automatically run as part of the `make check` target.
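The recommended clearing method is along the following lines (a best-effort portable sketch; the function name is illustrative, and platform primitives such as `memset_s` or `SecureZeroMemory` are preferable where available):

```C
#include <string.h>

/* Best-effort secure clearing: routing memset through a volatile function
 * pointer keeps the compiler from optimizing away a store to memory that
 * is never read again. */
static void secure_erase(void *ptr, size_t len) {
    void *(*const volatile memset_ptr)(void *, int, size_t) = memset;
    memset_ptr(ptr, 0, len);
}
```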
#### Fixed
- Fixed declarations of API variables for MSVC (`__declspec(dllimport)`). This fixes MSVC builds of programs which link against a libsecp256k1 DLL dynamically and use API variables (and not only API functions). Unfortunately, the MSVC linker now will emit warning `LNK4217` when trying to link against libsecp256k1 statically. Pass `/ignore:4217` to the linker to suppress this warning.
#### Changed
- Forbade cloning or destroying `secp256k1_context_static`. Create a new context instead of cloning the static context. (If this change breaks your code, your code is probably wrong.)
- Forbade randomizing (copies of) `secp256k1_context_static`. Randomizing a copy of `secp256k1_context_static` did not have any effect and did not provide defense-in-depth protection against side-channel attacks. Create a new context if you want to benefit from randomization.
#### Removed
- Removed the configuration header `src/libsecp256k1-config.h`. We recommend passing flags to `./configure` or `cmake` to set configuration options (see `./configure --help` or `cmake -LH`). If you cannot or do not want to use one of the supported build systems, pass configuration flags such as `-DSECP256K1_ENABLE_MODULE_SCHNORRSIG` manually to the compiler (see the file `configure.ac` for supported flags).
#### ABI Compatibility
Due to changes in the API regarding `secp256k1_context_static` described above, the ABI is _not_ compatible with previous versions.
Due to changes in the API regarding `secp256k1_context_static` described above, the ABI is *not* compatible with previous versions.
## [0.2.0] - 2022-12-12
#### Added
- Added usage examples for common use cases in a new `examples/` directory.
- Added `secp256k1_selftest`, to be used in conjunction with `secp256k1_context_static`.
- Added support for 128-bit wide multiplication on MSVC for x86_64 and arm64, giving roughly a 20% speedup on those platforms (see the sketch below).
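The idea behind the speedup, sketched portably (the MSVC build relies on compiler intrinsics such as `_umul128`/`__umulh`; this 32-bit-limb version only illustrates the computation they replace):

```C
#include <stdint.h>

typedef struct { uint64_t lo, hi; } u128_sketch;

/* Full 64x64 -> 128-bit multiply built from 32-bit limbs. */
static u128_sketch mul_64x64(uint64_t a, uint64_t b) {
    uint64_t a0 = (uint32_t)a, a1 = a >> 32;
    uint64_t b0 = (uint32_t)b, b1 = b >> 32;
    uint64_t t = a0 * b0;
    uint64_t mid = a1 * b0 + (t >> 32);      /* cannot overflow */
    uint64_t mid2 = a0 * b1 + (uint32_t)mid; /* cannot overflow */
    u128_sketch r;
    r.lo = (mid2 << 32) | (uint32_t)t;
    r.hi = a1 * b1 + (mid >> 32) + (mid2 >> 32);
    return r;
}
```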
#### Changed
- Enabled modules `schnorrsig`, `extrakeys` and `ecdh` by default in `./configure`.
- The `secp256k1_nonce_function_rfc6979` nonce function, used by default by `secp256k1_ecdsa_sign`, now reduces the message hash modulo the group order to match the specification. This only affects improper use of ECDSA signing API.
#### Deprecated
- Deprecated context flags `SECP256K1_CONTEXT_VERIFY` and `SECP256K1_CONTEXT_SIGN`. Use `SECP256K1_CONTEXT_NONE` instead.
- Renamed `secp256k1_context_no_precomp` to `secp256k1_context_static`.
- Module `schnorrsig`: renamed `secp256k1_schnorrsig_sign` to `secp256k1_schnorrsig_sign32`.
#### ABI Compatibility
Since this is the first release, we do not compare application binary interfaces.
However, there are earlier unreleased versions of libsecp256k1 that are _not_ ABI compatible with this version.
However, there are earlier unreleased versions of libsecp256k1 that are *not* ABI compatible with this version.
## [0.1.0] - 2013-03-05 to 2021-12-25
@@ -198,11 +85,7 @@ This version was in fact never released.
The number was given by the build system since the introduction of autotools in Jan 2014 (ea0fe5a5bf0c04f9cc955b2966b614f5f378c6f6).
Therefore, this version number does not uniquely identify a set of source files.
[0.6.0]: https://github.com/bitcoin-core/secp256k1/compare/v0.5.1...v0.6.0
[0.5.1]: https://github.com/bitcoin-core/secp256k1/compare/v0.5.0...v0.5.1
[0.5.0]: https://github.com/bitcoin-core/secp256k1/compare/v0.4.1...v0.5.0
[0.4.1]: https://github.com/bitcoin-core/secp256k1/compare/v0.4.0...v0.4.1
[0.4.0]: https://github.com/bitcoin-core/secp256k1/compare/v0.3.2...v0.4.0
[unreleased]: https://github.com/bitcoin-core/secp256k1/compare/v0.3.2...HEAD
[0.3.2]: https://github.com/bitcoin-core/secp256k1/compare/v0.3.1...v0.3.2
[0.3.1]: https://github.com/bitcoin-core/secp256k1/compare/v0.3.0...v0.3.1
[0.3.0]: https://github.com/bitcoin-core/secp256k1/compare/v0.2.0...v0.3.0


@@ -1,29 +1,32 @@
cmake_minimum_required(VERSION 3.16)
cmake_minimum_required(VERSION 3.13)
if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.15)
# MSVC runtime library flags are selected by the CMAKE_MSVC_RUNTIME_LIBRARY abstraction.
cmake_policy(SET CMP0091 NEW)
# MSVC warning flags are not in CMAKE_<LANG>_FLAGS by default.
cmake_policy(SET CMP0092 NEW)
endif()
#=============================
# Project / Package metadata
#=============================
project(libsecp256k1
# The package (a.k.a. release) version is based on semantic versioning 2.0.0 of
# the API. All changes in experimental modules are treated as
# backwards-compatible and therefore at most increase the minor version.
VERSION 0.6.0
VERSION 0.3.2
DESCRIPTION "Optimized C library for ECDSA signatures and secret/public key operations on curve secp256k1."
HOMEPAGE_URL "https://github.com/bitcoin-core/secp256k1"
LANGUAGES C
)
enable_testing()
list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake)
if(CMAKE_VERSION VERSION_LESS 3.21)
# Emulates CMake 3.21+ behavior.
if(CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
set(PROJECT_IS_TOP_LEVEL ON)
set(${PROJECT_NAME}_IS_TOP_LEVEL ON)
get_directory_property(parent_directory PARENT_DIRECTORY)
if(parent_directory)
set(PROJECT_IS_TOP_LEVEL OFF CACHE INTERNAL "Emulates CMake 3.21+ behavior.")
set(${PROJECT_NAME}_IS_TOP_LEVEL OFF CACHE INTERNAL "Emulates CMake 3.21+ behavior.")
else()
set(PROJECT_IS_TOP_LEVEL OFF)
set(${PROJECT_NAME}_IS_TOP_LEVEL OFF)
set(PROJECT_IS_TOP_LEVEL ON CACHE INTERNAL "Emulates CMake 3.21+ behavior.")
set(${PROJECT_NAME}_IS_TOP_LEVEL ON CACHE INTERNAL "Emulates CMake 3.21+ behavior.")
endif()
unset(parent_directory)
endif()
# The library version is based on libtool versioning of the ABI. The set of
@@ -31,19 +34,15 @@ endif()
# https://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.html
# All changes in experimental modules are treated as if they don't affect the
# interface and therefore only increase the revision.
set(${PROJECT_NAME}_LIB_VERSION_CURRENT 5)
set(${PROJECT_NAME}_LIB_VERSION_REVISION 0)
set(${PROJECT_NAME}_LIB_VERSION_CURRENT 2)
set(${PROJECT_NAME}_LIB_VERSION_REVISION 2)
set(${PROJECT_NAME}_LIB_VERSION_AGE 0)
#=============================
# Language setup
#=============================
set(CMAKE_C_STANDARD 90)
set(CMAKE_C_EXTENSIONS OFF)
#=============================
# Configurable options
#=============================
list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake)
option(BUILD_SHARED_LIBS "Build shared libraries." ON)
option(SECP256K1_DISABLE_SHARED "Disable shared library. Overrides BUILD_SHARED_LIBS." OFF)
if(SECP256K1_DISABLE_SHARED)
@@ -52,49 +51,24 @@ endif()
option(SECP256K1_INSTALL "Enable installation." ${PROJECT_IS_TOP_LEVEL})
## Modules
# We declare all options before processing them, to make sure we can express
# dependencies while processing.
option(SECP256K1_ENABLE_MODULE_ECDH "Enable ECDH module." ON)
if(SECP256K1_ENABLE_MODULE_ECDH)
add_compile_definitions(ENABLE_MODULE_ECDH=1)
endif()
option(SECP256K1_ENABLE_MODULE_RECOVERY "Enable ECDSA pubkey recovery module." OFF)
option(SECP256K1_ENABLE_MODULE_EXTRAKEYS "Enable extrakeys module." ON)
option(SECP256K1_ENABLE_MODULE_SCHNORRSIG "Enable schnorrsig module." ON)
option(SECP256K1_ENABLE_MODULE_MUSIG "Enable musig module." ON)
option(SECP256K1_ENABLE_MODULE_ELLSWIFT "Enable ElligatorSwift module." ON)
# Processing must be done in a topological sorting of the dependency graph
# (dependent module first).
if(SECP256K1_ENABLE_MODULE_ELLSWIFT)
add_compile_definitions(ENABLE_MODULE_ELLSWIFT=1)
endif()
if(SECP256K1_ENABLE_MODULE_MUSIG)
if(DEFINED SECP256K1_ENABLE_MODULE_SCHNORRSIG AND NOT SECP256K1_ENABLE_MODULE_SCHNORRSIG)
message(FATAL_ERROR "Module dependency error: You have disabled the schnorrsig module explicitly, but it is required by the musig module.")
endif()
set(SECP256K1_ENABLE_MODULE_SCHNORRSIG ON)
add_compile_definitions(ENABLE_MODULE_MUSIG=1)
endif()
if(SECP256K1_ENABLE_MODULE_SCHNORRSIG)
if(DEFINED SECP256K1_ENABLE_MODULE_EXTRAKEYS AND NOT SECP256K1_ENABLE_MODULE_EXTRAKEYS)
message(FATAL_ERROR "Module dependency error: You have disabled the extrakeys module explicitly, but it is required by the schnorrsig module.")
endif()
set(SECP256K1_ENABLE_MODULE_EXTRAKEYS ON)
add_compile_definitions(ENABLE_MODULE_SCHNORRSIG=1)
endif()
if(SECP256K1_ENABLE_MODULE_EXTRAKEYS)
add_compile_definitions(ENABLE_MODULE_EXTRAKEYS=1)
endif()
if(SECP256K1_ENABLE_MODULE_RECOVERY)
add_compile_definitions(ENABLE_MODULE_RECOVERY=1)
endif()
if(SECP256K1_ENABLE_MODULE_ECDH)
add_compile_definitions(ENABLE_MODULE_ECDH=1)
option(SECP256K1_ENABLE_MODULE_EXTRAKEYS "Enable extrakeys module." ON)
option(SECP256K1_ENABLE_MODULE_SCHNORRSIG "Enable schnorrsig module." ON)
if(SECP256K1_ENABLE_MODULE_SCHNORRSIG)
set(SECP256K1_ENABLE_MODULE_EXTRAKEYS ON)
add_compile_definitions(ENABLE_MODULE_SCHNORRSIG=1)
endif()
if(SECP256K1_ENABLE_MODULE_EXTRAKEYS)
add_compile_definitions(ENABLE_MODULE_EXTRAKEYS=1)
endif()
option(SECP256K1_USE_EXTERNAL_DEFAULT_CALLBACKS "Enable external default callback functions." OFF)
@@ -102,25 +76,22 @@ if(SECP256K1_USE_EXTERNAL_DEFAULT_CALLBACKS)
add_compile_definitions(USE_EXTERNAL_DEFAULT_CALLBACKS=1)
endif()
set(SECP256K1_ECMULT_WINDOW_SIZE 15 CACHE STRING "Window size for ecmult precomputation for verification, specified as integer in range [2..24]. The default value is a reasonable setting for desktop machines (currently 15). [default=15]")
set_property(CACHE SECP256K1_ECMULT_WINDOW_SIZE PROPERTY STRINGS 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24)
set(SECP256K1_ECMULT_WINDOW_SIZE "AUTO" CACHE STRING "Window size for ecmult precomputation for verification, specified as integer in range [2..24]. \"AUTO\" is a reasonable setting for desktop machines (currently 15). [default=AUTO]")
set_property(CACHE SECP256K1_ECMULT_WINDOW_SIZE PROPERTY STRINGS "AUTO" 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24)
include(CheckStringOptionValue)
check_string_option_value(SECP256K1_ECMULT_WINDOW_SIZE)
if(SECP256K1_ECMULT_WINDOW_SIZE STREQUAL "AUTO")
set(SECP256K1_ECMULT_WINDOW_SIZE 15)
endif()
add_compile_definitions(ECMULT_WINDOW_SIZE=${SECP256K1_ECMULT_WINDOW_SIZE})
set(SECP256K1_ECMULT_GEN_KB 86 CACHE STRING "The size of the precomputed table for signing in multiples of 1024 bytes (on typical platforms). Larger values result in possibly better signing or key generation performance at the cost of a larger table. Valid choices are 2, 22, 86. The default value is a reasonable setting for desktop machines (currently 86). [default=86]")
set_property(CACHE SECP256K1_ECMULT_GEN_KB PROPERTY STRINGS 2 22 86)
check_string_option_value(SECP256K1_ECMULT_GEN_KB)
if(SECP256K1_ECMULT_GEN_KB EQUAL 2)
add_compile_definitions(COMB_BLOCKS=2)
add_compile_definitions(COMB_TEETH=5)
elseif(SECP256K1_ECMULT_GEN_KB EQUAL 22)
add_compile_definitions(COMB_BLOCKS=11)
add_compile_definitions(COMB_TEETH=6)
elseif(SECP256K1_ECMULT_GEN_KB EQUAL 86)
add_compile_definitions(COMB_BLOCKS=43)
add_compile_definitions(COMB_TEETH=6)
set(SECP256K1_ECMULT_GEN_PREC_BITS "AUTO" CACHE STRING "Precision bits to tune the precomputed table size for signing, specified as integer 2, 4 or 8. \"AUTO\" is a reasonable setting for desktop machines (currently 4). [default=AUTO]")
set_property(CACHE SECP256K1_ECMULT_GEN_PREC_BITS PROPERTY STRINGS "AUTO" 2 4 8)
check_string_option_value(SECP256K1_ECMULT_GEN_PREC_BITS)
if(SECP256K1_ECMULT_GEN_PREC_BITS STREQUAL "AUTO")
set(SECP256K1_ECMULT_GEN_PREC_BITS 4)
endif()
add_compile_definitions(ECMULT_GEN_PREC_BITS=${SECP256K1_ECMULT_GEN_PREC_BITS})
set(SECP256K1_TEST_OVERRIDE_WIDE_MULTIPLY "OFF" CACHE STRING "Test-only override of the (autodetected by the C code) \"widemul\" setting. Legal values are: \"OFF\", \"int128_struct\", \"int128\" or \"int64\". [default=OFF]")
set_property(CACHE SECP256K1_TEST_OVERRIDE_WIDE_MULTIPLY PROPERTY STRINGS "OFF" "int128_struct" "int128" "int64")
@@ -131,7 +102,7 @@ if(SECP256K1_TEST_OVERRIDE_WIDE_MULTIPLY)
endif()
mark_as_advanced(FORCE SECP256K1_TEST_OVERRIDE_WIDE_MULTIPLY)
set(SECP256K1_ASM "AUTO" CACHE STRING "Assembly to use: \"AUTO\", \"OFF\", \"x86_64\" or \"arm32\" (experimental). [default=AUTO]")
set(SECP256K1_ASM "AUTO" CACHE STRING "Assembly optimizations to use: \"AUTO\", \"OFF\", \"x86_64\" or \"arm32\" (experimental). [default=AUTO]")
set_property(CACHE SECP256K1_ASM PROPERTY STRINGS "AUTO" "OFF" "x86_64" "arm32")
check_string_option_value(SECP256K1_ASM)
if(SECP256K1_ASM STREQUAL "arm32")
@@ -141,7 +112,7 @@ if(SECP256K1_ASM STREQUAL "arm32")
if(HAVE_ARM32_ASM)
add_compile_definitions(USE_EXTERNAL_ASM=1)
else()
message(FATAL_ERROR "ARM32 assembly requested but not available.")
message(FATAL_ERROR "ARM32 assembly optimization requested but not available.")
endif()
elseif(SECP256K1_ASM)
include(CheckX86_64Assembly)
@@ -152,14 +123,14 @@ elseif(SECP256K1_ASM)
elseif(SECP256K1_ASM STREQUAL "AUTO")
set(SECP256K1_ASM "OFF")
else()
message(FATAL_ERROR "x86_64 assembly requested but not available.")
message(FATAL_ERROR "x86_64 assembly optimization requested but not available.")
endif()
endif()
option(SECP256K1_EXPERIMENTAL "Allow experimental configuration options." OFF)
if(NOT SECP256K1_EXPERIMENTAL)
if(SECP256K1_ASM STREQUAL "arm32")
message(FATAL_ERROR "ARM32 assembly is experimental. Use -DSECP256K1_EXPERIMENTAL=ON to allow.")
message(FATAL_ERROR "ARM32 assembly optimization is experimental. Use -DSECP256K1_EXPERIMENTAL=ON to allow.")
endif()
endif()
@@ -196,7 +167,7 @@ else()
string(REGEX REPLACE "-DNDEBUG[ \t\r\n]*" "" CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE}")
string(REGEX REPLACE "-DNDEBUG[ \t\r\n]*" "" CMAKE_C_FLAGS_MINSIZEREL "${CMAKE_C_FLAGS_MINSIZEREL}")
# Prefer -O2 optimization level. (-O3 is CMake's default for Release for many compilers.)
string(REGEX REPLACE "-O3( |$)" "-O2\\1" CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE}")
string(REGEX REPLACE "-O3[ \t\r\n]*" "-O2" CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE}")
endif()
# Define custom "Coverage" build type.
@@ -218,37 +189,31 @@ mark_as_advanced(
CMAKE_SHARED_LINKER_FLAGS_COVERAGE
)
if(PROJECT_IS_TOP_LEVEL)
get_property(is_multi_config GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
set(default_build_type "RelWithDebInfo")
if(is_multi_config)
set(CMAKE_CONFIGURATION_TYPES "${default_build_type}" "Release" "Debug" "MinSizeRel" "Coverage" CACHE STRING
"Supported configuration types."
get_property(is_multi_config GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
set(default_build_type "RelWithDebInfo")
if(is_multi_config)
set(CMAKE_CONFIGURATION_TYPES "${default_build_type}" "Release" "Debug" "MinSizeRel" "Coverage" CACHE STRING
"Supported configuration types."
FORCE
)
else()
set_property(CACHE CMAKE_BUILD_TYPE PROPERTY
STRINGS "${default_build_type}" "Release" "Debug" "MinSizeRel" "Coverage"
)
if(NOT CMAKE_BUILD_TYPE)
message(STATUS "Setting build type to \"${default_build_type}\" as none was specified")
set(CMAKE_BUILD_TYPE "${default_build_type}" CACHE STRING
"Choose the type of build."
FORCE
)
else()
set_property(CACHE CMAKE_BUILD_TYPE PROPERTY
STRINGS "${default_build_type}" "Release" "Debug" "MinSizeRel" "Coverage"
)
if(NOT CMAKE_BUILD_TYPE)
message(STATUS "Setting build type to \"${default_build_type}\" as none was specified")
set(CMAKE_BUILD_TYPE "${default_build_type}" CACHE STRING
"Choose the type of build."
FORCE
)
endif()
endif()
endif()
include(TryAppendCFlags)
if(MSVC)
# Keep the following commands ordered lexicographically.
try_append_c_flags(/W3) # Production quality warning level.
try_append_c_flags(/W2) # Moderate warning level.
try_append_c_flags(/wd4146) # Disable warning C4146 "unary minus operator applied to unsigned type, result still unsigned".
try_append_c_flags(/wd4244) # Disable warning C4244 "'conversion' conversion from 'type1' to 'type2', possible loss of data".
try_append_c_flags(/wd4267) # Disable warning C4267 "'var' : conversion from 'size_t' to 'type', possible loss of data".
# Eliminate deprecation warnings for the older, less secure functions.
add_compile_definitions(_CRT_SECURE_NO_WARNINGS)
else()
# Keep the following commands ordered lexicographically.
try_append_c_flags(-pedantic)
@@ -269,41 +234,17 @@ endif()
set(CMAKE_C_VISIBILITY_PRESET hidden)
set(print_msan_notice)
if(SECP256K1_BUILD_CTIME_TESTS)
include(CheckMemorySanitizer)
check_memory_sanitizer(msan_enabled)
if(msan_enabled)
try_append_c_flags(-fno-sanitize-memory-param-retval)
set(print_msan_notice YES)
endif()
unset(msan_enabled)
# Ask CTest to create a "check" target (e.g., make check) as alias for the "test" target.
# CTEST_TEST_TARGET_ALIAS is not documented but supposed to be user-facing.
# See: https://gitlab.kitware.com/cmake/cmake/-/commit/816c9d1aa1f2b42d40c81a991b68c96eb12b6d2
set(CTEST_TEST_TARGET_ALIAS check)
include(CTest)
# We do not use CTest's BUILD_TESTING because a single toggle for all tests is too coarse for our needs.
mark_as_advanced(BUILD_TESTING)
if(SECP256K1_BUILD_BENCHMARK OR SECP256K1_BUILD_TESTS OR SECP256K1_BUILD_EXHAUSTIVE_TESTS OR SECP256K1_BUILD_CTIME_TESTS OR SECP256K1_BUILD_EXAMPLES)
enable_testing()
endif()
set(SECP256K1_APPEND_CFLAGS "" CACHE STRING "Compiler flags that are appended to the command line after all other flags added by the build system. This variable is intended for debugging and special builds.")
if(SECP256K1_APPEND_CFLAGS)
# Appending to this low-level rule variable is the only way to
# guarantee that the flags appear at the end of the command line.
string(APPEND CMAKE_C_COMPILE_OBJECT " ${SECP256K1_APPEND_CFLAGS}")
endif()
set(SECP256K1_APPEND_LDFLAGS "" CACHE STRING "Linker flags that are appended to the command line after all other flags added by the build system. This variable is intended for debugging and special builds.")
if(SECP256K1_APPEND_LDFLAGS)
# Appending to this low-level rule variable is the only way to
# guarantee that the flags appear at the end of the command line.
string(APPEND CMAKE_C_CREATE_SHARED_LIBRARY " ${SECP256K1_APPEND_LDFLAGS}")
string(APPEND CMAKE_C_LINK_EXECUTABLE " ${SECP256K1_APPEND_LDFLAGS}")
endif()
if(NOT CMAKE_RUNTIME_OUTPUT_DIRECTORY)
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/bin)
endif()
if(NOT CMAKE_LIBRARY_OUTPUT_DIRECTORY)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/lib)
endif()
if(NOT CMAKE_ARCHIVE_OUTPUT_DIRECTORY)
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/lib)
endif()
add_subdirectory(src)
if(SECP256K1_BUILD_EXAMPLES)
add_subdirectory(examples)
@@ -325,13 +266,11 @@ message(" ECDH ................................ ${SECP256K1_ENABLE_MODULE_ECDH}
message(" ECDSA pubkey recovery ............... ${SECP256K1_ENABLE_MODULE_RECOVERY}")
message(" extrakeys ........................... ${SECP256K1_ENABLE_MODULE_EXTRAKEYS}")
message(" schnorrsig .......................... ${SECP256K1_ENABLE_MODULE_SCHNORRSIG}")
message(" musig ............................... ${SECP256K1_ENABLE_MODULE_MUSIG}")
message(" ElligatorSwift ...................... ${SECP256K1_ENABLE_MODULE_ELLSWIFT}")
message("Parameters:")
message(" ecmult window size .................. ${SECP256K1_ECMULT_WINDOW_SIZE}")
message(" ecmult gen table size ............... ${SECP256K1_ECMULT_GEN_KB} KiB")
message(" ecmult gen precision bits ........... ${SECP256K1_ECMULT_GEN_PREC_BITS}")
message("Optional features:")
message(" assembly ............................ ${SECP256K1_ASM}")
message(" assembly optimization ............... ${SECP256K1_ASM}")
message(" external callbacks .................. ${SECP256K1_USE_EXTERNAL_DEFAULT_CALLBACKS}")
if(SECP256K1_TEST_OVERRIDE_WIDE_MULTIPLY)
message(" wide multiplication (test-only) ..... ${SECP256K1_TEST_OVERRIDE_WIDE_MULTIPLY}")
@@ -358,7 +297,7 @@ message("Valgrind .............................. ${SECP256K1_VALGRIND}")
get_directory_property(definitions COMPILE_DEFINITIONS)
string(REPLACE ";" " " definitions "${definitions}")
message("Preprocessor defined macros ........... ${definitions}")
message("C compiler ............................ ${CMAKE_C_COMPILER_ID} ${CMAKE_C_COMPILER_VERSION}, ${CMAKE_C_COMPILER}")
message("C compiler ............................ ${CMAKE_C_COMPILER}")
message("CFLAGS ................................ ${CMAKE_C_FLAGS}")
get_directory_property(compile_options COMPILE_OPTIONS)
string(REPLACE ";" " " compile_options "${compile_options}")
@@ -381,20 +320,7 @@ else()
message(" - LDFLAGS for executables ............ ${CMAKE_EXE_LINKER_FLAGS_DEBUG}")
message(" - LDFLAGS for shared libraries ....... ${CMAKE_SHARED_LINKER_FLAGS_DEBUG}")
endif()
if(SECP256K1_APPEND_CFLAGS)
message("SECP256K1_APPEND_CFLAGS ............... ${SECP256K1_APPEND_CFLAGS}")
endif()
if(SECP256K1_APPEND_LDFLAGS)
message("SECP256K1_APPEND_LDFLAGS .............. ${SECP256K1_APPEND_LDFLAGS}")
endif()
message("")
if(print_msan_notice)
message(
"Note:\n"
" MemorySanitizer detected, tried to add -fno-sanitize-memory-param-retval to compile options\n"
" to avoid false positives in ctime_tests. Pass -DSECP256K1_BUILD_CTIME_TESTS=OFF to avoid this.\n"
)
endif()
message("\n")
if(SECP256K1_EXPERIMENTAL)
message(
" ******\n"


@@ -1,9 +1,5 @@
{
"cmakeMinimumRequired": {
"major": 3,
"minor": 21,
"patch": 0
},
"cmakeMinimumRequired": {"major": 3, "minor": 21, "patch": 0},
"version": 3,
"configurePresets": [
{


@@ -1,108 +0,0 @@
# Contributing to libsecp256k1
## Scope
libsecp256k1 is a library for elliptic curve cryptography on the curve secp256k1, not a general-purpose cryptography library.
The library primarily serves the needs of the Bitcoin Core project but provides additional functionality for the benefit of the wider Bitcoin ecosystem.
## Adding new functionality or modules
The libsecp256k1 project welcomes contributions in the form of new functionality or modules, provided they are within the project's scope.
It is the responsibility of the contributors to convince the maintainers that the proposed functionality is within the project's scope, high-quality and maintainable.
Contributors are recommended to provide the following in addition to the new code:
- **Specification:**
A specification can help significantly in reviewing the new code as it provides documentation and context.
It may justify various design decisions, give a motivation and outline security goals.
If the specification contains pseudocode, a reference implementation or test vectors, these can be used to compare with the proposed libsecp256k1 code.
- **Security Arguments:**
In addition to defining the security goals, it should be argued that the new functionality meets these goals.
Depending on the nature of the new functionality, a wide range of security arguments are acceptable, ranging from being "obviously secure" to rigorous proofs of security.
- **Relevance Arguments:**
The relevance of the new functionality for the Bitcoin ecosystem should be argued by outlining clear use cases.
These are not the only factors taken into account when considering to add new functionality.
The proposed new libsecp256k1 code must be of high quality, including API documentation and tests, as well as featuring a misuse-resistant API design.
We recommend reaching out to other contributors (see [Communication Channels](#communication-channels)) and get feedback before implementing new functionality.
## Communication channels
Most communication about libsecp256k1 occurs on the GitHub repository: in issues, pull requests, or on the discussion board.
Additionally, there is an IRC channel dedicated to libsecp256k1, with biweekly meetings (see channel topic).
The channel is `#secp256k1` on Libera Chat.
The easiest way to participate on IRC is with the web client, [web.libera.chat](https://web.libera.chat/#secp256k1).
Chat history logs can be found at https://gnusha.org/secp256k1/.
## Contributor workflow & peer review
The Contributor Workflow & Peer Review in libsecp256k1 are similar to Bitcoin Core's workflow and review processes described in its [CONTRIBUTING.md](https://github.com/bitcoin/bitcoin/blob/master/CONTRIBUTING.md).
### Coding conventions
In addition, libsecp256k1 tries to maintain the following coding conventions:
- No runtime heap allocation (e.g., no `malloc`) unless explicitly requested by the caller (via `secp256k1_context_create` or `secp256k1_scratch_space_create`, for example). Moreover, it should be possible to use the library without any heap allocations.
- The tests should cover all lines and branches of the library (see [Test coverage](#coverage)).
- Operations involving secret data should be tested for being constant time with respect to the secrets (see [src/ctime_tests.c](src/ctime_tests.c)).
- Local variables containing secret data should be cleared explicitly to try to delete secrets from memory.
- Use `secp256k1_memcmp_var` instead of `memcmp` (see [#823](https://github.com/bitcoin-core/secp256k1/issues/823) and the sketch after this list).
- As a rule of thumb, the default values for configuration options should target standard desktop machines and align with Bitcoin Core's defaults, and the tests should mostly exercise the default configuration (see [#1549](https://github.com/bitcoin-core/secp256k1/issues/1549#issuecomment-2200559257)).
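The shape of such a wrapper, as a sketch (the real helper lives in the library's internal headers; the name here is illustrative):

```C
#include <stddef.h>

/* Variable-time byte-wise comparison with memcmp semantics. An explicit
 * loop sidesteps platform memcmp implementations that have been observed
 * to misbehave; it is variable-time, so never use it on secret data. */
static int memcmp_var_sketch(const void *s1, const void *s2, size_t n) {
    const unsigned char *p1 = s1, *p2 = s2;
    size_t i;
    for (i = 0; i < n; i++) {
        int diff = (int)p1[i] - (int)p2[i];
        if (diff != 0) {
            return diff;
        }
    }
    return 0;
}
```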
#### Style conventions
- Commits should be atomic and diffs should be easy to read. For this reason, do not mix any formatting fixes or code moves with actual code changes. Make sure each individual commit is hygienic: that it builds successfully on its own without warnings, errors, regressions, or test failures.
- New code should adhere to the style of existing, in particular surrounding, code. Other than that, we do not enforce strict rules for code formatting.
- The code conforms to C89. Most notably, that means that only `/* ... */` comments are allowed (no `//` line comments). Moreover, any declarations in a `{ ... }` block (e.g., a function) must appear at the beginning of the block before any statements. When you would like to declare a variable in the middle of a block, you can open a new block:
```C
void secp256k_foo(void) {
    unsigned int x = 5; /* declaration (with initializer) */
    int y = 2*x;        /* declaration */
    x = 17;             /* statement */
    {
        int a, b;            /* declaration */
        a = x + y;           /* statement */
        secp256k_bar(x, &b); /* statement */
    }
}
```
- Use `unsigned int` instead of just `unsigned`.
- Use `void *ptr` instead of `void* ptr`.
- Arguments of the publicly-facing API must have a specific order defined in [include/secp256k1.h](include/secp256k1.h).
- User-facing comment lines in headers should be limited to 80 chars if possible.
- All identifiers in file scope should start with `secp256k1_`.
- Avoid trailing whitespace.
### Tests
#### Coverage
This library aims to have full coverage of reachable lines and branches.
To create a test coverage report, configure with `--enable-coverage` (use of GCC is necessary):
$ ./configure --enable-coverage
Run the tests:
$ make check
To create a report, `gcovr` is recommended, as it includes branch coverage reporting:
$ gcovr --exclude 'src/bench*' --print-summary
To create an HTML report with coloured and annotated source code:
$ mkdir -p coverage
$ gcovr --exclude 'src/bench*' --html --html-details -o coverage/coverage.html
#### Exhaustive tests
There are tests of several functions in which a small group replaces secp256k1.
These tests are _exhaustive_ since they provide all elements and scalars of the small group as input arguments (see [src/tests_exhaustive.c](src/tests_exhaustive.c)).
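As a toy illustration of the exhaustive style (this is not the library's harness): with a small group order, a property can be checked for every element instead of for random samples.

```C
#include <assert.h>

#define TOY_ORDER 13

/* Negation in the additive group Z_13. */
static int toy_neg(int a) { return (TOY_ORDER - a) % TOY_ORDER; }

/* Exhaustive test: verify a + (-a) == 0 and -(-a) == a for EVERY group
 * element, not just for randomly sampled ones. */
static void toy_test_negation_exhaustive(void) {
    int a;
    for (a = 0; a < TOY_ORDER; a++) {
        assert((a + toy_neg(a)) % TOY_ORDER == 0);
        assert(toy_neg(toy_neg(a)) == a);
    }
}
```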
### Benchmarks
See `src/bench*.c` for examples of benchmarks.


@@ -37,6 +37,7 @@ noinst_HEADERS += src/field_10x26_impl.h
noinst_HEADERS += src/field_5x52.h
noinst_HEADERS += src/field_5x52_impl.h
noinst_HEADERS += src/field_5x52_int128_impl.h
noinst_HEADERS += src/field_5x52_asm_impl.h
noinst_HEADERS += src/modinv32.h
noinst_HEADERS += src/modinv32_impl.h
noinst_HEADERS += src/modinv64.h
@@ -45,7 +46,6 @@ noinst_HEADERS += src/precomputed_ecmult.h
noinst_HEADERS += src/precomputed_ecmult_gen.h
noinst_HEADERS += src/assumptions.h
noinst_HEADERS += src/checkmem.h
noinst_HEADERS += src/testutil.h
noinst_HEADERS += src/util.h
noinst_HEADERS += src/int128.h
noinst_HEADERS += src/int128_impl.h
@@ -64,8 +64,6 @@ noinst_HEADERS += src/field.h
noinst_HEADERS += src/field_impl.h
noinst_HEADERS += src/bench.h
noinst_HEADERS += src/wycheproof/ecdsa_secp256k1_sha256_bitcoin_test.h
noinst_HEADERS += src/hsort.h
noinst_HEADERS += src/hsort_impl.h
noinst_HEADERS += contrib/lax_der_parsing.h
noinst_HEADERS += contrib/lax_der_parsing.c
noinst_HEADERS += contrib/lax_der_privatekey_parsing.h
@@ -155,7 +153,7 @@ endif
if USE_EXAMPLES
noinst_PROGRAMS += ecdsa_example
ecdsa_example_SOURCES = examples/ecdsa.c
ecdsa_example_CPPFLAGS = -I$(top_srcdir)/include -DSECP256K1_STATIC
ecdsa_example_CPPFLAGS = -I$(top_srcdir)/include
ecdsa_example_LDADD = libsecp256k1.la
ecdsa_example_LDFLAGS = -static
if BUILD_WINDOWS
@@ -165,7 +163,7 @@ TESTS += ecdsa_example
if ENABLE_MODULE_ECDH
noinst_PROGRAMS += ecdh_example
ecdh_example_SOURCES = examples/ecdh.c
ecdh_example_CPPFLAGS = -I$(top_srcdir)/include -DSECP256K1_STATIC
ecdh_example_CPPFLAGS = -I$(top_srcdir)/include
ecdh_example_LDADD = libsecp256k1.la
ecdh_example_LDFLAGS = -static
if BUILD_WINDOWS
@@ -176,7 +174,7 @@ endif
if ENABLE_MODULE_SCHNORRSIG
noinst_PROGRAMS += schnorr_example
schnorr_example_SOURCES = examples/schnorr.c
schnorr_example_CPPFLAGS = -I$(top_srcdir)/include -DSECP256K1_STATIC
schnorr_example_CPPFLAGS = -I$(top_srcdir)/include
schnorr_example_LDADD = libsecp256k1.la
schnorr_example_LDFLAGS = -static
if BUILD_WINDOWS
@@ -184,28 +182,6 @@ schnorr_example_LDFLAGS += -lbcrypt
endif
TESTS += schnorr_example
endif
if ENABLE_MODULE_ELLSWIFT
noinst_PROGRAMS += ellswift_example
ellswift_example_SOURCES = examples/ellswift.c
ellswift_example_CPPFLAGS = -I$(top_srcdir)/include -DSECP256K1_STATIC
ellswift_example_LDADD = libsecp256k1.la
ellswift_example_LDFLAGS = -static
if BUILD_WINDOWS
ellswift_example_LDFLAGS += -lbcrypt
endif
TESTS += ellswift_example
endif
if ENABLE_MODULE_MUSIG
noinst_PROGRAMS += musig_example
musig_example_SOURCES = examples/musig.c
musig_example_CPPFLAGS = -I$(top_srcdir)/include -DSECP256K1_STATIC
musig_example_LDADD = libsecp256k1.la
musig_example_LDFLAGS = -static
if BUILD_WINDOWS
musig_example_LDFLAGS += -lbcrypt
endif
TESTS += musig_example
endif
endif
### Precomputed tables
@@ -213,11 +189,11 @@ EXTRA_PROGRAMS = precompute_ecmult precompute_ecmult_gen
CLEANFILES = $(EXTRA_PROGRAMS)
precompute_ecmult_SOURCES = src/precompute_ecmult.c
precompute_ecmult_CPPFLAGS = $(SECP_CONFIG_DEFINES) -DVERIFY
precompute_ecmult_CPPFLAGS = $(SECP_CONFIG_DEFINES)
precompute_ecmult_LDADD = $(COMMON_LIB)
precompute_ecmult_gen_SOURCES = src/precompute_ecmult_gen.c
precompute_ecmult_gen_CPPFLAGS = $(SECP_CONFIG_DEFINES) -DVERIFY
precompute_ecmult_gen_CPPFLAGS = $(SECP_CONFIG_DEFINES)
precompute_ecmult_gen_LDADD = $(COMMON_LIB)
# See Automake manual, Section "Errors with distclean".
@@ -265,7 +241,6 @@ maintainer-clean-local: clean-testvectors
### Additional files to distribute
EXTRA_DIST = autogen.sh CHANGELOG.md SECURITY.md
EXTRA_DIST += doc/release-process.md doc/safegcd_implementation.md
EXTRA_DIST += doc/ellswift.md doc/musig.md
EXTRA_DIST += examples/EXAMPLES_COPYING
EXTRA_DIST += sage/gen_exhaustive_groups.sage
EXTRA_DIST += sage/gen_split_lambda_constants.sage
@@ -292,11 +267,3 @@ endif
if ENABLE_MODULE_SCHNORRSIG
include src/modules/schnorrsig/Makefile.am.include
endif
if ENABLE_MODULE_MUSIG
include src/modules/musig/Makefile.am.include
endif
if ENABLE_MODULE_ELLSWIFT
include src/modules/ellswift/Makefile.am.include
endif


@@ -1,66 +1,67 @@
# libsecp256k1
libsecp256k1
============
[![Build Status](https://api.cirrus-ci.com/github/bitcoin-core/secp256k1.svg?branch=master)](https://cirrus-ci.com/github/bitcoin-core/secp256k1)
![Dependencies: None](https://img.shields.io/badge/dependencies-none-success)
[![irc.libera.chat #secp256k1](https://img.shields.io/badge/irc.libera.chat-%23secp256k1-success)](https://web.libera.chat/#secp256k1)
High-performance high-assurance C library for digital signatures and other cryptographic primitives on the secp256k1 elliptic curve.
Optimized C library for ECDSA signatures and secret/public key operations on curve secp256k1.
This library is intended to be the highest quality publicly available library for cryptography on the secp256k1 curve. However, the primary focus of its development has been for usage in the Bitcoin system and usage unlike Bitcoin's may be less well tested, verified, or suffer from a less well thought out interface. Correct usage requires some care and consideration that the library is fit for your application's purpose.
Features:
* secp256k1 ECDSA signing/verification and key generation.
* Additive and multiplicative tweaking of secret/public keys.
* Serialization/parsing of secret keys, public keys, signatures.
* Constant time, constant memory access signing and public key generation.
* Derandomized ECDSA (via RFC6979 or with a caller provided function.)
* Very efficient implementation.
* Suitable for embedded systems.
* No runtime dependencies.
* Optional module for public key recovery.
* Optional module for ECDH key exchange.
* Optional module for Schnorr signatures according to [BIP-340](https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki).
- secp256k1 ECDSA signing/verification and key generation.
- Additive and multiplicative tweaking of secret/public keys.
- Serialization/parsing of secret keys, public keys, signatures.
- Constant time, constant memory access signing and public key generation.
- Derandomized ECDSA (via RFC6979 or with a caller provided function.)
- Very efficient implementation.
- Suitable for embedded systems.
- No runtime dependencies.
- Optional module for public key recovery.
- Optional module for ECDH key exchange.
- Optional module for Schnorr signatures according to [BIP-340](https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki).
- Optional module for ElligatorSwift key exchange according to [BIP-324](https://github.com/bitcoin/bips/blob/master/bip-0324.mediawiki).
- Optional module for MuSig2 Schnorr multi-signatures according to [BIP-327](https://github.com/bitcoin/bips/blob/master/bip-0327.mediawiki).
Implementation details
----------------------
## Implementation details
* General
* No runtime heap allocation.
* Extensive testing infrastructure.
* Structured to facilitate review and analysis.
* Intended to be portable to any system with a C89 compiler and uint64_t support.
* No use of floating types.
* Expose only higher level interfaces to minimize the API surface and improve application security. ("Be difficult to use insecurely.")
* Field operations
* Optimized implementation of arithmetic modulo the curve's field size (2^256 - 0x1000003D1).
* Using 5 52-bit limbs (including hand-optimized assembly for x86_64, by Diederik Huys).
* Using 10 26-bit limbs (including hand-optimized assembly for 32-bit ARM, by Wladimir J. van der Laan).
* This is an experimental feature that has not received enough scrutiny to satisfy the standard of quality of this library but is made available for testing and review by the community.
* Scalar operations
* Optimized implementation without data-dependent branches of arithmetic modulo the curve's order.
* Using 4 64-bit limbs (relying on __int128 support in the compiler).
* Using 8 32-bit limbs.
* Modular inverses (both field elements and scalars) based on [safegcd](https://gcd.cr.yp.to/index.html) with some modifications, and a variable-time variant (by Peter Dettman).
* Group operations
* Point addition formula specifically simplified for the curve equation (y^2 = x^3 + 7).
* Use addition between points in Jacobian and affine coordinates where possible.
* Use a unified addition/doubling formula where necessary to avoid data-dependent branches.
* Point/x comparison without a field inversion by comparison in the Jacobian coordinate space.
* Point multiplication for verification (a*P + b*G).
* Use wNAF notation for point multiplicands.
* Use a much larger window for multiples of G, using precomputed multiples.
* Use Shamir's trick to do the multiplication with the public key and the generator simultaneously.
* Use secp256k1's efficiently-computable endomorphism to split the P multiplicand into 2 half-sized ones.
* Point multiplication for signing
* Use a precomputed table of multiples of powers of 16 multiplied with the generator, so general multiplication becomes a series of additions.
* Intended to be completely free of timing sidechannels for secret-key operations (on reasonable hardware/toolchains)
* Access the table with branch-free conditional moves so memory access is uniform.
* No data-dependent branches
* Optional runtime blinding which attempts to frustrate differential power analysis.
* The precomputed tables add and eventually subtract points for which no known scalar (secret key) is known, preventing even an attacker with control over the secret key used to control the data internally.
- General
- No runtime heap allocation.
- Extensive testing infrastructure.
- Structured to facilitate review and analysis.
- Intended to be portable to any system with a C89 compiler and uint64_t support.
- No use of floating types.
- Expose only higher level interfaces to minimize the API surface and improve application security. ("Be difficult to use insecurely.")
- Field operations
- Optimized implementation of arithmetic modulo the curve's field size (2^256 - 0x1000003D1).
- Using 5 52-bit limbs
- Using 10 26-bit limbs (including hand-optimized assembly for 32-bit ARM, by Wladimir J. van der Laan).
- This is an experimental feature that has not received enough scrutiny to satisfy the standard of quality of this library but is made available for testing and review by the community.
- Scalar operations
- Optimized implementation without data-dependent branches of arithmetic modulo the curve's order.
- Using 4 64-bit limbs (relying on \_\_int128 support in the compiler).
- Using 8 32-bit limbs.
- Modular inverses (both field elements and scalars) based on [safegcd](https://gcd.cr.yp.to/index.html) with some modifications, and a variable-time variant (by Peter Dettman).
- Group operations
- Point addition formula specifically simplified for the curve equation (y^2 = x^3 + 7).
- Use addition between points in Jacobian and affine coordinates where possible.
- Use a unified addition/doubling formula where necessary to avoid data-dependent branches.
- Point/x comparison without a field inversion by comparison in the Jacobian coordinate space (see the sketch after this list).
- Point multiplication for verification (a*P + b*G).
- Use wNAF notation for point multiplicands.
- Use a much larger window for multiples of G, using precomputed multiples.
- Use Shamir's trick to do the multiplication with the public key and the generator simultaneously.
- Use secp256k1's efficiently-computable endomorphism to split the P multiplicand into 2 half-sized ones.
- Point multiplication for signing
- Use a precomputed table of multiples of powers of 16 multiplied with the generator, so general multiplication becomes a series of additions.
- Intended to be completely free of timing sidechannels for secret-key operations (on reasonable hardware/toolchains)
- Access the table with branch-free conditional moves so memory access is uniform.
- No data-dependent branches
- Optional runtime blinding which attempts to frustrate differential power analysis.
- The precomputed tables add and eventually subtract points for which no known scalar (secret key) is known, preventing even an attacker with control over the secret key used to control the data internally.
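As promised above, here is a minimal sketch of Shamir's trick. It is not the library's implementation: it assumes a toy short-Weierstrass curve y^2 = x^3 + 7 over the small prime 2003, uses affine coordinates and variable-time arithmetic, and the file name is hypothetical. What it does show is the heart of the technique: one shared doubling chain that adds P, G, or P+G depending on the paired bits of the two scalars. The real library layers Jacobian coordinates, wNAF, the endomorphism split, and large precomputed tables for G on top of this idea.

    /* shamir_toy.c - toy illustration of Shamir's trick for a*P + b*G. */
    #include <stdio.h>

    #define MOD 2003LL                /* small prime, toy field modulus */

    typedef struct { long long x, y; int inf; } pt;

    static long long md(long long v) { v %= MOD; return v < 0 ? v + MOD : v; }

    /* Modular inverse via Fermat's little theorem: v^(MOD-2) mod MOD. */
    static long long inv(long long v) {
        long long r = 1, b = md(v), e = MOD - 2;
        while (e) { if (e & 1) r = r * b % MOD; b = b * b % MOD; e >>= 1; }
        return r;
    }

    /* Complete affine addition (handles doubling and the point at infinity). */
    static pt add(pt a, pt b) {
        pt r = { 0, 0, 1 };
        long long s;
        if (a.inf) return b;
        if (b.inf) return a;
        if (a.x == b.x && md(a.y + b.y) == 0) return r;         /* a == -b */
        if (a.x == b.x) s = md(3 * a.x % MOD * a.x) * inv(2 * a.y) % MOD;
        else            s = md(b.y - a.y) * inv(md(b.x - a.x)) % MOD;
        r.x = md(s * s - a.x - b.x);
        r.y = md(s * (a.x - r.x) - a.y);
        r.inf = 0;
        return r;
    }

    /* Naive double-and-add, used only to cross-check the result. */
    static pt mul(long long k, pt g) {
        pt r = { 0, 0, 1 };
        int i;
        for (i = 15; i >= 0; i--) { r = add(r, r); if ((k >> i) & 1) r = add(r, g); }
        return r;
    }

    /* Shamir's trick: one doubling per bit position for BOTH scalars. */
    static pt shamir(long long a, pt p, long long b, pt g) {
        pt tab[4] = { { 0, 0, 1 }, g, p, add(p, g) };  /* indexed by bit pair */
        pt r = { 0, 0, 1 };
        int i;
        for (i = 15; i >= 0; i--) {
            int w = (int)((((a >> i) & 1) << 1) | ((b >> i) & 1));
            r = add(r, r);
            if (w) r = add(r, tab[w]);
        }
        return r;
    }

    int main(void) {
        pt g = { 0, 0, 1 }, p, lhs, rhs;
        long long x, y;
        /* Brute-force any base point on y^2 = x^3 + 7 (mod MOD). */
        for (x = 1; g.inf; x++)
            for (y = 1; y < MOD; y++)
                if (y * y % MOD == md(x * x % MOD * x + 7)) {
                    g.x = x; g.y = y; g.inf = 0; break;
                }
        p = mul(5, g);                                 /* some public point */
        lhs = shamir(123, p, 456, g);
        rhs = add(mul(123, p), mul(456, g));
        printf("shamir (%lld,%lld) vs naive (%lld,%lld)\n", lhs.x, lhs.y, rhs.x, rhs.y);
        return !(lhs.x == rhs.x && lhs.y == rhs.y);
    }

The savings come from sharing the doubling chain: two separate multiplications would double once per bit for each scalar, while the joint loop doubles only once per bit position.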
## Building with Autotools
$ ./autogen.sh
$ ./configure
@@ -70,7 +71,8 @@ Features:
To compile optional modules (such as Schnorr signatures), you need to run `./configure` with additional flags (such as `--enable-module-schnorrsig`). Run `./configure --help` to see the full list of available flags.
## Building with CMake (experimental)
Building with CMake (experimental)
----------------------------------
To maintain a pristine source tree, CMake encourages performing an out-of-source build by using a separate dedicated build tree.
@@ -78,9 +80,9 @@ To maintain a pristine source tree, CMake encourages to perform an out-of-source
$ mkdir build && cd build
$ cmake ..
$ cmake --build .
$ ctest # run the test suite
$ sudo cmake --install . # optional
To compile optional modules (such as Schnorr signatures), you need to run `cmake` with additional flags (such as `-DSECP256K1_ENABLE_MODULE_SCHNORRSIG=ON`). Run `cmake .. -LH` to see the full list of available flags.
@@ -106,19 +108,39 @@ In "Developer Command Prompt for VS 2022":
>cmake -G "Visual Studio 17 2022" -A x64 -S . -B build
>cmake --build build --config RelWithDebInfo
## Usage examples
Usage examples can be found in the [examples](examples) directory. To compile them you need to configure with `--enable-examples`.
- [ECDSA example](examples/ecdsa.c)
- [Schnorr signatures example](examples/schnorr.c)
- [Deriving a shared secret (ECDH) example](examples/ecdh.c)
- [ElligatorSwift key exchange example](examples/ellswift.c)
To compile the Schnorr signature, ECDH, and ElligatorSwift examples, you also need to configure with `--enable-module-schnorrsig`, `--enable-module-ecdh`, and `--enable-module-ellswift`.
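For a taste of the API the examples exercise, here is a minimal hedged sketch of an ECDSA sign/verify round trip. It assumes a recent library version in which `SECP256K1_CONTEXT_NONE` suffices for signing (older releases require `SECP256K1_CONTEXT_SIGN | SECP256K1_CONTEXT_VERIFY`), and the fixed key and message-hash bytes are insecure placeholders, not a pattern to copy:

    /* ecdsa_sketch.c - build roughly as: gcc ecdsa_sketch.c -lsecp256k1 */
    #include <stdio.h>
    #include <secp256k1.h>

    int main(void) {
        /* Placeholder 32-byte secret key and message hash: INSECURE, for
         * illustration only. Real code needs fresh randomness and a genuine
         * 32-byte hash of the message. */
        unsigned char seckey[32] = { 0x31 };   /* remaining bytes are zero */
        unsigned char msg_hash[32] = { 0xAB }; /* stand-in for SHA-256(msg) */
        secp256k1_context *ctx = secp256k1_context_create(SECP256K1_CONTEXT_NONE);
        secp256k1_pubkey pubkey;
        secp256k1_ecdsa_signature sig;

        if (!secp256k1_ec_seckey_verify(ctx, seckey)) return 1;
        if (!secp256k1_ec_pubkey_create(ctx, &pubkey, seckey)) return 1;
        /* NULL, NULL selects the default (RFC 6979) nonce function. */
        if (!secp256k1_ecdsa_sign(ctx, &sig, msg_hash, seckey, NULL, NULL)) return 1;
        printf("verify: %s\n",
               secp256k1_ecdsa_verify(ctx, &sig, msg_hash, &pubkey) ? "ok" : "FAILED");
        secp256k1_context_destroy(ctx);
        return 0;
    }

The bundled examples add what this sketch omits: secure key generation, signature serialization, and careful cleanup of secret data.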
## Test coverage
This library aims to have full coverage of the reachable lines and branches.
To create a test coverage report, configure with `--enable-coverage` (use of GCC is necessary):
$ ./configure --enable-coverage
Run the tests:
$ make check
To create a report, `gcovr` is recommended, as it includes branch coverage reporting:
$ gcovr --exclude 'src/bench*' --print-summary
To create an HTML report with coloured and annotated source code:
$ mkdir -p coverage
$ gcovr --exclude 'src/bench*' --html --html-details -o coverage/coverage.html
## Benchmark
If configured with `--enable-benchmark` (which is the default), binaries for benchmarking the libsecp256k1 functions will be present in the root directory after the build.
To print the benchmark result to the command line:
@@ -129,10 +151,7 @@ To create a CSV file for the benchmark result :
$ ./bench_name | sed '2d;s/ \{1,\}//g' > bench_name.csv
## Reporting a vulnerability
See [SECURITY.md](SECURITY.md)
## Contributing to libsecp256k1
See [CONTRIBUTING.md](CONTRIBUTING.md)
View File
@@ -6,10 +6,10 @@ To report security issues send an email to secp256k1-security@bitcoincore.org (n
The following keys may be used to communicate sensitive information to developers:
| Name | Fingerprint |
| ------------- | ------------------------------------------------- |
| Pieter Wuille | 133E AC17 9436 F14A 5CF1 B794 860F EB80 4E66 9320 |
| Jonas Nick | 36C7 1A37 C9D9 88BD E825 08D9 B1A7 0E4F 8DCD 0366 |
| Tim Ruffing | 09E0 3F87 1092 E40E 106E 902B 33BC 86AB 80FF 5516 |
You can import a key by running the following command with that individual's fingerprint: `gpg --keyserver hkps://keys.openpgp.org --recv-keys "<fingerprint>"`. Ensure that you put quotes around fingerprints containing spaces.
View File
@@ -45,22 +45,6 @@ fi
AC_MSG_RESULT($has_valgrind)
])
AC_DEFUN([SECP_MSAN_CHECK], [
AC_MSG_CHECKING(whether MemorySanitizer is enabled)
AC_COMPILE_IFELSE([AC_LANG_SOURCE([[
#if defined(__has_feature)
# if __has_feature(memory_sanitizer)
/* MemorySanitizer is enabled. */
# elif
# error "MemorySanitizer is disabled."
# endif
#else
# error "__has_feature is not defined."
#endif
]])], [msan_enabled=yes], [msan_enabled=no])
AC_MSG_RESULT([$msan_enabled])
])
dnl SECP_TRY_APPEND_CFLAGS(flags, VAR)
dnl Append flags to VAR if CC accepts them.
AC_DEFUN([SECP_TRY_APPEND_CFLAGS], [
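A side note on the `SECP_MSAN_CHECK` macro shown above: it is a compile probe. The embedded C snippet compiles only when MemorySanitizer is active, so the compile result itself is the answer; the bare `# elif` is safe because the expression of a skipped `#elif` is never evaluated, while evaluating it (the MSan-off path) is a hard error, and any error simply means "no". A standalone C sketch of the same probe (hypothetical file name, a plain `#else` used for clarity) follows:

    /* msan_probe.c - compiles only when MemorySanitizer is enabled.
     * Try: clang -fsanitize=memory -c msan_probe.c   (should succeed)
     *      clang -c msan_probe.c                     (should fail)   */
    #if defined(__has_feature)
    #  if __has_feature(memory_sanitizer)
         /* MemorySanitizer is enabled; compilation proceeds. */
    #  else
    #    error "MemorySanitizer is disabled."
    #  endif
    #else
    #  error "__has_feature is not defined; not a clang-compatible compiler."
    #endif

    int probe_ok;   /* give the translation unit some content */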
View File
@@ -4,21 +4,19 @@ set -eux
export LC_ALL=C
# Print commit and relevant CI environment to allow reproducing the job outside of CI.
git show --no-patch
print_environment() {
# Turn off -x because it messes up the output
set +x
# There are many ways to print variable names and their content. This one
# does not rely on bash.
for var in WERROR_CFLAGS MAKEFLAGS BUILD \
ECMULTWINDOW ECMULTGENKB ASM WIDEMUL WITH_VALGRIND EXTRAFLAGS \
EXPERIMENTAL ECDH RECOVERY EXTRAKEYS MUSIG SCHNORRSIG ELLSWIFT \
SECP256K1_TEST_ITERS BENCH SECP256K1_BENCH_ITERS CTIMETESTS\
EXAMPLES \
HOST WRAPPER_CMD \
CC CFLAGS CPPFLAGS AR NM \
UBSAN_OPTIONS ASAN_OPTIONS LSAN_OPTIONS
do
eval "isset=\${$var+x}"
if [ -n "$isset" ]; then
@@ -32,15 +30,19 @@ print_environment() {
}
print_environment
env >> test_env.log
# If gcc is requested, assert that it's in fact gcc (and not some symlinked Apple clang).
case "${CC:-undefined}" in
*gcc*)
$CC -v 2>&1 | grep -q "gcc version" || exit 1;
;;
esac
if [ -n "${CC+x}" ]; then
# The MSVC compiler "cl" doesn't understand "-v"
$CC -v || true
@@ -52,55 +54,22 @@ if [ -n "$WRAPPER_CMD" ]; then
$WRAPPER_CMD --version
fi
# Workaround for https://bugs.kde.org/show_bug.cgi?id=452758 (fixed in valgrind 3.20.0).
case "${CC:-undefined}" in
clang*)
if [ "$CTIMETESTS" = "yes" ] && [ "$WITH_VALGRIND" = "yes" ]
then
export CFLAGS="${CFLAGS:+$CFLAGS }-gdwarf-4"
else
case "$WRAPPER_CMD" in
valgrind*)
export CFLAGS="${CFLAGS:+$CFLAGS }-gdwarf-4"
;;
esac
fi
;;
esac
./autogen.sh
./configure \
--enable-experimental="$EXPERIMENTAL" \
--with-test-override-wide-multiply="$WIDEMUL" --with-asm="$ASM" \
--with-ecmult-window="$ECMULTWINDOW" \
--with-ecmult-gen-kb="$ECMULTGENKB" \
--enable-module-ecdh="$ECDH" --enable-module-recovery="$RECOVERY" \
--enable-module-ellswift="$ELLSWIFT" \
--enable-module-extrakeys="$EXTRAKEYS" \
--enable-module-schnorrsig="$SCHNORRSIG" \
--enable-module-musig="$MUSIG" \
--enable-examples="$EXAMPLES" \
--enable-ctime-tests="$CTIMETESTS" \
--with-valgrind="$WITH_VALGRIND" \
--host="$HOST" $EXTRAFLAGS
# We have set "-j<n>" in MAKEFLAGS.
build_exit_code=0
make > make.log 2>&1 || build_exit_code=$?
cat make.log
if [ $build_exit_code -ne 0 ]; then
case "${CC:-undefined}" in
*snapshot*)
# Ignore internal compiler errors in gcc-snapshot and clang-snapshot
grep -e "internal compiler error:" -e "PLEASE submit a bug report" make.log
return $?;
;;
*)
return 1;
;;
esac
fi
# Print information about binaries so that we can see that the architecture is correct
file *tests* || true
View File
@@ -1,17 +1,4 @@
FROM debian:stable-slim
SHELL ["/bin/bash", "-c"]
WORKDIR /root
# A too high maximum number of file descriptors (with the default value
# inherited from the docker host) can cause issues with some of our tools:
# - sanitizers hanging: https://github.com/google/sanitizers/issues/1662
# - valgrind crashing: https://stackoverflow.com/a/75293014
# This is not a problem on our CI hosts, but developers who run the image
# on their machines may run into this (e.g., on Arch Linux), so warn them.
# (Note that .bashrc is only executed in interactive bash shells.)
RUN echo 'if [[ $(ulimit -n) -gt 200000 ]]; then echo "WARNING: Very high value reported by \"ulimit -n\". Consider passing \"--ulimit nofile=32768\" to \"docker run\"."; fi' >> /root/.bashrc
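As an aside, the same limit that the `.bashrc` snippet above checks with `ulimit -n` can be read programmatically; a small hedged C sketch (hypothetical file name, POSIX-only) mirroring the warning threshold:

    /* nofile_check.c - report the soft file-descriptor limit via getrlimit. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) { perror("getrlimit"); return 1; }
        printf("soft RLIMIT_NOFILE: %llu\n", (unsigned long long)rl.rlim_cur);
        if (rl.rlim_cur > 200000)
            printf("WARNING: very high fd limit; consider --ulimit nofile=32768\n");
        return 0;
    }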
RUN dpkg --add-architecture i386 && \
dpkg --add-architecture s390x && \
@@ -24,56 +11,27 @@ RUN dpkg --add-architecture i386 && \
RUN apt-get update && apt-get install --no-install-recommends -y \
git ca-certificates \
make automake libtool pkg-config dpkg-dev valgrind qemu-user \
gcc clang llvm libclang-rt-dev libc6-dbg \
g++ \
gcc-i686-linux-gnu libc6-dev-i386-cross libc6-dbg:i386 libubsan1:i386 libasan8:i386 \
gcc-s390x-linux-gnu libc6-dev-s390x-cross libc6-dbg:s390x \
gcc-arm-linux-gnueabihf libc6-dev-armhf-cross libc6-dbg:armhf \
gcc-powerpc64le-linux-gnu libc6-dev-ppc64el-cross libc6-dbg:ppc64el \
gcc-mingw-w64-x86-64-win32 wine64 wine \
gcc-mingw-w64-i686-win32 wine32 \
python3 && \
if ! ( dpkg --print-architecture | grep --quiet "arm64" ) ; then \
apt-get install --no-install-recommends -y \
gcc-aarch64-linux-gnu libc6-dev-arm64-cross libc6-dbg:arm64 ;\
fi && \
apt-get clean && rm -rf /var/lib/apt/lists/*
# Build and install gcc snapshot
ARG GCC_SNAPSHOT_MAJOR=15
RUN apt-get update && apt-get install --no-install-recommends -y wget libgmp-dev libmpfr-dev libmpc-dev flex && \
mkdir gcc && cd gcc && \
wget --progress=dot:giga --https-only --recursive --accept '*.tar.xz' --level 1 --no-directories "https://gcc.gnu.org/pub/gcc/snapshots/LATEST-${GCC_SNAPSHOT_MAJOR}" && \
wget "https://gcc.gnu.org/pub/gcc/snapshots/LATEST-${GCC_SNAPSHOT_MAJOR}/sha512.sum" && \
sha512sum --check --ignore-missing sha512.sum && \
# We should have downloaded exactly one tar.xz file
ls && \
[ $(ls *.tar.xz | wc -l) -eq "1" ] && \
tar xf *.tar.xz && \
mkdir gcc-build && cd gcc-build && \
../*/configure --prefix=/opt/gcc-snapshot --enable-languages=c --disable-bootstrap --disable-multilib --without-isl && \
make -j $(nproc) && \
make install && \
cd ../.. && rm -rf gcc && \
ln -s /opt/gcc-snapshot/bin/gcc /usr/bin/gcc-snapshot && \
apt-get autoremove -y wget libgmp-dev libmpfr-dev libmpc-dev flex && \
apt-get clean && rm -rf /var/lib/apt/lists/*
# Install clang snapshot, see https://apt.llvm.org/
RUN \
# Setup GPG keys of LLVM repository
apt-get update && apt-get install --no-install-recommends -y wget && \
wget -qO- https://apt.llvm.org/llvm-snapshot.gpg.key | tee /etc/apt/trusted.gpg.d/apt.llvm.org.asc && \
# Add repository for this Debian release
. /etc/os-release && echo "deb http://apt.llvm.org/${VERSION_CODENAME} llvm-toolchain-${VERSION_CODENAME} main" >> /etc/apt/sources.list && \
apt-get update && \
# Determine the version number of the LLVM development branch
LLVM_VERSION=$(apt-cache search --names-only '^clang-[0-9]+$' | sort -V | tail -1 | cut -f1 -d" " | cut -f2 -d"-" ) && \
# Install
apt-get install --no-install-recommends -y "clang-${LLVM_VERSION}" && \
# Create symlink
ln -s "/usr/bin/clang-${LLVM_VERSION}" /usr/bin/clang-snapshot && \
# Clean up
apt-get autoremove -y wget && \
apt-get clean && rm -rf /var/lib/apt/lists/*
# The "wine" package provides a convience wrapper that we need
RUN apt-get update && apt-get install --no-install-recommends -y \
git ca-certificates wine64 wine python3-simplejson python3-six msitools winbind procps && \
git clone https://github.com/mstorsjo/msvc-wine && \
mkdir /opt/msvc && \
python3 msvc-wine/vsdownload.py --accept-license --dest /opt/msvc Microsoft.VisualStudio.Workload.VCTools && \
msvc-wine/install.sh /opt/msvc
# Initialize the wine environment. Wait until the wineserver process has
# exited before closing the session, to avoid corrupting the wine prefix.
RUN wine64 wineboot --init && \
while (ps -A | grep wineserver) > /dev/null; do sleep 1; done
Some files were not shown because too many files have changed in this diff.