Compare commits


7 Commits

Author SHA1 Message Date
Denis Angell
c543d42029 change amendment name 2026-01-05 19:02:24 -05:00
Denis Angell
0769bbc20a fix TokenEscrow edge case 2026-01-05 18:34:30 -05:00
Mayukha Vadari
44d21b8f6d test: add more tests for ledger_entry RPC (#5858)
This change adds basic tests for all the `ledger_entry` helper functions, so that each ledger entry type is covered. It also includes some minor refactors in `parseAMM` to provide better error messages, and alphabetizes the helper functions to improve readability.
2026-01-05 10:54:24 -05:00
Bart
3d1b3a49b3 refactor: Rename rippled.cfg to xrpld.cfg (#6098)
This change renames all occurrences of `rippled.cfg` to `xrpld.cfg`. It also provides a script that allows developers to replicate the changes in their local branch or fork to avoid conflicts. For the time being it maintains support for `rippled.cfg` as the config file if `xrpld.cfg` does not exist.
2026-01-05 14:55:12 +00:00
Ayaz Salikhov
0b87a26f04 Revert "chore: Pin ruamel.yaml<0.19 in pre-commit-hooks (#6166)" (#6167)
This reverts commit 0f23ad820c.
2026-01-05 14:01:14 +00:00
Ayaz Salikhov
0f23ad820c chore: Pin ruamel.yaml<0.19 in pre-commit-hooks (#6166)
See https://github.com/pre-commit/pre-commit-hooks/issues/1229 for more details.
2026-01-02 11:53:33 -05:00
Michael Legleux
b7139da4d0 fix: Remove cryptographic libs from libxrpl Conan package (#6163)
* fix: rm crypto libs and fix protobuf path

* update/rm comments
2025-12-23 16:38:35 -08:00
48 changed files with 1621 additions and 3266 deletions


@@ -32,9 +32,7 @@ parsers:
slack_app: false
ignore:
- ".github/scripts/"
- "include/xrpl/beast/test/"
- "include/xrpl/beast/unit_test/"
- "src/test/"
- "src/tests/"
- "tests/"
- "include/xrpl/beast/test/"
- "include/xrpl/beast/unit_test/"


@@ -31,6 +31,9 @@ run from the repository root.
the `xrpld` binary.
5. `.github/scripts/rename/namespace.sh`: This script will rename the C++
namespaces from `ripple` to `xrpl`.
6. `.github/scripts/rename/config.sh`: This script will rename the config from
`rippled.cfg` to `xrpld.cfg`, and update the code accordingly. The old
filename will still be accepted.
You can run all these scripts from the repository root as follows:
@@ -40,4 +43,5 @@ You can run all these scripts from the repository root as follows:
./.github/scripts/rename/cmake.sh .
./.github/scripts/rename/binary.sh .
./.github/scripts/rename/namespace.sh .
./.github/scripts/rename/config.sh .
```

.github/scripts/rename/config.sh vendored Executable file

@@ -0,0 +1,72 @@
#!/bin/bash
# Exit the script as soon as an error occurs.
set -e
# On macOS, ensure that GNU sed is installed and available as `gsed`.
SED_COMMAND=sed
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
fi
# This script renames the config from `rippled.cfg` to `xrpld.cfg`, and updates
# the code accordingly. The old filename will still be accepted.
# Usage: .github/scripts/rename/config.sh <repository directory>
if [ "$#" -ne 1 ]; then
echo "Usage: $0 <repository directory>"
exit 1
fi
DIRECTORY=$1
echo "Processing directory: ${DIRECTORY}"
if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
pushd "${DIRECTORY}"
# Add the xrpld.cfg to the .gitignore.
if ! grep -q 'xrpld.cfg' .gitignore; then
${SED_COMMAND} -i '/rippled.cfg/a\
/xrpld.cfg' .gitignore
fi
# Rename the files.
if [ -e rippled.cfg ]; then
mv rippled.cfg xrpld.cfg
fi
if [ -e cfg/rippled-example.cfg ]; then
mv cfg/rippled-example.cfg cfg/xrpld-example.cfg
fi
# Rename inside the files.
DIRECTORIES=("cfg" "cmake" "include" "src")
for DIRECTORY in "${DIRECTORIES[@]}"; do
echo "Processing directory: ${DIRECTORY}"
find "${DIRECTORY}" -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" -o -name "*.cpp" -o -name "*.cmake" -o -name "*.txt" -o -name "*.cfg" -o -name "*.md" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
${SED_COMMAND} -i -E 's/rippled(-example)?[ .]cfg/xrpld\1.cfg/g' "${FILE}"
done
done
${SED_COMMAND} -i 's/rippled/xrpld/g' cfg/xrpld-example.cfg
${SED_COMMAND} -i 's/rippled/xrpld/g' src/test/core/Config_test.cpp
${SED_COMMAND} -i 's/ripplevalidators/xrplvalidators/g' src/test/core/Config_test.cpp
${SED_COMMAND} -i 's/rippleConfig/xrpldConfig/g' src/test/core/Config_test.cpp
${SED_COMMAND} -i 's@ripple/@xrpld/@g' src/test/core/Config_test.cpp
${SED_COMMAND} -i 's/Rippled/File/g' src/test/core/Config_test.cpp
# Restore the old config file name in the code that maintains support for now.
${SED_COMMAND} -i 's/configLegacyName = "xrpld.cfg"/configLegacyName = "rippled.cfg"/g' src/xrpld/core/detail/Config.cpp
# Restore a URL.
${SED_COMMAND} -i 's/connect-your-xrpld-to-the-xrp-test-net.html/connect-your-rippled-to-the-xrp-test-net.html/g' cfg/xrpld-example.cfg
popd
echo "Renaming complete."


@@ -1,118 +0,0 @@
# Strategy Matrix
The scripts in this directory will generate a strategy matrix for GitHub Actions
CI, depending on the trigger that caused the workflow to run and the platform
specified.
There are several build, test, and publish settings that can be enabled for each
configuration. The settings are combined in a Cartesian product to generate the
full matrix, while filtering out any combinations not applicable to the trigger.
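As a minimal sketch of that idea (the names below are illustrative, not the script's actual data structures), the matrix is the Cartesian product of the per-axis settings, with trigger-specific combinations filtered out:
```python
# Illustrative Cartesian-product-plus-filter sketch; hypothetical names.
import itertools

build_types = ["Debug", "Release"]
build_modes = ["unity_off", "unity_on"]
architectures = ["linux/amd64", "linux/arm64"]

def applies_to(trigger: str, build_type: str, build_mode: str, arch: str) -> bool:
    # Hypothetical predicate: e.g. only run Release builds on PR commits.
    return trigger != "commit" or build_type == "Release"

matrix = [
    (bt, bm, arch)
    for bt, bm, arch in itertools.product(build_types, build_modes, architectures)
    if applies_to("commit", bt, bm, arch)
]
print(matrix)
```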
## Platforms
We support three platforms: Linux, macOS, and Windows.
### Linux
We support a variety of distributions (Debian, RHEL, and Ubuntu) and compilers
(GCC and Clang) on Linux. As there are so many combinations, we don't run them
all. Instead, we focus on a few key ones for PR commits and merges, while we run
most of them on a scheduled or ad hoc basis.
Some noteworthy configurations are:
- The official release build is GCC 14 on Debian Bullseye.
- Although we generally enable assertions in release builds, we disable them
for the official release build.
- We publish .deb and .rpm packages for this build, as well as a Docker image.
- For PR commits we also publish packages and images for testing purposes.
- Antithesis instrumentation is only supported on Clang 16+ on AMD64.
- We publish a Docker image for this build, but no packages.
- Coverage reports are generated on Bullseye with GCC 15.
- It must be enabled for both commits (to show PR coverage) and merges (to
show default branch coverage).
Note that we try to run pipelines equally across both AMD64 and ARM64, but in
some cases we cannot build on ARM64:
- All Clang 20+ builds on ARM64 are currently skipped due to a Boost build
error.
- All RHEL builds on ARM64 are currently skipped due to a build failure that
needs further investigation.
Also note that to create a Docker image we ideally build on both AMD64 and
ARM64 to create a multi-arch image. Both configs should therefore be triggered
by the same event. However, as the script outputs individual configs, the
workflow must be able to run both builds separately and then merge the
single-arch images afterward into a multi-arch image.
### macOS
We support building on macOS, which uses the Apple Clang compiler and the ARM64
architecture. We use default settings for all builds, and don't publish any
packages or images.
### Windows
We also support building on Windows, which uses the MSVC compiler and the AMD64
architecture. While we could build on ARM64, we have not yet found a suitable
cloud machine to use as a GitHub runner. We use default settings for all builds,
and don't publish any packages or images.
## Triggers
We have four triggers that can cause the workflow to run:
- `commit`: A commit is pushed to a branch for which a pull request is open.
- `merge`: A pull request is merged.
- `label`: A label is added to a pull request.
- `schedule`: The workflow is run on a scheduled basis.
The `label` trigger is currently not used, but it is reserved for future use.
The `schedule` trigger is used to run the workflow each weekday, and is also
used for ad hoc testing via the `workflow_dispatch` event.
### Dependencies
The pipeline that is run for the `schedule` trigger will recompile and upload
all Conan packages to the remote for each configuration that is enabled. In
case any dependencies were added or updated in a recently merged PR, they will
then be available in the remote for the following pipeline runs. It is therefore
important that all configurations that are enabled for the `commit`, `merge`,
and `label` triggers are also enabled for the `schedule` trigger. We run
additional configurations in the `schedule` trigger that are not run for the
other triggers, to get extra confidence that the codebase can compile and run on
all supported platforms.
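A hedged sketch of how that invariant could be checked (hypothetical structures; the real configurations live in the scripts in this directory):
```python
# Illustrative check, not part of the actual scripts: every configuration
# enabled for commit/merge/label should also be enabled for schedule, so
# the nightly run caches its dependencies in the Conan remote.
configs = [
    {"name": "debian-gcc", "triggers": {"commit", "merge", "schedule"}},
    {"name": "rhel-clang", "triggers": {"label"}},  # missing "schedule": flagged
]
for config in configs:
    enabled_elsewhere = config["triggers"] & {"commit", "merge", "label"}
    if enabled_elsewhere and "schedule" not in config["triggers"]:
        print(f"warning: {config['name']} is not enabled for the schedule trigger")
```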
#### Caveats
There is some nuance here, in that certain options affect the compilation of the
dependencies, while others do not. This means that the same options need to be
enabled for the `schedule` trigger as for the other triggers to ensure any
dependency changes get cached in the Conan remote (see the sketch after the
list below).
- Build mode (`unity`): Does not affect the dependencies.
- Build option (`coverage`, `voidstar`): Does not affect the dependencies.
- Build option (`sanitizer asan`, `sanitizer tsan`): Affects the dependencies.
- Build type (`debug`, `release`): Affects the dependencies.
- Build type (`publish`): Same effect as `release` on the dependencies.
- Test option (`reference fee`): Does not affect the dependencies.
- Publish option (`package`, `image`): Does not affect the dependencies.
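The list above can be read as a simple lookup, sketched here with illustrative names (this helper is not in the scripts):
```python
# Whether a setting changes how the Conan dependencies are compiled,
# taken directly from the list above.
AFFECTS_DEPENDENCIES = {
    "unity": False,
    "coverage": False,
    "voidstar": False,
    "sanitizer asan": True,
    "sanitizer tsan": True,
    "debug": True,
    "release": True,
    "publish": True,  # same effect as "release" on the dependencies
    "reference fee": False,
    "package": False,
    "image": False,
}

def needs_dependency_rebuild(settings: list[str]) -> bool:
    return any(AFFECTS_DEPENDENCIES.get(s, False) for s in settings)

print(needs_dependency_rebuild(["unity", "sanitizer tsan"]))  # True
```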
## Usage
Our GitHub CI pipeline uses the `generate.py` script to generate the matrix for
the current workflow invocation. Naturally, the script can be run locally to
generate the matrix for testing purposes, e.g.:
```bash
python3 generate.py --platform=linux --trigger=commit
```
If you want to pretty-print the output, you can pipe it to `jq` after stripping
off the `matrix=` prefix, e.g.:
```bash
python3 generate.py --platform=linux --trigger=commit | cut -d= -f2- | jq
```


@@ -1,211 +1,301 @@
#!/usr/bin/env python3
import argparse
import dataclasses
import itertools
from collections.abc import Iterator
import json
from dataclasses import dataclass
from pathlib import Path
import linux
import macos
import windows
from helpers.defs import *
from helpers.enums import *
from helpers.funcs import *
from helpers.unique import *
# The GitHub runner tags to use for the different architectures.
RUNNER_TAGS = {
Arch.LINUX_AMD64: ["self-hosted", "Linux", "X64", "heavy"],
Arch.LINUX_ARM64: ["self-hosted", "Linux", "ARM64", "heavy-arm64"],
Arch.MACOS_ARM64: ["self-hosted", "macOS", "ARM64", "mac-runner-m1"],
Arch.WINDOWS_AMD64: ["self-hosted", "Windows", "devbox"],
}
THIS_DIR = Path(__file__).parent.resolve()
def generate_configs(distros: list[Distro], trigger: Trigger) -> list[Config]:
"""Generate a strategy matrix for GitHub Actions CI.
Args:
distros: The distros to generate the matrix for.
trigger: The trigger that caused the workflow to run.
Returns:
list[Config]: The generated configurations.
Raises:
ValueError: If any of the required fields are empty or invalid.
TypeError: If any of the required fields are of the wrong type.
"""
configs = []
for distro in distros:
for config in generate_config_for_distro(distro, trigger):
configs.append(config)
if not is_unique(configs):
raise ValueError("configs must be a list of unique Config")
return configs
@dataclass
class Config:
architecture: list[dict]
os: list[dict]
build_type: list[str]
cmake_args: list[str]
def generate_config_for_distro(distro: Distro, trigger: Trigger) -> Iterator[Config]:
"""Generate a strategy matrix for a specific distro.
"""
Generate a strategy matrix for GitHub Actions CI.
Args:
distro: The distro to generate the matrix for.
trigger: The trigger that caused the workflow to run.
On each PR commit we will build a selection of Debian, RHEL, Ubuntu, MacOS, and
Windows configurations, while upon merge into the develop, release, or master
branches, we will build all configurations, and test most of them.
Yields:
Config: The next configuration to build.
Raises:
ValueError: If any of the required fields are empty or invalid.
TypeError: If any of the required fields are of the wrong type.
"""
for spec in distro.specs:
if trigger not in spec.triggers:
continue
os_name = distro.os_name
os_version = distro.os_version
compiler_name = distro.compiler_name
compiler_version = distro.compiler_version
image_sha = distro.image_sha
yield from generate_config_for_distro_spec(
os_name,
os_version,
compiler_name,
compiler_version,
image_sha,
spec,
trigger,
)
We will further set additional CMake arguments as follows:
- All builds will have the `tests`, `werr`, and `xrpld` options.
- All builds will have the `wextra` option except for GCC 12 and Clang 16.
- All release builds will have the `assert` option.
- Certain Debian Bookworm configurations will change the reference fee, enable
codecov, and enable voidstar in PRs.
"""
def generate_config_for_distro_spec(
os_name: str,
os_version: str,
compiler_name: str,
compiler_version: str,
image_sha: str,
spec: Spec,
trigger: Trigger,
) -> Iterator[Config]:
"""Generate a strategy matrix for a specific distro and spec.
Args:
os_name: The OS name.
os_version: The OS version.
compiler_name: The compiler name.
compiler_version: The compiler version.
image_sha: The image SHA.
spec: The spec to generate the matrix for.
trigger: The trigger that caused the workflow to run.
Yields:
Config: The next configuration to build.
"""
for trigger_, arch, build_mode, build_type in itertools.product(
spec.triggers, spec.archs, spec.build_modes, spec.build_types
def generate_strategy_matrix(all: bool, config: Config) -> list:
configurations = []
for architecture, os, build_type, cmake_args in itertools.product(
config.architecture, config.os, config.build_type, config.cmake_args
):
if trigger_ != trigger:
# The default CMake target is 'all' for Linux and MacOS and 'install'
# for Windows, but it can get overridden for certain configurations.
cmake_target = "install" if os["distro_name"] == "windows" else "all"
# We build and test all configurations by default, except for Windows in
# Debug, because it is too slow, as well as when code coverage is
# enabled as that mode already runs the tests.
build_only = False
if os["distro_name"] == "windows" and build_type == "Debug":
build_only = True
# Only generate a subset of configurations in PRs.
if not all:
# Debian:
# - Bookworm using GCC 13: Release and Unity on linux/amd64, set
# the reference fee to 500.
# - Bookworm using GCC 15: Debug and no Unity on linux/amd64, enable
# code coverage (which will be done below).
# - Bookworm using Clang 16: Debug and no Unity on linux/arm64,
# enable voidstar.
# - Bookworm using Clang 17: Release and no Unity on linux/amd64,
# set the reference fee to 1000.
# - Bookworm using Clang 20: Debug and Unity on linux/amd64.
if os["distro_name"] == "debian":
skip = True
if os["distro_version"] == "bookworm":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-13"
and build_type == "Release"
and "-Dunity=ON" in cmake_args
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"-DUNIT_TEST_REFERENCE_FEE=500 {cmake_args}"
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-15"
and build_type == "Debug"
and "-Dunity=OFF" in cmake_args
and architecture["platform"] == "linux/amd64"
):
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-16"
and build_type == "Debug"
and "-Dunity=OFF" in cmake_args
and architecture["platform"] == "linux/arm64"
):
cmake_args = f"-Dvoidstar=ON {cmake_args}"
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-17"
and build_type == "Release"
and "-Dunity=ON" in cmake_args
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"-DUNIT_TEST_REFERENCE_FEE=1000 {cmake_args}"
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-20"
and build_type == "Debug"
and "-Dunity=ON" in cmake_args
and architecture["platform"] == "linux/amd64"
):
skip = False
if skip:
continue
# RHEL:
# - 9 using GCC 12: Debug and Unity on linux/amd64.
# - 10 using Clang: Release and no Unity on linux/amd64.
if os["distro_name"] == "rhel":
skip = True
if os["distro_version"] == "9":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-12"
and build_type == "Debug"
and "-Dunity=ON" in cmake_args
and architecture["platform"] == "linux/amd64"
):
skip = False
elif os["distro_version"] == "10":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-any"
and build_type == "Release"
and "-Dunity=OFF" in cmake_args
and architecture["platform"] == "linux/amd64"
):
skip = False
if skip:
continue
# Ubuntu:
# - Jammy using GCC 12: Debug and no Unity on linux/arm64.
# - Noble using GCC 14: Release and Unity on linux/amd64.
# - Noble using Clang 18: Debug and no Unity on linux/amd64.
# - Noble using Clang 19: Release and Unity on linux/arm64.
if os["distro_name"] == "ubuntu":
skip = True
if os["distro_version"] == "jammy":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-12"
and build_type == "Debug"
and "-Dunity=OFF" in cmake_args
and architecture["platform"] == "linux/arm64"
):
skip = False
elif os["distro_version"] == "noble":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-14"
and build_type == "Release"
and "-Dunity=ON" in cmake_args
and architecture["platform"] == "linux/amd64"
):
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-18"
and build_type == "Debug"
and "-Dunity=OFF" in cmake_args
and architecture["platform"] == "linux/amd64"
):
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-19"
and build_type == "Release"
and "-Dunity=ON" in cmake_args
and architecture["platform"] == "linux/arm64"
):
skip = False
if skip:
continue
# MacOS:
# - Debug and no Unity on macos/arm64.
if os["distro_name"] == "macos" and not (
build_type == "Debug"
and "-Dunity=OFF" in cmake_args
and architecture["platform"] == "macos/arm64"
):
continue
# Windows:
# - Release and Unity on windows/amd64.
if os["distro_name"] == "windows" and not (
build_type == "Release"
and "-Dunity=ON" in cmake_args
and architecture["platform"] == "windows/amd64"
):
continue
# Additional CMake arguments.
cmake_args = f"{cmake_args} -Dtests=ON -Dwerr=ON -Dxrpld=ON"
if not f"{os['compiler_name']}-{os['compiler_version']}" in [
"gcc-12",
"clang-16",
]:
cmake_args = f"{cmake_args} -Dwextra=ON"
if build_type == "Release":
cmake_args = f"{cmake_args} -Dassert=ON"
# We skip all RHEL on arm64 due to a build failure that needs further
# investigation.
if os["distro_name"] == "rhel" and architecture["platform"] == "linux/arm64":
continue
build_option = spec.build_option
test_option = spec.test_option
publish_option = spec.publish_option
# We skip all clang 20+ on arm64 due to Boost build error.
if (
f"{os['compiler_name']}-{os['compiler_version']}"
in ["clang-20", "clang-21"]
and architecture["platform"] == "linux/arm64"
):
continue
# Determine the configuration name.
config_name = generate_config_name(
os_name,
os_version,
compiler_name,
compiler_version,
arch,
build_type,
build_mode,
build_option,
# Enable code coverage for Debian Bookworm using GCC 15 in Debug and no
# Unity on linux/amd64
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-15"
and build_type == "Debug"
and "-Dunity=OFF" in cmake_args
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"-Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0 {cmake_args}"
# Generate a unique name for the configuration, e.g. macos-arm64-debug
# or debian-bookworm-gcc-12-amd64-release-unity.
config_name = os["distro_name"]
if (n := os["distro_version"]) != "":
config_name += f"-{n}"
if (n := os["compiler_name"]) != "":
config_name += f"-{n}"
if (n := os["compiler_version"]) != "":
config_name += f"-{n}"
config_name += (
f"-{architecture['platform'][architecture['platform'].find('/') + 1 :]}"
)
config_name += f"-{build_type.lower()}"
if "-Dunity=ON" in cmake_args:
config_name += "-unity"
# Add the configuration to the list, with the most unique fields first,
# so that they are easier to identify in the GitHub Actions UI, as long
# names get truncated.
configurations.append(
{
"config_name": config_name,
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
}
)
# Determine the CMake arguments.
cmake_args = generate_cmake_args(
compiler_name,
compiler_version,
build_type,
build_mode,
build_option,
test_option,
)
return configurations
# Determine the CMake target.
cmake_target = generate_cmake_target(os_name, build_type)
# Determine whether to enable running tests, and to create a package
# and/or image.
enable_tests, enable_package, enable_image = generate_enable_options(
os_name, build_type, publish_option
)
def read_config(file: Path) -> Config:
config = json.loads(file.read_text())
if (
config["architecture"] is None
or config["os"] is None
or config["build_type"] is None
or config["cmake_args"] is None
):
raise Exception("Invalid configuration file.")
# Determine the image to run in, if applicable.
image = generate_image_name(
os_name,
os_version,
compiler_name,
compiler_version,
image_sha,
)
# Generate the configuration.
yield Config(
config_name=config_name,
cmake_args=cmake_args,
cmake_target=cmake_target,
build_type=("Debug" if build_type == BuildType.DEBUG else "Release"),
enable_tests=enable_tests,
enable_package=enable_package,
enable_image=enable_image,
runs_on=RUNNER_TAGS[arch],
image=image,
)
return Config(**config)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--platform",
"-p",
required=False,
type=Platform,
choices=list(Platform),
help="The platform to run on.",
"-a",
"--all",
help="Set to generate all configurations (generally used when merging a PR) or leave unset to generate a subset of configurations (generally used when committing to a PR).",
action="store_true",
)
parser.add_argument(
"--trigger",
"-t",
required=True,
type=Trigger,
choices=list(Trigger),
help="The trigger that caused the workflow to run.",
"-c",
"--config",
help="Path to the JSON file containing the strategy matrix configurations.",
required=False,
type=Path,
)
args = parser.parse_args()
# Collect the distros to generate configs for.
distros = []
if args.platform in [None, Platform.LINUX]:
distros += linux.DEBIAN_DISTROS + linux.RHEL_DISTROS + linux.UBUNTU_DISTROS
if args.platform in [None, Platform.MACOS]:
distros += macos.DISTROS
if args.platform in [None, Platform.WINDOWS]:
distros += windows.DISTROS
matrix = []
if args.config is None or args.config == "":
matrix += generate_strategy_matrix(
args.all, read_config(THIS_DIR / "linux.json")
)
matrix += generate_strategy_matrix(
args.all, read_config(THIS_DIR / "macos.json")
)
matrix += generate_strategy_matrix(
args.all, read_config(THIS_DIR / "windows.json")
)
else:
matrix += generate_strategy_matrix(args.all, read_config(args.config))
# Generate the configs.
configs = generate_configs(distros, args.trigger)
# Convert the configs into the format expected by GitHub Actions.
include = []
for config in configs:
include.append(dataclasses.asdict(config))
print(f"matrix={json.dumps({'include': include})}")
# Generate the strategy matrix.
print(f"matrix={json.dumps({'include': matrix})}")


@@ -1,466 +0,0 @@
import pytest
from generate import *
@pytest.fixture
def macos_distro():
return Distro(
os_name="macos",
specs=[
Spec(
archs=[Arch.MACOS_ARM64],
build_modes=[BuildMode.UNITY_OFF],
build_option=BuildOption.COVERAGE,
build_types=[BuildType.RELEASE],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
],
)
@pytest.fixture
def windows_distro():
return Distro(
os_name="windows",
specs=[
Spec(
archs=[Arch.WINDOWS_AMD64],
build_modes=[BuildMode.UNITY_ON],
build_option=BuildOption.SANITIZE_ASAN,
build_types=[BuildType.DEBUG],
publish_option=PublishOption.IMAGE_ONLY,
test_option=TestOption.REFERENCE_FEE_500,
triggers=[Trigger.COMMIT, Trigger.SCHEDULE],
)
],
)
@pytest.fixture
def linux_distro():
return Distro(
os_name="debian",
os_version="bookworm",
compiler_name="clang",
compiler_version="16",
image_sha="a1b2c3d4",
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
build_modes=[BuildMode.UNITY_OFF],
build_option=BuildOption.SANITIZE_TSAN,
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.LABEL],
),
Spec(
archs=[Arch.LINUX_AMD64, Arch.LINUX_ARM64],
build_modes=[BuildMode.UNITY_OFF, BuildMode.UNITY_ON],
build_option=BuildOption.VOIDSTAR,
build_types=[BuildType.PUBLISH],
publish_option=PublishOption.PACKAGE_AND_IMAGE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT, Trigger.LABEL],
),
],
)
def test_macos_generate_config_for_distro_spec_matches_trigger(macos_distro):
trigger = Trigger.COMMIT
distro = macos_distro
result = list(
generate_config_for_distro_spec(
distro.os_name,
distro.os_version,
distro.compiler_name,
distro.compiler_version,
distro.image_sha,
distro.specs[0],
trigger,
)
)
assert result == [
Config(
config_name="macos-coverage-release-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dassert=ON -Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0",
cmake_target="all",
build_type="Release",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["self-hosted", "macOS", "ARM64", "mac-runner-m1"],
image=None,
)
]
def test_macos_generate_config_for_distro_spec_no_match_trigger(macos_distro):
trigger = Trigger.MERGE
distro = macos_distro
result = list(
generate_config_for_distro_spec(
distro.os_name,
distro.os_version,
distro.compiler_name,
distro.compiler_version,
distro.image_sha,
distro.specs[0],
trigger,
)
)
assert result == []
def test_macos_generate_config_for_distro_matches_trigger(macos_distro):
trigger = Trigger.COMMIT
distro = macos_distro
result = list(generate_config_for_distro(distro, trigger))
assert result == [
Config(
config_name="macos-coverage-release-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dassert=ON -Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0",
cmake_target="all",
build_type="Release",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["self-hosted", "macOS", "ARM64", "mac-runner-m1"],
image=None,
)
]
def test_macos_generate_config_for_distro_no_match_trigger(macos_distro):
trigger = Trigger.MERGE
distro = macos_distro
result = list(generate_config_for_distro(distro, trigger))
assert result == []
def test_windows_generate_config_for_distro_spec_matches_trigger(
windows_distro,
):
trigger = Trigger.COMMIT
distro = windows_distro
result = list(
generate_config_for_distro_spec(
distro.os_name,
distro.os_version,
distro.compiler_name,
distro.compiler_version,
distro.image_sha,
distro.specs[0],
trigger,
)
)
assert result == [
Config(
config_name="windows-asan-debug-unity-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dunity=ON -DUNIT_TEST_REFERENCE_FEE=500",
cmake_target="install",
build_type="Debug",
enable_tests=False,
enable_package=False,
enable_image=True,
runs_on=["self-hosted", "Windows", "devbox"],
image=None,
)
]
def test_windows_generate_config_for_distro_spec_no_match_trigger(
windows_distro,
):
trigger = Trigger.MERGE
distro = windows_distro
result = list(
generate_config_for_distro_spec(
distro.os_name,
distro.os_version,
distro.compiler_name,
distro.compiler_version,
distro.image_sha,
distro.specs[0],
trigger,
)
)
assert result == []
def test_windows_generate_config_for_distro_matches_trigger(
windows_distro,
):
trigger = Trigger.COMMIT
distro = windows_distro
result = list(generate_config_for_distro(distro, trigger))
assert result == [
Config(
config_name="windows-asan-debug-unity-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dunity=ON -DUNIT_TEST_REFERENCE_FEE=500",
cmake_target="install",
build_type="Debug",
enable_tests=False,
enable_package=False,
enable_image=True,
runs_on=["self-hosted", "Windows", "devbox"],
image=None,
)
]
def test_windows_generate_config_for_distro_no_match_trigger(
windows_distro,
):
trigger = Trigger.MERGE
distro = windows_distro
result = list(generate_config_for_distro(distro, trigger))
assert result == []
def test_linux_generate_config_for_distro_spec_matches_trigger(linux_distro):
trigger = Trigger.LABEL
distro = linux_distro
result = list(
generate_config_for_distro_spec(
distro.os_name,
distro.os_version,
distro.compiler_name,
distro.compiler_version,
distro.image_sha,
distro.specs[1],
trigger,
)
)
assert result == [
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "X64", "heavy"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-unity-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dunity=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "X64", "heavy"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "ARM64", "heavy-arm64"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-unity-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dunity=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "ARM64", "heavy-arm64"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
]
def test_linux_generate_config_for_distro_spec_no_match_trigger(linux_distro):
trigger = Trigger.MERGE
distro = linux_distro
result = list(
generate_config_for_distro_spec(
distro.os_name,
distro.os_version,
distro.compiler_name,
distro.compiler_version,
distro.image_sha,
distro.specs[1],
trigger,
)
)
assert result == []
def test_linux_generate_config_for_distro_matches_trigger(linux_distro):
trigger = Trigger.LABEL
distro = linux_distro
result = list(generate_config_for_distro(distro, trigger))
assert result == [
Config(
config_name="debian-bookworm-clang-16-tsan-debug-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["self-hosted", "Linux", "X64", "heavy"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "X64", "heavy"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-unity-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dunity=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "X64", "heavy"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "ARM64", "heavy-arm64"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-unity-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dunity=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "ARM64", "heavy-arm64"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
]
def test_linux_generate_config_for_distro_no_match_trigger(linux_distro):
trigger = Trigger.MERGE
distro = linux_distro
result = list(generate_config_for_distro(distro, trigger))
assert result == []
def test_generate_configs(macos_distro, windows_distro, linux_distro):
trigger = Trigger.COMMIT
distros = [macos_distro, windows_distro, linux_distro]
result = generate_configs(distros, trigger)
assert result == [
Config(
config_name="macos-coverage-release-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dassert=ON -Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0",
cmake_target="all",
build_type="Release",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["self-hosted", "macOS", "ARM64", "mac-runner-m1"],
image=None,
),
Config(
config_name="windows-asan-debug-unity-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dunity=ON -DUNIT_TEST_REFERENCE_FEE=500",
cmake_target="install",
build_type="Debug",
enable_tests=False,
enable_package=False,
enable_image=True,
runs_on=["self-hosted", "Windows", "devbox"],
image=None,
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "X64", "heavy"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-unity-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dunity=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "X64", "heavy"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "ARM64", "heavy-arm64"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-unity-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dunity=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "ARM64", "heavy-arm64"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
]
def test_generate_configs_raises_on_duplicate_configs(macos_distro):
trigger = Trigger.COMMIT
distros = [macos_distro, macos_distro]
with pytest.raises(ValueError):
generate_configs(distros, trigger)


@@ -1,190 +0,0 @@
from dataclasses import dataclass, field
from helpers.enums import *
from helpers.unique import *
@dataclass
class Config:
"""Represents a configuration to include in the strategy matrix.
Raises:
ValueError: If any of the required fields are empty or invalid.
TypeError: If any of the required fields are of the wrong type.
"""
config_name: str
cmake_args: str
cmake_target: str
build_type: str
enable_tests: bool
enable_package: bool
enable_image: bool
runs_on: list[str]
image: str | None = None
def __post_init__(self):
if not self.config_name:
raise ValueError("config_name cannot be empty")
if not isinstance(self.config_name, str):
raise TypeError("config_name must be a string")
if not self.cmake_args:
raise ValueError("cmake_args cannot be empty")
if not isinstance(self.cmake_args, str):
raise TypeError("cmake_args must be a string")
if not self.cmake_target:
raise ValueError("cmake_target cannot be empty")
if not isinstance(self.cmake_target, str):
raise TypeError("cmake_target must be a string")
if self.cmake_target not in ["all", "install"]:
raise ValueError("cmake_target must be 'all' or 'install'")
if not self.build_type:
raise ValueError("build_type cannot be empty")
if not isinstance(self.build_type, str):
raise TypeError("build_type must be a string")
if self.build_type not in ["Debug", "Release"]:
raise ValueError("build_type must be 'Debug' or 'Release'")
if not isinstance(self.enable_tests, bool):
raise TypeError("enable_tests must be a boolean")
if not isinstance(self.enable_package, bool):
raise TypeError("enable_package must be a boolean")
if not isinstance(self.enable_image, bool):
raise TypeError("enable_image must be a boolean")
if not self.runs_on:
raise ValueError("runs_on cannot be empty")
if not isinstance(self.runs_on, list):
raise TypeError("runs_on must be a list")
if not all(isinstance(runner, str) for runner in self.runs_on):
raise TypeError("runs_on must be a list of strings")
if not all(self.runs_on):
raise ValueError("runs_on must be a list of non-empty strings")
if len(self.runs_on) != len(set(self.runs_on)):
raise ValueError("runs_on must be a list of unique strings")
if self.image and not isinstance(self.image, str):
raise TypeError("image must be a string")
@dataclass
class Spec:
"""Represents a specification used by a configuration.
Raises:
ValueError: If any of the required fields are empty.
TypeError: If any of the required fields are of the wrong type.
"""
archs: list[Arch] = field(
default_factory=lambda: [Arch.LINUX_AMD64, Arch.LINUX_ARM64]
)
build_option: BuildOption = BuildOption.NONE
build_modes: list[BuildMode] = field(
default_factory=lambda: [BuildMode.UNITY_OFF, BuildMode.UNITY_ON]
)
build_types: list[BuildType] = field(
default_factory=lambda: [BuildType.DEBUG, BuildType.RELEASE]
)
publish_option: PublishOption = PublishOption.NONE
test_option: TestOption = TestOption.NONE
triggers: list[Trigger] = field(
default_factory=lambda: [Trigger.COMMIT, Trigger.MERGE, Trigger.SCHEDULE]
)
def __post_init__(self):
if not self.archs:
raise ValueError("archs cannot be empty")
if not isinstance(self.archs, list):
raise TypeError("archs must be a list")
if not all(isinstance(arch, str) for arch in self.archs):
raise TypeError("archs must be a list of Arch")
if len(self.archs) != len(set(self.archs)):
raise ValueError("archs must be a list of unique Arch")
if not isinstance(self.build_option, BuildOption):
raise TypeError("build_option must be a BuildOption")
if not self.build_modes:
raise ValueError("build_modes cannot be empty")
if not isinstance(self.build_modes, list):
raise TypeError("build_modes must be a list")
if not all(
isinstance(build_mode, BuildMode) for build_mode in self.build_modes
):
raise TypeError("build_modes must be a list of BuildMode")
if len(self.build_modes) != len(set(self.build_modes)):
raise ValueError("build_modes must be a list of unique BuildMode")
if not self.build_types:
raise ValueError("build_types cannot be empty")
if not isinstance(self.build_types, list):
raise TypeError("build_types must be a list")
if not all(
isinstance(build_type, BuildType) for build_type in self.build_types
):
raise TypeError("build_types must be a list of BuildType")
if len(self.build_types) != len(set(self.build_types)):
raise ValueError("build_types must be a list of unique BuildType")
if not isinstance(self.publish_option, PublishOption):
raise TypeError("publish_option must be a PublishOption")
if not isinstance(self.test_option, TestOption):
raise TypeError("test_option must be a TestOption")
if not self.triggers:
raise ValueError("triggers cannot be empty")
if not isinstance(self.triggers, list):
raise TypeError("triggers must be a list")
if not all(isinstance(trigger, Trigger) for trigger in self.triggers):
raise TypeError("triggers must be a list of Trigger")
if len(self.triggers) != len(set(self.triggers)):
raise ValueError("triggers must be a list of unique Trigger")
@dataclass
class Distro:
"""Represents a Linux, Windows or macOS distribution with specifications.
Raises:
ValueError: If any of the required fields are empty.
TypeError: If any of the required fields are of the wrong type.
"""
os_name: str
os_version: str = ""
compiler_name: str = ""
compiler_version: str = ""
image_sha: str = ""
specs: list[Spec] = field(default_factory=list)
def __post_init__(self):
if not self.os_name:
raise ValueError("os_name cannot be empty")
if not isinstance(self.os_name, str):
raise TypeError("os_name must be a string")
if self.os_version and not isinstance(self.os_version, str):
raise TypeError("os_version must be a string")
if self.compiler_name and not isinstance(self.compiler_name, str):
raise TypeError("compiler_name must be a string")
if self.compiler_version and not isinstance(self.compiler_version, str):
raise TypeError("compiler_version must be a string")
if self.image_sha and not isinstance(self.image_sha, str):
raise TypeError("image_sha must be a string")
if not self.specs:
raise ValueError("specs cannot be empty")
if not isinstance(self.specs, list):
raise TypeError("specs must be a list")
if not all(isinstance(spec, Spec) for spec in self.specs):
raise TypeError("specs must be a list of Spec")
if not is_unique(self.specs):
raise ValueError("specs must be a list of unique Spec")


@@ -1,743 +0,0 @@
import pytest
from helpers.defs import *
from helpers.enums import *
from helpers.funcs import *
def test_config_valid_none_image():
assert Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image=None,
)
def test_config_valid_empty_image():
assert Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="install",
build_type="Debug",
enable_tests=False,
enable_package=True,
enable_image=False,
runs_on=["label"],
image="",
)
def test_config_valid_with_image():
assert Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="install",
build_type="Release",
enable_tests=False,
enable_package=True,
enable_image=True,
runs_on=["label"],
image="image",
)
def test_config_raises_on_empty_config_name():
with pytest.raises(ValueError):
Config(
config_name="",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_wrong_config_name():
with pytest.raises(TypeError):
Config(
config_name=123,
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_empty_cmake_args():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_wrong_cmake_args():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args=123,
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_empty_cmake_target():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_invalid_cmake_target():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="invalid",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_wrong_cmake_target():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target=123,
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_empty_build_type():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_invalid_build_type():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="invalid",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_wrong_build_type():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type=123,
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_wrong_enable_tests():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=123,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_wrong_enable_package():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=123,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_wrong_enable_image():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=True,
enable_image=123,
runs_on=["label"],
image="image",
)
def test_config_raises_on_none_runs_on():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=None,
image="image",
)
def test_config_raises_on_empty_runs_on():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=[],
image="image",
)
def test_config_raises_on_invalid_runs_on():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=[""],
image="image",
)
def test_config_raises_on_wrong_runs_on():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=[123],
image="image",
)
def test_config_raises_on_duplicate_runs_on():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label", "label"],
image="image",
)
def test_config_raises_on_wrong_image():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image=123,
)
def test_spec_valid():
assert Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_none_archs():
with pytest.raises(ValueError):
Spec(
archs=None,
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_empty_archs():
with pytest.raises(ValueError):
Spec(
archs=[],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_wrong_archs():
with pytest.raises(TypeError):
Spec(
archs=[123],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_duplicate_archs():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64, Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_wrong_build_option():
with pytest.raises(TypeError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=123,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_none_build_modes():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=None,
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_empty_build_modes():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_wrong_build_modes():
with pytest.raises(TypeError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[123],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_none_build_types():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=None,
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_empty_build_types():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_wrong_build_types():
with pytest.raises(TypeError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[123],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_duplicate_build_types():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG, BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_wrong_publish_option():
with pytest.raises(TypeError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=123,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_wrong_test_option():
with pytest.raises(TypeError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=123,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_none_triggers():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=None,
)
def test_spec_raises_on_empty_triggers():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[],
)
def test_spec_raises_on_wrong_triggers():
with pytest.raises(TypeError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[123],
)
def test_spec_raises_on_duplicate_triggers():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT, Trigger.COMMIT],
)
def test_distro_valid_none_image_sha():
assert Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha=None,
specs=[Spec()], # This is valid due to the default values.
)
def test_distro_valid_empty_os_compiler_image_sha():
assert Distro(
os_name="os_name",
os_version="",
compiler_name="",
compiler_version="",
image_sha="",
specs=[Spec()],
)
def test_distro_valid_with_image():
assert Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[Spec()],
)
def test_distro_raises_on_empty_os_name():
with pytest.raises(ValueError):
Distro(
os_name="",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[Spec()],
)
def test_distro_raises_on_wrong_os_name():
with pytest.raises(TypeError):
Distro(
os_name=123,
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[Spec()],
)
def test_distro_raises_on_wrong_os_version():
with pytest.raises(TypeError):
Distro(
os_name="os_name",
os_version=123,
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[Spec()],
)
def test_distro_raises_on_wrong_compiler_name():
with pytest.raises(TypeError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name=123,
compiler_version="compiler_version",
image_sha="image_sha",
specs=[Spec()],
)
def test_distro_raises_on_wrong_compiler_version():
with pytest.raises(TypeError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version=123,
image_sha="image_sha",
specs=[Spec()],
)
def test_distro_raises_on_wrong_image_sha():
with pytest.raises(TypeError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha=123,
specs=[Spec()],
)
def test_distro_raises_on_none_specs():
with pytest.raises(ValueError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=None,
)
def test_distro_raises_on_empty_specs():
with pytest.raises(ValueError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[],
)
def test_distro_raises_on_invalid_specs():
with pytest.raises(ValueError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[Spec(triggers=[])],
)
def test_distro_raises_on_duplicate_specs():
with pytest.raises(ValueError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[Spec(), Spec()],
)
def test_distro_raises_on_wrong_specs():
with pytest.raises(TypeError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[123],
)


@@ -1,75 +0,0 @@
from enum import StrEnum, auto
class Arch(StrEnum):
"""Represents architectures to build for."""
LINUX_AMD64 = "linux/amd64"
LINUX_ARM64 = "linux/arm64"
MACOS_ARM64 = "macos/arm64"
WINDOWS_AMD64 = "windows/amd64"
class BuildMode(StrEnum):
"""Represents whether to perform a unity or non-unity build."""
UNITY_OFF = auto()
UNITY_ON = auto()
class BuildOption(StrEnum):
"""Represents build options to enable."""
NONE = auto()
COVERAGE = auto()
SANITIZE_ASAN = (
auto()
) # Address Sanitizer, also includes Undefined Behavior Sanitizer.
SANITIZE_TSAN = (
auto()
) # Thread Sanitizer, also includes Undefined Behavior Sanitizer.
VOIDSTAR = auto()
class BuildType(StrEnum):
"""Represents the build type to use."""
DEBUG = auto()
RELEASE = auto()
PUBLISH = auto() # Release build without assertions.
class PublishOption(StrEnum):
"""Represents whether to publish a package, an image, or both."""
NONE = auto()
PACKAGE_ONLY = auto()
IMAGE_ONLY = auto()
PACKAGE_AND_IMAGE = auto()
class TestOption(StrEnum):
"""Represents test options to enable, specifically the reference fee to use."""
__test__ = False # Tell pytest to not consider this as a test class.
NONE = "" # Use the default reference fee of 10.
REFERENCE_FEE_500 = "500"
REFERENCE_FEE_1000 = "1000"
class Platform(StrEnum):
"""Represents the platform to use."""
LINUX = "linux"
MACOS = "macos"
WINDOWS = "windows"
class Trigger(StrEnum):
"""Represents the trigger that caused the workflow to run."""
COMMIT = "commit"
LABEL = "label"
MERGE = "merge"
SCHEDULE = "schedule"
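A minimal sketch of how these StrEnum values are consumed downstream (generate_config_name below splits the Arch value on "/" to keep only the bare architecture suffix); assumes Python 3.11+ for StrEnum and that this module is importable as helpers.enums:

from helpers.enums import Arch

# Each Arch value encodes "<platform>/<architecture>":
platform, arch = Arch.LINUX_AMD64.value.split("/")
assert platform == "linux" and arch == "amd64"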

View File

@@ -1,235 +0,0 @@
from helpers.defs import *
from helpers.enums import *
def generate_config_name(
os_name: str,
os_version: str | None,
compiler_name: str | None,
compiler_version: str | None,
arch: Arch,
build_type: BuildType,
build_mode: BuildMode,
build_option: BuildOption,
) -> str:
"""Create a configuration name based on the distro details and build
attributes.
The configuration name is used as the display name in the GitHub Actions
UI, and since GitHub truncates long names we have to make sure the most
important information is at the beginning of the name.
Args:
os_name (str): The OS name.
os_version (str): The OS version.
compiler_name (str): The compiler name.
compiler_version (str): The compiler version.
arch (Arch): The architecture.
build_type (BuildType): The build type.
build_mode (BuildMode): The build mode.
build_option (BuildOption): The build option.
Returns:
str: The configuration name.
Raises:
ValueError: If the OS name is empty.
"""
if not os_name:
raise ValueError("os_name cannot be empty")
config_name = os_name
if os_version:
config_name += f"-{os_version}"
if compiler_name:
config_name += f"-{compiler_name}"
if compiler_version:
config_name += f"-{compiler_version}"
if build_option == BuildOption.COVERAGE:
config_name += "-coverage"
elif build_option == BuildOption.VOIDSTAR:
config_name += "-voidstar"
elif build_option == BuildOption.SANITIZE_ASAN:
config_name += "-asan"
elif build_option == BuildOption.SANITIZE_TSAN:
config_name += "-tsan"
if build_type == BuildType.DEBUG:
config_name += "-debug"
elif build_type == BuildType.RELEASE:
config_name += "-release"
elif build_type == BuildType.PUBLISH:
config_name += "-publish"
if build_mode == BuildMode.UNITY_ON:
config_name += "-unity"
config_name += f"-{arch.value.split('/')[1]}"
return config_name
def generate_cmake_args(
compiler_name: str | None,
compiler_version: str | None,
build_type: BuildType,
build_mode: BuildMode,
build_option: BuildOption,
test_option: TestOption,
) -> str:
"""Create the CMake arguments based on the build type and enabled build
options.
- All builds will have the `tests`, `werr`, and `xrpld` options.
- All builds will have the `wextra` option except for GCC 12 and Clang 16.
- All release builds will have the `assert` option.
- Set the unity option if specified.
- Set the coverage option if specified.
- Set the voidstar option if specified.
- Set the reference fee if specified.
Args:
compiler_name (str): The compiler name.
compiler_version (str): The compiler version.
build_type (BuildType): The build type.
build_mode (BuildMode): The build mode.
build_option (BuildOption): The build option.
test_option (TestOption): The test option.
Returns:
str: The CMake arguments.
"""
cmake_args = "-Dtests=ON -Dwerr=ON -Dxrpld=ON"
if f"{compiler_name}-{compiler_version}" not in [
"gcc-12",
"clang-16",
]:
cmake_args += " -Dwextra=ON"
if build_type == BuildType.RELEASE:
cmake_args += " -Dassert=ON"
if build_mode == BuildMode.UNITY_ON:
cmake_args += " -Dunity=ON"
if build_option == BuildOption.COVERAGE:
cmake_args += " -Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0"
elif build_option == BuildOption.SANITIZE_ASAN:
pass # TODO: Add ASAN-UBSAN flags.
elif build_option == BuildOption.SANITIZE_TSAN:
pass # TODO: Add TSAN-UBSAN flags.
elif build_option == BuildOption.VOIDSTAR:
cmake_args += " -Dvoidstar=ON"
if test_option != TestOption.NONE:
cmake_args += f" -DUNIT_TEST_REFERENCE_FEE={test_option.value}"
return cmake_args
def generate_cmake_target(os_name: str, build_type: BuildType) -> str:
"""Create the CMake target based on the build type.
The `install` target is used for Windows and for publishing a package, while
the `all` target is used for all other configurations.
Args:
os_name (str): The OS name.
build_type (BuildType): The build type.
Returns:
str: The CMake target.
"""
if os_name == "windows" or build_type == BuildType.PUBLISH:
return "install"
return "all"
def generate_enable_options(
os_name: str,
build_type: BuildType,
publish_option: PublishOption,
) -> tuple[bool, bool, bool]:
"""Create the enable flags based on the OS name, build option, and publish
option.
We build and test all configurations by default, except for Windows in
Debug, because it is too slow.
Args:
os_name (str): The OS name.
build_type (BuildType): The build type.
publish_option (PublishOption): The publish option.
Returns:
tuple: A tuple containing the enable test, enable package, and enable image flags.
"""
enable_tests = not (os_name == "windows" and build_type == BuildType.DEBUG)
enable_package = publish_option in [
PublishOption.PACKAGE_ONLY,
PublishOption.PACKAGE_AND_IMAGE,
]
enable_image = publish_option in [
PublishOption.IMAGE_ONLY,
PublishOption.PACKAGE_AND_IMAGE,
]
return enable_tests, enable_package, enable_image
def generate_image_name(
os_name: str,
os_version: str,
compiler_name: str,
compiler_version: str,
image_sha: str,
) -> str | None:
"""Create the Docker image name based on the distro details.
Args:
os_name (str): The OS name.
os_version (str): The OS version.
compiler_name (str): The compiler name.
compiler_version (str): The compiler version.
image_sha (str): The image SHA.
Returns:
str: The Docker image name or None if not applicable.
Raises:
ValueError: If any of the arguments is empty for Linux.
"""
if os_name == "windows" or os_name == "macos":
return None
if not os_name:
raise ValueError("os_name cannot be empty")
if not os_version:
raise ValueError("os_version cannot be empty")
if not compiler_name:
raise ValueError("compiler_name cannot be empty")
if not compiler_version:
raise ValueError("compiler_version cannot be empty")
if not image_sha:
raise ValueError("image_sha cannot be empty")
return f"ghcr.io/xrplf/ci/{os_name}-{os_version}:{compiler_name}-{compiler_version}-{image_sha}"

View File

@@ -1,419 +0,0 @@
import pytest
from helpers.enums import *
from helpers.funcs import *
def test_generate_config_name_a_b_c_d_debug_amd64():
assert (
generate_config_name(
"a",
"b",
"c",
"d",
Arch.LINUX_AMD64,
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
)
== "a-b-c-d-debug-amd64"
)
def test_generate_config_name_a_b_c_release_unity_arm64():
assert (
generate_config_name(
"a",
"b",
"c",
"",
Arch.LINUX_ARM64,
BuildType.RELEASE,
BuildMode.UNITY_ON,
BuildOption.NONE,
)
== "a-b-c-release-unity-arm64"
)
def test_generate_config_name_a_b_coverage_publish_amd64():
assert (
generate_config_name(
"a",
"b",
"",
"",
Arch.LINUX_AMD64,
BuildType.PUBLISH,
BuildMode.UNITY_OFF,
BuildOption.COVERAGE,
)
== "a-b-coverage-publish-amd64"
)
def test_generate_config_name_a_asan_debug_unity_arm64():
assert (
generate_config_name(
"a",
"",
"",
"",
Arch.LINUX_ARM64,
BuildType.DEBUG,
BuildMode.UNITY_ON,
BuildOption.SANITIZE_ASAN,
)
== "a-asan-debug-unity-arm64"
)
def test_generate_config_name_a_c_tsan_release_amd64():
assert (
generate_config_name(
"a",
"",
"c",
"",
Arch.LINUX_AMD64,
BuildType.RELEASE,
BuildMode.UNITY_OFF,
BuildOption.SANITIZE_TSAN,
)
== "a-c-tsan-release-amd64"
)
def test_generate_config_name_a_d_voidstar_debug_amd64():
assert (
generate_config_name(
"a",
"",
"",
"d",
Arch.LINUX_AMD64,
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.VOIDSTAR,
)
== "a-d-voidstar-debug-amd64"
)
def test_generate_config_name_raises_on_none_os_name():
with pytest.raises(ValueError):
generate_config_name(
None,
"b",
"c",
"d",
Arch.LINUX_AMD64,
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
)
def test_generate_config_name_raises_on_empty_os_name():
with pytest.raises(ValueError):
generate_config_name(
"",
"b",
"c",
"d",
Arch.LINUX_AMD64,
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
)
def test_generate_cmake_args_a_b_debug():
assert (
generate_cmake_args(
"a",
"b",
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON"
)
def test_generate_cmake_args_gcc_12_no_wextra():
assert (
generate_cmake_args(
"gcc",
"12",
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON"
)
def test_generate_cmake_args_clang_16_no_wextra():
assert (
generate_cmake_args(
"clang",
"16",
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON"
)
def test_generate_cmake_args_a_b_release():
assert (
generate_cmake_args(
"a",
"b",
BuildType.RELEASE,
BuildMode.UNITY_OFF,
BuildOption.NONE,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dassert=ON"
)
def test_generate_cmake_args_a_b_publish():
assert (
generate_cmake_args(
"a",
"b",
BuildType.PUBLISH,
BuildMode.UNITY_OFF,
BuildOption.NONE,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON"
)
def test_generate_cmake_args_a_b_unity():
assert (
generate_cmake_args(
"a",
"b",
BuildType.DEBUG,
BuildMode.UNITY_ON,
BuildOption.NONE,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dunity=ON"
)
def test_generate_cmake_args_a_b_coverage():
assert (
generate_cmake_args(
"a",
"b",
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.COVERAGE,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0"
)
def test_generate_cmake_args_a_b_voidstar():
assert (
generate_cmake_args(
"a",
"b",
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.VOIDSTAR,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dvoidstar=ON"
)
def test_generate_cmake_args_a_b_reference_fee_500():
assert (
generate_cmake_args(
"a",
"b",
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
TestOption.REFERENCE_FEE_500,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -DUNIT_TEST_REFERENCE_FEE=500"
)
def test_generate_cmake_args_a_b_reference_fee_1000():
assert (
generate_cmake_args(
"a",
"b",
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
TestOption.REFERENCE_FEE_1000,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -DUNIT_TEST_REFERENCE_FEE=1000"
)
def test_generate_cmake_args_a_b_multiple():
assert (
generate_cmake_args(
"a",
"b",
BuildType.RELEASE,
BuildMode.UNITY_ON,
BuildOption.VOIDSTAR,
TestOption.REFERENCE_FEE_500,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dassert=ON -Dunity=ON -Dvoidstar=ON -DUNIT_TEST_REFERENCE_FEE=500"
)
def test_generate_cmake_target_linux_debug():
assert generate_cmake_target("linux", BuildType.DEBUG) == "all"
def test_generate_cmake_target_linux_release():
assert generate_cmake_target("linux", BuildType.RELEASE) == "all"
def test_generate_cmake_target_linux_publish():
assert generate_cmake_target("linux", BuildType.PUBLISH) == "install"
def test_generate_cmake_target_macos_debug():
assert generate_cmake_target("macos", BuildType.DEBUG) == "all"
def test_generate_cmake_target_macos_release():
assert generate_cmake_target("macos", BuildType.RELEASE) == "all"
def test_generate_cmake_target_macos_publish():
assert generate_cmake_target("macos", BuildType.PUBLISH) == "install"
def test_generate_cmake_target_windows_debug():
assert generate_cmake_target("windows", BuildType.DEBUG) == "install"
def test_generate_cmake_target_windows_release():
assert generate_cmake_target("windows", BuildType.RELEASE) == "install"
def test_generate_cmake_target_windows_publish():
assert generate_cmake_target("windows", BuildType.PUBLISH) == "install"
def test_generate_enable_options_linux_debug_no_publish():
assert generate_enable_options("linux", BuildType.DEBUG, PublishOption.NONE) == (
True,
False,
False,
)
def test_generate_enable_options_linux_release_package_only():
assert generate_enable_options(
"linux", BuildType.RELEASE, PublishOption.PACKAGE_ONLY
) == (True, True, False)
def test_generate_enable_options_linux_publish_image_only():
assert generate_enable_options(
"linux", BuildType.PUBLISH, PublishOption.IMAGE_ONLY
) == (True, False, True)
def test_generate_enable_options_macos_debug_package_only():
assert generate_enable_options(
"macos", BuildType.DEBUG, PublishOption.PACKAGE_ONLY
) == (True, True, False)
def test_generate_enable_options_macos_release_image_only():
assert generate_enable_options(
"macos", BuildType.RELEASE, PublishOption.IMAGE_ONLY
) == (True, False, True)
def test_generate_enable_options_macos_publish_package_and_image():
assert generate_enable_options(
"macos", BuildType.PUBLISH, PublishOption.PACKAGE_AND_IMAGE
) == (True, True, True)
def test_generate_enable_options_windows_debug_package_and_image():
assert generate_enable_options(
"windows", BuildType.DEBUG, PublishOption.PACKAGE_AND_IMAGE
) == (False, True, True)
def test_generate_enable_options_windows_release_no_publish():
assert generate_enable_options(
"windows", BuildType.RELEASE, PublishOption.NONE
) == (True, False, False)
def test_generate_enable_options_windows_publish_image_only():
assert generate_enable_options(
"windows", BuildType.PUBLISH, PublishOption.IMAGE_ONLY
) == (True, False, True)
def test_generate_image_name_linux():
assert generate_image_name("a", "b", "c", "d", "e") == "ghcr.io/xrplf/ci/a-b:c-d-e"
def test_generate_image_name_linux_raises_on_empty_os_name():
with pytest.raises(ValueError):
generate_image_name("", "b", "c", "d", "e")
def test_generate_image_name_linux_raises_on_empty_os_version():
with pytest.raises(ValueError):
generate_image_name("a", "", "c", "d", "e")
def test_generate_image_name_linux_raises_on_empty_compiler_name():
with pytest.raises(ValueError):
generate_image_name("a", "b", "", "d", "e")
def test_generate_image_name_linux_raises_on_empty_compiler_version():
with pytest.raises(ValueError):
generate_image_name("a", "b", "c", "", "e")
def test_generate_image_name_linux_raises_on_empty_image_sha():
with pytest.raises(ValueError):
generate_image_name("a", "b", "c", "e", "")
def test_generate_image_name_macos():
assert generate_image_name("macos", "", "", "", "") is None
def test_generate_image_name_macos_extra():
assert generate_image_name("macos", "value", "does", "not", "matter") is None
def test_generate_image_name_windows():
assert generate_image_name("windows", "", "", "", "") is None
def test_generate_image_name_windows_extra():
assert generate_image_name("windows", "value", "does", "not", "matter") is None

View File

@@ -1,30 +0,0 @@
import json
from dataclasses import _is_dataclass_instance, asdict
from typing import Any
def is_unique(items: list[Any]) -> bool:
"""Check if a list of dataclass objects contains only unique items.
As the items may not be hashable, we convert them to JSON strings first, and
then check if the list of strings is the same size as the set of strings.
Args:
items: The list of dataclass objects to check.
Returns:
True if the list contains only unique items, False otherwise.
Raises:
TypeError: If any of the items is not a dataclass.
"""
serialized = []
for item in items:
if not _is_dataclass_instance(item):
raise TypeError("items must be a list of dataclasses")
serialized.append(json.dumps(asdict(item)))
return len(serialized) == len(set(serialized))
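The JSON round-trip matters because dataclasses with list fields are unhashable, so a plain set(items) would raise TypeError; a quick illustration, assuming the module is importable as helpers.unique:

from dataclasses import dataclass

from helpers.unique import is_unique

@dataclass
class Example:
    values: list[int]

# set([Example([1, 2])]) would raise TypeError: unhashable type.
assert not is_unique([Example([1, 2]), Example([1, 2])])
assert is_unique([Example([1, 2]), Example([2, 1])])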

View File

@@ -1,40 +0,0 @@
from dataclasses import dataclass
import pytest
from helpers.unique import *
@dataclass
class ExampleInt:
value: int
@dataclass
class ExampleList:
values: list[int]
def test_unique_int():
assert is_unique([ExampleInt(1), ExampleInt(2), ExampleInt(3)])
def test_not_unique_int():
assert not is_unique([ExampleInt(1), ExampleInt(2), ExampleInt(1)])
def test_unique_list():
assert is_unique(
[ExampleList([1, 2, 3]), ExampleList([4, 5, 6]), ExampleList([7, 8, 9])]
)
def test_not_unique_list():
assert not is_unique(
[ExampleList([1, 2, 3]), ExampleList([4, 5, 6]), ExampleList([1, 2, 3])]
)
def test_unique_raises_on_non_dataclass():
with pytest.raises(TypeError):
is_unique([1, 2, 3])

View File

@@ -0,0 +1,212 @@
{
"architecture": [
{
"platform": "linux/amd64",
"runner": ["self-hosted", "Linux", "X64", "heavy"]
},
{
"platform": "linux/arm64",
"runner": ["self-hosted", "Linux", "ARM64", "heavy-arm64"]
}
],
"os": [
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "12",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "13",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "15",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "16",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "17",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "18",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "19",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "20",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "gcc",
"compiler_version": "15",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "clang",
"compiler_version": "20",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "clang",
"compiler_version": "21",
"image_sha": "0525eae"
},
{
"distro_name": "rhel",
"distro_version": "8",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "8",
"compiler_name": "clang",
"compiler_version": "any",
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "gcc",
"compiler_version": "12",
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "gcc",
"compiler_version": "13",
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "clang",
"compiler_version": "any",
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "10",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "10",
"compiler_name": "clang",
"compiler_version": "any",
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "jammy",
"compiler_name": "gcc",
"compiler_version": "12",
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "gcc",
"compiler_version": "13",
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "16",
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "17",
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "18",
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "19",
"image_sha": "e1782cd"
}
],
"build_type": ["Debug", "Release"],
"cmake_args": ["-Dunity=OFF", "-Dunity=ON"]
}
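If generate.py expands this file as a full cross-product of the four arrays, it describes 2 architectures x 28 OS entries x 2 build types x 2 cmake_args = 224 candidate configurations before any minimal/all filtering. A sketch of enumerating them (the file path is assumed relative to the strategy-matrix directory; the filtering itself lives in generate.py, which is not shown in this diff):

import itertools
import json

with open("linux.json") as f:
    cfg = json.load(f)

combos = list(
    itertools.product(
        cfg["architecture"], cfg["os"], cfg["build_type"], cfg["cmake_args"]
    )
)
print(len(combos))  # 2 * 28 * 2 * 2 = 224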

View File

@@ -1,385 +0,0 @@
from helpers.defs import *
from helpers.enums import *
# The default CI image SHAs to use. They are specified per distro group and can
# be overridden for individual distros, which is useful when debugging with a
# locally built CI image. See https://github.com/XRPLF/ci for the images.
DEBIAN_SHA = "sha-ca4517d"
RHEL_SHA = "sha-ca4517d"
UBUNTU_SHA = "sha-84afd81"
# We only build a selection of configurations for the various triggers to reduce
# pipeline runtime. Across all three operating systems we aim to cover all GCC
# and Clang versions, while not duplicating configurations too much. See also
# the README for more details.
# The Debian distros to build configurations for.
#
# We have the following distros available:
# - Debian Bullseye: GCC 12-15
# - Debian Bookworm: GCC 13-15, Clang 16-20
# - Debian Trixie: GCC 14-15, Clang 20-21
DEBIAN_DISTROS = [
Distro(
os_name="debian",
os_version="bullseye",
compiler_name="gcc",
compiler_version="14",
image_sha=DEBIAN_SHA,
specs=[
Spec(
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.PACKAGE_ONLY,
triggers=[Trigger.COMMIT, Trigger.LABEL],
),
Spec(
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.PUBLISH],
publish_option=PublishOption.PACKAGE_AND_IMAGE,
triggers=[Trigger.MERGE],
),
Spec(
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="bullseye",
compiler_name="gcc",
compiler_version="15",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
build_modes=[BuildMode.UNITY_ON],
build_option=BuildOption.COVERAGE,
build_types=[BuildType.DEBUG],
triggers=[Trigger.COMMIT, Trigger.MERGE],
),
Spec(
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="bookworm",
compiler_name="gcc",
compiler_version="15",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="bookworm",
compiler_name="clang",
compiler_version="16",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
build_modes=[BuildMode.UNITY_OFF],
build_option=BuildOption.VOIDSTAR,
build_types=[BuildType.DEBUG],
publish_option=PublishOption.IMAGE_ONLY,
triggers=[Trigger.COMMIT],
),
Spec(
archs=[Arch.LINUX_ARM64],
build_modes=[BuildMode.UNITY_ON],
build_types=[BuildType.RELEASE],
triggers=[Trigger.MERGE],
),
Spec(
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="bookworm",
compiler_name="clang",
compiler_version="17",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="bookworm",
compiler_name="clang",
compiler_version="18",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="bookworm",
compiler_name="clang",
compiler_version="19",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="trixie",
compiler_name="gcc",
compiler_version="15",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="trixie",
compiler_name="clang",
compiler_version="21",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
triggers=[Trigger.MERGE],
),
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
]
# The RHEL distros to build configurations for.
#
# We have the following distros available:
# - RHEL 8: GCC 14, Clang "any"
# - RHEL 9: GCC 12-14, Clang "any"
# - RHEL 10: GCC 14, Clang "any"
RHEL_DISTROS = [
Distro(
os_name="rhel",
os_version="8",
compiler_name="gcc",
compiler_version="14",
image_sha=RHEL_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="rhel",
os_version="8",
compiler_name="clang",
compiler_version="any",
image_sha=RHEL_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="rhel",
os_version="9",
compiler_name="gcc",
compiler_version="12",
image_sha=RHEL_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
build_modes=[BuildMode.UNITY_ON],
build_types=[BuildType.DEBUG],
triggers=[Trigger.COMMIT],
),
Spec(
archs=[Arch.LINUX_AMD64],
build_modes=[BuildMode.UNITY_ON],
build_types=[BuildType.RELEASE],
triggers=[Trigger.MERGE],
),
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="rhel",
os_version="9",
compiler_name="gcc",
compiler_version="13",
image_sha=RHEL_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="rhel",
os_version="10",
compiler_name="clang",
compiler_version="any",
image_sha=RHEL_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
]
# The Ubuntu distros to build configurations for.
#
# We have the following distros available:
# - Ubuntu Jammy (22.04): GCC 12
# - Ubuntu Noble (24.04): GCC 13-14, Clang 16-20
UBUNTU_DISTROS = [
Distro(
os_name="ubuntu",
os_version="jammy",
compiler_name="gcc",
compiler_version="12",
image_sha=UBUNTU_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="ubuntu",
os_version="noble",
compiler_name="gcc",
compiler_version="13",
image_sha=UBUNTU_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
build_modes=[BuildMode.UNITY_ON],
build_types=[BuildType.RELEASE],
triggers=[Trigger.MERGE],
),
Spec(
archs=[Arch.LINUX_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="ubuntu",
os_version="noble",
compiler_name="gcc",
compiler_version="14",
image_sha=UBUNTU_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="ubuntu",
os_version="noble",
compiler_name="clang",
compiler_version="17",
image_sha=UBUNTU_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
triggers=[Trigger.MERGE],
),
Spec(
archs=[Arch.LINUX_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="ubuntu",
os_version="noble",
compiler_name="clang",
compiler_version="18",
image_sha=UBUNTU_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="ubuntu",
os_version="noble",
compiler_name="clang",
compiler_version="19",
image_sha=UBUNTU_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="ubuntu",
os_version="noble",
compiler_name="clang",
compiler_version="20",
image_sha=UBUNTU_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
build_modes=[BuildMode.UNITY_ON],
build_types=[BuildType.DEBUG],
triggers=[Trigger.COMMIT],
),
Spec(
archs=[Arch.LINUX_AMD64],
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.RELEASE],
triggers=[Trigger.MERGE],
),
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
]
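As the comment at the top of this (now removed) file notes, image_sha can be overridden per distro when debugging with a locally built CI image; a hypothetical override would have looked like this, with the SHA value made up:

Distro(
    os_name="debian",
    os_version="bookworm",
    compiler_name="clang",
    compiler_version="16",
    image_sha="sha-0000000",  # hypothetical local image, instead of DEBIAN_SHA
    specs=[Spec(triggers=[Trigger.COMMIT])],
)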

View File

@@ -0,0 +1,22 @@
{
"architecture": [
{
"platform": "macos/arm64",
"runner": ["self-hosted", "macOS", "ARM64", "mac-runner-m1"]
}
],
"os": [
{
"distro_name": "macos",
"distro_version": "",
"compiler_name": "",
"compiler_version": "",
"image_sha": ""
}
],
"build_type": ["Debug", "Release"],
"cmake_args": [
"-Dunity=OFF -DCMAKE_POLICY_VERSION_MINIMUM=3.5",
"-Dunity=ON -DCMAKE_POLICY_VERSION_MINIMUM=3.5"
]
}

View File

@@ -1,20 +0,0 @@
from helpers.defs import *
from helpers.enums import *
DISTROS = [
Distro(
os_name="macos",
specs=[
Spec(
archs=[Arch.MACOS_ARM64],
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
triggers=[Trigger.COMMIT, Trigger.MERGE],
),
Spec(
archs=[Arch.MACOS_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
]

View File

@@ -0,0 +1,19 @@
{
"architecture": [
{
"platform": "windows/amd64",
"runner": ["self-hosted", "Windows", "devbox"]
}
],
"os": [
{
"distro_name": "windows",
"distro_version": "",
"compiler_name": "",
"compiler_version": "",
"image_sha": ""
}
],
"build_type": ["Debug", "Release"],
"cmake_args": ["-Dunity=OFF", "-Dunity=ON"]
}

View File

@@ -1,20 +0,0 @@
from helpers.defs import *
from helpers.enums import *
DISTROS = [
Distro(
os_name="windows",
specs=[
Spec(
archs=[Arch.WINDOWS_AMD64],
build_modes=[BuildMode.UNITY_ON],
build_types=[BuildType.RELEASE],
triggers=[Trigger.COMMIT, Trigger.MERGE],
),
Spec(
archs=[Arch.WINDOWS_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
]

View File

@@ -112,10 +112,9 @@ jobs:
strategy:
fail-fast: false
matrix:
platform: [linux, macos, windows]
os: [linux, macos, windows]
with:
platform: ${{ matrix.platform }}
trigger: commit
os: ${{ matrix.os }}
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

View File

@@ -66,10 +66,9 @@ jobs:
strategy:
fail-fast: ${{ github.event_name == 'merge_group' }}
matrix:
platform: [linux, macos, windows]
os: [linux, macos, windows]
with:
platform: ${{ matrix.platform }}
# The workflow dispatch event uses the same trigger as the schedule event.
trigger: ${{ github.event_name == 'push' && 'merge' || 'schedule' }}
os: ${{ matrix.os }}
strategy_matrix: ${{ github.event_name == 'schedule' && 'all' || 'minimal' }}
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

View File

@@ -3,6 +3,11 @@ name: Build and test configuration
on:
workflow_call:
inputs:
build_only:
description: 'Whether to only build or to build and test the code ("true", "false").'
required: true
type: boolean
build_type:
description: 'The build type to use ("Debug", "Release").'
type: string
@@ -19,21 +24,6 @@ on:
type: string
required: true
enable_tests:
description: "Whether to run the tests."
required: true
type: boolean
enable_package:
description: "Whether to publish a package."
required: true
type: boolean
enable_image:
description: "Whether to publish an image."
required: true
type: boolean
runs_on:
description: Runner to run the job on as a JSON string
required: true
@@ -166,7 +156,7 @@ jobs:
./xrpld --version | grep libvoidstar
- name: Run the separate tests
if: ${{ inputs.enable_tests }}
if: ${{ !inputs.build_only }}
working-directory: ${{ env.BUILD_DIR }}
# Windows locks some of the build files while running tests, and parallel jobs can collide
env:
@@ -179,7 +169,7 @@ jobs:
-j "${PARALLELISM}"
- name: Run the embedded tests
if: ${{ inputs.enable_tests }}
if: ${{ !inputs.build_only }}
working-directory: ${{ runner.os == 'Windows' && format('{0}/{1}', env.BUILD_DIR, inputs.build_type) || env.BUILD_DIR }}
env:
BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
@@ -187,7 +177,7 @@ jobs:
./xrpld --unittest --unittest-jobs "${BUILD_NPROC}"
- name: Debug failure (Linux)
if: ${{ (failure() || cancelled()) && runner.os == 'Linux' && inputs.enable_tests }}
if: ${{ failure() && runner.os == 'Linux' && !inputs.build_only }}
run: |
echo "IPv4 local port range:"
cat /proc/sys/net/ipv4/ip_local_port_range
@@ -195,7 +185,7 @@ jobs:
netstat -an
- name: Prepare coverage report
if: ${{ github.repository_owner == 'XRPLF' && env.ENABLED_COVERAGE == 'true' }}
if: ${{ !inputs.build_only && env.ENABLED_COVERAGE == 'true' }}
working-directory: ${{ env.BUILD_DIR }}
env:
BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
@@ -208,7 +198,7 @@ jobs:
--target coverage
- name: Upload coverage report
if: ${{ github.repository_owner == 'XRPLF' && env.ENABLED_COVERAGE == 'true' }}
if: ${{ github.repository_owner == 'XRPLF' && !inputs.build_only && env.ENABLED_COVERAGE == 'true' }}
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
with:
disable_search: true

View File

@@ -8,14 +8,16 @@ name: Build and test
on:
workflow_call:
inputs:
platform:
description: "The platform to generate the strategy matrix for ('linux', 'macos', 'windows'). If not provided all platforms are used."
required: false
type: string
trigger:
description: "The trigger that caused the workflow to run ('commit', 'label', 'merge', 'schedule')."
os:
description: 'The operating system to use for the build ("linux", "macos", "windows").'
required: true
type: string
strategy_matrix:
# TODO: Support additional strategies, e.g. "ubuntu" for generating all Ubuntu configurations.
description: 'The strategy matrix to use for generating the configurations ("minimal", "all").'
required: false
type: string
default: "minimal"
secrets:
CODECOV_TOKEN:
description: "The Codecov token to use for uploading coverage reports."
@@ -26,8 +28,8 @@ jobs:
generate-matrix:
uses: ./.github/workflows/reusable-strategy-matrix.yml
with:
platform: ${{ inputs.platform }}
trigger: ${{ inputs.trigger }}
os: ${{ inputs.os }}
strategy_matrix: ${{ inputs.strategy_matrix }}
# Build and test the binary for each configuration.
build-test-config:
@@ -39,14 +41,12 @@ jobs:
matrix: ${{ fromJson(needs.generate-matrix.outputs.matrix) }}
max-parallel: 10
with:
build_only: ${{ matrix.build_only }}
build_type: ${{ matrix.build_type }}
cmake_args: ${{ matrix.cmake_args }}
cmake_target: ${{ matrix.cmake_target }}
enable_tests: ${{ matrix.enable_tests }}
enable_package: ${{ matrix.enable_package }}
enable_image: ${{ matrix.enable_image }}
runs_on: ${{ toJson(matrix.runs_on) }}
image: ${{ matrix.image }}
runs_on: ${{ toJSON(matrix.architecture.runner) }}
image: ${{ contains(matrix.architecture.platform, 'linux') && format('ghcr.io/xrplf/ci/{0}-{1}:{2}-{3}-sha-{4}', matrix.os.distro_name, matrix.os.distro_version, matrix.os.compiler_name, matrix.os.compiler_version, matrix.os.image_sha) || '' }}
config_name: ${{ matrix.config_name }}
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
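The inline image expression above is dense; mirrored in plain Python against the JSON config fields, the same computation reads as follows (the function name is illustrative only):

def image_for(architecture: dict, os_entry: dict) -> str:
    # Only Linux configurations run in a container image.
    if "linux" not in architecture["platform"]:
        return ""
    return (
        f"ghcr.io/xrplf/ci/{os_entry['distro_name']}-{os_entry['distro_version']}"
        f":{os_entry['compiler_name']}-{os_entry['compiler_version']}"
        f"-sha-{os_entry['image_sha']}"
    )

assert image_for(
    {"platform": "linux/amd64"},
    {
        "distro_name": "debian",
        "distro_version": "bookworm",
        "compiler_name": "gcc",
        "compiler_version": "12",
        "image_sha": "0525eae",
    },
) == "ghcr.io/xrplf/ci/debian-bookworm:gcc-12-sha-0525eae"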

View File

@@ -29,6 +29,8 @@ jobs:
run: .github/scripts/rename/binary.sh .
- name: Check namespaces
run: .github/scripts/rename/namespace.sh .
- name: Check config name
run: .github/scripts/rename/config.sh .
- name: Check for differences
env:
MESSAGE: |

View File

@@ -3,14 +3,16 @@ name: Generate strategy matrix
on:
workflow_call:
inputs:
platform:
description: "The platform to generate the strategy matrix for ('linux', 'macos', 'windows'). If not provided all platforms are used."
os:
description: 'The operating system to use for the build ("linux", "macos", "windows").'
required: false
type: string
trigger:
description: "The trigger that caused the workflow to run ('commit', 'label', 'merge', 'schedule')."
required: true
strategy_matrix:
# TODO: Support additional strategies, e.g. "ubuntu" for generating all Ubuntu configurations.
description: 'The strategy matrix to use for generating the configurations ("minimal", "all").'
required: false
type: string
default: "minimal"
outputs:
matrix:
description: "The generated strategy matrix."
@@ -38,6 +40,6 @@ jobs:
working-directory: .github/scripts/strategy-matrix
id: generate
env:
PLATFORM: ${{ inputs.platform != '' && format('--platform={0}', inputs.platform) || '' }}
TRIGGER: ${{ format('--trigger={0}', inputs.trigger) }}
run: ./generate.py ${PLATFORM} ${TRIGGER} >> "${GITHUB_OUTPUT}"
GENERATE_CONFIG: ${{ inputs.os != '' && format('--config={0}.json', inputs.os) || '' }}
GENERATE_OPTION: ${{ inputs.strategy_matrix == 'all' && '--all' || '' }}
run: ./generate.py ${GENERATE_OPTION} ${GENERATE_CONFIG} >> "${GITHUB_OUTPUT}"
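For the invocation above to work, generate.py must accept an optional --all flag and an optional --config argument; a hypothetical sketch of that argument surface (the real script is not part of this diff):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--all",
    action="store_true",
    help="Generate all configurations instead of the minimal set.",
)
parser.add_argument(
    "--config",
    default=None,
    help="Limit generation to a single config file, e.g. linux.json.",
)
args = parser.parse_args()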

View File

@@ -19,17 +19,17 @@ on:
branches: [develop]
paths:
# This allows testing changes to the upload workflow in a PR
- ".github/workflows/upload-conan-deps.yml"
- .github/workflows/upload-conan-deps.yml
push:
branches: [develop]
paths:
- ".github/workflows/upload-conan-deps.yml"
- ".github/workflows/reusable-strategy-matrix.yml"
- ".github/actions/build-deps/action.yml"
- ".github/actions/setup-conan/action.yml"
- .github/workflows/upload-conan-deps.yml
- .github/workflows/reusable-strategy-matrix.yml
- .github/actions/build-deps/action.yml
- .github/actions/setup-conan/action.yml
- ".github/scripts/strategy-matrix/**"
- "conanfile.py"
- "conan.lock"
- conanfile.py
- conan.lock
env:
CONAN_REMOTE_NAME: xrplf
@@ -49,8 +49,7 @@ jobs:
generate-matrix:
uses: ./.github/workflows/reusable-strategy-matrix.yml
with:
# The workflow dispatch event uses the same trigger as the schedule event.
trigger: ${{ github.event_name == 'pull_request' && 'commit' || (github.event_name == 'push' && 'merge' || 'schedule') }}
strategy_matrix: ${{ github.event_name == 'pull_request' && 'minimal' || 'all' }}
# Build and upload the dependencies for each configuration.
run-upload-conan-deps:
@@ -60,8 +59,8 @@ jobs:
fail-fast: false
matrix: ${{ fromJson(needs.generate-matrix.outputs.matrix) }}
max-parallel: 10
runs-on: ${{ matrix.runs_on }}
container: ${{ matrix.image }}
runs-on: ${{ matrix.architecture.runner }}
container: ${{ contains(matrix.architecture.platform, 'linux') && format('ghcr.io/xrplf/ci/{0}-{1}:{2}-{3}-sha-{4}', matrix.os.distro_name, matrix.os.distro_version, matrix.os.compiler_name, matrix.os.compiler_version, matrix.os.image_sha) || null }}
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}

.gitignore
View File

@@ -19,7 +19,6 @@ Release/
/tmp/
CMakeSettings.json
CMakeUserPresets.json
__pycache__
# Coverage files.
*.gcno
@@ -36,6 +35,7 @@ gmon.out
# Customized configs.
/rippled.cfg
/xrpld.cfg
/validators.txt
# Locally patched Conan recipes

View File

@@ -1,7 +1,7 @@
#
# Default validators.txt
#
# This file is located in the same folder as your rippled.cfg file
# This file is located in the same folder as your xrpld.cfg file
# and defines which validators your server trusts not to collude.
#
# This file is UTF-8 with DOS, UNIX, or Mac style line endings.

View File

@@ -29,18 +29,18 @@
#
# Purpose
#
# This file documents and provides examples of all rippled server process
# configuration options. When the rippled server instance is launched, it
# This file documents and provides examples of all xrpld server process
# configuration options. When the xrpld server instance is launched, it
# looks for a file with the following name:
#
# rippled.cfg
# xrpld.cfg
#
# For more information on where the rippled server instance searches for the
# For more information on where the xrpld server instance searches for the
# file, visit:
#
# https://xrpl.org/commandline-usage.html#generic-options
#
# This file should be named rippled.cfg. This file is UTF-8 with DOS, UNIX,
# This file should be named xrpld.cfg. This file is UTF-8 with DOS, UNIX,
# or Mac style end of lines. Blank lines and lines beginning with '#' are
# ignored. Undefined sections are reserved. No escapes are currently defined.
#
@@ -89,8 +89,8 @@
#
#
#
# rippled offers various server protocols to clients making inbound
# connections. The listening ports rippled uses are "universal" ports
# xrpld offers various server protocols to clients making inbound
# connections. The listening ports xrpld uses are "universal" ports
# which may be configured to handshake in one or more of the available
# supported protocols. These universal ports simplify administration:
# A single open port can be used for multiple protocols.
@@ -103,7 +103,7 @@
#
# A list of port names and key/value pairs. A port name must start with a
# letter and contain only letters and numbers. The name is not case-sensitive.
# For each name in this list, rippled will look for a configuration file
# For each name in this list, xrpld will look for a configuration file
# section with the same name and use it to create a listening port. The
# name is informational only; the choice of name does not affect the function
# of the listening port.
@@ -134,7 +134,7 @@
# ip = 127.0.0.1
# protocol = http
#
# When rippled is used as a command line client (for example, issuing a
# When xrpld is used as a command line client (for example, issuing a
# server stop command), the first port advertising the http or https
# protocol will be used to make the connection.
#
@@ -175,7 +175,7 @@
# same time. It is possible have both Websockets and Secure Websockets
# together in one port.
#
# NOTE If no ports support the peer protocol, rippled cannot
# NOTE If no ports support the peer protocol, xrpld cannot
# receive incoming peer connections or become a superpeer.
#
# limit = <number>
@@ -194,7 +194,7 @@
# required. IP address restrictions, if any, will be checked in addition
# to the credentials specified here.
#
# When acting in the client role, rippled will supply these credentials
# When acting in the client role, xrpld will supply these credentials
# using HTTP's Basic Authentication headers when making outbound HTTP/S
# requests.
#
@@ -237,7 +237,7 @@
# WS, or WSS protocol interfaces. If administrative commands are
# disabled for a port, these credentials have no effect.
#
# When acting in the client role, rippled will supply these credentials
# When acting in the client role, xrpld will supply these credentials
# in the submitted JSON for any administrative command requests when
# invoking JSON-RPC commands on remote servers.
#
@@ -258,7 +258,7 @@
# resource controls will default to those for non-administrative users.
#
# The secure_gateway IP addresses are intended to represent
# proxies. Since rippled trusts these hosts, they must be
# proxies. Since xrpld trusts these hosts, they must be
# responsible for properly authenticating the remote user.
#
# If some IP addresses are included for both "admin" and
@@ -272,7 +272,7 @@
# Use the specified files when configuring SSL on the port.
#
# NOTE If no files are specified and secure protocols are selected,
# rippled will generate an internal self-signed certificate.
# xrpld will generate an internal self-signed certificate.
#
# The files have these meanings:
#
@@ -297,12 +297,12 @@
# Control the ciphers which the server will support over SSL on the port,
# specified using the OpenSSL "cipher list format".
#
# NOTE If unspecified, rippled will automatically configure a modern
# NOTE If unspecified, xrpld will automatically configure a modern
# cipher suite. This default suite should be widely supported.
#
# You should not modify this string unless you have a specific
# reason and cryptographic expertise. Incorrect modification may
# keep rippled from connecting to other instances of rippled or
# keep xrpld from connecting to other instances of xrpld or
# prevent RPC and WebSocket clients from connecting.
#
# send_queue_limit = [1..65535]
@@ -382,7 +382,7 @@
#-----------------
#
# These settings control security and access attributes of the Peer to Peer
# server section of the rippled process. Peer Protocol implements the
# server section of the xrpld process. Peer Protocol implements the
# Ripple Payment protocol. It is over peer connections that transactions
# and validations are passed from machine to machine, to determine the
# contents of validated ledgers.
@@ -396,7 +396,7 @@
# true - enables compression
# false - disables compression [default].
#
# The rippled server can save bandwidth by compressing its peer-to-peer communications,
# The xrpld server can save bandwidth by compressing its peer-to-peer communications,
# at a cost of greater CPU usage. If you enable link compression,
# the server automatically compresses communications with peer servers
# that also have link compression enabled.
@@ -432,7 +432,7 @@
#
# [ips_fixed]
#
# List of IP addresses or hostnames to which rippled should always attempt to
# List of IP addresses or hostnames to which xrpld should always attempt to
# maintain peer connections. This is useful for manually forming private
# networks, for example to configure a validation server that connects to the
# Ripple network through a public-facing server, or for building a set
@@ -573,7 +573,7 @@
#
# minimum_txn_in_ledger_standalone = <number>
#
# Like minimum_txn_in_ledger when rippled is running in standalone
# Like minimum_txn_in_ledger when xrpld is running in standalone
# mode. Default: 1000.
#
# target_txn_in_ledger = <number>
@@ -710,7 +710,7 @@
#
# [validator_token]
#
# This is an alternative to [validation_seed] that allows rippled to perform
# This is an alternative to [validation_seed] that allows xrpld to perform
# validation without having to store the validator keys on the network
# connected server. The field should contain a single token in the form of a
# base64-encoded blob.
@@ -745,7 +745,7 @@
#
# Specify the file by its name or path.
# Unless an absolute path is specified, it will be considered relative to
# the folder in which the rippled.cfg file is located.
# the folder in which the xrpld.cfg file is located.
#
# Examples:
# /home/ripple/validators.txt
@@ -840,7 +840,7 @@
#
# 0: Disable the ledger replay feature [default]
# 1: Enable the ledger replay feature. With this feature enabled, when
# acquiring a ledger from the network, a rippled node only downloads
# acquiring a ledger from the network, an xrpld node only downloads
# the ledger header and the transactions instead of the whole ledger.
# And the ledger is built by applying the transactions to the parent
# ledger.
@@ -851,7 +851,7 @@
#
#----------------
#
# The rippled server instance uses HTTPS GET requests in a variety of
# The xrpld server instance uses HTTPS GET requests in a variety of
# circumstances, including but not limited to contacting trusted domains to
# fetch information such as mapping an email address to a Ripple Payment
# Network address.
@@ -891,7 +891,7 @@
#
#------------
#
# rippled creates 4 SQLite database to hold bookkeeping information
# xrpld creates 4 SQLite databases to hold bookkeeping information
# about transactions, local credentials, and various other things.
# It also creates the NodeDB, which holds all the objects that
# make up the current and historical ledgers.
@@ -902,7 +902,7 @@
# the performance of the server.
#
# Partial pathnames will be considered relative to the location of
# the rippled.cfg file.
# the xrpld.cfg file.
#
# [node_db] Settings for the Node Database (required)
#
@@ -920,11 +920,11 @@
# type = NuDB
#
# NuDB is a high-performance database written by Ripple Labs and optimized
# for rippled and solid-state drives.
# for xrpld and solid-state drives.
#
# NuDB maintains its high speed regardless of the amount of history
# stored. Online delete may be selected, but is not required. NuDB is
# available on all platforms that rippled runs on.
# available on all platforms that xrpld runs on.
#
# type = RocksDB
#
@@ -1049,7 +1049,7 @@
#
# recovery_wait_seconds
# The online delete process checks periodically
# that rippled is still in sync with the network,
# that xrpld is still in sync with the network,
# and that the validated ledger is less than
# 'age_threshold_seconds' old. If not, then continue
# sleeping for this number of seconds and
@@ -1069,8 +1069,8 @@
# The server creates and maintains 4 to 5 bookkeeping SQLite databases in
# the 'database_path' location. If you omit this configuration setting,
# the server creates a directory called "db" located in the same place as
# your rippled.cfg file.
# Partial pathnames are relative to the location of the rippled executable.
# your xrpld.cfg file.
# Partial pathnames are relative to the location of the xrpld executable.
#
# [sqlite] Tuning settings for the SQLite databases (optional)
#
@@ -1120,7 +1120,7 @@
# The default is "wal", which uses a write-ahead
# log to implement database transactions.
# Alternately, "memory" saves disk I/O, but if
# rippled crashes during a transaction, the
# xrpld crashes during a transaction, the
# database is likely to be corrupted.
# See https://www.sqlite.org/pragma.html#pragma_journal_mode
# for more details about the available options.
@@ -1130,7 +1130,7 @@
# synchronous Valid values: off, normal, full, extra
# The default is "normal", which works well with
# the "wal" journal mode. Alternatively, "off"
# allows rippled to continue as soon as data is
# allows xrpld to continue as soon as data is
# passed to the OS, which can significantly
# increase speed, but risks data corruption if
# the host computer crashes before writing that
@@ -1144,7 +1144,7 @@
# The default is "file", which will use files
# for temporary database tables and indices.
# Alternatively, "memory" may save I/O, but
# rippled does not currently use many, if any,
# xrpld does not currently use many, if any,
# of these temporary objects.
# See https://www.sqlite.org/pragma.html#pragma_temp_store
# for more details about the available options.
@@ -1173,7 +1173,7 @@
#
# These settings are designed to help server administrators diagnose
# problems, and obtain detailed information about the activities being
# performed by the rippled process.
# performed by the xrpld process.
#
#
#
@@ -1190,7 +1190,7 @@
#
# Configuration parameters for the Beast. Insight stats collection module.
#
# Insight is a module that collects information from the areas of rippled
# Insight is a module that collects information from the areas of xrpld
# that have instrumentation. The configuration parameters control where the
# collection metrics are sent. The parameters are expressed as key = value
# pairs with no white space. The main parameter is the choice of server:
@@ -1199,7 +1199,7 @@
#
# Choice of server to send metrics to. Currently the only choice is
# "statsd" which sends UDP packets to a StatsD daemon, which must be
# running while rippled is running. More information on StatsD is
# running while xrpld is running. More information on StatsD is
# available here:
# https://github.com/b/statsd_spec
#
@@ -1209,7 +1209,7 @@
# in the format, n.n.n.n:port.
#
# "prefix" A string prepended to each collected metric. This is used
# to distinguish between different running instances of rippled.
# to distinguish between different running instances of xrpld.
#
# If this section is missing, or the server type is unspecified or unknown,
# statistics are not collected or reported.
@@ -1236,7 +1236,7 @@
#
# Example:
# [perf]
# perf_log=/var/log/rippled/perf.log
# perf_log=/var/log/xrpld/perf.log
# log_interval=2
#
#-------------------------------------------------------------------------------
@@ -1246,7 +1246,7 @@
#----------
#
# The vote settings configure settings for the entire Ripple network.
# While a single instance of rippled cannot unilaterally enforce network-wide
# While a single instance of xrpld cannot unilaterally enforce network-wide
# settings, these choices become part of the instance's vote during the
# consensus process for each voting ledger.
#
@@ -1260,7 +1260,7 @@
# The reference transaction is the simplest form of transaction.
# It represents an XRP payment between two parties.
#
# If this parameter is unspecified, rippled will use an internal
# If this parameter is unspecified, xrpld will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
@@ -1272,7 +1272,7 @@
# account's XRP balance that is at or below the reserve may only be
# spent on transaction fees, and not transferred out of the account.
#
# If this parameter is unspecified, rippled will use an internal
# If this parameter is unspecified, xrpld will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
@@ -1284,7 +1284,7 @@
# each ledger item owned by the account. Ledger items an account may
# own include trust lines, open orders, and tickets.
#
# If this parameter is unspecified, rippled will use an internal
# If this parameter is unspecified, xrpld will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
@@ -1326,7 +1326,7 @@
# tool instead.
#
# This flag has no effect on the "sign" and "sign_for" command line options
# that rippled makes available.
# that xrpld makes available.
#
# The default value of this field is "false"
#
@@ -1405,7 +1405,7 @@
#--------------------
#
# Administrators can use these values as a starting point for configuring
# their instance of rippled, but each value should be checked to make sure
# their instance of xrpld, but each value should be checked to make sure
# it meets the business requirements for the organization.
#
# Server
@@ -1415,7 +1415,7 @@
# "peer"
#
# Peer protocol open to everyone. This is required to accept
# incoming rippled connections. This does not affect automatic
# incoming xrpld connections. This does not affect automatic
# or manual outgoing Peer protocol connections.
#
# "rpc"
@@ -1432,7 +1432,7 @@
#
# ETL commands for Clio. We recommend setting secure_gateway
# in this section to a comma-separated list of the addresses
# of your Clio servers, in order to bypass rippled's rate limiting.
# of your Clio servers, in order to bypass xrpld's rate limiting.
#
# This port is commented out but can be enabled by removing
# the '#' from each corresponding line including the entry under [server]
@@ -1449,8 +1449,8 @@
# NOTE
#
# To accept connections on well known ports such as 80 (HTTP) or
# 443 (HTTPS), most operating systems will require rippled to
# run with administrator privileges, or else rippled will not start.
# 443 (HTTPS), most operating systems will require xrpld to
# run with administrator privileges, or else xrpld will not start.
[server]
port_rpc_admin_local
@@ -1496,7 +1496,7 @@ secure_gateway = 127.0.0.1
#-------------------------------------------------------------------------------
# This is primary persistent datastore for rippled. This includes transaction
# This is the primary persistent datastore for xrpld. This includes transaction
# metadata, account states, and ledger headers. Helpful information can be
# found at https://xrpl.org/capacity-planning.html#node-db-type
# type=NuDB is recommended for non-validators with fast SSDs. Validators or
@@ -1511,19 +1511,19 @@ secure_gateway = 127.0.0.1
# deletion.
[node_db]
type=NuDB
path=/var/lib/rippled/db/nudb
path=/var/lib/xrpld/db/nudb
nudb_block_size=4096
online_delete=512
advisory_delete=0
[database_path]
/var/lib/rippled/db
/var/lib/xrpld/db
# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
/var/log/rippled/debug.log
/var/log/xrpld/debug.log
# To use the XRP test network
# (see https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html),
@@ -1533,7 +1533,7 @@ advisory_delete=0
# File containing trusted validator keys or validator list publishers.
# Unless an absolute path is specified, it will be considered relative to the
# folder in which the rippled.cfg file is located.
# folder in which the xrpld.cfg file is located.
[validators_file]
validators.txt
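To make the reserve discussion above concrete: an account's spendable balance is its balance minus a reserved amount, and the reserved amount grows by one owner reserve for each ledger item the account owns. A small sketch with illustrative numbers (not the network's actual settings):

def spendable_drops(
    balance: int, owner_count: int, account_reserve: int, owner_reserve: int
) -> int:
    # Funds at or below the reserve may only be spent on transaction fees.
    reserved = account_reserve + owner_count * owner_reserve
    return max(balance - reserved, 0)

# Hypothetical values, in drops: 10 XRP base reserve, 2 XRP per owned object.
assert spendable_drops(25_000_000, 3, 10_000_000, 2_000_000) == 9_000_000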

View File

@@ -62,7 +62,7 @@ if (is_root_project AND TARGET xrpld)
message (\"-- Skipping : \$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
endif ()
endmacro()
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/rippled-example.cfg\" etc rippled.cfg)
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/xrpld-example.cfg\" etc xrpld.cfg)
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/validators-example.txt\" etc validators.txt)
")
install(CODE "

View File

@@ -182,12 +182,10 @@ class Xrpl(ConanFile):
libxrpl.libs = [
"xrpl",
"xrpl.libpb",
"ed25519",
"secp256k1",
]
# TODO: Fix the protobufs to include each other relative to
# `include/`, not `include/ripple/proto/`.
libxrpl.includedirs = ["include", "include/ripple/proto"]
# `include/`, not `include/xrpl/proto/`.
libxrpl.includedirs = ["include", "include/xrpl/proto"]
libxrpl.requires = [
"boost::headers",
"boost::chrono",

View File

@@ -40,7 +40,7 @@ public:
using microseconds = std::chrono::microseconds;
/**
* Configuration from [perf] section of rippled.cfg.
* Configuration from [perf] section of xrpld.cfg.
*/
struct Setup
{

View File

@@ -130,7 +130,7 @@ newer versions of RocksDB (TBD).
## Discussion
RocksDBQuickFactory is intended to provide a testbed for comparing potential
rocksdb performance with the existing recommended configuration in rippled.cfg.
rocksdb performance with the existing recommended configuration in xrpld.cfg.
Through various executions and profiling some conclusions are presented below.
- If the write ahead log is enabled, insert speed soon clogs up under load. The

View File

@@ -16,6 +16,7 @@
// Add new amendments to the top of this list.
// Keep it sorted in reverse chronological order.
XRPL_FIX (TokenEscrowV1_1, Supported::yes, VoteBehavior::DefaultNo)
XRPL_FEATURE(LendingProtocol, Supported::no, VoteBehavior::DefaultNo)
XRPL_FEATURE(PermissionDelegationV1_1, Supported::no, VoteBehavior::DefaultNo)
XRPL_FIX (DirectoryLimit, Supported::yes, VoteBehavior::DefaultNo)

View File

@@ -18,8 +18,8 @@ void
ManagerImp::missing_backend()
{
Throw<std::runtime_error>(
"Your rippled.cfg is missing a [node_db] entry, "
"please see the rippled-example.cfg file!");
"Your xrpld.cfg is missing a [node_db] entry, "
"please see the xrpld-example.cfg file!");
}
// We shouldn't rely on global variables for lifetime management because their

View File

@@ -872,6 +872,91 @@ struct EscrowToken_test : public beast::unit_test::suite
}
}
void
testIOUCancelDoApply(FeatureBitset features)
{
testcase("IOU Cancel DoApply");
using namespace test::jtx;
using namespace std::chrono;
// Test: Creator cancels their own escrow after deleting trust line.
// The trust line should be recreated and tokens returned.
{
Env env{*this, features};
auto const baseFee = env.current()->fees().base;
auto const alice = Account("alice");
auto const bob = Account("bob");
auto const gw = Account("gw");
auto const USD = gw["USD"];
// Fund accounts
env.fund(XRP(10'000), alice, bob, gw);
env.close();
// Enable trust line locking for escrow
env(fset(gw, asfAllowTrustLineLocking));
env.close();
// Create trust lines
env.trust(USD(100'000), alice);
env.trust(USD(100'000), bob);
env.close();
// Issue tokens to alice
env(pay(gw, alice, USD(10'000)));
env.close();
// Alice creates IOU escrow to Bob with CancelAfter
auto const seq = env.seq(alice);
env(escrow::create(alice, bob, USD(1'000)),
escrow::finish_time(env.now() + 1s),
escrow::cancel_time(env.now() + 2s),
fee(baseFee));
env.close();
// Verify escrow was created and balance decreased
BEAST_EXPECT(env.balance(alice, USD) == USD(9'000));
// Alice pays back remaining tokens to gateway
env(pay(alice, gw, USD(9'000)));
env.close();
// Alice removes her trust line (balance is 0, so this succeeds)
// The escrowed 1,000 USD is NOT tracked in the trustline balance
env(trust(alice, USD(0)));
env.close();
// Verify trust line is gone
auto const trustLineKey =
keylet::line(alice.id(), gw.id(), USD.currency);
BEAST_EXPECT(!env.current()->exists(trustLineKey));
// Wait for CancelAfter to pass
env.close();
env.close();
// Alice cancels her own escrow
auto const expectedResult =
env.current()->rules().enabled(fixTokenEscrowV1_1)
? ter(tesSUCCESS)
: ter(tefEXCEPTION);
env(escrow::cancel(alice, alice, seq),
fee(baseFee),
expectedResult);
env.close();
if (env.current()->rules().enabled(fixTokenEscrowV1_1))
{
// Verify the escrow was deleted
BEAST_EXPECT(!env.le(keylet::escrow(alice.id(), seq)));
// Verify trust line was recreated and alice got tokens back
BEAST_EXPECT(env.current()->exists(trustLineKey));
BEAST_EXPECT(env.balance(alice, USD) == USD(1'000));
}
}
}
void
testIOUBalances(FeatureBitset features)
{
@@ -2858,6 +2943,80 @@ struct EscrowToken_test : public beast::unit_test::suite
}
}
void
testMPTCancelDoApply(FeatureBitset features)
{
testcase("MPT Cancel DoApply");
using namespace test::jtx;
using namespace std::chrono;
// Test: Creator cancels their own MPT escrow.
// Tokens should be returned and locked amount cleared.
{
Env env{*this, features};
auto const baseFee = env.current()->fees().base;
auto const alice = Account("alice");
auto const bob = Account("bob");
auto const gw = Account("gw");
MPTTester mptGw(env, gw, {.holders = {alice, bob}});
mptGw.create(
{.ownerCount = 1,
.holderCount = 0,
.flags = tfMPTCanEscrow | tfMPTCanTransfer});
mptGw.authorize({.account = alice});
mptGw.authorize({.account = bob});
auto const MPT = mptGw["MPT"];
// Issue tokens to alice
env(pay(gw, alice, MPT(10'000)));
env.close();
// Alice creates MPT escrow to Bob with CancelAfter
auto const seq = env.seq(alice);
env(escrow::create(alice, bob, MPT(1'000)),
escrow::finish_time(env.now() + 1s),
escrow::cancel_time(env.now() + 2s),
fee(baseFee * 150));
env.close();
// Verify escrow was created and locked amount is tracked
BEAST_EXPECT(env.balance(alice, MPT) == MPT(9'000));
BEAST_EXPECT(mptEscrowed(env, alice, MPT) == 1'000);
// Alice pays back remaining tokens to gateway
env(pay(alice, gw, MPT(9'000)));
env.close();
// Verify MPToken still exists with locked amount
BEAST_EXPECT(env.le(keylet::mptoken(MPT.mpt(), alice)));
BEAST_EXPECT(mptEscrowed(env, alice, MPT) == 1'000);
// Wait for CancelAfter to pass
env.close();
env.close();
// Alice cancels her own escrow
env(escrow::cancel(alice, alice, seq),
fee(baseFee),
ter(tesSUCCESS));
env.close();
// Verify the escrow was deleted
BEAST_EXPECT(!env.le(keylet::escrow(alice.id(), seq)));
// Verify alice got tokens back and locked amount is cleared
BEAST_EXPECT(env.balance(alice, MPT) == MPT(1'000));
BEAST_EXPECT(mptEscrowed(env, alice, MPT) == 0);
// Now alice can delete her MPToken
env(pay(alice, gw, MPT(1'000)));
mptGw.authorize({.account = alice, .flags = tfMPTUnauthorize});
env.close();
BEAST_EXPECT(!env.le(keylet::mptoken(MPT.mpt(), alice)));
}
}
void
testMPTBalances(FeatureBitset features)
{
@@ -3887,6 +4046,7 @@ struct EscrowToken_test : public beast::unit_test::suite
testIOUFinishPreclaim(features);
testIOUFinishDoApply(features);
testIOUCancelPreclaim(features);
testIOUCancelDoApply(features);
testIOUBalances(features);
testIOUMetaAndOwnership(features);
testIOURippleState(features);
@@ -3908,6 +4068,7 @@ struct EscrowToken_test : public beast::unit_test::suite
testMPTFinishPreclaim(features);
testMPTFinishDoApply(features);
testMPTCancelPreclaim(features);
testMPTCancelDoApply(features);
testMPTBalances(features);
testMPTMetaAndOwnership(features);
testMPTGateway(features);
@@ -3925,6 +4086,7 @@ public:
using namespace test::jtx;
FeatureBitset const all{testable_amendments()};
testIOUWithFeats(all);
testIOUWithFeats(all - fixTokenEscrowV1_1);
testMPTWithFeats(all);
testMPTWithFeats(all - fixTokenEscrowV1);
}

View File

@@ -5,6 +5,7 @@
#include <xrpld/core/ConfigSections.h>
#include <xrpl/beast/unit_test/suite.h>
#include <xrpl/beast/utility/temp_dir.h>
#include <xrpl/server/Port.h>
#include <boost/filesystem.hpp>
@@ -18,7 +19,7 @@ namespace detail {
std::string
configContents(std::string const& dbPath, std::string const& validatorsFile)
{
static boost::format configContentsTemplate(R"rippleConfig(
static boost::format configContentsTemplate(R"xrpldConfig(
[server]
port_rpc
port_peer
@@ -51,14 +52,14 @@ protocol = wss
[node_size]
medium
# This is primary persistent datastore for rippled. This includes transaction
# This is the primary persistent datastore for xrpld. This includes transaction
# metadata, account states, and ledger headers. Helpful information can be
# found on https://xrpl.org/capacity-planning.html#node-db-type
# delete old ledgers while maintaining at least 2000. Do not require an
# external administrative command to initiate deletion.
[node_db]
type=memory
path=/Users/dummy/ripple/config/db/rocksdb
path=/Users/dummy/xrpld/config/db/rocksdb
open_files=2000
filter_bits=12
cache_mb=256
@@ -72,7 +73,7 @@ file_size_mult=2
# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
/Users/dummy/ripple/config/log/debug.log
/Users/dummy/xrpld/config/log/debug.log
[sntp_servers]
time.windows.com
@@ -97,7 +98,7 @@ r.ripple.com 51235
[sqdb]
backend=sqlite
)rippleConfig");
)xrpldConfig");
std::string dbPathSection =
dbPath.empty() ? "" : "[database_path]\n" + dbPath;
@@ -107,9 +108,9 @@ backend=sqlite
}
/**
Write a rippled config file and remove when done.
Write an xrpld config file and remove when done.
*/
class RippledCfgGuard : public xrpl::detail::FileDirGuard
class FileCfgGuard : public xrpl::detail::FileDirGuard
{
private:
path dataDir_;
@@ -119,17 +120,18 @@ private:
Config config_;
public:
RippledCfgGuard(
FileCfgGuard(
beast::unit_test::suite& test,
path subDir,
path const& dbPath,
path const& configFile,
path const& validatorsFile,
bool useCounter = true,
std::string confContents = "")
: FileDirGuard(
test,
std::move(subDir),
path(Config::configFileName),
configFile,
confContents.empty()
? configContents(dbPath.string(), validatorsFile.string())
: confContents,
@@ -171,7 +173,7 @@ public:
return fileExists();
}
~RippledCfgGuard()
~FileCfgGuard()
{
try
{
@@ -182,7 +184,7 @@ public:
catch (std::exception& e)
{
// if we throw here, just let it die.
test_.log << "Error in ~RippledCfgGuard: " << e.what() << std::endl;
test_.log << "Error in ~FileCfgGuard: " << e.what() << std::endl;
};
}
};
@@ -190,7 +192,7 @@ public:
std::string
valFileContents()
{
std::string configContents(R"rippleConfig(
std::string configContents(R"xrpldConfig(
[validators]
n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7
n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj
@@ -204,8 +206,8 @@ nHBu9PTL9dn2GuZtdW4U2WzBwffyX9qsQCd9CNU4Z5YG3PQfViM8
nHUPDdcdb2Y5DZAJne4c2iabFuAP3F34xZUgYQT2NH7qfkdapgnz
[validator_list_sites]
recommendedripplevalidators.com
moreripplevalidators.net
recommendedxrplvalidators.com
morexrplvalidators.net
[validator_list_keys]
03E74EE14CB525AFBB9F1B7D86CD58ECC4B91452294B42AB4E78F260BD905C091D
@@ -213,7 +215,7 @@ moreripplevalidators.net
[validator_list_threshold]
2
)rippleConfig");
)xrpldConfig");
return configContents;
}
@@ -270,7 +272,7 @@ public:
Config c;
std::string toLoad(R"rippleConfig(
std::string toLoad(R"xrpldConfig(
[server]
port_rpc
port_peer
@@ -278,7 +280,7 @@ port_wss_admin
[ssl_verify]
0
)rippleConfig");
)xrpldConfig");
c.loadFromString(toLoad);
@@ -291,6 +293,126 @@ port_wss_admin
BEAST_EXPECT(c.legacy("not_in_file") == "new_value");
}
void
testConfigFile()
{
testcase("config_file");
using namespace boost::filesystem;
auto const cwd = current_path();
// Test both config file names.
char const* configFiles[] = {
Config::configFileName, Config::configLegacyName};
// Config file in current directory.
for (auto const& configFile : configFiles)
{
// Use a temporary directory for testing.
beast::temp_dir td;
current_path(td.path());
path const f = td.file(configFile);
std::ofstream o(f.string());
o << detail::configContents("", "");
o.close();
// Load the config file from the current directory and verify it.
Config c;
c.setup("", true, false, true);
BEAST_EXPECT(c.section(SECTION_DEBUG_LOGFILE).values().size() == 1);
BEAST_EXPECT(
c.section(SECTION_DEBUG_LOGFILE).values()[0] ==
"/Users/dummy/xrpld/config/log/debug.log");
}
// Config file in HOME or XDG_CONFIG_HOME directory.
#if BOOST_OS_LINUX || BOOST_OS_MACOS
for (auto const& configFile : configFiles)
{
// Point the current working directory to a temporary directory, so
// we don't pick up an actual config file from the repository root.
beast::temp_dir td;
current_path(td.path());
// The XDG config directory is set: the config file must be in a
// subdirectory named after the system.
{
beast::temp_dir tc;
// Set the HOME and XDG_CONFIG_HOME environment variables. The
// HOME variable is not used when XDG_CONFIG_HOME is set, but
// must still be set.
char const* h = getenv("HOME");
setenv("HOME", tc.path().c_str(), 1);
char const* x = getenv("XDG_CONFIG_HOME");
setenv("XDG_CONFIG_HOME", tc.path().c_str(), 1);
// Create the config file in '${XDG_CONFIG_HOME}/[systemName]'.
path p = tc.file(systemName());
create_directory(p);
p = tc.file(systemName() + "/" + configFile);
std::ofstream o(p.string());
o << detail::configContents("", "");
o.close();
// Load the config file from the config directory and verify it.
Config c;
c.setup("", true, false, true);
BEAST_EXPECT(
c.section(SECTION_DEBUG_LOGFILE).values().size() == 1);
BEAST_EXPECT(
c.section(SECTION_DEBUG_LOGFILE).values()[0] ==
"/Users/dummy/xrpld/config/log/debug.log");
// Restore the environment variables.
h ? setenv("HOME", h, 1) : unsetenv("HOME");
x ? setenv("XDG_CONFIG_HOME", x, 1)
: unsetenv("XDG_CONFIG_HOME");
}
// The XDG config directory is not set: the config file must be in a
// subdirectory named .config followed by the system name.
{
beast::temp_dir tc;
// Set only the HOME environment variable.
char const* h = getenv("HOME");
setenv("HOME", tc.path().c_str(), 1);
char const* x = getenv("XDG_CONFIG_HOME");
unsetenv("XDG_CONFIG_HOME");
// Create the config file in '${HOME}/.config/[systemName]'.
std::string s = ".config";
path p = tc.file(s);
create_directory(p);
s += "/" + systemName();
p = tc.file(s);
create_directory(p);
p = tc.file(s + "/" + configFile);
std::ofstream o(p.string());
o << detail::configContents("", "");
o.close();
// Load the config file from the config directory and verify it.
Config c;
c.setup("", true, false, true);
BEAST_EXPECT(
c.section(SECTION_DEBUG_LOGFILE).values().size() == 1);
BEAST_EXPECT(
c.section(SECTION_DEBUG_LOGFILE).values()[0] ==
"/Users/dummy/xrpld/config/log/debug.log");
// Restore the environment variables.
h ? setenv("HOME", h, 1) : unsetenv("HOME");
if (x)
setenv("XDG_CONFIG_HOME", x, 1);
}
}
#endif
// Restore the current working directory.
current_path(cwd);
}
void
testDbPath()
{
testcase("database_path");
@@ -326,11 +448,16 @@ port_wss_admin
{
// read from file absolute path
auto const cwd = current_path();
xrpl::detail::DirGuard const g0(*this, "test_db");
detail::DirGuard const g0(*this, "test_db");
path const dataDirRel("test_data_dir");
path const dataDirAbs(cwd / g0.subdir() / dataDirRel);
detail::RippledCfgGuard const g(
*this, g0.subdir(), dataDirAbs, "", false);
detail::FileCfgGuard const g(
*this,
g0.subdir(),
dataDirAbs,
Config::configFileName,
"",
false);
auto const& c(g.config());
BEAST_EXPECT(g.dataDirExists());
BEAST_EXPECT(g.configFileExists());
@@ -339,7 +466,8 @@ port_wss_admin
{
// read from file relative path
std::string const dbPath("my_db");
detail::RippledCfgGuard const g(*this, "test_db", dbPath, "");
detail::FileCfgGuard const g(
*this, "test_db", dbPath, Config::configFileName, "");
auto const& c(g.config());
std::string const nativeDbPath = absolute(path(dbPath)).string();
BEAST_EXPECT(g.dataDirExists());
@@ -348,7 +476,8 @@ port_wss_admin
}
{
// read from file no path
detail::RippledCfgGuard const g(*this, "test_db", "", "");
detail::FileCfgGuard const g(
*this, "test_db", "", Config::configFileName, "");
auto const& c(g.config());
std::string const nativeDbPath =
absolute(g.subdir() / path(Config::databaseDirName)).string();
@@ -378,13 +507,13 @@ port_wss_admin
{
Config c;
static boost::format configTemplate(R"rippleConfig(
static boost::format configTemplate(R"xrpldConfig(
[validation_seed]
%1%
[validator_token]
%2%
)rippleConfig");
)xrpldConfig");
std::string error;
auto const expectedError =
"Cannot have both [validation_seed] "
@@ -410,10 +539,10 @@ port_wss_admin
Config c;
try
{
c.loadFromString(R"rippleConfig(
c.loadFromString(R"xrpldConfig(
[network_id]
main
)rippleConfig");
)xrpldConfig");
}
catch (std::runtime_error& e)
{
@@ -425,8 +554,8 @@ main
try
{
c.loadFromString(R"rippleConfig(
)rippleConfig");
c.loadFromString(R"xrpldConfig(
)xrpldConfig");
}
catch (std::runtime_error& e)
{
@@ -438,10 +567,10 @@ main
try
{
c.loadFromString(R"rippleConfig(
c.loadFromString(R"xrpldConfig(
[network_id]
255
)rippleConfig");
)xrpldConfig");
}
catch (std::runtime_error& e)
{
@@ -453,10 +582,10 @@ main
try
{
c.loadFromString(R"rippleConfig(
c.loadFromString(R"xrpldConfig(
[network_id]
10000
)rippleConfig");
)xrpldConfig");
}
catch (std::runtime_error& e)
{
@@ -516,7 +645,7 @@ main
{
// load validators from config into single section
Config c;
std::string toLoad(R"rippleConfig(
std::string toLoad(R"xrpldConfig(
[validators]
n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7
n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj
@@ -525,7 +654,7 @@ n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C
[validator_keys]
nHUhG1PgAG8H8myUENypM35JgfqXAKNQvRVVAFDRzJrny5eZN8d5
nHBu9PTL9dn2GuZtdW4U2WzBwffyX9qsQCd9CNU4Z5YG3PQfViM8
)rippleConfig");
)xrpldConfig");
c.loadFromString(toLoad);
BEAST_EXPECT(c.legacy("validators_file").empty());
BEAST_EXPECT(c.section(SECTION_VALIDATORS).values().size() == 5);
@@ -534,9 +663,9 @@ nHBu9PTL9dn2GuZtdW4U2WzBwffyX9qsQCd9CNU4Z5YG3PQfViM8
{
// load validator list sites and keys from config
Config c;
std::string toLoad(R"rippleConfig(
std::string toLoad(R"xrpldConfig(
[validator_list_sites]
ripplevalidators.com
xrplvalidators.com
trustthesevalidators.gov
[validator_list_keys]
@@ -544,13 +673,13 @@ trustthesevalidators.gov
[validator_list_threshold]
1
)rippleConfig");
)xrpldConfig");
c.loadFromString(toLoad);
BEAST_EXPECT(
c.section(SECTION_VALIDATOR_LIST_SITES).values().size() == 2);
BEAST_EXPECT(
c.section(SECTION_VALIDATOR_LIST_SITES).values()[0] ==
"ripplevalidators.com");
"xrplvalidators.com");
BEAST_EXPECT(
c.section(SECTION_VALIDATOR_LIST_SITES).values()[1] ==
"trustthesevalidators.gov");
@@ -570,9 +699,9 @@ trustthesevalidators.gov
{
// load validator list sites and keys from config
Config c;
std::string toLoad(R"rippleConfig(
std::string toLoad(R"xrpldConfig(
[validator_list_sites]
ripplevalidators.com
xrplvalidators.com
trustthesevalidators.gov
[validator_list_keys]
@@ -580,13 +709,13 @@ trustthesevalidators.gov
[validator_list_threshold]
0
)rippleConfig");
)xrpldConfig");
c.loadFromString(toLoad);
BEAST_EXPECT(
c.section(SECTION_VALIDATOR_LIST_SITES).values().size() == 2);
BEAST_EXPECT(
c.section(SECTION_VALIDATOR_LIST_SITES).values()[0] ==
"ripplevalidators.com");
"xrplvalidators.com");
BEAST_EXPECT(
c.section(SECTION_VALIDATOR_LIST_SITES).values()[1] ==
"trustthesevalidators.gov");
@@ -607,9 +736,9 @@ trustthesevalidators.gov
// load should throw if [validator_list_threshold] is greater than
// the number of [validator_list_keys]
Config c;
std::string toLoad(R"rippleConfig(
std::string toLoad(R"xrpldConfig(
[validator_list_sites]
ripplevalidators.com
xrplvalidators.com
trustthesevalidators.gov
[validator_list_keys]
@@ -617,7 +746,7 @@ trustthesevalidators.gov
[validator_list_threshold]
2
)rippleConfig");
)xrpldConfig");
std::string error;
auto const expectedError =
"Value in config section [validator_list_threshold] exceeds "
@@ -636,9 +765,9 @@ trustthesevalidators.gov
{
// load should throw if [validator_list_threshold] is malformed
Config c;
std::string toLoad(R"rippleConfig(
std::string toLoad(R"xrpldConfig(
[validator_list_sites]
ripplevalidators.com
xrplvalidators.com
trustthesevalidators.gov
[validator_list_keys]
@@ -646,7 +775,7 @@ trustthesevalidators.gov
[validator_list_threshold]
value = 2
)rippleConfig");
)xrpldConfig");
std::string error;
auto const expectedError =
"Config section [validator_list_threshold] should contain "
@@ -665,9 +794,9 @@ value = 2
{
// load should throw if [validator_list_threshold] is negative
Config c;
std::string toLoad(R"rippleConfig(
std::string toLoad(R"xrpldConfig(
[validator_list_sites]
ripplevalidators.com
xrplvalidators.com
trustthesevalidators.gov
[validator_list_keys]
@@ -675,7 +804,7 @@ trustthesevalidators.gov
[validator_list_threshold]
-1
)rippleConfig");
)xrpldConfig");
bool error = false;
try
{
@@ -692,11 +821,11 @@ trustthesevalidators.gov
// load should throw if [validator_list_sites] is configured but
// [validator_list_keys] is not
Config c;
std::string toLoad(R"rippleConfig(
std::string toLoad(R"xrpldConfig(
[validator_list_sites]
ripplevalidators.com
xrplvalidators.com
trustthesevalidators.gov
)rippleConfig");
)xrpldConfig");
std::string error;
auto const expectedError =
"[validator_list_keys] config section is missing";
@@ -736,8 +865,13 @@ trustthesevalidators.gov
std::string const valFileName = "validators.txt";
detail::ValidatorsTxtGuard const vtg(
*this, "test_cfg", valFileName);
detail::RippledCfgGuard const rcg(
*this, vtg.subdir(), "", valFileName, false);
detail::FileCfgGuard const rcg(
*this,
vtg.subdir(),
"",
Config::configFileName,
valFileName,
false);
BEAST_EXPECT(vtg.validatorsFileExists());
BEAST_EXPECT(rcg.configFileExists());
auto const& c(rcg.config());
@@ -758,8 +892,13 @@ trustthesevalidators.gov
detail::ValidatorsTxtGuard const vtg(
*this, "test_cfg", "validators.txt");
auto const valFilePath = ".." / vtg.subdir() / "validators.txt";
detail::RippledCfgGuard const rcg(
*this, vtg.subdir(), "", valFilePath, false);
detail::FileCfgGuard const rcg(
*this,
vtg.subdir(),
"",
Config::configFileName,
valFilePath,
false);
BEAST_EXPECT(vtg.validatorsFileExists());
BEAST_EXPECT(rcg.configFileExists());
auto const& c(rcg.config());
@@ -778,8 +917,8 @@ trustthesevalidators.gov
// load from validators file in default location
detail::ValidatorsTxtGuard const vtg(
*this, "test_cfg", "validators.txt");
detail::RippledCfgGuard const rcg(
*this, vtg.subdir(), "", "", false);
detail::FileCfgGuard const rcg(
*this, vtg.subdir(), "", Config::configFileName, "", false);
BEAST_EXPECT(vtg.validatorsFileExists());
BEAST_EXPECT(rcg.configFileExists());
auto const& c(rcg.config());
@@ -803,8 +942,13 @@ trustthesevalidators.gov
detail::ValidatorsTxtGuard const vtgDefault(
*this, vtg.subdir(), "validators.txt", false);
BEAST_EXPECT(vtgDefault.validatorsFileExists());
detail::RippledCfgGuard const rcg(
*this, vtg.subdir(), "", vtg.validatorsFile(), false);
detail::FileCfgGuard const rcg(
*this,
vtg.subdir(),
"",
Config::configFileName,
vtg.validatorsFile(),
false);
BEAST_EXPECT(rcg.configFileExists());
auto const& c(rcg.config());
BEAST_EXPECT(c.legacy("validators_file") == vtg.validatorsFile());
@@ -821,7 +965,7 @@ trustthesevalidators.gov
{
// load validators from both config and validators file
boost::format cc(R"rippleConfig(
boost::format cc(R"xrpldConfig(
[validators_file]
%1%
@@ -837,12 +981,12 @@ nHB1X37qrniVugfQcuBTAjswphC1drx7QjFFojJPZwKHHnt8kU7v
nHUkAWDR4cB8AgPg7VXMX6et8xRTQb2KJfgv1aBEXozwrawRKgMB
[validator_list_sites]
ripplevalidators.com
xrplvalidators.com
trustthesevalidators.gov
[validator_list_keys]
021A99A537FDEBC34E4FCA03B39BEADD04299BB19E85097EC92B15A3518801E566
)rippleConfig");
)xrpldConfig");
detail::ValidatorsTxtGuard const vtg(
*this, "test_cfg", "validators.cfg");
BEAST_EXPECT(vtg.validatorsFileExists());
@@ -861,14 +1005,14 @@ trustthesevalidators.gov
}
{
// load should throw if [validator_list_threshold] is present both
// in rippled cfg and validators file
boost::format cc(R"rippleConfig(
// in xrpld.cfg and validators file
boost::format cc(R"xrpldConfig(
[validators_file]
%1%
[validator_list_threshold]
1
)rippleConfig");
)xrpldConfig");
std::string error;
detail::ValidatorsTxtGuard const vtg(
*this, "test_cfg", "validators.cfg");
@@ -890,7 +1034,7 @@ trustthesevalidators.gov
}
{
// load should throw if [validators], [validator_keys] and
// [validator_list_keys] are missing from rippled cfg and
// [validator_list_keys] are missing from xrpld.cfg and
// validators file
Config c;
boost::format cc("[validators_file]\n%1%\n");
@@ -920,9 +1064,13 @@ trustthesevalidators.gov
void
testSetup(bool explicitPath)
{
detail::RippledCfgGuard const cfg(
*this, "testSetup", explicitPath ? "test_db" : "", "");
/* RippledCfgGuard has a Config object that gets loaded on
detail::FileCfgGuard const cfg(
*this,
"testSetup",
explicitPath ? "test_db" : "",
Config::configFileName,
"");
/* FileCfgGuard has a Config object that gets loaded on
construction, but Config::setup is not reentrant, so we
need a fresh config for every test case, so ignore it.
*/
@@ -1039,7 +1187,8 @@ trustthesevalidators.gov
void
testPort()
{
detail::RippledCfgGuard const cfg(*this, "testPort", "", "");
detail::FileCfgGuard const cfg(
*this, "testPort", "", Config::configFileName, "");
auto const& conf = cfg.config();
if (!BEAST_EXPECT(conf.exists("port_rpc")))
return;
@@ -1065,8 +1214,14 @@ trustthesevalidators.gov
try
{
detail::RippledCfgGuard const cfg(
*this, "testPort", "", "", true, contents);
detail::FileCfgGuard const cfg(
*this,
"testPort",
"",
Config::configFileName,
"",
true,
contents);
BEAST_EXPECT(false);
}
catch (std::exception const& ex)
@@ -1377,9 +1532,9 @@ r.ripple.com:51235
for (auto& [unit, sec, val, shouldPass] : units)
{
Config c;
std::string toLoad(R"rippleConfig(
std::string toLoad(R"xrpldConfig(
[amendment_majority_time]
)rippleConfig");
)xrpldConfig");
toLoad += std::to_string(val) + space + unit;
space = space == "" ? " " : "";
@@ -1480,6 +1635,7 @@ r.ripple.com:51235
run() override
{
testLegacy();
testConfigFile();
testDbPath();
testValidatorKeys();
testValidatorsFile();

View File

@@ -30,6 +30,7 @@ enum class FieldType {
CurrencyField,
HashField,
HashOrObjectField,
IssueField,
ObjectField,
StringField,
TwoAccountArrayField,
@@ -40,6 +41,8 @@ enum class FieldType {
std::vector<std::pair<Json::StaticString, FieldType>> mappings{
{jss::account, FieldType::AccountField},
{jss::accounts, FieldType::TwoAccountArrayField},
{jss::asset, FieldType::IssueField},
{jss::asset2, FieldType::IssueField},
{jss::authorize, FieldType::AccountField},
{jss::authorized, FieldType::AccountField},
{jss::credential_type, FieldType::BlobField},
@@ -74,24 +77,26 @@ getTypeName(FieldType typeID)
{
switch (typeID)
{
case FieldType::UInt32Field:
return "number";
case FieldType::UInt64Field:
return "number";
case FieldType::HashField:
return "hex string";
case FieldType::AccountField:
return "AccountID";
case FieldType::ArrayField:
return "array";
case FieldType::BlobField:
return "hex string";
case FieldType::CurrencyField:
return "Currency";
case FieldType::ArrayField:
return "array";
case FieldType::HashField:
return "hex string";
case FieldType::HashOrObjectField:
return "hex string or object";
case FieldType::IssueField:
return "Issue";
case FieldType::TwoAccountArrayField:
return "length-2 array of Accounts";
case FieldType::UInt32Field:
return "number";
case FieldType::UInt64Field:
return "number";
default:
Throw<std::runtime_error>(
"unknown type " + std::to_string(static_cast<uint8_t>(typeID)));
@@ -192,34 +197,37 @@ class LedgerEntry_test : public beast::unit_test::suite
return values;
};
static auto const& badUInt32Values = remove({2, 3});
static auto const& badUInt64Values = remove({2, 3});
static auto const& badHashValues = remove({2, 3, 7, 8, 16});
static auto const& badAccountValues = remove({12});
static auto const& badArrayValues = remove({17, 20});
static auto const& badBlobValues = remove({3, 7, 8, 16});
static auto const& badCurrencyValues = remove({14});
static auto const& badArrayValues = remove({17, 20});
static auto const& badHashValues = remove({2, 3, 7, 8, 16});
static auto const& badIndexValues = remove({12, 16, 18, 19});
static auto const& badUInt32Values = remove({2, 3});
static auto const& badUInt64Values = remove({2, 3});
static auto const& badIssueValues = remove({});
switch (fieldType)
{
case FieldType::UInt32Field:
return badUInt32Values;
case FieldType::UInt64Field:
return badUInt64Values;
case FieldType::HashField:
return badHashValues;
case FieldType::AccountField:
return badAccountValues;
case FieldType::ArrayField:
case FieldType::TwoAccountArrayField:
return badArrayValues;
case FieldType::BlobField:
return badBlobValues;
case FieldType::CurrencyField:
return badCurrencyValues;
case FieldType::ArrayField:
case FieldType::TwoAccountArrayField:
return badArrayValues;
case FieldType::HashField:
return badHashValues;
case FieldType::HashOrObjectField:
return badIndexValues;
case FieldType::IssueField:
return badIssueValues;
case FieldType::UInt32Field:
return badUInt32Values;
case FieldType::UInt64Field:
return badUInt64Values;
default:
Throw<std::runtime_error>(
"unknown type " +
@@ -236,30 +244,37 @@ class LedgerEntry_test : public beast::unit_test::suite
arr[1u] = "r4MrUGTdB57duTnRs6KbsRGQXgkseGb1b5";
return arr;
}();
static Json::Value const issueObject = []() {
Json::Value arr(Json::objectValue);
arr[jss::currency] = "XRP";
return arr;
}();
auto const typeID = getFieldType(fieldName);
switch (typeID)
{
case FieldType::UInt32Field:
return 1;
case FieldType::UInt64Field:
return 1;
case FieldType::HashField:
return "5233D68B4D44388F98559DE42903767803EFA7C1F8D01413FC16EE6"
"B01403D6D";
case FieldType::AccountField:
return "r4MrUGTdB57duTnRs6KbsRGQXgkseGb1b5";
case FieldType::ArrayField:
return Json::arrayValue;
case FieldType::BlobField:
return "ABCDEF";
case FieldType::CurrencyField:
return "USD";
case FieldType::ArrayField:
return Json::arrayValue;
case FieldType::HashField:
return "5233D68B4D44388F98559DE42903767803EFA7C1F8D01413FC16EE6"
"B01403D6D";
case FieldType::IssueField:
return issueObject;
case FieldType::HashOrObjectField:
return "5233D68B4D44388F98559DE42903767803EFA7C1F8D01413FC16EE6"
"B01403D6D";
case FieldType::TwoAccountArrayField:
return twoAccountArray;
case FieldType::UInt32Field:
return 1;
case FieldType::UInt64Field:
return 1;
default:
Throw<std::runtime_error>(
"unknown type " +
@@ -444,7 +459,7 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryInvalid()
testInvalid()
{
testcase("Invalid requests");
using namespace test::jtx;
@@ -526,7 +541,7 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryAccountRoot()
testAccountRoot()
{
testcase("AccountRoot");
using namespace test::jtx;
@@ -632,7 +647,147 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryCheck()
testAmendments()
{
testcase("Amendments");
using namespace test::jtx;
Env env{*this};
// positive test
{
Keylet const keylet = keylet::amendments();
// easier to hack an object into the ledger than generate it
// legitimately
{
auto const amendments = [&](OpenView& view,
beast::Journal) -> bool {
auto const sle = std::make_shared<SLE>(keylet);
// Create Amendments vector (enabled amendments)
std::vector<uint256> enabledAmendments;
enabledAmendments.push_back(
uint256::fromVoid("42426C4D4F1009EE67080A9B7965B44656D7"
"714D104A72F9B4369F97ABF044EE"));
enabledAmendments.push_back(
uint256::fromVoid("4C97EBA926031A7CF7D7B36FDE3ED66DDA54"
"21192D63DE53FFB46E43B9DC8373"));
enabledAmendments.push_back(
uint256::fromVoid("03BDC0099C4E14163ADA272C1B6F6FABB448"
"CC3E51F522F978041E4B57D9158C"));
enabledAmendments.push_back(
uint256::fromVoid("35291ADD2D79EB6991343BDA0912269C817D"
"0F094B02226C1C14AD2858962ED4"));
sle->setFieldV256(
sfAmendments, STVector256(enabledAmendments));
// Create Majorities array
STArray majorities;
auto majority1 = STObject::makeInnerObject(sfMajority);
majority1.setFieldH256(
sfAmendment,
uint256::fromVoid("7BB62DC13EC72B775091E9C71BF8CF97E122"
"647693B50C5E87A80DFD6FCFAC50"));
majority1.setFieldU32(sfCloseTime, 779561310);
majorities.push_back(std::move(majority1));
auto majority2 = STObject::makeInnerObject(sfMajority);
majority2.setFieldH256(
sfAmendment,
uint256::fromVoid("755C971C29971C9F20C6F080F2ED96F87884"
"E40AD19554A5EBECDCEC8A1F77FE"));
majority2.setFieldU32(sfCloseTime, 779561310);
majorities.push_back(std::move(majority2));
sle->setFieldArray(sfMajorities, majorities);
view.rawInsert(sle);
return true;
};
env.app().openLedger().modify(amendments);
}
Json::Value jvParams;
jvParams[jss::amendments] = to_string(keylet.key);
Json::Value const jrr = env.rpc(
"json", "ledger_entry", to_string(jvParams))[jss::result];
BEAST_EXPECT(
jrr[jss::node][sfLedgerEntryType.jsonName] == jss::Amendments);
}
// negative tests
runLedgerEntryTest(env, jss::amendments);
}
void
testAMM()
{
testcase("AMM");
using namespace test::jtx;
Env env{*this};
// positive test
Account const alice{"alice"};
env.fund(XRP(10000), alice);
env.close();
AMM amm(env, alice, XRP(10), alice["USD"](1000));
env.close();
{
Json::Value jvParams;
jvParams[jss::amm] = to_string(amm.ammID());
auto const result =
env.rpc("json", "ledger_entry", to_string(jvParams));
BEAST_EXPECT(
result.isObject() && result.isMember(jss::result) &&
!result[jss::result].isMember(jss::error) &&
result[jss::result].isMember(jss::node) &&
result[jss::result][jss::node].isMember(
sfLedgerEntryType.jsonName) &&
result[jss::result][jss::node][sfLedgerEntryType.jsonName] ==
jss::AMM);
}
{
Json::Value jvParams;
Json::Value ammParams(Json::objectValue);
{
Json::Value obj(Json::objectValue);
obj[jss::currency] = "XRP";
ammParams[jss::asset] = obj;
}
{
Json::Value obj(Json::objectValue);
obj[jss::currency] = "USD";
obj[jss::issuer] = alice.human();
ammParams[jss::asset2] = obj;
}
jvParams[jss::amm] = ammParams;
auto const result =
env.rpc("json", "ledger_entry", to_string(jvParams));
BEAST_EXPECT(
result.isObject() && result.isMember(jss::result) &&
!result[jss::result].isMember(jss::error) &&
result[jss::result].isMember(jss::node) &&
result[jss::result][jss::node].isMember(
sfLedgerEntryType.jsonName) &&
result[jss::result][jss::node][sfLedgerEntryType.jsonName] ==
jss::AMM);
}
// negative tests
runLedgerEntryTest(
env,
jss::amm,
{
{jss::asset, "malformedRequest"},
{jss::asset2, "malformedRequest"},
});
}
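For reference, the object form exercised in the second block corresponds to a wire request shaped roughly as follows (a sketch; the issuer value is whatever alice.human() yields at run time):

// Shape of the asset-pair ledger_entry lookup built above:
char const* request = R"json({
    "amm": {
        "asset":  { "currency": "XRP" },
        "asset2": { "currency": "USD", "issuer": "<alice's address>" }
    }
})json";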
void
testCheck()
{
testcase("Check");
using namespace test::jtx;
@@ -684,7 +839,7 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryCredentials()
testCredentials()
{
testcase("Credentials");
@@ -752,7 +907,7 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryDelegate()
testDelegate()
{
testcase("Delegate");
@@ -807,7 +962,7 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryDepositPreauth()
testDepositPreauth()
{
testcase("Deposit Preauth");
@@ -868,7 +1023,7 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryDepositPreauthCred()
testDepositPreauthCred()
{
testcase("Deposit Preauth with credentials");
@@ -1149,7 +1304,7 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryDirectory()
testDirectory()
{
testcase("Directory");
using namespace test::jtx;
@@ -1303,7 +1458,7 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryEscrow()
testEscrow()
{
testcase("Escrow");
using namespace test::jtx;
@@ -1365,7 +1520,177 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryOffer()
testFeeSettings()
{
testcase("Fee Settings");
using namespace test::jtx;
Env env{*this};
// positive test
{
Keylet const keylet = keylet::fees();
Json::Value jvParams;
jvParams[jss::fee] = to_string(keylet.key);
Json::Value const jrr = env.rpc(
"json", "ledger_entry", to_string(jvParams))[jss::result];
BEAST_EXPECT(
jrr[jss::node][sfLedgerEntryType.jsonName] == jss::FeeSettings);
}
// negative tests
runLedgerEntryTest(env, jss::fee);
}
void
testLedgerHashes()
{
testcase("Ledger Hashes");
using namespace test::jtx;
Env env{*this};
// positive test
{
Keylet const keylet = keylet::skip();
Json::Value jvParams;
jvParams[jss::hashes] = to_string(keylet.key);
Json::Value const jrr = env.rpc(
"json", "ledger_entry", to_string(jvParams))[jss::result];
BEAST_EXPECT(
jrr[jss::node][sfLedgerEntryType.jsonName] ==
jss::LedgerHashes);
}
// negative tests
runLedgerEntryTest(env, jss::hashes);
}
void
testNFTokenOffer()
{
testcase("NFT Offer");
using namespace test::jtx;
Env env{*this};
// positive test
Account const issuer{"issuer"};
Account const buyer{"buyer"};
env.fund(XRP(1000), issuer, buyer);
uint256 const nftokenID0 =
token::getNextID(env, issuer, 0, tfTransferable);
env(token::mint(issuer, 0), txflags(tfTransferable));
env.close();
uint256 const offerID = keylet::nftoffer(issuer, env.seq(issuer)).key;
env(token::createOffer(issuer, nftokenID0, drops(1)),
token::destination(buyer),
txflags(tfSellNFToken));
{
Json::Value jvParams;
jvParams[jss::nft_offer] = to_string(offerID);
Json::Value const jrr = env.rpc(
"json", "ledger_entry", to_string(jvParams))[jss::result];
BEAST_EXPECT(
jrr[jss::node][sfLedgerEntryType.jsonName] ==
jss::NFTokenOffer);
BEAST_EXPECT(jrr[jss::node][sfOwner.jsonName] == issuer.human());
BEAST_EXPECT(
jrr[jss::node][sfNFTokenID.jsonName] == to_string(nftokenID0));
BEAST_EXPECT(jrr[jss::node][sfAmount.jsonName] == "1");
}
// negative tests
runLedgerEntryTest(env, jss::nft_offer);
}
void
testNFTokenPage()
{
testcase("NFT Page");
using namespace test::jtx;
Env env{*this};
// positive test
Account const issuer{"issuer"};
env.fund(XRP(1000), issuer);
env(token::mint(issuer, 0), txflags(tfTransferable));
env.close();
auto const nftpage = keylet::nftpage_max(issuer);
BEAST_EXPECT(env.le(nftpage) != nullptr);
{
Json::Value jvParams;
jvParams[jss::nft_page] = to_string(nftpage.key);
Json::Value const jrr = env.rpc(
"json", "ledger_entry", to_string(jvParams))[jss::result];
BEAST_EXPECT(
jrr[jss::node][sfLedgerEntryType.jsonName] == jss::NFTokenPage);
}
// negative tests
runLedgerEntryTest(env, jss::nft_page);
}
void
testNegativeUNL()
{
testcase("Negative UNL");
using namespace test::jtx;
Env env{*this};
// positive test
{
Keylet const keylet = keylet::negativeUNL();
// easier to hack an object into the ledger than generate it
// legitimately
{
auto const nUNL = [&](OpenView& view, beast::Journal) -> bool {
auto const sle = std::make_shared<SLE>(keylet);
// Create DisabledValidators array
STArray disabledValidators;
auto disabledValidator =
STObject::makeInnerObject(sfDisabledValidator);
auto pubKeyBlob = strUnHex(
"ED58F6770DB5DD77E59D28CB650EC3816E2FC95021BB56E720C9A1"
"2DA79C58A3AB");
disabledValidator.setFieldVL(sfPublicKey, *pubKeyBlob);
disabledValidator.setFieldU32(
sfFirstLedgerSequence, 91371264);
disabledValidators.push_back(std::move(disabledValidator));
sle->setFieldArray(
sfDisabledValidators, disabledValidators);
sle->setFieldH256(
sfPreviousTxnID,
uint256::fromVoid("8D47FFE664BE6C335108DF689537625855A6"
"A95160CC6D351341B9"
"2624D9C5E3"));
sle->setFieldU32(sfPreviousTxnLgrSeq, 91442944);
view.rawInsert(sle);
return true;
};
env.app().openLedger().modify(nUNL);
}
Json::Value jvParams;
jvParams[jss::nunl] = to_string(keylet.key);
Json::Value const jrr = env.rpc(
"json", "ledger_entry", to_string(jvParams))[jss::result];
BEAST_EXPECT(
jrr[jss::node][sfLedgerEntryType.jsonName] == jss::NegativeUNL);
}
// negative tests
runLedgerEntryTest(env, jss::nunl);
}
void
testOffer()
{
testcase("Offer");
using namespace test::jtx;
@@ -1413,7 +1738,7 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryPayChan()
testPayChan()
{
testcase("Pay Chan");
using namespace test::jtx;
@@ -1475,7 +1800,7 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryRippleState()
testRippleState()
{
testcase("RippleState");
using namespace test::jtx;
@@ -1626,7 +1951,16 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryTicket()
testSignerList()
{
testcase("Signer List");
using namespace test::jtx;
Env env{*this};
runLedgerEntryTest(env, jss::signer_list);
}
void
testTicket()
{
testcase("Ticket");
using namespace test::jtx;
@@ -1711,7 +2045,7 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryDID()
testDID()
{
testcase("DID");
using namespace test::jtx;
@@ -1848,7 +2182,7 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryMPT()
testMPT()
{
testcase("MPT");
using namespace test::jtx;
@@ -1931,7 +2265,7 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryPermissionedDomain()
testPermissionedDomain()
{
testcase("PermissionedDomain");
@@ -2010,7 +2344,7 @@ class LedgerEntry_test : public beast::unit_test::suite
}
void
testLedgerEntryCLI()
testCLI()
{
testcase("command-line");
using namespace test::jtx;
@@ -2040,25 +2374,33 @@ public:
void
run() override
{
testLedgerEntryInvalid();
testLedgerEntryAccountRoot();
testLedgerEntryCheck();
testLedgerEntryCredentials();
testLedgerEntryDelegate();
testLedgerEntryDepositPreauth();
testLedgerEntryDepositPreauthCred();
testLedgerEntryDirectory();
testLedgerEntryEscrow();
testLedgerEntryOffer();
testLedgerEntryPayChan();
testLedgerEntryRippleState();
testLedgerEntryTicket();
testLedgerEntryDID();
testInvalid();
testAccountRoot();
testAmendments();
testAMM();
testCheck();
testCredentials();
testDelegate();
testDepositPreauth();
testDepositPreauthCred();
testDirectory();
testEscrow();
testFeeSettings();
testLedgerHashes();
testNFTokenOffer();
testNFTokenPage();
testNegativeUNL();
testOffer();
testPayChan();
testRippleState();
testSignerList();
testTicket();
testDID();
testInvalidOracleLedgerEntry();
testOracleLedgerEntry();
testLedgerEntryMPT();
testLedgerEntryPermissionedDomain();
testLedgerEntryCLI();
testMPT();
testPermissionedDomain();
testCLI();
}
};
@@ -2086,7 +2428,7 @@ class LedgerEntry_XChain_test : public beast::unit_test::suite,
}
void
testLedgerEntryBridge()
testBridge()
{
testcase("ledger_entry: bridge");
using namespace test::jtx;
@@ -2177,7 +2519,7 @@ class LedgerEntry_XChain_test : public beast::unit_test::suite,
}
void
testLedgerEntryClaimID()
testClaimID()
{
testcase("ledger_entry: xchain_claim_id");
using namespace test::jtx;
@@ -2235,7 +2577,7 @@ class LedgerEntry_XChain_test : public beast::unit_test::suite,
}
void
testLedgerEntryCreateAccountClaimID()
testCreateAccountClaimID()
{
testcase("ledger_entry: xchain_create_account_claim_id");
using namespace test::jtx;
@@ -2362,9 +2704,9 @@ public:
void
run() override
{
testLedgerEntryBridge();
testLedgerEntryClaimID();
testLedgerEntryCreateAccountClaimID();
testBridge();
testClaimID();
testCreateAccountClaimID();
}
};

View File

@@ -87,7 +87,7 @@ private:
/// If the node is out of sync during an online_delete healthWait()
/// call, sleep the thread for this time, and continue checking until
/// recovery.
/// See also: "recovery_wait_seconds" in rippled-example.cfg
/// See also: "recovery_wait_seconds" in xrpld-example.cfg
std::chrono::seconds recoveryWaitTime_{5};
// these do not exist upon SHAMapStore creation, but do exist

View File

@@ -1285,7 +1285,10 @@ EscrowCancel::doApply()
return escrowUnlockApplyHelper<T>(
ctx_.view(),
parityRate,
slep,
// fixTokenEscrowV1_1: Pass account SLE instead of
// escrow SLE
ctx_.view().rules().enabled(fixTokenEscrowV1_1) ? sle
: slep,
mPriorBalance,
amount,
issuer,
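This is the edge case named in the commit message: under fixTokenEscrowV1_1 the unlock helper receives the owner's account SLE (sle) rather than the escrow SLE (slep), which, per testIOUCancelDoApply above, lets a creator's cancel succeed and the trust line be recreated even after that line was deleted. Restated with illustrative names:

// Amendment-gated argument selection, as in the hunk above:
auto const& target = view.rules().enabled(fixTokenEscrowV1_1)
    ? ownerAccountSle // lets the helper rebuild missing state
    : escrowSle;      // pre-fix behavior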

View File

@@ -68,6 +68,7 @@ class Config : public BasicConfig
public:
// Settings related to the configuration file location and directories
static char const* const configFileName;
static char const* const configLegacyName;
static char const* const databaseDirName;
static char const* const validatorsFileName;

View File

@@ -221,11 +221,12 @@ getSingleSection(
//------------------------------------------------------------------------------
//
// Config (DEPRECATED)
// Config
//
//------------------------------------------------------------------------------
char const* const Config::configFileName = "rippled.cfg";
char const* const Config::configFileName = "xrpld.cfg";
char const* const Config::configLegacyName = "rippled.cfg";
char const* const Config::databaseDirName = "db";
char const* const Config::validatorsFileName = "validators.txt";
@@ -295,76 +296,78 @@ Config::setup(
bool bSilent,
bool bStandalone)
{
boost::filesystem::path dataDir;
std::string strDbPath, strConfFile;
setupControl(bQuiet, bSilent, bStandalone);
// Determine the config and data directories.
// If the config file is found in the current working
// directory, use the current working directory as the
// config directory and that with "db" as the data
// directory.
setupControl(bQuiet, bSilent, bStandalone);
strDbPath = databaseDirName;
if (!strConf.empty())
strConfFile = strConf;
else
strConfFile = configFileName;
boost::filesystem::path dataDir;
if (!strConf.empty())
{
// --conf=<path> : everything is relative to that file.
CONFIG_FILE = strConfFile;
CONFIG_FILE = strConf;
CONFIG_DIR = boost::filesystem::absolute(CONFIG_FILE);
CONFIG_DIR.remove_filename();
dataDir = CONFIG_DIR / strDbPath;
dataDir = CONFIG_DIR / databaseDirName;
}
else
{
CONFIG_DIR = boost::filesystem::current_path();
CONFIG_FILE = CONFIG_DIR / strConfFile;
dataDir = CONFIG_DIR / strDbPath;
// Construct XDG config and data home.
// http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html
auto const strHome = getEnvVar("HOME");
auto strXdgConfigHome = getEnvVar("XDG_CONFIG_HOME");
auto strXdgDataHome = getEnvVar("XDG_DATA_HOME");
if (boost::filesystem::exists(CONFIG_FILE)
// Can we figure out XDG dirs?
|| (strHome.empty() &&
(strXdgConfigHome.empty() || strXdgDataHome.empty())))
do
{
// Current working directory is fine, put dbs in a subdir.
}
else
{
if (strXdgConfigHome.empty())
// Check if either of the config files exist in the current working
// directory, in which case the databases will be stored in a
// subdirectory.
CONFIG_DIR = boost::filesystem::current_path();
dataDir = CONFIG_DIR / databaseDirName;
CONFIG_FILE = CONFIG_DIR / configFileName;
if (boost::filesystem::exists(CONFIG_FILE))
break;
CONFIG_FILE = CONFIG_DIR / configLegacyName;
if (boost::filesystem::exists(CONFIG_FILE))
break;
// Check if the home directory is set, and optionally the XDG config
// and/or data directories, as the config may be there. See
// http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html.
auto const strHome = getEnvVar("HOME");
if (!strHome.empty())
{
// $XDG_CONFIG_HOME was not set, use default based on $HOME.
strXdgConfigHome = strHome + "/.config";
auto strXdgConfigHome = getEnvVar("XDG_CONFIG_HOME");
auto strXdgDataHome = getEnvVar("XDG_DATA_HOME");
if (strXdgConfigHome.empty())
{
// $XDG_CONFIG_HOME was not set, use default based on $HOME.
strXdgConfigHome = strHome + "/.config";
}
if (strXdgDataHome.empty())
{
// $XDG_DATA_HOME was not set, use default based on $HOME.
strXdgDataHome = strHome + "/.local/share";
}
// Check if either of the config files exist in the XDG config
// dir.
dataDir = strXdgDataHome + "/" + systemName();
CONFIG_DIR = strXdgConfigHome + "/" + systemName();
CONFIG_FILE = CONFIG_DIR / configFileName;
if (boost::filesystem::exists(CONFIG_FILE))
break;
CONFIG_FILE = CONFIG_DIR / configLegacyName;
if (boost::filesystem::exists(CONFIG_FILE))
break;
}
if (strXdgDataHome.empty())
{
// $XDG_DATA_HOME was not set, use default based on $HOME.
strXdgDataHome = strHome + "/.local/share";
}
CONFIG_DIR = strXdgConfigHome + "/" + systemName();
CONFIG_FILE = CONFIG_DIR / strConfFile;
dataDir = strXdgDataHome + "/" + systemName();
if (!boost::filesystem::exists(CONFIG_FILE))
{
CONFIG_DIR = "/etc/opt/" + systemName();
CONFIG_FILE = CONFIG_DIR / strConfFile;
dataDir = "/var/opt/" + systemName();
}
}
// As a last resort, check the system config directory.
dataDir = "/var/opt/" + systemName();
CONFIG_DIR = "/etc/opt/" + systemName();
CONFIG_FILE = CONFIG_DIR / configFileName;
if (boost::filesystem::exists(CONFIG_FILE))
break;
CONFIG_FILE = CONFIG_DIR / configLegacyName;
} while (false);
}
// Update default values
@@ -374,11 +377,9 @@ Config::setup(
std::string const dbPath(legacy("database_path"));
if (!dbPath.empty())
dataDir = boost::filesystem::path(dbPath);
else if (RUN_STANDALONE)
dataDir.clear();
}
if (!dataDir.empty())
if (!RUN_STANDALONE)
{
boost::system::error_code ec;
boost::filesystem::create_directories(dataDir, ec);
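Net effect of the rewritten lookup: each candidate directory is tried with the new file name first and the legacy name second, and the first existing file wins. Schematically (illustrative names, not the actual control flow):

// Search order implemented by the do/while above:
//   1. ./xrpld.cfg, then ./rippled.cfg
//   2. $XDG_CONFIG_HOME/<systemName>/xrpld.cfg, then .../rippled.cfg
//      ($XDG_CONFIG_HOME defaults to $HOME/.config)
//   3. /etc/opt/<systemName>/xrpld.cfg, then .../rippled.cfg
for (auto const& dir : {cwd, xdgConfigDir, etcOptDir})
    for (auto const* name : {"xrpld.cfg", "rippled.cfg"})
        if (boost::filesystem::exists(dir / name))
            return dir / name; // first match wins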

View File

@@ -373,7 +373,7 @@ command. The key is in the `pubkey_node` value, and is a text string
beginning with the letter `n`. The key is maintained across runs in a
database.
Cluster members are configured in the `rippled.cfg` file under
Cluster members are configured in the `xrpld.cfg` file under
`[cluster_nodes]`. Each member should be configured on a line beginning
with the node public key, followed optionally by a space and a friendly
name.

View File

@@ -514,7 +514,7 @@ OverlayImpl::start()
m_peerFinder->addFallbackStrings(base + name, ips);
});
// Add the ips_fixed from the rippled.cfg file
// Add the ips_fixed from the xrpld.cfg file
if (!app_.config().standalone() && !app_.config().IPS_FIXED.empty())
{
m_resolver.resolve(

View File

@@ -71,16 +71,17 @@ parseAMM(Json::Value const& params, Json::StaticString const fieldName)
return Unexpected(value.error());
}
try
{
auto const issue = issueFromJson(params[jss::asset]);
auto const issue2 = issueFromJson(params[jss::asset2]);
return keylet::amm(issue, issue2).key;
}
catch (std::runtime_error const&)
{
return LedgerEntryHelpers::malformedError("malformedRequest", "");
}
auto const asset = LedgerEntryHelpers::requiredIssue(
params, jss::asset, "malformedRequest");
if (!asset)
return Unexpected(asset.error());
auto const asset2 = LedgerEntryHelpers::requiredIssue(
params, jss::asset2, "malformedRequest");
if (!asset2)
return Unexpected(asset2.error());
return keylet::amm(*asset, *asset2).key;
}
static Expected<uint256, Json::Value>
@@ -424,7 +425,7 @@ parseLoan(Json::Value const& params, Json::StaticString const fieldName)
}
auto const id = LedgerEntryHelpers::requiredUInt256(
params, jss::loan_broker_id, "malformedOwner");
params, jss::loan_broker_id, "malformedLoanBrokerID");
if (!id)
return Unexpected(id.error());
auto const seq = LedgerEntryHelpers::requiredUInt32(

View File

@@ -218,6 +218,29 @@ requiredUInt192(
return required<uint192>(params, fieldName, err, "Hash192");
}
template <>
std::optional<Issue>
parse(Json::Value const& param)
{
try
{
return issueFromJson(param);
}
catch (std::runtime_error const&)
{
return std::nullopt;
}
}
Expected<Issue, Json::Value>
requiredIssue(
Json::Value const& params,
Json::StaticString const fieldName,
std::string const& err)
{
return required<Issue>(params, fieldName, err, "Issue");
}
Expected<STXChainBridge, Json::Value>
parseBridgeFields(Json::Value const& params)
{