Compare commits

...

14 Commits

Author SHA1 Message Date
Bart
9de4bcac05 Clean up scripts 2025-12-23 10:08:44 -05:00
Bart
90234d1fd0 Improve readme, reduce number of configs to build 2025-12-23 10:04:43 -05:00
Bart
2543f2eb58 Exclude scripts and example code from Codecov 2025-12-22 20:23:57 -05:00
Bart
5c3eaa5101 Fix missing imports 2025-12-22 19:43:30 -05:00
Bart
6dfaeb11bc Skip Clang 20+ on ARM, reduce macOS and Windows builds, temporarily run all 2025-12-22 19:42:16 -05:00
Bart
01d57c9aa3 Update image SHA for Debian and RHEL, pass JSON array as string 2025-12-22 18:58:19 -05:00
Bart
c115b77970 Reusable workflows do not support choice 2025-12-22 18:48:30 -05:00
Bart
05a76895ad ci: Support more flexible strategy matrix generation 2025-12-22 16:52:45 -05:00
Bart
40198d9792 ci: Remove superfluous build directory creation (#6159)
This change modifies the build directory structure from `build/build/xxx` or `.build/build/xxx` to just `build/xxx`. Namely, the `conanfile.py` has the CMake generators build directory hardcoded to `build/generators`. We may as well leverage the top-level build directory without introducing another layer of directory nesting.
2025-12-22 16:30:23 -05:00
Bart
f059f0beda Set version to 3.2.0-b0 (#6153) 2025-12-17 18:21:01 -05:00
Mayukha Vadari
41c1be2bac refactor: remove Json::Object and related files/classes (#5894)
`Json::Object` and related objects are not used at all, so this change removes `include/xrpl/json/Object.h` and all downstream files. There are a number of minor downstream changes as well.

Full list of deleted classes and functions:
* `Json::Collections`
* `Json::Object`
* `Json::Array`
* `Json::WriterObject`
* `Json::setArray`
* `Json::addObject`
* `Json::appendArray`
* `Json::appendObject`

The last helper function, `copyFrom`, seemed a bit more complex and was actually used in a few places, so it was moved to `LedgerToJson.h` instead of deleting it.
2025-12-15 13:40:08 -05:00
Bart
f816ffa55f ci: Update shared actions (#6147)
The latest update to `cleanup-workspace`, `get-nproc`, and `prepare-runner` moved the action to the repository root directory, and also includes some ccache changes. In response, this change updates the various shared actions to the latest commit hash.
2025-12-12 19:47:34 +00:00
liuyueyangxmu
cf748702af chore: Fix some typos in comments (#6082) 2025-12-12 11:06:17 -05:00
Bart
1eb0fdac65 refactor: Rename ripple namespace to xrpl (#5982)
This change renames all occurrences of `namespace ripple` and `ripple::` to `namespace xrpl` and `xrpl::`, respectively, as well as the names of test suites. It also provides a script to allow developers to replicate the changes in their local branch or fork to avoid conflicts.
2025-12-11 16:51:49 +00:00
1290 changed files with 7330 additions and 5872 deletions

View File

@@ -32,7 +32,9 @@ parsers:
slack_app: false
ignore:
- "src/test/"
- "src/tests/"
- ".github/scripts/"
- "include/xrpl/beast/test/"
- "include/xrpl/beast/unit_test/"
- "src/test/"
- "src/tests/"
- "tests/"

View File

@@ -4,9 +4,6 @@ description: "Install Conan dependencies, optionally forcing a rebuild of all de
# Note that actions do not support 'type' and all inputs are strings, see
# https://docs.github.com/en/actions/reference/workflows-and-actions/metadata-syntax#inputs.
inputs:
build_dir:
description: "The directory where to build."
required: true
build_type:
description: 'The build type to use ("Debug", "Release").'
required: true
@@ -28,17 +25,13 @@ runs:
- name: Install Conan dependencies
shell: bash
env:
BUILD_DIR: ${{ inputs.build_dir }}
BUILD_NPROC: ${{ inputs.build_nproc }}
BUILD_OPTION: ${{ inputs.force_build == 'true' && '*' || 'missing' }}
BUILD_TYPE: ${{ inputs.build_type }}
LOG_VERBOSITY: ${{ inputs.log_verbosity }}
run: |
echo 'Installing dependencies.'
mkdir -p "${BUILD_DIR}"
cd "${BUILD_DIR}"
conan install \
--output-folder . \
--build="${BUILD_OPTION}" \
--options:host='&:tests=True' \
--options:host='&:xrpld=True' \
@@ -46,4 +39,4 @@ runs:
--conf:all tools.build:jobs=${BUILD_NPROC} \
--conf:all tools.build:verbosity="${LOG_VERBOSITY}" \
--conf:all tools.compilation:verbosity="${LOG_VERBOSITY}" \
..
.

View File

@@ -29,6 +29,8 @@ run from the repository root.
4. `.github/scripts/rename/binary.sh`: This script will rename the binary from
`rippled` to `xrpld`, and reverses the symlink so that `rippled` points to
the `xrpld` binary.
5. `.github/scripts/rename/namespace.sh`: This script will rename the C++
namespaces from `ripple` to `xrpl`.
You can run all these scripts from the repository root as follows:
@@ -37,4 +39,5 @@ You can run all these scripts from the repository root as follows:
./.github/scripts/rename/copyright.sh .
./.github/scripts/rename/cmake.sh .
./.github/scripts/rename/binary.sh .
./.github/scripts/rename/namespace.sh .
```

.github/scripts/rename/namespace.sh vendored Executable file
View File

@@ -0,0 +1,58 @@
#!/bin/bash
# Exit the script as soon as an error occurs.
set -e
# On macOS, ensure that GNU sed is installed and available as `gsed`.
SED_COMMAND=sed
if [[ "${OSTYPE}" == 'darwin'* ]]; then
if ! command -v gsed &> /dev/null; then
echo "Error: gsed is not installed. Please install it using 'brew install gnu-sed'."
exit 1
fi
SED_COMMAND=gsed
fi
# This script renames the `ripple` namespace to `xrpl` in this project.
# Specifically, it renames all occurrences of `namespace ripple` and `ripple::`
# to `namespace xrpl` and `xrpl::`, respectively, by scanning all header and
# source files in the specified directory and its subdirectories, as well as any
# occurrences in the documentation. It also renames them in the test suites.
# Usage: .github/scripts/rename/namespace.sh <repository directory>
if [ "$#" -ne 1 ]; then
echo "Usage: $0 <repository directory>"
exit 1
fi
DIRECTORY=$1
echo "Processing directory: ${DIRECTORY}"
if [ ! -d "${DIRECTORY}" ]; then
echo "Error: Directory '${DIRECTORY}' does not exist."
exit 1
fi
pushd "${DIRECTORY}"
DIRECTORIES=("include" "src" "tests")
for DIRECTORY in "${DIRECTORIES[@]}"; do
echo "Processing directory: ${DIRECTORY}"
find "${DIRECTORY}" -type f \( -name "*.h" -o -name "*.hpp" -o -name "*.ipp" -o -name "*.cpp" \) | while read -r FILE; do
echo "Processing file: ${FILE}"
${SED_COMMAND} -i 's/namespace ripple/namespace xrpl/g' "${FILE}"
${SED_COMMAND} -i 's/ripple::/xrpl::/g' "${FILE}"
${SED_COMMAND} -i -E 's/(BEAST_DEFINE_TESTSUITE.+)ripple(.+)/\1xrpl\2/g' "${FILE}"
done
done
# Special case for NuDBFactory that has ripple twice in the test suite name.
${SED_COMMAND} -i -E 's/(BEAST_DEFINE_TESTSUITE.+)ripple(.+)/\1xrpl\2/g' src/test/nodestore/NuDBFactory_test.cpp
# Scan the documentation; we are already inside the repository directory.
find . -type f -name "*.md" | while read -r FILE; do
echo "Processing file: ${FILE}"
${SED_COMMAND} -i 's/ripple::/xrpl::/g' "${FILE}"
done
popd
echo "Renaming complete."

View File

@@ -0,0 +1,118 @@
# Strategy Matrix
The scripts in this directory will generate a strategy matrix for GitHub Actions
CI, depending on the trigger that caused the workflow to run and the platform
specified.
There are several build, test, and publish settings that can be enabled for each
configuration. The settings are combined in a Cartesian product to generate the
full matrix, while filtering out any combinations not applicable to the trigger.
## Platforms
We support three platforms: Linux, macOS, and Windows.
### Linux
We support a variety of distributions (Debian, RHEL, and Ubuntu) and compilers
(GCC and Clang) on Linux. As there are so many combinations, we don't run them
all. Instead, we focus on a few key ones for PR commits and merges, while we run
most of them on a scheduled or ad hoc basis.
Some noteworthy configurations are:
- The official release build is GCC 14 on Debian Bullseye.
- Although we generally enable assertions in release builds, we disable them
for the official release build.
- We publish .deb and .rpm packages for this build, as well as a Docker image.
- For PR commits we also publish packages and images for testing purposes.
- Antithesis instrumentation is only supported on Clang 16+ on AMD64.
- We publish a Docker image for this build, but no packages.
- Coverage reports are generated on Bullseye with GCC 15.
- Coverage must be enabled for both commits (to show PR coverage) and merges
  (to show default branch coverage).
Note that we try to run pipelines equally across both AMD64 and ARM64, but in
some cases we cannot build on ARM64:
- All Clang 20+ builds on ARM64 are currently skipped due to a Boost build
error.
- All RHEL builds on AMD64 are currently skipped due to a build failure that
needs further investigation.
Also note that to create a multi-arch Docker image we ideally build on both
AMD64 and ARM64, so both configs should be triggered by the same event. However,
as the script outputs individual configs, the workflow must be able to run both
builds separately and then merge the single-arch images into a multi-arch image
afterward.
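The merge step described here amounts to grouping the generated configs by their target image. A hypothetical sketch (`group_by_image` is not part of the repository; the `image` and `enable_image` field names mirror the matrix entries emitted by `generate.py`):

```python
from collections import defaultdict


def group_by_image(configs: list[dict]) -> dict[str, list[dict]]:
    """Group single-arch configs that target the same image, so a later
    workflow step can merge their outputs into one multi-arch manifest."""
    groups: dict[str, list[dict]] = defaultdict(list)
    for config in configs:
        # Only configs that actually publish an image participate in the merge.
        if config.get("enable_image") and config.get("image"):
            groups[config["image"]].append(config)
    return dict(groups)
```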
### macOS
We support building on macOS, which uses the Apple Clang compiler and the ARM64
architecture. We use default settings for all builds, and don't publish any
packages or images.
### Windows
We also support building on Windows, which uses the MSVC compiler and the AMD64
architecture. While we could build on ARM64, we have not yet found a suitable
cloud machine to use as a GitHub runner. We use default settings for all builds,
and don't publish any packages or images.
## Triggers
We have four triggers that can cause the workflow to run:
- `commit`: A commit is pushed to a branch for which a pull request is open.
- `merge`: A pull request is merged.
- `label`: A label is added to a pull request.
- `schedule`: The workflow is run on a scheduled basis.
The `label` trigger is currently not used, but it is reserved for future use.
The `schedule` trigger is used to run the workflow each weekday, and is also
used for ad hoc testing via the `workflow_dispatch` event.
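The relationship between GitHub event names and these four triggers can be sketched as follows (a hypothetical illustration; the real mapping lives in the workflow YAML, and `classify_trigger` is not part of the repository):

```python
def classify_trigger(event_name: str) -> str:
    """Map a GitHub event name to one of the four workflow triggers."""
    mapping = {
        "pull_request": "commit",         # commit pushed to an open PR
        "push": "merge",                  # PR merged into a branch
        "schedule": "schedule",           # weekday runs
        "workflow_dispatch": "schedule",  # ad hoc runs reuse the schedule configs
    }
    return mapping.get(event_name, "label")
```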
### Dependencies
The pipeline that is run for the `schedule` trigger will recompile and upload
all Conan packages to the remote for each configuration that is enabled. In
case any dependencies were added or updated in a recently merged PR, they will
then be available in the remote for the following pipeline runs. It is therefore
important that all configurations that are enabled for the `commit`, `merge`,
and `label` triggers are also enabled for the `schedule` trigger. We run
additional configurations in the `schedule` trigger that are not run for the
other triggers, to get extra confidence that the codebase can compile and run on
all supported platforms.
#### Caveats
There is some nuance here: certain options affect how the dependencies are
compiled, while others do not. This means the same options need to be enabled
for the `schedule` trigger as for the other triggers, to ensure any dependency
changes get cached in the Conan remote.
- Build mode (`unity`): Does not affect the dependencies.
- Build option (`coverage`, `voidstar`): Does not affect the dependencies.
- Build option (`sanitizer asan`, `sanitizer tsan`): Affects the dependencies.
- Build type (`debug`, `release`): Affects the dependencies.
- Build type (`publish`): Same effect as `release` on the dependencies.
- Test option (`reference fee`): Does not affect the dependencies.
- Publish option (`package`, `image`): Does not affect the dependencies.
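The caveats above can be captured in a small lookup table (a hypothetical sketch mirroring the bullet list; `needs_dependency_rebuild` is not part of the repository):

```python
# Whether each matrix option changes the compiled Conan dependencies.
AFFECTS_DEPENDENCIES = {
    "unity": False,           # build mode
    "coverage": False,        # build option
    "voidstar": False,        # build option
    "sanitizer_asan": True,   # build option
    "sanitizer_tsan": True,   # build option
    "debug": True,            # build type
    "release": True,          # build type
    "publish": True,          # same effect as release
    "reference_fee": False,   # test option
    "package": False,         # publish option
    "image": False,           # publish option
}


def needs_dependency_rebuild(options: set[str]) -> bool:
    """Return True if any enabled option affects the compiled dependencies."""
    return any(AFFECTS_DEPENDENCIES.get(opt, False) for opt in options)
```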
## Usage
Our GitHub CI pipeline uses the `generate.py` script to generate the matrix for
the current workflow invocation. Naturally, the script can be run locally to
generate the matrix for testing purposes, e.g.:
```bash
python3 generate.py --platform=linux --trigger=commit
```
If you want to pretty-print the output, you can pipe it to `jq` after stripping
off the `matrix=` prefix, e.g.:
```bash
python3 generate.py --platform=linux --trigger=commit | cut -d= -f2- | jq
```
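Equivalently, the `matrix=` prefix can be stripped in Python before decoding (a minimal sketch, assuming the generator prints a single `matrix={...}` line as `generate.py` does; `parse_matrix_output` is a hypothetical helper):

```python
import json


def parse_matrix_output(line: str) -> dict:
    """Decode one 'matrix={...}' line printed by the generator."""
    prefix = "matrix="
    if not line.startswith(prefix):
        raise ValueError("unexpected generator output")
    # Everything after the prefix is a JSON object with an 'include' list.
    return json.loads(line[len(prefix):])
```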

View File

@@ -1,301 +1,211 @@
#!/usr/bin/env python3
import argparse
import dataclasses
import itertools
import json
from dataclasses import dataclass
from pathlib import Path
from collections.abc import Iterator
THIS_DIR = Path(__file__).parent.resolve()
import linux
import macos
import windows
from helpers.defs import *
from helpers.enums import *
from helpers.funcs import *
from helpers.unique import *
# The GitHub runner tags to use for the different architectures.
RUNNER_TAGS = {
Arch.LINUX_AMD64: ["self-hosted", "Linux", "X64", "heavy"],
Arch.LINUX_ARM64: ["self-hosted", "Linux", "ARM64", "heavy-arm64"],
Arch.MACOS_ARM64: ["self-hosted", "macOS", "ARM64", "mac-runner-m1"],
Arch.WINDOWS_AMD64: ["self-hosted", "Windows", "devbox"],
}
@dataclass
class Config:
architecture: list[dict]
os: list[dict]
build_type: list[str]
cmake_args: list[str]
def generate_configs(distros: list[Distro], trigger: Trigger) -> list[Config]:
"""Generate a strategy matrix for GitHub Actions CI.
Args:
distros: The distros to generate the matrix for.
trigger: The trigger that caused the workflow to run.
Returns:
list[Config]: The generated configurations.
Raises:
ValueError: If any of the required fields are empty or invalid.
TypeError: If any of the required fields are of the wrong type.
"""
configs = []
for distro in distros:
for config in generate_config_for_distro(distro, trigger):
configs.append(config)
if not is_unique(configs):
raise ValueError("configs must be a list of unique Config")
return configs
"""
Generate a strategy matrix for GitHub Actions CI.
def generate_config_for_distro(distro: Distro, trigger: Trigger) -> Iterator[Config]:
"""Generate a strategy matrix for a specific distro.
On each PR commit we will build a selection of Debian, RHEL, Ubuntu, MacOS, and
Windows configurations, while upon merge into the develop, release, or master
branches, we will build all configurations, and test most of them.
Args:
distro: The distro to generate the matrix for.
trigger: The trigger that caused the workflow to run.
We will further set additional CMake arguments as follows:
- All builds will have the `tests`, `werr`, and `xrpld` options.
- All builds will have the `wextra` option except for GCC 12 and Clang 16.
- All release builds will have the `assert` option.
- Certain Debian Bookworm configurations will change the reference fee, enable
codecov, and enable voidstar in PRs.
"""
Yields:
Config: The next configuration to build.
Raises:
ValueError: If any of the required fields are empty or invalid.
TypeError: If any of the required fields are of the wrong type.
def generate_strategy_matrix(all: bool, config: Config) -> list:
configurations = []
for architecture, os, build_type, cmake_args in itertools.product(
config.architecture, config.os, config.build_type, config.cmake_args
):
# The default CMake target is 'all' for Linux and MacOS and 'install'
# for Windows, but it can get overridden for certain configurations.
cmake_target = "install" if os["distro_name"] == "windows" else "all"
# We build and test all configurations by default, except for Windows in
# Debug, because it is too slow, as well as when code coverage is
# enabled as that mode already runs the tests.
build_only = False
if os["distro_name"] == "windows" and build_type == "Debug":
build_only = True
# Only generate a subset of configurations in PRs.
if not all:
# Debian:
# - Bookworm using GCC 13: Release and Unity on linux/amd64, set
# the reference fee to 500.
# - Bookworm using GCC 15: Debug and no Unity on linux/amd64, enable
# code coverage (which will be done below).
# - Bookworm using Clang 16: Debug and no Unity on linux/arm64,
# enable voidstar.
# - Bookworm using Clang 17: Release and no Unity on linux/amd64,
# set the reference fee to 1000.
# - Bookworm using Clang 20: Debug and Unity on linux/amd64.
if os["distro_name"] == "debian":
skip = True
if os["distro_version"] == "bookworm":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-13"
and build_type == "Release"
and "-Dunity=ON" in cmake_args
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"-DUNIT_TEST_REFERENCE_FEE=500 {cmake_args}"
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-15"
and build_type == "Debug"
and "-Dunity=OFF" in cmake_args
and architecture["platform"] == "linux/amd64"
):
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-16"
and build_type == "Debug"
and "-Dunity=OFF" in cmake_args
and architecture["platform"] == "linux/arm64"
):
cmake_args = f"-Dvoidstar=ON {cmake_args}"
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-17"
and build_type == "Release"
and "-Dunity=ON" in cmake_args
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"-DUNIT_TEST_REFERENCE_FEE=1000 {cmake_args}"
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-20"
and build_type == "Debug"
and "-Dunity=ON" in cmake_args
and architecture["platform"] == "linux/amd64"
):
skip = False
if skip:
continue
# RHEL:
# - 9 using GCC 12: Debug and Unity on linux/amd64.
# - 10 using Clang: Release and no Unity on linux/amd64.
if os["distro_name"] == "rhel":
skip = True
if os["distro_version"] == "9":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-12"
and build_type == "Debug"
and "-Dunity=ON" in cmake_args
and architecture["platform"] == "linux/amd64"
):
skip = False
elif os["distro_version"] == "10":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-any"
and build_type == "Release"
and "-Dunity=OFF" in cmake_args
and architecture["platform"] == "linux/amd64"
):
skip = False
if skip:
continue
# Ubuntu:
# - Jammy using GCC 12: Debug and no Unity on linux/arm64.
# - Noble using GCC 14: Release and Unity on linux/amd64.
# - Noble using Clang 18: Debug and no Unity on linux/amd64.
# - Noble using Clang 19: Release and Unity on linux/arm64.
if os["distro_name"] == "ubuntu":
skip = True
if os["distro_version"] == "jammy":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-12"
and build_type == "Debug"
and "-Dunity=OFF" in cmake_args
and architecture["platform"] == "linux/arm64"
):
skip = False
elif os["distro_version"] == "noble":
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-14"
and build_type == "Release"
and "-Dunity=ON" in cmake_args
and architecture["platform"] == "linux/amd64"
):
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-18"
and build_type == "Debug"
and "-Dunity=OFF" in cmake_args
and architecture["platform"] == "linux/amd64"
):
skip = False
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "clang-19"
and build_type == "Release"
and "-Dunity=ON" in cmake_args
and architecture["platform"] == "linux/arm64"
):
skip = False
if skip:
continue
# MacOS:
# - Debug and no Unity on macos/arm64.
if os["distro_name"] == "macos" and not (
build_type == "Debug"
and "-Dunity=OFF" in cmake_args
and architecture["platform"] == "macos/arm64"
):
continue
# Windows:
# - Release and Unity on windows/amd64.
if os["distro_name"] == "windows" and not (
build_type == "Release"
and "-Dunity=ON" in cmake_args
and architecture["platform"] == "windows/amd64"
):
continue
# Additional CMake arguments.
cmake_args = f"{cmake_args} -Dtests=ON -Dwerr=ON -Dxrpld=ON"
if not f"{os['compiler_name']}-{os['compiler_version']}" in [
"gcc-12",
"clang-16",
]:
cmake_args = f"{cmake_args} -Dwextra=ON"
if build_type == "Release":
cmake_args = f"{cmake_args} -Dassert=ON"
# We skip all RHEL on arm64 due to a build failure that needs further
# investigation.
if os["distro_name"] == "rhel" and architecture["platform"] == "linux/arm64":
"""
for spec in distro.specs:
if trigger not in spec.triggers:
continue
# We skip all clang 20+ on arm64 due to Boost build error.
if (
f"{os['compiler_name']}-{os['compiler_version']}"
in ["clang-20", "clang-21"]
and architecture["platform"] == "linux/arm64"
):
os_name = distro.os_name
os_version = distro.os_version
compiler_name = distro.compiler_name
compiler_version = distro.compiler_version
image_sha = distro.image_sha
yield from generate_config_for_distro_spec(
os_name,
os_version,
compiler_name,
compiler_version,
image_sha,
spec,
trigger,
)
def generate_config_for_distro_spec(
os_name: str,
os_version: str,
compiler_name: str,
compiler_version: str,
image_sha: str,
spec: Spec,
trigger: Trigger,
) -> Iterator[Config]:
"""Generate a strategy matrix for a specific distro and spec.
Args:
os_name: The OS name.
os_version: The OS version.
compiler_name: The compiler name.
compiler_version: The compiler version.
image_sha: The image SHA.
spec: The spec to generate the matrix for.
trigger: The trigger that caused the workflow to run.
Yields:
Config: The next configuration to build.
"""
for trigger_, arch, build_mode, build_type in itertools.product(
spec.triggers, spec.archs, spec.build_modes, spec.build_types
):
if trigger_ != trigger:
continue
# Enable code coverage for Debian Bookworm using GCC 15 in Debug and no
# Unity on linux/amd64
if (
f"{os['compiler_name']}-{os['compiler_version']}" == "gcc-15"
and build_type == "Debug"
and "-Dunity=OFF" in cmake_args
and architecture["platform"] == "linux/amd64"
):
cmake_args = f"-Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0 {cmake_args}"
build_option = spec.build_option
test_option = spec.test_option
publish_option = spec.publish_option
# Generate a unique name for the configuration, e.g. macos-arm64-debug
# or debian-bookworm-gcc-12-amd64-release-unity.
config_name = os["distro_name"]
if (n := os["distro_version"]) != "":
config_name += f"-{n}"
if (n := os["compiler_name"]) != "":
config_name += f"-{n}"
if (n := os["compiler_version"]) != "":
config_name += f"-{n}"
config_name += (
f"-{architecture['platform'][architecture['platform'].find('/') + 1 :]}"
)
config_name += f"-{build_type.lower()}"
if "-Dunity=ON" in cmake_args:
config_name += "-unity"
# Add the configuration to the list, with the most unique fields first,
# so that they are easier to identify in the GitHub Actions UI, as long
# names get truncated.
configurations.append(
{
"config_name": config_name,
"cmake_args": cmake_args,
"cmake_target": cmake_target,
"build_only": build_only,
"build_type": build_type,
"os": os,
"architecture": architecture,
}
# Determine the configuration name.
config_name = generate_config_name(
os_name,
os_version,
compiler_name,
compiler_version,
arch,
build_type,
build_mode,
build_option,
)
return configurations
# Determine the CMake arguments.
cmake_args = generate_cmake_args(
compiler_name,
compiler_version,
build_type,
build_mode,
build_option,
test_option,
)
# Determine the CMake target.
cmake_target = generate_cmake_target(os_name, build_type)
def read_config(file: Path) -> Config:
config = json.loads(file.read_text())
if (
config["architecture"] is None
or config["os"] is None
or config["build_type"] is None
or config["cmake_args"] is None
):
raise Exception("Invalid configuration file.")
# Determine whether to enable running tests, and to create a package
# and/or image.
enable_tests, enable_package, enable_image = generate_enable_options(
os_name, build_type, publish_option
)
return Config(**config)
# Determine the image to run in, if applicable.
image = generate_image_name(
os_name,
os_version,
compiler_name,
compiler_version,
image_sha,
)
# Generate the configuration.
yield Config(
config_name=config_name,
cmake_args=cmake_args,
cmake_target=cmake_target,
build_type=("Debug" if build_type == BuildType.DEBUG else "Release"),
enable_tests=enable_tests,
enable_package=enable_package,
enable_image=enable_image,
runs_on=RUNNER_TAGS[arch],
image=image,
)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"-a",
"--all",
help="Set to generate all configurations (generally used when merging a PR) or leave unset to generate a subset of configurations (generally used when committing to a PR).",
action="store_true",
"--platform",
"-p",
required=False,
type=Platform,
choices=list(Platform),
help="The platform to run on.",
)
parser.add_argument(
"-c",
"--config",
help="Path to the JSON file containing the strategy matrix configurations.",
required=False,
type=Path,
"--trigger",
"-t",
required=True,
type=Trigger,
choices=list(Trigger),
help="The trigger that caused the workflow to run.",
)
args = parser.parse_args()
matrix = []
if args.config is None or args.config == "":
matrix += generate_strategy_matrix(
args.all, read_config(THIS_DIR / "linux.json")
)
matrix += generate_strategy_matrix(
args.all, read_config(THIS_DIR / "macos.json")
)
matrix += generate_strategy_matrix(
args.all, read_config(THIS_DIR / "windows.json")
)
else:
matrix += generate_strategy_matrix(args.all, read_config(args.config))
# Collect the distros to generate configs for.
distros = []
if args.platform in [None, Platform.LINUX]:
distros += linux.DEBIAN_DISTROS + linux.RHEL_DISTROS + linux.UBUNTU_DISTROS
if args.platform in [None, Platform.MACOS]:
distros += macos.DISTROS
if args.platform in [None, Platform.WINDOWS]:
distros += windows.DISTROS
# Generate the strategy matrix.
print(f"matrix={json.dumps({'include': matrix})}")
# Generate the configs.
configs = generate_configs(distros, args.trigger)
# Convert the configs into the format expected by GitHub Actions.
include = []
for config in configs:
include.append(dataclasses.asdict(config))
print(f"matrix={json.dumps({'include': include})}")

View File

@@ -0,0 +1,466 @@
import pytest
from generate import *
@pytest.fixture
def macos_distro():
return Distro(
os_name="macos",
specs=[
Spec(
archs=[Arch.MACOS_ARM64],
build_modes=[BuildMode.UNITY_OFF],
build_option=BuildOption.COVERAGE,
build_types=[BuildType.RELEASE],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
],
)
@pytest.fixture
def windows_distro():
return Distro(
os_name="windows",
specs=[
Spec(
archs=[Arch.WINDOWS_AMD64],
build_modes=[BuildMode.UNITY_ON],
build_option=BuildOption.SANITIZE_ASAN,
build_types=[BuildType.DEBUG],
publish_option=PublishOption.IMAGE_ONLY,
test_option=TestOption.REFERENCE_FEE_500,
triggers=[Trigger.COMMIT, Trigger.SCHEDULE],
)
],
)
@pytest.fixture
def linux_distro():
return Distro(
os_name="debian",
os_version="bookworm",
compiler_name="clang",
compiler_version="16",
image_sha="a1b2c3d4",
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
build_modes=[BuildMode.UNITY_OFF],
build_option=BuildOption.SANITIZE_TSAN,
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.LABEL],
),
Spec(
archs=[Arch.LINUX_AMD64, Arch.LINUX_ARM64],
build_modes=[BuildMode.UNITY_OFF, BuildMode.UNITY_ON],
build_option=BuildOption.VOIDSTAR,
build_types=[BuildType.PUBLISH],
publish_option=PublishOption.PACKAGE_AND_IMAGE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT, Trigger.LABEL],
),
],
)
def test_macos_generate_config_for_distro_spec_matches_trigger(macos_distro):
trigger = Trigger.COMMIT
distro = macos_distro
result = list(
generate_config_for_distro_spec(
distro.os_name,
distro.os_version,
distro.compiler_name,
distro.compiler_version,
distro.image_sha,
distro.specs[0],
trigger,
)
)
assert result == [
Config(
config_name="macos-coverage-release-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dassert=ON -Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0",
cmake_target="all",
build_type="Release",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["self-hosted", "macOS", "ARM64", "mac-runner-m1"],
image=None,
)
]
def test_macos_generate_config_for_distro_spec_no_match_trigger(macos_distro):
trigger = Trigger.MERGE
distro = macos_distro
result = list(
generate_config_for_distro_spec(
distro.os_name,
distro.os_version,
distro.compiler_name,
distro.compiler_version,
distro.image_sha,
distro.specs[0],
trigger,
)
)
assert result == []
def test_macos_generate_config_for_distro_matches_trigger(macos_distro):
trigger = Trigger.COMMIT
distro = macos_distro
result = list(generate_config_for_distro(distro, trigger))
assert result == [
Config(
config_name="macos-coverage-release-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dassert=ON -Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0",
cmake_target="all",
build_type="Release",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["self-hosted", "macOS", "ARM64", "mac-runner-m1"],
image=None,
)
]
def test_macos_generate_config_for_distro_no_match_trigger(macos_distro):
trigger = Trigger.MERGE
distro = macos_distro
result = list(generate_config_for_distro(distro, trigger))
assert result == []
def test_windows_generate_config_for_distro_spec_matches_trigger(
windows_distro,
):
trigger = Trigger.COMMIT
distro = windows_distro
result = list(
generate_config_for_distro_spec(
distro.os_name,
distro.os_version,
distro.compiler_name,
distro.compiler_version,
distro.image_sha,
distro.specs[0],
trigger,
)
)
assert result == [
Config(
config_name="windows-asan-debug-unity-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dunity=ON -DUNIT_TEST_REFERENCE_FEE=500",
cmake_target="install",
build_type="Debug",
enable_tests=False,
enable_package=False,
enable_image=True,
runs_on=["self-hosted", "Windows", "devbox"],
image=None,
)
]
def test_windows_generate_config_for_distro_spec_no_match_trigger(
windows_distro,
):
trigger = Trigger.MERGE
distro = windows_distro
result = list(
generate_config_for_distro_spec(
distro.os_name,
distro.os_version,
distro.compiler_name,
distro.compiler_version,
distro.image_sha,
distro.specs[0],
trigger,
)
)
assert result == []
def test_windows_generate_config_for_distro_matches_trigger(
windows_distro,
):
trigger = Trigger.COMMIT
distro = windows_distro
result = list(generate_config_for_distro(distro, trigger))
assert result == [
Config(
config_name="windows-asan-debug-unity-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dunity=ON -DUNIT_TEST_REFERENCE_FEE=500",
cmake_target="install",
build_type="Debug",
enable_tests=False,
enable_package=False,
enable_image=True,
runs_on=["self-hosted", "Windows", "devbox"],
image=None,
)
]
def test_windows_generate_config_for_distro_no_match_trigger(
windows_distro,
):
trigger = Trigger.MERGE
distro = windows_distro
result = list(generate_config_for_distro(distro, trigger))
assert result == []
def test_linux_generate_config_for_distro_spec_matches_trigger(linux_distro):
trigger = Trigger.LABEL
distro = linux_distro
result = list(
generate_config_for_distro_spec(
distro.os_name,
distro.os_version,
distro.compiler_name,
distro.compiler_version,
distro.image_sha,
distro.specs[1],
trigger,
)
)
assert result == [
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "X64", "heavy"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-unity-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dunity=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "X64", "heavy"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "ARM64", "heavy-arm64"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-unity-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dunity=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "ARM64", "heavy-arm64"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
]
def test_linux_generate_config_for_distro_spec_no_match_trigger(linux_distro):
trigger = Trigger.MERGE
distro = linux_distro
result = list(
generate_config_for_distro_spec(
distro.os_name,
distro.os_version,
distro.compiler_name,
distro.compiler_version,
distro.image_sha,
distro.specs[1],
trigger,
)
)
assert result == []
def test_linux_generate_config_for_distro_matches_trigger(linux_distro):
trigger = Trigger.LABEL
distro = linux_distro
result = list(generate_config_for_distro(distro, trigger))
assert result == [
Config(
config_name="debian-bookworm-clang-16-tsan-debug-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["self-hosted", "Linux", "X64", "heavy"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "X64", "heavy"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-unity-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dunity=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "X64", "heavy"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "ARM64", "heavy-arm64"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-unity-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dunity=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "ARM64", "heavy-arm64"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
]
def test_linux_generate_config_for_distro_no_match_trigger(linux_distro):
trigger = Trigger.MERGE
distro = linux_distro
result = list(generate_config_for_distro(distro, trigger))
assert result == []
def test_generate_configs(macos_distro, windows_distro, linux_distro):
trigger = Trigger.COMMIT
distros = [macos_distro, windows_distro, linux_distro]
result = generate_configs(distros, trigger)
assert result == [
Config(
config_name="macos-coverage-release-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dassert=ON -Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0",
cmake_target="all",
build_type="Release",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["self-hosted", "macOS", "ARM64", "mac-runner-m1"],
image=None,
),
Config(
config_name="windows-asan-debug-unity-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dunity=ON -DUNIT_TEST_REFERENCE_FEE=500",
cmake_target="install",
build_type="Debug",
enable_tests=False,
enable_package=False,
enable_image=True,
runs_on=["self-hosted", "Windows", "devbox"],
image=None,
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "X64", "heavy"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-unity-amd64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dunity=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "X64", "heavy"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "ARM64", "heavy-arm64"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
Config(
config_name="debian-bookworm-clang-16-voidstar-publish-unity-arm64",
cmake_args="-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dunity=ON -Dvoidstar=ON",
cmake_target="install",
build_type="Release",
enable_tests=True,
enable_package=True,
enable_image=True,
runs_on=["self-hosted", "Linux", "ARM64", "heavy-arm64"],
image="ghcr.io/xrplf/ci/debian-bookworm:clang-16-a1b2c3d4",
),
]
def test_generate_configs_raises_on_duplicate_configs(macos_distro):
trigger = Trigger.COMMIT
distros = [macos_distro, macos_distro]
with pytest.raises(ValueError):
generate_configs(distros, trigger)


@@ -0,0 +1,190 @@
from dataclasses import dataclass, field
from helpers.enums import *
from helpers.unique import *
@dataclass
class Config:
"""Represents a configuration to include in the strategy matrix.
Raises:
ValueError: If any of the required fields are empty or invalid.
TypeError: If any of the required fields are of the wrong type.
"""
config_name: str
cmake_args: str
cmake_target: str
build_type: str
enable_tests: bool
enable_package: bool
enable_image: bool
runs_on: list[str]
image: str | None = None
def __post_init__(self):
if not self.config_name:
raise ValueError("config_name cannot be empty")
if not isinstance(self.config_name, str):
raise TypeError("config_name must be a string")
if not self.cmake_args:
raise ValueError("cmake_args cannot be empty")
if not isinstance(self.cmake_args, str):
raise TypeError("cmake_args must be a string")
if not self.cmake_target:
raise ValueError("cmake_target cannot be empty")
if not isinstance(self.cmake_target, str):
raise TypeError("cmake_target must be a string")
if self.cmake_target not in ["all", "install"]:
raise ValueError("cmake_target must be 'all' or 'install'")
if not self.build_type:
raise ValueError("build_type cannot be empty")
if not isinstance(self.build_type, str):
raise TypeError("build_type must be a string")
if self.build_type not in ["Debug", "Release"]:
raise ValueError("build_type must be 'Debug' or 'Release'")
if not isinstance(self.enable_tests, bool):
raise TypeError("enable_tests must be a boolean")
if not isinstance(self.enable_package, bool):
raise TypeError("enable_package must be a boolean")
if not isinstance(self.enable_image, bool):
raise TypeError("enable_image must be a boolean")
if not self.runs_on:
raise ValueError("runs_on cannot be empty")
if not isinstance(self.runs_on, list):
raise TypeError("runs_on must be a list")
if not all(isinstance(runner, str) for runner in self.runs_on):
raise TypeError("runs_on must be a list of strings")
if not all(self.runs_on):
raise ValueError("runs_on must be a list of non-empty strings")
if len(self.runs_on) != len(set(self.runs_on)):
raise ValueError("runs_on must be a list of unique strings")
if self.image and not isinstance(self.image, str):
raise TypeError("image must be a string")
@dataclass
class Spec:
"""Represents a specification used by a configuration.
Raises:
ValueError: If any of the required fields are empty.
TypeError: If any of the required fields are of the wrong type.
"""
archs: list[Arch] = field(
default_factory=lambda: [Arch.LINUX_AMD64, Arch.LINUX_ARM64]
)
build_option: BuildOption = BuildOption.NONE
build_modes: list[BuildMode] = field(
default_factory=lambda: [BuildMode.UNITY_OFF, BuildMode.UNITY_ON]
)
build_types: list[BuildType] = field(
default_factory=lambda: [BuildType.DEBUG, BuildType.RELEASE]
)
publish_option: PublishOption = PublishOption.NONE
test_option: TestOption = TestOption.NONE
triggers: list[Trigger] = field(
default_factory=lambda: [Trigger.COMMIT, Trigger.MERGE, Trigger.SCHEDULE]
)
def __post_init__(self):
if not self.archs:
raise ValueError("archs cannot be empty")
if not isinstance(self.archs, list):
raise TypeError("archs must be a list")
if not all(isinstance(arch, Arch) for arch in self.archs):
raise TypeError("archs must be a list of Arch")
if len(self.archs) != len(set(self.archs)):
raise ValueError("archs must be a list of unique Arch")
if not isinstance(self.build_option, BuildOption):
raise TypeError("build_option must be a BuildOption")
if not self.build_modes:
raise ValueError("build_modes cannot be empty")
if not isinstance(self.build_modes, list):
raise TypeError("build_modes must be a list")
if not all(
isinstance(build_mode, BuildMode) for build_mode in self.build_modes
):
raise TypeError("build_modes must be a list of BuildMode")
if len(self.build_modes) != len(set(self.build_modes)):
raise ValueError("build_modes must be a list of unique BuildMode")
if not self.build_types:
raise ValueError("build_types cannot be empty")
if not isinstance(self.build_types, list):
raise TypeError("build_types must be a list")
if not all(
isinstance(build_type, BuildType) for build_type in self.build_types
):
raise TypeError("build_types must be a list of BuildType")
if len(self.build_types) != len(set(self.build_types)):
raise ValueError("build_types must be a list of unique BuildType")
if not isinstance(self.publish_option, PublishOption):
raise TypeError("publish_option must be a PublishOption")
if not isinstance(self.test_option, TestOption):
raise TypeError("test_option must be a TestOption")
if not self.triggers:
raise ValueError("triggers cannot be empty")
if not isinstance(self.triggers, list):
raise TypeError("triggers must be a list")
if not all(isinstance(trigger, Trigger) for trigger in self.triggers):
raise TypeError("triggers must be a list of Trigger")
if len(self.triggers) != len(set(self.triggers)):
raise ValueError("triggers must be a list of unique Trigger")
@dataclass
class Distro:
"""Represents a Linux, Windows or macOS distribution with specifications.
Raises:
ValueError: If any of the required fields are empty.
TypeError: If any of the required fields are of the wrong type.
"""
os_name: str
os_version: str = ""
compiler_name: str = ""
compiler_version: str = ""
image_sha: str = ""
specs: list[Spec] = field(default_factory=list)
def __post_init__(self):
if not self.os_name:
raise ValueError("os_name cannot be empty")
if not isinstance(self.os_name, str):
raise TypeError("os_name must be a string")
if self.os_version and not isinstance(self.os_version, str):
raise TypeError("os_version must be a string")
if self.compiler_name and not isinstance(self.compiler_name, str):
raise TypeError("compiler_name must be a string")
if self.compiler_version and not isinstance(self.compiler_version, str):
raise TypeError("compiler_version must be a string")
if self.image_sha and not isinstance(self.image_sha, str):
raise TypeError("image_sha must be a string")
if not self.specs:
raise ValueError("specs cannot be empty")
if not isinstance(self.specs, list):
raise TypeError("specs must be a list")
if not all(isinstance(spec, Spec) for spec in self.specs):
raise TypeError("specs must be a list of Spec")
if not is_unique(self.specs):
raise ValueError("specs must be a list of unique Spec")


@@ -0,0 +1,743 @@
import pytest
from helpers.defs import *
from helpers.enums import *
from helpers.funcs import *
def test_config_valid_none_image():
assert Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image=None,
)
def test_config_valid_empty_image():
assert Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="install",
build_type="Debug",
enable_tests=False,
enable_package=True,
enable_image=False,
runs_on=["label"],
image="",
)
def test_config_valid_with_image():
assert Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="install",
build_type="Release",
enable_tests=False,
enable_package=True,
enable_image=True,
runs_on=["label"],
image="image",
)
def test_config_raises_on_empty_config_name():
with pytest.raises(ValueError):
Config(
config_name="",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_wrong_config_name():
with pytest.raises(TypeError):
Config(
config_name=123,
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_empty_cmake_args():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_wrong_cmake_args():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args=123,
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_empty_cmake_target():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_invalid_cmake_target():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="invalid",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_wrong_cmake_target():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target=123,
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_empty_build_type():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_invalid_build_type():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="invalid",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_wrong_build_type():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type=123,
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_wrong_enable_tests():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=123,
enable_package=False,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_wrong_enable_package():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=123,
enable_image=False,
runs_on=["label"],
image="image",
)
def test_config_raises_on_wrong_enable_image():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=True,
enable_image=123,
runs_on=["label"],
image="image",
)
def test_config_raises_on_none_runs_on():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=None,
image="image",
)
def test_config_raises_on_empty_runs_on():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=[],
image="image",
)
def test_config_raises_on_invalid_runs_on():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=[""],
image="image",
)
def test_config_raises_on_wrong_runs_on():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=[123],
image="image",
)
def test_config_raises_on_duplicate_runs_on():
with pytest.raises(ValueError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label", "label"],
image="image",
)
def test_config_raises_on_wrong_image():
with pytest.raises(TypeError):
Config(
config_name="config",
cmake_args="-Doption=ON",
cmake_target="all",
build_type="Debug",
enable_tests=True,
enable_package=False,
enable_image=False,
runs_on=["label"],
image=123,
)
def test_spec_valid():
assert Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_none_archs():
with pytest.raises(ValueError):
Spec(
archs=None,
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_empty_archs():
with pytest.raises(ValueError):
Spec(
archs=[],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_wrong_archs():
with pytest.raises(TypeError):
Spec(
archs=[123],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_duplicate_archs():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64, Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_wrong_build_option():
with pytest.raises(TypeError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=123,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_none_build_modes():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=None,
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_empty_build_modes():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_wrong_build_modes():
with pytest.raises(TypeError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[123],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_none_build_types():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=None,
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_empty_build_types():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_wrong_build_types():
with pytest.raises(TypeError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[123],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_duplicate_build_types():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG, BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_wrong_publish_option():
with pytest.raises(TypeError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=123,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_wrong_test_option():
with pytest.raises(TypeError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=123,
triggers=[Trigger.COMMIT],
)
def test_spec_raises_on_none_triggers():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=None,
)
def test_spec_raises_on_empty_triggers():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[],
)
def test_spec_raises_on_wrong_triggers():
with pytest.raises(TypeError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[123],
)
def test_spec_raises_on_duplicate_triggers():
with pytest.raises(ValueError):
Spec(
archs=[Arch.LINUX_AMD64],
build_option=BuildOption.NONE,
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.NONE,
test_option=TestOption.NONE,
triggers=[Trigger.COMMIT, Trigger.COMMIT],
)
def test_distro_valid_none_image_sha():
assert Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha=None,
specs=[Spec()], # This is valid due to the default values.
)
def test_distro_valid_empty_os_compiler_image_sha():
assert Distro(
os_name="os_name",
os_version="",
compiler_name="",
compiler_version="",
image_sha="",
specs=[Spec()],
)
def test_distro_valid_with_image():
assert Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[Spec()],
)
def test_distro_raises_on_empty_os_name():
with pytest.raises(ValueError):
Distro(
os_name="",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[Spec()],
)
def test_distro_raises_on_wrong_os_name():
with pytest.raises(TypeError):
Distro(
os_name=123,
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[Spec()],
)
def test_distro_raises_on_wrong_os_version():
with pytest.raises(TypeError):
Distro(
os_name="os_name",
os_version=123,
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[Spec()],
)
def test_distro_raises_on_wrong_compiler_name():
with pytest.raises(TypeError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name=123,
compiler_version="compiler_version",
image_sha="image_sha",
specs=[Spec()],
)
def test_distro_raises_on_wrong_compiler_version():
with pytest.raises(TypeError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version=123,
image_sha="image_sha",
specs=[Spec()],
)
def test_distro_raises_on_wrong_image_sha():
with pytest.raises(TypeError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha=123,
specs=[Spec()],
)
def test_distro_raises_on_none_specs():
with pytest.raises(ValueError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=None,
)
def test_distro_raises_on_empty_specs():
with pytest.raises(ValueError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[],
)
def test_distro_raises_on_invalid_specs():
with pytest.raises(ValueError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[Spec(triggers=[])],
)
def test_distro_raises_on_duplicate_specs():
with pytest.raises(ValueError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[Spec(), Spec()],
)
def test_distro_raises_on_wrong_specs():
with pytest.raises(TypeError):
Distro(
os_name="os_name",
os_version="os_version",
compiler_name="compiler_name",
compiler_version="compiler_version",
image_sha="image_sha",
specs=[123],
)


@@ -0,0 +1,75 @@
from enum import StrEnum, auto
class Arch(StrEnum):
"""Represents architectures to build for."""
LINUX_AMD64 = "linux/amd64"
LINUX_ARM64 = "linux/arm64"
MACOS_ARM64 = "macos/arm64"
WINDOWS_AMD64 = "windows/amd64"
class BuildMode(StrEnum):
"""Represents whether to perform a unity or non-unity build."""
UNITY_OFF = auto()
UNITY_ON = auto()
class BuildOption(StrEnum):
"""Represents build options to enable."""
NONE = auto()
COVERAGE = auto()
# Address Sanitizer, which also includes Undefined Behavior Sanitizer.
SANITIZE_ASAN = auto()
# Thread Sanitizer, which also includes Undefined Behavior Sanitizer.
SANITIZE_TSAN = auto()
VOIDSTAR = auto()
class BuildType(StrEnum):
"""Represents the build type to use."""
DEBUG = auto()
RELEASE = auto()
PUBLISH = auto() # Release build without assertions.
class PublishOption(StrEnum):
"""Represents whether to publish a package, an image, or both."""
NONE = auto()
PACKAGE_ONLY = auto()
IMAGE_ONLY = auto()
PACKAGE_AND_IMAGE = auto()
class TestOption(StrEnum):
"""Represents test options to enable, specifically the reference fee to use."""
__test__ = False  # Tell pytest not to collect this class as a test.
NONE = "" # Use the default reference fee of 10.
REFERENCE_FEE_500 = "500"
REFERENCE_FEE_1000 = "1000"
class Platform(StrEnum):
"""Represents the platform to use."""
LINUX = "linux"
MACOS = "macos"
WINDOWS = "windows"
class Trigger(StrEnum):
"""Represents the trigger that caused the workflow to run."""
COMMIT = "commit"
LABEL = "label"
MERGE = "merge"
SCHEDULE = "schedule"


@@ -0,0 +1,235 @@
from helpers.defs import *
from helpers.enums import *
def generate_config_name(
os_name: str,
os_version: str | None,
compiler_name: str | None,
compiler_version: str | None,
arch: Arch,
build_type: BuildType,
build_mode: BuildMode,
build_option: BuildOption,
) -> str:
"""Create a configuration name based on the distro details and build
attributes.
The configuration name is used as the display name in the GitHub Actions
UI, and since GitHub truncates long names we have to make sure the most
important information is at the beginning of the name.
Args:
os_name (str): The OS name.
os_version (str): The OS version.
compiler_name (str): The compiler name.
compiler_version (str): The compiler version.
arch (Arch): The architecture.
build_type (BuildType): The build type.
build_mode (BuildMode): The build mode.
build_option (BuildOption): The build option.
Returns:
str: The configuration name.
Raises:
ValueError: If the OS name is empty.
"""
if not os_name:
raise ValueError("os_name cannot be empty")
config_name = os_name
if os_version:
config_name += f"-{os_version}"
if compiler_name:
config_name += f"-{compiler_name}"
if compiler_version:
config_name += f"-{compiler_version}"
if build_option == BuildOption.COVERAGE:
config_name += "-coverage"
elif build_option == BuildOption.VOIDSTAR:
config_name += "-voidstar"
elif build_option == BuildOption.SANITIZE_ASAN:
config_name += "-asan"
elif build_option == BuildOption.SANITIZE_TSAN:
config_name += "-tsan"
if build_type == BuildType.DEBUG:
config_name += "-debug"
elif build_type == BuildType.RELEASE:
config_name += "-release"
elif build_type == BuildType.PUBLISH:
config_name += "-publish"
if build_mode == BuildMode.UNITY_ON:
config_name += "-unity"
config_name += f"-{arch.value.split('/')[1]}"
return config_name
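The naming scheme above can be condensed into a minimal sketch; `sketch_config_name` is a stand-in for illustration, not the actual helper, and uses plain strings in place of the enums:

```python
def sketch_config_name(os_name, os_version="", compiler_name="",
                       compiler_version="", arch="linux/amd64",
                       build_type="debug", unity=False, option=""):
    if not os_name:
        raise ValueError("os_name cannot be empty")
    # Distro details come first, since GitHub truncates long display names.
    parts = [p for p in (os_name, os_version, compiler_name, compiler_version) if p]
    if option:
        parts.append(option)  # e.g. "coverage", "voidstar", "asan", "tsan"
    parts.append(build_type)  # "debug", "release", or "publish"
    if unity:
        parts.append("unity")
    parts.append(arch.split("/")[1])  # keep only the CPU architecture
    return "-".join(parts)


print(sketch_config_name("debian", "bookworm", "clang", "16",
                         arch="linux/arm64", build_type="publish",
                         unity=True, option="voidstar"))
# debian-bookworm-clang-16-voidstar-publish-unity-arm64
```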
def generate_cmake_args(
compiler_name: str | None,
compiler_version: str | None,
build_type: BuildType,
build_mode: BuildMode,
build_option: BuildOption,
test_option: TestOption,
) -> str:
"""Create the CMake arguments based on the build type and enabled build
options.
- All builds will have the `tests`, `werr`, and `xrpld` options.
- All builds will have the `wextra` option except for GCC 12 and Clang 16.
- All release builds will have the `assert` option.
- Set the unity option if specified.
- Set the coverage option if specified.
- Set the voidstar option if specified.
- Set the reference fee if specified.
Args:
compiler_name (str): The compiler name.
compiler_version (str): The compiler version.
build_type (BuildType): The build type.
build_mode (BuildMode): The build mode.
build_option (BuildOption): The build option.
test_option (TestOption): The test option.
Returns:
str: The CMake arguments.
"""
cmake_args = "-Dtests=ON -Dwerr=ON -Dxrpld=ON"
if f"{compiler_name}-{compiler_version}" not in [
"gcc-12",
"clang-16",
]:
cmake_args += " -Dwextra=ON"
if build_type == BuildType.RELEASE:
cmake_args += " -Dassert=ON"
if build_mode == BuildMode.UNITY_ON:
cmake_args += " -Dunity=ON"
if build_option == BuildOption.COVERAGE:
cmake_args += " -Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0"
elif build_option == BuildOption.SANITIZE_ASAN:
pass # TODO: Add ASAN-UBSAN flags.
elif build_option == BuildOption.SANITIZE_TSAN:
pass # TODO: Add TSAN-UBSAN flags.
elif build_option == BuildOption.VOIDSTAR:
cmake_args += " -Dvoidstar=ON"
if test_option != TestOption.NONE:
cmake_args += f" -DUNIT_TEST_REFERENCE_FEE={test_option.value}"
return cmake_args
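The argument rules listed in the docstring can be sketched as follows; `sketch_cmake_args` is a simplified stand-in (plain strings instead of the enums, sanitizer flags still omitted as in the TODOs above):

```python
def sketch_cmake_args(compiler, build_type="release", unity=False,
                      option="", fee=""):
    args = "-Dtests=ON -Dwerr=ON -Dxrpld=ON"
    if compiler not in ("gcc-12", "clang-16"):
        args += " -Dwextra=ON"
    if build_type == "release":  # "publish" builds skip assertions
        args += " -Dassert=ON"
    if unity:
        args += " -Dunity=ON"
    if option == "voidstar":
        args += " -Dvoidstar=ON"
    if fee:
        args += f" -DUNIT_TEST_REFERENCE_FEE={fee}"
    return args


print(sketch_cmake_args("clang-16", build_type="publish", unity=True,
                        option="voidstar"))
# -Dtests=ON -Dwerr=ON -Dxrpld=ON -Dunity=ON -Dvoidstar=ON
```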
def generate_cmake_target(os_name: str, build_type: BuildType) -> str:
"""Create the CMake target based on the build type.
The `install` target is used for Windows and for publishing a package, while
the `all` target is used for all other configurations.
Args:
os_name (str): The OS name.
build_type (BuildType): The build type.
Returns:
str: The CMake target.
"""
if os_name == "windows" or build_type == BuildType.PUBLISH:
return "install"
return "all"
def generate_enable_options(
os_name: str,
build_type: BuildType,
publish_option: PublishOption,
) -> tuple[bool, bool, bool]:
"""Create the enable flags based on the OS name, build type, and publish
option.
We build and test all configurations by default, except for Windows in
Debug, because it is too slow.
Args:
os_name (str): The OS name.
build_type (BuildType): The build type.
publish_option (PublishOption): The publish option.
Returns:
tuple: A tuple containing the enable test, enable package, and enable image flags.
"""
enable_tests = not (os_name == "windows" and build_type == BuildType.DEBUG)
enable_package = publish_option in [
PublishOption.PACKAGE_ONLY,
PublishOption.PACKAGE_AND_IMAGE,
]
enable_image = publish_option in [
PublishOption.IMAGE_ONLY,
PublishOption.PACKAGE_AND_IMAGE,
]
return enable_tests, enable_package, enable_image
def generate_image_name(
os_name: str,
os_version: str,
compiler_name: str,
compiler_version: str,
image_sha: str,
) -> str | None:
"""Create the Docker image name based on the distro details.
Args:
os_name (str): The OS name.
os_version (str): The OS version.
compiler_name (str): The compiler name.
compiler_version (str): The compiler version.
image_sha (str): The image SHA.
Returns:
str: The Docker image name or None if not applicable.
Raises:
ValueError: If any of the arguments is empty for Linux.
"""
if os_name in ("windows", "macos"):
return None
if not os_name:
raise ValueError("os_name cannot be empty")
if not os_version:
raise ValueError("os_version cannot be empty")
if not compiler_name:
raise ValueError("compiler_name cannot be empty")
if not compiler_version:
raise ValueError("compiler_version cannot be empty")
if not image_sha:
raise ValueError("image_sha cannot be empty")
return f"ghcr.io/xrplf/ci/{os_name}-{os_version}:{compiler_name}-{compiler_version}-{image_sha}"
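For reference, the flag-composition logic in `generate_cmake_args` above can be reduced to a standalone sketch; `BuildType` here is a minimal stand-in for the real enum in `helpers.enums`, not the actual helper:

```python
from enum import Enum

class BuildType(Enum):  # minimal stand-in for helpers.enums.BuildType
    DEBUG = "Debug"
    RELEASE = "Release"
    PUBLISH = "Publish"

def cmake_args(compiler: str, version: str, build_type: BuildType) -> str:
    # Base flags are always on; -Dwextra is skipped for gcc-12 and
    # clang-16, and -Dassert is only added for Release builds, matching
    # the helper above.
    args = "-Dtests=ON -Dwerr=ON -Dxrpld=ON"
    if f"{compiler}-{version}" not in ("gcc-12", "clang-16"):
        args += " -Dwextra=ON"
    if build_type == BuildType.RELEASE:
        args += " -Dassert=ON"
    return args

print(cmake_args("gcc", "14", BuildType.RELEASE))
# -Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dassert=ON
```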


@@ -0,0 +1,419 @@
import pytest
from helpers.enums import *
from helpers.funcs import *
def test_generate_config_name_a_b_c_d_debug_amd64():
assert (
generate_config_name(
"a",
"b",
"c",
"d",
Arch.LINUX_AMD64,
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
)
== "a-b-c-d-debug-amd64"
)
def test_generate_config_name_a_b_c_release_unity_arm64():
assert (
generate_config_name(
"a",
"b",
"c",
"",
Arch.LINUX_ARM64,
BuildType.RELEASE,
BuildMode.UNITY_ON,
BuildOption.NONE,
)
== "a-b-c-release-unity-arm64"
)
def test_generate_config_name_a_b_coverage_publish_amd64():
assert (
generate_config_name(
"a",
"b",
"",
"",
Arch.LINUX_AMD64,
BuildType.PUBLISH,
BuildMode.UNITY_OFF,
BuildOption.COVERAGE,
)
== "a-b-coverage-publish-amd64"
)
def test_generate_config_name_a_asan_debug_unity_arm64():
assert (
generate_config_name(
"a",
"",
"",
"",
Arch.LINUX_ARM64,
BuildType.DEBUG,
BuildMode.UNITY_ON,
BuildOption.SANITIZE_ASAN,
)
== "a-asan-debug-unity-arm64"
)
def test_generate_config_name_a_c_tsan_release_amd64():
assert (
generate_config_name(
"a",
"",
"c",
"",
Arch.LINUX_AMD64,
BuildType.RELEASE,
BuildMode.UNITY_OFF,
BuildOption.SANITIZE_TSAN,
)
== "a-c-tsan-release-amd64"
)
def test_generate_config_name_a_d_voidstar_debug_amd64():
assert (
generate_config_name(
"a",
"",
"",
"d",
Arch.LINUX_AMD64,
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.VOIDSTAR,
)
== "a-d-voidstar-debug-amd64"
)
def test_generate_config_name_raises_on_none_os_name():
with pytest.raises(ValueError):
generate_config_name(
None,
"b",
"c",
"d",
Arch.LINUX_AMD64,
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
)
def test_generate_config_name_raises_on_empty_os_name():
with pytest.raises(ValueError):
generate_config_name(
"",
"b",
"c",
"d",
Arch.LINUX_AMD64,
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
)
def test_generate_cmake_args_a_b_debug():
assert (
generate_cmake_args(
"a",
"b",
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON"
)
def test_generate_cmake_args_gcc_12_no_wextra():
assert (
generate_cmake_args(
"gcc",
"12",
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON"
)
def test_generate_cmake_args_clang_16_no_wextra():
assert (
generate_cmake_args(
"clang",
"16",
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON"
)
def test_generate_cmake_args_a_b_release():
assert (
generate_cmake_args(
"a",
"b",
BuildType.RELEASE,
BuildMode.UNITY_OFF,
BuildOption.NONE,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dassert=ON"
)
def test_generate_cmake_args_a_b_publish():
assert (
generate_cmake_args(
"a",
"b",
BuildType.PUBLISH,
BuildMode.UNITY_OFF,
BuildOption.NONE,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON"
)
def test_generate_cmake_args_a_b_unity():
assert (
generate_cmake_args(
"a",
"b",
BuildType.DEBUG,
BuildMode.UNITY_ON,
BuildOption.NONE,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dunity=ON"
)
def test_generate_cmake_args_a_b_coverage():
assert (
generate_cmake_args(
"a",
"b",
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.COVERAGE,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dcoverage=ON -Dcoverage_format=xml -DCODE_COVERAGE_VERBOSE=ON -DCMAKE_C_FLAGS=-O0 -DCMAKE_CXX_FLAGS=-O0"
)
def test_generate_cmake_args_a_b_voidstar():
assert (
generate_cmake_args(
"a",
"b",
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.VOIDSTAR,
TestOption.NONE,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dvoidstar=ON"
)
def test_generate_cmake_args_a_b_reference_fee_500():
assert (
generate_cmake_args(
"a",
"b",
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
TestOption.REFERENCE_FEE_500,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -DUNIT_TEST_REFERENCE_FEE=500"
)
def test_generate_cmake_args_a_b_reference_fee_1000():
assert (
generate_cmake_args(
"a",
"b",
BuildType.DEBUG,
BuildMode.UNITY_OFF,
BuildOption.NONE,
TestOption.REFERENCE_FEE_1000,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -DUNIT_TEST_REFERENCE_FEE=1000"
)
def test_generate_cmake_args_a_b_multiple():
assert (
generate_cmake_args(
"a",
"b",
BuildType.RELEASE,
BuildMode.UNITY_ON,
BuildOption.VOIDSTAR,
TestOption.REFERENCE_FEE_500,
)
== "-Dtests=ON -Dwerr=ON -Dxrpld=ON -Dwextra=ON -Dassert=ON -Dunity=ON -Dvoidstar=ON -DUNIT_TEST_REFERENCE_FEE=500"
)
def test_generate_cmake_target_linux_debug():
assert generate_cmake_target("linux", BuildType.DEBUG) == "all"
def test_generate_cmake_target_linux_release():
assert generate_cmake_target("linux", BuildType.RELEASE) == "all"
def test_generate_cmake_target_linux_publish():
assert generate_cmake_target("linux", BuildType.PUBLISH) == "install"
def test_generate_cmake_target_macos_debug():
assert generate_cmake_target("macos", BuildType.DEBUG) == "all"
def test_generate_cmake_target_macos_release():
assert generate_cmake_target("macos", BuildType.RELEASE) == "all"
def test_generate_cmake_target_macos_publish():
assert generate_cmake_target("macos", BuildType.PUBLISH) == "install"
def test_generate_cmake_target_windows_debug():
assert generate_cmake_target("windows", BuildType.DEBUG) == "install"
def test_generate_cmake_target_windows_release():
assert generate_cmake_target("windows", BuildType.RELEASE) == "install"
def test_generate_cmake_target_windows_publish():
assert generate_cmake_target("windows", BuildType.PUBLISH) == "install"
def test_generate_enable_options_linux_debug_no_publish():
assert generate_enable_options("linux", BuildType.DEBUG, PublishOption.NONE) == (
True,
False,
False,
)
def test_generate_enable_options_linux_release_package_only():
assert generate_enable_options(
"linux", BuildType.RELEASE, PublishOption.PACKAGE_ONLY
) == (True, True, False)
def test_generate_enable_options_linux_publish_image_only():
assert generate_enable_options(
"linux", BuildType.PUBLISH, PublishOption.IMAGE_ONLY
) == (True, False, True)
def test_generate_enable_options_macos_debug_package_only():
assert generate_enable_options(
"macos", BuildType.DEBUG, PublishOption.PACKAGE_ONLY
) == (True, True, False)
def test_generate_enable_options_macos_release_image_only():
assert generate_enable_options(
"macos", BuildType.RELEASE, PublishOption.IMAGE_ONLY
) == (True, False, True)
def test_generate_enable_options_macos_publish_package_and_image():
assert generate_enable_options(
"macos", BuildType.PUBLISH, PublishOption.PACKAGE_AND_IMAGE
) == (True, True, True)
def test_generate_enable_options_windows_debug_package_and_image():
assert generate_enable_options(
"windows", BuildType.DEBUG, PublishOption.PACKAGE_AND_IMAGE
) == (False, True, True)
def test_generate_enable_options_windows_release_no_publish():
assert generate_enable_options(
"windows", BuildType.RELEASE, PublishOption.NONE
) == (True, False, False)
def test_generate_enable_options_windows_publish_image_only():
assert generate_enable_options(
"windows", BuildType.PUBLISH, PublishOption.IMAGE_ONLY
) == (True, False, True)
def test_generate_image_name_linux():
assert generate_image_name("a", "b", "c", "d", "e") == "ghcr.io/xrplf/ci/a-b:c-d-e"
def test_generate_image_name_linux_raises_on_empty_os_name():
with pytest.raises(ValueError):
generate_image_name("", "b", "c", "d", "e")
def test_generate_image_name_linux_raises_on_empty_os_version():
with pytest.raises(ValueError):
generate_image_name("a", "", "c", "d", "e")
def test_generate_image_name_linux_raises_on_empty_compiler_name():
with pytest.raises(ValueError):
generate_image_name("a", "b", "", "d", "e")
def test_generate_image_name_linux_raises_on_empty_compiler_version():
with pytest.raises(ValueError):
generate_image_name("a", "b", "c", "", "e")
def test_generate_image_name_linux_raises_on_empty_image_sha():
with pytest.raises(ValueError):
generate_image_name("a", "b", "c", "d", "")
def test_generate_image_name_macos():
assert generate_image_name("macos", "", "", "", "") is None
def test_generate_image_name_macos_extra():
assert generate_image_name("macos", "value", "does", "not", "matter") is None
def test_generate_image_name_windows():
assert generate_image_name("windows", "", "", "", "") is None
def test_generate_image_name_windows_extra():
assert generate_image_name("windows", "value", "does", "not", "matter") is None


@@ -0,0 +1,30 @@
import json
from dataclasses import asdict, is_dataclass
from typing import Any
def is_unique(items: list[Any]) -> bool:
"""Check if a list of dataclass objects contains only unique items.
As the items may not be hashable, we convert them to JSON strings first, and
then check whether the list of strings is the same size as the set of strings.
Args:
items: The list of dataclass objects to check.
Returns:
True if the list contains only unique items, False otherwise.
Raises:
TypeError: If any of the items is not a dataclass instance.
"""
serialized = list()
seen = set()
for item in items:
if not is_dataclass(item) or isinstance(item, type):
raise TypeError("items must be a list of dataclasses")
j = json.dumps(asdict(item))
serialized.append(j)
seen.add(j)
return len(serialized) == len(seen)
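The JSON-serialization trick above can be demonstrated standalone with plain dataclasses; `Config` is a hypothetical example type, not one of the real helpers:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Config:
    name: str
    flags: list[str]  # lists make instances unhashable, hence the JSON detour

def all_unique(items) -> bool:
    # Serialize each dataclass to a canonical JSON string, then compare
    # the number of items to the number of distinct strings.
    keys = [json.dumps(asdict(item), sort_keys=True) for item in items]
    return len(keys) == len(set(keys))

print(all_unique([Config("a", ["-Dunity=ON"]), Config("b", [])]))       # True
print(all_unique([Config("a", []), Config("b", []), Config("a", [])]))  # False
```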


@@ -0,0 +1,40 @@
from dataclasses import dataclass
import pytest
from helpers.unique import *
@dataclass
class ExampleInt:
value: int
@dataclass
class ExampleList:
values: list[int]
def test_unique_int():
assert is_unique([ExampleInt(1), ExampleInt(2), ExampleInt(3)])
def test_not_unique_int():
assert not is_unique([ExampleInt(1), ExampleInt(2), ExampleInt(1)])
def test_unique_list():
assert is_unique(
[ExampleList([1, 2, 3]), ExampleList([4, 5, 6]), ExampleList([7, 8, 9])]
)
def test_not_unique_list():
assert not is_unique(
[ExampleList([1, 2, 3]), ExampleList([4, 5, 6]), ExampleList([1, 2, 3])]
)
def test_unique_raises_on_non_dataclass():
with pytest.raises(TypeError):
is_unique([1, 2, 3])


@@ -1,212 +0,0 @@
{
"architecture": [
{
"platform": "linux/amd64",
"runner": ["self-hosted", "Linux", "X64", "heavy"]
},
{
"platform": "linux/arm64",
"runner": ["self-hosted", "Linux", "ARM64", "heavy-arm64"]
}
],
"os": [
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "12",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "13",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "gcc",
"compiler_version": "15",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "16",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "17",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "18",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "19",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "bookworm",
"compiler_name": "clang",
"compiler_version": "20",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "gcc",
"compiler_version": "15",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "clang",
"compiler_version": "20",
"image_sha": "0525eae"
},
{
"distro_name": "debian",
"distro_version": "trixie",
"compiler_name": "clang",
"compiler_version": "21",
"image_sha": "0525eae"
},
{
"distro_name": "rhel",
"distro_version": "8",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "8",
"compiler_name": "clang",
"compiler_version": "any",
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "gcc",
"compiler_version": "12",
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "gcc",
"compiler_version": "13",
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "9",
"compiler_name": "clang",
"compiler_version": "any",
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "10",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "e1782cd"
},
{
"distro_name": "rhel",
"distro_version": "10",
"compiler_name": "clang",
"compiler_version": "any",
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "jammy",
"compiler_name": "gcc",
"compiler_version": "12",
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "gcc",
"compiler_version": "13",
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "gcc",
"compiler_version": "14",
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "16",
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "17",
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "18",
"image_sha": "e1782cd"
},
{
"distro_name": "ubuntu",
"distro_version": "noble",
"compiler_name": "clang",
"compiler_version": "19",
"image_sha": "e1782cd"
}
],
"build_type": ["Debug", "Release"],
"cmake_args": ["-Dunity=OFF", "-Dunity=ON"]
}

.github/scripts/strategy-matrix/linux.py vendored Executable file

@@ -0,0 +1,385 @@
from helpers.defs import *
from helpers.enums import *
# The default CI image SHAs to use, which can be specified per distro group and
# can be overridden for individual distros, which is useful when debugging using
# a locally built CI image. See https://github.com/XRPLF/ci for the images.
DEBIAN_SHA = "sha-ca4517d"
RHEL_SHA = "sha-ca4517d"
UBUNTU_SHA = "sha-84afd81"
# We only build a selection of configurations for the various triggers to reduce
# pipeline runtime. Across all three operating systems we aim to cover all GCC
# and Clang versions, while not duplicating configurations too much. See also
# the README for more details.
# The Debian distros to build configurations for.
#
# We have the following distros available:
# - Debian Bullseye: GCC 12-15
# - Debian Bookworm: GCC 13-15, Clang 16-20
# - Debian Trixie: GCC 14-15, Clang 20-21
DEBIAN_DISTROS = [
Distro(
os_name="debian",
os_version="bullseye",
compiler_name="gcc",
compiler_version="14",
image_sha=DEBIAN_SHA,
specs=[
Spec(
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
publish_option=PublishOption.PACKAGE_ONLY,
triggers=[Trigger.COMMIT, Trigger.LABEL],
),
Spec(
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.PUBLISH],
publish_option=PublishOption.PACKAGE_AND_IMAGE,
triggers=[Trigger.MERGE],
),
Spec(
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="bullseye",
compiler_name="gcc",
compiler_version="15",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
build_modes=[BuildMode.UNITY_ON],
build_option=BuildOption.COVERAGE,
build_types=[BuildType.DEBUG],
triggers=[Trigger.COMMIT, Trigger.MERGE],
),
Spec(
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="bookworm",
compiler_name="gcc",
compiler_version="15",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="bookworm",
compiler_name="clang",
compiler_version="16",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
build_modes=[BuildMode.UNITY_OFF],
build_option=BuildOption.VOIDSTAR,
build_types=[BuildType.DEBUG],
publish_option=PublishOption.IMAGE_ONLY,
triggers=[Trigger.COMMIT],
),
Spec(
archs=[Arch.LINUX_ARM64],
build_modes=[BuildMode.UNITY_ON],
build_types=[BuildType.RELEASE],
triggers=[Trigger.MERGE],
),
Spec(
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="bookworm",
compiler_name="clang",
compiler_version="17",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="bookworm",
compiler_name="clang",
compiler_version="18",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="bookworm",
compiler_name="clang",
compiler_version="19",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="trixie",
compiler_name="gcc",
compiler_version="15",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="debian",
os_version="trixie",
compiler_name="clang",
compiler_version="21",
image_sha=DEBIAN_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
triggers=[Trigger.MERGE],
),
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
]
# The RHEL distros to build configurations for.
#
# We have the following distros available:
# - RHEL 8: GCC 14, Clang "any"
# - RHEL 9: GCC 12-14, Clang "any"
# - RHEL 10: GCC 14, Clang "any"
RHEL_DISTROS = [
Distro(
os_name="rhel",
os_version="8",
compiler_name="gcc",
compiler_version="14",
image_sha=RHEL_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="rhel",
os_version="8",
compiler_name="clang",
compiler_version="any",
image_sha=RHEL_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="rhel",
os_version="9",
compiler_name="gcc",
compiler_version="12",
image_sha=RHEL_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
build_modes=[BuildMode.UNITY_ON],
build_types=[BuildType.DEBUG],
triggers=[Trigger.COMMIT],
),
Spec(
archs=[Arch.LINUX_AMD64],
build_modes=[BuildMode.UNITY_ON],
build_types=[BuildType.RELEASE],
triggers=[Trigger.MERGE],
),
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="rhel",
os_version="9",
compiler_name="gcc",
compiler_version="13",
image_sha=RHEL_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="rhel",
os_version="10",
compiler_name="clang",
compiler_version="any",
image_sha=RHEL_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
]
# The Ubuntu distros to build configurations for.
#
# We have the following distros available:
# - Ubuntu Jammy (22.04): GCC 12
# - Ubuntu Noble (24.04): GCC 13-14, Clang 16-20
UBUNTU_DISTROS = [
Distro(
os_name="ubuntu",
os_version="jammy",
compiler_name="gcc",
compiler_version="12",
image_sha=UBUNTU_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="ubuntu",
os_version="noble",
compiler_name="gcc",
compiler_version="13",
image_sha=UBUNTU_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
build_modes=[BuildMode.UNITY_ON],
build_types=[BuildType.RELEASE],
triggers=[Trigger.MERGE],
),
Spec(
archs=[Arch.LINUX_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="ubuntu",
os_version="noble",
compiler_name="gcc",
compiler_version="14",
image_sha=UBUNTU_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="ubuntu",
os_version="noble",
compiler_name="clang",
compiler_version="17",
image_sha=UBUNTU_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
triggers=[Trigger.MERGE],
),
Spec(
archs=[Arch.LINUX_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="ubuntu",
os_version="noble",
compiler_name="clang",
compiler_version="18",
image_sha=UBUNTU_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="ubuntu",
os_version="noble",
compiler_name="clang",
compiler_version="19",
image_sha=UBUNTU_SHA,
specs=[
Spec(
archs=[Arch.LINUX_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
Distro(
os_name="ubuntu",
os_version="noble",
compiler_name="clang",
compiler_version="20",
image_sha=UBUNTU_SHA,
specs=[
Spec(
archs=[Arch.LINUX_AMD64],
build_modes=[BuildMode.UNITY_ON],
build_types=[BuildType.DEBUG],
triggers=[Trigger.COMMIT],
),
Spec(
archs=[Arch.LINUX_AMD64],
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.RELEASE],
triggers=[Trigger.MERGE],
),
Spec(
archs=[Arch.LINUX_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
]


@@ -1,22 +0,0 @@
{
"architecture": [
{
"platform": "macos/arm64",
"runner": ["self-hosted", "macOS", "ARM64", "mac-runner-m1"]
}
],
"os": [
{
"distro_name": "macos",
"distro_version": "",
"compiler_name": "",
"compiler_version": "",
"image_sha": ""
}
],
"build_type": ["Debug", "Release"],
"cmake_args": [
"-Dunity=OFF -DCMAKE_POLICY_VERSION_MINIMUM=3.5",
"-Dunity=ON -DCMAKE_POLICY_VERSION_MINIMUM=3.5"
]
}

.github/scripts/strategy-matrix/macos.py vendored Executable file

@@ -0,0 +1,20 @@
from helpers.defs import *
from helpers.enums import *
DISTROS = [
Distro(
os_name="macos",
specs=[
Spec(
archs=[Arch.MACOS_ARM64],
build_modes=[BuildMode.UNITY_OFF],
build_types=[BuildType.DEBUG],
triggers=[Trigger.COMMIT, Trigger.MERGE],
),
Spec(
archs=[Arch.MACOS_ARM64],
triggers=[Trigger.SCHEDULE],
),
],
),
]


@@ -1,19 +0,0 @@
{
"architecture": [
{
"platform": "windows/amd64",
"runner": ["self-hosted", "Windows", "devbox"]
}
],
"os": [
{
"distro_name": "windows",
"distro_version": "",
"compiler_name": "",
"compiler_version": "",
"image_sha": ""
}
],
"build_type": ["Debug", "Release"],
"cmake_args": ["-Dunity=OFF", "-Dunity=ON"]
}

.github/scripts/strategy-matrix/windows.py vendored Executable file

@@ -0,0 +1,20 @@
from helpers.defs import *
from helpers.enums import *
DISTROS = [
Distro(
os_name="windows",
specs=[
Spec(
archs=[Arch.WINDOWS_AMD64],
build_modes=[BuildMode.UNITY_ON],
build_types=[BuildType.RELEASE],
triggers=[Trigger.COMMIT, Trigger.MERGE],
),
Spec(
archs=[Arch.WINDOWS_AMD64],
triggers=[Trigger.SCHEDULE],
),
],
),
]


@@ -112,9 +112,10 @@ jobs:
strategy:
fail-fast: false
matrix:
os: [linux, macos, windows]
platform: [linux, macos, windows]
with:
os: ${{ matrix.os }}
platform: ${{ matrix.platform }}
trigger: commit
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}


@@ -66,9 +66,10 @@ jobs:
strategy:
fail-fast: ${{ github.event_name == 'merge_group' }}
matrix:
os: [linux, macos, windows]
platform: [linux, macos, windows]
with:
os: ${{ matrix.os }}
strategy_matrix: ${{ github.event_name == 'schedule' && 'all' || 'minimal' }}
platform: ${{ matrix.platform }}
# The workflow dispatch event uses the same trigger as the schedule event.
trigger: ${{ github.event_name == 'push' && 'merge' || 'schedule' }}
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}


@@ -22,7 +22,7 @@ defaults:
shell: bash
env:
BUILD_DIR: .build
BUILD_DIR: build
NPROC_SUBTRACT: 2
jobs:
@@ -36,7 +36,7 @@ jobs:
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Get number of processors
uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
uses: XRPLF/actions/get-nproc@2ece4ec6ab7de266859a6f053571425b2bd684b6
id: nproc
with:
subtract: ${{ env.NPROC_SUBTRACT }}


@@ -3,16 +3,6 @@ name: Build and test configuration
on:
workflow_call:
inputs:
build_dir:
description: "The directory where to build."
required: true
type: string
build_only:
description: 'Whether to only build or to build and test the code ("true", "false").'
required: true
type: boolean
build_type:
description: 'The build type to use ("Debug", "Release").'
type: string
@@ -29,6 +19,21 @@ on:
type: string
required: true
enable_tests:
description: "Whether to run the tests."
required: true
type: boolean
enable_package:
description: "Whether to publish a package."
required: true
type: boolean
enable_image:
description: "Whether to publish an image."
required: true
type: boolean
runs_on:
description: Runner to run the job on as a JSON string
required: true
@@ -59,6 +64,11 @@ defaults:
run:
shell: bash
env:
# Conan installs the generators in the build/generators directory, see the
# layout() method in conanfile.py. We then run CMake from the build directory.
BUILD_DIR: build
jobs:
build-and-test:
name: ${{ inputs.config_name }}
@@ -71,13 +81,13 @@ jobs:
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/.github/actions/cleanup-workspace@01b244d2718865d427b499822fbd3f15e7197fcc
uses: XRPLF/actions/cleanup-workspace@2ece4ec6ab7de266859a6f053571425b2bd684b6
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@99685816bb60a95a66852f212f382580e180df3a
uses: XRPLF/actions/prepare-runner@2ece4ec6ab7de266859a6f053571425b2bd684b6
with:
disable_ccache: false
@@ -85,7 +95,7 @@ jobs:
uses: ./.github/actions/print-env
- name: Get number of processors
uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
uses: XRPLF/actions/get-nproc@2ece4ec6ab7de266859a6f053571425b2bd684b6
id: nproc
with:
subtract: ${{ inputs.nproc_subtract }}
@@ -96,7 +106,6 @@ jobs:
- name: Build dependencies
uses: ./.github/actions/build-deps
with:
build_dir: ${{ inputs.build_dir }}
build_nproc: ${{ steps.nproc.outputs.nproc }}
build_type: ${{ inputs.build_type }}
# Set the verbosity to "quiet" for Windows to avoid an excessive
@@ -104,7 +113,7 @@ jobs:
log_verbosity: ${{ runner.os == 'Windows' && 'quiet' || 'verbose' }}
- name: Configure CMake
working-directory: ${{ inputs.build_dir }}
working-directory: ${{ env.BUILD_DIR }}
env:
BUILD_TYPE: ${{ inputs.build_type }}
CMAKE_ARGS: ${{ inputs.cmake_args }}
@@ -117,7 +126,7 @@ jobs:
..
- name: Build the binary
working-directory: ${{ inputs.build_dir }}
working-directory: ${{ env.BUILD_DIR }}
env:
BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
BUILD_TYPE: ${{ inputs.build_type }}
@@ -132,8 +141,6 @@ jobs:
- name: Upload the binary (Linux)
if: ${{ github.repository_owner == 'XRPLF' && runner.os == 'Linux' }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
env:
BUILD_DIR: ${{ inputs.build_dir }}
with:
name: xrpld-${{ inputs.config_name }}
path: ${{ env.BUILD_DIR }}/xrpld
@@ -142,7 +149,7 @@ jobs:
- name: Check linking (Linux)
if: ${{ runner.os == 'Linux' }}
working-directory: ${{ inputs.build_dir }}
working-directory: ${{ env.BUILD_DIR }}
run: |
ldd ./xrpld
if [ "$(ldd ./xrpld | grep -E '(libstdc\+\+|libgcc)' | wc -l)" -eq 0 ]; then
@@ -154,13 +161,13 @@ jobs:
- name: Verify presence of instrumentation (Linux)
if: ${{ runner.os == 'Linux' && env.ENABLED_VOIDSTAR == 'true' }}
working-directory: ${{ inputs.build_dir }}
working-directory: ${{ env.BUILD_DIR }}
run: |
./xrpld --version | grep libvoidstar
- name: Run the separate tests
if: ${{ !inputs.build_only }}
working-directory: ${{ inputs.build_dir }}
if: ${{ inputs.enable_tests }}
working-directory: ${{ env.BUILD_DIR }}
# Windows locks some of the build files while running tests, and parallel jobs can collide
env:
BUILD_TYPE: ${{ inputs.build_type }}
@@ -172,15 +179,15 @@ jobs:
-j "${PARALLELISM}"
- name: Run the embedded tests
if: ${{ !inputs.build_only }}
working-directory: ${{ runner.os == 'Windows' && format('{0}/{1}', inputs.build_dir, inputs.build_type) || inputs.build_dir }}
if: ${{ inputs.enable_tests }}
working-directory: ${{ runner.os == 'Windows' && format('{0}/{1}', env.BUILD_DIR, inputs.build_type) || env.BUILD_DIR }}
env:
BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
run: |
./xrpld --unittest --unittest-jobs "${BUILD_NPROC}"
- name: Debug failure (Linux)
if: ${{ failure() && runner.os == 'Linux' && !inputs.build_only }}
if: ${{ (failure() || cancelled()) && runner.os == 'Linux' && inputs.enable_tests }}
run: |
echo "IPv4 local port range:"
cat /proc/sys/net/ipv4/ip_local_port_range
@@ -188,8 +195,8 @@ jobs:
netstat -an
- name: Prepare coverage report
if: ${{ !inputs.build_only && env.ENABLED_COVERAGE == 'true' }}
working-directory: ${{ inputs.build_dir }}
if: ${{ github.repository_owner == 'XRPLF' && env.ENABLED_COVERAGE == 'true' }}
working-directory: ${{ env.BUILD_DIR }}
env:
BUILD_NPROC: ${{ steps.nproc.outputs.nproc }}
BUILD_TYPE: ${{ inputs.build_type }}
@@ -201,13 +208,13 @@ jobs:
--target coverage
- name: Upload coverage report
if: ${{ github.repository_owner == 'XRPLF' && !inputs.build_only && env.ENABLED_COVERAGE == 'true' }}
if: ${{ github.repository_owner == 'XRPLF' && env.ENABLED_COVERAGE == 'true' }}
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
with:
disable_search: true
disable_telem: true
fail_ci_if_error: true
files: ${{ inputs.build_dir }}/coverage.xml
files: ${{ env.BUILD_DIR }}/coverage.xml
plugins: noop
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true


@@ -8,21 +8,14 @@ name: Build and test
on:
workflow_call:
inputs:
build_dir:
description: "The directory where to build."
platform:
description: "The platform to generate the strategy matrix for ('linux', 'macos', 'windows'). If not provided all platforms are used."
required: false
type: string
default: ".build"
os:
description: 'The operating system to use for the build ("linux", "macos", "windows").'
trigger:
description: "The trigger that caused the workflow to run ('commit', 'label', 'merge', 'schedule')."
required: true
type: string
strategy_matrix:
# TODO: Support additional strategies, e.g. "ubuntu" for generating all Ubuntu configurations.
description: 'The strategy matrix to use for generating the configurations ("minimal", "all").'
required: false
type: string
default: "minimal"
secrets:
CODECOV_TOKEN:
description: "The Codecov token to use for uploading coverage reports."
@@ -33,8 +26,8 @@ jobs:
generate-matrix:
uses: ./.github/workflows/reusable-strategy-matrix.yml
with:
os: ${{ inputs.os }}
strategy_matrix: ${{ inputs.strategy_matrix }}
platform: ${{ inputs.platform }}
trigger: ${{ inputs.trigger }}
# Build and test the binary for each configuration.
build-test-config:
@@ -46,13 +39,14 @@ jobs:
matrix: ${{ fromJson(needs.generate-matrix.outputs.matrix) }}
max-parallel: 10
with:
build_dir: ${{ inputs.build_dir }}
build_only: ${{ matrix.build_only }}
build_type: ${{ matrix.build_type }}
cmake_args: ${{ matrix.cmake_args }}
cmake_target: ${{ matrix.cmake_target }}
runs_on: ${{ toJSON(matrix.architecture.runner) }}
image: ${{ contains(matrix.architecture.platform, 'linux') && format('ghcr.io/xrplf/ci/{0}-{1}:{2}-{3}-sha-{4}', matrix.os.distro_name, matrix.os.distro_version, matrix.os.compiler_name, matrix.os.compiler_version, matrix.os.image_sha) || '' }}
enable_tests: ${{ matrix.enable_tests }}
enable_package: ${{ matrix.enable_package }}
enable_image: ${{ matrix.enable_image }}
runs_on: ${{ toJson(matrix.runs_on) }}
image: ${{ matrix.image }}
config_name: ${{ matrix.config_name }}
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

View File

@@ -27,6 +27,8 @@ jobs:
run: .github/scripts/rename/cmake.sh .
- name: Check binary name
run: .github/scripts/rename/binary.sh .
- name: Check namespaces
run: .github/scripts/rename/namespace.sh .
- name: Check for differences
env:
MESSAGE: |

View File

@@ -3,16 +3,14 @@ name: Generate strategy matrix
on:
workflow_call:
inputs:
os:
description: 'The operating system to use for the build ("linux", "macos", "windows").'
platform:
description: "The platform to generate the strategy matrix for ('linux', 'macos', 'windows'). If not provided all platforms are used."
required: false
type: string
strategy_matrix:
# TODO: Support additional strategies, e.g. "ubuntu" for generating all Ubuntu configurations.
description: 'The strategy matrix to use for generating the configurations ("minimal", "all").'
required: false
trigger:
description: "The trigger that caused the workflow to run ('commit', 'label', 'merge', 'schedule')."
required: true
type: string
default: "minimal"
outputs:
matrix:
description: "The generated strategy matrix."
@@ -40,6 +38,6 @@ jobs:
working-directory: .github/scripts/strategy-matrix
id: generate
env:
GENERATE_CONFIG: ${{ inputs.os != '' && format('--config={0}.json', inputs.os) || '' }}
GENERATE_OPTION: ${{ inputs.strategy_matrix == 'all' && '--all' || '' }}
run: ./generate.py ${GENERATE_OPTION} ${GENERATE_CONFIG} >> "${GITHUB_OUTPUT}"
PLATFORM: ${{ inputs.platform != '' && format('--platform={0}', inputs.platform) || '' }}
TRIGGER: ${{ format('--trigger={0}', inputs.trigger) }}
run: ./generate.py ${PLATFORM} ${TRIGGER} >> "${GITHUB_OUTPUT}"

View File

@@ -19,17 +19,17 @@ on:
branches: [develop]
paths:
# This allows testing changes to the upload workflow in a PR
- .github/workflows/upload-conan-deps.yml
- ".github/workflows/upload-conan-deps.yml"
push:
branches: [develop]
paths:
- .github/workflows/upload-conan-deps.yml
- .github/workflows/reusable-strategy-matrix.yml
- .github/actions/build-deps/action.yml
- .github/actions/setup-conan/action.yml
- ".github/workflows/upload-conan-deps.yml"
- ".github/workflows/reusable-strategy-matrix.yml"
- ".github/actions/build-deps/action.yml"
- ".github/actions/setup-conan/action.yml"
- ".github/scripts/strategy-matrix/**"
- conanfile.py
- conan.lock
- "conanfile.py"
- "conan.lock"
env:
CONAN_REMOTE_NAME: xrplf
@@ -49,7 +49,8 @@ jobs:
generate-matrix:
uses: ./.github/workflows/reusable-strategy-matrix.yml
with:
strategy_matrix: ${{ github.event_name == 'pull_request' && 'minimal' || 'all' }}
# The workflow dispatch event uses the same trigger as the schedule event.
trigger: ${{ github.event_name == 'pull_request' && 'commit' || (github.event_name == 'push' && 'merge' || 'schedule') }}
# Build and upload the dependencies for each configuration.
run-upload-conan-deps:
@@ -59,18 +60,18 @@ jobs:
fail-fast: false
matrix: ${{ fromJson(needs.generate-matrix.outputs.matrix) }}
max-parallel: 10
runs-on: ${{ matrix.architecture.runner }}
container: ${{ contains(matrix.architecture.platform, 'linux') && format('ghcr.io/xrplf/ci/{0}-{1}:{2}-{3}-sha-{4}', matrix.os.distro_name, matrix.os.distro_version, matrix.os.compiler_name, matrix.os.compiler_version, matrix.os.image_sha) || null }}
runs-on: ${{ matrix.runs_on }}
container: ${{ matrix.image }}
steps:
- name: Cleanup workspace (macOS and Windows)
if: ${{ runner.os == 'macOS' || runner.os == 'Windows' }}
uses: XRPLF/actions/.github/actions/cleanup-workspace@01b244d2718865d427b499822fbd3f15e7197fcc
uses: XRPLF/actions/cleanup-workspace@2ece4ec6ab7de266859a6f053571425b2bd684b6
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: Prepare runner
uses: XRPLF/actions/.github/actions/prepare-runner@99685816bb60a95a66852f212f382580e180df3a
uses: XRPLF/actions/prepare-runner@2ece4ec6ab7de266859a6f053571425b2bd684b6
with:
disable_ccache: false
@@ -78,7 +79,7 @@ jobs:
uses: ./.github/actions/print-env
- name: Get number of processors
uses: XRPLF/actions/.github/actions/get-nproc@046b1620f6bfd6cd0985dc82c3df02786801fe0a
uses: XRPLF/actions/get-nproc@2ece4ec6ab7de266859a6f053571425b2bd684b6
id: nproc
with:
subtract: ${{ env.NPROC_SUBTRACT }}
@@ -92,7 +93,6 @@ jobs:
- name: Build dependencies
uses: ./.github/actions/build-deps
with:
build_dir: .build
build_nproc: ${{ steps.nproc.outputs.nproc }}
build_type: ${{ matrix.build_type }}
force_build: ${{ github.event_name == 'schedule' || github.event.inputs.force_source_build == 'true' }}

.gitignore vendored
View File

@@ -19,6 +19,7 @@ Release/
/tmp/
CMakeSettings.json
CMakeUserPresets.json
__pycache__
# Coverage files.
*.gcno

View File

@@ -304,7 +304,7 @@ For this reason:
- Example **bad** name
`"RFC1751::insert(char* s, int x, int start, int length) : length is greater than or equal zero"`
(missing namespace, unnecessary full function signature, description too verbose).
Good name: `"ripple::RFC1751::insert : minimum length"`.
Good name: `"xrpl::RFC1751::insert : minimum length"`.
- In **few** well-justified cases a non-standard name can be used, in which case a
comment should be placed to explain the rationale (example in `contract.cpp`)
- Do **not** rename a contract without a good reason (e.g. the name no longer

View File

@@ -3,7 +3,7 @@
#include <boost/filesystem.hpp>
namespace ripple {
namespace xrpl {
/** Extract a tar archive compressed with lz4
@@ -17,6 +17,6 @@ extractTarLz4(
boost::filesystem::path const& src,
boost::filesystem::path const& dst);
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -12,7 +12,7 @@
#include <unordered_map>
#include <vector>
namespace ripple {
namespace xrpl {
using IniFileSections =
std::unordered_map<std::string, std::vector<std::string>>;
@@ -380,6 +380,6 @@ get_if_exists<bool>(Section const& section, std::string const& name, bool& v)
return stat;
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -3,13 +3,13 @@
#include <vector>
namespace ripple {
namespace xrpl {
/** Storage for linear binary data.
Blocks of binary data appear often in various idioms and structures.
*/
using Blob = std::vector<unsigned char>;
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -8,7 +8,7 @@
#include <cstring>
#include <memory>
namespace ripple {
namespace xrpl {
/** Like std::vector<char> but better.
Meets the requirements of BufferFactory.
@@ -96,7 +96,7 @@ public:
XRPL_ASSERT(
s.size() == 0 || size_ == 0 || s.data() < p_.get() ||
s.data() >= p_.get() + size_,
"ripple::Buffer::operator=(Slice) : input not a subset");
"xrpl::Buffer::operator=(Slice) : input not a subset");
if (auto p = alloc(s.size()))
std::memcpy(p, s.data(), s.size());
@@ -215,6 +215,6 @@ operator!=(Buffer const& lhs, Buffer const& rhs) noexcept
return !(lhs == rhs);
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -1,7 +1,7 @@
#ifndef XRPL_BASICS_BYTEUTILITIES_H_INCLUDED
#define XRPL_BASICS_BYTEUTILITIES_H_INCLUDED
namespace ripple {
namespace xrpl {
template <class T>
constexpr auto
@@ -19,6 +19,6 @@ megabytes(T value) noexcept
static_assert(kilobytes(2) == 2048, "kilobytes(2) == 2048");
static_assert(megabytes(3) == 3145728, "megabytes(3) == 3145728");
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -10,7 +10,7 @@
#include <stdexcept>
#include <vector>
namespace ripple {
namespace xrpl {
namespace compression_algorithms {
@@ -144,6 +144,6 @@ lz4Decompress(
} // namespace compression_algorithms
} // namespace ripple
} // namespace xrpl
#endif // XRPL_COMPRESSIONALGORITHMS_H_INCLUDED

View File

@@ -8,7 +8,7 @@
#include <utility>
#include <vector>
namespace ripple {
namespace xrpl {
/** Manages all counted object types. */
class CountedObjects
@@ -133,6 +133,6 @@ public:
}
};
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -4,7 +4,7 @@
#include <chrono>
#include <cmath>
namespace ripple {
namespace xrpl {
/** Sampling function using exponential decay to provide a continuous value.
@tparam The number of seconds in the decay window.
@@ -131,6 +131,6 @@ private:
time_point when_;
};
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -7,7 +7,7 @@
#include <stdexcept>
namespace ripple {
namespace xrpl {
/** Expected is an approximation of std::expected (hoped for in C++23)
@@ -232,6 +232,6 @@ public:
}
};
} // namespace ripple
} // namespace xrpl
#endif // XRPL_BASICS_EXPECTED_H_INCLUDED

View File

@@ -6,7 +6,7 @@
#include <optional>
namespace ripple {
namespace xrpl {
std::string
getFileContents(
@@ -20,6 +20,6 @@ writeFileContents(
boost::filesystem::path const& destPath,
std::string const& contents);
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -6,7 +6,7 @@
#include <type_traits>
#include <utility>
namespace ripple {
namespace xrpl {
//------------------------------------------------------------------------------
@@ -492,5 +492,5 @@ dynamic_pointer_cast(TT const& v)
return SharedPtr<T>(DynamicCastTagSharedIntrusive{}, v);
}
} // namespace intr_ptr
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -6,7 +6,7 @@
#include <utility>
namespace ripple {
namespace xrpl {
template <class T>
template <CAdoptTag TAdoptTag>
@@ -608,7 +608,7 @@ SharedWeakUnion<T>::convertToStrong()
[[maybe_unused]] auto action = p->releaseWeakRef();
XRPL_ASSERT(
(action == ReleaseWeakRefAction::noop),
"ripple::SharedWeakUnion::convertToStrong : "
"xrpl::SharedWeakUnion::convertToStrong : "
"action is noop");
unsafeSetRawPtr(p, RefStrength::strong);
return true;
@@ -637,7 +637,7 @@ SharedWeakUnion<T>::convertToWeak()
// We just added a weak ref. How could we destroy?
// LCOV_EXCL_START
UNREACHABLE(
"ripple::SharedWeakUnion::convertToWeak : destroying freshly "
"xrpl::SharedWeakUnion::convertToWeak : destroying freshly "
"added ref");
delete p;
unsafeSetRawPtr(nullptr);
@@ -719,5 +719,5 @@ SharedWeakUnion<T>::unsafeReleaseNoStore()
}
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -6,7 +6,7 @@
#include <atomic>
#include <cstdint>
namespace ripple {
namespace xrpl {
/** Action to perform when releasing a strong pointer.
@@ -34,7 +34,7 @@ enum class ReleaseWeakRefAction { noop, destroy };
/** Implement the strong count, weak count, and bit flags for an intrusive
pointer.
A class can satisfy the requirements of a ripple::IntrusivePointer by
A class can satisfy the requirements of a xrpl::IntrusivePointer by
inheriting from this class.
*/
struct IntrusiveRefCounts
@@ -257,7 +257,7 @@ IntrusiveRefCounts::releaseStrongRef() const
RefCountPair const prevVal{prevIntVal};
XRPL_ASSERT(
(prevVal.strong >= strongDelta),
"ripple::IntrusiveRefCounts::releaseStrongRef : previous ref "
"xrpl::IntrusiveRefCounts::releaseStrongRef : previous ref "
"higher than new");
auto nextIntVal = prevIntVal - strongDelta;
ReleaseStrongRefAction action = noop;
@@ -282,7 +282,7 @@ IntrusiveRefCounts::releaseStrongRef() const
// twice.
XRPL_ASSERT(
(action == noop) || !(prevIntVal & partialDestroyStartedMask),
"ripple::IntrusiveRefCounts::releaseStrongRef : not in partial "
"xrpl::IntrusiveRefCounts::releaseStrongRef : not in partial "
"destroy");
return action;
}
@@ -314,7 +314,7 @@ IntrusiveRefCounts::addWeakReleaseStrongRef() const
// can't happen twice.
XRPL_ASSERT(
(!prevVal.partialDestroyStartedBit),
"ripple::IntrusiveRefCounts::addWeakReleaseStrongRef : not in "
"xrpl::IntrusiveRefCounts::addWeakReleaseStrongRef : not in "
"partial destroy");
auto nextIntVal = prevIntVal + delta;
@@ -336,7 +336,7 @@ IntrusiveRefCounts::addWeakReleaseStrongRef() const
{
XRPL_ASSERT(
(!(prevIntVal & partialDestroyStartedMask)),
"ripple::IntrusiveRefCounts::addWeakReleaseStrongRef : not "
"xrpl::IntrusiveRefCounts::addWeakReleaseStrongRef : not "
"started partial destroy");
return action;
}
@@ -408,11 +408,11 @@ inline IntrusiveRefCounts::~IntrusiveRefCounts() noexcept
auto v = refCounts.load(std::memory_order_acquire);
XRPL_ASSERT(
(!(v & valueMask)),
"ripple::IntrusiveRefCounts::~IntrusiveRefCounts : count must be zero");
"xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : count must be zero");
auto t = v & tagMask;
XRPL_ASSERT(
(!t || t == tagMask),
"ripple::IntrusiveRefCounts::~IntrusiveRefCounts : valid tag");
"xrpl::IntrusiveRefCounts::~IntrusiveRefCounts : valid tag");
#endif
}
@@ -427,7 +427,7 @@ inline IntrusiveRefCounts::RefCountPair::RefCountPair(
{
XRPL_ASSERT(
(strong < checkStrongMaxValue && weak < checkWeakMaxValue),
"ripple::IntrusiveRefCounts::RefCountPair(FieldType) : inputs inside "
"xrpl::IntrusiveRefCounts::RefCountPair(FieldType) : inputs inside "
"range");
}
@@ -438,7 +438,7 @@ inline IntrusiveRefCounts::RefCountPair::RefCountPair(
{
XRPL_ASSERT(
(strong < checkStrongMaxValue && weak < checkWeakMaxValue),
"ripple::IntrusiveRefCounts::RefCountPair(CountType, CountType) : "
"xrpl::IntrusiveRefCounts::RefCountPair(CountType, CountType) : "
"inputs inside range");
}
@@ -447,7 +447,7 @@ IntrusiveRefCounts::RefCountPair::combinedValue() const noexcept
{
XRPL_ASSERT(
(strong < checkStrongMaxValue && weak < checkWeakMaxValue),
"ripple::IntrusiveRefCounts::RefCountPair::combinedValue : inputs "
"xrpl::IntrusiveRefCounts::RefCountPair::combinedValue : inputs "
"inside range");
return (static_cast<IntrusiveRefCounts::FieldType>(weak)
<< IntrusiveRefCounts::StrongCountNumBits) |
@@ -465,7 +465,7 @@ partialDestructorFinished(T** o)
XRPL_ASSERT(
(!p.partialDestroyFinishedBit && p.partialDestroyStartedBit &&
!p.strong),
"ripple::partialDestructorFinished : not a weak ref");
"xrpl::partialDestructorFinished : not a weak ref");
if (!p.weak)
{
// There was a weak count before the partial destructor ran (or we would
@@ -479,5 +479,5 @@ partialDestructorFinished(T** o)
}
//------------------------------------------------------------------------------
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -4,10 +4,10 @@
#include <xrpl/basics/TaggedCache.h>
#include <xrpl/basics/base_uint.h>
namespace ripple {
namespace xrpl {
using KeyCache = TaggedCache<uint256, int, true>;
} // namespace ripple
} // namespace xrpl
#endif // XRPL_BASICS_KEYCACHE_H

View File

@@ -6,7 +6,7 @@
#include <memory>
#include <unordered_map>
namespace ripple {
namespace xrpl {
namespace detail {
@@ -109,6 +109,6 @@ LocalValue<T>::operator*()
.emplace(this, std::make_unique<detail::LocalValues::Value<T>>(t_))
.first->second->get());
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -13,7 +13,7 @@
#include <mutex>
#include <utility>
namespace ripple {
namespace xrpl {
// DEPRECATED use beast::severities::Severity instead
enum LogSeverity {
@@ -271,6 +271,6 @@ setDebugLogSink(std::unique_ptr<beast::Journal::Sink> sink);
beast::Journal
debugLog();
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -5,7 +5,7 @@
#include <cassert>
#include <cstddef>
namespace ripple {
namespace xrpl {
/** Calculate one number divided by another number in percentage.
* The result is rounded up to the next integer, and capped in the range [0,100]
@@ -44,6 +44,6 @@ static_assert(calculatePercent(50'000'000, 100'000'000) == 50);
static_assert(calculatePercent(50'000'001, 100'000'000) == 51);
static_assert(calculatePercent(99'999'999, 100'000'000) == 100);
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -6,7 +6,7 @@
#include <ostream>
#include <string>
namespace ripple {
namespace xrpl {
class Number;
@@ -402,6 +402,6 @@ public:
operator=(NumberRoundModeGuard const&) = delete;
};
} // namespace ripple
} // namespace xrpl
#endif // XRPL_BASICS_NUMBER_H_INCLUDED

View File

@@ -21,11 +21,11 @@ ripple/basic should contain no dependencies on other modules.
- `std::set`
- For sorted containers.
- `ripple::hash_set`
- `xrpl::hash_set`
- Where inserts and contains need to be O(1).
- For "small" sets, `std::set` might be faster and smaller.
- `ripple::hardened_hash_set`
- `xrpl::hardened_hash_set`
- For data sets where the key could be manipulated by an attacker
in an attempt to mount an algorithmic complexity attack: see
http://en.wikipedia.org/wiki/Algorithmic_complexity_attack
@@ -33,5 +33,5 @@ ripple/basic should contain no dependencies on other modules.
The following container is deprecated
- `std::unordered_set`
- Use `ripple::hash_set` instead, which uses a better hashing algorithm.
- Or use `ripple::hardened_hash_set` to prevent algorithmic complexity attacks.
- Use `xrpl::hash_set` instead, which uses a better hashing algorithm.
- Or use `xrpl::hardened_hash_set` to prevent algorithmic complexity attacks.

View File

@@ -11,7 +11,7 @@
#include <string>
#include <vector>
namespace ripple {
namespace xrpl {
/** A closed interval over the domain T.
@@ -85,7 +85,7 @@ to_string(RangeSet<T> const& rs)
std::string s;
for (auto const& interval : rs)
s += ripple::to_string(interval) + ",";
s += xrpl::to_string(interval) + ",";
s.pop_back();
return s;
@@ -172,6 +172,6 @@ prevMissing(RangeSet<T> const& rs, T t, T minVal = 0)
return boost::icl::last(tgt);
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -6,7 +6,7 @@
#include <functional>
#include <vector>
namespace ripple {
namespace xrpl {
class Resolver
{
@@ -47,6 +47,6 @@ public:
/** @} */
};
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -6,7 +6,7 @@
#include <boost/asio/io_context.hpp>
namespace ripple {
namespace xrpl {
class ResolverAsio : public Resolver
{
@@ -17,6 +17,6 @@ public:
New(boost::asio::io_context&, beast::Journal);
};
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -6,7 +6,7 @@
#include <ostream>
namespace ripple {
namespace xrpl {
// A SHAMapHash is the hash of a node in a SHAMap, and also the
// type of the hash of the entire SHAMap.
@@ -97,6 +97,6 @@ extract(SHAMapHash const& key)
return *reinterpret_cast<std::size_t const*>(key.as_uint256().data());
}
} // namespace ripple
} // namespace xrpl
#endif // XRPL_BASICS_SHAMAP_HASH_H_INCLUDED

View File

@@ -4,7 +4,7 @@
#include <memory>
#include <variant>
namespace ripple {
namespace xrpl {
/** A combination of a std::shared_ptr and a std::weak_pointer.
@@ -112,5 +112,5 @@ public:
private:
std::variant<std::shared_ptr<T>, std::weak_ptr<T>> combo_;
};
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -3,7 +3,7 @@
#include <xrpl/basics/SharedWeakCachePointer.h>
namespace ripple {
namespace xrpl {
template <class T>
SharedWeakCachePointer<T>::SharedWeakCachePointer(
SharedWeakCachePointer const& rhs) = default;
@@ -169,5 +169,5 @@ SharedWeakCachePointer<T>::convertToWeak()
return false;
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -22,7 +22,7 @@
#include <sys/mman.h>
#endif
namespace ripple {
namespace xrpl {
template <typename Type>
class SlabAllocator
@@ -128,7 +128,7 @@ class SlabAllocator
{
XRPL_ASSERT(
own(ptr),
"ripple::SlabAllocator::SlabBlock::deallocate : own input");
"xrpl::SlabAllocator::SlabBlock::deallocate : own input");
std::lock_guard l(m_);
@@ -173,7 +173,7 @@ public:
{
XRPL_ASSERT(
(itemAlignment_ & (itemAlignment_ - 1)) == 0,
"ripple::SlabAllocator::SlabAllocator : valid alignment");
"xrpl::SlabAllocator::SlabAllocator : valid alignment");
}
SlabAllocator(SlabAllocator const& other) = delete;
@@ -285,7 +285,7 @@ public:
{
XRPL_ASSERT(
ptr,
"ripple::SlabAllocator::SlabAllocator::deallocate : non-null "
"xrpl::SlabAllocator::SlabAllocator::deallocate : non-null "
"input");
for (auto slab = slabs_.load(); slab != nullptr; slab = slab->next_)
@@ -419,6 +419,6 @@ public:
}
};
} // namespace ripple
} // namespace xrpl
#endif // XRPL_BASICS_SLABALLOCATOR_H_INCLUDED

View File

@@ -15,7 +15,7 @@
#include <type_traits>
#include <vector>
namespace ripple {
namespace xrpl {
/** An immutable linear range of bytes.
@@ -87,7 +87,7 @@ public:
{
XRPL_ASSERT(
i < size_,
"ripple::Slice::operator[](std::size_t) const : valid input");
"xrpl::Slice::operator[](std::size_t) const : valid input");
return data_[i];
}
@@ -243,6 +243,6 @@ makeSlice(std::basic_string<char, Traits, Alloc> const& s)
return Slice(s.data(), s.size());
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -12,7 +12,7 @@
#include <optional>
#include <string>
namespace ripple {
namespace xrpl {
/** Format arbitrary binary data as an SQLite "blob literal".
@@ -132,6 +132,6 @@ to_uint64(std::string const& s);
bool
isProperlyFormedTomlDomain(std::string_view domain);
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -16,7 +16,7 @@
#include <type_traits>
#include <vector>
namespace ripple {
namespace xrpl {
/** Map/cache combination.
This class implements a cache and a map. The cache keeps objects alive
@@ -315,6 +315,6 @@ private:
std::uint64_t m_misses;
};
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -4,7 +4,7 @@
#include <xrpl/basics/IntrusivePointer.ipp>
#include <xrpl/basics/TaggedCache.h>
namespace ripple {
namespace xrpl {
template <
class Key,
@@ -1005,6 +1005,6 @@ TaggedCache<
});
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -4,7 +4,7 @@
#include <string>
#include <type_traits>
namespace ripple {
namespace xrpl {
/** to_string() generalizes std::to_string to handle bools, chars, and strings.
@@ -43,6 +43,6 @@ to_string(char const* s)
return s;
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -22,7 +22,7 @@
* what container it is.
*/
namespace ripple {
namespace xrpl {
// hash containers
@@ -102,6 +102,6 @@ template <
using hardened_hash_multiset =
std::unordered_multiset<Value, Hash, Pred, Allocator>;
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -6,7 +6,7 @@
#include <ratio>
#include <thread>
namespace ripple {
namespace xrpl {
/** Tracks program uptime to seconds precision.
@@ -45,6 +45,6 @@ private:
start_clock();
};
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -3,7 +3,7 @@
#include <utility>
namespace ripple {
namespace xrpl {
// Requires: [first1, last1) and [first2, last2) are ordered ranges according to
// comp.
@@ -95,6 +95,6 @@ remove_if_intersect_or_match(
return first1;
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -38,7 +38,7 @@
#include <cstdint>
#include <string>
namespace ripple {
namespace xrpl {
std::string
base64_encode(std::uint8_t const* data, std::size_t len);
@@ -53,6 +53,6 @@ base64_encode(std::string const& s)
std::string
base64_decode(std::string_view data);
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -23,7 +23,7 @@
#include <cstring>
#include <type_traits>
namespace ripple {
namespace xrpl {
namespace detail {
@@ -275,7 +275,7 @@ public:
{
XRPL_ASSERT(
c.size() * sizeof(typename Container::value_type) == size(),
"ripple::base_uint::base_uint(Container auto) : input size match");
"xrpl::base_uint::base_uint(Container auto) : input size match");
std::memcpy(data_.data(), c.data(), size());
}
@@ -288,7 +288,7 @@ public:
{
XRPL_ASSERT(
c.size() * sizeof(typename Container::value_type) == size(),
"ripple::base_uint::operator=(Container auto) : input size match");
"xrpl::base_uint::operator=(Container auto) : input size match");
std::memcpy(data_.data(), c.data(), size());
return *this;
}
@@ -648,12 +648,12 @@ static_assert(sizeof(uint192) == 192 / 8, "There should be no padding bytes");
static_assert(sizeof(uint256) == 256 / 8, "There should be no padding bytes");
#endif
} // namespace ripple
} // namespace xrpl
namespace beast {
template <std::size_t Bits, class Tag>
struct is_uniquely_represented<ripple::base_uint<Bits, Tag>>
struct is_uniquely_represented<xrpl::base_uint<Bits, Tag>>
: public std::true_type
{
explicit is_uniquely_represented() = default;

View File

@@ -12,7 +12,7 @@
#include <ratio>
#include <string>
namespace ripple {
namespace xrpl {
// A few handy aliases
@@ -104,6 +104,6 @@ stopwatch()
return beast::get_abstract_clock<Facade, Clock>();
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -3,7 +3,7 @@
#include <functional>
namespace ripple {
namespace xrpl {
#ifdef _MSC_VER
@@ -52,6 +52,6 @@ using equal_to = std::equal_to<T>;
#endif
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -7,7 +7,7 @@
#include <string>
#include <utility>
namespace ripple {
namespace xrpl {
/* Programming By Contract
@@ -52,6 +52,6 @@ Throw(Args&&... args)
[[noreturn]] void
LogicError(std::string const& how) noexcept;
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -9,7 +9,7 @@
#include <random>
#include <utility>
namespace ripple {
namespace xrpl {
namespace detail {
@@ -92,6 +92,6 @@ public:
}
};
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -3,7 +3,7 @@
#include <string>
namespace ripple {
namespace xrpl {
template <class Stream, class Iter>
Stream&
@@ -85,6 +85,6 @@ public:
}
};
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -5,7 +5,7 @@
#include <string>
namespace ripple {
namespace xrpl {
/** Create a self-signed SSL context that allows anonymous Diffie Hellman. */
std::shared_ptr<boost::asio::ssl::context>
@@ -19,6 +19,6 @@ make_SSLContextAuthed(
std::string const& chainFile,
std::string const& cipherList);
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -5,7 +5,7 @@
#include <limits>
#include <optional>
namespace ripple {
namespace xrpl {
auto constexpr muldiv_max = std::numeric_limits<std::uint64_t>::max();
/** Return value*mul/div accurately.
@@ -21,6 +21,6 @@ auto constexpr muldiv_max = std::numeric_limits<std::uint64_t>::max();
std::optional<std::uint64_t>
mulDiv(std::uint64_t value, std::uint64_t mul, std::uint64_t div);
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -12,7 +12,7 @@
#include <utility>
#include <vector>
namespace ripple {
namespace xrpl {
template <typename Key>
static std::size_t
@@ -242,7 +242,7 @@ public:
map_.resize(partitions_);
XRPL_ASSERT(
partitions_,
"ripple::partitioned_unordered_map::partitioned_unordered_map : "
"xrpl::partitioned_unordered_map::partitioned_unordered_map : "
"nonzero partitions");
}
@@ -401,6 +401,6 @@ private:
mutable partition_map_type map_{};
};
} // namespace ripple
} // namespace xrpl
#endif // XRPL_BASICS_PARTITIONED_UNORDERED_MAP_H

View File

@@ -11,7 +11,7 @@
#include <random>
#include <type_traits>
namespace ripple {
namespace xrpl {
#ifndef __INTELLISENSE__
static_assert(
@@ -95,7 +95,7 @@ std::enable_if_t<
Integral>
rand_int(Engine& engine, Integral min, Integral max)
{
XRPL_ASSERT(max > min, "ripple::rand_int : max over min inputs");
XRPL_ASSERT(max > min, "xrpl::rand_int : max over min inputs");
// This should have no state and constructing it should
// be very cheap. If that turns out not to be the case
@@ -186,6 +186,6 @@ rand_bool()
}
/** @} */
} // namespace ripple
} // namespace xrpl
#endif // XRPL_BASICS_RANDOM_H_INCLUDED

View File

@@ -3,7 +3,7 @@
#include <type_traits>
namespace ripple {
namespace xrpl {
// safe_cast adds compile-time checks to a static_cast to ensure that
// the destination can hold all values of the source. This is particularly
@@ -80,6 +80,6 @@ inline constexpr std::
return unsafe_cast<Dest>(static_cast<std::underlying_type_t<Src>>(s));
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -8,7 +8,7 @@
#include <type_traits>
#include <utility>
namespace ripple {
namespace xrpl {
// RAII scope helpers. As specified in Library Fundamental, Version 3
// Basic design of idea: https://www.youtube.com/watch?v=WjTrfoiB0MQ
@@ -218,7 +218,7 @@ public:
{
XRPL_ASSERT(
plock->owns_lock(),
"ripple::scope_unlock::scope_unlock : mutex must be locked");
"xrpl::scope_unlock::scope_unlock : mutex must be locked");
plock->unlock();
}
@@ -236,6 +236,6 @@ public:
template <class Mutex>
scope_unlock(std::unique_lock<Mutex>&) -> scope_unlock<Mutex>;
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -13,7 +13,7 @@
#include <immintrin.h>
#endif
namespace ripple {
namespace xrpl {
namespace detail {
/** Inform the processor that we are in a tight spin-wait loop.
@@ -105,7 +105,7 @@ public:
{
XRPL_ASSERT(
index >= 0 && (mask_ != 0),
"ripple::packed_spinlock::packed_spinlock : valid index and mask");
"xrpl::packed_spinlock::packed_spinlock : valid index and mask");
}
[[nodiscard]] bool
@@ -206,6 +206,6 @@ public:
};
/** @} */
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -4,7 +4,7 @@
#include <boost/algorithm/hex.hpp>
#include <boost/endian/conversion.hpp>
namespace ripple {
namespace xrpl {
template <class FwdIt>
std::string
@@ -28,6 +28,6 @@ strHex(T const& from)
return strHex(from.begin(), from.end());
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -10,7 +10,7 @@
#include <iostream>
#include <type_traits>
namespace ripple {
namespace xrpl {
/** A type-safe wrap around standard integral types
@@ -197,11 +197,11 @@ public:
}
};
} // namespace ripple
} // namespace xrpl
namespace beast {
template <class Int, class Tag, class HashAlgorithm>
struct is_contiguously_hashable<ripple::tagged_integer<Int, Tag>, HashAlgorithm>
struct is_contiguously_hashable<xrpl::tagged_integer<Int, Tag>, HashAlgorithm>
: public is_contiguously_hashable<Int, HashAlgorithm>
{
explicit is_contiguously_hashable() = default;

View File

@@ -8,7 +8,7 @@
#include <mutex>
#include <optional>
namespace ripple {
namespace xrpl {
/**
* The role of a `ClosureCounter` is to assist in shutdown by letting callers
@@ -202,6 +202,6 @@ public:
}
};
} // namespace ripple
} // namespace xrpl
#endif // XRPL_CORE_CLOSURE_COUNTER_H_INCLUDED

View File

@@ -3,7 +3,7 @@
#include <xrpl/basics/ByteUtilities.h>
namespace ripple {
namespace xrpl {
template <class F>
JobQueue::Coro::Coro(
@@ -34,7 +34,7 @@ JobQueue::Coro::Coro(
inline JobQueue::Coro::~Coro()
{
#ifndef NDEBUG
XRPL_ASSERT(finished_, "ripple::JobQueue::Coro::~Coro : is finished");
XRPL_ASSERT(finished_, "xrpl::JobQueue::Coro::~Coro : is finished");
#endif
}
@@ -85,8 +85,7 @@ JobQueue::Coro::resume()
detail::getLocalValues().reset(&lvs_);
std::lock_guard lock(mutex_);
XRPL_ASSERT(
static_cast<bool>(coro_),
"ripple::JobQueue::Coro::resume : is runnable");
static_cast<bool>(coro_), "xrpl::JobQueue::Coro::resume : is runnable");
coro_();
detail::getLocalValues().release();
detail::getLocalValues().reset(saved);
@@ -129,6 +128,6 @@ JobQueue::Coro::join()
cv_.wait(lk, [this]() { return running_ == false; });
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -7,7 +7,7 @@
#include <functional>
namespace ripple {
namespace xrpl {
// Note that this queue should only be used for CPU-bound jobs
// It is primarily intended for signature checking
@@ -131,6 +131,6 @@ private:
using JobCounter = ClosureCounter<void>;
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -12,7 +12,7 @@
#include <set>
namespace ripple {
namespace xrpl {
namespace perf {
class PerfLog;
@@ -382,11 +382,11 @@ private:
lock is released which only happens after the coroutine completes.
*/
} // namespace ripple
} // namespace xrpl
#include <xrpl/core/Coro.ipp>
namespace ripple {
namespace xrpl {
template <class F>
std::shared_ptr<JobQueue::Coro>
@@ -408,6 +408,6 @@ JobQueue::postCoro(JobType t, std::string const& name, F&& f)
return coro;
}
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -5,7 +5,7 @@
#include <xrpl/beast/insight/Collector.h>
#include <xrpl/core/JobTypeInfo.h>
namespace ripple {
namespace xrpl {
struct JobTypeData
{
@@ -83,6 +83,6 @@ public:
}
};
} // namespace ripple
} // namespace xrpl
#endif

View File

@@ -3,7 +3,7 @@
 #include <xrpl/core/Job.h>
-namespace ripple {
+namespace xrpl {
 /** Holds all the 'static' information about a job, which does not change */
 class JobTypeInfo
@@ -78,6 +78,6 @@ public:
     }
 };
-} // namespace ripple
+} // namespace xrpl
 #endif


@@ -7,7 +7,7 @@
 #include <map>
 #include <string>
-namespace ripple {
+namespace xrpl {
 class JobTypes
 {
@@ -35,7 +35,7 @@ private:
         std::chrono::milliseconds peakLatency) {
         XRPL_ASSERT(
             m_map.find(jt) == m_map.end(),
-            "ripple::JobTypes::JobTypes::add : unique job type input");
+            "xrpl::JobTypes::JobTypes::add : unique job type input");
         [[maybe_unused]] auto const inserted =
             m_map
@@ -48,7 +48,7 @@ private:
         XRPL_ASSERT(
             inserted == true,
-            "ripple::JobTypes::JobTypes::add : input is inserted");
+            "xrpl::JobTypes::JobTypes::add : input is inserted");
     };
     // clang-format off
@@ -122,7 +122,7 @@ public:
    get(JobType jt) const
    {
        Map::const_iterator const iter(m_map.find(jt));
-       XRPL_ASSERT(iter != m_map.end(), "ripple::JobTypes::get : valid input");
+       XRPL_ASSERT(iter != m_map.end(), "xrpl::JobTypes::get : valid input");
        if (iter != m_map.end())
            return iter->second;
@@ -170,6 +170,6 @@ public:
    Map m_map;
 };
-} // namespace ripple
+} // namespace xrpl
 #endif


@@ -4,7 +4,7 @@
 #include <chrono>
 #include <string>
-namespace ripple {
+namespace xrpl {
 class LoadMonitor;
@@ -65,6 +65,6 @@ private:
     std::chrono::steady_clock::duration timeRunning_;
 };
-} // namespace ripple
+} // namespace xrpl
 #endif

View File

@@ -8,7 +8,7 @@
 #include <chrono>
 #include <mutex>
-namespace ripple {
+namespace xrpl {
 // Monitors load levels and response times
@@ -67,6 +67,6 @@ private:
     beast::Journal const j_;
 };
-} // namespace ripple
+} // namespace xrpl
 #endif

View File

@@ -17,7 +17,7 @@ namespace beast {
 class Journal;
 }
-namespace ripple {
+namespace xrpl {
 class Application;
 namespace perf {
@@ -187,6 +187,6 @@ measureDurationAndLog(
 }
 } // namespace perf
-} // namespace ripple
+} // namespace xrpl
 #endif // XRPL_CORE_PERFLOG_H

View File

@@ -10,7 +10,7 @@
 #include <string>
 #include <thread>
-namespace ripple {
+namespace xrpl {
 namespace perf {
 class PerfLog;
@@ -214,6 +214,6 @@ private:
         m_paused; // holds just paused workers
 };
-} // namespace ripple
+} // namespace xrpl
 #endif

View File

@@ -1,6 +1,6 @@
 /**
  *
- * TODO: Remove ripple::basic_semaphore (and this file) and use
+ * TODO: Remove xrpl::basic_semaphore (and this file) and use
  * std::counting_semaphore.
  *
  * Background:
@@ -32,7 +32,7 @@
 #include <condition_variable>
 #include <mutex>
-namespace ripple {
+namespace xrpl {
 template <class Mutex, class CondVar>
 class basic_semaphore
@@ -87,6 +87,6 @@ public:
 using semaphore = basic_semaphore<std::mutex, std::condition_variable>;
-} // namespace ripple
+} // namespace xrpl
 #endif

View File

@@ -4,7 +4,7 @@
 #include <string>
 #include <vector>
-namespace ripple {
+namespace xrpl {
 class RFC1751
 {
@@ -42,6 +42,6 @@ private:
     static char const* s_dictionary[];
 };
-} // namespace ripple
+} // namespace xrpl
 #endif

View File

@@ -3,7 +3,7 @@
 #include <mutex>
-namespace ripple {
+namespace xrpl {
 /** A cryptographically secure random number engine
@@ -70,6 +70,6 @@ public:
 csprng_engine&
 crypto_prng();
-} // namespace ripple
+} // namespace xrpl
 #endif

View File

@@ -3,7 +3,7 @@
 #include <cstddef>
-namespace ripple {
+namespace xrpl {
 /** Attempts to clear the given blob of memory.
@@ -22,6 +22,6 @@ namespace ripple {
 void
 secure_erase(void* dest, std::size_t bytes);
-} // namespace ripple
+} // namespace xrpl
 #endif

View File

@@ -4,7 +4,7 @@
 #include <xrpl/beast/utility/PropertyStream.h>
 #include <xrpl/json/json_value.h>
-namespace ripple {
+namespace xrpl {
 /** A PropertyStream::Sink which produces a Json::Value of type objectValue. */
 class JsonPropertyStream : public beast::PropertyStream
@@ -66,6 +66,6 @@ protected:
     add(std::string const& v) override;
 };
-} // namespace ripple
+} // namespace xrpl
 #endif

View File

@@ -1,445 +0,0 @@
#ifndef XRPL_JSON_OBJECT_H_INCLUDED
#define XRPL_JSON_OBJECT_H_INCLUDED
#include <xrpl/json/Writer.h>
#include <memory>
namespace Json {
/**
Collection is a base class for Array and Object, classes which provide the
facade of JSON collections for the O(1) JSON writer, while still using no
heap memory and only a very small amount of stack.
From http://json.org, JSON has two types of collection: array, and object.
Everything else is a *scalar* - a number, a string, a boolean, the special
value null, or a legacy Json::Value.
Collections must write JSON "as-it-goes" in order to get the strong
performance guarantees. This puts restrictions upon API users:
1. Only one collection can be open for change at any one time.
This condition is enforced automatically and a std::logic_error thrown if
it is violated.
2. A tag may only be used once in an Object.
Some objects have many tags, so this condition might be a little
expensive. Enforcement of this condition is turned on in debug builds and
a std::logic_error is thrown when the tag is added for a second time.
Code samples:
Writer writer;
// An empty object.
{
Object::Root (writer);
}
// Outputs {}
// An object with one scalar value.
{
Object::Root root (writer);
root["hello"] = "world";
}
// Outputs {"hello":"world"}
// Same, using chaining.
{
Object::Root (writer)["hello"] = "world";
}
// Output is the same.
// Add several scalars, with chaining.
{
Object::Root (writer)
.set ("hello", "world")
.set ("flag", false)
.set ("x", 42);
}
// Outputs {"hello":"world","flag":false,"x":42}
// Add an array.
{
Object::Root root (writer);
{
auto array = root.setArray ("hands");
array.append ("left");
array.append ("right");
}
}
// Outputs {"hands":["left", "right"]}
// Same, using chaining.
{
Object::Root (writer)
.setArray ("hands")
.append ("left")
.append ("right");
}
// Output is the same.
// Add an object.
{
Object::Root root (writer);
{
auto object = root.setObject ("hands");
object["left"] = false;
object["right"] = true;
}
}
// Outputs {"hands":{"left":false,"right":true}}
// Same, using chaining.
{
Object::Root (writer)
.setObject ("hands")
.set ("left", false)
.set ("right", true);
}
// Outputs {"hands":{"left":false,"right":true}}
Typical ways to make mistakes and get a std::logic_error:
Writer writer;
Object::Root root (writer);
// Repeat a tag.
{
root ["hello"] = "world";
root ["hello"] = "there"; // THROWS! in a debug build.
}
// Open a subcollection, then set something else.
{
auto object = root.setObject ("foo");
root ["hello"] = "world"; // THROWS!
}
// Open two subcollections at a time.
{
auto object = root.setObject ("foo");
auto array = root.setArray ("bar"); // THROWS!!
}
For more examples, check the unit tests.
*/
class Collection
{
public:
Collection(Collection&& c) noexcept;
Collection&
operator=(Collection&& c) noexcept;
Collection() = delete;
~Collection();
protected:
// A null parent means "no parent at all".
// Writers cannot be null.
Collection(Collection* parent, Writer*);
void
checkWritable(std::string const& label);
Collection* parent_;
Writer* writer_;
bool enabled_;
};
class Array;
//------------------------------------------------------------------------------
/** Represents a JSON object being written to a Writer. */
class Object : protected Collection
{
public:
/** Object::Root is the only Collection that has a public constructor. */
class Root;
/** Set a scalar value in the Object for a key.
A JSON scalar is a single value - a number, string, boolean, nullptr or
a Json::Value.
`set()` throws an exception if this object is disabled (which means that
one of its children is enabled).
In a debug build, `set()` also throws an exception if the key has
already been set() before.
An operator[] is provided to allow writing `object["key"] = scalar;`.
*/
template <typename Scalar>
void
set(std::string const& key, Scalar const&);
void
set(std::string const& key, Json::Value const&);
// Detail class and method used to implement operator[].
class Proxy;
Proxy
operator[](std::string const& key);
Proxy
operator[](Json::StaticString const& key);
/** Make a new Object at a key and return it.
This Object is disabled until that sub-object is destroyed.
Throws an exception if this Object was already disabled.
*/
Object
setObject(std::string const& key);
/** Make a new Array at a key and return it.
This Object is disabled until that sub-array is destroyed.
Throws an exception if this Object was already disabled.
*/
Array
setArray(std::string const& key);
protected:
friend class Array;
Object(Collection* parent, Writer* w) : Collection(parent, w)
{
}
};
class Object::Root : public Object
{
public:
/** Each Object::Root must be constructed with its own unique Writer. */
Root(Writer&);
};
//------------------------------------------------------------------------------
/** Represents a JSON array being written to a Writer. */
class Array : private Collection
{
public:
/** Append a scalar to the Array.
Throws an exception if this array is disabled (which means that one of
its sub-collections is enabled).
*/
template <typename Scalar>
void
append(Scalar const&);
/**
Appends a Json::Value to an array.
Throws an exception if this Array was disabled.
*/
void
append(Json::Value const&);
/** Append a new Object and return it.
This Array is disabled until that sub-object is destroyed.
Throws an exception if this Array was disabled.
*/
Object
appendObject();
/** Append a new Array and return it.
This Array is disabled until that sub-array is destroyed.
Throws an exception if this Array was already disabled.
*/
Array
appendArray();
protected:
friend class Object;
Array(Collection* parent, Writer* w) : Collection(parent, w)
{
}
};
//------------------------------------------------------------------------------
// Generic accessor functions to allow Json::Value and Collection to
// interoperate.
/** Add a new subarray at a named key in a Json object. */
Json::Value&
setArray(Json::Value&, Json::StaticString const& key);
/** Add a new subarray at a named key in a Json object. */
Array
setArray(Object&, Json::StaticString const& key);
/** Add a new subobject at a named key in a Json object. */
Json::Value&
addObject(Json::Value&, Json::StaticString const& key);
/** Add a new subobject at a named key in a Json object. */
Object
addObject(Object&, Json::StaticString const& key);
/** Append a new subarray to a Json array. */
Json::Value&
appendArray(Json::Value&);
/** Append a new subarray to a Json array. */
Array
appendArray(Array&);
/** Append a new subobject to a Json object. */
Json::Value&
appendObject(Json::Value&);
/** Append a new subobject to a Json object. */
Object
appendObject(Array&);
/** Copy all the keys and values from one object into another. */
void
copyFrom(Json::Value& to, Json::Value const& from);
/** Copy all the keys and values from one object into another. */
void
copyFrom(Object& to, Json::Value const& from);
/** An Object that contains its own Writer. */
class WriterObject
{
public:
WriterObject(Output const& output)
: writer_(std::make_unique<Writer>(output))
, object_(std::make_unique<Object::Root>(*writer_))
{
}
WriterObject(WriterObject&& other) = default;
Object*
operator->()
{
return object_.get();
}
Object&
operator*()
{
return *object_;
}
private:
std::unique_ptr<Writer> writer_;
std::unique_ptr<Object::Root> object_;
};
WriterObject
stringWriterObject(std::string&);
//------------------------------------------------------------------------------
// Implementation details.
// Detail class for Object::operator[].
class Object::Proxy
{
private:
Object& object_;
std::string const key_;
public:
Proxy(Object& object, std::string const& key);
template <class T>
void
operator=(T const& t)
{
object_.set(key_, t);
// Note: This function shouldn't return *this, because it's a trap.
//
// In Json::Value, foo[jss::key] returns a reference to a
// mutable Json::Value contained _inside_ foo. But in the case of
// Json::Object, where we write once only, there isn't any such
// reference that can be returned. Returning *this would return an
// object "a level higher" than in Json::Value, leading to obscure bugs,
// particularly in generic code.
}
};
//------------------------------------------------------------------------------
template <typename Scalar>
void
Array::append(Scalar const& value)
{
checkWritable("append");
if (writer_)
writer_->append(value);
}
template <typename Scalar>
void
Object::set(std::string const& key, Scalar const& value)
{
checkWritable("set");
if (writer_)
writer_->set(key, value);
}
inline Json::Value&
setArray(Json::Value& json, Json::StaticString const& key)
{
return (json[key] = Json::arrayValue);
}
inline Array
setArray(Object& json, Json::StaticString const& key)
{
return json.setArray(std::string(key));
}
inline Json::Value&
addObject(Json::Value& json, Json::StaticString const& key)
{
return (json[key] = Json::objectValue);
}
inline Object
addObject(Object& object, Json::StaticString const& key)
{
return object.setObject(std::string(key));
}
inline Json::Value&
appendArray(Json::Value& json)
{
return json.append(Json::arrayValue);
}
inline Array
appendArray(Array& json)
{
return json.appendArray();
}
inline Json::Value&
appendObject(Json::Value& json)
{
return json.append(Json::objectValue);
}
inline Object
appendObject(Array& json)
{
return json.appendObject();
}
} // namespace Json
#endif


@@ -234,7 +234,7 @@ inline void
 check(bool condition, std::string const& message)
 {
     if (!condition)
-        ripple::Throw<std::logic_error>(message);
+        xrpl::Throw<std::logic_error>(message);
 }
 } // namespace Json

Some files were not shown because too many files have changed in this diff.