Compare commits

...

193 Commits

Author SHA1 Message Date
Nicholas Dudfield
83f6bc64e1 fix: restore ninja -v flag for compile command visibility
Removed in b24e4647b under the naive assumption we were "past debugging phase".

Reality: debugging is never done, verbose output is invaluable, costs nothing.
Keep -v flag permanently.
2025-10-31 13:09:53 +07:00
Nicholas Dudfield
be6fad9692 fix: revert to PID-based temp files (was working before)
Reverts the unnecessary mktemp change from 638cb0afe that broke cache saving.

What happened:
- Original delta code used $$ (PID) for temp files: DELTA_TARBALL="/tmp/...-$$.tar.zst"
- This creates a STRING, not a file - zstd creates the file when writing
- When removing deltas (638cb0afe), I unnecessarily changed to mktemp for "better practice"
- mktemp CREATES an empty file - zstd refuses to overwrite it
- Result: "already exists; not overwritten" error

Why it seemed to work:
- Immutability check skipped save for existing caches
- Upload code path never executed during testing
- Bug only appeared when actually trying to create new cache

The fix:
- Revert to PID-based naming ($$) that was working
- Don't fix what isn't broken

Applies to both save and restore actions for consistency.
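
A minimal sketch of the two patterns side by side (paths illustrative; zstd's default refusal to overwrite an existing output file assumed):

  # mktemp CREATES an empty file up front, so zstd balks at the write:
  DELTA_TARBALL=$(mktemp /tmp/xahau-cache-XXXXXX.tar.zst)
  tar -cf - . | zstd -o "${DELTA_TARBALL}"    # "already exists; not overwritten"

  # $$ (PID) builds only a STRING; zstd creates the file itself:
  DELTA_TARBALL="/tmp/xahau-cache-$$.tar.zst"
  tar -cf - . | zstd -o "${DELTA_TARBALL}"    # works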
2025-10-31 12:10:13 +07:00
Nicholas Dudfield
b24e4647ba fix: configure ccache after cache restore to prevent stale config
Same ordering bug as Conan profile (Session 8) - ccache config was being
created in workflow BEFORE cache restore, causing cached ccache.conf to
overwrite fresh configuration.

Changes:
- Build action: Add ccache config inputs (max_size, hash_dir, compiler_check)
- Build action: Configure ccache AFTER cache restore (overwrites cached config)
- Build action: Add "Show ccache config before build" step (debugging aid)
- Build action: Remove debug steps (past debugging phase)
- Build action: Remove ninja -v flag (past debugging phase)
- Nix workflow: Remove "Configure ccache" step (now handled in build action)
- macOS workflow: Remove "Configure ccache" step (now handled in build action)
- macOS workflow: Add missing stdlib and AWS credentials to build step
- Delete unused xahau-configure-ccache action (logic moved to build action)

Flow now matches Conan pattern:
1. Restore cache (includes potentially stale config)
2. Configure ccache (overwrites with fresh config: 2G max, hash_dir=true, compiler_check=content)
3. Show config (verification)
4. Build

This ensures fresh ccache configuration for each job, preventing issues
from cached config files with different settings.
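
The corrected ordering, sketched as shell steps (ccache's --set-config interface assumed; values from this message):

  # 1) restore ~/.ccache from S3 (may include a stale ccache.conf)
  # 2) configure AFTER restore so this job's settings win
  ccache --set-config=max_size=2G
  ccache --set-config=hash_dir=true
  ccache --set-config=compiler_check=content
  # 3) show the effective config as a verification step
  ccache --show-config
  # 4) build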
2025-10-31 11:03:12 +07:00
Nicholas Dudfield
638cb0afe5 refactor: remove OverlayFS delta caching entirely
THE GREAT CULLING: Remove all OverlayFS and delta caching logic.

After extensive investigation and testing, we determined that OverlayFS
file-level layering is fundamentally incompatible with ccache's access
patterns:

- ccache opens files with O_RDWR → kernel must provide writable file handle
- OverlayFS must copy files to upper layer immediately (can't wait)
- Even with metacopy=on, metadata-only files still appear in upper layer
- Result: ~366MB deltas instead of tiny incremental diffs

The fundamental constraint: cannot have all three of:
1. Read-only lower layer (for base sharing)
2. Writable file handles (for O_RDWR)
3. Minimal deltas (for efficient caching)

Changes:
- Removed all OverlayFS mounting/unmounting logic
- Removed workspace and registry tracking
- Removed delta creation and restoration
- Removed use-deltas parameter
- Simplified to direct tar/extract workflow

Before: 726 lines across cache actions
After:  321 lines (a 55% reduction)

Benefits:
- Simpler architecture (direct tar/extract)
- More maintainable (less code, less complexity)
- More reliable (fewer moving parts)
- Same performance (base-only was already used)
- Clear path forward (restic/borg for future optimization)

Current state works great:
- Build times: 20-30 min → 2-5 min (80% improvement)
- Cache sizes: ~323-609 MB per branch (with zst compression)
- S3 costs: acceptable for current volume

If bandwidth costs become problematic, migrate to restic/borg for
chunk-level deduplication (completely different architecture).
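
The copy-up behavior is easy to reproduce outside CI; a sketch assuming root and a kernel with OverlayFS (paths illustrative):

  mkdir -p /tmp/{lower,upper,work,merged}
  echo cached-object > /tmp/lower/hit.o
  sudo mount -t overlay overlay \
    -o lowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work /tmp/merged
  # Open for write without writing a byte (ccache uses O_RDWR):
  dd if=/dev/null of=/tmp/merged/hit.o conv=notrunc status=none
  ls /tmp/upper    # hit.o has been copied up in full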
2025-10-31 10:30:31 +07:00
Nicholas Dudfield
bd384e6bc1 Revert "feat: enable metacopy=on to test metadata-only copy-up"
This reverts commit 4c546e5d91.
2025-10-31 09:51:27 +07:00
Nicholas Dudfield
4c546e5d91 feat: enable metacopy=on to test metadata-only copy-up
Mount OverlayFS with metacopy=on option (kernel 4.19+, supported on ubuntu-22.04).
This prevents full file copy-up when files are opened with O_RDWR but not modified.

Expected behavior:
- ccache opens cache files with write access
- OverlayFS creates metadata-only entry in upper layer
- Full copy-up only happens if data is actually written
- Should dramatically reduce delta sizes from ~324 MB to ~KB

Re-enabled use-deltas for ccache to test this optimization.
Conan remains base-only (hash-based keys mean exact match most of the time).

If successful, deltas should be tiny for cache hit scenarios.
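
A sketch of the mount with the option enabled (paths illustrative; the /sys knob is the overlay module's default setting):

  cat /sys/module/overlay/parameters/metacopy    # module default (often N)
  sudo mount -t overlay overlay \
    -o lowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work,metacopy=on \
    /tmp/merged
  # With metacopy=on, an open-for-write should leave only a metadata
  # stub in the upper layer until data is actually written.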
2025-10-31 09:36:09 +07:00
Nicholas Dudfield
28727b3f86 perf: disable delta caching (use base-only mode)
OverlayFS operates at FILE LEVEL:
- File modified by 8 bytes → Copy entire 500 KB file to delta
- ccache updates LRU metadata (8 bytes) on every cache hit
- 667 cache hits = 667 files copied = 324 MB delta (nearly full cache)

What we need is BYTE LEVEL deltas:
- File modified by 8 bytes → Delta is 8 bytes
- 100% cache hit scenario: 10 KB delta (not 324 MB)

OverlayFS is the wrong abstraction for ccache. Binary diff tools (rsync,
xdelta3, borg) would work better, but require significant rework.

For now: Disable deltas, use base-only caching
- Base cache restore still provides massive speed boost (2-5 min vs 20-30)
- Simpler, more reliable
- Saves S3 bandwidth (no 324 MB delta uploads)

Future: Research binary diff strategies (see Gemini prompt in session notes)
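
The arithmetic behind those numbers, as a quick sanity check (sizes approximate):

  echo "$(( 667 * 500 / 1024 )) MB copied"     # ~325 MB delta from full-file copy-up
  echo "$(( 667 * 8 )) bytes really changed"   # ~5 KB of LRU metadata updates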
2025-10-31 08:11:47 +07:00
Nicholas Dudfield
a4f96a435a fix: don't override ccache's default cache_dir
PROBLEM: Setting cache_dir='~/.ccache' explicitly stores the
LITERAL tilde in ccache.conf, which might not expand properly.

SOLUTION: Don't set cache_dir at all - let ccache use its default
- Linux default: ~/.ccache
- ccache will expand the tilde correctly when using default
- Only configure max_size, hash_dir, compiler_check
- Remove CCACHE_DIR env var (not needed with default)

This should fix the issue where ccache runs but creates no cache files.
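
A sketch of the suspect state versus the fix (ccache's --set-config/--get-config interface assumed; whether a stored tilde expands is version-dependent, which is exactly why the commit stops setting it):

  # quoting keeps the tilde out of shell expansion, so the literal
  # string "~/.ccache" lands in ccache.conf:
  ccache --set-config=cache_dir='~/.ccache'
  ccache --get-config=cache_dir    # prints the stored value: ~/.ccache

  # the fix: leave cache_dir unset and let ccache's built-in default
  # resolve to $HOME/.ccache on Linux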
2025-10-30 17:58:08 +07:00
Nicholas Dudfield
d0f63cc2d1 debug: add ccache directory inspection after build
Add comprehensive debug output to inspect ~/.ccache contents:
- Full recursive directory listing
- Disk space check (rule out disk full)
- ccache config verification
- Directory sizes
- List actual files (distinguish config from cache data)

This will help diagnose why ccache is being invoked on every
compilation but not creating any cache files (0.0 GB).
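
The inspection boils down to a handful of standard commands; a sketch of what such a debug step might run:

  ls -laR ~/.ccache                    # full recursive directory listing
  df -h "${HOME}"                      # disk space check (rule out disk full)
  ccache --show-config                 # config verification
  du -sh ~/.ccache                     # directory size
  find ~/.ccache -type f | head -50    # actual files: config vs cache data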
2025-10-30 17:53:33 +07:00
Nicholas Dudfield
2433bfe277 debug: add ninja verbose output to see compile commands
Temporarily adding -v flag to ninja builds to see actual compile
commands. This will show whether ccache is actually being invoked
or if the CMAKE_C_COMPILER_LAUNCHER setting isn't taking effect.

We should see either:
  ccache /usr/bin/g++-13 -c file.cpp  (working)
  /usr/bin/g++-13 -c file.cpp         (not working)

This is temporary for debugging - remove once ccache issue resolved.
2025-10-30 17:04:00 +07:00
Nicholas Dudfield
ef40a7f351 refactor: move Conan profile creation after cache restore
PROBLEM: Profile was created before cache restore, then immediately
overwritten by cached profile. This meant we were using a potentially
stale cached profile instead of fresh configuration.

SOLUTION: Move profile creation into dependencies action after restore
- Dependencies action now takes compiler params as inputs
- Profile created AFTER cache restore (overwrites any cached profile)
- Ensures fresh profile with correct compiler settings for each job

CHANGES:
- Dependencies action: Add inputs (os, arch, compiler, compiler_version, cc, cxx)
- Dependencies action: Add 'Configure Conan' step after cache restore
- Dependencies action: Support both Linux and macOS profile generation
- Nix workflow: Remove 'Configure Conan' step, pass compiler params
- macOS workflow: Detect compiler version, pass to dependencies action

Benefits:
- Profile always fresh and correct for current matrix config
- Cache still includes .conan.db and other important files
- Self-contained dependencies action (easier to understand)
- Works for both Linux (with explicit cc/cxx) and macOS (auto-detect)
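
In Conan 2 terms the "fresh profile after restore" step can be as small as the following (CLI verbs per Conan 2; exact profile contents vary per matrix job):

  # runs AFTER the ~/.conan2 cache restore, clobbering any cached profile
  conan profile detect --force
  conan profile show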
2025-10-30 16:41:43 +07:00
Nicholas Dudfield
a4a4126bdc fix: indent heredoc content for valid YAML syntax
The heredoc content inside the run block needs proper indentation
to be valid YAML. Without indentation, the YAML parser treats the
lines as top-level keys, causing syntax errors.

The generated CMake file will have leading spaces, but CMake handles
this fine (whitespace-tolerant parser).
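
A minimal sketch of the valid shape (step and file names illustrative):

  - name: Generate CMake file
    shell: bash
    run: |
      cat > wrapper_toolchain.cmake <<'EOF'
        # indented two extra spaces so every line stays inside the
        # YAML block scalar; CMake ignores the leading whitespace
        include("${CMAKE_CURRENT_LIST_DIR}/conan_toolchain.cmake")
      EOF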
2025-10-30 16:17:40 +07:00
Nicholas Dudfield
0559b6c418 fix: enable ccache for main app via wrapper toolchain
PROBLEM: ccache was never being used (all builds compiling from scratch)
- CMake command-line args (-DCMAKE_C_COMPILER_LAUNCHER=ccache) were
  being overridden by Conan's conan_toolchain.cmake
- This toolchain file loads AFTER command-line args and resets settings
- Result: Empty ccache entries (~200 bytes) in both old and new cache

SOLUTION: Wrapper toolchain that overlays ccache on Conan's toolchain
- Create wrapper_toolchain.cmake that includes Conan's toolchain first
- Then sets CMAKE_C_COMPILER_LAUNCHER and CMAKE_CXX_COMPILER_LAUNCHER
- This happens AFTER Conan's toolchain loads, so settings stick
- Only affects main app build (NOT Conan dependency builds)

CHANGES:
- Build action: Create wrapper toolchain when ccache enabled
- Build action: Use CMAKE_CURRENT_LIST_DIR for correct relative path
- Build action: Remove broken CCACHE_ARGS logic (was being overridden)
- Build action: Use ${TOOLCHAIN_FILE} variable instead of hardcoded path

This approach:
- Conan dependency builds: Clean (no ccache overhead)
- Main xahaud build: Uses ccache via wrapper
- Separation: Build action controls ccache, not Conan profile
- Simple: No profile regeneration, just one wrapper file
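
A sketch of the wrapper and its use (file name from this message; the cmake invocation is illustrative and assumes the wrapper is written next to conan_toolchain.cmake):

  TOOLCHAIN_FILE=wrapper_toolchain.cmake
  cat > "${TOOLCHAIN_FILE}" <<'EOF'
  # Load Conan's toolchain first, then overlay the ccache launchers so
  # they are set AFTER conan_toolchain.cmake runs and therefore stick.
  include("${CMAKE_CURRENT_LIST_DIR}/conan_toolchain.cmake")
  set(CMAKE_C_COMPILER_LAUNCHER ccache)
  set(CMAKE_CXX_COMPILER_LAUNCHER ccache)
  EOF
  cmake -G Ninja -DCMAKE_TOOLCHAIN_FILE="${TOOLCHAIN_FILE}" ..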
2025-10-30 16:06:45 +07:00
Nicholas Dudfield
f8d1a6f2b4 fix: normalize paths in cache actions to fix bootstrap detection
Path comparison was failing when registry had expanded paths
(/home/runner/.ccache) but input had unexpanded paths (~/.ccache),
causing bootstrap mode to not be detected.

Now both restore and save actions consistently expand tildes to
absolute paths before writing to or reading from the mount registry.
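
The normalization mirrors the logic visible in the restore action further down this diff:

  # expand a leading tilde, then resolve to a canonical absolute path
  if [[ "${TARGET_PATH}" == ~* ]]; then
    TARGET_PATH="${HOME}${TARGET_PATH:1}"
  fi
  TARGET_PATH=$(realpath -m "${TARGET_PATH}")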
2025-10-30 13:25:55 +07:00
Nicholas Dudfield
c46ede7c8f chore: bump CACHE_VERSION to 3 for fresh bootstrap
Triggering clean rebuild after S3 cache cleared and ccache simplified to single directory.
2025-10-30 11:57:02 +07:00
Nicholas Dudfield
0e2bc365ea refactor: simplify ccache to single directory with restore-keys fallback
Removed split ccache configuration (~/.ccache-main + ~/.ccache-current).

The split had an edge case: building a feature branch before main branch
cache is primed results in double ccache disk usage:
- Feature branch builds first → populates ~/.ccache-current (2G max_size)
- Main branch builds later → populates ~/.ccache-main (2G max_size)
- Next feature branch build → restores both caches (4G total)

This doubles ccache disk usage unnecessarily on GitHub-hosted runners.

New single-directory approach:
- Single ~/.ccache directory for all branches
- Branch-specific keys with restore-keys fallback to main branch
- Cache actions handle cross-branch sharing via partial-match mode
- Simpler and prevents double disk usage edge case

Changes:
- xahau-configure-ccache: Single cache_dir input, removed branch logic
- xahau-ga-build: Single restore/save step, uses restore-keys for fallback
- xahau-ga-nix: Removed is_main_branch parameter

Disk usage: 2G max (vs 4G worst case with split)
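
In actions/cache-style YAML the single-directory scheme looks roughly like this (key expressions and branch names illustrative):

  with:
    path: ~/.ccache
    key: ccache-${{ runner.os }}-${{ github.ref_name }}
    restore-keys: |
      ccache-${{ runner.os }}-dev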
2025-10-30 11:24:14 +07:00
Nicholas Dudfield
446bc76b69 fix: remove broken cache save conditions for Conan
The check-conanfile-changes logic prevented cache saves on feature
branches unless conanfile changed, but this was broken because:
1. Cache keys don't include branch name (shared across branches)
2. S3 immutability makes this unnecessary (first-write-wins)
3. It prevented bootstrap saves on feature branches

Also removed unused safe-branch step (dead code from copy-paste)
2025-10-30 10:35:37 +07:00
Nicholas Dudfield
a0c38a4fb3 fix: disable deltas for Conan cache (base-only mode)
Conan dependencies are relatively static, so use base-only mode
instead of delta caching to avoid unnecessary complexity
2025-10-30 10:03:48 +07:00
Nicholas Dudfield
631650f7eb feat(wip): add nd-experiment-overlayfs-2025-10-29 to nix push 2025-10-30 09:56:33 +07:00
Nicholas Dudfield
0b31d8e534 feat: replace actions/cache with custom S3+OverlayFS cache
- Updated xahau-ga-dependencies to use xahau-actions-cache-restore/save
- Updated xahau-ga-build to use custom cache for ccache (4 operations)
- Added AWS credential inputs to both actions
- Main workflow now passes AWS secrets to cache actions
- Removed legacy ~/.conan path (Conan 2 uses ~/.conan2 only)
- All cache keys remain unchanged for compatibility
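
Wiring-wise, a call site looks roughly like this (action path hypothetical; secret names as used elsewhere in this series):

  - uses: ./.github/actions/xahau-actions-cache-restore
    with:
      path: ~/.ccache
      key: ${{ env.CACHE_KEY }}
      aws-access-key-id: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_KEY_ID }}
      aws-secret-access-key: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_ACCESS_KEY }}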
2025-10-30 09:55:06 +07:00
Nicholas Dudfield
ecf03f4afe test: expect state 1 [ci-clear-cache] 2025-10-30 09:50:57 +07:00
Nicholas Dudfield
b801c2837d test: expect state 3 2025-10-30 09:46:17 +07:00
Nicholas Dudfield
1474e808cb test: expect state 2 2025-10-30 09:43:26 +07:00
Nicholas Dudfield
457e633a81 feat(test): add workflow_dispatch inputs for state machine testing
Instead of requiring commits with message tags, allow manual workflow
triggering with inputs:
- state_assertion: Expected state for validation
- start_state: Force specific starting state
- clear_cache: Clear cache before running

Benefits:
- No need for empty commits to trigger tests
- Easy manual testing via GitHub Actions UI
- Still supports commit message tags for push events
- Workflow inputs take priority over commit tags
2025-10-30 09:40:26 +07:00
Nicholas Dudfield
7ea99caa19 fix(cache): trim whitespace in delta count + use s3api for tagging
Two fixes:
1. Restore: Trim whitespace from DELTA_COUNT to fix integer comparison
2. Save: Use 's3api put-object' instead of 's3 cp' to support --tagging

The aws s3 cp command doesn't support the --tagging parameter.
Switched to s3api put-object which supports tagging directly.
Tags are needed for lifecycle policies (30-day eviction).
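
Both fixes in sketch form (variable names illustrative; the delta tag value matches the lifecycle tagging described later in this series):

  # 1) trim whitespace so the integer comparison doesn't choke
  DELTA_COUNT=$(echo "${DELTA_COUNT}" | xargs)

  # 2) s3api put-object accepts --tagging; plain `aws s3 cp` does not
  aws s3api put-object \
    --bucket "${S3_BUCKET}" \
    --key "${CACHE_KEY}-delta-${TIMESTAMP}.tar.zst" \
    --body "${DELTA_TARBALL}" \
    --tagging 'type=delta-archive'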
2025-10-30 09:29:02 +07:00
Nicholas Dudfield
3e5c15c172 fix(cache): handle empty cache gracefully in [ci-clear-cache]
When [ci-clear-cache] is used but no cache exists yet, grep returns
exit code 1 which causes script failure with set -e.

Add || echo "0" to handle case where no deltas exist to delete.
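
The guard in context (listing pattern illustrative; this sketch uses `|| true` since `grep -c` already prints the zero that the commit's `|| echo "0"` supplies):

  # grep exits 1 on zero matches, which aborts the script under set -e
  DELTA_COUNT=$(aws s3 ls "s3://${S3_BUCKET}/" | grep -c "${CACHE_KEY}-delta" || true)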
2025-10-30 09:25:21 +07:00
Nicholas Dudfield
52b4fb503c feat(cache): implement [ci-clear-cache] tag + auto-detecting state machine test
Cache Clearing Feature:
- Add [ci-clear-cache] commit message tag detection in restore action
- Deletes base + all deltas when tag present
- Implicit access via github.event.head_commit.message env var
- No workflow changes needed (action handles automatically)
- Commit message only (not PR title) - one-time action

State Machine Test Workflow:
- Auto-detects state by counting state files (state0.txt, state1.txt, etc.)
- Optional [state:N] assertions validate detected == expected
- [start-state:N] forces specific state for scenario testing
- Dual validation: local cache state AND S3 objects
- 4 validation checkpoints: S3 before, local after restore, after build, S3 after save
- Self-documenting: prints next steps after each run
- Supports [ci-clear-cache] integration

Usage:
  # Auto-advance (normal)
  git commit -m 'continue testing'

  # With assertion
  git commit -m 'test delta [state:2]'

  # Clear and restart
  git commit -m 'fresh start [ci-clear-cache]'

  # Jump to scenario
  git commit -m 'test from state 3 [start-state:3]'
2025-10-30 09:19:12 +07:00
Nicholas Dudfield
98123fa934 feat(cache): implement inline delta cleanup (keep only 1 per key)
- Add automatic cleanup after each delta upload
- Query all deltas for key, sort by LastModified
- Keep only latest (just uploaded), delete all older ones
- Matches restore logic (only uses latest delta)
- Minimal storage: 1 delta per key (~2GB) vs unbounded growth
- Simpler than keeping N: restore never needs older deltas
- Concurrency-safe (idempotent batch deletes)
- Eliminates need for separate cleanup workflow

Rationale: Since restore only ever uses the single latest delta,
keeping historical deltas adds complexity without benefit. This
matches GitHub Actions semantics (one 'latest' per key).
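
A sketch of the cleanup (JMESPath query assumed; key prefix illustrative):

  # list deltas for this key oldest-first, drop the newest (just
  # uploaded), delete the rest - idempotent, hence concurrency-safe
  aws s3api list-objects-v2 \
    --bucket "${S3_BUCKET}" --prefix "${CACHE_KEY}-delta-" \
    --query 'sort_by(Contents, &LastModified)[:-1].[Key]' --output text |
  while read -r old_key; do
    [ "${old_key}" = "None" ] && continue    # no Contents at all
    aws s3 rm "s3://${S3_BUCKET}/${old_key}" --region "${S3_REGION}"
  done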
2025-10-29 14:51:28 +07:00
Nicholas Dudfield
ce7b1c4f1d feat: add custom S3+OverlayFS cache actions with configurable delta support
Implements drop-in replacement for actions/cache using S3 backend and OverlayFS for delta caching:

- xahau-actions-cache-restore: Downloads immutable base + optional latest delta
- xahau-actions-cache-save: Saves immutable bases (bootstrap/partial-match) or timestamped deltas (exact-match)

Key features:
- Immutable bases: One static base per key (first-write-wins, GitHub Actions semantics)
- Timestamped deltas: Always-timestamped to eliminate concurrency issues
- Configurable use-deltas parameter (default true):
  - true: For symbolic keys (branch-based) - massive bandwidth savings via incremental deltas
  - false: For content-based keys (hash-based) - base-only mode, no delta complexity
- Three cache modes: bootstrap, partial-match (restore-keys), exact-match
- OverlayFS integration: Automatic delta extraction via upperdir, whiteout file support
- S3 lifecycle ready: Bases tagged 'type=base', deltas tagged 'type=delta-archive'

Decision rule for use-deltas:
- Content-based discriminator (hashFiles, commit SHA) → use-deltas: false
- Symbolic discriminator (branch name, tag, PR) → use-deltas: true
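
Applied to this repo's two cache types, the rule reads roughly as follows (key expressions illustrative):

  # content-based key: exact match or nothing - deltas add no value
  key: conan-${{ hashFiles('conanfile.py') }}
  use-deltas: 'false'

  # symbolic key: contents drift between runs - deltas pay off
  key: ccache-${{ github.ref_name }}
  use-deltas: 'true'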

Also disables existing workflows temporarily during development.
2025-10-29 13:07:40 +07:00
Nicholas Dudfield
e062dcae58 feat(wip): comment out unused secret encryption 2025-10-29 09:05:17 +07:00
Nicholas Dudfield
a9d284fec1 feat(wip): use new key names 2025-10-29 09:02:24 +07:00
Nicholas Dudfield
065d0c3e07 feat(wip): remove currently unused workflows 2025-10-29 08:57:21 +07:00
Nicholas Dudfield
4fda40b709 test: add S3 upload to overlayfs delta test
- Upload delta tarball to S3 bucket
- Test file: hello-world-first-test.tar.gz
- Uses new secret names: XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_KEY_ID/ACCESS_KEY
- Verifies upload with aws s3 ls
- Complete end-to-end test: OverlayFS → tarball → S3
2025-10-29 08:49:05 +07:00
Nicholas Dudfield
6014356d91 test: add encrypted secrets test to overlayfs workflow
- Generate random encryption key and store in GitHub Secrets via gh CLI
- Encrypt test message with GPG and commit to repo
- Decrypt in workflow using key from secrets and echo result
- Demonstrates encrypted secrets approach for SSH keys
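
A sketch of the round trip (secret and file names hypothetical; loopback pinentry assumed for batch-mode gpg):

  # one-time, local: mint a key, store it as a secret, commit the ciphertext
  openssl rand -hex 32 > /tmp/key.txt
  gh secret set TEST_DECRYPT_KEY < /tmp/key.txt
  echo 'hello from CI' | gpg --batch --symmetric --pinentry-mode loopback \
    --passphrase-file /tmp/key.txt -o secret.msg.gpg

  # in the workflow: decrypt with the key from GitHub Secrets
  gpg --batch --decrypt --pinentry-mode loopback \
    --passphrase "${TEST_DECRYPT_KEY}" secret.msg.gpg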
2025-10-29 08:04:28 +07:00
Nicholas Dudfield
d790f97430 feat(wip): experiment overlayfs 2025-10-29 07:52:06 +07:00
tequ
9ed20a4f1c Refactor: SetCron to CronSet (#609) 2025-10-27 14:38:40 +10:00
tequ
89ffc1969b Add Previous fields to ltCron (#611) 2025-10-27 14:36:57 +10:00
tequ
79fdafe638 Support Cron in util_keylet Hook API (#612) 2025-10-27 14:35:01 +10:00
tequ
2a10013dfc Support 'cron' with ledger_entry RPC (#608) 2025-10-24 17:05:14 +10:00
tequ
6f148a8ac7 ExtendedHookState (#406) 2025-10-23 18:57:38 +10:00
tequ
96222baf5e Add hook header generators and CI verification workflow (#597) 2025-10-22 15:25:38 +10:00
Niq Dudfield
74477d2c13 added configurable NuDB block size support in xahaud (#601) 2025-10-22 14:15:12 +10:00
Alloy Networks
9378f1a0ad Update CONTRIBUTING.md (#599) 2025-10-21 14:20:10 +10:00
tequ
6fa6a96e3a Introduce StartTime in CronSet and improve next execution scheduling (#596) 2025-10-21 14:17:53 +10:00
RichardAH
b0fcd36bcd import_vl_keys logic fix (flap fix) (#588) 2025-10-18 16:27:05 +10:00
RichardAH
1ec31e79c9 Cron (on ledger cronjobs) (#590)
Co-authored-by: tequ <git@tequ.dev>
2025-10-17 18:45:16 +10:00
tequ
9c8b005406 fix: improve logging for transaction preflight failures in applyHook.cpp (#566) 2025-10-15 12:33:32 +10:00
tequ
687ccf4203 Remove unused variable enabled in MultiSign_test.cpp (#592) 2025-10-15 12:32:31 +10:00
Niq Dudfield
83f09fd8ab ci: add clang to build matrix [ci-nix-full-matrix] (#569) 2025-10-15 11:26:31 +10:00
tequ
15c7ad6f78 Fix Invalid Tx flags (#514) 2025-10-14 15:35:48 +10:00
Niq Dudfield
1f12b9ec5a feat(logs): add -DBEAST_ENHANCED_LOGGING with file:line numbers for JLOG macro (#552) 2025-10-14 10:44:03 +10:00
Niq Dudfield
ad0531ad6c chore: fix warnings (#509)
Co-authored-by: Denis Angell <dangell@transia.co>
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2025-10-11 11:47:13 +10:00
tequ
e580f7cfc0 chore(vscode): enable format on save in settings.json (#578) 2025-10-11 11:43:50 +10:00
tequ
094f011006 Fix emit Hook API testcase name (#580) 2025-10-11 11:43:09 +10:00
Niq Dudfield
39d1c43901 build: upgrade openssl from 1.1.1u to 3.6.0 (#587)
Updates OpenSSL dependency to the latest 3.x series available on Conan Center.
2025-10-10 19:53:35 +10:00
J. Scott Branson
b3e6a902cb Update Sample Configuration Files in /cfg for Congruence with xahaud (#584) 2025-10-10 14:59:39 +11:00
Niq Dudfield
fa1b93bfd8 build: migrate to conan 2 (#585)
Migrates the build system from Conan 1 to Conan 2
2025-10-10 14:57:46 +11:00
tequ
92e3a927fc refactor KEYLET_LINE in utils_keylet (#502)
Fixes the use of high and low in variable names, as these are determined by ripple::keylet::line processing.

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2025-10-09 21:02:14 +11:00
tequ
8f7ebf0377 Optimize github action cache (#544)
* optimize github action cache

* fix

* refactor: improve github actions cache optimization (#3)

- move ccache configuration logic to dedicated action
- rename conanfile-changed to should-save-conan-cache for clarity

---------

Co-authored-by: Niq Dudfield <ndudfield@gmail.com>
2025-09-08 15:53:40 +10:00
Niq Dudfield
46cf6785ab fix(tests): prevent buffer corruption from concurrent log writes (#565)
std::endl triggers flush() which calls sync() on the shared log buffer.
Multiple threads racing in sync() cause str()/str("") operations to
corrupt buffer state, leading to crashes and double frees.

Added mutex to serialize access to suite.log, preventing concurrent
sync() calls on the same buffer.
2025-09-08 13:57:49 +10:00
Niq Dudfield
3c4c9c87c5 Fix rwdb memory leak with online_delete and remove flatmap (#570)
Co-authored-by: Denis Angell <dangell@transia.co>
2025-08-26 14:00:58 +10:00
Niq Dudfield
7a790246fb fix: upgrade CI to GCC 13 and fix compilation issues, fixes #557 (#559) 2025-08-14 17:41:49 +10:00
Niq Dudfield
1a3d2db8ef fix(ci): export correct snappy version (#546) 2025-08-14 14:01:32 +10:00
tequ
2fc912d54d Make release build use conan deps where possible and hbb 4.0.1 (#516)
Co-authored-by: Denis Angell <dangell@transia.co>
Co-authored-by: Niq Dudfield <ndudfield@gmail.com>
2025-08-14 12:59:57 +10:00
Niq Dudfield
849d447a20 docs(freeze): canceling escrows with deep frozen assets is allowed (#540) 2025-07-09 13:48:59 +10:00
tequ
ee27049687 IOUIssuerWeakTSH (#388) 2025-07-09 13:48:26 +10:00
tequ
60dec74baf Add DeepFreeze test for URIToken (#539) 2025-07-09 12:49:47 +10:00
Denis Angell
9abea13649 Feature Clawback (#534) 2025-07-09 12:48:46 +10:00
Denis Angell
810e15319c Feature DeepFreeze (#536)
---------

Co-authored-by: tequ <git@tequ.dev>
2025-07-09 10:33:08 +10:00
Niq Dudfield
d593f3bef5 fix: provisional PreviousTxn{Id,LedgerSeq} double threading (#515)
---------

Co-authored-by: tequ <git@tequ.dev>
2025-07-08 18:04:39 +10:00
Niq Dudfield
1233694b6c chore: add suspicious_patterns to .scripts/pre-hook and not-suspicious filter (#525)
* chore: add suspicious_patterns to .scripts/pre-hook and not-suspicious filter

* rm: kill annoying checkpatterns job

* chore: cleanup

---------

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2025-07-01 20:58:06 +10:00
tequ
a1d42b7380 Improve unittests (#494)
* Match unit tests on start of test name (#4634)

* For example, without this change, to run the TxQ tests, must specify
  `--unittest=TxQ1,TxQ2` on the command line. With this change, can use
  `--unittest=TxQ`, and both will be run.
* An exact match will prevent any further partial matching.
* This could have some side effects for different tests with a common
  name beginning. For example, NFToken, NFTokenBurn, NFTokenDir. This
  might be useful. If not, the shorter-named test(s) can be renamed. For
  example, NFToken to NFTokens.
* Split the NFToken, NFTokenBurn, and Offer test classes. Potentially speeds
  up parallel tests by a factor of 5.

* SetHook_test, SetHookTSH_test, XahauGenesis_test

---------

Co-authored-by: Ed Hennis <ed@ripple.com>
2025-06-30 10:03:02 +10:00
tequ
f6d2bf819d Fix governance vote purge (#221)
governance hook should be independently and deterministically recompiled before being voted in
2025-06-16 17:12:06 +10:00
Denis Angell
a5ea86fdfc Add Conan Building For Development (#432) 2025-05-14 14:00:20 +10:00
RichardAH
615f56570a Sus pat (#507) 2025-05-01 17:23:56 +10:00
RichardAH
5e005cd6ee remove false positives from sus pat finder (#506) 2025-05-01 09:54:41 +10:00
Denis Angell
80a7197590 fix warnings (#505) 2025-04-30 11:51:58 +02:00
tequ
7b581443d1 Suppress build warning introduced in Catalogue (#499) 2025-04-29 08:25:55 +10:00
tequ
5400f43359 Suppress logs for Catalogue_test, Import_test (#495) 2025-04-24 17:46:09 +10:00
Denis Angell
8cf7d485ab fix: ledger_index (#498) 2025-04-24 16:45:01 +10:00
tequ
372f25d09b Remove #ifndef DEBUG guards and exception handling wrappers (#496) 2025-04-24 16:38:14 +10:00
Denis Angell
401395a204 patch remarks (#497) 2025-04-24 16:36:57 +10:00
tequ
4221dcf568 Add tests for SetRemarks (#491) 2025-04-18 09:34:44 +10:00
tequ
989532702d Update clang-format workflow (#490) 2025-04-17 16:16:59 +10:00
RichardAH
f9cd2e0d21 Remarks amendment (#301)
Co-authored-by: Denis Angell <dangell@transia.co>
2025-04-16 08:42:04 +10:00
tequ
59e334c099 fixRewardClaimFlags (#487) 2025-04-15 20:08:19 +10:00
tequ
9018596532 HookCanEmit (#392) 2025-04-15 13:32:35 +10:00
Niq Dudfield
b827f0170d feat(catalogue): add cli commands and fix file_size (#486)
* feat(catalogue): add cli commands and fix file_size

* feat(catalogue): add cli commands and fix file_size

* feat(catalogue): fix tests

* feat(catalogue): fix tests

* feat(catalogue): use formatBytesIEC

* feat: add file_size_estimated

* feat: add file_size_estimated

* feat: add file_size_estimated
2025-04-15 08:50:15 +10:00
tequ
e4b7e8f0f2 Update sfcodes script (#479) 2025-04-10 09:44:31 +10:00
tequ
1485078d91 Update CHooks build script (#465) 2025-04-09 20:22:34 +10:00
tequ
6625d2be92 Add xpop_slot test (#470) 2025-04-09 20:20:23 +10:00
tequ
2fb5c92140 feat: Run unittests in parallel with Github Actions (#483)
Implement parallel execution for unit tests using Github Actions to improve CI pipeline efficiency and reduce build times.
2025-04-04 19:32:47 +02:00
Niq Dudfield
c4b5ae3787 Fix missing includes in Catalogue.cpp for non-unity builds (#485) 2025-04-04 12:53:45 +10:00
Niq Dudfield
d546d761ce Fix using Status with rpcError (#484) 2025-04-01 21:00:13 +10:00
RichardAH
e84a36867b Catalogue (#443) 2025-04-01 16:47:48 +10:00
Niq Dudfield
0b675465b4 Fix ServerDefinitions_test regression introduced in #475 (#477) 2025-03-19 12:32:27 +10:00
Niq Dudfield
d088ad61a9 Prevent dangling reference in getHash() (#475)
Replace temporary uint256 with static variable when returning fallback hash
to avoid returning a const reference to a local temporary object.
2025-03-18 18:37:18 +10:00
Niq Dudfield
ef77b02d7f CI Release Builder (#455) 2025-03-11 13:19:28 +01:00
RichardAH
7385828983 Touch Amendment (#294) 2025-03-06 08:25:42 +01:00
Niq Dudfield
88b01514c1 fix: remove negative rate test failing on MacOS (#452) 2025-03-03 13:12:13 +01:00
Denis Angell
aeece15096 [fix] github runner (#451)
Co-authored-by: Niq Dudfield <ndudfield@gmail.com>
2025-03-03 09:55:51 +01:00
tequ
89cacb1258 Enhance shell script error handling and debugging on GHA (#447) 2025-02-24 10:33:21 +01:00
tequ
8ccff44e8c Fix Error handling on build action (#412) 2025-02-24 18:16:21 +10:00
tequ
420240a2ab Fixed not to use a large fixed range in the magic_enum. (#436) 2025-02-24 17:46:42 +10:00
Richard Holland
230873f196 debug gh builds 2025-02-06 15:21:37 +11:00
Wietse Wind
1fb1a99ea2 Update build-in-docker.yml 2025-02-05 08:23:49 +01:00
Richard Holland
e0b63ac70e Revert "debug account tx tests under release builder"
This reverts commit da8df63be3.

Revert "add strict filtering to account_tx api (#429)"

This reverts commit 317bd4bc6e.
2025-02-05 14:59:33 +11:00
Richard Holland
da8df63be3 debug account tx tests under release builder 2025-02-04 17:02:17 +11:00
RichardAH
317bd4bc6e add strict filtering to account_tx api (#429) 2025-02-03 17:56:08 +10:00
RichardAH
2fd465bb3f fix20250131 (#428)
Co-authored-by: Denis Angell <dangell@transia.co>
2025-02-03 10:33:19 +10:00
Wietse Wind
fa71bda29c Artifact v4 continue on error 2025-02-01 08:58:13 +01:00
Wietse Wind
412593d7bc Update artifact 2025-02-01 08:57:48 +01:00
Wietse Wind
12d8342c34 Update artifact 2025-02-01 08:57:25 +01:00
tequ
d17f7151ab Fix HookResult(ExitType) when accept() is not called (#415) 2025-01-22 13:33:59 +10:00
tequ
4466175231 Update boost link for build-full.sh (#421) 2025-01-22 08:38:12 +10:00
tequ
621ca9c865 Add space to trace_float log (#424) 2025-01-22 08:34:33 +10:00
tequ
85a752235a add URITokenIssuer to account_flags for account_info (#404) 2024-12-16 16:10:01 +10:00
RichardAH
d878fd4a6e allow multiple datagram monitor endpoints (#408) 2024-12-14 08:44:40 +10:00
Richard Holland
532a471a35 fixReduceImport (#398)
Co-authored-by: Denis Angell <dangell@transia.co>
2024-12-11 13:29:37 +11:00
RichardAH
e9468d8b4a Datagram monitor (#400)
Co-authored-by: Denis Angell <dangell@transia.co>
2024-12-11 13:29:30 +11:00
Denis Angell
9d54da3880 Fix: failing assert (#397) 2024-12-11 13:08:50 +11:00
Ekiserrepé
542172f0a1 Update README.md (#396)
Updated Xaman link.
2024-12-11 13:08:50 +11:00
Richard Holland
e086724772 UDP RPC (admin) support (#390) 2024-12-11 13:08:44 +11:00
RichardAH
21863b05f3 Limit xahau genesis to networks starting with 2133X (#395) 2024-11-23 21:19:09 +10:00
Denis Angell
61ac04aacc Sync: Ripple(d) 1.11.0 (#299)
* Add jss fields used by Clio `nft_info`: (#4320)

Add Clio-specific JSS constants to ensure a common vocabulary of
keywords in Clio and this project. By providing visibility of the full
API keyword namespace, it reduces the likelihood of developers
introducing minor variations on names used by Clio, or unknowingly
claiming a keyword that Clio has already claimed. This change moves this
project slightly away from having only the code necessary for running
the core server, but it is a step toward the goal of keeping this
server's and Clio's APIs similar. The added JSS constants are annotated
to indicate their relevance to Clio.

Clio can be found here: https://github.com/XRPLF/clio

Signed-off-by: ledhed2222 <ledhed2222@users.noreply.github.com>

* Introduce support for a slabbed allocator: (#4218)

When instantiating a large amount of fixed-sized objects on the heap
the overhead that dynamic memory allocation APIs impose will quickly
become significant.

In some cases, allocating a large amount of memory at once and using
a slabbing allocator to carve the large block into fixed-sized units
that are used to service requests for memory out will help to reduce
memory fragmentation significantly and, potentially, improve overall
performance.

This commit introduces a new `SlabAllocator<>` class that exposes an
API that is _similar_ to the C++ concept of an `Allocator` but it is
not meant to be a general-purpose allocator.

It should not be used unless profiling and analysis of specific memory
allocation patterns indicates that the additional complexity introduced
will improve the performance of the system overall, and subsequent
profiling proves it.

A helper class, `SlabAllocatorSet<>` simplifies handling of variably
sized objects that benefit from slab allocations.

This commit incorporates improvements suggested by Greg Popovitch
(@greg7mdp).

Commit 1 of 3 in #4218.

* Optimize `SHAMapItem` and leverage new slab allocator: (#4218)

The `SHAMapItem` class contains a variable-sized buffer that
holds the serialized data associated with a particular item
inside a `SHAMap`.

Prior to this commit, the buffer for the serialized data was
allocated separately. Coupled with the fact that most instances
of `SHAMapItem` were wrapped around a `std::shared_ptr` meant
that an instantiation might result in up to three separate
memory allocations.

This commit switches away from `std::shared_ptr` for `SHAMapItem`
and uses `boost::intrusive_ptr` instead, allowing the reference
count for an instance to live inside the instance itself. Coupled
with using a slab-based allocator to optimize memory allocation
for the most commonly sized buffers, the net result is significant
memory savings. In testing, the reduction in memory usage hovers
between 400MB and 650MB. Other scenarios might result in larger
savings.

In performance testing with NFTs, this commit reduces memory size by
about 15% sustained over long duration.

Commit 2 of 3 in #4218.

* Avoid using std::shared_ptr when not necessary: (#4218)

The `Ledger` class contains two `SHAMap` instances: the state and
transaction maps. Previously, the maps were dynamically allocated using
`std::make_shared` despite the fact that they did not require lifetime
management separate from the lifetime of the `Ledger` instance to which
they belong.

The two `SHAMap` instances are now regular member variables. Some smart
pointers and dynamic memory allocation was avoided by using stack-based
alternatives.

Commit 3 of 3 in #4218.

* Prevent replay attacks with NetworkID field: (#4370)

Add a `NetworkID` field to help prevent replay attacks on and from
side-chains.

The new field must be used when the server is using a network id > 1024.

To preserve legacy behavior, all chains with a network ID less than 1025
retain the existing behavior. This includes Mainnet, Testnet, Devnet,
and hooks-testnet. If `sfNetworkID` is present in any transaction
submitted to any of the nodes on one of these chains, then
`telNETWORK_ID_MAKES_TX_NON_CANONICAL` is returned.

Since chains with a network ID less than 1025, including Mainnet, retain
the existing behavior, there is no need for an amendment.

The `NetworkID` helps to prevent replay attacks because users specify a
`NetworkID` field in every transaction for that chain.

This change introduces a new UINT32 field, `sfNetworkID` ("NetworkID").
There are also three new local error codes for transaction results:

- `telNETWORK_ID_MAKES_TX_NON_CANONICAL`
- `telREQUIRES_NETWORK_ID`
- `telWRONG_NETWORK`

To learn about the other transaction result codes, see:
https://xrpl.org/transaction-results.html

Local error codes were chosen because a transaction is not necessarily
malformed if it is submitted to a node running on the incorrect chain.
This is a local error specific to that node and could be corrected by
switching to a different node or by changing the `network_id` on that
node. See:
https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html

In addition to using `NetworkID`, it is still generally recommended to
use different accounts and keys on side-chains. However, people will
undoubtedly use the same keys on multiple chains; for example, this is
common practice on other blockchain networks. There are also some
legitimate use cases for this.

A `app.NetworkID` test suite has been added, and `core.Config` was
updated to include some network_id tests.

* Fix the fix for std::result_of (#4496)

Newer compilers, such as Apple Clang 15.0, have removed `std::result_of`
as part of C++20. The build instructions provided a fix for this (by
adding a preprocessor definition), but the fix was broken.

This fixes the fix by:
* Adding the `conf` prefix for tool configurations (which had been
  forgotten).
* Passing `extra_b2_flags` to `boost` package to fix its build.
  * Define `BOOST_ASIO_HAS_STD_INVOKE_RESULT` in order to build boost
    1.77 with a newer compiler.

* Use quorum specified via command line: (#4489)

If `--quorum` setting is present on the command line, use the specified
value as the minimum quorum. This allows for the use of a potentially
fork-unsafe quorum, but it is sometimes necessary for small and test
networks.

Fix #4488.

---------

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>

* Fix errors for Clang 16: (#4501)

Address issues related to the removal of `std::{u,bi}nary_function` in
C++17 and some warnings with Clang 16. Some warnings appeared with the
upgrade to Apple clang version 14.0.3 (clang-1403.0.22.14.1).

- `std::{u,bi}nary_function` were removed in C++17. They were empty
  classes with a few associated types. We already have conditional code
  to define the types. Just make it unconditional.
- libc++ checks a cast in an unevaluated context to see if a type
  inherits from a binary function class in the standard library, e.g.
  `std::equal_to`, and this causes an error when the type privately
  inherits from such a class. Change these instances to public
  inheritance.
- We don't need a middle-man for the empty base optimization. Prefer to
  inherit directly from an empty class than from
  `beast::detail::empty_base_optimization`.
- Clang warns when all the uses of a variable are removed by conditional
  compilation of assertions. Add a `[[maybe_unused]]` annotation to
  suppress it.
- As a drive-by clean-up, remove commented code.

See related work in #4486.

* Fix typo (#4508)

* fix!: Prevent API from accepting seed or public key for account (#4404)

The API would allow seeds (and public keys) to be used in place of
accounts at several locations in the API. For example, when calling
account_info, you could pass `"account": "foo"`. The string "foo" is
treated like a seed, so the method returns `actNotFound` (instead of
`actMalformed`, as most developers would expect). In the early days,
this was a convenience to make testing easier. However, it allows for
poor security practices, so it is no longer a good idea. Allowing a
secret or passphrase is now considered a bug. Previously, it was
controlled by the `strict` option on some methods. With this commit,
since the API does not interpret `account` as `seed`, the option
`strict` is no longer needed and is removed.

Removing this behavior from the API is a [breaking
change](https://xrpl.org/request-formatting.html#breaking-changes). One
could argue that it shouldn't be done without bumping the API version;
however, in this instance, there is no evidence that anyone is using the
API in the "legacy" way. Furthermore, it is a potential security hole,
as it allows users to send secrets to places where they are not needed,
where they could end up in logs, error messages, etc. There's no reason
to take such a risk with a seed/secret, since only the public address is
needed.

Resolves: #3329, #3330, #4337

BREAKING CHANGE: Remove non-strict account parsing (#3330)

* Add nftoken_id, nftoken_ids, offer_id fields for NFTokens (#4447)

Three new fields are added to the `Tx` responses for NFTs:

1. `nftoken_id`: This field is included in the `Tx` responses for
   `NFTokenMint` and `NFTokenAcceptOffer`. This field indicates the
   `NFTokenID` for the `NFToken` that was modified on the ledger by the
   transaction.
2. `nftoken_ids`: This array is included in the `Tx` response for
   `NFTokenCancelOffer`. This field provides a list of all the
   `NFTokenID`s for the `NFToken`s that were modified on the ledger by
   the transaction.
3. `offer_id`: This field is included in the `Tx` response for
   `NFTokenCreateOffer` transactions and shows the OfferID of the
   `NFTokenOffer` created.

The fields make it easier to track specific tokens and offers. The
implementation includes code (by @ledhed2222) from the Clio project to
extract NFTokenIDs from mint transactions.

* Ensure that switchover vars are initialized before use: (#4527)

Global variables in different TUs are initialized in an undefined order.
At least one global variable was accessing a global switchover variable.
This caused the switchover variable to be accessed in an uninitialized
state.

Since the switchover is always explicitly set before transaction
processing, this bug can not effect transaction processing, but could
effect unit tests (and potentially the value of some global variables).
Note: at the time of this patch the offending bug is not yet in
production.

* Move faulty assert (#4533)

This assert was put in the wrong place, but it only triggers if shards
are configured. This change moves the assert to the right place and
updates it to ensure correctness.

The assert could be hit after the server downloads some shards. It may
be necessary to restart after the shards are downloaded.

Note that asserts are normally checked only in debug builds, so release
packages should not be affected.

Introduced in: #4319 (66627b26cf)

* Fix unaligned load and stores: (#4528) (#4531)

Misaligned load and store operations are supported by both Intel and ARM
CPUs. However, in C++, these operations are undefined behavior (UB).
Substituting these operations with a `memcpy` fixes this UB. The
compiled assembly code is equivalent to the original, so there is no
performance penalty to using memcpy.

For context: The unaligned load and store operations fixed here were
originally introduced in the slab allocator (#4218).

* Add missing includes for gcc 13.1: (#4555)

gcc 13.1 failed to compile due to missing headers. This patch adds the
needed headers.

* Trivial: add comments for NFToken-related invariants (#4558)

* fix node size estimation (#4536)

Fix a bug in the `NODE_SIZE` auto-detection feature in `Config.cpp`.
Specifically, this patch corrects the calculation for the total amount
of RAM available, which was previously returned in bytes, but is now
being returned in units of the system's memory unit. Additionally, the
patch adjusts the node size based on the number of available hardware
threads of execution.

* fix: remove redundant moves (#4565)

- Resolve gcc compiler warning:
      AccountObjects.cpp:182:47: warning: redundant move in initialization [-Wredundant-move]
  - The std::move() operation on trivially copyable types may generate a
    compile warning in newer versions of gcc.
- Remove extraneous header (unused imports) from a unit test file.

* Revert "Fix the fix for std::result_of (#4496)"

This reverts commit cee8409d60.

* Revert "Fix typo (#4508)"

This reverts commit 2956f14de8.

* clang

* [fold] bad merge

* [fold] fix bad merge

- add back filter for ripple state on account_channels
- add back network id test (env auto adds network id in xahau)

* [fold] fix build error

---------

Signed-off-by: ledhed2222 <ledhed2222@users.noreply.github.com>
Co-authored-by: ledhed2222 <ledhed2222@users.noreply.github.com>
Co-authored-by: Nik Bougalis <nikb@bougalis.net>
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
Co-authored-by: John Freeman <jfreeman08@gmail.com>
Co-authored-by: Mark Travis <mtrippled@users.noreply.github.com>
Co-authored-by: solmsted <steven.olm@gmail.com>
Co-authored-by: drlongle <drlongle@gmail.com>
Co-authored-by: Shawn Xie <35279399+shawnxie999@users.noreply.github.com>
Co-authored-by: Scott Determan <scott.determan@yahoo.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
Co-authored-by: Scott Schurr <scott@ripple.com>
Co-authored-by: Chenna Keshava B S <21219765+ckeshava@users.noreply.github.com>
2024-11-20 10:54:03 +10:00
tequ
57a1329bff Fix lexicographical_compare_three_way build error at macos (#391)
Co-authored-by: Denis Angell <dangell@transia.co>
2024-11-15 08:33:55 +10:00
Denis Angell
daf22b3b85 Fix: RWDB (#389) 2024-11-15 07:31:55 +10:00
RichardAH
2b225977e2 Feature: RWDB (#378)
Co-authored-by: Denis Angell <dangell@transia.co>
2024-11-12 08:55:56 +10:00
Denis Angell
58b22901cb Fix: float_divide rounding error (#351)
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2024-11-09 15:17:00 +10:00
Denis Angell
8ba37a3138 Add Script for SfCode generation (#358)
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
Co-authored-by: tequ <git@tequ.dev>
2024-11-09 14:17:49 +10:00
tequ
8cffd3054d add trace message to exception on etxn_fee_base (#387) 2024-11-09 14:00:59 +10:00
Denis Angell
6b26045cbc Update settings.json (#342) 2024-10-25 11:56:16 +10:00
Wietse Wind
08f13b7cfe Fix account_tx sluggishness as per https://github.com/XRPLF/rippled/commit/2e9261cb (#308) 2024-10-25 11:13:42 +10:00
tequ
766f5d7ee1 Update macro.h (#366) 2024-10-25 10:10:43 +10:00
Wietse Wind
287c01ad04 Improve Admin command RPC Post (#384)
* Improve ADMIN HTTP POST RPC notifications: no queue limit, shorter HTTP call TTL
2024-10-25 10:10:14 +10:00
tequ
4239124750 Update amendments for rippled-standalone.cfg (#385) 2024-10-25 09:10:45 +10:00
RichardAH
1e45d4120c Update to boost186 (#377)
Co-Authored-By: Denis Angell <dangell@transia.co>
2024-10-17 01:29:17 +02:00
Denis Angell
9e446bcc85 Fix: Missing Headers - Linker Errors (#300) 2024-10-16 18:19:21 +10:00
RichardAH
376727d20c use std::lexicographical_compare_three_way for uint spaceship operator (#374)
* use std::lexicographical_compare_three_way for uint spaceship operator

* clang
2024-10-16 11:37:26 +10:00
Richard Holland
d921c87c88 also update max transactions for tx queue 2024-10-11 09:59:04 +11:00
RichardAH
7b94d3d99d increase txn in ledger target to 1000 (#372) 2024-10-11 08:21:23 +10:00
Denis Angell
79d83bd424 fix240911 (#363) 2024-09-11 13:43:03 +10:00
Richard Holland
1a4d54f9d9 force build 2024-09-07 15:41:00 +10:00
Wietse Wind
26cd629d28 Merge/2.2.2 jobqueue (#360)
* Merge fbbea9e6e2

* Merge 7741483894

* clang-format

* Oops

---------

Co-authored-by: Wietse Wind <wrw@Wietses-MacBook-Pro.local>
2024-09-07 15:22:15 +10:00
RichardAH
2fb93f874b fixPageCap (#359)
* page cap fix

---------

Co-authored-by: Denis Angell <dangell@transia.co>
2024-09-07 11:39:24 +10:00
RichardAH
833df20fce Fix240819 (#350)
fix240918
---------

Co-authored-by: Denis Angell <dangell@transia.co>
2024-08-20 09:40:31 +10:00
Wietse Wind
5737c2b6e8 Workaround CentOS7 EOL 2024-08-18 01:50:44 +02:00
Richard Holland
a15d0b2ecc set huge mode nudb cache to 64mib 2024-08-14 16:40:07 +10:00
Denis Angell
18d76d3082 fix delivered amount (#310)
* fix delivered amount
2024-07-16 08:56:30 +10:00
Wietse Wind
849a4435e0 CI Split jobs with prev job dependency & CI on jshooks (#320)
* CI on `jshooks` branch

* CI Split jobs with prev job dependency

* No multi branch worker in parallel

---------

Co-authored-by: Denis Angell <dangell@transia.co>
2024-05-29 13:45:59 +02:00
Wietse Wind
247e9d98bf CI on jshooks branch (#317) 2024-05-24 10:10:40 +10:00
Denis Angell
acd455f5df Fix: Server Definitions Typo (#306) 2024-04-19 07:39:21 +10:00
Denis Angell
6636e3b6fd Add RPC Tests (#295)
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2024-04-18 18:22:46 +10:00
Denis Angell
3c5f118b59 Fix: Add Tx Flags To Server Definitions (#304) 2024-04-18 15:43:48 +10:00
Vasu
88308126cc Removing duplicate macro #define LPAREN ( (#233) 2024-03-25 09:11:21 +11:00
Denis Angell
497e52fcc6 Update Macros (#279)
* add common macros
* remove old/unused macros
2024-03-25 08:43:46 +11:00
Denis Angell
a3852763e7 Fix: Namespace Delete (OwnerCount) (#296)
* fix ns delete owner count
* add a new success code and refactor success checks, limit ns delete operations to 256 entries per txn
---------
Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
2024-03-25 08:37:08 +11:00
RichardAH
7cd8f0a03a ZeroB2M amendment (#293)
* ZeroB2M amendment

Co-authored-by: Denis Angell <dangell@transia.co>
2024-03-22 10:49:35 +11:00
Denis Angell
d24c134612 add emitted order test (#273) 2024-03-11 11:46:03 +11:00
Denis Angell
cdac69a111 Fix: URIToken Test (#254)
* update tests for fixXahauV1
2024-03-11 10:06:15 +11:00
Denis Angell
1500522427 fix tsh on nftoken (#269) 2024-03-11 09:38:45 +11:00
Denis Angell
75aba531d6 Amendment: featureRemit (#278)
* Remit Amendment

Co-authored-by: Denis Angell <dangell@transia.co>

Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
2024-03-11 09:29:39 +11:00
Wietse Wind
caa8b382d8 🤦 2024-02-22 23:28:19 +01:00
Wietse Wind
82e04073be Revert checkout v3 2024-02-14 15:21:43 +01:00
Wietse Wind
e1b78f9682 Do clean 2024-02-14 15:20:24 +01:00
Wietse Wind
901d1d4e8d Update checkout CI to v4 2024-02-14 15:17:45 +01:00
Wietse Wind
aca5241515 Build Container per user 2024-02-14 15:15:12 +01:00
RichardAH
780378c221 fix hook emission ordering (#270) 2024-01-24 19:55:28 +01:00
RichardAH
2dc5e670ac fix buildinfo test (#266) 2024-01-22 12:34:13 +01:00
RichardAH
4dff5a5c8e fix permissions on inject (#264) 2024-01-22 10:48:01 +01:00
Denis Angell
f64e626a3f Fix: TSH Updates & Emitted Txn (#261)
fixXahauV2
* refactor tsh
* add uritoken mint/cancel tsh
* add flags to hookexecutions meta and nonce to hookemissions meta

Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
2024-01-22 10:25:36 +01:00
RichardAH
f21d3e1e97 Merge pull request #260 from Xahau/emit_guard
Fix: EmittedTxn Reliability
2024-01-19 14:05:06 +01:00
Denis Angell
858055c811 clang-format 2024-01-19 11:07:22 +01:00
Denis Angell
5d66f17574 add test 2024-01-19 11:05:45 +01:00
Denis Angell
00328cb782 Merge branch 'dev' into emit_guard 2024-01-19 10:24:34 +01:00
Richard Holland
fdf7ea4174 dbg inject clang + permission check 2024-01-17 18:15:17 +00:00
Richard Holland
7877ed9704 debug txn injector 2024-01-17 18:07:05 +00:00
Richard Holland
17ccec9ac5 Add additional checks for emitted txns 2024-01-17 15:39:02 +00:00
RichardAH
de522ac4ae Merge pull request #255 from Xahau/candidate
Candidate/release/sync
2023-12-29 22:18:04 +01:00
RichardAH
74c83a9271 Merge branch 'release' into candidate 2023-12-29 21:56:05 +01:00
Wietse Wind
66ee96d456 Build on release after all 2023-12-29 15:43:33 +01:00
Wietse Wind
b476aea55b Do not auto build on release 2023-12-29 15:39:48 +01:00
RichardAH
4ad697069f Fix xahau v1audit (#250) 2023-12-27 14:53:38 +01:00
Denis Angell
97acfe9f97 Fix: URIToken Test (#247) 2023-12-22 11:54:54 +01:00
Denis Angell
475b6f7347 Amendment: Fix Xahau v1 (#231)
* FXV1: Meta Amount (#225)

* FXV1: Optional Offer Sequence (#224)

* FXV1: Patch Hooks OwnerDir (#236)

* FXV1:  Fix `Import` Quorum (#235)

* FXV1: Namespace Limit (#220)

* FXV1: allow duplicate entries in genesis mint transactor (#239)

* FXV1: Fix URIToken (#243)

* lite fixes for tsh issues (#244)

Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
2023-12-21 16:21:17 +01:00
RichardAH
64a07e5539 move utf8 checking to header file (#241)
Co-authored-by: Denis Angell <dangell@transia.co>
2023-12-19 19:52:34 +01:00
Denis Angell
c49b7a2301 Fix: Verify Emitted Result on GenesisMint tests (#242) 2023-12-19 10:08:51 +01:00
Denis Angell
ac694c7c90 Add HookParameters fee calc to all txs (#223) 2023-12-07 14:30:30 +01:00
Denis Angell
3be6baded9 update definitions test (#228) 2023-12-07 13:40:18 +01:00
Richard Holland
49798b1792 make server_definitions caching thread_local 2023-12-05 13:47:54 +00:00
Denis Angell
27f43ba9ee TSH Tests (#222) 2023-12-03 10:13:05 +01:00
Wietse Wind
7881b7f69f Build on most cores (nproc / 3 » nproc / 1.337) (#226)
Use those resources! 🎉🚀
2023-12-01 12:47:50 +01:00
Denis Angell
1b9373e220 add array xpop path 2023-10-30 15:53:48 +01:00
438 changed files with 47823 additions and 18771 deletions

.githooks/pre-commit Executable file

@@ -0,0 +1,12 @@
#!/bin/bash
# Pre-commit hook that runs the suspicious patterns check on staged files
# Get the repository's root directory
repo_root=$(git rev-parse --show-toplevel)
# Run the suspicious patterns script in pre-commit mode
"$repo_root/suspicious_patterns.sh" --pre-commit
# Exit with the same code as the script
exit $?

.githooks/setup.sh Normal file

@@ -0,0 +1,4 @@
#!/bin/bash
echo "Configuring git to use .githooks directory..."
git config core.hooksPath .githooks


@@ -0,0 +1,211 @@
name: 'Xahau Cache Restore (S3)'
description: 'Drop-in replacement for actions/cache/restore using S3 storage'
inputs:
  path:
    description: 'A list of files, directories, and wildcard patterns to cache (currently only single path supported)'
    required: true
  key:
    description: 'An explicit key for restoring the cache'
    required: true
  restore-keys:
    description: 'An ordered list of prefix-matched keys to use for restoring stale cache if no cache hit occurred for key'
    required: false
    default: ''
  s3-bucket:
    description: 'S3 bucket name for cache storage'
    required: false
    default: 'xahaud-github-actions-cache-niq'
  s3-region:
    description: 'S3 region'
    required: false
    default: 'us-east-1'
  fail-on-cache-miss:
    description: 'Fail the workflow if cache entry is not found'
    required: false
    default: 'false'
  lookup-only:
    description: 'Check if a cache entry exists for the given input(s) without downloading it'
    required: false
    default: 'false'
  # Note: Composite actions can't access secrets.* directly - must be passed from workflow
  aws-access-key-id:
    description: 'AWS Access Key ID for S3 access'
    required: true
  aws-secret-access-key:
    description: 'AWS Secret Access Key for S3 access'
    required: true
outputs:
  cache-hit:
    description: 'A boolean value to indicate an exact match was found for the primary key'
    value: ${{ steps.restore-cache.outputs.cache-hit }}
  cache-primary-key:
    description: 'The key that was used to restore the cache (may be from restore-keys)'
    value: ${{ steps.restore-cache.outputs.cache-primary-key }}
  cache-matched-key:
    description: 'The key that was used to restore the cache (exact or prefix match)'
    value: ${{ steps.restore-cache.outputs.cache-matched-key }}
runs:
  using: 'composite'
  steps:
    - name: Restore cache from S3
      id: restore-cache
      shell: bash
      env:
        AWS_ACCESS_KEY_ID: ${{ inputs.aws-access-key-id }}
        AWS_SECRET_ACCESS_KEY: ${{ inputs.aws-secret-access-key }}
        S3_BUCKET: ${{ inputs.s3-bucket }}
        S3_REGION: ${{ inputs.s3-region }}
        CACHE_KEY: ${{ inputs.key }}
        RESTORE_KEYS: ${{ inputs.restore-keys }}
        TARGET_PATH: ${{ inputs.path }}
        FAIL_ON_MISS: ${{ inputs.fail-on-cache-miss }}
        LOOKUP_ONLY: ${{ inputs.lookup-only }}
      run: |
        set -euo pipefail

        echo "=========================================="
        echo "Xahau Cache Restore (S3)"
        echo "=========================================="
        echo "Target path: ${TARGET_PATH}"
        echo "Cache key: ${CACHE_KEY}"
        echo "S3 bucket: s3://${S3_BUCKET}"
        echo ""

        # Normalize target path (expand tilde and resolve to absolute path)
        if [[ "${TARGET_PATH}" == ~* ]]; then
          TARGET_PATH="${HOME}${TARGET_PATH:1}"
        fi
        TARGET_PATH=$(realpath -m "${TARGET_PATH}")
        echo "Normalized target path: ${TARGET_PATH}"
        echo ""

        # Function to try restoring a cache key
        try_restore_key() {
          local key=$1
          local s3_key="s3://${S3_BUCKET}/${key}-base.tar.zst"
          echo "Checking for key: ${key}"
          if aws s3 ls "${s3_key}" --region "${S3_REGION}" >/dev/null 2>&1; then
            echo "✓ Found cache: ${s3_key}"
            return 0
          else
            echo "✗ Not found: ${key}"
            return 1
          fi
        }

        # Try exact match first
        MATCHED_KEY=""
        EXACT_MATCH="false"
        if try_restore_key "${CACHE_KEY}"; then
          MATCHED_KEY="${CACHE_KEY}"
          EXACT_MATCH="true"
          echo ""
          echo "🎯 Exact cache hit for key: ${CACHE_KEY}"
        else
          # Try restore-keys (prefix matching)
          if [ -n "${RESTORE_KEYS}" ]; then
            echo ""
            echo "Primary key not found, trying restore-keys..."
            while IFS= read -r restore_key; do
              [ -z "${restore_key}" ] && continue
              restore_key=$(echo "${restore_key}" | xargs)
              if try_restore_key "${restore_key}"; then
                MATCHED_KEY="${restore_key}"
                EXACT_MATCH="false"
                echo ""
                echo "✓ Cache restored from fallback key: ${restore_key}"
                break
              fi
            done <<< "${RESTORE_KEYS}"
          fi
        fi

        # Check if we found anything
        if [ -z "${MATCHED_KEY}" ]; then
          echo ""
          echo "❌ No cache found for key: ${CACHE_KEY}"
          if [ "${FAIL_ON_MISS}" = "true" ]; then
            echo "fail-on-cache-miss is enabled, failing workflow"
            exit 1
          fi

          # Set outputs for cache miss
          echo "cache-hit=false" >> $GITHUB_OUTPUT
          echo "cache-primary-key=" >> $GITHUB_OUTPUT
          echo "cache-matched-key=" >> $GITHUB_OUTPUT

          # Create empty cache directory
          mkdir -p "${TARGET_PATH}"
          echo ""
          echo "=========================================="
          echo "Cache restore completed (bootstrap mode)"
          echo "Created empty cache directory: ${TARGET_PATH}"
          echo "=========================================="
          exit 0
        fi

        # If lookup-only, we're done
        if [ "${LOOKUP_ONLY}" = "true" ]; then
          echo "cache-hit=${EXACT_MATCH}" >> $GITHUB_OUTPUT
          echo "cache-primary-key=${CACHE_KEY}" >> $GITHUB_OUTPUT
          echo "cache-matched-key=${MATCHED_KEY}" >> $GITHUB_OUTPUT
          echo ""
          echo "=========================================="
          echo "Cache lookup completed (lookup-only mode)"
          echo "Cache exists: ${MATCHED_KEY}"
          echo "=========================================="
          exit 0
        fi

        # Download and extract cache
        S3_KEY="s3://${S3_BUCKET}/${MATCHED_KEY}-base.tar.zst"
        TEMP_TARBALL="/tmp/xahau-cache-restore-$$.tar.zst"
        echo ""
        echo "Downloading cache..."
        aws s3 cp "${S3_KEY}" "${TEMP_TARBALL}" --region "${S3_REGION}"
        TARBALL_SIZE=$(du -h "${TEMP_TARBALL}" | cut -f1)
        echo "✓ Downloaded: ${TARBALL_SIZE}"

        # Create parent directory if needed
        mkdir -p "$(dirname "${TARGET_PATH}")"

        # Remove existing target if it exists
        if [ -e "${TARGET_PATH}" ]; then
          echo "Removing existing target: ${TARGET_PATH}"
          rm -rf "${TARGET_PATH}"
        fi

        # Create target directory and extract
        mkdir -p "${TARGET_PATH}"
        echo ""
        echo "Extracting cache..."
        zstd -d -c "${TEMP_TARBALL}" | tar -xf - -C "${TARGET_PATH}"
        echo "✓ Cache extracted to: ${TARGET_PATH}"

        # Cleanup
        rm -f "${TEMP_TARBALL}"

        # Set outputs
        echo "cache-hit=${EXACT_MATCH}" >> $GITHUB_OUTPUT
        echo "cache-primary-key=${CACHE_KEY}" >> $GITHUB_OUTPUT
        echo "cache-matched-key=${MATCHED_KEY}" >> $GITHUB_OUTPUT
        echo ""
        echo "=========================================="
        echo "Cache restore completed successfully"
        echo "Cache hit: ${EXACT_MATCH}"
        echo "Matched key: ${MATCHED_KEY}"
        echo "=========================================="

View File

@@ -0,0 +1,110 @@
name: 'Xahau Cache Save (S3)'
description: 'Drop-in replacement for actions/cache/save using S3 storage'
inputs:
path:
description: 'A list of files, directories, and wildcard patterns to cache (currently only single path supported)'
required: true
key:
description: 'An explicit key for saving the cache'
required: true
s3-bucket:
description: 'S3 bucket name for cache storage'
required: false
default: 'xahaud-github-actions-cache-niq'
s3-region:
description: 'S3 region'
required: false
default: 'us-east-1'
# Note: Composite actions can't access secrets.* directly - must be passed from workflow
aws-access-key-id:
description: 'AWS Access Key ID for S3 access'
required: true
aws-secret-access-key:
description: 'AWS Secret Access Key for S3 access'
required: true
runs:
using: 'composite'
steps:
- name: Save cache to S3
shell: bash
env:
AWS_ACCESS_KEY_ID: ${{ inputs.aws-access-key-id }}
AWS_SECRET_ACCESS_KEY: ${{ inputs.aws-secret-access-key }}
S3_BUCKET: ${{ inputs.s3-bucket }}
S3_REGION: ${{ inputs.s3-region }}
CACHE_KEY: ${{ inputs.key }}
TARGET_PATH: ${{ inputs.path }}
run: |
set -euo pipefail
echo "=========================================="
echo "Xahau Cache Save (S3)"
echo "=========================================="
echo "Target path: ${TARGET_PATH}"
echo "Cache key: ${CACHE_KEY}"
echo "S3 bucket: s3://${S3_BUCKET}"
echo ""
# Normalize target path (expand tilde and resolve to absolute path)
if [[ "${TARGET_PATH}" == ~* ]]; then
TARGET_PATH="${HOME}${TARGET_PATH:1}"
fi
echo "Normalized target path: ${TARGET_PATH}"
echo ""
# Check if target directory exists
if [ ! -d "${TARGET_PATH}" ]; then
echo "⚠️ Target directory does not exist: ${TARGET_PATH}"
echo "Skipping cache save."
exit 0
fi
# Use static base name (one base per key, immutable)
S3_BASE_KEY="s3://${S3_BUCKET}/${CACHE_KEY}-base.tar.zst"
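# e.g. a (hypothetical) key "Linux-conan-v1-gcc-13-libstdcxx-Debug" is stored as
#   s3://<bucket>/Linux-conan-v1-gcc-13-libstdcxx-Debug-base.tar.zst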
# Check if base already exists (immutability - first write wins)
if aws s3 ls "${S3_BASE_KEY}" --region "${S3_REGION}" >/dev/null 2>&1; then
echo "⚠️ Cache already exists: ${S3_BASE_KEY}"
echo "Skipping upload (immutability - first write wins, like GitHub Actions)"
echo ""
echo "=========================================="
echo "Cache save completed (already exists)"
echo "=========================================="
exit 0
fi
# Create tarball
BASE_TARBALL="/tmp/xahau-cache-base-$$.tar.zst"
echo "Creating cache tarball..."
tar -cf - -C "${TARGET_PATH}" . | zstd -3 -T0 -q -o "${BASE_TARBALL}"
BASE_SIZE=$(du -h "${BASE_TARBALL}" | cut -f1)
echo "✓ Cache tarball created: ${BASE_SIZE}"
echo ""
# Upload to S3
echo "Uploading cache to S3..."
echo " Key: ${CACHE_KEY}-base.tar.zst"
aws s3api put-object \
--bucket "${S3_BUCKET}" \
--key "${CACHE_KEY}-base.tar.zst" \
--body "${BASE_TARBALL}" \
--tagging 'type=base' \
--region "${S3_REGION}" \
>/dev/null
echo "✓ Uploaded: ${S3_BASE_KEY}"
# Cleanup
rm -f "${BASE_TARBALL}"
echo ""
echo "=========================================="
echo "Cache save completed successfully"
echo "Cache size: ${BASE_SIZE}"
echo "Cache key: ${CACHE_KEY}"
echo "=========================================="

View File

@@ -0,0 +1,225 @@
name: build
description: 'Builds the project with ccache integration'
inputs:
generator:
description: 'CMake generator to use'
required: true
configuration:
description: 'Build configuration (Debug, Release, etc.)'
required: true
build_dir:
description: 'Directory to build in'
required: false
default: '.build'
cc:
description: 'C compiler to use'
required: false
default: ''
cxx:
description: 'C++ compiler to use'
required: false
default: ''
compiler-id:
description: 'Unique identifier: compiler-version-stdlib[-gccversion] (e.g. clang-14-libstdcxx-gcc11, gcc-13-libstdcxx)'
required: false
default: ''
cache_version:
description: 'Cache version for invalidation'
required: false
default: '1'
ccache_enabled:
description: 'Whether to use ccache'
required: false
default: 'true'
main_branch:
description: 'Main branch name for restore keys'
required: false
default: 'dev'
stdlib:
description: 'C++ standard library to use'
required: true
type: choice
options:
- libstdcxx
- libcxx
clang_gcc_toolchain:
description: 'GCC version to use for Clang toolchain (e.g. 11, 13)'
required: false
default: ''
ccache_max_size:
description: 'Maximum ccache size'
required: false
default: '2G'
ccache_hash_dir:
description: 'Whether to include directory paths in hash'
required: false
default: 'true'
ccache_compiler_check:
description: 'How to check compiler for changes'
required: false
default: 'content'
aws-access-key-id:
description: 'AWS Access Key ID for S3 cache storage'
required: true
aws-secret-access-key:
description: 'AWS Secret Access Key for S3 cache storage'
required: true
runs:
using: 'composite'
steps:
- name: Generate safe branch name
if: inputs.ccache_enabled == 'true'
id: safe-branch
shell: bash
run: |
# printf (unlike echo) emits no trailing newline, which tr -c would
# otherwise translate into a spurious trailing '-' on the key
SAFE_BRANCH=$(printf '%s' "${{ github.ref_name }}" | tr -c 'a-zA-Z0-9_.-' '-')
echo "name=${SAFE_BRANCH}" >> $GITHUB_OUTPUT
- name: Restore ccache directory
if: inputs.ccache_enabled == 'true'
id: ccache-restore
uses: ./.github/actions/xahau-actions-cache-restore
with:
path: ~/.ccache
key: ${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ inputs.configuration }}-${{ steps.safe-branch.outputs.name }}
restore-keys: |
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ inputs.configuration }}-${{ inputs.main_branch }}
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ inputs.configuration }}-
${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-
aws-access-key-id: ${{ inputs.aws-access-key-id }}
aws-secret-access-key: ${{ inputs.aws-secret-access-key }}
- name: Configure ccache
if: inputs.ccache_enabled == 'true'
shell: bash
run: |
# Use ccache's default cache_dir (~/.ccache) - don't override it
# This avoids tilde expansion issues when setting it explicitly
# Create cache directory using ccache's default
mkdir -p ~/.ccache
# Configure ccache settings (but NOT cache_dir - use default)
# This overwrites any cached config to ensure fresh configuration
ccache --set-config=max_size=${{ inputs.ccache_max_size }}
ccache --set-config=hash_dir=${{ inputs.ccache_hash_dir }}
ccache --set-config=compiler_check=${{ inputs.ccache_compiler_check }}
# Note: Not setting CCACHE_DIR - let ccache use its default (~/.ccache)
# Print config for verification
echo "=== ccache configuration ==="
ccache -p
# Zero statistics before the build
ccache -z
- name: Configure project
shell: bash
run: |
mkdir -p ${{ inputs.build_dir }}
cd ${{ inputs.build_dir }}
# Set compiler environment variables if provided
if [ -n "${{ inputs.cc }}" ]; then
export CC="${{ inputs.cc }}"
fi
if [ -n "${{ inputs.cxx }}" ]; then
export CXX="${{ inputs.cxx }}"
fi
# Create wrapper toolchain that overlays ccache on top of Conan's toolchain
# This enables ccache for the main app build without affecting Conan dependency builds
if [ "${{ inputs.ccache_enabled }}" = "true" ]; then
cat > wrapper_toolchain.cmake <<'EOF'
# Include Conan's generated toolchain first (sets compiler, flags, etc.)
# Note: CMAKE_CURRENT_LIST_DIR is the directory containing this wrapper (.build/)
include(${CMAKE_CURRENT_LIST_DIR}/build/generators/conan_toolchain.cmake)
# Overlay ccache configuration for main application build
# This does NOT affect Conan dependency builds (already completed)
set(CMAKE_C_COMPILER_LAUNCHER ccache CACHE STRING "C compiler launcher" FORCE)
set(CMAKE_CXX_COMPILER_LAUNCHER ccache CACHE STRING "C++ compiler launcher" FORCE)
EOF
TOOLCHAIN_FILE="wrapper_toolchain.cmake"
echo "✅ Created wrapper toolchain with ccache enabled"
else
TOOLCHAIN_FILE="build/generators/conan_toolchain.cmake"
echo " Using Conan toolchain directly (ccache disabled)"
fi
# Configure C++ standard library if specified
# libstdcxx used for clang-14/16 to work around missing lexicographical_compare_three_way in libc++
# libcxx can be used with clang-17+ which has full C++20 support
# Note: -stdlib flag is Clang-specific, GCC always uses libstdc++
CMAKE_CXX_FLAGS=""
if [[ "${{ inputs.cxx }}" == clang* ]]; then
# Only Clang needs the -stdlib flag
if [ "${{ inputs.stdlib }}" = "libstdcxx" ]; then
CMAKE_CXX_FLAGS="-stdlib=libstdc++"
elif [ "${{ inputs.stdlib }}" = "libcxx" ]; then
CMAKE_CXX_FLAGS="-stdlib=libc++"
fi
fi
# GCC always uses libstdc++ and doesn't need/support the -stdlib flag
# Configure GCC toolchain for Clang if specified
if [ -n "${{ inputs.clang_gcc_toolchain }}" ] && [[ "${{ inputs.cxx }}" == clang* ]]; then
# Extract Clang version from compiler executable name (e.g., clang++-14 -> 14)
clang_version=$(echo "${{ inputs.cxx }}" | grep -oE '[0-9]+$')
# Clang 16+ supports --gcc-install-dir (precise path specification)
# Clang <16 only has --gcc-toolchain (uses discovery heuristics)
if [ -n "$clang_version" ] && [ "$clang_version" -ge "16" ]; then
# Clang 16+ uses --gcc-install-dir (canonical, precise)
CMAKE_CXX_FLAGS="$CMAKE_CXX_FLAGS --gcc-install-dir=/usr/lib/gcc/x86_64-linux-gnu/${{ inputs.clang_gcc_toolchain }}"
else
# Clang 14-15 uses --gcc-toolchain (deprecated but necessary)
# Note: This still uses discovery, so we hide newer GCC versions in the workflow
CMAKE_CXX_FLAGS="$CMAKE_CXX_FLAGS --gcc-toolchain=/usr"
fi
fi
# Run CMake configure
# Note: conanfile.py hardcodes 'build/generators' as the output path.
# If we're in a 'build' folder, Conan detects this and uses just 'generators/'
# If we're in '.build' (non-standard), Conan adds the full 'build/generators/'
# So we get: .build/build/generators/ with our non-standard folder name
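# Illustrative layout (assuming build_dir=.build):
#   .build/wrapper_toolchain.cmake                  <- wrapper written above (ccache on)
#   .build/build/generators/conan_toolchain.cmake   <- Conan's generated toolchain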
cmake .. \
-G "${{ inputs.generator }}" \
${CMAKE_CXX_FLAGS:+-DCMAKE_CXX_FLAGS="$CMAKE_CXX_FLAGS"} \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=${TOOLCHAIN_FILE} \
-DCMAKE_BUILD_TYPE=${{ inputs.configuration }}
- name: Show ccache config before build
if: inputs.ccache_enabled == 'true'
shell: bash
run: |
echo "=========================================="
echo "ccache configuration before build"
echo "=========================================="
ccache -p
echo ""
- name: Build project
shell: bash
run: |
cd ${{ inputs.build_dir }}
cmake --build . --config ${{ inputs.configuration }} --parallel $(nproc) -- -v
- name: Show ccache statistics
if: inputs.ccache_enabled == 'true'
shell: bash
run: ccache -s
- name: Save ccache directory
if: always() && inputs.ccache_enabled == 'true'
uses: ./.github/actions/xahau-actions-cache-save
with:
path: ~/.ccache
key: ${{ runner.os }}-ccache-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ inputs.configuration }}-${{ steps.safe-branch.outputs.name }}
aws-access-key-id: ${{ inputs.aws-access-key-id }}
aws-secret-access-key: ${{ inputs.aws-secret-access-key }}

View File

@@ -0,0 +1,169 @@
name: dependencies
description: 'Installs build dependencies with caching'
inputs:
configuration:
description: 'Build configuration (Debug, Release, etc.)'
required: true
build_dir:
description: 'Directory to build dependencies in'
required: false
default: '.build'
compiler-id:
description: 'Unique identifier: compiler-version-stdlib[-gccversion] (e.g. clang-14-libstdcxx-gcc11, gcc-13-libstdcxx)'
required: false
default: ''
cache_version:
description: 'Cache version for invalidation'
required: false
default: '1'
cache_enabled:
description: 'Whether to use caching'
required: false
default: 'true'
main_branch:
description: 'Main branch name for restore keys'
required: false
default: 'dev'
os:
description: 'Operating system (Linux, Macos)'
required: false
default: 'Linux'
arch:
description: 'Architecture (x86_64, armv8)'
required: false
default: 'x86_64'
compiler:
description: 'Compiler type (gcc, clang, apple-clang)'
required: true
compiler_version:
description: 'Compiler version (11, 13, 14, etc.)'
required: true
cc:
description: 'C compiler executable (gcc-13, clang-14, etc.), empty for macOS'
required: false
default: ''
cxx:
description: 'C++ compiler executable (g++-14, clang++-14, etc.), empty for macOS'
required: false
default: ''
stdlib:
description: 'C++ standard library for Conan configuration (note: also in compiler-id)'
required: true
type: choice
options:
- libstdcxx
- libcxx
aws-access-key-id:
description: 'AWS Access Key ID for S3 cache storage'
required: true
aws-secret-access-key:
description: 'AWS Secret Access Key for S3 cache storage'
required: true
outputs:
cache-hit:
description: 'Whether there was a cache hit'
value: ${{ steps.cache-restore-conan.outputs.cache-hit }}
runs:
using: 'composite'
steps:
- name: Restore Conan cache
if: inputs.cache_enabled == 'true'
id: cache-restore-conan
uses: ./.github/actions/xahau-actions-cache-restore
with:
path: ~/.conan2
# Note: compiler-id format is compiler-version-stdlib[-gccversion]
key: ${{ runner.os }}-conan-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ hashFiles('**/conanfile.txt', '**/conanfile.py') }}-${{ inputs.configuration }}
restore-keys: |
${{ runner.os }}-conan-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ hashFiles('**/conanfile.txt', '**/conanfile.py') }}-
${{ runner.os }}-conan-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-
aws-access-key-id: ${{ inputs.aws-access-key-id }}
aws-secret-access-key: ${{ inputs.aws-secret-access-key }}
- name: Configure Conan
shell: bash
run: |
# Create the default profile directory if it doesn't exist
mkdir -p ~/.conan2/profiles
# Determine the correct libcxx based on stdlib parameter
if [ "${{ inputs.stdlib }}" = "libcxx" ]; then
LIBCXX="libc++"
else
LIBCXX="libstdc++11"
fi
# Create profile with our specific settings
# This overwrites any cached profile to ensure fresh configuration
cat > ~/.conan2/profiles/default <<EOF
[settings]
arch=${{ inputs.arch }}
build_type=${{ inputs.configuration }}
compiler=${{ inputs.compiler }}
compiler.cppstd=20
compiler.libcxx=${LIBCXX}
compiler.version=${{ inputs.compiler_version }}
os=${{ inputs.os }}
EOF
# Add buildenv and conf sections for Linux (not needed for macOS)
if [ "${{ inputs.os }}" = "Linux" ] && [ -n "${{ inputs.cc }}" ]; then
cat >> ~/.conan2/profiles/default <<EOF
[buildenv]
CC=/usr/bin/${{ inputs.cc }}
CXX=/usr/bin/${{ inputs.cxx }}
[conf]
tools.build:compiler_executables={"c": "/usr/bin/${{ inputs.cc }}", "cpp": "/usr/bin/${{ inputs.cxx }}"}
EOF
fi
# Add macOS-specific conf if needed
if [ "${{ inputs.os }}" = "Macos" ]; then
cat >> ~/.conan2/profiles/default <<EOF
[conf]
# Workaround for gRPC with newer Apple Clang
tools.build:cxxflags=["-Wno-missing-template-arg-list-after-template-kw"]
EOF
fi
# Display profile for verification
conan profile show
- name: Export custom recipes
shell: bash
run: |
conan export external/snappy --version 1.1.10 --user xahaud --channel stable
conan export external/soci --version 4.0.3 --user xahaud --channel stable
conan export external/wasmedge --version 0.11.2 --user xahaud --channel stable
- name: Install dependencies
shell: bash
env:
CONAN_REQUEST_TIMEOUT: 180 # Increase timeout to 3 minutes for slow mirrors
run: |
# Create build directory
mkdir -p ${{ inputs.build_dir }}
cd ${{ inputs.build_dir }}
# Install dependencies using conan
conan install \
--output-folder . \
--build missing \
--settings build_type=${{ inputs.configuration }} \
..
- name: Save Conan cache
if: always() && inputs.cache_enabled == 'true' && steps.cache-restore-conan.outputs.cache-hit != 'true'
uses: ./.github/actions/xahau-actions-cache-save
with:
path: ~/.conan2
key: ${{ runner.os }}-conan-v${{ inputs.cache_version }}-${{ inputs.compiler-id }}-${{ hashFiles('**/conanfile.txt', '**/conanfile.py') }}-${{ inputs.configuration }}
aws-access-key-id: ${{ inputs.aws-access-key-id }}
aws-secret-access-key: ${{ inputs.aws-secret-access-key }}

View File

@@ -1,26 +0,0 @@
name: Build using Docker
on:
push:
branches: [ "dev", "candidate", "release" ]
pull_request:
branches: [ "dev", "candidate", "release" ]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
builder:
runs-on: [self-hosted, vanity]
steps:
- uses: actions/checkout@v3
with:
clean: false
- name: Check for suspicious patterns
run: /bin/bash suspicious_patterns.sh
- name: Build using Docker
run: /bin/bash release-builder.sh
- name: Unit tests
run: /bin/bash docker-unit-tests.sh

View File

@@ -0,0 +1,95 @@
name: Build using Docker
on:
push:
branches: ["dev", "candidate", "release", "jshooks"]
pull_request:
branches: ["dev", "candidate", "release", "jshooks"]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
DEBUG_BUILD_CONTAINERS_AFTER_CLEANUP: 1
jobs:
checkout:
runs-on: [self-hosted, vanity]
outputs:
checkout_path: ${{ steps.vars.outputs.checkout_path }}
steps:
- name: Prepare checkout path
id: vars
run: |
SAFE_BRANCH=$(echo "${{ github.ref_name }}" | sed -e 's/[^a-zA-Z0-9._-]/-/g')
CHECKOUT_PATH="${SAFE_BRANCH}-${{ github.sha }}"
echo "checkout_path=${CHECKOUT_PATH}" >> "$GITHUB_OUTPUT"
- uses: actions/checkout@v4
with:
path: ${{ steps.vars.outputs.checkout_path }}
clean: true
fetch-depth: 2 # Only get the last 2 commits, to avoid fetching all history
build:
runs-on: [self-hosted, vanity]
needs: [checkout]
defaults:
run:
working-directory: ${{ needs.checkout.outputs.checkout_path }}
steps:
- name: Set Cleanup Script Path
run: |
echo "JOB_CLEANUP_SCRIPT=$(mktemp)" >> $GITHUB_ENV
- name: Build using Docker
run: /bin/bash release-builder.sh
- name: Stop Container (Cleanup)
if: always()
run: |
echo "Running cleanup script: $JOB_CLEANUP_SCRIPT"
# Guard with || so the step's errexit shell can't abort before we read
# the exit code (a bare failing command would end the step immediately)
CLEANUP_EXIT_CODE=0
/bin/bash -e -x "$JOB_CLEANUP_SCRIPT" || CLEANUP_EXIT_CODE=$?
if [[ "$CLEANUP_EXIT_CODE" -eq 0 ]]; then
echo "Cleanup script succeeded."
rm -f "$JOB_CLEANUP_SCRIPT"
echo "Cleanup script removed."
else
echo "⚠️ Cleanup script failed! Keeping for debugging: $JOB_CLEANUP_SCRIPT"
fi
if [[ "${DEBUG_BUILD_CONTAINERS_AFTER_CLEANUP}" == "1" ]]; then
echo "🔍 Checking for leftover containers..."
BUILD_CONTAINERS=$(docker ps --format '{{.Names}}' | grep '^xahaud_cached_builder' || echo "")
if [[ -n "$BUILD_CONTAINERS" ]]; then
echo "⚠️ WARNING: Some build containers are still running"
echo "$BUILD_CONTAINERS"
else
echo "✅ No build containers found"
fi
fi
tests:
runs-on: [self-hosted, vanity]
needs: [build, checkout]
defaults:
run:
working-directory: ${{ needs.checkout.outputs.checkout_path }}
steps:
- name: Unit tests
run: /bin/bash docker-unit-tests.sh
cleanup:
runs-on: [self-hosted, vanity]
needs: [tests, checkout]
if: always()
steps:
- name: Cleanup workspace
run: |
CHECKOUT_PATH="${{ needs.checkout.outputs.checkout_path }}"
echo "Cleaning workspace for ${CHECKOUT_PATH}"
rm -rf "${{ github.workspace }}/${CHECKOUT_PATH}"

View File

@@ -4,21 +4,32 @@ on: [push, pull_request]
jobs:
  check:
-    runs-on: ubuntu-20.04
+    runs-on: ubuntu-22.04
    env:
      CLANG_VERSION: 10
    steps:
      - uses: actions/checkout@v3
-      - name: Install clang-format
+      # - name: Install clang-format
+      #   run: |
+      #     codename=$( lsb_release --codename --short )
+      #     sudo tee /etc/apt/sources.list.d/llvm.list >/dev/null <<EOF
+      #     deb http://apt.llvm.org/${codename}/ llvm-toolchain-${codename}-${CLANG_VERSION} main
+      #     deb-src http://apt.llvm.org/${codename}/ llvm-toolchain-${codename}-${CLANG_VERSION} main
+      #     EOF
+      #     wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add
+      #     sudo apt-get update -y
+      #     sudo apt-get install -y clang-format-${CLANG_VERSION}
+      # Temporary fix until this commit is merged
+      # https://github.com/XRPLF/rippled/commit/552377c76f55b403a1c876df873a23d780fcc81c
+      - name: Download and install clang-format
        run: |
-          codename=$( lsb_release --codename --short )
-          sudo tee /etc/apt/sources.list.d/llvm.list >/dev/null <<EOF
-          deb http://apt.llvm.org/${codename}/ llvm-toolchain-${codename}-${CLANG_VERSION} main
-          deb-src http://apt.llvm.org/${codename}/ llvm-toolchain-${codename}-${CLANG_VERSION} main
-          EOF
-          wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add
-          sudo apt-get update
-          sudo apt-get install clang-format-${CLANG_VERSION}
+          sudo apt-get update -y
+          sudo apt-get install -y libtinfo5
+          curl -LO https://github.com/llvm/llvm-project/releases/download/llvmorg-10.0.1/clang+llvm-10.0.1-x86_64-linux-gnu-ubuntu-16.04.tar.xz
+          tar -xf clang+llvm-10.0.1-x86_64-linux-gnu-ubuntu-16.04.tar.xz
+          sudo mv clang+llvm-10.0.1-x86_64-linux-gnu-ubuntu-16.04 /opt/clang-10
+          sudo ln -s /opt/clang-10/bin/clang-format /usr/local/bin/clang-format-10
      - name: Format src/ripple
        run: find src/ripple -type f \( -name '*.cpp' -o -name '*.h' -o -name '*.ipp' \) -print0 | xargs -0 clang-format-${CLANG_VERSION} -i
      - name: Format src/test
@@ -30,7 +41,7 @@ jobs:
          git diff --exit-code | tee "clang-format.patch"
      - name: Upload patch
        if: failure() && steps.assert.outcome == 'failure'
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@v4
        continue-on-error: true
        with:
          name: clang-format.patch

View File

@@ -1,25 +0,0 @@
name: Build and publish Doxygen documentation
on:
push:
branches:
- dev
jobs:
job:
runs-on: ubuntu-latest
container:
image: docker://rippleci/rippled-ci-builder:2944b78d22db
steps:
- name: checkout
uses: actions/checkout@v2
- name: build
run: |
mkdir build
cd build
cmake -DBoost_NO_BOOST_CMAKE=ON ..
cmake --build . --target docs --parallel $(nproc)
- name: publish
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: build/docs/html

View File

@@ -18,7 +18,7 @@ jobs:
          git diff --exit-code | tee "levelization.patch"
      - name: Upload patch
        if: failure() && steps.assert.outcome == 'failure'
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@v4
        continue-on-error: true
        with:
          name: levelization.patch

View File

@@ -0,0 +1,290 @@
name: Test Cache Actions (State Machine)
on:
push:
branches: ["nd-experiment-overlayfs-*"]
workflow_dispatch:
inputs:
state_assertion:
description: 'Expected state (optional, e.g. "2" to assert state 2)'
required: false
type: string
default: '1'
start_state:
description: 'Force specific starting state (optional, e.g. "3" to start at state 3)'
required: false
type: string
clear_cache:
description: 'Clear cache before running'
required: false
type: boolean
default: false
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
test-cache-state-machine:
runs-on: ubuntu-latest
env:
CACHE_KEY: test-state-machine-${{ github.ref_name }}
CACHE_DIR: /tmp/test-cache
S3_BUCKET: xahaud-github-actions-cache-niq
S3_REGION: us-east-1
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Parse Inputs (workflow_dispatch or commit message)
id: parse-inputs
run: |
# Priority 1: workflow_dispatch inputs (manual trigger)
STATE_ASSERTION="${{ inputs.state_assertion }}"
START_STATE="${{ inputs.start_state }}"
SHOULD_CLEAR="${{ inputs.clear_cache }}"
# Priority 2: commit message tags (push event)
if [ "${{ github.event_name }}" = "push" ]; then
COMMIT_MSG="${{ github.event.head_commit.message }}"
# Parse [state:N] assertion tag (optional, if not provided as input)
if [ -z "${STATE_ASSERTION}" ] && echo "${COMMIT_MSG}" | grep -qE '\[state:[0-9]+\]'; then
STATE_ASSERTION=$(echo "${COMMIT_MSG}" | grep -oE '\[state:[0-9]+\]' | grep -oE '[0-9]+')
echo "State assertion found in commit: ${STATE_ASSERTION}"
fi
# Parse [start-state:N] force tag (optional, if not provided as input)
if [ -z "${START_STATE}" ] && echo "${COMMIT_MSG}" | grep -qE '\[start-state:[0-9]+\]'; then
START_STATE=$(echo "${COMMIT_MSG}" | grep -oE '\[start-state:[0-9]+\]' | grep -oE '[0-9]+')
echo "Start state found in commit: ${START_STATE}"
fi
# Parse [ci-clear-cache] tag (if not provided as input)
if [ "${SHOULD_CLEAR}" != "true" ] && echo "${COMMIT_MSG}" | grep -q '\[ci-clear-cache\]'; then
SHOULD_CLEAR=true
echo "Cache clear requested in commit"
fi
fi
# Output final values
echo "state_assertion=${STATE_ASSERTION}" >> "$GITHUB_OUTPUT"
echo "start_state=${START_STATE}" >> "$GITHUB_OUTPUT"
echo "should_clear=${SHOULD_CLEAR}" >> "$GITHUB_OUTPUT"
# Log what we're using
echo ""
echo "Configuration:"
[ -n "${STATE_ASSERTION}" ] && echo " State assertion: ${STATE_ASSERTION}"
[ -n "${START_STATE}" ] && echo " Start state: ${START_STATE}"
echo " Clear cache: ${SHOULD_CLEAR}"
- name: Check S3 State (Before Restore)
env:
AWS_ACCESS_KEY_ID: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_ACCESS_KEY }}
run: |
echo "=========================================="
echo "S3 State Check (Before Restore)"
echo "=========================================="
echo "Cache key: ${CACHE_KEY}"
echo ""
# Check if base exists
BASE_EXISTS=false
if aws s3 ls "s3://${S3_BUCKET}/${CACHE_KEY}-base.tar.zst" --region "${S3_REGION}" >/dev/null 2>&1; then
BASE_EXISTS=true
fi
echo "Base exists: ${BASE_EXISTS}"
# Count deltas
DELTA_COUNT=$(aws s3 ls "s3://${S3_BUCKET}/" --region "${S3_REGION}" | grep "${CACHE_KEY}-delta-" | wc -l || echo "0")
echo "Delta count: ${DELTA_COUNT}"
- name: Restore Cache
uses: ./.github/actions/xahau-actions-cache-restore
with:
path: ${{ env.CACHE_DIR }}
key: ${{ env.CACHE_KEY }}
s3-bucket: ${{ env.S3_BUCKET }}
s3-region: ${{ env.S3_REGION }}
use-deltas: 'true'
aws-access-key-id: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_KEY_ID }}
aws-secret-access-key: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_ACCESS_KEY }}
- name: Auto-Detect State and Validate
id: state
env:
STATE_ASSERTION: ${{ steps.parse-inputs.outputs.state_assertion }}
START_STATE: ${{ steps.parse-inputs.outputs.start_state }}
run: |
echo "=========================================="
echo "State Detection and Validation"
echo "=========================================="
# Create cache directory if it doesn't exist
mkdir -p "${CACHE_DIR}"
# Handle [start-state:N] - force specific state
if [ -n "${START_STATE}" ]; then
echo "🎯 [start-state:${START_STATE}] detected - forcing state setup"
# Clear cache and create state files 0 through START_STATE
rm -f ${CACHE_DIR}/state*.txt 2>/dev/null || true
for i in $(seq 0 ${START_STATE}); do
echo "State ${i} - Forced at $(date)" > "${CACHE_DIR}/state${i}.txt"
echo "Commit: ${{ github.sha }}" >> "${CACHE_DIR}/state${i}.txt"
done
DETECTED_STATE=${START_STATE}
echo "✓ Forced to state ${DETECTED_STATE}"
else
# Auto-detect state by counting state files
STATE_FILES=$(ls ${CACHE_DIR}/state*.txt 2>/dev/null | wc -l)
DETECTED_STATE=${STATE_FILES}
echo "Auto-detected state: ${DETECTED_STATE} (${STATE_FILES} state files)"
fi
# Show cache contents
echo ""
echo "Cache contents:"
if [ -d "${CACHE_DIR}" ] && [ "$(ls -A ${CACHE_DIR})" ]; then
ls -la "${CACHE_DIR}"
else
echo "(empty)"
fi
# Validate [state:N] assertion if provided
if [ -n "${STATE_ASSERTION}" ]; then
echo ""
echo "Validating assertion: [state:${STATE_ASSERTION}]"
if [ "${DETECTED_STATE}" -ne "${STATE_ASSERTION}" ]; then
echo "❌ ERROR: State mismatch!"
echo " Expected (from [state:N]): ${STATE_ASSERTION}"
echo " Detected (from cache): ${DETECTED_STATE}"
exit 1
fi
echo "✓ Assertion passed: detected == expected (${DETECTED_STATE})"
fi
# Output detected state for next steps
echo "detected_state=${DETECTED_STATE}" >> "$GITHUB_OUTPUT"
echo ""
echo "=========================================="
- name: Simulate Build (State Transition)
env:
DETECTED_STATE: ${{ steps.state.outputs.detected_state }}
run: |
echo "=========================================="
echo "Simulating Build (State Transition)"
echo "=========================================="
# Calculate next state
NEXT_STATE=$((DETECTED_STATE + 1))
echo "Transitioning: State ${DETECTED_STATE} → State ${NEXT_STATE}"
echo ""
# Create state file for next state
STATE_FILE="${CACHE_DIR}/state${NEXT_STATE}.txt"
echo "State ${NEXT_STATE} - Created at $(date)" > "${STATE_FILE}"
echo "Commit: ${{ github.sha }}" >> "${STATE_FILE}"
echo "Message: ${{ github.event.head_commit.message }}" >> "${STATE_FILE}"
echo "✓ Created ${STATE_FILE}"
# Show final cache state
echo ""
echo "Final cache contents:"
ls -la "${CACHE_DIR}"
echo ""
echo "State files:"
cat ${CACHE_DIR}/state*.txt
- name: Save Cache
uses: ./.github/actions/xahau-actions-cache-save
with:
path: ${{ env.CACHE_DIR }}
key: ${{ env.CACHE_KEY }}
s3-bucket: ${{ env.S3_BUCKET }}
s3-region: ${{ env.S3_REGION }}
use-deltas: 'true'
aws-access-key-id: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_KEY_ID }}
aws-secret-access-key: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_ACCESS_KEY }}
- name: Validate S3 State (After Save)
env:
AWS_ACCESS_KEY_ID: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_ACCESS_KEY }}
DETECTED_STATE: ${{ steps.state.outputs.detected_state }}
run: |
echo "=========================================="
echo "S3 State Validation (After Save)"
echo "=========================================="
# Calculate next state (what we just saved)
NEXT_STATE=$((DETECTED_STATE + 1))
echo "Saved state: ${NEXT_STATE}"
echo ""
# Check if base exists
if aws s3 ls "s3://${S3_BUCKET}/${CACHE_KEY}-base.tar.zst" --region "${S3_REGION}" >/dev/null 2>&1; then
BASE_SIZE=$(aws s3 ls "s3://${S3_BUCKET}/${CACHE_KEY}-base.tar.zst" --region "${S3_REGION}" | awk '{print $3}')
echo "✓ Base exists: ${CACHE_KEY}-base.tar.zst (${BASE_SIZE} bytes)"
else
echo "❌ ERROR: Base should exist after save"
exit 1
fi
# List deltas
echo ""
echo "Delta layers:"
DELTAS=$(aws s3 ls "s3://${S3_BUCKET}/" --region "${S3_REGION}" | grep "${CACHE_KEY}-delta-" || echo "")
if [ -n "${DELTAS}" ]; then
echo "${DELTAS}"
DELTA_COUNT=$(echo "${DELTAS}" | wc -l)
else
echo "(none)"
DELTA_COUNT=0
fi
# Validate S3 state
echo ""
if [ "${DETECTED_STATE}" -eq 0 ]; then
# Saved state 1 from bootstrap (state 0 → 1)
if [ "${DELTA_COUNT}" -ne 0 ]; then
echo "⚠️ WARNING: Bootstrap (state 1) should have 0 deltas, found ${DELTA_COUNT}"
else
echo "✓ State 1 saved: base exists, 0 deltas"
fi
else
# Saved delta (state N+1)
if [ "${DELTA_COUNT}" -ne 1 ]; then
echo "⚠️ WARNING: State ${NEXT_STATE} expects 1 delta (inline cleanup), found ${DELTA_COUNT}"
echo "This might be OK if multiple builds ran concurrently"
else
echo "✓ State ${NEXT_STATE} saved: base + 1 delta (old deltas cleaned)"
fi
fi
echo ""
echo "=========================================="
echo "✅ State ${DETECTED_STATE} → ${NEXT_STATE} Complete!"
echo "=========================================="
echo ""
echo "Next commit will auto-detect state ${NEXT_STATE}"
echo ""
echo "Options:"
echo " # Normal (auto-advance)"
echo " git commit -m 'continue testing'"
echo ""
echo " # With assertion (validate state)"
echo " git commit -m 'test delta [state:${NEXT_STATE}]'"
echo ""
echo " # Clear cache and restart"
echo " git commit -m 'fresh start [ci-clear-cache]'"
echo ""
echo " # Jump to specific state"
echo " git commit -m 'jump to state 3 [start-state:3]'"

View File

@@ -0,0 +1,182 @@
name: Test OverlayFS Delta Extraction
on:
push:
branches: ["*"]
workflow_dispatch:
jobs:
test-overlayfs:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
# - name: Test encrypted secrets (decrypt test message)
# run: |
# echo "========================================"
# echo "TESTING ENCRYPTED SECRETS"
# echo "========================================"
# echo ""
# echo "Decrypting test message from .github/secrets/test-message.gpg"
# echo "Using encryption key from GitHub Secrets..."
# echo ""
#
# # Decrypt using key from GitHub Secrets
# echo "${{ secrets.TEST_ENCRYPTION_KEY }}" | \
# gpg --batch --yes --passphrase-fd 0 \
# --decrypt .github/secrets/test-message.gpg
#
# echo ""
# echo "========================================"
# echo "If you see the success message above,"
# echo "then encrypted secrets work! 🎉"
# echo "========================================"
# echo ""
- name: Setup OverlayFS layers
run: |
echo "=== Creating directory structure ==="
mkdir -p /tmp/test/{base,delta,upper,work,merged}
echo "=== Creating base layer files ==="
echo "base file 1" > /tmp/test/base/file1.txt
echo "base file 2" > /tmp/test/base/file2.txt
echo "base file 3" > /tmp/test/base/file3.txt
mkdir -p /tmp/test/base/subdir
echo "base subdir file" > /tmp/test/base/subdir/file.txt
echo "=== Base layer contents ==="
find /tmp/test/base -type f -exec sh -c 'echo "{}:"; cat "{}"' \;
echo "=== Mounting OverlayFS ==="
sudo mount -t overlay overlay \
-o lowerdir=/tmp/test/base,upperdir=/tmp/test/upper,workdir=/tmp/test/work \
/tmp/test/merged
echo "=== Mounted successfully ==="
mount | grep overlay
- name: Verify merged view shows base files
run: |
echo "=== Contents of /merged (should show base files) ==="
ls -R /tmp/test/merged
find /tmp/test/merged -type f -exec sh -c 'echo "{}:"; cat "{}"' \;
- name: Make changes via merged layer
run: |
echo "=== Making changes via /merged ==="
# Overwrite existing file
echo "MODIFIED file 2" > /tmp/test/merged/file2.txt
echo "Modified file2.txt"
# Create new file
echo "NEW file 4" > /tmp/test/merged/file4.txt
echo "Created new file4.txt"
# Create new directory with file
mkdir -p /tmp/test/merged/newdir
echo "NEW file in new dir" > /tmp/test/merged/newdir/newfile.txt
echo "Created newdir/newfile.txt"
# Add file to existing directory
echo "NEW file in existing subdir" > /tmp/test/merged/subdir/newfile.txt
echo "Created subdir/newfile.txt"
echo "=== Changes complete ==="
- name: Show the delta (upperdir)
run: |
echo "========================================"
echo "THE DELTA (only changes in /upper):"
echo "========================================"
if [ -z "$(ls -A /tmp/test/upper)" ]; then
echo "Upper directory is empty - no changes detected"
else
echo "Upper directory structure:"
ls -R /tmp/test/upper
echo ""
echo "Upper directory files with content:"
find /tmp/test/upper -type f -exec sh -c 'echo "---"; echo "FILE: {}"; cat "{}"; echo ""' \;
echo "========================================"
echo "SIZE OF DELTA:"
du -sh /tmp/test/upper
echo "========================================"
fi
- name: Compare base vs upper vs merged
run: |
echo "========================================"
echo "COMPARISON:"
echo "========================================"
echo "BASE layer (original, untouched):"
ls -la /tmp/test/base/
echo ""
echo "UPPER layer (DELTA - only changes):"
ls -la /tmp/test/upper/
echo ""
echo "MERGED layer (unified view = base + upper):"
ls -la /tmp/test/merged/
echo ""
echo "========================================"
echo "PROOF: Upper dir contains ONLY the delta!"
echo "========================================"
- name: Simulate tarball creation (what we'd upload)
run: |
echo "=== Creating tarball of delta ==="
tar -czf /tmp/delta.tar.gz -C /tmp/test/upper .
echo "Delta tarball size:"
ls -lh /tmp/delta.tar.gz
echo ""
echo "Delta tarball contents:"
tar -tzf /tmp/delta.tar.gz
echo ""
echo "========================================"
echo "This is what we'd upload to S3/rsync!"
echo "Only ~few KB instead of entire cache!"
echo "========================================"
- name: Upload delta to S3 (actual test!)
env:
AWS_ACCESS_KEY_ID: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_ACCESS_KEY }}
run: |
echo "========================================"
echo "UPLOADING TO S3"
echo "========================================"
# Upload the delta tarball
aws s3 cp /tmp/delta.tar.gz \
s3://xahaud-github-actions-cache-niq/hello-world-first-test.tar.gz \
--region us-east-1
echo ""
echo "✅ Successfully uploaded to S3!"
echo "File: s3://xahaud-github-actions-cache-niq/hello-world-first-test.tar.gz"
echo ""
# Verify it exists
echo "Verifying upload..."
aws s3 ls s3://xahaud-github-actions-cache-niq/hello-world-first-test.tar.gz --region us-east-1
echo ""
echo "========================================"
echo "S3 upload test complete! 🚀"
echo "========================================"
- name: Cleanup
if: always()
run: |
echo "=== Unmounting OverlayFS ==="
sudo umount /tmp/test/merged || true

View File

@@ -0,0 +1,36 @@
name: Verify Generated Hook Headers
on:
push:
pull_request:
jobs:
verify-generated-headers:
strategy:
fail-fast: false
matrix:
include:
- target: hook/error.h
generator: ./hook/generate_error.sh
- target: hook/extern.h
generator: ./hook/generate_extern.sh
- target: hook/sfcodes.h
generator: bash ./hook/generate_sfcodes.sh
- target: hook/tts.h
generator: ./hook/generate_tts.sh
runs-on: ubuntu-latest
name: ${{ matrix.target }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Verify ${{ matrix.target }}
run: |
set -euo pipefail
chmod +x hook/generate_*.sh || true
tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT
${{ matrix.generator }} > "$tmp"
diff -u ${{ matrix.target }} "$tmp"

View File

@@ -0,0 +1,131 @@
name: MacOS - GA Runner
on:
push:
branches: ["dev", "candidate", "release"]
pull_request:
branches: ["dev", "candidate", "release"]
schedule:
- cron: '0 0 * * *'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
test:
strategy:
matrix:
generator:
- Ninja
configuration:
- Debug
runs-on: macos-15
env:
build_dir: .build
# Bump this number to invalidate all caches globally.
CACHE_VERSION: 1
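# e.g. bumping to 2 rewrites every "...-v1-..." cache key as "...-v2-...",
# so previously saved caches simply stop matching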
MAIN_BRANCH_NAME: dev
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install Conan
run: |
brew install conan
# Verify Conan 2 is installed
conan --version
- name: Install Coreutils
run: |
brew install coreutils
echo "Num proc: $(nproc)"
- name: Install Ninja
if: matrix.generator == 'Ninja'
run: brew install ninja
- name: Install Python
run: |
if which python3 > /dev/null 2>&1; then
echo "Python 3 executable exists"
python3 --version
else
brew install python@3.12
fi
# Create 'python' symlink if it doesn't exist (for tools expecting 'python')
if ! which python > /dev/null 2>&1; then
sudo ln -sf $(which python3) /usr/local/bin/python
fi
- name: Install CMake
run: |
# Install CMake 3.x to match local dev environments
# With Conan 2 and the policy args passed to CMake, newer versions
# can have issues with dependencies that require cmake_minimum_required < 3.5
brew uninstall cmake --ignore-dependencies 2>/dev/null || true
# Download and install CMake 3.31.7 directly
curl -L https://github.com/Kitware/CMake/releases/download/v3.31.7/cmake-3.31.7-macos-universal.tar.gz -o cmake.tar.gz
tar -xzf cmake.tar.gz
# Move the entire CMake.app to /Applications
sudo mv cmake-3.31.7-macos-universal/CMake.app /Applications/
echo "/Applications/CMake.app/Contents/bin" >> $GITHUB_PATH
/Applications/CMake.app/Contents/bin/cmake --version
- name: Install ccache
run: brew install ccache
- name: Check environment
run: |
echo "PATH:"
echo "${PATH}" | tr ':' '\n'
which python && python --version || echo "Python not found"
which conan && conan --version || echo "Conan not found"
which cmake && cmake --version || echo "CMake not found"
clang --version
ccache --version
echo "---- Full Environment ----"
env
- name: Detect compiler version
id: detect-compiler
run: |
COMPILER_VERSION=$(clang --version | grep -oE 'version [0-9]+' | grep -oE '[0-9]+')
echo "compiler_version=${COMPILER_VERSION}" >> $GITHUB_OUTPUT
echo "Detected Apple Clang version: ${COMPILER_VERSION}"
- name: Install dependencies
uses: ./.github/actions/xahau-ga-dependencies
with:
configuration: ${{ matrix.configuration }}
build_dir: ${{ env.build_dir }}
compiler-id: clang
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
os: Macos
arch: armv8
compiler: apple-clang
compiler_version: ${{ steps.detect-compiler.outputs.compiler_version }}
stdlib: libcxx
aws-access-key-id: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_KEY_ID }}
aws-secret-access-key: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_ACCESS_KEY }}
- name: Build
uses: ./.github/actions/xahau-ga-build
with:
generator: ${{ matrix.generator }}
configuration: ${{ matrix.configuration }}
build_dir: ${{ env.build_dir }}
compiler-id: clang
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
stdlib: libcxx
aws-access-key-id: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_KEY_ID }}
aws-secret-access-key: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_ACCESS_KEY }}
- name: Test
run: |
${{ env.build_dir }}/rippled --unittest --unittest-jobs $(nproc)

.github/workflows/xahau-ga-nix.yml vendored Normal file
View File

@@ -0,0 +1,298 @@
name: Nix - GA Runner
on:
push:
branches: ["dev", "candidate", "release", "nd-experiment-overlayfs-2025-10-29"]
pull_request:
branches: ["dev", "candidate", "release"]
schedule:
- cron: '0 0 * * *'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
matrix-setup:
runs-on: ubuntu-latest
container: python:3-slim
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- name: Generate build matrix
id: set-matrix
shell: python
run: |
import json
import os
# Full matrix with all 6 compiler configurations
# Each configuration includes all parameters needed by the build job
full_matrix = [
{
"compiler_id": "gcc-11-libstdcxx",
"compiler": "gcc",
"cc": "gcc-11",
"cxx": "g++-11",
"compiler_version": 11,
"stdlib": "libstdcxx",
"configuration": "Debug"
},
{
"compiler_id": "gcc-13-libstdcxx",
"compiler": "gcc",
"cc": "gcc-13",
"cxx": "g++-13",
"compiler_version": 13,
"stdlib": "libstdcxx",
"configuration": "Debug"
},
{
"compiler_id": "clang-14-libstdcxx-gcc11",
"compiler": "clang",
"cc": "clang-14",
"cxx": "clang++-14",
"compiler_version": 14,
"stdlib": "libstdcxx",
"clang_gcc_toolchain": 11,
"configuration": "Debug"
},
{
"compiler_id": "clang-16-libstdcxx-gcc13",
"compiler": "clang",
"cc": "clang-16",
"cxx": "clang++-16",
"compiler_version": 16,
"stdlib": "libstdcxx",
"clang_gcc_toolchain": 13,
"configuration": "Debug"
},
{
"compiler_id": "clang-17-libcxx",
"compiler": "clang",
"cc": "clang-17",
"cxx": "clang++-17",
"compiler_version": 17,
"stdlib": "libcxx",
"configuration": "Debug"
},
{
# Clang 18 - testing if it's faster than Clang 17 with libc++
# Requires patching Conan v1 settings.yml to add version 18
"compiler_id": "clang-18-libcxx",
"compiler": "clang",
"cc": "clang-18",
"cxx": "clang++-18",
"compiler_version": 18,
"stdlib": "libcxx",
"configuration": "Debug"
}
]
# Minimal matrix for PRs and feature branches
minimal_matrix = [
full_matrix[1], # gcc-13 (middle-ground gcc)
full_matrix[2] # clang-14 (mature, stable clang)
]
# Determine which matrix to use based on the target branch
ref = "${{ github.ref }}"
base_ref = "${{ github.base_ref }}" # For PRs, this is the target branch
event_name = "${{ github.event_name }}"
commit_message = """${{ github.event.head_commit.message }}"""
pr_title = """${{ github.event.pull_request.title }}"""
# Debug logging
print(f"Event: {event_name}")
print(f"Ref: {ref}")
print(f"Base ref: {base_ref}")
print(f"PR title: {pr_title}")
print(f"Commit message: {commit_message}")
# Check for override tags in commit message or PR title
force_full = "[ci-nix-full-matrix]" in commit_message or "[ci-nix-full-matrix]" in pr_title
print(f"Force full matrix: {force_full}")
# Check if this is targeting a main branch
# For PRs: check base_ref (target branch)
# For pushes: check ref (current branch)
main_branches = ["refs/heads/dev", "refs/heads/release", "refs/heads/candidate"]
if force_full:
# Override: always use full matrix if tag is present
use_full = True
elif event_name == "pull_request":
# For PRs, base_ref is just the branch name (e.g., "dev", not "refs/heads/dev")
# Check if the PR targets release or candidate (more critical branches)
use_full = base_ref in ["release", "candidate"]
else:
# For pushes, ref is the full reference (e.g., "refs/heads/dev")
use_full = ref in main_branches
# Select the appropriate matrix
if use_full:
if force_full:
print(f"Using FULL matrix (6 configs) - forced by [ci-nix-full-matrix] tag")
else:
print(f"Using FULL matrix (6 configs) - targeting main branch")
matrix = full_matrix
else:
print(f"Using MINIMAL matrix (2 configs) - feature branch/PR")
matrix = minimal_matrix
# Output the matrix as JSON
output = json.dumps({"include": matrix})
with open(os.environ['GITHUB_OUTPUT'], 'a') as f:
f.write(f"matrix={output}\n")
build:
needs: matrix-setup
runs-on: ubuntu-latest
outputs:
artifact_name: ${{ steps.set-artifact-name.outputs.artifact_name }}
strategy:
fail-fast: false
matrix: ${{ fromJSON(needs.matrix-setup.outputs.matrix) }}
env:
build_dir: .build
# Bump this number to invalidate all caches globally.
CACHE_VERSION: 3
MAIN_BRANCH_NAME: dev
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install build dependencies
run: |
sudo apt-get update
sudo apt-get install -y ninja-build ${{ matrix.cc }} ${{ matrix.cxx }} ccache
# Install the specific GCC version needed for Clang
if [ -n "${{ matrix.clang_gcc_toolchain }}" ]; then
echo "=== Installing GCC ${{ matrix.clang_gcc_toolchain }} for Clang ==="
sudo apt-get install -y gcc-${{ matrix.clang_gcc_toolchain }} g++-${{ matrix.clang_gcc_toolchain }} libstdc++-${{ matrix.clang_gcc_toolchain }}-dev
echo "=== GCC versions available after installation ==="
ls -la /usr/lib/gcc/x86_64-linux-gnu/ | grep -E "^d"
fi
# For Clang < 16 with --gcc-toolchain, hide newer GCC versions
# This is needed because --gcc-toolchain still picks the highest version
#
# THE GREAT GCC HIDING TRICK (for Clang < 16):
# Clang versions before 16 don't have --gcc-install-dir, only --gcc-toolchain
# which is deprecated and still uses discovery heuristics that ALWAYS pick
# the highest version number. So we play a sneaky game...
#
# We rename newer GCC versions to very low integers (1, 2, 3...) which makes
# Clang think they're ancient GCC versions. Since 11 > 3 > 2 > 1, Clang will
# pick GCC 11 over our renamed versions. It's dumb but it works!
#
# Example: GCC 12→1, GCC 13→2, GCC 14→3, so Clang picks 11 (highest number)
if [ -n "${{ matrix.clang_gcc_toolchain }}" ] && [ "${{ matrix.compiler_version }}" -lt "16" ]; then
echo "=== Hiding GCC versions newer than ${{ matrix.clang_gcc_toolchain }} for Clang < 16 ==="
target_version=${{ matrix.clang_gcc_toolchain }}
counter=1 # Start with 1 - these will be seen as "GCC version 1, 2, 3" etc
for dir in /usr/lib/gcc/x86_64-linux-gnu/*/; do
if [ -d "$dir" ]; then
version=$(basename "$dir")
# Check if version is numeric and greater than target
if [[ "$version" =~ ^[0-9]+$ ]] && [ "$version" -gt "$target_version" ]; then
echo "Hiding GCC $version -> renaming to $counter (will be seen as GCC version $counter)"
# Safety check: ensure target doesn't already exist
if [ ! -e "/usr/lib/gcc/x86_64-linux-gnu/$counter" ]; then
sudo mv "$dir" "/usr/lib/gcc/x86_64-linux-gnu/$counter"
else
echo "ERROR: Cannot rename GCC $version - /usr/lib/gcc/x86_64-linux-gnu/$counter already exists"
exit 1
fi
counter=$((counter + 1))
fi
fi
done
fi
# Verify what Clang will use
if [ -n "${{ matrix.clang_gcc_toolchain }}" ]; then
echo "=== Verifying GCC toolchain selection ==="
echo "Available GCC versions:"
ls -la /usr/lib/gcc/x86_64-linux-gnu/ | grep -E "^d.*[0-9]+$" || true
echo ""
echo "Clang's detected GCC installation:"
${{ matrix.cxx }} -v -E -x c++ /dev/null -o /dev/null 2>&1 | grep "Found candidate GCC installation" || true
fi
# Install libc++ dev packages if using libc++ (not needed for libstdc++)
if [ "${{ matrix.stdlib }}" = "libcxx" ]; then
sudo apt-get install -y libc++-${{ matrix.compiler_version }}-dev libc++abi-${{ matrix.compiler_version }}-dev
fi
# Install Conan 2
pip install --upgrade "conan>=2.0,<3"
- name: Check environment
run: |
echo "PATH:"
echo "${PATH}" | tr ':' '\n'
which conan && conan --version || echo "Conan not found"
which cmake && cmake --version || echo "CMake not found"
which ${{ matrix.cc }} && ${{ matrix.cc }} --version || echo "${{ matrix.cc }} not found"
which ${{ matrix.cxx }} && ${{ matrix.cxx }} --version || echo "${{ matrix.cxx }} not found"
which ccache && ccache --version || echo "ccache not found"
echo "---- Full Environment ----"
env
- name: Install dependencies
uses: ./.github/actions/xahau-ga-dependencies
with:
configuration: ${{ matrix.configuration }}
build_dir: ${{ env.build_dir }}
compiler-id: ${{ matrix.compiler_id }}
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
compiler: ${{ matrix.compiler }}
compiler_version: ${{ matrix.compiler_version }}
cc: ${{ matrix.cc }}
cxx: ${{ matrix.cxx }}
stdlib: ${{ matrix.stdlib }}
aws-access-key-id: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_KEY_ID }}
aws-secret-access-key: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_ACCESS_KEY }}
- name: Build
uses: ./.github/actions/xahau-ga-build
with:
generator: Ninja
configuration: ${{ matrix.configuration }}
build_dir: ${{ env.build_dir }}
cc: ${{ matrix.cc }}
cxx: ${{ matrix.cxx }}
compiler-id: ${{ matrix.compiler_id }}
cache_version: ${{ env.CACHE_VERSION }}
main_branch: ${{ env.MAIN_BRANCH_NAME }}
stdlib: ${{ matrix.stdlib }}
clang_gcc_toolchain: ${{ matrix.clang_gcc_toolchain || '' }}
aws-access-key-id: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_KEY_ID }}
aws-secret-access-key: ${{ secrets.XAHAUD_GITHUB_ACTIONS_CACHE_NIQ_AWS_ACCESS_KEY }}
- name: Set artifact name
id: set-artifact-name
run: |
ARTIFACT_NAME="build-output-nix-${{ github.run_id }}-${{ matrix.compiler }}-${{ matrix.configuration }}"
echo "artifact_name=${ARTIFACT_NAME}" >> "$GITHUB_OUTPUT"
echo "Using artifact name: ${ARTIFACT_NAME}"
- name: Debug build directory
run: |
echo "Checking build directory contents: ${{ env.build_dir }}"
ls -la ${{ env.build_dir }} || echo "Build directory not found or empty"
- name: Run tests
run: |
# Ensure the binary exists before trying to run
if [ -f "${{ env.build_dir }}/rippled" ]; then
${{ env.build_dir }}/rippled --unittest --unittest-jobs $(nproc)
else
echo "Error: rippled executable not found in ${{ env.build_dir }}"
exit 1
fi

.gitignore vendored
View File

@@ -24,6 +24,11 @@ bin/project-cache.jam
build/docker
# Ignore release builder files
.env
release-build
cmake-*.tar.gz
# Ignore object files.
*.o
build
@@ -114,3 +119,5 @@ pkg_out
pkg
CMakeUserPresets.json
bld.rippled/
generated

View File

@@ -3,11 +3,11 @@
"C_Cpp.clang_format_path": ".clang-format",
"C_Cpp.clang_format_fallbackStyle": "{ ColumnLimit: 0 }",
"[cpp]":{
"editor.wordBasedSuggestions": false,
"editor.wordBasedSuggestions": "off",
"editor.suggest.insertMode": "replace",
"editor.semanticHighlighting.enabled": true,
"editor.tabSize": 4,
"editor.defaultFormatter": "xaver.clang-format",
"editor.formatOnSave": false
"editor.formatOnSave": true
}
}

BUILD.md Normal file
View File

@@ -0,0 +1,468 @@
> These instructions assume you have a C++ development environment ready
> with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up
> on Linux, macOS, or Windows, see [our guide](./docs/build/environment.md).
>
> These instructions also assume a basic familiarity with Conan and CMake.
> If you are unfamiliar with Conan,
> you can read our [crash course](./docs/build/conan.md)
> or the official [Getting Started][3] walkthrough.
## Branches
For a stable release, choose the `master` branch or one of the [tagged
releases](https://github.com/ripple/rippled/releases).
```
git checkout master
```
For the latest release candidate, choose the `release` branch.
```
git checkout release
```
For the latest set of untested features, or to contribute, choose the `develop`
branch.
```
git checkout develop
```
## Minimum Requirements
- [Python 3.7](https://www.python.org/downloads/)
- [Conan 2.x](https://conan.io/downloads)
- [CMake 3.16](https://cmake.org/download/)
`rippled` is written in the C++20 dialect and includes the `<concepts>` header.
The [minimum compiler versions][2] required are:
| Compiler | Version |
|-------------|---------|
| GCC | 10 |
| Clang | 13 |
| Apple Clang | 13.1.6 |
| MSVC | 19.23 |
We don't recommend Windows for `rippled` production at this time. As of
January 2023, Ubuntu has the highest level of quality assurance, testing,
and support.
Windows developers should use Visual Studio 2019. `rippled` isn't
compatible with [Boost](https://www.boost.org/) 1.78 or 1.79, and Conan
can't build earlier Boost versions.
**Note:** 32-bit Windows development isn't supported.
## Steps
### Set Up Conan
1. (Optional) If you've never used Conan, use autodetect to set up a default profile.
```
conan profile detect --force
```
2. Update the compiler settings.
For Conan 2, you can edit the profile directly at `~/.conan2/profiles/default`,
or use the Conan CLI. Ensure C++20 is set:
```
conan profile show
```
Look for `compiler.cppstd=20` in the output. If it's not set, edit the profile:
```
# Edit ~/.conan2/profiles/default and ensure these settings exist:
[settings]
compiler.cppstd=20
```
Linux developers will commonly have a default Conan [profile][] that compiles
with GCC and links with libstdc++.
If you are linking with libstdc++ (see profile setting `compiler.libcxx`),
then you will need to choose the `libstdc++11` ABI.
```
# In ~/.conan2/profiles/default, ensure:
[settings]
compiler.libcxx=libstdc++11
```
On Windows, you should use the x64 native build tools.
An easy way to do that is to run the shortcut "x64 Native Tools Command
Prompt" for the version of Visual Studio that you have installed.
Windows developers must also build `rippled` and its dependencies for the x64
architecture.
```
# In ~/.conan2/profiles/default, ensure:
[settings]
arch=x86_64
```
3. (Optional) If you have multiple compilers installed on your platform,
make sure that Conan and CMake select the one you want to use.
This setting will set the correct variables (`CMAKE_<LANG>_COMPILER`)
in the generated CMake toolchain file.
```
# In ~/.conan2/profiles/default, add under [conf] section:
[conf]
tools.build:compiler_executables={"c": "<path>", "cpp": "<path>"}
```
For setting environment variables for dependencies:
```
# In ~/.conan2/profiles/default, add under [buildenv] section:
[buildenv]
CC=<path>
CXX=<path>
```
4. Export our [Conan recipe for Snappy](./external/snappy).
It doesn't explicitly link the C++ standard library,
which allows you to statically link it with GCC, if you want.
```
conan export external/snappy --version 1.1.10 --user xahaud --channel stable
```
5. Export our [Conan recipe for SOCI](./external/soci).
It patches their CMake to correctly import its dependencies.
```
conan export external/soci --version 4.0.3 --user xahaud --channel stable
```
6. Export our [Conan recipe for WasmEdge](./external/wasmedge).
```
conan export external/wasmedge --version 0.11.2 --user xahaud --channel stable
```
### Build and Test
1. Create a build directory and move into it.
```
mkdir .build
cd .build
```
You can use any directory name. Conan treats your working directory as an
install folder and generates files with implementation details.
You don't need to worry about these files, but make sure to change
your working directory to your build directory before calling Conan.
**Note:** You can specify a directory for the installation files by adding
the `install-folder` or `-if` option to every `conan install` command
in the next step.
2. Generate CMake files for every configuration you want to build.
```
conan install .. --output-folder . --build missing --settings build_type=Release
conan install .. --output-folder . --build missing --settings build_type=Debug
```
For a single-configuration generator, e.g. `Unix Makefiles` or `Ninja`,
you only need to run this command once.
For a multi-configuration generator, e.g. `Visual Studio`, you may want to
run it more than once.
Each of these commands should also have a different `build_type` setting.
A second command with the same `build_type` setting will overwrite the files
generated by the first. You can pass the build type on the command line with
`--settings build_type=$BUILD_TYPE` or in the profile itself,
under the section `[settings]` with the key `build_type`.
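For example, to bake the build type into the profile instead of passing it
on the command line:
```
# In ~/.conan2/profiles/default, add:
[settings]
build_type=Release
```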
If you are using a Microsoft Visual C++ compiler,
then you will need to ensure consistency between the `build_type` setting
and the runtime settings.
With Conan 1's `Visual Studio` compiler setting, `compiler.runtime` should be
`MT` when `build_type` is `Release` and `MTd` when `build_type` is `Debug`:
```
conan install .. --output-folder . --build missing --settings build_type=Release --settings compiler.runtime=MT
conan install .. --output-folder . --build missing --settings build_type=Debug --settings compiler.runtime=MTd
```
With Conan 2's `msvc` compiler setting, the equivalent is
`compiler.runtime=static`, with `compiler.runtime_type` matching the build
type (`Release` or `Debug`).
3. Configure CMake and pass the toolchain file generated by Conan, located at
`$OUTPUT_FOLDER/build/generators/conan_toolchain.cmake`.
Single-config generators:
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release ..
```
Pass the CMake variable [`CMAKE_BUILD_TYPE`][build_type]
and make sure it matches the `build_type` setting you chose in the previous
step.
Multi-config generators:
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake ..
```
**Note:** You can pass build options for `rippled` in this step (see the Options table below).
4. Build `rippled`.
For a single-configuration generator, it will build whatever configuration
you passed for `CMAKE_BUILD_TYPE`. For a multi-configuration generator,
you must pass the option `--config` to select the build configuration.
Single-config generators:
```
cmake --build .
```
Multi-config generators:
```
cmake --build . --config Release
cmake --build . --config Debug
```
5. Test rippled.
Single-config generators:
```
./rippled --unittest
```
Multi-config generators:
```
./Release/rippled --unittest
./Debug/rippled --unittest
```
The location of `rippled` in your build directory depends on your CMake
generator. Pass `--help` to see the rest of the command line options.
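To run only a subset of the unit tests, you can pass a suite filter to
`--unittest`. For example (the suite name here is illustrative):
```
./rippled --unittest=TxQ
```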
## Options
| Option | Default Value | Description |
| --- | --- | --- |
| `assert` | OFF | Enable assertions. |
| `reporting` | OFF | Build the reporting mode feature. |
| `tests` | ON | Build tests. |
| `unity` | ON | Configure a unity build. |
| `san` | N/A | Enable a sanitizer with Clang. Choices are `thread` and `address`. |
[Unity builds][5] may be faster for the first build
(at the cost of much more memory) since they concatenate sources into fewer
translation units. Non-unity builds may be faster for incremental builds,
and can be helpful for detecting `#include` omissions.
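Each option is an ordinary CMake cache variable, so you set it with `-D`
when configuring (step 3 above). For example, to configure a Debug build
with assertions enabled and unity disabled:
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Debug -Dassert=ON -Dunity=OFF ..
```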
## Troubleshooting
### Conan
If you have trouble building dependencies after changing Conan settings,
try removing the Conan cache.
For Conan 2:
```
rm -rf ~/.conan2/p
```
Or clean the source, build, and download folders across the entire Conan 2 cache:
```
conan cache clean "*"
```
### macOS compilation with Apple Clang 17+
If you're on macOS with Apple Clang 17 or newer, you need to add a compiler flag to work around a compilation error in gRPC dependencies.
Edit `~/.conan2/profiles/default` and add under the `[conf]` section:
```
[conf]
tools.build:cxxflags=["-Wno-missing-template-arg-list-after-template-kw"]
```
### recompile with -fPIC
If you get a linker error suggesting that you recompile Boost with
position-independent code, such as:
```
/usr/bin/ld.gold: error: /home/username/.conan/data/boost/1.77.0/_/_/package/.../lib/libboost_container.a(alloc_lib.o):
requires unsupported dynamic reloc 11; recompile with -fPIC
```
Conan most likely downloaded a bad binary distribution of the dependency.
This seems to be a [bug][1] in Conan just for Boost 1.77.0 compiled with GCC
for Linux. The solution is to build the dependency locally by passing
`--build boost` when calling `conan install`.
```
conan install .. --output-folder . --build boost --build missing --settings build_type=Release
```
## Add a Dependency
If you want to experiment with a new package, follow these steps:
1. Search for the package on [Conan Center](https://conan.io/center/).
2. Modify [`conanfile.py`](./conanfile.py):
- Add a version of the package to the `requires` property.
- Change any default options for the package by adding them to the
`default_options` property (with syntax `'$package:$option': $value`).
3. Modify [`CMakeLists.txt`](./CMakeLists.txt):
- Add a call to `find_package($package REQUIRED)`.
- Link a library from the package to the target `ripple_libs`
(search for the existing call to `target_link_libraries(ripple_libs INTERFACE ...)`).
4. Start coding! Don't forget to include whatever headers you need from the package. (A sketch of steps 2 and 3 follows.)
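As a hedged sketch of steps 2 and 3, suppose you wanted to try a
hypothetical package `zstd/1.5.5` (the name, version, option, and imported
target below are placeholders; the real target name comes from the package's
generated configuration file):
```
# conanfile.py (sketch): extend the existing properties
requires = [
    # ... existing requirements ...
    'zstd/1.5.5',
]
default_options = {
    # ... existing options ...
    'zstd:shared': False,
}
```
```
# CMakeLists.txt (sketch)
find_package(zstd REQUIRED)
target_link_libraries(ripple_libs INTERFACE zstd::libzstd_static)
```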
## A crash course in CMake and Conan
To better understand how to use Conan,
we should first understand _why_ we use Conan,
and to understand that,
we need to understand how we use CMake.
### CMake
Technically, you don't need CMake to build this project.
You could manually compile every translation unit into an object file,
using the right compiler options,
and then manually link all those objects together,
using the right linker options.
However, that is very tedious and error-prone,
which is why we lean on tools like CMake.
We have written CMake configuration files
([`CMakeLists.txt`](./CMakeLists.txt) and friends)
for this project so that CMake can be used to correctly compile and link
all of the translation units in it.
Or rather, CMake will generate files for a separate build system
(e.g. Make, Ninja, Visual Studio, Xcode, etc.)
that compile and link all of the translation units.
Even then, CMake has parameters, some of which are platform-specific.
In CMake's parlance, parameters are specially-named **variables** like
[`CMAKE_BUILD_TYPE`][build_type] or
[`CMAKE_MSVC_RUNTIME_LIBRARY`][runtime].
Parameters include:
- what build system to generate files for
- where to find the compiler and linker
- where to find dependencies, e.g. libraries and headers
- how to link dependencies, e.g. any special compiler or linker flags that
need to be used with them, including preprocessor definitions
- how to compile translation units, e.g. with optimizations, debug symbols,
position-independent code, etc.
- on Windows, which runtime library to link with
For some of these parameters, like the build system and compiler,
CMake goes through a complicated search process to choose default values.
For others, like the dependencies,
_we_ wrote our own complicated process for choosing defaults
into the CMake configuration files of this project.
For most developers, things "just worked"... until they didn't, and then
you were left trying to debug one of these complicated processes instead of
choosing and manually passing the parameter values yourself.
You can pass every parameter to CMake on the command line,
but writing out these parameters every time you configure CMake is
a pain.
Most humans prefer to put them into a configuration file, once, that
CMake can read every time it is configured.
For CMake, that file is a [toolchain file][toolchain].
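For a sense of what such a file contains, here is a minimal hand-written
sketch (illustrative only; in this project the toolchain file is generated
by Conan, as described below):
```
# toolchain.cmake (illustrative sketch)
set(CMAKE_C_COMPILER /usr/bin/gcc)
set(CMAKE_CXX_COMPILER /usr/bin/g++)
set(CMAKE_CXX_FLAGS_INIT "-fPIC")
list(APPEND CMAKE_PREFIX_PATH "/path/to/installed/dependencies")
```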
### Conan
These next few paragraphs on Conan are going to read much like the ones above
for CMake.
Technically, you don't need Conan to build this project.
You could manually download, configure, build, and install all of the
dependencies yourself, and then pass all of the parameters necessary for
CMake to link to those dependencies.
To guarantee ABI compatibility, you must be sure to use the same set of
compiler and linker options for all dependencies _and_ this project.
However, that is very tedious and error-prone, which is why we lean on tools
like Conan.
We have written a Conan configuration file ([`conanfile.py`](./conanfile.py))
so that Conan can be used to correctly download, configure, build, and install
all of the dependencies for this project,
using a single set of compiler and linker options for all of them.
It generates files that contain almost all of the parameters that CMake
expects.
Those files include:
- A single toolchain file.
- For every dependency, a CMake [package configuration file][pcf],
[package version file][pvf], and for every build type, a package
targets file.
Together, these files implement version checking and define `IMPORTED`
targets for the dependencies.
The toolchain file itself amends the search path
([`CMAKE_PREFIX_PATH`][prefix_path]) so that [`find_package()`][find_package]
will [discover][search] the generated package configuration files.
**Nearly all we must do to properly configure CMake is pass the toolchain
file.**
What CMake parameters are left out?
You'll still need to pick a build system generator,
and if you choose a single-configuration generator,
you'll need to pass the `CMAKE_BUILD_TYPE`,
which should match the `build_type` setting you gave to Conan.
Even then, Conan has parameters, some of which are platform-specific.
In Conan's parlance, parameters are either settings or options.
**Settings** are shared by all packages, e.g. the build type.
**Options** are specific to a given package, e.g. whether to build and link
OpenSSL as a shared library.
For settings, Conan goes through a complicated search process to choose
defaults.
For options, each package recipe defines its own defaults.
You can pass every parameter to Conan on the command line,
but it is more convenient to put them in a [profile][profile].
**All we must do to properly configure Conan is edit and pass the profile.**
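Putting it together, a hedged example of a complete Linux profile, combining
settings (shared by all packages) with a per-package option (the paths,
versions, and option shown are illustrative):
```
# ~/.conan2/profiles/default (example)
[settings]
os=Linux
arch=x86_64
compiler=gcc
compiler.version=11
compiler.cppstd=20
compiler.libcxx=libstdc++11
build_type=Release

[options]
openssl/*:shared=False
```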
[1]: https://github.com/conan-io/conan-center-index/issues/13168
[5]: https://en.wikipedia.org/wiki/Unity_build
[build_type]: https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html
[runtime]: https://cmake.org/cmake/help/latest/variable/CMAKE_MSVC_RUNTIME_LIBRARY.html
[toolchain]: https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html
[pcf]: https://cmake.org/cmake/help/latest/manual/cmake-packages.7.html#package-configuration-file
[pvf]: https://cmake.org/cmake/help/latest/manual/cmake-packages.7.html#package-version-file
[find_package]: https://cmake.org/cmake/help/latest/command/find_package.html
[search]: https://cmake.org/cmake/help/latest/command/find_package.html#search-procedure
[prefix_path]: https://cmake.org/cmake/help/latest/variable/CMAKE_PREFIX_PATH.html
[profile]: https://docs.conan.io/en/latest/reference/profiles.html

View File

@@ -23,6 +23,11 @@ else()
message(STATUS "ACL not found, continuing without ACL support")
endif()
add_library(libxrpl INTERFACE)
target_link_libraries(libxrpl INTERFACE xrpl_core)
add_library(xrpl::libxrpl ALIAS libxrpl)
#[===============================[
beast/legacy FILES:
TODO: review these sources for removal or replacement
@@ -45,6 +50,12 @@ target_sources (xrpl_core PRIVATE
src/ripple/beast/utility/src/beast_Journal.cpp
src/ripple/beast/utility/src/beast_PropertyStream.cpp)
# Conditionally add enhanced logging source when BEAST_ENHANCED_LOGGING is enabled
if(DEFINED BEAST_ENHANCED_LOGGING AND BEAST_ENHANCED_LOGGING)
target_sources(xrpl_core PRIVATE
src/ripple/beast/utility/src/beast_EnhancedLogging.cpp)
endif()
#[===============================[
core sources
#]===============================]
@@ -144,12 +155,19 @@ target_link_libraries (xrpl_core
PUBLIC
OpenSSL::Crypto
Ripple::boost
NIH::WasmEdge
wasmedge::wasmedge
Ripple::syslibs
NIH::secp256k1
NIH::ed25519-donna
secp256k1::secp256k1
ed25519::ed25519
date::date
Ripple::opts)
# Link date-tz library when enhanced logging is enabled
if(DEFINED BEAST_ENHANCED_LOGGING AND BEAST_ENHANCED_LOGGING)
if(TARGET date::date-tz)
target_link_libraries(xrpl_core PUBLIC date::date-tz)
endif()
endif()
#[=================================[
main/core headers installation
#]=================================]
@@ -392,6 +410,7 @@ target_sources (rippled PRIVATE
src/ripple/app/misc/NegativeUNLVote.cpp
src/ripple/app/misc/NetworkOPs.cpp
src/ripple/app/misc/SHAMapStoreImp.cpp
src/ripple/app/misc/StateAccounting.cpp
src/ripple/app/misc/detail/impl/WorkSSL.cpp
src/ripple/app/misc/impl/AccountTxPaging.cpp
src/ripple/app/misc/impl/AmendmentTable.cpp
@@ -433,13 +452,20 @@ target_sources (rippled PRIVATE
src/ripple/app/tx/impl/CancelOffer.cpp
src/ripple/app/tx/impl/CashCheck.cpp
src/ripple/app/tx/impl/Change.cpp
src/ripple/app/tx/impl/ClaimReward.cpp
src/ripple/app/tx/impl/Clawback.cpp
src/ripple/app/tx/impl/CreateCheck.cpp
src/ripple/app/tx/impl/CreateOffer.cpp
src/ripple/app/tx/impl/CreateTicket.cpp
src/ripple/app/tx/impl/Cron.cpp
src/ripple/app/tx/impl/CronSet.cpp
src/ripple/app/tx/impl/DeleteAccount.cpp
src/ripple/app/tx/impl/DepositPreauth.cpp
src/ripple/app/tx/impl/Escrow.cpp
src/ripple/app/tx/impl/GenesisMint.cpp
src/ripple/app/tx/impl/Import.cpp
src/ripple/app/tx/impl/InvariantCheck.cpp
src/ripple/app/tx/impl/Invoke.cpp
src/ripple/app/tx/impl/NFTokenAcceptOffer.cpp
src/ripple/app/tx/impl/NFTokenBurn.cpp
src/ripple/app/tx/impl/NFTokenCancelOffer.cpp
@@ -448,13 +474,11 @@ target_sources (rippled PRIVATE
src/ripple/app/tx/impl/OfferStream.cpp
src/ripple/app/tx/impl/PayChan.cpp
src/ripple/app/tx/impl/Payment.cpp
src/ripple/app/tx/impl/Remit.cpp
src/ripple/app/tx/impl/SetAccount.cpp
src/ripple/app/tx/impl/SetRegularKey.cpp
src/ripple/app/tx/impl/SetHook.cpp
src/ripple/app/tx/impl/ClaimReward.cpp
src/ripple/app/tx/impl/GenesisMint.cpp
src/ripple/app/tx/impl/Import.cpp
src/ripple/app/tx/impl/Invoke.cpp
src/ripple/app/tx/impl/SetRemarks.cpp
src/ripple/app/tx/impl/SetRegularKey.cpp
src/ripple/app/tx/impl/SetSignerList.cpp
src/ripple/app/tx/impl/SetTrust.cpp
src/ripple/app/tx/impl/SignerEntries.cpp
@@ -537,6 +561,7 @@ target_sources (rippled PRIVATE
subdir: nodestore
#]===============================]
src/ripple/nodestore/backend/CassandraFactory.cpp
src/ripple/nodestore/backend/RWDBFactory.cpp
src/ripple/nodestore/backend/MemoryFactory.cpp
src/ripple/nodestore/backend/NuDBFactory.cpp
src/ripple/nodestore/backend/NullFactory.cpp
@@ -602,6 +627,7 @@ target_sources (rippled PRIVATE
src/ripple/rpc/handlers/BlackList.cpp
src/ripple/rpc/handlers/BookOffers.cpp
src/ripple/rpc/handlers/CanDelete.cpp
src/ripple/rpc/handlers/Catalogue.cpp
src/ripple/rpc/handlers/Connect.cpp
src/ripple/rpc/handlers/ConsensusInfo.cpp
src/ripple/rpc/handlers/CrawlShards.cpp
@@ -657,6 +683,7 @@ target_sources (rippled PRIVATE
src/ripple/rpc/handlers/ValidatorListSites.cpp
src/ripple/rpc/handlers/Validators.cpp
src/ripple/rpc/handlers/WalletPropose.cpp
src/ripple/rpc/handlers/Catalogue.cpp
src/ripple/rpc/impl/DeliveredAmount.cpp
src/ripple/rpc/impl/Handler.cpp
src/ripple/rpc/impl/LegacyPathFind.cpp
@@ -668,6 +695,9 @@ target_sources (rippled PRIVATE
src/ripple/rpc/impl/ShardVerificationScheduler.cpp
src/ripple/rpc/impl/Status.cpp
src/ripple/rpc/impl/TransactionSign.cpp
src/ripple/rpc/impl/NFTokenID.cpp
src/ripple/rpc/impl/NFTokenOfferID.cpp
src/ripple/rpc/impl/NFTSyntheticSerializer.cpp
#[===============================[
main sources:
subdir: perflog
@@ -706,6 +736,8 @@ if (tests)
src/test/app/BaseFee_test.cpp
src/test/app/Check_test.cpp
src/test/app/ClaimReward_test.cpp
src/test/app/Cron_test.cpp
src/test/app/Clawback_test.cpp
src/test/app/CrossingLimits_test.cpp
src/test/app/DeliverMin_test.cpp
src/test/app/DepositAuth_test.cpp
@@ -736,17 +768,23 @@ if (tests)
src/test/app/Path_test.cpp
src/test/app/PayChan_test.cpp
src/test/app/PayStrand_test.cpp
src/test/app/PreviousTxn_test.cpp
src/test/app/PseudoTx_test.cpp
src/test/app/RCLCensorshipDetector_test.cpp
src/test/app/RCLValidations_test.cpp
src/test/app/Regression_test.cpp
src/test/app/Remit_test.cpp
src/test/app/SHAMapStore_test.cpp
src/test/app/SetAuth_test.cpp
src/test/app/SetHook_test.cpp
src/test/app/SetHookTSH_test.cpp
src/test/app/SetRegularKey_test.cpp
src/test/app/SetRemarks_test.cpp
src/test/app/SetTrust_test.cpp
src/test/app/Taker_test.cpp
src/test/app/TheoreticalQuality_test.cpp
src/test/app/Ticket_test.cpp
src/test/app/Touch_test.cpp
src/test/app/Transaction_ordering_test.cpp
src/test/app/TrustAndBalance_test.cpp
src/test/app/TxQ_test.cpp
@@ -754,7 +792,6 @@ if (tests)
src/test/app/ValidatorKeys_test.cpp
src/test/app/ValidatorList_test.cpp
src/test/app/ValidatorSite_test.cpp
src/test/app/SetHook_test.cpp
src/test/app/Wildcard_test.cpp
src/test/app/XahauGenesis_test.cpp
src/test/app/tx/apply_test.cpp
@@ -864,6 +901,7 @@ if (tests)
src/test/jtx/impl/amount.cpp
src/test/jtx/impl/balance.cpp
src/test/jtx/impl/check.cpp
src/test/jtx/impl/cron.cpp
src/test/jtx/impl/delivermin.cpp
src/test/jtx/impl/deposit.cpp
src/test/jtx/impl/envconfig.cpp
@@ -888,14 +926,18 @@ if (tests)
src/test/jtx/impl/rate.cpp
src/test/jtx/impl/regkey.cpp
src/test/jtx/impl/reward.cpp
src/test/jtx/impl/remarks.cpp
src/test/jtx/impl/remit.cpp
src/test/jtx/impl/sendmax.cpp
src/test/jtx/impl/seq.cpp
src/test/jtx/impl/sig.cpp
src/test/jtx/impl/tag.cpp
src/test/jtx/impl/TestHelpers.cpp
src/test/jtx/impl/ticket.cpp
src/test/jtx/impl/token.cpp
src/test/jtx/impl/trust.cpp
src/test/jtx/impl/txflags.cpp
src/test/jtx/impl/unl.cpp
src/test/jtx/impl/uritoken.cpp
src/test/jtx/impl/utility.cpp
@@ -923,6 +965,7 @@ if (tests)
src/test/nodestore/Basics_test.cpp
src/test/nodestore/DatabaseShard_test.cpp
src/test/nodestore/Database_test.cpp
src/test/nodestore/NuDBFactory_test.cpp
src/test/nodestore/Timing_test.cpp
src/test/nodestore/import_test.cpp
src/test/nodestore/varint_test.cpp
@@ -969,6 +1012,11 @@ if (tests)
subdir: resource
#]===============================]
src/test/resource/Logic_test.cpp
#[===============================[
test sources:
subdir: rdb
#]===============================]
src/test/rdb/RelationalDatabase_test.cpp
#[===============================[
test sources:
subdir: rpc
@@ -978,10 +1026,12 @@ if (tests)
src/test/rpc/AccountLinesRPC_test.cpp
src/test/rpc/AccountObjects_test.cpp
src/test/rpc/AccountOffers_test.cpp
src/test/rpc/AccountNamespace_test.cpp
src/test/rpc/AccountSet_test.cpp
src/test/rpc/AccountTx_test.cpp
src/test/rpc/AmendmentBlocked_test.cpp
src/test/rpc/Book_test.cpp
src/test/rpc/Catalogue_test.cpp
src/test/rpc/DepositAuthorized_test.cpp
src/test/rpc/DeliveredAmount_test.cpp
src/test/rpc/Feature_test.cpp
@@ -1040,6 +1090,11 @@ target_link_libraries (rippled
Ripple::opts
Ripple::libs
Ripple::xrpl_core
# Workaround for a Conan 1.x bug that prevents static linking of libstdc++
# when a dependency (snappy) modifies system_libs. See the comment in
# external/snappy/conanfile.py for a full explanation.
# This is likely not strictly necessary, but listed explicitly as a good practice.
m
)
exclude_if_included (rippled)
# define a macro for tests that might need to

View File

@@ -1,6 +1,13 @@
#[===================================================================[
docs target (optional)
#]===================================================================]
# Early return if the `docs` directory is missing,
# e.g. when we are building a Conan package.
if(NOT EXISTS docs)
return()
endif()
if (tests)
find_package (Doxygen)
if (NOT TARGET Doxygen::doxygen)

View File

@@ -4,7 +4,6 @@
install (
TARGETS
ed25519-donna
common
opts
ripple_syslibs
@@ -16,17 +15,6 @@ install (
RUNTIME DESTINATION bin
INCLUDES DESTINATION include)
if(${INSTALL_SECP256K1})
install (
TARGETS
secp256k1
EXPORT RippleExports
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin
INCLUDES DESTINATION include)
endif()
install (EXPORT RippleExports
FILE RippleTargets.cmake
NAMESPACE Ripple::

View File

@@ -14,7 +14,7 @@ if (is_multiconfig)
file(GLOB md_files RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} CONFIGURE_DEPENDS
*.md)
LIST(APPEND all_sources ${md_files})
foreach (_target secp256k1 ed25519-donna pbufs xrpl_core rippled)
foreach (_target secp256k1::secp256k1 ed25519::ed25519 pbufs xrpl_core rippled)
get_target_property (_type ${_target} TYPE)
if(_type STREQUAL "INTERFACE_LIBRARY")
continue()

View File

@@ -1,33 +0,0 @@
#[===================================================================[
NIH prefix path..this is where we will download
and build any ExternalProjects, and they will hopefully
survive across build directory deletion (manual cleans)
#]===================================================================]
string (REGEX REPLACE "[ \\/%]+" "_" gen_for_path ${CMAKE_GENERATOR})
string (TOLOWER ${gen_for_path} gen_for_path)
# HACK: trying to shorten paths for windows CI (which hits 260 MAXPATH easily)
# @see: https://issues.jenkins-ci.org/browse/JENKINS-38706?focusedCommentId=339847
string (REPLACE "visual_studio" "vs" gen_for_path ${gen_for_path})
if (NOT DEFINED NIH_CACHE_ROOT)
if (DEFINED ENV{NIH_CACHE_ROOT})
set (NIH_CACHE_ROOT $ENV{NIH_CACHE_ROOT})
else ()
set (NIH_CACHE_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/.nih_c")
endif ()
endif ()
set (nih_cache_path
"${NIH_CACHE_ROOT}/${gen_for_path}/${CMAKE_CXX_COMPILER_ID}_${CMAKE_CXX_COMPILER_VERSION}")
if (NOT is_multiconfig)
set (nih_cache_path "${nih_cache_path}/${CMAKE_BUILD_TYPE}")
endif ()
file(TO_CMAKE_PATH "${nih_cache_path}" nih_cache_path)
message (STATUS "NIH-EP cache path: ${nih_cache_path}")
## two convenience variables:
set (ep_lib_prefix ${CMAKE_STATIC_LIBRARY_PREFIX})
set (ep_lib_suffix ${CMAKE_STATIC_LIBRARY_SUFFIX})
# this is a setting for FetchContent and needs to be
# a cache variable
# https://cmake.org/cmake/help/latest/module/FetchContent.html#populating-the-content
set (FETCHCONTENT_BASE_DIR ${nih_cache_path} CACHE STRING "" FORCE)

View File

@@ -1,50 +1,4 @@
#[===================================================================[
NIH dep: boost
#]===================================================================]
if((NOT DEFINED BOOST_ROOT) AND(DEFINED ENV{BOOST_ROOT}))
set(BOOST_ROOT $ENV{BOOST_ROOT})
endif()
file(TO_CMAKE_PATH "${BOOST_ROOT}" BOOST_ROOT)
if(WIN32 OR CYGWIN)
# Workaround for MSVC having two boost versions - x86 and x64 on same PC in stage folders
if(DEFINED BOOST_ROOT)
if(IS_DIRECTORY ${BOOST_ROOT}/stage64/lib)
set(BOOST_LIBRARYDIR ${BOOST_ROOT}/stage64/lib)
elseif(IS_DIRECTORY ${BOOST_ROOT}/stage/lib)
set(BOOST_LIBRARYDIR ${BOOST_ROOT}/stage/lib)
elseif(IS_DIRECTORY ${BOOST_ROOT}/lib)
set(BOOST_LIBRARYDIR ${BOOST_ROOT}/lib)
else()
message(WARNING "Did not find expected boost library dir. "
"Defaulting to ${BOOST_ROOT}")
set(BOOST_LIBRARYDIR ${BOOST_ROOT})
endif()
endif()
endif()
message(STATUS "BOOST_ROOT: ${BOOST_ROOT}")
message(STATUS "BOOST_LIBRARYDIR: ${BOOST_LIBRARYDIR}")
# uncomment the following as needed to debug FindBoost issues:
#set(Boost_DEBUG ON)
#[=========================================================[
boost dynamic libraries don't trivially support @rpath
linking right now (cmake's default), so just force
static linking for macos, or if requested on linux by flag
#]=========================================================]
if(static)
set(Boost_USE_STATIC_LIBS ON)
endif()
set(Boost_USE_MULTITHREADED ON)
if(static AND NOT APPLE)
set(Boost_USE_STATIC_RUNTIME ON)
else()
set(Boost_USE_STATIC_RUNTIME OFF)
endif()
# TBD:
# Boost_USE_DEBUG_RUNTIME: When ON, uses Boost libraries linked against the
find_package(Boost 1.70 REQUIRED
find_package(Boost 1.86 REQUIRED
COMPONENTS
chrono
container
@@ -55,11 +9,12 @@ find_package(Boost 1.70 REQUIRED
program_options
regex
system
thread)
thread
)
add_library(ripple_boost INTERFACE)
add_library(Ripple::boost ALIAS ripple_boost)
if(is_xcode)
if(XCODE)
target_include_directories(ripple_boost BEFORE INTERFACE ${Boost_INCLUDE_DIRS})
target_compile_options(ripple_boost INTERFACE --system-header-prefix="boost/")
else()
@@ -77,6 +32,7 @@ target_link_libraries(ripple_boost
Boost::program_options
Boost::regex
Boost::system
Boost::iostreams
Boost::thread)
if(Boost_COMPILER)
target_link_libraries(ripple_boost INTERFACE Boost::disable_autolinking)

View File

@@ -1,28 +0,0 @@
#[===================================================================[
NIH dep: ed25519-donna
#]===================================================================]
add_library (ed25519-donna STATIC
src/ed25519-donna/ed25519.c)
target_include_directories (ed25519-donna
PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
$<INSTALL_INTERFACE:include>
PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}/src/ed25519-donna)
#[=========================================================[
NOTE for macos:
https://github.com/floodyberry/ed25519-donna/issues/29
our source for ed25519-donna-portable.h has been
patched to workaround this.
#]=========================================================]
target_link_libraries (ed25519-donna PUBLIC OpenSSL::SSL)
add_library (NIH::ed25519-donna ALIAS ed25519-donna)
target_link_libraries (ripple_libs INTERFACE NIH::ed25519-donna)
#[===========================[
headers installation
#]===========================]
install (
FILES
src/ed25519-donna/ed25519.h
DESTINATION include/ed25519-donna)

File diff suppressed because it is too large

View File

@@ -1,47 +0,0 @@
# - Try to find jemalloc
# Once done this will define
# JEMALLOC_FOUND - System has jemalloc
# JEMALLOC_INCLUDE_DIRS - The jemalloc include directories
# JEMALLOC_LIBRARIES - The libraries needed to use jemalloc
if(NOT USE_BUNDLED_JEMALLOC)
find_package(PkgConfig)
if (PKG_CONFIG_FOUND)
pkg_check_modules(PC_JEMALLOC QUIET jemalloc)
endif()
else()
set(PC_JEMALLOC_INCLUDEDIR)
set(PC_JEMALLOC_INCLUDE_DIRS)
set(PC_JEMALLOC_LIBDIR)
set(PC_JEMALLOC_LIBRARY_DIRS)
set(LIMIT_SEARCH NO_DEFAULT_PATH)
endif()
set(JEMALLOC_DEFINITIONS ${PC_JEMALLOC_CFLAGS_OTHER})
find_path(JEMALLOC_INCLUDE_DIR jemalloc/jemalloc.h
PATHS ${PC_JEMALLOC_INCLUDEDIR} ${PC_JEMALLOC_INCLUDE_DIRS}
${LIMIT_SEARCH})
# If we're asked to use static linkage, add libjemalloc.a as a preferred library name.
if(JEMALLOC_USE_STATIC)
list(APPEND JEMALLOC_NAMES
"${CMAKE_STATIC_LIBRARY_PREFIX}jemalloc${CMAKE_STATIC_LIBRARY_SUFFIX}")
endif()
list(APPEND JEMALLOC_NAMES jemalloc)
find_library(JEMALLOC_LIBRARY NAMES ${JEMALLOC_NAMES}
HINTS ${PC_JEMALLOC_LIBDIR} ${PC_JEMALLOC_LIBRARY_DIRS}
${LIMIT_SEARCH})
set(JEMALLOC_LIBRARIES ${JEMALLOC_LIBRARY})
set(JEMALLOC_INCLUDE_DIRS ${JEMALLOC_INCLUDE_DIR})
include(FindPackageHandleStandardArgs)
# handle the QUIETLY and REQUIRED arguments and set JEMALLOC_FOUND to TRUE
# if all listed variables are TRUE
find_package_handle_standard_args(JeMalloc DEFAULT_MSG
JEMALLOC_LIBRARY JEMALLOC_INCLUDE_DIR)
mark_as_advanced(JEMALLOC_INCLUDE_DIR JEMALLOC_LIBRARY)

View File

@@ -1,22 +0,0 @@
find_package (PkgConfig REQUIRED)
pkg_search_module (libarchive_PC QUIET libarchive>=3.4.3)
if(static)
set(LIBARCHIVE_LIB libarchive.a)
else()
set(LIBARCHIVE_LIB archive)
endif()
find_library (archive
NAMES ${LIBARCHIVE_LIB}
HINTS
${libarchive_PC_LIBDIR}
${libarchive_PC_LIBRARY_DIRS}
NO_DEFAULT_PATH)
find_path (LIBARCHIVE_INCLUDE_DIR
NAMES archive.h
HINTS
${libarchive_PC_INCLUDEDIR}
${libarchive_PC_INCLUDEDIRS}
NO_DEFAULT_PATH)

View File

@@ -1,24 +0,0 @@
find_package (PkgConfig)
if (PKG_CONFIG_FOUND)
pkg_search_module (lz4_PC QUIET liblz4>=1.9)
endif ()
if(static)
set(LZ4_LIB liblz4.a)
else()
set(LZ4_LIB lz4.so)
endif()
find_library (lz4
NAMES ${LZ4_LIB}
HINTS
${lz4_PC_LIBDIR}
${lz4_PC_LIBRARY_DIRS}
NO_DEFAULT_PATH)
find_path (LZ4_INCLUDE_DIR
NAMES lz4.h
HINTS
${lz4_PC_INCLUDEDIR}
${lz4_PC_INCLUDEDIRS}
NO_DEFAULT_PATH)

View File

@@ -1,24 +0,0 @@
find_package (PkgConfig)
if (PKG_CONFIG_FOUND)
pkg_search_module (secp256k1_PC QUIET libsecp256k1)
endif ()
if(static)
set(SECP256K1_LIB libsecp256k1.a)
else()
set(SECP256K1_LIB secp256k1)
endif()
find_library(secp256k1
NAMES ${SECP256K1_LIB}
HINTS
${secp256k1_PC_LIBDIR}
${secp256k1_PC_LIBRARY_PATHS}
NO_DEFAULT_PATH)
find_path (SECP256K1_INCLUDE_DIR
NAMES secp256k1.h
HINTS
${secp256k1_PC_INCLUDEDIR}
${secp256k1_PC_INCLUDEDIRS}
NO_DEFAULT_PATH)

View File

@@ -1,24 +0,0 @@
find_package (PkgConfig)
if (PKG_CONFIG_FOUND)
pkg_search_module (snappy_PC QUIET snappy>=1.1.7)
endif ()
if(static)
set(SNAPPY_LIB libsnappy.a)
else()
set(SNAPPY_LIB libsnappy.so)
endif()
find_library (snappy
NAMES ${SNAPPY_LIB}
HINTS
${snappy_PC_LIBDIR}
${snappy_PC_LIBRARY_DIRS}
NO_DEFAULT_PATH)
find_path (SNAPPY_INCLUDE_DIR
NAMES snappy.h
HINTS
${snappy_PC_INCLUDEDIR}
${snappy_PC_INCLUDEDIRS}
NO_DEFAULT_PATH)

View File

@@ -1,19 +0,0 @@
find_package (PkgConfig)
if (PKG_CONFIG_FOUND)
# TBD - currently no soci pkgconfig
#pkg_search_module (soci_PC QUIET libsoci_core>=3.2)
endif ()
if(static)
set(SOCI_LIB libsoci.a)
else()
set(SOCI_LIB libsoci_core.so)
endif()
find_library (soci
NAMES ${SOCI_LIB})
find_path (SOCI_INCLUDE_DIR
NAMES soci/soci.h)
message("SOCI FOUND AT: ${SOCI_LIB}")

View File

@@ -1,24 +0,0 @@
find_package (PkgConfig)
if (PKG_CONFIG_FOUND)
pkg_search_module (sqlite_PC QUIET sqlite3>=3.26.0)
endif ()
if(static)
set(SQLITE_LIB libsqlite3.a)
else()
set(SQLITE_LIB sqlite3.so)
endif()
find_library (sqlite3
NAMES ${SQLITE_LIB}
HINTS
${sqlite_PC_LIBDIR}
${sqlite_PC_LIBRARY_DIRS}
NO_DEFAULT_PATH)
find_path (SQLITE_INCLUDE_DIR
NAMES sqlite3.h
HINTS
${sqlite_PC_INCLUDEDIR}
${sqlite_PC_INCLUDEDIRS}
NO_DEFAULT_PATH)

View File

@@ -1,163 +0,0 @@
#[===================================================================[
NIH dep: libarchive
#]===================================================================]
option (local_libarchive "use local build of libarchive." OFF)
add_library (archive_lib UNKNOWN IMPORTED GLOBAL)
if (NOT local_libarchive)
if (NOT WIN32)
find_package(libarchive_pc REQUIRED)
endif ()
if (archive)
message (STATUS "Found libarchive using pkg-config. Using ${archive}.")
set_target_properties (archive_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${archive}
IMPORTED_LOCATION_RELEASE
${archive}
INTERFACE_INCLUDE_DIRECTORIES
${LIBARCHIVE_INCLUDE_DIR})
# pkg-config can return extra info for static lib linking
# this is probably needed/useful generally, but apply
# to APPLE for now (mostly for homebrew)
if (APPLE AND static AND libarchive_PC_STATIC_LIBRARIES)
message(STATUS "NOTE: libarchive static libs: ${libarchive_PC_STATIC_LIBRARIES}")
# also, APPLE seems to need iconv...maybe linux does too (TBD)
target_link_libraries (archive_lib
INTERFACE iconv ${libarchive_PC_STATIC_LIBRARIES})
endif ()
else ()
## now try searching using the minimal find module that cmake provides
find_package(LibArchive 3.4.3 QUIET)
if (LibArchive_FOUND)
if (static)
# find module doesn't find static libs currently, so we re-search
get_filename_component(_loc ${LibArchive_LIBRARY} DIRECTORY)
find_library(_la_static
NAMES libarchive.a archive_static.lib archive.lib
PATHS ${_loc})
if (_la_static)
set (_la_lib ${_la_static})
else ()
message (WARNING "unable to find libarchive static lib - switching to local build")
set (local_libarchive ON CACHE BOOL "" FORCE)
endif ()
else ()
set (_la_lib ${LibArchive_LIBRARY})
endif ()
if (NOT local_libarchive)
message (STATUS "Found libarchive using module/config. Using ${_la_lib}.")
set_target_properties (archive_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${_la_lib}
IMPORTED_LOCATION_RELEASE
${_la_lib}
INTERFACE_INCLUDE_DIRECTORIES
${LibArchive_INCLUDE_DIRS})
endif ()
else ()
set (local_libarchive ON CACHE BOOL "" FORCE)
endif ()
endif ()
endif()
if (local_libarchive)
set (lib_post "")
if (MSVC)
set (lib_post "_static")
endif ()
ExternalProject_Add (libarchive
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/libarchive/libarchive.git
GIT_TAG v3.4.3
CMAKE_ARGS
# passing the compiler seems to be needed for windows CI, sadly
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DENABLE_LZ4=ON
-ULZ4_*
-DLZ4_INCLUDE_DIR=$<JOIN:$<TARGET_PROPERTY:lz4_lib,INTERFACE_INCLUDE_DIRECTORIES>,::>
# because we are building a static lib, this lz4 library doesn't
# actually matter since you can't generally link static libs to other static
# libs. The include files are needed, but the library itself is not (until
# we link our application, at which point we use the lz4 we built above).
# nonetheless, we need to provide a library to libarchive else it will
# NOT include lz4 support when configuring
-DLZ4_LIBRARY=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:lz4_lib,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:lz4_lib,IMPORTED_LOCATION_RELEASE>>
-DENABLE_WERROR=OFF
-DENABLE_TAR=OFF
-DENABLE_TAR_SHARED=OFF
-DENABLE_INSTALL=ON
-DENABLE_NETTLE=OFF
-DENABLE_OPENSSL=OFF
-DENABLE_LZO=OFF
-DENABLE_LZMA=OFF
-DENABLE_ZLIB=OFF
-DENABLE_BZip2=OFF
-DENABLE_LIBXML2=OFF
-DENABLE_EXPAT=OFF
-DENABLE_PCREPOSIX=OFF
-DENABLE_LibGCC=OFF
-DENABLE_CNG=OFF
-DENABLE_CPIO=OFF
-DENABLE_CPIO_SHARED=OFF
-DENABLE_CAT=OFF
-DENABLE_CAT_SHARED=OFF
-DENABLE_XATTR=OFF
-DENABLE_ACL=OFF
-DENABLE_ICONV=OFF
-DENABLE_TEST=OFF
-DENABLE_COVERAGE=OFF
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LIST_SEPARATOR ::
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--target archive_static
--parallel ${ep_procs}
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/libarchive/$<CONFIG>/${ep_lib_prefix}archive${lib_post}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/libarchive
>
TEST_COMMAND ""
INSTALL_COMMAND ""
DEPENDS lz4_lib
BUILD_BYPRODUCTS
<BINARY_DIR>/libarchive/${ep_lib_prefix}archive${lib_post}${ep_lib_suffix}
<BINARY_DIR>/libarchive/${ep_lib_prefix}archive${lib_post}_d${ep_lib_suffix}
)
ExternalProject_Get_Property (libarchive BINARY_DIR)
ExternalProject_Get_Property (libarchive SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (libarchive)
endif ()
file (MAKE_DIRECTORY ${SOURCE_DIR}/libarchive)
set_target_properties (archive_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/libarchive/${ep_lib_prefix}archive${lib_post}_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/libarchive/${ep_lib_prefix}archive${lib_post}${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/libarchive
INTERFACE_COMPILE_DEFINITIONS
LIBARCHIVE_STATIC)
endif()
add_dependencies (archive_lib libarchive)
target_link_libraries (archive_lib INTERFACE lz4_lib)
target_link_libraries (ripple_libs INTERFACE archive_lib)
exclude_if_included (libarchive)
exclude_if_included (archive_lib)

View File

@@ -1,79 +0,0 @@
#[===================================================================[
NIH dep: lz4
#]===================================================================]
add_library (lz4_lib STATIC IMPORTED GLOBAL)
if (NOT WIN32)
find_package(lz4)
endif()
if(lz4)
set_target_properties (lz4_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${lz4}
IMPORTED_LOCATION_RELEASE
${lz4}
INTERFACE_INCLUDE_DIRECTORIES
${LZ4_INCLUDE_DIR})
else()
ExternalProject_Add (lz4
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/lz4/lz4.git
GIT_TAG v1.9.2
SOURCE_SUBDIR contrib/cmake_unofficial
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DBUILD_STATIC_LIBS=ON
-DBUILD_SHARED_LIBS=OFF
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--target lz4_static
--parallel ${ep_procs}
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}lz4$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>
>
TEST_COMMAND ""
INSTALL_COMMAND ""
BUILD_BYPRODUCTS
<BINARY_DIR>/${ep_lib_prefix}lz4${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}lz4_d${ep_lib_suffix}
)
ExternalProject_Get_Property (lz4 BINARY_DIR)
ExternalProject_Get_Property (lz4 SOURCE_DIR)
file (MAKE_DIRECTORY ${SOURCE_DIR}/lz4)
set_target_properties (lz4_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/${ep_lib_prefix}lz4_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/${ep_lib_prefix}lz4${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/lib)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (lz4)
endif ()
add_dependencies (lz4_lib lz4)
target_link_libraries (ripple_libs INTERFACE lz4_lib)
exclude_if_included (lz4)
endif()
exclude_if_included (lz4_lib)

View File

@@ -1,31 +0,0 @@
#[===================================================================[
NIH dep: nudb
NuDB is header-only, thus is an INTERFACE lib in CMake.
TODO: move the library definition into NuDB repo and add
proper targets and export/install
#]===================================================================]
if (is_root_project) # NuDB not needed in the case of xrpl_core inclusion build
add_library (nudb INTERFACE)
FetchContent_Declare(
nudb_src
GIT_REPOSITORY https://github.com/CPPAlliance/NuDB.git
GIT_TAG 2.0.5
)
FetchContent_GetProperties(nudb_src)
if(NOT nudb_src_POPULATED)
message (STATUS "Pausing to download NuDB...")
FetchContent_Populate(nudb_src)
endif()
file(TO_CMAKE_PATH "${nudb_src_SOURCE_DIR}" nudb_src_SOURCE_DIR)
# specify as system includes so as to avoid warnings
target_include_directories (nudb SYSTEM INTERFACE ${nudb_src_SOURCE_DIR}/include)
target_link_libraries (nudb
INTERFACE
Boost::thread
Boost::system)
add_library (NIH::nudb ALIAS nudb)
target_link_libraries (ripple_libs INTERFACE NIH::nudb)
endif ()

View File

@@ -1,48 +0,0 @@
#[===================================================================[
NIH dep: openssl
#]===================================================================]
#[===============================================[
OPENSSL_ROOT_DIR is the only variable that
FindOpenSSL honors for locating, so convert any
OPENSSL_ROOT vars to this
#]===============================================]
if (NOT DEFINED OPENSSL_ROOT_DIR)
if (DEFINED ENV{OPENSSL_ROOT})
set (OPENSSL_ROOT_DIR $ENV{OPENSSL_ROOT})
elseif (HOMEBREW)
execute_process (COMMAND ${HOMEBREW} --prefix openssl
OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
endif ()
file (TO_CMAKE_PATH "${OPENSSL_ROOT_DIR}" OPENSSL_ROOT_DIR)
endif ()
if (static)
set (OPENSSL_USE_STATIC_LIBS ON)
endif ()
set (OPENSSL_MSVC_STATIC_RT ON)
find_package (OpenSSL 1.1.1 REQUIRED)
target_link_libraries (ripple_libs
INTERFACE
OpenSSL::SSL
OpenSSL::Crypto)
# disable SSLv2...this can also be done when building/configuring OpenSSL
set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2)
#[=========================================================[
https://gitlab.kitware.com/cmake/cmake/issues/16885
depending on how openssl is built, it might depend
on zlib. In fact, the openssl find package should
figure this out for us, but it does not currently...
so let's add zlib ourselves to the lib list
TODO: investigate linking to static zlib for static
build option
#]=========================================================]
find_package (ZLIB)
set (has_zlib FALSE)
if (TARGET ZLIB::ZLIB)
set_target_properties(OpenSSL::Crypto PROPERTIES
INTERFACE_LINK_LIBRARIES ZLIB::ZLIB)
set (has_zlib TRUE)
endif ()

View File

@@ -1,70 +0,0 @@
if(reporting)
find_package(PostgreSQL)
if(NOT PostgreSQL_FOUND)
message("find_package did not find postgres")
find_library(postgres NAMES pq libpq libpq-dev pq-dev postgresql-devel)
find_path(libpq-fe NAMES libpq-fe.h PATH_SUFFIXES postgresql pgsql include)
if(NOT libpq-fe_FOUND OR NOT postgres_FOUND)
message("No system installed Postgres found. Will build")
add_library(postgres SHARED IMPORTED GLOBAL)
add_library(pgport SHARED IMPORTED GLOBAL)
add_library(pgcommon SHARED IMPORTED GLOBAL)
ExternalProject_Add(postgres_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/postgres/postgres.git
GIT_TAG REL_14_5
CONFIGURE_COMMAND ./configure --without-readline > /dev/null
BUILD_COMMAND ${CMAKE_COMMAND} -E env --unset=MAKELEVEL make
UPDATE_COMMAND ""
BUILD_IN_SOURCE 1
INSTALL_COMMAND ""
BUILD_BYPRODUCTS
<BINARY_DIR>/src/interfaces/libpq/${ep_lib_prefix}pq.a
<BINARY_DIR>/src/common/${ep_lib_prefix}pgcommon.a
<BINARY_DIR>/src/port/${ep_lib_prefix}pgport.a
LOG_BUILD TRUE
)
ExternalProject_Get_Property (postgres_src SOURCE_DIR)
ExternalProject_Get_Property (postgres_src BINARY_DIR)
set (postgres_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${postgres_src_SOURCE_DIR})
list(APPEND INCLUDE_DIRS
${SOURCE_DIR}/src/include
${SOURCE_DIR}/src/interfaces/libpq
)
set_target_properties(postgres PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/src/interfaces/libpq/${ep_lib_prefix}pq.a
INTERFACE_INCLUDE_DIRECTORIES
"${INCLUDE_DIRS}"
)
set_target_properties(pgcommon PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/src/common/${ep_lib_prefix}pgcommon.a
INTERFACE_INCLUDE_DIRECTORIES
"${INCLUDE_DIRS}"
)
set_target_properties(pgport PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/src/port/${ep_lib_prefix}pgport.a
INTERFACE_INCLUDE_DIRECTORIES
"${INCLUDE_DIRS}"
)
add_dependencies(postgres postgres_src)
add_dependencies(pgcommon postgres_src)
add_dependencies(pgport postgres_src)
file(TO_CMAKE_PATH "${postgres_src_SOURCE_DIR}" postgres_src_SOURCE_DIR)
target_link_libraries(ripple_libs INTERFACE postgres pgcommon pgport)
else()
message("Found system installed Postgres via find_libary")
target_include_directories(ripple_libs INTERFACE ${libpq-fe})
target_link_libraries(ripple_libs INTERFACE ${postgres})
endif()
else()
message("Found system installed Postgres via find_package")
target_include_directories(ripple_libs INTERFACE ${PostgreSQL_INCLUDE_DIRS})
target_link_libraries(ripple_libs INTERFACE ${PostgreSQL_LIBRARIES})
endif()
endif()

View File

@@ -1,155 +1,22 @@
#[===================================================================[
import protobuf (lib and compiler) and create a lib
from our proto message definitions. If the system protobuf
is not found, fallback on EP to download and build a version
from official source.
#]===================================================================]
find_package(Protobuf 3.8)
if (static)
set (Protobuf_USE_STATIC_LIBS ON)
endif ()
find_package (Protobuf 3.8)
if (is_multiconfig)
set(protobuf_protoc_lib ${Protobuf_PROTOC_LIBRARIES})
else ()
string(TOUPPER ${CMAKE_BUILD_TYPE} upper_cmake_build_type)
set(protobuf_protoc_lib ${Protobuf_PROTOC_LIBRARY_${upper_cmake_build_type}})
endif ()
if (local_protobuf OR NOT (Protobuf_FOUND AND Protobuf_PROTOC_EXECUTABLE AND protobuf_protoc_lib))
include (GNUInstallDirs)
message (STATUS "using local protobuf build.")
set(protobuf_reqs Protobuf_PROTOC_EXECUTABLE protobuf_protoc_lib)
foreach(lib ${protobuf_reqs})
if(NOT ${lib})
message(STATUS "Couldn't find ${lib}")
endif()
endforeach()
if (WIN32)
# protobuf prepends lib even on windows
set (pbuf_lib_pre "lib")
else ()
set (pbuf_lib_pre ${ep_lib_prefix})
endif ()
# for the external project build of protobuf, we currently ignore the
# static option and always build static libs here. This is consistent
# with our other EP builds. Dynamic libs in an EP would add complexity
# because we'd need to get them into the runtime path, and probably
# install them.
ExternalProject_Add (protobuf_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/protocolbuffers/protobuf.git
GIT_TAG v3.8.0
SOURCE_SUBDIR cmake
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
-DCMAKE_INSTALL_PREFIX=<BINARY_DIR>/_installed_
-Dprotobuf_BUILD_TESTS=OFF
-Dprotobuf_BUILD_EXAMPLES=OFF
-Dprotobuf_BUILD_PROTOC_BINARIES=ON
-Dprotobuf_MSVC_STATIC_RUNTIME=ON
-DBUILD_SHARED_LIBS=OFF
-Dprotobuf_BUILD_SHARED_LIBS=OFF
-DCMAKE_DEBUG_POSTFIX=_d
-Dprotobuf_DEBUG_POSTFIX=_d
-Dprotobuf_WITH_ZLIB=$<IF:$<BOOL:${has_zlib}>,ON,OFF>
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
$<$<BOOL:${unity}>:-DCMAKE_UNITY_BUILD=ON}>
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
$<$<BOOL:${MSVC}>:
"-DCMAKE_CXX_FLAGS=-GR -Gd -fp:precise -FS -EHa -MP"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
TEST_COMMAND ""
INSTALL_COMMAND
${CMAKE_COMMAND} -E env --unset=DESTDIR ${CMAKE_COMMAND} --build . --config $<CONFIG> --target install
BUILD_BYPRODUCTS
<BINARY_DIR>/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protobuf${ep_lib_suffix}
<BINARY_DIR>/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protobuf_d${ep_lib_suffix}
<BINARY_DIR>/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protoc${ep_lib_suffix}
<BINARY_DIR>/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protoc_d${ep_lib_suffix}
<BINARY_DIR>/_installed_/bin/protoc${CMAKE_EXECUTABLE_SUFFIX}
)
ExternalProject_Get_Property (protobuf_src BINARY_DIR)
ExternalProject_Get_Property (protobuf_src SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (protobuf_src)
endif ()
exclude_if_included (protobuf_src)
file(MAKE_DIRECTORY ${CMAKE_BINARY_DIR}/proto_gen)
set(ccbd ${CMAKE_CURRENT_BINARY_DIR})
set(CMAKE_CURRENT_BINARY_DIR ${CMAKE_BINARY_DIR}/proto_gen)
protobuf_generate_cpp(PROTO_SRCS PROTO_HDRS src/ripple/proto/ripple.proto)
set(CMAKE_CURRENT_BINARY_DIR ${ccbd})
if (NOT TARGET protobuf::libprotobuf)
add_library (protobuf::libprotobuf STATIC IMPORTED GLOBAL)
endif ()
file (MAKE_DIRECTORY ${BINARY_DIR}/_installed_/include)
set_target_properties (protobuf::libprotobuf PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protobuf_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protobuf${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${BINARY_DIR}/_installed_/include)
add_dependencies (protobuf::libprotobuf protobuf_src)
exclude_if_included (protobuf::libprotobuf)
if (NOT TARGET protobuf::libprotoc)
add_library (protobuf::libprotoc STATIC IMPORTED GLOBAL)
endif ()
set_target_properties (protobuf::libprotoc PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protoc_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/_installed_/${CMAKE_INSTALL_LIBDIR}/${pbuf_lib_pre}protoc${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${BINARY_DIR}/_installed_/include)
add_dependencies (protobuf::libprotoc protobuf_src)
exclude_if_included (protobuf::libprotoc)
if (NOT TARGET protobuf::protoc)
add_executable (protobuf::protoc IMPORTED)
exclude_if_included (protobuf::protoc)
endif ()
set_target_properties (protobuf::protoc PROPERTIES
IMPORTED_LOCATION "${BINARY_DIR}/_installed_/bin/protoc${CMAKE_EXECUTABLE_SUFFIX}")
add_dependencies (protobuf::protoc protobuf_src)
else ()
if (NOT TARGET protobuf::protoc)
if (EXISTS "${Protobuf_PROTOC_EXECUTABLE}")
add_executable (protobuf::protoc IMPORTED)
set_target_properties (protobuf::protoc PROPERTIES
IMPORTED_LOCATION "${Protobuf_PROTOC_EXECUTABLE}")
else ()
message (FATAL_ERROR "Protobuf import failed")
endif ()
endif ()
endif ()
file (MAKE_DIRECTORY ${CMAKE_BINARY_DIR}/proto_gen)
set (save_CBD ${CMAKE_CURRENT_BINARY_DIR})
set (CMAKE_CURRENT_BINARY_DIR ${CMAKE_BINARY_DIR}/proto_gen)
protobuf_generate_cpp (
PROTO_SRCS
PROTO_HDRS
src/ripple/proto/ripple.proto)
set (CMAKE_CURRENT_BINARY_DIR ${save_CBD})
add_library (pbufs STATIC ${PROTO_SRCS} ${PROTO_HDRS})
target_include_directories (pbufs PRIVATE src)
target_include_directories (pbufs
SYSTEM PUBLIC ${CMAKE_BINARY_DIR}/proto_gen)
target_link_libraries (pbufs protobuf::libprotobuf)
target_compile_options (pbufs
add_library(pbufs STATIC ${PROTO_SRCS} ${PROTO_HDRS})
target_include_directories(pbufs SYSTEM PUBLIC
${CMAKE_BINARY_DIR}/proto_gen
${CMAKE_BINARY_DIR}/proto_gen/src/ripple/proto
)
target_link_libraries(pbufs protobuf::libprotobuf)
target_compile_options(pbufs
PUBLIC
$<$<BOOL:${is_xcode}>:
$<$<BOOL:${XCODE}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>)
add_library (Ripple::pbufs ALIAS pbufs)
target_link_libraries (ripple_libs INTERFACE Ripple::pbufs)
exclude_if_included (pbufs)
>
)
add_library(Ripple::pbufs ALIAS pbufs)

View File

@@ -1,177 +0,0 @@
#[===================================================================[
NIH dep: rocksdb
#]===================================================================]
add_library (rocksdb_lib UNKNOWN IMPORTED GLOBAL)
set_target_properties (rocksdb_lib
PROPERTIES INTERFACE_COMPILE_DEFINITIONS RIPPLE_ROCKSDB_AVAILABLE=1)
option (local_rocksdb "use local build of rocksdb." OFF)
if (NOT local_rocksdb)
find_package (RocksDB 6.27 QUIET CONFIG)
if (TARGET RocksDB::rocksdb)
message (STATUS "Found RocksDB using config.")
get_target_property (_rockslib_l RocksDB::rocksdb IMPORTED_LOCATION_DEBUG)
if (_rockslib_l)
set_target_properties (rocksdb_lib PROPERTIES IMPORTED_LOCATION_DEBUG ${_rockslib_l})
endif ()
get_target_property (_rockslib_l RocksDB::rocksdb IMPORTED_LOCATION_RELEASE)
if (_rockslib_l)
set_target_properties (rocksdb_lib PROPERTIES IMPORTED_LOCATION_RELEASE ${_rockslib_l})
endif ()
get_target_property (_rockslib_l RocksDB::rocksdb IMPORTED_LOCATION)
if (_rockslib_l)
set_target_properties (rocksdb_lib PROPERTIES IMPORTED_LOCATION ${_rockslib_l})
endif ()
get_target_property (_rockslib_i RocksDB::rocksdb INTERFACE_INCLUDE_DIRECTORIES)
if (_rockslib_i)
set_target_properties (rocksdb_lib PROPERTIES INTERFACE_INCLUDE_DIRECTORIES ${_rockslib_i})
endif ()
target_link_libraries (ripple_libs INTERFACE RocksDB::rocksdb)
else ()
# using a find module with rocksdb is difficult because
# you have no idea how it was configured (transitive dependencies).
# the code below will generally find rocksdb using the module, but
# will then result in linker errors for static linkage since the
# transitive dependencies are unknown. force local build here for now, but leave the code as
# a placeholder for future investigation.
if (static)
set (local_rocksdb ON CACHE BOOL "" FORCE)
# TBD if there is some way to extract transitive deps..then:
#set (RocksDB_USE_STATIC ON)
else ()
find_package (RocksDB 6.27 MODULE)
if (ROCKSDB_FOUND)
if (RocksDB_LIBRARY_DEBUG)
set_target_properties (rocksdb_lib PROPERTIES IMPORTED_LOCATION_DEBUG ${RocksDB_LIBRARY_DEBUG})
endif ()
set_target_properties (rocksdb_lib PROPERTIES IMPORTED_LOCATION_RELEASE ${RocksDB_LIBRARIES})
set_target_properties (rocksdb_lib PROPERTIES IMPORTED_LOCATION ${RocksDB_LIBRARIES})
set_target_properties (rocksdb_lib PROPERTIES INTERFACE_INCLUDE_DIRECTORIES ${RocksDB_INCLUDE_DIRS})
else ()
set (local_rocksdb ON CACHE BOOL "" FORCE)
endif ()
endif ()
endif ()
endif ()
if (local_rocksdb)
message (STATUS "Using local build of RocksDB.")
ExternalProject_Add (rocksdb
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/facebook/rocksdb.git
GIT_TAG v6.27.3
PATCH_COMMAND
# only used by windows build
${CMAKE_COMMAND} -E copy_if_different
${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/rocks_thirdparty.inc
<SOURCE_DIR>/thirdparty.inc
COMMAND
# fixup their build version file to keep the values
# from changing always
${CMAKE_COMMAND} -E copy_if_different
${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/rocksdb_build_version.cc.in
<SOURCE_DIR>/util/build_version.cc.in
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
$<$<BOOL:${unity}>:-DCMAKE_UNITY_BUILD=ON}>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DBUILD_SHARED_LIBS=OFF
-DCMAKE_POSITION_INDEPENDENT_CODE=ON
-DWITH_JEMALLOC=$<IF:$<BOOL:${jemalloc}>,ON,OFF>
-DWITH_SNAPPY=ON
-DWITH_LZ4=ON
-DWITH_ZLIB=OFF
-DUSE_RTTI=ON
-DWITH_ZSTD=OFF
-DWITH_GFLAGS=OFF
-DWITH_BZ2=OFF
-ULZ4_*
-Ulz4_*
-Dlz4_INCLUDE_DIRS=$<JOIN:$<TARGET_PROPERTY:lz4_lib,INTERFACE_INCLUDE_DIRECTORIES>,::>
-Dlz4_LIBRARIES=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:lz4_lib,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:lz4_lib,IMPORTED_LOCATION_RELEASE>>
-Dlz4_FOUND=ON
-USNAPPY_*
-Usnappy_*
-USnappy_*
-Dsnappy_INCLUDE_DIRS=$<JOIN:$<TARGET_PROPERTY:snappy_lib,INTERFACE_INCLUDE_DIRECTORIES>,::>
-Dsnappy_LIBRARIES=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:snappy_lib,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:snappy_lib,IMPORTED_LOCATION_RELEASE>>
-Dsnappy_FOUND=ON
-DSnappy_INCLUDE_DIRS=$<JOIN:$<TARGET_PROPERTY:snappy_lib,INTERFACE_INCLUDE_DIRECTORIES>,::>
-DSnappy_LIBRARIES=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:snappy_lib,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:snappy_lib,IMPORTED_LOCATION_RELEASE>>
-DSnappy_FOUND=ON
-DWITH_MD_LIBRARY=OFF
-DWITH_RUNTIME_DEBUG=$<IF:$<CONFIG:Debug>,ON,OFF>
-DFAIL_ON_WARNINGS=OFF
-DWITH_ASAN=OFF
-DWITH_TSAN=OFF
-DWITH_UBSAN=OFF
-DWITH_NUMA=OFF
-DWITH_TBB=OFF
-DWITH_WINDOWS_UTF8_FILENAMES=OFF
-DWITH_XPRESS=OFF
-DPORTABLE=ON
-DFORCE_SSE42=OFF
-DDISABLE_STALL_NOTIF=OFF
-DOPTDBG=ON
-DROCKSDB_LITE=OFF
-DWITH_FALLOCATE=ON
-DWITH_LIBRADOS=OFF
-DWITH_JNI=OFF
-DROCKSDB_INSTALL_ON_WINDOWS=OFF
-DWITH_TESTS=OFF
-DWITH_TOOLS=OFF
$<$<BOOL:${MSVC}>:
"-DCMAKE_CXX_FLAGS=-GR -Gd -fp:precise -FS -MP /DNDEBUG"
>
$<$<NOT:$<BOOL:${MSVC}>>:
"-DCMAKE_CXX_FLAGS=-DNDEBUG"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}rocksdb$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>
>
LIST_SEPARATOR ::
TEST_COMMAND ""
INSTALL_COMMAND ""
DEPENDS snappy_lib lz4_lib
BUILD_BYPRODUCTS
<BINARY_DIR>/${ep_lib_prefix}rocksdb${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}rocksdb_d${ep_lib_suffix}
)
ExternalProject_Get_Property (rocksdb BINARY_DIR)
ExternalProject_Get_Property (rocksdb SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (rocksdb)
endif ()
file (MAKE_DIRECTORY ${SOURCE_DIR}/include)
set_target_properties (rocksdb_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/${ep_lib_prefix}rocksdb_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/${ep_lib_prefix}rocksdb${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include)
add_dependencies (rocksdb_lib rocksdb)
exclude_if_included (rocksdb)
endif ()
target_link_libraries (rocksdb_lib
INTERFACE
snappy_lib
lz4_lib
$<$<BOOL:${MSVC}>:rpcrt4>)
exclude_if_included (rocksdb_lib)
target_link_libraries (ripple_libs INTERFACE rocksdb_lib)

View File

@@ -1,58 +0,0 @@
#[===================================================================[
NIH dep: secp256k1
#]===================================================================]
add_library (secp256k1_lib STATIC IMPORTED GLOBAL)
if (NOT WIN32)
find_package(secp256k1)
endif()
if(secp256k1)
set_target_properties (secp256k1_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${secp256k1}
IMPORTED_LOCATION_RELEASE
${secp256k1}
INTERFACE_INCLUDE_DIRECTORIES
${SECP256K1_INCLUDE_DIR})
add_library (secp256k1 ALIAS secp256k1_lib)
add_library (NIH::secp256k1 ALIAS secp256k1_lib)
else()
set(INSTALL_SECP256K1 true)
add_library (secp256k1 STATIC
src/secp256k1/src/secp256k1.c)
target_compile_definitions (secp256k1
PRIVATE
USE_NUM_NONE
USE_FIELD_10X26
USE_FIELD_INV_BUILTIN
USE_SCALAR_8X32
USE_SCALAR_INV_BUILTIN)
target_include_directories (secp256k1
PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
$<INSTALL_INTERFACE:include>
PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/src/secp256k1)
target_compile_options (secp256k1
PRIVATE
$<$<BOOL:${MSVC}>:-wd4319>
$<$<NOT:$<BOOL:${MSVC}>>:
-Wno-deprecated-declarations
-Wno-unused-function
>
$<$<BOOL:${is_gcc}>:-Wno-nonnull-compare>)
target_link_libraries (ripple_libs INTERFACE NIH::secp256k1)
#[===========================[
headers installation
#]===========================]
install (
FILES
src/secp256k1/include/secp256k1.h
DESTINATION include/secp256k1/include)
add_library (NIH::secp256k1 ALIAS secp256k1)
endif()

View File

@@ -1,77 +0,0 @@
#[===================================================================[
NIH dep: snappy
#]===================================================================]
add_library (snappy_lib STATIC IMPORTED GLOBAL)
if (NOT WIN32)
find_package(snappy)
endif()
if(snappy)
set_target_properties (snappy_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${snappy}
IMPORTED_LOCATION_RELEASE
${snappy}
INTERFACE_INCLUDE_DIRECTORIES
${SNAPPY_INCLUDE_DIR})
else()
ExternalProject_Add (snappy
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/google/snappy.git
GIT_TAG 1.1.7
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DBUILD_SHARED_LIBS=OFF
-DCMAKE_POSITION_INDEPENDENT_CODE=ON
-DSNAPPY_BUILD_TESTS=OFF
$<$<BOOL:${MSVC}>:
"-DCMAKE_CXX_FLAGS=-GR -Gd -fp:precise -FS -EHa -MP"
"-DCMAKE_CXX_FLAGS_DEBUG=-MTd"
"-DCMAKE_CXX_FLAGS_RELEASE=-MT"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}snappy$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>
>
TEST_COMMAND ""
INSTALL_COMMAND
${CMAKE_COMMAND} -E copy_if_different <BINARY_DIR>/config.h <BINARY_DIR>/snappy-stubs-public.h <SOURCE_DIR>
BUILD_BYPRODUCTS
<BINARY_DIR>/${ep_lib_prefix}snappy${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}snappy_d${ep_lib_suffix}
)
ExternalProject_Get_Property (snappy BINARY_DIR)
ExternalProject_Get_Property (snappy SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (snappy)
endif ()
file (MAKE_DIRECTORY ${SOURCE_DIR}/snappy)
set_target_properties (snappy_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/${ep_lib_prefix}snappy_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/${ep_lib_prefix}snappy${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR})
endif()
add_dependencies (snappy_lib snappy)
target_link_libraries (ripple_libs INTERFACE snappy_lib)
exclude_if_included (snappy)
exclude_if_included (snappy_lib)
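
One detail worth calling out in the snappy recipe (and repeated in sqlite, soci, and grpc below): with multi-config generators such as Visual Studio or Xcode, the built library lands in <BINARY_DIR>/$<CONFIG>/, so the extra copy step moves it to a fixed path that IMPORTED_LOCATION_* and BUILD_BYPRODUCTS can name without knowing the configuration. The idiom in isolation (library name and URL illustrative):

ExternalProject_Add (foo
  GIT_REPOSITORY https://example.invalid/foo.git   # placeholder URL
  BUILD_COMMAND
    ${CMAKE_COMMAND}
    --build .
    --config $<CONFIG>
    $<$<BOOL:${is_multiconfig}>:
      COMMAND
        ${CMAKE_COMMAND} -E copy
        <BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}foo$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
        <BINARY_DIR>
      >
  INSTALL_COMMAND ""
  BUILD_BYPRODUCTS
    <BINARY_DIR>/${ep_lib_prefix}foo${ep_lib_suffix})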

View File

@@ -1,165 +0,0 @@
#[===================================================================[
NIH dep: soci
#]===================================================================]
foreach (_comp core empty sqlite3)
add_library ("soci_${_comp}" STATIC IMPORTED GLOBAL)
endforeach ()
if (NOT WIN32)
find_package(soci)
endif()
if (soci)
foreach (_comp core empty sqlite3)
set_target_properties ("soci_${_comp}" PROPERTIES
IMPORTED_LOCATION_DEBUG
${soci}
IMPORTED_LOCATION_RELEASE
${soci}
INTERFACE_INCLUDE_DIRECTORIES
${SOCI_INCLUDE_DIR})
endforeach ()
else()
set (soci_lib_pre ${ep_lib_prefix})
set (soci_lib_post "")
if (WIN32)
# for some reason soci on windows still prepends lib (non-standard)
set (soci_lib_pre lib)
# this version in the name might change if/when we change versions of soci
set (soci_lib_post "_4_0")
endif ()
get_target_property (_boost_incs Boost::date_time INTERFACE_INCLUDE_DIRECTORIES)
get_target_property (_boost_dt Boost::date_time IMPORTED_LOCATION)
if (NOT _boost_dt)
get_target_property (_boost_dt Boost::date_time IMPORTED_LOCATION_RELEASE)
endif ()
if (NOT _boost_dt)
get_target_property (_boost_dt Boost::date_time IMPORTED_LOCATION_DEBUG)
endif ()
ExternalProject_Add (soci
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/SOCI/soci.git
GIT_TAG 04e1870294918d20761736743bb6136314c42dd5
# We had an issue with soci integer range checking for boost::optional
# and needed to remove the exception that SOCI throws in this case.
# This is *probably* a bug in SOCI, but has never been investigated more
# nor reported to the maintainers.
# This cmake script comments out the lines in question.
# This patch process is likely fragile and should be reviewed carefully
# whenever we update the GIT_TAG above.
PATCH_COMMAND
${CMAKE_COMMAND} -D RIPPLED_SOURCE=${CMAKE_CURRENT_SOURCE_DIR}
-P ${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/soci_patch.cmake
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
$<$<BOOL:${CMAKE_TOOLCHAIN_FILE}>:-DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}>
$<$<BOOL:${VCPKG_TARGET_TRIPLET}>:-DVCPKG_TARGET_TRIPLET=${VCPKG_TARGET_TRIPLET}>
$<$<BOOL:${unity}>:-DCMAKE_UNITY_BUILD=ON>
-DCMAKE_PREFIX_PATH=${CMAKE_BINARY_DIR}/sqlite3
-DCMAKE_MODULE_PATH=${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake
-DCMAKE_INCLUDE_PATH=$<JOIN:$<TARGET_PROPERTY:sqlite,INTERFACE_INCLUDE_DIRECTORIES>,::>
-DCMAKE_LIBRARY_PATH=${sqlite_BINARY_DIR}
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DSOCI_CXX_C11=ON
-DSOCI_STATIC=ON
-DSOCI_LIBDIR=lib
-DSOCI_SHARED=OFF
-DSOCI_TESTS=OFF
# hacks to workaround the fact that soci doesn't currently use
# boost imported targets in its cmake. If they switch to
# proper imported targets, this next line can be removed
# (as well as the get_property above that sets _boost_incs)
-DBoost_INCLUDE_DIRS=$<JOIN:${_boost_incs},::>
-DBoost_INCLUDE_DIR=$<JOIN:${_boost_incs},::>
-DBOOST_ROOT=${BOOST_ROOT}
-DWITH_BOOST=ON
-DBoost_FOUND=ON
-DBoost_NO_BOOST_CMAKE=ON
-DBoost_DATE_TIME_FOUND=ON
-DSOCI_HAVE_BOOST=ON
-DSOCI_HAVE_BOOST_DATE_TIME=ON
-DBoost_DATE_TIME_LIBRARY=${_boost_dt}
-DSOCI_DB2=OFF
-DSOCI_FIREBIRD=OFF
-DSOCI_MYSQL=OFF
-DSOCI_ODBC=OFF
-DSOCI_ORACLE=OFF
-DSOCI_POSTGRESQL=OFF
-DSOCI_SQLITE3=ON
-DSQLITE3_INCLUDE_DIR=$<JOIN:$<TARGET_PROPERTY:sqlite,INTERFACE_INCLUDE_DIRECTORIES>,::>
-DSQLITE3_LIBRARY=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:sqlite,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:sqlite,IMPORTED_LOCATION_RELEASE>>
$<$<BOOL:${APPLE}>:-DCMAKE_FIND_FRAMEWORK=LAST>
$<$<BOOL:${MSVC}>:
"-DCMAKE_CXX_FLAGS=-GR -Gd -fp:precise -FS -EHa -MP"
"-DCMAKE_CXX_FLAGS_DEBUG=-MTd"
"-DCMAKE_CXX_FLAGS_RELEASE=-MT"
>
$<$<NOT:$<BOOL:${MSVC}>>:
"-DCMAKE_CXX_FLAGS=-Wno-deprecated-declarations"
>
# SEE: https://github.com/SOCI/soci/issues/640
$<$<AND:$<BOOL:${is_gcc}>,$<VERSION_GREATER_EQUAL:${CMAKE_CXX_COMPILER_VERSION},8>>:
"-DCMAKE_CXX_FLAGS=-Wno-deprecated-declarations -Wno-error=format-overflow -Wno-format-overflow -Wno-error=format-truncation"
>
LIST_SEPARATOR ::
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/lib/$<CONFIG>/${soci_lib_pre}soci_core${soci_lib_post}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/lib/$<CONFIG>/${soci_lib_pre}soci_empty${soci_lib_post}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/lib/$<CONFIG>/${soci_lib_pre}soci_sqlite3${soci_lib_post}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/lib
>
TEST_COMMAND ""
INSTALL_COMMAND ""
DEPENDS sqlite
BUILD_BYPRODUCTS
<BINARY_DIR>/lib/${soci_lib_pre}soci_core${soci_lib_post}${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_core${soci_lib_post}_d${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_empty${soci_lib_post}${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_empty${soci_lib_post}_d${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_sqlite3${soci_lib_post}${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_sqlite3${soci_lib_post}_d${ep_lib_suffix}
)
ExternalProject_Get_Property (soci BINARY_DIR)
ExternalProject_Get_Property (soci SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (soci)
endif ()
file (MAKE_DIRECTORY ${SOURCE_DIR}/include)
file (MAKE_DIRECTORY ${BINARY_DIR}/include)
foreach (_comp core empty sqlite3)
set_target_properties ("soci_${_comp}" PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/lib/${soci_lib_pre}soci_${_comp}${soci_lib_post}_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/lib/${soci_lib_pre}soci_${_comp}${soci_lib_post}${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
"${SOURCE_DIR}/include;${BINARY_DIR}/include")
add_dependencies ("soci_${_comp}" soci) # something has to depend on the ExternalProject to trigger it
target_link_libraries (ripple_libs INTERFACE "soci_${_comp}")
if (NOT _comp STREQUAL "core")
target_link_libraries ("soci_${_comp}" INTERFACE soci_core)
endif ()
endforeach ()
endif()
foreach (_comp core empty sqlite3)
exclude_if_included ("soci_${_comp}")
endforeach ()
exclude_if_included (soci)
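
The PATCH_COMMAND above runs a standalone CMake script (cmake -P) against the freshly cloned SOCI tree. A minimal sketch of the comment-out-a-line technique such a script can use; the file path and string below are illustrative, and the real Builds/CMake/soci_patch.cmake may differ:

# invoked as: cmake -D RIPPLED_SOURCE=<dir> -P soci_patch.cmake
file (READ "src/core/exchange.h" _contents)            # illustrative path
string (REPLACE
  "throw soci_error(\"Value outside of allowed range\");"
  "// throw soci_error(\"Value outside of allowed range\");"
  _contents "${_contents}")
file (WRITE "src/core/exchange.h" "${_contents}")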

View File

@@ -1,93 +0,0 @@
#[===================================================================[
NIH dep: sqlite
#]===================================================================]
add_library (sqlite STATIC IMPORTED GLOBAL)
if (NOT WIN32)
find_package(sqlite)
endif()
if(sqlite3)
set_target_properties (sqlite PROPERTIES
IMPORTED_LOCATION_DEBUG
${sqlite3}
IMPORTED_LOCATION_RELEASE
${sqlite3}
INTERFACE_INCLUDE_DIRECTORIES
${SQLITE_INCLUDE_DIR})
else()
ExternalProject_Add (sqlite3
PREFIX ${nih_cache_path}
# sqlite doesn't use git, but it provides versioned tarballs
URL https://www.sqlite.org/2018/sqlite-amalgamation-3260000.zip
http://www.sqlite.org/2018/sqlite-amalgamation-3260000.zip
https://www2.sqlite.org/2018/sqlite-amalgamation-3260000.zip
http://www2.sqlite.org/2018/sqlite-amalgamation-3260000.zip
# ^^^ version is apparent in the URL: 3260000 => 3.26.0
URL_HASH SHA256=de5dcab133aa339a4cf9e97c40aa6062570086d6085d8f9ad7bc6ddf8a52096e
# Don't need to worry about MITM attacks too much because the download
# is checked against a strong hash
TLS_VERIFY false
# we wrote a very simple CMake file to build sqlite
# so that's what we copy here so that we can build with
# CMake. sqlite doesn't generally provide a build system
# for the single amalgamation source file.
PATCH_COMMAND
${CMAKE_COMMAND} -E copy_if_different
${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/CMake_sqlite3.txt
<SOURCE_DIR>/CMakeLists.txt
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}sqlite3$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>
>
TEST_COMMAND ""
INSTALL_COMMAND ""
BUILD_BYPRODUCTS
<BINARY_DIR>/${ep_lib_prefix}sqlite3${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}sqlite3_d${ep_lib_suffix}
)
ExternalProject_Get_Property (sqlite3 BINARY_DIR)
ExternalProject_Get_Property (sqlite3 SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (sqlite3)
endif ()
set_target_properties (sqlite PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/${ep_lib_prefix}sqlite3_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/${ep_lib_prefix}sqlite3${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR})
add_dependencies (sqlite sqlite3)
exclude_if_included (sqlite3)
endif()
target_link_libraries (sqlite INTERFACE $<$<NOT:$<BOOL:${MSVC}>>:dl>)
target_link_libraries (ripple_libs INTERFACE sqlite)
exclude_if_included (sqlite)
set(sqlite_BINARY_DIR ${BINARY_DIR})
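
The amalgamation file name encodes the release as major*1000000 + minor*10000 + patch*100, which is how 3.26.0 becomes 3260000 in the URL above. Deriving it explicitly (variable names illustrative):

set (SQLITE_MAJOR 3)
set (SQLITE_MINOR 26)
set (SQLITE_PATCH 0)
math (EXPR SQLITE_URL_VERSION
  "${SQLITE_MAJOR} * 1000000 + ${SQLITE_MINOR} * 10000 + ${SQLITE_PATCH} * 100")
# -> 3260000, matching sqlite-amalgamation-3260000.zip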

View File

@@ -1,84 +1 @@
#[===================================================================[
NIH dep: wasmedge: WebAssembly runtime for hooks.
#]===================================================================]
find_package(Curses)
if(CURSES_FOUND)
include_directories(${CURSES_INCLUDE_DIR})
target_link_libraries(ripple_libs INTERFACE ${CURSES_LIBRARY})
else()
message(WARNING "CURSES library not found... (only important for mac builds)")
endif()
find_package(LLVM REQUIRED CONFIG)
message(STATUS "Found LLVM ${LLVM_PACKAGE_VERSION}")
message(STATUS "Using LLVMConfig.cmake in: ${LLVM_DIR}")
ExternalProject_Add (wasmedge_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/WasmEdge/WasmEdge.git
GIT_TAG 0.11.2
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
-DWASMEDGE_BUILD_SHARED_LIB=OFF
-DWASMEDGE_BUILD_STATIC_LIB=ON
-DWASMEDGE_BUILD_AOT_RUNTIME=ON
-DWASMEDGE_FORCE_DISABLE_LTO=ON
-DWASMEDGE_LINK_LLVM_STATIC=ON
-DWASMEDGE_LINK_TOOLS_STATIC=ON
-DWASMEDGE_BUILD_PLUGINS=OFF
-DCMAKE_POSITION_INDEPENDENT_CODE=ON
-DLLVM_DIR=${LLVM_DIR}
-DLLVM_LIBRARY_DIR=${LLVM_LIBRARY_DIR}
-DLLVM_ENABLE_TERMINFO=OFF
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP -march=native"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LOG_CONFIGURE ON
LOG_BUILD ON
COMMAND
pwd
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
$<$<VERSION_GREATER_EQUAL:${CMAKE_VERSION},3.12>:--parallel ${ep_procs}>
TEST_COMMAND ""
INSTALL_COMMAND ""
BUILD_BYPRODUCTS
<BINARY_DIR>/lib/api/libwasmedge.a
)
add_library (wasmedge STATIC IMPORTED GLOBAL)
ExternalProject_Get_Property (wasmedge_src BINARY_DIR)
ExternalProject_Get_Property (wasmedge_src SOURCE_DIR)
set (wasmedge_src_BINARY_DIR "${BINARY_DIR}")
add_dependencies (wasmedge wasmedge_src)
file (MAKE_DIRECTORY "${wasmedge_src_BINARY_DIR}/include/api")
set_target_properties (wasmedge PROPERTIES
IMPORTED_LOCATION_DEBUG
"${wasmedge_src_BINARY_DIR}/lib/api/libwasmedge.a"
IMPORTED_LOCATION_RELEASE
"${wasmedge_src_BINARY_DIR}/lib/api/libwasmedge.a"
INTERFACE_INCLUDE_DIRECTORIES
"${wasmedge_src_BINARY_DIR}/include/api/"
)
target_link_libraries (ripple_libs INTERFACE wasmedge)
#RH NOTE: some compilers / versions of some libraries need these, most don't
find_library(XAR_LIBRARY NAMES xar)
if(XAR_LIBRARY)
target_link_libraries(ripple_libs INTERFACE ${XAR_LIBRARY})
else()
message(WARNING "xar library not found... (only important for mac builds)")
endif()
add_library (NIH::WasmEdge ALIAS wasmedge)
find_package(wasmedge REQUIRED)
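
The directory creation before set_target_properties above is load-bearing: CMake checks at generate time that an imported target's INTERFACE_INCLUDE_DIRECTORIES exist, but the ExternalProject has not run yet at that point, so the include path must be created up front. The idiom in isolation (paths and target name illustrative):

set (_inc "${CMAKE_BINARY_DIR}/wasmedge_ep/include/api")
file (MAKE_DIRECTORY "${_inc}")   # must exist before the generate step
add_library (wasmedge_sketch STATIC IMPORTED GLOBAL)
set_target_properties (wasmedge_sketch PROPERTIES
  IMPORTED_LOCATION "${CMAKE_BINARY_DIR}/wasmedge_ep/lib/api/libwasmedge.a"
  INTERFACE_INCLUDE_DIRECTORIES "${_inc}")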

View File

@@ -1,167 +0,0 @@
if(reporting)
find_library(cassandra NAMES cassandra)
if(NOT cassandra)
message("System installed Cassandra cpp driver not found. Will build")
find_library(zlib NAMES zlib1g-dev zlib-devel zlib z)
if(NOT zlib)
message("zlib not found. will build")
add_library(zlib STATIC IMPORTED GLOBAL)
ExternalProject_Add(zlib_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/madler/zlib.git
GIT_TAG v1.2.12
INSTALL_COMMAND ""
BUILD_BYPRODUCTS <BINARY_DIR>/${ep_lib_prefix}z.a
LOG_BUILD TRUE
LOG_CONFIGURE TRUE
)
ExternalProject_Get_Property (zlib_src SOURCE_DIR)
ExternalProject_Get_Property (zlib_src BINARY_DIR)
set (zlib_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${zlib_src_SOURCE_DIR}/include)
set_target_properties (zlib PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/${ep_lib_prefix}z.a
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include)
add_dependencies(zlib zlib_src)
file(TO_CMAKE_PATH "${zlib_src_SOURCE_DIR}" zlib_src_SOURCE_DIR)
endif()
find_library(krb5 NAMES krb5-dev libkrb5-dev)
if(NOT krb5)
message("krb5 not found. will build")
add_library(krb5 STATIC IMPORTED GLOBAL)
ExternalProject_Add(krb5_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/krb5/krb5.git
GIT_TAG krb5-1.20-final
UPDATE_COMMAND ""
CONFIGURE_COMMAND autoreconf src && CFLAGS=-fcommon ./src/configure --enable-static --disable-shared > /dev/null
BUILD_IN_SOURCE 1
BUILD_COMMAND make
INSTALL_COMMAND ""
BUILD_BYPRODUCTS <SOURCE_DIR>/lib/${ep_lib_prefix}krb5.a
LOG_BUILD TRUE
)
ExternalProject_Get_Property (krb5_src SOURCE_DIR)
ExternalProject_Get_Property (krb5_src BINARY_DIR)
set (krb5_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${krb5_src_SOURCE_DIR}/include)
set_target_properties (krb5 PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/lib/${ep_lib_prefix}krb5.a
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include)
add_dependencies(krb5 krb5_src)
file(TO_CMAKE_PATH "${krb5_src_SOURCE_DIR}" krb5_src_SOURCE_DIR)
endif()
find_library(libuv1 NAMES uv1 libuv1 libuv1-dev libuv1:amd64)
if(NOT libuv1)
message("libuv1 not found, will build")
add_library(libuv1 STATIC IMPORTED GLOBAL)
ExternalProject_Add(libuv_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/libuv/libuv.git
GIT_TAG v1.44.2
INSTALL_COMMAND ""
BUILD_BYPRODUCTS <BINARY_DIR>/${ep_lib_prefix}uv_a.a
LOG_BUILD TRUE
LOG_CONFIGURE TRUE
)
ExternalProject_Get_Property (libuv_src SOURCE_DIR)
ExternalProject_Get_Property (libuv_src BINARY_DIR)
set (libuv_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${libuv_src_SOURCE_DIR}/include)
set_target_properties (libuv1 PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/${ep_lib_prefix}uv_a.a
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include)
add_dependencies(libuv1 libuv_src)
file(TO_CMAKE_PATH "${libuv_src_SOURCE_DIR}" libuv_src_SOURCE_DIR)
endif()
add_library (cassandra STATIC IMPORTED GLOBAL)
ExternalProject_Add(cassandra_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/datastax/cpp-driver.git
GIT_TAG 2.16.2
CMAKE_ARGS
-DLIBUV_ROOT_DIR=${BINARY_DIR}
-DLIBUV_LIBRARY=${BINARY_DIR}/libuv_a.a
-DLIBUV_INCLUDE_DIR=${SOURCE_DIR}/include
-DCASS_BUILD_STATIC=ON
-DCASS_BUILD_SHARED=OFF
-DOPENSSL_ROOT_DIR=/opt/local/openssl
INSTALL_COMMAND ""
BUILD_BYPRODUCTS <BINARY_DIR>/${ep_lib_prefix}cassandra_static.a
LOG_BUILD TRUE
LOG_CONFIGURE TRUE
)
ExternalProject_Get_Property (cassandra_src SOURCE_DIR)
ExternalProject_Get_Property (cassandra_src BINARY_DIR)
set (cassandra_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${cassandra_src_SOURCE_DIR}/include)
set_target_properties (cassandra PROPERTIES
IMPORTED_LOCATION
${BINARY_DIR}/${ep_lib_prefix}cassandra_static.a
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include)
add_dependencies(cassandra cassandra_src)
if(NOT libuv1)
ExternalProject_Add_StepDependencies(cassandra_src build libuv1)
target_link_libraries(cassandra INTERFACE libuv1)
else()
target_link_libraries(cassandra INTERFACE ${libuv1})
endif()
if(NOT krb5)
ExternalProject_Add_StepDependencies(cassandra_src build krb5)
target_link_libraries(cassandra INTERFACE krb5)
else()
target_link_libraries(cassandra INTERFACE ${krb5})
endif()
if(NOT zlib)
ExternalProject_Add_StepDependencies(cassandra_src build zlib)
target_link_libraries(cassandra INTERFACE zlib)
else()
target_link_libraries(cassandra INTERFACE ${zlib})
endif()
file(TO_CMAKE_PATH "${cassandra_src_SOURCE_DIR}" cassandra_src_SOURCE_DIR)
target_link_libraries(ripple_libs INTERFACE cassandra)
else()
message("Found system installed cassandra cpp driver")
find_path(cassandra_includes NAMES cassandra.h REQUIRED)
target_link_libraries (ripple_libs INTERFACE ${cassandra})
target_include_directories(ripple_libs INTERFACE ${cassandra_includes})
endif()
exclude_if_included (cassandra)
endif()
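
Note the asymmetry in the linking branches above: when find_library succeeds, ${libuv1} (and friends) hold an absolute path, so they are linked by value; when the fallback ExternalProject was used, the imported target is linked by name, and ExternalProject_Add_StepDependencies additionally orders the driver's build step after it. The shape of the pattern (names illustrative):

if (NOT syslib)                                       # we built our own
  ExternalProject_Add_StepDependencies (driver_src build syslib)
  target_link_libraries (driver INTERFACE syslib)     # imported target
else ()
  target_link_libraries (driver INTERFACE ${syslib})  # absolute path
endif ()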

View File

@@ -1,18 +0,0 @@
#[===================================================================[
NIH dep: date
the main library is header-only, thus is an INTERFACE lib in CMake.
NOTE: this has been accepted into c++20 so can likely be replaced
when we update to that standard
#]===================================================================]
find_package (date QUIET)
if (NOT TARGET date::date)
FetchContent_Declare(
hh_date_src
GIT_REPOSITORY https://github.com/HowardHinnant/date.git
GIT_TAG fc4cf092f9674f2670fb9177edcdee870399b829
)
FetchContent_MakeAvailable(hh_date_src)
endif ()
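
Either branch yields the same imported target, so a consumer links it uniformly (illustrative consumer line, not necessarily the repo's):

target_link_libraries (ripple_libs INTERFACE date::date)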

View File

@@ -1,319 +1,15 @@
# currently linking to unsecure versions...if we switch, we'll
# need to add ssl as a link dependency to the grpc targets
option (use_secure_grpc "use TLS version of grpc libs." OFF)
if (use_secure_grpc)
set (grpc_suffix "")
else ()
set (grpc_suffix "_unsecure")
endif ()
find_package (gRPC 1.23 CONFIG QUIET)
if (TARGET gRPC::gpr AND NOT local_grpc)
get_target_property (_grpc_l gRPC::gpr IMPORTED_LOCATION_DEBUG)
if (NOT _grpc_l)
get_target_property (_grpc_l gRPC::gpr IMPORTED_LOCATION_RELEASE)
endif ()
if (NOT _grpc_l)
get_target_property (_grpc_l gRPC::gpr IMPORTED_LOCATION)
endif ()
message (STATUS "Found cmake config for gRPC. Using ${_grpc_l}.")
else ()
find_package (PkgConfig QUIET)
if (PKG_CONFIG_FOUND)
pkg_check_modules (grpc QUIET "grpc${grpc_suffix}>=1.25" "grpc++${grpc_suffix}" gpr)
endif ()
if (grpc_FOUND)
message (STATUS "Found gRPC using pkg-config. Using ${grpc_gpr_PREFIX}.")
endif ()
add_executable (gRPC::grpc_cpp_plugin IMPORTED)
exclude_if_included (gRPC::grpc_cpp_plugin)
if (grpc_FOUND AND NOT local_grpc)
# use installed grpc (via pkg-config)
macro (add_imported_grpc libname_)
if (static)
set (_search "${CMAKE_STATIC_LIBRARY_PREFIX}${libname_}${CMAKE_STATIC_LIBRARY_SUFFIX}")
else ()
set (_search "${CMAKE_SHARED_LIBRARY_PREFIX}${libname_}${CMAKE_SHARED_LIBRARY_SUFFIX}")
endif()
find_library(_found_${libname_}
NAMES ${_search}
HINTS ${grpc_LIBRARY_DIRS})
if (_found_${libname_})
message (STATUS "importing ${libname_} as ${_found_${libname_}}")
else ()
message (FATAL_ERROR "using pkg-config for grpc, can't find ${_search}")
endif ()
add_library ("gRPC::${libname_}" STATIC IMPORTED GLOBAL)
set_target_properties ("gRPC::${libname_}" PROPERTIES IMPORTED_LOCATION ${_found_${libname_}})
if (grpc_INCLUDE_DIRS)
set_target_properties ("gRPC::${libname_}" PROPERTIES INTERFACE_INCLUDE_DIRECTORIES ${grpc_INCLUDE_DIRS})
endif ()
target_link_libraries (ripple_libs INTERFACE "gRPC::${libname_}")
exclude_if_included ("gRPC::${libname_}")
endmacro ()
set_target_properties (gRPC::grpc_cpp_plugin PROPERTIES
IMPORTED_LOCATION "${grpc_gpr_PREFIX}/bin/grpc_cpp_plugin${CMAKE_EXECUTABLE_SUFFIX}")
pkg_check_modules (cares QUIET libcares)
if (cares_FOUND)
if (static)
set (_search "${CMAKE_STATIC_LIBRARY_PREFIX}cares${CMAKE_STATIC_LIBRARY_SUFFIX}")
set (_prefix cares_STATIC)
set (_static STATIC)
else ()
set (_search "${CMAKE_SHARED_LIBRARY_PREFIX}cares${CMAKE_SHARED_LIBRARY_SUFFIX}")
set (_prefix cares)
set (_static)
endif()
find_library(_location NAMES ${_search} HINTS ${cares_LIBRARY_DIRS})
if (NOT _location)
message (FATAL_ERROR "using pkg-config for grpc, can't find c-ares")
endif ()
add_library (c-ares::cares ${_static} IMPORTED GLOBAL)
set_target_properties (c-ares::cares PROPERTIES
IMPORTED_LOCATION ${_location}
INTERFACE_INCLUDE_DIRECTORIES "${${_prefix}_INCLUDE_DIRS}"
INTERFACE_LINK_OPTIONS "${${_prefix}_LDFLAGS}"
)
exclude_if_included (c-ares::cares)
else ()
message (FATAL_ERROR "using pkg-config for grpc, can't find c-ares")
endif ()
else ()
#[===========================[
c-ares (grpc requires)
#]===========================]
ExternalProject_Add (c-ares_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/c-ares/c-ares.git
GIT_TAG cares-1_15_0
CMAKE_ARGS
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DCMAKE_INSTALL_PREFIX=<BINARY_DIR>/_installed_
-DCARES_SHARED=OFF
-DCARES_STATIC=ON
-DCARES_STATIC_PIC=ON
-DCARES_INSTALL=ON
-DCARES_MSVC_STATIC_RUNTIME=ON
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
TEST_COMMAND ""
INSTALL_COMMAND
${CMAKE_COMMAND} -E env --unset=DESTDIR ${CMAKE_COMMAND} --build . --config $<CONFIG> --target install
BUILD_BYPRODUCTS
<BINARY_DIR>/_installed_/lib/${ep_lib_prefix}cares${ep_lib_suffix}
<BINARY_DIR>/_installed_/lib/${ep_lib_prefix}cares_d${ep_lib_suffix}
)
exclude_if_included (c-ares_src)
ExternalProject_Get_Property (c-ares_src BINARY_DIR)
set (cares_binary_dir "${BINARY_DIR}")
add_library (c-ares::cares STATIC IMPORTED GLOBAL)
file (MAKE_DIRECTORY ${BINARY_DIR}/_installed_/include)
set_target_properties (c-ares::cares PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/_installed_/lib/${ep_lib_prefix}cares_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/_installed_/lib/${ep_lib_prefix}cares${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${BINARY_DIR}/_installed_/include)
add_dependencies (c-ares::cares c-ares_src)
exclude_if_included (c-ares::cares)
if (NOT has_zlib)
#[===========================[
zlib (grpc requires)
#]===========================]
if (MSVC)
set (zlib_debug_postfix "d") # zlib cmake sets this internally for MSVC, so we really don't have a choice
set (zlib_base "zlibstatic")
else ()
set (zlib_debug_postfix "_d")
set (zlib_base "z")
endif ()
ExternalProject_Add (zlib_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/madler/zlib.git
GIT_TAG v1.2.11
CMAKE_ARGS
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=${zlib_debug_postfix}
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DCMAKE_INSTALL_PREFIX=<BINARY_DIR>/_installed_
-DBUILD_SHARED_LIBS=OFF
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
TEST_COMMAND ""
INSTALL_COMMAND
${CMAKE_COMMAND} -E env --unset=DESTDIR ${CMAKE_COMMAND} --build . --config $<CONFIG> --target install
BUILD_BYPRODUCTS
<BINARY_DIR>/_installed_/lib/${ep_lib_prefix}${zlib_base}${ep_lib_suffix}
<BINARY_DIR>/_installed_/lib/${ep_lib_prefix}${zlib_base}${zlib_debug_postfix}${ep_lib_suffix}
)
exclude_if_included (zlib_src)
ExternalProject_Get_Property (zlib_src BINARY_DIR)
set (zlib_binary_dir "${BINARY_DIR}")
add_library (ZLIB::ZLIB STATIC IMPORTED GLOBAL)
file (MAKE_DIRECTORY ${BINARY_DIR}/_installed_/include)
set_target_properties (ZLIB::ZLIB PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/_installed_/lib/${ep_lib_prefix}${zlib_base}${zlib_debug_postfix}${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/_installed_/lib/${ep_lib_prefix}${zlib_base}${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${BINARY_DIR}/_installed_/include)
add_dependencies (ZLIB::ZLIB zlib_src)
exclude_if_included (ZLIB::ZLIB)
endif ()
#[===========================[
grpc
#]===========================]
ExternalProject_Add (grpc_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/grpc/grpc.git
GIT_TAG v1.25.0
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
$<$<BOOL:${CMAKE_TOOLCHAIN_FILE}>:-DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}>
$<$<BOOL:${VCPKG_TARGET_TRIPLET}>:-DVCPKG_TARGET_TRIPLET=${VCPKG_TARGET_TRIPLET}>
$<$<BOOL:${unity}>:-DCMAKE_UNITY_BUILD=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DgRPC_BUILD_TESTS=OFF
-DgRPC_BENCHMARK_PROVIDER=""
-DgRPC_BUILD_CSHARP_EXT=OFF
-DgRPC_MSVC_STATIC_RUNTIME=ON
-DgRPC_INSTALL=OFF
-DgRPC_CARES_PROVIDER=package
-Dc-ares_DIR=${cares_binary_dir}/_installed_/lib/cmake/c-ares
-DgRPC_SSL_PROVIDER=package
-DOPENSSL_ROOT_DIR=${OPENSSL_ROOT_DIR}
-DgRPC_PROTOBUF_PROVIDER=package
-DProtobuf_USE_STATIC_LIBS=$<IF:$<AND:$<BOOL:${Protobuf_FOUND}>,$<NOT:$<BOOL:${static}>>>,OFF,ON>
-DProtobuf_INCLUDE_DIR=$<JOIN:$<TARGET_PROPERTY:protobuf::libprotobuf,INTERFACE_INCLUDE_DIRECTORIES>,:_:>
-DProtobuf_LIBRARY=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:protobuf::libprotobuf,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:protobuf::libprotobuf,IMPORTED_LOCATION_RELEASE>>
-DProtobuf_PROTOC_LIBRARY=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:protobuf::libprotoc,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:protobuf::libprotoc,IMPORTED_LOCATION_RELEASE>>
-DProtobuf_PROTOC_EXECUTABLE=$<TARGET_PROPERTY:protobuf::protoc,IMPORTED_LOCATION>
-DgRPC_ZLIB_PROVIDER=package
$<$<NOT:$<BOOL:${has_zlib}>>:-DZLIB_ROOT=${zlib_binary_dir}/_installed_>
$<$<BOOL:${MSVC}>:
"-DCMAKE_CXX_FLAGS=-GR -Gd -fp:precise -FS -EHa -MP"
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--parallel ${ep_procs}
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}grpc${grpc_suffix}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}grpc++${grpc_suffix}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}address_sorting$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}gpr$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/$<CONFIG>/grpc_cpp_plugin${CMAKE_EXECUTABLE_SUFFIX}
<BINARY_DIR>
>
LIST_SEPARATOR :_:
TEST_COMMAND ""
INSTALL_COMMAND ""
DEPENDS c-ares_src
BUILD_BYPRODUCTS
<BINARY_DIR>/${ep_lib_prefix}grpc${grpc_suffix}${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}grpc${grpc_suffix}_d${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}grpc++${grpc_suffix}${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}grpc++${grpc_suffix}_d${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}address_sorting${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}address_sorting_d${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}gpr${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}gpr_d${ep_lib_suffix}
<BINARY_DIR>/grpc_cpp_plugin${CMAKE_EXECUTABLE_SUFFIX}
)
if (TARGET protobuf_src)
ExternalProject_Add_StepDependencies(grpc_src build protobuf_src)
endif ()
exclude_if_included (grpc_src)
ExternalProject_Get_Property (grpc_src BINARY_DIR)
ExternalProject_Get_Property (grpc_src SOURCE_DIR)
set (grpc_binary_dir "${BINARY_DIR}")
set (grpc_source_dir "${SOURCE_DIR}")
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (grpc_src)
endif ()
file (MAKE_DIRECTORY ${SOURCE_DIR}/include)
macro (add_imported_grpc libname_)
add_library ("gRPC::${libname_}" STATIC IMPORTED GLOBAL)
set_target_properties ("gRPC::${libname_}" PROPERTIES
IMPORTED_LOCATION_DEBUG
${grpc_binary_dir}/${ep_lib_prefix}${libname_}_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${grpc_binary_dir}/${ep_lib_prefix}${libname_}${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${grpc_source_dir}/include)
add_dependencies ("gRPC::${libname_}" grpc_src)
target_link_libraries (ripple_libs INTERFACE "gRPC::${libname_}")
exclude_if_included ("gRPC::${libname_}")
endmacro ()
set_target_properties (gRPC::grpc_cpp_plugin PROPERTIES
IMPORTED_LOCATION "${grpc_binary_dir}/grpc_cpp_plugin${CMAKE_EXECUTABLE_SUFFIX}")
add_dependencies (gRPC::grpc_cpp_plugin grpc_src)
endif ()
add_imported_grpc (gpr)
add_imported_grpc ("grpc${grpc_suffix}")
add_imported_grpc ("grpc++${grpc_suffix}")
add_imported_grpc (address_sorting)
target_link_libraries ("gRPC::grpc${grpc_suffix}" INTERFACE c-ares::cares gRPC::gpr gRPC::address_sorting ZLIB::ZLIB)
target_link_libraries ("gRPC::grpc++${grpc_suffix}" INTERFACE "gRPC::grpc${grpc_suffix}" gRPC::gpr)
endif ()
find_package(gRPC 1.23)
#[=================================[
generate protobuf sources for
grpc defs and bundle into a
static lib
#]=================================]
set (GRPC_GEN_DIR "${CMAKE_BINARY_DIR}/proto_gen_grpc")
file (MAKE_DIRECTORY ${GRPC_GEN_DIR})
set (GRPC_PROTO_SRCS)
set (GRPC_PROTO_HDRS)
set (GRPC_PROTO_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/src/ripple/proto/org")
set(GRPC_GEN_DIR "${CMAKE_BINARY_DIR}/proto_gen_grpc")
file(MAKE_DIRECTORY ${GRPC_GEN_DIR})
set(GRPC_PROTO_SRCS)
set(GRPC_PROTO_HDRS)
set(GRPC_PROTO_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/src/ripple/proto/org")
file(GLOB_RECURSE GRPC_DEFINITION_FILES LIST_DIRECTORIES false "${GRPC_PROTO_ROOT}/*.proto")
foreach(file ${GRPC_DEFINITION_FILES})
get_filename_component(_abs_file ${file} ABSOLUTE)
@@ -324,10 +20,10 @@ foreach(file ${GRPC_DEFINITION_FILES})
get_filename_component(_rel_root_dir ${_rel_root_file} DIRECTORY)
file(RELATIVE_PATH _rel_dir ${CMAKE_CURRENT_SOURCE_DIR} ${_abs_dir})
set (src_1 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.grpc.pb.cc")
set (src_2 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.pb.cc")
set (hdr_1 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.grpc.pb.h")
set (hdr_2 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.pb.h")
set(src_1 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.grpc.pb.cc")
set(src_2 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.pb.cc")
set(hdr_1 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.grpc.pb.h")
set(hdr_2 "${GRPC_GEN_DIR}/${_rel_root_dir}/${_basename}.pb.h")
add_custom_command(
OUTPUT ${src_1} ${src_2} ${hdr_1} ${hdr_2}
COMMAND protobuf::protoc
@@ -345,20 +41,22 @@ foreach(file ${GRPC_DEFINITION_FILES})
list(APPEND GRPC_PROTO_HDRS ${hdr_1} ${hdr_2})
endforeach()
add_library (grpc_pbufs STATIC ${GRPC_PROTO_SRCS} ${GRPC_PROTO_HDRS})
#target_include_directories (grpc_pbufs PRIVATE src)
target_include_directories (grpc_pbufs SYSTEM PUBLIC ${GRPC_GEN_DIR})
target_link_libraries (grpc_pbufs protobuf::libprotobuf "gRPC::grpc++${grpc_suffix}")
target_compile_options (grpc_pbufs
add_library(grpc_pbufs STATIC ${GRPC_PROTO_SRCS} ${GRPC_PROTO_HDRS})
#target_include_directories(grpc_pbufs PRIVATE src)
target_include_directories(grpc_pbufs SYSTEM PUBLIC ${GRPC_GEN_DIR})
target_link_libraries(grpc_pbufs
"gRPC::grpc++"
# libgrpc is missing references.
absl::random_random
)
target_compile_options(grpc_pbufs
PRIVATE
$<$<BOOL:${MSVC}>:-wd4065>
$<$<NOT:$<BOOL:${MSVC}>>:-Wno-deprecated-declarations>
PUBLIC
$<$<BOOL:${MSVC}>:-wd4996>
$<$<BOOL:${is_xcode}>:
$<$<BOOL:${XCODE}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>)
add_library (Ripple::grpc_pbufs ALIAS grpc_pbufs)
target_link_libraries (ripple_libs INTERFACE Ripple::grpc_pbufs)
exclude_if_included (grpc_pbufs)
add_library(Ripple::grpc_pbufs ALIAS grpc_pbufs)
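
For reference, the per-.proto custom command that the elided loop body above produces has roughly this shape; the exact ARGS are cut off by the hunk, so the flag spellings here are the standard protoc/grpc-plugin ones, not necessarily the repo's verbatim flags:

add_custom_command (
  OUTPUT ${src_1} ${src_2} ${hdr_1} ${hdr_2}
  COMMAND protobuf::protoc
  ARGS --cpp_out=${GRPC_GEN_DIR}
       --grpc_out=${GRPC_GEN_DIR}
       --plugin=protoc-gen-grpc=$<TARGET_FILE:gRPC::grpc_cpp_plugin>
       -I ${GRPC_PROTO_ROOT} ${_abs_file}
  DEPENDS ${_abs_file} protobuf::protoc gRPC::grpc_cpp_plugin
  COMMENT "Generating gRPC C++ sources from ${file}")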

View File

@@ -72,15 +72,15 @@ It generates many files of [results](results):
desired as described above. In a perfect repo, this file will be
empty.
This file is committed to the repo, and is used by the [levelization
Github workflow](../../.github/workflows/levelization.yml) to validate
Github workflow](../../.github/workflows/levelization.yml.disabled) to validate
that nothing changed.
* [`ordering.txt`](results/ordering.txt): A list showing relationships
between modules where there are no loops as they actually exist, as
opposed to how they are desired as described above.
This file is committed to the repo, and is used by the [levelization
Github workflow](../../.github/workflows/levelization.yml) to validate
Github workflow](../../.github/workflows/levelization.yml.disabled) to validate
that nothing changed.
* [`levelization.yml`](../../.github/workflows/levelization.yml)
* [`levelization.yml`](../../.github/workflows/levelization.yml.disabled)
Github Actions workflow to test that levelization loops haven't
changed. Unfortunately, if changes are detected, it can't tell if
they are improvements or not, so if you have resolved any issues or

View File

@@ -52,6 +52,9 @@ Loop: ripple.overlay ripple.rpc
Loop: test.app test.jtx
test.app > test.jtx
Loop: test.app test.rpc
test.rpc ~= test.app
Loop: test.jtx test.toplevel
test.toplevel > test.jtx

View File

@@ -90,7 +90,6 @@ test.app > ripple.overlay
test.app > ripple.protocol
test.app > ripple.resource
test.app > ripple.rpc
test.app > test.rpc
test.app > test.toplevel
test.app > test.unit_test
test.basics > ripple.basics
@@ -187,6 +186,10 @@ test.protocol > ripple.crypto
test.protocol > ripple.json
test.protocol > ripple.protocol
test.protocol > test.toplevel
test.rdb > ripple.app
test.rdb > ripple.core
test.rdb > test.jtx
test.rdb > test.toplevel
test.resource > ripple.basics
test.resource > ripple.beast
test.resource > ripple.resource

View File

@@ -1,14 +1,24 @@
cmake_minimum_required (VERSION 3.16)
if (POLICY CMP0074)
cmake_policy(SET CMP0074 NEW)
endif ()
project (rippled)
set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
if(POLICY CMP0074)
cmake_policy(SET CMP0074 NEW)
endif()
if(POLICY CMP0077)
cmake_policy(SET CMP0077 NEW)
endif()
# Fix "unrecognized escape" issues when passing CMAKE_MODULE_PATH on Windows.
file(TO_CMAKE_PATH "${CMAKE_MODULE_PATH}" CMAKE_MODULE_PATH)
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake")
if(POLICY CMP0144)
cmake_policy(SET CMP0144 NEW)
endif()
project (rippled)
set(Boost_NO_BOOST_CMAKE ON)
# make GIT_COMMIT_HASH define available to all sources
@@ -23,14 +33,41 @@ if(Git_FOUND)
endif()
endif() #git
if (thread_safety_analysis)
# make SOURCE_ROOT_PATH define available for logging
set(SOURCE_ROOT_PATH "${CMAKE_CURRENT_SOURCE_DIR}/src/")
add_definitions(-DSOURCE_ROOT_PATH="${SOURCE_ROOT_PATH}")
# BEAST_ENHANCED_LOGGING option - adds file:line numbers and formatting to logs
# Defaults to ON for Debug builds, OFF for Release builds
if(CMAKE_BUILD_TYPE STREQUAL "Debug")
option(BEAST_ENHANCED_LOGGING "Include file and line numbers in log messages" ON)
else()
option(BEAST_ENHANCED_LOGGING "Include file and line numbers in log messages" OFF)
endif()
if(BEAST_ENHANCED_LOGGING)
add_definitions(-DBEAST_ENHANCED_LOGGING=1)
message(STATUS "Log line numbers enabled")
else()
message(STATUS "Log line numbers disabled")
endif()
if(thread_safety_analysis)
add_compile_options(-Wthread-safety -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS -DRIPPLE_ENABLE_THREAD_SAFETY_ANNOTATIONS)
add_compile_options("-stdlib=libc++")
add_link_options("-stdlib=libc++")
endif()
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake")
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/deps")
option(USE_CONAN "Use Conan package manager for dependencies" OFF)
# Then, auto-detect if conan_toolchain.cmake is being used
if(CMAKE_TOOLCHAIN_FILE)
# Check if the toolchain file path contains "conan_toolchain"
if(CMAKE_TOOLCHAIN_FILE MATCHES "conan_toolchain")
set(USE_CONAN ON CACHE BOOL "Using Conan detected from toolchain file" FORCE)
message(STATUS "Conan toolchain detected: ${CMAKE_TOOLCHAIN_FILE}")
message(STATUS "Building with Conan dependencies")
endif()
endif()
include (CheckCXXCompilerFlag)
include (FetchContent)
@@ -44,7 +81,7 @@ endif ()
include(RippledSanity)
include(RippledVersion)
include(RippledSettings)
include(RippledNIH)
# this check has to remain in the top-level cmake
# because of the early return statement
if (packages_only)
@@ -56,25 +93,64 @@ endif ()
include(RippledCompiler)
include(RippledInterface)
###
include(deps/Boost)
include(deps/OpenSSL)
include(deps/Secp256k1)
include(deps/Ed25519-donna)
include(deps/Lz4)
include(deps/Libarchive)
include(deps/Sqlite)
include(deps/Soci)
include(deps/Snappy)
include(deps/Rocksdb)
include(deps/Nudb)
include(deps/date)
find_package(OpenSSL 1.1.1 REQUIRED)
set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2
)
add_subdirectory(src/secp256k1)
add_subdirectory(src/ed25519-donna)
find_package(lz4 REQUIRED)
# Target names with :: are not allowed in a generator expression.
# We need to pull the include directories and imported location properties
# from separate targets.
find_package(LibArchive REQUIRED)
find_package(SOCI REQUIRED)
find_package(SQLite3 REQUIRED)
find_package(Snappy REQUIRED)
# find_package(wasmedge REQUIRED)
option(rocksdb "Enable RocksDB" ON)
if(rocksdb)
find_package(RocksDB REQUIRED)
set_target_properties(RocksDB::rocksdb PROPERTIES
INTERFACE_COMPILE_DEFINITIONS RIPPLE_ROCKSDB_AVAILABLE=1
)
target_link_libraries(ripple_libs INTERFACE RocksDB::rocksdb)
endif()
find_package(nudb REQUIRED)
find_package(date REQUIRED)
include(deps/Protobuf)
include(deps/gRPC)
include(deps/cassandra)
include(deps/Postgres)
include(deps/WasmEdge)
if(TARGET nudb::core)
set(nudb nudb::core)
elseif(TARGET NuDB::nudb)
set(nudb NuDB::nudb)
else()
message(FATAL_ERROR "unknown nudb target")
endif()
target_link_libraries(ripple_libs INTERFACE ${nudb})
if(reporting)
find_package(cassandra-cpp-driver REQUIRED)
find_package(PostgreSQL REQUIRED)
target_link_libraries(ripple_libs INTERFACE
cassandra-cpp-driver::cassandra-cpp-driver
PostgreSQL::PostgreSQL
)
endif()
target_link_libraries(ripple_libs INTERFACE
ed25519::ed25519
LibArchive::LibArchive
lz4::lz4
OpenSSL::Crypto
OpenSSL::SSL
Ripple::grpc_pbufs
Ripple::pbufs
secp256k1::secp256k1
soci::soci
SQLite::SQLite3
)
###
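
The nudb block above is an instance of a general trick worth naming: different package versions and generators export differently spelled imported targets, so the build probes the candidates and normalizes to one variable. Generalized as a sketch (candidate names taken from the block above):

set (nudb "")
foreach (_cand nudb::core NuDB::nudb)
  if (TARGET ${_cand})
    set (nudb ${_cand})
    break ()
  endif ()
endforeach ()
if (NOT nudb)
  message (FATAL_ERROR "unknown nudb target")
endif ()
target_link_libraries (ripple_libs INTERFACE ${nudb})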

View File

@@ -176,10 +176,11 @@ existing maintainer without a vote.
## Current Maintainers
* [Richard Holland](https://github.com/RichardAH) (XRPL Labs + XRP Ledger Foundation)
* [Denis Angell](https://github.com/dangell7) (XRPL Labs + XRP Ledger Foundation)
* [Wietse Wind](https://github.com/WietseWind) (XRPL Labs + XRP Ledger Foundation)
* [Richard Holland](https://github.com/RichardAH) (XRPL Labs + INFTF)
* [Denis Angell](https://github.com/dangell7) (XRPL Labs + INFTF)
* [Wietse Wind](https://github.com/WietseWind) (XRPL Labs + INFTF)
* [tequ](https://github.com/tequdev) (Independent + INFTF)
[1]: https://docs.github.com/en/get-started/quickstart/contributing-to-projects
[2]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/about-pull-request-merges#squash-and-merge-your-commits

View File

@@ -1,8 +1,8 @@
# Xahau
**Note:** Throughout this README, references to "we" or "our" pertain to the community and contributors involved in the Xahau network. It does not imply a legal entity or a specific collection of individuals.
[Xahau](https://xahau.network/) is a decentralized cryptographic ledger that builds upon the robust foundation of the XRP Ledger. It inherits the XRP Ledger's Byzantine Fault Tolerant consensus algorithm and enhances it with additional features and functionalities. Developers and users familiar with the XRP Ledger will find that most documentation and tutorials available on [xrpl.org](https://xrpl.org) are relevant and applicable to Xahau, including those related to running validators and managing validator keys. For Xahau specific documentation you can visit our [documentation](https://docs.xahau.network/)
[Xahau](https://xahau.network/) is a decentralized cryptographic ledger that builds upon the robust foundation of the XRP Ledger. It inherits the XRP Ledger's Byzantine Fault Tolerant consensus algorithm and enhances it with additional features and functionalities. Developers and users familiar with the XRP Ledger will find that most documentation and tutorials available on [xrpl.org](https://xrpl.org) are relevant and applicable to Xahau, including those related to running validators and managing validator keys. For Xahau specific documentation you can visit our [documentation](https://xahau.network/)
## XAH
XAH is the public, counterparty-free asset native to Xahau and functions primarily as network gas. Transactions submitted to the Xahau network must supply an appropriate amount of XAH, to be burnt by the network as a fee, in order to be successfully included in a validated ledger. In addition, XAH also acts as a bridge currency within the Xahau DEX. XAH is traded on the open-market and is available for anyone to access. Xahau was created in 2023 with a supply of 600 million units of XAH.
@@ -12,7 +12,7 @@ The server software that powers Xahau is called `xahaud` and is available in thi
### Build from Source
* [Read the build instructions in our documentation](https://docs.xahau.network/infrastructure/building-xahau)
* [Read the build instructions in our documentation](https://xahau.network/infrastructure/building-xahau)
* If you encounter any issues, please [open an issue](https://github.com/xahau/xahaud/issues)
## Highlights of Xahau
@@ -58,7 +58,7 @@ git-subtree. See those directories' README files for more details.
- **Documentation**: Documentation for XRPL, Xahau and Hooks.
- [Xrpl Documentation](https://xrpl.org)
- [Xahau Documentation](https://docs.xahau.network/)
- [Xahau Documentation](https://xahau.network/)
- [Hooks Technical Documentation](https://xrpl-hooks.readme.io/)
- **Explorers**: Explore the Xahau ledger using various explorers:
- [xahauexplorer.com](https://xahauexplorer.com)
@@ -67,5 +67,5 @@ git-subtree. See those directories' README files for more details.
- [explorer.xahau.network](https://explorer.xahau.network)
- **Testnet & Faucet**: Test applications and obtain test XAH at [xahau-test.net](https://xahau-test.net) and use the testnet explorer at [explorer.xahau.network](https://explorer.xahau.network).
- **Supporting Wallets**: A list of wallets that support XAH and Xahau-based assets.
- [Xumm](https://xumm.app)
- [Crossmark](https://crossmark.io)
- [Xaman](https://xaman.app)
- [Crossmark](https://crossmark.io)

View File

@@ -8,6 +8,130 @@ This document contains the release notes for `rippled`, the reference server imp
Have new ideas? Need help with setting up your node? [Please open an issue here](https://github.com/xrplf/rippled/issues/new/choose).
# Introducing XRP Ledger version 1.11.0
Version 1.11.0 of `rippled`, the reference server implementation of the XRP Ledger protocol, is now available.
This release reduces memory usage, introduces the `fixNFTokenRemint` amendment, and adds new features and bug fixes. For example, the new NetworkID field in transactions helps to prevent replay attacks with side-chains.
[Sign Up for Future Release Announcements](https://groups.google.com/g/ripple-server)
<!-- BREAK -->
## Action Required
The `fixNFTokenRemint` amendment is now open for voting according to the XRP Ledger's [amendment process](https://xrpl.org/amendments.html), which enables protocol changes following two weeks of >80% support from trusted validators.
If you operate an XRP Ledger server, upgrade to version 1.11.0 by July 5 to ensure service continuity. The exact time that protocol changes take effect depends on the voting decisions of the decentralized network.
## Install / Upgrade
On supported platforms, see the [instructions on installing or updating `rippled`](https://xrpl.org/install-rippled.html).
## What's Changed
### New Features and Improvements
* Allow port numbers to be specified using either a colon or a space by @RichardAH in https://github.com/XRPLF/rippled/pull/4328
* Eliminate memory allocation from critical path: by @nbougalis in https://github.com/XRPLF/rippled/pull/4353
* Make it easy for projects to depend on libxrpl by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4449
* Add the ability to mark amendments as obsolete by @ximinez in https://github.com/XRPLF/rippled/pull/4291
* Always create the FeeSettings object in genesis ledger by @ximinez in https://github.com/XRPLF/rippled/pull/4319
* Log exception messages in several locations by @drlongle in https://github.com/XRPLF/rippled/pull/4400
* Parse flags in account_info method by @drlongle in https://github.com/XRPLF/rippled/pull/4459
* Add NFTokenPages to account_objects RPC by @RichardAH in https://github.com/XRPLF/rippled/pull/4352
* add jss fields used by clio `nft_info` by @ledhed2222 in https://github.com/XRPLF/rippled/pull/4320
* Introduce a slab-based memory allocator and optimize SHAMapItem by @nbougalis in https://github.com/XRPLF/rippled/pull/4218
* Add NetworkID field to transactions to help prevent replay attacks on and from side-chains by @RichardAH in https://github.com/XRPLF/rippled/pull/4370
* If present, set quorum based on command line. by @mtrippled in https://github.com/XRPLF/rippled/pull/4489
* API does not accept seed or public key for account by @drlongle in https://github.com/XRPLF/rippled/pull/4404
* Add `nftoken_id`, `nftoken_ids` and `offer_id` meta fields into NFT `Tx` responses by @shawnxie999 in https://github.com/XRPLF/rippled/pull/4447
### Bug Fixes
* fix(gateway_balances): handle overflow exception by @RichardAH in https://github.com/XRPLF/rippled/pull/4355
* fix(ValidatorSite): handle rare null pointer dereference in timeout by @ximinez in https://github.com/XRPLF/rippled/pull/4420
* RPC commands understand markers derived from all ledger object types by @ximinez in https://github.com/XRPLF/rippled/pull/4361
* `fixNFTokenRemint`: prevent NFT re-mint: by @shawnxie999 in https://github.com/XRPLF/rippled/pull/4406
* Fix a case where ripple::Expected returned a json array, not a value by @scottschurr in https://github.com/XRPLF/rippled/pull/4401
* fix: Ledger data returns an empty list (instead of null) when all entries are filtered out by @drlongle in https://github.com/XRPLF/rippled/pull/4398
* Fix unit test ripple.app.LedgerData by @drlongle in https://github.com/XRPLF/rippled/pull/4484
* Fix the fix for std::result_of by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4496
* Fix errors for Clang 16 by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4501
* Ensure that switchover vars are initialized before use: by @seelabs in https://github.com/XRPLF/rippled/pull/4527
* Move faulty assert by @ximinez in https://github.com/XRPLF/rippled/pull/4533
* Fix unaligned load and stores: (#4528) by @seelabs in https://github.com/XRPLF/rippled/pull/4531
* fix node size estimation by @dangell7 in https://github.com/XRPLF/rippled/pull/4536
* fix: remove redundant moves by @ckeshava in https://github.com/XRPLF/rippled/pull/4565
### Code Cleanup and Testing
* Replace compare() with the three-way comparison operator in base_uint, Issue and Book by @drlongle in https://github.com/XRPLF/rippled/pull/4411
* Rectify the import paths of boost::function_output_iterator by @ckeshava in https://github.com/XRPLF/rippled/pull/4293
* Expand Linux test matrix by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4454
* Add patched recipe for SOCI by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4510
* Switch to self-hosted runners for macOS by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4511
* [TRIVIAL] Add missing includes by @seelabs in https://github.com/XRPLF/rippled/pull/4555
### Docs
* Refactor build instructions by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4381
* Add install instructions for package managers by @thejohnfreeman in https://github.com/XRPLF/rippled/pull/4472
* Fix typo by @solmsted in https://github.com/XRPLF/rippled/pull/4508
* Update environment.md by @sappenin in https://github.com/XRPLF/rippled/pull/4498
* Update BUILD.md by @oeggert in https://github.com/XRPLF/rippled/pull/4514
* Trivial: add comments for NFToken-related invariants by @scottschurr in https://github.com/XRPLF/rippled/pull/4558
## New Contributors
* @drlongle made their first contribution in https://github.com/XRPLF/rippled/pull/4411
* @ckeshava made their first contribution in https://github.com/XRPLF/rippled/pull/4293
* @solmsted made their first contribution in https://github.com/XRPLF/rippled/pull/4508
* @sappenin made their first contribution in https://github.com/XRPLF/rippled/pull/4498
* @oeggert made their first contribution in https://github.com/XRPLF/rippled/pull/4514
**Full Changelog**: https://github.com/XRPLF/rippled/compare/1.10.1...1.11.0
### GitHub
The public source code repository for `rippled` is hosted on GitHub at <https://github.com/XRPLF/rippled>.
We welcome all contributions and invite everyone to join the community of XRP Ledger developers to help build the Internet of Value.
### Credits
The following people contributed directly to this release:
- Alloy Networks <45832257+alloynetworks@users.noreply.github.com>
- Brandon Wilson <brandon@coil.com>
- Chenna Keshava B S <21219765+ckeshava@users.noreply.github.com>
- David Fuelling <sappenin@gmail.com>
- Denis Angell <dangell@transia.co>
- Ed Hennis <ed@ripple.com>
- Elliot Lee <github.public@intelliot.com>
- John Freeman <jfreeman08@gmail.com>
- Mark Travis <mtrippled@users.noreply.github.com>
- Nik Bougalis <nikb@bougalis.net>
- RichardAH <richard.holland@starstone.co.nz>
- Scott Determan <scott.determan@yahoo.com>
- Scott Schurr <scott@ripple.com>
- Shawn Xie <35279399+shawnxie999@users.noreply.github.com>
- drlongle <drlongle@gmail.com>
- ledhed2222 <ledhed2222@users.noreply.github.com>
- oeggert <117319296+oeggert@users.noreply.github.com>
- solmsted <steven.olm@gmail.com>
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find.
To report a bug, please send a detailed report to:
bugs@xrpl.org
# Introducing XRP Ledger version 1.10.1
Version 1.10.1 of `rippled`, the reference server implementation of the XRP Ledger protocol, is now available. This release restores packages for Ubuntu 18.04.

View File

@@ -61,13 +61,12 @@ For these complaints or reports, please [contact our support team](mailto:bugs@x
### The following type of security problems are excluded
- (D)DOS attacks
- Error messages or error pages without sensitive data
- Tests & sample data as publicly available in our repositories at Github
- Common issues like browser header warnings or DNS configuration, identified by vulnerability scans
- Vulnerability scan reports for software we publicly use
- Security issues related to outdated OS's, browsers or plugins
- Reports for security problems that we have been notified of before
1. **In scope**. Only bugs in software under the scope of the program qualify. Currently, that means `xahaud` and `xahau-lib`.
2. **Relevant**. A security issue, posing a danger to user funds, privacy or the operation of the Xahau Ledger.
3. **Original and previously unknown**. Bugs that are already known and discussed in public do not qualify. Previously reported bugs, even if publicly unknown, are not eligible.
4. **Specific**. We welcome general security advice or recommendations, but we cannot pay bounties for that.
5. **Fixable**. There has to be something we can do to permanently fix the problem. Note that bugs in other peoples software may still qualify in some cases. For example, if you find a bug in a library that we use which can compromise the security of software that is in scope and we can get it fixed, you may qualify for a bounty.
6. **Unused**. If you use the exploit to attack the Xahau Ledger, you do not qualify for a bounty. If you report a vulnerability used in an ongoing or past attack and there is specific, concrete evidence that suggests you are the attacker we reserve the right not to pay a bounty.
Please note: reports that lack proof (such as screenshots or other data), detailed information, or steps to reproduce any unexpected result will be investigated but will not be eligible for any reward.

View File

@@ -1,4 +1,9 @@
#!/bin/bash
#!/bin/bash -u
# We use set -e and bash with -u to bail on the first non-zero exit code of
# any process launched or upon any unbound variable.
# We use set -x to print commands before running them to help with
# debugging.
set -ex
echo "START INSIDE CONTAINER - CORE"
@@ -7,6 +12,13 @@ echo "-- GITHUB_REPOSITORY: $1"
echo "-- GITHUB_SHA: $2"
echo "-- GITHUB_RUN_NUMBER: $4"
# Use mounted filesystem for temp files to avoid container space limits
export TMPDIR=/io/tmp
export TEMP=/io/tmp
export TMP=/io/tmp
mkdir -p /io/tmp
echo "=== Using temp directory: /io/tmp ==="
umask 0000;
cd /io/ &&
@@ -20,24 +32,52 @@ if [[ "$?" -ne "0" ]]; then
exit 127
fi
perl -i -pe "s/^(\\s*)-DBUILD_SHARED_LIBS=OFF/\\1-DBUILD_SHARED_LIBS=OFF\\n\\1-DROCKSDB_BUILD_SHARED=OFF/g" Builds/CMake/deps/Rocksdb.cmake &&
BUILD_TYPE=Release
mv Builds/CMake/deps/WasmEdge.cmake Builds/CMake/deps/WasmEdge.old &&
echo "find_package(LLVM REQUIRED CONFIG)
message(STATUS \"Found LLVM ${LLVM_PACKAGE_VERSION}\")
message(STATUS \"Found LLVM \${LLVM_PACKAGE_VERSION}\")
message(STATUS \"Using LLVMConfig.cmake in: \${LLVM_DIR}\")
add_library (wasmedge STATIC IMPORTED GLOBAL)
set_target_properties(wasmedge PROPERTIES IMPORTED_LOCATION \${WasmEdge_LIB})
target_link_libraries (ripple_libs INTERFACE wasmedge)
add_library (NIH::WasmEdge ALIAS wasmedge)
add_library (wasmedge::wasmedge ALIAS wasmedge)
message(\"WasmEdge DONE\")
" > Builds/CMake/deps/WasmEdge.cmake &&
export LDFLAGS="-static-libstdc++"
git config --global --add safe.directory /io &&
git checkout src/ripple/protocol/impl/BuildInfo.cpp &&
sed -i s/\"0.0.0\"/\"$(date +%Y).$(date +%-m).$(date +%-d)-$(git rev-parse --abbrev-ref HEAD)+$4\"/g src/ripple/protocol/impl/BuildInfo.cpp &&
sed -i s/\"0.0.0\"/\"$(date +%Y).$(date +%-m).$(date +%-d)-$(git rev-parse --abbrev-ref HEAD)$(if [ -n "$4" ]; then echo "+$4"; fi)\"/g src/ripple/protocol/impl/BuildInfo.cpp &&
conan export external/snappy --version 1.1.10 --user xahaud --channel stable &&
conan export external/soci --version 4.0.3 --user xahaud --channel stable &&
conan export external/wasmedge --version 0.11.2 --user xahaud --channel stable &&
cd release-build &&
cmake .. -DCMAKE_BUILD_TYPE=Release -DBoost_NO_BOOST_CMAKE=ON -DLLVM_DIR=/usr/lib64/llvm13/lib/cmake/llvm/ -DLLVM_LIBRARY_DIR=/usr/lib64/llvm13/lib/ -DWasmEdge_LIB=/usr/local/lib64/libwasmedge.a &&
make -j$3 VERBOSE=1 &&
# Install dependencies - tool_requires in conanfile.py handles glibc 2.28 compatibility
# for build tools (protoc, grpc plugins, b2) in HBB environment
# The tool_requires('b2/5.3.2') in conanfile.py should force b2 to build from source
# with the correct toolchain, avoiding the GLIBCXX_3.4.29 issue
echo "=== Installing dependencies ===" &&
conan install .. --output-folder . --build missing --settings build_type=$BUILD_TYPE \
-o with_wasmedge=False -o tool_requires_b2=True &&
cmake .. -G Ninja \
-DCMAKE_BUILD_TYPE=$BUILD_TYPE \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_EXE_LINKER_FLAGS="-static-libstdc++" \
-DLLVM_DIR=$LLVM_DIR \
-DWasmEdge_LIB=$WasmEdge_LIB \
-Dxrpld=TRUE \
-Dtests=TRUE &&
ccache -z &&
ninja -j $3 && echo "=== Re-running final link with verbose output ===" && rm -f rippled && ninja -v rippled &&
ccache -s &&
strip -s rippled &&
mv rippled xahaud &&
echo "=== Full ldd output ===" &&
ldd xahaud &&
echo "=== Running libcheck ===" &&
libcheck xahaud &&
echo "Build host: `hostname`" > release.info &&
echo "Build date: `date`" >> release.info &&
echo "Build md5: `md5sum xahaud`" >> release.info &&
@@ -62,8 +102,8 @@ fi
cd ..;
mv src/ripple/net/impl/RegisterSSLCerts.cpp.old src/ripple/net/impl/RegisterSSLCerts.cpp;
mv Builds/CMake/deps/Rocksdb.cmake.old Builds/CMake/deps/Rocksdb.cmake;
mv Builds/CMake/deps/WasmEdge.old Builds/CMake/deps/WasmEdge.cmake;
rm src/certs/certbundle.h;
git checkout src/ripple/protocol/impl/BuildInfo.cpp;
echo "END INSIDE CONTAINER - CORE"

build-full.sh Executable file → Normal file
View File

@@ -1,4 +1,9 @@
#!/bin/bash
#!/bin/bash -u
# We use set -e and bash with -u to bail on the first non-zero exit code of
# any process launched or upon any unbound variable.
# We use set -x to print commands before running them to help with
# debugging.
set -e
echo "START INSIDE CONTAINER - FULL"
@@ -9,8 +14,10 @@ echo "-- GITHUB_RUN_NUMBER: $4"
umask 0000;
####
cd /io;
mkdir src/certs;
mkdir -p src/certs;
curl --silent -k https://raw.githubusercontent.com/RichardAH/rippled-release-builder/main/ca-bundle/certbundle.h -o src/certs/certbundle.h;
if [ "`grep certbundle.h src/ripple/net/impl/RegisterSSLCerts.cpp | wc -l`" -eq "0" ]
then
@@ -57,92 +64,15 @@ then
#endif/g" src/ripple/net/impl/RegisterSSLCerts.cpp &&
sed -i "s/#include <ripple\/net\/RegisterSSLCerts.h>/\0\n#include <certs\/certbundle.h>/g" src/ripple/net/impl/RegisterSSLCerts.cpp
fi
mkdir .nih_c;
mkdir .nih_toolchain;
cd .nih_toolchain &&
yum install -y wget lz4 lz4-devel git llvm13-static.x86_64 llvm13-devel.x86_64 devtoolset-10-binutils zlib-static ncurses-static -y \
devtoolset-7-gcc-c++ \
devtoolset-9-gcc-c++ \
devtoolset-10-gcc-c++ \
snappy snappy-devel \
zlib zlib-devel \
lz4-devel \
libasan &&
export PATH=`echo $PATH | sed -E "s/devtoolset-9/devtoolset-7/g"` &&
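# The PATH rewrite above swaps which devtoolset GCC is active; the script
# toggles between devtoolset-7/9/10 for the individual build stages below
# (presumably to keep each dependency on a compatible toolchain).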
echo "-- Install ZStd 1.1.3 --" &&
yum install epel-release -y &&
ZSTD_VERSION="1.1.3" &&
( wget -nc -q -O zstd-${ZSTD_VERSION}.tar.gz https://github.com/facebook/zstd/archive/v${ZSTD_VERSION}.tar.gz; echo "" ) &&
tar xzvf zstd-${ZSTD_VERSION}.tar.gz &&
cd zstd-${ZSTD_VERSION} &&
make -j$3 install &&
cd .. &&
echo "-- Install Cmake 3.23.1 --" &&
pwd &&
( wget -nc -q https://github.com/Kitware/CMake/releases/download/v3.23.1/cmake-3.23.1-linux-x86_64.tar.gz; echo "" ) &&
tar -xzf cmake-3.23.1-linux-x86_64.tar.gz -C /hbb/ &&
echo "-- Install Boost 1.75.0 --" &&
pwd &&
( wget -nc -q https://boostorg.jfrog.io/artifactory/main/release/1.75.0/source/boost_1_75_0.tar.gz; echo "" ) &&
tar -xzf boost_1_75_0.tar.gz &&
cd boost_1_75_0 && ./bootstrap.sh && ./b2 link=static -j$3 && ./b2 install &&
cd ../ &&
echo "-- Install Protobuf 3.20.0 --" &&
pwd &&
( wget -nc -q https://github.com/protocolbuffers/protobuf/releases/download/v3.20.0/protobuf-all-3.20.0.tar.gz; echo "" ) &&
tar -xzf protobuf-all-3.20.0.tar.gz &&
cd protobuf-3.20.0/ &&
./autogen.sh && ./configure --prefix=/usr --disable-shared link=static && make -j$3 && make install &&
cd .. &&
echo "-- Build LLD --" &&
pwd &&
ln /usr/bin/llvm-config-13 /usr/bin/llvm-config &&
mv /opt/rh/devtoolset-9/root/usr/bin/ar /opt/rh/devtoolset-9/root/usr/bin/ar-9 &&
ln /opt/rh/devtoolset-10/root/usr/bin/ar /opt/rh/devtoolset-9/root/usr/bin/ar &&
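# The two lines above place devtoolset-10's ar in devtoolset-9's bin directory,
# presumably so the newer binutils can handle the LLVM 13 static archives
# built next.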
( wget -nc -q https://github.com/llvm/llvm-project/releases/download/llvmorg-13.0.1/lld-13.0.1.src.tar.xz; echo "" ) &&
( wget -nc -q https://github.com/llvm/llvm-project/releases/download/llvmorg-13.0.1/libunwind-13.0.1.src.tar.xz; echo "" ) &&
tar -xf lld-13.0.1.src.tar.xz &&
tar -xf libunwind-13.0.1.src.tar.xz &&
cp -r libunwind-13.0.1.src/include libunwind-13.0.1.src/src lld-13.0.1.src/ &&
cd lld-13.0.1.src &&
rm -rf build CMakeCache.txt &&
mkdir build &&
cd build &&
cmake .. -DLLVM_LIBRARY_DIR=/usr/lib64/llvm13/lib/ -DCMAKE_INSTALL_PREFIX=/usr/lib64/llvm13/ -DCMAKE_BUILD_TYPE=Release &&
make -j$3 install &&
ln -s /usr/lib64/llvm13/lib/include/lld /usr/include/lld &&
cp /usr/lib64/llvm13/lib/liblld*.a /usr/local/lib/ &&
cd ../../ &&
echo "-- Build WasmEdge --" &&
( wget -nc -q https://github.com/WasmEdge/WasmEdge/archive/refs/tags/0.11.2.zip; unzip -o 0.11.2.zip; ) &&
cd WasmEdge-0.11.2 &&
( mkdir build; echo "" ) &&
cd build &&
export BOOST_ROOT="/usr/local/src/boost_1_75_0" &&
export Boost_LIBRARY_DIRS="/usr/local/lib" &&
export BOOST_INCLUDEDIR="/usr/local/src/boost_1_75_0" &&
export PATH=`echo $PATH | sed -E "s/devtoolset-7/devtoolset-9/g"` &&
cmake .. \
-DCMAKE_BUILD_TYPE=Release \
-DWASMEDGE_BUILD_SHARED_LIB=OFF \
-DWASMEDGE_BUILD_STATIC_LIB=ON \
-DWASMEDGE_BUILD_AOT_RUNTIME=ON \
-DWASMEDGE_FORCE_DISABLE_LTO=ON \
-DCMAKE_POSITION_INDEPENDENT_CODE=ON \
-DWASMEDGE_LINK_LLVM_STATIC=ON \
-DWASMEDGE_BUILD_PLUGINS=OFF \
-DWASMEDGE_LINK_TOOLS_STATIC=ON \
-DBoost_NO_BOOST_CMAKE=ON -DLLVM_DIR=/usr/lib64/llvm13/lib/cmake/llvm/ -DLLVM_LIBRARY_DIR=/usr/lib64/llvm13/lib/ &&
make -j$3 install &&
export PATH=`echo $PATH | sed -E "s/devtoolset-9/devtoolset-10/g"` &&
cp -r include/api/wasmedge /usr/include/ &&
cd /io/ &&
# Environment setup moved to Dockerfile in release-builder.sh
source /opt/rh/gcc-toolset-11/enable
export PATH=/usr/local/bin:$PATH
export CC='ccache gcc' &&
export CXX='ccache g++' &&
echo "-- Build Rippled --" &&
pwd &&
cp Builds/CMake/deps/Rocksdb.cmake Builds/CMake/deps/Rocksdb.cmake.old &&
echo "MOVING TO [ build-core.sh ]"
cd /io;
echo "MOVING TO [ build-core.sh ]";
printenv > .env.temp;
cat .env.temp | grep '=' | sed s/\\\(^[^=]\\+=\\\)/\\1\\\"/g|sed s/\$/\\\"/g > .env;
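# The pipeline above keeps only VAR=value lines and wraps each value in double
# quotes, e.g. FOO=a b -> FOO="a b" (illustrative), so the resulting .env can
# be sourced safely later.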


@@ -1,7 +1,7 @@
#
# Default validators.txt
#
# This file is located in the same folder as your rippled.cfg file
# This file is located in the same folder as your xahaud.cfg file
# and defines which validators your server trusts not to collude.
#
# This file is UTF-8 with DOS, UNIX, or Mac style line endings.
@@ -17,18 +17,17 @@
# See validator_list_sites and validator_list_keys below.
#
# Examples:
# n9KorY8QtTdRx7TVDpwnG9NvyxsDwHUKUEeDLY3AkiGncVaSXZi5
# n9MqiExBcoG19UXwoLjBJnhsxEhAZMuWwJDRdkyDz1EkEkwzQTNt
# n9L3GdotB8a3AqtsvS7NXt4BUTQSAYyJUr9xtFj2qXJjfbZsawKY
# n9M7G6eLwQtUjfCthWUmTN8L4oEZn1sNr46yvKrpsq58K1C6LAxz
#
# [validator_list_sites]
#
# List of URIs serving lists of recommended validators.
#
# Examples:
# https://vl.ripple.com
# https://vl.xrplf.org
# https://vl.xahau.org
# http://127.0.0.1:8000
# file:///etc/opt/ripple/vl.txt
# file:///etc/opt/xahaud/vl.txt
#
# [validator_list_keys]
#
@@ -39,50 +38,48 @@
# Validator list keys should be hex-encoded.
#
# Examples:
# ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# ED307A760EE34F2D0CAA103377B1969117C38B8AA0AA1E2A24DAC1F32FC97087ED
# EDA46E9C39B1389894E690E58914DC1029602870370A0993E5B87C4A24EAF4A8E8
#
# [import_vl_keys]
#
# This section is used to import the public keys of trusted validator list publishers.
# The keys are used to authenticate and accept new lists of trusted validators.
# In this example, the key for the publisher "vl.xrplf.org" is imported.
# Each key is represented as a hexadecimal string.
#
# Examples:
# ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# ED45D1840EE724BE327ABE9146503D5848EFD5F38B6D5FEDE71E80ACCE5E6E738B
# ED42AEC58B701EEBB77356FFFEC26F83C1F0407263530F068C7C73D392C7E06FD1
# ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# The default validator list publishers that the rippled instance
# The default validator list publishers that the xahaud instance
# trusts.
#
# WARNING: Changing these values can cause your rippled instance to see a
# validated ledger that contradicts other rippled instances'
# WARNING: Changing these values can cause your xahaud instance to see a
# validated ledger that contradicts other xahaud instances'
# validated ledgers (aka a ledger fork) if your validator list(s)
# do not sufficiently overlap with the list(s) used by others.
# See: https://arxiv.org/pdf/1802.07242.pdf
[validator_list_sites]
https://vl.ripple.com
https://vl.xrplf.org
https://vl.xahau.org
[validator_list_keys]
#vl.ripple.com
ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# vl.xrplf.org
ED45D1840EE724BE327ABE9146503D5848EFD5F38B6D5FEDE71E80ACCE5E6E738B
# vl.xahau.org
EDA46E9C39B1389894E690E58914DC1029602870370A0993E5B87C4A24EAF4A8E8
[import_vl_keys]
# vl.xrplf.org
ED45D1840EE724BE327ABE9146503D5848EFD5F38B6D5FEDE71E80ACCE5E6E738B
ED42AEC58B701EEBB77356FFFEC26F83C1F0407263530F068C7C73D392C7E06FD1
ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# To use the test network (see https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html),
# To use the test network (see https://xahau.network/docs/infrastructure/installing-xahaud),
# use the following configuration instead:
#
# [validator_list_sites]
# https://vl.altnet.rippletest.net
#
# [validator_list_keys]
# ED264807102805220DA0F312E71FC2C69E1552C9C5790F6C25E3729DEB573D5860
# [validators]
# nHBoJCE3wPgkTcrNPMHyTJFQ2t77EyCAqcBRspFCpL6JhwCm94VZ
# nHUVv4g47bFMySAZFUKVaXUYEmfiUExSoY4FzwXULNwJRzju4XnQ
# nHBvr8avSFTz4TFxZvvi4rEJZZtyqE3J6KAAcVWVtifsE7edPM7q
# nHUH3Z8TRU57zetHbEPr1ynyrJhxQCwrJvNjr4j1SMjYADyW1WWe
#
# [import_vl_keys]
# ED264807102805220DA0F312E71FC2C69E1552C9C5790F6C25E3729DEB573D5860


@@ -9,7 +9,7 @@
#
# 2. Peer Protocol
#
# 3. Ripple Protocol
# 3. XRPL Protocol
#
# 4. HTTPS Client
#
@@ -29,18 +29,17 @@
#
# Purpose
#
# This file documents and provides examples of all rippled server process
# configuration options. When the rippled server instance is launched, it
# This file documents and provides examples of all xahaud server process
# configuration options. When the xahaud server instance is launched, it
# looks for a file with the following name:
#
# rippled.cfg
# xahaud.cfg
#
# For more information on where the rippled server instance searches for the
# file, visit:
# To run xahaud with a custom configuration file, use the "--conf {file}" flag.
# By default, xahaud will look in the local working directory or the home directory.
#
# https://xrpl.org/commandline-usage.html#generic-options
#
# This file should be named rippled.cfg. This file is UTF-8 with DOS, UNIX,
# This file should be named xahaud.cfg. This file is UTF-8 with DOS, UNIX,
# or Mac style end of lines. Blank lines and lines beginning with '#' are
# ignored. Undefined sections are reserved. No escapes are currently defined.
#
@@ -89,8 +88,8 @@
#
#
#
# rippled offers various server protocols to clients making inbound
# connections. The listening ports rippled uses are "universal" ports
# xahaud offers various server protocols to clients making inbound
# connections. The listening ports xahaud uses are "universal" ports
# which may be configured to handshake in one or more of the available
# supported protocols. These universal ports simplify administration:
# A single open port can be used for multiple protocols.
@@ -103,7 +102,7 @@
#
# A list of port names and key/value pairs. A port name must start with a
# letter and contain only letters and numbers. The name is not case-sensitive.
# For each name in this list, rippled will look for a configuration file
# For each name in this list, xahaud will look for a configuration file
# section with the same name and use it to create a listening port. The
# name is informational only; the choice of name does not affect the function
# of the listening port.
@@ -134,7 +133,7 @@
# ip = 127.0.0.1
# protocol = http
#
# When rippled is used as a command line client (for example, issuing a
# When xahaud is used as a command line client (for example, issuing a
# server stop command), the first port advertising the http or https
# protocol will be used to make the connection.
#
@@ -175,7 +174,7 @@
# same time. It is possible have both Websockets and Secure Websockets
# together in one port.
#
# NOTE If no ports support the peer protocol, rippled cannot
# NOTE If no ports support the peer protocol, xahaud cannot
# receive incoming peer connections or become a superpeer.
#
# limit = <number>
@@ -194,7 +193,7 @@
# required. IP address restrictions, if any, will be checked in addition
# to the credentials specified here.
#
# When acting in the client role, rippled will supply these credentials
# When acting in the client role, xahaud will supply these credentials
# using HTTP's Basic Authentication headers when making outbound HTTP/S
# requests.
#
@@ -237,7 +236,7 @@
# WS, or WSS protocol interfaces. If administrative commands are
# disabled for a port, these credentials have no effect.
#
# When acting in the client role, rippled will supply these credentials
# When acting in the client role, xahaud will supply these credentials
# in the submitted JSON for any administrative command requests when
# invoking JSON-RPC commands on remote servers.
#
@@ -258,7 +257,7 @@
# resource controls will default to those for non-administrative users.
#
# The secure_gateway IP addresses are intended to represent
# proxies. Since rippled trusts these hosts, they must be
# proxies. Since xahaud trusts these hosts, they must be
# responsible for properly authenticating the remote user.
#
# If some IP addresses are included for both "admin" and
@@ -272,7 +271,7 @@
# Use the specified files when configuring SSL on the port.
#
# NOTE If no files are specified and secure protocols are selected,
# rippled will generate an internal self-signed certificate.
# xahaud will generate an internal self-signed certificate.
#
# The files have these meanings:
#
@@ -295,12 +294,12 @@
# Control the ciphers which the server will support over SSL on the port,
# specified using the OpenSSL "cipher list format".
#
# NOTE If unspecified, rippled will automatically configure a modern
# NOTE If unspecified, xahaud will automatically configure a modern
# cipher suite. This default suite should be widely supported.
#
# You should not modify this string unless you have a specific
# reason and cryptographic expertise. Incorrect modification may
# keep rippled from connecting to other instances of rippled or
# keep xahaud from connecting to other instances of xahaud or
# prevent RPC and WebSocket clients from connecting.
#
# send_queue_limit = [1..65535]
@@ -351,7 +350,7 @@
#
# Examples:
# { "command" : "server_info" }
# { "command" : "log_level", "partition" : "ripplecalc", "severity" : "trace" }
# { "command" : "log_level", "partition" : "xahaudcalc", "severity" : "trace" }
#
#
#
@@ -380,16 +379,15 @@
#-----------------
#
# These settings control security and access attributes of the Peer to Peer
# server section of the rippled process. Peer Protocol implements the
# Ripple Payment protocol. It is over peer connections that transactions
# and validations are passed from to machine to machine, to determine the
# contents of validated ledgers.
# server section of the xahaud process. It is over peer connections that
# transactions and validations are passed from machine to machine, to
# determine the contents of validated ledgers.
#
#
#
# [ips]
#
# List of hostnames or ips where the Ripple protocol is served. A default
# List of hostnames or ips where the XRPL protocol is served. A default
# starter list is included in the code and used if no other hostnames are
# available.
#
@@ -398,24 +396,23 @@
# does not generally matter.
#
# The default list of entries is:
# - r.ripple.com 51235
# - zaphod.alloy.ee 51235
# - sahyadri.isrdc.in 51235
# - hubs.xahau.as16089.net 21337
# - bacab.alloy.ee 21337
#
# Examples:
#
# [ips]
# 192.168.0.1
# 192.168.0.1 2459
# r.ripple.com 51235
# 192.168.0.1 21337
# bacab.alloy.ee 21337
#
#
# [ips_fixed]
#
# List of IP addresses or hostnames to which rippled should always attempt to
# List of IP addresses or hostnames to which xahaud should always attempt to
# maintain peer connections with. This is useful for manually forming private
# networks, for example to configure a validation server that connects to the
# Ripple network through a public-facing server, or for building a set
# Xahau Network through a public-facing server, or for building a set
# of cluster peers.
#
# One address or domain name per line is allowed. A port must be specified
@@ -465,7 +462,7 @@
#
# IP address or domain of NTP servers to use for time synchronization.
#
# These NTP servers are suitable for rippled servers located in the United
# These NTP servers are suitable for xahaud servers located in the United
# States:
# time.windows.com
# time.apple.com
@@ -566,7 +563,7 @@
#
# minimum_txn_in_ledger_standalone = <number>
#
# Like minimum_txn_in_ledger when rippled is running in standalone
# Like minimum_txn_in_ledger when xahaud is running in standalone
# mode. Default: 1000.
#
# target_txn_in_ledger = <number>
@@ -703,7 +700,7 @@
#
# [validator_token]
#
# This is an alternative to [validation_seed] that allows rippled to perform
# This is an alternative to [validation_seed] that allows xahaud to perform
# validation without having to store the validator keys on the network
# connected server. The field should contain a single token in the form of a
# base64-encoded blob.
@@ -738,19 +735,18 @@
#
# Specify the file by its name or path.
# Unless an absolute path is specified, it will be considered relative to
# the folder in which the rippled.cfg file is located.
# the folder in which the xahaud.cfg file is located.
#
# Examples:
# /home/ripple/validators.txt
# C:/home/ripple/validators.txt
# /home/xahaud/validators.txt
# C:/home/xahaud/validators.txt
#
# Example content:
# [validators]
# n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7
# n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj
# n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C
# n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS
# n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA
# n9L3GdotB8a3AqtsvS7NXt4BUTQSAYyJUr9xtFj2qXJjfbZsawKY
# n9LQDHLWyFuAn5BXJuW2ow5J9uGqpmSjRYS2cFRpxf6uJbxwDzvM
# n9MCWyKVUkiatXVJTKUrAESB5kBFP8R3hm43jGHtg8WBnjv3iDfb
# n9KWXCLRhjpajuZtULTXsy6R5xbisA6ozGxM4zdEJFq6uHiFZDvW
#
#
#
@@ -833,7 +829,7 @@
#
# 0: Disable the ledger replay feature [default]
# 1: Enable the ledger replay feature. With this feature enabled, when
# acquiring a ledger from the network, a rippled node only downloads
# acquiring a ledger from the network, a xahaud node only downloads
# the ledger header and the transactions instead of the whole ledger.
# And the ledger is built by applying the transactions to the parent
# ledger.
@@ -844,10 +840,9 @@
#
#----------------
#
# The rippled server instance uses HTTPS GET requests in a variety of
# The xahaud server instance uses HTTPS GET requests in a variety of
# circumstances, including but not limited to contacting trusted domains to
# fetch information such as mapping an email address to a Ripple Payment
# Network address.
# fetch information such as mapping an email address to a user's r-address.
#
# [ssl_verify]
#
@@ -884,15 +879,15 @@
#
#------------
#
# rippled has an optional operating mode called Reporting Mode. In Reporting
# Mode, rippled does not connect to the peer to peer network. Instead, rippled
# will continuously extract data from one or more rippled servers that are
# xahaud has an optional operating mode called Reporting Mode. In Reporting
# Mode, xahaud does not connect to the peer to peer network. Instead, xahaud
# will continuously extract data from one or more xahaud servers that are
# connected to the peer to peer network (referred to as an ETL source).
# Reporting mode servers will forward RPC requests that require access to the
# peer to peer network (submit, fee, etc) to an ETL source.
#
# [reporting] Settings for Reporting Mode. If and only if this section is
# present, rippled will start in reporting mode. This section
# present, xahaud will start in reporting mode. This section
# contains a list of ETL source names, and key-value pairs. The
# ETL source names each correspond to a configuration file
# section; the names must match exactly. The key-value pairs are
@@ -997,16 +992,16 @@
#
#------------
#
# rippled creates 4 SQLite database to hold bookkeeping information
# xahaud creates 4 SQLite database to hold bookkeeping information
# about transactions, local credentials, and various other things.
# It also creates the NodeDB, which holds all the objects that
# make up the current and historical ledgers. In Reporting Mode, rippled
# make up the current and historical ledgers. In Reporting Mode, xahaud
# uses a Postgres database instead of SQLite.
#
# The simplest way to work with Postgres is to install it locally.
# When it is running, execute the initdb.sh script in the current
# directory as: sudo -u postgres ./initdb.sh
# This will create the rippled user and an empty database of the same name.
# This will create the xahaud user and an empty database of the same name.
#
# The size of the NodeDB grows in proportion to the amount of new data and the
# amount of historical data (a configurable setting) so the performance of the
@@ -1014,7 +1009,7 @@
# the performance of the server.
#
# Partial pathnames will be considered relative to the location of
# the rippled.cfg file.
# the xahaud.cfg file.
#
# [node_db] Settings for the Node Database (required)
#
@@ -1025,18 +1020,18 @@
#
# Example:
# type=nudb
# path=db/nudb
# path=/opt/xahaud/db/nudb
#
# The "type" field must be present and controls the choice of backend:
#
# type = NuDB
#
# NuDB is a high-performance database written by Ripple Labs and optimized
# for rippled and solid-state drives.
# for xahaud and solid-state drives.
#
# NuDB maintains its high speed regardless of the amount of history
# stored. Online delete may be selected, but is not required. NuDB is
# available on all platforms that rippled runs on.
# available on all platforms that xahaud runs on.
#
# type = RocksDB
#
@@ -1056,10 +1051,23 @@
# Cassandra is an alternative backend to be used only with Reporting Mode.
# See the Reporting Mode section for more details about Reporting Mode.
#
# type = RWDB
#
# RWDB is a high-performance memory store written by XRPL-Labs and optimized
# for xahaud. RWDB is NOT persistent and the data will be lost on restart.
# RWDB is recommended for Validator and Peer nodes that are not required to
# store history.
#
# Required keys for NuDB and RocksDB:
#
# path Location to store the database
#
# Required keys for RWDB:
#
# online_delete Required. RWDB stores data in memory and will
# grow unbounded without online_delete. See the
# online_delete section below.
#
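# Illustrative RWDB stanza (example values only; online_delete must also
# satisfy the constraints described in the online_delete section below):
#
# [node_db]
# type=RWDB
# online_delete=256
#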
# Required keys for Cassandra:
#
# contact_points IP of a node in the Cassandra cluster
@@ -1099,9 +1107,19 @@
# if sufficient IOPS capacity is available.
# Default 0.
#
# Optional keys for NuDB or RocksDB:
# online_delete for RWDB, NuDB and RocksDB:
#
# earliest_seq The default is 32570 to match the XRP ledger
# online_delete Minimum value of 256. Enable automatic purging
# of older ledger information. Maintain at least this
# number of ledger records online. Must be greater
# than or equal to ledger_history.
#
# REQUIRED for RWDB to prevent out-of-memory errors.
# Optional for NuDB and RocksDB.
#
# Optional keys for NuDB and RocksDB:
#
# earliest_seq The default is 32570 to match the XRP Ledger
# network's earliest allowed sequence. Alternate
# networks may set this value. Minimum value of 1.
# If a [shard_db] section is defined, and this
@@ -1109,11 +1127,40 @@
# it must be defined with the same value in both
# sections.
#
# online_delete Minimum value of 256. Enable automatic purging
# of older ledger information. Maintain at least this
# number of ledger records online. Must be greater
# than or equal to ledger_history.
# Optional keys for NuDB only:
#
# nudb_block_size EXPERIMENTAL: Block size in bytes for NuDB storage.
# Must be a power of 2 between 4096 and 32768. Default is 4096.
#
# This parameter controls the fundamental storage unit
# size for NuDB's internal data structures. The choice
# of block size can significantly impact performance
# depending on your storage hardware and filesystem:
#
# - 4096 bytes: Optimal for most standard SSDs and
# traditional filesystems (ext4, NTFS, HFS+).
# Provides good balance of performance and storage
# efficiency. Recommended for most deployments.
#
# - 8192-16384 bytes: May improve performance on
# high-end NVMe SSDs and copy-on-write filesystems
# like ZFS or Btrfs that benefit from larger block
# alignment. Can reduce metadata overhead for large
# databases.
#
# - 32768 bytes (32K): Maximum supported block size
# for high-performance scenarios with very fast
# storage. May increase memory usage and reduce
# efficiency for smaller databases.
#
# Note: This setting cannot be changed after database
# creation without rebuilding the entire database.
# Choose carefully based on your hardware and expected
# database size.
#
# Example: nudb_block_size=4096
#
# These keys modify the behavior of online_delete, and thus are only
# relevant if online_delete is defined and non-zero:
#
@@ -1147,7 +1194,7 @@
#
# recovery_wait_seconds
# The online delete process checks periodically
# that rippled is still in sync with the network,
# that xahaud is still in sync with the network,
# and that the validated ledger is less than
# 'age_threshold_seconds' old. If not, then continue
# sleeping for this number of seconds and
@@ -1186,8 +1233,8 @@
# The server creates and maintains 4 to 5 bookkeeping SQLite databases in
# the 'database_path' location. If you omit this configuration setting,
# the server creates a directory called "db" located in the same place as
# your rippled.cfg file.
# Partial pathnames are relative to the location of the rippled executable.
# your xahaud.cfg file.
# Partial pathnames are relative to the location of the xahaud executable.
#
# [shard_db] Settings for the Shard Database (optional)
#
@@ -1263,7 +1310,7 @@
# The default is "wal", which uses a write-ahead
# log to implement database transactions.
# Alternately, "memory" saves disk I/O, but if
# rippled crashes during a transaction, the
# xahaud crashes during a transaction, the
# database is likely to be corrupted.
# See https://www.sqlite.org/pragma.html#pragma_journal_mode
# for more details about the available options.
@@ -1273,7 +1320,7 @@
# synchronous Valid values: off, normal, full, extra
# The default is "normal", which works well with
# the "wal" journal mode. Alternatively, "off"
# allows rippled to continue as soon as data is
# allows xahaud to continue as soon as data is
# passed to the OS, which can significantly
# increase speed, but risks data corruption if
# the host computer crashes before writing that
@@ -1287,7 +1334,7 @@
# The default is "file", which will use files
# for temporary database tables and indices.
# Alternatively, "memory" may save I/O, but
# rippled does not currently use many, if any,
# xahaud does not currently use many, if any,
# of these temporary objects.
# See https://www.sqlite.org/pragma.html#pragma_temp_store
# for more details about the available options.
@@ -1299,9 +1346,9 @@
# conninfo Info for connecting to Postgres. Format is
# postgres://[username]:[password]@[ip]/[database].
# The database and user must already exist. If this
# section is missing and rippled is running in
# Reporting Mode, rippled will connect as the
# user running rippled to a database with the
# section is missing and xahaud is running in
# Reporting Mode, xahaud will connect as the
# user running xahaud to a database with the
# same name. On Linux and Mac OS X, the connection
# will take place using the server's UNIX domain
# socket. On Windows, through the localhost IP
@@ -1310,7 +1357,7 @@
# use_tx_tables Valid values: 1, 0
# The default is 1 (true). Determines whether to use
# the SQLite transaction database. If set to 0,
# rippled will not write to the transaction database,
# xahaud will not write to the transaction database,
# and will reject tx, account_tx and tx_history RPCs.
# In Reporting Mode, this setting is ignored.
#
@@ -1338,7 +1385,7 @@
#
# These settings are designed to help server administrators diagnose
# problems, and obtain detailed information about the activities being
# performed by the rippled process.
# performed by the xahaud process.
#
#
#
@@ -1355,7 +1402,7 @@
#
# Configuration parameters for the Beast. Insight stats collection module.
#
# Insight is a module that collects information from the areas of rippled
# Insight is a module that collects information from the areas of xahaud
# that have instrumentation. The configuration parameters control where the
# collection metrics are sent. The parameters are expressed as key = value
# pairs with no white space. The main parameter is the choice of server:
@@ -1364,7 +1411,7 @@
#
# Choice of server to send metrics to. Currently the only choice is
# "statsd" which sends UDP packets to a StatsD daemon, which must be
# running while rippled is running. More information on StatsD is
# running while xahaud is running. More information on StatsD is
# available here:
# https://github.com/b/statsd_spec
#
@@ -1374,7 +1421,7 @@
# in the format, n.n.n.n:port.
#
# "prefix" A string prepended to each collected metric. This is used
# to distinguish between different running instances of rippled.
# to distinguish between different running instances of xahaud.
#
# If this section is missing, or the server type is unspecified or unknown,
# statistics are not collected or reported.
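# Illustrative stanza (section and key names assumed from the description
# above):
#
# [insight]
# server=statsd
# address=127.0.0.1:8125
# prefix=xahaud-main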
@@ -1401,7 +1448,7 @@
#
# Example:
# [perf]
# perf_log=/var/log/rippled/perf.log
# perf_log=/var/log/xahaud/perf.log
# log_interval=2
#
#-------------------------------------------------------------------------------
@@ -1410,8 +1457,8 @@
#
#----------
#
# The vote settings configure settings for the entire Ripple network.
# While a single instance of rippled cannot unilaterally enforce network-wide
# The vote settings configure settings for the entire Xahau Network.
# While a single instance of xahaud cannot unilaterally enforce network-wide
# settings, these choices become part of the instance's vote during the
# consensus process for each voting ledger.
#
@@ -1423,9 +1470,9 @@
#
# The cost of the reference transaction fee, specified in drops.
# The reference transaction is the simplest form of transaction.
# It represents an XRP payment between two parties.
# It represents an XAH payment between two parties.
#
# If this parameter is unspecified, rippled will use an internal
# If this parameter is unspecified, xahaud will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
@@ -1434,26 +1481,26 @@
# account_reserve = <drops>
#
# The account reserve requirement is specified in drops. The portion of an
# account's XRP balance that is at or below the reserve may only be
# account's XAH balance that is at or below the reserve may only be
# spent on transaction fees, and not transferred out of the account.
#
# If this parameter is unspecified, rippled will use an internal
# If this parameter is unspecified, xahaud will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# account_reserve = 10000000 # 10 XRP
# account_reserve = 10000000 # 10 XAH
#
# owner_reserve = <drops>
#
# The owner reserve is the amount of XRP reserved in the account for
# The owner reserve is the amount of XAH reserved in the account for
# each ledger item owned by the account. Ledger items an account may
# own include trust lines, open orders, and tickets.
#
# If this parameter is unspecified, rippled will use an internal
# If this parameter is unspecified, xahaud will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# owner_reserve = 2000000 # 2 XRP
# owner_reserve = 2000000 # 2 XAH
#
#-------------------------------------------------------------------------------
#
@@ -1491,7 +1538,7 @@
# tool instead.
#
# This flag has no effect on the "sign" and "sign_for" command line options
# that rippled makes available.
# that xahaud makes available.
#
# The default value of this field is "false"
#
@@ -1570,7 +1617,7 @@
#--------------------
#
# Administrators can use these values as a starting point for configuring
# their instance of rippled, but each value should be checked to make sure
# their instance of xahaud, but each value should be checked to make sure
# it meets the business requirements for the organization.
#
# Server
@@ -1580,7 +1627,7 @@
# "peer"
#
# Peer protocol open to everyone. This is required to accept
# incoming rippled connections. This does not affect automatic
# incoming xahaud connections. This does not affect automatic
# or manual outgoing Peer protocol connections.
#
# "rpc"
@@ -1608,8 +1655,8 @@
# NOTE
#
# To accept connections on well known ports such as 80 (HTTP) or
# 443 (HTTPS), most operating systems will require rippled to
# run with administrator privileges, or else rippled will not start.
# 443 (HTTPS), most operating systems will require xahaud to
# run with administrator privileges, or else xahaud will not start.
[server]
port_rpc_admin_local
@@ -1620,20 +1667,20 @@ port_ws_admin_local
#ssl_cert = /etc/ssl/certs/server.crt
[port_rpc_admin_local]
port = 5005
port = 5009
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_peer]
port = 51235
port = 21337
ip = 0.0.0.0
# alternatively, to accept connections on IPv4 + IPv6, use:
#ip = ::
protocol = peer
[port_ws_admin_local]
port = 6006
port = 6009
ip = 127.0.0.1
admin = 127.0.0.1
protocol = ws
@@ -1644,15 +1691,15 @@ ip = 127.0.0.1
secure_gateway = 127.0.0.1
#[port_ws_public]
#port = 6005
#port = 6008
#ip = 127.0.0.1
#protocol = wss
#-------------------------------------------------------------------------------
# This is primary persistent datastore for rippled. This includes transaction
# This is the primary persistent datastore for xahaud. This includes transaction
# metadata, account states, and ledger headers. Helpful information can be
# found at https://xrpl.org/capacity-planning.html#node-db-type
# found at https://xahau.network/docs/infrastructure/system-requirements
# type=NuDB is recommended for non-validators with fast SSDs. Validators or
# slow / spinning disks should use RocksDB. Caution: Spinning disks are
# not recommended. They do not perform well enough to consistently remain
@@ -1665,16 +1712,16 @@ secure_gateway = 127.0.0.1
# deletion.
[node_db]
type=NuDB
path=/var/lib/rippled/db/nudb
path=/opt/xahaud/db/nudb
online_delete=512
advisory_delete=0
# This is the persistent datastore for shards. It is important for the health
# of the ripple network that rippled operators shard as much as practical.
# of the Xahau Network that xahaud operators shard as much as practical.
# NuDB requires SSD storage. Helpful information can be found at
# https://xrpl.org/history-sharding.html
#[shard_db]
#path=/var/lib/rippled/db/shards/nudb
#path=/opt/xahaud/db/shards/nudb
#max_historical_shards=50
#
# This optional section can be configured with a list
@@ -1685,7 +1732,7 @@ advisory_delete=0
#/path/2
[database_path]
/var/lib/rippled/db
/opt/xahaud/db
# To use Postgres, uncomment this section and fill in the appropriate connection
@@ -1700,7 +1747,7 @@ advisory_delete=0
# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
/var/log/rippled/debug.log
/var/log/xahaud/debug.log
[sntp_servers]
time.windows.com
@@ -1708,15 +1755,19 @@ time.apple.com
time.nist.gov
pool.ntp.org
# To use the XRP test network
# (see https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html),
# To use the Xahau Test Network
# (see https://xahau.network/docs/infrastructure/installing-xahaud),
# use the following [ips] section:
# [ips]
# r.altnet.rippletest.net 51235
# 79.110.60.121 21338
# 79.110.60.122 21338
# 79.110.60.124 21338
# 79.110.60.125 21338
# File containing trusted validator keys or validator list publishers.
# Unless an absolute path is specified, it will be considered relative to the
# folder in which the rippled.cfg file is located.
# folder in which the xahaud.cfg file is located.
[validators_file]
validators.txt


@@ -9,7 +9,7 @@
#
# 2. Peer Protocol
#
# 3. Ripple Protocol
# 3. XRPL Protocol
#
# 4. HTTPS Client
#
@@ -29,18 +29,16 @@
#
# Purpose
#
# This file documents and provides examples of all rippled server process
# configuration options. When the rippled server instance is launched, it
# This file documents and provides examples of all xahaud server process
# configuration options. When the xahaud server instance is launched, it
# looks for a file with the following name:
#
# rippled.cfg
# xahaud.cfg
#
# For more information on where the rippled server instance searches for the
# file, visit:
# To run xahaud with a custom configuration file, use the "--conf {file}" flag.
# By default, xahaud will look in the local working directory or the home directory.
#
# https://xrpl.org/commandline-usage.html#generic-options
#
# This file should be named rippled.cfg. This file is UTF-8 with DOS, UNIX,
# This file should be named xahaud.cfg. This file is UTF-8 with DOS, UNIX,
# or Mac style end of lines. Blank lines and lines beginning with '#' are
# ignored. Undefined sections are reserved. No escapes are currently defined.
#
@@ -89,8 +87,8 @@
#
#
#
# rippled offers various server protocols to clients making inbound
# connections. The listening ports rippled uses are "universal" ports
# xahaud offers various server protocols to clients making inbound
# connections. The listening ports xahaud uses are "universal" ports
# which may be configured to handshake in one or more of the available
# supported protocols. These universal ports simplify administration:
# A single open port can be used for multiple protocols.
@@ -103,7 +101,7 @@
#
# A list of port names and key/value pairs. A port name must start with a
# letter and contain only letters and numbers. The name is not case-sensitive.
# For each name in this list, rippled will look for a configuration file
# For each name in this list, xahaud will look for a configuration file
# section with the same name and use it to create a listening port. The
# name is informational only; the choice of name does not affect the function
# of the listening port.
@@ -134,7 +132,7 @@
# ip = 127.0.0.1
# protocol = http
#
# When rippled is used as a command line client (for example, issuing a
# When xahaud is used as a command line client (for example, issuing a
# server stop command), the first port advertising the http or https
# protocol will be used to make the connection.
#
@@ -175,7 +173,7 @@
# same time. It is possible have both Websockets and Secure Websockets
# together in one port.
#
# NOTE If no ports support the peer protocol, rippled cannot
# NOTE If no ports support the peer protocol, xahaud cannot
# receive incoming peer connections or become a superpeer.
#
# limit = <number>
@@ -194,7 +192,7 @@
# required. IP address restrictions, if any, will be checked in addition
# to the credentials specified here.
#
# When acting in the client role, rippled will supply these credentials
# When acting in the client role, xahaud will supply these credentials
# using HTTP's Basic Authentication headers when making outbound HTTP/S
# requests.
#
@@ -227,7 +225,7 @@
# WS, or WSS protocol interfaces. If administrative commands are
# disabled for a port, these credentials have no effect.
#
# When acting in the client role, rippled will supply these credentials
# When acting in the client role, xahaud will supply these credentials
# in the submitted JSON for any administrative command requests when
# invoking JSON-RPC commands on remote servers.
#
@@ -247,11 +245,11 @@
# resource controls will default to those for non-administrative users.
#
# The secure_gateway IP addresses are intended to represent
# proxies. Since rippled trusts these hosts, they must be
# proxies. Since xahaud trusts these hosts, they must be
# responsible for properly authenticating the remote user.
#
# The same IP address cannot be used in both "admin" and "secure_gateway"
# lists for the same port. In this case, rippled will abort with an error
# lists for the same port. In this case, xahaud will abort with an error
# message to the console shortly after startup
#
# ssl_key = <filename>
@@ -261,7 +259,7 @@
# Use the specified files when configuring SSL on the port.
#
# NOTE If no files are specified and secure protocols are selected,
# rippled will generate an internal self-signed certificate.
# xahaud will generate an internal self-signed certificate.
#
# The files have these meanings:
#
@@ -284,12 +282,12 @@
# Control the ciphers which the server will support over SSL on the port,
# specified using the OpenSSL "cipher list format".
#
# NOTE If unspecified, rippled will automatically configure a modern
# NOTE If unspecified, xahaud will automatically configure a modern
# cipher suite. This default suite should be widely supported.
#
# You should not modify this string unless you have a specific
# reason and cryptographic expertise. Incorrect modification may
# keep rippled from connecting to other instances of rippled or
# keep xahaud from connecting to other instances of xahaud or
# prevent RPC and WebSocket clients from connecting.
#
# send_queue_limit = [1..65535]
@@ -340,7 +338,7 @@
#
# Examples:
# { "command" : "server_info" }
# { "command" : "log_level", "partition" : "ripplecalc", "severity" : "trace" }
# { "command" : "log_level", "partition" : "xahaucalc", "severity" : "trace" }
#
#
#
@@ -369,8 +367,8 @@
#-----------------
#
# These settings control security and access attributes of the Peer to Peer
# server section of the rippled process. Peer Protocol implements the
# Ripple Payment protocol. It is over peer connections that transactions
# server section of the xahaud process. Peer Protocol implements the
# XRPL Payment protocol. It is over peer connections that transactions
# and validations are passed from machine to machine, to determine the
# contents of validated ledgers.
#
@@ -378,7 +376,7 @@
#
# [ips]
#
# List of hostnames or ips where the Ripple protocol is served. A default
# List of hostnames or ips where the XRPL protocol is served. A default
# starter list is included in the code and used if no other hostnames are
# available.
#
@@ -387,24 +385,23 @@
# does not generally matter.
#
# The default list of entries is:
# - r.ripple.com 51235
# - zaphod.alloy.ee 51235
# - sahyadri.isrdc.in 51235
# - bacab.alloy.ee 21337
# - hubs.xahau.as16089.net 21337
#
# Examples:
#
# [ips]
# 192.168.0.1
# 192.168.0.1 2459
# r.ripple.com 51235
# 192.168.0.1 21337
# bacab.alloy.ee 21337
#
#
# [ips_fixed]
#
# List of IP addresses or hostnames to which rippled should always attempt to
# List of IP addresses or hostnames to which xahaud should always attempt to
# maintain peer connections with. This is useful for manually forming private
# networks, for example to configure a validation server that connects to the
# Ripple network through a public-facing server, or for building a set
# Xahau Network through a public-facing server, or for building a set
# of cluster peers.
#
# One address or domain name per line is allowed. A port must be specified
@@ -454,7 +451,7 @@
#
# IP address or domain of NTP servers to use for time synchronization.
#
# These NTP servers are suitable for rippled servers located in the United
# These NTP servers are suitable for xahaud servers located in the United
# States:
# time.windows.com
# time.apple.com
@@ -555,7 +552,7 @@
#
# minimum_txn_in_ledger_standalone = <number>
#
# Like minimum_txn_in_ledger when rippled is running in standalone
# Like minimum_txn_in_ledger when xahaud is running in standalone
# mode. Default: 1000.
#
# target_txn_in_ledger = <number>
@@ -682,7 +679,7 @@
#
# [validator_token]
#
# This is an alternative to [validation_seed] that allows rippled to perform
# This is an alternative to [validation_seed] that allows xahaud to perform
# validation without having to store the validator keys on the network
# connected server. The field should contain a single token in the form of a
# base64-encoded blob.
@@ -717,22 +714,21 @@
#
# Specify the file by its name or path.
# Unless an absolute path is specified, it will be considered relative to
# the folder in which the rippled.cfg file is located.
# the folder in which the xahaud.cfg file is located.
#
# Examples:
# /home/ripple/validators.txt
# C:/home/ripple/validators.txt
# /home/xahaud/validators.txt
# C:/home/xahaud/validators.txt
#
# Example content:
# [validators]
# n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7
# n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj
# n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C
# n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS
# n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA
#
#
#
# n9L3GdotB8a3AqtsvS7NXt4BUTQSAYyJUr9xtFj2qXJjfbZsawKY
# n9LQDHLWyFuAn5BXJuW2ow5J9uGqpmSjRYS2cFRpxf6uJbxwDzvM
# n9MCWyKVUkiatXVJTKUrAESB5kBFP8R3hm43jGHtg8WBnjv3iDfb
# n9KWXCLRhjpajuZtULTXsy6R5xbisA6ozGxM4zdEJFq6uHiFZDvW
# [path_search]
# When searching for paths, the default search aggressiveness. This can take
# exponentially more resources as the size is increased.
@@ -795,7 +791,7 @@
#
# 0: Disable the ledger replay feature [default]
# 1: Enable the ledger replay feature. With this feature enabled, when
# acquiring a ledger from the network, a rippled node only downloads
# acquiring a ledger from the network, a xahaud node only downloads
# the ledger header and the transactions instead of the whole ledger.
# And the ledger is built by applying the transactions to the parent
# ledger.
@@ -806,9 +802,9 @@
#
#----------------
#
# The rippled server instance uses HTTPS GET requests in a variety of
# The xahaud server instance uses HTTPS GET requests in a variety of
# circumstances, including but not limited to contacting trusted domains to
# fetch information such as mapping an email address to a Ripple Payment
# fetch information such as mapping an email address to an XRPL Payment
# Network address.
#
# [ssl_verify]
@@ -846,15 +842,15 @@
#
#------------
#
# rippled has an optional operating mode called Reporting Mode. In Reporting
# Mode, rippled does not connect to the peer to peer network. Instead, rippled
# will continuously extract data from one or more rippled servers that are
# xahaud has an optional operating mode called Reporting Mode. In Reporting
# Mode, xahaud does not connect to the peer to peer network. Instead, xahaud
# will continuously extract data from one or more xahaud servers that are
# connected to the peer to peer network (referred to as an ETL source).
# Reporting mode servers will forward RPC requests that require access to the
# peer to peer network (submit, fee, etc) to an ETL source.
#
# [reporting] Settings for Reporting Mode. If and only if this section is
# present, rippled will start in reporting mode. This section
# present, xahaud will start in reporting mode. This section
# contains a list of ETL source names, and key-value pairs. The
# ETL source names each correspond to a configuration file
# section; the names must match exactly. The key-value pairs are
@@ -959,16 +955,16 @@
#
#------------
#
# rippled creates 4 SQLite database to hold bookkeeping information
# xahaud creates 4 SQLite database to hold bookkeeping information
# about transactions, local credentials, and various other things.
# It also creates the NodeDB, which holds all the objects that
# make up the current and historical ledgers. In Reporting Mode, rippled
# make up the current and historical ledgers. In Reporting Mode, xahaud
# uses a Postgres database instead of SQLite.
#
# The simplest way to work with Postgres is to install it locally.
# When it is running, execute the initdb.sh script in the current
# directory as: sudo -u postgres ./initdb.sh
# This will create the rippled user and an empty database of the same name.
# This will create the xahaud user and an empty database of the same name.
#
# The size of the NodeDB grows in proportion to the amount of new data and the
# amount of historical data (a configurable setting) so the performance of the
@@ -976,7 +972,7 @@
# the performance of the server.
#
# Partial pathnames will be considered relative to the location of
# the rippled.cfg file.
# the xahaud.cfg file.
#
# [node_db] Settings for the Node Database (required)
#
@@ -994,11 +990,11 @@
# type = NuDB
#
# NuDB is a high-performance database written by Ripple Labs and optimized
# for rippled and solid-state drives.
# for solid-state drives.
#
# NuDB maintains its high speed regardless of the amount of history
# stored. Online delete may be selected, but is not required. NuDB is
# available on all platforms that rippled runs on.
# available on all platforms that xahaud runs on.
#
# type = RocksDB
#
@@ -1103,14 +1099,14 @@
#
# recovery_wait_seconds
# The online delete process checks periodically
# that rippled is still in sync with the network,
# that xahaud is still in sync with the network,
# and that the validated ledger is less than
# 'age_threshold_seconds' old. By default, if it
# is not the online delete process aborts and
# tries again later. If 'recovery_wait_seconds'
# is set and rippled is out of sync, but likely to
# is set and xahaud is out of sync, but likely to
# recover quickly, then online delete will wait
# this number of seconds for rippled to get back
# this number of seconds for xahaud to get back
# into sync before it aborts.
# Set this value if the node is otherwise staying
# in sync, or recovering quickly, but the online
@@ -1146,8 +1142,8 @@
# The server creates and maintains 4 to 5 bookkeeping SQLite databases in
# the 'database_path' location. If you omit this configuration setting,
# the server creates a directory called "db" located in the same place as
# your rippled.cfg file.
# Partial pathnames are relative to the location of the rippled executable.
# your xahaud.cfg file.
# Partial pathnames are relative to the location of the xahaud executable.
#
# [shard_db] Settings for the Shard Database (optional)
#
@@ -1223,7 +1219,7 @@
# The default is "wal", which uses a write-ahead
# log to implement database transactions.
# Alternately, "memory" saves disk I/O, but if
# rippled crashes during a transaction, the
# xahaud crashes during a transaction, the
# database is likely to be corrupted.
# See https://www.sqlite.org/pragma.html#pragma_journal_mode
# for more details about the available options.
@@ -1233,7 +1229,7 @@
# synchronous Valid values: off, normal, full, extra
# The default is "normal", which works well with
# the "wal" journal mode. Alternatively, "off"
# allows rippled to continue as soon as data is
# allows xahaud to continue as soon as data is
# passed to the OS, which can significantly
# increase speed, but risks data corruption if
# the host computer crashes before writing that
@@ -1247,7 +1243,7 @@
# The default is "file", which will use files
# for temporary database tables and indices.
# Alternatively, "memory" may save I/O, but
# rippled does not currently use many, if any,
# xahaud does not currently use many, if any,
# of these temporary objects.
# See https://www.sqlite.org/pragma.html#pragma_temp_store
# for more details about the available options.
@@ -1259,9 +1255,9 @@
# conninfo Info for connecting to Postgres. Format is
# postgres://[username]:[password]@[ip]/[database].
# The database and user must already exist. If this
# section is missing and rippled is running in
# Reporting Mode, rippled will connect as the
# user running rippled to a database with the
# section is missing and xahaud is running in
# Reporting Mode, xahaud will connect as the
# user running xahaud to a database with the
# same name. On Linux and Mac OS X, the connection
# will take place using the server's UNIX domain
# socket. On Windows, through the localhost IP
@@ -1270,7 +1266,7 @@
# use_tx_tables Valid values: 1, 0
# The default is 1 (true). Determines whether to use
# the SQLite transaction database. If set to 0,
# rippled will not write to the transaction database,
# xahaud will not write to the transaction database,
# and will reject tx, account_tx and tx_history RPCs.
# In Reporting Mode, this setting is ignored.
#
@@ -1298,7 +1294,7 @@
#
# These settings are designed to help server administrators diagnose
# problems, and obtain detailed information about the activities being
# performed by the rippled process.
# performed by the xahaud process.
#
#
#
@@ -1315,7 +1311,7 @@
#
# Configuration parameters for the Beast. Insight stats collection module.
#
# Insight is a module that collects information from the areas of rippled
# Insight is a module that collects information from the areas of xahaud
# that have instrumentation. The configuration parameters control where the
# collection metrics are sent. The parameters are expressed as key = value
# pairs with no white space. The main parameter is the choice of server:
@@ -1324,7 +1320,7 @@
#
# Choice of server to send metrics to. Currently the only choice is
# "statsd" which sends UDP packets to a StatsD daemon, which must be
# running while rippled is running. More information on StatsD is
# running while xahaud is running. More information on StatsD is
# available here:
# https://github.com/b/statsd_spec
#
@@ -1334,7 +1330,7 @@
# in the format, n.n.n.n:port.
#
# "prefix" A string prepended to each collected metric. This is used
# to distinguish between different running instances of rippled.
# to distinguish between different running instances of xahaud.
#
# If this section is missing, or the server type is unspecified or unknown,
# statistics are not collected or reported.
@@ -1361,7 +1357,7 @@
#
# Example:
# [perf]
# perf_log=/var/log/rippled/perf.log
# perf_log=/var/log/xahaud/perf.log
# log_interval=2
#
#-------------------------------------------------------------------------------
@@ -1370,8 +1366,8 @@
#
#----------
#
# The vote settings configure settings for the entire Ripple network.
# While a single instance of rippled cannot unilaterally enforce network-wide
# The vote settings configure settings for the entire Xahau Network.
# While a single instance of xahaud cannot unilaterally enforce network-wide
# settings, these choices become part of the instance's vote during the
# consensus process for each voting ledger.
#
@@ -1383,9 +1379,9 @@
#
# The cost of the reference transaction fee, specified in drops.
# The reference transaction is the simplest form of transaction.
# It represents an XRP payment between two parties.
# It represents an XAH payment between two parties.
#
# If this parameter is unspecified, rippled will use an internal
# If this parameter is unspecified, xahaud will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
@@ -1394,26 +1390,26 @@
# account_reserve = <drops>
#
# The account reserve requirement is specified in drops. The portion of an
# account's XRP balance that is at or below the reserve may only be
# account's XAH balance that is at or below the reserve may only be
# spent on transaction fees, and not transferred out of the account.
#
# If this parameter is unspecified, rippled will use an internal
# If this parameter is unspecified, xahaud will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# account_reserve = 10000000 # 10 XRP
# account_reserve = 10000000 # 10 XAH
#
# owner_reserve = <drops>
#
# The owner reserve is the amount of XRP reserved in the account for
# The owner reserve is the amount of XAH reserved in the account for
# each ledger item owned by the account. Ledger items an account may
# own include trust lines, open orders, and tickets.
#
# If this parameter is unspecified, rippled will use an internal
# If this parameter is unspecified, xahaud will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# owner_reserve = 2000000 # 2 XRP
# owner_reserve = 2000000 # 2 XAH
#
#-------------------------------------------------------------------------------
#
@@ -1451,7 +1447,7 @@
# tool instead.
#
# This flag has no effect on the "sign" and "sign_for" command line options
# that rippled makes available.
# that xahaud makes available.
#
# The default value of this field is "false"
#
@@ -1530,7 +1526,7 @@
#--------------------
#
# Administrators can use these values as a starting point for configuring
# their instance of rippled, but each value should be checked to make sure
# their instance of xahaud, but each value should be checked to make sure
# it meets the business requirements for the organization.
#
# Server
@@ -1540,7 +1536,7 @@
# "peer"
#
# Peer protocol open to everyone. This is required to accept
# incoming rippled connections. This does not affect automatic
# incoming xahaud connections. This does not affect automatic
# or manual outgoing Peer protocol connections.
#
# "rpc"
@@ -1568,8 +1564,8 @@
# NOTE
#
# To accept connections on well known ports such as 80 (HTTP) or
# 443 (HTTPS), most operating systems will require rippled to
# run with administrator privileges, or else rippled will not start.
# 443 (HTTPS), most operating systems will require xahaud to
# run with administrator privileges, or else xahaud will not start.
[server]
port_rpc_admin_local
@@ -1587,7 +1583,7 @@ admin = 127.0.0.1
protocol = http
[port_peer]
port = 51235
port = 21337
ip = 0.0.0.0
# alternatively, to accept connections on IPv4 + IPv6, use:
#ip = ::
@@ -1611,9 +1607,9 @@ protocol = ws
#-------------------------------------------------------------------------------
# This is primary persistent datastore for rippled. This includes transaction
# This is the primary persistent datastore for xahaud. This includes transaction
# metadata, account states, and ledger headers. Helpful information can be
# found at https://xrpl.org/capacity-planning.html#node-db-type
# found at https://xahau.network/docs/infrastructure/system-requirements
# type=NuDB is recommended for non-validators with fast SSDs. Validators or
# slow / spinning disks should use RocksDB. Caution: Spinning disks are
# not recommended. They do not perform well enough to consistently remain
@@ -1626,16 +1622,16 @@ protocol = ws
# deletion.
[node_db]
type=NuDB
path=/var/lib/rippled-reporting/db/nudb
path=/opt/xahaud-reporting/db/nudb
# online_delete=512 #
advisory_delete=0
# This is the persistent datastore for shards. It is important for the health
# of the ripple network that rippled operators shard as much as practical.
# of the Xahau Network that xahaud operators shard as much as practical.
# NuDB requires SSD storage. Helpful information can be found at
# https://xrpl.org/history-sharding.html
#[shard_db]
#path=/var/lib/rippled/db/shards/nudb
#path=/opt/xahaud-reporting/db/shards/nudb
#max_historical_shards=50
#
# This optional section can be configured with a list
@@ -1646,7 +1642,7 @@ advisory_delete=0
#/path/2
[database_path]
/var/lib/rippled-reporting/db
/opt/xahaud-reporting/db
# To use Postgres, uncomment this section and fill in the appropriate connection
# info. Postgres can only be used in Reporting Mode.
@@ -1660,7 +1656,7 @@ advisory_delete=0
# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
/var/log/rippled-reporting/debug.log
/var/log/xahaud-reporting/debug.log
[sntp_servers]
time.windows.com
@@ -1668,17 +1664,20 @@ time.apple.com
time.nist.gov
pool.ntp.org
# To use the XRP test network
# (see https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html),
# To use the Xahau Test Network
# (see https://xahau.network/docs/infrastructure/installing-xahaud),
# use the following [ips] section:
# [ips]
# r.altnet.rippletest.net 51235
# 79.110.60.121 21338
# 79.110.60.122 21338
# 79.110.60.124 21338
# 79.110.60.125 21338
# File containing trusted validator keys or validator list publishers.
# Unless an absolute path is specified, it will be considered relative to the
# folder in which the rippled.cfg file is located.
# folder in which the xahaud.cfg file is located.
[validators_file]
/opt/rippled-reporting/etc/validators.txt
/opt/xahaud-reporting/etc/validators.txt
# Turn down default logging to save disk space in the long run.
# Valid values here are trace, debug, info, warning, error, and fatal
@@ -1699,5 +1698,5 @@ etl_source
[etl_source]
source_grpc_port=50051
source_ws_port=6005
source_ws_port=6008
source_ip=127.0.0.1

View File

@@ -1,4 +1,4 @@
# standalone: ./rippled -a --ledgerfile config/genesis.json --conf config/rippled-standalone.cfg
# standalone: ./xahaud -a --ledgerfile config/genesis.json --conf config/xahaud-standalone.cfg
[server]
port_rpc_admin_local
port_ws_public
@@ -21,7 +21,7 @@ ip = 0.0.0.0
protocol = ws
# [port_peer]
# port = 51235
# port = 21337
# ip = 0.0.0.0
# protocol = peer
@@ -69,7 +69,8 @@ time.nist.gov
pool.ntp.org
[ips]
r.ripple.com 51235
bacab.alloy.ee 21337
hubs.xahau.as16089.net 21337
[validators_file]
validators-example.txt
@@ -94,7 +95,7 @@ validators-example.txt
1000000
[network_id]
21338
21337
[amendments]
740352F2412A9909880C23A559FCECEDA3BE2126FED62FC7660D628A06927F11 Flow
@@ -144,4 +145,12 @@ D686F2538F410C9D0D856788E98E3579595DAF7B38D38887F81ECAC934B06040 HooksUpdate1
86E83A7D2ECE3AD5FA87AB2195AE015C950469ABF0B72EAACED318F74886AE90 CryptoConditionsSuite
3C43D9A973AA4443EF3FC38E42DD306160FBFFDAB901CD8BAA15D09F2597EB87 NonFungibleTokensV1
0285B7E5E08E1A8E4C15636F0591D87F73CB6A7B6452A932AD72BBC8E5D1CBE3 fixNFTokenDirV1
36799EA497B1369B170805C078AEFE6188345F9B3E324C21E9CA3FF574E3C3D6 fixNFTokenNegOffer
36799EA497B1369B170805C078AEFE6188345F9B3E324C21E9CA3FF574E3C3D6 fixNFTokenNegOffer
4C499D17719BB365B69010A436B64FD1A82AAB199FC1CEB06962EBD01059FB09 fixXahauV1
215181D23BF5C173314B5FDB9C872C92DE6CC918483727DE037C0C13E7E6EE9D fixXahauV2
0D8BF22FF7570D58598D1EF19EBB6E142AD46E59A223FD3816262FBB69345BEA Remit
7CA0426E7F411D39BB014E57CD9E08F61DE1750F0D41FCD428D9FB80BB7596B0 ZeroB2M
4B8466415FAB32FFA89D9DCBE166A42340115771DF611A7160F8D7439C87ECD8 fixNSDelete
EDB4EE4C524E16BDD91D9A529332DED08DCAAA51CC6DC897ACFA1A0ED131C5B6 fix240819
8063140E9260799D6716756B891CEC3E7006C4E4F277AB84670663A88F94B9C4 fixPageCap
88693F108C3CD8A967F3F4253A32DEF5E35F9406ACD2A11B88B11D90865763A9 fix240911

172
conanfile.py Normal file
View File

@@ -0,0 +1,172 @@
from conan import ConanFile
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
import re
class Xrpl(ConanFile):
name = 'xrpl'
license = 'ISC'
author = 'John Freeman <jfreeman@ripple.com>'
url = 'https://github.com/xrplf/rippled'
description = 'The XRP Ledger'
settings = 'os', 'compiler', 'build_type', 'arch'
options = {
'assertions': [True, False],
'coverage': [True, False],
'fPIC': [True, False],
'jemalloc': [True, False],
'reporting': [True, False],
'rocksdb': [True, False],
'shared': [True, False],
'static': [True, False],
'tests': [True, False],
'unity': [True, False],
'with_wasmedge': [True, False],
'tool_requires_b2': [True, False],
}
requires = [
'date/3.0.3',
'libarchive/3.6.0',
'lz4/1.9.4',
'grpc/1.50.1',
'nudb/2.0.8',
'openssl/3.6.0',
'protobuf/3.21.12',
'soci/4.0.3@xahaud/stable',
'zlib/1.3.1',
]
default_options = {
'assertions': False,
'coverage': False,
'fPIC': True,
'jemalloc': False,
'reporting': False,
'rocksdb': True,
'shared': False,
'static': True,
'tests': True,
'unity': False,
'with_wasmedge': True,
'tool_requires_b2': False,
'cassandra-cpp-driver/*:shared': False,
'date/*:header_only': False,
'grpc/*:shared': False,
'grpc/*:secure': True,
'libarchive/*:shared': False,
'libarchive/*:with_acl': False,
'libarchive/*:with_bzip2': False,
'libarchive/*:with_cng': False,
'libarchive/*:with_expat': False,
'libarchive/*:with_iconv': False,
'libarchive/*:with_libxml2': False,
'libarchive/*:with_lz4': True,
'libarchive/*:with_lzma': False,
'libarchive/*:with_lzo': False,
'libarchive/*:with_nettle': False,
'libarchive/*:with_openssl': False,
'libarchive/*:with_pcreposix': False,
'libarchive/*:with_xattr': False,
'libarchive/*:with_zlib': False,
'libpq/*:shared': False,
'lz4/*:shared': False,
'openssl/*:shared': False,
'protobuf/*:shared': False,
'protobuf/*:with_zlib': True,
'rocksdb/*:enable_sse': False,
'rocksdb/*:lite': False,
'rocksdb/*:shared': False,
'rocksdb/*:use_rtti': True,
'rocksdb/*:with_jemalloc': False,
'rocksdb/*:with_lz4': True,
'rocksdb/*:with_snappy': True,
'snappy/*:shared': False,
'soci/*:shared': False,
'soci/*:with_sqlite3': True,
'soci/*:with_boost': True,
}
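# Note: entries of the form '<package>/*:<option>' above do not configure this
# recipe; they set options on the named dependency, for any version of it.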
def set_version(self):
path = f'{self.recipe_folder}/src/ripple/protocol/impl/BuildInfo.cpp'
regex = r'versionString\s?=\s?\"(.*)\"'
with open(path, 'r') as file:
matches = (re.search(regex, line) for line in file)
match = next(m for m in matches if m)
self.version = match.group(1)
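# The regex above targets the version declaration in BuildInfo.cpp, i.e. a
# line containing  versionString = "<version>"  (illustrative form); the
# first capture group becomes the Conan package version.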
def build_requirements(self):
# These provide build tools (protoc, grpc plugins) that run during build
self.tool_requires('protobuf/3.21.12')
self.tool_requires('grpc/1.50.1')
# Explicitly require b2 (e.g. for building from source for glibc compatibility)
if self.options.tool_requires_b2:
self.tool_requires('b2/5.3.2')
def configure(self):
if self.settings.compiler == 'apple-clang':
self.options['boost/*'].visibility = 'global'
def requirements(self):
# Force sqlite3 version to avoid conflicts with soci
self.requires('sqlite3/3.42.0', override=True)
# Force our custom snappy build for all dependencies
self.requires('snappy/1.1.10@xahaud/stable', override=True)
# Force boost version for all dependencies to avoid conflicts
self.requires('boost/1.86.0', override=True)
if self.options.with_wasmedge:
self.requires('wasmedge/0.11.2@xahaud/stable')
if self.options.jemalloc:
self.requires('jemalloc/5.2.1')
if self.options.reporting:
self.requires('cassandra-cpp-driver/2.15.3')
self.requires('libpq/13.6')
if self.options.rocksdb:
self.requires('rocksdb/6.27.3')
exports_sources = (
'CMakeLists.txt', 'Builds/*', 'bin/getRippledInfo', 'src/*', 'cfg/*'
)
def layout(self):
cmake_layout(self)
# Fix this setting to follow the default introduced in Conan 1.48
# to align with our build instructions.
self.folders.generators = 'build/generators'
generators = 'CMakeDeps'
def generate(self):
tc = CMakeToolchain(self)
tc.variables['tests'] = self.options.tests
tc.variables['assert'] = self.options.assertions
tc.variables['coverage'] = self.options.coverage
tc.variables['jemalloc'] = self.options.jemalloc
tc.variables['reporting'] = self.options.reporting
tc.variables['rocksdb'] = self.options.rocksdb
tc.variables['BUILD_SHARED_LIBS'] = self.options.shared
tc.variables['static'] = self.options.static
tc.variables['unity'] = self.options.unity
tc.generate()
def build(self):
cmake = CMake(self)
cmake.verbose = True
cmake.configure()
cmake.build()
def package(self):
cmake = CMake(self)
cmake.verbose = True
cmake.install()
def package_info(self):
libxrpl = self.cpp_info.components['libxrpl']
libxrpl.libs = [
'libxrpl_core.a',
'libed25519.a',
'libsecp256k1.a',
]
libxrpl.includedirs = ['include']
libxrpl.requires = ['boost::boost']

11
docker-unit-tests.sh Normal file → Executable file
View File

@@ -1,4 +1,11 @@
#!/bin/bash
#!/bin/bash -x
docker run --rm -i -v $(pwd):/io ubuntu sh -c '/io/release-build/xahaud -u'
BUILD_CORES=$(echo "scale=0 ; `nproc` / 1.337" | bc)
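# Dividing by 1.337 yields roughly 75% of the available cores (bc truncates
# because scale=0), leaving some headroom on the host.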
if [[ "$GITHUB_REPOSITORY" == "" ]]; then
# Default when not running in GitHub Actions (GITHUB_REPOSITORY unset)
BUILD_CORES=8
fi
echo "Mounting $(pwd)/io in ubuntu and running unit tests"
docker run --rm -i -v $(pwd):/io --platform=linux/amd64 -e BUILD_CORES=$BUILD_CORES ubuntu sh -c '/io/release-build/xahaud --unittest-jobs $BUILD_CORES -u'

84
docs/build/environment.md vendored Normal file
View File

@@ -0,0 +1,84 @@
Our [build instructions][BUILD.md] assume you have a C++ development
environment complete with Git, Python, Conan, CMake, and a C++ compiler.
This document exists to help readers set one up on any of the Big Three
platforms: Linux, macOS, or Windows.
[BUILD.md]: ../../BUILD.md
## Linux
Package ecosystems vary across Linux distributions,
so there is no one set of instructions that will work for every Linux user.
These instructions are written for Ubuntu 22.04.
They are largely copied from the [script][1] used to configure our Docker
container for continuous integration.
That script handles many more responsibilities.
These instructions are just the bare minimum to build one configuration of
rippled.
You can check that codebase for other Linux distributions and versions.
If you cannot find yours there,
then we hope that these instructions can at least guide you in the right
direction.
```
apt update
apt install --yes curl git libssl-dev python3.10-dev python3-pip make g++-11
curl --location --remote-name \
"https://github.com/Kitware/CMake/releases/download/v3.25.1/cmake-3.25.1.tar.gz"
tar -xzf cmake-3.25.1.tar.gz
rm cmake-3.25.1.tar.gz
cd cmake-3.25.1
./bootstrap --parallel=$(nproc)
make --jobs $(nproc)
make install
cd ..
pip3 install 'conan<2'
```
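A quick sanity check that the toolchain landed where the build expects it
(a minimal check, assuming the steps above succeeded and everything is on your PATH):
```
cmake --version   # expect 3.25.1
g++-11 --version  # expect an 11.x release
conan --version   # expect a 1.x release
```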
[1]: https://github.com/thejohnfreeman/rippled-docker/blob/master/ubuntu-22.04/install.sh
## macOS
Open a Terminal and enter the below command to bring up a dialog to install
the command line developer tools.
Once it is finished, this command should return a version greater than the
minimum required (see [BUILD.md][]).
```
clang --version
```
The command line developer tools should include Git too:
```
git --version
```
Install [Homebrew][],
use it to install [pyenv][],
use pyenv to install Python,
and use pip to install Conan:
[Homebrew]: https://brew.sh/
[pyenv]: https://github.com/pyenv/pyenv
```
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew update
brew install xz
brew install pyenv
pyenv install 3.10-dev
pyenv global 3.10-dev
eval "$(pyenv init -)"
pip install 'conan<2'
```
Install CMake with Homebrew too:
```
brew install cmake
```

159
docs/build/install.md vendored Normal file
View File

@@ -0,0 +1,159 @@
This document contains instructions for installing rippled.
The APT package manager is common on Debian-based Linux distributions like
Ubuntu,
while the YUM package manager is common on Red Hat-based Linux distributions
like CentOS.
Installing from source is an option for all platforms,
and the only supported option for installing custom builds.
## From source
From a source build, you can install rippled and libxrpl using CMake's
`--install` mode:
```
cmake --install . --prefix /opt/local
```
The default [prefix][1] is typically `/usr/local` on Linux and macOS and
`C:/Program Files/rippled` on Windows.
[1]: https://cmake.org/cmake/help/latest/variable/CMAKE_INSTALL_PREFIX.html
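After installing to a custom prefix like the one above, the `rippled` binary
typically ends up under `<prefix>/bin`; a quick smoke test, assuming that layout:
```
export PATH=/opt/local/bin:$PATH
rippled --version
```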
## With the APT package manager
1. Update repositories:
sudo apt update -y
2. Install utilities:
sudo apt install -y apt-transport-https ca-certificates wget gnupg
3. Add Ripple's package-signing GPG key to your list of trusted keys:
sudo mkdir /usr/local/share/keyrings/
wget -q -O - "https://repos.ripple.com/repos/api/gpg/key/public" | gpg --dearmor > ripple-key.gpg
sudo mv ripple-key.gpg /usr/local/share/keyrings
4. Check the fingerprint of the newly-added key:
gpg /usr/local/share/keyrings/ripple-key.gpg
The output should include an entry for Ripple such as the following:
gpg: WARNING: no command supplied. Trying to guess what you mean ...
pub rsa3072 2019-02-14 [SC] [expires: 2026-02-17]
C0010EC205B35A3310DC90DE395F97FFCCAFD9A2
uid TechOps Team at Ripple <techops+rippled@ripple.com>
sub rsa3072 2019-02-14 [E] [expires: 2026-02-17]
In particular, make sure that the fingerprint matches. (In the above example, the fingerprint is on the third line, starting with `C001`.)
5. Add the appropriate Ripple repository for your operating system version:
echo "deb [signed-by=/usr/local/share/keyrings/ripple-key.gpg] https://repos.ripple.com/repos/rippled-deb focal stable" | \
sudo tee -a /etc/apt/sources.list.d/ripple.list
The above example is appropriate for **Ubuntu 20.04 Focal Fossa**. For other operating systems, replace the word `focal` with one of the following:
- `jammy` for **Ubuntu 22.04 Jammy Jellyfish**
- `bionic` for **Ubuntu 18.04 Bionic Beaver**
- `bullseye` for **Debian 11 Bullseye**
- `buster` for **Debian 10 Buster**
If you want access to development or pre-release versions of `rippled`, use one of the following instead of `stable`:
- `unstable` - Pre-release builds ([`release` branch](https://github.com/ripple/rippled/tree/release))
- `nightly` - Experimental/development builds ([`develop` branch](https://github.com/ripple/rippled/tree/develop))
**Warning:** Unstable and nightly builds may be broken at any time. Do not use these builds for production servers.
6. Fetch the Ripple repository.
sudo apt -y update
7. Install the `rippled` software package:
sudo apt -y install rippled
8. Check the status of the `rippled` service:
systemctl status rippled.service
The `rippled` service should start automatically. If not, you can start it manually:
sudo systemctl start rippled.service
9. Optional: allow `rippled` to bind to privileged ports.
This allows you to serve incoming API requests on port 80 or 443. (If you want to do so, you must also update the config file's port settings.)
sudo setcap 'cap_net_bind_service=+ep' /opt/ripple/bin/rippled
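To verify the capability was applied (assuming the `getcap` utility from libcap is installed):
getcap /opt/ripple/bin/rippled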
## With the YUM package manager
1. Install the Ripple RPM repository:
Choose the appropriate RPM repository for the stability of releases you want:
- `stable` for the latest production release (`master` branch)
- `unstable` for pre-release builds (`release` branch)
- `nightly` for experimental/development builds (`develop` branch)
*Stable*
cat << REPOFILE | sudo tee /etc/yum.repos.d/ripple.repo
[ripple-stable]
name=XRP Ledger Packages
enabled=1
gpgcheck=0
repo_gpgcheck=1
baseurl=https://repos.ripple.com/repos/rippled-rpm/stable/
gpgkey=https://repos.ripple.com/repos/rippled-rpm/stable/repodata/repomd.xml.key
REPOFILE
*Unstable*
cat << REPOFILE | sudo tee /etc/yum.repos.d/ripple.repo
[ripple-unstable]
name=XRP Ledger Packages
enabled=1
gpgcheck=0
repo_gpgcheck=1
baseurl=https://repos.ripple.com/repos/rippled-rpm/unstable/
gpgkey=https://repos.ripple.com/repos/rippled-rpm/unstable/repodata/repomd.xml.key
REPOFILE
*Nightly*
cat << REPOFILE | sudo tee /etc/yum.repos.d/ripple.repo
[ripple-nightly]
name=XRP Ledger Packages
enabled=1
gpgcheck=0
repo_gpgcheck=1
baseurl=https://repos.ripple.com/repos/rippled-rpm/nightly/
gpgkey=https://repos.ripple.com/repos/rippled-rpm/nightly/repodata/repomd.xml.key
REPOFILE
2. Fetch the latest repo updates:
sudo yum -y update
3. Install the new `rippled` package:
sudo yum install -y rippled
4. Configure the `rippled` service to start on boot:
sudo systemctl enable rippled.service
5. Start the `rippled` service:
sudo systemctl start rippled.service
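As with the APT install, confirm the service actually came up:
systemctl status rippled.service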

193
external/rocksdb/conanfile.py vendored Normal file
View File

@@ -0,0 +1,193 @@
import os
import shutil
from conans import ConanFile, CMake
from conan.tools import microsoft as ms
class RocksDB(ConanFile):
name = 'rocksdb'
version = '6.27.3'
license = ('GPL-2.0-only', 'Apache-2.0')
url = 'https://github.com/conan-io/conan-center-index'
description = 'A library that provides an embeddable, persistent key-value store for fast storage'
topics = ('rocksdb', 'database', 'leveldb', 'facebook', 'key-value')
settings = 'os', 'compiler', 'build_type', 'arch'
options = {
'enable_sse': [False, 'sse42', 'avx2'],
'fPIC': [True, False],
'lite': [True, False],
'shared': [True, False],
'use_rtti': [True, False],
'with_gflags': [True, False],
'with_jemalloc': [True, False],
'with_lz4': [True, False],
'with_snappy': [True, False],
'with_tbb': [True, False],
'with_zlib': [True, False],
'with_zstd': [True, False],
}
default_options = {
'enable_sse': False,
'fPIC': True,
'lite': False,
'shared': False,
'use_rtti': False,
'with_gflags': False,
'with_jemalloc': False,
'with_lz4': False,
'with_snappy': False,
'with_tbb': False,
'with_zlib': False,
'with_zstd': False,
}
def requirements(self):
if self.options.with_gflags:
self.requires('gflags/2.2.2')
if self.options.with_jemalloc:
self.requires('jemalloc/5.2.1')
if self.options.with_lz4:
self.requires('lz4/1.9.3')
if self.options.with_snappy:
self.requires('snappy/1.1.9')
if self.options.with_tbb:
self.requires('onetbb/2020.3')
if self.options.with_zlib:
self.requires('zlib/1.2.11')
if self.options.with_zstd:
self.requires('zstd/1.5.2')
def config_options(self):
if self.settings.os == 'Windows':
del self.options.fPIC
def configure(self):
if self.options.shared:
del self.options.fPIC
generators = 'cmake', 'cmake_find_package'
scm = {
'type': 'git',
'url': 'https://github.com/facebook/rocksdb.git',
'revision': 'v6.27.3',
}
exports_sources = 'thirdparty.inc'
# For out-of-source build.
no_copy_source = True
_cmake = None
def _configure_cmake(self):
if self._cmake:
return
self._cmake = CMake(self)
self._cmake.definitions['CMAKE_POSITION_INDEPENDENT_CODE'] = True
self._cmake.definitions['DISABLE_STALL_NOTIF'] = False
self._cmake.definitions['FAIL_ON_WARNINGS'] = False
self._cmake.definitions['OPTDBG'] = True
self._cmake.definitions['WITH_TESTS'] = False
self._cmake.definitions['WITH_TOOLS'] = False
self._cmake.definitions['WITH_GFLAGS'] = self.options.with_gflags
self._cmake.definitions['WITH_JEMALLOC'] = self.options.with_jemalloc
self._cmake.definitions['WITH_LZ4'] = self.options.with_lz4
self._cmake.definitions['WITH_SNAPPY'] = self.options.with_snappy
self._cmake.definitions['WITH_TBB'] = self.options.with_tbb
self._cmake.definitions['WITH_ZLIB'] = self.options.with_zlib
self._cmake.definitions['WITH_ZSTD'] = self.options.with_zstd
self._cmake.definitions['USE_RTTI'] = self.options.use_rtti
self._cmake.definitions['ROCKSDB_LITE'] = self.options.lite
self._cmake.definitions['ROCKSDB_INSTALL_ON_WINDOWS'] = (
self.settings.os == 'Windows'
)
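# enable_sse maps onto RocksDB's PORTABLE/FORCE_SSE42 switches:
# False -> portable build without SSE4.2; 'sse42' -> portable build with
# SSE4.2 forced on; 'avx2' -> non-portable build tuned for the host CPU.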
if not self.options.enable_sse:
self._cmake.definitions['PORTABLE'] = True
self._cmake.definitions['FORCE_SSE42'] = False
elif self.options.enable_sse == 'sse42':
self._cmake.definitions['PORTABLE'] = True
self._cmake.definitions['FORCE_SSE42'] = True
elif self.options.enable_sse == 'avx2':
self._cmake.definitions['PORTABLE'] = False
self._cmake.definitions['FORCE_SSE42'] = False
self._cmake.definitions['WITH_ASAN'] = False
self._cmake.definitions['WITH_BZ2'] = False
self._cmake.definitions['WITH_JNI'] = False
self._cmake.definitions['WITH_LIBRADOS'] = False
if ms.is_msvc(self):
self._cmake.definitions['WITH_MD_LIBRARY'] = (
ms.msvc_runtime_flag(self).startswith('MD')
)
self._cmake.definitions['WITH_RUNTIME_DEBUG'] = (
ms.msvc_runtime_flag(self).endswith('d')
)
self._cmake.definitions['WITH_NUMA'] = False
self._cmake.definitions['WITH_TSAN'] = False
self._cmake.definitions['WITH_UBSAN'] = False
self._cmake.definitions['WITH_WINDOWS_UTF8_FILENAMES'] = False
self._cmake.definitions['WITH_XPRESS'] = False
self._cmake.definitions['WITH_FALLOCATE'] = True
def build(self):
if ms.is_msvc(self):
file = os.path.join(
self.recipe_folder, '..', 'export_source', 'thirdparty.inc'
)
shutil.copy(file, self.build_folder)
self._configure_cmake()
self._cmake.configure()
self._cmake.build()
def package(self):
self._configure_cmake()
self._cmake.install()
def package_info(self):
self.cpp_info.filenames['cmake_find_package'] = 'RocksDB'
self.cpp_info.filenames['cmake_find_package_multi'] = 'RocksDB'
self.cpp_info.set_property('cmake_file_name', 'RocksDB')
self.cpp_info.names['cmake_find_package'] = 'RocksDB'
self.cpp_info.names['cmake_find_package_multi'] = 'RocksDB'
self.cpp_info.components['librocksdb'].names['cmake_find_package'] = 'rocksdb'
self.cpp_info.components['librocksdb'].names['cmake_find_package_multi'] = 'rocksdb'
self.cpp_info.components['librocksdb'].set_property(
'cmake_target_name', 'RocksDB::rocksdb'
)
self.cpp_info.components['librocksdb'].libs = ['rocksdb']
if self.settings.os == "Windows":
self.cpp_info.components["librocksdb"].system_libs = ["shlwapi", "rpcrt4"]
if self.options.shared:
self.cpp_info.components["librocksdb"].defines = ["ROCKSDB_DLL"]
elif self.settings.os in ["Linux", "FreeBSD"]:
self.cpp_info.components["librocksdb"].system_libs = ["pthread", "m"]
if self.options.lite:
self.cpp_info.components["librocksdb"].defines.append("ROCKSDB_LITE")
if self.options.with_gflags:
self.cpp_info.components["librocksdb"].requires.append("gflags::gflags")
if self.options.with_jemalloc:
self.cpp_info.components["librocksdb"].requires.append("jemalloc::jemalloc")
if self.options.with_lz4:
self.cpp_info.components["librocksdb"].requires.append("lz4::lz4")
if self.options.with_snappy:
self.cpp_info.components["librocksdb"].requires.append("snappy::snappy")
if self.options.with_tbb:
self.cpp_info.components["librocksdb"].requires.append("onetbb::onetbb")
if self.options.with_zlib:
self.cpp_info.components["librocksdb"].requires.append("zlib::zlib")
if self.options.with_zstd:
self.cpp_info.components["librocksdb"].requires.append("zstd::zstd")

62
external/rocksdb/thirdparty.inc vendored Normal file
View File

@@ -0,0 +1,62 @@
if(WITH_GFLAGS)
# Config with namespace available since gflags 2.2.2
find_package(gflags REQUIRED)
set(GFLAGS_LIB gflags::gflags)
list(APPEND THIRDPARTY_LIBS ${GFLAGS_LIB})
add_definitions(-DGFLAGS=1)
endif()
if(WITH_SNAPPY)
find_package(Snappy REQUIRED)
add_definitions(-DSNAPPY)
list(APPEND THIRDPARTY_LIBS Snappy::snappy)
endif()
if(WITH_LZ4)
find_package(lz4 REQUIRED)
add_definitions(-DLZ4)
list(APPEND THIRDPARTY_LIBS lz4::lz4)
endif()
if(WITH_ZLIB)
find_package(ZLIB REQUIRED)
add_definitions(-DZLIB)
list(APPEND THIRDPARTY_LIBS ZLIB::ZLIB)
endif()
option(WITH_BZ2 "build with bzip2" OFF)
if(WITH_BZ2)
find_package(BZip2 REQUIRED)
add_definitions(-DBZIP2)
list(APPEND THIRDPARTY_LIBS BZip2::BZip2)
endif()
if(WITH_ZSTD)
find_package(zstd REQUIRED)
add_definitions(-DZSTD)
list(APPEND THIRDPARTY_LIBS zstd::zstd)
endif()
# ================================================== XPRESS ==================================================
# This makes use of built-in Windows API, no additional includes, links to a system lib
if(WITH_XPRESS)
message(STATUS "XPRESS is enabled")
add_definitions(-DXPRESS)
# We are using the implementation provided by the system
list(APPEND SYSTEM_LIBS Cabinet.lib)
else()
message(STATUS "XPRESS is disabled")
endif()
# ================================================== JEMALLOC ==================================================
if(WITH_JEMALLOC)
message(STATUS "JEMALLOC library is enabled")
add_definitions(-DROCKSDB_JEMALLOC -DJEMALLOC_EXPORT= -DJEMALLOC_NO_RENAME)
list(APPEND THIRDPARTY_LIBS jemalloc::jemalloc)
set(ARTIFACT_SUFFIX "_je")
else ()
set(ARTIFACT_SUFFIX "")
message(STATUS "JEMALLOC library is disabled")
endif ()

40
external/snappy/conandata.yml vendored Normal file
View File

@@ -0,0 +1,40 @@
sources:
"1.1.10":
url: "https://github.com/google/snappy/archive/1.1.10.tar.gz"
sha256: "49d831bffcc5f3d01482340fe5af59852ca2fe76c3e05df0e67203ebbe0f1d90"
"1.1.9":
url: "https://github.com/google/snappy/archive/1.1.9.tar.gz"
sha256: "75c1fbb3d618dd3a0483bff0e26d0a92b495bbe5059c8b4f1c962b478b6e06e7"
"1.1.8":
url: "https://github.com/google/snappy/archive/1.1.8.tar.gz"
sha256: "16b677f07832a612b0836178db7f374e414f94657c138e6993cbfc5dcc58651f"
"1.1.7":
url: "https://github.com/google/snappy/archive/1.1.7.tar.gz"
sha256: "3dfa02e873ff51a11ee02b9ca391807f0c8ea0529a4924afa645fbf97163f9d4"
patches:
"1.1.10":
- patch_file: "patches/1.1.10-0001-fix-inlining-failure.patch"
patch_description: "disable inlining for compilation error"
patch_type: "portability"
- patch_file: "patches/1.1.9-0002-no-Werror.patch"
patch_description: "disable 'warning as error' options"
patch_type: "portability"
- patch_file: "patches/1.1.10-0003-fix-clobber-list-older-llvm.patch"
patch_description: "disable inline asm on apple-clang"
patch_type: "portability"
- patch_file: "patches/1.1.9-0004-rtti-by-default.patch"
patch_description: "remove 'disable rtti'"
patch_type: "conan"
"1.1.9":
- patch_file: "patches/1.1.9-0001-fix-inlining-failure.patch"
patch_description: "disable inlining for compilation error"
patch_type: "portability"
- patch_file: "patches/1.1.9-0002-no-Werror.patch"
patch_description: "disable 'warning as error' options"
patch_type: "portability"
- patch_file: "patches/1.1.9-0003-fix-clobber-list-older-llvm.patch"
patch_description: "disable inline asm on apple-clang"
patch_type: "portability"
- patch_file: "patches/1.1.9-0004-rtti-by-default.patch"
patch_description: "remove 'disable rtti'"
patch_type: "conan"

94
external/snappy/conanfile.py vendored Normal file
View File

@@ -0,0 +1,94 @@
from conan import ConanFile
from conan.tools.build import check_min_cppstd
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout
from conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get, rmdir
from conan.tools.scm import Version
import os
required_conan_version = ">=1.54.0"
class SnappyConan(ConanFile):
name = "snappy"
description = "A fast compressor/decompressor"
topics = ("google", "compressor", "decompressor")
url = "https://github.com/conan-io/conan-center-index"
homepage = "https://github.com/google/snappy"
license = "BSD-3-Clause"
package_type = "library"
settings = "os", "arch", "compiler", "build_type"
options = {
"shared": [True, False],
"fPIC": [True, False],
}
default_options = {
"shared": False,
"fPIC": True,
}
def export_sources(self):
export_conandata_patches(self)
def config_options(self):
if self.settings.os == 'Windows':
del self.options.fPIC
def configure(self):
if self.options.shared:
self.options.rm_safe("fPIC")
def layout(self):
cmake_layout(self, src_folder="src")
def validate(self):
if self.settings.compiler.get_safe("cppstd"):
check_min_cppstd(self, 11)
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def generate(self):
tc = CMakeToolchain(self)
tc.variables["SNAPPY_BUILD_TESTS"] = False
if Version(self.version) >= "1.1.8":
tc.variables["SNAPPY_FUZZING_BUILD"] = False
tc.variables["SNAPPY_REQUIRE_AVX"] = False
tc.variables["SNAPPY_REQUIRE_AVX2"] = False
tc.variables["SNAPPY_INSTALL"] = True
if Version(self.version) >= "1.1.9":
tc.variables["SNAPPY_BUILD_BENCHMARKS"] = False
tc.generate()
def build(self):
apply_conandata_patches(self)
cmake = CMake(self)
cmake.configure()
cmake.build()
def package(self):
copy(self, "COPYING", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
cmake = CMake(self)
cmake.install()
rmdir(self, os.path.join(self.package_folder, "lib", "cmake"))
def package_info(self):
self.cpp_info.set_property("cmake_file_name", "Snappy")
self.cpp_info.set_property("cmake_target_name", "Snappy::snappy")
# TODO: back to global scope in conan v2 once cmake_find_package* generators removed
self.cpp_info.components["snappylib"].libs = ["snappy"]
# The following block is commented out as a workaround for a bug in the
# Conan 1.x CMakeDeps generator. Including system_libs ("m") here
# incorrectly triggers a heuristic that adds a dynamic link to `stdc++`
# (-lstdc++), preventing a fully static build.
# This behavior is expected to be corrected in Conan 2.
# if not self.options.shared:
# if self.settings.os in ["Linux", "FreeBSD"]:
# self.cpp_info.components["snappylib"].system_libs.append("m")
# TODO: to remove in conan v2 once cmake_find_package* generators removed
self.cpp_info.names["cmake_find_package"] = "Snappy"
self.cpp_info.names["cmake_find_package_multi"] = "Snappy"
self.cpp_info.components["snappylib"].names["cmake_find_package"] = "snappy"
self.cpp_info.components["snappylib"].names["cmake_find_package_multi"] = "snappy"
self.cpp_info.components["snappylib"].set_property("cmake_target_name", "Snappy::snappy")

View File

@@ -0,0 +1,13 @@
diff --git a/snappy-stubs-internal.h b/snappy-stubs-internal.h
index 1548ed7..3b4a9f3 100644
--- a/snappy-stubs-internal.h
+++ b/snappy-stubs-internal.h
@@ -100,7 +100,7 @@
// Inlining hints.
#if HAVE_ATTRIBUTE_ALWAYS_INLINE
-#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE __attribute__((always_inline))
+#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#else
#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#endif // HAVE_ATTRIBUTE_ALWAYS_INLINE

View File

@@ -0,0 +1,13 @@
diff --git a/snappy.cc b/snappy.cc
index d414718..e4efb59 100644
--- a/snappy.cc
+++ b/snappy.cc
@@ -1132,7 +1132,7 @@ inline size_t AdvanceToNextTagX86Optimized(const uint8_t** ip_p, size_t* tag) {
size_t literal_len = *tag >> 2;
size_t tag_type = *tag;
bool is_literal;
-#if defined(__GCC_ASM_FLAG_OUTPUTS__) && defined(__x86_64__)
+#if defined(__GCC_ASM_FLAG_OUTPUTS__) && defined(__x86_64__) && ( (!defined(__clang__) && !defined(__APPLE__)) || (!defined(__APPLE__) && defined(__clang__) && (__clang_major__ >= 9)) || (defined(__APPLE__) && defined(__clang__) && (__clang_major__ > 11)) )
// TODO clang misses the fact that the (c & 3) already correctly
// sets the zero flag.
asm("and $3, %k[tag_type]\n\t"

View File

@@ -0,0 +1,14 @@
Fixes the following error:
error: inlining failed in call to always_inline size_t snappy::AdvanceToNextTag(const uint8_t**, size_t*): function body can be overwritten at link time
--- snappy-stubs-internal.h
+++ snappy-stubs-internal.h
@@ -100,7 +100,7 @@
// Inlining hints.
#ifdef HAVE_ATTRIBUTE_ALWAYS_INLINE
-#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE __attribute__((always_inline))
+#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#else
#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE
#endif

View File

@@ -0,0 +1,12 @@
--- CMakeLists.txt
+++ CMakeLists.txt
@@ -69,7 +69,7 @@
- # Use -Werror for clang only.
+if(0)
if(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
if(NOT CMAKE_CXX_FLAGS MATCHES "-Werror")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror")
endif(NOT CMAKE_CXX_FLAGS MATCHES "-Werror")
endif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
-
+endif()

View File

@@ -0,0 +1,12 @@
asm clobbers do not work for clang < 9 and apple-clang < 11 (found by SpaceIm)
--- snappy.cc
+++ snappy.cc
@@ -1026,7 +1026,7 @@
size_t literal_len = *tag >> 2;
size_t tag_type = *tag;
bool is_literal;
-#if defined(__GNUC__) && defined(__x86_64__)
+#if defined(__GNUC__) && defined(__x86_64__) && ( (!defined(__clang__) && !defined(__APPLE__)) || (!defined(__APPLE__) && defined(__clang__) && (__clang_major__ >= 9)) || (defined(__APPLE__) && defined(__clang__) && (__clang_major__ > 11)) )
// TODO clang misses the fact that the (c & 3) already correctly
// sets the zero flag.
asm("and $3, %k[tag_type]\n\t"

View File

@@ -0,0 +1,20 @@
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -53,8 +53,6 @@ if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
add_definitions(-D_HAS_EXCEPTIONS=0)
# Disable RTTI.
- string(REGEX REPLACE "/GR" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /GR-")
else(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# Use -Wall for clang and gcc.
if(NOT CMAKE_CXX_FLAGS MATCHES "-Wall")
@@ -78,8 +76,6 @@ endif()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-exceptions")
# Disable RTTI.
- string(REGEX REPLACE "-frtti" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-rtti")
endif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# BUILD_SHARED_LIBS is a standard CMake variable, but we declare it here to make

12
external/soci/conandata.yml vendored Normal file
View File

@@ -0,0 +1,12 @@
sources:
"4.0.3":
url: "https://github.com/SOCI/soci/archive/v4.0.3.tar.gz"
sha256: "4b1ff9c8545c5d802fbe06ee6cd2886630e5c03bf740e269bb625b45cf934928"
patches:
"4.0.3":
- patch_file: "patches/0001-Remove-hardcoded-INSTALL_NAME_DIR-for-relocatable-li.patch"
patch_description: "Generate relocatable libraries on MacOS"
patch_type: "portability"
- patch_file: "patches/0002-Fix-soci_backend.patch"
patch_description: "Fix variable names for dependencies"
patch_type: "conan"

212
external/soci/conanfile.py vendored Normal file
View File

@@ -0,0 +1,212 @@
from conan import ConanFile
from conan.tools.build import check_min_cppstd
from conan.tools.cmake import CMake, CMakeDeps, CMakeToolchain, cmake_layout
from conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get, rmdir
from conan.tools.microsoft import is_msvc
from conan.tools.scm import Version
from conan.errors import ConanInvalidConfiguration
import os
required_conan_version = ">=1.55.0"
class SociConan(ConanFile):
name = "soci"
homepage = "https://github.com/SOCI/soci"
url = "https://github.com/conan-io/conan-center-index"
description = "The C++ Database Access Library "
topics = ("mysql", "odbc", "postgresql", "sqlite3")
license = "BSL-1.0"
settings = "os", "arch", "compiler", "build_type"
options = {
"shared": [True, False],
"fPIC": [True, False],
"empty": [True, False],
"with_sqlite3": [True, False],
"with_db2": [True, False],
"with_odbc": [True, False],
"with_oracle": [True, False],
"with_firebird": [True, False],
"with_mysql": [True, False],
"with_postgresql": [True, False],
"with_boost": [True, False],
}
default_options = {
"shared": False,
"fPIC": True,
"empty": False,
"with_sqlite3": False,
"with_db2": False,
"with_odbc": False,
"with_oracle": False,
"with_firebird": False,
"with_mysql": False,
"with_postgresql": False,
"with_boost": False,
}
def export_sources(self):
export_conandata_patches(self)
def layout(self):
cmake_layout(self, src_folder="src")
def config_options(self):
if self.settings.os == "Windows":
self.options.rm_safe("fPIC")
def configure(self):
if self.options.shared:
self.options.rm_safe("fPIC")
def requirements(self):
if self.options.with_sqlite3:
self.requires("sqlite3/3.41.1")
if self.options.with_odbc and self.settings.os != "Windows":
self.requires("odbc/2.3.11")
if self.options.with_mysql:
self.requires("libmysqlclient/8.0.31")
if self.options.with_postgresql:
self.requires("libpq/14.7")
if self.options.with_boost:
self.requires("boost/1.81.0")
@property
def _minimum_compilers_version(self):
return {
"Visual Studio": "14",
"gcc": "4.8",
"clang": "3.8",
"apple-clang": "8.0"
}
def validate(self):
if self.settings.compiler.get_safe("cppstd"):
check_min_cppstd(self, 11)
compiler = str(self.settings.compiler)
compiler_version = Version(self.settings.compiler.version.value)
if compiler not in self._minimum_compilers_version:
self.output.warning("{} recipe lacks information about the {} compiler support.".format(self.name, self.settings.compiler))
elif compiler_version < self._minimum_compilers_version[compiler]:
raise ConanInvalidConfiguration("{} requires a {} version >= {}".format(self.name, compiler, compiler_version))
prefix = "Dependencies for"
message = "not configured in this conan package."
if self.options.with_db2:
# self.requires("db2/0.0.0") # TODO add support for db2
raise ConanInvalidConfiguration("{} DB2 {} ".format(prefix, message))
if self.options.with_oracle:
# self.requires("oracle_db/0.0.0") # TODO add support for oracle
raise ConanInvalidConfiguration("{} ORACLE {} ".format(prefix, message))
if self.options.with_firebird:
# self.requires("firebird/0.0.0") # TODO add support for firebird
raise ConanInvalidConfiguration("{} firebird {} ".format(prefix, message))
def source(self):
get(self, **self.conan_data["sources"][self.version], strip_root=True)
def generate(self):
tc = CMakeToolchain(self)
tc.variables["SOCI_SHARED"] = self.options.shared
tc.variables["SOCI_STATIC"] = not self.options.shared
tc.variables["SOCI_TESTS"] = False
tc.variables["SOCI_CXX11"] = True
tc.variables["SOCI_EMPTY"] = self.options.empty
tc.variables["WITH_SQLITE3"] = self.options.with_sqlite3
tc.variables["WITH_DB2"] = self.options.with_db2
tc.variables["WITH_ODBC"] = self.options.with_odbc
tc.variables["WITH_ORACLE"] = self.options.with_oracle
tc.variables["WITH_FIREBIRD"] = self.options.with_firebird
tc.variables["WITH_MYSQL"] = self.options.with_mysql
tc.variables["WITH_POSTGRESQL"] = self.options.with_postgresql
tc.variables["WITH_BOOST"] = self.options.with_boost
tc.generate()
deps = CMakeDeps(self)
deps.generate()
def build(self):
apply_conandata_patches(self)
cmake = CMake(self)
cmake.configure()
cmake.build()
def package(self):
copy(self, "LICENSE_1_0.txt", dst=os.path.join(self.package_folder, "licenses"), src=self.source_folder)
cmake = CMake(self)
cmake.install()
rmdir(self, os.path.join(self.package_folder, "lib", "cmake"))
def package_info(self):
self.cpp_info.set_property("cmake_file_name", "SOCI")
target_suffix = "" if self.options.shared else "_static"
lib_prefix = "lib" if is_msvc(self) and not self.options.shared else ""
version = Version(self.version)
lib_suffix = "_{}_{}".format(version.major, version.minor) if self.settings.os == "Windows" else ""
# soci_core
self.cpp_info.components["soci_core"].set_property("cmake_target_name", "SOCI::soci_core{}".format(target_suffix))
self.cpp_info.components["soci_core"].libs = ["{}soci_core{}".format(lib_prefix, lib_suffix)]
if self.options.with_boost:
self.cpp_info.components["soci_core"].requires.append("boost::headers")
# soci_empty
if self.options.empty:
self.cpp_info.components["soci_empty"].set_property("cmake_target_name", "SOCI::soci_empty{}".format(target_suffix))
self.cpp_info.components["soci_empty"].libs = ["{}soci_empty{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_empty"].requires = ["soci_core"]
# soci_sqlite3
if self.options.with_sqlite3:
self.cpp_info.components["soci_sqlite3"].set_property("cmake_target_name", "SOCI::soci_sqlite3{}".format(target_suffix))
self.cpp_info.components["soci_sqlite3"].libs = ["{}soci_sqlite3{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_sqlite3"].requires = ["soci_core", "sqlite3::sqlite3"]
# soci_odbc
if self.options.with_odbc:
self.cpp_info.components["soci_odbc"].set_property("cmake_target_name", "SOCI::soci_odbc{}".format(target_suffix))
self.cpp_info.components["soci_odbc"].libs = ["{}soci_odbc{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_odbc"].requires = ["soci_core"]
if self.settings.os == "Windows":
self.cpp_info.components["soci_odbc"].system_libs.append("odbc32")
else:
self.cpp_info.components["soci_odbc"].requires.append("odbc::odbc")
# soci_mysql
if self.options.with_mysql:
self.cpp_info.components["soci_mysql"].set_property("cmake_target_name", "SOCI::soci_mysql{}".format(target_suffix))
self.cpp_info.components["soci_mysql"].libs = ["{}soci_mysql{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_mysql"].requires = ["soci_core", "libmysqlclient::libmysqlclient"]
# soci_postgresql
if self.options.with_postgresql:
self.cpp_info.components["soci_postgresql"].set_property("cmake_target_name", "SOCI::soci_postgresql{}".format(target_suffix))
self.cpp_info.components["soci_postgresql"].libs = ["{}soci_postgresql{}".format(lib_prefix, lib_suffix)]
self.cpp_info.components["soci_postgresql"].requires = ["soci_core", "libpq::libpq"]
# TODO: to remove in conan v2 once cmake_find_package* generators removed
self.cpp_info.names["cmake_find_package"] = "SOCI"
self.cpp_info.names["cmake_find_package_multi"] = "SOCI"
self.cpp_info.components["soci_core"].names["cmake_find_package"] = "soci_core{}".format(target_suffix)
self.cpp_info.components["soci_core"].names["cmake_find_package_multi"] = "soci_core{}".format(target_suffix)
if self.options.empty:
self.cpp_info.components["soci_empty"].names["cmake_find_package"] = "soci_empty{}".format(target_suffix)
self.cpp_info.components["soci_empty"].names["cmake_find_package_multi"] = "soci_empty{}".format(target_suffix)
if self.options.with_sqlite3:
self.cpp_info.components["soci_sqlite3"].names["cmake_find_package"] = "soci_sqlite3{}".format(target_suffix)
self.cpp_info.components["soci_sqlite3"].names["cmake_find_package_multi"] = "soci_sqlite3{}".format(target_suffix)
if self.options.with_odbc:
self.cpp_info.components["soci_odbc"].names["cmake_find_package"] = "soci_odbc{}".format(target_suffix)
self.cpp_info.components["soci_odbc"].names["cmake_find_package_multi"] = "soci_odbc{}".format(target_suffix)
if self.options.with_mysql:
self.cpp_info.components["soci_mysql"].names["cmake_find_package"] = "soci_mysql{}".format(target_suffix)
self.cpp_info.components["soci_mysql"].names["cmake_find_package_multi"] = "soci_mysql{}".format(target_suffix)
if self.options.with_postgresql:
self.cpp_info.components["soci_postgresql"].names["cmake_find_package"] = "soci_postgresql{}".format(target_suffix)
self.cpp_info.components["soci_postgresql"].names["cmake_find_package_multi"] = "soci_postgresql{}".format(target_suffix)

View File

@@ -0,0 +1,39 @@
From d491bf7b5040d314ffd0c6310ba01f78ff44c85e Mon Sep 17 00:00:00 2001
From: Rasmus Thomsen <rasmus.thomsen@dampsoft.de>
Date: Fri, 14 Apr 2023 09:16:29 +0200
Subject: [PATCH] Remove hardcoded INSTALL_NAME_DIR for relocatable libraries
on MacOS
---
cmake/SociBackend.cmake | 2 +-
src/core/CMakeLists.txt | 1 -
2 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/cmake/SociBackend.cmake b/cmake/SociBackend.cmake
index 5d4ef0df..39fe1f77 100644
--- a/cmake/SociBackend.cmake
+++ b/cmake/SociBackend.cmake
@@ -171,7 +171,7 @@ macro(soci_backend NAME)
set_target_properties(${THIS_BACKEND_TARGET}
PROPERTIES
SOVERSION ${${PROJECT_NAME}_SOVERSION}
- INSTALL_NAME_DIR ${CMAKE_INSTALL_PREFIX}/lib)
+ )
if(APPLE)
set_target_properties(${THIS_BACKEND_TARGET}
diff --git a/src/core/CMakeLists.txt b/src/core/CMakeLists.txt
index 3e7deeae..f9eae564 100644
--- a/src/core/CMakeLists.txt
+++ b/src/core/CMakeLists.txt
@@ -59,7 +59,6 @@ if (SOCI_SHARED)
PROPERTIES
VERSION ${SOCI_VERSION}
SOVERSION ${SOCI_SOVERSION}
- INSTALL_NAME_DIR ${CMAKE_INSTALL_PREFIX}/lib
CLEAN_DIRECT_OUTPUT 1)
endif()
--
2.25.1

View File

@@ -0,0 +1,24 @@
diff --git a/cmake/SociBackend.cmake b/cmake/SociBackend.cmake
index 0a664667..3fa2ed95 100644
--- a/cmake/SociBackend.cmake
+++ b/cmake/SociBackend.cmake
@@ -31,14 +31,13 @@ macro(soci_backend_deps_found NAME DEPS SUCCESS)
if(NOT DEPEND_FOUND)
list(APPEND DEPS_NOT_FOUND ${dep})
else()
- string(TOUPPER "${dep}" DEPU)
- if( ${DEPU}_INCLUDE_DIR )
- list(APPEND DEPS_INCLUDE_DIRS ${${DEPU}_INCLUDE_DIR})
+ if( ${dep}_INCLUDE_DIR )
+ list(APPEND DEPS_INCLUDE_DIRS ${${dep}_INCLUDE_DIR})
endif()
- if( ${DEPU}_INCLUDE_DIRS )
- list(APPEND DEPS_INCLUDE_DIRS ${${DEPU}_INCLUDE_DIRS})
+ if( ${dep}_INCLUDE_DIRS )
+ list(APPEND DEPS_INCLUDE_DIRS ${${dep}_INCLUDE_DIRS})
endif()
- list(APPEND DEPS_LIBRARIES ${${DEPU}_LIBRARIES})
+ list(APPEND DEPS_LIBRARIES ${${dep}_LIBRARIES})
endif()
endforeach()

194
external/wasmedge/conandata.yml vendored Normal file
View File

@@ -0,0 +1,194 @@
sources:
"0.13.5":
Windows:
"x86_64":
Visual Studio:
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.13.5/WasmEdge-0.13.5-windows.zip"
sha256: "db533289ba26ec557b5193593c9ed03db75be3bc7aa737e2caa5b56b8eef888a"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.13.5/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Linux:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.13.5/WasmEdge-0.13.5-manylinux2014_x86_64.tar.gz"
sha256: "3686e0226871bf17b62ec57e1c15778c2947834b90af0dfad14f2e0202bf9284"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.13.5/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.13.5/WasmEdge-0.13.5-manylinux2014_aarch64.tar.gz"
sha256: "472de88e0257c539c120b33fdd1805e1e95063121acc2df1d5626e4676b93529"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Macos:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.13.5/WasmEdge-0.13.5-darwin_x86_64.tar.gz"
sha256: "b7fdfaf59805951241f47690917b501ddfa06d9b6f7e0262e44e784efe4a7b33"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.13.5/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.13.5/WasmEdge-0.13.5-darwin_arm64.tar.gz"
sha256: "acc93721210294ced0887352f360e42e46dcc05332e6dd78c1452fb3a35d5255"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.13.5/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Android:
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.13.5/WasmEdge-0.13.5-android_aarch64.tar.gz"
sha256: "59a0d68a0c7368b51cc65cb5a44a68037d79fd449883ef42792178d57c8784a8"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.13.5/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"0.11.2":
Windows:
"x86_64":
Visual Studio:
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.2/WasmEdge-0.11.2-windows.zip"
sha256: "ca49b98c0cf5f187e08c3ba71afc8d71365fde696f10b4219379a4a4d1a91e6d"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.2/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Linux:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.2/WasmEdge-0.11.2-manylinux2014_x86_64.tar.gz"
sha256: "784bf1eb25928e2cf02aa88e9372388fad682b4a188485da3cd9162caeedf143"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.2/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.2/WasmEdge-0.11.2-manylinux2014_aarch64.tar.gz"
sha256: "a2766a4c1edbaea298a30e5431a4e795003a10d8398a933d923f23d4eb4fa5d1"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Macos:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.2/WasmEdge-0.11.2-darwin_x86_64.tar.gz"
sha256: "aedec53f29b1e0b657e46e67dba3e2f32a2924f4d9136e60073ea1aba3073e70"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.2/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.2/WasmEdge-0.11.2-darwin_arm64.tar.gz"
sha256: "fe391df90e1eee69cf1e976f5ddf60c20f29b651710aaa4fc03e2ab4fe52c0d3"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.2/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Android:
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.2/WasmEdge-0.11.2-android_aarch64.tar.gz"
sha256: "69e308f5927c753b2bb5639569d10219b60598174d8b304bdf310093fd7b2464"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.2/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"0.11.1":
Windows:
"x86_64":
Visual Studio:
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.1/WasmEdge-0.11.1-windows.zip"
sha256: "c86f6384555a0484a5dd81faba5636bba78f5e3d6eaf627d880e34843f9e24bf"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Linux:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.1/WasmEdge-0.11.1-manylinux2014_x86_64.tar.gz"
sha256: "76ce4ea0eb86adfa52c73f6c6b44383626d94990e0923cae8b1e6f060ef2bf5b"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.1/WasmEdge-0.11.1-manylinux2014_aarch64.tar.gz"
sha256: "cb9ea32932360463991cfda80e09879b2cf6c69737f12f3f2b371cd0af4e9ce8"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Macos:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.1/WasmEdge-0.11.1-darwin_x86_64.tar.gz"
sha256: "56df2b00669c25b8143ea2c17370256cd6a33f3b316d3b47857dd38d603cb69a"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.1/WasmEdge-0.11.1-darwin_arm64.tar.gz"
sha256: "82f7da1a7a36ec1923fb045193784dd090a03109e84da042af97297205a71f08"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Android:
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.11.1/WasmEdge-0.11.1-android_aarch64.tar.gz"
sha256: "af8694e93bf72ac5506450d4caebccc340fbba254dca3d58ec0712e96ec9dedd"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.11.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"0.10.0":
Windows:
"x86_64":
Visual Studio:
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.10.0/WasmEdge-0.10.0-windows.zip"
sha256: "63b8a02cced52a723aa283dba02bbe887656256ecca69bb0fff17872c0fb5ebc"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.10.0/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Linux:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.10.0/WasmEdge-0.10.0-manylinux2014_x86_64.tar.gz"
sha256: "4c1ffca9fd8cbdeb8f0951ddaffbbefe81ae123d5b80f61e80ea8d9b56853cde"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.10.0/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.10.0/WasmEdge-0.10.0-manylinux2014_aarch64.tar.gz"
sha256: "c000bf96d0a73a1d360659246c0806c2ce78620b6f78c1147fbf9e2be0280bd9"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.10.0/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"0.9.1":
Windows:
"x86_64":
Visual Studio:
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.9.1/WasmEdge-0.9.1-windows.zip"
sha256: "68240d8aee23d44db5cc252d8c1cf5d0c77ab709a122af2747a4b836ba461671"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.9.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Linux:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.9.1/WasmEdge-0.9.1-manylinux2014_x86_64.tar.gz"
sha256: "bcb6fe3d6e30db0d0aa267ec3bd9b7248f8c8c387620cef4049d682d293c8371"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.9.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.9.1/WasmEdge-0.9.1-manylinux2014_aarch64.tar.gz"
sha256: "515bcac3520cd546d9d14372b7930ab48b43f1c5dc258a9f61a82b22c0107eef"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.9.1/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"0.9.0":
Windows:
"x86_64":
Visual Studio:
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.9.0/WasmEdge-0.9.0-windows.zip"
sha256: "f81bfea4cf09053510e3e74c16c1ee010fc93def8a7e78744443b950f0011c3b"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.9.0/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Linux:
"x86_64":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.9.0/WasmEdge-0.9.0-manylinux2014_x86_64.tar.gz"
sha256: "27847f15e4294e707486458e857d7cb11806481bb67a26f076a717a1446827ed"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.9.0/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.9.0/WasmEdge-0.9.0-manylinux2014_aarch64.tar.gz"
sha256: "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.9.0/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"
Macos:
"armv8":
"gcc":
- url: "https://github.com/WasmEdge/WasmEdge/releases/download/0.9.0/WasmEdge-0.9.0-darwin_arm64.tar.gz"
sha256: "236a407a646f746ab78a1d0a39fa4e85fe28eae219b1635ba49f908d7944686d"
- url: "https://raw.githubusercontent.com/WasmEdge/WasmEdge/0.9.0/LICENSE"
sha256: "c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4"

99
external/wasmedge/conanfile.py vendored Normal file
View File

@@ -0,0 +1,99 @@
from conan import ConanFile
from conan.tools.files import get, copy, download
from conan.tools.scm import Version
from conan.errors import ConanInvalidConfiguration
import os
required_conan_version = ">=1.53.0"
class WasmedgeConan(ConanFile):
name = "wasmedge"
description = ("WasmEdge is a lightweight, high-performance, and extensible WebAssembly runtime"
"for cloud native, edge, and decentralized applications."
"It powers serverless apps, embedded functions, microservices, smart contracts, and IoT devices.")
license = "Apache-2.0"
url = "https://github.com/conan-io/conan-center-index"
homepage = "https://github.com/WasmEdge/WasmEdge/"
topics = ("webassembly", "wasm", "wasi", "emscripten")
package_type = "shared-library"
settings = "os", "arch", "compiler", "build_type"
@property
def _compiler_alias(self):
return {
"Visual Studio": "Visual Studio",
# "Visual Studio": "msvc",
"msvc": "msvc",
}.get(str(self.info.settings.compiler), "gcc")
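# Any compiler not listed above falls back to the 'gcc' prebuilt binaries.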
def configure(self):
self.settings.compiler.rm_safe("libcxx")
self.settings.compiler.rm_safe("cppstd")
def validate(self):
try:
self.conan_data["sources"][self.version][str(self.settings.os)][str(self.settings.arch)][self._compiler_alias]
except KeyError:
raise ConanInvalidConfiguration("Binaries for this combination of version/os/arch/compiler are not available")
def package_id(self):
# Make binary compatible across compiler versions (since we're downloading prebuilt)
self.info.settings.rm_safe("compiler.version")
# Group compilers by their binary compatibility
# Note: We must use self.info.settings here, not self.settings (forbidden in Conan 2)
compiler_name = str(self.info.settings.compiler)
if compiler_name in ["Visual Studio", "msvc"]:
self.info.settings.compiler = "Visual Studio"
else:
self.info.settings.compiler = "gcc"
def build(self):
# This recipe packages prebuilt binaries, so the download has to happen in build()
get(self, **self.conan_data["sources"][self.version][str(self.settings.os)][str(self.settings.arch)][self._compiler_alias][0],
destination=self.source_folder, strip_root=True)
download(self, filename="LICENSE",
**self.conan_data["sources"][self.version][str(self.settings.os)][str(self.settings.arch)][self._compiler_alias][1])
def package(self):
copy(self, pattern="*.h", dst=os.path.join(self.package_folder, "include"), src=os.path.join(self.source_folder, "include"), keep_path=True)
copy(self, pattern="*.inc", dst=os.path.join(self.package_folder, "include"), src=os.path.join(self.source_folder, "include"), keep_path=True)
srclibdir = os.path.join(self.source_folder, "lib64" if self.settings.os == "Linux" else "lib")
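# WasmEdge's Linux release tarballs ship libraries under lib64/; the other
# platforms use lib/.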
srcbindir = os.path.join(self.source_folder, "bin")
dstlibdir = os.path.join(self.package_folder, "lib")
dstbindir = os.path.join(self.package_folder, "bin")
if Version(self.version) >= "0.11.1":
copy(self, pattern="wasmedge.lib", src=srclibdir, dst=dstlibdir, keep_path=False)
copy(self, pattern="wasmedge.dll", src=srcbindir, dst=dstbindir, keep_path=False)
copy(self, pattern="libwasmedge.so*", src=srclibdir, dst=dstlibdir, keep_path=False)
copy(self, pattern="libwasmedge*.dylib", src=srclibdir, dst=dstlibdir, keep_path=False)
else:
copy(self, pattern="wasmedge_c.lib", src=srclibdir, dst=dstlibdir, keep_path=False)
copy(self, pattern="wasmedge_c.dll", src=srcbindir, dst=dstbindir, keep_path=False)
copy(self, pattern="libwasmedge_c.so*", src=srclibdir, dst=dstlibdir, keep_path=False)
copy(self, pattern="libwasmedge_c*.dylib", src=srclibdir, dst=dstlibdir, keep_path=False)
copy(self, pattern="wasmedge*", src=srcbindir, dst=dstbindir, keep_path=False)
copy(self, pattern="LICENSE", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"), keep_path=False)
def package_info(self):
if Version(self.version) >= "0.11.1":
self.cpp_info.libs = ["wasmedge"]
else:
self.cpp_info.libs = ["wasmedge_c"]
bindir = os.path.join(self.package_folder, "bin")
self.output.info("Appending PATH environment variable: {}".format(bindir))
self.env_info.PATH.append(bindir)
if self.settings.os == "Windows":
self.cpp_info.system_libs.append("ws2_32")
self.cpp_info.system_libs.append("wsock32")
self.cpp_info.system_libs.append("shlwapi")
if self.settings.os in ["Linux", "FreeBSD"]:
self.cpp_info.system_libs.append("m")
self.cpp_info.system_libs.append("dl")
self.cpp_info.system_libs.append("rt")
self.cpp_info.system_libs.append("pthread")

View File

@@ -45,5 +45,7 @@
#define INVALID_KEY -41
#define NOT_A_STRING -42
#define MEM_OVERLAP -43
#define TOO_MANY_STATE_MODIFICATIONS -44
#define TOO_MANY_NAMESPACES -45
#define HOOK_ERROR_CODES
#endif //HOOK_ERROR_CODES
#endif //HOOK_ERROR_CODES
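For reference, a minimal sketch (not part of this diff) of a hook consuming the two new codes; state_set, rollback, and accept are the real hook APIs, the body is illustrative (SBUF comes from macro.h):
// Illustrative: bail out when the new limits are hit.
int64_t hook(uint32_t reserved)
{
    uint8_t key[32] = {};
    uint8_t val[8] = {};
    int64_t rc = state_set(SBUF(val), SBUF(key));
    if (rc == TOO_MANY_STATE_MODIFICATIONS ||  // -44: per-txn state modification budget
        rc == TOO_MANY_NAMESPACES)             // -45: namespace limit reached
        rollback(0, 0, __LINE__);
    return accept(0, 0, 0);
}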

View File

@@ -12,8 +12,6 @@ accept(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
extern int64_t
rollback(uint32_t read_ptr, uint32_t read_len, int64_t error_code);
// UTIL
extern int64_t
util_raddr(
uint32_t write_ptr,
@@ -56,8 +54,6 @@ util_keylet(
uint32_t e,
uint32_t f);
// STO
extern int64_t
sto_validate(uint32_t tread_ptr, uint32_t tread_len);
@@ -85,8 +81,6 @@ sto_erase(
uint32_t read_len,
uint32_t field_id);
// EMITTED TXN
extern int64_t
etxn_burden(void);
@@ -112,8 +106,6 @@ emit(
uint32_t read_ptr,
uint32_t read_len);
// FLOAT
extern int64_t
float_set(int32_t exponent, int64_t mantissa);
@@ -174,8 +166,6 @@ float_log(int64_t float1);
extern int64_t
float_root(int64_t float1, uint32_t n);
// LEDGER
extern int64_t
fee_base(void);
@@ -200,8 +190,6 @@ ledger_keylet(
uint32_t hread_ptr,
uint32_t hread_len);
// HOOK
extern int64_t
hook_account(uint32_t write_ptr, uint32_t write_len);
@@ -233,8 +221,6 @@ hook_skip(uint32_t read_ptr, uint32_t read_len, uint32_t flags);
extern int64_t
hook_pos(void);
// SLOT
extern int64_t
slot(uint32_t write_ptr, uint32_t write_len, uint32_t slot);
@@ -262,8 +248,6 @@ slot_type(uint32_t slot_no, uint32_t flags);
extern int64_t
slot_float(uint32_t slot_no);
// STATE
extern int64_t
state_set(
uint32_t read_ptr,
@@ -300,8 +284,6 @@ state_foreign(
uint32_t aread_ptr,
uint32_t aread_len);
// TRACE
extern int64_t
trace(
uint32_t mread_ptr,
@@ -316,8 +298,6 @@ trace_num(uint32_t read_ptr, uint32_t read_len, int64_t number);
extern int64_t
trace_float(uint32_t read_ptr, uint32_t read_len, int64_t float1);
// OTXN
extern int64_t
otxn_burden(void);
@@ -346,9 +326,8 @@ otxn_param(
extern int64_t
meta_slot(uint32_t slot_no);
// featureHooks1
extern int64_t xpop_slot(uint32_t, uint32_t);
extern int64_t
xpop_slot(uint32_t slot_no_tx, uint32_t slot_no_meta);
#define HOOK_EXTERN
#endif // HOOK_EXTERN
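The tidied externs above are called directly from hook code; a minimal sketch (illustrative, not from this diff) pairing hook_account with trace:
// Dump the hook's own account ID as hex to the trace log.
int64_t hook(uint32_t reserved)
{
    uint8_t acc[20];
    hook_account((uint32_t)acc, 20);                 // fill acc with the account ID
    trace((uint32_t)"acc", 3, (uint32_t)acc, 20, 1); // as_hex = 1
    return accept(0, 0, 0);
}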

58
hook/generate_error.sh Executable file
View File

@@ -0,0 +1,58 @@
#!/bin/bash
set -eu
SCRIPT_DIR=$(dirname "$0")
SCRIPT_DIR=$(cd "$SCRIPT_DIR" && pwd)
ENUM_FILE="$SCRIPT_DIR/../src/ripple/app/hook/Enum.h"
echo '// For documentation please see: https://xrpl-hooks.readme.io/reference/'
echo '// Generated using generate_error.sh'
echo '#ifndef HOOK_ERROR_CODES'
sed -n '/enum hook_return_code/,/};/p' "$ENUM_FILE" |
awk '
function ltrim(s) { sub(/^[[:space:]]+/, "", s); return s }
function rtrim(s) { sub(/[[:space:]]+$/, "", s); return s }
function trim(s) { return rtrim(ltrim(s)) }
function emit(entry) {
entry = trim(entry)
if (entry == "")
return
gsub(/,[[:space:]]*$/, "", entry)
split(entry, parts, "=")
if (length(parts) < 2)
return
name = trim(parts[1])
value = trim(parts[2])
if (name == "" || value == "")
return
printf "#define %s %s\n", name, value
}
{
line = $0
if (line ~ /enum[[:space:]]+hook_return_code/)
next
if (line ~ /^[[:space:]]*\{/)
next
sub(/\/\/.*$/, "", line)
if (line ~ /^[[:space:]]*\};/) {
emit(buffer)
exit
}
if (line ~ /^[[:space:]]*$/)
next
buffer = buffer line " "
if (line ~ /,[[:space:]]*$/) {
emit(buffer)
buffer = ""
}
}
'
echo '#define HOOK_ERROR_CODES'
echo '#endif //HOOK_ERROR_CODES'
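The transformation, illustrated on a hypothetical Enum.h excerpt:
// Input (src/ripple/app/hook/Enum.h):
//     enum hook_return_code : int64_t {
//         SUCCESS = 0,
//         TOO_MANY_NAMESPACES = -45, // trailing comment stripped by the awk
//     };
// Output of generate_error.sh:
#define SUCCESS 0
#define TOO_MANY_NAMESPACES -45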

145
hook/generate_extern.sh Executable file
View File

@@ -0,0 +1,145 @@
#!/bin/bash
set -eu
SCRIPT_DIR=$(dirname "$0")
SCRIPT_DIR=$(cd "$SCRIPT_DIR" && pwd)
APPLY_HOOK="$SCRIPT_DIR/../src/ripple/app/hook/applyHook.h"
{
echo '// For documentation please see: https://xrpl-hooks.readme.io/reference/'
echo '// Generated using generate_extern.sh'
echo '#include <stdint.h>'
echo '#ifndef HOOK_EXTERN'
echo
awk '
function trim(s) {
sub(/^[[:space:]]+/, "", s);
sub(/[[:space:]]+$/, "", s);
return s;
}
function emit(ret, name, argc, argt, argn) {
attr = (name == "_g") ? " __attribute__((noduplicate))" : "";
if (!first)
printf("\n");
first = 0;
printf("extern %s%s\n", ret, attr);
if (argc == 0) {
printf("%s(void);\n", name);
return;
}
if (argc <= 3) {
line = argt[1] " " argn[1];
for (i = 2; i <= argc; ++i)
line = line ", " argt[i] " " argn[i];
printf("%s(%s);\n", name, line);
return;
}
printf("%s(\n", name);
for (i = 1; i <= argc; ++i) {
sep = (i < argc) ? "," : ");";
printf(" %s %s%s\n", argt[i], argn[i], sep);
}
}
function process(buffer, kind, payload, parts, n, i, arg, tokens, argc, argt, argn) {
if (kind == "func")
sub(/^DECLARE_HOOK_FUNCTION[[:space:]]*\(/, "", buffer);
else
sub(/^DECLARE_HOOK_FUNCNARG[[:space:]]*\(/, "", buffer);
buffer = trim(buffer);
sub(/\)[[:space:]]*$/, "", buffer);
n = split(buffer, parts, ",");
for (i = 1; i <= n; ++i)
parts[i] = trim(parts[i]);
ret = parts[1];
name = parts[2];
argc = 0;
delete argt;
delete argn;
for (i = 3; i <= n; ++i) {
arg = parts[i];
if (arg == "")
continue;
split(arg, tokens, /[[:space:]]+/);
if (length(tokens) < 2)
continue;
++argc;
argt[argc] = tokens[1];
argn[argc] = tokens[2];
}
emit(ret, name, argc, argt, argn);
}
BEGIN {
first = 1;
in_block = 0;
in_macro = 0;
}
{
line = $0;
if (in_block) {
if (line ~ /\*\//) {
sub(/.*\*\//, "", line);
in_block = 0;
}
else
next;
}
while (line ~ /\/\*/) {
if (line ~ /\/\*.*\*\//) {
gsub(/\/\*.*\*\//, "", line);
}
else {
sub(/\/\*.*/, "", line);
in_block = 1;
break;
}
}
sub(/\/\/.*$/, "", line);
line = trim(line);
if (line == "")
next;
if (!in_macro && line ~ /^DECLARE_HOOK_FUNCTION\(/) {
buffer = line;
kind = "func";
if (line ~ /\);[[:space:]]*$/) {
sub(/\);[[:space:]]*$/, "", buffer);
process(buffer, kind);
}
else
in_macro = 1;
next;
}
if (!in_macro && line ~ /^DECLARE_HOOK_FUNCNARG\(/) {
buffer = line;
kind = "narg";
if (line ~ /\);[[:space:]]*$/) {
sub(/\);[[:space:]]*$/, "", buffer);
process(buffer, kind);
}
else
in_macro = 1;
next;
}
if (in_macro) {
buffer = buffer " " line;
if (line ~ /\);[[:space:]]*$/) {
sub(/\);[[:space:]]*$/, "", buffer);
process(buffer, kind);
in_macro = 0;
}
}
}
END {
printf("\n");
}
' "$APPLY_HOOK"
echo '#define HOOK_EXTERN'
echo '#endif // HOOK_EXTERN'
}
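The transformation, illustrated on a hypothetical applyHook.h declaration; four arguments exceed the three-argument threshold, so the emitter switches to one parameter per line:
// Input:
//     DECLARE_HOOK_FUNCTION(int64_t, state_set,
//         uint32_t read_ptr, uint32_t read_len,
//         uint32_t kread_ptr, uint32_t kread_len);
// Output of generate_extern.sh:
extern int64_t
state_set(
    uint32_t read_ptr,
    uint32_t read_len,
    uint32_t kread_ptr,
    uint32_t kread_len);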

34
hook/generate_sfcodes.sh Executable file
View File

@@ -0,0 +1,34 @@
#!/bin/bash
set -eu
SCRIPT_DIR=$(dirname "$0")
SCRIPT_DIR=$(cd "$SCRIPT_DIR" && pwd)
RIPPLED_ROOT="$SCRIPT_DIR/../src/ripple"
echo '// For documentation please see: https://xrpl-hooks.readme.io/reference/'
echo '// Generated using generate_sfcodes.sh'
cat "$RIPPLED_ROOT/protocol/impl/SField.cpp" | grep -E '^CONSTRUCT_' |
sed 's/UINT16,/1,/g' |
sed 's/UINT32,/2,/g' |
sed 's/UINT64,/3,/g' |
sed 's/HASH128,/4,/g' |
sed 's/HASH256,/5,/g' |
sed 's/UINT128,/4,/g' |
sed 's/UINT256,/5,/g' |
sed 's/AMOUNT,/6,/g' |
sed 's/VL,/7,/g' |
sed 's/ACCOUNT,/8,/g' |
sed 's/OBJECT,/14,/g' |
sed 's/ARRAY,/15,/g' |
sed 's/UINT8,/16,/g' |
sed 's/HASH160,/17,/g' |
sed 's/UINT160,/17,/g' |
sed 's/PATHSET,/18,/g' |
sed 's/VECTOR256,/19,/g' |
sed 's/UINT96,/20,/g' |
sed 's/UINT192,/21,/g' |
sed 's/UINT384,/22,/g' |
sed 's/UINT512,/23,/g' |
grep -Eo '"([^"]+)", *([0-9]+), *([0-9]+)' |
sed 's/"//g' | sed 's/ *//g' | sed 's/,/ /g' |
awk '{print ("#define sf"$1" (("$2"U << 16U) + "$3"U)")}'
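The resulting defines pack the serialized type code into the high 16 bits and the field code into the low 16 bits; illustrated on a representative SField.cpp line (exact macro spelling aside):
// Input:  CONSTRUCT_TYPED_SFIELD(sfAccount, "Account", ACCOUNT, 1)
// ACCOUNT maps to type code 8 via the sed table above, so:
#define sfAccount ((8U << 16U) + 1U)
// high half-word = type code, low half-word = field code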

38
hook/generate_tts.sh Executable file
View File

@@ -0,0 +1,38 @@
#!/bin/bash
set -eu
SCRIPT_DIR=$(dirname "$0")
SCRIPT_DIR=$(cd "$SCRIPT_DIR" && pwd)
RIPPLED_ROOT="$SCRIPT_DIR/../src/ripple"
TX_FORMATS="$RIPPLED_ROOT/protocol/TxFormats.h"
echo '// For documentation please see: https://xrpl-hooks.readme.io/reference/'
echo '// Generated using generate_tts.sh'
sed -n '/enum TxType/,/};/p' "$TX_FORMATS" |
awk '
function ltrim(s) { sub(/^[[:space:]]+/, "", s); return s }
function rtrim(s) { sub(/[[:space:]]+$/, "", s); return s }
function trim(s) { return rtrim(ltrim(s)) }
/^[ \t]*tt[A-Z0-9_]+/ {
line = $0
deprecated = (line ~ /\[\[deprecated/)
gsub(/\[\[deprecated[^]]*\]\]/, "", line)
sub(/\/\/.*$/, "", line)
gsub(/,[[:space:]]*$/, "", line)
split(line, parts, "=")
if (length(parts) < 2)
next
name = trim(parts[1])
value = trim(parts[2])
if (name == "" || value == "")
next
prefix = deprecated ? "// " : ""
postfix = deprecated ? " // deprecated" : ""
printf "%s#define %s %s%s\n", prefix, name, value, postfix
}
'
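The transformation, illustrated on a hypothetical TxFormats.h excerpt; deprecated entries survive as comments:
// Input:
//     ttPAYMENT = 0,
//     ttNICKNAME_SET [[deprecated("...")]] = 6,
// Output of generate_tts.sh:
#define ttPAYMENT 0
// #define ttNICKNAME_SET 6 // deprecated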

View File

@@ -637,43 +637,55 @@ int64_t hook(uint32_t r)
{
previous_member[0] = 'V';
for (int i = 1; GUARD(32), i < 32; ++i)
for (int tbl = 1; GUARD(2), tbl <= 2; ++tbl)
{
previous_member[1] = i < 2 ? 'R' : i < 12 ? 'H' : 'S';
previous_member[2] =
i == 0 ? 'R' :
i == 1 ? 'D' :
i < 12 ? i - 2 :
i - 12;
uint8_t vote_key[32];
if (state(SBUF(vote_key), SBUF(previous_member)) == 32)
for (int i = 0; GUARD(66), i < 32; ++i)
{
uint8_t vote_count = 0;
previous_member[1] = i < 2 ? 'R' : i < 12 ? 'H' : 'S';
previous_member[2] =
i == 0 ? 'R' :
i == 1 ? 'D' :
i < 12 ? i - 2 :
i - 12;
previous_member[3] = tbl;
// find and decrement the vote counter
vote_key[0] = 'C';
vote_key[1] = previous_member[1];
vote_key[2] = previous_member[2];
if (state(&vote_count, 1, SBUF(vote_key)) == 1)
uint8_t vote_key[32] = {};
uint8_t ts =
previous_member[1] == 'H' ? 32 : // hook topics are a 32 byte hook hash
previous_member[1] == 'S' ? 20 : // account topics are a 20 byte account ID
8; // reward topics are an 8 byte le xfl
uint8_t padding = 32 - ts;
if (state(vote_key + padding, ts, SBUF(previous_member)) == ts)
{
// if we're down to 1 vote then delete state
if (vote_count <= 1)
{
ASSERT(state_set(0,0, SBUF(vote_key)) == 0);
trace_num(SBUF("Decrement vote count deleted"), vote_count);
}
else // otherwise decrement
{
vote_count--;
ASSERT(state_set(&vote_count, 1, SBUF(vote_key)) == 1);
trace_num(SBUF("Decrement vote count to"), vote_count);
}
}
uint8_t vote_count = 0;
// delete the vote entry
ASSERT(state_set(0,0, SBUF(previous_member)) == 0);
trace(SBUF("Vote entry deleted"), vote_key, 32, 1);
// find and decrement the vote counter
vote_key[0] = 'C';
vote_key[1] = previous_member[1];
vote_key[2] = previous_member[2];
vote_key[3] = tbl;
if (state(&vote_count, 1, SBUF(vote_key)) == 1)
{
// if we're down to 1 vote then delete state
if (vote_count <= 1)
{
ASSERT(state_set(0,0, SBUF(vote_key)) == 0);
trace_num(SBUF("Decrement vote count deleted"), vote_count);
}
else // otherwise decrement
{
vote_count--;
ASSERT(state_set(&vote_count, 1, SBUF(vote_key)) == 1);
trace_num(SBUF("Decrement vote count to"), vote_count);
}
}
// delete the vote entry
ASSERT(state_set(0,0, SBUF(previous_member)) == 0);
trace(SBUF("Vote entry deleted"), vote_key, 32, 1);
}
}
}
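A worked example of the topic-size arithmetic above (a sketch; state and SBUF as used elsewhere in this file): topic values are right-aligned into the zeroed 32-byte vote_key, so shorter values get zero padding on the left:
// previous_member[1] == 'H' -> ts = 32 (hook hash),  padding =  0
// previous_member[1] == 'S' -> ts = 20 (account ID), padding = 12
// previous_member[1] == 'R' -> ts =  8 (LE XFL),     padding = 24
// e.g. a reward-rate vote lands in vote_key[24..31]:
uint8_t vote_key[32] = {};
uint8_t ts = 8, padding = 32 - ts;
state(vote_key + padding, ts, SBUF(previous_member));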

View File

@@ -37,6 +37,7 @@
#define KEYLET_NFT_OFFER 23
#define KEYLET_HOOK_DEFINITION 24
#define KEYLET_HOOK_STATE_DIR 25
#define KEYLET_CRON 26
#define COMPARE_EQUAL 1U
#define COMPARE_LESS 2U
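KEYLET_CRON joins the keylet type constants consumed by util_keylet; a minimal sketch using the long-standing KEYLET_ACCOUNT (KEYLET_CRON's argument layout is not shown in this diff):
uint8_t kl[34];
uint8_t acc[20];
hook_account(SBUF(acc));
// derive the keylet for the hook's own AccountRoot entry
int64_t rc = util_keylet(SBUF(kl), KEYLET_ACCOUNT, (uint32_t)acc, 20, 0, 0, 0, 0);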

View File

@@ -1,5 +1,5 @@
/**
* These are helper macros for writing hooks, all of them are optional as is including hookmacro.h at all
* These are helper macros for writing hooks, all of them are optional as is including macro.h at all
*/
#include <stdint.h>
@@ -9,13 +9,32 @@
#ifndef HOOKMACROS_INCLUDED
#define HOOKMACROS_INCLUDED 1
#ifdef NDEBUG
#define DEBUG 0
#else
#define DEBUG 1
#endif
#define DONEEMPTY()\
accept(0,0,__LINE__)
#define DONEMSG(msg)\
accept(msg, sizeof(msg),__LINE__)
#define DONE(x)\
accept(SVAR(x),(uint32_t)__LINE__);
#define ASSERT(x)\
{\
if (!(x))\
rollback(0,0,__LINE__);\
}
#define NOPE(x)\
{\
return rollback((x), sizeof(x), __LINE__);\
}
#define TRACEVAR(v) if (DEBUG) trace_num((uint32_t)(#v), (uint32_t)(sizeof(#v) - 1), (int64_t)v);
#define TRACEHEX(v) if (DEBUG) trace((uint32_t)(#v), (uint32_t)(sizeof(#v) - 1), (uint32_t)(v), (uint32_t)(sizeof(v)), 1);
#define TRACEXFL(v) if (DEBUG) trace_float((uint32_t)(#v), (uint32_t)(sizeof(#v) - 1), (int64_t)v);
@@ -26,6 +45,7 @@
#define GUARDM(maxiter, n) _g(( (1ULL << 31U) + (__LINE__ << 16) + n), (maxiter)+1)
#define SBUF(str) (uint32_t)(str), sizeof(str)
#define SVAR(x) &x, sizeof(x)
#define REQUIRE(cond, str)\
{\
@@ -138,6 +158,17 @@ int out_len = 0;\
*(((uint64_t*)(buf1)) + 2) == *(((uint64_t*)(buf2)) + 2) &&\
*(((uint64_t*)(buf1)) + 3) == *(((uint64_t*)(buf2)) + 3))
#define BUFFER_EQUAL_64(buf1, buf2) \
( \
(*((uint64_t*)(buf1) + 0) == *((uint64_t*)(buf2) + 0)) && \
(*((uint64_t*)(buf1) + 1) == *((uint64_t*)(buf2) + 1)) && \
(*((uint64_t*)(buf1) + 2) == *((uint64_t*)(buf2) + 2)) && \
(*((uint64_t*)(buf1) + 3) == *((uint64_t*)(buf2) + 3)) && \
(*((uint64_t*)(buf1) + 4) == *((uint64_t*)(buf2) + 4)) && \
(*((uint64_t*)(buf1) + 5) == *((uint64_t*)(buf2) + 5)) && \
(*((uint64_t*)(buf1) + 6) == *((uint64_t*)(buf2) + 6)) && \
(*((uint64_t*)(buf1) + 7) == *((uint64_t*)(buf2) + 7)) \
)
// when using this macro buf1len may be dynamic but buf2len must be static
// provide n >= 1 to indicate how many times the macro will be hit on the line of code
@@ -185,6 +216,18 @@ int out_len = 0;\
#define BUFFER_EQUAL(output, buf1, buf2, compare_len)\
BUFFER_EQUAL_GUARD(output, buf1, compare_len, buf2, compare_len, 1)
#define UINT8_TO_BUF(buf_raw, i)\
{\
unsigned char* buf = (unsigned char*)buf_raw;\
buf[0] = (((uint8_t)i) >> 0) & 0xFFUL;\
if (i < 0) buf[0] |= 0x80U;\
}
#define UINT8_FROM_BUF(buf)\
(((uint8_t)((buf)[0]) << 0))
#define UINT16_TO_BUF(buf_raw, i)\
{\
unsigned char* buf = (unsigned char*)buf_raw;\
@@ -205,7 +248,6 @@ int out_len = 0;\
buf[3] = (((uint64_t)i) >> 0) & 0xFFUL;\
}
#define UINT32_FROM_BUF(buf)\
(((uint64_t)((buf)[0]) << 24) +\
((uint64_t)((buf)[1]) << 16) +\
@@ -225,7 +267,6 @@ int out_len = 0;\
buf[7] = (((uint64_t)i) >> 0) & 0xFFUL;\
}
#define UINT64_FROM_BUF(buf)\
(((uint64_t)((buf)[0]) << 56) +\
((uint64_t)((buf)[1]) << 48) +\
@@ -236,17 +277,6 @@ int out_len = 0;\
((uint64_t)((buf)[6]) << 8) +\
((uint64_t)((buf)[7]) << 0))
#define INT64_FROM_BUF(buf)\
((((uint64_t)((buf)[0] & 0x7FU) << 56) +\
((uint64_t)((buf)[1]) << 48) +\
((uint64_t)((buf)[2]) << 40) +\
((uint64_t)((buf)[3]) << 32) +\
((uint64_t)((buf)[4]) << 24) +\
((uint64_t)((buf)[5]) << 16) +\
((uint64_t)((buf)[6]) << 8) +\
((uint64_t)((buf)[7]) << 0)) * (buf[0] & 0x80U ? -1 : 1))
#define INT64_TO_BUF(buf_raw, i)\
{\
unsigned char* buf = (unsigned char*)buf_raw;\
@@ -261,6 +291,65 @@ int out_len = 0;\
if (i < 0) buf[0] |= 0x80U;\
}
#define INT64_FROM_BUF(buf)\
((((uint64_t)((buf)[0] & 0x7FU) << 56) +\
((uint64_t)((buf)[1]) << 48) +\
((uint64_t)((buf)[2]) << 40) +\
((uint64_t)((buf)[3]) << 32) +\
((uint64_t)((buf)[4]) << 24) +\
((uint64_t)((buf)[5]) << 16) +\
((uint64_t)((buf)[6]) << 8) +\
((uint64_t)((buf)[7]) << 0)) * (buf[0] & 0x80U ? -1 : 1))
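// Note: INT64_FROM_BUF is sign-magnitude, not two's complement. Worked examples:
//   {0x80,0,0,0,0,0,0,0x01} -> -1  (magnitude 1, sign bit of buf[0] set)
//   {0x00,0,0,0,0,0,0,0x01} ->  1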
#define BYTES20_TO_BUF(buf_raw, i)\
{\
unsigned char* buf = (unsigned char*)buf_raw;\
*(uint64_t*)(buf + 0) = *(uint64_t*)(i + 0);\
*(uint64_t*)(buf + 8) = *(uint64_t*)(i + 8);\
*(uint32_t*)(buf + 16) = *(uint32_t*)(i + 16);\
}
#define BYTES20_FROM_BUF(buf_raw, i)\
{\
const unsigned char* buf = (const unsigned char*)buf_raw;\
*(uint64_t*)(i + 0) = *(const uint64_t*)(buf + 0);\
*(uint64_t*)(i + 8) = *(const uint64_t*)(buf + 8);\
*(uint32_t*)(i + 16) = *(const uint32_t*)(buf + 16);\
}
#define BYTES32_TO_BUF(buf_raw, i)\
{\
unsigned char* buf = (unsigned char*)buf_raw;\
*(uint64_t*)(buf + 0) = *(uint64_t*)(i + 0);\
*(uint64_t*)(buf + 8) = *(uint64_t*)(i + 8);\
*(uint64_t*)(buf + 16) = *(uint64_t*)(i + 16);\
*(uint64_t*)(buf + 24) = *(uint64_t*)(i + 24);\
}
#define BYTES32_FROM_BUF(buf_raw, i)\
{\
const unsigned char* buf = (const unsigned char*)buf_raw;\
*(uint64_t*)(i + 0) = *(const uint64_t*)(buf + 0);\
*(uint64_t*)(i + 8) = *(const uint64_t*)(buf + 8);\
*(uint64_t*)(i + 16) = *(const uint64_t*)(buf + 16);\
*(uint64_t*)(i + 24) = *(const uint64_t*)(buf + 24);\
}
#define FLIP_ENDIAN_32(n) ((uint32_t) (((n & 0xFFU) << 24U) | \
((n & 0xFF00U) << 8U) | \
((n & 0xFF0000U) >> 8U) | \
((n & 0xFF000000U) >> 24U)))
#define FLIP_ENDIAN_64(n) ((uint64_t)(((n & 0xFFULL) << 56ULL) | \
((n & 0xFF00ULL) << 40ULL) | \
((n & 0xFF0000ULL) << 24ULL) | \
((n & 0xFF000000ULL) << 8ULL) | \
((n & 0xFF00000000ULL) >> 8ULL) | \
((n & 0xFF0000000000ULL) >> 24ULL) | \
((n & 0xFF000000000000ULL) >> 40ULL) | \
((n & 0xFF00000000000000ULL) >> 56ULL)))
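// Worked examples for the byte-swap macros:
//   FLIP_ENDIAN_32(0x11223344U)           == 0x44332211U
//   FLIP_ENDIAN_64(0x0102030405060708ULL) == 0x0807060504030201ULL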
#define tfCANONICAL 0x80000000UL
#define atACCOUNT 1U
@@ -287,350 +376,6 @@ int out_len = 0;\
#define amRIPPLEESCROW 17U
#define amDELIVEREDAMOUNT 18U
/**
* RH NOTE -- PAY ATTENTION
*
* ALL 'ENCODE' MACROS INCREMENT BUF_OUT
* THIS IS TO MAKE CHAINING EASY
* BUF_OUT IS A SACRIFICIAL POINTER
*
* 'ENCODE' MACROS WITH CONSTANTS HAVE
* ALIASING TO ASSIST YOU WITH ORDER
* _TYPECODE_FIELDCODE_ENCODE_MACRO
* TO PRODUCE A SERIALIZED OBJECT
* IN CANONICAL FORMAT YOU MUST ORDER
* FIRST BY TYPE CODE THEN BY FIELD CODE
*
* ALL 'PREPARE' MACROS PRESERVE POINTERS
*
**/
#define ENCODE_TL_SIZE 49
#define ENCODE_TL(buf_out, tlamt, amount_type)\
{\
uint8_t uat = amount_type; \
buf_out[0] = 0x60U +(uat & 0x0FU ); \
for (int i = 1; GUARDM(48, 1), i < 49; ++i)\
buf_out[i] = tlamt[i-1];\
buf_out += ENCODE_TL_SIZE;\
}
#define _06_XX_ENCODE_TL(buf_out, drops, amount_type )\
ENCODE_TL(buf_out, drops, amount_type );
#define ENCODE_TL_AMOUNT(buf_out, drops )\
ENCODE_TL(buf_out, drops, amAMOUNT );
#define _06_01_ENCODE_TL_AMOUNT(buf_out, drops )\
ENCODE_TL_AMOUNT(buf_out, drops );
// Encode drops to serialization format
// consumes 9 bytes
#define ENCODE_DROPS_SIZE 9
#define ENCODE_DROPS(buf_out, drops, amount_type ) \
{\
uint8_t uat = amount_type; \
uint64_t udrops = drops; \
buf_out[0] = 0x60U +(uat & 0x0FU ); \
buf_out[1] = 0b01000000 + (( udrops >> 56 ) & 0b00111111 ); \
buf_out[2] = (udrops >> 48) & 0xFFU; \
buf_out[3] = (udrops >> 40) & 0xFFU; \
buf_out[4] = (udrops >> 32) & 0xFFU; \
buf_out[5] = (udrops >> 24) & 0xFFU; \
buf_out[6] = (udrops >> 16) & 0xFFU; \
buf_out[7] = (udrops >> 8) & 0xFFU; \
buf_out[8] = (udrops >> 0) & 0xFFU; \
buf_out += ENCODE_DROPS_SIZE; \
}
#define _06_XX_ENCODE_DROPS(buf_out, drops, amount_type )\
ENCODE_DROPS(buf_out, drops, amount_type );
#define ENCODE_DROPS_AMOUNT(buf_out, drops )\
ENCODE_DROPS(buf_out, drops, amAMOUNT );
#define _06_01_ENCODE_DROPS_AMOUNT(buf_out, drops )\
ENCODE_DROPS_AMOUNT(buf_out, drops );
#define ENCODE_DROPS_FEE(buf_out, drops )\
ENCODE_DROPS(buf_out, drops, amFEE );
#define _06_08_ENCODE_DROPS_FEE(buf_out, drops )\
ENCODE_DROPS_FEE(buf_out, drops );
#define ENCODE_TT_SIZE 3
#define ENCODE_TT(buf_out, tt )\
{\
uint8_t utt = tt;\
buf_out[0] = 0x12U;\
buf_out[1] =(utt >> 8 ) & 0xFFU;\
buf_out[2] =(utt >> 0 ) & 0xFFU;\
buf_out += ENCODE_TT_SIZE; \
}
#define _01_02_ENCODE_TT(buf_out, tt)\
ENCODE_TT(buf_out, tt);
#define ENCODE_ACCOUNT_SIZE 22
#define ENCODE_ACCOUNT(buf_out, account_id, account_type)\
{\
uint8_t uat = account_type;\
buf_out[0] = 0x80U + uat;\
buf_out[1] = 0x14U;\
*(uint64_t*)(buf_out + 2) = *(uint64_t*)(account_id + 0);\
*(uint64_t*)(buf_out + 10) = *(uint64_t*)(account_id + 8);\
*(uint32_t*)(buf_out + 18) = *(uint32_t*)(account_id + 16);\
buf_out += ENCODE_ACCOUNT_SIZE;\
}
#define _08_XX_ENCODE_ACCOUNT(buf_out, account_id, account_type)\
ENCODE_ACCOUNT(buf_out, account_id, account_type);
#define ENCODE_ACCOUNT_SRC_SIZE 22
#define ENCODE_ACCOUNT_SRC(buf_out, account_id)\
ENCODE_ACCOUNT(buf_out, account_id, atACCOUNT);
#define _08_01_ENCODE_ACCOUNT_SRC(buf_out, account_id)\
ENCODE_ACCOUNT_SRC(buf_out, account_id);
#define ENCODE_ACCOUNT_DST_SIZE 22
#define ENCODE_ACCOUNT_DST(buf_out, account_id)\
ENCODE_ACCOUNT(buf_out, account_id, atDESTINATION);
#define _08_03_ENCODE_ACCOUNT_DST(buf_out, account_id)\
ENCODE_ACCOUNT_DST(buf_out, account_id);
#define ENCODE_ACCOUNT_OWNER_SIZE 22
#define ENCODE_ACCOUNT_OWNER(buf_out, account_id) \
ENCODE_ACCOUNT(buf_out, account_id, atOWNER);
#define _08_02_ENCODE_ACCOUNT_OWNER(buf_out, account_id) \
ENCODE_ACCOUNT_OWNER(buf_out, account_id);
#define ENCODE_UINT32_COMMON_SIZE 5U
#define ENCODE_UINT32_COMMON(buf_out, i, field)\
{\
uint32_t ui = i; \
uint8_t uf = field; \
buf_out[0] = 0x20U +(uf & 0x0FU); \
buf_out[1] =(ui >> 24 ) & 0xFFU; \
buf_out[2] =(ui >> 16 ) & 0xFFU; \
buf_out[3] =(ui >> 8 ) & 0xFFU; \
buf_out[4] =(ui >> 0 ) & 0xFFU; \
buf_out += ENCODE_UINT32_COMMON_SIZE; \
}
#define _02_XX_ENCODE_UINT32_COMMON(buf_out, i, field)\
ENCODE_UINT32_COMMON(buf_out, i, field)\
#define ENCODE_UINT32_UNCOMMON_SIZE 6U
#define ENCODE_UINT32_UNCOMMON(buf_out, i, field)\
{\
uint32_t ui = i; \
uint8_t uf = field; \
buf_out[0] = 0x20U; \
buf_out[1] = uf; \
buf_out[2] =(ui >> 24 ) & 0xFFU; \
buf_out[3] =(ui >> 16 ) & 0xFFU; \
buf_out[4] =(ui >> 8 ) & 0xFFU; \
buf_out[5] =(ui >> 0 ) & 0xFFU; \
buf_out += ENCODE_UINT32_UNCOMMON_SIZE; \
}
#define _02_XX_ENCODE_UINT32_UNCOMMON(buf_out, i, field)\
ENCODE_UINT32_UNCOMMON(buf_out, i, field)\
#define ENCODE_LLS_SIZE 6U
#define ENCODE_LLS(buf_out, lls )\
ENCODE_UINT32_UNCOMMON(buf_out, lls, 0x1B );
#define _02_27_ENCODE_LLS(buf_out, lls )\
ENCODE_LLS(buf_out, lls );
#define ENCODE_FLS_SIZE 6U
#define ENCODE_FLS(buf_out, fls )\
ENCODE_UINT32_UNCOMMON(buf_out, fls, 0x1A );
#define _02_26_ENCODE_FLS(buf_out, fls )\
ENCODE_FLS(buf_out, fls );
#define ENCODE_TAG_SRC_SIZE 5
#define ENCODE_TAG_SRC(buf_out, tag )\
ENCODE_UINT32_COMMON(buf_out, tag, 0x3U );
#define _02_03_ENCODE_TAG_SRC(buf_out, tag )\
ENCODE_TAG_SRC(buf_out, tag );
#define ENCODE_TAG_DST_SIZE 5
#define ENCODE_TAG_DST(buf_out, tag )\
ENCODE_UINT32_COMMON(buf_out, tag, 0xEU );
#define _02_14_ENCODE_TAG_DST(buf_out, tag )\
ENCODE_TAG_DST(buf_out, tag );
#define ENCODE_SEQUENCE_SIZE 5
#define ENCODE_SEQUENCE(buf_out, sequence )\
ENCODE_UINT32_COMMON(buf_out, sequence, 0x4U );
#define _02_04_ENCODE_SEQUENCE(buf_out, sequence )\
ENCODE_SEQUENCE(buf_out, sequence );
#define ENCODE_FLAGS_SIZE 5
#define ENCODE_FLAGS(buf_out, tag )\
ENCODE_UINT32_COMMON(buf_out, tag, 0x2U );
#define _02_02_ENCODE_FLAGS(buf_out, tag )\
ENCODE_FLAGS(buf_out, tag );
#define ENCODE_SIGNING_PUBKEY_SIZE 35
#define ENCODE_SIGNING_PUBKEY(buf_out, pkey )\
{\
buf_out[0] = 0x73U;\
buf_out[1] = 0x21U;\
*(uint64_t*)(buf_out + 2) = *(uint64_t*)(pkey + 0);\
*(uint64_t*)(buf_out + 10) = *(uint64_t*)(pkey + 8);\
*(uint64_t*)(buf_out + 18) = *(uint64_t*)(pkey + 16);\
*(uint64_t*)(buf_out + 26) = *(uint64_t*)(pkey + 24);\
buf_out[34] = pkey[32];\
buf_out += ENCODE_SIGNING_PUBKEY_SIZE;\
}
#define _07_03_ENCODE_SIGNING_PUBKEY(buf_out, pkey )\
ENCODE_SIGNING_PUBKEY(buf_out, pkey );
#define ENCODE_SIGNING_PUBKEY_NULL_SIZE 2
#define ENCODE_SIGNING_PUBKEY_NULL(buf_out )\
{\
*buf_out++ = 0x73U;\
*buf_out++ = 0x00U;\
}
#define _07_03_ENCODE_SIGNING_PUBKEY_NULL(buf_out )\
ENCODE_SIGNING_PUBKEY_NULL(buf_out );
#define _0E_0E_ENCODE_HOOKOBJ(buf_out, hhash)\
{\
uint8_t* hook0 = (hhash);\
*buf_out++ = 0xEEU; /* hook obj start */ \
if (hook0 == 0) /* noop */\
{\
/* do nothing */ \
}\
else\
{\
*buf_out++ = 0x22U; /* flags = override */\
*buf_out++ = 0x00U;\
*buf_out++ = 0x00U;\
*buf_out++ = 0x00U;\
*buf_out++ = 0x01U;\
if (hook0 == (uint8_t*)0xFFFFFFFFUL) /* delete operation */ \
{\
*buf_out++ = 0x7BU; /* empty createcode */ \
*buf_out++ = 0x00U;\
}\
else\
{\
*buf_out++ = 0x50U; /* HookHash */\
*buf_out++ = 0x1FU;\
uint64_t* d = (uint64_t*)buf_out;\
uint64_t* s = (uint64_t*)hook0;\
*d++ = *s++;\
*d++ = *s++;\
*d++ = *s++;\
*d++ = *s++;\
buf_out+=32;\
}\
}\
*buf_out++ = 0xE1U;\
}
#define PREPARE_HOOKSET(buf_out_master, maxlen, h, sizeout)\
{\
uint8_t* buf_out = (buf_out_master); \
uint8_t acc[20]; \
uint32_t cls = (uint32_t)ledger_seq(); \
hook_account(SBUF(acc)); \
_01_02_ENCODE_TT (buf_out, ttHOOK_SET ); \
_02_02_ENCODE_FLAGS (buf_out, tfCANONICAL ); \
_02_04_ENCODE_SEQUENCE (buf_out, 0 ); \
_02_26_ENCODE_FLS (buf_out, cls + 1 ); \
_02_27_ENCODE_LLS (buf_out, cls + 5 ); \
uint8_t* fee_ptr = buf_out; \
_06_08_ENCODE_DROPS_FEE (buf_out, 0 ); \
_07_03_ENCODE_SIGNING_PUBKEY_NULL (buf_out ); \
_08_01_ENCODE_ACCOUNT_SRC (buf_out, acc ); \
uint32_t remaining_size = (maxlen) - (buf_out - (buf_out_master)); \
int64_t edlen = etxn_details((uint32_t)buf_out, remaining_size); \
buf_out += edlen; \
*buf_out++ = 0xFBU; /* hook array start */ \
_0E_0E_ENCODE_HOOKOBJ (buf_out, h[0]); \
_0E_0E_ENCODE_HOOKOBJ (buf_out, h[1]); \
_0E_0E_ENCODE_HOOKOBJ (buf_out, h[2]); \
_0E_0E_ENCODE_HOOKOBJ (buf_out, h[3]); \
_0E_0E_ENCODE_HOOKOBJ (buf_out, h[4]); \
_0E_0E_ENCODE_HOOKOBJ (buf_out, h[5]); \
_0E_0E_ENCODE_HOOKOBJ (buf_out, h[6]); \
_0E_0E_ENCODE_HOOKOBJ (buf_out, h[7]); \
_0E_0E_ENCODE_HOOKOBJ (buf_out, h[8]); \
_0E_0E_ENCODE_HOOKOBJ (buf_out, h[9]); \
*buf_out++ = 0xF1U; /* hook array end */ \
sizeout = (buf_out - (buf_out_master)); \
int64_t fee = etxn_fee_base(buf_out_master, sizeout); \
_06_08_ENCODE_DROPS_FEE (fee_ptr, fee ); \
}
#ifdef HAS_CALLBACK
#define PREPARE_PAYMENT_SIMPLE_SIZE 270U
#else
#define PREPARE_PAYMENT_SIMPLE_SIZE 248U
#endif
#define PREPARE_PAYMENT_SIMPLE(buf_out_master, drops_amount_raw, to_address, dest_tag_raw, src_tag_raw)\
{\
uint8_t* buf_out = buf_out_master;\
uint8_t acc[20];\
uint64_t drops_amount = (drops_amount_raw);\
uint32_t dest_tag = (dest_tag_raw);\
uint32_t src_tag = (src_tag_raw);\
uint32_t cls = (uint32_t)ledger_seq();\
hook_account(SBUF(acc));\
_01_02_ENCODE_TT (buf_out, ttPAYMENT ); /* uint16 | size 3 */ \
_02_02_ENCODE_FLAGS (buf_out, tfCANONICAL ); /* uint32 | size 5 */ \
_02_03_ENCODE_TAG_SRC (buf_out, src_tag ); /* uint32 | size 5 */ \
_02_04_ENCODE_SEQUENCE (buf_out, 0 ); /* uint32 | size 5 */ \
_02_14_ENCODE_TAG_DST (buf_out, dest_tag ); /* uint32 | size 5 */ \
_02_26_ENCODE_FLS (buf_out, cls + 1 ); /* uint32 | size 6 */ \
_02_27_ENCODE_LLS (buf_out, cls + 5 ); /* uint32 | size 6 */ \
_06_01_ENCODE_DROPS_AMOUNT (buf_out, drops_amount ); /* amount | size 9 */ \
uint8_t* fee_ptr = buf_out;\
_06_08_ENCODE_DROPS_FEE (buf_out, 0 ); /* amount | size 9 */ \
_07_03_ENCODE_SIGNING_PUBKEY_NULL (buf_out ); /* pk | size 35 */ \
_08_01_ENCODE_ACCOUNT_SRC (buf_out, acc ); /* account | size 22 */ \
_08_03_ENCODE_ACCOUNT_DST (buf_out, to_address ); /* account | size 22 */ \
int64_t edlen = etxn_details((uint32_t)buf_out, PREPARE_PAYMENT_SIMPLE_SIZE); /* emitdet | size 1?? */ \
int64_t fee = etxn_fee_base(buf_out_master, PREPARE_PAYMENT_SIMPLE_SIZE); \
_06_08_ENCODE_DROPS_FEE (fee_ptr, fee ); \
}
#ifdef HAS_CALLBACK
#define PREPARE_PAYMENT_SIMPLE_TRUSTLINE_SIZE 309
#else
#define PREPARE_PAYMENT_SIMPLE_TRUSTLINE_SIZE 287
#endif
#define PREPARE_PAYMENT_SIMPLE_TRUSTLINE(buf_out_master, tlamt, to_address, dest_tag_raw, src_tag_raw)\
{\
uint8_t* buf_out = buf_out_master;\
uint8_t acc[20];\
uint32_t dest_tag = (dest_tag_raw);\
uint32_t src_tag = (src_tag_raw);\
uint32_t cls = (uint32_t)ledger_seq();\
hook_account(SBUF(acc));\
_01_02_ENCODE_TT (buf_out, ttPAYMENT ); /* uint16 | size 3 */ \
_02_02_ENCODE_FLAGS (buf_out, tfCANONICAL ); /* uint32 | size 5 */ \
_02_03_ENCODE_TAG_SRC (buf_out, src_tag ); /* uint32 | size 5 */ \
_02_04_ENCODE_SEQUENCE (buf_out, 0 ); /* uint32 | size 5 */ \
_02_14_ENCODE_TAG_DST (buf_out, dest_tag ); /* uint32 | size 5 */ \
_02_26_ENCODE_FLS (buf_out, cls + 1 ); /* uint32 | size 6 */ \
_02_27_ENCODE_LLS (buf_out, cls + 5 ); /* uint32 | size 6 */ \
_06_01_ENCODE_TL_AMOUNT (buf_out, tlamt ); /* amount | size 48 */ \
uint8_t* fee_ptr = buf_out;\
_06_08_ENCODE_DROPS_FEE (buf_out, 0 ); /* amount | size 9 */ \
_07_03_ENCODE_SIGNING_PUBKEY_NULL (buf_out ); /* pk | size 35 */ \
_08_01_ENCODE_ACCOUNT_SRC (buf_out, acc ); /* account | size 22 */ \
_08_03_ENCODE_ACCOUNT_DST (buf_out, to_address ); /* account | size 22 */ \
etxn_details((uint32_t)buf_out, PREPARE_PAYMENT_SIMPLE_TRUSTLINE_SIZE); /* emitdet | size 1?? */ \
int64_t fee = etxn_fee_base(buf_out_master, PREPARE_PAYMENT_SIMPLE_TRUSTLINE_SIZE); \
_06_08_ENCODE_DROPS_FEE (fee_ptr, fee ); \
}
#endif
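Putting the PREPARE_ macros to use, a minimal sketch (illustrative; dest is an assumed 20-byte destination account ID filled elsewhere):
// Emit a 1000-drop payment from inside a hook.
uint8_t tx[PREPARE_PAYMENT_SIMPLE_SIZE];
uint8_t dest[20];                              // assumed: destination account ID
etxn_reserve(1);                               // reserve before etxn_details/etxn_fee_base
PREPARE_PAYMENT_SIMPLE(tx, 1000, dest, 0, 0);  // drops, dest, dest_tag, src_tag
uint8_t etxid[32];
int64_t rc = emit(SBUF(etxid), SBUF(tx));      // writes the emitted txn hash on success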

View File

@@ -15,6 +15,7 @@
#define sfHookEmitCount ((1U << 16U) + 18U)
#define sfHookExecutionIndex ((1U << 16U) + 19U)
#define sfHookApiVersion ((1U << 16U) + 20U)
#define sfHookStateScale ((1U << 16U) + 21U)
#define sfNetworkID ((2U << 16U) + 1U)
#define sfFlags ((2U << 16U) + 2U)
#define sfSourceTag ((2U << 16U) + 3U)
@@ -60,7 +61,13 @@
#define sfBurnedNFTokens ((2U << 16U) + 44U)
#define sfHookStateCount ((2U << 16U) + 45U)
#define sfEmitGeneration ((2U << 16U) + 46U)
#define sfLockCount ((2U << 16U) + 47U)
#define sfLockCount ((2U << 16U) + 49U)
#define sfFirstNFTokenSequence ((2U << 16U) + 50U)
#define sfStartTime ((2U << 16U) + 93U)
#define sfRepeatCount ((2U << 16U) + 94U)
#define sfDelaySeconds ((2U << 16U) + 95U)
#define sfXahauActivationLgrSeq ((2U << 16U) + 96U)
#define sfImportSequence ((2U << 16U) + 97U)
#define sfRewardTime ((2U << 16U) + 98U)
#define sfRewardLgrFirst ((2U << 16U) + 99U)
#define sfRewardLgrLast ((2U << 16U) + 100U)
@@ -80,12 +87,15 @@
#define sfHookInstructionCount ((3U << 16U) + 17U)
#define sfHookReturnCode ((3U << 16U) + 18U)
#define sfReferenceCount ((3U << 16U) + 19U)
#define sfTouchCount ((3U << 16U) + 97U)
#define sfAccountIndex ((3U << 16U) + 98U)
#define sfAccountCount ((3U << 16U) + 99U)
#define sfRewardAccumulator ((3U << 16U) + 100U)
#define sfEmailHash ((4U << 16U) + 1U)
#define sfTakerPaysCurrency ((10U << 16U) + 1U)
#define sfTakerPaysIssuer ((10U << 16U) + 2U)
#define sfTakerGetsCurrency ((10U << 16U) + 3U)
#define sfTakerGetsIssuer ((10U << 16U) + 4U)
#define sfTakerPaysCurrency ((17U << 16U) + 1U)
#define sfTakerPaysIssuer ((17U << 16U) + 2U)
#define sfTakerGetsCurrency ((17U << 16U) + 3U)
#define sfTakerGetsIssuer ((17U << 16U) + 4U)
#define sfLedgerHash ((5U << 16U) + 1U)
#define sfParentHash ((5U << 16U) + 2U)
#define sfTransactionHash ((5U << 16U) + 3U)
@@ -99,6 +109,7 @@
#define sfEmitParentTxnID ((5U << 16U) + 11U)
#define sfEmitNonce ((5U << 16U) + 12U)
#define sfEmitHookHash ((5U << 16U) + 13U)
#define sfObjectID ((5U << 16U) + 14U)
#define sfBookDirectory ((5U << 16U) + 16U)
#define sfInvoiceID ((5U << 16U) + 17U)
#define sfNickname ((5U << 16U) + 18U)
@@ -120,6 +131,11 @@
#define sfOfferID ((5U << 16U) + 34U)
#define sfEscrowID ((5U << 16U) + 35U)
#define sfURITokenID ((5U << 16U) + 36U)
#define sfGovernanceFlags ((5U << 16U) + 99U)
#define sfGovernanceMarks ((5U << 16U) + 98U)
#define sfEmittedTxnID ((5U << 16U) + 97U)
#define sfHookCanEmit ((5U << 16U) + 96U)
#define sfCron ((5U << 16U) + 95U)
#define sfAmount ((6U << 16U) + 1U)
#define sfBalance ((6U << 16U) + 2U)
#define sfLimitAmount ((6U << 16U) + 3U)
@@ -136,6 +152,9 @@
#define sfNFTokenBrokerFee ((6U << 16U) + 19U)
#define sfHookCallbackFee ((6U << 16U) + 20U)
#define sfLockedBalance ((6U << 16U) + 21U)
#define sfBaseFeeDrops ((6U << 16U) + 22U)
#define sfReserveBaseDrops ((6U << 16U) + 23U)
#define sfReserveIncrementDrops ((6U << 16U) + 24U)
#define sfPublicKey ((7U << 16U) + 1U)
#define sfMessageKey ((7U << 16U) + 2U)
#define sfSigningPubKey ((7U << 16U) + 3U)
@@ -161,6 +180,8 @@
#define sfHookParameterName ((7U << 16U) + 24U)
#define sfHookParameterValue ((7U << 16U) + 25U)
#define sfBlob ((7U << 16U) + 26U)
#define sfRemarkValue ((7U << 16U) + 98U)
#define sfRemarkName ((7U << 16U) + 99U)
#define sfAccount ((8U << 16U) + 1U)
#define sfOwner ((8U << 16U) + 2U)
#define sfDestination ((8U << 16U) + 3U)
@@ -171,11 +192,13 @@
#define sfNFTokenMinter ((8U << 16U) + 9U)
#define sfEmitCallback ((8U << 16U) + 10U)
#define sfHookAccount ((8U << 16U) + 16U)
#define sfInform ((8U << 16U) + 99U)
#define sfIndexes ((19U << 16U) + 1U)
#define sfHashes ((19U << 16U) + 2U)
#define sfAmendments ((19U << 16U) + 3U)
#define sfNFTokenOffers ((19U << 16U) + 4U)
#define sfHookNamespaces ((19U << 16U) + 5U)
#define sfURITokenIDs ((19U << 16U) + 99U)
#define sfPaths ((18U << 16U) + 1U)
#define sfTransactionMetaData ((14U << 16U) + 2U)
#define sfCreatedNode ((14U << 16U) + 3U)
@@ -198,6 +221,13 @@
#define sfHookDefinition ((14U << 16U) + 22U)
#define sfHookParameter ((14U << 16U) + 23U)
#define sfHookGrant ((14U << 16U) + 24U)
#define sfRemark ((14U << 16U) + 97U)
#define sfGenesisMint ((14U << 16U) + 96U)
#define sfActiveValidator ((14U << 16U) + 95U)
#define sfImportVLKey ((14U << 16U) + 94U)
#define sfHookEmission ((14U << 16U) + 93U)
#define sfMintURIToken ((14U << 16U) + 92U)
#define sfAmountEntry ((14U << 16U) + 91U)
#define sfSigners ((15U << 16U) + 3U)
#define sfSignerEntries ((15U << 16U) + 4U)
#define sfTemplate ((15U << 16U) + 5U)
@@ -212,4 +242,9 @@
#define sfHookExecutions ((15U << 16U) + 18U)
#define sfHookParameters ((15U << 16U) + 19U)
#define sfHookGrants ((15U << 16U) + 20U)
#define sfRemarks ((15U << 16U) + 97U)
#define sfGenesisMints ((15U << 16U) + 96U)
#define sfActiveValidators ((15U << 16U) + 95U)
#define sfImportVLKeys ((15U << 16U) + 94U)
#define sfHookEmissions ((15U << 16U) + 93U)
#define sfAmounts ((15U << 16U) + 92U)

View File

@@ -1,4 +1,5 @@
// For documentation please see: https://xrpl-hooks.readme.io/reference/
// Generated using generate_tts.sh
#define ttPAYMENT 0
#define ttESCROW_CREATE 1
#define ttESCROW_FINISH 2
@@ -8,6 +9,7 @@
// #define ttNICKNAME_SET 6 // deprecated
#define ttOFFER_CREATE 7
#define ttOFFER_CANCEL 8
// #define ttCONTRACT 9 // deprecated
#define ttTICKET_CREATE 10
// #define ttSPINAL_TAP 11 // deprecated
#define ttSIGNER_LIST_SET 12
@@ -26,11 +28,16 @@
#define ttNFTOKEN_CREATE_OFFER 27
#define ttNFTOKEN_CANCEL_OFFER 28
#define ttNFTOKEN_ACCEPT_OFFER 29
#define ttCLAWBACK 30
#define ttURITOKEN_MINT 45
#define ttURITOKEN_BURN 46
#define ttURITOKEN_BUY 47
#define ttURITOKEN_CREATE_SELL_OFFER 48
#define ttURITOKEN_CANCEL_SELL_OFFER 49
#define ttCRON 92
#define ttCRON_SET 93
#define ttREMARKS_SET 94
#define ttREMIT 95
#define ttGENESIS_MINT 96
#define ttIMPORT 97
#define ttCLAIM_REWARD 98
@@ -39,4 +46,4 @@
#define ttFEE 101
#define ttUNL_MODIFY 102
#define ttEMIT_FAILURE 103
#define ttUNL_REPORT 104
#define ttUNL_REPORT 104

File diff suppressed because it is too large

View File

@@ -1,21 +1,35 @@
#!/bin/bash
# We use set -e to bail on the first non-zero exit code of any
# process launched.
# We use set -x to print commands before running them to help with
# debugging.
set -ex
echo "START BUILDING (HOST)"
echo "Cleaning previously built binary"
rm -f release-build/xahaud
BUILD_CORES=$(echo "scale=0 ; `nproc` / 3" | bc)
BUILD_CORES=$(echo "scale=0 ; `nproc` / 1.337" | bc)
if [[ "$GITHUB_REPOSITORY" == "" ]]; then
#Default
BUILD_CORES=8
BUILD_CORES=${BUILD_CORES:-8}
fi
# Ensure still works outside of GH Actions by setting these to /dev/null
# GA will run this script and then delete it at the end of the job
JOB_CLEANUP_SCRIPT=${JOB_CLEANUP_SCRIPT:-/dev/null}
NORMALIZED_WORKFLOW=$(echo "$GITHUB_WORKFLOW" | tr -c 'a-zA-Z0-9' '-')
NORMALIZED_REF=$(echo "$GITHUB_REF" | tr -c 'a-zA-Z0-9' '-')
CONTAINER_NAME="xahaud_cached_builder_${NORMALIZED_WORKFLOW}-${NORMALIZED_REF}"
echo "-- BUILD CORES: $BUILD_CORES"
echo "-- GITHUB_REPOSITORY: $GITHUB_REPOSITORY"
echo "-- GITHUB_SHA: $GITHUB_SHA"
echo "-- GITHUB_RUN_NUMBER: $GITHUB_RUN_NUMBER"
echo "-- CONTAINER_NAME: $CONTAINER_NAME"
which docker > /dev/null 2> /dev/null
if [ "$?" -eq "1" ]
@@ -31,29 +45,195 @@ then
exit 1
fi
STATIC_CONTAINER=$(docker ps -a | grep xahaud_cached_builder |wc -l)
STATIC_CONTAINER=$(docker ps -a | grep $CONTAINER_NAME |wc -l)
if [[ "$STATIC_CONTAINER" -gt "0" && "$GITHUB_REPOSITORY" != "" ]]; then
CACHE_VOLUME_NAME="xahau-release-builder-cache"
# if [[ "$STATIC_CONTAINER" -gt "0" && "$GITHUB_REPOSITORY" != "" ]]; then
if false; then
echo "Static container, execute in static container to have max. cache"
docker start xahaud_cached_builder
docker exec -i xahaud_cached_builder /hbb_exe/activate-exec bash -x /io/build-core.sh "$GITHUB_REPOSITORY" "$GITHUB_SHA" "$BUILD_CORES" "$GITHUB_RUN_NUMBER"
docker stop xahaud_cached_builder
docker start $CONTAINER_NAME
docker exec -i $CONTAINER_NAME /hbb_exe/activate-exec bash -c "source /opt/rh/gcc-toolset-11/enable && bash -x /io/build-core.sh '$GITHUB_REPOSITORY' '$GITHUB_SHA' '$BUILD_CORES' '$GITHUB_RUN_NUMBER'"
docker stop $CONTAINER_NAME
else
echo "No static container, build on temp container"
rm -rf release-build;
mkdir -p release-build;
docker volume create $CACHE_VOLUME_NAME
# Create inline Dockerfile with environment setup for build-full.sh
DOCKERFILE_CONTENT=$(cat <<'DOCKERFILE_EOF'
FROM ghcr.io/phusion/holy-build-box:4.0.1-amd64
ARG BUILD_CORES=8
# Enable repositories and install dependencies
RUN /hbb_exe/activate-exec bash -c "dnf install -y epel-release && \
dnf config-manager --set-enabled powertools || dnf config-manager --set-enabled crb && \
dnf install -y --enablerepo=devel \
wget git \
gcc-toolset-11-gcc-c++ gcc-toolset-11-binutils gcc-toolset-11-libatomic-devel \
lz4 lz4-devel \
ncurses-static ncurses-devel \
snappy snappy-devel \
zlib zlib-devel zlib-static \
libasan \
python3 python3-pip \
ccache \
ninja-build \
patch \
glibc-devel glibc-static \
libxml2-devel \
autoconf \
automake \
texinfo \
libtool \
llvm14-static llvm14-devel && \
dnf clean all"
# Install Conan 2 and CMake
RUN /hbb_exe/activate-exec pip3 install "conan>=2.0,<3.0" && \
/hbb_exe/activate-exec wget -q https://github.com/Kitware/CMake/releases/download/v3.23.1/cmake-3.23.1-linux-x86_64.tar.gz -O cmake.tar.gz && \
mkdir cmake && \
tar -xzf cmake.tar.gz --strip-components=1 -C cmake && \
rm cmake.tar.gz
# Dual Boost configuration in HBB environment:
# - Manual Boost in /usr/local (minimal: for WasmEdge which is pre-built in Docker)
# - Conan Boost (full: for the application and all dependencies via toolchain)
#
# Install minimal Boost 1.86.0 for WasmEdge only (filesystem and its dependencies)
# The main application will use Conan-provided Boost for all other components
# IMPORTANT: Understanding Boost linking options:
# - link=static: Creates static Boost libraries (.a files) instead of shared (.so files)
# - runtime-link=shared: Links Boost libraries against shared libc (glibc)
# WasmEdge only needs boost::filesystem and boost::system
RUN /hbb_exe/activate-exec bash -c "echo 'Boost cache bust: v5-minimal' && \
rm -rf /usr/local/lib/libboost* /usr/local/include/boost && \
cd /tmp && \
wget -q https://archives.boost.io/release/1.86.0/source/boost_1_86_0.tar.gz -O boost.tar.gz && \
mkdir boost && \
tar -xzf boost.tar.gz --strip-components=1 -C boost && \
cd boost && \
./bootstrap.sh && \
./b2 install \
link=static runtime-link=shared -j${BUILD_CORES} \
--with-filesystem --with-system && \
cd /tmp && \
rm -rf boost boost.tar.gz"
ENV CMAKE_EXE_LINKER_FLAGS="-static-libstdc++"
ENV LLVM_DIR=/usr/lib64/llvm14/lib/cmake/llvm
ENV WasmEdge_LIB=/usr/local/lib64/libwasmedge.a
ENV CC='ccache gcc'
ENV CXX='ccache g++'
# Install LLD
RUN /hbb_exe/activate-exec bash -c "source /opt/rh/gcc-toolset-11/enable && \
cd /tmp && \
wget -q https://github.com/llvm/llvm-project/releases/download/llvmorg-14.0.3/lld-14.0.3.src.tar.xz && \
wget -q https://github.com/llvm/llvm-project/releases/download/llvmorg-14.0.3/libunwind-14.0.3.src.tar.xz && \
tar -xf lld-14.0.3.src.tar.xz && \
tar -xf libunwind-14.0.3.src.tar.xz && \
cp -r libunwind-14.0.3.src/include libunwind-14.0.3.src/src lld-14.0.3.src/ && \
cd lld-14.0.3.src && \
mkdir -p build && cd build && \
cmake .. \
-DLLVM_LIBRARY_DIR=/usr/lib64/llvm14/lib/ \
-DCMAKE_INSTALL_PREFIX=/usr/lib64/llvm14/ \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_EXE_LINKER_FLAGS=\"\$CMAKE_EXE_LINKER_FLAGS\" && \
make -j${BUILD_CORES} install && \
ln -s /usr/lib64/llvm14/lib/include/lld /usr/include/lld && \
cp /usr/lib64/llvm14/lib/liblld*.a /usr/local/lib/ && \
cd /tmp && rm -rf lld-* libunwind-*"
# Build and install WasmEdge (static version)
# Note: Conan only provides WasmEdge with shared library linking.
# For a fully static build, we need to manually install:
# * Boost: Static C++ libraries for filesystem and system operations (built from source above)
# * LLVM: Static LLVM libraries for WebAssembly compilation (installed via llvm14-static package)
# * LLD: Static linker to produce the final static binary (built from source above)
# These were installed above to enable WASMEDGE_LINK_LLVM_STATIC=ON
RUN cd /tmp && \
( wget -nc -q https://github.com/WasmEdge/WasmEdge/archive/refs/tags/0.11.2.zip; unzip -o 0.11.2.zip; ) && \
cd WasmEdge-0.11.2 && \
( mkdir -p build; echo "" ) && \
cd build && \
/hbb_exe/activate-exec bash -c "source /opt/rh/gcc-toolset-11/enable && \
ln -sf /opt/rh/gcc-toolset-11/root/usr/bin/ar /usr/bin/ar && \
ln -sf /opt/rh/gcc-toolset-11/root/usr/bin/ranlib /usr/bin/ranlib && \
echo '=== Binutils version check ===' && \
ar --version | head -1 && \
ranlib --version | head -1 && \
cmake .. \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/usr/local \
-DCMAKE_POSITION_INDEPENDENT_CODE=ON \
-DWASMEDGE_BUILD_SHARED_LIB=OFF \
-DWASMEDGE_BUILD_STATIC_LIB=ON \
-DWASMEDGE_BUILD_AOT_RUNTIME=ON \
-DWASMEDGE_FORCE_DISABLE_LTO=ON \
-DWASMEDGE_LINK_LLVM_STATIC=ON \
-DWASMEDGE_BUILD_PLUGINS=OFF \
-DWASMEDGE_LINK_TOOLS_STATIC=ON \
-DBoost_NO_BOOST_CMAKE=ON \
-DCMAKE_EXE_LINKER_FLAGS=\"\$CMAKE_EXE_LINKER_FLAGS\" \
&& \
make -j${BUILD_CORES} install" && \
cp -r include/api/wasmedge /usr/include/ && \
cd /tmp && rm -rf WasmEdge* 0.11.2.zip
# Set environment variables
ENV PATH=/usr/local/bin:$PATH
# Configure ccache and Conan 2
# NOTE: Using echo commands instead of heredocs because heredocs in Docker RUN commands are finicky
RUN /hbb_exe/activate-exec bash -c "ccache -M 10G && \
ccache -o cache_dir=/cache/ccache && \
ccache -o compiler_check=content && \
mkdir -p ~/.conan2 /cache/conan2 /cache/conan2_download /cache/conan2_sources && \
echo 'core.cache:storage_path=/cache/conan2' > ~/.conan2/global.conf && \
echo 'core.download:download_cache=/cache/conan2_download' >> ~/.conan2/global.conf && \
echo 'core.sources:download_cache=/cache/conan2_sources' >> ~/.conan2/global.conf && \
conan profile detect --force && \
echo '[settings]' > ~/.conan2/profiles/default && \
echo 'arch=x86_64' >> ~/.conan2/profiles/default && \
echo 'build_type=Release' >> ~/.conan2/profiles/default && \
echo 'compiler=gcc' >> ~/.conan2/profiles/default && \
echo 'compiler.cppstd=20' >> ~/.conan2/profiles/default && \
echo 'compiler.libcxx=libstdc++11' >> ~/.conan2/profiles/default && \
echo 'compiler.version=11' >> ~/.conan2/profiles/default && \
echo 'os=Linux' >> ~/.conan2/profiles/default && \
echo '' >> ~/.conan2/profiles/default && \
echo '[conf]' >> ~/.conan2/profiles/default && \
echo '# Force building from source for packages with binary compatibility issues' >> ~/.conan2/profiles/default && \
echo '*:tools.system.package_manager:mode=build' >> ~/.conan2/profiles/default"
DOCKERFILE_EOF
)
# Build custom Docker image
IMAGE_NAME="xahaud-builder:latest"
echo "Building custom Docker image with dependencies..."
echo "$DOCKERFILE_CONTENT" | docker build --build-arg BUILD_CORES="$BUILD_CORES" -t "$IMAGE_NAME" - || exit 1
if [[ "$GITHUB_REPOSITORY" == "" ]]; then
# Non GH, local building
echo "Non-GH runner, local building, temp container"
docker run -i --user 0:$(id -g) --rm -v /data/builds:/data/builds -v `pwd`:/io --network host ghcr.io/foobarwidget/holy-build-box-x64 /hbb_exe/activate-exec bash -x /io/build-full.sh "$GITHUB_REPOSITORY" "$GITHUB_SHA" "$BUILD_CORES" "$GITHUB_RUN_NUMBER"
docker run -i --user 0:$(id -g) --rm -v /data/builds:/data/builds -v `pwd`:/io -v "$CACHE_VOLUME_NAME":/cache --network host "$IMAGE_NAME" /hbb_exe/activate-exec bash -c "source /opt/rh/gcc-toolset-11/enable && bash -x /io/build-full.sh '$GITHUB_REPOSITORY' '$GITHUB_SHA' '$BUILD_CORES' '$GITHUB_RUN_NUMBER'"
else
# GH Action, runner
echo "GH Action, runner, clean & re-create create persistent container"
docker rm -f xahaud_cached_builder
docker run -di --user 0:$(id -g) --name xahaud_cached_builder -v /data/builds:/data/builds -v `pwd`:/io --network host ghcr.io/foobarwidget/holy-build-box-x64 /hbb_exe/activate-exec bash
docker exec -i xahaud_cached_builder /hbb_exe/activate-exec bash -x /io/build-full.sh "$GITHUB_REPOSITORY" "$GITHUB_SHA" "$BUILD_CORES" "$GITHUB_RUN_NUMBER"
docker stop xahaud_cached_builder
docker rm -f $CONTAINER_NAME
echo "echo 'Stopping container: $CONTAINER_NAME'" >> "$JOB_CLEANUP_SCRIPT"
echo "docker stop --time=15 \"$CONTAINER_NAME\" || echo 'Failed to stop container or container not running'" >> "$JOB_CLEANUP_SCRIPT"
docker run -di --user 0:$(id -g) --name $CONTAINER_NAME -v /data/builds:/data/builds -v `pwd`:/io -v "$CACHE_VOLUME_NAME":/cache --network host "$IMAGE_NAME" /hbb_exe/activate-exec bash
docker exec -i $CONTAINER_NAME /hbb_exe/activate-exec bash -c "source /opt/rh/gcc-toolset-11/enable && bash -x /io/build-full.sh '$GITHUB_REPOSITORY' '$GITHUB_SHA' '$BUILD_CORES' '$GITHUB_RUN_NUMBER'"
docker stop $CONTAINER_NAME
fi
fi

View File

@@ -0,0 +1,48 @@
cmake_minimum_required(VERSION 3.11)
project(ed25519
LANGUAGES C
)
if(PROJECT_NAME STREQUAL CMAKE_PROJECT_NAME)
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY "${PROJECT_BINARY_DIR}/output/$<CONFIG>/lib")
endif()
if(NOT TARGET OpenSSL::SSL)
find_package(OpenSSL)
endif()
add_library(ed25519 STATIC
ed25519.c
)
add_library(ed25519::ed25519 ALIAS ed25519)
target_link_libraries(ed25519 PUBLIC OpenSSL::SSL)
include(GNUInstallDirs)
#[=========================================================[
NOTE for macos:
https://github.com/floodyberry/ed25519-donna/issues/29
our source for ed25519-donna-portable.h has been
patched to workaround this.
#]=========================================================]
target_include_directories(ed25519 PUBLIC
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}>
)
install(
TARGETS ed25519
EXPORT ${PROJECT_NAME}-exports
ARCHIVE DESTINATION "${CMAKE_INSTALL_LIBDIR}"
)
install(
EXPORT ${PROJECT_NAME}-exports
DESTINATION "${CMAKE_INSTALL_LIBDIR}/cmake/${PROJECT_NAME}"
FILE ${PROJECT_NAME}-targets.cmake
NAMESPACE ${PROJECT_NAME}::
)
install(
FILES ed25519.h
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}"
)

View File

@@ -5,11 +5,11 @@
// | | | | (_| | (_| | | (__ | |____| | | | |_| | | | | | | | |____|_| |_|
// |_| |_|\__,_|\__, |_|\___| |______|_| |_|\__,_|_| |_| |_| \_____|
// __/ | https://github.com/Neargye/magic_enum
// |___/ version 0.9.3
// |___/ version 0.9.5
//
// Licensed under the MIT License <http://opensource.org/licenses/MIT>.
// SPDX-License-Identifier: MIT
// Copyright (c) 2019 - 2023 Daniil Goncharov <neargye@gmail.com>.
// Copyright (c) 2019 - 2024 Daniil Goncharov <neargye@gmail.com>.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
@@ -34,7 +34,7 @@
#define MAGIC_ENUM_VERSION_MAJOR 0
#define MAGIC_ENUM_VERSION_MINOR 9
#define MAGIC_ENUM_VERSION_PATCH 3
#define MAGIC_ENUM_VERSION_PATCH 5
#include <array>
#include <cstddef>
@@ -60,7 +60,7 @@
#if defined(MAGIC_ENUM_NO_ASSERT)
# define MAGIC_ENUM_ASSERT(...) static_cast<void>(0)
#else
#elif !defined(MAGIC_ENUM_ASSERT)
# include <cassert>
# define MAGIC_ENUM_ASSERT(...) assert((__VA_ARGS__))
#endif
@@ -69,9 +69,11 @@
# pragma clang diagnostic push
# pragma clang diagnostic ignored "-Wunknown-warning-option"
# pragma clang diagnostic ignored "-Wenum-constexpr-conversion"
# pragma clang diagnostic ignored "-Wuseless-cast" // suppresses 'static_cast<char_type>('\0')' for char_type = char (common on Linux).
#elif defined(__GNUC__)
# pragma GCC diagnostic push
# pragma GCC diagnostic ignored "-Wmaybe-uninitialized" // May be used uninitialized 'return {};'.
# pragma GCC diagnostic ignored "-Wuseless-cast" // suppresses 'static_cast<char_type>('\0')' for char_type = char (common on Linux).
#elif defined(_MSC_VER)
# pragma warning(push)
# pragma warning(disable : 26495) // Variable 'static_str<N>::chars_' is uninitialized.
@@ -164,14 +166,11 @@ namespace customize {
// If need another range for specific enum type, add specialization enum_range for necessary enum type.
template <typename E>
struct enum_range {
static_assert(std::is_enum_v<E>, "magic_enum::customize::enum_range requires enum type.");
static constexpr int min = MAGIC_ENUM_RANGE_MIN;
static constexpr int max = MAGIC_ENUM_RANGE_MAX;
static_assert(max > min, "magic_enum::customize::enum_range requires max > min.");
};
static_assert(MAGIC_ENUM_RANGE_MAX > MAGIC_ENUM_RANGE_MIN, "MAGIC_ENUM_RANGE_MAX must be greater than MAGIC_ENUM_RANGE_MIN.");
static_assert((MAGIC_ENUM_RANGE_MAX - MAGIC_ENUM_RANGE_MIN) < (std::numeric_limits<std::uint16_t>::max)(), "MAGIC_ENUM_RANGE must be less than UINT16_MAX.");
namespace detail {
@@ -216,9 +215,9 @@ namespace detail {
template <typename T>
struct supported
#if defined(MAGIC_ENUM_SUPPORTED) && MAGIC_ENUM_SUPPORTED || defined(MAGIC_ENUM_NO_CHECK_SUPPORT)
: std::true_type {};
: std::true_type {};
#else
: std::false_type {};
: std::false_type {};
#endif
template <auto V, typename E = std::decay_t<decltype(V)>, std::enable_if_t<std::is_enum_v<E>, int> = 0>
@@ -423,10 +422,20 @@ constexpr auto n() noexcept {
constexpr auto name_ptr = MAGIC_ENUM_GET_TYPE_NAME_BUILTIN(E);
constexpr auto name = name_ptr ? str_view{name_ptr, std::char_traits<char>::length(name_ptr)} : str_view{};
#elif defined(__clang__)
auto name = str_view{__PRETTY_FUNCTION__ + 34, sizeof(__PRETTY_FUNCTION__) - 36};
str_view name;
if constexpr (sizeof(__PRETTY_FUNCTION__) == sizeof(__FUNCTION__)) {
static_assert(always_false_v<E>, "magic_enum::detail::n requires __PRETTY_FUNCTION__.");
return str_view{};
} else {
name.size_ = sizeof(__PRETTY_FUNCTION__) - 36;
name.str_ = __PRETTY_FUNCTION__ + 34;
}
#elif defined(__GNUC__)
auto name = str_view{__PRETTY_FUNCTION__, sizeof(__PRETTY_FUNCTION__) - 1};
if (name.str_[name.size_ - 1] == ']') {
if constexpr (sizeof(__PRETTY_FUNCTION__) == sizeof(__FUNCTION__)) {
static_assert(always_false_v<E>, "magic_enum::detail::n requires __PRETTY_FUNCTION__.");
return str_view{};
} else if (name.str_[name.size_ - 1] == ']') {
name.size_ -= 50;
name.str_ += 49;
} else {
@@ -489,7 +498,14 @@ constexpr auto n() noexcept {
constexpr auto name_ptr = MAGIC_ENUM_GET_ENUM_NAME_BUILTIN(V);
auto name = name_ptr ? str_view{name_ptr, std::char_traits<char>::length(name_ptr)} : str_view{};
#elif defined(__clang__)
auto name = str_view{__PRETTY_FUNCTION__ + 34, sizeof(__PRETTY_FUNCTION__) - 36};
str_view name;
if constexpr (sizeof(__PRETTY_FUNCTION__) == sizeof(__FUNCTION__)) {
static_assert(always_false_v<decltype(V)>, "magic_enum::detail::n requires __PRETTY_FUNCTION__.");
return str_view{};
} else {
name.size_ = sizeof(__PRETTY_FUNCTION__) - 36;
name.str_ = __PRETTY_FUNCTION__ + 34;
}
if (name.size_ > 22 && name.str_[0] == '(' && name.str_[1] == 'a' && name.str_[10] == ' ' && name.str_[22] == ':') {
name.size_ -= 23;
name.str_ += 23;
@@ -499,7 +515,10 @@ constexpr auto n() noexcept {
}
#elif defined(__GNUC__)
auto name = str_view{__PRETTY_FUNCTION__, sizeof(__PRETTY_FUNCTION__) - 1};
if (name.str_[name.size_ - 1] == ']') {
if constexpr (sizeof(__PRETTY_FUNCTION__) == sizeof(__FUNCTION__)) {
static_assert(always_false_v<decltype(V)>, "magic_enum::detail::n requires __PRETTY_FUNCTION__.");
return str_view{};
} else if (name.str_[name.size_ - 1] == ']') {
name.size_ -= 55;
name.str_ += 54;
} else {
@@ -698,7 +717,7 @@ constexpr void valid_count(bool* valid, std::size_t& count) noexcept {
} \
}
MAGIC_ENUM_FOR_EACH_256(MAGIC_ENUM_V);
MAGIC_ENUM_FOR_EACH_256(MAGIC_ENUM_V)
if constexpr ((I + 256) < Size) {
valid_count<E, S, Size, Min, I + 256>(valid, count);
@@ -750,7 +769,6 @@ constexpr auto values() noexcept {
constexpr auto max = reflected_max<E, S>();
constexpr auto range_size = max - min + 1;
static_assert(range_size > 0, "magic_enum::enum_range requires valid size.");
static_assert(range_size < (std::numeric_limits<std::uint16_t>::max)(), "magic_enum::enum_range requires valid size.");
return values<E, S, range_size, min>();
}
@@ -807,7 +825,8 @@ inline constexpr auto max_v = (count_v<E, S> > 0) ? static_cast<U>(values_v<E, S
template <typename E, enum_subtype S, std::size_t... I>
constexpr auto names(std::index_sequence<I...>) noexcept {
return std::array<string_view, sizeof...(I)>{{enum_name_v<E, values_v<E, S>[I]>...}};
constexpr auto names = std::array<string_view, sizeof...(I)>{{enum_name_v<E, values_v<E, S>[I]>...}};
return names;
}
template <typename E, enum_subtype S>
@@ -818,7 +837,8 @@ using names_t = decltype((names_v<D, S>));
template <typename E, enum_subtype S, std::size_t... I>
constexpr auto entries(std::index_sequence<I...>) noexcept {
return std::array<std::pair<E, string_view>, sizeof...(I)>{{{values_v<E, S>[I], enum_name_v<E, values_v<E, S>[I]>}...}};
constexpr auto entries = std::array<std::pair<E, string_view>, sizeof...(I)>{{{values_v<E, S>[I], enum_name_v<E, values_v<E, S>[I]>}...}};
return entries;
}
template <typename E, enum_subtype S>
@@ -845,17 +865,16 @@ constexpr bool is_sparse() noexcept {
template <typename E, enum_subtype S = subtype_v<E>>
inline constexpr bool is_sparse_v = is_sparse<E, S>();
template <typename E, enum_subtype S, typename U = std::underlying_type_t<E>>
constexpr U values_ors() noexcept {
static_assert(S == enum_subtype::flags, "magic_enum::detail::values_ors requires valid subtype.");
template <typename E, enum_subtype S>
struct is_reflected
#if defined(MAGIC_ENUM_NO_CHECK_REFLECTED_ENUM)
: std::true_type {};
#else
: std::bool_constant<std::is_enum_v<E> && (count_v<E, S> != 0)> {};
#endif
auto ors = U{0};
for (std::size_t i = 0; i < count_v<E, S>; ++i) {
ors |= static_cast<U>(values_v<E, S>[i]);
}
return ors;
}
template <typename E, enum_subtype S>
inline constexpr bool is_reflected_v = is_reflected<std::decay_t<E>, S>{};
template <bool, typename R>
struct enable_if_enum {};
@@ -1156,6 +1175,7 @@ template <typename E, detail::enum_subtype S = detail::subtype_v<E>>
template <typename E, detail::enum_subtype S = detail::subtype_v<E>>
[[nodiscard]] constexpr auto enum_value(std::size_t index) noexcept -> detail::enable_if_t<E, std::decay_t<E>> {
using D = std::decay_t<E>;
static_assert(detail::is_reflected_v<D, S>, "magic_enum requires enum implementation and valid max and min.");
if constexpr (detail::is_sparse_v<D, S>) {
return MAGIC_ENUM_ASSERT(index < detail::count_v<D, S>), detail::values_v<D, S>[index];
@@ -1170,6 +1190,7 @@ template <typename E, detail::enum_subtype S = detail::subtype_v<E>>
template <typename E, std::size_t I, detail::enum_subtype S = detail::subtype_v<E>>
[[nodiscard]] constexpr auto enum_value() noexcept -> detail::enable_if_t<E, std::decay_t<E>> {
using D = std::decay_t<E>;
static_assert(detail::is_reflected_v<D, S>, "magic_enum requires enum implementation and valid max and min.");
static_assert(I < detail::count_v<D, S>, "magic_enum::enum_value out of range.");
return enum_value<D, S>(I);
@@ -1178,7 +1199,10 @@ template <typename E, std::size_t I, detail::enum_subtype S = detail::subtype_v<
// Returns std::array with enum values, sorted by enum value.
template <typename E, detail::enum_subtype S = detail::subtype_v<E>>
[[nodiscard]] constexpr auto enum_values() noexcept -> detail::enable_if_t<E, detail::values_t<E, S>> {
return detail::values_v<std::decay_t<E>, S>;
using D = std::decay_t<E>;
static_assert(detail::is_reflected_v<D, S>, "magic_enum requires enum implementation and valid max and min.");
return detail::values_v<D, S>;
}
// Returns integer value from enum value.
@@ -1199,11 +1223,9 @@ template <typename E, detail::enum_subtype S = detail::subtype_v<E>>
[[nodiscard]] constexpr auto enum_index(E value) noexcept -> detail::enable_if_t<E, optional<std::size_t>> {
using D = std::decay_t<E>;
using U = underlying_type_t<D>;
+  static_assert(detail::is_reflected_v<D, S>, "magic_enum requires enum implementation and valid max and min.");
-  if constexpr (detail::count_v<D, S> == 0) {
-    static_cast<void>(value);
-    return {}; // Empty enum.
-  } else if constexpr (detail::is_sparse_v<D, S> || (S == detail::enum_subtype::flags)) {
+  if constexpr (detail::is_sparse_v<D, S> || (S == detail::enum_subtype::flags)) {
#if defined(MAGIC_ENUM_ENABLE_HASH)
return detail::constexpr_switch<&detail::values_v<D, S>, detail::case_call_t::index>(
[](std::size_t i) { return optional<std::size_t>{i}; },
@@ -1231,14 +1253,17 @@ template <typename E, detail::enum_subtype S = detail::subtype_v<E>>
template <detail::enum_subtype S, typename E>
[[nodiscard]] constexpr auto enum_index(E value) noexcept -> detail::enable_if_t<E, optional<std::size_t>> {
using D = std::decay_t<E>;
+  static_assert(detail::is_reflected_v<D, S>, "magic_enum requires enum implementation and valid max and min.");
return enum_index<D, S>(value);
}
// Obtains index in enum values from static storage enum variable.
template <auto V, detail::enum_subtype S = detail::subtype_v<std::decay_t<decltype(V)>>>
[[nodiscard]] constexpr auto enum_index() noexcept -> detail::enable_if_t<decltype(V), std::size_t> {
-  constexpr auto index = enum_index<std::decay_t<decltype(V)>, S>(V);
+  using D = std::decay_t<decltype(V)>;
+  static_assert(detail::is_reflected_v<D, S>, "magic_enum requires enum implementation and valid max and min.");
+  constexpr auto index = enum_index<D, S>(V);
static_assert(index, "magic_enum::enum_index enum value does not have a index.");
return *index;
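Usage sketch for the rewritten static-storage overload, with a hypothetical enum: everything resolves during constant evaluation, so both static_asserts (reflection and index validity) fire at compile time.

    #include <magic_enum.hpp>

    enum class Dir { Up, Down, Left, Right };

    static_assert(magic_enum::enum_index<Dir::Left>() == 2);  // index in value order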
@@ -1259,6 +1284,7 @@ template <auto V>
template <typename E, detail::enum_subtype S = detail::subtype_v<E>>
[[nodiscard]] constexpr auto enum_name(E value) noexcept -> detail::enable_if_t<E, string_view> {
using D = std::decay_t<E>;
+  static_assert(detail::is_reflected_v<D, S>, "magic_enum requires enum implementation and valid max and min.");
if (const auto i = enum_index<D, S>(value)) {
return detail::names_v<D, S>[*i];
@@ -1271,6 +1297,7 @@ template <typename E, detail::enum_subtype S = detail::subtype_v<E>>
template <detail::enum_subtype S, typename E>
[[nodiscard]] constexpr auto enum_name(E value) -> detail::enable_if_t<E, string_view> {
using D = std::decay_t<E>;
+  static_assert(detail::is_reflected_v<D, S>, "magic_enum requires enum implementation and valid max and min.");
return enum_name<D, S>(value);
}
@@ -1278,13 +1305,19 @@ template <detail::enum_subtype S, typename E>
// Returns std::array with names, sorted by enum value.
template <typename E, detail::enum_subtype S = detail::subtype_v<E>>
[[nodiscard]] constexpr auto enum_names() noexcept -> detail::enable_if_t<E, detail::names_t<E, S>> {
-  return detail::names_v<std::decay_t<E>, S>;
+  using D = std::decay_t<E>;
+  static_assert(detail::is_reflected_v<D, S>, "magic_enum requires enum implementation and valid max and min.");
+  return detail::names_v<D, S>;
}
// Returns std::array with pairs (value, name), sorted by enum value.
template <typename E, detail::enum_subtype S = detail::subtype_v<E>>
[[nodiscard]] constexpr auto enum_entries() noexcept -> detail::enable_if_t<E, detail::entries_t<E, S>> {
-  return detail::entries_v<std::decay_t<E>, S>;
+  using D = std::decay_t<E>;
+  static_assert(detail::is_reflected_v<D, S>, "magic_enum requires enum implementation and valid max and min.");
+  return detail::entries_v<D, S>;
}
// Allows you to write magic_enum::enum_cast<foo>("bar", magic_enum::case_insensitive);
@@ -1295,31 +1328,27 @@ inline constexpr auto case_insensitive = detail::case_insensitive<>{};
template <typename E, detail::enum_subtype S = detail::subtype_v<E>>
[[nodiscard]] constexpr auto enum_cast(underlying_type_t<E> value) noexcept -> detail::enable_if_t<E, optional<std::decay_t<E>>> {
using D = std::decay_t<E>;
+  static_assert(detail::is_reflected_v<D, S>, "magic_enum requires enum implementation and valid max and min.");
-  if constexpr (detail::count_v<D, S> == 0) {
-    static_cast<void>(value);
-    return {}; // Empty enum.
-  } else {
-    if constexpr (detail::is_sparse_v<D, S> || (S == detail::enum_subtype::flags)) {
+  if constexpr (detail::is_sparse_v<D, S> || (S == detail::enum_subtype::flags)) {
#if defined(MAGIC_ENUM_ENABLE_HASH)
-      return detail::constexpr_switch<&detail::values_v<D, S>, detail::case_call_t::value>(
-          [](D v) { return optional<D>{v}; },
-          static_cast<D>(value),
-          detail::default_result_type_lambda<optional<D>>);
+    return detail::constexpr_switch<&detail::values_v<D, S>, detail::case_call_t::value>(
+        [](D v) { return optional<D>{v}; },
+        static_cast<D>(value),
+        detail::default_result_type_lambda<optional<D>>);
#else
-      for (std::size_t i = 0; i < detail::count_v<D, S>; ++i) {
-        if (value == static_cast<underlying_type_t<D>>(enum_value<D, S>(i))) {
-          return static_cast<D>(value);
-        }
-      }
-      return {}; // Invalid value or out of range.
-#endif
-    } else {
-      if (value >= detail::min_v<D, S> && value <= detail::max_v<D, S>) {
-        return static_cast<D>(value);
-      }
-      return {}; // Invalid value or out of range.
-    }
-  }
+    for (std::size_t i = 0; i < detail::count_v<D, S>; ++i) {
+      if (value == static_cast<underlying_type_t<D>>(enum_value<D, S>(i))) {
+        return static_cast<D>(value);
+      }
+    }
+    return {}; // Invalid value or out of range.
+#endif
+  } else {
+    if (value >= detail::min_v<D, S> && value <= detail::max_v<D, S>) {
+      return static_cast<D>(value);
+    }
+    return {}; // Invalid value or out of range.
+  }
}
@@ -1328,26 +1357,23 @@ template <typename E, detail::enum_subtype S = detail::subtype_v<E>>
template <typename E, detail::enum_subtype S = detail::subtype_v<E>, typename BinaryPredicate = std::equal_to<>>
[[nodiscard]] constexpr auto enum_cast(string_view value, [[maybe_unused]] BinaryPredicate p = {}) noexcept(detail::is_nothrow_invocable<BinaryPredicate>()) -> detail::enable_if_t<E, optional<std::decay_t<E>>, BinaryPredicate> {
using D = std::decay_t<E>;
+  static_assert(detail::is_reflected_v<D, S>, "magic_enum requires enum implementation and valid max and min.");
-  if constexpr (detail::count_v<D, S> == 0) {
-    static_cast<void>(value);
-    return {}; // Empty enum.
#if defined(MAGIC_ENUM_ENABLE_HASH)
-  } else if constexpr (detail::is_default_predicate<BinaryPredicate>()) {
-    return detail::constexpr_switch<&detail::names_v<D, S>, detail::case_call_t::index>(
-        [](std::size_t i) { return optional<D>{detail::values_v<D, S>[i]}; },
-        value,
-        detail::default_result_type_lambda<optional<D>>,
-        [&p](string_view lhs, string_view rhs) { return detail::cmp_equal(lhs, rhs, p); });
-#endif
-  } else {
-    for (std::size_t i = 0; i < detail::count_v<D, S>; ++i) {
-      if (detail::cmp_equal(value, detail::names_v<D, S>[i], p)) {
-        return enum_value<D, S>(i);
-      }
-    }
-    return {}; // Invalid value or out of range.
-  }
+  if constexpr (detail::is_default_predicate<BinaryPredicate>()) {
+    return detail::constexpr_switch<&detail::names_v<D, S>, detail::case_call_t::index>(
+        [](std::size_t i) { return optional<D>{detail::values_v<D, S>[i]}; },
+        value,
+        detail::default_result_type_lambda<optional<D>>,
+        [&p](string_view lhs, string_view rhs) { return detail::cmp_equal(lhs, rhs, p); });
+  }
+#endif
+  for (std::size_t i = 0; i < detail::count_v<D, S>; ++i) {
+    if (detail::cmp_equal(value, detail::names_v<D, S>[i], p)) {
+      return enum_value<D, S>(i);
+    }
+  }
+  return {}; // Invalid value or out of range.
}
// Checks whether enum contains enumerator with such value.
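A short usage sketch for the rewritten enum_cast overloads, with a hypothetical enum: invalid inputs still come back as an empty optional at runtime; only unreflected enum types are now rejected at compile time.

    #include <magic_enum.hpp>

    enum class Color { Red = 1, Green = 2, Blue = 4 };

    int main() {
        auto a = magic_enum::enum_cast<Color>("Green");  // optional holding Color::Green
        auto b = magic_enum::enum_cast<Color>("GREEN", magic_enum::case_insensitive);
        auto c = magic_enum::enum_cast<Color>(3);        // empty optional: 3 is not a Color
        return (a && b && !c) ? 0 : 1;
    }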

View File

@@ -134,8 +134,12 @@ RCLConsensus::Adaptor::acquireLedger(LedgerHash const& hash)
acquiringLedger_ = hash;
app_.getJobQueue().addJob(
-            jtADVANCE, "getConsensusLedger", [id = hash, &app = app_]() {
-                app.getInboundLedgers().acquire(
+            jtADVANCE,
+            "getConsensusLedger1",
+            [id = hash, &app = app_, this]() {
+                JLOG(j_.debug())
+                    << "JOB advanceLedger getConsensusLedger1 started";
+                app.getInboundLedgers().acquireAsync(
id, 0, InboundLedger::Reason::CONSENSUS);
});
}
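A hedged sketch of the shape this hunk moves to; JobQueue and Ledgers below are stand-ins, not rippled's classes. The point is that the posted job captures its input by value and only starts the acquisition through an async entry point instead of doing the work inline:

    #include <functional>
    #include <iostream>
    #include <queue>
    #include <string>

    struct Ledgers {
        void acquireAsync(std::string const& id) {
            // Returns immediately; the real acquisition proceeds elsewhere.
            std::cout << "async acquire started for " << id << '\n';
        }
    };

    struct JobQueue {
        std::queue<std::function<void()>> jobs;
        void addJob(std::string const& name, std::function<void()> f) {
            std::cout << "queued " << name << '\n';
            jobs.push(std::move(f));
        }
        void runOne() { jobs.front()(); jobs.pop(); }
    };

    int main() {
        JobQueue jq;
        Ledgers ledgers;
        std::string hash = "ABCD1234";
        jq.addJob("getConsensusLedger1", [id = hash, &ledgers] { ledgers.acquireAsync(id); });
        jq.runOne();
    }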
@@ -182,7 +186,7 @@ RCLConsensus::Adaptor::share(RCLCxTx const& tx)
if (app_.getHashRouter().shouldRelay(tx.id()))
{
JLOG(j_.debug()) << "Relaying disputed tx " << tx.id();
-        auto const slice = tx.tx_.slice();
+        auto const slice = tx.tx_->slice();
protocol::TMTransaction msg;
msg.set_rawtransaction(slice.data(), slice.size());
msg.set_status(protocol::tsNEW);
@@ -326,7 +330,7 @@ RCLConsensus::Adaptor::onClose(
tx.first->add(s);
initialSet->addItem(
SHAMapNodeType::tnTRANSACTION_NM,
-                SHAMapItem(tx.first->getTransactionID(), s.slice()));
+                make_shamapitem(tx.first->getTransactionID(), s.slice()));
}
// Add pseudo-transactions to the set
@@ -370,7 +374,8 @@ RCLConsensus::Adaptor::onClose(
RCLCensorshipDetector<TxID, LedgerIndex>::TxIDSeqVec proposed;
initialSet->visitLeaves(
-        [&proposed, seq](std::shared_ptr<SHAMapItem const> const& item) {
+        [&proposed,
+         seq](boost::intrusive_ptr<SHAMapItem const> const& item) {
proposed.emplace_back(item->key(), seq);
});
@@ -493,15 +498,11 @@ RCLConsensus::Adaptor::doAccept(
for (auto const& item : *result.txns.map_)
{
-#ifndef DEBUG
try
{
-#endif
retriableTxs.insert(
std::make_shared<STTx const>(SerialIter{item.slice()}));
JLOG(j_.debug()) << " Tx: " << item.key();
-#ifndef DEBUG
}
catch (std::exception const& ex)
{
@@ -509,7 +510,6 @@ RCLConsensus::Adaptor::doAccept(
JLOG(j_.warn())
<< " Tx: " << item.key() << " throws: " << ex.what();
}
-#endif
}
auto built = buildLCL(
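The two hunks above drop the #ifndef DEBUG guards, so the try/catch around transaction deserialization is now compiled into every build; previously a malformed transaction would escape as an uncaught exception in a DEBUG build. A minimal illustration of the effect (insertTx is illustrative, not rippled code):

    #include <iostream>
    #include <stdexcept>

    void insertTx(bool malformed) {
        try {
            if (malformed)
                throw std::runtime_error("bad transaction serialization");
            std::cout << "tx inserted\n";
        }
        catch (std::exception const& ex) {
            // Logged and skipped rather than fatal, in debug builds too.
            std::cout << "tx skipped: " << ex.what() << '\n';
        }
    }

    int main() {
        insertTx(false);
        insertTx(true);
    }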
@@ -535,7 +535,7 @@ RCLConsensus::Adaptor::doAccept(
std::vector<TxID> accepted;
result.txns.map_->visitLeaves(
-        [&accepted](std::shared_ptr<SHAMapItem const> const& item) {
+        [&accepted](boost::intrusive_ptr<SHAMapItem const> const& item) {
accepted.push_back(item->key());
});
@@ -610,7 +610,7 @@ RCLConsensus::Adaptor::doAccept(
<< "Test applying disputed transaction that did"
<< " not get in " << dispute.tx().id();
-        SerialIter sit(dispute.tx().tx_.slice());
+        SerialIter sit(dispute.tx().tx_->slice());
auto txn = std::make_shared<STTx const>(sit);
// Disputed pseudo-transactions that were not accepted

View File

@@ -42,7 +42,7 @@ public:
@param txn The transaction to wrap
*/
-    RCLCxTx(SHAMapItem const& txn) : tx_{txn}
+    RCLCxTx(boost::intrusive_ptr<SHAMapItem const> txn) : tx_(std::move(txn))
{
}
@@ -50,11 +50,11 @@ public:
ID const&
id() const
{
-        return tx_.key();
+        return tx_->key();
}
//! The SHAMapItem that represents the transaction.
-    SHAMapItem const tx_;
+    boost::intrusive_ptr<SHAMapItem const> tx_;
};
/** Represents a set of transactions in RCLConsensus.
@@ -90,8 +90,7 @@ public:
bool
insert(Tx const& t)
{
-        return map_->addItem(
-            SHAMapNodeType::tnTRANSACTION_NM, SHAMapItem{t.tx_});
+        return map_->addItem(SHAMapNodeType::tnTRANSACTION_NM, t.tx_);
}
/** Remove a transaction from the set.
@@ -145,7 +144,7 @@ public:
code uses the shared_ptr semantics to know whether the find
was successful and properly creates a Tx as needed.
*/
-    std::shared_ptr<const SHAMapItem> const&
+    boost::intrusive_ptr<SHAMapItem const> const&
find(Tx::ID const& entry) const
{
return map_->peekItem(entry);
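This header shows the full shape of the migration: SHAMapItem values held by std::shared_ptr (or by value) become boost::intrusive_ptr<SHAMapItem const>. A hedged sketch of the model, with Item and make_item as illustrative stand-ins for SHAMapItem and make_shamapitem: the reference count is embedded in the object itself, so copying the pointer never allocates the separate control block that std::shared_ptr needs.

    #include <boost/intrusive_ptr.hpp>
    #include <atomic>
    #include <cstdio>

    struct Item {
        mutable std::atomic<int> refs{0};
        int key;
        explicit Item(int k) : key(k) {}
    };

    // boost::intrusive_ptr finds these hooks via ADL.
    inline void intrusive_ptr_add_ref(Item const* p) {
        p->refs.fetch_add(1, std::memory_order_relaxed);
    }
    inline void intrusive_ptr_release(Item const* p) {
        if (p->refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete p;
    }

    // make_shared-style factory, mirroring the make_shamapitem calls above.
    boost::intrusive_ptr<Item const> make_item(int key) {
        return boost::intrusive_ptr<Item const>(new Item(key));
    }

    int main() {
        auto a = make_item(42);
        auto b = a;  // bumps the embedded count; no extra allocation
        std::printf("key=%d refs=%d\n", b->key, a->refs.load());
    }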

Some files were not shown because too many files have changed in this diff.