Compare commits

...

256 Commits

Author SHA1 Message Date
godexsoft
782fea3837 [CI] clang-tidy auto fixes 2025-04-18 09:27:58 +00:00
Sergey Kuznetsov
bd9e39ee85 fix: Fix incorrect requests logging (#2005)
Fixes #2004.
2025-04-17 17:49:44 +01:00
Sergey Kuznetsov
46514c8fe9 feat: More efficient cache (#1997)
Fixes #1473.
2025-04-17 16:44:53 +01:00
github-actions[bot]
39d1ceace4 style: clang-tidy auto fixes (#2007)
Fixes #2006. 
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-04-17 11:05:27 +01:00
Peter Chen
89eb962d85 fix: CTID issue (#2001)
fixes #1998
2025-04-16 19:55:12 +01:00
Maria Shodunke
4a5fee7548 docs: Verify/review config descriptions (#1960)
- Reviewed and modified descriptions.
- Added some minor formatting.
- Removed the "Key:" before every key name, seems redundant.
2025-04-09 09:49:11 -07:00
Alex Kremer
fcd891148b chore: Fix clang-tidy 'fixes' (#1996)
Undo some of #1994
2025-04-08 17:18:33 +01:00
github-actions[bot]
99adb31184 style: clang-tidy auto fixes (#1994)
Fixes #1993. 

Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-04-08 11:18:52 +01:00
Sergey Kuznetsov
2c1a90a20d feat: Nodes communication via DB (#1976)
Fixes #1966.
2025-04-07 14:18:49 +01:00
github-actions[bot]
2385bf547b style: clang-tidy auto fixes (#1992)
Fixes #1991. 

Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-04-07 11:13:22 +01:00
Alex Kremer
1d011cf8d9 feat: ETLng integration (#1986)
For #1594
2025-04-04 15:52:22 +01:00
Alex Kremer
6896a2545a chore: Update workflow notification settings (#1990) 2025-04-04 15:51:13 +01:00
github-actions[bot]
91484c64e4 style: clang-tidy auto fixes (#1989)
Fixes #1988. Please review and commit clang-tidy fixes.

Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-04-03 11:08:47 +01:00
Alex Kremer
bdf7382d44 chore: Update maintainers (#1987) 2025-04-02 16:50:45 +01:00
Peter Chen
8a3e71e91f fix: incorrect set HighDeepFreeze flag (#1978)
fixes #1977
2025-04-02 09:53:35 -04:00
github-actions[bot]
e61ee30180 style: clang-tidy auto fixes (#1985)
Fixes #1984.

Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-04-02 12:40:38 +01:00
Sergey Kuznetsov
d3df6d10e4 fix: Fix ssl in new webserver (#1981)
Fixes #1980.

An SSL handshake was missing, and WsConnection should be built from the
stream rather than the socket, because for an SSL connection the stream has
already completed the SSL handshake.
2025-04-01 16:12:16 +01:00
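As a rough illustration of the ordering described above, here is a hedged Boost.Beast sketch with illustrative names (not Clio's actual WsConnection code): the SSL handshake is completed on the stream first, and the websocket layer is then built from that already-handshaken stream rather than from the raw socket.

```cpp
// Sketch only: names are illustrative, not Clio's actual classes.
#include <boost/asio/ssl.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/ssl.hpp>
#include <boost/beast/websocket.hpp>
#include <boost/beast/websocket/ssl.hpp>

#include <utility>

namespace asio = boost::asio;
namespace beast = boost::beast;

using SslStream = beast::ssl_stream<beast::tcp_stream>;

beast::websocket::stream<SslStream>
acceptWss(SslStream stream)
{
    // The missing step: complete the SSL handshake before the websocket
    // upgrade. The websocket must then be built from this stream (which now
    // holds the TLS state), not from the underlying raw socket.
    stream.handshake(asio::ssl::stream_base::server);

    beast::websocket::stream<SslStream> ws{std::move(stream)};
    ws.accept();  // reads the HTTP upgrade request over the encrypted stream
    return ws;
}
```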
Sergey Kuznetsov
60df3a1914 ci: Pin cmake 3.31.6 for macos runners (#1983)
Fixes #1982.
2025-04-01 14:49:17 +01:00
cyan317
f454076fb6 feat: Snapshot import feature (#1970)
Implements the snapshot import command:
`clio_snapshot --server --grpc_server 0.0.0.0:12345 --path <snapshot_path>`

Implements the snapshot range command:
`./clio_snapshot --range --path <snapshot_path>`

Adds:
- LedgerHouses: responsible for reading/writing snapshot data
- Server: starts the grpc server and ws server
2025-03-26 09:11:15 +00:00
github-actions[bot]
66b3f40268 style: clang-tidy auto fixes (#1972)
Fixes #1971. 
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-03-24 12:14:38 +00:00
Alex Kremer
b31b7633c9 feat: ETLng extensions (#1967)
For #1599 #1597
2025-03-21 16:41:29 +00:00
Peter Chen
a36aa3618f fix: ripple_flag logic in account lines (#1969)
fixes #1968
2025-03-19 10:26:04 -04:00
Sergey Kuznetsov
7943f47939 chore: Add git-cliff config (#1965)
First step for #1779.
2025-03-18 15:12:49 +00:00
Sergey Kuznetsov
67e451ec23 chore: Upgrade libxrpl to 2.4.0 (#1961) 2025-03-13 15:42:20 +00:00
github-actions[bot]
92789d5a91 style: clang-tidy auto fixes (#1963)
Fixes #1962.

Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-03-13 11:24:12 +00:00
Sergey Kuznetsov
73477fb9d4 feat: Expose ledger cache full and disabled to prometheus (#1957)
Fixes #1771
2025-03-12 14:54:21 +00:00
Alex Kremer
8ac1ff7699 feat: Implement and use LedgerCacheInterface (#1955)
For #1200
2025-03-12 13:48:33 +00:00
Sergey Kuznetsov
26842374de fix: Fix url check in config (#1953)
Fixes #1850
2025-03-11 12:54:22 +00:00
Sergey Kuznetsov
a46d700390 fix: Improve error message when starting read only mode with empty DB (#1946)
Fixes #1721
2025-03-10 11:54:56 +00:00
github-actions[bot]
a34d565ea4 style: clang-tidy auto fixes (#1949)
Fixes #1948.

Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-03-10 10:35:36 +00:00
Sergey Kuznetsov
c57fe1e6e4 test: Add assert mock to avoid death tests (#1947)
Fixes #1750
2025-03-07 18:11:52 +00:00
Peter Chen
8a08c5e6ce feat: Add support for deep freeze (#1875)
fixes #1826
2025-03-05 11:04:44 -05:00
Peter Chen
5d2694d36c chore: update libxrpl (#1943) 2025-03-05 10:14:39 -05:00
Peter Chen
98ff72be66 fix: change math/rand to crypto/rand (#1941) 2025-03-05 10:12:50 -05:00
Sergey Kuznetsov
915a8beb40 style: Use error code instead of exception when parsing json (#1942) 2025-03-04 18:34:45 +00:00
Sergey Kuznetsov
f7db030ad7 fix: Fix dangling reference in new web server (#1938)
Also delete move constructors where moving may be dangerous.
2025-03-04 16:45:47 +00:00
Peter Chen
86e2cd1cc4 feat: Add workflow to check config description (#1894)
fixes #1880

---------

Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
Co-authored-by: Alex Kremer <akremer@ripple.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
Co-authored-by: Shawn Xie <35279399+shawnxie999@users.noreply.github.com>
2025-03-04 10:47:36 -05:00
Sergey Kuznetsov
f0613c945f ci: Use ubuntu latest for some ci jobs (#1939)
Ubuntu 20.04 images will be deprecated soon:
https://github.com/actions/runner-images/issues/11101
Switch to the latest Ubuntu everywhere we use GitHub's image.
2025-03-04 14:52:37 +00:00
Sergey Kuznetsov
d11e7bc60e fix: Data race in new webserver (#1926)
There was a data race inside `CoroutineGroup` because the internal timer was
used from multiple threads in the methods `asyncWait()` and
`onCoroutineComplete()`. Changing `registerForeign()` to spawn onto the
same `yield_context` fixes the problem, because the timer is now accessed
only from the same coroutine, which has an internal strand.

During debugging I also added websocket support to the `request_gun` tool.
2025-02-27 15:08:46 +00:00
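A minimal sketch of the idea (assumed names; the real CoroutineGroup differs, and the actual fix spawns onto the same `yield_context` rather than posting): the foreign completion hops back onto the owning coroutine's executor, so the timer is only ever touched from that one executor.

```cpp
// Illustrative sketch, not Clio's CoroutineGroup. The point is that the timer
// is only ever used from the executor associated with the owning coroutine.
#include <boost/asio/associated_executor.hpp>
#include <boost/asio/post.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/steady_timer.hpp>
#include <boost/system/error_code.hpp>

class CoroutineGroup {
    boost::asio::yield_context yield_;  // owning coroutine's context
    boost::asio::steady_timer timer_;
    int running_ = 0;

public:
    explicit CoroutineGroup(boost::asio::yield_context yield)
        : yield_(yield), timer_(boost::asio::get_associated_executor(yield))
    {
        timer_.expires_at(boost::asio::steady_timer::time_point::max());
    }

    // Called from the owning coroutine before a foreign task is started.
    void registerForeign() { ++running_; }

    // May be called from another thread. Instead of cancelling the timer
    // directly (the data race), hop onto the owning coroutine's executor
    // first, so only one executor ever touches the timer.
    void onForeignComplete()
    {
        boost::asio::post(boost::asio::get_associated_executor(yield_), [this] {
            if (--running_ == 0)
                timer_.cancel();  // wakes asyncWait()
        });
    }

    void asyncWait()
    {
        boost::system::error_code ec;
        timer_.async_wait(yield_[ec]);  // returns once cancel() fires
    }
};
```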
Sergey Kuznetsov
b909b8879d fix: Fix backtrace usage (#1932) 2025-02-27 14:26:51 +00:00
github-actions[bot]
918a92eeee style: clang-tidy auto fixes (#1925)
Fixes #1924. Please review and commit clang-tidy fixes.

Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-02-25 09:29:24 +00:00
Shawn Xie
c9e8330e0a feat: LPT freeze (#1840)
Fixes #1827
2025-02-24 15:39:11 +00:00
github-actions[bot]
f577139f70 style: clang-tidy auto fixes (#1920)
Fixes #1919. Please review and commit clang-tidy fixes.

Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-02-24 09:35:25 +00:00
Alex Kremer
491cd58f93 feat: ETLng monitor (#1898)
For #1594
2025-02-21 16:10:25 +00:00
Sergey Kuznetsov
25296f8ffa fix: Better errors on logger init failure (#1857)
Fixes #1326.
2025-02-18 15:43:13 +00:00
Sergey Kuznetsov
4b178805de fix: Array parsing in new config (#1896)
Improves array parsing in config:
- Allow null values in arrays for optional fields
- Allow an empty array even for a required field
- Allow omitting an empty array from the config even if the array contains
required fields
2025-02-18 13:29:43 +00:00
Peter Chen
fcebd715ba test: add non-admin test for simulate (#1893) 2025-02-14 13:00:40 -05:00
Sergey Kuznetsov
531e1dad6d ci: Upload cache only for develop branch (#1897) 2025-02-14 16:54:08 +00:00
cyan317
3c008b6bb4 feat: Snapshot exporting tool (#1877)
In this PR:
1. Create a golang grpc client to request data from rippled
2. Store the data in the specified place
3. Add unit tests
4. Create a build script; the build can be initiated by setting the conan
option `snapshot` to true.

Please ignore the grpc server part. It will be implemented in the importing
tool.
2025-02-12 16:56:04 +00:00
Peter Chen
624f7ff6d5 feat: Support Simulate (#1891)
fixes #1887
2025-02-12 10:00:04 -05:00
Sergey Kuznetsov
e503dffc9a fix: Array parsing in new config (#1884)
Fixes #1870.
2025-02-12 13:28:06 +00:00
Alex Kremer
cd1aa8fb70 chore: Revert workflow names (#1890) 2025-02-11 18:08:47 +00:00
github-actions[bot]
b5fe22da18 style: clang-tidy auto fixes (#1889)
Fixes #1888. Please review and commit clang-tidy fixes.

Co-authored-by: godexsoft <385326+godexsoft@users.noreply.github.com>
2025-02-11 13:01:31 +00:00
Peter Chen
cd6289b79a feat: Generate config descriptions (#1842)
Fixes #1460
2025-02-10 11:29:00 -05:00
Alex Kremer
f5e6c9576e feat: Run tests with sanitizers in CI (#1879)
Fixes #1075
Fixes #1049
2025-02-10 16:20:25 +00:00
Sergey Kuznetsov
427ba47716 chore: Fix error in grafana dashboard example (#1878) 2025-02-07 13:42:30 +00:00
cyan317
67c989081d fix clang-tidy issues (#1871) 2025-02-03 12:00:59 +00:00
github-actions[bot]
2fd16cd582 style: clang-tidy auto fixes (#1868)
Fixes #1867. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2025-02-03 08:43:26 +00:00
Sergey Kuznetsov
89af8fe500 feat: Permissioned domains (#1841)
Fixes #1833.
2025-01-31 15:30:34 +00:00
cyan317
1753c95910 feat: Support Dynamic NFT (#1525)
Fix #1471
Clio's changes for supporting DNFT
https://github.com/XRPLF/rippled/pull/5048/files
2025-01-31 13:33:20 +00:00
Maria Shodunke
e7702e9c11 docs: Move metrics and static analysis docs (#1864)
Fixes #1219.
2025-01-31 11:37:27 +00:00
github-actions[bot]
e549657766 style: clang-tidy auto fixes (#1863)
Fixes #1862. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2025-01-30 09:16:34 +00:00
Peter Chen
7c2742036b refactor: Remove boost filesystem (#1859) 2025-01-29 11:16:54 -05:00
Alex Kremer
73f375f20d feat: ETLng task manager (#1843) 2025-01-29 15:29:13 +00:00
Alex Kremer
3e200d8b9d feat: Add Conan profiles for common Sanitizers to docker ci image (#1856)
For #1049
2025-01-29 14:45:00 +00:00
Alex Kremer
81fe617816 fix: Re-add account_tx max limit (#1855) 2025-01-29 13:42:31 +00:00
Sergey Kuznetsov
75354fbecd fix: CacheLoader causes crash when no cache is used (#1853)
If the cache is disabled or Clio starts with an empty DB, `loader_` inside
the cache is not created, so calling `CacheLoader::stop()` or
`CacheLoader::wait()` caused a crash.
2025-01-28 18:10:19 +00:00
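The shape of the fix, as a minimal sketch (names follow the commit message; the real CacheLoader is more involved): guard the optional `loader_` before forwarding `stop()`/`wait()` to it.

```cpp
// Minimal sketch, not the actual Clio code.
#include <optional>

struct Loader {
    void stop() {}
    void wait() {}
};

class CacheLoader {
    std::optional<Loader> loader_;  // absent when the cache is disabled or the DB is empty

public:
    void load() { loader_.emplace(); }

    void stop()
    {
        if (loader_)  // previously dereferenced unconditionally -> crash
            loader_->stop();
    }

    void wait()
    {
        if (loader_)
            loader_->wait();
    }
};
```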
Sergey Kuznetsov
540e938223 refactor: Use mutex from utils (#1851)
Fixes #1359.
2025-01-27 15:28:00 +00:00
Sergey Kuznetsov
6ef6ca9e65 chore: Fix issue found by clang-tidy (#1849)
Fixes #1848
2025-01-23 12:29:43 +00:00
github-actions[bot]
35b9a066e3 style: clang-tidy auto fixes (#1847)
Fixes #1846. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2025-01-23 09:34:00 +00:00
Sergey Kuznetsov
957028699b feat: Graceful shutdown (#1801)
Fixes #442.
2025-01-22 13:09:16 +00:00
cyan317
12e6fcc97e fix: gateway_balance discrepancy (#1839)
Fix https://github.com/XRPLF/clio/issues/1832

rippled code:

https://github.com/XRPLF/rippled/blob/develop/src/xrpld/rpc/handlers/GatewayBalances.cpp#L129
2025-01-22 09:48:13 +00:00
github-actions[bot]
f9d9879513 style: clang-tidy auto fixes (#1845)
Fixes #1844. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2025-01-22 09:46:37 +00:00
cyan317
278f7b1b58 feat: Block clio if migration is blocking (#1834)
Adds:
- Block the server if a migration is blocking
- Initialise the migration-related table when the server starts against an
empty DB

Adds MigrationInspectorInterface. The server uses the inspector to check the
migrators' status.
2025-01-21 14:10:01 +00:00
nkramer44
fbedeff697 fix: Remove InvalidHotWallet Error from gateway_balances RPC handler (#1830)
Fixes #1825 by removing the check in the gateway_balances RPC handler
that returns the RpcInvalidHotWallet error code if one of the addresses
supplied in the request's `hotwallet` array does not have a trustline
with the `account` from the request.

As stated in the original ticket, this change fixes a discrepancy in
behavior between Clio and rippled, as rippled does not check for
trustline existence when handling gateway_balances RPCs.

Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
2025-01-21 12:17:54 +00:00
cyan317
f64d8ecb77 fix: Copyright format (#1835)
Aligned the copyright notice to match the format used in other files.
Updated Copyright (c) 2022-2024 to Copyright (c) 2024 to ensure
consistency.
2025-01-16 14:36:03 +00:00
Peter Chen
3e38ea9b48 fix: Add more constraints to config (#1831)
Log file size and rotation should also not be allowed to be 0.
2025-01-15 10:56:50 -05:00
Sergey Kuznetsov
7834b63b55 fix: Check result of parsing config (#1829) 2025-01-14 11:55:05 -05:00
Alex Kremer
2cf849dd12 feat: ETLng scheduling (#1820)
For #1596
2025-01-14 15:50:59 +00:00
Peter Chen
c47b96bc68 fix: comment from verify config PR (#1823)
PR here: https://github.com/XRPLF/clio/pull/1814#
2025-01-13 12:58:20 -05:00
Sergey Kuznetsov
9659d98140 fix: Reading log_channels levels from config (#1821) 2025-01-13 16:29:45 +00:00
Peter Chen
f1698c55ff feat: add config verify flag (#1814)
fixes #1806
2025-01-13 09:57:11 -05:00
Alex Kremer
91c00e781a fix: Silence expected use after move warnings (#1819)
Fixes #1818
2025-01-10 15:47:48 +00:00
github-actions[bot]
c0d52723c9 style: clang-tidy auto fixes (#1817)
Fixes #1816. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2025-01-10 09:06:05 +00:00
Alex Kremer
590c07ad84 fix: AsyncFramework RAII (#1815)
Fixes #1812
2025-01-09 15:26:25 +00:00
Alex Kremer
48c8d85d0c feat: ETLng loader basics (#1808)
For #1597
2025-01-09 14:47:08 +00:00
Alex Kremer
36a9f40a60 fix: Optimize ledger_range query (#1797) 2025-01-07 14:52:56 +00:00
Peter Chen
698718a02a fix: Incorrect log values in newconfig (#1807)
Additional fixes for #1627
2025-01-06 17:37:40 +00:00
Sergey Kuznetsov
0a9dbe1cc1 ci: Switch CI to macos 15 runners (#1761) 2025-01-06 13:19:03 +00:00
Peter Chen
cce7aa2893 style: Fix formatting via clang-format (#1809)
For #1760
2025-01-04 04:00:03 +00:00
Alex Kremer
820b32c6d7 chore: No ALL_CAPS (#1760)
Fixes #1680
2025-01-02 11:39:31 +00:00
github-actions[bot]
efe5d08205 style: clang-tidy auto fixes (#1803)
Fixes #1802. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-12-23 09:02:34 +00:00
Alex Kremer
285d4e6e9b feat: Repeating operations for util::async (#1776)
The async framework needed a way to do repeating operations (think of the
simplest cases, like AmendmentBlockHandler).
This PR implements the **absolute minimum**, barebones repeating
operations that:
- Can't return any values (void)
- Don't take any arguments in the user-provided function
- Can't be scheduled (i.e. no delay before the repetition starts)
- Can't be stopped from inside the user block of code (i.e. there is no
stop token or anything like that)
- Can be stopped through RepeatedOperation's `abort()` function (but
not from the user-provided repeating function)
2024-12-20 13:24:01 +00:00
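A self-contained sketch of the behaviour those constraints describe (this is not the util::async implementation, just an illustration of the semantics): a void, argument-less callable repeated on an interval until `abort()` is called from outside.

```cpp
// Illustration only; Clio's util::async implementation differs.
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>
#include <utility>

class RepeatedOperation {
    std::atomic_bool stop_{false};
    std::thread worker_;

public:
    RepeatedOperation(std::chrono::milliseconds interval, std::function<void()> fn)
        : worker_([this, interval, fn = std::move(fn)] {
              while (!stop_) {
                  fn();                                   // void, takes no arguments
                  std::this_thread::sleep_for(interval);  // no stop token inside fn
              }
          })
    {
    }

    // The only way to stop the repetition; must not be called from inside fn.
    void abort()
    {
        stop_ = true;
        if (worker_.joinable())
            worker_.join();
    }

    ~RepeatedOperation() { abort(); }
};
```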
github-actions[bot]
f2a89b095d style: clang-tidy auto fixes (#1799)
Fixes #1798.
2024-12-20 10:44:36 +00:00
Sergey Kuznetsov
7d4e3619b0 fix: Fix bugs in new webserver (#1780)
Fixes #919.

Fixes bugs in the new webserver:
- Unhandled exception when closing an already closed websocket
- No pings for plain websocket connections
- Server drops the websocket connection when the client responds to pings but
doesn't send anything

Also changes the API of ng connections: the timeout is now set by a separate
method instead of being provided for each call.
2024-12-19 15:14:04 +00:00
Sergey Kuznetsov
c4b87d2a0a test: Remove request timeout from integration tests (#1794)
Fixes #1791.
2024-12-18 15:19:58 +00:00
Alex Kremer
2d0253bc4a feat: Upgrade to libxrpl 2.4.0-b1 (#1789) 2024-12-18 14:35:16 +00:00
github-actions[bot]
017cf2adc9 style: clang-tidy auto fixes (#1796)
Fixes #1795. Turning off noisy clang-tidy check.
2024-12-18 13:51:39 +00:00
github-actions[bot]
64b50b419f style: clang-tidy auto fixes (#1793)
Fixes #1792.
2024-12-18 11:43:53 +00:00
Sergey Kuznetsov
fc3e60f17f test: Fix integration tests (#1788)
Fixes #1784
2024-12-17 15:35:17 +00:00
cyan317
8dc7f16ef1 feat: Migration framework (#1768)
This PR implements the migration framework, which contains a command-line
interface to execute migrations and helps to migrate data easily.
Please read README.md for more information about this framework.
2024-12-17 14:50:51 +00:00
github-actions[bot]
15a441b084 style: clang-tidy auto fixes (#1786)
Fixes #1785. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-12-17 09:45:14 +00:00
Peter Chen
3c4903a339 refactor: Replace all old instances of Config with New Config (#1627)
Fixes #1184 
Previous PR's found [here](https://github.com/XRPLF/clio/pull/1593) and
[here](https://github.com/XRPLF/clio/pull/1544)
2024-12-16 15:33:32 -08:00
Sergey Kuznetsov
b53cfd0ec1 fix: Improve Repeat implementation (#1775) 2024-12-11 15:07:48 +00:00
github-actions[bot]
c41399ef8e style: clang-tidy auto fixes (#1778)
Fixes #1777.
2024-12-11 11:33:03 +00:00
Sergey Kuznetsov
7bef13f913 feat: Add dosguard to new webserver (#1772)
For #919.
2024-12-10 14:59:12 +00:00
Alex Kremer
4ff2953257 fix: Fix and re-enable flaky test (#1773)
Fixes #1752
2024-12-09 16:30:51 +00:00
Alex Kremer
475e309f25 chore: Add clang-tidy 19 checks (#1774)
Fix #1664
2024-12-09 16:27:53 +00:00
Sergey Kuznetsov
a7074dbf0f fix: Fix broken tests (#1767)
Fixes #1764
2024-12-02 14:28:57 +00:00
github-actions[bot]
66691c45a0 style: clang-tidy auto fixes (#1766)
Fixes #1765.
2024-12-02 11:06:21 +00:00
Peter Chen
fe4f95dabd fix: Check ledger range in every handler (#1755)
fixes #1565
2024-11-29 10:11:07 -05:00
Peter Chen
f62fadc9f9 refactor: setRange in tests (#1763)
A few files cannot move setRange into the test's constructor, because either
the place that calls setRange matters or the tests check that the range
doesn't exist.
2024-11-29 09:27:45 -05:00
Alex Kremer
afb0c7fee2 feat: Add v3 support (#1754) 2024-11-27 17:39:57 +00:00
Alex Kremer
fd73b90416 feat: Healthcheck endpoint (#1751)
Fixes #1759
2024-11-26 13:56:25 +00:00
dependabot[bot]
541bf4395f ci: Bump wandalen/wretry.action from 3.7.2 to 3.7.3 (#1753)
Bumps [wandalen/wretry.action](https://github.com/wandalen/wretry.action) from 3.7.2 to 3.7.3.
2024-11-26 02:14:51 +00:00
Alex Kremer
63c80f2b7d feat: Upgrade to libxrpl 2.3.0 (#1756) 2024-11-26 02:12:06 +00:00
Alex Kremer
385d99c56e chore: Disable a flaky test (#1757)
For #1752
2024-11-26 02:11:37 +00:00
github-actions[bot]
b5da61931f style: clang-tidy auto fixes (#1749)
Fixes #1748.
2024-11-22 10:28:17 +00:00
Alex Kremer
6af86367fd feat: GrpcSource for ETL ng (#1745)
For #1596 and #1597
2024-11-21 17:03:37 +00:00
Peter Chen
9dc322fc7b fix: authorized_credential elements in array not objects bug (#1744) 2024-11-21 10:28:23 -05:00
Sergey Kuznetsov
c77154a5e6 feat: Integrate new webserver (#1722)
For #919.
The new web server does not use dosguard yet. That will be fixed in a
separate PR.
2024-11-21 14:48:32 +00:00
github-actions[bot]
fc3ba07f2e style: clang-tidy auto fixes (#1742)
Fixes #1741. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-11-19 09:06:58 +00:00
Peter Chen
229ba69b5d fix: Credential error message (#1738)
fixes #1737
2024-11-18 16:08:20 +00:00
github-actions[bot]
524d096777 style: clang-tidy auto fixes (#1740)
Fixes #1739. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-11-18 09:30:34 +00:00
Alex Kremer
815dfd672e feat: Extraction basics (#1733)
For #1596
2024-11-15 19:55:13 +00:00
Alex Kremer
a4b3877cb2 feat: Upgrade to libxrpl 2.3.0-rc2 (#1736) 2024-11-15 16:18:17 +00:00
github-actions[bot]
6bb5804bb8 style: clang-tidy auto fixes (#1735)
Fixes #1734. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-11-15 11:05:38 +00:00
Peter Chen
67d99457f2 feat: Add Support Credentials for Clio (#1712)
Rippled PR: [here](https://github.com/XRPLF/rippled/pull/5103)
2024-11-14 19:52:03 +00:00
github-actions[bot]
0e25c0cabc style: clang-tidy auto fixes (#1730)
Fixes #1729. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-11-12 09:33:53 +00:00
Shawn Xie
6b61984e0e feat: Implement MPT changes (#1147)
Implements https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0033d-multi-purpose-tokens
2024-11-11 16:02:02 +00:00
dependabot[bot]
891fd1e7bf ci: Bump wandalen/wretry.action from 3.7.0 to 3.7.2 (#1723)
Bumps
[wandalen/wretry.action](https://github.com/wandalen/wretry.action) from
3.7.0 to 3.7.2.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Alex Kremer <akremer@ripple.com>
2024-11-11 15:21:13 +00:00
Alex Kremer
de6c17797f chore: Upgrade conan to use new libxrpl (#1724)
Forgotten part for #1718
2024-11-11 13:37:57 +00:00
Alex Kremer
0add6c6d90 feat: Upgrade to libxrpl 2.3.0-rc1 (#1718)
Fixes #1717
2024-11-08 18:18:58 +00:00
github-actions[bot]
e6cdb141c5 style: clang-tidy auto fixes (#1720)
Fixes #1719. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-11-08 08:58:20 +00:00
Alex Kremer
c435466fb0 feat: ETLng Registry (#1713)
For #1597
2024-11-07 17:56:29 +00:00
dependabot[bot]
f8df654d8e ci: Bump wandalen/wretry.action from 3.5.0 to 3.7.0 (#1714)
Bumps
[wandalen/wretry.action](https://github.com/wandalen/wretry.action) from
3.5.0 to 3.7.0.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Alex Kremer <akremer@ripple.com>
2024-11-07 16:04:00 +00:00
Alex Kremer
f3e754398e chore: Fix compilation for upcoming libxrpl 2.3.0-rc1 (#1716)
Fixes #1715
2024-11-07 15:39:53 +00:00
Peter Chen
d04331d244 fix: Support Delete NFT (#1695)
Fixes #1677
2024-10-25 11:27:02 -04:00
cyan317
1c82d379d9 fix: Add queue size limit for websocket (#1701)
For slow clients, we will disconnect with it if the message queue is too
long.

---------

Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
2024-10-25 13:30:52 +01:00
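A minimal sketch of the mechanism (assumed names, not Clio's connection class): outgoing messages are queued per connection, and once the queue exceeds a configured limit the slow client is disconnected instead of buffering indefinitely.

```cpp
// Sketch only; names are illustrative.
#include <cstddef>
#include <deque>
#include <string>
#include <utility>

class WsSession {
    std::deque<std::string> sendQueue_;
    std::size_t maxQueueSize_;
    bool open_ = true;

public:
    explicit WsSession(std::size_t maxQueueSize) : maxQueueSize_(maxQueueSize) {}

    void send(std::string msg)
    {
        if (!open_)
            return;
        sendQueue_.push_back(std::move(msg));
        if (sendQueue_.size() > maxQueueSize_)
            close();  // slow client: drop the connection rather than grow the queue
    }

    // Called when the transport finishes writing the front message.
    void onWriteComplete()
    {
        if (!sendQueue_.empty())
            sendQueue_.pop_front();
    }

    void close()
    {
        open_ = false;
        sendQueue_.clear();
    }

    bool isOpen() const { return open_; }
};
```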
github-actions[bot]
f083c82557 style: clang-tidy auto fixes (#1711)
Fixes #1710.

---------

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
2024-10-25 13:06:58 +01:00
Sergey Kuznetsov
b6d5ec5cf7 ci: Fix nightly build (#1709)
Fixes #1703.
2024-10-25 12:20:26 +01:00
Sergey Kuznetsov
c62e9d56b8 fix: Fix issues clang-tidy found (#1708)
Fixes #1706.
2024-10-25 11:48:18 +01:00
github-actions[bot]
2a5d73730f style: clang-tidy auto fixes (#1705)
Fixes #1704. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-10-25 09:05:28 +01:00
Sergey Kuznetsov
cffda52ba6 refactor: Coroutine based webserver (#1699)
Code for the new coroutine-based web server. The new server is not yet
connected to Clio and is not ready to use.
For #919.
2024-10-24 16:50:26 +01:00
Sergey Kuznetsov
cf081e7e25 fix: Fix timer spurious calls (#1700)
Fixes #1634.
I also checked other timers and they don't have the issue.
2024-10-23 14:24:05 +01:00
Peter Chen
f351a4cc79 fix: example config syntax (#1696) 2024-10-22 12:21:19 +01:00
cyan317
d60654c3dc fix: Remove log (#1694) 2024-10-18 17:11:15 +01:00
cyan317
9b0b4f5ad7 chore: Add counter for total messages waiting to be sent (#1691) 2024-10-16 17:06:27 +01:00
Sergey Kuznetsov
a21011799b style: Fix include (#1687)
Fixes #1686
2024-10-16 11:39:23 +01:00
cyan317
2f40cde7f5 chore: Remove unused static variables (#1683) 2024-10-15 16:43:21 +01:00
github-actions[bot]
02a75356fb style: clang-tidy auto fixes (#1685)
Fixes #1684. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-10-15 10:05:03 +01:00
Alex Kremer
b761fffa2d style: Update code formatting (#1682)
For #1664
2024-10-14 17:15:36 +01:00
Alex Kremer
c3be155f8d chore: Upgrade to llvm 19 tooling (#1681)
For #1664
2024-10-14 16:43:49 +01:00
Peter Chen
11192c362e fix: deletion script will not OOM (#1679)
fixes #1676 and #1678
2024-10-09 12:11:55 -04:00
github-actions[bot]
2c18fd5465 style: clang-tidy auto fixes (#1674)
Fixes #1673. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-10-01 09:14:04 +01:00
cyan317
da76907279 feat: server info cache (#1671)
Fixes #1181.
2024-09-30 17:00:23 +01:00
dependabot[bot]
1b42466a0d chore: Bump ytanikin/PRConventionalCommits from 1.2.0 to 1.3.0 (#1670)
Bumps
[ytanikin/PRConventionalCommits](https://github.com/ytanikin/prconventionalcommits)
from 1.2.0 to 1.3.0.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-30 08:59:38 +01:00
Sergey Kuznetsov
87f1c06b5b chore: Update libxrpl to 2.3.0-b4 (#1667) 2024-09-25 13:39:38 +01:00
Alex Kremer
d0c6b65870 fix: Workaround for gcc12 bug with defaulted destructors (#1666)
Fixes #1662
2024-09-23 21:39:55 +01:00
github-actions[bot]
3343c1fa6b style: clang-tidy auto fixes (#1663)
Fixes #1662.

---------

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
Co-authored-by: Peter Chen <ychen@ripple.com>
2024-09-23 15:24:20 +01:00
Peter Chen
c8e3da6470 fix: add no lint to ignore clang-tidy (#1660)
Fixes build for
[#1659](https://github.com/XRPLF/clio/actions/runs/10956058143/job/30421296417)
2024-09-21 10:47:37 -04:00
github-actions[bot]
c409f8b2d6 style: clang-tidy auto fixes (#1659)
Fixes #1658. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-09-20 09:39:21 +01:00
Peter Chen
13a9aef579 chore: Revert Cassandra driver upgrade (#1656)
Reverts XRPLF/clio#1646
2024-09-19 15:39:01 +01:00
Peter Chen
af4fde9a3a refactor: Clio Config (#1593)
Adds constraints + parses JSON into Config.
Second part of refactoring the Clio Config; the first PR can be found
[here](https://github.com/XRPLF/clio/pull/1544).

Steps that are left to implement:
- Replace all the places where we fetch config values (via
config.valueOr/MaybeValue) to instead get them from the Config Definition
- Generate a markdown file using the Clio Config Description
cyan317
0282504f18 feat: add 'force_forward' field to request (#1647)
Fix #1141
2024-09-17 11:42:51 +01:00
Alex Kremer
bea905adcd feat: Delete-before support in data removal tool (#1649)
Fixes #1650
2024-09-16 16:47:29 +01:00
Peter Chen
7a9a1656f9 fix: Upgrade Cassandra driver (#1646)
Fixes #1296
2024-09-16 12:28:33 +01:00
Peter Chen
0ede0ed351 fix: pre-push tag (#1614)
Fixes an issue where git was verifying an incorrect tag.
2024-09-11 09:44:42 -04:00
cyan317
ee6018186e fix: no restriction on type field (#1644)
'type' should not matter if 'full' or 'accounts' is false. Relax the
restriction for 'type'.
2024-09-11 14:42:25 +01:00
cyan317
293af3f3b0 fix: Add more restrictions to admin fields (#1643) 2024-09-10 14:50:42 +01:00
dependabot[bot]
9600637edd ci: Bump peter-evans/create-pull-request from 6 to 7 (#1636)
Bumps
[peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request)
from 6 to 7.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-10 10:56:28 +01:00
cyan317
7d0753f1da fix: Don't forward ledger API if 'full' is a string (#1640)
Fix #1635
2024-09-09 11:20:02 +01:00
github-actions[bot]
b04e090cbb style: clang-tidy auto fixes (#1639)
Fixes #1638. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-09-09 09:29:45 +01:00
Sergey Kuznetsov
7088ebad97 fix: Subscription source bugs fix (#1626) (#1633)
Fixes #1620.
Cherry pick of #1626 into develop.

- Add timeouts for websocket operations on connections to rippled.
Without these timeouts, if a connection hangs for some reason, clio
wouldn't know the connection is hanging.
- Fix a potential data race in choosing the new subscription source which
will forward messages to users.
- Optimise switching between subscription sources.
2024-09-06 14:35:18 +01:00
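For the timeout part, a hedged Boost.Beast sketch (Clio's SubscriptionSource may configure this differently): give the websocket stream to rippled explicit handshake and idle timeouts so a hung connection is detected rather than waited on forever.

```cpp
// Sketch using Boost.Beast's built-in websocket timeout options; the exact
// values and wiring in Clio may differ.
#include <boost/beast/core.hpp>
#include <boost/beast/websocket.hpp>

#include <chrono>

namespace beast = boost::beast;
namespace websocket = boost::beast::websocket;

void
configureTimeouts(websocket::stream<beast::tcp_stream>& ws)
{
    auto opt = websocket::stream_base::timeout::suggested(beast::role_type::client);
    opt.handshake_timeout = std::chrono::seconds{30};  // connect/upgrade must finish in time
    opt.idle_timeout = std::chrono::seconds{10};       // detect a hanging connection
    opt.keep_alive_pings = true;                       // ping instead of closing on idle
    ws.set_option(opt);
}
```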
Sergey Kuznetsov
1d33b8e88a fix: Fix logging in SubscriptionSource (#1617) (#1632)
Fixes #1616. 
Cherry pick of #1617 into develop.
2024-09-06 13:48:07 +01:00
cyan317
44c07e7332 refactor: Remove SubscriptionManagerRunner (#1623) 2024-09-06 10:34:23 +01:00
github-actions[bot]
dbb8d9eedd style: clang-tidy auto fixes (#1631)
Fixes #1630. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-09-06 10:15:49 +01:00
Sergey Kuznetsov
bc9ca41bc1 test: Add test for WsConnection for ping response (#1619) 2024-09-05 16:33:04 +01:00
cyan317
e4736bf9d8 fix: not forward admin API (#1628) 2024-09-05 15:06:57 +01:00
Peter Chen
7360c4fd0e fix: AccountNFT with invalid marker (#1589)
Fixes [#1497](https://github.com/XRPLF/clio/issues/1497)
Mimics the behavior of the [fix on Rippled
side](https://github.com/XRPLF/rippled/pull/5045)
2024-08-27 14:13:52 -04:00
Alex Kremer
9a9de501e4 feat: Move/copy support in async framework (#1609)
Fixes #1608
2024-08-20 13:24:51 +01:00
github-actions[bot]
fb473f6d28 style: clang-tidy auto fixes (#1613)
Fixes #1612. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-08-19 09:30:39 +01:00
cyan317
4cbd3f5e18 refactor: Subscription Manager uses async framework (#1605)
Fix #1209
2024-08-16 13:46:14 +01:00
Sergey Kuznetsov
5332d3e9f0 test: Make ForwardingSource tests more stable (#1607)
Fixes #1606.
2024-08-15 14:58:08 +01:00
Sergey Kuznetsov
5499b892e6 feat: Add stop to WorkQueue (#1600)
For #442.
2024-08-14 12:00:13 +01:00
github-actions[bot]
0ff1edaac8 style: clang-tidy auto fixes (#1604)
Fixes #1603. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-08-14 08:48:23 +01:00
Sergey Kuznetsov
b7c8ed7e3a refactor: Create interface for DOSGuard (#1602)
For #919.
2024-08-13 17:26:47 +01:00
Rome Reginelli
49e9d5eda0 docs: Document how to build with custom libxrpl (#1572)
Fixes #1272
2024-08-09 09:26:42 +01:00
cyan317
d7605d1069 docs: Use non-admin WS port (#1592)
This PR is copied from #1533 to unblock it.
See https://github.com/XRPLF/clio/pull/1533/files
2024-08-08 10:22:37 +01:00
github-actions[bot]
58045fb0b6 style: clang-tidy auto fixes (#1591)
Fixes #1590. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-08-08 10:13:43 +01:00
Sergey Kuznetsov
1b4eed3b2b refactor: Move interval timer into a separate class (#1588)
For #442.
2024-08-07 17:38:24 +01:00
Alex Kremer
27c9e2a530 fix: Support conan channels in check_libxrpl flow (#1583)
Fixes #1582
2024-08-07 15:41:41 +01:00
Alex Kremer
00026ebf5a fix: Use doxygen 1.12 (#1587)
Fixes #1431
2024-08-07 15:22:21 +01:00
github-actions[bot]
fa1e9da0de style: clang-tidy auto fixes (#1585)
Fixes #1584. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-08-07 09:01:11 +01:00
Peter Chen
2bd7ac346c refactor: Clio Config (#1544)
Implementation of the new config definition + methods + UT.

Steps that still need to be implemented:
- Make ClioConfigDefinition and its methods as constexpr as possible
- Get the user config file and populate the values in ConfigDefinition
while checking user values against constraints
- Replace all the places where we fetch config values (via
config.valueOr/MaybeValue) to instead get them from the Config Definition
- Generate a markdown file using the Clio Config Description
2024-08-06 11:07:25 -04:00
github-actions[bot]
5abf912b5a style: clang-tidy auto fixes (#1581)
Fixes #1580. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-08-06 09:09:40 +01:00
cyan317
a7f34490b1 fix: account_objects returns error when filter does not make sense (#1579)
Fix #1488
2024-08-05 14:37:46 +01:00
Sergey Kuznetsov
2a74a65b22 ci: Fix nightly release workflow (#1577)
Fixes #1574.
2024-08-02 11:49:17 +01:00
github-actions[bot]
319cd3d67b style: clang-tidy auto fixes (#1576)
Fixes #1575. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-08-02 09:13:55 +01:00
Sergey Kuznetsov
2fef03d766 refactor: Refactor main (#1555)
For #442.
2024-08-01 10:53:17 +01:00
Sergey Kuznetsov
fb90fb27ae style: Fix clang-tidy issue (#1571)
Fixes #1569. Fixes #1573.
2024-07-31 11:06:42 +01:00
Sergey Kuznetsov
9607cff8a0 ci: Fix errors in docker image uploading (#1570) 2024-07-30 16:56:37 +01:00
cyan317
00c4287b3b fix: Remove cassandra from log (#1557)
Fix #1452
2024-07-30 14:38:19 +01:00
github-actions[bot]
3095f58dbe style: clang-tidy auto fixes (#1568)
Fixes #1567. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <15742918+kuznetsss@users.noreply.github.com>
2024-07-30 08:35:08 +01:00
cyan317
8d0e904ecb fix: LedgerEntryNotExist unittest failure (#1564) 2024-07-29 16:41:10 +01:00
dependabot[bot]
3a3d8d46dd ci: Bump wandalen/wretry.action from 1.4.10 to 3.5.0 (#1563)
Bumps [wandalen/wretry.action](https://github.com/wandalen/wretry.action) from 1.4.10 to 3.5.0.
All commits are viewable in the compare view:
https://github.com/wandalen/wretry.action/compare/v1.4.10...v3.5.0

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-25 16:18:46 +01:00
dependabot[bot]
b2f7983609 ci: Bump actions/configure-pages from 4 to 5 (#1562)
Bumps [actions/configure-pages](https://github.com/actions/configure-pages) from 4 to 5.
Release notes (v5.0.0, breaking): when using the input param `static_site_generator: next`,
support was added for Next.js >= 13.3.0 and dropped for Next.js < 13.3.0.
All code changes since the previous release:
https://github.com/actions/configure-pages/compare/v4.0.0...v5.0.0

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-25 16:18:37 +01:00
dependabot[bot]
501e131061 ci: Bump crazy-max/ghaction-import-gpg from 5 to 6 (#1561)
Bumps [crazy-max/ghaction-import-gpg](https://github.com/crazy-max/ghaction-import-gpg) from 5 to 6.
Release notes (v6.0.0): Node 20 is now the default runtime, which requires Actions Runner
v2.308.0 or later.
All commits are viewable in the compare view:
https://github.com/crazy-max/ghaction-import-gpg/compare/v5...v6

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-25 16:18:25 +01:00
dependabot[bot]
27cf62ca63 ci: Bump peter-evans/create-pull-request from 5 to 6 (#1560)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 5 to 6.
Release notes (v6.0.0): the runtime was updated to Node.js 20 (requires Actions Runner v2.308.0
or later); the default values for `author` and `committer` changed to align with GitHub's
standard bot formats; `push-to-fork` now supports sibling repositories and removes its temporary
git remote configuration on completion; truncated pull request bodies are suffixed with a
notice; and the action can now create pull requests on a different GitHub server than the one
it runs on.
All commits are viewable in the compare view:
https://github.com/peter-evans/create-pull-request/compare/v5...v6

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-25 16:18:01 +01:00
Alex Kremer
638e2e28ab ci: Update clang-format and ccache (#1559)
Fixes #1212
Fixes #1558
2024-07-25 16:16:20 +01:00
Alex Kremer
f9bb62f670 ci: Setup dependabot for github-actions (#1556)
Fixes #1502
2024-07-25 15:35:42 +01:00
cyan317
895f3c0059 fix: nftData unique bug (#1550)
Fix: NFT data within a single ledger was not deduplicated as intended.
2024-07-25 11:05:28 +01:00
Alex Kremer
77494245a9 fix: Static linkage (#1551)
Fixes #1507
2024-07-23 17:35:39 +01:00
github-actions[bot]
648cedcba5 style: clang-tidy auto fixes (#1548)
Fixes #1547. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <kuznetsss@users.noreply.github.com>
2024-07-17 10:48:27 +01:00
cyan317
8a613c5de8 fix: Fail to deduplicate the same nfts in ttNFTOKEN_CANCEL_OFFER (#1542) 2024-07-16 10:50:20 +01:00
github-actions[bot]
d6ae890f83 style: clang-tidy auto fixes (#1546)
Fixes #1545. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <kuznetsss@users.noreply.github.com>
2024-07-16 10:49:07 +01:00
Zhiyuan Wang
e16a9510f1 fix: Support deleted object in ledger_entry (#1483)
Fixes #1306
2024-07-15 18:07:09 +01:00
cyan317
d6598f30f1 fix: Add more account check (#1543)
Make sure every character in the account is alphanumeric
2024-07-15 16:42:12 +01:00
Alex Kremer
b12d916276 fix: Relax error when full or accounts set to false (#1540)
Fixes #1537
2024-07-12 15:44:46 +01:00
Alex Kremer
4f6f717bfb fix: Compatible feature response (#1539)
Fixes #1538
2024-07-12 15:03:19 +01:00
github-actions[bot]
46a616cdad style: clang-tidy auto fixes (#1536)
Fixes #1535.

Co-authored-by: kuznetsss <kuznetsss@users.noreply.github.com>
2024-07-12 09:40:12 +01:00
Alex Kremer
f771478da0 feat: Native Feature RPC (#1526) 2024-07-11 12:18:13 +01:00
cyan317
6e606cb7d8 fix: Change the field name from "close_time_iso" to "closed (#1531) 2024-07-10 13:47:02 +01:00
Sergey Kuznetsov
5bcc11b347 test: Add more tests for warnings (#1532)
For #1518.
2024-07-10 13:33:59 +01:00
github-actions[bot]
d227c53ef3 style: clang-tidy auto fixes (#1530)
Fixes #1529. 
---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
2024-07-10 11:42:39 +01:00
github-actions[bot]
e85f6cd9e4 style: clang-tidy auto fixes (#1528)
Fixes #1527. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <kuznetsss@users.noreply.github.com>
2024-07-10 10:35:59 +01:00
Sergey Kuznetsov
46bd67a9ec feat: More verbose forwarding errors (#1523)
Fixes #1516.
2024-07-09 15:22:01 +01:00
cyan317
094ed8f299 fix: Fix the ledger_index timezone issue (#1522)
Fix unittest failures
2024-07-09 13:14:22 +01:00
Sergey Kuznetsov
d536433d64 ci: Fix bugs in nightly docker publishing (#1520) 2024-07-05 15:04:08 +01:00
Sergey Kuznetsov
29847caf0e fix: Fix extra brackets in warnings (#1519)
Fixes #1518
2024-07-05 12:03:22 +01:00
Sergey Kuznetsov
aa86075159 ci: Fix nightly (#1514) 2024-07-03 17:46:14 +01:00
Sergey Kuznetsov
4dd3254354 style: Fix clang-tidy issue (#1515)
Fixes #1513
2024-07-03 16:22:25 +01:00
github-actions[bot]
f55872d496 style: clang-tidy auto fixes (#1512)
Fixes #1511. 

---------

Co-authored-by: kuznetsss <kuznetsss@users.noreply.github.com>
Co-authored-by: Sergey Kuznetsov <skuznetsov@ripple.com>
2024-07-03 11:23:08 +01:00
Sergey Kuznetsov
6cb63ed9b2 style: Fix clang-tidy issues (#1510)
Fixes #1508, #1506, #1500
2024-07-03 09:16:26 +01:00
Sergey Kuznetsov
f77186002a ci: Clio docker image (#1509)
Fixes #1051.
2024-07-02 18:04:38 +01:00
cyan317
66849432be feat: Ledger index (#1503)
Fixed #1052
2024-07-02 13:58:21 +01:00
github-actions[bot]
d26c93a711 style: clang-tidy auto fixes (#1505)
Fixes #1504. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <kuznetsss@users.noreply.github.com>
2024-07-01 09:21:46 +01:00
yinyiqian1
54c9a6e7c0 refactor: separate fixtures (#1495)
refactor #945
2024-06-28 13:25:52 -04:00
github-actions[bot]
b24aadc898 style: clang-tidy auto fixes (#1499)
Fixes #1498. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <kuznetsss@users.noreply.github.com>
2024-06-28 09:51:50 +01:00
Alex Kremer
7bd21345a1 feat: AmendmentCenter (#1418)
Fixes #1416
2024-06-27 18:21:30 +01:00
yinyiqian1
2ff51ff416 refactor: Structure global validators better (#1484)
refactor: #1170 Structure global validators better
2024-06-27 09:55:17 -04:00
cyan317
72f9a8fe78 fix: Remove unused file (#1496) 2024-06-27 14:25:38 +01:00
cyan317
b2eacf9868 build: Upgrade to libxrpl 2.3.0-b1 (#1489)
Update libxrpl and change include path
2024-06-25 15:05:01 +01:00
Sergey Kuznetsov
77cec26cc9 ci: Turn off PR labelling (#1492) 2024-06-25 14:52:26 +01:00
Sergey Kuznetsov
6a848649b3 ci: Add permissions to pr title check workflow (#1491) 2024-06-25 14:30:50 +01:00
Sergey Kuznetsov
c761b50fa4 docs: mention conventional commits (#1490)
Also add build type to the allowed types.
2024-06-25 12:55:08 +01:00
Sergey Kuznetsov
062ef9f0a5 ci: added conventional commits check (#1487)
Fixes #1366
2024-06-24 15:11:03 +01:00
cyan317
c7fee023e7 Fix issue: "Updating cache" prints in log when cache is disabled (#1479)
Fixed #1461
2024-06-24 13:05:07 +01:00
github-actions[bot]
e65351e9e6 [CI] clang-tidy auto fixes (#1486)
Fixes #1485. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <kuznetsss@users.noreply.github.com>
2024-06-24 08:05:53 +01:00
cyan317
8cea9ee7e2 Fix empty currency (#1481) 2024-06-21 11:54:53 +01:00
Michael Legleux
9d299a1948 Make sure release commit signed and tagged (#1468)
Best practices, good housekeeping, and it helps me look stuff up.

Looks like the annotated tag train has derailed. This is an attempt to
enforce it.
2024-06-20 15:30:25 -07:00
cyan317
132ec743e1 Add support for ltNEGATIVE_UNL (#1474)
Fixed #1369
2024-06-20 11:19:42 +01:00
github-actions[bot]
bdb72f91a2 [CI] clang-tidy auto fixes (#1478)
Fixes #1477. Please review and commit clang-tidy fixes.

Co-authored-by: kuznetsss <kuznetsss@users.noreply.github.com>
2024-06-20 09:13:24 +01:00
Michael Legleux
0cdc24023f Ignore checking formatting if no formatted files are staged (#1469)
I'm constantly harassed to follow the rules when I'm not doing anything
relevant.

This is an attempt to ameliorate that rather than `--no-verify` or just
disabling hooks altogether. 🍄 ☁️

Somebody might want to confirm `basename` is included with macOS.
2024-06-18 10:11:06 -07:00
Peter Chen
c795cf371a Fix base_asset value in getAggregatePrice (#1467)
Fixes #1372
2024-06-18 09:04:33 -04:00
Peter Chen
e135aa49d5 Create generate free port class to avoid conflicting ports (#1439)
Fixes #1317
2024-06-18 11:29:05 +01:00
791 changed files with 68917 additions and 17299 deletions

View File

@@ -8,6 +8,7 @@ Checks: '-*,
bugprone-chained-comparison,
bugprone-compare-pointer-to-member-virtual-function,
bugprone-copy-constructor-init,
bugprone-crtp-constructor-accessibility,
bugprone-dangling-handle,
bugprone-dynamic-static-initializers,
bugprone-empty-catch,
@@ -33,9 +34,11 @@ Checks: '-*,
bugprone-non-zero-enum-to-bool-conversion,
bugprone-optional-value-conversion,
bugprone-parent-virtual-call,
bugprone-pointer-arithmetic-on-polymorphic-object,
bugprone-posix-return,
bugprone-redundant-branch-condition,
bugprone-reserved-identifier,
bugprone-return-const-ref-from-parameter,
bugprone-shared-ptr-array-mismatch,
bugprone-signal-handler,
bugprone-signed-char-misuse,
@@ -55,6 +58,7 @@ Checks: '-*,
bugprone-suspicious-realloc-usage,
bugprone-suspicious-semicolon,
bugprone-suspicious-string-compare,
bugprone-suspicious-stringview-data-usage,
bugprone-swapped-arguments,
bugprone-switch-missing-default-case,
bugprone-terminating-continue,
@@ -97,10 +101,12 @@ Checks: '-*,
modernize-make-unique,
modernize-pass-by-value,
modernize-type-traits,
modernize-use-designated-initializers,
modernize-use-emplace,
modernize-use-equals-default,
modernize-use-equals-delete,
modernize-use-override,
modernize-use-ranges,
modernize-use-starts-ends-with,
modernize-use-std-numbers,
modernize-use-using,
@@ -121,9 +127,12 @@ Checks: '-*,
readability-convert-member-functions-to-static,
readability-duplicate-include,
readability-else-after-return,
readability-enum-initial-value,
readability-implicit-bool-conversion,
readability-inconsistent-declaration-parameter-name,
readability-identifier-naming,
readability-make-member-function-const,
readability-math-missing-parentheses,
readability-misleading-indentation,
readability-non-const-parameter,
readability-redundant-casting,
@@ -135,14 +144,48 @@ Checks: '-*,
readability-simplify-boolean-expr,
readability-static-accessed-through-instance,
readability-static-definition-in-anonymous-namespace,
readability-suspicious-call-argument
readability-suspicious-call-argument,
readability-use-std-min-max
'
CheckOptions:
readability-braces-around-statements.ShortStatementLines: 2
readability-identifier-naming.MacroDefinitionCase: UPPER_CASE
readability-identifier-naming.ClassCase: CamelCase
readability-identifier-naming.StructCase: CamelCase
readability-identifier-naming.UnionCase: CamelCase
readability-identifier-naming.EnumCase: CamelCase
readability-identifier-naming.EnumConstantCase: CamelCase
readability-identifier-naming.ScopedEnumConstantCase: CamelCase
readability-identifier-naming.GlobalConstantCase: UPPER_CASE
readability-identifier-naming.GlobalConstantPrefix: 'k'
readability-identifier-naming.GlobalVariableCase: CamelCase
readability-identifier-naming.GlobalVariablePrefix: 'g'
readability-identifier-naming.ConstexprFunctionCase: camelBack
readability-identifier-naming.ConstexprMethodCase: camelBack
readability-identifier-naming.ClassMethodCase: camelBack
readability-identifier-naming.ClassMemberCase: camelBack
readability-identifier-naming.ClassConstantCase: UPPER_CASE
readability-identifier-naming.ClassConstantPrefix: 'k'
readability-identifier-naming.StaticConstantCase: UPPER_CASE
readability-identifier-naming.StaticConstantPrefix: 'k'
readability-identifier-naming.StaticVariableCase: UPPER_CASE
readability-identifier-naming.StaticVariablePrefix: 'k'
readability-identifier-naming.ConstexprVariableCase: UPPER_CASE
readability-identifier-naming.ConstexprVariablePrefix: 'k'
readability-identifier-naming.LocalConstantCase: camelBack
readability-identifier-naming.LocalVariableCase: camelBack
readability-identifier-naming.TemplateParameterCase: CamelCase
readability-identifier-naming.ParameterCase: camelBack
readability-identifier-naming.FunctionCase: camelBack
readability-identifier-naming.MemberCase: camelBack
readability-identifier-naming.PrivateMemberSuffix: _
readability-identifier-naming.ProtectedMemberSuffix: _
readability-identifier-naming.PublicMemberSuffix: ''
readability-identifier-naming.FunctionIgnoredRegexp: '.*tag_invoke.*'
bugprone-unsafe-functions.ReportMoreUnsafeFunctions: true
bugprone-unused-return-value.CheckedReturnTypes: ::std::error_code;::std::error_condition;::std::errc
misc-include-cleaner.IgnoreHeaders: '.*/(detail|impl)/.*;.*(expected|unexpected).*'
misc-include-cleaner.IgnoreHeaders: '.*/(detail|impl)/.*;.*(expected|unexpected).*;.*ranges_lower_bound\.h;time.h;stdlib.h;__chrono/.*;fmt/chrono.h;boost/uuid/uuid_hash.hpp'
HeaderFilterRegex: '^.*/(src|tests)/.*\.(h|hpp)$'
WarningsAsErrors: '*'
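
To make the new identifier-naming options concrete, here is a small C++ illustration (written for this changelog, not taken from the Clio codebase) of names that satisfy the CheckOptions above:

```cpp
// Illustration only: identifiers chosen to satisfy the
// readability-identifier-naming CheckOptions listed above.
#include <cstddef>
#include <string>

constexpr std::size_t kMAX_RETRIES = 3;  // constexpr/global constant: 'k' prefix + UPPER_CASE

class ConnectionPool {  // class names: CamelCase
public:
    static constexpr int kDEFAULT_SIZE = 8;  // class constant: 'k' prefix + UPPER_CASE

    void addConnection(std::string const& hostName)  // methods and parameters: camelBack
    {
        std::size_t const newSize = size_ + 1;  // local constants and variables: camelBack
        size_ = newSize;
        lastHost = hostName;
    }

    std::string lastHost;  // public members: camelBack, no suffix

private:
    std::size_t size_ = 0;  // private members: camelBack with a trailing '_'
};
```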

View File

@@ -8,3 +8,6 @@ Diagnostics:
IgnoreHeader:
- ".*/(detail|impl)/.*"
- ".*expected.*"
- ".*ranges_lower_bound.h"
- "time.h"
- "stdlib.h"

View File

@@ -13,14 +13,36 @@ TMPDIR=${ROOT}/.cache/doxygen
TMPFILE=${TMPDIR}/docs.log
DOCDIR=${TMPDIR}/out
# Check doxygen is at all installed
if [ -z "$DOXYGEN" ]; then
# No hard error if doxygen is not installed yet
cat <<EOF
WARNING
-----------------------------------------------------------------------------
'doxygen' is required to check documentation.
Please install it for next time. For the time being it's on CI.
'doxygen' is required to check documentation.
Please install it for next time.
Your changes may fail to pass CI once pushed.
-----------------------------------------------------------------------------
EOF
exit 0
fi
# Check version of doxygen is at least 1.12
version=$($DOXYGEN --version | grep -o '[0-9\.]*')
if [[ "1.12.0" > "$version" ]]; then
# No hard error if doxygen version is not the one we want - let CI deal with it
cat <<EOF
ERROR
-----------------------------------------------------------------------------
A minimum of version 1.12 of `which doxygen` is required.
Your version is $version. Please upgrade it for next time.
Your changes may fail to pass CI once pushed.
-----------------------------------------------------------------------------
EOF

View File

@@ -5,6 +5,20 @@
# This script checks the format of the code and cmake files.
# In many cases it will automatically fix the issues and abort the commit.
no_formatted_directories_staged() {
staged_directories=$(git diff-index --cached --name-only HEAD | awk -F/ '{print $1}')
for sd in $staged_directories; do
if [[ "$sd" =~ ^(benchmark|cmake|src|tests)$ ]]; then
return 1
fi
done
return 0
}
if no_formatted_directories_staged ; then
exit 0
fi
echo "+ Checking code format..."
# paths to check and re-format
@@ -12,12 +26,12 @@ sources="src tests"
formatter="clang-format -i"
version=$($formatter --version | grep -o '[0-9\.]*')
if [[ "18.0.0" > "$version" ]]; then
if [[ "19.0.0" > "$version" ]]; then
cat <<EOF
ERROR
-----------------------------------------------------------------------------
A minimum of version 18 of `which clang-format` is required.
A minimum of version 19 of `which clang-format` is required.
Your version is $version.
Please fix paths and run again.
-----------------------------------------------------------------------------

View File

@@ -3,6 +3,5 @@
# This script is intended to be run from the root of the repository.
source .githooks/check-format
#source .githooks/check-docs
source .githooks/check-docs
# TODO: Fix Doxygen issue with reference links. See https://github.com/XRPLF/clio/issues/1431

View File

@@ -1,3 +1,58 @@
#!/bin/sh
# git for-each-ref refs/tags # see which tags are annotated and which are lightweight. Annotated tags are "tag" objects.
# # Set these so your commits and tags are always signed
# git config commit.gpgsign true
# git config tag.gpgsign true
verify_commit_signed() {
if git verify-commit HEAD &> /dev/null; then
:
# echo "HEAD commit seems signed..."
else
echo "HEAD commit isn't signed!"
exit 1
fi
}
verify_tag() {
if git describe --exact-match --tags HEAD &> /dev/null; then
: # You might be ok to push
# echo "Tag is annotated."
return 0
else
echo "Tag for [$version] not an annotated tag."
exit 1
fi
}
verify_tag_signed() {
if git verify-tag "$version" &> /dev/null ; then
: # ok, I guess we'll let you push
# echo "Tag appears signed"
return 0
else
echo "$version tag isn't signed"
echo "Sign it with [git tag -ams\"$version\" $version]"
exit 1
fi
}
while read local_ref local_oid remote_ref remote_oid; do
# Check some things if we're pushing a branch called "release/"
if echo "$remote_ref" | grep ^refs\/heads\/release\/ &> /dev/null ; then
version=$(git tag --points-at HEAD)
echo "Looks like you're trying to push a $version release..."
echo "Making sure you've signed and tagged it."
if verify_commit_signed && verify_tag && verify_tag_signed ; then
: # Ok, I guess you can push
else
exit 1
fi
fi
done
command -v git-lfs >/dev/null 2>&1 || { echo >&2 "\nThis repository is configured for Git LFS but 'git-lfs' was not found on your path. If you no longer wish to use Git LFS, remove this hook by deleting the 'pre-push' file in the hooks directory (set by 'core.hookspath'; usually '.git/hooks').\n"; exit 2; }
git lfs pre-push "$@"

View File

@@ -4,12 +4,18 @@ inputs:
target:
description: Build target name
default: all
substract_threads:
description: An option for the action get_number_of_threads. See get_number_of_threads
required: true
default: '0'
runs:
using: composite
steps:
- name: Get number of threads
uses: ./.github/actions/get_number_of_threads
id: number_of_threads
with:
substract_threads: ${{ inputs.substract_threads }}
- name: Build Clio
shell: bash

View File

@@ -0,0 +1,66 @@
name: Build and push Docker image
description: Build and push Docker image to DockerHub and GitHub Container Registry
inputs:
image_name:
description: Name of the image to build
required: true
push_image:
description: Whether to push the image to the registry (true/false)
required: true
directory:
description: The directory containing the Dockerfile
required: true
tags:
description: Comma separated tags to apply to the image
required: true
platforms:
description: Platforms to build the image for (e.g. linux/amd64,linux/arm64)
required: true
description:
description: Short description of the image
required: true
runs:
using: composite
steps:
- name: Login to DockerHub
if: ${{ inputs.push_image == 'true' }}
uses: docker/login-action@v3
with:
username: ${{ env.DOCKERHUB_USER }}
password: ${{ env.DOCKERHUB_PW }}
- name: Login to GitHub Container Registry
if: ${{ inputs.push_image == 'true' }}
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ env.GITHUB_TOKEN }}
- uses: docker/setup-qemu-action@v3
- uses: docker/setup-buildx-action@v3
- uses: docker/metadata-action@v5
id: meta
with:
images: ${{ inputs.image_name }}
tags: ${{ inputs.tags }}
- name: Build and push
uses: docker/build-push-action@v5
with:
context: ${{ inputs.directory }}
platforms: ${{ inputs.platforms }}
push: ${{ inputs.push_image == 'true' }}
tags: ${{ steps.meta.outputs.tags }}
- name: Update DockerHub description
if: ${{ inputs.push_image == 'true' }}
uses: peter-evans/dockerhub-description@v4
with:
username: ${{ env.DOCKERHUB_USER }}
password: ${{ env.DOCKERHUB_PW }}
repository: ${{ inputs.image_name }}
short-description: ${{ inputs.description }}
readme-filepath: ${{ inputs.directory }}/README.md

View File

@@ -14,7 +14,7 @@ inputs:
assignees:
description: Comma-separated list of assignees
required: true
default: 'cindyyan317,godexsoft,kuznetsss'
default: 'godexsoft,kuznetsss,PeterChen13579'
outputs:
created_issue_id:
description: Created issue id
@@ -31,5 +31,3 @@ runs:
created_issue=$(cat create_issue.log | sed 's|.*/||')
echo "created_issue=$created_issue" >> $GITHUB_OUTPUT
rm create_issue.log issue.md

View File

@@ -12,6 +12,10 @@ inputs:
description: Build type for third-party libraries and clio. Could be 'Release', 'Debug'
required: true
default: 'Release'
build_integration_tests:
description: Whether to build integration tests
required: true
default: 'true'
code_coverage:
description: Whether conan's coverage option should be on or not
required: true
@@ -20,6 +24,10 @@ inputs:
description: Whether Clio is to be statically linked
required: true
default: 'false'
sanitizer:
description: Sanitizer to use
required: true
default: 'false' # false, tsan, asan or ubsan
runs:
using: composite
steps:
@@ -33,14 +41,20 @@ runs:
BUILD_OPTION: "${{ inputs.conan_cache_hit == 'true' && 'missing' || '' }}"
CODE_COVERAGE: "${{ inputs.code_coverage == 'true' && 'True' || 'False' }}"
STATIC_OPTION: "${{ inputs.static == 'true' && 'True' || 'False' }}"
INTEGRATION_TESTS_OPTION: "${{ inputs.build_integration_tests == 'true' && 'True' || 'False' }}"
run: |
cd build
conan install .. -of . -b $BUILD_OPTION -s build_type=${{ inputs.build_type }} -o clio:static="${STATIC_OPTION}" -o clio:tests=True -o clio:integration_tests=True -o clio:lint=False -o clio:coverage="${CODE_COVERAGE}" --profile ${{ inputs.conan_profile }}
conan install .. -of . -b $BUILD_OPTION -s build_type=${{ inputs.build_type }} -o clio:static="${STATIC_OPTION}" -o clio:tests=True -o clio:integration_tests="${INTEGRATION_TESTS_OPTION}" -o clio:lint=False -o clio:coverage="${CODE_COVERAGE}" --profile ${{ inputs.conan_profile }}
- name: Run cmake
shell: bash
env:
BUILD_TYPE: "${{ inputs.build_type }}"
SANITIZER_OPTION: |
${{ inputs.sanitizer == 'tsan' && '-Dsan=thread' ||
inputs.sanitizer == 'ubsan' && '-Dsan=undefined' ||
inputs.sanitizer == 'asan' && '-Dsan=address' ||
'' }}
run: |
cd build
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=${{ inputs.build_type }} ${{ inputs.extra_cmake_args }} .. -G Ninja
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE="${BUILD_TYPE}" ${SANITIZER_OPTION} .. -G Ninja

View File

@@ -1,5 +1,10 @@
name: Get number of threads
description: Determines number of threads to use on macOS and Linux
inputs:
substract_threads:
description: How many threads to substract from the calculated number
required: true
default: '0'
outputs:
threads_number:
description: Number of threads to use
@@ -19,8 +24,11 @@ runs:
shell: bash
run: echo "num=$(($(nproc) - 2))" >> $GITHUB_OUTPUT
- name: Export output variable
shell: bash
- name: Shift and export number of threads
id: number_of_threads_export
shell: bash
run: |
echo "num=${{ steps.mac_threads.outputs.num || steps.linux_threads.outputs.num }}" >> $GITHUB_OUTPUT
num_of_threads=${{ steps.mac_threads.outputs.num || steps.linux_threads.outputs.num }}
shift_by=${{ inputs.substract_threads }}
shifted=$((num_of_threads - shift_by))
echo "num=$(( shifted > 1 ? shifted : 1 ))" >> $GITHUB_OUTPUT

View File

@@ -11,9 +11,35 @@ runs:
if: ${{ runner.os == 'macOS' }}
shell: bash
run: |
brew install llvm@14 pkg-config ninja bison cmake ccache jq gh conan@1
brew install llvm@14 pkg-config ninja bison ccache jq gh conan@1 ca-certificates
echo "/opt/homebrew/opt/conan@1/bin" >> $GITHUB_PATH
- name: Install CMake 3.31.6 on mac
if: ${{ runner.os == 'macOS' }}
shell: bash
run: |
# Uninstall any existing cmake
brew uninstall cmake --ignore-dependencies || true
# Download specific cmake formula
FORMULA_URL="https://raw.githubusercontent.com/Homebrew/homebrew-core/b4e46db74e74a8c1650b38b1da222284ce1ec5ce/Formula/c/cmake.rb"
FORMULA_EXPECTED_SHA256="c7ec95d86f0657638835441871e77541165e0a2581b53b3dd657cf13ad4228d4"
mkdir -p /tmp/homebrew-formula
curl -s -L $FORMULA_URL -o /tmp/homebrew-formula/cmake.rb
# Verify the downloaded formula
ACTUAL_SHA256=$(shasum -a 256 /tmp/homebrew-formula/cmake.rb | cut -d ' ' -f 1)
if [ "$ACTUAL_SHA256" != "$FORMULA_EXPECTED_SHA256" ]; then
echo "Error: Formula checksum mismatch"
echo "Expected: $FORMULA_EXPECTED_SHA256"
echo "Actual: $ACTUAL_SHA256"
exit 1
fi
# Install cmake from the specific formula with force flag
brew install --force /tmp/homebrew-formula/cmake.rb
- name: Fix git permissions on Linux
if: ${{ runner.os == 'Linux' }}
shell: bash

View File

@@ -15,10 +15,10 @@ runs:
if: ${{ runner.os == 'macOS' }}
shell: bash
env:
CONAN_PROFILE: apple_clang_15
CONAN_PROFILE: apple_clang_16
id: conan_setup_mac
run: |
echo "Creating $CONAN_PROFILE conan profile";
echo "Creating $CONAN_PROFILE conan profile"
conan profile new $CONAN_PROFILE --detect --force
conan profile update settings.compiler.libcxx=libc++ $CONAN_PROFILE
conan profile update settings.compiler.cppstd=20 $CONAN_PROFILE

16
.github/dependabot.yml vendored Normal file
View File

@@ -0,0 +1,16 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
day: "monday"
time: "04:00"
timezone: "Etc/GMT"
reviewers:
- "godexsoft"
- "kuznetsss"
- "PeterChen13579"
commit-message:
prefix: "[CI] "
target-branch: "develop"

45
.github/scripts/execute-tests-under-sanitizer vendored Executable file
View File

@@ -0,0 +1,45 @@
#!/bin/bash
set -o pipefail
# Note: This script is intended to be run from the root of the repository.
#
# This script runs each unit-test separately and generates reports from the currently active sanitizer.
# Output is saved in ./.sanitizer-report in the root of the repository
if [[ -z "$1" ]]; then
cat <<EOF
ERROR
-----------------------------------------------------------------------------
Path to clio_tests should be passed as first argument to the script.
-----------------------------------------------------------------------------
EOF
exit 1
fi
TEST_BINARY=$1
if [[ ! -f "$TEST_BINARY" ]]; then
echo "Test binary not found: $TEST_BINARY"
exit 1
fi
TESTS=$($TEST_BINARY --gtest_list_tests | awk '/^ / {print suite $1} !/^ / {suite=$1}')
OUTPUT_DIR="./.sanitizer-report"
mkdir -p "$OUTPUT_DIR"
for TEST in $TESTS; do
OUTPUT_FILE="$OUTPUT_DIR/${TEST//\//_}"
export TSAN_OPTIONS="log_path=\"$OUTPUT_FILE\" die_after_fork=0"
export ASAN_OPTIONS="log_path=\"$OUTPUT_FILE\""
export UBSAN_OPTIONS="log_path=\"$OUTPUT_FILE\""
export MallocNanoZone='0' # for MacOSX
$TEST_BINARY --gtest_filter="$TEST" > /dev/null 2>&1
if [ $? -ne 0 ]; then
echo "'$TEST' failed a sanitizer check."
fi
done
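
For context, a minimal C++ example (not Clio code) of the kind of defect such a run surfaces: built with `-fsanitize=thread`, ThreadSanitizer reports a data race on `counter`, and the script above would capture that report under ./.sanitizer-report:

```cpp
// Two threads increment an unsynchronized global; TSAN flags this as a data race.
#include <thread>

int counter = 0;

int main()
{
    std::thread t1([] { for (int i = 0; i < 100000; ++i) ++counter; });
    std::thread t2([] { for (int i = 0; i < 100000; ++i) ++counter; });
    t1.join();
    t2.join();
    return 0;
}
```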

View File

@@ -9,7 +9,7 @@ on:
jobs:
check_format:
name: Check format
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
container:
image: rippleci/clio_ci:latest
steps:
@@ -23,10 +23,10 @@ jobs:
run: |
./.githooks/check-format --diff
shell: bash
check_docs:
name: Check documentation
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
container:
image: rippleci/clio_ci:latest
steps:
@@ -47,133 +47,44 @@ jobs:
matrix:
include:
- os: heavy
container:
image: rippleci/clio_ci:latest
build_type: Release
conan_profile: gcc
build_type: Release
container: '{ "image": "rippleci/clio_ci:latest" }'
code_coverage: false
static: true
- os: heavy
container:
image: rippleci/clio_ci:latest
build_type: Debug
conan_profile: gcc
build_type: Debug
container: '{ "image": "rippleci/clio_ci:latest" }'
code_coverage: true
static: true
- os: heavy
container:
image: rippleci/clio_ci:latest
build_type: Release
conan_profile: clang
build_type: Release
container: '{ "image": "rippleci/clio_ci:latest" }'
code_coverage: false
static: true
- os: heavy
container:
image: rippleci/clio_ci:latest
build_type: Debug
conan_profile: clang
build_type: Debug
container: '{ "image": "rippleci/clio_ci:latest" }'
code_coverage: false
static: true
- os: macos14
- os: macos15
build_type: Release
code_coverage: false
static: false
runs-on: [self-hosted, "${{ matrix.os }}"]
container: ${{ matrix.container }}
steps:
- name: Clean workdir
if: ${{ runner.os == 'macOS' }}
uses: kuznetsss/workspace-cleanup@1.0
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Prepare runner
uses: ./.github/actions/prepare_runner
with:
disable_ccache: false
- name: Setup conan
uses: ./.github/actions/setup_conan
id: conan
with:
conan_profile: ${{ matrix.conan_profile }}
- name: Restore cache
uses: ./.github/actions/restore_cache
id: restore_cache
with:
conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
conan_profile: ${{ steps.conan.outputs.conan_profile }}
ccache_dir: ${{ env.CCACHE_DIR }}
build_type: ${{ matrix.build_type }}
code_coverage: ${{ matrix.code_coverage }}
- name: Run conan and cmake
uses: ./.github/actions/generate
with:
conan_profile: ${{ steps.conan.outputs.conan_profile }}
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
build_type: ${{ matrix.build_type }}
code_coverage: ${{ matrix.code_coverage }}
static: ${{ matrix.static }}
- name: Build Clio
uses: ./.github/actions/build_clio
- name: Show ccache's statistics
shell: bash
id: ccache_stats
run: |
ccache -s > /tmp/ccache.stats
miss_rate=$(cat /tmp/ccache.stats | grep 'Misses' | head -n1 | sed 's/.*(\(.*\)%).*/\1/')
echo "miss_rate=${miss_rate}" >> $GITHUB_OUTPUT
cat /tmp/ccache.stats
- name: Strip tests
if: ${{ !matrix.code_coverage }}
run: strip build/clio_tests && strip build/clio_integration_tests
- name: Upload clio_server
uses: actions/upload-artifact@v4
with:
name: clio_server_${{ runner.os }}_${{ matrix.build_type }}_${{ steps.conan.outputs.conan_profile }}
path: build/clio_server
- name: Upload clio_tests
if: ${{ !matrix.code_coverage }}
uses: actions/upload-artifact@v4
with:
name: clio_tests_${{ runner.os }}_${{ matrix.build_type }}_${{ steps.conan.outputs.conan_profile }}
path: build/clio_*tests
- name: Save cache
uses: ./.github/actions/save_cache
with:
conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
conan_hash: ${{ steps.restore_cache.outputs.conan_hash }}
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
ccache_dir: ${{ env.CCACHE_DIR }}
ccache_cache_hit: ${{ steps.restore_cache.outputs.ccache_cache_hit }}
ccache_cache_miss_rate: ${{ steps.ccache_stats.outputs.miss_rate }}
build_type: ${{ matrix.build_type }}
code_coverage: ${{ matrix.code_coverage }}
conan_profile: ${{ steps.conan.outputs.conan_profile }}
# TODO: This is not a part of build process but it is the easiest way to do it here.
# It will be refactored in https://github.com/XRPLF/clio/issues/1075
- name: Run code coverage
if: ${{ matrix.code_coverage }}
uses: ./.github/actions/code_coverage
upload_coverage_report:
name: Codecov
needs: build
uses: ./.github/workflows/upload_coverage_report.yml
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
uses: ./.github/workflows/build_impl.yml
with:
runs_on: ${{ matrix.os }}
container: ${{ matrix.container }}
conan_profile: ${{ matrix.conan_profile }}
build_type: ${{ matrix.build_type }}
code_coverage: ${{ matrix.code_coverage }}
static: ${{ matrix.static }}
unit_tests: true
integration_tests: true
clio_server: true
test:
name: Run Tests
@@ -183,24 +94,24 @@ jobs:
matrix:
include:
- os: heavy
container:
image: rippleci/clio_ci:latest
conan_profile: gcc
build_type: Release
- os: heavy
container:
image: rippleci/clio_ci:latest
- os: heavy
conan_profile: clang
build_type: Release
- os: heavy
container:
image: rippleci/clio_ci:latest
- os: heavy
conan_profile: clang
build_type: Debug
- os: macos14
conan_profile: apple_clang_15
container:
image: rippleci/clio_ci:latest
- os: macos15
conan_profile: apple_clang_16
build_type: Release
runs-on: [self-hosted, "${{ matrix.os }}"]
runs-on: ${{ matrix.os }}
container: ${{ matrix.container }}
steps:
@@ -211,7 +122,49 @@ jobs:
- uses: actions/download-artifact@v4
with:
name: clio_tests_${{ runner.os }}_${{ matrix.build_type }}_${{ matrix.conan_profile }}
- name: Run clio_tests
run: |
chmod +x ./clio_tests
./clio_tests
check_config:
name: Check Config Description
needs: build
runs-on: heavy
container:
image: rippleci/clio_ci:latest
steps:
- uses: actions/checkout@v4
- uses: actions/download-artifact@v4
with:
name: clio_server_Linux_Release_gcc
- name: Compare Config Description
shell: bash
run: |
repoConfigFile=docs/config-description.md
if ! [ -f ${repoConfigFile} ]; then
echo "Config Description markdown file is missing in docs folder"
exit 1
fi
chmod +x ./clio_server
configDescriptionFile=config_description_new.md
./clio_server -d ${configDescriptionFile}
configDescriptionHash=$(sha256sum ${configDescriptionFile} | cut -d' ' -f1)
repoConfigHash=$(sha256sum ${repoConfigFile} | cut -d' ' -f1)
if [ ${configDescriptionHash} != ${repoConfigHash} ]; then
echo "Markdown file is not up to date"
diff -u "${repoConfigFile}" "${configDescriptionFile}"
rm -f ${configDescriptionFile}
exit 1
fi
rm -f ${configDescriptionFile}
exit 0
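
If this check fails, the committed markdown can be regenerated locally with the same flag the job uses; the binary location below assumes a standard in-tree build and is only illustrative:

```sh
# Regenerate docs/config-description.md from a freshly built clio_server
./build/clio_server -d docs/config-description.md
```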

View File

@@ -0,0 +1,95 @@
name: Build and publish Clio docker image
on:
workflow_call:
inputs:
tags:
required: true
type: string
description: Comma separated tags for docker image
artifact_name:
type: string
description: Name of Github artifact to put into docker image
strip_binary:
type: boolean
description: Whether to strip clio binary
default: true
publish_image:
type: boolean
description: Whether to publish docker image
required: true
workflow_dispatch:
inputs:
tags:
required: true
type: string
description: Comma separated tags for docker image
clio_server_binary_url:
required: true
type: string
description: Url to download clio_server binary from
binary_sha256:
required: true
type: string
description: sha256 hash of the binary
strip_binary:
type: boolean
description: Whether to strip clio binary
default: true
jobs:
build_and_publish_image:
name: Build and publish image
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Download Clio binary from artifact
if: ${{ inputs.artifact_name != null }}
uses: actions/download-artifact@v4
with:
name: ${{ inputs.artifact_name }}
path: ./docker/clio/artifact/
- name: Download Clio binary from url
if: ${{ inputs.clio_server_binary_url != null }}
shell: bash
run: |
wget ${{inputs.clio_server_binary_url}} -P ./docker/clio/artifact/
if [ "$(sha256sum ./docker/clio/clio_server | awk '{print $1}')" != "${{inputs.binary_sha256}}" ]; then
echo "Binary sha256 sum doesn't match"
exit 1
fi
- name: Unpack binary
shell: bash
run: |
sudo apt update && sudo apt install -y tar unzip
cd docker/clio/artifact
artifact=$(find . -type f)
if [[ $artifact == *.zip ]]; then
unzip $artifact
elif [[ $artifact == *.tar.gz ]]; then
tar -xvf $artifact
fi
mv clio_server ../
cd ../
rm -rf ./artifact
- name: Strip binary
if: ${{ inputs.strip_binary }}
shell: bash
run: strip ./docker/clio/clio_server
- name: Build Docker image
uses: ./.github/actions/build_docker_image
env:
DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
DOCKERHUB_PW: ${{ secrets.DOCKERHUB_PW }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
image_name: rippleci/clio
push_image: ${{ inputs.publish_image }}
directory: docker/clio
tags: ${{ inputs.tags }}
platforms: linux/amd64
description: Clio is an XRP Ledger API server.
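
For reference, a manual run of this workflow could be dispatched with the GitHub CLI roughly as follows (all values are placeholders, not real artifacts):

```sh
# Hypothetical manual dispatch of the docker image workflow from a downloadable binary
gh workflow run build_clio_docker_image.yml \
  -f tags="type=raw,value=my-test-tag" \
  -f clio_server_binary_url="https://example.com/clio_server.tar.gz" \
  -f binary_sha256="<sha256 of the downloaded archive>" \
  -f strip_binary=true
```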

.github/workflows/build_impl.yml

@@ -0,0 +1,192 @@
name: Reusable build
on:
workflow_call:
inputs:
runs_on:
description: Runner to run the job on
required: true
type: string
default: heavy
container:
description: "The container object as a JSON string (leave empty to run natively)"
required: true
type: string
default: ""
conan_profile:
description: Conan profile to use
required: true
type: string
build_type:
description: Build type
required: true
type: string
disable_cache:
description: Whether ccache and conan cache should be disabled
required: false
type: boolean
default: false
code_coverage:
description: Whether to enable code coverage
required: true
type: boolean
default: false
static:
description: Whether to build static binaries
required: true
type: boolean
default: true
unit_tests:
description: Whether to run unit tests
required: true
type: boolean
default: false
integration_tests:
description: Whether to run integration tests
required: true
type: boolean
default: false
clio_server:
description: Whether to build clio_server
required: true
type: boolean
default: true
target:
description: Build target name
required: false
type: string
default: all
sanitizer:
description: Sanitizer to use
required: false
type: string
default: 'false'
jobs:
build:
name: Build ${{ inputs.container != '' && 'in container' || 'natively' }}
runs-on: ${{ inputs.runs_on }}
container: ${{ inputs.container != '' && fromJson(inputs.container) || null }}
steps:
- name: Clean workdir
if: ${{ runner.os == 'macOS' }}
uses: kuznetsss/workspace-cleanup@1.0
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Prepare runner
uses: ./.github/actions/prepare_runner
with:
disable_ccache: ${{ inputs.disable_cache }}
- name: Setup conan
uses: ./.github/actions/setup_conan
id: conan
with:
conan_profile: ${{ inputs.conan_profile }}
- name: Restore cache
if: ${{ !inputs.disable_cache }}
uses: ./.github/actions/restore_cache
id: restore_cache
with:
conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
conan_profile: ${{ steps.conan.outputs.conan_profile }}
ccache_dir: ${{ env.CCACHE_DIR }}
build_type: ${{ inputs.build_type }}
code_coverage: ${{ inputs.code_coverage }}
- name: Run conan and cmake
uses: ./.github/actions/generate
with:
conan_profile: ${{ steps.conan.outputs.conan_profile }}
conan_cache_hit: ${{ !inputs.disable_cache && steps.restore_cache.outputs.conan_cache_hit }}
build_type: ${{ inputs.build_type }}
code_coverage: ${{ inputs.code_coverage }}
static: ${{ inputs.static }}
sanitizer: ${{ inputs.sanitizer }}
- name: Build Clio
uses: ./.github/actions/build_clio
with:
target: ${{ inputs.target }}
- name: Show ccache's statistics
if: ${{ !inputs.disable_cache }}
shell: bash
id: ccache_stats
run: |
ccache -s > /tmp/ccache.stats
miss_rate=$(cat /tmp/ccache.stats | grep 'Misses' | head -n1 | sed 's/.*(\(.*\)%).*/\1/')
echo "miss_rate=${miss_rate}" >> $GITHUB_OUTPUT
cat /tmp/ccache.stats
- name: Strip unit_tests
if: ${{ inputs.unit_tests && !inputs.code_coverage && inputs.sanitizer == 'false' }}
run: strip build/clio_tests
- name: Strip integration_tests
if: ${{ inputs.integration_tests && !inputs.code_coverage }}
run: strip build/clio_integration_tests
- name: Upload clio_server
if: ${{ inputs.clio_server }}
uses: actions/upload-artifact@v4
with:
name: clio_server_${{ runner.os }}_${{ inputs.build_type }}_${{ steps.conan.outputs.conan_profile }}
path: build/clio_server
- name: Upload clio_tests
if: ${{ inputs.unit_tests && !inputs.code_coverage }}
uses: actions/upload-artifact@v4
with:
name: clio_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ steps.conan.outputs.conan_profile }}
path: build/clio_tests
- name: Upload clio_integration_tests
if: ${{ inputs.integration_tests && !inputs.code_coverage }}
uses: actions/upload-artifact@v4
with:
name: clio_integration_tests_${{ runner.os }}_${{ inputs.build_type }}_${{ steps.conan.outputs.conan_profile }}
path: build/clio_integration_tests
- name: Save cache
if: ${{ !inputs.disable_cache && github.ref == 'refs/heads/develop' }}
uses: ./.github/actions/save_cache
with:
conan_dir: ${{ env.CONAN_USER_HOME }}/.conan
conan_hash: ${{ steps.restore_cache.outputs.conan_hash }}
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
ccache_dir: ${{ env.CCACHE_DIR }}
ccache_cache_hit: ${{ steps.restore_cache.outputs.ccache_cache_hit }}
ccache_cache_miss_rate: ${{ steps.ccache_stats.outputs.miss_rate }}
build_type: ${{ inputs.build_type }}
code_coverage: ${{ inputs.code_coverage }}
conan_profile: ${{ steps.conan.outputs.conan_profile }}
# TODO: This is not a part of build process but it is the easiest way to do it here.
# It will be refactored in https://github.com/XRPLF/clio/issues/1075
- name: Run code coverage
if: ${{ inputs.code_coverage }}
uses: ./.github/actions/code_coverage
upload_coverage_report:
if: ${{ inputs.code_coverage }}
name: Codecov
needs: build
uses: ./.github/workflows/upload_coverage_report.yml
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

View File

@@ -47,7 +47,7 @@ jobs:
- name: Upload clio_tests
uses: actions/upload-artifact@v4
with:
name: clio_tests_libxrpl-${{ github.event.client_payload.version }}
name: clio_tests_check_libxrpl
path: build/clio_tests
run_tests:
@@ -60,7 +60,7 @@ jobs:
steps:
- uses: actions/download-artifact@v4
with:
name: clio_tests_libxrpl-${{ github.event.client_payload.version }}
name: clio_tests_check_libxrpl
- name: Run clio_tests
run: |
@@ -71,7 +71,7 @@ jobs:
name: Create an issue on failure
needs: [build, run_tests]
if: ${{ always() && contains(needs.*.result, 'failure') }}
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
permissions:
contents: write
issues: write

.github/workflows/check_pr_title.yml

@@ -0,0 +1,18 @@
name: Check PR title
on:
pull_request:
types: [opened, edited, reopened, synchronize]
branches: [develop]
jobs:
check_title:
runs-on: ubuntu-latest
# permissions:
# pull-requests: write
steps:
- uses: ytanikin/PRConventionalCommits@1.3.0
with:
task_types: '["build","feat","fix","docs","test","ci","style","refactor","perf","chore"]'
add_label: false
# Turned off labelling because it leads to an error, see https://github.com/ytanikin/PRConventionalCommits/issues/19
# custom_labels: '{"build":"build", "feat":"enhancement", "fix":"bug", "docs":"documentation", "test":"testability", "ci":"ci", "style":"refactoring", "refactor":"refactoring", "perf":"performance", "chore":"tooling"}'

View File

@@ -1,7 +1,7 @@
name: Clang-tidy check
on:
schedule:
- cron: "0 6 * * 1-5"
- cron: "0 9 * * 1-5"
workflow_dispatch:
pull_request:
branches: [develop]
@@ -12,7 +12,7 @@ on:
jobs:
clang_tidy:
runs-on: [self-hosted, Linux]
runs-on: heavy
container:
image: rippleci/clio_ci:latest
permissions:
@@ -60,7 +60,7 @@ jobs:
shell: bash
id: run_clang_tidy
run: |
run-clang-tidy-18 -p build -j ${{ steps.number_of_threads.outputs.threads_number }} -fix -quiet 1>output.txt
run-clang-tidy-19 -p build -j ${{ steps.number_of_threads.outputs.threads_number }} -fix -quiet 1>output.txt
- name: Check format
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
@@ -89,7 +89,7 @@ jobs:
List of the issues found: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}/
- uses: crazy-max/ghaction-import-gpg@v5
- uses: crazy-max/ghaction-import-gpg@v6
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
with:
gpg_private_key: ${{ secrets.ACTIONS_GPG_PRIVATE_KEY }}
@@ -99,7 +99,7 @@ jobs:
- name: Create PR with fixes
if: ${{ steps.run_clang_tidy.outcome != 'success' }}
uses: peter-evans/create-pull-request@v5
uses: peter-evans/create-pull-request@v7
env:
GH_REPO: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
@@ -109,9 +109,9 @@ jobs:
branch: "clang_tidy/autofix"
branch-suffix: timestamp
delete-branch: true
title: "[CI] clang-tidy auto fixes"
title: "style: clang-tidy auto fixes"
body: "Fixes #${{ steps.create_issue.outputs.created_issue_id }}. Please review and commit clang-tidy fixes."
reviewers: "cindyyan317,godexsoft,kuznetsss"
reviewers: "godexsoft,kuznetsss,PeterChen13579"
- name: Fail the job
if: ${{ steps.run_clang_tidy.outcome != 'success' }}

View File

@@ -6,7 +6,7 @@ on:
jobs:
restart_clang_tidy:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
permissions:
actions: write
@@ -17,7 +17,7 @@ jobs:
id: check
shell: bash
run: |
passed=$(if [[ $(git log -1 --pretty=format:%s | grep '\[CI\] clang-tidy auto fixes') ]]; then echo 'true' ; else echo 'false' ; fi)
passed=$(if [[ $(git log -1 --pretty=format:%s | grep 'style: clang-tidy auto fixes') ]]; then echo 'true' ; else echo 'false' ; fi)
echo "passed=$passed" >> $GITHUB_OUTPUT
- name: Run clang-tidy workflow

View File

@@ -18,7 +18,7 @@ jobs:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
continue-on-error: true
container:
image: rippleci/clio_ci:latest
@@ -34,7 +34,7 @@ jobs:
cmake ../docs && cmake --build . --target docs
- name: Setup Pages
uses: actions/configure-pages@v4
uses: actions/configure-pages@v5
- name: Upload artifact
uses: actions/upload-pages-artifact@v3

View File

@@ -1,8 +1,12 @@
name: Nightly release
on:
schedule:
- cron: '0 5 * * 1-5'
- cron: '0 8 * * 1-5'
workflow_dispatch:
pull_request:
paths:
- '.github/workflows/nightly.yml'
- '.github/workflows/build_clio_docker_image.yml'
jobs:
build:
@@ -11,70 +15,29 @@ jobs:
fail-fast: false
matrix:
include:
- os: macos14
- os: macos15
build_type: Release
static: false
- os: heavy
build_type: Release
container:
image: rippleci/clio_ci:latest
static: true
container: '{ "image": "rippleci/clio_ci:latest" }'
- os: heavy
build_type: Debug
container:
image: rippleci/clio_ci:latest
runs-on: [self-hosted, "${{ matrix.os }}"]
container: ${{ matrix.container }}
steps:
- name: Clean workdir
if: ${{ runner.os == 'macOS' }}
uses: kuznetsss/workspace-cleanup@1.0
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Prepare runner
uses: ./.github/actions/prepare_runner
with:
disable_ccache: true
- name: Setup conan
uses: ./.github/actions/setup_conan
id: conan
with:
conan_profile: gcc
- name: Run conan and cmake
uses: ./.github/actions/generate
with:
conan_profile: ${{ steps.conan.outputs.conan_profile }}
conan_cache_hit: ${{ steps.restore_cache.outputs.conan_cache_hit }}
build_type: ${{ matrix.build_type }}
- name: Build Clio
uses: ./.github/actions/build_clio
- name: Strip tests
run: strip build/clio_tests && strip build/clio_integration_tests
- name: Upload clio_tests
uses: actions/upload-artifact@v4
with:
name: clio_tests_${{ runner.os }}_${{ matrix.build_type }}
path: build/clio_*tests
- name: Compress clio_server
shell: bash
run: |
cd build
tar czf ./clio_server_${{ runner.os }}_${{ matrix.build_type }}.tar.gz ./clio_server
- name: Upload clio_server
uses: actions/upload-artifact@v4
with:
name: clio_server_${{ runner.os }}_${{ matrix.build_type }}
path: build/clio_server_${{ runner.os }}_${{ matrix.build_type }}.tar.gz
static: true
container: '{ "image": "rippleci/clio_ci:latest" }'
uses: ./.github/workflows/build_impl.yml
with:
runs_on: ${{ matrix.os }}
container: ${{ matrix.container }}
conan_profile: gcc
build_type: ${{ matrix.build_type }}
code_coverage: false
static: ${{ matrix.static }}
unit_tests: true
integration_tests: true
clio_server: true
disable_cache: true
run_tests:
needs: build
@@ -82,15 +45,18 @@ jobs:
fail-fast: false
matrix:
include:
- os: macos14
- os: macos15
conan_profile: apple_clang_16
build_type: Release
integration_tests: false
- os: heavy
conan_profile: gcc
build_type: Release
container:
image: rippleci/clio_ci:latest
integration_tests: true
- os: heavy
conan_profile: gcc
build_type: Debug
container:
image: rippleci/clio_ci:latest
@@ -114,13 +80,17 @@ jobs:
- uses: actions/download-artifact@v4
with:
name: clio_tests_${{ runner.os }}_${{ matrix.build_type }}
name: clio_tests_${{ runner.os }}_${{ matrix.build_type }}_${{ matrix.conan_profile }}
- name: Run clio_tests
run: |
chmod +x ./clio_tests
./clio_tests
- uses: actions/download-artifact@v4
with:
name: clio_integration_tests_${{ runner.os }}_${{ matrix.build_type }}_${{ matrix.conan_profile }}
# To be enabled back once docker in mac runner arrives
# https://github.com/XRPLF/clio/issues/1400
- name: Run clio_integration_tests
@@ -130,8 +100,9 @@ jobs:
./clio_integration_tests --backend_host=scylladb
nightly_release:
if: ${{ github.event_name != 'pull_request' }}
needs: run_tests
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
env:
GH_REPO: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
@@ -143,13 +114,13 @@ jobs:
- uses: actions/download-artifact@v4
with:
path: nightly_release
pattern: clio_server_*
- name: Prepare files
shell: bash
run: |
cp ${{ github.workspace }}/.github/workflows/nightly_notes.md "${RUNNER_TEMP}/nightly_notes.md"
cd nightly_release
rm -r clio_*tests*
for d in $(ls); do
archive_name=$(ls $d)
mv ${d}/${archive_name} ./
@@ -172,10 +143,22 @@ jobs:
--target $GITHUB_SHA --notes-file "${RUNNER_TEMP}/nightly_notes.md" \
./nightly_release/clio_server*
build_and_publish_docker_image:
uses: ./.github/workflows/build_clio_docker_image.yml
needs: run_tests
secrets: inherit
with:
tags: |
type=raw,value=nightly
type=raw,value=${{ github.sha }}
artifact_name: clio_server_Linux_Release_gcc
strip_binary: true
publish_image: ${{ github.event_name != 'pull_request' }}
create_issue_on_failure:
needs: [build, run_tests, nightly_release]
if: ${{ always() && contains(needs.*.result, 'failure') }}
runs-on: ubuntu-20.04
needs: [build, run_tests, nightly_release, build_and_publish_docker_image]
if: ${{ always() && contains(needs.*.result, 'failure') && github.event_name != 'pull_request' }}
runs-on: ubuntu-latest
permissions:
contents: write
issues: write

.github/workflows/sanitizers.yml

@@ -0,0 +1,106 @@
name: Run tests with sanitizers
on:
schedule:
- cron: "0 4 * * 1-5"
workflow_dispatch:
pull_request:
paths:
- '.github/workflows/sanitizers.yml'
jobs:
build:
name: Build clio tests
strategy:
fail-fast: false
matrix:
include:
- sanitizer: tsan
compiler: gcc
- sanitizer: asan
compiler: gcc
# - sanitizer: ubsan # todo: enable when heavy runners are available
# compiler: gcc
uses: ./.github/workflows/build_impl.yml
with:
runs_on: ubuntu-latest # todo: change to heavy
container: '{ "image": "rippleci/clio_ci:latest" }'
disable_cache: true
conan_profile: ${{ matrix.compiler }}.${{ matrix.sanitizer }}
build_type: Release
code_coverage: false
static: false
unit_tests: true
integration_tests: false
clio_server: false
target: clio_tests
sanitizer: ${{ matrix.sanitizer }}
# consider combining this with the previous matrix instead
run_tests:
needs: build
strategy:
fail-fast: false
matrix:
include:
- sanitizer: tsan
compiler: gcc
- sanitizer: asan
compiler: gcc
# - sanitizer: ubsan # todo: enable when heavy runners are available
# compiler: gcc
runs-on: ubuntu-latest # todo: change to heavy
container:
image: rippleci/clio_ci:latest
permissions:
contents: write
issues: write
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/download-artifact@v4
with:
name: clio_tests_${{ runner.os }}_Release_${{ matrix.compiler }}.${{ matrix.sanitizer }}
- name: Run clio_tests [${{ matrix.compiler }} / ${{ matrix.sanitizer }}]
shell: bash
run: |
chmod +x ./clio_tests
./.github/scripts/execute-tests-under-sanitizer ./clio_tests
- name: Check for sanitizer report
shell: bash
id: check_report
run: |
if ls .sanitizer-report/* 1> /dev/null 2>&1; then
echo "found_report=true" >> $GITHUB_OUTPUT
else
echo "found_report=false" >> $GITHUB_OUTPUT
fi
- name: Upload report
if: ${{ steps.check_report.outputs.found_report == 'true' }}
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.compiler }}_${{ matrix.sanitizer }}_report
path: .sanitizer-report/*
include-hidden-files: true
#
# todo: enable when we have fixed all currently existing issues from sanitizers
#
# - name: Create an issue
# if: ${{ steps.check_report.outputs.found_report == 'true' }}
# uses: ./.github/actions/create_issue
# env:
# GH_TOKEN: ${{ github.token }}
# with:
# labels: 'bug'
# title: '[${{ matrix.sanitizer }}/${{ matrix.compiler }}] reported issues'
# body: >
# Clio tests failed one or more sanitizer checks when built with ${{ matrix.compiler }}`.
# Workflow: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}/
# Reports are available as artifacts.

View File

@@ -18,30 +18,19 @@ jobs:
name: Build and push docker image
runs-on: [self-hosted, heavy]
steps:
- name: Login to DockerHub
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USER }}
password: ${{ secrets.DOCKERHUB_PW }}
- uses: actions/checkout@v4
- uses: docker/setup-qemu-action@v3
- uses: docker/setup-buildx-action@v3
- uses: docker/metadata-action@v5
id: meta
- uses: ./.github/actions/build_docker_image
env:
DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
DOCKERHUB_PW: ${{ secrets.DOCKERHUB_PW }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
images: rippleci/clio_ci
image_name: rippleci/clio_ci
push_image: ${{ github.event_name != 'pull_request' }}
directory: docker/ci
tags: |
type=raw,value=latest
type=raw,value=gcc_12_clang_16
type=raw,value=${{ env.GITHUB_SHA }}
- name: Build and push
uses: docker/build-push-action@v5
with:
context: ${{ github.workspace }}/docker/ci
type=raw,value=${{ github.sha }}
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.meta.outputs.tags }}
description: CI image for XRPLF/clio.

View File

@@ -9,7 +9,7 @@ on:
jobs:
upload_report:
name: Upload report
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
@@ -23,7 +23,7 @@ jobs:
- name: Upload coverage report
if: ${{ hashFiles('build/coverage_report.xml') != '' }}
uses: wandalen/wretry.action@v1.4.10
uses: wandalen/wretry.action@v3.7.3
with:
action: codecov/codecov-action@v4
with: |

.gitignore

@@ -6,6 +6,7 @@
.vscode
.python-version
.DS_Store
.sanitizer-report
CMakeUserPresets.json
config.json
src/main/impl/Build.cpp
src/util/build/Build.cpp

View File

@@ -16,6 +16,8 @@ option(coverage "Build test coverage report" FALSE)
option(packaging "Create distribution packages" FALSE)
option(lint "Run clang-tidy checks during compilation" FALSE)
option(static "Statically linked Clio" FALSE)
option(snapshot "Build snapshot tool" FALSE)
# ========================================================================== #
set(san "" CACHE STRING "Add sanitizer instrumentation")
set(CMAKE_EXPORT_COMPILE_COMMANDS TRUE)
@@ -65,15 +67,21 @@ endif ()
# Enable selected sanitizer if enabled via `san`
if (san)
set(SUPPORTED_SANITIZERS "address" "thread" "memory" "undefined")
list(FIND SUPPORTED_SANITIZERS "${san}" INDEX)
if (INDEX EQUAL -1)
message(FATAL_ERROR "Error: Unsupported sanitizer '${san}'. Supported values are: ${SUPPORTED_SANITIZERS}.")
endif ()
target_compile_options(
clio PUBLIC # Sanitizers recommend minimum of -O1 for reasonable performance
$<$<CONFIG:Debug>:-O1> ${SAN_FLAG} -fno-omit-frame-pointer
clio_options INTERFACE # Sanitizers recommend minimum of -O1 for reasonable performance
$<$<CONFIG:Debug>:-O1> ${SAN_FLAG} -fno-omit-frame-pointer
)
target_compile_definitions(
clio PUBLIC $<$<STREQUAL:${san},address>:SANITIZER=ASAN> $<$<STREQUAL:${san},thread>:SANITIZER=TSAN>
$<$<STREQUAL:${san},memory>:SANITIZER=MSAN> $<$<STREQUAL:${san},undefined>:SANITIZER=UBSAN>
clio_options INTERFACE $<$<STREQUAL:${san},address>:SANITIZER=ASAN> $<$<STREQUAL:${san},thread>:SANITIZER=TSAN>
$<$<STREQUAL:${san},memory>:SANITIZER=MSAN> $<$<STREQUAL:${san},undefined>:SANITIZER=UBSAN>
)
target_link_libraries(clio INTERFACE ${SAN_FLAG} ${SAN_LIB})
target_link_libraries(clio_options INTERFACE ${SAN_FLAG} ${SAN_LIB})
endif ()
# Generate `docs` target for doxygen documentation if enabled Note: use `make docs` to generate the documentation
@@ -85,3 +93,7 @@ include(install/install)
if (packaging)
include(cmake/packaging.cmake) # This file exists only in build runner
endif ()
if (snapshot)
add_subdirectory(tools/snapshot)
endif ()
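
As a rough local sketch (the usual conan install and toolchain arguments from the build docs are omitted), a sanitizer is selected by passing the `san` cache variable to CMake:

```sh
# Hypothetical configure/build enabling ThreadSanitizer instrumentation via the `san` option
cmake -B build -DCMAKE_BUILD_TYPE=Debug -Dsan=thread
cmake --build build --target clio_tests
```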

View File

@@ -21,13 +21,13 @@ git config --local core.hooksPath .githooks
```
## Git hooks dependencies
The pre-commit hook requires `clang-format >= 18.0.0` and `cmake-format` to be installed on your machine.
The pre-commit hook requires `clang-format >= 19.0.0` and `cmake-format` to be installed on your machine.
`clang-format` can be installed using `brew` on macOS and the default package manager on Linux.
`cmake-format` can be installed using `pip`.
The hook will also attempt to automatically use `doxygen` to verify that everything public in the codebase is covered by doc comments. If `doxygen` is not installed, the hook will raise a warning suggesting to install `doxygen` for future commits.
## Git commands
This section offers a detailed look at the git commands you will need to use to get your PR submitted.
Please note that there is more than one way to do this and these commands are provided for your convenience.
At this point it's assumed that you have already finished working on your feature/bug.
@@ -72,6 +72,9 @@ git push --force
Clio uses `ccache` to speed up compilation. If you want to use it, please make sure it is installed on your machine.
CMake will automatically detect it and use it if it is available.
## Opening a pull request
When a pull request is opened, CI will perform checks on the new code.
The title of the pull request and the squashed commit message should follow the [conventional commits specification](https://www.conventionalcommits.org/en/v1.0.0/).
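For example, a hypothetical title such as `fix: Handle stale cache entries in ledger fetcher` follows the expected `type: description` shape, while a bare `Update code` would fail the PR title check.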
## Fixing issues found during code review
While your code is in review, it's possible that some changes will be requested by reviewer(s).
@@ -102,7 +105,7 @@ The button for that is near the bottom of the PR's page on GitHub.
This is a non-exhaustive list of recommended style guidelines. These are not always strictly enforced and serve as a way to keep the codebase coherent.
## Formatting
Code must conform to `clang-format` version 18, unless the result would be unreasonably difficult to read or maintain.
Code must conform to `clang-format` version 19, unless the result would be unreasonably difficult to read or maintain.
In most cases the pre-commit hook will take care of formatting and will fix any issues automatically.
To manually format your code, use `clang-format -i <your changed files>` for C++ files and `cmake-format -i <your changed files>` for CMake files.
@@ -141,12 +144,13 @@ Existing maintainers can resign, or be subject to a vote for removal at the behe
## Existing Maintainers
* [cindyyan317](https://github.com/cindyyan317) (Ripple)
* [godexsoft](https://github.com/godexsoft) (Ripple)
* [kuznetsss](https://github.com/kuznetsss) (Ripple)
* [legleux](https://github.com/legleux) (Ripple)
* [PeterChen13579](https://github.com/PeterChen13579) (Ripple)
## Honorable ex-Maintainers
* [cindyyan317](https://github.com/cindyyan317) (ex-Ripple)
* [cjcobb23](https://github.com/cjcobb23) (ex-Ripple)
* [natenichols](https://github.com/natenichols) (ex-Ripple)

View File

@@ -28,7 +28,6 @@ Below are some useful docs to learn more about Clio.
**For Developers**:
- [How to build Clio](./docs/build-clio.md)
- [Metrics and static analysis](./docs/metrics-and-static-analysis.md)
- [Coverage report](./docs/coverage-report.md)
**For Operators**:

View File

@@ -12,5 +12,5 @@ target_sources(
include(deps/gbench)
target_include_directories(clio_benchmark PRIVATE .)
target_link_libraries(clio_benchmark PUBLIC clio benchmark::benchmark_main)
target_link_libraries(clio_benchmark PUBLIC clio_etl benchmark::benchmark_main)
set_target_properties(clio_benchmark PROPERTIES RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR})

View File

@@ -31,7 +31,6 @@
#include <cstdint>
#include <latch>
#include <optional>
#include <stdexcept>
#include <thread>
#include <vector>
@@ -189,10 +188,10 @@ public:
static auto
generateData()
{
constexpr auto TOTAL = 10'000;
constexpr auto kTOTAL = 10'000;
std::vector<uint64_t> data;
data.reserve(TOTAL);
for (auto i = 0; i < TOTAL; ++i)
data.reserve(kTOTAL);
for (auto i = 0; i < kTOTAL; ++i)
data.push_back(util::Random::uniform(1, 100'000'000));
return data;
@@ -209,7 +208,7 @@ benchmarkThreads(benchmark::State& state)
}
template <typename CtxType>
void
static void
benchmarkExecutionContextBatched(benchmark::State& state)
{
auto data = generateData();
@@ -220,7 +219,7 @@ benchmarkExecutionContextBatched(benchmark::State& state)
}
template <typename CtxType>
void
static void
benchmarkAnyExecutionContextBatched(benchmark::State& state)
{
auto data = generateData();

cliff.toml

@@ -0,0 +1,92 @@
# git-cliff ~ default configuration file
# https://git-cliff.org/docs/configuration
#
# Lines starting with "#" are comments.
# Configuration options are organized into tables and keys.
# See documentation for more information on available options.
[changelog]
# template for the changelog header
header = """
# Changelog\n
All notable changes to this project will be documented in this file.\n
"""
# template for the changelog body
# https://keats.github.io/tera/docs/#introduction
body = """
{% if version %}\
## [{{ version | trim_start_matches(pat="v") }}] - {{ timestamp | date(format="%Y-%m-%d") }}
{% else %}\
## [unreleased]
{% endif %}\
{% for group, commits in commits | filter(attribute="merge_commit", value=false) | group_by(attribute="group") %}
### {{ group | striptags | trim | upper_first }}
{% for commit in commits %}
- {% if commit.scope %}*({{ commit.scope }})* {% endif %}\
{% if commit.breaking %}[**breaking**] {% endif %}\
{{ commit.message | upper_first }} {% if commit.remote.username %}by @{{ commit.remote.username }}{% endif %}\
{% endfor %}
{% endfor %}\n
"""
# template for the changelog footer
footer = """
<!-- generated by git-cliff -->
"""
# remove the leading and trailing whitespace from the templates
trim = true
# postprocessors
postprocessors = [
# { pattern = '<REPO>', replace = "https://github.com/orhun/git-cliff" }, # replace repository URL
]
# render body even when there are no releases to process
# render_always = true
# output file path
output = "CHANGELOG.md"
[git]
# parse the commits based on https://www.conventionalcommits.org
conventional_commits = true
# filter out the commits that are not conventional
filter_unconventional = true
# process each line of a commit as an individual commit
split_commits = false
# regex for preprocessing the commit messages
commit_preprocessors = [
# Replace issue numbers
#{ pattern = '\((\w+\s)?#([0-9]+)\)', replace = "([#${2}](<REPO>/issues/${2}))"},
# Check spelling of the commit with https://github.com/crate-ci/typos
# If the spelling is incorrect, it will be automatically fixed.
#{ pattern = '.*', replace_command = 'typos --write-changes -' },
]
# regex for parsing and grouping commits
commit_parsers = [
{ message = "^feat", group = "<!-- 0 -->🚀 Features" },
{ message = "^fix", group = "<!-- 1 -->🐛 Bug Fixes" },
{ message = "^doc", group = "<!-- 3 -->📚 Documentation" },
{ message = "^perf", group = "<!-- 4 -->⚡ Performance" },
{ message = "^refactor", group = "<!-- 2 -->🚜 Refactor" },
{ message = "^style.*[Cc]lang-tidy auto fixes", skip = true },
{ message = "^style", group = "<!-- 5 -->🎨 Styling" },
{ message = "^test", group = "<!-- 6 -->🧪 Testing" },
{ message = "^chore\\(release\\): prepare for", skip = true },
{ message = "^chore: Commits", skip = true },
{ message = "^chore\\(deps.*\\)", skip = true },
{ message = "^chore\\(pr\\)", skip = true },
{ message = "^chore\\(pull\\)", skip = true },
{ message = "^chore|^ci", group = "<!-- 7 -->⚙️ Miscellaneous Tasks" },
{ body = ".*security", group = "<!-- 8 -->🛡️ Security" },
{ message = "^revert", group = "<!-- 9 -->◀️ Revert" },
{ message = ".*", group = "<!-- 10 -->💼 Other" },
]
# filter out the commits that are not matched by commit parsers
filter_commits = false
# sort the tags topologically
topo_order = false
# sort the commits inside sections by oldest/newest order
sort_commits = "oldest"
ignore_tags = "^.*-[b|rc].*"
[remote.github]
owner = "XRPLF"
repo = "clio"
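
Assuming `git-cliff` is installed locally, the changelog described by this configuration could be generated with something like:

```sh
# Hypothetical local run; output goes to CHANGELOG.md as set by the `output` option above
git cliff --config cliff.toml
```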

View File

@@ -17,25 +17,26 @@
*/
//==============================================================================
#include "main/Build.hpp"
#include "util/build/Build.hpp"
#include <string>
namespace Build {
static constexpr char versionString[] = "@CLIO_VERSION@";
namespace util::build {
static constexpr char versionString[] = "@CLIO_VERSION@"; // NOLINT(readability-identifier-naming)
std::string const&
getClioVersionString()
{
static std::string const value = versionString;
static std::string const value = versionString; // NOLINT(readability-identifier-naming)
return value;
}
std::string const&
getClioFullVersionString()
{
static std::string const value = "clio-" + getClioVersionString();
static std::string const value = "clio-" + getClioVersionString(); // NOLINT(readability-identifier-naming)
return value;
}
} // namespace Build
} // namespace util::build

View File

@@ -8,7 +8,7 @@ if (lint)
endif ()
message(STATUS "Using clang-tidy from CLIO_CLANG_TIDY_BIN")
else ()
find_program(_CLANG_TIDY_BIN NAMES "clang-tidy-18" "clang-tidy" REQUIRED)
find_program(_CLANG_TIDY_BIN NAMES "clang-tidy-19" "clang-tidy" REQUIRED)
endif ()
if (NOT _CLANG_TIDY_BIN)

View File

@@ -45,4 +45,4 @@ endif ()
message(STATUS "Build version: ${CLIO_VERSION}")
configure_file(${CMAKE_CURRENT_LIST_DIR}/Build.cpp.in ${CMAKE_CURRENT_LIST_DIR}/../src/main/impl/Build.cpp)
configure_file(${CMAKE_CURRENT_LIST_DIR}/Build.cpp.in ${CMAKE_CURRENT_LIST_DIR}/../src/util/build/Build.cpp)

View File

@@ -39,6 +39,34 @@ if (is_appleclang)
list(APPEND COMPILER_FLAGS -Wreorder-init-list)
endif ()
if (san)
# When building with sanitizers some compilers will actually produce extra warnings/errors. We don't want this yet, at
# least not until we have fixed all runtime issues reported by the sanitizers. Once that is done we can start removing
# some of these and trying to fix it in our codebase. We can never remove all of below because most of them are
# reported from deep inside libraries like boost or libxrpl.
#
# TODO: Address in https://github.com/XRPLF/clio/issues/1885
list(
APPEND
COMPILER_FLAGS
-Wno-error=tsan # Disables treating TSAN warnings as errors
-Wno-tsan # Disables TSAN warnings (thread-safety analysis)
-Wno-uninitialized # Disables warnings about uninitialized variables (AddressSanitizer, UndefinedBehaviorSanitizer,
# etc.)
-Wno-stringop-overflow # Disables warnings about potential string operation overflows (AddressSanitizer)
-Wno-unsafe-buffer-usage # Disables warnings about unsafe memory operations (AddressSanitizer)
-Wno-frame-larger-than # Disables warnings about stack frame size being too large (AddressSanitizer)
-Wno-unused-function # Disables warnings about unused functions (LeakSanitizer, memory-related issues)
-Wno-unused-but-set-variable # Disables warnings about unused variables (MemorySanitizer)
-Wno-thread-safety-analysis # Disables warnings related to thread safety usage (ThreadSanitizer)
-Wno-thread-safety # Disables warnings related to thread safety usage (ThreadSanitizer)
-Wno-sign-compare # Disables warnings about signed/unsigned comparison (UndefinedBehaviorSanitizer)
-Wno-nonnull # Disables warnings related to null pointer dereferencing (UndefinedBehaviorSanitizer)
-Wno-address # Disables warnings about address-related issues (UndefinedBehaviorSanitizer)
-Wno-array-bounds # Disables array bounds checks (UndefinedBehaviorSanitizer)
)
endif ()
# See https://github.com/cpp-best-practices/cppbestpractices/blob/master/02-Use_the_Tools_Available.md#gcc--clang for
# the flags description

View File

@@ -1,3 +1,11 @@
target_compile_definitions(clio_options INTERFACE BOOST_STACKTRACE_LINK)
target_compile_definitions(clio_options INTERFACE BOOST_STACKTRACE_USE_BACKTRACE)
find_package(libbacktrace REQUIRED CONFIG)
if ("${san}" STREQUAL "")
target_compile_definitions(clio_options INTERFACE BOOST_STACKTRACE_LINK)
target_compile_definitions(clio_options INTERFACE BOOST_STACKTRACE_USE_BACKTRACE)
find_package(libbacktrace REQUIRED CONFIG)
else ()
# Some sanitizers (TSAN and ASAN for sure) can't be used with libbacktrace because they have their own backtracing
# capabilities and there are conflicts. In any case, this makes sure Clio code knows that backtrace is not available.
# See relevant conan profiles for sanitizers where we disable stacktrace in Boost explicitly.
target_compile_definitions(clio_options INTERFACE CLIO_WITHOUT_STACKTRACE)
message(STATUS "Sanitizer enabled, disabling stacktrace")
endif ()

View File

@@ -19,16 +19,18 @@ class Clio(ConanFile):
'packaging': [True, False], # create distribution packages
'coverage': [True, False], # build for test coverage report; create custom target `clio_tests-ccov`
'lint': [True, False], # run clang-tidy checks during compilation
'snapshot': [True, False], # build export/import snapshot tool
}
requires = [
'boost/1.82.0',
'boost/1.83.0',
'cassandra-cpp-driver/2.17.0',
'fmt/10.1.1',
'protobuf/3.21.9',
'grpc/1.50.1',
'openssl/1.1.1u',
'xrpl/2.2.0',
'openssl/1.1.1v',
'xrpl/2.4.0',
'zlib/1.3.1',
'libbacktrace/cci.20210118'
]
@@ -43,6 +45,7 @@ class Clio(ConanFile):
'coverage': False,
'lint': False,
'docs': False,
'snapshot': False,
'xrpl/*:tests': False,
'xrpl/*:rocksdb': False,
@@ -91,6 +94,7 @@ class Clio(ConanFile):
tc.variables['docs'] = self.options.docs
tc.variables['packaging'] = self.options.packaging
tc.variables['benchmark'] = self.options.benchmark
tc.variables['snapshot'] = self.options.snapshot
tc.generate()
def build(self):

docker/ci/README.md

@@ -0,0 +1,16 @@
# CI image for XRPLF/clio
This image contains an environment to build [Clio](https://github.com/XRPLF/clio), check code and documentation.
It is used in [Clio Github Actions](https://github.com/XRPLF/clio/actions) but can also be used to compile Clio locally.
The image is based on Ubuntu 20.04 and contains:
- clang 16.0.6
- gcc 12.3
- doxygen 1.12
- gh 2.40
- ccache 4.10.2
- conan 1.62
- and some other useful tools
Conan is set up to build Clio without any additional steps. There are two preset conan profiles, `clang` and `gcc`, which use the corresponding compiler. By default, conan is set up to use `gcc`.
Sanitizer builds for `ASAN`, `TSAN` and `UBSAN` are enabled via conan profiles for each of the supported compilers. These can be selected using the following pattern (all lowercase): `[compiler].[sanitizer]` (e.g. `--profile gcc.tsan`).
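
For illustration, selecting one of these sanitizer profiles inside the image could look like the following, adapting the conan command from the build docs (exact flags may differ for your setup):

```sh
# Hypothetical: configure a Release build of the tests with the gcc ThreadSanitizer profile
conan install .. --output-folder . --build missing --settings build_type=Release \
  -o tests=True --profile gcc.tsan
```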

View File

@@ -0,0 +1,9 @@
include(clang)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=address\" linkflags=\"-fsanitize=address\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=address"
CXXFLAGS="-fsanitize=address"
LDFLAGS="-fsanitize=address"

View File

@@ -0,0 +1,9 @@
include(clang)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=thread\" linkflags=\"-fsanitize=thread\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=thread"
CXXFLAGS="-fsanitize=thread"
LDFLAGS="-fsanitize=thread"

View File

@@ -0,0 +1,9 @@
include(clang)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=undefined\" linkflags=\"-fsanitize=undefined\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=undefined"
CXXFLAGS="-fsanitize=undefined"
LDFLAGS="-fsanitize=undefined"

docker/ci/conan/gcc.asan

@@ -0,0 +1,9 @@
include(gcc)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=address\" linkflags=\"-fsanitize=address\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=address"
CXXFLAGS="-fsanitize=address"
LDFLAGS="-fsanitize=address"

docker/ci/conan/gcc.tsan

@@ -0,0 +1,9 @@
include(gcc)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=thread\" linkflags=\"-fsanitize=thread\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=thread"
CXXFLAGS="-fsanitize=thread"
LDFLAGS="-fsanitize=thread"

View File

@@ -0,0 +1,9 @@
include(gcc)
[options]
boost:extra_b2_flags="cxxflags=\"-fsanitize=undefined\" linkflags=\"-fsanitize=undefined\""
boost:without_stacktrace=True
[env]
CFLAGS="-fsanitize=undefined"
CXXFLAGS="-fsanitize=undefined"
LDFLAGS="-fsanitize=undefined"

View File

@@ -6,10 +6,10 @@ SHELL ["/bin/bash", "-c"]
USER root
WORKDIR /root
ENV CCACHE_VERSION=4.8.3 \
LLVM_TOOLS_VERSION=18 \
ENV CCACHE_VERSION=4.10.2 \
LLVM_TOOLS_VERSION=19 \
GH_VERSION=2.40.0 \
DOXYGEN_VERSION=1.10.0
DOXYGEN_VERSION=1.12.0
# Add repositories
RUN apt-get -qq update \
@@ -98,3 +98,10 @@ RUN conan profile new clang --detect \
&& conan profile update "conf.tools.build:compiler_executables={\"c\": \"/usr/bin/clang-16\", \"cpp\": \"/usr/bin/clang++-16\"}" clang
RUN echo "include(gcc)" >> .conan/profiles/default
COPY conan/gcc.asan /root/.conan/profiles
COPY conan/gcc.tsan /root/.conan/profiles
COPY conan/gcc.ubsan /root/.conan/profiles
COPY conan/clang.asan /root/.conan/profiles
COPY conan/clang.tsan /root/.conan/profiles
COPY conan/clang.ubsan /root/.conan/profiles

docker/clio/README.md

@@ -0,0 +1,23 @@
# Clio official docker image
[Clio](https://github.com/XRPLF/clio) is an XRP Ledger API server optimized for RPC calls over WebSocket or JSON-RPC.
It stores validated historical ledger and transaction data in a space-efficient format.
This image contains `clio_server` binary allowing users to run Clio easily.
## Clio configuration file
Please note that while Clio requires a configuration file, this image doesn't include any default config.
Your configuration file should be mounted under the path `/opt/clio/etc/config.json`.
The Clio repository provides an [example](https://github.com/XRPLF/clio/blob/develop/docs/examples/config/example-config.json) of the configuration file.
Config file recommendations:
- Set `log_to_console` to `false` if you want to avoid logs being written to `stdout`.
- Set `log_directory` to `/opt/clio/log` to store logs in a volume.
## Usage
The following command can be used to run Clio in Docker (assuming the server's port is `51233` in your config):
```bash
docker run -d -v <path to your config.json>:/opt/clio/etc/config.json -v <path to store logs>:/opt/clio/log -p 51233:51233 rippleci/clio
```

docker/clio/dockerfile

@@ -0,0 +1,16 @@
FROM ubuntu:22.04
COPY ./clio_server /opt/clio/bin/clio_server
RUN ln -s /opt/clio/bin/clio_server /usr/local/bin/clio_server && \
mkdir -p /opt/clio/etc/ && \
mkdir -p /opt/clio/log/ && \
groupadd -g 10001 clio && \
useradd -u 10000 -g 10001 -s /bin/bash clio && \
chown clio:clio /opt/clio/log && \
apt update && \
apt install -y libatomic1
USER clio
ENTRYPOINT ["/opt/clio/bin/clio_server"]
CMD ["--conf", "/opt/clio/etc/config.json"]

View File

@@ -1,4 +1,3 @@
version: '3.7'
services:
clio_develop:
image: rippleci/clio_ci:latest

View File

@@ -141,3 +141,60 @@ If you wish to develop against a `rippled` instance running in standalone mode t
1. Advance the `rippled` ledger to at least ledger 256.
2. Wait 10 minutes before first starting Clio against this standalone node.
## Building with a Custom `libxrpl`
Sometimes, during development, you need to build against a custom version of `libxrpl`. (For example, you may be developing compatibility for a proposed amendment that is not yet merged to the main `rippled` codebase.) To build Clio with compatibility for a custom fork or branch of `rippled`, follow these steps:
1. First, pull/clone the appropriate `rippled` fork and switch to the branch you want to build. For example, the following uses an in-development build with [XLS-33d Multi-Purpose Tokens](https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0033d-multi-purpose-tokens):
```sh
git clone https://github.com/shawnxie999/rippled/
cd rippled
git switch mpt-1.1
```
2. Export a custom package to your local Conan store using a user/channel:
```sh
conan export . my/feature
```
3. Patch your local Clio build to use the right package.
Edit `conanfile.py` (from the Clio repository root). Replace the `xrpl` requirement with the custom package version from the previous step. This must also include the current version number from your `rippled` branch. For example:
```py
# ... (excerpt from conanfile.py)
requires = [
'boost/1.82.0',
'cassandra-cpp-driver/2.17.0',
'fmt/10.1.1',
'protobuf/3.21.9',
'grpc/1.50.1',
'openssl/1.1.1u',
'xrpl/2.3.0-b1@my/feature', # Update this line
'libbacktrace/cci.20210118'
]
```
4. Build Clio as you would have before.
See [Building Clio](#building-clio) for details.
## Using `clang-tidy` for static analysis
The minimum [clang-tidy](https://clang.llvm.org/extra/clang-tidy/) version required is 19.0.
Clang-tidy can be run by CMake when building the project. To achieve this, you just need to provide the option `-o lint=True` for the `conan install` command:
```sh
conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True -o lint=True
```
By default, CMake will try to find `clang-tidy` automatically on your system.
To force Cmake to use your desired binary, set the `CLIO_CLANG_TIDY_BIN` environment variable to the path of the `clang-tidy` binary. For example:
```sh
export CLIO_CLANG_TIDY_BIN=/opt/homebrew/opt/llvm@19/bin/clang-tidy
```

docs/config-description.md

@@ -0,0 +1,592 @@
# Clio Config Description
This document provides a list of all available Clio configuration properties in detail.
> [!NOTE]
> Dot notation in configuration key names represents nested fields. For example, **database.scylladb** refers to the _scylladb_ field inside the _database_ object. If a key name includes "[]", it indicates that the nested field is an array (e.g., etl_sources.[]).
## Configuration Details
### database.type
- **Required**: True
- **Type**: string
- **Default value**: `cassandra`
- **Constraints**: The value must be one of the following: `cassandra`.
- **Description**: Specifies the type of database used for storing and retrieving data required by the Clio server. Both ScyllaDB and Cassandra can serve as backends for Clio; however, this value must be set to `cassandra`.
### database.cassandra.contact_points
- **Required**: True
- **Type**: string
- **Default value**: `localhost`
- **Constraints**: None
- **Description**: A list of IP addresses or hostnames for the initial cluster nodes (Cassandra or ScyllaDB) that the client connects to when establishing a database connection. If you're running Clio locally, set this value to `localhost` or `127.0.0.1`.
### database.cassandra.secure_connect_bundle
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: The configuration file that contains the necessary credentials and connection details for securely connecting to a Cassandra database cluster.
### database.cassandra.port
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `65535`.
- **Description**: The port number used to connect to the Cassandra database.
### database.cassandra.keyspace
- **Required**: True
- **Type**: string
- **Default value**: `clio`
- **Constraints**: None
- **Description**: The Cassandra keyspace to use for the database. If you don't provide a value, this is set to `clio` by default.
### database.cassandra.replication_factor
- **Required**: True
- **Type**: int
- **Default value**: `3`
- **Constraints**: The minimum value is `0`. The maximum value is `65535`.
- **Description**: Represents the number of replicated nodes for ScyllaDB. For more details see [Fault Tolerance Replication Factor](https://university.scylladb.com/courses/scylla-essentials-overview/lessons/high-availability/topic/fault-tolerance-replication-factor/).
### database.cassandra.table_prefix
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: An optional field to specify a prefix for the Cassandra database table names.
### database.cassandra.max_write_requests_outstanding
- **Required**: True
- **Type**: int
- **Default value**: `10000`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: Represents the maximum number of outstanding write requests. Write requests are API calls that write to the database.
### database.cassandra.max_read_requests_outstanding
- **Required**: True
- **Type**: int
- **Default value**: `100000`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: Maximum number of outstanding read requests. Read requests are API calls that read from the database.
### database.cassandra.threads
- **Required**: True
- **Type**: int
- **Default value**: The number of available CPU cores.
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: Represents the number of threads that will be used for database operations.
### database.cassandra.core_connections_per_host
- **Required**: True
- **Type**: int
- **Default value**: `1`
- **Constraints**: The minimum value is `1`. The maximum value is `65535`.
- **Description**: The number of core connections per host for the Cassandra database.
### database.cassandra.queue_size_io
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `65535`.
- **Description**: Defines the queue size of the input/output (I/O) operations in Cassandra.
### database.cassandra.write_batch_size
- **Required**: True
- **Type**: int
- **Default value**: `20`
- **Constraints**: The minimum value is `1`. The maximum value is `65535`.
- **Description**: Represents the batch size for write operations in Cassandra.
### database.cassandra.connect_timeout
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The maximum amount of time in seconds that the system waits for a database connection to be established.
### database.cassandra.request_timeout
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The maximum amount of time in seconds that the system waits for a request to be fetched from the database.
### database.cassandra.username
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: The username used for authenticating with the database.
### database.cassandra.password
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: The password used for authenticating with the database.
### database.cassandra.certfile
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: The path to the SSL/TLS certificate file used to establish a secure connection between the client and the Cassandra database.
### allow_no_etl
- **Required**: True
- **Type**: boolean
- **Default value**: `True`
- **Constraints**: None
- **Description**: If set to `True`, allows Clio to start without any ETL source.
### etl_sources.[].ip
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: The value must be a valid IP address.
- **Description**: The IP address of the ETL source.
### etl_sources.[].ws_port
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `65535`.
- **Description**: The WebSocket port of the ETL source.
### etl_sources.[].grpc_port
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `65535`.
- **Description**: The gRPC port of the ETL source.
### forwarding.cache_timeout
- **Required**: True
- **Type**: double
- **Default value**: `0`
- **Constraints**: The value must be a positive double number.
- **Description**: Specifies the timeout duration (in seconds) for the forwarding cache used in `rippled` communication. A value of `0` disables this feature.
### forwarding.request_timeout
- **Required**: True
- **Type**: double
- **Default value**: `10`
- **Constraints**: The value must be a positive double number.
- **Description**: Specifies the timeout duration (in seconds) for the forwarding request used in `rippled` communication.
### rpc.cache_timeout
- **Required**: True
- **Type**: double
- **Default value**: `0`
- **Constraints**: The value must be a positive double number.
- **Description**: Specifies the timeout duration (in seconds) for the RPC response cache. A value of `0` disables this feature.
### num_markers
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `256`.
- **Description**: Specifies the number of coroutines used to download the initial ledger.
### dos_guard.whitelist.[]
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: The list of IP addresses to whitelist for DOS protection.
### dos_guard.max_fetches
- **Required**: True
- **Type**: int
- **Default value**: `1000000`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The maximum number of fetch operations allowed by DOS guard.
### dos_guard.max_connections
- **Required**: True
- **Type**: int
- **Default value**: `20`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The maximum number of concurrent connections for a specific IP address.
### dos_guard.max_requests
- **Required**: True
- **Type**: int
- **Default value**: `20`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The maximum number of requests allowed for a specific IP address.
### dos_guard.sweep_interval
- **Required**: True
- **Type**: double
- **Default value**: `1`
- **Constraints**: The value must be a positive double number.
- **Description**: Interval in seconds at which the DOS guard sweeps (clears) its state.
### workers
- **Required**: True
- **Type**: int
- **Default value**: The number of available CPU cores.
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The number of threads used to process RPC requests.
### server.ip
- **Required**: True
- **Type**: string
- **Default value**: None
- **Constraints**: The value must be a valid IP address.
- **Description**: The IP address of the Clio HTTP server.
### server.port
- **Required**: True
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `65535`.
- **Description**: The port number of the Clio HTTP server.
### server.max_queue_size
- **Required**: True
- **Type**: int
- **Default value**: `1`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The maximum size of the server's request queue. If set to `0`, this means there is no queue size limit.
### server.local_admin
- **Required**: False
- **Type**: boolean
- **Default value**: None
- **Constraints**: None
- **Description**: Indicates if requests from `localhost` are allowed to call Clio admin-only APIs. Note that this setting cannot be enabled together with [server.admin_password](#serveradmin_password).
### server.admin_password
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: The password for Clio admin-only APIs. Note that this setting cannot be enabled together with [server.local_admin](#serverlocal_admin).
### server.processing_policy
- **Required**: True
- **Type**: string
- **Default value**: `parallel`
- **Constraints**: The value must be one of the following: `parallel`, `sequent`.
- **Description**: For the `sequent` policy, requests from a single client connection are processed one by one, with the next request read only after the previous one is processed. For the `parallel` policy, Clio will accept all requests and process them in parallel, sending a reply for each request as soon as it is ready.
### server.parallel_requests_limit
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `65535`.
- **Description**: This is an optional parameter, used only if `server.processing_policy` is `parallel`. It limits the number of requests processed in parallel for a single client connection. If not specified, no limit is enforced.
### server.ws_max_sending_queue_size
- **Required**: True
- **Type**: int
- **Default value**: `1500`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: Maximum queue size for sending subscription data to clients. This queue buffers data when a client is slow to receive it, ensuring delivery once the client is ready.
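A hedged sketch of a `server` section; the IP, port and limits are illustrative, and note that `local_admin` cannot be combined with `admin_password`:
```json
"server": {
    "ip": "0.0.0.0",
    "port": 8080,
    "local_admin": true,
    "processing_policy": "parallel",
    "parallel_requests_limit": 10,
    "ws_max_sending_queue_size": 1500
}
```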
### prometheus.enabled
- **Required**: True
- **Type**: boolean
- **Default value**: `False`
- **Constraints**: None
- **Description**: Enables or disables Prometheus metrics.
### prometheus.compress_reply
- **Required**: True
- **Type**: boolean
- **Default value**: `False`
- **Constraints**: None
- **Description**: Enables or disables compression of Prometheus responses.
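For example, to keep Prometheus metrics enabled but receive uncompressed, human-readable replies, the following fragment can be used:
```json
"prometheus": {
    "enabled": true,
    "compress_reply": false
}
```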
### io_threads
- **Required**: True
- **Type**: int
- **Default value**: `2`
- **Constraints**: The minimum value is `1`. The maximum value is `65535`.
- **Description**: The number of input/output (I/O) threads. The value cannot be less than `1`.
### subscription_workers
- **Required**: True
- **Type**: int
- **Default value**: `1`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The number of worker threads or processes that are responsible for managing and processing subscription-based tasks from `rippled`.
### graceful_period
- **Required**: True
- **Type**: double
- **Default value**: `10`
- **Constraints**: The value must be a positive double number.
- **Description**: The number of seconds the server waits to shut down gracefully. If Clio has not shut down gracefully after this period, it is killed instead.
### cache.num_diffs
- **Required**: True
- **Type**: int
- **Default value**: `32`
- **Constraints**: The minimum value is `1`. The maximum value is `65535`.
- **Description**: The number of cursors generated equals the number of changed objects (not counting deleted ones) in the latest `cache.num_diffs` ledgers. Cursors are workers that concurrently load the ledger cache starting from the marker positions. For more information, please read [README.md](../src/etl/README.md).
### cache.num_markers
- **Required**: True
- **Type**: int
- **Default value**: `48`
- **Constraints**: The minimum value is `1`. The maximum value is `65535`.
- **Description**: Specifies how many markers are placed randomly within the cache. These markers define the positions on the ledger that will be loaded concurrently by the workers. The higher the number, the more places within the cache we potentially cover.
### cache.num_cursors_from_diff
- **Required**: True
- **Type**: int
- **Default value**: `0`
- **Constraints**: The minimum value is `0`. The maximum value is `65535`.
- **Description**: Generates `cache.num_cursors_from_diff` cursors by looking at the number of changed objects in the most recent ledger. If the current ledger does not contain enough changed objects, previous ledgers are read until `cache.num_cursors_from_diff` cursors have been collected. If set to `0`, the system defaults to generating cursors based on `cache.num_diffs`.
### cache.num_cursors_from_account
- **Required**: True
- **Type**: int
- **Default value**: `0`
- **Constraints**: The minimum value is `0`. The maximum value is `65535`.
- **Description**: Generates `cache.num_cursors_from_account` cursors by reading accounts from the `account_tx` table. If set to `0`, the system defaults to generating cursors based on `cache.num_diffs`.
### cache.page_fetch_size
- **Required**: True
- **Type**: int
- **Default value**: `512`
- **Constraints**: The minimum value is `1`. The maximum value is `65535`.
- **Description**: The number of ledger objects to fetch concurrently per marker.
### cache.load
- **Required**: True
- **Type**: string
- **Default value**: `async`
- **Constraints**: The value must be one of the following: `sync`, `async`, `none`.
- **Description**: The strategy used for Cache loading.
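Assuming the nesting mirrors the key names above, a `cache` section populated with the documented defaults would look roughly like this:
```json
"cache": {
    "num_diffs": 32,
    "num_markers": 48,
    "num_cursors_from_diff": 0,
    "num_cursors_from_account": 0,
    "page_fetch_size": 512,
    "load": "async"
}
```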
### log_channels.[].channel
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: The value must be one of the following: `General`, `WebServer`, `Backend`, `RPC`, `ETL`, `Subscriptions`, `Performance`, `Migration`.
- **Description**: The name of the log channel.
### log_channels.[].log_level
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: The value must be one of the following: `trace`, `debug`, `info`, `warning`, `error`, `fatal`, `count`.
- **Description**: The log level for the specific log channel.
### log_level
- **Required**: True
- **Type**: string
- **Default value**: `info`
- **Constraints**: The value must be one of the following: `trace`, `debug`, `info`, `warning`, `error`, `fatal`, `count`.
- **Description**: The general logging level of Clio. This level is applied to all log channels that do not have an explicitly defined logging level.
### log_format
- **Required**: True
- **Type**: string
- **Default value**: `%TimeStamp% (%SourceLocation%) [%ThreadID%] %Channel%:%Severity% %Message%`
- **Constraints**: None
- **Description**: The format string for log messages. The format is described here: https://www.boost.org/doc/libs/1_83_0/libs/log/doc/html/log/tutorial/formatters.html.
### log_to_console
- **Required**: True
- **Type**: boolean
- **Default value**: `True`
- **Constraints**: None
- **Description**: Enables or disables logging to the console.
### log_directory
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: The directory path for the log files.
### log_rotation_size
- **Required**: True
- **Type**: int
- **Default value**: `2048`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The log rotation size in megabytes. When a log file reaches this size, a new log file is started.
### log_directory_max_size
- **Required**: True
- **Type**: int
- **Default value**: `51200`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The maximum size of the log directory in megabytes.
### log_rotation_hour_interval
- **Required**: True
- **Type**: int
- **Default value**: `12`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The interval (in hours) for log rotation. When the current log file has been in use for this many hours, a new log file is started.
### log_tag_style
- **Required**: True
- **Type**: string
- **Default value**: `none`
- **Constraints**: The value must be one of the following: `int`, `uint`, `null`, `none`, `uuid`.
- **Description**: Log tags are unique identifiers for log messages. `uint`/`int` tags start from 0 and increment, which is fast. In contrast, `uuid` generates a random unique identifier, which adds overhead.
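As a sketch, the logging options can be combined as follows; the values are illustrative and the structure is inferred from the key names above:
```json
"log_channels": [
    { "channel": "Backend", "log_level": "debug" }
],
"log_level": "info",
"log_to_console": true,
"log_directory": "./clio_log",
"log_rotation_size": 2048,
"log_directory_max_size": 51200,
"log_rotation_hour_interval": 12,
"log_tag_style": "uint"
```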
### extractor_threads
- **Required**: True
- **Type**: int
- **Default value**: `1`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The number of threads used to extract data from the ETL source.
### read_only
- **Required**: True
- **Type**: boolean
- **Default value**: `True`
- **Constraints**: None
- **Description**: Indicates whether the server runs in read-only mode and therefore does not write data to the database.
### start_sequence
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: If specified, the ledger index from which Clio starts writing to the database.
### finish_sequence
- **Required**: False
- **Type**: int
- **Default value**: None
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: If specified, the last ledger index that Clio writes to the database.
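These ETL-related keys live at the root of the config. A sketch with placeholder sequence numbers:
```json
"extractor_threads": 8,
"read_only": false,
"start_sequence": 100000,
"finish_sequence": 200000
```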
### ssl_cert_file
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: The path to the SSL certificate file.
### ssl_key_file
- **Required**: False
- **Type**: string
- **Default value**: None
- **Constraints**: None
- **Description**: The path to the SSL key file.
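If TLS is needed, both files are configured together; the paths below are placeholders only:
```json
"ssl_cert_file": "/etc/clio/server.crt",
"ssl_key_file": "/etc/clio/server.key"
```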
### api_version.default
- **Required**: True
- **Type**: int
- **Default value**: `1`
- **Constraints**: The minimum value is `1`. The maximum value is `3`.
- **Description**: The default API version that the Clio server will run on.
### api_version.min
- **Required**: True
- **Type**: int
- **Default value**: `1`
- **Constraints**: The minimum value is `1`. The maximum value is `3`.
- **Description**: The minimum API version clients are allowed to use.
### api_version.max
- **Required**: True
- **Type**: int
- **Default value**: `3`
- **Constraints**: The minimum value is `1`. The maximum value is `3`.
- **Description**: The maximum API version clients are allowed to use.
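Assuming the nesting mirrors the key names, an `api_version` block with the documented defaults looks like:
```json
"api_version": {
    "default": 1,
    "min": 1,
    "max": 3
}
```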
### migration.full_scan_threads
- **Required**: True
- **Type**: int
- **Default value**: `2`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The number of threads used to scan the table.
### migration.full_scan_jobs
- **Required**: True
- **Type**: int
- **Default value**: `4`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The number of coroutines used to scan the table.
### migration.cursors_per_job
- **Required**: True
- **Type**: int
- **Default value**: `100`
- **Constraints**: The minimum value is `1`. The maximum value is `4294967295`.
- **Description**: The number of cursors each job will scan.
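Likewise, a `migration` block with the documented defaults can be sketched as:
```json
"migration": {
    "full_scan_threads": 2,
    "full_scan_jobs": 4,
    "cursors_per_job": 100
}
```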

View File

@@ -18,7 +18,8 @@ Clio needs access to a `rippled` server in order to work. The following configur
- A port to handle gRPC requests, with the IP(s) of Clio specified in the `secure_gateway` entry
The example configs of [rippled](https://github.com/XRPLF/rippled/blob/develop/cfg/rippled-example.cfg) and [Clio](../docs/examples/config/example-config.json) are set up in a way that minimal changes are required.
The example configs of [rippled](https://github.com/XRPLF/rippled/blob/develop/cfg/rippled-example.cfg) and [Clio](../docs/examples/config/example-config.json) are set up in a way that minimal changes are required. However, if you want to view all configuration keys available in Clio, see [config-description.md](./config-description.md).
When running locally, the only change needed is to uncomment the `port_grpc` section of the `rippled` config.
If you're running Clio and `rippled` on separate machines, in addition to uncommenting the `port_grpc` section, a few other steps must be taken:

View File

@@ -1,43 +0,0 @@
/*
* This is an example configuration file. Please do not use without modifying to suit your needs.
*/
{
"database": {
"type": "cassandra",
"cassandra": {
// This option can be used to setup a secure connect bundle connection
"secure_connect_bundle": "[path/to/zip. ignore if using contact_points]",
// The following options are used only if using contact_points
"contact_points": "[ip. ignore if using secure_connect_bundle]",
"port": "[port. ignore if using_secure_connect_bundle]",
// Authentication settings
"username": "[username, if any]",
"password": "[password, if any]",
// Other common settings
"keyspace": "clio",
"max_write_requests_outstanding": 25000,
"max_read_requests_outstanding": 30000,
"threads": 8
}
},
"etl_sources": [
{
"ip": "[rippled ip]",
"ws_port": "6006",
"grpc_port": "50051"
}
],
"dos_guard": {
"whitelist": [
"127.0.0.1"
]
},
"server": {
"ip": "0.0.0.0",
"port": 8080
},
"log_level": "debug",
"log_file": "./clio.log",
"extractor_threads": 8,
"read_only": false
}

View File

@@ -31,7 +31,7 @@
"etl_sources": [
{
"ip": "127.0.0.1",
"ws_port": "6006",
"ws_port": "6005",
"grpc_port": "50051"
}
],
@@ -39,6 +39,9 @@
"cache_timeout": 0.250, // in seconds, could be 0, which means no cache
"request_timeout": 10.0 // time for Clio to wait for rippled to reply on a forwarded request (default is 10 seconds)
},
"rpc": {
"cache_timeout": 0.5 // in seconds, could be 0, which means no cache for rpc
},
"dos_guard": {
// Comma-separated list of IPs to exclude from rate limiting
"whitelist": [
@@ -67,7 +70,15 @@
"admin_password": "xrp",
// If local_admin is true, Clio will consider requests coming from 127.0.0.1 as admin requests
// It's true by default unless admin_password is set. 'local_admin': true and 'admin_password' cannot be set at the same time
"local_admin": false
"local_admin": false,
"processing_policy": "parallel", // Could be "sequent" or "parallel".
// For sequent policy request from one client connection will be processed one by one and the next one will not be read before
// the previous one is processed. For parallel policy Clio will take all requests and process them in parallel and
// send a reply for each request whenever it is ready.
"parallel_requests_limit": 10, // Optional parameter, used only if "processing_strategy" is "parallel". It limits the number of requests for one client connection processed in parallel. Infinite if not specified.
// Max number of responses to queue up before they are sent successfully. If a client's waiting queue is too long, the server will close the connection.
"ws_max_sending_queue_size": 1500,
"__ng_web_server": false // Use ng web server. This is a temporary setting which will be deleted after switching to ng web server
},
// Time in seconds for graceful shutdown. Defaults to 10 seconds. Not fully implemented yet.
"graceful_period": 10.0,

View File

@@ -1,5 +1,9 @@
# Example of clio monitoring infrastructure
> [!WARNING]
> This is only an example of a Grafana dashboard for Clio. It was created for demonstration purposes only and may contain errors.
> The Clio team does not recommend relying on data from this dashboard or using it to monitor your Clio instances.
This directory contains an example of Docker-based infrastructure to collect and visualise metrics from Clio.
The structure of the directory:

View File

@@ -20,7 +20,6 @@
"graphTooltip": 0,
"id": 1,
"links": [],
"liveNow": false,
"panels": [
{
"datasource": {
@@ -79,6 +78,7 @@
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
@@ -90,7 +90,7 @@
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "10.4.0",
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -159,6 +159,7 @@
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
@@ -170,7 +171,7 @@
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "10.4.0",
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -243,6 +244,7 @@
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
@@ -254,7 +256,7 @@
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "10.4.0",
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -327,6 +329,7 @@
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
@@ -338,7 +341,7 @@
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "10.4.0",
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -373,6 +376,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -435,6 +439,7 @@
"sort": "none"
}
},
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -491,6 +496,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -552,6 +558,7 @@
"sort": "none"
}
},
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -586,6 +593,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -647,6 +655,7 @@
"sort": "none"
}
},
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -681,6 +690,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -742,6 +752,7 @@
"sort": "none"
}
},
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -776,6 +787,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -837,6 +849,7 @@
"sort": "none"
}
},
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -872,6 +885,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -934,6 +948,7 @@
"sort": "none"
}
},
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -941,7 +956,7 @@
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "rpc_method_duration_us{job=\"clio\"}",
"expr": "sum by (method) (increase(rpc_method_duration_us[$__interval]))\n / \n sum by (method,) (increase(rpc_method_total_number{status=\"finished\"}[$__interval]))",
"instant": false,
"legendFormat": "{{method}}",
"range": true,
@@ -968,6 +983,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -1029,6 +1045,7 @@
"sort": "none"
}
},
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -1063,6 +1080,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -1124,6 +1142,7 @@
"sort": "none"
}
},
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -1158,6 +1177,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
@@ -1223,7 +1243,7 @@
"sort": "none"
}
},
"pluginVersion": "10.2.0",
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -1296,6 +1316,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -1357,6 +1378,7 @@
"sort": "none"
}
},
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -1404,6 +1426,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -1465,6 +1488,7 @@
"sort": "none"
}
},
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -1510,6 +1534,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
@@ -1572,6 +1597,7 @@
"sort": "none"
}
},
"pluginVersion": "11.4.0",
"targets": [
{
"datasource": {
@@ -1590,8 +1616,9 @@
"type": "timeseries"
}
],
"preload": false,
"refresh": "5s",
"schemaVersion": 39,
"schemaVersion": 40,
"tags": [],
"templating": {
"list": []

View File

@@ -1,30 +0,0 @@
# Metrics and static analysis
## Prometheus metrics collection
Clio natively supports [Prometheus](https://prometheus.io/) metrics collection. It accepts Prometheus requests on the port configured in the `server` section of the config.
Prometheus metrics are enabled by default, and replies to `/metrics` are compressed. To disable compression, and have human readable metrics, add `"prometheus": { "enabled": true, "compress_reply": false }` to Clio's config.
To completely disable Prometheus metrics add `"prometheus": { "enabled": false }` to Clio's config.
It is important to know that Clio responds to Prometheus requests only if they are admin requests. If you are using the admin password feature, the same password should be provided in the Authorization header of Prometheus requests.
You can find an example docker-compose file, with Prometheus and Grafana configs, in [examples/infrastructure](../docs/examples/infrastructure/).
## Using `clang-tidy` for static analysis
The minimum [clang-tidy](https://clang.llvm.org/extra/clang-tidy/) version required is 17.0.
Clang-tidy can be run by Cmake when building the project. To achieve this, you just need to provide the option `-o lint=True` for the `conan install` command:
```sh
conan install .. --output-folder . --build missing --settings build_type=Release -o tests=True -o lint=True
```
By default Cmake will try to find `clang-tidy` automatically in your system.
To force Cmake to use your desired binary, set the `CLIO_CLANG_TIDY_BIN` environment variable to the path of the `clang-tidy` binary. For example:
```sh
export CLIO_CLANG_TIDY_BIN=/opt/homebrew/opt/llvm@17/bin/clang-tidy
```

View File

@@ -80,3 +80,15 @@ Clio will fallback to hardcoded defaults when these values are not specified in
> [!TIP]
> See the [example-config.json](../docs/examples/config/example-config.json) for more details.
## Prometheus metrics collection
Clio natively supports [Prometheus](https://prometheus.io/) metrics collection. It accepts Prometheus requests on the port configured in the `server` section of the config.
Prometheus metrics are enabled by default, and replies to `/metrics` are compressed. To disable compression, and have human readable metrics, add `"prometheus": { "enabled": true, "compress_reply": false }` to Clio's config.
To completely disable Prometheus metrics add `"prometheus": { "enabled": false }` to Clio's config.
It is important to know that Clio responds to Prometheus requests only if they are admin requests. If you are using the admin password feature, the same password should be provided in the Authorization header of Prometheus requests.
You can find an example docker-compose file, with Prometheus and Grafana configs, in [examples/infrastructure](../docs/examples/infrastructure/).

View File

@@ -1,7 +1,11 @@
add_subdirectory(util)
add_subdirectory(data)
add_subdirectory(cluster)
add_subdirectory(etl)
add_subdirectory(etlng)
add_subdirectory(feed)
add_subdirectory(rpc)
add_subdirectory(web)
add_subdirectory(migration)
add_subdirectory(app)
add_subdirectory(main)

13
src/app/CMakeLists.txt Normal file
View File

@@ -0,0 +1,13 @@
add_library(clio_app)
target_sources(clio_app PRIVATE CliArgs.cpp ClioApplication.cpp Stopper.cpp WebHandlers.cpp)
target_link_libraries(
clio_app
PUBLIC clio_cluster
clio_etl
clio_etlng
clio_feed
clio_web
clio_rpc
clio_migration
)

100
src/app/CliArgs.cpp Normal file
View File

@@ -0,0 +1,100 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "app/CliArgs.hpp"
#include "migration/MigrationApplication.hpp"
#include "util/build/Build.hpp"
#include "util/newconfig/ConfigDescription.hpp"
#include <boost/program_options/options_description.hpp>
#include <boost/program_options/parsers.hpp>
#include <boost/program_options/positional_options.hpp>
#include <boost/program_options/value_semantic.hpp>
#include <boost/program_options/variables_map.hpp>
#include <cstdlib>
#include <filesystem>
#include <iostream>
#include <string>
#include <utility>
namespace app {
CliArgs::Action
CliArgs::parse(int argc, char const* argv[])
{
namespace po = boost::program_options;
// clang-format off
po::options_description description("Options");
description.add_options()
("help,h", "Print help message and exit")
("version,v", "Print version and exit")
("conf,c", po::value<std::string>()->default_value(kDEFAULT_CONFIG_PATH), "Configuration file")
("ng-web-server,w", "Use ng-web-server")
("migrate", po::value<std::string>(), "Start migration helper")
("verify", "Checks the validity of config values")
("config-description,d", po::value<std::string>(), "Generate config description markdown file")
;
// clang-format on
po::positional_options_description positional;
positional.add("conf", 1);
po::variables_map parsed;
po::store(po::command_line_parser(argc, argv).options(description).positional(positional).run(), parsed);
po::notify(parsed);
if (parsed.count("help") != 0u) {
std::cout << "Clio server " << util::build::getClioFullVersionString() << "\n\n" << description;
return Action{Action::Exit{EXIT_SUCCESS}};
}
if (parsed.count("version") != 0u) {
std::cout << util::build::getClioFullVersionString() << '\n';
return Action{Action::Exit{EXIT_SUCCESS}};
}
if (parsed.count("config-description") != 0u) {
std::filesystem::path const filePath = parsed["config-description"].as<std::string>();
auto const res = util::config::ClioConfigDescription::generateConfigDescriptionToFile(filePath);
if (res.has_value())
return Action{Action::Exit{EXIT_SUCCESS}};
std::cerr << res.error().error << std::endl;
return Action{Action::Exit{EXIT_FAILURE}};
}
auto configPath = parsed["conf"].as<std::string>();
if (parsed.count("migrate") != 0u) {
auto const opt = parsed["migrate"].as<std::string>();
if (opt == "status")
return Action{Action::Migrate{.configPath = std::move(configPath), .subCmd = MigrateSubCmd::status()}};
return Action{Action::Migrate{.configPath = std::move(configPath), .subCmd = MigrateSubCmd::migration(opt)}};
}
if (parsed.count("verify") != 0u)
return Action{Action::VerifyConfig{.configPath = std::move(configPath)}};
return Action{Action::Run{.configPath = std::move(configPath), .useNgWebServer = parsed.count("ng-web-server") != 0}
};
}
} // namespace app

108
src/app/CliArgs.hpp Normal file
View File

@@ -0,0 +1,108 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "migration/MigrationApplication.hpp"
#include "util/OverloadSet.hpp"
#include <string>
#include <variant>
namespace app {
/**
* @brief Parsed command line arguments representation.
*/
class CliArgs {
public:
/**
* @brief Default configuration path.
*/
static constexpr char kDEFAULT_CONFIG_PATH[] = "/etc/opt/clio/config.json";
/**
* @brief An action parsed from the command line.
*/
class Action {
public:
/** @brief Run action. */
struct Run {
std::string configPath; ///< Configuration file path.
bool useNgWebServer; ///< Whether to use a ng web server
};
/** @brief Exit action. */
struct Exit {
int exitCode; ///< Exit code.
};
/** @brief Migration action. */
struct Migrate {
std::string configPath;
MigrateSubCmd subCmd;
};
/** @brief Verify Config action. */
struct VerifyConfig {
std::string configPath;
};
/**
* @brief Construct an action from a Run.
*
* @param action Run action.
*/
template <typename ActionType>
requires std::is_same_v<ActionType, Run> or std::is_same_v<ActionType, Exit> or
std::is_same_v<ActionType, Migrate> or std::is_same_v<ActionType, VerifyConfig>
explicit Action(ActionType&& action) : action_(std::forward<ActionType>(action))
{
}
/**
* @brief Apply a function to the action.
*
* @tparam Processors Action processors types. Must be callable with the action type and return int.
* @param processors Action processors.
* @return Exit code.
*/
template <typename... Processors>
int
apply(Processors&&... processors) const
{
return std::visit(util::OverloadSet{std::forward<Processors>(processors)...}, action_);
}
private:
std::variant<Run, Exit, Migrate, VerifyConfig> action_;
};
/**
* @brief Parse command line arguments.
*
* @param argc Number of arguments.
* @param argv Array of arguments.
* @return Parsed command line arguments.
*/
static Action
parse(int argc, char const* argv[]);
};
} // namespace app

214
src/app/ClioApplication.cpp Normal file
View File

@@ -0,0 +1,214 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "app/ClioApplication.hpp"
#include "app/Stopper.hpp"
#include "app/WebHandlers.hpp"
#include "cluster/ClusterCommunicationService.hpp"
#include "data/AmendmentCenter.hpp"
#include "data/BackendFactory.hpp"
#include "data/LedgerCache.hpp"
#include "etl/ETLService.hpp"
#include "etl/LoadBalancer.hpp"
#include "etl/NetworkValidatedLedgers.hpp"
#include "etlng/LoadBalancer.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "feed/SubscriptionManager.hpp"
#include "migration/MigrationInspectorFactory.hpp"
#include "rpc/Counters.hpp"
#include "rpc/RPCEngine.hpp"
#include "rpc/WorkQueue.hpp"
#include "rpc/common/impl/HandlerProvider.hpp"
#include "util/build/Build.hpp"
#include "util/log/Logger.hpp"
#include "util/newconfig/ConfigDefinition.hpp"
#include "util/prometheus/Prometheus.hpp"
#include "web/AdminVerificationStrategy.hpp"
#include "web/RPCServerHandler.hpp"
#include "web/Server.hpp"
#include "web/dosguard/DOSGuard.hpp"
#include "web/dosguard/IntervalSweepHandler.hpp"
#include "web/dosguard/WhitelistHandler.hpp"
#include "web/ng/RPCServerHandler.hpp"
#include "web/ng/Server.hpp"
#include <boost/asio/io_context.hpp>
#include <cstdint>
#include <cstdlib>
#include <memory>
#include <optional>
#include <thread>
#include <utility>
#include <vector>
namespace app {
namespace {
/**
* @brief Start context threads
*
* @param ioc Context
* @param numThreads Number of worker threads to start
*/
void
start(boost::asio::io_context& ioc, std::uint32_t numThreads)
{
std::vector<std::thread> v;
v.reserve(numThreads - 1);
for (auto i = numThreads - 1; i > 0; --i)
v.emplace_back([&ioc] { ioc.run(); });
ioc.run();
for (auto& t : v)
t.join();
}
} // namespace
ClioApplication::ClioApplication(util::config::ClioConfigDefinition const& config)
: config_(config), signalsHandler_{config_}
{
LOG(util::LogService::info()) << "Clio version: " << util::build::getClioFullVersionString();
PrometheusService::init(config);
signalsHandler_.subscribeToStop([this]() { appStopper_.stop(); });
}
int
ClioApplication::run(bool const useNgWebServer)
{
auto const threads = config_.get<uint16_t>("io_threads");
LOG(util::LogService::info()) << "Number of io threads = " << threads;
// IO context to handle all incoming requests, as well as other things.
// This is not the only io context in the application.
boost::asio::io_context ioc{threads};
// Rate limiter, to prevent abuse
auto whitelistHandler = web::dosguard::WhitelistHandler{config_};
auto dosGuard = web::dosguard::DOSGuard{config_, whitelistHandler};
auto sweepHandler = web::dosguard::IntervalSweepHandler{config_, ioc, dosGuard};
auto cache = data::LedgerCache{};
// Interface to the database
auto backend = data::makeBackend(config_, cache);
cluster::ClusterCommunicationService clusterCommunicationService{backend};
clusterCommunicationService.run();
auto const amendmentCenter = std::make_shared<data::AmendmentCenter const>(backend);
{
auto const migrationInspector = migration::makeMigrationInspector(config_, backend);
// Check if any migration is blocking Clio server starting.
if (migrationInspector->isBlockingClio() and backend->hardFetchLedgerRangeNoThrow()) {
LOG(util::LogService::error())
<< "Existing Migration is blocking Clio, Please complete the database migration first.";
return EXIT_FAILURE;
}
}
// Manages clients subscribed to streams
auto subscriptions = feed::SubscriptionManager::makeSubscriptionManager(config_, backend, amendmentCenter);
// Tracks which ledgers have been validated by the network
auto ledgers = etl::NetworkValidatedLedgers::makeValidatedLedgers();
// Handles the connection to one or more rippled nodes.
// ETL uses the balancer to extract data.
// The server uses the balancer to forward RPCs to a rippled node.
// The balancer itself publishes to streams (transactions_proposed and accounts_proposed)
auto balancer = [&] -> std::shared_ptr<etlng::LoadBalancerInterface> {
if (config_.get<bool>("__ng_etl"))
return etlng::LoadBalancer::makeLoadBalancer(config_, ioc, backend, subscriptions, ledgers);
return etl::LoadBalancer::makeLoadBalancer(config_, ioc, backend, subscriptions, ledgers);
}();
// ETL is responsible for writing and publishing to streams. In read-only mode, ETL only publishes
auto etl = etl::ETLService::makeETLService(config_, ioc, backend, subscriptions, balancer, ledgers);
auto workQueue = rpc::WorkQueue::makeWorkQueue(config_);
auto counters = rpc::Counters::makeCounters(workQueue);
auto const handlerProvider = std::make_shared<rpc::impl::ProductionHandlerProvider const>(
config_, backend, subscriptions, balancer, etl, amendmentCenter, counters
);
using RPCEngineType = rpc::RPCEngine<rpc::Counters>;
auto const rpcEngine =
RPCEngineType::makeRPCEngine(config_, backend, balancer, dosGuard, workQueue, counters, handlerProvider);
if (useNgWebServer or config_.get<bool>("server.__ng_web_server")) {
web::ng::RPCServerHandler<RPCEngineType> handler{config_, backend, rpcEngine, etl};
auto expectedAdminVerifier = web::makeAdminVerificationStrategy(config_);
if (not expectedAdminVerifier.has_value()) {
LOG(util::LogService::error()) << "Error creating admin verifier: " << expectedAdminVerifier.error();
return EXIT_FAILURE;
}
auto const adminVerifier = std::move(expectedAdminVerifier).value();
auto httpServer = web::ng::makeServer(config_, OnConnectCheck{dosGuard}, DisconnectHook{dosGuard}, ioc);
if (not httpServer.has_value()) {
LOG(util::LogService::error()) << "Error creating web server: " << httpServer.error();
return EXIT_FAILURE;
}
httpServer->onGet("/metrics", MetricsHandler{adminVerifier});
httpServer->onGet("/health", HealthCheckHandler{});
auto requestHandler = RequestHandler{adminVerifier, handler, dosGuard};
httpServer->onPost("/", requestHandler);
httpServer->onWs(std::move(requestHandler));
auto const maybeError = httpServer->run();
if (maybeError.has_value()) {
LOG(util::LogService::error()) << "Error starting web server: " << *maybeError;
return EXIT_FAILURE;
}
appStopper_.setOnStop(
Stopper::makeOnStopCallback(httpServer.value(), *balancer, *etl, *subscriptions, *backend, ioc)
);
// Blocks until stopped.
// When stopped, shared_ptrs fall out of scope
// Calls destructors on all resources, and destructs in order
start(ioc, threads);
return EXIT_SUCCESS;
}
// Init the web server
auto handler = std::make_shared<web::RPCServerHandler<RPCEngineType>>(config_, backend, rpcEngine, etl);
auto const httpServer = web::makeHttpServer(config_, ioc, dosGuard, handler);
// Blocks until stopped.
// When stopped, shared_ptrs fall out of scope
// Calls destructors on all resources, and destructs in order
start(ioc, threads);
return EXIT_SUCCESS;
}
} // namespace app

View File

@@ -0,0 +1,55 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "app/Stopper.hpp"
#include "util/SignalsHandler.hpp"
#include "util/newconfig/ConfigDefinition.hpp"
namespace app {
/**
* @brief The main application class
*/
class ClioApplication {
util::config::ClioConfigDefinition const& config_;
util::SignalsHandler signalsHandler_;
Stopper appStopper_;
public:
/**
* @brief Construct a new ClioApplication object
*
* @param config The configuration of the application
*/
ClioApplication(util::config::ClioConfigDefinition const& config);
/**
* @brief Run the application
*
* @param useNgWebServer Whether to use the new web server
*
* @return exit code
*/
int
run(bool useNgWebServer);
};
} // namespace app

52
src/app/Stopper.cpp Normal file
View File

@@ -0,0 +1,52 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "app/Stopper.hpp"
#include <boost/asio/spawn.hpp>
#include <functional>
#include <thread>
#include <utility>
namespace app {
Stopper::~Stopper()
{
if (worker_.joinable())
worker_.join();
}
void
Stopper::setOnStop(std::function<void(boost::asio::yield_context)> cb)
{
boost::asio::spawn(ctx_, std::move(cb));
}
void
Stopper::stop()
{
// Do nothing if worker_ is already running
if (worker_.joinable())
return;
worker_ = std::thread{[this]() { ctx_.run(); }};
}
} // namespace app

115
src/app/Stopper.hpp Normal file
View File

@@ -0,0 +1,115 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/BackendInterface.hpp"
#include "etlng/ETLServiceInterface.hpp"
#include "etlng/LoadBalancerInterface.hpp"
#include "feed/SubscriptionManagerInterface.hpp"
#include "util/CoroutineGroup.hpp"
#include "util/log/Logger.hpp"
#include "web/ng/Server.hpp"
#include <boost/asio/executor_work_guard.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/spawn.hpp>
#include <functional>
#include <thread>
namespace app {
/**
* @brief Application stopper class. On stop it will create a new thread to run all the shutdown tasks.
*/
class Stopper {
boost::asio::io_context ctx_;
std::thread worker_;
public:
/**
* @brief Destroy the Stopper object
*/
~Stopper();
/**
* @brief Set the callback to be called when the application is stopped.
*
* @param cb The callback to be called on application stop.
*/
void
setOnStop(std::function<void(boost::asio::yield_context)> cb);
/**
* @brief Stop the application and run the shutdown tasks.
*/
void
stop();
/**
* @brief Create a callback to be called on application stop.
*
* @param server The server to stop.
* @param balancer The load balancer to stop.
* @param etl The ETL service to stop.
* @param subscriptions The subscription manager to stop.
* @param backend The backend to stop.
* @param ioc The io_context to stop.
* @return The callback to be called on application stop.
*/
template <web::ng::SomeServer ServerType>
static std::function<void(boost::asio::yield_context)>
makeOnStopCallback(
ServerType& server,
etlng::LoadBalancerInterface& balancer,
etlng::ETLServiceInterface& etl,
feed::SubscriptionManagerInterface& subscriptions,
data::BackendInterface& backend,
boost::asio::io_context& ioc
)
{
return [&](boost::asio::yield_context yield) {
util::CoroutineGroup coroutineGroup{yield};
coroutineGroup.spawn(yield, [&server](auto innerYield) {
server.stop(innerYield);
LOG(util::LogService::info()) << "Server stopped";
});
coroutineGroup.spawn(yield, [&balancer](auto innerYield) {
balancer.stop(innerYield);
LOG(util::LogService::info()) << "LoadBalancer stopped";
});
coroutineGroup.asyncWait(yield);
etl.stop();
LOG(util::LogService::info()) << "ETL stopped";
subscriptions.stop();
LOG(util::LogService::info()) << "SubscriptionManager stopped";
backend.waitForWritesToFinish();
LOG(util::LogService::info()) << "Backend writes finished";
ioc.stop();
LOG(util::LogService::info()) << "io_context stopped";
};
}
};
} // namespace app

58
src/app/VerifyConfig.hpp Normal file
View File

@@ -0,0 +1,58 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "util/newconfig/ConfigDefinition.hpp"
#include "util/newconfig/ConfigFileJson.hpp"
#include <cstdlib>
#include <iostream>
#include <string_view>
namespace app {
/**
* @brief Verifies user's config values are correct
*
* @param configPath The path to config
* @return true if config values are all correct, false otherwise
*/
inline bool
parseConfig(std::string_view configPath)
{
using namespace util::config;
auto const json = ConfigFileJson::makeConfigFileJson(configPath);
if (!json.has_value()) {
std::cerr << "Error parsing json from config: " << configPath << "\n" << json.error().error << std::endl;
return false;
}
auto const errors = gClioConfig.parse(json.value());
if (errors.has_value()) {
for (auto const& err : errors.value()) {
std::cerr << "Issues found in provided config '" << configPath << "':\n";
std::cerr << err.error << std::endl;
}
return false;
}
return true;
}
} // namespace app

111
src/app/WebHandlers.cpp Normal file
View File

@@ -0,0 +1,111 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "app/WebHandlers.hpp"
#include "util/Assert.hpp"
#include "util/prometheus/Http.hpp"
#include "web/AdminVerificationStrategy.hpp"
#include "web/SubscriptionContextInterface.hpp"
#include "web/dosguard/DOSGuardInterface.hpp"
#include "web/ng/Connection.hpp"
#include "web/ng/Request.hpp"
#include "web/ng/Response.hpp"
#include <boost/asio/spawn.hpp>
#include <boost/beast/http/status.hpp>
#include <memory>
#include <optional>
#include <utility>
namespace app {
OnConnectCheck::OnConnectCheck(web::dosguard::DOSGuardInterface& dosguard) : dosguard_{dosguard}
{
}
std::expected<void, web::ng::Response>
OnConnectCheck::operator()(web::ng::Connection const& connection)
{
dosguard_.get().increment(connection.ip());
if (not dosguard_.get().isOk(connection.ip())) {
return std::unexpected{
web::ng::Response{boost::beast::http::status::too_many_requests, "Too many requests", connection}
};
}
return {};
}
DisconnectHook::DisconnectHook(web::dosguard::DOSGuardInterface& dosguard) : dosguard_{dosguard}
{
}
void
DisconnectHook::operator()(web::ng::Connection const& connection)
{
dosguard_.get().decrement(connection.ip());
}
MetricsHandler::MetricsHandler(std::shared_ptr<web::AdminVerificationStrategy> adminVerifier)
: adminVerifier_{std::move(adminVerifier)}
{
}
web::ng::Response
MetricsHandler::operator()(
web::ng::Request const& request,
web::ng::ConnectionMetadata& connectionMetadata,
web::SubscriptionContextPtr,
boost::asio::yield_context
)
{
auto const maybeHttpRequest = request.asHttpRequest();
ASSERT(maybeHttpRequest.has_value(), "Got not a http request in Get");
auto const& httpRequest = maybeHttpRequest->get();
// FIXME(#1702): Using web server thread to handle prometheus request. Better to post on work queue.
auto maybeResponse = util::prometheus::handlePrometheusRequest(
httpRequest, adminVerifier_->isAdmin(httpRequest, connectionMetadata.ip())
);
ASSERT(maybeResponse.has_value(), "Got unexpected request for Prometheus");
return web::ng::Response{std::move(maybeResponse).value(), request};
}
web::ng::Response
HealthCheckHandler::operator()(
web::ng::Request const& request,
web::ng::ConnectionMetadata&,
web::SubscriptionContextPtr,
boost::asio::yield_context
)
{
static auto constexpr kHEALTH_CHECK_HTML = R"html(
<!DOCTYPE html>
<html>
<head><title>Test page for Clio</title></head>
<body><h1>Clio Test</h1><p>This page shows Clio http(s) connectivity is working.</p></body>
</html>
)html";
return web::ng::Response{boost::beast::http::status::ok, kHEALTH_CHECK_HTML, request};
}
} // namespace app

234
src/app/WebHandlers.hpp Normal file
View File

@@ -0,0 +1,234 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "rpc/Errors.hpp"
#include "util/log/Logger.hpp"
#include "web/AdminVerificationStrategy.hpp"
#include "web/SubscriptionContextInterface.hpp"
#include "web/dosguard/DOSGuardInterface.hpp"
#include "web/ng/Connection.hpp"
#include "web/ng/Request.hpp"
#include "web/ng/Response.hpp"
#include <boost/asio/spawn.hpp>
#include <boost/beast/http/status.hpp>
#include <boost/json/array.hpp>
#include <boost/json/parse.hpp>
#include <exception>
#include <functional>
#include <memory>
#include <utility>
namespace app {
/**
* @brief A function object that checks if the connection is allowed to proceed.
*/
class OnConnectCheck {
std::reference_wrapper<web::dosguard::DOSGuardInterface> dosguard_;
public:
/**
* @brief Construct a new OnConnectCheck object
*
* @param dosguard The DOSGuardInterface to use for checking the connection.
*/
OnConnectCheck(web::dosguard::DOSGuardInterface& dosguard);
/**
* @brief Check if the connection is allowed to proceed.
*
* @param connection The connection to check.
* @return A response if the connection is not allowed to proceed or void otherwise.
*/
std::expected<void, web::ng::Response>
operator()(web::ng::Connection const& connection);
};
/**
* @brief A function object to be called when a connection is disconnected.
*/
class DisconnectHook {
std::reference_wrapper<web::dosguard::DOSGuardInterface> dosguard_;
public:
/**
* @brief Construct a new DisconnectHook object
*
* @param dosguard The DOSGuardInterface to use for disconnecting the connection.
*/
DisconnectHook(web::dosguard::DOSGuardInterface& dosguard);
/**
* @brief The call of the function object.
*
* @param connection The connection which has disconnected.
*/
void
operator()(web::ng::Connection const& connection);
};
/**
* @brief A function object that handles the metrics endpoint.
*/
class MetricsHandler {
std::shared_ptr<web::AdminVerificationStrategy> adminVerifier_;
public:
/**
* @brief Construct a new MetricsHandler object
*
* @param adminVerifier The AdminVerificationStrategy to use for verifying the connection for admin access.
*/
MetricsHandler(std::shared_ptr<web::AdminVerificationStrategy> adminVerifier);
/**
* @brief The call of the function object.
*
* @param request The request to handle.
* @param connectionMetadata The connection metadata.
* @return The response to the request.
*/
web::ng::Response
operator()(
web::ng::Request const& request,
web::ng::ConnectionMetadata& connectionMetadata,
web::SubscriptionContextPtr,
boost::asio::yield_context
);
};
/**
* @brief A function object that handles the health check endpoint.
*/
class HealthCheckHandler {
public:
/**
* @brief The call of the function object.
*
* @param request The request to handle.
* @return The response to the request
*/
web::ng::Response
operator()(
web::ng::Request const& request,
web::ng::ConnectionMetadata&,
web::SubscriptionContextPtr,
boost::asio::yield_context
);
};
/**
* @brief A function object that handles the websocket endpoint.
*
* @tparam RpcHandlerType The type of the RPC handler.
*/
template <typename RpcHandlerType>
class RequestHandler {
util::Logger webServerLog_{"WebServer"};
std::shared_ptr<web::AdminVerificationStrategy> adminVerifier_;
std::reference_wrapper<RpcHandlerType> rpcHandler_;
std::reference_wrapper<web::dosguard::DOSGuardInterface> dosguard_;
public:
/**
* @brief Construct a new RequestHandler object
*
* @param adminVerifier The AdminVerificationStrategy to use for verifying the connection for admin access.
* @param rpcHandler The RPC handler to use for handling the request.
* @param dosguard The DOSGuardInterface to use for checking the connection.
*/
RequestHandler(
std::shared_ptr<web::AdminVerificationStrategy> adminVerifier,
RpcHandlerType& rpcHandler,
web::dosguard::DOSGuardInterface& dosguard
)
: adminVerifier_(std::move(adminVerifier)), rpcHandler_(rpcHandler), dosguard_(dosguard)
{
}
/**
* @brief The call of the function object.
*
* @param request The request to handle.
* @param connectionMetadata The connection metadata.
* @param subscriptionContext The subscription context.
* @param yield The yield context.
* @return The response to the request.
*/
web::ng::Response
operator()(
web::ng::Request const& request,
web::ng::ConnectionMetadata& connectionMetadata,
web::SubscriptionContextPtr subscriptionContext,
boost::asio::yield_context yield
)
{
if (not dosguard_.get().request(connectionMetadata.ip())) {
auto error = rpc::makeError(rpc::RippledError::rpcSLOW_DOWN);
if (not request.isHttp()) {
try {
auto requestJson = boost::json::parse(request.message());
if (requestJson.is_object() && requestJson.as_object().contains("id"))
error["id"] = requestJson.as_object().at("id");
error["request"] = request.message();
} catch (std::exception const&) {
error["request"] = request.message();
}
}
return web::ng::Response{boost::beast::http::status::service_unavailable, error, request};
}
LOG(webServerLog_.info()) << connectionMetadata.tag()
<< "Received request from ip = " << connectionMetadata.ip()
<< " - posting to WorkQueue";
connectionMetadata.setIsAdmin([this, &request, &connectionMetadata]() {
return adminVerifier_->isAdmin(request.httpHeaders(), connectionMetadata.ip());
});
try {
auto response = rpcHandler_(request, connectionMetadata, std::move(subscriptionContext), yield);
if (not dosguard_.get().add(connectionMetadata.ip(), response.message().size())) {
auto jsonResponse = boost::json::parse(response.message()).as_object();
jsonResponse["warning"] = "load";
if (jsonResponse.contains("warnings") && jsonResponse["warnings"].is_array()) {
jsonResponse["warnings"].as_array().push_back(rpc::makeWarning(rpc::WarnRpcRateLimit));
} else {
jsonResponse["warnings"] = boost::json::array{rpc::makeWarning(rpc::WarnRpcRateLimit)};
}
response.setMessage(jsonResponse);
}
return response;
} catch (std::exception const&) {
return web::ng::Response{
boost::beast::http::status::internal_server_error,
rpc::makeError(rpc::RippledError::rpcINTERNAL),
request
};
}
}
};
} // namespace app
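The rate-limit branch in RequestHandler::operator() above rewrites the response body when the DOS guard rejects the added payload size. Below is a minimal, self-contained sketch of that warning-merge step using plain Boost.JSON only; makeRateLimitWarning is a hypothetical stand-in for rpc::makeWarning(rpc::WarnRpcRateLimit), not the real helper.

#include <boost/json.hpp>
#include <iostream>

// Hypothetical stand-in for rpc::makeWarning(rpc::WarnRpcRateLimit).
static boost::json::object
makeRateLimitWarning()
{
    return {{"id", 2003}, {"message", "You are about to be rate limited"}};
}

int
main()
{
    // A response body as it might come back from the RPC handler.
    auto response = boost::json::parse(R"({"result":{"status":"success"}})").as_object();

    // Mirror the logic in RequestHandler::operator(): set the legacy
    // "warning" field and append to (or create) the "warnings" array.
    response["warning"] = "load";
    if (response.contains("warnings") && response["warnings"].is_array()) {
        response["warnings"].as_array().push_back(makeRateLimitWarning());
    } else {
        response["warnings"] = boost::json::array{makeRateLimitWarning()};
    }

    std::cout << boost::json::serialize(response) << '\n';
}
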

View File

@@ -0,0 +1,5 @@
add_library(clio_cluster)
target_sources(clio_cluster PRIVATE ClioNode.cpp ClusterCommunicationService.cpp)
target_link_libraries(clio_cluster PRIVATE clio_util clio_data)

64
src/cluster/ClioNode.cpp Normal file
View File

@@ -0,0 +1,64 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "cluster/ClioNode.hpp"
#include "util/TimeUtils.hpp"
#include <boost/json/conversion.hpp>
#include <boost/json/object.hpp>
#include <boost/json/value.hpp>
#include <boost/uuid/uuid.hpp>
#include <memory>
#include <stdexcept>
#include <string>
#include <string_view>
namespace cluster {
namespace {
struct Fields {
static constexpr std::string_view const kUPDATE_TIME = "update_time";
};
} // namespace
void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, ClioNode const& node)
{
jv = {
{Fields::kUPDATE_TIME, util::systemTpToUtcStr(node.updateTime, ClioNode::kTIME_FORMAT)},
};
}
ClioNode
tag_invoke(boost::json::value_to_tag<ClioNode>, boost::json::value const& jv)
{
auto const& updateTimeStr = jv.as_object().at(Fields::kUPDATE_TIME).as_string();
auto const updateTime = util::systemTpFromUtcStr(std::string(updateTimeStr), ClioNode::kTIME_FORMAT);
if (!updateTime.has_value()) {
throw std::runtime_error("Failed to parse update time");
}
return ClioNode{.uuid = std::make_shared<boost::uuids::uuid>(), .updateTime = updateTime.value()};
}
} // namespace cluster
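For reference, kTIME_FORMAT used above is a strftime-style pattern. The following is a standalone sketch of formatting a system_clock::time_point as UTC with that pattern using only the standard library; it illustrates the format, it is not the util::systemTpToUtcStr implementation.

#include <chrono>
#include <ctime>
#include <iomanip>
#include <iostream>
#include <sstream>

int
main()
{
    auto const tp = std::chrono::system_clock::now();
    std::time_t const t = std::chrono::system_clock::to_time_t(tp);

    // Convert to UTC and format with the same pattern as ClioNode::kTIME_FORMAT.
    std::tm utc{};
    gmtime_r(&t, &utc);  // POSIX; use gmtime_s on Windows

    std::ostringstream oss;
    oss << std::put_time(&utc, "%Y-%m-%dT%H:%M:%SZ");
    std::cout << oss.str() << '\n';  // e.g. 2025-04-18T09:27:58Z
}
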

58
src/cluster/ClioNode.hpp Normal file
View File

@@ -0,0 +1,58 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include <boost/json/conversion.hpp>
#include <boost/json/value.hpp>
#include <boost/uuid/uuid.hpp>
#include <chrono>
#include <memory>
namespace cluster {
/**
* @brief Represents a node in the cluster.
*/
struct ClioNode {
/**
* @brief The format of the time to store in the database.
*/
static constexpr char const* kTIME_FORMAT = "%Y-%m-%dT%H:%M:%SZ";
// enum class WriterRole {
// ReadOnly,
// NotWriter,
// Writer
// };
std::shared_ptr<boost::uuids::uuid> uuid; ///< The UUID of the node.
std::chrono::system_clock::time_point updateTime; ///< The time the data about the node was last updated.
// WriterRole writerRole;
};
void
tag_invoke(boost::json::value_from_tag, boost::json::value& jv, ClioNode const& node);
ClioNode
tag_invoke(boost::json::value_to_tag<ClioNode>, boost::json::value const& jv);
} // namespace cluster

View File

@@ -0,0 +1,175 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "cluster/ClusterCommunicationService.hpp"
#include "cluster/ClioNode.hpp"
#include "data/BackendInterface.hpp"
#include "util/log/Logger.hpp"
#include <boost/asio/spawn.hpp>
#include <boost/asio/steady_timer.hpp>
#include <boost/json/parse.hpp>
#include <boost/json/serialize.hpp>
#include <boost/json/value.hpp>
#include <boost/json/value_from.hpp>
#include <boost/json/value_to.hpp>
#include <boost/uuid/random_generator.hpp>
#include <boost/uuid/uuid.hpp>
#include <chrono>
#include <ctime>
#include <memory>
#include <utility>
#include <vector>
namespace cluster {
ClusterCommunicationService::ClusterCommunicationService(
std::shared_ptr<data::BackendInterface> backend,
std::chrono::steady_clock::duration readInterval,
std::chrono::steady_clock::duration writeInterval
)
: backend_(std::move(backend))
, readInterval_(readInterval)
, writeInterval_(writeInterval)
, selfData_{ClioNode{
.uuid = std::make_shared<boost::uuids::uuid>(boost::uuids::random_generator{}()),
.updateTime = std::chrono::system_clock::time_point{}
}}
{
nodesInClusterMetric_.set(1); // The node always sees itself
isHealthy_ = true;
}
void
ClusterCommunicationService::run()
{
boost::asio::spawn(strand_, [this](boost::asio::yield_context yield) {
boost::asio::steady_timer timer(yield.get_executor());
while (true) {
timer.expires_after(readInterval_);
timer.async_wait(yield);
doRead(yield);
}
});
boost::asio::spawn(strand_, [this](boost::asio::yield_context yield) {
boost::asio::steady_timer timer(yield.get_executor());
while (true) {
doWrite();
timer.expires_after(writeInterval_);
timer.async_wait(yield);
}
});
}
ClusterCommunicationService::~ClusterCommunicationService()
{
stop();
}
void
ClusterCommunicationService::stop()
{
if (stopped_)
return;
ctx_.stop();
ctx_.join();
stopped_ = true;
}
std::shared_ptr<boost::uuids::uuid>
ClusterCommunicationService::selfUuid() const
{
// The UUID never changes, so it is safe to copy it without going through strand_
return selfData_.uuid;
}
ClioNode
ClusterCommunicationService::selfData() const
{
ClioNode result{};
boost::asio::spawn(strand_, [this, &result](boost::asio::yield_context) { result = selfData_; });
return result;
}
std::vector<ClioNode>
ClusterCommunicationService::clusterData() const
{
std::vector<ClioNode> result;
boost::asio::spawn(strand_, [this, &result](boost::asio::yield_context) {
result = otherNodesData_;
result.push_back(selfData_);
});
return result;
}
void
ClusterCommunicationService::doRead(boost::asio::yield_context yield)
{
otherNodesData_.clear();
auto const expectedResult = backend_->fetchClioNodesData(yield);
if (!expectedResult.has_value()) {
LOG(log_.error()) << "Failed to fetch nodes data";
isHealthy_ = false;
return;
}
// Build a new vector here so that otherNodesData_ never holds partially parsed data
std::vector<ClioNode> otherNodesData;
for (auto const& [uuid, nodeDataStr] : expectedResult.value()) {
if (uuid == *selfData_.uuid) {
continue;
}
boost::system::error_code errorCode;
auto const json = boost::json::parse(nodeDataStr, errorCode);
if (errorCode.failed()) {
LOG(log_.error()) << "Error parsing json from DB: " << nodeDataStr;
isHealthy_ = false;
return;
}
auto expectedNodeData = boost::json::try_value_to<ClioNode>(json);
if (expectedNodeData.has_error()) {
LOG(log_.error()) << "Error converting json to ClioNode: " << json;
isHealthy_ = false;
return;
}
*expectedNodeData->uuid = uuid;
otherNodesData.push_back(std::move(expectedNodeData).value());
}
otherNodesData_ = std::move(otherNodesData);
nodesInClusterMetric_.set(otherNodesData_.size() + 1);
isHealthy_ = true;
}
void
ClusterCommunicationService::doWrite()
{
selfData_.updateTime = std::chrono::system_clock::now();
boost::json::value jsonValue{};
boost::json::value_from(selfData_, jsonValue);
backend_->writeNodeMessage(*selfData_.uuid, boost::json::serialize(jsonValue.as_object()));
}
} // namespace cluster
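run() above drives two periodic loops on a strand, each a coroutine alternating async_wait with work. A stripped-down sketch of that loop pattern follows; it uses plain Boost.Asio with the detached completion token (Boost 1.80 or newer assumed) and illustrative intervals, with the body standing in for doRead()/doWrite().

#include <boost/asio/detached.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/steady_timer.hpp>
#include <boost/asio/thread_pool.hpp>
#include <chrono>
#include <iostream>

int
main()
{
    boost::asio::thread_pool ctx{1};

    boost::asio::spawn(
        ctx.get_executor(),
        [](boost::asio::yield_context yield) {
            boost::asio::steady_timer timer(yield.get_executor());
            for (int i = 0; i < 3; ++i) {
                timer.expires_after(std::chrono::milliseconds{200});
                timer.async_wait(yield);                // suspend until the timer fires
                std::cout << "tick " << i << '\n';      // periodic work would go here
            }
        },
        boost::asio::detached
    );

    ctx.join();
}
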

View File

@@ -0,0 +1,141 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "cluster/ClioNode.hpp"
#include "cluster/ClusterCommunicationServiceInterface.hpp"
#include "data/BackendInterface.hpp"
#include "util/log/Logger.hpp"
#include "util/prometheus/Bool.hpp"
#include "util/prometheus/Gauge.hpp"
#include "util/prometheus/Prometheus.hpp"
#include <boost/asio/spawn.hpp>
#include <boost/asio/strand.hpp>
#include <boost/asio/thread_pool.hpp>
#include <boost/uuid/uuid.hpp>
#include <chrono>
#include <memory>
#include <vector>
namespace cluster {
/**
* @brief Service to post and read messages to/from the cluster. It uses a backend to communicate with the cluster.
*/
class ClusterCommunicationService : public ClusterCommunicationServiceInterface {
util::prometheus::GaugeInt& nodesInClusterMetric_ = PrometheusService::gaugeInt(
"cluster_nodes_total_number",
{},
"Total number of nodes this node can detect in the cluster."
);
util::prometheus::Bool isHealthy_ = PrometheusService::boolMetric(
"cluster_communication_is_healthy",
{},
"Whether cluster communicaton service is operating healthy (1 - healthy, 0 - we have a problem)"
);
// TODO: Use util::async::CoroExecutionContext after https://github.com/XRPLF/clio/issues/1973 is implemented
boost::asio::thread_pool ctx_{1};
boost::asio::strand<boost::asio::thread_pool::executor_type> strand_ = boost::asio::make_strand(ctx_);
util::Logger log_{"ClusterCommunication"};
std::shared_ptr<data::BackendInterface> backend_;
std::chrono::steady_clock::duration readInterval_;
std::chrono::steady_clock::duration writeInterval_;
ClioNode selfData_;
std::vector<ClioNode> otherNodesData_;
bool stopped_ = false;
public:
static constexpr std::chrono::milliseconds kDEFAULT_READ_INTERVAL{2100};
static constexpr std::chrono::milliseconds kDEFAULT_WRITE_INTERVAL{1200};
/**
* @brief Construct a new Cluster Communication Service object.
*
* @param backend The backend to use for communication.
* @param readInterval The interval to read messages from the cluster.
* @param writeInterval The interval to write messages to the cluster.
*/
ClusterCommunicationService(
std::shared_ptr<data::BackendInterface> backend,
std::chrono::steady_clock::duration readInterval = kDEFAULT_READ_INTERVAL,
std::chrono::steady_clock::duration writeInterval = kDEFAULT_WRITE_INTERVAL
);
~ClusterCommunicationService() override;
/**
* @brief Start the service.
*/
void
run();
/**
* @brief Stop the service.
*/
void
stop();
ClusterCommunicationService(ClusterCommunicationService&&) = delete;
ClusterCommunicationService(ClusterCommunicationService const&) = delete;
ClusterCommunicationService&
operator=(ClusterCommunicationService&&) = delete;
ClusterCommunicationService&
operator=(ClusterCommunicationService const&) = delete;
/**
* @brief Get the UUID of the current node.
*
* @return The UUID of the current node.
*/
std::shared_ptr<boost::uuids::uuid>
selfUuid() const;
/**
* @brief Get the data of the current node.
*
* @return The data of the current node.
*/
ClioNode
selfData() const override;
/**
* @brief Get the data of all nodes in the cluster (including self).
*
* @return The data of all nodes in the cluster.
*/
std::vector<ClioNode>
clusterData() const override;
private:
void
doRead(boost::asio::yield_context yield);
void
doWrite();
};
} // namespace cluster

View File

@@ -0,0 +1,52 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "cluster/ClioNode.hpp"
#include <vector>
namespace cluster {
/**
* @brief Interface for the cluster communication service.
*/
class ClusterCommunicationServiceInterface {
public:
virtual ~ClusterCommunicationServiceInterface() = default;
/**
* @brief Get the data of the current node.
*
* @return The data of the current node.
*/
[[nodiscard]] virtual ClioNode
selfData() const = 0;
/**
* @brief Get the data of all nodes in the cluster (including self).
*
* @return The data of all nodes in the cluster.
*/
[[nodiscard]] virtual std::vector<ClioNode>
clusterData() const = 0;
};
} // namespace cluster
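Because callers depend only on this interface, tests can substitute a trivial in-memory fake in place of the DB-backed service. The sketch below assumes the two headers above; the class and namespace names are hypothetical.

#include "cluster/ClioNode.hpp"
#include "cluster/ClusterCommunicationServiceInterface.hpp"

#include <utility>
#include <vector>

namespace cluster::test {

// Hypothetical fake used to illustrate how the interface decouples callers
// from the DB-backed ClusterCommunicationService.
class FakeClusterCommunicationService : public ClusterCommunicationServiceInterface {
    ClioNode self_;

public:
    explicit FakeClusterCommunicationService(ClioNode self) : self_(std::move(self))
    {
    }

    ClioNode
    selfData() const override
    {
        return self_;
    }

    std::vector<ClioNode>
    clusterData() const override
    {
        return {self_};  // the fake only ever sees itself
    }
};

}  // namespace cluster::test
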

View File

@@ -0,0 +1,203 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include "data/AmendmentCenter.hpp"
#include "data/BackendInterface.hpp"
#include "data/Types.hpp"
#include "util/Assert.hpp"
#include <boost/asio/spawn.hpp>
#include <xrpl/basics/Slice.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/protocol/Feature.h>
#include <xrpl/protocol/Indexes.h>
#include <xrpl/protocol/SField.h>
#include <xrpl/protocol/STLedgerEntry.h>
#include <xrpl/protocol/Serializer.h>
#include <xrpl/protocol/digest.h>
#include <algorithm>
#include <cstdint>
#include <iterator>
#include <map>
#include <memory>
#include <optional>
#include <ranges>
#include <stdexcept>
#include <string>
#include <string_view>
#include <unordered_set>
#include <utility>
#include <vector>
namespace {
std::unordered_set<std::string>&
supportedAmendments()
{
static std::unordered_set<std::string> kAMENDMENTS = {};
return kAMENDMENTS;
}
bool
lookupAmendment(auto const& allAmendments, std::vector<ripple::uint256> const& ledgerAmendments, std::string_view name)
{
namespace rg = std::ranges;
if (auto const am = rg::find(allAmendments, name, &data::Amendment::name); am != rg::end(allAmendments))
return rg::find(ledgerAmendments, am->feature) != rg::end(ledgerAmendments);
return false;
}
} // namespace
namespace data {
namespace impl {
WritingAmendmentKey::WritingAmendmentKey(std::string amendmentName) : AmendmentKey{std::move(amendmentName)}
{
ASSERT(not supportedAmendments().contains(name), "Attempt to register the same amendment twice");
supportedAmendments().insert(name);
}
} // namespace impl
AmendmentKey::operator std::string const&() const
{
return name;
}
AmendmentKey::operator std::string_view() const
{
return name;
}
AmendmentKey::operator ripple::uint256() const
{
return Amendment::getAmendmentId(name);
}
AmendmentCenter::AmendmentCenter(std::shared_ptr<data::BackendInterface> const& backend) : backend_{backend}
{
namespace rg = std::ranges;
namespace vs = std::views;
rg::copy(
ripple::allAmendments() | vs::transform([&](auto const& p) {
auto const& [name, support] = p;
return Amendment{
.name = name,
.feature = Amendment::getAmendmentId(name),
.isSupportedByXRPL = support != ripple::AmendmentSupport::Unsupported,
.isSupportedByClio = rg::find(supportedAmendments(), name) != rg::end(supportedAmendments()),
.isRetired = support == ripple::AmendmentSupport::Retired
};
}),
std::back_inserter(all_)
);
for (auto const& am : all_ | vs::filter([](auto const& am) { return am.isSupportedByClio; }))
supported_.insert_or_assign(am.name, am);
}
bool
AmendmentCenter::isSupported(AmendmentKey const& key) const
{
return supported_.contains(key);
}
std::map<std::string, Amendment> const&
AmendmentCenter::getSupported() const
{
return supported_;
}
std::vector<Amendment> const&
AmendmentCenter::getAll() const
{
return all_;
}
bool
AmendmentCenter::isEnabled(AmendmentKey const& key, uint32_t seq) const
{
return data::synchronous([this, &key, seq](auto yield) { return isEnabled(yield, key, seq); });
}
bool
AmendmentCenter::isEnabled(boost::asio::yield_context yield, AmendmentKey const& key, uint32_t seq) const
{
if (auto const listAmendments = fetchAmendmentsList(yield, seq); listAmendments)
return lookupAmendment(all_, *listAmendments, key);
return false;
}
std::vector<bool>
AmendmentCenter::isEnabled(boost::asio::yield_context yield, std::vector<AmendmentKey> const& keys, uint32_t seq) const
{
namespace rg = std::ranges;
if (auto const listAmendments = fetchAmendmentsList(yield, seq); listAmendments) {
std::vector<bool> out;
rg::transform(keys, std::back_inserter(out), [this, &listAmendments](auto const& key) {
return lookupAmendment(all_, *listAmendments, key);
});
return out;
}
return std::vector<bool>(keys.size(), false);
}
Amendment const&
AmendmentCenter::getAmendment(AmendmentKey const& key) const
{
ASSERT(supported_.contains(key), "The amendment '{}' must be present in supported amendments list", key.name);
return supported_.at(key);
}
Amendment const&
AmendmentCenter::operator[](AmendmentKey const& key) const
{
return getAmendment(key);
}
ripple::uint256
Amendment::getAmendmentId(std::string_view name)
{
return ripple::sha512Half(ripple::Slice(name.data(), name.size()));
}
std::optional<std::vector<ripple::uint256>>
AmendmentCenter::fetchAmendmentsList(boost::asio::yield_context yield, uint32_t seq) const
{
// the amendments should always be present on the ledger
auto const amendments = backend_->fetchLedgerObject(ripple::keylet::amendments().key, seq, yield);
if (not amendments.has_value())
throw std::runtime_error("Amendments ledger object must be present in the database");
ripple::SLE const amendmentsSLE{
ripple::SerialIter{amendments->data(), amendments->size()}, ripple::keylet::amendments().key
};
return amendmentsSLE[~ripple::sfAmendments];
}
} // namespace data
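The lookupAmendment helper above uses std::ranges::find with a member projection to search by name. Here is a self-contained sketch of that lookup style on a plain struct, with no xrpl types involved.

#include <algorithm>
#include <iostream>
#include <ranges>
#include <string>
#include <string_view>
#include <vector>

struct Entry {
    std::string name;
    int id;
};

// Find an entry by name using a member projection, the same pattern
// lookupAmendment applies with data::Amendment::name.
bool
contains(std::vector<Entry> const& entries, std::string_view name)
{
    namespace rg = std::ranges;
    return rg::find(entries, name, &Entry::name) != rg::end(entries);
}

int
main()
{
    std::vector<Entry> const entries{{"DeepFreeze", 1}, {"Credentials", 2}};
    std::cout << std::boolalpha << contains(entries, "DeepFreeze") << '\n';  // true
    std::cout << std::boolalpha << contains(entries, "NotThere") << '\n';    // false
}
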

View File

@@ -0,0 +1,266 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/AmendmentCenterInterface.hpp"
#include "data/BackendInterface.hpp"
#include "data/Types.hpp"
#include <boost/asio/spawn.hpp>
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/seq/for_each.hpp>
#include <boost/preprocessor/stringize.hpp>
#include <boost/preprocessor/variadic/to_seq.hpp>
#include <xrpl/basics/Slice.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/protocol/Feature.h>
#include <xrpl/protocol/Indexes.h>
#include <xrpl/protocol/SField.h>
#include <xrpl/protocol/STLedgerEntry.h>
#include <xrpl/protocol/Serializer.h>
#include <xrpl/protocol/digest.h>
#include <cstdint>
#include <map>
#include <memory>
#include <optional>
#include <string>
#include <vector>
#define REGISTER(name) \
inline static impl::WritingAmendmentKey const name = \
impl::WritingAmendmentKey(std::string(BOOST_PP_STRINGIZE(name)))
namespace data {
namespace impl {
struct WritingAmendmentKey : AmendmentKey {
explicit WritingAmendmentKey(std::string amendmentName);
};
} // namespace impl
/**
* @brief List of supported amendments
*/
struct Amendments {
// NOTE: if Clio wants to report that it supports an amendment, it should be listed here.
// Whether an amendment is obsolete and/or supported by libxrpl is extracted directly from libxrpl.
// If an amendment is in the list below, it just means Clio made whatever changes were needed to support it.
// Most of the time no changes are needed at all.
/** @cond */
// NOLINTBEGIN(readability-identifier-naming)
REGISTER(OwnerPaysFee);
REGISTER(Flow);
REGISTER(FlowCross);
REGISTER(fix1513);
REGISTER(DepositAuth);
REGISTER(Checks);
REGISTER(fix1571);
REGISTER(fix1543);
REGISTER(fix1623);
REGISTER(DepositPreauth);
REGISTER(fix1515);
REGISTER(fix1578);
REGISTER(MultiSignReserve);
REGISTER(fixTakerDryOfferRemoval);
REGISTER(fixMasterKeyAsRegularKey);
REGISTER(fixCheckThreading);
REGISTER(fixPayChanRecipientOwnerDir);
REGISTER(DeletableAccounts);
REGISTER(fixQualityUpperBound);
REGISTER(RequireFullyCanonicalSig);
REGISTER(fix1781);
REGISTER(HardenedValidations);
REGISTER(fixAmendmentMajorityCalc);
REGISTER(NegativeUNL);
REGISTER(TicketBatch);
REGISTER(FlowSortStrands);
REGISTER(fixSTAmountCanonicalize);
REGISTER(fixRmSmallIncreasedQOffers);
REGISTER(CheckCashMakesTrustLine);
REGISTER(ExpandedSignerList);
REGISTER(NonFungibleTokensV1_1);
REGISTER(fixTrustLinesToSelf);
REGISTER(fixRemoveNFTokenAutoTrustLine);
REGISTER(ImmediateOfferKilled);
REGISTER(DisallowIncoming);
REGISTER(XRPFees);
REGISTER(fixUniversalNumber);
REGISTER(fixNonFungibleTokensV1_2);
REGISTER(fixNFTokenRemint);
REGISTER(fixReducedOffersV1);
REGISTER(Clawback);
REGISTER(AMM);
REGISTER(XChainBridge);
REGISTER(fixDisallowIncomingV1);
REGISTER(DID);
REGISTER(fixFillOrKill);
REGISTER(fixNFTokenReserve);
REGISTER(fixInnerObjTemplate);
REGISTER(fixAMMOverflowOffer);
REGISTER(PriceOracle);
REGISTER(fixEmptyDID);
REGISTER(fixXChainRewardRounding);
REGISTER(fixPreviousTxnID);
REGISTER(fixAMMv1_1);
REGISTER(NFTokenMintOffer);
REGISTER(fixReducedOffersV2);
REGISTER(fixEnforceNFTokenTrustline);
REGISTER(fixInnerObjTemplate2);
REGISTER(fixNFTokenPageLinks);
REGISTER(InvariantsV1_1);
REGISTER(MPTokensV1);
REGISTER(fixAMMv1_2);
REGISTER(AMMClawback);
REGISTER(Credentials);
REGISTER(DynamicNFT);
REGISTER(PermissionedDomains);
REGISTER(fixInvalidTxFlags);
REGISTER(fixFrozenLPTokenTransfer);
REGISTER(DeepFreeze);
// Obsolete but supported by libxrpl
REGISTER(CryptoConditionsSuite);
REGISTER(NonFungibleTokensV1);
REGISTER(fixNFTokenDirV1);
REGISTER(fixNFTokenNegOffer);
// Retired amendments
REGISTER(MultiSign);
REGISTER(TrustSetAuth);
REGISTER(FeeEscalation);
REGISTER(PayChan);
REGISTER(fix1368);
REGISTER(CryptoConditions);
REGISTER(Escrow);
REGISTER(TickSize);
REGISTER(fix1373);
REGISTER(EnforceInvariants);
REGISTER(SortedDirectories);
REGISTER(fix1201);
REGISTER(fix1512);
REGISTER(fix1523);
REGISTER(fix1528);
// NOLINTEND(readability-identifier-naming)
/** @endcond */
};
#undef REGISTER
/**
* @brief Knowledge center for amendments within XRPL
*/
class AmendmentCenter : public AmendmentCenterInterface {
std::shared_ptr<data::BackendInterface> backend_;
std::map<std::string, Amendment> supported_;
std::vector<Amendment> all_;
public:
/**
* @brief Construct a new AmendmentCenter instance
*
* @param backend The backend
*/
explicit AmendmentCenter(std::shared_ptr<data::BackendInterface> const& backend);
/**
* @brief Check whether an amendment is supported by Clio
*
* @param key The key of the amendment to check
* @return true if supported; false otherwise
*/
[[nodiscard]] bool
isSupported(AmendmentKey const& key) const final;
/**
* @brief Get all supported amendments as a map
*
* @return The amendments supported by Clio
*/
[[nodiscard]] std::map<std::string, Amendment> const&
getSupported() const final;
/**
* @brief Get all known amendments
*
* @return All known amendments as a vector
*/
[[nodiscard]] std::vector<Amendment> const&
getAll() const final;
/**
* @brief Check whether an amendment was/is enabled for a given sequence
*
* @param key The key of the amendment to check
* @param seq The sequence to check for
* @return true if enabled; false otherwise
*/
[[nodiscard]] bool
isEnabled(AmendmentKey const& key, uint32_t seq) const final;
/**
* @brief Check whether an amendment was/is enabled for a given sequence
*
* @param yield The coroutine context to use
* @param key The key of the amendment to check
* @param seq The sequence to check for
* @return true if enabled; false otherwise
*/
[[nodiscard]] bool
isEnabled(boost::asio::yield_context yield, AmendmentKey const& key, uint32_t seq) const final;
/**
* @brief Check whether an amendment was/is enabled for a given sequence
*
* @param yield The coroutine context to use
* @param keys The keys of the amendments to check
* @param seq The sequence to check for
* @return A vector of bools representing enabled state for each of the given keys
*/
[[nodiscard]] std::vector<bool>
isEnabled(boost::asio::yield_context yield, std::vector<AmendmentKey> const& keys, uint32_t seq) const final;
/**
* @brief Get an amendment
*
* @param key The key of the amendment to get
* @return The amendment as a const ref; asserts if the amendment is unknown
*/
[[nodiscard]] Amendment const&
getAmendment(AmendmentKey const& key) const final;
/**
* @brief Get an amendment by its key
* @param key The amendment key from @see Amendments
* @return The amendment as a const ref; asserts if the amendment is unknown
*/
[[nodiscard]] Amendment const&
operator[](AmendmentKey const& key) const final;
private:
[[nodiscard]] std::optional<std::vector<ripple::uint256>>
fetchAmendmentsList(boost::asio::yield_context yield, uint32_t seq) const;
};
} // namespace data
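Call sites typically pass the keys registered in the Amendments struct straight into the center. The sketch below is illustrative only: it assumes an existing backend shared_ptr and a yield context supplied by the surrounding code, and constructs an AmendmentCenter locally purely for brevity.

#include "data/AmendmentCenter.hpp"
#include "data/BackendInterface.hpp"

#include <boost/asio/spawn.hpp>

#include <cstdint>
#include <memory>

// Illustrative only: checks a registered amendment against a ledger sequence.
bool
deepFreezeActive(
    std::shared_ptr<data::BackendInterface> const& backend,
    boost::asio::yield_context yield,
    std::uint32_t seq
)
{
    data::AmendmentCenter const center{backend};

    // The keys are compile-time checked: a typo here fails to build instead of
    // silently querying an unknown amendment name.
    if (not center.isSupported(data::Amendments::DeepFreeze))
        return false;

    return center.isEnabled(yield, data::Amendments::DeepFreeze, seq);
}
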

View File

@@ -0,0 +1,116 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2024, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/Types.hpp"
#include <boost/asio/spawn.hpp>
#include <cstdint>
#include <map>
#include <string>
#include <vector>
namespace data {
/**
* @brief The interface of an amendment center
*/
class AmendmentCenterInterface {
public:
virtual ~AmendmentCenterInterface() = default;
/**
* @brief Check whether an amendment is supported by Clio
*
* @param key The key of the amendment to check
* @return true if supported; false otherwise
*/
[[nodiscard]] virtual bool
isSupported(AmendmentKey const& key) const = 0;
/**
* @brief Get all supported amendments as a map
*
* @return The amendments supported by Clio
*/
[[nodiscard]] virtual std::map<std::string, Amendment> const&
getSupported() const = 0;
/**
* @brief Get all known amendments
*
* @return All known amendments as a vector
*/
[[nodiscard]] virtual std::vector<Amendment> const&
getAll() const = 0;
/**
* @brief Check whether an amendment was/is enabled for a given sequence
*
* @param key The key of the amendment to check
* @param seq The sequence to check for
* @return true if enabled; false otherwise
*/
[[nodiscard]] virtual bool
isEnabled(AmendmentKey const& key, uint32_t seq) const = 0;
/**
* @brief Check whether an amendment was/is enabled for a given sequence
*
* @param yield The coroutine context to use
* @param key The key of the amendment to check
* @param seq The sequence to check for
* @return true if enabled; false otherwise
*/
[[nodiscard]] virtual bool
isEnabled(boost::asio::yield_context yield, AmendmentKey const& key, uint32_t seq) const = 0;
/**
* @brief Check whether an amendment was/is enabled for a given sequence
*
* @param yield The coroutine context to use
* @param keys The keys of the amendments to check
* @param seq The sequence to check for
* @return A vector of bools representing enabled state for each of the given keys
*/
[[nodiscard]] virtual std::vector<bool>
isEnabled(boost::asio::yield_context yield, std::vector<AmendmentKey> const& keys, uint32_t seq) const = 0;
/**
* @brief Get an amendment
*
* @param key The key of the amendment to get
* @return The amendment as a const ref; asserts if the amendment is unknown
*/
[[nodiscard]] virtual Amendment const&
getAmendment(AmendmentKey const& key) const = 0;
/**
* @brief Get an amendment by its key
*
* @param key The amendment key from @see Amendments
* @return The amendment as a const ref; asserts if the amendment is unknown
*/
[[nodiscard]] virtual Amendment const&
operator[](AmendmentKey const& key) const = 0;
};
} // namespace data

View File

@@ -36,7 +36,7 @@ namespace data {
namespace {
std::vector<std::int64_t> const histogramBuckets{1, 2, 5, 10, 20, 50, 100, 200, 500, 700, 1000};
std::vector<std::int64_t> const kHISTOGRAM_BUCKETS{1, 2, 5, 10, 20, 50, 100, 200, 500, 700, 1000};
std::int64_t
durationInMillisecondsSince(std::chrono::steady_clock::time_point const startTime)
@@ -69,13 +69,13 @@ BackendCounters::BackendCounters()
, readDurationHistogram_(PrometheusService::histogramInt(
"backend_duration_milliseconds_histogram",
Labels({Label{"operation", "read"}}),
histogramBuckets,
kHISTOGRAM_BUCKETS,
"The duration of backend read operations including retries"
))
, writeDurationHistogram_(PrometheusService::histogramInt(
"backend_duration_milliseconds_histogram",
Labels({Label{"operation", "write"}}),
histogramBuckets,
kHISTOGRAM_BUCKETS,
"The duration of backend write operations including retries"
))
{

View File

@@ -21,9 +21,10 @@
#include "data/BackendInterface.hpp"
#include "data/CassandraBackend.hpp"
#include "data/LedgerCacheInterface.hpp"
#include "data/cassandra/SettingsProvider.hpp"
#include "util/config/Config.hpp"
#include "util/log/Logger.hpp"
#include "util/newconfig/ConfigDefinition.hpp"
#include <boost/algorithm/string.hpp>
#include <boost/algorithm/string/predicate.hpp>
@@ -38,23 +39,25 @@ namespace data {
* @brief A factory function that creates the backend based on a config.
*
* @param config The clio config to use
* @param cache The ledger cache to use
* @return A shared_ptr<BackendInterface> with the selected implementation
*/
inline std::shared_ptr<BackendInterface>
make_Backend(util::Config const& config)
makeBackend(util::config::ClioConfigDefinition const& config, data::LedgerCacheInterface& cache)
{
static util::Logger const log{"Backend"};
static util::Logger const log{"Backend"}; // NOLINT(readability-identifier-naming)
LOG(log.info()) << "Constructing BackendInterface";
auto const readOnly = config.valueOr("read_only", false);
auto const readOnly = config.get<bool>("read_only");
auto const type = config.value<std::string>("database.type");
auto const type = config.get<std::string>("database.type");
std::shared_ptr<BackendInterface> backend = nullptr;
// TODO: retire `cassandra-new` by next release after 2.0
if (boost::iequals(type, "cassandra") or boost::iequals(type, "cassandra-new")) {
auto cfg = config.section("database." + type);
backend = std::make_shared<data::cassandra::CassandraBackend>(data::cassandra::SettingsProvider{cfg}, readOnly);
if (boost::iequals(type, "cassandra")) {
auto const cfg = config.getObject("database." + type);
backend = std::make_shared<data::cassandra::CassandraBackend>(
data::cassandra::SettingsProvider{cfg}, cache, readOnly
);
}
if (!backend)

View File

@@ -24,13 +24,13 @@
#include "util/log/Logger.hpp"
#include <boost/asio/spawn.hpp>
#include <ripple/basics/base_uint.h>
#include <ripple/basics/strHex.h>
#include <ripple/protocol/Fees.h>
#include <ripple/protocol/Indexes.h>
#include <ripple/protocol/SField.h>
#include <ripple/protocol/STLedgerEntry.h>
#include <ripple/protocol/Serializer.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/basics/strHex.h>
#include <xrpl/protocol/Fees.h>
#include <xrpl/protocol/Indexes.h>
#include <xrpl/protocol/SField.h>
#include <xrpl/protocol/STLedgerEntry.h>
#include <xrpl/protocol/Serializer.h>
#include <chrono>
#include <cstddef>
@@ -87,13 +87,12 @@ BackendInterface::fetchLedgerObject(
boost::asio::yield_context yield
) const
{
auto obj = cache_.get(key, sequence);
auto obj = cache_.get().get(key, sequence);
if (obj) {
LOG(gLog.trace()) << "Cache hit - " << ripple::strHex(key);
return obj;
}
LOG(gLog.trace()) << "Cache miss - " << ripple::strHex(key);
auto dbObj = doFetchLedgerObject(key, sequence, yield);
if (!dbObj) {
LOG(gLog.trace()) << "Missed cache and missed in db";
@@ -103,6 +102,19 @@ BackendInterface::fetchLedgerObject(
return dbObj;
}
std::optional<std::uint32_t>
BackendInterface::fetchLedgerObjectSeq(
ripple::uint256 const& key,
std::uint32_t const sequence,
boost::asio::yield_context yield
) const
{
auto seq = doFetchLedgerObjectSeq(key, sequence, yield);
if (!seq)
LOG(gLog.trace()) << "Missed in db";
return seq;
}
std::vector<Blob>
BackendInterface::fetchLedgerObjects(
std::vector<ripple::uint256> const& keys,
@@ -114,7 +126,7 @@ BackendInterface::fetchLedgerObjects(
results.resize(keys.size());
std::vector<ripple::uint256> misses;
for (size_t i = 0; i < keys.size(); ++i) {
auto obj = cache_.get(keys[i], sequence);
auto obj = cache_.get().get(keys[i], sequence);
if (obj) {
results[i] = *obj;
} else {
@@ -144,7 +156,7 @@ BackendInterface::fetchSuccessorKey(
boost::asio::yield_context yield
) const
{
auto succ = cache_.getSuccessor(key, ledgerSequence);
auto succ = cache_.get().getSuccessor(key, ledgerSequence);
if (succ) {
LOG(gLog.trace()) << "Cache hit - " << ripple::strHex(key);
} else {
@@ -164,9 +176,9 @@ BackendInterface::fetchSuccessorObject(
if (succ) {
auto obj = fetchLedgerObject(*succ, ledgerSequence, yield);
if (!obj)
return {{*succ, {}}};
return {{.key = *succ, .blob = {}}};
return {{*succ, *obj}};
return {{.key = *succ, .blob = *obj}};
}
return {};
}
@@ -255,7 +267,7 @@ std::optional<LedgerRange>
BackendInterface::fetchLedgerRange() const
{
std::shared_lock const lck(rngMtx_);
return range;
return range_;
}
void
@@ -264,16 +276,16 @@ BackendInterface::updateRange(uint32_t newMax)
std::scoped_lock const lck(rngMtx_);
ASSERT(
!range || newMax >= range->maxSequence,
!range_ || newMax >= range_->maxSequence,
"Range shouldn't exist yet or newMax should be greater. newMax = {}, range->maxSequence = {}",
newMax,
range->maxSequence
range_->maxSequence
);
if (!range) {
range = {newMax, newMax};
if (!range_) {
range_ = {.minSequence = newMax, .maxSequence = newMax};
} else {
range->maxSequence = newMax;
range_->maxSequence = newMax;
}
}
@@ -284,10 +296,10 @@ BackendInterface::setRange(uint32_t min, uint32_t max, bool force)
if (!force) {
ASSERT(min <= max, "Range min must be less than or equal to max");
ASSERT(not range.has_value(), "Range was already set");
ASSERT(not range_.has_value(), "Range was already set");
}
range = {min, max};
range_ = {.minSequence = min, .maxSequence = max};
}
LedgerPage
@@ -308,10 +320,10 @@ BackendInterface::fetchLedgerPage(
ripple::uint256 const& curCursor = [&]() {
if (!keys.empty())
return keys.back();
return (cursor ? *cursor : firstKey);
return (cursor ? *cursor : kFIRST_KEY);
}();
std::uint32_t const seq = outOfOrder ? range->maxSequence : ledgerSequence;
std::uint32_t const seq = outOfOrder ? range_->maxSequence : ledgerSequence;
auto succ = fetchSuccessorKey(curCursor, seq, yield);
if (!succ) {

View File

@@ -20,7 +20,7 @@
#pragma once
#include "data/DBHelpers.hpp"
#include "data/LedgerCache.hpp"
#include "data/LedgerCacheInterface.hpp"
#include "data/Types.hpp"
#include "etl/CorruptionDetector.hpp"
#include "util/log/Logger.hpp"
@@ -31,15 +31,17 @@
#include <boost/json.hpp>
#include <boost/json/object.hpp>
#include <boost/utility/result_of.hpp>
#include <ripple/basics/base_uint.h>
#include <ripple/protocol/AccountID.h>
#include <ripple/protocol/Fees.h>
#include <ripple/protocol/LedgerHeader.h>
#include <boost/uuid/uuid.hpp>
#include <xrpl/basics/base_uint.h>
#include <xrpl/protocol/AccountID.h>
#include <xrpl/protocol/Fees.h>
#include <xrpl/protocol/LedgerHeader.h>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <exception>
#include <functional>
#include <optional>
#include <shared_mutex>
#include <string>
@@ -65,7 +67,7 @@ public:
}
};
static constexpr std::size_t DEFAULT_WAIT_BETWEEN_RETRY = 500;
static constexpr std::size_t kDEFAULT_WAIT_BETWEEN_RETRY = 500;
/**
* @brief A helper function that catches DatabaseTimout exceptions and retries indefinitely.
*
@@ -76,9 +78,9 @@ static constexpr std::size_t DEFAULT_WAIT_BETWEEN_RETRY = 500;
*/
template <typename FnType>
auto
retryOnTimeout(FnType func, size_t waitMs = DEFAULT_WAIT_BETWEEN_RETRY)
retryOnTimeout(FnType func, size_t waitMs = kDEFAULT_WAIT_BETWEEN_RETRY)
{
static util::Logger const log{"Backend"};
static util::Logger const log{"Backend"}; // NOLINT(readability-identifier-naming)
while (true) {
try {
@@ -138,19 +140,28 @@ synchronousAndRetryOnTimeout(FnType&& func)
class BackendInterface {
protected:
mutable std::shared_mutex rngMtx_;
std::optional<LedgerRange> range;
LedgerCache cache_;
std::optional<etl::CorruptionDetector<LedgerCache>> corruptionDetector_;
std::optional<LedgerRange> range_;
std::reference_wrapper<LedgerCacheInterface> cache_;
std::optional<etl::CorruptionDetector> corruptionDetector_;
public:
BackendInterface() = default;
/**
* @brief Construct a new backend interface instance.
*
* @param cache The ledger cache to use
*/
BackendInterface(LedgerCacheInterface& cache) : cache_{cache}
{
}
virtual ~BackendInterface() = default;
// TODO: Remove this hack. Cache should not be exposed thru BackendInterface
// TODO https://github.com/XRPLF/clio/issues/1956: Remove this hack once old ETL is removed.
// Cache should not be exposed thru BackendInterface
/**
* @return Immutable cache
*/
LedgerCache const&
LedgerCacheInterface const&
cache() const
{
return cache_;
@@ -159,7 +170,7 @@ public:
/**
* @return Mutable cache
*/
LedgerCache&
LedgerCacheInterface&
cache()
{
return cache_;
@@ -171,7 +182,7 @@ public:
* @param detector The corruption detector to set
*/
void
setCorruptionDetector(etl::CorruptionDetector<LedgerCache> detector)
setCorruptionDetector(etl::CorruptionDetector detector)
{
corruptionDetector_ = std::move(detector);
}
@@ -364,6 +375,25 @@ public:
boost::asio::yield_context yield
) const = 0;
/**
* @brief Fetches all holders' balances for an MPTIssuanceID
*
* @param mptID The MPTIssuanceID you wish to query.
* @param limit Paging limit.
* @param cursorIn Optional cursor to allow us to pick up from where we last left off.
* @param ledgerSequence The ledger sequence to fetch for
* @param yield Currently executing coroutine.
* @return std::vector<Blob> of MPToken balances and an optional marker
*/
virtual MPTHoldersAndCursor
fetchMPTHolders(
ripple::uint192 const& mptID,
std::uint32_t const limit,
std::optional<ripple::AccountID> const& cursorIn,
std::uint32_t const ledgerSequence,
boost::asio::yield_context yield
) const = 0;
/**
* @brief Fetches a specific ledger object.
*
@@ -378,6 +408,19 @@ public:
std::optional<Blob>
fetchLedgerObject(ripple::uint256 const& key, std::uint32_t sequence, boost::asio::yield_context yield) const;
/**
* @brief Fetches a specific ledger object sequence.
*
* Currently the real fetch happens in doFetchLedgerObjectSeq
*
* @param key The key of the object
* @param sequence The ledger sequence to fetch for
* @param yield The coroutine context
* @return The sequence as uint32_t on success; nullopt otherwise
*/
std::optional<std::uint32_t>
fetchLedgerObjectSeq(ripple::uint256 const& key, std::uint32_t sequence, boost::asio::yield_context yield) const;
/**
* @brief Fetches all ledger objects by their keys.
*
@@ -407,6 +450,18 @@ public:
virtual std::optional<Blob>
doFetchLedgerObject(ripple::uint256 const& key, std::uint32_t sequence, boost::asio::yield_context yield) const = 0;
/**
* @brief The database-specific implementation for fetching a ledger object sequence.
*
* @param key The key to fetch for
* @param sequence The ledger sequence to fetch for
* @param yield The coroutine context
* @return The sequence as uint32_t on success; nullopt otherwise
*/
virtual std::optional<std::uint32_t>
doFetchLedgerObjectSeq(ripple::uint256 const& key, std::uint32_t sequence, boost::asio::yield_context yield)
const = 0;
/**
* @brief The database-specific implementation for fetching ledger objects.
*
@@ -504,6 +559,25 @@ public:
boost::asio::yield_context yield
) const;
/**
* @brief Fetches the status of migrator by name.
*
* @param migratorName The name of the migrator
* @param yield The coroutine context
* @return The status of the migrator if found; nullopt otherwise
*/
virtual std::optional<std::string>
fetchMigratorStatus(std::string const& migratorName, boost::asio::yield_context yield) const = 0;
/**
* @brief Fetches the data of all nodes in the cluster.
*
* @param yield The coroutine context
* @return The data of all nodes in the cluster.
*/
[[nodiscard]] virtual std::expected<std::vector<std::pair<boost::uuids::uuid, std::string>>, std::string>
fetchClioNodesData(boost::asio::yield_context yield) const = 0;
/**
* @brief Synchronously fetches the ledger range from DB.
*
@@ -584,6 +658,14 @@ public:
virtual void
writeAccountTransactions(std::vector<AccountTransactionsData> data) = 0;
/**
* @brief Write a new account transaction.
*
* @param record An object representing the account transaction
*/
virtual void
writeAccountTransaction(AccountTransactionsData record) = 0;
/**
* @brief Write NFTs transactions.
*
@@ -592,6 +674,14 @@ public:
virtual void
writeNFTTransactions(std::vector<NFTTransactionsData> const& data) = 0;
/**
* @brief Write accounts that started holding onto an MPT.
*
* @param data A vector of MPT ID and account pairs
*/
virtual void
writeMPTHolders(std::vector<MPTHolderData> const& data) = 0;
/**
* @brief Write a new successor.
*
@@ -602,6 +692,15 @@ public:
virtual void
writeSuccessor(std::string&& key, std::uint32_t seq, std::string&& successor) = 0;
/**
* @brief Write a node message. Used by ClusterCommunicationService
*
* @param uuid The UUID of the node
* @param message The message to write
*/
virtual void
writeNodeMessage(boost::uuids::uuid const& uuid, std::string message) = 0;
/**
* @brief Starts a write transaction with the DB. No-op for cassandra.
*
@@ -621,6 +720,21 @@ public:
bool
finishWrites(std::uint32_t ledgerSequence);
/**
* @brief Wait for all pending writes to finish.
*/
virtual void
waitForWritesToFinish() = 0;
/**
* @brief Mark the migration status of a migrator as Migrated in the database
*
* @param migratorName The name of the migrator
* @param status The status to set
*/
virtual void
writeMigratorStatus(std::string const& migratorName, std::string const& status) = 0;
/**
* @return true if database is overwhelmed; false otherwise
*/
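The cluster-data accessor added above returns std::expected, so callers branch on has_value() rather than relying on exceptions. Below is a small consumption sketch; it assumes a BackendInterface reference and a yield context provided by the caller, and the function name is hypothetical.

#include "data/BackendInterface.hpp"

#include <boost/asio/spawn.hpp>
#include <boost/uuid/uuid_io.hpp>

#include <cstddef>
#include <iostream>

// Illustrative only: counts the node entries currently stored in the DB.
std::size_t
countClusterNodes(data::BackendInterface const& backend, boost::asio::yield_context yield)
{
    auto const nodes = backend.fetchClioNodesData(yield);
    if (!nodes.has_value()) {
        std::cerr << "fetchClioNodesData failed: " << nodes.error() << '\n';
        return 0;
    }

    for (auto const& [uuid, message] : nodes.value())
        std::cout << boost::uuids::to_string(uuid) << " -> " << message << '\n';

    return nodes->size();
}
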

View File

@@ -1,7 +1,8 @@
add_library(clio_data)
target_sources(
clio_data
PRIVATE BackendCounters.cpp
PRIVATE AmendmentCenter.cpp
BackendCounters.cpp
BackendInterface.cpp
LedgerCache.cpp
cassandra/impl/Future.cpp

View File

@@ -21,6 +21,7 @@
#include "data/BackendInterface.hpp"
#include "data/DBHelpers.hpp"
#include "data/LedgerCacheInterface.hpp"
#include "data/Types.hpp"
#include "data/cassandra/Concepts.hpp"
#include "data/cassandra/Handle.hpp"
@@ -35,15 +36,19 @@
#include <boost/asio/spawn.hpp>
#include <boost/json/object.hpp>
#include <boost/uuid/string_generator.hpp>
#include <boost/uuid/uuid.hpp>
#include <cassandra.h>
#include <ripple/basics/Blob.h>
#include <ripple/basics/base_uint.h>
#include <ripple/basics/strHex.h>
#include <ripple/protocol/AccountID.h>
#include <ripple/protocol/Indexes.h>
#include <ripple/protocol/LedgerHeader.h>
#include <ripple/protocol/nft.h>
#include <fmt/core.h>
#include <xrpl/basics/Blob.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/basics/strHex.h>
#include <xrpl/protocol/AccountID.h>
#include <xrpl/protocol/Indexes.h>
#include <xrpl/protocol/LedgerHeader.h>
#include <xrpl/protocol/nft.h>
#include <algorithm>
#include <atomic>
#include <chrono>
#include <cstddef>
@@ -73,28 +78,32 @@ class BasicCassandraBackend : public BackendInterface {
SettingsProviderType settingsProvider_;
Schema<SettingsProviderType> schema_;
std::atomic_uint32_t ledgerSequence_ = 0u;
protected:
Handle handle_;
// have to be mutable because BackendInterface constness :(
mutable ExecutionStrategyType executor_;
std::atomic_uint32_t ledgerSequence_ = 0u;
public:
/**
* @brief Create a new cassandra/scylla backend instance.
*
* @param settingsProvider The settings provider to use
* @param cache The ledger cache to use
* @param readOnly Whether the database should be in readonly mode
*/
BasicCassandraBackend(SettingsProviderType settingsProvider, bool readOnly)
: settingsProvider_{std::move(settingsProvider)}
BasicCassandraBackend(SettingsProviderType settingsProvider, data::LedgerCacheInterface& cache, bool readOnly)
: BackendInterface(cache)
, settingsProvider_{std::move(settingsProvider)}
, schema_{settingsProvider_}
, handle_{settingsProvider_.getSettings()}
, executor_{settingsProvider_.getSettings(), handle_}
{
if (auto const res = handle_.connect(); not res)
throw std::runtime_error("Could not connect to Cassandra: " + res.error());
throw std::runtime_error("Could not connect to database: " + res.error());
if (not readOnly) {
if (auto const res = handle_.execute(schema_.createKeyspace); not res) {
@@ -111,13 +120,24 @@ public:
try {
schema_.prepareStatements(handle_);
} catch (std::runtime_error const& ex) {
LOG(log_.error()) << "Failed to prepare the statements: " << ex.what() << "; readOnly: " << readOnly;
throw;
auto const error = fmt::format(
"Failed to prepare the statements: {}; readOnly: {}. ReadOnly should be turned off or another Clio "
"node with write access to DB should be started first.",
ex.what(),
readOnly
);
LOG(log_.error()) << error;
throw std::runtime_error(error);
}
LOG(log_.info()) << "Created (revamped) CassandraBackend";
}
/*
* @brief Move constructor is deleted because handle_ is shared by reference with executor
*/
BasicCassandraBackend(BasicCassandraBackend&&) = delete;
TransactionsAndCursor
fetchAccountTransactions(
ripple::AccountID const& account,
@@ -129,7 +149,7 @@ public:
{
auto rng = fetchLedgerRange();
if (!rng)
return {{}, {}};
return {.txns = {}, .cursor = {}};
Statement const statement = [this, forward, &account]() {
if (forward)
@@ -186,13 +206,18 @@ public:
return {txns, {}};
}
void
waitForWritesToFinish() override
{
executor_.sync();
}
bool
doFinishWrites() override
{
// wait for other threads to finish their writes
executor_.sync();
waitForWritesToFinish();
if (!range) {
if (!range_) {
executor_.writeSync(schema_->updateLedgerRange, ledgerSequence_, false, ledgerSequence_);
}
@@ -336,8 +361,8 @@ public:
auto const& result = res.value();
if (not result.hasRows()) {
LOG(log_.error()) << "Could not fetch all transaction hashes - no rows; ledger = "
<< std::to_string(ledgerSequence);
LOG(log_.warn()) << "Could not fetch all transaction hashes - no rows; ledger = "
<< std::to_string(ledgerSequence);
return {};
}
@@ -346,7 +371,7 @@ public:
hashes.push_back(std::move(hash));
auto end = std::chrono::system_clock::now();
LOG(log_.debug()) << "Fetched " << hashes.size() << " transaction hashes from Cassandra in "
LOG(log_.debug()) << "Fetched " << hashes.size() << " transaction hashes from database in "
<< std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()
<< " milliseconds";
@@ -400,7 +425,7 @@ public:
{
auto rng = fetchLedgerRange();
if (!rng)
return {{}, {}};
return {.txns = {}, .cursor = {}};
Statement const statement = [this, forward, &tokenID]() {
if (forward)
@@ -548,6 +573,45 @@ public:
return ret;
}
MPTHoldersAndCursor
fetchMPTHolders(
ripple::uint192 const& mptID,
std::uint32_t const limit,
std::optional<ripple::AccountID> const& cursorIn,
std::uint32_t const ledgerSequence,
boost::asio::yield_context yield
) const override
{
auto const holderEntries = executor_.read(
yield, schema_->selectMPTHolders, mptID, cursorIn.value_or(ripple::AccountID(0)), Limit{limit}
);
auto const& holderResults = holderEntries.value();
if (not holderResults.hasRows()) {
LOG(log_.debug()) << "No rows returned";
return {};
}
std::vector<ripple::uint256> mptKeys;
std::optional<ripple::AccountID> cursor;
for (auto const [holder] : extract<ripple::AccountID>(holderResults)) {
mptKeys.push_back(ripple::keylet::mptoken(mptID, holder).key);
cursor = holder;
}
auto mptObjects = doFetchLedgerObjects(mptKeys, ledgerSequence, yield);
auto it = std::remove_if(mptObjects.begin(), mptObjects.end(), [](Blob const& mpt) { return mpt.empty(); });
mptObjects.erase(it, mptObjects.end());
ASSERT(mptKeys.size() <= limit, "Number of keys can't exceed the limit");
if (mptKeys.size() == limit)
return {mptObjects, cursor};
return {mptObjects, {}};
}
std::optional<Blob>
doFetchLedgerObject(ripple::uint256 const& key, std::uint32_t const sequence, boost::asio::yield_context yield)
const override
@@ -567,6 +631,24 @@ public:
return std::nullopt;
}
std::optional<std::uint32_t>
doFetchLedgerObjectSeq(ripple::uint256 const& key, std::uint32_t const sequence, boost::asio::yield_context yield)
const override
{
LOG(log_.debug()) << "Fetching ledger object for seq " << sequence << ", key = " << ripple::to_string(key);
if (auto const res = executor_.read(yield, schema_->selectObject, key, sequence); res) {
if (auto const result = res->template get<Blob, std::uint32_t>(); result) {
auto [_, seq] = result.value();
return seq;
}
LOG(log_.debug()) << "Could not fetch ledger object sequence - no rows";
} else {
LOG(log_.error()) << "Could not fetch ledger object sequence: " << res.error();
}
return std::nullopt;
}
std::optional<TransactionAndMetadata>
fetchTransaction(ripple::uint256 const& hash, boost::asio::yield_context yield) const override
{
@@ -590,7 +672,7 @@ public:
{
if (auto const res = executor_.read(yield, schema_->selectSuccessor, key, ledgerSequence); res) {
if (auto const result = res->template get<ripple::uint256>(); result) {
if (*result == lastKey)
if (*result == kLAST_KEY)
return std::nullopt;
return result;
}
@@ -640,7 +722,7 @@ public:
});
ASSERT(numHashes == results.size(), "Number of hashes and results must match");
LOG(log_.debug()) << "Fetched " << numHashes << " transactions from Cassandra in " << timeDiff
LOG(log_.debug()) << "Fetched " << numHashes << " transactions from database in " << timeDiff
<< " milliseconds";
return results;
}
@@ -760,7 +842,7 @@ public:
if (keys.empty())
return {};
LOG(log_.debug()) << "Fetched " << keys.size() << " diff hashes from Cassandra in " << timeDiff
LOG(log_.debug()) << "Fetched " << keys.size() << " diff hashes from database in " << timeDiff
<< " milliseconds";
auto const objs = fetchLedgerObjects(keys, ledgerSequence, yield);
@@ -778,12 +860,48 @@ public:
return results;
}
std::optional<std::string>
fetchMigratorStatus(std::string const& migratorName, boost::asio::yield_context yield) const override
{
auto const res = executor_.read(yield, schema_->selectMigratorStatus, Text(migratorName));
if (not res) {
LOG(log_.error()) << "Could not fetch migrator status: " << res.error();
return {};
}
auto const& results = res.value();
if (not results) {
return {};
}
for (auto [statusString] : extract<std::string>(results))
return statusString;
return {};
}
std::expected<std::vector<std::pair<boost::uuids::uuid, std::string>>, std::string>
fetchClioNodesData(boost::asio::yield_context yield) const override
{
auto const readResult = executor_.read(yield, schema_->selectClioNodesData);
if (not readResult)
return std::unexpected{readResult.error().message()};
std::vector<std::pair<boost::uuids::uuid, std::string>> result;
for (auto [uuid, message] : extract<boost::uuids::uuid, std::string>(*readResult)) {
result.emplace_back(uuid, std::move(message));
}
return result;
}
void
doWriteLedgerObject(std::string&& key, std::uint32_t const seq, std::string&& blob) override
{
LOG(log_.trace()) << " Writing ledger object " << key.size() << ":" << seq << " [" << blob.size() << " bytes]";
if (range)
if (range_)
executor_.write(schema_->insertDiff, seq, key);
executor_.write(schema_->insertObject, std::move(key), seq, std::move(blob));
@@ -807,30 +925,42 @@ public:
statements.reserve(data.size() * 10); // assume 10 transactions avg
for (auto& record : data) {
std::transform(
std::begin(record.accounts),
std::end(record.accounts),
std::back_inserter(statements),
[this, &record](auto&& account) {
return schema_->insertAccountTx.bind(
std::forward<decltype(account)>(account),
std::make_tuple(record.ledgerSequence, record.transactionIndex),
record.txHash
);
}
);
std::ranges::transform(record.accounts, std::back_inserter(statements), [this, &record](auto&& account) {
return schema_->insertAccountTx.bind(
std::forward<decltype(account)>(account),
std::make_tuple(record.ledgerSequence, record.transactionIndex),
record.txHash
);
});
}
executor_.write(std::move(statements));
}
void
writeAccountTransaction(AccountTransactionsData record) override
{
std::vector<Statement> statements;
statements.reserve(record.accounts.size());
std::ranges::transform(record.accounts, std::back_inserter(statements), [this, &record](auto&& account) {
return schema_->insertAccountTx.bind(
std::forward<decltype(account)>(account),
std::make_tuple(record.ledgerSequence, record.transactionIndex),
record.txHash
);
});
executor_.write(std::move(statements));
}
void
writeNFTTransactions(std::vector<NFTTransactionsData> const& data) override
{
std::vector<Statement> statements;
statements.reserve(data.size());
std::transform(std::cbegin(data), std::cend(data), std::back_inserter(statements), [this](auto const& record) {
std::ranges::transform(data, std::back_inserter(statements), [this](auto const& record) {
return schema_->insertNFTTx.bind(
record.tokenID, std::make_tuple(record.ledgerSequence, record.transactionIndex), record.txHash
);
@@ -848,7 +978,7 @@ public:
std::string&& metadata
) override
{
LOG(log_.trace()) << "Writing txn to cassandra";
LOG(log_.trace()) << "Writing txn to database";
executor_.write(schema_->insertLedgerTransaction, seq, hash);
executor_.write(
@@ -863,27 +993,45 @@ public:
statements.reserve(data.size() * 3);
for (NFTsData const& record : data) {
statements.push_back(
schema_->insertNFT.bind(record.tokenID, record.ledgerSequence, record.owner, record.isBurned)
);
if (!record.onlyUriChanged) {
statements.push_back(
schema_->insertNFT.bind(record.tokenID, record.ledgerSequence, record.owner, record.isBurned)
);
// If `uri` is set (and it can be set to an empty uri), we know this
// is a net-new NFT. That is, this NFT has not been seen before by
// us _OR_ it is in the extreme edge case of a re-minted NFT ID with
// the same NFT ID as an already-burned token. In this case, we need
// to record the URI and link to the issuer_nf_tokens table.
if (record.uri) {
statements.push_back(schema_->insertIssuerNFT.bind(
ripple::nft::getIssuer(record.tokenID),
static_cast<uint32_t>(ripple::nft::getTaxon(record.tokenID)),
record.tokenID
));
// If `uri` is set (and it can be set to an empty uri), we know this
// is a net-new NFT. That is, this NFT has not been seen before by
// us _OR_ it is in the extreme edge case of a re-minted NFT ID with
// the same NFT ID as an already-burned token. In this case, we need
// to record the URI and link to the issuer_nf_tokens table.
if (record.uri) {
statements.push_back(schema_->insertIssuerNFT.bind(
ripple::nft::getIssuer(record.tokenID),
static_cast<uint32_t>(ripple::nft::getTaxon(record.tokenID)),
record.tokenID
));
statements.push_back(
schema_->insertNFTURI.bind(record.tokenID, record.ledgerSequence, record.uri.value())
);
}
} else {
// only uri changed, we update the uri table only
statements.push_back(
schema_->insertNFTURI.bind(record.tokenID, record.ledgerSequence, record.uri.value())
);
}
}
executor_.writeEach(std::move(statements));
}
void
writeMPTHolders(std::vector<MPTHolderData> const& data) override
{
std::vector<Statement> statements;
statements.reserve(data.size());
for (auto [mptId, holder] : data)
statements.push_back(schema_->insertMPTHolder.bind(mptId, holder));
executor_.write(std::move(statements));
}
@@ -894,6 +1042,20 @@ public:
// probably was used in PG to start a transaction or smth.
}
void
writeMigratorStatus(std::string const& migratorName, std::string const& status) override
{
executor_.writeSync(
schema_->insertMigratorStatus, data::cassandra::Text{migratorName}, data::cassandra::Text(status)
);
}
void
writeNodeMessage(boost::uuids::uuid const& uuid, std::string message) override
{
executor_.writeSync(schema_->updateClioNodeMessage, data::cassandra::Text{std::move(message)}, uuid);
}
bool
isTooBusy() const override
{

View File

@@ -23,16 +23,16 @@
#include "util/Assert.hpp"
#include <boost/container/flat_set.hpp>
#include <ripple/basics/Blob.h>
#include <ripple/basics/Log.h>
#include <ripple/basics/StringUtilities.h>
#include <ripple/basics/base_uint.h>
#include <ripple/protocol/AccountID.h>
#include <ripple/protocol/SField.h>
#include <ripple/protocol/STAccount.h>
#include <ripple/protocol/STLedgerEntry.h>
#include <ripple/protocol/Serializer.h>
#include <ripple/protocol/TxMeta.h>
#include <xrpl/basics/Blob.h>
#include <xrpl/basics/Log.h>
#include <xrpl/basics/StringUtilities.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/protocol/AccountID.h>
#include <xrpl/protocol/SField.h>
#include <xrpl/protocol/STAccount.h>
#include <xrpl/protocol/STLedgerEntry.h>
#include <xrpl/protocol/Serializer.h>
#include <xrpl/protocol/TxMeta.h>
#include <cstddef>
#include <cstdint>
@@ -54,7 +54,7 @@ struct AccountTransactionsData {
* @param meta The transaction metadata
* @param txHash The transaction hash
*/
AccountTransactionsData(ripple::TxMeta& meta, ripple::uint256 const& txHash)
AccountTransactionsData(ripple::TxMeta const& meta, ripple::uint256 const& txHash)
: accounts(meta.getAffectedAccounts())
, ledgerSequence(meta.getLgrSeq())
, transactionIndex(meta.getIndex())
@@ -107,6 +107,7 @@ struct NFTsData {
ripple::AccountID owner;
std::optional<ripple::Blob> uri;
bool isBurned = false;
bool onlyUriChanged = false; // Whether only the URI was changed
/**
* @brief Construct a new NFTsData object
@@ -170,6 +171,31 @@ struct NFTsData {
: tokenID(tokenID), ledgerSequence(ledgerSequence), owner(owner), uri(uri)
{
}
/**
* @brief Construct a new NFTsData object with only the URI changed
*
* @param tokenID The token ID
* @param meta The transaction metadata
* @param uri The new URI
*
*/
NFTsData(ripple::uint256 const& tokenID, ripple::TxMeta const& meta, ripple::Blob const& uri)
: tokenID(tokenID)
, ledgerSequence(meta.getLgrSeq())
, transactionIndex(meta.getIndex())
, uri(uri)
, onlyUriChanged(true)
{
}
};
/**
* @brief Represents an MPT and holder pair
*/
struct MPTHolderData {
ripple::uint192 mptID;
ripple::AccountID holder;
};
/**
@@ -182,11 +208,11 @@ template <typename T>
inline bool
isOffer(T const& object)
{
static constexpr short OFFER_OFFSET = 0x006f;
static constexpr short SHIFT = 8;
static constexpr short kOFFER_OFFSET = 0x006f;
static constexpr short kSHIFT = 8;
short offer_bytes = (object[1] << SHIFT) | object[2];
return offer_bytes == OFFER_OFFSET;
short offerBytes = (object[1] << kSHIFT) | object[2];
return offerBytes == kOFFER_OFFSET;
}
/**
@@ -215,9 +241,9 @@ template <typename T>
inline bool
isDirNode(T const& object)
{
static constexpr short DIR_NODE_SPACE_KEY = 0x0064;
static constexpr short kDIR_NODE_SPACE_KEY = 0x0064;
short const spaceKey = (object.data()[1] << 8) | object.data()[2];
return spaceKey == DIR_NODE_SPACE_KEY;
return spaceKey == kDIR_NODE_SPACE_KEY;
}
/**
@@ -265,12 +291,12 @@ template <typename T>
inline ripple::uint256
getBookBase(T const& key)
{
static constexpr size_t KEY_SIZE = 24;
static constexpr size_t kEY_SIZE = 24;
ASSERT(key.size() == ripple::uint256::size(), "Invalid key size {}", key.size());
ripple::uint256 ret;
for (size_t i = 0; i < KEY_SIZE; ++i)
for (size_t i = 0; i < kEY_SIZE; ++i)
ret.data()[i] = key.data()[i];
return ret;
@@ -289,4 +315,4 @@ uint256ToString(ripple::uint256 const& input)
}
/** @brief The ripple epoch start timestamp. Midnight on 1st January 2000. */
static constexpr std::uint32_t rippleEpochStart = 946684800;
static constexpr std::uint32_t kRIPPLE_EPOCH_START = 946684800;
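Since the renamed constant encodes the offset between the XRPL epoch and the Unix epoch, a small hypothetical conversion helper illustrates its intended use; this helper is a sketch and does not appear in the diff.

```
#include <cstdint>

// The XRPL epoch starts at 2000-01-01T00:00:00Z; kRIPPLE_EPOCH_START is that
// instant expressed as a Unix timestamp, so adding it converts ledger close
// times to Unix time. (Hypothetical helper, not part of this diff.)
static constexpr std::uint32_t kRIPPLE_EPOCH_START = 946684800;

inline std::int64_t
rippleTimeToUnixTime(std::uint32_t rippleSeconds)
{
    return static_cast<std::int64_t>(rippleSeconds) + kRIPPLE_EPOCH_START;
}
```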

View File

@@ -20,9 +20,10 @@
#include "data/LedgerCache.hpp"
#include "data/Types.hpp"
#include "etlng/Models.hpp"
#include "util/Assert.hpp"
#include <ripple/basics/base_uint.h>
#include <xrpl/basics/base_uint.h>
#include <cstddef>
#include <cstdint>
@@ -75,7 +76,7 @@ LedgerCache::update(std::vector<LedgerObject> const& objs, uint32_t seq, bool is
auto& e = map_[obj.key];
if (seq > e.seq) {
e = {seq, obj.blob};
e = {.seq = seq, .blob = obj.blob};
}
} else {
map_.erase(obj.key);
@@ -87,6 +88,42 @@ LedgerCache::update(std::vector<LedgerObject> const& objs, uint32_t seq, bool is
}
}
void
LedgerCache::update(std::vector<etlng::model::Object> const& objs, uint32_t seq)
{
if (disabled_)
return;
std::scoped_lock const lck{mtx_};
if (seq > latestSeq_) {
ASSERT(
seq == latestSeq_ + 1 || latestSeq_ == 0,
"New sequence must be either next or first. seq = {}, latestSeq_ = {}",
seq,
latestSeq_
);
latestSeq_ = seq;
}
deleted_.clear(); // previous update's deletes no longer needed
for (auto const& obj : objs) {
if (!obj.data.empty()) {
auto& e = map_[obj.key];
if (seq > e.seq)
e = {.seq = seq, .blob = obj.data};
} else {
if (map_.contains(obj.key))
deleted_[obj.key] = map_[obj.key];
map_.erase(obj.key);
if (!full_)
deletes_.insert(obj.key);
}
}
cv_.notify_all();
}
std::optional<LedgerObject>
LedgerCache::getSuccessor(ripple::uint256 const& key, uint32_t seq) const
{
@@ -101,7 +138,7 @@ LedgerCache::getSuccessor(ripple::uint256 const& key, uint32_t seq) const
if (e == map_.end())
return {};
++successorHitCounter_.get();
return {{e->first, e->second.blob}};
return {{.key = e->first, .blob = e->second.blob}};
}
std::optional<LedgerObject>
@@ -117,7 +154,7 @@ LedgerCache::getPredecessor(ripple::uint256 const& key, uint32_t seq) const
if (e == map_.begin())
return {};
--e;
return {{e->first, e->second.blob}};
return {{.key = e->first, .blob = e->second.blob}};
}
std::optional<Blob>
@@ -139,6 +176,29 @@ LedgerCache::get(ripple::uint256 const& key, uint32_t seq) const
return {e->second.blob};
}
std::optional<Blob>
LedgerCache::getDeleted(ripple::uint256 const& key, uint32_t seq) const
{
if (disabled_)
return std::nullopt;
std::shared_lock const lck{mtx_};
if (seq > latestSeq_)
return std::nullopt;
++objectReqCounter_.get();
auto e = deleted_.find(key);
if (e == deleted_.end())
return std::nullopt;
if (seq < e->second.seq)
return std::nullopt;
++objectHitCounter_.get();
return {e->second.blob};
}
void
LedgerCache::setDisabled()
{

View File

@@ -19,15 +19,17 @@
#pragma once
#include "data/LedgerCacheInterface.hpp"
#include "data/Types.hpp"
#include "etlng/Models.hpp"
#include "util/prometheus/Bool.hpp"
#include "util/prometheus/Counter.hpp"
#include "util/prometheus/Label.hpp"
#include "util/prometheus/Prometheus.hpp"
#include <ripple/basics/base_uint.h>
#include <ripple/basics/hardened_hash.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/basics/hardened_hash.h>
#include <atomic>
#include <condition_variable>
#include <cstddef>
#include <cstdint>
@@ -43,7 +45,7 @@ namespace data {
/**
* @brief Cache for an entire ledger.
*/
class LedgerCache {
class LedgerCache : public LedgerCacheInterface {
struct CacheEntry {
uint32_t seq = 0;
Blob blob;
@@ -72,120 +74,70 @@ class LedgerCache {
)};
std::map<ripple::uint256, CacheEntry> map_;
std::map<ripple::uint256, CacheEntry> deleted_;
mutable std::shared_mutex mtx_;
std::condition_variable_any cv_;
uint32_t latestSeq_ = 0;
std::atomic_bool full_ = false;
std::atomic_bool disabled_ = false;
util::prometheus::Bool full_{PrometheusService::boolMetric(
"ledger_cache_full",
util::prometheus::Labels{},
"Whether ledger cache full or not"
)};
util::prometheus::Bool disabled_{PrometheusService::boolMetric(
"ledger_cache_disabled",
util::prometheus::Labels{},
"Whether ledger cache is disabled or not"
)};
// temporary set to prevent background thread from writing already deleted data. not used when cache is full
std::unordered_set<ripple::uint256, ripple::hardened_hash<>> deletes_;
public:
/**
* @brief Update the cache with new ledger objects.
*
* @param objs The ledger objects to update cache with
* @param seq The sequence to update cache for
* @param isBackground Should be set to true when writing old data from a background thread
*/
void
update(std::vector<LedgerObject> const& objs, uint32_t seq, bool isBackground = false);
update(std::vector<LedgerObject> const& objs, uint32_t seq, bool isBackground) override;
void
update(std::vector<etlng::model::Object> const& objs, uint32_t seq) override;
/**
* @brief Fetch a cached object by its key and sequence number.
*
* @param key The key to fetch for
* @param seq The sequence to fetch for
* @return If found in cache, will return the cached Blob; otherwise nullopt is returned
*/
std::optional<Blob>
get(ripple::uint256 const& key, uint32_t seq) const;
get(ripple::uint256 const& key, uint32_t seq) const override;
std::optional<Blob>
getDeleted(ripple::uint256 const& key, uint32_t seq) const override;
/**
* @brief Gets a cached successor.
*
* Note: This function always returns std::nullopt when @ref isFull() returns false.
*
* @param key The key to fetch for
* @param seq The sequence to fetch for
* @return If found in cache, will return the cached successor; otherwise nullopt is returned
*/
std::optional<LedgerObject>
getSuccessor(ripple::uint256 const& key, uint32_t seq) const;
getSuccessor(ripple::uint256 const& key, uint32_t seq) const override;
/**
* @brief Gets a cached predecessor.
*
* Note: This function always returns std::nullopt when @ref isFull() returns false.
*
* @param key The key to fetch for
* @param seq The sequence to fetch for
* @return If found in cache, will return the cached predecessor; otherwise nullopt is returned
*/
std::optional<LedgerObject>
getPredecessor(ripple::uint256 const& key, uint32_t seq) const;
getPredecessor(ripple::uint256 const& key, uint32_t seq) const override;
/**
* @brief Disables the cache.
*/
void
setDisabled();
setDisabled() override;
/**
* @return true if the cache is disabled; false otherwise
*/
bool
isDisabled() const;
isDisabled() const override;
/**
* @brief Sets the full flag to true.
*
* This is used when the cache is loaded in its entirety at startup of the application. This can be either loaded from DB,
* populated together with initial ledger download (on first run) or downloaded from a peer node (specified in
* config).
*/
void
setFull();
setFull() override;
/**
* @return The latest ledger sequence for which cache is available.
*/
uint32_t
latestLedgerSequence() const;
latestLedgerSequence() const override;
/**
* @return true if the cache has all data for the most recent ledger; false otherwise
*/
bool
isFull() const;
isFull() const override;
/**
* @return The total size of the cache.
*/
size_t
size() const;
size() const override;
/**
* @return A number representing the success rate of hitting an object in the cache versus missing it.
*/
float
getObjectHitRate() const;
getObjectHitRate() const override;
/**
* @return A number representing the success rate of hitting a successor in the cache versus missing it.
*/
float
getSuccessorHitRate() const;
getSuccessorHitRate() const override;
/**
* @brief Waits until the cache contains a specific sequence.
*
* @param seq The sequence to wait for
*/
void
waitUntilCacheContainsSeq(uint32_t seq);
waitUntilCacheContainsSeq(uint32_t seq) override;
};
} // namespace data

View File

@@ -0,0 +1,173 @@
//------------------------------------------------------------------------------
/*
This file is part of clio: https://github.com/XRPLF/clio
Copyright (c) 2025, the clio developers.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#pragma once
#include "data/Types.hpp"
#include "etlng/Models.hpp"
#include <xrpl/basics/base_uint.h>
#include <xrpl/basics/hardened_hash.h>
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>
namespace data {
/**
* @brief Cache for an entire ledger.
*/
class LedgerCacheInterface {
public:
virtual ~LedgerCacheInterface() = default;
LedgerCacheInterface() = default;
LedgerCacheInterface(LedgerCacheInterface&&) = delete;
LedgerCacheInterface(LedgerCacheInterface const&) = delete;
LedgerCacheInterface&
operator=(LedgerCacheInterface&&) = delete;
LedgerCacheInterface&
operator=(LedgerCacheInterface const&) = delete;
/**
* @brief Update the cache with new ledger objects.
*
* @param objs The ledger objects to update cache with
* @param seq The sequence to update cache for
* @param isBackground Should be set to true when writing old data from a background thread
*/
virtual void
update(std::vector<LedgerObject> const& objs, uint32_t seq, bool isBackground = false) = 0;
/**
* @brief Update the cache with new ledger objects.
*
* @param objs The ledger objects to update cache with
* @param seq The sequence to update cache for
*/
virtual void
update(std::vector<etlng::model::Object> const& objs, uint32_t seq) = 0;
/**
* @brief Fetch a cached object by its key and sequence number.
*
* @param key The key to fetch for
* @param seq The sequence to fetch for
* @return If found in cache, will return the cached Blob; otherwise nullopt is returned
*/
virtual std::optional<Blob>
get(ripple::uint256 const& key, uint32_t seq) const = 0;
/**
* @brief Fetch a recently deleted object by its key and sequence number.
*
* @param key The key to fetch for
* @param seq The sequence to fetch for
* @return If found in deleted cache, will return the cached Blob; otherwise nullopt is returned
*/
virtual std::optional<Blob>
getDeleted(ripple::uint256 const& key, uint32_t seq) const = 0;
/**
* @brief Gets a cached successor.
*
* Note: This function always returns std::nullopt when @ref isFull() returns false.
*
* @param key The key to fetch for
* @param seq The sequence to fetch for
* @return If found in cache, will return the cached successor; otherwise nullopt is returned
*/
virtual std::optional<LedgerObject>
getSuccessor(ripple::uint256 const& key, uint32_t seq) const = 0;
/**
* @brief Gets a cached predecessor.
*
* Note: This function always returns std::nullopt when @ref isFull() returns false.
*
* @param key The key to fetch for
* @param seq The sequence to fetch for
* @return If found in cache, will return the cached predecessor; otherwise nullopt is returned
*/
virtual std::optional<LedgerObject>
getPredecessor(ripple::uint256 const& key, uint32_t seq) const = 0;
/**
* @brief Disables the cache.
*/
virtual void
setDisabled() = 0;
/**
* @return true if the cache is disabled; false otherwise
*/
virtual bool
isDisabled() const = 0;
/**
* @brief Sets the full flag to true.
*
* This is used when the cache is loaded in its entirety at startup of the application. This can be either loaded from DB,
* populated together with initial ledger download (on first run) or downloaded from a peer node (specified in
* config).
*/
virtual void
setFull() = 0;
/**
* @return The latest ledger sequence for which cache is available.
*/
virtual uint32_t
latestLedgerSequence() const = 0;
/**
* @return true if the cache has all data for the most recent ledger; false otherwise
*/
virtual bool
isFull() const = 0;
/**
* @return The total size of the cache.
*/
virtual size_t
size() const = 0;
/**
* @return A number representing the success rate of hitting an object in the cache versus missing it.
*/
virtual float
getObjectHitRate() const = 0;
/**
* @return A number representing the success rate of hitting a successor in the cache versus missing it.
*/
virtual float
getSuccessorHitRate() const = 0;
/**
* @brief Waits until the cache contains a specific sequence.
*
* @param seq The sequence to wait for
*/
virtual void
waitUntilCacheContainsSeq(uint32_t seq) = 0;
};
} // namespace data
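To show how a consumer of the new interface might combine the live cache with the recently-deleted map, here is a minimal hypothetical read helper; it is a sketch written against the interface declared above, not code from this diff, and the include path is assumed.

```
#include "data/LedgerCacheInterface.hpp"  // assumed include path
#include "data/Types.hpp"

#include <xrpl/basics/base_uint.h>

#include <cstdint>
#include <optional>

// Hypothetical helper: prefer the live cache, then fall back to the
// recently-deleted entries exposed by the new getDeleted() method.
inline std::optional<data::Blob>
readCachedOrDeleted(data::LedgerCacheInterface const& cache, ripple::uint256 const& key, uint32_t seq)
{
    if (cache.isDisabled())
        return std::nullopt;
    if (auto blob = cache.get(key, seq); blob.has_value())
        return blob;
    return cache.getDeleted(key, seq);
}
```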

View File

@@ -262,3 +262,15 @@ CREATE TABLE clio.nf_token_transactions (
```
The `nf_token_transactions` table is the NFT counterpart to `account_tx`: it exists for the same reasons and plays an analogous role, and it drives the `nft_history` API.
### migrator_status
```
CREATE TABLE clio.migrator_status (
migrator_name TEXT, # The name of the migrator
status TEXT, # The status of the migrator
PRIMARY KEY (migrator_name)
)
```
The `migrator_status` table stores the status of each migrator in this database. If a migrator's status is `migrated`, this database has already completed the data migration for that migrator.
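As a rough illustration (not part of this diff), a migration routine could consult and update this table through the backend methods added above; the surrounding control flow, the `BackendInterface&` parameter, and the include paths are assumptions.

```
#include "data/BackendInterface.hpp"  // assumed include path

#include <boost/asio/spawn.hpp>

#include <string>

// Hedged sketch: check whether a migrator already ran, run it if not, then
// record completion. fetchMigratorStatus/writeMigratorStatus are the methods
// added in this diff; everything else here is illustrative.
void
runMigratorOnce(BackendInterface& backend, std::string const& name, boost::asio::yield_context yield)
{
    auto const status = backend.fetchMigratorStatus(name, yield);
    if (status && *status == "migrated")
        return;  // this database already holds migrated data for this migrator

    // ... perform the actual data migration here ...

    backend.writeMigratorStatus(name, "migrated");
}
```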

View File

@@ -19,11 +19,13 @@
#pragma once
#include <ripple/basics/base_uint.h>
#include <ripple/protocol/AccountID.h>
#include <xrpl/basics/base_uint.h>
#include <xrpl/protocol/AccountID.h>
#include <concepts>
#include <cstdint>
#include <optional>
#include <string>
#include <string_view>
#include <tuple>
#include <utility>
@@ -231,6 +233,14 @@ struct NFTsAndCursor {
std::optional<ripple::uint256> cursor;
};
/**
* @brief Represents an array of MPTokens
*/
struct MPTHoldersAndCursor {
std::vector<Blob> mptokens;
std::optional<ripple::AccountID> cursor;
};
/**
* @brief Stores a range of sequences as a min and max pair.
*/
@@ -239,8 +249,71 @@ struct LedgerRange {
std::uint32_t maxSequence = 0;
};
constexpr ripple::uint256 firstKey{"0000000000000000000000000000000000000000000000000000000000000000"};
constexpr ripple::uint256 lastKey{"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"};
constexpr ripple::uint256 hi192{"0000000000000000000000000000000000000000000000001111111111111111"};
/**
* @brief Represents an amendment in the XRPL
*/
struct Amendment {
std::string name;
ripple::uint256 feature;
bool isSupportedByXRPL = false;
bool isSupportedByClio = false;
bool isRetired = false;
/**
* @brief Get the amendment Id from its name
*
* @param name The name of the amendment
* @return The amendment Id as uint256
*/
static ripple::uint256
getAmendmentId(std::string_view const name);
/**
* @brief Equality comparison operator
* @param other The object to compare to
* @return Whether the objects are equal
*/
bool
operator==(Amendment const& other) const
{
return name == other.name;
}
};
/**
* @brief A helper for amendment name to feature conversions
*/
struct AmendmentKey {
std::string name;
/**
* @brief Construct a new AmendmentKey
* @param val Anything convertible to a string
*/
AmendmentKey(std::convertible_to<std::string> auto&& val) : name{std::forward<decltype(val)>(val)}
{
}
/** @brief Conversion to string */
operator std::string const&() const;
/** @brief Conversion to string_view */
operator std::string_view() const;
/** @brief Conversion to uint256 */
operator ripple::uint256() const;
/**
* @brief Comparison operators
* @param other The object to compare to
* @return Whether the objects are equal, greater or less
*/
auto
operator<=>(AmendmentKey const& other) const = default;
};
constexpr ripple::uint256 kFIRST_KEY{"0000000000000000000000000000000000000000000000000000000000000000"};
constexpr ripple::uint256 kLAST_KEY{"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"};
constexpr ripple::uint256 kHI192{"0000000000000000000000000000000000000000000000001111111111111111"};
} // namespace data

View File

@@ -60,7 +60,7 @@ Handle::connect() const
Handle::FutureType
Handle::asyncConnect(std::string_view keyspace) const
{
return cass_session_connect_keyspace(session_, cluster_, keyspace.data());
return cass_session_connect_keyspace_n(session_, cluster_, keyspace.data(), keyspace.size());
}
Handle::MaybeErrorType
@@ -155,7 +155,7 @@ Handle::asyncExecute(std::vector<StatementType> const& statements, std::function
Handle::PreparedStatementType
Handle::prepare(std::string_view query) const
{
Handle::FutureType const future = cass_session_prepare(session_, query.data());
Handle::FutureType const future = cass_session_prepare_n(session_, query.data(), query.size());
auto const rc = future.await();
if (rc)
return cass_future_get_prepared(future);

View File

@@ -74,7 +74,7 @@ public:
'class': 'SimpleStrategy',
'replication_factor': '{}'
}}
AND durable_writes = true
AND durable_writes = True
)",
settingsProvider_.get().getKeyspace(),
settingsProvider_.get().getReplicationFactor()
@@ -257,6 +257,44 @@ public:
qualifiedTableName(settingsProvider_.get(), "nf_token_transactions")
));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
mpt_id blob,
holder blob,
PRIMARY KEY (mpt_id, holder)
)
WITH CLUSTERING ORDER BY (holder ASC)
)",
qualifiedTableName(settingsProvider_.get(), "mp_token_holders")
));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
migrator_name TEXT,
status TEXT,
PRIMARY KEY (migrator_name)
)
)",
qualifiedTableName(settingsProvider_.get(), "migrator_status")
));
statements.emplace_back(fmt::format(
R"(
CREATE TABLE IF NOT EXISTS {}
(
node_id UUID,
message TEXT,
PRIMARY KEY (node_id)
)
WITH default_time_to_live = 2
)",
qualifiedTableName(settingsProvider_.get(), "nodes_chat")
));
return statements;
}();
@@ -393,6 +431,17 @@ public:
));
}();
PreparedStatement insertMPTHolder = [this]() {
return handle_.get().prepare(fmt::format(
R"(
INSERT INTO {}
(mpt_id, holder)
VALUES (?, ?)
)",
qualifiedTableName(settingsProvider_.get(), "mp_token_holders")
));
}();
PreparedStatement insertLedgerHeader = [this]() {
return handle_.get().prepare(fmt::format(
R"(
@@ -436,12 +485,34 @@ public:
R"(
UPDATE {}
SET sequence = ?
WHERE is_latest = false
WHERE is_latest = False
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")
));
}();
PreparedStatement insertMigratorStatus = [this]() {
return handle_.get().prepare(fmt::format(
R"(
INSERT INTO {}
(migrator_name, status)
VALUES (?, ?)
)",
qualifiedTableName(settingsProvider_.get(), "migrator_status")
));
}();
PreparedStatement updateClioNodeMessage = [this]() {
return handle_.get().prepare(fmt::format(
R"(
UPDATE {}
SET message = ?
WHERE node_id = ?
)",
qualifiedTableName(settingsProvider_.get(), "nodes_chat")
));
}();
//
// Select queries
//
@@ -687,6 +758,20 @@ public:
));
}();
PreparedStatement selectMPTHolders = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT holder
FROM {}
WHERE mpt_id = ?
AND holder > ?
ORDER BY holder ASC
LIMIT ?
)",
qualifiedTableName(settingsProvider_.get(), "mp_token_holders")
));
}();
PreparedStatement selectLedgerByHash = [this]() {
return handle_.get().prepare(fmt::format(
R"(
@@ -715,7 +800,7 @@ public:
R"(
SELECT sequence
FROM {}
WHERE is_latest = true
WHERE is_latest = True
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")
));
@@ -726,10 +811,32 @@ public:
R"(
SELECT sequence
FROM {}
WHERE is_latest in (True, False)
)",
qualifiedTableName(settingsProvider_.get(), "ledger_range")
));
}();
PreparedStatement selectMigratorStatus = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT status
FROM {}
WHERE migrator_name = ?
)",
qualifiedTableName(settingsProvider_.get(), "migrator_status")
));
}();
PreparedStatement selectClioNodesData = [this]() {
return handle_.get().prepare(fmt::format(
R"(
SELECT node_id, message
FROM {}
)",
qualifiedTableName(settingsProvider_.get(), "nodes_chat")
));
}();
};
/**

View File

@@ -22,10 +22,7 @@
#include "data/cassandra/Types.hpp"
#include "data/cassandra/impl/Cluster.hpp"
#include "util/Constants.hpp"
#include "util/config/Config.hpp"
#include <boost/json/conversion.hpp>
#include <boost/json/value.hpp>
#include "util/newconfig/ObjectView.hpp"
#include <cerrno>
#include <chrono>
@@ -36,43 +33,17 @@
#include <ios>
#include <iterator>
#include <optional>
#include <stdexcept>
#include <string>
#include <string_view>
#include <system_error>
namespace data::cassandra {
namespace impl {
inline Settings::ContactPoints
tag_invoke(boost::json::value_to_tag<Settings::ContactPoints>, boost::json::value const& value)
{
if (not value.is_object()) {
throw std::runtime_error("Feed entire Cassandra section to parse Settings::ContactPoints instead");
}
util::Config const obj{value};
Settings::ContactPoints out;
out.contactPoints = obj.valueOrThrow<std::string>("contact_points", "`contact_points` must be a string");
out.port = obj.maybeValue<uint16_t>("port");
return out;
}
inline Settings::SecureConnectionBundle
tag_invoke(boost::json::value_to_tag<Settings::SecureConnectionBundle>, boost::json::value const& value)
{
if (not value.is_string())
throw std::runtime_error("`secure_connect_bundle` must be a string");
return Settings::SecureConnectionBundle{value.as_string().data()};
}
} // namespace impl
SettingsProvider::SettingsProvider(util::Config const& cfg)
SettingsProvider::SettingsProvider(util::config::ObjectView const& cfg)
: config_{cfg}
, keyspace_{cfg.valueOr<std::string>("keyspace", "clio")}
, keyspace_{cfg.get<std::string>("keyspace")}
, tablePrefix_{cfg.maybeValue<std::string>("table_prefix")}
, replicationFactor_{cfg.valueOr<uint16_t>("replication_factor", 3)}
, replicationFactor_{cfg.get<uint16_t>("replication_factor")}
, settings_{parseSettings()}
{
}
@@ -86,8 +57,8 @@ SettingsProvider::getSettings() const
std::optional<std::string>
SettingsProvider::parseOptionalCertificate() const
{
if (auto const certPath = config_.maybeValue<std::string>("certfile"); certPath) {
auto const path = std::filesystem::path(*certPath);
if (auto const certPath = config_.getValueView("certfile"); certPath.hasValue()) {
auto const path = std::filesystem::path(certPath.asString());
std::ifstream fileStream(path.string(), std::ios::in);
if (!fileStream) {
throw std::system_error(errno, std::generic_category(), "Opening certificate " + path.string());
@@ -108,30 +79,34 @@ Settings
SettingsProvider::parseSettings() const
{
auto settings = Settings::defaultSettings();
if (auto const bundle = config_.maybeValue<Settings::SecureConnectionBundle>("secure_connect_bundle"); bundle) {
settings.connectionInfo = *bundle;
// all config values used in settings is under "database.cassandra" prefix
if (config_.getValueView("secure_connect_bundle").hasValue()) {
auto const bundle = Settings::SecureConnectionBundle{(config_.get<std::string>("secure_connect_bundle"))};
settings.connectionInfo = bundle;
} else {
settings.connectionInfo =
config_.valueOrThrow<Settings::ContactPoints>("Missing contact_points in Cassandra config");
Settings::ContactPoints out;
out.contactPoints = config_.get<std::string>("contact_points");
out.port = config_.maybeValue<uint32_t>("port");
settings.connectionInfo = out;
}
settings.threads = config_.valueOr<uint32_t>("threads", settings.threads);
settings.maxWriteRequestsOutstanding =
config_.valueOr<uint32_t>("max_write_requests_outstanding", settings.maxWriteRequestsOutstanding);
settings.maxReadRequestsOutstanding =
config_.valueOr<uint32_t>("max_read_requests_outstanding", settings.maxReadRequestsOutstanding);
settings.coreConnectionsPerHost =
config_.valueOr<uint32_t>("core_connections_per_host", settings.coreConnectionsPerHost);
settings.threads = config_.get<uint32_t>("threads");
settings.maxWriteRequestsOutstanding = config_.get<uint32_t>("max_write_requests_outstanding");
settings.maxReadRequestsOutstanding = config_.get<uint32_t>("max_read_requests_outstanding");
settings.coreConnectionsPerHost = config_.get<uint32_t>("core_connections_per_host");
settings.queueSizeIO = config_.maybeValue<uint32_t>("queue_size_io");
settings.writeBatchSize = config_.valueOr<std::size_t>("write_batch_size", settings.writeBatchSize);
settings.writeBatchSize = config_.get<std::size_t>("write_batch_size");
auto const connectTimeoutSecond = config_.maybeValue<uint32_t>("connect_timeout");
if (connectTimeoutSecond)
settings.connectionTimeout = std::chrono::milliseconds{*connectTimeoutSecond * util::MILLISECONDS_PER_SECOND};
if (config_.getValueView("connect_timeout").hasValue()) {
auto const connectTimeoutSecond = config_.get<uint32_t>("connect_timeout");
settings.connectionTimeout = std::chrono::milliseconds{connectTimeoutSecond * util::kMILLISECONDS_PER_SECOND};
}
auto const requestTimeoutSecond = config_.maybeValue<uint32_t>("request_timeout");
if (requestTimeoutSecond)
settings.requestTimeout = std::chrono::milliseconds{*requestTimeoutSecond * util::MILLISECONDS_PER_SECOND};
if (config_.getValueView("request_timeout").hasValue()) {
auto const requestTimeoutSecond = config_.get<uint32_t>("request_timeout");
settings.requestTimeout = std::chrono::milliseconds{requestTimeoutSecond * util::kMILLISECONDS_PER_SECOND};
}
settings.certificate = parseOptionalCertificate();
settings.username = config_.maybeValue<std::string>("username");

View File

@@ -19,10 +19,9 @@
#pragma once
#include "data/cassandra/Handle.hpp"
#include "data/cassandra/Types.hpp"
#include "util/config/Config.hpp"
#include "util/log/Logger.hpp"
#include "data/cassandra/impl/Cluster.hpp"
#include "util/newconfig/ObjectView.hpp"
#include <cstdint>
#include <optional>
@@ -34,7 +33,7 @@ namespace data::cassandra {
* @brief Provides settings for @ref BasicCassandraBackend.
*/
class SettingsProvider {
util::Config config_;
util::config::ObjectView config_;
std::string keyspace_;
std::optional<std::string> tablePrefix_;
@@ -47,7 +46,7 @@ public:
*
* @param cfg The config of Clio to use
*/
explicit SettingsProvider(util::Config const& cfg);
explicit SettingsProvider(util::config::ObjectView const& cfg);
/**
* @return The cluster settings

View File

@@ -21,6 +21,8 @@
#include <cstdint>
#include <expected>
#include <string>
#include <utility>
namespace data::cassandra {
@@ -55,6 +57,26 @@ struct Limit {
int32_t limit;
};
/**
* @brief A strong type wrapper for string
*
* This is unfortunately needed right now to support TEXT properly,
* because clio uses string to represent BLOB.
* If we want to bind TEXT with a string, we need to use this type.
*/
struct Text {
std::string text;
/**
* @brief Construct a new Text object from string type
*
* @param text The text to wrap
*/
explicit Text(std::string text) : text{std::move(text)}
{
}
};
class Handle;
class CassandraError;
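For context, a minimal usage sketch mirroring the writeMigratorStatus call earlier in this diff: string values bound to TEXT columns are wrapped in Text so they are not treated as BLOBs. The literal values below are examples only.

```
// Sketch of the intended use, following the pattern from writeMigratorStatus
// earlier in this diff; the statement name and executor come from that code,
// the literal values are illustrative.
executor_.writeSync(
    schema_->insertMigratorStatus,
    data::cassandra::Text{"example_migrator"},
    data::cassandra::Text{"migrated"}
);
```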

View File

@@ -23,6 +23,7 @@
#include "data/cassandra/Handle.hpp"
#include "data/cassandra/Types.hpp"
#include "data/cassandra/impl/RetryPolicy.hpp"
#include "util/Mutex.hpp"
#include "util/log/Logger.hpp"
#include <boost/asio.hpp>
@@ -64,8 +65,8 @@ class AsyncExecutor : public std::enable_shared_from_this<AsyncExecutor<Statemen
RetryCallbackType onRetry_;
// does not exist during initial construction, hence optional
std::optional<FutureWithCallbackType> future_;
std::mutex mtx_;
using OptionalFuture = std::optional<FutureWithCallbackType>;
util::Mutex<OptionalFuture> future_;
public:
/**
@@ -127,8 +128,8 @@ private:
self = nullptr; // explicitly decrement refcount
};
std::scoped_lock const lck{mtx_};
future_.emplace(handle.asyncExecute(data_, std::move(handler)));
auto future = future_.template lock<std::scoped_lock>();
future->emplace(handle.asyncExecute(data_, std::move(handler)));
}
};

View File

@@ -31,14 +31,14 @@
#include <vector>
namespace {
constexpr auto batchDeleter = [](CassBatch* ptr) { cass_batch_free(ptr); };
constexpr auto kBATCH_DELETER = [](CassBatch* ptr) { cass_batch_free(ptr); };
} // namespace
namespace data::cassandra::impl {
// TODO: Use an appropriate value instead of CASS_BATCH_TYPE_LOGGED for different use cases
Batch::Batch(std::vector<Statement> const& statements)
: ManagedObject{cass_batch_new(CASS_BATCH_TYPE_LOGGED), batchDeleter}
: ManagedObject{cass_batch_new(CASS_BATCH_TYPE_LOGGED), kBATCH_DELETER}
{
cass_batch_set_is_idempotent(*this, cass_true);

View File

@@ -21,6 +21,7 @@
#include "data/cassandra/impl/ManagedObject.hpp"
#include "data/cassandra/impl/SslContext.hpp"
#include "util/OverloadSet.hpp"
#include "util/log/Logger.hpp"
#include <cassandra.h>
@@ -31,21 +32,14 @@
#include <variant>
namespace {
constexpr auto clusterDeleter = [](CassCluster* ptr) { cass_cluster_free(ptr); };
template <typename... Ts>
struct overloadSet : Ts... {
using Ts::operator()...;
};
constexpr auto kCLUSTER_DELETER = [](CassCluster* ptr) { cass_cluster_free(ptr); };
// explicit deduction guide (not needed as of C++20, but clang be clang)
template <typename... Ts>
overloadSet(Ts...) -> overloadSet<Ts...>;
}; // namespace
namespace data::cassandra::impl {
Cluster::Cluster(Settings const& settings) : ManagedObject{cass_cluster_new(), clusterDeleter}
Cluster::Cluster(Settings const& settings) : ManagedObject{cass_cluster_new(), kCLUSTER_DELETER}
{
using std::to_string;
@@ -90,7 +84,7 @@ void
Cluster::setupConnection(Settings const& settings)
{
std::visit(
overloadSet{
util::OverloadSet{
[this](Settings::ContactPoints const& points) { setupContactPoints(points); },
[this](Settings::SecureConnectionBundle const& bundle) { setupSecureBundle(bundle); }
},

Some files were not shown because too many files have changed in this diff.