Per XLS-0095, we are taking steps to rename ripple(d) to xrpl(d).
This change updates the CMake files and definitions therein, plus a handful of related modifications. Specifically, the CMake files are renamed from `RippleXXX.cmake` or `RippledXXX.cmake` to `XrplXXX.cmake`, and any references to `ripple` and `rippled` (with or without capital letters) are renamed to `xrpl` and `xrpld`, respectively. The name of the binary, currently `rippled`, remains unchanged and will be updated in a separate PR. This change is purely cosmetic and does not affect the functioning of the binary.
Per XLS-0095, we are taking steps to rename ripple(d) to xrpl(d).
This change specifically removes all copyright notices referencing Ripple, XRPLF, and certain affiliated contributors upon mutual agreement, so the notice in the LICENSE.md file applies throughout. Copyright notices referencing external contributions remain as-is. Duplicate verbiage is also removed.
Per XLS-0095, we are taking steps to rename ripple(d) to xrpl(d).
C++ include guards are used to prevent the contents of a header file from being included multiple times in a single compilation unit. This change renames all `RIPPLE_` and `RIPPLED_` definitions, primarily include guards, to `XRPL_`. It also provides a script to allow developers to replicate the changes in their local branch or fork to avoid conflicts.
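A minimal before/after sketch of the renaming; the file and macro names here are placeholders rather than actual headers from the codebase:

```cpp
// Before (placeholder guard name):
#ifndef RIPPLE_BASICS_EXAMPLE_H_INCLUDED
#define RIPPLE_BASICS_EXAMPLE_H_INCLUDED
// ... header contents ...
#endif

// After: the same guard with the XRPL_ prefix.
#ifndef XRPL_BASICS_EXAMPLE_H_INCLUDED
#define XRPL_BASICS_EXAMPLE_H_INCLUDED
// ... header contents ...
#endif
```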
Amendments activated for more than 2 years can be retired, and obsolete retirements that were never activated can also be removed after 2 years. This change retires the NonFungibleTokensV1_1, fixNonFungibleTokensV1_2, and fixNFTokenRemint amendments, and removes the NonFungibleTokensV1, fixNFTokenNegOffer, and fixNFTokenDirV1 amendments.
To debug test failures we would like to use `netstat`, but that package wasn't installed yet in the CI images. This change uses the new CI images created by https://github.com/XRPLF/ci/pull/79.
This change:
* Simplifies the `TxMeta` constructors - both were setting the same set of fields, so combining them into one helper function keeps the code DRY and makes it harder for future bugs to arise.
* Removes an unused constructor.
* Renames the variables to avoid Hungarian naming.
* Removes a bunch of now-unnecessary helper functions.
This change introduces the `featurePermissionDelegationV1_1` amendment, which is designed to supersede both the `featurePermissionDelegation` and `fixDelegateV1_1` amendments, which should be considered deprecated. The `checkPermission` function will now return `terNO_DELEGATE_PERMISSION` when a delegate transaction lacks the necessary permissions.
This change introduces the `fixDirectoryLimit` amendment to remove the directory pages limit. We found that the directory size limit is easier to hit than originally assumed, and there is no good reason to keep this limit, since the object reserve provides the necessary incentive to avoid creating unnecessary objects on the ledger.
Field `sfSubjectNode` is not populated by `CredentialCreate` for self-issued credentials. Rather than fix up the Credentials already on the ledger, we can in this case safely change the object template for this field from `soeREQUIRED` to `soeOPTIONAL`.
- Restructures `STTx` signature checking code to be able to handle
a `sigObject`, which may be the full transaction, or may be an object
field containing a separate signature. Either way, the `sigObject` can
be a single- or multi-sign signature.
- This is distinct from 550f90a75e (#5594), which changed the check in
Transactor, which validates whether a given account is allowed to sign
for the given transaction. This cryptographically checks the signature
validity.
This change adds an extra step to the CI test job that outputs network info, which may allow us to confirm whether random test failures are caused by port exhaustion.
We are on an amendment retiring spree, but each change results in conflicts in `features.macro` because currently they all add the retired amendment to the end of the list. By sorting the list the number of conflicts should be reduced, making it easier to merge them.
Amendments activated for more than 2 years can be retired. This change retires the fix1571 amendment.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
To protect the identity of UNL validators, the IP addresses are redacted from the log messages sent to the common Grafana instance. However, without such identifying information it is challenging to debug issues. This change adds a node's public key to logs to improve our ability to debug issues.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
Amendments activated for more than 2 years can be retired. This change retires the fix1781 amendment.
Signed-off-by: Pratik Mankawde <pmankawde@ripple.com>
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
This change reduces the number of cores used to build and test, as using all cores may be contributing to occasional build and test failures.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
This change updates the CI concurrency group for pushes to the `develop` branch to use the commit hash instead of the target branch.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
There are separate steps for logging into Conan and uploading packages. However, the login step is sometimes executed even when no packages will be uploaded. The condition for performing both steps should be the same.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
This change fixes an invariant error where the amount withdrawn is equal to the transaction fee.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
This change fixes the `account_tx` RPC method to properly validate malformed limit parameter values. Previously, invalid values like `0`, `1.2`, `"10"`, `true`, `false`, `-1`, `[]`, `{}`, etc. were either accepted without errors or caused internal errors. Now all malformed values correctly return the `invalidParams` error.
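A hedged sketch of the kind of check involved, written against the stock jsoncpp `Json::Value` API so it is self-contained; rippled vendors its own Json library and the actual handler logic differs:

```cpp
#include <json/value.h>

// Illustrative check: a well-formed limit is an unsigned integral JSON number
// greater than zero. Values such as 0, 1.2, "10", true, -1, [], or {} all fail
// and would map to an invalidParams error in the handler.
bool isValidLimit(Json::Value const& limit)
{
    return limit.isIntegral() && !limit.isBool() && limit.isUInt() &&
        limit.asUInt() > 0;
}
```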
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
Amendments activated for more than 2 years can be retired. This change retires the fix1543 amendment.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
The `-Wno-missing-template-arg-list-after-template-kw` flag is only needed for the grpc library. Use `+=` for the default build flags to make it easier to extend in the future.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
Amendments activated for more than 2 years can be retired. This change retires the fix1515 amendment.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
As XRPL network demand grows and ledger sizes increase, the default 4K NuDB block size becomes a performance bottleneck, especially on high-performance storage systems. Modern SSDs and enterprise storage often perform better with larger block sizes, but rippled previously had no way to configure this parameter. This change therefore implements configurable NuDB block size support, allowing operators to optimize storage performance based on their hardware configuration. The feature adds a new `nudb_block_size` configuration parameter that enables block sizes from 4K to 32K bytes, with comprehensive validation and backward compatibility.
Specific changes are:
- Implements `parseBlockSize()` function with validation (a sketch follows this list).
- Adds `nudb_block_size` configuration parameter.
- Supports block sizes from 4K to 32K (power of 2).
- Adds comprehensive logging and error handling.
- Maintains backward compatibility with 4K default.
- Adds unit tests for block size validation.
- Updates configuration documentation with performance guidance.
- Marks feature as experimental.
- Applies code formatting fixes.
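A hedged sketch of what that validation might look like; the actual `parseBlockSize()` signature, bounds handling, and error reporting are not reproduced here:

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>

// Illustrative validation: accept only powers of two between 4K and 32K.
std::size_t parseBlockSizeSketch(std::string const& value)
{
    std::size_t const blockSize = std::stoull(value);

    constexpr std::size_t minSize = 4096;   // 4K, the historical default
    constexpr std::size_t maxSize = 32768;  // 32K upper bound

    bool const powerOfTwo = blockSize != 0 && (blockSize & (blockSize - 1)) == 0;
    if (blockSize < minSize || blockSize > maxSize || !powerOfTwo)
        throw std::runtime_error(
            "nudb_block_size must be a power of 2 between 4096 and 32768");

    return blockSize;
}
```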
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
Similarly to other transaction types that can create a trust line or MPToken for the transaction submitter (e.g. CashCheck #5285, EscrowFinish #5185), VaultWithdraw should enforce the reserve before creating a new object. Additionally, the lsfRequireDestTag account flag should be enforced for the transaction submitter.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
The default job timeout is 5 hours, while build times are anywhere between 4-20 minutes and test times between 2-10 minutes. As a runner occasionally gets stuck, we should fail much more quickly.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
This PR sets the fail-fast strategy option to false (it defaults to true), unless it is run by a merge group.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
This change sanitizes inputs by setting them as environment variables, and adjusts the number of CPUs used for building. Namely, GitHub inputs should be sanitized, per a recommendation by Semgrep, as using them directly poses a security risk. A recent change further overrode the global configuration by having builds use all cores, but as we have noticed an increased number of job cancellations, this change updates it to use all but one core.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
Some tests write out JSON as strings instead of using the Json::Value library; this change cleans them up.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
A regression was introduced in #5669 which could cause rippled to dereference a disengaged std::optional when connecting to a peer. This would cause undefined behavior in release builds and a crash in debug builds.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
This change replaces boost::lexical_cast<std::string> with to_string in some of the tests to make them more readable.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
This change replaces instances of JSON LastLedgerSequence with last_ledger_seq, which makes the tests a bit simpler and easier to read.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
Windows is extremely chatty and generates tons of logs when building, making it practically impossible to use the build logs to debug issues. This change sets the verbosity to 'quiet' on Windows.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
This PR changes fee().accountReserve(0) to fee().reserve, as the current network reserve amount should be used instead of the account reserve.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
This change updates the Docker image hashes of the tools-rippled images to fix a missing dependency.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
This change adds a paychan namespace to the TestHelpers and implementation files, improving organization and clarity. Additionally, it updates the AMM test to use the new `paychan::create` function for payment channel creation.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
This change excludes from Codecov unreachable/difficult-to-test transaction code (such as `tecINTERNAL`) and old code (from amendments that have been enabled for a long time that are only around for ledger replay reasons). This removes about 200 lines of misses and increases the Codecov coverage by 0.3% (79.2% to 79.5%).
This change adds a wildcard to the release branch in the CI pipeline spec. Namely, after adopting an improved release process, with release branches that now look like release-X.Y, the trigger pipeline was no longer running as it only searched for an exact match to release.
This change downgrades OpenSSL 3.6.0 to 3.5.4. To avoid potential zero-day issues in a new major version of OpenSSL, 3.6.0, it is safer to stick with 3.5.4. While 3.6.0 has some nice new features, such as improved SHA512 hashing, it also introduces new features that could contain bugs. In contrast, 3.5.4 has seen quite a few bug fixes over 3.5.0 and has been used in the wild for a while now.
This change fixes a release build error with GCC 15.2.
The `fields` variable is only used in `XRPL_ASSERT`, which evaluates to nothing in a Release build, leaving the variable unused. This change silences the build warning.
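One common way to silence this class of warning, shown here as an assumption rather than the exact fix applied, is to mark the variable `[[maybe_unused]]`:

```cpp
#include <cassert>

// Sketch only: `fields` stands in for the variable mentioned above, and plain
// assert() stands in for XRPL_ASSERT; the actual change in the codebase may differ.
void checkFieldsSketch()
{
    [[maybe_unused]] int const fields = 3;  // only consumed by the assert below
    assert(fields > 0);                     // compiled out when NDEBUG is defined
}
```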
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
This change uses the new RHEL 9 and 10 images to build and test the binary, and adds support for having different Docker image SHAs per distro-compiler combination.
Instead of supporting each RHEL minor version, we are simplifying our pipelines by only supporting RHEL major versions. Our CI Docker images have already been updated accordingly, and we recently added support for RHEL 10 as well. Up until now, the CI Docker images had all been rebuilt at the same time, but as the most recent push to the CI repo has shown, that is no longer necessarily the case: the RHEL images now have a different SHA than the Debian and Ubuntu ones.
Co-authored-by: Bart Thomee <11445373+bthomee@users.noreply.github.com>
* Restructures Transactor signature checking code to be able to handle a `sigObject`, which may be the full transaction, or may be an object field containing a separate signature. Either way, the `sigObject` can be a single- or multi-sign signature.
* Restructures `Transactor::preflight` to create several functions that will remove the need for error-prone boilerplate code in derived classes' implementations of `preflight`.
This change adds `STInt32` as a new `SType` under the `STInteger` umbrella, with `SType` value `12`. This is the first and only `STInteger` type that supports negative values.
This change adds more comprehensive tests for the `FeeVote` module, which previously only checked the basics, and not the more comprehensive flows in that class.
This change makes the regex in `HttpClient.cpp` that matches the content-length http header case insensitive to improve compatibility, as http headers are case insensitive.
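A hedged illustration of a case-insensitive match; the codebase may use a different regex engine and pattern than this sketch:

```cpp
#include <regex>
#include <string>

// "Content-Length: 42", "content-length: 42", and "CONTENT-LENGTH: 42" all match.
bool isContentLengthHeader(std::string const& line)
{
    static std::regex const contentLength(
        R"(^content-length\s*:\s*(\d+)\s*$)", std::regex::icase);
    return std::regex_match(line, contentLength);
}
```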
This change raises logging severity from `INFO` to `WARN` when handling UNL manifest signed with an unexpected / invalid key. It also changes the internal error code for an invalid format of UNL manifest to `invalid` (from `untrusted`).
This is a follow-up to problems experienced by a UNL node due to an old manifest key configured in `validators.txt`, which would have been easier to diagnose with improved logging.
It also replaces a log line with `UNREACHABLE` for an impossible situation in which we match the UNL manifest key against a configured key that has an invalid type (we cannot configure such a key because of checks when loading configured keys).
This adds a comment to avoid using `std::counting_semaphore` until the minimum compiler versions of GCC and Clang have been updated to no longer contain the bug that is present in older compilers.
This change reverts #5617, because it will require extensive testing that will take up more time than we have before the next scheduled release.
Reverting this change does not mean we are abandoning it. We aim to pick it back up once there's a sufficient time window to allow for testing on multiple distros running a mixture of OpenSSL 1.x and 3.x.
This change is to improve code coverage (and to simplify #5720 and #5725); there is otherwise no change in functionality. The change adds basic tests for `STInteger` and `STParsedJSON`, so it becomes easier to test smaller changes to the types, as well as removes `STParsedJSONArray`, since it is not used anywhere (including in Clio).
- Added a new Invariant: `ValidPseudoAccounts` which checks that all pseudo-accounts behave consistently through creation and updates, and that no "real" accounts look like pseudo-accounts (which means they don't have a 0 sequence).
- `to_short_string(base_uint)`. Like `to_string`, but only returns the first 8 characters. (Similar to how a git commit ID can be abbreviated.) Used as a wrapped sink to prefix most transaction-related messages. More can be added later.
- `XRPL_ASSERT_PARTS`. Convenience wrapper for `XRPL_ASSERT`, which takes the `function` and `description` as separate parameters.
- `SField::sMD_PseudoAccount`. Metadata option for `SField` definitions to indicate that, if the field is set in an `AccountRoot`, that account is a pseudo-account. Removes the need for hard-coded field lists all over the place. Added the flag to `AMMID` and `VaultID`.
- Added functionality to the `SField` ctor to detect both code and name collisions using asserts, and to require all SFields to have a name.
- Convenience type aliases `STLedgerEntry::const_pointer` and `STLedgerEntry::const_ref`. (`SLE` is an alias to `STLedgerEntry`.)
- Generalized `feeunit.h` (`TaggedFee`) into `unit.h` (`ValueUnit`) and added new "BIPS"-related tags for future use. Also refactored the type restrictions to use Concepts.
- Restructured `transactions.macro` to do two big things
1. Include the `#include` directives for transactor header files directly in the macro file. Removes the need to update `applySteps.cpp` and the resulting conflicts.
2. Added a `privileges` parameter to the `TRANSACTION` macro, which specifies some of the operations a transaction is allowed to do. These `privileges` are enforced by invariant checks. Again, removed the need to update scattered lists of transaction types in various checks.
- Unit tests:
1. Moved more helper functions into `TestHelpers.h` and `.cpp`.
2. Cleaned up the namespaces to prevent / mitigate random collisions and ambiguous symbols, particularly in unity builds.
3. Generalized `Env::balance` to add support for `MPTIssue` and `Asset`.
4. Added a set of helper classes to simplify `Env` transaction parameter classes: `JTxField`, `JTxFieldWrapper`, and a bunch of classes derived or aliased from it. For an example of how awesome it is, check the changes in `src/test/jtx/escrow.h` for how much simpler the definitions are for `finish_time`, `cancel_time`, `condition`, and `fulfillment`.
5. Generalized several of the amount-related helper classes to understand `Asset`s.
6. `env.balance` for an MPT issuer will return a negative number (or 0) for consistency with IOUs.
This change re-enables building and testing all configurations, but only for the daily scheduled run. Previously all configurations were run for each merge into the develop branch, but that overwhelmed both the GitHub runners and the Conan remote, and thus they were limited to just a subset of configurations. Now that the number of jobs is limited via `max-parallel: 10`, we should be able to safely enable building all configurations again. However, building them all once a day instead of for each PR merge should be sufficient.
- Ensures the commits don't get orphaned, even though the relevant code
changes are already included.
* tag '2.5.1':
Set version to 2.5.1
Fix: Don't flag consensus as stalled prematurely (#5658)
GitHub runners have a limit on how many concurrent jobs they can actually process (even though they will try to run them all at the same time), and similarly the Conan remote cannot handle hundreds of concurrent requests. Previously, the Conan dependency uploading was already limited to max 10 jobs running in parallel, and this change makes the same change to the build+test workflow.
This change adds a fix amendment (`fixIncludeKeyletFields`) that adds:
* `sfSequence` to `Escrow` and `PayChannel`
* `sfOwner` to `SignerList`
* `sfOracleDocumentID` to `Oracle`
This ensures that all ledger entries hold all the information needed to determine their keylet.
The XRPL establishes connections in three stages: first a TCP connection, then a TLS/SSL handshake to secure the connection, and finally an upgrade to the bespoke XRP Ledger peer-to-peer protocol. During connection termination, xrpld directly closes the TCP connection, bypassing the TLS/SSL shutdown handshake. This makes peer disconnection diagnostics more difficult - abrupt TCP termination appears as if the peer crashed rather than disconnected gracefully.
This change refactors the connection lifecycle with the following changes:
- Enhanced outgoing connection logic with granular timeouts for each connection stage (TCP, TLS, XRPL handshake) to improve diagnostic capabilities
- Updated both PeerImp and ConnectAttempt to use proper asynchronous TLS shutdown procedures for graceful connection termination
* Extends the functionality of the `MPTokenIssuanceSet` transaction, allowing the issuer to update fields or flags that were explicitly marked as mutable during creation.
Clio should only be notified when releases are about to be made, instead of for all PRs, so this change only notifies Clio when a PR targets the release or master branch.
This change wraps all GitHub conditionals in `${{ .. }}`, both for consistency and to reduce unexpected failures, because it was previously noticed that not all conditionals work without those curly braces.
- Amendment: fixDelegateV1_1
- In DelegateSet, disallow invalid PermissionValues, such as 0, as well as transaction values when that transaction's amendment is not enabled. This acts as if the transaction doesn't exist, which is the same thing older versions without the amendment will do.
- Payment burn/mint should disallow DEX currency exchange.
- Support MPT for Payment burn/mint.
- Don't run upload-conan-deps in PRs, unless the PR changes the workflow file.
- Change the cron schedule for uploading Conan dependencies to run after work hours for most developers.
- This should prevent Artifactory from being overloaded by too many requests at a time.
- Uses "max-parallel" to limit the build job to 10 simultaneous instances.
- Only run the minimal matrix on PRs.
For the purposes of being able to merge a PR, Github Actions jobs count as passed if they ran and passed, or were skipped.
With this change, if any of the jobs that "passed" depends on fails or is cancelled, then "passed" will fail. If they all succeed or are skipped, then "passed" is skipped, which does not prevent a merge.
This saves spinning up a runner in the usual case where things work, and will simplify our branch protection rules, so that only "passed" will need to be checked.
* Add and Scale to VaultCreate
* Add round-trip calculation to VaultDeposit, VaultWithdraw, and VaultClawback
* Implement Number::truncate() for VaultClawback
* Add rounding to DepositWithdraw
* Disallow zero shares withdraw or deposit with tecPRECISION_LOSS
* Return tecPATH_DRY on overflow when converting shares/assets
* Remove empty shares MPToken in clawback or withdraw (except for vault owner)
* Implicitly create shares MPToken for vault owner in VaultCreate
* Review feedback: defensive checks in shares/assets calculations
---------
Co-authored-by: Ed Hennis <ed@ripple.com>
Fix stalled consensus detection to prevent false positives in situations where there are no disputed transactions.
Stalled consensus detection was added to 2.5.0 in response to a network consensus halt that caused a round to run for over an hour. However, it has a flaw that makes it very easy to have false positives. Those false positives are usually mitigated by other checks that prevent them from having an effect, but there have been several instances of validators "running ahead" because there are circumstances where the other checks are "successful", allowing the stall state to be checked.
* chore: Use conan lockfile
* Add windows-specific dependencies as well
* Add more info about lockfiles
* Update lockfile to latest version
* Update BUILD.md with conan install note
This is a major refactor of LedgerEntry.cpp. It adds a number of helper functions to make the code easier to maintain.
It also splits up the ledger and ledger_entry tests into different files, and cleans up the ledger_entry tests to make them easier to write and maintain.
This refactor also caught a few bugs in some of the other RPC processing, so those are fixed along the way.
This is a follow-up to PR #5664 that further improves the specificity of logging for refused peer connections. The previous changes did not account for several key scenarios, leading to potentially misleading log messages.
It addresses the following:
- Inbound Disabled: Connections are now explicitly logged as rejected when the server is not configured to accept inbound peers. Previously, this was logged as the server being "full," which was technically correct but lacked diagnostic clarity.
- Duplicate Connections: The logging now distinguishes between two types of duplicate connection refusals:
- When a peer with the same node public key is already connected (duplicate connection).
- When a connection is rejected because the limit for connections from a single IP address has been reached.
These changes provide more accurate and actionable diagnostic information when analyzing peer connection behavior.
Test jobs will run if:
* Either the PR is non-draft or has the "DraftRunCI" label set *AND*
* One of the following:
* Certain files were changed *OR*
* The PR is non-draft and has the "Ready to merge" flag *OR*
* The workflow is being run from the merge queue.
Additionally, a meta "passed" job was added that is dependent on all the other test jobs, so the required jobs list under branch protection rules only needs to specify "passed" to ensure that *either* all the test jobs pass *or* all the test jobs are skipped because they don't need to be run.
This allows PRs that don't affect the build or binary to be merged without overriding.
This updates Boost to 1.88, which is needed because Clio wants to move to 1.88 as it fixes several ASAN false positives around coroutine usage. In order for Clio to move to the newer Boost, libXRPL needs to move too; hence the changes in this PR. A lot has changed between 1.83 and 1.88, so there are lots of changes in the diff, especially around Boost.Asio and coroutines.
This change removes `labeled` and `unlabeled` as pipeline trigger actions, and instead adds `reopened` and `ready_for_review`. The logic for whether to run the pipeline jobs is then simplified, although to get a draft PR with the `DraftCIRun` label to run the pipeline, it may be necessary to close and reopen the PR.
The change updates how clang-format is called in CI and locally, and adds prettier to the pre-commit hook. Proto files are now also formatted, while external files are excluded.
This change will skip running the notify-clio job when a PR is created from a fork, and reorders the strategy matrix configuration fields so GitHub will more clearly show which configuration is running.
This change reverts the formatting applied to external files and adds formatting of proto files.
Since clang-format will complain if a proto file is modified or moved while the .clang-format file does not explicitly contain a section for proto files, that change has been included in this PR as well.
This change updates OpenSSL from 1.1.1w to 3.5.2. The code works as-is, but many functions have been marked as deprecated and thus will need to be rewritten. For now we explicitly add the `-DOPENSSL_SUPPRESS_DEPRECATED` flag to give us time to do so, while providing us with the benefits of the updated version.
This change modifies the `build_only` check used to determine whether to run tests. For easier debugging in the future it also prints out the contents of the strategy matrix.
Reduce log noise by changing two log statements from error/warn level to debug level. These logs occur during normal operation when AMM offers are not available or when IOU authorization checks fail, which are expected scenarios that don't require an elevated log level.
Currently, all peer connection rejections are logged with the reason "slots full". This is inaccurate, as the PeerFinder can also reject connections if they are a duplicate. This change updates the logging logic to correctly report the specific reason (full or duplicate) for a rejected peer connection, providing more accurate diagnostic information.
This change fixes the suite names throughout the test files, to make them match the folder name in which the test files are located. Also, the RCL test files are relocated to the consensus folder, because they test consensus functionality.
This change replaces the configuration variable with the hardcoded `https://conan.ripplex.io`, making it possible for PRs from forks to use our Conan remote containing workarounds.
We're currently calling `XXH3_createState` and `XXH3_freeState` when hashing an object. However, these calls may be slow because they use `malloc` and `free`, which can affect performance. This change avoids the use of the streaming API as much as possible by using an internal buffer.
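For reference, the difference between the two xxHash APIs is sketched below; the internal-buffer approach adopted by this change is a further variation on avoiding the per-hash state allocation and is not reproduced here:

```cpp
#include <xxhash.h>

#include <cstddef>
#include <cstdint>

// Streaming API: XXH3_createState()/XXH3_freeState() allocate and free per use.
std::uint64_t hashStreaming(void const* data, std::size_t size)
{
    XXH3_state_t* state = XXH3_createState();  // calls malloc under the hood
    if (!state)
        return 0;  // allocation failed (sketch-level error handling)
    XXH3_64bits_reset(state);
    XXH3_64bits_update(state, data, size);
    std::uint64_t const digest = XXH3_64bits_digest(state);
    XXH3_freeState(state);  // calls free
    return digest;
}

// One-shot API: no state allocation when the whole input is available at once.
std::uint64_t hashOneShot(void const* data, std::size_t size)
{
    return XXH3_64bits(data, size);
}
```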
This change updates some incorrect Conan commands for Conan 2. As some flags do not exist in Conan 2, such as `--settings build_type=[configuration]`, the commands have been adjusted accordingly. This change further uses the org-level variables and secrets rather than the repo-level ones.
This change introduces two key optimizations:
* Mutex scope reduction: Limits the lock to individual partitions within `TaggedCache`, reducing contention.
* Decoupling: Removes the tight coupling between `LedgerHistory` and `TaggedCache`, improving modularity and testability.
Lock contention analysis based on eBPF showed significant improvements as a result of this change.
This change uploads built Conan dependencies to the Conan remote upon merge into the develop branch.
At the moment, whenever Conan dependencies change, we need to remember to manually push them to our Conan remote, so they are cached for future reuse. If we forget to do so, these changed dependencies need to be rebuilt over and over again, which can take a long time.
This change:
* Removes the patched Conan recipes from the `external/` directory.
* Adds instructions for contributors on how to obtain our patched recipes.
* Updates the Conan remote name and remote URL (the underlying package repository isn't changed).
* If the remote already exists, updates the URL instead of removing and re-adding.
* This is not done for the libXRPL job as it still uses Conan 1. This job will be switched to Conan 2 soon.
* Removes duplicate Conan remote CI pipeline steps.
* Overwrites the existing global.conf on MacOS and Windows machines, as those do not run CI pipelines in isolation but all share the same Conan installation; appending the same config over and over bloats the file.
This change updates BUILD.md for Conan 2 and adds fixes/workarounds for Apple Clang 17, Clang 20, and CMake 4. It also removes (from BUILD.md only) workarounds for compiler versions that we no longer support, e.g. Clang 15, and adds the compilation flag -Wno-deprecated-declarations to enable building with Clang 20 on Linux.
This change fixes an issue where the order of `PriceDataSeries` was out of sync between when `PriceOracle` was created and when it was updated. Although they are registered in the canonical order when updated, they are created using the order specified in the transaction; this change ensures that they are also registered in the canonical order when created.
This change decouples `ledger` from `xrpld/app`, and therefore fully clears the path to the modularisation of the ledger component. Before this change, `View.cpp` relied on `MPTokenAuthorize::authorize`; this change moves `MPTokenAuthorize::authorize` to `View.cpp` to invert the dependency, making ledger a standalone module.
The Payment transaction metadata is missing the `DeliveredAmount` field that displays the actual amount delivered to the destination excluding transfer fees. This amendment fixes this problem.
#5224 added (among other things) a `VaultWithdraw` transaction that allows setting the recipient of the withdrawn funds in the `Destination` transaction field. This technically turns this transaction into a payment, and in some respect the implementation does follow payment rules, e.g. enforcement of `lsfRequireDestTag` or `lsfDepositAuth`, or that MPT transfer has destination `MPToken`. However for IOUs, it missed verification that the destination account has a trust line to the asset issuer. Since the default behavior of `accountSendIOU` is to create this trust line (if missing), this is what `VaultWithdraw` currently does. This is incorrect, since the `Destination` might not be interested in holding the asset in question; this basically enables spammy transfers. This change, therefore, removes automatic creation of a trust line to the `Destination` account in `VaultWithdraw`.
This change updates RocksDB to its latest version. RocksDB is backward-compatible, so even though this is a major version bump, databases created with previous versions will continue to function.
The external RocksDB folder is removed, as the latest version available via Conan Center no longer needs custom patches.
Before `XRPLF/ci` images, we did not have a `dependencies:` job for clang-16, so `instrumentation:` had to build its own dependencies. Now we have clang-16 Conan dependencies built in a separate job that can be used.
This change includes `network_id` data in the validations and ledger subscription stream responses, as well as unit tests to validate the response fields. Fixes #4783.
This change adds support for `DomainID` to existing transactions `MPTokenIssuanceCreate` and `MPTokenIssuanceSet`.
In #5224 `DomainID` was added as an access control mechanism for `SingleAssetVault`. The actual implementation of this feature lies in `MPToken` and `MPTokenIssuance`, hence it makes sense to enable the use of `DomainID` also in `MPTokenIssuanceCreate` and `MPTokenIssuanceSet`, following same rules as in Vault:
* `MPTokenIssuanceCreate` and `MPTokenIssuanceSet` can only set `DomainID` if flag `MPTRequireAuth` is set.
* `MPTokenIssuanceCreate` requires that `DomainID` be a non-zero, uint256 number.
* `MPTokenIssuanceSet` allows `DomainID` to be zero (or empty) in which case it will remove `DomainID` from the `MPTokenIssuance` object.
The change is amendment-gated by `SingleAssetVault`. This is a non-breaking change because `SingleAssetVault` amendment is `Supported::no`, i.e. at this moment considered a work in progress, which cannot be enabled on the network.
For jobs running in containers, $GITHUB_WORKSPACE and ${{ github.workspace }} might not be the same directory. The actions/checkout step is supposed to checkout into `$GITHUB_WORKSPACE` and then add it to safe.directory (see instructions at https://github.com/actions/checkout), but that's apparently not happening for some container images. We can't be sure what is actually happening, so we preemptively add both directories to `safe.directory`. See also the GitHub issue opened in 2022 that still has not been resolved https://github.com/actions/runner/issues/2058.
The current implementation of rngfill is prone to false warnings from GCC about array bounds violations. Looking at the code, the implementation naively manipulates both the bytes count and the buffer pointer directly to ensure the trailing memcpy doesn't overrun the buffer. As expressed, there is a data dependency on both fields between loop iterations.
Now, ideally, an optimizing compiler would realize that these dependencies were unnecessary and end up restructuring its intermediate representation into a functionally equivalent form with them absent. However, the point at which this occurs may be disjoint from when warning analyses are performed, potentially rendering them more difficult to determine precisely.
In addition, it may also consume a portion of the budget the optimizer has allocated to attempting to improve a translation unit's performance. Given this is a function template which requires context-sensitive instantiation, this code would be more prone than most to being inlined, with a decrease in optimization budget corresponding to the effort the optimizer has already expended, having already optimized one or more calling functions. Thus, the scope for impacting the ultimate quality of the code generated is elevated.
For this change, we rearrange things so that the location and contents of each memcpy can be computed independently, relying on a simple loop iteration counter as the only changing input between iterations.
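A hedged sketch of the restructured shape (not the actual `rngfill` code): each iteration computes its own offset and chunk size from the loop counter alone, so nothing carries over between iterations except the counter itself.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Illustrative fill loop assuming a 64-bit engine; the real code is more general.
template <class Engine>
void rngfillSketch(void* buffer, std::size_t bytes, Engine& engine)
{
    auto* out = static_cast<std::uint8_t*>(buffer);
    for (std::size_t offset = 0; offset < bytes; offset += sizeof(std::uint64_t))
    {
        std::uint64_t const value = engine();
        // Both the destination and the copy size depend only on the loop counter.
        std::size_t const chunk = std::min(sizeof(value), bytes - offset);
        std::memcpy(out + offset, &value, chunk);
    }
}
```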
Remove `include(default)` from `conan/profiles/libxrpl`. This means that we will now rely on compiler workarounds stored elsewhere e.g. in global.conf.
This change reverts the usage of boost::shared_mutex back to std::shared_mutex. The change was originally introduced as a workaround for a bug in glibc 2.28 and older versions, which could cause threads using std::shared_mutex to stall. This issue primarily affected Ubuntu 18.04 and earlier distributions, which we no longer support.
This change fixes the MacOS pipeline issue by limiting GitHub to choose the existing runners, ensuring the new experimental runners are excluded until they are ready.
This issue was reported on the Javascript client library: XRPLF/xrpl.js#2611
The type filter (note: as of the latest version of rippled, the type parameter is deprecated) does not work as expected. This PR removes the type filter from the ledger command.
This PR updates several dependencies to their latest versions. Not all dependencies have been updated, as some need to be patched and some require additional code changes due to backward incompatibilities introduced by the version bump.
Due to rounding, the LPTokenBalance of the last LP might not match the LP's trustline balance. This was fixed for `AMMWithdraw` in `fixAMMv1_1` by adjusting the LPTokenBalance to be the same as the trustline balance. Since `AMMClawback` also performs a withdrawal, we need to adjust the LPTokenBalance in `AMMClawback` as well.
This change includes:
1. Refactored the `verifyAndAdjustLPTokenBalance` function in `AMMUtils`, which both `AMMWithdraw` and `AMMClawback` call to adjust the LPTokenBalance.
2. Added the unit test `testLastHolderLPTokenBalance` to test the scenario.
3. Modified the existing unit tests for `fixAMMClawbackRounding`.
Currently there is no easy way to track MPT-related transactions for the issuer. This change allows MPT transactions to show up in the issuer's AccountTx RPC (to align with how IOUs work).
* Update the `account_info` API so that the `allowTrustLineLocking` flag is included in the response.
* The proposed `TokenEscrow` amendment added an `allowTrustLineLocking` flag in the `AccountRoot` object.
* In the API response, under `account_flags`, there is now an `allowTrustLineLocking` field with a boolean (`true` or `false`) value.
* For reference, the XLS-85 Token-Enabled Escrows implementation can be found in https://github.com/XRPLF/rippled/pull/5185
The current version was copied from `antithesis-sdk-cpp` but there is no logical reason to require this specific version of CMake. This change downgrades the version to make the project build with older CMake versions.
Having `boost::boost` in `self.requires` makes Clio link with all Boost libraries. Among them are several Boost stacktrace backends that then all get linked in, which violates the ODR.
This change fixes the problem.
This PR refactors `CredentialHelpers` and removes some unnecessary dependencies as a step of modularization.
The ledger component is almost independent except that it references `MPTokenAuthorize` and `CredentialHelpers.h`, and the latter further references `Transactor.h`. This PR partially clears the path to modularizing the ledger component and decouples `CredentialHelpers` from xrpld.
This PR fixes a crash in tests when the test `Env` is run at trace/debug log level.
This issue only affects tests, and only when logging at trace/debug level, so it is really only relevant during rippled development and does not affect production servers.
The tests that ensure `tfInnerBatchTxn` won't block delegated transactions silently fail in `Delegate_test.cpp`. This change removes these cases from that file and adds them to `Batch_test.cpp` instead where they do not silently fail, because there the batch delegate results are explicitly checked. Moving them to that file further avoids refactoring many helper functions.
This change allows users to submit simulate requests from a multi-sign account without needing to specify the accounts that are doing the multi-signing, and fixes an error with simulate that allowed double-"signed" (both single-sign and multi-sign public keys are provided) transactions.
Multi-line log messages are hard to work with. Writing this handful of related messages as one message should make the log a tiny bit easier to manage.
The CMake statements that make it seem as if the number of cores used to build external project dependencies is halved don't actually do anything. This change removes these statements.
* Adds `tecNO_DELEGATE_PERMISSION` for unauthorized transactions sent by a delegated account.
* Returns `tecNO_TARGET` instead of `terNO_ACCOUNT` for the `DelegateSet` transaction if the delegated account does not exist.
* Fixes the issue of `tfFullyCanonicalSig` and `tfInnerBatchTxn` blocking transactions by adding `tfUniversal` to the permission-related masks in `txFlags.h`.
The change increases the default network I/O worker thread pool size from 2 to 6. This will improve stability, as worker thread saturation correlates to desyncs, particularly on high-traffic peers, such as hubs.
To be able to consume `rippled` in Conan 2, the recipe should specify transitive_headers for external libraries that are present in the exported header files. This change retains compatibility with Conan 1, where this flag was not present.
- Specification: https://github.com/XRPLF/XRPL-Standards/pull/272
- Amendment: `TokenEscrow`
- Enables escrowing of IOU and MPT tokens in addition to native XRP.
- Allows accounts to lock issued tokens (IOU/MPT) in escrow objects, with support for freeze, authorization, and transfer rates.
- Adds new ledger fields (`sfLockedAmount`, `sfIssuerNode`, etc.) to track locked balances for IOU and MPT escrows.
- Updates EscrowCreate, EscrowFinish, and EscrowCancel transaction logic to support IOU and MPT assets, including proper handling of trustlines and MPT authorization, transfer rates, and locked balances.
- Enforces invariant checks for escrowed IOU/MPT amounts.
- Extends GatewayBalances RPC to report locked (escrowed) balances.
The changes are focused on fixing NFT transactions bypassing the trustline authorization requirement and potential invariant violation when interacting with deep frozen trustlines.
* Add AMM bid/create/deposit/swap/withdraw/vote invariants:
- Deposit, Withdrawal invariants: `sqrt(asset1Balance * asset2Balance) >= LPTokens` (a sketch follows this list).
- Bid: `sqrt(asset1Balance * asset2Balance) > LPTokens` and the pool balances don't change.
- Create: `sqrt(asset1Balance * asset2Balance) == LPTokens`.
- Swap: `asset1BalanceAfter * asset2BalanceAfter >= asset1BalanceBefore * asset2BalanceBefore`
and `LPTokens` don't change.
- Vote: `LPTokens` and pool balances don't change.
- All AMM and swap transactions: amounts and tokens are greater than zero, except on withdrawal if all tokens
are withdrawn.
* Add AMM deposit and withdraw rounding to ensure AMM invariant:
- On deposit, tokens out are rounded downward and deposit amount is rounded upward.
- On withdrawal, tokens in are rounded upward and withdrawal amount is rounded downward.
* Add Order Book Offer invariant to verify consumed amounts. Consumed amounts are less than the offer.
* Fix Bid validation. `AuthAccount` can't have duplicate accounts or the submitter account.
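A hedged illustration of the deposit/withdrawal invariant from the list above, using doubles for readability; the real checks operate on the ledger's amount types with the directed rounding described above:

```cpp
#include <cmath>

// Illustration only: the pool's constant-product geometric mean must never fall
// below the outstanding LPToken balance after a deposit or withdrawal.
bool ammDepositWithdrawInvariantHolds(
    double asset1Balance,
    double asset2Balance,
    double lpTokens)
{
    return std::sqrt(asset1Balance * asset2Balance) >= lpTokens;
}
```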
This commit makes the ledger close in env.meta conditional: the ledger is only closed if it hasn't already been closed (i.e. if the current ledger still has transactions in it). This change will make it a bit easier to use, as it will still work if you close the ledger outside of this usage. Previously, if you accidentally closed the ledger outside of the meta function, it would segfault and was incredibly difficult to debug.
This commit introduces the following changes:
* Renames the `vp_enable` config option to `vp_base_squelch_enable` to enable squelching for validators.
* Removes `vp_squelch` config option which was used to configure whether to send squelch messages to peers or not. With this flag removed, if squelching is enabled, squelch messages will be sent. This was an option used for debugging.
* Introduces a temporary `vp_base_squelch_max_trusted_peers` config option to change the max number of peers who are selected as validator message sources. This is a temporary option, which will be removed once a good value is found.
* Adds a traffic counter to count the number of times peers ignored squelch messages and kept sending messages for squelched validators.
* Moves the decision whether squelching is enabled and ready into Slot.h.
- Specification: [XRPLF/XRPL-Standards 56](https://github.com/XRPLF/XRPL-Standards/blob/master/XLS-0056d-batch/README.md)
- Amendment: `Batch`
- Implements execution of multiple transactions within a single batch transaction with four execution modes: `tfAllOrNothing`, `tfOnlyOne`, `tfUntilFailure`, and `tfIndependent`.
- Enables atomic multi-party transactions where multiple accounts can participate in a single batch, with up to 8 inner transactions and 8 batch signers per batch transaction.
- Inner transactions use `tfInnerBatchTxn` flag with zero fees, no signature, and empty signing public key.
- Inner transactions are applied after the outer batch succeeds via the `applyBatchTransactions` function in apply.cpp.
- Network layer prevents relay of transactions with `tfInnerBatchTxn` flag - each peer applies inner transactions locally from the batch.
- Batch transactions are excluded from AccountDelegate permissions but inner transactions retain full delegation support.
- Metadata includes `ParentBatchID` linking inner transactions to their containing batch for traceability and auditing.
- Extended STTx with batch-specific signature verification methods and added protocol structures (`sfRawTransactions`, `sfBatchSigners`).
Before #5224, the pseudo-account ID was calculated using a prefix expressed as a `std::uint16_t`. The refactoring that moved the pseudo-account ID calculation to View.cpp accidentally changed the prefix type to `int` (deduced from `auto i = 0`), which in turn changed the length of the input to `sha512Half` from 2 bytes to 4, altering the result.
This resulted in the function calculating a different pseudo-account ID after the refactoring, breaking the ledger. This impacts AMMCreate, even when the `SingleAssetVault` amendment is not active. This change restores the prefix type to `std::uint16_t`.
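A small standalone illustration of the underlying pitfall (not the project's code): the deduced type of the prefix determines how many bytes are fed into the hash.

```cpp
#include <cstdint>
#include <iostream>

int main()
{
    auto i = 0;                      // deduced as int: 4 bytes on common platforms
    std::uint16_t const prefix = 0;  // explicitly 2 bytes

    // Hashing sizeof(...) bytes of each variable feeds inputs of different lengths
    // to the hash function, so the digests differ even though the values are equal.
    std::cout << sizeof(i) << " vs " << sizeof(prefix) << '\n';  // typically "4 vs 2"
}
```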
- Specification: XRPLF/XRPL-Standards#239
- Amendment: `SingleAssetVault`
- Implements a vault feature used to store a fungible asset (XRP, IOU, or MPT, but not NFT) and to receive shares in the vault (an MPT) in exchange.
- A vault can be private or public.
- A private vault can use permissioned domains, subject to the `PermissionedDomains` amendment.
- Shares can be exchanged back into asset with `VaultWithdraw`.
- Permissions on the asset in the vault are transitively applied on shares in the vault.
- Issuer of the asset in the vault can clawback with `VaultClawback`.
- Extended `MPTokenIssuance` with `DomainID`, used by the permissioned domain on the vault shares.
Co-authored-by: John Freeman <jfreeman08@gmail.com>
Using std::barrier performs extremely poorly (~1 hour vs ~1 minute to run the test suite) in certain macOS environments.
To unblock our macOS CI pipeline, std::barrier has been replaced with a custom mutex-based barrier (Barrier) that significantly improves performance without compromising correctness.
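A minimal sketch of a mutex/condition-variable barrier of this kind; the actual `Barrier` class in the codebase may differ in interface and details:

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>

// Reusable barrier: the last thread to arrive releases everyone and resets the count.
class BarrierSketch
{
public:
    explicit BarrierSketch(std::size_t count) : threshold_(count), count_(count)
    {
    }

    void arrive_and_wait()
    {
        std::unique_lock lock(mutex_);
        auto const generation = generation_;
        if (--count_ == 0)
        {
            ++generation_;        // start a new generation
            count_ = threshold_;  // reset for reuse
            cv_.notify_all();
        }
        else
        {
            cv_.wait(lock, [&] { return generation != generation_; });
        }
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::size_t const threshold_;
    std::size_t count_;
    std::size_t generation_ = 0;
};
```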
When unit tests run in parallel, multiple threads can write into one coverage file, corrupting the output files, and then gcovr won't be able to parse the corrupted file. This change adds -fprofile-update=atomic as instructed by https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68080.
This change implements the account permission delegation described in XLS-75d, see https://github.com/XRPLF/XRPL-Standards/pull/257.
* Introduces transaction-level and granular permissions that can be delegated to other accounts.
* Adds `DelegateSet` transaction to grant specified permissions to another account.
* Adds `ltDelegate` ledger object to maintain the permission list for delegating/delegated account pair.
* Adds an optional `Delegate` field in common fields, allowing a delegated account to send transactions on behalf of the delegating account within the granted permission scope. The `Account` field remains the delegating account; the `Delegate` field specifies the delegated account. The transaction is signed by the delegated account.
This change updates the squelching logic to accept squelch messages for untrusted validators. As a result, servers will also squelch untrusted validator messages, reducing the duplicate traffic they generate.
In particular:
* Updates squelch message handling logic to squelch messages for all validators, not only trusted ones.
* Updates the logic to send squelch messages to peers that don't squelch themselves
* Increases the threshold for the number of messages that a peer has to deliver to consider it as a candidate for validator messages.
Combines four related changes:
1. "Decrease `shouldRelay` limit to 30s." Pretty self-explanatory. Currently, the limit is 5 minutes, by which point the `HashRouter` entry could have expired, making this transaction look brand new (and thus causing it to be relayed back to peers which have sent it to us recently).
2. "Give a transaction more chances to be retried." Will put a transaction into `LedgerMaster`'s held transactions if the transaction gets a `ter`, `tel`, or `tef` result. Old behavior was just `ter`.
* Additionally, to prevent a transaction from being repeatedly held indefinitely, it must meet some extra conditions. (Documented in a comment in the code.)
3. "Pop all transactions with sequential sequences, or tickets." When a transaction is processed successfully, currently, one held transaction for the same account (if any) will be popped out of the held transactions list, and queued up for the next transaction batch. This change pops all transactions for the account, but only if they have sequential sequences (for non-ticket transactions) or use a ticket. This issue was identified from interactions with @mtrippled's #4504, which was merged, but unfortunately reverted later by #4852. When the batches were spaced out, it could potentially take a very long time for a large number of held transactions for an account to get processed through. However, whether batched or not, this change will help get held transactions cleared out, particularly if a missing earlier transaction is what held them up.
4. "Process held transactions through existing NetworkOPs batching." In the current processing, at the end of each consensus round, all held transactions are directly applied to the open ledger, then the held list is reset. This bypasses all of the logic in `NetworkOPs::apply` which, among other things, broadcasts successful transactions to peers. This means that the transaction may not get broadcast to peers for a really long time (5 minutes in the current implementation, or 30 seconds with this first commit). If the node is a bottleneck (either due to network configuration, or because the transaction was submitted locally), the transaction may not be seen by any other nodes or validators before it expires or causes other problems.
This change addresses an issue where `rippled` attempts to connect to an IPv6 address, even when the local network lacks IPv6 support, resulting in a "Network is unreachable" error.
The fix replaces the custom endpoint selection logic with `boost::asio::async_connect`, which sequentially attempts to connect to the available endpoints until one succeeds or all fail.
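For reference, a minimal standalone sketch of the Boost.Asio range overload in question; the actual code wires this into rippled's own connection classes and handlers:

```cpp
#include <boost/asio.hpp>

#include <iostream>

// Tries each resolved endpoint in order until one connects or all fail.
void connectSketch(
    boost::asio::ip::tcp::socket& socket,
    boost::asio::ip::tcp::resolver::results_type const& endpoints)
{
    boost::asio::async_connect(
        socket,
        endpoints,
        [](boost::system::error_code const& ec,
           boost::asio::ip::tcp::endpoint const& endpoint) {
            if (ec)
                std::cerr << "all endpoints failed: " << ec.message() << '\n';
            else
                std::cout << "connected to " << endpoint << '\n';
        });
}
```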
This PR replaces the word `failed` with `failure` in any test names and renames some test files to fix MSVC warnings, so that it is easier to search through the test output to find tests that failed.
This change fixes a number of issues involved with CTID:
* CTID is not present on all RPC tx transactions.
* rpcWRONG_NETWORK is missing from ErrorCodes.cpp
When using subscribe at the admin RPC port to send webhooks for the transaction stream to a backend, on large(r) ledgers the endpoint receives fewer HTTP POSTs with TX information than the number of transactions in a ledger. This change removes the hardcoded queue length to avoid dropping TX notifications for the admin-only command. In addition, the per-request TTL for outgoing RPC HTTP calls has been reduced from 10 minutes to 30 seconds.
This change introduces a new fix amendment (`fixPayChanV1`) that prevents the creation of new `PaymentChannelCreate` transactions with a `CancelAfter` time less than the current ledger time. It piggybacks off of fix1571.
Once the amendment is activated, creating a new `PaymentChannel` will require that if you specify the `CancelAfter` time/value, that value must be greater than or equal to the current ledger time.
Currently users can create a payment channel where the `CancelAfter` time is before the current ledger time. This results in the payment channel being immediately closed on the next PaymentChannel transaction.
This PR splits out `ledger_entry` tests into its own file (`LedgerEntry_test.cpp`) and alphabetizes the helper functions in `LedgerEntry.cpp`. These commits were split out of #5237 to make that PR a little more manageable, since these basic trivial changes are most of the diff. There is no code change, just moving code around.
Adds metric counters for the following P2P message types:
* Untrusted proposal and validation messages
* Duplicate proposal, validation and transaction messages
It’s possible for this to happen legitimately if a set of peers, including a validator, are connected in a cycle, and the latency and message processing time between those peers is significantly less than the latency between the validator and the last peer. It’s unlikely in the real world, but obviously easy to simulate with Antithesis.
The CI pipelines have been constantly hitting Docker Hub's public rate limiting since we increased the number of jobs we're running. This change switches over to images hosted in GitHub's registry.
As part of import optimization, a transitive include had been removed that defined `BOOST_COMP_MSVC` on Windows. In unity builds, this definition was pulled in, but in non-unity builds it was not - causing a compilation error. An inspection of the Boost code revealed that we can just gate the statements by `_MSC_VER` instead. A `#pragma message` is added to verify that the statement is only printed on Windows builds.
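As a rough sketch (placeholder message and contents, not the actual code), the gating looks like this:

```cpp
// _MSC_VER is defined by MSVC itself, so no Boost header is needed for the check.
#if defined(_MSC_VER)
#pragma message("MSVC detected: enabling the Windows-only code path")
// Windows-only statements previously guarded by BOOST_COMP_MSVC go here.
#endif
```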
The main goal of this optimisation is memory reduction in SHAMapTreeNodes by introducing intrusive pointers instead of standard std::shared_ptr and std::weak_ptr.
In preparation for a potential reference fee change we would like to verify that fee change works as expected. The first step is to fix all unit tests to be able to work with different reference fee values.
- Detects if the consensus process is "stalled". If it is, then we can declare a
consensus and end successfully even if we do not have 80% agreement on
our proposal.
- "Stalled" is defined as:
- We have a close time consensus
- Each disputed transaction is individually stalled:
- It has been in the final "stuck" 95% requirement for at least 2
(avMIN_ROUNDS) "inner rounds" of phaseEstablish,
- and either all of the other trusted proposers or this validator, if proposing,
have had the same vote(s) for at least 4 (avSTALLED_ROUNDS) "inner
rounds", and at least 80% of the validators (including this one, if
appropriate) agree about the vote (whether yes or no).
- If we have been in the establish phase for more than 10x the previous
consensus establish phase's time, then consensus is considered "expired",
and we will leave the round, which sends a partial validation (indicating
that the node is moving on without validating). Two restrictions avoid
prematurely exiting, or having an extended exit in extreme situations.
- The 10x time is clamped to be within a range of 15s
(ledgerMAX_CONSENSUS) to 120s (ledgerABANDON_CONSENSUS).
- If consensus has not had an opportunity to walk through all avalanche
states (defined as not going through 8 "inner rounds" of phaseEstablish),
then ConsensusState::Expired is treated as ConsensusState::No.
- When enough nodes leave the round, any remaining nodes will see they've
fallen behind, and move on, too, generally before hitting the timeout. Any
validations or partial validations sent during this time will help the
consensus process bring the nodes back together.
This change removes the existing undefined behavior from `LogicError`, so we can be certain that there will always be a stacktrace.
De-referencing a null pointer is an old trick to generate `SIGSEGV`, which would typically also create a stacktrace. However, it is also undefined behaviour, and compilers can do something else. A more robust way to create a stacktrace while crashing the program is to use `std::abort`, which we have also used in this location for a long time. If we combine the two, we might not get the expected behaviour: the null pointer dereference followed by `std::abort` may, depending on how certain compiler versions handle it, not immediately cause a crash. We have observed the stacktrace being wiped instead, the thread being put in an indeterminate state, and then a stacktrace being created without any useful information.
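As a rough sketch of the direction described above (not the actual `LogicError` implementation), the crash path can rely on `std::abort` alone, which keeps the behaviour well defined:
```cpp
#include <cstdio>
#include <cstdlib>

// Illustrative sketch: report the error, then terminate via std::abort only.
// Avoiding the null-pointer dereference keeps the behaviour well defined, so
// the stacktrace captured at the abort point is trustworthy.
[[noreturn]] void logicErrorAndAbort(char const* message) noexcept
{
    std::fputs(message, stderr);
    std::fputc('\n', stderr);
    std::abort();  // well-defined termination; no UB-based SIGSEGV trick
}
```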
The Trustline RPC `no_ripple` flag gets set depending on the `lsfDefaultRipple` flag, which is not a flag on the trust line but on the account root. The `lsfDefaultRipple` flag does not provide any insight into whether this particular trust line has the `lsfLowNoRipple` or `lsfHighNoRipple` flag set, so it should not be used here at all. This change simplifies the logic.
The `end_marker` is used to limit the range of ledger entries to fetch. If `end_marker` is less than `marker`, a crash can occur. This change adds an additional check.
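A minimal sketch of the added guard, using simplified types; the names below are illustrative rather than the real RPC handler code:
```cpp
#include <cstdint>
#include <optional>
#include <string>

// Hypothetical request-validation fragment, not the actual handler code:
// reject the request up front when end_marker precedes marker.
struct RangeError
{
    std::string message;
};

std::optional<RangeError>
checkLedgerDataRange(std::uint64_t marker, std::uint64_t endMarker)
{
    if (endMarker < marker)
        return RangeError{"end_marker must not be less than marker"};
    return std::nullopt;  // range is well formed
}
```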
Changes the error to `malformedAddress` for `permissioned_domain` in the `ledger_entry` RPC when the account is not a string. This makes it clearer to the user what is wrong with their request.
What the LoadManager class does is stall detection, which is not the same as deadlock detection. Under severe CPU starvation, LoadManager will currently intentionally crash rippled, reporting `LogicError: Deadlock detected`. This error message is misleading, as the condition being detected is not a deadlock. This change fixes and refactors the code in response.
Manually updating numFeatures is an annoying process that is easily forgotten and leads to frequent merge conflicts. This change takes advantage of the `XRPL_FEATURE` and `XRPL_FIX` macros, and adds a new `XRPL_RETIRE` macro, to automatically set `numFeatures`.
The codebase is filled with includes that are unused, and which thus can be removed. At the same time, the files often do not include all headers that contain the definitions used in those files. This change uses clang-format and clang-tidy to clean up the includes, with minor manual intervention to ensure the code compiles on all platforms.
- PR #5228 added assert=TRUE and werr=TRUE CMake flags to the
build/action.yml script which is used by all CI jobs to build rippled,
ensuring those flags were always set. The assumption was that only the
CI jobs used that script, so any extra time cost was offset by the
benefit of the extra checks. That assumption was incorrect. That
script is used by other downstream projects. Therefore, those flags
have been moved into the individual CI jobs' "cmake-args" parameter
passed to build/action.yml. This will have the same effect for CI jobs
without any side effects.
Combine multiple related debug log data points into a single
message. Allows quick correlation of events that
previously were either not logged or, if logged, strewn
across multiple lines, making correlation difficult.
The Heartbeat Timer and consensus ledger accept processing
each have this capability.
Also guarantees that log entries will be written if the
node is a validator, regardless of log severity level.
Otherwise, the level of these messages is at INFO severity.
* Add logging for amendment voting decision process
* When counting "received validations" to determine quorum, count the number of validators actually voting, not the total number of possible votes.
The current comment in the example cfg file incorrectly mentions both "may" and "must". This change fixes this comment to clarify that the default port of hosts is 2459 and that specifying it is therefore optional. It further sets the default port to 2459 instead of the legacy 51235.
- Drop duplicate outgoing TMGetLedger messages per peer
- Allow a retry after 30s in case of peer or network congestion.
- Addresses RIPD-1870
- (Changes levelization. That is not desirable, and will need to be fixed.)
- Drop duplicate incoming TMGetLedger messages per peer
- Allow a retry after 15s in case of peer or network congestion.
- The requestCookie is ignored when computing the hash, thus increasing
the chances of detecting duplicate messages.
- With duplicate messages, keep track of the different requestCookies
(or lack of cookie). When work is finally done for a given request,
send the response to all the peers that are waiting on the request,
sending one message per peer, including all the cookies and
a "directResponse" flag indicating the data is intended for the
sender, too.
- Addresses RIPD-1871
- Drop duplicate incoming TMLedgerData messages
- Addresses RIPD-1869
- Improve logging related to ledger acquisition
- Class "CanProcess" to keep track of processing of distinct items
---------
Co-authored-by: Valentin Balaschenko <13349202+vlntb@users.noreply.github.com>
This change enhances the filtering in the ledger, ledger_data, and account_objects methods by also supporting filtering by the canonical name of the LedgerEntryType using case-insensitive matching.
Rewrites the code so that the lock is not held during the callback. Instead it locks twice, once before, and once after. This is safe due to the structure of the code, but is checked after the second lock. This allows mutex_ to be changed back to a regular mutex.
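The pattern, reduced to a toy class (names and structure are illustrative, not the actual code):
```cpp
#include <functional>
#include <mutex>

// Illustrative pattern only: instead of holding the mutex across the
// user-supplied callback, take the lock twice - once before to read state,
// once after to verify/update it - so the callback runs unlocked.
class Example
{
    std::mutex mutex_;  // can stay a plain mutex with this structure
    int state_ = 0;

public:
    void withCallback(std::function<void(int)> const& callback)
    {
        int snapshot;
        {
            std::lock_guard lock(mutex_);
            snapshot = state_;  // first lock: read what the callback needs
        }
        callback(snapshot);     // callback runs without the lock held
        {
            std::lock_guard lock(mutex_);
            // second lock: re-check assumptions before mutating state
            if (state_ == snapshot)
                ++state_;
        }
    }
};
```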
In PeerImp.cpp, if a function is a message handler (`onMessage`) or is called directly from a message handler, then it should use `fee_`, since the charge function is called when the handler returns (`onMessageEnd`). If the function is not a message handler, such as a job queue item, it should continue to use `charge`.
- Rename the job in missing-commits.yml from "check" to "up_to_date",
because other jobs named "check" prevent merges, but this one should
not prevent merges. How else are branches going to get caught up?
- Move the job in instrumentation.yml to nix.yml, but keep it entirely
independent.
If the permissioned domains amendment XLS-80 is enabled before credentials XLS-70, then the permissioned domain users will not be able to match any credentials. The changes here prevent the creation of any permissioned domain objects if credentials are not enabled.
Make `simulate` RPC easier to use:
* Prevent the use of `seed`, `secret`, `seed_hex`, and `passphrase` fields (to avoid confusion with the signing methods).
* Add autofilling of the `NetworkID` field.
- Also get the branch name.
- Use rev-parse instead of describe to get a clean hash.
- Return the git hash and branch name in server_info for admin
connections.
- Include git hash and branch name on separate lines in --version.
- spec: XRPLF/XRPL-Standards#220
- amendment: "DeepFreeze"
- implemented deep freeze spec to allow token issuers to prevent currency holders from being able to acquire more of these tokens.
- in combination with normal freeze, deep freeze effectively prevents any trust line balance change of a currency holder (except direct issuer <-> holder payments).
- added 2 new invariant checks to verify that deep freeze cannot be enacted without normal freeze and transfer is not frozen.
- made some fixes to existing freeze handling.
Co-authored-by: Ed Hennis <ed@ripple.com>
Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
- Fix an erroneous high fee penalty that peers could incur for sending
older transactions.
- Update to the fees charged for imposing a load on the server.
- Prevent the relaying of internal pseudo-transactions.
- Before: Pseudo-transactions received from a peer will fail the signature
check, even if they were requested (using TMGetObjectByHash), because
they have no signature. This causes the peer to be charged for an
invalid signature.
- After: Pseudo-transactions are put into the global cache
(TransactionMaster) only. If the transaction is not part of
a TMTransactions batch, the peer is charged an unwanted data fee.
These fees will not be a problem in the normal course of operations,
but should dissuade peers from behaving badly by sending a bunch of
junk.
- Improve logging: include the reason for fees charged to a peer.
Co-authored-by: Ed Hennis <ed@ripple.com>
* Has more steps, but allows merges to develop to continue when a
beta / RC is pending, increasing developer velocity.
* Add a CI job to check that no reverse merges have been missed.
* Add some useful scripts in bin/git:
* Set up upstreams as expected for safer pushes
* Squash a bunch of branches
* Set the version number
* Resolves an issue introduced in #5111, which inadvertently removed the
-Wno-maybe-uninitialized compiler option from some xrpl.libxrpl
modules. This resulted in new "may be used uninitialized" build
warnings, first noticed in the "protocol" module. When compiling with
derr=TRUE, those warnings became errors, which made the build fail.
* Github CI actions will build with the assert and werr options turned
on. This will cause CI jobs to fail if a developer introduces a new
compiler warning, or causes an assert to fail in release builds.
* Includes the OS and compiler version in the linux dependencies jobs in
the "check environment" step.
* Translates the `unity` build option into `CMAKE_UNITY_BUILD` setting.
The LEDGER_ENTRY macro now takes an additional parameter, which makes it harder to forget to add the new field to jss.h and to the list of account_objects/ledger_data filters.
Replace Issue in STIssue with Asset. STIssue with MPTIssue is only used in MPT tests.
Will be used in Vault and in transactions with STIssue fields once MPT is integrated into DEX.
* Rename ASSERT to XRPL_ASSERT
* Upgrade to Antithesis SDK 0.4.4, and use new 0.4.4 features
* automatic cast to bool, like assert
* Add instrumentation workflow to verify build with instrumentation enabled
Adds two CMake functions:
* add_module(library subdirectory): Declares an OBJECT "library" (a CMake abstraction for a collection of object files) with sources from the given subdirectory of the given library, representing a module. Isolates the module's headers by creating a subdirectory in the build directory, e.g. .build/tmp123, that contains just a symlink, e.g. .build/tmp123/basics, to the module's header directory, e.g. include/xrpl/basics, in the source directory, and putting .build/tmp123 (but not include/xrpl) on the include path of the module sources. This prevents the module sources from including headers not explicitly linked to the module in CMake with target_link_libraries.
* target_link_modules(library scope modules...): Links the library target to each of the module targets, and removes their sources from its source list (so they are not compiled and linked twice).
Uses these functions to separate and explicitly link modules in libxrpl:
Level 01: beast
Level 02: basics
Level 03: json, crypto
Level 04: protocol
Level 05: resource, server
* Copy Antithesis SDK version 0.4.0 to directory external/
* Add build option `voidstar` to enable instrumentation with Antithesis SDK
* Define instrumentation macros ASSERT and UNREACHABLE in terms of regular C assert
* Replace asserts with named ASSERT or UNREACHABLE
* Add UNREACHABLE to LogicError
* Document instrumentation macros in CONTRIBUTING.md
`STNumber` lets objects and transactions contain multiple fields for
quantities of XRP, IOU, or MPT without duplicating information about the
"issue" (represented by `STIssue`). It is a straightforward serialization of
the `Number` type that uniformly represents those quantities.
---------
Co-authored-by: John Freeman <jfreeman08@gmail.com>
Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
* 2.2.2 changed the functions acquireAsync and NetworkOPsImp::recvValidation to add an item to a collection under lock, unlock, do some work, then lock again to remove the item. It will deadlock if an exception is thrown while adding the item - before unlocking.
* Replace ScopedUnlock with scope_unlock.
Move the newest information to the top, i.e., use reverse chronological order within each of the two sections ("API Versions" and "XRP Ledger server versions")
The page_size will soon be made configurable with #5135, making this
re-ordering necessary.
When opening a SQLite connection, there are specific pragmas set with
commonPragmas.
In particular, PRAGMA journal_mode creates the journal file and locks the
page_size; as of this commit, this sets the page size to the default
value of 4096. Coincidentally, the hardcoded page_size was also 4096, so
no issue was noticed.
Update book_changes RPC to reduce latency, add "validated" field, and accept shortcut strings (current, closed, validated) for ledger_index.
`"validated": true` indicates that the transaction has been included in a validated ledger so the result of the transaction is immutable.
Fix #5033, Fix #5034, Fix #5035, Fix #5036
---------
Co-authored-by: Bronek Kozicki <brok@incorrekt.com>
When rippled initiates a connection to SQLite3, rippled sends a "PRAGMA"
statement defining the maximum number of pages allowed in the database.
Update the max_page_count so it is consistent with the default for newer
versions of SQLite3. Increasing max_page_count is critical for keeping
full history servers online.
Fix #5102
* Retry some failed RPC connections / commands in unit tests
* Remove orphaned `getAccounts` function
Co-authored-by: John Freeman <jfreeman08@gmail.com>
* upstream/master:
Set version to 2.2.2
Allow only 1 job queue slot for each validation ledger check
Allow only 1 job queue slot for acquiring inbound ledger.
Track latencies of certain code blocks, and log if they take too long
* refactor filtering of validations to specifically avoid
concurrent checkAccept() calls for the same validation ledger hash.
* Log when duplicate concurrent validation requests are filtered.
* RAII for containers that track concurrent validation requests.
* Log when duplicate concurrent inbound ledger acquisitions are filtered.
* RAII for containers that track concurrent inbound ledger acquisitions.
* Comment on when to asynchronously acquire inbound ledgers, which
is possible to be always OK, but should have further review.
* Other small logging changes
Co-authored-by: Ed Hennis <ed@ripple.com>
Implements a CI workflow that detects when a new version of libxrpl is
proposed, uploads it to artifactory under the `clio` channel and
notifies Clio's CI to check this newly proposed version.
* Add fixNFTokenPageLinks amendment:
It was discovered that under rare circumstances the links between
NFTokenPages could be removed. If this happens, then the
account_objects and account_nfts RPC commands under-report the
NFTokens owned by an account.
The fixNFTokenPageLinks amendment does the following to address
the problem:
- It fixes the underlying problem so no further broken links
should be created.
- It adds Invariants so, if such damage were introduced in the
future, an invariant would stop it.
- It adds a new FixLedgerState transaction that repairs
directories that were damaged in this fashion.
- It adds unit tests for all of it.
* `account_objects` returns an invalid field error if `type` is not supported.
This includes objects an account can't own, or which are unsupported by `account_objects`
* Includes:
* Amendments
* Directory Node
* Fee Settings
* Ledger Hashes
* Negative UNL
The names of the files should reflect the name of the Dir class.
Co-authored-by: Zack Brunson <Zshooter@gmail.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
* fix CTID in tx command returns invalidParams on lowercase hex
* test mixed case and change auto to explicit type
* add header cctype because std::tolower is called
* remove unused local variable
* change test case comment from 'lowercase' to 'mixed case'
---------
Co-authored-by: Zack Brunson <Zshooter@gmail.com>
* Add feature / amendment "InvariantsV1_1"
* Adds invariant AccountRootsDeletedClean:
* Checks that a deleted account doesn't leave any directly
accessible artifacts behind.
* Always tests, but only changes the transaction result if
featureInvariantsV1_1 is enabled.
* Unit tests.
* Resolves #4638
* [FOLD] Review feedback from @gregtatcam:
* Fix unused variable warning
* Improve Invariant test const correctness
* [FOLD] Review feedback from @mvadari:
* Centralize the account keylet function list, and some optimization
* [FOLD] Some structured binding doesn't work in clang
* [FOLD] Review feedback 2 from @mvadari:
* Clean up and clarify some comments.
* [FOLD] Change InvariantsV1_1 to unsupported
* Will allow multiple PRs to be merged over time using the same amendment.
* fixup! [FOLD] Change InvariantsV1_1 to unsupported
* [FOLD] Update and clarify some comments. No code changes.
* Move CMake directory
* Rearrange sources
* Rewrite includes
* Recompute loops
* Fix merge issue and formatting
---------
Co-authored-by: Pretty Printer <cpp@ripple.com>
* fix "account_nfts" with unassociated marker returning issue
* create unit test for fixing nft page invalid marker not returning error
add more test
change test name
create unit test
* fix "account_nfts" with unassociated marker returning issue
* fix "account_nfts" with unassociated marker returning issue
* fix "account_nfts" with unassociated marker returning issue
* fix "account_nfts" with unassociated marker returning issue
* fix "account_nfts" with unassociated marker returning issue
* fix "account_nfts" with unassociated marker returning issue
* fix "account_nfts" with unassociated marker returning issue
* fix "account_nfts" with unassociated marker returning issue
* [FOLD] accumulated review suggestions
* move BEAST check out of lambda function
---------
Authored-by: Scott Schurr <scott@ripple.com>
* fixInnerObjTemplate2 amendment:
Apply inner object templates to all remaining (non-AMM)
inner objects.
Adds a unit test for applying the template to sfMajorities.
Other remaining inner objects showed no problems having
templates applied.
* Move CMake directory
* Rearrange sources
* Rewrite includes
* Recompute loops
---------
Co-authored-by: Pretty Printer <cpp@ripple.com>
Fix interactions between NFTokenOffers and trust lines.
Since NFTokenAcceptOffer did not check the trust line on which
the issuer receives the transfer fee, if the issuer deleted the
trust line after NFTokenCreateOffer, the trust line would be
re-created for the issuer by NFTokenAcceptOffer. That's fixed.
Resolves #4925.
Fixes issue #4937.
The fixReducedOffersV1 amendment fixed certain forms of offer
modification that could lead to blocked order books. Reduced
offers can block order books if the effective quality of the
reduced offer is worse than the quality of the original offer
(from the perspective of the taker). It turns out that, for
small values, the quality of the reduced offer can be
significantly affected by the rounding mode used during
scaling computations.
Issue #4937 identified an additional code path that modified
offers in a way that could lead to blocked order books. This
commit changes the rounding in that newly located code path so
the quality of the modified offer is never worse than the
quality of the offer as it was originally placed.
It is possible that additional ways of producing blocking
offers will come to light. Therefore there may be a future
need for a V3 amendment.
* Add trap_tx_hash command line option
This new option can be used only if replay is also enabled. It takes a transaction hash from the ledger loaded for replay, and will cause a specific line to be hit in Transactor.cpp, right before the selected transaction is applied.
Price Oracle data-series logic uses `unordered_map` to update the Oracle object.
This results in different servers disagreeing on the order of that hash table.
Consequently, the generated ledgers will have different hashes.
The fix uses `map` instead to guarantee the order of the token pairs
in the data-series.
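A minimal sketch of why the ordered container matters here; the types below are illustrative, not the real Oracle implementation:
```cpp
#include <cstdint>
#include <map>
#include <string>
#include <utility>

// Illustrative only: keying the data series with std::map guarantees that all
// servers iterate token pairs in the same (lexicographic) order, so the
// serialized Oracle object, and therefore the ledger hash, is identical
// everywhere. std::unordered_map gives no such ordering guarantee.
using TokenPair = std::pair<std::string, std::string>;  // e.g. {"XRP", "USD"}

struct PriceEntry
{
    std::uint64_t price;  // scaled price
    std::uint8_t scale;   // number of decimal places
};

using DataSeries = std::map<TokenPair, PriceEntry>;  // deterministic iteration order
```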
Due to rounding, the LPTokenBalance of the last
Liquidity Provider (LP) might not match this LP's
trustline balance. This fix sets LPTokenBalance on
the last LP withdrawal to this LP's LPToken trustline
balance.
Single path AMM offer has to factor in the transfer in rate
when calculating the upper bound quality and the quality function
because single path AMM's offer quality is not constant.
This fix factors in the transfer fee in
BookStep::adjustQualityWithFees().
* Fix AMM offer rounding and low quality LOB offer blocking AMM:
A single-path AMM offer, with an account offer on the DEX, is always generated
starting with the takerPays side first, which is rounded up, and then
the takerGets side, which is rounded down. This rounding ensures that the pool's
product invariant is maintained. However, when one of the offer's sides
is XRP, this rounding can result in the AMM offer having a lower
quality, potentially causing offer generation to fail if the quality
is lower than the account's offer quality.
To address this issue, the proposed fix adjusts the offer generation process
to start with the XRP side first and always round it down. This results
in a smaller offer size, improving the offer's quality. Regardless of whether the
offer involves XRP or not, the rounding is done so that the offer size is minimized.
This change still ensures the product invariant, as the other generated
side is the exact result of the swap-in or swap-out equations.
If liquidity can be provided by both an AMM and a LOB offer on offer crossing,
then the AMM offer is generated so that it matches the LOB offer quality. If the LOB
offer quality is less than the limit quality, then the generated AMM offer quality
is also less than the limit quality and the offer doesn't cross. To address
this issue, if the LOB quality is better than the limit quality, then use the LOB
quality to generate the AMM offer. Otherwise, don't use that quality to generate
the AMM offer. In this case, the limitOut() function in StrandFlow limits
the out amount, matching the strand's quality to the limit quality and consuming
maximum AMM liquidity.
Rounding in the payment engine is causing an assert to sometimes fire
with "dust" amounts. This is causing issues when running debug builds of
rippled. This issue will be addressed, but the assert is no longer
serving its purpose.
I am resigning from my role as maintainer of the `rippled` codebase.
Please update repository permissions accordingly, prior to merging this pull request.
Thanks to everyone who has contributed, especially those whom I had the opportunity to closely collaborate with.
The AMM has an invariant for swaps where:
new_balance_1*new_balance_2 >= old_balance_1*old_balance_2
Due to rounding, this invariant could sometimes be violated (although by
very small amounts).
This patch introduces an amendment `fixAMMRounding` that changes the
rounding to always favor the AMM. Doing this should maintain the
invariant.
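Expressed as a toy predicate (plain integers standing in for rippled's Number type), the invariant the amendment protects looks roughly like this:
```cpp
#include <cstdint>

// Toy illustration of the swap invariant, with small integers standing in for
// rippled's Number type. fixAMMRounding amounts to choosing rounding so that a
// successful swap can never violate this predicate.
bool swapKeepsInvariant(std::uint32_t oldBalance1, std::uint32_t oldBalance2,
                        std::uint32_t newBalance1, std::uint32_t newBalance2)
{
    auto prod = [](std::uint32_t a, std::uint32_t b) {
        return static_cast<std::uint64_t>(a) * b;  // widen to avoid overflow
    };
    // Favor the AMM: the post-swap product must not drop below the pre-swap product.
    return prod(newBalance1, newBalance2) >= prod(oldBalance1, oldBalance2);
}
```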
Co-authored-by: Bronek Kozicki
Co-authored-by: thejohnfreeman
It can be difficult to make transaction breaking changes to low level
code because the low level code does not have access to a ledger and the
current activated amendments in that ledger (the "rules"). This patch
adds global access to the current ledger rules as a `std::optional`. If
the optional is not seated, then there is no active transaction.
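A hedged sketch of the pattern, using placeholder names rather than the actual rippled API:
```cpp
#include <optional>
#include <set>

// Illustrative sketch of the pattern; the names are placeholders, not the
// actual rippled API.
struct Rules
{
    std::set<int> active;  // stand-in for the set of enabled amendments
    bool enabled(int amendmentId) const
    {
        return active.count(amendmentId) != 0;
    }
};

// A global (here thread-local) optional, seated only while a transaction is
// being applied; unseated means "no active transaction".
std::optional<Rules>& currentTransactionRules()
{
    thread_local std::optional<Rules> rules;
    return rules;
}

// Low-level code can now branch on amendments without a ledger being plumbed
// all the way down to it.
bool useNewBehavior(int amendmentId)
{
    auto const& rules = currentTransactionRules();
    return rules && rules->enabled(amendmentId);
}
```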
The `rotateWithLock` function holds a lock while it calls a callback
function that's passed in by the caller. This is a problematic design
that needs to be used very carefully. In this case, at least one caller
passed in a callback that eventually relocks the mutex on the same
thread, causing UB (a deadlock was observed). The caller was from
SHAMapStoreImpl, and it called `clearCaches`. This `clearCaches` can
potentially call `fetchNodeObject`, which tried to relock the mutex.
This patch resolves the issue by changing the mutex type to a
`recursive_mutex`. Ideally, the code should be rewritten so it doesn't
hold the mutex during the callback and the mutex should be changed back
to a regular mutex.
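A condensed, hypothetical illustration of the hazard and the stopgap (not the actual SHAMapStore code):
```cpp
#include <mutex>

// Condensed illustration only: the callback passed to rotateWithLock may call
// back into this class and try to lock the same mutex on the same thread.
// With std::mutex that is UB (a deadlock was observed); a recursive_mutex
// tolerates the re-entry as a stopgap.
class Store
{
    std::recursive_mutex mutex_;

public:
    template <class Callback>
    void rotateWithLock(Callback&& callback)
    {
        std::lock_guard lock(mutex_);
        callback(*this);  // may re-enter fetchNodeObject() below
    }

    void fetchNodeObject()
    {
        std::lock_guard lock(mutex_);  // re-locking on the same thread is now allowed
        // ... read from the backend ...
    }
};
```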
Co-authored-by: Ed Hennis <ed@ripple.com>
* Amend `.codecov.yml` to disable coverage reporting of test sources
and explicitly set most parameters
* Increase codecov upload retry time to 210s (from 35s)
* Upgrade gcovr adding support for more coverage formats (lcov, clover, jacoco)
* Upgrade github actions in coverage workflow
* Explicitly disable codecov plugins (also removing `gcov` coverage, which is not
correctly handled by codecov https://github.com/codecov/feedback/issues/334)
This amendment, `fixPreviousTxnID`, adds `PreviousTxnID` and
`PreviousTxnLgrSequence` as fields to all ledger objects that did
not already have them included (`DirectoryNode`, `Amendments`,
`FeeSettings`, `NegativeUNL`, and `AMM`). This makes it much easier
to go through the history of these ledger objects.
Github Actions for the build/test jobs (nix.yml, mac.yml, windows.yml) will only run on branches that build packages (develop, release, master), and branches with names starting with "ci/". This is intended as a compromise between disabling CI jobs on personal forks entirely, and having the jobs run as a free-for-all. Note that it will not affect PR jobs at all.
A large synthetic offer was not handled correctly in the payment engine.
This patch fixes that issue and introduces a new invariant check while
processing synthetic offers.
When calculating reward shares, the amount should always be rounded
down. If the `fixUniversalNumber` amendment is not active, this works
correctly. If it is active, then the amount is incorrectly rounded
up. This patch introduces an amendment so it will be rounded down.
This fixes a case where a peer can desync under a certain timing
circumstance: when it reaches a certain point in consensus before it receives
proposals.
This was noticed under high transaction volumes. Namely, when we arrive at the
point of deciding whether consensus is reached after minimum establish phase
duration but before having received any proposals. This could be caused by
finishing the previous round slightly faster and/or having some delay in
receiving proposals. Existing behavior arrives at consensus immediately after
the minimum establish duration with no proposals. This causes us to desync
because we then close a non-validated ledger. The change in this PR causes us to
wait for a configured threshold before making the decision to arrive at
consensus with no proposals. This allows validators to catch up and for brief
delays in receiving proposals to be absorbed. There should be no drawback since,
with no proposals coming in, we needn't be in a huge rush to jump ahead.
Remove the zaphod.alloy.ee hubs from the bootstrap and default configuration after 5 years. It has been an honor to run these servers, but it is now time for another entity to step into this role.
The zaphod servers will be taken offline in a phased manner keeping all those who have peering arrangements informed.
These would be the preferred attributes of a bootstrap set of hubs:
1. Commitment to run the hubs for a minimum of 2 years
2. Highly available
3. Geographically dispersed
4. Secure and up to date
5. Committed to ensure that peering information is kept private
We do not currently enforce that an incoming peer connection does not have a
remote_endpoint which is already in use (either by an incoming or outgoing
connection), and hence already stored in slots_. If we happen to receive a
connection from such a duplicate remote_endpoint, it will eventually result in a
crash (when disconnecting) or weird behavior (when updating slot state), as a
result of an apparently matching remote_endpoint in slots_ being used by a
different connection.
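A simplified sketch of the kind of early rejection this implies; the types and names are illustrative, not the actual overlay code:
```cpp
#include <set>
#include <string>

// Simplified sketch: track endpoints already claimed by a slot and reject a
// second connection from the same remote_endpoint before it is handed a slot.
class Slots
{
    std::set<std::string> usedEndpoints_;  // "ip:port" of active connections

public:
    bool tryReserve(std::string const& remoteEndpoint)
    {
        // insert() fails if the endpoint is already present
        return usedEndpoints_.insert(remoteEndpoint).second;
    }

    void release(std::string const& remoteEndpoint)
    {
        usedEndpoints_.erase(remoteEndpoint);
    }
};
```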
This amendment fixes an edge case where an empty DID object can be
created. It adds an additional check to ensure that DIDs are
non-empty when created, and returns a `tecEMPTY_DID` error if the DID
would be empty.
The witness server makes heavy use of the `account_tx` RPC command. Perf
testing showed that the SQL query used by `account_tx` became unacceptably slow
when the DB was large and there was a `marker` parameter. The plan for the query
showed only indexed reads. This appears to be an issue with the internal SQLite
optimizer. This patch rewrote the query to use `UNION` instead of `OR` and
significantly improves performance. See RXI-896 and RIPD-1847 for more details.
- Update container for Doxygen workflow. Matches Linux workflow, with newer GLIBC version required by newer actions.
- Fixes macOS workflow to install and configure Conan correctly. Still fails on tests, but that does not seem attributable to the workflow.
* telENV_RPC_FAILED is a new code, reserved exclusively
for unit tests when RPC fails. This will
make those types of errors distinct and easier to test
for when expected and/or diagnose when not.
* Output RPC command result when result is not expected.
We are currently using old version 0.6.2 of `xxhash`, as a verbatim copy and paste of its header file `xxhash.h`. Switch to the more recent version 0.8.2. Since this version is in Conan Center (and properly protects its ABI by keeping the state object incomplete), add it as a Conan requirement. Switch to the SIMD instructions (in the new `XXH3` family) supported by the new version.
This algorithm is about an order of magnitude faster than the existing
algorithm (about 10x faster for encoding and about 15x faster for
decoding - including the double hash for the checksum). The algorithms
use gcc's int128 (fast MS version will have to wait, in the meantime MS
falls back to the slow code).
* It is now an invariant that all constructed Public Keys are valid,
non-empty and contain 33 bytes of data.
* Additionally, the memory footprint of the PublicKey class is reduced.
The size_ data member is declared as static.
* Distinguish and identify the PublisherList retrieved from the local
config file, versus the ones obtained from other validators.
* Fixes #2942
The compilation fails due to an issue in the initializer list
of an optional argument, which holds a vector of pairs.
The code compiles correctly on earlier gcc versions, but fails on gcc 13.
Implement native support for Price Oracles.
A Price Oracle is used to bring real-world data, such as market prices,
onto the blockchain, enabling dApps to access and utilize information
that resides outside the blockchain.
Add Price Oracle functionality:
- OracleSet: create or update the Oracle object
- OracleDelete: delete the Oracle object
To support this functionality add:
- New RPC method, `get_aggregate_price`, to calculate aggregate price for a token pair of the specified oracles
- `ltOracle` object
The `ltOracle` object maintains:
- Oracle Owner's account
- Oracle's metadata
- Up to ten token pairs with the scaled price
- The last update time the token pairs were updated
Add Oracle unit-tests
* Commit 01c37fe introduced a change to the STTx unit test where a local
"defaultRules" object was created with a temporary inline "presets"
value provided to the ctor. Rules::Impl stores a const ref to the
presets provided to the ctor. This particular call provided an inline
temp variable, which goes out of scope as soon as the object is
created. On Windows, attempting to use the presets (e.g. via the
enabled() function) causes an access violation, which crashes the test
run.
* An audit of the code indicates that all other instances of Rules use
the Application's config.features list, which will have a sufficient
lifetime.
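The lifetime bug boils down to storing a reference to a temporary. A minimal reproduction of the pattern, unrelated to the real Rules internals:
```cpp
#include <cstdio>
#include <vector>

// Minimal reproduction of the pattern: holding a const reference to a
// constructor argument is only safe if the caller guarantees the argument
// outlives the object.
struct Holder
{
    std::vector<int> const& presets_;  // dangles if constructed from a temporary
    explicit Holder(std::vector<int> const& presets) : presets_(presets) {}
};

int main()
{
    Holder bad{std::vector<int>{1, 2, 3}};  // temporary dies at end of this statement
    // Using bad.presets_ here would be undefined behaviour (crashes on some platforms).

    std::vector<int> presets{1, 2, 3};      // named object with sufficient lifetime
    Holder good{presets};
    std::printf("%zu\n", good.presets_.size());  // OK
    return 0;
}
```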
Add `STObject` constructor to explicitly set the inner object template.
This allows certain AMM transactions to apply in the same ledger:
There is no issue if the trading fee is greater than or equal to 0.01%.
If the trading fee is less than 0.01%, then:
- After AMM create, AMM transactions must wait for one ledger to close
(3-5 seconds).
- After one ledger is validated, all AMM transactions succeed, as
appropriate, except for AMMVote.
- The first AMMVote which votes for a 0 trading fee in a ledger will
succeed. Subsequent AMMVote transactions which vote for a 0 trading
fee will wait for the next ledger (3-5 seconds). This behavior repeats
for each ledger.
This has no effect on the ultimate correctness of AMM. This amendment
will allow the transactions described above to succeed as expected, even
if the trading fee is 0 and the transactions are applied within one
ledger (block).
Prior to this commit, `port_grpc` could not be added to the [server]
stanza. Instead of validating gRPC IP/Port/Protocol information in
ServerHandler, validate grpc port info in GRPCServer constructor. This
should not break backwards compatibility.
gRPC-related config info must be in a section (stanza) called
[port_grpc].
* Close #4015 - That was an alternate solution. It was decided that with
relaxed validation, it is not necessary to rename port_grpc.
* Fix #4557
These headers are required in the xrpl Conan package in order for
xbridge witness server (xbwd) to build. This change to libxrpl may help
any dependents of libxrpl. This addition does not change any C++ code.
Without this amendment, an NFTokenAcceptOffer transaction can succeed
even when the NFToken recipient does not have sufficient reserves for
the new NFTokenPage. This allowed accounts to accept NFT sell offers
without having a sufficient reserve. (However, there was no issue in
brokered mode or when a buy offer is involved.)
Instead, the transaction should fail with `tecINSUFFICIENT_RESERVE` as
appropriate. The `fixNFTokenReserve` amendment adds checks in the
NFTokenAcceptOffer transactor to check if the OwnerCount changed. If it
did, then it checks the new reserve requirement.
Fix #4679
Use consistent platform-agnostic library names on all platforms.
Fix an issue that prevents dependents like validator-keys-tool from
linking to libxrpl on Windows.
It is bad practice to change the binary base name depending on the
platform. CMake already manipulates the base name into a final name that
fits the conventions of the platform. Linkers accept base names on the
command line and then look for conventional names on disk.
* Disable the Windows CI unit tests "allowed to fail" workaround which
was previously introduced in #4596.
* The runner hardware was upgraded, and the unit tests have been passing
since then.
Update to #4849, using a workaround for spurious codecov upload errors.
Spurious codecov upload errors are expected in public repos which rely
on PRs via forks. Retrying uploads is a decent and easy workaround.
Clients subscribed to `transactions` over WebSocket are being
disconnected because the traffic exceeds the default `send_queue_limit`
of 100.
This commit changes the default configuration, not the default in code.
Fix #4866
Resolves a warning that was emitted from the clang compiler. Switches
usage of the sprintf function to the recommended snprintf function.
Warning was observed in Apple clang version 15.0.0 (clang-1500.0.40.1).
Fix #4569
* Add logging for Application.cpp sweep()
* Improve lifetime management of ledger objects (`SLE`s)
* Only store SLE digest in CachedView; get SLEs from CachedSLEs
* Also force release of last ledger used for path finding if there are
no path finding requests to process
* Count more ST objects (derive from `CountedObject`)
* Track CachedView stats in CountedObjects
* Rename the CachedView counters
* Fix the scope of the digest lookup lock
Before this patch, if you asked "is it caching?", it was always caching.
Prevent WebSocket connections from trying to close twice.
The issue only occurs in debug builds (assertions are disabled in
release builds, including published packages), and when the WebSocket
connections are unprivileged. The assert (and WRN log) occurs when a
client drives up the resource balance enough to be forcibly disconnected
while there are still messages pending to be sent.
Thanks to @lathanbritz for discovering this issue in #4822.
This reverts commit 002893f280.
There were two files with conflicts in the automated revert:
- src/ripple/rpc/impl/RPCHelpers.h and
- src/test/rpc/JSONRPC_test.cpp
Those files were manually resolved.
Workaround for compilation errors with gcc-13 and other compilers
relying on `libstdc++` version 13. This is temporary until the actual fix is
available for us to use: https://github.com/boostorg/beast/pull/2682
Some boost.beast files (which we do use) rely on an old gcc-12 behaviour
where `#include <cstdint>` was not needed even though types from this
header were used. This was broken by a change in libstdc++ version 13:
https://gcc.gnu.org/gcc-13/porting_to.html#header-dep-changes
The necessary fix was implemented in boost.beast, however it is not yet
available. Until it is available, we can use this workaround to enable
compilation of `rippled` with gcc-13, clang-16, etc.
* Revert "Optimize calculation of close time to avoid impasse and minimize gratuitous proposal changes (#4760)"
This reverts commit 8ce85a9750.
* Revert "Several changes to improve Consensus stability: (#4505)"
This reverts commit f259cc1ab6.
* Add missing include
---------
Co-authored-by: seelabs <scott.determan@yahoo.com>
* Optimize the calculation of close time to avoid
impasse and minimize gratuitous proposal changes.
* git apply clang-format.patch
* Review (Howard) fixes.
* Review fix for impasse discovered by John.
* Review fixes (comments) from John.
* Scott S review fixes. Also clang-format.
* Promote API version 2 to supported
* Switch command line to API version 1
* Fix LedgerRequestRPC test
* Remove obsolete tx_account method
This method is not implemented, the only parts which are removed are related to command-line parsing
* Fix RPCCall test
* Reduce diff size, small test improvements
* Minor fixes
* Support for the mold linker
* [fold] handle case where both mold and gold are installed
* [fold] Use first non-default linker
* Fix TransactionEntry_test
* Fix AccountTx_test
---------
Co-authored-by: seelabs <scott.determan@yahoo.com>
* Support for the mold linker (#4807)
* Promote API version 2 to supported (#4803)
* Promote API version 2 to be supported
* Switch the command line to API version 1
* Fix LedgerRequestRPC test
* Remove obsolete tx_account method
This method is not implemented, the only parts which are removed are related to command-line parsing
* Fix RPCCall test
* Reduce diff size, small test improvements
* Minor fixes
* Support for the mold linker
* Fix TransactionEntry_test
* Fix AccountTx_test
---------
Co-authored-by: seelabs <scott.determan@yahoo.com>
* Update Linux smoketest distros (#4813)
* Fix 2.0 regression in tx method with binary output (#4812)
* Fix binary output from tx method
* Formatting fix
* Minor test improvement
* Minor test improvements
* Optimize calculation of close time to avoid impasse and minimize gratuitous proposal changes (#4760)
* Optimize the calculation of close time to avoid
impasse and minimize gratuitous proposal changes.
* git apply clang-format.patch
* Scott S review fixes. Also clang-format.
* Set version to 2.0.0-rc2
---------
Co-authored-by: manoj <mdoshi@ripple.com>
Co-authored-by: Scott Determan <scott.determan@yahoo.com>
Co-authored-by: Bronek Kozicki <brok@incorrekt.com>
Co-authored-by: Michael Legleux <legleux@users.noreply.github.com>
Co-authored-by: Mark Travis <mtrippled@users.noreply.github.com>
* Optimize the calculation of close time to avoid
impasse and minimize gratuitous proposal changes.
* git apply clang-format.patch
* Review (Howard) fixes.
* Review fix for impasse discovered by John.
* Review fixes (comments) from John.
* Scott S review fixes. Also clang-format.
* Promote API version 2 to supported
* Switch command line to API version 1
* Fix LedgerRequestRPC test
* Remove obsolete tx_account method
This method is not implemented, the only parts which are removed are related to command-line parsing
* Fix RPCCall test
* Reduce diff size, small test improvements
* Minor fixes
* Support for the mold linker
* [fold] handle case where both mold and gold are installed
* [fold] Use first non-default linker
* Fix TransactionEntry_test
* Fix AccountTx_test
---------
Co-authored-by: seelabs <scott.determan@yahoo.com>
* Remove include <ranges>
* Formatting fix
* Output for subscriptions
* Output from sign, submit etc.
* Output from ledger
* Output from account_tx
* Output from transaction_entry
* Output from tx
* Store close_time_iso in API v2 output
* Add small APIv2 unit test for subscribe
* Add unit test for transaction_entry
* Add unit test for tx
* Remove inLedger from API version 2
* Set ledger_hash and ledger_index
* Move isValidated from RPCHelpers to LedgerMaster
* Store closeTime in LedgerFill
* Time formatting fix
* additional tests for Subscribe unit tests
* Improved comments
* Rename mInLedger to mLedgerIndex
* Minor fixes
* Set ledger_hash on closed ledger, even if not validated
* Update API-CHANGELOG.md
* Add ledger_hash, ledger_index to transaction_entry
* Fix validated and close_time_iso in account_tx
* Fix typos
* Improve getJson for Transaction and STTx
* Minor improvements
* Replace class enum JsonOptions with struct
We may consider turning this into a general-purpose template and using it elsewhere
* simplify the extraction of transactionID from Transaction object
* Remove obsolete comments
* Unconditionally set validated in account_tx output
* Minor improvements
* Minor fixes
---------
Co-authored-by: Chenna Keshava <ckeshavabs@gmail.com>
The command line API still uses `apiMaximumSupportedVersion`.
The unit test RPCs use `apiMinimumSupportedVersion` if unspecified.
Context:
- #4568
- #4552
With clang 15, an unused-but-set-variable warning was emitted:
```
PostgresDatabase.cpp:178:14: warning: variable 'expNumResults' set but not used [-Wunused-but-set-variable]
    uint32_t expNumResults = 1;
```
Introduce the `fixFillOrKill` amendment.
Fix an edge case occurring when an offer with `tfFillOrKill` set (but
without `tfSell` set) fails to cross an offer with a better rate. If
`tfFillOrKill` is set, then the owner must receive the full TakerPays.
Without this amendment, an offer fails if the entire `TakerGets` is not
spent. With this amendment, when `tfSell` is not set, the entire
`TakerGets` does not have to be spent.
For details about OfferCreate, see: https://xrpl.org/offercreate.html
Fix #4684
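Reduced to a predicate, the behaviour change could look roughly like the sketch below; the names and the use of `double` are purely illustrative, not the real OfferCreate code:
```cpp
// Illustrative predicate only: decide whether a tfFillOrKill offer has been
// satisfied, before and after the fixFillOrKill amendment.
struct Amounts
{
    double takerPays;  // what the owner wants to receive
    double takerGets;  // what the owner offers to spend
};

bool fillOrKillSatisfied(bool fixFillOrKillEnabled, bool sellFlagSet,
                         Amounts const& requested, Amounts const& filled)
{
    if (sellFlagSet || !fixFillOrKillEnabled)
        return filled.takerGets >= requested.takerGets;  // must spend all of TakerGets
    return filled.takerPays >= requested.takerPays;      // must receive the full TakerPays
}
```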
---------
Co-authored-by: Scott Schurr <scott@ripple.com>
Remove dependency on `<ranges>` header, since it is not implemented by
all compilers which we want to support.
This code change only affects unit tests.
Resolve https://github.com/XRPLF/rippled/issues/4787
Remove `tx_history` and `ledger_header` methods from API version 2.
Update `RPC::Handler` to allow for methods (or method implementations)
to be API version specific. This partially resolves #4727. We can now
store multiple handlers with the same name, as long as they belong to
different (non-overlapping) API versions. This necessarily impacts the
handler lookup algorithm and its complexity; however, there is no
performance loss on x86_64 architecture, and only minimal performance
loss on arm64 (around 10ns). This design change gives us extra
flexibility evolving the API in the future, including other parts of
#4727.
In API version 2, `tx_history` and `ledger_header` are no longer
recognised; if they are called, `rippled` will return error
`unknownCmd`
Resolve #3638, Resolve #3539
Using the "Amount" field in Payment transactions can cause incorrect
interpretation. There continue to be problems from the use of this
field. "Amount" is rarely the correct field to use; instead,
"delivered_amount" (or "DeliveredAmount") should be used.
Rename the "Amount" field to "DeliverMax", a less misleading name. With
api_version: 2, remove the "Amount" field from Payment transactions.
- Input: "DeliverMax" in `tx_json` is an alias for "Amount"
- sign
- submit (in sign-and-submit mode)
- submit_multisigned
- sign_for
- Output: Add "DeliverMax" where transactions are provided by the API
- ledger
- tx
- tx_history
- account_tx
- transaction_entry
- subscribe (transactions stream)
- Output: Remove "Amount" from API version 2
Fix #3484, Fix #3902
The unity build speeds up compilation by bundling multiple source files
into one larger file. This reduces Windows CI build time by up to 50%.
As described in #4596, the automatic Windows builds take a very long
time. Unity builds are significantly faster - currently about 45 min,
much closer to the typical MacOS (35-40 minutes) and nix (~30 minutes)
run times.
This is intended as a stopgap solution until a more resourced and
reliable runner is available.
No C++ code was changed. This only affects CI.
Add a new RPC / WS call for `server_definitions`, which returns an
SDK-compatible `definitions.json` (binary enum definitions) generated by
the server. This enables clients/libraries to dynamically work with new
fields and features, such as ones that may become available on side
chains. Clients query `server_definitions` on a node from the network
they want to work with, and immediately know how to speak that node's
binary "language", even if new features are added to it in the future
(as long as there are no new serialized types that the software doesn't
know how to serialize/deserialize).
Example:
```js
> {"command": "server_definitions"}
< {
"result": {
"FIELDS": [
[
"Generic",
{
"isSerialized": false,
"isSigningField": false,
"isVLEncoded": false,
"nth": 0,
"type": "Unknown"
}
],
[
"Invalid",
{
"isSerialized": false,
"isSigningField": false,
"isVLEncoded": false,
"nth": -1,
"type": "Unknown"
}
],
[
"ObjectEndMarker",
{
"isSerialized": false,
"isSigningField": true,
"isVLEncoded": false,
"nth": 1,
"type": "STObject"
}
],
...
```
Close #3657
---------
Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
Implement native support for W3C DIDs.
Add a new ledger object: `DID`.
Add two new transactions:
1. `DIDSet`: create or update the `DID` object.
2. `DIDDelete`: delete the `DID` object.
This meets the requirements specified in the DID v1.0 specification
currently recommended by the W3C Credentials Community Group.
The DID format for the XRP Ledger conforms to W3C DID standards.
The objects can be created and owned by any XRPL account holder.
The transactions can be integrated by any service, wallet, or application.
It might be possible for the server code to indirect through certain
`end()` iterators. While a debug build would catch this problem with
`assert()`s, a release build would crash. If there are problems in this
area in the future, it is best to get a definitive indication of the
nature of the error regardless of whether it's a debug or release build.
To accomplish this, these `assert`s are converted into `LogicError`s
that will produce a reasonable error message when they fire.
In Windows, we need to call `python` in order for the `pip` upgrade
command to work.
This changes the GitHub Actions Windows CI job to use the correct
command to upgrade PIP, fixing this error:
```
ERROR: To modify pip, please run the following command:
C:\hostedtoolcache\windows\Python\3.9.13\x64\python.exe -m pip install --upgrade pip
```
A future task is to make job run on heavy Windows runners so that it
doesn't take so long.
Context: #4596
The assert is saying that the only reason `pathFinder` would be null is
if the request was aborted (connection dropped, etc.). That's what
`continueCallback()` checks. But that is very clearly not true if you
look at `getPathFinder`, which calls `findPaths`, which can return false
for many reasons.
Fix #4744
Update the nix CI runner. This commit does not modify any source code
files. The unix builds were successful, but the binaries were not
uploaded to the internal artifactory. This PR borrows an idea from
@ximinez to attempt to fix this issue.
After successful authentication, the `outcome` variable contains a
string. In the upload step, we are checking if outcome == 'success' as a
prerequisite for uploading the binary.
This commit updates the contents of the `outcome` variable.
Artifactory support was added to the `nix` builds with #4556. This
extends that support to the Windows build. Now the Windows build works;
CI will build and test a Windows release build. This only affects CI and
does not change any C++ code.
* Copy the remote setup step outcome fix from #4716 discussion
* Allow the Windows job to succeed if tests fail:
* Currently the tests do not always pass, even on a single threaded
run on the GitHub runners. So we are using parallel runs and mark
the test step as allowed to fail (continue-on-error).
* At this point, it's more important that the build succeeds than that
the tests succeed, because:
* We've got plenty of test coverage on the other jobs.
* Test failures are much rarer than build failures because of
cross-platform issues.
* Having a test failure locally doesn't interrupt a workflow nearly as
much as a build failure.
Note that Conan Center cannot hold the binaries we need. They do not
build the configurations we need, and they will not add them.
## Future Tasks
This introduces a new bottleneck since the build and test takes over an
hour. Speed up the job by:
* Making this job run on heavy Windows runners.
* Increasing the number of hardware threads.
P2P link compression is a feature added in 1.6.0 by #3287.
https://xrpl.org/enable-link-compression.html
If the default changes in the future - for example, as currently
proposed by #4387 - the comment will be updated at that time.
Fix #4656
Context: The `DisallowIncoming` amendment provides an option to block
incoming trust lines from reaching your account. The
asfDisallowIncomingTrustline AccountSet Flag, when enabled, prevents any
incoming trust line from being created. However, it was too restrictive:
it would block an issuer from authorizing a trust line, even if the
trust line already exists. Consider:
1. Issuer sets asfRequireAuth on their account.
2. User sets asfDisallowIncomingTrustline on their account.
3. User submits tx to SetTrust to Issuer.
At this point, without `fixDisallowIncomingV1` active, the issuer would
not be able to authorize the trust line.
The `fixDisallowIncomingV1` amendment, once activated, allows an issuer
to authorize a trust line even after the user sets the
asfDisallowIncomingTrustline flag, as long as the trust line already
exists.
When a new transactor is added, there are several places in applySteps
that need to be modified. This patch refactors the code so only one
function needs to be modified.
Make transactions and pseudo-transactions share the same commonFields
again. This regularizes the code in a nice way.
While this technically allows pseudo-transactions to have a
TicketSequence field, pseudo-transactions are only ever constructed by
code paths that don't add such a field, so this is not a transaction
processing change. It may be possible to add a separate check to ensure
TicketSequence (and other fields that don't make sense on
pseudo-transactions) are never added to pseudo-transactions, but that
should not be necessary. (TicketSequence is not the only common field
that can not and does not appear in pseudo-transactions.) Note:
TicketSequence is already documented as a common field.
Related: #4637. Fix #4714
Address a stack-use-after-scope issue when using rvalues with
`soci::use`. Replace rvalues with lvalues to ensure the scope extends
beyond the end of the expression.
The issue arises from `soci` taking a reference to the rvalue without
copying its value or extending its lifetime. `soci` references rvalues
in `soci::use_container` and then the address in `soci_use_type`. For
types like `int`, memory access post-lifetime is unlikely to cause
issues. However, for `std::string`, the backing heap memory can be freed
and potentially reused, leading to a potential segmentation fault.
This was detected on x86_64 using clang-15 with asan. asan confirms
resolution of the issue.
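A reduced illustration of the fix, assuming the usual `soci::session` / `soci::use` interface; the query and names are made up for the example:
```cpp
#include <soci/soci.h>
#include <string>

// Simplified illustration: soci stores a reference to the bound variable, so
// the variable must outlive execution of the statement.
void deleteAccount(soci::session& session, std::string const& name)
{
    // Problematic pattern: binding a temporary (rvalue) leaves soci holding a
    // dangling reference once the temporary is destroyed:
    //   session << "DELETE FROM accounts WHERE name = :n", soci::use(name + "_old");

    // Fix: bind a named lvalue whose lifetime covers the whole statement.
    std::string oldName = name + "_old";
    session << "DELETE FROM accounts WHERE name = :n", soci::use(oldName);
}
```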
Fix #4675
Update minimum compiler requirement for building the codebase. The
feature "using enum" is required. This feature was introduced in C++20.
Updating the C++ compiler to version 11 or later fixes this error:
```
Building CXX object CMakeFiles/xrpl_core.dir/src/ripple/protocol/impl/STAmount.cpp.o
/build/ripple/binary/src/ripple/protocol/impl/STAmount.cpp: In lambda function:
/build/ripple/binary/src/ripple/protocol/impl/STAmount.cpp:1577:15: error: expected nested-name-specifier before 'enum'
1577 | using enum Number::rounding_mode;
| ^~~~
```
Fix #4693
Modify the `XChainBridge` amendment.
Before this patch, two door accounts on the same chain could own
the same bridge spec (of course, one would have to be the issuer and one
would have to be the locker). While this is silly, it does not violate
any bridge invariants. However, on further review, if we allow this then
the `claim` transactions would need to change. Since it's hard to see a
use case for two doors to own the same bridge, this patch disallows
it. (The transaction will return tecDUPLICATE).
Amendment "flapping" (an amendment repeatedly gaining and losing
majority) usually occurs when an amendment is on the verge of gaining
majority, and a validator not in favor of the amendment goes offline or
loses sync. This fix makes two changes:
1. The number of validators in the UNL determines the threshold required
for an amendment to gain majority.
2. The AmendmentTable keeps a record of the most recent Amendment vote
received from each trusted validator (and, with `trustChanged`, stays
up-to-date when the set of trusted validators changes). If no
validation arrives from a given validator, then the AmendmentTable
assumes that the previously-received vote has not changed.
In other words, when missing an `STValidation` from a remote validator,
each server now uses the last vote seen. There is a 24 hour timeout for
recorded validator votes.
These changes do not require an amendment because they do not impact
transaction processing, but only the threshold at which each individual
validator decides to propose an EnableAmendment pseudo-transaction.
Fix #4350
Fix the Windows build by using `unsigned int` (instead of `uint`).
The error, introduced by #4618, looks something like:
```
rpc\impl\RPCHelpers.h(299,5): error C2061: syntax error: identifier 'uint' (compiling source file app\ledger\Ledger.cpp)
```
A few methods, including `book_offers`, take currency codes as
parameters. The XRPL doesn't care if the letters in those codes are
lowercase or uppercase, as long as they come from an alphabet defined
internally. rippled doesn't care either, when they are submitted in a
hex representation. When they are submitted in an ASCII string
representation, rippled, but not XRPL, is more restrictive, preventing
clients from interacting with some currencies already in the XRPL.
This change gets rippled out of the way and lets clients submit currency
codes in ASCII using the full alphabet.
Fixes #4112
Currently, the `BUILD.md` instructions suggest using `.build` as the
build directory, so this change helps to reduce confusion.
An alternative would be to instruct developers to add `/.build/` to
`.git/info/exclude` or to user-level `.gitignore` (although the latter
is very intrusive). However, it is being added here because it is a good
practice to have a sensible default that's consistent with the build
instructions.
Clean up the peer-to-peer protocol start/close sequences by introducing
START_PROTOCOL and GRACEFUL_CLOSE messages, which sync inbound/outbound
peer send/receive. The GRACEFUL_CLOSE message differentiates application
and link layer failures.
* Introduce the `InboundHandoff` class to manage inbound peer
instantiation and synchronize the send/receive protocol messages
between peers.
* Update `OverlayImpl` to utilize the `InboundHandoff` class to manage
inbound handshakes.
* Update `PeerImp` for improved handling of protocol messages.
* Modify the `Message` class for better maintainability.
* Introduce P2P protocol version `2.3`.
gateway_balances
* When `account` does not exist in the ledger, return `actNotFound`
* (Previously, a normal response was returned)
* Fix #4290
* When required field(s) are missing, return `invalidParams`
* (Previously, `invalidHotWallet` was incorrectly returned)
* Fix #4548
channel_authorize
* When the specified `key_type` is invalid, return `badKeyType`
* (Previously, `invalidParams` was returned)
* Fix #4289
Since these are breaking changes, they apply only to API version 2.
Supersedes #4577
Copy the new code to `src/secp256k1` without changes:
`src/secp256k1` is identical to bitcoin-core/secp256k1@acf5c55 (v0.3.2).
We could consider changing to a Git submodule, though that would require
changes to the build instructions because we are not using submodules
anywhere else.
Remove the `verify` and `message` function declarations. The explicit
instantiation requests could not be completed because there were no
implementations for those two member functions. It is helpful that the
Microsoft (MSVC) compiler on Windows appears to be strict when it comes
to template instantiation.
This resolves the warning:
XChainAttestations.h(450): warning C4661: 'bool
ripple::XChainAttestationsBase<ripple::XChainClaimAttestation>::verify(void)
const': no suitable definition provided for explicit template
instantiation request
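For illustration, a self-contained example of the pattern that triggers C4661 on MSVC; the names below are hypothetical and unrelated to the actual XChainAttestations code:
```
// Explicitly instantiating a class template asks the compiler for definitions
// of all of its members; a member that is declared but never defined makes
// MSVC emit C4661 for that instantiation.
template <class T>
struct Holder
{
    bool verify() const;          // declared, never defined -> C4661 on MSVC
    void reset() { value = T{}; }
    T value;
};

template struct Holder<int>;      // explicit instantiation request
```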
* For example, without this change, to run the TxQ tests, you must specify
`--unittest=TxQ1,TxQ2` on the command line. With this change, you can use
`--unittest=TxQ`, and both will be run.
* An exact match will prevent any further partial matching.
* This could have some side effects for different tests with a common
name beginning. For example, NFToken, NFTokenBurn, NFTokenDir. This
might be useful. If not, the shorter-named test(s) can be renamed. For
example, NFToken to NFTokens.
* Split the NFToken, NFTokenBurn, and Offer test classes. Potentially speeds
up parallel tests by a factor of 5.
A bridge connects two blockchains: a locking chain and an issuing
chain (also called a mainchain and a sidechain). Both are independent
ledgers, with their own validators and potentially their own custom
transactions. Importantly, there is a way to move assets from the
locking chain to the issuing chain and a way to return those assets from
the issuing chain back to the locking chain: the bridge. This key
operation is called a cross-chain transfer. A cross-chain transfer is
not a single transaction. It happens on two chains, requires multiple
transactions, and involves an additional server type called a "witness".
A bridge does not exchange assets between two ledgers. Instead, it locks
assets on one ledger (the "locking chain") and represents those assets
with wrapped assets on another chain (the "issuing chain"). A good model
to keep in mind is a box with an infinite supply of wrapped assets.
Putting an asset from the locking chain into the box will release a
wrapped asset onto the issuing chain. Putting a wrapped asset from the
issuing chain back into the box will release one of the existing locking
chain assets back onto the locking chain. There is no other way to get
assets into or out of the box. Note that there is no way for the box to
"run out of" wrapped assets - it has an infinite supply.
Co-authored-by: Gregory Popovitch <greg7mdp@gmail.com>
For the `account_tx` and `noripple_check` methods, perform input
validation for optional parameters such as "binary", "forward",
"strict", "transactions". Previously, when these parameters had invalid
values (e.g. not a bool), no error would be returned. Now, it returns an
`invalidParams` error.
* This updates the behavior to match Clio
(https://github.com/XRPLF/clio).
* Since this is potentially a breaking change, it only applies to
requests specifying api_version: 2.
* Fix #4543.
* Verify accepted ledger becomes validated, and retry
with a new consensus transaction set if not.
* Always store proposals.
* Track proposals by ledger sequence. This helps slow peers catch
up with the rest of the network.
* Acquire transaction sets for proposals with future ledger sequences.
This also helps slow peers catch up.
* Optimize timer delay for establish phase to wait based on how
long validators have been sending proposals. This also helps slow
peers to catch up.
* Fix impasse achieving close time consensus.
* Don't wait between open and establish phases.
Add new transaction submission API field, "sync", which
determines behavior of the server while submitting transactions:
1) sync (default): Process transactions in a batch immediately,
and return only once the transaction has been processed.
2) async: Put transaction into the batch for the next processing
interval and return immediately.
3) wait: Put transaction into the batch for the next processing
interval and return only after it is processed.
Minor refactor to `TxFormats.cpp`:
- Rename `commonFields` to `pseudoCommonFields` (since it is the common fields
that all pseudo-transactions need)
- Add a new static variable, `commonFields`, which represents all the common
fields that non-pseudo transactions need (essentially everything that
`pseudoCommonFields` contains, plus `sfTicketSequence`)
This makes it harder to accidentally leave out `sfTicketSequence` in a new
transaction.
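A simplified, self-contained sketch of the pattern; `SOElementSketch` and the field strings are hypothetical stand-ins for the real `SField`/`SOElement` machinery:
```
#include <string>
#include <vector>

// Stand-in for an SOElement: a field name plus whether it is required.
struct SOElementSketch
{
    std::string field;
    bool required;
};

// Fields that every pseudo-transaction needs.
static std::vector<SOElementSketch> const pseudoCommonFields{
    {"TransactionType", true},
    {"Account", true},
    {"Fee", true},
    // ...
};

// Fields that every ordinary transaction needs: the pseudo set plus
// TicketSequence, so a new transaction format cannot accidentally omit it.
static std::vector<SOElementSketch> const commonFields = [] {
    auto fields = pseudoCommonFields;
    fields.push_back({"TicketSequence", false});
    return fields;
}();
```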
- Verify "check", used to retrieve a Check object, is a string.
- Verify "nft_page", used to retrieve an NFT Page, is a string.
- Verify "index", used to retrieve any type of ledger object by its
unique ID, is a string.
- Verify "directory", used to retrieve a DirectoryNode, is a string or
an object.
This change only impacts api_version 2 since it is a breaking change.
https://xrpl.org/ledger_entry.html
Fix #4550
* In namespace ripple, introduces get_name function that takes a
std::thread::native_handle_type and returns a std::string.
* In namespace ripple, introduces get_name function that takes a
std::thread or std::jthread and returns a std::string.
* In namespace ripple::this_thread, introduces get_name function
that takes no parameters and returns the name of the current
thread as a std::string.
* In namespace ripple::this_thread, introduces set_name function
that takes a std::string_view and sets the name of the current
thread.
* Intended to replace the beast utilities setCurrentThreadName
and getCurrentThreadName.
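A usage sketch of the utilities described above; it assumes the relevant header is included, and the thread names are arbitrary examples:
```
#include <string>
#include <thread>

void
nameThreadsExample()
{
    // Name the current thread, then read the name back.
    ripple::this_thread::set_name("io-worker");
    std::string const mine = ripple::this_thread::get_name();

    // Query another thread's name via its std::thread handle (a real program
    // would synchronize with the thread before querying the name it sets).
    std::thread t([] { ripple::this_thread::set_name("job-queue"); });
    std::string const theirs = ripple::get_name(t);
    t.join();
}
```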
- Update amm_info to fetch AMM by amm account id.
- This is an additional way to retrieve an AMM object.
- Alternatively, AMM can still be fetched by the asset pair as well.
- Add owner directory entry for AMM object.
Context:
- Add back the AMM object directory entry, which was deleted by #4626.
- This fixes `account_objects` for `amm` type.
Modify two error cases in AMMBid transactor to return `tecINTERNAL` to
more clearly indicate that these errors should not be possible unless
operating in unforeseen circumstances. It likely indicates a bug.
The log level has been updated to `fatal()` since it indicates a
(potentially network-wide) unexpected condition when either of these
errors occurs.
Details:
The two specific transaction error cases changed are:
- `tecAMM_BALANCE` - In this case, this error (total LP Tokens
outstanding is lower than the amount to be burned for the bid) is a
subset of the case where the user doesn't have enough LP Tokens to pay
for the bid. When this case is reached, the bidder's LP Tokens balance
has already been checked first. The user's LP Tokens should always be
a subset of total LP Tokens issued, so this should be impossible.
- `tecINSUFFICIENT_PAYMENT` - In this case, the amount to be refunded as
a result of the bid is greater than the price paid for the auction
slot. This should never occur unless something is wrong with the math
for calculating the refund amount.
Both error cases in question are "defense in depth" measures meant to
protect against making things worse if the code has already reached a
state that is supposed to be impossible, likely due to a bug elsewhere.
Such "shouldn't ever occur" checks should use an error code that
categorically indicates a larger problem. This is similar to how
`tecINVARIANT_FAILED` is a warning sign that something went wrong and
likely could've been worse, but since there isn't an Invariant Check
applying here, `tecINTERNAL` is the appropriate error code.
This is "debatably" a transaction processing change since it could
hypothetically change how transactions are processed if there's a bug we
don't know about.
Introduce a new variadic template helper function, `forAllApiVersions`,
that accepts callables to execute a set of functions over a range of
versions - from RPC::apiMinimumSupportedVersion to RPC::apiBetaVersion.
This avoids the duplication of code.
Context: #4552
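A minimal sketch of what such a helper can look like (not the actual implementation; the version constants are illustrative):
```
constexpr unsigned apiMinimumSupportedVersion = 1;  // illustrative values
constexpr unsigned apiBetaVersion = 3;

// Invoke the callable once per API version, passing along any extra arguments.
template <class Fn, class... Args>
void
forAllApiVersions(Fn&& fn, Args&&... args)
{
    for (unsigned version = apiMinimumSupportedVersion;
         version <= apiBetaVersion;
         ++version)
    {
        fn(version, args...);
    }
}
```
A caller can then pass a lambda that builds or checks a response for each version, rather than repeating the same loop in every test or handler.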
- "Rename" the type `LedgerInfo` to `LedgerHeader` (but leave an alias
for `LedgerInfo` to not yet disturb existing uses). Put it in its own
public header, named after itself, so that it is more easily found.
- Move the type `Fees` and NFT serialization functions into public
(installed) headers.
- Compile the XRPL and gRPC protocol buffers directly into `libxrpl` and
install their headers. Fix the Conan recipe to correctly export these
types.
Addresses change (2) in
https://github.com/XRPLF/XRPL-Standards/discussions/121.
For context: This work supports Clio's dependence on libxrpl. Clio is
just an example consumer. These changes should benefit all current and
future consumers.
---------
Co-authored-by: cyan317 <120398799+cindyyan317@users.noreply.github.com>
Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
Improve the checking of the path lengths during Payments. Previously,
the code that did the check of the payment path lengths was sometimes
executed, but without any effect. This changes it to only check when it
matters, and to not make unnecessary copies of the path vectors.
Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
- Replace custom popcnt16 implementation with std::popcount from C++20
- Maintain compatibility with older compilers and macOS by providing a
conditional compilation fallback to __builtin_popcount and a lookup
table method
- Move and inline related functions within SHAMapInnerNode for
performance and readability
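A sketch of the conditional fallback described in the bullets above (the lookup-table variant is omitted; the function name is illustrative):
```
#include <cstdint>
#if __cplusplus >= 202002L
#include <bit>
#endif

inline int
popcnt16(std::uint16_t v)
{
#if __cplusplus >= 202002L
    return std::popcount(v);       // C++20
#else
    return __builtin_popcount(v);  // GCC/Clang builtin fallback
#endif
}
```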
Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
When an AMM account is deleted, the owner directory entries must be
deleted in order to ensure consistent ledger state.
* When deleting AMM account:
* Clean up AMM owner dir, linking AMM account and AMM object
* Delete trust lines to AMM
* Disallow `CheckCreate` to AMM accounts
* AMM cannot cash a check
* Constrain entries in AuthAccounts array to be accounts
* AuthAccounts is an array of objects for the AMMBid transaction
* SetTrust (TrustSet): Allow on AMM only for LP tokens
* If the destination is an AMM account and the trust line doesn't
exist, then:
* If the asset is not the AMM LP token, then fail the tx with
`tecNO_PERMISSION`
* If the AMM is in empty state, then fail the tx with `tecAMM_EMPTY`
* This disallows trustlines to AMM in empty state
* Add AMMID to AMM root account
* Remove lsfAMM flag and use sfAMMID instead
* Remove owner dir entry for ltAMM
* Add `AMMDelete` transaction type to handle amortized deletion
* Limit number of trust lines to delete on final withdraw + AMMDelete
* Put AMM in empty state when LPTokens is 0 upon final withdraw
* Add `tfTwoAssetIfEmpty` deposit option in AMM empty state
* Fail all AMM transactions in AMM empty state except special deposit
* Add `tecINCOMPLETE` to indicate that not all AMM trust lines are
deleted (i.e. partial deletion)
* This is handled in Transactor similar to deleted offers
* Fail AMMDelete with `tecINTERNAL` if AMM root account is nullptr
* Don't validate for invalid asset pair in AMMDelete
* AMMWithdraw deletes AMM trust lines and AMM account/object only if the
number of trust lines is less than max
* Current `maxDeletableAMMTrustLines` = 512
* Check no directory left after AMM trust lines are deleted
* Enable partial trustline deletion in AMMWithdraw
* Add `tecAMM_NOT_EMPTY` to fail any transaction that expects an AMM in
empty state
* Clawback considerations
* Disallow clawback out of AMM account
* Disallow AMM create if issuer can claw back
This patch applies to the AMM implementation in #4294.
Acknowledgements:
Richard Holland and Nik Bougalis for responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the project code and urge researchers to
responsibly disclose any issues they may find.
To report a bug, please send a detailed report to:
bugs@xrpl.org
Signed-off-by: Manoj Doshi <mdoshi@ripple.com>
Introduce an API Changelog, which logs the changes that have been made
to the API.
Without this changelog, it is difficult to keep track of the changes and
additions that are made to the API. While all changes are surfaced in
PRs and Release Notes, these are mixed in with other non-API-affecting
changes. PRs that affect the API have the `API Change` label applied,
but it is hard to identify which PRs have been included in each release.
Furthermore, some API changes will take effect based on `api_version`
(starting with rippled version 1.12, which will introduce `api_version:
2`), while others are based on the `rippled` version.
The API Changelog clarifies the details of the changes in a way that is
easily understood by API consumers, and breaks down the changes to be
clear which ones are gated by api_version (versus `rippled` version).
From now on, all PR authors are responsible for updating the API
Changelog according to the additions/changes that their PR makes to the
APIs.
The reason for this change is explained in XRPLF/XRPL-Standards#119.
We want to be explicit that this flag is exclusively for trust lines. New token types (e.g. CFT) will not utilize this flag for clawback; instead, they will turn clawback on/off at the token level, which is more versatile.
Use the most recent versions in ConanCenter.
* Due to a bug in Clang 16, you may get a compile error:
"call to 'async_teardown' is ambiguous"
* A compiler flag workaround is documented in `BUILD.md`.
* At this time, building this with gcc 13 may require editing some files
in `.conan/data`
* A patch to support gcc13 may be added in a later PR.
---------
Co-authored-by: Scott Schurr <scott@ripple.com>
Add AMM functionality:
- InstanceCreate
- Deposit
- Withdraw
- Governance
- Auctioning
- payment engine integration
To support this functionality, add:
- New RPC method, `amm_info`, to fetch pool and LPT balances
- AMM Root Account
- trust line for each IOU AMM token
- trust line to track Liquidity Provider Tokens (LPT)
- `ltAMM` object
The `ltAMM` object tracks:
- fee votes
- auction slot bids
- AMM tokens pair
- total outstanding tokens balance
- `AMMID` to AMM `RootAccountID` mapping
Add new classes to facilitate AMM integration into the payment engine.
`BookStep` uses these classes to infer if AMM liquidity can be consumed.
The AMM formula implementation uses the new Number class added in #4192.
IOUAmount and STAmount use Number arithmetic.
Add AMM unit tests for all features.
AMM requires the following amendments:
- featureAMM
- fixUniversalNumber
- featureFlowCross
Notes:
- Current trading fee threshold is 1%
- AMM currency is generated by: 0x03 + 152 bits of sha256{cur1, cur2}
- Current max AMM Offers is 30
---------
Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
Improve error handling for ledger_entry by returning an "invalidParams"
error when one or more request fields are specified incorrectly, or one
or more required fields are missing.
For example, if none of the following fields is provided, then the
API should return an invalidParams error:
* index, account_root, directory, offer, ripple_state, check, escrow,
payment_channel, deposit_preauth, ticket
Prior to this commit, the API returned an "unknownOption" error instead.
Since the error was actually due to invalid parameters, rather than
unknown options, this error was misleading.
Since this is an API breaking change, the "invalidParams" error is only
returned for requests using api_version: 2 and above. To maintain
backward compatibility, the "unknownOption" error is still returned for
api_version: 1.
Related: #4573
Fix #4303
- Previously, mulDiv had `std::pair<bool, uint64_t>` as the output type.
- This is an error-prone interface as it is easy to ignore when
overflow occurs.
- Using a return type of `std::optional` should decrease the likelihood
of ignoring overflow.
- It also allows for the use of optional::value_or() as a way to
explicitly recover from overflow.
- Include limits.h header file preprocessing directive in order to
satisfy gcc's numeric_limits incomplete_type requirement.
Fix #3495
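A sketch of the interface change described above, with simplified overflow handling; the real `mulDiv` differs in its arithmetic details:
```
#include <cstdint>
#include <limits>
#include <optional>

std::optional<std::uint64_t>
mulDiv(std::uint64_t value, std::uint64_t mul, std::uint64_t div)
{
    if (div == 0)
        return std::nullopt;
    if (mul != 0 && value > std::numeric_limits<std::uint64_t>::max() / mul)
        return std::nullopt;  // overflow is a missing value, not a bool flag
    return (value * mul) / div;
}

// Callers must either check the optional or recover explicitly, e.g.:
//   auto scaled = mulDiv(base, rate, unit).value_or(
//       std::numeric_limits<std::uint64_t>::max());
```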
---------
Co-authored-by: John Freeman <jfreeman08@gmail.com>
* Update the `account_info` API so that the `allowClawback` flag is
included in the response.
* The proposed `Clawback` amendment added an `allowClawback` flag in
the `AccountRoot` object.
* In the API response, under `account_flags`, there is now an
`allowClawback` field with a boolean (`true` or `false`) value.
* For reference, the XLS-39 Clawback implementation can be found in
#4553
Fix #4588
Enhance security during the build process:
* The '-fstack-protector' flag enables stack protection for preventing
buffer overflow vulnerabilities. If an attempt is made to overflow the
buffer, the program will terminate, thus protecting the integrity of
the stack.
* The '-Wl,-z,relro,-z,now' linker flag enables Read-only Relocations
(RELRO), a feature that helps harden the binary against certain types
of exploits, particularly those that involve overwriting the Global
Offset Table (GOT).
* This flag is only set for Linux builds, due to compatibility issues
with apple-clang.
* The `relro` option makes certain sections of memory read-only after
initialization to prevent them from being overwritten, while `now`
ensures that all dynamic symbols are resolved immediately on program
start, reducing the window of opportunity for attacks.
* Add a new YAML file (.pre-commit-config.yaml) to set up pre-commit
hook for clang-format
* The pre-commit hook is opt-in and needs to be installed in order to
run automatically
* Update CONTRIBUTING.md with instructions on how to set up and use the
clang-format linter
Automating the process of running clang-format before committing code
helps to save time by removing the need to fix formatting issues later.
This is a tooling improvement and doesn't change C++ code.
* Update the version of the checkout action (for GitHub Actions) in
`clang-format.yml` and `levelization.yml`.
* The previous version, v2, was raising deprecation warnings due to
its reliance on Node.js 12.
* The latest checkout action version, v3, uses Node.js 16.
When requesting `account_info` with an invalid `signer_lists` value, the
API should return an "invalidParams" error.
`signer_lists` should have a value of type boolean. If it is not a
boolean, then it is invalid input. The response now indicates that.
* This is an API breaking change, so the change is only reflected for
requests containing `"api_version": 2`
* Fix #4539
The debug packages were named with the extension ".ddeb", but due to a
bug in Artifactory, they need to have the ".deb" extension. Debug symbol
packages with ".ddeb" extensions are not indexed, and thus are not
visible in apt clients.
* Fix the issue by renaming the debug packages in the build script.
* Use GCC-11 and update GCC Conan profile.
* This software requires GCC 11 and C++20. However, reporting mode is
built with C++17.
This is a quick band-aid to fix the build. Later, it will be better to
remove this package-building code.
For context, a Debian (deb) package contains bundled software and
resources necessary for installing and managing software on a
Debian-based system, including Ubuntu and derivatives.
- Use powers of two for the enum values to clearly indicate the bitmask
  (fix #3417, #4239)
- Replace the bitmask check with explicit if-conditions to better indicate
  the predicates, removing the complex bitwise-and (&) expression
- This implements the second solution proposed in Nik Bougalis's comment on
  #3417 ("Software does not distinguish between different Conditions")
This change was tested by performing RPC calls with the commands
server_info, server_state, peers, and validation_info. These commands
worked as expected.
Certain inputs for the AccountTx method should return an error. In other
words, an invalid request from a user or client now results in an error
message.
Since this can change the response from the API, it is an API breaking
change. This commit maintains backward compatibility by keeping the
existing behavior for existing requests. When clients specify
"api_version": 2, they will be able to get the updated error messages.
Update unit tests to check the error based on the API version.
* Fix #4288
* Fix #4545
* Commits 0b812cd (#4427) and 11e914f (#4516) conflict. The first added
references to `ServerHandlerImp` in files outside of that class's
organizational unit (which is technically incorrect). The second
removed `ServerHandlerImp`, but was not up to date with develop. This
results in the build failing.
* Fixes the build by changing references to `ServerHandlerImp` to
the more correct `ServerHandler`.
Rename `ServerHandlerImp` to `ServerHandler`. There was no other
ServerHandler definition despite the existence of a header suggesting
that there was.
This resolves a piece of historical confusion in the code, which was
identified during a code review.
The changes in the diff may look more extensive than they actually are.
The contents of `impl/ServerHandlerImp.h` were merged into
`ServerHandler.h`, making the latter file appear to have undergone
significant modifications. However, this is a non-breaking refactor that
only restructures code.
Fix the libxrpl library target for consumers using Conan.
* Fix installation issues and update includes.
* Update requirements in the Conan package info.
* libxrpl requires openssl::crypto.
(Conan is a software package manager for C++.)
Replace hand-rolled code with std::from_chars for better
maintainability.
The C++ std::from_chars function is intended to be as fast as possible,
so it is unlikely to be slower than the code it replaces. This change is
a net gain because it reduces the amount of hand-rolled code.
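For illustration, parsing an unsigned integer with `std::from_chars`; the helper name is hypothetical, not the code that was replaced:
```
#include <charconv>
#include <optional>
#include <string_view>

std::optional<unsigned>
parseUnsigned(std::string_view s)
{
    unsigned value = 0;
    auto const [ptr, ec] =
        std::from_chars(s.data(), s.data() + s.size(), value);
    if (ec != std::errc() || ptr != s.data() + s.size())
        return std::nullopt;  // not a number, overflow, or trailing characters
    return value;
}
```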
Apply a minor cleanup in `TypedField`:
* Remove a non-working and unused move constructor.
* Constrain the remaining constructor so that it is not generic enough to
be used as a copy or move constructor.
There is now an Artifactory (thanks @shichengripple001 and team!) to
hold dependency binaries for the builds.
* Rewrite the `nix` workflow to use it and cut the time down to a mere
21 minutes.
* This workflow should continue to work (just more slowly) for forks
that do not have access to the Artifactory.
Enhance the /crawl endpoint by publishing WebSocket/RPC ports in the
server_info response. The function processing requests to the /crawl
endpoint actually calls server_info internally, so this change enables a
server to advertise its WebSocket/RPC port(s) to peers via the /crawl
endpoint. `grpc` and `peer` ports are included as well.
The new `ports` array contains objects, each containing a `port` for the
listening port (number string), and an array `protocol` listing the
supported protocol(s).
This allows crawlers to build a richer topology without needing to
port-scan nodes. For non-admin users (including peers), the info about
*admin* ports is excluded.
Also increase test coverage for RPC ServerInfo.
Fix #2837.
Curtail the occurrence of order books that are blocked by reduced offers
with the implementation of the fixReducedOffersV1 amendment.
This commit identifies three ways in which offers can be reduced:
1. A new offer can be partially crossed by existing offers, so the new
offer is reduced when placed in the ledger.
2. An in-ledger offer can be partially crossed by a new offer in a
transaction. So the in-ledger offer is reduced by the new offer.
3. An in-ledger offer may be under-funded. In this case the in-ledger
offer is scaled down to match the available funds.
Reduced offers can block order books if the effective quality of the
reduced offer is worse than the quality of the original offer (from the
perspective of the taker). It turns out that, for small values, the
quality of the reduced offer can be significantly affected by the
rounding mode used during scaling computations.
This commit adjusts some rounding modes so that the quality of a reduced
offer is always at least as good (from the taker's perspective) as the
original offer.
The amendment is titled fixReducedOffersV1 because additional ways of
producing reduced offers may come to light. Therefore, there may be a
future need for a V2 amendment.
* Enable api_version 2, which is currently in beta. It is expected to be
marked stable by the next stable release.
* This does not change any defaults.
* The only existing tests changed were one that set the same flag, which
was now redundant, and a couple that tested versioning explicitly.
- Resolve gcc compiler warning:
AccountObjects.cpp:182:47: warning: redundant move in initialization [-Wredundant-move]
- The std::move() operation on trivially copyable types may generate a
compile warning in newer versions of gcc.
- Remove extraneous header (unused imports) from a unit test file.
Fix a bug in the `NODE_SIZE` auto-detection feature in `Config.cpp`.
Specifically, this patch corrects the calculation for the total amount
of RAM available, which was previously returned in bytes, but is now
being returned in units of the system's memory unit. Additionally, the
patch adjusts the node size based on the number of available hardware
threads of execution.
Misaligned load and store operations are supported by both Intel and ARM
CPUs. However, in C++, these operations are undefined behavior (UB).
Substituting these operations with a `memcpy` fixes this UB. The
compiled assembly code is equivalent to the original, so there is no
performance penalty to using memcpy.
For context: The unaligned load and store operations fixed here were
originally introduced in the slab allocator (#4218).
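A sketch of the technique: copy through `std::memcpy` instead of dereferencing a cast pointer; compilers turn this into a single (possibly unaligned) load or store on x86 and ARM:
```
#include <cstdint>
#include <cstring>

inline std::uint64_t
loadUnaligned64(void const* src)
{
    std::uint64_t v;
    std::memcpy(&v, src, sizeof(v));  // well-defined even if src is misaligned
    return v;
}

inline void
storeUnaligned64(void* dst, std::uint64_t v)
{
    std::memcpy(dst, &v, sizeof(v));
}
```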
This assert was put in the wrong place, but it only triggers if shards
are configured. This change moves the assert to the right place and
updates it to ensure correctness.
The assert could be hit after the server downloads some shards. It may
be necessary to restart after the shards are downloaded.
Note that asserts are normally checked only in debug builds, so release
packages should not be affected.
Introduced in: #4319 (66627b26cf)
Global variables in different TUs are initialized in an undefined order.
At least one global variable was accessing a global switchover variable.
This caused the switchover variable to be accessed in an uninitialized
state.
Since the switchover is always explicitly set before transaction
processing, this bug cannot affect transaction processing, but it could
affect unit tests (and potentially the value of some global variables).
Note: at the time of this patch the offending bug is not yet in
production.
Three new fields are added to the `Tx` responses for NFTs:
1. `nftoken_id`: This field is included in the `Tx` responses for
`NFTokenMint` and `NFTokenAcceptOffer`. This field indicates the
`NFTokenID` for the `NFToken` that was modified on the ledger by the
transaction.
2. `nftoken_ids`: This array is included in the `Tx` response for
`NFTokenCancelOffer`. This field provides a list of all the
`NFTokenID`s for the `NFToken`s that were modified on the ledger by
the transaction.
3. `offer_id`: This field is included in the `Tx` response for
`NFTokenCreateOffer` transactions and shows the OfferID of the
`NFTokenOffer` created.
The fields make it easier to track specific tokens and offers. The
implementation includes code (by @ledhed2222) from the Clio project to
extract NFTokenIDs from mint transactions.
The API would allow seeds (and public keys) to be used in place of
accounts at several locations in the API. For example, when calling
account_info, you could pass `"account": "foo"`. The string "foo" is
treated like a seed, so the method returns `actNotFound` (instead of
`actMalformed`, as most developers would expect). In the early days,
this was a convenience to make testing easier. However, it allows for
poor security practices, so it is no longer a good idea. Allowing a
secret or passphrase is now considered a bug. Previously, it was
controlled by the `strict` option on some methods. With this commit,
since the API does not interpret `account` as `seed`, the option
`strict` is no longer needed and is removed.
Removing this behavior from the API is a [breaking
change](https://xrpl.org/request-formatting.html#breaking-changes). One
could argue that it shouldn't be done without bumping the API version;
however, in this instance, there is no evidence that anyone is using the
API in the "legacy" way. Furthermore, it is a potential security hole,
as it allows users to send secrets to places where they are not needed,
where they could end up in logs, error messages, etc. There's no reason
to take such a risk with a seed/secret, since only the public address is
needed.
Resolves: #3329, #3330, #4337
BREAKING CHANGE: Remove non-strict account parsing (#3330)
On macOS, if you have not installed something that depends on `xz`, then your
system may lack `lzma`, resulting in a build error similar to:
```
Downloading libarchive-3.6.0.tar.xz completed [6250.61k]
libarchive/3.6.0:
ERROR: libarchive/3.6.0: Error in source() method, line 120
get(self, **self.conan_data["sources"][self.version], strip_root=True)
ReadError: file could not be opened successfully:
- method gz: ReadError('not a gzip file')
- method bz2: ReadError('not a bzip2 file')
- method xz: CompressionError('lzma module is not available')
- method tar: ReadError('invalid header')
```
The solution is to ensure that `lzma` is installed by installing `xz`.
SOCI is the C++ database access library. The SOCI recipe was updated in
Conan Center Index (CCI), and it breaks for our choice of options. This
breakage occurs when you build with a fresh Conan cache (e.g. when you
submit a PR, or delete `~/.conan/data`).
* Add a custom Conan recipe for SOCI v4.0.3
* Update dependency building to handle exporting and installing Snappy
and SOCI
* Fix workflows to use custom SOCI recipe
* Update BUILD.md to include instruction for exporting the SOCI Conan
recipe:
* `conan export external/soci soci/4.0.3@`
This solution has been verified on Ubuntu 20.04 and macOS.
Context:
* There is a compiler error that the `sqlite3.h` header is not available
when building soci.
* When package B depends on package A, it finds the pieces it needs by
importing the Package Configuration File (PCF) that Conan generates
for package A.
* Read the CMake written by package B to check that it is importing
the PCF correctly and linking its exports correctly.
* Since this can be difficult, it is often more efficient to check
https://github.com/conan-io/conan-center-index/issues for package B
to see if anyone else has seen a similar problem.
* One of the issues points to a problem area in soci's CMake. To
confirm the diagnosis, review soci's CMake (after any patches are
applied) in the Conan build directory `build/$buildId/src/`.
* Review the Conan-generated PCF in
`build/$buildId/build/$buildType/generators/`.
* In this case, the problem was likely (re)introduced by
https://github.com/conan-io/conan-center-index/pull/17026
* If there is a problem in the source or in the Conan recipe, the
fastest fix is to copy the recipe and either:
* Add a source patch to fix any problems in the source.
* Change the recipe to fix any problems in the recipe.
* In this case, this can be done by finding soci's Conan recipe at
https://github.com/conan-io/conan-center-index/tree/master/recipes/soci
and then copying the `all` directory as `external/$packageName` in our
project. Then, make any changes.
* Test packages can be removed from the recipe folder as they are not
needed.
* If adding a patch in the `patches` directory, add a description for
it to `conandata.yml`.
* Since `conanfile.py` has no `version` property on the recipe class,
builders need to pass a version on the command line (like they do
for our `snappy` recipe).
* Add an example command to `BUILD.md`.
Future work: It may make sense to refer to recipes by revision, by
checking in a lockfile.
This change makes progress on the plan in #4371. It does not replicate
the full [matrix] implemented in #3851, but it does replicate the 1.ii
section of the Linux matrix. It leverages "heavy" self-hosted runners,
and demonstrates a repeatable pattern for future matrices.
[matrix]: d794a0f3f1/.github/README.md (continuous-integration)
Address issues related to the removal of `std::{u,bi}nary_function` in
C++17 and some warnings with Clang 16. Some warnings appeared with the
upgrade to Apple clang version 14.0.3 (clang-1403.0.22.14.1).
- `std::{u,bi}nary_function` were removed in C++17. They were empty
classes with a few associated types. We already have conditional code
to define the types. Just make it unconditional.
- libc++ checks a cast in an unevaluated context to see if a type
inherits from a binary function class in the standard library, e.g.
`std::equal_to`, and this causes an error when the type privately
inherits from such a class. Change these instances to public
inheritance.
- We don't need a middle-man for the empty base optimization. Prefer to
inherit directly from an empty class than from
`beast::detail::empty_base_optimization`.
- Clang warns when all the uses of a variable are removed by conditional
compilation of assertions. Add a `[[maybe_unused]]` annotation to
suppress it.
- As a drive-by clean-up, remove commented code.
See related work in #4486.
If `--quorum` setting is present on the command line, use the specified
value as the minimum quorum. This allows for the use of a potentially
fork-unsafe quorum, but it is sometimes necessary for small and test
networks.
Fix #4488.
---------
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
Add instructions for installing rippled using the package managers APT
and YUM. Some steps were adapted from xrpl.org.
---------
Co-authored-by: Michael Legleux <mlegleux@ripple.com>
Newer compilers, such as Apple Clang 15.0, have removed `std::result_of`
as part of C++20. The build instructions provided a fix for this (by
adding a preprocessor definition), but the fix was broken.
This fixes the fix by:
* Adding the `conf` prefix for tool configurations (which had been
forgotten).
* Passing `extra_b2_flags` to `boost` package to fix its build.
* Defining `BOOST_ASIO_HAS_STD_INVOKE_RESULT` in order to build boost
1.77 with a newer compiler.
Add a `NetworkID` field to help prevent replay attacks on and from
side-chains.
The new field must be used when the server is using a network id > 1024.
To preserve legacy behavior, all chains with a network ID less than 1025
retain the existing behavior. This includes Mainnet, Testnet, Devnet,
and hooks-testnet. If `sfNetworkID` is present in any transaction
submitted to any of the nodes on one of these chains, then
`telNETWORK_ID_MAKES_TX_NON_CANONICAL` is returned.
Since chains with a network ID less than 1025, including Mainnet, retain
the existing behavior, there is no need for an amendment.
The `NetworkID` helps to prevent replay attacks because users specify a
`NetworkID` field in every transaction for that chain.
This change introduces a new UINT32 field, `sfNetworkID` ("NetworkID").
There are also three new local error codes for transaction results:
- `telNETWORK_ID_MAKES_TX_NON_CANONICAL`
- `telREQUIRES_NETWORK_ID`
- `telWRONG_NETWORK`
To learn about the other transaction result codes, see:
https://xrpl.org/transaction-results.html
Local error codes were chosen because a transaction is not necessarily
malformed if it is submitted to a node running on the incorrect chain.
This is a local error specific to that node and could be corrected by
switching to a different node or by changing the `network_id` on that
node. See:
https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html
In addition to using `NetworkID`, it is still generally recommended to
use different accounts and keys on side-chains. However, people will
undoubtedly use the same keys on multiple chains; for example, this is
common practice on other blockchain networks. There are also some
legitimate use cases for this.
An `app.NetworkID` test suite has been added, and `core.Config` was
updated to include some network_id tests.
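A sketch of the rules described above (not the actual preflight code; the enum and function names are illustrative):
```
#include <cstdint>
#include <optional>

enum class NetIDCheck { ok, nonCanonical, requiresNetworkID, wrongNetwork };

NetIDCheck
checkNetworkID(std::uint32_t nodeNetworkID,
               std::optional<std::uint32_t> txNetworkID)
{
    if (nodeNetworkID <= 1024)
    {
        // Legacy chains (Mainnet, Testnet, Devnet, ...): the field must be
        // absent, otherwise telNETWORK_ID_MAKES_TX_NON_CANONICAL.
        return txNetworkID ? NetIDCheck::nonCanonical : NetIDCheck::ok;
    }
    if (!txNetworkID)
        return NetIDCheck::requiresNetworkID;  // telREQUIRES_NETWORK_ID
    if (*txNetworkID != nodeNetworkID)
        return NetIDCheck::wrongNetwork;       // telWRONG_NETWORK
    return NetIDCheck::ok;
}
```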
The `Ledger` class contains two `SHAMap` instances: the state and
transaction maps. Previously, the maps were dynamically allocated using
`std::make_shared` despite the fact that they did not require lifetime
management separate from the lifetime of the `Ledger` instance to which
they belong.
The two `SHAMap` instances are now regular member variables. Some smart
pointers and dynamic memory allocations were avoided by using stack-based
alternatives.
Commit 3 of 3 in #4218.
The `SHAMapItem` class contains a variable-sized buffer that
holds the serialized data associated with a particular item
inside a `SHAMap`.
Prior to this commit, the buffer for the serialized data was
allocated separately. Coupled with the fact that most instances
of `SHAMapItem` were wrapped around a `std::shared_ptr` meant
that an instantiation might result in up to three separate
memory allocations.
This commit switches away from `std::shared_ptr` for `SHAMapItem`
and uses `boost::intrusive_ptr` instead, allowing the reference
count for an instance to live inside the instance itself. Coupled
with using a slab-based allocator to optimize memory allocation
for the most commonly sized buffers, the net result is significant
memory savings. In testing, the reduction in memory usage hovers
between 400MB and 650MB. Other scenarios might result in larger
savings.
In performance testing with NFTs, this commit reduces memory size by
about 15% sustained over long duration.
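For illustration, the general `boost::intrusive_ptr` pattern described above, where the reference count lives inside the object itself; `Item` is a hypothetical stand-in for `SHAMapItem`:
```
#include <atomic>
#include <boost/intrusive_ptr.hpp>

class Item
{
    mutable std::atomic<int> refs_{0};

    friend void intrusive_ptr_add_ref(Item const* p) noexcept
    {
        p->refs_.fetch_add(1, std::memory_order_relaxed);
    }

    friend void intrusive_ptr_release(Item const* p) noexcept
    {
        // Destroy the object when the last reference goes away.
        if (p->refs_.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete p;
    }
};

// boost::intrusive_ptr<Item> item(new Item);  // no separate control block
```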
Commit 2 of 3 in #4218.
When instantiating a large amount of fixed-sized objects on the heap
the overhead that dynamic memory allocation APIs impose will quickly
become significant.
In some cases, allocating a large amount of memory at once and using
a slabbing allocator to carve the large block into fixed-sized units
that are used to service memory requests will help to reduce memory
fragmentation significantly and, potentially, improve overall
performance.
This commit introduces a new `SlabAllocator<>` class that exposes an
API that is _similar_ to the C++ concept of an `Allocator` but it is
not meant to be a general-purpose allocator.
It should not be used unless profiling and analysis of specific memory
allocation patterns indicates that the additional complexity introduced
will improve the performance of the system overall, and subsequent
profiling proves it.
A helper class, `SlabAllocatorSet<>`, simplifies handling of variably
sized objects that benefit from slab allocations.
This commit incorporates improvements suggested by Greg Popovitch
(@greg7mdp).
Commit 1 of 3 in #4218.
Add Clio-specific JSS constants to ensure a common vocabulary of
keywords in Clio and this project. By providing visibility of the full
API keyword namespace, it reduces the likelihood of developers
introducing minor variations on names used by Clio, or unknowingly
claiming a keyword that Clio has already claimed. This change moves this
project slightly away from having only the code necessary for running
the core server, but it is a step toward the goal of keeping this
server's and Clio's APIs similar. The added JSS constants are annotated
to indicate their relevance to Clio.
Clio can be found here: https://github.com/XRPLF/clio
Signed-off-by: ledhed2222 <ledhed2222@users.noreply.github.com>
- Include NFTokenPages in account_objects to make it easier to
understand an account's Owner Reserve and simplify app development.
- Update related tests and documentation.
- Fix #4347.
For info about the Owner Reserve, see https://xrpl.org/reserves.html
---------
Co-authored-by: Scott Schurr <scott@ripple.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
Change `ledger_data` to return an empty list when all entries are
filtered out.
When the `type` field is specified for the `ledger_data` method, it is
possible that no objects of the specified type are found. This can even
occur if those objects exist, but not in the section that the server
checked while serving your request. Previously, the `state` field of the
response had the value `null`, instead of an empty array `[]`. By
changing this to an empty array, the response is the same data type so
that clients can handle it consistently.
For example, in Python, `for entry in state` should now work correctly.
It would raise an exception if `state` is `null` (or `None`).
This could break client code that explicitly checks for null. However,
this fix aligns the response with the documentation, where the `state`
field is an array.
Fix #4392.
Previously, the object `account_data` in the `account_info` response
contained a single field `Flags` that contains flags of an account. API
consumers must perform bitwise operations on this field to retrieve the
account flags.
This change adds a new object, `account_flags`, at the top level of the
`account_info` response `result`. The object contains relevant flags of
the account. This makes it easier to write simple code to check a flag's
value.
The flags included may depend on the amendments that are enabled.
Fix #2457.
Log exception messages at several locations.
Previously, these were locations where an exception was caught, but the
exception message was not logged. Logging the exception messages can be
useful for analysis or debugging. The additional logging could have a
small negative performance impact.
Fix #3213.
In the release notes (current and historical), there is a link to the
`Builds` directory. By creating `Builds/README.md` with a link to
`BUILD.md`, it is easier to find the build instructions.
* Create the FeeSettings object in genesis ledger.
* Initialize with default values from the config. Removes the need to
pass a Config down into the Ledger initialization functions, including
setup().
* Drop the undocumented fee config settings in favor of the [voting]
section.
* Fix #3734.
* If you previously used fee_account_reserve and/or fee_owner_reserve,
you should change to using the [voting] section instead. Example:
```
[voting]
account_reserve=10000000
owner_reserve=2000000
```
* Because old Mainnet ledgers (prior to 562177 - yes, I looked it up)
don't have FeeSettings, some of the other ctors will default them to
the config values before setup() tries to load the object.
* Update default Config fee values to match Mainnet.
* Fix unit tests:
* Updated fees: Some tests are converted to use computed values of fee
object, but the default Env config was also updated to fix the rest.
* Unit tests that check the structure of the ledger have updated
hashes and counts.
Add the ability to mark amendments as obsolete. There are some known
amendments that should not be voted for because they are broken (or
similar reasons).
This commit marks four amendments as obsolete:
1. `CryptoConditionsSuite`
2. `NonFungibleTokensV1`
3. `fixNFTokenDirV1`
4. `fixNFTokenNegOffer`
When an amendment is `Obsolete`, voting for the amendment is prevented.
A determined operator can still vote for the amendment by changing the
source, and doing so does not break any protocol rules.
The "feature" command now does not modify the vote for obsolete
amendments.
Before this change, there were two options for an amendment's
`DefaultVote` behavior: yes and no.
After this change, there are three options for an amendment's
`VoteBehavior`: DefaultYes, DefaultNo, and Obsolete.
To be clear, if an obsolete amendment were to (somehow) be activated by
consensus, the server still has the code to process transactions
according to that amendment, and would not be amendment blocked. It
would function the same as if it had been voting "no" on the amendment.
Resolves #4014.
Incorporates review feedback from @scottschurr.
Fix a case where `ripple::Expected` returned a json array, not a value.
The problem was that `Expected` invoked the wrong constructor for the
expected type, which resulted in a constructor that took multiple
arguments being interpreted as an array.
A proposed fix was provided by @godexsoft, which involved a minor
adjustment to three constructors that replaces the use of curly braces
with parentheses. This makes `Expected` usable for
[Clio](https://github.com/XRPLF/clio).
A unit test is also included to ensure that the issue doesn't occur
again in the future.
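The general C++ pitfall behind this bug, shown with `std::vector` for illustration (the actual fix was inside `Expected`'s constructors, not in client code): braces strongly prefer an initializer-list constructor, while parentheses select the ordinary one.
```
#include <vector>

std::vector<int> a{10, 2};  // list-initialization: two elements, {10, 2}
std::vector<int> b(10, 2);  // direct-initialization: ten elements of value 2
```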
Make it easy for projects to depend on libxrpl by adding an `ALIAS`
target named `xrpl::libxrpl` for projects to link.
The name was chosen because:
* The current library target is named `xrpl_core`. There is no other
"non-core" library target against which we need to distinguish the
"core" library. We only export one library target, and it should just
be named after the project to keep things simple and predictable.
* Underscores in target or library names are generally discouraged.
* Every target exported in CMake should be prefixed with the project
name.
By adding an `ALIAS` target, existing consumers who use the `xrpl_core`
target will not be affected.
* In the future, there can be a migration plan to make `xrpl_core` the
`ALIAS` target (and `libxrpl` the "real" target, which will affect the
filename of the compiled binary), and eventually remove it entirely.
Also:
* Fix the Conan recipe so that consumers using Conan import a target
named `xrpl::libxrpl`. This way, every consumer can use the same
instructions.
* Document the two easiest methods to depend on libxrpl. Both have been
tested.
* See #4443.
* Remove obsolete build instructions.
* By using Conan, builders can choose which dependencies specifically to
build and link as shared objects.
* Refactor the build instructions based on the plan in #4433.
Without the protocol amendment introduced by this commit, an NFT ID can
be reminted in this manner:
1. Alice creates an account and mints an NFT.
2. Alice burns the NFT with an `NFTokenBurn` transaction.
3. Alice deletes her account with an `AccountDelete` transaction.
4. Alice re-creates her account.
5. Alice mints an NFT with an `NFTokenMint` transaction with params
(`NFTokenTaxon` = 0, `Flags` = 9).
This will mint an NFT with the same `NFTokenID` as the one minted in
step 1. The params that construct the NFT ID will cause a collision in
`NFTokenID` if their values are equal before and after the remint.
With the `fixNFTokenRemint` amendment, there is a new sequence number
construct which avoids this scenario:
- A new `AccountRoot` field, `FirstNFTSequence`, stays constant over
time.
- This field is set to the current account sequence when the account
issues their first NFT.
- Otherwise, it is not set.
- The sequence of a newly-minted NFT is computed by: `FirstNFTSequence +
MintedNFTokens`.
- `MintedNFTokens` is then incremented by 1 for each mint.
Furthermore, there is a new account deletion restriction:
- An account can only be deleted if `FirstNFTSequence + MintedNFTokens +
256` is less than the current ledger sequence.
- 256 was chosen because it already exists in the current account
deletion constraint.
Without this restriction, an NFT may still be remintable. Example
scenario:
1. Alice's account sequence is at 1.
2. Bob is Alice's authorized minter.
3. Bob mints 500 NFTs for Alice. The NFTs will have sequences 1-501 (as
NFT sequence is computed by `FirstNFTokenSequence + MintedNFTokens`).
4. Alice deletes her account at ledger 257 (as required by the existing
`AccountDelete` amendment).
5. Alice re-creates her account at ledger 258.
6. Alice mints an NFT. `FirstNFTokenSequence` initializes to her account
sequence (258), and `MintedNFTokens` initializes as 0. This
newly-minted NFT would have a sequence number of 258, which is a
duplicate of what she issued through authorized minting before she
deleted her account.
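The accounting described above, as a small sketch (field and function names follow the description; they are not the actual ledger code):
```
#include <cstdint>

// Sequence assigned to the next minted NFT.
std::uint32_t
nextNFTokenSequence(std::uint32_t firstNFTokenSequence,
                    std::uint32_t mintedNFTokens)
{
    return firstNFTokenSequence + mintedNFTokens;
}

// Account deletion restriction added by the amendment.
bool
nftAccountDeletionAllowed(std::uint32_t firstNFTokenSequence,
                          std::uint32_t mintedNFTokens,
                          std::uint32_t currentLedgerSeq)
{
    return firstNFTokenSequence + mintedNFTokens + 256 < currentLedgerSeq;
}
```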
---------
Signed-off-by: Shawn Xie <shawnxie920@gmail.com>
There were situations where `marker`s returned by `account_lines` did
not work on subsequent requests, returning "Invalid Parameters".
This was caused by the optimization implemented in "Enforce account RPC
limits by account objects traversed":
e28989638d
Previously, the ledger traversal would find up to `limit` account lines,
and if there were more, the marker would be derived from the key of the
next account line. After the change, ledger traversal would _consider_
up to `limit` account objects of any kind found in the account's
directory structure. If there were more, the marker would be derived
from the key of the next object, regardless of type.
With this optimization, it is expected that `account_lines` may return
fewer than `limit` account lines - even 0 - along with a marker
indicating that there may be more available.
The problem is that this optimization did not update the
`RPC::isOwnedByAccount` helper function to handle those other object
types. Additionally, XLS-20 added `ltNFTOKEN_OFFER` ledger objects to
objects that have been added to the account's directory structure, but
did not update `RPC::isOwnedByAccount` to be able to handle those
objects. The `marker` provided in the example for #4354 includes the key
for an `ltNFTOKEN_OFFER`. When that `marker` is used on subsequent
calls, it is not recognized as valid, and so the request fails.
* Add unit test that walks all the object types and verifies that all of
their indexes can work as a marker.
* Fix #4340
* Fix #4354
When writing objects to the NodeStore, we need to convert them from
the in-memory format to the binary format used by the node store.
The conversion is handled by the `EncodedBlob` class, which is only
instantiated on the stack. Coupled with the fact that most objects
are under 1024 bytes in size, this presents an opportunity to elide
a memory allocation in a critical path.
This commit also simplifies the interface of `EncodedBlob` and
eliminates a subtle corner case that could result in dangling
pointers.
These changes are not expected to cause a significant reduction in
memory usage. The change avoids the use of a `std::shared_ptr` when
unnecessary and tries to use stack-based memory allocation instead
of the heap whenever possible.
This is a net gain both in terms of memory usage (lower
fragmentation) and performance (less work to do at runtime).
In rare circumstances, both `onRequestTimeout` and the response handler
(`onSiteFetch` or `onTextFetch`) can get queued and processed. In all
observed cases, the response handler processes a network error.
`onRequestTimeout` usually runs first, but on rare occasions, the
response handler runs first, which leaves `activeResource` empty.
* Prevent internal error by catching overflow exception in `gateway_balances`.
* Treat `gateway_balances` obligations overflow as max (largest valid) `STAmount`.
* Note that very large sums of STAmount are approximations regardless.
---------
Co-authored-by: Scott Schurr <scott@ripple.com>
- MSVC 19.x reported a warning about import paths in boost for
function_output_iterator class (boost::function_output_iterator).
- Eliminate that warning by updating the import paths, as suggested by
the compiler warnings.
Port numbers can now be specified using either a colon or a space.
Examples:
1.2.3.4:51235
1.2.3.4 51235
- In the configuration file, an annoying "gotcha" for node operators is
accidentally specifying IP:PORT combinations using a colon. The code
previously expected a space, not a colon. It also does not provide
good feedback when this operator error is made.
- This change simply allows this mistake (using a colon) to be fixed
automatically, preserving the intention of the operator.
- Add unit tests, which test the functionality when specifying IP:PORT
in the configuration file.
- The RPCCall test regime is not specific enough to test this
functionality; it has been tested by hand.
- Ensure IPv6 addresses are not confused for ip:port
---------
Co-authored-by: Elliot Lee <github.public@intelliot.com>
- Implement the `operator==` and the `operator<=>` (aka the spaceship
operator) in `base_uint`, `Issue`, and `Book`.
- C++20-compliant compilers automatically provide the remaining
comparison operators (e.g. `operator<`, `operator<=`, ...).
- Remove the function compare() because it is no longer needed.
- Maintain the same semantics as the existing code.
- Add some unit tests to gain further confidence.
- Fix #2525.
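A self-contained sketch of the C++20 approach; `IssueSketch` is a hypothetical stand-in, not the real `ripple::Issue`:
```
#include <compare>
#include <cstdint>

struct IssueSketch
{
    std::uint64_t currency;
    std::uint64_t account;

    // Defaulted three-way comparison; the compiler synthesizes <, <=, >, >=
    // from it, and an implicitly defaulted operator== as well.
    constexpr auto
    operator<=>(IssueSketch const&) const = default;
};

static_assert(IssueSketch{1, 2} < IssueSketch{1, 3});
```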
In Reporting Mode, a server would core dump when it is not able to read
from Cassandra. This patch prevents the core dump when Cassandra is down
for reporting mode servers. This does not fix the root cause, but it
cuts down on some of the resulting noise.
* Follow-up to #4336
* NFToken is the naming convention in the codebase (rather than NFT)
* Rename `lsfDisallowIncomingNFTOffer` to `lsfDisallowIncomingNFTokenOffer`
* Rename `asfDisallowIncomingNFTOffer` to `asfDisallowIncomingNFTokenOffer`
Partially revert the functionality introduced
with #4195 / 5a15229 (part of 1.10.0-b1).
Acknowledgements:
Aaron Hook for responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find.
To report a bug, please send a detailed report to:
bugs@xrpl.org
---------
Co-authored-by: Nik Bougalis <nikb@bougalis.net>
- Copies the recipe for Snappy from Conan Center, but removes three
lines that explicitly link the standard library, which prevents
builders from statically linking it.
- Removes the recipe for RocksDB now that an official recipe for version
6.27.3 is in Conan Center.
Developers will likely need to remove cached versions of both RocksDB
and Snappy:
```
conan remove -f rocksdb
conan remove -f snappy
```
---------
Co-authored-by: John Freeman <jfreeman08@gmail.com>
* Set "fail-fast: false" so that multiple jobs in one workflow can
finish independently. By default, if one job fails, other running jobs
will be aborted, even if the other jobs are working fine and are
almost done. This leads to wasted time and resources if the failure
is, for example, OS specific, or due to a flaky unit test, and the
failed job needs to be re-run, because all the jobs end up re-running.
* Put conditions back into the windows.yml job (manual, and for
a specific branch name and that job). This prevents Github Actions
from sending "No jobs were run" failure emails on every commit.
Without this amendment, for NFTs using broker mode, if the sell offer contains a destination and that destination is the buyer account, anyone can broker the transaction. Also, if a buy offer contains a destination and that destination is the seller account, anyone can broker the transaction. This is not ideal and is misleading.
Instead, with this amendment: If you set a destination, that destination needs to be the account settling the transaction. So, the broker must be the destination if they want to settle. If the buyer is the destination, then the buyer must accept the sell offer, as you cannot broker your own offers.
If users want their offers open to the public, then they should not set a destination. On the other hand, if users want to limit who can settle the offers, then they would set a destination.
Unit tests:
1. The broker cannot broker a destination offer to the buyer and the buyer must accept the sell offer. (0 transfer)
2. If the broker is the destination, the broker will take the difference. (broker mode)
Fixes #4374
It was possible for a broker to combine a sell and a buy offer from an account that already owns an NFT. Such brokering extracts money from the NFT owner and provides no benefit in return.
With this amendment, the code detects when a broker is returning an NFToken to its initial owner and prohibits the transaction. This forbids a broker from selling an NFToken to the account that already owns the token. This fixes a bug in the original implementation of XLS-20.
Thanks to @nixer89 for suggesting this fix.
Fixes 3 issues:
In the following scenario, an account cannot perform NFTokenAcceptOffer even though it should be allowed to:
- BROKER has < S
- ALICE offers to sell token for S
- BOB offers to buy token for > S
- BROKER tries to bridge the two offers
This currently results in `tecINSUFFICIENT_FUNDS`, but should not because BROKER is not spending any funds in this transaction, beyond the transaction fee.
When trading an NFT using IOUs, and when the issuer of the IOU has any non-zero value set for TransferFee on their account via AccountSet (not a TransferFee on the NFT), and when the sale amount is equal to the total balance of that IOU that the buyer has, the resulting balance for the issuer of the IOU will become positive. This means that the buyer of the NFT was supposed to have caused a certain amount of IOU to be burned. That amount was unable to be burned because the buyer couldn't cover it. This results in the buyer owing this amount back to the issuer. In a real world scenario, this is appropriate and can be settled off-chain.
Currency issuers could not make offers for NFTs using their own currency, receiving `tecINSUFFICIENT_FUNDS` if they tried to do so.
With this fix, they are now able to buy/sell NFTs using their own currency.
Three static member functions are introduced with
definitions consistent with std::numeric_limits:
static constexpr Number min() noexcept;
Returns: The minimum positive value. This is the value closest to zero.
static constexpr Number max() noexcept;
Returns: The maximum possible value.
static constexpr Number lowest() noexcept;
Returns: The most negative value; it is less than all other values.
You can set a thread-local flag to direct Number how to round
non-exact results with the syntax:
Number::rounding_mode prev_mode = Number::setround(Number::towards_zero);
This flag will stay in effect for this thread only until another call
to setround. The previously set rounding mode is returned.
You can also retrieve the current rounding mode with:
Number::rounding_mode current_mode = Number::getround();
The available rounding modes are:
* to_nearest : Rounds to nearest representable value. On tie, rounds
to even.
* towards_zero : Rounds towards zero.
* downward : Rounds towards negative infinity.
* upward : Rounds towards positive infinity.
The default rounding mode is to_nearest.
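As a hedged illustration of the API described above (the header path and the assumption that the usual arithmetic operators are available are mine), a caller might scope a rounding-mode change like this:
```
// Illustrative sketch only; the setround/getround/rounding_mode names come
// from the description above, the header path is an assumption.
#include <ripple/basics/Number.h>

using ripple::Number;

Number
truncatedQuotient(Number const& num, Number const& den)
{
    // Switch this thread to truncation, remembering the previous mode.
    Number::rounding_mode const prev = Number::setround(Number::towards_zero);

    Number const q = num / den;  // any non-exact result is rounded towards zero

    // Restore whatever mode the caller was using.
    Number::setround(prev);
    return q;
}
```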
* Conversions to Number are implicit
* Conversions away from Number are explicit and potentially lossy
* If lossy, round to nearest, and to even on tie
* Introduces amendment `XRPFees`
* Convert fee voting and protocol messages to use XRPAmounts
* Includes Validations, Change transactions, the "Fees" ledger object,
and subscription messages
* Improve handling of 0 drop reference fee with TxQ. For use with networks that do not want to require fees
* Note that fee escalation logic is still in place, which may cause the
open ledger fee to rise if the network is busy. 0 drop transactions
will still queue, and fee escalation can be effectively disabled by
modifying the configuration on all nodes
* Change default network reserves to match Mainnet
* Name the new SFields *Drops (not *XRP)
* Reserve SField IDs for Hooks
* Clarify comments explaining the ttFEE transaction field validation
Fixes #4005
Makes it possible for internal RPC error codes to associate
themselves with a non-OK HTTP status code (i.e., something other
than 200). There are quite a number of RPC responses, in addition
to tooBusy, that now have non-OK HTTP status codes.
The new HTTP status codes are only enabled by including
"ripplerpc": "3.0" or higher in the original request.
Otherwise the historical value, 200, continues to be returned.
This ensures that this is not a breaking change.
featureDisallowIncoming is a new amendment that would allow users to opt out of incoming Checks, Payment Channels, NFTokenOffers, and trust lines. This commit includes tests.
Adds four new AccountSet Flags:
1. asfDisallowIncomingNFTOffer
2. asfDisallowIncomingCheck
3. asfDisallowIncomingPayChan
4. asfDisallowIncomingTrustline
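For illustration, a hedged sketch of how these flags might be exercised with the jtx test framework (helper names such as `Env`, `fund`, and `fset` are assumed to behave as in other flag tests; this is not the actual test code):
```
// Sketch only; assumes the usual jtx helpers and that the amendment is active.
using namespace ripple::test::jtx;

void
disallowIncomingSketch(Env& env)
{
    Account const alice{"alice"};
    env.fund(XRP(10000), alice);
    env.close();

    // Opt out of incoming trust lines and Checks; the NFTokenOffer and
    // PayChan flags are set the same way.
    env(fset(alice, asfDisallowIncomingTrustline));
    env(fset(alice, asfDisallowIncomingCheck));
    env.close();
}
```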
Introduces a conanfile.py (and a Conan recipe for RocksDB) to enable building the package with Conan, choosing more recent default versions of dependencies. It removes almost all of the CMake build files related to dependencies, and the configurations for Travis CI and GitLab CI. A new set of cross-platform build instructions is written in BUILD.md.
Includes example GitHub Actions workflow for each of Linux, macOS, Windows.
* Test on macos-12
We use the <concepts> library which was not added to Apple Clang until
version 13.1.6. The default Clang on macos-11 (which is sometimes the
current version of macos-latest) is 13.0.0, and the default Clang on macos-12 is
14.0.0.
Closes #4223.
Clang warned about the code removed in this patch with the warning:
```
warning: out-of-line definition of constexpr static data member is
redundant in C++17 and is deprecated [-Wdeprecated]
```
There's a bug in gdb where unsigned template parameters cause issues with
RTTI. This patch changes a template parameter from `size_t` to `int` to
work around this gdb bug.
Reduce the reserve requirements from 20/5 to 10/2 in line with the current network votes. The requirements of 10/2 have been on the network long enough that new nodes should not still have the old reserve amount.
Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
* Per actions/runner-images#6002, ubuntu-18.04 is being deprecated. If
latest ever fails in the future, we'll need to fix the jobs anyway, so
catch it early.
* Use long option names
* Force clang-format to ubuntu-20.04 because LLVM 10 is not available for 22.04
* Improve move semantics in Expected:
This patch unconditionally moves an `Unexpected<U>` value parameter as
long as `U` is not a reference. If `U` is a reference the code should
not compile. An error type that holds a reference is a strange use-case,
and an overload is not provided. If it is required in the future it can
be added.
The `Expected(U r)` overload should take a forwarding ref.
* Replace enable_if with concepts in Expected
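For context, a hedged usage sketch of `Expected`/`Unexpected` of the kind this refactor affects (the header path is an assumption; the error strings are illustrative):
```
// Sketch only: shows a typical success/error return via Expected.
#include <ripple/basics/Expected.h>
#include <string>

ripple::Expected<int, std::string>
parsePort(std::string const& s)
{
    try
    {
        int const port = std::stoi(s);
        if (port < 1 || port > 65535)
            return ripple::Unexpected<std::string>("port out of range");
        return port;  // success path uses the Expected(U r) overload
    }
    catch (std::exception const&)
    {
        return ripple::Unexpected<std::string>("not a number");
    }
}
```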
* Removed a reference to the default number of workers varying based on whether a node has validation enabled. Workers default to the number of processor cores + 2: https://github.com/ripple/rippled/blob/develop/src/ripple/core/impl/JobQueue.cpp#L166
* Protobuf v2 and Ubuntu 16.04 are no longer supported.
* Updated the protobuf version as v3 is now supported, fixed typos, and automatically passed the number of processors when building boost & rippled.
It turns out that the feature enabled by the tfTrustLine flag
on an NFTokenMint transaction could be used as a means to
attack the NFToken issuer. Details are in
https://github.com/XRPLF/rippled/issues/4300
The fixRemoveNFTokenAutoTrustLine amendment removes the
ability to set the tfTrustLine flag on an NFTokenMint
transaction.
Closes 4300.
When starting, the code generates a new ephemeral private key and
a corresponding certificate to go along with it. This process can
take time and, while this is unlikely to matter for normal server
operations, it can have a significant impact for unit testing and
development. Profiling data suggests that ~20% of the time needed
for a unit test run can be attributed to this.
This commit does several things:
1. It restructures the code so that a new self-signed certificate
and its corresponding private key are only initialized once at
startup; this has minimal impact on the operation of a regular
server.
2. It provides new default DH parameters. This doesn't impact the
security of the connection, but those who compile from scratch
can generate new parameters if they so choose.
3. It properly sets the version number in the certificate, fixing
issue #4007; thanks to @donovanhide for the report.
4. It uses SHA-256 instead of SHA-1 as the hash algorithm for the
certificate and adds some X.509 extensions as well as a random
128-bit serial number.
5. It rounds the certificate's "start of validity" period so that
the server's precise startup time cannot be easily deduced and
limits the validity period to two years, down from ten years.
6. It removes some CBC-based ciphers from the default cipher list
to avoid some potential security issues, such as CVE-2016-2107
and CVE-2013-0169.
Caching the base58check encoded version of an `AccountID` has
performance advantages because of the computationally
heavy cost associated with the conversion, which requires the
application of SHA-256 twice.
This commit makes the cache significantly more efficient in terms
of memory used: it eliminates the map, using a vector with a size
that is determined by the configured size of the node, and a hash
function to directly map any given `AccountID` to a specific slot
in the cache; the eviction policy is simple: in case of collision
the existing entry is removed and replaced with the new data.
Previously, use of the cache was optional and required additional
effort by the programmer. Now the cache is automatic and does not
require any additional work or information.
The new cache also utilizes a 64-way spinlock, to help reduce any
contention that the pressure on the cache would impose.
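A generic, much-simplified sketch of the slot-per-hash strategy described above (the real cache is keyed by `AccountID`, sized from the configured node size, and sharded across a 64-way spinlock rather than a single mutex):
```
// Sketch only: fixed slot count, hash picks the slot, collisions overwrite.
#include <functional>
#include <mutex>
#include <string>
#include <vector>

template <typename Key>
class DirectMappedCacheSketch
{
    struct Entry
    {
        Key key;
        std::string text;  // the cached base58check encoding
        bool valid = false;
    };

    std::vector<Entry> slots_;
    std::mutex mutex_;  // the real cache shards this across 64 spinlocks

public:
    explicit DirectMappedCacheSketch(std::size_t slots) : slots_(slots)
    {
    }

    template <typename Encode>
    std::string
    get(Key const& key, Encode encode)
    {
        std::lock_guard lock{mutex_};
        auto& e = slots_[std::hash<Key>{}(key) % slots_.size()];
        if (!e.valid || e.key != key)
        {
            // Miss or collision: evict whatever was there and recompute
            // the expensive double-SHA-256 encoding.
            e.key = key;
            e.text = encode(key);
            e.valid = true;
        }
        return e.text;
    }
};
```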
Each node on the network is supposed to have a unique cryptographic
identity. Typically, this identity is generated randomly at startup
and stored for later reuse in the (poorly named) file `wallet.db`.
If the file is copied, it is possible for two nodes to share the
same node identity. This is generally not desirable and existing
servers will detect and reject connections to other servers that
have the same key.
This commit achieves three things:
1. It improves the detection code to pinpoint instances where two
distinct servers with the same key connect with each other. In
that case, servers will log an appropriate error and shut down
pending intervention by the server's operator.
2. It makes it possible for server administrators to securely and
easily generate new cryptographic identities for servers using
the new `--newnodeid` command line argument. When a server is
started using this command, it will generate and save a random
secure identity.
3. It makes it possible to configure the identity using a command
line option, which makes it possible to derive it from data or
parameters associated with the container or hardware where the
instance is running by passing the `--nodeid` option, followed
by a single argument identifying the information from which the
node's identity is derived. For example, the following command
will result in nodes with different hostnames having different
node identities: `rippled --nodeid $HOSTNAME`
The last option is particularly useful for automated cloud-based
deployments that minimize the need for storing state and provide
unique deployment identifiers.
**Important note for server operators:**
Depending on variables outside of the control of this code,
such as operating system version or configuration, permissions,
and more, it may be possible for other users or programs to be
able to access the command line arguments of other processes
on the system.
If you are operating in a shared environment, you should avoid
using this option, preferring instead to use the `[node_seed]`
option in the configuration file, and use permissions to limit
exposure of the node seed.
A user who gains access to the value used to derive the node's
unique identity could impersonate that node.
The commit also updates the minimum supported server protocol
version to `XRPL/2.1`, which has been supported since version
1.5.0, and eliminates support for `XRPL/2.0`.
We profiled different algorithms and data structures to understand which
strategy is best from a performance standpoint:
- Linear search on an array;
- Binary search on a sorted array;
- Using `std::map`; and
- Using `std::unordered_map`.
Both linear search and std::unordered_map outperformed the other alternatives,
so no change to the existing data structure is justified. If more handlers are
added, this should be revisited.
For some additional details and timings, please see:
https://github.com/XRPLF/rippled/issues/3298#issuecomment-1185946010
Trustlines must be between two different accounts, but two trustlines exist
where an account extends trust to itself. They were created in the early
days, likely because of bugs that have been fixed. The new fixTrustLinesToSelf
amendment will remove those trustlines when it activates.
The existing spinlock code, used to protect SHAMapInnerNode
child lists, has a mistake that can allow the same child to
be repeatedly locked under some circumstances.
The bug was in the `SpinBitLock::lock` loop condition check
and would result in the loop terminating early.
This commit fixes this and further simplifies the lock loop
making the correctness of the code easier to verify without
sacrificing performance.
It also promotes the spinlock class from an implementation
detail to a more general purpose, easier to use lock class
with clearer semantics. Two different lock types now allow
developers to easily grab either a single spinlock from
a group of spinlocks (packed in an unsigned integer) or to
grab all of the spinlocks at once.
While this commit makes spinlocks more widely available to
developers, they are rarely the best tool for the job. Use
them judiciously and only after careful consideration.
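An illustrative sketch (not the actual rippled classes) of acquiring one spinlock out of a group packed into a single unsigned integer:
```
// Sketch only: one bit per lock; fetch_or tells us whether the bit was
// already set, i.e. whether someone else already holds that lock.
#include <atomic>
#include <cstdint>

class PackedSpinlockSketch
{
    std::atomic<std::uint16_t>& bits_;
    std::uint16_t const mask_;

public:
    PackedSpinlockSketch(std::atomic<std::uint16_t>& bits, int index)
        : bits_(bits), mask_(static_cast<std::uint16_t>(1u << index))
    {
    }

    void
    lock()
    {
        // Keep spinning while the bit was already set before our fetch_or;
        // we own the lock only once we are the ones who flipped it 0 -> 1.
        while (bits_.fetch_or(mask_, std::memory_order_acquire) & mask_)
            ;  // a production lock would pause or yield here
    }

    void
    unlock()
    {
        bits_.fetch_and(static_cast<std::uint16_t>(~mask_),
                        std::memory_order_release);
    }
};
```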
The XLS-20 implementation contained two bugs that would require the
introduction of amendments. This complicates the adoption of XLS-20
by requiring a staggered amendment activation, first of the two fix
amendments, followed by the `NonFungibleTokensV1` amendment.
After consideration, the consensus among node operators is that the
process should be simplified by the introduction of a new amendment
that, if enabled, would behave as if the `NonFungibleTokensV1` and
the two fix amendments (`fixNFTokenDirV1` and `fixNFTokenNegOffer`)
were activated at once.
This commit implements this proposal; it does not introduce any new
functionality or additional features, above and beyond that offered
by the existing amendments.
The peer discovery protocol depends on peers exchanging messages
listing IP addresses for other peers.
Under normal circumstances, these messages should not be sent
frequently; the existing code would track the earliest time a
new message should be processed, but did not actually enforce
that limit.
An incorrect SQL query could cause the server to improperly
configure its voting state after a restart; typically, this
would manifest as an apparent failure to store a vote which
the administrator of the server had configured.
This commit fixes the broken SQL and ensures that amendment
votes are properly reloaded post-restart and closes #4220.
The existing code would, incorrectly, allow negative amounts in offers
for non-fungible tokens. Such offers would be handled very differently
depending on the context: a direct offer would fail with an error code
indicating an internal processing error, whereas brokered offers would
improperly succeed.
This commit introduces the `fixNFTokenNegOffer` amendment that detects
such offers during creation and returns an appropriate error code.
The commit also extends the existing code to allow for buy offers that
contain a `Destination` field, so that a specific broker can be set in
the offer.
A few unit tests have historically generated a lot of noise
on the console from log writes. This noise was not useful
and made it harder to locate actual test failures.
By changing the log level of these tests from
`severities::kError` to `severities::kDisabled`,
that noise was removed from the logs.
While there should never be a missing node when copying the SHAMap,
rippled should not terminate when there's an error rotating the
database. This patch aborts the database rotation rather than aborting rippled.
ThreadSafetyAnalysis was used to identify race conditions in this file.
This analysis was motivated by a (rare) crash while running unit tests.
Add locks to Shard flagged by ThreadSafetyAnalysis
The existing code properly parses the network_id parameter from
the configuration file, but it does not properly set up the code to
use the value correctly. As a result the configured `network_id` is
ignored.
o Fixes an off-by-one when determining which NFTokenPage an
NFToken belongs on.
o Improves handling of packed sets of 32 NFTs with
identical low 96-bits.
o Fixes marker handling by the account_nfts RPC command.
o Tightens constraints of NFTokenPage invariant checks.
Adds unit tests to exercise the fixed cases as well as tests
for previously untested functionality.
The amendment increases the maximum size of an account's signer
list from 8 to 32.
Like all new features, the associated amendment is configured with
a default vote of "no" and server operators will have to vote for
it explicitly if they believe it is useful.
* "A path is considered invalid if and only if it enters and exits an
address node through trust lines where No Ripple has been enabled for
that address." (https://xrpl.org/rippling.html#specifics)
* When loading trust lines for an account "Alice" which was reached
via a trust line that has the No Ripple flag set on Alice's side, do
not use or cache any of Alice's trust lines which have the No Ripple
flag set on Alice's side. For typical "end-user" accounts, this will
return no trust lines.
One of the two versions of the `rngfill` function accepts a pointer
to a buffer and a size (in bytes). The function aims to fill the
provided `buffer` with `size` random bytes. It does this in chunks
of 8 bytes, for as long as possible, and then fills any left-over gap
one byte at a time.
To avoid an annoying and incorrect warning about a potential buffer
overflow in the "trailing write", commit 78bc2727f7
used a `#pragma` to instruct the compiler to not generate the incorrect
diagnostic. Unfortunately, this change _also_ eliminated the trailing
write code, which means that, under some cases, the `rngfill` function
would generate between 1 and 7 fewer random bytes than requested.
This problem would only manifest on builds that do not define `__GNUC__`
which, as of this writing, means MSVC.
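A simplified sketch of the intended behavior (not the actual `rngfill` implementation, which is templated on the random engine): bulk 8-byte writes followed by the trailing partial write that the pragma change accidentally removed:
```
// Sketch only: fill `size` bytes, 8 at a time, then finish the last 1-7 bytes.
#include <cstdint>
#include <cstring>
#include <random>

void
rngfillSketch(void* buffer, std::size_t size, std::mt19937_64& engine)
{
    auto* out = static_cast<std::uint8_t*>(buffer);

    while (size >= sizeof(std::uint64_t))
    {
        std::uint64_t const v = engine();
        std::memcpy(out, &v, sizeof(v));
        out += sizeof(v);
        size -= sizeof(v);
    }

    // The trailing write: without it, up to 7 requested bytes are never
    // filled with random data.
    if (size != 0)
    {
        std::uint64_t const v = engine();
        std::memcpy(out, &v, size);
    }
}
```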
Several hard-coded parameters control the behavior of the ledger
acquisition engine. The values of many of these parameters were
set by intuition and have complex and non-intuitive interactions
with each other and other parts of the code.
An earlier commit attempted to adjust several of these parameters
to improve syncing performance; initial testing was promising but
a number of operators reported experiencing syncing and stability
issues with their servers. As a result, this commit reverts parts
of commit 18235067af.
This commit further adjusts some tunables so as to increase the
aggressiveness of the ledger acquisition engine.
This commit addresses minor bugs introduced with commit
6faaa91850:
- The number of threads used by the database engine was
incorrectly clamped to the lowest possible value, such
that the database was effectively operating in single
threaded mode.
- The number of requests to extract at once was so high
that it could result in increased latency. The bundle
size is now limited to 4 and can be adjusted by a new
configuration option `rq_bundle` in the `[node_db]`
stanza. This is an advanced tunable and adjusting it
should not be needed.
This commit modernizes the `AcceptedLedger` and `AcceptedLedgerTx`
classes, reduces their memory footprint and reduces unnecessary
dynamic memory allocations.
This commit optimizes the way asynchronous nodestore operations are
processed both by reducing the amount of time locks are held and by
minimizing the number of memory allocations and data copying.
Adds smart replacement support to TaggedCache
(Needed to avoid race conditions with negative caching.)
Create a "hotDUMMY" object that represents the absence
of an object.
Allow DatabaseNodeImp::asyncFetch to complete immediately
if object is in cache (positive or negative).
Fix a bug in asyncFetch where the object returned may not
be the correct canonical version because we stash the
object in the results array before we canonicalize it.
When fetching ledgers, the existing code would isolate the peer
that sent the most useful responses and issue follow up queries
only to that peer.
This commit increases the query aggressiveness, and changes the
mechanism used to select which peers to issue follow-up queries
to so as to more evenly spread the load along those peers which
provided useful responses.
The `SHAMapInnerNode` class had a global mutex to protect the
array of node children. Profiling suggested that around 4% of
all attempts to lock the global would block.
This commit removes that global mutex, and replaces it with a
new per-node 16-way spinlock (implemented so as not to affect
the size of an inner node object), effectively eliminating the
lock contention.
* Txs with the same fee level will sort by TxID XORed with the parent
ledger hash (see the sketch after this list).
* The TxQ is re-sorted after every ledger.
* Attempt to future-proof the TxQ tie breaking test
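A hedged sketch of the tie-break referenced above (the header path is an assumption, and the real comparator lives inside the TxQ ordering code):
```
// Sketch only: with equal fee levels, order by txID XOR parent ledger hash,
// so the relative order changes unpredictably from ledger to ledger.
#include <ripple/basics/base_uint.h>

bool
sortsLater(
    ripple::uint256 const& txA,
    ripple::uint256 const& txB,
    ripple::uint256 const& parentLedgerHash)
{
    return (txA ^ parentLedgerHash) > (txB ^ parentLedgerHash);
}
```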
* Abort background path finding when closed or disconnected
* Exit pathfinding job thread if there are no requests left
* Don't bother creating the path find job if there are no requests
* Refactor to remove circular dependency between InfoSub and PathRequest
The existing trust line caching code was suboptimal in that it stored
redundant information, pinned SLEs into memory and required multiple
memory allocations per cached object.
This commit eliminates redundant data, reducing the size of cached
objects and unpinning SLEs from memory, and uses value types to
avoid the need for `std::shared_ptr`. As a result of these changes, the
effective size of a cached object, including the overhead of the memory
allocator and the `std::shared_ptr`, should be reduced by at least 64
bytes. This is significant, as there can easily be tens of millions
of these objects.
This commit combines the `apply_mutex` and `read_mutex` into a single `mutex_`
var. This new `mutex_` var is a `shared_mutex`, and most operations only need to
lock it with a `shared_lock`. The only exception is `applyManifest`, which may need
a `unique_lock`.
One consequence of removing the `apply_mutex` is that more than one `applyManifest`
function can run at the same time. To help reduce the lock contention that a
`unique_lock` would cause, checks that only require reading data are run under a
`shared_lock` (call these the "prewriteChecks"), then the lock is released, then
a `unique_lock` is acquired. Since a currently running `applyManifest` may write
data between the time the `shared_lock` is released and the `unique_lock` is
acquired, the "prewriteChecks" need to be rerun. Duplicating this work isn't
ideal, but the "prewriteChecks" are relatively inexpensive.
A couple of other designs were considered. We could restrict more than one
`applyManifest` function from running concurrently - either with a dedicated
`apply_mutex` or by setting the max number of manifest jobs on the job queue to
one. The biggest issue with this is that if any other function ever adds a write
lock for any reason, `applyManifest` would now be broken - data could be written
between the release of the `shared_lock` and the acquisition of the `unique_lock`.
Note: it is tempting to solve this problem by not releasing the `shared_mutex` and
simply upgrading the lock. In the presence of concurrently running `applyManifest`
functions, this will deadlock (both functions need to wait for the other to
release their read locks before they can acquire a write lock).
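A generic sketch of the "check under a shared lock, release, re-check under a unique lock" pattern described above; the names and the toy sequence-number logic are illustrative, not the actual ManifestCache code:
```
// Sketch only: demonstrates why the read-only checks must be repeated after
// swapping the shared lock for a unique lock.
#include <map>
#include <shared_mutex>
#include <string>

class ManifestStoreSketch
{
    mutable std::shared_mutex mutex_;
    std::map<std::string, int> seqBySigner_;  // highest sequence seen so far

    // The read-only "prewriteChecks": accept only strictly newer sequences.
    bool
    isNewer(std::string const& signer, int seq) const
    {
        auto const it = seqBySigner_.find(signer);
        return it == seqBySigner_.end() || it->second < seq;
    }

public:
    bool
    apply(std::string const& signer, int seq)
    {
        {
            std::shared_lock read{mutex_};
            if (!isNewer(signer, seq))
                return false;
        }  // shared lock released here

        std::unique_lock write{mutex_};
        // Another apply() may have written between the two locks, so the
        // cheap checks are deliberately repeated before mutating state.
        if (!isNewer(signer, seq))
            return false;

        seqBySigner_[signer] = seq;
        return true;
    }
};
```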
In order to preserve the Hooks ABI, it is important that field
values used for hooks be stable going forward.
This commit reserves the required codes so that they will not
be repurposed before Hooks can be proposed for inclusion in
the codebase.
* Remove Application & Database dependency in PerfLog. Replace it with
a callback passed into the constructor.
* Fixes the circular dependency between ripple/nodestore and ripple/basics
If fast loading is enabled but the last persisted ledger is not
entirely on disk, the server would fail to start without manual
intervention by the server operator.
This commit allows the server to detect this scenario and attempt
to automatically recover.
This is a refactor aimed at cleaning up and simplifying the existing
job queue.
As of now, all jobs are cancelled at the same time and in the same
way, so this commit removes the per-job cancellation token. If the
need for such support is demonstrated, support can be re-added.
* Revise documentation for ClosureCounter and Workers.
* Simplify code, removing unnecessary function arguments and
deduplicating expressions
* Restructure job handlers to no longer need to pass a job's
handle to the job.
A typographical error would mishandle the case where a caller explicitly
tries to remove a child that is not actually part of the node. This case
is never invoked in practice, and so the bug will never trigger.
Commit bf013c02ad added support
for incorporating a commit ID into the compiled version string
but did so in a way that did not follow the semantic versioning
standard.
This commit corrects that flaw by moving the commit ID into the
"metadata" part of the version string and properly handles the
case where the commit hash cannot be retrieved.
The pathfinding engine built into the code has several configurable
parameters to adjust the depth of the paths indexed and explored.
These parameters can dramatically impact the performance and memory
consumption of a server; higher values can result in resource usage
increasing exponentially.
These default values were decided early and somewhat arbitrarily at
a time when the network and the size of the network state were much
smaller.
This commit adjusts the default values to reduce the depth of paths
to more reasonable levels; unless explicitly overridden, the changes
mean that pathfinding operations will return fewer, shallower paths
than previous versions of the software.
This commit corrects a technical flaw that was introduced with commit
7c12f01358: as written, a mutex that is
intended to help provide synchronization for multiple threads as they
are each walking the map is declared so that each thread is passed a
dangling reference to its own, separate mutex.
This commit hoists the mutex outside the thread creation loop, so that all
threads use a single mutex, eliminating the dangling reference.
The nodestore includes a built-in cache to reduce the disk I/O
load but, by default, this cache was not initialized unless it
was explicitly configured by the server operator.
This commit introduces sensible defaults based on the server's
configured node size.
It remains possible to completely disable the cache if desired
by explicitly setting the cache size and age parameters
to 0:
[node_db]
...
cache_size = 0
cache_age = 0
- Only duplicate records from archive to writable during online_delete.
- Log duration of nodestore reads.
- Include nodestore counters in perf_log output.
- Remove gratuitous nodestore activity counting.
- Report initial sync duration in server_info and perfLog.
- Report state_accounting in perfLog.
- Make state_accounting durations more accurate.
- Parallel ledger loader.
- Config parameter to load ledgers on start.
1) Don't acquire so many nodes per pass. It's likely
far more than we need.
2) Right-size the finishedReads_ vector on passes other
than just the first.
* Sort by fee level (which is the current behavior) then by transaction
ID (hash).
* Handle the edge case where the account at the end of the queue submits a
higher-paying transaction by walking backwards and comparing against the
cheapest transaction from a different account.
* Use std::any_of to simplify the JobQueue::isOverloaded loop.
* Log load fee values (at debug) received from validations.
* Log remote and cluster fee values (at trace) when changed.
* Refactor JobQueue::isOverloaded to return sooner if overloaded.
* Refactor Transactor::checkFee to only compute fee if ledger is open.
This flag, if present, suppresses the output of incoming
trustlines in default state.
This is primarily motivated by observing that users of Xumm often
have many unwanted incoming trustlines in a default state, which are
not useful in the vast majority of cases.
Being able to suppress those when doing `account_lines` saves bandwidth
and resources.
This commit implements partitioned unordered maps and makes it possible
to traverse such a map in parallel, allowing for more efficient use of
CPU resources.
The `CachedSLEs`, `TaggedCache`, and `KeyCache` classes make use of the
new functionality, which should improve performance.
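A much-simplified sketch of the partitioning idea (the real containers also handle synchronization and integrate with the existing cache classes):
```
// Sketch only: the key's hash selects one of N independent maps, so each
// partition can be swept by its own thread.
#include <array>
#include <functional>
#include <thread>
#include <unordered_map>
#include <vector>

template <typename Key, typename Value, std::size_t Partitions = 4>
class PartitionedMapSketch
{
    std::array<std::unordered_map<Key, Value>, Partitions> parts_;

    std::size_t
    partitionOf(Key const& k) const
    {
        return std::hash<Key>{}(k) % Partitions;
    }

public:
    void
    insert(Key const& k, Value v)
    {
        parts_[partitionOf(k)].insert_or_assign(k, std::move(v));
    }

    // Visit every element, one worker thread per partition. Callers must
    // not mutate the map concurrently; the real classes add locking.
    template <typename Fn>
    void
    parallelForEach(Fn fn)
    {
        std::vector<std::thread> workers;
        for (auto& part : parts_)
            workers.emplace_back([&part, &fn] {
                for (auto& kv : part)
                    fn(kv.first, kv.second);
            });
        for (auto& t : workers)
            t.join();
    }
};
```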
The pathfinding engine requires pre-building large tables which is a
resource-intensive operation. Typically, one would not expect that a
server configured as a validator would also support pathfinding APIs
and so, building those tables by default wastes resources.
This commit, if merged, will disable pathfinding on servers that are
configured as validators, unless the server operator opts to support
it explicitly, by configuring the `[path_search_max]` parameter.
Validator operators that wish to support pathfinding on a validator
and want to use the default values can add the following stanza to
their server's configuration file:
[path_search_max]
7
The priority of different types of jobs was set back in the early
days of development, based on insight and observations that don't
necessarily apply any longer.
Specifically, job types used by the server to sync to the network
were being treated as lower priority than client requests, making
it more difficult to regain sync.
This commit adjusts the priority of several jobs and should allow
servers to prioritize resynchronizing to the network over serving
clients.
The existing calculation would limit the maximum number of threads
that would be created by default to at most 6; this may have been
reasonable a few years ago, but given both the load on the network
as of today and the increase in the number of CPU cores, the value
should be revisited.
This commit, if merged, changes the default calculation for nodes
that are configured as `large` or `huge` to allow for up to twelve
threads.
The "sweep interval" is the amount of time between successive sweeps of
of various in-memory data structures to remove stale items.
Prior to this commit, the interval was automatically adjusted, based on
the value of the `[node_size]` option in a server's configuration file.
If merged, this commit introduces a new configuration option that makes
it possible for a server operator to adjust the sweep interval and make
a CPU/memory tradeoff:
[sweep_interval]
<integer>
The specified value represents the number of seconds between successive
sweeps. The range of valid values is between 10 and 600.
Important operator notes:
This is an advanced configuration option that should not be used unless
there is empirical data which suggests that the default sweep frequency
is either resulting in performance problems or is causing undue load to
the server.
Note that adjusting the sweep interval may not have the intended effect
on the server. Lower values will not always translate to a reduction of
memory usage and higher values will not always translate to a reduction
of CPU usage and/or load.
The performance characteristics of `std::unordered_map` are better
than `std::map` and the former should be preferred when the strict
ordering of the latter is not required.
* Only require adding the new feature names in one place. (Also need to
increment a counter, but a check on startup will catch that.)
* Allows rippled to have the code to support a given amendment, but
not vote for it by default. This allows the amendment to be enabled in
a future version without necessarily amendment blocking these older
versions.
* The default vote is carried with the amendment name in the list of
supported amendments.
* The amendment table is constructed with the amendment and default
vote.
* Also clean up some formatting in the Windows instructions
* Changed the recommended version for Windows to 1.1.1L after deeper
checking uncovered some build issues.
* Patch the soci unsigned-types.h file. If no changes are made, delete
the patched file and exit. If there are changes, backup the original
and replace it with the patched file.
* Fixes #3885
Patch Rocksdb only once:
* The repeated patches do not appear to affect build times, but avoiding
unnecessary copies is good for its own sake.
There are two mutexes in ValidatorSite: `sites_mutex_` and `state_mutex_`. Some
functions end up locking both mutexes. However, depending on the call, the
mutexes could be locked in different orders, resulting in deadlocks.
When both mutexes are needed, this patch always locks the `sites_mutex_` first.
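A generic sketch of the fix (names illustrative): either always acquire the two mutexes in the same order, or take them together with `std::scoped_lock`, which is deadlock-free by construction:
```
// Sketch only: two ways to avoid the lock-order inversion described above.
#include <mutex>

std::mutex sites_mutex_;
std::mutex state_mutex_;

void
consistentOrder()
{
    // The approach taken here: every path that needs both locks takes
    // sites_mutex_ first, then state_mutex_.
    std::lock_guard sites{sites_mutex_};
    std::lock_guard state{state_mutex_};
    // ... work with both protected structures ...
}

void
scopedBoth()
{
    // Equivalent alternative: acquire both at once without deadlock.
    std::scoped_lock both{sites_mutex_, state_mutex_};
    // ... work with both protected structures ...
}
```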
The existing logic involves every server sending every transaction
that it receives to all its peers (except the one that it received
a transaction from).
This commit instead uses a randomized algorithm, where a node will
randomly select peers to relay a given transaction to, caching the
list of transaction hashes that are not relayed and forwarding them
to peers once every second. Peers can then determine whether there
are transactions that they have not seen and can request them from
the node which has them.
It is expected that this feature will further reduce the bandwidth
needed to operate a server.
The existing license file contains copyright statements and license
snippets referencing code that has long since been removed from the
codebase and which are not necessary any longer.
The general copyright statement for the ISC License is also changed
from "Ripple Labs" which is one contributor to the codebase, to the
more general "XRP Ledger Developers".
Remaining code which was originally taken from Bitcoin includes the
relevant copyright statement(s) inline.
To aid in the automated detection of the license type by GitHub and
tools like `licensee`, the following statement which referenced the
ISC license, and which was listed at the bottom of the file, is now
incorporated at the top:
>The accompanying files incorporate work covered by the following copyright
>and previous license notice:
>
>Copyright (c) 2011 Arthur Britto, David Schwartz, Jed McCaleb,
>Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant
This commit removes the `ltINVALID` pseudo-type identifier from
`LedgerEntryType` and the `ttINVALID` pseudo-type identifier from
`TxType` and includes several small additional improvements that
help to simplify the code base.
It also improves the documentation `LedgerEntryType` and `TxType`,
which was all over the place, and highlights some important caveats
associated with making changes to the ledger and transaction type
identifiers.
The commit also adds a safety check to the `KnownFormats<>` class,
that will catch the accidental reuse of format identifiers.
Ideally, this should be done at compile time but C++ does not (yet?)
allow for the sort of introspection that would enable this.
The legacy functions `cdirFirst` and `dirFirst` were mostly
identical; the differences were only type-related. The same
situation existed with `cdirNext` and `dirNext`.
This commit removes the duplicated code by introducing new
template functions that abstract away the differences that
are present between each pair of functions.
This commit also improves the naming of function arguments,
helping to elucidate their purpose & use and to make the
code self-documenting.
The Negative UNL is a feature of the XRP Ledger consensus protocol that
improves liveness (the network's ability to make forward progress) during
a partial outage. Using the Negative UNL, servers adjust their effective
UNLs based on which validators are currently online and operational, so
that a new ledger version can be declared validated even if several trusted
validators are offline.
The Negative UNL has no impact on how the network processes transactions
or what transactions' outcomes are, except that it improves the network's
ability to declare outcomes final during some types of partial outages.
The feature was originally introduced with version **1.6.0** but it was
only possible to manually enable this. If merged, this commit introduces
the amendment associated with the feature so that server operators can
vote on whether to enable this feature.
For more details, please see https://xrpl.org/negative-unl.html
This commit closes #3898.
Under some circumstances, it is possible to induce an out-of-bounds
memory read in the base58 decoder.
This commit addresses this issue.
Acknowledgements:
Guido Vranken for discovering and responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find.
Ripple is generously sponsoring a bug bounty program for the
rippled project. For more information please visit:
https://ripple.com/bug-bounty
The HardenedValidations amendment introduces additional fields
in validations:
- `sfValidatedHash`, if present, is the hash of the last ledger that
the validator considers to be fully validated.
- `sfCookie`, if present, is a 64-bit cookie (the default
implementation selects it randomly at startup but other
implementations are possible), which can be used to improve the
detection and classification of duplicate validations.
- `sfServerVersion`, if present, reports the version of the software
that the validator is running. By surfacing this information,
server operators gain additional insight about the variety of software
on the network.
If merged, this commit fixes#3797 by adding the fields to the
`validations` stream as shown below:
- `sfValidatedHash` as `validated_hash`: a 256-bit hex string;
- `sfCookie` as `cookie`: a 64-bit integer as a string; and
- `sfServerVersion` as `server_version`: a 64-bit integer as
a string.
With this amendment, the CheckCash transaction creates a TrustLine
if needed. The change is modeled after offer crossing. And,
similar to offer crossing, cashing a check allows an account to
exceed its trust line limit.
The following changes were made:
- Removed dependency on template defined in beast detail namespace.
- Removed Section::find() method which had an obsolete interface.
- Made Section::get<>() easier to use for the common case of
retrieving a std::string. The revised get() method replaces old
calls to Section::find().
- Provided a default template parameter to free function
get<>(Section config, std::string name) so it stays similar to
Section::get<>().
Then the rest of the code was adapted to these changes.
- Calls to Section::find() were replaced with calls to Section::get.
- Unnecessary get<std::string>() arguments were reduced to get().
These changes dug up an interesting artifact in the SHAMap unit
tests. I'm not sure why the tests were working before, but there
was a problem with the case of a Section key. The unit test is
fixed.
The `[node_size]` configuration parameter is used to tune various
parameters based on the hardware that the code is running on. The
parameter can take five distinct values: `tiny`, `small`, `medium`,
`large` and `huge`.
The default value in the code is `tiny` but the default configuration
file sets the value to `medium`. This commit attempts to detect the
amount of RAM on the system and adjusts the node size default value
based on the amount of RAM and the number of hardware execution
threads on the system.
The decision matrix currently used is:
| RAM \ threads | 1 | 2 or 3 | ≥ 4 |
|:-------:|:----:|:------:|:------:|
| > ~8GB | tiny | tiny | tiny |
| > ~12GB | tiny | small | small |
| > ~16GB | tiny | small | medium |
| > ~24GB | tiny | small | large |
| > ~32GB | tiny | small | huge |
Some systems exclude memory reserved by the hardware, the kernel
or the underlying hypervisor so the automatic detection code may end
up determining the node_size to be one less than "appropriate" given
the above table.
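A hedged sketch of the decision matrix above (the function name, the assumption that at most 8GB maps to `tiny`, and the way RAM and thread counts are obtained are illustrative, not the actual detection code):
```
// Sketch only: map approximate RAM (GB) and hardware threads to a default
// node_size, following the matrix above.
#include <cstdint>
#include <string>

std::string
defaultNodeSize(std::uint64_t ramGB, unsigned threads)
{
    // One hardware thread, or 8GB or less of RAM: always "tiny"
    // (the "or less" case is an assumption; the matrix starts at > ~8GB).
    if (threads <= 1 || ramGB <= 8)
        return "tiny";

    if (threads <= 3)
        return ramGB > 12 ? "small" : "tiny";

    // Four or more hardware threads:
    if (ramGB > 32)
        return "huge";
    if (ramGB > 24)
        return "large";
    if (ramGB > 16)
        return "medium";
    if (ramGB > 12)
        return "small";
    return "tiny";
}
```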
The detection algorithm is simplistic and does not take into account
other relevant factors. Therefore, for production-quality servers it
is recommended that server operators examine the system holistically
and determine what the appropriate size is instead of relying on the
automatic detection code.
To aid server operators, the node size will now be reported in the
`server_info` API as `node_size` when the command is invoked in
'admin' mode.
A recent version of clang notes a number of places in range-based
for loops where the code base was making unnecessary copies
or using const lvalue references to extend lifetimes. This
fixes the places that clang identified.
* Create SQLite database for mapping transaction IDs to shard indexes
* Create SQLite database for mapping ledger hashes to shard indexes
* Create additional test cases for the shard database
* load_factor was missing from server_info when the server was running in
reporting mode. Now, the reporting mode server calls server_info on the p2p
node, and propagates the load_factor back to the client.
In order to effectively mitigate CVE-2021-3499 even when compiling
against versions of OpenSSL prior to 1.1.1k, this commit:
1) requires use of TLS 1.2 or later. Note that both TLS 1.0 and
TLS 1.1 have been officially deprecated for over a year.
2) disables renegotiation support for TLS 1.2 connections.
Lastly, this commit also changes the default list of ciphers that
the server offers, limiting it only to ciphers that are part of
TLS 1.2.
The `tx` command supports output in both "text" and "binary" modes,
controlled by the binary flag. For more details on the command and
the possible arguments, please see: https://xrpl.org/tx.html.
The existing handler would incorrectly deal with metadata when in
binary mode. This commit corrects this issue, ensuring that the
metadata is properly encoded, depending on the mode.
Typically, an RPC response contains a `result` field, which
contains details about the operation performed. For ease of
parsing, forwarded responses must look like a non-forwarded
response.
In some instances the response was incorrectly composed, so
that the actual `result` object would be encapsulated by an
outer `result` object, breaking existing code.
This commit addresses this issue and correctly "folds" the
`result` field, ensuring a consistent schema for responses.
* Add instructions to the workflow at the point where the failure would
occur, which is where someone experiencing a failure is most likely to
look. To keep things simple, the instructions are always printed. The
assumption is that if the job succeeds, nobody is likely to look
anyway.
* Provides the diff of the failure as an artifact, so the user can apply
it directly to their repo.
* Also update the levelization/README.md to clarify the levels a little
bit.
This updates the build process to use the local Artifactory server as a docker image cache to avoid being rate limited by docker hub during the build process.
While most of the code associated with secp256k1 operations had
been migrated to libsecp256k1, the deterministic key derivation
code was still using calls to OpenSSL.
If merged, this commit replaces the OpenSSL-based routines with
new libsecp256k1-based implementations. No functional change is
expected and the change should be transparent.
This commit also removes several support classes and utility
functions that wrapped or adapted various OpenSSL types that
are no longer needed.
A tip of the hat to the original author of this truly superb
library, Dr. Pieter Wuille, and to all other contributors.
This commit expands the detection capabilities of the Byzantine
validation detector. Prior to this commit, only validators that
were on a server's UNL were monitored. Now, all the validations
that a server receives are passed through the detector.
The existing class offered several constructors which were mostly
unnecessary. This commit eliminates all existing constructors and
introduces a single new one, taking a `Slice`.
The internal buffer is switched from `std::vector` to `Buffer` to
save a minimum of 8 bytes (plus the buffer slack that is inherent
in `std::vector`) per SHAMapItem instance.
Add support to allow multiple independent nodes to produce a binary identical
shard for a given range of ledgers. The advantage is that servers can use
content-addressable storage, and can more efficiently retrieve shards by
downloading from multiple peers at once and then verifying the integrity of
a shard by cross-checking its checksum with the checksum other servers report.
Before this change any non-zero Sequence field was handled as
a non-ticketed transaction, even if a TicketSequence was
present. We learned that this could lead to user confusion.
So the rules are tightened up.
Now if any transaction contains both a non-zero Sequence
field and a TicketSequence field then that transaction
returns a temSEQ_AND_TICKET error code.
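A hedged sketch of the tightened rule (header paths and the helper name are assumptions; the real check lives in the transaction preflight code):
```
// Sketch only: a transaction may carry either a non-zero Sequence or a
// TicketSequence (with Sequence zero), but never both.
#include <ripple/protocol/STTx.h>
#include <ripple/protocol/TER.h>

ripple::NotTEC
checkSeqAndTicket(ripple::STTx const& tx)
{
    bool const hasTicket = tx.isFieldPresent(ripple::sfTicketSequence);
    auto const seq = tx.getFieldU32(ripple::sfSequence);

    if (hasTicket && seq != 0)
        return ripple::temSEQ_AND_TICKET;

    return ripple::tesSUCCESS;
}
```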
The (deprecated) "sign" and "submit" RPC commands are tuned
up so they auto-insert a Sequence field of zero if they see
a TicketSequence in the transaction.
No amendment is needed because this change is going into
the first release that supports the TicketBatch amendment.
* Fix bug where incorrect max amount was set for XRP
* Fix bug where incorrect source currencies were set when XRP was the dst and a
sendmax amount was set
The existing code that deserialized an STAmount was sub-optimal and performed
poorly. In some rare cases the operation could result in otherwise valid
serialized amounts overflowing during deserialization. This commit will help
detect error conditions more quickly and eliminate the problematic corner cases.
* Use theoretical quality to order the strands
* Do not use strands below the user specified quality limit
* Stop exploring strands (at the current quality iteration) once any strand is non-dry
The previous error description was focused on keys that are too long,
but this error can occur if the key is too short or does not contain
the correct prefix.
* Add a new operating mode to rippled called reporting mode
* Add ETL mechanism for a reporting node to extract data from a p2p node
* Add new gRPC methods to facilitate ETL
* Use Postgres in place of SQLite in reporting mode
* Add Cassandra as a nodestore option
* Update logic of RPC handlers when running in reporting mode
* Add ability to forward RPCs to a p2p node
- The changes to manifest relaying introduced with commit f74b469e68
will cause newly accepted manifests to be sent back to the peer from
which they were received. This no longer happens: a newly accepted
manifest is never sent back to the peer we received it from.
- When encountering a manifest without a domain set, the `manifest` and
`validator_info` commands would include an empty string as the domain
associated with the manifest. This no longer happens: if a domain is
not present, the `domain` field will not be included.
The existing code attempts to validate the provided node public key
using a function that assumes that the encoded public key is for an
account. This causes the parsing to fail.
This commit fixes #3317 by letting the caller specify the type of
the public key being checked.
The manifest relay code would only ever relay manifests from validators
on a server's UNL which means that the manifests of validators that are
not broadly trusted can fail to propagate across the network, which can
make it difficult to detect and track such validators.
This commit, if merged, propagates all manifests on a best-effort basis
resulting in broader availability of manifests on the network and avoiding
the need to introduce on-ledger manifest storage or to establish one or
more manifest repositories.
- Add validation/proposal reduce-relay feature negotiation to
the handshake
- Make squelch duration proportional to the number of peers that
can be squelched
- Refactor makeRequest()/makeResponse() to facilitate handshake
unit-testing
- Fix compression enable flag for inbound peer
- Fix compression algorithm parsing in the header parser
- Fix squelch duration in onMessage(TMSquelch)
This commit fixes 3624, fixes 3639 and fixes 3641
Support for 'out-of-sequence' transaction execution was introduced
in commit 7724cca384.
The changes in that commit were gated under a feature but there was
no corresponding amendment introduced that would allow the network
to vote on this amendment.
This commit introduces 'TicketBatch' amendment as the amendment
that is associated with the tickets feature. If the amendment is
enabled, it will activate support for tickets.
This commit also removes several workarounds that are no longer
needed in unit tests.
* Markdown explanation of what levelization is, the intended levels, as
well as the process used to determine dependencies
* Shell script finds all dependencies, groups them, and finds cyclic
dependencies and maps out non-cyclic dependencies.
* Github job to run the script and fail if anything changes. Should
catch introduction of new dependencies and new problems. Will also
detect changes if problems or dependencies are removed.
* Creates a version 2 of the UNL file format allowing publishers to
pre-publish the next UNL while the current one is still valid.
* Version 1 of the UNL file format is still valid and backward
compatible.
* Also causes rippled to lock down if it has no valid UNLs, similar to
being amendment blocked, except reversible.
* Resolves #3548
* Resolves #3470
* Move all the vcpkg windows dependency installations into one step.
* Move the unmodified `before_install` step above the matrix to improve
readability, because this step runs before any of the matrix steps.
Due to some quirky emergent behavior, the server can't really begin
synching until twice the default close time resolution of the genesis
ledger, which is 30 seconds, has passed. In effect, this causes a one
minute delay.
This commit adjusts the default close time resolution down to the
minimum allowed resolution of 10 seconds, so the corresponding delay
is reduced by 67% down to 20 seconds. This should be enough time to
ensure the server has reasonable connectivity without unduly delaying
initial synch times.
This change significantly improves ledger sync and fetch
times while reducing memory consumption. The change affects
the code path that begins with SHAMap::getMissingNodes and runs
through to Database::threadEntry.
The existing code issues a number of async fetches which are then
handed off to the Database's pool of read threads to execute.
The results of each read are placed in the Database's positive
and negative caches. The caller waits for all reads to complete
and then retrieves the results out of these caches.
Among other issues, this means that the results of the first read
cannot be processed until the last read completes. Additionally,
all the results must sit in memory.
This patch changes the behavior so that each read operation has a
completion handler associated with it. The completion of the read
calls the handler, allowing the results of each read to be
processed as it completes. As this was the only reason the
negative and positive caches were needed, they can now be removed.
The read generation code is also no longer needed and is removed.
The batch fetch logic was never implemented or supported and is
removed.
gcc's implementation of `pmr::synchronized_pool_resource` showed
extremely poor performance compared with
`boost::synchronized_pool_resource`. Boost's implementation of pmr is
now used in all cases (previously it was only used when a standard
lib, like clang's, lacked an implementation of pmr).
This patch also makes a minor change where inner nodes are constructed
with sparse arrays, unless "dense" is explicitly requested.
Prior to this commit, the amendments that a server would vote in support
of or against could be configured both via the configuration file and
via the command line "feature" command. Changes made in the configuration
file would only be loaded once at server startup and changes made via the
command line take effect immediately but are not persisted across
restarts.
This commit deprecates management of amendments via the configuration
file and stores the relevant information in the `wallet.db` database
file.
1. On startup, the new code parses the configuration file.
2. If the `[veto_amendments]` or `[amendments]` sections are present,
we check if the `FeatureVotes` table is present in `wallet.db`.
3. If it is not, we create the `FeatureVotes` table and transfer the
settings from the config file.
4. Proceed normally but only reference the `FeatureVotes` table instead
of the config file.
5. Warns if the voting table already exists in `wallet.db` and voting
sections also exist in the config file. The config file is ignored
in this case.
This change addresses and closes #3366
* Found several functions called under lock that take a lock. Refactor
to require a lock as a parameter instead.
* Found several functions called under lock that don't take a lock, but
should. Refactored those as well to require a lock as a parameter.
Unit tests are counting test failures, process crashes, and process exit code
failures in the count. Since a failing test causes the process exit code to
return failure, we get extra counts. This patch removes process exit code
failures from the count.
ReadViewFwdRange was storing a cached `end_` iterator that was lazily
created in an iterators `end()` function. When the cache is empty, and
the range it iterated from multiple threads, this creates a race
condition.
This change has performance consequences for "old style" for loops.
For example:
```
// don't do this
for(auto i = tx_range.begin(); i != tx_range.end(); ++i)
```
Can call the now expensive `end()` function more often than needed.
Range-based for loops (i.e. `for(auto const& t : tx_range)`) should be
used instead.
- Under some conditions, comparing `ReadViewFwdRange` iterators
for equality could dereference an empty `std::unique_ptr`, which
will result in a crash.
- Misuse of the `equal` API could result in a `std::bad_cast`
exception being thrown when iterating transactions or
SLEs from the `OpenView`, `RawStateTable` and `Ledger` classes.
A large percentage of inner nodes only store a small number of children. Memory
can be saved by storing the inner node's children in sparse arrays. Measurements
show that on average a typical SHAMap's inner nodes can be stored using only 25%
of the original space.
This commit combines a number of cleanups, targeting both the
code structure and the code logic. Large changes include:
- Using more strongly-typed classes for SHAMap nodes, instead of relying
on run-time detection of class types. This change saves 16 bytes
of memory per node.
- Improving the interface of SHAMap::addGiveItem and SHAMap::addItem to
avoid the need for passing two bool arguments.
- Documenting the "copy-on-write" semantics that SHAMap uses to
efficiently track changes in individual nodes.
- Removing unused code and simplifying several APIs.
- Improving function naming.
- Simplify and consolidate code for parsing hex input.
- Replace beast::endian::order with boost::endian::order.
- Simplify CountedObject code.
- Remove pre-C++17 workarounds in favor of C++17 based solutions.
- Improve `base_uint` and simplify its hex-parsing interface by
consolidating the `SetHex` and `SetHexExact` methods into one
API: `parseHex` which forces callers to verify the result of
the operation; as a result some public-facing API endpoints
may now return errors when passed values that were previously
accepted.
- Remove the simple fallback implementations of SHA2 and RIPEMD
introduced to reduce our dependency on OpenSSL. The code is
slow and rarely, if ever, exercised and we rely on OpenSSL
functionality for Boost.ASIO as well.
- Provide separate functions for serializing depending on whether
one wants a "wire" version of a node, or one suitable for hashing.
- Remove unused functions
The existing SHAMapNodeID object has both a valid and an invalid state
and requires callers to verify the state of an instance prior to using
it. A simple set of changes removes that restriction and ensures that
all instances are valid, making the code more robust.
This change also:
1. Introduces a new function to construct a SHAMapNodeID from a
serialized blob; and
2. Reduces the number of constructors the class exposes.
- Limit the lifetime of a buffer that was only used in the early
phases of peer connection establishment but which lived on as
long as the peer was active.
- Cache the message used to transfer manifests, so it can be reused
instead of recreated for every peer connection.
- Improve the reading of partial messages by passing a hint to the
I/O layer if the number of bytes needed to complete the message
is known.
The existing code issues a PING to each peer every 8 seconds. While
frequent PINGs allow us to estimate a peer's latency with a high
degree of accuracy, this "inter-server polka dance" is inefficient
and not useful. This commit, if merged, reduces the PING frequency
to once every 60 seconds.
Additionally, this commit simplifies the PING handling logic and
merges the code used to check and disconnect peers which fail to
track the network directly into the timer callback.
When evaluating the fitness and usefulness of an outbound peer, the code
would incorrectly calculate the amount of time that the peer spent in
a non-useful state.
This commit, if merged, corrects the calculation and makes the timeout
values configurable by server operators.
Two new options are introduced in the 'overlay' stanza of the config
file. The default values, in seconds, are:
[overlay]
max_unknown_time = 600
max_diverged_time = 300
This commit replaces the `peers_max` configuration element which had
a predetermined split between incoming and outgoing connections with
two new configuration options, `peers_in_max` and `peers_out_max`,
which server operators can use to explicitly control the number of
incoming and outgoing peer slots.
There have been cases in the past where SFields have been defined
in such a way that they did not follow our conventions. In
particular, the string representation of an SField should match
the in-code name of the SField.
This change leverages the preprocessor to encourage SFields to
be properly constructed.
The suffixes of SField types are changed to be the same as
the suffixes of the corresponding SerializedTypeIDs. This allows
the preprocessor to match types using simple name pasting.
Since the string representation of the SField is part of our
stable API, the name of sfPayChannel was changed to sfChannel.
This change allows sfChannel to follow our conventions while
making no changes to our external API.
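A minimal, self-contained sketch of the name-pasting idea mentioned above (the real SField constructor and registration macro differ):
```
#include <iostream>
#include <string>

// Minimal sketch of the name-pasting idea; the real SField constructor and
// macro are different.
struct SFieldSketch
{
    std::string typeSuffix;
    std::string fieldName;  // doubles as the stable string representation
};

// Token pasting keeps the in-code identifier (sfSequence), the string
// representation ("Sequence"), and the type suffix (UINT32) in lockstep.
#define DECLARE_SFIELD(type, name) \
    SFieldSketch const sf##name{#type, #name}

DECLARE_SFIELD(UINT32, Sequence);

int main()
{
    std::cout << sfSequence.fieldName << '\n';  // prints "Sequence"
}
```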
* Remove DinD Service from container build template
DinD has changed how it works on GitLab due to recent Docker changes, such that the service no longer needs to be called as long as the runner is run on a `docker-X` tagged machine.
* refactor for docker service on normal node
* If multiple transactions are queued for the account, change the
account's sequence number in a temporary view before processing the
transaction.
* Adds a new "at()" interface to STObject which is identical to the
operator[], but easier to write and read when dealing with ptrs.
* Split the TxQ tests into two suites to speed up parallel run times.
This commit introduces a new configuration option that server
operators can set. The value is communicated to other servers
and is also reported via the `server_info` API.
The value is meant to allow third-party applications or tools
to group servers together; for example, a tool that visualizes
the network's topology could use it to cluster related servers.
Similar to the "Domain" field in validator manifests, an operator
can claim any domain. Prior to relying on the value returned, the
domain should be verified by retrieving the xrp-ledger.toml file
from the domain and looking for the server's public key in the
`nodes` array.
* Increases hard-coded number of parallel unit test processes for
Windows and MacOS builds from 1 to 2.
* Reduces Travis job time to well under the timeout value of 1.5 hours.
* Continue using a hard-coded value rather than `nprocs` because higher
values cause some jobs to run out of memory.
* Jobs with no unit tests are counted as failures. Resolves #3474
* Crashed processes are counted as failures. Resolves #3600
* Any tests specified on the command line that do not have matching
suites are counted as failures.
* Remove unused CI manual test.
When processing the `tx` command, we will now load both the transaction
and its metadata directly from SQLite.
Previously the `tx` RPC call was querying SQLite for the transaction
and then separately querying the key-value store for the metadata.
Support for IPv6 messages was added with commit 08382d866b
and version 1.1.0. No peer presently connected to the network in a useful capacity fails
to understand v2 messages.
This commit removes the code that generates and processes v1 messages and deletes legacy
messages from the protocol buffer definition file.
Use C++17 constant expressions to calculate the inverse
alphabet map at compile time instead of at runtime.
Remove support for encoding & decoding tokens using the
Bitcoin alphabet.
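A self-contained sketch of the compile-time approach (C++17; the table layout in the real decoder differs, though the alphabet shown is the standard XRPL base58 dictionary):
```
#include <array>
#include <cstdint>

// The standard XRPL base58 dictionary (shown for illustration).
constexpr char alphabet[] =
    "rpshnaf39wBUDNEGHJKLM4PQRST7VWXYZ2bcdeCg65jkm8oFqi1tuvAxyz";

// Build the inverse map once, at compile time, instead of at program start.
constexpr std::array<int, 256> makeInverse()
{
    std::array<int, 256> map{};
    for (auto& e : map)
        e = -1;  // -1 marks characters that are not in the alphabet
    for (int i = 0; alphabet[i] != '\0'; ++i)
        map[static_cast<unsigned char>(alphabet[i])] = i;
    return map;
}

constexpr auto inverse = makeInverse();

static_assert(inverse[static_cast<unsigned char>('r')] == 0);
static_assert(inverse[static_cast<unsigned char>('0')] == -1);
```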
The job queue can impose limits on how many jobs of a particular
type can be queued.
This commit makes the previously hard-coded limit associated with
transactions configurable by the server's operator. Servers that
have increased memory capacity or which expect to see an influx
of transactions can increase the number of transactions their
server will be able to queue.
This commit fixes #3556.
The "/vl" HTTP endpoint can be used to request a particular
UNL from a rippled instance.
This commit, if merged, includes the public key of the requested
list in the response.
This commit fixes #3392
* Distinguish between recent and historical shards
* Allow multiple storage paths for historical shards
* Add documentation for this feature
* Add unit tests
Some RPC commands return `ledger_index` as a quoted numeric
string. This change allows the returned value to be directly
copied and used for follow-on RPC commands.
This commit fixes #3533
When attempting to load a validator list from a configured
site, attempt to reuse the last IP that was successfully
used if that IP is still present in the DNS response.
Otherwise, randomly select an IP address from the list of
IPs provided by the DNS system.
This commit fixes #3494.
With few exceptions, servers will typically receive multiple copies
of any given message from their directly connected peers. For servers
with several peers this can impact processing latency and force them
to do redundant work. Proposal and validation messages are often
relayed with extremely high redundancy.
This commit, if merged, introduces experimental code that attempts
to optimize the relaying of proposals and validations by allowing
servers to instruct their peers to "squelch" delivery of selected
proposals and validations. Servers make squelching decisions using
a process that evaluates the fitness and performance of a given
server and randomly selects a subset of the best candidates.
The experimental code is presently disabled and must be explicitly
enabled by server operators that wish to test it.
Tickets are a mechanism to allow for the "out-of-order" execution of
transactions on the XRP Ledger.
This commit, if merged, reworks the existing support for tickets and
introduces support for 'ticket batching', completing the feature set
needed for tickets.
The code is gated under the newly-introduced `TicketBatch` amendment
and the `Tickets` amendment, which is not presently active on the
network, is being removed.
The specification for this change can be found at:
https://github.com/xrp-community/standards-drafts/issues/16
Commit 4dc08f8202 introduced support for
deterministic shards, which makes the sharding functionality provided
by rippled more useful.
After merging, several opportunities for further improvements to the
deterministic sharding implementation were identified and a significant
increase in memory usage during shard finalization was detected.
Because of these issues, the commit is being reverted and the feature is
being rolled back. It will be reintroduced in a future release.
* Builds Windows dependencies first.
* Builds ALL OSs in the last stage.
* Fix the MacOS builds.
* Windows dependency stages are allowed to fail so ALL configurations will
attempt to build. Windows builds will probably fail if dependencies fail
(caching may allow them to succeed), but they will at least be attempted.
* Remove broken AppVeyor config file, so it stops trying.
The checkpointer class had assumed that the database would exist for the
lifetime of the application. This is no longer true. These changes resolve
bugs involving dangling pointers.
There was a race condition in `on_accept` where the object's destructor
could run while `on_accept` was called.
This patch ensures that if `on_accept` is called then the object remains
valid for the duration of the call.
* Fixes #3486
* load factor computation normalized by load_base.
* last validated ledger age set to -1 while syncing.
* Return status changed:
* healthy -> ok
* warning -> service_unavailable
* critical -> internal_server_error
This change can help improve the liveness of the network during periods of network
instability, by allowing the network to track which validators are presently not online
and to disregard them for the purposes of quorum calculations.
If the 'HardenedValidations' amendment is enabled, this commit will
track the version of the software that validators embed in their
validations.
If a server notices that at least 60% of the validators on its UNL
are running a newer version than it is running, it will periodically
print an informational message, reminding the operator to check for
updates.
The tecUNFUNDED code is actively used when attempting to create payment
channels; the messages incorrectly list it as deprecated.
Meanwhile, the tecUNFUNDED_ADD code actually is an unused legacy code,
dating back to when there was a WalletAdd transactor. The terLAST and
terFUNDS_SPENT codes are also unused legacy codes.
Engine result messages are not part of the binary format and are
documented as subject to change without notice, so this should not
require an amendment nor a new API version.
Align error code table for human readability.
The amendment was partially complete, included no functional code
and, even if activated, would result in no changes to transaction
processing. Despite this, removing the amendment is the prudent course
of action and avoids the possibility of an accidental activation.
If additional cryptoconditions are implemented, they will each be
assigned a new, unique amendment code.
This commit, if merged, adds support to allow multiple independent nodes to
produce a binary identical shard for a given range of ledgers. The advantage
is that servers can use content-addressable storage, and can more efficiently
retrieve shards by downloading from multiple peers at once and then verifying
the integrity of a shard by cross-checking its checksum with the checksum
other servers report.
* Document delete_batch, back_off_milliseconds, age_threshold_seconds.
* Convert those time values to chrono types.
* Fix bug that ignored age_threshold_seconds.
* Add a "recovery buffer" to the config that gives the node a chance to
recover before aborting online delete.
* Add begin/end log messages around the SQL queries.
* Add a new configuration section: [sqlite] to allow tuning the sqlite
database operations. Ignored on full/large history servers.
* Update documentation of [node_db] and [sqlite] in the
rippled-example.cfg file.
Resolves #3321
* The amendment ballot counting code contained a minor technical
flaw, caused by the use of integer arithmetic and rounding
semantics, that could allow amendments to reach majority with
slightly less than 80% support. This commit introduces an
amendment which, if enabled, will ensure that activation
requires at least 80% support.
* This commit also introduces a configuration option to adjust
the amendment activation hysteresis. This option is useful on
test networks, but should not be used on the main network, as it
is a network-wide consensus parameter that should not be
changed on a per-server basis; doing so can result in a
hard fork.
Fixes #3396
Work on a version 2 of the XRP Network API has begun. The new
API returns:
* `notSynced` in place of `noClosed`, `noCurrent`, and `noNetwork`;
* `invalidParams` in place of `lgrIdxInvalid`.
The new version 2 API cannot be selected yet, as it remains a work
in progress.
Fixes #3269
If a port number is not specified in the [ips] or [ips_fixed]
blocks, automatically add the new default peer port which was
registered with IANA: 2459. Also use 2459 if no port is specified
when manually using the `connect` command; previously it was
using 6561, which could have resulted in spurious failures.
This commit, if merged, fixes #2861.
* Gives a summary of the health of the node:
Healthy, Warning, or Critical
* Last validated ledger age:
<7s is Healthy,
7s to 20s is Warning
> 20s is Critical
* If amendment blocked, Critical
* Number of peers:
> 7 is Healthy
1 to 7 is Warning
0 is Critical
* server state:
One of full, validating or proposing is Healthy
One of syncing, tracking or connected is Warning
All other states are Critical
* load factor:
<= 100 is Healthy
101 to 999 is Warning
>= 1000 is Critical
* If not Healthy, info field contains data that is considered not
Healthy.
Fixes: #2809
Commit e257a22 introduced changes in the logic used to acquire historical
ledgers. The logic could cause historical ledgers to be acquired only back
to the last online deletion interval instead of back to the configured
value that allows deletion.
* Make sure variables are always initialized
* Use lround instead of adding .5 and casting
* Remove some unneeded vars
* Check for null before calling strcmp
* Remove redundant if conditions
* Remove make_TxQ factory function
* Improve documentation
* Make the ShardArchiveHandler rather than the DatabaseShardImp perform
LastLedgerHash verification for downloaded shards
* Remove ShardArchiveHandler's singleton implementation and make it an
Application member
* Have the Application invoke ShardArchiveHandler initialization
instead of clients
* Add RecoveryHandler as a ShardArchiveHandler derived class
* Improve commenting
* Add documentation for shard validation
* Retrieve last ledger hash for imported shards
* Verify the last ledger hash in Shard::finalize
* Limit last ledger hash retrieval attempts for imported shards
* Use a common function for removing failed shards
* Add new ShardInfo::State for imported shards
Identifiers for retired amendments should not generally be used
in the codebase.
This commit reduces their visibility down to one translation
unit and marks them as unused and deprecated to prevent
accidental reuse.
In deciding whether to relay a proposal or validation, a server would
consider whether it was issued by a validator on that server's UNL.
While both trusted proposals and validations were always relayed,
the code prioritized relaying of untrusted proposals over untrusted
validations. While not technically incorrect, validations are
generally more "valuable" because they are required during the
consensus process, whereas proposals are not, strictly, required.
The commit introduces two new configuration options, allowing server
operators to fine-tune the relaying behavior:
The `[relay_proposals]` option controls the relaying behavior for
proposals received by this server. It has two settings, "trusted"
and "all", and the default is "trusted".
The `[relay_validations]` option controls the relaying behavior for
validations received by this server. It has two settings, "trusted"
and "all", and the default is "all".
This change does not require an amendment as it does not affect
transaction processing.
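For illustration, an operator who wants to relay all proposals in addition to all validations could add the following to the configuration file, using only the settings documented above:
[relay_proposals]
all
[relay_validations]
all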
The sfLedgerSequence field is designated as optional in the object
template but it is effectively required and validations which do not
include it were, correctly, rejected.
This commit migrates the check outside of the peer code and into the
constructor used for validations being deserialized from the network.
Furthermore, the code will generate an error if a validation that is
generated by a server does not include the field.
The existing code used std::deque along with a size check to constrain the
size of a buffer and, effectively, "hand rolled" a circular buffer. This
change simply migrates directly to boost::circular_buffer.
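A minimal sketch of the pattern, assuming Boost is available; the container itself enforces the size bound, so no manual check is needed:
```
#include <boost/circular_buffer.hpp>

// Sketch of the pattern: the container enforces the size bound itself.
boost::circular_buffer<int> recent(8);  // keeps at most the last 8 entries

void remember(int value)
{
    recent.push_back(value);  // silently evicts the oldest entry when full
}
```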
The unit test now verifies that if an account is not present in the
starting account_tx ledger, account_tx still iterates down and finds
the transaction that deletes the account (and earlier transactions).
This commit introduces no functional changes but cleans up the
code and shrinks the surface area by removing dead and unused
code, leveraging std:: alternatives to hand-rolled code and
improving comments and documentation.
The script, when invoked by a server operator, can collect information
useful for debugging, while attempting to redact potentially sensitive
data.
It contained no explanation or other exposition to allow people who
look at the file but aren't familiar with shell scripts to understand
its purpose.
The built-in watchdog is simplistic and can, sometimes, cause problems
especially on systems that have the ability to automatically start and
monitor processes.
This commit removes the sustain system entirely, changes the handling
of the SIGTERM signal to properly terminate the process, and improves
the error message reported to the user when the command line used to
start `rippled` is incorrect or malformed.
Entries in the ledger are located using 256-bit locators. The locators
are calculated using a wide range of parameters specific to the entry
whose locator we are calculating (e.g. an account's locator is derived
from the account's address, whereas the locator for an offer is derived
from the account and the offer sequence.)
Keylets enhance type safety during lookup and make the code more robust,
so this commit removes most of the earlier code, which used naked
uint256 values.
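A rough sketch of the lookup pattern, assuming the in-tree `keylet` helpers and `ReadView::read` (exact headers and signatures may differ):
```
#include <ripple/ledger/ReadView.h>
#include <ripple/protocol/Indexes.h>

namespace ripple {

// Look up entries through typed Keylets rather than naked uint256 indexes;
// the Keylet carries the expected ledger entry type along with the key.
void
inspect(ReadView const& view, AccountID const& id, std::uint32_t offerSeq)
{
    auto const account = view.read(keylet::account(id));
    auto const offer = view.read(keylet::offer(id, offerSeq));
    (void)account;
    (void)offer;
}

}  // namespace ripple
```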
This commit removes obsolete comments, dead or no longer useful
code, and workarounds for several issues that were present in older
compilers that we no longer support.
Specifically:
- It improves the transaction metadata handling class, simplifying
its use and making it less error-prone.
- It reduces the footprint of the Serializer class by consolidating
code and leveraging templates.
- It cleans up the ST* class hierarchy, removing dead code, improving
and consolidating code to reduce complexity and code duplication.
- It shores up the handling of currency codes and the conversion
between 160-bit currency codes and their string representation.
- It migrates beast::secure_erase to the ripple namespace and uses
a call to OpenSSL_cleanse instead of the custom implementation.
A deliberately malformed token can cause the server to crash during
startup. This is not remotely exploitable and would require someone
with access to the configuration file of the server to make changes
and then restart the server.
Acknowledgements:
Guido Vranken for responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find.
Ripple is generously sponsoring a bug bounty program for the
rippled project. For more information please visit:
https://ripple.com/bug-bounty
Currently there is no mechanism for a validator to report the
version of the software it is currently running. Such reports
can be useful for those who are developing network monitoring
dashboards and server operators in general.
This commit, if merged, defines an encoding scheme to encode
a version string into a 64-bit unsigned integer and adds an
additional optional field to validations.
This commit piggybacks on "HardenedValidations" amendment to
determine whether version information should be propagated
or not.
The general encoding scheme is:
XXXXXXXX-XXXXXXXX-YYYYYYYY-YYYYYYYY-YYYYYYYY-YYYYYYYY-YYYYYYYY-YYYYYYYY
X: 16 bits identifying the particular implementation
Y: 48 bits of data specific to the implementation
The rippled-specific format (implementation ID is: 0x18 0x3B) is:
00011000-00111011-MMMMMMMM-mmmmmmmm-pppppppp-TTNNNNNN-00000000-00000000
M: 8-bit major version (0-255)
m: 8-bit minor version (0-255)
p: 8-bit patch version (0-255)
T: 11 if neither an RC nor a beta
10 if an RC
01 if a beta
N: 6-bit rc/beta number (1-63)
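A self-contained sketch of the packing described above (illustrative only; the actual encoder in BuildInfo.cpp may be structured differently):
```
#include <cstdint>

// Illustrative packing of the layout described above.
constexpr std::uint64_t
encodeSoftwareVersion(
    std::uint8_t major,
    std::uint8_t minor,
    std::uint8_t patch,
    std::uint8_t relType,  // 0b11 release, 0b10 RC, 0b01 beta
    std::uint8_t relNum)   // 1-63 for RC/beta, 0 otherwise
{
    constexpr std::uint64_t implementationId = 0x183B;  // 0x18 0x3B

    return (implementationId << 48) |
        (static_cast<std::uint64_t>(major) << 40) |
        (static_cast<std::uint64_t>(minor) << 32) |
        (static_cast<std::uint64_t>(patch) << 24) |
        (static_cast<std::uint64_t>(((relType & 0x3u) << 6) | (relNum & 0x3Fu))
            << 16);
}

// 1.6.0 full release: implementation bits, then 01-06-00, then 0b11000000.
static_assert(
    encodeSoftwareVersion(1, 6, 0, 0b11, 0) == 0x183B'0106'00C0'0000ULL);
```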
This commit introduces the "HardenedValidations" amendment which,
if enabled, allows validators to include additional information in
their validations that can increase the robustness of consensus.
Specifically, the commit introduces a new optional field that can
be set in validation messages and can be used to attest to the hash of
the latest ledger that a validator considers to be fully validated.
Additionally, the commit leverages the previously introduced "cookie"
field to improve the robustness of the network by making it possible
for servers to automatically detect accidental misconfiguration which
results in two or more validators using the same validation key.
- Add missing `#include` in `ripple/core/JobTypeInfo.h`
- Protect version string from clang-format in
`ripple/protocol/impl/BuildInfo.cpp`.
`Builds/CMake/RippledVersion.cmake` searches for this line by pattern.
Existing per-thread PRNGs are individually initialized using calls
to std::random_device.
If merged, this commit will use a single PRNG, initialized from
std::random_device on startup, to seed the thread-specific PRNGs.
Acknowledgements:
Thomas Snider, who reported this issue to Ripple on April 8, 2020.
Historically, strand re-execute log messages have been treated as
errors. However, in the vast majority of cases these log messages
are caused by well-understood mechanics in the payment engine,
so they should usually be treated as warnings.
The automated build system only builds packages signed with a list of
approved keys. This is a security measure to prevent someone who gains
push access to the repository from producing potentially malicious
packages that are signed by Ripple's trusted private keys.
Moving this list to the new location makes it easy to add keys to,
and delete keys from, the list.
* scoped_lock is now a std name with subtly different semantics
compared to lock_guard. Namely it can be used to lock 0 or
more mutexes. This is valuable, but can also be accidentally
used to lock 0 mutexes when 1 was intended, creating a
run-time error.
Therefore, if and when we use scoped_lock, extra care needs to
be taken in reviewing that code to ensure it doesn't
accidentally lock 0 mutexes when 1 was intended. To aid in
such careful reviewing, the use of the name scoped_lock should
be limited to those cases where the number of mutexes is not
exactly one.
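A minimal standard-C++ illustration of the pitfall described above (not code from this change):
```
#include <mutex>

std::mutex m1, m2;

void intended()
{
    // Locks both mutexes with deadlock avoidance -- the legitimate use case.
    std::scoped_lock lock(m1, m2);
    // ... critical section ...
}

void pitfall()
{
    // Compiles cleanly but locks *zero* mutexes -- the accidental misuse
    // described above. Prefer std::lock_guard when exactly one mutex is
    // intended.
    std::scoped_lock lock;
}
```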
* canonicalize_replace_cache
* canonicalize_replace_client
Now it is clear at the call site which copy gets replaced
if there are duplicate copies of the data between the cache
and the caller.
Additionally, the data parameter is now const-correct:
if it is not going to be replaced (canonicalize_replace_cache),
then the shared_ptr to the client data is const.
* The [network_id] option allows three string values:
- main: the XRP Ledger
- testnet: the Testnet operated by Ripple.
- devnet: the development test network operated by Ripple.
* Peers negotiate compression via HTTP Header "X-Offer-Compression: lz4"
* Messages greater than 70 bytes and protocol type messages MANIFESTS,
ENDPOINTS, TRANSACTION, GET_LEDGER, LEDGER_DATA, GET_OBJECT,
and VALIDATORLIST are compressed
* If the compressed message is larger than the uncompressed message
then the uncompressed message is sent
* Compression flag and the compression algorithm type are included
in the message header
* Only LZ4 block compression is currently supported
The payment engine restricts payment paths so two steps do not input the
same Currency/Issuer or output the same Currency/Issuer. This check was
skipped when the path started or ended with XRP. An example of a path
that was incorrectly accepted was: XRP -> //USD -> //XRP -> EUR
This patch enables the path loop check for paths that start or end with
XRP.
* Make ShardArchiveHandler a singleton.
* Add state database for ShardArchiveHandler.
* Use temporary database for SSLHTTPDownloader downloads.
* Make ShardArchiveHandler a Stoppable class.
* Automatically resume interrupted downloads at server start.
* Reduce lock scope on all public functions
* Use TaskQueue to process shard finalization in separate thread
* Store shard last ledger hash and other info in backend
* Use temp SQLite DB versus control file when acquiring
* Remove boost serialization from cmake files
When computing rates for offers, an STAmount's value can be out of range
(before canonicalizing). There was an assert that could incorrectly fire
in some cases. This patch removes that assert.
The newest MSVC 19.25.28610.4 does not build rocksdb. During the
Travis CI Windows job, the vs_BuildTools.exe automatically
downloads the newest version of the compiler. This fix forces the
install of MSVC 19.24.28314.0 to build rocksdb.
The fix1781 amendment was incorrectly introduced during conflict
resolution and support for it is not included at this time. This commit
removes the definition of the amendment identifier.
A review of the lag ratchet code revealed that we were using
the long-term master public keys of trusted validators, when
we should have been using the ephemeral public keys instead.
As a result, the lag ratchet code would be effectively
inoperable.
- Add support for all transaction types and ledger object types to gRPC
implementation of tx and account_tx.
- Create common handlers for tx and account_tx.
- Remove mutex and abort() from gRPC server. JobQueue is stopped before
gRPC server, with all coroutines executed to completion, so no need for
synchronization.
* Whenever a node downloads a new VL, send it to all peers that
haven't already sent or received it. It also saves it to the
database_dir as a Json text file named "cache." plus the public key of
the list signer. Any files that exist for public keys provided in
[validator_list_keys] will be loaded and processed if any download
from [validator_list_sites] fails or no [validator_list_sites] are
configured.
* Whenever a node receives a broadcast VL message, it treats it as if
it had downloaded it on its own, broadcasting to other peers as
described above.
* Because nodes normally download the VL once every 5 minutes, a single
node downloading a VL with an updated sequence number could
potentially propagate across a large part of a well-connected network
before any other nodes attempt to download, decreasing the amount of
time that different parts of the network are using different VLs.
* Send all of our current valid VLs to new peers on connection.
This is probably the "noisiest" part of this change, but will give
poorly connected or poorly networked nodes the best chance of syncing
quickly. Nodes which have no http(s) access configured or available
can get a VL with no extra effort.
* Requests on the peer port to the /vl/<pubkey> endpoint will return
that VL in the same JSON format as is used to download now, IF the
node trusts and has a valid instance of that VL.
* Upgrade protocol version to 2.1. VLs will only be sent to 2.1 and
higher nodes.
* Resolves #2953
* Example: gcc.Debug will use the default version of gcc installed on the
system. gcc-9.Debug will use version 9, regardless of the default. This will
be most useful when the default is older than required or desired.
* When an unknown amendment reaches majority, log an error-level
message, and return a `warnings` array on all successful
admin-level RPC calls to `server_info` and `server_state` with
a message describing the problem, and the expected deadline.
* In addition to the `amendment_blocked` flag returned by
`server_info` and `server_state`, return a warning with a more
verbose description when the server is amendment blocked.
* Check on every flag ledger to see whether the amendment(s) have lost
majority. Log again if they have not; resume normal operations if they have.
The intention is to give operators earlier warning that their
instances are in danger of being amendment blocked, which will
hopefully motivate them to update ahead of time.
* update EP and find package requirements
* minor protobuf/libarchive build fixes
* change travis release builds to nounity to
ameliorate vm memory exhaustion.
FIXES: #3223, #3232
* In and Out parameters were swapped when calculating the rate
* In and out qualities were not calculated correctly; use existing functions
to get the qualities
* Added tests to check that theoretical quality matches actual computed quality
* Remove in/out parameter from qualityUpperBound
* Rename an overload of qualityUpperBound to adjustQualityWithFees
* Add fix amendment
STAmount::soTime and soTime2 were time based "amendment like"
switches to control small changes in behavior for STAmount.
soTime2, which was the most recent, was dated Feb 27, 2016.
That's over 3 years ago.
The main reason to retain these soTimes would be to replay
old transactions. The likelihood of needing to replay a
transaction from over three years ago is pretty low. So it
makes sense to remove these soTime values.
In Flow_test, the testZeroOutputStep() test is removed. That
test started to fail when the STAmount::soTimes were removed.
I checked with the original author of the test, who said
that the code being tested by that unit test has been removed,
so it makes sense to remove the test as well.
* use tagged containers for pkg build
* update build images
* continue to build container images in pipeline, but allow
failure (non-block)
* limit travis macos cache
* add vs2019 windows to travis
* remove xcode 9 travis build
* remove clang5/6 from CI and update min version of Clang required in
cmake
* break windows CI build into stages to reduce timeouts
* update datelib
* add if condition to travis builds to allow commit message to limit
builds by platform
FIXES: #2847
* Transactions that are submitted with the fail_hard flag
and that result in any TER code besides tesSUCCESS shall
be neither queued nor held.
[FOLD] Keep tec results out of the open ledger when fail_hard:
* Improve TransactionStatus const correctness, and remove redundant
`local` check
* Check open ledger tx count in fail_hard tests
* Fix some wrapping
* Remove duplicate test
Remove the implicit conversion from int64 to XRPAmount. The motivation for this
was noticing that many calls to `to_string` with an integer parameter type were
calling the wrong `to_string` function. Since the calls were not prefixed with
`std::`, and there is no ADL to call `std::to_string`, this was converting the
int to an `XRPAmount` and calling `to_string(XRPAmount)`.
Since `to_string(XRPAmount)` did the same thing as `to_string(int)` this error
went undetected.
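A self-contained sketch of the overload-resolution trap described above, with simplified stand-in types rather than the real XRPAmount:
```
#include <cstdint>
#include <string>

// Simplified stand-ins (not the real classes) for the pitfall described above.
struct XRPAmountSketch
{
    std::int64_t drops;
    XRPAmountSketch(std::int64_t d) : drops(d) {}  // implicit conversion: the culprit
};

std::string to_string(XRPAmountSketch const& a)
{
    return std::to_string(a.drops);
}

std::string example(int count)
{
    // The author intended std::to_string(count). Without the std:: prefix
    // (and with no ADL for a built-in int), the int converts implicitly to
    // XRPAmountSketch and to_string(XRPAmountSketch) is called instead.
    return to_string(count);
}
```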
Prior to this commit, the queue and execution times for individual jobs
were reported independently and could, potentially, be out of sync. This
change reports both values when either one of them exceeds the reporting
threshold.
It's possible an overloaded job queue is causing false alarms on the deadlock
detector. Log a fatal message after 90s, declare a logic error after 600s.
If merged, this commit will report additional information in the
response to the submit command; this will make it easier for developers
to accurately track the status of transaction submission.
Fixes #2851
Treat all `#` characters in config files as comments (and remove them)
*unless* the `#` is immediately preceded by `\`. Write a warning
to the log file when trailing comments are found and ignored in the config
to let operators know that the treatment of trailing `#` has changed.
Fixes #3121
The 'network_id' option allows an administrator to specify to which
network they intend a server to connect. Servers can leverage this
information to optimize routing and prune automatically discovered
cross-network connections.
This commit will, if merged:
- add support for the devnet keyword, which corresponds to network ID #2;
- report the network ID, if one is configured, in server_info
The `node_size` configuration option is used to automatically
configure various parameters (cache sizes, timeouts, etc) for
the server.
A previous commit included changes that caused incorrect values
to be returned which can result in sub-optimal performance that
can manifest as difficulty syncing to the network, or increased
disk I/O and/or memory usage. The problem was introduced with
commit 66fad62e66.
This commit, if merged, fixes the code to ensure that the correct
values are returned and introduces a compile-time check to prevent
this issue from reoccurring.
The existing platform detection code was derived from the old Beast
library, which was, itself, derived from JUCE.
This commit removes that code and replaces it with the Boost.Predef
library which defines a consistent set of compiler, architecture,
operating system, library, and other version numbers.
For more on Boost.Predef, please see the Boost documentation. The
documentation for the current version as of this writing is at:
https://www.boost.org/doc/libs/1_71_0/doc/html/predef.html
This commit restructures the HTTP based protocol negotiation that `rippled`
executes and introduces support for negotiation of compression for peer
links which, if implemented, should result in significant bandwidth savings
for some server roles.
This commit also introduces the new `[network_id]` configuration option
that administrators can use to specify which network the server is part of
and intends to join. This makes it possible for servers from different
networks to drop the link early.
The changeset also improves the log messages generated when negotiation
of a peer link upgrade fails. In the past, no useful information would
be logged, making it more difficult for admins to troubleshoot errors.
This commit also fixes RIPD-237 and RIPD-451
* The `tx` command now supports min_ledger and max_ledger fields.
* If the requested transaction isn't found and these fields are
provided, the error response indicates whether or not every
ledger in the provided range was searched.
This fixes #2924
* adding package signing steps for rpm and deb
* first spike at GPG signing with CI and containers
* refine ubuntu portion
* get correct gpg package version
* adding CentOS support
* fixing errors in installing gpg on ubuntu
* base64 decode the GPG key
* fixing line continuations
* revised package signing, looking for package artifacts
* add dpkg-sig to ubuntu image
* sign all deb packages
* add passphrase to GPG process
* repeat the signing step for dpkg
* sign all the rpm packages too
* install rpm-sign in the CentOS docker image
* loop through rpm files
* no need for PIN on GPG signing
Collecting the returned and expected values in sets only works if there are no
duplicates. The implementation is changed to use sorted vectors to fix this case.
When the Env::AppBundle constructor throws an exception
it still needs to run ~AppBundle(), otherwise the JobQueue
isn't properly shut down. Specifically the JobQueue
can destruct without waiting on outstanding jobs in the
queue.
This change ensures that if Env::AppBundle constructor
throws, Env::AppBundle::~AppBundle() runs.
This fixes the unit test crash exposed by PR #3047.
FIXES: #3106
Different versions of protobuf produce subtly different
results when given invalid message payloads. This leads to
subtly different behavior when we try to deserialize these
invalid messages. As such, we can't tie success to a
particular exception.
The XRP Ledger utilizes an account model. Unlike systems based on a UTXO
model, XRP Ledger accounts are first-class objects. This design choice
allows the XRP Ledger to offer rich functionality, including the ability
to own objects (offers, escrows, checks, signer lists) as well as other
advanced features, such as key rotation and configurable multi-signing
without needing to change a destination address.
The trade-off is that accounts must be stored on ledger. The XRP Ledger
applies reserve requirements, in XRP, to protect the shared global ledger
from growing excessively large as the result of spam or malicious usage.
Prior to this commit, accounts had been permanent objects; once created,
they could never be deleted.
This commit introduces a new amendment "DeletableAccounts" which, if
enabled, will allow account objects to be deleted by executing the new
"AccountDelete" transaction. Any funds remaining in the account will
be transferred to an account specified in the deletion transaction.
The amendment changes the mechanics of account creation; previously
a new account would have an initial sequence number of 1. Accounts
created after the amendment will have an initial sequence number equal
to the sequence of the ledger in which the account was created.
Accounts can only be deleted if they are not associated with any
obligations (like RippleStates, Escrows, or PayChannels) and if the
current ledger sequence number exceeds the account's sequence number
by at least 256 so that, if recreated, the account can be protected
from transaction replay.
* replace boost::beast::detail::iequals with boost::iequals
* replace deprecated `buffers` function with `make_printable`
* replace boost::beast::detail::ascii_tolower with lambda
* add missing includes
The validation stream only reported the ephemeral signing key for validators
which use manifests. This made tracking unnecessarily difficult for clients
processing the data stream.
With this change, the validator's long-term master public key is also
included.
This commit fixes #3005
* Provide proposing validator's master key in the validation stream
subscription JSON responses.
Implement code review changes.
FIXES: #3005
* Explain that Arch/Manjaro/etc. need `-Dstatic=OFF` during the configure step
* move configuration options closer to that step
* separate sub-headers for configuration and build
Different compilers are handling the shadow warning differently. In particular,
some are warning about types being shadowed by variables. Until these can be
resolved, the shadow warning is being disabled.
Note: the shadow warning was originally enabled to help with the structured
bindings patch. As that is now complete, it's less important to keep this
warning.
FIXES: #2527
* define custom docker image for travis-linux builds based on
package build image
* add macos builds
* add windows builds (currently allowed to fail)
* improve build and shell scripts as required for the CI envs
* add asio timer latency workaround
* omit several manual tests from TravisCI which cause memory exhaustion
This commit allows server operators to reserve slots for specific
peers (identified by the peer's public node identity) and to make
changes to the reservations while the server is operating.
This commit closes #2938
- Add docker container tags for "latest_BRANCH"
- Prevent different branches from overwriting deb repo artifacts
- Manual approval always required before pushing to prod
The original intent was that RPC error codes were not stable.
But those codes were made available through the API, so some
users came to depend on the code values. This change adapts
to the current state of affairs.
Manifests which are revoked can include ephemeral keys, although doing
so does not make sense: a revoked manifest isn't used for signing and
so does not need to define an ephemeral key.
A running instance of the server tracks the number of protocol messages
and the number of bytes it sends and receives.
This commit makes the counters more granular, allowing server operators
to better track and understand bandwidth usage.
* Add construction and assignment from a generic
contiguous container. Both compile-time and run time
safety checks are made to ensure the safety of this
conversion.
* Remove base_uint::copyFrom. The generic copy assignment
operator now does this functionality with enhanced
safety and better syntax.
* Remove construction from and dependence on Blob.
The generic constructor and assignment now handle this
functionality.
* Fix client code to adhere to this new API.
* Removed the use of fromVoid in PeerImp.cpp as it was
an inappropriate use of this dangerous API. The
generic container constructors do it with enhanced
safety and better syntax.
* Rename data member pn to data_ and make it private.
* Remove constraint from hash_append
* Remove array_type alias
This PR addresses a problem where the server could hang indefinitely
on shutdown. The cause of the problem is that the SNTPClock class was not
binding the socket to an endpoint on initialization. This can cause
an error to be delivered to the read handler. Unfortunately, the handler
ignores the error, reads again, and enters a loop that prevents the
io_service from ever completing.
- Explain how to bind to both IPv4 and IPv6 interfaces
- Provide a hint in the default [port_peer] section
- Do not enable it by default
Note that on Linux, use of '::' and IPv4-mapped IPv6 depends on a sysctl value
setting 'net.ipv6.bindv6only = 0' which seems to be the default on most Linux
distributions.
- Use `std::lock` when grabbing multiple mutexes to ensure a consistent
locking order and avoid deadlocks.
- Reduce the scope of the master mutex lock by releasing it prior to
calling setHeartbeatTimer.
A tiny input amount to a payment step can cause the step to output zero. For
example, if a previous step outputs a dust amount of 10^-80, and this step is
an IOU -> XRP offer, the offer may output zero drops. In this case, call the
strand dry. Before this patch, an error would be logged and the strand would
be called dry; in debug mode an assert triggered.
Note, this patch is not transaction breaking, as the caller did not use the
ter code. The caller only checked for success or failure.
This patch addresses the GitHub issue reported here:
https://github.com/ripple/rippled/issues/2929
This patch removes calls to several deprecated asio functions.
* `io_service::post` becomes `post` (free function)
* `io_service::work` becomes `executor_work_guard`
* `io_service::wrap` becomes `bind_executor`
* `get_io_context` becomes `get_executor` or `get_executor().context()`
This patch was tested with boost 1.69 and 1.70. The functions
`ripple::get_lowest_layer` and `beast::create_waitable_timer` are required to
handle a breaking difference between these versions. When rippled no longer
needs to support pre 1.70 boost versions, both of these functions may be
removed, and the waitable timer injections may also be removed.
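A small standalone example of the replacement style using Boost.Asio 1.66 or later (not code from the patch itself):
```
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_context ioc;

    // Deprecated style: ioc.post(handler);
    // Preferred style uses the free function:
    boost::asio::post(ioc, [] { std::cout << "hello from the executor\n"; });

    // Deprecated io_service::work is replaced by an executor_work_guard:
    auto work = boost::asio::make_work_guard(ioc);
    work.reset();  // allow run() to return once queued handlers are done

    ioc.run();
}
```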
The new parse logic is more strict but handles more cases. If an exception
is thrown, just bail.
* Allow parsing unenclosed IPv6 addresses without port
* Improve string construction
* Reduce nesting levels of code
The XRP Ledger allows an account to authorize a secondary key pair,
called a regular key pair, to sign future transactions, while keeping
the master key pair offline.
The regular key pair can be changed as often as desired, without
requiring other changes on the account.
If merged, this commit corrects a minor technical flaw which would
allow an account holder to specify the master key as the account's
new regular key.
The change is controlled by the `fixMasterKeyAsRegularKey` amendment
which, if enabled, will:
1. Prevent specifying an account's master key as the account's
regular key.
2. Prevent the "Disable Master Key" flag from incorrectly affecting
regular keys.
Before this patch, jtx allowed non-invocable functions to be passed to
operator(). However, these arguments are ignored. This caused erroneous
code such as:
```
env (offer (account_to_test, BTC (250), XRP (1000)),
offers (account_to_test, 1));
```
While it looks like the number of offers is checked, it is not. The `offers`
funclet is never run. While we could modify jtx to make the above code correct,
a cleaner solution is to run post conditions in a `require` statement after a
transaction runs.
At this point all of the jss::* names are defined in the same
file. That file has been named JsonFields.h. That file name
has little to do with either JsonStaticStrings (which is what
jss is short for) or with jss. The file is renamed to jss.h
so the file name better reflects what the file contains.
All includes of that file are fixed. A few include order
issues are tidied up along the way.
Formerly an SOTemplate was default constructed and its elements
added using push_back(). This left open the possibility of a
malformed SOTemplate if adding one of the elements caused a throw.
With this commit the SOTemplate requires an initializer_list of
its elements at construction. Elements may not be added after
construction. With this approach either the SOTemplate is fully
constructed with all of its elements or the constructor throws,
which prevents an invalid SOTemplate from even existing.
This change requires all SOTemplate construction to be adjusted
at the call site. Those changes are also in this commit.
The SOE_Flags enum is also renamed to SOEStyle, which harmonizes
the name with other uses in the code base. SOEStyle elements
are renamed (slightly) to have an "soe" prefix rather than "SOE_".
This heads toward reserving identifiers with all upper case for
macros. The new style also aligns with other prominent enums in
the code base like the collection of TER identifiers.
SOElement is adjusted so it can be stored directly in an STL
container, rather than requiring storage in a unique_ptr.
Correspondingly, unique_ptr usage is removed from both
SOTemplate and KnownFormats.
The new 'Domain' field allows validator operators to associate a domain
name with their manifest in a transparent and independently verifiable
fashion.
It is important to point out that while this system can cryptographically
prove that a particular validator claims to be associated with a domain
it does *NOT* prove that the validator is, actually, associated with that
domain.
Domain owners will have to cryptographically attest to operating particular
validators that claim to be associated with that domain. One option for
doing so would be by making available a file over HTTPS under the domain
being claimed, which is verified separately (e.g. by ensuring that the
certificate used to serve the file matches the domain being claimed) and
which contains the long-term master public keys of validator(s) associated
with that domain.
Credit for an early prototype of this idea goes to GitHub user @cryptobrad
who introduced a PR that would allow a validator list publisher to attest
that a particular validator was associated with a domain. The idea may be
worth revisiting as a way of verifying the domain name claimed by the
validator's operator.
Resource limits were not properly applied to connections with
known IP addresses but no corresponding users.
Add unit tests for unlimited vs. limited ports.
An audit showed that a number of the RPC error codes in
ErrorCodes.h are no longer used in the code base. The unused
codes were removed from the file along with their support code
in ErrorCodes.cpp.
The ledger already declared a transaction that is both single-
and multi-signing malformed. This just adds some checking in
the signing RPC commands (like submit and sign_for) which allows
that sort of error to be identified a bit closer to the user.
In the process of adding this code a bug was found in the
RPCCall unit test. That bug is fixed as well.
If the number of peers a server has is below the configured
minimum peer limit, this commit will properly transition the
server's state to "disconnected".
The default limit for the minimum number of peers required was
0 meaning that a server that was connected but lost all its
peers would never transition to disconnected, since it could
never drop below zero peers.
This commit redefines the default minimum number of peers to 1
and produces a warning if the server is configured in a way
that will prevent it from ever achieving sufficient connectivity.
If a server is configured to support crawl, it will report the
IP addresses of all peers it is connected to, unless those peers
have explicitly opted out by setting the `peer_private` option
in their config file.
This commit makes servers that are configured as validators
opt out of crawling.
Several commands allow a user to retrieve a server's status. To make it
more difficult to identify validators via fingerprinting, commands will
typically withhold, from connections that are not verified, information
that can reveal that a particular server is a validator.
Prior to this commit, servers configured to operate as validators
would, instead of simply reporting their server state as 'full',
augment their state information to indicate whether they are
'proposing' or 'validating'.
Servers will only provide this enhanced state information for
connections that have elevated privileges.
Acknowledgements:
Ripple thanks Markus Teufelberger for responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to responsibly
disclose any issues that they may find. For more on Ripple's Bug Bounty
program, please visit: https://ripple.com/bug-bounty
The /crawl API endpoint allows developers to examine the structure of
the XRP Ledger's overlay network.
This commit adds additional information about the local server to the
/crawl endpoint, making it possible for developers to create data-rich
network-wide status dashboards.
Related:
- https://developers.ripple.com/peer-protocol.html
- https://github.com/ripple/rippled-network-crawler
When deserializing specially crafted data, the code would ignore certain
types of errors. Reserializing objects created from such data results in
failures or generates a different serialization, which is not ideal.
Also addresses: RIPD-1677, RIPD-1682, RIPD-1686 and RIPD-1689.
Acknowledgements:
Ripple thanks Guido Vranken for responsibly disclosing these issues.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to responsibly
disclose any issues that they may find. For more on Ripple's Bug Bounty
program, please visit: https://ripple.com/bug-bounty
Specially crafted messages could cause the server to buffer large
amounts of memory which could increase memory pressure.
This commit changes how messages are buffered and imposes a limit
on the amount of data that the server is willing to buffer.
Acknowledgements:
Aaron Hook for responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find. For information
on Ripple's Bug Bounty program, please visit:
https://ripple.com/bug-bounty
The constructor would previously assert that the specified buffer pointer
was non-null, even if the buffer size is specified as 0. While reasonable,
this also makes it more difficult to use this API.
* Using txnsExpected_, which is influenced by both the config
and network behavior, can reserve far too much or far too
little memory, wasting time and resources.
* Not an issue during normal operation, but a user could
cause problems on their local node with extreme configuration
settings.
* initFee was using a lot of logic that could look unclear. Add
some documentation explaining why certain values were used.
* Because initFee had side effects, callers needed to repeat the
max queue size computation, making the initial problem more
likely. Instead, return the max queue size value, so the caller
can reuse it.
* A newer test (testInFlightBalance()) was incorrectly using a
hard-coded queue limit. Fix it to use initFee's new return
value.
The --rpc_port command-line option is effectively ignored. We construct
an `Endpoint` with the given port, but then drop it on the floor.
(Perhaps the author thought the `Endpoint::at_port` method is a mutation
instead of a transformation.) This small change adds the missing
assignment to hold on to the new endpoint.
Fixes #2764
This changeset ensures the preferred ledger calculation
properly distinguishes the absence of trusted validations
from a preferred ledger which is the genesis ledger.
The `STObject` member function `setType()` has been renamed to
applyTemplate() and modified to throw if there is a template
mismatch.
The error description in the exception is, in certain cases, used
to better indicate why a particular transaction was
considered ill formed.
Fixes #2585.
`Json::Value::isConvertibleTo` indicates that unsigned integers and
reals are convertible to string, but trying to do so (with
`Json::Value::asString`) throws an exception because its internal switch
is missing these cases. This change fills them in (and adds tests).
Acknowledgements:
Ripple thanks Guido Vranken for responsibly disclosing this issue.
Closes #2778
Although `parseURL` used a regex to pull the authority out of the URL
being parsed, it performed manual parsing of the hostname and port.
This commit rolls the parsing of the username and password, if any,
directly into the regex. The hostname can be a name, an IPv4 or an
IPv6 address.
Fixes #2751
* Adds local file:// URL support to the [validator_list_sites] stanza.
The file:// URL must not contain a hostname. Allows a rippled node
operator to "sideload" a new list if their node is unable to reach
a validator list's web site before an old list expires. Lists
loaded from a file will be validated in the same way a downloaded
list is validated.
* Generalize file/dir "guards" from Config test so they can be reused
in other tests.
* Check for error when reading validators.txt. Saves some parsing and
checking of an empty string, and will give a more meaningful error.
* Completes RIPD-1674.
* Relevant when deciding whether an account can queue multiple
transactions. If the potential spend of the already queued
transactions would dip into the reserve, the reserve is
preserved for fees.
* Also change several direct modifications of the owner count to
call adjustOwnerCount to preserve overflow checking.
* Update related unit testcase
* Resolves #2251
Perform some extra checks on the close time and sequence number
of a candidate for network consensus ledger. This tightens
defenses against some "insane/hostile supermajority" attacks.
Under certain conditions, we could call `memcpy` or `memcmp` with a null
source pointer. Even when specifying 0 as the amount of data to copy this
could result in undefined behavior under the C and C++ standards.
Acknowledgements:
Ripple thanks Guido Vranken for responsibly disclosing these issues.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to responsibly
disclose any issues that they may find. For more on Ripple's Bug Bounty
program, please visit: https://ripple.com/bug-bounty
The XRP Ledger is designed to be censorship resistant. Any attempt to
censor transactions would require coordinated action by a majority of
the system's validators.
Importantly, the design of the system is such that such an attempt is
detectable and can be easily proven, since every validator must sign
the validations it publishes.
This commit adds an automated censorship detector. While the server is
in sync, the detector tracks all transactions that, in the view of the
server, should have been included in a ledger, and issues warnings of
increasing severity for any transactions that have not been included
after several rounds.
When Ed25519 support was added to ripple-lib, a way to specify
whether a seed should be used to derive a "classic" secp256k1
keypair or a "new" Ed25519 keypair was needed, and the
requirements were that:
1. previously seeds would, correctly, generate a secp256k1
keypair.
2. users would not have to know about whether the seed was
used to generate a secp256k1 or an Ed25519 keypair.
To address these requirements, the decision was made to encode
the type of key within the seed and a custom encoding was
designed.
The encoding uses a token type of 1 and prefixes the actual
seed with a 2 byte header, selected to ensure that all such
keypairs will, when encoded, begin with the string "sEd".
This custom encoding is non-standard and was not previously
documented; as a result, it is not widely supported and other
software may treat such keys as invalid. This can make it
difficult for users that have stored such a seed to use
wallets or other tooling that is not based on ripple-lib.
This commit adds support to rippled for automatically
detecting and properly handling such seeds.
The 'validation_seed' RPC command was used to change the validation
key used by a validator at runtime.
Its implementation was commented out with commit fa796a2eb5
which has been included in the codebase since the 0.30.0 release
and there are no plans to reintroduce the functionality at this
point.
Validator operators should migrate to using validator manifests
instead.
This fixes #2748.
The FeeEscalation amendment has been enabled on the XRP Ledger network
since May 19, 2016. The transaction which activated this amendment is:
5B1F1E8E791A9C243DD728680F108FEF1F28F21BA3B202B8F66E7833CA71D3C3.
This change removes all conditional code based around the FeeEscalation
amendment, but leaves the amendment definition itself, since removing the
definition would cause nodes to think an unknown amendment was activated,
causing them to become amendment blocked.
The commit also removes the redundant precomputed hashes from the
supportedAmendments vector.
Problem:
- There are only a few call sites to cachedRead, and all of them
currently do more work than is required since we know the type in each
case.
Solution:
- "Inline" the codepath to cachedRead, but do not check if the type is
valid. In all such call sites, we know the keylet to read directly.
This fixes #2550
The WaitableEvent class was a leftover from the pre-Boost
version of Beast and used Windows- and pthread-specific
APIs.
This refactor replaces that functionality by using only
interfaces provided by the C++ standard, making the code
more portable.
Closes #2402.
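A minimal sketch of the idea, assuming auto-reset semantics (this is not
the actual replacement code):
```cpp
#include <condition_variable>
#include <mutex>

// Sketch of a waitable event built only from standard-library primitives,
// replacing the Windows- and pthread-specific code paths.
class WaitableEvent
{
    std::mutex mutex_;
    std::condition_variable cv_;
    bool signaled_ = false;

public:
    void signal()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            signaled_ = true;
        }
        cv_.notify_all();
    }

    void wait()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return signaled_; });
        signaled_ = false;  // auto-reset is assumed here
    }
};
```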
Many of the warnings on Windows were not resolved, just
silenced with _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS.
They need to be resolved in a future commit.
* If rippled is not synced to the network, `fee` will return a
"no network" error instead of the possibly confusing "not enabled"
error.
* Resolves RIPD-1588
A validator that was configured to use a published validator list could
exhibit aberrant behavior if that validator list expired.
This commit introduces additional logic that makes validators operating
with an expired validator list bow out of the consensus process instead
of continuing to publish validations. Normal operation will resume once
a non-expired validator list becomes available.
This commit also enhances status reporting when using the `server_info`
and `validators` commands. Before, only the expiration time of the list
would be returned; now, its current status is also reported in a format
that is clearer.
Problem:
- There are several specific overloads with some custom code that can be
easily replaced using Boost.Hex.
Solution:
- Introduce `strHex(itr, itr)` to return a string given a begin and end
iterator.
- Remove `strHex(itr, size)` in favor of `strHex(T)`, where T is
something that has a `begin()` member function. This allows us to
remove the `strHex` overloads for `std::string`, Blob, and Slice (see
the sketch below).
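A sketch of what the consolidated helpers might look like, assuming
Boost's hex algorithm (names and details are illustrative):
```cpp
#include <boost/algorithm/hex.hpp>
#include <iterator>
#include <string>

// Single iterator-pair implementation; everything else forwards to it.
template <class FwdIt>
std::string
strHex(FwdIt begin, FwdIt end)
{
    std::string result;
    boost::algorithm::hex(begin, end, std::back_inserter(result));
    return result;
}

// Convenience overload for any container exposing begin()/end(), which
// covers std::string, Blob, and Slice without dedicated overloads.
template <class T>
std::string
strHex(T const& from)
{
    return strHex(from.begin(), from.end());
}
```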
* For example, Visual Studio and Xcode. This allows working easily with
any file in the IDE.
* Also ignore the file created by Visual Studio when using cmake
integration.
* Use conditional for unity/nounity sources (h/t @mellery451)
- allow private token for jenkins/codecov
- add custom targets for gcc/clang to generate codecov reports
- use CMake coverage target in jenkins build
- optional coverage_test argument when configuring the build
Reduces the account reserve for a multisigning SignerList from
(conditionally) 3 to 10 OwnerCounts to (unconditionally) 1
OwnerCount. Includes a transition process.
* When increasing the expected ledger size, add on an extra 20%.
* When decreasing the expected ledger size, take the minimum of the
validated ledger size or the old expected size, and subtract another 50%
(see the sketch after this list).
* Update fee escalation documentation.
* Refactor the FeeMetrics object to use values from Setup
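A rough sketch of that adjustment rule (function and parameter names are
illustrative and do not match the FeeMetrics code):
```cpp
#include <algorithm>
#include <cstddef>

// Grow the expectation optimistically, shrink it conservatively.
std::size_t
updateExpectedLedgerSize(std::size_t expected, std::size_t validated)
{
    if (validated > expected)
        return validated + validated / 5;  // increase: add an extra 20%

    // decrease: take the smaller of the validated size and the old
    // expectation, then subtract another 50%
    return std::min(validated, expected) / 2;
}
```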
As described in #2314, when an offer was executed with `Fill or Kill`
semantics, the server would return `tesSUCCESS` even if the order
couldn't be filled and was aborted. This would require additional
processing of metadata by users to determine the effects of the
transaction.
This commit introduces the `fix1578` amendment which, if enabled,
will cause the server to return the new `tecKILLED` error code
instead of `tesSUCCESS` for `Fill or Kill` orders that could not
be filled.
Additionally, the `fix1578` amendment will prevent the setting of
the `No Ripple` flag on trust lines with negative balance; trying
to set the flag on such a trust line will fail with the new error
code `tecNEGATIVE_BALANCE`.
Fixes: RIPD-1648
- use ExternalProject for snappy, lz4, SOCI, and sqlite3
- use FetchContent for NuDB
- update SOCI from 79e222e3c2278e6108137a2d26d3689418b37544 to
3a1f602b3021b925d38828e3ff95f9e7f8887ff7
- update lz4 from c10863b98e1503af90616ae99725ecd120265dfb to v1.8.2
- update sqlite3 from 3.21 to 3.24
- update snappy from b02bfa754ebf27921d8da3bd2517eab445b84ff9 to 1.1.7
- update NuDB from 00adc6a4f16679a376f40c967f77dfa544c179c1 to 1.0.0
* If there is any impact to the public API methods (HTTP / WebSocket), please update https://github.com/xrplf/rippled/blob/develop/API-CHANGELOG.md
* Update API-CHANGELOG.md and add the change directly in this PR by pushing to your PR branch.
* `libxrpl`: See https://github.com/XRPLF/rippled/blob/develop/docs/build/depend.md
* Peer Protocol: See https://xrpl.org/peer-protocol.html
-->
- [ ] Public API: New feature (new methods and/or new fields)
- [ ] Public API: Breaking change (in general, breaking changes should only impact the next api_version)
- [ ] `libxrpl` change (any change that may affect `libxrpl` or dependents of `libxrpl`)
- [ ] Peer protocol change (must be backward compatible or bump the peer protocol version)
<!--
## Before / After
If relevant, use this section for an English description of the change at a technical level.
If this change affects an API, examples should be included here.
For performance-impacting changes, please provide these details:
1. Is this a new feature, bug fix, or improvement to existing functionality?
2. What behavior/functionality does the change impact?
3. In what processing can the impact be measured? Be as specific as possible - e.g. RPC client call, payment transaction that involves LOB, AMM, caching, DB operations, etc.
4. Does this change affect concurrent processing - e.g. does it involve acquiring locks, multi-threaded processing, or async processing?
-->
<!--
## Test Plan
If helpful, please describe the tests that you ran to verify your changes and provide instructions so that others can reproduce.
This section may not be needed if your change includes thoroughly commented unit tests.
# Restore copyright notices that were removed from specific files, without
# restoring the verbiage that is already present in LICENSE.md. Ensure that if
# the script is run multiple times, duplicate notices are not added.
if ! grep -q 'Raw Material Software' include/xrpl/beast/core/CurrentThreadName.h;then
echo -e "// Portions of this file are from JUCE (http://www.juce.com).\n// Copyright (c) 2013 - Raw Material Software Ltd.\n// Please visit http://www.juce.com\n\n$(cat include/xrpl/beast/core/CurrentThreadName.h)" > include/xrpl/beast/core/CurrentThreadName.h
fi
if ! grep -q 'Dev Null' src/test/app/NetworkID_test.cpp;then
parser.add_argument('-a','--all',help='Set to generate all configurations (generally used when merging a PR) or leave unset to generate a subset of configurations (generally used when committing to a PR).',action="store_true")
parser.add_argument('-c','--config',help='Path to the JSON file containing the strategy matrix configurations.',required=False,type=Path)
This changelog is intended to list all updates to the [public API methods](https://xrpl.org/public-api-methods.html).
For info about how [API versioning](https://xrpl.org/request-formatting.html#api-versioning) works, including examples, please view the [XLS-22d spec](https://github.com/XRPLF/XRPL-Standards/discussions/54). For details about the implementation of API versioning, view the [implementation PR](https://github.com/XRPLF/rippled/pull/3155). API versioning ensures existing integrations and users continue to receive existing behavior, while those that request a higher API version will experience new behavior.
The API version controls the API behavior you see. This includes what properties you see in responses, what parameters you're permitted to send in requests, and so on. You specify the API version in each of your requests. When a breaking change is introduced to the `rippled` API, a new version is released. To avoid breaking your code, you should set (or increase) your version when you're ready to upgrade.
For a log of breaking changes, see the **API Version [number]** headings. In general, breaking changes are associated with a particular API Version number. For non-breaking changes, scroll to the **XRP Ledger version [x.y.z]** headings. Non-breaking changes are associated with a particular XRP Ledger (`rippled`) release.
## API Version 2
API version 2 is available in `rippled` version 2.0.0 and later. To use this API, clients specify `"api_version" : 2` in each request.
#### Removed methods
In API version 2, the following deprecated methods are no longer available: (https://github.com/XRPLF/rippled/pull/4759)
- `tx_history` - Instead, use other methods such as `account_tx` or `ledger` with the `transactions` field set to `true`.
- `ledger_header` - Instead, use the `ledger` method.
#### Modifications to JSON transaction element in V2
In API version 2, JSON elements for transaction output have been changed and made consistent for all methods which output transactions. (https://github.com/XRPLF/rippled/pull/4775)
This helps to unify the JSON serialization format of transactions. (https://github.com/XRPLF/clio/issues/722, https://github.com/XRPLF/rippled/issues/4727)
- JSON transaction element is named `tx_json`
- Binary transaction element is named `tx_blob`
- JSON transaction metadata element is named `meta`
- Binary transaction metadata element is named `meta_blob`
Additionally, these elements are now consistently available next to `tx_json` (i.e. sibling elements), where possible:
- `hash` - Transaction ID. This data was stored inside transaction output in API version 1, but in API version 2 is a sibling element.
- `ledger_index` - Ledger index (only set on validated ledgers)
- `ledger_hash` - Ledger hash (only set on closed or validated ledgers)
- `close_time_iso` - Ledger close time expressed in ISO 8601 time format (only set on validated ledgers)
- `validated` - Bool element set to `true` if the transaction is in a validated ledger, otherwise `false`
This change affects the following methods:
- `tx` - Transaction data moved into element `tx_json` (was inline inside `result`) or, if binary output was requested, moved from `tx` to `tx_blob`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `account_tx` - Renamed transaction element from `tx` to `tx_json`. Renamed binary transaction metadata element (if it was requested) from `meta` to `meta_blob`. Changed location of `hash` and added new elements
- `transaction_entry` - Renamed transaction metadata element from `metadata` to `meta`. Changed location of `hash` and added new elements
- `subscribe` - Renamed transaction element from `transaction` to `tx_json`. Changed location of `hash` and added new elements
- `sign`, `sign_for`, `submit` and `submit_multisigned` - Changed location of `hash` element.
#### Modification to `Payment` transaction JSON schema
When reading Payments, the `Amount` field should generally **not** be used. Instead, use [delivered_amount](https://xrpl.org/partial-payments.html#the-delivered_amount-field) to see the amount that the Payment delivered. To clarify its meaning, the `Amount` field is being renamed to `DeliverMax`. (https://github.com/XRPLF/rippled/pull/4733)
- In `Payment` transaction type, JSON RPC field `Amount` is renamed to `DeliverMax`. To enable smooth client transition, `Amount` is still handled, as described below: (https://github.com/XRPLF/rippled/pull/4733)
- On JSON RPC input (e.g. `submit_multisigned` etc. methods), `Amount` is recognized as an alias to `DeliverMax` for both API version 1 and version 2 clients.
- On JSON RPC input, submitting both `Amount` and `DeliverMax` fields is allowed _only_ if they are identical; otherwise such input is rejected with `rpcINVALID_PARAMS` error.
- On JSON RPC output (e.g. `subscribe`, `account_tx` etc. methods), `DeliverMax` is present in both API version 1 and version 2.
- On JSON RPC output, `Amount` is only present in API version 1 and _not_ in version 2.
#### Modifications to account_info response
- `signer_lists` is returned in the root of the response. (In API version 1, it was nested under `account_data`.) (https://github.com/XRPLF/rippled/pull/3770)
- When using an invalid `signer_lists` value, the API now returns an "invalidParams" error. (https://github.com/XRPLF/rippled/pull/4585)
- (`signer_lists` must be a boolean. In API version 1, strings were accepted and may return a normal response - i.e. as if `signer_lists` were `true`.)
#### Modifications to [account_tx](https://xrpl.org/account_tx.html#account_tx) response
- Using `ledger_index_min`, `ledger_index_max`, and `ledger_index` returns `invalidParams` because if you use `ledger_index_min` or `ledger_index_max`, then it does not make sense to also specify `ledger_index`. In API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4571)
- The same applies for `ledger_index_min`, `ledger_index_max`, and `ledger_hash`. (https://github.com/XRPLF/rippled/issues/4545#issuecomment-1565065579)
- Using a `ledger_index_min` or `ledger_index_max` beyond the range of ledgers that the server has:
- returns `lgrIdxMalformed` in API version 2. Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/issues/4288)
- Attempting to use a non-boolean value (such as a string) for the `binary` or `forward` parameters returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4620)
#### Modifications to [noripple_check](https://xrpl.org/noripple_check.html#noripple_check) response
- Attempting to use a non-boolean value (such as a string) for the `transactions` parameter returns `invalidParams` (`rpcINVALID_PARAMS`). Previously, in API version 1, no error was returned. (https://github.com/XRPLF/rippled/pull/4620)
## API Version 1
This version is supported by all `rippled` versions. For WebSocket and HTTP JSON-RPC requests, it is currently the default API version used when no `api_version` is specified.
The [commandline](https://xrpl.org/docs/references/http-websocket-apis/api-conventions/request-formatting/#commandline-format) always uses the latest API version. The command line is intended for ad-hoc usage by humans, not programs or automated scripts. The command line is not meant for use in production code.
### Inconsistency: server_info - network_id
The `network_id` field was added in the `server_info` response in version 1.5.0 (2019), but it is not returned in [reporting mode](https://xrpl.org/rippled-server-modes.html#reporting-mode). However, use of reporting mode is now discouraged, in favor of using [Clio](https://github.com/XRPLF/clio) instead.
## XRP Ledger server version 2.5.0
As of 2025-04-04, version 2.5.0 is in development. You can use a pre-release version by building from source or [using the `nightly` package](https://xrpl.org/docs/infrastructure/installation/install-rippled-on-ubuntu).
### Additions and bugfixes in 2.5.0
- `channel_authorize`: If `signing_support` is not enabled in the config, the RPC is disabled.
## XRP Ledger server version 2.4.0
[Version 2.4.0](https://github.com/XRPLF/rippled/releases/tag/2.4.0) was released on March 4, 2025.
### Additions and bugfixes in 2.4.0
- `ledger_entry`: `state` is added as an alias for `ripple_state`.
- `ledger_entry`: Enables case-insensitive filtering by canonical name in addition to case-sensitive filtering by RPC name.
- `validators`: Added new field `validator_list_threshold` in response.
- `simulate`: A new RPC that executes a [dry run of a transaction submission](https://github.com/XRPLF/XRPL-Standards/tree/master/XLS-0069d-simulate#2-rpc-simulate)
- Signing methods autofill fees better and properly handle transactions that don't have a base fee, and will also autofill the `NetworkID` field.
## XRP Ledger server version 2.3.0
[Version 2.3.0](https://github.com/XRPLF/rippled/releases/tag/2.3.0) was released on Nov 25, 2024.
### Breaking changes in 2.3.0
- `book_changes`: If the requested ledger version is not available on this node, a `ledgerNotFound` error is returned and the node does not attempt to acquire the ledger from the p2p network (as with other non-admin RPCs).
Admins can still attempt to retrieve old ledgers with the `ledger_request` RPC.
### Additions and bugfixes in 2.3.0
- `book_changes`: Returns a `validated` field in its response, which was missing in prior versions.
## XRP Ledger server version 2.2.0
[Version 2.2.0](https://github.com/XRPLF/rippled/releases/tag/2.2.0) was released on Jun 5, 2024. The following additions are non-breaking (because they are purely additive):
- The `feature` method now has a non-admin mode for users. (It was previously only available to admin connections.) The method returns an updated list of amendments, including their names and other information. ([#4781](https://github.com/XRPLF/rippled/pull/4781))
## XRP Ledger server version 2.0.0
[Version 2.0.0](https://github.com/XRPLF/rippled/releases/tag/2.0.0) was released on Jan 9, 2024. The following additions are non-breaking (because they are purely additive):
- `server_definitions`: A new RPC that generates a `definitions.json`-like output that can be used in XRPL libraries.
- In `Payment` transactions, `DeliverMax` has been added. This is a replacement for the `Amount` field, which should not be used. Typically, the `delivered_amount` (in transaction metadata) should be used. To ease the transition, `DeliverMax` is present regardless of API version, since adding a field is non-breaking.
- API version 2 has been moved from beta to supported, meaning that it is generally available (regardless of the `beta_rpc_api` setting).
## XRP Ledger server version 1.12.0
[Version 1.12.0](https://github.com/XRPLF/rippled/releases/tag/1.12.0) was released on Sep 6, 2023. The following additions are non-breaking (because they are purely additive).
- `server_info`: Added `ports`, an array which advertises the RPC and WebSocket ports. This information is also included in the `/crawl` endpoint (which calls `server_info` internally). `grpc` and `peer` ports are also included. (https://github.com/XRPLF/rippled/pull/4427)
- `ports` contains objects, each containing a `port` for the listening port (a number string), and a `protocol` array listing the supported protocols on that port.
- This allows crawlers to build a more detailed topology without needing to port-scan nodes.
- (For peers and other non-admin clients, the info about admin ports is excluded.)
- Clawback: The following additions are gated by the Clawback amendment (`featureClawback`). (https://github.com/XRPLF/rippled/pull/4553)
- Adds an [AccountRoot flag](https://xrpl.org/accountroot.html#accountroot-flags) called `lsfAllowTrustLineClawback` (https://github.com/XRPLF/rippled/pull/4617)
- Adds the corresponding `asfAllowTrustLineClawback` [AccountSet Flag](https://xrpl.org/accountset.html#accountset-flags) as well.
- Clawback is disabled by default, so if an issuer desires the ability to claw back funds, they must use an `AccountSet` transaction to set the AllowTrustLineClawback flag. They must do this before creating any trust lines, offers, escrows, payment channels, or checks.
- Adds the [Clawback transaction type](https://github.com/XRPLF/XRPL-Standards/blob/master/XLS-39d-clawback/README.md#331-clawback-transaction), containing these fields:
- `Account`: The issuer of the asset being clawed back. Must also be the sender of the transaction.
- `Amount`: The amount being clawed back, with the `Amount.issuer` being the token holder's address.
`tfWithdrawAll`, `tfOneAssetWithdrawAll`, which allow a trader to specify different field combinations
for `AMMDeposit` and `AMMWithdraw` transactions.
- Adds new transaction result codes:
- `tecUNFUNDED_AMM`: insufficient balance to fund AMM. The account does not have funds for liquidity provision.
- `tecAMM_BALANCE`: AMM has invalid balance. Calculated balances greater than the current pool balances.
- `tecAMM_FAILED`: AMM transaction failed. Fails due to a processing failure.
- `tecAMM_INVALID_TOKENS`: AMM invalid LP tokens. Invalid input values, format, or calculated values.
- `tecAMM_EMPTY`: AMM is in empty state. Transaction requires AMM in non-empty state (LP tokens > 0).
- `tecAMM_NOT_EMPTY`: AMM is not in empty state. Transaction requires AMM in empty state (LP tokens == 0).
- `tecAMM_ACCOUNT`: AMM account. Clawback of AMM account.
- `tecINCOMPLETE`: Some work was completed, but more submissions required to finish. `AMMDelete` partially deletes the trustlines.
## XRP Ledger server version 1.11.0
[Version 1.11.0](https://github.com/XRPLF/rippled/releases/tag/1.11.0) was released on Jun 20, 2023.
### Breaking changes in 1.11
- Added the ability to mark amendments as obsolete. For the `feature` admin API, there is a new possible value for the `vetoed` field. (https://github.com/XRPLF/rippled/pull/4291)
- The value of `vetoed` can now be `true`, `false`, or `"Obsolete"`.
- Removed the acceptance of seeds or public keys in place of account addresses. (https://github.com/XRPLF/rippled/pull/4404)
- This simplifies the API and encourages better security practices (i.e. seeds should never be sent over the network).
- For the `ledger_data` method, when all entries are filtered out, the `state` field of the response is now an empty list (in other words, an empty array, `[]`). (Previously, it would return `null`.) While this is technically a breaking change, the new behavior is consistent with the documentation, so this is considered only a bug fix. (https://github.com/XRPLF/rippled/pull/4398)
- If and when the `fixNFTokenRemint` amendment activates, there will be a new AccountRoot field, `FirstNFTSequence`. This field is set to the current account sequence when the account issues their first NFT. If an account has not issued any NFTs, then the field is not set. ([#4406](https://github.com/XRPLF/rippled/pull/4406))
- There is a new account deletion restriction: an account can only be deleted if `FirstNFTSequence` + `MintedNFTokens` + `256` is less than the current ledger sequence.
- This is potentially a breaking change if clients have logic for determining whether an account can be deleted.
- NetworkID
- For sidechains and networks with a network ID greater than 1024, there is a new [transaction common field](https://xrpl.org/transaction-common-fields.html), `NetworkID`. (https://github.com/XRPLF/rippled/pull/4370)
- This field helps to prevent replay attacks and is now required for chains whose network ID is 1025 or higher.
- The field must be omitted for Mainnet, so there is no change for Mainnet users.
- There are three new local error codes:
- `telNETWORK_ID_MAKES_TX_NON_CANONICAL`: a `NetworkID` is present but the chain's network ID is less than 1025. Remove the field from the transaction, and try again.
- `telREQUIRES_NETWORK_ID`: a `NetworkID` is required, but is not present. Add the field to the transaction, and try again.
- `telWRONG_NETWORK`: a `NetworkID` is specified, but it is for a different network. Submit the transaction to a different server which is connected to the correct network.
### Additions and bug fixes in 1.11
- Added `nftoken_id`, `nftoken_ids` and `offer_id` meta fields into NFT `tx` and `account_tx` responses. (https://github.com/XRPLF/rippled/pull/4447)
- Added an `account_flags` object to the `account_info` method response. (https://github.com/XRPLF/rippled/pull/4459)
- Added `NFTokenPages` to the `account_objects` RPC. (https://github.com/XRPLF/rippled/pull/4352)
- Fixed: `marker` returned from the `account_lines` command would not work on subsequent commands. (https://github.com/XRPLF/rippled/pull/4361)
- If the `XRPFees` feature is enabled, the `fee_ref` field will be
removed from the [ledger subscription stream](https://xrpl.org/subscribe.html#ledger-stream), because it will no longer
have any meaning.
# Unit tests for API changes
The following information is useful to developers contributing to this project:
The purpose of unit tests is to catch bugs and prevent regressions. In general, it often makes sense to create a test function when there is a breaking change to the API. For APIs that have changed in a new API version, the tests should be modified so that both the prior version and the new version are properly tested.
To take one example: for `account_info` version 1, WebSocket and JSON-RPC behavior should be tested. The latest API version, i.e. API version 2, should be tested over WebSocket, JSON-RPC, and command line.
> These instructions assume you have a C++ development environment ready with Git, Python, Conan, CMake, and a C++ compiler. For help setting one up on Linux, macOS, or Windows, [see this guide](./docs/build/environment.md).
> These instructions also assume a basic familiarity with Conan and CMake.
> If you are unfamiliar with Conan, you can read our
> [crash course](./docs/build/conan.md) or the official [Getting Started][3]
> walkthrough.
## Branches
For a stable release, choose the `master` branch or one of the [tagged
You can manage multiple Conan profiles in the directory
`$(conan config home)/profiles`, for example renaming `default` to a different
name and then creating a new `default` profile for a different compiler.
#### Select language
The default profile created by Conan will typically select a different C++
dialect than the C++20 standard used by this project. You should set `20` in
the profile line starting with `compiler.cppstd=`. For example:
```bash
sed -i.bak -e 's|^compiler\.cppstd=.*$|compiler.cppstd=20|' $(conan config home)/profiles/default
```
#### Select standard library in Linux
**Linux** developers will commonly have a default Conan [profile][] that
compiles with GCC and links with libstdc++. If you are linking with libstdc++
(see profile setting `compiler.libcxx`), then you will need to choose the
`libstdc++11` ABI:
```bash
sed -i.bak -e 's|^compiler\.libcxx=.*$|compiler.libcxx=libstdc++11|' $(conan config home)/profiles/default
```
#### Select architecture and runtime in Windows
**Windows** developers may need to use the x64 native build tools. An easy way
to do that is to run the shortcut "x64 Native Tools Command Prompt" for the
version of Visual Studio that you have installed.
Windows developers must also build `rippled` and its dependencies for the x64
architecture:
```bash
sed -i.bak -e 's|^arch=.*$|arch=x86_64|' $(conan config home)/profiles/default
```
**Windows** developers also must select static runtime:
```bash
sed -i.bak -e 's|^compiler\.runtime=.*$|compiler.runtime=static|' $(conan config home)/profiles/default
```
#### Clang workaround for grpc
If your compiler is clang, version 19 or later, or apple-clang, version 17 or
later, you may encounter a compilation error while building the `grpc`
dependency:
```text
In file included from .../lib/promise/try_seq.h:26:
.../lib/promise/detail/basic_seq.h:499:38: error: a template argument list is expected after a name prefixed by the template keyword [-Wmissing-template-arg-list-after-template-kw]
```
If your compiler is gcc, version 12, and you have enabled the `werr` option, you may
encounter a compilation error such as:
```text
/usr/include/c++/12/bits/char_traits.h:435:56: error: 'void* __builtin_memcpy(void*, const void*, long unsigned int)' accessing 9223372036854775810 or more bytes at offsets [2, 9223372036854775807] and 1 may overlap up to 9223372036854775813 bytes at offset -3 [-Werror=restrict]
```
| Option | Default | Description |
| --- | --- | --- |
| `coverage` | OFF | Prepare the coverage report. |
| `san` | N/A | Enable a sanitizer with Clang. Choices are `thread` and `address`. |
| `tests` | OFF | Build tests. |
| `unity` | OFF | Configure a unity build. |
| `xrpld` | OFF | Build the xrpld (`rippled`) application, and not just the libxrpl library. |
| `werr` | OFF | Treat compilation warnings as errors |
| `wextra` | OFF | Enable additional compilation warnings |
[Unity builds][5] may be faster for the first build
(at the cost of much more memory) since they concatenate sources into fewer
translation units. Non-unity builds may be faster for incremental builds,
and can be helpful for detecting `#include` omissions.
## Troubleshooting
### Conan
After any updates or changes to dependencies, you may need to do the following:
1. Remove your build directory.
2. Remove individual libraries from the Conan cache, e.g.
```bash
conan remove 'grpc/*'
```
**or**
Remove all libraries from Conan cache:
```bash
conan remove '*'
```
3. Re-run [conan export](#patched-recipes) if needed.
4. [Regenerate lockfile](#conan-lockfile).
5. Re-run [conan install](#build-and-test).
#### ERROR: Package not resolved
If you're seeing an error like `ERROR: Package 'snappy/1.1.10' not resolved: Unable to find 'snappy/1.1.10#968fef506ff261592ec30c574d4a7809%1756234314.246' in remotes.`,
please add the `xrplf` remote or re-run `conan export` for [patched recipes](#patched-recipes).
### `protobuf/port_def.inc` file not found
If `cmake --build .` results in an error due to a missing protobuf file, then
you might have generated CMake files for a different `build_type` than the
`CMAKE_BUILD_TYPE` you passed to Conan.
```
/rippled/.build/pb-xrpl.libpb/xrpl/proto/xrpl.pb.h:10:10: fatal error: 'google/protobuf/port_def.inc' file not found
10 | #include <google/protobuf/port_def.inc>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
```
For example, if you want to build Debug:
1. For conan install, pass `--settings build_type=Debug`
2. For cmake, pass `-DCMAKE_BUILD_TYPE=Debug`
## Add a Dependency
If you want to experiment with a new package, follow these steps:
1. Search for the package on [Conan Center](https://conan.io/center/).
2. Modify [`conanfile.py`](./conanfile.py):
- Add a version of the package to the `requires` property.
- Change any default options for the package by adding them to the
The [XRP Ledger](https://xrpl.org/) is a decentralized cryptographic ledger powered by a network of peer-to-peer nodes. The XRP Ledger uses a novel Byzantine Fault Tolerant consensus algorithm to settle and record transactions in a secure distributed database without a central operator.
## XRP
XRP is a public, counterparty-less asset native to the XRP Ledger, and is designed to bridge the many different currencies in use worldwide. XRP is traded on the open-market and is available for anyone to access. The XRP Ledger was created in 2012 with a finite supply of 100 billion units of XRP. Its creators gifted 80 billion XRP to a company, now called [Ripple](https://ripple.com/), to develop the XRP Ledger and its ecosystem. Ripple uses XRP to help build the Internet of Value, ushering in a world in which money moves as fast and efficiently as information does today.
## `rippled`
The server software that powers the XRP Ledger is called `rippled` and is available in this repository under the permissive [ISC open-source license](LICENSE). The `rippled` server is written primarily in C++ and runs on a variety of platforms.
[XRP](https://xrpl.org/xrp.html) is a public, counterparty-free crypto-asset native to the XRP Ledger, and is designed as a gas token for network services and to bridge different currencies. XRP is traded on the open-market and is available for anyone to access. The XRP Ledger was created in 2012 with a finite supply of 100 billion units of XRP.
## rippled
The server software that powers the XRP Ledger is called `rippled` and is available in this repository under the permissive [ISC open-source license](LICENSE.md). The `rippled` server software is written primarily in C++ and runs on a variety of platforms. The `rippled` server software can run in several modes depending on its [configuration](https://xrpl.org/rippled-server-modes.html).
If you are interested in running an **API Server** (including a **Full History Server**), take a look at [Clio](https://github.com/XRPLF/clio). (rippled Reporting Mode has been replaced by Clio.)
### Build from Source
- [Read the build instructions in `BUILD.md`](BUILD.md)
- If you encounter any issues, please [open an issue](https://github.com/XRPLF/rippled/issues)
## Key Features of the XRP Ledger
- **[Censorship-Resistant Transaction Processing][]:** No single party decides which transactions succeed or fail, and no one can "roll back" a transaction after it completes. As long as those who choose to participate in the network keep it healthy, they can settle transactions in seconds.
- **[Fast, Efficient Consensus Algorithm][]:** The XRP Ledger's consensus algorithm settles transactions in 4 to 5 seconds, processing at a throughput of up to 1500 transactions per second. These properties put XRP at least an order of magnitude ahead of other top digital assets.
- **[Finite XRP Supply][]:** When the XRP Ledger began, 100 billion XRP were created, and no more XRP will ever be created. (Each XRP is subdivisible down to 6 decimal places, for a grand total of 100 quintillion _drops_ of XRP.) The available supply of XRP decreases slowly over time as small amounts are destroyed to pay transaction costs.
- **[Responsible Software Governance][]:** A team of full-time, world-class developers at Ripple maintain and continually improve the XRP Ledger's underlying software with contributions from the open-source community. Ripple acts as a steward for the technology and an advocate for its interests, and builds constructive relationships with governments and financial institutions worldwide.
- **[Finite XRP Supply][]:** When the XRP Ledger began, 100 billion XRP were created, and no more XRP will ever be created. The available supply of XRP decreases slowly over time as small amounts are destroyed to pay transaction fees.
- **[Responsible Software Governance][]:** A team of full-time developers at Ripple & other organizations maintain and continually improve the XRP Ledger's underlying software with contributions from the open-source community. Ripple acts as a steward for the technology and an advocate for its interests.
- **[Secure, Adaptable Cryptography][]:** The XRP Ledger relies on industry standard digital signature systems like ECDSA (the same scheme used by Bitcoin) but also supports modern, efficient algorithms like Ed25519. The extensible nature of the XRP Ledger's software makes it possible to add and disable algorithms as the state of the art in cryptography advances.
- **[Modern Features for Smart Contracts][]:** Features like Escrow, Checks, and Payment Channels support cutting-edge financial applications including the [Interledger Protocol](https://interledger.org/). This toolbox of advanced features comes with safety features like a process for amending the network and separate checks against invariant constraints.
- **[Modern Features][]:** Features like Escrow, Checks, and Payment Channels support financial applications atop of the XRP Ledger. This toolbox of advanced features comes with safety features like a process for amending the network and separate checks against invariant constraints.
- **[On-Ledger Decentralized Exchange][]:** In addition to all the features that make XRP useful on its own, the XRP Ledger also has a fully-functional accounting system for tracking and trading obligations denominated in any way users want, and an exchange built into the protocol. The XRP Ledger can settle long, cross-currency payment paths and exchanges of multiple currencies in atomic transactions, bridging gaps of trust with XRP.
Here are some good places to start learning the source code:
- Read the markdown files in the source tree: `src/ripple/**/*.md`.
- Read [the levelization document](.github/scripts/levelization) to get an idea of the internal dependency graph.
- In the big picture, the `main` function constructs an `ApplicationImp` object, which implements the `Application` virtual interface. Almost every component in the application takes an `Application&` parameter in its constructor, typically named `app` and stored as a member variable `app_`. This allows most components to depend on any other component.
For more details on operating an XRP Ledger server securely, please visit https://xrpl.org/manage-the-rippled-server.html.
# Security Policy
## Supported Versions
Software constantly evolves. In order to focus resources, we generally only accept vulnerability reports that affect recent and current versions of the software. We always accept reports for issues present in the **master**, **release** or **develop** branches, and in proposed changes with [open pull requests](https://github.com/ripple/rippled/pulls).
## Identifying and Reporting Vulnerabilities
We take security seriously and we do our best to ensure that all our releases are bug free. But we aren't perfect and sometimes things will slip through.
### Responsible Investigation
We urge you to examine our code carefully and responsibly, and to disclose any issues that you identify in a responsible fashion.
Responsible investigation includes, but isn't limited to, the following:
- Not performing tests on the main network. If testing is necessary, use the [Testnet or Devnet](https://xrpl.org/xrp-testnet-faucet.html).
- Not targeting physical security measures, or attempting to use social engineering, spam, distributed denial of service (DDoS) attacks, etc.
- Investigating bugs in a way that makes a reasonable, good faith effort not to be disruptive or harmful to the XRP Ledger and the broader ecosystem.
### Responsible Disclosure
If you discover a vulnerability or potential threat, or if you _think_
you have, please reach out by dropping an email using the contact
information below.
Your report should include the following:
- Your contact information (typically, an email address);
- The description of the vulnerability;
- The attack scenario (if any);
- The steps to reproduce the vulnerability;
- Any other relevant details or artifacts, including code, scripts or patches.
In your email, please describe the issue or potential threat. If possible, include a "repro" (code that can reproduce the issue) or describe the best way to reproduce and replicate the issue. Please make your report as detailed and comprehensive as possible.
For more information on responsible disclosure, please read this [Wikipedia article](https://en.wikipedia.org/wiki/Responsible_disclosure).
## Report Handling Process
Please report the bug directly to us and limit further disclosure. If you want to prove that you knew the bug as of a given time, consider using a cryptographic precommitment: hash the content of your report and publish the hash on a medium of your choice (e.g. on Twitter or as a memo in a transaction) as "proof" that you had written the text at a given point in time.
Once we receive a report, we:
1. Assign two people to independently evaluate the report;
2. Consider their recommendations;
3. If action is necessary, formulate a plan to address the issue;
4. Communicate privately with the reporter to explain our plan;
5. Prepare, test and release a version which fixes the issue; and
6. Announce the vulnerability publicly.
We will triage and respond to your disclosure within 24 hours. Beyond that, we will work to analyze the issue in more detail, formulate, develop and test a fix.
While we commit to responding within 24 hours of your initial report with our triage assessment, we cannot guarantee a response time for the remaining steps. We will communicate with you throughout this process, letting you know where we are and keeping you updated on the timeframe.
## Bug Bounty Program
[Ripple](https://ripple.com) is generously sponsoring a bug bounty program for vulnerabilities in [`rippled`](https://github.com/XRPLF/rippled) (and other related projects, like [`xrpl.js`](https://github.com/XRPLF/xrpl.js), [`xrpl-py`](https://github.com/XRPLF/xrpl-py), [`xrpl4j`](https://github.com/XRPLF/xrpl4j)).
This program allows us to recognize and reward individuals or groups that identify and report bugs. In summary, in order to qualify for a bounty, the bug must be:
1. **In scope**. Only bugs in software under the scope of the program qualify. Currently, that means `rippled`, `xrpl.js`, `xrpl-py`, `xrpl4j`.
2. **Relevant**. A security issue, posing a danger to user funds, privacy, or the operation of the XRP Ledger.
3. **Original and previously unknown**. Bugs that are already known and discussed in public do not qualify. Previously reported bugs, even if publicly unknown, are not eligible.
4. **Specific**. We welcome general security advice or recommendations, but we cannot pay bounties for that.
5. **Fixable**. There has to be something we can do to permanently fix the problem. Note that bugs in other people’s software may still qualify in some cases. For example, if you find a bug in a library that we use which can compromise the security of software that is in scope and we can get it fixed, you may qualify for a bounty.
6. **Unused**. If you use the exploit to attack the XRP Ledger, you do not qualify for a bounty. If you report a vulnerability used in an ongoing or past attack and there is specific, concrete evidence that suggests you are the attacker we reserve the right not to pay a bounty.
The amount paid varies dramatically. Vulnerabilities that are harmless on their own, but could form part of a critical exploit will usually receive a bounty. Full-blown exploits can receive much higher bounties. Please don’t hold back partial vulnerabilities while trying to construct a full-blown exploit. We will pay a bounty to anyone who reports a complete chain of vulnerabilities even if they have reported each component of the exploit separately and those vulnerabilities have been fixed in the meantime. However, to qualify for the full bounty, you must have been the first to report each of the partial exploits.
### Contacting Us
To report a qualifying bug, please send a detailed report to:
echo"Use the following command on your local machine to download from your rippled instance: scp <remote_rippled_username>@<remote_host>:${tmp_loc}/info-package.tar.gz <path/to/local_machine/directory>"| tee /dev/fd/3
message(ERROR"failed to create symlink: <${link}>")
endif()
endfunction()