The reason for this change is explained in XRPLF/XRPL-Standards#119.
We want to be explicit that this flag is exclusively for trust lines. New token types (e.g. CFT) will not use this flag for clawback; instead, they will turn clawback on/off at the token level, which is more versatile.
Use the most recent versions in ConanCenter.
* Due to a bug in Clang 16, you may get a compile error:
"call to 'async_teardown' is ambiguous"
* A compiler flag workaround is documented in `BUILD.md`.
* At this time, building this with gcc 13 may require editing some files
in `.conan/data`
* A patch to support gcc13 may be added in a later PR.
---------
Co-authored-by: Scott Schurr <scott@ripple.com>
Add AMM functionality:
- InstanceCreate
- Deposit
- Withdraw
- Governance
- Auctioning
- payment engine integration
To support this functionality, add:
- New RPC method, `amm_info`, to fetch pool and LPT balances
- AMM Root Account
- trust line for each IOU AMM token
- trust line to track Liquidity Provider Tokens (LPT)
- `ltAMM` object
The `ltAMM` object tracks:
- fee votes
- auction slot bids
- AMM tokens pair
- total outstanding tokens balance
- `AMMID` to AMM `RootAccountID` mapping
Add new classes to facilitate AMM integration into the payment engine.
`BookStep` uses these classes to infer if AMM liquidity can be consumed.
The AMM formula implementation uses the new Number class added in #4192.
IOUAmount and STAmount use Number arithmetic.
Add AMM unit tests for all features.
AMM requires the following amendments:
- featureAMM
- fixUniversalNumber
- featureFlowCross
Notes:
- Current trading fee threshold is 1%
- AMM currency is generated by: 0x03 + 152 bits of sha256{cur1, cur2}
- Current max AMM Offers is 30
---------
Co-authored-by: Howard Hinnant <howard.hinnant@gmail.com>
Improve error handling for ledger_entry by returning an "invalidParams"
error when one or more request fields are specified incorrectly, or one
or more required fields are missing.
For example, if none of the following fields is provided, then the
API should return an invalidParams error:
* index, account_root, directory, offer, ripple_state, check, escrow,
payment_channel, deposit_preauth, ticket
Prior to this commit, the API returned an "unknownOption" error instead.
Since the error was actually due to invalid parameters, rather than
unknown options, this error was misleading.
Since this is an API breaking change, the "invalidParams" error is only
returned for requests using api_version: 2 and above. To maintain
backward compatibility, the "unknownOption" error is still returned for
api_version: 1.
Related: #4573
Fix #4303
- Previously, mulDiv had `std::pair<bool, uint64_t>` as the output type.
- This is an error-prone interface as it is easy to ignore when
overflow occurs.
- Using a return type of `std::optional` should decrease the likelihood
of ignoring overflow.
- It also allows for the use of `optional::value_or()` as a way to
explicitly recover from overflow (see the sketch below).
- Include the `limits.h` header in order to satisfy gcc's
`numeric_limits` incomplete-type requirement.
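For illustration, a sketch of the new shape of the interface (assuming a
128-bit intermediate is available; the actual implementation in the
codebase may differ):
```
#include <cstdint>
#include <limits>
#include <optional>

// Returns (value * mul) / div, or std::nullopt on overflow or division
// by zero, so the caller cannot silently ignore either case.
std::optional<std::uint64_t>
mulDivSketch(std::uint64_t value, std::uint64_t mul, std::uint64_t div)
{
    if (div == 0)
        return std::nullopt;
    using u128 = unsigned __int128;  // assumption: compiler extension
    u128 const result = (u128(value) * u128(mul)) / div;
    if (result > std::numeric_limits<std::uint64_t>::max())
        return std::nullopt;
    return static_cast<std::uint64_t>(result);
}

// Explicit recovery from overflow at the call site:
//   auto scaled = mulDivSketch(amount, rate, base).value_or(fallback);
```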
Fix #3495
---------
Co-authored-by: John Freeman <jfreeman08@gmail.com>
* Update the `account_info` API so that the `allowClawback` flag is
included in the response.
* The proposed `Clawback` amendment added an `allowClawback` flag in
the `AccountRoot` object.
* In the API response, under `account_flags`, there is now an
`allowClawback` field with a boolean (`true` or `false`) value.
* For reference, the XLS-39 Clawback implementation can be found in
#4553.
Fix #4588
Enhance security during the build process:
* The '-fstack-protector' flag enables stack protection for preventing
buffer overflow vulnerabilities. If an attempt is made to overflow the
buffer, the program will terminate, thus protecting the integrity of
the stack.
* The '-Wl,-z,relro,-z,now' linker flag enables Read-only Relocations
(RELRO), a feature that helps harden the binary against certain types
of exploits, particularly those that involve overwriting the Global
Offset Table (GOT).
* This flag is only set for Linux builds, due to compatibility issues
with apple-clang.
* The `relro` option makes certain sections of memory read-only after
initialization to prevent them from being overwritten, while `now`
ensures that all dynamic symbols are resolved immediately on program
start, reducing the window of opportunity for attacks.
* Add a new YAML file (.pre-commit-config.yaml) to set up pre-commit
hook for clang-format
* The pre-commit hook is opt-in and needs to be installed in order to
run automatically
* Update CONTRIBUTING.md with instructions on how to set up and use the
clang-format linter
Automating the process of running clang-format before committing code
helps to save time by removing the need to fix formatting issues later.
This is a tooling improvement and doesn't change C++ code.
* Update the version of the checkout action (for GitHub Actions) in
`clang-format.yml` and `levelization.yml`.
* The previous version, v2, was raising deprecation warnings due to
its reliance on Node.js 12.
* The latest checkout action version, v3, uses Node.js 16.
When requesting `account_info` with an invalid `signer_lists` value, the
API should return an "invalidParams" error.
`signer_lists` should have a value of type boolean. If it is not a
boolean, then it is invalid input. The response now indicates that.
* This is an API breaking change, so the change is only reflected for
requests containing `"api_version": 2`
* Fix #4539
The debug packages were named with the extension ".ddeb", but due to a
bug in Artifactory, they need to have the ".deb" extension. Debug symbol
packages with ".ddeb" extensions are not indexed, and thus are not
visible in apt clients.
* Fix the issue by renaming the debug packages in the build script.
* Use GCC-11 and update GCC Conan profile.
* This software requires GCC 11 and C++20. However, reporting mode is
built with C++17.
This is a quick band-aid to fix the build. Later, it will be better to
remove this package-building code.
For context, a Debian (deb) package contains bundled software and
resources necessary for installing and managing software on a
Debian-based system, including Ubuntu and derivatives.
- Use powers of two to clearly indicate the bitmask
- Replace bitmask with explicit if-conditions to better indicate predicates
Change enum values to be powers of two (fix #3417, #4239).
Implement the simplified condition evaluation, removing the complex
bitwise AND (`&`) operator.
Implement the second solution proposed in Nik Bougalis's comment on
#3417 ("Software does not distinguish between different Conditions").
I have tested this code change by performing RPC calls with the
commands server_info, server_state, peers, and validation_info. These
commands worked as expected.
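For illustration, the pattern looks roughly like this (the enum and
predicate names are hypothetical, not the actual ones from the
codebase):
```
#include <cstdint>

// Powers of two make it obvious that the values form a bitmask.
enum ConditionBits : std::uint32_t {
    condNone    = 0,
    condFrozen  = 1 << 0,  // 1
    condExpired = 1 << 1,  // 2
    condNoLine  = 1 << 2,  // 4
};

// Explicit if-conditions read as predicates instead of one opaque
// combined bitwise expression.
bool isBlocked(std::uint32_t flags)
{
    if (flags & condFrozen)
        return true;
    if (flags & condExpired)
        return true;
    return false;
}
```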
Certain inputs for the AccountTx method should return an error. In other
words, an invalid request from a user or client now results in an error
message.
Since this can change the response from the API, it is an API breaking
change. This commit maintains backward compatibility by keeping the
existing behavior for existing requests. When clients specify
"api_version": 2, they will be able to get the updated error messages.
Update unit tests to check the error based on the API version.
* Fix #4288
* Fix #4545
* Commits 0b812cd (#4427) and 11e914f (#4516) conflict. The first added
references to `ServerHandlerImp` in files outside of that class's
organizational unit (which is technically incorrect). The second
removed `ServerHandlerImp`, but was not up to date with develop. This
results in the build failing.
* Fixes the build by changing references to `ServerHandlerImp` to
the more correct `ServerHandler`.
Rename `ServerHandlerImp` to `ServerHandler`. There was no other
ServerHandler definition despite the existence of a header suggesting
that there was.
This resolves a piece of historical confusion in the code, which was
identified during a code review.
The changes in the diff may look more extensive than they actually are.
The contents of `impl/ServerHandlerImp.h` were merged into
`ServerHandler.h`, making the latter file appear to have undergone
significant modifications. However, this is a non-breaking refactor that
only restructures code.
Fix the libxrpl library target for consumers using Conan.
* Fix installation issues and update includes.
* Update requirements in the Conan package info.
* libxrpl requires openssl::crypto.
(Conan is a software package manager for C++.)
Replace hand-rolled code with std::from_chars for better
maintainability.
The C++ std::from_chars function is intended to be as fast as possible,
so it is unlikely to be slower than the code it replaces. This change is
a net gain because it reduces the amount of hand-rolled code.
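As an illustration of the kind of replacement involved (not the exact
code that changed):
```
#include <charconv>
#include <optional>
#include <string_view>

// Parse a base-10 unsigned integer, rejecting partial and out-of-range
// input, with no hand-rolled digit loop.
std::optional<unsigned> parseUnsigned(std::string_view s)
{
    unsigned value = 0;
    auto const [ptr, ec] =
        std::from_chars(s.data(), s.data() + s.size(), value);
    if (ec != std::errc{} || ptr != s.data() + s.size())
        return std::nullopt;
    return value;
}
```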
Apply a minor cleanup in `TypedField`:
* Remove a non-working and unused move constructor.
* Constrain the remaining constructor so that it is not generic enough
to be used as a copy or move constructor (see the sketch below).
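The constraint can be expressed along these lines (a simplified
stand-in, not the actual `TypedField` code):
```
#include <type_traits>
#include <utility>

template <class T>
struct FieldSketch  // hypothetical stand-in for TypedField
{
    T value;

    // Unconstrained, this "too generic" constructor would outcompete
    // the copy/move constructors for non-const lvalue arguments, so it
    // must be constrained away from accepting FieldSketch itself.
    template <class U>
        requires(!std::is_same_v<std::remove_cvref_t<U>, FieldSketch>)
    explicit FieldSketch(U&& u) : value(std::forward<U>(u))
    {
    }
};
```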
There is now an Artifactory (thanks @shichengripple001 and team!) to
hold dependency binaries for the builds.
* Rewrite the `nix` workflow to use it and cut the time down to a mere
21 minutes.
* This workflow should continue to work (just more slowly) for forks
that do not have access to the Artifactory.
Enhance the /crawl endpoint by publishing WebSocket/RPC ports in the
server_info response. The function processing requests to the /crawl
endpoint actually calls server_info internally, so this change enables a
server to advertise its WebSocket/RPC port(s) to peers via the /crawl
endpoint. `grpc` and `peer` ports are included as well.
The new `ports` array contains objects, each containing a `port` for the
listening port (number string), and an array `protocol` listing the
supported protocol(s).
This allows crawlers to build a richer topology without needing to
port-scan nodes. For non-admin users (including peers), the info about
*admin* ports is excluded.
Also increase test coverage for RPC ServerInfo.
Fix #2837.
Curtail the occurrence of order books that are blocked by reduced offers
with the implementation of the fixReducedOffersV1 amendment.
This commit identifies three ways in which offers can be reduced:
1. A new offer can be partially crossed by existing offers, so the new
offer is reduced when placed in the ledger.
2. An in-ledger offer can be partially crossed by a new offer in a
transaction. So the in-ledger offer is reduced by the new offer.
3. An in-ledger offer may be under-funded. In this case the in-ledger
offer is scaled down to match the available funds.
Reduced offers can block order books if the effective quality of the
reduced offer is worse than the quality of the original offer (from the
perspective of the taker). It turns out that, for small values, the
quality of the reduced offer can be significantly affected by the
rounding mode used during scaling computations.
This commit adjusts some rounding modes so that the quality of a reduced
offer is always at least as good (from the taker's perspective) as the
original offer.
The amendment is titled fixReducedOffersV1 because additional ways of
producing reduced offers may come to light. Therefore, there may be a
future need for a V2 amendment.
* Enable api_version 2, which is currently in beta. It is expected to be
marked stable by the next stable release.
* This does not change any defaults.
* The only existing tests changed were one that set the same flag, which
was now redundant, and a couple that tested versioning explicitly.
- Resolve gcc compiler warning:
AccountObjects.cpp:182:47: warning: redundant move in initialization [-Wredundant-move]
- The std::move() operation on trivially copyable types may generate a
compile warning in newer versions of gcc.
- Remove extraneous header (unused imports) from a unit test file.
Fix a bug in the `NODE_SIZE` auto-detection feature in `Config.cpp`.
Specifically, this patch corrects the calculation of the total amount
of RAM available: the value was previously treated as a byte count, but
it is reported in units of the system's memory unit. Additionally, the
patch adjusts the node size based on the number of available hardware
threads of execution.
Misaligned load and store operations are supported by both Intel and ARM
CPUs. However, in C++, these operations are undefined behavior (UB).
Substituting these operations with a `memcpy` fixes this UB. The
compiled assembly code is equivalent to the original, so there is no
performance penalty to using memcpy.
For context: The unaligned load and store operations fixed here were
originally introduced in the slab allocator (#4218).
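The general shape of such a fix (illustrative only):
```
#include <cstdint>
#include <cstring>

// UB: *reinterpret_cast<std::uint64_t const*>(p) when p may be
// misaligned. memcpy has no alignment requirement, and optimizing
// compilers emit the same single load/store instruction for it.
std::uint64_t loadUnaligned64(void const* p)
{
    std::uint64_t v;
    std::memcpy(&v, p, sizeof(v));
    return v;
}

void storeUnaligned64(void* p, std::uint64_t v)
{
    std::memcpy(p, &v, sizeof(v));
}
```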
This assert was put in the wrong place, but it only triggers if shards
are configured. This change moves the assert to the right place and
updates it to ensure correctness.
The assert could be hit after the server downloads some shards. It may
be necessary to restart after the shards are downloaded.
Note that asserts are normally checked only in debug builds, so release
packages should not be affected.
Introduced in: #4319 (66627b26cf)
Global variables in different TUs are initialized in an undefined order.
At least one global variable was accessing a global switchover variable.
This caused the switchover variable to be accessed in an uninitialized
state.
Since the switchover is always explicitly set before transaction
processing, this bug cannot affect transaction processing, but it could
affect unit tests (and potentially the value of some global variables).
Note: at the time of this patch the offending bug is not yet in
production.
Three new fields are added to the `Tx` responses for NFTs:
1. `nftoken_id`: This field is included in the `Tx` responses for
`NFTokenMint` and `NFTokenAcceptOffer`. This field indicates the
`NFTokenID` for the `NFToken` that was modified on the ledger by the
transaction.
2. `nftoken_ids`: This array is included in the `Tx` response for
`NFTokenCancelOffer`. This field provides a list of all the
`NFTokenID`s for the `NFToken`s that were modified on the ledger by
the transaction.
3. `offer_id`: This field is included in the `Tx` response for
`NFTokenCreateOffer` transactions and shows the OfferID of the
`NFTokenOffer` created.
The fields make it easier to track specific tokens and offers. The
implementation includes code (by @ledhed2222) from the Clio project to
extract NFTokenIDs from mint transactions.
The API would allow seeds (and public keys) to be used in place of
accounts at several locations in the API. For example, when calling
account_info, you could pass `"account": "foo"`. The string "foo" is
treated like a seed, so the method returns `actNotFound` (instead of
`actMalformed`, as most developers would expect). In the early days,
this was a convenience to make testing easier. However, it allows for
poor security practices, so it is no longer a good idea. Allowing a
secret or passphrase is now considered a bug. Previously, it was
controlled by the `strict` option on some methods. With this commit,
since the API does not interpret `account` as `seed`, the option
`strict` is no longer needed and is removed.
Removing this behavior from the API is a [breaking
change](https://xrpl.org/request-formatting.html#breaking-changes). One
could argue that it shouldn't be done without bumping the API version;
however, in this instance, there is no evidence that anyone is using the
API in the "legacy" way. Furthermore, it is a potential security hole,
as it allows users to send secrets to places where they are not needed,
where they could end up in logs, error messages, etc. There's no reason
to take such a risk with a seed/secret, since only the public address is
needed.
Resolves: #3329, #3330, #4337
BREAKING CHANGE: Remove non-strict account parsing (#3330)
On macOS, if you have not installed something that depends on `xz`, then your
system may lack `lzma`, resulting in a build error similar to:
```
Downloading libarchive-3.6.0.tar.xz completed [6250.61k]
libarchive/3.6.0:
ERROR: libarchive/3.6.0: Error in source() method, line 120
get(self, **self.conan_data["sources"][self.version], strip_root=True)
ReadError: file could not be opened successfully:
- method gz: ReadError('not a gzip file')
- method bz2: ReadError('not a bzip2 file')
- method xz: CompressionError('lzma module is not available')
- method tar: ReadError('invalid header')
```
The solution is to ensure that `lzma` is installed by installing `xz`.
SOCI is the C++ database access library. The SOCI recipe was updated in
Conan Center Index (CCI), and it breaks for our choice of options. This
breakage occurs when you build with a fresh Conan cache (e.g. when you
submit a PR, or delete `~/.conan/data`).
* Add a custom Conan recipe for SOCI v4.0.3
* Update dependency building to handle exporting and installing Snappy
and SOCI
* Fix workflows to use custom SOCI recipe
* Update BUILD.md to include instructions for exporting the SOCI Conan
recipe:
* `conan export external/soci soci/4.0.3@`
This solution has been verified on Ubuntu 20.04 and macOS.
Context:
* There is a compiler error that the `sqlite3.h` header is not available
when building soci.
* When package B depends on package A, it finds the pieces it needs by
importing the Package Configuration File (PCF) that Conan generates
for package A.
* Read the CMake written by package B to check that it is importing
the PCF correctly and linking its exports correctly.
* Since this can be difficult, it is often more efficient to check
https://github.com/conan-io/conan-center-index/issues for package B
to see if anyone else has seen a similar problem.
* One of the issues points to a problem area in soci's CMake. To
confirm the diagnosis, review soci's CMake (after any patches are
applied) in the Conan build directory `build/$buildId/src/`.
* Review the Conan-generated PCF in
`build/$buildId/build/$buildType/generators/`.
* In this case, the problem was likely (re)introduced by
https://github.com/conan-io/conan-center-index/pull/17026
* If there is a problem in the source or in the Conan recipe, the
fastest fix is to copy the recipe and either:
* Add a source patch to fix any problems in the source.
* Change the recipe to fix any problems in the recipe.
* In this case, this can be done by finding soci's Conan recipe at
https://github.com/conan-io/conan-center-index/tree/master/recipes/soci
and then copying the `all` directory as `external/$packageName` in our
project. Then, make any changes.
* Test packages can be removed from the recipe folder as they are not
needed.
* If adding a patch in the `patches` directory, add a description for
it to `conandata.yml`.
* Since `conanfile.py` has no `version` property on the recipe class,
builders need to pass a version on the command line (like they do
for our `snappy` recipe).
* Add an example command to `BUILD.md`.
Future work: It may make sense to refer to recipes by revision, by
checking in a lockfile.
This change makes progress on the plan in #4371. It does not replicate
the full [matrix] implemented in #3851, but it does replicate the 1.ii
section of the Linux matrix. It leverages "heavy" self-hosted runners,
and demonstrates a repeatable pattern for future matrices.
[matrix]: d794a0f3f1/.github/README.md (continuous-integration)
Address issues related to the removal of `std::{u,bi}nary_function` in
C++17 and some warnings with Clang 16. Some warnings appeared with the
upgrade to Apple clang version 14.0.3 (clang-1403.0.22.14.1).
- `std::{u,bi}nary_function` were removed in C++17. They were empty
classes with a few associated types. We already have conditional code
to define the types. Just make it unconditional.
- libc++ checks a cast in an unevaluated context to see if a type
inherits from a binary function class in the standard library, e.g.
`std::equal_to`, and this causes an error when the type privately
inherits from such a class. Change these instances to public
inheritance.
- We don't need a middle-man for the empty base optimization. Prefer to
inherit directly from an empty class rather than from
`beast::detail::empty_base_optimization`.
- Clang warns when all the uses of a variable are removed by conditional
compilation of assertions. Add a `[[maybe_unused]]` annotation to
suppress it.
- As a drive-by clean-up, remove commented code.
See related work in #4486.
If `--quorum` setting is present on the command line, use the specified
value as the minimum quorum. This allows for the use of a potentially
fork-unsafe quorum, but it is sometimes necessary for small and test
networks.
Fix #4488.
---------
Co-authored-by: RichardAH <richard.holland@starstone.co.nz>
Add instructions for installing rippled using the package managers APT
and YUM. Some steps were adapted from xrpl.org.
---------
Co-authored-by: Michael Legleux <mlegleux@ripple.com>
Newer compilers, such as Apple Clang 15.0, have removed `std::result_of`
as part of C++20. The build instructions provided a fix for this (by
adding a preprocessor definition), but the fix was broken.
This fixes the fix by:
* Adding the `conf` prefix for tool configurations (which had been
forgotten).
* Passing `extra_b2_flags` to `boost` package to fix its build.
* Defining `BOOST_ASIO_HAS_STD_INVOKE_RESULT` in order to build boost
1.77 with a newer compiler.
Add a `NetworkID` field to help prevent replay attacks on and from
side-chains.
The new field must be used when the server is using a network id > 1024.
To preserve legacy behavior, all chains with a network ID less than 1025
retain the existing behavior. This includes Mainnet, Testnet, Devnet,
and hooks-testnet. If `sfNetworkID` is present in any transaction
submitted to any of the nodes on one of these chains, then
`telNETWORK_ID_MAKES_TX_NON_CANONICAL` is returned.
Since chains with a network ID less than 1025, including Mainnet, retain
the existing behavior, there is no need for an amendment.
The `NetworkID` helps to prevent replay attacks because users specify a
`NetworkID` field in every transaction for that chain.
This change introduces a new UINT32 field, `sfNetworkID` ("NetworkID").
There are also three new local error codes for transaction results:
- `telNETWORK_ID_MAKES_TX_NON_CANONICAL`
- `telREQUIRES_NETWORK_ID`
- `telWRONG_NETWORK`
To learn about the other transaction result codes, see:
https://xrpl.org/transaction-results.html
Local error codes were chosen because a transaction is not necessarily
malformed if it is submitted to a node running on the incorrect chain.
This is a local error specific to that node and could be corrected by
switching to a different node or by changing the `network_id` on that
node. See:
https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html
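A sketch of the rule as described (a hypothetical free function with
stand-in result codes; the real check lives in the transaction
processing pipeline and uses the codebase's actual `TER` codes):
```
#include <cstdint>
#include <optional>

enum class Result {
    ok,
    networkIDMakesTxNonCanonical,  // telNETWORK_ID_MAKES_TX_NON_CANONICAL
    requiresNetworkID,             // telREQUIRES_NETWORK_ID
    wrongNetwork,                  // telWRONG_NETWORK
};

Result
checkNetworkID(
    std::uint32_t serverNetworkID,             // this node's network_id
    std::optional<std::uint32_t> txNetworkID)  // the tx's sfNetworkID
{
    if (serverNetworkID <= 1024)
    {
        // Legacy behavior (includes Mainnet): the field must be absent.
        if (txNetworkID)
            return Result::networkIDMakesTxNonCanonical;
    }
    else
    {
        // Network id > 1024: the field is mandatory and must match.
        if (!txNetworkID)
            return Result::requiresNetworkID;
        if (*txNetworkID != serverNetworkID)
            return Result::wrongNetwork;
    }
    return Result::ok;
}
```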
In addition to using `NetworkID`, it is still generally recommended to
use different accounts and keys on side-chains. However, people will
undoubtedly use the same keys on multiple chains; for example, this is
common practice on other blockchain networks. There are also some
legitimate use cases for this.
A `app.NetworkID` test suite has been added, and `core.Config` was
updated to include some network_id tests.
The `Ledger` class contains two `SHAMap` instances: the state and
transaction maps. Previously, the maps were dynamically allocated using
`std::make_shared` despite the fact that they did not require lifetime
management separate from the lifetime of the `Ledger` instance to which
they belong.
The two `SHAMap` instances are now regular member variables. Some smart
pointers and dynamic memory allocations were avoided by using
stack-based alternatives.
Commit 3 of 3 in #4218.
The `SHAMapItem` class contains a variable-sized buffer that
holds the serialized data associated with a particular item
inside a `SHAMap`.
Prior to this commit, the buffer for the serialized data was
allocated separately. Coupled with the fact that most instances
of `SHAMapItem` were wrapped in a `std::shared_ptr`, this meant
that an instantiation might result in up to three separate
memory allocations.
This commit switches away from `std::shared_ptr` for `SHAMapItem`
and uses `boost::intrusive_ptr` instead, allowing the reference
count for an instance to live inside the instance itself. Coupled
with using a slab-based allocator to optimize memory allocation
for the most commonly sized buffers, the net result is significant
memory savings. In testing, the reduction in memory usage hovers
between 400MB and 650MB. Other scenarios might result in larger
savings.
In performance testing with NFTs, this commit reduces memory size by
about 15% sustained over long duration.
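In outline, the intrusive approach embeds the reference count in the
object itself (a minimal sketch; `SHAMapItem` additionally packs the
variable-sized buffer and uses the slab allocator):
```
#include <atomic>
#include <boost/intrusive_ptr.hpp>

class ItemSketch  // hypothetical stand-in for SHAMapItem
{
    mutable std::atomic<int> refs_{0};

    friend void
    intrusive_ptr_add_ref(ItemSketch const* p)
    {
        p->refs_.fetch_add(1, std::memory_order_relaxed);
    }

    friend void
    intrusive_ptr_release(ItemSketch const* p)
    {
        if (p->refs_.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete p;
    }
};

int main()
{
    // One allocation holds both the object and its reference count; a
    // std::shared_ptr would add a separate control block.
    boost::intrusive_ptr<ItemSketch> item(new ItemSketch);
}
```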
Commit 2 of 3 in #4218.
When instantiating a large number of fixed-size objects on the heap,
the overhead that dynamic memory allocation APIs impose will quickly
become significant.
In some cases, allocating a large amount of memory at once and using
a slab allocator to carve the large block into fixed-size units
that are used to service memory requests will help to reduce
memory fragmentation significantly and, potentially, improve overall
performance.
This commit introduces a new `SlabAllocator<>` class that exposes an
API that is _similar_ to the C++ concept of an `Allocator` but it is
not meant to be a general-purpose allocator.
It should not be used unless profiling and analysis of specific memory
allocation patterns indicates that the additional complexity introduced
will improve the performance of the system overall, and subsequent
profiling proves it.
A helper class, `SlabAllocatorSet<>`, simplifies handling of variably
sized objects that benefit from slab allocations.
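A greatly simplified sketch of the idea (one slab, one fixed block
size, no thread safety; the real `SlabAllocator<>` is considerably more
careful about alignment, growth, and concurrency):
```
#include <cstddef>

class SlabSketch
{
    std::byte* const slab_;        // one large upfront allocation
    void* freeList_ = nullptr;     // intrusive free list through blocks
    std::size_t const blockSize_;  // must be >= sizeof(void*)

public:
    SlabSketch(std::size_t blockSize, std::size_t count)
        : slab_(new std::byte[blockSize * count]), blockSize_(blockSize)
    {
        // Thread every block onto the free list.
        for (std::size_t i = 0; i != count; ++i)
            deallocate(slab_ + i * blockSize_);
    }

    ~SlabSketch() { delete[] slab_; }

    void* allocate()
    {
        if (!freeList_)
            return nullptr;  // exhausted: caller falls back to the heap
        void* const p = freeList_;
        freeList_ = *static_cast<void**>(p);
        return p;
    }

    void deallocate(void* p)
    {
        // Reuse the block's own storage to link it into the free list.
        *static_cast<void**>(p) = freeList_;
        freeList_ = p;
    }
};
```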
This commit incorporates improvements suggested by Greg Popovitch
(@greg7mdp).
Commit 1 of 3 in #4218.
Add Clio-specific JSS constants to ensure a common vocabulary of
keywords in Clio and this project. By providing visibility of the full
API keyword namespace, it reduces the likelihood of developers
introducing minor variations on names used by Clio, or unknowingly
claiming a keyword that Clio has already claimed. This change moves this
project slightly away from having only the code necessary for running
the core server, but it is a step toward the goal of keeping this
server's and Clio's APIs similar. The added JSS constants are annotated
to indicate their relevance to Clio.
Clio can be found here: https://github.com/XRPLF/clio
Signed-off-by: ledhed2222 <ledhed2222@users.noreply.github.com>
- Include NFTokenPages in account_objects to make it easier to
understand an account's Owner Reserve and simplify app development.
- Update related tests and documentation.
- Fix #4347.
For info about the Owner Reserve, see https://xrpl.org/reserves.html
---------
Co-authored-by: Scott Schurr <scott@ripple.com>
Co-authored-by: Ed Hennis <ed@ripple.com>
Change `ledger_data` to return an empty list when all entries are
filtered out.
When the `type` field is specified for the `ledger_data` method, it is
possible that no objects of the specified type are found. This can even
occur if those objects exist, but not in the section that the server
checked while serving your request. Previously, the `state` field of the
response has the value `null`, instead of an empty array `[]`. By
changing this to an empty array, the response is the same data type so
that clients can handle it consistently.
For example, in Python, `for entry in state` should now work correctly.
It would raise an exception if `state` is `null` (or `None`).
This could break client code that explicitly checks for null. However,
this fix aligns the response with the documentation, where the `state`
field is an array.
Fix #4392.
Previously, the object `account_data` in the `account_info` response
contained a single field `Flags` that contains flags of an account. API
consumers must perform bitwise operations on this field to retrieve the
account flags.
This change adds a new object, `account_flags`, at the top level of the
`account_info` response `result`. The object contains relevant flags of
the account. This makes it easier to write simple code to check a flag's
value.
The flags included may depend on the amendments that are enabled.
Fix #2457.
Log exception messages at several locations.
Previously, these were locations where an exception was caught, but the
exception message was not logged. Logging the exception messages can be
useful for analysis or debugging. The additional logging could have a
small negative performance impact.
Fix #3213.
In the release notes (current and historical), there is a link to the
`Builds` directory. By creating `Builds/README.md` with a link to
`BUILD.md`, it is easier to find the build instructions.
* Create the FeeSettings object in genesis ledger.
* Initialize with default values from the config. Removes the need to
pass a Config down into the Ledger initialization functions, including
setup().
* Drop the undocumented fee config settings in favor of the [voting]
section.
* Fix #3734.
* If you previously used fee_account_reserve and/or fee_owner_reserve,
you should change to using the [voting] section instead. Example:
```
[voting]
account_reserve=10000000
owner_reserve=2000000
```
* Because old Mainnet ledgers (prior to 562177 - yes, I looked it up)
don't have FeeSettings, some of the other ctors will default them to
the config values before setup() tries to load the object.
* Update default Config fee values to match Mainnet.
* Fix unit tests:
* Updated fees: Some tests are converted to use computed values of fee
object, but the default Env config was also updated to fix the rest.
* Unit tests that check the structure of the ledger have updated
hashes and counts.
Add the ability to mark amendments as obsolete. There are some known
amendments that should not be voted for because they are broken (or
similar reasons).
This commit marks four amendments as obsolete:
1. `CryptoConditionsSuite`
2. `NonFungibleTokensV1`
3. `fixNFTokenDirV1`
4. `fixNFTokenNegOffer`
When an amendment is `Obsolete`, voting for the amendment is prevented.
A determined operator can still vote for the amendment by changing the
source, and doing so does not break any protocol rules.
The "feature" command now does not modify the vote for obsolete
amendments.
Before this change, there were two options for an amendment's
`DefaultVote` behavior: yes and no.
After this change, there are three options for an amendment's
`VoteBehavior`: DefaultYes, DefaultNo, and Obsolete.
To be clear, if an obsolete amendment were to (somehow) be activated by
consensus, the server still has the code to process transactions
according to that amendment, and would not be amendment blocked. It
would function the same as if it had been voting "no" on the amendment.
Resolves #4014.
Incorporates review feedback from @scottschurr.
Fix a case where `ripple::Expected` returned a json array, not a value.
The problem was that `Expected` invoked the wrong constructor for the
expected type, which resulted in a constructor that took multiple
arguments being interpreted as an array.
A proposed fix was provided by @godexsoft, which involved a minor
adjustment to three constructors that replaces the use of curly braces
with parentheses. This makes `Expected` usable for
[Clio](https://github.com/XRPLF/clio).
A unit test is also included to ensure that the issue doesn't occur
again in the future.
Make it easy for projects to depend on libxrpl by adding an `ALIAS`
target named `xrpl::libxrpl` for projects to link.
The name was chosen because:
* The current library target is named `xrpl_core`. There is no other
"non-core" library target against which we need to distinguish the
"core" library. We only export one library target, and it should just
be named after the project to keep things simple and predictable.
* Underscores in target or library names are generally discouraged.
* Every target exported in CMake should be prefixed with the project
name.
By adding an `ALIAS` target, existing consumers who use the `xrpl_core`
target will not be affected.
* In the future, there can be a migration plan to make `xrpl_core` the
`ALIAS` target (and `libxrpl` the "real" target, which will affect the
filename of the compiled binary), and eventually remove it entirely.
Also:
* Fix the Conan recipe so that consumers using Conan import a target
named `xrpl::libxrpl`. This way, every consumer can use the same
instructions.
* Document the two easiest methods to depend on libxrpl. Both have been
tested.
* See #4443.
* Remove obsolete build instructions.
* By using Conan, builders can choose which dependencies specifically to
build and link as shared objects.
* Refactor the build instructions based on the plan in #4433.
Without the protocol amendment introduced by this commit, an NFT ID can
be reminted in this manner:
1. Alice creates an account and mints an NFT.
2. Alice burns the NFT with an `NFTokenBurn` transaction.
3. Alice deletes her account with an `AccountDelete` transaction.
4. Alice re-creates her account.
5. Alice mints an NFT with an `NFTokenMint` transaction (params:
`NFTokenTaxon` = 0, `Flags` = 9).
This will mint an NFT with the same `NFTokenID` as the one minted in step
1. The params that construct the NFT ID will cause a collision in
`NFTokenID` if their values are equal before and after the remint.
With the `fixNFTokenRemint` amendment, there is a new sequence number
construct which avoids this scenario:
- A new `AccountRoot` field, `FirstNFTSequence`, stays constant over
time.
- This field is set to the current account sequence when the account
issues their first NFT.
- Otherwise, it is not set.
- The sequence of a newly-minted NFT is computed by: `FirstNFTSequence +
MintedNFTokens`.
- `MintedNFTokens` is then incremented by 1 for each mint.
Furthermore, there is a new account deletion restriction:
- An account can only be deleted if `FirstNFTSequence + MintedNFTokens +
256` is less than the current ledger sequence.
- 256 was chosen because it already exists in the current account
deletion constraint.
Without this restriction, an NFT may still be remintable. Example
scenario:
1. Alice's account sequence is at 1.
2. Bob is Alice's authorized minter.
3. Bob mints 500 NFTs for Alice. The NFTs will have sequences 1-501, as
NFT sequence is computed by `FirstNFTokenSequence + MintedNFTokens`.
4. Alice deletes her account at ledger 257 (as required by the existing
`AccountDelete` amendment).
5. Alice re-creates her account at ledger 258.
6. Alice mints an NFT. `FirstNFTokenSequence` initializes to her account
sequence (258), and `MintedNFTokens` initializes as 0. This
newly-minted NFT would have a sequence number of 258, which is a
duplicate of what she issued through authorized minting before she
deleted her account.
---------
Signed-off-by: Shawn Xie <shawnxie920@gmail.com>
There were situations where `marker`s returned by `account_lines` did
not work on subsequent requests, returning "Invalid Parameters".
This was caused by the optimization implemented in "Enforce account RPC
limits by account objects traversed":
e28989638d
Previously, the ledger traversal would find up to `limit` account lines,
and if there were more, the marker would be derived from the key of the
next account line. After the change, ledger traversal would _consider_
up to `limit` account objects of any kind found in the account's
directory structure. If there were more, the marker would be derived
from the key of the next object, regardless of type.
With this optimization, it is expected that `account_lines` may return
fewer than `limit` account lines - even 0 - along with a marker
indicating that there may be more available.
The problem is that this optimization did not update the
`RPC::isOwnedByAccount` helper function to handle those other object
types. Additionally, XLS-20 added `ltNFTOKEN_OFFER` ledger objects to
the set of objects stored in the account's directory structure, but
did not update `RPC::isOwnedByAccount` to be able to handle those
objects. The `marker` provided in the example for #4354 includes the key
for an `ltNFTOKEN_OFFER`. When that `marker` is used on subsequent
calls, it is not recognized as valid, and so the request fails.
* Add unit test that walks all the object types and verifies that all of
their indexes can work as a marker.
* Fix#4340
* Fix#4354
When writing objects to the NodeStore, we need to convert them from
the in-memory format to the binary format used by the node store.
The conversion is handled by the `EncodedBlob` class, which is only
instantiated on the stack. Coupled with the fact that most objects
are under 1024 bytes in size, this presents an opportunity to elide
a memory allocation in a critical path.
This commit also simplifies the interface of `EncodedBlob` and
eliminates a subtle corner case that could result in dangling
pointers.
These changes are not expected to cause a significant reduction in
memory usage. The change avoids the use of a `std::shared_ptr` when
unnecessary and tries to use stack-based memory allocation instead
of the heap whenever possible.
This is a net gain both in terms of memory usage (lower
fragmentation) and performance (less work to do at runtime).
In rare circumstances, both `onRequestTimeout` and the response handler
(`onSiteFetch` or `onTextFetch`) can get queued and processed. In all
observed cases, the response handler processes a network error.
`onRequestTimeout` usually runs first, but on rare occasions, the
response handler runs first, which leaves `activeResource` empty.
* Prevent internal error by catching overflow exception in `gateway_balances`.
* Treat `gateway_balances` obligations overflow as max (largest valid) `STAmount`.
* Note that very large sums of STAmount are approximations regardless.
---------
Co-authored-by: Scott Schurr <scott@ripple.com>
- MSVC 19.x reported a warning about import paths in boost for
function_output_iterator class (boost::function_output_iterator).
- Eliminate that warning by updating the import paths, as suggested by
the compiler warnings.
Port numbers can now be specified using either a colon or a space.
Examples:
1.2.3.4:51235
1.2.3.4 51235
- In the configuration file, an annoying "gotcha" for node operators is
accidentally specifying IP:PORT combinations using a colon. The code
previously expected a space, not a colon, and did not provide
good feedback when this operator error was made.
- This change simply allows this mistake (using a colon) to be fixed
automatically, preserving the intention of the operator.
- Add unit tests, which test the functionality when specifying IP:PORT
in the configuration file.
- The RPCCall test regime is not specific enough to test this
functionality, so it has been tested by hand.
- Ensure IPv6 addresses are not confused for ip:port
---------
Co-authored-by: Elliot Lee <github.public@intelliot.com>
- Implement the `operator==` and the `operator<=>` (aka the spaceship
operator) in `base_uint`, `Issue`, and `Book`.
- C++20-compliant compilers automatically provide the remaining
comparison operators (e.g. `operator<`, `operator<=`, ...).
- Remove the function compare() because it is no longer needed.
- Maintain the same semantics as the existing code.
- Add some unit tests to gain further confidence.
- Fix #2525.
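The pattern, in brief (a reduced stand-in, not the real classes):
```
#include <compare>
#include <cstdint>

struct BookSketch  // reduced stand-in for base_uint/Issue/Book
{
    std::uint32_t in;
    std::uint32_t out;

    friend constexpr bool
    operator==(BookSketch const&, BookSketch const&) = default;

    // The compiler synthesizes <, <=, >, and >= from this.
    friend constexpr auto
    operator<=>(BookSketch const&, BookSketch const&) = default;
};

static_assert(BookSketch{1, 2} < BookSketch{1, 3});
static_assert(BookSketch{1, 2} == BookSketch{1, 2});
```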
In Reporting Mode, a server would core dump when it is not able to read
from Cassandra. This patch prevents the core dump when Cassandra is down
for reporting mode servers. This does not fix the root cause, but it
cuts down on some of the resulting noise.
* Follow-up to #4336
* NFToken is the naming convention in the codebase (rather than NFT)
* Rename `lsfDisallowIncomingNFTOffer` to `lsfDisallowIncomingNFTokenOffer`
* Rename `asfDisallowIncomingNFTOffer` to `asfDisallowIncomingNFTokenOffer`
Partially revert the functionality introduced
with #4195 / 5a15229 (part of 1.10.0-b1).
Acknowledgements:
Aaron Hook for responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find.
To report a bug, please send a detailed report to:
bugs@xrpl.org
---------
Co-authored-by: Nik Bougalis <nikb@bougalis.net>
- Copies the recipe for Snappy from Conan Center, but removes three
lines that explicitly link the standard library, which prevents
builders from statically linking it.
- Removes the recipe for RocksDB now that an official recipe for version
6.27.3 is in Conan Center.
Developers will likely need to remove cached versions of both RocksDB
and Snappy:
```
conan remove -f rocksdb
conan remove -f snappy
```
---------
Co-authored-by: John Freeman <jfreeman08@gmail.com>
* Set "fail-fast: false" so that multiple jobs in one workflow can
finish independently. By default, if one job fails, other running jobs
will be aborted, even if the other jobs are working fine and are
almost done. This leads to wasted time and resources if the failure
is, for example, OS specific, or due to a flaky unit test, and the
failed job needs to be re-run, because all the jobs end up re-running.
* Put conditions back into the windows.yml job (manual, and for
a specific branch name and that job). This prevents Github Actions
from sending "No jobs were run" failure emails on every commit.
Without this amendment, for NFTs using broker mode, if the sell offer contains a destination and that destination is the buyer account, anyone can broker the transaction. Also, if a buy offer contains a destination and that destination is the seller account, anyone can broker the transaction. This is not ideal and is misleading.
Instead, with this amendment: If you set a destination, that destination needs to be the account settling the transaction. So, the broker must be the destination if they want to settle. If the buyer is the destination, then the buyer must accept the sell offer, as you cannot broker your own offers.
If users want their offers open to the public, then they should not set a destination. On the other hand, if users want to limit who can settle the offers, then they would set a destination.
Unit tests:
1. The broker cannot broker a destination offer to the buyer and the buyer must accept the sell offer. (0 transfer)
2. If the broker is the destination, the broker will take the difference. (broker mode)
Fixes #4374
It was possible for a broker to combine a sell and a buy offer from an account that already owns an NFT. Such brokering extracts money from the NFT owner and provides no benefit in return.
With this amendment, the code detects when a broker is returning an NFToken to its initial owner and prohibits the transaction. This forbids a broker from selling an NFToken to the account that already owns the token. This fixes a bug in the original implementation of XLS-20.
Thanks to @nixer89 for suggesting this fix.
Fixes 3 issues:
In the following scenario, an account cannot perform NFTokenAcceptOffer even though it should be allowed to:
- BROKER has < S
- ALICE offers to sell token for S
- BOB offers to buy token for > S
- BROKER tries to bridge the two offers
This currently results in `tecINSUFFICIENT_FUNDS`, but should not because BROKER is not spending any funds in this transaction, beyond the transaction fee.
When trading an NFT using IOUs, and when the issuer of the IOU has any non-zero value set for TransferFee on their account via AccountSet (not a TransferFee on the NFT), and when the sale amount is equal to the total balance of that IOU that the buyer has, the resulting balance for the issuer of the IOU will become positive. This means that the buyer of the NFT was supposed to have caused a certain amount of IOU to be burned. That amount was unable to be burned because the buyer couldn't cover it. This results in the buyer owing this amount back to the issuer. In a real world scenario, this is appropriate and can be settled off-chain.
Currency issuers could not make offers for NFTs using their own currency, receiving `tecINSUFFICIENT_FUNDS` if they tried to do so.
With this fix, they are now able to buy/sell NFTs using their own currency.
Three static member functions are introduced with
definitions consistent with std::numeric_limits:
static constexpr Number min() noexcept;
Returns: The minimum positive value. This is the value closest to zero.
static constexpr Number max() noexcept;
Returns: The maximum possible value.
static constexpr Number lowest() noexcept;
Returns: The negative value which is less than all other values.
You can set a thread-local flag to direct Number how to round
non-exact results with the syntax:
Number::rounding_mode prev_mode = Number::setround(Number::towards_zero);
This flag will stay in effect for this thread only until another call
to setround. The previously set rounding mode is returned.
You can also retrieve the current rounding mode with:
Number::rounding_mode current_mode = Number::getround();
The available rounding modes are:
* to_nearest : Rounds to nearest representable value. On tie, rounds
to even.
* towards_zero : Rounds towards zero.
* downward : Rounds towards negative infinity.
* upward : Rounds towards positive infinity.
The default rounding mode is to_nearest.
* Conversions to Number are implicit
* Conversions away from Number are explicit and potentially lossy
* If lossy, round to nearest, and to even on tie
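Putting the pieces together, usage looks like this (a sketch; the
include path is an assumption based on the rippled source tree):
```
#include <ripple/basics/Number.h>  // assumed header location

void
divideTowardsZero()
{
    // setround returns the previously set mode, so it can be restored.
    Number::rounding_mode const saved =
        Number::setround(Number::towards_zero);

    Number const two{2};
    Number const three{3};
    Number const ratio = two / three;  // non-exact: rounded towards zero

    Number::setround(saved);  // restore the thread-local mode
}
```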
* Introduces amendment `XRPFees`
* Convert fee voting and protocol messages to use XRPAmounts
* Includes Validations, Change transactions, the "Fees" ledger object,
and subscription messages
* Improve handling of a 0-drop reference fee with TxQ, for use with networks that do not want to require fees
* Note that fee escalation logic is still in place, which may cause the
open ledger fee to rise if the network is busy. 0 drop transactions
will still queue, and fee escalation can be effectively disabled by
modifying the configuration on all nodes
* Change default network reserves to match Mainnet
* Name the new SFields *Drops (not *XRP)
* Reserve SField IDs for Hooks
* Clarify comments explaining the ttFEE transaction field validation
Fixes #4005
Makes it possible for internal RPC Error Codes to associate
themselves with a non-OK (200) HTTP status code. There are
quite a number of RPC responses in addition to tooBusy that
now have non-OK HTTP status codes.
The new HTTP return codes are only enabled by including
"ripplerpc": "3.0" or higher in the original request.
Otherwise, the historical value, 200, continues to be returned.
This ensures that this is not a breaking change.
featureDisallowIncoming is a new amendment that would allow users to opt-out of incoming Checks, Payment Channels, NFTokenOffers, and trust lines. This commit includes tests.
Adds four new AccountSet Flags:
1. asfDisallowIncomingNFTOffer
2. asfDisallowIncomingCheck
3. asfDisallowIncomingPayChan
4. asfDisallowIncomingTrustline
Introduces a conanfile.py (and a Conan recipe for RocksDB) to enable building the package with Conan, choosing more recent default versions of dependencies. It removes almost all of the CMake build files related to dependencies, and the configurations for Travis CI and GitLab CI. A new set of cross-platform build instructions are written in BUILD.md.
Includes example GitHub Actions workflow for each of Linux, macOS, Windows.
* Test on macos-12
We use the <concepts> library which was not added to Apple Clang until
version 13.1.6. The default Clang on macos-11 (the sometimes current
version of macos-latest) is 13.0.0, and the default Clang on macos-12 is
14.0.0.
Closes #4223.
Clang warned about the code removed in this patch with the warning:
```
warning: out-of-line definition of constexpr static data member is
redundant in C++17 and is deprecated [-Wdeprecated]
```
There's a bug in gdb where unsigned template parameters cause issues with
RTTI. This patch changes a template parameter from `size_t` to `int` to
work around this gdb bug.
Reduce the reserve requirements from 20/5 to 10/2 in line with the current network votes. The requirements of 10/2 have been on the network long enough that new nodes should not still have the old reserve amount.
Co-authored-by: Richard Holland <richard.holland@starstone.co.nz>
* Per actions/runner-images#6002, ubuntu-18.04 is being deprecated. If
latest ever fails in the future, we'll need to fix the jobs anyway, so
catch it early.
* Use long option names
* Force clang-format to ubuntu-20.04 because LLVM 10 is not available for 22.04
* Improve move semantics in Expected:
This patch unconditionally moves an `Unexpected<U>` value parameter as
long as `U` is not a reference. If `U` is a reference the code should
not compile. An error type that holds a reference is a strange use-case,
and an overload is not provided. If it is required in the future it can
be added.
The `Expected(U r)` overload should take a forwarding ref.
* Replace enable_if with concepts in Expected
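A sketch of the constructor described (simplified; the real
`ripple::Expected` has more machinery):
```
#include <type_traits>
#include <utility>

template <class U>
struct UnexpectedSketch  // stand-in for Unexpected<U>
{
    U error;
};

template <class T, class E>
class ExpectedSketch  // stand-in for Expected<T, E>
{
    E error_;

public:
    // Unconditionally move from the value parameter; the constraint
    // makes reference error types fail to compile rather than copy.
    template <class U>
        requires(!std::is_reference_v<U>)
    ExpectedSketch(UnexpectedSketch<U> u) : error_(std::move(u.error))
    {
    }
};
```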
* Removed a reference to the default number of workers varying based on whether a node has validation enabled. Workers default to the number of processor cores + 2: https://github.com/ripple/rippled/blob/develop/src/ripple/core/impl/JobQueue.cpp#L166
* Protobuf v2 and Ubuntu 16.04 are no longer supported.
* Updated protobuf version as v3 is now supported, fixed typos, automatically sent number of processors when building boost & rippled.
It turns out that the feature enabled by the tfTrustLine flag
on an NFTokenMint transaction could be used as a means to
attack the NFToken issuer. Details are in
https://github.com/XRPLF/rippled/issues/4300
The fixRemoveNFTokenAutoTrustLine amendment removes the
ability to set the tfTrustLine flag on an NFTokenMint
transaction.
Closes #4300.
When starting, the code generates a new ephemeral private key and
a corresponding certificate to go along with it. This process can
take time and, while this is unlikely to matter for normal server
operations, it can have a significant impact for unit testing and
development. Profiling data suggests that ~20% of the time needed
for a unit test run can be attributed to this.
This commit does several things:
1. It restructures the code so that a new self-signed certificate
and its corresponding private key are only initialized once at
startup; this has minimal impact on the operation of a regular
server.
2. It provides new default DH parameters. This doesn't impact the
security of the connection, but those who compile from scratch
can generate new parameters if they so choose.
3. It properly sets the version number in the certificate, fixing
issue #4007; thanks to @donovanhide for the report.
4. It uses SHA-256 instead of SHA-1 as the hash algorithm for the
certificate and adds some X.509 extensions as well as a random
128-bit serial number.
5. It rounds the certificate's "start of validity" period so that
the server's precise startup time cannot be easily deduced and
limits the validity period to two years, down from ten years.
6. It removes some CBC-based ciphers from the default cipher list
to avoid some potential security issues, such as CVE-2016-2107
and CVE-2013-0169.
Caching the base58check-encoded version of an `AccountID` has
performance advantages because of the computationally
heavy cost associated with the conversion, which requires the
application of SHA-256 twice.
This commit makes the cache significantly more efficient in terms
of memory used: it eliminates the map, using a vector with a size
that is determined by the configured size of the node, and a hash
function to directly map any given `AccountID` to a specific slot
in the cache; the eviction policy is simple: in case of collision
the existing entry is removed and replaced with the new data.
Previously, use of the cache was optional and required additional
effort by the programmer. Now the cache is automatic and does not
require any additional work or information.
The new cache also utilizes a 64-way spinlock, to help reduce any
contention that the pressure on the cache would impose.
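Schematically, the cache behaves like this (a sketch with hypothetical
names; the real cache is sized by node configuration and guarded by the
64-way spinlock, and `encodeBase58Check` stands in for the expensive
conversion, assumed to be provided elsewhere):
```
#include <array>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <string_view>
#include <utility>
#include <vector>

using KeySketch = std::array<std::uint8_t, 20>;  // stand-in: AccountID

std::string encodeBase58Check(KeySketch const& k);  // two SHA-256 passes

class EncodingCacheSketch
{
    std::vector<std::pair<KeySketch, std::string>> slots_;

public:
    explicit EncodingCacheSketch(std::size_t size) : slots_(size) {}

    std::string const& get(KeySketch const& k)
    {
        // A hash maps the key directly to one slot; on collision, the
        // old entry is simply overwritten (the eviction policy).
        auto const h = std::hash<std::string_view>{}(std::string_view(
            reinterpret_cast<char const*>(k.data()), k.size()));
        auto& slot = slots_[h % slots_.size()];
        if (slot.second.empty() || slot.first != k)
            slot = {k, encodeBase58Check(k)};
        return slot.second;
    }
};
```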
Each node on the network is supposed to have a unique cryptographic
identity. Typically, this identity is generated randomly at startup
and stored for later reuse in the (poorly named) file `wallet.db`.
If the file is copied, it is possible for two nodes to share the
same node identity. This is generally not desirable and existing
servers will detect and reject connections to other servers that
have the same key.
This commit achieves three things:
1. It improves the detection code to pinpoint instances where two
distinct servers with the same key connect with each other. In
that case, servers will log an appropriate error and shut down
pending intervention by the server's operator.
2. It makes it possible for server administrators to securely and
easily generate new cryptographic identities for servers using
the new `--newnodeid` command line argument. When a server is
started using this command, it will generate and save a random
secure identity.
3. It makes it possible to configure the identity using a command
line option, so that it can be derived from data or parameters
associated with the container or hardware where the instance is
running: pass the `--nodeid` option, followed by a single argument
identifying the information from which the
node's identity is derived. For example, the following command
will result in nodes with different hostnames having different
node identities: `rippled --nodeid $HOSTNAME`
The last option is particularly useful for automated cloud-based
deployments that minimize the need for storing state and provide
unique deployment identifiers.
**Important note for server operators:**
Depending on variables outside of the control of this code,
such as operating system version or configuration, permissions,
and more, it may be possible for other users or programs to be
able to access the command line arguments of other processes
on the system.
If you are operating in a shared environment, you should avoid
using this option, preferring instead to use the `[node_seed]`
option in the configuration file, and use permissions to limit
exposure of the node seed.
A user who gains access to the value used to derive the node's
unique identity could impersonate that node.
The commit also updates the minimum supported server protocol
version to `XRPL/2.1`, which has been supported since version
1.5.0, and eliminates support for `XRPL/2.0`.
We profiled different algorithms and data structures to understand which
strategy is best from a performance standpoint:
- Linear search on an array;
- Binary search on a sorted array;
- Using `std::map`; and
- Using `std::unordered_map`.
Both linear search and `std::unordered_map` outperformed the other
alternatives, so no change to the existing data structure is justified.
If more handlers are added, this should be revisited.
For some additional details and timings, please see:
https://github.com/XRPLF/rippled/issues/3298#issuecomment-1185946010
Trust lines must be between two different accounts, but two trust lines
exist in which an account extends trust to itself. They were created in
the early days, likely because of bugs that have since been fixed. The
new fixTrustLinesToSelf amendment will remove those trust lines when it
activates.
The existing spinlock code, used to protect SHAMapInnerNode
child lists, has a mistake that can allow the same child to
be repeatedly locked under some circumstances.
The bug was in the `SpinBitLock::lock` loop condition check
and would result in the loop terminating early.
This commit fixes this and further simplifies the lock loop
making the correctness of the code easier to verify without
sacrificing performance.
It also promotes the spinlock class from an implementation
detail to a more general purpose, easier to use lock class
with clearer semantics. Two different lock types now allow
developers to easily grab either a single spinlock from a
group of spinlocks (packed in an unsigned integer) or to
grab all of the spinlocks at once.
While this commit makes spinlocks more widely available to
developers, they are rarely the best tool for the job. Use
them judiciously and only after careful consideration.
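In essence, a packed spinlock looks like this (a sketch; the production
classes also provide the lock over all bits at once):
```
#include <atomic>
#include <cstdint>

// Each bit of the shared integer is an independent lock; many such
// locks pack into one word (e.g. one per SHAMapInnerNode child).
class PackedSpinlockSketch
{
    std::atomic<std::uint16_t>& bits_;
    std::uint16_t const mask_;

public:
    PackedSpinlockSketch(std::atomic<std::uint16_t>& bits, int bit)
        : bits_(bits), mask_(static_cast<std::uint16_t>(1 << bit))
    {
    }

    void lock()
    {
        while (true)
        {
            // fetch_or returns the previous value: if our bit was
            // clear, we just acquired the lock.
            if ((bits_.fetch_or(mask_, std::memory_order_acquire) &
                 mask_) == 0)
                return;
            // Otherwise spin (read-only) until the holder clears it.
            while (bits_.load(std::memory_order_relaxed) & mask_)
                ;
        }
    }

    void unlock()
    {
        bits_.fetch_and(
            static_cast<std::uint16_t>(~mask_),
            std::memory_order_release);
    }
};
```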
The XLS-20 implementation contained two bugs that would require the
introduction of amendments. This complicates the adoption of XLS-20
by requiring a staggered amendment activation, first of the two fix
amendments, followed by the `NonFungibleTokensV1` amendment.
After consideration, the consensus among node operators is that the
process should be simplified by the introduction of a new amendment
that, if enabled, would behave as if the `NonFungibleTokensV1` and
the two fix amendments (`fixNFTokenDirV1` and `fixNFTokenNegOffer`)
were activated at once.
This commit implements this proposal; it does not introduce any new
functionality or additional features, above and beyond that offered
by the existing amendments.
The peer discovery protocol depends on peers exchanging messages
listing IP addresses for other peers.
Under normal circumstances, these messages should not be sent
frequently; the existing code would track the earliest time a
new message should be processed, but did not actually enforce
that limit.
An incorrect SQL query could cause the server to improperly
configure its voting state after a restart; typically, this
would manifest as an apparent failure to store a vote which
the administrator of the server had configured.
This commit fixes the broken SQL and ensures that amendment
votes are properly reloaded post-restart, and closes #4220.
The existing code would, incorrectly, allow negative amounts in offers
for non-fungible tokens. Such offers would be handled very differently
depending on the context: a direct offer would fail with an error code
indicating an internal processing error, whereas brokered offers would
improperly succeed.
This commit introduces the `fixNFTokenNegOffer` amendment that detects
such offers during creation and returns an appropriate error code.
The commit also extends the existing code to allow for buy offers that
contain a `Destination` field, so that a specific broker can be set in
the offer.
A few unit tests have historically generated a lot of noise
in the console from log writes. This noise was not useful
and made it harder to locate actual test failures.
Changing the log level of these tests from
- severities::kError to
- severities::kDisabled
removes that noise from the logs.
While there should never be a missing node when copying the SHAMap,
rippled should not terminate when there's an error rotating the
database. This patch aborts the database rotation rather than aborting rippled.
ThreadSafetyAnalysis was used to identify race conditions in this file.
This analysis was motivated by a (rare) crash while running unit tests.
Add locks to Shard flagged by ThreadSafetyAnalysis
The existing code properly parses the network_id parameter from
the configuration file, but it does not properly set up the code to
use the value correctly. As a result the configured `network_id` is
ignored.
o Fixes an off-by-one when determining which NFTokenPage an
NFToken belongs on.
o Improves handling of packed sets of 32 NFTs with
identical low 96-bits.
o Fixes marker handling by the account_nfts RPC command.
o Tightens constraints of NFTokenPage invariant checks.
Adds unit tests to exercise the fixed cases as well as tests
for previously untested functionality.
The amendment increases the maximum size of an account's signer
list from 8 to 32.
Like all new features, the associated amendment is configured with
a default vote of "no" and server operators will have to vote for
it explicitly if they believe it is useful.
* "A path is considered invalid if and only if it enters and exits an
address node through trust lines where No Ripple has been enabled for
that address." (https://xrpl.org/rippling.html#specifics)
* When loading trust lines for an account "Alice" which was reached
via a trust line that has the No Ripple flag set on Alice's side, do
not use or cache any of Alice's trust lines which have the No Ripple
flag set on Alice's side. For typical "end-user" accounts, this will
return no trust lines.
One of the two versions of the `rngfill` function accepts a pointer
to a buffer and a size (in bytes). The function aims to fill the
provided `buffer` with `size` random bytes. It does this in chunks
of 8 bytes, for as long as possible, and then fills any left-over gap
one byte at a time.
To avoid an annoying and incorrect warning about a potential buffer
overflow in the "trailing write", commit 78bc2727f7
used a `#pragma` to instruct the compiler to not generate the incorrect
diagnostic. Unfortunately, this change _also_ eliminated the trailing
write code, which means that, under some cases, the `rngfill` function
would generate between 1 and 7 fewer random bytes than requested.
This problem would only manifest on builds that do not define `__GNUC__`
which, as of this writing, means MSVC.
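The intended behavior, sketched here with a stand-in 64-bit generator
(`rng` is an assumption for the example, not the actual engine):
```
#include <cstdint>
#include <cstring>

// Sketch of the chunked fill strategy described above.
void rngfill(void* buffer, std::size_t size, std::uint64_t (*rng)())
{
    auto* p = static_cast<std::uint8_t*>(buffer);

    // Fill in 8-byte chunks for as long as possible...
    while (size >= sizeof(std::uint64_t))
    {
        std::uint64_t const v = rng();
        std::memcpy(p, &v, sizeof(v));
        p += sizeof(v);
        size -= sizeof(v);
    }

    // ...then the "trailing write" covers the remaining 1 to 7 bytes.
    // This is the code that the #pragma change accidentally removed.
    if (size != 0)
    {
        std::uint64_t const v = rng();
        std::memcpy(p, &v, size);  // copies only `size` bytes
    }
}
```
A real implementation would take the engine as a template parameter; the
function pointer simply keeps the sketch short.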
Several hard-coded parameters control the behavior of the ledger
acquisition engine. The values of many of these parameters were
set by intuition and have complex and non-intuitive interactions
with each other and other parts of the code.
An earlier commit attempted to adjust several of these parameters
to improve syncing performance; initial testing was promising but
a number of operators reported experiencing syncing and stability
issues with their servers. As a result, this commit reverts parts
of commit 18235067af.
This commit further adjusts some tunables so as to increase the
aggressiveness of the ledger acquisition engine.
This commit addresses minor bugs introduced with commit
6faaa91850:
- The number of threads used by the database engine was
incorrectly clamped to the lowest possible value, such
that the database was effectively operating in single
threaded mode.
- The number of requests to extract at once was so high
that it could result in increased latency. The bundle
size is now limited to 4 and can be adjusted by a new
configuration option `rq_bundle` in the `[node_db]`
stanza. This is an advanced tunable and adjusting it
should not be needed.
This commit modernizes the `AcceptedLedger` and `AcceptedLedgerTx`
classes, reduces their memory footprint and reduces unnecessary
dynamic memory allocations.
This commit optimizes the way asynchronous nodestore operations are
processed both by reducing the amount of time locks are held and by
minimizing the number of memory allocations and data copying.
Adds smart replacement support to TaggedCache
(needed to avoid race conditions with negative caching).
Create a "hotDUMMY" object that represents the absence
of an object.
Allow DatabaseNodeImp::asyncFetch to complete immediately
if the object is in the cache (positive or negative).
Fix a bug in asyncFetch where the object returned may not
be the correct canonical version, because we stash the
object in the results array before we canonicalize it.
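A generic sketch of the negative-caching idea; the names and types here are
simplified stand-ins, not the actual DatabaseNodeImp interface:
```
#include <map>
#include <memory>
#include <optional>
#include <string>

struct NodeObject
{
    std::string data;
};

// One shared sentinel stands in for "we looked, and it isn't there".
std::shared_ptr<NodeObject> const hotDummy = std::make_shared<NodeObject>();

std::map<std::string, std::shared_ptr<NodeObject>> cache;

// Returns a value if the cache can answer immediately: either the object
// itself (positive hit) or nullptr (negative hit). std::nullopt means the
// caller must fall through to an asynchronous backend read.
std::optional<std::shared_ptr<NodeObject>>
cachedFetch(std::string const& key)
{
    auto const it = cache.find(key);
    if (it == cache.end())
        return std::nullopt;                   // unknown: schedule a read
    if (it->second == hotDummy)
        return std::shared_ptr<NodeObject>{};  // known-missing: done now
    return it->second;                         // known-present: done now
}
```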
When fetching ledgers, the existing code would isolate the peer
that sent the most useful responses and issue follow up queries
only to that peer.
This commit increases the query aggressiveness, and changes the
mechanism used to select which peers to issue follow-up queries
to so as to more evenly spread the load along those peers which
provided useful responses.
The `SHAMapInnerNode` class had a global mutex to protect the
array of node children. Profiling suggested that around 4% of
all attempts to lock the global would block.
This commit removes that global mutex, and replaces it with a
new per-node 16-way spinlock (implemented so as not to affect
the size of an inner node object), effectively eliminating the
lock contention.
* Txs with the same fee level will sort by TxID XORed with the parent
ledger hash.
* The TxQ is re-sorted after every ledger.
* Attempt to future-proof the TxQ tie breaking test
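A minimal sketch of the tie-break described above, using 64-bit stand-ins
for the 256-bit hashes (names and types are illustrative):
```
#include <cstdint>

// Higher fee levels sort first; ties are broken by TxID XOR the parent
// ledger hash.
bool
sortsBefore(
    std::uint64_t feeLevelA,
    std::uint64_t txIdA,
    std::uint64_t feeLevelB,
    std::uint64_t txIdB,
    std::uint64_t parentLedgerHash)
{
    if (feeLevelA != feeLevelB)
        return feeLevelA > feeLevelB;
    return (txIdA ^ parentLedgerHash) < (txIdB ^ parentLedgerHash);
}
```
Because the parent ledger hash changes every ledger, the XOR reshuffles the
tie-break order on each re-sort, which is why the queue is re-sorted after
every ledger.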
* Abort background path finding when closed or disconnected
* Exit pathfinding job thread if there are no requests left
* Don't bother creating the path find job if there are no requests
* Refactor to remove circular dependency between InfoSub and PathRequest
The existing trust line caching code was suboptimal in that it stored
redundant information, pinned SLEs into memory and required multiple
memory allocations per cached object.
This commit eliminates redundant data, reducing the size of cached
objects and unpinning SLEs from memory, and uses value types to
avoid the need for `std::shared_ptr`. As a result of these changes, the
effective size of a cached object, including the overhead of the memory
allocator and the `std::shared_ptr`, should be reduced by at least 64
bytes. This is significant, as there can easily be tens of millions
of these objects.
This commit combines the `apply_mutex` and `read_mutex` into a single `mutex_`
variable. This new `mutex_` is a `shared_mutex`, and most operations only need
to lock it with a `shared_lock`. The only exception is `applyManifest`, which
may need a `unique_lock`.
One consequence of removing the `apply_mutex` is that more than one
`applyManifest` call can run at the same time. To help reduce the lock
contention that a `unique_lock` would cause, checks that only require reading
data are run under a `shared_lock` (call these the "prewriteChecks"), then the
lock is released, then a `unique_lock` is acquired. Since a concurrently
running `applyManifest` may write data between the time the `shared_lock` is
released and the `unique_lock` is acquired, the "prewriteChecks" need to be
rerun. Duplicating this work isn't ideal, but the "prewriteChecks" are
relatively inexpensive.
A couple of other designs were considered. We could prevent more than one
`applyManifest` call from running concurrently - either with a dedicated mutex
or by setting the max number of manifest jobs on the job queue to one. The
biggest issue with this is that if any other function ever adds a write lock
for any reason, `applyManifest` would now be broken - data could be written
between the release of the `shared_lock` and the acquisition of the
`unique_lock`. Note: it is tempting to solve this problem by not releasing the
`shared_lock` and simply upgrading the lock. In the presence of concurrently
running `applyManifest` calls, this will deadlock (both functions need to wait
for the other to release their read locks before they can acquire a write
lock).
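The resulting locking pattern, sketched with hypothetical helpers
(`Manifest`, `prewriteChecks`, and `writeState` are stand-ins for the
example, not the actual declarations):
```
#include <mutex>
#include <shared_mutex>

struct Manifest {};                                  // stand-in type
bool prewriteChecks(Manifest const&) { return true; }  // assumed checks
void writeState(Manifest const&) {}                    // assumed mutation

std::shared_mutex mutex_;

bool
applyManifest(Manifest const& m)
{
    {
        // Run the read-only "prewriteChecks" under a shared lock.
        std::shared_lock<std::shared_mutex> read(mutex_);
        if (!prewriteChecks(m))
            return false;
    }
    // Another applyManifest may have written between the two locks,
    // so the checks must be rerun under the unique lock.
    std::unique_lock<std::shared_mutex> write(mutex_);
    if (!prewriteChecks(m))
        return false;
    writeState(m);
    return true;
}
```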
In order to preserve the Hooks ABI, it is important that field
values used for hooks be stable going forward.
This commit reserves the required codes so that they will not
be repurposed before Hooks can be proposed for inclusion in
the codebase.
* Remove Application & Database dependency in PerfLog. Replace it with
a callback passed into the constructor.
* Fixes the circular dependency between ripple/nodestore and ripple/basics
If fast loading is enabled but the last persisted ledger is not
entirely on disk, the server would fail to start without manual
intervention by the server operator.
This commit allows the server to detect this scenario and attempt
to automatically recover.
This is a refactor aimed at cleaning up and simplifying the existing
job queue.
As of now, all jobs are cancelled at the same time and in the same
way, so this commit removes the per-job cancellation token. If the
need for such support is demonstrated, support can be re-added.
* Revise documentation for ClosureCounter and Workers.
* Simplify code, removing unnecessary function arguments and
deduplicating expressions
* Restructure job handlers to no longer need to pass a job's
handle to the job.
A typographical error would mishandle the case where a caller explicitly
tries to remove a child that is not actually part of the node. This case
is never invoked in practice, and so the bug will never trigger.
Commit bf013c02ad added support
for incorporating a commit ID into the compiled version string
but did so in a way that did not follow the semantic versioning
standard.
This commit corrects that flaw by moving the commit ID into the
"metadata" part of the version string and properly handles the
case where the commit hash cannot be retrieved.
The pathfinding engine built into the code has several configurable
parameters to adjust the depth of the paths indexed and explored.
These parameters can dramatically impact the performance and memory
consumption of a server; higher values can result in resource usage
increasing exponentially.
These default values were decided early and somewhat arbitrarily at
a time when the network and the size of the network state were much
smaller.
This commit adjusts the default values to reduce the depth of paths
to more reasonable levels; unless explicitly overridden, the changes
mean that pathfinding operations will return fewer, shallower paths
than previous versions of the software.
This commit corrects a technical flaw that was introduced with commit
7c12f01358: as written, a mutex that is
intended to help provide synchronization for multiple threads as they
are each walking the map, is declared so that each thread is passed a
dangling reference to a unique mutex.
This commit hoists the mutex outside the thread creation loop, so that
all threads use a single mutex, eliminating the dangling reference.
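The shape of the fix, sketched with illustrative names:
```
#include <mutex>
#include <thread>
#include <vector>

// The mutex lives outside the loop that spawns the walking threads, so
// every worker shares the same, still-alive mutex.
void
walkMapInParallel(int nThreads)
{
    std::mutex m;  // hoisted: one mutex, outliving all workers

    std::vector<std::thread> workers;
    for (int i = 0; i < nThreads; ++i)
    {
        workers.emplace_back([&m] {
            // ... walk this thread's portion of the map ...
            std::lock_guard<std::mutex> lock(m);
            // ... merge results into shared state under the lock ...
        });
    }
    for (auto& w : workers)
        w.join();
}
```
Before the fix, the mutex was (in effect) declared inside the loop, handing
each worker a dangling reference to a distinct, already-destroyed mutex.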
The nodestore includes a built-in cache to reduce the disk I/O
load but, by default, this cache was not initialized unless it
was explicitly configured by the server operator.
This commit introduces sensible defaults based on the server's
configured node size.
It remains possible to completely disable the cache if desired
by explicitly setting the cache size and age parameters
to 0:
[node_db]
...
cache_size = 0
cache_age = 0
- Only duplicate records from archive to writable during online_delete.
- Log duration of nodestore reads.
- Include nodestore counters in perf_log output.
- Remove gratuitous nodestore activity counting.
- Report initial sync duration in server_info and perfLog.
- Report state_accounting in perfLog.
- Make state_accounting durations more accurate.
- Parallel ledger loader.
- Config parameter to load ledgers on start.
1) Don't acquire so many nodes per pass. It's likely
far more than we need.
2) Right-size the finishedReads_ vector on passes other
than just the first.
* Sort by fee level (which is the current behavior) then by transaction
ID (hash).
* Handle the edge case where the account at the end of the queue submits a
higher-paying transaction: walk backwards and compare against the cheapest
transaction from a different account.
* Use std::any_of to simplify the JobQueue::isOverloaded loop.
* Log load fee values (at debug) received from validations.
* Log remote and cluster fee values (at trace) when changed.
* Refactor JobQueue::isOverloaded to return sooner if overloaded.
* Refactor Transactor::checkFee to only compute fee if ledger is open.
This flag, if present, suppresses the output of incoming
trustlines in a default state.
This is primarily motivated by observing that users of Xumm often
have many unwanted incoming trustlines in a default state, which are
not useful in the vast majority of cases.
Being able to suppress those when doing `account_lines` saves bandwidth
and resources.
This commit implements partitioned unordered maps and makes it possible
to traverse such a map in parallel, allowing for more efficient use of
CPU resources.
The `CachedSLEs`, `TaggedCache`, and `KeyCache` classes make use of the
new functionality, which should improve performance.
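A minimal sketch of the partitioning idea (not the actual rippled class):
keys are spread across independent `std::unordered_map` partitions by hash,
so each partition can be traversed by its own thread:
```
#include <functional>
#include <thread>
#include <unordered_map>
#include <vector>

template <class Key, class Value, std::size_t Partitions = 16>
class partitioned_unordered_map
{
    std::vector<std::unordered_map<Key, Value>> maps_;

    std::size_t
    partition(Key const& k) const
    {
        return std::hash<Key>{}(k) % Partitions;
    }

public:
    partitioned_unordered_map() : maps_(Partitions)
    {
    }

    void
    insert(Key const& k, Value v)
    {
        maps_[partition(k)].emplace(k, std::move(v));
    }

    // Visit every element, one thread per partition. A production
    // version would also guard each partition with its own lock for
    // concurrent use.
    template <class F>
    void
    parallel_for_each(F f)
    {
        std::vector<std::thread> threads;
        for (auto& m : maps_)
            threads.emplace_back([&m, f] {
                for (auto& kv : m)
                    f(kv.first, kv.second);
            });
        for (auto& t : threads)
            t.join();
    }
};
```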
The pathfinding engine requires pre-building large tables which is a
resource-intensive operation. Typically, one would not expect that a
server configured as a validator would also support pathfinding APIs
and so, building those tables by default wastes resources.
This commit, if merged, will disable pathfinding on servers that are
configured as validators, unless the server operator opts to support
it explicitly, by configuring the `[path_search_max]` parameter.
Validator operators that wish to support pathfinding on a validator
and want to use the default values can add the following stanza to
their server's configuration file:
[path_search_max]
7
The priority of different types of jobs was set back in the early
days of development, based on insight and observations that don't
necessarily apply any longer.
Specifically, job types used by the server to sync to the network
were being treated as lower priority than client requests, making
it more difficult to regain sync.
This commit adjusts the priority of several jobs and should allow
servers to prioritize resynchronizing to the network over serving
clients.
The existing calculation would limit the maximum number of threads
that would be created by default to at most 6; this may have been
reasonable a few years ago, but given both the load on the network
as of today and the increase in the number of CPU cores, the value
should be revisited.
This commit, if merged, changes the default calculation for nodes
that are configured as `large` or `huge` to allow for up to twelve
threads.
The "sweep interval" is the amount of time between successive sweeps of
of various in-memory data structures to remove stale items.
Prior to this commit, the interval was automatically adjusted, based on
the value of the `[node_size]` option in a server's configuration file.
If merged, this commit introduces a new configuration option that makes
it possible for a server operator to adjust the sweep interval and make
a CPU/memory tradeoff:
[sweep_interval]
<integer>
The specified value represents the number of seconds between successive
sweeps. The range of valid values is between 10 and 600.
Important operator notes:
This is an advanced configuration option that should not be used unless
there is empirical data which suggests that the default sweep frequency
is either resulting in performance problems or is causing undue load to
the server.
Note that adjusting the sweep interval may not have the intended effect
on the server. Lower values will not always translate to a reduction of
memory usage and higher values will not always translate to a reduction
of CPU usage and/or load.
The performance characteristics of `std::unordered_map` are better
than `std::map` and the former should be preferred when the strict
ordering of the latter is not required.
* Only require adding the new feature names in one place. (Also need to
increment a counter, but a check on startup will catch that.)
* Allows rippled to have the code to support a given amendment, but
not vote for it by default. This allows the amendment to be enabled in
a future version without necessarily amendment blocking these older
versions.
* The default vote is carried with the amendment name in the list of
supported amendments.
* The amendment table is constructed with the amendment and default
vote.
* Also clean up some formatting in the Windows instructions
* Changed the recommended version for Windows to 1.1.1L after deeper
checking uncovered some build issues.
* Patch the soci unsigned-types.h file. If no changes are made, delete
the patched file and exit. If there are changes, back up the original
and replace it with the patched file.
* Fixes#3885
Patch Rocksdb only once:
* The repeated patches do not appear to affect build times, but avoiding
unnecessary copies is good for its own sake.
There are two mutexes in ValidatorSite: `sites_mutex_` and `state_mutex_`. Some
functions end up locking both mutexes. However, depending on the call, the
mutexes could be locked in different orders, resulting in deadlocks.
If both mutexes are locked, this patch always locks the `sites_mutex_` first.
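The patch imposes the order by convention; as an aside, the standard library
also offers `std::scoped_lock`, which acquires multiple mutexes with a
built-in deadlock-avoidance algorithm. A minimal sketch:
```
#include <mutex>

std::mutex sites_mutex_;
std::mutex state_mutex_;

void
needsBothMutexes()
{
    // std::scoped_lock acquires both mutexes without deadlocking,
    // regardless of the order other call sites use.
    std::scoped_lock lock(sites_mutex_, state_mutex_);
    // ... read or modify state guarded by both mutexes ...
}
```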
The existing logic involves every server sending every transaction
that it receives to all its peers (except the one that it received
a transaction from).
This commit instead uses a randomized algorithm, where a node will
randomly select peers to relay a given transaction to, caching the
list of transaction hashes that are not relayed and forwarding them
to peers once every second. Peers can then determine whether there
are transactions that they have not seen and can request them from
the node which has them.
It is expected that this feature will further reduce the bandwidth
needed to operate a server.
The existing license file contains copyright statements and license
snippets referencing code that has long since been removed from the
codebase and which are not necessary any longer.
The general copyright statement for the ISC License is also changed
from "Ripple Labs" which is one contributor to the codebase, to the
more general "XRP Ledger Developers".
Remaining code which was originally taken from Bitcoin includes the
relevant copyright statement(s) inline.
To aid in the automated detection of the license type by GitHub and
tools like `licensee`, the following statement which referenced the
ISC license, and which was listed at the bottom of the file, is now
incorporated at the top:
>The accompanying files incorporate work covered by the following copyright
>and previous license notice:
>
>Copyright (c) 2011 Arthur Britto, David Schwartz, Jed McCaleb,
>Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant
This commit removes the `ltINVALID` pseudo-type identifier from
`LedgerEntryType` and the `ttINVALID` pseudo-type identifier from
`TxType` and includes several small additional improvements that
help to simplify the code base.
It also improves the documentation of `LedgerEntryType` and `TxType`,
which was all over the place, and highlights some important caveats
associated with making changes to the ledger and transaction type
identifiers.
The commit also adds a safety check to the `KnownFormats<>` class
that will catch the accidental reuse of format identifiers.
Ideally, this should be done at compile time but C++ does not (yet?)
allow for the sort of introspection that would enable this.
The legacy functions `cdirFirst` and `dirFirst` were mostly
identical; the differences were only type-related. The same
situation existed with `cdirNext` and `dirNext`.
This commit removes the duplicated code by introducing new
template functions that abstract away the differences that
are present between each pair of functions.
This commit also improves the naming of function arguments,
helping to elucidate their purpose & use and to make the
code self-documenting.
The Negative UNL is a feature of the XRP Ledger consensus protocol that
improves liveness (the network's ability to make forward progress) during
a partial outage. Using the Negative UNL, servers adjust their effective
UNLs based on which validators are currently online and operational, so
that a new ledger version can be declared validated even if several trusted
validators are offline.
The Negative UNL has no impact on how the network processes transactions
or what transactions' outcomes are, except that it improves the network's
ability to declare outcomes final during some types of partial outages.
The feature was originally introduced with version **1.6.0** but it was
only possible to manually enable this. If merged, this commit introduces
the amendment associated with the feature so that server operators can
vote on whether to enable this feature.
For more details, please see https://xrpl.org/negative-unl.html
This commit closes #3898.
Under some circumstances, it is possible to induce an out-of-bounds
memory read in the base58 decoder.
This commit addresses this issue.
Acknowledgements:
Guido Vranken for discovering and responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find.
Ripple is generously sponsoring a bug bounty program for the
rippled project. For more information please visit:
https://ripple.com/bug-bounty
The HardenedValidations amendment introduces additional fields
in validations:
- `sfValidatedHash`, if present, is the hash of the last ledger that
the validator considers to be fully validated.
- `sfCookie`, if present, is a 64-bit cookie (the default
implementation selects it randomly at startup but other
implementations are possible), which can be used to improve the
detection and classification of duplicate validations.
- `sfServerVersion`, if present, reports the version of the software
that the validator is running. By surfacing this information,
server operators gain additional insight into the variety of software
on the network.
If merged, this commit fixes#3797 by adding the fields to the
`validations` stream as shown below:
- `sfValidatedHash` as `validated_hash`: a 256-bit hex string;
- `sfCookie` as `cookie`: a 64-bit integer as a string; and
- `sfServerVersion` as `server_version`: a 64-bit integer as
a string.
With this amendment, the CheckCash transaction creates a TrustLine
if needed. The change is modeled after offer crossing. And,
similar to offer crossing, cashing a check allows an account to
exceed its trust line limit.
The following changes were made:
- Removed a dependency on a template defined in the beast detail namespace.
- Removed Section::find() method which had an obsolete interface.
- Made Section::get<>() easier to use for the common case of
retrieving a std::string. The revised get() method replaces old
calls to Section::find().
- Provided a default template parameter to free function
get<>(Section config, std::string name) so it stays similar to
Section::get<>().
Then the rest of the code was adapted to these changes.
- Calls to Section::find() were replaced with calls to Section::get.
- Unnecessary get<std::string>() arguments were reduced to get().
These changes dug up an interesting artifact in the SHAMap unit
tests. I'm not sure why the tests were working before, but there
was a problem with the letter case of a Section key. The unit test is
fixed.
The `[node_size]` configuration parameter is used to tune various
parameters based on the hardware that the code is running on. The
parameter can take five distinct values: `tiny`, `small`, `medium`,
`large` and `huge`.
The default value in the code is `tiny` but the default configuration
file sets the value to `medium`. This commit attempts to detect the
amount of RAM on the system and adjusts the node size default value
based on the amount of RAM and the number of hardware execution
threads on the system.
The decision matrix currently used is:
| RAM \ Threads | 1 | 2 or 3 | ≥ 4 |
|:-------------:|:----:|:------:|:------:|
| > ~8GB | tiny | tiny | tiny |
| > ~12GB | tiny | small | small |
| > ~16GB | tiny | small | medium |
| > ~24GB | tiny | small | large |
| > ~32GB | tiny | small | huge |
Some systems exclude memory reserved by the hardware, the kernel
or the underlying hypervisor so the automatic detection code may end
up determining the node_size to be one less than "appropriate" given
the above table.
The detection algorithm is simplistic and does not take into account
other relevant factors. Therefore, for production-quality servers it
is recommended that server operators examine the system holistically
and determine what the appropriate size is instead of relying on the
automatic detection code.
To aid server operators, the node size will now be reported in the
`server_info` API as `node_size` when the command is invoked in
'admin' mode.
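One way to read the decision matrix above in code (a sketch only; the exact
thresholds are approximate and the platform-specific RAM query is left to
the caller):
```
#include <cstddef>
#include <string>

std::string
defaultNodeSize(std::size_t ramGB, unsigned hardwareThreads)
{
    if (hardwareThreads <= 1 || ramGB <= 12)
        return "tiny";
    if (hardwareThreads <= 3 || ramGB <= 16)
        return "small";
    if (ramGB <= 24)
        return "medium";
    if (ramGB <= 32)
        return "large";
    return "huge";
}
```
`std::thread::hardware_concurrency()` can supply the thread count; detecting
the installed RAM is platform-specific.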
A recent version of clang notes a number of places in range
for loops where the code base was making unnecessary copies
or using const lvalue references to extend lifetimes. This
fixes the places that clang identified.
* Create SQLite database for mapping transaction IDs to shard indexes
* Create SQLite database for mapping ledger hashes to shard indexes
* Create additional test cases for the shard database
* load_factor was missing from server_info when the server was running in
reporting mode. Now, the reporting mode server calls server_info on the p2p
node, and propagates the load_factor back to the client.
In order to effectively mitigate CVE-2021-3499 even when compiling
against versions of OpenSSL prior to 1.1.1k, this commit:
1) requires use of TLS 1.2 or later. Note that both TLS 1.0 and
TLS 1.1 have been officially deprecated for over a year.
2) disables renegotiation support for TLS 1.2 connections.
Lastly, this commit also changes the default list of ciphers that
the server offers, limiting it only to ciphers that are part of
TLS 1.2.
The `tx` command supports output in both "text" and "binary" modes,
controlled by the binary flag. For more details on the command and
the possible arguments, please see: https://xrpl.org/tx.html.
The existing handler would incorrectly deal with metadata when in
binary mode. This commit corrects this issue, ensuring that the
metadata is properly encoded, depending on the mode.
Typically, an RPC response contains a `result` field, which
contains details about the operation performed. For ease of
parsing, forwarded responses must look like a non-forwarded
response.
In some instances the response was incorrectly composed, so
that the actual `result` object would be encapsulated by an
outer `result` object, breaking existing code.
This commit addresses this issue and correctly "folds" the
`result` field, ensuring a consistent schema for responses.
* Add instructions to the workflow at the point where the failure would
occur, which is where someone experiencing a failure is most likely to
look. To keep things simple, the instructions are always printed. The
assumption is that if the job succeeds, nobody is likely to look
anyway.
* Provides the diff of the failure as an artifact, so the user can apply
it directly to their repo.
* Also update the levelization/README.md to clarify the levels a little
bit.
This updates the build process to use the local Artifactory server as a docker image cache to avoid being rate limited by docker hub during the build process.
While most of the code associated with secp256k1 operations had
been migrated to libsecp256k1, the deterministic key derivation
code was still using calls to OpenSSL.
If merged, this commit replaces the OpenSSL-based routines with
new libsecp256k1-based implementations. No functional change is
expected and the change should be transparent.
This commit also removes several support classes and utility
functions that wrapped or adapted various OpenSSL types that
are no longer needed.
A tip of the hat to the original author of this truly superb
library, Dr. Pieter Wuille, and to all other contributors.
This commit expands the detection capabilities of the Byzantine
validation detector. Prior to this commit, only validators that
were on a server's UNL were monitored. Now, all the validations
that a server receives are passed through the detector.
The existing class offered several constructors which were mostly
unnecessary. This commit eliminates all existing constructors and
introduces a single new one, taking a `Slice`.
The internal buffer is switched from `std::vector` to `Buffer` to
save a minimum of 8 bytes (plus the buffer slack that is inherent
in `std::vector`) per SHAMapItem instance.
Add support to allow multiple independent nodes to produce a binary identical
shard for a given range of ledgers. The advantage is that servers can use
content-addressable storage, and can more efficiently retrieve shards by
downloading from multiple peers at once and then verifying the integrity of
a shard by cross-checking its checksum with the checksum other servers report.
Before this change, any non-zero Sequence field was handled as
a non-ticketed transaction, even if a TicketSequence was
present. We learned that this could lead to user confusion.
So the rules are tightened up.
Now if any transaction contains both a non-zero Sequence
field and a TicketSequence field then that transaction
returns a temSEQ_AND_TICKET error code.
The (deprecated) "sign" and "submit" RPC commands are tuned
up so they auto-insert a Sequence field of zero if they see
a TicketSequence in the transaction.
No amendment is needed because this change is going into
the first release that supports the TicketBatch amendment.
* Fix bug where incorrect max amount was set for XRP
* Fix bug where incorrect source currencies were set when XRP was the dst and a
sendmax amount was set
The existing code that deserialized an STAmount was sub-optimal and performed
poorly. In some rare cases the operation could result in otherwise valid
serialized amounts overflowing during deserialization. This commit will help
detect error conditions more quickly and eliminate the problematic corner cases.
* Use theoretical quality to order the strands
* Do not use strands below the user specified quality limit
* Stop exploring strands (at the current quality iteration) once any strand is non-dry
The previous error description was focused on keys that are too long,
but this error can occur if the key is too short or does not contain
the correct prefix.
* Add a new operating mode to rippled called reporting mode
* Add ETL mechanism for a reporting node to extract data from a p2p node
* Add new gRPC methods to facilitate ETL
* Use Postgres in place of SQLite in reporting mode
* Add Cassandra as a nodestore option
* Update logic of RPC handlers when running in reporting mode
* Add ability to forward RPCs to a p2p node
- The changes to manifest relaying introduced with commit f74b469e68
will cause newly accepted manifests to be sent back to the peer from
which they were received. This no longer happens: a newly accepted
manifest is never sent back to the peer we received it from.
- When encountering a manifest without a domain set, the `manifest` and
`validator_info` commands would include an empty string as the domain
associated with the manifest. This no longer happens: if a domain is
not present, the `domain` field will be omitted.
The existing code attempts to validate the provided node public key
using a function that assumes that the encoded public key is for an
account. This causes the parsing to fail.
This commit fixes #3317 by letting the caller specify the type of
the public key being checked.
The manifest relay code would only ever relay manifests from validators
on a server's UNL which means that the manifests of validators that are
not broadly trusted can fail to propagate across the network, which can
make it difficult to detect and track such validators.
This commit, if merged, propagates all manifests on a best-effort basis,
resulting in broader availability of manifests on the network, avoiding
the need to introduce on-ledger manifest storage or to establish one or
more manifest repositories.
- Add validation/proposal reduce-relay feature negotiation to
the handshake
- Make squelch duration proportional to the number of peers that
can be squelched
- Refactor makeRequest()/makeResponse() to facilitate handshake
unit-testing
- Fix compression enable flag for inbound peer
- Fix compression algorithm parsing in the header parser
- Fix squelch duration in onMessage(TMSquelch)
This commit fixes #3624, fixes #3639, and fixes #3641
Support for 'out-of-sequence' transaction execution was introduced
in commit 7724cca384.
The changes in that commit were gated under a feature but there was
no corresponding amendment introduced that would allow the network
to vote on the feature.
This commit introduces the 'TicketBatch' amendment as the amendment
associated with the tickets feature. If the amendment is
enabled, it will activate support for tickets.
This commit also removes several workarounds that are no longer
needed in unit tests.
* Markdown explanation of what levelization is, the intended levels, as
well as the process used to determine dependencies
* Shell script finds all dependencies, groups them, and finds cyclic
dependencies and maps out non-cyclic dependencies.
* GitHub job to run the script and fail if anything changes. Should
catch introduction of new dependencies and new problems. Will also
detect changes if problems or dependencies are removed.
* Creates a version 2 of the UNL file format allowing publishers to
pre-publish the next UNL while the current one is still valid.
* Version 1 of the UNL file format is still valid and backward
compatible.
* Also causes rippled to lock down if it has no valid UNLs, similar to
being amendment blocked, except reversible.
* Resolves#3548
* Resolves#3470
* Move all the vcpkg windows dependency installations into one step.
* Move the unmodified `before_install` step above the matrix to improve
readability, because this step runs before any of the matrix steps.
Due to some quirky emergent behavior, the server can't really begin
synching until twice the default close time resolution of the genesis
ledger, which is 30 seconds, has passed. In effect, this causes a one
minute delay.
This commit adjusts the default close time resolution down to the
minimum allowed resolution of 10 seconds, so the corresponding delay
is reduced by 67% down to 20 seconds. This should be enough time to
ensure the server has reasonable connectivity without unduly delaying
initial synch times.
This change significantly improves ledger sync and fetch
times while reducing memory consumption. The change affects
the code from that begins with SHAMap::getMissingNodes and runs
through to Database::threadEntry.
The existing code issues a number of async fetches which are then
handed off to the Database's pool of read threads to execute.
The results of each read are placed in the Database's positive
and negative caches. The caller waits for all reads to complete
and then retrieves the results out of these caches.
Among other issues, this means that the results of the first read
cannot be processed until the last read completes. Additionally,
all the results must sit in memory.
This patch changes the behavior so that each read operation has a
completion handler associated with it. The completion of the read
calls the handler, allowing the results of each read to be
processed as it completes. As this was the only reason the
negative and positive caches were needed, they can now be removed.
The read generation code is also no longer needed and is removed.
The batch fetch logic was never implemented or supported and is
removed.
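A sketch of the new shape; the names here are illustrative, not the actual
rippled interface:
```
#include <functional>
#include <string>
#include <vector>

// Each read carries its own completion handler, so results are consumed
// as they arrive rather than staged in shared caches.
struct ReadRequest
{
    std::string key;
    std::function<void(std::string const&)> onComplete;
};

std::string
fetchFromDisk(std::string const& key)  // stand-in for the backend read
{
    return "blob-for-" + key;
}

void
readThread(std::vector<ReadRequest> const& work)
{
    for (auto const& req : work)
    {
        // The first result can be processed while later reads are still
        // in flight; nothing needs to sit in a positive/negative cache.
        req.onComplete(fetchFromDisk(req.key));
    }
}
```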
gcc's implementation of `pmr::synchronized_pool_resource` showed
extremely poor performance compared with
`boost::synchronized_pool_resource`. Boost's implementation of pmr is
now used in all cases (previously it was only used when a standard
lib, like clang's, lacked an implementation of pmr).
This patch also makes a minor change where inner nodes are constructed
with sparse arrays, unless "dense" is explicitly requested.
Prior to this commit, the amendments that a server would vote in support
of or against could be configured both via the configuration file and
via the command line "feature" command. Changes made in the configuration
file would only be loaded once at server startup and changes made via the
command line take effect immediately but are not persisted across
restarts.
This commit deprecates management of amendments via the configuration
file and stores the relevant information in the `wallet.db` database
file.
1. On startup, the new code parses the configuration file.
2. If the `[veto_amendments]` or `[amendments]` sections are present,
we check if the `FeatureVotes` table is present in `wallet.db`.
3. If it is not, we create the `FeatureVotes` table and transfer the
settings from the config file.
4. Proceed normally but only reference the `FeatureVotes` table instead
of the config file.
5. Warn if the voting table already exists in `wallet.db` and voting
sections also exist in the config file. The config file is ignored
in this case.
This change addresses & closes #3366
* Found several functions called under lock that take a lock. Refactor
to require a lock as a parameter instead.
* Found several functions called under lock that don't take a lock, but
should. Refactored those as well to require a lock as a parameter.
The unit test job counted test failures, process crashes, and process exit
code failures. Since a failing test causes the process exit code to
return failure, failures were effectively counted twice. This patch removes
process exit code failures from the count.
ReadViewFwdRange was storing a cached `end_` iterator that was lazily
created in an iterators `end()` function. When the cache is empty, and
the range is iterated from multiple threads, this creates a race
condition.
This change has performance consequences for "old style" for loops.
For example:
```
// don't do this
for(auto i = tx_range.begin(); i != tx_range.end(); ++i)
```
This can call the now-expensive `end()` function more often than needed.
A range-based for loop (i.e. `for(auto const& t : tx_range)`) should be
used instead.
- Under some conditions, comparing `ReadViewFwdRange::iterators`
for equality could dereference an empty `std::unique_ptr`, which
will result in a crash.
- Misuse of the `equal` API could result in a `std::bad_cast`
exception being thrown when iterating transactions or
SLEs from the `OpenView`, `RawStateTable` and `Ledger` classes.
A large percentage of inner nodes only store a small number of children. Memory
can be saved by storing the inner node's children in sparse arrays. Measurements
show that on average a typical SHAMap's inner nodes can be stored using only 25%
of the original space.
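A minimal sketch of the sparse-storage idea for a 16-way inner node (not
the actual SHAMap classes): a bitmask records which branches are occupied,
and counting the mask bits below a branch yields its index in the dense
child vector:
```
#include <bitset>
#include <cstdint>
#include <memory>
#include <vector>

class SparseNode
{
    std::uint16_t mask_ = 0;
    std::vector<std::shared_ptr<SparseNode>> kids_;

    std::size_t
    slot(int branch) const
    {
        // Number of occupied branches below `branch`.
        return std::bitset<16>(mask_ & ((1u << branch) - 1)).count();
    }

public:
    std::shared_ptr<SparseNode>
    get(int branch) const
    {
        if (!(mask_ & (1u << branch)))
            return nullptr;
        return kids_[slot(branch)];
    }

    void
    set(int branch, std::shared_ptr<SparseNode> child)
    {
        if (mask_ & (1u << branch))
            kids_[slot(branch)] = std::move(child);
        else
        {
            kids_.insert(kids_.begin() + slot(branch), std::move(child));
            mask_ |= static_cast<std::uint16_t>(1u << branch);
        }
    }
};
```
A node with only a few children then pays for only those slots, which is
where the measured space savings come from.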
This commit combines a number of cleanups, targeting both the
code structure and the code logic. Large changes include:
- Using more strongly-typed classes for SHAMap nodes, instead of relying
on run-time detection of class types. This change saves 16 bytes
of memory per node.
- Improving the interface of SHAMap::addGiveItem and SHAMap::addItem to
avoid the need for passing two bool arguments.
- Documenting the "copy-on-write" semantics that SHAMap uses to
efficiently track changes in individual nodes.
- Removing unused code and simplifying several APIs.
- Improving function naming.
- Simplify and consolidate code for parsing hex input.
- Replace beast::endian::order with boost::endian::order.
- Simplify CountedObject code.
- Remove pre-C++17 workarounds in favor of C++17 based solutions.
- Improve `base_uint` and simplify its hex-parsing interface by
consolidating the `SetHex` and `SetHexExact` methods into one
API: `parseHex` which forces callers to verify the result of
the operation; as a result some public-facing API endpoints
may now return errors when passed values that were previously
accepted.
- Remove the simple fallback implementations of SHA2 and RIPEMD
introduced to reduce our dependency on OpenSSL. The code is
slow and rarely, if ever, exercised and we rely on OpenSSL
functionality for Boost.ASIO as well.
- Provide separate functions for serializing depending on whether
one wants a "wire" version of a node, or one suitable for hashing.
- Remove unused functions
The existing SHAMapNodeID object has both a valid and an invalid state
and requires callers to verify the state of an instance prior to using
it. A simple set of changes removes that restriction and ensures that
all instances are valid, making the code more robust.
This change also:
1. Introduces a new function to construct a SHAMapNodeID from a
serialized blob; and
2. Reduces the number of constructors the class exposes.
- Limit the lifetime of a buffer that was only used in the early
phases of peer connection establishment but which lived on as
long as the peer was active.
- Cache the message used to transfer manifests, so it can be reused
instead of recreated for every peer connection.
- Improve the reading of partial messages by passing a hint to the
I/O layer if the number of bytes needed to complete the message
is known.
The existing code issues a PING to each peer every 8 seconds. While
frequent PINGs allow us to estimate a peer's latency with a high
degree of accuracy, this "inter-server polka dance" is inefficient
and not useful. This commit, if merged, reduces the PING frequency
to once every 60 seconds.
Additionally, this commit simplifies the PING handling logic and
merges the code used to check and disconnect peers which fail to
track the network directly into the timer callback.
When evaluating the fitness and usefulness of an outbound peer, the code
would incorrectly calculate the amount of time that the peer spent in
a non-useful state.
This commit, if merged, corrects the calculation and makes the timeout
values configurable by server operators.
Two new options are introduced in the 'overlay' stanza of the config
file. The default values, in seconds, are:
[overlay]
max_unknown_time = 600
max_diverged_time = 300
This commit replaces the `peers_max` configuration element which had
a predetermined split between incoming and outgoing connections with
two new configuration options, `peers_in_max` and `peers_out_max`,
which server operators can use to explicitly control the number of
incoming and outgoing peer slots.
There have been cases in the past where SFields have been defined
in such a way that they did not follow our conventions. In
particular, the string representation of an SField should match
the in-code name of the SField.
This change leverages the preprocessor to encourage SFields to
be properly constructed.
The suffixes of SField types are changed to be the same as
the suffixes of the corresponding SerializedTypeIDs. This allows
the preprocessor to match types using simple name pasting.
Since the string representation of the SField is part of our
stable API, the name of sfPayChannel was changed to sfChannel.
This change allows sfChannel to follow our conventions while
making no changes to our external API.
* Remove DinD Service from container build template
DinD has changed how it works on GitLab due to recent docker changes such that the service no longer needs to be called so long as the runner is being run on a `docker-X` tagged machine.
* refactor for docker service on normal node
* If multiple transactions are queued for the account, change the
account's sequence number in a temporary view before processing the
transaction.
* Adds a new "at()" interface to STObject which is identical to the
operator[], but easier to write and read when dealing with ptrs.
* Split the TxQ tests into two suites to speed up parallel run times.
This commit introduces a new configuration option that server
operators can set. The value is communicated to other servers
and is also reported via the `server_info` API.
The value is meant to allow third-party applications or tools
to group servers together; for example, a tool that visualizes
the network's topology can use the value to cluster related servers.
Similar to the "Domain" field in validator manifests, an operator
can claim any domain. Prior to relying on the value returned, the
domain should be verified by retrieving the xrp-ledger.toml file
from the domain and looking for the server's public key in the
`nodes` array.
* Increases hard-coded number of parallel unit test processes for
Windows and MacOS builds from 1 to 2.
* Reduces Travis job time to well under the timeout value of 1.5 hours.
* Continue using a hard-coded value rather than `nprocs` because higher
values cause some jobs to run out of memory.
* Jobs with no unit tests are counted as failures. Resolves #3474
* Crashed processes are counted as failures. Resolves #3600
* Any tests specified on the command line that do not have matching
suites are counted as failures.
* Remove unused CI manual test.
When processing the `tx` command, we will now load both the transaction
and its metadata directly from SQLite.
Previously the `tx` RPC call was querying SQLite for the transaction
and then separately querying the key-value store for the metadata.
Support for IPv6 messages was added with commit 08382d866b
and version 1.1.0. No peer presently connected to the network in a useful capacity fails
to understand v2 messages.
This commit removes the code that generates and processes v1 messages and deletes legacy
messages from the protocol buffer definition file.
Use C++17 constant expressions to calculate the inverse
alphabet map at compile time instead of at runtime.
Remove support for encoding & decoding tokens using the
Bitcoin alphabet.
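A sketch of the technique with C++17 `constexpr` (the table below uses the
XRP Ledger's base58 alphabet):
```
#include <array>

constexpr char alphabet[] =
    "rpshnaf39wBUDNEGHJKLM4PQRST7VWXYZ2bcdeCg65jkm8oFqi1tuvAxyz";

// Build the inverse map at compile time: for each character, its
// position in the alphabet, or -1 if it isn't part of the alphabet.
constexpr std::array<int, 256> inverseAlphabet()
{
    std::array<int, 256> map{};
    for (auto& e : map)
        e = -1;
    for (int i = 0; alphabet[i] != '\0'; ++i)
        map[static_cast<unsigned char>(alphabet[i])] = i;
    return map;
}

constexpr auto inv = inverseAlphabet();
static_assert(inv['r'] == 0 && inv['p'] == 1);  // compile-time spot checks
```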
The job queue can impose limits on how many jobs of a particular
type can be queued.
This commit makes the previously hard-coded limit associated with
transactions configurable by the server's operator. Servers that
have increased memory capacity or which expect to see an influx
of transactions can increase the number of transactions their
server will be able to queue.
This commit fixes #3556.
The "/vl" HTTP endpoint can be used to request a particular
UNL from a rippled instance.
This commit, if merged, includes the public key of the requested
list in the response.
This commit fixes #3392
* Distinguish between recent and historical shards
* Allow multiple storage paths for historical shards
* Add documentation for this feature
* Add unit tests
Some RPC commands return `ledger_index` as a quoted numeric
string. This change allows the returned value to be directly
copied and used for follow-on RPC commands.
This commit fixes #3533
When attempting to load a validator list from a configured
site, attempt to reuse the last IP that was successfully
used if that IP is still present in the DNS response.
Otherwise, randomly select an IP address from the list of
IPs provided by the DNS system.
This commit fixes #3494.
With few exceptions, servers will typically receive multiple copies
of any given message from their directly connected peers. For servers
with several peers this can impact processing latency and force
them to do redundant work. Proposal and validation messages are often
relayed with extremely high redundancy.
This commit, if merged, introduces experimental code that attempts
to optimize the relaying of proposals and validations by allowing
servers to instruct their peers to "squelch" delivery of selected
proposals and validations. Servers make squelching decisions via
a process that evaluates the fitness and performance of a given
server and randomly selects a subset of the best candidates.
The experimental code is presently disabled and must be explicitly
enabled by server operators that wish to test it.
Tickets are a mechanism to allow for the "out-of-order" execution of
transactions on the XRP Ledger.
This commit, if merged, reworks the existing support for tickets and
introduces support for 'ticket batching', completing the feature set
needed for tickets.
The code is gated under the newly-introduced `TicketBatch` amendment
and the `Tickets` amendment, which is not presently active on the
network, is being removed.
The specification for this change can be found at:
https://github.com/xrp-community/standards-drafts/issues/16
Commit 4dc08f8202 introduced support for
deterministic shards, which makes the sharding functionality provided
by rippled more useful.
After merging, several opportunities for further improvements to the
deterministic sharding implementation were identified and a significant
increase in memory usage during shard finalization was detected.
Because of these issues, the commit is being reverted and the feature is
being rolled back. It will be reintroduced in a future release.
* Builds Windows dependencies first.
* Builds ALL OSs in the last stage.
* Fix the MacOS builds.
* Windows dependency stages are allowed to fail so ALL configurations will
attempt to build. Windows builds will probably fail if dependencies fail
(caching may allow them to succeed), but they will at least be attempted.
* Remove broken AppVeyor config file, so it stops trying.
The checkpointer class had assumed that the database would exist for the
lifetime of the application. This is no longer true. These changes resolve bugs
involving dangling pointers.
There was a race condition in `on_accept` where the object's destructor
could run while `on_accept` was called.
This patch ensures that if `on_accept` is called then the object remains
valid for the duration of the call.
* Fixes#3486
* load factor computation normalized by load_base.
* last validated ledger age set to -1 while syncing.
* Return status changed:
* healthy -> ok
* warning -> service_unavailable
* critical -> internal_server_error
This change can help improve the liveness of the network during periods of network
instability, by allowing the network to track which validators are presently not online
and to disregard them for the purposes of quorum calculations.
If the 'HardenedValidations' amendment is enabled, this commit will
track the version of the software that validators embed in their
validations.
If a server notices that at least 60% of the validators on its UNL
are running a newer version than it is running, it will periodically
print an informational message, reminding the operator to check for
updates.
The tecUNFUNDED code is actively used when attempting to create payment
channels; the messages incorrectly list it as deprecated.
Meanwhile, the tecUNFUNDED_ADD code actually is an unused legacy code,
dating back to when there was a WalletAdd transactor. The terLAST and
terFUNDS_SPENT codes are also unused legacy codes.
Engine result messages are not part of the binary format and are
documented as subject to change without notice, so this should not
require an amendment nor a new API version.
Align error code table for human readability.
The amendment was partially complete, included no functional code
and, even if activated, it would result in no changes to transaction
processing. Despite this, removing the amendment is the prudent course
of action and avoids the possibility of an accidental activation.
If additional cryptoconditions are implemented, they will be each
assigned a new, unique amendment code.
This commit, if merged, adds support to allow multiple independent nodes to
produce a binary identical shard for a given range of ledgers. The advantage
is that servers can use content-addressable storage, and can more efficiently
retrieve shards by downloading from multiple peers at once and then verifying
the integrity of a shard by cross-checking its checksum with the checksum
other servers report.
* Document delete_batch, back_off_milliseconds, age_threshold_seconds.
* Convert those time values to chrono types.
* Fix bug that ignored age_threshold_seconds.
* Add a "recovery buffer" to the config that gives the node a chance to
recover before aborting online delete.
* Add begin/end log messages around the SQL queries.
* Add a new configuration section: [sqlite] to allow tuning the sqlite
database operations. Ignored on full/large history servers.
* Update documentation of [node_db] and [sqlite] in the
rippled-example.cfg file.
Resolves #3321
* The amendment ballot counting code contained a minor technical
flaw, caused by the use of integer arithmetic and rounding
semantics, that could allow amendments to reach majority with
slightly less than 80% support. This commit introduces an
amendment which, if enabled, will ensure that activation
requires at least 80% support.
* This commit also introduces a configuration option to adjust
the amendment activation hysteresis. This option is useful on
test networks, but should not be used on the main network, as it
is a network-wide consensus parameter that should not be
changed on a per-server basis; doing so can result in a
hard fork.
Fixes #3396
Work on a version 2 of the XRP Network API has begun. The new
API returns:
* `notSynced` in place of `noClosed`, `noCurrent`, and `noNetwork`;
* `invalidParams` in place of `lgrIdxInvalid`.
The new version 2 API cannot be selected yet, as it remains a work
in progress.
Fixes #3269
If a port number is not specified in the [ips] or [ips_fixed]
blocks, automatically add the new default peer port which was
registered with IANA: 2459. Also use 2459 if no port is specified
when manually using the `connect` command; previously it used
6561, which could have resulted in spurious failures.
This commit, if merged, fixes #2861.
* Gives a summary of the health of the node:
Healthy, Warning, or Critical
* Last validated ledger age:
<7s is Healthy,
7s to 20s is Warning
> 20s is Critical
* If amendment blocked, Critical
* Number of peers:
> 7 is Healthy
1 to 7 is Warning
0 is Critical
* server state:
One of full, validating or proposing is Healthy
One of syncing, tracking or connected is Warning
All other states are Critical
* load factor:
<= 100 is Healthy
101 to 999 is Warning
>= 1000 is Critical
* If not Healthy, info field contains data that is considered not
Healthy.
Fixes: #2809
Commit e257a22 introduced changes in the logic used to acquire historical
ledgers. The logic could cause historical ledgers to be acquired only since
the last online deletion interval instead of the configured value to allow
deletion.
* Make sure variables are always initialized
* Use lround instead of adding .5 and casting
* Remove some unneeded vars
* Check for null before calling strcmp
* Remove redundant if conditions
* Remove make_TxQ factory function
* Improve documentation
* Make the ShardArchiveHandler rather than the DatabaseShardImp perform
LastLedgerHash verification for downloaded shards
* Remove ShardArchiveHandler's singleton implementation and make it an
Application member
* Have the Application invoke ShardArchiveHandler initialization
instead of clients
* Add RecoveryHandler as a ShardArchiveHandler derived class
* Improve commenting
* Add documentation for shard validation
* Retrieve last ledger hash for imported shards
* Verify the last ledger hash in Shard::finalize
* Limit last ledger hash retrieval attempts for imported shards
* Use a common function for removing failed shards
* Add new ShardInfo::State for imported shards
Identifiers for retired amendments should not generally be used
in the codebase.
This commit reduces their visibility down to one translation
unit and marks them as unused and deprecated to prevent
accidental reuse.
In deciding whether to relay a proposal or validation, a server would
consider whether it was issued by a validator on that server's UNL.
While both trusted proposals and validations were always relayed,
the code prioritized relaying of untrusted proposals over untrusted
validations. While not technically incorrect, validations are
generally more "valuable" because they are required during the
consensus process, whereas proposals are not, strictly, required.
The commit introduces two new configuration options, allowing server
operators to fine-tune the relaying behavior:
The `[relay_proposals]` option controls the relaying behavior for
proposals received by this server. It has two settings: "trusted"
and "all" and the default is "trusted".
The `[relay_validations]` option controls the relaying behavior for
validations received by this server. It has two settings: "trusted"
and "all" and the default is "all".
This change does not require an amendment as it does not affect
transaction processing.
The sfLedgerSequence field is designated as optional in the object
template but it is effectively required and validations which do not
include it were, correctly, rejected.
This commit migrates the check outside of the peer code and into the
constructor used for validations being deserialized for the network.
Furthermore, the code will generate an error if a validation that is
generated by a server does not include the field.
The existing code used std::deque along with a size check to constrain the
size of a buffer and, effectively, "hand rolled" a circular buffer. This
change simply migrates directly to boost::circular_buffer.
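A minimal sketch of the pattern (not the actual call site): a
fixed-capacity boost::circular_buffer evicts the oldest element
automatically, which is exactly what the deque plus size check
emulated:
```
#include <boost/circular_buffer.hpp>

int main()
{
    // Capacity-3 buffer: pushing a 4th element silently evicts the
    // oldest element instead of growing the container.
    boost::circular_buffer<int> recent(3);
    for (int i = 1; i <= 4; ++i)
        recent.push_back(i);
    // recent now holds {2, 3, 4}
}
```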
The unit test now verifies that if an account is not present in the
starting account_tx ledger, account_tx still iterates down and finds
the transaction that deletes the account (and earlier transactions).
This commit introduces no functional changes but cleans up the
code and shrinks the surface area by removing dead and unused
code, leveraging std:: alternatives to hand-rolled code and
improving comments and documentation.
The script, when invoked by a server operator, collects information
useful for debugging, while attempting to redact potentially sensitive
data.
Previously, the script contained no explanation or other exposition
to allow people who look at the file but aren't familiar with shell
scripts to understand its purpose.
The built-in watchdog is simplistic and can, sometimes, cause problems
especially on systems that have the ability to automatically start and
monitor processes.
This commit removes the sustain system entirely, changes the handling
of the SIGTERM signal to properly terminate the process, and improves
the error message reported to the user when the command line used to
start `rippled` is incorrect or malformed.
Entries in the ledger are located using 256-bit locators. The locators
are calculated using a wide range of parameters specific to the entry
whose locator we are calculating (e.g. an account's locator is derived
from the account's address, whereas the locator for an offer is derived
from the account and the offer sequence.)
Keylets enhance type safety during lookup and make the code more robust,
so this commit removes most of the earlier code, which used naked
uint256 values.
This commit removes obsolete comments, dead or no longer useful
code, and workarounds for several issues that were present in older
compilers that we no longer support.
Specifically:
- It improves the transaction metadata handling class, simplifying
its use and making it less error-prone.
- It reduces the footprint of the Serializer class by consolidating
code and leveraging templates.
- It cleans up the ST* class hierarchy, removing dead code, improving
and consolidating code to reduce complexity and code duplication.
- It shores up the handling of currency codes and the conversion
between 160-bit currency codes and their string representation.
- It migrates beast::secure_erase to the ripple namespace and uses
a call to OpenSSL_cleanse instead of the custom implementation.
A deliberately malformed token can cause the server to crash during
startup. This is not remotely exploitable and would require someone
with access to the configuration file of the server to make changes
and then restart the server.
Acknowledgements:
Guido Vranken for responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find.
Ripple is generously sponsoring a bug bounty program for the
rippled project. For more information please visit:
https://ripple.com/bug-bounty
Currently there is no mechanism for a validator to report the
version of the software it is currently running. Such reports
can be useful for those who are developing network monitoring
dashboards and server operators in general.
This commit, if merged, defines an encoding scheme to encode
a version string into a 64-bit unsigned integer and adds an
additional optional field to validations.
This commit piggybacks on the "HardenedValidations" amendment to
determine whether version information should be propagated
or not.
The general encoding scheme is:
XXXXXXXX-XXXXXXXX-YYYYYYYY-YYYYYYYY-YYYYYYYY-YYYYYYYY-YYYYYYYY-YYYYYYYY
X: 16 bits identifying the particular implementation
Y: 48 bits of data specific to the implementation
The rippled-specific format (implementation ID is: 0x18 0x3B) is:
00011000-00111011-MMMMMMMM-mmmmmmmm-pppppppp-TTNNNNNN-00000000-00000000
M: 8-bit major version (0-255)
m: 8-bit minor version (0-255)
p: 8-bit patch version (0-255)
T: 11 if neither an RC nor a beta
10 if an RC
01 if a beta
N: 6-bit rc/beta number (1-63)
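The layout above amounts to straightforward bit packing; the sketch
below is illustrative (the function name is hypothetical, not the
actual rippled API):
```
#include <cstdint>

// Pack a rippled version per the layout above. 'type' is 3 (binary 11)
// for a regular release, 2 (10) for an RC, 1 (01) for a beta; 'num' is
// the 6-bit rc/beta number.
std::uint64_t
encodeSoftwareVersion(
    std::uint8_t major, std::uint8_t minor, std::uint8_t patch,
    std::uint8_t type, std::uint8_t num)
{
    std::uint64_t v = std::uint64_t(0x183B) << 48;  // implementation ID
    v |= std::uint64_t(major) << 40;                // 8-bit major
    v |= std::uint64_t(minor) << 32;                // 8-bit minor
    v |= std::uint64_t(patch) << 24;                // 8-bit patch
    v |= std::uint64_t(type & 0x03) << 22;          // 2-bit release type
    v |= std::uint64_t(num & 0x3F) << 16;           // 6-bit rc/beta number
    return v;                                       // low 16 bits stay zero
}
```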
This commit introduces the "HardenedValidations" amendment which,
if enabled, allows validators to include additional information in
their validations that can increase the robustness of consensus.
Specifically, the commit introduces a new optional field that can
be set in validation messages and used to attest to the hash of
the latest ledger that a validator considers to be fully validated.
Additionally, the commit leverages the previously introduced "cookie"
field to improve the robustness of the network by making it possible
for servers to automatically detect accidental misconfiguration which
results in two or more validators using the same validation key.
- Add missing `#include` in `ripple/core/JobTypeInfo.h`
- Protect version string from clang-format in
`ripple/protocol/impl/BuildInfo.cpp`.
`Builds/CMake/RippledVersion.cmake` searches for this line by pattern.
Existing per-thread PRNGs are individually initialized using calls
to std::random_device.
If merged, this commit will use a single PRNG, initialized from
std::random_device on startup, to seed the thread-specific PRNGs.
Acknowledgements:
Thomas Snider, who reported this issue to Ripple on April 8, 2020.
Historically, strand re-execute log messages have been treated as
errors. However, in the vast majority of cases these log messages
are caused by well understood mechanics in the payment engine.
So these log messages should usually be treated as warnings.
The automated build system only builds packages signed with a list of
approved keys. This is a security measure to prevent someone who gains
push access to the repository from producing potentially malicious
packages that are signed by Ripple's trusted private keys.
Moving this list to the new location makes it easier to add keys
to and remove keys from the list.
* scoped_lock is now a std name with subtly different semantics
compared to lock_guard. Namely it can be used to lock 0 or
more mutexes. This is valuable, but can also be accidentally
used to lock 0 mutexes when 1 was intended, creating a
run-time error.
Therefore, if and when we use scoped_lock, extra care needs to
be taken in reviewing that code to ensure it doesn't
accidentally lock 0 mutexes when 1 was intended. To aid in
such careful reviewing, the use of the name scoped_lock should
be limited to those cases where the number of mutexes is not
exactly one.
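For illustration, this is the kind of call site where the name is
appropriate, locking two mutexes deadlock-free in a single statement
(hypothetical names):
```
#include <mutex>

std::mutex accountsMutex;
std::mutex ledgerMutex;

void updateBoth()
{
    // Locks both mutexes with deadlock avoidance. With exactly one
    // mutex, lock_guard remains the harder-to-misuse choice.
    std::scoped_lock lock(accountsMutex, ledgerMutex);
    // ... mutate state guarded by both mutexes ...
}
```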
* canonicalize_replace_cache
* canonicalize_replace_client
It is now clear at the call site which copy gets replaced if there
are duplicate copies of the data between the cache and the caller.
Additionally, the data parameter is now const-correct:
if it is not going to be replaced (canonicalize_replace_cache),
then the shared_ptr to the client data is const.
* The [network_id] option allows three string values:
- main: the XRP Ledger
- testnet: the Testnet operated by Ripple.
- devnet: the development test network operated by Ripple.
* Peers negotiate compression via HTTP Header "X-Offer-Compression: lz4"
* Messages larger than 70 bytes of the protocol message types MANIFESTS,
ENDPOINTS, TRANSACTION, GET_LEDGER, LEDGER_DATA, GET_OBJECT,
and VALIDATORLIST are compressed
* If the compressed message is larger than the uncompressed message
then the uncompressed message is sent
* Compression flag and the compression algorithm type are included
in the message header
* Only LZ4 block compression is currently supported
The payment engine restricts payment paths so two steps do not input the
same Currency/Issuer or output the same Currency/Issuer. This check was
skipped when the path started or ended with XRP. An example of a path
that was incorrectly accepted was: XRP -> //USD -> //XRP -> EUR
This patch enables the path loop check for paths that start or end with
XRP.
* Make ShardArchiveHandler a singleton.
* Add state database for ShardArchiveHandler.
* Use temporary database for SSLHTTPDownloader downloads.
* Make ShardArchiveHandler a Stoppable class.
* Automatically resume interrupted downloads at server start.
* Reduce lock scope on all public functions
* Use TaskQueue to process shard finalization in separate thread
* Store shard last ledger hash and other info in backend
* Use temp SQLite DB versus control file when acquiring
* Remove boost serialization from cmake files
When computing rates for offers, an STAmount's value can be out of
range (before canonicalizing). There was an assert that could incorrectly
fire in some cases. This patch removes that assert.
The newest MSVC 19.25.28610.4 does not build rocksdb. During the
Travis CI Windows job, the vs_BuildTools.exe automatically
downloads the newest version of the compiler. This fix forces the
install of MSVC 19.24.28314.0 to build rocksdb.
The fix1781 amendment was incorrectly introduced during conflict
resolution and support for it is not included at this time. This commit
removes the definition of the amendment identifier.
A review of the lag ratchet code revealed that we were using
the long-term master public keys of trusted validators, when
we should have been using the ephemeral public keys instead.
As a result, the lag ratchet code would be effectively
inoperable.
- Add support for all transaction types and ledger object types to gRPC
implementation of tx and account_tx.
- Create common handlers for tx and account_tx.
- Remove mutex and abort() from gRPC server. JobQueue is stopped before
gRPC server, with all coroutines executed to completion, so no need for
synchronization.
* Whenever a node downloads a new VL, send it to all peers that
haven't already sent or received it. It also saves it to the
database_dir as a Json text file named "cache." plus the public key of
the list signer. Any files that exist for public keys provided in
[validator_list_keys] will be loaded and processed if any download
from [validator_list_sites] fails or no [validator_list_sites] are
configured.
* Whenever a node receives a broadcast VL message, it treats it as if
it had downloaded it on its own, broadcasting to other peers as
described above.
* Because nodes normally download the VL once every 5 minutes, a single
node downloading a VL with an updated sequence number could
potentially propagate across a large part of a well-connected network
before any other nodes attempt to download, decreasing the amount of
time that different parts of the network are using different VLs.
* Send all of our current valid VLs to new peers on connection.
This is probably the "noisiest" part of this change, but will give
poorly connected or poorly networked nodes the best chance of syncing
quickly. Nodes which have no http(s) access configured or available
can get a VL with no extra effort.
* Requests on the peer port to the /vl/<pubkey> endpoint will return
that VL in the same JSON format as is used to download now, IF the
node trusts and has a valid instance of that VL.
* Upgrade protocol version to 2.1. VLs will only be sent to 2.1 and
higher nodes.
* Resolves #2953
* Example: gcc.Debug will use the default version of gcc installed on the
system. gcc-9.Debug will use version 9, regardless of the default. This will
be most useful when the default is older than required or desired.
* When an unknown amendment reaches majority, log an error-level
message, and return a `warnings` array on all successful
admin-level RPC calls to `server_info` and `server_state` with
a message describing the problem, and the expected deadline.
* In addition to the `amendment_blocked` flag returned by
`server_info` and `server_state`, return a warning with a more
verbose description when the server is amendment blocked.
* Check on every flag ledger to see if the amendment(s) have lost
majority. Log again if they haven't; resume normal operations if they have.
The intention is to give operators earlier warning that their
instances are in danger of being amendment blocked, which will
hopefully motivate them to update ahead of time.
* update EP and find package requirements
* minor protobuf/libarchive build fixes
* change travis release builds to nounity to
ameliorate vm memory exhaustion.
FIXES: #3223, #3232
* In and Out parameters were swapped when calculating the rate
* In and out qualities were not calculated correctly; use existing functions
to get the qualities
* Added tests to check that theoretical quality matches actual computed quality
* Remove in/out parameter from qualityUpperBound
* Rename an overload of qualityUpperBound to adjustQualityWithFees
* Add fix amendment
STAmount::soTime and soTime2 were time based "amendment like"
switches to control small changes in behavior for STAmount.
soTime2, which was the most recent, was dated Feb 27, 2016.
That's over 3 years ago.
The main reason to retain these soTimes would be to replay
old transactions. The likelihood of needing to replay a
transaction from over three years ago is pretty low. So it
makes sense to remove these soTime values.
In Flow_test the testZeroOutputStep() test is removed. That
test started to fail when the STAmount::soTimes were removed.
The original author of the test confirmed that the code being
tested by that unit test has since been removed, so it makes
sense to remove the test as well.
* use tagged containers for pkg build
* update build images
* continue to build container images in pipeline, but allow
failure (non-block)
* limit travis macos cache
* add vs2019 windows to travis
* remove xcode 9 travis build
* remove clang5/6 from CI and update min version of Clang required in
cmake
* break windows CI build into stages to reduce timeouts
* update datelib
* add if condition to travis builds to allow commit message to limit
builds by platform
FIXES: #2847
* Transactions that are submitted with the fail_hard flag
and that result in any TER code besides tesSUCCESS shall
be neither queued nor held.
[FOLD] Keep tec results out of the open ledger when fail_hard:
* Improve TransactionStatus const correctness, and remove redundant
`local` check
* Check open ledger tx count in fail_hard tests
* Fix some wrapping
* Remove duplicate test
Remove the implicit conversion from int64 to XRPAmount. The motivation for this
was noticing that many calls to `to_string` with an integer parameter type were
calling the wrong `to_string` function. Since the calls were not prefixed with
`std::`, and there is no ADL to call `std::to_string`, this was converting the
int to an `XRPAmount` and calling `to_string(XRPAmount)`.
Since `to_string(XRPAmount)` did the same thing as `to_string(int)` this error
went undetected.
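A distilled illustration of the pitfall, using stand-in types rather
than the rippled code:
```
#include <cstdint>
#include <string>

namespace example {

struct XRPAmount
{
    std::int64_t drops;
    XRPAmount(std::int64_t d) : drops(d) {}  // the implicit conversion
};

std::string
to_string(XRPAmount const& amount)
{
    return std::to_string(amount.drops);
}

std::string
describe(int count)
{
    // Unqualified lookup finds example::to_string, and the implicit
    // int -> XRPAmount conversion makes the call compile; there is no
    // ADL route to std::to_string for a plain int. Removing the
    // implicit conversion turns this into a compile error, forcing
    // the intended std::to_string(count).
    return to_string(count);
}

}  // namespace example
```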
Prior to this commit, the queue and execution times for individual jobs
were reported independently and could, potentially, be out of sync. This
change reports both values when either one of them exceeds the reporting
threshold.
It's possible an overloaded job queue is causing false alarms on the deadlock
detector. Log a fatal message after 90s, declare a logic error after 600s.
If merged, this commit will report additional information in the
response to the submit command; this will make it easier for developers
to accurately track the status of transaction submission.
Fixes #2851
Treat all `#` characters in config files as comments (and remove)
*unless* the `#` is immediately preceded by `\`. Write a warning
to log file when trailing comments are found/ignored in the config
to let operators know that the treatment of trailing `#` has changed.
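For instance, in the hypothetical config fragment below, the trailing
comment on the first value line is stripped (with a warning written to
the log), while the escaped `#` survives as a literal character in the
value:
```
# A full-line comment, as before.
[example_values]
first_value   # stripped, with a warning written to the log
second\#value
```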
Fixes #3121
The 'network_id' option allows an administrator to specify to which
network they intend a server to connect. Servers can leverage this
information to optimize routing and prune automatically discovered
cross-network connections.
This commit will, if merged:
- add support for the devnet keyword, which corresponds to network ID #2;
- report the network ID, if one is configured, in server_info
The `node_size` configuration option is used to automatically
configure various parameters (cache sizes, timeouts, etc) for
the server.
A previous commit included changes that caused incorrect values
to be returned which can result in sub-optimal performance that
can manifest as difficulty syncing to the network, or increased
disk I/O and/or memory usage. The problem was introduced with
commit 66fad62e66.
This commit, if merged, fixes the code to ensure that the correct
values are returned and introduces a compile-time check to prevent
this issue from reoccurring.
The existing platform detection code was derived from the old Beast
library, which was, itself, derived from JUCE.
This commit removes that code and replaces it with the Boost.Predef
library which defines a consistent set of compiler, architecture,
operating system, library, and other version numbers.
For more on Boost.Predef, please see the Boost documentation. The
documentation for the current version as of this writing is at:
https://www.boost.org/doc/libs/1_71_0/doc/html/predef.html
This commit restructures the HTTP based protocol negotiation that `rippled`
executes and introduces support for negotiation of compression for peer
links which, if implemented, should result in significant bandwidth savings
for some server roles.
This commit also introduces the new `[network_id]` configuration option
that administrators can use to specify which network the server is part of
and intends to join. This makes it possible for servers from different
networks to drop the link early.
The changeset also improves the log messages generated when negotiation
of a peer link upgrade fails. In the past, no useful information would
be logged, making it more difficult for admins to troubleshoot errors.
This commit also fixes RIPD-237 and RIPD-451
* The `tx` command now supports min_ledger and max_ledger fields.
* If the requested transaction isn't found and these fields are
provided, the error response indicates whether or not every
ledger in the provided range was searched.
This fixes #2924
* adding package signing steps for rpm and deb
* first spike at GPG signing with CI and containers
* refine ubuntu portion
* get correct gpg package version
* adding CentOS support
* fixing errors in installing gpg on ubuntu
* base64 decode the GPG key
* fixing line continuations
* revised package signing, looking for package artifacts
* add dpkg-sig to ubuntu image
* sign all deb packages
* add passphrase to GPG process
* repeat signing step for dpkg
* sign all the rpm packages too
* install rpm-sign in the CentOS docker image
* loop through rpm files
* no need for PIN on GPG signing
Collecting the returned and expected values in sets only works if there are no
duplicates. The implementation is changed to use sorted vectors to fix this case.
When the Env::AppBundle constructor throws an exception
it still needs to run ~AppBundle(), otherwise the JobQueue
isn't properly shut down. Specifically the JobQueue
can destruct without waiting on outstanding jobs in the
queue.
This change ensures that if Env::AppBundle constructor
throws, Env::AppBundle::~AppBundle() runs.
This fixes the unit test crash exposed by PR #3047.
FIXES: #3106
Different versions of protobuf produce subtly different
results when given invalid message payloads. This leads to
subtly different behavior when we try to deserialize these
invalid messages. As such, we can't tie success to a
particular exception.
The XRP Ledger utilizes an account model. Unlike systems based on a UTXO
model, XRP Ledger accounts are first-class objects. This design choice
allows the XRP Ledger to offer rich functionality, including the ability
to own objects (offers, escrows, checks, signer lists) as well as other
advanced features, such as key rotation and configurable multi-signing
without needing to change a destination address.
The trade-off is that accounts must be stored on ledger. The XRP Ledger
applies reserve requirements, in XRP, to protect the shared global ledger
from growing excessively large as the result of spam or malicious usage.
Prior to this commit, accounts had been permanent objects; once created,
they could never be deleted.
This commit introduces a new amendment "DeletableAccounts" which, if
enabled, will allow account objects to be deleted by executing the new
"AccountDelete" transaction. Any funds remaining in the account will
be transferred to an account specified in the deletion transaction.
The amendment changes the mechanics of account creation; previously
a new account would have an initial sequence number of 1. Accounts
created after the amendment will have an initial sequence number that
is equal to the sequence of the ledger in which the account was created.
Accounts can only be deleted if they are not associated with any
obligations (like RippleStates, Escrows, or PayChannels) and if the
current ledger sequence number exceeds the account's sequence number
by at least 256 so that, if recreated, the account can be protected
from transaction replay.
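A sketch of the sequence-gap rule in isolation (an illustrative
helper, not the actual transactor code):
```
#include <cstdint>

// An account may be deleted only once the current ledger sequence
// exceeds the account's sequence by at least 256. A recreated account
// starts at the sequence of its creation ledger, so old transactions
// cannot be replayed against it.
bool
sequenceGapSatisfied(std::uint32_t accountSeq, std::uint32_t ledgerSeq)
{
    return ledgerSeq >= accountSeq + 256;
}
```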
* replace boost::beast::detail::iequals with boost::iequals
* replace deprecated `buffers` function with `make_printable`
* replace boost::beast::detail::ascii_tolower with lambda
* add missing includes
The validation stream only reported the ephemeral signing key for validators
which use manifests. This made tracking unnecessarily difficult for clients
processing the data stream.
With this change, the validator's long-term master public key is also
included.
This commit fixes #3005
* Provide proposing validator's master key in the validation stream
subscription JSON responses.
Implement code review changes.
FIXES: #3005
* Explain that Arch/Manjaro/etc. need `-Dstatic=OFF` during the configure step
* move configuration options closer to that step
* separate sub-headers for configuration and build
Different compilers are handling the shadow warning differently. In particular,
some are warning about types being shadowed by variables. Until these can be
resolved, the shadow warning is being disabled.
Note: the shadow warning was originally enabled to help with the structured
bindings patch. As that is now complete, it's less important to keep this
warning.
FIXES: #2527
* define custom docker image for travis-linux builds based on
package build image
* add macos builds
* add windows builds (currently allowed to fail)
* improve build and shell scripts as required for the CI envs
* add asio timer latency workaround
* omit several manual tests from TravisCI which cause memory exhaustion
This commit allows server operators to reserve slots for specific
peers (identified by the peer's public node identity) and to make
changes to the reservations while the server is operating.
This commit closes #2938
- Add docker container tags for "latest_BRANCH"
- Prevent different branches from overwriting deb repo artifacts
- Manual approval always required before pushing to prod
The original intent was that RPC error codes were not stable.
But those codes were made available through the API, so some
users came to depend on the code values. This change adapts
to the current state of affairs.
Manifests which are revoked can include ephemeral keys, although doing
so does not make sense: a revoked manifest isn't used for signing and
so doesn't need to define an ephemeral key.
A running instance of the server tracks the number of protocol messages
and the number of bytes it sends and receives.
This commit makes the counters more granular, allowing server operators
to better track and understand bandwidth usage.
* Add construction and assignment from a generic
contiguous container. Both compile-time and run time
safety checks are made to ensure the safety of this
conversion.
* Remove base_uint::copyFrom. The generic copy assignment
operator now does this functionality with enhanced
safety and better syntax.
* Remove construction from and dependence on Blob.
The generic constructor and assignment now handle this
functionality.
* Fix client code to adhere to this new API.
* Remove the use of fromVoid in PeerImp.cpp as it was
an inappropriate use of this dangerous API. The
generic container constructors do it with enhanced
safety and better syntax.
* Rename data member pn to data_ and make it private.
* Remove constraint from hash_append
* Remove array_type alias
This PR addresses a problem where the server could hang indefinitely
on shutdown. The cause of the problem was that the SNTPClock class was
not binding the socket to an endpoint on initialization. This could
cause an error to be sent to the read handler. Unfortunately, the
handler ignored the error, read again, and entered a loop that
prevented the io_service from ever completing.
- Explain how to bind to both IPv4 and IPv6 interfaces
- Provide a hint in the default [port_peer] section
- Do not enable it by default
Note that on Linux, use of '::' and IPv4-mapped IPv6 depends on a sysctl value
setting 'net.ipv6.bindv6only = 0' which seems to be the default on most Linux
distributions.
- Use `std::lock` when grabbing multiple mutexes to ensure consistent
locking order and avoid deadlocks.
- Reduce the scope of the master mutex lock by releasing it prior to
calling setHeartbeatTimer
A tiny input amount to a payment step can cause the step to output zero. For
example, if a previous step outputs a dust amount of 10^-80, and this step is
an IOU -> XRP offer, the offer may output zero drops. In this case, call the
strand dry. Before this patch, an error would be logged and the strand would
be called dry; in debug mode an assert triggered.
Note, this patch is not transaction breaking, as the caller did not use the
ter code. The caller only checked for success or failure.
This patch addresses the GitHub issue reported here:
https://github.com/ripple/rippled/issues/2929
This patch removes calls to several deprecated asio functions.
* `io_service::post` becomes `post` (free function)
* `io_service::work` becomes `executor_work_guard`
* `io_service::wrap` becomes `bind_executor`
* `get_io_context` becomes `get_executor` or `get_executor().context()`
This patch was tested with boost 1.69 and 1.70. The functions
`ripple::get_lowest_layer` and `beast::create_waitable_timer` are required to
handle a breaking difference between these versions. When rippled no longer
needs to support pre 1.70 boost versions, both of these functions may be
removed, and the waitable timer injections may also be removed.
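In distilled form, the substitutions look like this (an illustrative
snippet against the boost 1.70-era API):
```
#include <boost/asio.hpp>

void examples(boost::asio::io_context& ioc)
{
    // io_service::post  ->  free-function post
    boost::asio::post(ioc, [] { /* handler */ });

    // io_service::work  ->  executor_work_guard
    auto guard = boost::asio::make_work_guard(ioc);

    // io_service::wrap  ->  bind_executor
    auto bound = boost::asio::bind_executor(
        ioc.get_executor(), [] { /* handler */ });
}
```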
The new parse logic is more strict but handles more cases. If an exception
is thrown, just bail.
* Allow parsing unenclosed IPv6 addresses without port
* Improve string construction
* Reduce nesting levels of code
The XRP Ledger allows an account to authorize a secondary key pair,
called a regular key pair, to sign future transactions, while keeping
the master key pair offline.
The regular key pair can be changed as often as desired, without
requiring other changes on the account.
If merged, this commit corrects a minor technical flaw which would
allow an account holder to specify the master key as the account's
new regular key.
The change is controlled by the `fixMasterKeyAsRegularKey` amendment
which, if enabled, will:
1. Prevent specifying an account's master key as the account's
regular key.
2. Prevent the "Disable Master Key" flag from incorrectly affecting
regular keys.
Before this patch, jtx allowed non-invocable functions to be passed to
operator(). However, these arguments are ignored. This caused erroneous
code such as:
```
env (offer (account_to_test, BTC (250), XRP (1000)),
offers (account_to_test, 1));
```
While it looks like the number of offers is checked, it is not. The `offers`
funclet is never run. While we could modify jtx to make the above code correct,
a cleaner solution is to run post conditions in a `require` statement after a
transaction runs.
At this point all of the jss::* names are defined in the same
file. That file has been named JsonFields.h. That file name
has little to do with either JsonStaticStrings (which is what
jss is short for) or with jss. The file is renamed to jss.h
so the file name better reflects what the file contains.
All includes of that file are fixed. A few include order
issues are tidied up along the way.
Formerly an SOTemplate was default constructed and its elements
added using push_back(). This left open the possibility of a
malformed SOTemplate if adding one of the elements caused a throw.
With this commit the SOTemplate requires an initializer_list of
its elements at construction. Elements may not be added after
construction. With this approach either the SOTemplate is fully
constructed with all of its elements or the constructor throws,
which prevents an invalid SOTemplate from even existing.
This change requires all SOTemplate construction to be adjusted
at the call site. Those changes are also in this commit.
The SOE_Flags enum is also renamed to SOEStyle, which harmonizes
the name with other uses in the code base. SOEStyle elements
are renamed (slightly) to have an "soe" prefix rather than "SOE_".
This heads toward reserving identifiers with all upper case for
macros. The new style also aligns with other prominent enums in
the code base like the collection of TER identifiers.
SOElement is adjusted so it can be stored directly in an STL
container, rather than requiring storage in a unique_ptr.
Correspondingly, unique_ptr usage is removed from both
SOTemplate and KnownFormats.
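Under the new scheme a template is fully specified at construction; a
hypothetical example (field choices are illustrative, and the rippled
protocol headers are presumed):
```
// Either every element is installed, or the constructor throws and no
// SOTemplate object ever exists in a partial state.
SOTemplate const exampleFormat{
    {sfAccount,  soeREQUIRED},
    {sfSequence, soeREQUIRED},
    {sfFlags,    soeOPTIONAL},
};
```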
The new 'Domain' field allows validator operators to associate a domain
name with their manifest in a transparent and independently verifiable
fashion.
It is important to point out that while this system can cryptographically
prove that a particular validator claims to be associated with a domain
it does *NOT* prove that the validator is, actually, associated with that
domain.
Domain owners will have to cryptographically attest to operating particular
validators that claim to be associated with that domain. One option for
doing so would be by making available a file over HTTPS under the domain
being claimed, which is verified separately (e.g. by ensuring that the
certificate used to serve the file matches the domain being claimed) and
which contains the long-term master public keys of validator(s) associated
with that domain.
Credit for an early prototype of this idea goes to GitHub user @cryptobrad
who introduced a PR that would allow a validator list publisher to attest
that a particular validator was associated with a domain. The idea may be
worth revisiting as a way of verifying the domain name claimed by the
validator's operator.
Resource limits were not properly applied to connections with
known IP addresses but no corresponding users.
Add unit tests for unlimited vs. limited ports.
An audit showed that a number of the RPC error codes in
ErrorCodes.h are no longer used in the code base. The unused
codes were removed from the file along with their support code
in ErrorCodes.cpp.
The ledger already declared a transaction that is both single-
and multi-signed to be malformed. This commit just adds some checking in
the signing RPC commands (like submit and sign_for), which allows
that sort of error to be identified a bit closer to the user.
In the process of adding this code a bug was found in the
RPCCall unit test. That bug is fixed as well.