bthomee
2025-02-13 15:32:43 +00:00
parent 1d6907c0f0
commit 0bf47ae5ed
34 changed files with 564 additions and 544 deletions

View File

@@ -350,7 +350,7 @@ Friends</h2></td></tr>
<tr class="separator:aff78bcfb98b735a41d082871e735ccc7"><td class="memSeparator" colspan="2">&#160;</td></tr>
</table>
<a name="details" id="details"></a><h2 class="groupheader">Detailed Description</h2>
<div class="textblock"><h2><a class="anchor" id="autotoc_md569"></a>
<div class="textblock"><h2><a class="anchor" id="autotoc_md570"></a>
Trusted Validators List</h2>
<p >Rippled accepts ledger proposals and validations from trusted validator nodes. A ledger is considered fully validated once the number of trusted validations received for it meets or exceeds a quorum value.</p>
<p >This class manages the set of validation public keys the local rippled node trusts. The list of trusted keys is populated using the keys listed in the configuration file as well as lists signed by trusted publishers. The trusted publisher public keys are specified in the config.</p>
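<p >As an illustrative sketch only: the section names below follow the stock <code>validators.txt</code> shipped with rippled, while every key and URL is an obvious placeholder, not a real validator or publisher.</p>

```ini
# Illustrative config sketch: placeholder keys and URL, not real publishers.
[validators]
n9KPlaceholderValidatorPublicKey0000000000000000000

[validator_list_sites]
https://example.com/validator-list.json

[validator_list_keys]
ED00000000000000000000000000000000000000000000000000000000000000
```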

View File

@@ -224,7 +224,7 @@ Friends</h2></td></tr>
<tr class="separator:a13d17a86ad8d1ecdf3e4d2b99c51c03c"><td class="memSeparator" colspan="2">&#160;</td></tr>
</table>
<a name="details" id="details"></a><h2 class="groupheader">Detailed Description</h2>
<div class="textblock"><h2><a class="anchor" id="autotoc_md570"></a>
<div class="textblock"><h2><a class="anchor" id="autotoc_md571"></a>
Validator Sites</h2>
<p >This class manages the set of configured remote sites used to fetch the latest published recommended validator lists.</p>
<p >Lists are fetched at a regular interval. Fetched lists are expected to be in JSON format and contain the following fields:</p>

View File

@@ -73,20 +73,20 @@ $(function() {
</div><!--header-->
<div class="contents">
<div class="textblock"><p ><a class="anchor" id="md____w_rippled_rippled_README"></a> The <a href="https://xrpl.org/">XRP Ledger</a> is a decentralized cryptographic ledger powered by a network of peer-to-peer nodes. The XRP Ledger uses a novel Byzantine Fault Tolerant consensus algorithm to settle and record transactions in a secure distributed database without a central operator.</p>
<h1><a class="anchor" id="autotoc_md209"></a>
<h1><a class="anchor" id="autotoc_md210"></a>
XRP</h1>
<p ><a href="https://xrpl.org/xrp.html">XRP</a> is a public, counterparty-free asset native to the XRP Ledger, and is designed to bridge the many different currencies in use worldwide. XRP is traded on the open-market and is available for anyone to access. The XRP Ledger was created in 2012 with a finite supply of 100 billion units of XRP.</p>
<h1><a class="anchor" id="autotoc_md210"></a>
<h1><a class="anchor" id="autotoc_md211"></a>
rippled</h1>
<p >The server software that powers the XRP Ledger is called <code>rippled</code> and is available in this repository under the permissive <a class="el" href="md____w_rippled_rippled_LICENSE.html">ISC open-source license</a>. The <code>rippled</code> server software is written primarily in C++ and runs on a variety of platforms. The <code>rippled</code> server software can run in several modes depending on its <a href="https://xrpl.org/rippled-server-modes.html">configuration</a>.</p>
<p >If you are interested in running an <b>API Server</b> (including a <b>Full History Server</b>), take a look at <a href="https://github.com/XRPLF/clio">Clio</a>. (rippled Reporting Mode has been replaced by Clio.)</p>
<h2><a class="anchor" id="autotoc_md211"></a>
<h2><a class="anchor" id="autotoc_md212"></a>
Build from Source</h2>
<ul>
<li><a class="el" href="md____w_rippled_rippled_BUILD.html">Read the build instructions in <code>BUILD.md</code></a></li>
<li>If you encounter any issues, please <a href="https://github.com/XRPLF/rippled/issues">open an issue</a></li>
</ul>
<h1><a class="anchor" id="autotoc_md212"></a>
<h1><a class="anchor" id="autotoc_md213"></a>
Key Features of the XRP Ledger</h1>
<ul>
<li><b><a href="https://xrpl.org/xrp-ledger-overview.html#censorship-resistant-transaction-processing">Censorship-Resistant Transaction Processing</a>:</b> No single party decides which transactions succeed or fail, and no one can "roll back" a transaction after it completes. As long as those who choose to participate in the network keep it healthy, they can settle transactions in seconds.</li>
@@ -97,7 +97,7 @@ Key Features of the XRP Ledger</h1>
<li><b><a href="https://xrpl.org/xrp-ledger-overview.html#modern-features-for-smart-contracts">Modern Features for Smart Contracts</a>:</b> Features like Escrow, Checks, and Payment Channels support cutting-edge financial applications including the <a href="https://interledger.org/">Interledger Protocol</a>. This toolbox of advanced features comes with safety features like a process for amending the network and separate checks against invariant constraints.</li>
<li><b><a href="https://xrpl.org/xrp-ledger-overview.html#on-ledger-decentralized-exchange">On-Ledger Decentralized Exchange</a>:</b> In addition to all the features that make XRP useful on its own, the XRP Ledger also has a fully-functional accounting system for tracking and trading obligations denominated in any way users want, and an exchange built into the protocol. The XRP Ledger can settle long, cross-currency payment paths and exchanges of multiple currencies in atomic transactions, bridging gaps of trust with XRP.</li>
</ul>
<h1><a class="anchor" id="autotoc_md213"></a>
<h1><a class="anchor" id="autotoc_md214"></a>
Source Code</h1>
<p >Here are some good places to start learning the source code:</p>
<ul>
@@ -105,7 +105,7 @@ Source Code</h1>
<li>Read <a href="./Builds/levelization">the levelization document</a> to get an idea of the internal dependency graph.</li>
<li>In the big picture, the <code>main</code> function constructs an <code>ApplicationImp</code> object, which implements the <code>Application</code> virtual interface. Almost every component in the application takes an <code>Application&amp;</code> parameter in its constructor, typically named <code>app</code> and stored as a member variable <code>app_</code>. This allows most components to depend on any other component.</li>
</ul>
<h2><a class="anchor" id="autotoc_md214"></a>
<h2><a class="anchor" id="autotoc_md215"></a>
Repository Contents</h2>
<table class="markdownTable">
<tr class="markdownTableHead">
@@ -122,14 +122,14 @@ Repository Contents</h2>
<td class="markdownTableBodyLeft"><code>./src</code> </td><td class="markdownTableBodyLeft">Source code. </td></tr>
</table>
<p >Some of the directories under <code>src</code> are external repositories included using git-subtree. See those directories' README files for more details.</p>
<h1><a class="anchor" id="autotoc_md215"></a>
<h1><a class="anchor" id="autotoc_md216"></a>
Additional Documentation</h1>
<ul>
<li><a href="https://xrpl.org/">XRP Ledger Dev Portal</a></li>
<li><a href="https://xrpl.org/install-rippled.html">Setup and Installation</a></li>
<li><a href="https://xrplf.github.io/rippled/">Source Documentation (Doxygen)</a></li>
</ul>
<h1><a class="anchor" id="autotoc_md216"></a>
<h1><a class="anchor" id="autotoc_md217"></a>
See Also</h1>
<ul>
<li><a href="https://github.com/XRPLF/clio">Clio API Server for the XRP Ledger</a></li>

View File

@@ -183,10 +183,11 @@ Build and Test</h3>
</div><!-- fragment --><p class="startli">You can use any directory name. Conan treats your working directory as an install folder and generates files with implementation details. You don't need to worry about these files, but make sure to change your working directory to your build directory before calling Conan.</p>
<p class="startli"><b>Note:</b> You can specify a directory for the installation files by adding the <code>install-folder</code> or <code>-if</code> option to every <code>conan install</code> command in the next step.</p>
</li>
<li><p class="startli">Generate CMake files for every configuration you want to build.</p>
<li><p class="startli">Use conan to generate CMake files for every configuration you want to build:</p>
<div class="fragment"><div class="line">conan install .. --output-folder . --build missing --settings build_type=Release</div>
<div class="line">conan install .. --output-folder . --build missing --settings build_type=Debug</div>
</div><!-- fragment --><p class="startli">For a single-configuration generator, e.g. <code>Unix Makefiles</code> or <code>Ninja</code>, you only need to run this command once. For a multi-configuration generator, e.g. <code>Visual Studio</code>, you may want to run it more than once.</p>
</div><!-- fragment --><p class="startli">To build Debug, in the next step, be sure to set <code>-DCMAKE_BUILD_TYPE=Debug</code></p>
<p class="startli">For a single-configuration generator, e.g. <code>Unix Makefiles</code> or <code>Ninja</code>, you only need to run this command once. For a multi-configuration generator, e.g. <code>Visual Studio</code>, you may want to run it more than once.</p>
<p class="startli">Each of these commands should also have a different <code>build_type</code> setting. A second command with the same <code>build_type</code> setting will overwrite the files generated by the first. You can pass the build type on the command line with <code>--settings build_type=$BUILD_TYPE</code> or in the profile itself, under the section <code>[settings]</code> with the key <code>build_type</code>.</p>
<p class="startli">If you are using a Microsoft Visual C++ compiler, then you will need to ensure consistency between the <code>build_type</code> setting and the <code>compiler.runtime</code> setting.</p>
<p class="startli">When <code>build_type</code> is <code>Release</code>, <code>compiler.runtime</code> should be <code>MT</code>.</p>
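<p >A minimal sketch of keeping the two settings in sync (the <code>MT</code> pairing is from the text above; <code>MTd</code> for Debug is an assumption based on the usual MSVC debug runtime, and the command is only echoed here, not executed):</p>

```shell
# Derive the matching MSVC runtime from the chosen build_type so the two
# settings cannot drift apart.
build_type=Release
case "$build_type" in
  Release) runtime=MT ;;
  Debug)   runtime=MTd ;;
esac
conan_cmd="conan install .. --output-folder . --build missing --settings build_type=$build_type --settings compiler.runtime=$runtime"
echo "$conan_cmd"
```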
@@ -196,12 +197,19 @@ Build and Test</h3>
</div><!-- fragment --></li>
<li><p class="startli">Configure CMake and pass the toolchain file generated by Conan, located at <code>$OUTPUT_FOLDER/build/generators/conan_toolchain.cmake</code>.</p>
<p class="startli">Single-config generators:</p>
<p class="startli">Pass the CMake variable <a href="https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html"><code>CMAKE_BUILD_TYPE</code></a> and make sure it matches the one of the <code>build_type</code> settings you chose in the previous step.</p>
<p class="startli">For example, to build Debug, in the next command, replace "Release" with "Debug"</p>
<div class="fragment"><div class="line">cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release -Dxrpld=ON -Dtests=ON ..</div>
</div><!-- fragment --><p class="startli">Pass the CMake variable <a href="https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html"><code>CMAKE_BUILD_TYPE</code></a> and make sure it matches the <code>build_type</code> setting you chose in the previous step.</p>
<p class="startli">Multi-config generators:</p>
<div class="fragment"><div class="line">cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -Dxrpld=ON -Dtests=ON ..</div>
</div><!-- fragment --><p class="startli"><b>Note:</b> You can pass build options for <code>rippled</code> in this step.</p>
</li>
</div><!-- fragment --></li>
</ol>
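<p >Taken together, a single-configuration walkthrough might look like the following sketch (the directory name is arbitrary and the flags mirror the examples above; the steps are echoed rather than executed so the sketch stays self-contained):</p>

```shell
# Assemble the single-config Release flow described above into one script.
BUILD_TYPE=Release
steps="mkdir -p .build && cd .build
conan install .. --output-folder . --build missing --settings build_type=$BUILD_TYPE
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE -Dxrpld=ON -Dtests=ON ..
cmake --build ."
echo "$steps"
```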
<pre class="fragment">Multi-config generators:
```
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -Dxrpld=ON -Dtests=ON ..
```
**Note:** You can pass build options for `rippled` in this step.
</pre><ol type="1">
<li><p class="startli">Build <code>rippled</code>.</p>
<p class="startli">For a single-configuration generator, it will build whatever configuration you passed for <code>CMAKE_BUILD_TYPE</code>. For a multi-configuration generator, you must pass the option <code>--config</code> to select the build configuration.</p>
<p class="startli">Single-config generators:</p>
@@ -279,6 +287,18 @@ Conan</h3>
<li>Re-run conan install.</li>
</ol>
<h3><a class="anchor" id="autotoc_md38"></a>
'protobuf/port_def.inc' file not found</h3>
<p >If <code>cmake --build .</code> results in an error due to a missing protobuf file, then you might have generated CMake files for a different <code>build_type</code> than the <code>CMAKE_BUILD_TYPE</code> you passed to CMake.</p>
<div class="fragment"><div class="line">/rippled/.build/pb-xrpl.libpb/xrpl/proto/ripple.pb.h:10:10: fatal error: &#39;google/protobuf/port_def.inc&#39; file not found</div>
<div class="line"> 10 | #include &lt;google/protobuf/port_def.inc&gt;</div>
<div class="line"> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~</div>
<div class="line">1 error generated.</div>
</div><!-- fragment --><p >For example, if you want to build Debug:</p>
<ol type="1">
<li>For conan install, pass <code>--settings build_type=Debug</code></li>
<li>For cmake, pass <code>-DCMAKE_BUILD_TYPE=Debug</code></li>
</ol>
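<p >One way to avoid the mismatch entirely is to drive both tools from a single variable, as in this sketch (same flags as the two steps above; commands are echoed, not run):</p>

```shell
# One source of truth for the build type avoids the mismatch that causes
# the missing port_def.inc error described above.
BUILD_TYPE=Debug
conan_flags="--settings build_type=$BUILD_TYPE"
cmake_flags="-DCMAKE_BUILD_TYPE=$BUILD_TYPE"
echo "conan install .. --output-folder . --build missing $conan_flags"
echo "cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake $cmake_flags .."
```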
<h3><a class="anchor" id="autotoc_md39"></a>
no std::result_of</h3>
<p >If your compiler version is recent enough to have removed <code><a class="elRef" href="http://en.cppreference.com/w/cpp/types/result_of.html">std::result_of</a></code> as part of C++20, e.g. Apple Clang 15.0, then you might need to add a preprocessor definition to your build.</p>
<div class="fragment"><div class="line">conan profile update &#39;options.boost:extra_b2_flags=&quot;define=BOOST_ASIO_HAS_STD_INVOKE_RESULT&quot;&#39; default</div>
@@ -286,19 +306,19 @@ no std::result_of</h3>
<div class="line">conan profile update &#39;env.CXXFLAGS=&quot;-DBOOST_ASIO_HAS_STD_INVOKE_RESULT&quot;&#39; default</div>
<div class="line">conan profile update &#39;conf.tools.build:cflags+=[&quot;-DBOOST_ASIO_HAS_STD_INVOKE_RESULT&quot;]&#39; default</div>
<div class="line">conan profile update &#39;conf.tools.build:cxxflags+=[&quot;-DBOOST_ASIO_HAS_STD_INVOKE_RESULT&quot;]&#39; default</div>
</div><!-- fragment --><h3><a class="anchor" id="autotoc_md39"></a>
</div><!-- fragment --><h3><a class="anchor" id="autotoc_md40"></a>
call to 'async_teardown' is ambiguous</h3>
<p >If you are compiling with an early version of Clang 16, then you might hit a <a href="https://github.com/boostorg/beast/issues/2648">regression</a> when compiling C++20 that manifests as an <a href="https://github.com/boostorg/beast/issues/2661">error in a Boost header</a>. You can workaround it by adding this preprocessor definition:</p>
<div class="fragment"><div class="line">conan profile update &#39;env.CXXFLAGS=&quot;-DBOOST_ASIO_DISABLE_CONCEPTS&quot;&#39; default</div>
<div class="line">conan profile update &#39;conf.tools.build:cxxflags+=[&quot;-DBOOST_ASIO_DISABLE_CONCEPTS&quot;]&#39; default</div>
</div><!-- fragment --><h3><a class="anchor" id="autotoc_md40"></a>
</div><!-- fragment --><h3><a class="anchor" id="autotoc_md41"></a>
recompile with -fPIC</h3>
<p >If you get a linker error suggesting that you recompile Boost with position-independent code, such as:</p>
<div class="fragment"><div class="line">/usr/bin/ld.gold: error: /home/username/.conan/data/boost/1.77.0/_/_/package/.../lib/libboost_container.a(alloc_lib.o):</div>
<div class="line"> requires unsupported dynamic reloc 11; recompile with -fPIC</div>
</div><!-- fragment --><p >Conan most likely downloaded a bad binary distribution of the dependency. This seems to be a <a href="https://github.com/conan-io/conan-center-index/issues/13168">bug</a> in Conan just for Boost 1.77.0 compiled with GCC for Linux. The solution is to build the dependency locally by passing <code>--build boost</code> when calling <code>conan install</code>.</p>
<div class="fragment"><div class="line">conan install --build boost ...</div>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md41"></a>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md42"></a>
Add a Dependency</h2>
<p >If you want to experiment with a new package, follow these steps:</p>
<ol type="1">

View File

@@ -114,7 +114,7 @@ $(function() {
<td class="markdownTableBodyNone">16 </td><td class="markdownTableBodyNone">test/rpc test/app </td></tr>
</table>
<p >(Note that <code>test</code> levelization is <em>much</em> less important and <em>much</em> less strictly enforced than <code>ripple</code> levelization, other than the requirement that <code>test</code> code should <em>never</em> be included in <code>ripple</code> code.)</p>
<h1><a class="anchor" id="autotoc_md43"></a>
<h1><a class="anchor" id="autotoc_md44"></a>
Validation</h1>
<p >The <a href="levelization.sh">levelization.sh</a> script takes no parameters, reads no environment variables, and can be run from any directory, as long as it is in the expected location in the rippled repo. It can be run at any time from within a checked-out repo, and will analyze all the <code>#include</code>s in the rippled source. The only caveat is that it runs much slower under Windows than on Linux; it hasn't yet been tested under macOS. It generates many files of <a href="results">results</a>:</p>
<ul>

View File

@@ -73,10 +73,10 @@ $(function() {
</div><!--header-->
<div class="contents">
<div class="textblock"><p >The XRP Ledger has many and diverse stakeholders, and everyone deserves a chance to contribute meaningful changes to the code that runs the XRPL.</p>
<h1><a class="anchor" id="autotoc_md44"></a>
<h1><a class="anchor" id="autotoc_md45"></a>
Contributing</h1>
<p >We assume you are familiar with the general practice of <a href="https://docs.github.com/en/get-started/quickstart/contributing-to-projects">making contributions on GitHub</a>. This file includes only special instructions specific to this project.</p>
<h2><a class="anchor" id="autotoc_md45"></a>
<h2><a class="anchor" id="autotoc_md46"></a>
Before you start</h2>
<p >The following branches exist in the main project repository:</p>
<ul>
@@ -101,11 +101,11 @@ Before you start</h2>
</li>
</ul>
<p >Regardless of where the branch is created, please open a <em>draft</em> pull request as soon as possible after pushing the branch to Github, to increase visibility, and ease feedback during the development process.</p>
<h2><a class="anchor" id="autotoc_md46"></a>
<h2><a class="anchor" id="autotoc_md47"></a>
Major contributions</h2>
<p >If your contribution is a major feature or breaking change, then you must first write an XRP Ledger Standard (XLS) describing it. Go to <a href="https://github.com/XRPLF/XRPL-Standards/discussions">XRPL-Standards</a>, choose the next available standard number, and open a discussion with an appropriate title to propose your draft standard.</p>
<p >When you submit a pull request, please link the corresponding XLS in the description. An XLS still in draft status is considered a work-in-progress and open for discussion. Please allow time for questions, suggestions, and changes to the XLS draft. It is the responsibility of the XLS author to update the draft to match the final implementation when its corresponding pull request is merged, unless the author delegates that responsibility to others.</p>
<h2><a class="anchor" id="autotoc_md47"></a>
<h2><a class="anchor" id="autotoc_md48"></a>
Before making a pull request</h2>
<p >(Or marking a draft pull request as ready.)</p>
<p >Changes that alter transaction processing must be guarded by an <a href="https://xrpl.org/amendments.html">Amendment</a>. All other changes that maintain the existing behavior do not need an Amendment.</p>
@@ -124,7 +124,7 @@ Before making a pull request</h2>
<li>Every commit should be signed.</li>
<li>Every commit should be well-formed (builds successfully, unit tests passing), as this helps to resolve merge conflicts, and makes it easier to use <code>git bisect</code> to find bugs.</li>
</ul>
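<p >To see why this matters, here is a toy repository (all names and the simulated bug are illustrative) where <code>git bisect run</code> pinpoints the offending commit automatically; this only works because every commit in the range is well-formed enough to build and test on its own:</p>

```shell
set -e
git init -q bisect-demo && cd bisect-demo
git config user.email dev@example.com && git config user.name dev
# Five commits, each writing its number into n.txt.
for i in 1 2 3 4 5; do
  echo "$i" > n.txt && git add n.txt && git commit -q -m "commit $i"
done
first=$(git rev-list --max-parents=0 HEAD)
# Pretend "commit 4" introduced a bug: a checkout is good while n.txt < 4.
git bisect start HEAD "$first" > /dev/null
git bisect run sh -c 'test "$(cat n.txt)" -lt 4' > /dev/null
first_bad=$(git show -s --format=%s refs/bisect/bad)
git bisect reset > /dev/null
echo "$first_bad"   # -> commit 4
```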
<h3><a class="anchor" id="autotoc_md48"></a>
<h3><a class="anchor" id="autotoc_md49"></a>
Good commit messages</h3>
<p >Refer to <a href="https://cbea.ms/git-commit/">"How to Write a Git Commit Message"</a> for general rules on writing a good commit message.</p>
<p >tl;dr </p><blockquote class="doxtable">
@@ -154,7 +154,7 @@ Good commit messages</h3>
<li><code>chore:</code> - Other tasks that don't affect the binary, but don't fit any of the other cases. e.g. formatting, git settings, updating Github Actions jobs.</li>
</ul>
<p >Whenever possible, when updating commits after the PR is open, please add the PR number to the end of the subject line. e.g. <code>test: Add unit tests for Feature X (#1234)</code>.</p>
<h2><a class="anchor" id="autotoc_md49"></a>
<h2><a class="anchor" id="autotoc_md50"></a>
Pull requests</h2>
<p >In general, pull requests use <code>develop</code> as the base branch. The exceptions are</p><ul>
<li>Fixes and improvements to a release candidate use <code>release</code> as the base.</li>
@@ -164,7 +164,7 @@ Pull requests</h2>
<p >Github pull requests are created as "Ready" by default, or you can mark a "Draft" pull request as "Ready". Once a pull request is marked as "Ready", any changes must be added as new commits. Do not force-push to a branch in a pull request under review. (This includes rebasing your branch onto the updated base branch. Use a merge operation instead, or hit the "Update branch" button at the bottom of the Github PR page.) This preserves the ability for reviewers to filter changes since their last review.</p>
<p >A pull request must obtain <b>approvals from at least two reviewers</b> before it can be considered for merge by a Maintainer. Maintainers retain discretion to require more approvals if they feel the credibility of the existing approvals is insufficient.</p>
<p >Pull requests must be merged by <a href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/about-pull-request-merges#squash-and-merge-your-commits">squash-and-merge</a> to preserve a linear history for the <code>develop</code> branch.</p>
<h3><a class="anchor" id="autotoc_md50"></a>
<h3><a class="anchor" id="autotoc_md51"></a>
"Ready to merge"</h3>
<p >A pull request should only have the "Ready to merge" label added when it meets a few criteria:</p>
<ol type="1">
@@ -180,10 +180,10 @@ Pull requests</h2>
<li>Finally, and most importantly, the author of the PR must positively indicate that the PR is ready to merge. That can be accomplished by adding the "Ready to merge" label if their role allows, or by leaving a comment to the effect that the PR is ready to merge.</li>
</ol>
<p >Once the "Ready to merge" label is added, a maintainer may merge the PR at any time, so don't use it lightly.</p>
<h1><a class="anchor" id="autotoc_md51"></a>
<h1><a class="anchor" id="autotoc_md52"></a>
Style guide</h1>
<p >This is a non-exhaustive list of recommended style guidelines. These are not always strictly enforced and serve as a way to keep the codebase coherent rather than a set of <em>thou shalt not</em> commandments.</p>
<h2><a class="anchor" id="autotoc_md52"></a>
<h2><a class="anchor" id="autotoc_md53"></a>
Formatting</h2>
<p >All code must conform to <code>clang-format</code> version 18, according to the settings in <a href="./.clang-format"><code>.clang-format</code></a>, unless the result would be unreasonably difficult to read or maintain. To demarcate lines that should be left as-is, surround them with comments like this:</p>
<div class="fragment"><div class="line">// clang-format off</div>
@@ -201,7 +201,7 @@ Formatting</h2>
</ol>
<p >You can install a pre-commit hook to automatically run <code>clang-format</code> before every commit: </p><div class="fragment"><div class="line">pip3 install pre-commit</div>
<div class="line">pre-commit install</div>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md53"></a>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md54"></a>
Contracts and instrumentation</h2>
<p >We are using <a href="https://antithesis.com/">Antithesis</a> for continuous fuzzing, and keep a copy of the <a href="https://github.com/antithesishq/antithesis-sdk-cpp/">Antithesis C++ SDK</a> in <code>external/antithesis-sdk</code>. One of the aims of fuzzing is to identify bugs by finding external conditions that cause contract violations inside <code>rippled</code>. The contracts are expressed as <code>XRPL_ASSERT</code> or <code>UNREACHABLE</code> (defined in <code><a class="el" href="instrumentation_8h_source.html">include/xrpl/beast/utility/instrumentation.h</a></code>), which are effectively (outside of Antithesis) wrappers for <code>assert(...)</code> with an added name. The purpose of the name is to give each contract a stable identity that does not rely on line numbers.</p>
<p >When <code>rippled</code> is built with the Antithesis instrumentation enabled (using the <code>voidstar</code> CMake option) and run on the Antithesis platform, the contracts become <a href="https://antithesis.com/docs/using_antithesis/properties.html">test properties</a>; otherwise they behave like a regular <code>assert</code>. To learn more about Antithesis, see <a href="https://antithesis.com/docs/introduction/how_antithesis_works.html">How Antithesis Works</a> and the <a href="https://antithesis.com/docs/using_antithesis/sdk/cpp/overview.html#">C++ SDK</a>.</p>
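<p >A hedged sketch of what enabling the instrumentation at configure time could look like (only the <code>voidstar</code> option name comes from the text above; the other flags mirror the build instructions, and the command is echoed rather than run):</p>

```shell
# Configure line with the Antithesis instrumentation option turned on.
configure="cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Debug -Dvoidstar=ON -Dxrpld=ON -Dtests=ON .."
echo "$configure"
```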
@@ -225,7 +225,7 @@ Contracts and instrumentation</h2>
<li>Do not use <code>std::unreachable</code></li>
<li>Do not put contracts where they can be violated by an external condition (e.g. timing, data payload before mandatory validation etc.) as this creates bogus bug reports (and causes crashes of Debug builds)</li>
</ul>
<h2><a class="anchor" id="autotoc_md54"></a>
<h2><a class="anchor" id="autotoc_md55"></a>
Unit Tests</h2>
<p >To execute all unit tests:</p>
<div class="fragment"><div class="line">(Note: Using multiple cores on a Mac M1 can cause spurious test failures. The </div>
@@ -343,7 +343,7 @@ Unit Tests</h2>
<div class="line"> </div>
<div class="line">It also assumes you have a default gpg signing key set up in git. e.g.</div>
</div><!-- fragment --><p> $ git config user.signingkey 968479A1AFF927E37D1A566BB5690EEEBB952194 </p>
<h1><a class="anchor" id="autotoc_md55"></a>
<h1><a class="anchor" id="autotoc_md56"></a>
(This is github's key. Use your own.)</h1>
<div class="fragment"><div class="line">### When and how to merge pull requests</div>
<div class="line"> </div>
@@ -446,25 +446,25 @@ Unit Tests</h2>
<div class="line"> </div>
<div class="line"> The workflow may look something like:</div>
</div><!-- fragment --><p> git fetch --multiple upstreams user1 user2 user3 [...]</p>
<p >git checkout -B release-next --no-track upstream/develop</p>
<h1><a class="anchor" id="autotoc_md56"></a>
Only do an ff-only merge if prbranch1 is either already</h1>
<h1><a class="anchor" id="autotoc_md57"></a>
Only do an ff-only merge if prbranch1 is either already</h1>
<h1><a class="anchor" id="autotoc_md58"></a>
squashed, or needs to be merged with separate commits,</h1>
<h1><a class="anchor" id="autotoc_md59"></a>
and has no merge commits.</h1>
<h1><a class="anchor" id="autotoc_md60"></a>
Use -S on the ff-only merge if prbranch1 isn't signed.</h1>
<p >git merge [-S] --ff-only user1/prbranch1</p>
<p >git merge --squash user2/prbranch2</p>
<p >git commit -S # Use the commit message provided on the PR</p>
<p >git merge --squash user3/prbranch3</p>
<p >git commit -S # Use the commit message provided on the PR</p>
<p >[...]</p>
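<p >The squash workflow above can be rehearsed in a throwaway repository (branch names and messages are illustrative, and <code>-S</code> signing is omitted here):</p>

```shell
set -e
git init -q squash-demo && cd squash-demo
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "base"
git switch -q -c prbranch2          # stands in for user2/prbranch2
echo a > a.txt && git add a.txt && git commit -q -m "wip 1"
echo b > b.txt && git add b.txt && git commit -q -m "wip 2"
git switch -q -                     # back to the integration branch
git merge --squash -q prbranch2     # stage the branch as one change
git commit -q -m "feat: Add feature from prbranch2"
git log --oneline                   # two commits: the squash and "base"
```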
<h1><a class="anchor" id="autotoc_md60"></a>
<h1><a class="anchor" id="autotoc_md61"></a>
Make sure the commits look right</h1>
<p >git log --show-signature "upstream/develop..HEAD"</p>
<p >git push --set-upstream origin</p>
<h1><a class="anchor" id="autotoc_md61"></a>
Continue to "Making the release" to update the version number, so</h1>
<h1><a class="anchor" id="autotoc_md62"></a>
Continue to "Making the release" to update the version number, so</h1>
<h1><a class="anchor" id="autotoc_md63"></a>
everything can be done in one PR.</h1>
<div class="fragment"><div class="line">You can also use the [squash-branches] script.</div>
<div class="line"> </div>
@@ -485,14 +485,14 @@ everything can be done in one PR.</h1>
<p >git diff</p>
<p >git add ${build}</p>
<p >git commit -S -m "Set version to ${v}"</p>
<h1><a class="anchor" id="autotoc_md63"></a>
<h1><a class="anchor" id="autotoc_md64"></a>
You could use your "origin" repo, but some CI tests work better on upstream.</h1>
<p >git push upstream-push</p>
<p >git fetch upstreams</p>
<p >git branch --set-upstream-to=upstream/release-next</p><div class="fragment"><div class="line"> You can also use the [update-version] script.</div>
<div class="line">2. Create a Pull Request for `release-next` with **`develop`** as</div>
<div class="line"> the base branch.</div>
<div class="line"> 1. Use the title &quot;[TRIVIAL] Set version to X.X.X-bX&quot;.</div>
<div class="line"> 2. Instead of the default description template, use the following:</div>
</div><!-- fragment --> <h2><a class="anchor" id="autotoc_md64"></a>
</div><!-- fragment --> <h2><a class="anchor" id="autotoc_md65"></a>
High Level Overview of Change</h2>
<p >This PR only changes the version number. It will be merged as soon as Github CI actions successfully complete. </p><div class="fragment"><div class="line">3. Wait for CI to successfully complete, and get someone to approve</div>
<div class="line"> the PR. (It is safe to ignore known CI issues.)</div>
@@ -518,28 +518,28 @@ High Level Overview of Change</h2>
<div class="line"> abandon 2.4.0-b1, the next attempt will be 2.4.0-b2.</div>
<div class="line">8. Once everything is ready to go, push to `release`.</div>
</div><!-- fragment --><p> git fetch upstreams</p>
<h1><a class="anchor" id="autotoc_md65"></a>
<h1><a class="anchor" id="autotoc_md66"></a>
Just to be safe, do a dry run first:</h1>
<p >git push --dry-run upstream-push release-next:release</p>
<h1><a class="anchor" id="autotoc_md66"></a>
<h1><a class="anchor" id="autotoc_md67"></a>
If everything looks right, push the branch</h1>
<p >git push upstream-push release-next:release</p>
<h1><a class="anchor" id="autotoc_md67"></a>
<h1><a class="anchor" id="autotoc_md68"></a>
Check that all of the branches are updated</h1>
<p >git fetch upstreams</p>
<p >git log -1 --oneline</p>
<h1><a class="anchor" id="autotoc_md68"></a>
The output should look like:</h1>
<h1><a class="anchor" id="autotoc_md69"></a>
The output should look like:</h1>
<h1><a class="anchor" id="autotoc_md70"></a>
0123456789 (HEAD -&gt; upstream/release-next, upstream/release,</h1>
<h1><a class="anchor" id="autotoc_md71"></a>
upstream/develop) Set version to 2.4.0-b1</h1>
<h1><a class="anchor" id="autotoc_md72"></a>
Note that upstream/develop may not be on this commit, but</h1>
<h1><a class="anchor" id="autotoc_md73"></a>
upstream/release must be.</h1>
<h1><a class="anchor" id="autotoc_md74"></a>
Other branches, including some from upstream-push, may also be</h1>
<h1><a class="anchor" id="autotoc_md75"></a>
present.</h1>
<div class="fragment"><div class="line">9. Tag the release, too.</div>
</div><!-- fragment --><p> git tag &lt;version number&gt;</p>
<p >git push upstream-push &lt;version number&gt;</p><div class="fragment"><div class="line">10. Delete the `release-next` branch on the repo. Use the Github UI or:</div>
@@ -583,9 +583,9 @@ present.</h1>
<div class="line"> E.g. For release A.B.C-rcD, use `mergeABCrcD`.</div>
</div><!-- fragment --><p> git fetch upstreams</p>
<p >git checkout --no-track -b mergeABCrcD upstream/develop </p><div class="fragment"><div class="line">2. Merge `release` into your branch.</div>
</div><!-- fragment --> <h1><a class="anchor" id="autotoc_md75"></a>
</div><!-- fragment --> <h1><a class="anchor" id="autotoc_md76"></a>
I like the "--edit --log --verbose" parameters, but they are</h1>
<h1><a class="anchor" id="autotoc_md76"></a>
<h1><a class="anchor" id="autotoc_md77"></a>
not required.</h1>
<p >git merge upstream/release </p><div class="fragment"><div class="line">3. `BuildInfo.cpp` will have a conflict with the version number.</div>
<div class="line"> Resolve it with the version from `develop` - the higher version.</div>
@@ -601,11 +601,11 @@ not required.</h1>
<div class="line"> `develop` back into your branch. Instead rebase preserving merges,</div>
<div class="line"> or do the merge again. (See also the `rerere` git config setting.)</div>
</div><!-- fragment --><p> git rebase --rebase-merges upstream/develop </p>
<h1><a class="anchor" id="autotoc_md77"></a>
<h1><a class="anchor" id="autotoc_md78"></a>
OR</h1>
<p >git reset --hard upstream/develop git merge upstream/release </p><div class="fragment"><div class="line">7. When the PR is ready, push it to `develop`.</div>
</div><!-- fragment --><p> git fetch upstreams</p>
<h1><a class="anchor" id="autotoc_md78"></a>
<h1><a class="anchor" id="autotoc_md79"></a>
Make sure the commits look right</h1>
<p >git log &ndash;show-signature "upstream/develop^..HEAD"</p>
<p >git push upstream-push mergeABCrcD:develop</p>
@@ -647,28 +647,28 @@ Make sure the commits look right</h1>
<div class="line"> more RCs as necessary](#release-candidates-after-the-first)</div>
<div class="line">8. Once everything is ready to go, push to `release` and `master`.</div>
</div><!-- fragment --><p> git fetch upstreams</p>
<h1><a class="anchor" id="autotoc_md79"></a>
<h1><a class="anchor" id="autotoc_md80"></a>
Just to be safe, do dry runs first:</h1>
<p >git push --dry-run upstream-push master-next:release git push --dry-run upstream-push master-next:master</p>
<h1><a class="anchor" id="autotoc_md80"></a>
<h1><a class="anchor" id="autotoc_md81"></a>
If everything looks right, push the branch</h1>
<p >git push upstream-push master-next:release git push upstream-push master-next:master</p>
<h1><a class="anchor" id="autotoc_md81"></a>
<h1><a class="anchor" id="autotoc_md82"></a>
Check that all of the branches are updated</h1>
<p >git fetch upstreams git log -1 --oneline </p>
<h1><a class="anchor" id="autotoc_md82"></a>
The output should look like:</h1>
<h1><a class="anchor" id="autotoc_md83"></a>
0123456789 (HEAD -&gt; upstream/master-next, upstream/master,</h1>
The output should look like:</h1>
<h1><a class="anchor" id="autotoc_md84"></a>
upstream/release) Set version to A.B.0</h1>
0123456789 (HEAD -&gt; upstream/master-next, upstream/master,</h1>
<h1><a class="anchor" id="autotoc_md85"></a>
Note that both upstream/release and upstream/master must be on this</h1>
upstream/release) Set version to A.B.0</h1>
<h1><a class="anchor" id="autotoc_md86"></a>
commit.</h1>
Note that both upstream/release and upstream/master must be on this</h1>
<h1><a class="anchor" id="autotoc_md87"></a>
Other branches, including some from upstream-push, may also be</h1>
commit.</h1>
<h1><a class="anchor" id="autotoc_md88"></a>
Other branches, including some from upstream-push, may also be</h1>
<h1><a class="anchor" id="autotoc_md89"></a>
present.</h1>
<div class="fragment"><div class="line">9. Tag the release, too.</div>
</div><!-- fragment --><p> git tag &lt;version number&gt; git push upstream-push &lt;version number&gt; </p><div class="fragment"><div class="line">10. Delete the `master-next` branch on the repo. Use the Github UI or:</div>
@@ -715,28 +715,28 @@ present.</h1>
<div class="line"> version setting commit remains last</div>
<div class="line">8. Once everything is ready to go, push to `master` **only**.</div>
</div><!-- fragment --><p> git fetch upstreams</p>
<h1><a class="anchor" id="autotoc_md89"></a>
<h1><a class="anchor" id="autotoc_md90"></a>
Just to be safe, do a dry run first:</h1>
<p >git push --dry-run upstream-push master-next:master</p>
<h1><a class="anchor" id="autotoc_md90"></a>
<h1><a class="anchor" id="autotoc_md91"></a>
If everything looks right, push the branch</h1>
<p >git push upstream-push master-next:master</p>
<h1><a class="anchor" id="autotoc_md91"></a>
<h1><a class="anchor" id="autotoc_md92"></a>
Check that all of the branches are updated</h1>
<p >git fetch upstreams git log -1 --oneline </p>
<h1><a class="anchor" id="autotoc_md92"></a>
The output should look like:</h1>
<h1><a class="anchor" id="autotoc_md93"></a>
0123456789 (HEAD -&gt; upstream/master-next, upstream/master) Set version</h1>
The output should look like:</h1>
<h1><a class="anchor" id="autotoc_md94"></a>
to 2.4.1</h1>
0123456789 (HEAD -&gt; upstream/master-next, upstream/master) Set version</h1>
<h1><a class="anchor" id="autotoc_md95"></a>
Note that upstream/master must be on this commit. upstream/release and</h1>
to 2.4.1</h1>
<h1><a class="anchor" id="autotoc_md96"></a>
upstream/develop should not.</h1>
Note that upstream/master must be on this commit. upstream/release and</h1>
<h1><a class="anchor" id="autotoc_md97"></a>
Other branches, including some from upstream-push, may also be</h1>
upstream/develop should not.</h1>
<h1><a class="anchor" id="autotoc_md98"></a>
Other branches, including some from upstream-push, may also be</h1>
<h1><a class="anchor" id="autotoc_md99"></a>
present.</h1>
<div class="fragment"><div class="line">9. Tag the release, too.</div>
</div><!-- fragment --><p> git tag &lt;version number&gt; git push upstream-push &lt;version number&gt; </p><div class="fragment"><div class="line">9. Delete the `master-next` branch on the repo.</div>
@@ -753,9 +753,9 @@ present.</h1>
<div class="line"> E.g. For release 2.2.3, use `merge223`.</div>
</div><!-- fragment --><p> git fetch upstreams</p>
<p >git checkout --no-track -b merge223 upstream/develop </p><div class="fragment"><div class="line">2. Merge master into your branch.</div>
</div><!-- fragment --> <h1><a class="anchor" id="autotoc_md99"></a>
</div><!-- fragment --> <h1><a class="anchor" id="autotoc_md100"></a>
I like the "--edit --log --verbose" parameters, but they are</h1>
<h1><a class="anchor" id="autotoc_md100"></a>
<h1><a class="anchor" id="autotoc_md101"></a>
not required.</h1>
<p >git merge upstream/master </p><div class="fragment"><div class="line">3. `BuildInfo.cpp` will have a conflict with the version number.</div>
<div class="line"> Resolve it with the version from `develop` - the higher version.</div>
@@ -771,11 +771,11 @@ not required.</h1>
<div class="line"> `develop` back into your branch. Instead rebase preserving merges,</div>
<div class="line"> or do the merge again. (See also the `rerere` git config setting.)</div>
</div><!-- fragment --><p> git rebase --rebase-merges upstream/develop </p>
<h1><a class="anchor" id="autotoc_md101"></a>
<h1><a class="anchor" id="autotoc_md102"></a>
OR</h1>
<p >git reset --hard upstream/develop git merge upstream/master </p><div class="fragment"><div class="line">7. When the PR is ready, push it to `develop`.</div>
</div><!-- fragment --><p> git fetch upstreams</p>
<h1><a class="anchor" id="autotoc_md102"></a>
<h1><a class="anchor" id="autotoc_md103"></a>
Make sure the commits look right</h1>
<p >git log --show-signature "upstream/develop..HEAD"</p>
<p >git push upstream-push HEAD:develop </p><div class="fragment"><div class="line">Development on `develop` can proceed as normal. It is recommended to</div>
@@ -799,10 +799,10 @@ Make sure the commits look right</h1>
<div class="line"> </div>
<div class="line">1. Create two branches in the main (`upstream`) repo.</div>
</div><!-- fragment --><p> git fetch upstreams</p>
<h1><a class="anchor" id="autotoc_md103"></a>
<h1><a class="anchor" id="autotoc_md104"></a>
Create a base branch off the tag</h1>
<p >git checkout --no-track -b master-2.1.2 2.1.1 git push upstream-push</p>
<h1><a class="anchor" id="autotoc_md104"></a>
<h1><a class="anchor" id="autotoc_md105"></a>
Create a working branch</h1>
<p >git checkout --no-track -b master212-next master-2.1.2 git push upstream-push</p>
<p >git fetch upstreams ``<code></p><ol type="1">

File diff suppressed because one or more lines are too long

View File

@@ -73,15 +73,15 @@ $(function() {
</div><!--header-->
<div class="contents">
<div class="textblock"><p >For more details on operating an XRP Ledger server securely, please visit <a href="https://xrpl.org/manage-the-rippled-server.html">https://xrpl.org/manage-the-rippled-server.html</a>.</p>
<h1><a class="anchor" id="autotoc_md492"></a>
<h1><a class="anchor" id="autotoc_md493"></a>
Security Policy</h1>
<h2><a class="anchor" id="autotoc_md493"></a>
<h2><a class="anchor" id="autotoc_md494"></a>
Supported Versions</h2>
<p >Software constantly evolves. In order to focus resources, we generally only accept vulnerability reports that affect recent and current versions of the software. We always accept reports for issues present in the <b>master</b>, <b>release</b> or <b>develop</b> branches, as well as in proposed, <a href="https://github.com/ripple/rippled/pulls">open pull requests</a>.</p>
<h2><a class="anchor" id="autotoc_md494"></a>
<h2><a class="anchor" id="autotoc_md495"></a>
Identifying and Reporting Vulnerabilities</h2>
<p >We take security seriously and we do our best to ensure that all our releases are bug free. But we aren't perfect and sometimes things will slip through.</p>
<h3><a class="anchor" id="autotoc_md495"></a>
<h3><a class="anchor" id="autotoc_md496"></a>
Responsible Investigation</h3>
<p >We urge you to examine our code carefully and responsibly, and to disclose any issues that you identify in a responsible fashion.</p>
<p >Responsible investigation includes, but isn't limited to, the following:</p>
@@ -90,7 +90,7 @@ Responsible Investigation</h3>
<li>Not targeting physical security measures, or attempting to use social engineering, spam, distributed denial of service (DDOS) attacks, etc.</li>
<li>Investigating bugs in a way that makes a reasonable, good faith effort not to be disruptive or harmful to the XRP Ledger and the broader ecosystem.</li>
</ul>
<h3><a class="anchor" id="autotoc_md496"></a>
<h3><a class="anchor" id="autotoc_md497"></a>
Responsible Disclosure</h3>
<p >If you discover a vulnerability or potential threat, or if you <em>think</em> you have, please reach out by dropping an email using the contact information below.</p>
<p >Your report should include the following:</p>
@@ -103,7 +103,7 @@ Responsible Disclosure</h3>
</ul>
<p >In your email, please describe the issue or potential threat. If possible, include a "repro" (code that can reproduce the issue) or describe the best way to reproduce and replicate the issue. Please make your report as detailed and comprehensive as possible.</p>
<p >For more information on responsible disclosure, please read this <a href="https://en.wikipedia.org/wiki/Responsible_disclosure">Wikipedia article</a>.</p>
<h2><a class="anchor" id="autotoc_md497"></a>
<h2><a class="anchor" id="autotoc_md498"></a>
Report Handling Process</h2>
<p >Please report the bug directly to us and limit further disclosure. If you want to prove that you knew the bug as of a given time, consider using a cryptographic precommitment: hash the content of your report and publish the hash on a medium of your choice (e.g. on Twitter or as a memo in a transaction) as "proof" that you had written the text at a given point in time.</p>
<p >Once we receive a report, we:</p>
@@ -117,7 +117,7 @@ Report Handling Process</h2>
</ol>
<p >We will triage and respond to your disclosure within 24 hours. Beyond that, we will work to analyze the issue in more detail, formulate, develop and test a fix.</p>
<p >While we commit to responding within 24 hours of your initial report with our triage assessment, we cannot guarantee a response time for the remaining steps. We will communicate with you throughout this process, letting you know where we are and keeping you updated on the timeframe.</p>
<h2><a class="anchor" id="autotoc_md498"></a>
<h2><a class="anchor" id="autotoc_md499"></a>
Bug Bounty Program</h2>
<p ><a href="https://ripple.com">Ripple</a> is generously sponsoring a bug bounty program for vulnerabilities in <a href="https://github.com/XRPLF/rippled"><code>rippled</code></a> (and other related projects, like <a href="https://github.com/XRPLF/xrpl.js"><code>xrpl.js</code></a>, <a href="https://github.com/XRPLF/xrpl-py"><code>xrpl-py</code></a>, <a href="https://github.com/XRPLF/xrpl4j"><code>xrpl4j</code></a>).</p>
<p >This program allows us to recognize and reward individuals or groups that identify and report bugs. In summary, in order to qualify for a bounty, the bug must be:</p>
@@ -130,7 +130,7 @@ Bug Bounty Program</h2>
<li><b>Unused</b>. If you use the exploit to attack the XRP Ledger, you do not qualify for a bounty. If you report a vulnerability used in an ongoing or past attack and there is specific, concrete evidence that suggests you are the attacker we reserve the right not to pay a bounty.</li>
</ol>
<p >The amount paid varies dramatically. Vulnerabilities that are harmless on their own, but could form part of a critical exploit will usually receive a bounty. Full-blown exploits can receive much higher bounties. Please don't hold back partial vulnerabilities while trying to construct a full-blown exploit. We will pay a bounty to anyone who reports a complete chain of vulnerabilities even if they have reported each component of the exploit separately and those vulnerabilities have been fixed in the meantime. However, to qualify for the full bounty, you must have been the first to report each of the partial exploits.</p>
<h3><a class="anchor" id="autotoc_md499"></a>
<h3><a class="anchor" id="autotoc_md500"></a>
Contacting Us</h3>
<p >To report a qualifying bug, please send a detailed report to:</p>
<table class="markdownTable">

View File

@@ -72,13 +72,13 @@ $(function() {
<div class="headertitle"><div class="title">Negative UNL Engineering Spec </div></div>
</div><!--header-->
<div class="contents">
<div class="textblock"><h1><a class="anchor" id="autotoc_md106"></a>
<div class="textblock"><h1><a class="anchor" id="autotoc_md107"></a>
The Problem Statement</h1>
<p >The moment-to-moment health of the XRP Ledger network depends on the health and connectivity of a small number of computers (nodes). The most important nodes are validators, specifically ones listed on the unique node list (UNL). Ripple publishes a recommended UNL that most network nodes use to determine which peers in the network are trusted. Although most validators use the same list, they are not required to. The XRP Ledger network progresses to the next ledger when enough validators reach agreement (above the minimum quorum of 80%) about what transactions to include in the next ledger.</p>
<p >As an example, if there are 10 validators on the UNL, at least 8 validators have to agree with the latest ledger for it to become validated. But what if enough of those validators are offline to drop the network below the 80% quorum? The XRP Ledger network favors safety/correctness over advancing the ledger, which means that if enough validators are offline, the network will not be able to validate ledgers.</p>
<p >Unfortunately validators can go offline at any time for many different reasons. Power outages, network connectivity issues, and hardware failures are just a few scenarios where a validator would appear "offline". Given that most of these events are temporary, it would make sense to temporarily remove that validator from the UNL. But the UNL is updated infrequently and not every node uses the same UNL. So instead of removing the unreliable validator from the Ripple recommended UNL, we can create a second negative UNL which is stored directly on the ledger (so the entire network has the same view). This will help the network see which validators are <b>currently</b> unreliable, and adjust their quorum calculation accordingly.</p>
<p ><em>Improving the liveness of the network is the main motivation for the negative UNL.</em></p>
<h2><a class="anchor" id="autotoc_md107"></a>
<h2><a class="anchor" id="autotoc_md108"></a>
Targeted Faults</h2>
<p >In order to determine which validators are unreliable, we need to clearly define what kind of faults to measure and analyze. We want to deal with the faults we frequently observe in the production network. Hence we will only monitor for validators that do not reliably respond to network messages or that send out validations disagreeing with the locally generated validations. We will not target other byzantine faults.</p>
<p >To track whether or not a validator is responding to the network, we could monitor it with a “heartbeat” protocol. Instead of creating a new heartbeat protocol, we can leverage some existing protocol messages to mimic the heartbeat. We picked validation messages because validators should send one and only one validation message per ledger. In addition, we only count the validation messages that agree with the local node's validations.</p>
@@ -120,9 +120,9 @@ Targeted Faults</h2>
</ul>
</li>
</ol>
<h1><a class="anchor" id="autotoc_md108"></a>
<h1><a class="anchor" id="autotoc_md109"></a>
External Interactions</h1>
<h2><a class="anchor" id="autotoc_md109"></a>
<h2><a class="anchor" id="autotoc_md110"></a>
Message Format Changes</h2>
<p >This proposal will:</p><ol type="1">
<li>add a new pseudo-transaction type</li>
@@ -131,10 +131,10 @@ Message Format Changes</h2>
<li>add the negative UNL to the ledger data structure.</li>
</ol>
<p >Any tools or systems that rely on the format of this data will have to be updated.</p>
<h2><a class="anchor" id="autotoc_md110"></a>
<h2><a class="anchor" id="autotoc_md111"></a>
Amendment</h2>
<p >This feature <b>will</b> need an amendment to activate.</p>
<h1><a class="anchor" id="autotoc_md111"></a>
<h1><a class="anchor" id="autotoc_md112"></a>
Design</h1>
<p >This section discusses the following topics about the Negative UNL design:</p>
<ul>
@@ -146,16 +146,16 @@ Design</h1>
<li>Filter validation messages</li>
<li>High level sequence diagram of code changes</li>
</ul>
<h2><a class="anchor" id="autotoc_md112"></a>
<h2><a class="anchor" id="autotoc_md113"></a>
Negative UNL Protocol Overview</h2>
<p >Every ledger stores a list of zero or more unreliable validators. Updates to the list must be approved by the validators using the consensus mechanism that validators use to agree on the set of transactions. The list is used only when checking if a ledger is fully validated. If a validator V is in the list, nodes with V in their UNL adjust the quorum and V's validation message is not counted when verifying if a ledger is fully validated. V's flow of messages and network interactions, however, will remain the same.</p>
<p >We define the <em><b>effective UNL</b></em> = original UNL - negative UNL, and the <em><b>effective quorum</b></em> as the quorum of the <em>effective UNL</em>. We set <em>effective quorum = Ceiling(80% * effective UNL)</em>.</p>
<h2><a class="anchor" id="autotoc_md113"></a>
<h2><a class="anchor" id="autotoc_md114"></a>
Validator Reliability Measurement</h2>
<p >A node only measures the reliability of validators on its own UNL, and only proposes based on local observations. There are many metrics that a node can measure about its validators, but we have chosen ledger validation messages. This is because every validator shall send one and only one signed validation message per ledger. This keeps the measurement simple and removes timing/clock-sync issues. A node will measure the percentage of agreeing validation messages (<em>PAV</em>) received from each validator on the node's UNL. Note that the node will only count the validation messages that agree with its own validations.</p>
<p >We define the <b>PAV</b> as the <b>P</b>ercentage of <b>A</b>greed <b>V</b>alidation messages received for the last N ledgers, where N = 256 by default.</p>
<p >When the PAV drops below the <em><b>low-water mark</b></em>, the validator is considered unreliable, and is a candidate to be disabled by being added to the negative UNL. A validator must have a PAV higher than the <em><b>high-water mark</b></em> to be re-enabled. The validator is re-enabled by removing it from the negative UNL. In the implementation, we plan to set the low-water mark as 50% and the high-water mark as 80%.</p>
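<p >A minimal sketch of this bookkeeping is shown below. The names and structure are invented for illustration and are not taken from rippled's implementation:</p>

```cpp
#include <cstddef>

// Illustrative sketch, not rippled code: track how many of the last N
// ledgers produced an agreeing validation from one validator, and compare
// the resulting PAV against the water marks described above.
struct ReliabilityMeter
{
    static constexpr std::size_t window = 256;    // N ledgers
    static constexpr double lowWaterMark = 0.5;   // disable candidate below this
    static constexpr double highWaterMark = 0.8;  // re-enable candidate above this

    std::size_t agreed = 0;  // agreeing validations in the window

    double pav() const
    {
        return static_cast<double>(agreed) / window;
    }

    bool candidateToDisable() const { return pav() < lowWaterMark; }
    bool candidateToReEnable() const { return pav() > highWaterMark; }
};
```

<p >For example, 100 agreeing validations over 256 ledgers gives a PAV of roughly 39%, below the low-water mark, while 230 gives roughly 90%, above the high-water mark.</p>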
<h2><a class="anchor" id="autotoc_md114"></a>
<h2><a class="anchor" id="autotoc_md115"></a>
Format Changes</h2>
<p >The negative UNL component in a ledger contains three fields.</p><ul>
<li><em><b>NegativeUNL</b></em>: The current negative UNL, a list of unreliable validators.</li>
@@ -169,7 +169,7 @@ Format Changes</h2>
<li><em><b>Validator</b></em>: The validator to be disabled or re-enabled.</li>
</ul>
<p >There would be at most one <em>disable</em> <code>UNLModify</code> and one <em>re-enable</em> <code>UNLModify</code> transaction per flag ledger. The full machinery is described further on.</p>
<h2><a class="anchor" id="autotoc_md115"></a>
<h2><a class="anchor" id="autotoc_md116"></a>
Negative UNL Maintenance</h2>
<p >The negative UNL can only be modified on the flag ledgers. If a validator's reliability status changes, it takes two flag ledgers to modify the negative UNL. Let's see an example of the algorithm:</p>
<ul>
@@ -229,13 +229,13 @@ Else clear `ToDisable`.
<p >The negative UNL is stored on each ledger because we don't know when a validator may reconnect to the network. If the negative UNL was stored only on every flag ledger, then a new validator would have to wait until it acquires the latest flag ledger to know the negative UNL. So any new ledgers created that are not flag ledgers copy the negative UNL from the parent ledger.</p>
<p >Note that when we have a validator to disable and a validator to re-enable at the same flag ledger, we create two separate <code>UNLModify</code> pseudo-transactions. We want either one or the other or both to make it into the ledger on their own merits.</p>
<p >Readers may have noticed that we defined several rules for creating the <code>UNLModify</code> pseudo-transactions but did not describe how to enforce the rules. The rules are actually enforced by the existing consensus algorithm. Unless enough validators propose the same pseudo-transaction, it will not be included in the transaction set of the ledger.</p>
<h2><a class="anchor" id="autotoc_md116"></a>
<h2><a class="anchor" id="autotoc_md117"></a>
Quorum Size Calculation</h2>
<p >The effective quorum is 80% of the effective UNL. Note that because at most 25% of the original UNL can be on the negative UNL, the quorum should not be lower than the absolute minimum quorum (i.e. 60%) of the original UNL. However, considering that different nodes may have different UNLs, to be safe we compute <code>quorum = Ceiling(max(60% * original UNL, 80% * effective UNL))</code>.</p>
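<p >The quorum computation above can be sketched as follows; <code>effectiveQuorum</code> is an illustrative helper name, not rippled's actual API:</p>

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// Illustrative sketch, not rippled code: compute the quorum as
// Ceiling(max(60% * original UNL, 80% * effective UNL)), where
// effective UNL = original UNL - negative UNL.
std::size_t
effectiveQuorum(std::size_t originalUnl, std::size_t negativeUnl)
{
    double const effectiveUnl =
        static_cast<double>(originalUnl - negativeUnl);
    return static_cast<std::size_t>(
        std::ceil(std::max(0.6 * originalUnl, 0.8 * effectiveUnl)));
}
```

<p >For a 10-validator UNL, an empty negative UNL gives a quorum of 8; with 2 validators disabled the quorum drops to 7; and the 60% floor keeps the quorum from ever falling below 6.</p>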
<h2><a class="anchor" id="autotoc_md117"></a>
<h2><a class="anchor" id="autotoc_md118"></a>
Filter Validation Messages</h2>
<p >If a validator V is in the negative UNL, it still participates in consensus sessions in the same way, i.e. V still follows the protocol and publishes proposal and validation messages. The messages from V are still stored the same way by everyone, used to calculate the new PAV for V, and could be used in future consensus sessions if needed. However V's ledger validation message is not counted when checking if the ledger is fully validated.</p>
<h2><a class="anchor" id="autotoc_md118"></a>
<h2><a class="anchor" id="autotoc_md119"></a>
High Level Sequence Diagram of Code Changes</h2>
<p >The diagram below is the sequence of one round of consensus. Classes and components with non-trivial changes are colored green.</p>
<ul>
@@ -250,19 +250,19 @@ High Level Sequence Diagram of Code Changes</h2>
</li>
</ul>
<p ><img src="./negativeUNL_highLevel_sequence.png?raw=true" alt="Sequence diagram" title="Negative UNL Changes" class="inline"/></p>
<h1><a class="anchor" id="autotoc_md119"></a>
<h1><a class="anchor" id="autotoc_md120"></a>
Roads Not Taken</h1>
<h2><a class="anchor" id="autotoc_md120"></a>
<h2><a class="anchor" id="autotoc_md121"></a>
Use a Mechanism Like Fee Voting to Process UNLModify Pseudo-Transactions</h2>
<p >The previous version of the negative UNL specification used the same mechanism as <a href="https://xrpl.org/fee-voting.html#voting-process">fee voting</a> for creating the negative UNL, and used the negative UNL as soon as the ledger was fully validated. However, the timing of full validation can differ among nodes, so different negative UNLs could be used, resulting in different effective UNLs and different quorums for the same ledger. As a result, the network's safety is impacted.</p>
<p >This updated version does not impact safety, though it operates a bit more slowly. The negative UNL modifications in the <em>UNLModify</em> pseudo-transaction approved by the consensus will take effect at the next flag ledger. The extra time of the 256 ledgers should be enough for nodes to synchronize on the negative UNL modifications.</p>
<h2><a class="anchor" id="autotoc_md121"></a>
<h2><a class="anchor" id="autotoc_md122"></a>
Use an Expiration Approach to Re-enable Validators</h2>
<p >After a validator disabled by the negative UNL becomes reliable, other validators explicitly vote for re-enabling it. An alternative approach to re-enable a validator is the expiration approach, which was considered in the previous version of the specification. In the expiration approach, every entry in the negative UNL has a fixed expiration time. One flag ledger interval was chosen as the expiration interval. Once expired, the other validators must continue voting to keep the unreliable validator on the negative UNL. The advantage of this approach is its simplicity. But it has a requirement. The negative UNL protocol must be able to vote multiple unreliable validators to be disabled at the same flag ledger. In this version of the specification, however, only one unreliable validator can be disabled at a flag ledger. So the expiration approach cannot be simply applied.</p>
<h2><a class="anchor" id="autotoc_md122"></a>
<h2><a class="anchor" id="autotoc_md123"></a>
Validator Reliability Measurement and Flag Ledger Frequency</h2>
<p >If the ledger time is about 4.5 seconds and the low-water mark is 50%, then in the worst case, it takes 48 minutes <em>((0.5 * 256 + 256 + 256) * 4.5 / 60 = 48)</em> to put an offline validator on the negative UNL. We considered lowering the flag ledger frequency so that the negative UNL can be more responsive. We also considered decoupling the reliability measurement and flag ledger frequency to be more flexible. In practice, however, their benefits are not clear.</p>
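<p >The worst-case figure can be checked with a standalone arithmetic sketch; the 4.5-second ledger time is the assumption stated above:</p>

```cpp
// Standalone check of the worst case quoted above: half a PAV window to
// fall below the low-water mark, plus two full flag-ledger intervals for
// the disable vote to be proposed and then take effect.
inline double worstCaseMinutes()
{
    double const ledgers = 0.5 * 256 + 256 + 256;  // 640 ledgers
    double const secondsPerLedger = 4.5;           // assumed ledger time
    return ledgers * secondsPerLedger / 60.0;      // 48 minutes
}
```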
<h1><a class="anchor" id="autotoc_md123"></a>
<h1><a class="anchor" id="autotoc_md124"></a>
New Attack Vectors</h1>
<p >A group of malicious validators may try to frame a reliable validator and put it on the negative UNL. But they cannot succeed. Because:</p>
<ol type="1">
@@ -275,7 +275,7 @@ New Attack Vectors</h1>
<li>A validator can be added to a negative UNL only through a UNLModify transaction.</li>
</ol>
<p >Assuming the group of malicious validators is less than the quorum, they cannot frame a reliable validator.</p>
<h1><a class="anchor" id="autotoc_md124"></a>
<h1><a class="anchor" id="autotoc_md125"></a>
Summary</h1>
<p >The bullet points below briefly summarize the current proposal:</p>
<ul>
@@ -291,19 +291,19 @@ Summary</h1>
<li>The quorum is the larger of 80% of the effective UNL and 60% of the original UNL.</li>
<li>If a validator is on the negative UNL, its validation messages are ignored when the local node verifies if a ledger is fully validated.</li>
</ul>
<h1><a class="anchor" id="autotoc_md125"></a>
<h1><a class="anchor" id="autotoc_md126"></a>
FAQ</h1>
<h2><a class="anchor" id="autotoc_md126"></a>
<h2><a class="anchor" id="autotoc_md127"></a>
Question: What are UNLs?</h2>
<p >Quote from the <a href="https://xrpl.org/technical-faq.html">Technical FAQ</a>: "They are
the lists of transaction validators a given participant believes will not
conspire to defraud them."</p>
<h2><a class="anchor" id="autotoc_md127"></a>
<h2><a class="anchor" id="autotoc_md128"></a>
Question: How does the negative UNL proposal affect network liveness?</h2>
<p >The network can make forward progress when more than a quorum of the trusted validators agree with the progress. The lower the quorum size is, the easier for the network to progress. If the quorum is too low, however, the network is not safe because nodes may have different results. So the quorum size used in the consensus protocol is a balance between the safety and the liveness of the network. The negative UNL reduces the size of the effective UNL, resulting in a lower quorum size while keeping the network safe.</p>
<h3>Question: How does a validator get into the negative UNL? How is a validator removed from the negative UNL? </h3>
<p >A validator's reliability is measured by other validators. If a validator becomes unreliable, other validators propose, at a flag ledger, <em>UNLModify</em> pseudo-transactions that vote to add the validator to the negative UNL during the consensus session. If agreed, the validator is added to the negative UNL at the next flag ledger. The mechanism for removing a validator from the negative UNL is the same.</p>
<h2><a class="anchor" id="autotoc_md128"></a>
<h2><a class="anchor" id="autotoc_md129"></a>
Question: Given a negative UNL, what happens if the UNL changes?</h2>
<p >Answer: Let's consider the cases:</p>
<ol type="1">
@@ -324,7 +324,7 @@ Question: Given a negative UNL, what happens if the UNL changes?</h2>
<p class="startli">Case 3 and 4 are not affected by the negative UNL protocol.</p>
</li>
</ol>
<h2><a class="anchor" id="autotoc_md129"></a>
<h2><a class="anchor" id="autotoc_md130"></a>
Question: Can we simply lower the quorum to 60% without the negative UNL?</h2>
<p >Answer: No, because the negative UNL approach is safer.</p>
<p >First, let's compare the two approaches intuitively: (1) the <em>negative UNL</em> approach, and (2) <em>lower quorum</em>: simply lowering the quorum from 80% to 60% without the negative UNL. The negative UNL approach uses consensus to come up with a list of unreliable validators, which are then removed from the effective UNL temporarily. With this approach, the list of unreliable validators is agreed to by a quorum of validators and will be used by every node in the network to adjust its UNL. The quorum is always 80% of the effective UNL. The lower quorum approach is a tradeoff between safety and liveness and goes against our principle of preferring safety over liveness. Note that with the lower quorum approach, different validators don't have to agree on which validation sources they are ignoring.</p>
@@ -368,9 +368,9 @@ Question: Can we simply lower the quorum to 60% without the negative UNL?</h2>
<p >In general, by measuring the agreement of validations, we also measure "sanity". If two validators have too many disagreements, one of them could be insane. When enough validators think a validator is insane, that validator is put on the negative UNL.</p>
<h3>Question: Why would there be at most one disable UNLModify and one re-enable UNLModify transaction per flag ledger? </h3>
<p >Answer: It is a design choice so that the effective UNL does not change too quickly. A typical targeted scenario is several validators going offline slowly during a long weekend. The current design can handle this kind of case well without changing the effective UNL too quickly.</p>
<h1><a class="anchor" id="autotoc_md130"></a>
<h1><a class="anchor" id="autotoc_md131"></a>
Appendix</h1>
<h2><a class="anchor" id="autotoc_md131"></a>
<h2><a class="anchor" id="autotoc_md132"></a>
Confidence Test</h2>
<p >We will use two test networks: a single-machine test network with multiple IP addresses, and the QE test network with multiple machines. The single-machine network will be used to test all the test cases and to debug. The QE network will be used after that. We want to see that the test cases still pass with real network delay. A test case specifies:</p>
<ol type="1">
@@ -398,7 +398,7 @@ Confidence Test</h2>
<ol type="1">
<li>A sequence of events (or the lack of events) such as a killed validator is added to the negative UNL.</li>
</ol>
<h3><a class="anchor" id="autotoc_md132"></a>
<h3><a class="anchor" id="autotoc_md133"></a>
Roads Not Taken: Test with Extended CSF</h3>
<p >We considered testing with the current unit test framework, specifically the <a href="https://github.com/ripple/rippled/blob/develop/src/test/csf/README.md">Consensus Simulation Framework</a> (CSF). However, the CSF currently can only test the generic consensus algorithm as in the paper: <a href="https://arxiv.org/abs/1802.07242">Analysis of the XRP Ledger Consensus Protocol</a>. </p>
</div></div><!-- contents -->

View File

@@ -107,10 +107,10 @@ $(function() {
<li><code>onTimer()</code></li>
</ol>
<p >Each of these entry points does something unique to that entry point. They either (a) transition <code>LedgerReplayTask</code> to a terminal failed resolved state (<code>cancel()</code> and <code>onTimer()</code>) or (b) try to make progress toward the successful resolved state. <code>init()</code> and <code>updateSkipList(...)</code> call <code>trigger()</code> while <code>deltaReady(...)</code> calls <code>tryAdvance()</code>. There's a similarity between this pattern and the way coroutines are implemented, where every yield saves the spot in the code where it left off and every resume jumps back to that spot.</p>
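<p >The entry-point pattern described above can be reduced to a small sketch. The class and step counts below are hypothetical; they only illustrate how each call either fails the task or resumes progress from wherever the last call left off:</p>

```cpp
// Hypothetical reduction of the LedgerReplayTask pattern: cancel() and
// onTimer() move the task to a terminal failed state, while progress
// callbacks advance toward the successful state and re-check completion,
// much like a coroutine resuming where it last yielded.
class TaskSketch
{
public:
    enum class State { Running, Failed, Done };

    void cancel() { if (state_ == State::Running) state_ = State::Failed; }
    void onTimer() { cancel(); }  // a timeout resolves to failure

    void deltaReady()  // one unit of progress arrived from the network
    {
        ++stepsDone_;
        tryAdvance();
    }

    State state() const { return state_; }

private:
    void tryAdvance()  // resume: check whether we can now finish
    {
        if (state_ == State::Running && stepsDone_ >= stepsNeeded_)
            state_ = State::Done;
    }

    State state_ = State::Running;
    int stepsDone_ = 0;
    int const stepsNeeded_ = 2;
};
```

<p >Once a terminal state is reached, later progress callbacks are ignored, mirroring how the real task's terminal resolved states are final.</p>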
<h2><a class="anchor" id="autotoc_md134"></a>
<h2><a class="anchor" id="autotoc_md135"></a>
Sequence Diagram</h2>
<p ><img src="./ledger_replay_sequence.png?raw=true" alt="Sequence diagram" title="A successful ledger replay" class="inline"/></p>
<h2><a class="anchor" id="autotoc_md135"></a>
<h2><a class="anchor" id="autotoc_md136"></a>
Class Diagram</h2>
<p ><img src="./ledger_replay_classes.png?raw=true" alt="Class diagram" title="Ledger replay classes" class="inline"/> </p>
</div></div><!-- contents -->

View File

@@ -72,7 +72,7 @@ $(function() {
<div class="headertitle"><div class="title">Code Style Cheat Sheet </div></div>
</div><!--header-->
<div class="contents">
<div class="textblock"><h1><a class="anchor" id="autotoc_md147"></a>
<div class="textblock"><h1><a class="anchor" id="autotoc_md148"></a>
Form</h1>
<ul>
<li>One class per header file.</li>
@@ -86,7 +86,7 @@ Form</h1>
<li>Order class declarations as types, public, protected, private, then data.</li>
<li>Prefer 'private' over 'protected'</li>
</ul>
<h1><a class="anchor" id="autotoc_md148"></a>
<h1><a class="anchor" id="autotoc_md149"></a>
Function</h1>
<ul>
<li>Minimize external dependencies<ul>

View File

@@ -73,7 +73,7 @@ $(function() {
</div><!--header-->
<div class="contents">
<div class="textblock"><p >Coding standards used here gradually evolve and propagate through code reviews. Some aspects are enforced more strictly than others.</p>
<h1><a class="anchor" id="autotoc_md150"></a>
<h1><a class="anchor" id="autotoc_md151"></a>
Rules</h1>
<p >These rules only apply to our own code. We can't enforce any sort of style on the external repositories and libraries we include. The best guideline is to maintain the standards that are used in those libraries.</p>
<ul>
@@ -82,7 +82,7 @@ Rules</h1>
<li>Modern C++ principles. No naked <code>new</code> or <code>delete</code>.</li>
<li>Line lengths limited to 80 characters. Exceptions limited to data and tables.</li>
</ul>
<h1><a class="anchor" id="autotoc_md151"></a>
<h1><a class="anchor" id="autotoc_md152"></a>
Guidelines</h1>
<p >If you want to do something contrary to these guidelines, understand why you're doing it. Think, use common sense, and consider that your changes will probably need to be maintained long after you've moved on to other projects.</p>
<ul>
@@ -98,7 +98,7 @@ Guidelines</h1>
</li>
<li>Don't over-inline by defining large functions within the class declaration, not even for template classes.</li>
</ul>
<h1><a class="anchor" id="autotoc_md152"></a>
<h1><a class="anchor" id="autotoc_md153"></a>
Formatting</h1>
<p >The goal of source code formatting should always be to make things as easy to read as possible. White space is used to guide the eye so that details are not overlooked. Blank lines are used to separate code into "paragraphs."</p>
<ul>

View File

@@ -75,12 +75,12 @@ $(function() {
<div class="textblock"><p >The jemalloc library provides a good API for doing heap analysis, including a mechanism to dump a description of the heap from within the running application via a function call. Details on how to perform this activity in general, as well as how to acquire the software, are available on the jemalloc site: <a href="https://github.com/jemalloc/jemalloc/wiki/Use-Case:-Heap-Profiling">https://github.com/jemalloc/jemalloc/wiki/Use-Case:-Heap-Profiling</a></p>
<p >jemalloc is acquired separately from rippled, and is not affiliated with Ripple Labs. If you compile and install jemalloc from the source release with default options, it will install the library and header under <code>/usr/local/lib</code> and <code>/usr/local/include</code>, respectively. Heap profiling has been tested with rippled on a Linux platform. It should work on platforms on which both rippled and jemalloc are available.</p>
<p >To link rippled with jemalloc, the argument <code>profile-jemalloc=&lt;jemalloc_dir&gt;</code> is provided after the optional target. The <code>&lt;jemalloc_dir&gt;</code> argument should be the same as that of the <code>--prefix</code> parameter passed to the jemalloc configure script when building.</p>
<h1><a class="anchor" id="autotoc_md177"></a>
<h1><a class="anchor" id="autotoc_md178"></a>
Examples:</h1>
<p >Build rippled with jemalloc library under /usr/local/lib and header under /usr/local/include: </p><pre class="fragment">$ scons profile-jemalloc=/usr/local
</pre><p> Build rippled using clang with the jemalloc library under /opt/local/lib and header under /opt/local/include: </p><pre class="fragment">$ scons clang profile-jemalloc=/opt/local
</pre> <hr />
<h1><a class="anchor" id="autotoc_md178"></a>
<h1><a class="anchor" id="autotoc_md179"></a>
Using the jemalloc library from within the code</h1>
<p >The <code>profile-jemalloc</code> parameter enables a macro definition called <code>PROFILE_JEMALLOC</code>. Include the jemalloc header file as well as the api call(s) that you wish to make within preprocessor conditional groups, such as:</p>
<p >In global scope: </p><pre class="fragment">#ifdef PROFILE_JEMALLOC

View File

@@ -72,7 +72,7 @@ $(function() {
<div class="headertitle"><div class="title">Building documentation </div></div>
</div><!--header-->
<div class="contents">
<div class="textblock"><h1><a class="anchor" id="autotoc_md180"></a>
<div class="textblock"><h1><a class="anchor" id="autotoc_md181"></a>
Dependencies</h1>
<p >Install these dependencies:</p>
<ul>
@@ -95,7 +95,7 @@ Dependencies</h1>
</ul>
</li>
</ul>
<h1><a class="anchor" id="autotoc_md181"></a>
<h1><a class="anchor" id="autotoc_md182"></a>
Docker</h1>
<p >Instead of installing the above dependencies locally, you can use the official build environment Docker image, which has all of them installed already.</p>
<ol type="1">
@@ -105,7 +105,7 @@ Docker</h1>
<li>Run the image from the project folder: <div class="fragment"><div class="line">sudo docker run -v $PWD:/opt/rippled --rm rippleci/rippled-ci-builder:2944b78d22db</div>
</div><!-- fragment --></li>
</ol>
<h1><a class="anchor" id="autotoc_md182"></a>
<h1><a class="anchor" id="autotoc_md183"></a>
Build</h1>
<p >There is a <code>docs</code> target in the CMake configuration.</p>
<div class="fragment"><div class="line">mkdir build</div>

View File

@@ -73,7 +73,7 @@ $(function() {
</div><!--header-->
<div class="contents">
<div class="textblock"><p >To better understand how to use Conan, we should first understand <em>why</em> we use Conan, and to understand that, we need to understand how we use CMake.</p>
<h2><a class="anchor" id="autotoc_md137"></a>
<h2><a class="anchor" id="autotoc_md138"></a>
CMake</h2>
<p >Technically, you don't need CMake to build this project. You could manually compile every translation unit into an object file, using the right compiler options, and then manually link all those objects together, using the right linker options. However, that is very tedious and error-prone, which is why we lean on tools like CMake.</p>
<p >We have written CMake configuration files (<a href="./CMakeLists.txt"><code>CMakeLists.txt</code></a> and friends) for this project so that CMake can be used to correctly compile and link all of the translation units in it. Or rather, CMake will generate files for a separate build system (e.g. Make, Ninja, Visual Studio, Xcode, etc.) that compile and link all of the translation units. Even then, CMake has parameters, some of which are platform-specific. In CMake's parlance, parameters are specially-named <b>variables</b> like <a href="https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html"><code>CMAKE_BUILD_TYPE</code></a> or <a href="https://cmake.org/cmake/help/latest/variable/CMAKE_MSVC_RUNTIME_LIBRARY.html"><code>CMAKE_MSVC_RUNTIME_LIBRARY</code></a>. Parameters include:</p>
@@ -87,7 +87,7 @@ CMake</h2>
</ul>
<p >For some of these parameters, like the build system and compiler, CMake goes through a complicated search process to choose default values. For others, like the dependencies, <em>we</em> had written our own complicated process in this project's CMake configuration files to choose defaults. For most developers, things "just worked"... until they didn't, and then you were left trying to debug one of these complicated processes, instead of choosing and manually passing the parameter values yourself.</p>
<p >You can pass every parameter to CMake on the command line, but writing out these parameters every time we want to configure CMake is a pain. Most humans prefer to put them into a configuration file, once, that CMake can read every time it is configured. For CMake, that file is a <a href="https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html">toolchain file</a>.</p>
<h2><a class="anchor" id="autotoc_md138"></a>
<h2><a class="anchor" id="autotoc_md139"></a>
Conan</h2>
<p >These next few paragraphs on Conan are going to read much like the ones above for CMake.</p>
<p >Technically, you don't need Conan to build this project. You could manually download, configure, build, and install all of the dependencies yourself, and then pass all of the parameters necessary for CMake to link to those dependencies. To guarantee ABI compatibility, you must be sure to use the same set of compiler and linker options for all dependencies <em>and</em> this project. However, that is very tedious and error-prone, which is why we lean on tools like Conan.</p>

View File

@@ -73,7 +73,7 @@ $(function() {
</div><!--header-->
<div class="contents">
<div class="textblock"><p >We recommend two different methods to depend on libxrpl in your own <a href="https://cmake.org/cmake/help/latest/">CMake</a> project. Both methods add a CMake library target named <code>xrpl::libxrpl</code>.</p>
<h2><a class="anchor" id="autotoc_md139"></a>
<h2><a class="anchor" id="autotoc_md140"></a>
Conan requirement</h2>
<p >The first method adds libxrpl as a <a href="https://docs.conan.io/">Conan</a> requirement. With this method, there is no need for a Git <a href="https://git-scm.com/book/en/v2/Git-Tools-Submodules">submodule</a>. It is good for when you just need a dependency on libxrpl as-is.</p>
<div class="fragment"><div class="line"># This conanfile.txt is just an example.</div>
@@ -103,7 +103,7 @@ Conan requirement</h2>
<div class="line"> -DCMAKE_BUILD_TYPE=Release \</div>
<div class="line"> ..</div>
<div class="line">cmake --build . --parallel</div>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md140"></a>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md141"></a>
CMake subdirectory</h2>
<p >The second method adds the <a href="https://github.com/ripple/rippled">rippled</a> project as a CMake <a href="https://cmake.org/cmake/help/latest/command/add_subdirectory.html">subdirectory</a>. This method works well when you keep the rippled project as a Git <a href="https://git-scm.com/book/en/v2/Git-Tools-Submodules">submodule</a>. It's good for when you want to make changes to libxrpl as part of your own project. Be careful, though. Your project will inherit all of the same CMake options, so watch out for name collisions. We still recommend using <a href="https://docs.conan.io/">Conan</a> to download, build, and connect dependencies.</p>
<div class="fragment"><div class="line"># Add the project as a Git submodule.</div>

View File

@@ -73,7 +73,7 @@ $(function() {
</div><!--header-->
<div class="contents">
<div class="textblock"><p >Our <a class="el" href="md____w_rippled_rippled_BUILD.html">build instructions</a> assume you have a C++ development environment complete with Git, Python, Conan, CMake, and a C++ compiler. This document exists to help readers set one up on any of the Big Three platforms: Linux, macOS, or Windows.</p>
<h2><a class="anchor" id="autotoc_md141"></a>
<h2><a class="anchor" id="autotoc_md142"></a>
Linux</h2>
<p >Package ecosystems vary across Linux distributions, so there is no one set of instructions that will work for every Linux user. These instructions are written for Ubuntu 22.04. They are largely copied from the <a href="https://github.com/thejohnfreeman/rippled-docker/blob/master/ubuntu-22.04/install.sh">script</a> used to configure our Docker container for continuous integration. That script handles many more responsibilities. These instructions are just the bare minimum to build one configuration of rippled. You can check that codebase for other Linux distributions and versions. If you cannot find yours there, then we hope that these instructions can at least guide you in the right direction.</p>
<div class="fragment"><div class="line">apt update</div>
@@ -90,7 +90,7 @@ Linux</h2>
<div class="line">cd ..</div>
<div class="line"> </div>
<div class="line">pip3 install &#39;conan&lt;2&#39;</div>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md142"></a>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md143"></a>
macOS</h2>
<p >Open a Terminal and enter the below command to bring up a dialog to install the command line developer tools. Once it is finished, this command should return a version greater than the minimum required (see <a class="el" href="md____w_rippled_rippled_BUILD.html">BUILD.md</a>).</p>
<div class="fragment"><div class="line">clang --version</div>

View File

@@ -73,12 +73,12 @@ $(function() {
</div><!--header-->
<div class="contents">
<div class="textblock"><p >This document contains instructions for installing rippled. The APT package manager is common on Debian-based Linux distributions like Ubuntu, while the YUM package manager is common on Red Hat-based Linux distributions like CentOS. Installing from source is an option for all platforms, and the only supported option for installing custom builds.</p>
<h2><a class="anchor" id="autotoc_md143"></a>
<h2><a class="anchor" id="autotoc_md144"></a>
From source</h2>
<p >From a source build, you can install rippled and libxrpl using CMake's <code>--install</code> mode:</p>
<div class="fragment"><div class="line">cmake --install . --prefix /opt/local</div>
</div><!-- fragment --><p >The default <a href="https://cmake.org/cmake/help/latest/variable/CMAKE_INSTALL_PREFIX.html">prefix</a> is typically <code>/usr/local</code> on Linux and macOS and <code>C:/Program Files/rippled</code> on Windows.</p>
<h2><a class="anchor" id="autotoc_md144"></a>
<h2><a class="anchor" id="autotoc_md145"></a>
With the APT package manager</h2>
<ol type="1">
<li>Update repositories: <pre class="fragment"> sudo apt update -y
@@ -124,7 +124,7 @@ sub rsa3072 2019-02-14 [E] [expires: 2026-02-17]
<p class="startli">This allows you to serve incoming API requests on port 80 or 443. (If you want to do so, you must also update the config file's port settings.) </p><pre class="fragment">sudo setcap 'cap_net_bind_service=+ep' /opt/ripple/bin/rippled
</pre></li>
</ol>
<h2><a class="anchor" id="autotoc_md145"></a>
<h2><a class="anchor" id="autotoc_md146"></a>
With the YUM package manager</h2>
<ol type="1">
<li><p class="startli">Install the Ripple RPM repository:</p>

View File

@@ -74,7 +74,7 @@ $(function() {
<div class="contents">
<div class="textblock"><p ><b>This section is a work in progress!!</b></p>
<p >Consensus is the task of reaching agreement within a distributed system in the presence of faulty or even malicious participants. This document outlines the <a href="https://arxiv.org/abs/1802.07242">XRP Ledger Consensus Algorithm</a> as implemented in <a href="https://github.com/ripple/rippled">rippled</a>, but focuses on its utility as a generic consensus algorithm independent of the detailed mechanics of the Ripple Consensus Ledger. Most notably, the algorithm does not require fully synchronous communication between all nodes in the network, or even a fixed network topology, but instead achieves consensus via collectively trusted subnetworks.</p>
<h1><a class="anchor" id="autotoc_md154"></a>
<h1><a class="anchor" id="autotoc_md155"></a>
Distributed Agreement</h1>
<p >A challenge for distributed systems is reaching agreement on changes in shared state. For the Ripple network, the shared state is the current ledger&ndash;account information, account balances, order books and other financial data. We will refer to shared distributed state as a <em>ledger</em> throughout the remainder of this document.</p>
<div class="image">
@@ -94,9 +94,9 @@ Ledger Chain</div></div>
<div class="caption">
Block Chain</div></div>
<p >The remainder of this section describes the Consensus and Validation algorithms in more detail and is meant as a companion guide to understanding the generic implementation in <code>rippled</code>. The document <b>does not</b> discuss correctness, fault-tolerance or liveness properties of the algorithms or the full details of how they integrate within <code>rippled</code> to support the Ripple Consensus Ledger.</p>
<h1><a class="anchor" id="autotoc_md155"></a>
<h1><a class="anchor" id="autotoc_md156"></a>
Consensus Overview</h1>
<h2><a class="anchor" id="autotoc_md156"></a>
<h2><a class="anchor" id="autotoc_md157"></a>
Definitions</h2>
<ul>
<li>The <em>ledger</em> is the shared distributed state. Each ledger has a unique ID to distinguish it from all other ledgers. During consensus, the <em>previous</em>, <em>prior</em> or <em>last-closed</em> ledger is the most recent ledger seen by consensus and is the basis upon which it will build the next ledger.</li>
@@ -109,7 +109,7 @@ Definitions</h2>
<li>A <em>dispute</em> is a transaction that is either not part of a node's position or not in a peer's position. During consensus, the node will add or remove disputed transactions from its position based on that transaction's support amongst its peers.</li>
</ul>
<p >Note that most types have an ID as a lightweight identifier of instances of that type. Consensus often operates on the IDs directly since the underlying type is potentially expensive to share over the network. For example, proposals contain only the ID of a peer's position. Since many peers likely have the same position, this reduces the need to send the full transaction set multiple times. Instead, a node can request the transaction set from the network if necessary.</p>
<h2><a class="anchor" id="autotoc_md157"></a>
<h2><a class="anchor" id="autotoc_md158"></a>
Overview</h2>
<div class="image">
<img src="consensus_overview.png" alt=""/>
@@ -132,7 +132,7 @@ Effective Close Time</h2>
Effective Close Time</div></div>
<p >The effective close time is part of the node's position and is shared with peers in its proposals. Just like the position on the consensus transaction set, a node will update its close time position in response to its peers' effective close time positions. Peers can agree to disagree on the close time, in which case the effective close time is taken as 1 second past the prior close.</p>
<p >The close time resolution is itself dynamic, moving to a coarser (decreased) resolution in subsequent consensus rounds if nodes are unable to reach consensus on an effective close time, and to a finer (increased) resolution if nodes consistently reach close time consensus.</p>
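<p >A minimal sketch of the effective close time rule, under the assumption that times are integer seconds and that rounding is to the nearest multiple of the resolution (the real rounding rule in rippled may differ):</p>

```cpp
#include <cstdint>

// Round the raw close time to the current resolution; if the result does not
// land strictly after the prior close, agree to disagree and use one second
// past the prior close instead. (Hypothetical helper, not rippled's API.)
std::uint64_t
effectiveCloseTime(std::uint64_t raw,
                   std::uint64_t resolution,
                   std::uint64_t priorClose)
{
    std::uint64_t rounded = ((raw + resolution / 2) / resolution) * resolution;
    return rounded > priorClose ? rounded : priorClose + 1;
}
```

<p >A coarser resolution makes it more likely that peers' rounded close times coincide, at the cost of a less precise recorded close time.</p>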
<h2><a class="anchor" id="autotoc_md158"></a>
<h2><a class="anchor" id="autotoc_md159"></a>
Modes</h2>
<p >Internally, a node operates under one of the following consensus modes. Either of the first two modes may be chosen when a consensus round starts.</p>
<ul>
@@ -150,7 +150,7 @@ Modes</h2>
Consensus Modes</div></div>
<p >Once either wrong ledger or switch ledger are reached, the node cannot return to proposing or observing until the next consensus round. However, the node could change its view of the correct prior ledger, so going from switch ledger to wrong ledger and back again is possible.</p>
<p >The distinction between the wrong and switched ledger modes arises because a ledger's unique identifier may be known by a node before the ledger itself. This reflects the fact that the data corresponding to a ledger may be large and take time to share over the network, whereas the smaller ID could be shared in a peer validation much more quickly. Distinguishing the two states allows the node to decide how best to generate the next ledger once it declares consensus.</p>
<h2><a class="anchor" id="autotoc_md159"></a>
<h2><a class="anchor" id="autotoc_md160"></a>
Phases</h2>
<p >As depicted in the overview diagram, consensus is best viewed as a progression through 3 phases. There are 4 public methods of the generic consensus algorithm that determine this progression:</p>
<ul>
@@ -160,12 +160,12 @@ Phases</h2>
<li><code>gotTxSet</code> is called when a transaction set is received from the network. This is typically in response to a prior request from the node to acquire the transaction set corresponding to a disagreeing peer's position.</li>
</ul>
<p >The following subsections describe each consensus phase in more detail and what actions are taken in response to these calls.</p>
<h3><a class="anchor" id="autotoc_md160"></a>
<h3><a class="anchor" id="autotoc_md161"></a>
Open</h3>
<p >The <code>Open</code> phase is a quiescent period that allows transactions to build up in the node's open ledger. The duration is a trade-off between latency and throughput. A shorter window reduces the latency of generating the next ledger, but also reduces transaction throughput, since fewer transactions are accepted into the ledger.</p>
<p >A call to <code>startRound</code> would forcibly begin the next consensus round, skipping completion of the current round. This is not expected during normal operation. Calls to <code>peerProposal</code> or <code>gotTxSet</code> simply store the proposal or transaction set for use in the coming <code>Establish</code> phase.</p>
<p >A call to <code>timerEntry</code> first checks that the node is working on the correct prior ledger. If not, it will update the mode and request the correct ledger. Otherwise, the node checks whether to switch to the <code>Establish</code> phase and close the ledger.</p>
<h4><a class="anchor" id="autotoc_md161"></a>
<h4><a class="anchor" id="autotoc_md162"></a>
Ledger Close</h4>
<p >Under normal circumstances, the open ledger period ends when one of the following is true</p>
<ul>
@@ -181,12 +181,12 @@ disputes</h4>
<img src="disputes.png" alt=""/>
<div class="caption">
Disputes</div></div>
<h3><a class="anchor" id="autotoc_md162"></a>
<h3><a class="anchor" id="autotoc_md163"></a>
Establish</h3>
<p >The establish phase is the active period of consensus in which the node exchanges proposals with peers in an attempt to reach agreement on the consensus transactions and effective close time.</p>
<p >A call to <code>startRound</code> would forcibly begin the next consensus round, skipping completion of the current round. This is not expected during normal operation. Calls to <code>peerProposal</code> or <code>gotTxSet</code> that reflect new positions will generate disputed transactions for any new disagreements and will update the peer's vote for all disputed transactions.</p>
<p >A call to <code>timerEntry</code> first checks that the node is working from the correct prior ledger. If not, the node will update the mode and request the correct ledger. Otherwise, the node updates the node's position and considers whether to switch to the <code>Accepted</code> phase and declare consensus reached. However, at least <code>LEDGER_MIN_CONSENSUS</code> time must have elapsed before doing either. This allows peers an opportunity to take an initial position and share it.</p>
<h4><a class="anchor" id="autotoc_md163"></a>
<h4><a class="anchor" id="autotoc_md164"></a>
Update Position</h4>
<p >In order to achieve consensus, the node is looking for a transaction set that is supported by a super-majority of peers. The node works towards this set by adding or removing disputed transactions from its position based on an increasing threshold for inclusion.</p>
<div class="image">
@@ -197,7 +197,7 @@ Threshold</div></div>
<p >Given the <a class="el" href="md____w_rippled_rippled_docs_consensus.html#disputes_image">example disputes above</a> and an initial threshold of 50%, our node would retain its position since transaction 1 was not in dispute and transactions 2 and 3 have 75% support. Since its position did not change, it would not need to send a new proposal to peers. Peer C would not change either. Peer A would add transaction 3 to its position and Peer B would remove transaction 4 from its position; both would then send an updated position.</p>
<p >Conversely, if the diagram reflected a later call to <code>timerEntry</code> that occurs in the stuck region with a threshold of, say, 95%, our node would remove transactions 2 and 3 from its candidate set and send an updated position. Likewise, all the other peers would end up with only transaction 1 in their position.</p>
<p >Lastly, if our node were not in the proposing mode, it would not include its own vote and just take the majority (&gt;50%) position of its peers. In this example, our node would maintain its position of transactions 1, 2 and 3.</p>
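<p >The vote update for a single disputed transaction can be sketched as below. The function name and the whole-percent convention are assumptions, and the rising threshold schedule itself is elided:</p>

```cpp
// Keep a disputed transaction in our position while its peer support
// strictly exceeds the current inclusion threshold. Consistent with the
// example above, a transaction with 75% support survives a 50% threshold
// but is dropped once the threshold rises to 95%. (Hypothetical helper.)
bool
retainDisputedTx(int yesVotes, int totalPeers, int thresholdPercent)
{
    return yesVotes * 100 > thresholdPercent * totalPeers;
}
```

<p >Raising the threshold over time forces stragglers out of everyone's position, driving peers toward an identical transaction set.</p>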
<h4><a class="anchor" id="autotoc_md164"></a>
<h4><a class="anchor" id="autotoc_md165"></a>
Checking Consensus</h4>
<p >After updating its position, the node checks for supermajority agreement with its peers on its current position. This agreement is of the exact transaction set, not just the support of individual transactions. That is, if our position is a subset of a peer's position, that counts as a disagreement. Also recall that effective close time agreement allows a supermajority of participants agreeing to disagree.</p>
<p >Consensus is declared when the following 3 clauses are true:</p>
@@ -208,16 +208,16 @@ Checking Consensus</h4>
</ul>
<p >The middle condition ensures slower peers have a chance to share positions, but prevents waiting too long on peers that have disconnected. Additionally, a node can declare that consensus has moved on if <code>minimumConsensusPercentage</code> peers have sent validations and moved on to the next ledger. This outcome indicates the node has fallen behind its peers and needs to catch up.</p>
<p >If a node is not proposing, it does not include its own position when calculating the percent of agreeing participants but otherwise follows the above logic.</p>
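<p >The supermajority check can be sketched as follows. The 80% figure matches the quorum used elsewhere in this document, but the exact percentage and the minimum-participant handling here are simplifying assumptions:</p>

```cpp
// Agreement is on the exact position: a peer counts as agreeing only if its
// position ID matches ours. When proposing, our own position is included in
// both counts; when observing, only peers are counted. (Hypothetical helper.)
bool
haveSupermajority(int agreeingPeers, int totalPeers, bool proposing)
{
    int participants = totalPeers + (proposing ? 1 : 0);
    int agreeing = agreeingPeers + (proposing ? 1 : 0);
    return agreeing * 100 >= participants * 80;
}
```

<p >Note how a non-proposing node needs a slightly larger fraction of its peers to agree, since its own matching position no longer contributes to the count.</p>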
<h4><a class="anchor" id="autotoc_md165"></a>
<h4><a class="anchor" id="autotoc_md166"></a>
Accepting Consensus</h4>
<p >Once consensus is reached (or moved on), the node switches to the <code>Accept</code> phase and signals to the implementing code that the round is complete. That code is responsible for using the consensus transaction set to generate the next ledger and calling <code>startRound</code> to begin the next round. The implementation has total freedom on ordering transactions, deciding what to do if consensus moved on, determining whether to retry or abandon local transactions that did not make the consensus set, and updating any internal state based on the consensus progress.</p>
<h3><a class="anchor" id="autotoc_md166"></a>
<h3><a class="anchor" id="autotoc_md167"></a>
Accept</h3>
<p >The <code>Accept</code> phase is the terminal phase of the consensus algorithm. Calls to <code>timerEntry</code>, <code>peerProposal</code> and <code>gotTxSet</code> will not change the internal consensus state while in the accept phase. The expectation is that the application specific code is working to generate the new ledger based on the consensus outcome. Once complete, that code should make a call to <code>startRound</code> to kick off the next consensus round. The <code>startRound</code> call includes the new prior ledger, prior ledger ID and whether the round should begin in the proposing or observing mode. After setting some initial state, the phase transitions to <code>Open</code>. The node will also check if the provided prior ledger and ID are correct, updating the mode and requesting the proper ledger from the network if necessary.</p>
<h1><a class="anchor" id="autotoc_md167"></a>
<h1><a class="anchor" id="autotoc_md168"></a>
Consensus Type Requirements</h1>
<p >The consensus type requirements are given below as minimal implementation stubs. Actual implementations would augment these stubs with members appropriate for managing the details of transactions and ledgers within the larger application framework.</p>
<h2><a class="anchor" id="autotoc_md168"></a>
<h2><a class="anchor" id="autotoc_md169"></a>
Transaction</h2>
<p >The transaction type <code>Tx</code> encapsulates a single transaction under consideration by consensus.</p>
<div class="fragment"><div class="line"><span class="keyword">struct </span>Tx</div>
@@ -227,7 +227,7 @@ Transaction</h2>
<div class="line"> </div>
<div class="line"> <span class="comment">//... implementation specific</span></div>
<div class="line">};</div>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md169"></a>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md170"></a>
Transaction Set</h2>
<p >The transaction set type <code>TxSet</code> represents a set of <code>Tx</code>s that are collectively under consideration by consensus. A <code>TxSet</code> can be compared against other <code>TxSet</code>s (typically from peers) and can be modified to add or remove transactions via the mutable subtype.</p>
<div class="fragment"><div class="line"><span class="keyword">struct </span>TxSet</div>
@@ -261,7 +261,7 @@ Transaction Set</h2>
<div class="line"> <span class="comment">//... implementation specific</span></div>
<div class="line">};</div>
<div class="ttc" id="amap_html"><div class="ttname"><a href="http://en.cppreference.com/w/cpp/container/map.html">std::map</a></div></div>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md170"></a>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md171"></a>
Ledger</h2>
<p >The <code>Ledger</code> type represents the state shared amongst the distributed participants. Notice that the details of how the next ledger is generated from the prior ledger and the consensus accepted transaction set are not part of the interface. Within the generic code, this type is primarily used to know that peers are working on the same tip of the ledger chain and to provide some basic timing data for consensus.</p>
<div class="fragment"><div class="line"><span class="keyword">struct </span>Ledger</div>
@@ -292,7 +292,7 @@ Ledger</h2>
<div class="line"> <span class="comment">//... implementation specific</span></div>
<div class="line">};</div>
<div class="ttc" id="aclassJson_1_1Value_html"><div class="ttname"><a href="classJson_1_1Value.html">Json::Value</a></div><div class="ttdoc">Represents a JSON value.</div><div class="ttdef"><b>Definition:</b> <a href="json__value_8h_source.html#l00146">json_value.h:147</a></div></div>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md171"></a>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md172"></a>
PeerProposal</h2>
<p >The <code>PeerProposal</code> type represents the signed position taken by a peer during consensus. The only type requirement is owning an instance of a generic <code>ConsensusProposal</code>.</p>
<div class="fragment"><div class="line"><span class="comment">// Represents our proposed position or a peer&#39;s proposed position</span></div>
@@ -309,7 +309,7 @@ PeerProposal</h2>
<div class="line"> </div>
<div class="line"> <span class="comment">// ... implementation specific</span></div>
<div class="line">};</div>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md172"></a>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md173"></a>
Generic Consensus Interface</h2>
<p >The generic <code>Consensus</code> relies on the <code>Adaptor</code> template class to implement a set of helper functions that plug the consensus algorithm into a specific application. The <code>Adaptor</code> class also defines the types above needed by the algorithm. Below are excerpts of the generic consensus implementation and of helper types that will interact with the concrete implementing class.</p>
<div class="fragment"><div class="line"><span class="comment">// Represents a transaction under dispute this round</span></div>
@@ -384,7 +384,7 @@ Generic Consensus Interface</h2>
<div class="ttc" id="anamespaceripple_html_a46c521271235f4e2715d7fa8b68940ca"><div class="ttname"><a href="namespaceripple.html#a46c521271235f4e2715d7fa8b68940ca">ripple::hash_map</a></div><div class="ttdeci">std::unordered_map&lt; Key, Value, Hash, Pred, Allocator &gt; hash_map</div><div class="ttdef"><b>Definition:</b> <a href="UnorderedContainers_8h_source.html#l00053">UnorderedContainers.h:53</a></div></div>
<div class="ttc" id="anamespaceripple_html_a53f80df10254751781250aa20704e98f"><div class="ttname"><a href="namespaceripple.html#a53f80df10254751781250aa20704e98f">ripple::set</a></div><div class="ttdeci">bool set(T &amp;target, std::string const &amp;name, Section const &amp;section)</div><div class="ttdoc">Set a value from a configuration Section If the named value is not found or doesn't parse as a T,...</div><div class="ttdef"><b>Definition:</b> <a href="BasicConfig_8h_source.html#l00316">BasicConfig.h:316</a></div></div>
<div class="ttc" id="anamespaceripple_html_a79cc3b590c118bd551b693bb333fb9d1"><div class="ttname"><a href="namespaceripple.html#a79cc3b590c118bd551b693bb333fb9d1">ripple::ConsensusState</a></div><div class="ttdeci">ConsensusState</div><div class="ttdoc">Whether we have or don't have a consensus.</div><div class="ttdef"><b>Definition:</b> <a href="ConsensusTypes_8h_source.html#l00186">ConsensusTypes.h:186</a></div></div>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md173"></a>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md174"></a>
Adapting Generic Consensus</h2>
<p >The stub below shows the set of callback/helper functions required in the implementing class.</p>
<div class="fragment"><div class="line"><span class="keyword">struct </span>Adaptor</div>
@@ -452,7 +452,7 @@ Adapting Generic Consensus</h2>
<li>The generic code does not specify how transactions are submitted by clients, propagated through the network or stored in the open ledger. Indeed, the open ledger is only conceptual from the perspective of the generic code&mdash;the initial position and transaction set are opaquely generated in a <code>Consensus::Result</code> instance returned from the <code>onClose</code> callback.</li>
<li>The calls to <code>acquireLedger</code> and <code>acquireTxSet</code> only have non-trivial return if the ledger or transaction set of interest is available. The implementing class is free to block while acquiring, or return the empty option while servicing the request asynchronously. Due to legacy reasons, the two calls are not symmetric. <code>acquireTxSet</code> requires the host application to call <code>gotTxSet</code> when an asynchronous <code>acquire</code> completes. Conversely, <code>acquireLedger</code> will be called again later by the consensus code if it still desires the ledger with the hope that the asynchronous acquisition is complete.</li>
</ul>
<h1><a class="anchor" id="autotoc_md174"></a>
<h1><a class="anchor" id="autotoc_md175"></a>
Validation</h1>
<p >Coming Soon! </p>
</div></div><!-- contents -->


@@ -74,7 +74,7 @@ $(function() {
<div class="contents">
<div class="textblock"><p >Utility functions and classes.</p>
<p >ripple/basic should contain no dependencies on other modules.</p>
<h1><a class="anchor" id="autotoc_md186"></a>
<h1><a class="anchor" id="autotoc_md187"></a>
Choosing a rippled container.</h1>
<ul>
<li><code><a class="elRef" href="http://en.cppreference.com/w/cpp/container/vector.html">std::vector</a></code><ul>


@@ -73,26 +73,26 @@ $(function() {
</div><!--header-->
<div class="contents">
<div class="textblock"><p >This folder contains the protocol buffer definitions used by the rippled gRPC API. The gRPC API attempts to mimic the JSON/Websocket API as much as possible. As of April 2020, the gRPC API supports a subset of the full rippled API: tx, account_tx, account_info, fee and submit.</p>
<h2><a class="anchor" id="autotoc_md190"></a>
<h2><a class="anchor" id="autotoc_md191"></a>
Making Changes</h2>
<h3><a class="anchor" id="autotoc_md191"></a>
<h3><a class="anchor" id="autotoc_md192"></a>
Wire Format and Backwards Compatibility</h3>
<p >When making changes to the protocol buffer definitions in this folder, care must be taken to ensure the changes do not break the wire format, which would break backwards compatibility. At a high level, do not change any existing fields. This includes the field's name, type and field number. Do not remove any existing fields. It is always safe to add fields; just remember to give each of the new fields a unique field number. The field numbers don't have to be in any particular order and there can be gaps. More info about what changes break the wire format can be found <a href="https://developers.google.com/protocol-buffers/docs/proto3#updating">here</a>.</p>
<h3><a class="anchor" id="autotoc_md192"></a>
<h3><a class="anchor" id="autotoc_md193"></a>
Conventions</h3>
<p >For fields that are reused across different message types, we define the field as a unique message type in common.proto. The name of the message type is the same as the field name, with the exception that the field name itself is snake case, whereas the message type is in Pascal case. The message type has one field, called <code>value</code>. This pattern does not need to be strictly followed across the entire API, but should be followed for transactions and ledger objects, since there is a high rate of field reuse across different transactions and ledger objects. The motivation for this pattern is two-fold. First, we ensure the field has the same type everywhere that the field is used. Second, wrapping primitive types in their own message type prevents default initialization of those primitive types. For example, <code>uint32</code> is initialized to <code>0</code> if not explicitly set; there is no way to tell if the client or server set the field to <code>0</code> (which may be a valid value for the field) or the field was default initialized.</p>
<h3><a class="anchor" id="autotoc_md193"></a>
<h3><a class="anchor" id="autotoc_md194"></a>
Name Collisions</h3>
<p >Each message type must have a unique name. To resolve collisions, add a suffix to one or more message types. For instance, ledger objects and transaction types often have the same name (<code>DepositPreauth</code> for example). To resolve this, the <code>DepositPreauth</code> ledger object is named <code>DepositPreauthObject</code>.</p>
<h3><a class="anchor" id="autotoc_md194"></a>
<h3><a class="anchor" id="autotoc_md195"></a>
To add a field or message type</h3>
<p >To add a field to a message, define the field&#39;s type, name and unique field number. To add a new message type, give the message type a unique name. Then, add the appropriate C++ code in GRPCHelpers.cpp, or in the handler itself, to serialize/deserialize the new field or message type.</p>
<h3><a class="anchor" id="autotoc_md195"></a>
<h3><a class="anchor" id="autotoc_md196"></a>
To add a new gRPC method</h3>
<p >To add a new gRPC method, add the gRPC method in xrp_ledger.proto. The method name should begin with a verb. Define the request and response types in their own file. The name of the request type should be the method name suffixed with <code>Request</code>, and the response type name should be the method name suffixed with <code>Response</code>. For example, the <code>GetAccountInfo</code> method has request type <code>GetAccountInfoRequest</code> and response type <code>GetAccountInfoResponse</code>.</p>
<p >After defining the protobuf messages for the new method, add an instantiation of the templated <code>CallData</code> class in GRPCServerImpl::setupListeners(). The template parameters should be the request type and the response type.</p>
<p >Finally, define the handler itself in the appropriate file under the src/ripple/rpc/handlers folder. If the method already has a JSON/Websocket equivalent, write the gRPC handler in the same file, and abstract common logic into helper functions (see <a class="el" href="Tx_8cpp_source.html">Tx.cpp</a> or <a class="el" href="AccountTx_8cpp_source.html">AccountTx.cpp</a> for an example).</p>
<h3><a class="anchor" id="autotoc_md196"></a>
<h3><a class="anchor" id="autotoc_md197"></a>
Testing</h3>
<p >When modifying an existing gRPC method, be sure to test that modification in the corresponding, existing unit test. When creating a new gRPC method, implement a class that derives from GRPCTestClientBase, and use the newly created class to call the new method. See the class <code>GrpcTxClient</code> in the file Tx_test.cpp for an example. The gRPC tests are paired with their JSON counterpart, and the tests should mirror the JSON test as much as possible.</p>
<p >Refer to the Protocol Buffers <a href="https://developers.google.com/protocol-buffers/docs/proto3">language guide</a> for more detailed information about Protocol Buffers. </p>


@@ -73,11 +73,11 @@ $(function() {
</div><!--header-->
<div class="contents">
<div class="textblock"><p >Classes and functions for handling data and values associated with the XRP Ledger protocol.</p>
<h1><a class="anchor" id="autotoc_md199"></a>
<h1><a class="anchor" id="autotoc_md200"></a>
Serialized Objects</h1>
<p >Objects transmitted over the network must be serialized into a canonical format. The prefix "ST" refers to classes that deal with the serialized format.</p>
<p >The term "Tx" or "tx" is an abbreviation for "Transaction", a commonly occurring object type.</p>
<h2><a class="anchor" id="autotoc_md200"></a>
<h2><a class="anchor" id="autotoc_md201"></a>
Optional Fields</h2>
<p >Our serialized fields have some "type magic" to make optional fields easier to read:</p>
<ul>
@@ -91,7 +91,7 @@ Optional Fields</h2>
</ul>
<p >Typically, for things that are guaranteed to exist, you use <code>x[sfFoo]</code> and avoid having to deal with a container that may or may not hold a value. For things not guaranteed to exist, you use <code>x[~sfFoo]</code> because you want such a container. It avoids having to look something up twice, once just to see if it exists and a second time to get/set its value. (<a href="https://github.com/ripple/rippled/blob/35f4698aed5dce02f771b34cfbb690495cb5efcc/src/ripple/app/tx/impl/PayChan.cpp#L229-L236">Real example</a>)</p>
<p >The source of this "type magic" is in <a href="./SField.h#L296-L302">SField.h</a>.</p>
<h2><a class="anchor" id="autotoc_md201"></a>
<h2><a class="anchor" id="autotoc_md202"></a>
Related Resources</h2>
<ul>
<li><a href="https://github.com/XRPLF/xrpl.js/tree/main/packages/ripple-binary-codec/src/enums">ripple-binary-codec SField enums</a></li>


@@ -79,14 +79,14 @@ $(function() {
<li>Provide an interface to share load information in a cluster.</li>
<li>Warn and/or disconnect endpoints for imposing load.</li>
</ul>
<h1><a class="anchor" id="autotoc_md203"></a>
<h1><a class="anchor" id="autotoc_md204"></a>
Description</h1>
<p >To prevent monopolization of server resources or attacks on servers, resource consumption is monitored at each endpoint. When consumption exceeds certain thresholds, costs are imposed. Costs could include charging additional XRP for transactions, requiring a proof of work to be performed, or simply disconnecting the endpoint.</p>
<p >Currently, consumption endpoints include websocket connections used to service clients, and peer connections used to create the peer to peer overlay network implementing the Ripple protocol.</p>
<p >The current "balance" of a Consumer represents resource consumption debt or credit. Debt is accrued when bad loads are imposed. Credit is granted when good loads are imposed. When the balance crosses heuristic thresholds, costs are increased on the endpoint. The balance is represented as a unitless relative quantity. This balance is currently held by the Entry struct in the impl/Entry.h file.</p>
<p >Costs associated with specific transactions are defined in the impl/Fees files.</p>
<p >Although RPC connections consume resources, they are transient and cannot be rate limited. It is advised not to expose RPC interfaces to the general public.</p>
<h1><a class="anchor" id="autotoc_md204"></a>
<h1><a class="anchor" id="autotoc_md205"></a>
Consumer Types</h1>
<p >Consumers are placed into three classifications (as identified by the Resource::Kind enumeration):</p>
<ul>
@@ -95,15 +95,15 @@ Consumer Types</h1>
<li>Admin</li>
</ul>
<p >Each caller determines for itself the classification of the Consumer it is creating.</p>
<h1><a class="anchor" id="autotoc_md205"></a>
<h1><a class="anchor" id="autotoc_md206"></a>
Resource Loading</h1>
<p >It is expected that a client will impose a higher load on the server when it first connects: the client may need to catch up on transactions it has missed, or fetch trust lines or transfer fees. The Manager must expect this initial peak load, but not allow that high load to continue because over the long term that would unduly stress the server.</p>
<p >If a client places a sustained high load on the server, that client is initially given a warning message. If that high load continues the Manager may tell the heavily loaded server to drop the connection entirely and not allow re-connection for some amount of time.</p>
<p >Each load is monitored by capturing peaks and then decaying those peak values over time: this is implemented by the DecayingSample class.</p>
<h1><a class="anchor" id="autotoc_md206"></a>
<h1><a class="anchor" id="autotoc_md207"></a>
Gossip</h1>
<p >Each server in a cluster creates a list of IP addresses of endpoints that are imposing a significant load. This list is called Gossip, which is passed to other nodes in that cluster. Gossip helps individual servers in the cluster identify IP addresses that might be unduly loading the entire cluster. Again, the recourse of the individual servers is to drop connections to those IP addresses that occur commonly in the gossip.</p>
<h1><a class="anchor" id="autotoc_md207"></a>
<h1><a class="anchor" id="autotoc_md208"></a>
Access</h1>
<p >In rippled, the Application holds a unique instance of Resource::Manager, which may be retrieved by calling the method <code>Application::getResourceManager()</code>. </p>
</div></div><!-- contents -->


@@ -72,7 +72,7 @@ $(function() {
<div class="headertitle"><div class="title">Unit Tests </div></div>
</div><!--header-->
<div class="contents">
<div class="textblock"><h1><a class="anchor" id="autotoc_md509"></a>
<div class="textblock"><h1><a class="anchor" id="autotoc_md510"></a>
Running Tests</h1>
<p >Unit tests are bundled in the <code>rippled</code> executable and can be executed using the <code>--unittest</code> parameter. Without any arguments to this option, all non-manual unit tests will be executed. If you want to run one or more manual tests, you must specify it by suite or full-name (e.g. <code>ripple.app.NoRippleCheckLimits</code> or just <code>NoRippleCheckLimits</code>).</p>
<p >More than one suite or group of suites can be specified as a comma-separated list via the argument. For example, <code>--unittest=beast,OversizeMeta</code> will run all suites in the <code>beast</code> library (root identifier) as well as the test suite named <code>OversizeMeta</code>. All name matches are case sensitive.</p>


@@ -73,7 +73,7 @@ $(function() {
</div><!--header-->
<div class="contents">
<div class="textblock"><p >The Consensus Simulation Framework is a set of software components for describing, running and analyzing simulations of the consensus algorithm in a controlled manner. It is also used to unit test the generic Ripple consensus algorithm implementation. The framework is in its early stages, so the design and supported features are subject to change.</p>
<h1><a class="anchor" id="autotoc_md501"></a>
<h1><a class="anchor" id="autotoc_md502"></a>
Overview</h1>
<p >The simulation framework focuses on simulating the core consensus and validation algorithms as a <a href="https://en.wikipedia.org/wiki/Discrete_event_simulation">discrete event simulation</a>. It is completely abstracted from the details of the XRP ledger and transactions. In the simulation, a ledger is simply a set of observed integers and transactions are single integers. The consensus process works to agree on the set of integers to include in the next ledger.</p>
<div class="image">
@@ -89,7 +89,7 @@ CSF Overview</div></div>
<li><code>Collector</code>s that aggregate, filter and analyze data from the simulation. Typically, this is used to monitor invariants or generate reports.</li>
</ul>
<p >Once specified, the simulation runs using a single <code>Scheduler</code> that manages the global clock and sequencing of activity. During the course of simulation, <code>Peer</code>s generate <code>Ledger</code>s and <code>Validation</code>s as a result of consensus, eventually fully validating the consensus history of accepted transactions. Each <code>Peer</code> also issues various <code>Event</code>s during the simulation, which are analyzed by the registered <code>Collector</code>s.</p>
<h1><a class="anchor" id="autotoc_md502"></a>
<h1><a class="anchor" id="autotoc_md503"></a>
Example Simulation</h1>
<p >Below is a basic simulation we can walk through to get an understanding of the framework. This simulation is for a set of 5 validators that aren't directly connected but rely on a single hub node for communication.</p>
<div class="image">
@@ -125,7 +125,7 @@ Example Sim</div></div>
<div class="line"> </div>
<div class="line">std::cout &lt;&lt; (simDur.stop - simDur.start).count() &lt;&lt; std::endl;</div>
<div class="line">assert(sim.synchronized());</div>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md503"></a>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md504"></a>
&lt;tt&gt;Sim&lt;/tt&gt; and &lt;tt&gt;PeerGroup&lt;/tt&gt;</h2>
<div class="fragment"><div class="line"> {c++}</div>
<div class="line">Sim sim;</div>
@@ -135,7 +135,7 @@ Example Sim</div></div>
<div class="line">center[0]-&gt;runAsValidator = false;</div>
</div><!-- fragment --><p >The simulation code starts by creating a single instance of the <a href="./Sim.h"><code>Sim</code> class</a>. This class is used to manage the overall simulation and internally owns most other components, including the <code>Peer</code>s, <code>Scheduler</code>, <code>BasicNetwork</code> and <code>TrustGraph</code>. The next two lines create two different <code>PeerGroup</code>s of sizes 5 and 1. A <a href="./PeerGroup.h"><code>PeerGroup</code></a> is a convenient way for configuring a set of related peers together and internally has a vector of pointers to the <code>Peer</code>s which are owned by the <code>Sim</code>. <code>PeerGroup</code>s can be combined using <code>+/-</code> operators to configure more complex relationships of nodes as shown by <code>PeerGroup network</code>. Note that each call to <code>createGroup</code> adds that many new <code>Peer</code>s to the simulation, but does not specify any trust or network relationships for the new <code>Peer</code>s.</p>
<p >Lastly, the single <code>Peer</code> in the size 1 <code>center</code> group is switched from running as a validator (the default) to running as a tracking peer. The <a href="./Peer.h"><code>Peer</code> class</a> has a variety of configurable parameters that control how it behaves during the simulation.</p>
<h1><a class="anchor" id="autotoc_md504"></a>
<h1><a class="anchor" id="autotoc_md505"></a>
&lt;tt&gt;trust&lt;/tt&gt; and &lt;tt&gt;connect&lt;/tt&gt;</h1>
<div class="fragment"><div class="line"> {c++}</div>
<div class="line">validators.trust(validators);</div>
@@ -146,7 +146,7 @@ Example Sim</div></div>
<div class="line">validators.connect(center, delay);</div>
</div><!-- fragment --><p >Although the <code>sim</code> object has accessible instances of <a href="./TrustGraph.h">TrustGraph</a> and <a href="./BasicNetwork.h">BasicNetwork</a>, it is more convenient to manage the graphs via the <code>PeerGroup</code>s. The first two lines create a trust topology in which all <code>Peer</code>s trust the 5 validating <code>Peer</code>s. Or in the UNL perspective, all <code>Peer</code>s are configured with the same UNL listing the 5 validating <code>Peer</code>s. The two lines could've been rewritten as <code>network.trust(validators)</code>.</p>
<p >The next lines create the network communication topology. Each of the validating <code>Peer</code>s connects to the central hub <code>Peer</code> with a fixed delay of 200ms. Note that the network connections are really undirected, but are represented internally in a directed graph using edge pairs of inbound and outbound connections.</p>
<h1><a class="anchor" id="autotoc_md505"></a>
<h1><a class="anchor" id="autotoc_md506"></a>
Collectors</h1>
<div class="fragment"><div class="line"> {c++}</div>
<div class="line">SimDurationCollector simDur;</div>
@@ -155,14 +155,14 @@ Collectors</h1>
<p >Note that the collector lifetime is independent of the simulation and is added to the simulation by reference. This is intentional, since collectors might be used across several simulations to collect more complex combinations of data. At the end of the simulation, we print out the total duration by subtracting <code>simDur</code> members.</p>
<div class="fragment"><div class="line"> {c++}</div>
<div class="line">std::cout &lt;&lt; (simDur.stop - simDur.start).count() &lt;&lt; std::endl;</div>
</div><!-- fragment --><h1><a class="anchor" id="autotoc_md506"></a>
</div><!-- fragment --><h1><a class="anchor" id="autotoc_md507"></a>
Transaction submission</h1>
<div class="fragment"><div class="line"> {c++}</div>
<div class="line">// everyone submits their own ID as a TX and relays it to peers</div>
<div class="line">for (Peer * p : validators)</div>
<div class="line"> p-&gt;submit(Tx(static_cast&lt;std::uint32_t&gt;(p-&gt;id)));</div>
</div><!-- fragment --><p >In this basic example, we explicitly submit a single transaction to each validator. For larger simulations, clients can use a <a href="./submitters.h">Submitter</a> to send transactions in at fixed or random intervals to fixed or random <code>Peer</code>s.</p>
<h1><a class="anchor" id="autotoc_md507"></a>
<h1><a class="anchor" id="autotoc_md508"></a>
Run</h1>
<p >The example has two calls to <code>sim.run(1)</code>. This call runs the simulation until each <code>Peer</code> has closed one additional ledger. After closing the additional ledger, the <code>Peer</code> stops participating in consensus. The first call is used to ensure a more useful prior state of all <code>Peer</code>s. After the transaction submission, the second call to <code>run</code> results in one additional ledger that accepts those transactions.</p>
<p >Alternatively, you can specify a duration to run the simulation, e.g. <code>sim.run(10s)</code> which would have <code>Peer</code>s continuously run consensus until the scheduler has elapsed 10 additional seconds. The <code>sim.scheduler.in</code> or <code>sim.scheduler.at</code> methods can schedule arbitrary code to execute at a later time in the simulation, for example removing a network connection or modifying the trust graph. </p>


@@ -72,9 +72,9 @@ $(function() {
<div class="headertitle"><div class="title">Ledger Process </div></div>
</div><!--header-->
<div class="contents">
<div class="textblock"><h1><a class="anchor" id="autotoc_md512"></a>
<div class="textblock"><h1><a class="anchor" id="autotoc_md513"></a>
Introduction</h1>
<h1><a class="anchor" id="autotoc_md513"></a>
<h1><a class="anchor" id="autotoc_md514"></a>
Life Cycle</h1>
<p >Every server always has an open ledger. All received new transactions are applied to the open ledger. The open ledger can't close until we reach a consensus on the previous ledger, and either there is at least one transaction in the open ledger or the ledger's close time has been reached.</p>
<p >When the open ledger is closed the transactions in the open ledger become the initial proposal. Validators will send the proposal (non-validators will simply not send the proposal). The open ledger contains the set of transactions the server thinks should go into the next ledger.</p>
@@ -85,17 +85,17 @@ Life Cycle</h1>
<li>Forms the basis of the initial proposal during consensus</li>
<li>Used to decide if we can reject the transaction without relaying it</li>
</ul>
<h1><a class="anchor" id="autotoc_md514"></a>
<h1><a class="anchor" id="autotoc_md515"></a>
Byzantine Failures</h1>
<p >Byzantine failures are resolved as follows. If there is a supermajority ledger, then a minority of validators will discover that the consensus round is proceeding on a different ledger than they thought. These validators will become desynced, and switch to a strategy of trying to acquire the consensus ledger.</p>
<p >If there is no majority ledger, then starting on the next consensus round there will not be a consensus on the last closed ledger. Another avalanche process is started.</p>
<h1><a class="anchor" id="autotoc_md515"></a>
<h1><a class="anchor" id="autotoc_md516"></a>
Validators</h1>
<p >The only meaningful difference between a validator and a 'regular' server is that the validator sends its proposals and validations to the network.</p>
<hr />
<h1><a class="anchor" id="autotoc_md517"></a>
<h1><a class="anchor" id="autotoc_md518"></a>
The Ledger Stream</h1>
<h2><a class="anchor" id="autotoc_md518"></a>
<h2><a class="anchor" id="autotoc_md519"></a>
Ledger Priorities</h2>
<p >There are two ledgers that are the most important for a rippled server to have:</p>
<ul>
@@ -113,7 +113,7 @@ Ledger Priorities</h2>
<p >But loading or network connectivity may sometimes interfere with that ledger stream. So suppose the server publishes validated ledger 600 and then receives validated ledger 603. Then the server wants to back fill its ledger history with ledgers 601 and 602.</p>
<p >The server prioritizes keeping up with current ledgers. But if it is caught up on the current ledger, and there are no higher priority demands on the server, then it will attempt to back fill its historical ledgers. It fills in the historical ledger data first by attempting to retrieve it from the local database. If the local database does not have all of the necessary data then the server requests the remaining information from network peers.</p>
<p >Suppose the server is missing multiple historical ledgers. Take the previous example where we have ledgers 603 and 600, but we're missing 601 and 602. In that case the server requests information for ledger 602 first, before back-filling ledger 601. We want to expand the contiguous range of most-recent ledgers that the server has locally. There's also a limit to how much historical ledger data is useful. So if we're on ledger 603, but we're missing ledger 4 we may not bother asking for ledger 4.</p>
<h2><a class="anchor" id="autotoc_md519"></a>
<h2><a class="anchor" id="autotoc_md520"></a>
Assembling a Ledger</h2>
<p >When data for a ledger arrives from a peer, it may take a while before the server can apply that data. So when ledger data arrives we schedule a job thread to apply that data. If more data arrives before the job starts we add that data to the job. We defer requesting more ledger data until all of the data we have for that ledger has been processed. Once all of that data is processed we can intelligently request only the additional data that we need to fill in the ledger. This reduces network traffic and minimizes the load on peers supplying the data.</p>
<p >If we receive data for a ledger that is not currently under construction, we don't just throw the data away. In particular the AccountStateNodes may be useful, since they can be re-used across ledgers. This data is stashed in memory (not the database) where the acquire process can find it.</p>
@@ -127,13 +127,13 @@ Assembling a Ledger</h2>
</ol>
<p >Inner-most nodes are supplied before outer nodes. This allows the requesting server to hook things up (and validate) in the order in which data arrives.</p>
<p >If this process fails, then a server can also ask for ledger data by hash, rather than by asking for specific nodes in a ledger. Asking for information by hash is less efficient, but it allows a peer to return the information even if the information is not assembled into a tree. All the peer needs is the raw data.</p>
<h2><a class="anchor" id="autotoc_md520"></a>
<h2><a class="anchor" id="autotoc_md521"></a>
Which Peer To Ask</h2>
<p >Peers go through state transitions as the network goes through its state transitions. Peers provide their state to their directly connected peers. By monitoring the state of each connected peer, a server can tell which of its peers has the information that it needs.</p>
<p >Therefore, if a server suffers a Byzantine failure, it can tell which of its peers did not suffer that same failure. So the server knows which peer(s) to ask for the missing information.</p>
<p >Peers also report their contiguous range of ledgers. This is another way that a server can determine which peer to ask for a particular ledger or piece of a ledger.</p>
<p >There are also indirect peer queries. If there have been timeouts while acquiring ledger data then a server may issue indirect queries. In that case the server receiving the indirect query passes the query along to any of its peers that may have the requested data. This is important if the network has a byzantine failure. It also helps protect the validation network. A validator may need to get a peer set from one of the other validators, and indirect queries improve the likelihood of success with that.</p>
<h2><a class="anchor" id="autotoc_md521"></a>
<h2><a class="anchor" id="autotoc_md522"></a>
Kinds of Fetch Packs</h2>
<p >A FetchPack is the way that peers send partial ledger data to other peers so the receiving peer can reconstruct a ledger.</p>
<p >A 'normal' FetchPack is a bucket of nodes indexed by hash. The server building the FetchPack puts information into the FetchPack that the destination server is likely to need. Normally they contain all of the missing nodes needed to fill in a ledger.</p>
@@ -147,36 +147,36 @@ Kinds of Fetch Packs</h2>
<li>The index and new data of modified nodes in the state tree.</li>
</ul>
<hr />
<h1><a class="anchor" id="autotoc_md523"></a>
<h1><a class="anchor" id="autotoc_md524"></a>
Definitions</h1>
<h2><a class="anchor" id="autotoc_md524"></a>
<h2><a class="anchor" id="autotoc_md525"></a>
Open Ledger</h2>
<p >The open ledger is the ledger that the server applies all new incoming transactions to.</p>
<h2><a class="anchor" id="autotoc_md525"></a>
<h2><a class="anchor" id="autotoc_md526"></a>
Last Validated Ledger</h2>
<p >The most recent ledger that the server is certain will always remain part of the permanent, public history.</p>
<h2><a class="anchor" id="autotoc_md526"></a>
<h2><a class="anchor" id="autotoc_md527"></a>
Last Closed Ledger</h2>
<p >The most recent ledger that the server believes the network reached consensus on. Different servers can arrive at different conclusions about the last closed ledger. This is a consequence of Byzantine failure. The purpose of validations is to resolve the differences between servers and come to a common conclusion about which last closed ledger is authoritative.</p>
<h2><a class="anchor" id="autotoc_md527"></a>
<h2><a class="anchor" id="autotoc_md528"></a>
Consensus</h2>
<p >A distributed agreement protocol. Ripple uses the consensus process to solve the problem of double-spending.</p>
<h2><a class="anchor" id="autotoc_md528"></a>
<h2><a class="anchor" id="autotoc_md529"></a>
Validation</h2>
<p >A signed statement by a validator indicating that it built a particular ledger as a result of the consensus process.</p>
<h2><a class="anchor" id="autotoc_md529"></a>
<h2><a class="anchor" id="autotoc_md530"></a>
Proposal</h2>
<p >A signed statement by a validator of which transactions it believes should be included in the next consensus ledger.</p>
<h2><a class="anchor" id="autotoc_md530"></a>
<h2><a class="anchor" id="autotoc_md531"></a>
Ledger Header</h2>
<p >The "ledger header" is the chunk of data that hashes to the ledger's hash. It contains the sequence number, the parent hash (the hash of the previous ledger), the hash of the root node of the state tree, and so on.</p>
<h2><a class="anchor" id="autotoc_md531"></a>
<h2><a class="anchor" id="autotoc_md532"></a>
Ledger Base</h2>
<p >The term "ledger base" refers to a particular type of query and response used in the ledger fetch process that includes the ledger header but may also contain other information such as the root node of the state tree.</p>
<hr />
<h1><a class="anchor" id="autotoc_md533"></a>
<h1><a class="anchor" id="autotoc_md534"></a>
Ledger Structures</h1>
<h2><a class="anchor" id="autotoc_md534"></a>
<h2><a class="anchor" id="autotoc_md535"></a>
Account Root</h2>
<p ><b>Account:</b> A 160-bit account ID.</p>
<p ><b>Balance:</b> Balance in the account.</p>
@@ -187,7 +187,7 @@ Account Root</h2>
<p ><b>PreviousTxnLgrSeq:</b> Ledger sequence number of the previous transaction on this account.</p>
<p ><b>Sequence:</b> Must be a value of 1 for the account to process a valid transaction. The value initially matches the sequence number of the state tree of the account that signed the transaction. The process of executing the transaction increments the sequence number. This is how Ripple prevents a transaction from executing more than once.</p>
<p ><b>index:</b> 256-bit hash of this AccountRoot.</p>
<h2><a class="anchor" id="autotoc_md535"></a>
<h2><a class="anchor" id="autotoc_md536"></a>
Trust Line</h2>
<p >The trust line acts as an edge connecting two accounts: the accounts represented by the HighNode and the LowNode. Which account is "high" and "low" is determined by the values of the two 160-bit account IDs. The account with the smaller 160-bit ID is always the low account. This ordering makes the hash of a trust line between accounts A and B have the same value as a trust line between accounts B and A.</p>
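<p >The order-independence described above can be sketched as follows: sort the two 160-bit account IDs before hashing, so that the trust line between A and B keys to the same index as the one between B and A. SHA-256 stands in for the ledger's actual index hash, and the sample IDs are hypothetical:</p>

```python
import hashlib


def trust_line_key(account_a: bytes, account_b: bytes) -> str:
    """Order the two 160-bit account IDs (smaller ID is the 'low' account)
    so that A/B and B/A produce the same trust line index.

    SHA-256 is a stand-in for the hash function actually used by the ledger.
    """
    low, high = sorted((account_a, account_b))
    return hashlib.sha256(low + high).hexdigest()


# Hypothetical 160-bit (20-byte) account IDs.
a = bytes.fromhex("aa" * 20)
b = bytes.fromhex("bb" * 20)
assert trust_line_key(a, b) == trust_line_key(b, a)
```
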
<p ><b>Balance:</b></p><ul>
@@ -212,14 +212,14 @@ Trust Line</h2>
<p ><b>PreviousTxnID:</b> 256-bit hash of the previous transaction on this account.</p>
<p ><b>PreviousTxnLgrSeq:</b> Ledger sequence number of the previous transaction on this account.</p>
<p ><b>index:</b> 256-bit hash of this RippleState.</p>
<h2><a class="anchor" id="autotoc_md536"></a>
<h2><a class="anchor" id="autotoc_md537"></a>
Ledger Hashes</h2>
<p ><b>Flags:</b> ???</p>
<p ><b>Hashes:</b> A list of the hashes of the previous 256 ledgers.</p>
<p ><b>LastLedgerSequence:</b></p>
<p ><b>LedgerEntryType:</b> "LedgerHashes".</p>
<p ><b>index:</b> 256-bit hash of this LedgerHashes.</p>
<h2><a class="anchor" id="autotoc_md537"></a>
<h2><a class="anchor" id="autotoc_md538"></a>
Owner Directory</h2>
<p >Lists all of the offers and trust lines that are associated with an account.</p>
<p ><b>Flags:</b> ???</p>
@@ -228,7 +228,7 @@ Owner Directory</h2>
<p ><b>Owner:</b> 160-bit ID of the owner account.</p>
<p ><b>RootIndex:</b></p>
<p ><b>index:</b> A hash of the owner account.</p>
<h2><a class="anchor" id="autotoc_md538"></a>
<h2><a class="anchor" id="autotoc_md539"></a>
Book Directory</h2>
<p >Lists one or more offers that have the same quality.</p>
<p >If a pair of Currency and Issuer fields are all zeros, then that pair is dealing in XRP.</p>
@@ -245,31 +245,31 @@ Book Directory</h2>
<p ><b>TakerPaysIssuer:</b> Issuer of the PaysCurrency.</p>
<p ><b>index:</b> A 256-bit hash computed using the TakerGetsCurrency, TakerGetsIssuer, TakerPaysCurrency, and TakerPaysIssuer in the top 192 bits. The lower 64-bits are occupied by the exchange rate.</p>
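<p >The index layout described above (a 192-bit book prefix with the 64-bit exchange rate packed into the low-order bits) can be sketched like this; SHA-256 truncated to 192 bits is a stand-in for the real hash, and the field values are hypothetical:</p>

```python
import hashlib


def book_directory_index(gets_currency: bytes, gets_issuer: bytes,
                         pays_currency: bytes, pays_issuer: bytes,
                         quality: int) -> int:
    """Sketch of a BookDirectory index: top 192 bits identify the book,
    the lower 64 bits hold the exchange rate (quality).

    SHA-256 (truncated to 24 bytes) stands in for the actual hash function.
    """
    prefix = hashlib.sha256(
        gets_currency + gets_issuer + pays_currency + pays_issuer
    ).digest()[:24]  # 24 bytes = top 192 bits
    return (int.from_bytes(prefix, "big") << 64) | (quality & (2**64 - 1))


idx = book_directory_index(b"USD", b"issuer1", b"XRP", b"", 1_000_000)
assert idx & (2**64 - 1) == 1_000_000   # low 64 bits carry the rate
assert idx >> 64 < 2**192               # high part fits in 192 bits
```

<p >Because offers of the same quality share the top 192 bits, they land in the same book directory, and directories for one book sort naturally by exchange rate.</p>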
<hr />
<h1><a class="anchor" id="autotoc_md540"></a>
<h1><a class="anchor" id="autotoc_md541"></a>
Ledger Publication</h1>
<h2><a class="anchor" id="autotoc_md541"></a>
<h2><a class="anchor" id="autotoc_md542"></a>
Overview</h2>
<p >The Ripple server permits clients to subscribe to a continuous stream of fully-validated ledgers. The publication code maintains this stream.</p>
<p >The server attempts to maintain this continuous stream unless it falls too far behind, in which case it jumps to the current fully-validated ledger and then attempts to resume a continuous stream.</p>
<h2><a class="anchor" id="autotoc_md542"></a>
<h2><a class="anchor" id="autotoc_md543"></a>
Implementation</h2>
<p ><code>LedgerMaster::doAdvance</code> is invoked when work may need to be done to publish ledgers to clients. This code loops until it cannot make further progress.</p>
<p ><code>LedgerMaster::findNewLedgersToPublish</code> is called first. If the last fully-valid ledger's sequence number is greater than the last published ledger's sequence number, it attempts to publish those ledgers, retrieving them if needed.</p>
<p >If there are no new ledgers to publish, <code>doAdvance</code> determines if it can backfill history. If the publication is not caught up, backfilling is not attempted to conserve resources.</p>
<p >If history can be backfilled, the missing ledger with the highest sequence number is retrieved first. If a historical ledger is retrieved, and its predecessor is in the database, <code>tryFill</code> is invoked to update the list of resident ledgers.</p>
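<p >The publication loop described above can be modeled roughly as follows. The function name mirrors <code>LedgerMaster::doAdvance</code> but the logic is a deliberate simplification: publish forward first, and only backfill history (highest sequence number first) when publication is caught up:</p>

```python
def do_advance(last_valid_seq, last_published_seq, missing_history):
    """Simplified model of LedgerMaster::doAdvance.

    Publishes any new fully-validated ledgers first; backfills missing
    historical ledgers (highest sequence first) only when caught up.
    """
    # findNewLedgersToPublish: publish everything newer than the last
    # published ledger, up to the last fully-valid ledger.
    published = list(range(last_published_seq + 1, last_valid_seq + 1))
    caught_up = not published  # nothing new to publish
    # Backfilling is skipped while behind, to conserve resources.
    backfilled = sorted(missing_history, reverse=True) if caught_up else []
    return published, backfilled


assert do_advance(10, 8, [3, 5]) == ([9, 10], [])   # behind: publish only
assert do_advance(10, 10, [3, 5]) == ([], [5, 3])   # caught up: backfill
```
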
<hr />
<h1><a class="anchor" id="autotoc_md544"></a>
<h1><a class="anchor" id="autotoc_md545"></a>
The Ledger Cleaner</h1>
<h2><a class="anchor" id="autotoc_md545"></a>
<h2><a class="anchor" id="autotoc_md546"></a>
Overview</h2>
<p >The ledger cleaner checks and, if necessary, repairs the SQLite ledger and transaction databases. It can also check for pieces of a ledger that should be in the node back end but are missing. If it detects this case, it triggers a fetch of the ledger. The ledger cleaner only operates by manual request. It is never started automatically.</p>
<h2><a class="anchor" id="autotoc_md546"></a>
<h2><a class="anchor" id="autotoc_md547"></a>
Operations</h2>
<p >The ledger cleaner can operate on a single ledger or a range of ledgers. It always validates the ledger chain itself, ensuring that the SQLite database contains a consistent chain of ledgers from the last validated ledger as far back as the database goes.</p>
<p >If requested, it can additionally repair the SQLite entries for transactions in each checked ledger. This was primarily intended to repair incorrect entries created by a bug (since fixed) that could cause transactions from a ledger other than the fully-validated ledger to appear in the SQLite databases in addition to the transactions from the correct ledger.</p>
<p >If requested, it can additionally check the ledger for missing entries in the account state and transaction trees.</p>
<p >To prevent the ledger cleaner from saturating the available I/O bandwidth and excessively polluting caches with ancient information, the ledger cleaner paces itself and does not attempt to get its work done quickly.</p>
<h2><a class="anchor" id="autotoc_md547"></a>
<h2><a class="anchor" id="autotoc_md548"></a>
Commands</h2>
<p >The ledger cleaner can be controlled and monitored with the <b>ledger_cleaner</b> RPC command. With no parameters, this command reports on the status of the ledger cleaner. This includes the range of ledgers it has been asked to process, the checks it is doing, and the number of errors it has found.</p>
<p >The ledger cleaner can be started, stopped, or have its behavior changed by the following RPC parameters:</p>
@@ -280,7 +280,7 @@ Commands</h2>
<p ><b>fix_txns</b>: A boolean indicating whether to replace the SQLite transaction entries unconditionally</p>
<p ><b>check_nodes</b>: A boolean indicating whether to check the specified ledger(s) for missing nodes in the back end node store</p>
<hr />
<h1><a class="anchor" id="autotoc_md549"></a>
<h1><a class="anchor" id="autotoc_md550"></a>
References</h1>
</div></div><!-- contents -->
</div><!-- PageDoc -->

View File

@@ -77,7 +77,7 @@ $(function() {
<li>Rapid Fee escalation</li>
<li>The Transaction Queue</li>
</ol>
<h1><a class="anchor" id="autotoc_md551"></a>
<h1><a class="anchor" id="autotoc_md552"></a>
Fee Escalation</h1>
<p >The guiding principle of fee escalation is that when things are going smoothly, fees stay low, but as soon as high levels of traffic appear on the network, fees grow quickly to extreme levels. This should dissuade malicious users from abusing the system, while giving legitimate users the ability to pay a higher fee to get high-priority transactions into the open ledger, even during unfavorable conditions.</p>
<p >How fees escalate:</p>
@@ -106,7 +106,7 @@ Fee Escalation</h1>
<ul>
<li>This example assumes a cold-start scenario, with a single, possibly malicious, user willing to pay arbitrary amounts to get transactions into the open ledger. It ignores the effects of the Transaction Queue. Any lower fee level transactions submitted by other users at the same time as this user's transactions will go into the transaction queue, and will have the first opportunity to be applied to the <em>next</em> open ledger. The next section describes how that works in more detail.</li>
</ul>
<h1><a class="anchor" id="autotoc_md552"></a>
<h1><a class="anchor" id="autotoc_md553"></a>
Transaction Queue</h1>
<p >An integral part of making fee escalation work for users of the network is the transaction queue. The queue allows legitimate transactions to be considered by the network for future ledgers if the escalated open ledger fee gets too high. This allows users to submit low priority transactions with a low fee, and wait for high fees to drop. It also allows legitimate users to continue submitting transactions during high traffic periods, and give those transactions a much better chance to succeed.</p>
<ol type="1">
@@ -130,9 +130,9 @@ Transaction Queue</h1>
<li>none of the prior queued transactions affect the ability of subsequent transactions to claim a fee.</li>
</ul>
<p >Currently, there is an additional restriction that the queue cannot work with transactions using the <code>sfPreviousTxnID</code> or <code>sfAccountTxnID</code> fields. <code>sfPreviousTxnID</code> is deprecated and shouldn't be used anyway. Future development will make the queue aware of <code>sfAccountTxnID</code> mechanisms.</p>
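<p >A heavily simplified model of the queue's ordering is sketched below: transactions wait sorted by fee level, and the highest-level transactions are considered first for the next open ledger. The per-account sequence rules, replacement, and expiration logic described above are deliberately omitted:</p>

```python
import heapq


class TxQueue:
    """Toy fee-level priority queue: the highest fee level is considered
    first for the next open ledger. Real queue rules (per-account
    ordering, replacement, LastLedgerSequence expiration) are omitted.
    """

    def __init__(self):
        self._heap = []
        self._count = 0  # tie-breaker keeps insertion order stable

    def push(self, tx_id: str, fee_level: int):
        # heapq is a min-heap, so negate the fee level.
        heapq.heappush(self._heap, (-fee_level, self._count, tx_id))
        self._count += 1

    def pop_best(self) -> str:
        return heapq.heappop(self._heap)[2]


q = TxQueue()
q.push("tx_low", 256)
q.push("tx_high", 1024)
assert q.pop_best() == "tx_high"
assert q.pop_best() == "tx_low"
```
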
<h1><a class="anchor" id="autotoc_md553"></a>
<h1><a class="anchor" id="autotoc_md554"></a>
Technical Details</h1>
<h2><a class="anchor" id="autotoc_md554"></a>
<h2><a class="anchor" id="autotoc_md555"></a>
Fee Level</h2>
<p >"Fee level" is used to allow the cost of different types of transactions to be compared directly. For a reference transaction, the base fee level is 256. If a transaction is submitted with a higher <code>Fee</code> field, the fee level is scaled appropriately.</p>
<p >Examples, assuming a reference transaction base fee of 10 drops:</p>
@@ -142,18 +142,18 @@ Fee Level</h2>
<li>A hypothetical future non-reference transaction with a base fee of 15 drops multi-signed with 5 signatures and <code>Fee=90</code> will have a fee level of <code>90 drop fee * 256 fee level / ((1tx + 5sigs) * 15 drop base fee) = 256 fee level</code>.</li>
</ol>
<p >This demonstrates that a simpler transaction paying less XRP can be more likely to get into the open ledger, or be sorted earlier in the queue than a more complex transaction paying more XRP.</p>
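<p >The examples above can be reproduced with a few lines of arithmetic. This sketch assumes a base fee level of 256 and treats a multi-signed transaction as <code>1 + signer_count</code> work units, matching the worked examples; it is an illustration, not the exact rippled implementation:</p>

```python
BASE_FEE_LEVEL = 256  # fee level of a reference transaction


def fee_level(fee_drops, base_fee_drops, signer_count=0):
    """Fee level as in the examples above.

    signer_count is the number of multi-signing signatures
    (0 for a single-signed transaction).
    """
    work_units = 1 + signer_count  # "1tx + Nsigs"
    return fee_drops * BASE_FEE_LEVEL // (work_units * base_fee_drops)


assert fee_level(10, 10) == 256                    # reference transaction
assert fee_level(90, 15, signer_count=5) == 256    # multisig example above
assert fee_level(20, 10) == 512                    # paying double the base fee
```
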
<h2><a class="anchor" id="autotoc_md555"></a>
<h2><a class="anchor" id="autotoc_md556"></a>
Load Fee</h2>
<p >Each rippled server maintains a minimum cost threshold based on its current load. If you submit a transaction with a fee that is lower than the current load-based transaction cost of the rippled server, the server neither applies nor relays the transaction to its peers. A transaction is very unlikely to survive the consensus process unless its transaction fee value meets the requirements of a majority of servers.</p>
<h2><a class="anchor" id="autotoc_md556"></a>
<h2><a class="anchor" id="autotoc_md557"></a>
Reference Transaction</h2>
<p >In this document, a "Reference Transaction" is any currently implemented single-signed transaction (e.g., Payment, Account Set, Offer Create, etc.) that requires a fee.</p>
<p >In the future, there may be other transaction types that require more (or less) work for rippled to process. Those transactions may have a higher (or lower) base fee, requiring a correspondingly higher (or lower) fee to get into the same position as a reference transaction.</p>
<h2><a class="anchor" id="autotoc_md557"></a>
<h2><a class="anchor" id="autotoc_md558"></a>
Consensus Health</h2>
<p >For consensus to be considered healthy, the peers on the network should largely remain in sync with one another. It is particularly important for the validators to remain in sync, because that is required for participation in consensus. However, the network tolerates some validators being out of sync. Fundamentally, network health is a function of validators reaching consensus on sets of recently submitted transactions.</p>
<p >Another factor to consider is the duration of the consensus process itself. Based on historical observations, this generally takes under 5 seconds on the main network under low volume. However, factors such as transaction volume can increase consensus duration, because rippled performs more work as volume increases. A relatively high consensus duration may indicate a problem, but it is not appropriate to conclude so without investigation. The upper limit for consensus duration should be roughly 20 seconds, far above the normal; if the network takes this long to close ledgers, it is almost certain that there is a problem with the network. This circumstance often coincides with new ledgers containing zero transactions.</p>
<h2><a class="anchor" id="autotoc_md558"></a>
<h2><a class="anchor" id="autotoc_md559"></a>
Other Constants</h2>
<ul>
<li><em>Base fee transaction limit per ledger</em>. The minimum value of 5 was chosen to ensure the limit never gets so small that the ledger becomes unusable. The "target" value of 50 was chosen so the limit never gets large enough to invite abuse, but keeps up if the network stays healthy and active. These exact values were chosen experimentally, and can easily change in the future.</li>
@@ -165,7 +165,7 @@ Other Constants</h2>
<li><em>Minimum last ledger sequence buffer</em>. If a transaction has a <code>LastLedgerSequence</code> value, and cannot be processed into the open ledger, that <code>LastLedgerSequence</code> must be at least 2 more than the sequence number of the open ledger to be considered for the queue. The value was chosen to provide a balance between letting the user control the lifespan of the transaction, and giving a queued transaction a chance to get processed out of the queue before getting discarded, particularly since it may have dependent transactions also in the queue, which will never succeed if this one is discarded.</li>
<li><em>Replaced transaction fee increase</em>. Any transaction in the queue can be replaced by another transaction with the same sequence number and at least a 25% higher fee level. The 25% increase is intended to cover the resource cost incurred by broadcasting the original transaction to the network. This value was chosen experimentally, and can easily change in the future.</li>
</ul>
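<p >The replacement rule above reduces to a one-line comparison. This sketch uses integer arithmetic so the 25% threshold is exact (illustrative only):</p>

```python
def can_replace(old_fee_level, new_fee_level):
    """A queued transaction may be replaced by another with the same
    sequence number and at least a 25% higher fee level.

    Integer comparison avoids floating-point rounding at the boundary:
    new >= old * 1.25  <=>  new * 4 >= old * 5.
    """
    return new_fee_level * 4 >= old_fee_level * 5


assert can_replace(256, 320)        # exactly +25%: accepted
assert not can_replace(256, 319)    # just under +25%: rejected
```
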
<h2><a class="anchor" id="autotoc_md559"></a>
<h2><a class="anchor" id="autotoc_md560"></a>
&lt;tt&gt;fee&lt;/tt&gt; command</h2>
<p ><b>The <code>fee</code> RPC and WebSocket command is still experimental, and may change without warning.</b></p>
<p ><code>fee</code> takes no parameters, and returns information about the current local fee escalation and transaction queue state as both fee levels and drops. The drop values assume a single-signed reference transaction. It is up to the user to compute the necessary fees for other types of transactions. (E.g. multiply all drop values by 5 for a multi-signed transaction with 4 signatures.)</p>
@@ -191,7 +191,7 @@ Other Constants</h2>
<div class="line"> }</div>
<div class="line"> }</div>
<div class="line">}</div>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md560"></a>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md561"></a>
&lt;a href="https://xrpl.org/server_info.html" &gt;&lt;tt&gt;server_info&lt;/tt&gt;&lt;/a&gt; command</h2>
<p ><b>The fields listed here are still experimental, and may change without warning.</b></p>
<p >Up to two fields in <code>server_info</code> output are related to fee escalation.</p>
@@ -200,7 +200,7 @@ Other Constants</h2>
<li><code>load_factor_fee_queue</code>: If the queue is full, this is the factor on base transaction cost that a transaction must pay to get into the queue. If not full, the value is 1 and is not returned.</li>
</ol>
<p >In all cases, the transaction fee must be high enough to overcome both <code>load_factor_fee_queue</code> and <code>load_factor</code> to be considered. It does not need to overcome <code>load_factor_fee_escalation</code>, though if it does not, it is more likely to be queued than immediately processed into the open ledger.</p>
<h2><a class="anchor" id="autotoc_md561"></a>
<h2><a class="anchor" id="autotoc_md562"></a>
&lt;a href="https://xrpl.org/server_state.html" &gt;&lt;tt&gt;server_state&lt;/tt&gt;&lt;/a&gt; command</h2>
<p ><b>The fields listed here are still experimental, and may change without warning.</b></p>
<p >Three fields in <code>server_state</code> output are related to fee escalation.</p>

View File

@@ -75,7 +75,7 @@ $(function() {
<div class="textblock"><p >The Ripple payment protocol enforces a fee schedule expressed in units of the native currency, XRP. Fees for transactions are paid directly from the account owner. There are also reserve requirements for each item that occupies storage in the ledger. The reserve fee schedule contains both a per-account reserve, and a per-owned-item reserve. The items an account may own include active offers, trust lines, and tickets.</p>
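<p >The reserve schedule described above is a simple linear formula: a per-account base plus one per-item reserve for each owned object. The drop values in this sketch are illustrative placeholders, not the network's current schedule:</p>

```python
def required_reserve(owned_items,
                     account_reserve_drops=10_000_000,
                     owner_reserve_drops=2_000_000):
    """Reserve = per-account reserve + per-owned-item reserve for each
    item occupying ledger storage (offers, trust lines, tickets).

    The default drop values are hypothetical examples only.
    """
    return account_reserve_drops + owned_items * owner_reserve_drops


assert required_reserve(0) == 10_000_000   # empty account
assert required_reserve(3) == 16_000_000   # three offers/trust lines/tickets
```
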
<p >Validators may vote to increase fees if they feel that the network is charging too little. They may also vote to decrease fees if the fees are too costly relative to the value the network provides. One common case where a validator may want to change fees is when the value of the native currency XRP fluctuates relative to other currencies.</p>
<p >The fee voting mechanism takes place every 256 ledgers ("voting ledgers"). In a voting ledger, each validator takes a position on what they think the fees should be. The consensus process converges on the majority position, and in subsequent ledgers a new fee schedule is enacted.</p>
<h1><a class="anchor" id="autotoc_md563"></a>
<h1><a class="anchor" id="autotoc_md564"></a>
Consensus</h1>
<p >The Ripple consensus algorithm allows distributed participants to arrive at the same answer for yes/no questions. The canonical case for consensus is whether or not a particular transaction is included in the ledger. Fees present a more difficult challenge, since the decision on the new fee is not a yes or no question.</p>
<p >To convert validators' positions on fees into a yes or no question that can be converged in the consensus process, the following algorithm is used:</p>
@@ -86,11 +86,11 @@ Consensus</h1>
<li>The consensus process is applied to these fee-setting transactions as normal. Each transaction is either included in the ledger or not. In most cases, one fee setting transaction will make it in while the others are rejected. In some rare cases more than one fee setting transaction will make it in. The last one to be applied will take effect. This is harmless since a majority of validators still agreed on it.</li>
<li>After the voting ledger has been validated, future pseudo transactions before the next voting ledger are rejected as fee setting transactions may only appear in voting ledgers.</li>
</ul>
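<p >A toy model of the outcome is sketched below: each validator's position becomes a fee-setting pseudo-transaction, and a proposal survives consensus only with majority support. The real mechanism (pseudo-transaction construction, tie handling, multiple surviving transactions) is simplified away:</p>

```python
from collections import Counter


def winning_fee(votes):
    """Toy model of fee voting: each validator proposes a fee; a
    fee-setting pseudo-transaction is included only if a strict
    majority of validators agree on it. Returns None if no majority
    exists (the fee schedule is left unchanged).
    """
    if not votes:
        return None
    fee, count = Counter(votes).most_common(1)[0]
    return fee if count * 2 > len(votes) else None


assert winning_fee([10, 10, 10, 12, 15]) == 10   # majority at 10 drops
assert winning_fee([10, 12, 15]) is None          # no majority: no change
```
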
<h1><a class="anchor" id="autotoc_md564"></a>
<h1><a class="anchor" id="autotoc_md565"></a>
Configuration</h1>
<p >A validating instance of rippled uses information in the configuration file to determine how it wants to vote on the fee schedule. It is the responsibility of the administrator to set these values.</p>
<hr />
<h1><a class="anchor" id="autotoc_md566"></a>
<h1><a class="anchor" id="autotoc_md567"></a>
Amendment</h1>
<p >An Amendment is a new or proposed change to a ledger rule. Ledger rules affect transaction processing and consensus; peers must use the same set of rules for consensus to succeed, otherwise different instances of rippled will get different results. Amendments can be almost anything, but they must be accepted by a network majority through a consensus process before they are utilized. An Amendment must receive at least an 80% approval rate from validating nodes for a period of two weeks before being accepted. The following example outlines the process of an Amendment from its conception to approval and usage.</p>
<ul>
@@ -106,7 +106,7 @@ Amendment</h1>
<p >If an amendment holds majority status for two weeks, validators will introduce a pseudo-transaction to enable the amendment.</p>
<p >All amendments are assumed to be critical and irreversible. Thus there is no mechanism to disable or revoke an amendment, nor is there a way for a server to operate while an amendment it does not understand is enabled.</p>
<hr />
<h1><a class="anchor" id="autotoc_md568"></a>
<h1><a class="anchor" id="autotoc_md569"></a>
SHAMapStore: Online Delete</h1>
<p >Optional online deletion happens through the SHAMapStore. Records are deleted from disk based on ledger sequence number. These records reside in the key-value database as well as in the SQLite ledger and transaction databases. Without online deletion storage usage grows without bounds. It can only be pruned by stopping, manually deleting data, and restarting the server. Online deletion requires less operator intervention to manage the server.</p>
<p >The main mechanism to delete data from the key-value database is to keep two databases open at all times. One database has all writes directed to it. The other database has recent archival data from just prior to that from the current writable database. Upon rotation, the archival database is deleted. The writable database becomes archival, and a brand new database becomes writable. To ensure that no necessary data for transaction processing is lost, a variety of steps occur, including copying the contents of an entire ledger's account state map, clearing caches, and copying the contents of (freshening) other caches.</p>
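<p >The two-database rotation described above can be sketched with in-memory dictionaries. This ignores the cache freshening and state-map copying steps; it only shows how reads span both generations and how rotation ages data out:</p>

```python
class RotatingStore:
    """Sketch of the SHAMapStore two-database scheme: writes go to
    `writable`, `archive` holds the previous generation. On rotation
    the archive is deleted, writable becomes archival, and a fresh
    writable database is opened."""

    def __init__(self):
        self.writable = {}
        self.archive = {}

    def put(self, key, value):
        self.writable[key] = value

    def get(self, key):
        # Check the current database first, then the archival one.
        if key in self.writable:
            return self.writable[key]
        return self.archive.get(key)

    def rotate(self):
        self.archive = self.writable  # old archive is dropped here
        self.writable = {}


store = RotatingStore()
store.put("a", 1)
store.rotate()
store.put("b", 2)
assert store.get("a") == 1 and store.get("b") == 2
store.rotate()
assert store.get("a") is None   # aged out with the deleted archive
```
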

View File

@@ -77,15 +77,15 @@ $(function() {
<li>All hard-coded SQL statements should be stored in the files under the <code>xrpld/app/rdb</code> directory. With the exception of test modules, no hard-coded SQL should be added to any other file in rippled.</li>
<li>The base class <code>RelationalDatabase</code> is inherited by derived classes that each provide an interface for operating on distinct relational database systems.</li>
</ul>
<h1><a class="anchor" id="autotoc_md572"></a>
<h1><a class="anchor" id="autotoc_md573"></a>
Overview</h1>
<p >Firstly, the interface <code>RelationalDatabase</code> is inherited by the classes <code>SQLiteDatabase</code> and <code>PostgresDatabase</code> which are used to operate the software's main data store (for storing transactions, accounts, ledgers, etc.). Secondly, the files under the <code>detail</code> directory provide supplementary functions that are used by these derived classes to access the underlying databases. Lastly, the remaining files in the interface (located at the top level of the module) are used by varied parts of the software to access any secondary relational databases.</p>
<h1><a class="anchor" id="autotoc_md573"></a>
<h1><a class="anchor" id="autotoc_md574"></a>
Configuration</h1>
<p >The config section <code>[relational_db]</code> has a property named <code>backend</code> whose value designates which database implementation will be used for node databases. Presently the only valid value for this property is <code>sqlite</code>:</p>
<div class="fragment"><div class="line">[relational_db]</div>
<div class="line">backend=sqlite</div>
</div><!-- fragment --><h1><a class="anchor" id="autotoc_md574"></a>
</div><!-- fragment --><h1><a class="anchor" id="autotoc_md575"></a>
Source Files</h1>
<p >The Relational Database Interface consists of the following directory structure (as of November 2021):</p>
<div class="fragment"><div class="line">src/xrpld/app/rdb/</div>
@@ -107,7 +107,7 @@ Source Files</h1>
<div class="line">├── State.h</div>
<div class="line">├── Vacuum.h</div>
<div class="line">└── Wallet.h</div>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md575"></a>
</div><!-- fragment --><h2><a class="anchor" id="autotoc_md576"></a>
File Contents</h2>
<table class="markdownTable">
<tr class="markdownTableHead">
@@ -129,10 +129,10 @@ File Contents</h2>
<tr class="markdownTableRowEven">
<td class="markdownTableBodyNone"><code>Wallet.[h\|cpp]</code> </td><td class="markdownTableBodyNone">Defines/Implements methods for interacting with Wallet SQLite databases </td></tr>
</table>
<h1><a class="anchor" id="autotoc_md576"></a>
<h1><a class="anchor" id="autotoc_md577"></a>
Classes</h1>
<p >The abstract class <code>RelationalDatabase</code> is the primary class of the Relational Database Interface and is defined in the eponymous header file. This class provides a static method <code>init()</code> which, when invoked, creates a concrete instance of a derived class whose type is specified by the system configuration. All other methods in the class are virtual. Presently there exist two classes that derive from <code>RelationalDatabase</code>, namely <code>SQLiteDatabase</code> and <code>PostgresDatabase</code>.</p>
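<p >The factory pattern described above, with <code>init()</code> choosing a concrete backend from configuration, can be sketched as follows. The class names mirror the text; the <code>name()</code> method and the exact config shape are illustrative assumptions:</p>

```python
from abc import ABC, abstractmethod


class RelationalDatabase(ABC):
    """Sketch of the abstract interface: init() constructs a concrete
    derived class based on configuration; all other methods are virtual."""

    @staticmethod
    def init(config):
        backend = config.get("relational_db", {}).get("backend", "sqlite")
        if backend == "sqlite":
            return SQLiteDatabase()
        if backend == "postgres":
            return PostgresDatabase()
        raise ValueError(f"unsupported backend: {backend}")

    @abstractmethod
    def name(self):
        ...


class SQLiteDatabase(RelationalDatabase):
    def name(self):
        return "sqlite"


class PostgresDatabase(RelationalDatabase):
    def name(self):
        return "postgres"


db = RelationalDatabase.init({"relational_db": {"backend": "sqlite"}})
assert db.name() == "sqlite"
```
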
<h1><a class="anchor" id="autotoc_md577"></a>
<h1><a class="anchor" id="autotoc_md578"></a>
Database Methods</h1>
<p >The Relational Database Interface provides three categories of methods for interacting with databases:</p>
<ul>

View File

@@ -76,9 +76,9 @@ $(function() {
<li>NodeStore</li>
<li>Benchmarks</li>
</ul>
<h1><a class="anchor" id="autotoc_md580"></a>
<h1><a class="anchor" id="autotoc_md581"></a>
NodeStore</h1>
<h2><a class="anchor" id="autotoc_md581"></a>
<h2><a class="anchor" id="autotoc_md582"></a>
Introduction</h2>
<p >A <code>NodeObject</code> is a simple object that the Ledger uses to store entries. It comprises a type, a hash, and a blob. It can be uniquely identified by the hash, which is a 256-bit hash of the blob. The blob is a variable length block of serialized data. The type identifies what the blob contains. The fields are as follows:</p>
<ul>
@@ -117,7 +117,7 @@ Introduction</h2>
</table>
<hr />
<p> The <code>NodeStore</code> provides an interface that stores, in a persistent database, a collection of NodeObjects that rippled uses as its primary representation of ledger entries. All ledger entries are stored as NodeObjects and as such, need to be persisted between launches. If a NodeObject is accessed and is not in memory, it will be retrieved from the database.</p>
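<p >A minimal model of the object just described: the hash is derived from the blob, so a NodeObject can always be verified against its key. SHA-256 stands in for the hash function the ledger actually uses:</p>

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class NodeObject:
    """A NodeObject: a type tag, a 256-bit hash, and a serialized blob.

    SHA-256 is a stand-in for the ledger's actual 256-bit hash.
    """
    type: str
    hash: bytes
    blob: bytes

    @classmethod
    def create(cls, obj_type, blob):
        # The hash uniquely identifies the object: it is computed
        # directly from the blob's contents.
        return cls(obj_type, hashlib.sha256(blob).digest(), blob)


obj = NodeObject.create("ledger_entry", b"serialized data")
assert len(obj.hash) == 32  # 256 bits
assert obj.hash == hashlib.sha256(obj.blob).digest()
```

<p >A NodeStore backend then only needs to persist <code>hash → (type, blob)</code> pairs and retrieve them by hash on a cache miss.</p>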
<h2><a class="anchor" id="autotoc_md582"></a>
<h2><a class="anchor" id="autotoc_md583"></a>
Backend</h2>
<p >The <code>NodeStore</code> implementation provides the <code>Backend</code> abstract interface, which allows different key/value databases to be chosen at run-time. This allows experimentation with different engines. Improvements in the performance of the NodeStore are a constant area of research. The database can be specified in the configuration file [node_db] section as follows.</p>
<p >One or more lines of key / value pairs</p>
@@ -148,15 +148,15 @@ Backend</h2>
<li><b>0</b> off</li>
<li><b>1</b> on (default)</li>
</ul>
<h1><a class="anchor" id="autotoc_md583"></a>
<h1><a class="anchor" id="autotoc_md584"></a>
Benchmarks</h1>
<p >The <code>NodeStore.Timing</code> test is used to execute a set of read/write workloads to compare current available nodestore backends. It can be executed with:</p>
<div class="fragment"><div class="line">$ rippled --unittest=NodeStoreTiming</div>
</div><!-- fragment --><p >It is also possible to use alternate DB config params by passing config strings as <code>--unittest-arg</code>.</p>
<h2><a class="anchor" id="autotoc_md584"></a>
<h2><a class="anchor" id="autotoc_md585"></a>
Addendum</h2>
<p >The discussion below refers to a <code>RocksDBQuick</code> backend that has since been removed from the code as it was not working and not maintained. That backend primarily used one of the several rocks <code>Optimize*</code> methods to setup the majority of the DB options/params, whereas the primary RocksDB backend exposes many of the available config options directly. The code for RocksDBQuick can be found in versions of this repo 1.2 and earlier if you need to refer back to it. The conclusions below date from about 2014 and may need revisiting based on newer versions of RocksDB (TBD).</p>
<h2><a class="anchor" id="autotoc_md585"></a>
<h2><a class="anchor" id="autotoc_md586"></a>
Discussion</h2>
<p >RocksDBQuickFactory is intended to provide a testbed for comparing potential rocksdb performance with the existing recommended configuration in rippled.cfg. Through various executions and profiling some conclusions are presented below.</p>
<ul>

View File

@@ -72,19 +72,19 @@ $(function() {
<div class="headertitle"><div class="title">Overlay </div></div>
</div><!--header-->
<div class="contents">
<div class="textblock"><h1><a class="anchor" id="autotoc_md587"></a>
<div class="textblock"><h1><a class="anchor" id="autotoc_md588"></a>
Introduction</h1>
<p >The <em>XRP Ledger network</em> consists of a collection of <em>peers</em> running **<code>rippled</code>** or other compatible software. Each peer maintains multiple outgoing connections and optional incoming connections to other peers. These connections are made over both the public Internet and private local area networks. This network defines a connected directed graph of nodes where vertices are instances of <code>rippled</code> and edges are persistent TCP/IP connections. Peers send and receive messages to other connected peers. This peer to peer network, layered on top of the public and private Internet, forms an <a href="http://en.wikipedia.org/wiki/Overlay_network"><em>overlay network</em></a>. The contents of the messages and the behavior of peers in response to the messages, plus the information exchanged during the handshaking phase of connection establishment, defines the <em>XRP Ledger peer protocol</em> (or <em>protocol</em> in this context).</p>
<h1><a class="anchor" id="autotoc_md588"></a>
<h1><a class="anchor" id="autotoc_md589"></a>
Overview</h1>
<p >Each connection is represented by a <em>Peer</em> object. The Overlay manager establishes, receives, and maintains connections to peers. Protocol messages are exchanged between peers and serialized using <a href="https://developers.google.com/protocol-buffers/"><em>Google Protocol Buffers</em></a>.</p>
<h2><a class="anchor" id="autotoc_md589"></a>
<h2><a class="anchor" id="autotoc_md590"></a>
Structure</h2>
<p >Each connection between peers is identified by its connection type, which affects the behavior of message routing. At present, only a single connection type is supported: <b>Peer</b>.</p>
<h1><a class="anchor" id="autotoc_md590"></a>
<h1><a class="anchor" id="autotoc_md591"></a>
Handshake</h1>
<p >To establish a protocol connection, a peer makes an outgoing TLS encrypted connection to a remote peer, then sends an HTTP request with no message body.</p>
<h2><a class="anchor" id="autotoc_md591"></a>
<h2><a class="anchor" id="autotoc_md592"></a>
HTTP</h2>
<p >The HTTP <a href="https://www.w3.org/Protocols/rfc2616/rfc2616-sec5.html">request</a> must:</p>
<ul>
@@ -94,7 +94,7 @@ HTTP</h2>
</ul>
<p >HTTP requests which do not conform to these requirements must generate an appropriate HTTP error and result in the connection being closed.</p>
<p >Upon receipt of a well-formed HTTP upgrade request, and validation of the protocol specific parameters, a peer will either send back an HTTP 101 response and switch to the requested protocol, or a message indicating that the request failed (e.g. by sending HTTP 400 "Bad Request" or HTTP 503 "Service Unavailable").</p>
<h4><a class="anchor" id="autotoc_md592"></a>
<h4><a class="anchor" id="autotoc_md593"></a>
Example HTTP Upgrade Request</h4>
<div class="fragment"><div class="line">GET / HTTP/1.1</div>
<div class="line">User-Agent: rippled-1.4.0-b1+DEBUG</div>
@@ -109,7 +109,7 @@ Example HTTP Upgrade Request</h4>
<div class="line">Remote-IP: 192.0.2.79</div>
<div class="line">Closed-Ledger: llRZSKqvNieGpPqbFGnm358pmF1aW96SDIUQcnMh6HI=</div>
<div class="line">Previous-Ledger: q4aKbP7sd5wv+EXArwCmQiWZhq9AwBl2p/hCtpGJNsc=</div>
</div><!-- fragment --><h4><a class="anchor" id="autotoc_md593"></a>
</div><!-- fragment --><h4><a class="anchor" id="autotoc_md594"></a>
Example HTTP Upgrade Response (Success)</h4>
<div class="fragment"><div class="line">HTTP/1.1 101 Switching Protocols</div>
<div class="line">Connection: Upgrade</div>
@@ -122,7 +122,7 @@ Example HTTP Upgrade Response (Success)</h4>
<div class="line">Network-Time: 619234797</div>
<div class="line">Closed-Ledger: h7HL85W9ywkex+G7p42USVeV5kE04CWK+4DVI19Of8I=</div>
<div class="line">Previous-Ledger: EPvIpAD2iavGFyyZYi8REexAXyKGXsi1jMF7OIBY6/Y=</div>
</div><!-- fragment --><h4><a class="anchor" id="autotoc_md594"></a>
</div><!-- fragment --><h4><a class="anchor" id="autotoc_md595"></a>
Example HTTP Upgrade Response (Failure: no slots available)</h4>
<div class="fragment"><div class="line">HTTP/1.1 503 Service Unavailable</div>
<div class="line">Server: rippled-0.27.0</div>
@@ -132,7 +132,7 @@ Example HTTP Upgrade Response (Failure: no slots available)</h4>
<div class="line">{&quot;peer-ips&quot;:[&quot;54.68.219.39:51235&quot;,&quot;54.187.191.179:51235&quot;,</div>
<div class="line">&quot;107.150.55.21:6561&quot;,&quot;54.186.230.77:51235&quot;,&quot;54.187.110.243:51235&quot;,</div>
<div class="line">&quot;85.127.34.221:51235&quot;,&quot;50.43.33.236:51235&quot;,&quot;54.187.138.75:51235&quot;]}</div>
</div><!-- fragment --><h3><a class="anchor" id="autotoc_md595"></a>
</div><!-- fragment --><h3><a class="anchor" id="autotoc_md596"></a>
Standard Fields</h3>
<table class="markdownTable">
<tr class="markdownTableHead">
@@ -169,7 +169,7 @@ Standard Fields</h3>
<p >For responses, it should consist of a <em>single element</em> matching one of the elements provided in the corresponding request. If the server does not understand any of the available protocol versions, the upgrade request should fail with an appropriate HTTP error code (e.g. by sending an HTTP 400 "Bad Request" response).</p>
<p >Protocol versions are strings of the form <code>XRPL/</code> followed by a dotted major and minor protocol version number, where the major number is greater than or equal to 2 and the minor is greater than or equal to 0.</p>
<p >See <a href="https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.42">RFC 2616 &sect;14.42</a></p>
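As a sketch only (this is not rippled's actual parser, and the function name is hypothetical), the rule above can be expressed as a small validator:

```cpp
#include <cstdio>
#include <optional>
#include <string>
#include <utility>

// Accept strings of the form "XRPL/<major>.<minor>" where the major version
// is >= 2 and the minor version is >= 0; reject anything else.
std::optional<std::pair<int, int>>
parseProtocolVersion(std::string const& s)
{
    int major = 0, minor = 0;
    char tail = 0;
    // The trailing %c catches garbage such as "XRPL/2.0.1".
    if (std::sscanf(s.c_str(), "XRPL/%d.%d%c", &major, &minor, &tail) != 2)
        return std::nullopt;
    if (major < 2 || minor < 0)
        return std::nullopt;
    return std::make_pair(major, minor);
}
```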
<h3><a class="anchor" id="autotoc_md596"></a>
<h3><a class="anchor" id="autotoc_md597"></a>
Custom Fields</h3>
<table class="markdownTable">
<tr class="markdownTableHead">
@@ -272,11 +272,11 @@ Custom Fields</h3>
</table>
<p >If present, identifies the hash of the parent ledger that the sending server considers to be closed.</p>
<p >The value is presently encoded using <b>Base64</b> encoding, but implementations should support both <b>Base64</b> and <b>HEX</b> encoding for this value.</p>
<h3><a class="anchor" id="autotoc_md597"></a>
<h3><a class="anchor" id="autotoc_md598"></a>
Additional Headers</h3>
<p >An implementation or operator may specify additional, optional fields and values in both requests and responses.</p>
<p >Implementations should not reject requests because of the presence of fields that they do not understand.</p>
<h2><a class="anchor" id="autotoc_md598"></a>
<h2><a class="anchor" id="autotoc_md599"></a>
Session Signature</h2>
<p >Even for SSL/TLS encrypted connections, it is possible for an attacker to mount relatively inexpensive MITM attacks that can be extremely hard to detect and may afford the attacker the ability to intelligently tamper with messages exchanged between the two endpoints.</p>
<p >This risk can be mitigated if at least one side has a certificate from a certificate authority trusted by the other endpoint, but having a certificate is not always possible (or even desirable) in a decentralized and permissionless network.</p>
@@ -287,11 +287,11 @@ Session Signature</h2>
<p >Each side of the link will verify that the provided signature is from the claimed public key against the session's unique fingerprint. If this signature check fails then the link <b>MUST</b> be dropped.</p>
<p >If an attacker, Eve, establishes two separate SSL sessions with Alice and Bob, the fingerprints of the two sessions will be different, and Eve will not be able to sign the fingerprint of her session with Bob with Alice's private key, or the fingerprint of her session with Alice with Bob's private key, and so both A and B will know that an active MITM attack is in progress and will close their connections.</p>
<p >If Eve simply proxies the raw bytes, she will be unable to decrypt the data being transferred between A and B and will not be able to intelligently tamper with the message stream between Alice and Bob, although she may still be able to inject delays or terminate the link.</p>
<h1><a class="anchor" id="autotoc_md599"></a>
<h1><a class="anchor" id="autotoc_md600"></a>
Ripple Clustering</h1>
<p >A cluster consists of more than one Ripple server under common administration that share load information, distribute cryptography operations, and provide greater response consistency.</p>
<p >Cluster nodes are identified by their public node keys. Cluster nodes exchange information about endpoints that are imposing load upon them. Cluster nodes share information about their internal load status. Cluster nodes do not have to verify the cryptographic signatures on messages received from other cluster nodes.</p>
<h2><a class="anchor" id="autotoc_md600"></a>
<h2><a class="anchor" id="autotoc_md601"></a>
Configuration</h2>
<p >A server's public key can be determined from the output of the <code>server_info</code> command. The key is in the <code>pubkey_node</code> value, and is a text string beginning with the letter <code>n</code>. The key is maintained across runs in a database.</p>
<p >Cluster members are configured in the <code>rippled.cfg</code> file under <code>[cluster_nodes]</code>. Each member should be configured on a line beginning with the node public key, followed optionally by a space and a friendly name.</p>
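For illustration only (the keys below are placeholders, not real node public keys), a cluster stanza in <code>rippled.cfg</code> might look like:

```
[cluster_nodes]
n9<node-public-key-1> hub1
n9<node-public-key-2> hub2
n9<node-public-key-3> spoke1
```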
@@ -304,20 +304,20 @@ Configuration</h2>
<li>Restart each hub, one by one</li>
<li>Restart the spoke</li>
</ul>
<h2><a class="anchor" id="autotoc_md601"></a>
<h2><a class="anchor" id="autotoc_md602"></a>
Transaction Behavior</h2>
<p >When a transaction is received from a cluster member, several normal checks are bypassed:</p>
<p >Signature checking is bypassed because we trust that a cluster member would not relay a transaction with an incorrect signature. Validators may wish to disable this feature, preferring the additional load to get the additional security of having validators check each transaction.</p>
<p >Local checks for transaction checking are also bypassed. For example, a server will not reject a transaction from a cluster peer because the fee does not meet its current relay fee. It is preferable to keep the cluster in agreement and permit confirmation from one cluster member to more reliably indicate the transaction's acceptance by the cluster.</p>
<h2><a class="anchor" id="autotoc_md602"></a>
<h2><a class="anchor" id="autotoc_md603"></a>
Server Load Information</h2>
<p >Cluster members exchange information on their server's load level. The load level is essentially the amount by which the normal fee levels are multiplied to get the server's fee for relaying transactions.</p>
<p >A server's effective load level, and the one it uses to determine its relay fee, is the highest of its local load level, the network load level, and the cluster load level. The cluster load level is the median load level reported by a cluster member.</p>
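The rule above can be sketched as follows (function names are illustrative, not rippled's actual API):

```cpp
#include <algorithm>
#include <vector>

// The cluster load level is the median of the levels reported by cluster
// members.
int
clusterLoadLevel(std::vector<int> reported)
{
    if (reported.empty())
        return 0;
    auto const mid = reported.size() / 2;
    std::nth_element(reported.begin(), reported.begin() + mid, reported.end());
    return reported[mid];
}

// The effective load level is the highest of the local level, the network
// level, and the cluster level.
int
effectiveLoadLevel(int local, int network, std::vector<int> const& reported)
{
    return std::max({local, network, clusterLoadLevel(reported)});
}
```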
<h2><a class="anchor" id="autotoc_md603"></a>
<h2><a class="anchor" id="autotoc_md604"></a>
Gossip</h2>
<p >Gossip is the mechanism by which cluster members share information about endpoints (typically IPv4 addresses) that are imposing unusually high load on them. The endpoint load manager takes into account gossip to reduce the amount of load the endpoint is permitted to impose on the local server before it is warned, disconnected, or banned.</p>
<p >Suppose, for example, that an attacker controls a large number of IP addresses, and with these, he can send sufficient requests to overload a server. Without gossip, he could use these same addresses to overload all the servers in a cluster. With gossip, if he chooses to use the same IP address to impose load on more than one server, he will find that the amount of load he can impose before getting disconnected is much lower.</p>
<h2><a class="anchor" id="autotoc_md604"></a>
<h2><a class="anchor" id="autotoc_md605"></a>
Monitoring</h2>
<p >The <code>peers</code> command will report on the status of the cluster. The <code>cluster</code> object will contain one entry for each member of the cluster (either configured or introduced by another cluster member). The <code>age</code> field is the number of seconds since the server was last heard from. If the server is reporting an elevated cluster fee, that will be reported as well.</p>
<p >In the <code>peers</code> object, cluster members will contain a <code>cluster</code> field set to <code>true</code>.</p>

View File

@@ -72,14 +72,14 @@ $(function() {
<div class="headertitle"><div class="title">PeerFinder </div></div>
</div><!--header-->
<div class="contents">
<div class="textblock"><h1><a class="anchor" id="autotoc_md607"></a>
<div class="textblock"><h1><a class="anchor" id="autotoc_md608"></a>
Introduction</h1>
<p >The <em>Ripple payment network</em> consists of a collection of <em>peers</em> running the <b>rippled software</b>. Each peer maintains multiple outgoing connections and optional incoming connections to other peers. These connections are made over both the public Internet and private local area networks. This network defines a fully connected directed graph of nodes. Peers send and receive messages to other connected peers. This peer to peer network, layered on top of the public and private Internet, forms an <a href="http://en.wikipedia.org/wiki/Overlay_network"><em>overlay network</em></a>.</p>
<h1><a class="anchor" id="autotoc_md608"></a>
<h1><a class="anchor" id="autotoc_md609"></a>
Bootstrapping</h1>
<p >When a peer comes online it needs a set of IP addresses to connect to in order to gain initial entry into the overlay in a process called <em>bootstrapping</em>. Once they have established an initial set of these outbound peer connections, they need to gain additional addresses to establish more outbound peer connections until the desired limit is reached. Furthermore, they need a mechanism to advertise their IP address to new or existing peers in the overlay so they may receive inbound connections up to some desired limit. And finally, they need a mechanism to provide inbound connection requests with an alternate set of IP addresses to try when they have already reached their desired maximum number of inbound connections.</p>
<p >PeerFinder is a self contained module that provides these services, along with some additional overlay network management services such as <em>fixed slots</em> and <em>cluster slots</em>.</p>
<h1><a class="anchor" id="autotoc_md609"></a>
<h1><a class="anchor" id="autotoc_md610"></a>
Features</h1>
<p >PeerFinder has these responsibilities</p>
<ul>
@@ -93,18 +93,18 @@ Features</h1>
<li>Prevent duplicate connections and connections to self.</li>
</ul>
<hr />
<h1><a class="anchor" id="autotoc_md611"></a>
<h1><a class="anchor" id="autotoc_md612"></a>
Concepts</h1>
<h2><a class="anchor" id="autotoc_md612"></a>
<h2><a class="anchor" id="autotoc_md613"></a>
Manager</h2>
<p >The <code>Manager</code> is an application singleton which provides the primary interface to interaction with the PeerFinder.</p>
<h3><a class="anchor" id="autotoc_md613"></a>
<h3><a class="anchor" id="autotoc_md614"></a>
Autoconnect</h3>
<p >The Autoconnect feature of PeerFinder automatically establishes outgoing connections using addresses learned from various sources including the configuration file, the result of domain name lookups, and messages received from the overlay itself.</p>
<h3><a class="anchor" id="autotoc_md614"></a>
<h3><a class="anchor" id="autotoc_md615"></a>
Callback</h3>
<p >PeerFinder is an isolated code module with few external dependencies. To perform socket specific activities such as establishing outgoing connections or sending messages to connected peers, the Manager is constructed with an abstract interface called the <code>Callback</code>. An instance of this interface performs the actual required operations, making PeerFinder independent of the calling code.</p>
<h3><a class="anchor" id="autotoc_md615"></a>
<h3><a class="anchor" id="autotoc_md616"></a>
Config</h3>
<p >The <code>Config</code> structure defines the operational parameters of the PeerFinder. Some values come from the configuration file while others are calculated via tuned heuristics. The fields are as follows:</p>
<ul>
@@ -126,14 +126,14 @@ Config</h3>
</ul>
<p >Here's an example of how the network might be structured with a fractional value for outPeers:</p>
<p >**(Need example here)**</p>
<h3><a class="anchor" id="autotoc_md616"></a>
<h3><a class="anchor" id="autotoc_md617"></a>
Livecache</h3>
<p >The Livecache holds relayed IP addresses that have been received recently in the form of Endpoint messages via the peer to peer overlay. A peer periodically broadcasts the Endpoint message to its neighbors when it has open inbound connection slots. Peers store these messages in the Livecache and periodically forward their neighbors a handful of random entries from their Livecache, with an incremented hop count for each forwarded entry.</p>
<p >The algorithm for sending a neighbor a set of Endpoint messages chooses evenly from all available hop counts on each send. This ensures that each peer will see some entries with the farthest hops at each iteration. The result is to expand a peer's horizon with respect to which overlay endpoints are visible. This is designed to force the overlay to become highly connected and reduce the network diameter with each connection establishment.</p>
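A sketch of the even selection described above (the round-robin policy and all names here are illustrative assumptions, not rippled's actual code):

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Pick up to `budget` entries, taking one entry per hop count in round-robin
// order, so every send includes some entries with the farthest hop counts.
std::vector<std::string>
chooseEvenly(
    std::map<int, std::vector<std::string>> const& byHops,
    std::size_t budget)
{
    std::vector<std::string> out;
    for (std::size_t round = 0; out.size() < budget; ++round)
    {
        bool any = false;
        for (auto const& [hops, entries] : byHops)
        {
            if (round < entries.size() && out.size() < budget)
            {
                out.push_back(entries[round]);
                any = true;
            }
        }
        if (!any)
            break;  // every hop-count bucket is exhausted
    }
    return out;
}
```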
<p >When a peer receives an Endpoint message that originates from a neighbor (identified by a hop count of zero) for the first time, it performs an incoming connection test on that neighbor by initiating an outgoing connection to the remote IP address as seen on the connection combined with the port advertised in the Endpoint message. If the test fails, the peer considers its neighbor firewalled (intentionally or due to misconfiguration) and does not forward the neighbor's endpoint in Endpoint messages. This prevents poor quality unconnectible addresses from landing in the caches. If the incoming connection test passes, the peer fills in the Endpoint message with the remote address as seen on the connection before storing it in its cache and forwarding it to other peers. This relieves the neighbor of the responsibility of knowing its own IP address before it can start receiving incoming connections.</p>
<p >Livecache entries expire quickly. Since a peer stops advertising itself when it no longer has available inbound slots, its address will shortly after stop being handed out by other peers. Livecache entries are very likely to result in both a successful connection establishment and the acquisition of an active outbound slot. Compare this with Bootcache addresses, which are very likely to be connectible but unlikely to have an open slot.</p>
<p >Because entries in the Livecache are ephemeral, they are not persisted across launches in the database. The Livecache is continually updated and expired as Endpoint messages are received from the overlay over time.</p>
<h3><a class="anchor" id="autotoc_md617"></a>
<h3><a class="anchor" id="autotoc_md618"></a>
Bootcache</h3>
<p >The <code>Bootcache</code> stores IP addresses useful for gaining initial connections. Each address is associated with the following metadata:</p>
<ul>
@@ -144,10 +144,10 @@ Bootcache</h3>
<p >When choosing addresses from the boot cache for the purpose of establishing outgoing connections, addresses are ranked in decreasing order of valence. The Bootcache is persistent. Entries are periodically inserted and updated in the corresponding SQLite database during program operation. When <b>rippled</b> is launched, the existing Bootcache database data is accessed and loaded to accelerate the bootstrap process.</p>
<p >Desirable entries in the Bootcache are addresses for servers which are known to have high uptimes, and for which connection attempts usually succeed. These servers do not necessarily have available inbound connection slots. It is assured, however, that these servers will have a well-populated Livecache, since their high uptime will have moved them towards the core of the overlay. When a connected server is full it will return a handful of new addresses from its Livecache and gracefully close the connection. Addresses from the Livecache are highly likely to have inbound connection slots and be connectible.</p>
<p >For security, all information that contributes to the ranking of Bootcache entries is observed locally. PeerFinder never trusts external sources of information.</p>
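As a sketch of the ranking described above (the structure and field names are illustrative, not rippled's actual types), addresses are ordered by decreasing valence before connection attempts:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// A Bootcache entry pairs an address with its locally observed valence
// (a count reflecting how often connection attempts to it succeed).
struct BootEntry
{
    std::string address;
    int valence;
};

// Rank addresses in decreasing order of valence; ties keep insertion order.
std::vector<std::string>
rankedAddresses(std::vector<BootEntry> entries)
{
    std::stable_sort(
        entries.begin(), entries.end(), [](auto const& a, auto const& b) {
            return a.valence > b.valence;
        });
    std::vector<std::string> out;
    for (auto const& e : entries)
        out.push_back(e.address);
    return out;
}
```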
<h3><a class="anchor" id="autotoc_md618"></a>
<h3><a class="anchor" id="autotoc_md619"></a>
Slot</h3>
<p >Each TCP/IP socket that can participate in the peer to peer overlay occupies a slot. Slots have properties and state associated with them:</p>
<h4><a class="anchor" id="autotoc_md619"></a>
<h4><a class="anchor" id="autotoc_md620"></a>
State (Slot)</h4>
<p >The slot state represents the current stage of the connection as it passes through the business logic for establishing peer connections.</p>
<ul>
@@ -167,7 +167,7 @@ State (Slot)</h4>
<p class="startli">The Closing state represents a connected socket in the process of being gracefully closed.</p>
</li>
</ul>
<h4><a class="anchor" id="autotoc_md620"></a>
<h4><a class="anchor" id="autotoc_md621"></a>
Properties (Slot)</h4>
<p >Slot properties may be combined and are not mutually exclusive.</p>
<ul>
@@ -184,19 +184,19 @@ Properties (Slot)</h4>
<p class="startli">A superpeer slot is a connection to a peer which can accept incoming connections, meets certain resource availability requirements (such as bandwidth, CPU, and storage capacity), and operates full duplex in the overlay. Connections which are not superpeers are by definition leaves. A leaf slot is a connection to a peer which does not route overlay messages to other peers, and operates in a partial half duplex fashion in the overlay.</p>
</li>
</ul>
<h4><a class="anchor" id="autotoc_md621"></a>
<h4><a class="anchor" id="autotoc_md622"></a>
Fixed Slots</h4>
<p >Fixed slots are identified by IP address and set up during the initialization of the Manager, usually from the configuration file. The Logic will always make outgoing connection attempts to each fixed slot which is not currently connected. If we receive an inbound connection from an endpoint whose address portion (without port) matches a fixed slot address, we consider the fixed slot to be connected.</p>
<h4><a class="anchor" id="autotoc_md622"></a>
<h4><a class="anchor" id="autotoc_md623"></a>
Cluster Slots</h4>
<p >Cluster slots are identified by the public key and set up during the initialization of the manager or discovered upon receipt of messages in the overlay from trusted connections.</p>
<hr />
<h1><a class="anchor" id="autotoc_md624"></a>
<h1><a class="anchor" id="autotoc_md625"></a>
Algorithms</h1>
<h2><a class="anchor" id="autotoc_md625"></a>
<h2><a class="anchor" id="autotoc_md626"></a>
Connection Strategy</h2>
<p >The <em>Connection Strategy</em> applies the configuration settings to establish desired outbound connections. It runs periodically and progresses through a series of stages, remaining in each stage until a condition is met.</p>
<h3><a class="anchor" id="autotoc_md626"></a>
<h3><a class="anchor" id="autotoc_md627"></a>
Stage 1: Fixed Slots</h3>
<p >This stage is invoked when the number of active fixed connections is below the number of fixed connections specified in the configuration, and one of the following is true:</p>
<ul>
@@ -205,7 +205,7 @@ Stage 1: Fixed Slots</h3>
</ul>
<p >Each fixed address is associated with a retry timer. On a fixed connection failure, the timer is reset so that the address is not tried for some amount of time, which increases according to a scheduled sequence up to some maximum which is currently set to approximately one hour between retries. A fixed address is considered eligible if we are not currently connected or attempting the address, and its retry timer has expired.</p>
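The text above does not specify the exact retry sequence, so the following is a hypothetical sketch assuming simple doubling from ten seconds, capped at roughly one hour:

```cpp
#include <algorithm>
#include <chrono>

// Hypothetical backoff schedule for fixed-slot retries: double the delay on
// each consecutive failure, up to a one-hour maximum.
std::chrono::seconds
fixedRetryDelay(unsigned failures)
{
    using std::chrono::seconds;
    seconds delay{10};
    for (unsigned i = 1; i < failures; ++i)
        delay = std::min(delay * 2, seconds{3600});
    return delay;
}
```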
<p >The PeerFinder makes its best effort to become fully connected to the fixed addresses specified in the configuration file before moving on to establish outgoing connections to foreign peers. This security feature helps rippled establish itself with a trusted set of peers first before accepting untrusted data from the network.</p>
<h3><a class="anchor" id="autotoc_md627"></a>
<h3><a class="anchor" id="autotoc_md628"></a>
Stage 2: Livecache</h3>
<p >The Livecache is invoked when Stage 1 is not active, autoconnect is enabled, and the number of active outbound connections is below the number desired. The stage remains active while:</p>
<ul>
@@ -213,7 +213,7 @@ Stage 2: Livecache</h3>
<li>Any outbound connection attempts are in progress</li>
</ul>
<p >PeerFinder makes its best effort to exhaust addresses in the Livecache before moving on to the Bootcache, because Livecache addresses are highly likely to be connectible (since they are known to have been online within the last minute), and highly likely to have an open slot for an incoming connection (because peers only advertise themselves in the Livecache when they have open slots).</p>
<h3><a class="anchor" id="autotoc_md628"></a>
<h3><a class="anchor" id="autotoc_md629"></a>
Stage 3: Bootcache</h3>
<p >The Bootcache is invoked when Stage 1 and Stage 2 are not active, autoconnect is enabled, and the number of active outbound connections is below the number desired. The stage remains active while:</p>
<ul>
@@ -221,7 +221,7 @@ Stage 3: Bootcache</h3>
</ul>
<p >Entries in the Bootcache are ranked, with highly connectible addresses preferred over others. Connection attempts to Bootcache addresses are very likely to succeed but unlikely to produce an active connection since the peers likely do not have open slots. Before the remote peer closes the connection it will send a handful of addresses from its Livecache to help the new peer coming online obtain connections.</p>
<hr />
<h1><a class="anchor" id="autotoc_md630"></a>
<h1><a class="anchor" id="autotoc_md631"></a>
References</h1>
<p >Much of the work in PeerFinder was inspired by earlier work in Gnutella:</p>
<p ><a href="http://rfc-gnutella.sourceforge.net/src/pong-caching.html">Revised Gnutella Ping Pong Scheme</a><br />

View File

@@ -72,13 +72,13 @@ $(function() {
<div class="headertitle"><div class="title">How to use RPC coroutines. </div></div>
</div><!--header-->
<div class="contents">
<div class="textblock"><h1><a class="anchor" id="autotoc_md633"></a>
<div class="textblock"><h1><a class="anchor" id="autotoc_md634"></a>
Introduction.</h1>
<p >By default, an RPC handler runs as an uninterrupted task on the JobQueue. This is fine for commands that are fast to compute but might not be acceptable for tasks that require multiple parts or are large, like a full ledger.</p>
<p >For this purpose, the rippled RPC handler allows <em>suspension with continuation</em></p><ul>
<li>a request to suspend execution of the RPC response and to continue it after some function or job has been executed. A default continuation is supplied which simply reschedules the job on the JobQueue, or the programmer can supply their own.</li>
</ul>
<h1><a class="anchor" id="autotoc_md634"></a>
<h1><a class="anchor" id="autotoc_md635"></a>
The classes.</h1>
<p >Suspension with continuation uses four <code><a class="elRef" href="http://en.cppreference.com/w/cpp/utility/functional/function.html">std::function</a></code>s in the <code><a class="el" href="namespaceripple_1_1RPC.html" title="API version numbers used in later API versions.">ripple::RPC</a></code> namespace: </p><pre class="fragment">using Callback = std::function &lt;void ()&gt;;
using Continuation = std::function &lt;void (Callback const&amp;)&gt;;
@@ -88,7 +88,7 @@ using Coroutine = std::function &lt;void (Suspend const&amp;)&gt;;
<p >A <code>Continuation</code> is a function that is given a <code>Callback</code> and promises to call it later. A <code>Continuation</code> guarantees to call the <code>Callback</code> exactly once at some point in the future, but not necessarily immediately or even on the current thread.</p>
<p >A <code>Suspend</code> is a function belonging to a <code>Coroutine</code>. A <code>Suspend</code> runs a <code>Continuation</code>, passing it a <code>Callback</code> that continues execution of the <code>Coroutine</code>.</p>
<p >And finally, a <code>Coroutine</code> is a <code><a class="elRef" href="http://en.cppreference.com/w/cpp/utility/functional/function.html">std::function</a></code> which is given a <code>Suspend</code>. This is what the RPC handler gives to the coroutine manager, expecting to get called back with a <code>Suspend</code> and to be able to start execution.</p>
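A minimal sketch, independent of rippled, showing the four aliases and how a <code>Suspend</code> hands a <code>Continuation</code> a <code>Callback</code> that resumes the coroutine (the trivial "run now" continuation stands in for the default JobQueue rescheduling):

```cpp
#include <functional>
#include <vector>

using Callback = std::function<void()>;
using Continuation = std::function<void(Callback const&)>;
using Suspend = std::function<void(Continuation const&)>;
using Coroutine = std::function<void(Suspend const&)>;

// Run a toy coroutine and record the order of execution.
std::vector<int>
runDemo()
{
    std::vector<int> trace;

    // The default continuation in rippled reschedules on the JobQueue;
    // this toy one just invokes the callback immediately.
    Continuation runNow = [](Callback const& cb) { cb(); };

    // The Suspend gives the continuation a callback marking the resume point.
    Suspend suspend = [&trace](Continuation const& cont) {
        cont([&trace] { trace.push_back(2); });
    };

    Coroutine coro = [&](Suspend const& s) {
        trace.push_back(1);  // work before suspending
        s(runNow);           // suspend: run the continuation
        trace.push_back(3);  // work after being resumed
    };

    coro(suspend);
    return trace;
}
```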
<h1><a class="anchor" id="autotoc_md635"></a>
<h1><a class="anchor" id="autotoc_md636"></a>
The flow of control.</h1>
<p >Given these functions, the flow of RPC control when using coroutines is straightforward.</p>
<ol type="1">

View File

@@ -96,7 +96,7 @@ $(function() {
<p >Copies are made with the <code>snapShot</code> function as opposed to the <code>SHAMap</code> copy constructor. See the section on <code>SHAMap</code> creation for more details about <code>snapShot</code>.</p>
<p >Sequence numbers are used to further customize the node ownership strategy. See the section on sequence numbers for details on sequence numbers.</p>
<p ><img src="https://user-images.githubusercontent.com/46455409/77350005-1ef12c80-6cf9-11ea-9c8d-56410f442859.png" alt="node diagram" class="inline"/></p>
<h1><a class="anchor" id="autotoc_md637"></a>
<h1><a class="anchor" id="autotoc_md638"></a>
Mutability</h1>
<p >There are two different ways of building and using a <code>SHAMap</code>:</p>
<ol type="1">
@@ -109,7 +109,7 @@ Mutability</h1>
<p >Most <code>SHAMap</code>s are immutable, in the sense that they don't modify or remove their contained nodes.</p>
<p >An example where a mutable <code>SHAMap</code> is required is when we want to apply transactions to the last closed ledger. To do so we'd make a mutable snapshot of the state trie and then start applying transactions to it. Because the snapshot is mutable, changes to nodes in the snapshot will not affect nodes in other <code>SHAMap</code>s.</p>
<p >An example using an immutable ledger would be when there's an open ledger and some piece of code wishes to query the state of the ledger. In this case we don't wish to change the state of the <code>SHAMap</code>, so we'd use an immutable snapshot.</p>
<h1><a class="anchor" id="autotoc_md638"></a>
<h1><a class="anchor" id="autotoc_md639"></a>
Sequence numbers</h1>
<p >Both <code>SHAMap</code>s and their nodes carry a sequence number. This is simply an unsigned number that indicates ownership, membership, or non-membership.</p>
<p ><code>SHAMap</code> sequence numbers normally start out as 1. However, when a snapshot of a <code>SHAMap</code> is made, the copy's sequence number is 1 greater than the original's.</p>
@@ -117,20 +117,20 @@ Sequence numbers</h1>
<p >When a <code>SHAMap</code> needs to have a private copy of a node, not shared by any other <code>SHAMap</code>, it first clones it and then sets the new copy to have a sequence number equal to the <code>SHAMap</code> sequence number. The <code>unshareNode</code> is a private utility which automates the task of first checking if the node is already sharable, and if so, cloning it and giving it the proper sequence number. An example case where a private copy is needed is when an inner node needs to have a child pointer altered. Any modification to a node will require a non-shared node.</p>
<p >When a <code>SHAMap</code> decides that it is safe to share a node of its own, it sets the node's sequence number to 0 (a <code>SHAMap</code> never has a sequence number of 0). This is done for every node in the trie when <code>SHAMap::walkSubTree</code> is executed.</p>
<p >Note that other objects in rippled also have sequence numbers (e.g. ledgers). The <code>SHAMap</code> and node sequence numbers should not be confused with these other sequence numbers (no relation).</p>
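The copy-on-write rule described above can be sketched as follows (the types and the isolation check are illustrative, not rippled's actual code; the clone-and-stamp step is analogous in spirit to <code>unshareNode</code>):

```cpp
#include <memory>

// A node whose sequence number is 0 is sharable; any other value marks
// private ownership by the map with that sequence number.
struct Node
{
    int value = 0;
    unsigned seq = 0;  // 0 means sharable
};

struct Map
{
    unsigned seq = 1;
    std::shared_ptr<Node> root;

    // Before modifying the root, clone it unless this map already owns it,
    // and stamp the clone with this map's sequence number.
    void
    unshareRoot()
    {
        if (root->seq != seq)
        {
            root = std::make_shared<Node>(*root);  // private clone
            root->seq = seq;
        }
    }
};

// A snapshot shares nodes with the original; modifying the snapshot after
// unsharing must not affect the original.
bool
snapshotIsIsolated()
{
    Map a{1, std::make_shared<Node>()};  // root starts sharable (seq 0)
    Map b{a.seq + 1, a.root};            // snapshot: shares the root
    b.unshareRoot();                     // clone before modifying
    b.root->value = 42;
    return a.root->value == 0 && b.root->value == 42;
}
```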
<h1><a class="anchor" id="autotoc_md639"></a>
<h1><a class="anchor" id="autotoc_md640"></a>
SHAMap Creation</h1>
<p >A <code>SHAMap</code> is usually not created from scratch. Once an initial <code>SHAMap</code> is constructed, later <code>SHAMap</code>s are usually created by calling <code>snapShot(bool isMutable)</code> on the original <code>SHAMap</code>. The returned <code>SHAMap</code> has the expected characteristics (mutable or immutable) based on the passed-in flag.</p>
<p >It is cheaper to make an immutable snapshot of a <code>SHAMap</code> than to make a mutable snapshot. If the <code>SHAMap</code> snapshot is mutable then sharable nodes must be copied before they are placed in the mutable map.</p>
<p >A new <code>SHAMap</code> is created with each new ledger round. Transactions not executed in the previous ledger populate the <code>SHAMap</code> for the new ledger.</p>
<h1><a class="anchor" id="autotoc_md640"></a>
<h1><a class="anchor" id="autotoc_md641"></a>
Storing SHAMap data in the database</h1>
<p >When consensus is reached, the ledger is closed. As part of this process, the <code>SHAMap</code> is stored to the database by calling <code>SHAMap::flushDirty</code>.</p>
<p >Both <code>unshare()</code> and <code>flushDirty</code> walk the <code>SHAMap</code> by calling <code>SHAMap::walkSubTree</code>. As <code>unshare()</code> walks the trie, nodes are not written to the database; as <code>flushDirty</code> walks the trie, nodes are written to the database. <code>walkSubTree</code> visits every node in the trie. This process must ensure that each node is owned only by this trie, and so it "unshares" each node as it walks (from the root down). This is done in the <code>preFlushNode</code> function by ensuring that the node has a sequence number equal to that of the <code>SHAMap</code>; if it doesn't, the node is cloned.</p>
<p >For each inner node encountered (starting with the root node), each of the 16 children is inspected. For each child, if it has a non-zero sequence number (i.e. it is not shared), the child is first copied. Then, if the child is an inner node, we recurse down to that node's children; otherwise we've found a leaf node, and that node is written to the database. A count of the visited leaf nodes is kept. The hash of the data in the leaf node is computed at this time, and the child is reassigned back into the parent inner node in case the copy-on-write (COW) operation created a new pointer to this leaf node.</p>
<p >After processing each node, the node is then marked as sharable again by setting its sequence number to 0.</p>
<p >After all of an inner node's children are processed, its hash is updated and the inner node is written to the database. Then this inner node is assigned back into its parent node, again in case the COW operation created a new pointer to it.</p>
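<p >The post-order flush described above can be sketched as a small recursion. The structure and names are illustrative, not rippled's actual types; "writing to the database" is modeled as incrementing a counter, and re-marking nodes sharable is the <code>seq = 0</code> step.</p>

```cpp
#include <cstdint>
#include <memory>
#include <vector>

struct FNode
{
    std::uint32_t seq = 1;                     // non-zero => private
    std::vector<std::shared_ptr<FNode>> kids;  // empty => leaf
};

// Flush a subtree rooted at an inner node, post-order: children first
// (leaves written as encountered, inner children recursed into), then
// the node itself. Returns how many nodes were "written".
inline int
flush(std::shared_ptr<FNode> const& node)
{
    int written = 0;
    for (auto& child : node->kids)
    {
        if (!child)
            continue;
        if (child->seq != 0)
        {
            if (!child->kids.empty())
                written += flush(child);  // inner node: recurse first
            else
                ++written;                // leaf: write it out
            child->seq = 0;               // mark sharable again
        }
    }
    ++written;                            // finally write this inner node
    node->seq = 0;
    return written;
}
```

<p >Nodes whose sequence is already 0 are skipped entirely, which is what makes repeated flushes of a mostly-unchanged trie cheap.</p>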
<h1><a class="anchor" id="autotoc_md641"></a>
<h1><a class="anchor" id="autotoc_md642"></a>
Walking a SHAMap</h1>
<p >The private function <code>SHAMap::walkTowardsKey</code> is a good example of <em>how</em> to walk a <code>SHAMap</code>, and the various functions that call <code>walkTowardsKey</code> are good examples of <em>why</em> one would want to walk a <code>SHAMap</code> (e.g. <code>SHAMap::findKey</code>). <code>walkTowardsKey</code> always starts at the root of the <code>SHAMap</code> and traverses down through the inner nodes, looking for a leaf node along a path in the trie designated by a <code>uint256</code>.</p>
<p >As one walks the trie, one can <em>optionally</em> keep a stack of nodes that one has passed through. This isn't necessary for walking the trie, but many clients will use the stack after finding the desired node. For example if one is deleting a node from the trie, the stack is handy for repairing invariants in the trie after the deletion.</p>
<li>In the database.</li>
</ol>
<p >If the node is not found in the trie, then it is installed into the trie as part of the traversal process.</p>
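<p >The root-to-leaf descent along a key can be sketched by showing how each step picks one of 16 branches. The function name is hypothetical, and the key is shortened to 64 bits (16 nibbles) to keep the sketch self-contained; the real trie uses a 256-bit <code>uint256</code>.</p>

```cpp
#include <cstdint>

// At depth d, the d-th nibble (4 bits) of the key, highest-order
// first, selects one of the 16 children of the current inner node.
inline int
selectBranch(std::uint64_t key, int depth)
{
    return static_cast<int>((key >> (60 - 4 * depth)) & 0xF);
}
```

<p >Walking toward a key is then just repeated application of this selection, descending from the root until a leaf (or an empty branch) is reached.</p>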
<h1><a class="anchor" id="autotoc_md642"></a>
<h1><a class="anchor" id="autotoc_md643"></a>
Late-arriving Nodes</h1>
<p >As we noted earlier, <code>SHAMap</code>s (even immutable ones) may grow. If a <code>SHAMap</code> is searching for a node and runs into an empty spot in the trie, the <code>SHAMap</code> checks whether the node exists but has not yet been made part of the map. This operation is performed in the <code>SHAMap::fetchNodeNT()</code> method. The <em>NT</em> in this case stands for 'No Throw'.</p>
<p >The <code>fetchNodeNT()</code> method goes through three phases:</p>
<li>If the node is not in the TreeNodeCache, we attempt to locate the node in the historic data stored in the database. The call to <code>fetchNodeFromDB(hash)</code> does that work for us.</li>
<li>Finally if a filter exists, we check if it can supply the node. This is typically the LedgerMaster which tracks the current ledger and ledgers in the process of closing.</li>
</ol>
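<p >The three phases can be sketched as a layered lookup. The types here are placeholders (the cache maps a hash to a node pointer; the database and filter lookups are modeled as callables), not rippled's actual API.</p>

```cpp
#include <cstdint>
#include <functional>
#include <memory>
#include <unordered_map>

using Hash = std::uint64_t;
using NodePtr = std::shared_ptr<int>;  // stand-in for a tree node

struct Fetcher
{
    std::unordered_map<Hash, NodePtr> treeNodeCache;
    std::function<NodePtr(Hash)> fetchFromDB;  // phase 2
    std::function<NodePtr(Hash)> filter;       // phase 3 (may be empty)

    // "NT" = no-throw: a miss yields nullptr rather than an exception.
    NodePtr fetchNodeNT(Hash h)
    {
        if (auto it = treeNodeCache.find(h); it != treeNodeCache.end())
            return it->second;                 // phase 1: cache hit
        if (fetchFromDB)
            if (auto n = fetchFromDB(h))
                return treeNodeCache[h] = n;   // remember for next time
        if (filter)
            if (auto n = filter(h))
                return treeNodeCache[h] = n;
        return nullptr;                        // genuinely missing
    }
};
```

<p >Note how a hit in a later phase populates the cache, so the next fetch of the same hash stops at phase 1.</p>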
<h1><a class="anchor" id="autotoc_md643"></a>
<h1><a class="anchor" id="autotoc_md644"></a>
Canonicalize</h1>
<p ><code>canonicalize()</code> is called every time a node is introduced into the <code>SHAMap</code>.</p>
<p >A call to <code>canonicalize()</code> stores the node in the <code>TreeNodeCache</code> if it does not already exist in the <code>TreeNodeCache</code>.</p>
<p >The calls to <code>canonicalize()</code> make sure that if the resulting node is already in the <code>SHAMap</code>, the <code>TreeNodeCache</code>, or the database, we don't create duplicates; the copy already in the <code>TreeNodeCache</code> is favored.</p>
<p >By using <code>canonicalize()</code> we manage a thread race condition where two different threads might both recognize the lack of a SHAMapLeafNode at the same time (during a fetch). If they both attempt to insert the node into the <code>SHAMap</code>, then <code>canonicalize</code> makes sure that the first node in wins and the slower thread receives back a pointer to the node inserted by the faster thread. Recall that these two <code>SHAMap</code>s will share the same <code>TreeNodeCache</code>.</p>
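<p >The "first insert wins" behavior can be sketched as below. The class and its members are illustrative (rippled's cache is more elaborate); the essential point is that the caller's pointer is redirected to whatever copy is already canonical.</p>

```cpp
#include <cstdint>
#include <memory>
#include <mutex>
#include <unordered_map>

struct Canonicalizer
{
    std::mutex mtx;
    std::unordered_map<std::uint64_t, std::shared_ptr<int>> cache;

    // On return, `node` always points at the cached (canonical) copy.
    void canonicalize(std::uint64_t hash, std::shared_ptr<int>& node)
    {
        std::lock_guard<std::mutex> lock(mtx);
        auto [it, inserted] = cache.try_emplace(hash, node);
        if (!inserted)
            node = it->second;  // a faster thread won; adopt its node
    }
};
```

<p >Two threads that each build a node for the same hash both end up holding the same pointer afterwards, so the duplicate built by the slower thread is simply discarded.</p>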
<h1><a class="anchor" id="autotoc_md644"></a>
<h1><a class="anchor" id="autotoc_md645"></a>
&lt;tt&gt;TreeNodeCache&lt;/tt&gt;</h1>
<p >The <code>TreeNodeCache</code> is a <code><a class="elRef" href="http://en.cppreference.com/w/cpp/container/unordered_map.html">std::unordered_map</a></code> keyed on the hash of the <code>SHAMap</code> node. The stored type consists of <code>shared_ptr&lt;SHAMapTreeNode&gt;</code>, <code>weak_ptr&lt;SHAMapTreeNode&gt;</code>, and a time point indicating the most recent access of this node in the cache. The time point is based on <code><a class="elRef" href="http://en.cppreference.com/w/cpp/chrono/steady_clock.html">std::chrono::steady_clock</a></code>.</p>
<p >The container uses a cryptographically secure hash that is randomly seeded.</p>
<p >The <code>TreeNodeCache</code> also carries with it various data used for statistics and logging, and a target age for the contained nodes. When the target age for a node is exceeded, and there are no more references to the node, the node is removed from the <code>TreeNodeCache</code>.</p>
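<p >The shape of a cache entry as just described can be sketched like this; the field names are guesses, and the node type is reduced to <code>int</code>. The <code>weak_ptr</code> lets the cache notice when no <code>SHAMap</code> still references a node, and the time point drives age-based eviction.</p>

```cpp
#include <chrono>
#include <memory>

struct CachedNode
{
    std::shared_ptr<int> strong;    // keeps the node alive in the cache
    std::weak_ptr<int> weak;        // observes outside references
    std::chrono::steady_clock::time_point lastAccess;  // for aging out
};
```

<p >Once the strong pointer is dropped (target age exceeded) and no other references remain, the weak pointer expires and the entry can be removed.</p>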
<h1><a class="anchor" id="autotoc_md645"></a>
<h1><a class="anchor" id="autotoc_md646"></a>
&lt;tt&gt;FullBelowCache&lt;/tt&gt;</h1>
<p >This cache remembers which trie keys have all of their children resident in a <code>SHAMap</code>. This optimizes the process of acquiring a complete trie. This is used when creating the missing nodes list. Missing nodes are those nodes that a <code>SHAMap</code> refers to but that are not stored in the local database.</p>
<p >As a depth-first walk of a <code>SHAMap</code> is performed, if an inner node answers true to <code>isFullBelow()</code> then it is known that none of this node's children are missing nodes, and thus that subtree does not need to be walked. These nodes are stored in the FullBelowCache. Subsequent walks check the FullBelowCache first when encountering a node, and ignore that subtree if found.</p>
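<p >The pruning effect of a full-below set on a depth-first missing-node walk can be sketched as follows. The node structure and identifiers are illustrative, not rippled's actual API; "missing" means the node is referred to but not in the local database.</p>

```cpp
#include <memory>
#include <unordered_set>
#include <vector>

struct WNode
{
    int id = 0;
    bool missing = false;                      // not locally stored
    std::vector<std::shared_ptr<WNode>> kids;  // empty => leaf
};

// Depth-first walk collecting missing nodes. A node whose entire
// subtree is resident is recorded in `fullBelow`, so later walks
// skip it without descending.
inline void
findMissing(
    std::shared_ptr<WNode> const& node,
    std::unordered_set<int>& fullBelow,
    std::vector<int>& missing)
{
    if (!node || fullBelow.count(node->id))
        return;                                // known full: prune
    if (node->missing)
    {
        missing.push_back(node->id);
        return;
    }
    bool allResident = true;
    for (auto const& child : node->kids)
    {
        findMissing(child, fullBelow, missing);
        if (child && (child->missing || !fullBelow.count(child->id)))
            allResident = false;
    }
    if (allResident)
        fullBelow.insert(node->id);            // skip next time
}
```

<p >Only subtrees that still contain missing nodes are re-walked on subsequent passes, which is the optimization the cache provides during acquisition of a complete trie.</p>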
<h1><a class="anchor" id="autotoc_md646"></a>
<h1><a class="anchor" id="autotoc_md647"></a>
&lt;tt&gt;SHAMapTreeNode&lt;/tt&gt;</h1>
<p >This is an abstract base class for the concrete node types. It holds the following common data:</p>
<ol type="1">
<li>A hash</li>
<li>An identifier used to perform copy-on-write operations</li>
</ol>
<h2><a class="anchor" id="autotoc_md647"></a>
<h2><a class="anchor" id="autotoc_md648"></a>
&lt;tt&gt;SHAMapInnerNode&lt;/tt&gt;</h2>
<p ><code>SHAMapInnerNode</code> publicly inherits directly from <code>SHAMapTreeNode</code>. It holds the following data:</p>
<ol type="1">
<li>A bitset to indicate which of the 16 children exist.</li>
<li>An identifier used to determine whether the map below this node is fully populated</li>
</ol>
<h2><a class="anchor" id="autotoc_md648"></a>
<h2><a class="anchor" id="autotoc_md649"></a>
&lt;tt&gt;SHAMapLeafNode&lt;/tt&gt;</h2>
<p ><code>SHAMapLeafNode</code> is an abstract class which publicly inherits directly from <code>SHAMapTreeNode</code>. It holds the following data:</p>
<ol type="1">
<li>A shared_ptr to a const SHAMapItem.</li>
</ol>
<h3><a class="anchor" id="autotoc_md649"></a>
<h3><a class="anchor" id="autotoc_md650"></a>
&lt;tt&gt;SHAMapAccountStateLeafNode&lt;/tt&gt;</h3>
<p ><code>SHAMapAccountStateLeafNode</code> is a class which publicly inherits directly from <code>SHAMapLeafNode</code>. It is used to represent entries (i.e. account objects, escrow objects, trust lines, etc.) in a state map.</p>
<h3><a class="anchor" id="autotoc_md650"></a>
<h3><a class="anchor" id="autotoc_md651"></a>
&lt;tt&gt;SHAMapTxLeafNode&lt;/tt&gt;</h3>
<p ><code>SHAMapTxLeafNode</code> is a class which publicly inherits directly from <code>SHAMapLeafNode</code>. It is used to represent transactions in a transaction map.</p>
<h3><a class="anchor" id="autotoc_md651"></a>
<h3><a class="anchor" id="autotoc_md652"></a>
&lt;tt&gt;SHAMapTxPlusMetaLeafNode&lt;/tt&gt;</h3>
<p ><code>SHAMapTxPlusMetaLeafNode</code> is a class which publicly inherits directly from <code>SHAMapLeafNode</code>. It is used to represent transactions, along with each transaction's associated metadata, in a transaction map.</p>
<h1><a class="anchor" id="autotoc_md652"></a>
<h1><a class="anchor" id="autotoc_md653"></a>
SHAMapItem</h1>
<p >This holds the following data:</p>
<ol type="1">