Compare commits


262 Commits

Author SHA1 Message Date
Nik Bougalis
a3470c225b Set version to 1.2.1 2019-02-25 13:01:32 -08:00
seelabs
c5d215d901 Add delivered amount to the ledger RPC command 2019-02-25 13:01:12 -08:00
JoelKatz
9dbf8495ee Avoid a race condition during peer status change 2019-02-25 12:59:35 -08:00
Nik Bougalis
2529edd2b6 Properly transition state to disconnected:
If the number of peers a server has is below the configured
minimum peer limit, this commit will properly transition the
server's state to "disconnected".

The default limit for the minimum number of peers required was
0, meaning that a server that was connected but lost all its
peers would never transition to disconnected, since it could
never drop below zero peers.

This commit redefines the default minimum number of peers to 1
and produces a warning if the server is configured in a way
that will prevent it from ever achieving sufficient connectivity.
2019-02-25 12:59:35 -08:00
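
An illustrative sketch of the check this commit describes, using hypothetical names (`peerCount`, `configuredMinPeers`, `OperatingMode`) rather than rippled's actual identifiers:

    #include <algorithm>
    #include <cstddef>

    // With a floor of 1, a server that loses all of its peers
    // (peerCount == 0) drops below the minimum and transitions to
    // "disconnected"; with a floor of 0 that could never happen.
    enum class OperatingMode { disconnected, connected };

    OperatingMode
    nextMode(std::size_t peerCount, std::size_t configuredMinPeers)
    {
        auto const minPeers = std::max<std::size_t>(configuredMinPeers, 1);
        return peerCount < minPeers ? OperatingMode::disconnected
                                    : OperatingMode::connected;
    }
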
Nik Bougalis
e974c7d8a4 Avoid directly using memcpy to deserialize data 2019-02-25 12:59:34 -08:00
Nik Bougalis
b335adb674 Make validators opt out of crawl:
If a server is configured to support crawl, it will report the
IP addresses of all peers it is connected to, unless those peers
have explicitly opted out by setting the `peer_private` option
in their config file.

This commit makes servers that are configured as validators
opt out of crawling.
2019-02-25 12:59:34 -08:00
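
For reference, this is the existing opt-out the message refers to: a peer sets `peer_private` in its rippled.cfg to keep its address out of crawl reports, and with this commit validators behave as if it were set:

    [peer_private]
    1
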
Nik Bougalis
c6ab880c03 Display validator status only to admin requests:
Several commands allow a user to retrieve a server's status. For
connections that are not verified, these commands typically limit the
disclosure of information that could reveal that a particular server is
a validator, making it more difficult to identify validators via
fingerprinting.

Prior to this commit, servers configured to operate as validators
would, instead of simply reporting their server state as 'full',
augment their state information to indicate whether they are
'proposing' or 'validating'.

Servers will only provide this enhanced state information for
connections that have elevated privileges.

Acknowledgements:
Ripple thanks Markus Teufelberger for responsibly disclosing this issue.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to responsibly
disclose any issues that they may find. For more on Ripple's Bug Bounty
program, please visit: https://ripple.com/bug-bounty
2019-02-25 12:59:31 -08:00
Mike Ellery
7779dcdda0 Set version to 1.2.0 2019-02-12 16:41:03 -08:00
Mike Ellery
132f1b218c Set version to 1.2.0-rc2 2019-01-30 15:37:56 -08:00
Mike Ellery
e5d6f16f19 Remove [ips] section from sample config 2019-01-30 15:33:39 -08:00
Mike Ellery
8f973621fc Set version to 1.2.0-rc1 2019-01-28 12:02:33 -08:00
Mike Ellery
b75c2d71a5 Make sample config comment consistent with code 2019-01-28 11:53:30 -08:00
Nik Bougalis
eed210bb67 Set version to 1.2.0-b11 2019-01-18 12:13:22 -08:00
Mike Ellery
eab2a0d668 Improve debug information generated for the LedgerTrie 2019-01-18 12:13:21 -08:00
Howard Hinnant
148bbf4e8f Add safe_cast (RIPD-1702):
This change ensures that no overflow can occur when casting
between enums and integral types.
2019-01-18 12:13:21 -08:00
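
A rough sketch of the idea behind such a cast, assuming a simplified signature (rippled's actual `safe_cast` may differ):

    #include <type_traits>

    // Permit a cast between enums and integral types only when the
    // destination is at least as wide as the source, so the value cannot
    // overflow. (Simplified: a complete implementation also has to compare
    // the signedness of the underlying types.)
    template <class Dest, class Src>
    constexpr Dest
    safe_cast(Src s) noexcept
    {
        static_assert(
            std::is_integral<Dest>::value || std::is_enum<Dest>::value,
            "Dest must be an integral or enum type");
        static_assert(
            std::is_integral<Src>::value || std::is_enum<Src>::value,
            "Src must be an integral or enum type");
        static_assert(sizeof(Dest) >= sizeof(Src), "safe_cast must not narrow");
        return static_cast<Dest>(s);
    }
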
Joseph Busch
494724578a Enhance /crawl API endpoint with local server information (RIPD-1644):
The /crawl API endpoint allows developers to examine the structure of
the XRP Ledger's overlay network.

This commit adds additional information about the local server to the
/crawl endpoint, making it possible for developers to create data-rich
network-wide status dashboards.

Related:
 - https://developers.ripple.com/peer-protocol.html
 - https://github.com/ripple/rippled-network-crawler
2019-01-18 12:13:21 -08:00
Nik Bougalis
ea76103d5f Detect malformed data earlier during deserialization (RIPD-1695):
When deserializing specially crafted data, the code would ignore certain
types of errors. Reserializing objects created from such data results in
failures or generates a different serialization, which is not ideal.

Also addresses: RIPD-1677, RIPD-1682, RIPD-1686 and RIPD-1689.

Acknowledgements:
Ripple thanks Guido Vranken for responsibly disclosing these issues.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to responsibly
disclose any issues that they may find. For more on Ripple's Bug Bounty
program, please visit: https://ripple.com/bug-bounty
2019-01-18 12:13:21 -08:00
Nik Bougalis
2151110976 Improve message buffering (RIPD-1699):
Specially crafted messages could cause the server to buffer large
amounts of data, which could increase memory pressure.

This commit changes how messages are buffered and imposes a limit
on the amount of data that the server is willing to buffer.

Acknowledgements:
Ripple thanks Aaron Hook for responsibly disclosing this issue.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find. For information
on Ripple's Bug Bounty program, please visit:

    https://ripple.com/bug-bounty
2019-01-17 18:39:04 -08:00
Nik Bougalis
dfb45baa93 Set version to 1.2.0-b10 2018-12-28 13:32:27 -08:00
f443439f1f Add zaphod.alloy.ee to default hub configuration 2018-12-28 13:31:19 -08:00
Howard Hinnant
6d0b108ec1 Upgrade sqlite to 3.26 (fix #2810) 2018-12-28 13:31:19 -08:00
Howard Hinnant
710f9ee1ac Relax overly-strict assert in Serializer constructor (RIPD-1701):
The constructor would previously assert that the specified buffer pointer
was non-null, even if the buffer size was specified as 0. While reasonable,
this also made the API more difficult to use.
2018-12-28 13:31:19 -08:00
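
The relaxed precondition amounts to something like the following (illustrative, not the actual constructor):

    #include <cassert>
    #include <cstddef>

    // A (nullptr, 0) pair is now accepted as a valid "empty buffer";
    // the assert only fires when a non-zero size is paired with a
    // null pointer.
    void
    checkBuffer(void const* data, std::size_t size)
    {
        assert(data != nullptr || size == 0);
        (void)data;
        (void)size;
    }
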
Howard Hinnant
76d5ecb595 Verify invariants when calling SHAMapInnerNodeV2::addRaw (RIPD-1700) 2018-12-28 13:32:09 -08:00
Joseph Busch
ba9ca1378e Strict input validation against expected schema (RIPD-1709, RIPD-1710) 2018-12-28 13:31:19 -08:00
Miguel Portilla
1be8094ee2 Improve crawl shard resource usage 2018-12-28 13:31:19 -08:00
Nik Bougalis
96c949a997 Set version to 1.2.0-b9 2018-12-11 13:01:05 -08:00
Mike Ellery
9121e26708 Update libarchive to 3.3.3 from official repo 2018-12-11 12:52:29 -08:00
Edward Hennis
2432f13903 Reserve correct vector size for fee calculations:
* Using txnsExpected_, which is influenced by both the config
  and network behavior, can reserve far too much or far too
  little memory, wasting time and resources.
* Not an issue during normal operation, but a user could
  cause problems on their local node with extreme configuration
  settings.
2018-12-11 12:51:46 -08:00
Edward Hennis
259fb1c32e Fix unit test with incorrectly hard-coded parameter:
* initFee used a lot of logic that could be unclear. Add
  some documentation explaining why certain values were used.
* Because initFee had side effects, callers needed to repeat the
  max queue size computation, making the initial problem more
  likely. Instead, return the max queue size value, so the caller
  can reuse it.
* A newer test (testInFlightBalance()) was incorrectly using a
  hard-coded queue limit. Fix it to use initFee's new return
  value.
2018-12-11 12:51:46 -08:00
Rome Reginelli
e0515b0015 Correct amount serialization comments 2018-12-11 12:51:46 -08:00
John Freeman
412a3ec710 Fix the --rpc_port command-line argument
The --rpc_port command-line option is effectively ignored. We construct
an `Endpoint` with the given port, but then drop it on the floor.
(Perhaps the author thought the `Endpoint::at_port` method is a mutation
instead of a transformation.) This small change adds the missing
assignment to hold on to the new endpoint.

Fixes #2764
2018-12-11 12:50:05 -08:00
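
The bug pattern described above, using a hypothetical `Endpoint` type as a stand-in for the real one: `at_port` returns a new endpoint rather than modifying the receiver, so the result has to be assigned back.

    #include <cstdint>

    // Hypothetical stand-in for the real Endpoint type.
    struct Endpoint
    {
        std::uint16_t port = 0;

        // Transformation: returns a copy with the new port; *this is unchanged.
        Endpoint
        at_port(std::uint16_t p) const
        {
            Endpoint e{*this};
            e.port = p;
            return e;
        }
    };

    void
    applyRpcPort(Endpoint& endpoint, std::uint16_t rpcPort)
    {
        endpoint.at_port(rpcPort);            // bug: result dropped on the floor
        endpoint = endpoint.at_port(rpcPort); // fix: keep the transformed endpoint
    }
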
Nik Bougalis
30bba29da2 Merge master (1.1.2) into develop (1.2.0-b8) 2018-12-11 12:48:32 -08:00
Nik Bougalis
4f3a76dec0 Set version to 1.1.2 2018-11-29 21:49:10 -08:00
Nik Bougalis
61f443e3bb Properly bypass connection limits for cluster peers (fix #2795) 2018-11-29 21:38:35 -08:00
Brad Chase
bd2a38f584 Improve preferred ledger calculation:
This changeset ensures the preferred ledger calculation
properly distinguishes the absence of trusted validations
from a preferred ledger which is the genesis ledger.
2018-11-29 21:38:12 -08:00
Nik Bougalis
4cff94f7a4 Set version to 1.2.0-b8 2018-11-25 17:39:49 -08:00
Mark Travis
fbdbffed67 Report duration in current state. 2018-11-25 17:37:31 -08:00
Scott Schurr
ad5c5f1969 STObject::applyTemplate() throws with description of error:
The `STObject` member function `setType()` has been renamed to
applyTemplate() and modified to throw if there is a template
mismatch.

The error description in the exception is, in certain cases, used
to better indicate why a particular transaction was considered
ill-formed.

Fixes #2585.
2018-11-25 17:37:31 -08:00
John Freeman
c354809e1c Implement missing string conversions for JSON
`Json::Value::isConvertibleTo` indicates that unsigned integers and
reals are convertible to string, but trying to do so (with
`Json::Value::asString`) throws an exception because its internal switch
is missing these cases. This change fills them in (and adds tests).

Acknowledgements:
Ripple thanks Guido Vranken for responsibly disclosing this issue.

Closes #2778
2018-11-25 17:37:14 -08:00
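
After the change, a conversion like the one below is expected to succeed rather than throw (a sketch; the include for rippled's bundled `Json::Value` is omitted since its exact path is not shown here):

    #include <string>

    // Before the fix, asString() on an unsigned integer or a real value
    // threw even though isConvertibleTo(Json::stringValue) reported the
    // conversion as allowed.
    std::string
    asStringIfPossible(Json::Value const& v)
    {
        if (v.isConvertibleTo(Json::stringValue))
            return v.asString(); // now handles uint and real values
        return {};
    }
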
John Freeman
dc4d76f626 Prefer regex to manual parsing in parseURL:
Although `parseURL` used a regex to pull the authority out of the URL
being parsed, it performed manual parsing of the hostname and port.

This commit rolls the parsing of the username and password, if any,
directly into the regex. The hostname can be a name, an IPv4 or an
IPv6 address.

Fixes #2751
2018-11-21 17:08:21 -08:00
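
A simplified, self-contained illustration of folding the username/password, host, and port into one regular expression, in the spirit of the commit (not the actual pattern rippled uses, which also accepts bracketed IPv6 literals):

    #include <iostream>
    #include <regex>
    #include <string>

    int
    main()
    {
        // scheme://[user[:password]@]host[:port][/path]
        std::regex const url{
            R"(^([^:]+)://(?:([^:@/]+)(?::([^@/]*))?@)?([^:/@]+)(?::(\d+))?(/.*)?$)"};

        std::smatch m;
        std::string const s = "https://alice:secret@example.com:51234/rpc";
        if (std::regex_match(s, m, url))
            std::cout << "scheme=" << m[1] << " user=" << m[2]
                      << " host=" << m[4] << " port=" << m[5] << '\n';
    }
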
Edward Hennis
c1a02440dc Load validator list from file:
* Adds local file:// URL support to the [validator_list_sites] stanza.
  The file:// URL must not contain a hostname. Allows a rippled node
  operator to "sideload" a new list if their node is unable to reach
  a validator list's web site before an old list expires. Lists
  loaded from a file will be validated in the same way a downloaded
  list is validated.
* Generalize file/dir "guards" from Config test so they can be reused
  in other tests.
* Check for error when reading validators.txt. Saves some parsing and
  checking of an empty string, and will give a more meaningful error.
* Completes RIPD-1674.
2018-11-20 19:49:39 -08:00
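
For example, sideloading a list from a local file looks like the stanza below (the path is purely illustrative); note that the URL has an empty hostname, i.e. three slashes:

    [validator_list_sites]
    file:///etc/opt/ripple/validator-list.json
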
Edward Hennis
e7a69cce65 Account for minimum reserve in potential spend:
* Relevant when deciding whether an account can queue multiple
  transactions. If the potential spend of the already queued
  transactions would dip into the reserve, the reserve is
  preserved for fees.
* Also change several direct modifications of the owner count to
  call adjustOwnerCount to preserve overflow checking.
* Update related unit testcase
* Resolves #2251
2018-11-20 19:49:39 -08:00
Howard Hinnant
60dc949314 Remove custom terminate handler
* Reduce the amount of code we have to maintain.
* Remove the potential for degrading stack dumps.
2018-11-20 19:45:02 -08:00
Nik Bougalis
cc824685e7 Set version to 1.2.0-b7 2018-11-09 07:40:46 -08:00
JoelKatz
be70d81bd7 Perform some extra checks on ledger changes
Perform some extra checks on the close time and sequence number
of a candidate ledger for network consensus. This tightens
defenses against some "insane/hostile supermajority" attacks.
2018-11-09 07:40:41 -08:00
JoelKatz
6df96f08df Ensure websocket PING/PONG token has length 8 (RIPD-1670) 2018-11-09 07:40:41 -08:00
JoelKatz
9ad2b9be45 Fix a rare race condition on shutdown:
If we happen to get very unlucky and close the door when no
accept operation is pending, the do_accept loop would never
terminate.
2018-11-09 07:40:41 -08:00
JoelKatz
0d2b2923da Control memory growth from slow writes
* Don't allow a write batch to grow without bound
* Don't fetch history if write load is high
2018-11-09 07:40:41 -08:00
Mike Ellery
265f5f1fb1 Delete old protobuf subtree 2018-11-08 18:58:13 -08:00
Mike Ellery
a2ab6c4b02 Build protobuf as ExternalProject when not found 2018-11-08 18:58:13 -08:00
Mike Ellery
6bdc9e7b30 Use correct manifest cache when loading ValidatorList 2018-11-08 18:58:13 -08:00
Nik Bougalis
c71eb45240 Eliminate potential undefined behavior (RIPD-1685):
Under certain conditions, we could call `memcpy` or `memcmp` with a null
source pointer. Even when specifying 0 as the amount of data to copy, this
could result in undefined behavior under the C and C++ standards.

Acknowledgements:
Ripple thanks Guido Vranken for responsibly disclosing these issues.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to responsibly
disclose any issues that they may find. For more on Ripple's Bug Bounty
program, please visit: https://ripple.com/bug-bounty
2018-11-08 18:58:13 -08:00
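
The undefined behavior in question and the usual guard, sketched generically:

    #include <cstddef>
    #include <cstring>

    // Calling memcpy (or memcmp) with a null pointer is undefined behavior
    // even when the count is 0, so only make the call when there is data.
    void
    copyBytes(void* dst, void const* src, std::size_t n)
    {
        if (n != 0)
            std::memcpy(dst, src, n);
    }
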
Nik Bougalis
753600a2a0 Reset the validator list fetch timer if an error occurs 2018-11-08 18:58:12 -08:00
Nik Bougalis
945493d9cf Allow servers to detect transaction censorship attempts (RIPD-1626):
The XRP Ledger is designed to be censorship resistant. Any attempt to
censor transactions would require coordinated action by a majority of
the system's validators.

Importantly, the design of the system is such that any such attempt is
detectable and can be easily proven, since every validator must sign
the validations it publishes.

This commit adds an automated censorship detector. While the server is
in sync, the detector tracks all transactions that, in the view of the
server, should have been included, and issues warnings of increasing
severity for any transactions that, after several rounds, still have
not been included.
2018-11-08 18:58:11 -08:00
Nik Bougalis
2a8b0e4b88 Set version to 1.2.0-b6 2018-11-06 10:27:29 -08:00
Nik Bougalis
513b1dd194 Add support for Ed25519 seeds encoded using ripple-lib:
When Ed25519 support was added to ripple-lib, a way to specify
whether a seed should be used to derive a "classic" secp256k1
keypair or a "new" Ed25519 keypair was needed, and the
requirements were that:

1. existing seeds would, correctly, continue to generate a secp256k1
   keypair.
2. users would not have to know whether a given seed was meant to
   generate a secp256k1 or an Ed25519 keypair.

To address these requirements, the decision was made to encode
the type of key within the seed and a custom encoding was
designed.

The encoding uses a token type of 1 and prefixes the actual
seed with a 2-byte header, selected to ensure that all such
seeds will, when encoded, begin with the string "sEd".

This custom encoding is non-standard and was not previously
documented; as a result, it is not widely supported and other
software may treat such keys as invalid. This can make it
difficult for users that have stored such a seed to use
wallets or other tooling that is not based on ripple-lib.

This commit adds support to rippled for automatically
detecting and properly handling such seeds.
2018-11-06 10:27:13 -08:00
Nik Bougalis
77462b8f72 Remove deprecated 'validation_seed' RPC command:
The 'validation_seed' RPC command was used to change the validation
key used by a validator at runtime.

Its implementation was commented out with commit fa796a2eb5
which has been included in the codebase since the 0.30.0 release
and there are no plans to reintroduce the functionality at this
point.

Validator operators should migrate to using validator manifests
instead.

This fixes #2748.
2018-11-06 10:27:12 -08:00
Nik Bougalis
1682fe3a39 Cleanup unused Beast bits and pieces:
This cleanup does not remove Boost.Beast code, but old-style Beast
which is no longer relevant or helpful.
2018-11-06 10:27:10 -08:00
Edward Hennis
58f786cbb4 Make the FeeEscalation amendment permanent (RIPD-1654):
The FeeEscalation amendment has been enabled on the XRP Ledger network
since May 19, 2016. The transaction which activated this amendment is:
5B1F1E8E791A9C243DD728680F108FEF1F28F21BA3B202B8F66E7833CA71D3C3.

This change removes all conditional code based around the FeeEscalation
amendment, but leaves the amendment definition itself since removing the
definition would cause nodes to think an unknown amendment was activated,
causing them to become amendment blocked.

The commit also removes the redundant precomputed hashes from the
supportedAmendments vector.
2018-11-06 10:26:29 -08:00
Edward Hennis
a96cb8fc1c Remove undocumented experimental options from RPC sign (RIPD-1653):
The `x_assume_tx` and `x_queue_okay` experimental options were
associated with the transaction queue and were not officially
supported.
2018-11-06 10:26:29 -08:00
Joe Loser
c587012e5c Inline calls to cachedRead:
Problem:
- There are only a few call sites to cachedRead, and all of them
  currently do more work than is required since we know the type in each
  case.

Solution:
- "Inline" the codepath to cachedRead, but do not check if the type is
  valid. In all such call sites, we know the keylet to read directly.

This fixes #2550
2018-11-06 10:26:29 -08:00
Mike Ellery
ad4bbd8dff Add source filtering for coverage with option to disable 2018-11-06 10:26:29 -08:00
Mike Ellery
202d91c9f0 Remove unused json_batchallocator.h 2018-11-06 10:26:29 -08:00
Howard Hinnant
146ea5d44e Remove a use after std::move
Fixes: #2538
Fixes: #2536
2018-11-06 10:26:29 -08:00
Howard Hinnant
157c066f2b Fix memory leak in Json move assignment operator
*  When move assignment creates a cyclic ownership pattern,
   memory was being leaked.  This patch breaks the cycle.

*  Fixes: #2572
2018-11-06 10:26:29 -08:00
Howard Hinnant
156e8dae83 Replace WaitableEvent with portable std primitives:
The WaitableEvent class was a leftover from the pre-Boost
version of Beast and used Windows- and pthread-specific
APIs.

This refactor replaces that functionality by using only
interfaces provided by the C++ standard, making the code
more portable.

Closes #2402.
2018-11-06 10:26:29 -08:00
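
A portable event of the kind being replaced can be built from standard facilities alone, roughly like this (a sketch, not the class rippled used):

    #include <condition_variable>
    #include <mutex>

    // Minimal manual-reset event using only C++ standard primitives.
    class WaitableEvent
    {
        std::mutex m_;
        std::condition_variable cv_;
        bool signaled_ = false;

    public:
        void
        signal()
        {
            {
                std::lock_guard<std::mutex> lock(m_);
                signaled_ = true;
            }
            cv_.notify_all();
        }

        void
        wait()
        {
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return signaled_; });
        }

        void
        reset()
        {
            std::lock_guard<std::mutex> lock(m_);
            signaled_ = false;
        }
    };
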
Markus Teufelberger
5e96da51f9 Remove the state file for the random number generator 2018-11-06 10:26:29 -08:00
Nik Bougalis
cb71d493a0 Set version to 1.2.0-b5 2018-10-23 08:33:18 -07:00
MarkusTeufelberger
8124c1f51f remove duplicated include
The errno.h header is already included for both Linux and Android above
2018-10-23 08:24:11 -07:00
Nik Bougalis
6ed2270bc9 Merge master (1.1.1) into develop (1.2.0-b4) 2018-10-23 08:21:43 -07:00
Mike Ellery
4e7c038520 Set version to 1.2.0-b4 2018-10-19 12:24:51 -07:00
1535239824@qq.com
7b48dc36f5 Add fixTakerDryOfferRemoval amendment 2018-10-19 12:23:25 -07:00
Miguel Portilla
d5c0e1216d Change conflicting example websocket port 2018-10-19 12:22:47 -07:00
Scott Schurr
a999894dae Allow rippled to compile with C++17:
Many of the warnings on Windows were not resolved, just
silenced with _SILENCE_ALL_CXX17_DEPRECATION_WARNINGS.
They need to be resolved in a future commit.
2018-10-19 12:21:57 -07:00
Scott Schurr
63e167b7a3 ledger_entry RPC by index matches other forms [RIPD-1538] 2018-10-19 12:21:10 -07:00
MarkusTeufelberger
8fc6a8175b Remove unused execinfo.h header
Fixes #2671 and #2159
2018-10-19 12:20:11 -07:00
Edward Hennis
af1697cc6a Improve RPC error message for fee command:
* If rippled is not synced to the network, `fee` will return a
  "no network" error instead of the possibly confusing "not enabled"
  error.
* Resolves RIPD-1588
2018-10-19 12:19:20 -07:00
Mark Travis
e98c76110a Remove outdated example configs. 2018-10-19 12:18:29 -07:00
Mike Ellery
7fe1d4b9c2 Accept redirects from validator list sites:
Honor location header/redirect from validator sites. Limit retries per
refresh interval to 3. Shorten refresh interval after HTTP/network errors.

Fixes: RIPD-1669
2018-10-19 12:16:57 -07:00
Nik Bougalis
b36e11bc49 Properly handle expired validator lists when validating (RIPD-1661):
A validator that was configured to use a published validator list could
exhibit aberrant behavior if that validator list expired.

This commit introduces additional logic that makes validators operating
with an expired validator list bow out of the consensus process instead
of continuing to publish validations. Normal operation will resume once
a non-expired validator list becomes available.

This commit also enhances status reporting when using the `server_info`
and `validators` commands. Before, only the expiration time of the list
would be returned; now, its current status is also reported in a format
that is clearer.
2018-10-19 12:15:36 -07:00
seelabs
72e6005f56 Set version to 1.1.1 2018-10-19 13:12:40 -04:00
Nik Bougalis
152d698957 Properly handle expired validator lists when validating (RIPD-1661):
A validator that was configured to use a published validator list could
exhibit aberrant behavior if that validator list expired.

This commit introduces additional logic that makes validators operating
with an expired validator list bow out of the consensus process instead
of continuing to publish validations. Normal operation will resume once
a non-expired validator list becomes available.

This commit also enhances status reporting when using the `server_info`
and `validators` commands. Before, only the expiration time of the list
would be returned; now, its current status is also reported in a format
that is clearer.
2018-10-19 13:08:56 -04:00
Mike Ellery
7c96bbafbd CI rpm build fix 2018-10-11 11:08:55 -07:00
Mike Ellery
bdaad19e70 Accept redirects from validator list sites:
Honor location header/redirect from validator sites. Limit retries per
refresh interval to 3. Shorten refresh interval after HTTP/network errors.

Fixes: RIPD-1669
2018-10-11 11:08:27 -07:00
seelabs
63c3fc30d8 Set version to 1.2.0-b3 2018-10-10 13:09:25 -04:00
Joe Loser
1ac9694dbc Simplify strHex:
Problem:
- There are several specific overloads with some custom code that can be
  easily replaced using Boost.Hex.

Solution:
- Introduce `strHex(itr, itr)` to return a string given a begin and end
  iterator.
- Remove `strHex(itr, size)` in favor of the `strHex(T)` where T is
  something that has a `begin()` member function. This allows us to
  remove the strHex overloads for `std::string`, Blob, and Slice.
2018-10-10 13:09:22 -04:00
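
The iterator-pair overload can sit directly on top of Boost.Hex, roughly as follows (a sketch under the assumption that the new overload simply forwards to `boost::algorithm::hex`):

    #include <boost/algorithm/hex.hpp>
    #include <iterator>
    #include <string>

    // Hex-encode any iterator range; strHex(T) can then forward
    // t.begin()/t.end(), covering std::string, Blob, Slice, etc.
    template <class FwdIt>
    std::string
    strHex(FwdIt begin, FwdIt end)
    {
        std::string result;
        boost::algorithm::hex(begin, end, std::back_inserter(result));
        return result;
    }
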
Miguel Portilla
3661dc88fe Add RPC command shard crawl (RIPD-1663) 2018-10-10 12:16:01 -04:00
Edward Hennis
86c066cd7e Include entire src tree in multiconfig projects:
* For example Visual Studio, XCode. This will allow easily working with
  any file in the IDE.
* Also ignore the file created by Visual Studio when using cmake
  integration.
* Use conditional for unity/nounity sources (h/t @mellery451)
2018-10-10 10:25:25 -04:00
Mike Ellery
d70464032c Add dependency for NuDB ExternalProject 2018-10-10 10:19:00 -04:00
Scott Schurr
0bbe6e226c Remove beast::Journal default constructor 2018-10-10 10:18:03 -04:00
Mike Ellery
49e61cc0a6 Improve codecov builds:
- allow private token for jenkins/codecov
- add custom targets for gcc/clang to generate codecov reports
- use CMake coverage target in jenkins build
- optional coverage_test argument when configuring the build
2018-10-10 10:15:10 -04:00
Scott Schurr
6572fc8e95 Implement MultiSignReserve amendment [RIPD-1647]:
Reduces the account reserve for a multisigning SignerList from
(conditionally) 3 to 10 OwnerCounts to (unconditionally) 1
OwnerCount.  Includes a transition process.
2018-10-01 18:17:33 -07:00
Nik Bougalis
3ce4dda5cb Set version to 1.2.0-b2 2018-10-01 11:26:31 -07:00
Edward Hennis
7295cf979b Grow the open ledger expected transactions quickly (RIPD-1630):
* When increasing the expected ledger size, add on an extra 20%.
* When decreasing the expected ledger size, take the minimum of the
  validated ledger size and the old expected size, and subtract another 50%.
* Update fee escalation documentation.
* Refactor the FeeMetrics object to use values from Setup
2018-10-01 11:26:22 -07:00
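
The two sizing rules in the first bullets work out to something like this (illustrative names, not the TxQ's actual code):

    #include <algorithm>
    #include <cstddef>

    // Growing: add an extra 20% on top of the new size.
    std::size_t
    grow(std::size_t newSize)
    {
        return newSize + newSize / 5;
    }

    // Shrinking: take the smaller of the validated ledger size and the old
    // expected size, then subtract another 50%.
    std::size_t
    shrink(std::size_t validatedSize, std::size_t oldExpected)
    {
        return std::min(validatedSize, oldExpected) / 2;
    }
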
Edward Hennis
e14f913244 Update TxQ developer docs:
* Rename a couple of member variables for clarity.
2018-10-01 11:26:22 -07:00
Joe Loser
cd1c5a30dd Add user defined literals for megabytes and kilobytes 2018-10-01 11:26:22 -07:00
Joe Loser
8dd8433bb6 Remove unused function in AutoSocket.h 2018-10-01 07:40:56 -07:00
Scott Schurr
eeb9d92fb0 Add RPCCall unit tests (RIPD-1634) 2018-10-01 07:40:56 -07:00
Scott Schurr
4104778067 Improve transaction error condition handling (RIPD-1578, RIPD-1593):
As described in #2314, when an offer executed with `Fill or Kill`
semantics, the server would return `tesSUCCESS` even if the order
couldn't be filled and was aborted. This would require additional
processing of metadata by users to determine the effects of the
transaction.

This commit introduces the `fix1578` amendment which, if enabled,
will cause the server to return the new `tecKILLED` error code
instead of `tesSUCCESS` for `Fill or Kill` orders that could not
be filled.

Additionally, the `fix1578` amendment will prevent the setting of
the `No Ripple` flag on trust lines with negative balance; trying
to set the flag on such a trust line will fail with the new error
code `tecNEGATIVE_BALANCE`.
2018-09-30 14:10:40 -07:00
Spec
4dcb3c9199 Avoid dispatching multiple fetch pack threads 2018-09-30 13:54:59 -07:00
Nik Bougalis
b0092aee24 Set version to 1.2.0-b1 2018-09-28 09:15:12 -07:00
Mike Ellery
bb52b04c25 Remove subtrees for soci, sqlite, lz4, snappy, nudb 2018-09-28 09:15:06 -07:00
Mike Ellery
83dac8b382 Use ExternalProject for NIH dependencies
Fixes: RIPD-1648

 - use ExternalProject for snappy, lz4, SOCI, and sqlite3
 - use FetchContent for NuDB
 - update SOCI from 79e222e3c2278e6108137a2d26d3689418b37544 to
   3a1f602b3021b925d38828e3ff95f9e7f8887ff7
 - update lz4 from c10863b98e1503af90616ae99725ecd120265dfb to v1.8.2
 - update sqlite3 from 3.21 to 3.24
 - update snappy from b02bfa754ebf27921d8da3bd2517eab445b84ff9 to 1.1.7
 - update NuDB from 00adc6a4f16679a376f40c967f77dfa544c179c1 to 1.0.0
2018-09-28 09:15:06 -07:00
Mike Ellery
8a4951947d Improve ssl and nih in cmake:
- provide better override handling for ssl dir
- include build type in nih cache for single config
  to avoid cmake cache collision
2018-09-28 09:15:06 -07:00
Mike Ellery
ab6163e989 Remove test sensitivity to error text from OpenSSL 2018-09-28 09:15:06 -07:00
Mike Ellery
5741a8356f Refine json object test for NDEBUG case 2018-09-28 09:15:06 -07:00
seelabs
b2f2d89a08 Support boost 1.68 2018-09-28 09:15:06 -07:00
seelabs
c946043280 Suppress clang warning on intentional self assignment 2018-09-28 09:15:06 -07:00
Miguel Portilla
820546c873 Report fetch pack errors with shards 2018-09-28 09:15:06 -07:00
Scott Schurr
b36e9dd1b4 Remove noisy log write from Stoppable.cpp 2018-09-28 09:15:06 -07:00
Scott Schurr
582d1691a9 Improve error descriptions in JSONRPC unit test 2018-09-28 09:15:06 -07:00
Nik Bougalis
3e22a1e9e8 Set version to 1.1.0 2018-09-14 12:53:38 -07:00
wilsonianb
7b0367730c Set version to 1.1.0-rc3 2018-08-21 13:56:28 -05:00
wilsonianb
8c14002c25 Do not use beast base64 encoding without fix:
Boost 1.67 and 1.68 are missing this fix
0439dcfa7a
2018-08-21 10:05:45 -05:00
Nik Bougalis
c0d396fb3c Set version to 1.1.0-rc2 2018-08-15 20:02:19 -07:00
Nik Bougalis
65d517d0df Don't filter proposals by close time at the wire protocol level:
When validators publish a proposal, they include the close time that they
believe the new ledger should have, and the network attempts to reach
consensus on that.

Instead of delaying consensus if no close time has the required majority,
the servers can "agree to disagree"; if this happens, they switch to
proposing a close time of 0, and the network avalanches to that value.

If that occurs, deterministic rules record the new ledger's close time as
being one second later than its parent, and set a flag indicating that
no consensus on the close time was reached.

The wire protocol decoder would incorrectly filter such proposals, so
that they would not be seen by the higher level consensus engine.

This commit removes the low-level filtering, and allows higher level
code to filter out stale proposals instead.
2018-08-15 19:59:55 -07:00
Nik Bougalis
38c3a46a33 Deprecate commands that perform remote tx signing (RIPD-1649):
In order to facilitate transaction signing, `rippled` offers the `sign` and
`sign_for` and `submit` commands, which, given a seed, can be used to sign or
sign-and-submit transactions. These commands are accessible from the command
line, as well as over the WebSocket and RPC interfaces that `rippled` can be
configured to provide.

These commands, unfortunately, have significant security implications:

  1. They require divulging an account's seed (commonly known as a "secret
     key") to the server.
  2. When executing these commands against remote servers, the seeds can be
     transported over clear-text links.
  3. When executing these commands over the command line, the account
     seed may be visible using common tools that show running processes
     and may potentially be inadvertently stored by system monitoring
     tools or facilities designed to maintain a history of previously
     typed commands.

While this commit cannot prevent users from issuing these commands to a
server, whether locally or remotely, it restricts the `sign` and `sign_for`
commands, as well as the `submit` command when used to sign-and-submit,
so that they require administrative privileges on the server.

Server operators that want to allow unrestricted signing can do so by
adding the following stanza to their configuration file:

    [signing_support]
    true

Ripple discourages server operators from doing so and advises against using
these commands, which will be removed in a future release. If you rely on
these commands for signing, please migrate to a standalone signing solution
as soon as possible. One option is to use `ripple-lib`; documentation is
available at https://developers.ripple.com/rippleapi-reference.html#sign.

If the commands are administratively enabled, the server includes a warning
on startup and adds a new field in the resulting JSON, informing the caller
that the commands are deprecated and may become unavailable at any time.

Acknowledgements:
Ripple thanks Jesper Wallin for reporting this issue.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to responsibly
disclose any issues that they may find. For more on Ripple's Bug Bounty
program, please visit: https://ripple.com/bug-bounty
2018-08-15 19:59:52 -07:00
Scott Schurr
d3258c7f1f deposit_authorized gives error if source not in ledger (#2640) 2018-08-14 08:46:59 -07:00
seelabs
8a02903fa5 Set version to 1.1.0-rc1 2018-08-08 21:07:57 -04:00
Mike Ellery
dbc8f147c9 Improve subproj handling and deprecated target options:
Exclude several libraries from build when we are included in a
super-project (this is the case when someone only wants to use
xrpl_core). Force several target (deprecated) params to be cache
variables since they are now exposed as options.
2018-08-08 21:07:54 -04:00
Mike Ellery
7a547b8cf2 Add missing sources to build 2018-08-08 21:07:54 -04:00
Mike Ellery
2c13ca0109 Define DEBUG preprocessor symbol for debug builds 2018-08-08 21:07:54 -04:00
Miguel Portilla
a7ed5bfbee Improve shards file exception handling 2018-08-08 21:07:54 -04:00
Miguel Portilla
a73372cb9d Add RPC shard download 2018-08-08 21:07:54 -04:00
Miguel Portilla
658f904ce0 Add shard import support to shard database 2018-08-08 21:07:54 -04:00
Miguel Portilla
9212c28ef8 Add HTTPS file downloader client 2018-08-08 21:07:54 -04:00
Miguel Portilla
5336e3715a Add archive and lz4 extracting 2018-08-08 21:07:54 -04:00
Mike Ellery
c12dbc4386 Add libarchive 2018-08-08 21:07:54 -04:00
Scott Schurr
2901577be7 Remove using namespace declarations at namespace scope in headers 2018-08-08 21:07:54 -04:00
seelabs
4aa0bc37c0 Add delimiter when appending to CMAKE_CXX_FLAGS 2018-08-08 21:07:54 -04:00
Mike Ellery
24d2687f2d Remove empty source file 2018-08-08 21:07:54 -04:00
Mike Ellery
ed5a0bdc3c Correct werr flag for jenkins build 2018-08-08 21:07:54 -04:00
Mark Travis
04745b11a8 Expand SQLite potential storage capacity:
Increase page size for SQLite transaction database upon creation.
Provide diagnostics for transaction db page usage.
Shut down rippled gracefully if transaction db is running out of pages.
Add new rippled maintenance command line option to cause new page size
to take effect.
2018-08-08 21:07:54 -04:00
wilsonianb
9b63f4fb53 Remove executable bit from source files 2018-08-07 14:38:27 -04:00
Joe Loser
8ac6799149 Remove unused SNTP_DEBUG define in SNTPClock.cpp 2018-08-07 14:36:19 -04:00
Nik Bougalis
09050a860b Set version to 1.1.0-b5 2018-07-26 16:06:25 -07:00
mDuo13
32ca1dd6ed Update README with XRP Ledger branding 2018-07-26 16:06:16 -07:00
Mike Ellery
37d9544ef7 Refactor/modernize our cmake:
Switch to target-oriented dependencies. Use imported targets for
dependencies (openssl, boost). Localize FindBoost to remove cmake
version dependence for latest boost support. Logically separate
"ripple-libpp" core sources and add install targets.
Add ninja build for msvc. Add two clang sanitizer builds. Misc script
changes to work with latest modernized cmake.
2018-07-20 08:58:04 -07:00
Mike Ellery
63370b4441 Default to ipv4 for unit tests, add ipv6 option 2018-07-20 08:58:04 -07:00
Mike Ellery
49bcdda418 Improve charge handling in NoRippleCheckLimits test (RIPD-1641) 2018-07-20 08:58:04 -07:00
Miguel Portilla
d89ff1b63d Handle websocket construction exceptions:
Certain versions of the Beast HTTP & WebSocket library can
generate exceptions, which unless caught, will result in
unexpected behavior.

Acknowledgements:
Ripple thanks Thomas Snider for originally noticing this
issue and responsibly disclosing it to Ripple.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers
to responsibly disclose any issues that they may find. For
more on Ripple's Bug Bounty program, please visit:
https://ripple.com/bug-bounty
2018-07-20 08:58:04 -07:00
Miguel Portilla
d289512aeb Improve SSLHTTPPeer asynchronous shutdown 2018-07-20 08:58:04 -07:00
Howard Hinnant
d98c4992dd Supply ConsensusTimer with milliseconds or finer precision 2018-07-20 08:58:04 -07:00
Howard Hinnant
d257d1b2c9 Migrate more code into the chrono type system:
Changes include:

  *  Database::tune and all tune overrides
  *  TaggedCache TargetAge
  *  KeyCache TargetAge
2018-07-20 08:58:04 -07:00
Scott Schurr
574ea2c14d Minor optimization of STObject::add 2018-07-20 08:58:04 -07:00
Joe Loser
70d9d88cda Remove using namespace beast in base_uint.h 2018-07-20 08:58:04 -07:00
Joe Loser
79d819584f Replace boost locks and mutexes with std-equivalent 2018-07-16 17:49:42 -07:00
Joe Loser
e222ff5868 Rename data members of ConsensusParms:
Based on a TODO comment in DisputedTX.h, it seems at one point the
data members of ConsensusParms were macros. Now that they are not,
we should spell them like other data members (without all uppercase).
2018-07-16 17:49:42 -07:00
Joe Loser
f58916a2e4 Remove comment about passing allocator to KeyCache:
After some discussion on https://github.com/ripple/rippled/pull/2595
we have decided that the allocator should not be plumbed through
the KeyCache class template. As such, remove the comment suggesting
to push the allocator through.
2018-07-16 17:49:42 -07:00
wilsonianb
7e30897ef4 Increase validation quorum to 80%
All listed validators are trusted and quorum is 80% of trusted
validators regardless of the number of:
* configured published lists
* listed or trusted validators
* recently seen validators

Exceptions:
* A listed validator whose master key has been revoked is not trusted
* Custom minimum quorum (specified with --quorum in the command line)
  is used if the normal quorum appears unreachable based on the number
  of recently received validators.

RIPD-1640
2018-07-16 17:49:42 -07:00
seelabs
cff1abba5d Add .clang-format rules and update code style 2018-07-16 17:49:42 -07:00
MarkusTeufelberger
aa4e3a98f7 Remove cmake conditional that could never be true
Boost >= 1.67 is required; the check was for Boost versions <= 1.66
2018-07-03 02:09:33 +02:00
Nik Bougalis
381a1b948b Set version to 1.1.0-b4 2018-06-25 17:12:08 -07:00
Nik Bougalis
873ba1ba9b Merge master (1.0.1) into develop (1.1.0-b3) 2018-06-25 13:53:15 -07:00
Edward Hennis
16b9bbb517 Retried transactions that get a `tec` result move from TxQ to open ledger:
* Unit test of tec code handling.
* Extra TxQ debug logging
2018-06-25 13:52:16 -07:00
Ian Roskam
7427cf7506 Beast was accepted into Boost:
The link to the Beast repository was outdated; updated it to point to the boostorg/beast repository.
2018-06-25 13:52:16 -07:00
Scott Schurr
b14bdb068a Standardize on default_prng() for non-crypto shuffling 2018-06-25 13:52:15 -07:00
Mike Ellery
8098cba4c2 Trim space in Endpoint::from_string
Fixes: RIPD-1643
2018-06-25 13:38:05 -07:00
Mike Ellery
68bebc472a only IPv4 allowed with travis 2018-06-25 13:38:05 -07:00
Joe Loser
243e181c08 Replace uses of dirDelete with ApplyView::dirRemove 2018-06-25 13:38:05 -07:00
Joe Loser
b0a1aef43d Replace deprecated usages of std::random_shuffle
std::random_shuffle is deprecated in C++14 and removed completely
in C++17. The two-iterator version of std::random_shuffle usually
depends on std::rand and also on a global state. The preferred
replacement is to use std::shuffle with a pseudo-random number
generator.
2018-06-25 13:38:05 -07:00
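
The typical before/after for this replacement (a generic example, not a specific rippled call site):

    #include <algorithm>
    #include <random>
    #include <vector>

    void
    shuffleValues(std::vector<int>& v)
    {
        // std::random_shuffle(v.begin(), v.end()); // deprecated in C++14, removed in C++17
        std::mt19937 gen{std::random_device{}()};   // explicit, local generator
        std::shuffle(v.begin(), v.end(), gen);      // preferred replacement
    }
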
Joe Loser
aab47e09b6 Remove static_assert for Boost version 2018-06-25 13:38:05 -07:00
Joe Loser
fc3a3d8267 Remove BEAST_NO_ZERO_AUTO_RETURN in Zero.h 2018-06-25 13:38:05 -07:00
Joe Loser
73fb3f0bfa Mark some move and move-assignment ctors noexcept 2018-06-25 13:38:05 -07:00
Joe Loser
5f8037c55b Apply clang-tidy modernize-use-equals-default check 2018-06-25 13:38:05 -07:00
Nikolaos D. Bougalis
3aaf6d7857 Use Boost.Endian instead of custom wrappers 2018-06-25 13:38:00 -07:00
Mike Ellery
11ab98cced Set version to 1.1.0-b3 2018-06-19 11:56:08 -07:00
Joe Loser
06d0ff6e52 Remove conditional check for using Boost.Process:
- Since we require a min Boost version of 1.67 as of recently (for
  Beast), we also remove the conditional checks that existed for us
  to know whether Boost.Process is available or not. We can
  always assume it is available now.
- Remove runtime checks for minimum Boost and OpenSSL versions
  since they are checked at CMake configure time.
2018-06-19 11:56:08 -07:00
Mike Ellery
5a830b63e9 RPM dev build fixes:
skip signature checks and allow for all branches (not PRs).
2018-06-19 11:25:20 -07:00
Joe Loser
f658656b82 Mark some single-argument constructors explicit 2018-06-19 11:25:20 -07:00
wilsonianb
31e511afcf Fix duplicate validation and manifest suppression
RIPD-1636
RIPD-1638
RIPD-1632
2018-06-19 11:25:20 -07:00
Joe Loser
f0cc7c4c8d Replace beast::SharedPtr with std::shared_ptr 2018-06-19 11:25:20 -07:00
Scott Schurr
6a74d771ee Reduce occurrences of sporadic PerfLog unit test failures 2018-06-19 11:25:20 -07:00
seelabs
833fae57db Use liquidity from strands that consume too many offers (RIPD-1515):
This changes the rules for payments in two ways:

1) It lowers the maximum number of offers any book step can consume
from 2000 to 1000.

2) When a strand contains a step that consumes too many offers, the
liquidity is currently not used at all and the strand is considered
dry. This changes things so the liquidity is used; however, the
strand will still be considered dry.
2018-06-19 11:25:20 -07:00
Scott Schurr
5097656c83 Add xrpRoundToZero logging for FlowCross compareSandboxes 2018-06-19 11:25:20 -07:00
Edward Hennis
5b733fb485 Remove Transactor::mFeeDue member variable
* mFeeDue is only used in one place by one derived class, so
  only compute it as a local in that function.
* The baseFee needs to be calculated outside of the Transactor class
  because it can change during transaction processing, and the function
  is static, so we need to be sure to call the right version.
* Rename Transactor::calculateFee to minimumFee
2018-06-19 11:25:20 -07:00
Joe Loser
0b2f33d23a Prefer std::array over C-style array in base_uint 2018-06-19 11:25:16 -07:00
Mike Ellery
08382d866b Support ipv6 for peer and RPC comms:
Fixes: RIPD-1574

Alias beast address classes to the asio equivalents. Adjust users of
address classes accordingly. Fix resolver class so that it can support
ipv6 addresses. Make unit tests use ipv6 localhost network. Extend
endpoint peer message to support string endpoint
representations while also supporting the existing fields (both are
optional/repeated types). Expand test for Livecache and Endpoint.
Workaround some false positive ipaddr tests on windows (asio bug?)
Replaced usage of address::from_string(deprecated) with free function
make_address. Identified a remaining use of v4 address type and
replaced with the more appropriate IPEndpoint type (rpc_ip cmdline
option). Add CLI flag for using ipv4 with unit tests.

Release Notes
-------------

The optional rpc_port command line flag is deprecated. The rpc_ip
parameter now works as documented and accepts ip and port combined.
2018-06-19 09:32:54 -07:00
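
The address-parsing change called out in the notes, sketched directly against Boost.Asio:

    #include <boost/asio/ip/address.hpp>
    #include <iostream>

    int
    main()
    {
        using boost::asio::ip::make_address;

        // make_address replaces the deprecated address::from_string and
        // handles both IPv4 and IPv6 textual forms.
        auto const v4 = make_address("127.0.0.1");
        auto const v6 = make_address("::1");

        std::cout << v4.to_string() << " is_v4=" << v4.is_v4() << '\n'
                  << v6.to_string() << " is_v6=" << v6.is_v6() << '\n';
    }
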
Nik Bougalis
8429dd67e6 Set version to 1.0.1 2018-06-04 16:37:47 -07:00
Nik Bougalis
0439dcfa7a Fix a corner case when decoding base64:
Under some corner cases, the base64 decoder would not allocate
enough memory, which could result in spurious errors.

Acknowledgements:
Ripple thanks Guido Vranken for originally noticing this issue
and responsibly disclosing it to Ripple.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers
to responsibly disclose any issues that they may find. For
more on Ripple's Bug Bounty program, please visit:
https://ripple.com/bug-bounty
2018-06-04 16:37:45 -07:00
seelabs
00df097e5f Improve json exception handling 2018-06-04 12:09:48 -04:00
seelabs
fd4636b056 Set version to 1.1.0-b2 2018-06-01 13:29:57 -04:00
Scott Schurr
34d3f93868 Don't read Amount field if it is not present (RIPD-1623) 2018-06-01 13:29:52 -04:00
Joe Loser
57ab0a00b5 Rename member function in NetworkOPs.h 2018-06-01 13:29:52 -04:00
Joe Loser
f0cec3b2f1 Rename LoadEvent member function reName to setName 2018-06-01 13:29:52 -04:00
Mike Ellery
1c2b8417b9 Correct copy in io_latency_probe:
Fixes: https://github.com/ripple/rippled/issues/2521

Copy ctor missed one member. Also added move since we have some rvalues
passed around here.
2018-06-01 13:29:52 -04:00
Mike Ellery
ca29c2b906 Improve unity/nounity description in linux doc 2018-06-01 13:29:52 -04:00
Joe Loser
7c785d0d7c Add missing override keyword:
* Enable the `suggest-override` warning for gcc
* Fix all functions that were flagged by that warning
2018-06-01 13:29:52 -04:00
seelabs
0ae157a5c3 Enable c++-11 for soci 2018-06-01 13:02:52 -04:00
Joe Loser
a6f59081cc Remove deprecated protocol/types.h header 2018-06-01 13:01:45 -04:00
Mike Ellery
201f1aaa39 Prompt for manual approval on non-collaborator PRs 2018-06-01 13:01:10 -04:00
Mike Ellery
ae73878c59 Build an unsigned rpm in dev pipeline 2018-06-01 13:01:10 -04:00
Mike Ellery
cfdc64d7cf Enable manual tests in CI:
Fixes: RIPD-1575. Fix argument passing to runner. Allow multiple unit
test selectors to be passed via --unittest argument. Add optional
integer priority value to test suite list. Fix several failing manual
tests. Update CLI usage message to make it clearer.
2018-06-01 12:57:12 -04:00
seelabs
95eb5e1862 Fix manual offer test 2018-06-01 12:57:12 -04:00
Joe Loser
dc0d5996e2 Convert macros in STTX.h into an enum 2018-06-01 12:56:09 -04:00
seelabs
817d2339b8 Set version to 1.1.0-b1 2018-05-15 16:58:33 -04:00
Scott Schurr
008ff67ac2 Add DepositPreauth ledger type and transaction (RIPD-1624):
The lsfDepositAuth flag limits the AccountIDs that can deposit into
the account that has the flag set.  The original design only
allowed deposits to complete if the account with the flag set also
signed the transaction that caused the deposit.

The DepositPreauth ledger type allows an account with the
lsfDepositAuth flag set to preauthorize additional accounts.
This preauthorization allows them to sign deposits as well.  An
account can add DepositPreauth objects to the ledger (and remove
them as well) using the DepositPreauth transaction.
2018-05-15 16:58:31 -04:00
seelabs
b444196bf9 Remove pre-boost beast 2018-05-15 16:58:30 -04:00
seelabs
27703859e7 Convert code to use boost::beast 2018-05-15 16:58:30 -04:00
Nikolaos D. Bougalis
2ac1c2b433 Improve invariant checking:
Add a new invariant checker that verifies that we never charge a
fee higher than specified in the transaction; we will charge less
in some corner cases where the transacting account cannot afford
the fee.

Detect more anomalous conditions, and improve the logged error
messages.

Clarify the code flow associated with invoking the invariant checker
from `Transactor`, add extra comments and improve naming to make the
code self-documenting.
2018-05-15 11:28:50 -04:00
Scott Schurr
118c25c0f0 Compile time check preflight returns no tec (RIPD-1624):
The six different ranges of TER codes are broken up into six
different enumerations.  A template class allows subsets of
these enumerations to be aggregated.  This technique allows
verification at compile time that no TEC codes are returned
before the signature is checked.

Conversion between TER instance and integer is provided by
named functions.  This makes accidental conversion almost
impossible and makes type abuse easier to spot in the code
base.
2018-05-15 11:28:50 -04:00
Howard Hinnant
7d163a45dc Replace UptimeTimer with UptimeClock
* UptimeClock is a chrono-compatible seconds-precision clock.

* Like UptimeTimer, its purpose is to make it possible for clients
  to query the uptime thousands of times per second without a
  significant performance hit.

* UptimeClock decouples itself from LoadManager by managing its
  own once-per-second update loop.

* Clients now traffic in chrono time_points and durations instead
  of int.
2018-05-15 09:56:47 -04:00
Joe Loser
717f874767 Add missing virtual destructors:
Some classes had virtual methods, but were missing a virtual
destructor.

Technically, every unit test that inherits from the Beast test suite
would get flagged by `-Wnon-virtual-dtor` but I did not think it would
be a great idea to go sprinkle a virtual destructor for every Ripple
test suite.
2018-05-15 09:55:28 -04:00
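
The class of problem being fixed, in its simplest form (generic example):

    #include <memory>

    struct Base
    {
        virtual void run() = 0;
        virtual ~Base() = default; // without this, deleting via Base* is undefined behavior
    };

    struct Derived : Base
    {
        void run() override {}
    };

    int
    main()
    {
        std::unique_ptr<Base> p = std::make_unique<Derived>();
        p->run();
    } // Derived is destroyed correctly because ~Base is virtual
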
Brad Chase
681df58b61 Refactor ledger replay logic (RIPD-1547):
Also switch to use ReadView for TxQ updates.
2018-05-15 09:54:00 -04:00
Nikolaos D. Bougalis
f31ca2860f Set version to 1.0.0 2018-05-11 10:29:41 -07:00
Nikolaos D. Bougalis
d702e736ca Set version to 1.0.0-rc1 2018-05-07 11:37:16 -07:00
Brad Chase
6156ff3eb7 Remove validation cookie support code 2018-05-07 11:36:27 -07:00
Joe Loser
04f1388860 Remove extra semicolons:
Several functions had an extra semicolon. This removes them.
2018-05-07 11:36:27 -07:00
Joe Loser
c1c332f0b0 Remove redundant type qualifier:
The extra `const` type qualifier on the return type has no effect.
Clang emits `-Wignored-qualifiers` warning with `-Wextra`.
2018-05-07 11:36:27 -07:00
seelabs
93780c25f7 Resolve gcc8 warnings 2018-05-07 11:31:23 -07:00
Nikolaos D. Bougalis
a442d3fdb3 Set version to 1.0.0-b5 2018-04-29 02:04:37 -07:00
Scott Schurr
7bc163ee4c Add delivered_amount to tx result for CheckCash (RIPD-1623) 2018-04-28 13:46:04 -07:00
Scott Schurr
6bd0b850a0 Fixes for PerfLog unit test in a Docker container 2018-04-28 13:45:25 -07:00
Nikolaos D. Bougalis
1eece9b1fd Set version to 1.0.0-b4 2018-04-09 09:52:30 -07:00
Miguel Portilla
859d18adb0 Add command import node store to shards 2018-04-09 09:52:13 -07:00
Scott Schurr
c4a9b73a66 Add check, escrow, and pay_chan to ledger_entry (RIPD-1600) 2018-04-08 02:34:37 -07:00
Mark Travis
8eb8c77886 Performance logging and counters:
* Tally and duration counters for Job Queue tasks and RPC calls
    optionally rendered by server_info and server_state, and
    optionally printed to a distinct log file.
    - Tally each Job Queue task as it is queued, starts, and
      finishes running. Track total duration queued and running.
    - Tally each RPC call as it starts and either finishes
      successfully or throws an exception. Track total running
      duration for each.
  * Track currently executing Job Queue tasks and RPC methods
    along with durations.
  * Json-formatted performance log file written by a dedicated
    thread, for above-described data.
  * New optional parameter, "counters", for server_info and
    server_state. If set, render Job Queue and RPC call counters
    as well as currently executing tasks.
  * New configuration section, "[perf]", to optionally control
    performance logging to a file.
  * Support optional sub-second periods when rendering human-readable
    time points.
2018-04-08 02:24:38 -07:00
Markus Teufelberger
ef3bc92b82 Rename xxhash.c to xxhash.cpp 2018-04-08 01:52:12 -07:00
Brad Chase
f7a4a94c3b Add cookie to validation (RIPD-1586):
Each validator will generate a random cookie on startup that it will
include in each of its validations. This will allow validators to detect
when more than one validator is accidentally operating with the same
validation keys.
2018-04-08 01:52:12 -07:00
Brad Chase
3dc0714273 Add testnet to example configs (RIPD-1622) 2018-04-08 01:52:12 -07:00
Mike Ellery
deb9e4ce3c Remove BeastConfig.h (RIPD-1167) 2018-04-08 01:52:12 -07:00
Mike Ellery
4bc300e251 Quiet protobuf warning in XCode build 2018-04-08 01:52:11 -07:00
Mike Ellery
d65e208a99 Eliminate objective-c sources 2018-04-08 01:52:11 -07:00
Howard Hinnant
f8fb1f6c7d Remove unneeded macOS-specific code 2018-04-08 01:52:11 -07:00
Howard Hinnant
db3b4dd396 Prevent accidental aggregates
*  The compiler can provide many non-explicit constructors for
   aggregate types.  This is sometimes desired, but it can
   happen accidentally, resulting in run-time errors.

*  This commit ensures that no types are aggregates unless existing
   code is using aggregate initialization.
2018-04-08 01:52:11 -07:00
Nikolaos D. Bougalis
b7335fdff5 Remove unused headers for LevelDB and HyperlevelDB 2018-04-08 01:52:11 -07:00
Nikolaos D. Bougalis
d45556ec82 Improve checking of transaction flags (RIPD-1543) 2018-04-08 01:52:10 -07:00
Nikolaos D. Bougalis
cebb9c6604 Remove unused capture in lambdas 2018-04-08 01:52:08 -07:00
Nikolaos D. Bougalis
b7692b7bc1 Remove nodestore dependency on Snappy 2018-04-08 01:52:07 -07:00
Nikolaos D. Bougalis
327377cb2d Use xxhash and remove unused hash functions:
We had several hash functions implemented, including SipHash,
SpookyHash and FNV1a.

Default to using xxhash and remove the code for the remaining
hash functions.
2018-04-08 01:52:06 -07:00
Nikolaos D. Bougalis
75c4dbb0a1 Set version to 1.0.0-b3 2018-03-24 12:54:09 -07:00
Brad Chase
f0b9506617 Remove scons support 2018-03-24 12:53:53 -07:00
Howard Hinnant
b4e1b3c1b1 Remove undefined behavior from <ctype.h> calls:
For the functions defined in <ctype.h> the C standard requires
that the value of the int argument be in the range of an
unsigned char, or be EOF.  Violation of this requirement
results in undefined behavior.
2018-03-24 12:53:44 -07:00
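
The standard-conforming fix is to widen through `unsigned char` before calling the <ctype.h> functions (generic example):

    #include <cctype>
    #include <string>

    // Plain char may be signed; passing a negative value other than EOF to
    // the <ctype.h> functions is undefined behavior, so cast first.
    bool
    allDigits(std::string const& s)
    {
        for (char c : s)
            if (!std::isdigit(static_cast<unsigned char>(c)))
                return false;
        return true;
    }
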
seelabs
02c487348a Remove warning about plantuml:
Plantuml is only relevant when building docs. On most systems it is OK if it is
not installed.
2018-03-24 12:53:37 -07:00
David Schwartz
5db5e31140 Allow relayed ledger requests to check the shard store 2018-03-24 12:53:29 -07:00
Miguel Portilla
4869a0d00e Verify SQLite ledgers exist in node store 2018-03-24 12:53:21 -07:00
Miguel Portilla
d9be0de269 Add shard configuration example 2018-03-24 12:53:11 -07:00
Miguel Portilla
0b18b36186 Make earliest ledger sequence configurable 2018-03-24 12:53:01 -07:00
Nikolaos D. Bougalis
8d9dffcf84 Clarify Escrow semantics (RIPD-1571):
When creating an escrow, if the `CancelAfter` time is specified but
the `FinishAfter` is not, the resulting escrow can be immediately
completed using `EscrowFinish`. While this behavior is documented,
it is unintuitive and can be confusing for users.

This commit introduces a new fix amendment (fix1571) which prevents
the creation of new Escrow entries that can be finished immediately
and without any requirements.

Once the amendment is activated, creating a new Escrow will require
either specifying the `FinishAfter` time explicitly or specifying a
cryptocondition.
2018-03-24 12:52:40 -07:00
Nikolaos D. Bougalis
2b8893dfca Merge master (0.90.1) into develop (1.0.0-b2):
The merge also updates RELEASENOTES.md with the release notes
for 0.90.1, which were accidentally not included in that release.
2018-03-24 12:51:23 -07:00
Howard Hinnant
881cd4cfad Limit STVar recursion during deserialization (RIPD-1603):
Constructing deeply nested objects could allow an attacker to
cause a server to overflow its available stack.

We now enforce a 10-deep nesting limit, and signal an error
if we encounter objects that are nested deeper.

Acknowledgements:
Ripple thanks Guido Vranken for responsibly disclosing this
issue.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled codebase and urge reviewers
to responsibly disclose any issues that they may find. For
more on Ripple's Bug Bounty program, please visit
https://ripple.com/bug-bounty
2018-03-23 14:18:36 -07:00
Nikolaos D. Bougalis
067dbf299c Set version to 0.90.1 2018-03-21 20:39:20 -07:00
Miguel Portilla
8e9495f487 Use lock when creating peer shard rangeset 2018-03-21 20:39:19 -07:00
Howard Hinnant
40dc6b1458 Limit STVar recursion during deserialization (RIPD-1603):
Constructing deeply nested objects could allow an attacker to
cause a server to overflow its available stack.

We now enforce a 10-deep nesting limit, and signal an error
if we encounter objects that are nested deeper.

Acknowledgements:
Ripple thanks Guido Vranken for responsibly disclosing this
issue.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled codebase and urge reviewers
to responsibly disclose any issues that they may find. For
more on Ripple's Bug Bounty program, please visit
https://ripple.com/bug-bounty
2018-03-21 20:39:18 -07:00
Nikolaos D. Bougalis
d5f981f5fc Address issues identified by external review:
* RIPD-1617, RIPD-1619, RIPD-1621:
  Verify serialized public keys more strictly before
  using them.

* RIPD-1618:
    * Simplify the base58 decoder logic.
    * Reduce the complexity of the base58 encoder and
      eliminate a potential out-of-bounds memory access.
    * Improve type safety by using an `enum class` to
      enforce strict type checking for token types.

* RIPD-1616:
  Avoid calling `memcpy` with a null pointer even if the
  size is specified as zero, since it results in undefined
  behavior.

Acknowledgements:
Ripple thanks Guido Vranken for responsibly disclosing these
issues.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers
to responsibly disclose any issues that they may find. For
more on Ripple's Bug Bounty program, please visit:
https://ripple.com/bug-bounty
2018-03-21 20:39:18 -07:00
Nikolaos D. Bougalis
25de6b0a5f Remove STValidation::getFlags:
The function is non-virtual and hides the virtual function specified in
the base class.

Falling back to the virtual function in the base class is the correct
solution.
2018-03-15 21:17:49 -07:00
seelabs
9af994ceb4 Set version to 1.0.0-b2 2018-03-15 14:38:09 -04:00
seelabs
5549ff4c8a Fix boost::error_code conversions to int:
* Boost 1.67 removes boost::error_code automatic conversions to int
2018-03-15 14:38:06 -04:00
Mike Ellery
3730d46dad Fix intermittent failure in Subscribe_test:
Fixes: RIPD-1601

Fix intermittent failure in server stream sub/unsub test.
Root cause is LoadManager thread *sometimes* running and causing a
fee change event which got published before our test could
unsubscribe. Fixed by explicitly stopping the LoadManager for this test.
2018-03-15 14:38:06 -04:00
Brad Chase
1507ed66a8 Check consensus hash consistency (RIPD-1456):
These changes use the hash of the consensus transaction set when
characterizing the mismatch between a locally built ledger and fully
validated network ledger. This allows detection of non-determinism in
transaction processing, in which consensus succeeded but a node somehow
generated a different subsequent ledger.
2018-03-15 14:38:06 -04:00
seelabs
3a5a6c3637 Remove unused variables 2018-03-15 14:21:18 -04:00
seelabs
4b2afc8f42 Detect when a unit test child process crashes (RIPD-1592):
When a test suite starts and ends, it informs the parent process. If the parent
has received a start message without a matching end message it reports that a
child may have crashed in that suite.
2018-03-15 14:20:25 -04:00
Mike Ellery
deef322b07 Remove outputDebugString, replace getComputerName 2018-03-15 14:19:29 -04:00
Miguel Portilla
9456b83576 Change permessage-deflate and compress defaults (RIPD-506) 2018-03-15 14:18:32 -04:00
Mike Ellery
864844086e Set version to 1.0.0-b1 2018-03-02 07:37:16 -08:00
Mike Ellery
755849115e Add dev docs generation to Jenkins:
Fixes: RIPD-1521

Switch to pure doxygen HTML for developer docs. Remove docca/boostbook
system. Convert consensus document to markdown. Add existing markdown
files to doxygen input set. Fix some image paths and scale images for
use with MD links. Rename/cleanup some files for consistency.
Add pipeline logic for windows slaves. Add ninja and parallel test run
option. Add make doc target build in build-and-test.sh. Cleanup README
files. Add nounity windows build. Add link to jenkins summary table.
Add rippled_classic build (win). Improve formatting of summary table.
2018-03-02 07:37:15 -08:00
Mike Ellery
605ace7645 Unroll some unity files in the nounity build:
FIXES: RIPD-1597

Add includes, remove unused getStackBacktrace() implementation.
2018-03-02 07:37:10 -08:00
Howard Hinnant
1a245234f1 Cleanup some Json::Value methods:
* Rename isArray to isArrayOrNull
* Rename isObject to isObjectOrNull
* Introduce isArray and isObject
* Change as many uses of isArrayOrNull to isArray as possible
* Change as many uses of isObjectOrNull to isObject as possible
* Reject null JSON arrays for subscribe and unsubscribe
2018-03-01 15:59:40 -08:00
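After the rename the two predicates are distinct: isArrayOrNull() keeps the old permissive behavior, while isArray() is true only for an actual array, which is what lets subscribe and unsubscribe reject a null parameter. A sketch of the intended use (Json::Value is rippled's JSON type; its include is omitted here):

    #include <stdexcept>

    void
    requireArray(Json::Value const& params)
    {
        if (!params.isArray())  // a null value no longer passes this check
            throw std::runtime_error("expected a JSON array");
    }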
Brad Chase
20defb4844 Update validations on UNL change (RIPD-1566):
Change the trust status of existing validations when nodes are
added to or removed from the UNL.
2018-03-01 13:27:28 -08:00
Markus Teufelberger
8b909d5c17 Include 2 missing headers:
<cerrno> for ETIMEDOUT
<sys/time.h> for gettimeofday()
2018-03-01 13:25:32 -08:00
Edward Hennis
4a3a40174e Strip down Travis CI support toward future deprecation:
* Remove all builds except cmake gcc & clang debug.
* Time some dependency and build operations, using a custom format to avoid
  interfering with other timers.
* Use Travis's "trusty" infrastructure.
* Install more dependencies via apt.
* Update boost version to 1.65.1.
* Do not run unit tests under gdb - several security features prevent
  it from running correctly.
* Run test job with two processes.
2018-03-01 13:23:49 -08:00
Miguel Portilla
2fee75bfc1 Use lock when creating peer shard rangeset 2018-02-26 12:24:56 -05:00
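A sketch of the locking pattern named above (illustrative types, not the actual overlay code): hold a mutex while building or replacing the shared range set so concurrently connected peers never observe it half-constructed.

    #include <mutex>
    #include <string>

    std::mutex rangeSetMutex;
    std::string shardRangeSet;  // stand-in for the peer shard rangeset

    void
    updateRangeSet(std::string const& newRanges)
    {
        std::lock_guard<std::mutex> lock(rangeSetMutex);
        shardRangeSet = newRanges;
    }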
2718 changed files with 47102 additions and 748437 deletions

87
.clang-format Normal file

@@ -0,0 +1,87 @@
---
Language: Cpp
AccessModifierOffset: -4
AlignAfterOpenBracket: AlwaysBreak
AlignConsecutiveAssignments: false
AlignConsecutiveDeclarations: false
AlignEscapedNewlinesLeft: true
AlignOperands: false
AlignTrailingComments: true
AllowAllParametersOfDeclarationOnNextLine: false
AllowShortBlocksOnASingleLine: false
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: false
AllowShortIfStatementsOnASingleLine: false
AllowShortLoopsOnASingleLine: false
AlwaysBreakAfterReturnType: All
AlwaysBreakBeforeMultilineStrings: true
AlwaysBreakTemplateDeclarations: true
BinPackArguments: false
BinPackParameters: false
BraceWrapping:
AfterClass: true
AfterControlStatement: true
AfterEnum: false
AfterFunction: true
AfterNamespace: false
AfterObjCDeclaration: true
AfterStruct: true
AfterUnion: true
BeforeCatch: true
BeforeElse: true
IndentBraces: false
BreakBeforeBinaryOperators: false
BreakBeforeBraces: Custom
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: true
ColumnLimit: 80
CommentPragmas: '^ IWYU pragma:'
ConstructorInitializerAllOnOneLineOrOnePerLine: true
ConstructorInitializerIndentWidth: 4
ContinuationIndentWidth: 4
Cpp11BracedListStyle: true
DerivePointerAlignment: false
DisableFormat: false
ExperimentalAutoDetectBinPacking: false
ForEachMacros: [ foreach, Q_FOREACH, BOOST_FOREACH ]
IncludeCategories:
- Regex: '^<(BeastConfig)'
Priority: 0
- Regex: '^<(ripple)/'
Priority: 2
- Regex: '^<(boost)/'
Priority: 3
- Regex: '.*'
Priority: 4
IncludeIsMainRegex: '$'
IndentCaseLabels: true
IndentFunctionDeclarationAfterType: false
IndentWidth: 4
IndentWrappedFunctionNames: false
KeepEmptyLinesAtTheStartOfBlocks: false
MaxEmptyLinesToKeep: 1
NamespaceIndentation: None
ObjCSpaceAfterProperty: false
ObjCSpaceBeforeProtocolList: false
PenaltyBreakBeforeFirstCallParameter: 1
PenaltyBreakComment: 300
PenaltyBreakFirstLessLess: 120
PenaltyBreakString: 1000
PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 200
PointerAlignment: Left
ReflowComments: true
SortIncludes: true
SpaceAfterCStyleCast: false
SpaceBeforeAssignmentOperators: true
SpaceBeforeParens: ControlStatements
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 2
SpacesInAngles: false
SpacesInContainerLiterals: true
SpacesInCStyleCastParentheses: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
Standard: Cpp11
TabWidth: 8
UseTab: Never

6
.codecov.yml Normal file

@@ -0,0 +1,6 @@
codecov:
ci:
- ci.ops.ripple.com # add custom jenkins server
- !appveyor
- !travis

3
.gitignore vendored

@@ -22,6 +22,7 @@ bin/project-cache.jam
# Ignore object files.
*.o
build
.nih_c
tags
TAGS
bin/rippled
@@ -50,6 +51,7 @@ validators.txt
# Doxygen generated documentation output
HtmlDocumentation
docs/html_doc
# Xcode user-specific project settings
# Xcode
@@ -90,3 +92,4 @@ Builds/VisualStudio2015/*.sdf
# MSVC
*.pdb
.vs/
CMakeSettings.json

12
.gitmodules vendored

@@ -1,12 +0,0 @@
[submodule "docs/docca"]
path = docs/docca
url = https://github.com/vinniefalco/docca.git
[submodule "src/nudb/extras/beast"]
path = src/nudb/extras/beast
url = https://github.com/vinniefalco/Beast.git
[submodule "src/nudb/extras/rocksdb"]
path = src/nudb/extras/rocksdb
url = https://github.com/facebook/rocksdb.git
[submodule "src/nudb/doc/docca"]
path = src/nudb/doc/docca
url = https://github.com/vinniefalco/docca.git


@@ -3,7 +3,6 @@ language: cpp
env:
global:
- LLVM_VERSION=3.8.0
# Maintenance note: to move to a new version
# of boost, update both BOOST_ROOT and BOOST_URL.
# Note that for simplicity, BOOST_ROOT's final
@@ -11,15 +10,14 @@ env:
# to boost's .tar.gz.
- LCOV_ROOT=$HOME/lcov
- GDB_ROOT=$HOME/gdb
- BOOST_ROOT=$HOME/boost_1_60_0
- BOOST_URL='http://sourceforge.net/projects/boost/files/boost/1.60.0/boost_1_60_0.tar.gz'
# Travis is timing out on Trusty. So, for now, use Precise. July 2017
dist: precise
- BOOST_ROOT=$HOME/boost_1_67_0
- BOOST_URL='http://sourceforge.net/projects/boost/files/boost/1.67.0/boost_1_67_0.tar.gz'
addons:
apt:
sources: ['ubuntu-toolchain-r-test']
sources:
- ubuntu-toolchain-r-test
- llvm-toolchain-trusty-5.0
packages:
- gcc-5
- g++-5
@@ -29,43 +27,23 @@ addons:
- libssl-dev
- libstdc++6
- binutils-gold
# Provides a backtrace if the unittests crash
- gdb
# needed to build gdb
- texinfo
- cmake
- lcov
- llvm-5.0
- clang-5.0
matrix:
include:
# Default BUILD is "scons".
# - compiler: gcc
# env: GCC_VER=5 TARGET=debug.nounity
- compiler: gcc
env: GCC_VER=5 BUILD=cmake TARGET=coverage PATH=$PWD/cmake/bin:$PATH
env: GCC_VER=5 BUILD_TYPE=Debug
- compiler: clang
env: GCC_VER=5 BUILD=cmake TARGET=debug CLANG_VER=3.8 PATH=$PWD/llvm-$LLVM_VERSION/bin:$PWD/cmake/bin:$PATH
cache:
directories:
- $GDB_ROOT
- compiler: clang
env: GCC_VER=5 TARGET=debug.nounity CLANG_VER=3.8 PATH=$PWD/llvm-$LLVM_VERSION/bin:$PATH
# The clang cmake builds do not link.
# - compiler: clang
# env: GCC_VER=5 BUILD=cmake TARGET=debug CLANG_VER=3.8 PATH=$PWD/llvm-$LLVM_VERSION/bin:$PWD/cmake/bin:$PATH
# - compiler: clang
# env: GCC_VER=5 BUILD=cmake TARGET=debug.nounity CLANG_VER=3.8 PATH=$PWD/llvm-$LLVM_VERSION/bin:$PWD/cmake/bin:$PATH
env: GCC_VER=5 BUILD_TYPE=Debug
cache:
directories:
- $BOOST_ROOT
- llvm-$LLVM_VERSION
- cmake
- $GDB_ROOT
before_install:
- bin/ci/ubuntu/install-dependencies.sh
@@ -80,4 +58,3 @@ notifications:
channels:
- "chat.freenode.net#ripple-dev"
dist: precise


@@ -1,38 +0,0 @@
# Maintainer: Roberto Catini <roberto.catini@gmail.com>
pkgname=rippled
pkgrel=1
pkgver=0
pkgdesc="Ripple peer-to-peer network daemon"
arch=('i686' 'x86_64')
url="https://github.com/ripple/rippled"
license=('custom:ISC')
depends=('protobuf' 'openssl' 'boost-libs')
makedepends=('git' 'scons' 'boost')
backup=("etc/$pkgname/rippled.cfg")
source=("git://github.com/ripple/rippled.git#branch=master")
sha512sums=('SKIP')
pkgver() {
cd "$srcdir/$pkgname"
git describe --long --tags | sed -r 's/([^-]*-g)/r\1/;s/-/./g'
}
build() {
cd "$srcdir/$pkgname"
scons
}
check() {
cd "$srcdir/$pkgname"
build/rippled --unittest
}
package() {
cd "$srcdir/$pkgname"
install -D -m644 LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
install -D build/rippled "$pkgdir/usr/bin/rippled"
install -D -m644 doc/rippled-example.cfg "$pkgdir/etc/$pkgname/rippled.cfg"
mkdir -p "$pkgdir/var/lib/$pkgname/db"
mkdir -p "$pkgdir/var/log/$pkgname"
}


@@ -1,31 +1,6 @@
# This is a set of common functions and settings for rippled
# and derived products.
############################################################
cmake_minimum_required(VERSION 3.1.0)
if("${CMAKE_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
message(WARNING "Builds are strongly discouraged in "
"${CMAKE_SOURCE_DIR}.")
endif()
## "target" parsing..DEPRECATED and will be removed in future
macro(parse_target)
if (NOT target OR target STREQUAL "default")
if (NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE Debug)
endif()
string(TOLOWER ${CMAKE_BUILD_TYPE} target)
if (APPLE)
set(target clang.${target})
elseif(WIN32)
set(target msvc)
else()
set(target gcc.${target})
endif()
endif()
if (target)
# Parse the target
set(remaining ${target})
@@ -75,13 +50,11 @@ macro(parse_target)
endif()
if (${cur_component} STREQUAL unity)
set(unity true)
set(nonunity false)
set(unity ON CACHE BOOL "" FORCE)
endif()
if (${cur_component} STREQUAL nounity)
set(unity false)
set(nonunity true)
set(unity OFF CACHE BOOL "" FORCE)
endif()
if (${cur_component} STREQUAL debug)
@@ -93,21 +66,13 @@ macro(parse_target)
endif()
if (${cur_component} STREQUAL coverage)
set(coverage true)
set(coverage ON CACHE BOOL "" FORCE)
set(debug true)
endif()
if (${cur_component} STREQUAL profile)
set(profile true)
set(profile ON CACHE BOOL "" FORCE)
endif()
if (${cur_component} STREQUAL ci)
# Workarounds that make various CI builds work, but that
# we don't want in the general case.
set(ci true)
set(openssl_min 1.0.1)
endif()
endwhile()
endif()
@@ -116,109 +81,15 @@ macro(parse_target)
message(FATAL_ERROR "Can not find appropriate compiler for target ${target}")
endif()
# If defined, promote the compiler path values to the CACHE, then
# unset the locals to prevent shadowing. Some scenarios do not
# need or want to find a compiler, such as -GNinja under Windows.
# Setting these values in those case may prevent CMake from finding
# a valid compiler.
if (CMAKE_C_COMPILER)
set(CMAKE_C_COMPILER ${CMAKE_C_COMPILER} CACHE FILEPATH
"Path to a program" FORCE)
unset(CMAKE_C_COMPILER)
endif (CMAKE_C_COMPILER)
if (CMAKE_CXX_COMPILER)
set(CMAKE_CXX_COMPILER ${CMAKE_CXX_COMPILER} CACHE FILEPATH
"Path to a program" FORCE)
unset(CMAKE_CXX_COMPILER)
endif (CMAKE_CXX_COMPILER)
if (release)
set(CMAKE_BUILD_TYPE Release)
else()
set(CMAKE_BUILD_TYPE Debug)
endif()
# ensure that the unity flags are set and exclusive
if (NOT DEFINED unity OR unity)
# Default to unity builds
set(unity true)
set(nonunity false)
else()
set(unity false)
set(nonunity true)
endif()
if (NOT unity)
set(CMAKE_BUILD_TYPE ${CMAKE_BUILD_TYPE}Classic)
endif()
# Promote this value to the CACHE, then unset the local
# to prevent shadowing.
set(CMAKE_BUILD_TYPE ${CMAKE_BUILD_TYPE} CACHE INTERNAL
"Choose the type of build, options are in CMAKE_CONFIGURATION_TYPES"
FORCE)
unset(CMAKE_BUILD_TYPE)
endmacro()
############################################################
macro(setup_build_cache)
set(san "" CACHE STRING "On gcc & clang, add sanitizer
instrumentation")
set_property(CACHE san PROPERTY STRINGS ";address;thread")
set(assert false CACHE BOOL "Enables asserts, even in release builds")
set(static false CACHE BOOL
"On linux, link protobuf, openssl, libc++, and boost statically")
set(jemalloc false CACHE BOOL "Enables jemalloc for heap profiling")
set(perf false CACHE BOOL "Enables flags that assist with perf recording")
if (static AND (WIN32 OR APPLE))
message(FATAL_ERROR "Static linking is only supported on linux.")
endif()
if (perf AND (WIN32 OR APPLE))
message(FATAL_ERROR "perf flags are only supported on linux.")
endif()
if (${CMAKE_GENERATOR} STREQUAL "Unix Makefiles" AND NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE Debug)
endif()
# Can't exclude files from configurations, so can't support both
# unity and nonunity configurations at the same time
if (NOT DEFINED unity OR unity)
set(CMAKE_CONFIGURATION_TYPES
Debug
Release)
else()
set(CMAKE_CONFIGURATION_TYPES
DebugClassic
ReleaseClassic)
endif()
# Promote this value to the CACHE, then unset the local
# to prevent shadowing.
set(CMAKE_CONFIGURATION_TYPES
${CMAKE_CONFIGURATION_TYPES} CACHE STRING "" FORCE)
unset(CMAKE_CONFIGURATION_TYPES)
endmacro()
############################################################
function(prepend var prefix)
set(listVar "")
foreach(f ${ARGN})
list(APPEND listVar "${prefix}${f}")
endforeach(f)
set(${var} "${listVar}" PARENT_SCOPE)
endfunction()
macro(append_flags name)
foreach (arg ${ARGN})
set(${name} "${${name}} ${arg}")
endforeach()
endmacro()
macro(group_sources_in source_dir curdir)
file(GLOB children RELATIVE ${source_dir}/${curdir}
${source_dir}/${curdir}/*)
@@ -237,558 +108,141 @@ macro(group_sources curdir)
group_sources_in(${PROJECT_SOURCE_DIR} ${curdir})
endmacro()
macro(add_with_props src_var files)
list(APPEND ${src_var} ${files})
foreach (arg ${ARGN})
set(props "${props} ${arg}")
macro (exclude_if_included target_)
if (NOT ${CMAKE_CURRENT_SOURCE_DIR} STREQUAL ${CMAKE_SOURCE_DIR})
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_ALL ON)
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_DEFAULT_BUILD ON)
endif ()
endmacro ()
function (print_ep_logs _target)
ExternalProject_Get_Property (${_target} STAMP_DIR)
add_custom_command(TARGET ${_target} POST_BUILD
COMMENT "${_target} BUILD OUTPUT"
COMMAND ${CMAKE_COMMAND}
-DIN_FILE=${STAMP_DIR}/${_target}-build-out.log
-P ${CMAKE_SOURCE_DIR}/Builds/CMake/echo_file.cmake
COMMAND ${CMAKE_COMMAND}
-DIN_FILE=${STAMP_DIR}/${_target}-build-err.log
-P ${CMAKE_SOURCE_DIR}/Builds/CMake/echo_file.cmake)
endfunction ()
#[=========================================================[
This is a function override for one function in the
standard ExternalProject module. We want to change
the generated build script slightly to include printing
the build logs in the case of failure. Those modifications
have been made here. This function override could break
in the future if the ExternalProject module changes internal
function names or changes the way it generates the build
scripts.
See:
https://gitlab.kitware.com/cmake/cmake/blob/df1ddeec128d68cc636f2dde6c2acd87af5658b6/Modules/ExternalProject.cmake#L1855-1952
#]=========================================================]
function(_ep_write_log_script name step cmd_var)
ExternalProject_Get_Property(${name} stamp_dir)
set(command "${${cmd_var}}")
set(make "")
set(code_cygpath_make "")
if(command MATCHES "^\\$\\(MAKE\\)")
# GNU make recognizes the string "$(MAKE)" as recursive make, so
# ensure that it appears directly in the makefile.
string(REGEX REPLACE "^\\$\\(MAKE\\)" "\${make}" command "${command}")
set(make "-Dmake=$(MAKE)")
if(WIN32 AND NOT CYGWIN)
set(code_cygpath_make "
if(\${make} MATCHES \"^/\")
execute_process(
COMMAND cygpath -w \${make}
OUTPUT_VARIABLE cygpath_make
ERROR_VARIABLE cygpath_make
RESULT_VARIABLE cygpath_error
OUTPUT_STRIP_TRAILING_WHITESPACE
)
if(NOT cygpath_error)
set(make \${cygpath_make})
endif()
endif()
")
endif()
endif()
set(config "")
if("${CMAKE_CFG_INTDIR}" MATCHES "^\\$")
string(REPLACE "${CMAKE_CFG_INTDIR}" "\${config}" command "${command}")
set(config "-Dconfig=${CMAKE_CFG_INTDIR}")
endif()
# Wrap multiple 'COMMAND' lines up into a second-level wrapper
# script so all output can be sent to one log file.
if(command MATCHES "(^|;)COMMAND;")
set(code_execute_process "
${code_cygpath_make}
execute_process(COMMAND \${command} RESULT_VARIABLE result)
if(result)
set(msg \"Command failed (\${result}):\\n\")
foreach(arg IN LISTS command)
set(msg \"\${msg} '\${arg}'\")
endforeach()
set_source_files_properties(
${files}
PROPERTIES COMPILE_FLAGS
${props})
endmacro()
############################################################
macro(determine_build_type)
if ("${CMAKE_CXX_COMPILER_ID}" MATCHES ".*Clang") # both Clang and AppleClang
set(is_clang true)
elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
set(is_gcc true)
elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "MSVC")
set(is_msvc true)
endif()
if (${CMAKE_GENERATOR} STREQUAL "Xcode")
set(is_xcode true)
else()
set(is_xcode false)
endif()
if (NOT is_gcc AND NOT is_clang AND NOT is_msvc)
message("Current compiler is ${CMAKE_CXX_COMPILER_ID}")
message(FATAL_ERROR "Missing compiler. Must be GNU, Clang, or MSVC")
endif()
endmacro()
############################################################
macro(check_gcc4_abi)
# Check if should use gcc4's ABI
set(gcc4_abi false)
if ($ENV{RIPPLED_OLD_GCC_ABI})
set(gcc4_abi true)
endif()
if (is_gcc AND NOT gcc4_abi)
if (CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 5)
execute_process(COMMAND lsb_release -si OUTPUT_VARIABLE lsb)
string(STRIP "${lsb}" lsb)
if ("${lsb}" STREQUAL "Ubuntu")
execute_process(COMMAND lsb_release -sr OUTPUT_VARIABLE lsb)
string(STRIP ${lsb} lsb)
if (${lsb} VERSION_LESS 15.1)
set(gcc4_abi true)
message(FATAL_ERROR \"\${msg}\")
endif()
")
set(code "")
set(cmd "")
set(sep "")
foreach(arg IN LISTS command)
if("x${arg}" STREQUAL "xCOMMAND")
if(NOT "x${cmd}" STREQUAL "x")
string(APPEND code "set(command \"${cmd}\")${code_execute_process}")
endif()
endif()
endif()
endif()
if (gcc4_abi)
add_definitions(-D_GLIBCXX_USE_CXX11_ABI=0)
endif()
endmacro()
############################################################
macro(special_build_flags)
if (coverage)
add_compile_options(-fprofile-arcs -ftest-coverage)
append_flags(CMAKE_EXE_LINKER_FLAGS -fprofile-arcs -ftest-coverage)
endif()
if (profile)
add_compile_options(-p -pg)
append_flags(CMAKE_EXE_LINKER_FLAGS -p -pg)
endif()
endmacro()
############################################################
# Params: Boost components to search for.
macro(use_boost)
if ((NOT DEFINED BOOST_ROOT) AND (DEFINED ENV{BOOST_ROOT}))
set(BOOST_ROOT $ENV{BOOST_ROOT})
endif()
file(TO_CMAKE_PATH "${BOOST_ROOT}" BOOST_ROOT)
if(WIN32 OR CYGWIN)
# Workaround for MSVC having two boost versions - x86 and x64 on same PC in stage folders
if(DEFINED BOOST_ROOT)
if(CMAKE_SIZEOF_VOID_P EQUAL 8 AND IS_DIRECTORY ${BOOST_ROOT}/stage64/lib)
set(Boost_LIBRARY_DIR ${BOOST_ROOT}/stage64/lib)
else()
set(Boost_LIBRARY_DIR ${BOOST_ROOT}/stage/lib)
endif()
endif()
endif()
if (is_clang AND DEFINED ENV{CLANG_BOOST_ROOT})
set(BOOST_ROOT $ENV{CLANG_BOOST_ROOT})
endif()
# boost dynamic libraries don't trivially support @rpath
# linking right now (cmake's default), so just force
# static linking for macos, or if requested on linux
# by flag
if (static OR APPLE)
set(Boost_USE_STATIC_LIBS on)
endif()
set(Boost_USE_MULTITHREADED on)
set(Boost_USE_STATIC_RUNTIME off)
if(MSVC)
find_package(Boost REQUIRED)
else()
find_package(Boost REQUIRED ${ARGN})
endif()
if (Boost_FOUND OR
((CYGWIN OR WIN32) AND Boost_INCLUDE_DIRS AND Boost_LIBRARY_DIRS))
if(NOT Boost_FOUND)
message(WARNING "Boost directory found, but not all components. May not be able to build.")
endif()
if(MSVC14)
# VS2017 with boost <= 1.66.0 requires a flag to suppress warnings
if(NOT Boost_VERSION VERSION_GREATER 106600)
add_definitions(-DBOOST_CONFIG_SUPPRESS_OUTDATED_MESSAGE)
endif()
endif()
if (is_xcode)
include_directories(BEFORE ${Boost_INCLUDE_DIRS})
append_flags(CMAKE_CXX_FLAGS --system-header-prefix="boost/")
set(cmd "")
set(sep "")
else()
include_directories(SYSTEM ${Boost_INCLUDE_DIRS})
string(APPEND cmd "${sep}${arg}")
set(sep ";")
endif()
link_directories(${Boost_LIBRARY_DIRS})
else()
message(FATAL_ERROR "Boost not found")
endif()
endmacro()
macro(use_pthread)
if (NOT WIN32)
set(THREADS_PREFER_PTHREAD_FLAG ON)
find_package(Threads)
add_compile_options(${CMAKE_THREAD_LIBS_INIT})
endif()
endmacro()
macro(use_openssl openssl_min)
if (APPLE AND NOT DEFINED ENV{OPENSSL_ROOT_DIR})
find_program(HOMEBREW brew)
if (NOT HOMEBREW STREQUAL "HOMEBREW-NOTFOUND")
execute_process(COMMAND brew --prefix openssl
OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
endif()
endforeach()
string(APPEND code "set(command \"${cmd}\")${code_execute_process}")
file(GENERATE OUTPUT "${stamp_dir}/${name}-${step}-$<CONFIG>-impl.cmake" CONTENT "${code}")
set(command ${CMAKE_COMMAND} "-Dmake=\${make}" "-Dconfig=\${config}" -P ${stamp_dir}/${name}-${step}-$<CONFIG>-impl.cmake)
endif()
if (WIN32)
if ((NOT DEFINED OPENSSL_ROOT) AND (DEFINED ENV{OPENSSL_ROOT}))
set(OPENSSL_ROOT $ENV{OPENSSL_ROOT})
endif()
file(TO_CMAKE_PATH "${OPENSSL_ROOT}" OPENSSL_ROOT)
if (DEFINED OPENSSL_ROOT)
include_directories(${OPENSSL_ROOT}/include)
link_directories(${OPENSSL_ROOT}/lib)
endif()
else()
if (static)
set(tmp CMAKE_FIND_LIBRARY_SUFFIXES)
set(CMAKE_FIND_LIBRARY_SUFFIXES .a)
endif()
# Wrap the command in a script to log output to files.
set(script ${stamp_dir}/${name}-${step}-$<CONFIG>.cmake)
set(logbase ${stamp_dir}/${name}-${step})
set(code "
${code_cygpath_make}
function (_echo_file _fil)
file (READ \${_fil} _cont)
execute_process (COMMAND \${CMAKE_COMMAND} -E echo \"\${_cont}\")
endfunction ()
set(command \"${command}\")
execute_process(
COMMAND \${command}
RESULT_VARIABLE result
OUTPUT_FILE \"${logbase}-out.log\"
ERROR_FILE \"${logbase}-err.log\"
)
if(result)
set(msg \"Command failed: \${result}\\n\")
foreach(arg IN LISTS command)
set(msg \"\${msg} '\${arg}'\")
endforeach()
execute_process (COMMAND \${CMAKE_COMMAND} -E echo \"Build output for ${logbase} : \")
_echo_file (\"${logbase}-out.log\")
_echo_file (\"${logbase}-err.log\")
set(msg \"\${msg}\\nSee above\\n\")
message(FATAL_ERROR \"\${msg}\")
else()
set(msg \"${name} ${step} command succeeded. See also ${logbase}-*.log\")
message(STATUS \"\${msg}\")
endif()
")
file(GENERATE OUTPUT "${script}" CONTENT "${code}")
set(command ${CMAKE_COMMAND} ${make} ${config} -P ${script})
set(${cmd_var} "${command}" PARENT_SCOPE)
endfunction()
find_package(OpenSSL)
# depending on how openssl is built, it might depend
# on zlib. In fact, the openssl find package should
# figure this out for us, but it does not currently...
# so let's add zlib ourselves to the lib list
find_package(ZLIB)
if (static)
set(CMAKE_FIND_LIBRARY_SUFFIXES tmp)
endif()
if (OPENSSL_FOUND)
include_directories(${OPENSSL_INCLUDE_DIR})
list(APPEND OPENSSL_LIBRARIES ${ZLIB_LIBRARIES})
else()
message(FATAL_ERROR "OpenSSL not found")
endif()
if (UNIX AND NOT APPLE AND ${OPENSSL_VERSION} VERSION_LESS ${openssl_min})
message(FATAL_ERROR
"Your openssl is Version: ${OPENSSL_VERSION}, ${openssl_min} or better is required.")
endif()
endif()
endmacro()
macro(use_protobuf)
if (WIN32)
if (DEFINED ENV{PROTOBUF_ROOT})
include_directories($ENV{PROTOBUF_ROOT}/src)
link_directories($ENV{PROTOBUF_ROOT}/src/.libs)
endif()
# Modified from FindProtobuf.cmake
FUNCTION(PROTOBUF_GENERATE_CPP SRCS HDRS PROTOFILES)
# argument parsing
IF(NOT PROTOFILES)
MESSAGE(SEND_ERROR "Error: PROTOBUF_GENERATE_CPP() called without any proto files")
RETURN()
ENDIF()
SET(OUTPATH ${CMAKE_CURRENT_BINARY_DIR})
SET(PROTOROOT ${CMAKE_CURRENT_SOURCE_DIR})
# the real logic
SET(${SRCS})
SET(${HDRS})
FOREACH(PROTOFILE ${PROTOFILES})
# ensure that the file ends with .proto
STRING(REGEX MATCH "\\.proto$$" PROTOEND ${PROTOFILE})
IF(NOT PROTOEND)
MESSAGE(SEND_ERROR "Proto file '${PROTOFILE}' does not end with .proto")
ENDIF()
GET_FILENAME_COMPONENT(PROTO_PATH ${PROTOFILE} PATH)
GET_FILENAME_COMPONENT(ABS_FILE ${PROTOFILE} ABSOLUTE)
GET_FILENAME_COMPONENT(FILE_WE ${PROTOFILE} NAME_WE)
STRING(REGEX MATCH "^${PROTOROOT}" IN_ROOT_PATH ${PROTOFILE})
STRING(REGEX MATCH "^${PROTOROOT}" IN_ROOT_ABS_FILE ${ABS_FILE})
IF(IN_ROOT_PATH)
SET(MATCH_PATH ${PROTOFILE})
ELSEIF(IN_ROOT_ABS_FILE)
SET(MATCH_PATH ${ABS_FILE})
ELSE()
MESSAGE(SEND_ERROR "Proto file '${PROTOFILE}' is not in protoroot '${PROTOROOT}'")
ENDIF()
# build the result file name
STRING(REGEX REPLACE "^${PROTOROOT}(/?)" "" ROOT_CLEANED_FILE ${MATCH_PATH})
STRING(REGEX REPLACE "\\.proto$$" "" EXT_CLEANED_FILE ${ROOT_CLEANED_FILE})
SET(CPP_FILE "${OUTPATH}/${EXT_CLEANED_FILE}.pb.cc")
SET(H_FILE "${OUTPATH}/${EXT_CLEANED_FILE}.pb.h")
LIST(APPEND ${SRCS} "${CPP_FILE}")
LIST(APPEND ${HDRS} "${H_FILE}")
ADD_CUSTOM_COMMAND(
OUTPUT "${CPP_FILE}" "${H_FILE}"
COMMAND ${CMAKE_COMMAND} -E make_directory ${OUTPATH}
COMMAND ${PROTOBUF_PROTOC_EXECUTABLE}
ARGS "--cpp_out=${OUTPATH}" --proto_path "${PROTOROOT}" "${MATCH_PATH}"
DEPENDS ${ABS_FILE}
COMMENT "Running C++ protocol buffer compiler on ${MATCH_PATH} with root ${PROTOROOT}, generating: ${CPP_FILE}"
VERBATIM)
ENDFOREACH()
SET_SOURCE_FILES_PROPERTIES(${${SRCS}} ${${HDRS}} PROPERTIES GENERATED TRUE)
SET(${SRCS} ${${SRCS}} PARENT_SCOPE)
SET(${HDRS} ${${HDRS}} PARENT_SCOPE)
ENDFUNCTION()
set(PROTOBUF_PROTOC_EXECUTABLE Protoc) # must be on path
else()
if (static)
set(tmp CMAKE_FIND_LIBRARY_SUFFIXES)
set(CMAKE_FIND_LIBRARY_SUFFIXES .a)
endif()
find_package(Protobuf REQUIRED)
if (static)
set(CMAKE_FIND_LIBRARY_SUFFIXES tmp)
endif()
if (is_clang AND DEFINED ENV{CLANG_PROTOBUF_ROOT})
link_directories($ENV{CLANG_PROTOBUF_ROOT}/src/.libs)
include_directories($ENV{CLANG_PROTOBUF_ROOT}/src)
else()
include_directories(${PROTOBUF_INCLUDE_DIRS})
endif()
endif()
include_directories(${CMAKE_CURRENT_BINARY_DIR})
file(GLOB ripple_proto src/ripple/proto/*.proto)
PROTOBUF_GENERATE_CPP(PROTO_SRCS PROTO_HDRS ${ripple_proto})
if (WIN32)
include_directories(src/protobuf/src
src/protobuf/vsprojects
${CMAKE_CURRENT_BINARY_DIR}/src/ripple/proto)
endif()
endmacro()
############################################################
macro(setup_build_boilerplate)
if (NOT WIN32 AND san)
add_compile_options(-fsanitize=${san} -fno-omit-frame-pointer)
append_flags(CMAKE_EXE_LINKER_FLAGS
-fsanitize=${san})
string(TOLOWER ${san} ci_san)
if (${ci_san} STREQUAL address)
if (is_gcc)
set(SANITIZER_LIBRARIES asan)
endif()
add_definitions(-DSANITIZER=ASAN)
endif()
if (${ci_san} STREQUAL thread)
if (is_gcc)
set(SANITIZER_LIBRARIES tsan)
endif()
add_definitions(-DSANITIZER=TSAN)
endif()
endif()
if (perf)
add_compile_options(-fno-omit-frame-pointer)
endif()
############################################################
add_definitions(
-DOPENSSL_NO_SSL2
-DDEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
-DHAVE_USLEEP=1
-DSOCI_CXX_C11=1
-D_SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS
-DBOOST_NO_AUTO_PTR
)
if (is_gcc)
add_compile_options(-Wno-unused-but-set-variable -Wno-deprecated)
# use gold linker if available
execute_process(
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=gold -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
# NOTE: the gold linker inserts -rpath as DT_RUNPATH by default
# instead of DT_RPATH, so you might have slightly unexpected
# runtime ld behavior if you were expecting DT_RPATH.
# Specify --disable-new-dtags to gold if you do not want
# the default DT_RUNPATH behavior. This rpath treatment as well
# as static/dynamic selection means that gold does not currently
# have ideal default behavior when we are using jemalloc. Thus
# for simplicity we don't use it when jemalloc is requested.
# An alternative to disabling would be to figure out all the settings
# required to make gold play nicely with jemalloc.
if (("${LD_VERSION}" MATCHES "GNU gold") AND (NOT jemalloc))
append_flags(CMAKE_EXE_LINKER_FLAGS -fuse-ld=gold -Wl,--no-as-needed)
endif ()
unset(LD_VERSION)
endif()
# Generator expressions are not supported in add_definitions, use set_property instead
set_property(
DIRECTORY
APPEND
PROPERTY COMPILE_DEFINITIONS
$<$<OR:$<CONFIG:Debug>,$<CONFIG:DebugClassic>>:DEBUG _DEBUG>)
if (NOT assert)
set_property(
DIRECTORY
APPEND
PROPERTY COMPILE_DEFINITIONS
$<$<OR:$<BOOL:${profile}>,$<CONFIG:Release>,$<CONFIG:ReleaseClassic>>:NDEBUG>)
else()
# CMAKE_CXX_FLAGS_RELEASE is created by CMake for most / all generators
# with defaults including /DNDEBUG or -DNDEBUG, and that value is stored
# in the cache. Override that locally so that the cache value will be
# available if "assert" is ever changed.
STRING(REGEX REPLACE "[-/]DNDEBUG" "" CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE}")
STRING(REGEX REPLACE "[-/]DNDEBUG" "" CMAKE_CXX_FLAGS_RELEASECLASSIC "${CMAKE_CXX_FLAGS_RELEASECLASSIC}")
STRING(REGEX REPLACE "[-/]DNDEBUG" "" CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE}")
STRING(REGEX REPLACE "[-/]DNDEBUG" "" CMAKE_C_FLAGS_RELEASECLASSIC "${CMAKE_C_FLAGS_RELEASECLASSIC}")
endif()
if (jemalloc)
find_package(jemalloc REQUIRED)
add_definitions(-DPROFILE_JEMALLOC)
include_directories(SYSTEM ${JEMALLOC_INCLUDE_DIRS})
link_libraries(${JEMALLOC_LIBRARIES})
get_filename_component(JEMALLOC_LIB_PATH ${JEMALLOC_LIBRARIES} DIRECTORY)
set(CMAKE_BUILD_RPATH ${CMAKE_BUILD_RPATH} ${JEMALLOC_LIB_PATH})
endif()
if (NOT WIN32)
add_definitions(-D_FILE_OFFSET_BITS=64)
append_flags(CMAKE_CXX_FLAGS -frtti -std=c++14 -Wno-invalid-offsetof -Wdeprecated
-DBOOST_COROUTINE_NO_DEPRECATION_WARNING -DBOOST_COROUTINES_NO_DEPRECATION_WARNING)
add_compile_options(-Wall -Wno-sign-compare -Wno-char-subscripts -Wno-format
-Wno-unused-local-typedefs -g)
# There seems to be an issue using generator expressions with multiple values,
# split the expression
add_compile_options($<$<OR:$<CONFIG:Release>,$<CONFIG:ReleaseClassic>>:-O3>)
add_compile_options($<$<OR:$<CONFIG:Release>,$<CONFIG:ReleaseClassic>>:-fno-strict-aliasing>)
append_flags(CMAKE_EXE_LINKER_FLAGS -rdynamic -g)
if (is_clang)
add_compile_options(
-Wno-redeclared-class-member -Wno-mismatched-tags -Wno-deprecated-register)
add_definitions(-DBOOST_ASIO_HAS_STD_ARRAY)
# use ldd linker if available
execute_process(
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=lld -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "LLD")
append_flags(CMAKE_EXE_LINKER_FLAGS -fuse-ld=lld)
endif ()
unset(LD_VERSION)
endif()
if (APPLE)
add_definitions(-DBEAST_COMPILE_OBJECTIVE_CPP=1)
add_compile_options(
-Wno-deprecated-declarations -Wno-unused-function)
endif()
if (is_gcc)
add_compile_options(-Wno-unused-but-set-variable -Wno-unused-local-typedefs)
add_compile_options($<$<OR:$<CONFIG:Debug>,$<CONFIG:DebugClassic>>:-O0>)
endif (is_gcc)
else(NOT WIN32)
add_compile_options(
/bigobj # Increase object file max size
/EHa # ExceptionHandling all
/fp:precise # Floating point behavior
/Gd # __cdecl calling convention
/Gm- # Minimal rebuild: disabled
/GR # Enable RTTI
/Gy- # Function level linking: disabled
/FS
/MP # Multiprocessor compilation
/openmp- # pragma omp: disabled
/Zc:forScope # Language conformance: for scope
/Zi # Generate complete debug info
/errorReport:none # No error reporting to Internet
/nologo # Suppress login banner
/W3 # Warning level 3
/WX- # Disable warnings as errors
/wd4018 # Disable signed/unsigned comparison warnings
/wd4244 # Disable float to int possible loss of data warnings
/wd4267 # Disable size_t to T possible loss of data warnings
/wd4800 # Disable C4800(int to bool performance)
/wd4503 # Decorated name length exceeded, name was truncated
)
add_definitions(
-D_WIN32_WINNT=0x6000
-D_SCL_SECURE_NO_WARNINGS
-D_CRT_SECURE_NO_WARNINGS
-DWIN32_CONSOLE
-DNOMINMAX
-DBOOST_COROUTINE_NO_DEPRECATION_WARNING
-DBOOST_COROUTINES_NO_DEPRECATION_WARNING)
append_flags(CMAKE_EXE_LINKER_FLAGS
/DEBUG
/DYNAMICBASE
/ERRORREPORT:NONE
/MACHINE:X64
/MANIFEST
/nologo
/NXCOMPAT
/SUBSYSTEM:CONSOLE
/TLBID:1)
# There seems to be an issue using generator expressions with multiple values,
# split the expression
# /GS Buffers security check: enable
add_compile_options($<$<OR:$<CONFIG:Debug>,$<CONFIG:DebugClassic>>:/GS>)
# /MTd Language: Multi-threaded Debug CRT
add_compile_options($<$<OR:$<CONFIG:Debug>,$<CONFIG:DebugClassic>>:/MTd>)
# /Od Optimization: Disabled
add_compile_options($<$<OR:$<CONFIG:Debug>,$<CONFIG:DebugClassic>>:/Od>)
# /RTC1 Run-time error checks:
add_compile_options($<$<OR:$<CONFIG:Debug>,$<CONFIG:DebugClassic>>:/RTC1>)
# Generator expressions are not supported in add_definitions, use set_property instead
set_property(
DIRECTORY
APPEND
PROPERTY COMPILE_DEFINITIONS
$<$<OR:$<CONFIG:Debug>,$<CONFIG:DebugClassic>>:_CRTDBG_MAP_ALLOC>)
# /MT Language: Multi-threaded CRT
add_compile_options($<$<OR:$<CONFIG:Release>,$<CONFIG:ReleaseClassic>>:/MT>)
add_compile_options($<$<OR:$<CONFIG:Release>,$<CONFIG:ReleaseClassic>>:/Ox>)
# /Ox Optimization: Full
endif (NOT WIN32)
if (static)
append_flags(CMAKE_EXE_LINKER_FLAGS -static-libstdc++)
# set_target_properties(ripple-libpp PROPERTIES LINK_SEARCH_START_STATIC 1)
# set_target_properties(ripple-libpp PROPERTIES LINK_SEARCH_END_STATIC 1)
endif()
endmacro()
############################################################
macro(create_build_folder cur_project)
if (NOT WIN32)
ADD_CUSTOM_TARGET(build_folder ALL
COMMAND ${CMAKE_COMMAND} -E make_directory ${CMAKE_CURRENT_BINARY_DIR}
COMMENT "Creating build output folder")
add_dependencies(${cur_project} build_folder)
endif()
endmacro()
macro(set_startup_project cur_project)
if (WIN32 AND NOT ci)
if (CMAKE_VERSION VERSION_LESS 3.6)
message(WARNING
"Setting the VS startup project requires cmake 3.6 or later. Please upgrade.")
endif()
set_property(DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR} PROPERTY
VS_STARTUP_PROJECT ${cur_project})
endif()
endmacro()
macro(link_common_libraries cur_project)
if (NOT MSVC)
target_link_libraries(${cur_project} ${Boost_LIBRARIES})
target_link_libraries(${cur_project} dl)
target_link_libraries(${cur_project} Threads::Threads)
if (APPLE)
find_library(app_kit AppKit)
find_library(foundation Foundation)
target_link_libraries(${cur_project}
${app_kit} ${foundation})
else()
target_link_libraries(${cur_project} rt)
endif()
else(NOT MSVC)
target_link_libraries(${cur_project}
$<$<OR:$<CONFIG:Debug>,$<CONFIG:DebugClassic>>:VC/static/ssleay32MTd>
$<$<OR:$<CONFIG:Debug>,$<CONFIG:DebugClassic>>:VC/static/libeay32MTd>)
target_link_libraries(${cur_project}
$<$<OR:$<CONFIG:Release>,$<CONFIG:ReleaseClassic>>:VC/static/ssleay32MT>
$<$<OR:$<CONFIG:Release>,$<CONFIG:ReleaseClassic>>:VC/static/libeay32MT>)
target_link_libraries(${cur_project}
legacy_stdio_definitions.lib Shlwapi kernel32 user32 gdi32 winspool comdlg32
advapi32 shell32 ole32 oleaut32 uuid odbc32 odbccp32 crypt32)
endif (NOT MSVC)
endmacro()


@@ -0,0 +1,60 @@
#[=========================================================[
SQLITE doesn't provide build files in the
standard source-only distribution. So we wrote
a simple cmake file and we copy it to the
external project folder so that we can use
this file to build the lib with ExternalProject
#]=========================================================]
add_library (sqlite3 STATIC sqlite3.c)
#[=========================================================[
When compiled with SQLITE_THREADSAFE=1, SQLite operates
in serialized mode. In this mode, SQLite can be safely
used by multiple threads with no restriction.
NOTE: This implies a global mutex!
When compiled with SQLITE_THREADSAFE=2, SQLite can be
used in a multithreaded program so long as no two
threads attempt to use the same database connection at
the same time.
NOTE: This is the preferred threading model, but not
currently enabled because we need to investigate our
use-model and concurrency requirements.
TODO: consider whether any other options should be
used: https://www.sqlite.org/compile.html
#]=========================================================]
target_compile_definitions (sqlite3
PRIVATE
SQLITE_THREADSAFE=1
HAVE_USLEEP=1)
target_compile_options (sqlite3
PRIVATE
$<$<BOOL:${MSVC}>:
-wd4100
-wd4127
-wd4232
-wd4244
-wd4701
-wd4706
-wd4996
>
$<$<NOT:$<BOOL:${MSVC}>>:-Wno-array-bounds>)
install (
TARGETS
sqlite3
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin
INCLUDES DESTINATION include)
install (
FILES
sqlite3.h
sqlite3ext.h
DESTINATION include)

2125
Builds/CMake/FindBoost.cmake Normal file

File diff suppressed because it is too large.


@@ -0,0 +1,55 @@
include (CMakeFindDependencyMacro)
# need to represent system dependencies of the lib here
#[=========================================================[
Boost
#]=========================================================]
if (static OR APPLE OR MSVC)
set (Boost_USE_STATIC_LIBS ON)
endif ()
set (Boost_USE_MULTITHREADED ON)
if (static OR MSVC)
set (Boost_USE_STATIC_RUNTIME ON)
else ()
set (Boost_USE_STATIC_RUNTIME OFF)
endif ()
find_dependency (Boost 1.67
COMPONENTS
chrono
context
coroutine
date_time
filesystem
program_options
regex
serialization
system
thread)
#[=========================================================[
OpenSSL
#]=========================================================]
if (NOT DEFINED OPENSSL_ROOT_DIR)
if (DEFINED ENV{OPENSSL_ROOT})
set (OPENSSL_ROOT_DIR $ENV{OPENSSL_ROOT})
elseif (APPLE)
find_program (homebrew brew)
if (homebrew)
execute_process (COMMAND ${homebrew} --prefix openssl
OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
endif ()
endif ()
file (TO_CMAKE_PATH "${OPENSSL_ROOT_DIR}" OPENSSL_ROOT_DIR)
endif ()
if (static OR APPLE OR MSVC)
set (OPENSSL_USE_STATIC_LIBS ON)
endif ()
set (OPENSSL_MSVC_STATIC_RT ON)
find_dependency (OpenSSL 1.0.2 REQUIRED)
find_dependency (ZLIB)
if (TARGET ZLIB::ZLIB)
set_target_properties(OpenSSL::Crypto PROPERTIES
INTERFACE_LINK_LIBRARIES ZLIB::ZLIB)
endif ()
include ("${CMAKE_CURRENT_LIST_DIR}/RippleTargets.cmake")


@@ -0,0 +1,15 @@
#[=========================================================[
This is a CMake script file that is used to write
the contents of a file to stdout (using the cmake
echo command). The input file is passed via the
IN_FILE variable.
#]=========================================================]
file (READ ${IN_FILE} contents)
## only print files that actually have some text in them
if (contents MATCHES "[a-z0-9A-Z]+")
execute_process(
COMMAND
${CMAKE_COMMAND} -E echo "${contents}")
endif ()


@@ -0,0 +1,13 @@
# This patches unsigned-types.h in the soci official sources
# so as to remove type range check exceptions that cause
# us trouble when using boost::optional to select int values
file (STRINGS include/soci/unsigned-types.h sourcecode)
foreach (line_ ${sourcecode})
if (line_ MATCHES "^[ \\t]+throw[ ]+soci_error[ ]*\\([ ]*\"Value outside of allowed.+$")
set (line_ "//${CMAKE_MATCH_0}")
endif ()
file (APPEND include/soci/unsigned-types.h.patched "${line_}\n")
endforeach ()
file (RENAME include/soci/unsigned-types.h include/soci/unsigned-types.h.orig)
file (RENAME include/soci/unsigned-types.h.patched include/soci/unsigned-types.h)


@@ -1,30 +0,0 @@
# rippled
# use the ubuntu base image
FROM ubuntu
MAINTAINER Roberto Catini roberto.catini@gmail.com
# make sure the package repository is up to date
RUN apt-get update
RUN apt-get -y upgrade
# install the dependencies
RUN apt-get -y install git scons pkg-config protobuf-compiler libprotobuf-dev libssl-dev libboost1.55-all-dev
# download source code from official repository
RUN git clone https://github.com/ripple/rippled.git src; cd src/; git checkout master
# compile
RUN cd src/; scons build/rippled
# move to root directory and strip
RUN cp src/build/rippled rippled; strip rippled
# copy default config
RUN cp src/doc/rippled-example.cfg rippled.cfg
# clean source
RUN rm -r src
# launch rippled when launching the container
ENTRYPOINT ./rippled


@@ -1,23 +0,0 @@
FROM ubuntu
MAINTAINER Torrie Fischer <torrie@ripple.com>
RUN apt-get update -qq &&\
apt-get install -qq software-properties-common &&\
apt-add-repository -y ppa:ubuntu-toolchain-r/test &&\
apt-add-repository -y ppa:afrank/boost &&\
apt-get update -qq
RUN apt-get purge -qq libboost1.48-dev &&\
apt-get install -qq libprotobuf8 libboost1.57-all-dev
RUN mkdir -p /srv/rippled/data
VOLUME /srv/rippled/data/
ENTRYPOINT ["/srv/rippled/bin/rippled"]
CMD ["--conf", "/srv/rippled/data/rippled.cfg"]
EXPOSE 51235/udp
EXPOSE 5005/tcp
ADD ./rippled.cfg /srv/rippled/data/rippled.cfg
ADD ./rippled /srv/rippled/bin/


@@ -1,13 +0,0 @@
set -e
mkdir -p build/docker/
cp doc/rippled-example.cfg build/clang.debug/rippled build/docker/
cp Builds/Docker/Dockerfile-testnet build/docker/Dockerfile
mv build/docker/rippled-example.cfg build/docker/rippled.cfg
strip build/docker/rippled
docker build -t ripple/rippled:$CIRCLE_SHA1 build/docker/
docker tag ripple/rippled:$CIRCLE_SHA1 ripple/rippled:latest
if [ -z "$CIRCLE_PR_NUMBER" ]; then
docker tag ripple/rippled:$CIRCLE_SHA1 ripple/rippled:$CIRCLE_BRANCH
fi


@@ -1,16 +0,0 @@
set -e
if [ -z "$DOCKER_EMAIL" -o -z "$DOCKER_USERNAME" -o -z "$DOCKER_PASSWORD" ];then
echo "Docker credentials are not set. Can't login to docker, no containers will be pushed."
exit 0
fi
if [ -n "$CIRCLE_PR_NUMBER" ]; then
echo "Not pushing results of a pull request build."
exit 0
fi
docker login -e $DOCKER_EMAIL -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
docker push ripple/rippled:$CIRCLE_SHA1
docker push ripple/rippled:$CIRCLE_BRANCH
docker push ripple/rippled:latest


@@ -1,31 +0,0 @@
**Requirements**
1. Java Runtime Environment (JRE)
2. Eclipse with CDT (tested on Luna):
http://www.eclipse.org/downloads/packages/eclipse-ide-cc-developers/lunasr2
3. Eclipse SCons plugin: http://sconsolidator.com/
**WARNING**: by default the SCons plugin uses 16 threads. Go to
*Window->Preferences->SCons->Build Settings* in Eclipse and make it
use only 4-8 jobs (threads) or whatever you feel comfortable with. It will
positively freeze your system if you run with 16 threads/jobs.
![scons](scons.png)
**Getting Started**
After setting up Eclipse just do a File->New->Other...
Select: C/C++ / New SCons project from existing source
Point the importer to the folder where the SConstruct resides (the root
folder of your git workspace normally)
**Build**
Just hit Project->Build All in Eclipse to get started. And remember to not
let it run 16 threads!
**Debug**
Start a new Eclipse debug configuration and set binary to run to build/rippled
(assuming you have built it).
![debug](debug.png)

Two binary files changed (images of 18 KiB and 17 KiB, likely the scons.png and debug.png referenced above); binary diffs are not shown.


@@ -1,22 +0,0 @@
#!/usr/bin/env bash
#
# This script installs the dependencies needed by rippled. It should be run
# with sudo.
#
if [ ! -f /etc/fedora-release ]; then
echo "This script is meant to be run on fedora"
exit 1
fi
fedora_release=$(grep -o '[0-9]*' /etc/fedora-release)
if (( $(bc <<< "${fedora_release} < 22") )); then
echo "This script is meant to run on fedora 22 or greater"
exit 1
fi
yum -y update
yum -y group install "Development Tools"
yum -y install gcc-c++ scons openssl-devel openssl-static protobuf-devel protobuf-static boost-devel boost-static libstdc++-static


@@ -1,5 +0,0 @@
# QTCreator
Makefile
*.user


@@ -1,112 +0,0 @@
# Ripple protocol buffers
PROTOS = ../../src/ripple_data/protocol/ripple.proto
PROTOS_DIR = ../../build/proto
# Google Protocol Buffers support
protobuf_h.name = protobuf header
protobuf_h.input = PROTOS
protobuf_h.output = $${PROTOS_DIR}/${QMAKE_FILE_BASE}.pb.h
protobuf_h.depends = ${QMAKE_FILE_NAME}
protobuf_h.commands = protoc --cpp_out=$${PROTOS_DIR} --proto_path=${QMAKE_FILE_PATH} ${QMAKE_FILE_NAME}
protobuf_h.variable_out = HEADERS
QMAKE_EXTRA_COMPILERS += protobuf_h
protobuf_cc.name = protobuf implementation
protobuf_cc.input = PROTOS
protobuf_cc.output = $${PROTOS_DIR}/${QMAKE_FILE_BASE}.pb.cc
protobuf_cc.depends = $${PROTOS_DIR}/${QMAKE_FILE_BASE}.pb.h
protobuf_cc.commands = $$escape_expand(\\n)
#protobuf_cc.variable_out = SOURCES
QMAKE_EXTRA_COMPILERS += protobuf_cc
# Ripple compilation
DESTDIR = ../../build/QtCreator
OBJECTS_DIR = ../../build/QtCreator/obj
TEMPLATE = app
CONFIG += console thread warn_off
CONFIG -= qt gui
DEFINES += _DEBUG
linux-g++:QMAKE_CXXFLAGS += \
-Wall \
-Wno-sign-compare \
-Wno-char-subscripts \
-Wno-invalid-offsetof \
-Wno-unused-parameter \
-Wformat \
-O0 \
-std=c++11 \
-pthread
INCLUDEPATH += \
"../../src/leveldb/" \
"../../src/leveldb/port" \
"../../src/leveldb/include" \
$${PROTOS_DIR}
OTHER_FILES += \
# $$files(../../src/*, true) \
# $$files(../../src/beast/*) \
# $$files(../../src/beast/modules/beast_basics/diagnostic/*)
# $$files(../../src/beast/modules/beast_core/, true)
UI_HEADERS_DIR += ../../src/ripple_basics
# ---------
# New style
#
SOURCES += \
../../src/ripple/beast/ripple_beast.unity.cpp \
../../src/ripple/beast/ripple_beastc.c \
../../src/ripple/common/ripple_common.unity.cpp \
../../src/ripple/http/ripple_http.unity.cpp \
../../src/ripple/json/ripple_json.unity.cpp \
../../src/ripple/peerfinder/ripple_peerfinder.unity.cpp \
../../src/ripple/radmap/ripple_radmap.unity.cpp \
../../src/ripple/resource/ripple_resource.unity.cpp \
../../src/ripple/sitefiles/ripple_sitefiles.unity.cpp \
../../src/ripple/sslutil/ripple_sslutil.unity.cpp \
../../src/ripple/testoverlay/ripple_testoverlay.unity.cpp \
../../src/ripple/types/ripple_types.unity.cpp \
../../src/ripple/validators/ripple_validators.unity.cpp
# ---------
# Old style
#
SOURCES += \
../../src/ripple_app/ripple_app.unity.cpp \
../../src/ripple_app/ripple_app_pt1.unity.cpp \
../../src/ripple_app/ripple_app_pt2.unity.cpp \
../../src/ripple_app/ripple_app_pt3.unity.cpp \
../../src/ripple_app/ripple_app_pt4.unity.cpp \
../../src/ripple_app/ripple_app_pt5.unity.cpp \
../../src/ripple_app/ripple_app_pt6.unity.cpp \
../../src/ripple_app/ripple_app_pt7.unity.cpp \
../../src/ripple_app/ripple_app_pt8.unity.cpp \
../../src/ripple_basics/ripple_basics.unity.cpp \
../../src/ripple_core/ripple_core.unity.cpp \
../../src/ripple_data/ripple_data.unity.cpp \
../../src/ripple_hyperleveldb/ripple_hyperleveldb.unity.cpp \
../../src/ripple_leveldb/ripple_leveldb.unity.cpp \
../../src/ripple_net/ripple_net.unity.cpp \
../../src/ripple_overlay/ripple_overlay.unity.cpp \
../../src/ripple_rpc/ripple_rpc.unity.cpp \
../../src/ripple_websocket/ripple_websocket.unity.cpp
LIBS += \
-lboost_date_time-mt\
-lboost_filesystem-mt \
-lboost_program_options-mt \
-lboost_regex-mt \
-lboost_system-mt \
-lboost_thread-mt \
-lboost_random-mt \
-lprotobuf \
-lssl \
-lrt


@@ -22,15 +22,10 @@ Invocation:
The build must succeed without shell aliases for this to work.
To pass flags to scons, put them at the very end of the command line, after
To pass flags to cmake, put them at the very end of the command line, after
the -- flag - like this:
./Builds/Test.py -- -j4 # Pass -j4 to scons.
To build with CMake, use the --cmake flag, or any of the specific configuration
flags
./Builds/Test.py --cmake -- -j4 # Pass -j4 to cmake --build
./Builds/Test.py -- -j4 # Pass -j4 to cmake --build
Common problems:
@@ -39,11 +34,7 @@ Common problems:
2) OpenSSL not found. Solution: export OPENSSL_ROOT=[path to OpenSSL folder]
3) scons is an alias. Solution: Create a script named "scons" somewhere in
your $PATH (eg. ~/bin/scons will often work).
#!/bin/sh
python /C/Python27/Scripts/scons.py "${@}"
3) cmake is not found. Solution: Be sure cmake directory is on your $PATH
"""
from __future__ import absolute_import, division, print_function, unicode_literals
@@ -69,12 +60,12 @@ IS_OS_X = platform.system().lower() == 'darwin'
# CMake
if IS_WINDOWS:
CMAKE_UNITY_CONFIGS = ['Debug', 'Release']
CMAKE_NONUNITY_CONFIGS = ['DebugClassic', 'ReleaseClassic']
CMAKE_NONUNITY_CONFIGS = ['Debug', 'Release']
else:
CMAKE_UNITY_CONFIGS = []
CMAKE_NONUNITY_CONFIGS = []
CMAKE_UNITY_COMBOS = { '' : [['rippled', 'rippled_classic'], CMAKE_UNITY_CONFIGS],
'.nounity' : [['rippled', 'rippled_unity'], CMAKE_NONUNITY_CONFIGS] }
CMAKE_UNITY_COMBOS = { '' : [['rippled'], CMAKE_UNITY_CONFIGS],
'.nounity' : [['rippled'], CMAKE_NONUNITY_CONFIGS] }
if IS_WINDOWS:
CMAKE_DIR_TARGETS = { ('msvc' + unity,) : targets for unity, targets in
@@ -97,28 +88,6 @@ else:
[tuple(x) for x in powerset(['-GNinja', '-Dstatic=true', '-Dassert=true', '-Dsan=address'])] +
[tuple(x) for x in powerset(['-GNinja', '-Dstatic=true', '-Dassert=true', '-Dsan=thread'])]))
# Scons
if IS_WINDOWS or IS_OS_X:
ALL_TARGETS = [('debug',), ('release',)]
else:
ALL_TARGETS = [(cc + "." + target,)
for cc in ['gcc', 'clang']
for target in ['debug', 'release', 'coverage', 'profile',
'debug.nounity', 'release.nounity', 'coverage.nounity', 'profile.nounity']]
# list of tuples of all possible options
if IS_WINDOWS or IS_OS_X:
ALL_OPTIONS = [tuple(x) for x in powerset(['--ninja', '--assert'])]
else:
ALL_OPTIONS = list(set(
[tuple(x) for x in powerset(['--ninja', '--static', '--assert', '--sanitize=address'])] +
[tuple(x) for x in powerset(['--ninja', '--static', '--assert', '--sanitize=thread'])]))
# list of tuples of all possible options + all possible targets
ALL_BUILDS = [options + target
for target in ALL_TARGETS
for options in ALL_OPTIONS]
parser = argparse.ArgumentParser(
description='Test.py - run ripple tests'
)
@@ -161,6 +130,12 @@ parser.add_argument(
help='Run tests in parallel'
)
parser.add_argument(
'--ipv6',
action='store_true',
help='Use IPv6 localhost when running unit tests.',
)
parser.add_argument(
'--clean', '-c',
action='store_true',
@@ -173,26 +148,11 @@ parser.add_argument(
help='Reduce output where possible (unit tests)',
)
# Scons and CMake parameters are too different to run
# both side-by-side
pgroup = parser.add_mutually_exclusive_group()
pgroup.add_argument(
'--cmake',
action='store_true',
help='Build using CMake.',
)
pgroup.add_argument(
'--scons',
action='store_true',
help='Build using Scons. Default behavior.')
parser.add_argument(
'--dir', '-d',
default=(),
nargs='*',
help='Specify one or more CMake dir names. Implies --cmake. '
help='Specify one or more CMake dir names. '
'Will also be used as -Dtarget=<dir> running cmake.'
)
@@ -200,7 +160,7 @@ parser.add_argument(
'--target',
default=(),
nargs='*',
help='Specify one or more CMake build targets. Implies --cmake. '
help='Specify one or more CMake build targets. '
'Will be used as --target <target> running cmake --build.'
)
@@ -208,7 +168,7 @@ parser.add_argument(
'--config',
default=(),
nargs='*',
help='Specify one or more CMake build configs. Implies --cmake. '
help='Specify one or more CMake build configs. '
'Will be used as --config <config> running cmake --build.'
)
@@ -216,14 +176,14 @@ parser.add_argument(
'--generator_option',
action='append',
help='Specify a CMake generator option. Repeat for multiple options. '
'Implies --cmake. Will be passed to the cmake generator. '
'Will be passed to the cmake generator. '
'Due to limits of the argument parser, arguments starting with \'-\' '
'must be attached to this option. e.g. --generator_option=-GNinja.')
parser.add_argument(
'--build_option',
action='append',
help='Specify a build option. Repeat for multiple options. Implies --cmake. '
help='Specify a build option. Repeat for multiple options. '
'Will be passed to the build tool via cmake --build. '
'Due to limits of the argument parser, arguments starting with \'-\' '
'must be attached to this option. e.g. --build_option=-j8.')
@@ -244,7 +204,7 @@ def decodeString(line):
else:
return line.decode()
def shell(cmd, args=(), silent=False):
def shell(cmd, args=(), silent=False, cust_env=None):
""""Execute a shell command and return the output."""
silent = ARGS.silent or silent
verbose = not silent and ARGS.verbose
@@ -253,12 +213,13 @@ def shell(cmd, args=(), silent=False):
command = (cmd,) + args
# shell is needed in Windows to find scons in the path
# shell is needed in Windows to find executable in the path
process = subprocess.Popen(
command,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
env=cust_env,
shell=IS_WINDOWS)
lines = []
count = 0
@@ -284,58 +245,6 @@ def shell(cmd, args=(), silent=False):
process.wait()
return process.returncode, lines
def run_tests(args):
failed = []
if IS_WINDOWS:
binary_re = re.compile(r'build\\([^\\]+)\\rippled.exe')
else:
binary_re = re.compile(r'build/([^/]+)/rippled')
_, lines = shell('scons', ('-n', '--tree=derived',) + args, silent=True)
for line in lines:
match = binary_re.search(line)
if match:
executable, target = match.group(0, 1)
print('Unit tests for', target)
testflag = '--unittest'
quiet = ''
if ARGS.test:
testflag += ('=' + ARGS.test)
if ARGS.quiet:
quiet = '-q'
if ARGS.testjobs:
testjobs = ('--unittest-jobs=' + str(ARGS.testjobs))
resultcode, lines = shell(executable, (testflag, quiet, testjobs,))
if resultcode:
if not ARGS.verbose:
print('ERROR:', *lines, sep='')
failed.append([target, 'unittest'])
if not ARGS.keep_going:
break
return failed
def run_build(args=None):
print('Building:', *args or ('(default)',))
resultcode, lines = shell('scons', args)
if resultcode:
print('Build FAILED:')
if not ARGS.verbose:
print(*lines, sep='')
sys.exit(1)
if '--ninja' in args:
resultcode, lines = shell('ninja')
if resultcode:
print('Ninja build FAILED:')
if not ARGS.verbose:
print(*lines, sep='')
sys.exit(1)
def get_cmake_dir(cmake_dir):
return os.path.join('build' , 'cmake' , cmake_dir)
@@ -350,7 +259,25 @@ def run_cmake(directory, cmake_dir, args):
args += ( '-GNinja', )
else:
args += ( '-GVisual Studio 14 2015 Win64', )
args += ( '-Dtarget=' + cmake_dir, os.path.join('..', '..', '..'), )
# hack to extract cmake options/args from the legacy target format
if re.search('\.unity', cmake_dir):
args += ( '-Dunity=ON', )
if re.search('\.nounity', cmake_dir):
args += ( '-Dunity=OFF', )
if re.search('coverage', cmake_dir):
args += ( '-Dcoverage=ON', )
if re.search('profile', cmake_dir):
args += ( '-Dprofile=ON', )
if re.search('debug', cmake_dir):
args += ( '-DCMAKE_BUILD_TYPE=Debug', )
if re.search('release', cmake_dir):
args += ( '-DCMAKE_BUILD_TYPE=Release', )
if re.search('gcc', cmake_dir):
args += ( '-DCMAKE_C_COMPILER=gcc', '-DCMAKE_CXX_COMPILER=g++', )
if re.search('clang', cmake_dir):
args += ( '-DCMAKE_C_COMPILER=clang', '-DCMAKE_CXX_COMPILER=clang++', )
args += ( os.path.join('..', '..', '..'), )
resultcode, lines = shell('cmake', args)
if resultcode:
@@ -390,13 +317,16 @@ def run_cmake_tests(directory, target, config):
testflag = '--unittest'
quiet = ''
testjobs = ''
ipv6 = ''
if ARGS.test:
testflag += ('=' + ARGS.test)
if ARGS.quiet:
quiet = '-q'
if ARGS.ipv6:
ipv6 = '--unittest-ipv6'
if ARGS.testjobs:
testjobs = ('--unittest-jobs=' + str(ARGS.testjobs))
resultcode, lines = shell(executable, (testflag, quiet, testjobs,))
resultcode, lines = shell(executable, (testflag, quiet, testjobs, ipv6))
if resultcode:
if not ARGS.verbose:
@@ -407,92 +337,54 @@ def run_cmake_tests(directory, target, config):
def main():
all_failed = []
if ARGS.dir or ARGS.target or ARGS.config or ARGS.build_option or ARGS.generator_option:
ARGS.cmake=True
if not ARGS.cmake:
if ARGS.all:
to_build = ALL_BUILDS
else:
to_build = [tuple(ARGS.extra_args)]
for build in to_build:
args = ()
# additional arguments come first
for arg in list(ARGS.extra_args):
if arg not in build:
args += (arg,)
args += build
run_build(args)
failed = run_tests(args)
if failed:
print('FAILED:', *(':'.join(f) for f in failed))
if not ARGS.keep_going:
sys.exit(1)
else:
all_failed.extend([','.join(build), ':'.join(f)]
for f in failed)
else:
print('Success')
if ARGS.clean:
shutil.rmtree('build')
if '--ninja' in args:
os.remove('build.ninja')
os.remove('.ninja_deps')
os.remove('.ninja_log')
if ARGS.all:
build_dir_targets = CMAKE_DIR_TARGETS
generator_options = CMAKE_ALL_GENERATE_OPTIONS
else:
if ARGS.all:
build_dir_targets = CMAKE_DIR_TARGETS
generator_options = CMAKE_ALL_GENERATE_OPTIONS
build_dir_targets = { tuple(ARGS.dir) : [ARGS.target, ARGS.config] }
if ARGS.generator_option:
generator_options = [tuple(ARGS.generator_option)]
else:
build_dir_targets = { tuple(ARGS.dir) : [ARGS.target, ARGS.config] }
if ARGS.generator_option:
generator_options = [tuple(ARGS.generator_option)]
else:
generator_options = [tuple()]
generator_options = [tuple()]
if not build_dir_targets:
# Let CMake choose the build tool.
build_dir_targets = { () : [] }
if not build_dir_targets:
# Let CMake choose the build tool.
build_dir_targets = { () : [] }
if ARGS.build_option:
ARGS.build_option = ARGS.build_option + list(ARGS.extra_args)
else:
ARGS.build_option = list(ARGS.extra_args)
if ARGS.build_option:
ARGS.build_option = ARGS.build_option + list(ARGS.extra_args)
else:
ARGS.build_option = list(ARGS.extra_args)
for args in generator_options:
for build_dirs, (build_targets, build_configs) in build_dir_targets.items():
if not build_dirs:
build_dirs = ('default',)
if not build_targets:
build_targets = ('rippled',)
if not build_configs:
build_configs = ('',)
for cmake_dir in build_dirs:
cmake_full_dir = get_cmake_dir(cmake_dir)
run_cmake(cmake_full_dir, cmake_dir, args)
for args in generator_options:
for build_dirs, (build_targets, build_configs) in build_dir_targets.items():
if not build_dirs:
build_dirs = ('default',)
if not build_targets:
build_targets = ('rippled',)
if not build_configs:
build_configs = ('',)
for cmake_dir in build_dirs:
cmake_full_dir = get_cmake_dir(cmake_dir)
run_cmake(cmake_full_dir, cmake_dir, args)
for target in build_targets:
for config in build_configs:
run_cmake_build(cmake_full_dir, target, config, ARGS.build_option)
failed = run_cmake_tests(cmake_full_dir, target, config)
for target in build_targets:
for config in build_configs:
run_cmake_build(cmake_full_dir, target, config, ARGS.build_option)
failed = run_cmake_tests(cmake_full_dir, target, config)
if failed:
print('FAILED:', *(':'.join(f) for f in failed))
if not ARGS.keep_going:
sys.exit(1)
else:
all_failed.extend([decodeString(cmake_dir +
"." + target + "." + config), ':'.join(f)]
for f in failed)
if failed:
print('FAILED:', *(':'.join(f) for f in failed))
if not ARGS.keep_going:
sys.exit(1)
else:
print('Success')
if ARGS.clean:
shutil.rmtree(cmake_full_dir)
all_failed.extend([decodeString(cmake_dir +
"." + target + "." + config), ':'.join(f)]
for f in failed)
else:
print('Success')
if ARGS.clean:
shutil.rmtree(cmake_full_dir)
if all_failed:
if len(all_failed) > 1:


@@ -1,82 +0,0 @@
#!/usr/bin/env bash
#
# This script installs boost and protobuf built with clang. This is needed on
# Ubuntu 15.10 when building with clang
# It will build these in a 'clang' subdirectory that it creates below the directory
# this script is run from. If a clang directory already exists the script will refuse
# to run.
if hash lsb_release 2>/dev/null; then
if [ $(lsb_release -si) == "Ubuntu" ]; then
ubuntu_release=$(lsb_release -sr)
fi
fi
if [ -z "${ubuntu_release}" ]; then
echo "System not supported"
exit 1
fi
if ! hash clang 2>/dev/null; then
clang_version=3.7
if [ ${ubuntu_release} == "16.04" ]; then
clang_version=3.8
fi
sudo apt-get -y install clang-${clang_version}
update-alternatives --install /usr/bin/clang clang /usr/bin/clang-${clang_version} 99 clang++
hash -r
if ! hash clang 2>/dev/null; then
echo "Please install clang"
exit 1
fi
fi
if [ ${ubuntu_release} != "16.04" ] && [ ${ubuntu_release} != "15.10" ]; then
echo "clang specific boost and protobuf not needed"
exit 0
fi
if [ -d clang ]; then
echo "clang directory already exists. Cowardly refusing to run"
exit 1
fi
if ! hash wget 2>/dev/null; then
sudo apt-get -y install wget
hash -r
if ! hash wget 2>/dev/null; then
echo "Please install wget"
exit 1
fi
fi
num_procs=$(lscpu -p | grep -v '^#' | sort -u -t, -k 2,4 | wc -l) # physical cores
mkdir clang
pushd clang > /dev/null
# Install protobuf
pb=protobuf-2.6.1
pb_tar=${pb}.tar.gz
wget -O ${pb_tar} https://github.com/google/protobuf/releases/download/v2.6.1/${pb_tar}
tar xf ${pb_tar}
rm ${pb_tar}
pushd ${pb} > /dev/null
./configure CC=clang CXX=clang++ CXXFLAGS='-std=c++14 -O3 -g'
make -j${num_procs}
popd > /dev/null
# Install boost
boost_ver=1.60.0
bd=boost_${boost_ver//./_}
bd_tar=${bd}.tar.gz
wget -O ${bd_tar} http://sourceforge.net/projects/boost/files/boost/${boost_ver}/${bd_tar}
tar xf ${bd_tar}
rm ${bd_tar}
pushd ${bd} > /dev/null
./bootstrap.sh
./b2 toolset=clang -j${num_procs}
popd > /dev/null
popd > /dev/null


@@ -1,39 +0,0 @@
#!/usr/bin/env bash
#
# This script builds boost with the correct ABI flags for ubuntu
#
version=63
patch=0
if hash lsb_release 2>/dev/null; then
if [ $(lsb_release -si) == "Ubuntu" ]; then
ubuntu_release=$(lsb_release -sr)
fi
fi
if [ -z "${ubuntu_release}" ]; then
echo "System not supported"
exit 1
fi
extra_defines=""
if (( $(bc <<< "${ubuntu_release} < 15.1") )); then
extra_defines="define=_GLIBCXX_USE_CXX11_ABI=0"
fi
num_procs=$(lscpu -p | grep -v '^#' | sort -u -t, -k 2,4 | wc -l) # physical cores
printf "\nBuild command will be: ./b2 -j${num_procs} ${extra_defines}\n\n"
boost_dir="boost_1_${version}_${patch}"
boost_tag="boost-1.${version}.${patch}"
git clone -b "${boost_tag}" --recursive https://github.com/boostorg/boost.git "${boost_dir}"
cd ${boost_dir}
git checkout --force ${boost_tag}
git submodule foreach git checkout --force ${boost_tag}
./bootstrap.sh
./b2 headers
./b2 -j${num_procs} ${extra_defines}
echo "Build command was: ./b2 -j${num_procs} ${extra_defines}"
echo "Don't forget to set BOOST_ROOT!"


@@ -1,47 +0,0 @@
#!/usr/bin/env bash
#
# This script installs the dependencies needed by rippled. It should be run
# with sudo. For Ubuntu < 15.10, it installs gcc 5 as the default compiler. gcc
# 5 is ABI incompatible with gcc 4. If needed, the following will switch back to
# gcc-4: `sudo update-alternatives --config gcc` and choosing the gcc-4
# option.
#
if hash lsb_release 2>/dev/null; then
if [ $(lsb_release -si) == "Ubuntu" ]; then
ubuntu_release=$(lsb_release -sr)
fi
fi
if [ -z "${ubuntu_release}" ]; then
echo "System not supported"
exit 1
fi
if [ ${ubuntu_release} == "14.04" ] || [ ${ubuntu_release} == "15.04" ]; then
apt-get install python-software-properties
echo "deb [arch=amd64] https://mirrors.ripple.com/ubuntu/ trusty stable contrib" | sudo tee /etc/apt/sources.list.d/ripple.list
wget -O- -q https://mirrors.ripple.com/mirrors.ripple.com.gpg.key | sudo apt-key add -
add-apt-repository ppa:ubuntu-toolchain-r/test
apt-get update
apt-get -y upgrade
apt-get -y install curl git cmake ctags pkg-config protobuf-compiler libprotobuf-dev libssl-dev python-software-properties boost-all-dev g++-5 g++-4.9
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 99 --slave /usr/bin/g++ g++ /usr/bin/g++-5
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 99 --slave /usr/bin/g++ g++ /usr/bin/g++-4.9
exit 0
fi
# Test if the first parameter's version number is greater than or equal to the second parameter's
function version_check() { test "$(printf '%s\n' "$@" | sort -V | tail -n 1)" == "$1"; }
# this should work for versions greater than or equal to 15.10
if version_check ${ubuntu_release} 15.10; then
apt-get update
apt-get -y upgrade
apt-get -y install python-software-properties curl git cmake ctags pkg-config protobuf-compiler libprotobuf-dev libssl-dev python-software-properties libboost-all-dev
exit 0
fi
echo "System not supported"
exit 1


@@ -1,4 +0,0 @@
RippleD.vcxproj -text
RippleD.vcxproj.filters -text


@@ -1,255 +0,0 @@
# Visual Studio 2015 Build Instructions
## Important
We do not recommend Windows for rippled production use at this time. Currently, the Ubuntu
platform has received the highest level of quality assurance, testing, and support.
Additionally, 32-bit Windows versions are not supported.
## Prerequisites
To clone the source code repository, create branches for inspection or modification,
build rippled under Visual Studio, and run the unit tests, you will need these
software components:
* [Visual Studio 2015](README.md#install-visual-studio-2015)
* [Git for Windows](README.md#install-git-for-windows)
* [Google Protocol Buffers Compiler](README.md#install-google-protocol-buffers-compiler)
* (Optional) [Python and Scons](README.md#optional-install-python-and-scons)
* [OpenSSL Library](README.md#install-openssl)
* [Boost library](README.md#build-boost)
## Install Software
### Install Visual Studio 2015
If not already installed on your system, download your choice of installer from the
[Visual Studio 2015 Download](https://www.visualstudio.com/downloads/download-visual-studio-vs)
page, run the installer, and follow the directions. You may need to choose a "Custom"
installation and ensure that "Visual C++" is selected under "Programming Languages".
Any version of Visual Studio 2015 may be used to build rippled.
The **Visual Studio 2015 Community** edition is available free of charge (see
[the product page](https://www.visualstudio.com/products/visual-studio-community-vs)
for licensing details), while paid editions may be used for a free initial trial period.
### Install Git for Windows
Git is a distributed revision control system. The Windows version also provides the
bash shell and many Windows versions of Unix commands. While there are other
varieties of Git (such as TortoiseGit, which has a native Windows interface and
integrates with the Explorer shell), we recommend installing
[Git for Windows](https://git-scm.com/) since
it provides a Unix-like command line environment useful for running shell scripts.
Use of the bash shell under Windows is mandatory for running the unit tests.
* NOTE: To gain full featured access to the
[git-subtree](https://blogs.atlassian.com/2013/05/alternatives-to-git-submodule-git-subtree/)
functionality used in the rippled repository we suggest Git version 2.6.2 or later.
### Install Google Protocol Buffers Compiler
Building rippled requires **protoc.exe** version 2.5.1 or later. At your option you
may build it yourself from the sources in the
[Google Protocol Buffers](https://github.com/google/protobuf) repository,
or you may download a
[protoc.exe](https://ripple.github.io/Downloads/protoc/2.5.1/protoc.exe)
([alternate link](https://github.com/ripple/Downloads/raw/gh-pages/protoc/2.5.1/protoc.exe))
precompiled Windows executable from the
[Ripple Organization](https://github.com/ripple).
Either way, once you have the required version of **protoc.exe**, copy it into
a folder in your command line `%PATH%`.
* **NOTE:** If you use an older version of the compiler, the build will
fail with errors related to a mismatch of the version of protocol
buffer headers versus the compiler.
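For example, a quick way to confirm that the correct compiler version is visible (run from the Git Bash shell recommended above; `/c/lib/protobuf` is a hypothetical location, substitute the folder you actually copied **protoc.exe** into):
```
# make the protoc folder visible to the current shell only
export PATH="/c/lib/protobuf:$PATH"
# should report 2.5.1 or later
protoc --version
```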
### (Optional) Install Python and Scons
[Python](https://www.python.org/downloads/) and
[Scons](http://scons.org/download.php) are not required to build
rippled with Visual Studio, but can be used to build from the
command line and in scripts, and are required to properly update
the `RippleD.vcxproj` file.
If you wish to build with scons, a version after 2.3.5 is required
for Visual Studio 2015 support.
## Configure Dependencies
### Install OpenSSL
[Download OpenSSL.](http://slproweb.com/products/Win32OpenSSL.html)
Several variants will be available:
1. 64-bit. Use this if you are running 64-bit windows. As of this writing, the link is called: "Win64 OpenSSL v1.0.2j".
2. 64-bit light - Don't use this. It is missing files needed to build rippled. As of this writing, the link is called: "Win64 OpenSSL v1.0.2j Light"
Run the installer, and choose an appropriate location for your OpenSSL
installation. In this guide we use **C:\lib\OpenSSL-Win64** as the
destination location.
You may be informed on running the installer that "Visual C++ 2008
Redistributables" must be installed first. If so, download it
from the [same page](http://slproweb.com/products/Win32OpenSSL.html),
again making sure to get the correct 32-/64-bit variant.
* NOTE: Since rippled links statically to OpenSSL, it does not matter
where the OpenSSL .DLL files are placed, or what version they are.
rippled does not use or require any external .DLL files to run
other than the standard operating system ones.
### Build Boost
After [downloading boost](http://www.boost.org/users/download/) and
unpacking it, open a **Developer Command Prompt** for
Visual Studio, change to the directory containing boost, then
bootstrap the build tools:
(As of this writing, the most recent version of boost is 1.62.0, which
will unpack into a directory named `boost_1_62_0`. For higher versions
of boost, adjust the directories provided in these examples as
appropriate.)
```powershell
cd C:\lib\boost_1_62_0
bootstrap
```
The rippled application is linked statically to the standard runtimes and external
dependencies on Windows, to ensure that the behavior of the executable is not
affected by changes in outside files. Therefore, it is necessary to build the
required boost static libraries using this command:
```powershell
bjam --toolset=msvc-14.0 address-model=64 architecture=x86 link=static threading=multi runtime-link=shared,static stage --stagedir=stage64
```
Building the boost libraries may take considerable time. When the build process
is completed, take note of both the reported compiler include paths and linker
library paths as they will be required later.
* NOTE: If older versions of Visual Studio are also installed, the build may fail.
If this happens, make sure that only Visual Studio 2015 is installed. Due to
defects in the uninstallation procedures of these Microsoft products, it may
be necessary to start with a fresh install of the operating system with only
the necessary development environment components installed to have a successful build.
### Clone the rippled repository
If you are familiar with cloning github repositories, just follow your normal process
and clone `git@github.com:ripple/rippled.git`. Otherwise follow this section for instructions.
1. If you don't have a github account, sign up for one at
[github.com](https://github.com/).
2. Make sure you have Github ssh keys. For help see
[generating-ssh-keys](https://help.github.com/articles/generating-ssh-keys).
Open the "Git Bash" shell that was installed with "Git for Windows" in the
step above. Navigate to the directory where you want to clone rippled (git
bash uses `/c` for windows's `C:` and forward slash where windows uses
backslash, so `C:\Users\joe\projs` would be `/c/Users/joe/projs` in git bash).
Now clone the repository and optionally switch to the *master* branch.
Type the following at the bash prompt:
```bash
git clone git@github.com:ripple/rippled.git
cd rippled
git checkout master
```
* If you receive an error about not having the "correct access rights"
make sure you have Github ssh keys, as described above.
### Configure Library Paths
Open the solution file located at **Builds/Visual Studio 2015/ripple.sln**
and select the "View->Property Manager" to bring up the Property Manager.
Expand the *debug | x64* section and
double click the *Microsoft.Cpp.x64.user* property sheet to bring up the
*Property Pages* dialog. These are global properties applied to all
64-bit build targets:
![Visual Studio 2015 Global Properties](images/VS2015x64Properties.png)
Go to *C/C++, General, Additional Include Directories* and add the
location of the boost installation:
![Visual Studio 2015 Include Directories](images/VS2015x64IncludeDirs.png)
Then, go to *Linker, General, Additional Library Directories* and add
the location of the compiled boost libraries reported at the completion
of building the boost libraries:
![Visual Studio 2015 Library Directories](images/VS2015x64LibraryDirs.png)
Follow the same procedure for adding the `Additional Include Directories`
and `Additional Library Directories` required for OpenSSL. In our example
these directories are **C:\lib\OpenSSL-Win64\include** and
**C:\lib\OpenSSL-Win64\lib** respectively.
# Setup Environment
## Create a working directory for rippled.cfg
The rippled server uses the [Rippled.cfg](https://wiki.ripple.com/Rippled.cfg)
file to read its configuration parameters. This section describes setting up
a directory to hold the config file. The next sections describe how to tell
the rippled server where that file is.
1. Create a directory to hold the configuration file. In this example, the
ripple config directory was created in `C:\Users\joe\ripple\config`.
2. Copy the example config file located in `doc\rippled-example.cfg` to the
new directory and rename it "rippled.cfg".
3. Read the rippled.cfg file and edit as appropriate.
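From the Git Bash shell, the steps above might look like the following sketch (run from the root of your rippled clone; `C:\Users\joe\ripple\config` is the example directory used here, adjust it for your own setup):
```
# Git Bash maps C:\ to /c
mkdir -p /c/Users/joe/ripple/config
# copy the example config and rename it
cp doc/rippled-example.cfg /c/Users/joe/ripple/config/rippled.cfg
```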
## Change the Visual Studio Projects Debugging Properties
1. If not already open, open the solution file located at **Builds/Visual Studio 2015/Ripple.sln**
2. Select the correct solution platform in the solution platform dropdown (either *x64*
or *Win32* depending on machine type).
3. Select the "Project->Properties" menu item to bring up RippleD's Properties Pages
4. In "Configuration Properties" select "Debugging".
5. In the upper-left Configurations drop down, select "All Configurations".
6. In "Debugger to Launch" select "Local Windows Debugger".
### Tell rippled where to find the configuration file.
Use the `--conf` command-line switch to tell rippled where to find this file.
In the "Command Arguments" field in the properties dialog (that you opened
in the above section), add: `--conf="C:/Users/joe/ripple/config/rippled.cfg"`
(of course replacing that path with the path you set up above).
![Visual Studio 2013 Command Args Prop Page](images/VSCommandArgsPropPage.png)
### Set the _NO_DEBUG_HEAP Environment Variable
Rippled can run very slowly in the debugger when using the Windows Debug Heap.
Set the `_NO_DEBUG_HEAP` environment variable to 1 to disable the debug heap.
In the "Environment" field (that you opened in the above section), add:
`_NO_DEBUG_HEAP=1`
![Visual Studio 2013 No Debug Heap Prop Page](images/NoDebugHeapPropPage.png)
# Build
After these steps are complete, rippled should be ready to build. Simply
set rippled as the startup project by right clicking on it in the
Visual Studio Solution Explorer, choose **Set as Startup Project**,
and then choose the **Build->Build Solution** menu item.
# Unit Tests (Recommended)
The rippled unit tests are written in C++ and are part
of the rippled executable.
From a Windows console, run the unit tests:
```
./build/msvc.debug/rippled.exe --unittest
```
Substitute the correct path to the executable to test different builds.
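For instance, a release build is exercised the same way; the path below is only illustrative and should be adjusted to match your actual output directory:
```
./build/msvc.release/rippled.exe --unittest
```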

File diff suppressed because it is too large

File diff suppressed because it is too large

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.


@@ -1,36 +0,0 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 14
VisualStudioVersion = 14.0.25123.0
MinimumVisualStudioVersion = 10.0.40219.1
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "RippleD", "RippleD.vcxproj", "{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
debug.classic|x64 = debug.classic|x64
debug.classic|x86 = debug.classic|x86
debug|x64 = debug|x64
debug|x86 = debug|x86
release.classic|x64 = release.classic|x64
release.classic|x86 = release.classic|x86
release|x64 = release|x64
release|x86 = release|x86
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.debug.classic|x64.ActiveCfg = debug.classic|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.debug.classic|x64.Build.0 = debug.classic|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.debug.classic|x86.ActiveCfg = debug.classic|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.debug|x64.ActiveCfg = debug|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.debug|x64.Build.0 = debug|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.debug|x86.ActiveCfg = debug|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.release.classic|x64.ActiveCfg = release.classic|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.release.classic|x64.Build.0 = release.classic|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.release.classic|x86.ActiveCfg = release.classic|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.release|x64.ActiveCfg = release|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.release|x64.Build.0 = release|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.release|x86.ActiveCfg = release|x64
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
EndGlobal


@@ -18,7 +18,7 @@ need these software components
| [Git for Windows](README.md#install-git-for-windows)| 2.16.1|
| [Google Protocol Buffers Compiler](README.md#install-google-protocol-buffers-compiler) | 2.5.1|
| [OpenSSL Library](README.md#install-openssl) | 1.0.2n |
| [Boost library](README.md#build-boost) | 1.66.0 |
| [Boost library](README.md#build-boost) | 1.67.0 |
| [CMake for Windows](README.md#optional-install-cmake-for-windows)* | 3.10.2 |
\* Only needed if you are not using the integrated CMake in VS 2017 and prefer generating dedicated project/solution files.
@@ -99,11 +99,13 @@ to get the correct 32-/64-bit variant.
### Build Boost
Boost 1.67 or later is required.
After [downloading boost](http://www.boost.org/users/download/) and unpacking it
to `c:\lib`. As of this writing, the most recent version of boost is 1.66.0,
which will unpack into a directory named `boost_1_66_0`. We recommend either
to `c:\lib`. As of this writing, the most recent version of boost is 1.68.0,
which will unpack into a directory named `boost_1_68_0`. We recommend either
renaming this directory to `boost`, or creating a junction link `mklink /J boost
boost_1_66_0`, so that you can more easily switch between versions.
boost_1_68_0`, so that you can more easily switch between versions.
Next, open **Developer Command Prompt** and type the following commands
@@ -235,7 +237,7 @@ execute the following commands within your `rippled` cloned repository:
```
mkdir build\cmake
cd build\cmake
cmake ..\.. -G"Visual Studio 15 2017 Win64" -DBOOST_ROOT="C:\lib\boost_1_66_0" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64"
cmake ..\.. -G"Visual Studio 15 2017 Win64" -DBOOST_ROOT="C:\lib\boost_1_68_0" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64"
```
Now launch Visual Studio 2017 and select **File | Open | Project/Solution**.
Navigate to the `build\cmake` folder created above and select the `rippled.sln`


@@ -5,4 +5,3 @@ num_procs=$(lscpu -p | grep -v '^#' | sort -u -t, -k 2,4 | wc -l) # number of ph
path=$(cd $(dirname $0) && pwd)
cd $(dirname $path)
${path}/Test.py -a -c --testjobs=${num_procs} -- -j${num_procs}
${path}/Test.py -a -c -k --cmake --testjobs=${num_procs} -- -j${num_procs}

Builds/linux/README.md

@@ -0,0 +1,228 @@
# Linux Build Instructions
This document focuses on building rippled for development purposes under recent
Ubuntu Linux distributions. To build rippled for Redhat, Fedora, or CentOS,
including docker-based builds for those distributions, please consult the
[rippled-package-builder](https://github.com/ripple/rippled-package-builder)
repository.
Development is regularly done on Ubuntu 16.04 or later. For non-Ubuntu
distributions, the steps below should work if you install the appropriate
dependencies using that distribution's package management tools.
## Dependencies
Use `apt-get` to install the dependencies provided by the distribution
```
$ apt-get update
$ apt-get install -y gcc g++ wget git cmake protobuf-compiler libprotobuf-dev libssl-dev
```
Advanced users can choose to install newer versions of gcc, or the clang compiler.
At this time, rippled only supports protobuf version 2. Using version 3 of
protobuf will give errors.
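If in doubt, it is worth checking which protobuf version the distribution installed before configuring the build (a quick sanity check, not an official step):
```
$ protoc --version            # should report libprotoc 2.x
$ dpkg -s libprotobuf-dev | grep '^Version'
```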
### Build Boost
Boost 1.67 or later is required. We recommend downloading and compiling boost
with the following process: After changing to the directory where
you wish to download and compile boost, run
```
$ wget https://dl.bintray.com/boostorg/release/1.68.0/source/boost_1_68_0.tar.gz
$ tar -xzf boost_1_68_0.tar.gz
$ cd boost_1_68_0
$ ./bootstrap.sh
$ ./b2 headers
$ ./b2 -j<Num Parallel>
```
### (Optional) Dependencies for Building Source Documentation
Source code documentation is not required for running/debugging rippled. That
said, the documentation contains some helpful information about specific
components of the application. For more information on how to install and run
the necessary components, see [this document](../../docs/README.md)
## Build
### Clone the rippled repository
From a shell:
```
git clone git@github.com:ripple/rippled.git
cd rippled
```
For a stable release, choose the `master` branch or one of the tagged releases
listed on [GitHub](https://github.com/ripple/rippled/releases).
```
git checkout master
```
or to test the latest release candidate, choose the `release` branch.
```
git checkout release
```
If you are doing development work and want the latest set of untested
features, you can consider using the `develop` branch instead.
```
git checkout develop
```
### Configure Library Paths
If you didn't persistently set the `BOOST_ROOT` environment variable to the
directory in which you compiled boost, then you should set it temporarily.
For example, if you built Boost in your home directory at `~/boost_1_68_0`, you
would run the following in any shell in which you want to build:
```
export BOOST_ROOT=~/boost_1_68_0
```
Alternatively, you can add `-DBOOST_ROOT=~/boost_1_68_0` to the command line when
invoking `cmake`.
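For example, a configure invocation that passes the Boost location directly might look like this (a sketch assuming the same `~/boost_1_68_0` location used above and the build directory described in the next section):
```
cmake -DBOOST_ROOT=~/boost_1_68_0 -DCMAKE_BUILD_TYPE=Debug ..
```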
### Generate and Build
All builds should be done in a separate directory from the source tree root
(a subdirectory is fine). For example, from the root of the ripple source tree:
```
mkdir my_build
cd my_build
```
followed by:
```
cmake -DCMAKE_BUILD_TYPE=Debug ..
```
`CMAKE_BUILD_TYPE` can be changed as desired for `Debug` vs.
`Release` builds (all four standard cmake build types are supported).
To select a different compiler (most likely gcc will be found by default), pass
`-DCMAKE_C_COMPILER=<path/to/c-compiler>` and
`-DCMAKE_CXX_COMPILER=</path/to/cxx-compiler>` when configuring. If you prefer,
you can instead set `CC` and `CXX` environment variables which cmake will honor.
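For instance, either of the following selects clang (a sketch assuming `clang` and `clang++` are installed and on your `PATH`):
```
cmake -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Debug ..
# or, equivalently, via environment variables
CC=clang CXX=clang++ cmake -DCMAKE_BUILD_TYPE=Debug ..
```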
Once you have generated the build system, you can run the build via cmake:
```
cmake --build . -- -j <parallel jobs>
```
The `-j` parameter in this example tells the build tool to compile several
files in parallel. This value should be chosen roughly based on the number of
cores you have available and/or want to use for building.
When the build completes successfully, you will have a `rippled` executable in
the current directory, which can be used to connect to the network (when
properly configured) or to run unit tests.
#### Options During Configuration:
The CMake file defines a number of configure-time options which can be
examined by running `cmake-gui` or `ccmake` to generate the build. In
particular, the `unity` option allows you to select between the unity and
non-unity builds. `unity` builds are faster to compile since they combine
multiple sources into a single compilation unit - this is the default if you
don't specify. `nounity` builds can be helpful for detecting include omissions
or for finding other build-related issues, but aren't generally needed for
testing and running.
* `-Dunity=ON` to enable/disable unity builds (defaults to ON)
* `-Dassert=ON` to enable asserts
* `-Djemalloc=ON` to enable jemalloc support for heap checking
* `-Dsan=thread` to enable the thread sanitizer with clang
* `-Dsan=address` to enable the address sanitizer with clang
* `-Dstatic=ON` to enable static linking library dependencies
Several other infrequently used options are available - run `ccmake` or
`cmake-gui` for a list of all options.
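As an illustration, a non-unity debug configuration with asserts enabled could be generated by combining the flags above in a single invocation (any of the listed options can be mixed the same way):
```
cmake -DCMAKE_BUILD_TYPE=Debug -Dunity=OFF -Dassert=ON ..
```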
#### Optional Installation
The rippled cmake build supports an installation target that will install
rippled as well as a support library that can be used to sign transactions. In
order to build and install the files, specify the `install` target when
building, e.g.:
```
cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/opt/local ..
cmake --build . --target install -- -j <parallel jobs>
```
We recommend specifying `CMAKE_INSTALL_PREFIX` when configuring in order to
explicitly control the install location for your files. Without this setting,
cmake will typically install in `/usr/local`. It is also possible to "rehome"
the installation by specifying the `DESTDIR` env variable during the install phase,
e.g.:
```
DESTDIR=~/mylibs cmake --build . --target install -- -j <parallel jobs>
```
in which case, the files would be installed in the `CMAKE_INSTALL_PREFIX` within
the specified `DESTDIR` path.
#### Signing Library
If you want to use the signing support library to create an application, there
are two simple mechanisms with cmake + git that facilitate this.
With either option below, you will have access to a library from the
rippled project that you can link to in your own project's CMakeLists.txt, e.g.:
```
target_link_libraries (my-signing-app Ripple::xrpl_core)
```
##### Option 1: git submodules + add_subdirectory
First, add the rippled repo as a submodule to your project repo:
```
git submodule add -b master https://github.com/ripple/rippled.git vendor/rippled
```
change the `vendor/rippled` path as desired for your repo layout. Furthermore,
change the branch name if you want to track a different rippled branch, such
as `develop`.
Second, to bring this submodule into your project, just add the rippled subdirectory:
```
add_subdirectory (vendor/rippled)
```
##### Option 2: installed rippled + find_package
First, follow the "Optional Installation" instructions above to
build and install the desired version of rippled.
To make use of the installed files, add the following to your CMakeLists.txt file:
```
set (CMAKE_MODULE_PATH /opt/local/lib/cmake/ripple ${CMAKE_MODULE_PATH})
find_package(Ripple REQUIRED)
```
change the `/opt/local` module path above to match your chosen installation prefix.
## Unit Tests (Recommended)
`rippled` builds a set of unit tests into the server executable. To run these unit
tests after building, pass the `--unittest` option to the compiled `rippled`
executable. The executable will exit with summary info after running the unit tests.
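For example, from the build directory used above (`--unittest-jobs=<N>`, which the repository's own `Test.py` script also passes, runs suites in parallel):
```
./rippled --unittest
./rippled --unittest --unittest-jobs=4
```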


@@ -60,9 +60,11 @@ brew install git cmake pkg-config protobuf openssl ninja
### Build Boost
Boost 1.67 or later is required.
We want to compile boost with clang/libc++
Download [a release](https://dl.bintray.com/boostorg/release/1.66.0/source/boost_1_66_0.tar.bz2)
Download [a release](https://dl.bintray.com/boostorg/release/1.68.0/source/boost_1_68_0.tar.bz2)
Extract it to a folder, making note of where, open a terminal, then:
@@ -118,11 +120,11 @@ If you didn't persistently set the `BOOST_ROOT` environment variable to the
root of the extracted directory above, then you should set it temporarily.
For example, assuming your username were `Abigail` and you extracted Boost
1.66.0 in `/Users/Abigail/Downloads/boost_1_66_0`, you would do for any
1.68.0 in `/Users/Abigail/Downloads/boost_1_68_0`, you would do for any
shell in which you want to build:
```
export BOOST_ROOT=/Users/Abigail/Downloads/boost_1_66_0
export BOOST_ROOT=/Users/Abigail/Downloads/boost_1_68_0
```
### Generate and Build
@@ -140,20 +142,17 @@ cd my_build
followed by:
```
cmake -G "Unix Makefiles" -Dtarget=clang.debug.unity ..
cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Debug ..
```
or
```
cmake -G "Ninja" -Dtarget=clang.debug.unity ..
cmake -G "Ninja" -DCMAKE_BUILD_TYPE=Debug ..
```
The target variable can be adjusted as needed for `debug` vs. `release` and
`unity` vs. `nounity` builds. `unity` builds are typically faster to compile
but run the risk of ODR violations given that multiple compilation units are
merged together at compile time. `nounity` builds will take longer to compile
but align more closely with language standards.
`CMAKE_BUILD_TYPE` can be changed as desired for `Debug` vs.
`Release` builds (all four standard cmake build types are supported).
Once you have generated the build system, you can run the build via cmake:
@@ -187,16 +186,42 @@ cmake --build . -- -jobs 4
This will invoke the `xcodebuild` utility to compile the project. See `xcodebuild
--help` for details about build options.
#### Optional installation
If you'd like to install the artifacts of the build, we have preliminary
support for standard CMake installation targets. We recommend explicitly
setting the installation location when configuring, e.g.:
```
cmake -DCMAKE_INSTALL_PREFIX=/opt/local ..
```
(change the destination as desired), and then build the `install` target:
```
cmake --build . --target install -- -jobs 4
```
#### Options During Configuration:
There are a number of config variables that our CMake files support. These
can be added to the cmake generation command as needed:
The CMake file defines a number of configure-time options which can be
examined by running `cmake-gui` or `ccmake` to generate the build. In
particular, the `unity` option allows you to select between the unity and
non-unity builds. `unity` builds are faster to compile since they combine
multiple sources into a single compilation unit - this is the default if you
don't specify. `nounity` builds can be helpful for detecting include omissions
or for finding other build-related issues, but aren't generally needed for
testing and running.
* `-Dunity=ON` to enable/disable unity builds (defaults to ON)
* `-Dassert=ON` to enable asserts
* `-Djemalloc=ON` to enable jemalloc support for heap checking
* `-Dsan=thread` to enable the thread sanitizer with clang
* `-Dsan=address` to enable the address sanitizer with clang
Several other infrequently used options are available - run `ccmake` or
`cmake-gui` for a list of all options.
## Unit Tests (Recommended)
`rippled` builds a set of unit tests into the server executable. To run these unit


@@ -1,13 +0,0 @@
--- /usr/include/boost/config/compiler/clang.hpp 2013-07-20 13:17:10.000000000 -0400
+++ /usr/include/boost/config/compiler/clang.rippled.hpp 2014-03-11 16:40:51.000000000 -0400
@@ -39,6 +39,10 @@
// Clang supports "long long" in all compilation modes.
#define BOOST_HAS_LONG_LONG
+#if defined(__SIZEOF_INT128__)
+# define BOOST_HAS_INT128
+#endif
+
//
// Dynamic shared object (DSO) and dynamic-link library (DLL) support
//


@@ -1,10 +0,0 @@
--- /usr/include/boost/bimap/detail/debug/static_error.hpp 2008-03-22 17:45:55.000000000 -0400
+++ /usr/include/boost/bimap/detail/debug/static_error.rippled.hpp 2014-03-12 19:40:05.000000000 -0400
@@ -25,7 +25,6 @@
// a static error.
/*===========================================================================*/
#define BOOST_BIMAP_STATIC_ERROR(MESSAGE,VARIABLES) \
- struct BOOST_PP_CAT(BIMAP_STATIC_ERROR__,MESSAGE) {}; \
BOOST_MPL_ASSERT_MSG(false, \
BOOST_PP_CAT(BIMAP_STATIC_ERROR__,MESSAGE), \
VARIABLES)

File diff suppressed because it is too large

Jamroot

@@ -1,11 +0,0 @@
#
# Copyright (c) 2013-2016 Vinnie Falco (vinnie dot falco at gmail dot com)
#
# Distributed under the Boost Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#
import boost ;
boost.use-project ;

Jenkinsfile

@@ -3,15 +3,19 @@
import groovy.json.JsonOutput
import java.text.*
def all_status = [:]
def commit_id = ''
all_status = [:]
commit_id = ''
git_fork = 'ripple'
git_repo = 'rippled'
//
// this is not the actual token, but an ID/key into the jenkins
// credential store which httpRequest can access.
def github_cred = '6bd3f3b9-9a35-493e-8aef-975403c82d3e'
//
github_cred = '6bd3f3b9-9a35-493e-8aef-975403c82d3e'
//
// root API url for our repo (default, overridden below)
//
String github_repo = 'https://api.github.com/repos/ripple/rippled'
github_api = 'https://api.github.com/repos/ripple/rippled'
try {
stage ('Startup Checks') {
@@ -20,15 +24,7 @@ try {
// a filesystem (although we just pass it text)
node {
checkout scm
commit_id = sh(
script: 'git rev-parse HEAD',
returnStdout: true)
commit_id = commit_id.trim()
echo "commit ID is ${commit_id}"
commit_log = sh (
script: "git show --name-status ${commit_id}",
returnStdout: true)
printGitInfo (commit_id, commit_log)
commit_id = getCommitID()
//
// NOTE this getUserRemoteConfigs call requires a one-time
// In-process Script Approval (configure jenkins). We need this
@@ -37,15 +33,16 @@ try {
def remote_url = scm.getUserRemoteConfigs()[0].getUrl()
if (remote_url) {
echo "GIT URL scm: $remote_url"
def fork = remote_url.tokenize('/')[2]
def repo = remote_url.tokenize('/')[3].split('\\.')[0]
echo "GIT FORK: $fork"
echo "GIT REPO: $repo"
github_repo = "https://api.github.com/repos/${fork}/${repo}"
echo "API URL REPO: $github_repo"
git_fork = remote_url.tokenize('/')[2]
git_repo = remote_url.tokenize('/')[3].split('\\.')[0]
echo "GIT FORK: $git_fork"
echo "GIT REPO: $git_repo"
github_api = "https://api.github.com/repos/${git_fork}/${git_repo}"
echo "API URL REPO: $github_api"
}
if (env.CHANGE_AUTHOR) {
def collab_found = false;
//
// this means we have some sort of PR , so verify the author
//
@@ -60,10 +57,9 @@ try {
def response = httpRequest(
timeout: 10,
authentication: github_cred,
url: "${github_repo}/collaborators")
url: "${github_api}/collaborators")
def collab_data = readJSON(
text: response.content)
collab_found = false;
for (collaborator in collab_data) {
if (collaborator['login'] == "$CHANGE_AUTHOR") {
echo "$CHANGE_AUTHOR is a collaborator!"
@@ -73,108 +69,286 @@ try {
}
if (! collab_found) {
manager.addShortText(
"Author of this change is not a collaborator!",
"Crimson",
"white",
"0px",
"white")
all_status['startup'] =
[false, 'Author Check', "$CHANGE_AUTHOR is not a collaborator!"]
error "$CHANGE_AUTHOR does not appear to be a collaborator...bailing on this build"
echo "$CHANGE_AUTHOR is not a collaborator - waiting for manual approval."
try {
response = httpRequest(
timeout: 10,
authentication: github_cred,
url: getCommentURL(),
contentType: 'APPLICATION_JSON',
httpMode: 'POST',
requestBody: JsonOutput.toJson([
body: """
**Thank you** for your submission. It will be reviewed soon and submitted for processing in CI.
"""
])
)
}
catch (e) {
echo 'had a problem interacting with github...comments are probably not updated'
}
try {
input (
message: "User $CHANGE_AUTHOR has submitted PR #$CHANGE_ID. " +
"**Please review** the changes for any CI/security concerns " +
"and then decide whether to proceed with building.")
}
catch(e) {
def user = e.getCauses()[0].getUser().toString()
all_status['startup'] = [
false,
'Approval Check',
"Build aborted by [${user}]",
"[console](${env.BUILD_URL}/console)"]
error "Aborted by: [${user}]"
}
}
}
}
}
stage ('Parallel Build') {
def variants = [
'coverage',
'clang.debug.unity',
'clang.debug.nounity',
'gcc.debug.unity',
'gcc.debug.nounity',
'clang.release.unity',
'gcc.release.unity'] as String[]
String[][] variants = [
['gcc.Release' ,'-Dassert=ON' ,'MANUAL_TESTS=true' ],
['gcc.Debug' ,'-Dcoverage=ON' ,'TARGET=coverage_report', 'SKIP_TESTS=true'],
['docs' ,'' ,'TARGET=docs' ],
['msvc.Debug' ],
['msvc.Debug' ,'' ,'NINJA_BUILD=true' ],
['msvc.Debug' ,'-Dunity=OFF' ],
['msvc.Release' ],
['clang.Debug' ],
['clang.Debug' ,'-Dunity=OFF' ],
['gcc.Debug' ],
['gcc.Debug' ,'-Dunity=OFF' ],
['clang.Release' ,'-Dassert=ON' ],
['gcc.Release' ,'-Dassert=ON' ],
['gcc.Debug' ,'-Dstatic=OFF' ],
['gcc.Debug' ,'-Dstatic=OFF -DBUILD_SHARED_LIBS=ON' ],
['gcc.Debug' ,'' ,'NINJA_BUILD=true' ],
['clang.Debug' ,'-Dunity=OFF -Dsan=address' ,'PARALLEL_TESTS=false', 'DEBUGGER=false'],
['clang.Debug' ,'-Dunity=OFF -Dsan=undefined' ,'PARALLEL_TESTS=false'],
// TODO - tsan runs currently fail/hang
//['clang.Debug' ,'-Dunity=OFF -Dsan=thread' ,'PARALLEL_TESTS=false'],
]
// create a map of all builds
// that we want to run. The map
// is string keys and node{} object values
def builds = [:]
for (int index = 0; index < variants.size(); index++) {
def bldtype = variants[index]
builds[bldtype] = {
node('rippled-dev') {
def bldtype = variants[index][0]
def cmake_extra = variants[index].size() > 1 ? variants[index][1] : ''
def bldlabel = bldtype + cmake_extra
def extra_env = variants[index].size() > 2 ? variants[index][2..-1] : []
for (int j = 0; j < extra_env.size(); j++) {
bldlabel += "_" + extra_env[j]
}
bldlabel = bldlabel.replace('-', '_')
bldlabel = bldlabel.replace(' ', '')
bldlabel = bldlabel.replace('=', '_')
def compiler = getFirstPart(bldtype)
def config = getSecondPart(bldtype)
def target = 'install' // currently ignored for windows builds
if (compiler == 'docs') {
compiler = 'gcc'
config = 'Release'
target = 'docs'
}
def cc =
(compiler == 'clang') ? '/opt/llvm-5.0.1/bin/clang' : 'gcc'
def cxx =
(compiler == 'clang') ? '/opt/llvm-5.0.1/bin/clang++' : 'g++'
def ucc = isNoUnity(cmake_extra) ? 'true' : 'false'
def node_type =
(compiler == 'msvc') ? 'rippled-win' : 'rippled-dev'
// the default disposition for parallel test..disabled
// for coverage, enabled otherwise. Can still be overridden
// by explicitly setting with extra env settings above.
def pt = isCoverage(cmake_extra) ? 'false' : 'true'
def max_minutes = 25
def env_vars = [
"TARGET=${target}",
"BUILD_TYPE=${config}",
"COMPILER=${compiler}",
"PARALLEL_TESTS=${pt}",
'BUILD=cmake',
"MAX_TIME=${max_minutes}m",
"BUILD_DIR=${bldlabel}",
"CMAKE_EXTRA_ARGS=-Dwerr=ON ${cmake_extra}",
'VERBOSE_BUILD=true']
builds[bldlabel] = {
node(node_type) {
checkout scm
dir ('build') {
deleteDir()
}
def cdir = upDir(pwd())
echo "BASEDIR: ${cdir}"
def compiler = getCompiler(bldtype)
def target = getTarget(bldtype)
if (compiler == "coverage") {
compiler = 'gcc'
}
echo "COMPILER: ${compiler}"
echo "TARGET: ${target}"
def clang_cc =
(compiler == "clang") ? "${LLVM_ROOT}/bin/clang" : ''
def clang_cxx =
(compiler == "clang") ? "${LLVM_ROOT}/bin/clang++" : ''
def ucc = isNoUnity(target) ? 'true' : 'false'
echo "BUILD_TYPE: ${config}"
echo "USE_CC: ${ucc}"
withEnv(["CCACHE_BASEDIR=${cdir}",
"CCACHE_NOHASHDIR=true",
'LCOV_ROOT=""',
"TARGET=${target}",
"CC=${compiler}",
'BUILD=cmake',
'VERBOSE_BUILD=true',
"CLANG_CC=${clang_cc}",
"CLANG_CXX=${clang_cxx}",
"USE_CCACHE=${ucc}"])
{
myStage(bldtype)
try {
sh "ccache -s > ${bldtype}.txt"
// the devtoolset from SCL gives us a recent gcc. It's
// not strictly needed when we are building with clang,
// but it doesn't seem to interfere either
sh "source /opt/rh/devtoolset-6/enable && " +
"(/usr/bin/time -p ./bin/ci/ubuntu/build-and-test.sh 2>&1) 2>&1 " +
">> ${bldtype}.txt"
sh "ccache -s >> ${bldtype}.txt"
env_vars.addAll([
"NIH_CACHE_ROOT=${cdir}"])
if (compiler == 'msvc') {
env_vars.addAll([
'BOOST_ROOT=c:\\lib\\boost_1_67',
'PROJECT_NAME=rippled',
'MSBUILDDISABLENODEREUSE=1', // this ENV setting is probably redundant since we also pass /nr:false to msbuild
'OPENSSL_ROOT=c:\\OpenSSL-Win64'])
}
else {
env_vars.addAll([
'NINJA_BUILD=false',
"CCACHE_BASEDIR=${cdir}",
'PLANTUML_JAR=/opt/plantuml/plantuml.jar',
'APP_ARGS=--unittest-ipv6',
'CCACHE_NOHASHDIR=true',
"CC=${cc}",
"CXX=${cxx}",
'LCOV_ROOT=""',
'PATH+CMAKE_BIN=/opt/local/cmake',
'GDB_ROOT=/opt/local/gdb',
'BOOST_ROOT=/opt/local/boost_1_67_0',
"USE_CCACHE=${ucc}"])
}
if (extra_env.size() > 0) {
env_vars.addAll(extra_env)
}
// try to figure out codecov token to use. Look for
// MY_CODECOV_TOKEN id first so users can set that
// on job scope but then default to RIPPLED_CODECOV_TOKEN
// which should be globally scoped
def codecov_token = ''
try {
withCredentials( [string( credentialsId: 'MY_CODECOV_TOKEN', variable: 'CODECOV_TOKEN')]) {
codecov_token = env.CODECOV_TOKEN
}
finally {
def outstr = readFile("${bldtype}.txt")
def st = getResults(outstr)
def time = getTime(outstr)
def fail_count = getFailures(outstr)
outstr = null
def txtcolor =
fail_count == 0 ? "DarkGreen" : "Crimson"
def shortbld = bldtype
shortbld = shortbld.replace('debug', 'dbg')
shortbld = shortbld.replace('release', 'rel')
shortbld = shortbld.replace('unity', 'un')
manager.addShortText(
"${shortbld}: ${st}, t: ${time}",
txtcolor,
"white",
"0px",
"white")
archive("${bldtype}.txt")
lock('rippled_dev_status') {
all_status[bldtype] =
[fail_count == 0, bldtype, "${st}, t: ${time}"]
}
catch (e) {
// this might throw when MY_CODECOV_TOKEN doesn't exist
}
if (codecov_token == '') {
withCredentials( [string( credentialsId: 'RIPPLED_CODECOV_TOKEN', variable: 'CODECOV_TOKEN')]) {
codecov_token = env.CODECOV_TOKEN
}
}
env_vars.addAll(["CODECOV_TOKEN=${codecov_token}"])
withEnv(env_vars) {
myStage(bldlabel)
try {
timeout(
time: max_minutes * 2,
units: 'MINUTES')
{
if (compiler == 'msvc') {
powershell "Remove-Item -Path \"${bldlabel}.txt\" -Force -ErrorAction Ignore"
// we capture stdout to variable because I could
// not figure out how to make powershell redirect internally
output = powershell (
returnStdout: true,
script: windowsBuildCmd())
// if the powershell command fails (has nonzero exit)
// then the command above throws, we don't get our output,
// and we never create this output file.
// SEE https://issues.jenkins-ci.org/browse/JENKINS-44930
// Alternatively, figure out how to reliably redirect
// all output above to a file (Start/Stop transcript does not work)
writeFile(
file: "${bldlabel}.txt",
text: output)
}
else {
sh "rm -fv ${bldlabel}.txt"
// execute the bld command in a redirecting shell
// to capture output
sh redhatBuildCmd(bldlabel)
}
}
}
}
}
}
finally {
if (bldtype == 'docs') {
publishHTML(
allowMissing: true,
alwaysLinkToLastBuild: true,
keepAll: true,
reportName: 'Doxygen',
reportDir: "build/${bldlabel}/html_doc",
reportFiles: 'index.html')
}
if (isCoverage(cmake_extra)) {
publishHTML(
allowMissing: true,
alwaysLinkToLastBuild: false,
keepAll: true,
reportName: 'Coverage',
reportDir: "build/${bldlabel}/coverage",
reportFiles: 'index.html')
}
def envs = ''
for (int j = 0; j < extra_env.size(); j++) {
envs += ", <br/>" + extra_env[j]
}
def cmake_txt = cmake_extra
if (cmake_txt != '') {
cmake_txt = " <br/>" + cmake_txt
}
def st = reportStatus(bldlabel, bldtype + cmake_txt + envs, env.BUILD_URL)
lock('rippled_dev_status') {
all_status[bldlabel] = st
}
} //try-catch-finally
} //withEnv
} //node
} //builds item
} //for variants
// Also add a single build job for doing the RPM build
// on a docker node
builds['rpm'] = {
node('docker') {
def bldlabel = 'rpm'
def remote =
(git_fork == 'ripple') ? 'origin' : git_fork
withCredentials(
[string(
credentialsId: 'RIPPLED_RPM_ROLE_ID',
variable: 'ROLE_ID')])
{
withEnv([
'docker_image=artifactory.ops.ripple.com:6555/rippled-rpm-builder:latest',
"git_commit=${commit_id}",
"git_remote=${remote}",
"rpm_release=${env.BUILD_ID}"])
{
try {
sh "rm -fv ${bldlabel}.txt"
sh "if [ -d rpm-out ]; then rm -rf rpm-out; fi"
sh rpmBuildCmd(bldlabel)
}
finally {
def st = reportStatus(bldlabel, bldlabel, env.BUILD_URL)
lock('rippled_dev_status') {
all_status[bldlabel] = st
}
archiveArtifacts(
artifacts: 'rpm-out/*.rpm',
allowEmptyArchive: true)
}
} //withEnv
} //withCredentials
} //node
}
// this actually executes all the builds we just defined
// above, in parallel as slaves are available
parallel builds
}
}
@@ -182,76 +356,15 @@ finally {
// anything here should run always...
stage ('Final Status') {
node {
def start_time = new Date()
def sdf = new SimpleDateFormat("yyyyMMdd - HH:mm:ss")
def datestamp = sdf.format(start_time)
def results = """
## Jenkins Build Summary
Built from [this commit](https://github.com/ripple/rippled/commit/${commit_id})
Built at __${datestamp}__
### Test Results
Build Type | Result | Status
---------- | ------ | ------
"""
for ( e in all_status) {
results += e.value[1] + " | " + e.value[2] + " | " +
(e.value[0] ? "PASS :white_check_mark: " : "FAIL :red_circle: ") + "\n"
}
results += "\n"
echo "FINAL BUILD RESULTS"
echo results
def results = makeResultText()
try {
def url_comment = ""
if (env.CHANGE_ID && env.CHANGE_ID ==~ /\d+/) {
//
// CHANGE_ID indicates we are building a PR
// find PR comments
//
def resp = httpRequest(
timeout: 10,
authentication: github_cred,
url: "${github_repo}/pulls/$CHANGE_ID")
def result = readJSON(text: resp.content)
//
// follow issue comments link
//
url_comment = result['_links']['issue']['href'] + '/comments'
}
else {
//
// if not a PR, just search comments for our commit ID
//
url_comment =
"${github_repo}/commits/${commit_id}/comments"
}
def response = httpRequest(
timeout: 10,
authentication: github_cred,
url: url_comment)
def data = readJSON(text: response.content)
def comment_id = 0
def mode = 'POST'
// see if we can find an existing comment here with
// a heading that matches ours...
for (comment in data) {
if (comment['body'] =~ /(?m)^##\s+Jenkins Build/) {
comment_id = comment['id']
echo "existing status comment ${comment_id} found"
url_comment = comment['url']
mode = 'PATCH'
break;
}
}
def res = getCommentID() //get array return b/c jenkins does not allow multiple direct return/assign
def comment_id = res[0]
def url_comment = res[1]
def mode = 'PATCH'
if (comment_id == 0) {
echo "no existing status comment found"
echo 'no existing status comment found'
mode = 'POST'
}
def body = JsonOutput.toJson([
@@ -266,8 +379,8 @@ Build Type | Result | Status
httpMode: mode,
requestBody: body)
}
catch (any) {
echo "had a problem interacting with github...status is probably not updated"
catch (e) {
echo 'had a problem interacting with github...status is probably not updated'
}
}
}
@@ -294,37 +407,162 @@ ${log}
"""
}
@NonCPS
def getResults(text) {
// example:
/// 194.5s, 154 suites, 948 cases, 360485 tests total, 0 failures
def matcher = text =~ /(\d+) cases, (\d+) tests total, (\d+) (failure(s?))/
matcher ? matcher[0][1] + " cases, " + matcher[0][3] + " failed" : "no test results"
def makeResultText () {
def start_time = new Date()
def sdf = new SimpleDateFormat('yyyyMMdd - HH:mm:ss')
def datestamp = sdf.format(start_time)
def results = """
## Jenkins Build Summary
Built from [this commit](https://github.com/${git_fork}/${git_repo}/commit/${commit_id})
Built at __${datestamp}__
### Test Results
Build Type | Log | Result | Status
---------- | --- | ------ | ------
"""
for ( e in all_status) {
results += e.value[1] + ' | ' + e.value[3] + ' | ' + e.value[2] + ' | ' +
(e.value[0] ? 'PASS :white_check_mark: ' : 'FAIL :red_circle: ') + '\n'
}
results += '\n'
echo 'FINAL BUILD RESULTS'
echo results
results
}
def getFailures(text) {
def getCommentURL () {
def url_c = ''
if (env.CHANGE_ID && env.CHANGE_ID ==~ /\d+/) {
//
// CHANGE_ID indicates we are building a PR
// find PR comments
//
def resp = httpRequest(
timeout: 10,
authentication: github_cred,
url: "${github_api}/pulls/$CHANGE_ID")
def result = readJSON(text: resp.content)
//
// follow issue comments link
//
url_c = result['_links']['issue']['href'] + '/comments'
}
else {
//
// if not a PR, just search comments for our commit ID
//
url_c =
"${github_api}/commits/${commit_id}/comments"
}
url_c
}
def getCommentID () {
def url_c = getCommentURL()
def response = httpRequest(
timeout: 10,
authentication: github_cred,
url: url_c)
def data = readJSON(text: response.content)
def comment_id = 0
// see if we can find an existing comment here with
// a heading that matches ours...
for (comment in data) {
if (comment['body'] =~ /(?m)^##\s+Jenkins Build/) {
comment_id = comment['id']
echo "existing status comment ${comment_id} found"
url_c = comment['url']
break;
}
}
[comment_id, url_c]
}
def getCommitID () {
def cid = sh (
script: 'git rev-parse HEAD',
returnStdout: true)
cid = cid.trim()
echo "commit ID is ${cid}"
commit_log = sh (
script: "git show --name-status ${cid}",
returnStdout: true)
printGitInfo (cid, commit_log)
cid
}
@NonCPS
def getResults(text, label) {
// example:
/// 194.5s, 154 suites, 948 cases, 360485 tests total, 0 failures
def matcher = text =~ /(\d+) tests total, (\d+) (failure(s?))/
/// 194.5s, 154 suites, 948 cases, 360485 tests total, 0 failures
// or build log format:
// [msvc.release] 71.3s, 162 suites, 995 cases, 318901 tests total, 1 failure
def matcher =
text == '' ?
manager.getLogMatcher(/\[${label}\].+?(\d+) case[s]?, (\d+) test[s]? total, (\d+) (failure(s?))/) :
text =~ /(\d+) case[s]?, (\d+) test[s]? total, (\d+) (failure(s?))/
matcher ? matcher[0][1] + ' cases, ' + matcher[0][3] + ' failed' : 'no test results'
}
@NonCPS
def getFailures(text, label) {
// [see above for format]
def matcher =
text == '' ?
manager.getLogMatcher(/\[${label}\].+?(\d+) test[s]? total, (\d+) (failure(s?))/) :
text =~ /(\d+) test[s]? total, (\d+) (failure(s?))/
// if we didn't match, then return 1 since something is
// probably wrong, e.g. maybe the build failed...
matcher ? matcher[0][2] as Integer : 1i
}
@NonCPS
def getCompiler(bld) {
def getTime(text, label) {
// look for text following a label 'real' for
// wallclock time. Some `time`s report fractional
// seconds and we can omit those in what we report
def matcher =
text == '' ?
manager.getLogMatcher(/(?m)^\[${label}\]\s+real\s+(.+)\.(\d+?)[s]?/) :
text =~ /(?m)^real\s+(.+)\.(\d+?)[s]?/
if (matcher) {
return matcher[0][1] + 's'
}
// alternatively, look for powershell elapsed time
// format, e.g. :
// TotalSeconds : 523.2140529
def matcher2 =
text == '' ?
manager.getLogMatcher(/(?m)^\[${label}\]\s+TotalSeconds\s+:\s+(\d+)\.(\d+?)?/) :
text =~ /(?m)^TotalSeconds\s+:\s+(\d+)\.(\d+?)?/
matcher2 ? matcher2[0][1] + 's' : 'n/a'
}
@NonCPS
def getFirstPart(bld) {
def matcher = bld =~ /^(.+?)\.(.+)$/
matcher ? matcher[0][1] : bld
}
@NonCPS
def isNoUnity(bld) {
def matcher = bld =~ /\.nounity\s*$/
def matcher = bld =~ /-Dunity=(off|OFF)/
matcher ? true : false
}
@NonCPS
def getTarget(bld) {
def isCoverage(bld) {
def matcher = bld =~ /-Dcoverage=(on|ON)/
matcher ? true : false
}
@NonCPS
def getSecondPart(bld) {
def matcher = bld =~ /^(.+?)\.(.+)$/
matcher ? matcher[0][2] : bld
}
@@ -333,16 +571,209 @@ def getTarget(bld) {
// functions in groovy....
@NonCPS
def upDir(path) {
def matcher = path =~ /^(.+)\/(.+?)/
def matcher = path =~ /^(.+)[\/\\](.+?)/
matcher ? matcher[0][1] : path
}
@NonCPS
def getTime(text) {
// look for text following a label 'real' for
// wallclock time. Some `time`s report fractional
// seconds and we can omit those in what we report
def matcher = text =~ /(?m)^real\s+(.+)\.(\d+?)[s]?/
matcher ? matcher[0][1] + "s" : "n/a"
// the shell command used for building on redhat
def redhatBuildCmd(bldlabel) {
'''\
#!/bin/bash
set -ex
log_file=''' + "${bldlabel}.txt" + '''
exec 3>&1 1>>${log_file} 2>&1
ccache -s
source /opt/rh/devtoolset-7/enable
/usr/bin/time -p ./bin/ci/ubuntu/build-and-test.sh 2>&1
ccache -s
'''
}
// the powershell command used for building on windows
def windowsBuildCmd() {
'''
# Enable streams 3-6
$WarningPreference = 'Continue'
$VerbosePreference = 'Continue'
$DebugPreference = 'Continue'
$InformationPreference = 'Continue'
Invoke-BatchFile "${env:ProgramFiles(x86)}\\Microsoft Visual Studio\\2017\\Community\\VC\\Auxiliary\\Build\\vcvarsall.bat" x86_amd64
Get-ChildItem env:* | Sort-Object name
cl
cmake --version
New-Item -ItemType Directory -Force -Path "build/$env:BUILD_DIR" -ErrorAction Stop
$sw = [Diagnostics.Stopwatch]::StartNew()
try {
Push-Location "build/$env:BUILD_DIR"
if ($env:NINJA_BUILD -eq "true") {
Invoke-Expression "& cmake -G`"Ninja`" -DCMAKE_BUILD_TYPE=$env:BUILD_TYPE -DCMAKE_VERBOSE_MAKEFILE=ON $env:CMAKE_EXTRA_ARGS ../.."
}
else {
Invoke-Expression "& cmake -G`"Visual Studio 15 2017 Win64`" -DCMAKE_VERBOSE_MAKEFILE=ON $env:CMAKE_EXTRA_ARGS ../.."
}
if ($LastExitCode -ne 0) { throw "CMake failed" }
## as of 01/2018, DO NOT USE cmake to run the actual build step. for some
## reason, cmake spawning the build under jenkins causes MSBUILD/ninja to
## get stuck at the end of the build. Perhaps cmake is spawning
## incorrectly or failing to pass certain params
if ($env:NINJA_BUILD -eq "true") {
ninja -j $env:NUMBER_OF_PROCESSORS -v
}
else {
msbuild /fl /m /nr:false /p:Configuration="$env:BUILD_TYPE" /p:Platform=x64 /p:GenerateFullPaths=True /v:normal /nologo /clp:"ShowCommandLine;DisableConsoleColor" "$env:PROJECT_NAME.vcxproj"
}
if ($LastExitCode -ne 0) { throw "CMake build failed" }
$exe = "./$env:BUILD_TYPE/$env:PROJECT_NAME"
if ($env:NINJA_BUILD -eq "true") {
$exe = "./$env:PROJECT_NAME"
}
"Exe is at $exe"
$params = '--unittest', '--quiet', '--unittest-log'
if ($env:PARALLEL_TESTS -eq "true") {
$params = $params += "--unittest-jobs=$env:NUMBER_OF_PROCESSORS"
}
& $exe $params
if ($LastExitCode -ne 0) { throw "Unit tests failed" }
}
catch {
throw
}
finally {
$sw.Stop()
$sw.Elapsed
Pop-Location
}
'''
}
// the shell command used for building an RPM
def rpmBuildCmd(bldlabel) {
'''\
#!/bin/bash
set -ex
log_file=''' + "${bldlabel}.txt" + '''
exec 3>&1 1>>${log_file} 2>&1
# Vault Steps
SECRET_ID=$(cat /.vault/rippled-build-role/secret-id)
export VAULT_TOKEN=$(/usr/local/ripple/ops-toolbox/vault/vault_approle_auth -r ${ROLE_ID} -s ${SECRET_ID} -t)
/usr/local/ripple/ops-toolbox/vault/vault_get_sts_token.py -r rippled-build-role
mkdir -p rpm-out
docker pull "${docker_image}"
echo "Running build container"
docker run --rm \
-v $PWD/rpm-out:/opt/rippled-rpm/out \
-e "GIT_COMMIT=$git_commit" \
-e "GIT_REMOTE=$git_remote" \
-e "RPM_RELEASE=$rpm_release" \
"${docker_image}"
. rpm-out/build_vars
cd rpm-out
tar xvf rippled-*.tar.gz
ls -la *.rpm
#################################
## for now we don't want the src
## and debugsource rpms for testing
## or archiving...
#################################
rm rippled-debugsource*.rpm
rm *.src.rpm
mkdir rpm-main
cp *.rpm rpm-main
cd rpm-main
cd ../..
cat > test_rpm.sh << "EOL"
#!/bin/bash
function error {
echo $1
exit 1
}
yum install -y yum-utils openssl-static zlib-static
rpm -i /opt/rippled-rpm/*.rpm
rc=$?; if [[ $rc != 0 ]]; then
error "error installing rpms"
fi
/opt/ripple/bin/rippled --unittest
rc=$?; if [[ $rc != 0 ]]; then
error "rippled --unittest failed"
fi
/opt/ripple/bin/validator-keys --unittest
rc=$?; if [[ $rc != 0 ]]; then
error "validator-keys --unittest failed"
fi
EOL
chmod +x test_rpm.sh
echo "Running test container"
docker run --rm \
-v $PWD/rpm-out/rpm-main:/opt/rippled-rpm \
-v $PWD:/opt/rippled --entrypoint /opt/rippled/test_rpm.sh \
centos:latest
'''
}
// post processing step after each build:
// * archives the log file
// * adds short description/status to build status
// * returns an array of result info to add to the all_build summary
def reportStatus(label, type, bldurl) {
def outstr = ''
def loglink = "[console](${bldurl}/console)"
def logfile = "${label}.txt"
if ( fileExists(logfile) ) {
archiveArtifacts( artifacts: logfile )
outstr = readFile(logfile)
loglink = "[logfile](${bldurl}/artifact/${logfile})"
}
def st = getResults(outstr, label)
def time = getTime(outstr, label)
def fail_count = getFailures(outstr, label)
outstr = null
def txtcolor =
fail_count == 0 ? 'DarkGreen' : 'Crimson'
def shortbld = label
// this is just an attempt to shorten the
// summary text label to the point of absurdity..
shortbld = shortbld.replace('Debug', 'dbg')
shortbld = shortbld.replace('Release', 'rel')
shortbld = shortbld.replace('true', 'Y')
shortbld = shortbld.replace('false', 'N')
shortbld = shortbld.replace('Dcoverage', 'cov')
shortbld = shortbld.replace('Dassert', 'asrt')
shortbld = shortbld.replace('Dunity', 'unty')
shortbld = shortbld.replace('Dsan=address', 'asan')
shortbld = shortbld.replace('Dsan=thread', 'tsan')
shortbld = shortbld.replace('Dsan=undefined', 'ubsan')
shortbld = shortbld.replace('PARALLEL_TEST', 'PL')
shortbld = shortbld.replace('MANUAL_TESTS', 'MAN')
shortbld = shortbld.replace('NINJA_BUILD', 'ninja')
shortbld = shortbld.replace('DEBUGGER', 'gdb')
shortbld = shortbld.replace('ON', 'Y')
shortbld = shortbld.replace('OFF', 'N')
manager.addShortText(
"${shortbld}: ${st}, t: ${time}",
txtcolor,
'white',
'0px',
'white')
[fail_count == 0, type, "${st}, t: ${time}", loglink]
}

README.md (114 changed lines)

@@ -1,94 +1,56 @@
![Ripple](/images/ripple.png)
# The XRP Ledger
**Do you work at a digital asset exchange or wallet provider?**
The XRP Ledger is a decentralized cryptographic ledger powered by a network of peer-to-peer servers. The XRP Ledger uses a novel Byzantine Fault Tolerant consensus algorithm to settle and record transactions in a secure distributed database without a central operator.
Please [contact us](mailto:support@ripple.com). We can help guide your integration.
## XRP
XRP is a public, counterparty-less asset native to the XRP Ledger, and is designed to bridge the many different currencies in use worldwide. XRP is traded on the open market and is available for anyone to access. The XRP Ledger was created in 2012 with a finite supply of 100 billion units of XRP. Its creators gifted 80 billion XRP to a company, now called [Ripple](https://ripple.com/), to develop the XRP Ledger and its ecosystem. Ripple uses XRP to help build the Internet of Value, ushering in a world in which money moves as fast and efficiently as information does today.
# What is Ripple?
Ripple is a network of computers which use the [Ripple consensus algorithm](https://www.youtube.com/watch?v=pj1QVb1vlC0) to atomically settle and record
transactions on a secure distributed database, the Ripple Consensus Ledger
(RCL). Because of its distributed nature, the RCL offers transaction immutability
without a central operator. The RCL contains a built-in currency exchange and its
path-finding algorithm finds competitive exchange rates across order books
and currency pairs.
## `rippled`
The server software that powers the XRP Ledger is called `rippled` and is available in this repository under the permissive [ISC open-source license](LICENSE). The `rippled` server is written primarily in C++ and runs on a variety of platforms.
### Key Features
- **Distributed**
- Direct account-to-account settlement with no central operator
- Decentralized global market for competitive FX
- **Secure**
- Transactions are cryptographically signed using ECDSA or Ed25519
- Multi-signing capabilities
- **Scalable**
- Capacity to process the world's cross-border payments volume
- Easy access to liquidity through a competitive FX marketplace
## Cross-border payments
Ripple enables banks to settle cross-border payments in real-time, with
end-to-end transparency, and at lower costs. Banks can provide liquidity
for FX themselves or source it from third parties.
# Key Features of the XRP Ledger
As Ripple adoption grows, so does the number of currencies and counterparties.
Liquidity providers need to maintain accounts with each counterparty for
each currency, a capital- and time-intensive endeavor that spreads liquidity
thin. Further, some transactions, such as exotic currency trades, will require
multiple trading parties, who each layer costs to the transaction. Thin
liquidity and many intermediary trading parties make competitive pricing
challenging.
- **[Censorship-Resistant Transaction Processing][]:** No single party decides which transactions succeed or fail, and no one can "roll back" a transaction after it completes. As long as those who choose to participate in the network keep it healthy, they can settle transactions in seconds.
- **[Fast, Efficient Consensus Algorithm][]:** The XRP Ledger's consensus algorithm settles transactions in 4 to 5 seconds, processing at a throughput of up to 1500 transactions per second. These properties put XRP at least an order of magnitude ahead of other top digital assets.
- **[Finite XRP Supply][]:** When the XRP Ledger began, 100 billion XRP were created, and no more XRP will ever be created. (Each XRP is subdivisible down to 6 decimal places, for a grand total of 100 quadrillion _drops_ of XRP.) The available supply of XRP decreases slowly over time as small amounts are destroyed to pay transaction costs.
- **[Responsible Software Governance][]:** A team of full-time, world-class developers at Ripple maintain and continually improve the XRP Ledger's underlying software with contributions from the open-source community. Ripple acts as a steward for the technology and an advocate for its interests, and builds constructive relationships with governments and financial institutions worldwide.
- **[Secure, Adaptable Cryptography][]:** The XRP Ledger relies on industry standard digital signature systems like ECDSA (the same scheme used by Bitcoin) but also supports modern, efficient algorithms like Ed25519. The extensible nature of the XRP Ledger's software makes it possible to add and disable algorithms as the state of the art in cryptography advances.
- **[Modern Features for Smart Contracts][]:** Features like Escrow, Checks, and Payment Channels support cutting-edge financial applications including the [Interledger Protocol](https://interledger.org/). This toolbox of advanced features comes with safety features like a process for amending the network and separate checks against invariant constraints.
- **[On-Ledger Decentralized Exchange][]:** In addition to all the features that make XRP useful on its own, the XRP Ledger also has a fully-functional accounting system for tracking and trading obligations denominated in any way users want, and an exchange built into the protocol. The XRP Ledger can settle long, cross-currency payment paths and exchanges of multiple currencies in atomic transactions, bridging gaps of trust with XRP.
![Flow - Direct](images/flow1.png)
[Censorship-Resistant Transaction Processing]: https://developers.ripple.com/xrp-ledger-overview.html#censorship-resistant-transaction-processing
[Fast, Efficient Consensus Algorithm]: https://developers.ripple.com/xrp-ledger-overview.html#fast-efficient-consensus-algorithm
[Finite XRP Supply]: https://developers.ripple.com/xrp-ledger-overview.html#finite-xrp-supply
[Responsible Software Governance]: https://developers.ripple.com/xrp-ledger-overview.html#responsible-software-governance
[Secure, Adaptable Cryptography]: https://developers.ripple.com/xrp-ledger-overview.html#secure-adaptable-cryptography
[Modern Features for Smart Contracts]: https://developers.ripple.com/xrp-ledger-overview.html#modern-features-for-smart-contracts
[On-Ledger Decentralized Exchange]: https://developers.ripple.com/xrp-ledger-overview.html#on-ledger-decentralized-exchange
### XRP as a Bridge Currency
Ripple can bridge even exotic currency pairs directly through XRP. Similar to
USD in today's currency market, XRP allows liquidity providers to focus on
offering competitive FX rates on fewer pairs and adding depth to order books.
Unlike USD, trading through XRP does not require bank accounts, service fees,
counterparty risk, or additional operational costs. By using XRP, liquidity
providers can specialize in certain currency corridors, reduce operational
costs, and ultimately, offer more competitive FX pricing.
![Flow - Bridged over XRP](images/flow2.png)
# rippled - Ripple server
`rippled` is the reference server implementation of the Ripple
protocol. To learn more about how to build and run a `rippled`
server, visit https://ripple.com/build/rippled-setup/
## Source Code
[![travis-ci.org: Build Status](https://travis-ci.org/ripple/rippled.png?branch=develop)](https://travis-ci.org/ripple/rippled)
[![codecov.io: Code Coverage](https://codecov.io/gh/ripple/rippled/branch/develop/graph/badge.svg)](https://codecov.io/gh/ripple/rippled)
### License
`rippled` is open source and permissively licensed under the
ISC license. See the LICENSE file for more details.
### Repository Contents
#### Repository Contents
| Folder | Contents |
|:-----------|:-------------------------------------------------|
| `./bin` | Scripts and data files for Ripple integrators. |
| `./Builds` | Platform-specific guides for building `rippled`. |
| `./docs` | Source documentation files and doxygen config. |
| `./cfg` | Example configuration files. |
| `./src` | Source code. |
| Folder | Contents |
|---------|----------|
| ./bin | Scripts and data files for Ripple integrators. |
| ./build | Intermediate and final build outputs. |
| ./Builds| Platform or IDE-specific project files. |
| ./doc | Documentation and example configuration files. |
| ./src | Source code. |
Some of the directories under `src` are external repositories included using
git-subtree. See those directories' README files for more details.
Some of the directories under `src` are external repositories inlined via
git-subtree. See the corresponding README for more details.
## For more information:
## See Also
* [Ripple Knowledge Center](https://ripple.com/learn/)
* [Ripple Developer Center](https://ripple.com/build/)
* Ripple Whitepapers & Reports
* [Ripple Consensus Whitepaper](https://ripple.com/files/ripple_consensus_whitepaper.pdf)
* [Ripple Solutions Guide](https://ripple.com/files/ripple_solutions_guide.pdf)
* [XRP Ledger Dev Portal](https://developers.ripple.com/)
* [XRP News](https://ripple.com/category/xrp/)
* [Setup and Installation](https://developers.ripple.com/install-rippled.html)
To learn about how Ripple is transforming global payments visit
[https://ripple.com/contact/](https://ripple.com/contact/)
- - -
Copyright © 2017, Ripple Labs. All rights reserved.
Portions of this document, including but not limited to the Ripple logo,
images and image templates are the property of Ripple Labs and cannot be
copied or used without permission.
To learn about how Ripple is transforming global payments, visit
<https://ripple.com/contact/>.


@@ -1,16 +1,186 @@
![Ripple](/images/ripple.png)
# Release Notes
![Ripple](docs/images/ripple.png)
This document contains the release notes for `rippled`, the reference server implementation of the Ripple protocol. To learn more about how to build and run a `rippled` server, visit https://ripple.com/build/rippled-setup/
**Do you work at a digital asset exchange or wallet provider?**
Please [contact us](mailto:support@ripple.com). We can help guide your integration.
> **Do you work at a digital asset exchange or wallet provider?**
>
> Please [contact us](mailto:support@ripple.com). We can help guide your integration.
## Updating `rippled`
If you are using Red Hat Enterprise Linux 7 or CentOS 7, you can [update using `yum`](https://ripple.com/build/rippled-setup/#updating-rippled). For other platforms, please [compile from source](https://wiki.ripple.com/Rippled_build_instructions).
# Releases
## Version 1.2.1
The `rippled` 1.2.1 release introduces several fixes, including a change in the
information reported via the enhanced crawl functionality introduced in the
1.2.0 release, a fix for a potential race condition when processing a status
change message for a peer, and a fix for a technical flaw that could cause a
server to not properly detect that it had lost all its peers.
The release also adds the `delivered_amount` field to more responses to simplify
the handling of payment or check cashing transactions.
**New and Updated Features**
This release has no new features.
**Bug Fixes**
- Fix a race condition during `TMStatusChange` handling (c8249981)
- Properly transition state to disconnected (9d027394)
- Display validator status only in response to admin requests (2d6a518a)
- Add the `delivered_amount` to more RPC commands (f2756914)
## Version 1.2.0
The `rippled` 1.2.0 release introduces the MultisignReserve Amendment, which
reduces the reserve requirement associated with signer lists. This release also
includes incremental improvements to the code that handles offers. Furthermore,
`rippled` now also has the ability to automatically detect transaction
censorship attempts and issue warnings of increasing severity for transactions
that should have been included in a closed ledger after several rounds of
consensus.
**New and Updated Features**
- Reduce the account reserve for a Multisign SignerList (6572fc8)
- Improve transaction error condition handling (4104778)
- Allow servers to automatically detect transaction censorship attempts (945493d)
- Load validator list from file (c1a0244)
- Add RPC command shard crawl (17e0d09)
- Add RPC Call unit tests (eeb9d92)
- Grow the open ledger expected transactions quickly (7295cf9)
- Avoid dispatching multiple fetch pack threads (4dcb3c9)
- Remove unused function in AutoSocket.h (8dd8433)
- Update TxQ developer docs (e14f913)
- Add user defined literals for megabytes and kilobytes (cd1c5a3)
- Make the FeeEscalation Amendment permanent (58f786c)
- Remove undocumented experimental options from RPC sign (a96cb8f)
- Improve RPC error message for fee command (af1697c)
- Improve the ledger_entry command's inconsistent behavior (63e167b)
**Bug Fixes**
- Accept redirects from validator list sites (7fe1d4b)
- Implement missing string conversions for JSON (c0e9418)
- Eliminate potential undefined behavior (c71eb45)
- Add safe_cast to ensure no overflow in casts between enums and integral types (a7e4541)
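As a rough illustration only (this is not the rippled implementation, which also covers enum conversions through their underlying types), a compile-time-checked integral cast along these lines prevents a narrowing conversion from compiling at all:

```
// Hypothetical sketch of a checked cast: refuses to compile when the
// destination type cannot represent every value of the source type.
#include <type_traits>

template <class Dest, class Src>
constexpr Dest
safe_cast(Src s) noexcept
{
    static_assert(std::is_integral<Dest>::value && std::is_integral<Src>::value,
                  "this sketch handles integral types only");
    static_assert(std::is_signed<Dest>::value == std::is_signed<Src>::value,
                  "signedness must match");
    static_assert(sizeof(Dest) >= sizeof(Src),
                  "destination must be at least as wide as the source");
    return static_cast<Dest>(s);
}
```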
## Version 1.1.2
The `rippled` 1.1.2 release introduces a fix for an issue that could have
prevented cluster peers from successfully bypassing connection limits when
connecting to other servers on the same cluster. Additionally, it improves
logic used to determine what the preferred ledger is during suboptimal
network conditions.
**New and Updated Features**
This release has no new features.
**Bug Fixes**
- Properly bypass connection limits for cluster peers (#2795, #2796)
- Improve preferred ledger calculation (#2784)
## Version 1.1.1
The `rippled` 1.1.1 release adds support for redirections when retrieving
validator lists and changes the way that validators with an expired list
behave. Additionally, informational commands return more useful information
to allow server operators to determine the state of their server.
**New and Updated Features**
- Enhance status reporting when using the `server_info` and `validators` commands (#2734)
- Accept redirects from validator list sites: (#2715)
**Bug Fixes**
- Properly handle expired validator lists when validating (#2734)
## Version 1.1.0
The `rippled` 1.1.0 release includes the `DepositPreAuth` amendment, which, combined with the previously released `DepositAuth` amendment, allows users to pre-authorize incoming transactions to accounts by whitelisting sender addresses. The 1.1.0 release also includes incremental improvements to several previously released features (`fix1515` amendment), deprecates support for the `sign` and `sign_for` commands from the rippled API, and improves invariant checking for enhanced security.
Ripple recommends that all server operators upgrade to XRP Ledger version 1.1.0 by Thursday, 2018-09-27, to ensure service continuity.
**New and Updated Features**
- Add `DepositPreAuth` ledger type and transaction (#2513)
- Increase fault tolerance and raise validation quorum to 80%, which fixes issue 2604 (#2613)
- Support ipv6 for peer and RPC comms (#2321)
- Refactor ledger replay logic (#2477)
- Improve Invariant Checking (#2532)
- Expand SQLite potential storage capacity (#2650)
- Replace UptimeTimer with UptimeClock (#2532)
- Don't read Amount field if it is not present (#2566)
- Remove Transactor::mFeeDue member variable (#2586)
- Remove conditional check for using Boost.Process (#2586)
- Improve charge handling in NoRippleCheckLimits test (#2629)
- Migrate more code into the chrono type system (#2629)
- Supply ConsensusTimer with milliseconds for finer precision (#2629)
- Refactor / modernize Cmake (#2629)
- Add delimiter when appending to cmake_cxx_flags (#2650)
- Remove using namespace declarations at namespace scope in headers (#2650)
**Bug Fixes**
- Deprecate the sign and sign_for APIs (#2657)
- Use liquidity from strands that consume too many offers, which will be enabled on fix1515 Amendment (#2546)
- Fix a corner case when decoding base64 (#2605)
- Trim space in Endpoint::from_string (#2593)
- Correctly suppress sent messages (#2564)
- Detect when a unit test child process crashes (#2415)
- Handle WebSocket construction exceptions (#2629)
- Improve JSON exception handling (#2605)
- Add missing virtual destructors (#2532)
## Version 1.0.0
The `rippled` 1.0.0 release includes incremental improvements to several previously released features.
**New and Updated Features**
- The **history sharding** functionality has been improved. Instances can now use the shard store to satisfy ledger requests.
- Change permessage-deflate and compress defaults (RIPD-506)
- Update validations on UNL change (RIPD-1566)
**Bug Fixes**
- Add `check`, `escrow`, and `pay_chan` to `ledger_entry` (RIPD-1600)
- Clarify Escrow semantics (RIPD-1571)
## Version 0.90.1
The `rippled` 0.90.1 release includes fixes for issues reported by external security researchers. These issues, when exploited, could cause a rippled instance to restart or, in some circumstances, stop executing. While these issues can result in a denial of service attack, none affect the integrity of the XRP Ledger and no user funds, including XRP, are at risk.
**New and Updated Features**
This release has no new features.
**Bug Fixes**
- Address issues identified by external review:
- Verify serialized public keys more strictly before using them
(RIPD-1617, RIPD-1619, RIPD-1621)
- Eliminate a potential out-of-bounds memory access in the base58
encoding/decoding logic (RIPD-1618)
- Avoid invoking undefined behavior in memcpy (RIPD-1616; see the sketch below)
- Limit STVar recursion during deserialization (RIPD-1603)
- Use lock when creating a peer shard rangeset
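A common instance of the memcpy issue noted above, shown here purely as a generic illustration and not as the actual rippled fix, is that calling `memcpy` with a null source pointer is undefined behavior even when the length is zero; guarding the call avoids it:

```
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical helper: append a possibly-empty buffer to a byte vector.
void
append(std::vector<std::uint8_t>& out, std::uint8_t const* data, std::size_t len)
{
    if (len == 0)
        return;                 // skip memcpy(dst, nullptr, 0), which is UB
    auto const offset = out.size();
    out.resize(offset + len);
    std::memcpy(out.data() + offset, data, len);
}
```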
## Version 0.90.0
The `rippled` 0.90.0 release introduces several features and enhancements that improve the reliability, scalability and security of the XRP Ledger.

SConstruct (1314 changed lines): file diff suppressed because it is too large.


@@ -1,25 +1,17 @@
# Set environment variables.
environment:
PYTHON: C:/Python27-x64
# We bundle up protoc.exe and only the parts of boost and openssl we need so
# that it's a small download. We also use appveyor's free cache, avoiding fees
# downloading from S3 each time.
# TODO: script to create this package.
RIPPLED_DEPS_PATH: rippled_deps17.01
RIPPLED_DEPS_PATH: rippled_deps17.04
RIPPLED_DEPS_URL: https://ripple.github.io/Downloads/appveyor/%RIPPLED_DEPS_PATH%.zip
# Other dependencies we just download each time.
PIP_PATH: get-pip.py
PIP_URL: https://bootstrap.pypa.io/%PIP_PATH%
# The % in this URL messes up variable substitution, so any updates will
# need to update both PYWIN32_PATH and PYWIN32_URL
PYWIN32_PATH: pywin32-220.win-amd64-py2.7.exe
PYWIN32_URL: https://downloads.sourceforge.net/project/pywin32/pywin32/Build%20220/pywin32-220.win-amd64-py2.7.exe
# Scons honours these environment variables, setting the include/lib paths.
# CMake honors these environment variables, setting the include/lib paths.
BOOST_ROOT: C:/%RIPPLED_DEPS_PATH%/boost
OPENSSL_ROOT: C:/%RIPPLED_DEPS_PATH%/openssl
NIH_CACHE_ROOT: C:/%RIPPLED_DEPS_PATH%/
# We've had trouble with AppVeyor apparently not having a stack as large
# as the *nix CI platforms. AppVeyor support suggested that we try
@@ -27,10 +19,6 @@ environment:
appveyor_build_worker_cloud: gce
matrix:
# This build works, but our current Appveyor config runs matrix builds
# sequentially, and the one build is already slow enough.
# - build: scons
# target: msvc.debug
- build: cmake
target: msvc.debug
buildconfig: Debug
@@ -42,30 +30,14 @@ os: Visual Studio 2017
# Resulting archive should not exceed 100 MB.
cache:
- 'C:\%RIPPLED_DEPS_PATH%'
- '%PIP_PATH%'
- '%PYWIN32_PATH%'
# This means we'll download a zip of the branch we want, rather than the full
# history.
shallow_clone: true
install:
# We want easy_install, python and protoc.exe on PATH.
- SET PATH=%PYTHON%;%PYTHON%/Scripts;C:/%RIPPLED_DEPS_PATH%;%PATH%
# `ps` prefix means the command is executed by powershell.
- ps: |
if ($env:build -eq "scons") {
if(-not(Test-Path $env:PIP_PATH)) {
echo "Download from $env:PIP_URL"
Start-FileDownload $env:PIP_URL
}
if(-not(Test-Path $env:PYWIN32_PATH)) {
echo "Download from $env:PYWIN32_URL"
Start-FileDownload $env:PYWIN32_URL
}
}
- bin/ci/windows/install-dependencies.bat
# We want protoc.exe on PATH.
- SET PATH=C:/%RIPPLED_DEPS_PATH%;%PATH%
# Download dependencies if appveyor didn't restore them from the cache.
# Use 7zip to unzip.
@@ -95,44 +67,28 @@ build_script:
# Show which version of the compiler we are using.
- cl
- ps: |
if ($env:build -eq "scons") {
# Build with scons
scons $env:target -j%NUMBER_OF_PROCESSORS%
if ($LastExitCode -ne 0) { throw "scons build failed" }
}
else
{
# Build with cmake
cmake --version
$cmake_target="$($env:target).ci"
"$cmake_target"
New-Item -ItemType Directory -Force -Path "build/$cmake_target"
Push-Location "build/$cmake_target"
cmake -G"Visual Studio 15 2017 Win64" -Dtarget="$cmake_target" ../..
cmake -G"Visual Studio 15 2017 Win64" ../..
if ($LastExitCode -ne 0) { throw "CMake failed" }
cmake --build . --config $env:buildconfig -- -m
cmake --build . --config $env:buildconfig --parallel 3
if ($LastExitCode -ne 0) { throw "CMake build failed" }
Pop-Location
}
after_build:
- ps: |
if ($env:build -eq "scons") {
cp build/$($env:target)/rippled.exe build
ls build
$exe="build/rippled"
}
else
{
$exe="build/$cmake_target/$env:buildconfig/rippled"
}
$exe="build/$cmake_target/$env:buildconfig/rippled"
"Exe is at $exe"
test_script:
- ps: |
& {
# Run the rippled unit tests
& $exe --unittest --unittest-log
& $exe --unittest --unittest-log --unittest-jobs 2
# https://connect.microsoft.com/PowerShell/feedback/details/751703/option-to-stop-script-if-command-line-exe-fails
if ($LastExitCode -ne 0) { throw "Unit tests failed" }
}


@@ -5,124 +5,165 @@
# debugging.
set -ex
__dirname=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
echo "using CC: $CC"
echo "using TARGET: $TARGET"
echo "using CC: ${CC}"
"${CC}" --version
export CC
COMPNAME=$(basename $CC)
echo "using CXX: ${CXX:-notset}"
if [[ $CXX ]]; then
"${CXX}" --version
export CXX
fi
: ${BUILD_TYPE:=Debug}
echo "BUILD TYPE: ${BUILD_TYPE}"
: ${TARGET:=install}
echo "BUILD TARGET: ${TARGET}"
# Ensure APP defaults to rippled if it's not set.
: ${APP:=rippled}
echo "using APP: $APP"
echo "using APP: ${APP}"
JOBS=${NUM_PROCESSORS:-2}
JOBS=$((JOBS+1))
if [[ ${TRAVIS:-false} != "true" ]]; then
JOBS=$((JOBS+1))
fi
if [[ ${BUILD:-scons} == "cmake" ]]; then
echo "cmake building ${APP}"
CMAKE_EXTRA_ARGS=" -DCMAKE_VERBOSE_MAKEFILE=ON"
CMAKE_TARGET=$CC.$TARGET
BUILDARGS=" -j${JOBS}"
if [[ ${VERBOSE_BUILD:-} == true ]]; then
# TODO: if we use a different generator, this
# option to build verbose would need to change:
if [ -x /usr/bin/time ] ; then
: ${TIME:="Duration: %E"}
export TIME
time=/usr/bin/time
else
time=
fi
if [[ -z "${MAX_TIME:-}" ]] ; then
timeout_cmd=""
else
timeout_cmd="timeout ${MAX_TIME}"
fi
echo "cmake building ${APP}"
: ${CMAKE_EXTRA_ARGS:=""}
if [[ ${NINJA_BUILD:-} == true ]]; then
CMAKE_EXTRA_ARGS+=" -G Ninja"
fi
coverage=false
if [[ "${TARGET}" == "coverage_report" ]] ; then
echo "coverage option detected."
coverage=true
export PATH=$PATH:${LCOV_ROOT}/usr/bin
fi
#
# allow explicit setting of the name of the build
# dir, otherwise default to the compiler.build_type
#
: "${BUILD_DIR:=${COMPNAME}.${BUILD_TYPE}}"
BUILDARGS=" -j${JOBS}"
if [[ ${VERBOSE_BUILD:-} == true ]]; then
CMAKE_EXTRA_ARGS+=" -DCMAKE_VERBOSE_MAKEFILE=ON"
# TODO: if we use a different generator, this
# option to build verbose would need to change:
if [[ ${NINJA_BUILD:-} == true ]]; then
BUILDARGS+=" -v"
else
BUILDARGS+=" verbose=1"
fi
if [[ ${CI:-} == true ]]; then
CMAKE_TARGET=$CMAKE_TARGET.ci
fi
if [[ ${USE_CCACHE:-} == true ]]; then
echo "using ccache with basedir [${CCACHE_BASEDIR:-}]"
CMAKE_EXTRA_ARGS+=" -DCMAKE_CXX_COMPILER_LAUNCHER=ccache"
fi
if [ -d "build/${CMAKE_TARGET}" ]; then
rm -rf "build/${CMAKE_TARGET}"
fi
mkdir -p "build/${CMAKE_TARGET}"
pushd "build/${CMAKE_TARGET}"
cmake ../.. -Dtarget=$CMAKE_TARGET ${CMAKE_EXTRA_ARGS}
cmake --build . -- $BUILDARGS
if [[ ${BUILD_BOTH:-} == true ]]; then
if [[ ${TARGET} == *.unity ]]; then
cmake --build . --target rippled_classic -- $BUILDARGS
else
cmake --build . --target rippled_unity -- $BUILDARGS
fi
fi
popd
export APP_PATH="$PWD/build/${CMAKE_TARGET}/${APP}"
echo "using APP_PATH: $APP_PATH"
else
export APP_PATH="$PWD/build/$CC.$TARGET/${APP}"
echo "using APP_PATH: $APP_PATH"
# Make sure vcxproj is up to date
scons vcxproj
git diff --exit-code
# $CC will be either `clang` or `gcc`
# http://docs.travis-ci.com/user/migrating-from-legacy/?utm_source=legacy-notice&utm_medium=banner&utm_campaign=legacy-upgrade
# indicates that 2 cores are available to containers.
scons -j${JOBS} $CC.$TARGET
fi
# We can be sure we're using the build/$CC.$TARGET variant
# (-f so never err)
rm -f build/${APP}
if [[ ${USE_CCACHE:-} == true ]]; then
echo "using ccache with basedir [${CCACHE_BASEDIR:-}]"
CMAKE_EXTRA_ARGS+=" -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache"
fi
if [ -d "build/${BUILD_DIR}" ]; then
rm -rf "build/${BUILD_DIR}"
fi
mkdir -p "build/${BUILD_DIR}"
pushd "build/${BUILD_DIR}"
# generate
${time} cmake ../.. -DCMAKE_BUILD_TYPE=${BUILD_TYPE} ${CMAKE_EXTRA_ARGS}
# build
export DESTDIR=$(pwd)/_INSTALLED_
time ${timeout_cmd} cmake --build . --target ${TARGET} -- $BUILDARGS
if [[ ${TARGET} == "docs" ]]; then
## mimic the standard test output for docs build
## to make controlling processes like jenkins happy
if [ -f html_doc/index.html ]; then
echo "1 case, 1 test total, 0 failures"
else
echo "1 case, 1 test total, 1 failures"
fi
exit
fi
popd
export APP_PATH="$PWD/build/${BUILD_DIR}/${APP}"
echo "using APP_PATH: ${APP_PATH}"
# See what we've actually built
ldd $APP_PATH
ldd ${APP_PATH}
function join_by { local IFS="$1"; shift; echo "$*"; }
# This is a list of manual tests
# in rippled that we want to run
declare -a manual_tests=(
"beast.chrono.abstract_clock"
"beast.unit_test.print"
"ripple.NodeStore.Timing"
"ripple.app.Flow_manual"
"ripple.app.NoRippleCheckLimits"
"ripple.app.PayStrandAllPairs"
"ripple.consensus.ByzantineFailureSim"
"ripple.consensus.DistributedValidators"
"ripple.consensus.ScaleFreeSim"
"ripple.ripple_data.digest"
"ripple.tx.CrossingLimits"
"ripple.tx.FindOversizeCross"
"ripple.tx.Offer_manual"
"ripple.tx.OversizeMeta"
"ripple.tx.PlumpBook"
)
: ${APP_ARGS:=}
if [[ ${APP} == "rippled" ]]; then
export APP_ARGS="--unittest --quiet --unittest-log"
# Only report on src/ripple files
export LCOV_FILES="*/src/ripple/*"
# Nothing to explicitly exclude
export LCOV_EXCLUDE_FILES="LCOV_NO_EXCLUDE"
else
: ${APP_ARGS:=}
: ${LCOV_FILES:="*/src/*"}
# Don't exclude anything
: ${LCOV_EXCLUDE_FILES:="LCOV_NO_EXCLUDE"}
if [[ ${MANUAL_TESTS:-} == true ]]; then
APP_ARGS+=" --unittest=$(join_by , "${manual_tests[@]}")"
else
APP_ARGS+=" --unittest --quiet --unittest-log"
fi
if [[ ${coverage} == false && ${PARALLEL_TESTS:-} == true ]]; then
APP_ARGS+=" --unittest-jobs ${JOBS}"
fi
fi
if [[ $TARGET == "coverage" ]]; then
export PATH=$PATH:$LCOV_ROOT/usr/bin
# Create baseline coverage data file
lcov --no-external -c -i -d . -o baseline.info | grep -v "ignoring data for external file"
if [[ ${coverage} == true ]]; then
# Push the results (lcov.info) to codecov
codecov -X gcov # don't even try and look for .gcov files ;)
find . -name "*.gcda" | xargs rm -f
fi
if [[ ${SKIP_TESTS:-} == true ]]; then
echo "skipping tests for ${TARGET}"
echo "skipping tests."
exit
fi
if [[ $TARGET == debug* ]]; then
if [[ ${DEBUGGER:-true} == "true" && -v GDB_ROOT && -x ${GDB_ROOT}/bin/gdb ]]; then
${GDB_ROOT}/bin/gdb -v
# Execute unit tests under gdb, printing a call stack
# if we get a crash.
$GDB_ROOT/bin/gdb -return-child-result -quiet -batch \
export APP_ARGS
${timeout_cmd} ${GDB_ROOT}/bin/gdb -return-child-result -quiet -batch \
-ex "set env MALLOC_CHECK_=3" \
-ex "set print thread-events off" \
-ex run \
-ex "thread apply all backtrace full" \
-ex "quit" \
--args $APP_PATH $APP_ARGS
--args ${APP_PATH} ${APP_ARGS}
else
$APP_PATH $APP_ARGS
fi
if [[ $TARGET == "coverage" ]]; then
# Create test coverage data file
lcov --no-external -c -d . -o tests.info | grep -v "ignoring data for external file"
# Combine baseline and test coverage data
lcov -a baseline.info -a tests.info -o lcov-all.info
# Included files
lcov -e "lcov-all.info" "${LCOV_FILES}" -o lcov.pre.info
# Excluded files
lcov --remove lcov.pre.info "${LCOV_EXCLUDE_FILES}" -o lcov.info
# Push the results (lcov.info) to codecov
codecov -X gcov # don't even try and look for .gcov files ;)
find . -name "*.gcda" | xargs rm -f
${timeout_cmd} ${APP_PATH} ${APP_ARGS}
fi


@@ -15,42 +15,11 @@ do
ln -sv $(type -p ${g}-$GCC_VER) $HOME/bin/${g}
done
if [[ -n ${CLANG_VER:-} ]]; then
# There are cases where the directory exists, but the exe is not available.
# Use this workaround for now.
if [[ ! -x ${TWD}/llvm-${LLVM_VERSION}/bin/llvm-config && -d ${TWD}/llvm-${LLVM_VERSION} ]]; then
rm -fr ${TWD}/llvm-${LLVM_VERSION}
fi
if [[ ! -d ${TWD}/llvm-${LLVM_VERSION} ]]; then
mkdir ${TWD}/llvm-${LLVM_VERSION}
LLVM_URL="http://llvm.org/releases/${LLVM_VERSION}/clang+llvm-${LLVM_VERSION}-x86_64-linux-gnu-ubuntu-14.04.tar.xz"
wget -O - ${LLVM_URL} | tar -Jxvf - --strip 1 -C ${TWD}/llvm-${LLVM_VERSION}
fi
${TWD}/llvm-${LLVM_VERSION}/bin/llvm-config --version;
export LLVM_CONFIG="${TWD}/llvm-${LLVM_VERSION}/bin/llvm-config";
fi
if [[ ${BUILD:-} == cmake ]]; then
# There are cases where the directory exists, but the exe is not available.
# Use this workaround for now.
if [[ ! -x ${TWD}/cmake/bin/cmake && -d ${TWD}/cmake ]]; then
rm -fr ${TWD}/cmake
fi
if [[ ! -d ${TWD}/cmake ]]; then
CMAKE_URL="https://www.cmake.org/files/v3.6/cmake-3.6.1-Linux-x86_64.tar.gz"
wget --version
# wget version 1.13.4 thinks this certificate is invalid, even though it's fine.
# "ERROR: no certificate subject alternative name matches"
# See also: https://github.com/travis-ci/travis-ci/issues/5059
mkdir ${TWD}/cmake &&
wget -O - --no-check-certificate ${CMAKE_URL} | tar --strip-components=1 -xz -C ${TWD}/cmake
cmake --version
fi
fi
# What versions are we ACTUALLY running?
if [ -x $HOME/bin/g++ ]; then
$HOME/bin/g++ -v
else
g++ -v
fi
pip install --user requests==2.13.0
@@ -58,24 +27,3 @@ pip install --user https://github.com/codecov/codecov-python/archive/master.zip
bash bin/sh/install-boost.sh
# Install lcov
# Download the archive
wget https://github.com/linux-test-project/lcov/releases/download/v1.12/lcov-1.12.tar.gz
# Extract to ~/lcov-1.12
tar xfvz lcov-1.12.tar.gz -C $HOME
# Set install path
mkdir -p $LCOV_ROOT
cd $HOME/lcov-1.12 && make install PREFIX=$LCOV_ROOT
if [[ ${TARGET} == debug* && ! -x ${GDB_ROOT}/bin/gdb ]]; then
pushd $HOME
#install gdb
wget https://ftp.gnu.org/gnu/gdb/gdb-8.0.tar.xz
tar xf gdb-8.0.tar.xz
pushd gdb-8.0
./configure CFLAGS='-w -O2' CXXFLAGS='-std=gnu++11 -g -O2 -w' --prefix=$GDB_ROOT
make -j2
make install
popd
popd
fi


@@ -1,13 +0,0 @@
if "%build%" == "scons" (
rem Installing pip will install setuptools/easy_install.
python "%PIP_PATH%"
rem Pip has some problems installing scons on windows so we use easy install.
rem - easy_install scons
rem Workaround
easy_install https://pypi.python.org/packages/source/S/SCons/scons-2.5.0.tar.gz#md5=bda5530a70a41a7831d83c8b191c021e
rem Scons has problems with parallel builds on windows without pywin32.
easy_install "%PYWIN32_PATH%"
rem (easy_install can do headless installs of .exe wizards)
)


@@ -8,6 +8,14 @@
# https://travis-ci.org/ripple/rippled/caches
set -e
if [ -x /usr/bin/time ] ; then
: ${TIME:="Duration: %E"}
export TIME
time=/usr/bin/time
else
time=
fi
if [ ! -d "$BOOST_ROOT/lib" ]
then
wget $BOOST_URL -O /tmp/boost.tar.gz
@@ -15,9 +23,9 @@ then
rm -fr ${BOOST_ROOT}
tar xzf /tmp/boost.tar.gz
cd $BOOST_ROOT && \
./bootstrap.sh --prefix=$BOOST_ROOT && \
./b2 -d1 define=_GLIBCXX_USE_CXX11_ABI=0 -j$((2*${NUM_PROCESSORS:-2})) &&\
./b2 -d0 define=_GLIBCXX_USE_CXX11_ABI=0 install
$time ./bootstrap.sh --prefix=$BOOST_ROOT && \
$time ./b2 -d1 define=_GLIBCXX_USE_CXX11_ABI=0 -j$((2*${NUM_PROCESSORS:-2})) &&\
$time ./b2 -d0 define=_GLIBCXX_USE_CXX11_ABI=0 install
else
echo "Using cached boost at $BOOST_ROOT"
fi


@@ -20,7 +20,9 @@
#
# 7. Voting
#
# 8. Example Settings
# 8. Misc Settings
#
# 9. Example Settings
#
#-------------------------------------------------------------------------------
#
@@ -376,7 +378,10 @@
# [ips]
# r.ripple.com 51235
#
# The default is: [ips_fixed] addresses (if present) or r.ripple.com 51235
# The default is:
# [ips_fixed] addresses (if present)
# or
# ( r.ripple.com 51235 , zaphod.alloy.ee 51235 )
#
#
# [ips_fixed]
@@ -538,6 +543,20 @@
# into the ledger at the minimum required fee before the required
# fee escalates. Default: no maximum.
#
# normal_consensus_increase_percent = <number>
#
# (Optional) When the ledger has more transactions than "expected",
# and performance is humming along nicely, the expected ledger size
# is updated to the previous ledger size plus this percentage.
# Default: 20
#
# slow_consensus_decrease_percent = <number>
#
# (Optional) When consensus takes longer than appropriate, the
# expected ledger size is updated to the minimum of the previous
# ledger size or the "expected" ledger size minus this percentage.
# Default: 50
#
# maximum_txn_per_account = <number>
#
# Maximum number of transactions that one account can have in the
@@ -824,6 +843,10 @@
# require administrative RPC call "can_delete"
# to enable online deletion of ledger records.
#
# earliest_seq The default is 32570 to match the XRP ledger
# network's earliest allowed sequence. Alternate
# networks may set this value. Minimum value of 1.
#
# Notes:
# The 'node_db' entry configures the primary, persistent storage.
#
@@ -843,11 +866,12 @@
#
# Example:
# type=nudb
# path=db/nudb
# path=db/shards/nudb
#
# The "type" field must be present and controls the choice of backend:
#
# type = NuDB
# NuDB is recommended for shards.
#
# type = RocksDB
#
@@ -926,6 +950,24 @@
# address=192.168.0.95:4201
# prefix=my_validator
#
# [perf]
#
# Configuration of performance logging. If enabled, write JSON-formatted
# performance-oriented data periodically to a distinct log file.
#
# "perf_log" A string specifying the pathname of the performance log
# file. A relative pathname will log relative to the
# configuration directory. Required to enable
# performance logging.
#
# "log_interval" Integer value for number of seconds between writing
# to performance log. Default 1.
#
# Example:
# [perf]
# perf_log=/var/log/rippled/perf.log
# log_interval=2
#
#-------------------------------------------------------------------------------
#
# 7. Voting
@@ -979,7 +1021,72 @@
#
#-------------------------------------------------------------------------------
#
# 8. Example Settings
# 8. Misc Settings
#
#----------
#
# [signing_support]
#
# Specifies whether the server will accept "sign" and "sign_for" commands
# from remote users. Even if the commands are sent over a secure protocol
# like secure websocket, this should generally be discouraged, because it
# requires sending the secret to use for signing to the server. In order
# to sign transactions, users should prefer to use a standalone signing
# tool instead.
#
# This flag has no effect on the "sign" and "sign_for" command line options
# that rippled makes available.
#
# The default value of this field is "false"
#
# Example:
#
# [signing_support]
# true
#
# [crawl]
#
# List of options to control what data is reported through the /crawl endpoint
# See https://developers.ripple.com/peer-protocol.html#peer-crawler
#
# <flag>
#
# Enable or disable access to /crawl requests. Default is '1'
#
# overlay = <flag>
#
# Report information about peers this server is connected to, similar
# to the "peers" RPC API. Default is '1'.
#
# server = <flag>
#
# Report information about the local server, similar to the "server_state"
# RPC API. Default is '1'.
#
# counts = <flag>
#
# Report information about the local server health counters, similar to
# the "get_counts" RPC API. Default is '0'.
#
# unl = <flag>
#
# Report information about the local server's validator lists, similar to
# the "validators" and "validator_list_sites" RPC APIs. Default is '1'.
#
# Example:
#
# [crawl]
# 0
#
# [crawl]
# overlay = 1 # report peer overlay info
# server = 1 # report local server info
# counts = 0 # do not report server counts
# unl = 1 # report server validator lists
#
#-------------------------------------------------------------------------------
#
# 9. Example Settings
#
#--------------------
#
@@ -1051,7 +1158,7 @@ admin = 127.0.0.1
protocol = ws
#[port_ws_public]
#port = 5005
#port = 6005
#ip = 127.0.0.1
#protocol = wss
@@ -1076,6 +1183,15 @@ file_size_mult=2
online_delete=2000
advisory_delete=0
# This is the persistent datastore for shards. It is important for the health
# of the ripple network that rippled operators shard as much as practical.
# NuDB requires SSD storage. Helpful information can be found here
# https://ripple.com/build/history-sharding
#[shard_db]
#type=NuDB
#path=/var/lib/rippled/db/shards/nudb
#max_size_gb=500
[database_path]
/var/lib/rippled/db
@@ -1090,10 +1206,10 @@ time.apple.com
time.nist.gov
pool.ntp.org
# Where to find some other servers speaking the Ripple protocol.
#
[ips]
r.ripple.com 51235
# To use the XRP test network (see https://ripple.com/build/xrp-test-net/),
# use the following [ips] section:
# [ips]
# r.altnet.rippletest.net 51235
# File containing trusted validator keys or validator list publishers.
# Unless an absolute path is specified, it will be considered relative to the


@@ -14,10 +14,8 @@
#
# List of the validation public keys of nodes to always accept as validators.
#
# The latest list of recommended validators can be obtained from
# https://ripple.com/ripple.txt
#
# See also https://wiki.ripple.com/Ripple.txt
# Manually listing validator keys is not recommended for production networks.
# See validator_list_sites and validator_list_keys below.
#
# Examples:
# n9KorY8QtTdRx7TVDpwnG9NvyxsDwHUKUEeDLY3AkiGncVaSXZi5
@@ -27,9 +25,13 @@
#
# List of URIs serving lists of recommended validators.
#
# The latest list of recommended validator sites can be
# obtained from https://ripple.com/ripple.txt
#
# Examples:
# https://ripple.com/validators
# https://vl.ripple.com
# http://127.0.0.1:8000
# file:///etc/opt/ripple/vl.txt
#
# [validator_list_keys]
#
@@ -39,6 +41,9 @@
# publisher key.
# Validator list keys should be hex-encoded.
#
# The latest list of recommended validator keys can be
# obtained from https://ripple.com/ripple.txt
#
# Examples:
# ed499d732bded01504a7407c224412ef550cc1ade638a4de4eb88af7c36cb8b282
# 0202d3f36a801349f3be534e3f64cfa77dede6e1b6310a0b48f40f20f955cec945
@@ -46,10 +51,25 @@
#
# The default validator list publishers that the rippled instance
# trusts
# trusts.
#
# WARNING: Changing these values can cause your rippled instance to see a
# validated ledger that contradicts other rippled instances'
# validated ledgers (aka a ledger fork) if your validator list(s)
# do not sufficiently overlap with the list(s) used by others.
# See: https://arxiv.org/pdf/1802.07242.pdf
[validator_list_sites]
https://vl.ripple.com
[validator_list_keys]
ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
# To use the XRP test network (see https://ripple.com/build/xrp-test-net/),
# use the following configuration instead:
#
# [validator_list_sites]
# https://vl.altnet.rippletest.net
#
# [validator_list_keys]
# ED264807102805220DA0F312E71FC2C69E1552C9C5790F6C25E3729DEB573D5860


@@ -1,29 +0,0 @@
machine:
services:
- docker
dependencies:
pre:
- sudo apt-add-repository -y ppa:ubuntu-toolchain-r/test
- echo "deb [arch=amd64 trusted=yes] https://test-mirrors.ripple.com/ubuntu/ trusty testing" | sudo tee /etc/apt/sources.list.d/ripple.list
- sudo apt-get update -qq
- sudo apt-get purge -qq libboost1.48-dev
- sudo apt-get install -qq libboost1.60-all-dev
- sudo apt-get install -qq clang-3.6 gcc-5 g++-5 libobjc-5-dev libgcc-5-dev libstdc++-5-dev libclang1-3.6 libgcc1 libgomp1 libstdc++6 scons protobuf-compiler libprotobuf-dev libssl-dev exuberant-ctags texinfo
- lsb_release -a
- sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 99
- sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-5 99
- sudo update-alternatives --force --install /usr/bin/clang clang /usr/bin/clang-3.6 99 --slave /usr/bin/clang++ clang++ /usr/bin/clang++-3.6
- gcc --version
- clang --version
- clang++ --version
- if [[ ! -e gdb-8.0 ]]; then wget https://ftp.gnu.org/gnu/gdb/gdb-8.0.tar.xz && tar xf gdb-8.0.tar.xz && cd gdb-8.0 && ./configure && make && cd ..; fi
- pushd gdb-8.0 && sudo make install && popd
- gdb --version
cache_directories:
- gdb-8.0
test:
pre:
- scons clang.debug
override:
# Execute unit tests under gdb
- gdb -return-child-result -quiet -batch -ex "set env MALLOC_CHECK_=3" -ex "set print thread-events off" -ex run -ex "thread apply all backtrace full" -ex "quit" --args build/clang.debug/rippled --unittest --quiet --unittest-log


@@ -1,29 +0,0 @@
A list of rippled version numbers, and the Github pull requests they contain.
0.28.0-b12: Includes pulls 836, 887, 902, 903 and 904.
0.28.0-b13: Includes pulls 906, 912, 913, 914 and 915.
0.28.0-b14: Includes pulls 907, 910, 922 and 923.
0.28.0-b15: Includes pulls 832, 870, 879, 883, 911, 916, 919, 920, 924, 925 and 928. FAILED pulls 909 and 926.
0.28.0-b16: Includes pulls 909, 926, 929, 931, 932, 935 and 934.
0.28.0-b17: Includes pulls 927, 939, 940, 943, 944, 945 and 949.
0.28.0-b18: Includes pulls 930, 946, 947, 948, 951, 952, 953, 954, 955, 956, 959, 960 and 962.
0.29.0-b19: Includes pulls 967, 969 and 971.
0.29.0-b20: Includes pulls 935, 942, 957, 958, 963, 964, 965, 966, 968, 972, 973, 974 and 975.
0.29.0-b21: Includes pulls 970 and 976.
0.28.1-b4: Includes pulls 968, 998, 1005, 1008, 1010, 1011 and 1012.
0.28.1-b6: Includes pulls 983, 984, 1013, 1023 and 1024.
0.28.1-b8: Includes pulls 988, 1009, 1014, 1019, 1029, 1031, 1033, 1034 and 1035.
0.28.1-b9: Includes pulls 1026, 1030, 1036, 1037, 1038, 1040, and 1041.
0.28.1-rc2: Includes pulls 1044
0.28.1-rc3: Includes pulls 1052 and 1055.
0.28.1: Includes pulls 1056, 1059 and 1062, 1063.
0.28.2-b1: Includes pulls 866, 1045, 1046, 1047, 1050, 1051, 1057
0.28.2-b2: Includes pulls 1058, 1061, 1064 and 1065
0.28.2-b3: Includes pulls 1066, 1067, 1068
0.28.2-b4: Includes pulls 1060, 1069, 1071, 1072, 1075 and 1076.
0.28.2-b5: Includes pulls 1073, 1074, 1081, 1083, 1084, 1087, 1089, 1090, 1091
0.28.2-b6: Includes pulls 1085, 1093, 1094, 1096, 1101, 1102, 1105
0.28.2-b7: Includes pulls 1077, 1080, 1086, 1095, 1098, 1106 and 1112.
0.28.2-b8: Includes pulls 1078, 1100, 1108, 1114, 1118, 1119 and 1121.
0.28.2-b9: Includes pulls 1053, 1109, 1111, 1117, 1122 and 1123.
0.29.1-b11: Includes pulls 1279, 1271, 1289, 1291, 1290, 1267, 1294, 1276, 1231, and 1286.

File diff suppressed because it is too large.

Binary image file not shown (11 KiB).


@@ -1,126 +0,0 @@
#
# Sample ripple.txt
#
# More information: https://ripple.com/wiki/Ripple.txt
#
# Publishing this file allows a site to declare in a trustworthy manner ripple
# associated information.
#
# This file is stored on the web server for a domain. This file is searched
# for in the following order:
# - https://ripple.DOMAIN/ripple.txt
# - https://www.DOMAIN/ripple.txt
# - https://DOMAIN/ripple.txt
#
# The server MUST set the following HTTP header when serving this file:
# Access-Control-Allow-Origin: *
#
# This file is UTF-8 with Dos, UNIX, or Mac style end of lines.
# Blank lines and lines beginning with '#' are ignored.
# Undefined sections are reserved.
# No escapes are currently defined.
#
# IMPORTANT: Please remove these comments before publishing this file. Doing so
# makes the file much more readable to humans trying to find info on your site.
#
# [expires]
# Number of days after which this file should be considered expired. Clients
# are expected to check this file when they have cause to believe it has
# changed or expired. For example, if a client discovers an account declares
# that it is associated with the domain, it should check to see if this file
# has been updated. If an expiration date is not declared, the default
# expiration for this file is 30 days.
#
# Example: 30
#
# [accounts]
# Only valid in "ripple.txt". A list of accounts that are declared to be
# controlled by this domain. A client wishing to indicate that an account is
# verified as belonging to a domain will show the account with a green
# background. A client can verify that an account belongs to a domain by
# checking if the account's root entry mentions the domain containing this
# file and this file found at the domain mentions the account in this
# section.
#
# Example: rG1QQv2nh2gr7RCZ1P8YYcBUKCCN633jCn
#
# [hotwallets]
# Only valid in "ripple.txt". A list of accounts that are declared to be
# controlled by this domain and used as hotwallets.
#
# Example: rG1QQv2nh2gr7RCZ1P8YYcBUKCCN633jCn
#
# [validation_public_key]:
# Only valid in "ripple.txt". A validation public key that is declared
# to be used by this domain for validating ledgers and that it is the
# authorized signature for the domain. This does not imply that a node
# serving the Ripple protocol is available at the web host providing the
# file.
#
# Example: n9MZTnHe5D5Q2cgE8oV2usFwRqhUvEA8MwP5Mu1XVD6TxmssPRev
#
# [domain]:
# Mandatory in "ripple.txt".
# Only valid in "ripple.txt".
# Must match location of file.
#
# Example: google.com
#
# [ips]:
# Only valid in "rippled.cfg", "ripple.txt", and the referred [ips_url].
# List of IPs where the Newcoin protocol is available.
# One ipv4 or ipv6 address per line.
# A port may optionally be specified after adding a space to the address.
# By convention, if known, IPs are listed in from most to least trusted.
#
# Examples:
# 192.168.0.1
# 192.168.0.1 3939
# 2001:0db8:0100:f101:0210:a4ff:fee3:9566
#
# [validators]:
# Only valid in "rippled.cfg", "ripple.txt", and the referred [validators_url].
# List of Ripple validators this node recommends.
#
# For domains, rippled will probe for https web servers at the specified
# domain in the following order: ripple.DOMAIN, www.DOMAIN, DOMAIN
#
# These are encoded 257-bit secp256k1 public keys.
#
# Examples:
# redstem.com
# n9KorY8QtTdRx7TVDpwnG9NvyxsDwHUKUEeDLY3AkiGncVaSXZi5
# n9MqiExBcoG19UXwoLjBJnhsxEhAZMuWwJDRdkyDz1EkEkwzQTNt John Doe
#
# [ips_url]:
# Only valid in "ripple.txt".
# https URL to a similarly formatted file containing [ips].
#
# Example: https://google.com/ripple_ips.txt
#
# [validators_url]:
# Only valid in "ripple.txt".
# https URL to a similarly formatted file containing [validators].
#
# Example: https://google.com/ripple_validators.txt
#
# [currencies]:
# This section allows a site to declare currencies it currently issues.
#
# Examples: (multiple allowed one per line)
# USD
# BTC
# LTC
#
[validation_public_key]
n9MZTnHe5D5Q2cgE8oV2usFwRqhUvEA8MwP5Mu1XVD6TxmssPRev
[domain]
loss
[ips]
192.168.0.5
[validators]
redstem.com


@@ -1,10 +0,0 @@
[Unit]
Description=Ripple Peer-to-Peer Network Daemon
[Service]
Type=simple
User=nobody
ExecStart=/usr/bin/rippled --conf=/etc/rippled/rippled.cfg
[Install]
WantedBy=multi-user.target


@@ -1,114 +0,0 @@
#!/bin/sh
### BEGIN INIT INFO
# Provides: ripple
# Required-Start: $local_fs $remote_fs $network $syslog
# Required-Stop: $local_fs $remote_fs $network $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts the ripple network node
# Description: starts rippled using start-stop-daemon
### END INIT INFO
set -e
NAME=rippled
USER="rippled"
GROUP="rippled"
PIDFILE=/var/run/$NAME.pid
DAEMON=/usr/local/sbin/rippled
DAEMON_OPTS="--conf /etc/ripple/rippled.cfg"
NET_OPTS="--net $DAEMON_OPTS"
LOGDIR="/var/log/rippled"
DBDIR="/var/db/rippled/db/hyperldb"
export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"
# I wish it didn't come down to this, but this is the easiest way to ensure
# sanity of an install.
if [ ! -d $LOGDIR ]; then
mkdir -p $LOGDIR
chown $USER:$GROUP $LOGDIR
fi
if [ ! -d $DBDIR ]; then
mkdir -p $DBDIR
chown -R $USER:$GROUP $DBDIR
fi
case "$1" in
start)
echo -n "Starting daemon: "$NAME
start-stop-daemon --start --quiet --background -m --pidfile $PIDFILE \
--exec $DAEMON --chuid $USER --group $GROUP --verbose -- $NET_OPTS
echo "."
;;
stop)
echo -n "Stopping daemon: "$NAME
$DAEMON $DAEMON_OPTS stop
rm -f $PIDFILE
echo "."
;;
restart)
echo -n "Restarting daemon: "$NAME
$DAEMON $DAEMON_OPTS stop
rm -f $PIDFILE
start-stop-daemon --start --quiet --background -m --pidfile $PIDFILE \
--exec $DAEMON --chuid $USER --group $GROUP -- $NET_OPTS
echo "."
;;
status)
echo "Status of $NAME:"
echo -n "PID of $NAME: "
if [ -f "$PIDFILE" ]; then
cat $PIDFILE
$DAEMON $DAEMON_OPTS server_info
else
echo "$NAME not running."
fi
echo "."
;;
fetch)
echo "$NAME ledger fetching info:"
$DAEMON $DAEMON_OPTS fetch_info
echo "."
;;
uptime)
echo "$NAME uptime:"
$DAEMON $DAEMON_OPTS get_counts
echo "."
;;
startconfig)
echo "$NAME is being started with the following command line:"
echo "$DAEMON $NET_OPTS"
echo "."
;;
command)
# Truncate the script's argument vector by one position to get rid of
# this entry.
shift
# Pass the remainder of the argument vector to rippled.
$DAEMON $DAEMON_OPTS "$@"
echo "."
;;
test)
$DAEMON $DAEMON_OPTS ping
echo "."
;;
*)
echo "Usage: $0 {start|stop|restart|status|fetch|uptime|startconfig|"
echo " command|test}"
exit 1
esac
exit 0


@@ -1,4 +1,6 @@
# Form
# Code Style Cheat Sheet
## Form
- One class per header file.
- Place each data member on its own line.
@@ -11,7 +13,7 @@
- Order class declarations as types, public, protected, private, then data.
- Prefer 'private' over 'protected'
# Function
## Function
- Minimize external dependencies
* Pass options in the ctor instead of using theConfig


@@ -1,5 +1,3 @@
--------------------------------------------------------------------------------
# Coding Standards
Coding standards used here gradually evolve and propagate through
@@ -43,18 +41,16 @@ overlooked. Blank lines are used to separate code into "paragraphs."
* Always place a space before and after all binary operators,
especially assignments (`operator=`).
* The `!` operator should always be followed by a space.
* The `!` operator should be preceded by a space, but not followed by one.
* The `~` operator should be preceded by a space, but not followed by one.
* The `++` and `--` operators should have no spaces between the operator and
the operand.
* A space never appears before a comma, and always appears after a comma.
* Always place a space before an opening parenthesis. One exception is if
the parentheses are empty.
* Don't put spaces after a parenthesis. A typical member function call might
look like this: `foobar (1, 2, 3);`
* In general, leave a blank line before an `if` statement.
* In general, leave a blank line after a closing brace `}`.
* Do not place code or comments on the same line as any opening or
* Do not place code on the same line as any opening or
closing brace.
* Do not write `if` statements all-on-one-line. The exception to this is when
you've got a sequence of similar `if` statements, and are aligning them all


@@ -1,22 +1,31 @@
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get -y install build-essential g++ git libbz2-dev wget python-dev
RUN apt -y update
RUN apt -y upgrade
RUN apt -y install build-essential g++ git libbz2-dev wget python-dev
RUN apt -y install cmake flex bison graphviz graphviz-dev libicu-dev
RUN apt -y install jarwrapper java-common
# Install Boost
ENV BOOST_SHA 440a59f8bc4023dbe6285c9998b0f7fa288468b889746b1ef00e8b36c559dce1
RUN wget https://sourceforge.net/projects/boost/files/boost/1.62.0/boost_1_62_0.tar.gz
RUN echo "$BOOST_SHA boost_1_62_0.tar.gz" | sha256sum -c
RUN tar xzf boost_1_62_0.tar.gz
RUN cd boost_1_62_0 && ./bootstrap.sh --prefix=/usr/local
RUN cd boost_1_62_0 && ./b2 install
ENV BOOST_ROOT=/boost_1_62_0
RUN cd /tmp
ENV CM_INSTALLER=cmake-3.10.0-rc3-Linux-x86_64.sh
ENV CM_VER_DIR=/opt/local/cmake-3.10.0
RUN cd /tmp && wget https://cmake.org/files/v3.10/$CM_INSTALLER && chmod a+x $CM_INSTALLER
RUN mkdir -p $CM_VER_DIR
RUN ln -s $CM_VER_DIR /opt/local/cmake
RUN /tmp/$CM_INSTALLER --prefix=$CM_VER_DIR --exclude-subdir
RUN rm -f /tmp/$CM_INSTALLER
# Install dependencies
RUN apt-get -y install doxygen
RUN apt-get -y install xsltproc
RUN cd /tmp && wget https://ftp.stack.nl/pub/users/dimitri/doxygen-1.8.14.src.tar.gz
RUN cd /tmp && tar xvf doxygen-1.8.14.src.tar.gz
RUN mkdir -p /tmp/doxygen-1.8.14/build
RUN cd /tmp/doxygen-1.8.14/build && /opt/local/cmake/bin/cmake -G "Unix Makefiles" ..
RUN cd /tmp/doxygen-1.8.14/build && make -j2
RUN cd /tmp/doxygen-1.8.14/build && make install
RUN rm -f /tmp/doxygen-1.8.14.src.tar.gz
RUN rm -rf /tmp/doxygen-1.8.14
CMD cd /opt/rippled/docs && \
chmod +x makeqbk.sh && \
./makeqbk.sh && \
$BOOST_ROOT/b2
RUN mkdir -p /opt/plantuml
RUN wget -O /opt/plantuml/plantuml.jar http://sourceforge.net/projects/plantuml/files/plantuml.jar/download
ENV PLANTUML_JAR=/opt/plantuml/plantuml.jar
CMD cd /opt/rippled/docs && doxygen source.dox


@@ -1,81 +0,0 @@
#
# Copyright (c) 2015-2016 Vinnie Falco (vinnie dot falco at gmail dot com)
#
# Distributed under the Boost Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#
import os ;
local broot = [ os.environ BOOST_ROOT ] ;
project rippled/doc ;
using boostbook ;
using quickbook ;
using doxygen ;
path-constant out : . ;
install stylesheets
:
$(broot)/doc/src/boostbook.css
:
<location>$(out)/html
;
explicit stylesheets ;
install images
:
[ glob $(broot)/doc/src/images/*.png ]
:
<location>$(out)/html/images
;
explicit images ;
install callouts
:
[ glob $(broot)/doc/src/images/callouts/*.png ]
:
<location>$(out)/html/images/callouts
;
explicit callout ;
install consensus_images
:
[ glob images/consensus/*.png ]
:
<location>$(out)/html/images/consensus
;
explicit consensus_images ;
xml doc
:
main.qbk
:
<location>temp
<include>$(broot)/tools/boostbook/dtd
;
boostbook boostdoc
:
doc
:
<xsl:param>chapter.autolabel=0
<xsl:param>boost.root=$(broot)
<xsl:param>chunk.first.sections=1 # Chunk the first top-level section?
<xsl:param>chunk.section.depth=8 # Depth to which sections should be chunked
<xsl:param>generate.section.toc.level=2 # Control depth of TOC generation in sections
<xsl:param>toc.max.depth=2 # How many levels should be created for each TOC?
<xsl:param>toc.section.depth=2 # How deep should recursive sections appear in the TOC?
<xsl:param>generate.toc="chapter toc section toc"
:
<location>temp
<dependency>stylesheets
<dependency>images
<dependency>consensus_images
;


@@ -13,21 +13,6 @@ relative to the `docs/` directory.
Install these dependencies:
1. Install [Doxygen](http://www.stack.nl/~dimitri/doxygen/download.html)
2. Download the following zip files from [xsltproc](https://www.zlatkovic.com/pub/libxml/)
(Alternate download: ftp://ftp.zlatkovic.com/libxml/),
and extract the `bin\` folder contents into any folder in your path.
* iconv
* libxml2
* libxslt
* zlib
3. Download [Boost](http://www.boost.org/users/download/)
1. Extract the compressed file contents to your (new) `$BOOST_ROOT` location.
2. Open a command prompt or shell in the `$BOOST_ROOT`.
3. `./bootstrap.bat`
4. (Optional, if you also plan to build rippled) `./bjam.exe --toolset=msvc-14.0
--build-type=complete variant=debug,release link=static runtime-link=static
address-model=64 stage`
5. If it is not already there, add your `$BOOST_ROOT` to your environment `$PATH`.
### MacOS
@@ -38,47 +23,54 @@ Install these dependencies:
You'll then need to make doxygen available to your command line. You can
do this by adding a symbolic link from `/usr/local/bin` to the doxygen
executable. For example, `$ ln -s /Applications/Doxygen.app/Contents/Resources/doxygen /usr/local/bin/doxygen`
2. Install [Boost](http://www.boost.org/users/download/)
1. Extract the compressed file contents to your (new) `$BOOST_ROOT` location.
2. Open a command prompt or shell in the `$BOOST_ROOT`.
3. `$ ./bootstrap.bat`
4. (Optional, if you also plan to build rippled)
`$ ./b2 toolset=clang threading=multi runtime-link=static link=static
cxxflags="-stdlib=libc++" linkflags="-stdlib=libc++" address-model=64`
5. If it is not already there, add your `$BOOST_ROOT` to your environment
`$PATH`. This makes the `b2` command available to the command line.
3. That should be all that's required. In OS X 10.11, at least, libxml2 and
libxslt come pre-installed.
### Linux
1. Install [Docker](https://docs.docker.com/engine/installation/)
2. Build Docker image. From the rippled root folder:
```
sudo docker build -t rippled-docs docs/
```
1. Install doxygen using your package manager OR from source using the links above.
## Setup project submodules
### [Optional] Install Plantuml (all platforms)
1. Open a shell in your rippled root folder.
2. `git submodule init`
3. `git submodule update docs/docca`
Doxygen supports the optional use of [plantuml](http://plantuml.com) to
generate diagrams from `@startuml` sections. We don't currently rely on this
functionality for docs, so it's largely optional. Requirements:
1. Download/install a functioning java runtime, if you don't already have one.
2. Download [plantuml](http://plantuml.com) from
[here](http://sourceforge.net/projects/plantuml/files/plantuml.jar/download).
Set a system environment variable named `PLANTUML_JAR` with a value of the fullpath
to the file system location of the `plantuml.jar` file you downloaded.
## Do it
### Windows & MacOS
### all platforms
From the rippled root folder:
```
cd docs
./makeqbk.sh && b2
mkdir -p html_doc
doxygen source.dox
```
The output will be in `docs/html`.
The output will be in `docs/html_doc`.
### Linux
## Docker
(applicable to all platforms)
Instead of installing the doxygen tools locally, you can use the provided `Dockerfile` to create
an ubuntu based image for running the tools:
1. Install [Docker](https://docs.docker.com/engine/installation/)
2. Build Docker image. From the rippled root folder:
```
sudo docker build -t rippled-docs docs/
```
Then to run the image, from the rippled root folder:
From the rippled root folder:
```
sudo docker run -v $PWD:/opt/rippled --rm rippled-docs
```
The output will be in `docs/html`.
The output will be in `docs/html_doc`.

View File

@@ -1,439 +0,0 @@
<!--
BoostBook DTD - development version
For further information, see: http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?Boost_Documentation_Format
Copyright (c) 2002 by Peter Simons <simons@cryp.to>
Copyright (c) 2003-2004 by Douglas Gregor <doug.gregor -at- gmail.com>
Copyright (c) 2007 by Frank Mori Hess <fmhess@users.sourceforge.net>
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt)
The latest stable DTD module is identified by the PUBLIC and SYSTEM identifiers:
PUBLIC "-//Boost//DTD BoostBook XML V1.1//EN"
SYSTEM "http://www.boost.org/tools/boostbook/dtd/1.1/boostbook.dtd"
$Revision$
$Date$
-->
<!--========== Define XInclude features. ==========-->
<!-- This is not really integrated into the DTD yet. Needs more
research. -->
<!--
<!ELEMENT xi:include (xi:fallback)?>
<!ATTLIST xi:include
xmlns:xi CDATA #FIXED "http://www.w3.org/2001/XInclude"
href CDATA #REQUIRED
parse (xml|text) "xml"
encoding CDATA #IMPLIED>
<!ELEMENT xi:fallback ANY>
<!ATTLIST xi:fallback
xmlns:xi CDATA #FIXED "http://www.w3.org/2001/XInclude">
-->
<!ENTITY % local.common.attrib "last-revision CDATA #IMPLIED">
<!--========== Define the BoostBook extensions ==========-->
<!ENTITY % boost.common.attrib "%local.common.attrib;
id CDATA #IMPLIED">
<!ENTITY % boost.namespace.mix
"class|class-specialization|struct|struct-specialization|
union|union-specialization|typedef|enum|
free-function-group|function|overloaded-function|
namespace">
<!ENTITY % boost.template.mix
"template-type-parameter|template-nontype-parameter|template-varargs">
<!ENTITY % boost.class.members
"static-constant|typedef|enum|
copy-assignment|constructor|destructor|method-group|
method|overloaded-method|data-member|class|class-specialization|struct|
struct-specialization|union|union-specialization">
<!ENTITY % boost.class.mix
"%boost.class.members;|free-function-group|function|overloaded-function">
<!ENTITY % boost.class.content
"template?, inherit*, purpose?, description?,
(%boost.class.mix;|access)*">
<!ENTITY % boost.class-specialization.content
"template?, specialization?, inherit?, purpose?, description?,
(%boost.class.mix;|access)*">
<!ENTITY % boost.function.semantics
"purpose?, description?, requires?, effects?, postconditions?,
returns?, throws?, complexity?, notes?, rationale?">
<!ENTITY % library.content
"libraryinfo, (title, ((section|library-reference|testsuite))+)?">
<!ELEMENT library (%library.content;)>
<!ATTLIST library
name CDATA #REQUIRED
dirname CDATA #REQUIRED
html-only CDATA #IMPLIED
url CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT boostbook (title, (chapter|library)*)>
<!ATTLIST boostbook %boost.common.attrib;>
<!ELEMENT libraryinfo (author+, copyright*, legalnotice*, librarypurpose, librarycategory*)>
<!ATTLIST libraryinfo %boost.common.attrib;>
<!ELEMENT librarypurpose (#PCDATA|code|ulink|functionname|methodname|classname|macroname|headername|enumname|globalname)*>
<!ATTLIST librarypurpose %boost.common.attrib;>
<!ELEMENT librarycategory (#PCDATA)>
<!ATTLIST librarycategory
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT libraryname (#PCDATA)>
<!ATTLIST libraryname %boost.common.attrib;>
<!ELEMENT library-reference ANY>
<!ATTLIST library-reference
%boost.common.attrib;>
<!ELEMENT librarylist EMPTY>
<!ATTLIST librarylist %boost.common.attrib;>
<!ELEMENT librarycategorylist (librarycategorydef)*>
<!ATTLIST librarycategorylist %boost.common.attrib;>
<!ELEMENT librarycategorydef (#PCDATA)>
<!ATTLIST librarycategorydef
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT header ANY>
<!ATTLIST header
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT namespace (%boost.namespace.mix;)*>
<!ATTLIST namespace
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT class (%boost.class.content;)>
<!ATTLIST class
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT struct (%boost.class.content;)>
<!ATTLIST struct
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT union (%boost.class.content;)>
<!ATTLIST union
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT class-specialization (%boost.class-specialization.content;)>
<!ATTLIST class-specialization
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT struct-specialization (%boost.class-specialization.content;)>
<!ATTLIST struct-specialization
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT union-specialization (%boost.class-specialization.content;)>
<!ATTLIST union-specialization
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT access (%boost.class.members;)+>
<!ATTLIST access
name CDATA #REQUIRED
%boost.common.attrib;>
<!--========= C++ Templates =========-->
<!ELEMENT template (%boost.template.mix;)*>
<!ATTLIST template %boost.common.attrib;>
<!ELEMENT template-type-parameter (default?, purpose?)>
<!ATTLIST template-type-parameter
name CDATA #REQUIRED
pack CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT template-nontype-parameter (type, default?, purpose?)>
<!ATTLIST template-nontype-parameter
name CDATA #REQUIRED
pack CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT template-varargs EMPTY>
<!ATTLIST template-varargs %boost.common.attrib;>
<!ELEMENT specialization (template-arg)*>
<!ATTLIST specialization %boost.common.attrib;>
<!ELEMENT template-arg ANY>
<!ATTLIST template-arg
pack CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT default ANY>
<!ATTLIST default %boost.common.attrib;>
<!ELEMENT inherit (type, purpose?)>
<!ATTLIST inherit
access CDATA #IMPLIED
pack CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT purpose ANY>
<!ATTLIST purpose %boost.common.attrib;>
<!ELEMENT description ANY>
<!ATTLIST description %boost.common.attrib;>
<!ELEMENT type ANY>
<!ATTLIST type %boost.common.attrib;>
<!ELEMENT typedef (type, purpose?, description?)>
<!ATTLIST typedef
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT enum (enumvalue*, purpose?, description?)>
<!ATTLIST enum
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT enumvalue (default?, purpose?, description?)>
<!ATTLIST enumvalue
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT static-constant (type, default, purpose?, description?)>
<!ATTLIST static-constant
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT data-member (type, purpose?, description?)>
<!ATTLIST data-member
name CDATA #REQUIRED
specifiers CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT paramtype ANY>
<!ATTLIST paramtype %boost.common.attrib;>
<!ELEMENT effects ANY>
<!ATTLIST effects %boost.common.attrib;>
<!ELEMENT postconditions ANY>
<!ATTLIST postconditions %boost.common.attrib;>
<!ELEMENT method-group (method|overloaded-method)*>
<!ATTLIST method-group
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT constructor (template?, parameter*, %boost.function.semantics;)>
<!ATTLIST constructor
specifiers CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT destructor (%boost.function.semantics;)>
<!ATTLIST destructor
specifiers CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT method (template?, type, parameter*, %boost.function.semantics;)>
<!ATTLIST method
name CDATA #REQUIRED
cv CDATA #IMPLIED
specifiers CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT function (template?, type, parameter*, %boost.function.semantics;)>
<!ATTLIST function
name CDATA #REQUIRED
specifiers CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT overloaded-method (signature*, %boost.function.semantics;)>
<!ATTLIST overloaded-method
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT overloaded-function (signature*, %boost.function.semantics;)>
<!ATTLIST overloaded-function
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT signature (template?, type, parameter*)>
<!ATTLIST signature
cv CDATA #IMPLIED
specifiers CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT requires ANY>
<!ATTLIST requires %boost.common.attrib;>
<!ELEMENT returns ANY>
<!ATTLIST returns %boost.common.attrib;>
<!ELEMENT throws ANY>
<!ATTLIST throws %boost.common.attrib;>
<!ELEMENT complexity ANY>
<!ATTLIST complexity %boost.common.attrib;>
<!ELEMENT notes ANY>
<!ATTLIST notes %boost.common.attrib;>
<!ELEMENT rationale ANY>
<!ATTLIST rationale %boost.common.attrib;>
<!ELEMENT functionname (#PCDATA)>
<!ATTLIST functionname
alt CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT enumname (#PCDATA)>
<!ATTLIST enumname
alt CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT macroname (#PCDATA)>
<!ATTLIST macroname
alt CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT headername (#PCDATA)>
<!ATTLIST headername
alt CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT globalname (#PCDATA)>
<!ATTLIST globalname
alt CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT copy-assignment
(template?, type?, parameter*, %boost.function.semantics;)>
<!ATTLIST copy-assignment
cv CDATA #IMPLIED
specifiers CDATA #IMPLIED
%boost.common.attrib;>
<!ELEMENT free-function-group (function|overloaded-function)*>
<!ATTLIST free-function-group
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT precondition ANY>
<!ATTLIST precondition %boost.common.attrib;>
<!ELEMENT code ANY>
<!ATTLIST code %boost.common.attrib;>
<!ELEMENT using-namespace EMPTY>
<!ATTLIST using-namespace
name CDATA #REQUIRED
%boost.common.attrib;>
<!ELEMENT using-class EMPTY>
<!ATTLIST using-class
name CDATA #REQUIRED
%boost.common.attrib;>
<!--========== Boost Testsuite Extensions ==========-->
<!ENTITY % boost.testsuite.tests
"compile-test|link-test|run-test|
compile-fail-test|link-fail-test|run-fail-test">
<!ENTITY % boost.testsuite.test.content
"source*, lib*, requirement*, purpose, if-fails?">
<!ELEMENT testsuite ((%boost.testsuite.tests;)+)>
<!ATTLIST testsuite %boost.common.attrib;>
<!ELEMENT compile-test (%boost.testsuite.test.content;)>
<!ATTLIST compile-test
filename CDATA #REQUIRED
name CDATA #IMPLIED>
<!ELEMENT link-test (%boost.testsuite.test.content;)>
<!ATTLIST link-test
filename CDATA #REQUIRED
name CDATA #IMPLIED>
<!ELEMENT run-test (%boost.testsuite.test.content;)>
<!ATTLIST run-test
filename CDATA #REQUIRED
name CDATA #IMPLIED>
<!ELEMENT compile-fail-test (%boost.testsuite.test.content;)>
<!ATTLIST compile-fail-test
filename CDATA #REQUIRED
name CDATA #IMPLIED>
<!ELEMENT link-fail-test (%boost.testsuite.test.content;)>
<!ATTLIST link-fail-test
filename CDATA #REQUIRED
name CDATA #IMPLIED>
<!ELEMENT run-fail-test (%boost.testsuite.test.content;)>
<!ATTLIST run-fail-test
filename CDATA #REQUIRED
name CDATA #IMPLIED>
<!ELEMENT source (#PCDATA|snippet)*>
<!ELEMENT snippet EMPTY>
<!ATTLIST snippet
name CDATA #REQUIRED>
<!ELEMENT lib (#PCDATA)>
<!ELEMENT requirement (#PCDATA)>
<!ATTLIST requirement
name CDATA #REQUIRED>
<!ELEMENT if-fails ANY>
<!ELEMENT parameter (paramtype, default?, description?)>
<!ATTLIST parameter
name CDATA #IMPLIED
pack CDATA #IMPLIED>
<!ELEMENT programlisting ANY>
<!ATTLIST programlisting
name CDATA #IMPLIED>
<!--========== Customize the DocBook DTD ==========-->
<!ENTITY % local.tech.char.class "|functionname|libraryname|enumname|headername|macroname|code">
<!ENTITY % local.para.class
"|using-namespace|using-class|librarylist|librarycategorylist">
<!ENTITY % local.descobj.class "|libraryinfo">
<!ENTITY % local.classname.attrib "alt CDATA #IMPLIED">
<!ENTITY % local.methodname.attrib "alt CDATA #IMPLIED">
<!ENTITY % local.refentry.class "|library-reference|testsuite">
<!ENTITY % local.title.char.mix "">
<!ENTITY % programlisting.module "IGNORE">
<!ENTITY % parameter.module "IGNORE">
<!ENTITY % function.module "IGNORE">
<!ENTITY % type.module "IGNORE">
<!--========== Import DocBook DTD ==========-->
<!ENTITY % DocBook PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
%DocBook;

View File

@@ -1,18 +1,18 @@
[section Consensus and Validation]
# Consensus and Validation
[*This section is a work in progress!!]
**This section is a work in progress!!**
Consensus is the task of reaching agreement within a distributed system in the
presence of faulty or even malicious participants. This document outlines the
[@https://ripple.com/files/ripple/consensus/whitepaper.pdf Ripple Consensus
Algorithm] as implemented in [@https://github.com/ripple/rippled rippled], but
[Ripple Consensus Algorithm](https://ripple.com/files/ripple/consensus/whitepaper.pdf)
as implemented in [rippled](https://github.com/ripple/rippled), but
focuses on its utility as a generic consensus algorithm independent of the
detailed mechanics of the Ripple Consensus Ledger. Most notably, the algorithm
does not require fully synchronous communication between all nodes in the
network, or even a fixed network topology, but instead achieves consensus via
collectively trusted subnetworks.
[heading Distributed Agreement]
## Distributed Agreement
A challenge for distributed systems is reaching agreement on changes in shared
state. For the Ripple network, the shared state is the current ledger--account
@@ -20,7 +20,7 @@ information, account balances, order books and other financial data. We will
refer to shared distributed state as a /ledger/ throughout the remainder of this
document.
[$images/consensus/ledger_chain.png [width 50%] [height 50%] ]
![Ledger Chain](images/consensus/ledger_chain.png "Ledger Chain")
As shown above, new ledgers are made by applying a set of transactions to the
prior ledger. For the Ripple network, transactions include payments,
@@ -33,14 +33,14 @@ the set of transactions to include, the order to apply those transactions, and
even the resulting ledger after applying the transactions. This is even more
difficult when some participants are faulty or malicious.
The Ripple network is a decentralized and _trust-full_ network. Anyone is free
The Ripple network is a decentralized and **trust-full** network. Anyone is free
to join and participants are free to choose a subset of peers that are
collectively trusted to not collude in an attempt to defraud the participant.
Leveraging this network of trust, the Ripple algorithm has two main components.
* /Consensus/ in which network participants agree on the transactions to apply
* *Consensus* in which network participants agree on the transactions to apply
to a prior ledger, based on the positions of their chosen peers.
* /Validation/ in which network participants agree on what ledger was
* *Validation* in which network participants agree on what ledger was
generated, based on the ledgers generated by chosen peers.
These phases are continually repeated to process transactions submitted to the
@@ -50,46 +50,46 @@ links between ledgers point backward to the parent. Also note the alternate
Ledger 2 that was generated by some participants, but which failed validation
and was abandoned.
[$images/consensus/block_chain.png]
![Block Chain](images/consensus/block_chain.png "Block Chain")
The remainder of this section describes the Consensus and Validation algorithms
in more detail and is meant as a companion guide to understanding the generic
implementation in =rippled=. The document *does not* discuss correctness,
implementation in `rippled`. The document **does not** discuss correctness,
fault-tolerance or liveness properties of the algorithms or the full details of
how they integrate within =rippled= to support the Ripple Consensus Ledger.
how they integrate within `rippled` to support the Ripple Consensus Ledger.
[section Consensus Overview]
## Consensus Overview
[heading Definitions]
### Definitions
* The /ledger/ is the shared distributed state. Each ledger has a unique ID to
distinguish it from all other ledgers. During consensus, the /previous/,
/prior/ or /last-closed/ ledger is the most recent ledger seen by consensus
* The *ledger* is the shared distributed state. Each ledger has a unique ID to
distinguish it from all other ledgers. During consensus, the *previous*,
*prior* or *last-closed* ledger is the most recent ledger seen by consensus
and is the basis upon which it will build the next ledger.
* A /transaction/ is an instruction for an atomic change in the ledger state. A
* A *transaction* is an instruction for an atomic change in the ledger state. A
unique ID distinguishes a transaction from other transactions.
* A /transaction set/ is a set of transactions under consideration by consensus.
* A *transaction set* is a set of transactions under consideration by consensus.
The goal of consensus is to reach agreement on this set. The generic
consensus algorithm does not rely on an ordering of transactions within the
set, nor does it specify how to apply a transaction set to a ledger to
generate a new ledger. A unique ID distinguishes a set of transactions from
all other sets of transactions.
* A /node/ is one of the distributed actors running the consensus algorithm. It
* A *node* is one of the distributed actors running the consensus algorithm. It
has a unique ID to distinguish it from all other nodes.
* A /peer/ of a node is another node that it has chosen to follow and which it
* A *peer* of a node is another node that it has chosen to follow and which it
believes will not collude with other chosen peers. The choice of peers is not
symmetric, since participants can decide on their chosen sets independently.
* A /position/ is the current belief of the next ledger's transaction set and
close time. Position can refer to the node's own position or the position of a
peer.
* A /proposal/ is one of a sequence of positions a node shares during consensus.
* A *proposal* is one of a sequence of positions a node shares during consensus.
An initial proposal contains the starting position taken by a node before it
considers any peer positions. If a node subsequently updates its position in
response to its peers, it will issue an updated proposal. A proposal is
uniquely identified by the ID of the proposing node, the ID of the position
taken, the ID of the prior ledger the proposal is for, and the sequence number
of the proposal.
* A /dispute/ is a transaction that is either not part of a node's position or
* A *dispute* is a transaction that is either not part of a node's position or
not in a peer's position. During consensus, the node will add or remove
disputed transactions from its position based on that transaction's support
amongst its peers.
@@ -101,61 +101,61 @@ contain the ID of the position of a peer. Since many peers likely have the same
position, this reduces the need to send the full transaction set multiple times.
Instead, a node can request the transaction set from the network if necessary.
[heading Overview ]
[$images/consensus/consensus_overview.png [width 50%] [height 50%] ]
### Overview
![Consensus Overview](images/consensus/consensus_overview.png "Consensus Overview")
The diagram above is an overview of the consensus process from the perspective
of a single participant. Recall that during a single consensus round, a node is
trying to agree with its peers on which transactions to apply to its prior
ledger when generating the next ledger. It also attempts to agree on the
[link effective_close_time network time when the ledger closed]. There are
[network time when the ledger closed](#effective_close_time). There are
3 main phases to a consensus round:
* A call to =startRound= places the node in the =Open= phase. In this phase,
* A call to `startRound` places the node in the `Open` phase. In this phase,
the node is waiting for transactions to include in its open ledger.
* At some point, the node will =Close= the open ledger and transition to the
=Establish= phase. In this phase, the node shares/receives peer proposals on
* At some point, the node will `Close` the open ledger and transition to the
`Establish` phase. In this phase, the node shares/receives peer proposals on
which transactions should be accepted in the closed ledger.
* At some point, the node determines it has reached consensus with its peers on
which transactions to include. It transitions to the =Accept= phase. In this
which transactions to include. It transitions to the `Accept` phase. In this
phase, the node works on applying the transactions to the prior ledger to
generate a new closed ledger. Once the new ledger is completed, the node shares
the validated ledger hash with the network and makes a call to =startRound= to
the validated ledger hash with the network and makes a call to `startRound` to
start the cycle again for the next ledger.
Throughout, a heartbeat timer calls =timerEntry= at a regular frequency to drive
the process forward. Although the =startRound= call occurs at arbitrary times
Throughout, a heartbeat timer calls `timerEntry` at a regular frequency to drive
the process forward. Although the `startRound` call occurs at arbitrary times
based on when the initial round began and the time it takes to apply
transactions, the transitions from =Open= to =Establish= and =Establish= to
=Accept= only occur during calls to =timerEntry=. Similarly, transactions can
transactions, the transitions from `Open` to `Establish` and `Establish` to
`Accept` only occur during calls to `timerEntry`. Similarly, transactions can
arrive at arbitrary times, independent of the heartbeat timer. Transactions
received after the =Open= to =Close= transition and not part of peer proposals
received after the `Open` to `Close` transition and not part of peer proposals
won't be considered until the next consensus round. They are represented above
by the light green triangles.
Peer proposals are issued by a node during a =timerEntry= call, but since peers
do not synchronize =timerEntry= calls, they are received by other peers at
Peer proposals are issued by a node during a `timerEntry` call, but since peers
do not synchronize `timerEntry` calls, they are received by other peers at
arbitrary times. Peer proposals are only considered if received prior to the
=Establish= to =Accept= transition, and only if the peer is working on the same
`Establish` to `Accept` transition, and only if the peer is working on the same
prior ledger. Peer proposals received after consensus is reached will not be
meaningful and are represented above by the circle with the X in it. Only
proposals from chosen peers are considered.
[#effective_close_time]
[heading Effective Close Time]
### Effective Close Time ### {#effective_close_time}
In addition to agreeing on a transaction set, each consensus round tries to
agree on the time the ledger closed. Each node calculates its own close time
when it closes the open ledger. This exact close time is rounded to the nearest
multiple of the current /effective close time resolution/. It is this
/effective close time/ that nodes seek to agree on. This allows servers to
multiple of the current *effective close time resolution*. It is this
*effective close time* that nodes seek to agree on. This allows servers to
derive a common time for a ledger without the need for perfectly synchronized
clocks. As depicted below, the 3 pink arrows represent exact close times from 3
consensus nodes that round to the same effective close time given the current
resolution. The purple arrow represents a peer whose estimate rounds to a
different effective close time given the current resolution.
[$images/consensus/EffCloseTime.png]
![Effective Close Time](images/consensus/EffCloseTime.png "Effective Close Time")
The effective close time is part of the node's position and is shared with peers
in its proposals. Just like the position on the consensus transaction set, a
@@ -168,14 +168,14 @@ subsequent consensus rounds if nodes are unable to reach consensus on an
effective close time and increasing (finer) resolution if nodes consistently
reach close time consensus.
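As a concrete illustration of the rounding step (a minimal sketch, not the rippled implementation; the function name and plain integer types are assumptions), an exact close time can be snapped to the nearest multiple of the current resolution like this:
```{.cpp}
#include <cstdint>

// Illustrative sketch only: round an exact close time (in seconds) to the
// nearest multiple of the current effective close time resolution.
std::uint32_t
effectiveCloseTime(std::uint32_t exactCloseSec, std::uint32_t resolutionSec)
{
    if (resolutionSec == 0)
        return exactCloseSec;
    // Adding half the resolution before the integer division rounds to the
    // nearest multiple rather than always rounding down.
    return ((exactCloseSec + resolutionSec / 2) / resolutionSec) * resolutionSec;
}
```
With a 10-second resolution, for example, exact close times of 58, 61 and 63 seconds all map to an effective close time of 60 seconds, while 66 seconds maps to 70, mirroring the pink and purple arrows in the figure.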
[heading Modes]
### Modes
Internally, a node operates under one of the following consensus modes. Either
of the first two modes may be chosen when a consensus round starts.
* /Proposing/ indicates the node is a full-fledged consensus participant. It
* *Proposing* indicates the node is a full-fledged consensus participant. It
takes on positions and sends proposals to its peers.
* /Observing/ indicates the node is a passive consensus participant. It
* *Observing* indicates the node is a passive consensus participant. It
maintains a position internally, but does not propose that position to its
peers. Instead, it receives peer proposals and updates its position
to track the majority of its peers. This may be preferred if the node is only
@@ -184,21 +184,21 @@ of the first two modes may be chosen when a consensus round starts.
The other two modes are set internally during the consensus round when the node
believes it is no longer working on the dominant ledger chain based on peer
validations. It checks this on every call to =timerEntry=.
validations. It checks this on every call to `timerEntry`.
* /Wrong Ledger/ indicates the node is not working on the correct prior ledger
* *Wrong Ledger* indicates the node is not working on the correct prior ledger
and does not have it available. It requests that ledger from the network, but
continues to work towards consensus this round while waiting. If it had been
/proposing/, it will send a special "bowout" proposal to its peers to indicate
*proposing*, it will send a special "bowout" proposal to its peers to indicate
its change in mode for the rest of this round. For the duration of the round,
it defers to peer positions for determining the consensus outcome as if it
were just /observing/.
* /Switch Ledger/ indicates that the node has acquired the correct prior ledger
were just *observing*.
* *Switch Ledger* indicates that the node has acquired the correct prior ledger
from the network. Although it now has the correct prior ledger, the fact that
it had the wrong one at some point during this round means it is likely behind
and should defer to peer positions for determining the consensus outcome.
[$images/consensus/consensus_modes.png]
![Consensus Modes](images/consensus/consensus_modes.png "Consensus Modes")
Once either wrong ledger or switch ledger are reached, the node cannot
return to proposing or observing until the next consensus round. However,
@@ -212,49 +212,49 @@ time to share over the network, whereas the smaller ID could be shared in a peer
validation much more quickly. Distinguishing the two states allows the node to
decide how best to generate the next ledger once it declares consensus.
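The four modes can be summarized as a simple enumeration (a sketch for orientation; the enumerator names follow the prose above and are not guaranteed to match the identifiers used in rippled):
```{.cpp}
// Illustrative only: the consensus modes described above.
enum class Mode {
    proposing,      // full participant: takes positions and sends proposals
    observing,      // passive: tracks the peer majority but does not propose
    wrongLedger,    // on the wrong prior ledger; acquiring the correct one
    switchedLedger  // acquired the correct prior ledger mid-round
};
```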
[heading Phases]
### Phases
As depicted in the overview diagram, consensus is best viewed as a progression
through 3 phases. There are 4 public methods of the generic consensus algorithm
that determine this progression
* =startRound= begins a consensus round.
* =timerEntry= is called at a regular frequency (=LEDGER_MIN_CLOSE=) and is the
only call to consensus that can change the phase from =Open= to =Establish=
or =Accept=.
* =peerProposal= is called whenever a peer proposal is received and is what
allows a node to update its position in a subsequent =timerEntry= call.
* =gotTxSet= is called when a transaction set is received from the network. This
* `startRound` begins a consensus round.
* `timerEntry` is called at a regular frequency (`LEDGER_MIN_CLOSE`) and is the
only call to consensus that can change the phase from `Open` to `Establish`
or `Accept`.
* `peerProposal` is called whenever a peer proposal is received and is what
allows a node to update its position in a subsequent `timerEntry` call.
* `gotTxSet` is called when a transaction set is received from the network. This
is typically in response to a prior request from the node to acquire the
transaction set corresponding to a disagreeing peer's position.
The following subsections describe each consensus phase in more detail and what
actions are taken in response to these calls.
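For orientation, a host application drives these four entry points roughly as sketched below. This is a hypothetical driver, not the rippled code: `Consensus`, `Clock`, `roundComplete` and the simplified call signatures are stand-ins for the actual interfaces shown later in this document.
```{.cpp}
// Hypothetical driver sketch: how the four public calls interact over a round.
template <class Consensus, class Ledger, class Clock>
void
runOneRound(Consensus& consensus, Ledger const& prevLedger, Clock& clock)
{
    // Begin a round in the Open phase, building on the prior ledger.
    consensus.startRound(clock.now(), prevLedger.id(), prevLedger, /*proposing=*/true);

    // A heartbeat timer fires at a regular interval; only these calls can
    // move the phase from Open to Establish to Accept.
    while (!consensus.roundComplete())
    {
        clock.waitForHeartbeat();
        consensus.timerEntry(clock.now());
        // peerProposal() and gotTxSet() are invoked elsewhere, whenever the
        // network delivers a proposal or a requested transaction set.
    }
    // The implementing code now applies the consensus transaction set to
    // build the next ledger and calls startRound() again.
}
```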
[h6 Open]
#### Open
The =Open= phase is a quiescent period to allow transactions to build up in the
The `Open` phase is a quiescent period to allow transactions to build up in the
node's open ledger. The duration is a trade-off between latency and throughput.
A shorter window reduces the latency to generating the next ledger, but also
reduces transaction throughput due to fewer transactions accepted into the
ledger.
A call to =startRound= would forcibly begin the next consensus round, skipping
A call to `startRound` would forcibly begin the next consensus round, skipping
completion of the current round. This is not expected during normal operation.
Calls to =peerProposal= or =gotTxSet= simply store the proposal or transaction
set for use in the coming =Establish= phase.
Calls to `peerProposal` or `gotTxSet` simply store the proposal or transaction
set for use in the coming `Establish` phase.
A call to =timerEntry= first checks that the node is working on the correct
A call to `timerEntry` first checks that the node is working on the correct
prior ledger. If not, it will update the mode and request the correct ledger.
Otherwise, the node checks whether to switch to the =Establish= phase and close
Otherwise, the node checks whether to switch to the `Establish` phase and close
the ledger.
['Ledger Close]
##### Ledger Close
Under normal circumstances, the open ledger period ends when one of the following
is true
* if there are transactions in the open ledger and more than =LEDGER_MIN_CLOSE=
* if there are transactions in the open ledger and more than `LEDGER_MIN_CLOSE`
have elapsed. This is the typical behavior.
* if there are no open transactions and a suitably longer idle interval has
elapsed. This increases the opportunity to get some transaction into
@@ -275,48 +275,48 @@ transaction.
In the example below, we suppose our node has closed with transactions 1,2 and 3. It creates disputes
for transactions 2,3 and 4, since at least one peer position differs on each.
[#disputes_image]
[$images/consensus/disputes.png [width 20%] [height 20%]]
##### Disputes ##### {#disputes_image}
![Disputes](images/consensus/disputes.png "Disputes")
[h6 Establish]
#### Establish
The establish phase is the active period of consensus in which the node
exchanges proposals with peers in an attempt to reach agreement on the consensus
transactions and effective close time.
A call to =startRound= would forcibly begin the next consensus round, skipping
A call to `startRound` would forcibly begin the next consensus round, skipping
completion of the current round. This is not expected during normal operation.
Calls to =peerProposal= or =gotTxSet= that reflect new positions will generate
Calls to `peerProposal` or `gotTxSet` that reflect new positions will generate
disputed transactions for any new disagreements and will update the peer's vote
for all disputed transactions.
A call to =timerEntry= first checks that the node is working from the correct
A call to `timerEntry` first checks that the node is working from the correct
prior ledger. If not, the node will update the mode and request the correct
ledger. Otherwise, the node updates the node's position and considers whether
to switch to the =Accepted= phase and declare consensus reached. However, at
least =LEDGER_MIN_CONSENSUS= time must have elapsed before doing either. This
to switch to the `Accepted` phase and declare consensus reached. However, at
least `LEDGER_MIN_CONSENSUS` time must have elapsed before doing either. This
allows peers an opportunity to take an initial position and share it.
['Update Position]
##### Update Position
In order to achieve consensus, the node is looking for a transaction set that is
supported by a super-majority of peers. The node works towards this set by
adding or removing disputed transactions from its position based on an
increasing threshold for inclusion.
[$images/consensus/threshold.png [width 50%] [height 50%]]
![Threshold](images/consensus/threshold.png "Threshold")
By starting with a lower threshold, a node initially allows a wide set of
transactions into its position. If the establish round continues and the node is
"stuck", a higher threshold can focus on accepting transactions with the most
support. The constants that define the thresholds and durations at which the
thresholds change are given by `AV_XXX_CONSENSUS_PCT` and
`AV_XXX_CONSENSUS_TIME` respectively, where =XXX= is =INIT=,=MID=,=LATE= and
=STUCK=. The effective close time position is updated using the same
`AV_XXX_CONSENSUS_TIME` respectively, where `XXX` is `INIT`,`MID`,`LATE` and
`STUCK`. The effective close time position is updated using the same
thresholds.
Given the [link disputes_image example disputes above] and an initial threshold
Given the [example disputes above](#disputes_image) and an initial threshold
of 50%, our node would retain its position since transaction 1 was not in
dispute and transactions 2 and 3 have 75% support. Since its position did not
change, it would not need to send a new proposal to peers. Peer C would not
@@ -333,7 +333,7 @@ Lastly, if our node were not in the proposing mode, it would not include its own
vote and just take the majority (>50%) position of its peers. In this example,
our node would maintain its position of transactions 1, 2 and 3.
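A minimal sketch of the vote counting (illustrative only; `DisputeVotes`, the integer-percentage math and the `proposing` handling are assumptions layered on the description above):
```{.cpp}
#include <cstddef>

// Illustrative sketch only: should a disputed transaction stay in our
// position at the current inclusion threshold?
struct DisputeVotes
{
    std::size_t yes = 0;   // peers whose position includes the transaction
    std::size_t no = 0;    // peers whose position excludes it
    bool ourVote = false;  // whether our own position currently includes it
};

bool
keepDisputed(DisputeVotes const& d, bool proposing, unsigned thresholdPct)
{
    // When proposing, our own vote is counted alongside our peers' votes.
    std::size_t const yes = d.yes + ((proposing && d.ourVote) ? 1 : 0);
    std::size_t const total = d.yes + d.no + (proposing ? 1 : 0);
    if (total == 0)
        return d.ourVote;
    return yes * 100 >= static_cast<std::size_t>(thresholdPct) * total;
}
```
At the initial 50% threshold in the example above, transactions 2 and 3 (75% support) stay in the node's position, so the position is unchanged.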
['Checking Consensus]
##### Checking Consensus
After updating its position, the node checks for supermajority agreement with
its peers on its current position. This agreement is of the exact transaction
@@ -347,11 +347,11 @@ Consensus is declared when the following 3 clauses are true:
* `LEDGER_MIN_CONSENSUS` time has elapsed in the establish phase
* At least 75% of the prior round proposers have proposed OR this establish
phase is `LEDGER_MIN_CONSENSUS` longer than the last round's establish phase
* =minimumConsensusPercentage= of ourself and our peers share the same position
* `minimumConsensusPercentage` of ourself and our peers share the same position
The middle condition ensures slower peers have a chance to share positions, but
prevents waiting too long on peers that have disconnected. Additionally, a node
can declare that consensus has moved on if =minimumConsensusPercentage= peers
can declare that consensus has moved on if `minimumConsensusPercentage` peers
have sent validations and moved on to the next ledger. This outcome indicates
the node has fallen behind its peers and needs to catch up.
@@ -359,45 +359,43 @@ If a node is not proposing, it does not include its own position when
calculating the percent of agreeing participants but otherwise follows the above
logic.
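Put together as a sketch (illustrative only; the parameter names are taken from the constants mentioned above, but the function itself is hypothetical):
```{.cpp}
#include <chrono>

// Illustrative sketch only: the three clauses under which consensus is
// declared, as described above.
bool
haveConsensus(
    std::chrono::milliseconds establishElapsed,      // time spent in Establish
    std::chrono::milliseconds ledgerMinConsensus,    // LEDGER_MIN_CONSENSUS
    std::chrono::milliseconds prevEstablishElapsed,  // last round's Establish time
    unsigned priorProposersPct,  // % of prior-round proposers that have proposed
    unsigned agreeingPct,        // % of ourself + peers sharing our exact position
    unsigned minimumConsensusPercentage)
{
    if (establishElapsed < ledgerMinConsensus)
        return false;
    bool const proposersReady = priorProposersPct >= 75 ||
        establishElapsed >= prevEstablishElapsed + ledgerMinConsensus;
    return proposersReady && agreeingPct >= minimumConsensusPercentage;
}
```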
['Accepting Consensus]
##### Accepting Consensus
Once consensus is reached (or moved on), the node switches to the =Accept= phase
Once consensus is reached (or moved on), the node switches to the `Accept` phase
and signals to the implementing code that the round is complete. That code is
responsible for using the consensus transaction set to generate the next ledger
and calling =startRound= to begin the next round. The implementation has total
and calling `startRound` to begin the next round. The implementation has total
freedom on ordering transactions, deciding what to do if consensus moved on,
determining whether to retry or abandon local transactions that did not make the
consensus set and updating any internal state based on the consensus progress.
#### Accept
[h6 Accept]
The =Accept= phase is the terminal phase of the consensus algorithm. Calls to
=timerEntry=, =peerProposal= and =gotTxSet= will not change the internal
The `Accept` phase is the terminal phase of the consensus algorithm. Calls to
`timerEntry`, `peerProposal` and `gotTxSet` will not change the internal
consensus state while in the accept phase. The expectation is that the
application specific code is working to generate the new ledger based on the
consensus outcome. Once complete, that code should make a call to =startRound=
to kick off the next consensus round. The =startRound= call includes the new
consensus outcome. Once complete, that code should make a call to `startRound`
to kick off the next consensus round. The `startRound` call includes the new
prior ledger, prior ledger ID and whether the round should begin in the
proposing or observing mode. After setting some initial state, the phase
transitions to =Open=. The node will also check if the provided prior ledger
transitions to `Open`. The node will also check if the provided prior ledger
and ID are correct, updating the mode and requesting the proper ledger from the
network if necessary.
[endsect] [/Consensus Overview]
[section Consensus Type Requirements]
## Consensus Type Requirements
The consensus type requirements are given below as minimal implementation stubs.
Actual implementations would augment these stubs with members appropriate for
managing the details of transactions and ledgers within the larger application
framework.
[heading Transaction]
The transaction type =Tx= encapsulates a single transaction under consideration
### Transaction
The transaction type `Tx` encapsulates a single transaction under consideration
by consensus.
```
```{.cpp}
struct Tx
{
using ID = ...;
@@ -407,13 +405,14 @@ struct Tx
};
```
[heading Transaction Set]
The transaction set type =TxSet= represents a set of [^Tx]s that are collectively
under consideration by consensus. A =TxSet= can be compared against other [^TxSet]s
### Transaction Set
The transaction set type `TxSet` represents a set of `Tx`s that are collectively
under consideration by consensus. A `TxSet` can be compared against other `TxSet`s
(typically from peers) and can be modified to add or remove transactions via
the mutable subtype.
```
```{.cpp}
struct TxSet
{
using Tx = Tx;
@@ -446,14 +445,16 @@ struct TxSet
};
```
[heading Ledger] The =Ledger= type represents the state shared amongst the
### Ledger
The `Ledger` type represents the state shared amongst the
distributed participants. Notice that the details of how the next ledger is
generated from the prior ledger and the consensus accepted transaction set is
not part of the interface. Within the generic code, this type is primarily used
to know that peers are working on the same tip of the ledger chain and to
provide some basic timing data for consensus.
```
```{.cpp}
struct Ledger
{
using ID = ...;
@@ -483,10 +484,13 @@ struct Ledger
};
```
[heading PeerProposal] The =PeerProposal= type represents the signed position taken
### PeerProposal
The `PeerProposal` type represents the signed position taken
by a peer during consensus. The only type requirement is owning an instance of a
generic =ConsensusProposal=.
```
generic `ConsensusProposal`.
```{.cpp}
// Represents our proposed position or a peer's proposed position
// and is provided with the generic code
template <class NodeID_t, class LedgerID_t, class Position_t> class ConsensusProposal;
@@ -502,15 +506,16 @@ struct PeerPosition
// ... implementation specific
};
```
[heading Generic Consensus Interface]
The generic =Consensus= relies on =Adaptor= template class to implement a set
### Generic Consensus Interface
The generic `Consensus` relies on `Adaptor` template class to implement a set
of helper functions that plug the consensus algorithm into a specific application.
The =Adaptor= class also defines the types above needed by the algorithm. Below
The `Adaptor` class also defines the types above needed by the algorithm. Below
are excerpts of the generic consensus implementation and of helper types that will
interact with the concrete implementing class.
```
```{.cpp}
// Represents a transaction under dispute this round
template <class Tx_t, class NodeID_t> class DisputedTx;
@@ -583,11 +588,12 @@ public:
// ... details
};
```
[heading Adapting Generic Consensus]
### Adapting Generic Consensus
The stub below shows the set of callback/helper functions required in the implementing class.
```
```{.cpp}
struct Adaptor
{
using Ledger_t = Ledger;
@@ -651,30 +657,27 @@ struct Adaptor
The implementing class hides many details of the peer communication
model from the generic code.
* The =share= member functions are responsible for sharing the given type with a
* The `share` member functions are responsible for sharing the given type with a
node's peers, but are agnostic to the mechanism. Ideally, messages are delivered
faster than =LEDGER_GRANULARITY=.
faster than `LEDGER_GRANULARITY`.
* The generic code does not specify how transactions are submitted by clients,
propagated through the network or stored in the open ledger. Indeed, the open
ledger is only conceptual from the perspective of the generic code---the
initial position and transaction set are opaquely generated in a
`Consensus::Result` instance returned from the =onClose= callback.
* The calls to =acquireLedger= and =acquireTxSet= only have non-trivial return
`Consensus::Result` instance returned from the `onClose` callback.
* The calls to `acquireLedger` and `acquireTxSet` only have non-trivial return
if the ledger or transaction set of interest is available. The implementing
class is free to block while acquiring, or return the empty option while
servicing the request asynchronously. Due to legacy reasons, the two calls
are not symmetric. =acquireTxSet= requires the host application to call
=gotTxSet= when an asynchronous =acquire= completes. Conversely,
=acquireLedger= will be called again later by the consensus code if it still
are not symmetric. `acquireTxSet` requires the host application to call
`gotTxSet` when an asynchronous `acquire` completes. Conversely,
`acquireLedger` will be called again later by the consensus code if it still
desires the ledger with the hope that the asynchronous acquisition is
complete.
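A self-contained sketch of that asymmetry from the implementing side (hypothetical: `FakeNetwork`, `fetchTxSetAsync` and the callback wiring are invented for illustration and are not rippled APIs):
```{.cpp}
#include <functional>
#include <optional>

// Hypothetical, simplified types standing in for the real ones.
struct TxSet { using ID = int; };

struct FakeNetwork
{
    // Start an asynchronous fetch and invoke the callback when it completes.
    void fetchTxSetAsync(TxSet::ID, std::function<void(TxSet const&)> onDone)
    {
        onDone(TxSet{});  // pretend the fetch finished immediately
    }
};

struct AdaptorSketch
{
    FakeNetwork network_;
    std::function<void(TxSet const&)> gotTxSet_;  // forwards to Consensus::gotTxSet

    // acquireTxSet may return nothing now; the host later calls gotTxSet when
    // the asynchronous fetch completes. acquireLedger, by contrast, is simply
    // polled again by the consensus code on later timerEntry calls.
    std::optional<TxSet> acquireTxSet(TxSet::ID id)
    {
        network_.fetchTxSetAsync(id, [this](TxSet const& acquired) {
            if (gotTxSet_)
                gotTxSet_(acquired);
        });
        return std::nullopt;
    }
};
```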
[endsect] [/Consensus Type Requirements]
[section Validation]
## Validation
Coming Soon!
[endsect] [/Validation]
[endsect] [/Consensus and Validation]

Submodule docs/docca deleted from 335dbf9c36

Binary image files changed (not shown). Sizes before → after: 12 KiB → 33 KiB, 14 KiB → 34 KiB, 10 KiB → 25 KiB, 5.9 KiB → 13 KiB.

View File

@@ -1,14 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "boostbook.dtd">
<!--
Copyright (c) 2013-2016 Vinnie Falco (vinnie dot falco at gmail dot com)
Distributed under the Boost Software License, Version 1.0. (See accompanying
file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
-->
<section id="rippled.index">
<title>Index</title>
<index/>
</section>

View File

@@ -1,39 +0,0 @@
[/
Copyright (c) Copyright (c) 2012-2017 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
]
[library rippled
[quickbook 1.6]
[copyright 2012 - 2017 Ripple Labs Inc.]
[purpose C++ Library]
[license
Distributed under the ISC License
]
[authors [Labs, Ripple]]
[category template]
[category generic]
]
[template mdash[] '''&mdash; ''']
[template indexterm1[term1] '''<indexterm><primary>'''[term1]'''</primary></indexterm>''']
[template indexterm2[term1 term2] '''<indexterm><primary>'''[term1]'''</primary><secondary>'''[term2]'''</secondary></indexterm>''']
[include consensus.qbk]
[section:ref Reference]
[include temp/reference.qbk]
[endsect]
[xinclude index.xml]

View File

@@ -1,11 +0,0 @@
#!/bin/bash
# Copyright (c) 2013-2016 Vinnie Falco (vinnie dot falco at gmail dot com)
#
# Distributed under the Boost Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
mkdir -p temp
doxygen source.dox
xsltproc temp/combine.xslt temp/index.xml > temp/all.xml
xsltproc reference.xsl temp/all.xml > temp/reference.qbk

View File

@@ -1,61 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "boostbook.dtd">
<!--
Copyright (c) Copyright (c) 2012, 2013 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-->
<informaltable frame="all">
<tgroup cols="3">
<colspec colname="a"/>
<colspec colname="b"/>
<colspec colname="c"/>
<thead>
<row>
<entry valign="center" namest="a" nameend="c">
<bridgehead renderas="sect2">Core</bridgehead>
</entry>
</row>
</thead>
<tbody>
<row>
<entry valign="top">
<bridgehead renderas="sect3">Classes</bridgehead>
<simplelist type="vert" columns="1">
</simplelist>
<bridgehead renderas="sect3">Constants</bridgehead>
<simplelist type="vert" columns="1">
</simplelist>
</entry>
<entry valign="top">
<bridgehead renderas="sect3">Functions</bridgehead>
<simplelist type="vert" columns="1">
</simplelist>
<bridgehead renderas="sect3">Type Traits</bridgehead>
<simplelist type="vert" columns="1">
</simplelist>
</entry>
<entry valign="top">
<bridgehead renderas="sect3">Types</bridgehead>
<simplelist type="vert" columns="1">
</simplelist>
<bridgehead renderas="sect3">Concepts</bridgehead>
<simplelist type="vert" columns="1">
</simplelist>
</entry>
</row>
</tbody>
</tgroup>
</informaltable>

View File

@@ -1,14 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<!-- Variables (Edit for your project) -->
<xsl:variable name="doc-ref" select="'rippled.ref.'"/>
<xsl:variable name="doc-ns" select="'ripple'"/>
<xsl:variable name="debug" select="0"/>
<xsl:variable name="private" select="0"/>
<!-- End Variables -->
<xsl:include href="docca/include/docca/doxygen.xsl"/>
</xsl:stylesheet>

docs/sample_chart.doc Normal file
View File

@@ -0,0 +1,24 @@
/*!
\page somestatechart Example state diagram
\startuml SomeState "my state diagram"
scale 600 width
[*] -> State1
State1 --> State2 : Succeeded
State1 --> [*] : Aborted
State2 --> State3 : Succeeded
State2 --> [*] : Aborted
state State3 {
state "Accumulate Enough Data\nLong State Name" as long1
long1 : Just a test
[*] --> long1
long1 --> long1 : New Data
long1 --> ProcessData : Enough Data
}
State3 --> State3 : Failed
State3 --> [*] : Succeeded / Save Result
State3 --> [*] : Aborted
\enduml
*/

View File

@@ -6,6 +6,7 @@ PROJECT_NAME = "rippled"
PROJECT_NUMBER =
PROJECT_BRIEF = C++ Library
PROJECT_LOGO =
PROJECT_LOGO = images/LogoForDocumentation.png
OUTPUT_DIRECTORY =
CREATE_SUBDIRS = NO
ALLOW_UNICODE_NAMES = NO
@@ -104,6 +105,9 @@ WARN_LOGFILE =
#---------------------------------------------------------------------------
INPUT = \
\
../src/ripple/app/misc/TxQ.h \
../src/ripple/app/tx/apply.h \
../src/ripple/app/tx/applySteps.h \
../src/ripple/protocol/STObject.h \
../src/ripple/protocol/JsonFields.h \
../src/test/jtx/AbstractClient.h \
@@ -125,6 +129,42 @@ INPUT = \
../src/ripple/app/tx/applySteps.h \
../src/ripple/app/tx/impl/InvariantCheck.h \
../src/ripple/app/consensus/RCLValidations.h \
../src/README.md \
../src/ripple/README.md \
../README.md \
../RELEASENOTES.md \
../docs/CodingStyle.md \
../docs/CheatSheet.md \
../docs/README.md \
../docs/sample_chart.doc \
../docs/HeapProfiling.md \
../docs/Docker.md \
../docs/consensus.md \
../Builds/macos/README.md \
../Builds/linux/README.md \
../Builds/VisualStudio2017/README.md \
../src/ripple/consensus/README.md \
../src/ripple/app/consensus/README.md \
../src/test/csf/README.md \
../src/ripple/basics/README.md \
../src/ripple/crypto/README.md \
../src/ripple/peerfinder/README.md \
../src/ripple/app/misc/README.md \
../src/ripple/app/misc/FeeEscalation.md \
../src/ripple/app/ledger/README.md \
../src/ripple/app/paths/README.md \
../src/ripple/app/tx/README.md \
../src/ripple/proto/README.md \
../src/ripple/shamap/README.md \
../src/ripple/protocol/README.md \
../src/ripple/json/README.md \
../src/ripple/json/TODO.md \
../src/ripple/resource/README.md \
../src/ripple/rpc/README.md \
../src/ripple/overlay/README.md \
../src/ripple/nodestore/README.md \
../src/ripple/nodestore/Benchmarks.md \
INPUT_ENCODING = UTF-8
FILE_PATTERNS =
@@ -136,12 +176,16 @@ EXCLUDE_SYMBOLS =
EXAMPLE_PATH =
EXAMPLE_PATTERNS =
EXAMPLE_RECURSIVE = NO
IMAGE_PATH =
IMAGE_PATH = \
./images/ \
./images/consensus/ \
../src/test/csf/ \
INPUT_FILTER =
FILTER_PATTERNS =
FILTER_SOURCE_FILES = NO
FILTER_SOURCE_PATTERNS =
USE_MDFILE_AS_MAINPAGE =
USE_MDFILE_AS_MAINPAGE = ../src/README.md
#---------------------------------------------------------------------------
# Configuration options related to source browsing
@@ -168,8 +212,8 @@ IGNORE_PREFIX =
#---------------------------------------------------------------------------
# Configuration options related to the HTML output
#---------------------------------------------------------------------------
GENERATE_HTML = NO
HTML_OUTPUT = dhtm
GENERATE_HTML = YES
HTML_OUTPUT = html_doc
HTML_FILE_EXTENSION = .html
HTML_HEADER =
HTML_FOOTER =
@@ -269,8 +313,8 @@ MAN_LINKS = NO
#---------------------------------------------------------------------------
# Configuration options related to the XML output
#---------------------------------------------------------------------------
GENERATE_XML = YES
XML_OUTPUT = temp/
GENERATE_XML = NO
XML_OUTPUT = temp
XML_PROGRAMLISTING = YES
#---------------------------------------------------------------------------
@@ -346,7 +390,7 @@ DOT_PATH =
DOTFILE_DIRS =
MSCFILE_DIRS =
DIAFILE_DIRS =
PLANTUML_JAR_PATH =
PLANTUML_JAR_PATH = $(PLANTUML_JAR)
PLANTUML_INCLUDE_PATH =
DOT_GRAPH_MAX_NODES = 50
MAX_DOT_GRAPH_DEPTH = 0

Binary image files removed (not shown). Sizes: 102 KiB, 90 KiB, 5.0 KiB.

View File

@@ -1,165 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2012, 2013 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef BEAST_BEASTCONFIG_H_INCLUDED
#define BEAST_BEASTCONFIG_H_INCLUDED
/** Configuration file for Beast
This sets various configurable options for Beast. In order to compile you
must place a copy of this file in a location where your build environment
can find it, and then customize its contents to suit your needs.
@file BeastConfig.h
*/
//------------------------------------------------------------------------------
//
// Unit Tests
//
//------------------------------------------------------------------------------
/** Config: BEAST_NO_UNIT_TEST_INLINE
Prevents unit test definitions from being inserted into a global table.
The default is to include inline unit test definitions.
*/
#ifndef BEAST_NO_UNIT_TEST_INLINE
//#define BEAST_NO_UNIT_TEST_INLINE 1
#endif
//------------------------------------------------------------------------------
//
// Diagnostics
//
//------------------------------------------------------------------------------
/** Config: BEAST_FORCE_DEBUG
Normally, BEAST_DEBUG is set to 1 or 0 based on compiler and project
settings, but if you define this value, you can override this to force it
to be true or false.
*/
#ifndef BEAST_FORCE_DEBUG
//#define BEAST_FORCE_DEBUG 1
#endif
/** Config: BEAST_CHECK_MEMORY_LEAKS
Enables a memory-leak check for certain objects when the app terminates.
*/
#ifndef BEAST_CHECK_MEMORY_LEAKS
//#define BEAST_CHECK_MEMORY_LEAKS 0
#endif
//------------------------------------------------------------------------------
//
// Libraries
//
//------------------------------------------------------------------------------
/** Config: BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES
In a Visual C++ build, this can be used to stop the required system libs
being automatically added to the link stage.
*/
#ifndef BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES
//#define BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES 1
#endif
/** Config: BEAST_INCLUDE_ZLIB_CODE
This can be used to disable Beast's embedded 3rd-party zlib code.
You might need to tweak this if you're linking to an external zlib library
in your app, but for normal apps, this option should be left alone.
If you disable this, you might also want to set a value for
BEAST_ZLIB_INCLUDE_PATH, to specify the path where your zlib headers live.
*/
#ifndef BEAST_INCLUDE_ZLIB_CODE
//#define BEAST_INCLUDE_ZLIB_CODE 1
#endif
/** Config: BEAST_ZLIB_INCLUDE_PATH
This is included when BEAST_INCLUDE_ZLIB_CODE is set to zero.
*/
#ifndef BEAST_ZLIB_INCLUDE_PATH
#define BEAST_ZLIB_INCLUDE_PATH <zlib.h>
#endif
/** Config: BEAST_SQLITE_FORCE_NDEBUG
Setting this option forces sqlite into release mode even if NDEBUG is not set
*/
#ifndef BEAST_SQLITE_FORCE_NDEBUG
#define BEAST_SQLITE_FORCE_NDEBUG 1
#endif
//------------------------------------------------------------------------------
//
// Ripple
//
//------------------------------------------------------------------------------
/** Config: RIPPLE_VERIFY_NODEOBJECT_KEYS
This verifies that the hash of node objects matches the payload.
It is quite expensive so normally this is turned off!
*/
#ifndef RIPPLE_VERIFY_NODEOBJECT_KEYS
//#define RIPPLE_VERIFY_NODEOBJECT_KEYS 1
#endif
/** Config: RIPPLE_DUMP_LEAKS_ON_EXIT
Displays heap blocks and counted objects which were not disposed of
during exit.
*/
#ifndef RIPPLE_DUMP_LEAKS_ON_EXIT
#define RIPPLE_DUMP_LEAKS_ON_EXIT 1
#endif
//------------------------------------------------------------------------------
// These control whether or not certain functionality gets
// compiled into the resulting rippled executable
/** Config: RIPPLE_ROCKSDB_AVAILABLE
Controls whether or not the RocksDB database back-end is compiled into
rippled. RocksDB requires a relatively modern C++ compiler (tested with
gcc versions 4.8.1 and later) that supports some C++11 features.
*/
#ifndef RIPPLE_ROCKSDB_AVAILABLE
//#define RIPPLE_ROCKSDB_AVAILABLE 0
#endif
//------------------------------------------------------------------------------
// Here temporarily to turn off new Validations code while it
// is being written.
//
#ifndef RIPPLE_USE_VALIDATORS
#define RIPPLE_USE_VALIDATORS 0
#endif
/** Config: RIPPLE_SINGLE_IO_SERVICE_THREAD
When set, restricts the number of threads calling io_service::run to one.
This is useful when debugging.
*/
#ifndef RIPPLE_SINGLE_IO_SERVICE_THREAD
#define RIPPLE_SINGLE_IO_SERVICE_THREAD 0
#endif
// Uses OpenSSL instead of alternatives
#ifndef RIPPLE_USE_OPENSSL
#define RIPPLE_USE_OPENSSL 1
#endif
#endif
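
Each option in the removed BeastConfig.h follows the same `#ifndef`-guard pattern: the header only supplies a default, so a build can override any setting from the compiler command line. The translation unit below is a minimal illustrative sketch of that pattern (it is not part of the removed file, and the printed messages are invented for the example):

```C++
// Illustrative only: shows how a BeastConfig.h-style option is
// defaulted by the header and overridden from the build, e.g.
//   c++ -DBEAST_CHECK_MEMORY_LEAKS=0 example.cpp
#include <iostream>

#ifndef BEAST_CHECK_MEMORY_LEAKS
#define BEAST_CHECK_MEMORY_LEAKS 1 // default used when the build does not override it
#endif

int main()
{
#if BEAST_CHECK_MEMORY_LEAKS
    std::cout << "leak checking enabled\n";
#else
    std::cout << "leak checking disabled\n";
#endif
}
```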

View File

@@ -16,15 +16,23 @@ Source folders:
| Folder | Upstream Repo | Description |
|:----------------|:---------------------------------------------|:------------|
| `beast` | https://github.com/vinniefalco/Beast | Cross-platform library for WebSocket and HTTP built on [Boost.Asio](https://think-async.com/Asio) |
| `beast` | N/A | Legacy utility code that was formerly associated with boost::beast |
| `ed25519-donna` | https://github.com/floodyberry/ed25519-donna | [Ed25519](http://ed25519.cr.yp.to/) digital signatures |
| `lz4` | https://github.com/lz4/lz4 | LZ4 lossless compression algorithm |
| `nudb` | https://github.com/vinniefalco/NuDB | Constant-time insert-only key/value database for SSD drives (Less memory usage than RocksDB.) |
| `protobuf` | https://github.com/google/protobuf | Protocol buffer data interchange format. Ripple has changed some names in order to support the unity-style of build (a single .cpp added to the project, instead of linking to a separately built static library). |
| `ripple` | N/A | **Core source code for `rippled`** |
| `rocksdb2` | https://github.com/facebook/rocksdb | Fast key/value database. (Supports rotational disks better than NuDB.) |
| `secp256k1` | https://github.com/bitcoin-core/secp256k1 | ECDSA digital signatures using the **secp256k1** curve |
| `snappy` | https://github.com/google/snappy | "Snappy" lossless compression algorithm. (Technically, the source is in `snappy/snappy`, while `snappy/` also has config options that aren't part of the upstream repository.) |
| `soci` | https://github.com/SOCI/soci | Abstraction layer for database access. |
| `sqlite` | https://www.sqlite.org/src | An embedded database engine that writes to simple files. (Technically not a subtree, just a direct copy of the [SQLite source distribution](http://sqlite.org/download.html).) |
| `test` | N/A | **Unit tests for `rippled`** |
The following dependencies are downloaded and built using ExternalProject
(or FetchContent, where possible). Refer to the CMakeLists.txt file for
details about how these sources are built:
| Name | Upstream Repo | Description |
|:----------------|:---------------------------------------------|:------------|
| `lz4` | https://github.com/lz4/lz4 | LZ4 lossless compression algorithm |
| `nudb` | https://github.com/vinniefalco/NuDB | Constant-time insert-only key/value database for SSD drives (Less memory usage than RocksDB.) |
| `snappy` | https://github.com/google/snappy | "Snappy" lossless compression algorithm. |
| `soci` | https://github.com/SOCI/soci | Abstraction layer for database access. |
| `sqlite` | https://www.sqlite.org/src | An embedded database engine that writes to simple files. |

View File

@@ -1,12 +0,0 @@
# Set default behaviour, in case users don't have core.autocrlf set.
* text=auto
# Github
*.md text eol=lf
# Visual Studio
*.sln text eol=crlf
*.vcproj text eol=crlf
*.vcxproj text eol=crlf
*.props text eol=crlf
*.filters text eol=crlf

View File

@@ -1,23 +0,0 @@
PLEASE DON'T FORGET TO "STAR" THIS REPOSITORY :)
When reporting a bug please include the following:
### Version of Beast
You can find the version number in <beast/version.hpp>
or using the command "git log -1".
### Steps necessary to reproduce the problem
A small compiling program is the best. If your code is
public, you can provide a link to the repository.
### All relevant compiler information
If you are unable to compile please include the type and
version of compiler you are using as well as all compiler
output including the error message, file, and line numbers
involved.
The more information you provide the sooner your issue
can get resolved!

View File

@@ -1,7 +0,0 @@
bin/
bin64/
# Because of CMake and VS2017
Win32/
x64/

View File

@@ -1,130 +0,0 @@
sudo: false
language: cpp
env:
global:
- LLVM_VERSION=3.8.0
# Maintenance note: to move to a new version
# of boost, update both BOOST_ROOT and BOOST_URL.
# Note that for simplicity, BOOST_ROOT's final
# namepart must match the folder name internal
# to boost's .tar.gz.
- LCOV_ROOT=$HOME/lcov
- VALGRIND_ROOT=$HOME/valgrind-install
- BOOST_ROOT=$HOME/boost_1_58_0
- BOOST_URL='http://sourceforge.net/projects/boost/files/boost/1.58.0/boost_1_58_0.tar.gz'
addons:
apt:
sources: &base_sources
- ubuntu-toolchain-r-test
packages: &base_packages
- python-software-properties
- libffi-dev
- libstdc++6
- binutils-gold
# Provides a backtrace if the unittests crash
- gdb
# Needed for installing valgrind
- subversion
- automake
- autotools-dev
- libc6-dbg
matrix:
include:
# gcc coverage
- compiler: gcc
env:
- GCC_VER=6
- VARIANT=coverage
- ADDRESS_MODEL=64
- DO_VALGRIND=false
- BUILD_SYSTEM=cmake
- PATH=$PWD/cmake/bin:$PATH
addons:
apt:
packages:
- gcc-6
- g++-6
- libssl-dev
- *base_packages
sources:
- *base_sources
# older GCC, release
- compiler: gcc
env:
- GCC_VER=4.8
- VARIANT=release
- DO_VALGRIND=false
- ADDRESS_MODEL=64
addons:
apt:
packages:
- gcc-4.8
- g++-4.8
- *base_packages
sources:
- *base_sources
# later GCC
- compiler: gcc
env:
- GCC_VER=5
- VARIANT=release
- DO_VALGRIND=true
- ADDRESS_MODEL=64
- BUILD_SYSTEM=cmake
- PATH=$PWD/cmake/bin:$PATH
addons:
apt:
packages:
- gcc-5
- g++-5
- libssl-dev
- *base_packages
sources:
- *base_sources
# clang ubsan+asan
- compiler: clang
env:
- GCC_VER=5
- VARIANT=ubasan
- CLANG_VER=3.8
- DO_VALGRIND=false
- ADDRESS_MODEL=64
- UBSAN_OPTIONS='print_stacktrace=1'
- BUILD_SYSTEM=cmake
- PATH=$PWD/cmake/bin:$PATH
- PATH=$PWD/llvm-$LLVM_VERSION/bin:$PATH
addons:
apt:
packages:
- gcc-5
- g++-5
- libssl-dev
- *base_packages
sources:
- *base_sources
cache:
directories:
- $BOOST_ROOT
- $VALGRIND_ROOT
- llvm-$LLVM_VERSION
- cmake
before_install: &base_before_install
- scripts/install-dependencies.sh
script:
- travis_retry scripts/build-and-test.sh
after_script:
- cat nohup.out || echo "nohup.out already deleted"
notifications:
email:
false

File diff suppressed because it is too large.

View File

@@ -1,194 +0,0 @@
# Part of Beast
cmake_minimum_required (VERSION 3.5.2)
project (Beast VERSION 79)
set_property (GLOBAL PROPERTY USE_FOLDERS ON)
option (Beast_BUILD_EXAMPLES "Build examples" ON)
option (Beast_BUILD_TESTS "Build tests" ON)
if (MSVC)
set (CMAKE_VERBOSE_MAKEFILE FALSE)
add_definitions (-D_WIN32_WINNT=0x0601)
add_definitions (-D_SCL_SECURE_NO_WARNINGS=1)
add_definitions (-D_CRT_SECURE_NO_WARNINGS=1)
set (Boost_USE_STATIC_LIBS ON)
set (Boost_USE_STATIC_RUNTIME ON)
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /MP /W4 /bigobj /permissive-")
set (CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /MTd")
set (CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /Ob2 /Oi /Ot /GL /MT")
set (CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} /Oi /Ot /MT")
set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /SAFESEH:NO")
set (CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} /LTCG")
# for RelWithDebInfo builds, disable incremental linking
# since CMake sets it ON by default for that build type and it
# causes warnings
#
string (REPLACE "/INCREMENTAL" "/INCREMENTAL:NO" replacement_flags
${CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO})
set (CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO ${replacement_flags})
else()
set (THREADS_PREFER_PTHREAD_FLAG ON)
find_package (Threads)
set( CMAKE_CXX_FLAGS
"${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Wextra -Wpedantic -Wno-unused-parameter")
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wrange-loop-analysis")
endif ()
endif()
#-------------------------------------------------------------------------------
#
# Boost
#
option (Boost_USE_STATIC_LIBS "Use static libraries for boost" ON)
set(BOOST_COMPONENTS system)
if (Beast_BUILD_EXAMPLES OR Beast_BUILD_TESTS)
list(APPEND BOOST_COMPONENTS coroutine context filesystem program_options thread)
endif()
find_package (Boost 1.58.0 REQUIRED COMPONENTS ${BOOST_COMPONENTS})
link_directories(${Boost_LIBRARY_DIRS})
if (MINGW)
link_libraries(ws2_32 mswsock)
endif()
#-------------------------------------------------------------------------------
#
# OpenSSL
#
if (APPLE AND NOT DEFINED ENV{OPENSSL_ROOT_DIR})
find_program(HOMEBREW brew)
if (NOT HOMEBREW STREQUAL "HOMEBREW-NOTFOUND")
execute_process(COMMAND brew --prefix openssl
OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
endif()
endif()
find_package(OpenSSL)
if (OPENSSL_FOUND)
add_definitions (-DBEAST_USE_OPENSSL=1)
else()
add_definitions (-DBEAST_USE_OPENSSL=0)
message("OpenSSL not found.")
endif()
#
#-------------------------------------------------------------------------------
function(DoGroupSources curdir rootdir folder)
file (GLOB children RELATIVE ${PROJECT_SOURCE_DIR}/${curdir} ${PROJECT_SOURCE_DIR}/${curdir}/*)
foreach (child ${children})
if (IS_DIRECTORY ${PROJECT_SOURCE_DIR}/${curdir}/${child})
DoGroupSources(${curdir}/${child} ${rootdir} ${folder})
elseif (${child} STREQUAL "CMakeLists.txt")
source_group("" FILES ${PROJECT_SOURCE_DIR}/${curdir}/${child})
else()
string(REGEX REPLACE ^${rootdir} ${folder} groupname ${curdir})
string(REPLACE "/" "\\" groupname ${groupname})
source_group(${groupname} FILES ${PROJECT_SOURCE_DIR}/${curdir}/${child})
endif()
endforeach()
endfunction()
function(GroupSources curdir folder)
DoGroupSources (${curdir} ${curdir} ${folder})
endfunction()
#-------------------------------------------------------------------------------
if ("${VARIANT}" STREQUAL "coverage")
if (MSVC)
else()
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -msse4.2 -fprofile-arcs -ftest-coverage")
set (CMAKE_BUILD_TYPE RELWITHDEBINFO)
set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -lgcov")
endif()
elseif ("${VARIANT}" STREQUAL "ubasan")
if (MSVC)
else()
set (CMAKE_CXX_FLAGS
"${CMAKE_CXX_FLAGS} -DBEAST_NO_SLOW_TESTS=1 -msse4.2 -funsigned-char -fno-omit-frame-pointer -fsanitize=address,undefined -fsanitize-blacklist=${PROJECT_SOURCE_DIR}/scripts/blacklist.supp")
set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=address,undefined")
set (CMAKE_BUILD_TYPE RELWITHDEBINFO)
endif()
elseif ("${VARIANT}" STREQUAL "debug")
set (CMAKE_BUILD_TYPE DEBUG)
elseif ("${VARIANT}" STREQUAL "release")
set (CMAKE_BUILD_TYPE RELEASE)
endif()
#-------------------------------------------------------------------------------
#
# Library interface
#
add_library (${PROJECT_NAME} INTERFACE)
target_link_libraries (${PROJECT_NAME} INTERFACE ${Boost_SYSTEM_LIBRARY})
if (NOT MSVC)
target_link_libraries (${PROJECT_NAME} INTERFACE Threads::Threads)
endif()
target_compile_definitions (${PROJECT_NAME} INTERFACE BOOST_COROUTINES_NO_DEPRECATION_WARNING=1)
target_include_directories(${PROJECT_NAME} INTERFACE ${PROJECT_SOURCE_DIR}/include)
target_include_directories(${PROJECT_NAME} SYSTEM INTERFACE ${Boost_INCLUDE_DIRS})
#-------------------------------------------------------------------------------
#
# Tests and examples
#
include_directories (.)
include_directories (extras)
include_directories (include)
if (OPENSSL_FOUND)
include_directories (${OPENSSL_INCLUDE_DIR})
endif()
file(GLOB_RECURSE BEAST_INCLUDES
${PROJECT_SOURCE_DIR}/include/beast/*.hpp
${PROJECT_SOURCE_DIR}/include/beast/*.ipp
)
file(GLOB_RECURSE COMMON_INCLUDES
${PROJECT_SOURCE_DIR}/example/common/*.hpp
)
file(GLOB_RECURSE EXAMPLE_INCLUDES
${PROJECT_SOURCE_DIR}/example/*.hpp
)
file(GLOB_RECURSE EXTRAS_INCLUDES
${PROJECT_SOURCE_DIR}/extras/beast/*.hpp
${PROJECT_SOURCE_DIR}/extras/beast/*.ipp
)
if (Beast_BUILD_TESTS)
add_subdirectory (test)
endif()
if (Beast_BUILD_EXAMPLES AND
(NOT "${VARIANT}" STREQUAL "coverage") AND
(NOT "${VARIANT}" STREQUAL "ubasan"))
add_subdirectory (example)
endif()

View File

@@ -1,115 +0,0 @@
#
# Copyright (c) 2013-2016 Vinnie Falco (vinnie dot falco at gmail dot com)
#
# Distributed under the Boost Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#
import os ;
import feature ;
import boost ;
import modules ;
import testing ;
boost.use-project ;
if [ os.name ] = SOLARIS
{
lib socket ;
lib nsl ;
}
else if [ os.name ] = NT
{
lib ws2_32 ;
lib mswsock ;
}
else if [ os.name ] = HPUX
{
lib ipv6 ;
}
else if [ os.name ] = QNXNTO
{
lib socket ;
}
else if [ os.name ] = HAIKU
{
lib network ;
}
if [ os.name ] = NT
{
lib ssl : : <name>ssleay32 ;
lib crypto : : <name>libeay32 ;
}
else
{
lib ssl ;
lib crypto ;
}
if [ os.name ] = MACOSX
{
using clang : : ;
}
variant coverage :
release
:
<cxxflags>"-msse4.2 -fprofile-arcs -ftest-coverage"
<linkflags>"-lgcov"
;
variant ubasan
:
release
:
<cxxflags>"-msse4.2 -funsigned-char -fno-omit-frame-pointer -fsanitize=address,undefined -fsanitize-blacklist=scripts/blacklist.supp"
<linkflags>"-fsanitize=address,undefined"
;
project beast
: requirements
<implicit-dependency>/boost//headers
<include>.
<include>./extras
<include>./include
#<use>/boost//headers
<library>/boost/system//boost_system
<library>/boost/coroutine//boost_coroutine
<library>/boost/filesystem//boost_filesystem
<library>/boost/program_options//boost_program_options
<define>BOOST_ALL_NO_LIB=1
<define>BOOST_COROUTINES_NO_DEPRECATION_WARNING=1
<threading>multi
<runtime-link>shared
<debug-symbols>on
<toolset>gcc:<cxxflags>-std=c++11
<toolset>gcc:<cxxflags>-Wno-unused-parameter
<toolset>gcc:<cxxflags>-Wno-unused-variable # Temporary until we can figure out -isystem
<toolset>clang:<cxxflags>-std=c++11
<toolset>clang:<cxxflags>-Wno-unused-parameter
<toolset>clang:<cxxflags>-Wno-unused-variable # Temporary until we can figure out -isystem
<toolset>clang:<cxxflags>-Wrange-loop-analysis
<toolset>msvc:<define>_SCL_SECURE_NO_WARNINGS=1
<toolset>msvc:<define>_CRT_SECURE_NO_WARNINGS=1
<toolset>msvc:<cxxflags>"/permissive- /bigobj"
<toolset>msvc:<variant>release:<cxxflags>"/Ob2 /Oi /Ot"
<os>LINUX:<define>_XOPEN_SOURCE=600
<os>LINUX:<define>_GNU_SOURCE=1
<os>SOLARIS:<define>_XOPEN_SOURCE=500
<os>SOLARIS:<define>__EXTENSIONS__
<os>SOLARIS:<library>socket
<os>SOLARIS:<library>nsl
<os>NT:<define>_WIN32_WINNT=0x0601
<os>NT,<toolset>cw:<library>ws2_32
<os>NT,<toolset>cw:<library>mswsock
<os>NT,<toolset>gcc:<library>ws2_32
<os>NT,<toolset>gcc:<library>mswsock
<os>NT,<toolset>gcc-cygwin:<define>__USE_W32_SOCKETS
: usage-requirements
:
build-dir bin
;
build-project test ;
build-project example ;

View File

@@ -1,23 +0,0 @@
Boost Software License - Version 1.0 - August 17th, 2003
Permission is hereby granted, free of charge, to any person or organization
obtaining a copy of the software and accompanying documentation covered by
this license (the "Software") to use, reproduce, display, distribute,
execute, and transmit the Software, and to prepare derivative works of the
Software, and to permit third-parties to whom the Software is furnished to
do so, all subject to the following:
The copyright notices in the Software and this entire statement, including
the above license grant, this restriction and the following disclaimer,
must be included in all copies of the Software, in whole or in part, and
all derivative works of the Software, unless such copies or derivative
works are solely in the form of machine-executable object code generated by
a source language processor.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

View File

@@ -1,206 +0,0 @@
<img width="880" height = "80" alt = "Beast"
src="https://raw.githubusercontent.com/vinniefalco/Beast/master/doc/images/readme.png">
# HTTP and WebSocket built on Boost.Asio in C++11
Branch | Build | Coverage | Documentation
------------|---------------|----------------|---------------
[master](https://github.com/vinniefalco/Beast/tree/master) | [![Build Status](https://travis-ci.org/vinniefalco/Beast.svg?branch=master)](https://travis-ci.org/vinniefalco/Beast) [![Build status](https://ci.appveyor.com/api/projects/status/g0llpbvhpjuxjnlw/branch/master?svg=true)](https://ci.appveyor.com/project/vinniefalco/beast/branch/master) | [![codecov](https://codecov.io/gh/vinniefalco/Beast/branch/master/graph/badge.svg)](https://codecov.io/gh/vinniefalco/Beast/branch/master) | [![Documentation](https://img.shields.io/badge/documentation-master-brightgreen.svg)](http://vinniefalco.github.io/beast/)
[develop](https://github.com/vinniefalco/Beast/tree/develop) | [![Build Status](https://travis-ci.org/vinniefalco/Beast.svg?branch=develop)](https://travis-ci.org/vinniefalco/Beast) [![Build status](https://ci.appveyor.com/api/projects/status/g0llpbvhpjuxjnlw/branch/develop?svg=true)](https://ci.appveyor.com/project/vinniefalco/beast/branch/develop) | [![codecov](https://codecov.io/gh/vinniefalco/Beast/branch/develop/graph/badge.svg)](https://codecov.io/gh/vinniefalco/Beast/branch/develop) | [![Documentation](https://img.shields.io/badge/documentation-develop-brightgreen.svg)](http://vinniefalco.github.io/stage/beast/develop)
## Contents
- [Introduction](#introduction)
- [Appearances](#appearances)
- [Description](#description)
- [Requirements](#requirements)
- [Building](#building)
- [Usage](#usage)
- [License](#license)
- [Contact](#contact)
- [Contributing](#Contributing)
## Introduction
Beast is a C++ header-only library serving as a foundation for writing
interoperable networking libraries by providing **low-level HTTP/1,
WebSocket, and networking protocol** vocabulary types and algorithms
using the consistent asynchronous model of Boost.Asio.
This library is designed for:
* **Symmetry:** Algorithms are role-agnostic; build clients, servers, or both.
* **Ease of Use:** Boost.Asio users will immediately understand Beast.
* **Flexibility:** Users make the important decisions such as buffer or
thread management.
* **Performance:** Build applications handling thousands of connections or more.
* **Basis for Further Abstraction.** Components are well-suited for building upon.
## Appearances
| <a href="http://cppcast.com/2017/01/vinnie-falco/">CppCast 2017</a> | <a href="https://raw.githubusercontent.com/vinniefalco/Beast/master/doc/images/CppCon2016.pdf">CppCon 2016</a> |
| ------------ | ----------- |
| <a href="http://cppcast.com/2017/01/vinnie-falco/"><img width="180" height="180" alt="Vinnie Falco" src="https://avatars1.githubusercontent.com/u/1503976?v=3&u=76c56d989ef4c09625256662eca2775df78a16ad&s=180"></a> | <a href="https://www.youtube.com/watch?v=uJZgRcvPFwI"><img width="320" height = "180" alt="Beast" src="https://raw.githubusercontent.com/vinniefalco/Beast/master/doc/images/CppCon2016.png"></a> |
## Description
This software is currently in beta: interfaces may change.
For recent changes see the [CHANGELOG](CHANGELOG.md).
The library has been submitted to the
[Boost Library Incubator](http://rrsd.com/blincubator.com/bi_library/beast-2/?gform_post_id=1579)
* [Project Site](http://vinniefalco.github.io/)
* [Repository](https://github.com/vinniefalco/Beast)
* [Project Documentation](http://vinniefalco.github.io/beast/)
* [Autobahn.testsuite results](http://vinniefalco.github.io/autobahn/index.html)
## Requirements
This library is for programmers familiar with Boost.Asio. Users
who wish to use asynchronous interfaces should already know how to
create concurrent network programs using callbacks or coroutines.
* **C++11:** Robust support for most language features.
* **Boost:** Boost.Asio and some other parts of Boost.
* **OpenSSL:** Optional, for using TLS/Secure sockets.
When using Microsoft Visual C++, Visual Studio 2015 Update 3 or later is required.
These components are required in order to build the tests and examples:
* CMake 3.7.2 or later
* Properly configured bjam/b2
## Building
Beast is header-only so there are no libraries to build or link with.
To use Beast in your project, simply copy the Beast sources to your
project's source tree (alternatively, bring Beast into your Git repository
using the `git subtree` or `git submodule` commands). Then, edit your
build scripts to add the `include/` directory to the list of paths checked
by the C++ compiler when searching for includes. Beast `#include` lines
will look like this:
```C++
#include <beast/http.hpp>
#include <beast/websocket.hpp>
```
To link your program successfully, you'll need to add the Boost.System
library to link with. If you use coroutines you'll also need the
Boost.Coroutine library. Please visit the Boost documentation for
instructions on how to do this for your particular build system.
For the examples and tests, Beast provides build scripts for Boost.Build (bjam)
and CMake. It is possible to generate Microsoft Visual Studio or Apple
Xcode project files using CMake by executing these commands from
the root of the repository:
```
mkdir bin
cd bin
cmake .. # for 32-bit Windows builds
cmake -G Xcode .. # for Apple Xcode builds
cd ..
mkdir bin64
cd bin64
cmake -G"Visual Studio 14 2015 Win64" .. # for 64-bit Windows builds (VS2015)
cmake -G"Visual Studio 15 2017 Win64" .. # for 64-bit Windows builds (VS2017)
```
To build with Boost.Build, it is necessary to have the bjam executable
in your path. And bjam needs to know how to find the Boost sources. The
easiest way to do this is make sure that the version of bjam in your path
is the one at the root of the Boost source tree, which is built when
running `bootstrap.sh` (or `bootstrap.bat` on Windows).
Once bjam is in your path, simply run bjam in the root of the Beast
repository to automatically build the required Boost libraries if they
are not already built, build the examples, then build and run the unit
tests.
The files in the repository are laid out thusly:
```
./
bin/ Create this to hold executables and project files
bin64/ Create this to hold 64-bit Windows executables and project files
doc/ Source code and scripts for the documentation
include/ Add this to your compiler includes
beast/
extras/ Additional APIs, may change
example/ Self contained example programs
test/ Unit tests and benchmarks
```
## Usage
These examples are complete, self-contained programs that you can build
and run yourself (they are in the `example` directory).
http://vinniefalco.github.io/beast/beast/quick_start.html
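For orientation alongside the linked quick start, here is a minimal synchronous HTTP GET sketch. It is written against the present-day Boost.Beast API (`boost::beast` / `boost::asio`), which is an assumption on my part; the standalone Beast snapshot removed in this diff exposed a similar but not identical interface (for example `<beast/http.hpp>` rather than `<boost/beast/http.hpp>`), and the host name is a placeholder:

```C++
// Minimal synchronous HTTP GET using the modern Boost.Beast API.
// Illustrative sketch only; not the exact API of the removed snapshot.
#include <boost/asio/connect.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <iostream>

int main()
{
    namespace beast = boost::beast;
    namespace http  = beast::http;
    namespace net   = boost::asio;
    using tcp = net::ip::tcp;

    net::io_context ioc;
    tcp::resolver resolver{ioc};
    beast::tcp_stream stream{ioc};

    // Resolve and connect to a placeholder host.
    stream.connect(resolver.resolve("example.com", "80"));

    // Build and send the request.
    http::request<http::empty_body> req{http::verb::get, "/", 11};
    req.set(http::field::host, "example.com");
    req.set(http::field::user_agent, "beast-readme-example");
    http::write(stream, req);

    // Read and print the response.
    beast::flat_buffer buffer;
    http::response<http::string_body> res;
    http::read(stream, buffer, res);
    std::cout << res << "\n";

    // Gracefully close the socket; ignore shutdown errors.
    beast::error_code ec;
    stream.socket().shutdown(tcp::socket::shutdown_both, ec);
}
```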
## License
Distributed under the Boost Software License, Version 1.0.
(See accompanying file [LICENSE_1_0.txt](LICENSE_1_0.txt) or copy at
http://www.boost.org/LICENSE_1_0.txt)
## Contact
Please report issues or questions here:
https://github.com/vinniefalco/Beast/issues
---
## Contributing (We Need Your Help!)
If you would like to contribute to Beast and help us maintain high
quality, consider performing code reviews on active pull requests.
Any feedback from users and stakeholders, even simple questions about
how things work or why they were done a certain way, carries value
and can be used to improve the library. Code review provides these
benefits:
* Identify bugs
* Documentation proof-reading
* Adjust interfaces to suit use-cases
* Simplify code
You can look through the Closed pull requests to get an idea of how
reviews are performed. To give a code review just sign in with your
GitHub account and then add comments to any open pull requests below,
don't be shy!
<p>https://github.com/vinniefalco/Beast/pulls</p>
Here are some resources to learn more about
code reviews:
* <a href="https://blog.scottnonnenberg.com/top-ten-pull-request-review-mistakes/">Top 10 Pull Request Review Mistakes</a>
* <a href="https://smartbear.com/SmartBear/media/pdfs/best-kept-secrets-of-peer-code-review.pdf">Best Kept Secrets of Peer Code Review (pdf)</a>
* <a href="http://support.smartbear.com/support/media/resources/cc/11_Best_Practices_for_Peer_Code_Review.pdf">11 Best Practices for Peer Code Review (pdf)</a>
* <a href="http://www.evoketechnologies.com/blog/code-review-checklist-perform-effective-code-reviews/">Code Review Checklist To Perform Effective Code Reviews</a>
* <a href="https://www.codeproject.com/Articles/524235/Codeplusreviewplusguidelines">Code review guidelines</a>
* <a href="https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md">C++ Core Guidelines</a>
* <a href="https://doc.lagout.org/programmation/C/CPP101.pdf">C++ Coding Standards (Sutter & Andrescu)</a>
Beast thrives on code reviews and any sort of feedback from users and
stakeholders about its interfaces. Even if you just have questions,
asking them in the code review or in issues provides valuable information
that can be used to improve the library - do not hesitate, no question
is insignificant or unimportant!
While code reviews are the preferred form of donation, if you simply
must donate money to support the library, please do so
using <a href="https://bitcoin.org">Bitcoin</a> sent to this address:
<a href="bitcoin:1DaPsDvv6MjFUSnsxXSHzeYKSjzrWrQY7T?amount=0.03&label=Beast%20Library"><b>1DaPsDvv6MjFUSnsxXSHzeYKSjzrWrQY7T</b></a>
<a href="bitcoin:1DaPsDvv6MjFUSnsxXSHzeYKSjzrWrQY7T?amount=0.03&label=Beast%20Library">
<img src="https://raw.githubusercontent.com/vinniefalco/Beast/master/doc/images/btc_qr2.png" width="490" height="100"></a>

View File

@@ -1,102 +0,0 @@
# Copyright 2016 Peter Dimov
# Distributed under the Boost Software License, Version 1.0.
# (See accompanying file LICENSE_1_0.txt or copy at http://boost.org/LICENSE_1_0.txt)
#version: 1.0.{build}-{branch}
version: "{branch} (#{build})"
shallow_clone: true
platform:
#- x86
- x64
configuration:
#- Debug
- Release
install:
- cd ..
- git clone https://github.com/boostorg/boost.git boost
- cd boost
# - git checkout boost-1.64.0
- xcopy /s /e /q %APPVEYOR_BUILD_FOLDER% libs\beast\
- git submodule update --init tools/build
- git submodule update --init libs/config
- git submodule update --init tools/boostdep
# - python tools/boostdep/depinst/depinst.py beast
- git submodule update --init libs/any
- git submodule update --init libs/asio
- git submodule update --init libs/algorithm
- git submodule update --init libs/array
- git submodule update --init libs/assert
- git submodule update --init libs/atomic
- git submodule update --init libs/bind
- git submodule update --init libs/chrono
- git submodule update --init libs/concept_check
- git submodule update --init libs/config
- git submodule update --init libs/container
- git submodule update --init libs/context
- git submodule update --init libs/conversion
- git submodule update --init libs/core
- git submodule update --init libs/coroutine
- git submodule update --init libs/date_time
- git submodule update --init libs/detail
- git submodule update --init libs/endian
- git submodule update --init libs/exception
- git submodule update --init libs/filesystem
- git submodule update --init libs/foreach
- git submodule update --init libs/function
- git submodule update --init libs/function_types
- git submodule update --init libs/functional
- git submodule update --init libs/fusion
- git submodule update --init libs/integer
- git submodule update --init libs/intrusive
- git submodule update --init libs/io
- git submodule update --init libs/iostreams
- git submodule update --init libs/iterator
- git submodule update --init libs/lambda
- git submodule update --init libs/lexical_cast
- git submodule update --init libs/locale
- git submodule update --init libs/logic
- git submodule update --init libs/math
- git submodule update --init libs/move
- git submodule update --init libs/mpl
- git submodule update --init libs/numeric/conversion
- git submodule update --init libs/optional
# - git submodule update --init libs/phoenix
- git submodule update --init libs/pool
- git submodule update --init libs/predef
- git submodule update --init libs/preprocessor
- git submodule update --init libs/program_options
- git submodule update --init libs/proto
- git submodule update --init libs/random
- git submodule update --init libs/range
- git submodule update --init libs/ratio
- git submodule update --init libs/rational
- git submodule update --init libs/regex
- git submodule update --init libs/serialization
- git submodule update --init libs/smart_ptr
# - git submodule update --init libs/spirit
- git submodule update --init libs/static_assert
- git submodule update --init libs/system
- git submodule update --init libs/thread
- git submodule update --init libs/throw_exception
- git submodule update --init libs/tokenizer
- git submodule update --init libs/tti
- git submodule update --init libs/tuple
- git submodule update --init libs/type_index
- git submodule update --init libs/type_traits
- git submodule update --init libs/typeof
- git submodule update --init libs/unordered
- git submodule update --init libs/utility
- git submodule update --init libs/variant
- git submodule update --init libs/winapi
- bootstrap
- b2 headers
build: off
test_script:
- b2 libs/beast/example toolset=msvc-14.0
- b2 libs/beast/test toolset=msvc-14.0

View File

@@ -1,4 +0,0 @@
html
temp
reference.qbk
out.txt

Some files were not shown because too many files have changed in this diff.