Compare commits


188 Commits
1.2.0 ... 1.4.0

Author SHA1 Message Date
Nik Bougalis
06c371544a Set version to 1.4.0 2019-11-26 10:35:54 -08:00
Nik Bougalis
ffa0f048f3 Set version to 1.4.0-rc2 2019-11-25 21:51:54 -08:00
Mike Ellery
22ca0041b8 Use signed artifacts in pkg pipeline 2019-11-25 21:51:12 -08:00
Manoj doshi
232975bfdb Set version to 1.4.0-rc1 2019-11-04 11:45:31 -08:00
Howard Hinnant
4e84868747 Document the minimum GCC version as 7 2019-11-04 11:39:19 -08:00
John Northrup
9e1ccb900e GPG Sign DEB and RPM packages generated by build pipeline (#3144)
* adding package signing steps for rpm and deb

* first spike at GPG signing with CI and containers

* refine ubuntu portion

* get correct gpg package version

* adding CentOS support

* fixing errors in installing gpg on ubuntu

* base64 decode the GPG key

* fixing line continuations

* revised package signing, looking for package artifacts

* add dpkg-sig to ubuntu image

* sign all deb packages

* add passphrase to GPG process

* repeat yourself on dpkg

* sign all the rpm packages too

* install rpm-sign in the CentOS docker image

* loop through rpm files

* no need for PIN on GPG signing
2019-11-03 12:08:40 -06:00
Manoj doshi
98d55b988f Set version to 1.4.0-b8 2019-10-30 12:24:25 -07:00
Mike Ellery
e720a66076 Ensure permissions for initial config 2019-10-30 12:23:58 -07:00
Joseph Busch
7be7343e05 Make the HTTP response size const 2019-10-30 12:23:58 -07:00
seelabs
e26dd7bdfe Reject overlong encodings earlier 2019-10-30 12:23:58 -07:00
seelabs
906b9ae00b Don't use set in AccountObjects test:
Collecting the returned and expected values in sets only works if there are no
duplicates. The implementation is changed to use sorted vectors to fix this case.
2019-10-30 12:23:58 -07:00
Mo Morsi
15c5f9c111 Report consensus phase changes in the server subscription stream 2019-10-30 12:23:57 -07:00
Howard Hinnant
726dd69ab9 Make Env::AppBundle constructor exception safe
When the Env::AppBundle constructor throws an exception
it still needs to run ~AppBundle(), otherwise the JobQueue
isn't properly shut down.  Specifically the  JobQueue
can destruct without waiting on outstanding jobs in the
queue.

This change ensures that if Env::AppBundle constructor
throws, Env::AppBundle::~AppBundle() runs.

This fixes the unit test crash exposed by PR #3047.
2019-10-30 12:23:57 -07:00
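For illustration, a minimal sketch of the general idiom this commit relies on (hypothetical names; not the rippled implementation): when a constructor throws, the destructor of the partially constructed object never runs, so any cleanup the destructor would normally perform has to be triggered from the constructor itself before rethrowing.

```
#include <stdexcept>

struct JobRunner
{
    void start() {}
    void stop() noexcept {}   // must run even if setup below fails
};

class Bundle
{
    JobRunner runner_;        // hypothetical stand-in for the JobQueue

public:
    Bundle()
    {
        runner_.start();
        try
        {
            riskySetup();     // may throw
        }
        catch (...)
        {
            // ~Bundle() will NOT run for a throwing constructor,
            // so perform the cleanup explicitly before rethrowing.
            runner_.stop();
            throw;
        }
    }

    ~Bundle() { runner_.stop(); }

private:
    static void riskySetup()
    {
        // placeholder for initialization work that may throw
    }
};
```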
Manoj doshi
41b2c80dde Set version to 1.4.0-b7 2019-10-18 16:45:06 -07:00
Mike Ellery
cd01502d17 Relax STTx test failure criterion:
FIXES: #3106

Different versions of protobuf produce subtly different
results when given invalid message payloads. This leads to
subtly different behavior when we try to deserialize these
invalid messages. As such, we can't tie success to a
particular exception.
2019-10-18 16:44:16 -07:00
Mike Ellery
b2317f8b41 Omit downloader resolve test when it won't fail
Fixes: #3108
2019-10-18 16:44:16 -07:00
Nik Bougalis
113167acf4 Allow channel_authorize to use Ed25519 keys 2019-10-18 16:44:16 -07:00
Nik Bougalis
a3a9dc26b4 Introduce support for deletable accounts:
The XRP Ledger utilizes an account model. Unlike systems based on a UTXO
model, XRP Ledger accounts are first-class objects. This design choice
allows the XRP Ledger to offer rich functionality, including the ability
to own objects (offers, escrows, checks, signer lists) as well as other
advanced features, such as key rotation and configurable multi-signing
without needing to change a destination address.

The trade-off is that accounts must be stored on ledger. The XRP Ledger
applies reserve requirements, in XRP, to protect the shared global ledger
from growing excessively large as the result of spam or malicious usage.

Prior to this commit, accounts had been permanent objects; once created,
they could never be deleted.

This commit introduces a new amendment "DeletableAccounts" which, if
enabled, will allow account objects to be deleted by executing the new
"AccountDelete" transaction. Any funds remaining in the account will
be transferred to an account specified in the deletion transaction.

The amendment changes the mechanics of account creation; previously
a new account would have an initial sequence number of 1. Accounts
created after the amendment will have an initial sequence number that
is equal to the sequence number of the ledger in which the account was created.

Accounts can only be deleted if they are not associated with any
obligations (like RippleStates, Escrows, or PayChannels) and if the
current ledger sequence number exceeds the account's sequence number
by at least 256 so that, if recreated, the account can be protected
from transaction replay.
2019-10-18 16:44:16 -07:00
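A rough sketch of the 256-ledger gap rule described above (hypothetical function and parameter names; the real transactor performs additional checks, such as scanning the owner directory for blocking objects):

```
#include <cstdint>

// The current ledger sequence must exceed the account's sequence number
// by at least 256 so that a recreated account cannot replay old
// transactions.
bool sequenceGapSatisfied(std::uint32_t accountSequence,
                          std::uint32_t currentLedgerSequence)
{
    return currentLedgerSequence > accountSequence &&
           currentLedgerSequence - accountSequence >= 256;
}
```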
Joseph Busch
7e7664c29a Add deletion_blockers_only param to account_objects RPC command 2019-10-18 14:18:38 -07:00
Manoj doshi
fccb7e1c70 Set version to 1.4.0-b6 2019-10-15 12:02:12 -07:00
Howard Hinnant
3f45b8c3bd Switch deadlock detector to steady_clock
* Changes to system time should not trigger the deadlock detector.
* Fixes #3101
2019-10-15 12:01:37 -07:00
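For context, a small standalone example of why `std::chrono::steady_clock` suits this kind of watchdog (a sketch, not the detector itself): elapsed time measured against a monotonic clock is unaffected by changes to the system wall-clock time.

```
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;
    auto const start = clock::now();

    std::this_thread::sleep_for(std::chrono::milliseconds(100));

    // steady_clock is monotonic: stepping the system clock backwards or
    // forwards (e.g. NTP adjustments) does not change this measurement,
    // so a deadlock detector keyed off it cannot fire spuriously.
    auto const elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        clock::now() - start);
    std::cout << "elapsed: " << elapsed.count() << " ms\n";
}
```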
seelabs
ca6d5798e9 Support for boost 1.71:
* replace boost::beast::detail::iequals with boost::iequals
* replace deprecated `buffers` function with `make_printable`
* replace boost::beast::detail::ascii_tolower with lambda
* add missing includes
2019-10-15 12:01:37 -07:00
Mike Ellery
2110b24090 Add omitted unit tests, cleanup old files 2019-10-15 12:01:37 -07:00
Mike Ellery
82484e26f5 Add option to enable -Wextra for gcc/clang. 2019-10-15 12:01:37 -07:00
Scott Schurr
ca47583a3b Small bug fixes in BuildLedger.cpp 2019-10-15 12:01:37 -07:00
Joseph Busch
f4d6b0e1c4 Add metrics for PeerImp to track bandwidth usage 2019-10-15 12:01:37 -07:00
Devon White
9196d9541a Include validator's master public key in validation stream:
The validation stream only reported the ephemeral signing key for validators
which use manifests. This made tracking unnecessarily difficult for clients
processing the data stream.

With this change, the validator's long-term master public key is also
included.

This commit fixes #3005

* Provide proposing validator's master key in the validation stream
  subscription JSON responses.

Implement code review changes.

FIXES: #3005
2019-10-15 12:01:37 -07:00
Manoj doshi
b53fda1e1a Set version to 1.4.0-b5 2019-09-27 12:25:21 -07:00
Mike Ellery
9a5911f94c CI: remove boost from macos/homebrew 2019-09-27 12:24:19 -07:00
Mike Ellery
80acc85e59 Fix startup error with --import 2019-09-27 12:24:19 -07:00
Joseph Busch
8ce1c189f7 Fix VS2019 debug build 2019-09-27 12:24:19 -07:00
Howard Hinnant
7228b2e068 Remove SHAMap V2 2019-09-27 12:24:19 -07:00
Yusuf Sahin HAMZA
33ab0cd7bd Automatically update NodeDB path if changed 2019-09-27 12:24:19 -07:00
seelabs
e33ac1d450 Add PayChan to recipient's owner directory 2019-09-27 11:35:22 -07:00
Rome Reginelli
826cbbc3bf Clarify Linux build instructions for configuring
* Explain that Arch/Manjaro/etc. need `-Dstatic=OFF` during the configure step
* move configuration options closer to that step
* separate sub-headers for configuration and build
2019-09-27 11:35:22 -07:00
Mike Ellery
38f82115f7 Set version to 1.4.0-b4 2019-09-09 18:46:06 -07:00
Mike Ellery
d9bb78c8b8 Cleanup usage of ScopedUnlock 2019-09-09 18:45:56 -07:00
Mo Morsi
905f631b71 Modularize cmake build config 2019-09-09 18:45:53 -07:00
Mike Ellery
9213c49ca1 Honor SSL config settings for ValidatorSites:
FIXES: #2990

* refactor common SSL client setup
* enable SSL in unit-test http server
* add tests for SSLHTTPDownloader
* misc test refactoring
2019-09-09 10:55:31 -07:00
Nik Bougalis
fc7ecd672a Set version to 1.4.0-b3 2019-09-07 12:08:47 -07:00
Mark Travis
a6944be5cf Clarify online delete data error message. 2019-09-07 11:44:00 -07:00
Mark Travis
e5b61c9ac9 Update operating mode upon network disagreement. 2019-09-07 11:44:00 -07:00
Scott Schurr
a9a4e2c8fb Add Destination to Check threading 2019-09-07 11:39:02 -07:00
Miguel Portilla
56eac5c9a1 Fix rand_int assert in shard store 2019-09-07 11:39:02 -07:00
Miguel Portilla
4b1970afa9 Log database connection error 2019-09-07 11:39:02 -07:00
Miguel Portilla
22c9de487a Add shard thread safety 2019-09-07 11:39:02 -07:00
Miguel Portilla
66fad62e66 Implement Shard SQLite support 2019-09-07 11:39:02 -07:00
Mike Ellery
008fc5155a Improve CI and packaging:
* add travis build for min cmake supported
* add travis build for validator keys (uses xrpl_core)
* add travis build for ipv6 (mac only)
* add cmake target for validator keys via FetchContent
* use validator keys target in package build
2019-09-07 11:39:02 -07:00
seelabs
5834fbbc5d Set version to 1.4.0-b2 2019-08-23 11:33:59 -07:00
seelabs
06219f1151 Disable shadow warning:
Different compilers are handling the shadow warning differently. In particular,
some are warning about types being shadowed by variables. Until these can be
resolved, the shadow warning is being disabled.

Note: the shadow warning was originally enabled to help with the structured
bindings patch. As that is now complete, it's less important to keep this
warning.
2019-08-23 11:33:59 -07:00
seelabs
c2d2ba9f45 Simplify code using if constexpr:
Also simplify msig construction
2019-08-23 11:33:59 -07:00
seelabs
1eb3753f26 Replace from_string_checked pair return type with optional<Endpoint> 2019-08-23 11:33:59 -07:00
seelabs
0a256247a0 Replace strUnHex pair return type with optional<Blob> 2019-08-23 11:33:59 -07:00
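Both of the preceding two entries follow the same pattern; a generic sketch (the `parsePort` function below is hypothetical, not `strUnHex` or `from_string_checked`): returning `std::optional<T>` instead of `std::pair<bool, T>` makes the "no value" case impossible to use by accident.

```
#include <optional>
#include <string>

// Old style: the caller must remember to check .first before using .second.
//   std::pair<bool, int> parsePort(std::string const& s);

// New style: an empty optional unambiguously means "no value".
std::optional<int> parsePort(std::string const& s)
{
    try
    {
        int const port = std::stoi(s);
        if (port < 0 || port > 65535)
            return std::nullopt;
        return port;
    }
    catch (...)
    {
        return std::nullopt;
    }
}

// Usage:
//   if (auto port = parsePort(text))
//       connect(*port);
```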
seelabs
7912ee6f7b Use structured bindings in some places:
Most of the new uses either:
* Replace some uses of `tie`
* bind to pairs when iterating through maps
2019-08-23 11:33:59 -07:00
seelabs
9c58f23cf8 Replace unordered_map::emplace with insert_or_assign 2019-08-23 08:47:43 -07:00
seelabs
9245b0b666 Use std::gcd to implement lowestTerms 2019-08-23 08:47:43 -07:00
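A sketch of the idea (hypothetical signature; the real `lowestTerms` operates on the amount types used inside the payment engine): `std::gcd` from `<numeric>` replaces a hand-rolled Euclidean loop.

```
#include <cstdint>
#include <numeric>
#include <utility>

// Reduce a fraction to lowest terms using the standard library gcd.
std::pair<std::uint64_t, std::uint64_t>
lowestTerms(std::uint64_t numerator, std::uint64_t denominator)
{
    if (numerator == 0 || denominator == 0)
        return {numerator, denominator};

    auto const g = std::gcd(numerator, denominator);
    return {numerator / g, denominator / g};
}
```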
seelabs
92925a0af6 Use [[fallthrough]] in some switch statements 2019-08-23 08:47:43 -07:00
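A minimal example of the attribute (illustrative only): `[[fallthrough]]` documents an intentional fall-through and silences the corresponding compiler warning.

```
#include <iostream>

void describe(int level)
{
    switch (level)
    {
    case 2:
        std::cout << "very ";
        [[fallthrough]];  // intentional: suppresses -Wimplicit-fallthrough
    case 1:
        std::cout << "verbose\n";
        break;
    default:
        std::cout << "quiet\n";
    }
}
```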
seelabs
70c2e1b419 remove make_Amounts:
* c++-17's class template type deduction can replace this function
2019-08-23 08:47:43 -07:00
seelabs
5d1728cc96 Use class template argument deduction for locks 2019-08-23 08:47:43 -07:00
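Class template argument deduction (the same C++17 feature behind the `make_Amounts` removal two entries above) lets the lock's template argument be deduced from its constructor argument; a minimal sketch:

```
#include <mutex>

std::mutex m;

void withLock()
{
    // Pre-C++17:  std::lock_guard<std::mutex> lock(m);
    // With class template argument deduction the argument is inferred:
    std::lock_guard lock(m);

    // ... critical section ...
}
```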
seelabs
4076b6d92e Replace for_each_arg trick with fold expressions 2019-08-23 08:47:42 -07:00
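A generic sketch of the replacement (illustrative; `for_each_arg` here stands for the usual expander-array trick, not a specific rippled helper):

```
#include <iostream>
#include <utility>

// Old trick: expand a function call over a pack via a dummy array.
//   int dummy[] = {0, ((void)f(std::forward<Args>(args)), 0)...};
//   (void)dummy;

// C++17 fold expression: apply f to each argument, in order.
template <class F, class... Args>
void forEachArg(F&& f, Args&&... args)
{
    (f(std::forward<Args>(args)), ...);
}

int main()
{
    forEachArg([](auto const& v) { std::cout << v << '\n'; }, 1, 2.5, "three");
}
```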
seelabs
b9e73b4852 Fix shadowing variables 2019-08-23 08:47:42 -07:00
seelabs
014df67fed Remove unused member variable 2019-08-23 08:47:35 -07:00
Mike Ellery
014ec021bb Build packages with gcc-8 2019-08-23 08:42:14 -07:00
Mike Ellery
7fa9b91d23 Fix jenkins/travis CI:
* remove clang builds from jenkins
* disable windows travis cache
* limit make parallelism
* update linux CI image
2019-08-23 08:41:49 -07:00
Nik Bougalis
98c4a7a2b1 Set version to 1.4.0-b1 2019-08-19 06:58:55 -07:00
Vishwas Patil
c04c00d279 Add "sahyadri.isrdc.in" to list of bootstrap nodes 2019-08-19 06:58:50 -07:00
Elliot Lee
ef139e8b3c Update README.md
- This fixes #3023; thanks @crypto-deus.
- This fixes #3026; thanks @William-Gomez.
2019-08-19 06:58:50 -07:00
Alloy Networks
45c1c38993 Warn if the update script is not executed as root 2019-08-19 06:58:50 -07:00
Mo Morsi
1942fee581 Modernize code and clean up out-of-date or obsolete comments:
- Remove references to nodestore ledger index. This was removed
  in f946d7b447.
2019-08-19 06:58:50 -07:00
seelabs
b3c85e2709 Remove unused TER code from StrandResult 2019-08-19 06:58:50 -07:00
seelabs
561942da23 Use enums for some parameters in payments:
* Use enums for StrandDirection, DebtDirection, and QualityDirection
2019-08-19 06:58:50 -07:00
seelabs
c217baa367 Enable C++17 2019-08-19 06:58:50 -07:00
Howard Hinnant
c699864c85 Fix incorrect snapShot unsharing
This fixes #3020.
2019-08-19 06:58:50 -07:00
Howard Hinnant
4ff0f482c3 Remove unneeded and unused base classes in insight 2019-08-19 06:58:50 -07:00
Scott Schurr
28b942b186 Enhance AccountTx unit test 2019-08-19 06:58:50 -07:00
Edward Hennis
4900e3081d Update appveyor dependencies for boost 1.70 2019-08-19 06:58:50 -07:00
Mike Ellery
145326c00c Add policy check to FindBoost 2019-08-19 06:58:50 -07:00
Mike Ellery
c7d90bfddd Update to current SOCI HEAD 2019-08-16 10:33:08 -07:00
Mike Ellery
bfa84cfca5 Fix io_latency_probe test on CI environments 2019-08-16 10:33:08 -07:00
Mike Ellery
cbc6e500b6 Set minimum versions for gcc/clang 2019-08-16 10:33:08 -07:00
Mike Ellery
13a4fefe34 Travis CI improvements:
FIXES: #2527

* define custom docker image for travis-linux builds based on
  package build image
* add macos builds
* add windows builds (currently allowed to fail)
* improve build and shell scripts as required for the CI envs
* add asio timer latency workaround
* omit several manual tests from TravisCI which cause memory exhaustion
2019-08-16 10:33:08 -07:00
John Freeman
87e9ee5ce9 Add support for reserved peer slots:
This commit allows server operators to reserve slots for specific
peers (identified by the peer's public node identity) and to make
changes to the reservations while the server is operating.

This commit closes #2938
2019-08-05 17:46:24 -07:00
John Freeman
20cc5df5fe Fix GitLab CI
- Update Docker image to Boost 1.70
- Bust dependency cache
- Pass `Boost_NO_BOOST_CMAKE` to CMake
2019-08-05 09:49:43 -07:00
Miguel Portilla
9e117b7b38 Fix shard path detection 2019-08-04 20:48:08 -07:00
Miguel Portilla
a02d914093 Move shard store init to Application 2019-08-04 20:48:08 -07:00
Miguel Portilla
c5a95f1eb5 Remove SQLite Validations table 2019-08-04 20:01:34 -07:00
Manoj doshi
e1adbd7ddd Set version to 1.3.1 2019-07-24 15:22:24 -07:00
Joseph Busch
355a7b04a8 Add a LogicError when a deadlock is detected 2019-07-24 14:08:47 -07:00
Lazaridis
d3ee0df93a Add links to in-repo build-instructions 2019-07-24 14:03:44 -07:00
Mark Travis
7c24f7b170 Improve logging during process startup. 2019-07-24 13:55:38 -07:00
Mike Ellery
a3060516c6 Improve package build pipeline:
- Add docker container tags for "latest_BRANCH"
- Prevent different branches from overwriting deb repo artifacts
- Manual approval always required before pushing to prod
2019-07-24 13:53:09 -07:00
Nik Bougalis
caa5c9e223 Set version to 1.3.0 2019-07-09 13:56:37 -07:00
Manoj doshi
846538304f Set version to 1.3.0-rc2 2019-07-09 13:50:12 -07:00
Mo Morsi
7b7e3b6750 Return WS error on closure when the balance threshold is exceeded 2019-07-09 13:50:12 -07:00
Miguel Portilla
a988b3224f Use NuDB context with backends 2019-07-09 13:50:12 -07:00
Scott Schurr
89b3bf0796 Stabilize RPC error code values:
The original intent was that RPC error codes were not stable.
But those codes were made available through the API, so some
users came to depend on the code values.  This change adapts
to the current state of affairs.
2019-07-09 13:50:12 -07:00
seelabs
6d8988b78a Improve handling of revoked manifests:
Manifests which are revoked can include ephemeral keys, although doing
so does not make sense: a revoked manifest isn't used for signing and
so doesn't need to define an ephemeral key.
2019-07-09 13:38:59 -07:00
Mike Ellery
3acbd84f1d Set proper system openssldir in package build 2019-07-03 11:22:53 -07:00
Manoj doshi
45403b877f Set version to 1.3.0-rc1 2019-06-25 12:03:18 -07:00
Manoj doshi
f17d9bc421 Set version to 1.3.0-b6 2019-06-21 14:59:08 -07:00
Nik Bougalis
ba2714fa22 Make protocol message counters more granular:
A running instance of the server tracks the number of protocol messages
and the number of bytes it sends and receives.

This commit makes the counters more granular, allowing server operators
to better track and understand bandwidth usage.
2019-06-21 14:53:50 -07:00
Mike Ellery
2c4b3d515d Trim whitespace for all config lines
FIXES: 2979
2019-06-21 14:48:45 -07:00
Mike Ellery
59973a435e Add beast/cxx17 to install set 2019-06-19 11:32:31 -07:00
Mike Ellery
93232ec7df Add logrotate config to rpm/deb pkgs 2019-06-19 11:31:19 -07:00
Manoj doshi
efa926676c Set version to 1.3.0-b5 2019-06-13 21:06:25 -07:00
Scott Schurr
dc24748c24 Improve locking of PeerImp member variables 2019-06-13 20:59:28 -07:00
Mo Morsi
f8365f5009 Add JsonOptions enum class to contain options passed to getJSON methods 2019-06-13 20:40:33 -07:00
Mo Morsi
c2138c4e88 Fix Docker error about "FROM" macro usage 2019-06-13 20:39:47 -07:00
Mike Ellery
bfad96dbb9 Force snappy compression for RocksDB (remove option):
FIXES: https://github.com/ripple/rippled/issues/2860

 * Also remove RocksDBQuick backend which is non-functional.
2019-06-13 20:38:42 -07:00
Mike Ellery
0ef6d9f9a0 Add slack notify for approvals, remove old RPM build 2019-06-13 20:38:22 -07:00
Mike Ellery
c1a1cfe550 Pkgbld - Make approval blocking, add slack summary message 2019-06-13 20:38:05 -07:00
Howard Hinnant
773dcd1d48 Modernize base_uint:
*  Add construction and assignment from a generic
   contiguous container.  Both compile-time and run time
   safety checks are made to ensure the safety of this
   conversion.

*  Remove base_uint::copyFrom.  The generic copy assignment
   operator now does this functionality with enhanced
   safety and better syntax.

*  Remove construction from and dependence on Blob.
   The generic constructor and assignment now handle this
   functionality.

*  Fix client code to adhere to this new API.

*  Removed the use of fromVoid in PeerImp.cpp as it was
   an inappropriate use of this dangerous API.  The
   generic container constructors do it with enhanced
   safety and better syntax.

*  Rename data member pn to data_ and make it private.

*  Remove constraint from hash_append

*  Remove array_type alias
2019-06-13 20:37:29 -07:00
Miguel Portilla
de99e79bf1 Fix SNTPClock shutdown
This PR addresses a problem where the server could hang indefinitely
on shutdown. The cause of the problem is the SNTPClock class was not
binding the socket to an endpoint on initialization. This can cause
an error to be delivered to the read handler. Unfortunately, the handler
ignores the error, reads again, and enters a loop that prevents the
io_service from ever completing.
2019-06-13 20:36:45 -07:00
Manoj doshi
e83d367f49 Set version to 1.3.0-b4 2019-05-22 14:44:15 -07:00
invalidator
aa76b382af Document IPv6 usage in sample config:
- Explain how to bind to both IPv4 and IPv6 interfaces
- Provide a hint in the default [port_peer] section
- Do not enable it by default

Note that on Linux, use of '::' and IPv4-mapped IPv6 depends on a sysctl value
setting 'net.ipv6.bindv6only = 0' which seems to be the default on most Linux
distributions.
2019-05-22 13:15:43 -07:00
Howard Hinnant
595b7b194c Improve locking:
- Use `std::lock` when grabbing multiple mutexes to ensure consistent
locking order and avoid deadlocks.
- Reduce the scope of the master mutex lock by releasing it prior to
calling setHeartbeatTimer
2019-05-22 13:15:43 -07:00
JoelKatz
5f908ba870 Make some locks more granular:
- Overlay
- Cluster
- Validator List
- Consensus

The overlay also has its own lock and manages its own thread safety.
2019-05-22 13:15:43 -07:00
Mike Ellery
adc1b8a36b Update package build env to boost 1.70 2019-05-22 13:15:43 -07:00
Mike Ellery
73c6e47e8a Use local rippled core lib during pkg build 2019-05-22 13:15:43 -07:00
Mike Ellery
3a780f80f1 Remove repo package check from update script 2019-05-22 13:15:43 -07:00
mtrippled
c78404e233 Pause for lagging validators. 2019-05-22 13:15:43 -07:00
seelabs
79a0cb096b Payment paths with a zero output step are dry (RIPD-1749):
A tiny input amount to a payment step can cause this step to output zero. For
example, if a previous step outputs a dust amount of 10^-80 and this step is an
IOU -> XRP offer, the offer may output zero drops. In this case, the strand is
called dry. Before this patch, an error would be logged and the strand would be
called dry; in debug mode an assert was triggered.

Note, this patch is not transaction breaking, as the caller did not use the ter
code. The caller only checked for success or failure.

This patch addresses the github issue reported here:
https://github.com/ripple/rippled/issues/2929
2019-05-20 15:58:54 -07:00
seelabs
6f9e8dc720 Support Boost 1.70:
This patch removes calls to several deprecated asio functions.

* `io_service::post` becomes `post` (free function)
* `io_service::work` becomes `executor_work_guard`
* `io_service::wrap` becomes `bind_executor`
* `get_io_context`   becomes `get_executor` or `get_executor().context()`

This patch was tested with boost 1.69 and 1.70. The functions
`ripple::get_lowest_layer` and `beast::create_waitable_timer` are required to
handle a breaking difference between these versions. When rippled no longer
needs to support pre 1.70 boost versions, both of these functions may be
removed, and the waitable timer injections may also be removed.
2019-05-20 15:58:54 -07:00
Scott Schurr
b39b0fef39 Get names of transactions and ledger types from jss 2019-05-20 15:58:54 -07:00
Edward Hennis
be139d9bde Add some missing items to help command list:
* validators
* validator_list_sites
* Put "version" in the right place
2019-05-20 15:58:54 -07:00
John Freeman
c6d82c722b Configure build+test matrix for GitLab CI:
* Disable parallel tests for address sanitizer
* Improve caching
* Prefer Ninja builds because they are faster
2019-05-20 15:58:54 -07:00
John Freeman
0c20e2eb8b Refine parseUrl regular expression (RIPD-1708):
The new parse logic is more strict but handles more cases. If an exception
is thrown, just bail.

* Allow parsing unenclosed IPv6 addresses without port
* Improve string construction
* Reduce nesting levels of code
2019-05-20 15:58:54 -07:00
James Fryman
63eeb8d734 Use recursive remove and clean for apt (OPS-508) 2019-05-20 15:58:34 -07:00
seelabs
5214b3c1b0 Set version to 1.3.0-b3 2019-04-29 08:18:48 -04:00
Jesper Wallin
5f7a61f040 Report a peer's public key and IP address in log messages (fixes #2675) 2019-04-29 08:17:24 -04:00
John Freeman
c5a938de55 Disallow using the master key as the regular key:
The XRP Ledger allows an account to authorize a secondary key pair,
called a regular key pair, to sign future transactions, while keeping
the master key pair offline.

The regular key pair can be changed as often as desired, without
requiring other changes on the account.

If merged, this commit corrects a minor technical flaw which would
allow an account holder to specify the master key as the account's
new regular key.

The change is controlled by the `fixMasterKeyAsRegularKey` amendment
which, if enabled, will:

1. Prevent specifying an account's master key as the account's
   regular key.
2. Prevent the "Disable Master Key" flag from incorrectly affecting
   regular keys.
2019-04-29 08:17:24 -04:00
Mike Ellery
9372a587e4 Request RocksDB PORTABLE build option 2019-04-29 08:17:24 -04:00
Mike Ellery
948e724dff Improvements to pkg CI pipeline:
* add manual approval option before push to prod
* Use new public repo DNS name
* add distros to smoketest
2019-04-29 08:17:24 -04:00
Mike Ellery
06faf2bd5b Improve exit and test failure handling in CI 2019-04-29 08:17:24 -04:00
Mike Ellery
1dd81c04f3 Improve jemalloc build config:
* fix include order for macos/homebrew
* use static jemalloc for static builds
* set CMP0074 for using <pkgname>_ROOT variables
2019-04-29 08:17:24 -04:00
Mike Ellery
56dbf70c3c Improve windows build README 2019-04-29 08:17:24 -04:00
Mike Ellery
f8a4ac6ad7 Use optimized OpenSSL implementations when possible 2019-04-29 08:17:24 -04:00
seelabs
61bd06177f Reserve memory before inserting into a flat set 2019-04-29 08:17:24 -04:00
seelabs
80e535a13c Arguments passed to jtx Env::operator() must be invocable:
Before this patch, jtx allowed non-invocable arguments to be passed to
operator(). However, these arguments are ignored. This caused erroneous
code such as:

```
env (offer (account_to_test, BTC (250), XRP (1000)),
         offers (account_to_test, 1));
```

While it looks like the number of offers is checked, it is not: the `offers`
funclet is never run. While we could modify jtx to make the above code correct,
a cleaner solution is to run post conditions in a `require` statement after a
transaction runs. 2019-04-26 11:22:36 -07:00
Scott Schurr
64b55c0f88 Rename JsonFields.h to jss.h:
At this point all of the jss::* names are defined in the same
file.  That file has been named JsonFields.h.  That file name
has little to do with either JsonStaticStrings (which is what
jss is short for) or with jss.  The file is renamed to jss.h
so the file name better reflects what the file contains.

All includes of that file are fixed.  A few include order
issues are tidied up along the way.
2019-04-26 11:21:52 -07:00
Scott Schurr
afcc4ff296 Reduce likelihood of malformed SOTemplate:
Formerly an SOTemplate was default constructed and its elements
added using push_back().  This left open the possibility of a
malformed SOTemplate if adding one of the elements caused a throw.

With this commit the SOTemplate requires an initializer_list of
its elements at construction.  Elements may not be added after
construction.  With this approach either the SOTemplate is fully
constructed with all of its elements or the constructor throws,
which prevents an invalid SOTemplate from even existing.

This change requires all SOTemplate construction to be adjusted
at the call site.  Those changes are also in this commit.

The SOE_Flags enum is also renamed to SOEStyle, which harmonizes
the name with other uses in the code base.  SOEStyle elements
are renamed (slightly) to have an "soe" prefix rather than "SOE_".
This heads toward reserving identifiers with all upper case for
macros.  The new style also aligns with other prominent enums in
the code base like the collection of TER identifiers.

SOElement is adjusted so it can be stored directly in an STL
container, rather than requiring storage in a unique_ptr.
Correspondingly, unique_ptr usage is removed from both
SOTemplate and KnownFormats.
2019-04-26 11:17:45 -07:00
Scott Schurr
57fe197d3e Remove runtime inference of unrecognized SFields 2019-04-26 11:17:45 -07:00
Edward Hennis
9279a3fee7 Refactor SField construction:
* Use a private_access_tag_t to prevent other files from
  instantiating an SField.
* Delete SField move constructor and make helper.
2019-04-26 11:17:45 -07:00
JoelKatz
b6363289bf Use Json::StaticString for field names
Clean up some code relating to unknown fields and avoid
allocate/copy/free cycles for Json objects containing
serialized field names.
2019-04-26 11:17:45 -07:00
Nik Bougalis
8c1123edc6 Merge master (1.2.4) into develop (1.3.0-b2) 2019-04-26 10:42:51 -07:00
Nik Bougalis
834f545498 Set version to 1.2.4 2019-04-15 12:39:27 -07:00
Mike Ellery
dd99bf479f Enforce a 20s timeout when making validator list requests (RIPD-1737) 2019-04-15 12:39:16 -07:00
Miguel Portilla
2e26377e7c Use public key when routing shard crawl requests 2019-04-15 12:39:08 -07:00
seelabs
0329ee236f Set version to 1.2.3 2019-03-28 17:47:14 -04:00
seelabs
b347afcc5b Better error checking in CachedViewImpl::read:
* Prevent null pointer dereferences
* Always check for correct sle type before returning sle
* Reformat code
2019-03-28 17:47:14 -04:00
Nik Bougalis
fa57859477 Set version to 1.3.0-b2 2019-03-19 15:31:21 -07:00
Nik Bougalis
88cb0e5928 Allow manifests to include an optional 'domain' field:
The new 'Domain' field allows validator operators to associate a domain
name with their manifest in a transparent and independently verifiable
fashion.

It is important to point out that while this system can cryptographically
prove that a particular validator claims to be associated with a domain
it does *NOT* prove that the validator is, actually, associated with that
domain.

Domain owners will have to cryptographically attest to operating particular
validators that claim to be associated with that domain. One option for
doing so would be by making available a file over HTTPS under the domain
being claimed, which is verified separately (e.g. by ensuring that the
certificate used to serve the file matches the domain being claimed) and
which contains the long-term master public keys of validator(s) associated
with that domain.

Credit for an early prototype of this idea goes to GitHub user @cryptobrad
who introduced a PR that would allow a validator list publisher to attest
that a particular validator was associated with a domain. The idea may be
worth revisiting as a way of verifying the domain name claimed by the
validator's operator.
2019-03-19 15:31:21 -07:00
Nik Bougalis
e239eed6de Remove obsolete code 2019-03-19 15:31:20 -07:00
Mark Travis
504b3441dd Apply resource limits to proxied clients:
Resource limits were not properly applied to connections with
known IP addresses but no corresponding users.

Add unit tests for unlimited vs. limited ports.
2019-03-19 08:00:17 -07:00
Scott Schurr
872478d965 Construct ErrorCodes lookup table at compile time 2019-03-19 08:00:17 -07:00
Scott Schurr
185f2baf76 Remove unused RPC error codes:
An audit showed that a number of the RPC error codes in
ErrorCodes.h are no longer used in the code base.  The unused
codes were removed from the file along with their support code
in ErrorCodes.cpp.
2019-03-19 08:00:09 -07:00
Scott Schurr
36d6758945 Disallow both single- and multi-signing in RPC (RIPD-1713):
The ledger already declared a transaction that is both single-
and multi-signing malformed.  This just adds some checking in
the signing RPC commands (like submit and sign_for) which allows
that sort of error to be identified a bit closer to the user.

In the process of adding this code a bug was found in the
RPCCall unit test.  That bug is fixed as well.
2019-03-18 17:08:36 -07:00
seelabs
d8c450d272 Remove incorrectly defaulted functions:
* The functions removed in this commit were explicitly defaulted
  but implicitly deleted
2019-03-18 17:08:36 -07:00
Mike Ellery
8ef5b9bab4 Make LedgerTrie remove work for truncated history 2019-03-18 17:08:36 -07:00
Mike Ellery
e6370a6482 Add dpkg/rpm building capability:
* docker container definitions for package building
* cmake targets for building packages
* initial gitlab CI + artifactory integration
2019-03-18 16:44:54 -07:00
Mike Ellery
b2170d016a Update travis dist/tools 2019-03-18 16:36:57 -07:00
Mike Ellery
5c124f11c2 Remove the 'rocksdb' subtree 2019-03-18 16:19:24 -07:00
Mike Ellery
2aed24a552 Build RocksDB by ExternalProject 2019-03-18 16:19:24 -07:00
Howard Hinnant
296469f5fe Reduce memory allocations for RCLCensorshipDetector 2019-03-18 16:19:24 -07:00
Miguel Portilla
08371ba2c4 Improve shard downloader status reporting 2019-03-18 16:19:24 -07:00
Miguel Portilla
56bc2a2ade Improve SSLHTTPDownloader:
* Use TLS 1.2
* Make certificate verification configurable
2019-03-18 16:19:23 -07:00
Nik Bougalis
1084dc6dd3 Set version to 1.3.0-b1 2019-03-06 19:37:54 -08:00
Elliot Lee
8023caaa97 Correct example configuration file:
Trailing comments are not permitted in the crawl section
and can cause the lines containing them to be ignored.
2019-03-06 19:37:48 -08:00
Howard Hinnant
8b97466285 Always use UTC to be timezone-neutral (RIPD-1659) 2019-03-06 19:37:48 -08:00
Mike Ellery
de1d102535 Allow build to support XCode 10.2 2019-03-06 19:14:52 -08:00
Scott Schurr
1e1e8c2547 Remove assert that accesses object post-dtor (RIPD-1704) 2019-03-06 19:14:52 -08:00
Crypto Brad Garlinghouse
aa49be65a1 Remove conditional check for feature introduced in 0.28.1-b7 2019-03-06 19:14:52 -08:00
Crypto Brad Garlinghouse
cd820b3777 Improve the server's PING/PONG logic 2019-03-06 19:14:52 -08:00
Crypto Brad Garlinghouse
8d59ed5b2a Remove STValidation::isValid overload 2019-03-06 19:14:52 -08:00
seelabs
e03efdbe0b Remove use of beast's detail::sec_ws_key_type 2019-03-06 19:14:52 -08:00
seelabs
4f52c2989c Remove use of beast::detail::is_invocable trait 2019-03-06 19:14:52 -08:00
Mike Ellery
3fb13233a9 Provide patch for FindBoost and apply it 2019-03-06 19:14:52 -08:00
seelabs
9695fd44ba Support boost 1.69 2019-03-06 19:14:52 -08:00
ChronusZ
1bb32134f8 Remove censorshipMaxWarnings 2019-03-06 19:14:52 -08:00
seelabs
0ebed96142 Set version to 1.2.2 2019-03-05 18:21:39 -05:00
Edward Hennis
4c06b3f86f Validate TxQ config and expected transactions range 2019-03-04 11:45:56 -05:00
Nik Bougalis
a3470c225b Set version to 1.2.1 2019-02-25 13:01:32 -08:00
seelabs
c5d215d901 Add delivered amount to the ledger RPC command 2019-02-25 13:01:12 -08:00
JoelKatz
9dbf8495ee Avoid a race condition during peer status change 2019-02-25 12:59:35 -08:00
Nik Bougalis
2529edd2b6 Properly transition state to disconnected:
If the number of peers a server has is below the configured
minimum peer limit, this commit will properly transition the
server's state to "disconnected".

The default limit for the minimum number of peers required was
0 meaning that a server that was connected but lost all its
peers would never transition to disconnected, since it could
never drop below zero peers.

This commit redefines the default minimum number of peers to 1
and produces a warning if the server is configured in a way
that will prevent it from ever achieving sufficient connectivity.
2019-02-25 12:59:35 -08:00
Nik Bougalis
e974c7d8a4 Avoid directly using memcpy to deserialize data 2019-02-25 12:59:34 -08:00
Nik Bougalis
b335adb674 Make validators opt out of crawl:
If a server is configured to support crawl, it will report the
IP addresses of all peers it is connected to, unless those peers
have explicitly opted out by setting the `peer_private` option
in their config file.

This commit makes servers that are configured as validators
opt out of crawling.
2019-02-25 12:59:34 -08:00
Nik Bougalis
c6ab880c03 Display validator status only to admin requests:
Several commands allow a user to retrieve a server's status. For
connections that are not verified, commands typically limit disclosure
of information that can reveal that a particular server is a validator,
to make it more difficult to identify validators via fingerprinting.

Prior to this commit, servers configured to operate as validators
would, instead of simply reporting their server state as 'full',
augment their state information to indicate whether they are
'proposing' or 'validating'.

Servers will only provide this enhanced state information for
connections that have elevated privileges.

Acknowledgements:
Ripple thanks Markus Teufelberger for responsibly disclosing this issue.

Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to responsibly
disclose any issues that they may find. For more on Ripple's Bug Bounty
program, please visit: https://ripple.com/bug-bounty
2019-02-25 12:59:31 -08:00
1855 changed files with 24517 additions and 387320 deletions


@@ -1,6 +1,5 @@
codecov:
ci:
- ci.ops.ripple.com # add custom jenkins server
- !appveyor
- !travis
- travis

.gitignore (vendored, 2 changed lines)

@@ -93,3 +93,5 @@ Builds/VisualStudio2015/*.sdf
*.pdb
.vs/
CMakeSettings.json
compile_commands.json
.clangd

.gitlab-ci.yml (new file, 169 lines)

@@ -0,0 +1,169 @@
# I don't know what the minimum size is, but we cannot build on t3.micro.
# TODO: Factor common builds between different tests.
# The parameters for our job matrix:
#
# 1. Generator (Make, Ninja, MSBuild)
# 2. Compiler (GCC, Clang, MSVC)
# 3. Build type (Debug, Release)
# 4. Definitions (-Dunity=OFF, -Dassert=ON, ...)

.job_linux_build_test:
  only:
    variables:
      - $CI_PROJECT_URL =~ /^https?:\/\/gitlab.com\//
  stage: build
  tags:
    - linux
    - c5.2xlarge
  image: thejohnfreeman/rippled-build-ubuntu:4b73694e07f0
  script:
    - bin/ci/build.sh
    - bin/ci/test.sh
  cache:
    # Use a different key for each unique combination of (generator, compiler,
    # build type). Caches are stored as `.zip` files; they are not merged.
    # Generate a new key whenever you want to bust the cache, e.g. when the
    # dependency versions have been bumped.
    # By default, jobs pull the cache. Only a few specially chosen jobs update
    # the cache (with policy `pull-push`); one for each unique combination of
    # (generator, compiler, build type).
    policy: pull
    paths:
      - .nih_c/

'build+test Make GCC Debug':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Unix Makefiles
    COMPILER: gcc
    BUILD_TYPE: Debug
  cache:
    key: 62ada41c-fc9e-4949-9533-736d4d6512b6
    policy: pull-push

'build+test Ninja GCC Debug':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: gcc
    BUILD_TYPE: Debug
  cache:
    key: 1665d3eb-6233-4eef-9f57-172636899faa
    policy: pull-push

'build+test Ninja GCC Debug -Dstatic=OFF':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: gcc
    BUILD_TYPE: Debug
    CMAKE_ARGS: '-Dstatic=OFF'
  cache:
    key: 1665d3eb-6233-4eef-9f57-172636899faa

'build+test Ninja GCC Debug -Dstatic=OFF -DBUILD_SHARED_LIBS=ON':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: gcc
    BUILD_TYPE: Debug
    CMAKE_ARGS: '-Dstatic=OFF -DBUILD_SHARED_LIBS=ON'
  cache:
    key: 1665d3eb-6233-4eef-9f57-172636899faa

'build+test Ninja GCC Debug -Dunity=OFF':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: gcc
    BUILD_TYPE: Debug
    CMAKE_ARGS: '-Dunity=OFF'
  cache:
    key: 1665d3eb-6233-4eef-9f57-172636899faa

'build+test Ninja GCC Release -Dassert=ON':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: gcc
    BUILD_TYPE: Release
    CMAKE_ARGS: '-Dassert=ON'
  cache:
    key: c45ec125-9625-4c19-acf7-4e889d5f90bd
    policy: pull-push

'build+test(manual) Ninja GCC Release -Dassert=ON':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: gcc
    BUILD_TYPE: Release
    CMAKE_ARGS: '-Dassert=ON'
    MANUAL_TEST: 'true'
  cache:
    key: c45ec125-9625-4c19-acf7-4e889d5f90bd

'build+test Make clang Debug':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Unix Makefiles
    COMPILER: clang
    BUILD_TYPE: Debug
  cache:
    key: bf578dc2-5277-4580-8de5-6b9523118b19
    policy: pull-push

'build+test Ninja clang Debug':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: clang
    BUILD_TYPE: Debug
  cache:
    key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe
    policy: pull-push

'build+test Ninja clang Debug -Dunity=OFF':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: clang
    BUILD_TYPE: Debug
    CMAKE_ARGS: '-Dunity=OFF'
  cache:
    key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe

'build+test Ninja clang Debug -Dunity=OFF -Dsan=address':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: clang
    BUILD_TYPE: Debug
    CMAKE_ARGS: '-Dunity=OFF -Dsan=address'
    CONCURRENT_TESTS: 1
  cache:
    key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe

'build+test Ninja clang Debug -Dunity=OFF -Dsan=undefined':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: clang
    BUILD_TYPE: Debug
    CMAKE_ARGS: '-Dunity=OFF -Dsan=undefined'
  cache:
    key: 762514c5-3d4c-4c7c-8da2-2df9d8839cbe

'build+test Ninja clang Release -Dassert=ON':
  extends: .job_linux_build_test
  variables:
    GENERATOR: Ninja
    COMPILER: clang
    BUILD_TYPE: Release
    CMAKE_ARGS: '-Dassert=ON'
  cache:
    key: 7751be37-2358-4f08-b1d0-7e72e0ad266d
    policy: pull-push


@@ -1,60 +1,325 @@
sudo: false
language: cpp
dist: xenial
services:
- docker
env:
global:
# Maintenance note: to move to a new version
# of boost, update both BOOST_ROOT and BOOST_URL.
# Note that for simplicity, BOOST_ROOT's final
# namepart must match the folder name internal
# to boost's .tar.gz.
- LCOV_ROOT=$HOME/lcov
- GDB_ROOT=$HOME/gdb
- BOOST_ROOT=$HOME/boost_1_67_0
- BOOST_URL='http://sourceforge.net/projects/boost/files/boost/1.67.0/boost_1_67_0.tar.gz'
addons:
apt:
sources:
- ubuntu-toolchain-r-test
- llvm-toolchain-trusty-5.0
packages:
- gcc-5
- g++-5
- python-software-properties
- protobuf-compiler
- libprotobuf-dev
- libssl-dev
- libstdc++6
- binutils-gold
- cmake
- lcov
- llvm-5.0
- clang-5.0
matrix:
include:
- compiler: gcc
env: GCC_VER=5 BUILD_TYPE=Debug
- compiler: clang
env: GCC_VER=5 BUILD_TYPE=Debug
cache:
directories:
- $BOOST_ROOT
- DOCKER_IMAGE="mellery451/rippled-ci-builder:2019-08-26"
- CMAKE_EXTRA_ARGS="-Dwerr=ON -Dwextra=ON"
- NINJA_BUILD=true
# change this if we get more VM capacity
- MAX_TIME_MIN=80
- CACHE_DIR=${TRAVIS_HOME}/_cache
- NIH_CACHE_ROOT=${CACHE_DIR}/nih_c
- PARALLEL_TESTS=true
# this is NOT used by linux container based builds (which already have boost installed)
- BOOST_URL='https://dl.bintray.com/boostorg/release/1.70.0/source/boost_1_70_0.tar.bz2'
- VCPKG_DIR=${CACHE_DIR}/vcpkg
- USE_CCACHE=true
- CCACHE_BASEDIR=${TRAVIS_HOME}"
- CCACHE_NOHASHDIR=true
- CCACHE_DIR=${CACHE_DIR}/ccache
before_install:
- bin/ci/ubuntu/install-dependencies.sh
- if [ "$(uname)" = "Darwin" ] ; then export NUM_PROCESSORS=$(sysctl -n hw.physicalcpu); else export NUM_PROCESSORS=$(nproc); fi
- echo "NUM PROC is ${NUM_PROCESSORS}"
- if [ "$(uname)" = "Linux" ] ; then docker pull ${DOCKER_IMAGE}; fi
- if [ "${MATRIX_EVAL}" != "" ] ; then eval "${MATRIX_EVAL}"; fi
- if [ "${CMAKE_ADD}" != "" ] ; then export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} ${CMAKE_ADD}"; fi
- bin/ci/ubuntu/travis-cache-start.sh
matrix:
fast_finish: true
allow_failures:
# TODO these all need more investigation
#
# current tsan failure might be related to:
# https://github.com/google/sanitizers/issues/1104
- name: tsan, clang-8
# there are a number of UBs caught currently that need triage
- name: ubsan, clang-8
include:
# coverage builds
- compiler: gcc-8
name: coverage, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dcoverage=ON"
- TARGET=coverage_report
- SKIP_TESTS=true
- compiler: clang-8
name: coverage, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dcoverage=ON"
- TARGET=coverage_report
- SKIP_TESTS=true
# nounity
- compiler: gcc-8
name: non-unity, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dunity=OFF"
- compiler: clang-8
name: non-unity, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dunity=OFF"
# manual tests
- compiler: gcc-8
name: manual tests, gcc-8, debug
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- MANUAL_TESTS=true
# manual tests
- compiler: gcc-8
name: manual tests, gcc-8, release
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dassert=ON"
- MANUAL_TESTS=true
# release builds
- compiler: gcc-8
name: gcc-8, release
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dassert=ON"
- compiler: clang-8
name: clang-8, release
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dassert=ON"
# debug builds
- compiler: gcc-8
name: gcc-8, debug
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- compiler: clang-8
name: clang-8, debug
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Debug
# asan
- compiler: clang-8
name: asan, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dsan=address"
- ASAN_OPTIONS="print_stats=true:atexit=true"
#- LSAN_OPTIONS="verbosity=1:log_threads=1"
- PARALLEL_TESTS=false
# ubsan
- compiler: clang-8
name: ubsan, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dsan=undefined"
# once we can run clean under ubsan, add halt_on_error=1 to options below
- UBSAN_OPTIONS="print_stacktrace=1:report_error_type=1"
- PARALLEL_TESTS=false
# tsan
- compiler: clang-8
name: tsan, clang-8
env:
- MATRIX_EVAL="CC=clang-8 && CXX=clang++-8"
- BUILD_TYPE=Release
- CMAKE_ADD="-Dsan=thread"
- TSAN_OPTIONS="history_size=3 external_symbolizer_path=/usr/bin/llvm-symbolizer verbosity=1"
- PARALLEL_TESTS=false
# dynamic lib builds
- compiler: gcc-8
name: non-static, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dstatic=OFF"
- compiler: gcc-8
name: non-static + BUILD_SHARED_LIBS, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dstatic=OFF -DBUILD_SHARED_LIBS=ON"
# makefile
- compiler: gcc-8
name: makefile generator, gcc-8
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- NINJA_BUILD=false
# misc alternative compilers
- compiler: gcc-7
name: gcc-7
env:
- MATRIX_EVAL="CC=gcc-7 && CXX=g++-7"
- BUILD_TYPE=Debug
- compiler: gcc-9
name: gcc-9
env:
- MATRIX_EVAL="CC=gcc-9 && CXX=g++-9"
- BUILD_TYPE=Debug
- compiler: clang-5.0
name: clang-5
env:
- MATRIX_EVAL="CC=clang-5.0 && CXX=clang++-5.0"
- BUILD_TYPE=Debug
- compiler: clang-6.0
name: clang-6
env:
- MATRIX_EVAL="CC=clang-6.0 && CXX=clang++-6.0"
- BUILD_TYPE=Debug
- compiler: clang-7
name: clang-7
env:
- MATRIX_EVAL="CC=clang-7 && CXX=clang++-7"
- BUILD_TYPE=Debug
- compiler: clang-9
name: clang-9
env:
- MATRIX_EVAL="CC=clang-9 && CXX=clang++-9"
- BUILD_TYPE=Debug
# verify build with min version of cmake
- compiler: gcc-8
name: min cmake version
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_EXE=/opt/local/cmake-3.9/bin/cmake
- SKIP_TESTS=true
# validator keys project as subproj of rippled
- compiler: gcc-8
name: validator-keys
env:
- MATRIX_EVAL="CC=gcc-8 && CXX=g++-8"
- BUILD_TYPE=Debug
- CMAKE_ADD="-Dvalidator_keys=ON"
- TARGET=validator-keys
# macos
- &macos
os: osx
osx_image: xcode10.2
name: xcode10, debug
env:
- BLD_CONFIG=Debug
- TEST_EXTRA_ARGS=""
- BOOST_ROOT=${CACHE_DIR}/boost_1_70_0
- >-
CMAKE_ADD="
-DBOOST_ROOT=${BOOST_ROOT}/_INSTALLED_
-DBoost_ARCHITECTURE=-x64
-DBoost_NO_SYSTEM_PATHS=ON
-DCMAKE_VERBOSE_MAKEFILE=ON"
addons:
homebrew:
packages:
- bash
- ninja
- cmake
- openssl@1.1
update: true
install:
- export OPENSSL_ROOT=$(brew --prefix openssl@1.1)
- travis_wait ${MAX_TIME_MIN} Builds/containers/shared/install_boost.sh
- brew uninstall --ignore-dependencies boost
script:
- mkdir -p build.macos && cd build.macos
- cmake -G Ninja ${CMAKE_EXTRA_ARGS} -DCMAKE_BUILD_TYPE=${BLD_CONFIG} ..
- travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose
- ./rippled --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS} ${TEST_EXTRA_ARGS}
- <<: *macos
name: xcode10, release
before_script:
- export BLD_CONFIG=Release
- export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} -Dassert=ON"
- <<: *macos
name: ipv6 (macos)
before_script:
- export TEST_EXTRA_ARGS="--unittest-ipv6"
- <<: *macos
osx_image: xcode9.4
name: xcode9, debug
before_script:
# turn off warnings as err for this build
- export CMAKE_EXTRA_ARGS="${CMAKE_EXTRA_ARGS} -Dwerr=OFF"
- <<: *macos
osx_image: xcode11
name: xcode11, debug
# windows
- &windows
# caching is unreliable on windows...disable for now
cache: false
os: windows
name: windows, debug
env:
# put NIH in a non-cached location until
# we come up with a way to stabilize that
# cache on windows (minimize incremental changes)
- NIH_CACHE_ROOT=${TRAVIS_BUILD_DIR}/nih_c
- VCPKG_DEFAULT_TRIPLET="x64-windows-static"
- BOOST_ROOT=${CACHE_DIR}/boost_1_70_0
- MATRIX_EVAL="CC=cl.exe && CXX=cl.exe"
- >-
CMAKE_ADD="
-DBOOST_ROOT=${BOOST_ROOT}/_INSTALLED_
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_TOOLCHAIN_FILE=${VCPKG_DIR}/scripts/buildsystems/vcpkg.cmake
-DVCPKG_TARGET_TRIPLET=x64-windows-static"
install:
- choco upgrade cmake.install
- choco install ninja visualstudio2017-workload-vctools -y
- travis_wait 30 bin/sh/install-vcpkg.sh
- travis_wait ${MAX_TIME_MIN} Builds/containers/shared/install_boost.sh
before_script:
- export BLD_CONFIG=Debug
script:
- . ./bin/sh/setup-msvc.sh
- mkdir -p build.ms && cd build.ms
- cmake -G Ninja ${CMAKE_EXTRA_ARGS} -DCMAKE_BUILD_TYPE=${BLD_CONFIG} ..
- travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose
- ./rippled.exe --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS}
- <<: *windows
name: windows, release
before_script:
- export BLD_CONFIG=Release
- <<: *windows
name: windows, visual studio, debug
script:
- mkdir -p build.ms && cd build.ms
- cmake -G "Visual Studio 15 2017 Win64" ${CMAKE_EXTRA_ARGS} ..
- export DESTDIR=${PWD}/_installed_
- travis_wait ${MAX_TIME_MIN} cmake --build . --parallel --verbose --config ${BLD_CONFIG} --target install
- >-
"./_installed_/Program Files/rippled/bin/rippled.exe" --unittest --quiet --unittest-log --unittest-jobs ${NUM_PROCESSORS}
cache:
timeout: 900
directories:
- $CACHE_DIR
before_cache:
- if [ $(uname) = "Linux" ] ; then SUDO="sudo"; else SUDO=""; fi
- cd ${TRAVIS_HOME}
- if [ -f cache_ignore.tar ] ; then $SUDO tar xvf cache_ignore.tar; fi
- cd ${TRAVIS_BUILD_DIR}
script:
- travis_retry bin/ci/ubuntu/build-and-test.sh
- sudo chmod -R a+rw ${CACHE_DIR}
- ccache -s
- travis_wait ${MAX_TIME_MIN} bin/ci/ubuntu/build-in-docker.sh
- ccache -s
notifications:
email:
false
irc:
channels:
- "chat.freenode.net#ripple-dev"
email: false


@@ -108,10 +108,15 @@ macro(group_sources curdir)
group_sources_in(${PROJECT_SOURCE_DIR} ${curdir})
endmacro()
macro (exclude_from_default target_)
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_ALL ON)
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_DEFAULT_BUILD ON)
endmacro ()
macro (exclude_if_included target_)
if (NOT ${CMAKE_CURRENT_SOURCE_DIR} STREQUAL ${CMAKE_SOURCE_DIR})
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_ALL ON)
set_target_properties (${target_} PROPERTIES EXCLUDE_FROM_DEFAULT_BUILD ON)
get_directory_property(has_parent PARENT_DIRECTORY)
if (has_parent)
exclude_from_default (${target_})
endif ()
endmacro ()
@@ -246,3 +251,48 @@ endif()
set(${cmd_var} "${command}" PARENT_SCOPE)
endfunction()
find_package(Git)
# function that calls git log to get current hash
function (git_hash hash_val)
# note: optional second extra string argument not in signature
if (NOT GIT_FOUND)
return ()
endif ()
set (_hash "")
set (_format "%H")
if (ARGC GREATER_EQUAL 2)
string (TOLOWER ${ARGV1} _short)
if (_short STREQUAL "short")
set (_format "%h")
endif ()
endif ()
execute_process (COMMAND ${GIT_EXECUTABLE} "log" "--pretty=${_format}" "-n1"
WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
RESULT_VARIABLE _git_exit_code
OUTPUT_VARIABLE _temp_hash
OUTPUT_STRIP_TRAILING_WHITESPACE
ERROR_QUIET)
if (_git_exit_code EQUAL 0)
set (_hash ${_temp_hash})
endif ()
set (${hash_val} "${_hash}" PARENT_SCOPE)
endfunction ()
function (git_branch branch_val)
if (NOT GIT_FOUND)
return ()
endif ()
set (_branch "")
execute_process (COMMAND ${GIT_EXECUTABLE} "rev-parse" "--abbrev-ref" "HEAD"
WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
RESULT_VARIABLE _git_exit_code
OUTPUT_VARIABLE _temp_branch
OUTPUT_STRIP_TRAILING_WHITESPACE
ERROR_QUIET)
if (_git_exit_code EQUAL 0)
set (_branch ${_temp_branch})
endif ()
set (${branch_val} "${_branch}" PARENT_SCOPE)
endfunction ()

File diff suppressed because it is too large.

Builds/CMake/README.md (new file, 18 lines)

@@ -0,0 +1,18 @@
These are modules and sources that support our CMake build.
== FindBoost.cmake ==
In order to facilitate updating to latest releases of boost, we've made a local
copy of the FindBoost cmake module in our repo. The latest official version can
generally be obtained
[here](https://github.com/Kitware/CMake/blob/master/Modules/FindBoost.cmake).
The latest version provided by Kitware can be tailored for use with the
version of CMake that it ships with (typically the next upcoming CMake
release). As such, the latest version from the repository might not work
perfectly with older versions of CMake - for instance, the latest version
might use features or properties only available in the version of CMake that
it ships with. Given this, it's best to test any updates to this module with a few
different versions of cmake.


@@ -12,7 +12,7 @@ if (static OR MSVC)
else ()
set (Boost_USE_STATIC_RUNTIME OFF)
endif ()
find_dependency (Boost 1.67
find_dependency (Boost 1.70
COMPONENTS
chrono
context


@@ -0,0 +1,173 @@
#[===================================================================[
setup project-wide compiler settings
#]===================================================================]
#[=========================================================[
TODO some/most of these common settings belong in a
toolchain file, especially the ABI-impacting ones
#]=========================================================]
add_library (common INTERFACE)
add_library (Ripple::common ALIAS common)
# add a single global dependency on this interface lib
link_libraries (Ripple::common)
set_target_properties (common
PROPERTIES INTERFACE_POSITION_INDEPENDENT_CODE ON)
set(CMAKE_CXX_EXTENSIONS OFF)
target_compile_features (common INTERFACE cxx_std_17)
target_compile_definitions (common
INTERFACE
$<$<CONFIG:Debug>:DEBUG _DEBUG>
$<$<AND:$<BOOL:${profile}>,$<NOT:$<BOOL:${assert}>>>:NDEBUG>)
# ^^^^ NOTE: CMAKE release builds already have NDEBUG
# defined, so no need to add it explicitly except for
# this special case of (profile ON) and (assert OFF)
# -- presumably this is because we don't want profile
# builds asserting unless asserts were specifically
# requested
if (MSVC)
# remove existing exception flag since we set it to -EHa
string (REGEX REPLACE "[-/]EH[a-z]+" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
foreach (var_
CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE
CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE)
# also remove dynamic runtime
string (REGEX REPLACE "[-/]MD[d]*" " " ${var_} "${${var_}}")
# /ZI (Edit & Continue debugging information) is incompatible with Gy-
string (REPLACE "/ZI" "/Zi" ${var_} "${${var_}}")
endforeach ()
target_compile_options (common
INTERFACE
-bigobj # Increase object file max size
-fp:precise # Floating point behavior
-Gd # __cdecl calling convention
-Gm- # Minimal rebuild: disabled
-Gy- # Function level linking: disabled
-MP # Multiprocessor compilation
-openmp- # pragma omp: disabled
-errorReport:none # No error reporting to Internet
-nologo # Suppress login banner
-wd4018 # Disable signed/unsigned comparison warnings
-wd4244 # Disable float to int possible loss of data warnings
-wd4267 # Disable size_t to T possible loss of data warnings
-wd4800 # Disable C4800(int to bool performance)
-wd4503 # Decorated name length exceeded, name was truncated
$<$<COMPILE_LANGUAGE:CXX>:
-EHa
-GR
>
$<$<CONFIG:Release>:-Ox>
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:
-GS
-Zc:forScope
>
# static runtime
$<$<CONFIG:Debug>:-MTd>
$<$<NOT:$<CONFIG:Debug>>:-MT>
$<$<BOOL:${werr}>:-WX>
)
target_compile_definitions (common
INTERFACE
_WIN32_WINNT=0x6000
_SCL_SECURE_NO_WARNINGS
_CRT_SECURE_NO_WARNINGS
WIN32_CONSOLE
NOMINMAX
# TODO: Resolve these warnings, don't just silence them
_SILENCE_ALL_CXX17_DEPRECATION_WARNINGS
$<$<AND:$<COMPILE_LANGUAGE:CXX>,$<CONFIG:Debug>>:_CRTDBG_MAP_ALLOC>)
target_link_libraries (common
INTERFACE
-errorreport:none
-machine:X64)
else ()
# HACK : because these need to come first, before any warning demotion
string (APPEND CMAKE_CXX_FLAGS " -Wall -Wdeprecated")
if (wextra)
string (APPEND CMAKE_CXX_FLAGS " -Wextra -Wno-unused-parameter")
endif ()
# not MSVC
target_compile_options (common
INTERFACE
$<$<BOOL:${werr}>:-Werror>
$<$<COMPILE_LANGUAGE:CXX>:
-frtti
-Wnon-virtual-dtor
>
-Wno-sign-compare
-Wno-char-subscripts
-Wno-format
-Wno-unused-local-typedefs
$<$<BOOL:${is_gcc}>:
-Wno-unused-but-set-variable
-Wno-deprecated
>
$<$<NOT:$<CONFIG:Debug>>:-fno-strict-aliasing>
# tweak gcc optimization for debug
$<$<AND:$<BOOL:${is_gcc}>,$<CONFIG:Debug>>:-O0>
# Add debug symbols to release config
$<$<CONFIG:Release>:-g>)
target_link_libraries (common
INTERFACE
-rdynamic
# link to static libc/c++ iff:
# * static option set and
# * NOT APPLE (AppleClang does not support static libc/c++) and
# * NOT san (sanitizers typically don't work with static libc/c++)
$<$<AND:$<BOOL:${static}>,$<NOT:$<BOOL:${APPLE}>>,$<NOT:$<BOOL:${san}>>>:-static-libstdc++>)
endif ()
if (use_gold AND is_gcc)
# use gold linker if available
execute_process (
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=gold -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
#[=========================================================[
NOTE: THE gold linker inserts -rpath as DT_RUNPATH by
default instead of DT_RPATH, so you might have slightly
unexpected runtime ld behavior if you were expecting
DT_RPATH. Specify --disable-new-dtags to gold if you do
not want the default DT_RUNPATH behavior. This rpath
treatment as well as static/dynamic selection means that
gold does not currently have ideal default behavior when
we are using jemalloc. Thus for simplicity we don't use
it when jemalloc is requested. An alternative to
disabling would be to figure out all the settings
required to make gold play nicely with jemalloc.
#]=========================================================]
if (("${LD_VERSION}" MATCHES "GNU gold") AND (NOT jemalloc))
target_link_libraries (common
INTERFACE
-fuse-ld=gold
-Wl,--no-as-needed
#[=========================================================[
see https://bugs.launchpad.net/ubuntu/+source/eglibc/+bug/1253638/comments/5
DT_RUNPATH does not work great for transitive
dependencies (of which boost has a few) - so just
switch to DT_RPATH if doing dynamic linking with gold
#]=========================================================]
$<$<NOT:$<BOOL:${static}>>:-Wl,--disable-new-dtags>)
endif ()
unset (LD_VERSION)
endif ()
if (use_lld)
# use lld linker if available
execute_process (
COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=lld -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if ("${LD_VERSION}" MATCHES "LLD")
target_link_libraries (common INTERFACE -fuse-ld=lld)
endif ()
unset (LD_VERSION)
endif()
if (assert)
foreach (var_ CMAKE_C_FLAGS_RELEASE CMAKE_CXX_FLAGS_RELEASE)
STRING (REGEX REPLACE "[-/]DNDEBUG" "" ${var_} "${${var_}}")
endforeach ()
endif ()

File diff suppressed because it is too large


@@ -0,0 +1,98 @@
#[===================================================================[
coverage report target
#]===================================================================]
if (coverage)
if (is_clang)
if (APPLE)
execute_process (COMMAND xcrun -f llvm-profdata
OUTPUT_VARIABLE LLVM_PROFDATA
OUTPUT_STRIP_TRAILING_WHITESPACE)
else ()
find_program (LLVM_PROFDATA llvm-profdata)
endif ()
if (NOT LLVM_PROFDATA)
message (WARNING "unable to find llvm-profdata - skipping coverage_report target")
endif ()
if (APPLE)
execute_process (COMMAND xcrun -f llvm-cov
OUTPUT_VARIABLE LLVM_COV
OUTPUT_STRIP_TRAILING_WHITESPACE)
else ()
find_program (LLVM_COV llvm-cov)
endif ()
if (NOT LLVM_COV)
message (WARNING "unable to find llvm-cov - skipping coverage_report target")
endif ()
set (extract_pattern "")
if (coverage_core_only)
set (extract_pattern "${CMAKE_SOURCE_DIR}/src/ripple/")
endif ()
if (LLVM_COV AND LLVM_PROFDATA)
add_custom_target (coverage_report
USES_TERMINAL
COMMAND ${CMAKE_COMMAND} -E echo "Generating coverage - results will be in ${CMAKE_BINARY_DIR}/coverage/index.html."
COMMAND ${CMAKE_COMMAND} -E echo "Running rippled tests."
COMMAND rippled --unittest$<$<BOOL:${coverage_test}>:=${coverage_test}> --quiet --unittest-log
COMMAND ${LLVM_PROFDATA}
merge -sparse default.profraw -o rip.profdata
COMMAND ${CMAKE_COMMAND} -E echo "Summary of coverage:"
COMMAND ${LLVM_COV}
report -instr-profile=rip.profdata
$<TARGET_FILE:rippled> ${extract_pattern}
# generate html report
COMMAND ${LLVM_COV}
show -format=html -output-dir=${CMAKE_BINARY_DIR}/coverage
-instr-profile=rip.profdata
$<TARGET_FILE:rippled> ${extract_pattern}
BYPRODUCTS coverage/index.html)
endif ()
elseif (is_gcc)
find_program (LCOV lcov)
if (NOT LCOV)
message (WARNING "unable to find lcov - skipping coverage_report target")
endif ()
find_program (GENHTML genhtml)
if (NOT GENHTML)
message (WARNING "unable to find genhtml - skipping coverage_report target")
endif ()
set (extract_pattern "*")
if (coverage_core_only)
set (extract_pattern "*/src/ripple/*")
endif ()
if (LCOV AND GENHTML)
add_custom_target (coverage_report
USES_TERMINAL
COMMAND ${CMAKE_COMMAND} -E echo "Generating coverage- results will be in ${CMAKE_BINARY_DIR}/coverage/index.html."
# create baseline info file
COMMAND ${LCOV}
--no-external -d "${CMAKE_SOURCE_DIR}" -c -d . -i -o baseline.info
| grep -v "ignoring data for external file"
# run tests
COMMAND ${CMAKE_COMMAND} -E echo "Running rippled tests for coverage report."
COMMAND rippled --unittest$<$<BOOL:${coverage_test}>:=${coverage_test}> --quiet --unittest-log
# Create test coverage data file
COMMAND ${LCOV}
--no-external -d "${CMAKE_SOURCE_DIR}" -c -d . -o tests.info
| grep -v "ignoring data for external file"
# Combine baseline and test coverage data
COMMAND ${LCOV}
-a baseline.info -a tests.info -o lcov-all.info
# extract our files
COMMAND ${LCOV}
-e lcov-all.info "${extract_pattern}" -o lcov.info
COMMAND ${CMAKE_COMMAND} -E echo "Summary of coverage:"
COMMAND ${LCOV} --summary lcov.info
# generate HTML report
COMMAND ${GENHTML}
-o ${CMAKE_BINARY_DIR}/coverage lcov.info
BYPRODUCTS coverage/index.html)
endif ()
endif ()
endif ()
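# Example usage (a sketch, assuming an out-of-source build directory and the
# tools checked for above available on PATH):
#   cmake -Dcoverage=ON <path-to-source>
#   cmake --build . --target coverage_report
# the report is written to <build-dir>/coverage/index.html as echoed above.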


@@ -0,0 +1,31 @@
#[===================================================================[
docs target (optional)
#]===================================================================]
find_package (Doxygen)
if (TARGET Doxygen::doxygen)
set (doc_srcs docs/source.dox)
file (GLOB_RECURSE other_docs docs/*.md)
list (APPEND doc_srcs "${other_docs}")
# read the source config and make a modified one
# that points the output files to our build directory
file (READ "${CMAKE_CURRENT_SOURCE_DIR}/docs/source.dox" dox_content)
string (REGEX REPLACE "[\t ]*OUTPUT_DIRECTORY[\t ]*=(.*)"
"OUTPUT_DIRECTORY=${CMAKE_BINARY_DIR}\n\\1"
new_config "${dox_content}")
file (WRITE "${CMAKE_BINARY_DIR}/source.dox" "${new_config}")
add_custom_target (docs
COMMAND "${DOXYGEN_EXECUTABLE}" "${CMAKE_BINARY_DIR}/source.dox"
BYPRODUCTS "${CMAKE_BINARY_DIR}/html_doc/index.html"
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/docs"
SOURCES "${doc_srcs}")
if (is_multiconfig)
set_property (
SOURCE ${doc_srcs}
APPEND
PROPERTY HEADER_FILE_ONLY
true)
endif ()
else ()
message (STATUS "doxygen executable not found -- skipping docs target")
endif ()
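# Example usage (sketch, assuming doxygen was found at configure time):
#   cmake --build . --target docs
# output lands in <build-dir>/html_doc/index.html per the BYPRODUCTS above.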


@@ -0,0 +1,51 @@
#[===================================================================[
install stuff
#]===================================================================]
install (
TARGETS
secp256k1
ed25519-donna
common
opts
ripple_syslibs
ripple_boost
xrpl_core
EXPORT RippleExports
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin
INCLUDES DESTINATION include)
install (EXPORT RippleExports
FILE RippleTargets.cmake
NAMESPACE Ripple::
DESTINATION lib/cmake/ripple)
include (CMakePackageConfigHelpers)
write_basic_package_version_file (
RippleConfigVersion.cmake
VERSION ${rippled_version}
COMPATIBILITY SameMajorVersion)
if (is_root_project)
install (TARGETS rippled RUNTIME DESTINATION bin)
set_target_properties(rippled PROPERTIES INSTALL_RPATH_USE_LINK_PATH ON)
install (
FILES
${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/RippleConfig.cmake
${CMAKE_CURRENT_BINARY_DIR}/RippleConfigVersion.cmake
DESTINATION lib/cmake/ripple)
# sample configs should not overwrite existing files
# install if-not-exists workaround as suggested by
# https://cmake.org/Bug/view.php?id=12646
install(CODE "
macro (copy_if_not_exists SRC DEST NEWNAME)
if (NOT EXISTS \"\$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
file (INSTALL FILE_PERMISSIONS OWNER_READ OWNER_WRITE DESTINATION \"\${CMAKE_INSTALL_PREFIX}/\${DEST}\" FILES \"\${SRC}\" RENAME \"\${NEWNAME}\")
else ()
message (\"-- Skipping : \$ENV{DESTDIR}\${CMAKE_INSTALL_PREFIX}/\${DEST}/\${NEWNAME}\")
endif ()
endmacro()
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/rippled-example.cfg\" etc rippled.cfg)
copy_if_not_exists(\"${CMAKE_CURRENT_SOURCE_DIR}/cfg/validators-example.txt\" etc validators.txt)
")
endif ()
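# Example usage (sketch, make/ninja generators): a staged install that shows
# the if-not-exists behavior for the sample configs,
#   DESTDIR=/tmp/rippled-stage cmake --build . --target install
# a second run prints the "Skipping" message for etc/rippled.cfg and
# etc/validators.txt instead of overwriting them.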


@@ -0,0 +1,108 @@
#[===================================================================[
rippled compile options/settings via an interface library
#]===================================================================]
add_library (opts INTERFACE)
add_library (Ripple::opts ALIAS opts)
target_compile_definitions (opts
INTERFACE
BOOST_ASIO_DISABLE_HANDLER_TYPE_REQUIREMENTS
$<$<BOOL:${boost_show_deprecated}>:
BOOST_ASIO_NO_DEPRECATED
BOOST_FILESYSTEM_NO_DEPRECATED
>
$<$<NOT:$<BOOL:${boost_show_deprecated}>>:
BOOST_COROUTINES_NO_DEPRECATION_WARNING
BOOST_BEAST_ALLOW_DEPRECATED
BOOST_FILESYSTEM_DEPRECATED
>
$<$<BOOL:${beast_hashers}>:
USE_BEAST_HASHER
>
$<$<BOOL:${beast_no_unit_test_inline}>:BEAST_NO_UNIT_TEST_INLINE=1>
$<$<BOOL:${beast_disable_autolink}>:BEAST_DONT_AUTOLINK_TO_WIN32_LIBRARIES=1>
$<$<BOOL:${single_io_service_thread}>:RIPPLE_SINGLE_IO_SERVICE_THREAD=1>
# NOTE: verify_nodeobject_keys does not currently compile (see the TODO where the option is declared):
$<$<BOOL:${verify_nodeobject_keys}>:RIPPLE_VERIFY_NODEOBJECT_KEYS=1>)
target_compile_options (opts
INTERFACE
$<$<AND:$<BOOL:${is_gcc}>,$<COMPILE_LANGUAGE:CXX>>:-Wsuggest-override>
$<$<BOOL:${perf}>:-fno-omit-frame-pointer>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${coverage}>>:-fprofile-arcs -ftest-coverage>
$<$<AND:$<BOOL:${is_clang}>,$<BOOL:${coverage}>>:-fprofile-instr-generate -fcoverage-mapping>
$<$<BOOL:${profile}>:-pg>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
target_link_libraries (opts
INTERFACE
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${coverage}>>:-fprofile-arcs -ftest-coverage>
$<$<AND:$<BOOL:${is_clang}>,$<BOOL:${coverage}>>:-fprofile-instr-generate -fcoverage-mapping>
$<$<BOOL:${profile}>:-pg>
$<$<AND:$<BOOL:${is_gcc}>,$<BOOL:${profile}>>:-p>)
if (jemalloc)
if (static)
set(JEMALLOC_USE_STATIC ON CACHE BOOL "" FORCE)
endif ()
find_package (jemalloc REQUIRED)
target_compile_definitions (opts INTERFACE PROFILE_JEMALLOC)
target_include_directories (opts SYSTEM INTERFACE ${JEMALLOC_INCLUDE_DIRS})
target_link_libraries (opts INTERFACE ${JEMALLOC_LIBRARIES})
get_filename_component (JEMALLOC_LIB_PATH ${JEMALLOC_LIBRARIES} DIRECTORY)
## TODO see if we can use the BUILD_RPATH target property (is it transitive?)
set (CMAKE_BUILD_RPATH ${CMAKE_BUILD_RPATH} ${JEMALLOC_LIB_PATH})
endif ()
if (san)
target_compile_options (opts
INTERFACE
# sanitizers recommend minimum of -O1 for reasonable performance
$<$<CONFIG:Debug>:-O1>
${SAN_FLAG}
-fno-omit-frame-pointer)
target_compile_definitions (opts
INTERFACE
$<$<STREQUAL:${san},address>:SANITIZER=ASAN>
$<$<STREQUAL:${san},thread>:SANITIZER=TSAN>
$<$<STREQUAL:${san},memory>:SANITIZER=MSAN>
$<$<STREQUAL:${san},undefined>:SANITIZER=UBSAN>)
target_link_libraries (opts INTERFACE ${SAN_FLAG} ${SAN_LIB})
endif ()
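# Example usage (sketch): configure an instrumented build with one of the
# supported sanitizers, e.g.
#   cmake -Dsan=address <path-to-source>
# (thread, memory and undefined are accepted as well - see the san option
# declaration); the SAN_FLAG and SANITIZER definitions above are applied
# through the opts interface target.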
#[===================================================================[
rippled transitive library deps via an interface library
#]===================================================================]
add_library (ripple_syslibs INTERFACE)
add_library (Ripple::syslibs ALIAS ripple_syslibs)
target_link_libraries (ripple_syslibs
INTERFACE
$<$<BOOL:${MSVC}>:
legacy_stdio_definitions.lib
Shlwapi
kernel32
user32
gdi32
winspool
comdlg32
advapi32
shell32
ole32
oleaut32
uuid
odbc32
odbccp32
crypt32
>
$<$<NOT:$<BOOL:${MSVC}>>:dl>
$<$<NOT:$<OR:$<BOOL:${MSVC}>,$<BOOL:${APPLE}>>>:rt>)
if (NOT MSVC)
set (THREADS_PREFER_PTHREAD_FLAG ON)
find_package (Threads)
target_link_libraries (ripple_syslibs INTERFACE Threads::Threads)
endif ()
add_library (ripple_libs INTERFACE)
add_library (Ripple::libs ALIAS ripple_libs)
target_link_libraries (ripple_libs INTERFACE Ripple::syslibs)


@@ -0,0 +1,39 @@
#[===================================================================[
multiconfig misc
#]===================================================================]
if (is_multiconfig)
# This code finds all source files in the src subdirectory for inclusion
# in the IDE file tree as non-compiled sources. Since this file list will
# have some overlap with files we have already added to our targets to
# be compiled, we explicitly remove any of these target source files from
# this list.
file (GLOB_RECURSE all_sources RELATIVE ${CMAKE_CURRENT_SOURCE_DIR}
CONFIGURE_DEPENDS
src/*.* Builds/*.md docs/*.md src/*.md Builds/*.cmake)
file(GLOB md_files RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} CONFIGURE_DEPENDS
*.md)
LIST(APPEND all_sources ${md_files})
foreach (_target secp256k1 ed25519-donna pbufs xrpl_core rippled)
get_target_property (_type ${_target} TYPE)
if(_type STREQUAL "INTERFACE_LIBRARY")
continue()
endif()
get_target_property (_src ${_target} SOURCES)
list (REMOVE_ITEM all_sources ${_src})
endforeach ()
target_sources (rippled PRIVATE ${all_sources})
set_property (
SOURCE ${all_sources}
APPEND
PROPERTY HEADER_FILE_ONLY true)
if (MSVC)
set_property(
DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
PROPERTY VS_STARTUP_PROJECT rippled)
endif ()
group_sources(src)
group_sources(docs)
group_sources(Builds)
endif ()


@@ -0,0 +1,33 @@
#[===================================================================[
NIH prefix path: this is where we will download
and build any ExternalProjects, and they will hopefully
survive across build directory deletion (manual cleans)
#]===================================================================]
string (REGEX REPLACE "[ \\/%]+" "_" gen_for_path ${CMAKE_GENERATOR})
string (TOLOWER ${gen_for_path} gen_for_path)
# HACK: trying to shorten paths for windows CI (which hits 260 MAXPATH easily)
# @see: https://issues.jenkins-ci.org/browse/JENKINS-38706?focusedCommentId=339847
string (REPLACE "visual_studio" "vs" gen_for_path ${gen_for_path})
if (NOT DEFINED NIH_CACHE_ROOT)
if (DEFINED ENV{NIH_CACHE_ROOT})
set (NIH_CACHE_ROOT $ENV{NIH_CACHE_ROOT})
else ()
set (NIH_CACHE_ROOT "${CMAKE_SOURCE_DIR}/.nih_c")
endif ()
endif ()
set (nih_cache_path
"${NIH_CACHE_ROOT}/${gen_for_path}/${CMAKE_CXX_COMPILER_ID}_${CMAKE_CXX_COMPILER_VERSION}")
if (NOT is_multiconfig)
set (nih_cache_path "${nih_cache_path}/${CMAKE_BUILD_TYPE}")
endif ()
file(TO_CMAKE_PATH "${nih_cache_path}" nih_cache_path)
message (STATUS "NIH-EP cache path: ${nih_cache_path}")
## two convenience variables:
set (ep_lib_prefix ${CMAKE_STATIC_LIBRARY_PREFIX})
set (ep_lib_suffix ${CMAKE_STATIC_LIBRARY_SUFFIX})
# this is a setting for FetchContent and needs to be
# a cache variable
# https://cmake.org/cmake/help/latest/module/FetchContent.html#populating-the-content
set (FETCHCONTENT_BASE_DIR ${nih_cache_path} CACHE STRING "" FORCE)
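# Example usage (sketch): relocate the ExternalProject cache, either via the
# environment,
#   export NIH_CACHE_ROOT=$HOME/.cache/rippled-nih
#   cmake <path-to-source>
# or by passing -DNIH_CACHE_ROOT=<dir> on the cmake command line; both are
# honored by the logic above.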


@@ -0,0 +1,181 @@
#[===================================================================[
package/container targets - (optional)
#]===================================================================]
if (is_root_project)
if (NOT DOCKER)
find_program (DOCKER docker)
endif ()
if (DOCKER)
# if no container label is provided, use current git hash
git_hash (commit_hash)
if (NOT container_label)
set (container_label ${commit_hash})
endif ()
message (STATUS "using [${container_label}] as build container tag...")
file (MAKE_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/packages)
file (MAKE_DIRECTORY ${NIH_CACHE_ROOT}/pkgbuild)
if (is_linux)
execute_process (COMMAND id -u
OUTPUT_VARIABLE DOCKER_USER_ID
OUTPUT_STRIP_TRAILING_WHITESPACE)
message (STATUS "docker local user id: ${DOCKER_USER_ID}")
execute_process (COMMAND id -g
OUTPUT_VARIABLE DOCKER_GROUP_ID
OUTPUT_STRIP_TRAILING_WHITESPACE)
message (STATUS "docker local group id: ${DOCKER_GROUP_ID}")
endif ()
if (DOCKER_USER_ID AND DOCKER_GROUP_ID)
set(map_user TRUE)
endif ()
#[===================================================================[
rpm
#]===================================================================]
add_custom_target (rpm_container
docker build
--pull
--build-arg GIT_COMMIT=${commit_hash}
-t rippled-rpm-builder:${container_label}
$<$<BOOL:${rpm_cache_from}>:--cache-from=${rpm_cache_from}>
-f centos-builder/Dockerfile .
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/Builds/containers
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/centos-builder/Dockerfile
Builds/containers/centos-builder/centos_setup.sh
Builds/containers/centos-builder/extras.sh
Builds/containers/shared/build_deps.sh
Builds/containers/shared/rippled.service
Builds/containers/shared/update_sources.sh
Builds/containers/shared/update-rippled.sh
Builds/containers/packaging/rpm/rippled.spec
Builds/containers/packaging/rpm/build_rpm.sh
)
exclude_from_default (rpm_container)
add_custom_target (rpm
docker run
-e NIH_CACHE_ROOT=/opt/rippled_bld/pkg/.nih_c
-v ${NIH_CACHE_ROOT}/pkgbuild:/opt/rippled_bld/pkg/.nih_c
-v ${CMAKE_SOURCE_DIR}:/opt/rippled_bld/pkg/rippled
-v ${CMAKE_CURRENT_BINARY_DIR}/packages:/opt/rippled_bld/pkg/out
"$<$<BOOL:${map_user}>:--volume=/etc/passwd:/etc/passwd;--volume=/etc/group:/etc/group;--user=${DOCKER_USER_ID}:${DOCKER_GROUP_ID}>"
-t rippled-rpm-builder:${container_label}
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/packaging/rpm/rippled.spec
)
exclude_from_default (rpm)
if (NOT have_package_container)
add_dependencies(rpm rpm_container)
endif ()
#[===================================================================[
dpkg
#]===================================================================]
# We currently use ubuntu 16.04 as a base because it has one of
# the lower libc versions among ubuntu and debian releases.
# We could change this in the future and build with some other
# deb-based system.
add_custom_target (dpkg_container
docker build
--pull
--build-arg DIST_TAG=16.04
--build-arg GIT_COMMIT=${commit_hash}
-t rippled-dpkg-builder:${container_label}
$<$<BOOL:${dpkg_cache_from}>:--cache-from=${dpkg_cache_from}>
-f ubuntu-builder/Dockerfile .
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/Builds/containers
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/ubuntu-builder/Dockerfile
Builds/containers/ubuntu-builder/ubuntu_setup.sh
Builds/containers/shared/build_deps.sh
Builds/containers/shared/rippled.service
Builds/containers/shared/update_sources.sh
Builds/containers/shared/update-rippled.sh
Builds/containers/packaging/dpkg/build_dpkg.sh
Builds/containers/packaging/dpkg/debian/README.Debian
Builds/containers/packaging/dpkg/debian/conffiles
Builds/containers/packaging/dpkg/debian/control
Builds/containers/packaging/dpkg/debian/copyright
Builds/containers/packaging/dpkg/debian/dirs
Builds/containers/packaging/dpkg/debian/docs
Builds/containers/packaging/dpkg/debian/rippled-dev.install
Builds/containers/packaging/dpkg/debian/rippled.install
Builds/containers/packaging/dpkg/debian/rippled.links
Builds/containers/packaging/dpkg/debian/rippled.postinst
Builds/containers/packaging/dpkg/debian/rippled.postrm
Builds/containers/packaging/dpkg/debian/rippled.preinst
Builds/containers/packaging/dpkg/debian/rippled.prerm
Builds/containers/packaging/dpkg/debian/rules
)
exclude_from_default (dpkg_container)
add_custom_target (dpkg
docker run
-e NIH_CACHE_ROOT=/opt/rippled_bld/pkg/.nih_c
-v ${NIH_CACHE_ROOT}/pkgbuild:/opt/rippled_bld/pkg/.nih_c
-v ${CMAKE_SOURCE_DIR}:/opt/rippled_bld/pkg/rippled
-v ${CMAKE_CURRENT_BINARY_DIR}/packages:/opt/rippled_bld/pkg/out
"$<$<BOOL:${map_user}>:--volume=/etc/passwd:/etc/passwd;--volume=/etc/group:/etc/group;--user=${DOCKER_USER_ID}:${DOCKER_GROUP_ID}>"
-t rippled-dpkg-builder:${container_label}
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/packaging/dpkg/debian/control
)
exclude_from_default (dpkg)
if (NOT have_package_container)
add_dependencies(dpkg dpkg_container)
endif ()
#[===================================================================[
ci container
#]===================================================================]
# We now use the same ubuntu Dockerfile for our travis-ci images,
# but with a newer distro (18.04 vs 16.04).
#
# steps for publishing a new CI image when you make changes:
#
# mkdir bld.ci && cd bld.ci && cmake -Dpackages_only=ON -Dcontainer_label=CI_LATEST
# cmake --build . --target ci_container --verbose
# docker tag rippled-ci-builder:CI_LATEST <DOCKERHUB_USER>/rippled-ci-builder:YYYY-MM-DD
# (change YYYY-MM-DD to match the current date, or use a different
# tag/label scheme if you prefer)
# docker push <DOCKERHUB_USER>/rippled-ci-builder:YYYY-MM-DD
#
# ...then change the DOCKER_IMAGE line in .travis.yml :
# - DOCKER_IMAGE="<DOCKERHUB_USER>/rippled-ci-builder:YYYY-MM-DD"
add_custom_target (ci_container
docker build
--pull
--build-arg DIST_TAG=18.04
--build-arg GIT_COMMIT=${commit_hash}
--build-arg CI_USE=true
-t rippled-ci-builder:${container_label}
$<$<BOOL:${ci_cache_from}>:--cache-from=${ci_cache_from}>
-f ubuntu-builder/Dockerfile .
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/Builds/containers
VERBATIM
USES_TERMINAL
COMMAND_EXPAND_LISTS
SOURCES
Builds/containers/ubuntu-builder/Dockerfile
Builds/containers/ubuntu-builder/ubuntu_setup.sh
Builds/containers/shared/build_deps.sh
Builds/containers/shared/rippled.service
Builds/containers/shared/update_sources.sh
Builds/containers/shared/update-rippled.sh
)
exclude_from_default (ci_container)
else ()
message (STATUS "docker NOT found -- won't be able to build containers for packaging")
endif ()
endif ()
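# Example usage (sketch, requires docker): configure a package-only build tree
# and build one of the targets above, e.g.
#   cmake -Dpackages_only=ON -Dcontainer_label=my_tag <path-to-source>
#   cmake --build . --target rpm     # or: --target dpkg
# the resulting artifacts are collected in <build-dir>/packages (the "out"
# volume mounted above).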


@@ -0,0 +1,88 @@
#[===================================================================[
convenience variables and sanity checks
#]===================================================================]
if (NOT ep_procs)
ProcessorCount(ep_procs)
if (ep_procs GREATER 1)
# never use more than half of cores for EP builds
math (EXPR ep_procs "${ep_procs} / 2")
message (STATUS "Using ${ep_procs} cores for ExternalProject builds.")
endif ()
endif ()
get_property (is_multiconfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
if (is_multiconfig STREQUAL "NOTFOUND")
if (${CMAKE_GENERATOR} STREQUAL "Xcode" OR ${CMAKE_GENERATOR} MATCHES "^Visual Studio")
set (is_multiconfig TRUE)
endif ()
endif ()
set (CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "" FORCE)
if (NOT is_multiconfig)
if (NOT CMAKE_BUILD_TYPE)
message (STATUS "Build type not specified - defaulting to Release")
set (CMAKE_BUILD_TYPE Release CACHE STRING "build type" FORCE)
elseif (NOT (CMAKE_BUILD_TYPE STREQUAL Debug OR CMAKE_BUILD_TYPE STREQUAL Release))
# For simplicity, these are the only two config types we care about. Limiting
# the build types especially simplifies dealing with external project builds.
message (FATAL_ERROR " *** Only Debug or Release build types are currently supported ***")
endif ()
endif ()
get_directory_property(has_parent PARENT_DIRECTORY)
if (has_parent)
set (is_root_project OFF)
else ()
set (is_root_project ON)
endif ()
if ("${CMAKE_CXX_COMPILER_ID}" MATCHES ".*Clang") # both Clang and AppleClang
set (is_clang TRUE)
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang" AND
CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5.0)
message (FATAL_ERROR "This project requires clang 5 or later")
endif ()
# TODO min AppleClang version check ?
elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
set (is_gcc TRUE)
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7.0)
message (FATAL_ERROR "This project requires GCC 7 or later")
endif ()
endif ()
if (CMAKE_GENERATOR STREQUAL "Xcode")
set (is_xcode TRUE)
endif ()
if (CMAKE_SYSTEM_NAME STREQUAL "Linux")
set (is_linux TRUE)
else ()
set (is_linux FALSE)
endif ()
if ("$ENV{CI}" STREQUAL "true" OR "$ENV{CONTINUOUS_INTEGRATION}" STREQUAL "true")
set (is_ci TRUE)
else ()
set (is_ci FALSE)
endif ()
# check for in-source build and fail
if ("${CMAKE_CURRENT_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
message (FATAL_ERROR "Builds (in-source) are not allowed in "
"${CMAKE_CURRENT_SOURCE_DIR}. Please remove CMakeCache.txt and the CMakeFiles "
"directory from ${CMAKE_CURRENT_SOURCE_DIR} and try building in a separate directory.")
endif ()
if ("${CMAKE_GENERATOR}" MATCHES "Visual Studio" AND
NOT ("${CMAKE_GENERATOR}" MATCHES .*Win64.*))
message (FATAL_ERROR
"Visual Studio 32-bit build is not supported. Use -G\"${CMAKE_GENERATOR} Win64\"")
endif ()
if (NOT CMAKE_SIZEOF_VOID_P EQUAL 8)
message (FATAL_ERROR "Rippled requires a 64 bit target architecture.\n"
"The most likely cause of this warning is trying to build rippled with a 32-bit OS.")
endif ()
if (APPLE AND NOT HOMEBREW)
find_program (HOMEBREW brew)
endif ()


@@ -0,0 +1,116 @@
#[===================================================================[
declare user options/settings
#]===================================================================]
option (assert "Enables asserts, even in release builds" OFF)
option (unity "Creates a build based on unity sources. This is the default" ON)
if (is_gcc OR is_clang)
option (coverage "Generates coverage info." OFF)
option (profile "Add profiling flags" OFF)
set (coverage_test "" CACHE STRING
"On gcc & clang, the specific unit test(s) to run for coverage. Default is all tests.")
if (coverage_test AND NOT coverage)
set (coverage ON CACHE BOOL "gcc/clang only" FORCE)
endif ()
option (coverage_core_only
"Include only src/ripple files when generating coverage report. \
Set to OFF to include all sources in coverage report."
ON)
option (wextra "compile with extra gcc/clang warnings enabled" ON)
else ()
set (profile OFF CACHE BOOL "gcc/clang only" FORCE)
set (coverage OFF CACHE BOOL "gcc/clang only" FORCE)
set (wextra OFF CACHE BOOL "gcc/clang only" FORCE)
endif ()
if (is_linux)
option (BUILD_SHARED_LIBS "build shared ripple libraries" OFF)
option (static "link protobuf, openssl, libc++, and boost statically" ON)
option (perf "Enables flags that assist with perf recording" OFF)
option (use_gold "enables detection of gold (binutils) linker" ON)
else ()
# we are not ready to allow shared-libs on windows because it would require
# export declarations. On macos it's more feasible, but static openssl
# produces odd linker errors, thus we disable shared lib builds for now.
set (BUILD_SHARED_LIBS OFF CACHE BOOL "build shared ripple libraries - OFF for win/macos" FORCE)
set (static ON CACHE BOOL "static link, linux only. ON for WIN/macos" FORCE)
set (perf OFF CACHE BOOL "perf flags, linux only" FORCE)
set (use_gold OFF CACHE BOOL "gold linker, linux only" FORCE)
endif ()
if (is_clang)
option (use_lld "enables detection of lld linker" ON)
else ()
set (use_lld OFF CACHE BOOL "try lld linker, clang only" FORCE)
endif ()
option (jemalloc "Enables jemalloc for heap profiling" OFF)
option (werr "treat warnings as errors" OFF)
option (local_protobuf
"Force use of a local build of protobuf instead of system version." OFF)
# this one is a string and therefore can't be an option
set (san "" CACHE STRING "On gcc & clang, add sanitizer instrumentation")
set_property (CACHE san PROPERTY STRINGS ";undefined;memory;address;thread")
if (san)
string (TOLOWER ${san} san)
set (SAN_FLAG "-fsanitize=${san}")
set (SAN_LIB "")
if (is_gcc)
if (san STREQUAL "address")
set (SAN_LIB "asan")
elseif (san STREQUAL "thread")
set (SAN_LIB "tsan")
elseif (san STREQUAL "memory")
set (SAN_LIB "msan")
elseif (san STREQUAL "undefined")
set (SAN_LIB "ubsan")
endif ()
endif ()
set (_saved_CRL ${CMAKE_REQUIRED_LIBRARIES})
set (CMAKE_REQUIRED_LIBRARIES "${SAN_FLAG};${SAN_LIB}")
check_cxx_compiler_flag (${SAN_FLAG} COMPILER_SUPPORTS_SAN)
set (CMAKE_REQUIRED_LIBRARIES ${_saved_CRL})
if (NOT COMPILER_SUPPORTS_SAN)
message (FATAL_ERROR "${san} sanitizer does not seem to be supported by your compiler")
endif ()
endif ()
set (container_label "" CACHE STRING "tag to use for package building containers")
option (packages_only
"ONLY generate package building targets. This is special use-case and almost \
certainly not what you want. Use with caution as you won't be able to build \
any compiled targets locally." OFF)
option (have_package_container
"Sometimes you already have the tagged container you want to use for package \
building and you don't want docker to rebuild it. This flag will detach the \
dependency of the package build from the container build. It's an advanced \
use case and most likely you should not be touching this flag." OFF)
# the remaining options are obscure and rarely used
option (beast_no_unit_test_inline
"Prevents unit test definitions from being inserted into global table"
OFF)
# NOTE - THIS OPTION CURRENTLY DOES NOT COMPILE :
# TODO: fix or remove
option (verify_nodeobject_keys
"This verifies that the hash of node objects matches the payload. \
This check is expensive - use with caution."
OFF)
option (single_io_service_thread
"Restricts the number of threads calling io_service::run to one. \
This can be useful when debugging."
OFF)
option (boost_show_deprecated
"Allow boost to fail on deprecated usage. Only useful if you're trying\
to find deprecated calls."
OFF)
option (beast_hashers
"Use local implementations for sha/ripemd hashes (experimental, not recommended)"
OFF)
if (WIN32)
option (beast_disable_autolink "Disables autolinking of system libraries on WIN32" OFF)
else ()
set (beast_disable_autolink OFF CACHE BOOL "WIN32 only" FORCE)
endif ()
if (coverage)
message (STATUS "coverage build requested - forcing Debug build")
set (CMAKE_BUILD_TYPE Debug CACHE STRING "build type" FORCE)
endif ()


@@ -0,0 +1,24 @@
option (validator_keys "Enables building of validator-keys-tool as a separate target (imported via FetchContent)" OFF)
if (validator_keys AND CMAKE_VERSION VERSION_GREATER_EQUAL 3.11)
git_branch (current_branch)
# default to tracking VK develop branch unless we are on master/release
if (NOT (current_branch STREQUAL "master" OR current_branch STREQUAL "release"))
set (current_branch "develop")
endif ()
message (STATUS "tracking ValidatorKeys branch: ${current_branch}")
FetchContent_Declare (
validator_keys_src
GIT_REPOSITORY https://github.com/ripple/validator-keys-tool.git
GIT_TAG "${current_branch}"
)
FetchContent_GetProperties (validator_keys_src)
if (NOT validator_keys_src_POPULATED)
message (STATUS "Pausing to download ValidatorKeys...")
FetchContent_Populate (validator_keys_src)
endif ()
add_subdirectory (${validator_keys_src_SOURCE_DIR} ${CMAKE_BINARY_DIR}/validator-keys)
endif ()
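# Example usage (sketch, requires cmake >= 3.11):
#   cmake -Dvalidator_keys=ON <path-to-source>
#   cmake --build . --target validator-keys
# the target name is defined by the fetched validator-keys-tool project and is
# an assumption here; check that project's CMakeLists for the exact name.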


@@ -0,0 +1,15 @@
#[===================================================================[
read version from source
#]===================================================================]
file (STRINGS src/ripple/protocol/impl/BuildInfo.cpp BUILD_INFO)
foreach (line_ ${BUILD_INFO})
if (line_ MATCHES "versionString[ ]*=[ ]*\"(.+)\"")
set (rippled_version ${CMAKE_MATCH_1})
endif ()
endforeach ()
if (rippled_version)
message (STATUS "rippled version: ${rippled_version}")
else ()
message (FATAL_ERROR "unable to determine rippled version")
endif ()


@@ -0,0 +1,92 @@
#[===================================================================[
NIH dep: boost
#]===================================================================]
if ((NOT DEFINED BOOST_ROOT) AND (DEFINED ENV{BOOST_ROOT}))
set (BOOST_ROOT $ENV{BOOST_ROOT})
endif ()
file (TO_CMAKE_PATH "${BOOST_ROOT}" BOOST_ROOT)
if (WIN32 OR CYGWIN)
# Workaround for MSVC having two boost versions - x86 and x64 on same PC in stage folders
if (DEFINED BOOST_ROOT)
if (IS_DIRECTORY ${BOOST_ROOT}/stage64/lib)
set (BOOST_LIBRARYDIR ${BOOST_ROOT}/stage64/lib)
else ()
set (BOOST_LIBRARYDIR ${BOOST_ROOT}/stage/lib)
endif ()
endif ()
endif ()
message (STATUS "BOOST_ROOT: ${BOOST_ROOT}")
message (STATUS "BOOST_LIBRARYDIR: ${BOOST_LIBRARYDIR}")
# uncomment the following as needed to debug FindBoost issues:
#set (Boost_DEBUG ON)
#[=========================================================[
boost dynamic libraries don't trivially support @rpath
linking right now (cmake's default), so just force
static linking on macos, or on linux if requested by flag
#]=========================================================]
if (static)
set (Boost_USE_STATIC_LIBS ON)
endif ()
set (Boost_USE_MULTITHREADED ON)
if (static AND NOT APPLE)
set (Boost_USE_STATIC_RUNTIME ON)
else ()
set (Boost_USE_STATIC_RUNTIME OFF)
endif ()
find_package (Boost 1.70 REQUIRED
COMPONENTS
chrono
context
coroutine
date_time
filesystem
program_options
regex
serialization
system
thread)
add_library (ripple_boost INTERFACE)
add_library (Ripple::boost ALIAS ripple_boost)
if (is_xcode)
target_include_directories (ripple_boost BEFORE INTERFACE ${Boost_INCLUDE_DIRS})
target_compile_options (ripple_boost INTERFACE --system-header-prefix="boost/")
else ()
target_include_directories (ripple_boost SYSTEM BEFORE INTERFACE ${Boost_INCLUDE_DIRS})
endif()
target_link_libraries (ripple_boost
INTERFACE
Boost::boost
Boost::chrono
Boost::coroutine
Boost::date_time
Boost::filesystem
Boost::program_options
Boost::regex
Boost::serialization
Boost::system
Boost::thread)
if (san)
if (NOT Boost_INCLUDE_DIRS AND TARGET Boost::headers)
get_target_property (Boost_INCLUDE_DIRS Boost::headers INTERFACE_INCLUDE_DIRECTORIES)
endif ()
message(STATUS "Adding [${Boost_INCLUDE_DIRS}] to sanitizer blacklist")
file (WRITE ${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt "src:${Boost_INCLUDE_DIRS}/*")
target_compile_options (opts
INTERFACE
# ignore boost headers for sanitizing
-fsanitize-blacklist=${CMAKE_CURRENT_BINARY_DIR}/san_bl.txt)
endif ()
# workaround for xcode 10.2 and boost < 1.69
# once we require Boost 1.69 or higher, this can be removed
# see: https://github.com/boostorg/asio/commit/43874d5
if (CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang" AND
CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 10.0.1.10010043 AND
Boost_VERSION LESS 106900)
target_compile_definitions (opts INTERFACE BOOST_ASIO_HAS_STD_STRING_VIEW)
endif ()


@@ -0,0 +1,28 @@
#[===================================================================[
NIH dep: ed25519-donna
#]===================================================================]
add_library (ed25519-donna STATIC
src/ed25519-donna/ed25519.c)
target_include_directories (ed25519-donna
PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
$<INSTALL_INTERFACE:include>
PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}/src/ed25519-donna)
#[=========================================================[
NOTE for macos:
https://github.com/floodyberry/ed25519-donna/issues/29
our source for ed25519-donna-portable.h has been
patched to work around this.
#]=========================================================]
target_link_libraries (ed25519-donna PUBLIC OpenSSL::SSL)
add_library (NIH::ed25519-donna ALIAS ed25519-donna)
target_link_libraries (ripple_libs INTERFACE NIH::ed25519-donna)
#[===========================[
headers installation
#]===========================]
install (
FILES
src/ed25519-donna/ed25519.h
DESTINATION include/ed25519-donna)


@@ -0,0 +1,101 @@
#[===================================================================[
NIH dep: libarchive
#]===================================================================]
set (lib_post "")
if (MSVC)
set (lib_post "_static")
endif ()
ExternalProject_Add (libarchive
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/libarchive/libarchive.git
GIT_TAG v3.3.3
CMAKE_ARGS
# passing the compiler seems to be needed for windows CI, sadly
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DENABLE_LZ4=ON
-ULZ4_*
-DLZ4_INCLUDE_DIR=$<JOIN:$<TARGET_PROPERTY:lz4_lib,INTERFACE_INCLUDE_DIRECTORIES>,::>
# because we are building a static lib, this lz4 library doesn't
# actually matter since you can't generally link static libs to other static
# libs. The include files are needed, but the library itself is not (until
# we link our application, at which point we use the lz4 we built above).
# nonetheless, we need to provide a library to libarchive else it will
# NOT include lz4 support when configuring
-DLZ4_LIBRARY=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:lz4_lib,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:lz4_lib,IMPORTED_LOCATION_RELEASE>>
-DENABLE_WERROR=OFF
-DENABLE_TAR=OFF
-DENABLE_TAR_SHARED=OFF
-DENABLE_INSTALL=ON
-DENABLE_NETTLE=OFF
-DENABLE_OPENSSL=OFF
-DENABLE_LZO=OFF
-DENABLE_LZMA=OFF
-DENABLE_ZLIB=OFF
-DENABLE_BZip2=OFF
-DENABLE_LIBXML2=OFF
-DENABLE_EXPAT=OFF
-DENABLE_PCREPOSIX=OFF
-DENABLE_LibGCC=OFF
-DENABLE_CNG=OFF
-DENABLE_CPIO=OFF
-DENABLE_CPIO_SHARED=OFF
-DENABLE_CAT=OFF
-DENABLE_CAT_SHARED=OFF
-DENABLE_XATTR=OFF
-DENABLE_ACL=OFF
-DENABLE_ICONV=OFF
-DENABLE_TEST=OFF
-DENABLE_COVERAGE=OFF
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LIST_SEPARATOR ::
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--target archive_static
$<$<VERSION_GREATER_EQUAL:${CMAKE_VERSION},3.12>:--parallel ${ep_procs}>
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/libarchive/$<CONFIG>/${ep_lib_prefix}archive${lib_post}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/libarchive
>
TEST_COMMAND ""
INSTALL_COMMAND ""
DEPENDS lz4
BUILD_BYPRODUCTS
<BINARY_DIR>/libarchive/${ep_lib_prefix}archive${lib_post}${ep_lib_suffix}
<BINARY_DIR>/libarchive/${ep_lib_prefix}archive${lib_post}_d${ep_lib_suffix}
)
ExternalProject_Get_Property (libarchive BINARY_DIR)
ExternalProject_Get_Property (libarchive SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (libarchive)
endif ()
add_library (archive_lib STATIC IMPORTED GLOBAL)
file (MAKE_DIRECTORY ${SOURCE_DIR}/libarchive)
set_target_properties (archive_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/libarchive/${ep_lib_prefix}archive${lib_post}_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/libarchive/${ep_lib_prefix}archive${lib_post}${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/libarchive
INTERFACE_COMPILE_DEFINITIONS
LIBARCHIVE_STATIC)
add_dependencies (archive_lib libarchive)
target_link_libraries (archive_lib INTERFACE lz4_lib)
target_link_libraries (ripple_libs INTERFACE archive_lib)
exclude_if_included (libarchive)
exclude_if_included (archive_lib)


@@ -0,0 +1,60 @@
#[===================================================================[
NIH dep: lz4
#]===================================================================]
ExternalProject_Add (lz4
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/lz4/lz4.git
GIT_TAG v1.8.2
SOURCE_SUBDIR contrib/cmake_unofficial
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DBUILD_STATIC_LIBS=ON
-DBUILD_SHARED_LIBS=OFF
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
--target lz4_static
$<$<VERSION_GREATER_EQUAL:${CMAKE_VERSION},3.12>:--parallel ${ep_procs}>
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}lz4$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>
>
TEST_COMMAND ""
INSTALL_COMMAND ""
BUILD_BYPRODUCTS
<BINARY_DIR>/${ep_lib_prefix}lz4${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}lz4_d${ep_lib_suffix}
)
ExternalProject_Get_Property (lz4 BINARY_DIR)
ExternalProject_Get_Property (lz4 SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (lz4)
endif ()
add_library (lz4_lib STATIC IMPORTED GLOBAL)
file (MAKE_DIRECTORY ${SOURCE_DIR}/lz4)
set_target_properties (lz4_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/${ep_lib_prefix}lz4_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/${ep_lib_prefix}lz4${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/lib)
add_dependencies (lz4_lib lz4)
target_link_libraries (ripple_libs INTERFACE lz4_lib)
exclude_if_included (lz4)
exclude_if_included (lz4_lib)


@@ -0,0 +1,47 @@
#[===================================================================[
NIH dep: nudb
NuDB is header-only, thus it is an INTERFACE lib in CMake.
TODO: move the library definition into NuDB repo and add
proper targets and export/install
#]===================================================================]
if (is_root_project) # NuDB not needed in the case of xrpl_core inclusion build
add_library (nudb INTERFACE)
if (CMAKE_VERSION VERSION_GREATER_EQUAL 3.11)
FetchContent_Declare(
nudb_src
GIT_REPOSITORY https://github.com/CPPAlliance/NuDB.git
GIT_TAG 2.0.1
)
FetchContent_GetProperties(nudb_src)
if(NOT nudb_src_POPULATED)
message (STATUS "Pausing to download NuDB...")
FetchContent_Populate(nudb_src)
endif()
else ()
ExternalProject_Add (nudb_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/CPPAlliance/NuDB.git
GIT_TAG 2.0.1
CONFIGURE_COMMAND ""
BUILD_COMMAND ""
TEST_COMMAND ""
INSTALL_COMMAND ""
)
ExternalProject_Get_Property (nudb_src SOURCE_DIR)
set (nudb_src_SOURCE_DIR "${SOURCE_DIR}")
file (MAKE_DIRECTORY ${nudb_src_SOURCE_DIR}/include)
add_dependencies (nudb nudb_src)
endif ()
file(TO_CMAKE_PATH "${nudb_src_SOURCE_DIR}" nudb_src_SOURCE_DIR)
# specify as system includes so as to avoid warnings
target_include_directories (nudb SYSTEM INTERFACE ${nudb_src_SOURCE_DIR}/include)
target_link_libraries (nudb
INTERFACE
Boost::thread
Boost::system)
add_library (NIH::nudb ALIAS nudb)
target_link_libraries (ripple_libs INTERFACE NIH::nudb)
endif ()


@@ -0,0 +1,48 @@
#[===================================================================[
NIH dep: openssl
#]===================================================================]
#[===============================================[
OPENSSL_ROOT_DIR is the only variable that
FindOpenSSL honors for locating OpenSSL, so convert any
OPENSSL_ROOT vars to this
#]===============================================]
if (NOT DEFINED OPENSSL_ROOT_DIR)
if (DEFINED ENV{OPENSSL_ROOT})
set (OPENSSL_ROOT_DIR $ENV{OPENSSL_ROOT})
elseif (HOMEBREW)
execute_process (COMMAND ${HOMEBREW} --prefix openssl
OUTPUT_VARIABLE OPENSSL_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
endif ()
file (TO_CMAKE_PATH "${OPENSSL_ROOT_DIR}" OPENSSL_ROOT_DIR)
endif ()
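# Example (sketch): select a specific openssl installation, e.g. a homebrew
# build on macos,
#   export OPENSSL_ROOT=$(brew --prefix openssl)
#   cmake <path-to-source>
# OPENSSL_ROOT_DIR can also be set directly; FindOpenSSL only honors that
# variable, which is why the conversion above exists.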
if (static)
set (OPENSSL_USE_STATIC_LIBS ON)
endif ()
set (OPENSSL_MSVC_STATIC_RT ON)
find_package (OpenSSL 1.0.2 REQUIRED)
target_link_libraries (ripple_libs
INTERFACE
OpenSSL::SSL
OpenSSL::Crypto)
# disable SSLv2...this can also be done when building/configuring OpenSSL
set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_COMPILE_DEFINITIONS OPENSSL_NO_SSL2)
#[=========================================================[
https://gitlab.kitware.com/cmake/cmake/issues/16885
depending on how openssl is built, it might depend
on zlib. In fact, the openssl find package should
figure this out for us, but it does not currently...
so let's add zlib ourselves to the lib list
TODO: investigate linking to static zlib for static
build option
#]=========================================================]
find_package (ZLIB)
set (has_zlib FALSE)
if (TARGET ZLIB::ZLIB)
set_target_properties(OpenSSL::Crypto PROPERTIES
INTERFACE_LINK_LIBRARIES ZLIB::ZLIB)
set (has_zlib TRUE)
endif ()


@@ -0,0 +1,124 @@
#[===================================================================[
import protobuf (lib and compiler) and create a lib
from our proto message definitions. If the system protobuf
is not found, fallback on EP to download and build a version
from official source.
#]===================================================================]
if (static)
set (Protobuf_USE_STATIC_LIBS ON)
endif ()
find_package (Protobuf)
if (local_protobuf OR NOT TARGET protobuf::libprotobuf)
message (STATUS "using local protobuf build.")
if (WIN32)
# protobuf prepends lib even on windows
set (pbuf_lib_pre "lib")
else ()
set (pbuf_lib_pre ${ep_lib_prefix})
endif ()
# for the external project build of protobuf, we currently ignore the
# static option and always build static libs here. This is consistent
# with our other EP builds. Dynamic libs in an EP would add complexity
# because we'd need to get them into the runtime path, and probably
# install them.
ExternalProject_Add (protobuf_src
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/protocolbuffers/protobuf.git
GIT_TAG v3.6.1
SOURCE_SUBDIR cmake
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
-Dprotobuf_BUILD_TESTS=OFF
-Dprotobuf_BUILD_EXAMPLES=OFF
-Dprotobuf_BUILD_PROTOC_BINARIES=ON
-Dprotobuf_MSVC_STATIC_RUNTIME=ON
-DBUILD_SHARED_LIBS=OFF
-Dprotobuf_BUILD_SHARED_LIBS=OFF
-DCMAKE_DEBUG_POSTFIX=_d
-Dprotobuf_DEBUG_POSTFIX=_d
-Dprotobuf_WITH_ZLIB=$<IF:$<BOOL:${has_zlib}>,ON,OFF>
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
$<$<BOOL:${MSVC}>:
"-DCMAKE_CXX_FLAGS=-GR -Gd -fp:precise -FS -EHa -MP"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
$<$<VERSION_GREATER_EQUAL:${CMAKE_VERSION},3.12>:--parallel ${ep_procs}>
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${pbuf_lib_pre}protobuf$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/protoc${CMAKE_EXECUTABLE_SUFFIX}
<BINARY_DIR>
>
TEST_COMMAND ""
INSTALL_COMMAND ""
BUILD_BYPRODUCTS
<BINARY_DIR>/${pbuf_lib_pre}protobuf${ep_lib_suffix}
<BINARY_DIR>/${pbuf_lib_pre}protobuf_d${ep_lib_suffix}
<BINARY_DIR>/protoc${CMAKE_EXECUTABLE_SUFFIX}
)
ExternalProject_Get_Property (protobuf_src BINARY_DIR)
ExternalProject_Get_Property (protobuf_src SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (protobuf_src)
endif ()
if (NOT TARGET protobuf::libprotobuf)
add_library (protobuf::libprotobuf STATIC IMPORTED GLOBAL)
endif ()
file (MAKE_DIRECTORY ${SOURCE_DIR}/src)
set_target_properties (protobuf::libprotobuf PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/${pbuf_lib_pre}protobuf_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/${pbuf_lib_pre}protobuf${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/src)
add_dependencies (protobuf::libprotobuf protobuf_src)
exclude_if_included (protobuf_src)
exclude_if_included (protobuf::libprotobuf)
if (NOT TARGET protobuf::protoc)
add_executable (protobuf::protoc IMPORTED)
exclude_if_included (protobuf::protoc)
endif ()
set_target_properties (protobuf::protoc PROPERTIES
IMPORTED_LOCATION "${BINARY_DIR}/protoc${CMAKE_EXECUTABLE_SUFFIX}")
add_dependencies (protobuf::protoc protobuf_src)
endif ()
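# Example (sketch): force the ExternalProject protobuf build even when a system
# protobuf is installed,
#   cmake -Dlocal_protobuf=ON <path-to-source>
# note that the EP build is always static, per the comment above.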
file (MAKE_DIRECTORY ${CMAKE_BINARY_DIR}/proto_gen)
set (save_CBD ${CMAKE_CURRENT_BINARY_DIR})
set (CMAKE_CURRENT_BINARY_DIR ${CMAKE_BINARY_DIR}/proto_gen)
protobuf_generate_cpp (
PROTO_SRCS
PROTO_HDRS
src/ripple/proto/ripple.proto)
set (CMAKE_CURRENT_BINARY_DIR ${save_CBD})
add_library (pbufs STATIC ${PROTO_SRCS} ${PROTO_HDRS})
target_include_directories (pbufs PRIVATE src)
target_include_directories (pbufs
SYSTEM PUBLIC ${CMAKE_BINARY_DIR}/proto_gen)
target_link_libraries (pbufs protobuf::libprotobuf)
target_compile_options (pbufs
PUBLIC
$<$<BOOL:${is_xcode}>:
--system-header-prefix="google/protobuf"
-Wno-deprecated-dynamic-exception-spec
>)
add_library (Ripple::pbufs ALIAS pbufs)
target_link_libraries (ripple_libs INTERFACE Ripple::pbufs)
exclude_if_included (pbufs)


@@ -0,0 +1,115 @@
#[===================================================================[
NIH dep: rocksdb
#]===================================================================]
ExternalProject_Add (rocksdb
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/facebook/rocksdb.git
GIT_TAG v5.17.2
PATCH_COMMAND
# only used by windows build
${CMAKE_COMMAND} -E copy
${CMAKE_SOURCE_DIR}/Builds/CMake/rocks_thirdparty.inc
<SOURCE_DIR>/thirdparty.inc
COMMAND
# fix up their build version file so the values
# don't change on every build
${CMAKE_COMMAND} -E copy_if_different
${CMAKE_SOURCE_DIR}/Builds/CMake/rocksdb_build_version.cc.in
<SOURCE_DIR>/util/build_version.cc.in
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DBUILD_SHARED_LIBS=OFF
-DCMAKE_POSITION_INDEPENDENT_CODE=ON
-DWITH_JEMALLOC=$<IF:$<BOOL:${jemalloc}>,ON,OFF>
-DWITH_SNAPPY=ON
-DWITH_LZ4=ON
-DWITH_ZLIB=OFF
-DUSE_RTTI=ON
-DWITH_ZSTD=OFF
-DWITH_GFLAGS=OFF
-DWITH_BZ2=OFF
-ULZ4_*
-DLZ4_INCLUDE_DIR=$<JOIN:$<TARGET_PROPERTY:lz4_lib,INTERFACE_INCLUDE_DIRECTORIES>,::>
-DLZ4_LIBRARIES=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:lz4_lib,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:lz4_lib,IMPORTED_LOCATION_RELEASE>>
-DLZ4_FOUND=ON
-USNAPPY_*
-DSNAPPY_INCLUDE_DIR=$<JOIN:$<TARGET_PROPERTY:snappy_lib,INTERFACE_INCLUDE_DIRECTORIES>,::>
-DSNAPPY_LIBRARIES=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:snappy_lib,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:snappy_lib,IMPORTED_LOCATION_RELEASE>>
-DSNAPPY_FOUND=ON
-DWITH_MD_LIBRARY=OFF
-DWITH_RUNTIME_DEBUG=$<IF:$<CONFIG:Debug>,ON,OFF>
-DFAIL_ON_WARNINGS=OFF
-DWITH_ASAN=OFF
-DWITH_TSAN=OFF
-DWITH_UBSAN=OFF
-DWITH_NUMA=OFF
-DWITH_TBB=OFF
-DWITH_WINDOWS_UTF8_FILENAMES=OFF
-DWITH_XPRESS=OFF
-DPORTABLE=ON
-DFORCE_SSE42=OFF
-DDISABLE_STALL_NOTIF=OFF
-DOPTDBG=ON
-DROCKSDB_LITE=OFF
-DWITH_FALLOCATE=ON
-DWITH_LIBRADOS=OFF
-DWITH_JNI=OFF
-DROCKSDB_INSTALL_ON_WINDOWS=OFF
-DWITH_TESTS=OFF
-DWITH_TOOLS=OFF
$<$<BOOL:${MSVC}>:
"-DCMAKE_CXX_FLAGS=-GR -Gd -fp:precise -FS -MP /DNDEBUG"
>
$<$<NOT:$<BOOL:${MSVC}>>:
"-DCMAKE_CXX_FLAGS=-DNDEBUG"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
$<$<VERSION_GREATER_EQUAL:${CMAKE_VERSION},3.12>:--parallel ${ep_procs}>
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}rocksdb$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>
>
LIST_SEPARATOR ::
TEST_COMMAND ""
INSTALL_COMMAND ""
DEPENDS snappy lz4
BUILD_BYPRODUCTS
<BINARY_DIR>/${ep_lib_prefix}rocksdb${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}rocksdb_d${ep_lib_suffix}
)
ExternalProject_Get_Property (rocksdb BINARY_DIR)
ExternalProject_Get_Property (rocksdb SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (rocksdb)
endif ()
add_library (rocksdb_lib STATIC IMPORTED GLOBAL)
file (MAKE_DIRECTORY ${SOURCE_DIR}/include)
set_target_properties (rocksdb_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/${ep_lib_prefix}rocksdb_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/${ep_lib_prefix}rocksdb${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR}/include
INTERFACE_COMPILE_DEFINITIONS
RIPPLE_ROCKSDB_AVAILABLE=1)
add_dependencies (rocksdb_lib rocksdb)
target_link_libraries (rocksdb_lib INTERFACE snappy_lib lz4_lib)
if (MSVC)
target_link_libraries (rocksdb_lib INTERFACE rpcrt4)
endif ()
target_link_libraries (ripple_libs INTERFACE rocksdb_lib)
exclude_if_included (rocksdb)
exclude_if_included (rocksdb_lib)


@@ -0,0 +1,35 @@
#[===================================================================[
NIH dep: secp256k1
#]===================================================================]
add_library (secp256k1 STATIC
src/secp256k1/src/secp256k1.c)
target_compile_definitions (secp256k1
PRIVATE
USE_NUM_NONE
USE_FIELD_10X26
USE_FIELD_INV_BUILTIN
USE_SCALAR_8X32
USE_SCALAR_INV_BUILTIN)
target_include_directories (secp256k1
PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
$<INSTALL_INTERFACE:include>
PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/src/secp256k1)
target_compile_options (secp256k1
PRIVATE
$<$<BOOL:${MSVC}>:-wd4319>
$<$<NOT:$<BOOL:${MSVC}>>:
-Wno-deprecated-declarations
-Wno-unused-function
>
$<$<BOOL:${is_gcc}>:-Wno-nonnull-compare>)
add_library (NIH::secp256k1 ALIAS secp256k1)
target_link_libraries (ripple_libs INTERFACE NIH::secp256k1)
#[===========================[
headers installation
#]===========================]
install (
FILES
src/secp256k1/include/secp256k1.h
DESTINATION include/secp256k1/include)


@@ -0,0 +1,59 @@
#[===================================================================[
NIH dep: snappy
#]===================================================================]
ExternalProject_Add (snappy
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/google/snappy.git
GIT_TAG 1.1.7
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DBUILD_SHARED_LIBS=OFF
-DCMAKE_POSITION_INDEPENDENT_CODE=ON
-DSNAPPY_BUILD_TESTS=OFF
$<$<BOOL:${MSVC}>:
"-DCMAKE_CXX_FLAGS=-GR -Gd -fp:precise -FS -EHa -MP"
"-DCMAKE_CXX_FLAGS_DEBUG=-MTd"
"-DCMAKE_CXX_FLAGS_RELEASE=-MT"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
$<$<VERSION_GREATER_EQUAL:${CMAKE_VERSION},3.12>:--parallel ${ep_procs}>
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}snappy$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>
>
TEST_COMMAND ""
INSTALL_COMMAND ""
BUILD_BYPRODUCTS
<BINARY_DIR>/${ep_lib_prefix}snappy${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}snappy_d${ep_lib_suffix}
)
ExternalProject_Get_Property (snappy BINARY_DIR)
ExternalProject_Get_Property (snappy SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (snappy)
endif ()
add_library (snappy_lib STATIC IMPORTED GLOBAL)
file (MAKE_DIRECTORY ${SOURCE_DIR}/snappy)
set_target_properties (snappy_lib PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/${ep_lib_prefix}snappy_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/${ep_lib_prefix}snappy${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
"${SOURCE_DIR};${BINARY_DIR}")
add_dependencies (snappy_lib snappy)
target_link_libraries (ripple_libs INTERFACE snappy_lib)
exclude_if_included (snappy)
exclude_if_included (snappy_lib)


@@ -0,0 +1,121 @@
#[===================================================================[
NIH dep: soci
#]===================================================================]
set (soci_lib_pre ${ep_lib_prefix})
set (soci_lib_post "")
if (WIN32)
# for some reason soci on windows still prepends lib (non-standard)
set (soci_lib_pre lib)
# this version in the name might change if/when we change versions of soci
set (soci_lib_post "_4_0")
endif ()
get_target_property (_boost_incs Boost::date_time INTERFACE_INCLUDE_DIRECTORIES)
ExternalProject_Add (soci
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/SOCI/soci.git
GIT_TAG 04e1870294918d20761736743bb6136314c42dd5
# We had an issue with soci integer range checking for boost::optional
# and needed to remove the exception that SOCI throws in this case.
# This is *probably* a bug in SOCI, but it has never been investigated
# further nor reported to the maintainers.
# This cmake script comments out the lines in question.
# This patch process is likely fragile and should be reviewed carefully
# whenever we update the GIT_TAG above.
PATCH_COMMAND
${CMAKE_COMMAND} -P ${CMAKE_SOURCE_DIR}/Builds/CMake/soci_patch.cmake
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_PREFIX_PATH=${CMAKE_BINARY_DIR}/sqlite3
-DCMAKE_MODULE_PATH=${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake
-DCMAKE_INCLUDE_PATH=$<JOIN:$<TARGET_PROPERTY:sqlite,INTERFACE_INCLUDE_DIRECTORIES>,::>
-DCMAKE_LIBRARY_PATH=${sqlite_BINARY_DIR}
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
-DSOCI_CXX_C11=ON
-DSOCI_STATIC=ON
-DSOCI_LIBDIR=lib
-DSOCI_SHARED=OFF
-DSOCI_TESTS=OFF
# hack to work around the fact that soci doesn't currently use
# boost imported targets in its cmake. If they switch to
# proper imported targets, this next line can be removed
# (as well as the get_property above that sets _boost_incs)
-DBoost_INCLUDE_DIRS=$<JOIN:${_boost_incs},::>
-DBOOST_ROOT=${BOOST_ROOT}
-DWITH_BOOST=ON
-DSOCI_DB2=OFF
-DSOCI_FIREBIRD=OFF
-DSOCI_MYSQL=OFF
-DSOCI_ODBC=OFF
-DSOCI_ORACLE=OFF
-DSOCI_POSTGRESQL=OFF
-DSOCI_SQLITE3=ON
-DSQLITE3_INCLUDE_DIR=$<JOIN:$<TARGET_PROPERTY:sqlite,INTERFACE_INCLUDE_DIRECTORIES>,::>
-DSQLITE3_LIBRARY=$<IF:$<CONFIG:Debug>,$<TARGET_PROPERTY:sqlite,IMPORTED_LOCATION_DEBUG>,$<TARGET_PROPERTY:sqlite,IMPORTED_LOCATION_RELEASE>>
$<$<BOOL:${APPLE}>:-DCMAKE_FIND_FRAMEWORK=LAST>
$<$<BOOL:${MSVC}>:
"-DCMAKE_CXX_FLAGS=-GR -Gd -fp:precise -FS -EHa -MP"
"-DCMAKE_CXX_FLAGS_DEBUG=-MTd"
"-DCMAKE_CXX_FLAGS_RELEASE=-MT"
>
$<$<NOT:$<BOOL:${MSVC}>>:
"-DCMAKE_CXX_FLAGS=-Wno-deprecated-declarations"
>
# SEE: https://github.com/SOCI/soci/issues/640
$<$<AND:$<BOOL:${is_gcc}>,$<VERSION_GREATER_EQUAL:${CMAKE_CXX_COMPILER_VERSION},8>>:
"-DCMAKE_CXX_FLAGS=-Wno-deprecated-declarations -Wno-error=format-overflow -Wno-format-overflow -Wno-error=format-truncation"
>
LIST_SEPARATOR ::
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
$<$<VERSION_GREATER_EQUAL:${CMAKE_VERSION},3.12>:--parallel ${ep_procs}>
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/lib/$<CONFIG>/${soci_lib_pre}soci_core${soci_lib_post}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/lib/$<CONFIG>/${soci_lib_pre}soci_empty${soci_lib_post}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/lib/$<CONFIG>/${soci_lib_pre}soci_sqlite3${soci_lib_post}$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>/lib
>
TEST_COMMAND ""
INSTALL_COMMAND ""
DEPENDS sqlite3
BUILD_BYPRODUCTS
<BINARY_DIR>/lib/${soci_lib_pre}soci_core${soci_lib_post}${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_core${soci_lib_post}_d${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_empty${soci_lib_post}${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_empty${soci_lib_post}_d${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_sqlite3${soci_lib_post}${ep_lib_suffix}
<BINARY_DIR>/lib/${soci_lib_pre}soci_sqlite3${soci_lib_post}_d${ep_lib_suffix}
)
ExternalProject_Get_Property (soci BINARY_DIR)
ExternalProject_Get_Property (soci SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (soci)
endif ()
file (MAKE_DIRECTORY ${SOURCE_DIR}/include)
file (MAKE_DIRECTORY ${BINARY_DIR}/include)
foreach (_comp core empty sqlite3)
add_library ("soci_${_comp}" STATIC IMPORTED GLOBAL)
set_target_properties ("soci_${_comp}" PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/lib/${soci_lib_pre}soci_${_comp}${soci_lib_post}_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/lib/${soci_lib_pre}soci_${_comp}${soci_lib_post}${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
"${SOURCE_DIR}/include;${BINARY_DIR}/include")
add_dependencies ("soci_${_comp}" soci) # something has to depend on the ExternalProject to trigger it
target_link_libraries (ripple_libs INTERFACE "soci_${_comp}")
if (NOT _comp STREQUAL "core")
target_link_libraries ("soci_${_comp}" INTERFACE soci_core)
endif ()
exclude_if_included ("soci_${_comp}")
endforeach ()
exclude_if_included (soci)

View File

@@ -0,0 +1,67 @@
#[===================================================================[
NIH dep: sqlite
#]===================================================================]
ExternalProject_Add (sqlite3
PREFIX ${nih_cache_path}
# sqlite doesn't use git, but it provides versioned tarballs
URL https://www.sqlite.org/2018/sqlite-amalgamation-3260000.zip
# ^^^ version is apparent in the URL: 3260000 => 3.26.0
URL_HASH SHA256=de5dcab133aa339a4cf9e97c40aa6062570086d6085d8f9ad7bc6ddf8a52096e
# we wrote a very simple CMake file to build sqlite,
# and we copy it here so that we can build with
# CMake. sqlite doesn't generally provide a build system
# for the single amalgamation source file.
PATCH_COMMAND
${CMAKE_COMMAND} -E copy_if_different
${CMAKE_SOURCE_DIR}/Builds/CMake/CMake_sqlite3.txt
<SOURCE_DIR>/CMakeLists.txt
CMAKE_ARGS
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:-DCMAKE_VERBOSE_MAKEFILE=ON>
-DCMAKE_DEBUG_POSTFIX=_d
$<$<NOT:$<BOOL:${is_multiconfig}>>:-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}>
$<$<BOOL:${MSVC}>:
"-DCMAKE_C_FLAGS=-GR -Gd -fp:precise -FS -MP"
"-DCMAKE_C_FLAGS_DEBUG=-MTd"
"-DCMAKE_C_FLAGS_RELEASE=-MT"
>
LOG_BUILD ON
LOG_CONFIGURE ON
BUILD_COMMAND
${CMAKE_COMMAND}
--build .
--config $<CONFIG>
$<$<VERSION_GREATER_EQUAL:${CMAKE_VERSION},3.12>:--parallel ${ep_procs}>
$<$<BOOL:${is_multiconfig}>:
COMMAND
${CMAKE_COMMAND} -E copy
<BINARY_DIR>/$<CONFIG>/${ep_lib_prefix}sqlite3$<$<CONFIG:Debug>:_d>${ep_lib_suffix}
<BINARY_DIR>
>
TEST_COMMAND ""
INSTALL_COMMAND ""
BUILD_BYPRODUCTS
<BINARY_DIR>/${ep_lib_prefix}sqlite3${ep_lib_suffix}
<BINARY_DIR>/${ep_lib_prefix}sqlite3_d${ep_lib_suffix}
)
ExternalProject_Get_Property (sqlite3 BINARY_DIR)
ExternalProject_Get_Property (sqlite3 SOURCE_DIR)
if (CMAKE_VERBOSE_MAKEFILE)
print_ep_logs (sqlite3)
endif ()
add_library (sqlite STATIC IMPORTED GLOBAL)
set_target_properties (sqlite PROPERTIES
IMPORTED_LOCATION_DEBUG
${BINARY_DIR}/${ep_lib_prefix}sqlite3_d${ep_lib_suffix}
IMPORTED_LOCATION_RELEASE
${BINARY_DIR}/${ep_lib_prefix}sqlite3${ep_lib_suffix}
INTERFACE_INCLUDE_DIRECTORIES
${SOURCE_DIR})
add_dependencies (sqlite sqlite3)
target_link_libraries (sqlite INTERFACE $<$<NOT:$<BOOL:${MSVC}>>:dl>)
target_link_libraries (ripple_libs INTERFACE sqlite)
exclude_if_included (sqlite3)
exclude_if_included (sqlite)
set(sqlite_BINARY_DIR ${BINARY_DIR})

View File

@@ -5,11 +5,13 @@
IN_FILE variable.
#]=========================================================]
file (READ ${IN_FILE} contents)
## only print files that actually have some text in them
if (contents MATCHES "[a-z0-9A-Z]+")
execute_process(
COMMAND
${CMAKE_COMMAND} -E echo "${contents}")
if (EXISTS ${IN_FILE})
file (READ ${IN_FILE} contents)
## only print files that actually have some text in them
if (contents MATCHES "[a-z0-9A-Z]+")
execute_process(
COMMAND
${CMAKE_COMMAND} -E echo "${contents}")
endif ()
endif ()

View File

@@ -0,0 +1,16 @@
set (THIRDPARTY_LIBS "")
if(WITH_SNAPPY)
find_package(snappy REQUIRED)
add_definitions(-DSNAPPY)
include_directories(${SNAPPY_INCLUDE_DIR})
list(APPEND THIRDPARTY_LIBS ${SNAPPY_LIBRARIES})
endif()
if(WITH_LZ4)
find_package(lz4 REQUIRED)
add_definitions(-DLZ4)
include_directories(${LZ4_INCLUDE_DIR})
list(APPEND THIRDPARTY_LIBS ${LZ4_LIBRARIES})
endif()

View File

@@ -0,0 +1,4 @@
#include "build_version.h"
const char* rocksdb_build_git_sha = "rocksdb_build_git_sha: N/A";
const char* rocksdb_build_git_date = "rocksdb_build_git_date: N/A";
const char* rocksdb_build_compile_date = "N/A";

View File

@@ -15,11 +15,10 @@ need these software components
| Component | Minimum Recommended Version |
|-----------|-----------------------|
| [Visual Studio 2017](README.md#install-visual-studio-2017)| 15.5.4 |
| [Git for Windows](README.md#install-git-for-windows)| 2.16.1|
| [Google Protocol Buffers Compiler](README.md#install-google-protocol-buffers-compiler) | 2.5.1|
| [Git for Windows](README.md#install-git-for-windows)| 2.16.1 |
| [OpenSSL Library](README.md#install-openssl) | 1.0.2n |
| [Boost library](README.md#build-boost) | 1.67.0 |
| [CMake for Windows](README.md#optional-install-cmake-for-windows)* | 3.10.2 |
| [Boost library](README.md#build-boost) | 1.70.0 |
| [CMake for Windows](README.md#optional-install-cmake-for-windows)* | 3.12 |
\* Only needed if not using the integrated CMake in VS 2017 and prefer generating dedicated project/solution files.
@@ -49,26 +48,6 @@ Windows](https://git-scm.com/) since it provides a Unix-like command line
environment useful for running shell scripts. Use of the bash shell under
Windows is mandatory for running the unit tests.
### Install Google Protocol Buffers Compiler
Building rippled requires **protoc.exe** version 2. Version 3 is not currently
supported. At your option, you may build it yourself from the sources in the
[Google Protocol Buffers](https://github.com/google/protobuf) repository, or you
may download a
[protoc.exe](https://ripple.github.io/Downloads/protoc/2.5.1/protoc.exe)
([alternate
link](https://github.com/ripple/Downloads/raw/gh-pages/protoc/2.5.1/protoc.exe))
precompiled Windows executable from the [Ripple
Organization](https://github.com/ripple).
Either way, once you have the required version of **protoc.exe**, copy it into a
standard location that is in your command line `%PATH%`.
* **NOTE:** If you use an older version of the compiler, the build will fail
with errors related to a mismatch of the version of protocol buffer headers
versus the compiler. Likewise, if you use version 3 or newer, the build will
fail.
### Install OpenSSL
[Download OpenSSL.](http://slproweb.com/products/Win32OpenSSL.html) There will
@@ -99,13 +78,13 @@ to get the correct 32-/64-bit variant.
### Build Boost
Boost 1.67 or later is required.
Boost 1.70 or later is required.
After [downloading boost](http://www.boost.org/users/download/), unpack it
to `c:\lib`. As of this writing, the most recent version of boost is 1.68.0,
which will unpack into a directory named `boost_1_68_0`. We recommend either
to `c:\lib`. As of this writing, the most recent version of boost is 1.70.0,
which will unpack into a directory named `boost_1_70_0`. We recommend either
renaming this directory to `boost`, or creating a junction link `mklink /J boost
boost_1_68_0`, so that you can more easily switch between versions.
boost_1_70_0`, so that you can more easily switch between versions.
Next, open **Developer Command Prompt** and type the following commands
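A typical sequence for building Boost with MSVC looks roughly like the sketch below; the exact `b2` options the project recommends may differ, so treat the address-model and linkage settings here as illustrative only.
```
cd C:\lib\boost_1_70_0
bootstrap.bat
b2 --toolset=msvc address-model=64 link=static threading=multi runtime-link=shared,static stage
```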
@@ -137,12 +116,10 @@ library paths as they will be required later.
Studio 2017 includes an integrated version of CMake that avoids having to
manually run CMake, but it is undergoing continuous improvement. Users that
prefer to use standard Visual Studio project and solution files need to install
a dedicated version of Cmake to generate them. The latest version can be found
a dedicated version of CMake to generate them. The latest version can be found
at the [CMake download site](https://cmake.org/download/). It is recommended you
select the install option to add CMake to your path.
As of this writing, the latest version of CMake for windows is 3.10.2.
## Clone the rippled repository
If you are familiar with cloning github repositories, just follow your normal
@@ -218,13 +195,13 @@ documentation](https://docs.microsoft.com/en-us/cpp/ide/cmake-tools-for-visual-c
If using the provided `CMakeSettings.json` file, the executable will be in
```
.\build\x64-Release\Release\rippled(_classic).exe
.\build\x64-Release\Release\rippled.exe
```
or
```
.\build\x64-Debug\Debug\rippled(_classic).exe
.\build\x64-Debug\Debug\rippled.exe
```
where these paths are relative to your cloned git repository.
These paths are relative to your cloned git repository.
# Build using stand-alone CMake
@@ -237,24 +214,40 @@ execute the following commands within your `rippled` cloned repository:
```
mkdir build\cmake
cd build\cmake
cmake ..\.. -G"Visual Studio 15 2017 Win64" -DBOOST_ROOT="C:\lib\boost_1_68_0" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64"
cmake ..\.. -G"Visual Studio 15 2017 Win64" -DBOOST_ROOT="C:\lib\boost_1_70_0" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64"
```
Now launch Visual Studio 2017 and select **File | Open | Project/Solution**.
Navigate to the `build\cmake` folder created above and select the `rippled.sln`
file. You can then choose whether to build the `Debug` or `Release` solution
configuration. Within the **Solution Explorer**, select either the `rippled`
(unity build) project or the `rippled_classic` (non-unity) project, and
right-click to build.
configuration.
The executable will be in
```
.\build\cmake\Release\rippled(_classic).exe
.\build\cmake\Release\rippled.exe
```
or
````
.\build\cmake\Debug\rippled(_classic).exe
````
where these paths are relative to your cloned git repository.
or
```
.\build\cmake\Debug\rippled.exe
```
These paths are relative to your cloned git repository.
# Unity/No-Unity Builds
The rippled build system defaults to using [unity source files](http://onqtam.com/programming/2018-07-07-unity-builds/)
to improve build times. In some cases it might be desirable to disable the unity build and compile
individual translation units. Here is how you can switch to a "no-unity" build configuration:
## Visual Studio Integrated CMake
Edit your `CMakeSettings.json` (described above) by adding `-Dunity=OFF` to the `cmakeCommandArgs` entry
for each build configuration.
## Standalone CMake Builds
When running cmake to generate the Visual Studio project files, add `-Dunity=OFF` to the
command line options passed to cmake.
**Note:** you will need to re-run the cmake configuration step anytime you want to switch between unity/no-unity builds.
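For example, re-running the stand-alone CMake configuration from earlier with the unity build disabled might look like this (assuming the same Boost and OpenSSL locations as in the example above):
```
cd build\cmake
cmake ..\.. -G"Visual Studio 15 2017 Win64" -Dunity=OFF -DBOOST_ROOT="C:\lib\boost_1_70_0" -DOPENSSL_ROOT="C:\lib\OpenSSL-Win64"
```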
# Unit Test (Recommended)

View File

@@ -0,0 +1,31 @@
# rippled Packaging and Containers
This folder contains docker container definitions and configuration
files to support building rpm and deb packages of rippled. The container
definitions include some additional software/packages that are used
for general build/test CI workflows of rippled but are not explicitly
needed for the package building workflow.
## CMake Targets
If you have docker installed on your local system, then the main
CMake file will enable several targets related to building packages:
`rpm_container`, `rpm`, `dpkg_container`, and `dpkg`. The package targets
depend on the container targets and will trigger a build of those first.
The container builds can take several dozen minutes to complete (depending
on hardware specs), so quick build cycles are not possible currently. As
such, these targets are often best suited to CI/automated build systems.
The package build can be invoked like any other cmake target from the
rippled root folder:
```
mkdir -p build/pkg && cd build/pkg
cmake -Dpackages_only=ON ../..
cmake --build . --target rpm
```
Upon successful completion, the generated package files will be in
the `build/pkg/packages` directory. For deb packages, simply replace
`rpm` with `dpkg` in the build command above.
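For example, the corresponding deb package build is:
```
mkdir -p build/pkg && cd build/pkg
cmake -Dpackages_only=ON ../..
cmake --build . --target dpkg
```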

View File

@@ -0,0 +1,46 @@
FROM centos:7
ARG GIT_COMMIT=unknown
ARG CI_USE=false
LABEL git-commit=$GIT_COMMIT
COPY centos-builder/centos_setup.sh /tmp/
COPY shared/build_deps.sh /tmp/
COPY shared/install_cmake.sh /tmp/
COPY centos-builder/extras.sh /tmp/
COPY shared/install_boost.sh /tmp/
RUN chmod +x /tmp/centos_setup.sh && \
chmod +x /tmp/build_deps.sh && \
chmod +x /tmp/install_boost.sh && \
chmod +x /tmp/install_cmake.sh && \
chmod +x /tmp/extras.sh
RUN /tmp/centos_setup.sh
RUN /tmp/install_cmake.sh 3.15.2 /opt/local/cmake-3.15
RUN ln -s /opt/local/cmake-3.15 /opt/local/cmake
ENV PATH="/opt/local/cmake/bin:$PATH"
# also install min supported cmake for testing
RUN if [ "${CI_USE}" = true ] ; then /tmp/install_cmake.sh 3.9.0 /opt/local/cmake-3.9; fi
RUN source scl_source enable devtoolset-7 python27 && \
/tmp/build_deps.sh
ENV BOOST_ROOT="/opt/local/boost/_INSTALLED_"
ENV PLANTUML_JAR="/opt/plantuml/plantuml.jar"
ENV OPENSSL_ROOT="/opt/local/openssl"
ENV GDB_ROOT="/opt/local/gdb"
RUN source scl_source enable devtoolset-7 python27 && \
/tmp/extras.sh
# prep files for package building
RUN mkdir -m 777 -p /opt/rippled_bld/pkg
WORKDIR /opt/rippled_bld/pkg
COPY packaging/rpm/rippled.spec ./
COPY shared/update_sources.sh ./
RUN mkdir -m 777 ./rpmbuild
RUN mkdir -m 777 ./rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
COPY packaging/rpm/build_rpm.sh ./
CMD ./build_rpm.sh

View File

@@ -0,0 +1,39 @@
#!/usr/bin/env bash
set -ex
source /etc/os-release
yum -y upgrade
yum -y update
yum -y install epel-release centos-release-scl
yum -y install \
wget curl time gcc-c++ time yum-utils \
libstdc++-static rpm-build gnupg which make cmake \
devtoolset-7 devtoolset-7-gdb devtoolset-7-libasan-devel devtoolset-7-libtsan-devel devtoolset-7-libubsan-devel \
devtoolset-8 devtoolset-8-gdb devtoolset-8-binutils devtoolset-8-libstdc++-devel \
devtoolset-8-libasan-devel devtoolset-8-libtsan-devel devtoolset-8-libubsan-devel devtoolset-8-liblsan-devel \
flex flex-devel bison bison-devel parallel \
ncurses ncurses-devel ncurses-libs graphviz graphviz-devel \
lzip p7zip bzip2 bzip2-devel lzma-sdk lzma-sdk-devel xz-devel \
zlib zlib-devel zlib-static texinfo openssl openssl-static \
jemalloc jemalloc-devel \
libicu-devel htop \
python27-python rh-python35-python \
python-devel python27-python-devel rh-python35-python-devel \
python27 rh-python35 \
ninja-build git svn \
protobuf protobuf-static protobuf-c-devel \
protobuf-compiler protobuf-devel \
swig perl-Digest-MD5 python2-pip
if [ "${CI_USE}" = true ] ; then
# TODO need permanent link
yum -y install ftp://ftp.pbone.net/mirror/archive.fedoraproject.org/fedora-secondary/updates/26/i386/Packages/p/python2-six-1.10.0-9.fc26.noarch.rpm
yum -y install \
llvm-toolset-7 llvm-toolset-7-runtime llvm-toolset-7-build llvm-toolset-7-clang \
llvm-toolset-7-clang-analyzer llvm-toolset-7-clang-devel llvm-toolset-7-clang-libs \
llvm-toolset-7-clang-tools-extra llvm-toolset-7-compiler-rt llvm-toolset-7-lldb \
llvm-toolset-7-lldb-devel llvm-toolset-7-python-lldb
fi

View File

@@ -0,0 +1,52 @@
#!/usr/bin/env bash
set -ex
if [ "${CI_USE}" = true ] ; then
cd /tmp
wget https://ftp.gnu.org/gnu/gdb/gdb-8.2.tar.xz
tar xf gdb-8.2.tar.xz
cd gdb-8.2
./configure CFLAGS="-w -O2" CXXFLAGS="-std=gnu++11 -g -O2 -w" --prefix=/opt/local/gdb-8.2
make -j$(nproc)
make install
ln -s /opt/local/gdb-8.2 /opt/local/gdb
cd ..
rm -f gdb-8.2.tar.xz
rm -rf gdb-8.2
# clang from source
RELEASE=tags/RELEASE_701/final
INSTALL=/opt/llvm-7.0.1/
mkdir -p /tmp/clang-src
cd /tmp/clang-src
TOPDIR=`pwd`
svn co -q http://llvm.org/svn/llvm-project/llvm/${RELEASE} llvm
cd ${TOPDIR}/llvm/tools
svn co -q http://llvm.org/svn/llvm-project/cfe/${RELEASE} clang
cd ${TOPDIR}/llvm/tools/clang/tools
svn co -q http://llvm.org/svn/llvm-project/clang-tools-extra/${RELEASE} extra
cd ${TOPDIR}/llvm/tools
svn co -q http://llvm.org/svn/llvm-project/lld/${RELEASE} lld
cd ${TOPDIR}/llvm/tools
svn co -q http://llvm.org/svn/llvm-project/polly/${RELEASE} polly
cd ${TOPDIR}/llvm/projects
svn co -q http://llvm.org/svn/llvm-project/compiler-rt/${RELEASE} compiler-rt
cd ${TOPDIR}/llvm/projects
svn co -q http://llvm.org/svn/llvm-project/openmp/${RELEASE} openmp
cd ${TOPDIR}/llvm/projects
svn co -q http://llvm.org/svn/llvm-project/libcxx/${RELEASE} libcxx
svn co -q http://llvm.org/svn/llvm-project/libcxxabi/${RELEASE} libcxxabi
cd ${TOPDIR}/llvm/projects
## config/build
cd ${TOPDIR}
mkdir mybuilddir && cd mybuilddir
cmake ../llvm -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=${INSTALL} \
-DLLVM_LIBDIR_SUFFIX=64 \
-DLLVM_ENABLE_EH=ON \
-DLLVM_ENABLE_RTTI=ON
cmake --build . --parallel --target install
cd /tmp
rm -rf clang-src
fi

View File

@@ -0,0 +1,41 @@
#!/usr/bin/env sh
set -ex
pkgtype=$1
if [ "${pkgtype}" = "rpm" ] ; then
container_name="${RPM_CONTAINER_NAME}"
elif [ "${pkgtype}" = "dpkg" ] ; then
container_name="${DPKG_CONTAINER_NAME}"
else
echo "invalid package type"
exit 1
fi
if docker pull "${ARTIFACTORY_HUB}/${container_name}:${CI_COMMIT_SHA}"; then
echo "${pkgtype} container for ${CI_COMMIT_SHA} already exists" \
"- skipping container build!"
exit 0
else
echo "no existing ${pkgtype} container for this branch - searching history."
for CID_PREV in $(git log --pretty=%H -n30) ; do
if docker pull "${ARTIFACTORY_HUB}/${container_name}:${CID_PREV}"; then
echo "found container for previous commit ${CID_PREV}" \
"- using as cache."
docker tag \
"${ARTIFACTORY_HUB}/${container_name}:${CID_PREV}" \
"${container_name}:${CID_PREV}"
CMAKE_EXTRA="-D${pkgtype}_cache_from=${container_name}:${CID_PREV}"
break
fi
done
fi
cmake --version
test -d build && rm -rf build
mkdir -p build/container && cd build/container
eval time \
cmake -Dpackages_only=ON -DCMAKE_VERBOSE_MAKEFILE=ON ${CMAKE_EXTRA} \
-G Ninja ../..
time cmake --build . --target "${pkgtype}_container" -- -v
docker tag \
"${container_name}:${CI_COMMIT_SHA}" \
"${ARTIFACTORY_HUB}/${container_name}:${CI_COMMIT_SHA}"
time docker push "${ARTIFACTORY_HUB}/${container_name}:${CI_COMMIT_SHA}"

View File

@@ -0,0 +1,23 @@
#!/usr/bin/env sh
set -ex
pkgtype=$1
if [ "${pkgtype}" = "rpm" ] ; then
container_name="${RPM_CONTAINER_NAME}"
elif [ "${pkgtype}" = "dpkg" ] ; then
container_name="${DPKG_CONTAINER_NAME}"
else
echo "invalid package type"
exit 1
fi
time docker pull "${ARTIFACTORY_HUB}/${container_name}:${CI_COMMIT_SHA}"
docker tag \
"${ARTIFACTORY_HUB}/${container_name}:${CI_COMMIT_SHA}" \
"${container_name}:${CI_COMMIT_SHA}"
docker images
test -d build && rm -rf build
mkdir -p build/${pkgtype} && cd build/${pkgtype}
time cmake \
-Dpackages_only=ON -Dhave_package_container=ON -DCMAKE_VERBOSE_MAKEFILE=ON \
-G Ninja ../..
time cmake --build . --target ${pkgtype} -- -v

View File

@@ -0,0 +1,15 @@
#!/usr/bin/env sh
set -ex
# used as a before/setup script for docker steps in gitlab-ci
# expects to be run in standard alpine/dind image
echo $(nproc)
docker login -u rippled \
-p ${ARTIFACTORY_DEPLOY_KEY_RIPPLED} ${ARTIFACTORY_HUB}
apk add \
bash util-linux coreutils binutils grep \
make ninja cmake build-base gcc g++ abuild git \
python3 python3-dev
pip3 install awscli
# list curdir contents to build log:
ls -la

View File

@@ -0,0 +1,16 @@
#!/usr/bin/env sh
case ${CI_COMMIT_REF_NAME} in
develop)
export COMPONENT="nightly"
;;
release)
export COMPONENT="unstable"
;;
master)
export COMPONENT="stable"
;;
*)
export COMPONENT="_unknown_"
;;
esac

View File

@@ -0,0 +1,623 @@
#########################################################################
## ##
## gitlab CI definition for rippled build containers and distro ##
## packages (rpm and dpkg). ##
## ##
#########################################################################
# NOTE: these are sensible defaults for Ripple pipelines. These
# can be overridden by project or group variables as needed.
variables:
RPM_CONTAINER_NAME: "rippled-rpm-builder"
DPKG_CONTAINER_NAME: "rippled-dpkg-builder"
ARTIFACTORY_HOST: "artifactory.ops.ripple.com"
ARTIFACTORY_HUB: "${ARTIFACTORY_HOST}:6555"
GIT_SIGN_PUBKEYS_URL: "https://gitlab.ops.ripple.com/snippets/11/raw"
PUBLIC_REPO_ROOT: "https://repos.ripple.com/repos"
# also need to define this variable ONLY for the primary
# build/publish pipeline on the mainline repo:
# IS_PRIMARY_REPO = "true"
stages:
- build_containers
- build_packages
- sign_packages
- smoketest
- verify_sig
- tag_images
- push_to_test
- verify_from_test
- wait_approval_prod
- push_to_prod
- verify_from_prod
- get_final_hashes
.dind_template: &dind_param
before_script:
- . ./Builds/containers/gitlab-ci/docker_alpine_setup.sh
variables:
docker_driver: overlay2
image:
name: docker:latest
services:
# workaround for TLS issues - consider going
# back to unversioned `dind` when issues are resolved
- docker:18-dind
tags:
- docker-4xlarge
.only_primary_template: &only_primary
only:
refs:
- /^(master|release|develop)$/
variables:
- $IS_PRIMARY_REPO == "true"
.smoketest_local_template: &run_local_smoketest
tags:
- xlarge
script:
- . ./Builds/containers/gitlab-ci/smoketest.sh local
.smoketest_repo_template: &run_repo_smoketest
tags:
- xlarge
script:
- . ./Builds/containers/gitlab-ci/smoketest.sh repo
#########################################################################
## ##
## stage: build_containers ##
## ##
## build containers from docker definitions. These containers are ##
## subsequently used to build the rpm and deb packages. ##
## ##
#########################################################################
build_centos_container:
stage: build_containers
<<: *dind_param
cache:
key: containers
paths:
- .nih_c
script:
- . ./Builds/containers/gitlab-ci/build_container.sh rpm
build_ubuntu_container:
stage: build_containers
<<: *dind_param
cache:
key: containers
paths:
- .nih_c
script:
- . ./Builds/containers/gitlab-ci/build_container.sh dpkg
#########################################################################
## ##
## stage: build_packages ##
## ##
## build packages using containers from previous stage. ##
## ##
#########################################################################
rpm_build:
stage: build_packages
dependencies:
- build_centos_container
<<: *dind_param
artifacts:
paths:
- build/rpm/packages/
cache:
key: rpm
paths:
- .nih_c/pkgbuild
script:
- . ./Builds/containers/gitlab-ci/build_package.sh rpm
dpkg_build:
stage: build_packages
dependencies:
- build_ubuntu_container
<<: *dind_param
artifacts:
paths:
- build/dpkg/packages/
cache:
key: dpkg
paths:
- .nih_c/pkgbuild
script:
- . ./Builds/containers/gitlab-ci/build_package.sh dpkg
#########################################################################
## ##
## stage: sign_packages ##
## ##
## sign the packages produced by the previous stage. ##
## ##
#########################################################################
rpm_sign:
stage: sign_packages
dependencies:
- rpm_build
image:
name: centos:7
# <<: *dind_param
before_script:
- |
# Make sure GnuPG is installed
yum -y install gnupg rpm-sign
# checking GPG signing support
if [ -n "$GPG_KEY_B64" ]; then
echo "$GPG_KEY_B64"| base64 -d | gpg --batch --no-tty --allow-secret-key-import --import -
unset GPG_KEY_B64
export GPG_PASSPHRASE=$(echo $GPG_KEY_PASS_B64 | base64 -di)
unset GPG_KEY_PASS_B64
export GPG_KEYID=$(gpg --with-colon --list-secret-keys | head -n1 | cut -d : -f 5)
else
echo -e "\033[0;31m****** GPG signing disabled ******\033[0m"
exit 1
fi
artifacts:
paths:
- build/rpm/packages/
script:
- ls -alh build/rpm/packages
- . ./Builds/containers/gitlab-ci/sign_package.sh rpm
dpkg_sign:
stage: sign_packages
dependencies:
- dpkg_build
image:
name: ubuntu:19.04
# <<: *dind_param
before_script:
- |
# make sure we have GnuPG
apt update
apt install -y gpg dpkg-sig
# checking GPG signing support
if [ -n "$GPG_KEY_B64" ]; then
echo "$GPG_KEY_B64"| base64 -d | gpg --batch --no-tty --allow-secret-key-import --import -
unset GPG_KEY_B64
export GPG_PASSPHRASE=$(echo $GPG_KEY_PASS_B64 | base64 -di)
unset GPG_KEY_PASS_B64
export GPG_KEYID=$(gpg --with-colon --list-secret-keys | head -n1 | cut -d : -f 5)
else
echo -e "\033[0;31m****** GPG signing disabled ******\033[0m"
exit 1
fi
artifacts:
paths:
- build/dpkg/packages/
script:
- ls -alh build/dpkg/packages
- . ./Builds/containers/gitlab-ci/sign_package.sh dpkg
#########################################################################
## ##
## stage: smoketest ##
## ##
## install unsigned packages from previous step and run unit tests. ##
## ##
#########################################################################
centos_7_smoketest:
stage: smoketest
dependencies:
- rpm_sign
image:
name: centos:7
<<: *run_local_smoketest
fedora_29_smoketest:
stage: smoketest
dependencies:
- rpm_sign
image:
name: fedora:29
<<: *run_local_smoketest
fedora_28_smoketest:
stage: smoketest
dependencies:
- rpm_sign
image:
name: fedora:28
<<: *run_local_smoketest
fedora_27_smoketest:
stage: smoketest
dependencies:
- rpm_sign
image:
name: fedora:27
<<: *run_local_smoketest
## this one is not LTS, but we
## get some extra coverage by including it
## consider dropping it when 20.04 is ready
ubuntu_19_smoketest:
stage: smoketest
dependencies:
- dpkg_sign
image:
name: ubuntu:19.04
<<: *run_local_smoketest
ubuntu_18_smoketest:
stage: smoketest
dependencies:
- dpkg_sign
image:
name: ubuntu:18.04
<<: *run_local_smoketest
ubuntu_16_smoketest:
stage: smoketest
dependencies:
- dpkg_sign
image:
name: ubuntu:16.04
<<: *run_local_smoketest
debian_9_smoketest:
stage: smoketest
dependencies:
- dpkg_sign
image:
name: debian:9
<<: *run_local_smoketest
#########################################################################
## ##
## stage: verify_sig ##
## ##
## use git/gpg to verify that HEAD is signed by an approved ##
## committer. The whitelist of pubkeys is manually maintained ##
## and fetched from GIT_SIGN_PUBKEYS_URL (currently a snippet ##
## link). ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
verify_head_signed:
stage: verify_sig
image:
name: ubuntu:latest
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/verify_head_commit.sh
#########################################################################
## ##
## stage: tag_images ##
## ##
## apply rippled version tag to containers from previous stage. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
tag_bld_images:
stage: tag_images
variables:
docker_driver: overlay2
image:
name: docker:latest
services:
# workaround for TLS issues - consider going
# back to unversioned `dind` when issues are resolved
- docker:18-dind
tags:
- docker-large
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/tag_docker_image.sh
#########################################################################
## ##
## stage: push_to_test ##
## ##
## push packages to artifactory repositories (test) ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
push_test:
stage: push_to_test
variables:
DEB_REPO: "rippled-deb-test-mirror"
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: alpine:latest
artifacts:
paths:
- files.info
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "PUT" "."
#########################################################################
## ##
## stage: verify_from_test ##
## ##
## install/test packages from test repos. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
centos_7_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: centos:7
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_29_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: fedora:29
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_28_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: fedora:28
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_27_verify_repo_test:
stage: verify_from_test
variables:
RPM_REPO: "rippled-rpm-test-mirror"
image:
name: fedora:27
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_19_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "disco"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: ubuntu:19.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_18_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "bionic"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: ubuntu:18.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_16_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "xenial"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: ubuntu:16.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
debian_9_verify_repo_test:
stage: verify_from_test
variables:
DISTRO: "stretch"
DEB_REPO: "rippled-deb-test-mirror"
image:
name: debian:9
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
#########################################################################
## ##
## stage: wait_approval_prod ##
## ##
## wait for manual approval before proceeding to next stage ##
## which pushes to prod repo. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
wait_before_push_prod:
stage: wait_approval_prod
image:
name: alpine:latest
<<: *only_primary
script:
- echo "proceeding to next stage"
when: manual
allow_failure: false
#########################################################################
## ##
## stage: push_to_prod ##
## ##
## push packages to artifactory repositories (prod) ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
push_prod:
variables:
DEB_REPO: "rippled-deb"
RPM_REPO: "rippled-rpm"
image:
name: alpine:latest
stage: push_to_prod
artifacts:
paths:
- files.info
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "PUT" "."
#########################################################################
## ##
## stage: verify_from_prod ##
## ##
## install/test packages from prod repos. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
centos_7_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: centos:7
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_29_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: fedora:29
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_28_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: fedora:28
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
fedora_27_verify_repo_prod:
stage: verify_from_prod
variables:
RPM_REPO: "rippled-rpm"
image:
name: fedora:27
dependencies:
- rpm_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_19_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "disco"
DEB_REPO: "rippled-deb"
image:
name: ubuntu:19.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_18_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "bionic"
DEB_REPO: "rippled-deb"
image:
name: ubuntu:18.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
ubuntu_16_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "xenial"
DEB_REPO: "rippled-deb"
image:
name: ubuntu:16.04
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
debian_9_verify_repo_prod:
stage: verify_from_prod
variables:
DISTRO: "stretch"
DEB_REPO: "rippled-deb"
image:
name: debian:9
dependencies:
- dpkg_sign
<<: *only_primary
<<: *run_repo_smoketest
#########################################################################
## ##
## stage: get_final_hashes ##
## ##
## fetch final hashes from artifactory. ##
## ONLY RUNS FOR PRIMARY BRANCHES/REPO ##
## ##
#########################################################################
get_prod_hashes:
variables:
DEB_REPO: "rippled-deb"
RPM_REPO: "rippled-rpm"
image:
name: alpine:latest
stage: get_final_hashes
artifacts:
paths:
- files.info
dependencies:
- rpm_sign
- dpkg_sign
<<: *only_primary
script:
- . ./Builds/containers/gitlab-ci/push_to_artifactory.sh "GET" ".checksums"

View File

@@ -0,0 +1,91 @@
#!/usr/bin/env sh
set -ex
action=$1
filter=$2
. ./Builds/containers/gitlab-ci/get_component.sh
apk add curl jq coreutils util-linux
TOPDIR=$(pwd)
# DPKG
cd $TOPDIR
cd build/dpkg/packages
CURLARGS="-sk -X${action} -urippled:${ARTIFACTORY_DEPLOY_KEY_RIPPLED}"
RIPPLED_PKG=$(ls rippled_*.deb)
RIPPLED_DEV_PKG=$(ls rippled-dev_*.deb)
RIPPLED_DBG_PKG=$(ls rippled-dbgsym_*.deb)
# TODO - where to upload src tgz?
RIPPLED_SRC=$(ls rippled_*.orig.tar.gz)
DEB_MATRIX=";deb.component=${COMPONENT};deb.architecture=amd64"
for dist in stretch buster xenial bionic disco ; do
DEB_MATRIX="${DEB_MATRIX};deb.distribution=${dist}"
done
echo "{ \"debs\": {" > "${TOPDIR}/files.info"
for deb in ${RIPPLED_PKG} ${RIPPLED_DEV_PKG} ${RIPPLED_DBG_PKG} ; do
# first item doesn't get a comma separator
if [ $deb != $RIPPLED_PKG ] ; then
echo "," >> "${TOPDIR}/files.info"
fi
echo "\"${deb}\"": | tee -a "${TOPDIR}/files.info"
ca="${CURLARGS}"
if [ "${action}" = "PUT" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/${DEB_REPO}/pool/${COMPONENT}/${deb}${DEB_MATRIX}"
ca="${ca} -T${deb}"
elif [ "${action}" = "GET" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/api/storage/${DEB_REPO}/pool/${COMPONENT}/${deb}"
fi
echo "file info request url --> ${url}"
eval "curl ${ca} \"${url}\"" | jq -M "${filter}" | tee -a "${TOPDIR}/files.info"
done
echo "}," >> "${TOPDIR}/files.info"
# RPM
cd $TOPDIR
cd build/rpm/packages
RIPPLED_PKG=$(ls rippled-[0-9]*.x86_64.rpm)
RIPPLED_DEV_PKG=$(ls rippled-devel*.rpm)
RIPPLED_DBG_PKG=$(ls rippled-debuginfo*.rpm)
# TODO - where to upload src rpm ?
RIPPLED_SRC=$(ls rippled-[0-9]*.src.rpm)
echo "\"rpms\": {" >> "${TOPDIR}/files.info"
for rpm in ${RIPPLED_PKG} ${RIPPLED_DEV_PKG} ${RIPPLED_DBG_PKG} ; do
# first item doesn't get a comma separator
if [ $rpm != $RIPPLED_PKG ] ; then
echo "," >> "${TOPDIR}/files.info"
fi
echo "\"${rpm}\"": | tee -a "${TOPDIR}/files.info"
ca="${CURLARGS}"
if [ "${action}" = "PUT" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/${RPM_REPO}/${COMPONENT}/"
ca="${ca} -T${rpm}"
elif [ "${action}" = "GET" ] ; then
url="https://${ARTIFACTORY_HOST}/artifactory/api/storage/${RPM_REPO}/${COMPONENT}/${rpm}"
fi
echo "file info request url --> ${url}"
eval "curl ${ca} \"${url}\"" | jq -M "${filter}" | tee -a "${TOPDIR}/files.info"
done
echo "}}" >> "${TOPDIR}/files.info"
jq '.' "${TOPDIR}/files.info" > "${TOPDIR}/files.info.tmp"
mv "${TOPDIR}/files.info.tmp" "${TOPDIR}/files.info"
if [ ! -z "${SLACK_NOTIFY_URL}" ] && [ "${action}" = "GET" ] ; then
# extract files.info content to variable and sanitize so it can
# be interpolated into a slack text field below
finfo=$(cat ${TOPDIR}/files.info | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/\\n/g' | sed -E 's/"/\\"/g')
# try posting file info to slack.
# can add channel field to payload if the
# default channel is incorrect. Get rid of
# newlines in payload json since slack doesn't accept them
CONTENT=$(tr -d '[\n]' <<JSON
payload={
"username": "GitlabCI",
"text": "The package build for branch \`${CI_COMMIT_REF_NAME}\` is complete. File hashes are: \`\`\`${finfo}\`\`\`",
"icon_emoji": ":package:"}
JSON
)
curl ${SLACK_NOTIFY_URL} --data-urlencode "${CONTENT}"
fi

View File

@@ -0,0 +1,38 @@
#!/usr/bin/env bash
set -eo pipefail
sign_dpkg() {
if [ -n "${GPG_KEYID}" ]; then
dpkg-sig \
-g "--no-tty --digest-algo 'sha512' --passphrase '${GPG_PASSPHRASE}' --pinentry-mode=loopback" \
-k "${GPG_KEYID}" \
--sign builder \
"build/dpkg/packages/*.deb"
fi
}
sign_rpm() {
if [ -n "${GPG_KEYID}" ] ; then
find build/rpm/packages -name "*.rpm" -exec bash -c '
echo "yes" | setsid rpm \
--define "_gpg_name ${GPG_KEYID}" \
--define "_signature gpg" \
--define "__gpg_check_password_cmd /bin/true" \
--define "__gpg_sign_cmd %{__gpg} gpg --batch --no-armor --digest-algo 'sha512' --passphrase '${GPG_PASSPHRASE}' --no-secmem-warning -u '%{_gpg_name}' --sign --detach-sign --output %{__signature_filename} %{__plaintext_filename}" \
--addsign '{} \;
fi
}
case "${1}" in
dpkg)
sign_dpkg
;;
rpm)
sign_rpm
;;
*)
echo "Usage: ${0} (dpkg|rpm)"
;;
esac

View File

@@ -0,0 +1,99 @@
#!/usr/bin/env sh
set -ex
install_from=$1
use_private=${2:-0} # this option not currently needed by any CI scripts,
# reserved for possible future use
if [ "$use_private" -gt 0 ] ; then
REPO_ROOT="https://rippled:${ARTIFACTORY_DEPLOY_KEY_RIPPLED}@${ARTIFACTORY_HOST}/artifactory"
else
REPO_ROOT="${PUBLIC_REPO_ROOT}"
fi
. ./Builds/containers/gitlab-ci/get_component.sh
. /etc/os-release
case ${ID} in
ubuntu|debian)
pkgtype="dpkg"
;;
fedora|centos|rhel|scientific)
pkgtype="rpm"
;;
*)
echo "unrecognized distro!"
exit 1
;;
esac
# this script provides info variables about pkg version
. build/${pkgtype}/packages/build_vars
if [ "${pkgtype}" = "dpkg" ] ; then
# sometimes update fails and requires a cleanup
updateWithRetry()
{
if ! apt-get -y update ; then
rm -rvf /var/lib/apt/lists/*
apt-get -y clean
apt-get -y update
fi
}
if [ "${install_from}" = "repo" ] ; then
apt-get -y upgrade
updateWithRetry
apt-get -y install apt apt-transport-https ca-certificates coreutils util-linux wget gnupg
wget -q -O - "${REPO_ROOT}/api/gpg/key/public" | apt-key add -
echo "deb ${REPO_ROOT}/${DEB_REPO} ${DISTRO} ${COMPONENT}" >> /etc/apt/sources.list
updateWithRetry
# uncomment this next line if you want to see the available package versions
# apt-cache policy rippled
apt-get -y install rippled=${dpkg_full_version}
elif [ "${install_from}" = "local" ] ; then
# cached pkg install
updateWithRetry
apt-get -y install libprotobuf-dev libssl-dev
rm -f build/dpkg/packages/rippled-dbgsym*.*
dpkg --no-debsig -i build/dpkg/packages/*.deb
else
echo "unrecognized pkg source!"
exit 1
fi
else
yum -y update
if [ "${install_from}" = "repo" ] ; then
yum -y install yum-utils coreutils util-linux
REPOFILE="/etc/yum.repos.d/artifactory.repo"
echo "[Artifactory]" > ${REPOFILE}
echo "name=Artifactory" >> ${REPOFILE}
echo "baseurl=${REPO_ROOT}/${RPM_REPO}/${COMPONENT}/" >> ${REPOFILE}
echo "enabled=1" >> ${REPOFILE}
echo "gpgcheck=0" >> ${REPOFILE}
echo "gpgkey=${REPO_ROOT}/${RPM_REPO}/${COMPONENT}/repodata/repomd.xml.key" >> ${REPOFILE}
echo "repo_gpgcheck=1" >> ${REPOFILE}
yum -y update
# uncomment this next line if you want to see the available package versions
# yum --showduplicates list rippled
yum -y install ${rpm_version_release}
elif [ "${install_from}" = "local" ] ; then
# cached pkg install
yum install -y yum-utils openssl-static zlib-static
rm -f build/rpm/packages/rippled-debug*.rpm
rm -f build/rpm/packages/*.src.rpm
rpm -i build/rpm/packages/*.rpm
else
echo "unrecognized pkg source!"
exit 1
fi
fi
# verify installed version
INSTALLED=$(/opt/ripple/bin/rippled --version | awk '{print $NF}')
if [ "${rippled_version}" != "${INSTALLED}" ] ; then
echo "INSTALLED version ${INSTALLED} does not match ${rippled_version}"
exit 1
fi
# run unit tests
/opt/ripple/bin/rippled --unittest --unittest-jobs $(nproc)
/opt/ripple/bin/validator-keys --unittest

View File

@@ -0,0 +1,20 @@
#!/usr/bin/env sh
set -ex
docker login -u rippled \
-p ${ARTIFACTORY_DEPLOY_KEY_RIPPLED} "${ARTIFACTORY_HUB}"
# this gives us rippled_version :
source build/rpm/packages/build_vars
docker pull "${ARTIFACTORY_HUB}/${RPM_CONTAINER_NAME}:${CI_COMMIT_SHA}"
docker pull "${ARTIFACTORY_HUB}/${DPKG_CONTAINER_NAME}:${CI_COMMIT_SHA}"
# tag/push two labels...one using the current rippled version and one just using "latest"
for label in ${rippled_version} latest ; do
docker tag \
"${ARTIFACTORY_HUB}/${RPM_CONTAINER_NAME}:${CI_COMMIT_SHA}" \
"${ARTIFACTORY_HUB}/${RPM_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
docker tag \
"${ARTIFACTORY_HUB}/${DPKG_CONTAINER_NAME}:${CI_COMMIT_SHA}" \
"${ARTIFACTORY_HUB}/${DPKG_CONTAINER_NAME}:${label}_${CI_COMMIT_REF_SLUG}"
docker push "${ARTIFACTORY_HUB}/${RPM_CONTAINER_NAME}"
docker push "${ARTIFACTORY_HUB}/${DPKG_CONTAINER_NAME}"
done

View File

@@ -0,0 +1,16 @@
#!/usr/bin/env sh
set -ex
apt -y update
apt -y install software-properties-common curl git gnupg
curl -sk -o rippled-pubkeys.txt "${GIT_SIGN_PUBKEYS_URL}"
gpg --import rippled-pubkeys.txt
if git verify-commit HEAD; then
echo "git commit signature check passed"
else
echo "git commit signature check failed"
git log -n 5 --color \
--pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an> [%G?]%Creset' \
--abbrev-commit
exit 1
fi

View File

@@ -0,0 +1,90 @@
#!/usr/bin/env bash
set -ex
source update_sources.sh
# Build the dpkg
# dpkg uses - as a separator, so we need to change our -bN versions to tilde
RIPPLED_DPKG_VERSION=$(echo "${RIPPLED_VERSION}" | sed 's!-!~!g')
# TODO - decide how to handle the trailing/release
# version here (hardcoded to 1). Does it ever need to change?
RIPPLED_DPKG_FULL_VERSION="${RIPPLED_DPKG_VERSION}-1"
cd /opt/rippled_bld/pkg/rippled
if [[ -n $(git status --porcelain) ]]; then
git status
error "Unstaged changes in this repo - please commit first"
fi
git archive --format tar.gz --prefix rippled-${RIPPLED_DPKG_VERSION}/ -o ../rippled-${RIPPLED_DPKG_VERSION}.tar.gz HEAD
cd ..
# dpkg debmake would normally create this link, but we do it manually
ln -s ./rippled-${RIPPLED_DPKG_VERSION}.tar.gz rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz
tar xvf rippled-${RIPPLED_DPKG_VERSION}.tar.gz
cd rippled-${RIPPLED_DPKG_VERSION}
cp -pr ../debian .
# dpkg requires a changelog. We don't currently maintain
# a usable one, so let's just fake it with our current version
# TODO : not sure if the "unstable" will need to change for
# release packages (?)
NOWSTR=$(TZ=UTC date -R)
cat << CHANGELOG > ./debian/changelog
rippled (${RIPPLED_DPKG_FULL_VERSION}) unstable; urgency=low
* see RELEASENOTES
-- Ripple Labs Inc. <support@ripple.com> ${NOWSTR}
CHANGELOG
# PATH must be preserved for our more modern cmake in /opt/local
# TODO : consider allowing lintian to run in future ?
export DH_BUILD_DDEBS=1
export CC=gcc-8
export CXX=g++-8
debuild --no-lintian --preserve-envvar PATH --preserve-env -us -uc
rc=$?; if [[ $rc != 0 ]]; then
error "error building dpkg"
fi
cd ..
ls -latr
# copy artifacts
cp rippled-dev_${RIPPLED_DPKG_FULL_VERSION}_amd64.deb ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.deb ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}.dsc ${PKG_OUTDIR}
# dbgsym suffix is ddeb under newer debuild, but just deb under earlier
cp rippled-dbgsym_${RIPPLED_DPKG_FULL_VERSION}_amd64.* ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.changes ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.build ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz ${PKG_OUTDIR}
cp rippled_${RIPPLED_DPKG_FULL_VERSION}.debian.tar.xz ${PKG_OUTDIR}
# buildinfo is only generated by later version of debuild
if [ -e rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.buildinfo ] ; then
cp rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.buildinfo ${PKG_OUTDIR}
fi
cat rippled_${RIPPLED_DPKG_FULL_VERSION}_amd64.changes
# extract the text in the .changes file that appears between
# Checksums-Sha256: ...
# and
# Files: ...
awk '/Checksums-Sha256:/{hit=1;next}/Files:/{hit=0}hit' \
rippled_${RIPPLED_DPKG_VERSION}-1_amd64.changes | \
sed -E 's!^[[:space:]]+!!' > shasums
DEB_SHA256=$(cat shasums | \
grep "rippled_${RIPPLED_DPKG_VERSION}-1_amd64.deb" | cut -d " " -f 1)
DBG_SHA256=$(cat shasums | \
grep "rippled-dbgsym_${RIPPLED_DPKG_VERSION}-1_amd64.*" | cut -d " " -f 1)
DEV_SHA256=$(cat shasums | \
grep "rippled-dev_${RIPPLED_DPKG_VERSION}-1_amd64.deb" | cut -d " " -f 1)
SRC_SHA256=$(cat shasums | \
grep "rippled_${RIPPLED_DPKG_VERSION}.orig.tar.gz" | cut -d " " -f 1)
echo "deb_sha256=${DEB_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "dbg_sha256=${DBG_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "dev_sha256=${DEV_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "src_sha256=${SRC_SHA256}" >> ${PKG_OUTDIR}/build_vars
echo "rippled_version=${RIPPLED_VERSION}" >> ${PKG_OUTDIR}/build_vars
echo "dpkg_version=${RIPPLED_DPKG_VERSION}" >> ${PKG_OUTDIR}/build_vars
echo "dpkg_full_version=${RIPPLED_DPKG_FULL_VERSION}" >> ${PKG_OUTDIR}/build_vars

View File

@@ -0,0 +1,3 @@
rippled daemon
-- Mike Ellery <mellery451@gmail.com> Tue, 04 Dec 2018 18:19:03 +0000

View File

@@ -0,0 +1 @@
9

View File

@@ -0,0 +1,3 @@
opt/ripple/etc/rippled.cfg
opt/ripple/etc/validators.txt
etc/logrotate.d/rippled

View File

@@ -0,0 +1,21 @@
Source: rippled
Section: misc
Priority: extra
Maintainer: Ripple Labs Inc. <support@ripple.com>
Build-Depends: cmake, debhelper (>=9), libprotobuf-dev, libssl-dev, zlib1g-dev, dh-systemd, ninja-build
Standards-Version: 3.9.7
Homepage: http://ripple.com/
Package: rippled
Architecture: any
Multi-Arch: foreign
Depends: ${misc:Depends}, ${shlibs:Depends}
Description: rippled daemon
Package: rippled-dev
Section: devel
Recommends: rippled (= ${binary:Version})
Architecture: any
Multi-Arch: same
Depends: ${misc:Depends}, ${shlibs:Depends}, libprotobuf-dev, libssl-dev
Description: development files for applications using xrpl core library (serialize + sign)

View File

@@ -0,0 +1,86 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: rippled
Source: https://github.com/ripple/rippled
Files: *
Copyright: 2012-2019 Ripple Labs Inc.
License: __UNKNOWN__
The accompanying files under various copyrights.
Copyright (c) 2012, 2013, 2014 Ripple Labs Inc.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
The accompanying files incorporate work covered by the following copyright
and previous license notice:
Copyright (c) 2011 Arthur Britto, David Schwartz, Jed McCaleb,
Vinnie Falco, Bob Way, Eric Lombrozo, Nikolaos D. Bougalis, Howard Hinnant
Some code from Raw Material Software, Ltd., provided under the terms of the
ISC License. See the corresponding source files for more details.
Copyright (c) 2013 - Raw Material Software Ltd.
Please visit http://www.juce.com
Some code from ASIO examples:
// Copyright (c) 2003-2011 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
Some code from Bitcoin:
// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2011 The Bitcoin developers
// Distributed under the MIT/X11 software license, see the accompanying
// file license.txt or http://www.opensource.org/licenses/mit-license.php.
Some code from Tom Wu:
This software is covered under the following copyright:
/*
* Copyright (c) 2003-2005 Tom Wu
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining
* a copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND,
* EXPRESS, IMPLIED OR OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY
* WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
*
* IN NO EVENT SHALL TOM WU BE LIABLE FOR ANY SPECIAL, INCIDENTAL,
* INDIRECT OR CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER
* RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER OR NOT ADVISED OF
* THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT
* OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*
* In addition, the following condition applies:
*
* All redistributions must retain an intact copy of this copyright notice
* and disclaimer.
*/
Address all questions regarding this license to:
Tom Wu
tjw@cs.Stanford.EDU

View File

@@ -0,0 +1,3 @@
/var/log/rippled/
/var/lib/rippled/
/etc/systemd/system/rippled.service.d/

View File

@@ -0,0 +1,3 @@
README.md
LICENSE
RELEASENOTES.md

View File

@@ -0,0 +1,3 @@
opt/ripple/include
opt/ripple/lib/*.a
opt/ripple/lib/cmake/ripple

View File

@@ -0,0 +1,7 @@
opt/ripple/bin/rippled
opt/ripple/bin/validator-keys
opt/ripple/bin/update-rippled.sh
opt/ripple/etc/rippled.cfg
opt/ripple/etc/validators.txt
opt/ripple/etc/update-rippled-cron
etc/logrotate.d/rippled

View File

@@ -0,0 +1,3 @@
opt/ripple/etc/rippled.cfg etc/opt/ripple/rippled.cfg
opt/ripple/etc/validators.txt etc/opt/ripple/validators.txt
opt/ripple/bin/rippled usr/local/bin/rippled

View File

@@ -0,0 +1,35 @@
#!/bin/sh
set -e
USER_NAME=rippled
GROUP_NAME=rippled
case "$1" in
configure)
id -u $USER_NAME >/dev/null 2>&1 || \
adduser --system --quiet \
--home /nonexistent --no-create-home \
--disabled-password \
--group "$GROUP_NAME"
chown -R $USER_NAME:$GROUP_NAME /var/log/rippled/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled/
chown -R $USER_NAME:$GROUP_NAME /opt/ripple
chmod 755 /var/log/rippled/
chmod 755 /var/lib/rippled/
chmod 644 /opt/ripple/etc/update-rippled-cron
chmod 644 /etc/logrotate.d/rippled
chown -R root:$GROUP_NAME /opt/ripple/etc/update-rippled-cron
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0

View File

@@ -0,0 +1,17 @@
#!/bin/sh
set -e
case "$1" in
purge|remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
;;
*)
echo "postrm called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0

View File

@@ -0,0 +1,20 @@
#!/bin/sh
set -e
case "$1" in
install|upgrade)
;;
abort-upgrade)
;;
*)
echo "preinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0

View File

@@ -0,0 +1,20 @@
#!/bin/sh
set -e
case "$1" in
remove|upgrade|deconfigure)
;;
failed-upgrade)
;;
*)
echo "prerm called with unknown argument \`$1'" >&2
exit 1
;;
esac
#DEBHELPER#
exit 0

View File

@@ -0,0 +1,38 @@
#!/usr/bin/make -f
export DH_VERBOSE = 1
export DH_OPTIONS = -v
# debuild sets some warnings that don't work well
# for our current build, so try to remove those flags here:
export CFLAGS:=$(subst -Wformat,,$(CFLAGS))
export CFLAGS:=$(subst -Werror=format-security,,$(CFLAGS))
export CXXFLAGS:=$(subst -Wformat,,$(CXXFLAGS))
export CXXFLAGS:=$(subst -Werror=format-security,,$(CXXFLAGS))
%:
dh $@ --with systemd
override_dh_auto_configure:
env
rm -rf bld
mkdir -p bld
cd bld && \
cmake .. -G Ninja \
-DCMAKE_INSTALL_PREFIX=/opt/ripple \
-DCMAKE_BUILD_TYPE=Release \
-Dstatic=true \
-Dlocal_protobuf=ON \
-Dvalidator_keys=ON \
-DCMAKE_VERBOSE_MAKEFILE=ON
override_dh_auto_build:
cd bld && \
cmake --build . --target rippled --target validator-keys --parallel -- -v
override_dh_auto_install:
cd bld && DESTDIR=../debian/tmp cmake --build . --target install -- -v
install -D bld/validator-keys/validator-keys debian/tmp/opt/ripple/bin/validator-keys
install -D Builds/containers/shared/update-rippled.sh debian/tmp/opt/ripple/bin/update-rippled.sh
install -D Builds/containers/shared/update-rippled-cron debian/tmp/opt/ripple/etc/update-rippled-cron
install -D Builds/containers/shared/rippled-logrotate debian/tmp/etc/logrotate.d/rippled
rm -rf bld
rm -rf bld_vl

View File

@@ -0,0 +1 @@
3.0 (quilt)

View File

@@ -0,0 +1,2 @@
#abort-on-upstream-changes
#unapply-patches

View File

@@ -0,0 +1 @@
enable rippled.service

View File

@@ -0,0 +1,73 @@
#!/usr/bin/env bash
set -ex
source update_sources.sh
# Build the rpm
IFS='-' read -r RIPPLED_RPM_VERSION RELEASE <<< "$RIPPLED_VERSION"
export RIPPLED_RPM_VERSION
RPM_RELEASE=${RPM_RELEASE-1}
# post-release version
if [ "hf" = "$(echo "$RELEASE" | cut -c -2)" ]; then
RPM_RELEASE="${RPM_RELEASE}.${RELEASE}"
# pre-release version (-b or -rc)
elif [[ $RELEASE ]]; then
RPM_RELEASE="0.${RPM_RELEASE}.${RELEASE}"
fi
export RPM_RELEASE
if [[ $RPM_PATCH ]]; then
RPM_PATCH=".${RPM_PATCH}"
export RPM_PATCH
fi
cd /opt/rippled_bld/pkg/rippled
if [[ -n $(git status --porcelain) ]]; then
git status
error "Unstaged changes in this repo - please commit first"
fi
git archive --format tar.gz --prefix rippled/ -o ../rpmbuild/SOURCES/rippled.tar.gz HEAD
# TODO include validator-keys sources
cd ..
source /opt/rh/devtoolset-8/enable
rpmbuild --define "_topdir ${PWD}/rpmbuild" -ba rippled.spec
rc=$?; if [[ $rc != 0 ]]; then
error "error building rpm"
fi
# Make a tar of the rpm and source rpm
RPM_VERSION_RELEASE=$(rpm -qp --qf='%{NAME}-%{VERSION}-%{RELEASE}' ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm)
tar_file=$RPM_VERSION_RELEASE.tar.gz
cp ./rpmbuild/RPMS/x86_64/* ${PKG_OUTDIR}
cp ./rpmbuild/SRPMS/* ${PKG_OUTDIR}
RPM_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm 2>/dev/null)
DBG_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-debuginfo*.rpm 2>/dev/null)
DEV_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/RPMS/x86_64/rippled-devel*.rpm 2>/dev/null)
SRC_MD5SUM=$(rpm -q --queryformat '%{SIGMD5}\n' -p ./rpmbuild/SRPMS/*.rpm 2>/dev/null)
RPM_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-[0-9]*.rpm | awk '{ print $1}')"
DBG_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-debuginfo*.rpm | awk '{ print $1}')"
DEV_SHA256="$(sha256sum ./rpmbuild/RPMS/x86_64/rippled-devel*.rpm | awk '{ print $1}')"
SRC_SHA256="$(sha256sum ./rpmbuild/SRPMS/*.rpm | awk '{ print $1}')"
echo "rpm_md5sum=$RPM_MD5SUM" > ${PKG_OUTDIR}/build_vars
echo "dbg_md5sum=$DBG_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "dev_md5sum=$DEV_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "src_md5sum=$SRC_MD5SUM" >> ${PKG_OUTDIR}/build_vars
echo "rpm_sha256=$RPM_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "dbg_sha256=$DBG_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "dev_sha256=$DEV_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "src_sha256=$SRC_SHA256" >> ${PKG_OUTDIR}/build_vars
echo "rippled_version=$RIPPLED_VERSION" >> ${PKG_OUTDIR}/build_vars
echo "rpm_version=$RIPPLED_RPM_VERSION" >> ${PKG_OUTDIR}/build_vars
echo "rpm_file_name=$tar_file" >> ${PKG_OUTDIR}/build_vars
echo "rpm_version_release=$RPM_VERSION_RELEASE" >> ${PKG_OUTDIR}/build_vars

View File

@@ -0,0 +1,110 @@
%define rippled_version %(echo $RIPPLED_RPM_VERSION)
%define rpm_release %(echo $RPM_RELEASE)
%define rpm_patch %(echo $RPM_PATCH)
%define _prefix /opt/ripple
Name: rippled
# Dashes in Version extensions must be converted to underscores
Version: %{rippled_version}
Release: %{rpm_release}%{?dist}%{rpm_patch}
Summary: rippled daemon
License: MIT
URL: http://ripple.com/
Source0: rippled.tar.gz
BuildRequires: protobuf-static openssl-static cmake zlib-static ninja-build
%description
rippled
%package devel
Summary: Files for development of applications using xrpl core library
Group: Development/Libraries
Requires: openssl-static, zlib-static
%description devel
core library for development of standalone applications that sign transactions.
%prep
%setup -c -n rippled
%build
cd rippled
mkdir -p bld.release
cd bld.release
cmake .. -G Ninja -DCMAKE_INSTALL_PREFIX=%{_prefix} -DCMAKE_BUILD_TYPE=Release -Dstatic=true -DCMAKE_VERBOSE_MAKEFILE=ON -Dlocal_protobuf=ON -Dvalidator_keys=ON
cmake --build . --parallel --target rippled --target validator-keys -- -v
%pre
test -e /etc/pki/tls || { mkdir -p /etc/pki; ln -s /usr/lib/ssl /etc/pki/tls; }
%install
rm -rf $RPM_BUILD_ROOT
DESTDIR=$RPM_BUILD_ROOT cmake --build rippled/bld.release --target install -- -v
install -d ${RPM_BUILD_ROOT}/etc/opt/ripple
install -d ${RPM_BUILD_ROOT}/usr/local/bin
ln -s %{_prefix}/etc/rippled.cfg ${RPM_BUILD_ROOT}/etc/opt/ripple/rippled.cfg
ln -s %{_prefix}/etc/validators.txt ${RPM_BUILD_ROOT}/etc/opt/ripple/validators.txt
ln -s %{_prefix}/bin/rippled ${RPM_BUILD_ROOT}/usr/local/bin/rippled
install -D rippled/bld.release/validator-keys/validator-keys ${RPM_BUILD_ROOT}%{_bindir}/validator-keys
install -D ./rippled/Builds/containers/shared/rippled.service ${RPM_BUILD_ROOT}/usr/lib/systemd/system/rippled.service
install -D ./rippled/Builds/containers/packaging/rpm/50-rippled.preset ${RPM_BUILD_ROOT}/usr/lib/systemd/system-preset/50-rippled.preset
install -D ./rippled/Builds/containers/shared/update-rippled.sh ${RPM_BUILD_ROOT}%{_bindir}/update-rippled.sh
install -D ./rippled/Builds/containers/shared/update-rippled-cron ${RPM_BUILD_ROOT}%{_prefix}/etc/update-rippled-cron
install -D ./rippled/Builds/containers/shared/rippled-logrotate ${RPM_BUILD_ROOT}/etc/logrotate.d/rippled
install -d $RPM_BUILD_ROOT/var/log/rippled
install -d $RPM_BUILD_ROOT/var/lib/rippled
%post
USER_NAME=rippled
GROUP_NAME=rippled
getent passwd $USER_NAME &>/dev/null || useradd $USER_NAME
getent group $GROUP_NAME &>/dev/null || groupadd $GROUP_NAME
chown -R $USER_NAME:$GROUP_NAME /var/log/rippled/
chown -R $USER_NAME:$GROUP_NAME /var/lib/rippled/
chown -R $USER_NAME:$GROUP_NAME %{_prefix}/
chmod 755 /var/log/rippled/
chmod 755 /var/lib/rippled/
chmod 644 %{_prefix}/etc/update-rippled-cron
chmod 644 /etc/logrotate.d/rippled
chown -R root:$GROUP_NAME %{_prefix}/etc/update-rippled-cron
%files
%doc rippled/README.md rippled/LICENSE
%{_bindir}/rippled
/usr/local/bin/rippled
%{_bindir}/update-rippled.sh
%{_prefix}/etc/update-rippled-cron
%{_bindir}/validator-keys
%config(noreplace) %{_prefix}/etc/rippled.cfg
%config(noreplace) /etc/opt/ripple/rippled.cfg
%config(noreplace) %{_prefix}/etc/validators.txt
%config(noreplace) /etc/opt/ripple/validators.txt
%config(noreplace) /etc/logrotate.d/rippled
%config(noreplace) /usr/lib/systemd/system/rippled.service
%config(noreplace) /usr/lib/systemd/system-preset/50-rippled.preset
%dir /var/log/rippled/
%dir /var/lib/rippled/
%files devel
%{_prefix}/include
%{_prefix}/lib/*.a
%{_prefix}/lib/cmake/ripple
%changelog
* Wed Aug 28 2019 Mike Ellery <mellery451@gmail.com>
- Switch to subproject build for validator-keys
* Wed May 15 2019 Mike Ellery <mellery451@gmail.com>
- Make validator-keys use local rippled build for core lib
* Wed Aug 01 2018 Mike Ellery <mellery451@gmail.com>
- add devel package for signing library
* Thu Jun 02 2016 Brandon Wilson <bwilson@ripple.com>
- Install validators.txt


@@ -0,0 +1,78 @@
#!/usr/bin/env bash
set -ex
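# build_boost <version> <do_link>: fetch and build the given boost release under
# /opt/local (via install_boost.sh); when <do_link> is true, also symlink
# /opt/local/boost to the versioned directory.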
function build_boost()
{
local boost_ver=$1
local do_link=$2
local boost_path=$(echo "${boost_ver}" | sed -e 's!\.!_!g')
mkdir -p /opt/local
cd /opt/local
BOOST_ROOT=/opt/local/boost_${boost_path}
BOOST_URL="https://dl.bintray.com/boostorg/release/${boost_ver}/source/boost_${boost_path}.tar.bz2"
BOOST_BUILD_ALL=true
. /tmp/install_boost.sh
if [ "$do_link" = true ] ; then
ln -s ./boost_${boost_path} boost
fi
}
build_boost "1.70.0" true
# installed in opt, so won't be used
# unless specified by OPENSSL_ROOT_DIR
cd /tmp
OPENSSL_VER=1.1.1
wget https://www.openssl.org/source/openssl-${OPENSSL_VER}.tar.gz
tar xf openssl-${OPENSSL_VER}.tar.gz
cd openssl-${OPENSSL_VER}
# NOTE: add -g to the end of the following line if we want debug symbols for openssl
SSLDIR=$(openssl version -d | cut -d: -f2 | tr -d [:space:]\")
./config -fPIC --prefix=/opt/local/openssl --openssldir=${SSLDIR} zlib shared
make -j$(nproc)
make install
cd ..
rm -f openssl-${OPENSSL_VER}.tar.gz
rm -rf openssl-${OPENSSL_VER}
LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-}:/opt/local/openssl/lib /opt/local/openssl/bin/openssl version -a
if [ "${CI_USE}" = true ] ; then
cd /tmp
wget https://github.com/doxygen/doxygen/archive/Release_1_8_16.tar.gz
tar xf Release_1_8_16.tar.gz
cd doxygen-Release_1_8_16
mkdir build
cd build
cmake -G "Unix Makefiles" ..
make -j$(nproc)
make install
cd ../..
rm -f Release_1_8_16.tar.gz
rm -rf doxygen-Release_1_8_16
mkdir -p /opt/plantuml
wget -O /opt/plantuml/plantuml.jar https://downloads.sourceforge.net/project/plantuml/plantuml.jar
cd /tmp
wget https://github.com/linux-test-project/lcov/releases/download/v1.14/lcov-1.14.tar.gz
tar xfz lcov-1.14.tar.gz
cd lcov-1.14
make install PREFIX=/usr/local
cd ..
rm -r lcov-1.14 lcov-1.14.tar.gz
cd /tmp
wget https://github.com/ccache/ccache/releases/download/v3.7.3/ccache-3.7.3.tar.gz
tar xf ccache-3.7.3.tar.gz
cd ccache-3.7.3
./configure --prefix=/usr/local
make
make install
cd ..
rm -f ccache-3.7.3.tar.gz
rm -rf ccache-3.7.3
pip install requests
pip install https://github.com/codecov/codecov-python/archive/master.zip
fi


@@ -0,0 +1,80 @@
#!/usr/bin/env bash
# Assumptions:
# 1) BOOST_ROOT and BOOST_URL are already defined,
# and contain valid values.
# 2) The last namepart of BOOST_ROOT matches the
# folder name internal to boost's .tar.gz
# When testing you can force a boost build by clearing travis caches:
# https://travis-ci.org/ripple/rippled/caches
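# Example of the expected environment (illustrative values, mirroring build_deps.sh above):
#   BOOST_ROOT=/opt/local/boost_1_70_0
#   BOOST_URL="https://dl.bintray.com/boostorg/release/1.70.0/source/boost_1_70_0.tar.bz2"
#   . /tmp/install_boost.sh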
set -exu
odir=$(pwd)
if [[ -d "$BOOST_ROOT/lib" || -d "${BOOST_ROOT}/stage/lib" ]] ; then
echo "Using cached boost at $BOOST_ROOT"
exit
fi
#fetch/unpack:
fn=$(basename -- "$BOOST_URL")
ext="${fn##*.}"
wget --quiet $BOOST_URL -O /tmp/boost.tar.${ext}
cd $(dirname $BOOST_ROOT)
rm -fr ${BOOST_ROOT}
tar xf /tmp/boost.tar.${ext}
cd $BOOST_ROOT
BLDARGS=()
if [[ ${BOOST_BUILD_ALL:-false} == "true" ]]; then
# we never need boost-python...so even for ALL
# option we can skip it
BLDARGS+=(--without-python)
else
BLDARGS+=(--with-chrono)
BLDARGS+=(--with-context)
BLDARGS+=(--with-coroutine)
BLDARGS+=(--with-date_time)
BLDARGS+=(--with-filesystem)
BLDARGS+=(--with-program_options)
BLDARGS+=(--with-regex)
BLDARGS+=(--with-serialization)
BLDARGS+=(--with-system)
BLDARGS+=(--with-atomic)
BLDARGS+=(--with-thread)
fi
BLDARGS+=(-j$((2*${NUM_PROCESSORS:-2})))
BLDARGS+=(--prefix=${BOOST_ROOT}/_INSTALLED_)
BLDARGS+=(-d0) # suppress messages/output
if [[ -z ${COMSPEC:-} ]]; then
if [[ "$(uname)" == "Darwin" ]] ; then
BLDARGS+=(cxxflags="-std=c++14 -fvisibility=default")
else
BLDARGS+=(cxxflags="-std=c++14")
BLDARGS+=(runtime-link="static,shared")
fi
BLDARGS+=(--layout=tagged)
./bootstrap.sh
./b2 "${BLDARGS[@]}" stage
./b2 "${BLDARGS[@]}" install
else
BLDARGS+=(runtime-link="static,shared")
BLDARGS+=(--layout=versioned)
BLDARGS+=(--toolset="msvc-14.1")
BLDARGS+=(address-model=64)
BLDARGS+=(architecture=x86)
BLDARGS+=(link=static)
BLDARGS+=(threading=multi)
cmd /E:ON /D /S /C"bootstrap.bat"
./b2.exe "${BLDARGS[@]}" stage
./b2.exe "${BLDARGS[@]}" install
fi
if [[ ${CI:-false} == "true" ]]; then
# save some disk space...these are mostly
# obj files and don't need to be kept in CI contexts
rm -rf bin.v2
fi
cd $odir


@@ -0,0 +1,34 @@
#!/usr/bin/env bash
set -e
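# Usage: install_cmake.sh <maj.min.rel> [install-root]
# Example invocation (matching the ubuntu Dockerfile later in this changeset):
#   /tmp/install_cmake.sh 3.15.2 /opt/local/cmake-3.15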
IFS=. read cm_maj cm_min cm_rel <<<"$1"
: ${cm_rel:=0} # default the release component to 0 when not supplied
CMAKE_ROOT=${2:-"${HOME}/cmake"}
function cmake_version ()
{
if [[ -d ${CMAKE_ROOT} ]] ; then
local perms=$(test $(uname) = "Linux" && echo "/111" || echo "+111")
local installed=$(find ${CMAKE_ROOT} -perm ${perms} -type f -name cmake)
if [[ "${installed}" != "" ]] ; then
echo "$(${installed} --version | head -1)"
fi
fi
}
installed=$(cmake_version)
if [[ "${installed}" != "" && ${installed} =~ ${cm_maj}.${cm_min}.${cm_rel} ]] ; then
echo "cmake already installed: ${installed}"
exit
fi
pkgname="cmake-${cm_maj}.${cm_min}.${cm_rel}-$(uname)-x86_64.tar.gz"
tmppkg="/tmp/cmake.tar.gz"
wget --quiet https://cmake.org/files/v${cm_maj}.${cm_min}/${pkgname} -O ${tmppkg}
mkdir -p ${CMAKE_ROOT}
cd ${CMAKE_ROOT}
tar --strip-components 1 -xf ${tmppkg}
rm -f ${tmppkg}
echo "installed: $(cmake_version)"


@@ -0,0 +1,15 @@
/var/log/rippled/*.log {
daily
minsize 200M
rotate 7
nocreate
missingok
notifempty
compress
compresscmd /usr/bin/nice
compressoptions -n19 ionice -c3 gzip
compressext .gz
postrotate
/opt/ripple/bin/rippled --conf /opt/ripple/etc/rippled.cfg logrotate
endscript
}


@@ -0,0 +1,15 @@
[Unit]
Description=Ripple Daemon
[Service]
Type=simple
ExecStart=/opt/ripple/bin/rippled --net --silent --conf /etc/opt/ripple/rippled.cfg
# Default KillSignal can be used if/when rippled handles SIGTERM
KillSignal=SIGINT
Restart=no
User=rippled
Group=rippled
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,10 @@
# For automatic updates, symlink this file to /etc/cron.d/
# Do not remove the newline at the end of this cron script
# bash required for use of RANDOM below.
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
# invoke check/update script with random delay up to 59 mins
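# ($RANDOM is 0-32767, so RANDOM*3540/32768 yields a delay of 0-3539 seconds)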
0 * * * * root sleep $((RANDOM*3540/32768)) && /opt/ripple/bin/update-rippled.sh


@@ -0,0 +1,65 @@
#!/usr/bin/env bash
# auto-update script for rippled daemon
# Check for sudo/root permissions
if [[ $(id -u) -ne 0 ]] ; then
echo "This update script must be run as root or sudo"
exit 1
fi
LOCKDIR=/tmp/rippleupdate.lock
UPDATELOG=/var/log/rippled/update.log
function cleanup {
# If this directory isn't removed, future updates will fail.
rmdir $LOCKDIR
}
# Use mkdir to check if the process is already running; mkdir is atomic, unlike file creation.
if ! mkdir $LOCKDIR 2>/dev/null; then
echo $(date -u) "lockdir exists - won't proceed." >> $UPDATELOG
exit 1
fi
trap cleanup EXIT
source /etc/os-release
can_update=false
if [[ "$ID" == "ubuntu" || "$ID" == "debian" ]] ; then
# Silent update
apt-get update -qq
# The next line is an "awk"ward way to check if the package needs to be updated.
RIPPLE=$(apt-get install -s --only-upgrade rippled | awk '/^Inst/ { print $2 }')
test "$RIPPLE" == "rippled" && can_update=true
function apply_update {
apt-get install rippled -qq
}
elif [[ "$ID" == "fedora" || "$ID" == "centos" || "$ID" == "rhel" || "$ID" == "scientific" ]] ; then
RIPPLE_REPO=${RIPPLE_REPO-stable}
yum --disablerepo=* --enablerepo=ripple-$RIPPLE_REPO clean expire-cache
yum check-update -q --enablerepo=ripple-$RIPPLE_REPO rippled || can_update=true
function apply_update {
yum update -y --enablerepo=ripple-$RIPPLE_REPO rippled
}
else
echo "unrecognized distro!"
exit 1
fi
# Do the actual update and restart the service after reloading systemctl daemon.
if [ "$can_update" = true ] ; then
exec 3>&1 1>>${UPDATELOG} 2>&1
set -e
apply_update
systemctl daemon-reload
systemctl restart rippled.service
echo $(date -u) "rippled daemon updated."
else
echo $(date -u) "no updates available" >> $UPDATELOG
fi


@@ -0,0 +1,20 @@
#!/usr/bin/env bash
function error {
echo $1
exit 1
}
cd /opt/rippled_bld/pkg/rippled
export RIPPLED_VERSION=$(egrep -i -o "\b(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)(-[0-9a-z\-]+(\.[0-9a-z\-]+)*)?(\+[0-9a-z\-]+(\.[0-9a-z\-]+)*)?\b" src/ripple/protocol/impl/BuildInfo.cpp)
: ${PKG_OUTDIR:=/opt/rippled_bld/pkg/out}
export PKG_OUTDIR
if [ ! -d ${PKG_OUTDIR} ]; then
error "${PKG_OUTDIR} is not mounted"
fi
if [ -x ${OPENSSL_ROOT}/bin/openssl ]; then
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${OPENSSL_ROOT}/lib ${OPENSSL_ROOT}/bin/openssl version -a
fi


@@ -0,0 +1,41 @@
ARG DIST_TAG=16.04
FROM ubuntu:$DIST_TAG
ARG GIT_COMMIT=unknown
ARG CI_USE=false
LABEL git-commit=$GIT_COMMIT
# install/setup prerequisites:
COPY ubuntu-builder/ubuntu_setup.sh /tmp/
COPY shared/build_deps.sh /tmp/
COPY shared/install_cmake.sh /tmp/
COPY shared/install_boost.sh /tmp/
RUN chmod +x /tmp/ubuntu_setup.sh && \
chmod +x /tmp/build_deps.sh && \
chmod +x /tmp/install_boost.sh && \
chmod +x /tmp/install_cmake.sh
RUN /tmp/ubuntu_setup.sh
RUN /tmp/install_cmake.sh 3.15.2 /opt/local/cmake-3.15
RUN ln -s /opt/local/cmake-3.15 /opt/local/cmake
ENV PATH="/opt/local/cmake/bin:$PATH"
# also install min supported cmake for testing
RUN if [ "${CI_USE}" = true ] ; then /tmp/install_cmake.sh 3.9.0 /opt/local/cmake-3.9; fi
RUN /tmp/build_deps.sh
ENV PLANTUML_JAR="/opt/plantuml/plantuml.jar"
ENV BOOST_ROOT="/opt/local/boost/_INSTALLED_"
ENV OPENSSL_ROOT="/opt/local/openssl"
# prep files for package building
RUN mkdir -m 777 -p /opt/rippled_bld/pkg
WORKDIR /opt/rippled_bld/pkg
COPY packaging/dpkg/debian /opt/rippled_bld/pkg/debian/
COPY shared/update_sources.sh ./
COPY shared/rippled.service /opt/rippled_bld/pkg/debian/
COPY packaging/dpkg/build_dpkg.sh ./
RUN update-alternatives --set gcc /usr/bin/gcc-8
CMD ./build_dpkg.sh


@@ -0,0 +1,218 @@
#!/usr/bin/env bash
set -ex
source /etc/os-release
if [[ ${VERSION_ID} =~ ^18\. || ${VERSION_ID} =~ ^16\. ]] ; then
echo "setup for ${PRETTY_NAME}"
else
echo "${VERSION} not supported"
exit 1
fi
export DEBIAN_FRONTEND="noninteractive"
echo "Acquire::Retries 3;" > /etc/apt/apt.conf.d/80-retries
echo "Acquire::http::Pipeline-Depth 0;" >> /etc/apt/apt.conf.d/80-retries
echo "Acquire::http::No-Cache true;" >> /etc/apt/apt.conf.d/80-retries
echo "Acquire::BrokenProxy true;" >> /etc/apt/apt.conf.d/80-retries
apt-get update -o Acquire::CompressionTypes::Order::=gz
apt-get -y update
apt-get -y install apt-utils
apt-get -y install software-properties-common wget
apt-get -y upgrade
if [[ ${VERSION_ID} =~ ^18\. ]] ; then
apt-add-repository -y multiverse
apt-add-repository -y universe
fi
add-apt-repository -y ppa:ubuntu-toolchain-r/test
apt-get -y clean
apt-get -y update
apt-get -y --fix-missing install \
make cmake ninja-build \
protobuf-compiler libprotobuf-dev openssl libssl-dev \
liblzma-dev libbz2-dev zlib1g-dev \
libjemalloc-dev \
python-pip \
gdb gdbserver \
libstdc++6 \
flex bison parallel \
libicu-dev texinfo \
java-common javacc \
dpkg-dev debhelper devscripts fakeroot \
debmake git-buildpackage dh-make gitpkg debsums gnupg \
dh-buildinfo dh-make dh-systemd
apt-get -y install gcc-7 g++-7
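# gcc-7 is registered with the highest priority (40) so that `update-alternatives --auto gcc`
# below selects it by default; gcc-8 (20) and the optional CI compilers get lower priorities.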
update-alternatives --install \
/usr/bin/gcc gcc /usr/bin/gcc-7 40 \
--slave /usr/bin/g++ g++ /usr/bin/g++-7 \
--slave /usr/bin/gcc-ar gcc-ar /usr/bin/gcc-ar-7 \
--slave /usr/bin/gcc-nm gcc-nm /usr/bin/gcc-nm-7 \
--slave /usr/bin/gcc-ranlib gcc-ranlib /usr/bin/gcc-ranlib-7 \
--slave /usr/bin/gcov gcov /usr/bin/gcov-7 \
--slave /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-7 \
--slave /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-7
apt-get -y install gcc-8 g++-8
update-alternatives --install \
/usr/bin/gcc gcc /usr/bin/gcc-8 20 \
--slave /usr/bin/g++ g++ /usr/bin/g++-8 \
--slave /usr/bin/gcc-ar gcc-ar /usr/bin/gcc-ar-8 \
--slave /usr/bin/gcc-nm gcc-nm /usr/bin/gcc-nm-8 \
--slave /usr/bin/gcc-ranlib gcc-ranlib /usr/bin/gcc-ranlib-8 \
--slave /usr/bin/gcov gcov /usr/bin/gcov-8 \
--slave /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-8 \
--slave /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-8
update-alternatives --auto gcc
update-alternatives --install /usr/bin/cpp cpp /usr/bin/cpp-7 40
update-alternatives --install /usr/bin/cpp cpp /usr/bin/cpp-8 20
update-alternatives --auto cpp
if [ "${CI_USE}" = true ] ; then
apt-get -y install gcc-6 g++-6
update-alternatives --install \
/usr/bin/gcc gcc /usr/bin/gcc-6 10 \
--slave /usr/bin/g++ g++ /usr/bin/g++-6 \
--slave /usr/bin/gcc-ar gcc-ar /usr/bin/gcc-ar-6 \
--slave /usr/bin/gcc-nm gcc-nm /usr/bin/gcc-nm-6 \
--slave /usr/bin/gcc-ranlib gcc-ranlib /usr/bin/gcc-ranlib-6 \
--slave /usr/bin/gcov gcov /usr/bin/gcov-6 \
--slave /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-6 \
--slave /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-6
apt-get -y install gcc-9 g++-9
update-alternatives --install \
/usr/bin/gcc gcc /usr/bin/gcc-9 15 \
--slave /usr/bin/g++ g++ /usr/bin/g++-9 \
--slave /usr/bin/gcc-ar gcc-ar /usr/bin/gcc-ar-9 \
--slave /usr/bin/gcc-nm gcc-nm /usr/bin/gcc-nm-9 \
--slave /usr/bin/gcc-ranlib gcc-ranlib /usr/bin/gcc-ranlib-9 \
--slave /usr/bin/gcov gcov /usr/bin/gcov-9 \
--slave /usr/bin/gcov-tool gcov-tool /usr/bin/gcov-tool-9 \
--slave /usr/bin/gcov-dump gcov-dump /usr/bin/gcov-dump-9
fi
if [[ ${VERSION_ID} =~ ^18\. ]] ; then
apt-get -y install binutils
elif [[ ${VERSION_ID} =~ ^16\. ]] ; then
apt-get -y install python-software-properties binutils-gold
fi
wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | apt-key add -
if [[ ${VERSION_ID} =~ ^18\. ]] ; then
cat << EOF > /etc/apt/sources.list.d/llvm.list
deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic main
deb-src http://apt.llvm.org/bionic/ llvm-toolchain-bionic main
deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-7 main
deb-src http://apt.llvm.org/bionic/ llvm-toolchain-bionic-7 main
deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-8 main
deb-src http://apt.llvm.org/bionic/ llvm-toolchain-bionic-8 main
deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-9 main
deb-src http://apt.llvm.org/bionic/ llvm-toolchain-bionic-9 main
EOF
elif [[ ${VERSION_ID} =~ ^16\. ]] ; then
cat << EOF > /etc/apt/sources.list.d/llvm.list
deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial main
deb-src http://apt.llvm.org/xenial/ llvm-toolchain-xenial main
deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-7 main
deb-src http://apt.llvm.org/xenial/ llvm-toolchain-xenial-7 main
deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-8 main
deb-src http://apt.llvm.org/xenial/ llvm-toolchain-xenial-8 main
deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-9 main
deb-src http://apt.llvm.org/xenial/ llvm-toolchain-xenial-9 main
EOF
fi
apt-get -y update
apt-get -y install \
clang-7 libclang-common-7-dev libclang-7-dev libllvm7 llvm-7 \
llvm-7-dev llvm-7-runtime clang-format-7 python-clang-7 \
lld-7 libfuzzer-7-dev libc++-7-dev
update-alternatives --install \
/usr/bin/clang clang /usr/bin/clang-7 40 \
--slave /usr/bin/clang++ clang++ /usr/bin/clang++-7 \
--slave /usr/bin/llvm-profdata llvm-profdata /usr/bin/llvm-profdata-7 \
--slave /usr/bin/asan-symbolize asan-symbolize /usr/bin/asan_symbolize-7 \
--slave /usr/bin/llvm-symbolizer llvm-symbolizer /usr/bin/llvm-symbolizer-7 \
--slave /usr/bin/clang-format clang-format /usr/bin/clang-format-7 \
--slave /usr/bin/llvm-ar llvm-ar /usr/bin/llvm-ar-7 \
--slave /usr/bin/llvm-cov llvm-cov /usr/bin/llvm-cov-7 \
--slave /usr/bin/llvm-nm llvm-nm /usr/bin/llvm-nm-7
apt-get -y install \
clang-8 libclang-common-8-dev libclang-8-dev libllvm8 llvm-8 \
llvm-8-dev llvm-8-runtime clang-format-8 python-clang-8 \
lld-8 libfuzzer-8-dev libc++-8-dev
update-alternatives --install \
/usr/bin/clang clang /usr/bin/clang-8 20 \
--slave /usr/bin/clang++ clang++ /usr/bin/clang++-8 \
--slave /usr/bin/llvm-profdata llvm-profdata /usr/bin/llvm-profdata-8 \
--slave /usr/bin/asan-symbolize asan-symbolize /usr/bin/asan_symbolize-8 \
--slave /usr/bin/llvm-symbolizer llvm-symbolizer /usr/bin/llvm-symbolizer-8 \
--slave /usr/bin/clang-format clang-format /usr/bin/clang-format-8 \
--slave /usr/bin/llvm-ar llvm-ar /usr/bin/llvm-ar-8 \
--slave /usr/bin/llvm-cov llvm-cov /usr/bin/llvm-cov-8 \
--slave /usr/bin/llvm-nm llvm-nm /usr/bin/llvm-nm-8
update-alternatives --auto clang
if [ "${CI_USE}" = true ] ; then
apt-get -y install \
clang-5.0 libclang-common-5.0-dev libclang-5.0-dev libllvm5.0 llvm-5.0 \
llvm-5.0-dev llvm-5.0-runtime clang-format-5.0 python-clang-5.0 \
lld-5.0 libfuzzer-5.0-dev
update-alternatives --install \
/usr/bin/clang clang /usr/bin/clang-5.0 10 \
--slave /usr/bin/clang++ clang++ /usr/bin/clang++-5.0 \
--slave /usr/bin/llvm-profdata llvm-profdata /usr/bin/llvm-profdata-5.0 \
--slave /usr/bin/asan-symbolize asan-symbolize /usr/bin/asan_symbolize-5.0 \
--slave /usr/bin/llvm-symbolizer llvm-symbolizer /usr/bin/llvm-symbolizer-5.0 \
--slave /usr/bin/clang-format clang-format /usr/bin/clang-format-5.0 \
--slave /usr/bin/llvm-ar llvm-ar /usr/bin/llvm-ar-5.0 \
--slave /usr/bin/llvm-cov llvm-cov /usr/bin/llvm-cov-5.0 \
--slave /usr/bin/llvm-nm llvm-nm /usr/bin/llvm-nm-5.0
apt-get -y install \
clang-6.0 libclang-common-6.0-dev libclang-6.0-dev libllvm6.0 llvm-6.0 \
llvm-6.0-dev llvm-6.0-runtime clang-format-6.0 python-clang-6.0 \
lld-6.0 libfuzzer-6.0-dev
update-alternatives --install \
/usr/bin/clang clang /usr/bin/clang-6.0 12 \
--slave /usr/bin/clang++ clang++ /usr/bin/clang++-6.0 \
--slave /usr/bin/llvm-profdata llvm-profdata /usr/bin/llvm-profdata-6.0 \
--slave /usr/bin/asan-symbolize asan-symbolize /usr/bin/asan_symbolize-6.0 \
--slave /usr/bin/llvm-symbolizer llvm-symbolizer /usr/bin/llvm-symbolizer-6.0 \
--slave /usr/bin/clang-format clang-format /usr/bin/clang-format-6.0 \
--slave /usr/bin/llvm-ar llvm-ar /usr/bin/llvm-ar-6.0 \
--slave /usr/bin/llvm-cov llvm-cov /usr/bin/llvm-cov-6.0 \
--slave /usr/bin/llvm-nm llvm-nm /usr/bin/llvm-nm-6.0
apt-get -y install \
clang-9 libclang-common-9-dev libclang-9-dev libllvm9 llvm-9 \
llvm-9-dev llvm-9-runtime clang-format-9 python-clang-9 \
lld-9 libfuzzer-9-dev libc++-9-dev
update-alternatives --install \
/usr/bin/clang clang /usr/bin/clang-9 20 \
--slave /usr/bin/clang++ clang++ /usr/bin/clang++-9 \
--slave /usr/bin/llvm-profdata llvm-profdata /usr/bin/llvm-profdata-9 \
--slave /usr/bin/asan-symbolize asan-symbolize /usr/bin/asan_symbolize-9 \
--slave /usr/bin/llvm-symbolizer llvm-symbolizer /usr/bin/llvm-symbolizer-9 \
--slave /usr/bin/clang-format clang-format /usr/bin/clang-format-9 \
--slave /usr/bin/llvm-ar llvm-ar /usr/bin/llvm-ar-9 \
--slave /usr/bin/llvm-cov llvm-cov /usr/bin/llvm-cov-9 \
--slave /usr/bin/llvm-nm llvm-nm /usr/bin/llvm-nm-9
# only install latest lldb
apt-get -y install lldb-9 python-lldb-9 liblldb-9-dev
update-alternatives --install \
/usr/bin/lldb lldb /usr/bin/lldb-9 50 \
--slave /usr/bin/lldb-server lldb-server /usr/bin/lldb-server-9 \
--slave /usr/bin/lldb-argdumper lldb-argdumper /usr/bin/lldb-argdumper-9 \
--slave /usr/bin/lldb-instr lldb-instr /usr/bin/lldb-instr-9 \
--slave /usr/bin/lldb-mi lldb-mi /usr/bin/lldb-mi-9
update-alternatives --auto clang
fi
apt-get -y autoremove


@@ -6,12 +6,15 @@ builds, including docker based builds for those distributions, please consult
the [rippled-package-builder](https://github.com/ripple/rippled-package-builder)
repository.
Development is regularly done on Ubuntu 16.04 or later. For non Ubuntu
distributions, the steps below should work by installing the appropriate
dependencies using that distribution's package management tools.
Note: Ubuntu 16.04 users may need to update their compiler (see the dependencies
section). For non Ubuntu distributions, the steps below should work by
installing the appropriate dependencies using that distribution's package
management tools.
## Dependencies
gcc-7 or later is required.
Use `apt-get` to install the dependencies provided by the distribution
```
@@ -25,14 +28,14 @@ protobuf will give errors.
### Build Boost
Boost 1.67 or later is required. We recommend downloading and compiling boost
Boost 1.70 or later is required. We recommend downloading and compiling boost
with the following process: After changing to the directory where
you wish to download and compile boost, run
```
$ wget https://dl.bintray.com/boostorg/release/1.68.0/source/boost_1_68_0.tar.gz
$ tar -xzf boost_1_68_0.tar.gz
$ cd boost_1_68_0
$ wget https://dl.bintray.com/boostorg/release/1.70.0/source/boost_1_70_0.tar.gz
$ tar -xzf boost_1_70_0.tar.gz
$ cd boost_1_70_0
$ ./bootstrap.sh
$ ./b2 headers
$ ./b2 -j<Num Parallel>
@@ -81,17 +84,17 @@ git checkout develop
If you didn't persistently set the `BOOST_ROOT` environment variable to the
directory in which you compiled boost, then you should set it temporarily.
For example, if you built Boost in your home directory `~/boost_1_68_0`, you
For example, if you built Boost in your home directory `~/boost_1_70_0`, you
would set it as follows in any shell in which you want to build:
```
export BOOST_ROOT=~/boost_1_68_0
export BOOST_ROOT=~/boost_1_70_0
```
Alternatively, you can add `-DBOOST_ROOT=~/boost_1_68_0` to the command line when
Alternatively, you can add `-DBOOST_ROOT=~/boost_1_70_0` to the command line when
invoking `cmake`.
### Generate and Build
### Generate Configuration
All builds should be done in a separate directory from the source tree root
(a subdirectory is fine). For example, from the root of the ripple source tree:
@@ -107,6 +110,10 @@ followed by:
cmake -DCMAKE_BUILD_TYPE=Debug ..
```
If your operating system does not provide static libraries (Arch Linux, and
Manjaro Linux, for example), you must configure a non-static build by adding
`-Dstatic=OFF` to the above cmake line.
`CMAKE_BUILD_TYPE` can be changed as desired for `Debug` vs.
`Release` builds (all four standard cmake build types are supported).
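For instance, a sketch combining the options just described (a Debug configuration without static linking):

```
cmake -DCMAKE_BUILD_TYPE=Debug -Dstatic=OFF ..
```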
@@ -115,20 +122,6 @@ To select a different compiler (most likely gcc will be found by default), pass
`-DCMAKE_CXX_COMPILER=</path/to/cxx-compiler>` when configuring. If you prefer,
you can instead set `CC` and `CXX` environment variables which cmake will honor.
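For example, a minimal sketch (the gcc-7 paths are assumptions and may differ on your system):

```
export CC=/usr/bin/gcc-7
export CXX=/usr/bin/g++-7
cmake -DCMAKE_BUILD_TYPE=Debug ..
```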
Once you have generated the build system, you can run the build via cmake:
```
cmake --build . -- -j <parallel jobs>
```
The `-j` parameter in this example tells the build tool to compile several
files in parallel. This value should be chosen roughly based on the number of
cores you have available and/or want to use for building.
When the build completes successfully, you will have a `rippled` executable in
the current directory, which can be used to connect to the network (when
properly configured) or to run unit tests.
#### Options During Configuration:
The CMake file defines a number of configure-time options which can be
@@ -150,6 +143,23 @@ testing and running.
Several other infrequently used options are available - run `ccmake` or
`cmake-gui` for a list of all options.
### Build
Once you have generated the build system, you can run the build via cmake:
```
cmake --build . -- -j <parallel jobs>
```
The `-j` parameter in this example tells the build tool to compile several
files in parallel. This value should be chosen roughly based on the number of
cores you have available and/or want to use for building.
When the build completes successfully, you will have a `rippled` executable in
the current directory, which can be used to connect to the network (when
properly configured) or to run unit tests.
#### Optional Installation
The rippled cmake build supports an installation target that will install


@@ -60,17 +60,17 @@ brew install git cmake pkg-config protobuf openssl ninja
### Build Boost
Boost 1.67 or later is required.
Boost 1.70 or later is required.
We want to compile boost with clang/libc++
Download [a release](https://dl.bintray.com/boostorg/release/1.68.0/source/boost_1_68_0.tar.bz2)
Download [a release](https://dl.bintray.com/boostorg/release/1.70.0/source/boost_1_70_0.tar.bz2)
Extract it to a folder, making note of where, open a terminal, then:
```
./bootstrap.sh
./b2 cxxflags="-std=c++14"
./b2 cxxflags="-std=c++14" visibility=global
```
Create an environment variable `BOOST_ROOT` in one of your `rc` files, pointing
@@ -120,11 +120,11 @@ If you didn't persistently set the `BOOST_ROOT` environment variable to the
root of the extracted directory above, then you should set it temporarily.
For example, assuming your username were `Abigail` and you extracted Boost
1.68.0 in `/Users/Abigail/Downloads/boost_1_68_0`, you would run the following in any
1.70.0 in `/Users/Abigail/Downloads/boost_1_70_0`, you would run the following in any
shell in which you want to build:
```
export BOOST_ROOT=/Users/Abigail/Downloads/boost_1_68_0
export BOOST_ROOT=/Users/Abigail/Downloads/boost_1_70_0
```
### Generate and Build

File diff suppressed because it is too large

Jenkinsfile

@@ -1,779 +0,0 @@
#!/usr/bin/env groovy
import groovy.json.JsonOutput
import java.text.*
all_status = [:]
commit_id = ''
git_fork = 'ripple'
git_repo = 'rippled'
//
// this is not the actual token, but an ID/key into the jenkins
// credential store which httpRequest can access.
//
github_cred = '6bd3f3b9-9a35-493e-8aef-975403c82d3e'
//
// root API url for our repo (default, overridden below)
//
github_api = 'https://api.github.com/repos/ripple/rippled'
try {
stage ('Startup Checks') {
// here we check the commit author against collaborators
// we need a node to do this because readJSON requires
// a filesystem (although we just pass it text)
node {
checkout scm
commit_id = getCommitID()
//
// NOTE this getUserRemoteConfigs call requires a one-time
// In-process Script Approval (configure jenkins). We need this
// to detect the remote repo to interact with via the github API.
//
def remote_url = scm.getUserRemoteConfigs()[0].getUrl()
if (remote_url) {
echo "GIT URL scm: $remote_url"
git_fork = remote_url.tokenize('/')[2]
git_repo = remote_url.tokenize('/')[3].split('\\.')[0]
echo "GIT FORK: $git_fork"
echo "GIT REPO: $git_repo"
github_api = "https://api.github.com/repos/${git_fork}/${git_repo}"
echo "API URL REPO: $github_api"
}
if (env.CHANGE_AUTHOR) {
def collab_found = false;
//
// this means we have some sort of PR , so verify the author
//
echo "CHANGE AUTHOR ---> $CHANGE_AUTHOR"
echo "CHANGE TARGET ---> $CHANGE_TARGET"
echo "CHANGE ID ---> $CHANGE_ID"
//
// check the commit author against collaborators
// we need a node to do this because readJSON requires
// a filesystem (although we just pass it text)
//
def response = httpRequest(
timeout: 10,
authentication: github_cred,
url: "${github_api}/collaborators")
def collab_data = readJSON(
text: response.content)
for (collaborator in collab_data) {
if (collaborator['login'] == "$CHANGE_AUTHOR") {
echo "$CHANGE_AUTHOR is a collaborator!"
collab_found = true;
break;
}
}
if (! collab_found) {
echo "$CHANGE_AUTHOR is not a collaborator - waiting for manual approval."
try {
response = httpRequest(
timeout: 10,
authentication: github_cred,
url: getCommentURL(),
contentType: 'APPLICATION_JSON',
httpMode: 'POST',
requestBody: JsonOutput.toJson([
body: """
**Thank you** for your submission. It will be reviewed soon and submitted for processing in CI.
"""
])
)
}
catch (e) {
echo 'had a problem interacting with github...comments are probably not updated'
}
try {
input (
message: "User $CHANGE_AUTHOR has submitted PR #$CHANGE_ID. " +
"**Please review** the changes for any CI/security concerns " +
"and then decide whether to proceed with building.")
}
catch(e) {
def user = e.getCauses()[0].getUser().toString()
all_status['startup'] = [
false,
'Approval Check',
"Build aborted by [${user}]",
"[console](${env.BUILD_URL}/console)"]
error "Aborted by: [${user}]"
}
}
}
}
}
stage ('Parallel Build') {
String[][] variants = [
['gcc.Release' ,'-Dassert=ON' ,'MANUAL_TESTS=true' ],
['gcc.Debug' ,'-Dcoverage=ON' ,'TARGET=coverage_report', 'SKIP_TESTS=true'],
['docs' ,'' ,'TARGET=docs' ],
['msvc.Debug' ],
['msvc.Debug' ,'' ,'NINJA_BUILD=true' ],
['msvc.Debug' ,'-Dunity=OFF' ],
['msvc.Release' ],
['clang.Debug' ],
['clang.Debug' ,'-Dunity=OFF' ],
['gcc.Debug' ],
['gcc.Debug' ,'-Dunity=OFF' ],
['clang.Release' ,'-Dassert=ON' ],
['gcc.Release' ,'-Dassert=ON' ],
['gcc.Debug' ,'-Dstatic=OFF' ],
['gcc.Debug' ,'-Dstatic=OFF -DBUILD_SHARED_LIBS=ON' ],
['gcc.Debug' ,'' ,'NINJA_BUILD=true' ],
['clang.Debug' ,'-Dunity=OFF -Dsan=address' ,'PARALLEL_TESTS=false', 'DEBUGGER=false'],
['clang.Debug' ,'-Dunity=OFF -Dsan=undefined' ,'PARALLEL_TESTS=false'],
// TODO - tsan runs currently fail/hang
//['clang.Debug' ,'-Dunity=OFF -Dsan=thread' ,'PARALLEL_TESTS=false'],
]
// create a map of all builds
// that we want to run. The map
// is string keys and node{} object values
def builds = [:]
for (int index = 0; index < variants.size(); index++) {
def bldtype = variants[index][0]
def cmake_extra = variants[index].size() > 1 ? variants[index][1] : ''
def bldlabel = bldtype + cmake_extra
def extra_env = variants[index].size() > 2 ? variants[index][2..-1] : []
for (int j = 0; j < extra_env.size(); j++) {
bldlabel += "_" + extra_env[j]
}
bldlabel = bldlabel.replace('-', '_')
bldlabel = bldlabel.replace(' ', '')
bldlabel = bldlabel.replace('=', '_')
def compiler = getFirstPart(bldtype)
def config = getSecondPart(bldtype)
def target = 'install' // currently ignored for windows builds
if (compiler == 'docs') {
compiler = 'gcc'
config = 'Release'
target = 'docs'
}
def cc =
(compiler == 'clang') ? '/opt/llvm-5.0.1/bin/clang' : 'gcc'
def cxx =
(compiler == 'clang') ? '/opt/llvm-5.0.1/bin/clang++' : 'g++'
def ucc = isNoUnity(cmake_extra) ? 'true' : 'false'
def node_type =
(compiler == 'msvc') ? 'rippled-win' : 'rippled-dev'
// the default disposition for parallel tests: disabled
// for coverage, enabled otherwise. Can still be overridden
// by explicitly setting with extra env settings above.
def pt = isCoverage(cmake_extra) ? 'false' : 'true'
def max_minutes = 25
def env_vars = [
"TARGET=${target}",
"BUILD_TYPE=${config}",
"COMPILER=${compiler}",
"PARALLEL_TESTS=${pt}",
'BUILD=cmake',
"MAX_TIME=${max_minutes}m",
"BUILD_DIR=${bldlabel}",
"CMAKE_EXTRA_ARGS=-Dwerr=ON ${cmake_extra}",
'VERBOSE_BUILD=true']
builds[bldlabel] = {
node(node_type) {
checkout scm
dir ('build') {
deleteDir()
}
def cdir = upDir(pwd())
echo "BASEDIR: ${cdir}"
echo "COMPILER: ${compiler}"
echo "BUILD_TYPE: ${config}"
echo "USE_CC: ${ucc}"
env_vars.addAll([
"NIH_CACHE_ROOT=${cdir}"])
if (compiler == 'msvc') {
env_vars.addAll([
'BOOST_ROOT=c:\\lib\\boost_1_67',
'PROJECT_NAME=rippled',
'MSBUILDDISABLENODEREUSE=1', // this ENV setting is probably redundant since we also pass /nr:false to msbuild
'OPENSSL_ROOT=c:\\OpenSSL-Win64'])
}
else {
env_vars.addAll([
'NINJA_BUILD=false',
"CCACHE_BASEDIR=${cdir}",
'PLANTUML_JAR=/opt/plantuml/plantuml.jar',
'APP_ARGS=--unittest-ipv6',
'CCACHE_NOHASHDIR=true',
"CC=${cc}",
"CXX=${cxx}",
'LCOV_ROOT=""',
'PATH+CMAKE_BIN=/opt/local/cmake',
'GDB_ROOT=/opt/local/gdb',
'BOOST_ROOT=/opt/local/boost_1_67_0',
"USE_CCACHE=${ucc}"])
}
if (extra_env.size() > 0) {
env_vars.addAll(extra_env)
}
// try to figure out codecov token to use. Look for
// MY_CODECOV_TOKEN id first so users can set that
// on job scope but then default to RIPPLED_CODECOV_TOKEN
// which should be globally scoped
def codecov_token = ''
try {
withCredentials( [string( credentialsId: 'MY_CODECOV_TOKEN', variable: 'CODECOV_TOKEN')]) {
codecov_token = env.CODECOV_TOKEN
}
}
catch (e) {
// this might throw when MY_CODECOV_TOKEN doesn't exist
}
if (codecov_token == '') {
withCredentials( [string( credentialsId: 'RIPPLED_CODECOV_TOKEN', variable: 'CODECOV_TOKEN')]) {
codecov_token = env.CODECOV_TOKEN
}
}
env_vars.addAll(["CODECOV_TOKEN=${codecov_token}"])
withEnv(env_vars) {
myStage(bldlabel)
try {
timeout(
time: max_minutes * 2,
units: 'MINUTES')
{
if (compiler == 'msvc') {
powershell "Remove-Item -Path \"${bldlabel}.txt\" -Force -ErrorAction Ignore"
// we capture stdout to variable because I could
// not figure out how to make powershell redirect internally
output = powershell (
returnStdout: true,
script: windowsBuildCmd())
// if the powershell command fails (has nonzero exit)
// then the command above throws, we don't get our output,
// and we never create this output file.
// SEE https://issues.jenkins-ci.org/browse/JENKINS-44930
// Alternatively, figure out how to reliably redirect
// all output above to a file (Start/Stop transcript does not work)
writeFile(
file: "${bldlabel}.txt",
text: output)
}
else {
sh "rm -fv ${bldlabel}.txt"
// execute the bld command in a redirecting shell
// to capture output
sh redhatBuildCmd(bldlabel)
}
}
}
finally {
if (bldtype == 'docs') {
publishHTML(
allowMissing: true,
alwaysLinkToLastBuild: true,
keepAll: true,
reportName: 'Doxygen',
reportDir: "build/${bldlabel}/html_doc",
reportFiles: 'index.html')
}
if (isCoverage(cmake_extra)) {
publishHTML(
allowMissing: true,
alwaysLinkToLastBuild: false,
keepAll: true,
reportName: 'Coverage',
reportDir: "build/${bldlabel}/coverage",
reportFiles: 'index.html')
}
def envs = ''
for (int j = 0; j < extra_env.size(); j++) {
envs += ", <br/>" + extra_env[j]
}
def cmake_txt = cmake_extra
if (cmake_txt != '') {
cmake_txt = " <br/>" + cmake_txt
}
def st = reportStatus(bldlabel, bldtype + cmake_txt + envs, env.BUILD_URL)
lock('rippled_dev_status') {
all_status[bldlabel] = st
}
} //try-catch-finally
} //withEnv
} //node
} //builds item
} //for variants
// Also add a single build job for doing the RPM build
// on a docker node
builds['rpm'] = {
node('docker') {
def bldlabel = 'rpm'
def remote =
(git_fork == 'ripple') ? 'origin' : git_fork
withCredentials(
[string(
credentialsId: 'RIPPLED_RPM_ROLE_ID',
variable: 'ROLE_ID')])
{
withEnv([
'docker_image=artifactory.ops.ripple.com:6555/rippled-rpm-builder:latest',
"git_commit=${commit_id}",
"git_remote=${remote}",
"rpm_release=${env.BUILD_ID}"])
{
try {
sh "rm -fv ${bldlabel}.txt"
sh "if [ -d rpm-out ]; then rm -rf rpm-out; fi"
sh rpmBuildCmd(bldlabel)
}
finally {
def st = reportStatus(bldlabel, bldlabel, env.BUILD_URL)
lock('rippled_dev_status') {
all_status[bldlabel] = st
}
archiveArtifacts(
artifacts: 'rpm-out/*.rpm',
allowEmptyArchive: true)
}
} //withEnv
} //withCredentials
} //node
}
// this actually executes all the builds we just defined
// above, in parallel as slaves are available
parallel builds
}
}
finally {
// anything here should run always...
stage ('Final Status') {
node {
def results = makeResultText()
try {
def res = getCommentID() //get array return b/c jenkins does not allow multiple direct return/assign
def comment_id = res[0]
def url_comment = res[1]
def mode = 'PATCH'
if (comment_id == 0) {
echo 'no existing status comment found'
mode = 'POST'
}
def body = JsonOutput.toJson([
body: results
])
response = httpRequest(
timeout: 10,
authentication: github_cred,
url: url_comment,
contentType: 'APPLICATION_JSON',
httpMode: mode,
requestBody: body)
}
catch (e) {
echo 'had a problem interacting with github...status is probably not updated'
}
}
}
}
// ---------------
// util functions
// ---------------
def myStage(name) {
echo """
+++++++++++++++++++++++++++++++++++++++++
>> building ${name}
+++++++++++++++++++++++++++++++++++++++++
"""
}
def printGitInfo(id, log) {
echo """
+++++++++++++++++++++++++++++++++++++++++
>> Building commit ID ${id}
>>
${log}
+++++++++++++++++++++++++++++++++++++++++
"""
}
def makeResultText () {
def start_time = new Date()
def sdf = new SimpleDateFormat('yyyyMMdd - HH:mm:ss')
def datestamp = sdf.format(start_time)
def results = """
## Jenkins Build Summary
Built from [this commit](https://github.com/${git_fork}/${git_repo}/commit/${commit_id})
Built at __${datestamp}__
### Test Results
Build Type | Log | Result | Status
---------- | --- | ------ | ------
"""
for ( e in all_status) {
results += e.value[1] + ' | ' + e.value[3] + ' | ' + e.value[2] + ' | ' +
(e.value[0] ? 'PASS :white_check_mark: ' : 'FAIL :red_circle: ') + '\n'
}
results += '\n'
echo 'FINAL BUILD RESULTS'
echo results
results
}
def getCommentURL () {
def url_c = ''
if (env.CHANGE_ID && env.CHANGE_ID ==~ /\d+/) {
//
// CHANGE_ID indicates we are building a PR
// find PR comments
//
def resp = httpRequest(
timeout: 10,
authentication: github_cred,
url: "${github_api}/pulls/$CHANGE_ID")
def result = readJSON(text: resp.content)
//
// follow issue comments link
//
url_c = result['_links']['issue']['href'] + '/comments'
}
else {
//
// if not a PR, just search comments for our commit ID
//
url_c =
"${github_api}/commits/${commit_id}/comments"
}
url_c
}
def getCommentID () {
def url_c = getCommentURL()
def response = httpRequest(
timeout: 10,
authentication: github_cred,
url: url_c)
def data = readJSON(text: response.content)
def comment_id = 0
// see if we can find an existing comment here with
// a heading that matches ours...
for (comment in data) {
if (comment['body'] =~ /(?m)^##\s+Jenkins Build/) {
comment_id = comment['id']
echo "existing status comment ${comment_id} found"
url_c = comment['url']
break;
}
}
[comment_id, url_c]
}
def getCommitID () {
def cid = sh (
script: 'git rev-parse HEAD',
returnStdout: true)
cid = cid.trim()
echo "commit ID is ${cid}"
commit_log = sh (
script: "git show --name-status ${cid}",
returnStdout: true)
printGitInfo (cid, commit_log)
cid
}
@NonCPS
def getResults(text, label) {
// example:
/// 194.5s, 154 suites, 948 cases, 360485 tests total, 0 failures
// or build log format:
// [msvc.release] 71.3s, 162 suites, 995 cases, 318901 tests total, 1 failure
def matcher =
text == '' ?
manager.getLogMatcher(/\[${label}\].+?(\d+) case[s]?, (\d+) test[s]? total, (\d+) (failure(s?))/) :
text =~ /(\d+) case[s]?, (\d+) test[s]? total, (\d+) (failure(s?))/
matcher ? matcher[0][1] + ' cases, ' + matcher[0][3] + ' failed' : 'no test results'
}
@NonCPS
def getFailures(text, label) {
// [see above for format]
def matcher =
text == '' ?
manager.getLogMatcher(/\[${label}\].+?(\d+) test[s]? total, (\d+) (failure(s?))/) :
text =~ /(\d+) test[s]? total, (\d+) (failure(s?))/
// if we didn't match, then return 1 since something is
// probably wrong, e.g. maybe the build failed...
matcher ? matcher[0][2] as Integer : 1i
}
@NonCPS
def getTime(text, label) {
// look for text following a label 'real' for
// wallclock time. Some `time`s report fractional
// seconds and we can omit those in what we report
def matcher =
text == '' ?
manager.getLogMatcher(/(?m)^\[${label}\]\s+real\s+(.+)\.(\d+?)[s]?/) :
text =~ /(?m)^real\s+(.+)\.(\d+?)[s]?/
if (matcher) {
return matcher[0][1] + 's'
}
// alternatively, look for powershell elapsed time
// format, e.g. :
// TotalSeconds : 523.2140529
def matcher2 =
text == '' ?
manager.getLogMatcher(/(?m)^\[${label}\]\s+TotalSeconds\s+:\s+(\d+)\.(\d+?)?/) :
text =~ /(?m)^TotalSeconds\s+:\s+(\d+)\.(\d+?)?/
matcher2 ? matcher2[0][1] + 's' : 'n/a'
}
@NonCPS
def getFirstPart(bld) {
def matcher = bld =~ /^(.+?)\.(.+)$/
matcher ? matcher[0][1] : bld
}
@NonCPS
def isNoUnity(bld) {
def matcher = bld =~ /-Dunity=(off|OFF)/
matcher ? true : false
}
@NonCPS
def isCoverage(bld) {
def matcher = bld =~ /-Dcoverage=(on|ON)/
matcher ? true : false
}
@NonCPS
def getSecondPart(bld) {
def matcher = bld =~ /^(.+?)\.(.+)$/
matcher ? matcher[0][2] : bld
}
// because I can't seem to find path manipulation
// functions in groovy....
@NonCPS
def upDir(path) {
def matcher = path =~ /^(.+)[\/\\](.+?)/
matcher ? matcher[0][1] : path
}
// the shell command used for building on redhat
def redhatBuildCmd(bldlabel) {
'''\
#!/bin/bash
set -ex
log_file=''' + "${bldlabel}.txt" + '''
exec 3>&1 1>>${log_file} 2>&1
ccache -s
source /opt/rh/devtoolset-7/enable
/usr/bin/time -p ./bin/ci/ubuntu/build-and-test.sh 2>&1
ccache -s
'''
}
// the powershell command used for the windows build
def windowsBuildCmd() {
'''
# Enable streams 3-6
$WarningPreference = 'Continue'
$VerbosePreference = 'Continue'
$DebugPreference = 'Continue'
$InformationPreference = 'Continue'
Invoke-BatchFile "${env:ProgramFiles(x86)}\\Microsoft Visual Studio\\2017\\Community\\VC\\Auxiliary\\Build\\vcvarsall.bat" x86_amd64
Get-ChildItem env:* | Sort-Object name
cl
cmake --version
New-Item -ItemType Directory -Force -Path "build/$env:BUILD_DIR" -ErrorAction Stop
$sw = [Diagnostics.Stopwatch]::StartNew()
try {
Push-Location "build/$env:BUILD_DIR"
if ($env:NINJA_BUILD -eq "true") {
Invoke-Expression "& cmake -G`"Ninja`" -DCMAKE_BUILD_TYPE=$env:BUILD_TYPE -DCMAKE_VERBOSE_MAKEFILE=ON $env:CMAKE_EXTRA_ARGS ../.."
}
else {
Invoke-Expression "& cmake -G`"Visual Studio 15 2017 Win64`" -DCMAKE_VERBOSE_MAKEFILE=ON $env:CMAKE_EXTRA_ARGS ../.."
}
if ($LastExitCode -ne 0) { throw "CMake failed" }
## as of 01/2018, DO NOT USE cmake to run the actual build step. for some
## reason, cmake spawning the build under jenkins causes MSBUILD/ninja to
## get stuck at the end of the build. Perhaps cmake is spawning
## incorrectly or failing to pass certain params
if ($env:NINJA_BUILD -eq "true") {
ninja -j $env:NUMBER_OF_PROCESSORS -v
}
else {
msbuild /fl /m /nr:false /p:Configuration="$env:BUILD_TYPE" /p:Platform=x64 /p:GenerateFullPaths=True /v:normal /nologo /clp:"ShowCommandLine;DisableConsoleColor" "$env:PROJECT_NAME.vcxproj"
}
if ($LastExitCode -ne 0) { throw "CMake build failed" }
$exe = "./$env:BUILD_TYPE/$env:PROJECT_NAME"
if ($env:NINJA_BUILD -eq "true") {
$exe = "./$env:PROJECT_NAME"
}
"Exe is at $exe"
$params = '--unittest', '--quiet', '--unittest-log'
if ($env:PARALLEL_TESTS -eq "true") {
$params = $params += "--unittest-jobs=$env:NUMBER_OF_PROCESSORS"
}
& $exe $params
if ($LastExitCode -ne 0) { throw "Unit tests failed" }
}
catch {
throw
}
finally {
$sw.Stop()
$sw.Elapsed
Pop-Location
}
'''
}
// the shell command used for building an RPM
def rpmBuildCmd(bldlabel) {
'''\
#!/bin/bash
set -ex
log_file=''' + "${bldlabel}.txt" + '''
exec 3>&1 1>>${log_file} 2>&1
# Vault Steps
SECRET_ID=$(cat /.vault/rippled-build-role/secret-id)
export VAULT_TOKEN=$(/usr/local/ripple/ops-toolbox/vault/vault_approle_auth -r ${ROLE_ID} -s ${SECRET_ID} -t)
/usr/local/ripple/ops-toolbox/vault/vault_get_sts_token.py -r rippled-build-role
mkdir -p rpm-out
docker pull "${docker_image}"
echo "Running build container"
docker run --rm \
-v $PWD/rpm-out:/opt/rippled-rpm/out \
-e "GIT_COMMIT=$git_commit" \
-e "GIT_REMOTE=$git_remote" \
-e "RPM_RELEASE=$rpm_release" \
"${docker_image}"
. rpm-out/build_vars
cd rpm-out
tar xvf rippled-*.tar.gz
ls -la *.rpm
#################################
## for now we don't want the src
## and debugsource rpms for testing
## or archiving...
#################################
rm rippled-debugsource*.rpm
rm *.src.rpm
mkdir rpm-main
cp *.rpm rpm-main
cd rpm-main
cd ../..
cat > test_rpm.sh << "EOL"
#!/bin/bash
function error {
echo $1
exit 1
}
yum install -y yum-utils openssl-static zlib-static
rpm -i /opt/rippled-rpm/*.rpm
rc=$?; if [[ $rc != 0 ]]; then
error "error installing rpms"
fi
/opt/ripple/bin/rippled --unittest
rc=$?; if [[ $rc != 0 ]]; then
error "rippled --unittest failed"
fi
/opt/ripple/bin/validator-keys --unittest
rc=$?; if [[ $rc != 0 ]]; then
error "validator-keys --unittest failed"
fi
EOL
chmod +x test_rpm.sh
echo "Running test container"
docker run --rm \
-v $PWD/rpm-out/rpm-main:/opt/rippled-rpm \
-v $PWD:/opt/rippled --entrypoint /opt/rippled/test_rpm.sh \
centos:latest
'''
}
// post processing step after each build:
// * archives the log file
// * adds short description/status to build status
// * returns an array of result info to add to the all_build summary
def reportStatus(label, type, bldurl) {
def outstr = ''
def loglink = "[console](${bldurl}/console)"
def logfile = "${label}.txt"
if ( fileExists(logfile) ) {
archiveArtifacts( artifacts: logfile )
outstr = readFile(logfile)
loglink = "[logfile](${bldurl}/artifact/${logfile})"
}
def st = getResults(outstr, label)
def time = getTime(outstr, label)
def fail_count = getFailures(outstr, label)
outstr = null
def txtcolor =
fail_count == 0 ? 'DarkGreen' : 'Crimson'
def shortbld = label
// this is just an attempt to shorten the
// summary text label to the point of absurdity..
shortbld = shortbld.replace('Debug', 'dbg')
shortbld = shortbld.replace('Release', 'rel')
shortbld = shortbld.replace('true', 'Y')
shortbld = shortbld.replace('false', 'N')
shortbld = shortbld.replace('Dcoverage', 'cov')
shortbld = shortbld.replace('Dassert', 'asrt')
shortbld = shortbld.replace('Dunity', 'unty')
shortbld = shortbld.replace('Dsan=address', 'asan')
shortbld = shortbld.replace('Dsan=thread', 'tsan')
shortbld = shortbld.replace('Dsan=undefined', 'ubsan')
shortbld = shortbld.replace('PARALLEL_TEST', 'PL')
shortbld = shortbld.replace('MANUAL_TESTS', 'MAN')
shortbld = shortbld.replace('NINJA_BUILD', 'ninja')
shortbld = shortbld.replace('DEBUGGER', 'gdb')
shortbld = shortbld.replace('ON', 'Y')
shortbld = shortbld.replace('OFF', 'N')
manager.addShortText(
"${shortbld}: ${st}, t: ${time}",
txtcolor,
'white',
'0px',
'white')
[fail_count == 0, type, "${st}, t: ${time}", loglink]
}


@@ -3,17 +3,22 @@
The XRP Ledger is a decentralized cryptographic ledger powered by a network of peer-to-peer servers. The XRP Ledger uses a novel Byzantine Fault Tolerant consensus algorithm to settle and record transactions in a secure distributed database without a central operator.
## XRP
XRP is a public, counterparty-less asset native to the XRP Ledger, and is designed to bridge the many different currencies in use worldwide. XRP is traded on the open-market and is available for anyone to access. The XRP Ledger was created in 2012 with a finite supply of 100 billion units of XRP. Its creators gifted 80 billion XRP to a company, now called [Ripple](https://ripple.com/), to develop the XRP Ledger and its ecosystem. Ripple uses XRP the help build the Internet of Value, ushering in a world in which money moves as fast and efficiently as information does today.
XRP is a public, counterparty-free asset native to the XRP Ledger, and is designed to bridge the many different currencies in use worldwide. XRP is traded on the open-market and is available for anyone to access. The XRP Ledger was created in 2012 with a finite supply of 100 billion units of XRP. Its creators gifted 80 billion XRP to a company, now called [Ripple](https://ripple.com/), to develop the XRP Ledger and its ecosystem. Ripple uses XRP to help build the Internet of Value, ushering in a world in which money moves as fast and efficiently as information does today.
## `rippled`
## rippled
The server software that powers the XRP Ledger is called `rippled` and is available in this repository under the permissive [ISC open-source license](LICENSE). The `rippled` server is written primarily in C++ and runs on a variety of platforms.
### Build from Source
# Key Features of the XRP Ledger
* [Linux](Builds/linux/README.md)
* [Mac](Builds/macos/README.md)
* [Windows](Builds/VisualStudio2017/README.md)
## Key Features of the XRP Ledger
- **[Censorship-Resistant Transaction Processing][]:** No single party decides which transactions succeed or fail, and no one can "roll back" a transaction after it completes. As long as those who choose to participate in the network keep it healthy, they can settle transactions in seconds.
- **[Fast, Efficient Consensus Algorithm][]:** The XRP Ledger's consensus algorithm settles transactions in 4 to 5 seconds, processing at a throughput of up to 1500 transactions per second. These properties put XRP at least an order of magnitude ahead of other top digital assets.
- **[Finite XRP Supply][]:** When the XRP Ledger began, 100 billion XRP were created, and no more XRP will ever be created. (Each XRP is subdivisible down to 6 decimal places, for a grand total of 100 quintillion _drops_ of XRP.) The available supply of XRP decreases slowly over time as small amounts are destroyed to pay transaction costs.
- **[Finite XRP Supply][]:** When the XRP Ledger began, 100 billion XRP were created, and no more XRP will ever be created. The available supply of XRP decreases slowly over time as small amounts are destroyed to pay transaction costs.
- **[Responsible Software Governance][]:** A team of full-time, world-class developers at Ripple maintain and continually improve the XRP Ledger's underlying software with contributions from the open-source community. Ripple acts as a steward for the technology and an advocate for its interests, and builds constructive relationships with governments and financial institutions worldwide.
- **[Secure, Adaptable Cryptography][]:** The XRP Ledger relies on industry standard digital signature systems like ECDSA (the same scheme used by Bitcoin) but also supports modern, efficient algorithms like Ed25519. The extensible nature of the XRP Ledger's software makes it possible to add and disable algorithms as the state of the art in cryptography advances.
- **[Modern Features for Smart Contracts][]:** Features like Escrow, Checks, and Payment Channels support cutting-edge financial applications including the [Interledger Protocol](https://interledger.org/). This toolbox of advanced features comes with safety features like a process for amending the network and separate checks against invariant constraints.
@@ -29,7 +34,7 @@ The server software that powers the XRP Ledger is called `rippled` and is availa
## Source Code
[![travis-ci.org: Build Status](https://travis-ci.org/ripple/rippled.png?branch=develop)](https://travis-ci.org/ripple/rippled)
[![travis-ci.com: Build Status](https://travis-ci.com/ripple/rippled.svg?branch=develop)](https://travis-ci.com/ripple/rippled)
[![codecov.io: Code Coverage](https://codecov.io/gh/ripple/rippled/branch/develop/graph/badge.svg)](https://codecov.io/gh/ripple/rippled)
### Repository Contents


@@ -14,6 +14,111 @@ If you are using Red Hat Enterprise Linux 7 or CentOS 7, you can [update using `
# Releases
## Version 1.4.0
The `rippled` 1.4.0 release introduces several improvements and new features, including support for deleting accounts, improved peer slot management, improved CI integration and package building, and support for [C++17](https://en.wikipedia.org/wiki/C%2B%2B17) and [Boost 1.71](https://www.boost.org/users/history/version_1_71_0.html). Finally, this release removes the code for the `SHAMapV2` amendment, which failed to gain majority support and has been obsoleted by other improvements.
**New and Updated Features**
- The `DeletableAccounts` amendment which, if enabled, will make it possible for users to delete unused or unneeded accounts, recovering the account's reserve.
- Support for improved management of peer slots and the ability to add and remove reserved connections without requiring a restart of the server.
- Tracking and reporting of cumulative and instantaneous peer bandwidth usage.
- Preliminary support for post-processing historical shards after downloading to index their contents.
- Reporting the master public key alongside the ephemeral public key in the `validation` stream [subscriptions](https://xrpl.org/subscribe.html).
- Reporting consensus phase changes in the `server` stream [subscription](https://xrpl.org/subscribe.html).
**Bug Fixes**
- The `fixPayChanRecipientOwnerDir` amendment which corrects a minor technical flaw that would cause a payment channel to not appear in the recipient's owner directory, which made it unnecessarily difficult for users to enumerate all their payment channels.
- The `fixCheckThreading` amendment which corrects a minor technical flaw that caused checks to not be properly threaded against the account of the check's recipient.
- Respect the `ssl_verify` configuration option in the `SSLHTTPDownloader` and `HTTPClient` classes.
- Properly update the `server_state` when a server detects a disagreement between itself and the network.
- Allow Ed25519 keys to be used with the `channel_authorize` command.
## Version 1.3.1
The `rippled` 1.3.1 release improves the built-in deadlock detection code, improves logging during process startup, changes the package build pipeline and improves the build documentation.
**New and Updated Features**
This release has no new features.
**Bug Fixes**
- Add a LogicError when a deadlock is detected (355a7b04)
- Improve logging during process startup (7c24f7b1)
## Version 1.3.0
The `rippled` 1.3.0 release introduces several new features and overall improvements to the codebase, including the `fixMasterKeyAsRegularKey` amendment, code to adjust the timing of the consensus process and support for decentralized validator domain verification. The release also includes miscellaneous improvements including in the transaction censorship detection code, transaction validation code, manifest parsing code, configuration file parsing code, log file rotation code, and in the build, continuous integration, testing and package building pipelines.
**New and Updated Features**
- The `fixMasterKeyAsRegularKey` amendment which, if enabled, will correct a technical flaw that allowed setting an account's regular key to the account's master key.
- Code that allows validators to adjust the timing of the consensus process in near-real-time to account for connection delays.
- Support for decentralized validator domain verification by adding support for a "domain" field in manifests.
**Bug Fixes**
- Improve ledger trie ancestry tracking to reduce unnecessary error messages.
- More efficient detection of dry paths in the payment engine. Although not a transaction-breaking change, this should reduce spurious error messages in the log files.
## Version 1.2.4
The `rippled` 1.2.4 release improves the way that shard crawl requests are routed and the robustness of configured validator list retrieval by imposing a 20 second timeout.
**New and Updated Features**
This release has no new features.
**Bug Fixes**
- Use public keys when routing shard crawl requests
- Enforce a 20s timeout when making validator list requests (RIPD-1737)
## Version 1.2.3
The `rippled` 1.2.3 release corrects a technical flaw which in some circumstances can cause a null pointer dereference that can crash the server.
**New and Updated Features**
This release has no new features.
**Bug Fixes**
- Fix a technical flaw which in some circumstances can cause a null pointer dereference that can crash the server.
## Version 1.2.2
The `rippled` 1.2.2 release corrects a technical flaw in the fee escalation
engine which could cause some fee metrics to be calculated incorrectly. In some
circumstances this can potentially cause the server to crash.
**New and Updated Features**
This release has no new features.
**Bug Fixes**
- Fix a technical flaw in the fee escalation engine which could cause some fee metrics to be calculated incorrectly (4c06b3f86)
## Version 1.2.1
The `rippled` 1.2.1 release introduces several fixes including a change in the
information reported via the enhanced crawl functionality introduced in the
1.2.0 release, a fix for a potential race condition when processing a status
change message for a peer, and for a technical flaw that could cause a server
to not properly detect that it had lost all its peers.
The release also adds the `delivered_amount` field to more responses to simplify
the handling of payment or check cashing transactions.
**New and Updated Features**
This release has no new features.
**Bug Fixes**
- Fix a race condition during `TMStatusChange` handling (c8249981)
- Properly transition state to disconnected (9d027394)
- Display validator status only in response to admin requests (2d6a518a)
- Add the `delivered_amount` to more RPC commands (f2756914)
## Version 1.2.0
The `rippled` 1.2.0 release introduces the MultisignReserve Amendment, which


@@ -1,12 +1,15 @@
# Set environment variables.
environment:
# We bundle up protoc.exe and only the parts of boost and openssl we need so
# We bundle up only the parts of boost and openssl we need so
# that it's a small download. We also use appveyor's free cache, avoiding fees
# downloading from S3 each time.
# TODO: script to create this package.
RIPPLED_DEPS_PATH: rippled_deps17.04
RIPPLED_DEPS_URL: https://ripple.github.io/Downloads/appveyor/%RIPPLED_DEPS_PATH%.zip
RIPPLED_DEPS_PATH: rippled_deps17.05
RIPPLED_DEPS_BASE_URL: https://ripple.github.io/Downloads/appveyor
RIPPLED_OPENSSL: rippled_deps.openssl.1.0.2j.zip
RIPPLED_BOOST: rippled_deps.boost.1.70.zip
RIPPLED_BOOST_STAGE: rippled_deps.boost.stage.1.70.zip
# CMake honors these environment variables, setting the include/lib paths.
BOOST_ROOT: C:/%RIPPLED_DEPS_PATH%/boost
@@ -36,25 +39,34 @@ cache:
shallow_clone: true
install:
# We want protoc.exe on PATH.
- SET PATH=C:/%RIPPLED_DEPS_PATH%;%PATH%
# Download dependencies if appveyor didn't restore them from the cache.
# Use 7zip to unzip.
- ps: |
if (-not(Test-Path 'C:/$env:RIPPLED_DEPS_PATH')) {
echo "Download from $env:RIPPLED_DEPS_URL"
Start-FileDownload "$env:RIPPLED_DEPS_URL"
7z x "$($env:RIPPLED_DEPS_PATH).zip" -oC:\ -y > $null
if ($LastExitCode -ne 0) { throw "7z failed" }
if (-not(Test-Path "C:/$env:RIPPLED_DEPS_PATH")) {
$files = @(
"$env:RIPPLED_BOOST",
"$env:RIPPLED_BOOST_STAGE",
"$env:RIPPLED_OPENSSL"
)
For ($i=0; $i -lt $files.Length; $i++) {
$file = $files[$i]
$url = "$env:RIPPLED_DEPS_BASE_URL/$file"
echo "Download $file from $url"
Start-FileDownload "$url"
7z x "$file" -o"C:\$env:RIPPLED_DEPS_PATH" -y > $null
if ($LastExitCode -ne 0) { throw "7z failed" }
}
}
else
{
"Dependencies are in cache"
ls "C:/$env:RIPPLED_DEPS_PATH"
}
# Newer DEPS include a versions file.
# Dump it so we can verify correct behavior.
- ps: |
if (Test-Path "C:/$env:RIPPLED_DEPS_PATH/versions.txt") {
cat "C:/$env:RIPPLED_DEPS_PATH/versions.txt"
}
cat "C:/$env:RIPPLED_DEPS_PATH/version*.txt"
# TODO: This is giving me grief
# artifacts:

bin/ci/README.md Normal file

@@ -0,0 +1,24 @@
In this directory are two scripts, `build.sh` and `test.sh` used for building
and testing rippled.
(For now, they assume Bash and Linux. Once I get Windows containers for
testing, I'll try them there, but if Bash is not available, then they will
soon be joined by PowerShell scripts `build.ps` and `test.ps`.)
We don't want these scripts to require arcane invocations that can only be
pieced together from within a CI configuration. We want something that humans
can easily invoke, read, and understand, for when we eventually have to test
and debug them interactively. That means:
(1) They should work with no arguments.
(2) They should document their arguments.
(3) They should expand short arguments into long arguments.
While we want to provide options for common use cases, we don't need to offer
the kitchen sink. We can rightfully expect users with esoteric, complicated
needs to write their own scripts.
To make argument-handling easy for us, the implementers, we can just take all
arguments from environment variables. They have the nice advantage that every
command-line uses named arguments. For the benefit of us and our users, we
document those variables at the top of each script.
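
As a hedged usage sketch (assuming the repository root as the working directory), the environment-variable style looks like this for the two scripts shown below:

```bash
# Build a Release binary with clang using Ninja, then run the automated tests.
GENERATOR=Ninja COMPILER=clang BUILD_TYPE=Release bin/ci/build.sh
MANUAL_TESTS=false bin/ci/test.sh
```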

bin/ci/build.sh Executable file

@@ -0,0 +1,31 @@
#!/usr/bin/env bash
set -o xtrace
set -o errexit
# The build system. Either 'Unix Makefiles' or 'Ninja'.
GENERATOR=${GENERATOR:-Unix Makefiles}
# The compiler. Either 'gcc' or 'clang'.
COMPILER=${COMPILER:-gcc}
# The build type. Either 'Debug' or 'Release'.
BUILD_TYPE=${BUILD_TYPE:-Debug}
# Additional arguments to CMake.
# We use the `-` substitution here instead of `:-` so that callers can erase
# the default by setting `$CMAKE_ARGS` to the empty string.
CMAKE_ARGS=${CMAKE_ARGS-'-Dwerr=ON'}
# https://gitlab.kitware.com/cmake/cmake/issues/18865
CMAKE_ARGS="-DBoost_NO_BOOST_CMAKE=ON ${CMAKE_ARGS}"
if [[ ${COMPILER} == 'gcc' ]]; then
export CC='gcc'
export CXX='g++'
elif [[ ${COMPILER} == 'clang' ]]; then
export CC='clang'
export CXX='clang++'
fi
mkdir build
cd build
cmake -G "${GENERATOR}" -DCMAKE_BUILD_TYPE=${BUILD_TYPE} ${CMAKE_ARGS} ..
cmake --build . -- -j $(nproc)
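
An illustrative aside (not part of the script) on why `CMAKE_ARGS` uses the `-` expansion rather than `:-`:

```bash
#!/usr/bin/env bash
# ${VAR-default} falls back to the default only when VAR is unset;
# an explicitly empty value therefore erases the default entirely.
unset CMAKE_ARGS
CMAKE_ARGS=${CMAKE_ARGS-'-Dwerr=ON'}
echo "unset     -> [${CMAKE_ARGS}]"   # [-Dwerr=ON]

CMAKE_ARGS=''
CMAKE_ARGS=${CMAKE_ARGS-'-Dwerr=ON'}
echo "empty, -  -> [${CMAKE_ARGS}]"   # []

CMAKE_ARGS=''
CMAKE_ARGS=${CMAKE_ARGS:-'-Dwerr=ON'}
echo "empty, :- -> [${CMAKE_ARGS}]"   # [-Dwerr=ON]
```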

bin/ci/test.sh Executable file

@@ -0,0 +1,42 @@
#!/usr/bin/env bash
set -o xtrace
set -o errexit
# Set to 'true' to run the known "manual" tests in rippled.
MANUAL_TESTS=${MANUAL_TESTS:-false}
# The maximum number of concurrent tests.
CONCURRENT_TESTS=${CONCURRENT_TESTS:-$(nproc)}
# The path to rippled.
RIPPLED=${RIPPLED:-build/rippled}
# Additional arguments to rippled.
RIPPLED_ARGS=${RIPPLED_ARGS:-}
function join_by { local IFS="$1"; shift; echo "$*"; }
declare -a manual_tests=(
'beast.chrono.abstract_clock'
'beast.unit_test.print'
'ripple.NodeStore.Timing'
'ripple.app.Flow_manual'
'ripple.app.NoRippleCheckLimits'
'ripple.app.PayStrandAllPairs'
'ripple.consensus.ByzantineFailureSim'
'ripple.consensus.DistributedValidators'
'ripple.consensus.ScaleFreeSim'
'ripple.ripple_data.digest'
'ripple.tx.CrossingLimits'
'ripple.tx.FindOversizeCross'
'ripple.tx.Offer_manual'
'ripple.tx.OversizeMeta'
'ripple.tx.PlumpBook'
)
if [[ ${MANUAL_TESTS} == 'true' ]]; then
RIPPLED_ARGS+=" --unittest=$(join_by , "${manual_tests[@]}")"
else
RIPPLED_ARGS+=" --unittest --quiet --unittest-log"
fi
RIPPLED_ARGS+=" --unittest-jobs ${CONCURRENT_TESTS}"
${RIPPLED} ${RIPPLED_ARGS}
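
A hedged invocation sketch for `test.sh`, assuming it runs from the repository root after `build.sh` has produced `build/rippled`:

```bash
# Default: run the full automated unit-test suite with one job per CPU core.
bin/ci/test.sh

# Run only the known "manual" tests, at most four at a time.
MANUAL_TESTS=true CONCURRENT_TESTS=4 bin/ci/test.sh
```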


@@ -1,18 +1,18 @@
#!/bin/bash -u
# We use set -e and bash with -u to bail on first non zero exit code of any
# processes launched or upon any unbound variable.
# We use set -x to print commands before running them to help with
# debugging.
#!/usr/bin/env bash
set -ex
function version_ge() { test "$(echo "$@" | tr " " "\n" | sort -rV | head -n 1)" == "$1"; }
__dirname=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
echo "using CC: ${CC}"
"${CC}" --version
export CC
COMPNAME=$(basename $CC)
echo "using CXX: ${CXX:-notset}"
if [[ $CXX ]]; then
"${CXX}" --version
export CXX
"${CXX}" --version
export CXX
fi
: ${BUILD_TYPE:=Debug}
echo "BUILD TYPE: ${BUILD_TYPE}"
@@ -20,150 +20,231 @@ echo "BUILD TYPE: ${BUILD_TYPE}"
: ${TARGET:=install}
echo "BUILD TARGET: ${TARGET}"
# Ensure APP defaults to rippled if it's not set.
: ${APP:=rippled}
echo "using APP: ${APP}"
JOBS=${NUM_PROCESSORS:-2}
if [[ ${TRAVIS:-false} != "true" ]]; then
JOBS=$((JOBS+1))
JOBS=$((JOBS+1))
fi
if [[ ! -z "${CMAKE_EXE:-}" ]] ; then
export PATH="$(dirname ${CMAKE_EXE}):$PATH"
fi
if [ -x /usr/bin/time ] ; then
: ${TIME:="Duration: %E"}
export TIME
time=/usr/bin/time
: ${TIME:="Duration: %E"}
export TIME
time=/usr/bin/time
else
time=
time=
fi
if [[ -z "${MAX_TIME:-}" ]] ; then
timeout_cmd=""
else
timeout_cmd="timeout ${MAX_TIME}"
fi
echo "cmake building ${APP}"
echo "Building rippled"
: ${CMAKE_EXTRA_ARGS:=""}
if [[ ${NINJA_BUILD:-} == true ]]; then
CMAKE_EXTRA_ARGS+=" -G Ninja"
CMAKE_EXTRA_ARGS+=" -G Ninja"
fi
coverage=false
if [[ "${TARGET}" == "coverage_report" ]] ; then
echo "coverage option detected."
coverage=true
export PATH=$PATH:${LCOV_ROOT}/usr/bin
fi
cmake --version
CMAKE_VER=$(cmake --version | cut -d " " -f 3 | head -1)
#
# allow explicit setting of the name of the build
# dir, otherwise default to the compiler.build_type
#
: "${BUILD_DIR:=${COMPNAME}.${BUILD_TYPE}}"
BUILDARGS=" -j${JOBS}"
if [[ ${VERBOSE_BUILD:-} == true ]]; then
CMAKE_EXTRA_ARGS+=" -DCMAKE_VERBOSE_MAKEFILE=ON"
# TODO: if we use a different generator, this
# option to build verbose would need to change:
if [[ ${NINJA_BUILD:-} == true ]]; then
BUILDARGS+=" -v"
else
BUILDARGS+=" verbose=1"
fi
BUILDARGS="--target ${TARGET}"
BUILDTOOLARGS=""
if version_ge $CMAKE_VER "3.12.0" ; then
BUILDARGS+=" --parallel"
fi
if [[ ${NINJA_BUILD:-} == false ]]; then
if version_ge $CMAKE_VER "3.12.0" ; then
BUILDARGS+=" ${JOBS}"
else
BUILDTOOLARGS+=" -j ${JOBS}"
fi
fi
if [[ ${VERBOSE_BUILD:-} == true ]]; then
CMAKE_EXTRA_ARGS+=" -DCMAKE_VERBOSE_MAKEFILE=ON"
if version_ge $CMAKE_VER "3.14.0" ; then
BUILDARGS+=" --verbose"
else
if [[ ${NINJA_BUILD:-} == false ]]; then
BUILDTOOLARGS+=" verbose=1"
else
BUILDTOOLARGS+=" -v"
fi
fi
fi
if [[ ${USE_CCACHE:-} == true ]]; then
echo "using ccache with basedir [${CCACHE_BASEDIR:-}]"
CMAKE_EXTRA_ARGS+=" -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache"
echo "using ccache with basedir [${CCACHE_BASEDIR:-}]"
CMAKE_EXTRA_ARGS+=" -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache"
fi
if [ -d "build/${BUILD_DIR}" ]; then
rm -rf "build/${BUILD_DIR}"
rm -rf "build/${BUILD_DIR}"
fi
mkdir -p "build/${BUILD_DIR}"
pushd "build/${BUILD_DIR}"
# generate
${time} cmake ../.. -DCMAKE_BUILD_TYPE=${BUILD_TYPE} ${CMAKE_EXTRA_ARGS}
# build
export DESTDIR=$(pwd)/_INSTALLED_
time ${timeout_cmd} cmake --build . --target ${TARGET} -- $BUILDARGS
${time} eval cmake --build . ${BUILDARGS} -- ${BUILDTOOLARGS}
if [[ ${TARGET} == "docs" ]]; then
## mimic the standard test output for docs build
## to make controlling processes like jenkins happy
if [ -f html_doc/index.html ]; then
echo "1 case, 1 test total, 0 failures"
else
echo "1 case, 1 test total, 1 failures"
fi
exit
## mimic the standard test output for docs build
## to make controlling processes like jenkins happy
if [ -f html_doc/index.html ]; then
echo "1 case, 1 test total, 0 failures"
else
echo "1 case, 1 test total, 1 failures"
fi
exit
fi
popd
export APP_PATH="$PWD/build/${BUILD_DIR}/${APP}"
if [[ "${TARGET}" == "validator-keys" ]] ; then
export APP_PATH="$PWD/build/${BUILD_DIR}/validator-keys/validator-keys"
else
export APP_PATH="$PWD/build/${BUILD_DIR}/rippled"
fi
echo "using APP_PATH: ${APP_PATH}"
# See what we've actually built
ldd ${APP_PATH}
function join_by { local IFS="$1"; shift; echo "$*"; }
# This is a list of manual tests
# in rippled that we want to run
declare -a manual_tests=(
"beast.chrono.abstract_clock"
"beast.unit_test.print"
"ripple.NodeStore.Timing"
"ripple.app.Flow_manual"
"ripple.app.NoRippleCheckLimits"
"ripple.app.PayStrandAllPairs"
"ripple.consensus.ByzantineFailureSim"
"ripple.consensus.DistributedValidators"
"ripple.consensus.ScaleFreeSim"
"ripple.ripple_data.digest"
"ripple.tx.CrossingLimits"
"ripple.tx.FindOversizeCross"
"ripple.tx.Offer_manual"
"ripple.tx.OversizeMeta"
"ripple.tx.PlumpBook"
)
: ${APP_ARGS:=}
if [[ ${APP} == "rippled" ]]; then
if [[ ${MANUAL_TESTS:-} == true ]]; then
APP_ARGS+=" --unittest=$(join_by , "${manual_tests[@]}")"
else
APP_ARGS+=" --unittest --quiet --unittest-log"
fi
if [[ ${coverage} == false && ${PARALLEL_TESTS:-} == true ]]; then
APP_ARGS+=" --unittest-jobs ${JOBS}"
fi
if [[ "${TARGET}" == "validator-keys" ]] ; then
APP_ARGS="--unittest"
else
function join_by { local IFS="$1"; shift; echo "$*"; }
# This is a list of manual tests
# in rippled that we want to run
# ORDER matters here...sorted in approximately
# descending execution time (longest running tests at top)
declare -a manual_tests=(
'ripple.ripple_data.digest'
'ripple.tx.Offer_manual'
'ripple.app.PayStrandAllPairs'
'ripple.tx.CrossingLimits'
'ripple.tx.PlumpBook'
'ripple.app.Flow_manual'
'ripple.tx.OversizeMeta'
'ripple.consensus.DistributedValidators'
'ripple.app.NoRippleCheckLimits'
'ripple.NodeStore.Timing'
'ripple.consensus.ByzantineFailureSim'
'beast.chrono.abstract_clock'
'beast.unit_test.print'
)
if [[ ${TRAVIS:-false} != "true" ]]; then
# these two tests cause travis CI to run out of memory.
# TODO: investigate possible workarounds.
manual_tests=(
'ripple.consensus.ScaleFreeSim'
'ripple.tx.FindOversizeCross'
"${manual_tests[@]}"
)
fi
if [[ ${MANUAL_TESTS:-} == true ]]; then
APP_ARGS+=" --unittest=$(join_by , "${manual_tests[@]}")"
else
APP_ARGS+=" --unittest --quiet --unittest-log"
fi
if [[ ${coverage} == false && ${PARALLEL_TESTS:-} == true ]]; then
APP_ARGS+=" --unittest-jobs ${JOBS}"
fi
if [[ ${IPV6_TESTS:-} == true ]]; then
APP_ARGS+=" --unittest-ipv6"
fi
fi
if [[ ${coverage} == true ]]; then
# Push the results (lcov.info) to codecov
codecov -X gcov # don't even try and look for .gcov files ;)
find . -name "*.gcda" | xargs rm -f
if [[ ${coverage} == true && $CC =~ ^gcc ]]; then
# Push the results (lcov.info) to codecov
codecov -X gcov # don't even try and look for .gcov files ;)
find . -name "*.gcda" | xargs rm -f
fi
if [[ ${SKIP_TESTS:-} == true ]]; then
echo "skipping tests."
exit
echo "skipping tests."
exit
fi
if [[ ${DEBUGGER:-true} == "true" && -v GDB_ROOT && -x ${GDB_ROOT}/bin/gdb ]]; then
${GDB_ROOT}/bin/gdb -v
# Execute unit tests under gdb, printing a call stack
# if we get a crash.
export APP_ARGS
${timeout_cmd} ${GDB_ROOT}/bin/gdb -return-child-result -quiet -batch \
-ex "set env MALLOC_CHECK_=3" \
-ex "set print thread-events off" \
-ex run \
-ex "thread apply all backtrace full" \
-ex "quit" \
--args ${APP_PATH} ${APP_ARGS}
ulimit -a
corepat=$(cat /proc/sys/kernel/core_pattern)
if [[ ${corepat} =~ ^[:space:]*\| ]] ; then
echo "WARNING: core pattern is piping - can't search for core files"
look_core=false
else
${timeout_cmd} ${APP_PATH} ${APP_ARGS}
look_core=true
coredir=$(dirname ${corepat})
fi
if [[ ${look_core} == true ]]; then
before=$(ls -A1 ${coredir})
fi
set +e
echo "Running tests for ${APP_PATH}"
if [[ ${MANUAL_TESTS:-} == true && ${PARALLEL_TESTS:-} != true ]]; then
for t in "${manual_tests[@]}" ; do
${APP_PATH} --unittest=${t}
TEST_STAT=$?
if [[ $TEST_STAT -ne 0 ]] ; then
break
fi
done
else
${APP_PATH} ${APP_ARGS}
TEST_STAT=$?
fi
set -e
if [[ ${look_core} == true ]]; then
after=$(ls -A1 ${coredir})
oIFS="${IFS}"
IFS=$'\n\r'
found_core=false
for l in $(diff -w --suppress-common-lines <(echo "$before") <(echo "$after")) ; do
if [[ "$l" =~ ^[[:space:]]*\>[[:space:]]*(.+)$ ]] ; then
corefile="${BASH_REMATCH[1]}"
echo "FOUND core dump file at '${coredir}/${corefile}'"
gdb_output=$(/bin/mktemp /tmp/gdb_output_XXXXXXXXXX.txt)
found_core=true
gdb \
-ex "set height 0" \
-ex "set logging file ${gdb_output}" \
-ex "set logging on" \
-ex "print 'ripple::BuildInfo::versionString'" \
-ex "thread apply all backtrace full" \
-ex "info inferiors" \
-ex quit \
"$APP_PATH" \
"${coredir}/${corefile}" &> /dev/null
echo -e "CORE INFO: \n\n $(cat ${gdb_output}) \n\n)"
fi
done
IFS="${oIFS}"
fi
if [[ ${found_core} == true ]]; then
exit -1
else
exit $TEST_STAT
fi


@@ -0,0 +1,36 @@
#!/usr/bin/env bash
# run our build script in a docker container
# using travis-ci hosts
set -eux
function join_by { local IFS="$1"; shift; echo "$*"; }
set +x
echo "VERBOSE_BUILD=true" > /tmp/co.env
matchers=(
'TRAVIS.*' 'CI' 'CC' 'CXX'
'BUILD_TYPE' 'TARGET' 'MAX_TIME'
'CODECOV.+' 'CMAKE.*' '.+_TESTS'
'.+_OPTIONS' 'NINJA.*' 'NUM_.+'
'NIH_.+' 'BOOST.*' '.*CCACHE.*')
matchstring=$(join_by '|' "${matchers[@]}")
echo "MATCHSTRING IS:: $matchstring"
env | grep -E "^(${matchstring})=" >> /tmp/co.env
set -x
# need to eliminate TRAVIS_CMD...don't want to pass it to the container
cat /tmp/co.env | grep -v TRAVIS_CMD > /tmp/co.env.2
mv /tmp/co.env.2 /tmp/co.env
cat /tmp/co.env
mkdir -p -m 0777 ${TRAVIS_BUILD_DIR}/cores
echo "${TRAVIS_BUILD_DIR}/cores/%e.%p" | sudo tee /proc/sys/kernel/core_pattern
docker run \
-t --env-file /tmp/co.env \
-v ${TRAVIS_HOME}:${TRAVIS_HOME} \
-w ${TRAVIS_BUILD_DIR} \
--cap-add SYS_PTRACE \
--ulimit "core=-1" \
$DOCKER_IMAGE \
/bin/bash -c 'if [[ $CC =~ ([[:alpha:]]+)-([[:digit:].]+) ]] ; then sudo update-alternatives --set ${BASH_REMATCH[1]} /usr/bin/$CC; fi; bin/ci/ubuntu/build-and-test.sh'


@@ -1,29 +0,0 @@
#!/bin/bash -u
# Exit if anything fails. Echo commands to aid debugging.
set -ex
# Target working dir - defaults to current dir.
# Can be set from caller, or in the first parameter
TWD=$( cd ${TWD:-${1:-${PWD:-$( pwd )}}}; pwd )
echo "Target path is: $TWD"
# Override gcc version to $GCC_VER.
# Put an appropriate symlink at the front of the path.
mkdir -pv $HOME/bin
for g in gcc g++ gcov gcc-ar gcc-nm gcc-ranlib
do
test -x $( type -p ${g}-$GCC_VER )
ln -sv $(type -p ${g}-$GCC_VER) $HOME/bin/${g}
done
# What versions are we ACTUALLY running?
if [ -x $HOME/bin/g++ ]; then
$HOME/bin/g++ -v
else
g++ -v
fi
pip install --user requests==2.13.0
pip install --user https://github.com/codecov/codecov-python/archive/master.zip
bash bin/sh/install-boost.sh


@@ -0,0 +1,29 @@
#!/usr/bin/env bash
# some cached files create churn, so save them here for
# later restoration before packing the cache
set -eux
pushd ${TRAVIS_HOME}
if [ -f cache_ignore.tar ] ; then
rm -f cache_ignore.tar
fi
if [ -d _cache/nih_c ] ; then
find _cache/nih_c -name "build.ninja" | tar rf cache_ignore.tar --files-from -
find _cache/nih_c -name ".ninja_deps" | tar rf cache_ignore.tar --files-from -
find _cache/nih_c -name ".ninja_log" | tar rf cache_ignore.tar --files-from -
find _cache/nih_c -name "*.log" | tar rf cache_ignore.tar --files-from -
find _cache/nih_c -name "*.tlog" | tar rf cache_ignore.tar --files-from -
# show .a files in the cache, for sanity checking
find _cache/nih_c -name "*.a" -ls
fi
if [ -d _cache/ccache ] ; then
find _cache/ccache -name "stats" | tar rf cache_ignore.tar --files-from -
fi
if [ -f cache_ignore.tar ] ; then
tar -tf cache_ignore.tar
fi
popd


@@ -1,32 +0,0 @@
#!/bin/sh
# Assumptions:
# 1) BOOST_ROOT and BOOST_URL are already defined,
# and contain valid values.
# 2) The last namepart of BOOST_ROOT matches the
# folder name internal to boost's .tar.gz
# When testing you can force a boost build by clearing travis caches:
# https://travis-ci.org/ripple/rippled/caches
set -e
if [ -x /usr/bin/time ] ; then
: ${TIME:="Duration: %E"}
export TIME
time=/usr/bin/time
else
time=
fi
if [ ! -d "$BOOST_ROOT/lib" ]
then
wget $BOOST_URL -O /tmp/boost.tar.gz
cd `dirname $BOOST_ROOT`
rm -fr ${BOOST_ROOT}
tar xzf /tmp/boost.tar.gz
cd $BOOST_ROOT && \
$time ./bootstrap.sh --prefix=$BOOST_ROOT && \
$time ./b2 -d1 define=_GLIBCXX_USE_CXX11_ABI=0 -j$((2*${NUM_PROCESSORS:-2})) &&\
$time ./b2 -d0 define=_GLIBCXX_USE_CXX11_ABI=0 install
else
echo "Using cached boost at $BOOST_ROOT"
fi

bin/sh/install-vcpkg.sh Executable file

@@ -0,0 +1,33 @@
#!/usr/bin/env bash
set -exu
if [[ -z ${COMSPEC:-} ]]; then
EXE="vcpkg"
else
EXE="vcpkg.exe"
fi
if [[ -d "${VCPKG_DIR}" && -x "${VCPKG_DIR}/${EXE}" ]] ; then
echo "Using cached vcpkg at ${VCPKG_DIR}"
${VCPKG_DIR}/${EXE} list
exit
fi
if [[ -d "${VCPKG_DIR}" ]] ; then
rm -rf "${VCPKG_DIR}"
fi
git clone --branch 2019.06 https://github.com/Microsoft/vcpkg.git ${VCPKG_DIR}
pushd ${VCPKG_DIR}
if [[ -z ${COMSPEC:-} ]]; then
chmod +x ./bootstrap-vcpkg.sh
./bootstrap-vcpkg.sh
else
./bootstrap-vcpkg.bat
fi
popd
# TODO -- can pin specific ports to a commit/version like this:
#git checkout <SOME COMMIT HASH> ports/boost
${VCPKG_DIR}/${EXE} install openssl
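
A hedged usage sketch for the script above; the cache location is an assumption, since the script only requires `VCPKG_DIR` to point somewhere writable:

```bash
# First run: clone and bootstrap vcpkg at ${VCPKG_DIR}, then install openssl.
# Later runs: detect the cached tree and just list the installed ports.
VCPKG_DIR="$HOME/vcpkg" bin/sh/install-vcpkg.sh
```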

bin/sh/setup-msvc.sh Executable file

@@ -0,0 +1,38 @@
# NOTE: must be sourced from a shell so it can export vars
cat << BATCH > ./getenv.bat
CALL %*
ENV
BATCH
while read line ; do
IFS='"' read x path arg <<<"${line}"
if [ -f "${path}" ] ; then
echo "FOUND: $path"
export VCINSTALLDIR=$(./getenv.bat "${path}" ${arg} | grep "^VCINSTALLDIR=" | sed -E "s/^VCINSTALLDIR=//g")
if [ "${VCINSTALLDIR}" != "" ] ; then
echo "USING ${VCINSTALLDIR}"
export LIB=$(./getenv.bat "${path}" ${arg} | grep "^LIB=" | sed -E "s/^LIB=//g")
export LIBPATH=$(./getenv.bat "${path}" ${arg} | grep "^LIBPATH=" | sed -E "s/^LIBPATH=//g")
export INCLUDE=$(./getenv.bat "${path}" ${arg} | grep "^INCLUDE=" | sed -E "s/^INCLUDE=//g")
ADDPATH=$(./getenv.bat "${path}" ${arg} | grep "^PATH=" | sed -E "s/^PATH=//g")
export PATH="${ADDPATH}:${PATH}"
break
fi
fi
done <<EOL
"C:/Program Files (x86)/Microsoft Visual Studio/2017/BuildTools/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Auxiliary/Build/vcvarsall.bat" x86_amd64
"C:/Program Files (x86)/Microsoft Visual Studio 15.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 13.0/VC/vcvarsall.bat" amd64
"C:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/vcvarsall.bat" amd64
EOL
# TODO: update the list above as needed to support newer versions of msvc tools
rm -f getenv.bat
if [ "${VCINSTALLDIR}" = "" ] ; then
echo "No compatible visual studio found!"
fi
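
A usage sketch for `setup-msvc.sh`, reflecting the note at the top of the script that it must be sourced rather than executed:

```bash
# Source (don't execute) the script so the exported variables
# (VCINSTALLDIR, LIB, LIBPATH, INCLUDE, PATH) persist in the calling shell.
source bin/sh/setup-msvc.sh
# After this, the MSVC toolchain directories are on PATH for later build steps.
```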


@@ -1,6 +1,5 @@
#-------------------------------------------------------------------------------
#
# Rippled Server Instance Configuration Example
#
#-------------------------------------------------------------------------------
#
@@ -34,14 +33,13 @@
#
# rippled.cfg
#
# For more information on where the rippled server instance searches for
# the file please visit the Ripple wiki. Specifically, the section explaining
# the --conf command line option:
# For more information on where the rippled server instance searches for the
# file, visit:
#
# https://ripple.com/wiki/Rippled#--conf.3Dpath
# https://developers.ripple.com/commandline-usage.html#generic-options
#
# This file should be named rippled.cfg. This file is UTF-8 with Dos, UNIX,
# or Mac style end of lines. Blank lines and lines beginning with '#' are
# This file should be named rippled.cfg. This file is UTF-8 with DOS, UNIX,
# or Mac style end of lines. Blank lines and lines beginning with '#' are
# ignored. Undefined sections are reserved. No escapes are currently defined.
#
# Notation
@@ -49,8 +47,8 @@
# In this document a simple BNF notation is used. Angle brackets denote
# required elements, square brackets denote optional elements, and single
# quotes indicate string literals. A vertical bar separating 1 or more
# elements is a logical "or"; Any one of the elements may be chosen.
# Parenthesis are notational only, and used to group elements, they are not
# elements is a logical "or"; any one of the elements may be chosen.
# Parentheses are notational only, and used to group elements; they are not
# part of the syntax unless they appear in quotes. White space may always
# appear between elements, it has no effect on values.
#
@@ -126,13 +124,13 @@
# port = 80
#
# [port_public]
# ip=0.0.0.0
# ip = 0.0.0.0
# port = 443
# protocol=peer,https
# protocol = peer,https
#
# [port_private]
# ip=127.0.0.1
# protocol=http
# ip = 127.0.0.1
# protocol = http
#
# When rippled is used as a command line client (for example, issuing a
# server stop command), the first port advertising the http or https
@@ -148,7 +146,11 @@
# ip = <IP-address>
#
# Required. Determines the IP address of the network interface to bind
# to. To bind to all available interfaces, uses 0.0.0.0
# to. To bind to all available IPv4 interfaces, use 0.0.0.0
# To bind to all IPv4 and IPv6 interfaces, use ::
#
# NOTE if the ip value is ::, then any incoming IPv4 connections will
# be made as mapped IPv4 addresses.
#
# port = <number>
#
@@ -200,11 +202,17 @@
#
# When set, grants administrative command access to the specified IP
# addresses. These commands may be issued over http, https, ws, or wss
# if configured on the port. If unspecified, the default is to not allow
# if configured on the port. If not provided, the default is to not allow
# administrative commands.
#
# NOTE A common configuration value for the admin field is "localhost".
# If you are listening on all IPv4/IPv6 addresses by specifying
# ip = :: then you can use admin = ::ffff:127.0.0.1,::1 to allow
# administrative access from both IPv4 and IPv6 localhost
# connections.
#
# *SECURITY WARNING*
# 0.0.0.0 may be specified to allow access from any IP address. It must
# 0.0.0.0 or :: may be used to allow access from any IP address. It must
# be the only address specified and cannot be combined with other IPs.
# Use of this address can compromise server security, please consider its
# use carefully.
@@ -282,7 +290,7 @@
# keep rippled from connecting to other instances of rippled or
# prevent RPC and WebSocket clients from connecting.
#
# send_queue_limit = = [1..65535]
# send_queue_limit = [1..65535]
#
# A Websocket will disconnect when its send queue exceeds this limit.
# The default is 100. A larger value may help with erratic disconnects but
@@ -360,29 +368,26 @@
#
# [ips]
#
# List of hostnames or ips where the Ripple protocol is served. For a starter
# list, you can either copy entries from: https://ripple.com/ripple.txt or if
# you prefer you can specify r.ripple.com 51235
# List of hostnames or ips where the Ripple protocol is served. A default
# starter list is included in the code and used if no other hostnames are
# available.
#
# One IPv4 address or domain names per line is allowed. A port may must be
# specified after adding a space to the address. By convention, if known,
# IPs are listed in from most to least trusted.
# One address or domain name per line is allowed. A port must be
# specified after adding a space to the address. The ordering of entries
# does not generally matter.
#
# The default list of entries is:
# - r.ripple.com 51235
# - zaphod.alloy.ee 51235
# - sahyadri.isrdc.in 51235
#
# Examples:
# 192.168.0.1
# 192.168.0.1 3939
# r.ripple.com 51235
#
# This will give you a good, up-to-date list of addresses:
#
# [ips]
# 192.168.0.1
# 192.168.0.1 2459
# r.ripple.com 51235
#
# The default is:
# [ips_fixed] addresses (if present)
# or
# ( r.ripple.com 51235 , zaphod.alloy.ee 51235 )
#
#
# [ips_fixed]
#
@@ -392,8 +397,8 @@
# Ripple network through a public-facing server, or for building a set
# of cluster peers.
#
# One IPv4 address or domain names per line is allowed. A port must be
# specified after adding a space to the address.
# One address or domain name per line is allowed. A port must be specified
# after adding a space to the address.
#
#
#
@@ -819,13 +824,7 @@
# RocksDB is an alternative backend for systems that don't use solid-state
# drives. Because RocksDB's performance degrades as it stores more data,
# keeping full history is not advised, and using online delete is
# recommended. RocksDB is not available on Windows.
#
# The RocksDB backend also provides these optional parameters:
#
# compression 0 for none, 1 for Snappy compression
#
#
# recommended.
#
# Required keys:
# path Location to store the database (all types)
@@ -845,7 +844,7 @@
#
# earliest_seq The default is 32570 to match the XRP ledger
# network's earliest allowed sequence. Alternate
# networks may set this value. Minimum value of 1.
# networks may set this value. Minimum value of 1.
#
# Notes:
# The 'node_db' entry configures the primary, persistent storage.
@@ -865,22 +864,10 @@
# ...
#
# Example:
# type=nudb
# path=db/shards/nudb
#
# The "type" field must be present and controls the choice of backend:
#
# type = NuDB
# NuDB is recommended for shards.
#
# type = RocksDB
#
# The RocksDB backend also provides these optional parameters:
#
# compression 0 for none, 1 for Snappy compression
#
# Required keys:
# path Location to store the database (all types)
# path Location to store the database
#
# max_size_gb Maximum disk space the database will utilize (in gigabytes)
#
@@ -1023,7 +1010,7 @@
#
# 8. Misc Settings
#
#----------
#-----------------
#
# [signing_support]
#
@@ -1051,38 +1038,42 @@
#
# <flag>
#
# Enable or disable access to /crawl requests. Default is '1'
# Enable or disable access to /crawl requests. Default is '1' which
# enables access.
#
# overlay = <flag>
#
# Report information about peers this server is connected to, similar
# to the "peers" RPC API. Default is '1'.
# to the "peers" RPC API. Default is '1' which means to report peer
# overlay info.
#
# server = <flag>
#
# Report information about the local server, similar to the "server_state"
# RPC API. Default is '1'.
# RPC API. Default is '1' which means to report local server info.
#
# counts = <flag>
#
# Report information about the local server health counters, similar to
# the "get_counts" RPC API. Default is '0'.
# the "get_counts" RPC API. Default is '0' which means not to report
# server counts.
#
# unl = <flag>
#
# Report information about the local server's validator lists, similar to
# the "validators" and "validator_list_sites" RPC APIs. Default is '1'.
# the "validators" and "validator_list_sites" RPC APIs. Default is '1'
# which means to report server validator lists.
#
# Example:
# Examples:
#
# [crawl]
# 0
#
# [crawl]
# overlay = 1 # report peer overlay info
# server = 1 # report local server info
# counts = 0 # do not report server counts
# unl = 1 # report server validator lists
# overlay = 1
# server = 1
# counts = 0
# unl = 1
#
#-------------------------------------------------------------------------------
#
@@ -1149,6 +1140,8 @@ protocol = http
[port_peer]
port = 51235
ip = 0.0.0.0
# alternatively, to accept connections on IPv4 + IPv6, use:
#ip = ::
protocol = peer
[port_ws_admin_local]
@@ -1188,7 +1181,6 @@ advisory_delete=0
# NuDB requires SSD storage. Helpful information can be found here
# https://ripple.com/build/history-sharding
#[shard_db]
#type=NuDB
#path=/var/lib/rippled/db/shards/nudb
#max_size_gb=500

Some files were not shown because too many files have changed in this diff.