Compare commits

853 Commits

Author SHA1 Message Date
Vinnie Falco
c7118a183a Set version to 0.28.1-rc2 2015-05-13 12:41:04 -07:00
JoelKatz
b1881e798b Control query depth based on latency:
This changes the TMGetLedger protocol in a backward-compatible way to include
a "query depth" parameter - the number of extra levels in the SHAMap tree
that a server should return in the corresponding TMLedgerData. Depending
on the value or absence of the field, a server may adjust the amount of
returned data based on the observed latency of the requestor: higher
latencies will return larger data sets (to compensate for greater
request/response turnaround times).
2015-05-13 12:40:16 -07:00
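A minimal sketch of the idea in the commit above, assuming a hypothetical helper named queryDepthForLatency and made-up latency thresholds (this is not the rippled implementation): the requestor's observed round-trip latency maps to how many extra SHAMap levels the server includes in its TMLedgerData reply.

    #include <chrono>
    #include <cstdint>

    // Hypothetical sketch only: pick how many extra SHAMap tree levels to
    // include in a TMLedgerData reply based on the requestor's observed
    // latency. Slower links get more levels per round trip; the thresholds
    // here are illustrative, not values taken from rippled.
    std::uint32_t queryDepthForLatency(std::chrono::milliseconds latency)
    {
        if (latency >= std::chrono::milliseconds(250))
            return 3;
        if (latency >= std::chrono::milliseconds(100))
            return 2;
        return 1;
    }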
Nik Bougalis
d44230b745 Correctly clamp when the taker balance is the limiting factor 2015-05-13 12:40:09 -07:00
Vinnie Falco
7b417b9d51 Set version to 0.28.1-rc1 2015-05-12 17:21:48 -07:00
Vinnie Falco
cc05e5727d Merge release into develop 2015-05-12 17:20:43 -07:00
Vinnie Falco
764a8f2644 Set version to 0.28.1-b10 2015-05-12 12:47:56 -07:00
JoelKatz
a15785eb64 Reduce severity of some logging messages 2015-05-12 12:47:56 -07:00
Vinnie Falco
688f8c5f3f Add historical ledger fetches per minute to get_counts 2015-05-12 12:47:56 -07:00
Vinnie Falco
dde5ccf7fa Add DecayWindow implementation 2015-05-12 12:47:55 -07:00
Vinnie Falco
d5a6313c71 Add RangeSet::lebesgue_sum 2015-05-12 09:50:12 -07:00
Vinnie Falco
f030aab759 Set version to 0.28.1-b9 2015-05-11 18:14:45 -07:00
JoelKatz
4393f98a2c History fetch changes:
* Don't pollute ledger cache with history
* Avoid race condition when getting ledger sequence numbers
* Make fetch packs larger
2015-05-11 18:14:45 -07:00
JoelKatz
c377d6c94b InboundLedgers improvements:
* Change findCreate to acquire
* Return Ledger rather than InboundLedger
2015-05-11 18:14:45 -07:00
JoelKatz
16aa015682 Fix off-by-one error in SHAMapNodeID:
The limit is 64 inner nodes at depths 0-63, plus one leaf at depth 64.
2015-05-11 18:14:45 -07:00
Miguel Portilla
9cded76cf0 Fix RPC ledger synchronization requirements:
* Better rules specific to each lookup case:
  * By hash: Any ledger found by hash is valid.
  * By numeric index: If rippled is out of sync, and the index is after the
    validated ledger, return "InsufficientNetworkMode" error.
  * By named index: If rippled is out of sync, or closed/current is requested
    and significantly older than the validated ledger, return
    "InsufficientNetworkMode" error.
2015-05-11 18:14:45 -07:00
Vinnie Falco
4ad07bb6b2 Fix hops adjustment for validations 2015-05-11 18:14:45 -07:00
David Schwartz
d0b28a6700 Compute validated ledger age from signing time 2015-05-11 18:14:39 -07:00
Vinnie Falco
18299c3f7a Tidy up PeerSet:
* Move PeerSet to overlay/
* Remove unused functions
* Make some public members private
* Rename some functions
* Add comments
2015-05-11 12:06:14 -07:00
Miguel Portilla
ca07a1230b Add filtering to Account Objects (RIPD-868) 2015-05-11 11:58:35 -07:00
Edward Hennis
e0ad66d967 Fail Travis if scons vcxproj produces differences 2015-05-11 11:56:45 -07:00
seelabs
5615c4a2a7 Force scons to include soci version file:
Running `scons vcxproj` will sometimes include
soci's version.h and sometimes it will not. This
patch forces it to always include the file.
2015-05-11 11:56:42 -07:00
Nik Bougalis
d7fbef6764 Set version to 0.28.1-b8 2015-05-06 14:00:34 -07:00
JoelKatz
e95bda3bdf Peer latency tracking (RIPD-879):
Track peer latency, report in RPC, make peer selection for
fetching latency aware.

This also cleans up the PeerImp timer to minimize
resetting. Indirect routing is made latency-aware as well.
2015-05-06 13:38:59 -07:00
JoelKatz
c010a85ef5 Check the correct progress flag on transaction root node receipt 2015-05-06 13:38:59 -07:00
MarkusTeufelberger
798d36efcf Fix typo in LedgerMaster.cpp 2015-05-06 13:25:50 -07:00
Nik Bougalis
2d44c8568f Eliminate need for ledger in delivered_amount calculation (RIPD-860) 2015-05-06 13:25:50 -07:00
Nik Bougalis
7232bdb40c Reduce PeerFinder log verbosity 2015-05-06 13:25:50 -07:00
David Schwartz
45f092488a Simplify InboundLedger expiration (RIPD-873)
Let sweep logic remove obsolete ledger requests.
Touch inbound ledgers to prevent sweeping on requests.
Update sequence number if possible.
2015-05-06 13:25:50 -07:00
JoelKatz
4244e1070d Improvements to STParsedJSON:
* Cleanups and reduction of copying
* Add STArray::back, operator[], push_back(&&)
* Add make_stvar
* Rework STParsedJSON
* Fix code and unit tests that use STParsedJSON
* STTx move constructor
2015-05-06 13:11:24 -07:00
Nik Bougalis
5a7fa8cfa9 Reduce STAmount public interface (RIPD-867):
The STAmount class includes a number of functions which serve as thin
wrappers, which are unused or used only in one place, or which break
encapsulation by exposing internal implementation details. Removing
such functions simplifies the interface of the class and ensures
consistency.

* getSNValue and getNValue are now free functions
* canonicalizeRound is no longer exposed
* Removed addRound and subRound
* Removed overloads of multiply, mulRound, divide and divRound
2015-05-06 13:11:24 -07:00
Josh Juran
daf4f8fcde Remove wallet_accounts and generator maps (RIPD-804):
* Remove the deprecated wallet_accounts command.
* Remove dead code for generator maps.
* Remove the help for the obsolete wallet_add and wallet_claim commands
  (which have already been removed).
2015-05-06 13:11:24 -07:00
Miguel Portilla
d182d1455e Relax RPC ledger synchronization requirements (RIPD-27, RIPD-840):
This enhances the reporting capability of RPC::LookupLedger and reduces
the requirement of a current ledger for many RPC commands.

The perceived up-time of client handlers improves since requests will
not depend on the server being fully synced.
2015-05-06 13:10:47 -07:00
Vinnie Falco
dc2260adbe Set version to 0.28.1-b7 2015-04-29 16:44:47 -07:00
Vinnie Falco
83a01e0c7d Set hopsAware version cutoff to 0.28.1-b7 2015-04-29 16:44:05 -07:00
Tom Ritchford
53c1269ebd Set version to 0.28.1-b6 2015-04-29 14:34:54 -04:00
Nik Bougalis
f8bfe3a550 Terminate process on SIGINT in all cases 2015-04-29 14:34:54 -04:00
Vinnie Falco
90bb53af20 Structured Overlay support for TTL limited messages:
When the [overlay] configuration key "expire" is set to 1, proposals
and validations will include a hops field. The hops is incremented with
each relay. Messages with a hop count will be dropped when they exceed
the TTL (Time to Live). Messages containing a hops field will not be
relayed or broadcast to older versions of rippled that don't understand
the field.

This change will not affect normal operation of the network or rippled
instances that do not set "expire" to 1.
2015-04-29 14:34:54 -04:00
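A minimal sketch of the relay rule described in the commit above, using a hypothetical shouldRelay helper (not the rippled code): the hop count grows by one at each relay, messages past the TTL are dropped, and hops-bearing messages are never sent to peers that do not understand the field.

    #include <cstdint>

    // Hypothetical sketch of the TTL rule: the caller passes the message's
    // current hop count, the configured TTL, and whether the receiving peer
    // understands the hops field.
    bool shouldRelay(bool peerUnderstandsHops, bool hasHopsField,
                     std::uint32_t hops, std::uint32_t ttl)
    {
        if (!hasHopsField)
            return true;            // legacy message: relay as before
        if (!peerUnderstandsHops)
            return false;           // never forward to older rippled versions
        return hops + 1 <= ttl;     // drop once the hop count exceeds the TTL
    }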
Vinnie Falco
c77a2f335a Tidy up some business logic:
* Add OverlayImpl::for_each to tidy up some call sites
* Add comment about computing the unique ID for message routing
* Remove unused code
2015-04-29 14:34:53 -04:00
Vinnie Falco
8e34a1f6a7 Tidy up aged container declarations 2015-04-29 14:34:53 -04:00
Tom Ritchford
2564b62f5c Fix C++ style issues.
* Restrict files to 80 columns.
* Function names in GenerateDeterministicKey now start with lower case.
* Remove deprecated boost::format calls.
2015-04-29 14:34:53 -04:00
seelabs
a7598c5610 Remove unused database table (RIPD-755) 2015-04-29 14:34:52 -04:00
seelabs
8377f2516b Cache and apply account credits after payment processing (RIPD-821):
Credits made to any account during the processing of a payment are delayed until
the payment completes, enforcing a new invariant: liquidity for any paths
during a payment's execution may never increase. This eliminates the need for special
code to handle a variety of corner cases where consuming liquidity in one path
increases liquidity in others.
2015-04-29 14:34:52 -04:00
Vinnie Falco
7efd0ab0d6 Set version to 0.28.0 2015-04-24 18:57:36 -07:00
Vinnie Falco
14d38a1a8d Fix --rpc_ip and --rpc_port (RIPD-679)
This reverts commit 2b040569e7.
2015-04-24 18:57:04 -07:00
seelabs
c8447c190c Report the inbound listening port during crawl (RIPD-866) 2015-04-24 18:55:53 -07:00
Mark Travis
aa5d16b3d8 Skip inefficient SQL query (RIPD-870):
For large data sets the JOIN may not make forward progress in time.
This prevents the deletion of those entries in the database during
online delete. The number of such entries is very small compared to
the total size of the data anyway. A future version will address
this more thoroughly.
2015-04-24 18:55:49 -07:00
Vinnie Falco
fd1135315c Set version to 0.28.1-b5 2015-04-24 18:44:30 -07:00
Vinnie Falco
98c915b2ca Fix --rpc_ip and --rpc_port (RIPD-679)
This reverts commit 2b040569e7.
2015-04-24 18:44:30 -07:00
seelabs
9114f3d2e6 Report the inbound listening port during crawl (RIPD-866) 2015-04-24 18:19:10 -07:00
Mark Travis
5b0109055d Skip inefficient SQL query (RIPD-870):
For large data sets the JOIN may not make forward progress in time.
This prevents the deletion of those entries in the database during
online delete. The number of such entries is very small compared to
the total size of the data anyway. A future version will address
this more thoroughly.
2015-04-24 17:21:27 -07:00
Tom Ritchford
5a3168c9ff Set version to 0.28.1-b4 2015-04-23 16:47:23 -04:00
seelabs
a14f29f84f Remove obsolete code 2015-04-23 16:47:23 -04:00
Tom Ritchford
6c1190a361 Remove unnecessary thread in Soci (RIPD-862). 2015-04-23 16:47:22 -04:00
Howard Hinnant
100a76f0e8 Remove nested types SField::ref and SField::ptr...
* This silences a warning about a redundant cv-qualifier.
* This makes future coding mistakes about redundant
  cv-qualifiers much less likely.
* This makes the code easier to read.
2015-04-23 16:47:22 -04:00
JoelKatz
47482acf83 In consensus, get relative times from a steady clock (RIPD-859)
Using the system clock to get relative times for consensus
timing can result in performance issues if the system time
changes frequently.
2015-04-23 16:47:21 -04:00
Nik Bougalis
54ef4ee6ef Reduce severity level of offer cancellation logging 2015-04-23 16:47:20 -04:00
seelabs
2389abc295 Fix ownership of memory buffers in StatsDCollector (RIPD-756):
* Ownership of buffer memory in StatsDCollector is passed to the
boost::asio callback function. Before this, the memory may have been
freed before async_send was finished with the memory.
2015-04-23 16:47:20 -04:00
Nik Bougalis
67c666b033 Clean up LedgerEntrySet and TransactionEngine:
* Reduce public interfaces
* Remove wrapper functions
* Remove freeze timed cutover code
* Return results directly instead of via ref parameters
2015-04-23 16:47:19 -04:00
Vinnie Falco
5ce3ed3555 Set version to 0.28.1-b3 2015-04-21 14:01:44 -07:00
Nik Bougalis
d30b32fcde Set TX processing change date to 2015-05-12 13:00:00PDT 2015-04-21 14:01:25 -07:00
Miguel Portilla
568e4cebda Fix check for current ledger ID in RPC 2015-04-21 14:01:18 -07:00
Tom Ritchford
29d644e9d3 Fix WebSockets treatment of ping timer:
This solves a problem that caused a hang on shutdown related to
the lifetime of the ping timer completion handlers used in WebSockets.

* Turn the ping timer back on
* Use std::weak_ptr for WebSockets timer callbacks.
* Disable WebSocket pings if frequency in the .cfg is non-positive.
2015-04-21 14:01:13 -07:00
Torrie Fischer
2dbb7301fb Fix circleci 2015-04-21 14:01:09 -07:00
seelabs
d2cba1c54f Tidy up SQLite table creation:
* Check if tables and indexes exist
* Remove commands for unused table
2015-04-21 14:01:05 -07:00
seelabs
6a0c26a709 Fix return value when looking up non existent transactions 2015-04-21 14:01:00 -07:00
JoelKatz
e44ae6af93 Give ledger data requests their own job type:
This gives requests for ledger data (and transaction set data)
from peers a separate job type and prioritizes it appropriately.
Previously it was lumped in with fetch packs which have a low
concurrency limit. This should improve the performance of
retrieving historical information.
2015-04-21 14:00:54 -07:00
Vinnie Falco
837b0799ac Set version to 0.28.0-rc3 2015-04-21 13:48:39 -07:00
Nik Bougalis
bc85a8b24f Set TX processing change date to 2015-05-12 13:00:00PDT 2015-04-21 13:48:38 -07:00
Miguel Portilla
15d68649d5 Fix check for current ledger ID in RPC 2015-04-21 13:48:30 -07:00
Tom Ritchford
e0d96ae807 Fix WebSockets treatment of ping timer:
This solves a problem that caused a hang on shutdown related to
the lifetime of the ping timer completion handlers used in WebSockets.

* Turn the ping timer back on
* Use std::weak_ptr for WebSockets timer callbacks.
* Disable WebSocket pings if frequency in the .cfg is non-positive.
2015-04-21 12:26:11 -07:00
Torrie Fischer
3aa39ced60 Fix circleci 2015-04-21 12:23:23 -07:00
seelabs
1f1c0618e1 Tidy up SQLite table creation:
* Check if tables and indexes exist
* Remove commands for unused table
2015-04-21 12:21:27 -07:00
seelabs
7788aa25b5 Fix return value when looking up non existent transactions 2015-04-21 12:20:53 -07:00
JoelKatz
d5b460a85c Give ledger data requests their own job type:
This gives requests for ledger data (and transaction set data)
from peers a separate job type and prioritizes it appropriately.
Previously it was lumped in with fetch packs which have a low
concurrency limit. This should improve the performance of
retrieving historical information.
2015-04-21 12:15:59 -07:00
Vinnie Falco
9019f3a4f2 Set version to 0.28.1-b2 2015-04-20 15:57:02 -07:00
seelabs
dfda0d566a Support for boost 1.58 2015-04-20 15:55:48 -07:00
Vinnie Falco
99c2fac143 STVar: optimized storage for STObject (RIPD-825):
This introduces the STVar container, capable of holding any STBase-derived
class and implementing a "small string" optimization. STObject is changed
to store std::vector<STVar> instead of boost::ptr_vector<STBase>. This
eliminates a significant number of needless dynamic memory allocations and
deallocations during transaction processing when ledger entries are
deserialized. It comes at the expense of larger overall storage requirements
for STObject.
2015-04-20 15:54:26 -07:00
Nicholas Dudfield
4c5308da8d Update account_objects test:
* Use Request over json-rpc
* Use lodash to filter irrelevant fields from expectations
* Use LedgerState for state setup
* Test using limit and marker

Conflicts:
	test/account_objects-test.js
2015-04-20 15:54:14 -07:00
Miguel Portilla
4d0ed3d857 RPC account_objects (RIPD-777)
General RPC command that can retrieve objects in the account root.
  * Add account objects integration test.
  * Support tickets.

* Add removeElement in Json::Value
2015-04-20 15:54:09 -07:00
Vinnie Falco
0b5582ed0d Disable redundant ping timer 2015-04-20 15:52:38 -07:00
Tom Ritchford
17734f833c Revert "Checkpoint SOCI exactly every 1000 pages."
This reverts commit e874a2624f.
2015-04-20 15:52:33 -07:00
Vinnie Falco
98a9d5d424 Lower the severity of some PeerFinder logging 2015-04-20 15:52:29 -07:00
Vinnie Falco
6d74f36449 Fix Crawl handshake header parsing in Overlay 2015-04-20 15:52:20 -07:00
Vinnie Falco
47a5bf6aa5 Fix beast::ci_equal 2015-04-20 15:52:16 -07:00
Vinnie Falco
2805e9eb3b Set version to 0.28.0-rc2 2015-04-20 15:19:07 -07:00
Vinnie Falco
72a1a86886 Disable redundant ping timer 2015-04-20 13:42:00 -07:00
Tom Ritchford
ec190bae33 Revert "Checkpoint SOCI exactly every 1000 pages."
This reverts commit e874a2624f.
2015-04-20 11:01:23 -07:00
Vinnie Falco
83003e43d7 Lower the severity of some PeerFinder logging 2015-04-20 11:01:20 -07:00
Vinnie Falco
3b20dc2994 Fix Crawl handshake header parsing in Overlay 2015-04-20 11:00:03 -07:00
Vinnie Falco
a7198298e7 Fix beast::ci_equal 2015-04-20 10:59:58 -07:00
Vinnie Falco
f3d76d5780 Set version to 0.28.1-b1 2015-04-17 11:43:51 -07:00
Vinnie Falco
e2305c3c5e Merge branch 'release' into develop
Conflicts:
	Builds/rpm/rippled.spec
	src/ripple/protocol/impl/BuildInfo.cpp
2015-04-17 11:43:09 -07:00
Vinnie Falco
ba737d7e58 Set version to 0.28.0-rc1 2015-04-17 11:41:50 -07:00
Vinnie Falco
88f69204c8 Merge 0.28.0-b21 into release 2015-04-17 11:41:17 -07:00
Vinnie Falco
bb4561c2b8 Set version to 0.28.0-b22 2015-04-16 11:31:57 -07:00
seelabs
4710f764e4 Quiet unused variable warning 2015-04-16 11:31:57 -07:00
JoelKatz
11a59a767e Adjust cache parameters for 'huge' node size 2015-04-16 11:31:54 -07:00
Miguel Portilla
4cf3157aad Set version to 0.28.0-b21 2015-04-14 18:54:31 -04:00
Miguel Portilla
b1f6cb349b Improved parsing of universal port configuration settings (RIPD-856) 2015-04-14 18:51:53 -04:00
David Schwartz
0c134582ca Track peer "sanity" (RIPD-836)
* Each peer has a "sane/insane/unknown" status
* Status updated based on peer ledger sequence
* Status reported in peer json
* Only sane peers preferred for historical ledgers
* Overlay endpoints only accepted from known sane peers
* Untrusted proposals not relayed from insane peers
* Untrusted validations not relayed from insane peers
* Transactions from insane peers are not processed
* Periodically drop outbound connections to bad peers
* Bad peers get bootcache valence of zero

Peer "sanity" is based on the ledger sequence number they are on.  We
quickly become able to assess this based on current trusted validations.
We quarantine rogue messages and disconnect bad outbound connections to
help maintain the configured number of good outbound connections.
2015-04-14 18:51:52 -04:00
Nik Bougalis
acf2833362 Set version to 0.28.0-b20 2015-04-13 10:24:47 -07:00
Nik Bougalis
20f9971096 Finalize date for switchover to 0.28.0 processing semantics 2015-04-13 10:24:47 -07:00
Nik Bougalis
cefeaceef0 Signal error for incorrect configuration during unit test 2015-04-13 10:24:47 -07:00
Howard Hinnant
1ba7c4b6ee Remove unneeded member initializer:
* This works around a clang bug.
* Also un-commented correctly deleted copy members.
2015-04-13 10:24:47 -07:00
Vinnie Falco
1b49776819 Add fetchBatch Backend interface 2015-04-10 19:14:57 -07:00
Vinnie Falco
41c68f4bbc Use static_initializer in KnownFormats singleton 2015-04-10 19:14:57 -07:00
Nik Bougalis
56ac830405 Refund owner's ticket reserve when a ticket is canceled (RIPD-855) 2015-04-10 19:12:51 -07:00
Nik Bougalis
ebcf821d81 Return descriptive error from account_currencies RPC (RIPD-806):
The 'account_index' field is expected to be an integer. If something
else is specified, the error message should clearly indicate which
field is at fault.
2015-04-10 19:11:28 -07:00
Tom Ritchford
e874a2624f Checkpoint SOCI exactly every 1000 pages. 2015-04-10 19:11:28 -07:00
Tom Ritchford
03d1c0ed21 Clean SOCI code.
* Throw exception rather than SEGV.
* Hide details of checkpointing from clients.
* Restrict to 80 columns and minor style tweaks.
2015-04-10 19:11:28 -07:00
Tom Ritchford
1b8c77eee0 Allow logging to be used outside the ripple namespace.
* Split logging macros over multiple lines.
* Restrict Log.h to 80 columns.
2015-04-10 19:11:28 -07:00
Tom Ritchford
d575cd50b1 Clean up Sustain.h and Sustain.cpp.
* Bring out magic numbers.
* Get rid of boost::format.
2015-04-10 19:11:28 -07:00
Tom Ritchford
2b040569e7 Remove deprecated flags --rpc_ip and --rpc_port. 2015-04-10 19:11:28 -07:00
Miguel Portilla
7a53f86fff Compare current seq vs validated (RIPD-669) 2015-04-10 19:11:28 -07:00
Torrie Fischer
a90bb53cd2 Drop nexmo SMS support. Reverts 58b3cc1d. 2015-04-10 19:11:27 -07:00
Tom Ritchford
b450d62138 Port to Python: Build and run tests for multiple build configurations. 2015-04-10 19:11:27 -07:00
Nik Bougalis
1a9d65c52a Set version to 0.28.0-b19 2015-04-10 19:00:45 -07:00
seelabs
05f4746bbe Add workaround include for Windows.h NOMINMAX 2015-04-10 19:00:45 -07:00
seelabs
1c587723fa Safer macro restoration using MSVC extensions 2015-04-10 19:00:34 -07:00
Nik Bougalis
b2a9c79de5 Fix transaction enumeration in account_tx (RIPD-734):
In some corner cases, an incorrect resume marker could be
returned, preventing the complete enumeration of account
transactions.

* Robust markers via improved paging support
* New unit tests
* Cleanup
2015-04-10 19:00:22 -07:00
Nik Bougalis
64259c7bcb Better transaction analysis (RIPD-755):
The analysis of differences between locally built ledgers and consensus
ledgers is now more intelligent. Differences in these values will be
categorized:

 - Operation results
 - Transaction ordering
 - Generated metadata
2015-04-10 18:58:52 -07:00
Nik Bougalis
a7efdb4e52 Improve version switchover semantics:
* Support PreviousTxnID until the switchover
* Implement "No Ripple" for issue_iou and redeem_iou.
* Do not utilize issue_iou and redeem_iou from legacy code
* Rename 0.27.x legacy files to account for VS build process
* Misc. cleanups
2015-04-10 18:56:52 -07:00
Tom Ritchford
091ff0cce0 Set version to 0.28.0-b18 2015-03-31 21:50:45 -04:00
seelabs
7e25a3a942 Fix SQL in online delete cleanup:
* SQL statement is corrected to perform an implicit JOIN
* Add unit test
2015-03-31 21:50:45 -04:00
Nik Bougalis
b3254e2b18 Remove unsupported proof-of-work command parsing 2015-03-31 21:50:44 -04:00
JoelKatz
9a0fa79144 Fix duplication of full below cache and tree node cache 2015-03-31 21:50:43 -04:00
JoelKatz
352db260b2 STArray optimization 2015-03-31 21:50:43 -04:00
Nik Bougalis
f072b5b679 Avoid copying and improve optimization opportunities 2015-03-31 21:50:43 -04:00
JoelKatz
b4058a813b Small changes to improve transaction benchmarking:
* Set transaction valid in hash router correctly
* Properly account for root nodes in walkLedger
* If loaded ledger is insane, log details
* Extra logging while loading replay ledger
* Don't test unsigned transactions expecting them to succeed
* Don't be too noisy about signature failures
2015-03-31 21:50:42 -04:00
Vinnie Falco
b27e152ead NuDB: Enforce pool_thresh minimum of 1:
pool_thresh is prevented from going to zero. This solves a problem when
using callgrind where the CPU is monopolized, causing operations that
should complete quickly to take days.
2015-03-31 21:50:42 -04:00
Tom Ritchford
936e83759d Remove three warnings. 2015-03-31 21:50:41 -04:00
Tom Ritchford
18fdc175c6 Clean structure of RPC::addPaymentDeliveredAmount 2015-03-31 21:50:41 -04:00
JoelKatz
47c6ab0ced Reduce SHAMapTreeNode copying during SHAMap unsharing:
In some code paths, we bump the SHAMap sequence number
before we unshare. This forces SHAMapTreeNode to be
copied. By making the ledger immutable we cause the
unsharing to occur earlier, eliminating the copies.
2015-03-31 21:50:40 -04:00
seelabs
4868135d47 Improve build times:
* Get classic & unity sources once only
* Use MD5-Timestamp
* Use implicit cache for specific debug builds
* Skip prep work for targets that will not be built
2015-03-31 21:50:39 -04:00
Miguel Portilla
5e70db651d Improved local tx error messages (RIPD-720)
Failed locally built transactions report the specific error.
2015-03-31 21:50:39 -04:00
David Schwartz
1fedede771 Remove transaction set acquire logic from consensus object
This creates a new InboundTransactions object that handles transaction sets,
removing this responsibility from the consensus object. The main benefit is
that many inbound transaction operations no longer require the master lock.

Improve logic to decide which peers to query, when to add more peers, and
when to re-query existing peers.
2015-03-31 21:50:38 -04:00
seelabs
00596f1436 Reduce memory allocation, remove some functions in Serializer. 2015-03-31 21:50:37 -04:00
Nik Bougalis
db840b5604 Perform Transactor checks early (RIPD-751):
Certain checks that determine if a transaction is malformed can be performed
without needing to look up accounts or access the ledger.

Perform those checks as early as possible to optimize transaction processing.
2015-03-31 06:49:53 -07:00
Nik Bougalis
45070d0e51 Reduce Transaction public interface 2015-03-30 13:05:42 -07:00
Tom Ritchford
8a1081f9ef Set version to 0.28.0-b17 2015-03-26 12:38:33 -04:00
seelabs
ac84e44161 Correct missing semicolons on sql statements 2015-03-26 12:38:33 -04:00
seelabs
836dfb6503 Do not log errors from initial database statements 2015-03-26 12:38:33 -04:00
Edward Hennis
35a8ce2349 Pathfinding unit tests:
* Refactor ripple path find to be more testable.
* Reimplements the first 4 tests from `tests\path-test.js`
* Verify balances in Ledger test.
2015-03-26 12:38:33 -04:00
Tom Ritchford
bb7d68b3b9 Add notes about Rippled's container classes. 2015-03-26 12:38:33 -04:00
Howard Hinnant
1979846e5e Change several uses of std::list to alternative containers:
*  Performance motivated.
*  Several of these called size() which is O(N) in gcc-4.8.
*  Remove container copy from LedgerConsensusImp::playbackProposals().
*  Addresses RIPD-284.
2015-03-26 12:38:33 -04:00
Howard Hinnant
a61ffab3f9 Remove unnecessary allocation/deallocation from masterLock
* Add make_lock.
* Rename Application::LockType to Application::MutexType.
* Rename getMasterLock to getMasterMutex.
* Use getMasterMutex and make_lock.
* Remove unused code.
2015-03-26 12:38:33 -04:00
Howard Hinnant
698fe73608 Move SHAMap hash computations from dirtyUp to walkSubTree
in order to reduce the total number of hash computations.
2015-03-26 12:38:33 -04:00
Josh Juran
0083c32629 Update VS project files 2015-03-26 12:38:33 -04:00
Nik Bougalis
f313caaa73 Set version to 0.28.0-b16 2015-03-19 07:55:19 -07:00
Edward Hennis
6e3f07ddce Remove unused / redundant functions. 2015-03-19 07:41:57 -07:00
Mark Travis
11d28c4856 Always increment payment pass counter 2015-03-19 07:41:57 -07:00
Nik Bougalis
e9394ca85a Implement "Default Ripple" logic in active direction:
When a balance change invokes trustCreate, we need to set the no ripple flag
if the "active" account's asfRippleDefault flag is cleared.
2015-03-19 07:41:57 -07:00
Nik Bougalis
9445a30e72 Implement "Default Ripple" logic in LedgerEntrySet::checkState 2015-03-19 07:41:57 -07:00
JoelKatz
185b1a3d36 Add noripple_check RPC command
To help gateways make the changes needed to adjust to the
"default ripple" flag, we've added the "noripple_check"
RPC command. This command tells gateways what they need
to do to set this flag and fix any trust lines created
before they set the flag.

Once your server is running and synchronized, you can run
the tool from the command line with a command like:
  rippled json noripple_check '
  {
    "account" : "<gateway_trusted_address_here>",
    "role" : "gateway",
    "transactions" : "true"
  }'

The server will respond with a list of "problems" that it
sees with the configuration of the account and its trust
lines. It will also return a "transactions" array suggesting
the transactions needed to fix the problems it found.
2015-03-19 07:41:57 -07:00
JoelKatz
1c2f5d60a5 Subscribe/Unsubscribe improvements:
* Don't acquire the master lock where it's not needed
* InfoSub tracks RT and validated accounts separately
* Correctly remove accounts from the InfoSub
2015-03-19 07:41:57 -07:00
JoelKatz
2f32910bef Reduce master lock scope in some RPC functions 2015-03-19 07:41:57 -07:00
JoelKatz
8de1b20bb5 Defer/avoid acquiring the master lock on proposals 2015-03-19 07:41:57 -07:00
David Schwartz
60a7abcef6 Decongest the master lock:
* Reduce scope of lock in ledger accept
* Remove duplicate tracking of transaction sets
* Need master lock to secure ledger sequencing
2015-03-19 07:41:57 -07:00
David Schwartz
e44e75fa6b Track and report peer load:
* PeerImp::charge only calls fail if dispatched from the peer
* Add "load" to output of RPC command "peer"
* Add Resource::Charge values for peer commands
* Impose some fee for every peer command
* Cleanup fee imposition
2015-03-19 07:41:57 -07:00
JoelKatz
ff7dc0b446 Reduce chatty log outputs 2015-03-19 07:41:57 -07:00
JoelKatz
f813cb2310 Tolerate LedgerSequence field in pseudo-transactions:
This will enable a forthcoming change to prevent pseudo-transactions
from reusing a transaction ID
2015-03-19 07:41:57 -07:00
JoelKatz
cba19d7e23 Document and cleanup ledger advance logic
* Don't acquire if validated ledger is old
* Don't try to publish if no valid ledger
* Update README.md file
2015-03-19 07:41:57 -07:00
Nicholas Dudfield
9479c0e12d Update uniport tests to use new config 2015-03-18 19:39:30 -07:00
Nicholas Dudfield
65c9c45ec6 Rename test file so npm test finds it 2015-03-18 19:39:30 -07:00
Miguel Portilla
6d79004d4f Better admin IP management in .cfg (RIPD-820):
* Deprecate rpc_admin_allow section from configuration file
* New port-specific setting 'admin':
  * Comma-separated list of IP addresses that are allowed administrative
    privileges (subject to username & password authentication if configured)
  * 127.0.0.1 is no longer a default admin IP.
  * 0.0.0.0 may be specified to indicate "any IP" but cannot be combined
    with other IP addresses.
2015-03-18 19:39:30 -07:00
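A minimal sketch of the "0.0.0.0 cannot be combined with other addresses" rule from the commit above, using a hypothetical checkAdminList helper (not the actual rippled configuration parser):

    #include <stdexcept>
    #include <string>
    #include <vector>

    // Hypothetical validation sketch: "0.0.0.0" means "any IP" and may not
    // appear alongside other addresses in the same port's 'admin' list.
    void checkAdminList(std::vector<std::string> const& ips)
    {
        bool anyIp = false;
        for (auto const& ip : ips)
            if (ip == "0.0.0.0")
                anyIp = true;
        if (anyIp && ips.size() > 1)
            throw std::runtime_error(
                "admin = 0.0.0.0 cannot be combined with other IP addresses");
    }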
seelabs
97623d20c5 Use soci in more places:
* Validator, peerfinder, SHAMapStore,
  RpcDB, TxnDB, LedgerDB, WalletDB use soci backend.
2015-03-18 19:39:26 -07:00
seelabs
d37802a42f Remove SqliteFactory. 2015-03-18 19:37:09 -07:00
seelabs
9b837a24aa Remove beast's sqdb module.
Conflicts:
	src/beast/beast/module/sqdb/api/backend.h
	src/beast/beast/module/sqdb/api/blob.h
	src/beast/beast/module/sqdb/api/into.h
	src/beast/beast/module/sqdb/api/session.h
	src/beast/beast/module/sqdb/api/statement.h
	src/beast/beast/module/sqdb/api/transaction.h
	src/beast/beast/module/sqdb/api/type_conversion_traits.h
	src/beast/beast/module/sqdb/api/use.h
	src/beast/beast/module/sqdb/detail/error_codes.h
	src/beast/beast/module/sqdb/detail/exchange_traits.h
	src/beast/beast/module/sqdb/detail/into_type.h
	src/beast/beast/module/sqdb/detail/once_temp_type.h
	src/beast/beast/module/sqdb/detail/prepare_temp_type.h
	src/beast/beast/module/sqdb/detail/ref_counted_prepare_info.h
	src/beast/beast/module/sqdb/detail/ref_counted_statement.h
	src/beast/beast/module/sqdb/detail/statement_imp.h
	src/beast/beast/module/sqdb/detail/type_conversion.h
	src/beast/beast/module/sqdb/detail/type_ptr.h
	src/beast/beast/module/sqdb/detail/use_type.h
	src/beast/beast/module/sqdb/sqdb.h
2015-03-18 19:37:08 -07:00
seelabs
d0ef2f7dd8 Use soci in some places:
* Brings the soci subtree into rippled.
* Validator, peerfinder, and SHAMapStore use new soci backend.
* Optional postgresql backend for soci (if POSTGRESQL_ROOT env var is set).
2015-03-18 19:37:08 -07:00
seelabs
c7cfd23580 Update sqlite3 to 3.8.8.2. 2015-03-18 19:37:03 -07:00
Vinnie Falco
7cf1ec3f89 Merge commit '9708a1260720d879d76a10f894925962f20611bc' as 'src/soci' 2015-03-18 19:36:00 -07:00
Vinnie Falco
9708a12607 Squashed 'src/soci/' content from commit 6e9312c
git-subtree-dir: src/soci
git-subtree-split: 6e9312c4bb3748907bd28d62c40feca42878cfef
2015-03-18 19:36:00 -07:00
Nik Bougalis
92812fe723 Set version to 0.27.4 2015-03-18 17:54:57 -07:00
JoelKatz
79417ac59a Limit passes in the payment engine to prevent endless looping:
This adds a limit of 1,000 passes to the payment engine. It protects against
possible cases where the execution of a pass fails to exhaust the liquidity
that made the pass possible or cases where two passes alternate providing
liquidity for each other.
2015-03-18 17:53:52 -07:00
Mark Travis
984f66e083 Don't VACUUM SQLite databases on startup with online delete enabled. 2015-03-18 17:53:42 -07:00
Tom Ritchford
ef2a436769 Set version to 0.28.0-b15 2015-03-16 20:54:17 -04:00
Edward Hennis
7f1a95550f Clean up unit test logs on success.
* Add a little bit of shell variable safety and tweak output.
2015-03-16 20:54:17 -04:00
seelabs
803f5b5613 Use buffer in STBlob 2015-03-16 20:54:15 -04:00
Nicholas Dudfield
8ca9fa1c26 Fix testutils.create_accounts
* Don't call ledger_wait inside parallel async loop
2015-03-16 20:54:14 -04:00
David Schwartz
3b3b897193 Add "Default Ripple" account flag and associated logic:
AccountSet set/clear, asfDefaultRipple = 8

AccountRoot flag, lsfDefaultRipple = 0x00800000

In trustCreate, set no ripple flag if appropriate.

If an account does not have the default ripple flag set, new ripple lines
created as a result of its offers being taken, or of others extending trust
lines to it, automatically have no ripple set on that account's side.

Trust lines can be deleted if their no ripple flag matches the default
implied by the account's default ripple setting.

Fix default no-rippling in integration tests.
2015-03-16 20:54:14 -04:00
Torrie Fischer
6c364f63cc Build docker images on circleci based on travis.yml 2015-03-16 20:54:14 -04:00
seelabs
6b9e842ddd Replaces StringPairArray with Section in Config. 2015-03-16 20:54:13 -04:00
Nik Bougalis
8f88d915ba Support switchover from 0.27 to 0.28 processing semantics based on time:
Changes made to support autobridging and improve the offer-crossing and
pathfinding logic result in transaction-breaking changes which cause
incompatibilities between 0.27 and 0.28 builds of RippleD.

This patch simplifies deployment of 0.28 on the Ripple network by allowing
RippleD to emulate the 0.27 semantics as long as the last closed ledger closed
before March 30, 2015 at 13:00:00 PDT, after which time the new 0.28
semantics become active.

The transaction-breaking changes addressed in this commit are:
    3ccbd7c9b2
    b203db27a4
2015-03-16 20:54:12 -04:00
JoelKatz
eaa1f47f00 Limit passes in the payment engine to prevent endless looping:
This adds a limit of 1,000 passes to the payment engine. It protects against
possible cases where the execution of a pass fails to exhaust the liquidity
that made the pass possible or cases where two passes alternate providing
liquidity for each other.
2015-03-16 20:54:11 -04:00
JoelKatz
cbeae85731 Fix specified destination issuer in pathfinding (RIPD-812)
* Compute the effective recipient.
* Make sure the effective recipient exists.
* Prohibit paths to the recipient, if not the effective recipient.
* Treat paths to the effective recipient as complete.
* Don't find looped paths.
* Use the effective recipient for getPathsOut weight.
2015-03-16 20:54:09 -04:00
Nik Bougalis
84e618b3f2 Improve pool seeding during startup:
* When starting up, we no longer rely just on the standard
  system RNG to generate entropy: we attempt to squeeze some
  from the execution state, and to recover any entropy that
  we had previously stored.

* When shutting down, if sufficient entropy has been accumulated,
  attempt to store it for future use.
2015-03-16 20:54:08 -04:00
JoelKatz
382a16ff07 Avoid excess ledger header requests 2015-03-16 20:54:07 -04:00
JoelKatz
7bd339b645 Balance peer selection in getFetchPack 2015-03-16 20:54:07 -04:00
David Schwartz
70d8b2c4b7 getMissingNode performance and logging improvements 2015-03-16 20:54:07 -04:00
David Schwartz
3764a83c6b Ledger binary option
Conflicts:
	src/ripple/app/ledger/Ledger.cpp
	src/ripple/app/ledger/Ledger.h
	src/ripple/rpc/handlers/Ledger.cpp
2015-03-16 20:54:06 -04:00
Tom Ritchford
c3d200ddcd Set version to 0.28.0-b14 2015-03-13 11:21:02 -04:00
Tom Ritchford
44c5e337ab Remove obsolete comments from doc/CHANGELOG. 2015-03-13 11:21:02 -04:00
Nik Bougalis
040982e321 Only report 'delivered_amount' for executed payments (RIPD-827) 2015-03-13 11:21:00 -04:00
Nik Bougalis
6c81ea846c Calculate deep offer quality 2015-03-13 11:20:59 -04:00
Josh Juran
d082a0696d Support Ed25519 keys and signatures:
Recognize a new JSON parameter `key_type` in handlers for wallet_propose
and sign/submit.  In addition to letting the caller specify either
secp256k1 or ed25519, its presence prohibits the (now-deprecated) use of
heuristically polymorphic parameters for secret data -- the `passphrase`
parameter to wallet_propose will not be considered as an encoded seed
value (for which `seed` and `seed_hex` should be used), and the `secret`
parameter to sign and submit will be obsoleted entirely by the same trio
above.

* Use constants instead of literals for JSON parameter names.
* Move KeyType to its own unit and add string conversions.
* RippleAddress
  * Pass the entire message, rather than a hash, to accountPrivateSign()
    and accountPublicVerify().
  * Recognize a 33-byte value beginning with 0xED as an Ed25519 key when
    signing and verifying (for accounts only).
  * Add keyFromSeed() to generate an Ed25519 secret key from a seed.
  * Add getSeedFromRPC() to extract the seed from JSON parameters for an
    RPC call.
  * Add generateKeysFromSeed() to produce a key pair of either type from
    a seed.
* Extend Ledger tests to cover both key types.
2015-03-12 21:53:59 -07:00
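A minimal sketch of the key-type recognition rule mentioned in the commit above, with a hypothetical helper name (not the RippleAddress code): a 33-byte public key beginning with 0xED is treated as Ed25519.

    #include <cstddef>
    #include <cstdint>

    // Hypothetical sketch: distinguish an Ed25519 account public key from a
    // compressed secp256k1 key by the 0xED prefix byte described above.
    bool isEd25519PublicKey(std::uint8_t const* key, std::size_t size)
    {
        return size == 33 && key[0] == 0xED;
    }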
Nik Bougalis
f999839e59 Set version to 0.27.3-sp2 2015-03-12 13:37:47 -07:00
JoelKatz
f1bc662a24 Add noripple_check RPC command
To help gateways make the changes needed to adjust to the
"default ripple" flag, we've added the "noripple_check"
RPC command. This command tells gateways what they need
to do to set this flag and fix any trust lines created
before they set the flag.

Once your server is running and synchronized, you can run
the tool from the command line with a command like:
  rippled json noripple_check '
  {
    "account" : "<gateway_trusted_address_here>",
    "role" : "gateway",
    "transactions" : "true"
  }'

The server will respond with a list of "problems" that it
sees with the configuration of the account and its trust
lines. It will also return a "transactions" array suggesting
the transactions needed to fix the problems it found.
2015-03-12 13:37:06 -07:00
Nik Bougalis
232693419a Set version to 0.27.3-sp1 2015-03-11 10:26:39 -07:00
Nicholas Dudfield
ed66b951c6 Fix testutils.create_accounts
* Don't call ledger_wait inside parallel async loop
2015-03-11 10:25:27 -07:00
Nik Bougalis
70c2854f7c Set version to 0.27.3 2015-03-10 14:06:33 -07:00
David Schwartz
e9381ddeb2 Add "Default Ripple" account flag and associated logic:
AccountSet set/clear, asfDefaultRipple = 8

AccountRoot flag, lsfDefaultRipple = 0x00800000

In trustCreate, set no ripple flag if appropriate.

If an account does not have the default ripple flag set, new ripple lines
created as a result of its offers being taken, or of others extending trust
lines to it, automatically have no ripple set on that account's side.

Trust lines can be deleted if their no ripple flag matches the default
implied by the account's default ripple setting.

Fix default no-rippling in integration tests.
2015-03-10 14:05:18 -07:00
Tom Ritchford
1b46e003c3 Set version to 0.28.0-b13 2015-03-09 17:49:39 -04:00
Howard Hinnant
4611d5a35f Remove unused SyncUnorderedMap 2015-03-09 17:49:39 -04:00
Nicholas Dudfield
2e59378ab7 Fix AppVeyor:
* Detect continuous integration environment via `CI` variable
* Use double quotes for build cache path
2015-03-09 17:49:39 -04:00
JoelKatz
fc8bf39043 Simplify tracking of recently requested ledger entries
Instead of tracking recently-requested entries from inbound
ledgers by node ID, track by hash. This allows state and
transaction entries to be tracked in the same set.
2015-03-09 17:49:38 -04:00
Vinnie Falco
2cccd8ab28 Fix gentex usage in nudb 2015-03-09 17:49:37 -04:00
Vinnie Falco
d537ceedd6 Tidy up nudb:
* Define WIN32_LEAN_AND_MEAN before including Windows.h
* Remove unnecessary template argument
* Rename to identity
* Make identity default api codec
2015-03-09 17:49:37 -04:00
Tom Ritchford
ac7243b309 Remove unused static function. 2015-03-02 17:40:12 -05:00
Tom Ritchford
607e983f37 Set version to 0.28.0-b12 2015-03-02 16:50:01 -05:00
Mark Travis
02f7326b7e Remove orphan function. 2015-03-02 16:50:01 -05:00
Edward Hennis
b688f69031 Builds/test-only.sh will build and test by scons target.
* test-all.sh simplified to call test-only.sh.
* Script fails if build or tests fail. Allows chaining and git bisect run.
* Add copyright notice
* Ignore gprof performance data created by testing the profile builds.
2015-03-02 16:50:00 -05:00
Tom Ritchford
df41329df9 Replace "it's" with "its" in several places. 2015-03-02 16:49:59 -05:00
Tom Ritchford
0825bd7076 Cleanups to Json Object code.
* Replace Json::JsonException with std::logic_error.
* Move two function definitions to Object.cpp.
* Fix include guards.
2015-03-02 16:49:56 -05:00
Tom Ritchford
e9b7003cf5 Move streaming Json objects to ripple/json. 2015-03-02 16:49:56 -05:00
Tom Ritchford
c5d673c426 Better integration between JsonObject and Json::Value. 2015-03-02 16:49:56 -05:00
Nik Bougalis
9cc8eec773 Set version to 0.27.2 2015-03-01 14:56:44 -08:00
Nik Bougalis
0b45535061 Calculate deep offer quality 2015-02-28 13:28:54 -08:00
Tom Ritchford
9f64ad8d89 Add WebSocket 04 interface.
* New WebSocket04 traits class implements strategies.
* New "websocket_version" configuration setting selects between 0.2 and 0.4.
2015-02-28 14:57:38 -05:00
Tom Ritchford
e5b0b7e9a7 Expose a method and add a handler in websocketpp.
* Expose websocketpp::transport::asio::connection::get_strand().
* Add new send_empty_handler to websocketpp::endpoint.
2015-02-28 14:57:38 -05:00
Tom Ritchford
9c3522cb70 Isolate WebSocket 0.2-specific code.
* Hide implementation details of the WebSocket server from clients.
* Extract a generic traits class.
2015-02-28 14:57:38 -05:00
Tom Ritchford
b357390215 Remove redundant post to strand in websocket. 2015-02-27 12:11:54 -05:00
Tom Ritchford
c66fc2f656 Rename all WebSocket code into one directory. 2015-02-27 12:11:54 -05:00
Tom Ritchford
64554aca6d Set version to 0.28.0-b11 2015-02-26 21:02:39 -05:00
Nik Bougalis
f1df9a02fa Fix declaration/implementation mismatches 2015-02-26 21:02:38 -05:00
JoelKatz
f3725bdd2e Return a validated ledger if there is one (RIPD-814)
LedgerMaster::getLedgerBySeq should return a validated
ledger (rather than the open or closed ledger) for
a sequence number for which it has a fully-validated ledger.
2015-02-26 21:02:37 -05:00
Tom Ritchford
cb92b94d55 Remove unused variable. 2015-02-26 21:02:36 -05:00
Vinnie Falco
ef01f82e0c Add nounity targets to msvc projects 2015-02-26 21:02:35 -05:00
Vinnie Falco
4ba7ee8c92 Support per-target ExcludeFromBuild in VSProject 2015-02-26 21:02:35 -05:00
Howard Hinnant
c59633a588 Make SHAMap::fetchNodeFromDB const
When fetchNodeFromDB discovers a missing node in the database it
must reset the ledger sequence to 0.  By treating this as a logically
const operation, even though not physically const, many other member
functions can be made const, including compare.
2015-02-26 21:02:33 -05:00
Vinnie Falco
f56e37398c Always use HTTP handshaking in overlay:
Inbound and outbound peer connections always use HTTP handshakes to
negotiate connections, instead of the deprecated TMHello protocol
message.

rippled versions 0.27.0 and later support both optional HTTP handshakes
and legacy TMHello messages, so always using HTTP handshakes should not
cause disruption. However, versions before 0.27.0 will no longer be
able to participate in the overlay network - support for handshaking
via the TMHello message is removed.
2015-02-26 21:02:32 -05:00
Tom Ritchford
e43ffa6f2b Set version to 0.28.0-b10 2015-02-25 20:26:12 -05:00
Yana
6991bc9723 Spelling corrections 2015-02-25 20:26:12 -05:00
Miguel Portilla
9d6106a80b Require boost 1.57 2015-02-25 20:26:12 -05:00
Tom Ritchford
9e70404411 Get rid of compilation warning. 2015-02-25 20:26:12 -05:00
Nik Bougalis
bc48d299b6 Report server versions when crawling the overlay network 2015-02-25 20:26:12 -05:00
mDuo13
a8db5650a5 Add online_delete reminder to ledger_history in example cfg 2015-02-25 19:56:47 -05:00
Nicholas Dudfield
91871b418b Changes to Universal Port:
* Add tests
* Introduce requestRole helper
* Always honor admin=no
* Welcome guests anywhere admin privileges aren't required
2015-02-25 19:46:56 -05:00
Tom Ritchford
aaf98082e9 Set version to 0.28.0-b9 2015-02-24 20:28:43 -05:00
Tom Ritchford
ac228deeda Replace LEDGER_JSON_ macros with an enum. 2015-02-24 20:28:43 -05:00
Edward Hennis
fc661c83ef Build and run tests on all available build types
* Tests include unit and integration (npm)
2015-02-24 20:28:43 -05:00
Vinnie Falco
a2acffdfa3 New serialized object, public key, and private key interfaces
This introduces functions get and set, and a family of specialized
structs called STExchange. These interfaces allow efficient and
seamless interchange between serialized object fields and user
defined types, especially variable length objects.

A new base class template TypedField is mixed into existing SField
declarations to encode information on the field, allowing template
metaprograms to both customize interchange based on the type and
detect misuse at compile-time.

New types AnyPublicKey and AnySecretKey are introduced. These are
intended to replace the corresponding functionality in the deprecated
class RippleAddress. Specializations of STExchange for these types
are provided to allow interchange. New free functions verify and sign
allow signature verification and signature generation for serialized
objects.

* Add Buffer and Slice primitives
* Add TypedField and modify some SField
* Add STExchange and specializations for STBlob and STInteger
* Improve STBlob and STInteger to support STExchange
* Expose raw data in RippleAddress and Serializer
2015-02-24 20:28:43 -05:00
David Schwartz
79ce4ed226 Document cluster configuration and monitoring (RIPD-732) 2015-02-24 20:28:43 -05:00
Nik Bougalis
e3a7aa0033 Constrain valid inputs for memo fields (RIPD-712) 2015-02-24 20:25:34 -05:00
Vinnie Falco
95973ba3e8 Set version to 0.27.1 2015-02-24 13:31:13 -08:00
Tom Ritchford
fde6303ae6 Set version to 0.28.0-b8 2015-02-24 12:33:59 -05:00
Vinnie Falco
b4a1948951 Use all parts of suite name to detect duplicates 2015-02-24 12:33:59 -05:00
Miguel Portilla
b927028416 Display human readable SSL error codes 2015-02-24 12:33:59 -05:00
Tom Ritchford
fe5d1ff6c5 Set version to 0.28.0-b7 2015-02-23 11:53:21 -08:00
Yana
1308656000 Update README.md file (RIPD-802) 2015-02-23 11:53:20 -08:00
Howard Hinnant
ec1e6b9385 Cleanup and simplifications to SHAMap:
SHAMapTreeNode
* Remove SHAMapTreeNode::pointer and SHAMapTreeNode::ref.
* Add std includes necessary to make the header standalone.
* Remove implementation from the SHAMapTreeNode declaration.
* Make clear what part of SHAMapTreeNode is:
  1) Truly public.
  2) Used only by SHAMap.
  3) Truly private to SHAMapTreeNode.

SHAMapItem
* Remove SHAMapItem::pointer and SHAMapItem::ref.
* Add std includes necessary to make the header standalone.
* Remove implementation from the SHAMapItem declaration.
* Make clear what part of SHAMapItem is:
  1) Truly public.
  2) Used only by SHAMapTreeNode.
  3) Truly private to SHAMapItem.

SHAMapSyncFilter
* Add override for SHAMapSyncFilter-derived functions.
* Add missing header.
* Default the destructor and delete the SHAMapSyncFilter copy members.

SHAMapNodeID
* Remove unused mHash member.
* Remove unused std::hash and boost::hash specializations.
* Remove unused constructor.
* Remove unused comparison with uint256.
* Remove unused getNodeID (int depth, uint256 const& hash).
* Remove virtual specifier from getString().
* Fix operator<= and operator>=.
* Document what API is used outside of SHAMap.
* Move inline definitions outside of the class declaration.

SHAMapMissingNode
* Make SHAMapType an enum class to prevent unwanted conversions.
* Remove needless ~SHAMapMissingNode() declaration/definition.
* Add referenced std includes.

SHAMapAddNode
* Make SHAMapAddNode (int good, int bad, int duplicate) ctor private.
* Move all member function definitions out of the class declaration.
* Remove dependence on beast::lexicalCastThrow.
* Make getGood() const.
* Make get() const.
* Add #include <string>.

SHAMap
* Remove unused enum STATE_MAP_BUCKETS.
* Remove unused getCountedObjectName().
* Remove SHAMap::pointer
* Remove SHAMap::ref
* Remove unused fetchPackEntry_t.
* Remove inline member function definitions from class declaration.
* Remove unused getTrustedPath.
* Remove unused getPath.
* Remove unused visitLeavesInternal.
* Make SHAMapState an enum class.
* Explicitly delete SHAMap copy members.
* Reduce access to nested types as much as possible.
* Normalize member data names to one style.

* Change last of the typedefs to usings under shamap.
* Reorder some includes ripple-first, beast-second.
* Declare a few constructions from make_shared with auto.
* Mark those SHAMap member functions which can be, with const.

* Add missing includes
2015-02-23 11:44:57 -08:00
Tom Ritchford
315a8b6b60 Use jss for many Json fields.
* Document JsonFields.
* Remove some unused JsonFields values.
2015-02-23 14:36:34 -05:00
Nik Bougalis
558c6b621b Clean up JSON code:
* Remove obsolete files
* Remove obsolete preprocessor configuration tags and decorations
* Remove arcane functionality (YAML compatibility, comment support)
* Enforce strict mode (single root)
* Improve parsing of numerical types
* Misc. cleanups
2015-02-23 14:36:34 -05:00
Edward Hennis
6d91d02c62 Unit test simulated ledgers can do signature verification. 2015-02-23 14:34:37 -05:00
Josh Juran
436ded68b7 Add unit tests for wallet keypair generation:
* Allow `passphrase` to be a seed encoded in any of three formats or a
  literal passphrase.
* Recognize the absence of `passphrase` as requesting a random seed.

Extract walletPropose() and keypairForSignature() as separately factored
functions (from doWalletPropose() and transactionSign() respectively) to
facilitate unit testing.
2015-02-23 14:34:35 -05:00
JoelKatz
3ec88b3665 Implement getFetchPack using visitDifferences 2015-02-23 14:34:34 -05:00
Tom Ritchford
2caedb38a6 Set version to 0.28.0-b6 2015-02-18 13:31:31 -05:00
Vinnie Falco
49378ab7fe Fix streambuf bug:
The buffers_type::iterator could hold a pointer to a buffers_type that
was destroyed. This changes buffers_type::iterator to point to the
original streambuf instead, which always outlives the iterator.
2015-02-18 13:31:18 -05:00
Vinnie Falco
982dc6aa8c Reject invalid requests on peer port sooner. 2015-02-18 13:31:18 -05:00
Mark Travis
33175187b7 Don't VACUUM SQLite databases on startup with online delete enabled. 2015-02-18 13:31:18 -05:00
Tom Ritchford
0339904920 Update nudb comments. 2015-02-18 13:31:18 -05:00
Vinnie Falco
8bda9487c6 Add shamap::Family injection bundle 2015-02-18 13:31:18 -05:00
seelabs
617d84c0ef BasicConfig support for legacy values:
* A legacy value is a config section with a single line.
* These values may be read from the BasicConfig interface so
  the deprecated Config class does not need to be exposed to
  clients.
* Made Config class more testable.
2015-02-18 13:31:18 -05:00
Vinnie Falco
ac64731d55 Set version to 0.27.1-rc3 2015-02-12 12:59:48 -08:00
Vinnie Falco
5dc064e971 Revert "Disable Overlay socket handoffs"
This reverts commit 8eb05d0950.
2015-02-12 12:59:48 -08:00
Vinnie Falco
c24732ed4e Fix streambuf bug:
The buffers_type::iterator could hold a pointer to a buffers_type that
was destroyed. This changes buffers_type::iterator to point to the
original streambuf instead, which always outlives the iterator.
2015-02-12 12:59:35 -08:00
Vinnie Falco
ba710bee86 Reject invalid requests on peer port sooner. 2015-02-12 12:43:42 -08:00
Vinnie Falco
c20392ca80 Set version to 0.27.1-rc2 2015-02-11 20:53:27 -08:00
Vinnie Falco
8eb05d0950 Disable Overlay socket handoffs 2015-02-11 20:53:27 -08:00
Tom Ritchford
b11ad375cd Set version to 0.28.0-b5 2015-02-11 20:42:38 -05:00
Josh Juran
7a6d533014 Refactor GenerateDeterministicKey and its call sites:
Remove the use of ec_key parameters and return values from ECDSA crypto
prototypes.  Don't store key data into an ec_key variable only to fetch
it back into the original type again.  Use uint256 and Blob explicitly.

Pass private keys as uint256, and pass public keys as either pointer and
length or Blob in calls to ECDSA{Sign,Verify}() and {en,de}cryptECIES().

Replace GenerateRootDeterministicKey() with separate functions returning
either the public or private key, since no caller needs both at once.

Simplify the use of GenerateDeterministicKey within RippleAddress.  Call
a single routine rather than pass the result of one as input to another.

Add openssl unit with RAII classes for bignum, bn_ctx, and ec_point plus
free utility functions.

Rewrite the functions in GenerateDeterministicKey.cpp to use RAII rather
than explicit cleanup code:
  * factor out secp256k1_group and secp256k1_order for reuse rather than
    computing them each time
  * replace getPublicKey() with serialize_ec_point(), which makes, sets,
    and destroys an ec_key internally (sparing the caller those details)
    and calls i2o_ECPublicKey() directly
  * return bignum rather than ec_key from GenerateRootDeterministicKey()
  * return ec_point rather than EC_KEY* from GenerateRootPubKey()

Move ECDSA{Private,Public}Key() to a new ECDSAKey unit.
Move ec_key.h into impl/ since it's no longer used outside crypto/.

Remove now-unused member functions from ec_key.

Change tabs to spaces; trim trailing whitespace (including blank lines).
2015-02-11 20:42:38 -05:00
Howard Hinnant
be44f75d2d Add missing includes. 2015-02-11 20:42:38 -05:00
Vinnie Falco
ab14123aed Remove obsolete classes:
Legacy workarounds for Visual Studio non thread-safe initialization
of function local objects with static storage duration are removed:

* Remove LeakChecked
* Remove StaticObject
* Remove SharedSingleton
2015-02-11 20:42:38 -05:00
Tom Ritchford
a963a6d10d Add noexcept qualifier to swaps and moves. 2015-02-11 20:42:38 -05:00
Vinnie Falco
69b4cd22a2 Speed up some unit tests:
A few of the slowest unit tests are modified to process a smaller data
set size, to reduce the time required to run all unit tests.
2015-02-11 20:14:44 -05:00
Vinnie Falco
958325653f Add elapsed time report for unit test runner:
When unit tests are complete, the longest-running tests, if any, are logged.
2015-02-11 20:14:44 -05:00
Josh Juran
c5dc419f9e Remove unused source file 2015-02-11 20:14:44 -05:00
Vinnie Falco
2a201f9525 Add RocksDB to nudb import tool (RIPD-781,785):
This custom tool is specifically designed for very fast import of
RocksDB nodestore databases into NuDB.
2015-02-11 20:14:44 -05:00
Vinnie Falco
b7ba509618 NuDB: Use nodeobject codec in Backend (RIPD-793):
This adds codecs for snappy and lz4, and a new nodeobject codec. The
nodeobject codec provides a highly efficient custom compression scheme
for inner nodes, which make up the majority of nodestore databases.
Non inner node objects are compressed using lz4.

The NuDB backend is modified to use the nodeobject codec. This change
is not backward compatible - older NuDB databases cannot be opened or
imported.
2015-02-11 14:41:33 -08:00
Vinnie Falco
f946d7b447 Remove obsolete NodeObject fields:
Legacy fields of NodeObject are removed, as they are no longer
used and there is a space savings from omitting them:

* Remove LedgerIndex
2015-02-11 14:41:32 -08:00
Vinnie Falco
e2a5535ed6 NuDB: Performance improvements (RIPD-793,796):
This introduces changes in nudb to improve speed, reduce database size,
and enhance correctness. The most significant change is to store hashes
rather than entire keys in the key file. The output of the hash function
is reduced to 48 bits, and stored directly in buckets.

The API is also modified to introduce a Codec parameter allowing for
compression and decompression to be supported in the implementation
itself rather than callers.

The data file no longer contains a salt, as the salt is applicable
only to the key and log files. This allows a data file to have multiple
key files with different salt values. To distinguish physical files
belonging to the same logical database, a new field UID is introduced.
The UID is a 64-bit random value generated once on creation and stored
in all three files.

Buckets are zero filled to the end of each block; this is a security
measure to prevent unintended contents of memory getting stored to
disk. NuDB offers the varint integer type, which is identical to
the varint described by Google.

* Add varint
* Add Codec template argument
* Add "api" convenience traits
* Store hash in buckets
* istream can throw short read errors
* Support std::uint8_t format in streams
* Make file classes part of the public interface
* Remove buffers pessimization, replace with buffer
* Consolidate creation utility functions to the same header
* Zero fill unused areas of buckets on disk
* More coverage and improvements to the recover test
* Fix file read/write to loop until all bytes processed
* Add verify_fast, faster verify for large databases

The database version number is incremented to 2; older databases can
no longer be opened and should be deleted.
2015-02-11 14:41:31 -08:00
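Since the commit above notes that NuDB's varint is identical to the varint described by Google, here is a minimal standalone encoder for that format (illustrative only; encodeVarint is not a NuDB function name): each output byte carries seven bits of the value, least significant first, and the high bit flags that more bytes follow.

    #include <cstdint>
    #include <vector>

    // Standard Google-style varint encoding, shown for illustration; not
    // taken from the NuDB sources.
    std::vector<std::uint8_t> encodeVarint(std::uint64_t v)
    {
        std::vector<std::uint8_t> out;
        while (v >= 0x80)
        {
            // low seven bits, with the continuation bit set
            out.push_back(static_cast<std::uint8_t>((v & 0x7F) | 0x80));
            v >>= 7;
        }
        // final byte: continuation bit clear
        out.push_back(static_cast<std::uint8_t>(v & 0x7F));
        return out;
    }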
Vinnie Falco
0f94e2c0c3 Set version to 0.27.1-rc1 2015-02-10 16:22:14 -08:00
Vinnie Falco
a25508b98d Add RocksDB to nudb import tool (RIPD-781,785):
This custom tool is specifically designed for very fast import of
RocksDB nodestore databases into NuDB.
2015-02-10 16:22:14 -08:00
Vinnie Falco
e825433a38 NuDB: Use nodeobject codec in Backend (RIPD-793):
This adds codecs for snappy and lz4, and a new nodeobject codec. The
nodeobject codec provides a highly efficient custom compression scheme
for inner nodes, which make up the majority of nodestore databases.
Non inner node objects are compressed using lz4.

The NuDB backend is modified to use the nodeobject codec. This change
is not backward compatible - older NuDB databases cannot be opened or
imported.
2015-02-10 16:22:13 -08:00
Vinnie Falco
d1c08889fe Remove obsolete NodeObject fields:
Legacy fields of NodeObject are removed, as they are no longer
used and there is a space savings from omitting them:

* Remove LedgerIndex
2015-02-10 16:22:13 -08:00
Vinnie Falco
0b82b5a0d6 NuDB: Performance improvements (RIPD-793,796):
This introduces changes in nudb to improve speed, reduce database size,
and enhance correctness. The most significant change is to store hashes
rather than entire keys in the key file. The output of the hash function
is reduced to 48 bits, and stored directly in buckets.

The API is also modified to introduce a Codec parameter, allowing
compression and decompression to be supported in the implementation
itself rather than by callers.

The data file no longer contains a salt, as the salt is applicable
only to the key and log files. This allows a data file to have multiple
key files with different salt values. To distinguish physical files
belonging to the same logical database, a new field UID is introduced.
The UID is a 64-bit random value generated once on creation and stored
in all three files.

Buckets are zero-filled to the end of each block; this is a security
measure to prevent unintended contents of memory from being stored to
disk. NuDB offers the varint integer type, which is identical to
the varint described by Google.

* Add varint
* Add Codec template argument
* Add "api" convenience traits
* Store hash in buckets
* istream can throw short read errors
* Support std::uint8_t format in streams
* Make file classes part of the public interface
* Remove buffers pessimization, replace with buffer
* Consolidate creation utility functions to the same header
* Zero fill unused areas of buckets on disk
* More coverage and improvements to the recover test
* Fix file read/write to loop until all bytes processed
* Add verify_fast, faster verify for large databases

The database version number is incremented to 2; older databases can
no longer be opened and should be deleted.
2015-02-10 16:22:13 -08:00
Vinnie Falco
a33d0d4fb6 Add general delimiter split() to rfc2616 2015-02-08 19:43:37 -08:00
Vinnie Falco
ba42334d36 Add lz4
Conflicts:
	Builds/VisualStudio2013/RippleD.vcxproj
	Builds/VisualStudio2013/RippleD.vcxproj.filters
2015-02-08 19:43:26 -08:00
Vinnie Falco
2e62641aa4 Merge commit 'dad460dcfc8466a8e1c59529d490829fcfaeee73' as 'src/lz4' 2015-02-08 19:43:04 -08:00
Vinnie Falco
dad460dcfc Squashed 'src/lz4/' content from commit e25b51d
git-subtree-dir: src/lz4
git-subtree-split: e25b51de7b51101e04ceea194dd557fcc23c03ca
2015-02-08 19:43:04 -08:00
Vinnie Falco
3ae23b6a54 Remove spurious call to fetch in NuDBBackend 2015-02-08 19:42:07 -08:00
Vinnie Falco
97126f18b1 Add /crawl cgi request feature to peer protocol (RIPD-729):
This adds support for a cgi /crawl request, issued over HTTPS to the configured
peer protocol port. The response to the request is a JSON object containing
the node public key, type, and IP address of each directly connected neighbor.
The IP address is suppressed unless the neighbor has requested its address
to be revealed by adding "Crawl: public" to its HTTP headers. This field is
currently set by the peer_private option in the rippled.cfg file.
2015-02-08 19:42:01 -08:00
Vinnie Falco
3838d222c2 NuDB: limit size of mempool (RIPD-787):
Insert now blocks when the size of the memory pool exceeds a predefined
threshold. This solves the problem where sustained insertions cause the
memory pool to grow without bound.
2015-02-08 19:41:54 -08:00
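The blocking insert described in the entry above follows a standard
bounded-buffer pattern. A minimal sketch, assuming hypothetical names rather
than NuDB's actual interfaces:

    #include <condition_variable>
    #include <cstddef>
    #include <mutex>

    // Illustrative bounded pool: insert() blocks while the pooled size is
    // at or above the threshold; the commit thread signals after it flushes.
    class bounded_pool
    {
        std::mutex m_;
        std::condition_variable not_full_;
        std::size_t size_ = 0;
        std::size_t const limit_;

    public:
        explicit bounded_pool(std::size_t limit) : limit_(limit) {}

        void insert(std::size_t bytes)
        {
            std::unique_lock<std::mutex> lock(m_);
            not_full_.wait(lock, [&]{ return size_ < limit_; });
            size_ += bytes;
        }

        // Called by the thread that writes pooled data to disk.
        void on_flush(std::size_t bytes)
        {
            std::lock_guard<std::mutex> lock(m_);
            size_ -= bytes;
            not_full_.notify_all();
        }
    };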
Vinnie Falco
96c3292210 Add missing include 2015-02-08 19:41:48 -08:00
Vinnie Falco
b0fd92cb3f Fix unsafe iterator dereference in PeerFinder 2015-02-08 19:41:43 -08:00
Vinnie Falco
62c5b5e570 Add general delimiter split() to rfc2616 2015-02-07 15:19:55 -08:00
Vinnie Falco
feaa0871ac Add lz4 2015-02-07 06:38:37 -08:00
Vinnie Falco
9f41976926 Merge commit '1784f24c5f81e864bf0ad8dcfdf4266ca1108290' as 'src/lz4' 2015-02-05 15:40:04 -08:00
Vinnie Falco
1784f24c5f Squashed 'src/lz4/' content from commit e25b51d
git-subtree-dir: src/lz4
git-subtree-split: e25b51de7b51101e04ceea194dd557fcc23c03ca
2015-02-05 15:40:04 -08:00
Vinnie Falco
fc47d9fc4d Set version to 0.28.0-b4 2015-02-03 16:24:34 -08:00
Vinnie Falco
eade9f8f2b Revert RocksDB backend settings:
This reverts the change that makes RocksDBQuick the default settings for
node_db "type=rocksdb". The quick settings can be obtained by setting
"type=rocksdbquick".

RocksDBQuick settings are implicated in memory over-utilization problems
seen recently.
2015-02-03 16:24:34 -08:00
Vinnie Falco
6276c55cc9 Set version to 0.27.0-sp1 2015-02-03 14:58:02 -08:00
Vinnie Falco
afa6ff7c4b Revert RocksDB backend settings:
This reverts the change that makes RocksDBQuick the default settings for
node_db "type=rocksdb". The quick settings can be obtained by setting
"type=rocksdbquick".

RocksDBQuick settings are implicated in memory over-utilization problems
seen recently.
2015-02-03 14:57:54 -08:00
Vinnie Falco
f4dcbe3a84 Remove spurious call to fetch in NuDBBackend 2015-02-03 12:56:35 -08:00
Vinnie Falco
9c02cc1b17 Add /crawl cgi request feature to peer protocol (RIPD-729):
This adds support for a cgi /crawl request, issued over HTTPS to the configured
peer protocol port. The response to the request is a JSON object containing
the node public key, type, and IP address of each directly connected neighbor.
The IP address is suppressed unless the neighbor has requested its address
to be revealed by adding "Crawl: public" to its HTTP headers. This field is
currently set by the peer_private option in the rippled.cfg file.
2015-02-03 12:56:35 -08:00
seelabs
0cc3ef8f90 Add missing include:
* Compile previously failed on Mac with clang
2015-02-03 12:56:32 -08:00
Tom Ritchford
4cbbacc946 Set version to 0.28.0-b3 2015-02-02 17:01:50 -08:00
Vinnie Falco
9a0c71d4a7 NuDB: limit size of mempool (RIPD-787):
Insert now blocks when the size of the memory pool exceeds a predefined
threshold. This solves the problem where sustained insertions cause the
memory pool to grow without bound.
2015-02-02 17:01:19 -08:00
Vinnie Falco
0f1b831de7 Add missing include 2015-02-02 17:01:18 -08:00
Vinnie Falco
37a7a2aacd Fix unsafe iterator dereference in PeerFinder 2015-02-02 17:01:18 -08:00
Tom Ritchford
635b157b11 Fix C++ guards in beast. 2015-02-02 17:01:18 -08:00
Tom Ritchford
c3ae4da83a Fix include guards in rippled. 2015-02-02 17:01:17 -08:00
Tom Ritchford
c3809ece67 New RPC method "version". 2015-02-02 17:01:17 -08:00
Howard Hinnant
bfc436dccd Add metadata to transaction difference logging: RIPD-775 2015-02-02 17:01:16 -08:00
Nik Bougalis
71d6874236 Set version to 0.28.0-b2 2015-01-28 16:34:33 -08:00
Vinnie Falco
9bf1f994ae Remove buffer_view 2015-01-28 16:34:33 -08:00
Vinnie Falco
bb4127a6fb Refactor Serializer and SerializerIterator interfaces:
* Remove unused members
* SerialIter holds only a pointer and offset now
* Use free functions for some Serializer members
* Use SerialIter in some places instead of Serializer
2015-01-28 16:34:33 -08:00
Edward Hennis
a691632995 Support a "--noserver" command line option in tests:
* Run npm/integration tests without launching rippled, using a
  running instance of rippled (possibly in a debugger) instead.
* Works for "npm test" and "mocha"
2015-01-28 16:34:33 -08:00
Miguel Portilla
5d6ea3d75f Combine history_ledger_index and online_delete (RIPD-774) 2015-01-28 16:34:33 -08:00
Mark Travis
43873b1b2c Initialize canDelete_ properly in the constructor. 2015-01-28 16:34:33 -08:00
Mark Travis
9430f3665b Speed up access to ledger index value that can be deleted. 2015-01-28 16:34:33 -08:00
Vinnie Falco
f3c1f63444 Remove LevelDB and HyperLevelDB backends:
The LevelDB and HyperLevelDB are removed from the backend choices. Neither
were recommended for production environments. As RocksDB is not available
on Windows platforms yet, the recommended backend choice for Windows is NuDB.
2015-01-28 13:43:00 -08:00
Nik Bougalis
b5c7232d6f Set version to 0.28.0-b1 2015-01-27 18:21:54 -08:00
Edward Hennis
2f3677d593 Change timing on "sequence realignment" test. 2015-01-27 18:21:54 -08:00
Howard Hinnant
1e0efaffe8 Add missing includes. 2015-01-27 18:21:54 -08:00
Vinnie Falco
fc79754750 Remove unused SHAMap fields 2015-01-26 19:25:38 -08:00
Miguel Portilla
0e4de42be8 Add RPC metrics (RIPD-705)
Add metrics to record the number of RPC requests received. Record the number of
node store fetches performed per request. Additionally record the byte size of
each request response and measure the response time of each request in
milliseconds.

A new class, ScopedMetrics, uses the Boost Thread Local Storage mechanism to
efficiently record NodeStore metrics within the same thread.
2015-01-26 19:25:38 -08:00
Scott Determan
4e389127b5 Use microsecond granularity for sqlite lock backoff algorithm:
When SQLite tries to acquire a lock that is already held, it sleeps for some
microseconds using the usleep function and then tries to acquire the lock
again. However, if the HAVE_USLEEP macro is not defined, then the sleep
function will be used.

This fix will define HAVE_USLEEP even when it is not already defined by
the system. Although some Linux distros may not define HAVE_USLEEP,
all supported versions provide usleep. If the system does not actually
have a usleep function, then the compiler will flag the error.
2015-01-26 19:13:40 -08:00
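Concretely, the fix amounts to guaranteeing the macro is defined before the
bundled SQLite source is compiled; a sketch of the idea (the exact build
mechanism may differ):

    // Force SQLite's lock backoff to use microsecond granularity. If the
    // platform genuinely lacks usleep(), the resulting call will fail to
    // compile, surfacing the problem rather than silently sleeping for
    // whole seconds.
    #ifndef HAVE_USLEEP
    #define HAVE_USLEEP 1
    #endif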
Nik Bougalis
47593730d6 Modernize code:
* Clean STBase-derived class creation interfaces
* Annotate overridden STBase virtual functions
* Optimize path deserialization
* Prefer range-based for
* Prefer std::unique_ptr
* Remove BOOST_FOREACH
2015-01-26 19:13:40 -08:00
Tom Ritchford
e742da73bd Simplify lookupLedger(). 2015-01-26 19:13:40 -08:00
Vinnie Falco
890bf3cce1 Add PeerFinder Logic backoff unit test 2015-01-26 19:13:40 -08:00
Nik Bougalis
60eb312e3b Fix order of construction of Application members 2015-01-26 19:13:40 -08:00
Tom Ritchford
06207da185 Move Context.h up into rpc/. 2015-01-26 19:13:40 -08:00
Nicholas Dudfield
4dc2cf8a6b Update tests to support latest ripple-lib:
* Update ripple-lib api usage
* Use latest npm ripple-lib
  * Tested with bignumber.js branch and tip of develop
* Use new version of coffee-script
  * Better source maps
* Update mocha
* Add assert-diff for better error reporting
* Add rconsole, enabled via USE_RCONSOLE env var
  * For use with manual installation only
2015-01-26 19:13:40 -08:00
Scott Determan
44450bf644 Enable Amendments from config file or static data (RIPD-746):
* The rippled.cfg file has a new section called "amendments"

* Each line in this section contains two white-space separated items
** The first item is the ID of the amendment (a 256-bit hash)
** The second item is the friendly name

* Replaces config section name macros with variables

* Make addKnown arguments safer

* Added lock to addKnown
2015-01-26 19:13:02 -08:00
Nik Bougalis
312aec79ca Remove WalletAdd (RIPD-725) 2015-01-26 12:39:13 -08:00
Nik Bougalis
a8578c73f8 Remove support for deprecated PreviousTxnID field (RIPD-710):
The PreviousTxnID field has been deprecated and should not be used;
transactions that use the field will now be rejected.

The AccountTxnID feature should be used instead by enabling transaction
tracking and specifying a transaction ID at submission. More details
are available at: https://ripple.com/build/transactions/#accounttxnid
2015-01-26 12:39:13 -08:00
Miguel Portilla
c522ffa6db Eliminate temREDUNDANT_SEND_MAX (RIPD-590):
The rules for when a SendMax is redundant are complicated. It is easier to
always allow a SendMax and eliminate temREDUNDANT_SEND_MAX.
2015-01-26 12:39:13 -08:00
Nik Bougalis
93b7599b1c Fix missing 'else' when handling sfMessageKey:
When clearing out a message key the transactor would incorrectly
create an empty `sfMessageKey` field instead of simply deleting
the field.

Clarify logic by reordering checks.
2015-01-26 12:39:13 -08:00
Nik Bougalis
3ccbd7c9b2 Finalize autobridging implementation (RIPD-179):
Autobridging uses XRP as a natural bridge currency to allow IOU-to-IOU orders
to be satisfied not only from the direct IOU-to-IOU books but also over the
combined IOU-to-XRP and XRP-to-IOU books.

This commit addresses the following issues:

* RIPD-486: Refactoring the taker into a unit-testable architecture
* RIPD-659: Asset-aware offer crossing
* RIPD-491: Unit tests for IOU to XRP, XRP to IOU and IOU to IOU
* RIPD-441: Handle case when autobridging over same owner offers
* RIPD-665: Handle case when autobridging over own offers
* RIPD-273: Groom order books while crossing
2015-01-26 12:39:13 -08:00
Nik Bougalis
385a87db31 Claim a fee when a required destination tag is not specified (RIPD-574) 2015-01-26 12:39:12 -08:00
Nik Bougalis
5530353eef Add simplified explicit interfaces to handle XRP and IOU transfers:
The new interfaces take into account the different semantics of XRP, which
do not have an issuer or transfer fees, and IOUs which have issuers and
(optional) transfer fees.

For XRP, the new `LedgerEntrySet::transfer_xrp` will transfer the specified
amount of XRP from a given source to a given destination.

For IOU, two new functions are introduced:
* `LedgerEntrySet::issue_iou` which transfers the specified amount of an
IOU from the IOU's issuer to an account.
* `LedgerEntrySet::redeem_iou` which transfers the specified amount of an
IOU from an account to the IOU's issuer.

A transfer from user-to-user (e.g. to fill an order during offer crossing)
requires the use of `redeem_iou` followed by `issue_iou`. This helps to
enforce the Ripple invariant that IOUs never flow directly from user to
user, but only through a gateway. Additionally, this allows for the
explicit calculation and application of transfer fees by varying the
amounts redeemed and issued.

The new interfaces promote type safety since you cannot use the issue
and redeem APIs with XRP or the transfer API with IOU, and the issuer
to be used is implied by the currency being issued or redeemed.
2015-01-26 12:39:12 -08:00
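As a worked illustration of the redeem-then-issue flow described above, a
transfer fee can be applied by issuing less than was redeemed. The numbers
and names below are hypothetical and are not the rippled interfaces:

    #include <iostream>

    int main()
    {
        // The sender's IOUs are redeemed against the issuer in full...
        double redeemed = 100.0;        // redeem_iou(sender, 100)
        double transferRate = 1.002;    // issuer charges a 0.2% transfer fee
        // ...and the issuer issues a smaller amount to the receiver.
        double issued = redeemed / transferRate;

        std::cout << "redeemed from sender: " << redeemed << "\n"
                  << "issued to receiver:   " << issued << "\n"
                  << "retained as fee:      " << (redeemed - issued) << "\n";
    }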
Nik Bougalis
d1193093ef Require the master key when performing certain operations (RIPD-666):
* When disabling the use of the master key; or
* When enabling 'no freeze'.
2015-01-26 12:39:12 -08:00
David Schwartz
b203db27a4 Fix offer->ACCOUNT->offer 2015-01-26 12:39:12 -08:00
JoelKatz
0a3e1af04c When pathfinding, don't output a redundant account node 2015-01-26 12:39:12 -08:00
Vinnie Falco
c6c8e5d70c Set version to 0.27.0 2015-01-26 10:56:11 -08:00
Nik Bougalis
fa354ec8d9 Set version to 0.27.0-b11 2015-01-23 17:37:18 -08:00
Nik Bougalis
d0375f697d Invoke correct deleter 2015-01-23 17:34:30 -08:00
Vinnie Falco
33c8257d25 Set version to 0.27.0-b10 2015-01-21 15:25:11 -08:00
Scott Determan
f389bc33c3 VSProject: Handle tuples in CPPDEFINES:
The VSProject generator now handles tuples in addition to strings and
dicts when converting environment variables such as CPPDEFINES.
2015-01-21 15:20:06 -08:00
Vinnie Falco
4d5dca71ce Squelch Peerfinder fixed connection attempts 2015-01-21 14:59:47 -08:00
Vinnie Falco
a9c44a1b9c Set version to 0.27.0-b9 2015-01-21 14:23:50 -08:00
Vinnie Falco
4144f800a1 Fix PeerImp concurrent access of socket:
The PeerImp::run launch function is now dispatched on the strand to prevent
undefined behavior resulting from concurrent access to the ssl::stream object.
2015-01-21 14:21:43 -08:00
Vinnie Falco
6ef9a81017 Set version to 0.27.0-b8 2015-01-21 14:21:43 -08:00
Vinnie Falco
8c6722f3c5 Remove use of date and time from rocksdb unity build 2015-01-21 14:21:43 -08:00
Vinnie Falco
40e138627b Fixes to Overlay:
* Make ~Peer virtual
* Call close in ConnectAttempt::stop
* Handle nullptr return in new_outbound_slot
* Check gracefulClose_ in read loop
2015-01-21 10:48:32 -08:00
Vinnie Falco
a470dda4e6 Fix Journal::Stream::active to return the correct value 2015-01-21 10:48:32 -08:00
Edward Hennis
b725410623 Option to specify rippled path on command line.
* --rippled=<absolute or relative path>
* Works for "npm test" and "mocha"
* Remove "rippled_path" from config.js to require CLI path.
2015-01-21 10:48:31 -08:00
Miguel Portilla
a9dfb33126 Log abnormal close time offsets (RIPD-572) 2015-01-21 10:48:31 -08:00
Miguel Portilla
8b848770dc Add config "ledger_history_index" functionality (RIPD-559) 2015-01-21 10:48:31 -08:00
Vinnie Falco
94629edb9b Add NuDB backend:
The NuDB database backend is a high performance key/value store presented
as an alternative to RocksDB on Mac and Linux deployments, and the preferred
backend option for Windows deployments. The LevelDB backend is deprecated for
all platforms.

This includes these changes:

* Add Backend::verify API for doing consistency checks
* Add Database::close so caller can catch exceptions
* Improved Timing test for NodeStore creates a simulated workload
2015-01-21 10:48:30 -08:00
Vinnie Falco
2a3f2ca28d Add NuDB: A Key/Value Store For Decentralized Systems
NuDB is a high performance key/value database optimized for insert-only
workloads, with these features:

* Low memory footprint
* Values are immutable
* Value sizes from 1 to 2^48 bytes (281TB)
* All keys are the same size
* Performance independent of growth
* Optimized for concurrent fetch
* Key file can be rebuilt if needed
* Inserts are atomic and consistent
* Data file may be iterated, index rebuilt.
* Key and data files may be on different volumes
* Hardened against algorithmic complexity attacks
* Header-only, nothing to build or link
2015-01-21 10:48:30 -08:00
Edward Hennis
8ab1e7d432 Integration test to subscribe to offer books. 2015-01-20 16:45:04 -08:00
Miguel Portilla
b2ba6a0c85 Fix RPC subscribe with multiple books 2015-01-20 16:45:04 -08:00
Tom Ritchford
cca5421aed Fix Subscribe RPC to correctly distinguish bids and asks. 2015-01-20 16:45:04 -08:00
Nik Bougalis
799d9a73e6 Ensure that hash_append will never throw
Conflicts:
	src/beast/beast/net/IPAddress.h
2015-01-20 16:45:03 -08:00
Scott Determan
b0781622b2 Handle nullptr return values to InboundLedgers::findCreate
In normal operation, InboundLedgers::findCreate never returns null, but
during system shutdown, it will return null.

Since this only happens in system shutdown, when findCreate returns null
the calling function stops what it was doing and returns.

During testing, an issue was found where destroying the application object
and creating a new one caused problems with a static PathTable. This table
is now cleared when it is re-initialized.
2015-01-20 16:45:03 -08:00
Tom Ritchford
0d0eec6345 Clean up test documentation and a log message. 2015-01-20 16:45:03 -08:00
Nik Bougalis
1af79f7960 Properly validate the configured online delete interval 2015-01-20 16:45:03 -08:00
Vinnie Falco
15b570bbdd Add profile targets for gcc and clang 2015-01-20 16:45:02 -08:00
Tom Ritchford
7aa5599cc2 Remove unused parameter in two lambdas. 2015-01-20 16:45:02 -08:00
JoelKatz
676293ec42 Ensure account_tx queries over and returns correct range 2015-01-20 16:45:02 -08:00
Nik Bougalis
abc4fb81b1 Improve RippleLineCache hashing 2015-01-20 16:45:01 -08:00
Vinnie Falco
53a16f354f Add hardened_hash to basics/:
xxhasher is the default hash function for hardened_hash.
2015-01-20 16:45:01 -08:00
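The point of a hardened hash is to resist algorithmic-complexity attacks by
seeding the hash with a per-process random value, so attackers cannot
precompute keys that collide in unordered containers. A rough sketch of the
idea, using standard-library pieces rather than the actual xxhasher-based
implementation:

    #include <cstddef>
    #include <functional>
    #include <random>
    #include <string>
    #include <unordered_set>

    // Mixing a random per-process seed into std::hash stands in here for a
    // real keyed hash such as xxhash or siphash.
    struct hardened_string_hash
    {
        static std::size_t seed()
        {
            static std::size_t const s = std::random_device{}();
            return s;
        }

        std::size_t operator()(std::string const& key) const
        {
            return std::hash<std::string>{}(key) ^ seed();
        }
    };

    int main()
    {
        std::unordered_set<std::string, hardened_string_hash> set;
        set.insert("example");
    }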
Vinnie Falco
6ab1ecd836 Tidy up container hash functions:
* Add xxhasher
* Move fnv1a, siphash, spooky to hash/
* Move hash_append, uhash to hash/
* Move hash_speed_test to hash/
* Move hash classes to individual header files
* Remove hardened_hash
2015-01-20 16:45:01 -08:00
Vinnie Falco
e7b16e7b47 Add rngfill 2015-01-20 16:45:01 -08:00
Vinnie Falco
14804f81a8 Optimize calls to unit_test::suite::expect:
This changes expect and unexpected to receive the reason text as a
template argument, allowing the std::string conversion of char const*
parameters to take place only if the condition evaluates to false. This
cuts all calls to malloc and free on tests that pass.
2015-01-20 16:45:00 -08:00
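The optimization above works because the reason text stays in its original
type until a test actually fails. A simplified sketch with assumed names (the
real beast interface differs):

    #include <iostream>
    #include <string>

    template <class String>
    bool expect(bool condition, String const& reason)
    {
        if (! condition)
        {
            // The char const* -> std::string conversion (and its
            // allocation) happens only on this failing branch.
            std::cerr << "failed: " << std::string(reason) << '\n';
            return false;
        }
        return true;  // passing tests never pay for the conversion
    }

    int main()
    {
        expect(1 + 1 == 2, "arithmetic still works");
    }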
Vinnie Falco
9a61b8d77d Declare base_uints with using statements 2015-01-20 16:45:00 -08:00
Vinnie Falco
f42c2763d5 Improve streambuf unit test 2015-01-20 16:45:00 -08:00
Vinnie Falco
98d4e0e1b5 Fix ZeroCopyOutputStream:
Added a destructor that commits the last block of data
if there was no final call to BackUp.
2015-01-20 16:45:00 -08:00
Tom Ritchford
9156633baf Set version to 0.27.0-b7 2015-01-20 17:59:55 -05:00
Tom Ritchford
bcf4f836b4 Use websocketpp_02 namespace. 2015-01-20 17:08:15 -05:00
Vinnie Falco
dbc1d70f99 Set version to 0.27.0-b6 2015-01-20 09:41:27 -08:00
Vinnie Falco
78bc190a85 Merge commit '02855d7fed46d3c1aa1f2cefbcf4a42720575c3f' as 'src/websocketpp' 2015-01-20 09:35:00 -08:00
Vinnie Falco
02855d7fed Squashed 'src/websocketpp/' content from commit 875d420
git-subtree-dir: src/websocketpp
git-subtree-split: 875d420952
2015-01-20 09:35:00 -08:00
Vinnie Falco
6fdd5d32be Rename websocket/ to websocketpp_02 2015-01-20 09:34:54 -08:00
Vinnie Falco
d7f32b105b Set version to 0.27.0-b5 2015-01-13 11:50:58 -08:00
Vinnie Falco
0ac480a0bd Fix extra increment in GenerateRootDeterministicKey 2015-01-13 11:49:59 -08:00
Tom Ritchford
417996de02 Set version to 0.27.0-b4 2015-01-13 11:30:23 -05:00
Tom Ritchford
6c2d60cec2 Prevent RPC handlers from returning non-objects. 2015-01-13 11:30:23 -05:00
Tom Ritchford
743bd6c917 Fix RPC command logrotate to return a Json object. 2015-01-13 11:30:14 -05:00
Edward Hennis
ab61aa41d9 Set version to 0.27.0-b3 2015-01-12 18:00:55 -05:00
Edward Hennis
36396ae29e rippled.cfg [node_db] options for RocksDB
* open_files and compression.
2015-01-12 18:00:54 -05:00
Vinnie Falco
749e083e6e NodeStore improvements:
* Add Backend::verify API for doing consistency checks
* Add Database::close so caller can catch exceptions
* Improved Timing test for NodeStore creates a simulated workload
2015-01-12 18:00:52 -05:00
Vinnie Falco
67b9cf9e82 Improved support for exceptions in threads spawned by unit tests:
Unit tests that wish to spawn threads for testing concurrency may now do so
by using unit_test::thread as a replacement for std::thread. These threads
propagate unhandled exceptions to the unit test, and work with the abort on
failure feature.
2015-01-12 17:17:09 -05:00
Vinnie Falco
27fb20f3ab Add xor_shift_engine 2015-01-12 17:17:07 -05:00
Josh Juran
1c71b274f0 SConstruct: Add ed25519.c
Conflicts:
	Builds/VisualStudio2013/RippleD.vcxproj
	Builds/VisualStudio2013/RippleD.vcxproj.filters
	SConstruct
2015-01-12 12:16:26 -08:00
Vinnie Falco
fcd20b63fe Merge commit '8ec344ac1b6c66d936fa0f7005490e126a434a70' as 'src/ed25519-donna' 2015-01-12 11:27:15 -08:00
Vinnie Falco
8ec344ac1b Squashed 'src/ed25519-donna/' content from commit 04223b0
git-subtree-dir: src/ed25519-donna
git-subtree-split: 04223b04e22f5eff32c6c27e25194d4d984c6f41
2015-01-12 11:27:15 -08:00
Vinnie Falco
df966a9ac6 Set version to 0.27.0-b2 2015-01-05 18:49:18 -08:00
Vinnie Falco
f634666dc6 Make rocksdbquick settings default:
This removes the old default configuration for the "rocksdb" backend and
replaces it with the configuration that was formerly available using
the experimental backend "rocksdbquick".

The new configuration setting improves the performance of the key/value
database by changing the compaction style and tuning the size parameters for
the typical rippled workload. Testing shows a decrease in I/O spikes for both
reading and writing.
2015-01-05 18:49:17 -08:00
Tom Ritchford
e2f9f5d7e5 Fix compilation warnings. 2015-01-05 18:49:17 -08:00
Tom Ritchford
d078b0d143 Extract the git ID into a separate compilation unit. 2015-01-05 18:49:15 -08:00
Nik Bougalis
0ccdea3cd8 Disable Ticket transactions and tests 2015-01-05 14:18:27 -08:00
Vinnie Falco
df54b47cd0 Tidy up includes and add modules to the classic build:
An alternative to the unity build, the classic build compiles each
translation unit individually. This adds more modules to the classic build:

* Remove unity header app.h
* Add missing includes as needed
* Remove obsolete NodeStore backend code
* Add app/, core/, crypto/, json/, net/, overlay/, peerfinder/ to classic build
2015-01-05 13:35:57 -08:00
Vinnie Falco
2e595830b3 Levelize SHAMap:
The SHAMap class is refactored into a separate module where each translation
unit compiles separate without errors. Dependencies on higher level business
logic are removed. SHAMap now depends only on basics, crypto, nodestore,
and protocol:

* Inject NodeStore::Database& to SHAMap
* Move sync filter instances to app/ledger/
* Move shamap to its own module
* Move FullBelowCache to shamap/
* Move private code to shamap/impl/
* Refactor SHAMap treatment of missing node handler
* Inject and use Journal for logging in SHAMap
2015-01-05 11:46:11 -08:00
Vinnie Falco
96fbcc9a5a Refactor NodeStore:
Manager is changed to be a Meyers singleton, with factories automatically
registering upon construction.
2015-01-05 11:46:11 -08:00
Vinnie Falco
6283801981 Add non-unity build targets:
The SConstruct is modified to provide a new family of targets, ending with
the suffix ".nounity", which compile individual translation units instead of
some of the unity translation units ("classic" builds). Two modules updated
for this treatment are ripple/basics/ and ripple/protocol/, with plans to
update more in the future. A consequence is longer build times in some cases.
A benefit of classic builds is that missing includes can be identified
through compiler errors.
2015-01-05 11:46:11 -08:00
Vinnie Falco
9a3214d46e Normalize files containing unit test code:
Source files are split to place all unit test code into translation
units ending in .test.cpp with no other business logic in the same file,
and in directories named "test".

A new target is added to the SConstruct, invoked by:
    scons count
This prints the total number of source code lines occupied by unit tests,
in rippled specific code and excluding library subtrees.
2015-01-05 11:46:07 -08:00
Vinnie Falco
9eb7c8344f Don't leak track StringPairArray 2015-01-05 11:44:15 -08:00
Vinnie Falco
4140bbb1f7 Set version to 0.27.0-b1 2015-01-05 11:37:01 -08:00
Vinnie Falco
ea44497136 Fix msvc ICE on initializer-list 2015-01-05 11:37:00 -08:00
Nik Bougalis
07737c6e5b Add 'delivered_amount' to Transaction JSON (RIPD-643):
The synthetic field 'delivered_amount' can be used to determine the exact
amount delivered by a Payment without having to check the DeliveredAmount
field, if present, or the Amount field otherwise.

The field is only returned when metadata is available and the data is not
returned in binary format.
2014-12-31 01:55:10 -08:00
JoelKatz
98d5eefc86 Pathfinding bugfixes (RIPD-735):
* Fix specifying paths and search level for ripple_path_find
* Don't modify the pathfinder, it has issuer-neutral paths.
* Handle previous paths correctly
* Compare paths correctly
2014-12-31 01:55:10 -08:00
JoelKatz
4f2d93bb65 Avoid processing transactions if we need a network ledger
Not processing transactions without a network ledger makes
initial network synchronization faster.
2014-12-31 01:55:10 -08:00
Edward Hennis
a5df3f1747 Support a "no_server" flag in test config.
* Will use a running instance of rippled (possibly in a debugger).
* Modify all tests to respect the server_default value.
* Fail test if new account already exists and has a balance.
* README.md with instructions for advanced test debugging, particularly using no_server.
2014-12-31 01:55:10 -08:00
Howard Hinnant
7f5f73887d Fix undefined behavior 2014-12-31 01:55:10 -08:00
Nik Bougalis
91ce7807b9 Remove legacy LoadTypes (RIPD-365) 2014-12-31 01:55:10 -08:00
Nik Bougalis
d26fae9875 Reduce Beast dependencies by leveraging C++11 features:
* Remove beast::Atomic (RIPD-728):
  * Use std-provided alternatives
  * Eliminate atomic variables where possible

* Cleanup beast::Thread interface:
  * Use std::string instead of beast::String
  * Remove unused functions and parameters

* Remove unused code:
  * beast::ThreadLocalValue
  * beast::ServiceQueue
2014-12-31 01:55:10 -08:00
Nik Bougalis
60bdc79ec4 Remove unused sitefiles module 2014-12-30 12:33:41 -08:00
Tom Ritchford
253ddf2998 Split off CheckLibraryVersions.test.cpp. 2014-12-30 12:33:41 -08:00
Nik Bougalis
9fa15b390a Always initialize LedgerHandler options field 2014-12-29 11:21:19 -08:00
Josh Juran
e7d6fe6c8b Cull duplicate/unused code in RippleAddress:
Deduplicate a call to GeneratePublicDeterministicKey().
Remove unused member functions.
2014-12-29 11:21:19 -08:00
Tom Ritchford
9650b1aa70 New ripple::TestSuite with method expectEquals(). 2014-12-29 11:21:19 -08:00
Howard Hinnant
eafa6f960f API for improved Unit Testing (RIPD-432):
* Added new test APIs allowing easy ways to create ledgers, apply
  transactions to them, close and advance them.

* Moved Ledger tests from Ledger.cpp to Ledger.test.cpp.

* Changed several TransactionEngine log priorities from lsINFO to lsDEBUG to
  reduce unnecessary verbosity in the log messages while running these tests.

* Moved LedgerConsensus:applyTransactions from a private member function to a
  free function so that it could be accessed externally, and without having to
  reference a LedgerConsensus object.  This was done to facilitate the new
  testing API.
2014-12-29 11:21:19 -08:00
Yana
c62ccf4870 Update README.md (RIPD-601) 2014-12-29 11:21:19 -08:00
Vinnie Falco
ef34439a79 Set version to 0.26.5-rc1 2014-12-22 15:20:03 -08:00
Nik Bougalis
b328ec2462 Prevent passing of non-POD types to POD-only interfaces:
This tidies up the code that produces random numbers to conform
to programming best practices and reduce dependencies.

* Use std::random_device instead of platform-specific code
* Remove RandomNumbers class and use free functions instead
2014-12-22 15:19:48 -08:00
Vinnie Falco
60f27178b8 Levelization, improve structure of source files:
Source files are moved between modules, includes changed and added,
and some code rewritten, with the goal of reducing cross-module dependencies
and eliminating cycles in the dependency graph of classes.

* Remove RippleAddress dependency in CKey_test
* ByteOrder.h, Blob.h, and strHex.h are moved to basics/. This makes
  the basics/ module fully independent of other ripple sources.
* types/ is merged into protocol/. The protocol module now contains
  all primitive types specific to the Ripple protocol.
* Move ErrorCodes to protocol/
* Move base_uint to basics/
* Move Base58 to crypto/
* Remove dependence on Serializer in GenerateDeterministicKey
* Eliminate unity header json.h
* Remove obsolete unity headers
* Remove unnecessary includes
2014-12-22 10:23:49 -08:00
Vinnie Falco
e3fbb83ad0 Tidy up usage of std::begin, std::end 2014-12-19 11:55:43 -08:00
Nik Bougalis
28b70a7b9a Remove 'Proof of Work' code 2014-12-19 11:00:29 -08:00
Nicholas Dudfield
dcdc341d0f Add appveyor 2014-12-19 11:00:29 -08:00
Tom Ritchford
fce77c9372 Configuration for yielding RPC server. 2014-12-19 11:00:28 -08:00
Tom Ritchford
a360c481c2 Refactor out a version of lookupLedger returning Status. 2014-12-19 11:00:28 -08:00
Tom Ritchford
c72db5fa5f Refactor away RPCHandler::doRpcCommand 2014-12-19 11:00:28 -08:00
Tom Ritchford
fc9a23d6d4 Send output incrementally in ServerHandlerImp. 2014-12-19 11:00:27 -08:00
Tom Ritchford
167f4666e2 New generic Ledger RPC handler. 2014-12-19 11:00:27 -08:00
Tom Ritchford
1cbcc7be21 Allow the Ledger to generically output to both Json models. 2014-12-19 11:00:27 -08:00
Tom Ritchford
8053598069 Better interoperation between Json::Value and JsonObject.
* Generic functions to add entries to both object models.
* Add Json::Value into JsonObjects.
* Write Json::Value to string incrementally.
* Get rid of ripple::RPC::New namespace
2014-12-19 11:00:26 -08:00
Tom Ritchford
7cfac1a91a Wrap Output in a coroutine that periodically yields. 2014-12-19 11:00:26 -08:00
Tom Ritchford
192cdd028e Change Output to be a generic std::function. 2014-12-19 11:00:26 -08:00
Tom Ritchford
029c143922 Add a comment to ledger/Ledger.h. 2014-12-19 11:00:26 -08:00
Tom Ritchford
00298cc68c Simplify LedgerData.cpp. 2014-12-19 11:00:25 -08:00
Tom Ritchford
d9c7db51af Make three ErrorCode functions generic. 2014-12-19 11:00:25 -08:00
Vinnie Falco
f12b15d22b Fix logic in HTTP/S server:
These bugs do not affect production code since callers do not invoke
`write` multiple times, but these would become a problem in the future.

* Access to Peer::write_queue_ is synchronized correctly.
* Remove unsafe access to deleted container element.
2014-12-19 11:00:25 -08:00
Vinnie Falco
409b8bac00 Remove unused and obsolete Ripple identifiers and tidy up:
These identifiers were part of a failed set of classes to replace
the functionality combined into RippleAddress. They are not used
and therefore can be removed.

* Remove RippleAccountPrivateKey
* Remove RippleAccountPublicKey
* Remove RippleAccountID
* Remove RipplePrivateKey
* Remove RipplePublicKeyHash
* Remove RippleLedgerHash
* Remove unused withCheck argument
* Remove CryptoIdentifier
* Remove IdentifierStorage
* Remove IdentifierType
* Remove SimpleIdentifier
* Add missing include
2014-12-19 11:00:24 -08:00
Vinnie Falco
28b09bde4b Simplify RipplePublicKey:
This implements the bare minimum necessary to store a 33 byte public
key and use it in ordered containers. It is an efficient and well
defined alternative to RippleAddress when the caller only needs
a node public key.
2014-12-19 11:00:24 -08:00
Vinnie Falco
2f6af906f4 Validators work (RIPD-703):
This replaces the experimental validators module with foundational
code to implement a new system for tracking validators, validations and
the UNL. The code is turned off by default, in BeastConfig.h

* Remove obsolete public Manager interfaces
* Remove obsolete database methods
* Remove obsolete ChosenList concept
* Remove obsolete code
* Add missing includes
* Tidy up STValidation.h
* Move factory function to Validators::make_Manager
* Add Connection object for tracking STValidations
2014-12-19 11:00:23 -08:00
Vinnie Falco
628e3ac1eb Add waitable_executor 2014-12-18 10:26:55 -08:00
Josh Juran
fbf5785e35 Combine STTx::checkSign overloads:
Callers don't need to specify the signing key -- they're just retrieving
the key from the SerializedTransaction and then passing it back.

This simplifies Ed25519 implementation.
2014-12-12 20:14:02 -08:00
Nicholas Dudfield
eeea2b1ff8 Use ppa:afrank/boost 1.57 for Travis 2014-12-12 20:14:02 -08:00
Vinnie Falco
32062e439f Split peer connect logic to another class (RIPD-711):
All of the logic for establishing an outbound peer connection including
the initial HTTP handshake exchange is moved into a separate class. This
allows PeerImp to have a strong invariant: All PeerImp objects that exist
represent active peer connections that have already gone through the
handshake process.
2014-12-12 20:14:02 -08:00
Vinnie Falco
930a0beaf1 Add ZeroCopyOutputStream and tidy up 2014-12-12 20:14:02 -08:00
Nik Bougalis
4a49fefdd9 Various cleanups:
* Replace SYSTEM_NAME and other macros with C++ constructs
* Remove RIPPLE_ARRAYSIZE and use std::extent or ranged for loops
* Remove old-style, unused offer crossing unit test
* Make STAmount::saFromRate free and remove default argument
2014-12-12 20:14:02 -08:00
Nik Bougalis
8e792855e0 Do not use path if path expansion fails 2014-12-10 16:55:06 -08:00
Tom Ritchford
69f5c6987a Whitespace: clean WebSockets to 80 columns. 2014-12-10 16:55:06 -08:00
Nik Bougalis
85fc9e4ecf Revert e4c9822d78 "Enable processor-specific optimizations when available:" 2014-12-08 14:54:03 -08:00
Mark Travis
d5c3f0c9cf Stability bugfixes for online delete SHAMapStore:
The correct ledger age is necessary for checking health
status, and the previous behavior caused the online deletion process to
abort if the process took too long.

The tuning parameter added and the parameter whose default was modified both
minimize the impact of SQL DELETE operations by decreasing the default batch
size for deletes and by increasing the backoff period between deletion batches.
These parameters decrease contention for the SQLite database and I/O, with the
trade-off of longer processing time for online delete. Online delete is not a
time-critical function, so a little slowness in wall-clock time is not harmful.
2014-12-08 14:54:03 -08:00
JoelKatz
a48120e675 Fix incorrect source issuer for XRP source 2014-12-08 14:54:03 -08:00
Nik Bougalis
36f8e4f2ad Improve hex conversion & parsing routines 2014-12-08 14:54:03 -08:00
David Schwartz
1084a39a45 Improve the humanAccountID cache (RIPD-693)
Profiling indicated some performance issues coming from the
cache of 160-bit account IDs to base58 format. This replaces
the single cache with two caches and rotates out old
entries.
2014-12-08 14:54:03 -08:00
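A two-cache rotation like the one described above is a common way to bound
cache size while keeping hot entries. A generic sketch of the pattern (an
assumption about the approach, not the rippled code, keyed on strings for
simplicity):

    #include <cstddef>
    #include <string>
    #include <unordered_map>

    class rotating_cache
    {
        std::size_t limit_;
        std::unordered_map<std::string, std::string> current_, previous_;

    public:
        explicit rotating_cache(std::size_t limit) : limit_(limit) {}

        // Look in the current generation first, then the previous one;
        // promote hits so frequently used entries survive rotation.
        std::string const* find(std::string const& key)
        {
            auto it = current_.find(key);
            if (it != current_.end())
                return &it->second;
            it = previous_.find(key);
            if (it != previous_.end())
                return &current_.emplace(key, it->second).first->second;
            return nullptr;
        }

        // When the current generation fills up, it becomes the previous
        // one and the oldest entries are discarded.
        void insert(std::string const& key, std::string const& value)
        {
            if (current_.size() >= limit_)
            {
                previous_ = std::move(current_);
                current_.clear();
            }
            current_[key] = value;
        }
    };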
Tom Ritchford
86df482842 Make sure that handlers always return Json::objectValue. 2014-12-01 17:16:24 -05:00
Tom Ritchford
b0d47ebcc6 Use better base64 handling in ServerHandlerImp. 2014-12-01 17:15:23 -05:00
Tom Ritchford
fffdf1dfba Make beast::detail::chunk_encoded_buffers::to_hex() static 2014-12-01 11:12:59 -05:00
Tom Ritchford
3273ed2616 Remove unused BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES. 2014-12-01 10:56:03 -05:00
Vinnie Falco
aa7b0a31b0 Refactor protocol message parsing:
This replaces the stateful class parser with a stateless free function.
The protocol buffer message is parsed using a ZeroCopyInputStream.

* Invoke method is now a free function.
* Protocol handler doesn't need to derive from an abstract interface
* Only up to one message is processed at a time by the invoker.
* Remove error_code return from the handler's message processing functions.
* Add ZeroCopyInputStream implementation that wraps a BufferSequence.
* Free function parses up to one protocol message and calls the handler.
* Message type and size can be calculated from an iterator
  range or a buffer sequence.
2014-11-26 12:23:21 -08:00
Vinnie Falco
fb0d44d403 Use cluster state in Slot instead of PeerImp 2014-11-26 12:23:10 -08:00
Vinnie Falco
cd8ec89cbb Use injections from OverlayImpl in PeerImp 2014-11-26 12:23:02 -08:00
Vinnie Falco
252f271dc5 Fixes to beast::asio::streambuf:
* Fix to_string conversion
* Fix assert on debug invariant checks
* Fix the treatment of the output position when the entire output is committed.
* Add unit test
2014-11-26 12:22:55 -08:00
Vinnie Falco
62d400c3a9 Move the call to cancel_timer to the right place 2014-11-26 12:22:46 -08:00
Scott Schurr
f9aa3e0da5 Add more unit tests to rpc/impl/TransactionSign (RIPD-480):
By adding a mock it is possible to test the transactionSign
function without interacting with the ledger.  This is the
smallest change I could come up with that allows transactionSign
to be unit tested.

The unit tests are white boxed.  Each test case is a result
of examining the code and identifying behavior associated with
different JSON fields.  That means the tests are not based on
requirements; they are based on observed behavior.
2014-11-26 12:07:44 -08:00
Vinnie Falco
685fe5b0fb Don't call std::exit on clean exit 2014-11-25 19:19:56 -08:00
Vinnie Falco
5180e71a0d Remove unused chrono::time_point stream conversions 2014-11-25 19:19:56 -08:00
Vinnie Falco
55637f7508 Template abstract_clock on Clock:
The abstract_clock is now templated on a type meeting the requirements of
the Clock concept. It inherits the nested types of the Clock on which it
is based. This resolves a problem with the original design which broke the
type-safety of time_point from different abstract clocks.
2014-11-25 19:19:56 -08:00
Nicholas Dudfield
7d72dfe0be Updated freeze tests:
* Always run freeze tests (and enforcement tests)
* book_offers filtering tests are broken
2014-11-25 11:46:34 -08:00
Mark Travis
02529a0fc2 SHAMapStore Online Delete (RIPD-415):
Makes rippled configurable to support deletion of all data in its key-value
store (nodestore) and ledger and transaction SQLite databases based on
validated ledger sequence numbers. All records from a specified ledger
and forward shall remain available in the key-value store and SQLite, and
all data prior to that specific ledger may be deleted.

Additionally, the administrator may require that an RPC command be
executed to enable deletion. This is to align data deletion with local
policy.
2014-11-25 11:44:02 -08:00
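In practice the deletion boundary is simple arithmetic over ledger sequence
numbers; the figures and the "+ 1" below are illustrative assumptions, not
values taken from the implementation:

    #include <cstdint>
    #include <iostream>

    int main()
    {
        std::uint32_t validatedSeq  = 9000000; // hypothetical validated ledger
        std::uint32_t ledgersToKeep = 2000;    // hypothetical retention setting
        std::uint32_t cutoff = validatedSeq - ledgersToKeep + 1;

        std::cout << "ledgers " << cutoff << " through " << validatedSeq
                  << " are retained; earlier ledgers may be deleted\n";
    }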
JoelKatz
b44974677e Cleanup some stray formatting left in logs 2014-11-21 17:13:13 -08:00
Vinnie Falco
d4fd5e4fce HTTP Handshaking for Peers on Universal Port (RIPD-446):
This introduces a considerable change in the way that peers handshake. Instead
of sending the TMHello protocol message, the peer making the connection (client
role) sends an HTTP Upgrade request along with some special headers. The peer
acting in the server role sends an HTTP response completing the upgrade and
transition to RTXP (Ripple Transaction Protocol, a.k.a. peer protocol). If the
server has no available slots, then it sends a 503 Service Unavailable HTTP
response with a JSON content-body containing IP addresses of other servers to
try. The information that was previously contained in the TMHello message is
now communicated in the HTTP request and HTTP response including the secure
cookie to prevent man in the middle attacks. This information is documented
in the overlay README.md file.

To prevent disruption on the network, the handshake feature is rolled out in
two parts. This is part 1, where new servents acting in the client role will
send the old style TMHello handshake, and new servents acting in the server
role can automatically detect and accept both the old style TMHello handshake,
or the HTTP request accordingly. This detection happens in the Server module,
which supports the universal port. An experimental .cfg setting allows clients
to instead send HTTP handshakes when establishing peer connections. When this
code has reached a significant fraction of the network, these clients will be
able to establish a connection to the Ripple network using HTTP handshakes.

These changes clean up the handling of the socket for peers. It fixes a long
standing bug in the graceful close sequence, where remaining data such as the
IP addresses of other servers to try, did not get sent. Redundant state
variables for the peer are removed and the treatment of completion handlers is
streamlined. The treatment of SSL short reads and secure shutdown is also fixed.

Logging for the peers in the overlay module is divided into two partitions:
"Peer" and "Protocol". The Peer partition records activity taking place on the
socket while the Protocol partition informs about RTXP specific actions such as
transaction relay, fetch packs, and consensus rounds. The severity on the log
partitions may be adjusted independently to diagnose problems. Every log
message for peers is prefixed with a small, unique integer id in brackets,
to accurately associate log messages with peers.

HTTP handshaking is the first step in implementing the Hub and Spoke feature,
which transforms the network from a homogeneous network where all peers are
the same, into a structured network where peers with above average capabilities
in their ability to process ledgers and transactions self-assemble to form a
backbone of high powered machines which in turn serve a much larger number of
'leaves' with lower capacities, with the goal of improving the number of
transactions that may be retired over time.
2014-11-21 16:47:12 -08:00
Vinnie Falco
30123eaa4a Add WrappedSink:
This class puts a configured string prefix in front of
each line of Journal output.
2014-11-21 16:46:57 -08:00
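The prefixing behaviour is straightforward; a minimal sketch of the idea
(assumed free-function shape, not the WrappedSink interface):

    #include <iostream>
    #include <sstream>
    #include <string>

    // Write each line of `text` to `out`, preceded by a fixed prefix, which
    // is how per-connection identifiers end up on every log line.
    void write_prefixed(std::ostream& out, std::string const& prefix,
                        std::string const& text)
    {
        std::istringstream lines(text);
        for (std::string line; std::getline(lines, line);)
            out << prefix << line << '\n';
    }

    int main()
    {
        write_prefixed(std::cout, "[42] ", "connected\nhandshake complete\n");
    }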
Nik Bougalis
454ec97d51 Replace custom exceptions with std::runtime_error 2014-11-21 13:15:41 -08:00
Vinnie Falco
c2ac331e78 Fix unit_test suite matching with full names 2014-11-21 12:59:32 -08:00
Nik Bougalis
be4a35af11 Clarify SetAccount logic and clean up existing code 2014-11-21 12:59:32 -08:00
Tom Ritchford
445b29ad0d Fix RPC handlers to use the results of lookupLedger. 2014-11-21 12:59:32 -08:00
Vinnie Falco
64d0f7fffd Fix DecayingSample treatment of the window 2014-11-21 12:59:32 -08:00
Nik Bougalis
baf0d09455 Simplify the Beast fatal error reporting framework:
* Reduce interface to a single function which reports error details
* Remove unused functions
2014-11-21 12:59:32 -08:00
Vinnie Falco
08a81a0ab9 Tidy up the structure of sources in protocol/:
Split out and rename STValidation
Split out and rename STBlob
Split out and rename STAccount
Split out STPathSet
Split STVector256 and move UintTypes to protocol/
Rename to STBase
Rename to STLedgerEntry
Rename to SOTemplate
Rename to STTx
Remove obsolete AgedHistory
Remove types.h and add missing includes
Remove unnecessary includes in app.h
Remove unnecessary includes in app.h
Remove include app.h from app1.cpp
2014-11-20 20:15:29 -08:00
Nik Bougalis
31110c7fd9 Cleanup ripple::Ledger:
* Convert static member functions to free functions
* Adopt consistent naming convention
* De-inline code
2014-11-20 20:15:29 -08:00
Vinnie Falco
0e1dd92d9b Fix case where slot==nullptr in Overlay:
This changes the Overlay to correctly handle the case when nullptr is
returned by PeerFinder new_inbound_slot on a detected self-connection.
2014-11-20 20:15:29 -08:00
Vinnie Falco
a3204a4df7 Add http::chunk_encode:
This transforms a ConstBufferSequence into a new ConstBufferSequence whose
data is encoded according to the Content transfer encoding rules of RFC2616.
The implementation does not copy any memory.
2014-11-20 20:15:29 -08:00
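For reference, the chunked transfer coding from RFC2616 frames each piece of
data with a hexadecimal length line and a trailing CRLF, ending the body with
a zero-length chunk. The sketch below copies into a string purely for
readability; as noted above, the real chunk_encode avoids copying:

    #include <iostream>
    #include <sstream>
    #include <string>

    // Wrap one buffer in chunked-encoding framing.
    std::string chunk(std::string const& body)
    {
        std::ostringstream os;
        os << std::hex << body.size() << "\r\n"  // chunk size in hexadecimal
           << body << "\r\n";                    // chunk data
        return os.str();
    }

    int main()
    {
        std::cout << chunk("{\"result\":\"ok\"}")
                  << "0\r\n\r\n";                // final zero-length chunk
    }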
Donovan Hide
2288ab48b9 Use asio signal handling in Application (RIPD-140):
* Use signal_set as cross platform way of handling SIGINT
* Remove polling on main thread for shutdown.
* Add extra logging for received signal.
* Clean up exit handling on error in setup routines.
* Reuse isStopped() from Stoppable for status (could be isStopping() instead).
* Ctrl-C should now work for standalone mode as well on Windows.

Also small fixes to Resolver:
* Add Resolver prefix to logging.
* Fix AsyncObject::removeReference() logic.
* Fix work remaining logic.
2014-11-20 20:15:29 -08:00
mDuo13
670401884c Improve the human-readable description of the tesSUCCESS code:
Transactions that return tesSUCCESS have only been accepted and
propagated on the Ripple network and should not be considered
final until they have been included in a validated ledger.
2014-11-20 20:14:04 -08:00
Yana
37181c341e Changed doc/rippled-example.cfg to specify default for ssl_verify 2014-11-14 11:19:42 -08:00
wltsmrz
be7e677448 Update integration tests for changes to ripple-lib account request API:
Account requests expect an object as first argument
2014-11-14 11:10:12 -08:00
Josh Juran
b2eeb49a45 Clean up CKey and RippleAddress (RIPD-672)
* Remove CKey dependency on RippleAddress
* Create RAII ec_key wrapper that hides EC_KEY and other OpenSSL details
* Move CKey member logic into free functions
* Delete CKey class
* Rename units that are no longer CKey-related
* Delete code that was unused
2014-11-14 11:10:12 -08:00
Vinnie Falco
9aa040d917 Accept generic arguments in ci_equal 2014-11-14 11:10:11 -08:00
Vinnie Falco
c2043a223b Tidy up split_commas function and use it in Server
Conflicts:
	src/ripple/server/impl/ServerHandlerImp.cpp
2014-11-14 11:10:11 -08:00
Vinnie Falco
f24e859f17 Construct Server after Overlay and WSDoors:
When the ServerHandler is constructed before the Overlay, an incoming
connection received after the server's listening ports have been opened
but before the Overlay object has been created causes a crash.
2014-11-14 11:10:11 -08:00
Vinnie Falco
737b33f9d1 Merge branch 'release' into develop 2014-11-14 10:56:45 -08:00
David Schwartz
00791d2151 Set version to 0.26.4-sp3 2014-11-11 16:33:52 -08:00
David Schwartz
b141598f9b Fix bugs in pathfinding with XRP as the source currency 2014-11-11 16:27:39 -08:00
Tom Ritchford
d1618d79b0 Fix pathfinding with multiple issuers for one currency (RIPD-618).
* Allow pathfinding requests where the starting currency may have
  multiple issuers.

* Cache paths over all issuers to avoid repeating work.

* Clear the ledger checkpoint in one retry case.

* Add an additional node at the front of paths when the starting issuer
  is not the source account.
2014-11-11 16:27:03 -08:00
Tom Ritchford
00c84dfe5c Clean up Pathfinder.
* Restrict to 80-columns and other style cleanups.
* Make pathfinding a free function and hide the class Pathfinder.
* Split off unrelated utility functions into separate files.
2014-11-11 16:26:38 -08:00
Vinnie Falco
95f31b98a8 Set version to 0.26.4-sp2 2014-11-11 14:22:37 -08:00
Miguel Portilla
10d74ed100 Fix account_lines, account_offers and book_offers result (RIPD-682):
The RPC account_lines and account_offers commands respond with the correct
ledger info. account_offers, account_lines and book_offers allow admins
unlimited size on the limit param. Specifying a negative value on limit clamps
to the minimum value allowed. Incorrect types for limit are correctly reported
in the result.
2014-11-11 14:21:49 -08:00
Vinnie Falco
8a7f612d5b Revert pathfinding changes:
* 5e7c527 Revert "Fix account_lines, account_offers and book_offers result (RIPD-682):"
* b3417ca Revert "Fix pathfinding with multiple issuers for one currency (RIPD-618)."
* 00db7f5 Revert "Clean up Pathfinder."
2014-11-11 14:21:40 -08:00
Tom Ritchford
0829ee9234 Add missing header needed for boost 1.57 compatibility. 2014-11-10 23:23:53 -05:00
Nik Bougalis
b22e33444b Make Stoppable unit tests manual 2014-11-10 23:23:53 -05:00
David Schwartz
d115a12cbe Remove dead TxQueue code 2014-11-10 23:23:53 -05:00
Nik Bougalis
b7b744de94 Remove sole use of beast::MurmurHash 2014-11-10 15:00:20 -08:00
Nik Bougalis
68e46e406a Remove MurmurHash from Beast 2014-11-10 14:00:54 -08:00
Miguel Portilla
a46ae4efec Set version to 0.26.4-sp1 2014-11-10 16:25:45 -05:00
Miguel Portilla
62777a794e Fix account_lines, account_offers and book_offers result (RIPD-682):
The RPC account_lines and account_offers commands respond with the correct
ledger info. account_offers, account_lines and book_offers allow admins
unlimited size on the limit param. Specifying a negative value on limit clamps
to the minimum value allowed. Incorrect types for limit are correctly reported
in the result.
2014-11-10 16:25:14 -05:00
Tom Ritchford
329a969761 Remove unused RPCServer. 2014-11-10 12:53:21 -08:00
Vinnie Falco
30170bc394 Add short_read manual unit test:
This manual unit test explores the outcomes of shutting down
SSL stream connections at various point during a session.
2014-11-10 12:52:57 -08:00
Vinnie Falco
f193302e15 Add WrappedSink 2014-11-10 12:52:57 -08:00
Vinnie Falco
7c4870d641 Add operator<< for basic_streambuf 2014-11-10 12:52:43 -08:00
Vinnie Falco
8b84a76d5d Make ci_equal a function 2014-11-10 12:52:42 -08:00
Vinnie Falco
a4cd761372 Add rfc2616::parse_csv 2014-11-10 12:52:42 -08:00
Miguel Portilla
63d2cfd6ba Fix account_lines, account_offers and book_offers result (RIPD-682):
The RPC account_lines and account_offers commands respond with the correct
ledger info. account_offers, account_lines and book_offers allow admins
unlimited size on the limit param. Specifying a negative value on limit clamps
to the minimum value allowed. Incorrect types for limit are correctly reported
in the result.
2014-11-10 12:46:36 -08:00
Tom Ritchford
bb44bdd047 Fix pathfinding with multiple issuers for one currency (RIPD-618).
* Allow pathfinding requests where the starting currency may have
  multiple issuers.

* Cache paths over all issuers to avoid repeating work.

* Clear the ledger checkpoint in one retry case.

* Add an additional node at the front of paths when the starting issuer
  is not the source account.
2014-11-10 12:07:57 -05:00
Tom Ritchford
6904e66384 Clean up Pathfinder.
* Restrict to 80-columns and other style cleanups.
* Make pathfinding a free function and hide the class Pathfinder.
* Split off unrelated utility functions into separate files.
2014-11-10 12:06:49 -05:00
Vinnie Falco
fbffe2367e Fix weak_fn unit test. 2014-11-09 20:27:05 -08:00
Vinnie Falco
e442a2846d Overlay improvements and bug fixes:
PeerImp::detach had a default argument graceful=true which did not
correctly close the socket and caused the Overlay to often hang on exit.
The logging for Overlay and Peers has been reworked. All the socket activity
is logged to Peers while protocol activity goes to Protocol. Every log line
is prefixed by a small integer ID unique to the connection.
* Removed graceful PeerImp::detach option
* The Peer and Protocol log partitions handle their respective types of logging
* Log messages prefixed with peer unique integer
* Prevent call to timer cancel from throwing an exception
2014-11-08 14:39:46 -08:00
Vinnie Falco
f6985586ea Better logging when opening Server ports. 2014-11-08 14:36:45 -08:00
Vinnie Falco
2bae5b0959 Throw if rippled.cfg is missing a [server] section 2014-11-08 14:36:45 -08:00
Vinnie Falco
1e58809fcc Remove obsolete get_pointer 2014-11-08 14:36:44 -08:00
Vinnie Falco
6b1d213cc2 Add weak_fn 2014-11-08 14:36:44 -08:00
David Schwartz
42bec13a83 Add missing include needed for std::bad_cast in LexicalCast.h 2014-11-07 15:23:43 -08:00
Vinnie Falco
4415a179b3 Update freeze test for moved TxFlags.h 2014-11-07 14:12:43 -08:00
Vinnie Falco
5d42604efd Refactor the structure of source files:
* New src/ripple/crypto and src/ripple/protocol directories
* Merged src/ripple/common into src/ripple/basics
* Move resource/api files up a level
* Add headers for "include what you use"
* Normalized include guards
* Renamed to JsonFields.h
* Remove obsolete files
* Remove net.h unity header
* Remove resource.h unity header
* Removed some deprecated unity includes
2014-11-07 13:40:43 -08:00
Vinnie Falco
b134b7d3f6 Add missing includes. 2014-11-07 12:24:02 -08:00
Vinnie Falco
788219fe05 Adjust SSL context generation for Server:
The creation of self-signed certificates slows down the command
line client when launched repeatedly during unit test.
* Contexts are no longer generated for the command line client
* A port with no secure protocols generates an empty context
2014-11-07 06:13:56 -08:00
Tom Ritchford
9a7f66cfe9 Fix compilation errors in RPC/RipplePathFind.cpp 2014-11-06 21:58:13 -05:00
Tom Ritchford
daa4d16e61 Remove unused isXRP(Issue) function. 2014-11-06 20:17:13 -05:00
Tom Ritchford
cf05f87795 Fix pathfinding with multiple issuers for one currency (RIPD-618).
* Allow pathfinding requests where the starting currency may have
  multiple issuers.

* Cache paths over all issuers to avoid repeating work.

* Clear the ledger checkpoint in one retry case.

* Add an additional node at the front of paths when the starting issuer
  is not the source account.
2014-11-06 20:14:01 -05:00
Tom Ritchford
c2f2f83b7c Clean up Pathfinder.
* Restrict to 80-columns and other style cleanups.
* Make pathfinding a free function and hide the class Pathfinder.
* Split off unrelated utility functions into separate files.

Conflicts:
	src/ripple/rpc/handlers/RipplePathFind.cpp
2014-11-06 16:58:10 -08:00
Tom Ritchford
b30b2a523f Fix public member names of RPC::Context.
Conflicts:
	src/ripple/rpc/handlers/AccountTx.cpp
	src/ripple/rpc/handlers/AccountTxOld.cpp
	src/ripple/rpc/handlers/Ledger.cpp
	src/ripple/rpc/handlers/LedgerData.cpp
	src/ripple/rpc/handlers/RipplePathFind.cpp
	src/ripple/rpc/handlers/ServerInfo.cpp
	src/ripple/rpc/handlers/ServerState.cpp
	src/ripple/rpc/handlers/Submit.cpp
	src/ripple/rpc/handlers/Subscribe.cpp
	src/ripple/rpc/handlers/TxHistory.cpp
	src/ripple/rpc/handlers/Unsubscribe.cpp
	src/ripple/rpc/impl/Context.h
2014-11-06 16:55:20 -08:00
Nicholas Dudfield
150a3810a8 Update npm test rippled.cfg to use [server]:
The test now generates a configuration file with the new
configuration sections defined by the Universal Port feature.
2014-11-06 16:10:00 -08:00
Vinnie Falco
ac0eaa912b Universal Port (RIPD-160):
This changes the behavior and configuration specification of the listening
ports that rippled uses to accept incoming connections for the supported
protocols: peer (Peer Protocol), http (JSON-RPC over HTTP), https (JSON-RPC
over HTTPS), ws (Websockets Clients), and wss (Secure Websockets Clients).
Each listening port is now capable of handshaking in multiple protocols
specified in the configuration file (subject to some restrictions). Each
port can be configured to provide its own SSL certificate, or to use a
self-signed certificate. Ports can be configured to share settings; this
allows multiple ports to use the same certificate or values. The list of
ports is dynamic, administrators can open as few or as many ports as they
like. Authentication settings such as user/password or admin user/admin
password (for administrative commands on RPC or Websockets interfaces) can
also be specified per-port.

As the configuration file has changed significantly, administrators will
need to update their rippled.cfg files and carefully review the documentation
and new settings.

Changes:

* rippled-example.cfg updated with documentation and new example settings:
  All obsolete websocket, rpc, and peer configuration sections have been
  removed, the documentation updated, and a new documented set of example
  settings added.

* HTTP::Writer abstraction for sending HTTP server requests and responses
* HTTP::Handler handler improvements to support Universal Port
* HTTP::Handler handler supports legacy Peer protocol handshakes
* HTTP::Port uses shared_ptr<boost::asio::ssl::context>
* HTTP::PeerImp and Overlay use ssl_bundle to support Universal Port
* New JsonWriter to stream message and body through HTTP server
* ServerHandler refactored to support Universal Port and legacy peers
* ServerHandler Setup struct updated for Universal Port
* Refactor some PeerFinder members
* WSDoor and Websocket code stores and uses the HTTP::Port configuration
* Websocket autotls class receives the current secure/plain SSL setting
* Remove PeerDoor and obsolete Overlay peer accept code
* Remove obsolete RPCDoor and synchronous RPC handling code
* Remove other obsolete classes, types, and files
* Command line tool uses ServerHandler Setup for port and authorization info
* Fix handling of admin_user, admin_password in administrative commands
* Fix adminRole to check credentials for Universal Port
* Updated Overlay README.md

* Overlay sends IP:port redirects on HTTP Upgrade peer connection requests:
  Incoming peers who handshake using the HTTP Upgrade mechanism don't get
  a slot, and always get HTTP Status 503 redirect containing a JSON
  content-body with a set of alternate IP and port addresses to try, learned
  from PeerFinder. A future commit related to the Hub and Spoke feature will
  change the response to grant the peer a slot when there are peer slots
  available.

* HTTP responses to outgoing Peer connect requests parse redirect IP:ports:
  When the [overlay] configuration section (which is experimental) has
  http_handshake = 1, HTTP redirect responses will have the JSON content-body
  parsed to obtain the redirect IP:port addresses.

* Use a single io_service for HTTP::Server and Overlay:
  This is necessary to allow HTTP::Server to pass sockets to and from Overlay
  and eventually Websockets. Unfortunately Websockets is not so easily changed
  to use an externally provided io_service. This will be addressed in a future
commit, and is one step necessary to ease the restriction on ports configured
  to offer Websocket protocols in the .cfg file.
2014-11-06 16:10:00 -08:00
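A minimal sketch of the kind of per-port layout described above; section names,
keys, and values here are illustrative and are not copied from
rippled-example.cfg:

    [server]
    port_rpc
    port_peer
    port_ws

    [port_rpc]
    port = 5005
    ip = 127.0.0.1
    protocol = http
    admin_user = admin
    admin_password = secret

    [port_peer]
    port = 51235
    ip = 0.0.0.0
    protocol = peer

    [port_ws]
    port = 6006
    ip = 127.0.0.1
    protocol = ws

Ports listed under [server] can share values, and any port may additionally
point at its own SSL certificate files, per the description above.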
Vinnie Falco
05a04aa801 Set version to 0.26.4 2014-11-03 16:53:37 -08:00
Vinnie Falco
e37d4043f6 Add missing includes to make headers compile separately 2014-11-03 16:40:57 -08:00
Vinnie Falco
d073425b44 Improved beast::http::message:
* Add headers::erase
* Set http::message version with std::pair
* Use std::pair for headers::value_type
2014-11-03 16:40:57 -08:00
Nicholas Dudfield
825b18cf71 Add bin/stop-test.js shutdown testing script:
This script launches rippled repeatedly and then issues a stop command
after a variable amount of time. This is to test the shutdown of the
application and catch errors.
2014-11-03 16:31:21 -08:00
Vinnie Falco
549ad3204f Fix race conditions closing HTTP I/O objects:
This fixes a case where stop can sometimes skip calling close on some
I/O objects or crash in a rare circumstance where a connection is in the
process of being torn down at the exact time the server is stopped. When
the acceptor receives errors, it logs the error and continues listening
instead of stopping.
2014-11-03 14:11:06 -08:00
Vinnie Falco
35f9499b67 Fix Overlay stop on exit:
The stop sequence for Overlay had a race condition where autoconnect could
be called after close_all, resulting in a hang on exit. This resolves the
problem by putting the close and timer operations on a strand:
* Rename some Overlay members
* Put close on strand and tidy up members
* Use completion handler instead of coroutine for timer
* Use App io_service in PeerFinder
2014-11-03 14:11:05 -08:00
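For readers unfamiliar with the idiom, a minimal sketch of serializing a timer
and a close operation through one strand, which is the technique the fix above
relies on (the names here are illustrative, not the actual Overlay members):

    #include <boost/asio.hpp>

    // Handlers dispatched through the same strand never run concurrently,
    // so routing both the timer completion and close() through one strand
    // removes the autoconnect-after-close race described above.
    struct TimerExample
    {
        boost::asio::io_service::strand strand_;
        boost::asio::deadline_timer timer_;

        explicit TimerExample (boost::asio::io_service& ios)
            : strand_ (ios), timer_ (ios) { }

        void start()
        {
            timer_.expires_from_now (boost::posix_time::seconds (1));
            timer_.async_wait (strand_.wrap (
                [this](boost::system::error_code const& ec)
                {
                    if (! ec)
                        start();   // periodic work (e.g. autoconnect) goes here
                }));
        }

        void close()
        {
            // Runs on the strand, so it cannot interleave with the timer handler.
            strand_.post ([this] { timer_.cancel(); });
        }
    };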
Vinnie Falco
db82c35c17 Remove spurious assert in ResolverAsioImpl 2014-11-03 14:11:05 -08:00
Vinnie Falco
73c74f753c Change to the Application io_service:
* Simplified the implementation and removed class IoServicePool
* The io_service outlives the components of the Application
2014-11-03 14:11:05 -08:00
JoelKatz
a38fb2a5dc Clear the acquiring ledger when shutting down NetworkOPs:
This solves a circular destruction problem on exit.
2014-11-03 14:11:04 -08:00
Donovan Hide
38e99e01f9 Improve nodestore benchmarking:
* Use more succinct while loops on NodeFactory.
* Better formatting of multiple test results.
* Updated benchmarks.
* Use simpler and faster RNG to generate test data.
2014-11-02 07:16:08 -08:00
Donovan Hide
a1f46e84b8 Add new RocksDBQuickFactory for benchmarking:
This new factory is intended for benchmarking against the existing RocksDBFactory and has the following differences.
* Does not use BatchWriter
* Disables WAL for writes to memtable
* Uses a hash index in blocks
* Uses RocksDB OptimizeFor… functions
See Benchmarks.md for further discussion of some of the issues raised by investigation of RocksDB performance.
2014-11-01 07:12:09 -07:00
Donovan Hide
6540804571 Add repeatable NodeStore timing benchmark:
The timing test is changed to overcome possible file buffer cache effects by
creating different read access patterns. The --unittest-arg command line
argument allows running the benchmarks against any of the available backends
and altering the parameters, passed in the same format as rippled.cfg. The
num_objects parameter permits varying the number of key/value pairs inserted.
The data is random but matches reasonably well the values that rippled might
generate.
2014-11-01 07:12:08 -07:00
Howard Hinnant
ffe6707595 Refactor Stoppable:
The Stoppable interface aids in the enforcement of invariants needed to
successfully start and stop a multi-threaded application composed of classes
that depend on each other in complex ways.
* Test written to confirm the current behavior.
* Comments updated to reflect the current behavior.
* Public API reduced to what is currently in use.
* Protected data members made private.
* volatile bool members changed to std::atomic<bool>.
* std::atomic<int> members changed to std::atomic<bool>.
* Name storage uses std::string
2014-10-31 21:29:16 -07:00
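As a reminder of why the volatile-to-atomic change above matters, a tiny sketch
(illustrative, not the actual Stoppable members): volatile provides no
inter-thread synchronization, while std::atomic<bool> does.

    #include <atomic>

    class StopFlag
    {
        std::atomic<bool> stopped_ {false};   // was: volatile bool stopped_;

    public:
        void stop()
        {
            stopped_.store (true, std::memory_order_release);
        }

        bool isStopped() const
        {
            return stopped_.load (std::memory_order_acquire);
        }
    };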
Tom Ritchford
9b21740c9f Delete temporary directories at the end of tests (RIPD-460). 2014-10-31 21:21:54 -07:00
Tom Ritchford
bd12e2ab95 New class TempDirectory in UnitTestUtilities. 2014-10-31 21:21:54 -07:00
Donovan Hide
bffb5ef8b4 Avoid zero initialization of Blob:
This appears to improve the performance of the copy, although some byte-by-byte copying still seems to be present; further investigation is recommended.
2014-10-31 20:12:39 -07:00
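A sketch of the pattern, assuming Blob is a std::vector of bytes (a common
definition, assumed here): resize() zero-fills new elements before they are
overwritten, while assign() performs a single copy.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    using Blob = std::vector<std::uint8_t>;   // assumption: Blob is a byte vector

    // Zero-initializes the new elements before overwriting them: wasted work.
    void copyWithZeroFill (Blob& out, std::uint8_t const* data, std::size_t size)
    {
        out.resize (size);                    // zero-fills the new elements first
        std::copy (data, data + size, out.begin());
    }

    // Avoids the zero fill by constructing elements directly from the source.
    void copyDirect (Blob& out, std::uint8_t const* data, std::size_t size)
    {
        out.assign (data, data + size);       // single pass, no zero initialization
    }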
Donovan Hide
e4c9822d78 Enable processor-specific optimizations when available:
The SConstruct is modified to enable processor-specific optimizations on the clang and gcc toolchains. This improves the performance of RocksDB's CRC function, and it may also allow other libraries in the codebase, now or in the future, to apply CPU-specific optimizations. The -mtune option ensures that a binary compiled on one machine will still run on another.
2014-10-31 20:12:39 -07:00
Vinnie Falco
73187d8832 Remove obsolete multitls and proxy websocket features 2014-10-31 15:15:40 -07:00
Vinnie Falco
8101154d5e Remove obsolete websocket PROXY port 2014-10-31 15:15:40 -07:00
Vinnie Falco
c02937fd6f Remove obsolete sections from rippled-example.cfg:
* peer_port_proxy is obsolete since the MultiSocket was removed.
* peer_ssl_cipher_list has no effect; SSL ciphers are hard-coded for security.
2014-10-31 15:15:40 -07:00
Vinnie Falco
3430be4075 Add PeerFinder onRedirects function 2014-10-31 13:27:55 -07:00
Vinnie Falco
3f2b6f771f Add streambuf to_string function 2014-10-31 13:27:38 -07:00
Vinnie Falco
6e39b49cc2 Add Json::stream to write Value to a Streambuf 2014-10-31 13:27:33 -07:00
Vinnie Falco
71c34ed4e0 Remove unused ErrorReply function 2014-10-31 13:25:54 -07:00
Vinnie Falco
477178675c Fix parseIniFile for duplicate sections 2014-10-31 13:25:30 -07:00
Vinnie Falco
dbdf68b248 Refactor HTTP::Server to support Universal Port:
These changes are necessary to support the Universal port feature. Synopsis:

* Persist HTTP peer io_service::work lifetime:
This simplification eliminates any potential for bugs caused by incorrect
lifetime management of the io_service::work object.

* Restructure Door to prevent data races, and handle clean exit:
The Server, Door, Door::detector, and Peer objects work together to
correctly implement graceful stop and destructors that block until
all child objects have been destroyed.

Cleanups:
* De-pimpl HTTP::Server
* Rename ServerImpl data members
* Tidy up HTTP::Port interface
2014-10-30 16:02:19 -07:00
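A minimal sketch of the io_service::work idiom referenced above (illustrative,
not the ServerImpl code): run() keeps going as long as a work object exists, so
tying the work object's lifetime to the server removes the guesswork.

    #include <boost/asio.hpp>
    #include <memory>
    #include <thread>

    int main()
    {
        boost::asio::io_service ios;

        // While this object is alive, ios.run() will not return even if there
        // are momentarily no pending handlers.
        std::unique_ptr<boost::asio::io_service::work> work (
            new boost::asio::io_service::work (ios));

        std::thread t ([&ios] { ios.run(); });

        // ... accept connections, post handlers to ios ...

        work.reset();   // allow run() to return once remaining handlers finish
        t.join();
    }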
Vinnie Falco
2fd139b307 Refactor Overlay and add [overlay] config section (experimental):
These changes prepare Overlay for the Universal Port and Hub and Spoke
features.

* Add [overlay] configuration section:
The [overlay] section uses the new BasicConfig interface that
supports key-value pairs in the section. Some exposition is added to the
example cfg file. The new settings for overlay are related to the Hub and
Spoke feature which is currently in development. Production servers should
not set these configuration options; they are clearly marked experimental
in the example cfg file.

Other changes:
* Use _MSC_VER to detect Visual Studio
* Use ssl_bundle in Overlay::Peer
* Use shared_ptr to SSL context in Overlay:
* Removed undocumented PEER_SSL_CIPHER_LIST configuration setting
* Add Section::name: The Section object now stores its name for better diagnostic messages.
2014-10-30 13:55:01 -07:00
Vinnie Falco
a6c2657062 Add shared_ptr<boost::asio::ssl::context> to ssl_bundle:
This gives the ssl_bundle shared ownership of the underlying ssl context
so that ownership of the bundle may be transferred to other classes without
introducing lifetime issues.
2014-10-30 13:55:00 -07:00
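A sketch of the ownership shape this enables; the member names here are
assumptions for illustration, not the actual ssl_bundle definition:

    #include <boost/asio.hpp>
    #include <boost/asio/ssl.hpp>
    #include <memory>

    // Holding the context by shared_ptr means the bundle can be handed from
    // the accepting code to another owner without worrying about who keeps
    // the underlying ssl::context alive.
    struct bundle_sketch
    {
        std::shared_ptr<boost::asio::ssl::context> context;
        boost::asio::ssl::stream<boost::asio::ip::tcp::socket> stream;

        bundle_sketch (std::shared_ptr<boost::asio::ssl::context> ctx,
                       boost::asio::io_service& ios)
            : context (std::move (ctx))
            , stream (ios, *context)
        {
        }
    };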
Vinnie Falco
78a0bc0e2c Make streambuf buffers_type iterators default constructible 2014-10-30 13:55:00 -07:00
Edward Hennis
d7116d6867 Enable std::array overloads for boost::asio on clang:
* Remove Boost config option from beast config.
* Define it from the compiler, or let Boost figure it out itself.
2014-10-30 13:55:00 -07:00
Josh Juran
edc15b9fa2 Use a self-signed certificate for peers (RIPD-108):
Generate a new RSA key pair and a self-signed X.509v3 certificate to use
with SSL connections to rippled peers.  New credentials are created each
startup.
2014-10-30 13:54:49 -07:00
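A rough sketch of what generating throwaway credentials at startup can look
like with the OpenSSL API; key size, validity period, and subject name below
are illustrative, and error handling is omitted:

    #include <openssl/evp.h>
    #include <openssl/rsa.h>
    #include <openssl/x509.h>

    EVP_PKEY* makeKey()
    {
        EVP_PKEY* pkey = EVP_PKEY_new();
        RSA* rsa = RSA_generate_key (2048, RSA_F4, nullptr, nullptr);
        EVP_PKEY_assign_RSA (pkey, rsa);    // pkey takes ownership of rsa
        return pkey;
    }

    X509* makeSelfSignedCert (EVP_PKEY* pkey)
    {
        X509* x = X509_new();
        ASN1_INTEGER_set (X509_get_serialNumber (x), 1);
        X509_gmtime_adj (X509_get_notBefore (x), 0);
        X509_gmtime_adj (X509_get_notAfter (x), 60 * 60 * 24 * 365);
        X509_set_pubkey (x, pkey);

        X509_NAME* name = X509_get_subject_name (x);
        X509_NAME_add_entry_by_txt (name, "CN", MBSTRING_ASC,
            reinterpret_cast<unsigned char const*> ("self-signed"), -1, -1, 0);
        X509_set_issuer_name (x, name);     // issuer == subject: self-signed

        X509_sign (x, pkey, EVP_sha256());  // sign with our own key
        return x;
    }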
Josh Juran
93d4b73b2f RippleSSLContext: Add openssl wrappers 2014-10-30 10:52:37 -07:00
Vinnie Falco
8e3849e591 Create Ripple SSL contexts using std::make_shared. 2014-10-30 10:52:36 -07:00
Vinnie Falco
acaa1098f7 Separate beast::http::body from beast::http::message (RIPD-660):
This changes the http::message object to no longer contain a body. It modifies
the parser to store the body in a separate object, or to pass the body data
to a functor. This allows the body to be stored in more flexible ways. For
example, in HTTP responses the body can be generated procedurally instead
of being required to exist entirely in memory at once.
2014-10-29 19:23:53 -07:00
Vinnie Falco
c1a5e88752 Use beast::asio::streambuf in Overlay 2014-10-29 19:23:53 -07:00
Vinnie Falco
74b99014d2 Add beast::asio::basic_streambuf (RIPD-661):
This is a class whose interface is identical to that of boost::asio::basic_streambuf,
and uses an implementation that stores the data in multiple discontiguous
linear buffers, expanding and shrinking as needed.
2014-10-29 19:23:53 -07:00
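Since the interface matches boost::asio::basic_streambuf, the usual usage
applies; a small sketch using the Boost type:

    #include <boost/asio.hpp>
    #include <iostream>
    #include <string>

    int main()
    {
        boost::asio::streambuf buf;

        // Write into the buffer through a std::ostream; data is appended to
        // whatever linear buffers the implementation maintains internally.
        std::ostream os (&buf);
        os << "hello, streambuf";

        // Read it back through a std::istream; consumed bytes are released.
        std::istream is (&buf);
        std::string line;
        std::getline (is, line);

        std::cout << line << " (" << buf.size() << " bytes left)\n";
    }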
Tom Ritchford
9cba944d21 Add Json::to_string:
This allows the declaration for FastWriter to be hidden and makes
conversion of Json::Value objects to strings a little less clunky.
2014-10-29 19:23:52 -07:00
Tom Ritchford
8c1c2f5d05 Eliminate a copy of the string returned by FastWriter 2014-10-29 19:22:44 -07:00
Tom Ritchford
bf0fa8c562 Add 'sample' npm test:
test/sample-test.js is the smallest possible npm test.
2014-10-28 10:59:59 -07:00
Donovan Hide
3e1fc9ba6c Update unit testing command line parser parameters:
A string passed by the '--unittest-arg' command line parameter is passed to
suites when unit tests run and can be used to customize test behavior.
* Add '--unittest-arg' command line argument
* Remove obsolete '--unittest-format' command line argument
2014-10-28 10:55:44 -07:00
Vinnie Falco
4ceba603e4 Improvements to beast::unit_test framework:
* Some runner member functions are now thread-safe.
* De-inline and tidy up declarations and definitions.
* arg() interface allows command lines to be passed to suites.
2014-10-28 10:41:10 -07:00
Vinnie Falco
6591c21ace Set version to 0.26.4-rc4 2014-10-27 11:49:39 -07:00
Vinnie Falco
e8d03c7b9b Update rocksdb unity file 2014-10-27 11:48:21 -07:00
Vinnie Falco
6fbce4c2f7 Update src/rocksdb2 to rocksdb-3.5.1:
Merge commit 'c168d54495d7d7b84639514f6443ad99b89ce996' into develop
2014-10-27 11:37:01 -07:00
Vinnie Falco
c168d54495 Squashed 'src/rocksdb2/' changes from 25888ae..1fdd726
1fdd726 Hotfix RocksDB 3.5
d67500a Add `make install` to Makefile in 3.5.fb.
4cb631a update HISTORY.md
cfd0946 comments about the BlockBasedTableOptions migration in Options
REVERT: 25888ae Merge pull request #329 from fyrz/master
REVERT: 89833e5 Fixed signed-unsigned comparison warning in db_test.cc
REVERT: fcac705 Fixed compile warning on Mac caused by unused variables.
REVERT: b3343fd resolution for java build problem introduced by 5ec53f3edf62bec1b690ce12fb21a6c52203f3c8
REVERT: 187b299 ForwardIterator: update prev_key_ only if prefix hasn't changed
REVERT: 5ec53f3 make compaction related options changeable
REVERT: d122e7b Update INSTALL.md
REVERT: 986dad0 Merge pull request #324 from dalgaaf/wip-da-SCA-20140930
REVERT: 8ee75dc db/memtable.cc: remove unused variable merge_result
REVERT: 0fd8bbc db/db_impl.cc: reduce scope of prefix_initialized
REVERT: 676ff7b compaction_picker.cc: remove check for >=0 for unsigned
REVERT: e55aea5 document_db.cc: fix assert
REVERT: d517c83 in_table_factory.cc: use correct format specifier
REVERT: b140375 ttl/ttl_test.cc: prefer prefix ++operator for non-primitive types
REVERT: 43c789c spatialdb/spatial_db.cc: use !empty() instead of 'size() > 0'
REVERT: 0de452e document_db.cc: pass const parameter by reference
REVERT: 4cc8643 util/ldb_cmd.cc: prefer prefix ++operator for non-primitive types
REVERT: af8c2b2 util/signal_test.cc: suppress intentional null pointer deref
REVERT: 33580fa db/db_impl.cc: fix object handling, remove double lines
REVERT: 873f135 db_ttl_impl.h: pass func parameter by reference
REVERT: 8558457 ldb_cmd_execute_result.h: perform init in initialization list
REVERT: 063471b table/table_test.cc: pass func parameter by reference
REVERT: 93548ce table/cuckoo_table_reader.cc: pass func parameter by ref
REVERT: b8b7117 db/version_set.cc: use !empty() instead of 'size() > 0'
REVERT: 8ce050b table/bloom_block.*: pass func parameter by reference
REVERT: 53910dd db_test.cc: pass parameter by reference
REVERT: 68ca534 corruption_test.cc: pass parameter by reference
REVERT: 7506198 cuckoo_table_db_test.cc: add flush after delete
REVERT: 1f96330 Print MB per second compaction throughput separately for reads and writes
REVERT: ffe3d49 Add an instruction about SSE in INSTALL.md
REVERT: ee1f3cc Package generation for Ubuntu and CentOS
REVERT: f0f7955 Fixing comile errors on OS X
REVERT: 99fb613 remove 2 space linter
REVERT: b2d64a4 Fix linters, second try
REVERT: 747523d Print per column family metrics in db_bench
REVERT: 56ebd40 Fix arc lint (should fix #238)
REVERT: 637f891 Merge pull request #321 from eonnen/master
REVERT: 827e31c Make test use a compatible type in the size checks.
REVERT: fd5d80d CompactedDB: log using the correct info_log
REVERT: 2faf49d use GetContext to replace callback function pointer
REVERT: 983d2de Add AUTHORS file. Fix #203
REVERT: abd70c5 Merge pull request #316 from fyrz/ReverseBytewiseComparator
REVERT: 2dc6f62 handle kDelete type in cuckoo builder
REVERT: 8b8011a Changed name of ReverseBytewiseComparator based on review comment
REVERT: 389edb6 universal compaction picker: use double for potential overflow
REVERT: 5340484 Built-in comparator(s) in RocksJava
REVERT: d439451 delay initialization of cuckoo table iterator
REVERT: 94997ea reduce memory usage of cuckoo table builder
REVERT: c627595 improve memory efficiency of cuckoo reader
REVERT: 581442d option to choose module when calculating CuckooTable hash
REVERT: fbd2daf CompactedDBImpl::MultiGet() for better CuckooTable performance
REVERT: 3c68006 CompactedDBImpl
REVERT: f7375f3 Fix double deletes
REVERT: 21ddcf6 Remove allow_thread_local
REVERT: fb4a492 Merge pull request #311 from ankgup87/master
REVERT: 611e286 Merge branch 'master' of https://github.com/facebook/rocksdb
REVERT: 0103b44 Merge branch 'master' of ssh://github.com/ankgup87/rocksdb
REVERT: 1dfb7bb Add block based table config options
REVERT: cdaf44f Enlarge log size cap when printing file summary
REVERT: 7cc1ed7 Merge pull request #309 from naveenatceg/staticbuild
REVERT: ba6d660 Resolving merge conflict
REVERT: 51eeaf6 Addressing review comments
REVERT: fd7d3fe Addressing review comments (adding a env variable to override temp directory)
REVERT: cf7ace8 Addressing review comments
REVERT: 0a29ce5 re-enable BlockBasedTable::SetupForCompaction()
REVERT: 55af370 Remove TODO for checking index checksums
REVERT: 3d74f09 Fix compile
REVERT: 53b0039 Fix release compile
REVERT: d0de413 WriteBatchWithIndex to allow different Comparators for different column families
REVERT: 57a32f1 change target_file_size_base to uint64_t
REVERT: 5e6aee4 dont create backup_input if compaction filter v2 is not used
REVERT: 49b5f94 Merge pull request #306 from Liuchang0812/fix_cast
REVERT: 787cb4d remove cast, replace %llu with % PRIu64
REVERT: a7574d4 Update logging.cc
REVERT: 7e0dcb9 Update logging.cc
REVERT: 57fa3cc Merge pull request #304 from Liuchang0812/fix-check
REVERT: cd44522 Merge pull request #305 from Liuchang0812/fix-logging
REVERT: 6a031b6 remove unused variable
REVERT: 4436f17 fixed #303: replace %ld with % PRId64
REVERT: 7a1bd05 Merge pull request #302 from ankgup87/master
REVERT: 423e52c Merge branch 'master' of https://github.com/facebook/rocksdb
REVERT: bfeef94 Add rate limiter
REVERT: 32f2532 Print compression_size_percent as a signed int
REVERT: 976caca Skip AllocateTest if fallocate() is not supported in the file system
REVERT: 3b897cd Enable no-fbcode RocksDB build
REVERT: f445947 RocksDB: Format uint64 using PRIu64 in db_impl.cc
REVERT: e17bc65 Merge pull request #299 from ankgup87/master
REVERT: b93797a Fix build
REVERT: adae3ca [Java] Fix JNI link error caused by the removal of options.db_stats_log_interval
REVERT: 90b8c07 Fix unit tests errors
REVERT: 51af7c3 CuckooTable: add one option to allow identity function for the first hash function
REVERT: 0350435 Fixed a signed-unsigned comparison in spatial_db.cc -- issue #293
REVERT: 2fb1fea Fix syncronization issues
REVERT: ff76895 Remove some unnecessary constructors
REVERT: feadb9d fix cuckoo table builder test
REVERT: 3c232e1 Fix mac compile
REVERT: 54cada9 Run make format on PR #249
REVERT: 27b22f1 Merge pull request #249 from tdfischer/decompression-refactoring
REVERT: fb6456b Replace naked calls to operator new and delete (Fixes #222)
REVERT: 5600c8f cuckoo table: return estimated size - 1
REVERT: a062e1f SetOptions() for memtable related options
REVERT: e4eca6a Options conversion function for convenience
REVERT: a7c2094 Merge pull request #292 from saghmrossi/master
REVERT: 4d05234 Merge branch 'master' of github.com:saghmrossi/rocksdb
REVERT: 60a4aa1 Test use_mmap_reads
REVERT: 94e43a1 [Java] Fixed 32-bit overflowing issue when converting jlong to size_t
REVERT: f9eaaa6 added include for inttypes.h to fix nonworking printf statements
REVERT: f090575 Replaced "built on on earlier work" by "built on earlier work" in README.md
REVERT: faad439 Fix #284
REVERT: 49aacd8 Fix make install
REVERT: acb9348 [Java] Include WriteBatch into RocksDBSample.java, fix how DbBenchmark.java handles WriteBatch.
REVERT: 4a27a2f Don't sync manifest when disableDataSync = true
REVERT: 9b8480d Merge pull request #287 from yinqiwen/rate-limiter-crash-fix
REVERT: 28be16b fix rate limiter crash #286
REVERT: 04ce1b2 Fix #284
REVERT: add22e3 standardize scripts to run RocksDB benchmarks
REVERT: dee91c2 WriteThread
REVERT: 540a257 Fix WAL synced
REVERT: 24f034b Merge pull request #282 from Chilledheart/develop
REVERT: 49fe329 Fix build issue under macosx
REVERT: ebb5c65 Add make install
REVERT: 0352a9f add_wrapped_bloom_test
REVERT: 9c0e66c Don't run background jobs (flush, compactions) when bg_error_ is set
REVERT: a9639bd Fix valgrind test
REVERT: d1f24dc Relax FlushSchedule test
REVERT: 3d9e6f7 Push model for flushing memtables
REVERT: 059e584 [unit test] CompactRange should fail if we don't have space
REVERT: dd641b2 fix RocksDB java build
REVERT: 53404d9 add_qps_info_in cache bench
REVERT: a52cecb Fix Mac compile
REVERT: 092f97e Fix comments and typos
REVERT: 6cc1286 Added a few statistics for BackupableDB
REVERT: 0a42295 Fix SimpleWriteTimeoutTest
REVERT: 06d9862 Always pass MergeContext as pointer, not reference
REVERT: d343c3f Improve db recovery
REVERT: 6bb7e3e Merger test
REVERT: 88841bd Explicitly cast char to signed char in Hash()
REVERT: 5231146 MemTableOptions
REVERT: 1d284db Addressing review comments
REVERT: 55114e7 Some updates for SpatialDB
REVERT: 171d4ff remove TailingIterator reference in db_impl.h
REVERT: 9b0f7ff rename version_set options_ to db_options_ to avoid confusion
REVERT: 2d57828 Check stop level trigger-0 before slowdown level-0 trigger
REVERT: 659d2d5 move compaction_filter to immutable_options
REVERT: 048560a reduce references to cfd->options() in DBImpl
REVERT: 011241b DB::Flush() Do not wait for background threads when there is nothing in mem table
REVERT: a2bb7c3 Push- instead of pull-model for managing Write stalls
REVERT: 0af157f Implement full filter for block based table.
REVERT: 9360cc6 Fix valgrind issue
REVERT: 02d5bff Merge pull request #277 from wankai/master
REVERT: 88a2f44 fix comments
REVERT: 7c16e39 Merge pull request #276 from wankai/master
REVERT: 8237738 replace hard-coded number with named variable
REVERT: db8ca52 Merge pull request #273 from nbougalis/static-analysis
REVERT: b7b031f Merge pull request #274 from wankai/master
REVERT: 4c2b1f0 Merge remote-tracking branch 'upstream/master'
REVERT: a5d2863 typo improvement
REVERT: 9f8aa09 Don't leak data returned by opendir
REVERT: d1cfb71 Remove unused member(s)
REVERT: bfee319 sizeof(int*) where sizeof(int) was intended
REVERT: d40c1f7 Add missing break statement
REVERT: 2e97c38 Avoid off-by-one error when using readlink
REVERT: 40ddc3d add cache bench
REVERT: 9f1c80b Drop column family from write thread
REVERT: 8de151b Add db_bench with lots of column families to regression tests
REVERT: c9e419c rename options_ to db_options_ in DBImpl to avoid confusion
REVERT: 5cd0576 Fix compaction bug in Cuckoo Table Builder. Use kvs_.size() instead of num_entries in FileSize() method.
REVERT: 0fbb3fa fixed memory leak in unit test DBIteratorBoundTest
REVERT: adcd253 fix asan check
REVERT: 4092b7a Merge pull request #272 from project-zerus/patch-1
REVERT: bb6ae0f fix more compile warnings
REVERT: 6d31441 Merge pull request #271 from nbougalis/cleanups
REVERT: 0cd0ec4 Plug memory leak during index creation
REVERT: 4329d74 Fix swapped variable names to accurately reflect usage
REVERT: 45a5e3e Remove path with arena==nullptr from NewInternalIterator
REVERT: 5665e5e introduce ImmutableOptions
REVERT: e0b99d4 created a new ReadOptions parameter 'iterate_upper_bound'
REVERT: 51ea889 Fix travis builds
REVERT: a481626 Relax backupable rate limiting test
REVERT: f7f973d Merge pull request #269 from huahang/patch-2
REVERT: ef5b384 fix a few compile warnings
REVERT: 2fd3806 Merge pull request #263 from wankai/master
REVERT: 1785114 delete unused Comparator
REVERT: 1b1d961 update HISTORY.md
REVERT: 703c3ea comments about the BlockBasedTableOptions migration in Options
REVERT: 4b5ad88 Merge pull request #260 from wankai/master
REVERT: 19cc588 change to filter_block std::unique_ptr support RAII
REVERT: 9b976e3 Merge pull request #259 from wankai/master
REVERT: 5d25a46 Merge remote-tracking branch 'upstream/master'
REVERT: dff2b1a typo improvement
REVERT: 343e98a Reverting import change
REVERT: ddb8039 RocksDB static build Make file changes to download and build the dependencies .Load the shared library when RocksDB is initialized

git-subtree-dir: src/rocksdb2
git-subtree-split: 1fdd726a8254c13d0c66d8db8130ad17c13d7bcc
2014-10-27 11:36:32 -07:00
Vinnie Falco
2cce22052b Update SQLite to 3.8.7:
sha1: 3e23079f062fc06705eead4db108ee429878b532
2014-10-27 11:04:46 -07:00
Tom Ritchford
4e19d5f625 Adjust paths and costs in Pathfinder. 2014-10-27 11:03:19 -07:00
Tom Ritchford
5b667da526 Squelch some warnings in rippled and third-party code. 2014-10-27 10:00:03 -07:00
Nik Bougalis
f9fc9a3518 Reduce RippleD dependencies on Beast:
* Use static_assert where appropriate
* Use std::min and std::max where appropriate
* Simplify RippleD error reporting
* Remove use of beast::RandomAccessFile
2014-10-27 09:55:58 -07:00
Nik Bougalis
e005cfd70e Reduce Beast public interface and eliminate unused code:
Beast includes a lot of code, not used or needed by rippled, that encapsulates
cross-platform differences. Additionally, a lot of that code
implements functionality that is available from the standard library.

This moves away from custom implementations of features that the standard
library provides and reduces the number of platform-specific interfaces
and features that Beast makes available.

Highlights include:
* Use std:: instead of beast implementations when possible
* Reduce the use of beast::String in public interfaces
* Remove Windows-specific COM and Registry code
* Reduce the public interface of beast::File
* Reduce the public interface of beast::SystemStats
* Remove unused sysctl/getsysinfo functions
* Remove beast::Logger
2014-10-27 09:55:43 -07:00
Vinnie Falco
feb997481c Refactor the structure of ServerHandler:
This is a cleanup to the structure of the sources.
* Rename to ServerHandler
* Move private implementation declaration to separate header
* De-inline function definitions in the class declaration.
2014-10-27 09:50:03 -07:00
Vinnie Falco
2c8e90c9d8 Remove obsolete RPCServerHandler:
This removes the legacy RPCServerHandler, which has been replaced by the
asynchronous RPC-HTTP/S server and corresponding RPCHTTPHandler.
2014-10-27 09:50:03 -07:00
Vinnie Falco
ec96d5afa0 Remove unused and obsolete classes and tidy up:
Many classes required to support type-erasure of handlers and boost::asio
types are now obsolete, so these classes and files are removed:
HTTPClientType, FixedInputBuffer, PeerRole, socket_wrapper,
client_session, basic_url, abstract_socket, buffer_sequence, memory_buffer,
enable_wait_for_async, shared_handler, wrap_handler, streambuf,
ContentBodyBuffer, SSLContext, completion-handler based handshake detectors.
These structural changes are made:
* Some missing includes added to headers
* asio module directory flattened
2014-10-26 08:40:52 -07:00
Vinnie Falco
8be8853c33 Remove obsolete classes, disable unused code, and tidy up:
* Removed MultiSocket. Code that previously used the MultiSocket now uses
  a combination of boost::asio coroutines and CRTP.
* Sitefiles headers rolled up and directory flattened.
* Disabled Sitefiles use of deprecated HTTPClient.
* Validators headers tidied up.
* Disabled Validators use of deprecated HTTPClient.
2014-10-26 08:38:37 -07:00
Vinnie Falco
c228f5a244 Set version to 0.26.4-rc3 2014-10-25 08:07:40 -07:00
Vinnie Falco
d4c8b4e3ac Merge branch 'release' into develop 2014-10-25 08:07:30 -07:00
Vinnie Falco
6564f6c164 Fix incorrect socket closure in Overlay peers:
On Application exit, Overlay was calling PeerImp::close for each peer.
The implementation of PeerImp::close only canceled all pending I/O and did not
call functions necessary for proper transition of Peer state during socket
closure. The correct transition is ensured by calling PeerImp::detach. This
changes PeerImp::close to call PeerImp::detach instead, ensuring that Overlay
invariants are maintained. Specifically, that reference counts for pending I/O
on peers will be correctly unwound by canceling operations and that the Peer
object will be destroyed, thus allowing the Overlay to stop correctly.
2014-10-25 08:01:57 -07:00
Vinnie Falco
1e37a5509c Add missing includes 2014-10-24 08:13:55 -07:00
Vinnie Falco
1e9503deaa Set version to 0.26.2-rc2 2014-10-23 13:49:22 -07:00
Vinnie Falco
ab1f36c565 Revert "Add [overlay] configuration section (experimental):"
This reverts commit 856fd9d69f.
2014-10-23 13:48:52 -07:00
Vinnie Falco
5a212cd626 Set version to 0.26.4-rc1 2014-10-23 13:01:12 -07:00
Vinnie Falco
856fd9d69f Add [overlay] configuration section (experimental):
This configuration section uses the new BasicConfig interface that supports
key-value pairs in the section. Some exposition is added to the example cfg
file. The new settings for overlay are related to the Hub and Spoke feature
which is currently in development. Production servers should not set
these configuration options, they are clearly marked experimental in the
example cfg file.

Conflicts:
	src/ripple/overlay/impl/OverlayImpl.cpp
	src/ripple/overlay/impl/OverlayImpl.h
	src/ripple/overlay/impl/PeerImp.cpp
	src/ripple/overlay/impl/PeerImp.h
2014-10-23 12:56:16 -07:00
Vinnie Falco
4606d99951 Don't use MultiSocket in Overlay:
The MultiSocket is obsolete technology which is superseded by a more
straightforward, template based implementation that is compatible with
boost::asio::coroutines. This removes support for the unused PROXY handshake
feature. After this change a large number of classes and source files may be
removed.
2014-10-23 12:56:16 -07:00
Tom Ritchford
dbd75169e5 New JsonWriter for improved client performance (RIPD-439):
When JSON-RPC and Websocket responses are calculated, the result is stored
in intermediate Json::Value objects and later composed in a single linear
memory buffer before being sent to the socket.  These classes support a
new model for building responses that supports incremental construction
of JSON replies in constant time and removes the requirement that all
data returned be located in contiguous memory.
* New JsonWriter incrementally writes JSON with O(1) granularity and memory.
* Array, Object are RAII wrappers for the O(1) JsonWriter.
2014-10-23 07:04:47 -07:00
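A toy sketch of the underlying idea (this is not the rippled JsonWriter API,
and the class name here is hypothetical): each append is O(1) and writes
straight into the output, so no intermediate Json::Value tree or single
contiguous staging buffer is required.

    #include <iostream>
    #include <string>

    // RAII: the object is opened on construction and closed on destruction,
    // in the spirit of the Array/Object wrappers mentioned above.
    class TinyWriter
    {
        std::string& out_;
        bool first_ = true;

    public:
        explicit TinyWriter (std::string& out) : out_ (out) { out_ += '{'; }
        ~TinyWriter() { out_ += '}'; }

        void append (std::string const& key, std::string const& value)
        {
            if (! first_)
                out_ += ',';
            first_ = false;
            out_ += '"' + key + "\":\"" + value + '"';
        }
    };

    int main()
    {
        std::string reply;
        {
            TinyWriter w (reply);          // closes the object on scope exit
            w.append ("status", "success");
            w.append ("ledger", "current");
        }
        std::cout << reply << '\n';        // {"status":"success","ledger":"current"}
    }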
Vinnie Falco
f5b39ee911 Remove HTTP::ScopedStream:
This class was used to allow stream style operator<< to write to the
HTTP::Session. This is being superseded by a more robust object-based model
that supports coroutines.
2014-10-22 19:36:28 -07:00
Vinnie Falco
db5d52b4b2 Keep a list of section config values that are not key/value pairs:
This change to BasicConfig stores all appended lines that are not key/value
pairs in a separate values vector that can be retrieved later. This is to
support sections containing both key/value pairs and a list of plain values.
2014-10-22 19:36:28 -07:00
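A hypothetical section of the kind this enables (the section name and entries
are illustrative only), where the first line is parsed as a key/value pair and
the remaining lines are kept as plain values:

    [example_section]
    threshold = 10
    r.ripple.com 51235
    s-east.ripple.com 51235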
Vinnie Falco
dfeb9967b8 Return error_code from beast::http::basic_parser:
This changes the HTTP parser interface to return an error_code instead
of a bool. This eliminates the need for the error() member function and
simplifies calling code.
2014-10-22 19:36:28 -07:00
Vinnie Falco
673e860c18 Add beast::asio::ssl_bundle workaround:
This works around the limitation that Boost 1.56 boost::asio::ssl::stream objects
do not support r-value move construction. It is required when the stream
does not own the socket.
2014-10-22 19:36:28 -07:00
Vinnie Falco
9deae34b20 Workaround for MSVC stdlib and coroutine interaction:
If beast::Time::currentTimeMillis is first called from a coroutine launched
using boost::asio::spawn, Win32 throws an exception. This workaround calls
getCurrentTime once in main to prevent the exception.
Reference:
    https://svn.boost.org/trac/boost/ticket/10657
2014-10-22 19:36:27 -07:00
Vinnie Falco
ec92344fb4 Use autotls instead of multitls in websocket:
The MultiSocket class implements a socket that handshakes in multiple
protocols including SSL and PROXY. Unfortunately the way it type-erases the
handlers and buffers is incompatible with boost::asio coroutines. To pave the
way for coroutines this is part of a larger set of changes that roll back the
usage of MultiSocket to older code, and some custom implementations that use
templates. The custom implementations are simpler since they use
coroutines. Removing MultiSocket will make many other classes and source files
unused, a big win for trimming down the codebase size.
2014-10-22 19:34:48 -07:00
Donovan Hide
44c68d6174 Change NodeStore::Backend tests to reflect observed patterns:
Empirical evidence shows a database access pattern with few hits
and many misses (objects that don't exist). This changes the timing
tests so they more accurately reflect rippled's actual usage:
* Add read missing keys test
* Increase numObjectsToTest to 1,000,000
* Alter PredictableObjectFactory to seed RNG once only
* Make NodeStoreTiming a manual test
2014-10-22 19:29:29 -07:00
Howard Hinnant
5b7f172d03 Fix OS X version parsing/error related to OS X 10.10 update. 2014-10-22 19:29:28 -07:00
JoelKatz
65125eac87 Add "deferred" flag to transaction relay message
If we receive a deferred transaction from a server in our
cluster, treat it as if it wasn't received from a server
in our cluster.

This currently has no effect but is needed for the server to
interoperate with future code that will relay deferred
transactions.
2014-10-22 19:29:28 -07:00
Scott Schurr
761902864a Refactor STParsedJSON to parse an object or array [RIPD-480]
The implementation of multi-sign has a SigningAccounts array as a
member of the outermost object.  This array could not be parsed
by the previous implementation of STParsedJSON, which only knew
how to parse objects.  This refactor supports the required parsing.

The refactor divides the parsing into three separate functions:

 o parseNoRecurse() which parses most rippled data types.
 o parseObject() which parses object types that may contain
   arbitrary other types.
 o parseArray() which parses array types that may contain
   arbitrary other types.

The change is required by the multi-sign implementation, but is
independent of it, so the parsing change is going in as a separate
commit.

The parsing is still far from perfect, but this was as much as
needed to accomplish the goal while mitigating the risk of breaking
the parser.
2014-10-22 19:29:28 -07:00
Vinnie Falco
af24d541d1 Workaround for MSVC move special members. 2014-10-18 08:16:12 -07:00
Donovan Hide
3ad68a617e Fix dependency on boost::thread on OS/X. 2014-10-16 21:44:36 -04:00
Nik Bougalis
9e1a6589d4 Return descriptive error message from memo validation (RIPD-591). 2014-10-16 21:44:36 -04:00
Josh Juran
da8ceed07e RippleSSLContext.cpp cleanup.
* These cleanups precede work on RIPD-108.
2014-10-16 21:44:36 -04:00
Nik Bougalis
35935adc98 Fix URL compositing in Beast (RIPD-636). 2014-10-16 21:44:36 -04:00
Howard Hinnant
5b4a501f68 Detab beast 2014-10-15 19:39:30 -04:00
Tom Ritchford
5425a90f16 Fix tabs and trailing whitespace. 2014-10-15 19:39:30 -04:00
Donovan Hide
7eaca149c1 Remove boost_thread dependency (RIPD-216).
Fixes RIPD-216
2014-10-15 19:37:25 -04:00
Mark Travis
4b5fd95657 Disable SSLv3 2014-10-15 19:37:25 -04:00
Nik Bougalis
96dedf553e Refactor SerializedTransaction:
* Use boost::tribool instead of two intertwined bool variables
* Trim down public interface, reduce member variables
2014-10-15 19:37:25 -04:00
sublimator
23219f2662 Disable transaction submission tests under Travis. 2014-10-15 19:37:25 -04:00
Vinnie Falco
af78ed608e Call Stoppable::stopped in PeerFinder onStop. 2014-10-15 19:37:25 -04:00
Vinnie Falco
51dc59e019 Fix outgoing bytes calculation in HTTP server. 2014-10-15 19:37:25 -04:00
Tom Ritchford
afc102e90a New class RPC::Status enforces JSON-RPC 2.0 error format.
* Relevant issues:
  * RIPD-92
  * RIPD-97
  * RIPD-98
  * RIPD-439
2014-10-15 19:37:25 -04:00
David Schwartz
fc560179e0 SHAMap performance improvements (RIPD-434)
This reworks the way SHAMaps are stored, flushed, backed, and
traversed. Rather than storing the linkages in the SHAMap itself,
that information is now stored in the nodes. This makes
snapshotting much cheaper and also allows traverse work done on
behalf of one SHAMap to be used by other SHAMaps that share inner
nodes with that SHAMap.

When a SHAMap is modified, nodes are modified all the way up to the
root. This means that the modified nodes in a SHAMap can easily be
traversed for flushing. So they don't need to be separately tracked.

Summary
* Remove mTNByID
* Remove mDirtyNodes
* Much faster traverses
* Much faster snapshots
* New algorithm for flushing
* New visitNodes/visitLeaves
* Avoid I/O if a map is unbacked
2014-10-14 13:32:17 -04:00
sublimator
d26241de0e Remove Og from debug mode
Last time I used gdb, iterating over a directory's `Indexes`, each uint256 printed as `<optimized out>`.

Debug mode is for debugging ...
2014-10-14 12:57:41 -04:00
Howard Hinnant
00310f4f10 Silence clang warnings 2014-10-14 12:35:17 -04:00
Howard Hinnant
8caae219cf Gracefully cast from std::thread::hardware_concurrency 2014-10-14 12:35:17 -04:00
Howard Hinnant
2264ae9247 Guarantee C locale
*  Remove all calls to setlocale to ensure that the global
   locale is always C.

*  Also replace beast::SystemStats::getNumCpus() with
   std::thread::hardware_concurrency()
2014-10-14 12:35:17 -04:00
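A small sketch of the defensive use of std::thread::hardware_concurrency()
mentioned in the two entries above: it returns unsigned and may legitimately
return 0, so callers clamp and convert before using it as a thread count.

    #include <thread>

    int defaultThreadCount()
    {
        unsigned const n = std::thread::hardware_concurrency();
        return (n == 0) ? 1 : static_cast<int> (n);
    }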
Nicholas Dudfield
29225bbe75 Attempt to fix spurious travis failures 2014-10-14 12:35:17 -04:00
Vinnie Falco
4b5625fd59 Load PeerFinder database in Stoppable::onPrepare:
OverlayImpl::onStart calls into PeerFinder before PeerFinder::Manager::onStart,
causing tests to sometimes fail and the application to intermittently not start.
The order of calls to Stoppable::onStart is implementation defined and not
predictable.

This changes PeerFinder to load the database in Stoppable::onPrepare, before
threads are launched. In general, creation and initialization of resources that
are shared between classes should happen in onPrepare rather than onStart,
to solve this problem.
2014-10-10 19:38:52 -07:00
Vinnie Falco
7c0c2419f7 Refactor PeerFinder:
Previously, the PeerFinder manager was constructed with a Callback object
provided by the owner which was used to perform operations like connecting,
disconnecting, and sending messages. This made it difficult to change the
overlay code because a single call into the PeerFinder could cause both
OverlayImpl and PeerImp to be re-entered one or more times, sometimes while
holding a recursive mutex. This change eliminates the callback by changing
PeerFinder functions to return values indicating the action the caller should
take.

As a result of this change the PeerFinder no longer needs its own dedicated
thread. OverlayImpl is changed to call into PeerFinder on a timer to perform
periodic activities. Furthermore the Checker class used to perform connectivity
checks has been refactored. It no longer uses an abstract base class, in order
to not type-erase the handler passed to async_connect (ensuring compatibility
with coroutines). To allow unit tests that don't need a network, the Logic
class is now templated on the Checker type. Currently the Manager provides its
own io_service. However, this can easily be changed so that the io_service is
provided upon construction.

Summary
* Remove unused SiteFiles dependency injection
* Remove Callback and update signatures for public APIs
* Remove obsolete functions
* Move timer to overlay
* Steps toward a shared io_service
* Templated, simplified Checker
* Tidy up Checker declaration
2014-10-10 15:04:37 -07:00
Vinnie Falco
5f59282ba1 Clean up Overlay and PeerFinder sources:
* Tidy up identifiers and declarations
* Merge PeerFinder headers into one file
* Merge handout classes and functions into one file
2014-10-10 15:04:36 -07:00
Vinnie Falco
db03ce939c Add pending_handlers 2014-10-10 13:26:08 -07:00
Vinnie Falco
68bcbbb701 Add missing include in beast header 2014-10-10 13:26:08 -07:00
David Schwartz
8bdf7b3983 Remove unused file 2014-10-10 10:27:47 -07:00
Vinnie Falco
4ab427d315 Cleanup: Combine Section and BasicConfig, move to basics 2014-10-09 14:49:10 -07:00
Vinnie Falco
9a0a434dd8 Fix incorrect address in connectivity check report:
The remoteAddress is the address as seen on the socket, which for
incoming connections has a random port chosen by the remote implementation
that is different from the port number used to accept connections by the
remote listening socket. The checkedAddress is the remote address as seen
on the socket, combined with the port advertised in the TMEndpoints message.
This fixes the reporting and metadata associated with addresses tested
for connectivity.

The README has been updated to reflect that uptime is no longer part of
the metadata associated with IP addresses saved for bootstrapping.
2014-10-09 14:48:54 -07:00
Nik Bougalis
33d1dda954 Handle BIGNUM conversion failure 2014-10-06 11:24:42 -07:00
Howard Hinnant
8e9efb4ceb Remove unused transaction code. 2014-10-06 11:18:15 -07:00
Nik Bougalis
8835af11d5 Cleanups and surface reduction:
* Don't use friendship unless needed
* Trim down interfaces
* Make classes feel more like std containers
2014-10-06 11:18:15 -07:00
Nik Bougalis
cfb6b678f1 Remove HashMaps 2014-10-02 14:58:14 -07:00
miguelportilla
365500da98 Create orderbook integration test (RIPD-483) 2014-10-02 14:58:14 -07:00
Miguel Portilla
f14d75e798 Optimize account_lines and account_offers (RIPD-587)
Conflicts:
	src/ripple/app/ledger/Ledger.h
2014-10-02 14:58:14 -07:00
Tom Ritchford
0f71b4a378 Fix most compilation warnings for gcc, clang, release, debug. 2014-10-02 14:58:14 -07:00
JoelKatz
b651e0146d Fix some fee logic: (RIPD-614)
* fee_default sets cost in drops of reference transaction
* Offline signing uses fee_default
* Signing multiplier maximum works correctly
* Fix bugs in load fee track
* Remove dead code, add comments
2014-10-02 14:58:14 -07:00
Tom Ritchford
a0dbbb2d84 Update and sort ErrorCode descriptions 2014-10-02 14:57:31 -07:00
Torrie Fischer
a85fbf69e0 Update rocksdb 2014-10-02 14:57:31 -07:00
Torrie Fischer
92b8c7961b Squashed 'src/rocksdb2/' changes from 37c6740..25888ae
25888ae Merge pull request #329 from fyrz/master
89833e5 Fixed signed-unsigned comparison warning in db_test.cc
fcac705 Fixed compile warning on Mac caused by unused variables.
b3343fd resolution for java build problem introduced by 5ec53f3edf62bec1b690ce12fb21a6c52203f3c8
187b299 ForwardIterator: update prev_key_ only if prefix hasn't changed
5ec53f3 make compaction related options changeable
d122e7b Update INSTALL.md
986dad0 Merge pull request #324 from dalgaaf/wip-da-SCA-20140930
8ee75dc db/memtable.cc: remove unused variable merge_result
0fd8bbc db/db_impl.cc: reduce scope of prefix_initialized
676ff7b compaction_picker.cc: remove check for >=0 for unsigned
e55aea5 document_db.cc: fix assert
d517c83 in_table_factory.cc: use correct format specifier
b140375 ttl/ttl_test.cc: prefer prefix ++operator for non-primitive types
43c789c spatialdb/spatial_db.cc: use !empty() instead of 'size() > 0'
0de452e document_db.cc: pass const parameter by reference
4cc8643 util/ldb_cmd.cc: prefer prefix ++operator for non-primitive types
af8c2b2 util/signal_test.cc: suppress intentional null pointer deref
33580fa db/db_impl.cc: fix object handling, remove double lines
873f135 db_ttl_impl.h: pass func parameter by reference
8558457 ldb_cmd_execute_result.h: perform init in initialization list
063471b table/table_test.cc: pass func parameter by reference
93548ce table/cuckoo_table_reader.cc: pass func parameter by ref
b8b7117 db/version_set.cc: use !empty() instead of 'size() > 0'
8ce050b table/bloom_block.*: pass func parameter by reference
53910dd db_test.cc: pass parameter by reference
68ca534 corruption_test.cc: pass parameter by reference
7506198 cuckoo_table_db_test.cc: add flush after delete
1f96330 Print MB per second compaction throughput separately for reads and writes
ffe3d49 Add an instruction about SSE in INSTALL.md
ee1f3cc Package generation for Ubuntu and CentOS
f0f7955 Fixing comile errors on OS X
99fb613 remove 2 space linter
b2d64a4 Fix linters, second try
747523d Print per column family metrics in db_bench
56ebd40 Fix arc lint (should fix #238)
637f891 Merge pull request #321 from eonnen/master
827e31c Make test use a compatible type in the size checks.
fd5d80d CompactedDB: log using the correct info_log
2faf49d use GetContext to replace callback function pointer
983d2de Add AUTHORS file. Fix #203
abd70c5 Merge pull request #316 from fyrz/ReverseBytewiseComparator
2dc6f62 handle kDelete type in cuckoo builder
8b8011a Changed name of ReverseBytewiseComparator based on review comment
389edb6 universal compaction picker: use double for potential overflow
5340484 Built-in comparator(s) in RocksJava
d439451 delay initialization of cuckoo table iterator
94997ea reduce memory usage of cuckoo table builder
c627595 improve memory efficiency of cuckoo reader
581442d option to choose module when calculating CuckooTable hash
fbd2daf CompactedDBImpl::MultiGet() for better CuckooTable performance
3c68006 CompactedDBImpl
f7375f3 Fix double deletes
21ddcf6 Remove allow_thread_local
fb4a492 Merge pull request #311 from ankgup87/master
611e286 Merge branch 'master' of https://github.com/facebook/rocksdb
0103b44 Merge branch 'master' of ssh://github.com/ankgup87/rocksdb
1dfb7bb Add block based table config options
cdaf44f Enlarge log size cap when printing file summary
7cc1ed7 Merge pull request #309 from naveenatceg/staticbuild
ba6d660 Resolving merge conflict
51eeaf6 Addressing review comments
fd7d3fe Addressing review comments (adding a env variable to override temp directory)
cf7ace8 Addressing review comments
0a29ce5 re-enable BlockBasedTable::SetupForCompaction()
55af370 Remove TODO for checking index checksums
3d74f09 Fix compile
53b0039 Fix release compile
d0de413 WriteBatchWithIndex to allow different Comparators for different column families
57a32f1 change target_file_size_base to uint64_t
5e6aee4 dont create backup_input if compaction filter v2 is not used
49b5f94 Merge pull request #306 from Liuchang0812/fix_cast
787cb4d remove cast, replace %llu with % PRIu64
a7574d4 Update logging.cc
7e0dcb9 Update logging.cc
57fa3cc Merge pull request #304 from Liuchang0812/fix-check
cd44522 Merge pull request #305 from Liuchang0812/fix-logging
6a031b6 remove unused variable
4436f17 fixed #303: replace %ld with % PRId64
7a1bd05 Merge pull request #302 from ankgup87/master
423e52c Merge branch 'master' of https://github.com/facebook/rocksdb
bfeef94 Add rate limiter
32f2532 Print compression_size_percent as a signed int
976caca Skip AllocateTest if fallocate() is not supported in the file system
3b897cd Enable no-fbcode RocksDB build
f445947 RocksDB: Format uint64 using PRIu64 in db_impl.cc
e17bc65 Merge pull request #299 from ankgup87/master
b93797a Fix build
adae3ca [Java] Fix JNI link error caused by the removal of options.db_stats_log_interval
90b8c07 Fix unit tests errors
51af7c3 CuckooTable: add one option to allow identity function for the first hash function
0350435 Fixed a signed-unsigned comparison in spatial_db.cc -- issue #293
2fb1fea Fix syncronization issues
ff76895 Remove some unnecessary constructors
feadb9d fix cuckoo table builder test
3c232e1 Fix mac compile
54cada9 Run make format on PR #249
27b22f1 Merge pull request #249 from tdfischer/decompression-refactoring
fb6456b Replace naked calls to operator new and delete (Fixes #222)
5600c8f cuckoo table: return estimated size - 1
a062e1f SetOptions() for memtable related options
e4eca6a Options conversion function for convenience
a7c2094 Merge pull request #292 from saghmrossi/master
4d05234 Merge branch 'master' of github.com:saghmrossi/rocksdb
60a4aa1 Test use_mmap_reads
94e43a1 [Java] Fixed 32-bit overflowing issue when converting jlong to size_t
f9eaaa6 added include for inttypes.h to fix nonworking printf statements
f090575 Replaced "built on on earlier work" by "built on earlier work" in README.md
faad439 Fix #284
49aacd8 Fix make install
acb9348 [Java] Include WriteBatch into RocksDBSample.java, fix how DbBenchmark.java handles WriteBatch.
4a27a2f Don't sync manifest when disableDataSync = true
9b8480d Merge pull request #287 from yinqiwen/rate-limiter-crash-fix
28be16b fix rate limiter crash #286
04ce1b2 Fix #284
add22e3 standardize scripts to run RocksDB benchmarks
dee91c2 WriteThread
540a257 Fix WAL synced
24f034b Merge pull request #282 from Chilledheart/develop
49fe329 Fix build issue under macosx
ebb5c65 Add make install
0352a9f add_wrapped_bloom_test
9c0e66c Don't run background jobs (flush, compactions) when bg_error_ is set
a9639bd Fix valgrind test
d1f24dc Relax FlushSchedule test
3d9e6f7 Push model for flushing memtables
059e584 [unit test] CompactRange should fail if we don't have space
dd641b2 fix RocksDB java build
53404d9 add_qps_info_in cache bench
a52cecb Fix Mac compile
092f97e Fix comments and typos
6cc1286 Added a few statistics for BackupableDB
0a42295 Fix SimpleWriteTimeoutTest
06d9862 Always pass MergeContext as pointer, not reference
d343c3f Improve db recovery
6bb7e3e Merger test
88841bd Explicitly cast char to signed char in Hash()
5231146 MemTableOptions
1d284db Addressing review comments
55114e7 Some updates for SpatialDB
171d4ff remove TailingIterator reference in db_impl.h
9b0f7ff rename version_set options_ to db_options_ to avoid confusion
2d57828 Check stop level trigger-0 before slowdown level-0 trigger
659d2d5 move compaction_filter to immutable_options
048560a reduce references to cfd->options() in DBImpl
011241b DB::Flush() Do not wait for background threads when there is nothing in mem table
a2bb7c3 Push- instead of pull-model for managing Write stalls
0af157f Implement full filter for block based table.
9360cc6 Fix valgrind issue
02d5bff Merge pull request #277 from wankai/master
88a2f44 fix comments
7c16e39 Merge pull request #276 from wankai/master
8237738 replace hard-coded number with named variable
db8ca52 Merge pull request #273 from nbougalis/static-analysis
b7b031f Merge pull request #274 from wankai/master
4c2b1f0 Merge remote-tracking branch 'upstream/master'
a5d2863 typo improvement
9f8aa09 Don't leak data returned by opendir
d1cfb71 Remove unused member(s)
bfee319 sizeof(int*) where sizeof(int) was intended
d40c1f7 Add missing break statement
2e97c38 Avoid off-by-one error when using readlink
40ddc3d add cache bench
9f1c80b Drop column family from write thread
8de151b Add db_bench with lots of column families to regression tests
c9e419c rename options_ to db_options_ in DBImpl to avoid confusion
5cd0576 Fix compaction bug in Cuckoo Table Builder. Use kvs_.size() instead of num_entries in FileSize() method.
0fbb3fa fixed memory leak in unit test DBIteratorBoundTest
adcd253 fix asan check
4092b7a Merge pull request #272 from project-zerus/patch-1
bb6ae0f fix more compile warnings
6d31441 Merge pull request #271 from nbougalis/cleanups
0cd0ec4 Plug memory leak during index creation
4329d74 Fix swapped variable names to accurately reflect usage
45a5e3e Remove path with arena==nullptr from NewInternalIterator
5665e5e introduce ImmutableOptions
e0b99d4 created a new ReadOptions parameter 'iterate_upper_bound'
51ea889 Fix travis builds
a481626 Relax backupable rate limiting test
f7f973d Merge pull request #269 from huahang/patch-2
ef5b384 fix a few compile warnings
2fd3806 Merge pull request #263 from wankai/master
1785114 delete unused Comparator
1b1d961 update HISTORY.md
703c3ea comments about the BlockBasedTableOptions migration in Options
4b5ad88 Merge pull request #260 from wankai/master
19cc588 change to filter_block std::unique_ptr support RAII
9b976e3 Merge pull request #259 from wankai/master
5d25a46 Merge remote-tracking branch 'upstream/master'
9b58c73 call SanitizeDBOptionsByCFOptions() in the right place
a84234a Ignore missing column families
8ed70fc add assert to db Put in db_stress test
7f19bb9 Merge pull request #242 from tdfischer/perf-timer-destructors
8438a19 fix dropping column family bug
6614a48 Refactor PerfStepTimer to stop on destruct
076bd01 Fix compile
990df99 Fix ios compile
7dcadb1 Don't let flush preempt compaction in certain cases
dff2b1a typo improvement
985a31c Merge pull request #251 from nbougalis/master
f09329c Fix candidate file comparison when using path ids
7e9f28c limit max bytes that can be read/written per pread/write syscall
d20b8cf Improve Cuckoo Table Reader performance. Inlined hash function and number of buckets a power of two.
0f9c43e ForwardIterator: reset incomplete iterators on Seek()
722d80c reduce recordTick overhead in compaction loop
22a0a60 Merge pull request #250 from wankai/master
be25ee4 delete unused struct Options
0c26e76 Merge pull request #237 from tdfischer/tdfischer/faster-timeout-test
1d23b5c remove_internal_filter_policy
2a8faf7 Compact SpatialDB as we go, not at the end
7f71448 Implementing a cache friendly version of Cuckoo Hash
d977e55 Don't let other compactions run when manual compaction runs
d5bd6c7 Fix ios compile
6b46f78 Merge pull request #248 from wankai/master
528a11c Update block_builder.h
536e997 Remove assert in vector rep
4142a3e Adding a user comparator for comparing Uint64 slices.
1913ce2 more concurrent flushes in SpatialDB
808e809 Adjust SpatialDB column family options
0c39f54 Use Vector memtable when bulk loading SpatialDB
b6fd781 Don't do memtable lookup in db_impl_readonly if memtables are empty while opening db.
9dcb75b Add is-file-deletions-enabled property
1755581 improve OptimizeForPointLookup()
d9c0785 Fix assertion in PosixRandomAccessFile
bda6f33 fix valgrind error in c_test caused by BlockBasedTableOptions
0db6b02 Update timeout to 50ms instead of 3.
ff6ec0e Optimize SpatialDB
2386185 ReadOptions.total_order_seek to allow total order seek for block-based table when hash index is enabled
a98badf print table options
66f62e5 JNI changes corresponding to BlockBasedTableOptions migration
3844001 move block based table related options BlockBasedTableOptions
17b54ae Merge pull request #243 from andybons/patch-1
0508691 Add missing include to use std::unique_ptr
42ea795 Fix concurrency issue in CompactionPicker
bb530c0 Merge pull request #240 from ShaoYuZhang/master
f76eda7 Fix compilation issue on OSX
08be7f5 Implement Prepare method in CuckooTableReader
47b452c Fix the error of c_test.c
562b7a1 Add missing implementaiton of SanitizeDBOptions in simple_table_db_test.cc
63a2215 Improve Options sanitization and add MmapReadRequired() to TableFactory
e173bf9 Eliminate VersionSet memory leak
10720a5 Revert the unintended change that DestroyDB() doesn't clean up info logs.
01cbdd2 Optimize storage parameters for spatialDB
045575a Add CuckooHash table format to table_reader_bench
7c5173d test: db: fix test to have a smaller timeout for when it runs on faster hardware
6929b08 Remove BitStream* tests
50b790c Removing BitStream* functions
162b815 Adding Column Family support in db_bench.
28b5c76 WriteBatchWithIndex: a wrapper of WriteBatch, with a searchable index
5585e00 Update release note of 3.4
343e98a Reverting import change
ddb8039 RocksDB static build Make file changes to download and build the dependencies .Load the shared library when RocksDB is initialized
68eed8c Bump up version
36e759d Adding Cuckoo Table SST option to db_bench
a6fd14c Fix valgrind error in c_test
c703715 attempt to fix auto_roll_logger_test
c8ecfae Merge pull request #230 from cockroachdb/spencerkimball/send-user-keys-to-v2-filter
570ba5a Avoid retrying to read property block from a table when it does not exist.
625b9ef Merge pull request #234 from bbiao/master
59a2763 Fix typo huage => huge
f611935 Fix autovector iterator increment/decrement comments
58b0f9d Support purging logs from separate log directory
2da53b1 [Java] Add purgeOldBackups API
6c4c159 fix_sst_dump_for_old_sst_format
8dfe2fd fix compile error under Mac OS X
58c4946 Allow env_posix to lower background thread IO priority
6a2be31 fix_valgrind_error_caused_in_db_info_dummper
e91ebf1 print compaction_filter name in Options.Dump
5a5953b Add histogram for DB_SEEK
5e64240 log db path info before open
0c9dc9f Remove malloc from FormatFileNumber
bcefede Update HISTORY.md
4808177 Revert "Include candidate files under options.db_log_dir in FindObsoleteFiles()"
0138b8e Fixed compile errors (signed / unsigned comparison) in cuckoo_table_db_test on Mac
1562653 Fixed a signed-unsigned comparison error in db_test
218857b remove tailing_iter.h/cc
5d0074c set bytes_per_sync to 1MB if rate limiter is enabled
3fcf7b2 Pass parsed user key to prefix extractor in V2 compaction
2fa6434 Add scope guard
06a52bd Flush only one column family
9674c11 Integrating Cuckoo Hash SST Table format into RocksDB

git-subtree-dir: src/rocksdb2
git-subtree-split: 25888ae0068c9b8e3d9421ea8c78a7be339298d8
2014-10-02 10:47:26 -07:00
Torrie Fischer
225f8ac12f Merge commit '92b8c7961b433d12d9d77da5d61c26a920bbd370' into updated-rocksdb 2014-10-02 10:47:26 -07:00
Howard Hinnant
1161511207 Fix two Wunused-private-field warnings. 2014-10-01 08:47:56 -07:00
Nicholas Dudfield
ca8eda412e Make travis build and use debug variants for tests 2014-10-01 08:47:56 -07:00
Mark Travis
ec4ec48fb8 Add counters to track nodestore read and write activities. 2014-10-01 08:47:56 -07:00
Nik Bougalis
c0b69e8ef7 Remove the use of beast::String from rippled (RIPD-443) 2014-10-01 08:47:55 -07:00
Tom Ritchford
4241dbb600 Clean and harden Transaction.
* Replace boolean parameter with enumerated type.
* Get rid of std::ref.
* 80-column cleanups.
* Replace an std::bind with a lambda.
2014-10-01 08:47:55 -07:00
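As a minimal sketch of the boolean-parameter-to-enumeration change mentioned above (the names here are hypothetical, not the actual Transaction interface), the pattern looks like this:

```cpp
#include <iostream>

// Hypothetical names for illustration only. Replacing a bool parameter with
// an enumerated type makes call sites self-documenting and stops arguments
// from being swapped silently.
enum class Validate { no, yes };

void submit (int sequence, Validate validate)
{
    std::cout << "submit seq=" << sequence
              << " validate=" << (validate == Validate::yes) << '\n';
}

int main ()
{
    // submit (7, true);          // old style: what does 'true' mean here?
    submit (7, Validate::yes);    // new style: the intent is explicit
}
```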
Tom Ritchford
f54280aaad New DatabaseReader reads ledger numbers from database. 2014-10-01 08:47:55 -07:00
Tom Ritchford
6069400538 Fix compiler warnings under gcc. 2014-10-01 08:47:55 -07:00
Howard Hinnant
616be1d76c Miscellaneous cleanups:
* Limit HashPrefix construction and disallow assignment

* Give KnownFormats deleted copy members so that derived
  classes will give the right answers if queried with the
  std::is_copy_constructible/assignable traits.

* Replace SharedSingleton with a local static in
  LedgerFormats::getInstance() to be consistent with
  similar code in other places.  This also allows the
  LedgerFormats default constructor to be marked private
  so that the compiler enforces the design that
  LedgerFormats is a singleton type.

* Change return types of LedgerFormats::getInstance() and
  TxFormats::getInstance() from pointer to non-const to
  reference to const so as follow more established design
  guidelines for singletons.  This prevents pointers being
  mistaken for heap-allocated objects, and the const
  ensures the singleton isn't mutable.

* Change RippleAddress to inherit privately from
  CBase58Data instead of publicly.  This lets the compiler
  enforce that there are no unintended conversions from
  RippleAddress to CBase58Data.  This change allows us
  to remove a comment warning about unwanted conversions.
2014-10-01 08:47:54 -07:00
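A small sketch of the singleton shape this describes, using illustrative names rather than the real LedgerFormats interface:

```cpp
// Illustrative only. A local static replaces SharedSingleton, the private
// constructor lets the compiler enforce the singleton design, and callers
// receive a reference to const.
class Formats
{
    Formats () = default;          // private: no external construction

public:
    static Formats const& getInstance ()
    {
        static Formats instance;   // constructed once, on first use
        return instance;
    }
};

int main ()
{
    Formats const& a = Formats::getInstance ();
    Formats const& b = Formats::getInstance ();
    return (&a == &b) ? 0 : 1;     // always the same object
}
```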
Nik Bougalis
8e91ce67c5 Allow beast::lexicalCast to parse 'true' & 'false' into a bool 2014-10-01 08:47:54 -07:00
JoelKatz
c1ecd661c3 Fix broken assert in built/validated ledger mismatch handler 2014-10-01 08:47:54 -07:00
JoelKatz
b27e2aad07 Improve transaction security
* Check signatures of every transaction on every validator
* Remove obsolete code
* Check transaction status in submit/sign RPC handler
2014-10-01 08:47:54 -07:00
MarkusTeufelberger
5ce508e09d Change output range names of ledger_cleaner
The input parameters are called "min_ledger" and "max_ledger"; they are also called "minRange" and "maxRange" in the code, but "ledger_min" and "ledger_max" when printed. This is inconsistent and should be changed, as it might lead to confusion about how to call this module via RPC.
2014-10-01 08:47:53 -07:00
Nik Bougalis
3cfa5a41b1 Improve BuildInfo interface:
* Remove unnecessary beast::String dependency
* Explicitly cast to result type while packing a version
* Add unit tests for version formatting
2014-10-01 08:47:53 -07:00
Vinnie Falco
6c072f37ef Remove unused testoverlay module 2014-10-01 08:47:53 -07:00
Nik Bougalis
dbd993ed2b Use namespaces instead of static-only classes 2014-10-01 08:47:52 -07:00
Nik Bougalis
45b5c4ba7a Use deleted members to prevent copying in Beast (RIPD-268) 2014-10-01 08:47:52 -07:00
Nik Bougalis
7933e5d1f9 Use deleted members to prevent copying in rippled (RIPD-268) 2014-10-01 06:28:32 -07:00
Vinnie Falco
01e52e6f9f Use trusted validators median fee
Conflicts:
	src/ripple/app/misc/Validations.cpp
2014-10-01 06:28:32 -07:00
Vinnie Falco
40a955e192 Consume handshake data in HTTP/S server 2014-10-01 06:28:12 -07:00
Vinnie Falco
a8296f7301 Set version to 0.26.3-sp4 2014-09-30 18:04:59 -07:00
Vinnie Falco
590c3b876b Use trusted validators median fee 2014-09-30 18:03:53 -07:00
Vinnie Falco
6dfc805eaa Rewrite HTTP/S server to use coroutines:
* Fix bug with more than one complete request in a read buffer
* Use stackful coroutines for simplified control flow
* Door refactored to detect handshakes
* Remove dependency on MultiSocket
* Remove dependency on handshake detect logic framework
2014-09-30 13:29:32 -07:00
Vinnie Falco
5ce6068df5 Remove obsolete SharedArg 2014-09-29 07:18:51 -07:00
Nik Bougalis
bf9b8f4d1b Use secure RPC connections when configured 2014-09-28 04:39:49 -07:00
Vinnie Falco
d618581060 Config improvements:
* More fine-grained Section mutators
* Add remap for mapping legacy single sections to key value pairs
* Add output stream operators for BasicConfig and Section
* Allow section values to be overwritten from command line
* Update rpc key/value configs from command line
* Add RPC::Setup with defaults and remap legacy rpc sections
2014-09-28 04:39:49 -07:00
David Schwartz
2936bbfae8 Make path filtering smarter (RIPD-561)
* Break path liquidity checking into its own function
* Measure initial quality over minimum destination amount
* Test for available liquidity
2014-09-24 11:54:12 -07:00
Nik Bougalis
47b08bfc02 Add --quorum command line argument (RIPD-563) 2014-09-24 11:19:39 -07:00
Nik Bougalis
da4f77ca1f Return correct error message for invalid fields 2014-09-24 11:19:38 -07:00
David Schwartz
1c0a75d467 Distinguish Byzantine failure from tx bug (RIPD-523) 2014-09-24 11:19:38 -07:00
Nik Bougalis
659cf0c221 Decouple LedgerMaster from configuration 2014-09-24 11:19:38 -07:00
Howard Hinnant
430229fd84 Mark several Ledger member functions as const. 2014-09-24 11:19:37 -07:00
Howard Hinnant
81699a0971 Add +DEBUG to the raw version string for DEBUG builds.
This will show up in the rpc server_info command.
There is no impact on the version string for release builds.
2014-09-24 11:19:37 -07:00
MarkusTeufelberger
c54aff74b3 Build gcc.debug using -Og flag
Since gcc 4.8 is required anyways, it might be nice to use its features.

Intro to feature (second bullet point):
https://gcc.gnu.org/gcc-4.8/changes.html

-g (line 328) is still needed:
http://stackoverflow.com/questions/12970596/gcc-4-8-does-og-imply-g
2014-09-24 11:19:37 -07:00
Mark Travis
7f43ab9097 Improvements to SConstruct:
* Default target is release instead of debug (scons with no arguments).
* All targets now include debug symbols, including release.
Rationale: "out of the box" builds of rippled using plain "scons" or "scons -j4" will produce
a debug instead of a release build, which could underperform.
2014-09-24 11:19:36 -07:00
Miguel Portilla
d78f740250 Add account_offers paging (RIPD-344) 2014-09-19 16:38:10 -07:00
Miguel Portilla
cd1bd18a49 Add account_lines paging (RIPD-343) 2014-09-19 16:18:50 -07:00
sublimator
f81b084448 Set page sizes for ledger_data correctly (RIPD-249) 2014-09-19 16:16:49 -07:00
Vinnie Falco
02d9c77402 Set version to 0.26.3-sp2 2014-09-19 11:57:22 -07:00
Howard Hinnant
a0c903c68c Add needed #include <istream>
This is needed for the combination of boost 1.56 and libc++
2014-09-19 10:29:14 -07:00
JoelKatz
6aa325d3da On missing node in consensus, bow out (RIPD-567) 2014-09-18 15:12:45 -07:00
JoelKatz
041f874d4c Improve transaction security
* Check signatures of every transaction on every validator
* Remove obsolete code
* Check transaction status in submit/sign RPC handler
2014-09-18 14:25:09 -07:00
Nik Bougalis
526ecd6a81 Detect invalid inputs during STAmount conversion (RIPD-570):
* More robust validation of input
* XRP may not be specified using fractions
* Prevent creating native amounts larger than max possible value
* Add unit tests to verify correct parsing
2014-09-18 12:46:21 -07:00
Nik Bougalis
d373054fc4 Templatize and improve beast string-to-integer conversions:
* Properly handle numbers at the edge of precision
* Improve and expand unit test coverage
2014-09-18 12:46:16 -07:00
Vinnie Falco
b6d9f1d4b2 Add fee voting configuration and docs (RIPD-564) 2014-09-17 12:22:51 -07:00
Vinnie Falco
3fef916972 Move some constants to core/SystemParameters.h 2014-09-17 12:22:49 -07:00
Vinnie Falco
89a51e5b91 Split Section to its own header and add convenience accessors 2014-09-17 12:22:49 -07:00
miguelportilla
f87a6ccc7a Fix missing includes for boost 1.56.0 2014-09-16 15:22:00 -07:00
Nik Bougalis
f65cea66ef Remove unused macros, config variables, and file 2014-09-16 14:15:13 -07:00
Vinnie Falco
4239880acb Clean up and restructure sources 2014-09-16 14:15:12 -07:00
Vinnie Falco
1dcd06a1c1 Add missing includes and tidy up 2014-09-16 14:03:50 -07:00
Vinnie Falco
0f30191d10 Refactor STAmount:
* Remove unused functions
* Remove unused constructor
* Use delegating constructors
* Mark some observers deprecated
* Clean up declaration parameter names
* Add checked and unchecked constructors
* De-inline unnecessary inlined functions
* Reorder and regroup members into sections
* Move globals from the unity file to the .cpp
* Change some member functions to be free functions
* Put implementation in one .cpp and the test in another .cpp

Remove unused STAmount constructor and delegate two others No change in functionality.
2014-09-16 07:39:50 -07:00
Vinnie Falco
8fb9d5daaa Set version to 0.26.4-alpha 2014-09-15 18:20:47 -07:00
Nik Bougalis
ed3c942ff1 Inject JobQueue in NetworkOPs 2014-09-15 16:05:01 -07:00
Nik Bougalis
80436d4a8b Cleanups:
* Remove obsolete string formatting function
* Remove unused ADDRESS macro
* Re-scope functions
2014-09-15 16:04:48 -07:00
Howard Hinnant
cfc702c766 Fix beast::http::headers move members 2014-09-15 16:03:36 -07:00
Vinnie Falco
88ae15ea8e Add base64 conversions and tests 2014-09-15 14:52:42 -07:00
Vinnie Falco
6bafca7386 Use transform_iterator in http::headers 2014-09-15 14:52:42 -07:00
Vinnie Falco
379e842080 Add BasicConfig simplified config interface 2014-09-15 12:46:04 -07:00
Vinnie Falco
c41ce469d0 Cleanup:
* Move QUALITY_ONE to Quality.h
* Move functional files up one level
* Remove core.h
* Merge routines into Config.cpp
* Rename Section to IniFileSections
* Rename IniFileSections routines
2014-09-15 12:21:36 -07:00
Vinnie Falco
a1ca68473d Merge branch 'release' into develop 2014-09-14 15:39:09 -07:00
Nik Bougalis
3345d03433 Avoid conversions whenever possible during RippleState lookups 2014-09-13 11:06:38 -07:00
Nik Bougalis
81a426608a Make log partitions case-insensitive 2014-09-13 11:06:19 -07:00
Vinnie Falco
2ad6f0a65e Set version to 0.26.3-sp1 2014-09-12 15:22:54 -07:00
Vinnie Falco
ee8bd8ddae Fix handling of HTTP/S keep-alives (RIPD-556):
* Proper shutdown for ssl and non-ssl connections
* Report session id in history
* Report histogram of requests per session
* Change print name to 'http'
* Split logging into "HTTP" and "HTTP-RPC" partitions
* More logging and refinement of logging severities
* Log the request count when a session is destroyed

Conflicts:
	src/ripple/http/impl/Peer.cpp
	src/ripple/http/impl/Peer.h
	src/ripple/http/impl/ServerImpl.cpp
	src/ripple/module/app/main/Application.cpp
	src/ripple/module/app/main/RPCHTTPServer.cpp
2014-09-12 15:19:17 -07:00
Vinnie Falco
319ac14e7d Add is_short_read() 2014-09-12 15:19:04 -07:00
Vinnie Falco
0215a7400d Fix handling of HTTP/S keep-alives (RIPD-556):
* Proper shutdown for ssl and non-ssl connections
* Report session id in history
* Report histogram of requests per session
* Change print name to 'http'
* Split logging into "HTTP" and "HTTP-RPC" partitions
* More logging and refinement of logging severities
* Log the request count when a session is destroyed
2014-09-12 14:20:30 -07:00
Vinnie Falco
79db0ca7a6 Add is_short_read() 2014-09-12 14:10:33 -07:00
JoelKatz
1a7eafb699 Add ledger cleaner documentation (RIPD-555) 2014-09-09 22:33:42 -07:00
Nik Bougalis
81a06ea6cd Cleanups:
* Remove obsolete config variables
* Reduce coupling
* Use C++11 ownership containers
* Use auto when it makes sense
* Detect edge-case in unit tests
* Reduce the number of LedgerEntrySet public members
2014-09-09 22:33:42 -07:00
Nik Bougalis
de4be649ab Refactor string-to-integer conversions 2014-09-09 21:38:09 -07:00
sublimator
d90ec5f06c Normalize sort paths in Visual Studio project generator 2014-09-08 11:17:40 -07:00
Vinnie Falco
32065ced6e Add peer count to HTTP server properties 2014-09-08 11:17:39 -07:00
Scott Schurr
b5224a2227 Improve regularity of STObject and STArray (RIPD-448, RIPD-544):
* reduce duplicated code using templates
* replace BOOST_FOREACH with C++11 for loops
* remove most direct calls to new
* limit line length to 80 characters
* clearly identify virtual and overridden methods
* split STObject and STArray into their own files
* name files after the class they contain
2014-09-05 13:02:07 -07:00
Nik Bougalis
c55777738f Refactor LedgerEntrySet:
* Split adjustOwnerCount to increment and decrement paths.
* Move pathfinding-specific functions out of LedgerEntrySet
* Convert members to free functions
2014-09-05 11:50:17 -07:00
JoelKatz
c72dff5a24 Make more RocksDB tunables
Add support for universal compaction
2014-09-05 11:50:17 -07:00
JoelKatz
6b09e49c08 Increase the size of the tree cache:
This change will not significantly increase memory consumption
because most entries are pinned anyway.
2014-09-05 11:48:00 -07:00
Nik Bougalis
413218c4c4 Create the directory for the debug_logfile (RIPD-551) 2014-09-04 16:51:31 -07:00
Miguel Portilla
16c04b50ee Add date to tx command (RIPD-542) 2014-09-04 16:51:31 -07:00
Nik Bougalis
56c18f7768 Cleanups and fixes (RIPD-532):
* Properly handle sfWalletLocator field
* Plug a tiny memory leak
* Avoid naked pointers
* Remove unused variables
* Other small cleanups
2014-09-04 16:51:31 -07:00
Tom Ritchford
22ca13bc78 Cleanups to RPC code 2014-09-04 16:51:31 -07:00
Nicholas Dudfield
4c7fd18230 Ticket integration tests 2014-09-04 16:51:31 -07:00
Nik Bougalis
39730fc13e Ticket issuing (RIPD-368):
* New CreateTicket transactor to create tickets
* New CancelTicket transactor to cancel tickets
* Ledger entries for tickets & associated functions
* First draft of M-of-N documentation
2014-09-04 16:11:44 -07:00
Nik Bougalis
889c0a0d0f Transactor refactor:
* Allocate transactors on the stack instead of the heap.
* Remove header files and reduce transactor public interface.
2014-09-04 16:11:44 -07:00
Nik Bougalis
624a803955 Handle whitespace separating an 'ip port' correctly (RIPD-552) 2014-09-04 12:26:27 -07:00
Vinnie Falco
9f5c21f80e Set version to 0.26.3 2014-09-03 16:15:07 -07:00
Nik Bougalis
a3fe089367 Fix missing return value error check 2014-09-02 08:45:19 -07:00
Nik Bougalis
85d5cd3118 Fix missing return value error check 2014-09-01 21:11:31 -07:00
Vinnie Falco
61006e626d Also report mismatched built ledger 2014-08-28 18:03:50 -07:00
Nik Bougalis
15aad1cb24 Optimize pathfinding operations (RIPD-537):
* Calculate and cache Account hashes without holding locks.
* Fast hash-based path element comparison.
* Use emplace instead of find/insert
2014-08-28 15:57:29 -07:00
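For illustration only (this is not the pathfinder code), the find/insert-to-emplace change in the last bullet looks roughly like this:

```cpp
#include <map>
#include <string>

// Hypothetical cache used purely to show the pattern.
std::map<std::string, int> cache;

void record (std::string const& key, int value)
{
    // Old pattern: two lookups.
    // if (cache.find (key) == cache.end ())
    //     cache.insert (std::make_pair (key, value));

    // New pattern: one lookup; inserts only if the key is not already present.
    cache.emplace (key, value);
}

int main ()
{
    record ("alice", 1);
    record ("alice", 2);   // leaves the existing entry untouched
}
```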
Tom Ritchford
95c1c5f54e Stream generated JSON. 2014-08-28 12:38:21 -07:00
Vinnie Falco
c65fb91878 Fix special members for http classes 2014-08-28 12:38:03 -07:00
Vinnie Falco
d5a7e1331e Set version to 0.26.3-rc3 2014-08-27 15:58:41 -07:00
Vinnie Falco
04bcd93ba3 HTTP(S)-RPC server improvements (RIPD-489, RIPD-533):
* Correct handling of Keep-Alive in socket handlers
* Report session history in print command
2014-08-27 18:06:30 -04:00
Vinnie Falco
f97ef7039a HTTP message and parser improvements:
* streambuf wrapper supports rvalue move
* message class holds a complete HTTP message
* body class holds the HTTP content body
* headers class holds RFC-compliant HTTP headers
* basic_parser provides class interface to joyent's http-parser
* parser class parses into a message object
* Remove unused http get client free function
* unit test for parsing malformed messages
2014-08-27 18:06:30 -04:00
Tom Ritchford
9160b46c1e Bug fixes and new features for LedgerTool:
* Fix RIPD-509, RIPD-514, RIPD-519, RIPD-525, RIPD-527, RIPD-529,
  RIPD-530 and RIPD-531.
* Protect people from ledger-spew and remove cruft.
* Better error messages and handling.
* Cache command lists or clears ledger cache.
* Better ledger summaries.
* Offline mode.
2014-08-27 18:05:44 -04:00
Edward Hennis
aa4b116498 Wrap RippleCalc into a single function (RIPD-500):
* Change public members of Input and Output to remove trailing _.
* Remove Input constructor and separate flags in RippleCalc to reduce
  duplication and confusion.
* Make calculation result private; add getter.
* Narrow scope of some of the results of calls to rippleCalculate.
2014-08-27 18:05:30 -04:00
Edward Hennis
612bb71165 Add enable_if_lvalue 2014-08-27 17:10:24 -04:00
Torrie Fischer
5c67f99ef9 Remove old rocksdb/ 2014-08-27 12:37:13 -07:00
Torrie Fischer
101a4808a0 Update includes and scons 2014-08-27 12:37:05 -07:00
Torrie Fischer
1d38671f5e Squashed 'src/rocksdb2/' content from commit 37c6740
git-subtree-dir: src/rocksdb2
git-subtree-split: 37c6740c383bb9a6ee2747b04f08bc77fcfa10c5
2014-08-27 12:36:50 -07:00
Torrie Fischer
7f25d88f02 Merge commit '1d38671f5edc2322bc58417816674cc629ae7a70' as 'src/rocksdb2' 2014-08-27 12:36:50 -07:00
Vinnie Falco
be830d3dad Set version to 0.26.3-rc2 2014-08-26 10:00:03 -07:00
Nik Bougalis
5bc949d70f Fix a unit test warning 2014-08-26 10:00:03 -07:00
Vinnie Falco
43817bd722 Merge remote-tracking branch 'upstream/release' into tmp 2014-08-26 09:46:29 -07:00
JoelKatz
61623d6d75 Improve parallelization of getRippleLines 2014-08-26 09:28:10 -07:00
David Schwartz
9aad60f56d Make sure we update mTNByID when we replace the root 2014-08-26 09:28:09 -07:00
Vinnie Falco
e7cf3e8084 Add ledger.history.mismatch insight statistic 2014-08-26 09:28:07 -07:00
Tom Ritchford
e024e7c2ec Compile git tags now include name and branch. (RIPD-493) 2014-08-22 18:10:17 -04:00
Edward Hennis
6fc136ae9a Update npm tests & config to pass in Windows. (RIPD-209)
* Extend timeout for WebSocket test.
* Windows networking doesn't like connecting to 0.0.0.0. Use 127.0.0.1
* Additional command line options in config. Can potentially be used to run rippled in a debugger.
2014-08-22 18:10:17 -04:00
Edward Hennis
2b69ded1ea Convert rvalue to an lvalue. (RIPD-494)
* The rvalue gets destructed as soon as "rc" is constructed.
2014-08-22 18:10:17 -04:00
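A hypothetical sketch of the lifetime problem this fixes (the helper and types are illustrative, not the rippled code in question):

```cpp
#include <string>
#include <vector>

// Returns by value, so the result is an rvalue (a temporary) at the call site.
std::vector<std::string> makeConfig ()
{
    return {"first", "second"};
}

int main ()
{
    // Broken: the temporary vector is destructed at the end of the full
    // expression, i.e. as soon as 'rc' is constructed, so 'rc' dangles.
    // std::string const& rc = makeConfig ().front ();

    // Fixed: keep an lvalue alive for as long as the reference is used.
    std::vector<std::string> config = makeConfig ();
    std::string const& rc = config.front ();
    return rc.empty () ? 1 : 0;
}
```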
Tom Ritchford
8dd799aa6f New command line LedgerTool. (RIPD-243)
* Retrieve and process summary or full ledgers.
* Search using arbitrary criteria (any Python function).
* Search using arbitrary formats (any Python function).
* Caches ledgers as .gz files to avoid repeated server requests.
* Handles ledger numbers, ranges, and special names like validated or closed.
2014-08-22 18:10:11 -04:00
Vinnie Falco
7230ef41ee Fix warnings and compile errors 2014-08-20 17:44:00 -07:00
Edward Hennis
a86f0a743c Clean up some docs
* Consistent line endings in rippled-example.cfg
* Rewrite CodingStyle.md. Get down to top priorities
2014-08-20 16:20:15 -07:00
Josh Juran
af75b55ef7 src/README.md: s/addded/added/ 2014-08-20 16:19:28 -07:00
Vinnie Falco
9ecb37dd4f Add validators aged container test 2014-08-20 16:09:52 -07:00
Vinnie Falco
2e3784a914 Tidy up sources 2014-08-20 16:09:51 -07:00
Scott Schurr
019c1af435 Use aged containers in Validators module (RIPD-349) 2014-08-20 16:09:51 -07:00
Vinnie Falco
5322955f2b Fix exception safety in aged containers 2014-08-20 16:09:50 -07:00
Howard Hinnant
a8ea4ce283 Fix move constructor of aged_unordered_container (RIPD-490) 2014-08-20 16:08:59 -07:00
Mark Travis
c12862f60d Enable heap profiling with jemalloc:
The jemalloc library (which must be downloaded and installed separately)
is required to perform heap profiling. Instructions on how to enable heap
profiling with rippled are available in doc/HeapProfiling.md
2014-08-13 20:36:38 -07:00
Tom Ritchford
e889183fc5 JSON cleanups:
* Fix Json headers to include what they use.
* Get rid of ripple/json/api directory.
2014-08-13 20:36:36 -07:00
Jeff Trull
7be695c6bd Handle changes to boost::optional in newly released boost 1.56:
To improve compatibility with the proposed std::optional a number
of changes were made, one of which is the removal of the implicit
conversion to bool.  As a result, returning boost::optional as a
bool value now fails.  Explicit conversion to bool used for clarity.
2014-08-13 19:00:17 -07:00
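A minimal sketch of the change, assuming a hypothetical helper rather than any actual rippled function:

```cpp
#include <boost/optional.hpp>
#include <iostream>

boost::optional<int> parsePort (bool ok)
{
    if (ok)
        return 51235;
    return boost::none;
}

bool havePort (bool ok)
{
    // return parsePort (ok);                   // compiled before boost 1.56
    return static_cast<bool> (parsePort (ok));  // now requires an explicit conversion
}

int main ()
{
    std::cout << havePort (true) << '\n';
}
```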
Nik Bougalis
956901ae02 Properly handle edge-cases when parsing JSON integers (RIPD-470):
* Properly handle both unsigned and signed integers
* Return parsing error for overlong JSON numbers
* Implement unit test checking the edge cases that are of interest
2014-08-13 18:46:32 -07:00
Nik Bougalis
d562c5b2d5 Account for high-ascii (RIPD-464) 2014-08-13 18:46:32 -07:00
Nik Bougalis
d7b054c3f6 When compiling debug builds with GCC use _FORTIFY_SOURCE=2:
This gcc/glibc feature adds some (supposedly) lightweight checks which can
help detect errors such as buffer overflows.
2014-08-13 18:33:30 -07:00
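For illustration, a small program showing the kind of error this catches; note that glibc only applies the fortified checks when optimization is enabled (e.g. `g++ -O1 -D_FORTIFY_SOURCE=2 demo.cpp`):

```cpp
#include <cstring>

int main ()
{
    char small[8];
    char const* input = "definitely longer than eight bytes";

    // With _FORTIFY_SOURCE=2, glibc substitutes a checked strcpy that aborts
    // at run time ("*** buffer overflow detected ***") instead of silently
    // corrupting memory.
    std::strcpy (small, input);
    return small[0];
}
```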
Tom Ritchford
901ccad0cf Clarify unfunded offer deletion strategy. 2014-08-13 18:33:03 -07:00
Vinnie Falco
b9454e0f0c Set version to 0.26.2 2014-08-12 12:19:13 -07:00
Vinnie Falco
26181907fc Merge branch 'release-next' into release
Conflicts:
	src/ripple/module/app/paths/cursor/ForwardLiquidity.cpp
	src/ripple/overlay/tests/peer_info.test.cpp
	src/ripple/unity/http.h
2014-08-12 12:19:04 -07:00
Miguel Portilla
8368798ad2 Add owner_funds to subscription streams (RIPD-377) 2014-08-12 11:56:07 -07:00
Miguel Portilla
ed597e5e99 Add owner_funds to subscription streams (RIPD-377) 2014-08-12 11:54:49 -07:00
Miguel Portilla
2c88c15f7f Fix unhandled exception in async HTTP server (RIPD-475) 2014-08-12 11:54:02 -07:00
Miguel Portilla
af7cd3cc04 Fix unhandled exception in async HTTP server (RIPD-475) 2014-08-12 11:46:45 -07:00
Howard Hinnant
9552551f9a Refactor SField (RIPD-431)
* Restrict access to SField constructors.
* Make all SField access const.
* Hide and simplify databases used to hold SField constants.
* Separate the two concerns of representing a field,
  and maintaining a database of fields.
2014-08-08 19:16:51 -07:00
Edward Hennis
1c73a0f649 Remove unused code:
* StringConcat was only being referenced by unit test.
* `runall.sh` is no longer needed. Use `npm test` instead.
2014-08-08 19:16:42 -07:00
Tom Ritchford
e3698b2a07 Fix Pathfinder::getPathsOut to use Issue 2014-08-08 19:16:34 -07:00
Tom Ritchford
c841f8b360 Add the git tag to the compile (RIPD-238) 2014-08-08 19:16:33 -07:00
wltsmrz
50f9b68d61 Bump ledger_wait timeout for Travis 2014-08-08 14:57:39 -07:00
Nik Bougalis
27b48bc16e Refactor beast::SemanticVersion (RIPD-199) 2014-08-08 14:57:39 -07:00
Edward Hennis
bccdbaed2b Update Doxyfile (RIPD-175):
* Clean up the include path to reflect updated code structure.
* Update PROJECT_BRIEF to more accurate title.
* Fix location of logo graphic.
* Hide all undocumented classes.
2014-08-08 14:57:39 -07:00
Nik Bougalis
398095a667 Cleanups and performance optimizations (RIPD-450):
* Remove AccountItems and AccountItem
* Restructure RippleLineCache to not require shared_ptr
* Avoid expensive copies of base_uint<160> in RippleState
2014-08-08 14:57:39 -07:00
Nik Bougalis
80095824b9 Remove obsolete nickname support 2014-08-08 14:57:39 -07:00
Nik Bougalis
d91c1f96cc Detect node store batch write failures (RIPD-270):
* Raise open file limit from the soft max to the hard max.
2014-08-08 14:57:39 -07:00
Howard Hinnant
1c005a0292 Fix macro setting for rocksdb unity file 2014-08-08 14:50:15 -07:00
miguelportilla
c7ced496ac Update rocksdb unity file (RIPD-352) 2014-08-08 15:42:41 -04:00
Vinnie Falco
d4ff18834c Merge commit 'f86d9fd626df1cee55bce4c577d06bb064dc827b' as 'src/rocksdb' 2014-08-08 11:57:41 -07:00
Vinnie Falco
f86d9fd626 Squashed 'src/rocksdb/' content from commit 224932d
git-subtree-dir: src/rocksdb
git-subtree-split: 224932d4d0
2014-08-08 11:57:41 -07:00
Vinnie Falco
854604f724 Remove rocksdb in preparation for subtree add 2014-08-08 11:57:29 -07:00
Vinnie Falco
cfd3642cb1 Set version to 0.26.2-alpha-4 2014-08-07 17:13:55 -07:00
Tom Ritchford
f493590604 Logic fix for multiquality issues. 2014-08-07 17:10:38 -07:00
Vinnie Falco
8c084a3de8 Set version to 0.26.2-alpha-3 2014-08-07 10:13:22 -07:00
Tom Ritchford
985aa803a4 Fix error with multi-quality paths. 2014-08-07 10:12:09 -07:00
Vinnie Falco
9e319d7bd6 Set version to 0.26.2-alpha-2 2014-08-06 10:16:13 -07:00
David Schwartz
a122e176d7 Pathfinding fixes:
* Don't consider global freeze if not enforcing
* Log if a covering path fails to cover
2014-08-06 10:16:12 -07:00
Tom Ritchford
88a6f2931e Fix local variable unfundedOffers that was shadowing a class variable. 2014-08-06 09:26:22 -07:00
Vinnie Falco
54f3a83e25 Fix missing return values in headers_t 2014-08-05 16:43:13 -07:00
Vinnie Falco
0955c0d8d3 Set version to 0.26.2-alpha-1 2014-08-05 15:40:34 -07:00
JoelKatz
6bb5be5216 Avoid a mutex 99+% of the time in SField::getField 2014-08-05 15:36:12 -07:00
JoelKatz
c9cd7e4be0 Rewrite STObject::setType for improved performance 2014-08-05 15:36:05 -07:00
Miguel Portilla
ce2cecf046 Add owner_funds to client subscription data (RIPD-377)
Conflicts:
	src/ripple/module/app/ledger/AcceptedLedger.cpp
2014-08-05 15:04:50 -07:00
Vinnie Falco
6e934ee6a1 HTTP handshake in peer protocol (RIPD-351):
* New I/O paths for client and server role
* New handshake_analyzer detects the peer protocol
* New basic_message class for parsing and storing HTTP messages
* Conditional compilation for selective feature enabling.
* Server supports both current handshake and HTTP handshake
2014-08-05 13:17:02 -07:00
Vinnie Falco
723d7d1263 HTTP support improvements:
* RFC2616 compliance
* Case insensitive equality, inequality operators for strings
* Improvements to http::parser
* Tidy up HTTP method enumeration
2014-08-05 13:17:01 -07:00
Scott Schurr
298572893e Improvements to aged_containers (RIPD-363)
- Added unit tests for element erase
- Added unit tests for range erase
- Added unit tests for touch
- Added unit tests for iterators and reverse_iterators
- Un-inlined operator== for unordered containers
- Fixed minor problems with ordered_container erase()
- Made ordered_container...
  - erase (reverse_iterator pos) not compile
  - erase (reverse_iterator first, reverse_iterator last) not compile
  - touch (reverse iterator pos) not compile
- Verified that ordered container...
  - insert() already rejects reverse_iterator
  - emplace_hint() already rejects reverse_iterator
- Made set/multiset iterators const

Regarding the set/multiset iterators, see section 1.5 of
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2913.pdf
as pointed out by Vinnie.
2014-08-04 15:25:30 -07:00
Mark Travis
405f6f7368 Make NetworkOPs::isFull() thread-safe 2014-08-04 11:19:07 -07:00
Tom Ritchford
648ccc7c17 Replace const Type& with Type const& for common types.
* std::string
* RippleAccount
* Account
* Currency
* uint256
* STAmount
* Json::Value
2014-08-04 11:18:44 -07:00
Mark Travis
f5afe0587f Fix filter_policy object leak in NodeStore backends 2014-08-04 11:16:44 -07:00
Tom Ritchford
91a227a475 Correct Pathfinder::getPaths out to handle order books (RIPD-427) 2014-08-02 10:38:57 -07:00
Tom Ritchford
4b905fe9ff Clean OrderBookDB::getBooksBy methods. 2014-08-02 10:38:57 -07:00
Howard Hinnant
0f409b7bec ASCII clean 2014-08-02 10:38:56 -07:00
Vinnie Falco
295c8de858 Detect inconsistency in PeerFinder self-connects (RIPD-411) 2014-07-31 16:10:54 -07:00
Nik Bougalis
e5252f90af Remove LedgerBase and decongest Ledger locks 2014-07-31 16:05:08 -07:00
Miguel Portilla
c2276155bf Add pubkey_node and hostid to server stream messages (RIPD-407) 2014-07-31 16:05:08 -07:00
Miguel Portilla
dbe49bcd87 Remove TRUST_NETWORK directive (RIPD-331) 2014-07-31 16:04:59 -07:00
David Schwartz
7b936de32c Freeze enforcing: (RIPD-399)
* Set enforce date: September 15, 2014
* Enforce in stand-alone mode
* Enforce at source
* Enforce at intermediary nodes
* Enforce global freeze in get paths out
* Enforce global freeze in create offer
* Don't consider frozen links a path out
* Handle in getBookPage
* Enforce in new offer transactors
2014-07-30 23:28:48 -07:00
evhub
9eb34f542c More fixes to VSProject sorting algorithm 2014-07-30 08:56:54 -07:00
Tom Ritchford
194304e544 Refactor RippleCalc:
* Rationalize method and filenames, move to subdirectory.
* Use Issue in Node.
* Restrict access to PathState variables.
* Line length and readability cleanups.
* New PathCursor stores path calculation data during rippleCalc.
* Extract methods PathCursor::node(), PathCursor::previousNode()
  and RippleCalc::addPath
2014-07-30 08:29:29 -07:00
Howard Hinnant
c59fc332d5 Make CBase58Data/RippleAddress movable (RIPD-428):
This significantly increases the performance in returning
these types from factory functions.
2014-07-30 07:25:24 -07:00
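An illustrative sketch, with a hypothetical type rather than the real RippleAddress, of why move members make factory-style returns cheap:

```cpp
#include <string>
#include <utility>

class Address
{
    std::string encoded_;

public:
    explicit Address (std::string encoded)
        : encoded_ (std::move (encoded))
    {
    }

    Address (Address&&) = default;              // movable: cheap to return by value
    Address& operator= (Address&&) = default;

    std::string const& str () const { return encoded_; }
};

Address makeAddress ()
{
    // Placeholder string; real addresses are base58-encoded.
    return Address ("rExampleAddressForIllustration");
}

int main ()
{
    Address const a = makeAddress ();   // moved (or elided), not deep-copied
    return a.str ().empty () ? 1 : 0;
}
```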
Nik Bougalis
b43832fe57 Use std::atomic 2014-07-29 21:50:58 -04:00
Howard Hinnant
c24a497a23 Further documentation improvements to Ledger Consensus. 2014-07-29 21:41:19 -04:00
Tom Ritchford
4096fcd1bf Reduce scope of RPC locks and general cleanup (RIPD-458) 2014-07-29 16:24:44 -04:00
Vinnie Falco
9a0e806f78 Set version to 0.26.1 2014-07-28 12:05:01 -07:00
Vinnie Falco
20c9632996 Enable asynchronous handling of HTTP-RPC (RIPD-390)
* Activate async code path
* Tidy up HTTP server code
* Use shared_ptr in HTTP server
* Remove check for unspecified IP
* Remove hairtrigger
* Fix missing HTTP authorization check
* Fix multisocket flags in RPC-HTTP server
* Fix authorization failure when no credentials required
* Addresses RIPD-159, RIPD-161, RIPD-390
2014-07-28 12:05:01 -07:00
Vinnie Falco
65a628ca88 Add HTTPHeaders::build_map 2014-07-28 12:05:00 -07:00
Vinnie Falco
d82dbba096 Make bind_handler variadic ctor explicit 2014-07-28 12:05:00 -07:00
Howard Hinnant
58547f6997 Tidy up hardened containers (RIPD-380):
* Rename hardened containers for clarity
* Fixes https://ripplelabs.atlassian.net/browse/RIPD-380
2014-07-28 09:06:35 -07:00
Howard Hinnant
e6f4eedb1e Tidy up hardened_hash:
* Added siphash as a HashAlgorithm
* Refactored class responsibilities
2014-07-28 09:06:22 -07:00
Vinnie Falco
c5b963141f Fix intrinsic calls in static_initializer 2014-07-28 09:04:41 -07:00
Nik Bougalis
5df40bd746 Fix auth handling during OfferCreate (RIPD-414) 2014-07-28 08:57:22 -07:00
Howard Hinnant
403f15dc48 Documentation for Ledger Consensus implementation (RIPD-405) 2014-07-28 08:56:23 -07:00
Nik Bougalis
704d7451a0 Fix auth handling during OfferCreate (RIPD-414) 2014-07-25 11:42:42 -07:00
Vinnie Falco
fa11071443 Enable asynchronous handling of HTTP-RPC (RIPD-390)
* Activate async code path
* Tidy up HTTP server code
* Use shared_ptr in HTTP server
* Remove check for unspecified IP
* Remove hairtrigger
* Fix missing HTTP authorization check
* Fix multisocket flags in RPC-HTTP server
* Fix authorization failure when no credentials required
* Addresses RIPD-159, RIPD-161, RIPD-390
2014-07-24 20:22:55 -07:00
Vinnie Falco
87351c8a0c Add HTTPHeaders::build_map 2014-07-24 20:18:51 -07:00
Vinnie Falco
2f5fb1e68e Make bind_handler variadic ctor explicit 2014-07-24 20:18:51 -07:00
Tom Ritchford
96e1ec6d31 Fix build warnings and .gitignore.
* Comment out unused local function for both clang and g++.
* Get rid of numerous Boost warnings for clang.
* Remove some unused local variables.
* Put TAGS into the .gitignore.
2014-07-24 20:18:51 -07:00
Nik Bougalis
ac3cf05f1a Properly order warning suppression flags during compile 2014-07-24 20:18:51 -07:00
Tom Ritchford
6335e34395 Simplify locking and move a typedef.
* Make DatabaseCon's lock private and expose a scoped lock_guard.
* Get rid of DeprecatedRecursiveMutex and DeprecatedScopedLock entirely.
* Move CancelCallback to Job where it logically belongs.
2014-07-23 19:38:52 -07:00
Scott Schurr
02c2029ac1 Add more documentation of ledger acquisition (RIPD-373)
Capturing information from a seminar on the topic into the source tree.
2014-07-23 19:29:13 -07:00
David Schwartz
6914aa3e27 Check for payment increments that make no progress. (RIPD-374) 2014-07-23 19:27:55 -07:00
David Schwartz
f4fcb1cc9a Remove some dead SHAMapMissingNode code 2014-07-23 19:27:50 -07:00
David Schwartz
b6eec21ec0 Pathfinder cleanups, more efficient 'd' handling (RIPD-156) 2014-07-23 19:27:46 -07:00
David Schwartz
0ce3aeb189 Stop finding paths when enough are found (RIPD-156) 2014-07-23 19:27:42 -07:00
Nicholas Dudfield
713c8efcbe Fix intermittently failing send test 2014-07-23 02:27:19 -07:00
3465 changed files with 353548 additions and 259558 deletions

.gitignore

@@ -19,6 +19,7 @@
*.o
build
tags
TAGS
bin/rippled
Debug/*.*
Release/*.*
@@ -30,6 +31,7 @@ Release/*.*
tmp
# Ignore database directory.
db/
db/*.db
db/*.db-*
@@ -74,4 +76,7 @@ My Amplifier XE Results - RippleD
/out.txt
# Build Log
rippled-build.log
rippled-build.log
# Profiling data
gmon.out


@@ -6,20 +6,10 @@ before_install:
- sudo apt-get update -qq
- sudo apt-get install -qq python-software-properties
- sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
- sudo add-apt-repository -y ppa:boost-latest/ppa
- sudo add-apt-repository -y ppa:afrank/boost
- sudo apt-get update -qq
- sudo apt-get install -qq g++-4.8
- sudo apt-get install -qq libboost1.55-all-dev
# We want debug symbols for boost as we install gdb later
- sudo apt-get install -qq libboost1.55-dbg
- | # Setup the BOOST_ROOT
export BOOST_ROOT=$HOME/boost_root
mkdir -p $BOOST_ROOT/stage
ln -s /usr/lib/x86_64-linux-gnu $BOOST_ROOT/stage/lib
ln -s /usr/include/boost $BOOST_ROOT/boost
- | # Try to patch boost
sudo patch /usr/include/boost/bimap/detail/debug/static_error.hpp Builds/travis/static_error.boost.patch
sudo patch /usr/include/boost/config/compiler/clang.hpp Builds/travis/clang.boost.patch
- sudo apt-get install -qq libboost1.57-all-dev
- sudo apt-get install -qq mlocate
- sudo updatedb
- sudo locate libboost | grep /lib | grep -e ".a$"
@@ -35,23 +25,38 @@ before_install:
# What versions are we ACTUALLY running?
- g++ -v
- clang -v
# Avoid `spurious errors` caused by ~/.npm permission issues
# Does it already exist? Who owns? What permissions?
- ls -lah ~/.npm || mkdir ~/.npm
# Make sure we own it
- sudo chown -R $USER ~/.npm
script:
# Set so any failing command will abort the build
- set -e
# If only we could do -j12 ;)
- scons
# Make sure vcxproj is up to date
- scons vcxproj
- git diff --exit-code
# $CC will be either `clang` or `gcc` (If only we could do -j12 ;)
- scons $CC.debug
# We can be sure we're using the build/$CC.debug variant (-f so never err)
- rm -f build/rippled
- export RIPPLED_PATH="$PWD/build/$CC.debug/rippled"
# See what we've actually built
- ldd ./build/rippled
- ldd $RIPPLED_PATH
# Run unittests (under gdb)
- | # create gdb script
echo "set env MALLOC_CHECK_=3" > script.gdb
echo "run" >> script.gdb
echo "backtrace full" >> script.gdb
# gdb --help
- cat script.gdb | gdb --ex 'set print thread-events off' --return-child-result --args ./build/rippled --unittest
# Run integration tests
- cat script.gdb | gdb --ex 'set print thread-events off' --return-child-result --args $RIPPLED_PATH --unittest
- npm install
# Use build/(gcc|clang).debug/rippled
- |
echo "exports.default_server_config = {\"rippled_path\" : \"$RIPPLED_PATH\"};" > test/config.js
# Run integration tests
- npm test
notifications:
email:


@@ -0,0 +1,23 @@
FROM ubuntu
MAINTAINER Torrie Fischer <torrie@ripple.com>
RUN apt-get update -qq &&\
apt-get install -qq software-properties-common &&\
apt-add-repository -y ppa:ubuntu-toolchain-r/test &&\
apt-add-repository -y ppa:afrank/boost &&\
apt-get update -qq
RUN apt-get purge -qq libboost1.48-dev &&\
apt-get install -qq libprotobuf8 libboost1.57-all-dev
RUN mkdir -p /srv/rippled/data
VOLUME /srv/rippled/data/
ENTRYPOINT ["/srv/rippled/bin/rippled"]
CMD ["--conf", "/srv/rippled/data/rippled.cfg"]
EXPOSE 51235/udp
EXPOSE 5005/tcp
ADD ./rippled.cfg /srv/rippled/data/rippled.cfg
ADD ./rippled /srv/rippled/bin/

Builds/Test.py

@@ -0,0 +1,180 @@
#!/usr/bin/env python
# This file is part of rippled: https://github.com/ripple/rippled
# Copyright (c) 2012 - 2015 Ripple Labs Inc.
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
"""
Invocation:
./Builds/Test.py - builds and tests all configurations
#
# The build must succeed without shell aliases for this to work.
#
# Common problems:
# 1) Boost not found. Solution: export BOOST_ROOT=[path to boost folder]
# 2) OpenSSL not found. Solution: export OPENSSL_ROOT=[path to OpenSSL folder]
# 3) scons is an alias. Solution: Create a script named "scons" somewhere in
# your $PATH (eg. ~/bin/scons will often work).
# #!/bin/sh
# python /C/Python27/Scripts/scons.py "${@}"
"""
from __future__ import absolute_import, division, print_function, unicode_literals
import argparse
import itertools
import os
import platform
import re
import subprocess
import sys
IS_WINDOWS = platform.system().lower() == 'windows'
if IS_WINDOWS:
BINARY_RE = re.compile(r'build\\([^\\]+)\\rippled.exe')
else:
BINARY_RE = re.compile(r'build/([^/]+)/rippled')
ALL_TARGETS = ['debug', 'release']
parser = argparse.ArgumentParser(
description='Test.py - run ripple tests'
)
parser.add_argument(
'--all', '-a',
action='store_true',
help='Build all configurations.',
)
parser.add_argument(
'--keep_going', '-k',
action='store_true',
help='Keep going after one configuration has failed.',
)
parser.add_argument(
'--silent', '-s',
action='store_true',
help='Silence all messages except errors',
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help=('Report more information about which commands are executed and the '
'results.'),
)
parser.add_argument(
'--test', '-t',
default='',
help='Add a prefix for unit tests',
)
parser.add_argument(
'scons_args',
default=(),
nargs='*'
)
ARGS = parser.parse_args()
def shell(*cmd, **kwds):
"Execute a shell command and return the output."
silent = kwds.pop('silent', ARGS.silent)
verbose = not silent and kwds.pop('verbose', ARGS.verbose)
if verbose:
print('$', ' '.join(cmd))
kwds['shell'] = IS_WINDOWS
process = subprocess.Popen(
cmd,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
**kwds)
lines = []
count = 0
for line in process.stdout:
lines.append(line)
if verbose:
print(line, end='')
elif not silent:
count += 1
if count >= 80:
print()
count = 0
else:
print('.', end='')
if not verbose and count:
print()
process.wait()
return process.returncode, lines
if __name__ == '__main__':
args = list(ARGS.scons_args)
if ARGS.all:
for a in ALL_TARGETS:
if a not in args:
args.append(a)
print('Building:', *(args or ['(default)']))
# Build everything.
resultcode, lines = shell('scons', *args)
if resultcode:
print('Build FAILED:')
if not ARGS.verbose:
print(*lines, sep='')
exit(1)
# Now extract the executable names and corresponding targets.
failed = []
_, lines = shell('scons', '-n', '--tree=derived', *args, silent=True)
for line in lines:
match = BINARY_RE.search(line)
if match:
executable, target = match.group(0, 1)
print('Unit tests for', target)
testflag = '--unittest'
if ARGS.test:
testflag += ('=' + ARGS.test)
resultcode, lines = shell(executable, testflag)
if resultcode:
print('ERROR:', *lines, sep='')
failed.append([target, 'unittest'])
if not ARGS.keep_going:
break
ARGS.verbose and print(*lines, sep='')
print('npm tests for', target)
resultcode, lines = shell('npm', 'test', '--rippled=' + executable)
if resultcode:
print('ERROR:\n', *lines, sep='')
failed.append([target, 'npm'])
if not ARGS.keep_going:
break
else:
ARGS.verbose and print(*lines, sep='')
if failed:
print('FAILED:', *(':'.join(f) for f in failed))
exit(1)
else:
print('Success')

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -1,24 +1,26 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Express 2013 for Windows Desktop
VisualStudioVersion = 12.0.30110.0
VisualStudioVersion = 12.0.31101.0
MinimumVisualStudioVersion = 10.0.40219.1
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "RippleD", "RippleD.vcxproj", "{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Win32 = Debug|Win32
Debug|x64 = Debug|x64
Release|Win32 = Release|Win32
Release|x64 = Release|x64
debug.classic|x64 = debug.classic|x64
debug|x64 = debug|x64
release.classic|x64 = release.classic|x64
release|x64 = release|x64
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.Debug|Win32.ActiveCfg = debug|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.Debug|x64.ActiveCfg = debug|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.Debug|x64.Build.0 = debug|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.Release|Win32.ActiveCfg = release|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.Release|x64.ActiveCfg = release|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.Release|x64.Build.0 = release|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.debug.classic|x64.ActiveCfg = debug.classic|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.debug.classic|x64.Build.0 = debug.classic|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.debug|x64.ActiveCfg = debug|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.debug|x64.Build.0 = debug|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.release.classic|x64.ActiveCfg = release.classic|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.release.classic|x64.Build.0 = release.classic|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.release|x64.ActiveCfg = release|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.release|x64.Build.0 = release|x64
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE


@@ -1,5 +1,5 @@
Name: rippled
Version: 0.26.0
Version: 0.28.1-rc2
Release: 1%{?dist}
Summary: Ripple peer-to-peer network daemon
@@ -50,4 +50,3 @@ rm -rf %{buildroot}
/usr/bin/rippled
/usr/share/rippled/LICENSE
/etc/rippled/rippled-example.cfg


@@ -1,3 +1,77 @@
![Ripple](/images/ripple.png)
#The World's Fastest and Most Secure Payment System
**What is Ripple?**
Ripple is the open-source, distributed payment protocol that enables instant
payments with low fees, no chargebacks, and currency flexibility (for example
dollars, yen, euros, bitcoins, or even loyalty points). Businesses of any size
can easily build payment solutions such as banking or remittance apps, and
accelerate the movement of money. Ripple enables the world to move value the
way it moves information on the Internet.
![Ripple Network](images/network.png)
**What is a Gateway?**
Ripple works with gateways: independent businesses which hold customer
deposits in various currencies such as U.S. dollars (USD) or Euros (EUR),
in exchange for providing cryptographically-signed issuances that users can
send and trade with one another in seconds on the Ripple network. Within the
protocol, exchanges between multiple currencies can occur atomically without
any central authority to monitor them. Later, customers can withdraw their
Ripple balances from the gateways that created those issuances.
**How do Ripple payments work?**
A sender specifies the amount and currency the recipient should receive and
Ripple automatically converts the sender's available currencies using the
distributed order books integrated into the Ripple protocol. Independent third
parties acting as market makers provide liquidity in these order books.
Ripple uses a pathfinding algorithm that considers currency pairs when
converting from the source to the destination currency. This algorithm searches
for a series of currency swaps that gives the user the lowest cost. Since
anyone can participate as a market maker, market forces drive fees to the
lowest practical level.
**What can you do with Ripple?**
The protocol is entirely open-source and the network's shared ledger is public
information, so no central authority prevents anyone from participating. Anyone
can become a market maker, create a wallet or a gateway, or monitor network
behavior. Competition drives down spreads and fees, making the network useful
to everyone.
###Key Protocol Features
1. XRP is Ripple's native [cryptocurrency]
(http://en.wikipedia.org/wiki/Cryptocurrency) with a fixed supply that
decreases slowly over time, with no mining. XRP acts as a bridge currency, and
pays for transaction fees that protect the network against spam.
![XRP as a bridge currency](/images/vehicle_currency.png)
2. Pathfinding discovers cheap and efficient payment paths through multiple
[order books](https://www.ripplecharts.com) allowing anyone to [trade](https://www.rippletrade.com) anything. When two accounts aren't linked by relationships of trust, the Ripple pathfinding engine considers intermediate links and order books to produce a set of possible paths the transaction can take. When the payment is processed, the liquidity along these paths is iteratively consumed in best-first order.
![Pathfinding from Dollars to Euro](/images/pathfinding.png)
3. [Consensus](https://www.youtube.com/watch?v=pj1QVb1vlC0) confirms
transactions in an atomic fashion, without mining, ensuring efficient use of
resources.
[transact]: https://ripple.com/files/ripple-FIs.pdf
[build]: https://ripple.com/build/
[transact.png]: /images/transact.png
[build.png]: /images/build.png
[contribute.png]: /images/contribute.png
###Join The Ripple Community
|![Transact][transact.png]|![Build][build.png]|![Contribute][contribute.png]|
|:-----------------------:|:-----------------:|:---------------------------:|
|[Transact on the fastest payment infrastructure][transact]|[Build Imaginative Apps][build]|Contribute to the Ripple Protocol Implementation|
#rippled - Ripple P2P server
##[![Build Status](https://travis-ci.org/ripple/rippled.png?branch=develop)](https://travis-ci.org/ripple/rippled)
@@ -10,6 +84,9 @@ This is the repository for Ripple's `rippled`, reference P2P server.
###Setup instructions:
* https://ripple.com/wiki/Rippled_setup_instructions
###Issues
* https://ripplelabs.atlassian.net/browse/RIPD
### Repository Contents
#### ./bin
@@ -37,5 +114,9 @@ Ripple is open source and permissively licensed under the ISC license. See the
LICENSE file for more details.
###For more information:
* https://ripple.com
* https://ripple.com/wiki
* Ripple Wiki - https://ripple.com/wiki/
* Ripple Primer - https://ripple.com/ripple_primer.pdf
* Ripple Primer (Market Making) - https://ripple.com/ripple-mm.pdf
* Ripple Gateway Primer - https://ripple.com/ripple-gateways.pdf
* Consensus - https://wiki.ripple.com/Consensus


@@ -11,14 +11,17 @@
all All available variants
debug All available debug variants
release All available release variants
profile All available profile variants
clang All clang variants
clang.debug clang debug variant
clang.release clang release variant
clang.profile clang profile variant
gcc All gcc variants
gcc.debug gcc debug variant
gcc.release gcc release variant
gcc.profile gcc profile variant
msvc All msvc variants
msvc.debug MSVC debug variant
@@ -26,10 +29,37 @@
vcxproj Generate Visual Studio 2013 project file
count Show line count metrics
Any individual target can also have ".nounity" appended for a classic,
non unity build. Example:
scons gcc.debug.nounity
If the clang toolchain is detected, then the default target will use it, else
the gcc toolchain will be used. On Windows environments, the MSVC toolchain is
also detected.
The following environment variables modify the build environment:
CLANG_CC
CLANG_CXX
CLANG_LINK
If set, a clang toolchain will be used. These must all be set together.
GNU_CC
GNU_CXX
GNU_LINK
If set, a gcc toolchain will be used (unless a clang toolchain is
detected first). These must all be set together.
CXX
If set, used to detect a toolchain.
BOOST_ROOT
Path to the boost directory.
OPENSSL_ROOT
Path to the openssl directory.
'''
#
'''
@@ -67,6 +97,7 @@ CHECK_LINE = 'built on: '
BUILD_TIME = 'Mon Apr 7 20:33:19 UTC 2014'
OPENSSL_ERROR = ('Your openSSL was built on %s; '
'rippled needs a version built on or after %s.')
UNITY_BUILD_DIRECTORY = 'src/ripple/unity/'
def check_openssl():
if Beast.system.platform in CHECK_PLATFORMS:
@@ -82,6 +113,29 @@ def check_openssl():
(CHECK_LINE, CHECK_COMMAND))
def set_implicit_cache():
'''Use implicit_cache on some targets to improve build times.
By default, scons scans each file for include dependencies. The implicit
cache flag lets you cache these dependencies for later builds, and will
only rescan files that change.
Failure cases are:
1) If the include search paths are changed (i.e. CPPPATH), then a file
may not be rebuilt.
2) If a same-named file has been added to a directory that is earlier in
the search path than the directory in which the file was found.
Turn on if this build is for a specific debug target (i.e. clang.debug)
If one of the failure cases applies, you can force a rescan of dependencies
using the command line option `--implicit-deps-changed`
'''
if len(COMMAND_LINE_TARGETS) == 1:
s = COMMAND_LINE_TARGETS[0].split('.')
if len(s) > 1 and 'debug' in s:
SetOption('implicit_cache', 1)
def import_environ(env):
'''Imports environment settings into the construction environment'''
def set(keys):
@@ -198,11 +252,11 @@ def config_base(env):
)
check_openssl()
env.Append(CPPDEFINES=['OPENSSL_NO_SSL2'])
#git = Beast.Git(env) # TODO(TOM)
if False: #git.exists:
env.Append(CPPDEFINES={'GIT_COMMIT_ID' : '"%s"' % git.commit_id})
env.Append(CPPDEFINES=[
'OPENSSL_NO_SSL2'
,'DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER'
,{'HAVE_USLEEP' : '1'}
])
try:
BOOST_ROOT = os.path.normpath(os.environ['BOOST_ROOT'])
@@ -234,12 +288,21 @@ def config_base(env):
env.Prepend(CPPPATH='%s/include' % openssl)
env.Prepend(LIBPATH=['%s/lib' % openssl])
# handle command-line arguments
profile_jemalloc = ARGUMENTS.get('profile-jemalloc')
if profile_jemalloc:
env.Append(CPPDEFINES={'PROFILE_JEMALLOC' : profile_jemalloc})
env.Append(LIBS=['jemalloc'])
env.Append(LIBPATH=[os.path.join(profile_jemalloc, 'lib')])
env.Append(CPPPATH=[os.path.join(profile_jemalloc, 'include')])
env.Append(LINKFLAGS=['-Wl,-rpath,' + os.path.join(profile_jemalloc, 'lib')])
# Set toolchain and variant specific construction variables
def config_env(toolchain, variant, env):
if variant == 'debug':
env.Append(CPPDEFINES=['DEBUG', '_DEBUG'])
elif variant == 'release':
elif variant == 'release' or variant == 'profile':
env.Append(CPPDEFINES=['NDEBUG'])
if toolchain in Split('clang gcc'):
@@ -247,20 +310,42 @@ def config_env(toolchain, variant, env):
env.ParseConfig('pkg-config --static --cflags --libs openssl')
env.ParseConfig('pkg-config --static --cflags --libs protobuf')
env.Prepend(CCFLAGS=['-Wall'])
env.Prepend(CFLAGS=['-Wall'])
env.Prepend(CXXFLAGS=['-Wall'])
env.Append(CCFLAGS=[
'-Wno-sign-compare',
'-Wno-char-subscripts',
'-Wno-format',
'-Wno-deprecated-register'
'-g' # generate debug symbols
])
env.Append(LINKFLAGS=[
'-rdynamic',
'-g',
])
if variant == 'profile':
env.Append(CCFLAGS=[
'-p',
'-pg',
])
env.Append(LINKFLAGS=[
'-p',
'-pg',
])
if toolchain == 'clang':
env.Append(CCFLAGS=['-Wno-redeclared-class-member'])
env.Append(CPPDEFINES=['BOOST_ASIO_HAS_STD_ARRAY'])
env.Append(CXXFLAGS=[
'-frtti',
'-std=c++11',
'-Wno-invalid-offsetof'])
env.Append(CPPDEFINES=['_FILE_OFFSET_BITS=64'])
if Beast.system.osx:
env.Append(CPPDEFINES={
'BEAST_COMPILE_OBJECTIVE_CPP': 1,
@@ -281,6 +366,8 @@ def config_env(toolchain, variant, env):
])
boost_libs = [
'boost_coroutine',
'boost_context',
'boost_date_time',
'boost_filesystem',
'boost_program_options',
@@ -311,15 +398,7 @@ def config_env(toolchain, variant, env):
else:
env.Append(LIBS=['rt'])
env.Append(LINKFLAGS=[
'-rdynamic'
])
if variant == 'debug':
env.Append(CCFLAGS=[
'-g'
])
elif variant == 'release':
if variant == 'release':
env.Append(CCFLAGS=[
'-O3',
'-fno-strict-aliasing'
@@ -329,19 +408,37 @@ def config_env(toolchain, variant, env):
if Beast.system.osx:
env.Replace(CC='clang', CXX='clang++', LINK='clang++')
elif 'CLANG_CC' in env and 'CLANG_CXX' in env and 'CLANG_LINK' in env:
env.Replace(CC=env['CLANG_CC'], CXX=env['CLANG_CXX'], LINK=env['CLANG_LINK'])
env.Replace(CC=env['CLANG_CC'],
CXX=env['CLANG_CXX'],
LINK=env['CLANG_LINK'])
# C and C++
# Add '-Wshorten-64-to-32'
env.Append(CCFLAGS=[])
# C++ only
env.Append(CXXFLAGS=['-Wno-mismatched-tags'])
env.Append(CXXFLAGS=[
'-Wno-mismatched-tags',
'-Wno-deprecated-register',
])
elif toolchain == 'gcc':
if 'GNU_CC' in env and 'GNU_CXX' in env and 'GNU_LINK' in env:
env.Replace(CC=env['GNU_CC'], CXX=env['GNU_CXX'], LINK=env['GNU_LINK'])
env.Replace(CC=env['GNU_CC'],
CXX=env['GNU_CXX'],
LINK=env['GNU_LINK'])
# Why is this only for gcc?!
env.Append(CCFLAGS=['-Wno-unused-local-typedefs'])
# If we are in debug mode, use GCC-specific functionality to add
# extra error checking into the code (e.g. std::vector will throw
# for out-of-bounds conditions)
if variant == 'debug':
env.Append(CPPDEFINES={
'_FORTIFY_SOURCE': 2
})
env.Append(CCFLAGS=[
'-O0'
])
elif toolchain == 'msvc':
env.Append (CPPPATH=[
os.path.join('src', 'protobuf', 'src'),
@@ -430,17 +527,10 @@ def config_env(toolchain, variant, env):
#-------------------------------------------------------------------------------
def addSource(path, env, variant_dirs, CPPPATH=[]):
if CPPPATH:
env = env.Clone()
env.Prepend(CPPPATH=CPPPATH)
return env.Object(Beast.variantFile(path, variant_dirs))
#-------------------------------------------------------------------------------
# Configure the base construction environment
root_dir = Dir('#').srcnode().get_abspath() # Path to this SConstruct file
build_dir = os.path.join('build')
base = Environment(
toolpath=[os.path.join ('src', 'beast', 'site_scons', 'site_tools')],
tools=['default', 'Protoc', 'VSProject'],
@@ -452,10 +542,14 @@ base.Append(CPPPATH=[
'src',
os.path.join('src', 'beast'),
os.path.join(build_dir, 'proto'),
os.path.join('src','soci','src'),
])
base.Decider('MD5-timestamp')
set_implicit_cache()
# Configure the toolchains, variants, default toolchain, and default target
variants = ['debug', 'release']
variants = ['debug', 'release', 'profile']
all_toolchains = ['clang', 'gcc', 'msvc']
if Beast.system.osx:
toolchains = ['clang']
@@ -476,7 +570,9 @@ else:
default_toolchain = 'clang'
else:
raise ValueError("Don't understand toolchains in " + str(toolchains))
default_variant = 'debug'
default_tu_style = 'unity'
default_variant = 'release'
default_target = None
for source in [
@@ -488,111 +584,287 @@ for source in [
PROTOCOUTDIR=os.path.join(build_dir, 'proto'),
PROTOCPYTHONOUTDIR=None)
#-------------------------------------------------------------------------------
class ObjectBuilder(object):
def __init__(self, env, variant_dirs):
self.env = env
self.variant_dirs = variant_dirs
self.objects = []
def add_source_files(self, *filenames, **kwds):
for filename in filenames:
env = self.env
if kwds:
env = env.Clone()
env.Prepend(**kwds)
o = env.Object(Beast.variantFile(filename, self.variant_dirs))
self.objects.append(o)
def list_sources(base, suffixes):
def _iter(base):
for parent, dirs, files in os.walk(base):
files = [f for f in files if not f[0] == '.']
dirs[:] = [d for d in dirs if not d[0] == '.']
for path in files:
path = os.path.join(parent, path)
r = os.path.splitext(path)
if r[1] and r[1] in suffixes:
yield os.path.normpath(path)
return list(_iter(base))
def append_sources(result, *filenames, **kwds):
result.append([filenames, kwds])
def get_soci_sources(style):
result = []
cpp_path = [
'src/soci/src/core',
'src/sqlite', ]
append_sources(result,
'src/ripple/unity/soci.cpp',
CPPPATH=cpp_path)
if style == 'unity':
append_sources(result,
'src/ripple/unity/soci_ripple.cpp',
CPPPATH=cpp_path)
return result
def get_classic_sources():
result = []
append_sources(
result,
*list_sources('src/ripple/app', '.cpp'),
CPPPATH=[
'src/soci/src/core',
'src/sqlite']
)
append_sources(result, *list_sources('src/ripple/basics', '.cpp'))
append_sources(result, *list_sources('src/ripple/core', '.cpp'))
append_sources(result, *list_sources('src/ripple/crypto', '.cpp'))
append_sources(result, *list_sources('src/ripple/json', '.cpp'))
append_sources(result, *list_sources('src/ripple/legacy', '.cpp'))
append_sources(result, *list_sources('src/ripple/net', '.cpp'))
append_sources(result, *list_sources('src/ripple/overlay', '.cpp'))
append_sources(result, *list_sources('src/ripple/peerfinder', '.cpp'))
append_sources(result, *list_sources('src/ripple/protocol', '.cpp'))
append_sources(result, *list_sources('src/ripple/shamap', '.cpp'))
append_sources(
result,
*list_sources('src/ripple/nodestore', '.cpp'),
CPPPATH=[
'src/rocksdb2/include',
'src/snappy/snappy',
'src/snappy/config',
])
result += get_soci_sources('classic')
return result
def get_unity_sources():
result = []
append_sources(
result,
'src/ripple/unity/app.cpp',
'src/ripple/unity/app1.cpp',
'src/ripple/unity/app2.cpp',
'src/ripple/unity/app3.cpp',
'src/ripple/unity/app4.cpp',
'src/ripple/unity/app5.cpp',
'src/ripple/unity/app6.cpp',
'src/ripple/unity/app7.cpp',
'src/ripple/unity/app8.cpp',
'src/ripple/unity/app9.cpp',
'src/ripple/unity/core.cpp',
'src/ripple/unity/basics.cpp',
'src/ripple/unity/crypto.cpp',
'src/ripple/unity/net.cpp',
'src/ripple/unity/overlay.cpp',
'src/ripple/unity/peerfinder.cpp',
'src/ripple/unity/json.cpp',
'src/ripple/unity/protocol.cpp',
'src/ripple/unity/shamap.cpp',
'src/ripple/unity/legacy.cpp',
)
result += get_soci_sources('unity')
append_sources(
result,
'src/ripple/unity/nodestore.cpp',
CPPPATH=[
'src/rocksdb2/include',
'src/snappy/snappy',
'src/snappy/config',
])
return result
# Declare the targets
aliases = collections.defaultdict(list)
msvc_configs = []
for toolchain in all_toolchains:
for variant in variants:
# Configure this variant's construction environment
env = base.Clone()
config_env(toolchain, variant, env)
variant_name = '%s.%s' % (toolchain, variant)
variant_dir = os.path.join(build_dir, variant_name)
variant_dirs = {
os.path.join(variant_dir, 'src') :
'src',
os.path.join(variant_dir, 'proto') :
os.path.join (build_dir, 'proto'),
}
for dest, source in variant_dirs.iteritems():
env.VariantDir(dest, source, duplicate=0)
objects = []
objects.append(addSource('src/ripple/unity/app.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/app1.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/app2.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/app3.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/app4.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/app5.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/app6.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/app7.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/app8.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/app9.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/basics.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/beast.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/beastc.c', env, variant_dirs))
objects.append(addSource('src/ripple/unity/common.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/core.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/data.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/http.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/json.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/net.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/overlay.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/peerfinder.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/protobuf.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/ripple.proto.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/radmap.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/resource.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/rpcx.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/sitefiles.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/sslutil.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/testoverlay.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/types.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/validators.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/websocket.cpp', env, variant_dirs))
objects.append(addSource('src/ripple/unity/nodestore.cpp', env, variant_dirs, [
'src/leveldb/include',
#'src/hyperleveldb/include', # hyper
'src/rocksdb/include',
]))
objects.append(addSource('src/ripple/unity/leveldb.cpp', env, variant_dirs, [
'src/leveldb/',
'src/leveldb/include',
'src/snappy/snappy',
'src/snappy/config',
]))
def should_prepare_target(cl_target,
style, toolchain, variant):
if not cl_target:
# default target
return (style == default_tu_style and
toolchain == default_toolchain and
variant == default_variant)
if 'vcxproj' in cl_target:
return toolchain == 'msvc'
s = cl_target.split('.')
if style == 'unity' and 'nounity' in s:
return False
if len(s) == 1:
return ('all' in cl_target or
variant in cl_target or
toolchain in cl_target)
if len(s) == 2 or len(s) == 3:
return s[0] == toolchain and s[1] == variant
objects.append(addSource('src/ripple/unity/hyperleveldb.cpp', env, variant_dirs, [
'src/hyperleveldb',
'src/snappy/snappy',
'src/snappy/config',
]))
return True # A target we don't know about, better prepare to build it
objects.append(addSource('src/ripple/unity/rocksdb.cpp', env, variant_dirs, [
'src/rocksdb',
'src/rocksdb/include',
'src/snappy/snappy',
'src/snappy/config',
]))
objects.append(addSource('src/ripple/unity/snappy.cpp', env, variant_dirs, [
'src/snappy/snappy',
'src/snappy/config',
]))
def should_prepare_targets(style, toolchain, variant):
if not COMMAND_LINE_TARGETS:
return should_prepare_target(None, style, toolchain, variant)
for t in COMMAND_LINE_TARGETS:
if should_prepare_target(t, style, toolchain, variant):
return True
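For reference, a hedged reading of the target-selection helpers above (invocations are hypothetical; defaults taken from default_tu_style='unity' and default_variant='release'):
# scons                      -> prepares only the default build (unity, default toolchain, release)
# scons vcxproj              -> prepares only msvc environments
# scons clang.debug          -> prepares environments with toolchain 'clang' and variant 'debug'
# scons gcc.release.nounity  -> as above for gcc/release, but the unity style is skipped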
if toolchain == "clang" and Beast.system.osx:
objects.append(addSource('src/ripple/unity/beastobjc.mm', env, variant_dirs))
for tu_style in ['classic', 'unity']:
if tu_style == 'classic':
sources = get_classic_sources()
else:
sources = get_unity_sources()
for toolchain in all_toolchains:
for variant in variants:
if not should_prepare_targets(tu_style, toolchain, variant):
continue
if variant == 'profile' and toolchain == 'msvc':
continue
# Configure this variant's construction environment
env = base.Clone()
config_env(toolchain, variant, env)
variant_name = '%s.%s' % (toolchain, variant)
if tu_style == 'classic':
variant_name += '.nounity'
variant_dir = os.path.join(build_dir, variant_name)
variant_dirs = {
os.path.join(variant_dir, 'src') :
'src',
os.path.join(variant_dir, 'proto') :
os.path.join (build_dir, 'proto'),
}
for dest, source in variant_dirs.iteritems():
env.VariantDir(dest, source, duplicate=0)
target = env.Program(
target = os.path.join(variant_dir, 'rippled'),
source = objects
object_builder = ObjectBuilder(env, variant_dirs)
for s, k in sources:
object_builder.add_source_files(*s, **k)
git_commit_tag = {}
if toolchain != 'msvc':
git = Beast.Git(env)
if git.exists:
id = '%s+%s.%s' % (git.tags, git.user, git.branch)
git_commit_tag = {'CPPDEFINES':
{'GIT_COMMIT_ID' : '\'"%s"\'' % id }}
object_builder.add_source_files(
'src/ripple/unity/git_id.cpp',
**git_commit_tag)
object_builder.add_source_files(
'src/beast/beast/unity/hash_unity.cpp',
'src/ripple/unity/beast.cpp',
'src/ripple/unity/lz4.c',
'src/ripple/unity/protobuf.cpp',
'src/ripple/unity/ripple.proto.cpp',
'src/ripple/unity/resource.cpp',
'src/ripple/unity/rpcx.cpp',
'src/ripple/unity/server.cpp',
'src/ripple/unity/validators.cpp',
'src/ripple/unity/websocket02.cpp'
)
if toolchain == default_toolchain and variant == default_variant:
default_target = target
install_target = env.Install (build_dir, source = default_target)
env.Alias ('install', install_target)
env.Default (install_target)
aliases['all'].extend(install_target)
if toolchain == 'msvc':
config = env.VSProjectConfig(variant, 'x64', target, env)
msvc_configs.append(config)
if toolchain in toolchains:
aliases['all'].extend(target)
aliases[variant].extend(target)
aliases[toolchain].extend(target)
env.Alias(variant_name, target)
object_builder.add_source_files(
'src/ripple/unity/beastc.c',
CCFLAGS = ([] if toolchain == 'msvc' else ['-Wno-array-bounds']))
if 'gcc' in toolchain:
no_uninitialized_warning = {'CCFLAGS': ['-Wno-maybe-uninitialized']}
else:
no_uninitialized_warning = {}
object_builder.add_source_files(
'src/ripple/unity/ed25519.c',
CPPPATH=[
'src/ed25519-donna',
]
)
object_builder.add_source_files(
'src/ripple/unity/rocksdb.cpp',
CPPPATH=[
'src/rocksdb2',
'src/rocksdb2/include',
'src/snappy/snappy',
'src/snappy/config',
],
**no_uninitialized_warning
)
object_builder.add_source_files(
'src/ripple/unity/snappy.cpp',
CCFLAGS=([] if toolchain == 'msvc' else ['-Wno-unused-function']),
CPPPATH=[
'src/snappy/snappy',
'src/snappy/config',
]
)
object_builder.add_source_files(
'src/ripple/unity/websocket04.cpp',
CPPPATH='src/websocketpp',
)
if toolchain == "clang" and Beast.system.osx:
object_builder.add_source_files('src/ripple/unity/beastobjc.mm')
target = env.Program(
target=os.path.join(variant_dir, 'rippled'),
source=object_builder.objects
)
if tu_style == default_tu_style:
if toolchain == default_toolchain and (
variant == default_variant):
default_target = target
install_target = env.Install (build_dir, source=default_target)
env.Alias ('install', install_target)
env.Default (install_target)
aliases['all'].extend(install_target)
if toolchain == 'msvc':
config = env.VSProjectConfig(variant, 'x64', target, env)
msvc_configs.append(config)
if toolchain in toolchains:
aliases['all'].extend(target)
aliases[toolchain].extend(target)
elif toolchain == 'msvc':
config = env.VSProjectConfig(variant + ".classic", 'x64', target, env)
msvc_configs.append(config)
if toolchain in toolchains:
aliases[variant].extend(target)
env.Alias(variant_name, target)
for key, value in aliases.iteritems():
env.Alias(key, value)
@@ -603,3 +875,33 @@ vcxproj = base.VSProject(
VSPROJECT_ROOT_DIRS = ['src/beast', 'src', '.'],
VSPROJECT_CONFIGS = msvc_configs)
base.Alias('vcxproj', vcxproj)
#-------------------------------------------------------------------------------
# Adds a phony target to the environment that always builds
# See: http://www.scons.org/wiki/PhonyTargets
def PhonyTargets(env = None, **kw):
if not env: env = DefaultEnvironment()
for target, action in kw.items():
env.AlwaysBuild(env.Alias(target, [], action))
# Build the list of rippled source files that hold unit tests
def do_count(target, source, env):
def list_testfiles(base, suffixes):
def _iter(base):
for parent, _, files in os.walk(base):
for path in files:
path = os.path.join(parent, path)
r = os.path.splitext(path)
if r[1] in suffixes:
if r[0].endswith('.test'):
yield os.path.normpath(path)
return list(_iter(base))
testfiles = list_testfiles(os.path.join('src', 'ripple'), env.get('CPPSUFFIXES'))
lines = 0
for f in testfiles:
lines = lines + sum(1 for line in open(f))
print "Total unit test lines: %d" % lines
PhonyTargets(env, count = do_count)

77
appveyor.yml Normal file

@@ -0,0 +1,77 @@
# Set environment variables.
environment:
PYTHON: C:/Python27-x64
# We bundle up protoc.exe and only the parts of boost and openssl we need so
# that it's a small download. We also use appveyor's free cache, avoiding the
# cost of downloading from S3 each time.
# TODO: script to create this package.
RIPPLED_DEPS_URL: https://s3-ap-northeast-1.amazonaws.com/history-replay/rippled_deps.zip
# Other dependencies we just download each time.
PIP_URL: https://bootstrap.pypa.io/get-pip.py
PYWIN32_URL: https://downloads.sourceforge.net/project/pywin32/pywin32/Build%20219/pywin32-219.win-amd64-py2.7.exe
# Scons honours these environment variables, setting the include/lib paths.
BOOST_ROOT: C:/rippled_deps/boost
OPENSSL_ROOT: C:/rippled_deps/openssl
# At the end of each successful build we cache this directory. It must be less
# than 100MB total compressed.
cache:
- "C:\\rippled_deps"
# This means we'll download a zip of the branch we want, rather than the full
# history.
shallow_clone: true
install:
# We want easy_install, python and protoc.exe on PATH.
- SET PATH=%PYTHON%;%PYTHON%/Scripts;C:/rippled_deps;%PATH%
# `ps` prefix means the command is executed by powershell.
- ps: Start-FileDownload $env:PIP_URL
- ps: Start-FileDownload $env:PYWIN32_URL
# Installing pip will install setuptools/easy_install.
- python get-pip.py
# Pip has some problems installing scons on windows so we use easy install.
- easy_install scons
# Scons has problems with parallel builds on windows without pywin32.
- easy_install pywin32-219.win-amd64-py2.7.exe
# (easy_install can do headless installs of .exe wizards)
# Download dependencies if appveyor didn't restore them from the cache.
# Use 7zip to unzip.
- ps: |
if (-not(Test-Path 'C:/rippled_deps')) {
Start-FileDownload "$env:RIPPLED_DEPS_URL"
7z x rippled_deps.zip -oC:\ -y > $null
}
# TODO: This is giving me grief
# artifacts:
# # Save rippled.exe in the cloud after each build.
# - path: "build\\rippled.exe"
build_script:
# We set the environment variables needed to put compilers on the PATH.
- '"%VS120COMNTOOLS%../../VC/vcvarsall.bat" x86_amd64'
# Show which version of the compiler we are using.
- cl
- scons msvc.debug -j%NUMBER_OF_PROCESSORS%
after_build:
# Put our executable in a place where npm test can find it.
- ps: cp build/msvc.debug/rippled.exe build
- ps: ls build
test_script:
# Run the unit tests
- build\\rippled --unittest
# Run the integration tests
- npm install
- npm test

1
bin/LT Symbolic link

@@ -0,0 +1 @@
LedgerTool.py

24
bin/LedgerTool.py Executable file

@@ -0,0 +1,24 @@
#!/usr/bin/env python
from __future__ import absolute_import, division, print_function, unicode_literals
import sys
import traceback
from ripple.ledger import Server
from ripple.ledger.commands import Cache, Info, Print
from ripple.ledger.Args import ARGS
from ripple.util import Log
from ripple.util.CommandList import CommandList
_COMMANDS = CommandList(Cache, Info, Print)
if __name__ == '__main__':
try:
server = Server.Server()
args = list(ARGS.command)
_COMMANDS.run_safe(args.pop(0), server, *args)
except Exception as e:
if ARGS.verbose:
print(traceback.format_exc(), sys.stderr)
Log.error(e)
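A hedged note on invocation: the script dispatches on the first positional argument via CommandList, so (assuming CommandList keys commands by lower-cased module name) usage would look roughly like:
# Hypothetical invocations; exact arguments depend on ripple.ledger.Args (not shown here).
#   bin/LT info     -> dispatches to ripple.ledger.commands.Info
#   bin/LT print    -> dispatches to ripple.ledger.commands.Print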

8
bin/README.md Normal file

@@ -0,0 +1,8 @@
Unit Tests
==========
To run the Python unit tests, execute:

    python -m unittest discover

from this directory.

251
bin/decorator.py Normal file

@@ -0,0 +1,251 @@
########################## LICENCE ###############################
# Copyright (c) 2005-2012, Michele Simionato
# All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
# Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# Redistributions in bytecode form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in
# the documentation and/or other materials provided with the
# distribution.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
# OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
# TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
# DAMAGE.
"""
Decorator module, see http://pypi.python.org/pypi/decorator
for the documentation.
"""
__version__ = '3.4.0'
__all__ = ["decorator", "FunctionMaker", "contextmanager"]
import sys, re, inspect
if sys.version >= '3':
from inspect import getfullargspec
def get_init(cls):
return cls.__init__
else:
class getfullargspec(object):
"A quick and dirty replacement for getfullargspec for Python 2.X"
def __init__(self, f):
self.args, self.varargs, self.varkw, self.defaults = \
inspect.getargspec(f)
self.kwonlyargs = []
self.kwonlydefaults = None
def __iter__(self):
yield self.args
yield self.varargs
yield self.varkw
yield self.defaults
def get_init(cls):
return cls.__init__.im_func
DEF = re.compile('\s*def\s*([_\w][_\w\d]*)\s*\(')
# basic functionality
class FunctionMaker(object):
"""
An object with the ability to create functions with a given signature.
It has attributes name, doc, module, signature, defaults, dict and
methods update and make.
"""
def __init__(self, func=None, name=None, signature=None,
defaults=None, doc=None, module=None, funcdict=None):
self.shortsignature = signature
if func:
# func can be a class or a callable, but not an instance method
self.name = func.__name__
if self.name == '<lambda>': # small hack for lambda functions
self.name = '_lambda_'
self.doc = func.__doc__
self.module = func.__module__
if inspect.isfunction(func):
argspec = getfullargspec(func)
self.annotations = getattr(func, '__annotations__', {})
for a in ('args', 'varargs', 'varkw', 'defaults', 'kwonlyargs',
'kwonlydefaults'):
setattr(self, a, getattr(argspec, a))
for i, arg in enumerate(self.args):
setattr(self, 'arg%d' % i, arg)
if sys.version < '3': # easy way
self.shortsignature = self.signature = \
inspect.formatargspec(
formatvalue=lambda val: "", *argspec)[1:-1]
else: # Python 3 way
allargs = list(self.args)
allshortargs = list(self.args)
if self.varargs:
allargs.append('*' + self.varargs)
allshortargs.append('*' + self.varargs)
elif self.kwonlyargs:
allargs.append('*') # single star syntax
for a in self.kwonlyargs:
allargs.append('%s=None' % a)
allshortargs.append('%s=%s' % (a, a))
if self.varkw:
allargs.append('**' + self.varkw)
allshortargs.append('**' + self.varkw)
self.signature = ', '.join(allargs)
self.shortsignature = ', '.join(allshortargs)
self.dict = func.__dict__.copy()
# func=None happens when decorating a caller
if name:
self.name = name
if signature is not None:
self.signature = signature
if defaults:
self.defaults = defaults
if doc:
self.doc = doc
if module:
self.module = module
if funcdict:
self.dict = funcdict
# check existence of required attributes
assert hasattr(self, 'name')
if not hasattr(self, 'signature'):
raise TypeError('You are decorating a non function: %s' % func)
def update(self, func, **kw):
"Update the signature of func with the data in self"
func.__name__ = self.name
func.__doc__ = getattr(self, 'doc', None)
func.__dict__ = getattr(self, 'dict', {})
func.func_defaults = getattr(self, 'defaults', ())
func.__kwdefaults__ = getattr(self, 'kwonlydefaults', None)
func.__annotations__ = getattr(self, 'annotations', None)
callermodule = sys._getframe(3).f_globals.get('__name__', '?')
func.__module__ = getattr(self, 'module', callermodule)
func.__dict__.update(kw)
def make(self, src_templ, evaldict=None, addsource=False, **attrs):
"Make a new function from a given template and update the signature"
src = src_templ % vars(self) # expand name and signature
evaldict = evaldict or {}
mo = DEF.match(src)
if mo is None:
raise SyntaxError('not a valid function template\n%s' % src)
name = mo.group(1) # extract the function name
names = set([name] + [arg.strip(' *') for arg in
self.shortsignature.split(',')])
for n in names:
if n in ('_func_', '_call_'):
raise NameError('%s is overridden in\n%s' % (n, src))
if not src.endswith('\n'): # add a newline just for safety
src += '\n' # this is needed in old versions of Python
try:
code = compile(src, '<string>', 'single')
# print >> sys.stderr, 'Compiling %s' % src
exec code in evaldict
except:
print >> sys.stderr, 'Error in generated code:'
print >> sys.stderr, src
raise
func = evaldict[name]
if addsource:
attrs['__source__'] = src
self.update(func, **attrs)
return func
@classmethod
def create(cls, obj, body, evaldict, defaults=None,
doc=None, module=None, addsource=True, **attrs):
"""
Create a function from the strings name, signature and body.
evaldict is the evaluation dictionary. If addsource is true an attribute
__source__ is added to the result. The attributes attrs are added,
if any.
"""
if isinstance(obj, str): # "name(signature)"
name, rest = obj.strip().split('(', 1)
signature = rest[:-1] # strip the trailing right paren
func = None
else: # a function
name = None
signature = None
func = obj
self = cls(func, name, signature, defaults, doc, module)
ibody = '\n'.join(' ' + line for line in body.splitlines())
return self.make('def %(name)s(%(signature)s):\n' + ibody,
evaldict, addsource, **attrs)
def decorator(caller, func=None):
"""
decorator(caller) converts a caller function into a decorator;
decorator(caller, func) decorates a function using a caller.
"""
if func is not None: # returns a decorated function
evaldict = func.func_globals.copy()
evaldict['_call_'] = caller
evaldict['_func_'] = func
return FunctionMaker.create(
func, "return _call_(_func_, %(shortsignature)s)",
evaldict, undecorated=func, __wrapped__=func)
else: # returns a decorator
if inspect.isclass(caller):
name = caller.__name__.lower()
callerfunc = get_init(caller)
doc = 'decorator(%s) converts functions/generators into ' \
'factories of %s objects' % (caller.__name__, caller.__name__)
fun = getfullargspec(callerfunc).args[1] # second arg
elif inspect.isfunction(caller):
name = '_lambda_' if caller.__name__ == '<lambda>' \
else caller.__name__
callerfunc = caller
doc = caller.__doc__
fun = getfullargspec(callerfunc).args[0] # first arg
else: # assume caller is an object with a __call__ method
name = caller.__class__.__name__.lower()
callerfunc = caller.__call__.im_func
doc = caller.__call__.__doc__
fun = getfullargspec(callerfunc).args[1] # second arg
evaldict = callerfunc.func_globals.copy()
evaldict['_call_'] = caller
evaldict['decorator'] = decorator
return FunctionMaker.create(
'%s(%s)' % (name, fun),
'return decorator(_call_, %s)' % fun,
evaldict, undecorated=caller, __wrapped__=caller,
doc=doc, module=caller.__module__)
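A minimal usage sketch (not part of the vendored file, and assuming bin/decorator.py is importable as `decorator`): the caller receives the decorated function plus its arguments, and the returned wrapper preserves the original signature.
from decorator import decorator  # assumes bin/ is on sys.path

@decorator
def trace(f, *args, **kw):
    # caller: runs around the wrapped function
    print('calling %s with %s %s' % (f.__name__, args, kw))
    return f(*args, **kw)

@trace
def add(x, y):
    return x + y

add(1, 2)  # prints "calling add with (1, 2) {}" and returns 3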
######################### contextmanager ########################
def __call__(self, func):
'Context manager decorator'
return FunctionMaker.create(
func, "with _self_: return _func_(%(shortsignature)s)",
dict(_self_=self, _func_=func), __wrapped__=func)
try: # Python >= 3.2
from contextlib import _GeneratorContextManager
ContextManager = type(
'ContextManager', (_GeneratorContextManager,), dict(__call__=__call__))
except ImportError: # Python >= 2.5
from contextlib import GeneratorContextManager
def __init__(self, f, *a, **k):
return GeneratorContextManager.__init__(self, f(*a, **k))
ContextManager = type(
'ContextManager', (GeneratorContextManager,),
dict(__call__=__call__, __init__=__init__))
contextmanager = decorator(ContextManager)

4
bin/jsonpath_rw/__init__.py Normal file

@@ -0,0 +1,4 @@
from .jsonpath import *
from .parser import parse
__version__ = '1.3.0'

510
bin/jsonpath_rw/jsonpath.py Normal file

@@ -0,0 +1,510 @@
from __future__ import unicode_literals, print_function, absolute_import, division, generators, nested_scopes
import logging
import six
from six.moves import xrange
from itertools import *
logger = logging.getLogger(__name__)
# Turn on/off the automatic creation of id attributes
# ... could be a kwarg pervasively but uses are rare and simple today
auto_id_field = None
class JSONPath(object):
"""
The base class for JSONPath abstract syntax; those
methods stubbed here are the interface to supported
JSONPath semantics.
"""
def find(self, data):
"""
All `JSONPath` types support `find()`, which returns an iterable of `DatumInContext`s.
They keep track of the path followed to the current location, so if the calling code
has some opinion about that, it can be passed in here as a starting point.
"""
raise NotImplementedError()
def update(self, data, val):
"Returns `data` with the specified path replaced by `val`"
raise NotImplementedError()
def child(self, child):
"""
Equivalent to Child(self, child) but with some canonicalization
"""
if isinstance(self, This) or isinstance(self, Root):
return child
elif isinstance(child, This):
return self
elif isinstance(child, Root):
return child
else:
return Child(self, child)
def make_datum(self, value):
if isinstance(value, DatumInContext):
return value
else:
return DatumInContext(value, path=Root(), context=None)
class DatumInContext(object):
"""
Represents a datum along a path from a context.
Essentially a zipper but with a structure represented by JsonPath,
and where the context is more of a parent pointer than a proper
representation of the context.
For quick-and-dirty work, this proxies any non-special attributes
to the underlying datum, but the actual datum can (and usually should)
be retrieved via the `value` attribute.
To place `datum` within another, use `datum.in_context(context=..., path=...)`
which extends the path. If the datum already has a context, it places the entire
context within that passed in, so an object can be built from the inside
out.
"""
@classmethod
def wrap(cls, data):
if isinstance(data, cls):
return data
else:
return cls(data)
def __init__(self, value, path=None, context=None):
self.value = value
self.path = path or This()
self.context = None if context is None else DatumInContext.wrap(context)
def in_context(self, context, path):
context = DatumInContext.wrap(context)
if self.context:
return DatumInContext(value=self.value, path=self.path, context=context.in_context(path=path, context=context))
else:
return DatumInContext(value=self.value, path=path, context=context)
@property
def full_path(self):
return self.path if self.context is None else self.context.full_path.child(self.path)
@property
def id_pseudopath(self):
"""
Looks like a path, but with ids stuck in when available
"""
try:
pseudopath = Fields(str(self.value[auto_id_field]))
except (TypeError, AttributeError, KeyError): # This may not be all the interesting exceptions
pseudopath = self.path
if self.context:
return self.context.id_pseudopath.child(pseudopath)
else:
return pseudopath
def __repr__(self):
return '%s(value=%r, path=%r, context=%r)' % (self.__class__.__name__, self.value, self.path, self.context)
def __eq__(self, other):
return isinstance(other, DatumInContext) and other.value == self.value and other.path == self.path and self.context == other.context
class AutoIdForDatum(DatumInContext):
"""
This behaves like a DatumInContext, but the value is
always the path leading up to it, not including the "id",
and with any "id" fields along the way replacing the prior
segment of the path
For example, it will make "foo.bar.id" return a datum
that behaves like DatumInContext(value="foo.bar", path="foo.bar.id").
This is disabled by default; it can be turned on by
settings the `auto_id_field` global to a value other
than `None`.
"""
def __init__(self, datum, id_field=None):
"""
Invariant is that datum.path is the path from context to datum. The auto id
will either be the id in the datum (if present) or the id of the context
followed by the path to the datum.
The path to this datum is always the path to the context, the path to the
datum, and then the auto id field.
"""
self.datum = datum
self.id_field = id_field or auto_id_field
@property
def value(self):
return str(self.datum.id_pseudopath)
@property
def path(self):
return self.id_field
@property
def context(self):
return self.datum
def __repr__(self):
return '%s(%r)' % (self.__class__.__name__, self.datum)
def in_context(self, context, path):
return AutoIdForDatum(self.datum.in_context(context=context, path=path))
def __eq__(self, other):
return isinstance(other, AutoIdForDatum) and other.datum == self.datum and self.id_field == other.id_field
class Root(JSONPath):
"""
The JSONPath referring to the "root" object. Concrete syntax is '$'.
The root is the topmost datum without any context attached.
"""
def find(self, data):
if not isinstance(data, DatumInContext):
return [DatumInContext(data, path=Root(), context=None)]
else:
if data.context is None:
return [DatumInContext(data.value, context=None, path=Root())]
else:
return Root().find(data.context)
def update(self, data, val):
return val
def __str__(self):
return '$'
def __repr__(self):
return 'Root()'
def __eq__(self, other):
return isinstance(other, Root)
class This(JSONPath):
"""
The JSONPath referring to the current datum. Concrete syntax is '@'.
"""
def find(self, datum):
return [DatumInContext.wrap(datum)]
def update(self, data, val):
return val
def __str__(self):
return '`this`'
def __repr__(self):
return 'This()'
def __eq__(self, other):
return isinstance(other, This)
class Child(JSONPath):
"""
JSONPath that first matches the left, then the right.
Concrete syntax is <left> '.' <right>
"""
def __init__(self, left, right):
self.left = left
self.right = right
def find(self, datum):
"""
Extra special case: auto ids do not have children,
so cut it off right now rather than auto id the auto id
"""
return [submatch
for subdata in self.left.find(datum)
if not isinstance(subdata, AutoIdForDatum)
for submatch in self.right.find(subdata)]
def __eq__(self, other):
return isinstance(other, Child) and self.left == other.left and self.right == other.right
def __str__(self):
return '%s.%s' % (self.left, self.right)
def __repr__(self):
return '%s(%r, %r)' % (self.__class__.__name__, self.left, self.right)
class Parent(JSONPath):
"""
JSONPath that matches the parent node of the current match.
Will crash if no such parent exists.
Available via named operator `parent`.
"""
def find(self, datum):
datum = DatumInContext.wrap(datum)
return [datum.context]
def __eq__(self, other):
return isinstance(other, Parent)
def __str__(self):
return '`parent`'
def __repr__(self):
return 'Parent()'
class Where(JSONPath):
"""
JSONPath that first matches the left, and then
filters for only those nodes that have
a match on the right.
WARNING: Subject to change. May want to have "contains"
or some other better word for it.
"""
def __init__(self, left, right):
self.left = left
self.right = right
def find(self, data):
return [subdata for subdata in self.left.find(data) if self.right.find(data)]
def __str__(self):
return '%s where %s' % (self.left, self.right)
def __eq__(self, other):
return isinstance(other, Where) and other.left == self.left and other.right == self.right
class Descendants(JSONPath):
"""
JSONPath that matches first the left expression then any descendant
of it which matches the right expression.
"""
def __init__(self, left, right):
self.left = left
self.right = right
def find(self, datum):
# <left> .. <right> ==> <left> . (<right> | *..<right> | [*]..<right>)
#
# With a wonky caveat that since Slice() has funky coercions
# we cannot just delegate to that equivalence or we'll hit an
# infinite loop. So right here we implement the coercion-free version.
# Get all left matches into a list
left_matches = self.left.find(datum)
if not isinstance(left_matches, list):
left_matches = [left_matches]
def match_recursively(datum):
right_matches = self.right.find(datum)
# Manually do the * or [*] to avoid coercion and recurse just the right-hand pattern
if isinstance(datum.value, list):
recursive_matches = [submatch
for i in range(0, len(datum.value))
for submatch in match_recursively(DatumInContext(datum.value[i], context=datum, path=Index(i)))]
elif isinstance(datum.value, dict):
recursive_matches = [submatch
for field in datum.value.keys()
for submatch in match_recursively(DatumInContext(datum.value[field], context=datum, path=Fields(field)))]
else:
recursive_matches = []
return right_matches + list(recursive_matches)
# TODO: repeatable iterator instead of list?
return [submatch
for left_match in left_matches
for submatch in match_recursively(left_match)]
def is_singular(self):
return False
def __str__(self):
return '%s..%s' % (self.left, self.right)
def __eq__(self, other):
return isinstance(other, Descendants) and self.left == other.left and self.right == other.right
class Union(JSONPath):
"""
JSONPath that returns the union of the results of each match.
This is pretty shoddily implemented for now. The nicest semantics
in case of mismatched bits (list vs atomic) is to put
them all in a list, but I haven't done that yet.
WARNING: Any appearance of this being the _concatenation_ is
coincidence. It may even be a bug! (or laziness)
"""
def __init__(self, left, right):
self.left = left
self.right = right
def is_singular(self):
return False
def find(self, data):
return self.left.find(data) + self.right.find(data)
class Intersect(JSONPath):
"""
JSONPath for bits that match *both* patterns.
This can be accomplished a couple of ways. The most
efficient is to actually build the intersected
AST as in building a state machine for matching the
intersection of regular languages. The next
idea is to build a filtered data and match against
that.
"""
def __init__(self, left, right):
self.left = left
self.right = right
def is_singular(self):
return False
def find(self, data):
raise NotImplementedError()
class Fields(JSONPath):
"""
JSONPath referring to some field of the current object.
Concrete syntax is comma-separated field names.
WARNING: If '*' is any of the field names, then they will
all be returned.
"""
def __init__(self, *fields):
self.fields = fields
def get_field_datum(self, datum, field):
if field == auto_id_field:
return AutoIdForDatum(datum)
else:
try:
field_value = datum.value[field] # Do NOT use `val.get(field)` since that confuses None as a value and None due to `get`
return DatumInContext(value=field_value, path=Fields(field), context=datum)
except (TypeError, KeyError, AttributeError):
return None
def reified_fields(self, datum):
if '*' not in self.fields:
return self.fields
else:
try:
fields = tuple(datum.value.keys())
return fields if auto_id_field is None else fields + (auto_id_field,)
except AttributeError:
return ()
def find(self, datum):
datum = DatumInContext.wrap(datum)
return [field_datum
for field_datum in [self.get_field_datum(datum, field) for field in self.reified_fields(datum)]
if field_datum is not None]
def __str__(self):
return ','.join(self.fields)
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, ','.join(map(repr, self.fields)))
def __eq__(self, other):
return isinstance(other, Fields) and tuple(self.fields) == tuple(other.fields)
class Index(JSONPath):
"""
JSONPath that matches indices of the current datum, or none if not large enough.
Concrete syntax is brackets.
WARNING: If the datum is not long enough, it will not crash but will not match anything.
NOTE: For the concrete syntax of `[*]`, the abstract syntax is a Slice() with no parameters (equiv to `[:]`).
"""
def __init__(self, index):
self.index = index
def find(self, datum):
datum = DatumInContext.wrap(datum)
if len(datum.value) > self.index:
return [DatumInContext(datum.value[self.index], path=self, context=datum)]
else:
return []
def __eq__(self, other):
return isinstance(other, Index) and self.index == other.index
def __str__(self):
return '[%i]' % self.index
class Slice(JSONPath):
"""
JSONPath matching a slice of an array.
Because of a mismatch between JSON and XML when schema-unaware,
this always returns an iterable; if the incoming data
was not a list, then it returns a one element list _containing_ that
data.
Consider these two docs, and their schema-unaware translation to JSON:
<a><b>hello</b></a> ==> {"a": {"b": "hello"}}
<a><b>hello</b><b>goodbye</b></a> ==> {"a": {"b": ["hello", "goodbye"]}}
If there were a schema, it would be known that "b" should always be an
array (unless the schema were wonky, but that is too much to fix here)
so when querying with JSON if the one writing the JSON knows that it
should be an array, they can write a slice operator and it will coerce
a non-array value to an array.
This may be a bit unfortunate because it would be nice to always have
an iterator, but dictionaries and other objects may also be iterable,
so this is the compromise.
"""
def __init__(self, start=None, end=None, step=None):
self.start = start
self.end = end
self.step = step
def find(self, datum):
datum = DatumInContext.wrap(datum)
# Here's the hack. If it is a dictionary or some kind of constant,
# put it in a single-element list
if (isinstance(datum.value, dict) or isinstance(datum.value, six.integer_types) or isinstance(datum.value, six.string_types)):
return self.find(DatumInContext([datum.value], path=datum.path, context=datum.context))
# Some iterators do not support slicing but we can still
# at least work for '*'
if self.start == None and self.end == None and self.step == None:
return [DatumInContext(datum.value[i], path=Index(i), context=datum) for i in xrange(0, len(datum.value))]
else:
return [DatumInContext(datum.value[i], path=Index(i), context=datum) for i in range(0, len(datum.value))[self.start:self.end:self.step]]
def __str__(self):
if self.start == None and self.end == None and self.step == None:
return '[*]'
else:
return '[%s%s%s]' % (self.start or '',
':%d'%self.end if self.end else '',
':%d'%self.step if self.step else '')
def __repr__(self):
return '%s(start=%r,end=%r,step=%r)' % (self.__class__.__name__, self.start, self.end, self.step)
def __eq__(self, other):
return isinstance(other, Slice) and other.start == self.start and self.end == other.end and other.step == self.step
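A minimal usage sketch for the classes above (assuming the bundled jsonpath_rw package and its ply dependency are importable):
from jsonpath_rw import parse  # assumes bin/ is on sys.path

matches = parse('foo[*].bar').find({'foo': [{'bar': 1}, {'bar': 2}]})
print([m.value for m in matches])            # [1, 2]
print([str(m.full_path) for m in matches])   # ['foo.[0].bar', 'foo.[1].bar']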

171
bin/jsonpath_rw/lexer.py Normal file

@@ -0,0 +1,171 @@
from __future__ import unicode_literals, print_function, absolute_import, division, generators, nested_scopes
import sys
import logging
import ply.lex
logger = logging.getLogger(__name__)
class JsonPathLexerError(Exception):
pass
class JsonPathLexer(object):
'''
A Lexical analyzer for JsonPath.
'''
def __init__(self, debug=False):
self.debug = debug
if self.__doc__ == None:
raise JsonPathLexerError('Docstrings have been removed! By design of PLY, jsonpath-rw requires docstrings. You must not use PYTHONOPTIMIZE=2 or python -OO.')
def tokenize(self, string):
'''
Maps a string to an iterator over tokens. In other words: [char] -> [token]
'''
new_lexer = ply.lex.lex(module=self, debug=self.debug, errorlog=logger)
new_lexer.latest_newline = 0
new_lexer.string_value = None
new_lexer.input(string)
while True:
t = new_lexer.token()
if t is None: break
t.col = t.lexpos - new_lexer.latest_newline
yield t
if new_lexer.string_value is not None:
raise JsonPathLexerError('Unexpected EOF in string literal or identifier')
# ============== PLY Lexer specification ==================
#
# This probably should be private but:
# - the parser requires access to `tokens` (perhaps they should be defined in a third, shared dependency)
# - things like `literals` might be a legitimate part of the public interface.
#
# Anyhow, it is pythonic to give some rope to hang oneself with :-)
literals = ['*', '.', '[', ']', '(', ')', '$', ',', ':', '|', '&']
reserved_words = { 'where': 'WHERE' }
tokens = ['DOUBLEDOT', 'NUMBER', 'ID', 'NAMED_OPERATOR'] + list(reserved_words.values())
states = [ ('singlequote', 'exclusive'),
('doublequote', 'exclusive'),
('backquote', 'exclusive') ]
# Normal lexing, rather easy
t_DOUBLEDOT = r'\.\.'
t_ignore = ' \t'
def t_ID(self, t):
r'[a-zA-Z_@][a-zA-Z0-9_@\-]*'
t.type = self.reserved_words.get(t.value, 'ID')
return t
def t_NUMBER(self, t):
r'-?\d+'
t.value = int(t.value)
return t
# Single-quoted strings
t_singlequote_ignore = ''
def t_singlequote(self, t):
r"'"
t.lexer.string_start = t.lexer.lexpos
t.lexer.string_value = ''
t.lexer.push_state('singlequote')
def t_singlequote_content(self, t):
r"[^'\\]+"
t.lexer.string_value += t.value
def t_singlequote_escape(self, t):
r'\\.'
t.lexer.string_value += t.value[1]
def t_singlequote_end(self, t):
r"'"
t.value = t.lexer.string_value
t.type = 'ID'
t.lexer.string_value = None
t.lexer.pop_state()
return t
def t_singlequote_error(self, t):
raise JsonPathLexerError('Error on line %s, col %s while lexing singlequoted field: Unexpected character: %s ' % (t.lexer.lineno, t.lexpos - t.lexer.latest_newline, t.value[0]))
# Double-quoted strings
t_doublequote_ignore = ''
def t_doublequote(self, t):
r'"'
t.lexer.string_start = t.lexer.lexpos
t.lexer.string_value = ''
t.lexer.push_state('doublequote')
def t_doublequote_content(self, t):
r'[^"\\]+'
t.lexer.string_value += t.value
def t_doublequote_escape(self, t):
r'\\.'
t.lexer.string_value += t.value[1]
def t_doublequote_end(self, t):
r'"'
t.value = t.lexer.string_value
t.type = 'ID'
t.lexer.string_value = None
t.lexer.pop_state()
return t
def t_doublequote_error(self, t):
raise JsonPathLexerError('Error on line %s, col %s while lexing doublequoted field: Unexpected character: %s ' % (t.lexer.lineno, t.lexpos - t.lexer.latest_newline, t.value[0]))
# Back-quoted "magic" operators
t_backquote_ignore = ''
def t_backquote(self, t):
r'`'
t.lexer.string_start = t.lexer.lexpos
t.lexer.string_value = ''
t.lexer.push_state('backquote')
def t_backquote_escape(self, t):
r'\\.'
t.lexer.string_value += t.value[1]
def t_backquote_content(self, t):
r"[^`\\]+"
t.lexer.string_value += t.value
def t_backquote_end(self, t):
r'`'
t.value = t.lexer.string_value
t.type = 'NAMED_OPERATOR'
t.lexer.string_value = None
t.lexer.pop_state()
return t
def t_backquote_error(self, t):
raise JsonPathLexerError('Error on line %s, col %s while lexing backquoted operator: Unexpected character: %s ' % (t.lexer.lineno, t.lexpos - t.lexer.latest_newline, t.value[0]))
# Counting lines, handling errors
def t_newline(self, t):
r'\n'
t.lexer.lineno += 1
t.lexer.latest_newline = t.lexpos
def t_error(self, t):
raise JsonPathLexerError('Error on line %s, col %s: Unexpected character: %s ' % (t.lexer.lineno, t.lexpos - t.lexer.latest_newline, t.value[0]))
if __name__ == '__main__':
logging.basicConfig()
lexer = JsonPathLexer(debug=True)
for token in lexer.tokenize(sys.stdin.read()):
print('%-20s%s' % (token.value, token.type))
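For illustration, a hedged example of the token stream the lexer above produces (literal characters come back as their own token type):
from jsonpath_rw.lexer import JsonPathLexer  # assumes bin/ is on sys.path and ply is importable

lexer = JsonPathLexer()
print([(t.type, t.value) for t in lexer.tokenize('foo.bar[0]')])
# roughly: [('ID', 'foo'), ('.', '.'), ('ID', 'bar'), ('[', '['), ('NUMBER', 0), (']', ']')]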

187
bin/jsonpath_rw/parser.py Normal file

@@ -0,0 +1,187 @@
from __future__ import print_function, absolute_import, division, generators, nested_scopes
import sys
import os.path
import logging
import ply.yacc
from jsonpath_rw.jsonpath import *
from jsonpath_rw.lexer import JsonPathLexer
logger = logging.getLogger(__name__)
def parse(string):
return JsonPathParser().parse(string)
class JsonPathParser(object):
'''
An LALR-parser for JsonPath
'''
tokens = JsonPathLexer.tokens
def __init__(self, debug=False, lexer_class=None):
if self.__doc__ == None:
raise Exception('Docstrings have been removed! By design of PLY, jsonpath-rw requires docstrings. You must not use PYTHONOPTIMIZE=2 or python -OO.')
self.debug = debug
self.lexer_class = lexer_class or JsonPathLexer # Crufty but works around statefulness in PLY
def parse(self, string, lexer = None):
lexer = lexer or self.lexer_class()
return self.parse_token_stream(lexer.tokenize(string))
def parse_token_stream(self, token_iterator, start_symbol='jsonpath'):
# Since PLY has some crufty aspects and dumps files, we try to keep them local
# However, we need to derive the name of the output Python file :-/
output_directory = os.path.dirname(__file__)
try:
module_name = os.path.splitext(os.path.split(__file__)[1])[0]
except:
module_name = __name__
parsing_table_module = '_'.join([module_name, start_symbol, 'parsetab'])
# And we regenerate the parse table every time; it doesn't actually take that long!
new_parser = ply.yacc.yacc(module=self,
debug=self.debug,
tabmodule = parsing_table_module,
outputdir = output_directory,
write_tables=0,
start = start_symbol,
errorlog = logger)
return new_parser.parse(lexer = IteratorToTokenStream(token_iterator))
# ===================== PLY Parser specification =====================
precedence = [
('left', ','),
('left', 'DOUBLEDOT'),
('left', '.'),
('left', '|'),
('left', '&'),
('left', 'WHERE'),
]
def p_error(self, t):
raise Exception('Parse error at %s:%s near token %s (%s)' % (t.lineno, t.col, t.value, t.type))
def p_jsonpath_binop(self, p):
"""jsonpath : jsonpath '.' jsonpath
| jsonpath DOUBLEDOT jsonpath
| jsonpath WHERE jsonpath
| jsonpath '|' jsonpath
| jsonpath '&' jsonpath"""
op = p[2]
if op == '.':
p[0] = Child(p[1], p[3])
elif op == '..':
p[0] = Descendants(p[1], p[3])
elif op == 'where':
p[0] = Where(p[1], p[3])
elif op == '|':
p[0] = Union(p[1], p[3])
elif op == '&':
p[0] = Intersect(p[1], p[3])
def p_jsonpath_fields(self, p):
"jsonpath : fields_or_any"
p[0] = Fields(*p[1])
def p_jsonpath_named_operator(self, p):
"jsonpath : NAMED_OPERATOR"
if p[1] == 'this':
p[0] = This()
elif p[1] == 'parent':
p[0] = Parent()
else:
raise Exception('Unknown named operator `%s` at %s:%s' % (p[1], p.lineno(1), p.lexpos(1)))
def p_jsonpath_root(self, p):
"jsonpath : '$'"
p[0] = Root()
def p_jsonpath_idx(self, p):
"jsonpath : '[' idx ']'"
p[0] = p[2]
def p_jsonpath_slice(self, p):
"jsonpath : '[' slice ']'"
p[0] = p[2]
def p_jsonpath_fieldbrackets(self, p):
"jsonpath : '[' fields ']'"
p[0] = Fields(*p[2])
def p_jsonpath_child_fieldbrackets(self, p):
"jsonpath : jsonpath '[' fields ']'"
p[0] = Child(p[1], Fields(*p[3]))
def p_jsonpath_child_idxbrackets(self, p):
"jsonpath : jsonpath '[' idx ']'"
p[0] = Child(p[1], p[3])
def p_jsonpath_child_slicebrackets(self, p):
"jsonpath : jsonpath '[' slice ']'"
p[0] = Child(p[1], p[3])
def p_jsonpath_parens(self, p):
"jsonpath : '(' jsonpath ')'"
p[0] = p[2]
# Because fields in brackets cannot be '*' - that is reserved for array indices
def p_fields_or_any(self, p):
"""fields_or_any : fields
| '*' """
if p[1] == '*':
p[0] = ['*']
else:
p[0] = p[1]
def p_fields_id(self, p):
"fields : ID"
p[0] = [p[1]]
def p_fields_comma(self, p):
"fields : fields ',' fields"
p[0] = p[1] + p[3]
def p_idx(self, p):
"idx : NUMBER"
p[0] = Index(p[1])
def p_slice_any(self, p):
"slice : '*'"
p[0] = Slice()
def p_slice(self, p): # Currently does not support `step`
"slice : maybe_int ':' maybe_int"
p[0] = Slice(start=p[1], end=p[3])
def p_maybe_int(self, p):
"""maybe_int : NUMBER
| empty"""
p[0] = p[1]
def p_empty(self, p):
'empty :'
p[0] = None
class IteratorToTokenStream(object):
def __init__(self, iterator):
self.iterator = iterator
def token(self):
try:
return next(self.iterator)
except StopIteration:
return None
if __name__ == '__main__':
logging.basicConfig()
parser = JsonPathParser(debug=True)
print(parser.parse(sys.stdin.read()))
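A short hedged sketch of how the grammar rules above map concrete syntax onto the AST classes from jsonpath.py (reprs shown approximately):
from jsonpath_rw.parser import parse  # assumes bin/ is on sys.path and ply is importable

print(repr(parse('foo.baz')))   # roughly Child(Fields('foo'), Fields('baz'))
print(repr(parse('foo..baz')))  # roughly Descendants(Fields('foo'), Fields('baz'))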

4
bin/ply/__init__.py Normal file

@@ -0,0 +1,4 @@
# PLY package
# Author: David Beazley (dave@dabeaz.com)
__all__ = ['lex','yacc']

898
bin/ply/cpp.py Normal file

@@ -0,0 +1,898 @@
# -----------------------------------------------------------------------------
# cpp.py
#
# Author: David Beazley (http://www.dabeaz.com)
# Copyright (C) 2007
# All rights reserved
#
# This module implements an ANSI-C style lexical preprocessor for PLY.
# -----------------------------------------------------------------------------
from __future__ import generators
# -----------------------------------------------------------------------------
# Default preprocessor lexer definitions. These tokens are enough to get
# a basic preprocessor working. Other modules may import these if they want
# -----------------------------------------------------------------------------
tokens = (
'CPP_ID','CPP_INTEGER', 'CPP_FLOAT', 'CPP_STRING', 'CPP_CHAR', 'CPP_WS', 'CPP_COMMENT', 'CPP_POUND','CPP_DPOUND'
)
literals = "+-*/%|&~^<>=!?()[]{}.,;:\\\'\""
# Whitespace
def t_CPP_WS(t):
r'\s+'
t.lexer.lineno += t.value.count("\n")
return t
t_CPP_POUND = r'\#'
t_CPP_DPOUND = r'\#\#'
# Identifier
t_CPP_ID = r'[A-Za-z_][\w_]*'
# Integer literal
def CPP_INTEGER(t):
r'(((((0x)|(0X))[0-9a-fA-F]+)|(\d+))([uU]|[lL]|[uU][lL]|[lL][uU])?)'
return t
t_CPP_INTEGER = CPP_INTEGER
# Floating literal
t_CPP_FLOAT = r'((\d+)(\.\d+)(e(\+|-)?(\d+))? | (\d+)e(\+|-)?(\d+))([lL]|[fF])?'
# String literal
def t_CPP_STRING(t):
r'\"([^\\\n]|(\\(.|\n)))*?\"'
t.lexer.lineno += t.value.count("\n")
return t
# Character constant 'c' or L'c'
def t_CPP_CHAR(t):
r'(L)?\'([^\\\n]|(\\(.|\n)))*?\''
t.lexer.lineno += t.value.count("\n")
return t
# Comment
def t_CPP_COMMENT(t):
r'(/\*(.|\n)*?\*/)|(//.*?\n)'
t.lexer.lineno += t.value.count("\n")
return t
def t_error(t):
t.type = t.value[0]
t.value = t.value[0]
t.lexer.skip(1)
return t
import re
import copy
import time
import os.path
# -----------------------------------------------------------------------------
# trigraph()
#
# Given an input string, this function replaces all trigraph sequences.
# The following mapping is used:
#
# ??= #
# ??/ \
# ??' ^
# ??( [
# ??) ]
# ??! |
# ??< {
# ??> }
# ??- ~
# -----------------------------------------------------------------------------
_trigraph_pat = re.compile(r'''\?\?[=/\'\(\)\!<>\-]''')
_trigraph_rep = {
'=':'#',
'/':'\\',
"'":'^',
'(':'[',
')':']',
'!':'|',
'<':'{',
'>':'}',
'-':'~'
}
def trigraph(input):
return _trigraph_pat.sub(lambda g: _trigraph_rep[g.group()[-1]],input)
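A hedged example of the mapping above (hypothetical input; assumes bin/ is on sys.path so this module is importable as ply.cpp):
from ply.cpp import trigraph

print(trigraph("??=define ARR(x) x??(0??)"))  # -> "#define ARR(x) x[0]"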
# ------------------------------------------------------------------
# Macro object
#
# This object holds information about preprocessor macros
#
# .name - Macro name (string)
# .value - Macro value (a list of tokens)
# .arglist - List of argument names
# .variadic - Boolean indicating whether or not variadic macro
# .vararg - Name of the variadic parameter
#
# When a macro is created, the macro replacement token sequence is
# pre-scanned and used to create patch lists that are later used
# during macro expansion
# ------------------------------------------------------------------
class Macro(object):
def __init__(self,name,value,arglist=None,variadic=False):
self.name = name
self.value = value
self.arglist = arglist
self.variadic = variadic
if variadic:
self.vararg = arglist[-1]
self.source = None
# ------------------------------------------------------------------
# Preprocessor object
#
# Object representing a preprocessor. Contains macro definitions,
# include directories, and other information
# ------------------------------------------------------------------
class Preprocessor(object):
def __init__(self,lexer=None):
if lexer is None:
lexer = lex.lexer
self.lexer = lexer
self.macros = { }
self.path = []
self.temp_path = []
# Probe the lexer for selected tokens
self.lexprobe()
tm = time.localtime()
self.define("__DATE__ \"%s\"" % time.strftime("%b %d %Y",tm))
self.define("__TIME__ \"%s\"" % time.strftime("%H:%M:%S",tm))
self.parser = None
# -----------------------------------------------------------------------------
# tokenize()
#
# Utility function. Given a string of text, tokenize into a list of tokens
# -----------------------------------------------------------------------------
def tokenize(self,text):
tokens = []
self.lexer.input(text)
while True:
tok = self.lexer.token()
if not tok: break
tokens.append(tok)
return tokens
# ---------------------------------------------------------------------
# error()
#
# Report a preprocessor error/warning of some kind
# ----------------------------------------------------------------------
def error(self,file,line,msg):
print("%s:%d %s" % (file,line,msg))
# ----------------------------------------------------------------------
# lexprobe()
#
# This method probes the preprocessor lexer object to discover
# the token types of symbols that are important to the preprocessor.
# If this works right, the preprocessor will simply "work"
# with any suitable lexer regardless of how tokens have been named.
# ----------------------------------------------------------------------
def lexprobe(self):
# Determine the token type for identifiers
self.lexer.input("identifier")
tok = self.lexer.token()
if not tok or tok.value != "identifier":
print("Couldn't determine identifier type")
else:
self.t_ID = tok.type
# Determine the token type for integers
self.lexer.input("12345")
tok = self.lexer.token()
if not tok or int(tok.value) != 12345:
print("Couldn't determine integer type")
else:
self.t_INTEGER = tok.type
self.t_INTEGER_TYPE = type(tok.value)
# Determine the token type for strings enclosed in double quotes
self.lexer.input("\"filename\"")
tok = self.lexer.token()
if not tok or tok.value != "\"filename\"":
print("Couldn't determine string type")
else:
self.t_STRING = tok.type
# Determine the token type for whitespace--if any
self.lexer.input(" ")
tok = self.lexer.token()
if not tok or tok.value != " ":
self.t_SPACE = None
else:
self.t_SPACE = tok.type
# Determine the token type for newlines
self.lexer.input("\n")
tok = self.lexer.token()
if not tok or tok.value != "\n":
self.t_NEWLINE = None
print("Couldn't determine token for newlines")
else:
self.t_NEWLINE = tok.type
self.t_WS = (self.t_SPACE, self.t_NEWLINE)
# Check for other characters used by the preprocessor
chars = [ '<','>','#','##','\\','(',')',',','.']
for c in chars:
self.lexer.input(c)
tok = self.lexer.token()
if not tok or tok.value != c:
print("Unable to lex '%s' required for preprocessor" % c)
# ----------------------------------------------------------------------
# add_path()
#
# Adds a search path to the preprocessor.
# ----------------------------------------------------------------------
def add_path(self,path):
self.path.append(path)
# ----------------------------------------------------------------------
# group_lines()
#
# Given an input string, this function splits it into lines. Trailing whitespace
# is removed. Any line ending with \ is grouped with the next line. This
# function forms the lowest level of the preprocessor---grouping text into
# a line-by-line format.
# ----------------------------------------------------------------------
def group_lines(self,input):
lex = self.lexer.clone()
lines = [x.rstrip() for x in input.splitlines()]
for i in xrange(len(lines)):
j = i+1
while lines[i].endswith('\\') and (j < len(lines)):
lines[i] = lines[i][:-1]+lines[j]
lines[j] = ""
j += 1
input = "\n".join(lines)
lex.input(input)
lex.lineno = 1
current_line = []
while True:
tok = lex.token()
if not tok:
break
current_line.append(tok)
if tok.type in self.t_WS and '\n' in tok.value:
yield current_line
current_line = []
if current_line:
yield current_line
# ----------------------------------------------------------------------
# tokenstrip()
#
# Remove leading/trailing whitespace tokens from a token list
# ----------------------------------------------------------------------
def tokenstrip(self,tokens):
i = 0
while i < len(tokens) and tokens[i].type in self.t_WS:
i += 1
del tokens[:i]
i = len(tokens)-1
while i >= 0 and tokens[i].type in self.t_WS:
i -= 1
del tokens[i+1:]
return tokens
# ----------------------------------------------------------------------
# collect_args()
#
# Collects comma separated arguments from a list of tokens. The arguments
# must be enclosed in parentheses. Returns a tuple (tokencount,args,positions)
# where tokencount is the number of tokens consumed, args is a list of arguments,
# and positions is a list of integers containing the starting index of each
# argument. Each argument is represented by a list of tokens.
#
# When collecting arguments, leading and trailing whitespace is removed
# from each argument.
#
# This function properly handles nested parentheses and commas---these do not
# define new arguments.
# ----------------------------------------------------------------------
def collect_args(self,tokenlist):
args = []
positions = []
current_arg = []
nesting = 1
tokenlen = len(tokenlist)
# Search for the opening '('.
i = 0
while (i < tokenlen) and (tokenlist[i].type in self.t_WS):
i += 1
if (i < tokenlen) and (tokenlist[i].value == '('):
positions.append(i+1)
else:
self.error(self.source,tokenlist[0].lineno,"Missing '(' in macro arguments")
return 0, [], []
i += 1
while i < tokenlen:
t = tokenlist[i]
if t.value == '(':
current_arg.append(t)
nesting += 1
elif t.value == ')':
nesting -= 1
if nesting == 0:
if current_arg:
args.append(self.tokenstrip(current_arg))
positions.append(i)
return i+1,args,positions
current_arg.append(t)
elif t.value == ',' and nesting == 1:
args.append(self.tokenstrip(current_arg))
positions.append(i+1)
current_arg = []
else:
current_arg.append(t)
i += 1
# Missing end argument
self.error(self.source,tokenlist[-1].lineno,"Missing ')' in macro arguments")
return 0, [],[]
# ----------------------------------------------------------------------
# macro_prescan()
#
# Examine the macro value (token sequence) and identify patch points
# This is used to speed up macro expansion later on---we'll know
# right away where to apply patches to the value to form the expansion
# ----------------------------------------------------------------------
def macro_prescan(self,macro):
macro.patch = [] # Standard macro arguments
macro.str_patch = [] # String conversion expansion
macro.var_comma_patch = [] # Variadic macro comma patch
i = 0
while i < len(macro.value):
if macro.value[i].type == self.t_ID and macro.value[i].value in macro.arglist:
argnum = macro.arglist.index(macro.value[i].value)
# Conversion of argument to a string
if i > 0 and macro.value[i-1].value == '#':
macro.value[i] = copy.copy(macro.value[i])
macro.value[i].type = self.t_STRING
del macro.value[i-1]
macro.str_patch.append((argnum,i-1))
continue
# Concatenation
elif (i > 0 and macro.value[i-1].value == '##'):
macro.patch.append(('c',argnum,i-1))
del macro.value[i-1]
continue
elif ((i+1) < len(macro.value) and macro.value[i+1].value == '##'):
macro.patch.append(('c',argnum,i))
i += 1
continue
# Standard expansion
else:
macro.patch.append(('e',argnum,i))
elif macro.value[i].value == '##':
if macro.variadic and (i > 0) and (macro.value[i-1].value == ',') and \
((i+1) < len(macro.value)) and (macro.value[i+1].type == self.t_ID) and \
(macro.value[i+1].value == macro.vararg):
macro.var_comma_patch.append(i-1)
i += 1
macro.patch.sort(key=lambda x: x[2],reverse=True)
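# Illustrative sketch (not part of the original file).  For a definition such
# as "#define STR(x) #x" the prescan records a string-conversion patch for
# argument 0; for "#define CAT(a,b) a##b" it records two 'c' (concatenation)
# patches, which keep those arguments unexpanded; every other argument
# reference becomes an ('e', argnum, index) entry that macro_expand_args()
# later replaces with the macro-expanded argument tokens.  For a variadic
# body containing ",##__VA_ARGS__", the comma's index is noted in
# var_comma_patch so it can be dropped when the variadic argument is empty.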
# ----------------------------------------------------------------------
# macro_expand_args()
#
# Given a Macro and list of arguments (each a token list), this method
# returns an expanded version of a macro. The return value is a token sequence
# representing the replacement macro tokens
# ----------------------------------------------------------------------
def macro_expand_args(self,macro,args):
# Make a copy of the macro token sequence
rep = [copy.copy(_x) for _x in macro.value]
# Make string expansion patches. These do not alter the length of the replacement sequence
str_expansion = {}
for argnum, i in macro.str_patch:
if argnum not in str_expansion:
str_expansion[argnum] = ('"%s"' % "".join([x.value for x in args[argnum]])).replace("\\","\\\\")
rep[i] = copy.copy(rep[i])
rep[i].value = str_expansion[argnum]
# Make the variadic macro comma patch. If the variadic macro argument is empty, we get rid of the comma that precedes it in the expansion
comma_patch = False
if macro.variadic and not args[-1]:
for i in macro.var_comma_patch:
rep[i] = None
comma_patch = True
# Make all other patches. The order of these matters. It is assumed that the patch list
# has been sorted in reverse order of patch location since replacements will cause the
# size of the replacement sequence to expand from the patch point.
expanded = { }
for ptype, argnum, i in macro.patch:
# Concatenation. Argument is left unexpanded
if ptype == 'c':
rep[i:i+1] = args[argnum]
# Normal expansion. Argument is macro expanded first
elif ptype == 'e':
if argnum not in expanded:
expanded[argnum] = self.expand_macros(args[argnum])
rep[i:i+1] = expanded[argnum]
# Get rid of removed comma if necessary
if comma_patch:
rep = [_i for _i in rep if _i]
return rep
# ----------------------------------------------------------------------
# expand_macros()
#
# Given a list of tokens, this function performs macro expansion.
# The expanded argument is a dictionary that contains macros already
# expanded. This is used to prevent infinite recursion.
# ----------------------------------------------------------------------
def expand_macros(self,tokens,expanded=None):
if expanded is None:
expanded = {}
i = 0
while i < len(tokens):
t = tokens[i]
if t.type == self.t_ID:
if t.value in self.macros and t.value not in expanded:
# Yes, we found a macro match
expanded[t.value] = True
m = self.macros[t.value]
if not m.arglist:
# A simple macro
ex = self.expand_macros([copy.copy(_x) for _x in m.value],expanded)
for e in ex:
e.lineno = t.lineno
tokens[i:i+1] = ex
i += len(ex)
else:
# A macro with arguments
j = i + 1
while j < len(tokens) and tokens[j].type in self.t_WS:
j += 1
if tokens[j].value == '(':
tokcount,args,positions = self.collect_args(tokens[j:])
if not m.variadic and len(args) != len(m.arglist):
self.error(self.source,t.lineno,"Macro %s requires %d arguments" % (t.value,len(m.arglist)))
i = j + tokcount
elif m.variadic and len(args) < len(m.arglist)-1:
if len(m.arglist) > 2:
self.error(self.source,t.lineno,"Macro %s must have at least %d arguments" % (t.value, len(m.arglist)-1))
else:
self.error(self.source,t.lineno,"Macro %s must have at least %d argument" % (t.value, len(m.arglist)-1))
i = j + tokcount
else:
if m.variadic:
if len(args) == len(m.arglist)-1:
args.append([])
else:
args[len(m.arglist)-1] = tokens[j+positions[len(m.arglist)-1]:j+tokcount-1]
del args[len(m.arglist):]
# Get macro replacement text
rep = self.macro_expand_args(m,args)
rep = self.expand_macros(rep,expanded)
for r in rep:
r.lineno = t.lineno
tokens[i:j+tokcount] = rep
i += len(rep)
del expanded[t.value]
continue
elif t.value == '__LINE__':
t.type = self.t_INTEGER
t.value = self.t_INTEGER_TYPE(t.lineno)
i += 1
return tokens
# ----------------------------------------------------------------------
# evalexpr()
#
# Evaluate an expression token sequence as an integral constant expression
# (used when processing #if and #elif directives).
# ----------------------------------------------------------------------
def evalexpr(self,tokens):
# tokens = tokenize(line)
# Search for defined macros
i = 0
while i < len(tokens):
if tokens[i].type == self.t_ID and tokens[i].value == 'defined':
j = i + 1
needparen = False
result = "0L"
while j < len(tokens):
if tokens[j].type in self.t_WS:
j += 1
continue
elif tokens[j].type == self.t_ID:
if tokens[j].value in self.macros:
result = "1L"
else:
result = "0L"
if not needparen: break
elif tokens[j].value == '(':
needparen = True
elif tokens[j].value == ')':
break
else:
self.error(self.source,tokens[i].lineno,"Malformed defined()")
j += 1
tokens[i].type = self.t_INTEGER
tokens[i].value = self.t_INTEGER_TYPE(result)
del tokens[i+1:j+1]
i += 1
tokens = self.expand_macros(tokens)
for i,t in enumerate(tokens):
if t.type == self.t_ID:
tokens[i] = copy.copy(t)
tokens[i].type = self.t_INTEGER
tokens[i].value = self.t_INTEGER_TYPE("0L")
elif t.type == self.t_INTEGER:
tokens[i] = copy.copy(t)
# Strip off any trailing suffixes
tokens[i].value = str(tokens[i].value)
while tokens[i].value[-1] not in "0123456789abcdefABCDEF":
tokens[i].value = tokens[i].value[:-1]
expr = "".join([str(x.value) for x in tokens])
expr = expr.replace("&&"," and ")
expr = expr.replace("||"," or ")
expr = expr.replace("!"," not ")
try:
result = eval(expr)
except StandardError:
self.error(self.source,tokens[0].lineno,"Couldn't evaluate expression")
result = 0
return result
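# Illustrative sketch (not part of the original file), assuming FOO is a
# defined macro and BAR and VERSION are not:
#   #if defined(FOO) && !defined(BAR) && VERSION >= 2
# 'defined(FOO)' and 'defined(BAR)' are first rewritten to the integer
# tokens 1L and 0L, the remaining identifier VERSION collapses to 0L, and
# the operators are mapped to Python, so the string handed to eval() is
# roughly "1L and not 0L and 0L >= 2", which evaluates to False.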
# ----------------------------------------------------------------------
# parsegen()
#
# Parse an input string.
# ----------------------------------------------------------------------
def parsegen(self,input,source=None):
# Replace trigraph sequences
t = trigraph(input)
lines = self.group_lines(t)
if not source:
source = ""
self.define("__FILE__ \"%s\"" % source)
self.source = source
chunk = []
enable = True
iftrigger = False
ifstack = []
for x in lines:
for i,tok in enumerate(x):
if tok.type not in self.t_WS: break
if tok.value == '#':
# Preprocessor directive
for tok in x:
if tok.type in self.t_WS and '\n' in tok.value:
chunk.append(tok)
dirtokens = self.tokenstrip(x[i+1:])
if dirtokens:
name = dirtokens[0].value
args = self.tokenstrip(dirtokens[1:])
else:
name = ""
args = []
if name == 'define':
if enable:
for tok in self.expand_macros(chunk):
yield tok
chunk = []
self.define(args)
elif name == 'include':
if enable:
for tok in self.expand_macros(chunk):
yield tok
chunk = []
oldfile = self.macros['__FILE__']
for tok in self.include(args):
yield tok
self.macros['__FILE__'] = oldfile
self.source = source
elif name == 'undef':
if enable:
for tok in self.expand_macros(chunk):
yield tok
chunk = []
self.undef(args)
elif name == 'ifdef':
ifstack.append((enable,iftrigger))
if enable:
if not args[0].value in self.macros:
enable = False
iftrigger = False
else:
iftrigger = True
elif name == 'ifndef':
ifstack.append((enable,iftrigger))
if enable:
if args[0].value in self.macros:
enable = False
iftrigger = False
else:
iftrigger = True
elif name == 'if':
ifstack.append((enable,iftrigger))
if enable:
result = self.evalexpr(args)
if not result:
enable = False
iftrigger = False
else:
iftrigger = True
elif name == 'elif':
if ifstack:
if ifstack[-1][0]: # We only pay attention if outer "if" allows this
if enable: # If already true, we flip enable False
enable = False
elif not iftrigger: # If False, but not triggered yet, we'll check expression
result = self.evalexpr(args)
if result:
enable = True
iftrigger = True
else:
self.error(self.source,dirtokens[0].lineno,"Misplaced #elif")
elif name == 'else':
if ifstack:
if ifstack[-1][0]:
if enable:
enable = False
elif not iftrigger:
enable = True
iftrigger = True
else:
self.error(self.source,dirtokens[0].lineno,"Misplaced #else")
elif name == 'endif':
if ifstack:
enable,iftrigger = ifstack.pop()
else:
self.error(self.source,dirtokens[0].lineno,"Misplaced #endif")
else:
# Unknown preprocessor directive
pass
else:
# Normal text
if enable:
chunk.extend(x)
for tok in self.expand_macros(chunk):
yield tok
chunk = []
# ----------------------------------------------------------------------
# include()
#
# Implementation of file-inclusion
# ----------------------------------------------------------------------
def include(self,tokens):
# Try to extract the filename and then process an include file
if not tokens:
return
if tokens:
if tokens[0].value != '<' and tokens[0].type != self.t_STRING:
tokens = self.expand_macros(tokens)
if tokens[0].value == '<':
# Include <...>
i = 1
while i < len(tokens):
if tokens[i].value == '>':
break
i += 1
else:
print("Malformed #include <...>")
return
filename = "".join([x.value for x in tokens[1:i]])
path = self.path + [""] + self.temp_path
elif tokens[0].type == self.t_STRING:
filename = tokens[0].value[1:-1]
path = self.temp_path + [""] + self.path
else:
print("Malformed #include statement")
return
for p in path:
iname = os.path.join(p,filename)
try:
data = open(iname,"r").read()
dname = os.path.dirname(iname)
if dname:
self.temp_path.insert(0,dname)
for tok in self.parsegen(data,filename):
yield tok
if dname:
del self.temp_path[0]
break
except IOError:
pass
else:
print("Couldn't find '%s'" % filename)
# ----------------------------------------------------------------------
# define()
#
# Define a new macro
# ----------------------------------------------------------------------
def define(self,tokens):
if isinstance(tokens,(str,unicode)):
tokens = self.tokenize(tokens)
linetok = tokens
try:
name = linetok[0]
if len(linetok) > 1:
mtype = linetok[1]
else:
mtype = None
if not mtype:
m = Macro(name.value,[])
self.macros[name.value] = m
elif mtype.type in self.t_WS:
# A normal macro
m = Macro(name.value,self.tokenstrip(linetok[2:]))
self.macros[name.value] = m
elif mtype.value == '(':
# A macro with arguments
tokcount, args, positions = self.collect_args(linetok[1:])
variadic = False
for a in args:
if variadic:
print("No more arguments may follow a variadic argument")
break
astr = "".join([str(_i.value) for _i in a])
if astr == "...":
variadic = True
a[0].type = self.t_ID
a[0].value = '__VA_ARGS__'
variadic = True
del a[1:]
continue
elif astr[-3:] == "..." and a[0].type == self.t_ID:
variadic = True
del a[1:]
# If, for some reason, "." is part of the identifier, strip off the name for the purposes
# of macro expansion
if a[0].value[-3:] == '...':
a[0].value = a[0].value[:-3]
continue
if len(a) > 1 or a[0].type != self.t_ID:
print("Invalid macro argument")
break
else:
mvalue = self.tokenstrip(linetok[1+tokcount:])
i = 0
while i < len(mvalue):
if i+1 < len(mvalue):
if mvalue[i].type in self.t_WS and mvalue[i+1].value == '##':
del mvalue[i]
continue
elif mvalue[i].value == '##' and mvalue[i+1].type in self.t_WS:
del mvalue[i+1]
i += 1
m = Macro(name.value,mvalue,[x[0].value for x in args],variadic)
self.macro_prescan(m)
self.macros[name.value] = m
else:
print("Bad macro definition")
except LookupError:
print("Bad macro definition")
# ----------------------------------------------------------------------
# undef()
#
# Undefine a macro
# ----------------------------------------------------------------------
def undef(self,tokens):
id = tokens[0].value
try:
del self.macros[id]
except LookupError:
pass
# ----------------------------------------------------------------------
# parse()
#
# Parse input text.
# ----------------------------------------------------------------------
def parse(self,input,source=None,ignore={}):
self.ignore = ignore
self.parser = self.parsegen(input,source)
# ----------------------------------------------------------------------
# token()
#
# Method to return individual tokens
# ----------------------------------------------------------------------
def token(self):
try:
while True:
tok = next(self.parser)
if tok.type not in self.ignore: return tok
except StopIteration:
self.parser = None
return None
if __name__ == '__main__':
import ply.lex as lex
lexer = lex.lex()
# Run a preprocessor
import sys
f = open(sys.argv[1])
input = f.read()
p = Preprocessor(lexer)
p.parse(input,sys.argv[1])
while True:
tok = p.token()
if not tok: break
print(p.source, tok)

133
bin/ply/ctokens.py Normal file

@@ -0,0 +1,133 @@
# ----------------------------------------------------------------------
# ctokens.py
#
# Token specifications for symbols in ANSI C and C++. This file is
# meant to be used as a library in other tokenizers.
# ----------------------------------------------------------------------
# Reserved words
tokens = [
# Literals (identifier, integer constant, float constant, string constant, char const)
'ID', 'TYPEID', 'ICONST', 'FCONST', 'SCONST', 'CCONST',
# Operators (+,-,*,/,%,|,&,~,^,<<,>>, ||, &&, !, <, <=, >, >=, ==, !=)
'PLUS', 'MINUS', 'TIMES', 'DIVIDE', 'MOD',
'OR', 'AND', 'NOT', 'XOR', 'LSHIFT', 'RSHIFT',
'LOR', 'LAND', 'LNOT',
'LT', 'LE', 'GT', 'GE', 'EQ', 'NE',
# Assignment (=, *=, /=, %=, +=, -=, <<=, >>=, &=, ^=, |=)
'EQUALS', 'TIMESEQUAL', 'DIVEQUAL', 'MODEQUAL', 'PLUSEQUAL', 'MINUSEQUAL',
'LSHIFTEQUAL','RSHIFTEQUAL', 'ANDEQUAL', 'XOREQUAL', 'OREQUAL',
# Increment/decrement (++,--)
'PLUSPLUS', 'MINUSMINUS',
# Structure dereference (->)
'ARROW',
# Ternary operator (?)
'TERNARY',
# Delimiters ( ) [ ] { } , . ; :
'LPAREN', 'RPAREN',
'LBRACKET', 'RBRACKET',
'LBRACE', 'RBRACE',
'COMMA', 'PERIOD', 'SEMI', 'COLON',
# Ellipsis (...)
'ELLIPSIS',
]
# Operators
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_MODULO = r'%'
t_OR = r'\|'
t_AND = r'&'
t_NOT = r'~'
t_XOR = r'\^'
t_LSHIFT = r'<<'
t_RSHIFT = r'>>'
t_LOR = r'\|\|'
t_LAND = r'&&'
t_LNOT = r'!'
t_LT = r'<'
t_GT = r'>'
t_LE = r'<='
t_GE = r'>='
t_EQ = r'=='
t_NE = r'!='
# Assignment operators
t_EQUALS = r'='
t_TIMESEQUAL = r'\*='
t_DIVEQUAL = r'/='
t_MODEQUAL = r'%='
t_PLUSEQUAL = r'\+='
t_MINUSEQUAL = r'-='
t_LSHIFTEQUAL = r'<<='
t_RSHIFTEQUAL = r'>>='
t_ANDEQUAL = r'&='
t_OREQUAL = r'\|='
t_XOREQUAL = r'^='
# Increment/decrement
t_INCREMENT = r'\+\+'
t_DECREMENT = r'--'
# ->
t_ARROW = r'->'
# ?
t_TERNARY = r'\?'
# Delimiters
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_LBRACKET = r'\['
t_RBRACKET = r'\]'
t_LBRACE = r'\{'
t_RBRACE = r'\}'
t_COMMA = r','
t_PERIOD = r'\.'
t_SEMI = r';'
t_COLON = r':'
t_ELLIPSIS = r'\.\.\.'
# Identifiers
t_ID = r'[A-Za-z_][A-Za-z0-9_]*'
# Integer literal
t_INTEGER = r'\d+([uU]|[lL]|[uU][lL]|[lL][uU])?'
# Floating literal
t_FLOAT = r'((\d+)(\.\d+)(e(\+|-)?(\d+))? | (\d+)e(\+|-)?(\d+))([lL]|[fF])?'
# String literal
t_STRING = r'\"([^\\\n]|(\\.))*?\"'
# Character constant 'c' or L'c'
t_CHARACTER = r'(L)?\'([^\\\n]|(\\.))*?\''
# Comment (C-Style)
def t_COMMENT(t):
r'/\*(.|\n)*?\*/'
t.lexer.lineno += t.value.count('\n')
return t
# Comment (C++-Style)
def t_CPPCOMMENT(t):
r'//.*\n'
t.lexer.lineno += 1
return t

1058
bin/ply/lex.py Normal file

File diff suppressed because it is too large

3276
bin/ply/yacc.py Normal file

File diff suppressed because it is too large

187
bin/ripple/ledger/Args.py Normal file

@@ -0,0 +1,187 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import argparse
import importlib
import os
from ripple.ledger import LedgerNumber
from ripple.util import File
from ripple.util import Log
from ripple.util import PrettyPrint
from ripple.util import Range
from ripple.util.Function import Function
NAME = 'LedgerTool'
VERSION = '0.1'
NONE = '(none)'
_parser = argparse.ArgumentParser(
prog=NAME,
description='Retrieve and process Ripple ledgers.',
epilog=LedgerNumber.HELP,
)
# Positional arguments.
_parser.add_argument(
'command',
nargs='*',
help='Command to execute.'
)
# Flag arguments.
_parser.add_argument(
'--binary',
action='store_true',
help='If true, searches are binary - by default linear search is used.',
)
_parser.add_argument(
'--cache',
default='~/.local/share/ripple/ledger',
help='The cache directory.',
)
_parser.add_argument(
'--complete',
action='store_true',
help='If set, only match complete ledgers.',
)
_parser.add_argument(
'--condition', '-c',
help='The name of a condition function used to match ledgers.',
)
_parser.add_argument(
'--config',
help='The rippled configuration file name.',
)
_parser.add_argument(
'--database', '-d',
nargs='*',
default=NONE,
help='Specify a database.',
)
_parser.add_argument(
'--display',
help='Specify a function to display ledgers.',
)
_parser.add_argument(
'--full', '-f',
action='store_true',
help='If true, request full ledgers.',
)
_parser.add_argument(
'--indent', '-i',
type=int,
default=2,
help='How many spaces to indent when displaying JSON.',
)
_parser.add_argument(
'--offline', '-o',
action='store_true',
help='If true, work entirely from cache, do not try to contact the server.',
)
_parser.add_argument(
'--position', '-p',
choices=['all', 'first', 'last'],
default='last',
help='Select which ledgers to display.',
)
_parser.add_argument(
'--rippled', '-r',
help='The filename of a rippled binary for retrieving ledgers.',
)
_parser.add_argument(
'--server', '-s',
help='IP address of a rippled JSON server.',
)
_parser.add_argument(
'--utc', '-u',
action='store_true',
help='If true, display times in UTC rather than local time.',
)
_parser.add_argument(
'--validations',
default=3,
help='The number of validations needed before considering a ledger valid.',
)
_parser.add_argument(
'--version',
action='version',
version='%(prog)s ' + VERSION,
help='Print the current version of %(prog)s',
)
_parser.add_argument(
'--verbose', '-v',
action='store_true',
help='If true, give status messages on stderr.',
)
_parser.add_argument(
'--window', '-w',
type=int,
default=0,
help='How many ledgers to display around the matching ledger.',
)
_parser.add_argument(
'--yes', '-y',
action='store_true',
help='If true, don\'t ask for confirmation on large commands.',
)
# Read the arguments from the command line.
ARGS = _parser.parse_args()
ARGS.NONE = NONE
Log.VERBOSE = ARGS.verbose
# Now remove any items that look like ledger numbers from the command line.
_command = ARGS.command
_parts = (ARGS.command, ARGS.ledgers) = ([], [])
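# Range.is_range() returns a bool, so indexing _parts with it routes each
# word either to _parts[0]/ARGS.command (not a ledger range) or to
# _parts[1]/ARGS.ledgers (a ledger number, range, or special name).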
for c in _command:
_parts[Range.is_range(c, *LedgerNumber.LEDGERS)].append(c)
ARGS.command = ARGS.command or ['print' if ARGS.ledgers else 'info']
ARGS.cache = File.normalize(ARGS.cache)
if not ARGS.ledgers:
if ARGS.condition:
Log.warn('--condition needs a range of ledgers')
if ARGS.display:
Log.warn('--display needs a range of ledgers')
ARGS.condition = Function(
ARGS.condition or 'all_ledgers', 'ripple.ledger.conditions')
ARGS.display = Function(
ARGS.display or 'ledger_number', 'ripple.ledger.displays')
if ARGS.window < 0:
raise ValueError('Window cannot be negative: --window=%d' %
ARGS.window)
PrettyPrint.INDENT = (ARGS.indent * ' ')
_loaders = (ARGS.database != NONE) + bool(ARGS.rippled) + bool(ARGS.server)
if not _loaders:
ARGS.rippled = 'rippled'
elif _loaders > 1:
raise ValueError('At most one of --database, --rippled and --server '
'may be specified')


@@ -0,0 +1,78 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import json
import os
import subprocess
from ripple.ledger.Args import ARGS
from ripple.util import ConfigFile
from ripple.util import Database
from ripple.util import File
from ripple.util import Log
from ripple.util import Range
LEDGER_QUERY = """
SELECT
L.*, count(1) validations
FROM
(select LedgerHash, LedgerSeq from Ledgers ORDER BY LedgerSeq DESC) L
JOIN Validations V
ON (V.LedgerHash = L.LedgerHash)
GROUP BY L.LedgerHash
HAVING validations >= {validation_quorum}
ORDER BY 2;
"""
COMPLETE_QUERY = """
SELECT
L.LedgerSeq, count(*) validations
FROM
(select LedgerHash, LedgerSeq from Ledgers ORDER BY LedgerSeq) L
JOIN Validations V
ON (V.LedgerHash = L.LedgerHash)
GROUP BY L.LedgerHash
HAVING validations >= :validation_quorum
ORDER BY 2;
"""
_DATABASE_NAME = 'ledger.db'
USE_PLACEHOLDERS = False
class DatabaseReader(object):
def __init__(self, config):
assert ARGS.database != ARGS.NONE
database = ARGS.database or config['database_path']
if not database.endswith(_DATABASE_NAME):
database = os.path.join(database, _DATABASE_NAME)
if USE_PLACEHOLDERS:
cursor = Database.fetchall(
database, COMPLETE_QUERY, config)
else:
cursor = Database.fetchall(
database, LEDGER_QUERY.format(**config), {})
self.complete = [c[1] for c in cursor]
def name_to_ledger_index(self, ledger_name, is_full=False):
if not self.complete:
return None
if ledger_name == 'closed':
return self.complete[-1]
if ledger_name == 'current':
return None
if ledger_name == 'validated':
return self.complete[-1]
def get_ledger(self, name, is_full=False):
cmd = ['ledger', str(name)]
if is_full:
cmd.append('full')
response = self._command(*cmd)
result = response.get('ledger')
if result:
return result
error = response['error']
etext = _ERROR_TEXT.get(error)
if etext:
error = '%s (%s)' % (etext, error)
Log.fatal(_ERROR_TEXT.get(error, error))


@@ -0,0 +1,18 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util import Range
FIRST_EVER = 32570
LEDGERS = {
'closed': 'the most recently closed ledger',
'current': 'the current ledger',
'first': 'the first complete ledger on this server',
'last': 'the last complete ledger on this server',
'validated': 'the most recently validated ledger',
}
HELP = """
Ledgers are either represented by a number, or one of the special ledgers;
""" + ',\n'.join('%s, %s' % (k, v) for k, v in sorted(LEDGERS.items())
)


@@ -0,0 +1,68 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import json
import os
import subprocess
from ripple.ledger.Args import ARGS
from ripple.util import File
from ripple.util import Log
from ripple.util import Range
_ERROR_CODE_REASON = {
62: 'No rippled server is running.',
}
_ERROR_TEXT = {
'lgrNotFound': 'The ledger you requested was not found.',
'noCurrent': 'The server has no current ledger.',
'noNetwork': 'The server did not respond to your request.',
}
_DEFAULT_ERROR_ = "Couldn't connect to server."
class RippledReader(object):
def __init__(self, config):
fname = File.normalize(ARGS.rippled)
if not os.path.exists(fname):
raise Exception('No rippled found at %s.' % fname)
self.cmd = [fname]
if ARGS.config:
self.cmd.extend(['--conf', File.normalize(ARGS.config)])
self.info = self._command('server_info')['info']
c = self.info.get('complete_ledgers')
if c == 'empty':
self.complete = []
else:
self.complete = sorted(Range.from_string(c))
def name_to_ledger_index(self, ledger_name, is_full=False):
return self.get_ledger(ledger_name, is_full)['ledger_index']
def get_ledger(self, name, is_full=False):
cmd = ['ledger', str(name)]
if is_full:
cmd.append('full')
response = self._command(*cmd)
result = response.get('ledger')
if result:
return result
error = response['error']
etext = _ERROR_TEXT.get(error)
if etext:
error = '%s (%s)' % (etext, error)
Log.fatal(_ERROR_TEXT.get(error, error))
def _command(self, *cmds):
cmd = self.cmd + list(cmds)
try:
data = subprocess.check_output(cmd, stderr=subprocess.PIPE)
except subprocess.CalledProcessError as e:
raise Exception(_ERROR_CODE_REASON.get(
e.returncode, _DEFAULT_ERROR_))
part = json.loads(data)
try:
return part['result']
except:
raise ValueError(part.get('error', 'unknown error'))


@@ -0,0 +1,24 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import sys
from ripple.ledger.Args import ARGS
from ripple.util import Log
from ripple.util import Range
from ripple.util import Search
def search(server):
"""Yields a stream of ledger numbers that match the given condition."""
condition = lambda number: ARGS.condition(server, number)
ledgers = server.ledgers
if ARGS.binary:
try:
position = Search.FIRST if ARGS.position == 'first' else Search.LAST
yield Search.binary_search(
ledgers[0], ledgers[-1], condition, position)
except:
Log.fatal('No ledgers matching condition "%s".' % condition,
file=sys.stderr)
else:
for x in Search.linear_search(ledgers, condition):
yield x


@@ -0,0 +1,55 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import json
import os
from ripple.ledger import DatabaseReader, RippledReader
from ripple.ledger.Args import ARGS
from ripple.util.FileCache import FileCache
from ripple.util import ConfigFile
from ripple.util import File
from ripple.util import Range
class Server(object):
def __init__(self):
cfg_file = File.normalize(ARGS.config or 'rippled.cfg')
self.config = ConfigFile.read(open(cfg_file))
if ARGS.database != ARGS.NONE:
reader = DatabaseReader.DatabaseReader(self.config)
else:
reader = RippledReader.RippledReader(self.config)
self.reader = reader
self.complete = reader.complete
names = {
'closed': reader.name_to_ledger_index('closed'),
'current': reader.name_to_ledger_index('current'),
'validated': reader.name_to_ledger_index('validated'),
'first': self.complete[0] if self.complete else None,
'last': self.complete[-1] if self.complete else None,
}
self.__dict__.update(names)
self.ledgers = sorted(Range.join_ranges(*ARGS.ledgers, **names))
def make_cache(is_full):
name = 'full' if is_full else 'summary'
filepath = os.path.join(ARGS.cache, name)
creator = lambda n: reader.get_ledger(n, is_full)
return FileCache(filepath, creator)
self._caches = [make_cache(False), make_cache(True)]
def info(self):
return self.reader.info
def cache(self, is_full):
return self._caches[is_full]
def get_ledger(self, number, is_full=False):
num = int(number)
save_in_cache = num in self.complete
can_create = (not ARGS.offline and
self.complete and
self.complete[0] <= num - 1)
cache = self.cache(is_full)
return cache.get_data(number, save_in_cache, can_create)


@@ -0,0 +1,5 @@
from __future__ import absolute_import, division, print_function, unicode_literals
class ServerReader(object):
def __init__(self, config):
raise ValueError('Direct server connections are not yet implemented.')


@@ -0,0 +1,34 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.ledger.Args import ARGS
from ripple.util import Log
from ripple.util import Range
from ripple.util.PrettyPrint import pretty_print
SAFE = True
HELP = """cache
return server_info"""
def cache(server, clear=False):
cache = server.cache(ARGS.full)
name = ['summary', 'full'][ARGS.full]
files = cache.file_count()
if not files:
Log.error('No files in %s cache.' % name)
elif clear:
if not clear.strip() == 'clear':
raise Exception("Don't understand 'clear %s'." % clear)
if not ARGS.yes:
yes = raw_input('OK to clear %s cache? (y/N) ' % name)
if not yes.lower().startswith('y'):
Log.out('Cancelled.')
return
cache.clear(ARGS.full)
Log.out('%s cache cleared - %d file%s deleted.' %
(name.capitalize(), files, '' if files == 1 else 's'))
else:
caches = (int(c) for c in cache.cache_list())
Log.out(Range.to_string(caches))


@@ -0,0 +1,21 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.ledger.Args import ARGS
from ripple.util import Log
from ripple.util import Range
from ripple.util.PrettyPrint import pretty_print
SAFE = True
HELP = 'info - return server_info'
def info(server):
Log.out('first =', server.first)
Log.out('last =', server.last)
Log.out('closed =', server.closed)
Log.out('current =', server.current)
Log.out('validated =', server.validated)
Log.out('complete =', Range.to_string(server.complete))
if ARGS.full:
Log.out(pretty_print(server.info()))


@@ -0,0 +1,15 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.ledger.Args import ARGS
from ripple.ledger import SearchLedgers
import json
SAFE = True
HELP = """print
Print the ledgers to stdout. The default command."""
def run_print(server):
ARGS.display(print, server, SearchLedgers.search(server))


@@ -0,0 +1,4 @@
from __future__ import absolute_import, division, print_function, unicode_literals
def all_ledgers(server, ledger_number):
return True


@@ -0,0 +1,89 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from functools import wraps
import jsonpath_rw
from ripple.ledger.Args import ARGS
from ripple.util import Dict
from ripple.util import Log
from ripple.util import Range
from ripple.util.Decimal import Decimal
from ripple.util.PrettyPrint import pretty_print, Streamer
TRANSACT_FIELDS = (
'accepted',
'close_time_human',
'closed',
'ledger_index',
'total_coins',
'transactions',
)
LEDGER_FIELDS = (
'accepted',
'accountState',
'close_time_human',
'closed',
'ledger_index',
'total_coins',
'transactions',
)
def _dict_filter(d, keys):
return dict((k, v) for (k, v) in d.items() if k in keys)
def ledger_number(print, server, numbers):
print(Range.to_string(numbers))
def display(f):
@wraps(f)
def wrapper(printer, server, numbers, *args):
streamer = Streamer(printer=printer)
for number in numbers:
ledger = server.get_ledger(number, ARGS.full)
if ledger:
streamer.add(number, f(ledger, *args))
streamer.finish()
return wrapper
def extractor(f):
@wraps(f)
def wrapper(printer, server, numbers, *paths):
try:
find = jsonpath_rw.parse('|'.join(paths)).find
except:
raise ValueError("Can't understand jsonpath '%s'." % path)
def fn(ledger, *args):
return f(find(ledger), *args)
display(fn)(printer, server, numbers)
return wrapper
@display
def ledger(ledger, full=False):
if ARGS.full:
if full:
return ledger
ledger = Dict.prune(ledger, 1, False)
return _dict_filter(ledger, LEDGER_FIELDS)
@display
def prune(ledger, level=1):
return Dict.prune(ledger, level, False)
@display
def transact(ledger):
return _dict_filter(ledger, TRANSACT_FIELDS)
@extractor
def extract(finds):
return dict((str(f.full_path), str(f.value)) for f in finds)
@extractor
def sum(finds):
d = Decimal()
for f in finds:
d.accumulate(f.value)
return [str(d), len(finds)]
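# Illustrative call sketch (not part of the original file): every display
# function here is invoked as function(printer, server, numbers, *args),
# the same shape run_print() uses, e.g.
#   ledger_number(print, server, [100, 101, 102])   # prints '100-102'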

40
bin/ripple/util/Cache.py Normal file

@@ -0,0 +1,40 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from collections import defaultdict
class Cache(object):
def __init__(self):
self._value_to_index = {}
self._index_to_value = []
def value_to_index(self, value, **kwds):
index = self._value_to_index.get(value, None)
if index is None:
index = len(self._index_to_value)
self._index_to_value.append((value, kwds))
self._value_to_index[value] = index
return index
def index_to_value(self, index):
return self._index_to_value[index]
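# Illustrative sketch (not part of the original file): values are interned
# to consecutive indices, so equal values share an index.
#   c = Cache()
#   c.value_to_index('abc')   # -> 0
#   c.value_to_index('xyz')   # -> 1
#   c.value_to_index('abc')   # -> 0 (already interned)
#   c.index_to_value(1)       # -> ('xyz', {})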
def NamedCache():
return defaultdict(Cache)
def cache_by_key(d, keyfunc=None, exclude=None):
cache = defaultdict(Cache)
exclude = exclude or None
keyfunc = keyfunc or (lambda x: x)
def visit(item):
if isinstance(item, list):
for i, x in enumerate(item):
item[i] = visit(x)
elif isinstance(item, dict):
for k, v in item.items():
item[k] = visit(v)
return item
return cache


@@ -0,0 +1,77 @@
from __future__ import absolute_import, division, print_function, unicode_literals
# Code taken from github/rec/grit.
import os
import sys
from collections import namedtuple
from ripple.ledger.Args import ARGS
from ripple.util import Log
Command = namedtuple('Command', 'function help safe')
def make_command(module):
name = module.__name__.split('.')[-1].lower()
return name, Command(getattr(module, name, None) or
getattr(module, 'run_' + name),
getattr(module, 'HELP'),
getattr(module, 'SAFE', False))
class CommandList(object):
def __init__(self, *args, **kwds):
self.registry = {}
self.register(*args, **kwds)
def register(self, *modules, **kwds):
for module in modules:
name, command = make_command(module)
self.registry[name] = command
for k, v in kwds.items():
if not isinstance(v, (list, tuple)):
v = [v]
self.register_one(k, *v)
def keys(self):
return self.registry.keys()
def register_one(self, name, function, help='', safe=False):
assert name not in self.registry
self.registry[name] = Command(function, help, safe)
def _get(self, command):
command = command.lower()
c = self.registry.get(command)
if c:
return command, c
commands = [c for c in self.registry if c.startswith(command)]
if len(commands) == 1:
command = commands[0]
return command, self.registry[command]
if not commands:
raise ValueError('No such command: %s. Commands are %s.' %
(command, ', '.join(sorted(self.registry))))
if len(commands) > 1:
raise ValueError('Command %s was ambiguous: %s.' %
(command, ', '.join(commands)))
def get(self, command):
return self._get(command)[1]
def run(self, command, *args):
return self.get(command).function(*args)
def run_safe(self, command, *args):
name, cmd = self._get(command)
if not (ARGS.yes or cmd.safe):
confirm = raw_input('OK to execute "rl %s %s"? (y/N) ' %
(name, ' '.join(args)))
if not confirm.lower().startswith('y'):
Log.error('Cancelled.')
return
cmd.function(*args)
def help(self, command):
return self.get(command).help
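# Illustrative sketch (not part of the original file): commands may be
# abbreviated to any unambiguous prefix.  For example, with 'info' and
# 'print' registered, _get('pr') resolves to 'print' and _get('i') to
# 'info'; an ambiguous or unknown prefix raises ValueError.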


@@ -0,0 +1,54 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import json
"""Ripple has a proprietary format for their .cfg files, so we need a reader for
them."""
def read(lines):
sections = []
section = []
for line in lines:
line = line.strip()
if (not line) or line[0] == '#':
continue
if line.startswith('['):
if section:
sections.append(section)
section = []
section.append(line)
if section:
sections.append(section)
result = {}
for section in sections:
option = section.pop(0)
assert section, ('No value for option "%s".' % option)
assert option.startswith('[') and option.endswith(']'), (
'No option name in block "%s"' % option)
option = option[1:-1]
assert option not in result, 'Duplicate option "%s".' % option
subdict = {}
items = []
for part in section:
if '=' in part:
assert not items, 'Dictionary mixed with list.'
k, v = part.split('=', 1)
assert k not in subdict, 'Repeated dictionary entry ' + k
subdict[k] = v
else:
assert not subdict, 'List mixed with dictionary.'
if part.startswith('{'):
items.append(json.loads(part))
else:
words = part.split()
if len(words) > 1:
items.append(words)
else:
items.append(part)
if len(items) == 1:
result[option] = items[0]
else:
result[option] = items or subdict
return result


@@ -0,0 +1,12 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import sqlite3
def fetchall(database, query, kwds):
conn = sqlite3.connect(database)
try:
cursor = conn.execute(query, kwds)
return cursor.fetchall()
finally:
conn.close()


@@ -0,0 +1,46 @@
from __future__ import absolute_import, division, print_function, unicode_literals
"""Fixed point numbers."""
POSITIONS = 10
POSITIONS_SHIFT = 10 ** POSITIONS
class Decimal(object):
def __init__(self, desc='0'):
if isinstance(desc, int):
self.value = desc
return
if desc.startswith('-'):
sign = -1
desc = desc[1:]
else:
sign = 1
parts = desc.split('.')
if len(parts) == 1:
parts.append('0')
elif len(parts) > 2:
raise Exception('Too many decimals in "%s"' % desc)
number, decimal = parts
# Fix the number of positions.
decimal = (decimal + POSITIONS * '0')[:POSITIONS]
self.value = sign * int(number + decimal)
def accumulate(self, item):
if not isinstance(item, Decimal):
item = Decimal(item)
self.value += item.value
def __str__(self):
if self.value >= 0:
sign = ''
value = self.value
else:
sign = '-'
value = -self.value
number = value // POSITIONS_SHIFT
decimal = (value % POSITIONS_SHIFT) * POSITIONS_SHIFT
if decimal:
return '%s%s.%s' % (sign, number, str(decimal).rstrip('0'))
else:
return '%s%s' % (sign, number)

33
bin/ripple/util/Dict.py Normal file

@@ -0,0 +1,33 @@
from __future__ import absolute_import, division, print_function, unicode_literals
def count_all_subitems(x):
"""Count the subitems of a Python object, including the object itself."""
if isinstance(x, list):
return 1 + sum(count_all_subitems(i) for i in x)
if isinstance(x, dict):
return 1 + sum(count_all_subitems(i) for i in x.itervalues())
return 1
def prune(item, level, count_recursively=True):
def subitems(x):
i = count_all_subitems(x) - 1 if count_recursively else len(x)
return '1 subitem' if i == 1 else '%d subitems' % i
assert level >= 0
if not item:
return item
if isinstance(item, list):
if level:
return [prune(i, level - 1, count_recursively) for i in item]
else:
return '[list with %s]' % subitems(item)
if isinstance(item, dict):
if level:
return dict((k, prune(v, level - 1, count_recursively))
for k, v in item.iteritems())
else:
return '{dict with %s}' % subitems(item)
return item

7
bin/ripple/util/File.py Normal file

@@ -0,0 +1,7 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import os
def normalize(f):
f = os.path.join(*f.split('/')) # For Windows users.
return os.path.abspath(os.path.expanduser(f))


@@ -0,0 +1,56 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import gzip
import json
import os
_NONE = object()
class FileCache(object):
"""A two-level cache, which stores expensive results in memory and on disk.
"""
def __init__(self, cache_directory, creator, open=gzip.open, suffix='.gz'):
self.cache_directory = cache_directory
self.creator = creator
self.open = open
self.suffix = suffix
self.cached_data = {}
if not os.path.exists(self.cache_directory):
os.makedirs(self.cache_directory)
def get_file_data(self, name):
filename = os.path.join(self.cache_directory, str(name)) + self.suffix
if os.path.exists(filename):
return json.load(self.open(filename))
result = self.creator(name)
return result
def get_data(self, name, save_in_cache, can_create, default=None):
name = str(name)
result = self.cached_data.get(name, _NONE)
if result is _NONE:
filename = os.path.join(self.cache_directory, name) + self.suffix
if os.path.exists(filename):
result = json.load(self.open(filename)) or _NONE
if result is _NONE and can_create:
result = self.creator(name)
if save_in_cache:
json.dump(result, self.open(filename, 'w'))
return default if result is _NONE else result
def _files(self):
return os.listdir(self.cache_directory)
def cache_list(self):
for f in self._files():
if f.endswith(self.suffix):
yield f[:-len(self.suffix)]
def file_count(self):
return len(self._files())
def clear(self):
"""Clears both local files and memory."""
self.cached_data = {}
for f in self._files():
os.remove(os.path.join(self.cache_directory, f))
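# Minimal usage sketch (illustrative, not part of the file); the directory
# and creator callable below are assumptions for the example only.
#   cache = FileCache('/tmp/example-cache', creator=lambda name: {'name': name})
#   data = cache.get_data('42', save_in_cache=True, can_create=True)
#   # -> {'name': '42'}, also written to /tmp/example-cache/42.gz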


@@ -0,0 +1,82 @@
from __future__ import absolute_import, division, print_function, unicode_literals
"""A function that can be specified at the command line, with an argument."""
import importlib
import re
import tokenize
from StringIO import StringIO
MATCHER = re.compile(r'([\w.]+)(.*)')
REMAPPINGS = {
'false': False,
'true': True,
'null': None,
'False': False,
'True': True,
'None': None,
}
def eval_arguments(args):
args = args.strip()
if not args or (args == '()'):
return ()
tokens = list(tokenize.generate_tokens(StringIO(args).readline))
def remap():
for type, name, _, _, _ in tokens:
if type == tokenize.NAME and name not in REMAPPINGS:
yield tokenize.STRING, '"%s"' % name
else:
yield type, name
untok = tokenize.untokenize(remap())
if untok[1:-1].strip():
untok = untok[:-1] + ',)' # Force a tuple.
try:
return eval(untok, REMAPPINGS)
except Exception as e:
raise ValueError('Couldn\'t evaluate expression "%s" (became "%s"), '
'error "%s"' % (args, untok, str(e)))
class Function(object):
def __init__(self, desc='', default_path=''):
self.desc = desc.strip()
if not self.desc:
# Make an empty function that does nothing.
self.args = ()
self.function = lambda *args, **kwds: None
return
m = MATCHER.match(desc)
if not m:
raise ValueError('"%s" is not a function' % desc)
self.function, self.args = (g.strip() for g in m.groups())
self.args = eval_arguments(self.args)
if '.' not in self.function:
if default_path and not default_path.endswith('.'):
default_path += '.'
self.function = default_path + self.function
p, m = self.function.rsplit('.', 1)
mod = importlib.import_module(p)
# Errors in modules are swallowed here.
# except:
# raise ValueError('Can\'t find Python module "%s"' % p)
try:
self.function = getattr(mod, m)
except:
raise ValueError('No function "%s" in module "%s"' % (m, p))
def __str__(self):
return self.desc
def __call__(self, *args, **kwds):
return self.function(*(args + self.args), **kwds)
def __eq__(self, other):
return self.function == other.function and self.args == other.args
def __ne__(self, other):
return not (self == other)

21
bin/ripple/util/Log.py Normal file

@@ -0,0 +1,21 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import sys
VERBOSE = False
def out(*args, **kwds):
kwds.get('print', print)(*args, file=sys.stdout, **kwds)
def info(*args, **kwds):
if VERBOSE:
out(*args, **kwds)
def warn(*args, **kwds):
out('WARNING:', *args, **kwds)
def error(*args, **kwds):
out('ERROR:', *args, **kwds)
def fatal(*args, **kwds):
raise Exception('FATAL: ' + ' '.join(str(a) for a in args))


@@ -0,0 +1,42 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from functools import wraps
import json
SEPARATORS = ',', ': '
INDENT = ' '
def pretty_print(item):
return json.dumps(item,
sort_keys=True,
indent=len(INDENT),
separators=SEPARATORS)
class Streamer(object):
def __init__(self, printer=print):
# No automatic spacing or carriage returns.
self.printer = lambda *args: printer(*args, end='', sep='')
self.first_key = True
def add(self, key, value):
if self.first_key:
self.first_key = False
self.printer('{')
else:
self.printer(',')
self.printer('\n', INDENT, '"', str(key), '": ')
pp = pretty_print(value).splitlines()
if len(pp) > 1:
for i, line in enumerate(pp):
if i > 0:
self.printer('\n', INDENT)
self.printer(line)
else:
self.printer(pp[0])
def finish(self):
if not self.first_key:
self.first_key = True
self.printer('\n}')

53
bin/ripple/util/Range.py Normal file

@@ -0,0 +1,53 @@
from __future__ import absolute_import, division, print_function, unicode_literals
"""
Convert a discontiguous range of integers to and from a human-friendly form.
Real world example is the server_info.complete_ledgers:
8252899-8403772,8403824,8403827-8403830,8403834-8403876
"""
def from_string(desc, **aliases):
if not desc:
return []
result = set()
for d in desc.split(','):
nums = [int(aliases.get(x) or x) for x in d.split('-')]
if len(nums) == 1:
result.add(nums[0])
elif len(nums) == 2:
result.update(range(nums[0], nums[1] + 1))
return result
def to_string(r):
groups = []
next_group = []
for i, x in enumerate(sorted(r)):
if next_group and (x - next_group[-1]) > 1:
groups.append(next_group)
next_group = []
next_group.append(x)
if next_group:
groups.append(next_group)
def display(g):
if len(g) == 1:
return str(g[0])
else:
return '%s-%s' % (g[0], g[-1])
return ','.join(display(g) for g in groups)
def is_range(desc, *names):
try:
from_string(desc, **dict((n, 1) for n in names))
return True;
except ValueError:
return False
def join_ranges(*ranges, **aliases):
result = set()
for r in ranges:
result.update(from_string(r, **aliases))
return result

46
bin/ripple/util/Search.py Normal file

@@ -0,0 +1,46 @@
from __future__ import absolute_import, division, print_function, unicode_literals
FIRST, LAST = range(2)
def binary_search(begin, end, condition, location=FIRST):
"""Search for an i in the interval [begin, end] where condition(i) is true.
If location is FIRST, return the first such i.
If location is LAST, return the last such i.
If there is no such i, then throw an exception.
"""
b = condition(begin)
e = condition(end)
if b and e:
return begin if location == FIRST else end
if not (b or e):
raise ValueError('%d/%d' % (begin, end))
if b and location is FIRST:
return begin
if e and location is LAST:
return end
width = end - begin + 1
if width == 1:
if not b:
raise ValueError('%d/%d' % (begin, end))
return begin
if width == 2:
return begin if b else end
mid = (begin + end) // 2
m = condition(mid)
if m == b:
return binary_search(mid, end, condition, location)
else:
return binary_search(begin, mid, condition, location)
def linear_search(items, condition):
"""Yields each i in the interval [begin, end] where condition(i) is true.
"""
for i in items:
if condition(i):
yield i

21
bin/ripple/util/Time.py Normal file

@@ -0,0 +1,21 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import datetime
# Format for human-readable dates in rippled
_DATE_FORMAT = '%Y-%b-%d'
_TIME_FORMAT = '%H:%M:%S'
_DATETIME_FORMAT = '%s %s' % (_DATE_FORMAT, _TIME_FORMAT)
_FORMATS = _DATE_FORMAT, _TIME_FORMAT, _DATETIME_FORMAT
def parse_datetime(desc):
for fmt in _FORMATS:
try:
return datetime.datetime.strptime(desc, fmt)
except:
pass
raise ValueError("Can't understand date '%s'." % date)
def format_datetime(dt):
return dt.strftime(_DATETIME_FORMAT)



@@ -0,0 +1,12 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util.Cache import NamedCache
from unittest import TestCase
class test_Cache(TestCase):
def setUp(self):
self.cache = NamedCache()
def test_trivial(self):
pass


@@ -0,0 +1,163 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util import ConfigFile
from unittest import TestCase
class test_ConfigFile(TestCase):
def test_trivial(self):
self.assertEquals(ConfigFile.read(''), {})
def test_full(self):
self.assertEquals(ConfigFile.read(FULL.splitlines()), RESULT)
RESULT = {
'websocket_port': '6206',
'database_path': '/development/alpha/db',
'sntp_servers':
['time.windows.com', 'time.apple.com', 'time.nist.gov', 'pool.ntp.org'],
'validation_seed': 'sh1T8T9yGuV7Jb6DPhqSzdU2s5LcV',
'node_size': 'medium',
'rpc_startup': {
'command': 'log_level',
'severity': 'debug'},
'ips': ['r.ripple.com', '51235'],
'node_db': {
'file_size_mult': '2',
'file_size_mb': '8',
'cache_mb': '256',
'path': '/development/alpha/db/rocksdb',
'open_files': '2000',
'type': 'RocksDB',
'filter_bits': '12'},
'peer_port': '53235',
'ledger_history': 'full',
'rpc_ip': '127.0.0.1',
'websocket_public_ip': '0.0.0.0',
'rpc_allow_remote': '0',
'validators':
[['n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7', 'RL1'],
['n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj', 'RL2'],
['n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C', 'RL3'],
['n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS', 'RL4'],
['n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA', 'RL5']],
'debug_logfile': '/development/alpha/debug.log',
'websocket_public_port': '5206',
'peer_ip': '0.0.0.0',
'rpc_port': '5205',
'validation_quorum': '3',
'websocket_ip': '127.0.0.1'}
FULL = """
[ledger_history]
full
# Allow other peers to connect to this server.
#
[peer_ip]
0.0.0.0
[peer_port]
53235
# Allow untrusted clients to connect to this server.
#
[websocket_public_ip]
0.0.0.0
[websocket_public_port]
5206
# Provide trusted websocket ADMIN access to the localhost.
#
[websocket_ip]
127.0.0.1
[websocket_port]
6206
# Provide trusted json-rpc ADMIN access to the localhost.
#
[rpc_ip]
127.0.0.1
[rpc_port]
5205
[rpc_allow_remote]
0
[node_size]
medium
# This is primary persistent datastore for rippled. This includes transaction
# metadata, account states, and ledger headers. Helpful information can be
# found here: https://ripple.com/wiki/NodeBackEnd
[node_db]
type=RocksDB
path=/development/alpha/db/rocksdb
open_files=2000
filter_bits=12
cache_mb=256
file_size_mb=8
file_size_mult=2
[database_path]
/development/alpha/db
# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
/development/alpha/debug.log
[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org
# Where to find some other servers speaking the Ripple protocol.
#
[ips]
r.ripple.com 51235
# The latest validators can be obtained from
# https://ripple.com/ripple.txt
#
[validators]
n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7 RL1
n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj RL2
n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C RL3
n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS RL4
n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA RL5
# Ditto.
[validation_quorum]
3
[validation_seed]
sh1T8T9yGuV7Jb6DPhqSzdU2s5LcV
# Turn down default logging to save disk space in the long run.
# Valid values here are trace, debug, info, warning, error, and fatal
[rpc_startup]
{ "command": "log_level", "severity": "debug" }
# Configure SSL for WebSockets. Not enabled by default because not everybody
# has an SSL cert on their server, but if you uncomment the following lines and
# set the path to the SSL certificate and private key the WebSockets protocol
# will be protected by SSL/TLS.
#[websocket_secure]
#1
#[websocket_ssl_cert]
#/etc/ssl/certs/server.crt
#[websocket_ssl_key]
#/etc/ssl/private/server.key
# Defaults to 0 ("no") so that you can use self-signed SSL certificates for
# development, or internally.
#[ssl_verify]
#0
""".strip()


@@ -0,0 +1,20 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util.Decimal import Decimal
from unittest import TestCase
class test_Decimal(TestCase):
def test_construct(self):
self.assertEquals(str(Decimal('')), '0')
self.assertEquals(str(Decimal('0')), '0')
self.assertEquals(str(Decimal('0.2')), '0.2')
self.assertEquals(str(Decimal('-0.2')), '-0.2')
self.assertEquals(str(Decimal('3.1416')), '3.1416')
def test_accumulate(self):
d = Decimal()
d.accumulate('0.5')
d.accumulate('3.1416')
d.accumulate('-23.34234')
self.assertEquals(str(d), '-19.70074')


@@ -0,0 +1,56 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util import Dict
from unittest import TestCase
class test_Dict(TestCase):
def test_count_all_subitems(self):
self.assertEquals(Dict.count_all_subitems({}), 1)
self.assertEquals(Dict.count_all_subitems({'a': {}}), 2)
self.assertEquals(Dict.count_all_subitems([1]), 2)
self.assertEquals(Dict.count_all_subitems([1, 2]), 3)
self.assertEquals(Dict.count_all_subitems([1, {2: 3}]), 4)
self.assertEquals(Dict.count_all_subitems([1, {2: [3]}]), 5)
self.assertEquals(Dict.count_all_subitems([1, {2: [3, 4]}]), 6)
def test_prune(self):
self.assertEquals(Dict.prune({}, 0), {})
self.assertEquals(Dict.prune({}, 1), {})
self.assertEquals(Dict.prune({1: 2}, 0), '{dict with 1 subitem}')
self.assertEquals(Dict.prune({1: 2}, 1), {1: 2})
self.assertEquals(Dict.prune({1: 2}, 2), {1: 2})
self.assertEquals(Dict.prune([1, 2, 3], 0), '[list with 3 subitems]')
self.assertEquals(Dict.prune([1, 2, 3], 1), [1, 2, 3])
self.assertEquals(Dict.prune([{1: [2, 3]}], 0),
'[list with 4 subitems]')
self.assertEquals(Dict.prune([{1: [2, 3]}], 1),
['{dict with 3 subitems}'])
self.assertEquals(Dict.prune([{1: [2, 3]}], 2),
[{1: u'[list with 2 subitems]'}])
self.assertEquals(Dict.prune([{1: [2, 3]}], 3),
[{1: [2, 3]}])
def test_prune_nosub(self):
self.assertEquals(Dict.prune({}, 0, False), {})
self.assertEquals(Dict.prune({}, 1, False), {})
self.assertEquals(Dict.prune({1: 2}, 0, False), '{dict with 1 subitem}')
self.assertEquals(Dict.prune({1: 2}, 1, False), {1: 2})
self.assertEquals(Dict.prune({1: 2}, 2, False), {1: 2})
self.assertEquals(Dict.prune([1, 2, 3], 0, False),
'[list with 3 subitems]')
self.assertEquals(Dict.prune([1, 2, 3], 1, False), [1, 2, 3])
self.assertEquals(Dict.prune([{1: [2, 3]}], 0, False),
'[list with 1 subitem]')
self.assertEquals(Dict.prune([{1: [2, 3]}], 1, False),
['{dict with 1 subitem}'])
self.assertEquals(Dict.prune([{1: [2, 3]}], 2, False),
[{1: u'[list with 2 subitems]'}])
self.assertEquals(Dict.prune([{1: [2, 3]}], 3, False),
[{1: [2, 3]}])


@@ -0,0 +1,37 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util.Function import Function, MATCHER
from unittest import TestCase
def FN(*args, **kwds):
return args, kwds
class test_Function(TestCase):
def match_test(self, item, *results):
self.assertEquals(MATCHER.match(item).groups(), results)
def test_simple(self):
self.match_test('function', 'function', '')
self.match_test('f(x)', 'f', '(x)')
def test_empty_function(self):
self.assertEquals(Function()(), None)
def test_empty_args(self):
f = Function('ripple.util.test_Function.FN()')
self.assertEquals(f(), ((), {}))
def test_function(self):
f = Function('ripple.util.test_Function.FN(True, {1: 2}, None)')
self.assertEquals(f(), ((True, {1: 2}, None), {}))
self.assertEquals(f('hello', foo='bar'),
(('hello', True, {1: 2}, None), {'foo':'bar'}))
self.assertEquals(
f, Function('ripple.util.test_Function.FN(true, {1: 2}, null)'))
def test_quoting(self):
f = Function('ripple.util.test_Function.FN(testing)')
self.assertEquals(f(), (('testing',), {}))
f = Function('ripple.util.test_Function.FN(testing, true, false, null)')
self.assertEquals(f(), (('testing', True, False, None), {}))


@@ -0,0 +1,56 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util import PrettyPrint
from unittest import TestCase
class test_PrettyPrint(TestCase):
def setUp(self):
self._results = []
self.printer = PrettyPrint.Streamer(printer=self.printer)
def printer(self, *args, **kwds):
self._results.extend(args)
def run_test(self, expected, *args):
for i in range(0, len(args), 2):
self.printer.add(args[i], args[i + 1])
self.printer.finish()
self.assertEquals(''.join(self._results), expected)
def test_simple_printer(self):
self.run_test(
'{\n "foo": "bar"\n}',
'foo', 'bar')
def test_multiple_lines(self):
self.run_test(
'{\n "foo": "bar",\n "baz": 5\n}',
'foo', 'bar', 'baz', 5)
def test_multiple_lines_with_dict(self):
self.run_test(
"""
{
"foo": {
"bar": 1,
"baz": true
},
"bang": "bing"
}
""".strip(), 'foo', {'bar': 1, 'baz': True}, 'bang', 'bing')
def test_multiple_lines_with_list(self):
self.run_test(
"""
{
"foo": [
"bar",
1
],
"baz": [
23,
42
]
}
""".strip(), 'foo', ['bar', 1], 'baz', [23, 42])


@@ -0,0 +1,28 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util import Range
from unittest import TestCase
class test_Range(TestCase):
def round_trip(self, s, *items):
self.assertEquals(Range.from_string(s), set(items))
self.assertEquals(Range.to_string(items), s)
def test_complete(self):
self.round_trip('10,19', 10, 19)
self.round_trip('10', 10)
self.round_trip('10-12', 10, 11, 12)
self.round_trip('10,19,42-45', 10, 19, 42, 43, 44, 45)
def test_names(self):
self.assertEquals(
Range.from_string('first,last,current', first=1, last=3, current=5),
set([1, 3, 5]))
def test_is_range(self):
self.assertTrue(Range.is_range(''))
self.assertTrue(Range.is_range('10'))
self.assertTrue(Range.is_range('10,12'))
self.assertFalse(Range.is_range('10,12,fred'))
self.assertTrue(Range.is_range('10,12,fred', 'fred'))


@@ -0,0 +1,44 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util.Search import binary_search, linear_search, FIRST, LAST
from unittest import TestCase
class test_Search(TestCase):
def condition(self, i):
return 10 <= i < 15;
def test_linear_full(self):
self.assertEquals(list(linear_search(range(21), self.condition)),
[10, 11, 12, 13, 14])
def test_linear_partial(self):
self.assertEquals(list(linear_search(range(8, 14), self.condition)),
[10, 11, 12, 13])
self.assertEquals(list(linear_search(range(11, 14), self.condition)),
[11, 12, 13])
self.assertEquals(list(linear_search(range(12, 18), self.condition)),
[12, 13, 14])
def test_linear_empty(self):
self.assertEquals(list(linear_search(range(1, 4), self.condition)), [])
def test_binary_first(self):
self.assertEquals(binary_search(0, 14, self.condition, FIRST), 10)
self.assertEquals(binary_search(10, 19, self.condition, FIRST), 10)
self.assertEquals(binary_search(14, 14, self.condition, FIRST), 14)
self.assertEquals(binary_search(14, 15, self.condition, FIRST), 14)
self.assertEquals(binary_search(13, 15, self.condition, FIRST), 13)
def test_binary_last(self):
self.assertEquals(binary_search(10, 20, self.condition, LAST), 14)
self.assertEquals(binary_search(0, 14, self.condition, LAST), 14)
self.assertEquals(binary_search(14, 14, self.condition, LAST), 14)
self.assertEquals(binary_search(14, 15, self.condition, LAST), 14)
self.assertEquals(binary_search(13, 15, self.condition, LAST), 14)
def test_binary_throws(self):
self.assertRaises(
ValueError, binary_search, 0, 20, self.condition, LAST)
self.assertRaises(
ValueError, binary_search, 0, 20, self.condition, FIRST)

747
bin/six.py Normal file

@@ -0,0 +1,747 @@
"""Utilities for writing code that runs on Python 2 and 3"""
# Copyright (c) 2010-2014 Benjamin Peterson
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import functools
import operator
import sys
import types
__author__ = "Benjamin Peterson <benjamin@python.org>"
__version__ = "1.7.3"
# Useful for very coarse version differentiation.
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
if PY3:
string_types = str,
integer_types = int,
class_types = type,
text_type = str
binary_type = bytes
MAXSIZE = sys.maxsize
else:
string_types = basestring,
integer_types = (int, long)
class_types = (type, types.ClassType)
text_type = unicode
binary_type = str
if sys.platform.startswith("java"):
# Jython always uses 32 bits.
MAXSIZE = int((1 << 31) - 1)
else:
# It's possible to have sizeof(long) != sizeof(Py_ssize_t).
class X(object):
def __len__(self):
return 1 << 31
try:
len(X())
except OverflowError:
# 32-bit
MAXSIZE = int((1 << 31) - 1)
else:
# 64-bit
MAXSIZE = int((1 << 63) - 1)
del X
def _add_doc(func, doc):
"""Add documentation to a function."""
func.__doc__ = doc
def _import_module(name):
"""Import module, returning the module after the last dot."""
__import__(name)
return sys.modules[name]
class _LazyDescr(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, tp):
result = self._resolve()
setattr(obj, self.name, result) # Invokes __set__.
# This is a bit ugly, but it avoids running this again.
delattr(obj.__class__, self.name)
return result
class MovedModule(_LazyDescr):
def __init__(self, name, old, new=None):
super(MovedModule, self).__init__(name)
if PY3:
if new is None:
new = name
self.mod = new
else:
self.mod = old
def _resolve(self):
return _import_module(self.mod)
def __getattr__(self, attr):
_module = self._resolve()
value = getattr(_module, attr)
setattr(self, attr, value)
return value
class _LazyModule(types.ModuleType):
def __init__(self, name):
super(_LazyModule, self).__init__(name)
self.__doc__ = self.__class__.__doc__
def __dir__(self):
attrs = ["__doc__", "__name__"]
attrs += [attr.name for attr in self._moved_attributes]
return attrs
# Subclasses should override this
_moved_attributes = []
class MovedAttribute(_LazyDescr):
def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None):
super(MovedAttribute, self).__init__(name)
if PY3:
if new_mod is None:
new_mod = name
self.mod = new_mod
if new_attr is None:
if old_attr is None:
new_attr = name
else:
new_attr = old_attr
self.attr = new_attr
else:
self.mod = old_mod
if old_attr is None:
old_attr = name
self.attr = old_attr
def _resolve(self):
module = _import_module(self.mod)
return getattr(module, self.attr)
class _SixMetaPathImporter(object):
"""
A meta path importer to import six.moves and its submodules.
This class implements a PEP302 finder and loader. It should be compatible
with Python 2.5 and all existing versions of Python3
"""
def __init__(self, six_module_name):
self.name = six_module_name
self.known_modules = {}
def _add_module(self, mod, *fullnames):
for fullname in fullnames:
self.known_modules[self.name + "." + fullname] = mod
def _get_module(self, fullname):
return self.known_modules[self.name + "." + fullname]
def find_module(self, fullname, path=None):
if fullname in self.known_modules:
return self
return None
def __get_module(self, fullname):
try:
return self.known_modules[fullname]
except KeyError:
raise ImportError("This loader does not know module " + fullname)
def load_module(self, fullname):
try:
# in case of a reload
return sys.modules[fullname]
except KeyError:
pass
mod = self.__get_module(fullname)
if isinstance(mod, MovedModule):
mod = mod._resolve()
else:
mod.__loader__ = self
sys.modules[fullname] = mod
return mod
def is_package(self, fullname):
"""
Return true, if the named module is a package.
We need this method to get correct spec objects with
Python 3.4 (see PEP451)
"""
return hasattr(self.__get_module(fullname), "__path__")
def get_code(self, fullname):
"""Return None
Required, if is_package is implemented"""
self.__get_module(fullname) # eventually raises ImportError
return None
get_source = get_code # same as get_code
_importer = _SixMetaPathImporter(__name__)
class _MovedItems(_LazyModule):
"""Lazy loading of moved objects"""
__path__ = [] # mark as package
_moved_attributes = [
MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"),
MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"),
MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"),
MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"),
MovedAttribute("map", "itertools", "builtins", "imap", "map"),
MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("reload_module", "__builtin__", "imp", "reload"),
MovedAttribute("reduce", "__builtin__", "functools"),
MovedAttribute("StringIO", "StringIO", "io"),
MovedAttribute("UserDict", "UserDict", "collections"),
MovedAttribute("UserList", "UserList", "collections"),
MovedAttribute("UserString", "UserString", "collections"),
MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("zip", "itertools", "builtins", "izip", "zip"),
MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"),
MovedModule("builtins", "__builtin__"),
MovedModule("configparser", "ConfigParser"),
MovedModule("copyreg", "copy_reg"),
MovedModule("dbm_gnu", "gdbm", "dbm.gnu"),
MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"),
MovedModule("http_cookiejar", "cookielib", "http.cookiejar"),
MovedModule("http_cookies", "Cookie", "http.cookies"),
MovedModule("html_entities", "htmlentitydefs", "html.entities"),
MovedModule("html_parser", "HTMLParser", "html.parser"),
MovedModule("http_client", "httplib", "http.client"),
MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"),
MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"),
MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"),
MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"),
MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"),
MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"),
MovedModule("cPickle", "cPickle", "pickle"),
MovedModule("queue", "Queue"),
MovedModule("reprlib", "repr"),
MovedModule("socketserver", "SocketServer"),
MovedModule("_thread", "thread", "_thread"),
MovedModule("tkinter", "Tkinter"),
MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"),
MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"),
MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"),
MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"),
MovedModule("tkinter_tix", "Tix", "tkinter.tix"),
MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"),
MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"),
MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"),
MovedModule("tkinter_colorchooser", "tkColorChooser",
"tkinter.colorchooser"),
MovedModule("tkinter_commondialog", "tkCommonDialog",
"tkinter.commondialog"),
MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"),
MovedModule("tkinter_font", "tkFont", "tkinter.font"),
MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"),
MovedModule("tkinter_tksimpledialog", "tkSimpleDialog",
"tkinter.simpledialog"),
MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"),
MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"),
MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"),
MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"),
MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"),
MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"),
MovedModule("winreg", "_winreg"),
]
for attr in _moved_attributes:
setattr(_MovedItems, attr.name, attr)
if isinstance(attr, MovedModule):
_importer._add_module(attr, "moves." + attr.name)
del attr
_MovedItems._moved_attributes = _moved_attributes
moves = _MovedItems(__name__ + ".moves")
_importer._add_module(moves, "moves")
class Module_six_moves_urllib_parse(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_parse"""
_urllib_parse_moved_attributes = [
MovedAttribute("ParseResult", "urlparse", "urllib.parse"),
MovedAttribute("SplitResult", "urlparse", "urllib.parse"),
MovedAttribute("parse_qs", "urlparse", "urllib.parse"),
MovedAttribute("parse_qsl", "urlparse", "urllib.parse"),
MovedAttribute("urldefrag", "urlparse", "urllib.parse"),
MovedAttribute("urljoin", "urlparse", "urllib.parse"),
MovedAttribute("urlparse", "urlparse", "urllib.parse"),
MovedAttribute("urlsplit", "urlparse", "urllib.parse"),
MovedAttribute("urlunparse", "urlparse", "urllib.parse"),
MovedAttribute("urlunsplit", "urlparse", "urllib.parse"),
MovedAttribute("quote", "urllib", "urllib.parse"),
MovedAttribute("quote_plus", "urllib", "urllib.parse"),
MovedAttribute("unquote", "urllib", "urllib.parse"),
MovedAttribute("unquote_plus", "urllib", "urllib.parse"),
MovedAttribute("urlencode", "urllib", "urllib.parse"),
MovedAttribute("splitquery", "urllib", "urllib.parse"),
]
for attr in _urllib_parse_moved_attributes:
setattr(Module_six_moves_urllib_parse, attr.name, attr)
del attr
Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes
_importer._add_module(Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"),
"moves.urllib_parse", "moves.urllib.parse")
class Module_six_moves_urllib_error(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_error"""
_urllib_error_moved_attributes = [
MovedAttribute("URLError", "urllib2", "urllib.error"),
MovedAttribute("HTTPError", "urllib2", "urllib.error"),
MovedAttribute("ContentTooShortError", "urllib", "urllib.error"),
]
for attr in _urllib_error_moved_attributes:
setattr(Module_six_moves_urllib_error, attr.name, attr)
del attr
Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes
_importer._add_module(Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"),
"moves.urllib_error", "moves.urllib.error")
class Module_six_moves_urllib_request(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_request"""
_urllib_request_moved_attributes = [
MovedAttribute("urlopen", "urllib2", "urllib.request"),
MovedAttribute("install_opener", "urllib2", "urllib.request"),
MovedAttribute("build_opener", "urllib2", "urllib.request"),
MovedAttribute("pathname2url", "urllib", "urllib.request"),
MovedAttribute("url2pathname", "urllib", "urllib.request"),
MovedAttribute("getproxies", "urllib", "urllib.request"),
MovedAttribute("Request", "urllib2", "urllib.request"),
MovedAttribute("OpenerDirector", "urllib2", "urllib.request"),
MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"),
MovedAttribute("ProxyHandler", "urllib2", "urllib.request"),
MovedAttribute("BaseHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"),
MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"),
MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"),
MovedAttribute("FileHandler", "urllib2", "urllib.request"),
MovedAttribute("FTPHandler", "urllib2", "urllib.request"),
MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"),
MovedAttribute("UnknownHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"),
MovedAttribute("urlretrieve", "urllib", "urllib.request"),
MovedAttribute("urlcleanup", "urllib", "urllib.request"),
MovedAttribute("URLopener", "urllib", "urllib.request"),
MovedAttribute("FancyURLopener", "urllib", "urllib.request"),
MovedAttribute("proxy_bypass", "urllib", "urllib.request"),
]
for attr in _urllib_request_moved_attributes:
setattr(Module_six_moves_urllib_request, attr.name, attr)
del attr
Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes
_importer._add_module(Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"),
"moves.urllib_request", "moves.urllib.request")
class Module_six_moves_urllib_response(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_response"""
_urllib_response_moved_attributes = [
MovedAttribute("addbase", "urllib", "urllib.response"),
MovedAttribute("addclosehook", "urllib", "urllib.response"),
MovedAttribute("addinfo", "urllib", "urllib.response"),
MovedAttribute("addinfourl", "urllib", "urllib.response"),
]
for attr in _urllib_response_moved_attributes:
setattr(Module_six_moves_urllib_response, attr.name, attr)
del attr
Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes
_importer._add_module(Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"),
"moves.urllib_response", "moves.urllib.response")
class Module_six_moves_urllib_robotparser(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_robotparser"""
_urllib_robotparser_moved_attributes = [
MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"),
]
for attr in _urllib_robotparser_moved_attributes:
setattr(Module_six_moves_urllib_robotparser, attr.name, attr)
del attr
Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes
_importer._add_module(Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"),
"moves.urllib_robotparser", "moves.urllib.robotparser")
class Module_six_moves_urllib(types.ModuleType):
"""Create a six.moves.urllib namespace that resembles the Python 3 namespace"""
__path__ = [] # mark as package
parse = _importer._get_module("moves.urllib_parse")
error = _importer._get_module("moves.urllib_error")
request = _importer._get_module("moves.urllib_request")
response = _importer._get_module("moves.urllib_response")
robotparser = _importer._get_module("moves.urllib_robotparser")
def __dir__(self):
return ['parse', 'error', 'request', 'response', 'robotparser']
_importer._add_module(Module_six_moves_urllib(__name__ + ".moves.urllib"),
"moves.urllib")
def add_move(move):
"""Add an item to six.moves."""
setattr(_MovedItems, move.name, move)
def remove_move(name):
"""Remove item from six.moves."""
try:
delattr(_MovedItems, name)
except AttributeError:
try:
del moves.__dict__[name]
except KeyError:
raise AttributeError("no such move, %r" % (name,))
if PY3:
_meth_func = "__func__"
_meth_self = "__self__"
_func_closure = "__closure__"
_func_code = "__code__"
_func_defaults = "__defaults__"
_func_globals = "__globals__"
else:
_meth_func = "im_func"
_meth_self = "im_self"
_func_closure = "func_closure"
_func_code = "func_code"
_func_defaults = "func_defaults"
_func_globals = "func_globals"
try:
advance_iterator = next
except NameError:
def advance_iterator(it):
return it.next()
next = advance_iterator
try:
callable = callable
except NameError:
def callable(obj):
return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)
if PY3:
def get_unbound_function(unbound):
return unbound
create_bound_method = types.MethodType
Iterator = object
else:
def get_unbound_function(unbound):
return unbound.im_func
def create_bound_method(func, obj):
return types.MethodType(func, obj, obj.__class__)
class Iterator(object):
def next(self):
return type(self).__next__(self)
callable = callable
_add_doc(get_unbound_function,
"""Get the function out of a possibly unbound function""")
get_method_function = operator.attrgetter(_meth_func)
get_method_self = operator.attrgetter(_meth_self)
get_function_closure = operator.attrgetter(_func_closure)
get_function_code = operator.attrgetter(_func_code)
get_function_defaults = operator.attrgetter(_func_defaults)
get_function_globals = operator.attrgetter(_func_globals)
if PY3:
def iterkeys(d, **kw):
return iter(d.keys(**kw))
def itervalues(d, **kw):
return iter(d.values(**kw))
def iteritems(d, **kw):
return iter(d.items(**kw))
def iterlists(d, **kw):
return iter(d.lists(**kw))
else:
def iterkeys(d, **kw):
return iter(d.iterkeys(**kw))
def itervalues(d, **kw):
return iter(d.itervalues(**kw))
def iteritems(d, **kw):
return iter(d.iteritems(**kw))
def iterlists(d, **kw):
return iter(d.iterlists(**kw))
_add_doc(iterkeys, "Return an iterator over the keys of a dictionary.")
_add_doc(itervalues, "Return an iterator over the values of a dictionary.")
_add_doc(iteritems,
"Return an iterator over the (key, value) pairs of a dictionary.")
_add_doc(iterlists,
"Return an iterator over the (key, [values]) pairs of a dictionary.")
if PY3:
def b(s):
return s.encode("latin-1")
def u(s):
return s
unichr = chr
if sys.version_info[1] <= 1:
def int2byte(i):
return bytes((i,))
else:
# This is about 2x faster than the implementation above on 3.2+
int2byte = operator.methodcaller("to_bytes", 1, "big")
byte2int = operator.itemgetter(0)
indexbytes = operator.getitem
iterbytes = iter
import io
StringIO = io.StringIO
BytesIO = io.BytesIO
else:
def b(s):
return s
# Workaround for standalone backslash
def u(s):
return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape")
unichr = unichr
int2byte = chr
def byte2int(bs):
return ord(bs[0])
def indexbytes(buf, i):
return ord(buf[i])
def iterbytes(buf):
return (ord(byte) for byte in buf)
import StringIO
StringIO = BytesIO = StringIO.StringIO
_add_doc(b, """Byte literal""")
_add_doc(u, """Text literal""")
if PY3:
exec_ = getattr(moves.builtins, "exec")
def reraise(tp, value, tb=None):
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
raise value
else:
def exec_(_code_, _globs_=None, _locs_=None):
"""Execute code in a namespace."""
if _globs_ is None:
frame = sys._getframe(1)
_globs_ = frame.f_globals
if _locs_ is None:
_locs_ = frame.f_locals
del frame
elif _locs_ is None:
_locs_ = _globs_
exec("""exec _code_ in _globs_, _locs_""")
exec_("""def reraise(tp, value, tb=None):
raise tp, value, tb
""")
print_ = getattr(moves.builtins, "print", None)
if print_ is None:
def print_(*args, **kwargs):
"""The new-style print function for Python 2.4 and 2.5."""
fp = kwargs.pop("file", sys.stdout)
if fp is None:
return
def write(data):
if not isinstance(data, basestring):
data = str(data)
# If the file has an encoding, encode unicode with it.
if (isinstance(fp, file) and
isinstance(data, unicode) and
fp.encoding is not None):
errors = getattr(fp, "errors", None)
if errors is None:
errors = "strict"
data = data.encode(fp.encoding, errors)
fp.write(data)
want_unicode = False
sep = kwargs.pop("sep", None)
if sep is not None:
if isinstance(sep, unicode):
want_unicode = True
elif not isinstance(sep, str):
raise TypeError("sep must be None or a string")
end = kwargs.pop("end", None)
if end is not None:
if isinstance(end, unicode):
want_unicode = True
elif not isinstance(end, str):
raise TypeError("end must be None or a string")
if kwargs:
raise TypeError("invalid keyword arguments to print()")
if not want_unicode:
for arg in args:
if isinstance(arg, unicode):
want_unicode = True
break
if want_unicode:
newline = unicode("\n")
space = unicode(" ")
else:
newline = "\n"
space = " "
if sep is None:
sep = space
if end is None:
end = newline
for i, arg in enumerate(args):
if i:
write(sep)
write(arg)
write(end)
_add_doc(reraise, """Reraise an exception.""")
if sys.version_info[0:2] < (3, 4):
def wraps(wrapped):
def wrapper(f):
f = functools.wraps(wrapped)(f)
f.__wrapped__ = wrapped
return f
return wrapper
else:
wraps = functools.wraps
def with_metaclass(meta, *bases):
"""Create a base class with a metaclass."""
# This requires a bit of explanation: the basic idea is to make a dummy
# metaclass for one level of class instantiation that replaces itself with
# the actual metaclass.
class metaclass(meta):
def __new__(cls, name, this_bases, d):
return meta(name, bases, d)
return type.__new__(metaclass, 'temporary_class', (), {})
def add_metaclass(metaclass):
"""Class decorator for creating a class with a metaclass."""
def wrapper(cls):
orig_vars = cls.__dict__.copy()
orig_vars.pop('__dict__', None)
orig_vars.pop('__weakref__', None)
slots = orig_vars.get('__slots__')
if slots is not None:
if isinstance(slots, str):
slots = [slots]
for slots_var in slots:
orig_vars.pop(slots_var)
return metaclass(cls.__name__, cls.__bases__, orig_vars)
return wrapper
# Complete the moves implementation.
# This code is at the end of this module to speed up module loading.
# Turn this module into a package.
__path__ = [] # required for PEP 302 and PEP 451
__package__ = __name__ # see PEP 366 @ReservedAssignment
if globals().get("__spec__") is not None:
__spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable
# Remove other six meta path importers, since they cause problems. This can
# happen if six is removed from sys.modules and then reloaded. (Setuptools does
# this for some reason.)
if sys.meta_path:
for i, importer in enumerate(sys.meta_path):
# Here's some real nastiness: Another "instance" of the six module might
# be floating around. Therefore, we can't use isinstance() to check for
# the six meta path importer, since the other six instance will have
# inserted an importer with different class.
if (type(importer).__name__ == "_SixMetaPathImporter" and
importer.name == __name__):
del sys.meta_path[i]
break
del i, importer
# Finally, add the importer to the meta path import hook.
sys.meta_path.append(_importer)

133
bin/stop-test.js Normal file
View File

@@ -0,0 +1,133 @@
/* -------------------------------- REQUIRES -------------------------------- */
var child = require("child_process");
var assert = require("assert");
/* --------------------------------- CONFIG --------------------------------- */
if (process.argv[2] == null) {
[
'Usage: ',
'',
' `node bin/stop-test.js i,j [rippled_path] [rippled_conf]`',
'',
' Launch rippled and stop it after n seconds for all n in [i, j)',
' For all even values of n launch rippled with `--fg`',
' For values of n where n % 3 == 0 launch rippled with `--net`\n',
'Examples: ',
'',
' $ node bin/stop-test.js 5,10',
(' $ node bin/stop-test.js 1,4 ' +
'build/clang.debug/rippled $HOME/.confs/rippled.cfg')
]
.forEach(function(l){console.log(l)});
process.exit();
} else {
var testRange = process.argv[2].split(',').map(Number);
var rippledPath = process.argv[3] || 'build/rippled'
var rippledConf = process.argv[4] || 'rippled.cfg'
}
var options = {
env: process.env,
stdio: 'ignore' // we could dump the child io when it fails abnormally
};
// default args
var conf_args = ['--conf='+rippledConf];
var start_args = conf_args.concat([/*'--net'*/])
var stop_args = conf_args.concat(['stop']);
/* --------------------------------- HELPERS -------------------------------- */
function start(args) {
return child.spawn(rippledPath, args, options);
}
function stop(rippled) { child.execFile(rippledPath, stop_args, options)}
function secs_l8r(ms, f) {setTimeout(f, ms * 1000); }
function show_results_and_exit(results) {
console.log(JSON.stringify(results, undefined, 2));
process.exit();
}
var timeTakes = function (range) {
function sumRange(n) {return (n+1) * n /2}
var ret = sumRange(range[1]);
if (range[0] > 1) {
ret = ret - sumRange(range[0] - 1)
}
var stopping = (range[1] - range[0]) * 0.5;
return ret + stopping;
}
/* ---------------------------------- TEST ---------------------------------- */
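// For example, `node bin/stop-test.js 5,10` computes sumRange(10) - sumRange(4)
// = 55 - 10 = 45 seconds of rippled run time plus (10 - 5) * 0.5 = 2.5 seconds
// of stop overhead, so the next line prints "Test will take ~47.5 seconds".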
console.log("Test will take ~%s seconds", timeTakes(testRange));
(function oneTest(n /* seconds */, results) {
if (n >= testRange[1]) {
// show_results_and_exit(results);
console.log(JSON.stringify(results, undefined, 2));
oneTest(testRange[0], []);
return;
}
var args = start_args;
if (n % 2 == 0) {args = args.concat(['--fg'])}
if (n % 3 == 0) {args = args.concat(['--net'])}
var result = {args: args, alive_for: n};
results.push(result);
console.log("\nLaunching `%s` with `%s` for %d seconds",
rippledPath, JSON.stringify(args), n);
rippled = start(args);
console.log("Rippled pid: %d", rippled.pid);
// defaults
var b4StopSent = false;
var stopSent = false;
var stop_took = null;
rippled.once('exit', function(){
if (!stopSent && !b4StopSent) {
console.warn('\nRippled exited itself b4 stop issued');
process.exit();
};
// The io handles close AFTER exit, may have implications for
// `stdio:'inherit'` option to `child.spawn`.
rippled.once('close', function() {
result.stop_took = (+new Date() - stop_took) / 1000; // seconds
console.log("Stopping after %d seconds took %s seconds",
n, result.stop_took);
oneTest(n+1, results);
});
});
secs_l8r(n, function(){
console.log("Stopping rippled after %d seconds", n);
// possible race here ?
// seems highly unlikely, but I was having issues at one point
b4StopSent=true;
stop_took = (+new Date());
// when does `exit` actually get sent?
stop();
stopSent=true;
// Sometimes we want to attach with a debugger.
if (process.env.ABORT_TESTS_ON_STALL != null) {
// We wait 30 seconds, and if it hasn't stopped, we abort the process
secs_l8r(30, function() {
if (result.stop_took == null) {
console.log("rippled has stalled");
process.exit();
};
});
}
})
}(testRange[0], []));

52
circle.yml Normal file
View File

@@ -0,0 +1,52 @@
machine:
services:
- docker
dependencies:
pre:
- sudo apt-add-repository -y 'deb http://llvm.org/apt/precise/ llvm-toolchain-precise-3.4 main'
- sudo apt-add-repository -y ppa:ubuntu-toolchain-r/test
- sudo add-apt-repository -y ppa:afrank/boost
- wget -q -O - http://llvm.org/apt/llvm-snapshot.gpg.key | sudo apt-key add -
- sudo apt-get update -qq
- sudo apt-get purge -qq libboost1.48-dev
- sudo apt-get install -qq libboost1.57-all-dev
- sudo apt-get install -qq clang-3.4 gcc-4.8 libobjc-4.8-dev libgcc-4.8-dev libstdc++-4.8-dev libclang1-3.4 libgcc1 libgomp1 libstdc++6 scons protobuf-compiler libprotobuf-dev libssl-dev exuberant-ctags
- sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 99 --slave /usr/bin/g++ g++ /usr/bin/g++-4.8
- sudo update-alternatives --install /usr/bin/clang clang /usr/bin/clang-3.4 99 --slave /usr/bin/clang++ clang++ /usr/bin/clang++-3.4
- gcc --version
- clang --version
test:
pre:
- scons clang.debug
override:
- | # create gdb script
echo "set env MALLOC_CHECK_=3" > script.gdb
echo "run" >> script.gdb
echo "backtrace full" >> script.gdb
# gdb --help
- cat script.gdb | gdb --ex 'set print thread-events off' --return-child-result --args build/clang.debug/rippled --unittest
- npm install
# Use build/(gcc|clang).debug/rippled
- |
echo "exports.default_server_config = {\"rippled_path\" : \"$HOME/rippled/build/clang.debug/rippled\"};" > test/config.js
# Run integration tests
- npm test
post:
- mkdir -p build/docker/
- cp doc/rippled-example.cfg build/clang.debug/rippled build/docker/
- cp Builds/Docker/Dockerfile-testnet build/docker/Dockerfile
- mv build/docker/rippled-example.cfg build/docker/rippled.cfg
- strip build/docker/rippled
- docker build -t ripple/rippled:$CIRCLE_SHA1 build/docker/
- docker tag ripple/rippled:$CIRCLE_SHA1 ripple/rippled:latest
- docker tag ripple/rippled:$CIRCLE_SHA1 ripple/rippled:$CIRCLE_BRANCH
- docker images
deployment:
docker:
branch: /.*/
commands:
- docker login -e $DOCKER_EMAIL -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
- docker push ripple/rippled:$CIRCLE_SHA1
- docker push ripple/rippled:$CIRCLE_BRANCH
- docker push ripple/rippled:latest

View File

@@ -1,19 +1,18 @@
VFALCO NOTE - This file appears to be unmaintained.
A list of rippled version numbers, and the Github pull requests they contain.
Critical protocol changes
-------------------------
0.28.0-b12: Includes pulls 836, 887, 902, 903 and 904.
0.28.0-b13: Includes pulls 906, 912, 913, 914 and 915.
0.28.0-b14: Includes pulls 907, 910, 922 and 923.
0.28.0-b15: Includes pulls 832, 870, 879, 883, 911, 916, 919, 920, 924, 925 and 928. FAILED pulls 909 and 926.
0.28.0-b16: Includes pulls 909, 926, 929, 931, 932, 935 and 934.
0.28.0-b17: Includes pulls 927, 939, 940, 943, 944, 945 and 949.
0.28.0-b18: Includes pulls 930, 946, 947, 948, 951, 952, 953, 954, 955, 956, 959, 960 and 962.
0.29.0-b19: Includes pulls 967, 969 and 971.
0.29.0-b20: Includes pulls 935, 942, 957, 958, 963, 964, 965, 966, 968, 972, 973, 974 and 975.
0.29.0-b21: Includes pulls 970 and 976.
0.28.1-b4: Includes pulls 968, 998, 1005, 1008, 1010, 1011 and 1012.
0.28.1-b6: Includes pulls 983, 984, 1013, 1023 and 1024.
0.28.1-b8: Includes pulls 988, 1009, 1014, 1019, 1029, 1031, 1033, 1034 and 1035.
0.28.1-b9: Includes pulls 1026, 1030, 1036, 1037, 1038, 1040, and 1041.
0.28.1-rc2: Includes pull 1044.
Mon Apr 8 16:13:12 PDT 2013
* The JSON field "inLedger" changing to "ledger_index"
Previous
* The JSON field "metaData" changing to "meta".
* RPC ledger will no longer take "ledger", use "ledger_hash" or "ledger_index".
* "ledgerClose" events:
** "hash" DEPRECATED: use "ledger_hash"
** "seqNum" DEPRECATED: use "ledger_index"
** "closeTime" DEPRECATED: use "close" or "close_human"
* stream "rt_accounts" --> "accounts_proposed"
* stream "rt_transactions" --> "transactions_proposed"
* subscribe "username" --> "url_username"
* subscribe "password" --> "url_password"

View File

@@ -2,14 +2,38 @@
# Coding Standards
Coding standards used here are extremely strict and consistent. The style
evolved gradually over the years, incorporating generally acknowledged
best-practice C++ advice, experience, and personal preference.
Coding standards used here gradually evolve and propagate through
code reviews. Some aspects are enforced more strictly than others.
## Don't Repeat Yourself!
## Rules
The [Don't Repeat Yourself][1] principle summarises the essence of what it
means to write good code, in all languages, at all levels.
These rules only apply to our own code. We can't enforce any sort of
style on the external repositories and libraries we include. The best
guideline is to maintain the standards that are used in those libraries.
* Tab inserts 4 spaces. No tab characters.
* Braces are indented in the [Allman style][1].
* Modern C++ principles. No naked ```new``` or ```delete```.
* Line lengths limited to 80 characters. Exceptions limited to data and tables.
## Guidelines
If you want to do something contrary to these guidelines, understand
why you're doing it. Think, use common sense, and consider that
your changes will probably need to be maintained long after you've
moved on to other projects.
* Use white space and blank lines to guide the eye and keep your intent clear.
* Put private data members at the top of a class, and the 6 public special
members immediately after, in the following order:
* Destructor
* Default constructor
* Copy constructor
* Copy assignment
* Move constructor
* Move assignment
* Don't over-inline: avoid defining large functions within the class
declaration, even for template classes.
## Formatting
@@ -17,9 +41,6 @@ The goal of source code formatting should always be to make things as easy to
read as possible. White space is used to guide the eye so that details are not
overlooked. Blank lines are used to separate code into "paragraphs."
* No tab characters please.
* Tab stops are set to 4 spaces.
* Braces are indented in the [Allman style][2].
* Always place a space before and after all binary operators,
especially assignments (`operator=`).
* The `!` operator should always be followed by a space.
@@ -62,156 +83,4 @@ overlooked. Blank lines are used to separate code into "paragraphs."
* Always place a space in between the template angle brackets and the type
name. Template code is already hard enough to read!
## Naming conventions
* Member variables and method names are written with camel-case, and never
begin with a capital letter.
* Class names are also written in camel-case, but always begin with a capital
letter.
* For global variables... well, you shouldn't have any, so it doesn't matter.
* Class data members begin with `m_`, static data members begin with `s_`.
Global variables begin with `g_`. This is so the scope of the corresponding
declaration can be easily determined.
* Avoid underscores in your names, especially leading or trailing underscores.
In particular, leading underscores should be avoided, as these are often used
in standard library code, so to use them in your own code looks quite jarring.
* If you really have to write a macro for some reason, then make it all caps,
with underscores to separate the words. And obviously make sure that its name
is unlikely to clash with symbols used in other libraries or 3rd party code.
## Types, const-correctness
* If a method can (and should!) be const, make it const!
* If a method definitely doesn't throw an exception (be careful!), mark it as
`noexcept`
* When returning a temporary object, e.g. a String, the returned object should
be non-const, so that if the class has a C++11 move operator, it can be used.
* If a local variable can be const, then make it const!
* Remember that pointers can be const as well as primitives; For example, if
you have a `char*` whose contents are going to be altered, you may still be
able to make the pointer itself const, e.g. `char* const foobar = getFoobar();`.
* Do not declare all your local variables at the top of a function or method
(i.e. in the old-fashioned C-style). Declare them at the last possible moment,
and give them as small a scope as possible.
* Object parameters should be passed as `const&` wherever possible. Only
pass a parameter as a copy-by-value object if you really need to mutate
a local copy inside the method, and if making a local copy inside the method
would be difficult.
* Use portable `for()` loop variable scoping (i.e. do not have multiple for
loops in the same scope that each re-declare the same variable name, as
this fails on older compilers)
* When you're testing a pointer to see if it's null, never write
`if (myPointer)`. Always avoid that implicit cast-to-bool by writing it more
fully: `if (myPointer != nullptr)`. And likewise, never ever write
`if (! myPointer)`, instead always write `if (myPointer == nullptr)`.
It is more readable that way.
* Avoid C-style casts except when converting between primitive numeric types.
Some people would say "avoid C-style casts altogether", but `static_cast` is
a bit unreadable when you just want to cast an `int` to a `float`. But
whenever a pointer is involved, or a non-primitive object, always use
`static_cast`. And when you're reinterpreting data, always use
`reinterpret_cast`.
* Until C++ gets a universal 64-bit primitive type (part of the C++11
standard), it's best to stick to the `int64` and `uint64` typedefs.
## Object lifetime and ownership
* Absolutely do NOT use `delete`, `deleteAndZero`, etc. There are very very few
situations where you can't use a `ScopedPointer` or some other automatic
lifetime management class.
* Do not use `new` unless there's no alternative. Whenever you type `new`, always
treat it as a failure to find a better solution. If a local variable can be
allocated on the stack rather than the heap, then always do so.
* Do not ever use `new` or `malloc` to allocate a C++ array. Always use a
`HeapBlock` instead.
* And just to make it doubly clear: Never use `malloc` or `calloc`.
* If a parent object needs to create and own some kind of child object, always
use composition as your first choice. If that's not possible (e.g. if the
child needs a pointer to the parent for its constructor), then use a
`ScopedPointer`.
* If possible, pass an object as a reference rather than a pointer. If possible,
make it a `const` reference.
* Obviously avoid static and global values. Sometimes there's no alternative,
but if there is an alternative, then use it, no matter how much effort it
involves.
* If allocating a local POD structure (e.g. an operating-system structure in
native code), and you need to initialise it with zeros, use the `= { 0 };`
syntax as your first choice for doing this. If for some reason that's not
appropriate, use the `zerostruct()` function, or in case that isn't suitable,
use `zeromem()`. Don't use `memset()`.
## Classes
* Declare a class's public section first, and put its constructors and
destructor first. Any protected items come next, and then private ones.
* Use the most restrictive access-specifier possible for each member. Prefer
`private` over `protected`, and `protected` over `public`. Don't expose
things unnecessarily.
* Preferred positioning for any inherited classes is to put them to the right
of the class name, vertically aligned, e.g.:
class Thing : public Foo,
private Bar
{
};
* Put a class's member variables (which should almost always be private, of course),
after all the public and protected method declarations.
* Any private methods can go towards the end of the class, after the member
variables.
* If your class does not have copy-by-value semantics, derive the class from
`Uncopyable`.
* If your class is likely to be leaked, then derive your class from
`LeakChecked<>`.
* Constructors that take a single parameter should by default be marked
`explicit`. Obviously there are cases where you do want implicit conversion,
but always think about it carefully before writing a non-explicit constructor.
* Do not use `NULL`, `null`, or 0 for a null-pointer. And especially never use
'0L', which is particularly burdensome. Use `nullptr` instead - this is the
C++2011 standard, so get used to it. There's a fallback definition for `nullptr`
in Beast, so it's always possible to use it even if your compiler isn't yet
C++2011 compliant.
* All the C++ 'guru' books and articles are full of excellent and detailed advice
on when it's best to use inheritance vs composition. If you're not already
familiar with the received wisdom in these matters, then do some reading!
## Miscellaneous
* `goto` statements should not be used at all, even if the alternative is
more verbose code. The only exception is when implementing an algorithm in
a function as a state machine.
* Don't use macros! OK, obviously there are many situations where they're the
right tool for the job, but treat them as a last resort. Certainly don't ever
use a macro just to hold a constant value or to perform any kind of function
that could have been done as a real inline function. And it goes without saying
that you should give them names which aren't going to clash with other code.
And `#undef` them after you've used them, if possible.
* When using the `++` or `--` operators, never use post-increment if
pre-increment could be used instead. Although it doesn't matter for
primitive types, it's good practice to pre-increment since this can be
much more efficient for more complex objects. In particular, if you're
writing a for loop, always use pre-increment,
e.g. `for (int i = 0; i < 10; ++i)`
* Never put an "else" statement after a "return"! This is well-explained in the
LLVM coding standards...and a couple of other very good pieces of advice from
the LLVM standards are in there as well.
* When getting a possibly-null pointer and using it only if it's non-null, limit
the scope of the pointer as much as possible - e.g. Do NOT do this:
Foo* f = getFoo ();
if (f != nullptr)
f->doSomething ();
// other code
f->doSomething (); // oops! f may be null!
..instead, prefer to write it like this, which reduces the scope of the
pointer, making it impossible to write code that accidentally uses a null
pointer:
if (Foo* f = getFoo ())
f->doSomethingElse ();
// f is out-of-scope here, so impossible to use it if it's null
(This also results in smaller, cleaner code)
[1]: http://en.wikipedia.org/wiki/Don%27t_repeat_yourself
[2]: http://en.wikipedia.org/wiki/Indent_style#Allman_style
[1]: http://en.wikipedia.org/wiki/Indent_style#Allman_style

16
doc/Docker.md Normal file
View File

@@ -0,0 +1,16 @@
# Rippled Docker Image
Rippled has a continuous deployment pipeline that turns every git commit into a
docker image for quick testing and deployment.
To run the tip of the latest release via docker:
```$ docker run -P -v /srv/rippled/ ripple/rippled:latest```
To run the tip of active development:
```$ docker run -P -v /srv/rippled/ ripple/rippled:develop```
Where ```/srv/rippled``` points to a directory containing a rippled.cfg and
database files. By default, port 5005/tcp maps to the RPC port and 51235/udp to
the peer port.

File diff suppressed because it is too large Load Diff

63
doc/HeapProfiling.md Normal file
View File

@@ -0,0 +1,63 @@
## Heap profiling of rippled with jemalloc
The jemalloc library provides a good API for doing heap analysis,
including a mechanism to dump a description of the heap from within the
running application via a function call. Details on how to perform this
activity in general, as well as how to acquire the software, are available on
the jemalloc site:
[https://github.com/jemalloc/jemalloc/wiki/Use-Case:-Heap-Profiling](https://github.com/jemalloc/jemalloc/wiki/Use-Case:-Heap-Profiling)
jemalloc is acquired separately from rippled, and is not affiliated
with Ripple Labs. If you compile and install jemalloc from the
source release with default options, it will install the library and header
under `/usr/local/lib` and `/usr/local/include`, respectively. Heap
profiling has been tested with rippled on a Linux platform. It should
work on platforms on which both rippled and jemalloc are available.
To link rippled with jemalloc, the argument
`profile-jemalloc=<jemalloc_dir>` is provided after the optional target.
The `<jemalloc_dir>` argument should be the same as that of the
`--prefix` parameter passed to the jemalloc configure script when building.
## Examples:
Build rippled with jemalloc library under /usr/local/lib and
header under /usr/local/include:
$ scons profile-jemalloc=/usr/local
Build rippled using clang with the jemalloc library under /opt/local/lib
and header under /opt/local/include:
$ scons clang profile-jemalloc=/opt/local
----------------------
## Using the jemalloc library from within the code
The `profile-jemalloc` parameter enables a macro definition called
`PROFILE_JEMALLOC`. Include the jemalloc header file as
well as the api call(s) that you wish to make within preprocessor
conditional groups, such as:
In global scope:
#ifdef PROFILE_JEMALLOC
#include <jemalloc/jemalloc.h>
#endif
And later, within a function scope:
#ifdef PROFILE_JEMALLOC
mallctl("prof.dump", NULL, NULL, NULL, 0);
#endif
Fuller descriptions of how to acquire and use jemalloc's api to do memory
analysis are available at the [jemalloc
site.](http://www.canonware.com/jemalloc/)
Linking against the jemalloc library will override
the system's default `malloc()` and related functions with jemalloc's
implementation. This is the case even if the code is not instrumented
to use jemalloc's specific API.

File diff suppressed because it is too large Load Diff

BIN
images/build.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 3.1 KiB

BIN
images/contribute.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 2.1 KiB

BIN
images/network.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 44 KiB

BIN
images/pathfinding.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 43 KiB

BIN
images/ripple.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 6.8 KiB

BIN
images/transact.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.7 KiB

BIN
images/vehicle_currency.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 71 KiB

View File

@@ -2,33 +2,30 @@
"name": "rippled",
"version": "0.0.1",
"description": "Rippled Server",
"private": true,
"directories": {
"test": "test"
},
"dependencies": {
"ripple-lib": "0.7.37",
"async": "~0.2.9",
"deep-equal": "0.0.0",
"extend": "~1.2.0",
"simple-jsonrpc": "~0.0.2",
"deep-equal": "0.0.0"
"lodash": "^3.5.0",
"request": "^2.47.0",
"ripple-lib": "0.10.0",
"simple-jsonrpc": "~0.0.2"
},
"devDependencies": {
"coffee-script": "~1.6.3",
"mocha": "~1.13.0"
"assert-diff": "^1.0.1",
"coffee-script": "^1.8.0",
"mocha": "^2.1.0"
},
"scripts": {
"test": "mocha test/websocket-test.js test/server-test.js test/*-test.{js,coffee}"
},
"repository": {
"type": "git",
"url": "git://github.com/ripple/rippled.git"
},
"readmeFilename": "README.md"
}

View File

@@ -57,38 +57,13 @@
//#define BEAST_FORCE_DEBUG 1
#endif
/** Config: BEAST_LOG_ASSERTIONS
If this flag is enabled, the bassert and bassertfalse macros will always
use Logger::writeToLog() to write a message when an assertion happens.
Enabling it will also leave this turned on in release builds. When it's
disabled, however, the bassert and bassertfalse macros will not be compiled
in a release build.
@see bassert, bassertfalse, Logger
*/
#ifndef BEAST_LOG_ASSERTIONS
//#define BEAST_LOG_ASSERTIONS 1
#endif
/** Config: BEAST_CHECK_MEMORY_LEAKS
Enables a memory-leak check for certain objects when the app terminates.
See the LeakChecked class for more details about enabling leak checking for
specific classes.
*/
#ifndef BEAST_CHECK_MEMORY_LEAKS
//#define BEAST_CHECK_MEMORY_LEAKS 0
#endif
/** Config: BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES
Setting this option makes Socket-derived classes generate compile errors
if they forget any of the virtual overrides. As some Socket-derived classes
intentionally omit member functions that are not applicable, this macro
should only be enabled temporarily when writing your own Socket-derived
class, to make sure that the function signatures match as expected.
*/
#ifndef BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES
//#define BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES 1
#endif
//------------------------------------------------------------------------------
//
// Libraries
@@ -205,13 +180,6 @@
#define RIPPLE_PROPOSE_AMENDMENTS 0
#endif
/** Config: RIPPLE_ENABLE_AUTOBRIDGING
This determines whether ripple implements offer autobridging via XRP.
*/
#ifndef RIPPLE_ENABLE_AUTOBRIDGING
#define RIPPLE_ENABLE_AUTOBRIDGING 0
#endif
/** Config: RIPPLE_SINGLE_IO_SERVICE_THREAD
When set, restricts the number of threads calling io_service::run to one.
This is useful when debugging.
@@ -220,11 +188,18 @@
#define RIPPLE_SINGLE_IO_SERVICE_THREAD 0
#endif
/** Config: RIPPLE_STRUCTURED_OVERLAY
Enables Structured Overlay support (unfinished)
/** Config: RIPPLE_HOOK_VALIDATORS
Activates code for handling validations and validators (work in progress).
*/
#ifndef RIPPLE_STRUCTURED_OVERLAY
#define RIPPLE_STRUCTURED_OVERLAY 0
#ifndef RIPPLE_HOOK_VALIDATORS
#define RIPPLE_HOOK_VALIDATORS 0
#endif
/** Config: RIPPLE_ENABLE_TICKETS
Enables processing of ticket transactions
*/
#ifndef RIPPLE_ENABLE_TICKETS
#define RIPPLE_ENABLE_TICKETS 0
#endif
#endif

View File

@@ -1,9 +1,9 @@
# src
Some of these directories come from entire outside repositories
brought in using git-subtree. This means that the source files are
inserted directly into the rippled repository. They can be edited
and committed just as if they were normal files.
Some of these directories come from entire outside repositories brought in
using git-subtree. This means that the source files are inserted directly
into the rippled repository. They can be edited and committed just as if they
were normal files.
However, if you create a commit that contains files both from a
subtree, and from the ripple source tree please use care when designing
@@ -99,7 +99,7 @@ ripple-fork
## protobuf
Ripple's fork of protobuf. We've changed some names in order to support the
unity-style of build (a single .cpp addded to the project, instead of
unity-style of build (a single .cpp added to the project, instead of
linking to a separately built static library).
Repository

View File

@@ -55,18 +55,6 @@
//#define BEAST_FORCE_DEBUG 1
#endif
/** Config: BEAST_LOG_ASSERTIONS
If this flag is enabled, the bassert and bassertfalse macros will always
use Logger::writeToLog() to write a message when an assertion happens.
Enabling it will also leave this turned on in release builds. When it's
disabled, however, the bassert and bassertfalse macros will not be compiled
in a release build.
@see bassert, bassertfalse, Logger
*/
#ifndef BEAST_LOG_ASSERTIONS
//#define BEAST_LOG_ASSERTIONS 1
#endif
/** Config: BEAST_CHECK_MEMORY_LEAKS
Enables a memory-leak check for certain objects when the app terminates.
See the LeakChecked class for more details about enabling leak checking for
@@ -76,17 +64,6 @@
//#define BEAST_CHECK_MEMORY_LEAKS 0
#endif
/** Config: BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES
Setting this option makes Socket-derived classes generate compile errors
if they forget any of the virtual overrides. As some Socket-derived classes
intentionally omit member functions that are not applicable, this macro
should only be enabled temporarily when writing your own Socket-derived
class, to make sure that the function signatures match as expected.
*/
#ifndef BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES
//#define BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES 1
#endif
//------------------------------------------------------------------------------
//
// Libraries

View File

@@ -34,101 +34,6 @@
namespace beast {
// Some indispensible min/max functions
/** Returns the larger of two values. */
template <typename Type>
inline Type bmax (const Type a, const Type b)
{ return (a < b) ? b : a; }
/** Returns the larger of three values. */
template <typename Type>
inline Type bmax (const Type a, const Type b, const Type c)
{ return (a < b) ? ((b < c) ? c : b) : ((a < c) ? c : a); }
/** Returns the larger of four values. */
template <typename Type>
inline Type bmax (const Type a, const Type b, const Type c, const Type d)
{ return bmax (a, bmax (b, c, d)); }
/** Returns the smaller of two values. */
template <typename Type>
inline Type bmin (const Type a, const Type b)
{ return (b < a) ? b : a; }
/** Returns the smaller of three values. */
template <typename Type>
inline Type bmin (const Type a, const Type b, const Type c)
{ return (b < a) ? ((c < b) ? c : b) : ((c < a) ? c : a); }
/** Returns the smaller of four values. */
template <typename Type>
inline Type bmin (const Type a, const Type b, const Type c, const Type d)
{ return bmin (a, bmin (b, c, d)); }
/** Scans an array of values, returning the minimum value that it contains. */
template <typename Type>
const Type findMinimum (const Type* data, int numValues)
{
if (numValues <= 0)
return Type();
Type result (*data++);
while (--numValues > 0) // (> 0 rather than >= 0 because we've already taken the first sample)
{
const Type& v = *data++;
if (v < result) result = v;
}
return result;
}
/** Scans an array of values, returning the maximum value that it contains. */
template <typename Type>
const Type findMaximum (const Type* values, int numValues)
{
if (numValues <= 0)
return Type();
Type result (*values++);
while (--numValues > 0) // (> 0 rather than >= 0 because we've already taken the first sample)
{
const Type& v = *values++;
if (result < v) result = v;
}
return result;
}
/** Scans an array of values, returning the minimum and maximum values that it contains. */
template <typename Type>
void findMinAndMax (const Type* values, int numValues, Type& lowest, Type& highest)
{
if (numValues <= 0)
{
lowest = Type();
highest = Type();
}
else
{
Type mn (*values++);
Type mx (mn);
while (--numValues > 0) // (> 0 rather than >= 0 because we've already taken the first sample)
{
const Type& v = *values++;
if (mx < v) mx = v;
if (v < mn) mn = v;
}
lowest = mn;
highest = mx;
}
}
//==============================================================================
/** Constrains a value to keep it within a given range.
@@ -151,7 +56,8 @@ inline Type blimit (const Type lowerLimit,
const Type upperLimit,
const Type valueToConstrain) noexcept
{
bassert (lowerLimit <= upperLimit); // if these are in the wrong order, results are unpredictable..
// if these are in the wrong order, results are unpredictable.
bassert (lowerLimit <= upperLimit);
return (valueToConstrain < lowerLimit) ? lowerLimit
: ((upperLimit < valueToConstrain) ? upperLimit
@@ -177,24 +83,6 @@ inline bool isPositiveAndBelow (const int valueToTest, const int upperLimit) noe
return static_cast <unsigned int> (valueToTest) < static_cast <unsigned int> (upperLimit);
}
/** Returns true if a value is at least zero, and also less than or equal to a specified upper limit.
This is basically a quicker way to write:
@code valueToTest >= 0 && valueToTest <= upperLimit
@endcode
*/
template <typename Type>
inline bool isPositiveAndNotGreaterThan (Type valueToTest, Type upperLimit) noexcept
{
bassert (Type() <= upperLimit); // makes no sense to call this if the upper limit is itself below zero..
return Type() <= valueToTest && valueToTest <= upperLimit;
}
template <>
inline bool isPositiveAndNotGreaterThan (const int valueToTest, const int upperLimit) noexcept
{
bassert (upperLimit >= 0); // makes no sense to call this if the upper limit is itself below zero..
return static_cast <unsigned int> (valueToTest) <= static_cast <unsigned int> (upperLimit);
}
//==============================================================================
@@ -214,55 +102,6 @@ int numElementsInArray (Type (&array)[N])
return N;
}
/** 64-bit abs function. */
inline std::int64_t abs64 (const std::int64_t n) noexcept
{
return (n >= 0) ? n : -n;
}
//==============================================================================
#if BEAST_MSVC
#pragma optimize ("t", off)
#ifndef __INTEL_COMPILER
#pragma float_control (precise, on, push)
#endif
#endif
/** Fast floating-point-to-integer conversion.
This is faster than using the normal c++ cast to convert a float to an int, and
it will round the value to the nearest integer, rather than rounding it down
like the normal cast does.
Note that this routine gets its speed at the expense of some accuracy, and when
rounding values whose floating point component is exactly 0.5, odd numbers and
even numbers will be rounded up or down differently.
*/
template <typename FloatType>
inline int roundToInt (const FloatType value) noexcept
{
#ifdef __INTEL_COMPILER
#pragma float_control (precise, on, push)
#endif
union { int asInt[2]; double asDouble; } n;
n.asDouble = ((double) value) + 6755399441055744.0;
#if BEAST_BIG_ENDIAN
return n.asInt [1];
#else
return n.asInt [0];
#endif
}
#if BEAST_MSVC
#ifndef __INTEL_COMPILER
#pragma float_control (pop)
#endif
#pragma optimize ("", on) // resets optimisations to the project defaults
#endif
}
#endif

View File

@@ -1,428 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of Beast: https://github.com/vinniefalco/Beast
Copyright 2013, Vinnie Falco <vinnie.falco@gmail.com>
Portions of this file are from JUCE.
Copyright (c) 2013 - Raw Material Software Ltd.
Please visit http://www.juce.com
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef BEAST_ATOMIC_H_INCLUDED
#define BEAST_ATOMIC_H_INCLUDED
#include <beast/Config.h>
#include <beast/StaticAssert.h>
#include <beast/utility/noexcept.h>
#include <cstdint>
namespace beast {
//==============================================================================
/**
Simple class to hold a primitive value and perform atomic operations on it.
The type used must be a 32 or 64 bit primitive, like an int, pointer, etc.
There are methods to perform most of the basic atomic operations.
*/
template <typename Type>
class Atomic
{
public:
/** Creates a new value, initialised to zero. */
inline Atomic() noexcept
: value (0)
{
}
/** Creates a new value, with a given initial value. */
inline Atomic (const Type initialValue) noexcept
: value (initialValue)
{
}
/** Copies another value (atomically). */
inline Atomic (const Atomic& other) noexcept
: value (other.get())
{
}
/** Destructor. */
inline ~Atomic() noexcept
{
// This class can only be used for types which are 32 or 64 bits in size.
static_bassert (sizeof (Type) == 4 || sizeof (Type) == 8);
}
/** Atomically reads and returns the current value. */
Type get() const noexcept;
/** Copies another value onto this one (atomically). */
Atomic& operator= (const Atomic& other) noexcept
{ exchange (other.get()); return *this; }
/** Copies another value onto this one (atomically). */
Atomic& operator= (const Type newValue) noexcept
{ exchange (newValue); return *this; }
/** Atomically sets the current value. */
void set (Type newValue) noexcept
{ exchange (newValue); }
/** Atomically sets the current value, returning the value that was replaced. */
Type exchange (Type value) noexcept;
/** Atomically adds a number to this value, returning the new value. */
Type operator+= (Type amountToAdd) noexcept;
/** Atomically subtracts a number from this value, returning the new value. */
Type operator-= (Type amountToSubtract) noexcept;
/** Atomically increments this value, returning the new value. */
Type operator++() noexcept;
/** Atomically decrements this value, returning the new value. */
Type operator--() noexcept;
/** Atomically compares this value with a target value, and if it is equal, sets
this to be equal to a new value.
This operation is the atomic equivalent of doing this:
@code
bool compareAndSetBool (Type newValue, Type valueToCompare)
{
if (get() == valueToCompare)
{
set (newValue);
return true;
}
return false;
}
@endcode
@returns true if the comparison was true and the value was replaced; false if
the comparison failed and the value was left unchanged.
@see compareAndSetValue
*/
bool compareAndSetBool (Type newValue, Type valueToCompare) noexcept;
/** Atomically compares this value with a target value, and if it is equal, sets
this to be equal to a new value.
This operation is the atomic equivalent of doing this:
@code
Type compareAndSetValue (Type newValue, Type valueToCompare)
{
Type oldValue = get();
if (oldValue == valueToCompare)
set (newValue);
return oldValue;
}
@endcode
@returns the old value before it was changed.
@see compareAndSetBool
*/
Type compareAndSetValue (Type newValue, Type valueToCompare) noexcept;
//==============================================================================
#if BEAST_64BIT
BEAST_ALIGN (8)
#else
BEAST_ALIGN (4)
#endif
/** The raw value that this class operates on.
This is exposed publicly in case you need to manipulate it directly
for performance reasons.
*/
volatile Type value;
private:
template <typename Dest, typename Source>
static inline Dest castTo (Source value) noexcept { union { Dest d; Source s; } u; u.s = value; return u.d; }
static inline Type castFrom32Bit (std::int32_t value) noexcept { return castTo <Type, std::int32_t> (value); }
static inline Type castFrom64Bit (std::int64_t value) noexcept { return castTo <Type, std::int64_t> (value); }
static inline std::int32_t castTo32Bit (Type value) noexcept { return castTo <std::int32_t, Type> (value); }
static inline std::int64_t castTo64Bit (Type value) noexcept { return castTo <std::int64_t, Type> (value); }
Type operator++ (int); // better to just use pre-increment with atomics..
Type operator-- (int);
/** This templated negate function will negate pointers as well as integers */
template <typename ValueType>
inline ValueType negateValue (ValueType n) noexcept
{
return sizeof (ValueType) == 1 ? (ValueType) -(signed char) n
: (sizeof (ValueType) == 2 ? (ValueType) -(short) n
: (sizeof (ValueType) == 4 ? (ValueType) -(int) n
: ((ValueType) -(std::int64_t) n)));
}
/** This templated negate function will negate pointers as well as integers */
template <typename PointerType>
inline PointerType* negateValue (PointerType* n) noexcept
{
return reinterpret_cast <PointerType*> (-reinterpret_cast <std::intptr_t> (n));
}
};
//==============================================================================
/*
The following code is in the header so that the atomics can be inlined where possible...
*/
#if BEAST_IOS || (BEAST_MAC && (BEAST_PPC || BEAST_CLANG || __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 2)))
#define BEAST_ATOMICS_MAC 1 // Older OSX builds using gcc4.1 or earlier
#if MAC_OS_X_VERSION_MIN_REQUIRED < MAC_OS_X_VERSION_10_5
#define BEAST_MAC_ATOMICS_VOLATILE
#else
#define BEAST_MAC_ATOMICS_VOLATILE volatile
#endif
#if BEAST_PPC || BEAST_IOS
// None of these atomics are available for PPC or for iOS 3.1 or earlier!!
template <typename Type> static Type OSAtomicAdd64Barrier (Type b, BEAST_MAC_ATOMICS_VOLATILE Type* a) noexcept { bassertfalse; return *a += b; }
template <typename Type> static Type OSAtomicIncrement64Barrier (BEAST_MAC_ATOMICS_VOLATILE Type* a) noexcept { bassertfalse; return ++*a; }
template <typename Type> static Type OSAtomicDecrement64Barrier (BEAST_MAC_ATOMICS_VOLATILE Type* a) noexcept { bassertfalse; return --*a; }
template <typename Type> static bool OSAtomicCompareAndSwap64Barrier (Type old, Type newValue, BEAST_MAC_ATOMICS_VOLATILE Type* value) noexcept
{ bassertfalse; if (old == *value) { *value = newValue; return true; } return false; }
#define BEAST_64BIT_ATOMICS_UNAVAILABLE 1
#endif
#elif BEAST_CLANG && BEAST_LINUX
#define BEAST_ATOMICS_GCC 1
//==============================================================================
#elif BEAST_GCC
#define BEAST_ATOMICS_GCC 1 // GCC with intrinsics
#if BEAST_IOS || BEAST_ANDROID // (64-bit ops will compile but not link on these mobile OSes)
#define BEAST_64BIT_ATOMICS_UNAVAILABLE 1
#endif
//==============================================================================
#else
#define BEAST_ATOMICS_WINDOWS 1 // Windows with intrinsics
#if BEAST_USE_INTRINSICS
#ifndef __INTEL_COMPILER
#pragma intrinsic (_InterlockedExchange, _InterlockedIncrement, _InterlockedDecrement, _InterlockedCompareExchange, \
_InterlockedCompareExchange64, _InterlockedExchangeAdd, _ReadWriteBarrier)
#endif
#define beast_InterlockedExchange(a, b) _InterlockedExchange(a, b)
#define beast_InterlockedIncrement(a) _InterlockedIncrement(a)
#define beast_InterlockedDecrement(a) _InterlockedDecrement(a)
#define beast_InterlockedExchangeAdd(a, b) _InterlockedExchangeAdd(a, b)
#define beast_InterlockedCompareExchange(a, b, c) _InterlockedCompareExchange(a, b, c)
#define beast_InterlockedCompareExchange64(a, b, c) _InterlockedCompareExchange64(a, b, c)
#define beast_MemoryBarrier _ReadWriteBarrier
#else
long beast_InterlockedExchange (volatile long* a, long b) noexcept;
long beast_InterlockedIncrement (volatile long* a) noexcept;
long beast_InterlockedDecrement (volatile long* a) noexcept;
long beast_InterlockedExchangeAdd (volatile long* a, long b) noexcept;
long beast_InterlockedCompareExchange (volatile long* a, long b, long c) noexcept;
__int64 beast_InterlockedCompareExchange64 (volatile __int64* a, __int64 b, __int64 c) noexcept;
inline void beast_MemoryBarrier() noexcept { long x = 0; beast_InterlockedIncrement (&x); }
#endif
#if BEAST_64BIT
#ifndef __INTEL_COMPILER
#pragma intrinsic (_InterlockedExchangeAdd64, _InterlockedExchange64, _InterlockedIncrement64, _InterlockedDecrement64)
#endif
#define beast_InterlockedExchangeAdd64(a, b) _InterlockedExchangeAdd64(a, b)
#define beast_InterlockedExchange64(a, b) _InterlockedExchange64(a, b)
#define beast_InterlockedIncrement64(a) _InterlockedIncrement64(a)
#define beast_InterlockedDecrement64(a) _InterlockedDecrement64(a)
#else
// None of these atomics are available in a 32-bit Windows build!!
template <typename Type> static Type beast_InterlockedExchangeAdd64 (volatile Type* a, Type b) noexcept { bassertfalse; Type old = *a; *a += b; return old; }
template <typename Type> static Type beast_InterlockedExchange64 (volatile Type* a, Type b) noexcept { bassertfalse; Type old = *a; *a = b; return old; }
template <typename Type> static Type beast_InterlockedIncrement64 (volatile Type* a) noexcept { bassertfalse; return ++*a; }
template <typename Type> static Type beast_InterlockedDecrement64 (volatile Type* a) noexcept { bassertfalse; return --*a; }
#define BEAST_64BIT_ATOMICS_UNAVAILABLE 1
#endif
#endif
#if BEAST_MSVC
#pragma warning (push)
#pragma warning (disable: 4311) // (truncation warning)
#endif
//==============================================================================
template <typename Type>
inline Type Atomic<Type>::get() const noexcept
{
#if BEAST_ATOMICS_MAC
return sizeof (Type) == 4 ? castFrom32Bit ((std::int32_t) OSAtomicAdd32Barrier ((int32_t) 0, (BEAST_MAC_ATOMICS_VOLATILE int32_t*) &value))
: castFrom64Bit ((std::int64_t) OSAtomicAdd64Barrier ((int64_t) 0, (BEAST_MAC_ATOMICS_VOLATILE int64_t*) &value));
#elif BEAST_ATOMICS_WINDOWS
return sizeof (Type) == 4 ? castFrom32Bit ((std::int32_t) beast_InterlockedExchangeAdd ((volatile long*) &value, (long) 0))
: castFrom64Bit ((std::int64_t) beast_InterlockedExchangeAdd64 ((volatile __int64*) &value, (__int64) 0));
#elif BEAST_ATOMICS_GCC
return sizeof (Type) == 4 ? castFrom32Bit ((std::int32_t) __sync_add_and_fetch ((volatile std::int32_t*) &value, 0))
: castFrom64Bit ((std::int64_t) __sync_add_and_fetch ((volatile std::int64_t*) &value, 0));
#endif
}
template <typename Type>
inline Type Atomic<Type>::exchange (const Type newValue) noexcept
{
#if BEAST_ATOMICS_MAC || BEAST_ATOMICS_GCC
Type currentVal = value;
while (! compareAndSetBool (newValue, currentVal)) { currentVal = value; }
return currentVal;
#elif BEAST_ATOMICS_WINDOWS
return sizeof (Type) == 4 ? castFrom32Bit ((std::int32_t) beast_InterlockedExchange ((volatile long*) &value, (long) castTo32Bit (newValue)))
: castFrom64Bit ((std::int64_t) beast_InterlockedExchange64 ((volatile __int64*) &value, (__int64) castTo64Bit (newValue)));
#endif
}
template <typename Type>
inline Type Atomic<Type>::operator+= (const Type amountToAdd) noexcept
{
#if BEAST_ATOMICS_MAC
# ifdef __clang__
# pragma clang diagnostic push
# pragma clang diagnostic ignored "-Wint-to-void-pointer-cast"
# pragma clang diagnostic ignored "-Wint-to-pointer-cast"
# endif
return sizeof (Type) == 4 ? (Type) OSAtomicAdd32Barrier ((int32_t) castTo32Bit (amountToAdd), (BEAST_MAC_ATOMICS_VOLATILE int32_t*) &value)
: (Type) OSAtomicAdd64Barrier ((int64_t) amountToAdd, (BEAST_MAC_ATOMICS_VOLATILE int64_t*) &value);
# ifdef __clang__
# pragma clang diagnostic pop
# endif
#elif BEAST_ATOMICS_WINDOWS
return sizeof (Type) == 4 ? (Type) (beast_InterlockedExchangeAdd ((volatile long*) &value, (long) amountToAdd) + (long) amountToAdd)
: (Type) (beast_InterlockedExchangeAdd64 ((volatile __int64*) &value, (__int64) amountToAdd) + (__int64) amountToAdd);
#elif BEAST_ATOMICS_GCC
return (Type) __sync_add_and_fetch (&value, amountToAdd);
#endif
}
template <typename Type>
inline Type Atomic<Type>::operator-= (const Type amountToSubtract) noexcept
{
return operator+= (negateValue (amountToSubtract));
}
template <typename Type>
inline Type Atomic<Type>::operator++() noexcept
{
#if BEAST_ATOMICS_MAC
# ifdef __clang__
# pragma clang diagnostic push
# pragma clang diagnostic ignored "-Wint-to-void-pointer-cast"
# pragma clang diagnostic ignored "-Wint-to-pointer-cast"
# endif
return sizeof (Type) == 4 ? (Type) OSAtomicIncrement32Barrier ((BEAST_MAC_ATOMICS_VOLATILE int32_t*) &value)
: (Type) OSAtomicIncrement64Barrier ((BEAST_MAC_ATOMICS_VOLATILE int64_t*) &value);
# ifdef __clang__
# pragma clang diagnostic pop
# endif
#elif BEAST_ATOMICS_WINDOWS
return sizeof (Type) == 4 ? (Type) beast_InterlockedIncrement ((volatile long*) &value)
: (Type) beast_InterlockedIncrement64 ((volatile __int64*) &value);
#elif BEAST_ATOMICS_GCC
return (Type) __sync_add_and_fetch (&value, (Type) 1);
#endif
}
template <typename Type>
inline Type Atomic<Type>::operator--() noexcept
{
#if BEAST_ATOMICS_MAC
# ifdef __clang__
# pragma clang diagnostic push
# pragma clang diagnostic ignored "-Wint-to-void-pointer-cast"
# pragma clang diagnostic ignored "-Wint-to-pointer-cast"
# endif
return sizeof (Type) == 4 ? (Type) OSAtomicDecrement32Barrier ((BEAST_MAC_ATOMICS_VOLATILE int32_t*) &value)
: (Type) OSAtomicDecrement64Barrier ((BEAST_MAC_ATOMICS_VOLATILE int64_t*) &value);
# ifdef __clang__
# pragma clang diagnostic pop
# endif
#elif BEAST_ATOMICS_WINDOWS
return sizeof (Type) == 4 ? (Type) beast_InterlockedDecrement ((volatile long*) &value)
: (Type) beast_InterlockedDecrement64 ((volatile __int64*) &value);
#elif BEAST_ATOMICS_GCC
return (Type) __sync_add_and_fetch (&value, (Type) -1);
#endif
}
template <typename Type>
inline bool Atomic<Type>::compareAndSetBool (const Type newValue, const Type valueToCompare) noexcept
{
#if BEAST_ATOMICS_MAC
return sizeof (Type) == 4 ? OSAtomicCompareAndSwap32Barrier ((int32_t) castTo32Bit (valueToCompare), (int32_t) castTo32Bit (newValue), (BEAST_MAC_ATOMICS_VOLATILE int32_t*) &value)
: OSAtomicCompareAndSwap64Barrier ((int64_t) castTo64Bit (valueToCompare), (int64_t) castTo64Bit (newValue), (BEAST_MAC_ATOMICS_VOLATILE int64_t*) &value);
#elif BEAST_ATOMICS_WINDOWS
return compareAndSetValue (newValue, valueToCompare) == valueToCompare;
#elif BEAST_ATOMICS_GCC
return sizeof (Type) == 4 ? __sync_bool_compare_and_swap ((volatile std::int32_t*) &value, castTo32Bit (valueToCompare), castTo32Bit (newValue))
: __sync_bool_compare_and_swap ((volatile std::int64_t*) &value, castTo64Bit (valueToCompare), castTo64Bit (newValue));
#endif
}
template <typename Type>
inline Type Atomic<Type>::compareAndSetValue (const Type newValue, const Type valueToCompare) noexcept
{
#if BEAST_ATOMICS_MAC
for (;;) // Annoying workaround for only having a bool CAS operation..
{
if (compareAndSetBool (newValue, valueToCompare))
return valueToCompare;
const Type result = value;
if (result != valueToCompare)
return result;
}
#elif BEAST_ATOMICS_WINDOWS
return sizeof (Type) == 4 ? castFrom32Bit ((std::int32_t) beast_InterlockedCompareExchange ((volatile long*) &value, (long) castTo32Bit (newValue), (long) castTo32Bit (valueToCompare)))
: castFrom64Bit ((std::int64_t) beast_InterlockedCompareExchange64 ((volatile __int64*) &value, (__int64) castTo64Bit (newValue), (__int64) castTo64Bit (valueToCompare)));
#elif BEAST_ATOMICS_GCC
return sizeof (Type) == 4 ? castFrom32Bit ((std::int32_t) __sync_val_compare_and_swap ((volatile std::int32_t*) &value, castTo32Bit (valueToCompare), castTo32Bit (newValue)))
: castFrom64Bit ((std::int64_t) __sync_val_compare_and_swap ((volatile std::int64_t*) &value, castTo64Bit (valueToCompare), castTo64Bit (newValue)));
#endif
}
inline void memoryBarrier() noexcept
{
#if BEAST_ATOMICS_MAC
OSMemoryBarrier();
#elif BEAST_ATOMICS_GCC
__sync_synchronize();
#elif BEAST_ATOMICS_WINDOWS
beast_MemoryBarrier();
#endif
}
#if BEAST_MSVC
#pragma warning (pop)
#endif
}
#endif
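
The compareAndSetBool/compareAndSetValue pair documented in this header is the familiar compare-and-swap primitive, and exchange() on the Mac/GCC paths is built as a retry loop around it. Below is a hedged sketch of the same retry pattern expressed with std::atomic, which this class predates; saturatingAdd and the counter are illustrative names only.

#include <atomic>
#include <cstdint>

// Add 'delta' to 'counter' but never exceed 'limit', using the classic
// load / compute / compare-and-swap retry loop described above.
std::int32_t saturatingAdd (std::atomic <std::int32_t>& counter,
                            std::int32_t delta, std::int32_t limit)
{
    std::int32_t current = counter.load();
    for (;;)
    {
        std::int32_t next = current + delta;
        if (next > limit)
            next = limit;
        // On failure 'current' is refreshed with the observed value,
        // mirroring the loop in Atomic<Type>::exchange.
        if (counter.compare_exchange_weak (current, next))
            return next;
    }
}

int main()
{
    std::atomic <std::int32_t> count {7};
    return saturatingAdd (count, 5, 10) == 10 ? 0 : 1;  // clamps at the limit
}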

View File

@@ -25,7 +25,6 @@
#define BEAST_BYTEORDER_H_INCLUDED
#include <beast/Config.h>
#include <beast/Uncopyable.h>
#include <cstdint>
@@ -35,7 +34,7 @@ namespace beast {
/** Contains static methods for converting the byte order between different
endiannesses.
*/
class ByteOrder : public Uncopyable
class ByteOrder
{
public:
//==============================================================================
@@ -105,6 +104,8 @@ public:
private:
ByteOrder();
ByteOrder(ByteOrder const&) = delete;
ByteOrder& operator= (ByteOrder const&) = delete;
};
//==============================================================================

View File

@@ -26,9 +26,6 @@
// VFALCO NOTE this is analogous to <boost/config.hpp>
// Assert to boost that we always have std::array support
#define BOOST_ASIO_HAS_STD_ARRAY 1
#if !defined(BEAST_COMPILER_CONFIG) && !defined(BEAST_NO_COMPILER_CONFIG) && !defined(BEAST_NO_CONFIG)
#include <beast/config/SelectCompilerConfig.h>
#endif

View File

@@ -20,11 +20,7 @@
#ifndef BEAST_CRYPTO_H_INCLUDED
#define BEAST_CRYPTO_H_INCLUDED
#include <beast/crypto/BinaryEncoding.h>
#include <beast/crypto/MurmurHash.h>
#include <beast/crypto/Sha256.h>
#include <beast/crypto/UnsignedInteger.h>
#include <beast/crypto/UnsignedIntegerCalc.h>
#endif

View File

@@ -29,7 +29,6 @@
#include <stdexcept>
#include <beast/Memory.h>
#include <beast/Uncopyable.h>
// If the MSVC debug heap headers were included, disable
// the macros during the juce include since they conflict.
@@ -122,7 +121,7 @@ namespace HeapBlockHelper
@see Array, MemoryBlock
*/
template <class ElementType, bool throwOnFailure = false>
class HeapBlock : public Uncopyable
class HeapBlock
{
public:
//==============================================================================
@@ -136,12 +135,6 @@ public:
{
}
HeapBlock (HeapBlock& other)
{
data = other.data;
other.data = nullptr;
}
/** Creates a HeapBlock containing a number of elements.
The contents of the block are undefined, as it will have been created by a
@@ -169,6 +162,10 @@ public:
throwOnAllocationFailure();
}
HeapBlock(HeapBlock const&) = delete;
HeapBlock& operator= (HeapBlock const&) = delete;
/** Destructor.
This will free the data, if any has been allocated.
*/

View File

@@ -27,7 +27,6 @@
#include <cstring>
#include <beast/Config.h>
#include <beast/Uncopyable.h>
namespace beast {
@@ -78,12 +77,16 @@ Type* createCopyIfNotNull (const Type* pointer)
/** A handy C++ wrapper that creates and deletes an NSAutoreleasePool object using RAII.
You should use the BEAST_AUTORELEASEPOOL macro to create a local auto-release pool on the stack.
*/
class ScopedAutoReleasePool : public Uncopyable
class ScopedAutoReleasePool
{
public:
ScopedAutoReleasePool();
~ScopedAutoReleasePool();
ScopedAutoReleasePool(ScopedAutoReleasePool const&) = delete;
ScopedAutoReleasePool& operator= (ScopedAutoReleasePool const&) = delete;
private:
void* pool;
};

View File

@@ -1,45 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of Beast: https://github.com/vinniefalco/Beast
Copyright 2013, Vinnie Falco <vinnie.falco@gmail.com>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef BEAST_STATICASSERT_H_INCLUDED
#define BEAST_STATICASSERT_H_INCLUDED
#ifndef DOXYGEN
namespace beast
{
template <bool b>
struct BeastStaticAssert;
template <>
struct BeastStaticAssert <true>
{
static void dummy() {}
};
}
#endif
/** A compile-time assertion macro.
If the expression parameter is false, the macro will cause a compile error.
(The actual error message that the compiler generates may be completely
bizarre and seem to have no relation to the place where you put the
static_assert though!)
*/
#define static_bassert(expression) beast::BeastStaticAssert<expression>::dummy();
#endif
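
The specialization trick above is how compile-time assertions were written before C++11: the primary template is left undefined, so instantiating it with false fails to compile. A hedged, standalone illustration follows (the names are invented for the example), together with the modern static_assert equivalent.

namespace demo
{
    template <bool Condition>
    struct CompileTimeCheck;                 // primary template: never defined

    template <>
    struct CompileTimeCheck <true>           // only the 'true' case exists
    {
        static void dummy() {}
    };
}

int main()
{
    demo::CompileTimeCheck <(sizeof (int) >= 4)>::dummy();      // compiles
    // demo::CompileTimeCheck <(sizeof (int) >= 100)>::dummy(); // would not compile
    static_assert (sizeof (int) >= 4, "int is too small");      // C++11 replacement
}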

View File

@@ -25,11 +25,9 @@
#include <beast/threads/SharedLockGuard.h>
#include <beast/threads/SharedMutexAdapter.h>
#include <beast/threads/SharedData.h>
#include <beast/threads/ServiceQueue.h>
#include <beast/threads/SpinLock.h>
#include <beast/threads/Stoppable.h>
#include <beast/threads/Thread.h>
#include <beast/threads/ThreadLocalValue.h>
#include <beast/threads/WaitableEvent.h>
#include <beast/threads/ScopedWrapperContext.h>

View File

@@ -1,77 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of Beast: https://github.com/vinniefalco/Beast
Copyright 2013, Vinnie Falco <vinnie.falco@gmail.com>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef BEAST_UNCOPYABLE_H_INCLUDED
#define BEAST_UNCOPYABLE_H_INCLUDED
namespace beast
{
// Ideas from boost
/** Prevent copy construction and assignment.
This is used to suppress warnings and prevent unsafe operations on
objects which cannot be passed by value. Ideas based on Boost.
For example, instead of
@code
class MyClass
{
public:
//...
private:
MyClass (const MyClass&);
MyClass& operator= (const MyClass&);
};
@endcode
..you can just write:
@code
class MyClass : public Uncopyable
{
public:
//...
};
@endcode
@note The derivation should be public or else child classes which
also derive from Uncopyable may not compile.
*/
class Uncopyable
{
protected:
Uncopyable () { }
~Uncopyable () { }
private:
Uncopyable (Uncopyable const&);
Uncopyable const& operator= (Uncopyable const&);
};
}
#endif
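
The other hunks in this changeset replace derivation from Uncopyable with the C++11 idiom of deleting the copy operations directly on each class, as the ByteOrder, HeapBlock and ScopedAutoReleasePool diffs above show. A minimal sketch of that replacement pattern, with Widget as an illustrative name:

class Widget
{
public:
    Widget() = default;

    // The '= delete' declarations take the place of inheriting Uncopyable.
    Widget (Widget const&) = delete;
    Widget& operator= (Widget const&) = delete;
};

int main()
{
    Widget a;
    (void) a;
    // Widget b (a);   // error: call to deleted copy constructor
    // a = Widget();   // error: copy (and implicit move) assignment unavailable
}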

View File

@@ -22,11 +22,8 @@
#endif
#include <beast/asio/impl/IPAddressConversion.cpp>
#include <beast/asio/tests/wrap_handler.test.cpp>
#include <beast/asio/impl/error.cpp>
#include <beast/asio/tests/bind_handler.test.cpp>
#include <beast/asio/tests/enable_wait_for_async.test.cpp>
#include <beast/asio/tests/shared_handler.test.cpp>
#include <beast/asio/abstract_socket.cpp> // TEMPORARY!
#include <beast/asio/tests/streambuf.test.cpp>
#include <beast/asio/tests/error_test.cpp>

View File

@@ -1,217 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of Beast: https://github.com/vinniefalco/Beast
Copyright 2013, Vinnie Falco <vinnie.falco@gmail.com>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <beast/asio/abstract_socket.h>
#include <beast/asio/bind_handler.h>
namespace beast {
namespace asio {
#if ! BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES
//------------------------------------------------------------------------------
//
// Socket
//
//------------------------------------------------------------------------------
void* abstract_socket::this_layer_ptr (char const*) const
{
pure_virtual_called ();
return nullptr;
}
//------------------------------------------------------------------------------
//
// native_handle
//
//------------------------------------------------------------------------------
bool abstract_socket::native_handle (char const*, void*)
{
pure_virtual_called ();
return false;
}
//------------------------------------------------------------------------------
//
// basic_io_object
//
//------------------------------------------------------------------------------
boost::asio::io_service& abstract_socket::get_io_service ()
{
pure_virtual_called ();
return *static_cast <boost::asio::io_service*>(nullptr);
}
//------------------------------------------------------------------------------
//
// basic_socket
//
//------------------------------------------------------------------------------
void*
abstract_socket::lowest_layer_ptr (char const*) const
{
pure_virtual_called ();
return nullptr;
}
auto
abstract_socket::cancel (boost::system::error_code& ec) -> error_code
{
return pure_virtual_error (ec);
}
auto
abstract_socket::shutdown (shutdown_type, boost::system::error_code& ec) -> error_code
{
return pure_virtual_error (ec);
}
auto
abstract_socket::close (boost::system::error_code& ec) -> error_code
{
return pure_virtual_error (ec);
}
//------------------------------------------------------------------------------
//
// basic_socket_acceptor
//
//------------------------------------------------------------------------------
auto
abstract_socket::accept (abstract_socket&, error_code& ec) -> error_code
{
return pure_virtual_error (ec);
}
void
abstract_socket::async_accept (abstract_socket&, error_handler handler)
{
get_io_service ().post (bind_handler (
handler, pure_virtual_error()));
}
//------------------------------------------------------------------------------
//
// basic_stream_socket
//
//------------------------------------------------------------------------------
std::size_t
abstract_socket::read_some (mutable_buffers, error_code& ec)
{
ec = pure_virtual_error ();
return 0;
}
std::size_t
abstract_socket::write_some (const_buffers, error_code& ec)
{
ec = pure_virtual_error ();
return 0;
}
void
abstract_socket::async_read_some (mutable_buffers, transfer_handler handler)
{
get_io_service ().post (bind_handler (
handler, pure_virtual_error(), 0));
}
void
abstract_socket::async_write_some (const_buffers, transfer_handler handler)
{
get_io_service ().post (bind_handler (
handler, pure_virtual_error(), 0));
}
//------------------------------------------------------------------------------
//
// ssl::stream
//
//------------------------------------------------------------------------------
void*
abstract_socket::next_layer_ptr (char const*) const
{
pure_virtual_called ();
return nullptr;
}
bool
abstract_socket::needs_handshake ()
{
return false;
}
void
abstract_socket::set_verify_mode (int)
{
pure_virtual_called ();
}
auto
abstract_socket::handshake (handshake_type, error_code& ec) -> error_code
{
return pure_virtual_error (ec);
}
void
abstract_socket::async_handshake (handshake_type, error_handler handler)
{
get_io_service ().post (bind_handler (
handler, pure_virtual_error()));
}
auto
abstract_socket::handshake (handshake_type, const_buffers, error_code& ec) ->
error_code
{
return pure_virtual_error (ec);
}
void
abstract_socket::async_handshake (handshake_type, const_buffers,
transfer_handler handler)
{
get_io_service ().post (bind_handler (
handler, pure_virtual_error(), 0));
}
auto
abstract_socket::shutdown (error_code& ec) -> error_code
{
return pure_virtual_error (ec);
}
void
abstract_socket::async_shutdown (error_handler handler)
{
get_io_service ().post (bind_handler (
handler, pure_virtual_error()));
}
#endif
}
}

View File

@@ -1,404 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of Beast: https://github.com/vinniefalco/Beast
Copyright 2013, Vinnie Falco <vinnie.falco@gmail.com>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef BEAST_ASIO_ABSTRACT_SOCKET_H_INCLUDED
#define BEAST_ASIO_ABSTRACT_SOCKET_H_INCLUDED
#include <beast/asio/buffer_sequence.h>
#include <beast/asio/shared_handler.h>
#include <boost/asio/io_service.hpp>
#include <boost/asio/socket_base.hpp>
#include <boost/asio/ssl/stream_base.hpp>
// Checking overrides replaces unimplemented stubs with pure virtuals
#ifndef BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES
# define BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES 1
#endif
#if BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES
# define BEAST_SOCKET_VIRTUAL = 0
#else
# define BEAST_SOCKET_VIRTUAL
#endif
namespace beast {
namespace asio {
/** A high level socket abstraction.
This combines the capabilities of multiple socket interfaces such
as listening, connecting, streaming, and handshaking. It brings
everything together into a single abstract interface.
When member functions are called and the underlying implementation does
not support the operation, a fatal error is generated.
*/
class abstract_socket
: public boost::asio::ssl::stream_base
, public boost::asio::socket_base
{
protected:
typedef boost::system::error_code error_code;
typedef asio::shared_handler <void (void)> post_handler;
typedef asio::shared_handler <void (error_code)> error_handler;
typedef asio::shared_handler <
void (error_code, std::size_t)> transfer_handler;
static
void
pure_virtual_called()
{
throw std::runtime_error ("pure virtual called");
}
static
error_code
pure_virtual_error ()
{
pure_virtual_called();
return boost::system::errc::make_error_code (
boost::system::errc::function_not_supported);
}
static
error_code
pure_virtual_error (error_code& ec)
{
return ec = pure_virtual_error();
}
static
void
throw_if (error_code const& ec)
{
if (ec)
throw boost::system::system_error (ec);
}
public:
virtual ~abstract_socket ()
{
}
//--------------------------------------------------------------------------
//
// abstract_socket
//
//--------------------------------------------------------------------------
/** Retrieve the underlying object.
@note If the type doesn't match, nullptr is returned or an
exception is thrown if trying to acquire a reference.
*/
/** @{ */
template <class Object>
Object& this_layer ()
{
Object* object (this->this_layer_ptr <Object> ());
if (object == nullptr)
throw std::bad_cast ();
return *object;
}
template <class Object>
Object const& this_layer () const
{
Object const* object (this->this_layer_ptr <Object> ());
if (object == nullptr)
throw std::bad_cast ();
return *object;
}
template <class Object>
Object* this_layer_ptr ()
{
return static_cast <Object*> (
this->this_layer_ptr (typeid (Object).name ()));
}
template <class Object>
Object const* this_layer_ptr () const
{
return static_cast <Object const*> (
this->this_layer_ptr (typeid (Object).name ()));
}
/** @} */
virtual void* this_layer_ptr (char const* type_name) const
BEAST_SOCKET_VIRTUAL;
//--------------------------------------------------------------------------
//
// native_handle
//
//--------------------------------------------------------------------------
/** Retrieve the native representation of the object.
Since we don't know the return type, and because almost every
asio implementation passes the result by value, you need to provide
a pointer to a default-constructed object of the matching type.
@note If the type doesn't match, an exception is thrown.
*/
template <typename Handle>
void native_handle (Handle* dest)
{
if (! native_handle (typeid (Handle).name (), dest))
throw std::bad_cast ();
}
virtual bool native_handle (char const* type_name, void* dest)
BEAST_SOCKET_VIRTUAL;
//--------------------------------------------------------------------------
//
// basic_io_object
//
//--------------------------------------------------------------------------
virtual boost::asio::io_service& get_io_service ()
BEAST_SOCKET_VIRTUAL;
//--------------------------------------------------------------------------
//
// basic_socket
//
//--------------------------------------------------------------------------
/** Retrieve the lowest layer object.
@note If the type doesn't match, nullptr is returned or an
exception is thrown if trying to acquire a reference.
*/
/** @{ */
template <class Object>
Object& lowest_layer ()
{
Object* object (this->lowest_layer_ptr <Object> ());
if (object == nullptr)
throw std::bad_cast ();
return *object;
}
template <class Object>
Object const& lowest_layer () const
{
Object const* object (this->lowest_layer_ptr <Object> ());
if (object == nullptr)
throw std::bad_cast ();
return *object;
}
template <class Object>
Object* lowest_layer_ptr ()
{
return static_cast <Object*> (
this->lowest_layer_ptr (typeid (Object).name ()));
}
template <class Object>
Object const* lowest_layer_ptr () const
{
return static_cast <Object const*> (
this->lowest_layer_ptr (typeid (Object).name ()));
}
/** @} */
virtual void* lowest_layer_ptr (char const* type_name) const
BEAST_SOCKET_VIRTUAL;
//--------------------------------------------------------------------------
void cancel ()
{
error_code ec;
cancel (ec);
throw_if (ec);
}
virtual error_code cancel (error_code& ec)
BEAST_SOCKET_VIRTUAL;
void shutdown (shutdown_type what)
{
error_code ec;
shutdown (what, ec);
throw_if (ec);
}
virtual error_code shutdown (shutdown_type what,
error_code& ec)
BEAST_SOCKET_VIRTUAL;
void close ()
{
error_code ec;
close (ec);
throw_if (ec);
}
virtual error_code close (error_code& ec)
BEAST_SOCKET_VIRTUAL;
//--------------------------------------------------------------------------
//
// basic_socket_acceptor
//
//--------------------------------------------------------------------------
virtual error_code accept (abstract_socket& peer, error_code& ec)
BEAST_SOCKET_VIRTUAL;
virtual void async_accept (abstract_socket& peer, error_handler handler)
BEAST_SOCKET_VIRTUAL;
//--------------------------------------------------------------------------
//
// basic_stream_socket
//
//--------------------------------------------------------------------------
virtual std::size_t read_some (mutable_buffers buffers, error_code& ec)
BEAST_SOCKET_VIRTUAL;
virtual std::size_t write_some (const_buffers buffers, error_code& ec)
BEAST_SOCKET_VIRTUAL;
virtual void async_read_some (mutable_buffers buffers,
transfer_handler handler)
BEAST_SOCKET_VIRTUAL;
virtual void async_write_some (const_buffers buffers,
transfer_handler handler)
BEAST_SOCKET_VIRTUAL;
//--------------------------------------------------------------------------
//
// ssl::stream
//
//--------------------------------------------------------------------------
/** Retrieve the next layer object.
@note If the type doesn't match, nullptr is returned or an
exception is thrown if trying to acquire a reference.
*/
/** @{ */
template <class Object>
Object& next_layer ()
{
Object* object (this->next_layer_ptr <Object> ());
if (object == nullptr)
throw std::bad_cast ();
return *object;
}
template <class Object>
Object const& next_layer () const
{
Object const* object (this->next_layer_ptr <Object> ());
if (object == nullptr)
throw std::bad_cast ();
return *object;
}
template <class Object>
Object* next_layer_ptr ()
{
return static_cast <Object*> (
this->next_layer_ptr (typeid (Object).name ()));
}
template <class Object>
Object const* next_layer_ptr () const
{
return static_cast <Object const*> (
this->next_layer_ptr (typeid (Object).name ()));
}
/** @} */
virtual void* next_layer_ptr (char const* type_name) const
BEAST_SOCKET_VIRTUAL;
/** Determines if the underlying stream requires a handshake.
If needs_handshake is true, it will be necessary to call handshake or
async_handshake after the connection is established. Furthermore it
will be necessary to call the shutdown member from the
HandshakeInterface to close the connection. Do not close the underlying
socket or else the closure will not be graceful. Only one side should
initiate the handshaking shutdown. The other side should observe it.
Which side does what is up to the user.
The default version returns false.
*/
virtual bool needs_handshake ()
BEAST_SOCKET_VIRTUAL;
virtual void set_verify_mode (int verify_mode)
BEAST_SOCKET_VIRTUAL;
void handshake (handshake_type type)
{
error_code ec;
handshake (type, ec);
throw_if (ec);
}
virtual error_code handshake (handshake_type type, error_code& ec)
BEAST_SOCKET_VIRTUAL;
virtual void async_handshake (handshake_type type, error_handler handler)
BEAST_SOCKET_VIRTUAL;
//--------------------------------------------------------------------------
virtual error_code handshake (handshake_type type,
const_buffers buffers, error_code& ec)
BEAST_SOCKET_VIRTUAL;
virtual void async_handshake (handshake_type type,
const_buffers buffers, transfer_handler handler)
BEAST_SOCKET_VIRTUAL;
//--------------------------------------------------------------------------
void shutdown ()
{
error_code ec;
shutdown (ec);
throw_if (ec);
}
virtual error_code shutdown (error_code& ec)
BEAST_SOCKET_VIRTUAL;
virtual void async_shutdown (error_handler handler)
BEAST_SOCKET_VIRTUAL;
};
}
}
#endif
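
The this_layer / lowest_layer / next_layer accessors above all follow the same type-recovery scheme: a virtual hook compares typeid names and returns the concrete object or nullptr, and the templated front end turns a mismatch into std::bad_cast. The following is a hedged, standalone rendering of that scheme; the class names are invented for illustration and are not part of the header.

#include <cstring>
#include <typeinfo>

class any_layer
{
public:
    virtual ~any_layer() = default;

    template <class Object>
    Object& this_layer()
    {
        Object* object = static_cast <Object*> (
            this_layer_ptr (typeid (Object).name()));
        if (object == nullptr)
            throw std::bad_cast();
        return *object;
    }

protected:
    // Returns the concrete object when 'type_name' matches, else nullptr.
    virtual void* this_layer_ptr (char const* type_name) = 0;
};

class tcp_layer : public any_layer
{
    void* this_layer_ptr (char const* type_name) override
    {
        return std::strcmp (type_name, typeid (tcp_layer).name()) == 0
            ? this : nullptr;
    }
};

int main()
{
    tcp_layer socket;
    any_layer& erased = socket;
    tcp_layer& recovered = erased.this_layer <tcp_layer>();  // ok
    (void) recovered;
    // erased.this_layer <any_layer>();  // would throw std::bad_cast
}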

View File

@@ -56,6 +56,7 @@ private:
public:
typedef void result_type;
explicit
bound_handler (DeducedHandler&& handler, Args&&... args)
: m_handler (std::forward <DeducedHandler> (handler))
, m_args (std::forward <Args> (args)...)

View File

@@ -1,126 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of Beast: https://github.com/vinniefalco/Beast
Copyright 2013, Vinnie Falco <vinnie.falco@gmail.com>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef BEAST_ASIO_BUFFER_SEQUENCE_H_INCLUDED
#define BEAST_ASIO_BUFFER_SEQUENCE_H_INCLUDED
#include <boost/asio/buffer.hpp>
#include <beast/utility/noexcept.h>
#include <algorithm>
#include <iterator>
#include <beast/cxx14/type_traits.h> // <type_traits>
#include <vector>
namespace beast {
namespace asio {
template <class Buffer>
class buffer_sequence
{
private:
typedef std::vector <Buffer> sequence_type;
public:
typedef Buffer value_type;
typedef typename sequence_type::const_iterator const_iterator;
private:
sequence_type m_buffers;
template <class FwdIter>
void assign (FwdIter first, FwdIter last)
{
m_buffers.clear();
m_buffers.reserve (std::distance (first, last));
for (;first != last; ++first)
m_buffers.push_back (*first);
}
public:
buffer_sequence ()
{
}
template <
class BufferSequence,
class = std::enable_if_t <std::is_constructible <
Buffer, typename BufferSequence::value_type>::value>
>
buffer_sequence (BufferSequence const& s)
{
assign (std::begin (s), std::end (s));
}
template <
class FwdIter,
class = std::enable_if_t <std::is_constructible <
Buffer, typename std::iterator_traits <
FwdIter>::value_type>::value>
>
buffer_sequence (FwdIter first, FwdIter last)
{
assign (first, last);
}
template <class BufferSequence>
std::enable_if_t <std::is_constructible <
Buffer, typename BufferSequence::value_type>::value,
buffer_sequence&
>
operator= (BufferSequence const& s)
{
return assign (s);
}
const_iterator
begin () const noexcept
{
return m_buffers.begin ();
}
const_iterator
end () const noexcept
{
return m_buffers.end ();
}
#if 0
template <class ConstBufferSequence>
void
assign (ConstBufferSequence const& buffers)
{
auto const n (std::distance (
std::begin (buffers), std::end (buffers)));
for (int i = 0, auto iter (std::begin (buffers));
iter != std::end (buffers); ++iter, ++i)
m_buffers[i] = Buffer (boost::asio::buffer_cast <void*> (
*iter), boost::asio::buffer_size (*iter));
}
#endif
};
typedef buffer_sequence <boost::asio::const_buffer> const_buffers;
typedef buffer_sequence <boost::asio::mutable_buffer> mutable_buffers;
}
}
#endif
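
buffer_sequence simply snapshots the pointer/length descriptors of any Asio buffer sequence into a std::vector so a heterogeneous sequence can be stored behind the type-erased socket interface. A hedged illustration of that idea using plain Boost.Asio types follows; it assumes Boost.Asio is available and the variable names are illustrative.

#include <boost/asio/buffer.hpp>
#include <array>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::string a = "hello, ";
    std::string b = "world";
    std::array <boost::asio::const_buffer, 2> sequence {{
        boost::asio::buffer (a), boost::asio::buffer (b) }};

    // The equivalent of const_buffers (sequence): copy the descriptors, not
    // the bytes; a buffer_sequence only stores pointer/length pairs.
    std::vector <boost::asio::const_buffer> flattened (
        sequence.begin(), sequence.end());

    std::cout << boost::asio::buffer_size (flattened) << " bytes\n";  // 12 bytes
}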

View File

@@ -1,265 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of Beast: https://github.com/vinniefalco/Beast
Copyright 2013, Vinnie Falco <vinnie.falco@gmail.com>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef BEAST_ASIO_ENABLE_WAIT_FOR_ASYNC_H_INCLUDED
#define BEAST_ASIO_ENABLE_WAIT_FOR_ASYNC_H_INCLUDED
#include <beast/asio/wrap_handler.h>
#include <beast/utility/is_call_possible.h>
#include <boost/asio/detail/handler_alloc_helpers.hpp>
#include <boost/asio/detail/handler_cont_helpers.hpp>
#include <boost/asio/detail/handler_invoke_helpers.hpp>
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <beast/cxx14/type_traits.h> // <type_traits>
namespace beast {
namespace asio {
namespace detail {
template <class Owner, class Handler>
class ref_counted_wrapped_handler
{
private:
static_assert (std::is_same <std::decay_t <Owner>, Owner>::value,
"Owner cannot be a const or reference type");
Handler m_handler;
std::reference_wrapper <Owner> m_owner;
bool m_continuation;
public:
ref_counted_wrapped_handler (Owner& owner,
Handler&& handler, bool continuation)
: m_handler (std::move (handler))
, m_owner (owner)
, m_continuation (continuation ? true :
boost_asio_handler_cont_helpers::is_continuation (m_handler))
{
m_owner.get().increment();
}
ref_counted_wrapped_handler (Owner& owner,
Handler const& handler, bool continuation)
: m_handler (handler)
, m_owner (owner)
, m_continuation (continuation ? true :
boost_asio_handler_cont_helpers::is_continuation (m_handler))
{
m_owner.get().increment();
}
~ref_counted_wrapped_handler ()
{
m_owner.get().decrement();
}
ref_counted_wrapped_handler (ref_counted_wrapped_handler const& other)
: m_handler (other.m_handler)
, m_owner (other.m_owner)
, m_continuation (other.m_continuation)
{
m_owner.get().increment();
}
ref_counted_wrapped_handler (ref_counted_wrapped_handler&& other)
: m_handler (std::move (other.m_handler))
, m_owner (other.m_owner)
, m_continuation (other.m_continuation)
{
m_owner.get().increment();
}
ref_counted_wrapped_handler& operator= (
ref_counted_wrapped_handler const&) = delete;
template <class... Args>
void
operator() (Args&&... args)
{
m_handler (std::forward <Args> (args)...);
}
template <class... Args>
void
operator() (Args&&... args) const
{
m_handler (std::forward <Args> (args)...);
}
template <class Function>
friend
void
asio_handler_invoke (Function& f,
ref_counted_wrapped_handler* h)
{
boost_asio_handler_invoke_helpers::
invoke (f, h->m_handler);
}
template <class Function>
friend
void
asio_handler_invoke (Function const& f,
ref_counted_wrapped_handler* h)
{
boost_asio_handler_invoke_helpers::
invoke (f, h->m_handler);
}
friend
void*
asio_handler_allocate (std::size_t size,
ref_counted_wrapped_handler* h)
{
return boost_asio_handler_alloc_helpers::
allocate (size, h->m_handler);
}
friend
void
asio_handler_deallocate (void* p, std::size_t size,
ref_counted_wrapped_handler* h)
{
boost_asio_handler_alloc_helpers::
deallocate (p, size, h->m_handler);
}
friend
bool
asio_handler_is_continuation (ref_counted_wrapped_handler* h)
{
return h->m_continuation;
}
};
}
//------------------------------------------------------------------------------
/** Facilitates blocking until no completion handlers are remaining.
If Derived has this member function:
@code
void on_wait_for_async (void)
@endcode
Then it will be called every time the number of pending completion
handlers transitions to zero from a non-zero value. The call is made
while holding the internal mutex.
*/
template <class Derived>
class enable_wait_for_async
{
private:
BEAST_DEFINE_IS_CALL_POSSIBLE(
has_on_wait_for_async,on_wait_for_async);
void increment()
{
std::lock_guard <decltype(m_mutex)> lock (m_mutex);
++m_count;
}
void notify (std::true_type)
{
static_cast <Derived*> (this)->on_wait_for_async();
}
void notify (std::false_type)
{
}
void decrement()
{
std::lock_guard <decltype(m_mutex)> lock (m_mutex);
--m_count;
if (m_count == 0)
{
m_cond.notify_all();
notify (std::integral_constant <bool,
has_on_wait_for_async<Derived, void(void)>::value>());
}
}
template <class Owner, class Handler>
friend class detail::ref_counted_wrapped_handler;
std::mutex m_mutex;
std::condition_variable m_cond;
std::size_t m_count;
public:
/** Blocks if there are any pending completion handlers. */
void
wait_for_async()
{
std::unique_lock <decltype (m_mutex)> lock (m_mutex);
while (m_count != 0)
m_cond.wait (lock);
}
protected:
enable_wait_for_async()
: m_count (0)
{
}
~enable_wait_for_async()
{
assert (m_count == 0);
}
/** Wraps the specified handler so it can be counted. */
/** @{ */
template <class Handler>
detail::ref_counted_wrapped_handler <
enable_wait_for_async,
std::remove_reference_t <Handler>
>
wrap_with_counter (Handler&& handler, bool continuation = false)
{
return detail::ref_counted_wrapped_handler <enable_wait_for_async,
std::remove_reference_t <Handler>> (*this,
std::forward <Handler> (handler), continuation);
}
template <class Handler>
detail::ref_counted_wrapped_handler <
enable_wait_for_async,
std::remove_reference_t <Handler>
>
wrap_with_counter (continuation_t, Handler&& handler)
{
return detail::ref_counted_wrapped_handler <enable_wait_for_async,
std::remove_reference_t <Handler>> (*this,
std::forward <Handler> (handler), true);
}
/** @} */
};
}
}
#endif
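
The mixin above counts outstanding completion handlers so a caller can block in wait_for_async() until all of them have run. Below is a hedged, standalone sketch of the same counting technique, stripped of the asio handler hooks; async_waiter and the surrounding names are invented for the example.

#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>

class async_waiter
{
public:
    // Wrap a callable so it counts as pending until it has been invoked.
    template <class F>
    std::function <void()> wrap (F f)
    {
        increment();
        return [this, f]
        {
            f();
            decrement();   // last handler out wakes any waiter
        };
    }

    // Blocks while any wrapped handler has not yet completed.
    void wait_for_async()
    {
        std::unique_lock <std::mutex> lock (mutex_);
        condition_.wait (lock, [this] { return count_ == 0; });
    }

private:
    void increment()
    {
        std::lock_guard <std::mutex> lock (mutex_);
        ++count_;
    }

    void decrement()
    {
        std::lock_guard <std::mutex> lock (mutex_);
        if (--count_ == 0)
            condition_.notify_all();
    }

    std::mutex mutex_;
    std::condition_variable condition_;
    std::size_t count_ = 0;
};

int main()
{
    async_waiter waiter;
    std::thread worker (waiter.wrap ([] { std::cout << "handler ran\n"; }));
    waiter.wait_for_async();   // returns only after the wrapped handler completed
    worker.join();
}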

View File

@@ -0,0 +1,45 @@
//------------------------------------------------------------------------------
/*
This file is part of Beast: https://github.com/vinniefalco/Beast
Copyright 2013, Vinnie Falco <vinnie.falco@gmail.com>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef BEAST_ASIO_ERROR_H_INCLUDED
#define BEAST_ASIO_ERROR_H_INCLUDED
#include <boost/asio.hpp>
#include <boost/asio/ssl/error.hpp>
namespace beast {
namespace asio {
/** Returns `true` if the error code is an SSL "short read." */
inline
bool
is_short_read (boost::system::error_code const& ec)
{
return (ec.category() == boost::asio::error::get_ssl_category())
&& (ERR_GET_REASON(ec.value()) == SSL_R_SHORT_READ);
}
/** Returns a human readable message if the error code is SSL related. */
std::string
asio_message(boost::system::error_code const& ec);
}
}
#endif
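
A common call-site pattern for the helper above: when an SSL peer closes the TCP connection without sending close_notify, the read completes with a "short read" error, which callers often treat like an ordinary end-of-stream. The sketch below is illustrative only and assumes the OpenSSL versions contemporary with this code (SSL_R_SHORT_READ was removed in later OpenSSL releases).

#include <boost/asio/error.hpp>
#include <boost/asio/ssl/error.hpp>
#include <boost/system/error_code.hpp>
#include <openssl/err.h>
#include <openssl/ssl.h>

// Decide whether a completed read should be handled as end-of-stream.
bool treat_as_eof (boost::system::error_code const& ec)
{
    // Plain TCP end-of-file.
    if (ec == boost::asio::error::eof)
        return true;

    // SSL "short read": the peer dropped the connection without close_notify.
    if (ec.category() == boost::asio::error::get_ssl_category()
        && ERR_GET_REASON (ec.value()) == SSL_R_SHORT_READ)
        return true;

    return false;
}

int main()
{
    boost::system::error_code ec = boost::asio::error::eof;
    return treat_as_eof (ec) ? 0 : 1;
}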

View File

@@ -0,0 +1,59 @@
//------------------------------------------------------------------------------
/*
This file is part of Beast: https://github.com/vinniefalco/Beast
Copyright 2013, Vinnie Falco <vinnie.falco@gmail.com>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <beast/asio/error.h>
#include <boost/lexical_cast.hpp>
namespace beast {
namespace asio {
// This buffer must be at least 120 bytes; most examples use 256.
// https://www.openssl.org/docs/crypto/ERR_error_string.html
static std::uint32_t const errorBufferSize (256);
std::string
asio_message (boost::system::error_code const& ec)
{
std::string error;
if (ec.category () == boost::asio::error::get_ssl_category ())
{
error = " ("
+ boost::lexical_cast<std::string> (ERR_GET_LIB (ec.value ()))
+ ","
+ boost::lexical_cast<std::string> (ERR_GET_FUNC (ec.value ()))
+ ","
+ boost::lexical_cast<std::string> (ERR_GET_REASON (ec.value ()))
+ ") ";
//
char buf[errorBufferSize];
::ERR_error_string_n (ec.value (), buf, errorBufferSize);
error += buf;
}
else
{
error = ec.message ();
}
return error;
}
}
}

Some files were not shown because too many files have changed in this diff.