Compare commits

...

727 Commits

Author SHA1 Message Date
Nik Bougalis
70c2854f7c Set version to 0.27.3 2015-03-10 14:06:33 -07:00
David Schwartz
e9381ddeb2 Add "Default Ripple" account flag and associated logic:
AccountSet set/clear, asfDefaultRipple = 8

AccountRoot flag, lsfDefaultRipple = 0x00800000

In trustCreate, set no ripple flag if appropriate.

If an account does not have the Default Ripple flag set,
new ripple lines created as a result of its offers being
taken, or of other accounts creating trust lines to it,
automatically have No Ripple set on that account's side.

Trust lines can be deleted if their No Ripple flag matches
the default implied by the account's Default Ripple
setting.

Fix default no-rippling in integration tests.
2015-03-10 14:05:18 -07:00
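The flag values above map an AccountSet transaction flag to an AccountRoot
ledger flag. A minimal sketch of that relationship, using the constants from
the commit message (the helper name is hypothetical, not rippled code):

    #include <cstdint>

    std::uint32_t const asfDefaultRipple = 8;           // AccountSet SetFlag / ClearFlag value
    std::uint32_t const lsfDefaultRipple = 0x00800000;  // AccountRoot ledger flag bit

    // Hypothetical helper: when a trust line is created automatically (an offer
    // is taken, or another account extends trust), the line gets No Ripple on
    // this account's side unless the account has enabled Default Ripple.
    bool setNoRippleOnNewLine (std::uint32_t accountRootFlags)
    {
        return (accountRootFlags & lsfDefaultRipple) == 0;
    }
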
Nik Bougalis
9cc8eec773 Set version to 0.27.2 2015-03-01 14:56:44 -08:00
Nik Bougalis
0b45535061 Calculate deep offer quality 2015-02-28 13:28:54 -08:00
Vinnie Falco
95973ba3e8 Set version to 0.27.1 2015-02-24 13:31:13 -08:00
Vinnie Falco
ac64731d55 Set version to 0.27.1-rc3 2015-02-12 12:59:48 -08:00
Vinnie Falco
5dc064e971 Revert "Disable Overlay socket handoffs"
This reverts commit 8eb05d0950.
2015-02-12 12:59:48 -08:00
Vinnie Falco
c24732ed4e Fix streambuf bug:
The buffers_type::iterator could hold a pointer to a buffers_type that
was destroyed. This changes buffers_type::iterator to point to the
original streambuf instead, which always outlives the iterator.
2015-02-12 12:59:35 -08:00
Vinnie Falco
ba710bee86 Reject invalid requests on peer port sooner. 2015-02-12 12:43:42 -08:00
Vinnie Falco
c20392ca80 Set version to 0.27.1-rc2 2015-02-11 20:53:27 -08:00
Vinnie Falco
8eb05d0950 Disable Overlay socket handoffs 2015-02-11 20:53:27 -08:00
Vinnie Falco
0f94e2c0c3 Set version to 0.27.1-rc1 2015-02-10 16:22:14 -08:00
Vinnie Falco
a25508b98d Add RocksDB to nudb import tool (RIPD-781,785):
This custom tool is specifically designed for very fast import of
RocksDB nodestore databases into NuDB.
2015-02-10 16:22:14 -08:00
Vinnie Falco
e825433a38 NuDB: Use nodeobject codec in Backend (RIPD-793):
This adds codecs for snappy and lz4, and a new nodeobject codec. The
nodeobject codec provides a highly efficient custom compression scheme
for inner nodes, which make up the majority of nodestore databases.
Non-inner-node objects are compressed using lz4.

The NuDB backend is modified to use the nodeobject codec. This change
is not backward compatible - older NuDB databases cannot be opened or
imported.
2015-02-10 16:22:13 -08:00
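As a reference for the lz4 path mentioned above, here is a minimal sketch of
compressing a blob with the lz4 C API. It uses the current entry points
(LZ4_compressBound, LZ4_compress_default), which may differ slightly from the
bundled snapshot, and is not the nodeobject codec itself:

    #include <lz4.h>
    #include <stdexcept>
    #include <string>
    #include <vector>

    std::vector<char> lz4Compress (std::string const& in)
    {
        // Worst-case output size for the given input size.
        std::vector<char> out (LZ4_compressBound (static_cast<int> (in.size ())));
        int const n = LZ4_compress_default (in.data (), out.data (),
            static_cast<int> (in.size ()), static_cast<int> (out.size ()));
        if (n <= 0)
            throw std::runtime_error ("lz4 compression failed");
        out.resize (n);
        return out;
    }
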
Vinnie Falco
d1c08889fe Remove obsolete NodeObject fields:
Legacy fields of NodeObject are removed, as they are no longer
used, and omitting them saves space:

* Remove LedgerIndex
2015-02-10 16:22:13 -08:00
Vinnie Falco
0b82b5a0d6 NuDB: Performance improvements (RIPD-793,796):
This introduces changes in nudb to improve speed, reduce database size,
and enhance correctness. The most significant change is to store hashes
rather than entire keys in the key file. The output of the hash function
is reduced to 48 bits, and stored directly in buckets.

The API is also modified to introduce a Codec parameter allowing for
compression and decompression to be supported in the implementation
itself rather than callers.

The data file no longer contains a salt, as the salt is applicable
only to the key and log files. This allows a data file to have multiple
key files with different salt values. To distinguish physical files
belonging to the same logical database, a new field UID is introduced.
The UID is a 64-bit random value generated once on creation and stored
in all three files.

Buckets are zero-filled to the end of each block; this is a security
measure to prevent unintended contents of memory from getting stored to
disk. NuDB offers the varint integer type, which is identical to
the varint described by Google.

* Add varint
* Add Codec template argument
* Add "api" convenience traits
* Store hash in buckets
* istream can throw short read errors
* Support std::uint8_t format in streams
* Make file classes part of the public interface
* Remove buffers pessimization, replace with buffer
* Consolidate creation utility functions to the same header
* Zero fill unused areas of buckets on disk
* More coverage and improvements to the recover test
* Fix file read/write to loop until all bytes processed
* Add verify_fast, faster verify for large databases

The database version number is incremented to 2; older databases can
no longer be opened and should be deleted.
2015-02-10 16:22:13 -08:00
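The varint mentioned above follows Google's base-128 encoding: seven payload
bits per byte, least significant group first, with the high bit set on every
byte except the last. A small sketch of the encoder (illustrative, not the
NuDB implementation):

    #include <cstdint>
    #include <vector>

    std::vector<std::uint8_t> encodeVarint (std::uint64_t v)
    {
        std::vector<std::uint8_t> out;
        while (v >= 0x80)
        {
            out.push_back (static_cast<std::uint8_t> (v) | 0x80); // more bytes follow
            v >>= 7;
        }
        out.push_back (static_cast<std::uint8_t> (v));            // final byte, high bit clear
        return out;
    }
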
Vinnie Falco
a33d0d4fb6 Add general delimiter split() to rfc2616 2015-02-08 19:43:37 -08:00
Vinnie Falco
ba42334d36 Add lz4
Conflicts:
	Builds/VisualStudio2013/RippleD.vcxproj
	Builds/VisualStudio2013/RippleD.vcxproj.filters
2015-02-08 19:43:26 -08:00
Vinnie Falco
2e62641aa4 Merge commit 'dad460dcfc8466a8e1c59529d490829fcfaeee73' as 'src/lz4' 2015-02-08 19:43:04 -08:00
Vinnie Falco
dad460dcfc Squashed 'src/lz4/' content from commit e25b51d
git-subtree-dir: src/lz4
git-subtree-split: e25b51de7b51101e04ceea194dd557fcc23c03ca
2015-02-08 19:43:04 -08:00
Vinnie Falco
3ae23b6a54 Remove spurious call to fetch in NuDBBackend 2015-02-08 19:42:07 -08:00
Vinnie Falco
97126f18b1 Add /crawl cgi request feature to peer protocol (RIPD-729):
This adds support for a cgi /crawl request, issued over HTTPS to the configured
peer protocol port. The response to the request is a JSON object containing
the node public key, type, and IP address of each directly connected neighbor.
The IP address is suppressed unless the neighbor has requested its address
to be revealed by adding "Crawl: public" to its HTTP headers. This field is
currently set by the peer_private option in the rippled.cfg file.
2015-02-08 19:42:01 -08:00
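To illustrate the response shape described above, here is a sketch that builds
such an object with a jsoncpp-style Json::Value; the key names and the helper
are hypothetical, not the exact output of the /crawl handler:

    // Assumes a jsoncpp-style Json::Value, as used elsewhere in the codebase.
    Json::Value crawlReply ()
    {
        Json::Value reply (Json::objectValue);
        Json::Value& active = reply["active"] = Json::Value (Json::arrayValue);

        Json::Value peer (Json::objectValue);
        peer["public_key"] = "n9K...";      // node public key (truncated, illustrative)
        peer["type"]       = "peer";        // connection type
        peer["ip"]         = "192.0.2.1";   // only included if the neighbor sent "Crawl: public"
        active.append (peer);
        return reply;
    }
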
Vinnie Falco
3838d222c2 NuDB: limit size of mempool (RIPD-787):
Insert now blocks when the size of the memory pool exceeds a predefined
threshold. This solves the problem where sustained insertions cause the
memory pool to grow without bound.
2015-02-08 19:41:54 -08:00
Vinnie Falco
96c3292210 Add missing include 2015-02-08 19:41:48 -08:00
Vinnie Falco
b0fd92cb3f Fix unsafe iterator dereference in PeerFinder 2015-02-08 19:41:43 -08:00
Vinnie Falco
6276c55cc9 Set version to 0.27.0-sp1 2015-02-03 14:58:02 -08:00
Vinnie Falco
afa6ff7c4b Revert RocksDB backend settings:
This reverts the change that made the RocksDBQuick settings the default for
node_db "type=rocksdb". The quick settings can still be obtained by setting
"type=rocksdbquick".

RocksDBQuick settings are implicated in memory over-utilization problems
seen recently.
2015-02-03 14:57:54 -08:00
Vinnie Falco
c6c8e5d70c Set version to 0.27.0 2015-01-26 10:56:11 -08:00
Nik Bougalis
fa354ec8d9 Set version to 0.27.0-b11 2015-01-23 17:37:18 -08:00
Nik Bougalis
d0375f697d Invoke correct deleter 2015-01-23 17:34:30 -08:00
Vinnie Falco
33c8257d25 Set version to 0.27.0-b10 2015-01-21 15:25:11 -08:00
Scott Determan
f389bc33c3 VSProject: Handle tuples in CPPDEFINES:
The VSProject generator now handles tuples in addition to strings and
dicts when converting environment variables such as CPPDEFINES.
2015-01-21 15:20:06 -08:00
Vinnie Falco
4d5dca71ce Squelch Peerfinder fixed connection attempts 2015-01-21 14:59:47 -08:00
Vinnie Falco
a9c44a1b9c Set version to 0.27.0-b9 2015-01-21 14:23:50 -08:00
Vinnie Falco
4144f800a1 Fix PeerImp concurrent access of socket:
The PeerImp::run launch function is now dispatched on the strand to prevent
undefined behavior resulting from concurrent access to the ssl::stream object.
2015-01-21 14:21:43 -08:00
Vinnie Falco
6ef9a81017 Set version to 0.27.0-b8 2015-01-21 14:21:43 -08:00
Vinnie Falco
8c6722f3c5 Remove use of date and time from rocksdb unity build 2015-01-21 14:21:43 -08:00
Vinnie Falco
40e138627b Fixes to Overlay:
* Make ~Peer virtual
* Call close in ConnectAttempt::stop
* Handle nullptr return in new_outbound_slot
* Check gracefulClose_ in read loop
2015-01-21 10:48:32 -08:00
Vinnie Falco
a470dda4e6 Fix Journal::Stream::active to return the correct value 2015-01-21 10:48:32 -08:00
Edward Hennis
b725410623 Option to specify rippled path on command line.
* --rippled=<absolute or relative path>
* Works for "npm test" and "mocha"
* Remove "rippled_path" from config.js to require CLI path.
2015-01-21 10:48:31 -08:00
Miguel Portilla
a9dfb33126 Log abnormal close time offsets (RIPD-572) 2015-01-21 10:48:31 -08:00
Miguel Portilla
8b848770dc Add config "ledger_history_index" functionality (RIPD-559) 2015-01-21 10:48:31 -08:00
Vinnie Falco
94629edb9b Add NuDB backend:
The NuDB database backend is a high performance key/value store presented
as an alternative to RocksDB on Mac and Linux deployments, and the preferred
backend option for Windows deployments. The LevelDB backend is deprecated for
all platforms.

This includes these changes:

* Add Backend::verify API for doing consistency checks
* Add Database::close so caller can catch exceptions
* Improved Timing test for NodeStore creates a simulated workload
2015-01-21 10:48:30 -08:00
Vinnie Falco
2a3f2ca28d Add NuDB: A Key/Value Store For Decentralized Systems
NuDB is a high performance key/value database optimized for insert-only
workloads, with these features:

* Low memory footprint
* Values are immutable
* Value sizes from 1 to 2^48 bytes (281TB)
* All keys are the same size
* Performance independent of growth
* Optimized for concurrent fetch
* Key file can be rebuilt if needed
* Inserts are atomic and consistent
* Data file may be iterated, index rebuilt.
* Key and data files may be on different volumes
* Hardened against algorithmic complexity attacks
* Header-only, nothing to build or link
2015-01-21 10:48:30 -08:00
Edward Hennis
8ab1e7d432 Integration test to subscribe to offer books. 2015-01-20 16:45:04 -08:00
Miguel Portilla
b2ba6a0c85 Fix RPC subscribe with multiple books 2015-01-20 16:45:04 -08:00
Tom Ritchford
cca5421aed Fix Subscribe RPC to correctly distinguish bids and asks. 2015-01-20 16:45:04 -08:00
Nik Bougalis
799d9a73e6 Ensure that hash_append will never throw
Conflicts:
	src/beast/beast/net/IPAddress.h
2015-01-20 16:45:03 -08:00
Scott Determan
b0781622b2 Handle nullptr return values to InboundLedgers::findCreate
In normal operation, InboundLedgers::findCreate never returns null, but
during system shutdown, it will return null.

Since this only happens in system shutdown, when findCreate returns null
the calling function stops what it was doing and returns.

During testing, an issue was found where destroying the application object
and creating a new one caused problems with a static PathTable. This table
is now cleared when re-initialized.
2015-01-20 16:45:03 -08:00
Tom Ritchford
0d0eec6345 Clean up test documentation and a log message. 2015-01-20 16:45:03 -08:00
Nik Bougalis
1af79f7960 Properly validate the configured online delete interval 2015-01-20 16:45:03 -08:00
Vinnie Falco
15b570bbdd Add profile targets for gcc and clang 2015-01-20 16:45:02 -08:00
Tom Ritchford
7aa5599cc2 Remove unused parameter in two lambdas. 2015-01-20 16:45:02 -08:00
JoelKatz
676293ec42 Ensure account_tx queries over and returns correct range 2015-01-20 16:45:02 -08:00
Nik Bougalis
abc4fb81b1 Improve RippleLineCache hashing 2015-01-20 16:45:01 -08:00
Vinnie Falco
53a16f354f Add hardened_hash to basics/:
xxhasher is the default hash function for hardened_hash.
2015-01-20 16:45:01 -08:00
Vinnie Falco
6ab1ecd836 Tidy up container hash functions:
* Add xxhasher
* Move fnv1a, siphash, spooky to hash/
* Move hash_append, uhash to hash/
* Move hash_speed_test to hash/
* Move hash classes to individual header files
* Remove hardened_hash
2015-01-20 16:45:01 -08:00
Vinnie Falco
e7b16e7b47 Add rngfill 2015-01-20 16:45:01 -08:00
Vinnie Falco
14804f81a8 Optimize calls to unit_test::suite::expect:
This changes expect and unexpected to receive the reason text as a
template argument, allowing the std::string conversion of char const*
parameters to take place only if the condition evaluates to false. This
cuts all calls to malloc and free on tests that pass.
2015-01-20 16:45:00 -08:00
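A sketch of the deferred-conversion technique described above (names are
illustrative, not the unit_test::suite interface verbatim):

    #include <string>

    struct suite_sketch
    {
        void pass ();
        void fail (std::string const& reason);

        // The reason stays a char const* (or any string-like type) unless the
        // condition fails, so passing tests perform no allocation for it.
        template <class String>
        bool expect (bool condition, String const& reason)
        {
            if (condition)
                pass ();
            else
                fail (std::string (reason));
            return condition;
        }
    };
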
Vinnie Falco
9a61b8d77d Declare base_uints with using statements 2015-01-20 16:45:00 -08:00
Vinnie Falco
f42c2763d5 Improve streambuf unit test 2015-01-20 16:45:00 -08:00
Vinnie Falco
98d4e0e1b5 Fix ZeroCopyOutputStream:
Added a destructor that commits the last block of data
if there was no final call to BackUp.
2015-01-20 16:45:00 -08:00
Tom Ritchford
9156633baf Set version to 0.27.0-b7 2015-01-20 17:59:55 -05:00
Tom Ritchford
bcf4f836b4 Use websocketpp_02 namespace. 2015-01-20 17:08:15 -05:00
Vinnie Falco
dbc1d70f99 Set version to 0.27.0-b6 2015-01-20 09:41:27 -08:00
Vinnie Falco
78bc190a85 Merge commit '02855d7fed46d3c1aa1f2cefbcf4a42720575c3f' as 'src/websocketpp' 2015-01-20 09:35:00 -08:00
Vinnie Falco
02855d7fed Squashed 'src/websocketpp/' content from commit 875d420
git-subtree-dir: src/websocketpp
git-subtree-split: 875d420952
2015-01-20 09:35:00 -08:00
Vinnie Falco
6fdd5d32be Rename websocket/ to websocketpp_02 2015-01-20 09:34:54 -08:00
Vinnie Falco
d7f32b105b Set version to 0.27.0-b5 2015-01-13 11:50:58 -08:00
Vinnie Falco
0ac480a0bd Fix extra increment in GenerateRootDeterministicKey 2015-01-13 11:49:59 -08:00
Tom Ritchford
417996de02 Set version to 0.27.0-b4 2015-01-13 11:30:23 -05:00
Tom Ritchford
6c2d60cec2 Prevent RPC handlers from returning non-objects. 2015-01-13 11:30:23 -05:00
Tom Ritchford
743bd6c917 Fix RPC command logrotate to return a Json object. 2015-01-13 11:30:14 -05:00
Edward Hennis
ab61aa41d9 Set version to 0.27.0-b3 2015-01-12 18:00:55 -05:00
Edward Hennis
36396ae29e rippled.cfg [db_node] options for RocksDB
* open_files and compression.
2015-01-12 18:00:54 -05:00
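An illustrative stanza for the options named in the commit (the section
appears as node_db elsewhere in this log); only open_files and compression
come from the commit text, the remaining key and all values are placeholders:

    [node_db]
    type=rocksdb
    path=/var/lib/rippled/db/rocksdb
    open_files=2000
    compression=1
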
Vinnie Falco
749e083e6e NodeStore improvements:
* Add Backend::verify API for doing consistency checks
* Add Database::close so caller can catch exceptions
* Improved Timing test for NodeStore creates a simulated workload
2015-01-12 18:00:52 -05:00
Vinnie Falco
67b9cf9e82 Improved support for exceptions in threads spawned by unit tests:
Unit tests that wish to spawn threads for testing concurrency may now do so
by using unit_test::thread as a replacement for std::thread. These threads
propagate unhandled exceptions to the unit test, and work with the abort on
failure feature.
2015-01-12 17:17:09 -05:00
Vinnie Falco
27fb20f3ab Add xor_shift_engine 2015-01-12 17:17:07 -05:00
Josh Juran
1c71b274f0 SConstruct: Add ed25519.c
Conflicts:
	Builds/VisualStudio2013/RippleD.vcxproj
	Builds/VisualStudio2013/RippleD.vcxproj.filters
	SConstruct
2015-01-12 12:16:26 -08:00
Vinnie Falco
fcd20b63fe Merge commit '8ec344ac1b6c66d936fa0f7005490e126a434a70' as 'src/ed25519-donna' 2015-01-12 11:27:15 -08:00
Vinnie Falco
8ec344ac1b Squashed 'src/ed25519-donna/' content from commit 04223b0
git-subtree-dir: src/ed25519-donna
git-subtree-split: 04223b04e22f5eff32c6c27e25194d4d984c6f41
2015-01-12 11:27:15 -08:00
Vinnie Falco
df966a9ac6 Set version to 0.27.0-b2 2015-01-05 18:49:18 -08:00
Vinnie Falco
f634666dc6 Make rocksdbquick settings default:
This removes the old default configuration for the "rocksdb" backend and
replaces it with the configuration that was formerly available using
the experimental backend "rocksdbquick".

The new configuration setting improves the performance of the key/value
database by changing the compaction style and tuning the size parameters for
the typical rippled workload. Testing shows a decrease in I/O spikes for both
reading and writing.
2015-01-05 18:49:17 -08:00
Tom Ritchford
e2f9f5d7e5 Fix compilation warnings. 2015-01-05 18:49:17 -08:00
Tom Ritchford
d078b0d143 Extract the git ID into a separate compilation unit. 2015-01-05 18:49:15 -08:00
Nik Bougalis
0ccdea3cd8 Disable Ticket transactions and tests 2015-01-05 14:18:27 -08:00
Vinnie Falco
df54b47cd0 Tidy up includes and add modules to the classic build:
An alternative to the unity build, the classic build compiles each
translation unit individually. This adds more modules to the classic build:

* Remove unity header app.h
* Add missing includes as needed
* Remove obsolete NodeStore backend code
* Add app/, core/, crypto/, json/, net/, overlay/, peerfinder/ to classic build
2015-01-05 13:35:57 -08:00
Vinnie Falco
2e595830b3 Levelize SHAMap:
The SHAMap class is refactored into a separate module where each translation
unit compiles separately without errors. Dependencies on higher-level business
logic are removed. SHAMap now depends only on basics, crypto, nodestore,
and protocol:

* Inject NodeStore::Database& to SHAMap
* Move sync filter instances to app/ledger/
* Move shamap to its own module
* Move FullBelowCache to shamap/
* Move private code to shamap/impl/
* Refactor SHAMap treatment of missing node handler
* Inject and use Journal for logging in SHAMap
2015-01-05 11:46:11 -08:00
Vinnie Falco
96fbcc9a5a Refactor NodeStore:
Manager is changed to be a Meyer's singleton, with factories automatically
registering upon construction.
2015-01-05 11:46:11 -08:00
Vinnie Falco
6283801981 Add non-unity build targets:
The SConstruct is modified to provide a new family of targets, ending with
the suffix ".nounity", which compile individual translation units instead of
some of the unity translation units ("classic" builds). Two modules updated
for this treatment are ripple/basics/ and ripple/protocol/, with plans to
update more in the future. A consequence is longer build times in some cases.
A benefit of classic builds is that missing includes can be identified
through compiler errors.
2015-01-05 11:46:11 -08:00
Vinnie Falco
9a3214d46e Normalize files containing unit test code:
Source files are split to place all unit test code into translation
units ending in .test.cpp with no other business logic in the same file,
and in directories named "test".

A new target is added to the SConstruct, invoked by:
    scons count
This prints the total number of source code lines occupied by unit tests,
in rippled specific code and excluding library subtrees.
2015-01-05 11:46:07 -08:00
Vinnie Falco
9eb7c8344f Don't leak track StringPairArray 2015-01-05 11:44:15 -08:00
Vinnie Falco
4140bbb1f7 Set version to 0.27.0-b1 2015-01-05 11:37:01 -08:00
Vinnie Falco
ea44497136 Fix msvc ICE on initializer-list 2015-01-05 11:37:00 -08:00
Nik Bougalis
07737c6e5b Add 'delivered_amount' to Transaction JSON (RIPD-643):
The synthetic field 'delivered_amount' can be used to determine the exact
amount delivered by a Payment without having to check the DeliveredAmount
field, if present, or the Amount field otherwise.

The field is only returned when metadata is available and the data is not
returned in binary format.
2014-12-31 01:55:10 -08:00
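A sketch of the selection rule described above, using a jsoncpp-style
Json::Value (illustrative names, not the rippled transaction-JSON code):

    Json::Value deliveredAmount (Json::Value const& meta, Json::Value const& tx)
    {
        // Prefer the metadata's DeliveredAmount when present; otherwise fall
        // back to the transaction's Amount field.
        if (meta.isMember ("DeliveredAmount"))
            return meta["DeliveredAmount"];
        return tx["Amount"];
    }
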
JoelKatz
98d5eefc86 Pathfinding bugfixes (RIPD-735):
* Fix specifying paths and search level for ripple_path_find
* Don't modify the pathfinder, it has issuer-neutral paths.
* Handle previous paths correctly
* Compare paths correctly
2014-12-31 01:55:10 -08:00
JoelKatz
4f2d93bb65 Avoid processing transactions if we need a network ledger
Not processing transactions without a network ledger makes
initial network synchronization faster.
2014-12-31 01:55:10 -08:00
Edward Hennis
a5df3f1747 Support a "no_server" flag in test config.
* Will use a running instance of rippled (possibly in a debugger).
* Modify all tests to respect the server_default value.
* Fail test if new account already exists and has a balance.
* README.md with instructions for advanced test debugging, particularly using no_server.
2014-12-31 01:55:10 -08:00
Howard Hinnant
7f5f73887d Fix undefined behavior 2014-12-31 01:55:10 -08:00
Nik Bougalis
91ce7807b9 Remove legacy LoadTypes (RIPD-365) 2014-12-31 01:55:10 -08:00
Nik Bougalis
d26fae9875 Reduce Beast dependencies by leveraging C++11 features:
* Remove beast::Atomic (RIPD-728):
  * Use std-provided alternatives
  * Eliminate atomic variables where possible

* Cleanup beast::Thread interface:
  * Use std::string instead of beast::String
  * Remove unused functions and parameters

* Remove unused code:
  * beast::ThreadLocalValue
  * beast::ServiceQueue
2014-12-31 01:55:10 -08:00
Nik Bougalis
60bdc79ec4 Remove unused sitefiles module 2014-12-30 12:33:41 -08:00
Tom Ritchford
253ddf2998 Split off CheckLibraryVersions.test.cpp. 2014-12-30 12:33:41 -08:00
Nik Bougalis
9fa15b390a Always initialize LedgerHandler options field 2014-12-29 11:21:19 -08:00
Josh Juran
e7d6fe6c8b Cull duplicate/unused code in RippleAddress:
Deduplicate a call to GeneratePublicDeterministicKey().
Remove unused member functions.
2014-12-29 11:21:19 -08:00
Tom Ritchford
9650b1aa70 New ripple::TestSuite with method expectEquals(). 2014-12-29 11:21:19 -08:00
Howard Hinnant
eafa6f960f API for improved Unit Testing (RIPD-432):
* Added new test APIs allowing easy ways to create ledgers, apply
  transactions to them, close and advance them.

* Moved Ledger tests from Ledger.cpp to Ledger.test.cpp.

* Changed several TransactionEngine log priorities from lsINFO to lsDEBUG to
  reduce unnecessary verbosity in the log messages while running these tests.

* Moved LedgerConsensus:applyTransactions from a private member function to a
  free function so that it could be accessed externally, and without having to
  reference a LedgerConsensus object.  This was done to facilitate the new
  testing API.
2014-12-29 11:21:19 -08:00
Yana
c62ccf4870 Update README.md (RIPD-601) 2014-12-29 11:21:19 -08:00
Vinnie Falco
ef34439a79 Set version to 0.26.5-rc1 2014-12-22 15:20:03 -08:00
Nik Bougalis
b328ec2462 Prevent passing of non-POD types to POD-only interfaces:
This tidies up the code that produces random numbers to conform
to programming best practices and reduce dependencies.

* Use std::random_device instead of platform-specific code
* Remove RandomNumbers class and use free functions instead
2014-12-22 15:19:48 -08:00
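A minimal sketch of the std::random_device direction mentioned above, as a
free function that fills a range of bytes (illustrative, not the rippled free
functions themselves):

    #include <algorithm>
    #include <cstdint>
    #include <random>

    template <class FwdIt>
    void fill_random_sketch (FwdIt first, FwdIt last)
    {
        std::random_device rd;  // non-deterministic source from the standard library
        std::generate (first, last,
            [&rd] { return static_cast<std::uint8_t> (rd ()); });
    }

    // e.g. std::array<std::uint8_t, 16> seed; fill_random_sketch (seed.begin (), seed.end ());
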
Vinnie Falco
60f27178b8 Levelization, improve structure of source files:
Source files are moved between modules, includes changed and added,
and some code rewritten, with the goal of reducing cross-module dependencies
and eliminating cycles in the dependency graph of classes.

* Remove RippleAddress dependency in CKey_test
* ByteOrder.h, Blob.h, and strHex.h are moved to basics/. This makes
  the basics/ module fully independent of other ripple sources.
* types/ is merged into protocol/. The protocol module now contains
  all primitive types specific to the Ripple protocol.
* Move ErrorCodes to protocol/
* Move base_uint to basics/
* Move Base58 to crypto/
* Remove dependence on Serializer in GenerateDeterministicKey
* Eliminate unity header json.h
* Remove obsolete unity headers
* Remove unnecessary includes
2014-12-22 10:23:49 -08:00
Vinnie Falco
e3fbb83ad0 Tidy up usage of std::begin, std::end 2014-12-19 11:55:43 -08:00
Nik Bougalis
28b70a7b9a Remove 'Proof of Work' code 2014-12-19 11:00:29 -08:00
Nicholas Dudfield
dcdc341d0f Add appveyor 2014-12-19 11:00:29 -08:00
Tom Ritchford
fce77c9372 Configuration for yielding RPC server. 2014-12-19 11:00:28 -08:00
Tom Ritchford
a360c481c2 Refactor out a version of lookupLedger returning Status. 2014-12-19 11:00:28 -08:00
Tom Ritchford
c72db5fa5f Refactor away RPCHandler::doRpcCommand 2014-12-19 11:00:28 -08:00
Tom Ritchford
fc9a23d6d4 Send output incrementally in ServerHandlerImp. 2014-12-19 11:00:27 -08:00
Tom Ritchford
167f4666e2 New generic Ledger RPC handler. 2014-12-19 11:00:27 -08:00
Tom Ritchford
1cbcc7be21 Allow the Ledger to generically output to both Json models. 2014-12-19 11:00:27 -08:00
Tom Ritchford
8053598069 Better interoperation between Json::Value and JsonObject.
* Generic functions to add entries to both object models.
* Add Json::Value into JsonObjects.
* Write Json::Value to string incrementally.
* Get rid of ripple::RPC::New namespace
2014-12-19 11:00:26 -08:00
Tom Ritchford
7cfac1a91a Wrap Output in a coroutine that periodically yields. 2014-12-19 11:00:26 -08:00
Tom Ritchford
192cdd028e Change Output to be a generic std::function. 2014-12-19 11:00:26 -08:00
Tom Ritchford
029c143922 Add a comment to ledger/Ledger.h. 2014-12-19 11:00:26 -08:00
Tom Ritchford
00298cc68c Simplify LedgerData.cpp. 2014-12-19 11:00:25 -08:00
Tom Ritchford
d9c7db51af Make three ErrorCode functions generic. 2014-12-19 11:00:25 -08:00
Vinnie Falco
f12b15d22b Fix logic in HTTP/S server:
These bugs do not affect production code since callers do not invoke
`write` multiple times, but these would become a problem in the future.

* Access to Peer::write_queue_ is synchronized correctly.
* Remove unsafe access to deleted container element.
2014-12-19 11:00:25 -08:00
Vinnie Falco
409b8bac00 Remove unused and obsolete Ripple identifiers and tidy up:
These identifiers were part of a failed set of classes to replace
the functionality combined into RippleAddress. They are not used
and therefore can be removed.

* Remove RippleAccountPrivateKey
* Remove RippleAccountPublicKey
* Remove RippleAccountID
* Remove RipplePrivateKey
* Remove RipplePublicKeyHash
* Remove RippleLedgerHash
* Remove unused withCheck argument
* Remove CryptoIdentifier
* Remove IdentifierStorage
* Remove IdentifierType
* Remove SimpleIdentifier
* Add missing include
2014-12-19 11:00:24 -08:00
Vinnie Falco
28b09bde4b Simplify RipplePublicKey:
This implements the bare minimum necessary to store a 33 byte public
key and use it in ordered containers. It is an efficient and well
defined alternative to RippleAddress when the caller only needs
a node public key.
2014-12-19 11:00:24 -08:00
Vinnie Falco
2f6af906f4 Validators work (RIPD-703):
This replaces the experimental validators module with foundational
code to implement a new system for tracking validators, validations and
the UNL. The code is turned off by default, in BeastConfig.h

* Remove obsolete public Manager interfaces
* Remove obsolete database methods
* Remove obsolete ChosenList concept
* Remove obsolete code
* Add missing includes
* Tidy up STValidation.h
* Move factory function to Validators::make_Manager
* Add Connection object for tracking STValidations
2014-12-19 11:00:23 -08:00
Vinnie Falco
628e3ac1eb Add waitable_executor 2014-12-18 10:26:55 -08:00
Josh Juran
fbf5785e35 Combine STTx::checkSign overloads:
Callers don't need to specify the signing key -- they're just retrieving
the key from the SerializedTransaction and then passing it back.

This simplifies Ed25519 implementation.
2014-12-12 20:14:02 -08:00
Nicholas Dudfield
eeea2b1ff8 Use ppa:afrank/boost 1.57 for Travis 2014-12-12 20:14:02 -08:00
Vinnie Falco
32062e439f Split peer connect logic to another class (RIPD-711):
All of the logic for establishing an outbound peer connection including
the initial HTTP handshake exchange is moved into a separate class. This
allows PeerImp to have a strong invariant: All PeerImp objects that exist
represent active peer connections that have already gone through the
handshake process.
2014-12-12 20:14:02 -08:00
Vinnie Falco
930a0beaf1 Add ZeroCopyOutputStream and tidy up 2014-12-12 20:14:02 -08:00
Nik Bougalis
4a49fefdd9 Various cleanups:
* Replace SYSTEM_NAME and other macros with C++ constructs
* Remove RIPPLE_ARRAYSIZE and use std::extent or ranged for loops
* Remove old-style, unused offer crossing unit test
* Make STAmount::saFromRate free and remove default argument
2014-12-12 20:14:02 -08:00
Nik Bougalis
8e792855e0 Do not use path if path expansion fails 2014-12-10 16:55:06 -08:00
Tom Ritchford
69f5c6987a Whitespace: clean WebSockets to 80 columns. 2014-12-10 16:55:06 -08:00
Nik Bougalis
85fc9e4ecf Revert e4c9822d78 "Enable processor-specific optimizations when available:" 2014-12-08 14:54:03 -08:00
Mark Travis
d5c3f0c9cf Stability bugfixes for online delete SHAMapStore:
The correct ledger age is necessary for checking health
status, and the previous behavior caused the online deletion process to
abort if the process took too long.

The tuning parameter added and the parameter whose default was modified both
minimize the impact of SQL DELETE operations by decreasing the default batch
size for deletes and by increasing the backoff period between deletion batches.
These parameters decrease contention for SQLite and I/O, with the trade-off
of longer processing time for online delete. Online delete is not a
time-critical function, so a little slowness in wall-clock time is not harmful.
2014-12-08 14:54:03 -08:00
JoelKatz
a48120e675 Fix incorrect source issuer for XRP source 2014-12-08 14:54:03 -08:00
Nik Bougalis
36f8e4f2ad Improve hex conversion & parsing routines 2014-12-08 14:54:03 -08:00
David Schwartz
1084a39a45 Improve the humanAccountID cache (RIPD-693)
Profiling indicated some performance issues coming from the
cache of 160-bit account IDs to base58 format. This replaces
the single cache with two caches and rotates out old
entries.
2014-12-08 14:54:03 -08:00
Tom Ritchford
86df482842 Make sure that handlers always return Json::objectValue. 2014-12-01 17:16:24 -05:00
Tom Ritchford
b0d47ebcc6 Use better base64 handling in ServerHandlerImp. 2014-12-01 17:15:23 -05:00
Tom Ritchford
fffdf1dfba Make beast::detail::chunk_encoded_buffers::to_hex() static 2014-12-01 11:12:59 -05:00
Tom Ritchford
3273ed2616 Remove unused BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES. 2014-12-01 10:56:03 -05:00
Vinnie Falco
aa7b0a31b0 Refactor protocol message parsing:
This replaces the stateful class parser with a stateless free function.
The protocol buffer message is parsed using a ZeroCopyInputStream.

* Invoke method is now a free function.
* Protocol handler doesn't need to derive from an abstract interface
* Only up to one message is processed at a time by the invoker.
* Remove error_code return from the handler's message processing functions.
* Add ZeroCopyInputStream implementation that wraps a BufferSequence.
* Free function parses up to one protocol message and calls the handler.
* Message type and size can be calculated from an iterator
  range or a buffer sequence.
2014-11-26 12:23:21 -08:00
Vinnie Falco
fb0d44d403 Use cluster state in Slot instead of PeerImp 2014-11-26 12:23:10 -08:00
Vinnie Falco
cd8ec89cbb Use injections from OverlayImpl in PeerImp 2014-11-26 12:23:02 -08:00
Vinnie Falco
252f271dc5 Fixes to beast::asio::streambuf:
* Fix to_string conversion
* Fix assert on debug invariant checks
* Fix the treatment of the output position when the entire output is committed.
* Add unit test
2014-11-26 12:22:55 -08:00
Vinnie Falco
62d400c3a9 Move the call to cancel_timer to the right place 2014-11-26 12:22:46 -08:00
Scott Schurr
f9aa3e0da5 Add more unit tests to rpc/impl/TransactionSign (RIPD-480):
By adding a mock it is possible to test the transactionSign
function without interacting with the ledger.  This is the
smallest change I could come up with that allows transactionSign
to be unit tested.

The unit tests are white boxed.  Each test case is a result
of examining the code and identifying behavior associated with
different JSON fields.  That means the tests are not based on
requirements, they are based on observed behavior.
2014-11-26 12:07:44 -08:00
Vinnie Falco
685fe5b0fb Don't call std::exit on clean exit 2014-11-25 19:19:56 -08:00
Vinnie Falco
5180e71a0d Remove unused chrono::time_point stream conversions 2014-11-25 19:19:56 -08:00
Vinnie Falco
55637f7508 Template abstract_clock on Clock:
The abstract_clock is now templated on a type meeting the requirements of
the Clock concept. It inherits the nested types of the Clock on which it
is based. This resolves a problem with the original design which broke the
type-safety of time_point from different abstract clocks.
2014-11-25 19:19:56 -08:00
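A sketch of the design described above: templating the abstract clock on a
standard Clock type so its nested types (and therefore its time_point) are
inherited from that Clock, keeping time_points from different clocks distinct.
Illustrative only, not beast::abstract_clock verbatim:

    #include <chrono>

    template <class Clock>
    struct abstract_clock_sketch
    {
        using rep        = typename Clock::rep;
        using period     = typename Clock::period;
        using duration   = typename Clock::duration;
        using time_point = typename Clock::time_point;

        virtual ~abstract_clock_sketch () = default;
        virtual time_point now () const = 0;
    };

    // abstract_clock_sketch<std::chrono::steady_clock>::time_point cannot be
    // mixed up with abstract_clock_sketch<std::chrono::system_clock>::time_point.
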
Nicholas Dudfield
7d72dfe0be Updated freeze tests:
* Always run freeze tests (and enforcement tests)
* book_offers filtering tests are broken
2014-11-25 11:46:34 -08:00
Mark Travis
02529a0fc2 SHAMapStore Online Delete (RIPD-415):
Makes rippled configurable to support deletion of all data in its key-value
store (nodestore) and ledger and transaction SQLite databases based on
validated ledger sequence numbers. All records from a specified ledger
and forward shall remain available in the key-value store and SQLite, and
all data prior to that specific ledger may be deleted.

Additionally, the administrator may require that an RPC command be
executed to enable deletion. This is to align data deletion with local
policy.
2014-11-25 11:44:02 -08:00
JoelKatz
b44974677e Cleanup some stray formatting left in logs 2014-11-21 17:13:13 -08:00
Vinnie Falco
d4fd5e4fce HTTP Handshaking for Peers on Universal Port (RIPD-446):
This introduces a considerable change in the way that peers handshake. Instead
of sending the TMHello protocol message, the peer making the connection (client
role) sends an HTTP Upgrade request along with some special headers. The peer
acting in the server role sends an HTTP response completing the upgrade and
transition to RTXP (Ripple Transaction Protocol, a.k.a. peer protocol). If the
server has no available slots, then it sends a 503 Service Unavailable HTTP
response with a JSON content-body containing IP addresses of other servers to
try. The information that was previously contained in the TMHello message is
now communicated in the HTTP request and HTTP response including the secure
cookie to prevent man in the middle attacks. This information is documented
in the overlay README.md file.

To prevent disruption on the network, the handshake feature is rolled out in
two parts. This is part 1, where new servents acting in the client role will
send the old style TMHello handshake, and new servents acting in the server
role can automatically detect and accept both the old style TMHello handshake,
or the HTTP request accordingly. This detection happens in the Server module,
which supports the universal port. An experimental .cfg setting allows clients
to instead send HTTP handshakes when establishing peer connections. When this
code has reached a significant fraction of the network, these clients will be
able to establish a connection to the Ripple network using HTTP handshakes.

These changes clean up the handling of the socket for peers. They fix a
long-standing bug in the graceful close sequence where remaining data, such as
the IP addresses of other servers to try, did not get sent. Redundant state
variables for the peer are removed and the treatment of completion handlers is
streamlined. The treatment of SSL short reads and secure shutdown is also fixed.

Logging for the peers in the overlay module is divided into two partitions:
"Peer" and "Protocol". The Peer partition records activity taking place on the
socket while the Protocol partition informs about RTXP specific actions such as
transaction relay, fetch packs, and consensus rounds. The severity on the log
partitions may be adjusted independently to diagnose problems. Every log
message for peers is prefixed with a small, unique integer id in brackets,
to accurately associate log messages with peers.

HTTP handshaking is the first step in implementing the Hub and Spoke feature,
which transforms the network from a homogeneous one, where all peers are
the same, into a structured network where peers with above-average ability
to process ledgers and transactions self-assemble into a backbone of
high-powered machines. That backbone in turn serves a much larger number of
'leaves' with lower capacities, with the goal of increasing the number of
transactions that may be retired over time.
2014-11-21 16:47:12 -08:00
Vinnie Falco
30123eaa4a Add WrappedSink:
This class puts a configured string prefix in front of
each line of Journal output.
2014-11-21 16:46:57 -08:00
Nik Bougalis
454ec97d51 Replace custom exceptions with std::runtime_error 2014-11-21 13:15:41 -08:00
Vinnie Falco
c2ac331e78 Fix unit_test suite matching with full names 2014-11-21 12:59:32 -08:00
Nik Bougalis
be4a35af11 Clarify SetAccount logic and clean up existing code 2014-11-21 12:59:32 -08:00
Tom Ritchford
445b29ad0d Fix RPC handlers to use the results of lookupLedger. 2014-11-21 12:59:32 -08:00
Vinnie Falco
64d0f7fffd Fix DecayingSample treatment of the window 2014-11-21 12:59:32 -08:00
Nik Bougalis
baf0d09455 Simplify the Beast fatal error reporting framework:
* Reduce interface to a single function which reports error details
* Remove unused functions
2014-11-21 12:59:32 -08:00
Vinnie Falco
08a81a0ab9 Tidy up the structure of sources in protocol/:
Split out and rename STValidation
Split out and rename STBlob
Split out and rename STAccount
Split out STPathSet
Split STVector256 and move UintTypes to protocol/
Rename to STBase
Rename to STLedgerEntry
Rename to SOTemplate
Rename to STTx
Remove obsolete AgedHistory
Remove types.h and add missing includes
Remove unnecessary includes in app.h
Remove unnecessary includes in app.h
Remove include app.h from app1.cpp
2014-11-20 20:15:29 -08:00
Nik Bougalis
31110c7fd9 Cleanup ripple::Ledger:
* Convert static member functions to free functions
* Adopt consistent naming convention
* De-inline code
2014-11-20 20:15:29 -08:00
Vinnie Falco
0e1dd92d9b Fix case where slot==nullptr in Overlay:
This changes the Overlay to correctly handle the case when nullptr is
returned by PeerFinder new_inbound_slot on a detected self-connection.
2014-11-20 20:15:29 -08:00
Vinnie Falco
a3204a4df7 Add http::chunk_encode:
This transforms a ConstBufferSequence into a new ConstBufferSequence whose
data is encoded according to the Content transfer encoding rules of RFC2616.
The implementation does not copy any memory.
2014-11-20 20:15:29 -08:00
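For reference, the RFC 2616 chunked framing that chunk_encode produces around
each buffer looks like the following; the sketch only formats a chunk header
and is not the zero-copy implementation itself:

    #include <cstdio>
    #include <string>

    // "<size in hex>\r\n" precedes each chunk's data, which is followed by "\r\n";
    // the stream is terminated by the zero-length chunk "0\r\n\r\n".
    std::string chunkHeader (std::size_t size)
    {
        char buf[32];
        std::snprintf (buf, sizeof (buf), "%zx\r\n", size);
        return buf;
    }
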
Donovan Hide
2288ab48b9 Use asio signal handling in Application (RIPD-140):
* Use signal_set as cross platform way of handling SIGINT
* Remove polling on main thread for shutdown.
* Add extra logging for received signal.
* Clean up exit handling on error in setup routines.
* Reuse isStopped() from Stoppable for status (could be isStopping() instead).
* Ctrl-C should now work for standalone mode as well on Windows.

Also small fixes to Resolver:
* Add Resolver prefix to logging.
* Fix AsyncObject::removeReference() logic.
* Fix work remaining logic.
2014-11-20 20:15:29 -08:00
mDuo13
670401884c Improve the human-readable description of the tesSUCCESS code:
Transactions that return tesSUCCESS have only been accepted and
propagated on the Ripple network and should not be considered
final until they have been included in a validated ledger.
2014-11-20 20:14:04 -08:00
Yana
37181c341e Changed doc/rippled-example.cfg to specify default for ssl_verify 2014-11-14 11:19:42 -08:00
wltsmrz
be7e677448 Update integration tests for changes to ripple-lib account request API:
Account requests expect an object as first argument
2014-11-14 11:10:12 -08:00
Josh Juran
b2eeb49a45 Clean up CKey and RippleAddress (RIPD-672)
* Remove CKey dependency on RippleAddress
* Create RAII ec_key wrapper that hides EC_KEY and other OpenSSL details
* Move CKey member logic into free functions
* Delete CKey class
* Rename units that are no longer CKey-related
* Delete code that was unused
2014-11-14 11:10:12 -08:00
Vinnie Falco
9aa040d917 Accept generic arguments in ci_equal 2014-11-14 11:10:11 -08:00
Vinnie Falco
c2043a223b Tidy up split_commas function and use it in Server
Conflicts:
	src/ripple/server/impl/ServerHandlerImp.cpp
2014-11-14 11:10:11 -08:00
Vinnie Falco
f24e859f17 Construct Server after Overlay and WSDoors:
When the ServerHandler is constructed before the Overlay, an incoming
connection received after the server's listening ports have been opened
but before the Overlay object has been created causes a crash.
2014-11-14 11:10:11 -08:00
Vinnie Falco
737b33f9d1 Merge branch 'release' into develop 2014-11-14 10:56:45 -08:00
David Schwartz
00791d2151 Set version to 0.26.4-sp3 2014-11-11 16:33:52 -08:00
David Schwartz
b141598f9b Fix bugs in pathfinding with XRP as the source currency 2014-11-11 16:27:39 -08:00
Tom Ritchford
d1618d79b0 Fix pathfinding with multiple issuers for one currency (RIPD-618).
* Allow pathfinding requests where the starting currency may have
  multiple issuers.

* Cache paths over all issuers to avoid repeating work.

* Clear the ledger checkpoint in one retry case.

* Add an additional node at the front of paths when the starting issuer
  is not the source account.
2014-11-11 16:27:03 -08:00
Tom Ritchford
00c84dfe5c Clean up Pathfinder.
* Restrict to 80-columns and other style cleanups.
* Make pathfinding a free function and hide the class Pathfinder.
* Split off unrelated utility functions into separate files.
2014-11-11 16:26:38 -08:00
Vinnie Falco
95f31b98a8 Set version to 0.26.4-sp2 2014-11-11 14:22:37 -08:00
Miguel Portilla
10d74ed100 Fix account_lines, account_offers and book_offers result (RIPD-682):
The RPC account_lines and account_offers commands respond with the correct ledger info. account_offers, account_lines and book_offers allow admins unlimited size on the limit param. Specifying a negative value on limit clamps to the minimum value allowed. Incorrect types for limit are correctly reported in the result.
2014-11-11 14:21:49 -08:00
Vinnie Falco
8a7f612d5b Revert pathfinding changes:
* 5e7c527 Revert "Fix account_lines, account_offers and book_offers result (RIPD-682):"
* b3417ca Revert "Fix pathfinding with multiple issuers for one currency (RIPD-618)."
* 00db7f5 Revert "Clean up Pathfinder."
2014-11-11 14:21:40 -08:00
Tom Ritchford
0829ee9234 Add missing header needed for boost 1.57 compatibility. 2014-11-10 23:23:53 -05:00
Nik Bougalis
b22e33444b Make Stoppable unit tests manual 2014-11-10 23:23:53 -05:00
David Schwartz
d115a12cbe Remove dead TxQueue code 2014-11-10 23:23:53 -05:00
Nik Bougalis
b7b744de94 Remove sole use of beast::MurmurHash 2014-11-10 15:00:20 -08:00
Nik Bougalis
68e46e406a Remove MurmurHash from Beast 2014-11-10 14:00:54 -08:00
Miguel Portilla
a46ae4efec Set version to 0.26.4-sp1 2014-11-10 16:25:45 -05:00
Miguel Portilla
62777a794e Fix account_lines, account_offers and book_offers result (RIPD-682):
The RPC account_lines and account_offers commands respond with the correct ledger info. account_offers, account_lines and book_offers allow admins unlimited size on the limit param. Specifying a negative value on limit clamps to the minimum value allowed. Incorrect types for limit are correctly reported in the result.
2014-11-10 16:25:14 -05:00
Tom Ritchford
329a969761 Remove unused RPCServer. 2014-11-10 12:53:21 -08:00
Vinnie Falco
30170bc394 Add short_read manual unit test:
This manual unit test explores the outcomes of shutting down
SSL stream connections at various point during a session.
2014-11-10 12:52:57 -08:00
Vinnie Falco
f193302e15 Add WrappedSink 2014-11-10 12:52:57 -08:00
Vinnie Falco
7c4870d641 Add operator<< for basic_streambuf 2014-11-10 12:52:43 -08:00
Vinnie Falco
8b84a76d5d Make ci_equal a function 2014-11-10 12:52:42 -08:00
Vinnie Falco
a4cd761372 Add rfc2616::parse_csv 2014-11-10 12:52:42 -08:00
Miguel Portilla
63d2cfd6ba Fix account_lines, account_offers and book_offers result (RIPD-682):
The RPC account_lines and account_offers commands respond with the correct
ledger info. account_offers, account_lines and book_offers allow admins
unlimited size on the limit param. Specifying a negative value on limit clamps
to the minimum value allowed. Incorrect types for limit are correctly reported
in the result.
2014-11-10 12:46:36 -08:00
Tom Ritchford
bb44bdd047 Fix pathfinding with multiple issuers for one currency (RIPD-618).
* Allow pathfinding requests where the starting currency may have
  multiple issuers.

* Cache paths over all issuers to avoid repeating work.

* Clear the ledger checkpoint in one retry case.

* Add an additional node at the front of paths when the starting issuer
  is not the source account.
2014-11-10 12:07:57 -05:00
Tom Ritchford
6904e66384 Clean up Pathfinder.
* Restrict to 80-columns and other style cleanups.
* Make pathfinding a free function and hide the class Pathfinder.
* Split off unrelated utility functions into separate files.
2014-11-10 12:06:49 -05:00
Vinnie Falco
fbffe2367e Fix weak_fn unit test. 2014-11-09 20:27:05 -08:00
Vinnie Falco
e442a2846d Overlay improvements and bug fixes:
PeerImp::detach had a default argument graceful=true which did not
correctly close the socket, often causing the Overlay to hang on exit.
The logging for Overlay and Peers has been reworked. All socket activity
is logged to Peers while protocol activity goes to Protocol. Every log line
is prefixed by a small integer ID unique to the connection.
* Removed the graceful PeerImp::detach option
* Peer and Protocol log partitions handle their respective types of logging
* Log messages prefixed with the peer's unique integer
* Prevent call to timer cancel from throwing an exception
2014-11-08 14:39:46 -08:00
Vinnie Falco
f6985586ea Better logging when opening Server ports. 2014-11-08 14:36:45 -08:00
Vinnie Falco
2bae5b0959 Throw if rippled.cfg is missing a [server] section 2014-11-08 14:36:45 -08:00
Vinnie Falco
1e58809fcc Remove obsolete get_pointer 2014-11-08 14:36:44 -08:00
Vinnie Falco
6b1d213cc2 Add weak_fn 2014-11-08 14:36:44 -08:00
David Schwartz
42bec13a83 Add missing include needed for std::bad_cast in LexicalCast.h 2014-11-07 15:23:43 -08:00
Vinnie Falco
4415a179b3 Update freeze test for moved TxFlags.h 2014-11-07 14:12:43 -08:00
Vinnie Falco
5d42604efd Refactor the structure of source files:
* New src/ripple/crypto and src/ripple/protocol directories
* Merged src/ripple/common into src/ripple/basics
* Move resource/api files up a level
* Add headers for "include what you use"
* Normalized include guards
* Renamed to JsonFields.h
* Remove obsolete files
* Remove net.h unity header
* Remove resource.h unity header
* Removed some deprecated unity includes
2014-11-07 13:40:43 -08:00
Vinnie Falco
b134b7d3f6 Add missing includes. 2014-11-07 12:24:02 -08:00
Vinnie Falco
788219fe05 Adjust SSL context generation for Server:
The creation of self-signed certificates slows down the command
line client when launched repeatedly during unit test.
* Contexts are no longer generated for the command line client
* A port with no secure protocols generates an empty context
2014-11-07 06:13:56 -08:00
Tom Ritchford
9a7f66cfe9 Fix compilation errors in RPC/RipplePathFind.cpp 2014-11-06 21:58:13 -05:00
Tom Ritchford
daa4d16e61 Remove unused isXRP(Issue) function. 2014-11-06 20:17:13 -05:00
Tom Ritchford
cf05f87795 Fix pathfinding with multiple issuers for one currency (RIPD-618).
* Allow pathfinding requests where the starting currency may have
  multiple issuers.

* Cache paths over all issuers to avoid repeating work.

* Clear the ledger checkpoint in one retry case.

* Add an additional node at the front of paths when the starting issuer
  is not the source account.
2014-11-06 20:14:01 -05:00
Tom Ritchford
c2f2f83b7c Clean up Pathfinder.
* Restrict to 80-columns and other style cleanups.
* Make pathfinding a free function and hide the class Pathfinder.
* Split off unrelated utility functions into separate files.

Conflicts:
	src/ripple/rpc/handlers/RipplePathFind.cpp
2014-11-06 16:58:10 -08:00
Tom Ritchford
b30b2a523f Fix public member names of RPC::Context.
Conflicts:
	src/ripple/rpc/handlers/AccountTx.cpp
	src/ripple/rpc/handlers/AccountTxOld.cpp
	src/ripple/rpc/handlers/Ledger.cpp
	src/ripple/rpc/handlers/LedgerData.cpp
	src/ripple/rpc/handlers/RipplePathFind.cpp
	src/ripple/rpc/handlers/ServerInfo.cpp
	src/ripple/rpc/handlers/ServerState.cpp
	src/ripple/rpc/handlers/Submit.cpp
	src/ripple/rpc/handlers/Subscribe.cpp
	src/ripple/rpc/handlers/TxHistory.cpp
	src/ripple/rpc/handlers/Unsubscribe.cpp
	src/ripple/rpc/impl/Context.h
2014-11-06 16:55:20 -08:00
Nicholas Dudfield
150a3810a8 Update npm test rippled.cfg to use [server]:
The test now generates a configuration file with the new
configuration sections defined by the Universal Port feature.
2014-11-06 16:10:00 -08:00
Vinnie Falco
ac0eaa912b Universal Port (RIPD-160):
This changes the behavior and configuration specification of the listening
ports that rippled uses to accept incoming connections for the supported
protocols: peer (Peer Protocol), http (JSON-RPC over HTTP), https (JSON-RPC
over HTTPS), ws (Websockets Clients), and wss (Secure Websockets Clients).
Each listening port is now capable of handshaking in multiple protocols
specified in the configuration file (subject to some restrictions). Each
port can be configured to provide its own SSL certificate, or to use a
self-signed certificate. Ports can be configured to share settings, this
allows multiple ports to use the same certificate or values. The list of
ports is dynamic, administrators can open as few or as many ports as they
like. Authentication settings such as user/password or admin user/admin
password (for administrative commands on RPC or Websockets interfaces) can
also be specified per-port.

As the configuration file has changed significantly, administrators will
need to update their ripple.cfg files and carefully review the documentation
and new settings.

Changes:

* rippled-example.cfg updated with documentation and new example settings:
  All obsolete websocket, rpc, and peer configuration sections have been
  removed, the documentation updated, and a new documented set of example
  settings added.

* HTTP::Writer abstraction for sending HTTP server requests and responses
* HTTP::Handler handler improvements to support Universal Port
* HTTP::Handler handler supports legacy Peer protocol handshakes
* HTTP::Port uses shared_ptr<boost::asio::ssl::context>
* HTTP::PeerImp and Overlay use ssl_bundle to support Universal Port
* New JsonWriter to stream message and body through HTTP server
* ServerHandler refactored to support Universal Port and legacy peers
* ServerHandler Setup struct updated for Universal Port
* Refactor some PeerFinder members
* WSDoor and Websocket code stores and uses the HTTP::Port configuration
* Websocket autotls class receives the current secure/plain SSL setting
* Remove PeerDoor and obsolete Overlay peer accept code
* Remove obsolete RPCDoor and synchronous RPC handling code
* Remove other obsolete classes, types, and files
* Command line tool uses ServerHandler Setup for port and authorization info
* Fix handling of admin_user, admin_password in administrative commands
* Fix adminRole to check credentials for Universal Port
* Updated Overlay README.md

* Overlay sends IP:port redirects on HTTP Upgrade peer connection requests:
  Incoming peers who handshake using the HTTP Upgrade mechanism don't get
  a slot, and always get HTTP Status 503 redirect containing a JSON
  content-body with a set of alternate IP and port addresses to try, learned
  from PeerFinder. A future commit related to the Hub and Spoke feature will
  change the response to grant the peer a slot when there are peer slots
  available.

* HTTP responses to outgoing Peer connect requests parse redirect IP:ports:
  When the [overlay] configuration section (which is experimental) has
  http_handshake = 1, HTTP redirect responses will have the JSON content-body
  parsed to obtain the redirect IP:port addresses.

* Use a single io_service for HTTP::Server and Overlay:
  This is necessary to allow HTTP::Server to pass sockets to and from Overlay
  and eventually Websockets. Unfortunately Websockets is not so easily changed
  to use an externally provided io_service. This will be addressed in a future
commit, and is one step necessary to ease the restriction on ports configured
  to offer Websocket protocols in the .cfg file.
2014-11-06 16:10:00 -08:00
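An illustrative sketch of the new per-port configuration style described
above; the section and key names follow the pattern documented in
doc/rippled-example.cfg, but the exact names and values here are placeholders
and should be checked against that file (per-port admin credentials are also
configured in these sections):

    [server]
    port_rpc
    port_peer

    [port_rpc]
    port = 5005
    ip = 127.0.0.1
    protocol = http

    [port_peer]
    port = 51235
    ip = 0.0.0.0
    protocol = peer
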
Vinnie Falco
05a04aa801 Set version to 0.26.4 2014-11-03 16:53:37 -08:00
Vinnie Falco
e37d4043f6 Add missing includes to make headers compile separately 2014-11-03 16:40:57 -08:00
Vinnie Falco
d073425b44 Improved beast::http::message:
* Add headers::erase
* Set http::message version with std::pair
* Use std::pair for headers::value_type
2014-11-03 16:40:57 -08:00
Nicholas Dudfield
825b18cf71 Add bin/stop-test.js shutdown testing script:
This script launches rippled repeatedly and then issues a stop command
after a variable amount of time. This is to test the shutdown of the
application and catch errors.
2014-11-03 16:31:21 -08:00
Vinnie Falco
549ad3204f Fix race conditions closing HTTP I/O objects:
This fixes a case where stop can sometimes skip calling close on some
I/O objects or crash in a rare circumstance where a connection is in the
process of being torn down at the exact time the server is stopped. When
the acceptor receives errors, it logs the error and continues listening
instead of stopping.
2014-11-03 14:11:06 -08:00
Vinnie Falco
35f9499b67 Fix Overlay stop on exit:
The stop sequence for Overlay had a race condition where autoconnect could
be called after close_all, resulting in a hang on exit. This resolves the
problem by putting the close and timer operations on a strand:
* Rename some Overlay members
* Put close on strand and tidy up members
* Use completion handler instead of coroutine for timer
* Use App io_service in PeerFinder
2014-11-03 14:11:05 -08:00
Vinnie Falco
db82c35c17 Remove spurious assert in ResolverAsioImpl 2014-11-03 14:11:05 -08:00
Vinnie Falco
73c74f753c Change to the Application io_service:
* Simplified the implementation and removed class IoServicePool
* The io_service outlives the components of the Application
2014-11-03 14:11:05 -08:00
JoelKatz
a38fb2a5dc Clear the acquiring ledger when shutting down NetworkOPs:
This solves a circular destruction problem on exit.
2014-11-03 14:11:04 -08:00
Donovan Hide
38e99e01f9 Improve nodestore benchmarking:
* Use more succinct while loops on NodeFactory.
* Better formatting of multiple test results.
* Updated benchmarks.
* Use simpler and faster RNG to generate test data.
2014-11-02 07:16:08 -08:00
Donovan Hide
a1f46e84b8 Add new RocksDBQuickFactory for benchmarking:
This new factory is intended for benchmarking against the existing RocksDBFactory and has the following differences.
* Does not use BatchWriter
* Disables WAL for writes to memtable
* Uses a hash index in blocks
* Uses RocksDB OptimizeFor… functions
See Benchmarks.md for further discussion of some of the issues raised by investigation of RocksDB performance.
2014-11-01 07:12:09 -07:00
Donovan Hide
6540804571 Add repeatable NodeStore timing benchmark:
The timing test is changed to overcome possible file buffer cache effects by creating different read access patterns. The unittest-arg command line arguments allow running the benchmarks against any of the available backends and altering the parameters passed in the same format as rippled.cfg. The num_objects parameter permits variation of the number of key/values inserted. The data is random but matches reasonably well the values that rippled might generate.
2014-11-01 07:12:08 -07:00
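A hypothetical invocation of the benchmark described above; the suite name
and the exact argument syntax are guesses, only the unittest-arg mechanism
and the num_objects parameter come from the commit text:

    ./rippled --unittest=NodeStoreTiming --unittest-arg="type=rocksdb,num_objects=100000"
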
Howard Hinnant
ffe6707595 Refactor Stoppable:
The Stoppable interface aids in the enforcement of invariants needed to
successful start and stop a multi-threaded application composed of classes
that depend on each other in complex ways.
* Test written to confirm the current behavior.
* Comments updated to reflect the current behavior.
* Public API reduced to what is currently in use.
* Protected data members made private.
* volatile bool members changed to std::atomic<bool>.
* std::atomic<int> members changed to std::atomic<bool>.
* Name storage uses std::string
2014-10-31 21:29:16 -07:00
Tom Ritchford
9b21740c9f Delete temporary directories at the end of tests (RIPD-460). 2014-10-31 21:21:54 -07:00
Tom Ritchford
bd12e2ab95 New class TempDirectory in UnitTestUtilities. 2014-10-31 21:21:54 -07:00
Donovan Hide
bffb5ef8b4 Avoid zero initialization of Blob:
This seemed to improve the performance of the copy, although some byte-by-byte copying still seemed to be present. Further investigation is recommended.
2014-10-31 20:12:39 -07:00
Donovan Hide
e4c9822d78 Enable processor-specific optimizations when available:
The SConstruct is modified to enable processor-specific optimizations on clang and gcc toolchains. This improves the performance of RocksDB's CRC function. It might also allow other libraries used in the codebase, now or in the future, to apply cpu-specific optimisations. The mtune option ensures that a binary compiled on one machine will still function on another.
2014-10-31 20:12:39 -07:00
Vinnie Falco
73187d8832 Remove obsolete multitls and proxy websocket features 2014-10-31 15:15:40 -07:00
Vinnie Falco
8101154d5e Remove obsolete websocket PROXY port 2014-10-31 15:15:40 -07:00
Vinnie Falco
c02937fd6f Remove obsolete sections from rippled-example.cfg:
* peer_port_proxy is obsolete since the MultiSocket was removed.
* peer_ssl_cipher_list has no effect, SSL ciphers are hard coded for security.
2014-10-31 15:15:40 -07:00
Vinnie Falco
3430be4075 Add PeerFinder onRedirects function 2014-10-31 13:27:55 -07:00
Vinnie Falco
3f2b6f771f Add streambuf to_string function 2014-10-31 13:27:38 -07:00
Vinnie Falco
6e39b49cc2 Add Json::stream to write Value to a Streambuf 2014-10-31 13:27:33 -07:00
Vinnie Falco
71c34ed4e0 Remove unused ErrorReply function 2014-10-31 13:25:54 -07:00
Vinnie Falco
477178675c Fix parseIniFile for duplicate sections 2014-10-31 13:25:30 -07:00
Vinnie Falco
dbdf68b248 Refactor HTTP::Server to support Universal Port:
These changes are necessary to support the Universal Port feature. Synopsis:

* Persist HTTP peer io_service::work lifetime:
This simplification eliminates any potential for bugs caused by incorrect
lifetime management of the io_service::work object.

* Restructure Door to prevent data races, and handle clean exit:
The Server, Door, Door::detector, and Peer objects work together to
correctly implement graceful stop and destructors that block until
all child objects have been destroyed.

Cleanups:
* De-pimpl HTTP::Server
* Rename ServerImpl data members
* Tidy up HTTP::Port interface
2014-10-30 16:02:19 -07:00
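A minimal sketch of the io_service::work lifetime idiom referred to above, using plain Boost.Asio of that era rather than rippled's HTTP::Server classes; SimpleServer and its members are invented for illustration.

```cpp
#include <boost/asio.hpp>
#include <boost/optional.hpp>
#include <thread>

struct SimpleServer
{
    boost::asio::io_service io_;
    boost::optional<boost::asio::io_service::work> work_;
    std::thread thread_;

    SimpleServer()
        : work_ (boost::asio::io_service::work (io_))   // keeps io_.run() from returning early
        , thread_ ([this] { io_.run(); })
    {
    }

    // Must be called before destruction in this sketch.
    void stop()
    {
        work_ = boost::none;   // run() exits once remaining handlers complete
        thread_.join();
    }
};
```

Holding the work object for the server's whole lifetime removes any window where the io_service could run out of work while peers are still being created or destroyed.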
Vinnie Falco
2fd139b307 Refactor Overlay and add [overlay] config section (experimental):
These changes prepare Overlay for the Universal Port and Hub and Spoke
features.

* Add [overlay] configuration section:
The [overlay] section uses the new BasicConfig interface that
supports key-value pairs in the section. Some exposition is added to the
example cfg file. The new settings for overlay are related to the Hub and
Spoke feature which is currently in development. Production servers should
not set these configuration options; they are clearly marked as experimental
in the example cfg file.

Other changes:
* Use _MSC_VER to detect Visual Studio
* Use ssl_bundle in Overlay::Peer
* Use shared_ptr to SSL context in Overlay
* Removed undocumented PEER_SSL_CIPHER_LIST configuration setting
* Add Section::name: The Section object now stores its name for better diagnostic messages.
2014-10-30 13:55:01 -07:00
Vinnie Falco
a6c2657062 Add shared_ptr<boost::asio::ssl::context> to ssl_bundle:
This gives the ssl_bundle shared ownership of the underlying SSL context
so that ownership of the bundle may be transferred to other classes without
introducing lifetime issues.
2014-10-30 13:55:00 -07:00
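A hedged sketch of what sharing the SSL context looks like in practice; ssl_bundle_sketch is a hypothetical stand-in, not the actual beast::asio::ssl_bundle declaration.

```cpp
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <memory>
#include <utility>

// The bundle shares ownership of the context, so it can safely outlive
// whichever object originally created that context.
struct ssl_bundle_sketch
{
    std::shared_ptr<boost::asio::ssl::context> context;
    boost::asio::ssl::stream<boost::asio::ip::tcp::socket> stream;

    ssl_bundle_sketch (std::shared_ptr<boost::asio::ssl::context> ctx,
                       boost::asio::io_service& ios)
        : context (std::move (ctx))
        , stream (ios, *context)   // the stream references the shared context
    {
    }
};
```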
Vinnie Falco
78a0bc0e2c Make streambuf buffers_type iterators default constructible 2014-10-30 13:55:00 -07:00
Edward Hennis
d7116d6867 Enable std::array overloads for boost::asio on clang:
* Remove Boost config option from beast config.
* Define from compiler, or let Boost figure out itself.
2014-10-30 13:55:00 -07:00
Josh Juran
edc15b9fa2 Use a self-signed certificate for peers (RIPD-108):
Generate a new RSA key pair and a self-signed X.509v3 certificate to use
with SSL connections to rippled peers.  New credentials are created each
startup.
2014-10-30 13:54:49 -07:00
Josh Juran
93d4b73b2f RippleSSLContext: Add openssl wrappers 2014-10-30 10:52:37 -07:00
Vinnie Falco
8e3849e591 Create Ripple SSL contexts using std::make_shared. 2014-10-30 10:52:36 -07:00
Vinnie Falco
acaa1098f7 Separate beast::http::body from beast::http::message (RIPD-660):
This changes the http::message object to no longer contain a body. It modifies
the parser to store the body in a separate object, or to pass the body data
to a functor. This allows the body to be stored in more flexible ways. For
example, in HTTP responses the body can be generated procedurally instead
of being required to exist entirely in memory at once.
2014-10-29 19:23:53 -07:00
Vinnie Falco
c1a5e88752 Use beast::asio::streambuf in Overlay 2014-10-29 19:23:53 -07:00
Vinnie Falco
74b99014d2 Add beast::asio::basic_streambuf (RIPD-661):
This is a class whose interface is identical to boost::asio::basic_streambuf;
its implementation stores the data in multiple discontiguous linear buffers,
expanding and shrinking as needed.
2014-10-29 19:23:53 -07:00
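Since the commit states the interface is identical to boost::asio::basic_streambuf, the standard prepare/commit/consume cycle shown below (demonstrated against boost::asio::streambuf itself) also describes how the new class is used.

```cpp
#include <boost/asio.hpp>
#include <iostream>
#include <string>

int main()
{
    boost::asio::streambuf sb;

    std::string const msg = "hello";
    auto out = sb.prepare (msg.size());                     // writable buffer sequence
    boost::asio::buffer_copy (out, boost::asio::buffer (msg));
    sb.commit (msg.size());                                 // move bytes into the readable area

    std::cout << boost::asio::buffer_size (sb.data()) << " readable bytes\n";
    sb.consume (msg.size());                                // discard after reading
}
```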
Tom Ritchford
9cba944d21 Add Json::to_string:
This allows the declaration for FastWriter to be hidden and makes
conversion of Json::Value objects to strings a little less clunky.
2014-10-29 19:23:52 -07:00
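A sketch of what such a helper plausibly looks like when layered over the stock jsoncpp FastWriter; the include path and the exact rippled signature are assumptions, and stock FastWriter::write appends a trailing newline that a real helper might trim.

```cpp
#include <json/json.h>   // header path for a standalone jsoncpp build; rippled bundles its own copy
#include <string>

namespace Json {

// Hypothetical shape of the helper: hide FastWriter behind one free function.
inline std::string to_string (Value const& value)
{
    return FastWriter().write (value);   // compact, single-line serialization
}

} // namespace Json
```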
Tom Ritchford
8c1c2f5d05 Eliminate a copy of the string returned by FastWriter 2014-10-29 19:22:44 -07:00
Tom Ritchford
bf0fa8c562 Add 'sample' npm test:
test/sample-test.js is the smallest possible npm test.
2014-10-28 10:59:59 -07:00
Donovan Hide
3e1fc9ba6c Update unit testing command line parser parameters:
A string supplied via the '--unittest-arg' command line parameter is passed to
suites when unit tests run and can be used to customize test behavior.
* Add '--unittest-arg' command line argument
* Remove obsolete '--unittest-format' command line argument
2014-10-28 10:55:44 -07:00
Vinnie Falco
4ceba603e4 Improvements to beast::unit_test framework:
* Some runner member functions are now thread-safe.
* De-inline and tidy up declarations and definitions.
* arg() interface allows command lines to be passed to suites.
2014-10-28 10:41:10 -07:00
Vinnie Falco
6591c21ace Set version to 0.26.4-rc4 2014-10-27 11:49:39 -07:00
Vinnie Falco
e8d03c7b9b Update rocksdb unity file 2014-10-27 11:48:21 -07:00
Vinnie Falco
6fbce4c2f7 Update src/rocksdb2 to rocksdb-3.5.1:
Merge commit 'c168d54495d7d7b84639514f6443ad99b89ce996' into develop
2014-10-27 11:37:01 -07:00
Vinnie Falco
c168d54495 Squashed 'src/rocksdb2/' changes from 25888ae..1fdd726
1fdd726 Hotfix RocksDB 3.5
d67500a Add `make install` to Makefile in 3.5.fb.
4cb631a update HISTORY.md
cfd0946 comments about the BlockBasedTableOptions migration in Options
REVERT: 25888ae Merge pull request #329 from fyrz/master
REVERT: 89833e5 Fixed signed-unsigned comparison warning in db_test.cc
REVERT: fcac705 Fixed compile warning on Mac caused by unused variables.
REVERT: b3343fd resolution for java build problem introduced by 5ec53f3edf62bec1b690ce12fb21a6c52203f3c8
REVERT: 187b299 ForwardIterator: update prev_key_ only if prefix hasn't changed
REVERT: 5ec53f3 make compaction related options changeable
REVERT: d122e7b Update INSTALL.md
REVERT: 986dad0 Merge pull request #324 from dalgaaf/wip-da-SCA-20140930
REVERT: 8ee75dc db/memtable.cc: remove unused variable merge_result
REVERT: 0fd8bbc db/db_impl.cc: reduce scope of prefix_initialized
REVERT: 676ff7b compaction_picker.cc: remove check for >=0 for unsigned
REVERT: e55aea5 document_db.cc: fix assert
REVERT: d517c83 in_table_factory.cc: use correct format specifier
REVERT: b140375 ttl/ttl_test.cc: prefer prefix ++operator for non-primitive types
REVERT: 43c789c spatialdb/spatial_db.cc: use !empty() instead of 'size() > 0'
REVERT: 0de452e document_db.cc: pass const parameter by reference
REVERT: 4cc8643 util/ldb_cmd.cc: prefer prefix ++operator for non-primitive types
REVERT: af8c2b2 util/signal_test.cc: suppress intentional null pointer deref
REVERT: 33580fa db/db_impl.cc: fix object handling, remove double lines
REVERT: 873f135 db_ttl_impl.h: pass func parameter by reference
REVERT: 8558457 ldb_cmd_execute_result.h: perform init in initialization list
REVERT: 063471b table/table_test.cc: pass func parameter by reference
REVERT: 93548ce table/cuckoo_table_reader.cc: pass func parameter by ref
REVERT: b8b7117 db/version_set.cc: use !empty() instead of 'size() > 0'
REVERT: 8ce050b table/bloom_block.*: pass func parameter by reference
REVERT: 53910dd db_test.cc: pass parameter by reference
REVERT: 68ca534 corruption_test.cc: pass parameter by reference
REVERT: 7506198 cuckoo_table_db_test.cc: add flush after delete
REVERT: 1f96330 Print MB per second compaction throughput separately for reads and writes
REVERT: ffe3d49 Add an instruction about SSE in INSTALL.md
REVERT: ee1f3cc Package generation for Ubuntu and CentOS
REVERT: f0f7955 Fixing comile errors on OS X
REVERT: 99fb613 remove 2 space linter
REVERT: b2d64a4 Fix linters, second try
REVERT: 747523d Print per column family metrics in db_bench
REVERT: 56ebd40 Fix arc lint (should fix #238)
REVERT: 637f891 Merge pull request #321 from eonnen/master
REVERT: 827e31c Make test use a compatible type in the size checks.
REVERT: fd5d80d CompactedDB: log using the correct info_log
REVERT: 2faf49d use GetContext to replace callback function pointer
REVERT: 983d2de Add AUTHORS file. Fix #203
REVERT: abd70c5 Merge pull request #316 from fyrz/ReverseBytewiseComparator
REVERT: 2dc6f62 handle kDelete type in cuckoo builder
REVERT: 8b8011a Changed name of ReverseBytewiseComparator based on review comment
REVERT: 389edb6 universal compaction picker: use double for potential overflow
REVERT: 5340484 Built-in comparator(s) in RocksJava
REVERT: d439451 delay initialization of cuckoo table iterator
REVERT: 94997ea reduce memory usage of cuckoo table builder
REVERT: c627595 improve memory efficiency of cuckoo reader
REVERT: 581442d option to choose module when calculating CuckooTable hash
REVERT: fbd2daf CompactedDBImpl::MultiGet() for better CuckooTable performance
REVERT: 3c68006 CompactedDBImpl
REVERT: f7375f3 Fix double deletes
REVERT: 21ddcf6 Remove allow_thread_local
REVERT: fb4a492 Merge pull request #311 from ankgup87/master
REVERT: 611e286 Merge branch 'master' of https://github.com/facebook/rocksdb
REVERT: 0103b44 Merge branch 'master' of ssh://github.com/ankgup87/rocksdb
REVERT: 1dfb7bb Add block based table config options
REVERT: cdaf44f Enlarge log size cap when printing file summary
REVERT: 7cc1ed7 Merge pull request #309 from naveenatceg/staticbuild
REVERT: ba6d660 Resolving merge conflict
REVERT: 51eeaf6 Addressing review comments
REVERT: fd7d3fe Addressing review comments (adding a env variable to override temp directory)
REVERT: cf7ace8 Addressing review comments
REVERT: 0a29ce5 re-enable BlockBasedTable::SetupForCompaction()
REVERT: 55af370 Remove TODO for checking index checksums
REVERT: 3d74f09 Fix compile
REVERT: 53b0039 Fix release compile
REVERT: d0de413 WriteBatchWithIndex to allow different Comparators for different column families
REVERT: 57a32f1 change target_file_size_base to uint64_t
REVERT: 5e6aee4 dont create backup_input if compaction filter v2 is not used
REVERT: 49b5f94 Merge pull request #306 from Liuchang0812/fix_cast
REVERT: 787cb4d remove cast, replace %llu with % PRIu64
REVERT: a7574d4 Update logging.cc
REVERT: 7e0dcb9 Update logging.cc
REVERT: 57fa3cc Merge pull request #304 from Liuchang0812/fix-check
REVERT: cd44522 Merge pull request #305 from Liuchang0812/fix-logging
REVERT: 6a031b6 remove unused variable
REVERT: 4436f17 fixed #303: replace %ld with % PRId64
REVERT: 7a1bd05 Merge pull request #302 from ankgup87/master
REVERT: 423e52c Merge branch 'master' of https://github.com/facebook/rocksdb
REVERT: bfeef94 Add rate limiter
REVERT: 32f2532 Print compression_size_percent as a signed int
REVERT: 976caca Skip AllocateTest if fallocate() is not supported in the file system
REVERT: 3b897cd Enable no-fbcode RocksDB build
REVERT: f445947 RocksDB: Format uint64 using PRIu64 in db_impl.cc
REVERT: e17bc65 Merge pull request #299 from ankgup87/master
REVERT: b93797a Fix build
REVERT: adae3ca [Java] Fix JNI link error caused by the removal of options.db_stats_log_interval
REVERT: 90b8c07 Fix unit tests errors
REVERT: 51af7c3 CuckooTable: add one option to allow identity function for the first hash function
REVERT: 0350435 Fixed a signed-unsigned comparison in spatial_db.cc -- issue #293
REVERT: 2fb1fea Fix syncronization issues
REVERT: ff76895 Remove some unnecessary constructors
REVERT: feadb9d fix cuckoo table builder test
REVERT: 3c232e1 Fix mac compile
REVERT: 54cada9 Run make format on PR #249
REVERT: 27b22f1 Merge pull request #249 from tdfischer/decompression-refactoring
REVERT: fb6456b Replace naked calls to operator new and delete (Fixes #222)
REVERT: 5600c8f cuckoo table: return estimated size - 1
REVERT: a062e1f SetOptions() for memtable related options
REVERT: e4eca6a Options conversion function for convenience
REVERT: a7c2094 Merge pull request #292 from saghmrossi/master
REVERT: 4d05234 Merge branch 'master' of github.com:saghmrossi/rocksdb
REVERT: 60a4aa1 Test use_mmap_reads
REVERT: 94e43a1 [Java] Fixed 32-bit overflowing issue when converting jlong to size_t
REVERT: f9eaaa6 added include for inttypes.h to fix nonworking printf statements
REVERT: f090575 Replaced "built on on earlier work" by "built on earlier work" in README.md
REVERT: faad439 Fix #284
REVERT: 49aacd8 Fix make install
REVERT: acb9348 [Java] Include WriteBatch into RocksDBSample.java, fix how DbBenchmark.java handles WriteBatch.
REVERT: 4a27a2f Don't sync manifest when disableDataSync = true
REVERT: 9b8480d Merge pull request #287 from yinqiwen/rate-limiter-crash-fix
REVERT: 28be16b fix rate limiter crash #286
REVERT: 04ce1b2 Fix #284
REVERT: add22e3 standardize scripts to run RocksDB benchmarks
REVERT: dee91c2 WriteThread
REVERT: 540a257 Fix WAL synced
REVERT: 24f034b Merge pull request #282 from Chilledheart/develop
REVERT: 49fe329 Fix build issue under macosx
REVERT: ebb5c65 Add make install
REVERT: 0352a9f add_wrapped_bloom_test
REVERT: 9c0e66c Don't run background jobs (flush, compactions) when bg_error_ is set
REVERT: a9639bd Fix valgrind test
REVERT: d1f24dc Relax FlushSchedule test
REVERT: 3d9e6f7 Push model for flushing memtables
REVERT: 059e584 [unit test] CompactRange should fail if we don't have space
REVERT: dd641b2 fix RocksDB java build
REVERT: 53404d9 add_qps_info_in cache bench
REVERT: a52cecb Fix Mac compile
REVERT: 092f97e Fix comments and typos
REVERT: 6cc1286 Added a few statistics for BackupableDB
REVERT: 0a42295 Fix SimpleWriteTimeoutTest
REVERT: 06d9862 Always pass MergeContext as pointer, not reference
REVERT: d343c3f Improve db recovery
REVERT: 6bb7e3e Merger test
REVERT: 88841bd Explicitly cast char to signed char in Hash()
REVERT: 5231146 MemTableOptions
REVERT: 1d284db Addressing review comments
REVERT: 55114e7 Some updates for SpatialDB
REVERT: 171d4ff remove TailingIterator reference in db_impl.h
REVERT: 9b0f7ff rename version_set options_ to db_options_ to avoid confusion
REVERT: 2d57828 Check stop level trigger-0 before slowdown level-0 trigger
REVERT: 659d2d5 move compaction_filter to immutable_options
REVERT: 048560a reduce references to cfd->options() in DBImpl
REVERT: 011241b DB::Flush() Do not wait for background threads when there is nothing in mem table
REVERT: a2bb7c3 Push- instead of pull-model for managing Write stalls
REVERT: 0af157f Implement full filter for block based table.
REVERT: 9360cc6 Fix valgrind issue
REVERT: 02d5bff Merge pull request #277 from wankai/master
REVERT: 88a2f44 fix comments
REVERT: 7c16e39 Merge pull request #276 from wankai/master
REVERT: 8237738 replace hard-coded number with named variable
REVERT: db8ca52 Merge pull request #273 from nbougalis/static-analysis
REVERT: b7b031f Merge pull request #274 from wankai/master
REVERT: 4c2b1f0 Merge remote-tracking branch 'upstream/master'
REVERT: a5d2863 typo improvement
REVERT: 9f8aa09 Don't leak data returned by opendir
REVERT: d1cfb71 Remove unused member(s)
REVERT: bfee319 sizeof(int*) where sizeof(int) was intended
REVERT: d40c1f7 Add missing break statement
REVERT: 2e97c38 Avoid off-by-one error when using readlink
REVERT: 40ddc3d add cache bench
REVERT: 9f1c80b Drop column family from write thread
REVERT: 8de151b Add db_bench with lots of column families to regression tests
REVERT: c9e419c rename options_ to db_options_ in DBImpl to avoid confusion
REVERT: 5cd0576 Fix compaction bug in Cuckoo Table Builder. Use kvs_.size() instead of num_entries in FileSize() method.
REVERT: 0fbb3fa fixed memory leak in unit test DBIteratorBoundTest
REVERT: adcd253 fix asan check
REVERT: 4092b7a Merge pull request #272 from project-zerus/patch-1
REVERT: bb6ae0f fix more compile warnings
REVERT: 6d31441 Merge pull request #271 from nbougalis/cleanups
REVERT: 0cd0ec4 Plug memory leak during index creation
REVERT: 4329d74 Fix swapped variable names to accurately reflect usage
REVERT: 45a5e3e Remove path with arena==nullptr from NewInternalIterator
REVERT: 5665e5e introduce ImmutableOptions
REVERT: e0b99d4 created a new ReadOptions parameter 'iterate_upper_bound'
REVERT: 51ea889 Fix travis builds
REVERT: a481626 Relax backupable rate limiting test
REVERT: f7f973d Merge pull request #269 from huahang/patch-2
REVERT: ef5b384 fix a few compile warnings
REVERT: 2fd3806 Merge pull request #263 from wankai/master
REVERT: 1785114 delete unused Comparator
REVERT: 1b1d961 update HISTORY.md
REVERT: 703c3ea comments about the BlockBasedTableOptions migration in Options
REVERT: 4b5ad88 Merge pull request #260 from wankai/master
REVERT: 19cc588 change to filter_block std::unique_ptr support RAII
REVERT: 9b976e3 Merge pull request #259 from wankai/master
REVERT: 5d25a46 Merge remote-tracking branch 'upstream/master'
REVERT: dff2b1a typo improvement
REVERT: 343e98a Reverting import change
REVERT: ddb8039 RocksDB static build Make file changes to download and build the dependencies .Load the shared library when RocksDB is initialized

git-subtree-dir: src/rocksdb2
git-subtree-split: 1fdd726a8254c13d0c66d8db8130ad17c13d7bcc
2014-10-27 11:36:32 -07:00
Vinnie Falco
2cce22052b Update SQLite to 3.8.7:
sha1: 3e23079f062fc06705eead4db108ee429878b532
2014-10-27 11:04:46 -07:00
Tom Ritchford
4e19d5f625 Adjust paths and costs in Pathfinder. 2014-10-27 11:03:19 -07:00
Tom Ritchford
5b667da526 Squelch some warnings in rippled and third-party code. 2014-10-27 10:00:03 -07:00
Nik Bougalis
f9fc9a3518 Reduce RippleD dependencies on Beast:
* Use static_assert where appropriate
* Use std::min and std::max where appropriate
* Simplify RippleD error reporting
* Remove use of beast::RandomAccessFile
2014-10-27 09:55:58 -07:00
Nik Bougalis
e005cfd70e Reduce Beast public interface and eliminate unused code:
Beast includes a lot of code for encapsulating cross-platform differences
which are not used or needed by rippled. Additionally, a lot of that code
implements functionality that is available from the standard library.

This moves away from custom implementations of features that the standard
library provides and reduces the number of platform-specific interfaces
and features that Beast makes available.

Highlights include:
* Use std:: instead of beast implementations when possible
* Reduce the use of beast::String in public interfaces
* Remove Windows-specific COM and Registry code
* Reduce the public interface of beast::File
* Reduce the public interface of beast::SystemStats
* Remove unused sysctl/getsysinfo functions
* Remove beast::Logger
2014-10-27 09:55:43 -07:00
Vinnie Falco
feb997481c Refactor the structure of ServerHandler:
This is a cleanup to the structure of the sources.
* Rename to ServerHandler
* Move private implementation declaration to separate header
* De-inline function definitions in the class declaration.
2014-10-27 09:50:03 -07:00
Vinnie Falco
2c8e90c9d8 Remove obsolete RPCServerHandler:
This removes the legacy RPCServerHandler, which has been replaced by the
asynchronous RPC-HTTP/S server and corresponding RPCHTTPHandler.
2014-10-27 09:50:03 -07:00
Vinnie Falco
ec96d5afa0 Remove unused and obsolete classes and tidy up:
Many classes required to support type-erasure of handlers and boost::asio
types are now obsolete, so these classes and files are removed:
HTTPClientType, FixedInputBuffer, PeerRole, socket_wrapper,
client_session, basic_url, abstract_socket, buffer_sequence, memory_buffer,
enable_wait_for_async, shared_handler, wrap_handler, streambuf,
ContentBodyBuffer, SSLContext, completion-handler based handshake detectors.
These structural changes are made:
* Some missing includes added to headers
* asio module directory flattened
2014-10-26 08:40:52 -07:00
Vinnie Falco
8be8853c33 Remove obsolete classes, disable unused code, and tidy up:
* Removed MultiSocket. Code that previously used the MultiSocket now uses
  a combination of boost::asio coroutines and CRTP.
* Sitefiles headers rolled up and directory flattened.
* Disabled Sitefiles use of deprecated HTTPClient.
* Validators headers tidied up.
* Disabled Validators use of deprecated HTTPClient.
2014-10-26 08:38:37 -07:00
Vinnie Falco
c228f5a244 Set version to 0.26.4-rc3 2014-10-25 08:07:40 -07:00
Vinnie Falco
d4c8b4e3ac Merge branch 'release' into develop 2014-10-25 08:07:30 -07:00
Vinnie Falco
6564f6c164 Fix incorrect socket closure in Overlay peers:
On Application exit, Overlay was calling PeerImp::close for each peer.
The implementation of PeerImp::close only canceled all pending I/O and did not
call functions necessary for proper transition of Peer state during socket
closure. The correct transition is ensured by calling PeerImp::detach. This
changes PeerImp::close to call PeerImp::detach instead, ensuring that Overlay
invariants are maintained. Specifically, that reference counts for pending I/O
on peers will be correctly unwound by canceling operations and that the Peer
object will be destroyed, thus allowing the Overlay to stop correctly.
2014-10-25 08:01:57 -07:00
Vinnie Falco
1e37a5509c Add missing includes 2014-10-24 08:13:55 -07:00
Vinnie Falco
1e9503deaa Set version to 0.26.2-rc2 2014-10-23 13:49:22 -07:00
Vinnie Falco
ab1f36c565 Revert "Add [overlay] configuration section (experimental):"
This reverts commit 856fd9d69f.
2014-10-23 13:48:52 -07:00
Vinnie Falco
5a212cd626 Set version to 0.26.4-rc1 2014-10-23 13:01:12 -07:00
Vinnie Falco
856fd9d69f Add [overlay] configuration section (experimental):
This configuration section uses the new BasicConfig interface that supports
key-value pairs in the section. Some exposition is added to the example cfg
file. The new settings for overlay are related to the Hub and Spoke feature
which is currently in development. Production servers should not set
these configuration options, they are clearly marked experimental in the
example cfg file.

Conflicts:
	src/ripple/overlay/impl/OverlayImpl.cpp
	src/ripple/overlay/impl/OverlayImpl.h
	src/ripple/overlay/impl/PeerImp.cpp
	src/ripple/overlay/impl/PeerImp.h
2014-10-23 12:56:16 -07:00
Vinnie Falco
4606d99951 Don't use MultiSocket in Overlay:
The MultiSocket is obsolete technology which is superseded by a more
straightforward, template based implementation that is compatible with
boost::asio::coroutines. This removes support for the unused PROXY handshake
feature. After this change a large number of classes and source files may be
removed.
2014-10-23 12:56:16 -07:00
Tom Ritchford
dbd75169e5 New JsonWriter for improved client performance (RIPD-439):
When JSON-RPC and Websocket responses are calculated, the result is stored
in intermediate Json::Value objects and later composed in a single linear
memory buffer before being sent to the socket.  These classes support a
new model for building responses that supports incremental construction
of JSON replies in constant time and removes the requirement that all
data returned be located in contiguous memory.
* New JsonWriter incrementally writes JSON with O(1) granularity and memory.
* Array, Object are RAII wrappers for the O(1) JsonWriter.
2014-10-23 07:04:47 -07:00
Vinnie Falco
f5b39ee911 Remove HTTP::ScopedStream:
This class was used to allow stream style operator<< to write to the
HTTP::Session. This is being superseded by a more robust object-based model
that supports coroutines.
2014-10-22 19:36:28 -07:00
Vinnie Falco
db5d52b4b2 Keep a list of section config values that are not key/value pairs:
This change to BasicConfig stores all appended lines that are not key/value
pairs in a separate values vector that can be retrieved later. This is to
support sections containing both key/value pairs and a list of values.
2014-10-22 19:36:28 -07:00
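A generic sketch of the idea, assuming nothing about the real BasicConfig internals; SectionSketch and its members are hypothetical.

```cpp
#include <map>
#include <string>
#include <vector>

// A section that keeps key/value pairs and bare values side by side,
// so configuration sections can mix both forms.
struct SectionSketch
{
    std::map<std::string, std::string> pairs;
    std::vector<std::string> values;

    void append (std::string const& line)
    {
        auto const eq = line.find ('=');
        if (eq != std::string::npos)
            pairs[line.substr (0, eq)] = line.substr (eq + 1);
        else
            values.push_back (line);   // not a key/value pair; retained for later retrieval
    }
};
```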
Vinnie Falco
dfeb9967b8 Return error_code from beast::http::basic_parser:
This changes the HTTP parser interface to return an error_code instead
of a bool. This eliminates the need for the error() member function and
simplifies calling code.
2014-10-22 19:36:28 -07:00
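A hypothetical caller-side sketch of the difference this makes; parse_some and process are invented names, not the beast::http::basic_parser interface.

```cpp
#include <boost/system/error_code.hpp>
#include <cstddef>

// Returning an error_code folds "did it fail" and "why" into a single value,
// so no separate error() accessor is needed.
boost::system::error_code
parse_some (char const* /*data*/, std::size_t bytes, std::size_t& consumed)
{
    consumed = bytes;                     // stub: pretend everything parsed cleanly
    return boost::system::error_code();   // default-constructed == success
}

bool process (char const* data, std::size_t bytes)
{
    std::size_t used = 0;
    if (boost::system::error_code const ec = parse_some (data, bytes, used))
    {
        // ec.message() explains the failure; tear the connection down here
        return false;
    }
    return true;
}
```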
Vinnie Falco
673e860c18 Add beast::asio::ssl_bundle workaround:
This works around the limitation that Boost 1.56 boost::asio::ssl::stream
objects do not support move construction or assignment. It is required when
the stream does not own the socket.
2014-10-22 19:36:28 -07:00
Vinnie Falco
9deae34b20 Workaround for MSVC stdlib and coroutine interaction:
If beast::Time::currentTimeMillis is first called from a coroutine launched
using boost::asio::spawn, Win32 throws an exception. This workaround calls
getCurrentTime once in main to prevent the exception.
Reference:
    https://svn.boost.org/trac/boost/ticket/10657
2014-10-22 19:36:27 -07:00
Vinnie Falco
ec92344fb4 Use autotls instead of multitls in websocket:
The MultiSocket class implements a socket that handshakes in multiple
protocols including SSL and PROXY. Unfortunately the way it type-erases the
handlers and buffers is incompatible with boost::asio coroutines. To pave the
way for coroutines this is part of a larger set of changes that roll back the
usage of MultiSocket to older code, and some custom implementations that use
templates. The custom implementations are simpler since they use
coroutines. Removing MultiSocket will make many other classes and source files
unused, a big win for trimming down the codebase size.
2014-10-22 19:34:48 -07:00
Donovan Hide
44c68d6174 Change NodeStore::Backend tests to reflect observed patterns:
Empirical evidence shows a database access pattern with few hits
and many misses (objects that don't exist). This changes the timing
tests so they more accurately reflect rippled's actual usage:
* Add read missing keys test
* Increase numObjectsToTest to 1,000,000
* Alter PredictableObjectFactory to seed RNG once only
* Make NodeStoreTiming a manual test
2014-10-22 19:29:29 -07:00
Howard Hinnant
5b7f172d03 Fix OS X version parsing/error related to OS X 10.10 update. 2014-10-22 19:29:28 -07:00
JoelKatz
65125eac87 Add "deferred" flag to transaction relay message
If we receive a deferred transaction from a server in our
cluster, treat it as if it wasn't received from a server
in our cluster.

This currently has no effect but is needed for the server to
interoperate with future code that will relay deferred
transactions.
2014-10-22 19:29:28 -07:00
Scott Schurr
761902864a Refactor STParsedJSON to parse an object or array [RIPD-480]
The implementation of multi-sign has a SigningAccounts array as a
member of the outermost object.  This array could not be parsed
by the previous implementation of STParsedJSON, which only knew
how to parse objects.  This refactor supports the required parsing.

The refactor divides the parsing into three separate functions:

 o parseNoRecurse() which parses most rippled data types.
 o parseObject() which parses object types that may contain
   arbitrary other types.
 o parseArray() which parses array types that may contain
   arbitrary other types.

The change is required by the multi-sign implementation, but is
independent.  So the parsing change is going in as a separate
commit.

The parsing is still far from perfect, but this was as much as needed
to accomplish the goal while mitigating the risk of breaking the parser.
2014-10-22 19:29:28 -07:00
Vinnie Falco
af24d541d1 Workaround for MSVC move special members. 2014-10-18 08:16:12 -07:00
Donovan Hide
3ad68a617e Fix dependency on boost::thread on OS/X. 2014-10-16 21:44:36 -04:00
Nik Bougalis
9e1a6589d4 Return descriptive error message from memo validation (RIPD-591). 2014-10-16 21:44:36 -04:00
Josh Juran
da8ceed07e RippleSSLContext.cpp cleanup.
* These cleanups precede work on RIPD-108.
2014-10-16 21:44:36 -04:00
Nik Bougalis
35935adc98 Fix URL compositing in Beast (RIPD-636). 2014-10-16 21:44:36 -04:00
Howard Hinnant
5b4a501f68 Detab beast 2014-10-15 19:39:30 -04:00
Tom Ritchford
5425a90f16 Fix tabs and trailing whitespace. 2014-10-15 19:39:30 -04:00
Donovan Hide
7eaca149c1 Remove boost_thread dependency (RIPD-216).
Fixes RIPD-216
2014-10-15 19:37:25 -04:00
Mark Travis
4b5fd95657 Disable SSLv3 2014-10-15 19:37:25 -04:00
Nik Bougalis
96dedf553e Refactor SerializedTransaction:
* Use boost::tribool instead of two intertwined bool variables
* Trim down public interface, reduce member variables
2014-10-15 19:37:25 -04:00
sublimator
23219f2662 Disable transaction submission tests under Travis. 2014-10-15 19:37:25 -04:00
Vinnie Falco
af78ed608e Call Stoppable::stopped in PeerFinder onStop. 2014-10-15 19:37:25 -04:00
Vinnie Falco
51dc59e019 Fix outgoing bytes calculation in HTTP server. 2014-10-15 19:37:25 -04:00
Tom Ritchford
afc102e90a New class RPC::Status enforces JSON-RPC 2.0 error format.
* Relevant issues:
  * RIPD-92
  * RIPD-97
  * RIPD-98
  * RIPD-439
2014-10-15 19:37:25 -04:00
David Schwartz
fc560179e0 SHAMap performance improvements (RIPD-434)
This reworks the way SHAMaps are stored, flushed, backed, and
traversed. Rather than storing the linkages in the SHAMap itself,
that information is now stored in the nodes. This makes
snapshotting much cheaper and also allows traverse work done on
behalf of one SHAMap to be used by other SHAMaps that share inner
nodes with that SHAMap.

When a SHAMap is modified, nodes are modified all the way up to the
root. This means that the modified nodes in a SHAMap can easily be
traversed for flushing. So they don't need to be separately tracked.

Summary
* Remove mTNByID
* Remove mDirtyNodes
* Much faster traverses
* Much faster snapshots
* New algorithm for flushing
* New visitNodes/visitLeaves
* Avoid I/O if a map is unbacked
2014-10-14 13:32:17 -04:00
sublimator
d26241de0e Remove Og from debug mode
Last time I used gdb, iterating over a directory's `Indexes`, each uint256 printed as `<optimized out>`.

Debug mode is for debugging ...
2014-10-14 12:57:41 -04:00
Howard Hinnant
00310f4f10 Silence clang warnings 2014-10-14 12:35:17 -04:00
Howard Hinnant
8caae219cf Gracefully cast from std::thread::hardware_concurrency 2014-10-14 12:35:17 -04:00
Howard Hinnant
2264ae9247 Guarantee C locale
*  Remove all calls to setlocale to ensure that the global
   locale is always C.

*  Also replace beast::SystemStats::getNumCpus() with
   std::thread::hardware_concurrency()
2014-10-14 12:35:17 -04:00
Nicholas Dudfield
29225bbe75 Attempt to fix spurious travis failures 2014-10-14 12:35:17 -04:00
Vinnie Falco
4b5625fd59 Load PeerFinder database in Stoppable::onPrepare:
OverlayImpl::onStart calls into PeerFinder before PeerFinder::Manager::onStart,
causing tests to sometimes fail and the application to intermittently not start.
The order of calls to Stoppable::onStart is implementation-defined and not
predictable.

This changes PeerFinder to load the database in Stoppable::onPrepare, before
threads are launched. In general, creation and initialization of resources that
are shared between classes should happen in onPrepare rather than onStart,
to solve this problem.
2014-10-10 19:38:52 -07:00
Vinnie Falco
7c0c2419f7 Refactor PeerFinder:
Previously, the PeerFinder manager was constructed with a Callback object
provided by the owner, which was used to perform operations like connecting,
disconnecting, and sending messages. This made it difficult to change the
overlay code because a single call into the PeerFinder could cause both
OverlayImpl and PeerImp to be re-entered one or more times, sometimes while
holding a recursive mutex. This change eliminates the callback by changing
PeerFinder functions to return values indicating the action the caller should
take.

As a result of this change the PeerFinder no longer needs its own dedicated
thread. OverlayImpl is changed to call into PeerFinder on a timer to perform
periodic activities. Furthermore the Checker class used to perform connectivity
checks has been refactored. It no longer uses an abstract base class, in order
to not type-erase the handler passed to async_connect (ensuring compatibility
with coroutines). To allow unit tests that don't need a network, the Logic
class is now templated on the Checker type. Currently the Manager provides its
own io_service. However, this can easily be changed so that the io_service is
provided upon construction.

Summary
* Remove unused SiteFiles dependency injection
* Remove Callback and update signatures for public APIs
* Remove obsolete functions
* Move timer to overlay
* Steps toward a shared io_service
* Templated, simplified Checker
* Tidy up Checker declaration
2014-10-10 15:04:37 -07:00
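A hypothetical sketch of the return-value pattern described above; PeerAction, Decision, and the endpoint shown are invented for illustration and are not the PeerFinder types.

```cpp
#include <string>
#include <vector>

// Instead of invoking a Callback, the logic returns values telling the caller
// what to do, so the caller is never re-entered while holding its own locks.
enum class PeerAction { None, Connect, Disconnect };

struct Decision
{
    PeerAction action;
    std::string endpoint;   // address to connect to when action == Connect
};

// The overlay calls this from its own timer and performs the returned actions.
std::vector<Decision> once_per_second_sketch()
{
    std::vector<Decision> out;
    out.push_back ({PeerAction::Connect, "192.0.2.1:51235"});   // example only
    return out;
}
```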
Vinnie Falco
5f59282ba1 Clean up Overlay and PeerFinder sources:
* Tidy up identifiers and declarations
* Merge PeerFinder headers into one file
* Merge handout classes and functions into one file
2014-10-10 15:04:36 -07:00
Vinnie Falco
db03ce939c Add pending_handlers 2014-10-10 13:26:08 -07:00
Vinnie Falco
68bcbbb701 Add missing include in beast header 2014-10-10 13:26:08 -07:00
David Schwartz
8bdf7b3983 Remove unused file 2014-10-10 10:27:47 -07:00
Vinnie Falco
4ab427d315 Cleanup: Combine Section and BasicConfig, move to basics 2014-10-09 14:49:10 -07:00
Vinnie Falco
9a0a434dd8 Fix incorrect address in connectivity check report:
The remoteAddress is the address as seen on the socket; for incoming
connections its port is chosen at random by the remote implementation and
differs from the port on which the remote listening socket accepts
connections. The checkedAddress is the remote address as seen
on the socket, combined with the port advertised in the TMEndpoints message.
This fixes the reporting and metadata associated with addresses tested
for connectivity.

The README has been updated to reflect that uptime is no longer part of
the metadata associated with IP addresses saved for bootstrapping.
2014-10-09 14:48:54 -07:00
Nik Bougalis
33d1dda954 Handle BIGNUM conversion failure 2014-10-06 11:24:42 -07:00
Howard Hinnant
8e9efb4ceb Remove unused transaction code. 2014-10-06 11:18:15 -07:00
Nik Bougalis
8835af11d5 Cleanups and surface reduction:
* Don't use friendship unless needed
* Trim down interfaces
* Make classes feel more like std containers
2014-10-06 11:18:15 -07:00
Nik Bougalis
cfb6b678f1 Remove HashMaps 2014-10-02 14:58:14 -07:00
miguelportilla
365500da98 Create orderbook integration test (RIPD-483) 2014-10-02 14:58:14 -07:00
Miguel Portilla
f14d75e798 Optimize account_lines and account_offers (RIPD-587)
Conflicts:
	src/ripple/app/ledger/Ledger.h
2014-10-02 14:58:14 -07:00
Tom Ritchford
0f71b4a378 Fix most compilation warnings for gcc, clang, release, debug. 2014-10-02 14:58:14 -07:00
JoelKatz
b651e0146d Fix some fee logic: (RIPD-614)
* fee_default sets cost in drops of reference transaction
* Offline signing uses fee_default
* Signing multiplier maximum works correctly
* Fix bugs in load fee track
* Remove dead code, add comments
2014-10-02 14:58:14 -07:00
Tom Ritchford
a0dbbb2d84 Update and sort ErrorCode descriptions 2014-10-02 14:57:31 -07:00
Torrie Fischer
a85fbf69e0 Update rocksdb 2014-10-02 14:57:31 -07:00
Torrie Fischer
92b8c7961b Squashed 'src/rocksdb2/' changes from 37c6740..25888ae
25888ae Merge pull request #329 from fyrz/master
89833e5 Fixed signed-unsigned comparison warning in db_test.cc
fcac705 Fixed compile warning on Mac caused by unused variables.
b3343fd resolution for java build problem introduced by 5ec53f3edf62bec1b690ce12fb21a6c52203f3c8
187b299 ForwardIterator: update prev_key_ only if prefix hasn't changed
5ec53f3 make compaction related options changeable
d122e7b Update INSTALL.md
986dad0 Merge pull request #324 from dalgaaf/wip-da-SCA-20140930
8ee75dc db/memtable.cc: remove unused variable merge_result
0fd8bbc db/db_impl.cc: reduce scope of prefix_initialized
676ff7b compaction_picker.cc: remove check for >=0 for unsigned
e55aea5 document_db.cc: fix assert
d517c83 in_table_factory.cc: use correct format specifier
b140375 ttl/ttl_test.cc: prefer prefix ++operator for non-primitive types
43c789c spatialdb/spatial_db.cc: use !empty() instead of 'size() > 0'
0de452e document_db.cc: pass const parameter by reference
4cc8643 util/ldb_cmd.cc: prefer prefix ++operator for non-primitive types
af8c2b2 util/signal_test.cc: suppress intentional null pointer deref
33580fa db/db_impl.cc: fix object handling, remove double lines
873f135 db_ttl_impl.h: pass func parameter by reference
8558457 ldb_cmd_execute_result.h: perform init in initialization list
063471b table/table_test.cc: pass func parameter by reference
93548ce table/cuckoo_table_reader.cc: pass func parameter by ref
b8b7117 db/version_set.cc: use !empty() instead of 'size() > 0'
8ce050b table/bloom_block.*: pass func parameter by reference
53910dd db_test.cc: pass parameter by reference
68ca534 corruption_test.cc: pass parameter by reference
7506198 cuckoo_table_db_test.cc: add flush after delete
1f96330 Print MB per second compaction throughput separately for reads and writes
ffe3d49 Add an instruction about SSE in INSTALL.md
ee1f3cc Package generation for Ubuntu and CentOS
f0f7955 Fixing comile errors on OS X
99fb613 remove 2 space linter
b2d64a4 Fix linters, second try
747523d Print per column family metrics in db_bench
56ebd40 Fix arc lint (should fix #238)
637f891 Merge pull request #321 from eonnen/master
827e31c Make test use a compatible type in the size checks.
fd5d80d CompactedDB: log using the correct info_log
2faf49d use GetContext to replace callback function pointer
983d2de Add AUTHORS file. Fix #203
abd70c5 Merge pull request #316 from fyrz/ReverseBytewiseComparator
2dc6f62 handle kDelete type in cuckoo builder
8b8011a Changed name of ReverseBytewiseComparator based on review comment
389edb6 universal compaction picker: use double for potential overflow
5340484 Built-in comparator(s) in RocksJava
d439451 delay initialization of cuckoo table iterator
94997ea reduce memory usage of cuckoo table builder
c627595 improve memory efficiency of cuckoo reader
581442d option to choose module when calculating CuckooTable hash
fbd2daf CompactedDBImpl::MultiGet() for better CuckooTable performance
3c68006 CompactedDBImpl
f7375f3 Fix double deletes
21ddcf6 Remove allow_thread_local
fb4a492 Merge pull request #311 from ankgup87/master
611e286 Merge branch 'master' of https://github.com/facebook/rocksdb
0103b44 Merge branch 'master' of ssh://github.com/ankgup87/rocksdb
1dfb7bb Add block based table config options
cdaf44f Enlarge log size cap when printing file summary
7cc1ed7 Merge pull request #309 from naveenatceg/staticbuild
ba6d660 Resolving merge conflict
51eeaf6 Addressing review comments
fd7d3fe Addressing review comments (adding a env variable to override temp directory)
cf7ace8 Addressing review comments
0a29ce5 re-enable BlockBasedTable::SetupForCompaction()
55af370 Remove TODO for checking index checksums
3d74f09 Fix compile
53b0039 Fix release compile
d0de413 WriteBatchWithIndex to allow different Comparators for different column families
57a32f1 change target_file_size_base to uint64_t
5e6aee4 dont create backup_input if compaction filter v2 is not used
49b5f94 Merge pull request #306 from Liuchang0812/fix_cast
787cb4d remove cast, replace %llu with % PRIu64
a7574d4 Update logging.cc
7e0dcb9 Update logging.cc
57fa3cc Merge pull request #304 from Liuchang0812/fix-check
cd44522 Merge pull request #305 from Liuchang0812/fix-logging
6a031b6 remove unused variable
4436f17 fixed #303: replace %ld with % PRId64
7a1bd05 Merge pull request #302 from ankgup87/master
423e52c Merge branch 'master' of https://github.com/facebook/rocksdb
bfeef94 Add rate limiter
32f2532 Print compression_size_percent as a signed int
976caca Skip AllocateTest if fallocate() is not supported in the file system
3b897cd Enable no-fbcode RocksDB build
f445947 RocksDB: Format uint64 using PRIu64 in db_impl.cc
e17bc65 Merge pull request #299 from ankgup87/master
b93797a Fix build
adae3ca [Java] Fix JNI link error caused by the removal of options.db_stats_log_interval
90b8c07 Fix unit tests errors
51af7c3 CuckooTable: add one option to allow identity function for the first hash function
0350435 Fixed a signed-unsigned comparison in spatial_db.cc -- issue #293
2fb1fea Fix syncronization issues
ff76895 Remove some unnecessary constructors
feadb9d fix cuckoo table builder test
3c232e1 Fix mac compile
54cada9 Run make format on PR #249
27b22f1 Merge pull request #249 from tdfischer/decompression-refactoring
fb6456b Replace naked calls to operator new and delete (Fixes #222)
5600c8f cuckoo table: return estimated size - 1
a062e1f SetOptions() for memtable related options
e4eca6a Options conversion function for convenience
a7c2094 Merge pull request #292 from saghmrossi/master
4d05234 Merge branch 'master' of github.com:saghmrossi/rocksdb
60a4aa1 Test use_mmap_reads
94e43a1 [Java] Fixed 32-bit overflowing issue when converting jlong to size_t
f9eaaa6 added include for inttypes.h to fix nonworking printf statements
f090575 Replaced "built on on earlier work" by "built on earlier work" in README.md
faad439 Fix #284
49aacd8 Fix make install
acb9348 [Java] Include WriteBatch into RocksDBSample.java, fix how DbBenchmark.java handles WriteBatch.
4a27a2f Don't sync manifest when disableDataSync = true
9b8480d Merge pull request #287 from yinqiwen/rate-limiter-crash-fix
28be16b fix rate limiter crash #286
04ce1b2 Fix #284
add22e3 standardize scripts to run RocksDB benchmarks
dee91c2 WriteThread
540a257 Fix WAL synced
24f034b Merge pull request #282 from Chilledheart/develop
49fe329 Fix build issue under macosx
ebb5c65 Add make install
0352a9f add_wrapped_bloom_test
9c0e66c Don't run background jobs (flush, compactions) when bg_error_ is set
a9639bd Fix valgrind test
d1f24dc Relax FlushSchedule test
3d9e6f7 Push model for flushing memtables
059e584 [unit test] CompactRange should fail if we don't have space
dd641b2 fix RocksDB java build
53404d9 add_qps_info_in cache bench
a52cecb Fix Mac compile
092f97e Fix comments and typos
6cc1286 Added a few statistics for BackupableDB
0a42295 Fix SimpleWriteTimeoutTest
06d9862 Always pass MergeContext as pointer, not reference
d343c3f Improve db recovery
6bb7e3e Merger test
88841bd Explicitly cast char to signed char in Hash()
5231146 MemTableOptions
1d284db Addressing review comments
55114e7 Some updates for SpatialDB
171d4ff remove TailingIterator reference in db_impl.h
9b0f7ff rename version_set options_ to db_options_ to avoid confusion
2d57828 Check stop level trigger-0 before slowdown level-0 trigger
659d2d5 move compaction_filter to immutable_options
048560a reduce references to cfd->options() in DBImpl
011241b DB::Flush() Do not wait for background threads when there is nothing in mem table
a2bb7c3 Push- instead of pull-model for managing Write stalls
0af157f Implement full filter for block based table.
9360cc6 Fix valgrind issue
02d5bff Merge pull request #277 from wankai/master
88a2f44 fix comments
7c16e39 Merge pull request #276 from wankai/master
8237738 replace hard-coded number with named variable
db8ca52 Merge pull request #273 from nbougalis/static-analysis
b7b031f Merge pull request #274 from wankai/master
4c2b1f0 Merge remote-tracking branch 'upstream/master'
a5d2863 typo improvement
9f8aa09 Don't leak data returned by opendir
d1cfb71 Remove unused member(s)
bfee319 sizeof(int*) where sizeof(int) was intended
d40c1f7 Add missing break statement
2e97c38 Avoid off-by-one error when using readlink
40ddc3d add cache bench
9f1c80b Drop column family from write thread
8de151b Add db_bench with lots of column families to regression tests
c9e419c rename options_ to db_options_ in DBImpl to avoid confusion
5cd0576 Fix compaction bug in Cuckoo Table Builder. Use kvs_.size() instead of num_entries in FileSize() method.
0fbb3fa fixed memory leak in unit test DBIteratorBoundTest
adcd253 fix asan check
4092b7a Merge pull request #272 from project-zerus/patch-1
bb6ae0f fix more compile warnings
6d31441 Merge pull request #271 from nbougalis/cleanups
0cd0ec4 Plug memory leak during index creation
4329d74 Fix swapped variable names to accurately reflect usage
45a5e3e Remove path with arena==nullptr from NewInternalIterator
5665e5e introduce ImmutableOptions
e0b99d4 created a new ReadOptions parameter 'iterate_upper_bound'
51ea889 Fix travis builds
a481626 Relax backupable rate limiting test
f7f973d Merge pull request #269 from huahang/patch-2
ef5b384 fix a few compile warnings
2fd3806 Merge pull request #263 from wankai/master
1785114 delete unused Comparator
1b1d961 update HISTORY.md
703c3ea comments about the BlockBasedTableOptions migration in Options
4b5ad88 Merge pull request #260 from wankai/master
19cc588 change to filter_block std::unique_ptr support RAII
9b976e3 Merge pull request #259 from wankai/master
5d25a46 Merge remote-tracking branch 'upstream/master'
9b58c73 call SanitizeDBOptionsByCFOptions() in the right place
a84234a Ignore missing column families
8ed70fc add assert to db Put in db_stress test
7f19bb9 Merge pull request #242 from tdfischer/perf-timer-destructors
8438a19 fix dropping column family bug
6614a48 Refactor PerfStepTimer to stop on destruct
076bd01 Fix compile
990df99 Fix ios compile
7dcadb1 Don't let flush preempt compaction in certain cases
dff2b1a typo improvement
985a31c Merge pull request #251 from nbougalis/master
f09329c Fix candidate file comparison when using path ids
7e9f28c limit max bytes that can be read/written per pread/write syscall
d20b8cf Improve Cuckoo Table Reader performance. Inlined hash function and number of buckets a power of two.
0f9c43e ForwardIterator: reset incomplete iterators on Seek()
722d80c reduce recordTick overhead in compaction loop
22a0a60 Merge pull request #250 from wankai/master
be25ee4 delete unused struct Options
0c26e76 Merge pull request #237 from tdfischer/tdfischer/faster-timeout-test
1d23b5c remove_internal_filter_policy
2a8faf7 Compact SpatialDB as we go, not at the end
7f71448 Implementing a cache friendly version of Cuckoo Hash
d977e55 Don't let other compactions run when manual compaction runs
d5bd6c7 Fix ios compile
6b46f78 Merge pull request #248 from wankai/master
528a11c Update block_builder.h
536e997 Remove assert in vector rep
4142a3e Adding a user comparator for comparing Uint64 slices.
1913ce2 more concurrent flushes in SpatialDB
808e809 Adjust SpatialDB column family options
0c39f54 Use Vector memtable when bulk loading SpatialDB
b6fd781 Don't do memtable lookup in db_impl_readonly if memtables are empty while opening db.
9dcb75b Add is-file-deletions-enabled property
1755581 improve OptimizeForPointLookup()
d9c0785 Fix assertion in PosixRandomAccessFile
bda6f33 fix valgrind error in c_test caused by BlockBasedTableOptions
0db6b02 Update timeout to 50ms instead of 3.
ff6ec0e Optimize SpatialDB
2386185 ReadOptions.total_order_seek to allow total order seek for block-based table when hash index is enabled
a98badf print table options
66f62e5 JNI changes corresponding to BlockBasedTableOptions migration
3844001 move block based table related options BlockBasedTableOptions
17b54ae Merge pull request #243 from andybons/patch-1
0508691 Add missing include to use std::unique_ptr
42ea795 Fix concurrency issue in CompactionPicker
bb530c0 Merge pull request #240 from ShaoYuZhang/master
f76eda7 Fix compilation issue on OSX
08be7f5 Implement Prepare method in CuckooTableReader
47b452c Fix the error of c_test.c
562b7a1 Add missing implementaiton of SanitizeDBOptions in simple_table_db_test.cc
63a2215 Improve Options sanitization and add MmapReadRequired() to TableFactory
e173bf9 Eliminate VersionSet memory leak
10720a5 Revert the unintended change that DestroyDB() doesn't clean up info logs.
01cbdd2 Optimize storage parameters for spatialDB
045575a Add CuckooHash table format to table_reader_bench
7c5173d test: db: fix test to have a smaller timeout for when it runs on faster hardware
6929b08 Remove BitStream* tests
50b790c Removing BitStream* functions
162b815 Adding Column Family support in db_bench.
28b5c76 WriteBatchWithIndex: a wrapper of WriteBatch, with a searchable index
5585e00 Update release note of 3.4
343e98a Reverting import change
ddb8039 RocksDB static build Make file changes to download and build the dependencies .Load the shared library when RocksDB is initialized
68eed8c Bump up version
36e759d Adding Cuckoo Table SST option to db_bench
a6fd14c Fix valgrind error in c_test
c703715 attempt to fix auto_roll_logger_test
c8ecfae Merge pull request #230 from cockroachdb/spencerkimball/send-user-keys-to-v2-filter
570ba5a Avoid retrying to read property block from a table when it does not exist.
625b9ef Merge pull request #234 from bbiao/master
59a2763 Fix typo huage => huge
f611935 Fix autovector iterator increment/decrement comments
58b0f9d Support purging logs from separate log directory
2da53b1 [Java] Add purgeOldBackups API
6c4c159 fix_sst_dump_for_old_sst_format
8dfe2fd fix compile error under Mac OS X
58c4946 Allow env_posix to lower background thread IO priority
6a2be31 fix_valgrind_error_caused_in_db_info_dummper
e91ebf1 print compaction_filter name in Options.Dump
5a5953b Add histogram for DB_SEEK
5e64240 log db path info before open
0c9dc9f Remove malloc from FormatFileNumber
bcefede Update HISTORY.md
4808177 Revert "Include candidate files under options.db_log_dir in FindObsoleteFiles()"
0138b8e Fixed compile errors (signed / unsigned comparison) in cuckoo_table_db_test on Mac
1562653 Fixed a signed-unsigned comparison error in db_test
218857b remove tailing_iter.h/cc
5d0074c set bytes_per_sync to 1MB if rate limiter is enabled
3fcf7b2 Pass parsed user key to prefix extractor in V2 compaction
2fa6434 Add scope guard
06a52bd Flush only one column family
9674c11 Integrating Cuckoo Hash SST Table format into RocksDB

git-subtree-dir: src/rocksdb2
git-subtree-split: 25888ae0068c9b8e3d9421ea8c78a7be339298d8
2014-10-02 10:47:26 -07:00
Torrie Fischer
225f8ac12f Merge commit '92b8c7961b433d12d9d77da5d61c26a920bbd370' into updated-rocksdb 2014-10-02 10:47:26 -07:00
Howard Hinnant
1161511207 Fix two Wunused-private-field warnings. 2014-10-01 08:47:56 -07:00
Nicholas Dudfield
ca8eda412e Make travis build and use debug variants for tests 2014-10-01 08:47:56 -07:00
Mark Travis
ec4ec48fb8 Add counters to track nodestore read and write activities. 2014-10-01 08:47:56 -07:00
Nik Bougalis
c0b69e8ef7 Remove the use of beast::String from rippled (RIPD-443) 2014-10-01 08:47:55 -07:00
Tom Ritchford
4241dbb600 Clean and harden Transaction.
* Replace boolean parameter with enumerated type.
* Get rid of std::ref.
* 80-column cleanups.
* Replace a std::bind with a lambda.
2014-10-01 08:47:55 -07:00
Tom Ritchford
f54280aaad New DatabaseReader reads ledger numbers from database. 2014-10-01 08:47:55 -07:00
Tom Ritchford
6069400538 Fix compiler warnings under gcc. 2014-10-01 08:47:55 -07:00
Howard Hinnant
616be1d76c Miscellaneous cleanups:
* Limit HashPrefix construction and disallow assignment

* Give KnownFormats deleted copy members so that derived
  classes will give the right answers if queried with the
  std::is_copy_constructible/assignable traits.

* Replace SharedSingleton with a local static in
  LedgerFormats::getInstance() to be consistent with
  similar code in other places.  This also allows the
  LedgerFormats default constructor to be marked private
  so that the compiler enforces the design that
  LedgerFormats is a singleton type.

* Change return types of LedgerFormats::getInstance() and
  TxFormats::getInstance() from pointer to non-const to
  reference to const so as to follow more established design
  guidelines for singletons.  This prevents pointers being
  mistaken for heap-allocated objects, and the const
  ensures the singleton isn't mutable.

* Change RippleAddress to inherit privately from
  CBase58Data instead of publicly.  This lets the compiler
  enforce that there are no unintended conversions from
  RippleAddress to CBase58Data.  This change allows us
  to remove a comment warning about unwanted conversions.
2014-10-01 08:47:54 -07:00
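A minimal sketch of the local-static singleton shape described above, returning a reference to const; FormatsSketch is a made-up name, not the LedgerFormats code.

```cpp
#include <string>

class FormatsSketch
{
private:
    FormatsSketch() {}   // private constructor: the compiler enforces the singleton design

public:
    FormatsSketch (FormatsSketch const&) = delete;              // deleted copy members answer
    FormatsSketch& operator= (FormatsSketch const&) = delete;   // is_copy_constructible correctly

    static FormatsSketch const& getInstance()
    {
        static FormatsSketch const instance;   // constructed once, thread-safe in C++11
        return instance;                       // reference to const: caller cannot mutate it
    }

    std::string name() const { return "example"; }
};
```

Returning a reference rather than a pointer avoids any suggestion that the caller owns a heap allocation, which is the guideline the commit message cites.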
Nik Bougalis
8e91ce67c5 Allow beast::lexicalCast to parse 'true' & 'false' into a bool 2014-10-01 08:47:54 -07:00
JoelKatz
c1ecd661c3 Fix broken assert in built/validated ledger mismatch handler 2014-10-01 08:47:54 -07:00
JoelKatz
b27e2aad07 Improve transaction security
* Check signatures of every transaction on every validator
* Remove obsolete code
* Check transaction status in submit/sign RPC handler
2014-10-01 08:47:54 -07:00
MarkusTeufelberger
5ce508e09d Change output range names of ledger_cleaner
The input parameters are called "min_ledger" and "max_ledger"; they are also called "minRange" and "maxRange" in the code, but "ledger_min" and "ledger_max" when printed. This is inconsistent and should be changed, as it might lead to confusion about how to call this module via RPC.
2014-10-01 08:47:53 -07:00
Nik Bougalis
3cfa5a41b1 Improve BuildInfo interface:
* Remove unnecessary beast::String dependency
* Explicitly cast to result type while packing a version
* Add unit tests for version formatting
2014-10-01 08:47:53 -07:00
Vinnie Falco
6c072f37ef Remove unused testoverlay module 2014-10-01 08:47:53 -07:00
Nik Bougalis
dbd993ed2b Use namespaces instead of static-only classes 2014-10-01 08:47:52 -07:00
Nik Bougalis
45b5c4ba7a Use deleted members to prevent copying in Beast (RIPD-268) 2014-10-01 08:47:52 -07:00
Nik Bougalis
7933e5d1f9 Use deleted members to prevent copying in rippled (RIPD-268) 2014-10-01 06:28:32 -07:00
Vinnie Falco
01e52e6f9f Use trusted validators median fee
Conflicts:
	src/ripple/app/misc/Validations.cpp
2014-10-01 06:28:32 -07:00
Vinnie Falco
40a955e192 Consume handshake data in HTTP/S server 2014-10-01 06:28:12 -07:00
Vinnie Falco
a8296f7301 Set version to 0.26.3-sp4 2014-09-30 18:04:59 -07:00
Vinnie Falco
590c3b876b Use trusted validators median fee 2014-09-30 18:03:53 -07:00
Vinnie Falco
6dfc805eaa Rewrite HTTP/S server to use coroutines:
* Fix bug with more than one complete request in a read buffer
* Use stackful coroutines for simplified control flow
* Door refactored to detect handshakes
* Remove dependency on MultiSocket
* Remove dependency on handshake detect logic framework
2014-09-30 13:29:32 -07:00
Vinnie Falco
5ce6068df5 Remove obsolete SharedArg 2014-09-29 07:18:51 -07:00
Nik Bougalis
bf9b8f4d1b Use secure RPC connections when configured 2014-09-28 04:39:49 -07:00
Vinnie Falco
d618581060 Config improvements:
* More fine-grained Section mutators
* Add remap for mapping legacy single sections to key value pairs
* Add output stream operators for BasicConfig and Section
* Allow section values to be overwritten from command line
* Update rpc key/value configs from command line
* Add RPC::Setup with defaults and remap legacy rpc sections
2014-09-28 04:39:49 -07:00
David Schwartz
2936bbfae8 Make path filtering smarter (RIPD-561)
* Break path liquidity checking into its own function
* Measure initial quality over minimum destination amount
* Test for available liquidity
2014-09-24 11:54:12 -07:00
Nik Bougalis
47b08bfc02 Add --quorum command line argument (RIPD-563) 2014-09-24 11:19:39 -07:00
Nik Bougalis
da4f77ca1f Return correct error message for invalid fields 2014-09-24 11:19:38 -07:00
David Schwartz
1c0a75d467 Distinguish Byzantine failure from tx bug (RIPD-523) 2014-09-24 11:19:38 -07:00
Nik Bougalis
659cf0c221 Decouple LedgerMaster from configuration 2014-09-24 11:19:38 -07:00
Howard Hinnant
430229fd84 Mark several Ledger member functions as const. 2014-09-24 11:19:37 -07:00
Howard Hinnant
81699a0971 Add +DEBUG to the raw version string for DEBUG builds.
This will show up in the rpc server_info command.
There is no impact on the version string for release builds.
2014-09-24 11:19:37 -07:00
MarkusTeufelberger
c54aff74b3 Build gcc.debug using -Og flag
Since gcc 4.8 is required anyway, it might be nice to use its features.

Intro to feature (second bullet point):
https://gcc.gnu.org/gcc-4.8/changes.html

-g (line 328) is still needed:
http://stackoverflow.com/questions/12970596/gcc-4-8-does-og-imply-g
2014-09-24 11:19:37 -07:00
Mark Travis
7f43ab9097 Improvements to SConstruct:
* Default target is release instead of debug (scons with no arguments).
* All targets now include debug symbols, including release.
Rationale: "out of the box" builds of rippled using plain "scons" or "scons -j4" will produce
a debug instead of a release build, which could underperform.
2014-09-24 11:19:36 -07:00
Miguel Portilla
d78f740250 Add account_offers paging (RIPD-344) 2014-09-19 16:38:10 -07:00
Miguel Portilla
cd1bd18a49 Add account_lines paging (RIPD-343) 2014-09-19 16:18:50 -07:00
sublimator
f81b084448 Set page sizes for ledger_data correctly (RIPD-249) 2014-09-19 16:16:49 -07:00
Vinnie Falco
02d9c77402 Set version to 0.26.3-sp2 2014-09-19 11:57:22 -07:00
Howard Hinnant
a0c903c68c Add needed #include <istream>
This is needed for the combination of boost 1.56 and libc++
2014-09-19 10:29:14 -07:00
JoelKatz
6aa325d3da On missing node in consensus, bow out (RIPD-567) 2014-09-18 15:12:45 -07:00
JoelKatz
041f874d4c Improve transaction security
* Check signatures of every transaction on every validator
* Remove obsolete code
* Check transaction status in submit/sign RPC handler
2014-09-18 14:25:09 -07:00
Nik Bougalis
526ecd6a81 Detect invalid inputs during STAmount conversion (RIPD-570):
* More robust validation of input
* XRP may not be specified using fractions
* Prevent creating native amounts larger than max possible value
* Add unit tests to verify correct parsing
2014-09-18 12:46:21 -07:00
Nik Bougalis
d373054fc4 Templatize and improve beast string-to-integer conversions:
* Properly handle numbers at the edge of precision
* Improve and expand unit test coverage
2014-09-18 12:46:16 -07:00
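The edge-of-precision cases mentioned above are where hand-rolled parsers usually go wrong: a value one digit past the type's maximum must be rejected rather than silently wrapped. A minimal illustrative sketch of such an overflow check (not the beast implementation itself):

    #include <cstdint>
    #include <limits>
    #include <string>

    // Parse a base-10 unsigned 64-bit integer; return false on malformed
    // input or on any value that would overflow, instead of wrapping.
    bool parseUint64 (std::string const& s, std::uint64_t& out)
    {
        if (s.empty ())
            return false;
        std::uint64_t result = 0;
        for (char c : s)
        {
            if (c < '0' || c > '9')
                return false;
            std::uint64_t const digit = c - '0';
            // Reject if result * 10 + digit would exceed the maximum.
            if (result > (std::numeric_limits<std::uint64_t>::max () - digit) / 10)
                return false;
            result = result * 10 + digit;
        }
        out = result;
        return true;
    }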
Vinnie Falco
b6d9f1d4b2 Add fee voting configuration and docs (RIPD-564) 2014-09-17 12:22:51 -07:00
Vinnie Falco
3fef916972 Move some constants to core/SystemParameters.h 2014-09-17 12:22:49 -07:00
Vinnie Falco
89a51e5b91 Split Section to its own header and add convenience accessors 2014-09-17 12:22:49 -07:00
miguelportilla
f87a6ccc7a Fix missing includes for boost 1.56.0 2014-09-16 15:22:00 -07:00
Nik Bougalis
f65cea66ef Remove unused macros, config variables, and file 2014-09-16 14:15:13 -07:00
Vinnie Falco
4239880acb Clean up and restructure sources 2014-09-16 14:15:12 -07:00
Vinnie Falco
1dcd06a1c1 Add missing includes and tidy up 2014-09-16 14:03:50 -07:00
Vinnie Falco
0f30191d10 Refactor STAmount:
* Remove unused functions
* Remove unused constructor
* Use delegating constructors
* Mark some observers deprecated
* Clean up declaration parameter names
* Add checked and unchecked constructors
* De-inline unnecessary inlined functions
* Reorder and regroup members into sections
* Move globals from the unity file to the .cpp
* Change some member functions to be free functions
* Put implementation in one .cpp and the test in another .cpp

Remove unused STAmount constructor and delegate two others. No change in functionality.
2014-09-16 07:39:50 -07:00
Vinnie Falco
8fb9d5daaa Set version to 0.26.4-alpha 2014-09-15 18:20:47 -07:00
Nik Bougalis
ed3c942ff1 Inject JobQueue in NetworkOPs 2014-09-15 16:05:01 -07:00
Nik Bougalis
80436d4a8b Cleanups:
* Remove obsolete string formatting function
* Remove unused ADDRESS macro
* Re-scope functions
2014-09-15 16:04:48 -07:00
Howard Hinnant
cfc702c766 Fix beast::http::headers move members 2014-09-15 16:03:36 -07:00
Vinnie Falco
88ae15ea8e Add base64 conversions and tests 2014-09-15 14:52:42 -07:00
Vinnie Falco
6bafca7386 Use transform_iterator in http::headers 2014-09-15 14:52:42 -07:00
Vinnie Falco
379e842080 Add BasicConfig simplified config interface 2014-09-15 12:46:04 -07:00
Vinnie Falco
c41ce469d0 Cleanup:
* Move QUALITY_ONE to Quality.h
* Move functional files up one level
* Remove core.h
* Merge routines into Config.cpp
* Rename Section to IniFileSections
* Rename IniFileSections routines
2014-09-15 12:21:36 -07:00
Vinnie Falco
a1ca68473d Merge branch 'release' into develop 2014-09-14 15:39:09 -07:00
Nik Bougalis
3345d03433 Avoid conversions whenever possible during RippleState lookups 2014-09-13 11:06:38 -07:00
Nik Bougalis
81a426608a Make log partitions case-insensitive 2014-09-13 11:06:19 -07:00
Vinnie Falco
2ad6f0a65e Set version to 0.26.3-sp1 2014-09-12 15:22:54 -07:00
Vinnie Falco
ee8bd8ddae Fix handling of HTTP/S keep-alives (RIPD-556):
* Proper shutdown for ssl and non-ssl connections
* Report session id in history
* Report histogram of requests per session
* Change print name to 'http'
* Split logging into "HTTP" and "HTTP-RPC" partitions
* More logging and refinement of logging severities
* Log the request count when a session is destroyed

Conflicts:
	src/ripple/http/impl/Peer.cpp
	src/ripple/http/impl/Peer.h
	src/ripple/http/impl/ServerImpl.cpp
	src/ripple/module/app/main/Application.cpp
	src/ripple/module/app/main/RPCHTTPServer.cpp
2014-09-12 15:19:17 -07:00
Vinnie Falco
319ac14e7d Add is_short_read() 2014-09-12 15:19:04 -07:00
Vinnie Falco
0215a7400d Fix handling of HTTP/S keep-alives (RIPD-556):
* Proper shutdown for ssl and non-ssl connections
* Report session id in history
* Report histogram of requests per session
* Change print name to 'http'
* Split logging into "HTTP" and "HTTP-RPC" partitions
* More logging and refinement of logging severities
* Log the request count when a session is destroyed
2014-09-12 14:20:30 -07:00
Vinnie Falco
79db0ca7a6 Add is_short_read() 2014-09-12 14:10:33 -07:00
JoelKatz
1a7eafb699 Add ledger cleaner documentation (RIPD-555) 2014-09-09 22:33:42 -07:00
Nik Bougalis
81a06ea6cd Cleanups:
* Remove obsolete config variables
* Reduce coupling
* Use C++11 ownership containers
* Use auto when it makes sense
* Detect edge-case in unit tests
* Reduce the number of LedgerEntrySet public members
2014-09-09 22:33:42 -07:00
Nik Bougalis
de4be649ab Refactor string-to-integer conversions 2014-09-09 21:38:09 -07:00
sublimator
d90ec5f06c Normalize sort paths in Visual Studio project generator 2014-09-08 11:17:40 -07:00
Vinnie Falco
32065ced6e Add peer count to HTTP server properties 2014-09-08 11:17:39 -07:00
Scott Schurr
b5224a2227 Improve regularity of STObject and STArray (RIPD-448, RIPD-544):
* reduce duplicated code using templates
* replace BOOST_FOREACH with C++11 for loops
* remove most direct calls to new
* limit line length to 80 characters
* clearly identify virtual and overridden methods
* split STObject and STArray into their own files
* name files after the class they contain
2014-09-05 13:02:07 -07:00
Nik Bougalis
c55777738f Refactor LedgerEntrySet:
* Split adjustOwnerCount to increment and decrement paths.
* Move pathfinding-specific functions out of LedgerEntrySet
* Convert members to free functions
2014-09-05 11:50:17 -07:00
JoelKatz
c72dff5a24 Make more RocksDB settings tunable
Add support for universal compaction
2014-09-05 11:50:17 -07:00
JoelKatz
6b09e49c08 Increase the size of the tree cache:
This change will not significantly increase memory consumption
because most entries are pinned anyway.
2014-09-05 11:48:00 -07:00
Nik Bougalis
413218c4c4 Create the directory for the debug_logfile (RIPD-551) 2014-09-04 16:51:31 -07:00
Miguel Portilla
16c04b50ee Add date to tx command (RIPD-542) 2014-09-04 16:51:31 -07:00
Nik Bougalis
56c18f7768 Cleanups and fixes (RIPD-532):
* Properly handle sfWalletLocator field
* Plug a tiny memory leak
* Avoid naked pointers
* Remove unused variables
* Other small cleanups
2014-09-04 16:51:31 -07:00
Tom Ritchford
22ca13bc78 Cleanups to RPC code 2014-09-04 16:51:31 -07:00
Nicholas Dudfield
4c7fd18230 Ticket integration tests 2014-09-04 16:51:31 -07:00
Nik Bougalis
39730fc13e Ticket issuing (RIPD-368):
* New CreateTicket transactor to create tickets
* New CancelTicket transactor to cancel tickets
* Ledger entries for tickets & associated functions
* First draft of M-of-N documentation
2014-09-04 16:11:44 -07:00
Nik Bougalis
889c0a0d0f Transactor refactor:
* Allocate transactors on the stack instead of the heap.
* Remove header files and reduce transactor public interface.
2014-09-04 16:11:44 -07:00
Nik Bougalis
624a803955 Handle whitespace separating an 'ip port' correctly (RIPD-552) 2014-09-04 12:26:27 -07:00
Vinnie Falco
9f5c21f80e Set version to 0.26.3 2014-09-03 16:15:07 -07:00
Nik Bougalis
a3fe089367 Fix missing return value error check 2014-09-02 08:45:19 -07:00
Nik Bougalis
85d5cd3118 Fix missing return value error check 2014-09-01 21:11:31 -07:00
Vinnie Falco
61006e626d Also report mismatched built ledger 2014-08-28 18:03:50 -07:00
Nik Bougalis
15aad1cb24 Optimize pathfinding operations (RIPD-537):
* Calculate and cache Account hashes without holding locks.
* Fast hash-based path element comparison.
* Use emplace instead of find/insert
2014-08-28 15:57:29 -07:00
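The last bullet above ("emplace instead of find/insert") is a small but recurring win: one tree traversal instead of two. A hedged illustration with std::map (the container and key here are placeholders, not the pathfinder's actual types):

    #include <map>
    #include <string>

    void example ()
    {
        std::map<std::string, int> cache;

        // Before: two traversals -- find, then insert on a miss.
        auto it = cache.find ("alpha");
        if (it == cache.end ())
            it = cache.insert (std::make_pair (std::string ("alpha"), 1)).first;

        // After: a single emplace constructs the element in place; if the key
        // already exists it just returns an iterator to the existing entry.
        auto result = cache.emplace ("alpha", 1);
        (void) result;
        (void) it;
    }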
Tom Ritchford
95c1c5f54e Stream generated JSON. 2014-08-28 12:38:21 -07:00
Vinnie Falco
c65fb91878 Fix special members for http classes 2014-08-28 12:38:03 -07:00
Vinnie Falco
d5a7e1331e Set version to 0.26.3-rc3 2014-08-27 15:58:41 -07:00
Vinnie Falco
04bcd93ba3 HTTP(S)-RPC server improvements (RIPD-489, RIPD-533):
* Correct handling of Keep-Alive in socket handlers
* Report session history in print command
2014-08-27 18:06:30 -04:00
Vinnie Falco
f97ef7039a HTTP message and parser improvements:
* streambuf wrapper supports rvalue move
* message class holds a complete HTTP message
* body class holds the HTTP content body
* headers class holds RFC-compliant HTTP headers
* basic_parser provides class interface to joyent's http-parser
* parser class parses into a message object
* Remove unused http get client free function
* unit test for parsing malformed messages
2014-08-27 18:06:30 -04:00
Tom Ritchford
9160b46c1e Bug fixes and new features for LedgerTool:
* Fix RIPD-509, RIPD-514, RIPD-519, RIPD-525, RIPD-527, RIPD-529,
  RIPD-530 and RIPD-531.
* Protect people from ledger-spew and remove cruft.
* Better error messages and handling.
* Cache command lists or clears ledger cache.
* Better ledger summaries.
* Offline mode.
2014-08-27 18:05:44 -04:00
Edward Hennis
aa4b116498 Wrap RippleCalc into a single function (RIPD-500):
* Change public members of Input and Output to remove trailing _.
* Remove Input constructor and separate flags in RippleCalc to reduce
  duplication and confusion.
* Make calculation result private; add getter.
* Narrow scope of some of the results of calls to rippleCalculate.
2014-08-27 18:05:30 -04:00
Edward Hennis
612bb71165 Add enable_if_lvalue 2014-08-27 17:10:24 -04:00
Torrie Fischer
5c67f99ef9 Remove old rocksdb/ 2014-08-27 12:37:13 -07:00
Torrie Fischer
101a4808a0 Update includes and scons 2014-08-27 12:37:05 -07:00
Torrie Fischer
1d38671f5e Squashed 'src/rocksdb2/' content from commit 37c6740
git-subtree-dir: src/rocksdb2
git-subtree-split: 37c6740c383bb9a6ee2747b04f08bc77fcfa10c5
2014-08-27 12:36:50 -07:00
Torrie Fischer
7f25d88f02 Merge commit '1d38671f5edc2322bc58417816674cc629ae7a70' as 'src/rocksdb2' 2014-08-27 12:36:50 -07:00
Vinnie Falco
be830d3dad Set version to 0.26.3-rc2 2014-08-26 10:00:03 -07:00
Nik Bougalis
5bc949d70f Fix a unit test warning 2014-08-26 10:00:03 -07:00
Vinnie Falco
43817bd722 Merge remote-tracking branch 'upstream/release' into tmp 2014-08-26 09:46:29 -07:00
JoelKatz
61623d6d75 Improve parallelization of getRippleLines 2014-08-26 09:28:10 -07:00
David Schwartz
9aad60f56d Make sure we update mTNByID when we replace the root 2014-08-26 09:28:09 -07:00
Vinnie Falco
e7cf3e8084 Add ledger.history.mismatch insight statistic 2014-08-26 09:28:07 -07:00
Tom Ritchford
e024e7c2ec Compile git tags now include name and branch. (RIPD-493) 2014-08-22 18:10:17 -04:00
Edward Hennis
6fc136ae9a Update npm tests & config to pass in Windows. (RIPD-209)
* Extend timeout for WebSocket test.
* Windows networking doesn't like connecting to 0.0.0.0. Use 127.0.0.1
* Additional command line options in config. Can potentially be used to run rippled in a debugger.
2014-08-22 18:10:17 -04:00
Edward Hennis
2b69ded1ea Convert rvalue to an lvalue. (RIPD-494)
* The rvalue gets destructed as soon as "rc" is constructed.
2014-08-22 18:10:17 -04:00
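The lifetime hazard described above is generic C++: anything that refers into a temporary dangles once the full expression ends. An illustrative reconstruction with stand-in types (not the actual RIPD-494 code):

    #include <string>
    #include <vector>

    std::vector<std::string> makeNames ()          // returns an rvalue
    {
        return std::vector<std::string> {"alice", "bob"};
    }

    void example ()
    {
        // Bug: the temporary vector is destroyed at the end of the statement,
        // so 'first' would refer into freed storage.
        //   auto const& first = makeNames ().front ();

        // Fix: bind the rvalue to a named lvalue; 'names' lives until the end
        // of the scope, so references into it remain valid.
        auto names = makeNames ();
        auto const& first = names.front ();
        (void) first;
    }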
Tom Ritchford
8dd799aa6f New command line LedgerTool. (RIPD-243)
* Retrieve and process summary or full ledgers.
* Search using arbitrary criteria (any Python function).
* Search using arbitrary formats (any Python function).
* Caches ledgers as .gz files to avoid repeated server requests.
* Handles ledger numbers, ranges, and special names like validated or closed.
2014-08-22 18:10:11 -04:00
Vinnie Falco
7230ef41ee Fix warnings and compile errors 2014-08-20 17:44:00 -07:00
Edward Hennis
a86f0a743c Clean up some docs
* Consistent line endings in rippled-example.cfg
* Rewrite CodingStyle.md. Get down to top priorities
2014-08-20 16:20:15 -07:00
Josh Juran
af75b55ef7 src/README.md: s/addded/added/ 2014-08-20 16:19:28 -07:00
Vinnie Falco
9ecb37dd4f Add validators aged container test 2014-08-20 16:09:52 -07:00
Vinnie Falco
2e3784a914 Tidy up sources 2014-08-20 16:09:51 -07:00
Scott Schurr
019c1af435 Use aged containers in Validators module (RIPD-349) 2014-08-20 16:09:51 -07:00
Vinnie Falco
5322955f2b Fix exception safety in aged containers 2014-08-20 16:09:50 -07:00
Howard Hinnant
a8ea4ce283 Fix move constructor of aged_unordered_container (RIPD-490) 2014-08-20 16:08:59 -07:00
Mark Travis
c12862f60d Enable heap profiling with jemalloc:
The jemalloc library (which must be downloaded and installed separately)
is required to perform heap profiling. Instructions on how to enable heap
profiling with rippled are available in doc/HeapProfiling.md
2014-08-13 20:36:38 -07:00
Tom Ritchford
e889183fc5 JSON cleanups:
* Fix Json headers to include what they use.
* Get rid of ripple/json/api directory.
2014-08-13 20:36:36 -07:00
Jeff Trull
7be695c6bd Handle changes to boost::optional in newly released boost 1.56:
To improve compatibility with the proposed std::optional a number
of changes were made, one of which is the removal of the implicit
conversion to bool.  As a result, returning boost::optional as a
bool value now fails.  Explicit conversion to bool used for clarity.
2014-08-13 19:00:17 -07:00
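Concretely, the 1.56 change described above plays out like this (a minimal sketch; lookupFee is a made-up name):

    #include <boost/optional.hpp>

    boost::optional<int> lookupFee ()
    {
        return 10;
    }

    bool hasFee ()
    {
        // Boost < 1.56: 'return lookupFee ();' compiled through the implicit
        // conversion to bool.  With 1.56's explicit operator bool it no
        // longer does, so the conversion is spelled out:
        return static_cast<bool> (lookupFee ());
        // Contextual conversions -- if (lookupFee ()), !lookupFee () -- still
        // work unchanged.
    }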
Nik Bougalis
956901ae02 Properly handle edge-cases when parsing JSON integers (RIPD-470):
* Properly handle both unsigned and signed integers
* Return parsing error for overlong JSON numbers
* Implement unit test checking the edge cases that are of interest
2014-08-13 18:46:32 -07:00
Nik Bougalis
d562c5b2d5 Account for high-ascii (RIPD-464) 2014-08-13 18:46:32 -07:00
Nik Bougalis
d7b054c3f6 When compiling debug builds with GCC use _FORTIFY_SOURCE=2:
This gcc/glibc feature adds some (supposedly) lightweight checks which can
help detect errors such as buffer overflows.
2014-08-13 18:33:30 -07:00
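Roughly what the flag buys (assuming glibc and at least -O1, which _FORTIFY_SOURCE requires): calls such as strcpy are routed to checked variants that abort on a detectable overflow instead of silently corrupting memory.

    // Compile with: g++ -O1 -D_FORTIFY_SOURCE=2 fortify_demo.cpp
    #include <cstring>

    int main ()
    {
        char buf[8];
        // The checked variant knows the destination holds 8 bytes and
        // terminates the program ("buffer overflow detected") rather than
        // writing past the end of 'buf'.
        std::strcpy (buf, "definitely longer than eight bytes");
    }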
Tom Ritchford
901ccad0cf Clarify unfunded offer deletion strategy. 2014-08-13 18:33:03 -07:00
Vinnie Falco
b9454e0f0c Set version to 0.26.2 2014-08-12 12:19:13 -07:00
Vinnie Falco
26181907fc Merge branch 'release-next' into release
Conflicts:
	src/ripple/module/app/paths/cursor/ForwardLiquidity.cpp
	src/ripple/overlay/tests/peer_info.test.cpp
	src/ripple/unity/http.h
2014-08-12 12:19:04 -07:00
Miguel Portilla
8368798ad2 Add owner_funds to subscription streams (RIPD-377) 2014-08-12 11:56:07 -07:00
Miguel Portilla
ed597e5e99 Add owner_funds to subscription streams (RIPD-377) 2014-08-12 11:54:49 -07:00
Miguel Portilla
2c88c15f7f Fix unhandled exception in async HTTP server (RIPD-475) 2014-08-12 11:54:02 -07:00
Miguel Portilla
af7cd3cc04 Fix unhandled exception in async HTTP server (RIPD-475) 2014-08-12 11:46:45 -07:00
Howard Hinnant
9552551f9a Refactor SField (RIPD-431)
* Restrict access to SField constructors.
* Make all SField access const.
* Hide and simplify databases used to hold SField constants.
* Separate the two concerns of representing a field,
  and maintaining a database of fields.
2014-08-08 19:16:51 -07:00
Edward Hennis
1c73a0f649 Remove unused code:
* StringConcat was only being referenced by unit test.
* `runall.sh` is no longer needed. Use `npm test` instead.
2014-08-08 19:16:42 -07:00
Tom Ritchford
e3698b2a07 Fix Pathfinder::getPathsOut to use Issue 2014-08-08 19:16:34 -07:00
Tom Ritchford
c841f8b360 Add the git tag to the compile (RIPD-238) 2014-08-08 19:16:33 -07:00
wltsmrz
50f9b68d61 Bump ledger_wait timeout for Travis 2014-08-08 14:57:39 -07:00
Nik Bougalis
27b48bc16e Refactor beast::SemanticVersion (RIPD-199) 2014-08-08 14:57:39 -07:00
Edward Hennis
bccdbaed2b Update Doxyfile (RIPD-175):
* Clean up the include path to reflect updated code structure.
* Update PROJECT_BRIEF to more accurate title.
* Fix location of logo graphic.
* Hide all undocumented classes.
2014-08-08 14:57:39 -07:00
Nik Bougalis
398095a667 Cleanups and performance optimizations (RIPD-450):
* Remove AccountItems and AccountItem
* Restructure RippleLineCache to not require shared_ptr
* Avoid expensive copies of base_uint<160> in RippleState
2014-08-08 14:57:39 -07:00
Nik Bougalis
80095824b9 Remove obsolete nickname support 2014-08-08 14:57:39 -07:00
Nik Bougalis
d91c1f96cc Detect node store batch write failures (RIPD-270):
* Raise open file limit from the soft max to the hard max.
2014-08-08 14:57:39 -07:00
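Raising the descriptor limit from the soft to the hard maximum is a standard POSIX call; a minimal sketch (not the rippled code itself):

    #include <sys/resource.h>

    // Raise the soft limit on open file descriptors to the hard limit.
    bool raiseFileDescriptorLimit ()
    {
        rlimit rl;
        if (getrlimit (RLIMIT_NOFILE, &rl) != 0)
            return false;
        rl.rlim_cur = rl.rlim_max;
        return setrlimit (RLIMIT_NOFILE, &rl) == 0;
    }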
Howard Hinnant
1c005a0292 Fix macro setting for rocksdb unity file 2014-08-08 14:50:15 -07:00
miguelportilla
c7ced496ac Update rocksdb unity file (RIPD-352) 2014-08-08 15:42:41 -04:00
Vinnie Falco
d4ff18834c Merge commit 'f86d9fd626df1cee55bce4c577d06bb064dc827b' as 'src/rocksdb' 2014-08-08 11:57:41 -07:00
Vinnie Falco
f86d9fd626 Squashed 'src/rocksdb/' content from commit 224932d
git-subtree-dir: src/rocksdb
git-subtree-split: 224932d4d0
2014-08-08 11:57:41 -07:00
Vinnie Falco
854604f724 Remove rocksdb in preparation for subtree add 2014-08-08 11:57:29 -07:00
Vinnie Falco
cfd3642cb1 Set version to 0.26.2-alpha-4 2014-08-07 17:13:55 -07:00
Tom Ritchford
f493590604 Logic fix for multiquality issues. 2014-08-07 17:10:38 -07:00
Vinnie Falco
8c084a3de8 Set version to 0.26.2-alpha-3 2014-08-07 10:13:22 -07:00
Tom Ritchford
985aa803a4 Fix error with multi-quality paths. 2014-08-07 10:12:09 -07:00
Vinnie Falco
9e319d7bd6 Set version to 0.26.2-alpha-2 2014-08-06 10:16:13 -07:00
David Schwartz
a122e176d7 Pathfinding fixes:
* Don't consider global freeze if not enforcing
* Log if a covering path fails to cover
2014-08-06 10:16:12 -07:00
Tom Ritchford
88a6f2931e Fix local variable unfundedOffers that was shadowing a class variable. 2014-08-06 09:26:22 -07:00
Vinnie Falco
54f3a83e25 Fix missing return values in headers_t 2014-08-05 16:43:13 -07:00
Vinnie Falco
0955c0d8d3 Set version to 0.26.2-alpha-1 2014-08-05 15:40:34 -07:00
JoelKatz
6bb5be5216 Avoid a mutex 99+% of the time in SField::getField 2014-08-05 15:36:12 -07:00
JoelKatz
c9cd7e4be0 Rewrite STObject::setType for improved performance 2014-08-05 15:36:05 -07:00
Miguel Portilla
ce2cecf046 Add owner_funds to client subscription data (RIPD-377)
Conflicts:
	src/ripple/module/app/ledger/AcceptedLedger.cpp
2014-08-05 15:04:50 -07:00
Vinnie Falco
6e934ee6a1 HTTP handshake in peer protocol (RIPD-351):
* New I/O paths for client and server role
* New handshake_analyzer detects the peer protocol
* New basic_message class for parsing and storing HTTP messages
* Conditional compilation for selective feature enabling.
* Server supports both current handshake and HTTP handshake
2014-08-05 13:17:02 -07:00
Vinnie Falco
723d7d1263 HTTP support improvements:
* RFC2616 compliance
* Case insensitive equality, inequality operators for strings
* Improvements to http::parser
* Tidy up HTTP method enumeration
2014-08-05 13:17:01 -07:00
Scott Schurr
298572893e Improvements to aged_containers (RIPD-363)
- Added unit tests for element erase
- Added unit tests for range erase
- Added unit tests for touch
- Added unit tests for iterators and reverse_iterators
- Un-inlined operator== for unordered containers
- Fixed minor problems with ordered_container erase()
- Made ordered_container...
  - erase (reverse_iterator pos) not compile
  - erase (reverse_iterator first, reverse_iterator last) not compile
  - touch (reverse iterator pos) not compile
- Verified that ordered container...
  - insert() already rejects reverse_iterator
  - emplace_hint() already rejects reverse_iterator
- Made set/multiset iterators const

Regarding the set/multiset iterators, see section 1.5 of
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2913.pdf
as pointed out by Vinnie.
2014-08-04 15:25:30 -07:00
Mark Travis
405f6f7368 Make NetworkOPs::isFull() thread-safe 2014-08-04 11:19:07 -07:00
Tom Ritchford
648ccc7c17 Replace const Type& with Type const& for common types.
* std::string
* RippleAccount
* Account
* Currency
* uint256
* STAmount
* Json::Value
2014-08-04 11:18:44 -07:00
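The two spellings are identical to the compiler; this is purely a style convention ("east const"), e.g.:

    #include <cstddef>
    #include <string>

    // Equivalent declarations -- the second form is the convention adopted here.
    std::size_t countVowels (const std::string& s);
    std::size_t countVowels (std::string const& s);   // re-declares the same function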
Mark Travis
f5afe0587f Fix filter_policy object leak in NodeStore backends 2014-08-04 11:16:44 -07:00
Tom Ritchford
91a227a475 Correct Pathfinder::getPaths out to handle order books (RIPD-427) 2014-08-02 10:38:57 -07:00
Tom Ritchford
4b905fe9ff Clean OrderBookDB::getBooksBy methods. 2014-08-02 10:38:57 -07:00
Howard Hinnant
0f409b7bec ASCII clean 2014-08-02 10:38:56 -07:00
Vinnie Falco
295c8de858 Detect inconsistency in PeerFinder self-connects (RIPD-411) 2014-07-31 16:10:54 -07:00
Nik Bougalis
e5252f90af Remove LedgerBase and decongest Ledger locks 2014-07-31 16:05:08 -07:00
Miguel Portilla
c2276155bf Add pubkey_node and hostid to server stream messages (RIPD-407) 2014-07-31 16:05:08 -07:00
Miguel Portilla
dbe49bcd87 Remove TRUST_NETWORK directive (RIPD-331) 2014-07-31 16:04:59 -07:00
David Schwartz
7b936de32c Freeze enforcing: (RIPD-399)
* Set enforce date: September 15, 2014
* Enforce in stand alone mode
* Enforce at source
* Enforce intermediary nodes
* Enforce global freeze in get paths out
* Enforce global freeze in create offer
* Don't consider frozen links a path out
* Handle in getBookPage
* Enforce in new offer transactors
2014-07-30 23:28:48 -07:00
evhub
9eb34f542c More fixes to VSProject sorting algorithm 2014-07-30 08:56:54 -07:00
Tom Ritchford
194304e544 Refactor RippleCalc:
* Rationalize method and filenames, move to subdirectory.
* Use Issue in Node.
* Restrict access to PathState variables.
* Line length and readability cleanups.
* New PathCursor stores path calculation data during rippleCalc.
* Extract methods PathCursor::node(), PathCursor::previousNode()
  and RippleCalc::addPath
2014-07-30 08:29:29 -07:00
Howard Hinnant
c59fc332d5 Make CBase58Data/RippleAddress movable (RIPD-428):
This significantly increases the performance in returning
these types from factory functions.
2014-07-30 07:25:24 -07:00
Nik Bougalis
b43832fe57 Use std::atomic 2014-07-29 21:50:58 -04:00
Howard Hinnant
c24a497a23 Further documentation improvements to Ledger Consensus. 2014-07-29 21:41:19 -04:00
Tom Ritchford
4096fcd1bf Reduce scope of RPC locks and general cleanup (RIPD-458) 2014-07-29 16:24:44 -04:00
Vinnie Falco
9a0e806f78 Set version to 0.26.1 2014-07-28 12:05:01 -07:00
Vinnie Falco
20c9632996 Enable asynchronous handling of HTTP-RPC (RIPD-390)
* Activate async code path
* Tidy up HTTP server code
* Use shared_ptr in HTTP server
* Remove check for unspecified IP
* Remove hairtrigger
* Fix missing HTTP authorization check
* Fix multisocket flags in RPC-HTTP server
* Fix authorization failure when no credentials required
* Addresses RIPD-159, RIPD-161, RIPD-390
2014-07-28 12:05:01 -07:00
Vinnie Falco
65a628ca88 Add HTTPHeaders::build_map 2014-07-28 12:05:00 -07:00
Vinnie Falco
d82dbba096 Make bind_handler variadic ctor explicit 2014-07-28 12:05:00 -07:00
Howard Hinnant
58547f6997 Tidy up hardened containers (RIPD-380):
* Rename hardened containers for clarity
* Fixes https://ripplelabs.atlassian.net/browse/RIPD-380
2014-07-28 09:06:35 -07:00
Howard Hinnant
e6f4eedb1e Tidy up hardened_hash:
* Added siphash as a HashAlgorithm
* Refactored class responsibilities
2014-07-28 09:06:22 -07:00
Vinnie Falco
c5b963141f Fix intrinsic calls in static_initializer 2014-07-28 09:04:41 -07:00
Nik Bougalis
5df40bd746 Fix auth handling during OfferCreate (RIPD-414) 2014-07-28 08:57:22 -07:00
Howard Hinnant
403f15dc48 Documentation for Ledger Consensus implementation (RIPD-405) 2014-07-28 08:56:23 -07:00
Nik Bougalis
704d7451a0 Fix auth handling during OfferCreate (RIPD-414) 2014-07-25 11:42:42 -07:00
Vinnie Falco
fa11071443 Enable asynchronous handling of HTTP-RPC (RIPD-390)
* Activate async code path
* Tidy up HTTP server code
* Use shared_ptr in HTTP server
* Remove check for unspecified IP
* Remove hairtrigger
* Fix missing HTTP authorization check
* Fix multisocket flags in RPC-HTTP server
* Fix authorization failure when no credentials required
* Addresses RIPD-159, RIPD-161, RIPD-390
2014-07-24 20:22:55 -07:00
Vinnie Falco
87351c8a0c Add HTTPHeaders::build_map 2014-07-24 20:18:51 -07:00
Vinnie Falco
2f5fb1e68e Make bind_handler variadic ctor explicit 2014-07-24 20:18:51 -07:00
Tom Ritchford
96e1ec6d31 Fix build warnings and .gitignore.
* Comment out unused local function for both clang and g++.
* Get rid of numerous Boost warnings for clang.
* Remove some unused local variables.
* Put TAGS into the .gitignore.
2014-07-24 20:18:51 -07:00
Nik Bougalis
ac3cf05f1a Properly order warning suppression flags during compile 2014-07-24 20:18:51 -07:00
Tom Ritchford
6335e34395 Simplify locking and move a typedef.
* Make DatabaseCon's lock private and expose a scoped lock_guard.
* Get rid of DeprecatedRecursiveMutex and DeprecatedScopedLock entirely.
* Move CancelCallback to Job where it logically belongs.
2014-07-23 19:38:52 -07:00
Scott Schurr
02c2029ac1 Add more documentation of ledger acquisition (RIPD-373)
Capturing information from a seminar on the topic into the source tree.
2014-07-23 19:29:13 -07:00
David Schwartz
6914aa3e27 Check for payment increments that make no progress. (RIPD-374) 2014-07-23 19:27:55 -07:00
David Schwartz
f4fcb1cc9a Remove some dead SHAMapMissingNode code 2014-07-23 19:27:50 -07:00
David Schwartz
b6eec21ec0 Pathfinder cleanups, more efficient 'd' handling (RIPD-156) 2014-07-23 19:27:46 -07:00
David Schwartz
0ce3aeb189 Stop finding paths when enough are found (RIPD-156) 2014-07-23 19:27:42 -07:00
Nicholas Dudfield
713c8efcbe Fix intermittently failing send test 2014-07-23 02:27:19 -07:00
Vinnie Falco
9fa5e39872 Set version to 0.26.0 2014-07-22 09:59:45 -07:00
Vinnie Falco
ace53fa405 Set version to 0.26.0-rc2 2014-07-21 10:39:48 -07:00
Vinnie Falco
3c06980107 Merge branch 'release' into develop 2014-07-21 10:25:23 -07:00
Vinnie Falco
1f26fbb5af Set version to 0.26.0-rc1 2014-07-21 10:00:16 -07:00
Howard Hinnant
84c6622122 Fix bugs, typo in SHAMapNodeID 2014-07-18 17:45:51 -07:00
Miguel Portilla
63f099f2f6 Fix book_offers limit parameter (RIPD-295)
Conflicts:
	src/ripple/module/app/misc/NetworkOPs.cpp
2014-07-18 10:37:45 -07:00
Nicholas Dudfield
373ce72984 Update freeze tests
* Conditionally skip freeze enforcement tests
* Honour remote.fee_cushion setting
* Workaround ripple-lib regression
* Don't use mocha features from wrong version
2014-07-18 10:35:06 -07:00
JoelKatz
4cf29455e4 Freeze flags: (RIPD-394)
* Define all flags
* Set/clear bits in transactors
* Frozen line counts towards reserve, is non-default
* Report trust line state
2014-07-18 10:35:06 -07:00
David Schwartz
07db5d497c Remove SHAMapNodeID from SHAMapTreeNode (RIPD-347)
This resolves the "right data, wrong ID" issue in the
tree node cache.
2014-07-18 10:35:06 -07:00
JoelKatz
f1bb0afc4e Improve transaction fee and execution logic (RIPD-323):
* tecINSUFF_FEE if balance doesn't cover fee
* Ensure transaction recovery is deterministic
* Reduce transaction retries
2014-07-18 10:35:06 -07:00
Vinnie Falco
d791fe3013 Fix msvc static initialization in SHAMapNodeID 2014-07-17 20:05:39 -07:00
Miguel Portilla
c16e22a5c6 Fix book_offers limit parameter (RIPD-295) 2014-07-17 18:01:20 -07:00
Miguel Portilla
db7a720445 Fix static initializers in RippleSSLContext (RIPD-375) 2014-07-17 18:01:19 -07:00
Miguel Portilla
6c09a02099 Workaround gcc bug with static_initializer default ctor 2014-07-17 18:00:43 -07:00
Howard Hinnant
1b3356cafd Remove calls to SHAMapNode::getID (RIPD-347):
Calls in SHAMap::getCache and SHAMap::canonicalize are
purposefully left in place at this time (to be removed
later).
2014-07-17 15:58:03 -07:00
Scott Schurr
5869902f2c Tidy up Resource::Manager (RIPD-362):
* Style improvements
* More documentation
2014-07-17 14:18:18 -07:00
Howard Hinnant
28898031f0 Eliminate spurious SHAMap::getFetchPack failure (RIPD-379) 2014-07-17 10:38:07 -07:00
Vinnie Falco
1ce0f94638 Define OPENSSL_NO_SSL2 2014-07-17 10:18:37 -07:00
Vinnie Falco
f876ad973f Fix static_initializer: …
* Prevents double construction, invalid access
* Unit test works on MSVC and non MSVC
2014-07-16 16:43:17 -07:00
Tom Ritchford
6014b13234 Use Books in Ledger. 2014-07-15 21:16:42 -07:00
Vinnie Falco
3e9c702c47 Support stream composition of testcase names 2014-07-15 18:23:01 -07:00
Vinnie Falco
3ff919ccf1 Fix version to say 'rippled' 2014-07-15 13:03:40 -07:00
evhub
8dc0844c79 Fix VSProject item sorting (again)
Conflicts:
	src/beast/site_scons/site_tools/VSProject.py
2014-07-15 12:51:57 -07:00
David Schwartz
a2764b68ca Rate-limit SSL client renegotiation (RIPD-360) 2014-07-14 10:42:29 -07:00
Tom Ritchford
bbbae072ea Simplify initialization of Pathfinder using C++11 constructs. 2014-07-14 10:04:27 -07:00
evhub
b4735b5931 Fix VSProject item sorting:
* Use object type in the sort key
* Call _key recursively over containers
* Prevent passing of iterators to xsorted
* Fix VSProject generator with new sorting
2014-07-14 10:04:07 -07:00
Tom Ritchford
418638ad16 Serialization code improvements:
* Add generics to eliminate duplicate code in STHash
* Use generics in Serializer and callers
2014-07-11 12:00:49 -07:00
Scott Schurr
9fb09d3109 Pass and use time_point in DecayingSample ctor (RIPD-359)
Switches a number of places in Resource::Logic to use abstract_clock::now()
which returns a time_point.  Unfortunately Resource::Logic tracks time
locally also, but with ints, not time_point.  So Resource::Logic uses a
delicate mix of abstract_clock::now() and abstract_clock::elapsed() with
this commit.  That inconsistency could be addressed in a second commit.
2014-07-11 12:00:39 -07:00
Tom Ritchford
d7e08f96a5 Various tidying:
* Fix unused variable warnings.
* Clean unused items from TER.
* Improvement to LES use of shared_ptr.
2014-07-10 15:11:33 -07:00
JoelKatz
4d49d272eb Fix account_tx resume (RIPD-314) 2014-07-10 15:11:33 -07:00
JoelKatz
faa6890950 Find 'sabfd' paths (RIPD-335)
This permits USD/GW1 to be bridged to USD/GW2 by BTC/GW3 or
USD/GW1 to be bridged to BTC/GW2 by CNY/GW3.
2014-07-10 15:11:33 -07:00
David Schwartz
9c390f6da4 Impose a local limit on path lengths (RIPD-350) 2014-07-10 15:09:06 -07:00
Scott Schurr
6842277977 Add documentation for ledger entries (RIPD-361) 2014-07-10 15:08:07 -07:00
Mark Travis
ddf68d464d Set version to 0.25.2 2014-07-07 12:41:46 -07:00
JoelKatz
b2f19e8dc6 Find 'sabfd' paths
This permits USD/GW1 to be bridged to USD/GW2 by BTC/GW3 or
USD/GW1 to be bridged to BTC/GW2 by CNY/GW3. See RIPD-335.
2014-07-07 12:41:46 -07:00
David Schwartz
5714b42975 Impose a local limit on path lengths 2014-07-07 11:42:24 -07:00
David Schwartz
c4e9c49c10 Set version to 0.25.2-rc2 2014-07-07 11:33:26 -07:00
Mark Travis
9210efb051 In JSON, output unprintable currency codes as hex 2014-07-07 11:33:26 -07:00
Tom Ritchford
b9f1b05625 Use swapWith to save CPU; edit comment. 2014-07-07 11:33:26 -07:00
sublimator
828c2e3c71 Restore checkpointed ledger under all paths in PathFind. 2014-07-07 11:33:12 -07:00
Nik Bougalis
10150a7352 Patch Boost to suppress warnings on clang 2014-07-06 15:17:58 -07:00
Scott Schurr
baaa45f8c7 SHAMap Documentation 2014-07-06 14:57:59 -07:00
Tom Ritchford
322af30d6a Pathfinding Documentation 2014-07-06 14:57:59 -07:00
Tom Ritchford
206efbf30d Rename RippleAsset to Issue and RippleBook to Book:
* Split STAmount out of SerializedTypes.h
* New concept of "Issue consistency": when either both or neither of its
  currency and account are XRP.
* Stop checking for consistency of Issue in its constructor.
* Clarification of mIsNative logic in STAmount.
* Usual cleanups.
2014-07-06 14:57:59 -07:00
Tom Ritchford
a96dee85d2 Remove recursions from computeLiquidity 2014-07-06 14:57:58 -07:00
Vinnie Falco
d307568cbc Update rocksdb unity file 2014-07-03 17:49:17 -07:00
Vinnie Falco
0ee27b143c Merge commit '7bfb4a9ba5a2822f7d9ef7122b1388aea4be9404' as 'src/rocksdb' 2014-07-03 17:49:00 -07:00
Vinnie Falco
7bfb4a9ba5 Squashed 'src/rocksdb/' content from commit 78b8a7d
git-subtree-dir: src/rocksdb
git-subtree-split: 78b8a7d908
2014-07-03 17:49:00 -07:00
Vinnie Falco
110c73fc8d Remove rocksdb subtree 2014-07-03 17:43:48 -07:00
Vinnie Falco
424d9b8385 Add preliminary Structured Overlay docs 2014-07-03 14:34:38 -07:00
Vinnie Falco
1b48ccc868 Fix base log partition severity 2014-07-03 11:57:14 -07:00
Howard Hinnant
fac82204b6 Remove boost::hash_value() overloads.
This addresses https://ripplelabs.atlassian.net/browse/RIPD-102
2014-07-02 15:33:11 -07:00
David Schwartz
61f114e655 Cleanup confusion of ledger base versus ledger header
The "ledger header" is the chunk of data that hashes to the
ledger's hash. It contains the sequence number, parent hash,
hash of the previous ledger, hash of the root node of the
state tree, and so on.

The term "ledger base" refers to a particular type of query
and response used in the ledger fetch process that includes
the ledger header but may also contain other information
such as the root node of the state tree.
2014-07-02 15:33:10 -07:00
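To make the distinction concrete, a purely illustrative sketch (these type and field names are hypothetical, not rippled's actual declarations):

    #include <array>
    #include <cstdint>
    #include <vector>

    using Hash256 = std::array<std::uint8_t, 32>;   // stand-in for uint256

    // "Ledger header": the chunk of data that hashes to the ledger's hash.
    struct LedgerHeader
    {
        std::uint32_t sequence;
        Hash256       parentHash;       // hash of the previous ledger
        Hash256       txTreeRoot;       // root of the transaction tree
        Hash256       stateTreeRoot;    // root of the state tree
        // close time, fee settings, ...
    };

    // "Ledger base": a fetch-protocol reply containing the header, possibly
    // together with other data such as the root node of the state tree.
    struct LedgerBaseReply
    {
        LedgerHeader              header;
        std::vector<std::uint8_t> stateTreeRootNode;   // optional; may be empty
    };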
MarkusTeufelberger
24410bf1bb Update Sconstruct 2014-07-01 19:12:47 -07:00
Vinnie Falco
aa24969eee Add README.md for ledger process 2014-07-01 12:42:41 -07:00
David Schwartz
a5297d13c4 Add new memo restrictions 2014-07-01 12:13:47 -07:00
David Schwartz
b06bdb83cb Fix a case where we put an extra node in the SHAMapDiff 2014-07-01 12:13:13 -07:00
David Schwartz
d06092212f Tighten up some serialization checks 2014-06-30 16:53:38 -07:00
David Schwartz
914778eae1 In JSON, output unprintable currency codes as hex 2014-06-30 16:40:53 -07:00
Nicholas Dudfield
e14c700c60 New integration tests:
* New tests for autobridging and freeze
* Discrepancy detection tests
* Don't let Mocha suppress load time errors
2014-06-28 18:27:33 -07:00
wltsmrz
0848e348bb Update ripple-lib integration tests 2014-06-28 13:17:31 -07:00
Vinnie Falco
3d5ae42660 Structured Overlay support:
* Add Peer Protocol detector
* Add RIPPLE_SINGLE_IO_SERVICE_THREAD setting
* Preliminary HTTP header parsing logic (disabled)
2014-06-26 18:24:50 -07:00
Vinnie Falco
f207b6b4c9 Improvements to HTTP parsing 2014-06-26 17:41:12 -07:00
David Schwartz
ed2c5078ad Tighten up some serialization checks 2014-06-26 17:16:34 -07:00
sublimator
aec792f5b8 Restore checkpointed ledger under all paths in PathFind. 2014-06-26 17:16:34 -07:00
Nik Bougalis
17d64de3d5 During playback close ledger before accepting 2014-06-26 17:16:34 -07:00
Tom Ritchford
f6bea08535 Update state during pathfinding 2014-06-26 17:16:33 -07:00
Tom Ritchford
feab6c39b3 New types Account, Currency, Directory:
* New tagged uint types.
* Extract to_string functions from header to hide dependencies.
* Include what you use and C++11 for cleanups.
2014-06-26 17:16:30 -07:00
Nik Bougalis
e999c76882 Autobridging:
* Remove legacy OfferCreate transactor
* Misc. cleanups on LedgerEntrySet
* Fix a subtle bug with arithmetic operations on Quality
* Sanity check offers after taking
2014-06-26 12:03:52 -07:00
Miguel Portilla
686cc599a2 Add ASIO strand to StatsDCollector 2014-06-26 12:03:51 -07:00
Vinnie Falco
92983556a0 Fix bug in VSProject when no LIBPATH in env 2014-06-26 12:03:51 -07:00
Scott Schurr
837872c3f3 Move static treeNodeCache from SHAMap to the Application
These changes address two JIRA issues:

 - 291 unittest reported leaked objects
 - 292 SHAMap::treeNodeCache should be a dependency injection

The treeNodeCache was a static member of SHAMap.  It's now a
non-static member of the Application accessed through
getTreeNodeCache().  That addresses JIRA 291.

The SHAMap constructors were adjusted so the treeNodeCache is
passed in to the constructor.  That addresses JIRA 292.  It required
that any code constructing a SHAMap be edited to pass
the new parameter to the constructed SHAMap.

In the meantime, SHAMap was examined for dead/unused code and
interfaces that could be made private.  Dead and unused interfaces
were removed and methods that could be private were made private.
2014-06-26 11:10:51 -07:00
Tom Ritchford
55222dc5d1 New types Account, Currency, Directory:
* New tagged uint types.
* Extract to_string functions from header to hide dependencies.
* Include what you use and C++11 for cleanups.
2014-06-24 11:11:25 -07:00
David Schwartz
adce6ae851 Ledger loading cleanups:
* Fix close time settings for ledger load
* More info in ledger_request and inbound ledger JSON replies
2014-06-23 20:19:53 -07:00
Howard Hinnant
9dc32cb791 SHAMap refactoring:
* Rename SHAMapNode to SHAMapNodeID.
* Cleanups
2014-06-23 20:19:53 -07:00
Howard Hinnant
23dc08c925 Improve Journal logging framework:
* Allow partition log levels to be adjusted
* Cleanups
2014-06-23 20:19:53 -07:00
Vinnie Falco
488a44b88e Fix case-sensitivity for NOLOGO in booltable 2014-06-18 17:04:31 -07:00
evhub
530bdf975e Fix VSProject generation issues with SConstruct:
* Always describe the Visual Studio targets
* Prevent checking nonexistent enviro vars
* Prevent pkg-config protobuf env vars if not clang/gcc
* Don't use any pkg-config enviro vars unless clang/gcc
* Make all VSProject/config directories windows-style
* Prevent beastobjc.mm from showing up in vcxproj
* Remove duplicate \src\protobuf\src
* Consistent quoting
2014-06-18 16:41:34 -07:00
evhub
d7a6627a1f Fixes to VSProject generator:
* Canonical sorting for all platforms
* Fix incorrect path separator
* Filter predefined duplicate environment switches
2014-06-18 16:38:26 -07:00
Vinnie Falco
d6066183b9 Refactor Overlay for Structured Network support:
* Move overlay up one directory
* Add abstract_protocol_handler, message_stream
* Add peer_protocol_detector
* Tidy up some declarations
* Use strand::running_in_this_thread instead of bool
* Update README.md
* Replace protocol message read loop:
  - Process data in arbitrary size chunks
  - message_stream extracts individual messages
  - peer_protocol_detector identifies the handshake
  - abstract_protocol_handler used for dispatching messages
* Remove unused protocol message types:
  - mtACCOUNT
  - mtCONTACT
  - mtERROR
  - mtGET_ACCOUNT
  - mtGET_CONTACTS
  - mtGET_VALIDATIONS
  - mtSEARCH_TRANSACTION
  - mtUNUSED_FIELD

Conflicts:
	src/ripple/module/app/main/Application.cpp
	src/ripple/module/app/misc/NetworkOPs.cpp
	src/ripple/module/app/peers/PeerSet.cpp
2014-06-18 15:17:18 -07:00
Vinnie Falco
3e2c3ba035 Add missing beast includes 2014-06-18 14:07:18 -07:00
Tom Ritchford
e24cba8c35 Better types and more comments in RippleCalc.
* Better automatic conversions to and from tagged uint160 varints.
* Start using tagged variants of uint160 for Currency, Account.
* Comments from 2014/6/11 RippleCalc session.
2014-06-18 12:38:06 -07:00
Tom Ritchford
a23013abc1 Tidy up rpc module:
* Move directory up to ripple/module/rpc
* Use C++11 idioms and best practices
2014-06-18 12:37:53 -07:00
evhub
27a4f44de5 Process switches with regex in VSProject generator:
* Handles "DisableSpecificWarnings" switches
2014-06-16 19:01:28 -07:00
Vinnie Falco
4e07dbbefc Fix state check in PeerImp::recvHello 2014-06-16 18:14:25 -07:00
David Schwartz
dcf4ad2c21 Ledger load and ledger replay fixes:
* Stash the loaded ledger where consensus can find it.
* When loading a ledger for startup, try the backend too
* Apply replay transactions to a mutable snapshot
2014-06-16 17:29:11 -07:00
Nik Bougalis
04dd861fe3 Cleanup:
* Remove is_bit_set and use regular bitwise operations instead.
* Remove the function-like macro "nothing".
2014-06-16 17:29:11 -07:00
Vinnie Falco
c8ee6c6f6d Use std::thread instead of boost::thread 2014-06-16 16:18:10 -07:00
Vinnie Falco
a57e4263d7 Fix MSVC workaround in basic_seconds_clock 2014-06-16 16:18:09 -07:00
Vinnie Falco
3d58f0d941 Add static_initializer 2014-06-16 16:18:09 -07:00
Vinnie Falco
a52c9232c4 Tidy up basics module:
* Move directory up
* Remove unity includes and "include what you use"
2014-06-15 18:51:05 -07:00
Vinnie Falco
7a059c7a73 Remove ScopedPointer, ContainerDeletePolicy 2014-06-15 18:35:12 -07:00
Vinnie Falco
506910147f Tidy up includes:
* Replace boost with std equivalents:
  - bind, ref, cref, function, placeholders
* More "include what you use"
* Remove unnecessary includes
2014-06-15 18:26:50 -07:00
Vinnie Falco
cf3eb24eb0 Move nodestore to its own module 2014-06-15 12:50:44 -07:00
Vinnie Falco
d965b23b2a Add missing beast includes 2014-06-15 12:37:52 -07:00
Vinnie Falco
f660743065 Remove unused SHAMap::getItem 2014-06-13 14:00:17 -07:00
David Schwartz
4559bd9030 Improve the way we declare a close time consensus 2014-06-12 10:20:48 -07:00
David Schwartz
3ac98fb101 SyncUnorderedMap cleanup 2014-06-12 10:14:47 -07:00
David Schwartz
aff52db289 Re-use the serializer 2014-06-12 10:14:47 -07:00
David Schwartz
ea27dfe08d Find 'sbfd' paths. 2014-06-12 09:41:52 -07:00
Tom Ritchford
27620af1bf Further cleanups of RippleCalc.
* Rename many variables.
* Make most of PathState private.
* Extract out common Node::isAccount() code.
* Rename bConsumed to allLiquidityConsumed_.
* Extract out code into PathState::clear().
2014-06-09 12:57:26 -07:00
Tom Ritchford
bf116308d4 Add newer validators to validators-example.txt 2014-06-09 12:41:27 -07:00
Tom Ritchford
3fb27d98ab Clean up LookupLedger and resolve issue 278.
* Use beast::zero to resolve issue 278.
* Rename variables and simplify logic.
* Restrict to 80 columns.
2014-06-09 12:40:38 -07:00
Tom Ritchford
7e45c17730 Rename ledger_get to ledger_request. 2014-06-09 12:37:24 -07:00
Tom Ritchford
626533d4a7 Clean up RPC messages from Main 2014-06-09 12:37:23 -07:00
Tom Ritchford
39719f4c17 Rearrange branch in accordance with new practices. 2014-06-09 12:37:23 -07:00
JoelKatz
526bd88dc4 Add ledger_get RPC command to fetch a ledger from the network
Conflicts:
	Builds/VisualStudio2013/RippleD.vcxproj
	Builds/VisualStudio2013/RippleD.vcxproj.filters
	Builds/VisualStudio2013/RippleD2.vcxproj
	Builds/VisualStudio2013/RippleD2.vcxproj.filters
	src/ripple_rpc/impl/Handlers.cpp
2014-06-09 12:37:22 -07:00
Vinnie Falco
37201ecaa6 Include BeastConfig.h from the root include path 2014-06-09 12:35:16 -07:00
Vinnie Falco
02ed879837 Update VS project file 2014-06-06 07:39:55 -07:00
Vinnie Falco
8b881d3a77 Add CBase58Data::hash_append 2014-06-05 20:00:24 -07:00
Miguel Portilla
f25456ce25 Fix VSProject.py extra dot 2014-06-05 17:47:46 -07:00
mDuo13
dfb1db4ab3 Return correct error for missing pathfinding request. Fixes RIPD-293 2014-06-05 17:47:28 -07:00
Vinnie Falco
4362cb660b Replace boost::shared_ptr with std::shared_ptr 2014-06-05 13:04:23 -07:00
Vinnie Falco
1aa0749ba8 Update rocksdb unity build 2014-06-04 17:20:43 -07:00
Vinnie Falco
c7f1f6a91f RocksDB changes to support unity build:
* Remove extra definition of TotalFileSize
* Remove extra definition of ClipToRange
* Move EncodedFileMetaData out from anonymous namespace
* Move version_set Saver to its own namespace
* Move some symbols into a named namespace
* Move symbols out of anonymous namespace (prevents warning)
* Make BloomHash inline
2014-06-04 17:15:28 -07:00
Vinnie Falco
888a3fec21 Merge commit '8514b88974c71c0fa85bb154507536ee49c33458' as 'src/rocksdb' 2014-06-04 16:10:38 -07:00
Vinnie Falco
8514b88974 Squashed 'src/rocksdb/' content from commit 457bae6
git-subtree-dir: src/rocksdb
git-subtree-split: 457bae6911
2014-06-04 16:10:38 -07:00
Vinnie Falco
724ec46129 Use per-file include directories for external code subtrees:
* leveldb, hyperleveldb, rocksdb, snappy
* SConstruct OSX fix regarding OpenSSL version check
2014-06-04 13:28:43 -07:00
Vinnie Falco
4f1d1d2a8a Reorganize source file hierarchy:
* Rename unity files
* Move some modules to new subdirectories
* Remove obsolete Visual Studio project files
* Remove obsolete coding style and TODO list
2014-06-03 21:43:59 -07:00
Tom Ritchford
39a387b54c Check for openSSL version in SConstruct.
* Uses the platform specific openssl binary to detect the current version.
2014-06-02 10:56:21 -07:00
Roberto Catini
2b0034667d Add Dockerfile:
* The ports are not automatically exposed.
* No test is performed after the build.
2014-06-02 10:27:09 -07:00
Tom Ritchford
98202c56ae base_uint now compares and assigns with beast::Zero.
Conflicts:
	src/ripple/types/api/base_uint.h
	src/ripple_basics/ripple_basics.h
2014-06-02 10:15:45 -07:00
Vinnie Falco
1e06ddf13c Refactor MultiSocket:
* Variadic constructor argument list
* Tidy up sources and filenames
2014-06-02 10:07:05 -07:00
Vinnie Falco
560071bb68 Make all include paths relative to a root directory:
* Better include path support in the VSProject scons tool.
* Various manual fixes to include paths.
2014-06-02 09:16:28 -07:00
Nik Bougalis
2e49ec47a3 Revert "Ste version to 0.25.2-rc1"
This reverts commit 3f10924594.
2014-06-01 16:32:41 -07:00
Mark Travis
e70d618aff Set version to 0.25.2-rc1 2014-06-01 22:51:13 +00:00
Mark Travis
3e4cf426bd Detect paths with illegal bridging offers 2014-06-01 22:45:55 +00:00
Mark Travis
3f10924594 Ste version to 0.25.2-rc1 2014-06-01 22:35:12 +00:00
Mark Travis
f8182c335a Detect paths with illegal bridging offers 2014-06-01 01:26:11 +00:00
Vinnie Falco
1d8d6a6d68 Revert "Improve checking of library versions."
This reverts commit 69fccdf5c6.
2014-05-28 19:04:37 -07:00
Tom Ritchford
69fccdf5c6 Improve checking of library versions.
* Correctly handle openSSL version checking on Ubuntu, Debian, OS/X.
* Also check versions of openSSL, Boost, GCC and MSVC at C++ compile time.
* Get rid of over-engineered runtime CheckLibraryVersions.
2014-05-28 17:07:35 -07:00
Nik Bougalis
27e8d44a56 Autobridging:
Complete implementation of bridged offer crossings. While processing an offer
A:B we consider both the A:B order book and the combined A:XRP and XRP:B books
and pick the better offers. The net result is better liquidity and potentially
better rates.

* Rearchitect core::Taker to perform direct and bridged crossings.
* Compute bridged qualities.
* Implement a new Bridged OfferCreate transactor.
* Factor out common code from the Bridged and Direct OfferCreate transactors.
* Perform flow calculations without losing accuracy.
* Rename all transactors.
* Cleanups.
2014-05-28 16:35:03 -07:00
Vinnie Falco
72bc4ebf37 Fix VSProject generator item sort 2014-05-28 16:35:02 -07:00
Tom Ritchford
353f32e6af Fix a few warnings 2014-05-28 09:28:50 -07:00
Howard Hinnant
9bbaa9a2ae Report whether a ledger is validated:
This addresses https://ripplelabs.atlassian.net/browse/RIPD-275
Use Json::StaticString to improve performance.
Use inject_error instead of manually adding errors.
2014-05-28 09:27:22 -07:00
Nicholas Dudfield
096fcefae9 Fix clang/travis, prefer static libs when BOOST_ROOT is set. 2014-05-28 08:32:11 -07:00
Vinnie Falco
0f1e292e34 New http::client_session for making requests:
- Add 10,000 popular URL data set for unit tests
2014-05-28 08:31:45 -07:00
Vinnie Falco
195957a7cc Add ci_char_traits 2014-05-28 07:30:02 -07:00
Vinnie Falco
eb122f45f9 Tidy up includes 2014-05-28 07:29:59 -07:00
Vinnie Falco
c51ac9d6da Fix rocksdb relative include 2014-05-27 15:23:50 -07:00
Vinnie Falco
97c9d02c43 Improved SConstruct:
* Automatic source list built via directory iteration
* Build multiple toolchains and flavors simultaneously:
    - Toolchains: gcc, clang, msvc
    - Flavors: debug, release
* Documentation on aliases (top of the SConstruct file)
2014-05-27 15:23:50 -07:00
Vinnie Falco
0c06939f38 Beast SCons tools:
* Add VSProject SCons Builder
* Add Protoc SCons Builder
* Add Beast build utilities python module
2014-05-27 15:23:49 -07:00
Vinnie Falco
5db677d74d Tidy up some source filenames:
* Add .unity suffix to mark unity sources
* Fix some relative include paths
2014-05-27 15:23:48 -07:00
Nik Bougalis
251ce4efbc base_uint cleanups:
* Remove unused and broken reverse iterators
* Use charUnHex to reduce code duplication
2014-05-27 09:26:18 -07:00
Nik Bougalis
386eabb61f Convert a unit test to manual 2014-05-26 15:29:48 -07:00
Nik Bougalis
b35eabe161 Optimize SQL query composition 2014-05-26 15:29:48 -07:00
Tom Swirly
9ad9845d69 Initialize unassigned variable bConsumed.
* Assigned in the code, but only to false.
* Someday will get assigned to true.
2014-05-23 12:16:24 -07:00
Tom Swirly
74eb25f9b5 RippleCalc: Stop using reference that might have been invalidated. 2014-05-23 12:16:22 -07:00
Nik Bougalis
044f390fe0 Properly standardize RFC1751 inputs 2014-05-23 11:06:43 -07:00
Miguel Portilla
06d6e4901e Complete NodeStore documentation
* Complete README.md
* Add missing doxygen
* Fix getHitRate bug
2014-05-19 10:53:40 -07:00
Tom Swirly
3ca9646329 Refactor the structure of RippleCalc
* Split code into multiple files
* Rename and move things
2014-05-19 10:53:40 -07:00
Tom Ritchford
4b3e629dfd Allow ledgers to be loaded from the command line.
Conflicts:

	src/ripple_data/protocol/BuildInfo.cpp
2014-05-19 10:53:39 -07:00
Tom Swirly
568fae9878 Add documentation comments 2014-05-19 10:53:39 -07:00
Tom Swirly
a844026f6b Clean up pre-C++11 idioms:
* Use auto in some loops
* Fix shadowing iterator declaration
* Rename NUMBER to ARRAYSIZE.
* Use placeholders instead of macros
* Replace macro BIND_TYPE with std::bind
* Replace BOOST_FOREACH with range-for
2014-05-19 10:53:38 -07:00
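Typical before/after for two of the bullets above (the surrounding names are illustrative):

    #include <functional>
    #include <vector>

    struct Item { int value; };
    void process (Item const&) {}

    void example (std::vector<Item> const& items)
    {
        // BOOST_FOREACH (Item const& item, items)   // old idiom
        //     process (item);
        for (auto const& item : items)               // range-based for
            process (item);

        // The old BIND_TYPE macro gives way to std::bind with std::placeholders:
        auto handler = std::bind (&process, std::placeholders::_1);
        if (!items.empty ())
            handler (items.front ());
    }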
Vinnie Falco
b677cacb8c Set version to 0.25.1 2014-05-15 11:13:54 -07:00
Nik Bougalis
1d66169ef1 Fix rippled-example.cfg to list 'peers_max' 2014-05-15 11:11:16 -07:00
Vinnie Falco
74653e57e6 Use std::string in lexicalCast 2014-05-15 11:11:15 -07:00
Vinnie Falco
29d1d5f062 Set version to 0.25.0 2014-05-14 09:01:44 -07:00
Vinnie Falco
d8e8693d9a Update rippled-example.cfg 2014-05-13 20:43:39 -07:00
Miguel Portilla
be737e0047 Enable Snappy compression for NodeStore backends.
* Configured via config.cfg
2014-05-13 12:21:01 -07:00
Vinnie Falco
83ef06748d Add snappy compression integration 2014-05-13 12:16:39 -07:00
Vinnie Falco
5a93f9991a Merge commit '8b0602a5824199d495f6720ef2447f695179257a' as 'src/snappy/snappy' 2014-05-13 12:16:31 -07:00
Vinnie Falco
8b0602a582 Squashed 'src/snappy/snappy/' content from commit 1ff9be9
git-subtree-dir: src/snappy/snappy
git-subtree-split: 1ff9be9b8fafc8528ca9e055646f5932aa5db9c4
2014-05-13 12:16:31 -07:00
Vinnie Falco
eb24ca6def Update .gitattributes 2014-05-13 12:15:59 -07:00
Nik Bougalis
22af79606a Fix lexicographical compare during tfRequireAuth processing 2014-05-13 12:10:39 -07:00
Nik Bougalis
bee12fb89d Autobridging future support:
* Refactor and cleanup transactors
* Introduce new direct and bridged transactors
* Rename existing transactor to indicate legacy status
* New direct transactor defaults to being turned off (preserve legacy behavior)
2014-05-13 12:09:47 -07:00
Nik Bougalis
1a9fbab165 CBase58Data Refactoring:
* Remove unnecessary functions.
* Load from a base_uint const& - don't use void pointers.
* Free comparison functions.
* Explicitly specify encoding alphabet.
* Miscellaneous cleanups.
2014-05-13 08:48:53 -07:00
David Schwartz
9a4b9aa69f Remove redundant checkAccept call 2014-05-13 08:48:52 -07:00
David Schwartz
227043e51f Make I/O latency available through server_info 2014-05-13 08:48:51 -07:00
JoelKatz
db3a387224 Fetch pack cleanups:
* Limit backed up fetch pack requests
* Cleanups
* Dispatch fetch packs sooner
2014-05-13 08:48:50 -07:00
David Schwartz
0075f36bbc At level 64, only leaves are allowed 2014-05-13 08:48:49 -07:00
David Schwartz
52f45669d1 Dispatch incoming TX set data 2014-05-13 08:48:48 -07:00
JoelKatz
294a13d653 Put newly-created nodes in the treeNodeCache 2014-05-13 08:48:48 -07:00
JoelKatz
eed66894db SHAMap::canonicalize must return a node with the correct ID 2014-05-13 08:48:47 -07:00
Vinnie Falco
1434695c47 Fix vs2013 availability of make_reverse_iterator 2014-05-13 08:38:02 -07:00
Tom Swirly
390ea65e15 Refactor RippleCalc.cpp:
* Add comments
* Restrict code to 80 columns
* Remove boost::format
* Remove BOOST_FOREACH
* Make members private
* Add ripple_unordered_set to UnorderedContainers.h
* Replace boost::unordered_set with ripple::unordered_set
2014-05-09 12:03:57 -07:00
David Schwartz
6f2bcc6fb0 Keep SHAMapNodes that fault in handling client requests. 2014-05-09 11:46:24 -07:00
David Schwartz
66a762d504 Rework handling of modified ledger nodes:
* Track dirty nodes in a set
* Make flushDirty a member function
* Don't write deleted nodes
* Make sure new nodes are shareable
* Put new nodes in the treeNodeCache
2014-05-09 11:46:15 -07:00
David Schwartz
aaabec0b55 Use last closed ledger for old pathfinding 2014-05-09 11:46:05 -07:00
Nik Bougalis
6339800946 Refactor base_uint<> and STAmount::getText:
base_uint<>:
* Remove unnecessary typedefs and derived classes
* Factor out common code and eliminate redundancies
* Add tag support to improve type safety
* base_uint::ToString() -> to_string(base_uint const&)
* base_uint::GetHex() -> to_string(base_uint const&)

STAmount::getText:
* Don't overallocate
* Eliminate unnecessary copying
2014-05-09 11:45:44 -07:00
Nik Bougalis
a3e4a34021 Implement C++14 std::make_reverse_iterator 2014-05-09 11:45:10 -07:00
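The C++14 helper being back-filled here is tiny; in essence it is just a deduction wrapper:

    #include <iterator>

    // Equivalent in effect to C++14 std::make_reverse_iterator: deduce the
    // iterator type rather than spelling out std::reverse_iterator<It>.
    template <class Iterator>
    std::reverse_iterator<Iterator>
    make_reverse_iterator_sketch (Iterator i)
    {
        return std::reverse_iterator<Iterator> (i);
    }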
David Schwartz
6fcf3fedb6 Use Json::StaticString to improve performance:
* Move JSON-RPC fields to jsonrpc_fields.h
2014-05-07 13:04:35 -07:00
David Schwartz
34cbb26e47 Improved performance monitoring of NodeStore backends:
* Add multi-sample interface to LoadMonitor
* Instrument fetch operations and completions for reporting in server_info
2014-05-05 14:17:23 -07:00
David Schwartz
07d0379edd Reduce number of async fetches 2014-05-05 14:17:23 -07:00
Tom Swirly
96e8cddfc2 New strConcat concatenates strings in O(n) time.
* Accepts std::string and char const*.
* Also accepts numbers, bools and chars.
* New ripple::toString function augments std::to_string to handle
  bools and chars.
2014-05-05 14:17:23 -07:00
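A minimal sketch of the idea (not the actual strConcat/toString implementation): convert each argument to a string fragment and append, so total work stays linear in the length of the output.

    #include <string>

    // Overloads choose an appropriate conversion per argument type.
    inline void appendOne (std::string& out, std::string const& s) { out += s; }
    inline void appendOne (std::string& out, char const* s)        { out += s; }
    inline void appendOne (std::string& out, char c)               { out += c; }
    inline void appendOne (std::string& out, bool b)               { out += b ? "true" : "false"; }
    template <class T>
    inline void appendOne (std::string& out, T const& t)           { out += std::to_string (t); }

    inline void appendAll (std::string&) {}
    template <class T, class... Rest>
    void appendAll (std::string& out, T const& first, Rest const&... rest)
    {
        appendOne (out, first);
        appendAll (out, rest...);
    }

    template <class... Args>
    std::string concatSketch (Args const&... args)
    {
        std::string out;
        appendAll (out, args...);
        return out;
    }

    // e.g. concatSketch ("fee = ", 10, ", loaded: ", true) yields "fee = 10, loaded: true"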
Tom Swirly
3c5e4e440b Clean up of TransactionSign and friends. 2014-05-05 14:17:22 -07:00
Tom Swirly
5ffcbb9b65 Add documentation to SHAMap 2014-05-05 14:17:21 -07:00
Nik Bougalis
ec0fe312af Simplify and improve performance of STAmount::createHumanCurrency 2014-05-05 14:17:21 -07:00
Miguel Portilla
3025d8611b Rename Feature to Amendment:
* Added a README.md describing an Amendment
2014-05-05 13:50:00 -07:00
Miguel Portilla
98612a7cd6 Cleanup nodestore backend classes:
* Add README.md
* Add missing std::move calls
* Refactor visitAll in backend
2014-05-05 13:50:00 -07:00
MarkusTeufelberger
6e428054ef Update RPM spec file to version 0.24.0 2014-05-05 13:49:59 -07:00
3072 changed files with 400433 additions and 192948 deletions

2
.gitattributes vendored

@@ -1,5 +1,5 @@
# Set default behaviour, in case users don't have core.autocrlf set.
* text=auto
#* text=auto
# These annoying files
rippled.1 binary

1
.gitignore vendored

@@ -19,6 +19,7 @@
*.o
build
tags
TAGS
bin/rippled
Debug/*.*
Release/*.*


@@ -6,12 +6,10 @@ before_install:
- sudo apt-get update -qq
- sudo apt-get install -qq python-software-properties
- sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
- sudo add-apt-repository -y ppa:boost-latest/ppa
- sudo add-apt-repository -y ppa:afrank/boost
- sudo apt-get update -qq
- sudo apt-get install -qq g++-4.8
- sudo apt-get install -qq libboost1.55-all-dev
# We want debug symbols for boost as we install gdb later
- sudo apt-get install -qq libboost1.55-dbg
- sudo apt-get install -qq libboost1.57-all-dev
- sudo apt-get install -qq mlocate
- sudo updatedb
- sudo locate libboost | grep /lib | grep -e ".a$"
@@ -27,23 +25,35 @@ before_install:
# What versions are we ACTUALLY running?
- g++ -v
- clang -v
# Avoid `spurious errors` caused by ~/.npm permission issues
# Does it already exist? Who owns? What permissions?
- ls -lah ~/.npm || mkdir ~/.npm
# Make sure we own it
- sudo chown -R $USER ~/.npm
script:
# Set so any failing command will abort the build
- set -e
# If only we could do -j12 ;)
- scons
# $CC will be either `clang` or `gcc` (If only we could do -j12 ;)
- scons $CC.debug
# We can be sure we're using the build/$CC.debug variant (-f so never err)
- rm -f build/rippled
- export RIPPLED_PATH="$PWD/build/$CC.debug/rippled"
# See what we've actually built
- ldd ./build/rippled
- ldd $RIPPLED_PATH
# Run unittests (under gdb)
- | # create gdb script
echo "set env MALLOC_CHECK_=3" > script.gdb
echo "run" >> script.gdb
echo "backtrace full" >> script.gdb
# gdb --help
- cat script.gdb | gdb --ex 'set print thread-events off' --return-child-result --args ./build/rippled --unittest
# Run integration tests
- cat script.gdb | gdb --ex 'set print thread-events off' --return-child-result --args $RIPPLED_PATH --unittest
- npm install
# Use build/(gcc|clang).debug/rippled
- |
echo "exports.default_server_config = {\"rippled_path\" : \"$RIPPLED_PATH\"};" > test/config.js
# Run integration tests
- npm test
notifications:
email:

30
Builds/Docker/Dockerfile Normal file

@@ -0,0 +1,30 @@
# rippled
# use the ubuntu base image
FROM ubuntu
MAINTAINER Roberto Catini roberto.catini@gmail.com
# make sure the package repository is up to date
RUN apt-get update
RUN apt-get -y upgrade
# install the dependencies
RUN apt-get -y install git scons pkg-config protobuf-compiler libprotobuf-dev libssl-dev libboost1.55-all-dev
# download source code from official repository
RUN git clone https://github.com/ripple/rippled.git src; cd src/; git checkout master
# compile
RUN cd src/; scons build/rippled
# move to root directory and strip
RUN cp src/build/rippled rippled; strip rippled
# copy default config
RUN cp src/doc/rippled-example.cfg rippled.cfg
# clean source
RUN rm -r src
# launch rippled when launching the container
ENTRYPOINT ./rippled


@@ -62,42 +62,42 @@ UI_HEADERS_DIR += ../../src/ripple_basics
# New style
#
SOURCES += \
../../src/ripple/beast/ripple_beast.cpp \
../../src/ripple/beast/ripple_beast.unity.cpp \
../../src/ripple/beast/ripple_beastc.c \
../../src/ripple/common/ripple_common.cpp \
../../src/ripple/http/ripple_http.cpp \
../../src/ripple/json/ripple_json.cpp \
../../src/ripple/peerfinder/ripple_peerfinder.cpp \
../../src/ripple/radmap/ripple_radmap.cpp \
../../src/ripple/resource/ripple_resource.cpp \
../../src/ripple/sitefiles/ripple_sitefiles.cpp \
../../src/ripple/sslutil/ripple_sslutil.cpp \
../../src/ripple/testoverlay/ripple_testoverlay.cpp \
../../src/ripple/types/ripple_types.cpp \
../../src/ripple/validators/ripple_validators.cpp
../../src/ripple/common/ripple_common.unity.cpp \
../../src/ripple/http/ripple_http.unity.cpp \
../../src/ripple/json/ripple_json.unity.cpp \
../../src/ripple/peerfinder/ripple_peerfinder.unity.cpp \
../../src/ripple/radmap/ripple_radmap.unity.cpp \
../../src/ripple/resource/ripple_resource.unity.cpp \
../../src/ripple/sitefiles/ripple_sitefiles.unity.cpp \
../../src/ripple/sslutil/ripple_sslutil.unity.cpp \
../../src/ripple/testoverlay/ripple_testoverlay.unity.cpp \
../../src/ripple/types/ripple_types.unity.cpp \
../../src/ripple/validators/ripple_validators.unity.cpp
# ---------
# Old style
#
SOURCES += \
../../src/ripple_app/ripple_app.cpp \
../../src/ripple_app/ripple_app_pt1.cpp \
../../src/ripple_app/ripple_app_pt2.cpp \
../../src/ripple_app/ripple_app_pt3.cpp \
../../src/ripple_app/ripple_app_pt4.cpp \
../../src/ripple_app/ripple_app_pt5.cpp \
../../src/ripple_app/ripple_app_pt6.cpp \
../../src/ripple_app/ripple_app_pt7.cpp \
../../src/ripple_app/ripple_app_pt8.cpp \
../../src/ripple_basics/ripple_basics.cpp \
../../src/ripple_core/ripple_core.cpp \
../../src/ripple_data/ripple_data.cpp \
../../src/ripple_hyperleveldb/ripple_hyperleveldb.cpp \
../../src/ripple_leveldb/ripple_leveldb.cpp \
../../src/ripple_net/ripple_net.cpp \
../../src/ripple_overlay/ripple_overlay.cpp \
../../src/ripple_rpc/ripple_rpc.cpp \
../../src/ripple_websocket/ripple_websocket.cpp
../../src/ripple_app/ripple_app.unity.cpp \
../../src/ripple_app/ripple_app_pt1.unity.cpp \
../../src/ripple_app/ripple_app_pt2.unity.cpp \
../../src/ripple_app/ripple_app_pt3.unity.cpp \
../../src/ripple_app/ripple_app_pt4.unity.cpp \
../../src/ripple_app/ripple_app_pt5.unity.cpp \
../../src/ripple_app/ripple_app_pt6.unity.cpp \
../../src/ripple_app/ripple_app_pt7.unity.cpp \
../../src/ripple_app/ripple_app_pt8.unity.cpp \
../../src/ripple_basics/ripple_basics.unity.cpp \
../../src/ripple_core/ripple_core.unity.cpp \
../../src/ripple_data/ripple_data.unity.cpp \
../../src/ripple_hyperleveldb/ripple_hyperleveldb.unity.cpp \
../../src/ripple_leveldb/ripple_leveldb.unity.cpp \
../../src/ripple_net/ripple_net.unity.cpp \
../../src/ripple_overlay/ripple_overlay.unity.cpp \
../../src/ripple_rpc/ripple_rpc.unity.cpp \
../../src/ripple_websocket/ripple_websocket.unity.cpp
LIBS += \
-lboost_date_time-mt\


@@ -0,0 +1,4 @@
RippleD.vcxproj -text
RippleD.vcxproj.filters -text


@@ -1,33 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ImportGroup Label="PropertySheets" />
<PropertyGroup Label="UserMacros">
<RepoDir>..\..</RepoDir>
</PropertyGroup>
<PropertyGroup>
<OutDir>$(RepoDir)\build\VisualStudio2013\$(Configuration).$(Platform)\</OutDir>
<IntDir>$(RepoDir)\build\obj\VisualStudio2013\$(Configuration).$(Platform)\</IntDir>
<TargetName>rippled</TargetName>
</PropertyGroup>
<ItemDefinitionGroup>
<ClCompile>
<PreprocessorDefinitions>_VARIADIC_MAX=10;_WIN32_WINNT=0x0600;_SCL_SECURE_NO_WARNINGS;_CRT_SECURE_NO_WARNINGS;WIN32;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<MultiProcessorCompilation>true</MultiProcessorCompilation>
<WarningLevel>Level3</WarningLevel>
<AdditionalIncludeDirectories>$(RepoDir)\src\protobuf\src;$(RepoDir)\src\protobuf\vsprojects;$(RepoDir)\src\leveldb;$(RepoDir)\src\leveldb\include;$(RepoDir)\build\proto;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<AdditionalOptions>/bigobj %(AdditionalOptions)</AdditionalOptions>
<ExceptionHandling>Async</ExceptionHandling>
<DisableSpecificWarnings>4018;4244</DisableSpecificWarnings>
</ClCompile>
<Link>
<AdditionalDependencies>Shlwapi.lib;%(AdditionalDependencies)</AdditionalDependencies>
<SpecifySectionAttributes>
</SpecifySectionAttributes>
</Link>
</ItemDefinitionGroup>
<ItemGroup>
<BuildMacro Include="RepoDir">
<Value>$(RepoDir)</Value>
</BuildMacro>
</ItemGroup>
</Project>

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,8 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ImportGroup Label="PropertySheets" />
<PropertyGroup Label="UserMacros" />
<PropertyGroup />
<ItemDefinitionGroup />
<ItemGroup />
</Project>


@@ -1,11 +1,9 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Express 2013 for Windows Desktop
VisualStudioVersion = 12.0.30110.0
VisualStudioVersion = 12.0.31101.0
MinimumVisualStudioVersion = 10.0.40219.1
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "beast", "..\..\src\beast\Builds\VisualStudio2013\beast.vcxproj", "{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}"
EndProject
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "RippleD", "RippleD.vcxproj", "{B7F39ECD-473C-484D-BC34-31F8362506A5}"
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "RippleD", "RippleD.vcxproj", "{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
@@ -15,22 +13,14 @@ Global
Release|x64 = Release|x64
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Debug|Win32.ActiveCfg = Debug|Win32
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Debug|Win32.Build.0 = Debug|Win32
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Debug|x64.ActiveCfg = Debug|x64
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Debug|x64.Build.0 = Debug|x64
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Release|Win32.ActiveCfg = Release|Win32
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Release|Win32.Build.0 = Release|Win32
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Release|x64.ActiveCfg = Release|x64
{73C5A0F0-7629-4DE7-9194-BE7AC6C19535}.Release|x64.Build.0 = Release|x64
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Debug|Win32.ActiveCfg = Debug|Win32
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Debug|Win32.Build.0 = Debug|Win32
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Debug|x64.ActiveCfg = Debug|x64
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Debug|x64.Build.0 = Debug|x64
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Release|Win32.ActiveCfg = Release|Win32
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Release|Win32.Build.0 = Release|Win32
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Release|x64.ActiveCfg = Release|x64
{B7F39ECD-473C-484D-BC34-31F8362506A5}.Release|x64.Build.0 = Release|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.Debug|Win32.ActiveCfg = debug|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.Debug|Win32.Build.0 = debug|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.Debug|x64.ActiveCfg = debug|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.Debug|x64.Build.0 = debug|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.Release|Win32.ActiveCfg = release|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.Release|Win32.Build.0 = release|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.Release|x64.ActiveCfg = release|x64
{26B7D9AC-1A80-8EF8-6703-D061F1BECB75}.Release|x64.Build.0 = release|x64
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE


@@ -1,12 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ImportGroup Label="PropertySheets" />
<PropertyGroup Label="UserMacros" />
<PropertyGroup />
<ItemDefinitionGroup>
<ClCompile>
<DisableSpecificWarnings>4018;4244;4267</DisableSpecificWarnings>
</ClCompile>
</ItemDefinitionGroup>
<ItemGroup />
</Project>


@@ -1,5 +1,5 @@
Name: rippled
Version: 0.23.0
Version: 0.27.3
Release: 1%{?dist}
Summary: Ripple peer-to-peer network daemon
@@ -50,4 +50,3 @@ rm -rf %{buildroot}
/usr/bin/rippled
/usr/share/rippled/LICENSE
/etc/rippled/rippled-example.cfg


@@ -0,0 +1,13 @@
--- /usr/include/boost/config/compiler/clang.hpp 2013-07-20 13:17:10.000000000 -0400
+++ /usr/include/boost/config/compiler/clang.rippled.hpp 2014-03-11 16:40:51.000000000 -0400
@@ -39,6 +39,10 @@
// Clang supports "long long" in all compilation modes.
#define BOOST_HAS_LONG_LONG
+#if defined(__SIZEOF_INT128__)
+# define BOOST_HAS_INT128
+#endif
+
//
// Dynamic shared object (DSO) and dynamic-link library (DLL) support
//


@@ -0,0 +1,10 @@
--- /usr/include/boost/bimap/detail/debug/static_error.hpp 2008-03-22 17:45:55.000000000 -0400
+++ /usr/include/boost/bimap/detail/debug/static_error.rippled.hpp 2014-03-12 19:40:05.000000000 -0400
@@ -25,7 +25,6 @@
// a static error.
/*===========================================================================*/
#define BOOST_BIMAP_STATIC_ERROR(MESSAGE,VARIABLES) \
- struct BOOST_PP_CAT(BIMAP_STATIC_ERROR__,MESSAGE) {}; \
BOOST_MPL_ASSERT_MSG(false, \
BOOST_PP_CAT(BIMAP_STATIC_ERROR__,MESSAGE), \
VARIABLES)


@@ -1,3 +1,77 @@
![Ripple](/images/ripple.png)
# The World's Fastest and Most Secure Payment System
**What is Ripple?**
Ripple is the open-source, distributed payment protocol that enables instant
payments with low fees, no chargebacks, and currency flexibility (for example
dollars, yen, euros, bitcoins, or even loyalty points). Businesses of any size
can easily build payment solutions such as banking or remittance apps, and
accelerate the movement of money. Ripple enables the world to move value the
way it moves information on the Internet.
![Ripple Network](images/network.png)
**What is a Gateway?**
Ripple works with gateways: independent businesses which hold customer
deposits in various currencies such as U.S. dollars (USD) or Euros (EUR),
in exchange for providing cryptographically-signed issuances that users can
send and trade with one another in seconds on the Ripple network. Within the
protocol, exchanges between multiple currencies can occur atomically without
any central authority to monitor them. Later, customers can withdraw their
Ripple balances from the gateways that created those issuances.
**How do Ripple payments work?**
A sender specifies the amount and currency the recipient should receive, and
Ripple automatically converts the sender's available currencies using the
distributed order books integrated into the Ripple protocol. Independent third
parties acting as market makers provide liquidity in these order books.
Ripple uses a pathfinding algorithm that considers currency pairs when
converting from the source to the destination currency. This algorithm searches
for a series of currency swaps that gives the user the lowest cost. Since
anyone can participate as a market maker, market forces drive fees to the
lowest practical level.
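
To make the cost search concrete, here is a minimal, illustrative Python sketch of consuming liquidity across candidate paths in best-quality-first order; the path structure and numbers are hypothetical and greatly simplified compared to the real pathfinding engine.

    # Toy model: each candidate path has a quality (destination units delivered
    # per source unit spent) and a limited amount of liquidity.
    from heapq import heappush, heappop

    def source_cost(amount_needed, paths):
        """paths: iterable of (quality, liquidity); higher quality is cheaper."""
        heap = []
        for quality, liquidity in paths:
            heappush(heap, (-quality, liquidity))      # max-heap via negation
        spent = 0.0
        while amount_needed > 0 and heap:
            neg_q, liquidity = heappop(heap)
            take = min(amount_needed, liquidity)       # drain the best path first
            amount_needed -= take
            spent += take / -neg_q                     # source units used on this path
        return spent if amount_needed <= 0 else None   # None: not enough liquidity

    # Deliver 150 destination units via two hypothetical paths.
    print(source_cost(150, [(1.25, 100), (1.10, 200)]))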
**What can you do with Ripple?**
The protocol is entirely open-source and the network's shared ledger is public
information, so no central authority prevents anyone from participating. Anyone
can become a market maker, create a wallet or a gateway, or monitor network
behavior. Competition drives down spreads and fees, making the network useful
to everyone.
### Key Protocol Features
1. XRP is Ripple's native [cryptocurrency](http://en.wikipedia.org/wiki/Cryptocurrency) with a fixed supply that
decreases slowly over time, with no mining. XRP acts as a bridge currency, and
pays for transaction fees that protect the network against spam.
![XRP as a bridge currency](/images/vehicle_currency.png)
2. Pathfinding discovers cheap and efficient payment paths through multiple
[order books](https://www.ripplecharts.com) allowing anyone to [trade](https://www.rippletrade.com) anything. When two accounts aren't linked by relationships of trust, the Ripple pathfinding engine considers intermediate links and order books to produce a set of possible paths the transaction can take. When the payment is processed, the liquidity along these paths is iteratively consumed in best-first order.
![Pathfinding from Euro to Japanese Yen](/images/pathfinding.png)
3. [Consensus](https://www.youtube.com/watch?v=pj1QVb1vlC0) confirms
transactions in an atomic fashion, without mining, ensuring efficient use of
resources.
[transact]: https://ripple.com/files/ripple-FIs.pdf
[build]: https://ripple.com/build/
[transact.png]: /images/transact.png
[build.png]: /images/build.png
[contribute.png]: /images/contribute.png
### Join The Ripple Community
|![Transact][transact.png]|![Build][build.png]|![Contribute][contribute.png]|
|:-----------------------:|:-----------------:|:---------------------------:|
|[Transact on the fastest payment infrastructure][transact]|[Build Imaginative Apps][build]|Contribute to the Ripple Protocol Implementation|
# rippled - Ripple P2P server
## [![Build Status](https://travis-ci.org/ripple/rippled.png?branch=develop)](https://travis-ci.org/ripple/rippled)
@@ -37,5 +111,9 @@ Ripple is open source and permissively licensed under the ISC license. See the
LICENSE file for more details.
### For more information:
* https://ripple.com
* https://ripple.com/wiki
* Ripple Wiki - https://ripple.com/wiki/
* Ripple Primer - https://ripple.com/ripple_primer.pdf
* Ripple Primer (Market Making) - https://ripple.com/ripple-mm.pdf
* Ripple Gateway Primer - https://ripple.com/ripple-gateways.pdf
* Consensus - https://wiki.ripple.com/Consensus

1164
SConstruct

File diff suppressed because it is too large

77
appveyor.yml Normal file

@@ -0,0 +1,77 @@
# Set environment variables.
environment:
PYTHON: C:/Python27-x64
# We bundle up protoc.exe and only the parts of boost and openssl we need so
# that it's a small download. We also use appveyor's free cache, avoiding fees
# downloading from S3 each time.
# TODO: script to create this package.
RIPPLED_DEPS_URL: https://s3-ap-northeast-1.amazonaws.com/history-replay/rippled_deps.zip
# Other dependencies we just download each time.
PIP_URL: https://bootstrap.pypa.io/get-pip.py
PYWIN32_URL: https://downloads.sourceforge.net/project/pywin32/pywin32/Build%20219/pywin32-219.win-amd64-py2.7.exe
# Scons honours these environment variables, setting the include/lib paths.
BOOST_ROOT: C:/rippled_deps/boost
OPENSSL_ROOT: C:/rippled_deps/openssl
# At the end of each successful build we cache this directory. It must be less
# than 100MB total compressed.
cache:
- 'C:\\rippled_deps'
# This means we'll download a zip of the branch we want, rather than the full
# history.
shallow_clone: true
install:
# We want easy_install, python and protoc.exe on PATH.
- SET PATH=%PYTHON%;%PYTHON%/Scripts;C:/rippled_deps;%PATH%
# `ps` prefix means the command is executed by powershell.
- ps: Start-FileDownload $env:PIP_URL
- ps: Start-FileDownload $env:PYWIN32_URL
# Installing pip will install setuptools/easy_install.
- python get-pip.py
# Pip has some problems installing scons on windows so we use easy install.
- easy_install scons
# Scons has problems with parallel builds on windows without pywin32.
- easy_install pywin32-219.win-amd64-py2.7.exe
# (easy_install can do headless installs of .exe wizards)
# Download dependencies if appveyor didn't restore them from the cache.
# Use 7zip to unzip.
- ps: |
if (-not(Test-Path 'C:/rippled_deps')) {
Start-FileDownload "$env:RIPPLED_DEPS_URL"
7z x rippled_deps.zip -oC:\ -y > $null
}
# TODO: This is giving me grief
# artifacts:
# # Save rippled.exe in the cloud after each build.
# - path: "build\\rippled.exe"
build_script:
# We set the environment variables needed to put compilers on the PATH.
- '"%VS120COMNTOOLS%../../VC/vcvarsall.bat" x86_amd64'
# Show which version of the compiler we are using.
- cl
- scons msvc.debug -j%NUMBER_OF_PROCESSORS%
after_build:
# Put our executable in a place where npm test can find it.
- ps: cp build/msvc.debug/rippled.exe build
- ps: ls build
test_script:
# Run the unit tests
- build\\rippled --unittest
# Run the integration tests
- npm install
- npm test

1
bin/LT Symbolic link

@@ -0,0 +1 @@
LedgerTool.py

24
bin/LedgerTool.py Executable file

@@ -0,0 +1,24 @@
#!/usr/bin/env python
from __future__ import absolute_import, division, print_function, unicode_literals
import sys
import traceback
from ripple.ledger import Server
from ripple.ledger.commands import Cache, Info, Print
from ripple.ledger.Args import ARGS
from ripple.util import Log
from ripple.util.CommandList import CommandList
_COMMANDS = CommandList(Cache, Info, Print)
if __name__ == '__main__':
try:
server = Server.Server()
args = list(ARGS.command)
_COMMANDS.run_safe(args.pop(0), server, *args)
except Exception as e:
if ARGS.verbose:
print(traceback.format_exc(), file=sys.stderr)
Log.error(e)

8
bin/README.md Normal file

@@ -0,0 +1,8 @@
Unit Tests
==========
To run the Python unit tests, execute:
python -m unittest discover
from this directory.

251
bin/decorator.py Normal file

@@ -0,0 +1,251 @@
########################## LICENCE ###############################
# Copyright (c) 2005-2012, Michele Simionato
# All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
# Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# Redistributions in bytecode form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in
# the documentation and/or other materials provided with the
# distribution.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
# OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
# TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
# DAMAGE.
"""
Decorator module, see http://pypi.python.org/pypi/decorator
for the documentation.
"""
__version__ = '3.4.0'
__all__ = ["decorator", "FunctionMaker", "contextmanager"]
import sys, re, inspect
if sys.version >= '3':
from inspect import getfullargspec
def get_init(cls):
return cls.__init__
else:
class getfullargspec(object):
"A quick and dirty replacement for getfullargspec for Python 2.X"
def __init__(self, f):
self.args, self.varargs, self.varkw, self.defaults = \
inspect.getargspec(f)
self.kwonlyargs = []
self.kwonlydefaults = None
def __iter__(self):
yield self.args
yield self.varargs
yield self.varkw
yield self.defaults
def get_init(cls):
return cls.__init__.im_func
DEF = re.compile('\s*def\s*([_\w][_\w\d]*)\s*\(')
# basic functionality
class FunctionMaker(object):
"""
An object with the ability to create functions with a given signature.
It has attributes name, doc, module, signature, defaults, dict and
methods update and make.
"""
def __init__(self, func=None, name=None, signature=None,
defaults=None, doc=None, module=None, funcdict=None):
self.shortsignature = signature
if func:
# func can be a class or a callable, but not an instance method
self.name = func.__name__
if self.name == '<lambda>': # small hack for lambda functions
self.name = '_lambda_'
self.doc = func.__doc__
self.module = func.__module__
if inspect.isfunction(func):
argspec = getfullargspec(func)
self.annotations = getattr(func, '__annotations__', {})
for a in ('args', 'varargs', 'varkw', 'defaults', 'kwonlyargs',
'kwonlydefaults'):
setattr(self, a, getattr(argspec, a))
for i, arg in enumerate(self.args):
setattr(self, 'arg%d' % i, arg)
if sys.version < '3': # easy way
self.shortsignature = self.signature = \
inspect.formatargspec(
formatvalue=lambda val: "", *argspec)[1:-1]
else: # Python 3 way
allargs = list(self.args)
allshortargs = list(self.args)
if self.varargs:
allargs.append('*' + self.varargs)
allshortargs.append('*' + self.varargs)
elif self.kwonlyargs:
allargs.append('*') # single star syntax
for a in self.kwonlyargs:
allargs.append('%s=None' % a)
allshortargs.append('%s=%s' % (a, a))
if self.varkw:
allargs.append('**' + self.varkw)
allshortargs.append('**' + self.varkw)
self.signature = ', '.join(allargs)
self.shortsignature = ', '.join(allshortargs)
self.dict = func.__dict__.copy()
# func=None happens when decorating a caller
if name:
self.name = name
if signature is not None:
self.signature = signature
if defaults:
self.defaults = defaults
if doc:
self.doc = doc
if module:
self.module = module
if funcdict:
self.dict = funcdict
# check existence required attributes
assert hasattr(self, 'name')
if not hasattr(self, 'signature'):
raise TypeError('You are decorating a non function: %s' % func)
def update(self, func, **kw):
"Update the signature of func with the data in self"
func.__name__ = self.name
func.__doc__ = getattr(self, 'doc', None)
func.__dict__ = getattr(self, 'dict', {})
func.func_defaults = getattr(self, 'defaults', ())
func.__kwdefaults__ = getattr(self, 'kwonlydefaults', None)
func.__annotations__ = getattr(self, 'annotations', None)
callermodule = sys._getframe(3).f_globals.get('__name__', '?')
func.__module__ = getattr(self, 'module', callermodule)
func.__dict__.update(kw)
def make(self, src_templ, evaldict=None, addsource=False, **attrs):
"Make a new function from a given template and update the signature"
src = src_templ % vars(self) # expand name and signature
evaldict = evaldict or {}
mo = DEF.match(src)
if mo is None:
raise SyntaxError('not a valid function template\n%s' % src)
name = mo.group(1) # extract the function name
names = set([name] + [arg.strip(' *') for arg in
self.shortsignature.split(',')])
for n in names:
if n in ('_func_', '_call_'):
raise NameError('%s is overridden in\n%s' % (n, src))
if not src.endswith('\n'): # add a newline just for safety
src += '\n' # this is needed in old versions of Python
try:
code = compile(src, '<string>', 'single')
# print >> sys.stderr, 'Compiling %s' % src
exec code in evaldict
except:
print >> sys.stderr, 'Error in generated code:'
print >> sys.stderr, src
raise
func = evaldict[name]
if addsource:
attrs['__source__'] = src
self.update(func, **attrs)
return func
@classmethod
def create(cls, obj, body, evaldict, defaults=None,
doc=None, module=None, addsource=True, **attrs):
"""
Create a function from the strings name, signature and body.
evaldict is the evaluation dictionary. If addsource is true an attribute
__source__ is added to the result. The attributes attrs are added,
if any.
"""
if isinstance(obj, str): # "name(signature)"
name, rest = obj.strip().split('(', 1)
signature = rest[:-1] #strip a right parens
func = None
else: # a function
name = None
signature = None
func = obj
self = cls(func, name, signature, defaults, doc, module)
ibody = '\n'.join(' ' + line for line in body.splitlines())
return self.make('def %(name)s(%(signature)s):\n' + ibody,
evaldict, addsource, **attrs)
def decorator(caller, func=None):
"""
decorator(caller) converts a caller function into a decorator;
decorator(caller, func) decorates a function using a caller.
"""
if func is not None: # returns a decorated function
evaldict = func.func_globals.copy()
evaldict['_call_'] = caller
evaldict['_func_'] = func
return FunctionMaker.create(
func, "return _call_(_func_, %(shortsignature)s)",
evaldict, undecorated=func, __wrapped__=func)
else: # returns a decorator
if inspect.isclass(caller):
name = caller.__name__.lower()
callerfunc = get_init(caller)
doc = 'decorator(%s) converts functions/generators into ' \
'factories of %s objects' % (caller.__name__, caller.__name__)
fun = getfullargspec(callerfunc).args[1] # second arg
elif inspect.isfunction(caller):
name = '_lambda_' if caller.__name__ == '<lambda>' \
else caller.__name__
callerfunc = caller
doc = caller.__doc__
fun = getfullargspec(callerfunc).args[0] # first arg
else: # assume caller is an object with a __call__ method
name = caller.__class__.__name__.lower()
callerfunc = caller.__call__.im_func
doc = caller.__call__.__doc__
fun = getfullargspec(callerfunc).args[1] # second arg
evaldict = callerfunc.func_globals.copy()
evaldict['_call_'] = caller
evaldict['decorator'] = decorator
return FunctionMaker.create(
'%s(%s)' % (name, fun),
'return decorator(_call_, %s)' % fun,
evaldict, undecorated=caller, __wrapped__=caller,
doc=doc, module=caller.__module__)
######################### contextmanager ########################
def __call__(self, func):
'Context manager decorator'
return FunctionMaker.create(
func, "with _self_: return _func_(%(shortsignature)s)",
dict(_self_=self, _func_=func), __wrapped__=func)
try: # Python >= 3.2
from contextlib import _GeneratorContextManager
ContextManager = type(
'ContextManager', (_GeneratorContextManager,), dict(__call__=__call__))
except ImportError: # Python >= 2.5
from contextlib import GeneratorContextManager
def __init__(self, f, *a, **k):
return GeneratorContextManager.__init__(self, f(*a, **k))
ContextManager = type(
'ContextManager', (GeneratorContextManager,),
dict(__call__=__call__, __init__=__init__))
contextmanager = decorator(ContextManager)
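
The `decorator` function above turns a caller into a signature-preserving decorator; a minimal usage sketch of the common case (assuming this module is importable as `decorator`, e.g. from within this bin/ directory) is:

    from decorator import decorator

    @decorator
    def trace(f, *args, **kw):
        # caller: runs around every call of the decorated function;
        # the generated wrapper keeps the original signature of f
        print('calling %s with args %s, %s' % (f.__name__, args, kw))
        return f(*args, **kw)

    @trace
    def add(x, y):
        return x + y

    add(1, 2)   # prints the trace line, then returns 3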


@@ -0,0 +1,4 @@
from .jsonpath import *
from .parser import parse
__version__ = '1.3.0'

510
bin/jsonpath_rw/jsonpath.py Normal file

@@ -0,0 +1,510 @@
from __future__ import unicode_literals, print_function, absolute_import, division, generators, nested_scopes
import logging
import six
from six.moves import xrange
from itertools import *
logger = logging.getLogger(__name__)
# Turn on/off the automatic creation of id attributes
# ... could be a kwarg pervasively but uses are rare and simple today
auto_id_field = None
class JSONPath(object):
"""
The base class for JSONPath abstract syntax; those
methods stubbed here are the interface to supported
JSONPath semantics.
"""
def find(self, data):
"""
All `JSONPath` types support `find()`, which returns an iterable of `DatumInContext`s.
They keep track of the path followed to the current location, so if the calling code
has some opinion about that, it can be passed in here as a starting point.
"""
raise NotImplementedError()
def update(self, data, val):
"Returns `data` with the specified path replaced by `val`"
raise NotImplementedError()
def child(self, child):
"""
Equivalent to Child(self, next) but with some canonicalization
"""
if isinstance(self, This) or isinstance(self, Root):
return child
elif isinstance(child, This):
return self
elif isinstance(child, Root):
return child
else:
return Child(self, child)
def make_datum(self, value):
if isinstance(value, DatumInContext):
return value
else:
return DatumInContext(value, path=Root(), context=None)
class DatumInContext(object):
"""
Represents a datum along a path from a context.
Essentially a zipper but with a structure represented by JsonPath,
and where the context is more of a parent pointer than a proper
representation of the context.
For quick-and-dirty work, this proxies any non-special attributes
to the underlying datum, but the actual datum can (and usually should)
be retrieved via the `value` attribute.
To place `datum` within another, use `datum.in_context(context=..., path=...)`
which extends the path. If the datum already has a context, it places the entire
context within that passed in, so an object can be built from the inside
out.
"""
@classmethod
def wrap(cls, data):
if isinstance(data, cls):
return data
else:
return cls(data)
def __init__(self, value, path=None, context=None):
self.value = value
self.path = path or This()
self.context = None if context is None else DatumInContext.wrap(context)
def in_context(self, context, path):
context = DatumInContext.wrap(context)
if self.context:
return DatumInContext(value=self.value, path=self.path, context=context.in_context(path=path, context=context))
else:
return DatumInContext(value=self.value, path=path, context=context)
@property
def full_path(self):
return self.path if self.context is None else self.context.full_path.child(self.path)
@property
def id_pseudopath(self):
"""
Looks like a path, but with ids stuck in when available
"""
try:
pseudopath = Fields(str(self.value[auto_id_field]))
except (TypeError, AttributeError, KeyError): # This may not be all the interesting exceptions
pseudopath = self.path
if self.context:
return self.context.id_pseudopath.child(pseudopath)
else:
return pseudopath
def __repr__(self):
return '%s(value=%r, path=%r, context=%r)' % (self.__class__.__name__, self.value, self.path, self.context)
def __eq__(self, other):
return isinstance(other, DatumInContext) and other.value == self.value and other.path == self.path and self.context == other.context
class AutoIdForDatum(DatumInContext):
"""
This behaves like a DatumInContext, but the value is
always the path leading up to it, not including the "id",
and with any "id" fields along the way replacing the prior
segment of the path
For example, it will make "foo.bar.id" return a datum
that behaves like DatumInContext(value="foo.bar", path="foo.bar.id").
This is disabled by default; it can be turned on by
setting the `auto_id_field` global to a value other
than `None`.
"""
def __init__(self, datum, id_field=None):
"""
Invariant is that datum.path is the path from context to datum. The auto id
will either be the id in the datum (if present) or the id of the context
followed by the path to the datum.
The path to this datum is always the path to the context, the path to the
datum, and then the auto id field.
"""
self.datum = datum
self.id_field = id_field or auto_id_field
@property
def value(self):
return str(self.datum.id_pseudopath)
@property
def path(self):
return self.id_field
@property
def context(self):
return self.datum
def __repr__(self):
return '%s(%r)' % (self.__class__.__name__, self.datum)
def in_context(self, context, path):
return AutoIdForDatum(self.datum.in_context(context=context, path=path))
def __eq__(self, other):
return isinstance(other, AutoIdForDatum) and other.datum == self.datum and self.id_field == other.id_field
class Root(JSONPath):
"""
The JSONPath referring to the "root" object. Concrete syntax is '$'.
The root is the topmost datum without any context attached.
"""
def find(self, data):
if not isinstance(data, DatumInContext):
return [DatumInContext(data, path=Root(), context=None)]
else:
if data.context is None:
return [DatumInContext(data.value, context=None, path=Root())]
else:
return Root().find(data.context)
def update(self, data, val):
return val
def __str__(self):
return '$'
def __repr__(self):
return 'Root()'
def __eq__(self, other):
return isinstance(other, Root)
class This(JSONPath):
"""
The JSONPath referring to the current datum. Concrete syntax is '@'.
"""
def find(self, datum):
return [DatumInContext.wrap(datum)]
def update(self, data, val):
return val
def __str__(self):
return '`this`'
def __repr__(self):
return 'This()'
def __eq__(self, other):
return isinstance(other, This)
class Child(JSONPath):
"""
JSONPath that first matches the left, then the right.
Concrete syntax is <left> '.' <right>
"""
def __init__(self, left, right):
self.left = left
self.right = right
def find(self, datum):
"""
Extra special case: auto ids do not have children,
so cut it off right now rather than auto id the auto id
"""
return [submatch
for subdata in self.left.find(datum)
if not isinstance(subdata, AutoIdForDatum)
for submatch in self.right.find(subdata)]
def __eq__(self, other):
return isinstance(other, Child) and self.left == other.left and self.right == other.right
def __str__(self):
return '%s.%s' % (self.left, self.right)
def __repr__(self):
return '%s(%r, %r)' % (self.__class__.__name__, self.left, self.right)
class Parent(JSONPath):
"""
JSONPath that matches the parent node of the current match.
Will crash if no such parent exists.
Available via named operator `parent`.
"""
def find(self, datum):
datum = DatumInContext.wrap(datum)
return [datum.context]
def __eq__(self, other):
return isinstance(other, Parent)
def __str__(self):
return '`parent`'
def __repr__(self):
return 'Parent()'
class Where(JSONPath):
"""
JSONPath that first matches the left, and then
filters for only those nodes that have
a match on the right.
WARNING: Subject to change. May want to have "contains"
or some other better word for it.
"""
def __init__(self, left, right):
self.left = left
self.right = right
def find(self, data):
return [subdata for subdata in self.left.find(data) if self.right.find(data)]
def __str__(self):
return '%s where %s' % (self.left, self.right)
def __eq__(self, other):
return isinstance(other, Where) and other.left == self.left and other.right == self.right
class Descendants(JSONPath):
"""
JSONPath that matches first the left expression then any descendant
of it which matches the right expression.
"""
def __init__(self, left, right):
self.left = left
self.right = right
def find(self, datum):
# <left> .. <right> ==> <left> . (<right> | *..<right> | [*]..<right>)
#
# With a wonky caveat that since Slice() has funky coercions
# we cannot just delegate to that equivalence or we'll hit an
# infinite loop. So right here we implement the coercion-free version.
# Get all left matches into a list
left_matches = self.left.find(datum)
if not isinstance(left_matches, list):
left_matches = [left_matches]
def match_recursively(datum):
right_matches = self.right.find(datum)
# Manually do the * or [*] to avoid coercion and recurse just the right-hand pattern
if isinstance(datum.value, list):
recursive_matches = [submatch
for i in range(0, len(datum.value))
for submatch in match_recursively(DatumInContext(datum.value[i], context=datum, path=Index(i)))]
elif isinstance(datum.value, dict):
recursive_matches = [submatch
for field in datum.value.keys()
for submatch in match_recursively(DatumInContext(datum.value[field], context=datum, path=Fields(field)))]
else:
recursive_matches = []
return right_matches + list(recursive_matches)
# TODO: repeatable iterator instead of list?
return [submatch
for left_match in left_matches
for submatch in match_recursively(left_match)]
def is_singular():
return False
def __str__(self):
return '%s..%s' % (self.left, self.right)
def __eq__(self, other):
return isinstance(other, Descendants) and self.left == other.left and self.right == other.right
class Union(JSONPath):
"""
JSONPath that returns the union of the results of each match.
This is pretty shoddily implemented for now. The nicest semantics
in case of mismatched bits (list vs atomic) is to put
them all in a list, but I haven't done that yet.
WARNING: Any appearance of this being the _concatenation_ is
coincidence. It may even be a bug! (or laziness)
"""
def __init__(self, left, right):
self.left = left
self.right = right
def is_singular(self):
return False
def find(self, data):
return self.left.find(data) + self.right.find(data)
class Intersect(JSONPath):
"""
JSONPath for bits that match *both* patterns.
This can be accomplished a couple of ways. The most
efficient is to actually build the intersected
AST as in building a state machine for matching the
intersection of regular languages. The next
idea is to build a filtered data and match against
that.
"""
def __init__(self, left, right):
self.left = left
self.right = right
def is_singular(self):
return False
def find(self, data):
raise NotImplementedError()
class Fields(JSONPath):
"""
JSONPath referring to some field of the current object.
Concrete syntax is comma-separated field names.
WARNING: If '*' is any of the field names, then they will
all be returned.
"""
def __init__(self, *fields):
self.fields = fields
def get_field_datum(self, datum, field):
if field == auto_id_field:
return AutoIdForDatum(datum)
else:
try:
field_value = datum.value[field] # Do NOT use `val.get(field)` since that confuses None as a value and None due to `get`
return DatumInContext(value=field_value, path=Fields(field), context=datum)
except (TypeError, KeyError, AttributeError):
return None
def reified_fields(self, datum):
if '*' not in self.fields:
return self.fields
else:
try:
fields = tuple(datum.value.keys())
return fields if auto_id_field is None else fields + (auto_id_field,)
except AttributeError:
return ()
def find(self, datum):
datum = DatumInContext.wrap(datum)
return [field_datum
for field_datum in [self.get_field_datum(datum, field) for field in self.reified_fields(datum)]
if field_datum is not None]
def __str__(self):
return ','.join(self.fields)
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, ','.join(map(repr, self.fields)))
def __eq__(self, other):
return isinstance(other, Fields) and tuple(self.fields) == tuple(other.fields)
class Index(JSONPath):
"""
JSONPath that matches indices of the current datum, or none if not large enough.
Concrete syntax is brackets.
WARNING: If the datum is not long enough, it will not crash but will not match anything.
NOTE: For the concrete syntax of `[*]`, the abstract syntax is a Slice() with no parameters (equiv to `[:]`)
"""
def __init__(self, index):
self.index = index
def find(self, datum):
datum = DatumInContext.wrap(datum)
if len(datum.value) > self.index:
return [DatumInContext(datum.value[self.index], path=self, context=datum)]
else:
return []
def __eq__(self, other):
return isinstance(other, Index) and self.index == other.index
def __str__(self):
return '[%i]' % self.index
class Slice(JSONPath):
"""
JSONPath matching a slice of an array.
Because of a mismatch between JSON and XML when schema-unaware,
this always returns an iterable; if the incoming data
was not a list, then it returns a one element list _containing_ that
data.
Consider these two docs, and their schema-unaware translation to JSON:
<a><b>hello</b></a> ==> {"a": {"b": "hello"}}
<a><b>hello</b><b>goodbye</b></a> ==> {"a": {"b": ["hello", "goodbye"]}}
If there were a schema, it would be known that "b" should always be an
array (unless the schema were wonky, but that is too much to fix here)
so when querying with JSON if the one writing the JSON knows that it
should be an array, they can write a slice operator and it will coerce
a non-array value to an array.
This may be a bit unfortunate because it would be nice to always have
an iterator, but dictionaries and other objects may also be iterable,
so this is the compromise.
"""
def __init__(self, start=None, end=None, step=None):
self.start = start
self.end = end
self.step = step
def find(self, datum):
datum = DatumInContext.wrap(datum)
# Here's the hack. If it is a dictionary or some kind of constant,
# put it in a single-element list
if (isinstance(datum.value, dict) or isinstance(datum.value, six.integer_types) or isinstance(datum.value, six.string_types)):
return self.find(DatumInContext([datum.value], path=datum.path, context=datum.context))
# Some iterators do not support slicing but we can still
# at least work for '*'
if self.start == None and self.end == None and self.step == None:
return [DatumInContext(datum.value[i], path=Index(i), context=datum) for i in xrange(0, len(datum.value))]
else:
return [DatumInContext(datum.value[i], path=Index(i), context=datum) for i in range(0, len(datum.value))[self.start:self.end:self.step]]
def __str__(self):
if self.start == None and self.end == None and self.step == None:
return '[*]'
else:
return '[%s%s%s]' % (self.start or '',
':%d'%self.end if self.end else '',
':%d'%self.step if self.step else '')
def __repr__(self):
return '%s(start=%r,end=%r,step=%r)' % (self.__class__.__name__, self.start, self.end, self.step)
def __eq__(self, other):
return isinstance(other, Slice) and other.start == self.start and self.end == other.end and other.step == self.step

171
bin/jsonpath_rw/lexer.py Normal file

@@ -0,0 +1,171 @@
from __future__ import unicode_literals, print_function, absolute_import, division, generators, nested_scopes
import sys
import logging
import ply.lex
logger = logging.getLogger(__name__)
class JsonPathLexerError(Exception):
pass
class JsonPathLexer(object):
'''
A Lexical analyzer for JsonPath.
'''
def __init__(self, debug=False):
self.debug = debug
if self.__doc__ == None:
raise JsonPathLexerError('Docstrings have been removed! By design of PLY, jsonpath-rw requires docstrings. You must not use PYTHONOPTIMIZE=2 or python -OO.')
def tokenize(self, string):
'''
Maps a string to an iterator over tokens. In other words: [char] -> [token]
'''
new_lexer = ply.lex.lex(module=self, debug=self.debug, errorlog=logger)
new_lexer.latest_newline = 0
new_lexer.string_value = None
new_lexer.input(string)
while True:
t = new_lexer.token()
if t is None: break
t.col = t.lexpos - new_lexer.latest_newline
yield t
if new_lexer.string_value is not None:
raise JsonPathLexerError('Unexpected EOF in string literal or identifier')
# ============== PLY Lexer specification ==================
#
# This probably should be private but:
# - the parser requires access to `tokens` (perhaps they should be defined in a third, shared dependency)
# - things like `literals` might be a legitimate part of the public interface.
#
# Anyhow, it is pythonic to give some rope to hang oneself with :-)
literals = ['*', '.', '[', ']', '(', ')', '$', ',', ':', '|', '&']
reserved_words = { 'where': 'WHERE' }
tokens = ['DOUBLEDOT', 'NUMBER', 'ID', 'NAMED_OPERATOR'] + list(reserved_words.values())
states = [ ('singlequote', 'exclusive'),
('doublequote', 'exclusive'),
('backquote', 'exclusive') ]
# Normal lexing, rather easy
t_DOUBLEDOT = r'\.\.'
t_ignore = ' \t'
def t_ID(self, t):
r'[a-zA-Z_@][a-zA-Z0-9_@\-]*'
t.type = self.reserved_words.get(t.value, 'ID')
return t
def t_NUMBER(self, t):
r'-?\d+'
t.value = int(t.value)
return t
# Single-quoted strings
t_singlequote_ignore = ''
def t_singlequote(self, t):
r"'"
t.lexer.string_start = t.lexer.lexpos
t.lexer.string_value = ''
t.lexer.push_state('singlequote')
def t_singlequote_content(self, t):
r"[^'\\]+"
t.lexer.string_value += t.value
def t_singlequote_escape(self, t):
r'\\.'
t.lexer.string_value += t.value[1]
def t_singlequote_end(self, t):
r"'"
t.value = t.lexer.string_value
t.type = 'ID'
t.lexer.string_value = None
t.lexer.pop_state()
return t
def t_singlequote_error(self, t):
raise JsonPathLexerError('Error on line %s, col %s while lexing singlequoted field: Unexpected character: %s ' % (t.lexer.lineno, t.lexpos - t.lexer.latest_newline, t.value[0]))
# Double-quoted strings
t_doublequote_ignore = ''
def t_doublequote(self, t):
r'"'
t.lexer.string_start = t.lexer.lexpos
t.lexer.string_value = ''
t.lexer.push_state('doublequote')
def t_doublequote_content(self, t):
r'[^"\\]+'
t.lexer.string_value += t.value
def t_doublequote_escape(self, t):
r'\\.'
t.lexer.string_value += t.value[1]
def t_doublequote_end(self, t):
r'"'
t.value = t.lexer.string_value
t.type = 'ID'
t.lexer.string_value = None
t.lexer.pop_state()
return t
def t_doublequote_error(self, t):
raise JsonPathLexerError('Error on line %s, col %s while lexing doublequoted field: Unexpected character: %s ' % (t.lexer.lineno, t.lexpos - t.lexer.latest_newline, t.value[0]))
# Back-quoted "magic" operators
t_backquote_ignore = ''
def t_backquote(self, t):
r'`'
t.lexer.string_start = t.lexer.lexpos
t.lexer.string_value = ''
t.lexer.push_state('backquote')
def t_backquote_escape(self, t):
r'\\.'
t.lexer.string_value += t.value[1]
def t_backquote_content(self, t):
r"[^`\\]+"
t.lexer.string_value += t.value
def t_backquote_end(self, t):
r'`'
t.value = t.lexer.string_value
t.type = 'NAMED_OPERATOR'
t.lexer.string_value = None
t.lexer.pop_state()
return t
def t_backquote_error(self, t):
raise JsonPathLexerError('Error on line %s, col %s while lexing backquoted operator: Unexpected character: %s ' % (t.lexer.lineno, t.lexpos - t.lexer.latest_newline, t.value[0]))
# Counting lines, handling errors
def t_newline(self, t):
r'\n'
t.lexer.lineno += 1
t.lexer.latest_newline = t.lexpos
def t_error(self, t):
raise JsonPathLexerError('Error on line %s, col %s: Unexpected character: %s ' % (t.lexer.lineno, t.lexpos - t.lexer.latest_newline, t.value[0]))
if __name__ == '__main__':
logging.basicConfig()
lexer = JsonPathLexer(debug=True)
for token in lexer.tokenize(sys.stdin.read()):
print('%-20s%s' % (token.value, token.type))

187
bin/jsonpath_rw/parser.py Normal file

@@ -0,0 +1,187 @@
from __future__ import print_function, absolute_import, division, generators, nested_scopes
import sys
import os.path
import logging
import ply.yacc
from jsonpath_rw.jsonpath import *
from jsonpath_rw.lexer import JsonPathLexer
logger = logging.getLogger(__name__)
def parse(string):
return JsonPathParser().parse(string)
class JsonPathParser(object):
'''
An LALR-parser for JsonPath
'''
tokens = JsonPathLexer.tokens
def __init__(self, debug=False, lexer_class=None):
if self.__doc__ == None:
raise Exception('Docstrings have been removed! By design of PLY, jsonpath-rw requires docstrings. You must not use PYTHONOPTIMIZE=2 or python -OO.')
self.debug = debug
self.lexer_class = lexer_class or JsonPathLexer # Crufty but works around statefulness in PLY
def parse(self, string, lexer = None):
lexer = lexer or self.lexer_class()
return self.parse_token_stream(lexer.tokenize(string))
def parse_token_stream(self, token_iterator, start_symbol='jsonpath'):
# Since PLY has some crufty aspects and dumps files, we try to keep them local
# However, we need to derive the name of the output Python file :-/
output_directory = os.path.dirname(__file__)
try:
module_name = os.path.splitext(os.path.split(__file__)[1])[0]
except:
module_name = __name__
parsing_table_module = '_'.join([module_name, start_symbol, 'parsetab'])
# And we regenerate the parse table every time; it doesn't actually take that long!
new_parser = ply.yacc.yacc(module=self,
debug=self.debug,
tabmodule = parsing_table_module,
outputdir = output_directory,
write_tables=0,
start = start_symbol,
errorlog = logger)
return new_parser.parse(lexer = IteratorToTokenStream(token_iterator))
# ===================== PLY Parser specification =====================
precedence = [
('left', ','),
('left', 'DOUBLEDOT'),
('left', '.'),
('left', '|'),
('left', '&'),
('left', 'WHERE'),
]
def p_error(self, t):
raise Exception('Parse error at %s:%s near token %s (%s)' % (t.lineno, t.col, t.value, t.type))
def p_jsonpath_binop(self, p):
"""jsonpath : jsonpath '.' jsonpath
| jsonpath DOUBLEDOT jsonpath
| jsonpath WHERE jsonpath
| jsonpath '|' jsonpath
| jsonpath '&' jsonpath"""
op = p[2]
if op == '.':
p[0] = Child(p[1], p[3])
elif op == '..':
p[0] = Descendants(p[1], p[3])
elif op == 'where':
p[0] = Where(p[1], p[3])
elif op == '|':
p[0] = Union(p[1], p[3])
elif op == '&':
p[0] = Intersect(p[1], p[3])
def p_jsonpath_fields(self, p):
"jsonpath : fields_or_any"
p[0] = Fields(*p[1])
def p_jsonpath_named_operator(self, p):
"jsonpath : NAMED_OPERATOR"
if p[1] == 'this':
p[0] = This()
elif p[1] == 'parent':
p[0] = Parent()
else:
raise Exception('Unknown named operator `%s` at %s:%s' % (p[1], p.lineno(1), p.lexpos(1)))
def p_jsonpath_root(self, p):
"jsonpath : '$'"
p[0] = Root()
def p_jsonpath_idx(self, p):
"jsonpath : '[' idx ']'"
p[0] = p[2]
def p_jsonpath_slice(self, p):
"jsonpath : '[' slice ']'"
p[0] = p[2]
def p_jsonpath_fieldbrackets(self, p):
"jsonpath : '[' fields ']'"
p[0] = Fields(*p[2])
def p_jsonpath_child_fieldbrackets(self, p):
"jsonpath : jsonpath '[' fields ']'"
p[0] = Child(p[1], Fields(*p[3]))
def p_jsonpath_child_idxbrackets(self, p):
"jsonpath : jsonpath '[' idx ']'"
p[0] = Child(p[1], p[3])
def p_jsonpath_child_slicebrackets(self, p):
"jsonpath : jsonpath '[' slice ']'"
p[0] = Child(p[1], p[3])
def p_jsonpath_parens(self, p):
"jsonpath : '(' jsonpath ')'"
p[0] = p[2]
# Because fields in brackets cannot be '*' - that is reserved for array indices
def p_fields_or_any(self, p):
"""fields_or_any : fields
| '*' """
if p[1] == '*':
p[0] = ['*']
else:
p[0] = p[1]
def p_fields_id(self, p):
"fields : ID"
p[0] = [p[1]]
def p_fields_comma(self, p):
"fields : fields ',' fields"
p[0] = p[1] + p[3]
def p_idx(self, p):
"idx : NUMBER"
p[0] = Index(p[1])
def p_slice_any(self, p):
"slice : '*'"
p[0] = Slice()
def p_slice(self, p): # Currently does not support `step`
"slice : maybe_int ':' maybe_int"
p[0] = Slice(start=p[1], end=p[3])
def p_maybe_int(self, p):
"""maybe_int : NUMBER
| empty"""
p[0] = p[1]
def p_empty(self, p):
'empty :'
p[0] = None
class IteratorToTokenStream(object):
def __init__(self, iterator):
self.iterator = iterator
def token(self):
try:
return next(self.iterator)
except StopIteration:
return None
if __name__ == '__main__':
logging.basicConfig()
parser = JsonPathParser(debug=True)
print(parser.parse(sys.stdin.read()))
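
Together with the lexer and the AST classes in jsonpath.py, this parser provides the usual jsonpath_rw entry point. A small usage sketch (the sample data is hypothetical; assumes the dependencies such as ply and six are importable):

    from jsonpath_rw import parse   # re-exported via jsonpath_rw/__init__.py

    expr = parse('foo[*].id')
    data = {'foo': [{'id': 1}, {'id': 2}]}

    # find() returns DatumInContext objects; .value is the matched datum
    print([match.value for match in expr.find(data)])            # [1, 2]
    print([str(match.full_path) for match in expr.find(data)])   # ['foo.[0].id', 'foo.[1].id']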

4
bin/ply/__init__.py Normal file

@@ -0,0 +1,4 @@
# PLY package
# Author: David Beazley (dave@dabeaz.com)
__all__ = ['lex','yacc']

898
bin/ply/cpp.py Normal file

@@ -0,0 +1,898 @@
# -----------------------------------------------------------------------------
# cpp.py
#
# Author: David Beazley (http://www.dabeaz.com)
# Copyright (C) 2007
# All rights reserved
#
# This module implements an ANSI-C style lexical preprocessor for PLY.
# -----------------------------------------------------------------------------
from __future__ import generators
# -----------------------------------------------------------------------------
# Default preprocessor lexer definitions. These tokens are enough to get
# a basic preprocessor working. Other modules may import these if they want
# -----------------------------------------------------------------------------
tokens = (
'CPP_ID','CPP_INTEGER', 'CPP_FLOAT', 'CPP_STRING', 'CPP_CHAR', 'CPP_WS', 'CPP_COMMENT', 'CPP_POUND','CPP_DPOUND'
)
literals = "+-*/%|&~^<>=!?()[]{}.,;:\\\'\""
# Whitespace
def t_CPP_WS(t):
r'\s+'
t.lexer.lineno += t.value.count("\n")
return t
t_CPP_POUND = r'\#'
t_CPP_DPOUND = r'\#\#'
# Identifier
t_CPP_ID = r'[A-Za-z_][\w_]*'
# Integer literal
def CPP_INTEGER(t):
r'(((((0x)|(0X))[0-9a-fA-F]+)|(\d+))([uU]|[lL]|[uU][lL]|[lL][uU])?)'
return t
t_CPP_INTEGER = CPP_INTEGER
# Floating literal
t_CPP_FLOAT = r'((\d+)(\.\d+)(e(\+|-)?(\d+))? | (\d+)e(\+|-)?(\d+))([lL]|[fF])?'
# String literal
def t_CPP_STRING(t):
r'\"([^\\\n]|(\\(.|\n)))*?\"'
t.lexer.lineno += t.value.count("\n")
return t
# Character constant 'c' or L'c'
def t_CPP_CHAR(t):
r'(L)?\'([^\\\n]|(\\(.|\n)))*?\''
t.lexer.lineno += t.value.count("\n")
return t
# Comment
def t_CPP_COMMENT(t):
r'(/\*(.|\n)*?\*/)|(//.*?\n)'
t.lexer.lineno += t.value.count("\n")
return t
def t_error(t):
t.type = t.value[0]
t.value = t.value[0]
t.lexer.skip(1)
return t
import re
import copy
import time
import os.path
# -----------------------------------------------------------------------------
# trigraph()
#
# Given an input string, this function replaces all trigraph sequences.
# The following mapping is used:
#
# ??= #
# ??/ \
# ??' ^
# ??( [
# ??) ]
# ??! |
# ??< {
# ??> }
# ??- ~
# -----------------------------------------------------------------------------
_trigraph_pat = re.compile(r'''\?\?[=/\'\(\)\!<>\-]''')
_trigraph_rep = {
'=':'#',
'/':'\\',
"'":'^',
'(':'[',
')':']',
'!':'|',
'<':'{',
'>':'}',
'-':'~'
}
def trigraph(input):
return _trigraph_pat.sub(lambda g: _trigraph_rep[g.group()[-1]],input)
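# For example (illustrative input), every trigraph sequence is rewritten:
#   trigraph("??=define ARR(x) x??(0??)")   ->   "#define ARR(x) x[0]"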
# ------------------------------------------------------------------
# Macro object
#
# This object holds information about preprocessor macros
#
# .name - Macro name (string)
# .value - Macro value (a list of tokens)
# .arglist - List of argument names
# .variadic - Boolean indicating whether or not variadic macro
# .vararg - Name of the variadic parameter
#
# When a macro is created, the macro replacement token sequence is
# pre-scanned and used to create patch lists that are later used
# during macro expansion
# ------------------------------------------------------------------
class Macro(object):
def __init__(self,name,value,arglist=None,variadic=False):
self.name = name
self.value = value
self.arglist = arglist
self.variadic = variadic
if variadic:
self.vararg = arglist[-1]
self.source = None
# ------------------------------------------------------------------
# Preprocessor object
#
# Object representing a preprocessor. Contains macro definitions,
# include directories, and other information
# ------------------------------------------------------------------
class Preprocessor(object):
def __init__(self,lexer=None):
if lexer is None:
lexer = lex.lexer
self.lexer = lexer
self.macros = { }
self.path = []
self.temp_path = []
# Probe the lexer for selected tokens
self.lexprobe()
tm = time.localtime()
self.define("__DATE__ \"%s\"" % time.strftime("%b %d %Y",tm))
self.define("__TIME__ \"%s\"" % time.strftime("%H:%M:%S",tm))
self.parser = None
# -----------------------------------------------------------------------------
# tokenize()
#
# Utility function. Given a string of text, tokenize into a list of tokens
# -----------------------------------------------------------------------------
def tokenize(self,text):
tokens = []
self.lexer.input(text)
while True:
tok = self.lexer.token()
if not tok: break
tokens.append(tok)
return tokens
# ---------------------------------------------------------------------
# error()
#
# Report a preprocessor error/warning of some kind
# ----------------------------------------------------------------------
def error(self,file,line,msg):
print("%s:%d %s" % (file,line,msg))
# ----------------------------------------------------------------------
# lexprobe()
#
# This method probes the preprocessor lexer object to discover
# the token types of symbols that are important to the preprocessor.
# If this works right, the preprocessor will simply "work"
# with any suitable lexer regardless of how tokens have been named.
# ----------------------------------------------------------------------
def lexprobe(self):
# Determine the token type for identifiers
self.lexer.input("identifier")
tok = self.lexer.token()
if not tok or tok.value != "identifier":
print("Couldn't determine identifier type")
else:
self.t_ID = tok.type
# Determine the token type for integers
self.lexer.input("12345")
tok = self.lexer.token()
if not tok or int(tok.value) != 12345:
print("Couldn't determine integer type")
else:
self.t_INTEGER = tok.type
self.t_INTEGER_TYPE = type(tok.value)
# Determine the token type for strings enclosed in double quotes
self.lexer.input("\"filename\"")
tok = self.lexer.token()
if not tok or tok.value != "\"filename\"":
print("Couldn't determine string type")
else:
self.t_STRING = tok.type
# Determine the token type for whitespace--if any
self.lexer.input(" ")
tok = self.lexer.token()
if not tok or tok.value != " ":
self.t_SPACE = None
else:
self.t_SPACE = tok.type
# Determine the token type for newlines
self.lexer.input("\n")
tok = self.lexer.token()
if not tok or tok.value != "\n":
self.t_NEWLINE = None
print("Couldn't determine token for newlines")
else:
self.t_NEWLINE = tok.type
self.t_WS = (self.t_SPACE, self.t_NEWLINE)
# Check for other characters used by the preprocessor
chars = [ '<','>','#','##','\\','(',')',',','.']
for c in chars:
self.lexer.input(c)
tok = self.lexer.token()
if not tok or tok.value != c:
print("Unable to lex '%s' required for preprocessor" % c)
# ----------------------------------------------------------------------
# add_path()
#
# Adds a search path to the preprocessor.
# ----------------------------------------------------------------------
def add_path(self,path):
self.path.append(path)
# ----------------------------------------------------------------------
# group_lines()
#
# Given an input string, this function splits it into lines. Trailing whitespace
# is removed. Any line ending with \ is grouped with the next line. This
# function forms the lowest level of the preprocessor---grouping text into
# a line-by-line format.
# ----------------------------------------------------------------------
def group_lines(self,input):
lex = self.lexer.clone()
lines = [x.rstrip() for x in input.splitlines()]
for i in xrange(len(lines)):
j = i+1
while lines[i].endswith('\\') and (j < len(lines)):
lines[i] = lines[i][:-1]+lines[j]
lines[j] = ""
j += 1
input = "\n".join(lines)
lex.input(input)
lex.lineno = 1
current_line = []
while True:
tok = lex.token()
if not tok:
break
current_line.append(tok)
if tok.type in self.t_WS and '\n' in tok.value:
yield current_line
current_line = []
if current_line:
yield current_line
# ----------------------------------------------------------------------
# tokenstrip()
#
# Remove leading/trailing whitespace tokens from a token list
# ----------------------------------------------------------------------
def tokenstrip(self,tokens):
i = 0
while i < len(tokens) and tokens[i].type in self.t_WS:
i += 1
del tokens[:i]
i = len(tokens)-1
while i >= 0 and tokens[i].type in self.t_WS:
i -= 1
del tokens[i+1:]
return tokens
# ----------------------------------------------------------------------
# collect_args()
#
# Collects comma separated arguments from a list of tokens. The arguments
# must be enclosed in parenthesis. Returns a tuple (tokencount,args,positions)
# where tokencount is the number of tokens consumed, args is a list of arguments,
# and positions is a list of integers containing the starting index of each
# argument. Each argument is represented by a list of tokens.
#
# When collecting arguments, leading and trailing whitespace is removed
# from each argument.
#
# This function properly handles nested parentheses and commas---these do not
# define new arguments.
# ----------------------------------------------------------------------
def collect_args(self,tokenlist):
args = []
positions = []
current_arg = []
nesting = 1
tokenlen = len(tokenlist)
# Search for the opening '('.
i = 0
while (i < tokenlen) and (tokenlist[i].type in self.t_WS):
i += 1
if (i < tokenlen) and (tokenlist[i].value == '('):
positions.append(i+1)
else:
self.error(self.source,tokenlist[0].lineno,"Missing '(' in macro arguments")
return 0, [], []
i += 1
while i < tokenlen:
t = tokenlist[i]
if t.value == '(':
current_arg.append(t)
nesting += 1
elif t.value == ')':
nesting -= 1
if nesting == 0:
if current_arg:
args.append(self.tokenstrip(current_arg))
positions.append(i)
return i+1,args,positions
current_arg.append(t)
elif t.value == ',' and nesting == 1:
args.append(self.tokenstrip(current_arg))
positions.append(i+1)
current_arg = []
else:
current_arg.append(t)
i += 1
# Missing end argument
self.error(self.source,tokenlist[-1].lineno,"Missing ')' in macro arguments")
return 0, [],[]
# ----------------------------------------------------------------------
# macro_prescan()
#
# Examine the macro value (token sequence) and identify patch points
# This is used to speed up macro expansion later on---we'll know
# right away where to apply patches to the value to form the expansion
# ----------------------------------------------------------------------
def macro_prescan(self,macro):
macro.patch = [] # Standard macro arguments
macro.str_patch = [] # String conversion expansion
macro.var_comma_patch = [] # Variadic macro comma patch
i = 0
while i < len(macro.value):
if macro.value[i].type == self.t_ID and macro.value[i].value in macro.arglist:
argnum = macro.arglist.index(macro.value[i].value)
# Conversion of argument to a string
if i > 0 and macro.value[i-1].value == '#':
macro.value[i] = copy.copy(macro.value[i])
macro.value[i].type = self.t_STRING
del macro.value[i-1]
macro.str_patch.append((argnum,i-1))
continue
# Concatenation
elif (i > 0 and macro.value[i-1].value == '##'):
macro.patch.append(('c',argnum,i-1))
del macro.value[i-1]
continue
elif ((i+1) < len(macro.value) and macro.value[i+1].value == '##'):
macro.patch.append(('c',argnum,i))
i += 1
continue
# Standard expansion
else:
macro.patch.append(('e',argnum,i))
elif macro.value[i].value == '##':
if macro.variadic and (i > 0) and (macro.value[i-1].value == ',') and \
((i+1) < len(macro.value)) and (macro.value[i+1].type == self.t_ID) and \
(macro.value[i+1].value == macro.vararg):
macro.var_comma_patch.append(i-1)
i += 1
macro.patch.sort(key=lambda x: x[2],reverse=True)
# ----------------------------------------------------------------------
# macro_expand_args()
#
# Given a Macro and list of arguments (each a token list), this method
# returns an expanded version of a macro. The return value is a token sequence
# representing the replacement macro tokens
# ----------------------------------------------------------------------
def macro_expand_args(self,macro,args):
# Make a copy of the macro token sequence
rep = [copy.copy(_x) for _x in macro.value]
# Make string expansion patches. These do not alter the length of the replacement sequence
str_expansion = {}
for argnum, i in macro.str_patch:
if argnum not in str_expansion:
str_expansion[argnum] = ('"%s"' % "".join([x.value for x in args[argnum]])).replace("\\","\\\\")
rep[i] = copy.copy(rep[i])
rep[i].value = str_expansion[argnum]
# Make the variadic macro comma patch. If the variadic macro argument is empty, we get rid of the comma that precedes it
comma_patch = False
if macro.variadic and not args[-1]:
for i in macro.var_comma_patch:
rep[i] = None
comma_patch = True
# Make all other patches. The order of these matters. It is assumed that the patch list
# has been sorted in reverse order of patch location since replacements will cause the
# size of the replacement sequence to expand from the patch point.
expanded = { }
for ptype, argnum, i in macro.patch:
# Concatenation. Argument is left unexpanded
if ptype == 'c':
rep[i:i+1] = args[argnum]
# Normal expansion. Argument is macro expanded first
elif ptype == 'e':
if argnum not in expanded:
expanded[argnum] = self.expand_macros(args[argnum])
rep[i:i+1] = expanded[argnum]
# Get rid of removed comma if necessary
if comma_patch:
rep = [_i for _i in rep if _i]
return rep
# ----------------------------------------------------------------------
# expand_macros()
#
# Given a list of tokens, this function performs macro expansion.
# The expanded argument is a dictionary that contains macros already
# expanded. This is used to prevent infinite recursion.
# ----------------------------------------------------------------------
def expand_macros(self,tokens,expanded=None):
if expanded is None:
expanded = {}
i = 0
while i < len(tokens):
t = tokens[i]
if t.type == self.t_ID:
if t.value in self.macros and t.value not in expanded:
# Yes, we found a macro match
expanded[t.value] = True
m = self.macros[t.value]
if not m.arglist:
# A simple macro
ex = self.expand_macros([copy.copy(_x) for _x in m.value],expanded)
for e in ex:
e.lineno = t.lineno
tokens[i:i+1] = ex
i += len(ex)
else:
# A macro with arguments
j = i + 1
while j < len(tokens) and tokens[j].type in self.t_WS:
j += 1
if tokens[j].value == '(':
tokcount,args,positions = self.collect_args(tokens[j:])
if not m.variadic and len(args) != len(m.arglist):
self.error(self.source,t.lineno,"Macro %s requires %d arguments" % (t.value,len(m.arglist)))
i = j + tokcount
elif m.variadic and len(args) < len(m.arglist)-1:
if len(m.arglist) > 2:
self.error(self.source,t.lineno,"Macro %s must have at least %d arguments" % (t.value, len(m.arglist)-1))
else:
self.error(self.source,t.lineno,"Macro %s must have at least %d argument" % (t.value, len(m.arglist)-1))
i = j + tokcount
else:
if m.variadic:
if len(args) == len(m.arglist)-1:
args.append([])
else:
args[len(m.arglist)-1] = tokens[j+positions[len(m.arglist)-1]:j+tokcount-1]
del args[len(m.arglist):]
# Get macro replacement text
rep = self.macro_expand_args(m,args)
rep = self.expand_macros(rep,expanded)
for r in rep:
r.lineno = t.lineno
tokens[i:j+tokcount] = rep
i += len(rep)
del expanded[t.value]
continue
elif t.value == '__LINE__':
t.type = self.t_INTEGER
t.value = self.t_INTEGER_TYPE(t.lineno)
i += 1
return tokens
# ----------------------------------------------------------------------
# evalexpr()
#
# Evaluate an expression token sequence and return its integral result.
# This is what decides #if and #elif directives.
# ----------------------------------------------------------------------
def evalexpr(self,tokens):
# tokens = tokenize(line)
# Search for defined macros
i = 0
while i < len(tokens):
if tokens[i].type == self.t_ID and tokens[i].value == 'defined':
j = i + 1
needparen = False
result = "0L"
while j < len(tokens):
if tokens[j].type in self.t_WS:
j += 1
continue
elif tokens[j].type == self.t_ID:
if tokens[j].value in self.macros:
result = "1L"
else:
result = "0L"
if not needparen: break
elif tokens[j].value == '(':
needparen = True
elif tokens[j].value == ')':
break
else:
self.error(self.source,tokens[i].lineno,"Malformed defined()")
j += 1
tokens[i].type = self.t_INTEGER
tokens[i].value = self.t_INTEGER_TYPE(result)
del tokens[i+1:j+1]
i += 1
tokens = self.expand_macros(tokens)
for i,t in enumerate(tokens):
if t.type == self.t_ID:
tokens[i] = copy.copy(t)
tokens[i].type = self.t_INTEGER
tokens[i].value = self.t_INTEGER_TYPE("0L")
elif t.type == self.t_INTEGER:
tokens[i] = copy.copy(t)
# Strip off any trailing suffixes
tokens[i].value = str(tokens[i].value)
while tokens[i].value[-1] not in "0123456789abcdefABCDEF":
tokens[i].value = tokens[i].value[:-1]
expr = "".join([str(x.value) for x in tokens])
expr = expr.replace("&&"," and ")
expr = expr.replace("||"," or ")
expr = expr.replace("!"," not ")
try:
result = eval(expr)
except StandardError:
self.error(self.source,tokens[0].lineno,"Couldn't evaluate expression")
result = 0
return result
# ----------------------------------------------------------------------
# parsegen()
#
# Parse an input string.
# ----------------------------------------------------------------------
def parsegen(self,input,source=None):
# Replace trigraph sequences
t = trigraph(input)
lines = self.group_lines(t)
if not source:
source = ""
self.define("__FILE__ \"%s\"" % source)
self.source = source
chunk = []
enable = True
iftrigger = False
ifstack = []
for x in lines:
for i,tok in enumerate(x):
if tok.type not in self.t_WS: break
if tok.value == '#':
# Preprocessor directive
for tok in x:
if tok.type in self.t_WS and '\n' in tok.value:
chunk.append(tok)
dirtokens = self.tokenstrip(x[i+1:])
if dirtokens:
name = dirtokens[0].value
args = self.tokenstrip(dirtokens[1:])
else:
name = ""
args = []
if name == 'define':
if enable:
for tok in self.expand_macros(chunk):
yield tok
chunk = []
self.define(args)
elif name == 'include':
if enable:
for tok in self.expand_macros(chunk):
yield tok
chunk = []
oldfile = self.macros['__FILE__']
for tok in self.include(args):
yield tok
self.macros['__FILE__'] = oldfile
self.source = source
elif name == 'undef':
if enable:
for tok in self.expand_macros(chunk):
yield tok
chunk = []
self.undef(args)
elif name == 'ifdef':
ifstack.append((enable,iftrigger))
if enable:
if not args[0].value in self.macros:
enable = False
iftrigger = False
else:
iftrigger = True
elif name == 'ifndef':
ifstack.append((enable,iftrigger))
if enable:
if args[0].value in self.macros:
enable = False
iftrigger = False
else:
iftrigger = True
elif name == 'if':
ifstack.append((enable,iftrigger))
if enable:
result = self.evalexpr(args)
if not result:
enable = False
iftrigger = False
else:
iftrigger = True
elif name == 'elif':
if ifstack:
if ifstack[-1][0]: # We only pay attention if outer "if" allows this
if enable: # If already true, we flip enable False
enable = False
elif not iftrigger: # If False, but not triggered yet, we'll check expression
result = self.evalexpr(args)
if result:
enable = True
iftrigger = True
else:
self.error(self.source,dirtokens[0].lineno,"Misplaced #elif")
elif name == 'else':
if ifstack:
if ifstack[-1][0]:
if enable:
enable = False
elif not iftrigger:
enable = True
iftrigger = True
else:
self.error(self.source,dirtokens[0].lineno,"Misplaced #else")
elif name == 'endif':
if ifstack:
enable,iftrigger = ifstack.pop()
else:
self.error(self.source,dirtokens[0].lineno,"Misplaced #endif")
else:
# Unknown preprocessor directive
pass
else:
# Normal text
if enable:
chunk.extend(x)
for tok in self.expand_macros(chunk):
yield tok
chunk = []
# ----------------------------------------------------------------------
# include()
#
# Implementation of file-inclusion
# ----------------------------------------------------------------------
def include(self,tokens):
# Try to extract the filename and then process an include file
if not tokens:
return
if tokens:
if tokens[0].value != '<' and tokens[0].type != self.t_STRING:
tokens = self.expand_macros(tokens)
if tokens[0].value == '<':
# Include <...>
i = 1
while i < len(tokens):
if tokens[i].value == '>':
break
i += 1
else:
print("Malformed #include <...>")
return
filename = "".join([x.value for x in tokens[1:i]])
path = self.path + [""] + self.temp_path
elif tokens[0].type == self.t_STRING:
filename = tokens[0].value[1:-1]
path = self.temp_path + [""] + self.path
else:
print("Malformed #include statement")
return
for p in path:
iname = os.path.join(p,filename)
try:
data = open(iname,"r").read()
dname = os.path.dirname(iname)
if dname:
self.temp_path.insert(0,dname)
for tok in self.parsegen(data,filename):
yield tok
if dname:
del self.temp_path[0]
break
except IOError:
pass
else:
print("Couldn't find '%s'" % filename)
# ----------------------------------------------------------------------
# define()
#
# Define a new macro
# ----------------------------------------------------------------------
def define(self,tokens):
if isinstance(tokens,(str,unicode)):
tokens = self.tokenize(tokens)
linetok = tokens
try:
name = linetok[0]
if len(linetok) > 1:
mtype = linetok[1]
else:
mtype = None
if not mtype:
m = Macro(name.value,[])
self.macros[name.value] = m
elif mtype.type in self.t_WS:
# A normal macro
m = Macro(name.value,self.tokenstrip(linetok[2:]))
self.macros[name.value] = m
elif mtype.value == '(':
# A macro with arguments
tokcount, args, positions = self.collect_args(linetok[1:])
variadic = False
for a in args:
if variadic:
print("No more arguments may follow a variadic argument")
break
astr = "".join([str(_i.value) for _i in a])
if astr == "...":
variadic = True
a[0].type = self.t_ID
a[0].value = '__VA_ARGS__'
variadic = True
del a[1:]
continue
elif astr[-3:] == "..." and a[0].type == self.t_ID:
variadic = True
del a[1:]
# If the ellipsis is fused to the argument name (e.g. "args..."), strip the
# "..." suffix from the name for the purposes of macro expansion
if a[0].value[-3:] == '...':
a[0].value = a[0].value[:-3]
continue
if len(a) > 1 or a[0].type != self.t_ID:
print("Invalid macro argument")
break
else:
mvalue = self.tokenstrip(linetok[1+tokcount:])
i = 0
while i < len(mvalue):
if i+1 < len(mvalue):
if mvalue[i].type in self.t_WS and mvalue[i+1].value == '##':
del mvalue[i]
continue
elif mvalue[i].value == '##' and mvalue[i+1].type in self.t_WS:
del mvalue[i+1]
i += 1
m = Macro(name.value,mvalue,[x[0].value for x in args],variadic)
self.macro_prescan(m)
self.macros[name.value] = m
else:
print("Bad macro definition")
except LookupError:
print("Bad macro definition")
# ----------------------------------------------------------------------
# undef()
#
# Undefine a macro
# ----------------------------------------------------------------------
def undef(self,tokens):
id = tokens[0].value
try:
del self.macros[id]
except LookupError:
pass
# ----------------------------------------------------------------------
# parse()
#
# Parse input text.
# ----------------------------------------------------------------------
def parse(self,input,source=None,ignore={}):
self.ignore = ignore
self.parser = self.parsegen(input,source)
# ----------------------------------------------------------------------
# token()
#
# Method to return individual tokens
# ----------------------------------------------------------------------
def token(self):
try:
while True:
tok = next(self.parser)
if tok.type not in self.ignore: return tok
except StopIteration:
self.parser = None
return None
if __name__ == '__main__':
import ply.lex as lex
lexer = lex.lex()
# Run a preprocessor
import sys
f = open(sys.argv[1])
input = f.read()
p = Preprocessor(lexer)
p.parse(input,sys.argv[1])
while True:
tok = p.token()
if not tok: break
print(p.source, tok)
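
Below is a minimal, hedged sketch of driving the Preprocessor programmatically rather than through the __main__ block above. It assumes this file is importable as ply.cpp (as in stock PLY), so that lex.lex(module=cpp) can pick up the CPP_* token rules defined near the top of the file; the C source text is purely illustrative.

import ply.lex as lex
import ply.cpp as cpp

# Build a lexer from cpp.py's own token rules, then preprocess a string.
lexer = lex.lex(module=cpp)
p = cpp.Preprocessor(lexer)
p.parse("#define SQUARE(x) ((x)*(x))\nint y = SQUARE(3);\n", "example.c")
out = []
tok = p.token()
while tok:
    out.append(tok.value)
    tok = p.token()
print("".join(out))   # the macro-expanded source text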

133
bin/ply/ctokens.py Normal file

@@ -0,0 +1,133 @@
# ----------------------------------------------------------------------
# ctokens.py
#
# Token specifications for symbols in ANSI C and C++. This file is
# meant to be used as a library in other tokenizers.
# ----------------------------------------------------------------------
# Reserved words
tokens = [
# Literals (identifier, integer constant, float constant, string constant, char const)
'ID', 'TYPEID', 'INTEGER', 'FLOAT', 'STRING', 'CHARACTER',
# Operators (+,-,*,/,%,|,&,~,^,<<,>>, ||, &&, !, <, <=, >, >=, ==, !=)
'PLUS', 'MINUS', 'TIMES', 'DIVIDE', 'MODULO',
'OR', 'AND', 'NOT', 'XOR', 'LSHIFT', 'RSHIFT',
'LOR', 'LAND', 'LNOT',
'LT', 'LE', 'GT', 'GE', 'EQ', 'NE',
# Assignment (=, *=, /=, %=, +=, -=, <<=, >>=, &=, ^=, |=)
'EQUALS', 'TIMESEQUAL', 'DIVEQUAL', 'MODEQUAL', 'PLUSEQUAL', 'MINUSEQUAL',
'LSHIFTEQUAL','RSHIFTEQUAL', 'ANDEQUAL', 'XOREQUAL', 'OREQUAL',
# Increment/decrement (++,--)
'INCREMENT', 'DECREMENT',
# Structure dereference (->)
'ARROW',
# Ternary operator (?)
'TERNARY',
# Delimiters ( ) [ ] { } , . ; :
'LPAREN', 'RPAREN',
'LBRACKET', 'RBRACKET',
'LBRACE', 'RBRACE',
'COMMA', 'PERIOD', 'SEMI', 'COLON',
# Ellipsis (...)
'ELLIPSIS',
# Comments (matched by the t_COMMENT/t_CPPCOMMENT rules below)
'COMMENT', 'CPPCOMMENT',
]
# Operators
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_MODULO = r'%'
t_OR = r'\|'
t_AND = r'&'
t_NOT = r'~'
t_XOR = r'\^'
t_LSHIFT = r'<<'
t_RSHIFT = r'>>'
t_LOR = r'\|\|'
t_LAND = r'&&'
t_LNOT = r'!'
t_LT = r'<'
t_GT = r'>'
t_LE = r'<='
t_GE = r'>='
t_EQ = r'=='
t_NE = r'!='
# Assignment operators
t_EQUALS = r'='
t_TIMESEQUAL = r'\*='
t_DIVEQUAL = r'/='
t_MODEQUAL = r'%='
t_PLUSEQUAL = r'\+='
t_MINUSEQUAL = r'-='
t_LSHIFTEQUAL = r'<<='
t_RSHIFTEQUAL = r'>>='
t_ANDEQUAL = r'&='
t_OREQUAL = r'\|='
t_XOREQUAL = r'^='
# Increment/decrement
t_INCREMENT = r'\+\+'
t_DECREMENT = r'--'
# ->
t_ARROW = r'->'
# ?
t_TERNARY = r'\?'
# Delimiters
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_LBRACKET = r'\['
t_RBRACKET = r'\]'
t_LBRACE = r'\{'
t_RBRACE = r'\}'
t_COMMA = r','
t_PERIOD = r'\.'
t_SEMI = r';'
t_COLON = r':'
t_ELLIPSIS = r'\.\.\.'
# Identifiers
t_ID = r'[A-Za-z_][A-Za-z0-9_]*'
# Integer literal
t_INTEGER = r'\d+([uU]|[lL]|[uU][lL]|[lL][uU])?'
# Floating literal
t_FLOAT = r'((\d+)(\.\d+)(e(\+|-)?(\d+))? | (\d+)e(\+|-)?(\d+))([lL]|[fF])?'
# String literal
t_STRING = r'\"([^\\\n]|(\\.))*?\"'
# Character constant 'c' or L'c'
t_CHARACTER = r'(L)?\'([^\\\n]|(\\.))*?\''
# Comment (C-Style)
def t_COMMENT(t):
r'/\*(.|\n)*?\*/'
t.lexer.lineno += t.value.count('\n')
return t
# Comment (C++-Style)
def t_CPPCOMMENT(t):
r'//.*\n'
t.lexer.lineno += 1
return t
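
Because ctokens.py only supplies token names and regular expressions, a consuming tokenizer still has to add its own whitespace and error handling. A hedged usage sketch follows; it assumes the module is importable as ply.ctokens and relies on the token-name fixes above so that every t_* rule is declared in tokens.

import ply.lex as lex
from ply.ctokens import *    # brings in tokens[] and the t_* rules

t_ignore = ' \t'             # whitespace policy is left to the caller

def t_error(t):
    print("Illegal character %r" % str(t.value[0]))
    t.lexer.skip(1)

lexer = lex.lex()
lexer.input("x += 42; /* bump */")
for tok in iter(lexer.token, None):
    print("%s %r" % (tok.type, tok.value))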

1058
bin/ply/lex.py Normal file

File diff suppressed because it is too large

3276
bin/ply/yacc.py Normal file

File diff suppressed because it is too large

187
bin/ripple/ledger/Args.py Normal file

@@ -0,0 +1,187 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import argparse
import importlib
import os
from ripple.ledger import LedgerNumber
from ripple.util import File
from ripple.util import Log
from ripple.util import PrettyPrint
from ripple.util import Range
from ripple.util.Function import Function
NAME = 'LedgerTool'
VERSION = '0.1'
NONE = '(none)'
_parser = argparse.ArgumentParser(
prog=NAME,
description='Retrieve and process Ripple ledgers.',
epilog=LedgerNumber.HELP,
)
# Positional arguments.
_parser.add_argument(
'command',
nargs='*',
help='Command to execute.'
)
# Flag arguments.
_parser.add_argument(
'--binary',
action='store_true',
help='If true, searches are binary - by default linear search is used.',
)
_parser.add_argument(
'--cache',
default='~/.local/share/ripple/ledger',
help='The cache directory.',
)
_parser.add_argument(
'--complete',
action='store_true',
help='If set, only match complete ledgers.',
)
_parser.add_argument(
'--condition', '-c',
help='The name of a condition function used to match ledgers.',
)
_parser.add_argument(
'--config',
help='The rippled configuration file name.',
)
_parser.add_argument(
'--database', '-d',
nargs='*',
default=NONE,
help='Specify a database.',
)
_parser.add_argument(
'--display',
help='Specify a function to display ledgers.',
)
_parser.add_argument(
'--full', '-f',
action='store_true',
help='If true, request full ledgers.',
)
_parser.add_argument(
'--indent', '-i',
type=int,
default=2,
help='How many spaces to indent when displaying JSON.',
)
_parser.add_argument(
'--offline', '-o',
action='store_true',
help='If true, work entirely from cache, do not try to contact the server.',
)
_parser.add_argument(
'--position', '-p',
choices=['all', 'first', 'last'],
default='last',
help='Select which ledgers to display.',
)
_parser.add_argument(
'--rippled', '-r',
help='The filename of a rippled binary for retrieving ledgers.',
)
_parser.add_argument(
'--server', '-s',
help='IP address of a rippled JSON server.',
)
_parser.add_argument(
'--utc', '-u',
action='store_true',
help='If true, display times in UTC rather than local time.',
)
_parser.add_argument(
'--validations',
default=3,
help='The number of validations needed before considering a ledger valid.',
)
_parser.add_argument(
'--version',
action='version',
version='%(prog)s ' + VERSION,
help='Print the current version of %(prog)s',
)
_parser.add_argument(
'--verbose', '-v',
action='store_true',
help='If true, give status messages on stderr.',
)
_parser.add_argument(
'--window', '-w',
type=int,
default=0,
help='How many ledgers to display around the matching ledger.',
)
_parser.add_argument(
'--yes', '-y',
action='store_true',
help='If true, don\'t ask for confirmation on large commands.',
)
# Read the arguments from the command line.
ARGS = _parser.parse_args()
ARGS.NONE = NONE
Log.VERBOSE = ARGS.verbose
# Now remove any items that look like ledger numbers from the command line.
_command = ARGS.command
_parts = (ARGS.command, ARGS.ledgers) = ([], [])
for c in _command:
_parts[Range.is_range(c, *LedgerNumber.LEDGERS)].append(c)
ARGS.command = ARGS.command or ['print' if ARGS.ledgers else 'info']
ARGS.cache = File.normalize(ARGS.cache)
if not ARGS.ledgers:
if ARGS.condition:
Log.warn('--condition needs a range of ledgers')
if ARGS.display:
Log.warn('--display needs a range of ledgers')
ARGS.condition = Function(
ARGS.condition or 'all_ledgers', 'ripple.ledger.conditions')
ARGS.display = Function(
ARGS.display or 'ledger_number', 'ripple.ledger.displays')
if ARGS.window < 0:
raise ValueError('Window cannot be negative: --window=%d' %
ARGS.window)
PrettyPrint.INDENT = (ARGS.indent * ' ')
_loaders = (ARGS.database != NONE) + bool(ARGS.rippled) + bool(ARGS.server)
if not _loaders:
ARGS.rippled = 'rippled'
elif _loaders > 1:
raise ValueError('At most one of --database, --rippled and --server '
'may be specified')
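
The bool-indexing trick above (using the result of Range.is_range() to choose between the command list and the ledger list) can be easy to miss; here is a small, hedged illustration with made-up argument values.

from ripple.util import Range

words = ['print', '32570-32580', 'last']
parts = (commands, ledgers) = ([], [])
for w in words:
    # is_range() returns False/True, which indexes (commands, ledgers) as 0/1.
    parts[Range.is_range(w, 'closed', 'current', 'first', 'last', 'validated')].append(w)
print(commands)    # ['print']
print(ledgers)     # ['32570-32580', 'last']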


@@ -0,0 +1,78 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import json
import os
import subprocess
from ripple.ledger.Args import ARGS
from ripple.util import ConfigFile
from ripple.util import Database
from ripple.util import File
from ripple.util import Log
from ripple.util import Range
LEDGER_QUERY = """
SELECT
L.*, count(1) validations
FROM
(select LedgerHash, LedgerSeq from Ledgers ORDER BY LedgerSeq DESC) L
JOIN Validations V
ON (V.LedgerHash = L.LedgerHash)
GROUP BY L.LedgerHash
HAVING validations >= {validation_quorum}
ORDER BY 2;
"""
COMPLETE_QUERY = """
SELECT
L.LedgerSeq, count(*) validations
FROM
(select LedgerHash, LedgerSeq from Ledgers ORDER BY LedgerSeq) L
JOIN Validations V
ON (V.LedgerHash = L.LedgerHash)
GROUP BY L.LedgerHash
HAVING validations >= :validation_quorum
ORDER BY 2;
"""
_DATABASE_NAME = 'ledger.db'
USE_PLACEHOLDERS = False
class DatabaseReader(object):
def __init__(self, config):
assert ARGS.database != ARGS.NONE
database = ARGS.database or config['database_path']
if not database.endswith(_DATABASE_NAME):
database = os.path.join(database, _DATABASE_NAME)
if USE_PLACEHOLDERS:
cursor = Database.fetchall(
database, COMPLETE_QUERY, config)
else:
cursor = Database.fetchall(
database, LEDGER_QUERY.format(**config), {})
self.complete = [c[1] for c in cursor]
def name_to_ledger_index(self, ledger_name, is_full=False):
if not self.complete:
return None
if ledger_name == 'closed':
return self.complete[-1]
if ledger_name == 'current':
return None
if ledger_name == 'validated':
return self.complete[-1]
def get_ledger(self, name, is_full=False):
cmd = ['ledger', str(name)]
if is_full:
cmd.append('full')
response = self._command(*cmd)
result = response.get('ledger')
if result:
return result
error = response['error']
etext = _ERROR_TEXT.get(error)
if etext:
error = '%s (%s)' % (etext, error)
Log.fatal(_ERROR_TEXT.get(error, error))


@@ -0,0 +1,18 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util import Range
FIRST_EVER = 32570
LEDGERS = {
'closed': 'the most recently closed ledger',
'current': 'the current ledger',
'first': 'the first complete ledger on this server',
'last': 'the last complete ledger on this server',
'validated': 'the most recently validated ledger',
}
HELP = """
Ledgers are either represented by a number, or one of the special ledgers;
""" + ',\n'.join('%s, %s' % (k, v) for k, v in sorted(LEDGERS.items())
)


@@ -0,0 +1,68 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import json
import os
import subprocess
from ripple.ledger.Args import ARGS
from ripple.util import File
from ripple.util import Log
from ripple.util import Range
_ERROR_CODE_REASON = {
62: 'No rippled server is running.',
}
_ERROR_TEXT = {
'lgrNotFound': 'The ledger you requested was not found.',
'noCurrent': 'The server has no current ledger.',
'noNetwork': 'The server did not respond to your request.',
}
_DEFAULT_ERROR_ = "Couldn't connect to server."
class RippledReader(object):
def __init__(self, config):
fname = File.normalize(ARGS.rippled)
if not os.path.exists(fname):
raise Exception('No rippled found at %s.' % fname)
self.cmd = [fname]
if ARGS.config:
self.cmd.extend(['--conf', File.normalize(ARGS.config)])
self.info = self._command('server_info')['info']
c = self.info.get('complete_ledgers')
if c == 'empty':
self.complete = []
else:
self.complete = sorted(Range.from_string(c))
def name_to_ledger_index(self, ledger_name, is_full=False):
return self.get_ledger(ledger_name, is_full)['ledger_index']
def get_ledger(self, name, is_full=False):
cmd = ['ledger', str(name)]
if is_full:
cmd.append('full')
response = self._command(*cmd)
result = response.get('ledger')
if result:
return result
error = response['error']
etext = _ERROR_TEXT.get(error)
if etext:
error = '%s (%s)' % (etext, error)
Log.fatal(_ERROR_TEXT.get(error, error))
def _command(self, *cmds):
cmd = self.cmd + list(cmds)
try:
data = subprocess.check_output(cmd, stderr=subprocess.PIPE)
except subprocess.CalledProcessError as e:
raise Exception(_ERROR_CODE_REASON.get(
e.returncode, _DEFAULT_ERROR_))
part = json.loads(data)
try:
return part['result']
except:
raise ValueError(part.get('error', 'unknown error'))


@@ -0,0 +1,24 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import sys
from ripple.ledger.Args import ARGS
from ripple.util import Log
from ripple.util import Range
from ripple.util import Search
def search(server):
"""Yields a stream of ledger numbers that match the given condition."""
condition = lambda number: ARGS.condition(server, number)
ledgers = server.ledgers
if ARGS.binary:
try:
position = Search.FIRST if ARGS.position == 'first' else Search.LAST
yield Search.binary_search(
ledgers[0], ledgers[-1], condition, position)
except:
Log.fatal('No ledgers matching condition "%s".' % condition,
file=sys.stderr)
else:
for x in Search.linear_search(ledgers, condition):
yield x


@@ -0,0 +1,55 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import json
import os
from ripple.ledger import DatabaseReader, RippledReader
from ripple.ledger.Args import ARGS
from ripple.util.FileCache import FileCache
from ripple.util import ConfigFile
from ripple.util import File
from ripple.util import Range
class Server(object):
def __init__(self):
cfg_file = File.normalize(ARGS.config or 'rippled.cfg')
self.config = ConfigFile.read(open(cfg_file))
if ARGS.database != ARGS.NONE:
reader = DatabaseReader.DatabaseReader(self.config)
else:
reader = RippledReader.RippledReader(self.config)
self.reader = reader
self.complete = reader.complete
names = {
'closed': reader.name_to_ledger_index('closed'),
'current': reader.name_to_ledger_index('current'),
'validated': reader.name_to_ledger_index('validated'),
'first': self.complete[0] if self.complete else None,
'last': self.complete[-1] if self.complete else None,
}
self.__dict__.update(names)
self.ledgers = sorted(Range.join_ranges(*ARGS.ledgers, **names))
def make_cache(is_full):
name = 'full' if is_full else 'summary'
filepath = os.path.join(ARGS.cache, name)
creator = lambda n: reader.get_ledger(n, is_full)
return FileCache(filepath, creator)
self._caches = [make_cache(False), make_cache(True)]
def info(self):
return self.reader.info
def cache(self, is_full):
return self._caches[is_full]
def get_ledger(self, number, is_full=False):
num = int(number)
save_in_cache = num in self.complete
can_create = (not ARGS.offline and
self.complete and
self.complete[0] <= num - 1)
cache = self.cache(is_full)
return cache.get_data(number, save_in_cache, can_create)


@@ -0,0 +1,5 @@
from __future__ import absolute_import, division, print_function, unicode_literals
class ServerReader(object):
def __init__(self, config):
raise ValueError('Direct server connections are not yet implemented.')


@@ -0,0 +1,34 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.ledger.Args import ARGS
from ripple.util import Log
from ripple.util import Range
from ripple.util.PrettyPrint import pretty_print
SAFE = True
HELP = """cache
return server_info"""
def cache(server, clear=False):
cache = server.cache(ARGS.full)
name = ['summary', 'full'][ARGS.full]
files = cache.file_count()
if not files:
Log.error('No files in %s cache.' % name)
elif clear:
if not clear.strip() == 'clear':
raise Exception("Don't understand 'clear %s'." % clear)
if not ARGS.yes:
yes = raw_input('OK to clear %s cache? (y/N) ' % name)
if not yes.lower().startswith('y'):
Log.out('Cancelled.')
return
cache.clear()
Log.out('%s cache cleared - %d file%s deleted.' %
(name.capitalize(), files, '' if files == 1 else 's'))
else:
caches = (int(c) for c in cache.cache_list())
Log.out(Range.to_string(caches))


@@ -0,0 +1,21 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.ledger.Args import ARGS
from ripple.util import Log
from ripple.util import Range
from ripple.util.PrettyPrint import pretty_print
SAFE = True
HELP = 'info - return server_info'
def info(server):
Log.out('first =', server.first)
Log.out('last =', server.last)
Log.out('closed =', server.closed)
Log.out('current =', server.current)
Log.out('validated =', server.validated)
Log.out('complete =', Range.to_string(server.complete))
if ARGS.full:
Log.out(pretty_print(server.info()))


@@ -0,0 +1,15 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.ledger.Args import ARGS
from ripple.ledger import SearchLedgers
import json
SAFE = True
HELP = """print
Print the ledgers to stdout. The default command."""
def run_print(server):
ARGS.display(print, server, SearchLedgers.search(server))


@@ -0,0 +1,4 @@
from __future__ import absolute_import, division, print_function, unicode_literals
def all_ledgers(server, ledger_number):
return True
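
all_ledgers is the default condition and accepts every ledger. Other condition functions can live alongside it and be selected with --condition; the sketch below is purely hypothetical (the has_transactions name and the 'transactions' field test are illustrative) and assumes the function is exported by ripple.ledger.conditions the same way all_ledgers is.

from __future__ import absolute_import, division, print_function, unicode_literals
def has_transactions(server, ledger_number):
    # Match only ledgers that carry at least one transaction.
    ledger = server.get_ledger(ledger_number)
    return bool(ledger and ledger.get('transactions'))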


@@ -0,0 +1,89 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from functools import wraps
import jsonpath_rw
from ripple.ledger.Args import ARGS
from ripple.util import Dict
from ripple.util import Log
from ripple.util import Range
from ripple.util.Decimal import Decimal
from ripple.util.PrettyPrint import pretty_print, Streamer
TRANSACT_FIELDS = (
'accepted',
'close_time_human',
'closed',
'ledger_index',
'total_coins',
'transactions',
)
LEDGER_FIELDS = (
'accepted',
'accountState',
'close_time_human',
'closed',
'ledger_index',
'total_coins',
'transactions',
)
def _dict_filter(d, keys):
return dict((k, v) for (k, v) in d.items() if k in keys)
def ledger_number(print, server, numbers):
print(Range.to_string(numbers))
def display(f):
@wraps(f)
def wrapper(printer, server, numbers, *args):
streamer = Streamer(printer=printer)
for number in numbers:
ledger = server.get_ledger(number, ARGS.full)
if ledger:
streamer.add(number, f(ledger, *args))
streamer.finish()
return wrapper
def extractor(f):
@wraps(f)
def wrapper(printer, server, numbers, *paths):
try:
find = jsonpath_rw.parse('|'.join(paths)).find
except:
raise ValueError("Can't understand jsonpath '%s'." % path)
def fn(ledger, *args):
return f(find(ledger), *args)
display(fn)(printer, server, numbers)
return wrapper
@display
def ledger(ledger, full=False):
if ARGS.full:
if full:
return ledger
ledger = Dict.prune(ledger, 1, False)
return _dict_filter(ledger, LEDGER_FIELDS)
@display
def prune(ledger, level=1):
return Dict.prune(ledger, level, False)
@display
def transact(ledger):
return _dict_filter(ledger, TRANSACT_FIELDS)
@extractor
def extract(finds):
return dict((str(f.full_path), str(f.value)) for f in finds)
@extractor
def sum(finds):
d = Decimal()
for f in finds:
d.accumulate(f.value)
return [str(d), len(finds)]
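
New displays follow the same @display / @extractor pattern. As a hypothetical example (the name and field choice are illustrative), a close_times display appended to this module could be selected with --display close_times:

@display
def close_times(ledger):
    # One value per ledger: its human-readable close time.
    return ledger.get('close_time_human')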

40
bin/ripple/util/Cache.py Normal file

@@ -0,0 +1,40 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from collections import defaultdict
class Cache(object):
def __init__(self):
self._value_to_index = {}
self._index_to_value = []
def value_to_index(self, value, **kwds):
index = self._value_to_index.get(value, None)
if index is None:
index = len(self._index_to_value)
self._index_to_value.append((value, kwds))
self._value_to_index[value] = index
return index
def index_to_value(self, index):
return self._index_to_value[index]
def NamedCache():
return defaultdict(Cache)
def cache_by_key(d, keyfunc=None, exclude=None):
cache = defaultdict(Cache)
exclude = exclude or None
keyfunc = keyfunc or (lambda x: x)
def visit(item):
if isinstance(item, list):
for i, x in enumerate(item):
item[i] = visit(x)
elif isinstance(item, dict):
for k, v in item.items():
item[k] = visit(v)
return item
return cache
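
A small usage sketch of the Cache interner (the values are arbitrary):

from ripple.util.Cache import Cache

c = Cache()
print(c.value_to_index('alpha'))    # 0  (first time this value is seen)
print(c.value_to_index('beta'))     # 1
print(c.value_to_index('alpha'))    # 0  (repeated values reuse their index)
print(c.index_to_value(1))          # ('beta', {})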


@@ -0,0 +1,77 @@
from __future__ import absolute_import, division, print_function, unicode_literals
# Code taken from github/rec/grit.
import os
import sys
from collections import namedtuple
from ripple.ledger.Args import ARGS
from ripple.util import Log
Command = namedtuple('Command', 'function help safe')
def make_command(module):
name = module.__name__.split('.')[-1].lower()
return name, Command(getattr(module, name, None) or
getattr(module, 'run_' + name),
getattr(module, 'HELP'),
getattr(module, 'SAFE', False))
class CommandList(object):
def __init__(self, *args, **kwds):
self.registry = {}
self.register(*args, **kwds)
def register(self, *modules, **kwds):
for module in modules:
name, command = make_command(module)
self.registry[name] = command
for k, v in kwds.items():
if not isinstance(v, (list, tuple)):
v = [v]
self.register_one(k, *v)
def keys(self):
return self.registry.keys()
def register_one(self, name, function, help='', safe=False):
assert name not in self.registry
self.registry[name] = Command(function, help, safe)
def _get(self, command):
command = command.lower()
c = self.registry.get(command)
if c:
return command, c
commands = [c for c in self.registry if c.startswith(command)]
if len(commands) == 1:
command = commands[0]
return command, self.registry[command]
if not commands:
raise ValueError('No such command: %s. Commands are %s.' %
(command, ', '.join(sorted(self.registry))))
if len(commands) > 1:
raise ValueError('Command %s was ambiguous: %s.' %
(command, ', '.join(commands)))
def get(self, command):
return self._get(command)[1]
def run(self, command, *args):
return self.get(command).function(*args)
def run_safe(self, command, *args):
name, cmd = self._get(command)
if not (ARGS.yes or cmd.safe):
confirm = raw_input('OK to execute "rl %s %s"? (y/N) ' %
(name, ' '.join(args)))
if not confirm.lower().startswith('y'):
Log.error('Cancelled.')
return
cmd.function(*args)
def help(self, command):
return self.get(command).help()


@@ -0,0 +1,54 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import json
"""Ripple has a proprietary format for their .cfg files, so we need a reader for
them."""
def read(lines):
sections = []
section = []
for line in lines:
line = line.strip()
if (not line) or line[0] == '#':
continue
if line.startswith('['):
if section:
sections.append(section)
section = []
section.append(line)
if section:
sections.append(section)
result = {}
for section in sections:
option = section.pop(0)
assert section, ('No value for option "%s".' % option)
assert option.startswith('[') and option.endswith(']'), (
'No option name in block "%s"' % option)
option = option[1:-1]
assert option not in result, 'Duplicate option "%s".' % option
subdict = {}
items = []
for part in section:
if '=' in part:
assert not items, 'Dictionary mixed with list.'
k, v = part.split('=', 1)
assert k not in subdict, 'Repeated dictionary entry ' + k
subdict[k] = v
else:
assert not subdict, 'List mixed with dictionary.'
if part.startswith('{'):
items.append(json.loads(part))
else:
words = part.split()
if len(words) > 1:
items.append(words)
else:
items.append(part)
if len(items) == 1:
result[option] = items[0]
else:
result[option] = items or subdict
return result
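
A compact sketch of what read() produces; the sample mirrors the shape of a rippled.cfg with illustrative values (single-value sections become strings, multi-word lines become lists, and JSON blocks become dicts):

from ripple.util import ConfigFile

sample = """
[node_size]
medium

[ips]
r.ripple.com 51235

[rpc_startup]
{ "command": "log_level", "severity": "debug" }
""".splitlines()

print(ConfigFile.read(sample))
# => {'node_size': 'medium',
#     'ips': ['r.ripple.com', '51235'],
#     'rpc_startup': {'command': 'log_level', 'severity': 'debug'}}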


@@ -0,0 +1,12 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import sqlite3
def fetchall(database, query, kwds):
conn = sqlite3.connect(database)
try:
cursor = conn.execute(query, kwds)
return cursor.fetchall()
finally:
conn.close()
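
fetchall() is a thin wrapper over sqlite3; a hedged sketch of the named-parameter form that COMPLETE_QUERY above relies on (the database path is illustrative):

from ripple.util import Database

rows = Database.fetchall(
    '/development/alpha/db/ledger.db',   # illustrative path
    'SELECT LedgerSeq FROM Ledgers WHERE LedgerSeq >= :first ORDER BY LedgerSeq',
    {'first': 32570})
for (ledger_seq,) in rows:
    print(ledger_seq)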


@@ -0,0 +1,46 @@
from __future__ import absolute_import, division, print_function, unicode_literals
"""Fixed point numbers."""
POSITIONS = 10
POSITIONS_SHIFT = 10 ** POSITIONS
class Decimal(object):
def __init__(self, desc='0'):
if isinstance(desc, int):
self.value = desc
return
if desc.startswith('-'):
sign = -1
desc = desc[1:]
else:
sign = 1
parts = desc.split('.')
if len(parts) == 1:
parts.append('0')
elif len(parts) > 2:
raise Exception('Too many decimals in "%s"' % desc)
number, decimal = parts
# Fix the number of positions.
decimal = (decimal + POSITIONS * '0')[:POSITIONS]
self.value = sign * int(number + decimal)
def accumulate(self, item):
if not isinstance(item, Decimal):
item = Decimal(item)
self.value += item.value
def __str__(self):
if self.value >= 0:
sign = ''
value = self.value
else:
sign = '-'
value = -self.value
number = value // POSITIONS_SHIFT
# Zero-pad the fractional digits so leading zeros survive (e.g. "0.05").
decimal = value % POSITIONS_SHIFT
if decimal:
return '%s%s.%s' % (sign, number, str(decimal).zfill(POSITIONS).rstrip('0'))
else:
return '%s%s' % (sign, number)

33
bin/ripple/util/Dict.py Normal file

@@ -0,0 +1,33 @@
from __future__ import absolute_import, division, print_function, unicode_literals
def count_all_subitems(x):
"""Count the subitems of a Python object, including the object itself."""
if isinstance(x, list):
return 1 + sum(count_all_subitems(i) for i in x)
if isinstance(x, dict):
return 1 + sum(count_all_subitems(i) for i in x.itervalues())
return 1
def prune(item, level, count_recursively=True):
def subitems(x):
i = count_all_subitems(x) - 1 if count_recursively else len(x)
return '1 subitem' if i == 1 else '%d subitems' % i
assert level >= 0
if not item:
return item
if isinstance(item, list):
if level:
return [prune(i, level - 1, count_recursively) for i in item]
else:
return '[list with %s]' % subitems(item)
if isinstance(item, dict):
if level:
return dict((k, prune(v, level - 1, count_recursively))
for k, v in item.iteritems())
else:
return '{dict with %s}' % subitems(item)
return item

7
bin/ripple/util/File.py Normal file

@@ -0,0 +1,7 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import os
def normalize(f):
f = os.path.join(*f.split('/')) # For Windows users.
return os.path.abspath(os.path.expanduser(f))


@@ -0,0 +1,56 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import gzip
import json
import os
_NONE = object()
class FileCache(object):
"""A two-level cache, which stores expensive results in memory and on disk.
"""
def __init__(self, cache_directory, creator, open=gzip.open, suffix='.gz'):
self.cache_directory = cache_directory
self.creator = creator
self.open = open
self.suffix = suffix
self.cached_data = {}
if not os.path.exists(self.cache_directory):
os.makedirs(self.cache_directory)
def get_file_data(self, name):
filename = os.path.join(self.cache_directory, str(name)) + self.suffix
if os.path.exists(filename):
return json.load(self.open(filename))
result = self.creator(name)
return result
def get_data(self, name, save_in_cache, can_create, default=None):
name = str(name)
result = self.cached_data.get(name, _NONE)
if result is _NONE:
filename = os.path.join(self.cache_directory, name) + self.suffix
if os.path.exists(filename):
result = json.load(self.open(filename)) or _NONE
if result is _NONE and can_create:
result = self.creator(name)
if save_in_cache:
json.dump(result, self.open(filename, 'w'))
return default if result is _NONE else result
def _files(self):
return os.listdir(self.cache_directory)
def cache_list(self):
for f in self._files():
if f.endswith(self.suffix):
yield f[:-len(self.suffix)]
def file_count(self):
return len(self._files())
def clear(self):
"""Clears both local files and memory."""
self.cached_data = {}
for f in self._files():
os.remove(os.path.join(self.cache_directory, f))
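
A minimal sketch of the two-level cache; the creator function and directory are illustrative (in LedgerTool the creator calls reader.get_ledger()):

from ripple.util.FileCache import FileCache

def fetch(name):
    # Stand-in for an expensive lookup such as reader.get_ledger(name).
    return {'ledger_index': int(name)}

cache = FileCache('/tmp/ledger-cache/summary', fetch)
data = cache.get_data(32570, save_in_cache=True, can_create=True)
print(data)   # created via fetch() and written to 32570.gz for next time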


@@ -0,0 +1,82 @@
from __future__ import absolute_import, division, print_function, unicode_literals
"""A function that can be specified at the command line, with an argument."""
import importlib
import re
import tokenize
from StringIO import StringIO
MATCHER = re.compile(r'([\w.]+)(.*)')
REMAPPINGS = {
'false': False,
'true': True,
'null': None,
'False': False,
'True': True,
'None': None,
}
def eval_arguments(args):
args = args.strip()
if not args or (args == '()'):
return ()
tokens = list(tokenize.generate_tokens(StringIO(args).readline))
def remap():
for type, name, _, _, _ in tokens:
if type == tokenize.NAME and name not in REMAPPINGS:
yield tokenize.STRING, '"%s"' % name
else:
yield type, name
untok = tokenize.untokenize(remap())
if untok[1:-1].strip():
untok = untok[:-1] + ',)' # Force a tuple.
try:
return eval(untok, REMAPPINGS)
except Exception as e:
raise ValueError('Couldn\'t evaluate expression "%s" (became "%s"), '
'error "%s"' % (args, untok, str(e)))
class Function(object):
def __init__(self, desc='', default_path=''):
self.desc = desc.strip()
if not self.desc:
# Make an empty function that does nothing.
self.args = ()
self.function = lambda *args, **kwds: None
return
m = MATCHER.match(desc)
if not m:
raise ValueError('"%s" is not a function' % desc)
self.function, self.args = (g.strip() for g in m.groups())
self.args = eval_arguments(self.args)
if '.' not in self.function:
if default_path and not default_path.endswith('.'):
default_path += '.'
self.function = default_path + self.function
p, m = self.function.rsplit('.', 1)
mod = importlib.import_module(p)
# Errors in modules are swallowed here.
# except:
# raise ValueError('Can\'t find Python module "%s"' % p)
try:
self.function = getattr(mod, m)
except:
raise ValueError('No function "%s" in module "%s"' % (m, p))
def __str__(self):
return self.desc
def __call__(self, *args, **kwds):
return self.function(*(args + self.args), **kwds)
def __eq__(self, other):
return self.function == other.function and self.args == other.args
def __ne__(self, other):
return not (self == other)

21
bin/ripple/util/Log.py Normal file

@@ -0,0 +1,21 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import sys
VERBOSE = False
def out(*args, **kwds):
printer = kwds.pop('print', print)   # don't forward 'print' itself as a keyword
printer(*args, file=sys.stdout, **kwds)
def info(*args, **kwds):
if VERBOSE:
out(*args, **kwds)
def warn(*args, **kwds):
out('WARNING:', *args, **kwds)
def error(*args, **kwds):
out('ERROR:', *args, **kwds)
def fatal(*args, **kwds):
raise Exception('FATAL: ' + ' '.join(str(a) for a in args))


@@ -0,0 +1,42 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from functools import wraps
import json
SEPARATORS = ',', ': '
INDENT = ' '
def pretty_print(item):
return json.dumps(item,
sort_keys=True,
indent=len(INDENT),
separators=SEPARATORS)
class Streamer(object):
def __init__(self, printer=print):
# No automatic spacing or carriage returns.
self.printer = lambda *args: printer(*args, end='', sep='')
self.first_key = True
def add(self, key, value):
if self.first_key:
self.first_key = False
self.printer('{')
else:
self.printer(',')
self.printer('\n', INDENT, '"', str(key), '": ')
pp = pretty_print(value).splitlines()
if len(pp) > 1:
for i, line in enumerate(pp):
if i > 0:
self.printer('\n', INDENT)
self.printer(line)
else:
self.printer(pp[0])
def finish(self):
if not self.first_key:
self.first_key = True
self.printer('\n}')

53
bin/ripple/util/Range.py Normal file

@@ -0,0 +1,53 @@
from __future__ import absolute_import, division, print_function, unicode_literals
"""
Convert a discontiguous range of integers to and from a human-friendly form.
Real world example is the server_info.complete_ledgers:
8252899-8403772,8403824,8403827-8403830,8403834-8403876
"""
def from_string(desc, **aliases):
if not desc:
return []
result = set()
for d in desc.split(','):
nums = [int(aliases.get(x) or x) for x in d.split('-')]
if len(nums) == 1:
result.add(nums[0])
elif len(nums) == 2:
result.update(range(nums[0], nums[1] + 1))
return result
def to_string(r):
groups = []
next_group = []
for i, x in enumerate(sorted(r)):
if next_group and (x - next_group[-1]) > 1:
groups.append(next_group)
next_group = []
next_group.append(x)
if next_group:
groups.append(next_group)
def display(g):
if len(g) == 1:
return str(g[0])
else:
return '%s-%s' % (g[0], g[-1])
return ','.join(display(g) for g in groups)
def is_range(desc, *names):
try:
from_string(desc, **dict((n, 1) for n in names))
return True
except ValueError:
return False
def join_ranges(*ranges, **aliases):
result = set()
for r in ranges:
result.update(from_string(r, **aliases))
return result

46
bin/ripple/util/Search.py Normal file

@@ -0,0 +1,46 @@
from __future__ import absolute_import, division, print_function, unicode_literals
FIRST, LAST = range(2)
def binary_search(begin, end, condition, location=FIRST):
"""Search for an i in the interval [begin, end] where condition(i) is true.
If location is FIRST, return the first such i.
If location is LAST, return the last such i.
If there is no such i, then throw an exception.
"""
b = condition(begin)
e = condition(end)
if b and e:
return begin if location == FIRST else end
if not (b or e):
raise ValueError('%d/%d' % (begin, end))
if b and location is FIRST:
return begin
if e and location is LAST:
return end
width = end - begin + 1
if width == 1:
if not b:
raise ValueError('%d/%d' % (begin, end))
return begin
if width == 2:
return begin if b else end
mid = (begin + end) // 2
m = condition(mid)
if m == b:
return binary_search(mid, end, condition, location)
else:
return binary_search(begin, mid, condition, location)
def linear_search(items, condition):
"""Yields each i in the interval [begin, end] where condition(i) is true.
"""
for i in items:
if condition(i):
yield i

21
bin/ripple/util/Time.py Normal file

@@ -0,0 +1,21 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import datetime
# Format for human-readable dates in rippled
_DATE_FORMAT = '%Y-%b-%d'
_TIME_FORMAT = '%H:%M:%S'
_DATETIME_FORMAT = '%s %s' % (_DATE_FORMAT, _TIME_FORMAT)
_FORMATS = _DATE_FORMAT, _TIME_FORMAT, _DATETIME_FORMAT
def parse_datetime(desc):
for fmt in _FORMATS:
try:
return datetime.datetime.strptime(desc, fmt)
except ValueError:
pass
raise ValueError("Can't understand date '%s'." % desc)
def format_datetime(dt):
return dt.strftime(_DATETIME_FORMAT)
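
A quick sketch of the round trip, using the strptime fix above (the timestamp is arbitrary and a C/English locale is assumed for the %b month abbreviation):

from ripple.util import Time

dt = Time.parse_datetime('2015-Mar-01 12:34:56')
print(Time.format_datetime(dt))    # 2015-Mar-01 12:34:56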


@@ -0,0 +1,12 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util.Cache import NamedCache
from unittest import TestCase
class test_Cache(TestCase):
def setUp(self):
self.cache = NamedCache()
def test_trivial(self):
pass


@@ -0,0 +1,163 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util import ConfigFile
from unittest import TestCase
class test_ConfigFile(TestCase):
def test_trivial(self):
self.assertEquals(ConfigFile.read(''), {})
def test_full(self):
self.assertEquals(ConfigFile.read(FULL.splitlines()), RESULT)
RESULT = {
'websocket_port': '6206',
'database_path': '/development/alpha/db',
'sntp_servers':
['time.windows.com', 'time.apple.com', 'time.nist.gov', 'pool.ntp.org'],
'validation_seed': 'sh1T8T9yGuV7Jb6DPhqSzdU2s5LcV',
'node_size': 'medium',
'rpc_startup': {
'command': 'log_level',
'severity': 'debug'},
'ips': ['r.ripple.com', '51235'],
'node_db': {
'file_size_mult': '2',
'file_size_mb': '8',
'cache_mb': '256',
'path': '/development/alpha/db/rocksdb',
'open_files': '2000',
'type': 'RocksDB',
'filter_bits': '12'},
'peer_port': '53235',
'ledger_history': 'full',
'rpc_ip': '127.0.0.1',
'websocket_public_ip': '0.0.0.0',
'rpc_allow_remote': '0',
'validators':
[['n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7', 'RL1'],
['n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj', 'RL2'],
['n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C', 'RL3'],
['n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS', 'RL4'],
['n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA', 'RL5']],
'debug_logfile': '/development/alpha/debug.log',
'websocket_public_port': '5206',
'peer_ip': '0.0.0.0',
'rpc_port': '5205',
'validation_quorum': '3',
'websocket_ip': '127.0.0.1'}
FULL = """
[ledger_history]
full
# Allow other peers to connect to this server.
#
[peer_ip]
0.0.0.0
[peer_port]
53235
# Allow untrusted clients to connect to this server.
#
[websocket_public_ip]
0.0.0.0
[websocket_public_port]
5206
# Provide trusted websocket ADMIN access to the localhost.
#
[websocket_ip]
127.0.0.1
[websocket_port]
6206
# Provide trusted json-rpc ADMIN access to the localhost.
#
[rpc_ip]
127.0.0.1
[rpc_port]
5205
[rpc_allow_remote]
0
[node_size]
medium
# This is primary persistent datastore for rippled. This includes transaction
# metadata, account states, and ledger headers. Helpful information can be
# found here: https://ripple.com/wiki/NodeBackEnd
[node_db]
type=RocksDB
path=/development/alpha/db/rocksdb
open_files=2000
filter_bits=12
cache_mb=256
file_size_mb=8
file_size_mult=2
[database_path]
/development/alpha/db
# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
/development/alpha/debug.log
[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org
# Where to find some other servers speaking the Ripple protocol.
#
[ips]
r.ripple.com 51235
# The latest validators can be obtained from
# https://ripple.com/ripple.txt
#
[validators]
n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7 RL1
n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj RL2
n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C RL3
n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS RL4
n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA RL5
# Ditto.
[validation_quorum]
3
[validation_seed]
sh1T8T9yGuV7Jb6DPhqSzdU2s5LcV
# Turn down default logging to save disk space in the long run.
# Valid values here are trace, debug, info, warning, error, and fatal
[rpc_startup]
{ "command": "log_level", "severity": "debug" }
# Configure SSL for WebSockets. Not enabled by default because not everybody
# has an SSL cert on their server, but if you uncomment the following lines and
# set the path to the SSL certificate and private key the WebSockets protocol
# will be protected by SSL/TLS.
#[websocket_secure]
#1
#[websocket_ssl_cert]
#/etc/ssl/certs/server.crt
#[websocket_ssl_key]
#/etc/ssl/private/server.key
# Defaults to 0 ("no") so that you can use self-signed SSL certificates for
# development, or internally.
#[ssl_verify]
#0
""".strip()


@@ -0,0 +1,20 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util.Decimal import Decimal
from unittest import TestCase
class test_Decimal(TestCase):
def test_construct(self):
self.assertEquals(str(Decimal('')), '0')
self.assertEquals(str(Decimal('0')), '0')
self.assertEquals(str(Decimal('0.2')), '0.2')
self.assertEquals(str(Decimal('-0.2')), '-0.2')
self.assertEquals(str(Decimal('3.1416')), '3.1416')
def test_accumulate(self):
d = Decimal()
d.accumulate('0.5')
d.accumulate('3.1416')
d.accumulate('-23.34234')
self.assertEquals(str(d), '-19.70074')


@@ -0,0 +1,56 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util import Dict
from unittest import TestCase
class test_Dict(TestCase):
def test_count_all_subitems(self):
self.assertEquals(Dict.count_all_subitems({}), 1)
self.assertEquals(Dict.count_all_subitems({'a': {}}), 2)
self.assertEquals(Dict.count_all_subitems([1]), 2)
self.assertEquals(Dict.count_all_subitems([1, 2]), 3)
self.assertEquals(Dict.count_all_subitems([1, {2: 3}]), 4)
self.assertEquals(Dict.count_all_subitems([1, {2: [3]}]), 5)
self.assertEquals(Dict.count_all_subitems([1, {2: [3, 4]}]), 6)
def test_prune(self):
self.assertEquals(Dict.prune({}, 0), {})
self.assertEquals(Dict.prune({}, 1), {})
self.assertEquals(Dict.prune({1: 2}, 0), '{dict with 1 subitem}')
self.assertEquals(Dict.prune({1: 2}, 1), {1: 2})
self.assertEquals(Dict.prune({1: 2}, 2), {1: 2})
self.assertEquals(Dict.prune([1, 2, 3], 0), '[list with 3 subitems]')
self.assertEquals(Dict.prune([1, 2, 3], 1), [1, 2, 3])
self.assertEquals(Dict.prune([{1: [2, 3]}], 0),
'[list with 4 subitems]')
self.assertEquals(Dict.prune([{1: [2, 3]}], 1),
['{dict with 3 subitems}'])
self.assertEquals(Dict.prune([{1: [2, 3]}], 2),
[{1: u'[list with 2 subitems]'}])
self.assertEquals(Dict.prune([{1: [2, 3]}], 3),
[{1: [2, 3]}])
def test_prune_nosub(self):
self.assertEquals(Dict.prune({}, 0, False), {})
self.assertEquals(Dict.prune({}, 1, False), {})
self.assertEquals(Dict.prune({1: 2}, 0, False), '{dict with 1 subitem}')
self.assertEquals(Dict.prune({1: 2}, 1, False), {1: 2})
self.assertEquals(Dict.prune({1: 2}, 2, False), {1: 2})
self.assertEquals(Dict.prune([1, 2, 3], 0, False),
'[list with 3 subitems]')
self.assertEquals(Dict.prune([1, 2, 3], 1, False), [1, 2, 3])
self.assertEquals(Dict.prune([{1: [2, 3]}], 0, False),
'[list with 1 subitem]')
self.assertEquals(Dict.prune([{1: [2, 3]}], 1, False),
['{dict with 1 subitem}'])
self.assertEquals(Dict.prune([{1: [2, 3]}], 2, False),
[{1: u'[list with 2 subitems]'}])
self.assertEquals(Dict.prune([{1: [2, 3]}], 3, False),
[{1: [2, 3]}])


@@ -0,0 +1,37 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util.Function import Function, MATCHER
from unittest import TestCase
def FN(*args, **kwds):
return args, kwds
class test_Function(TestCase):
def match_test(self, item, *results):
self.assertEquals(MATCHER.match(item).groups(), results)
def test_simple(self):
self.match_test('function', 'function', '')
self.match_test('f(x)', 'f', '(x)')
def test_empty_function(self):
self.assertEquals(Function()(), None)
def test_empty_args(self):
f = Function('ripple.util.test_Function.FN()')
self.assertEquals(f(), ((), {}))
def test_function(self):
f = Function('ripple.util.test_Function.FN(True, {1: 2}, None)')
self.assertEquals(f(), ((True, {1: 2}, None), {}))
self.assertEquals(f('hello', foo='bar'),
(('hello', True, {1: 2}, None), {'foo':'bar'}))
self.assertEquals(
f, Function('ripple.util.test_Function.FN(true, {1: 2}, null)'))
def test_quoting(self):
f = Function('ripple.util.test_Function.FN(testing)')
self.assertEquals(f(), (('testing',), {}))
f = Function('ripple.util.test_Function.FN(testing, true, false, null)')
self.assertEquals(f(), (('testing', True, False, None), {}))


@@ -0,0 +1,56 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util import PrettyPrint
from unittest import TestCase
class test_PrettyPrint(TestCase):
def setUp(self):
self._results = []
self.printer = PrettyPrint.Streamer(printer=self.printer)
def printer(self, *args, **kwds):
self._results.extend(args)
def run_test(self, expected, *args):
for i in range(0, len(args), 2):
self.printer.add(args[i], args[i + 1])
self.printer.finish()
self.assertEquals(''.join(self._results), expected)
def test_simple_printer(self):
self.run_test(
'{\n "foo": "bar"\n}',
'foo', 'bar')
def test_multiple_lines(self):
self.run_test(
'{\n "foo": "bar",\n "baz": 5\n}',
'foo', 'bar', 'baz', 5)
def test_multiple_lines_with_dict(self):
self.run_test(
"""
{
"foo": {
"bar": 1,
"baz": true
},
"bang": "bing"
}
""".strip(), 'foo', {'bar': 1, 'baz': True}, 'bang', 'bing')
def test_multiple_lines_with_list(self):
self.run_test(
"""
{
"foo": [
"bar",
1
],
"baz": [
23,
42
]
}
""".strip(), 'foo', ['bar', 1], 'baz', [23, 42])


@@ -0,0 +1,28 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util import Range
from unittest import TestCase
class test_Range(TestCase):
def round_trip(self, s, *items):
self.assertEquals(Range.from_string(s), set(items))
self.assertEquals(Range.to_string(items), s)
def test_complete(self):
self.round_trip('10,19', 10, 19)
self.round_trip('10', 10)
self.round_trip('10-12', 10, 11, 12)
self.round_trip('10,19,42-45', 10, 19, 42, 43, 44, 45)
def test_names(self):
self.assertEquals(
Range.from_string('first,last,current', first=1, last=3, current=5),
set([1, 3, 5]))
def test_is_range(self):
self.assertTrue(Range.is_range(''))
self.assertTrue(Range.is_range('10'))
self.assertTrue(Range.is_range('10,12'))
self.assertFalse(Range.is_range('10,12,fred'))
self.assertTrue(Range.is_range('10,12,fred', 'fred'))


@@ -0,0 +1,44 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from ripple.util.Search import binary_search, linear_search, FIRST, LAST
from unittest import TestCase
class test_Search(TestCase):
def condition(self, i):
return 10 <= i < 15
def test_linear_full(self):
self.assertEquals(list(linear_search(range(21), self.condition)),
[10, 11, 12, 13, 14])
def test_linear_partial(self):
self.assertEquals(list(linear_search(range(8, 14), self.condition)),
[10, 11, 12, 13])
self.assertEquals(list(linear_search(range(11, 14), self.condition)),
[11, 12, 13])
self.assertEquals(list(linear_search(range(12, 18), self.condition)),
[12, 13, 14])
def test_linear_empty(self):
self.assertEquals(list(linear_search(range(1, 4), self.condition)), [])
def test_binary_first(self):
self.assertEquals(binary_search(0, 14, self.condition, FIRST), 10)
self.assertEquals(binary_search(10, 19, self.condition, FIRST), 10)
self.assertEquals(binary_search(14, 14, self.condition, FIRST), 14)
self.assertEquals(binary_search(14, 15, self.condition, FIRST), 14)
self.assertEquals(binary_search(13, 15, self.condition, FIRST), 13)
def test_binary_last(self):
self.assertEquals(binary_search(10, 20, self.condition, LAST), 14)
self.assertEquals(binary_search(0, 14, self.condition, LAST), 14)
self.assertEquals(binary_search(14, 14, self.condition, LAST), 14)
self.assertEquals(binary_search(14, 15, self.condition, LAST), 14)
self.assertEquals(binary_search(13, 15, self.condition, LAST), 14)
def test_binary_throws(self):
self.assertRaises(
ValueError, binary_search, 0, 20, self.condition, LAST)
self.assertRaises(
ValueError, binary_search, 0, 20, self.condition, FIRST)

747
bin/six.py Normal file

@@ -0,0 +1,747 @@
"""Utilities for writing code that runs on Python 2 and 3"""
# Copyright (c) 2010-2014 Benjamin Peterson
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import functools
import operator
import sys
import types
__author__ = "Benjamin Peterson <benjamin@python.org>"
__version__ = "1.7.3"
# Useful for very coarse version differentiation.
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
if PY3:
string_types = str,
integer_types = int,
class_types = type,
text_type = str
binary_type = bytes
MAXSIZE = sys.maxsize
else:
string_types = basestring,
integer_types = (int, long)
class_types = (type, types.ClassType)
text_type = unicode
binary_type = str
if sys.platform.startswith("java"):
# Jython always uses 32 bits.
MAXSIZE = int((1 << 31) - 1)
else:
# It's possible to have sizeof(long) != sizeof(Py_ssize_t).
class X(object):
def __len__(self):
return 1 << 31
try:
len(X())
except OverflowError:
# 32-bit
MAXSIZE = int((1 << 31) - 1)
else:
# 64-bit
MAXSIZE = int((1 << 63) - 1)
del X
def _add_doc(func, doc):
"""Add documentation to a function."""
func.__doc__ = doc
def _import_module(name):
"""Import module, returning the module after the last dot."""
__import__(name)
return sys.modules[name]
class _LazyDescr(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, tp):
result = self._resolve()
setattr(obj, self.name, result) # Invokes __set__.
# This is a bit ugly, but it avoids running this again.
delattr(obj.__class__, self.name)
return result
class MovedModule(_LazyDescr):
def __init__(self, name, old, new=None):
super(MovedModule, self).__init__(name)
if PY3:
if new is None:
new = name
self.mod = new
else:
self.mod = old
def _resolve(self):
return _import_module(self.mod)
def __getattr__(self, attr):
_module = self._resolve()
value = getattr(_module, attr)
setattr(self, attr, value)
return value
class _LazyModule(types.ModuleType):
def __init__(self, name):
super(_LazyModule, self).__init__(name)
self.__doc__ = self.__class__.__doc__
def __dir__(self):
attrs = ["__doc__", "__name__"]
attrs += [attr.name for attr in self._moved_attributes]
return attrs
# Subclasses should override this
_moved_attributes = []
class MovedAttribute(_LazyDescr):
def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None):
super(MovedAttribute, self).__init__(name)
if PY3:
if new_mod is None:
new_mod = name
self.mod = new_mod
if new_attr is None:
if old_attr is None:
new_attr = name
else:
new_attr = old_attr
self.attr = new_attr
else:
self.mod = old_mod
if old_attr is None:
old_attr = name
self.attr = old_attr
def _resolve(self):
module = _import_module(self.mod)
return getattr(module, self.attr)
class _SixMetaPathImporter(object):
"""
A meta path importer to import six.moves and its submodules.
This class implements a PEP302 finder and loader. It should be compatible
with Python 2.5 and all existing versions of Python3
"""
def __init__(self, six_module_name):
self.name = six_module_name
self.known_modules = {}
def _add_module(self, mod, *fullnames):
for fullname in fullnames:
self.known_modules[self.name + "." + fullname] = mod
def _get_module(self, fullname):
return self.known_modules[self.name + "." + fullname]
def find_module(self, fullname, path=None):
if fullname in self.known_modules:
return self
return None
def __get_module(self, fullname):
try:
return self.known_modules[fullname]
except KeyError:
raise ImportError("This loader does not know module " + fullname)
def load_module(self, fullname):
try:
# in case of a reload
return sys.modules[fullname]
except KeyError:
pass
mod = self.__get_module(fullname)
if isinstance(mod, MovedModule):
mod = mod._resolve()
else:
mod.__loader__ = self
sys.modules[fullname] = mod
return mod
def is_package(self, fullname):
"""
Return true, if the named module is a package.
We need this method to get correct spec objects with
Python 3.4 (see PEP451)
"""
return hasattr(self.__get_module(fullname), "__path__")
def get_code(self, fullname):
"""Return None
Required, if is_package is implemented"""
self.__get_module(fullname) # eventually raises ImportError
return None
get_source = get_code # same as get_code
_importer = _SixMetaPathImporter(__name__)
class _MovedItems(_LazyModule):
"""Lazy loading of moved objects"""
__path__ = [] # mark as package
_moved_attributes = [
MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"),
MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"),
MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"),
MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"),
MovedAttribute("map", "itertools", "builtins", "imap", "map"),
MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("reload_module", "__builtin__", "imp", "reload"),
MovedAttribute("reduce", "__builtin__", "functools"),
MovedAttribute("StringIO", "StringIO", "io"),
MovedAttribute("UserDict", "UserDict", "collections"),
MovedAttribute("UserList", "UserList", "collections"),
MovedAttribute("UserString", "UserString", "collections"),
MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("zip", "itertools", "builtins", "izip", "zip"),
MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"),
MovedModule("builtins", "__builtin__"),
MovedModule("configparser", "ConfigParser"),
MovedModule("copyreg", "copy_reg"),
MovedModule("dbm_gnu", "gdbm", "dbm.gnu"),
MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"),
MovedModule("http_cookiejar", "cookielib", "http.cookiejar"),
MovedModule("http_cookies", "Cookie", "http.cookies"),
MovedModule("html_entities", "htmlentitydefs", "html.entities"),
MovedModule("html_parser", "HTMLParser", "html.parser"),
MovedModule("http_client", "httplib", "http.client"),
MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"),
MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"),
MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"),
MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"),
MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"),
MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"),
MovedModule("cPickle", "cPickle", "pickle"),
MovedModule("queue", "Queue"),
MovedModule("reprlib", "repr"),
MovedModule("socketserver", "SocketServer"),
MovedModule("_thread", "thread", "_thread"),
MovedModule("tkinter", "Tkinter"),
MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"),
MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"),
MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"),
MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"),
MovedModule("tkinter_tix", "Tix", "tkinter.tix"),
MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"),
MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"),
MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"),
MovedModule("tkinter_colorchooser", "tkColorChooser",
"tkinter.colorchooser"),
MovedModule("tkinter_commondialog", "tkCommonDialog",
"tkinter.commondialog"),
MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"),
MovedModule("tkinter_font", "tkFont", "tkinter.font"),
MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"),
MovedModule("tkinter_tksimpledialog", "tkSimpleDialog",
"tkinter.simpledialog"),
MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"),
MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"),
MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"),
MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"),
MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"),
MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"),
MovedModule("winreg", "_winreg"),
]
for attr in _moved_attributes:
setattr(_MovedItems, attr.name, attr)
if isinstance(attr, MovedModule):
_importer._add_module(attr, "moves." + attr.name)
del attr
_MovedItems._moved_attributes = _moved_attributes
moves = _MovedItems(__name__ + ".moves")
_importer._add_module(moves, "moves")
class Module_six_moves_urllib_parse(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_parse"""
_urllib_parse_moved_attributes = [
MovedAttribute("ParseResult", "urlparse", "urllib.parse"),
MovedAttribute("SplitResult", "urlparse", "urllib.parse"),
MovedAttribute("parse_qs", "urlparse", "urllib.parse"),
MovedAttribute("parse_qsl", "urlparse", "urllib.parse"),
MovedAttribute("urldefrag", "urlparse", "urllib.parse"),
MovedAttribute("urljoin", "urlparse", "urllib.parse"),
MovedAttribute("urlparse", "urlparse", "urllib.parse"),
MovedAttribute("urlsplit", "urlparse", "urllib.parse"),
MovedAttribute("urlunparse", "urlparse", "urllib.parse"),
MovedAttribute("urlunsplit", "urlparse", "urllib.parse"),
MovedAttribute("quote", "urllib", "urllib.parse"),
MovedAttribute("quote_plus", "urllib", "urllib.parse"),
MovedAttribute("unquote", "urllib", "urllib.parse"),
MovedAttribute("unquote_plus", "urllib", "urllib.parse"),
MovedAttribute("urlencode", "urllib", "urllib.parse"),
MovedAttribute("splitquery", "urllib", "urllib.parse"),
]
for attr in _urllib_parse_moved_attributes:
setattr(Module_six_moves_urllib_parse, attr.name, attr)
del attr
Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes
_importer._add_module(Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"),
"moves.urllib_parse", "moves.urllib.parse")
class Module_six_moves_urllib_error(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_error"""
_urllib_error_moved_attributes = [
MovedAttribute("URLError", "urllib2", "urllib.error"),
MovedAttribute("HTTPError", "urllib2", "urllib.error"),
MovedAttribute("ContentTooShortError", "urllib", "urllib.error"),
]
for attr in _urllib_error_moved_attributes:
setattr(Module_six_moves_urllib_error, attr.name, attr)
del attr
Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes
_importer._add_module(Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"),
"moves.urllib_error", "moves.urllib.error")
class Module_six_moves_urllib_request(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_request"""
_urllib_request_moved_attributes = [
MovedAttribute("urlopen", "urllib2", "urllib.request"),
MovedAttribute("install_opener", "urllib2", "urllib.request"),
MovedAttribute("build_opener", "urllib2", "urllib.request"),
MovedAttribute("pathname2url", "urllib", "urllib.request"),
MovedAttribute("url2pathname", "urllib", "urllib.request"),
MovedAttribute("getproxies", "urllib", "urllib.request"),
MovedAttribute("Request", "urllib2", "urllib.request"),
MovedAttribute("OpenerDirector", "urllib2", "urllib.request"),
MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"),
MovedAttribute("ProxyHandler", "urllib2", "urllib.request"),
MovedAttribute("BaseHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"),
MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"),
MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"),
MovedAttribute("FileHandler", "urllib2", "urllib.request"),
MovedAttribute("FTPHandler", "urllib2", "urllib.request"),
MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"),
MovedAttribute("UnknownHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"),
MovedAttribute("urlretrieve", "urllib", "urllib.request"),
MovedAttribute("urlcleanup", "urllib", "urllib.request"),
MovedAttribute("URLopener", "urllib", "urllib.request"),
MovedAttribute("FancyURLopener", "urllib", "urllib.request"),
MovedAttribute("proxy_bypass", "urllib", "urllib.request"),
]
for attr in _urllib_request_moved_attributes:
setattr(Module_six_moves_urllib_request, attr.name, attr)
del attr
Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes
_importer._add_module(Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"),
"moves.urllib_request", "moves.urllib.request")
class Module_six_moves_urllib_response(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_response"""
_urllib_response_moved_attributes = [
MovedAttribute("addbase", "urllib", "urllib.response"),
MovedAttribute("addclosehook", "urllib", "urllib.response"),
MovedAttribute("addinfo", "urllib", "urllib.response"),
MovedAttribute("addinfourl", "urllib", "urllib.response"),
]
for attr in _urllib_response_moved_attributes:
setattr(Module_six_moves_urllib_response, attr.name, attr)
del attr
Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes
_importer._add_module(Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"),
"moves.urllib_response", "moves.urllib.response")
class Module_six_moves_urllib_robotparser(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_robotparser"""
_urllib_robotparser_moved_attributes = [
MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"),
]
for attr in _urllib_robotparser_moved_attributes:
setattr(Module_six_moves_urllib_robotparser, attr.name, attr)
del attr
Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes
_importer._add_module(Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"),
"moves.urllib_robotparser", "moves.urllib.robotparser")
class Module_six_moves_urllib(types.ModuleType):
"""Create a six.moves.urllib namespace that resembles the Python 3 namespace"""
__path__ = [] # mark as package
parse = _importer._get_module("moves.urllib_parse")
error = _importer._get_module("moves.urllib_error")
request = _importer._get_module("moves.urllib_request")
response = _importer._get_module("moves.urllib_response")
robotparser = _importer._get_module("moves.urllib_robotparser")
def __dir__(self):
return ['parse', 'error', 'request', 'response', 'robotparser']
_importer._add_module(Module_six_moves_urllib(__name__ + ".moves.urllib"),
"moves.urllib")
def add_move(move):
"""Add an item to six.moves."""
setattr(_MovedItems, move.name, move)
def remove_move(name):
"""Remove item from six.moves."""
try:
delattr(_MovedItems, name)
except AttributeError:
try:
del moves.__dict__[name]
except KeyError:
raise AttributeError("no such move, %r" % (name,))
if PY3:
_meth_func = "__func__"
_meth_self = "__self__"
_func_closure = "__closure__"
_func_code = "__code__"
_func_defaults = "__defaults__"
_func_globals = "__globals__"
else:
_meth_func = "im_func"
_meth_self = "im_self"
_func_closure = "func_closure"
_func_code = "func_code"
_func_defaults = "func_defaults"
_func_globals = "func_globals"
try:
advance_iterator = next
except NameError:
def advance_iterator(it):
return it.next()
next = advance_iterator
try:
callable = callable
except NameError:
def callable(obj):
return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)
if PY3:
def get_unbound_function(unbound):
return unbound
create_bound_method = types.MethodType
Iterator = object
else:
def get_unbound_function(unbound):
return unbound.im_func
def create_bound_method(func, obj):
return types.MethodType(func, obj, obj.__class__)
class Iterator(object):
def next(self):
return type(self).__next__(self)
callable = callable
_add_doc(get_unbound_function,
"""Get the function out of a possibly unbound function""")
get_method_function = operator.attrgetter(_meth_func)
get_method_self = operator.attrgetter(_meth_self)
get_function_closure = operator.attrgetter(_func_closure)
get_function_code = operator.attrgetter(_func_code)
get_function_defaults = operator.attrgetter(_func_defaults)
get_function_globals = operator.attrgetter(_func_globals)
if PY3:
def iterkeys(d, **kw):
return iter(d.keys(**kw))
def itervalues(d, **kw):
return iter(d.values(**kw))
def iteritems(d, **kw):
return iter(d.items(**kw))
def iterlists(d, **kw):
return iter(d.lists(**kw))
else:
def iterkeys(d, **kw):
return iter(d.iterkeys(**kw))
def itervalues(d, **kw):
return iter(d.itervalues(**kw))
def iteritems(d, **kw):
return iter(d.iteritems(**kw))
def iterlists(d, **kw):
return iter(d.iterlists(**kw))
_add_doc(iterkeys, "Return an iterator over the keys of a dictionary.")
_add_doc(itervalues, "Return an iterator over the values of a dictionary.")
_add_doc(iteritems,
"Return an iterator over the (key, value) pairs of a dictionary.")
_add_doc(iterlists,
"Return an iterator over the (key, [values]) pairs of a dictionary.")
if PY3:
def b(s):
return s.encode("latin-1")
def u(s):
return s
unichr = chr
if sys.version_info[1] <= 1:
def int2byte(i):
return bytes((i,))
else:
# This is about 2x faster than the implementation above on 3.2+
int2byte = operator.methodcaller("to_bytes", 1, "big")
byte2int = operator.itemgetter(0)
indexbytes = operator.getitem
iterbytes = iter
import io
StringIO = io.StringIO
BytesIO = io.BytesIO
else:
def b(s):
return s
# Workaround for standalone backslash
def u(s):
return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape")
unichr = unichr
int2byte = chr
def byte2int(bs):
return ord(bs[0])
def indexbytes(buf, i):
return ord(buf[i])
def iterbytes(buf):
return (ord(byte) for byte in buf)
import StringIO
StringIO = BytesIO = StringIO.StringIO
_add_doc(b, """Byte literal""")
_add_doc(u, """Text literal""")
if PY3:
exec_ = getattr(moves.builtins, "exec")
def reraise(tp, value, tb=None):
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
raise value
else:
def exec_(_code_, _globs_=None, _locs_=None):
"""Execute code in a namespace."""
if _globs_ is None:
frame = sys._getframe(1)
_globs_ = frame.f_globals
if _locs_ is None:
_locs_ = frame.f_locals
del frame
elif _locs_ is None:
_locs_ = _globs_
exec("""exec _code_ in _globs_, _locs_""")
exec_("""def reraise(tp, value, tb=None):
raise tp, value, tb
""")
print_ = getattr(moves.builtins, "print", None)
if print_ is None:
def print_(*args, **kwargs):
"""The new-style print function for Python 2.4 and 2.5."""
fp = kwargs.pop("file", sys.stdout)
if fp is None:
return
def write(data):
if not isinstance(data, basestring):
data = str(data)
# If the file has an encoding, encode unicode with it.
if (isinstance(fp, file) and
isinstance(data, unicode) and
fp.encoding is not None):
errors = getattr(fp, "errors", None)
if errors is None:
errors = "strict"
data = data.encode(fp.encoding, errors)
fp.write(data)
want_unicode = False
sep = kwargs.pop("sep", None)
if sep is not None:
if isinstance(sep, unicode):
want_unicode = True
elif not isinstance(sep, str):
raise TypeError("sep must be None or a string")
end = kwargs.pop("end", None)
if end is not None:
if isinstance(end, unicode):
want_unicode = True
elif not isinstance(end, str):
raise TypeError("end must be None or a string")
if kwargs:
raise TypeError("invalid keyword arguments to print()")
if not want_unicode:
for arg in args:
if isinstance(arg, unicode):
want_unicode = True
break
if want_unicode:
newline = unicode("\n")
space = unicode(" ")
else:
newline = "\n"
space = " "
if sep is None:
sep = space
if end is None:
end = newline
for i, arg in enumerate(args):
if i:
write(sep)
write(arg)
write(end)
_add_doc(reraise, """Reraise an exception.""")
if sys.version_info[0:2] < (3, 4):
def wraps(wrapped):
def wrapper(f):
f = functools.wraps(wrapped)(f)
f.__wrapped__ = wrapped
return f
return wrapper
else:
wraps = functools.wraps
def with_metaclass(meta, *bases):
"""Create a base class with a metaclass."""
# This requires a bit of explanation: the basic idea is to make a dummy
# metaclass for one level of class instantiation that replaces itself with
# the actual metaclass.
class metaclass(meta):
def __new__(cls, name, this_bases, d):
return meta(name, bases, d)
return type.__new__(metaclass, 'temporary_class', (), {})
def add_metaclass(metaclass):
"""Class decorator for creating a class with a metaclass."""
def wrapper(cls):
orig_vars = cls.__dict__.copy()
orig_vars.pop('__dict__', None)
orig_vars.pop('__weakref__', None)
slots = orig_vars.get('__slots__')
if slots is not None:
if isinstance(slots, str):
slots = [slots]
for slots_var in slots:
orig_vars.pop(slots_var)
return metaclass(cls.__name__, cls.__bases__, orig_vars)
return wrapper
# Complete the moves implementation.
# This code is at the end of this module to speed up module loading.
# Turn this module into a package.
__path__ = [] # required for PEP 302 and PEP 451
__package__ = __name__ # see PEP 366 @ReservedAssignment
if globals().get("__spec__") is not None:
__spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable
# Remove other six meta path importers, since they cause problems. This can
# happen if six is removed from sys.modules and then reloaded. (Setuptools does
# this for some reason.)
if sys.meta_path:
for i, importer in enumerate(sys.meta_path):
# Here's some real nastiness: Another "instance" of the six module might
# be floating around. Therefore, we can't use isinstance() to check for
# the six meta path importer, since the other six instance will have
# inserted an importer with different class.
if (type(importer).__name__ == "_SixMetaPathImporter" and
importer.name == __name__):
del sys.meta_path[i]
break
del i, importer
# Finally, add the importer to the meta path import hook.
sys.meta_path.append(_importer)

133
bin/stop-test.js Normal file

@@ -0,0 +1,133 @@
/* -------------------------------- REQUIRES -------------------------------- */
var child = require("child_process");
var assert = require("assert");
/* --------------------------------- CONFIG --------------------------------- */
if (process.argv[2] == null) {
[
'Usage: ',
'',
' `node bin/stop-test.js i,j [rippled_path] [rippled_conf]`',
'',
' Launch rippled and stop it after n seconds for all n in [i, j)',
' For all even values of n launch rippled with `--fg`',
' For values of n where n % 3 == 0 launch rippled with `--net`\n',
'Examples: ',
'',
' $ node bin/stop-test.js 5,10',
(' $ node bin/stop-test.js 1,4 ' +
'build/clang.debug/rippled $HOME/.confs/rippled.cfg')
]
.forEach(function(l){console.log(l)});
process.exit();
} else {
var testRange = process.argv[2].split(',').map(Number);
var rippledPath = process.argv[3] || 'build/rippled'
var rippledConf = process.argv[4] || 'rippled.cfg'
}
var options = {
env: process.env,
stdio: 'ignore' // we could dump the child io when it fails abnormally
};
// default args
var conf_args = ['--conf='+rippledConf];
var start_args = conf_args.concat([/*'--net'*/])
var stop_args = conf_args.concat(['stop']);
/* --------------------------------- HELPERS -------------------------------- */
function start(args) {
return child.spawn(rippledPath, args, options);
}
function stop(rippled) { child.execFile(rippledPath, stop_args, options)}
function secs_l8r(ms, f) {setTimeout(f, ms * 1000); }
function show_results_and_exit(results) {
console.log(JSON.stringify(results, undefined, 2));
process.exit();
}
var timeTakes = function (range) {
function sumRange(n) {return (n+1) * n /2}
var ret = sumRange(range[1]);
if (range[0] > 1) {
ret = ret - sumRange(range[0] - 1)
}
var stopping = (range[1] - range[0]) * 0.5;
return ret + stopping;
}
/* ---------------------------------- TEST ---------------------------------- */
console.log("Test will take ~%s seconds", timeTakes(testRange));
(function oneTest(n /* seconds */, results) {
if (n >= testRange[1]) {
// show_results_and_exit(results);
console.log(JSON.stringify(results, undefined, 2));
oneTest(testRange[0], []);
return;
}
var args = start_args;
if (n % 2 == 0) {args = args.concat(['--fg'])}
if (n % 3 == 0) {args = args.concat(['--net'])}
var result = {args: args, alive_for: n};
results.push(result);
console.log("\nLaunching `%s` with `%s` for %d seconds",
rippledPath, JSON.stringify(args), n);
rippled = start(args);
console.log("Rippled pid: %d", rippled.pid);
// defaults
var b4StopSent = false;
var stopSent = false;
var stop_took = null;
rippled.once('exit', function(){
if (!stopSent && !b4StopSent) {
console.warn('\nRippled exited itself b4 stop issued');
process.exit();
};
// The io handles close AFTER exit, may have implications for
// `stdio:'inherit'` option to `child.spawn`.
rippled.once('close', function() {
result.stop_took = (+new Date() - stop_took) / 1000; // seconds
console.log("Stopping after %d seconds took %s seconds",
n, result.stop_took);
oneTest(n+1, results);
});
});
secs_l8r(n, function(){
console.log("Stopping rippled after %d seconds", n);
// possible race here ?
// seems highly unlikely, but I was having issues at one point
b4StopSent=true;
stop_took = (+new Date());
// when does `exit` actually get sent?
stop();
stopSent=true;
// Sometimes we want to attach with a debugger.
if (process.env.ABORT_TESTS_ON_STALL != null) {
// We wait 30 seconds, and if it hasn't stopped, we abort the process
secs_l8r(30, function() {
if (result.stop_took == null) {
console.log("rippled has stalled");
process.exit();
};
});
}
})
}(testRange[0], []));


@@ -2,14 +2,38 @@
# Coding Standards
Coding standards used here are extremely strict and consistent. The style
evolved gradually over the years, incorporating generally acknowledged
best-practice C++ advice, experience, and personal preference.
Coding standards used here gradually evolve and propagate through
code reviews. Some aspects are enforced more strictly than others.
## Don't Repeat Yourself!
## Rules
The [Don't Repeat Yourself][1] principle summarises the essence of what it
means to write good code, in all languages, at all levels.
These rules only apply to our own code. We can't enforce any sort of
style on the external repositories and libraries we include. The best
guideline is to maintain the standards that are used in those libraries.
* Tab inserts 4 spaces. No tab characters.
* Braces are indented in the [Allman style][1].
* Modern C++ principles. No naked ```new``` or ```delete```.
* Line lengths limited to 80 characters. Exceptions limited to data and tables.
## Guidelines
If you want to do something contrary to these guidelines, understand
why you're doing it. Think, use common sense, and consider that your
changes will probably need to be maintained long after you've
moved on to other projects.
* Use white space and blank lines to guide the eye and keep your intent clear.
* Put private data members at the top of a class, and the 6 public special
members immediately after, in the following order (a sketch follows this list):
* Destructor
* Default constructor
* Copy constructor
* Copy assignment
* Move constructor
* Move assignment
* Don't over-inline by defining large functions within the class
declaration, not even for template classes.
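
A minimal sketch of the member ordering described above, using a hypothetical
`Thing` class (the name and members are illustrative only):

    class Thing
    {
    private:
        int m_value;                            // private data members first

    public:
        ~Thing ();                              // destructor
        Thing ();                               // default constructor
        Thing (Thing const& other);             // copy constructor
        Thing& operator= (Thing const& other);  // copy assignment
        Thing (Thing&& other);                  // move constructor
        Thing& operator= (Thing&& other);       // move assignment
    };
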
## Formatting
@@ -17,9 +41,6 @@ The goal of source code formatting should always be to make things as easy to
read as possible. White space is used to guide the eye so that details are not
overlooked. Blank lines are used to separate code into "paragraphs."
* No tab characters please.
* Tab stops are set to 4 spaces.
* Braces are indented in the [Allman style][2].
* Always place a space before and after all binary operators,
especially assignments (`operator=`).
* The `!` operator should always be followed by a space.
@@ -62,156 +83,4 @@ overlooked. Blank lines are used to separate code into "paragraphs."
* Always place a space in between the template angle brackets and the type
name. Template code is already hard enough to read!
## Naming conventions
* Member variables and method names are written with camel-case, and never
begin with a capital letter.
* Class names are also written in camel-case, but always begin with a capital
letter.
* For global variables... well, you shouldn't have any, so it doesn't matter.
* Class data members begin with `m_`, static data members begin with `s_`.
Global variables begin with `g_`. This is so the scope of the corresponding
declaration can be easily determined (a short sketch follows this list).
* Avoid underscores in your names, especially leading or trailing underscores.
In particular, leading underscores should be avoided, as these are often used
in standard library code, so to use them in your own code looks quite jarring.
* If you really have to write a macro for some reason, then make it all caps,
with underscores to separate the words. And obviously make sure that its name
is unlikely to clash with symbols used in other libraries or 3rd party code.
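
A minimal sketch of those naming prefixes (the names themselves are
hypothetical):

    class Widget
    {
    private:
        int m_count;                 // class data member
        static int s_instanceCount;  // static data member
    };

    int g_logLevel = 0;              // global variable (avoid where possible)
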
## Types, const-correctness
* If a method can (and should!) be const, make it const!
* If a method definitely doesn't throw an exception (be careful!), mark it as
`noexcept`
* When returning a temporary object, e.g. a String, the returned object should
be non-const, so that if the class has a C++11 move operator, it can be used.
* If a local variable can be const, then make it const!
* Remember that pointers can be const as well as primitives; for example, if
you have a `char*` whose contents are going to be altered, you may still be
able to make the pointer itself const, e.g. `char* const foobar = getFoobar();`.
* Do not declare all your local variables at the top of a function or method
(i.e. in the old-fashioned C-style). Declare them at the last possible moment,
and give them as small a scope as possible.
* Object parameters should be passed as `const&` wherever possible. Only
pass a parameter as a copy-by-value object if you really need to mutate
a local copy inside the method, and if making a local copy inside the method
would be difficult.
* Use portable `for()` loop variable scoping (i.e. do not have multiple for
loops in the same scope that each re-declare the same variable name, as
this fails on older compilers)
* When you're testing a pointer to see if it's null, never write
`if (myPointer)`. Always avoid that implicit cast-to-bool by writing it more
fully: `if (myPointer != nullptr)`. And likewise, never ever write
`if (! myPointer)`, instead always write `if (myPointer == nullptr)`.
It is more readable that way.
* Avoid C-style casts except when converting between primitive numeric types.
Some people would say "avoid C-style casts altogether", but `static_cast` is
a bit unreadable when you just want to cast an `int` to a `float`. But
whenever a pointer is involved, or a non-primitive object, always use
`static_cast`. And when you're reinterpreting data, always use
`reinterpret_cast`.
* Until C++ gets a universal 64-bit primitive type (part of the C++11
standard), it's best to stick to the `int64` and `uint64` typedefs.
## Object lifetime and ownership
* Absolutely do NOT use `delete`, `deleteAndZero`, etc. There are very very few
situations where you can't use a `ScopedPointer` or some other automatic
lifetime management class.
* Do not use `new` unless there's no alternative. Whenever you type `new`, always
treat it as a failure to find a better solution. If a local variable can be
allocated on the stack rather than the heap, then always do so.
* Do not ever use `new` or `malloc` to allocate a C++ array. Always use a
`HeapBlock` instead.
* And just to make it doubly clear: Never use `malloc` or `calloc`.
* If a parent object needs to create and own some kind of child object, always
use composition as your first choice. If that's not possible (e.g. if the
child needs a pointer to the parent for its constructor), then use a
`ScopedPointer`.
* If possible, pass an object as a reference rather than a pointer. If possible,
make it a `const` reference.
* Obviously avoid static and global values. Sometimes there's no alternative,
but if there is an alternative, then use it, no matter how much effort it
involves.
* If allocating a local POD structure (e.g. an operating-system structure in
native code), and you need to initialise it with zeros, use the `= { 0 };`
syntax as your first choice for doing this. If for some reason that's not
appropriate, use the `zerostruct()` function, or in case that isn't suitable,
use `zeromem()`. Don't use `memset()`.
## Classes
* Declare a class's public section first, and put its constructors and
destructor first. Any protected items come next, and then private ones.
* Use the most restrictive access-specifier possible for each member. Prefer
`private` over `protected`, and `protected` over `public`. Don't expose
things unnecessarily.
* Preferred positioning for any inherited classes is to put them to the right
of the class name, vertically aligned, e.g.:
class Thing : public Foo,
private Bar
{
}
* Put a class's member variables (which should almost always be private, of course),
after all the public and protected method declarations.
* Any private methods can go towards the end of the class, after the member
variables.
* If your class does not have copy-by-value semantics, derive the class from
`Uncopyable`.
* If your class is likely to be leaked, then derive your class from
`LeakChecked<>`.
* Constructors that take a single parameter should by default be marked
`explicit`. Obviously there are cases where you do want implicit conversion,
but always think about it carefully before writing a non-explicit constructor.
* Do not use `NULL`, `null`, or 0 for a null-pointer. And especially never use
'0L', which is particularly burdensome. Use `nullptr` instead - this is the
C++2011 standard, so get used to it. There's a fallback definition for `nullptr`
in Beast, so it's always possible to use it even if your compiler isn't yet
C++2011 compliant.
* All the C++ 'guru' books and articles are full of excellent and detailed advice
on when it's best to use inheritance vs composition. If you're not already
familiar with the received wisdom in these matters, then do some reading!
## Miscellaneous
* `goto` statements should not be used at all, even if the alternative is
more verbose code. The only exception is when implementing an algorithm in
a function as a state machine.
* Don't use macros! OK, obviously there are many situations where they're the
right tool for the job, but treat them as a last resort. Certainly don't ever
use a macro just to hold a constant value or to perform any kind of function
that could have been done as a real inline function. And it goes without saying
that you should give them names which aren't going to clash with other code.
And `#undef` them after you've used them, if possible.
* When using the `++` or `--` operators, never use post-increment if
pre-increment could be used instead. Although it doesn't matter for
primitive types, it's good practice to pre-increment since this can be
much more efficient for more complex objects. In particular, if you're
writing a for loop, always use pre-increment,
e.g. `for (int i = 0; i < 10; ++i)`
* Never put an "else" statement after a "return"! This is well-explained in the
LLVM coding standards...and a couple of other very good pieces of advice from
the LLVM standards are in there as well.
* When getting a possibly-null pointer and using it only if it's non-null, limit
the scope of the pointer as much as possible - e.g. Do NOT do this:
Foo* f = getFoo ();
if (f != nullptr)
f->doSomething ();
// other code
f->doSomething (); // oops! f may be null!
..instead, prefer to write it like this, which reduces the scope of the
pointer, making it impossible to write code that accidentally uses a null
pointer:
if (Foo* f = getFoo ())
f->doSomethingElse ();
// f is out-of-scope here, so impossible to use it if it's null
(This also results in smaller, cleaner code)
[1]: http://en.wikipedia.org/wiki/Don%27t_repeat_yourself
[2]: http://en.wikipedia.org/wiki/Indent_style#Allman_style
[1]: http://en.wikipedia.org/wiki/Indent_style#Allman_style

File diff suppressed because it is too large

63
doc/HeapProfiling.md Normal file

@@ -0,0 +1,63 @@
## Heap profiling of rippled with jemalloc
The jemalloc library provides a good API for doing heap analysis,
including a mechanism to dump a description of the heap from within the
running application via a function call. Details on how to perform this
activity in general, as well as how to acquire the software, are available on
the jemalloc site:
[https://github.com/jemalloc/jemalloc/wiki/Use-Case:-Heap-Profiling](https://github.com/jemalloc/jemalloc/wiki/Use-Case:-Heap-Profiling)
jemalloc is acquired separately from rippled, and is not affiliated
with Ripple Labs. If you compile and install jemalloc from the
source release with default options, it will install the library and header
under `/usr/local/lib` and `/usr/local/include`, respectively. Heap
profiling has been tested with rippled on a Linux platform. It should
work on platforms on which both rippled and jemalloc are available.
To link rippled with jemalloc, the argument
`profile-jemalloc=<jemalloc_dir>` is provided after the optional target.
The `<jemalloc_dir>` argument should be the same as that of the
`--prefix` parameter passed to the jemalloc configure script when building.
## Examples:
Build rippled with jemalloc library under /usr/local/lib and
header under /usr/local/include:
$ scons profile-jemalloc=/usr/local
Build rippled using clang with the jemalloc library under /opt/local/lib
and header under /opt/local/include:
$ scons clang profile-jemalloc=/opt/local
----------------------
## Using the jemalloc library from within the code
The `profile-jemalloc` parameter enables a macro definition called
`PROFILE_JEMALLOC`. Include the jemalloc header file as
well as the api call(s) that you wish to make within preprocessor
conditional groups, such as:
In global scope:
#ifdef PROFILE_JEMALLOC
#include <jemalloc/jemalloc.h>
#endif
And later, within a function scope:
#ifdef PROFILE_JEMALLOC
mallctl("prof.dump", NULL, NULL, NULL, 0);
#endif
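
Putting both snippets together, here is a minimal sketch. It assumes the build
was configured with `profile-jemalloc=<jemalloc_dir>` (so `PROFILE_JEMALLOC` is
defined) and that profiling is enabled at run time, for example via
`MALLOC_CONF=prof:true`; the helper name is illustrative only:

    #ifdef PROFILE_JEMALLOC
    #include <jemalloc/jemalloc.h>
    #endif

    // Ask jemalloc to write a heap profile to disk. When profiling support
    // is compiled out, this function does nothing.
    static void dumpHeapProfile ()
    {
    #ifdef PROFILE_JEMALLOC
        mallctl ("prof.dump", NULL, NULL, NULL, 0);
    #endif
    }
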
Fuller descriptions of how to acquire and use jemalloc's api to do memory
analysis are available at the [jemalloc
site.](http://www.canonware.com/jemalloc/)
Linking against the jemalloc library will override
the system's default `malloc()` and related functions with jemalloc's
implementation. This is the case even if the code is not instrumented
to use jemalloc's specific API.


@@ -6,33 +6,35 @@
#
# Contents
#
# 1. Peer Networking
# 1. Server
#
# 2. Websocket Networking
# 2. Peer Protocol
#
# 3. RPC Networking
# 3. SMS Gateway
#
# 4. SMS Gateway
# 4. Ripple Protocol
#
# 5. Ripple Protocol
# 5. HTTPS Client
#
# 6. HTTPS Client
# 6. Database
#
# 7. Database
# 7. Diagnostics
#
# 8. Diagnostics
# 8. Voting
#
# 9. Example Settings
#
#-------------------------------------------------------------------------------
#
# Purpose
#
# This file documents and provides examples of all rippled server process
# configuration options. When the rippled server instance is lanched, it looks
# for a file with the following name:
# configuration options. When the rippled server instance is launched, it
# looks for a file with the following name:
#
# rippled.cfg
#
# For more information on where the rippled serer instance searches for
# For more information on where the rippled server instance searches for
# the file please visit the Ripple wiki. Specifically, the section explaining
# the --conf command line option:
#
@@ -42,19 +44,235 @@
# or Mac style end of lines. Blank lines and lines beginning with '#' are
# ignored. Undefined sections are reserved. No escapes are currently defined.
#
# Notation
#
# In this document a simple BNF notation is used. Angle brackets denote
# required elements, square brackets denote optional elements, and single
# quotes indicate string literals. A vertical bar separating 1 or more
# elements is a logical "or"; Any one of the elements may be chosen.
# Parenthesis are notational only, and used to group elements, they are not
# part of the syntax unless they appear in quotes. White space may always
# appear between elements, it has no effect on values.
#
# <key> A required identifier
# '=' The equals sign character
# | Logical "or"
# ( ) Used for grouping
#
#
# An identifier is a string of upper or lower case letters, digits, or
# underscores subject to the requirement that the first character of an
# identifier must be a letter. Identifiers are not case sensitive (but
# values may be).
#
# Some configuration sections contain key/value pairs. A line containing
# a key/value pair has this syntax:
#
# <identifier> '=' <value>
#
# Depending on the section and key, different value types are possible:
#
# <integer> A signed integer
# <unsigned> An unsigned integer
# <flag> A boolean. 1 = true/yes/on, 0 = false/no/off.
#
# Consult the documentation on the key in question to determine the possible
# value types.
#
#
#
#-------------------------------------------------------------------------------
#
# 1. Peer Networking
# 1. Server
#
#-------------------
#----------
#
#
#
# rippled offers various server protocols to clients making inbound
# connections. The listening ports rippled uses are "universal" ports
# which may be configured to handshake in one or more of the available
# supported protocols. These universal ports simplify administration:
# A single open port can be used for multiple protocols.
#
# NOTE At least one server port must be defined in order
# to accept incoming network connections.
#
#
# [server]
#
# A list of port names and key/value pairs. A port name must start with a
# letter and contain only letters and numbers. The name is not case-sensitive.
# For each name in this list, rippled will look for a configuration file
# section with the same name and use it to create a listening port. The
# name is informational only; the choice of name does not affect the function
# of the listening port.
#
# Key/value pairs specified in this section are optional, and apply to all
# listening ports unless the port overrides the value in its section. They
# may be considered default values.
#
# Suggestion:
#
# To avoid a conflict with port names and future configuration sections,
# we recommend prepending "port_" to the port name. This prefix is not
# required, but suggested.
#
# This example defines two ports with different port numbers and settings:
#
# [server]
# port_public
# port_private
# port = 80
#
# [port_public]
# ip=0.0.0.0
# port = 443
# protocol=peer,https
#
# [port_private]
# ip=127.0.0.1
# protocol=http
#
# When rippled is used as a command line client (for example, issuing a
# server stop command), the first port advertising the http or https
# protocol will be used to make the connection.
#
#
#
# [<name>]
#
# A series of key/value pairs that define the settings for the port with
# the corresponding name. These keys are possible:
#
# ip = <IP-address>
#
# Required. Determines the IP address of the network interface to bind
# to. To bind to all available interfaces, use 0.0.0.0
#
# port = <number>
#
# Required. Sets the port number to use for this port.
#
# protocol = [ http, https, peer ]
#
# Required. A comma-separated list of protocols to support:
#
# http JSON-RPC over HTTP
# https JSON-RPC over HTTPS
# ws Websockets
# wss Secure Websockets
# peer Peer Protocol
#
# Restrictions:
#
# Only one port may be configured to support the peer protocol.
# A port cannot have websocket and non websocket protocols at the
# same time. It is possible to have both Websockets and Secure Websockets
# together in one port.
#
# NOTE If no ports support the peer protocol, rippled cannot
# receive incoming peer connections or become a superpeer.
#
# user = <text>
# password = <text>
#
# When set, these credentials will be required on HTTP/S requests.
# The credentials must be provided using HTTP's Basic Authentication
# headers. If either or both fields are empty, then no credentials are
# required. IP address restrictions, if any, will be checked in addition
# to the credentials specified here.
#
# When acting in the client role, rippled will supply these credentials
# using HTTP's Basic Authentication headers when making outbound HTTP/S
# requests.
#
# admin = no | allow
#
# Controls whether or not administrative commands are allowed. These
# commands may be issued over http, https, ws, or wss if configured
# on the port. If unspecified, the default is to not allow
# administrative commands.
#
# admin_user = <text>
# admin_password = <text>
#
# When set, clients must provide these credentials in the submitted
# JSON for any administrative command requests submitted to the HTTP/S,
# WS, or WSS protocol interfaces. If administrative commands are
# disabled for a port, these credentials have no effect.
#
# When acting in the client role, rippled will supply these credentials
# in the submitted JSON for any administrative command requests when
# invoking JSON-RPC commands on remote servers.
#
# ssl_key = <filename>
# ssl_cert = <filename>
# ssl_chain = <filename>
#
# Use the specified files when configuring SSL on the port.
#
# NOTE If no files are specified and secure protocols are selected,
# rippled will generate an internal self-signed certificate.
#
# The files have these meanings:
#
# ssl_key
#
# Specifies the filename holding the SSL key in PEM format.
#
# ssl_cert
#
# Specifies the path to the SSL certificate file in PEM format.
# This is not needed if the chain includes it.
#
# ssl_chain
#
# If you need a certificate chain, specify the path to the
# certificate chain here. The chain may include the end certificate.
#
#
#
# [rpc_admin_allow]
#
# Specify a list of IP addresses allowed to have admin access. One per line.
# If you want to test the output of non-admin commands add this section and
# just put an ip address not under your control.
# Defaults to 127.0.0.1.
#
#
#
# [rpc_startup]
#
# Specify a list of RPC commands to run at startup.
#
# Examples:
# { "command" : "server_info" }
# { "command" : "log_level", "partition" : "ripplecalc", "severity" : "trace" }
#
#
#
# [websocket_ping_frequency]
#
# <number>
#
# The amount of time to wait in seconds, before sending a websocket 'ping'
# message. Ping messages are used to determine if the remote end of the
# connection is no longer available.
#
#
#
#-------------------------------------------------------------------------------
#
# 2. Peer Protocol
#
#-----------------
#
# These settings control security and access attributes of the Peer to Peer
# server section of the rippled process. Peer Networking implements the
# server section of the rippled process. Peer Protocol implements the
# Ripple Payment protocol. It is over peer connections that transactions
# and validations are passed from machine to machine, to make up the
# components of closed ledgers.
# and validations are passed from machine to machine, to determine the
# contents of validated ledgers.
#
#
#
@@ -93,40 +311,6 @@
#
#
#
# [peer_ip]
#
# IP address or domain to bind to allow external connections from peers.
# Defaults to not binding, which disallows external connections from peers.
#
# Examples: 0.0.0.0 - Bind on all interfaces.
#
#
#
# [peer_port]
#
# If peer_ip is supplied, corresponding port to bind to for peer connections.
#
#
#
# [peer_port_proxy]
#
# An optional, additional listening port number for peers. Incoming
# connections on this port will be required to provide a PROXY Protocol
# handshake, described in this document (external link):
#
# http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt
#
# The PROXY Protocol is a popular method used by elastic load balancing
# service providers such as Amazon, to identify the true IP address and
# port number of external incoming connections.
#
# In addition to enabling this setting, it will also be required to
# use your provider-specific control panel or administrative web page
# to configure your server instance to receive PROXY Protocol handshakes,
# and also to restrict access to your instance to the Elastic Load Balancer.
#
#
#
# [peer_private]
#
# 0 or 1.
@@ -136,7 +320,7 @@
#
#
#
# [max_peers]
# [peers_max]
#
# The largest number of desired peer connections (incoming or outgoing).
# Cluster and fixed peers do not count towards this total. There are
@@ -145,19 +329,6 @@
#
#
#
# [peer_ssl_cipher_list]
#
# A colon delimited string with the allowed SSL cipher modes for peer. The
# choices for ciphers are defined by the OpenSSL API function
# SSL_CTX_set_cipher_list, documented here (external link):
#
# http://pic.dhe.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=%2Fcom.ibm.ztpf-ztpfdf.doc_put.cur%2Fgtpc2%2Fcpp_ssl_ctx_set_cipher_list.html
#
# The default setting is "ALL:!LOW:!EXP:!MD5:@STRENGTH", which allows
# non-authenticated peer connections (they are, however, secure).
#
#
#
# [node_seed]
#
# This is used for clustering. To force a particular node seed or key, the
@@ -191,243 +362,49 @@
#
#
#
#-------------------------------------------------------------------------------
# [overlay] EXPERIMENTAL
#
# 2. Websocket Networking
# This section is EXPERIMENTAL, and should not be
# present for production configuration settings.
#
#------------------------
# A set of key/value pair parameters to configure the overlay.
#
# These settings control security and access attributes of the Websocket
# server section of the rippled process, primarily used to service
# client requests and backend applications.
# auto_connect = 0 | 1
#
# When set, activates the autoconnect feature. This maintains outgoing
# connections using PeerFinder's "Outgoing Connection Strategy."
#
# http_handshake = 0 | 1
#
# [websocket_public_ip]
# When set, outgoing peer connections will handshake using an HTTP
# request instead of the legacy TMHello protocol buffers message.
# Incoming peer connections have their handshakes detected automatically.
#
# IP address or domain to bind to allow untrusted connections from clients.
# In the future, this option will go away and the peer_ip will accept
# websocket client connections.
# become_superpeer = 'never' | 'always' | 'auto'
#
# Examples: 0.0.0.0 - Bind on all interfaces.
# 127.0.0.1 - Bind on localhost interface. Only local programs may connect.
# Controls the selection of peer roles:
#
# 'never' Always handshake in the leaf role.
# 'always' Always handshake in the superpeer role.
# 'auto' Start as a leaf, promote to superpeer after
# passing capability check (default).
#
# In the leaf role, a peer does not advertise its IP and port for
# the purpose of receiving incoming connections. The peer also does
# not forward transactions and validations received from other peers.
#
# [websocket_public_port]
#
# Port to bind to allow untrusted connections from clients. In the future,
# this option will go away and the peer_ip will accept websocket client
# connections.
#
#
#
# [websocket_public_secure]
#
# 0, 1 or 2.
# 0: Provide ws service for websocket_public_ip/websocket_public_port.
# 1: Provide both ws and wss service for websocket_public_ip/websocket_public_port. [default]
# 2: Provide wss service only for websocket_public_ip/websocket_public_port.
#
# Browser pages like the Ripple client will not be able to connect to a secure
# websocket connection if a self-signed certificate is used. As the Ripple
# reference client currently shares secrets with its server, this should be
# enabled.
#
#
#
# [websocket_ping_frequency]
#
# <number>
#
# The amount of time to wait in seconds, before sending a websocket 'ping'
# message. Ping messages are used to determine if the remote end of the
# connection is no longer available.
#
#
#
# [websocket_ip]
#
# IP address or domain to bind to allow trusted ADMIN connections from backend
# applications.
#
# Examples: 0.0.0.0 - Bind on all interfaces.
# 127.0.0.1 - Bind on localhost interface. Only local programs may connect.
#
#
#
# [websocket_port]
#
# Port to bind to allow trusted ADMIN connections from backend applications.
#
#
#
# [websocket_secure]
#
# 0, 1, or 2.
# 0: Provide ws service only for websocket_ip/websocket_port. [default]
# 1: Provide ws and wss service for websocket_ip/websocket_port
# 2: Provide wss service for websocket_ip/websocket_port.
#
#
#
# [websocket_ssl_cert]
#
# Specify the path to the SSL certificate file in PEM format.
# This is not needed if the chain includes it.
#
#
#
# [websocket_ssl_chain]
#
# If you need a certificate chain, specify the path to the certificate chain
# here. The chain may include the end certificate.
#
#
#
# [websocket_ssl_key]
#
# Specify the filename holding the SSL key in PEM format.
# In the superpeer role, a peer advertises its IP and port for
# receiving incoming connections after passing an incoming connection
# test. Superpeers forward transactions and protocol messages to all
# other peers. Superpeers do not forward validations to other superpeers.
# Instead, a validation received by a superpeer from a leaf is forwarded
# only to other leaf connections.
#
#
#
#-------------------------------------------------------------------------------
#
# 3. RPC Networking
#
#------------------
#
# This group of settings configures security and access attributes of the
# RPC server section of the rippled process, used to service both local
# and, optionally, remote clients.
#
#
#
# [rpc_allow_remote]
#
# 0 or 1.
#
# 0: Allow RPC connections only from 127.0.0.1. [default]
# 1: Allow RPC connections from any IP.
#
#
#
# [rpc_admin_allow]
#
# Specify a list of IP addresses allowed to have admin access. One per line.
# If you want to test the output of non-admin commands add this section and
# just put an ip address not under your control.
# Defaults to 127.0.0.1.
#
#
#
# [rpc_admin_user]
#
# As a server, require this as the admin user to be specified. Also, require
# rpc_admin_user and rpc_admin_password to be checked for RPC admin functions.
# The request must specify these as the admin_user and admin_password in the
# request object.
#
# As a client, supply this to the server in the request object.
#
#
#
# [rpc_admin_password]
#
# As a server, require this as the admin password to be specified. Also,
# require rpc_admin_user and rpc_admin_password to be checked for RPC admin
# functions. The request must specify these as the admin_user and
# admin_password in the request object.
#
# As a client, supply this to the server in the request object.
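#
# Example:
#
#   A minimal sketch of a request object carrying admin credentials. The
#   command shown is illustrative and the credential values are placeholders,
#   not defaults:
#
#   { "command" : "stop", "admin_user" : "<rpc_admin_user>", "admin_password" : "<rpc_admin_password>" }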
#
#
#
# [rpc_ip]
#
# IP address or domain to bind to allow insecure RPC connections.
# Defaults to not binding, which disallows RPC connections.
#
#
#
# [rpc_port]
#
# If rpc_ip is supplied, the corresponding port to bind to for RPC connections.
#
#
#
# [rpc_user]
#
# As a server, require this user to be specified and require rpc_password to
# be checked for RPC access via the rpc_ip and rpc_port. The user and password
# must be specified via HTTP's basic authentication method.
# As a client, supply this to the server via HTTP's basic authentication
# method.
#
#
#
# [rpc_password]
#
# As a server, require this password to be specified and require rpc_user to
# be checked for RPC access via the rpc_ip and rpc_port. The user and password
# must be specified via HTTP's basic authentication method.
# As a client, supply this to the server via HTTP's basic authentication
# method.
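#
# Example:
#
#   A sketch of the credential exchange, assuming hypothetical values
#   rpc_user=alice and rpc_password=secret. The client sends the standard
#   HTTP Basic Authentication header:
#
#   Authorization: Basic <base64 of "alice:secret">
#
#   Most HTTP client libraries build this header automatically when given a
#   user name and password.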
#
#
#
# [rpc_startup]
#
# Specify a list of RPC commands to run at startup.
#
# Examples:
# { "command" : "server_info" }
# { "command" : "log_level", "partition" : "ripplecalc", "severity" : "trace" }
#
#
#
# [rpc_secure]
#
# 0 or 1.
#
# 0: Server certificates are not provided for RPC clients using SSL [default]
# 1: Client RPC connections will be provided with SSL certificates.
#
# Note that if rpc_secure is enabled, it will also be necessary to configure the
# certificate file settings located in rpc_ssl_cert, rpc_ssl_chain, and rpc_ssl_key
#
#
#
# [rpc_ssl_cert]
#
# <pathname>
#
# A file system path leading to the SSL certificate file to use for secure RPC.
# The file is in PEM format. The file is not needed if the chain includes it.
#
#
#
# [rpc_ssl_chain]
#
# <pathname>
#
# A file system path leading to the file with the certificate chain.
# The chain may include the end certificate.
#
#
#
# [rpc_ssl_key]
#
# <pathname>
#
# A file system path leading to the file with the SSL key.
# The file is in PEM format.
#
#
#
#-------------------------------------------------------------------------------
#
# 4. SMS Gateway
# 3. SMS Gateway
#
#---------------
#
@@ -444,7 +421,7 @@
#
# [sms_url]?from=[sms_from]&to=[sms_to]&api_key=[sms_key]&api_secret=[sms_secret]&text=['text']
#
# Where [...] are the corresponding valus from the configuration file, and
# Where [...] are the corresponding values from the configuration file, and
# ['text'] is the value of the JSON field with name 'text'.
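#
# Example:
#
#   A sketch of the substituted URL, assuming hypothetical configuration
#   values sms_url=https://sms.example.com/send, sms_from=15551230000,
#   sms_to=15559870000, sms_key=KEY, sms_secret=SECRET and a JSON 'text'
#   field of "hello":
#
#   https://sms.example.com/send?from=15551230000&to=15559870000&api_key=KEY&api_secret=SECRET&text=hello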
#
# [sms_url]
@@ -463,9 +440,9 @@
#
#-------------------------------------------------------------------------------
#
# 5. Ripple Protocol
# 4. Ripple Protocol
#
#------------------
#-------------------
#
# These settings affect the behavior of the server instance with respect
# to Ripple payment protocol level activities such as validating and
@@ -504,6 +481,13 @@
#
#
#
# [ledger_history_index]
#
# If set to greater than 0, the index number of the earliest ledger to
# acquire.
#
#
#
# [fetch_depth]
#
# The number of past ledgers to serve to other peers that request historical
@@ -538,8 +522,8 @@
# For domains, rippled will probe for https web servers at the specified
# domain in the following order: ripple.DOMAIN, www.DOMAIN, DOMAIN
#
# For public key entries, a comment may optionally be spcified after adding a
# space to the pulic key.
# For public key entries, a comment may optionally be specified after adding
# a space to the public key.
#
# Examples:
# ripple.com
@@ -587,20 +571,27 @@
#
# [path_search_fast]
# [path_search_max]
# When seaching for paths, the minimum and maximum search aggressiveness.
# When searching for paths, the minimum and maximum search aggressiveness.
#
# The default for 'path_search_fast' is 2. The default for 'path_search_max' is 10.
#
# [path_search_old]
#
# For clients that use the legacy path finding interfaces, the search
# agressiveness to use. The default is 7.
# aggressiveness to use. The default is 7.
#
#
#
# [fee_default]
#
# Sets the base cost of a transaction in drops. Used when the server has
# no other source of fee information, such as when signing transactions offline.
#
#
#
#-------------------------------------------------------------------------------
#
# 6. HTTPS Client
# 5. HTTPS Client
#
#----------------
#
@@ -614,7 +605,9 @@
# 0 or 1.
#
# 0. HTTPS client connections will not verify certificates.
# 1. Certificates will be checked for HTTPS client connections .
# 1. Certificates will be checked for HTTPS client connections.
#
# If not specified, this parameter defaults to 1.
#
#
#
@@ -640,7 +633,7 @@
#
#-------------------------------------------------------------------------------
#
# 7. Database
# 6. Database
#
#------------
#
@@ -675,19 +668,28 @@
# Examples:
# type=HyperLevelDB
# path=db/hyperldb
# compression=0
#
# Choices for 'type' (not case-sensitive)
# HyperLevelDB Use an improved version of LevelDB (preferred)
# LevelDB Use Google's LevelDB database (deprecated)
# none Use no backend
# RocksDB Use Facebook's RocksDB database
# SQLite Use SQLite
# RocksDB Use Facebook's RocksDB database (preferred)
# NuDB Use Ripple Labs' NuDB (Windows preferred)
# HyperLevelDB (Deprecated)
# SQLite (Deprecated)
# LevelDB (Deprecated)
# none (No backend)
#
# Required keys:
# path Location to store the database (all types)
#
# Optional keys:
# (none yet)
# compression 0 for none, 1 for Snappy compression
# online_delete Minimum value of 256. Enable automatic purging
# of older ledger information. Maintain at least this
# number of ledger records online. Must be greater
# than or equal to ledger_history.
# advisory_delete 0 for disabled, 1 for enabled. If set, then
# require administrative RPC call "can_delete"
# to enable online deletion of ledger records.
#
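#   Example:
#
#   With advisory_delete enabled, online deletion waits for the
#   administrative "can_delete" RPC call. A sketch of such a call is shown
#   below; the parameter name and value are assumptions and should be
#   checked against the RPC documentation:
#
#     { "command" : "can_delete", "can_delete" : "always" }
#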
# Notes:
# The 'node_db' entry configures the primary, persistent storage.
@@ -712,7 +714,7 @@
#
#-------------------------------------------------------------------------------
#
# 8. Diagnostics
# 7. Diagnostics
#
#---------------
#
@@ -767,41 +769,135 @@
# prefix=my_validator
#
#-------------------------------------------------------------------------------
# Allow other peers to connect to this server.
#
[peer_ip]
0.0.0.0
[peer_port]
51235
# Allow untrusted clients to connect to this server.
# 8. Voting
#
[websocket_public_ip]
0.0.0.0
[websocket_public_port]
5006
# Provide trusted websocket ADMIN access to the localhost.
#----------
#
[websocket_ip]
127.0.0.1
[websocket_port]
6006
# Provide trusted json-rpc ADMIN access to the localhost.
# The vote settings configure settings for the entire Ripple network.
# While a single instance of rippled cannot unilaterally enforce network-wide
# settings, these choices become part of the instance's vote during the
# consensus process for each voting ledger.
#
[rpc_ip]
127.0.0.1
# [voting]
#
# A set of key/value pair parameters used during voting ledgers.
#
# reference_fee = <drops>
#
# The cost of the reference transaction fee, specified in drops.
# The reference transaction is the simplest form of transaction.
# It represents an XRP payment between two parties.
#
# If this parameter is unspecified, rippled will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# reference_fee = 10 # 10 drops
#
# account_reserve = <drops>
#
# The account reserve requirement specified in drops. The portion of an
# account's XRP balance that is at or below the reserve may only be
# spent on transaction fees, and not transferred out of the account.
#
# If this parameter is unspecified, rippled will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# account_reserve = 20000000 # 20 XRP
#
# owner_reserve = <drops>
#
# The owner reserve is the amount of XRP reserved in the account for
# each ledger item owned by the account. Ledger items an account may
# own include trust lines, open orders, and tickets.
#
# If this parameter is unspecified, rippled will use an internal
# default. Don't change this without understanding the consequences.
#
# Example:
# owner_reserve = 5000000 # 5 XRP
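#
#   Note (illustrative):
#
#   The two reserves combine. Using the example values above
#   (account_reserve = 20 XRP, owner_reserve = 5 XRP), an account that owns
#   3 ledger items (for example, 3 trust lines) must keep at least
#   20 + 3 * 5 = 35 XRP (35000000 drops). That reserved balance can pay
#   transaction fees but cannot be transferred out of the account.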
#
#-------------------------------------------------------------------------------
#
# 9. Example Settings
#
#--------------------
#
# Administrators can use these values as a starting point for configuring
# their instance of rippled, but each value should be checked to make sure
# it meets the business requirements for the organization.
#
# Server
#
# These example configuration settings create these ports:
#
# "peer"
#
# Peer protocol open to everyone. This is required to accept
# incoming rippled connections. This does not affect automatic
# or manual outgoing Peer protocol connections.
#
# "rpc"
#
# Administrative RPC commands over HTTPS, when originating from
# the same machine (via the loopback adapter at 127.0.0.1).
#
# "wss_admin"
#
# Admin level API commands over Secure Websockets, when originating
# from the same machine (via the loopback adapter at 127.0.0.1).
#
# This port is commented out but can be enabled by removing
# the '#' from each corresponding line including the entry under [server]
#
# "wss_public"
#
# Guest level API commands over Secure Websockets, open to everyone.
#
# For HTTPS and Secure Websockets ports, if no certificate and key file
# are specified then a self-signed certificate will be generated on startup.
# If you have a certificate and key file, uncomment the corresponding lines
# and ensure the paths to the files are correct.
#
# NOTE
#
# To accept connections on well known ports such as 80 (HTTP) or
# 443 (HTTPS), most operating systems will require rippled to
# run with administrator privileges, or else rippled will not start.
[rpc_port]
5005
[server]
port_rpc
port_peer
port_wss_admin
#port_ws_public
#ssl_key = /etc/ssl/private/server.key
#ssl_cert = /etc/ssl/certs/server.crt
[rpc_allow_remote]
0
[port_rpc]
port = 5005
ip = 127.0.0.1
admin = allow
protocol = https
[port_peer]
port = 51235
ip = 0.0.0.0
protocol = peer
[port_wss_admin]
port = 6006
ip = 127.0.0.1
admin = allow
protocol = wss
#[port_ws_public]
#port = 5005
#ip = 127.0.0.1
#protocol = wss
#-------------------------------------------------------------------------------
[node_size]
medium
@@ -809,6 +905,8 @@ medium
# This is the primary persistent datastore for rippled. This includes transaction
# metadata, account states, and ledger headers. Helpful information can be
# found here: https://ripple.com/wiki/NodeBackEnd
# delete old ledgers while maintaining at least 2000. Do not require an
# external administrative command to initiate deletion.
[node_db]
type=RocksDB
path=/var/lib/rippled/db/rocksdb
@@ -817,6 +915,8 @@ filter_bits=12
cache_mb=256
file_size_mb=8
file_size_mult=2
online_delete=2000
advisory_delete=0
[database_path]
/var/lib/rippled/db
@@ -837,8 +937,9 @@ pool.ntp.org
[ips]
r.ripple.com 51235
# These validators are taken from the v0.16 release notes on the wiki:
# https://ripple.com/wiki/Latest_rippled_release_notes
# The latest validators can be obtained from
# https://ripple.com/ripple.txt
#
[validators]
n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7 RL1
n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj RL2
@@ -855,22 +956,7 @@ n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA RL5
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
# Configure SSL for WebSockets. Not enabled by default because not everybody
# has an SSL cert on their server, but if you uncomment the following lines and
# set the path to the SSL certificate and private key the WebSockets protocol
# will be protected by SSL/TLS.
#[websocket_secure]
#1
#[websocket_ssl_cert]
#/etc/ssl/certs/server.crt
#[websocket_ssl_key]
#/etc/ssl/private/server.key
# Defaults to 0 ("no") so that you can use self-signed SSL certificates for
# development, or internally.
# Defaults to 1 ("yes") so that certificates will be validated. To allow the use
# of self-signed certificates for development or internal use, set to 0 ("no").
#[ssl_verify]
#0


@@ -20,6 +20,8 @@
#
[validators]
n9KPnVLn7ewVzHvn218DcEYsnWLzKerTDwhpofhk4Ym1RUq4TeGw first
n9LFzWuhKNvXStHAuemfRKFVECLApowncMAM5chSCL9R5ECHGN4V second
n94rSdgTyBNGvYg8pZXGuNt59Y5bGAZGxbxyvjDaqD9ceRAgD85P third
n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7 RL1
n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj RL2
n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C RL3
n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS RL4
n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA RL5

New binary image files added (contents not shown):
images/build.png (3.1 KiB)
images/contribute.png (2.1 KiB)
images/network.png (26 KiB)
images/pathfinding.png (77 KiB)
images/ripple.png (6.8 KiB)
images/transact.png (1.7 KiB)
images/vehicle_currency.png (22 KiB)


@@ -2,33 +2,28 @@
"name": "rippled",
"version": "0.0.1",
"description": "Rippled Server",
"private": true,
"directories": {
"test": "test"
},
"dependencies": {
"ripple-lib": "0.7.25",
"ripple-lib": "0.8.2",
"async": "~0.2.9",
"extend": "~1.2.0",
"simple-jsonrpc": "~0.0.2",
"deep-equal": "0.0.0"
},
"devDependencies": {
"assert-diff": "^1.0.1",
"coffee-script": "~1.6.3",
"mocha": "~1.13.0"
},
"scripts": {
"test": "mocha test/websocket-test.js test/server-test.js test/*-test.{js,coffee}"
},
"repository": {
"type": "git",
"url": "git://github.com/ripple/rippled.git"
},
"readmeFilename": "README.md"
}


@@ -1,96 +0,0 @@
#
# Copyright (c) 2009 Scott Stafford
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
# Author : Scott Stafford
# Date : 2009-12-09 20:36:14
#
# Changes : Vinnie Falco <vinnie.falco@gmail.com>
# Date : 2014--4-25
"""
protoc.py: Protoc Builder for SCons
This Builder invokes protoc to generate C++ and Python from a .proto file.
NOTE: Java is not currently supported.
"""
__author__ = "Scott Stafford"
import SCons.Action
import SCons.Builder
import SCons.Defaults
import SCons.Node.FS
import SCons.Util
from SCons.Script import File, Dir
import os.path
protocs = 'protoc'
ProtocAction = SCons.Action.Action('$PROTOCCOM', '$PROTOCCOMSTR')
def ProtocEmitter(target, source, env):
PROTOCOUTDIR = env['PROTOCOUTDIR']
PROTOCPYTHONOUTDIR = env['PROTOCPYTHONOUTDIR']
for source_path in [str(x) for x in source]:
base = os.path.splitext(os.path.basename(source_path))[0]
if PROTOCOUTDIR:
target.extend([os.path.join(PROTOCOUTDIR, base + '.pb.cc'),
os.path.join(PROTOCOUTDIR, base + '.pb.h')])
if PROTOCPYTHONOUTDIR:
target.append(os.path.join(PROTOCPYTHONOUTDIR, base + '_pb2.py'))
try:
target.append(env['PROTOCFDSOUT'])
except KeyError:
pass
#print "PROTOC SOURCE:", [str(s) for s in source]
#print "PROTOC TARGET:", [str(s) for s in target]
return target, source
ProtocBuilder = SCons.Builder.Builder(action = ProtocAction,
emitter = ProtocEmitter,
srcsuffix = '$PROTOCSRCSUFFIX')
def generate(env):
"""Add Builders and construction variables for protoc to an Environment."""
try:
bld = env['BUILDERS']['Protoc']
except KeyError:
bld = ProtocBuilder
env['BUILDERS']['Protoc'] = bld
env['PROTOC'] = env.Detect(protocs) or 'protoc'
env['PROTOCFLAGS'] = SCons.Util.CLVar('')
env['PROTOCPROTOPATH'] = SCons.Util.CLVar('')
env['PROTOCCOM'] = '$PROTOC ${["-I%s"%x for x in PROTOCPROTOPATH]} $PROTOCFLAGS --cpp_out=$PROTOCCPPOUTFLAGS$PROTOCOUTDIR ${PROTOCPYTHONOUTDIR and ("--python_out="+PROTOCPYTHONOUTDIR) or ""} ${PROTOCFDSOUT and ("-o"+PROTOCFDSOUT) or ""} ${SOURCES}'
env['PROTOCOUTDIR'] = '${SOURCE.dir}'
env['PROTOCPYTHONOUTDIR'] = "python"
env['PROTOCSRCSUFFIX'] = '.proto'
def exists(env):
return env.Detect(protocs)

src/.gitignore vendored

@@ -1,2 +0,0 @@
# boost subtree
/boost


@@ -57,18 +57,6 @@
//#define BEAST_FORCE_DEBUG 1
#endif
/** Config: BEAST_LOG_ASSERTIONS
If this flag is enabled, the bassert and bassertfalse macros will always
use Logger::writeToLog() to write a message when an assertion happens.
Enabling it will also leave this turned on in release builds. When it's
disabled, however, the bassert and bassertfalse macros will not be compiled
in a release build.
@see bassert, bassertfalse, Logger
*/
#ifndef BEAST_LOG_ASSERTIONS
//#define BEAST_LOG_ASSERTIONS 1
#endif
/** Config: BEAST_CHECK_MEMORY_LEAKS
Enables a memory-leak check for certain objects when the app terminates.
See the LeakChecked class for more details about enabling leak checking for
@@ -78,17 +66,6 @@
//#define BEAST_CHECK_MEMORY_LEAKS 0
#endif
/** Config: BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES
Setting this option makes Socket-derived classes generate compile errors
if they forget any of the virtual overrides. As some Socket-derived classes
intentionally omit member functions that are not applicable, this macro
should only be enabled temporarily when writing your own Socket-derived
class, to make sure that the function signatures match as expected.
*/
#ifndef BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES
//#define BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES 1
#endif
//------------------------------------------------------------------------------
//
// Libraries
@@ -166,16 +143,6 @@
#define RIPPLE_DUMP_LEAKS_ON_EXIT 1
#endif
/** Config: RIPPLE_TRACK_MUTEXES
Turns on a feature that enables tracking and diagnostics for mutex
and recursive mutex objects. This affects the type of lock used
by RippleMutex and RippleRecursiveMutex
@note This can slow down performance considerably.
*/
#ifndef RIPPLE_TRACK_MUTEXES
#define RIPPLE_TRACK_MUTEXES 0
#endif
//------------------------------------------------------------------------------
// These control whether or not certain functionality gets
@@ -200,19 +167,48 @@
#endif
/** Config: BEAST_USE_BOOST_FEATURES
This activates boost specific features and improvements. If this is
turned on, the include paths for your build environment must be set
correctly to find the boost headers.
This activates boost specific features and improvements. If this is
turned on, the include paths for your build environment must be set
correctly to find the boost headers.
*/
#ifndef BEAST_USE_BOOST_FEATURES
//#define BEAST_USE_BOOST_FEATURES 1
#endif
/** Config: RIPPLE_PROPOSE_FEATURES
This determines whether to add any features to the proposed transaction set.
This determines whether to add any features to the proposed transaction set.
*/
#ifndef RIPPLE_PROPOSE_FEATURES
#define RIPPLE_PROPOSE_FEATURES 0
#ifndef RIPPLE_PROPOSE_AMENDMENTS
#define RIPPLE_PROPOSE_AMENDMENTS 0
#endif
/** Config: RIPPLE_ENABLE_AUTOBRIDGING
This determines whether ripple implements offer autobridging via XRP.
*/
#ifndef RIPPLE_ENABLE_AUTOBRIDGING
#define RIPPLE_ENABLE_AUTOBRIDGING 0
#endif
/** Config: RIPPLE_SINGLE_IO_SERVICE_THREAD
When set, restricts the number of threads calling io_service::run to one.
This is useful when debugging.
*/
#ifndef RIPPLE_SINGLE_IO_SERVICE_THREAD
#define RIPPLE_SINGLE_IO_SERVICE_THREAD 0
#endif
/** Config: RIPPLE_HOOK_VALIDATORS
Activates code for handling validations and validators (work in progress).
*/
#ifndef RIPPLE_HOOK_VALIDATORS
#define RIPPLE_HOOK_VALIDATORS 0
#endif
/** Config: RIPPLE_ENABLE_TICKETS
Enables processing of ticket transactions
*/
#ifndef RIPPLE_ENABLE_TICKETS
#define RIPPLE_ENABLE_TICKETS 0
#endif
#endif


@@ -1,9 +1,9 @@
# src
Some of these directories come from entire outside repositories
brought in using git-subtree. This means that the source files are
inserted directly into the rippled repository. They can be edited
and committed just as if they were normal files.
Some of these directories come from entire outside repositories brought in
using git-subtree. This means that the source files are inserted directly
into the rippled repository. They can be edited and committed just as if they
were normal files.
However, if you create a commit that contains files both from a
subtree, and from the ripple source tree please use care when designing
@@ -99,7 +99,7 @@ ripple-fork
## protobuf
Ripple's fork of protobuf. We've changed some names in order to support the
unity-style of build (a single .cpp addded to the project, instead of
unity-style of build (a single .cpp added to the project, instead of
linking to a separately built static library).
Repository


@@ -55,18 +55,6 @@
//#define BEAST_FORCE_DEBUG 1
#endif
/** Config: BEAST_LOG_ASSERTIONS
If this flag is enabled, the bassert and bassertfalse macros will always
use Logger::writeToLog() to write a message when an assertion happens.
Enabling it will also leave this turned on in release builds. When it's
disabled, however, the bassert and bassertfalse macros will not be compiled
in a release build.
@see bassert, bassertfalse, Logger
*/
#ifndef BEAST_LOG_ASSERTIONS
//#define BEAST_LOG_ASSERTIONS 1
#endif
/** Config: BEAST_CHECK_MEMORY_LEAKS
Enables a memory-leak check for certain objects when the app terminates.
See the LeakChecked class for more details about enabling leak checking for
@@ -76,17 +64,6 @@
//#define BEAST_CHECK_MEMORY_LEAKS 0
#endif
/** Config: BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES
Setting this option makes Socket-derived classes generate compile errors
if they forget any of the virtual overrides. As some Socket-derived classes
intentionally omit member functions that are not applicable, this macro
should only be enabled temporarily when writing your own Socket-derived
class, to make sure that the function signatures match as expected.
*/
#ifndef BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES
//#define BEAST_COMPILER_CHECKS_SOCKET_OVERRIDES 1
#endif
//------------------------------------------------------------------------------
//
// Libraries


@@ -1,23 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ImportGroup Label="PropertySheets" />
<PropertyGroup Label="UserMacros">
<RepoDir>..\..</RepoDir>
</PropertyGroup>
<PropertyGroup />
<ItemDefinitionGroup>
<ClCompile>
<WarningLevel>Level4</WarningLevel>
<PreprocessorDefinitions>BEAST_INCLUDE_BEASTCONFIG=1;_CRTDBG_MAP_ALLOC;_WIN32_WINNT=0x0600;_SCL_SECURE_NO_WARNINGS;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<MultiProcessorCompilation>true</MultiProcessorCompilation>
<MinimalRebuild>false</MinimalRebuild>
<AdditionalIncludeDirectories>%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<AdditionalOptions>/bigobj %(AdditionalOptions)</AdditionalOptions>
</ClCompile>
</ItemDefinitionGroup>
<ItemGroup>
<BuildMacro Include="RepoDir">
<Value>$(RepoDir)</Value>
</BuildMacro>
</ItemGroup>
</Project>

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,228 +0,0 @@
Write code in a clear, self-documenting style but use comments where necessary.
Use Test Driven Development. It encourages designing interfaces first, before implementation.
Don't Repeat Yourself, "D.R.Y.". Put redundant code in a class so it can be re-used and unit tested.
Expose as little of a class as possible. Prefer private over protected. Prefer protected over public. The smaller the interface footprint, the easier it is to write unit tests and comprehend the operation of the class. This is the Interface Segregation Principle.
Use language constants (enum or static const) with descriptive names instead of hard-coded "magic numbers."
Make classes depend on as few external classes or routines as possible. Ideally, no dependencies.
Don't limit flexibility of parameters by forcing the caller to use specific types where general types would work.
--------------------------------------------------------------------------------
# Coding Standards
Coding standards used here are extremely strict and consistent. The style
evolved gradually over the years, incorporating generally acknowledged
best-practice C++ advice, experience, and personal preference.
## Don't Repeat Yourself!
The [Don't Repeat Yourself][1] principle summarises the essence of what it
means to write good code, in all languages, at all levels.
## Formatting
The goal of source code formatting should always be to make things as easy to
read as possible. White space is used to guide the eye so that details are not
overlooked. Blank lines are used to separate code into "paragraphs."
* No tab characters please.
* Tab stops are set to 4 spaces.
* Braces are indented in the [Allman style][2].
* Always place a space before and after all binary operators,
especially assignments (`operator=`).
* The `!` operator should always be followed by a space.
* The `~` operator should be preceded by a space, but not followed by one.
* The `++` and `--` operators should have no spaces between the operator and
the operand.
* A space never appears before a comma, and always appears after a comma.
* Always place a space before an opening parenthesis. One exception is if
the parentheses are empty.
* Don't put spaces after a parenthesis. A typical member function call might
look like this: `foobar (1, 2, 3);`
* In general, leave a blank line before an `if` statement.
* In general, leave a blank line after a closing brace `}`.
* Do not place code or comments on the same line as any opening or
closing brace.
* Do not write `if` statements all-on-one-line. The exception to this is when
you've got a sequence of similar `if` statements, and are aligning them all
vertically to highlight their similarities.
* In an `if-else` statement, if you surround one half of the statement with
braces, you also need to put braces around the other half, to match.
* When writing a pointer type, use this spacing: `SomeObject* myObject`.
Technically, a more correct spacing would be `SomeObject *myObject`, but
it makes more sense for the asterisk to be grouped with the type name,
since being a pointer is part of the type, not the variable name. The only
time that this can lead to any problems is when you're declaring multiple
pointers of the same type in the same statement - which leads on to the next
rule:
* When declaring multiple pointers, never do so in a single statement, e.g.
`SomeObject* p1, *p2;` - instead, always split them out onto separate lines
and write the type name again, to make it quite clear what's going on, and
avoid the danger of missing out any vital asterisks.
* The previous point also applies to references, so always put the `&` next to
the type rather than the variable, e.g. `void foo (Thing const& thing)`. And
don't put a space on both sides of the `*` or `&` - always put a space after
it, but never before it.
* The word `const` should be placed to the right of the thing that it modifies,
for consistency. For example `int const` refers to an int which is const.
`int const*` is a pointer to an int which is const. `int *const` is a const
pointer to an int.
* Always place a space in between the template angle brackets and the type
name. Template code is already hard enough to read!
## Naming conventions
* Member variables and method names are written with camel-case, and never
begin with a capital letter.
* Class names are also written in camel-case, but always begin with a capital
letter.
* For global variables... well, you shouldn't have any, so it doesn't matter.
* Class data members begin with `m_`, static data members begin with `s_`.
Global variables begin with `g_`. This is so the scope of the corresponding
declaration can be easily determined.
* Avoid underscores in your names, especially leading or trailing underscores.
In particular, leading underscores should be avoided, as these are often used
in standard library code, so to use them in your own code looks quite jarring.
* If you really have to write a macro for some reason, then make it all caps,
with underscores to separate the words. And obviously make sure that its name
is unlikely to clash with symbols used in other libraries or 3rd party code.
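A minimal sketch, using hypothetical names, of how these conventions look
in practice:

    class PriceTracker
    {
    public:
        PriceTracker ();

        int currentPrice () const;     // camel-case method, lowercase first letter

    private:
        int m_lastPrice;               // per-instance data member
        static int s_trackerCount;     // static data member
    };

    int g_retryLimit = 3;              // global variable (avoid these if you can)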
## Types, const-correctness
* If a method can (and should!) be const, make it const!
* If a method definitely doesn't throw an exception (be careful!), mark it as
`noexcept`
* When returning a temporary object, e.g. a String, the returned object should
be non-const, so that if the class has a C++11 move operator, it can be used.
* If a local variable can be const, then make it const!
* Remember that pointers can be const as well as primitives; For example, if
you have a `char*` whose contents are going to be altered, you may still be
able to make the pointer itself const, e.g. `char* const foobar = getFoobar();`.
* Do not declare all your local variables at the top of a function or method
(i.e. in the old-fashioned C-style). Declare them at the last possible moment,
and give them as small a scope as possible.
* Object parameters should be passed as `const&` wherever possible. Only
pass a parameter as a copy-by-value object if you really need to mutate
a local copy inside the method, and if making a local copy inside the method
would be difficult.
* Use portable `for()` loop variable scoping (i.e. do not have multiple for
loops in the same scope that each re-declare the same variable name, as
this fails on older compilers)
* When you're testing a pointer to see if it's null, never write
`if (myPointer)`. Always avoid that implicit cast-to-bool by writing it more
fully: `if (myPointer != nullptr)`. And likewise, never ever write
`if (! myPointer)`, instead always write `if (myPointer == nullptr)`.
It is more readable that way.
* Avoid C-style casts except when converting between primitive numeric types.
Some people would say "avoid C-style casts altogether", but `static_cast` is
a bit unreadable when you just want to cast an `int` to a `float`. But
whenever a pointer is involved, or a non-primitive object, always use
`static_cast`. And when you're reinterpreting data, always use
`reinterpret_cast`.
* Until C++ gets a universal 64-bit primitive type (part of the C++11
standard), it's best to stick to the `std::int64_t` and `std::uint64_t` typedefs.
## Object lifetime and ownership
* Absolutely do NOT use `delete`, `deleteAndZero`, etc. There are very very few
situations where you can't use a `ScopedPointer` or some other automatic
lifetime management class.
* Do not use `new` unless there's no alternative. Whenever you type `new`, always
treat it as a failure to find a better solution. If a local variable can be
allocated on the stack rather than the heap, then always do so.
* Do not ever use `new` or `malloc` to allocate a C++ array. Always use a
`HeapBlock` instead.
* And just to make it doubly clear: Never use `malloc` or `calloc`.
* If a parent object needs to create and own some kind of child object, always
use composition as your first choice. If that's not possible (e.g. if the
child needs a pointer to the parent for its constructor), then use a
`ScopedPointer`.
* If possible, pass an object as a reference rather than a pointer. If possible,
make it a `const` reference.
* Obviously avoid static and global values. Sometimes there's no alternative,
but if there is an alternative, then use it, no matter how much effort it
involves.
* If allocating a local POD structure (e.g. an operating-system structure in
native code), and you need to initialise it with zeros, use the `= { 0 };`
syntax as your first choice for doing this. If for some reason that's not
appropriate, use the `zerostruct()` function, or in case that isn't suitable,
use `zeromem()`. Don't use `memset()`.
## Classes
* Declare a class's public section first, and put its constructors and
destructor first. Any protected items come next, and then private ones.
* Use the most restrictive access-specifier possible for each member. Prefer
`private` over `protected`, and `protected` over `public`. Don't expose
things unnecessarily.
* Preferred positioning for any inherited classes is to put them to the right
of the class name, vertically aligned, e.g.:
class Thing : public Foo,
private Bar
{
}
* Put a class's member variables (which should almost always be private, of course),
after all the public and protected method declarations.
* Any private methods can go towards the end of the class, after the member
variables.
* If your class does not have copy-by-value semantics, derive the class from
`Uncopyable`.
* If your class is likely to be leaked, then derive your class from
`LeakChecked<>`.
* Constructors that take a single parameter should by default be marked
`explicit`. Obviously there are cases where you do want implicit conversion,
but always think about it carefully before writing a non-explicit constructor.
* Do not use `NULL`, `null`, or 0 for a null-pointer. And especially never use
'0L', which is particularly burdensome. Use `nullptr` instead - this is the
C++2011 standard, so get used to it. There's a fallback definition for `nullptr`
in Beast, so it's always possible to use it even if your compiler isn't yet
C++2011 compliant.
* All the C++ 'guru' books and articles are full of excellent and detailed advice
on when it's best to use inheritance vs composition. If you're not already
familiar with the received wisdom in these matters, then do some reading!
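A minimal sketch pulling several of these rules together; the class names are
hypothetical, while `Uncopyable` is the base class mentioned above:

    class Connection : public Stream,
                       private Uncopyable
    {
    public:
        explicit Connection (int port);
        ~Connection ();

        void open ();

    protected:
        void onError ();

    private:
        int m_port;        // member variables come after the method declarations

        void reset ();     // private methods go last
    };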
## Miscellaneous
* Constrain the scope of identifiers to the smallest area that needs it. Consider
using nested classes or classes inside functions, if doing so will expose less
interface and implementation overall.
* `goto` statements should not be used at all, even if the alternative is
more verbose code. The only exception is when implementing an algorithm in
a function as a state machine.
* Don't use macros! OK, obviously there are many situations where they're the
right tool for the job, but treat them as a last resort. Certainly don't ever
use a macro just to hold a constant value or to perform any kind of function
that could have been done as a real inline function. And it goes without saying
that you should give them names which aren't going to clash with other code.
And `#undef` them after you've used them, if possible.
* When using the `++` or `--` operators, never use post-increment if
pre-increment could be used instead. Although it doesn't matter for
primitive types, it's good practice to pre-increment since this can be
much more efficient for more complex objects. In particular, if you're
writing a for loop, always use pre-increment,
e.g. `for (int i = 0; i < 10; ++i)`
* Never put an "else" statement after a "return"! This is well-explained in the
LLVM coding standards...and a couple of other very good pieces of advice from
the LLVM standards are in there as well.
* When getting a possibly-null pointer and using it only if it's non-null, limit
the scope of the pointer as much as possible - e.g. Do NOT do this:
Foo* f = getFoo ();
if (f != nullptr)
f->doSomething ();
// other code
f->doSomething (); // oops! f may be null!
..instead, prefer to write it like this, which reduces the scope of the
pointer, making it impossible to write code that accidentally uses a null
pointer:
if (Foo* f = getFoo ())
f->doSomethingElse ();
// f is out-of-scope here, so impossible to use it if it's null
(This also results in smaller, cleaner code)
[1]: http://en.wikipedia.org/wiki/Don%27t_repeat_yourself
[2]: http://en.wikipedia.org/wiki/Indent_style#Allman_style


@@ -102,7 +102,7 @@ INPUT_ENCODING = UTF-8
FILE_PATTERNS =
RECURSIVE = YES
EXCLUDE = modules/beast_core/beast_core.h \
modules/beast_core/beast_core.cpp \
modules/beast_core/beast_core.unity.cpp \
modules/beast_basics/beast_basics.cpp \
modules/beast_basics/native \
modules/beast_basics/zip/zlib


@@ -1,7 +1,10 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import copy
import itertools
import ntpath
import os
import random
import sys
def add_beast_to_path():
@@ -24,7 +27,7 @@ VARIANT_DIRECTORIES = {
'modules': ('bin', 'modules'),
}
BOOST_LIBRARIES = 'boost_system',
BOOST_LIBRARIES = '' #boost_system'
MAIN_PROGRAM_FILE = 'beast/unit_test/tests/main.cpp'
DOTFILE = '~/.scons'
@@ -43,6 +46,94 @@ def main():
for name, path in VARIANT_DIRECTORIES.items():
env.VariantDir(os.path.join(*path), name, duplicate=0)
env.Replace(PRINT_CMD_LINE_FUNC=Print.print_cmd_line)
Tests.run_tests(env, MAIN_PROGRAM_FILE, '.', '.test.cpp')
#Tests.run_tests(env, MAIN_PROGRAM_FILE, '.', '.test.cpp')
#main()
#-------------------------------------------------------------------------------
def is_unity(path):
b, e = os.path.splitext(path)
return os.path.splitext(b)[1] == '.unity' and e in ['.c', '.cpp']
def files(base):
for parent, _, files in os.walk(base):
for path in files:
path = os.path.join(parent, path)
yield os.path.normpath(path)
#-------------------------------------------------------------------------------
'''
/MP /GS /W3 /wd"4018" /wd"4244" /wd"4267" /Gy- /Zc:wchar_t
/I"D:\lib\OpenSSL-Win64\include" /I"D:\lib\boost_1_55_0"
/I"..\..\src\protobuf\src" /I"..\..\src\protobuf\vsprojects"
/I"..\..\src\leveldb" /I"..\..\src\leveldb\include" /I"..\..\build\proto"
/Zi /Gm- /Od /Fd"..\..\build\obj\VisualStudio2013\Debug.x64\vc120.pdb"
/fp:precise /D "_CRTDBG_MAP_ALLOC" /D "WIN32" /D "_DEBUG" /D "_CONSOLE"
/D "_VARIADIC_MAX=10" /D "_WIN32_WINNT=0x0600" /D "_SCL_SECURE_NO_WARNINGS"
/D "_CRT_SECURE_NO_WARNINGS" /D "_MBCS" /errorReport:prompt /WX- /Zc:forScope
/RTC1 /GR /Gd /MTd /openmp- /Fa"..\..\build\obj\VisualStudio2013\Debug.x64\"
/EHa /nologo /Fo"..\..\build\obj\VisualStudio2013\Debug.x64\"
/Fp"..\..\build\obj\VisualStudio2013\Debug.x64\rippled.pch"
'''
# Path to this SConstruct file
base_dir = Dir('#').srcnode().get_abspath()
base_env = Environment(
tools = ['default', 'VSProject'],
CCCOMSTR = '',
CMDLINE_QUIET = 1,
CPPPATH = [
os.environ['BOOST_ROOT'],
os.environ['OPENSSL_ROOT']
],
CPPDEFINES = [
'_WIN32_WINNT=0x6000']
)
#base_env.Replace(PRINT_CMD_LINE_FUNC=Print.print_cmd_line)
env = base_env
bin_dir = os.path.join(base_dir, 'bin')
srcs = filter(is_unity, list(files('beast')) + list(files('modules')))
for variant in ['Debug']: #, 'Release']:
for platform in ['Win32']:
#env = base_env.Clone()
#env.Replace(PRINT_CMD_LINE_FUNC=Print.print_cmd_line)
variant_dir = os.path.join(bin_dir, variant + '.' + platform)
env.VariantDir(os.path.join(variant_dir, 'beast'), 'beast', duplicate=0)
env.VariantDir(os.path.join(variant_dir, 'modules'), 'modules', duplicate=0)
env.Append(CCFLAGS=[
'/EHsc',
'/bigobj',
'/Fd${TARGET}.pdb'
])
if variant == 'Debug':
env.Append(CCFLAGS=[
'/MTd',
'/Od',
'/Zi'
])
else:
env.Append(CCFLAGS=[
'/MT',
'/Ox'
])
variant_srcs = [os.path.join(variant_dir, os.path.relpath(f, base_dir)) for f in srcs]
beast = env.StaticLibrary(
target = os.path.join(variant_dir, 'beast.lib'),
source = variant_srcs)
env.VSProject (
'out',
buildtarget = beast,
source = filter(is_unity, list(files('beast')) + list(files('modules'))))
env.Default ('out.vcxproj')
#env.Default (os.path.join(bin_dir,'Debug.Win32', 'beast.lib'))
main()


@@ -1,44 +0,0 @@
--------------------------------------------------------------------------------
BEAST TODO
--------------------------------------------------------------------------------
- Change sqdb to use exceptions instead of errors
- Use SemanticVersion for beast version numbers to replace BEAST_VERSION
- add support for a __PRETTY_FUNCTION__ equivalent for all environments
- Import secp256k1 from sipa
- Set sqlite thread safety model to '2' in beast_sqlite
- Document and rename all the sqdb files and classes
- Specialize UnsignedInteger<> for performance in the storage format
- Rename HeapBlock routines to not conflict with _CRTDBG_MAP_ALLOC macros
- Implement beast::Bimap?
- Use Bimap for storage in the DeadlineTimer::Manager, to support
thousands of timers.
- Think about adding a shouldStop bool to InterruptibleThread, along
with a shouldStop () function returning bool, and a stop() method.
- Tidy up CacheLine, MemoryAlignment
- Implement a reasonable substitute for boost's thread_local_storage
- Rename malloc/calloc JUCE members that conflict with the debug CRT from MSVC
- Reformat every Doxygen comment
- Fix Doxygen metatags
- update Beast Doxyfile
- Rename SharedData to SharedState or something?
- Make sure the template BeastConfig.h is included in the Doxyfile
- Implement robust key/value database with bulk write


@@ -24,9 +24,9 @@
#ifndef BEAST_ARITHMETIC_H_INCLUDED
#define BEAST_ARITHMETIC_H_INCLUDED
#include "Config.h"
#include <beast/Config.h>
#include "utility/noexcept.h"
#include <beast/utility/noexcept.h>
#include <cmath>
#include <cstdint>
@@ -34,101 +34,6 @@
namespace beast {
// Some indispensible min/max functions
/** Returns the larger of two values. */
template <typename Type>
inline Type bmax (const Type a, const Type b)
{ return (a < b) ? b : a; }
/** Returns the larger of three values. */
template <typename Type>
inline Type bmax (const Type a, const Type b, const Type c)
{ return (a < b) ? ((b < c) ? c : b) : ((a < c) ? c : a); }
/** Returns the larger of four values. */
template <typename Type>
inline Type bmax (const Type a, const Type b, const Type c, const Type d)
{ return bmax (a, bmax (b, c, d)); }
/** Returns the smaller of two values. */
template <typename Type>
inline Type bmin (const Type a, const Type b)
{ return (b < a) ? b : a; }
/** Returns the smaller of three values. */
template <typename Type>
inline Type bmin (const Type a, const Type b, const Type c)
{ return (b < a) ? ((c < b) ? c : b) : ((c < a) ? c : a); }
/** Returns the smaller of four values. */
template <typename Type>
inline Type bmin (const Type a, const Type b, const Type c, const Type d)
{ return bmin (a, bmin (b, c, d)); }
/** Scans an array of values, returning the minimum value that it contains. */
template <typename Type>
const Type findMinimum (const Type* data, int numValues)
{
if (numValues <= 0)
return Type();
Type result (*data++);
while (--numValues > 0) // (> 0 rather than >= 0 because we've already taken the first sample)
{
const Type& v = *data++;
if (v < result) result = v;
}
return result;
}
/** Scans an array of values, returning the maximum value that it contains. */
template <typename Type>
const Type findMaximum (const Type* values, int numValues)
{
if (numValues <= 0)
return Type();
Type result (*values++);
while (--numValues > 0) // (> 0 rather than >= 0 because we've already taken the first sample)
{
const Type& v = *values++;
if (result < v) result = v;
}
return result;
}
/** Scans an array of values, returning the minimum and maximum values that it contains. */
template <typename Type>
void findMinAndMax (const Type* values, int numValues, Type& lowest, Type& highest)
{
if (numValues <= 0)
{
lowest = Type();
highest = Type();
}
else
{
Type mn (*values++);
Type mx (mn);
while (--numValues > 0) // (> 0 rather than >= 0 because we've already taken the first sample)
{
const Type& v = *values++;
if (mx < v) mx = v;
if (v < mn) mn = v;
}
lowest = mn;
highest = mx;
}
}
//==============================================================================
/** Constrains a value to keep it within a given range.
@@ -151,7 +56,8 @@ inline Type blimit (const Type lowerLimit,
const Type upperLimit,
const Type valueToConstrain) noexcept
{
bassert (lowerLimit <= upperLimit); // if these are in the wrong order, results are unpredictable..
// if these are in the wrong order, results are unpredictable.
bassert (lowerLimit <= upperLimit);
return (valueToConstrain < lowerLimit) ? lowerLimit
: ((upperLimit < valueToConstrain) ? upperLimit
@@ -177,24 +83,6 @@ inline bool isPositiveAndBelow (const int valueToTest, const int upperLimit) noe
return static_cast <unsigned int> (valueToTest) < static_cast <unsigned int> (upperLimit);
}
/** Returns true if a value is at least zero, and also less than or equal to a specified upper limit.
This is basically a quicker way to write:
@code valueToTest >= 0 && valueToTest <= upperLimit
@endcode
*/
template <typename Type>
inline bool isPositiveAndNotGreaterThan (Type valueToTest, Type upperLimit) noexcept
{
bassert (Type() <= upperLimit); // makes no sense to call this if the upper limit is itself below zero..
return Type() <= valueToTest && valueToTest <= upperLimit;
}
template <>
inline bool isPositiveAndNotGreaterThan (const int valueToTest, const int upperLimit) noexcept
{
bassert (upperLimit >= 0); // makes no sense to call this if the upper limit is itself below zero..
return static_cast <unsigned int> (valueToTest) <= static_cast <unsigned int> (upperLimit);
}
//==============================================================================
@@ -214,55 +102,6 @@ int numElementsInArray (Type (&array)[N])
return N;
}
/** 64-bit abs function. */
inline std::int64_t abs64 (const std::int64_t n) noexcept
{
return (n >= 0) ? n : -n;
}
//==============================================================================
#if BEAST_MSVC
#pragma optimize ("t", off)
#ifndef __INTEL_COMPILER
#pragma float_control (precise, on, push)
#endif
#endif
/** Fast floating-point-to-integer conversion.
This is faster than using the normal c++ cast to convert a float to an int, and
it will round the value to the nearest integer, rather than rounding it down
like the normal cast does.
Note that this routine gets its speed at the expense of some accuracy, and when
rounding values whose floating point component is exactly 0.5, odd numbers and
even numbers will be rounded up or down differently.
*/
template <typename FloatType>
inline int roundToInt (const FloatType value) noexcept
{
#ifdef __INTEL_COMPILER
#pragma float_control (precise, on, push)
#endif
union { int asInt[2]; double asDouble; } n;
n.asDouble = ((double) value) + 6755399441055744.0;
#if BEAST_BIG_ENDIAN
return n.asInt [1];
#else
return n.asInt [0];
#endif
}
#if BEAST_MSVC
#ifndef __INTEL_COMPILER
#pragma float_control (pop)
#endif
#pragma optimize ("", on) // resets optimisations to the project defaults
#endif
}
#endif


@@ -1,428 +0,0 @@
//------------------------------------------------------------------------------
/*
This file is part of Beast: https://github.com/vinniefalco/Beast
Copyright 2013, Vinnie Falco <vinnie.falco@gmail.com>
Portions of this file are from JUCE.
Copyright (c) 2013 - Raw Material Software Ltd.
Please visit http://www.juce.com
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef BEAST_ATOMIC_H_INCLUDED
#define BEAST_ATOMIC_H_INCLUDED
#include "Config.h"
#include "StaticAssert.h"
#include "utility/noexcept.h"
#include <cstdint>
namespace beast {
//==============================================================================
/**
Simple class to hold a primitive value and perform atomic operations on it.
The type used must be a 32 or 64 bit primitive, like an int, pointer, etc.
There are methods to perform most of the basic atomic operations.
*/
template <typename Type>
class Atomic
{
public:
/** Creates a new value, initialised to zero. */
inline Atomic() noexcept
: value (0)
{
}
/** Creates a new value, with a given initial value. */
inline Atomic (const Type initialValue) noexcept
: value (initialValue)
{
}
/** Copies another value (atomically). */
inline Atomic (const Atomic& other) noexcept
: value (other.get())
{
}
/** Destructor. */
inline ~Atomic() noexcept
{
// This class can only be used for types which are 32 or 64 bits in size.
static_bassert (sizeof (Type) == 4 || sizeof (Type) == 8);
}
/** Atomically reads and returns the current value. */
Type get() const noexcept;
/** Copies another value onto this one (atomically). */
Atomic& operator= (const Atomic& other) noexcept
{ exchange (other.get()); return *this; }
/** Copies another value onto this one (atomically). */
Atomic& operator= (const Type newValue) noexcept
{ exchange (newValue); return *this; }
/** Atomically sets the current value. */
void set (Type newValue) noexcept
{ exchange (newValue); }
/** Atomically sets the current value, returning the value that was replaced. */
Type exchange (Type value) noexcept;
/** Atomically adds a number to this value, returning the new value. */
Type operator+= (Type amountToAdd) noexcept;
/** Atomically subtracts a number from this value, returning the new value. */
Type operator-= (Type amountToSubtract) noexcept;
/** Atomically increments this value, returning the new value. */
Type operator++() noexcept;
/** Atomically decrements this value, returning the new value. */
Type operator--() noexcept;
/** Atomically compares this value with a target value, and if it is equal, sets
this to be equal to a new value.
This operation is the atomic equivalent of doing this:
@code
bool compareAndSetBool (Type newValue, Type valueToCompare)
{
if (get() == valueToCompare)
{
set (newValue);
return true;
}
return false;
}
@endcode
@returns true if the comparison was true and the value was replaced; false if
the comparison failed and the value was left unchanged.
@see compareAndSetValue
*/
bool compareAndSetBool (Type newValue, Type valueToCompare) noexcept;
/** Atomically compares this value with a target value, and if it is equal, sets
this to be equal to a new value.
This operation is the atomic equivalent of doing this:
@code
Type compareAndSetValue (Type newValue, Type valueToCompare)
{
Type oldValue = get();
if (oldValue == valueToCompare)
set (newValue);
return oldValue;
}
@endcode
@returns the old value before it was changed.
@see compareAndSetBool
*/
Type compareAndSetValue (Type newValue, Type valueToCompare) noexcept;
//==============================================================================
#if BEAST_64BIT
BEAST_ALIGN (8)
#else
BEAST_ALIGN (4)
#endif
/** The raw value that this class operates on.
This is exposed publically in case you need to manipulate it directly
for performance reasons.
*/
volatile Type value;
private:
template <typename Dest, typename Source>
static inline Dest castTo (Source value) noexcept { union { Dest d; Source s; } u; u.s = value; return u.d; }
static inline Type castFrom32Bit (std::int32_t value) noexcept { return castTo <Type, std::int32_t> (value); }
static inline Type castFrom64Bit (std::int64_t value) noexcept { return castTo <Type, std::int64_t> (value); }
static inline std::int32_t castTo32Bit (Type value) noexcept { return castTo <std::int32_t, Type> (value); }
static inline std::int64_t castTo64Bit (Type value) noexcept { return castTo <std::int64_t, Type> (value); }
Type operator++ (int); // better to just use pre-increment with atomics..
Type operator-- (int);
/** This templated negate function will negate pointers as well as integers */
template <typename ValueType>
inline ValueType negateValue (ValueType n) noexcept
{
return sizeof (ValueType) == 1 ? (ValueType) -(signed char) n
: (sizeof (ValueType) == 2 ? (ValueType) -(short) n
: (sizeof (ValueType) == 4 ? (ValueType) -(int) n
: ((ValueType) -(std::int64_t) n)));
}
/** This templated negate function will negate pointers as well as integers */
template <typename PointerType>
inline PointerType* negateValue (PointerType* n) noexcept
{
return reinterpret_cast <PointerType*> (-reinterpret_cast <std::intptr_t> (n));
}
};
//==============================================================================
/*
The following code is in the header so that the atomics can be inlined where possible...
*/
#if BEAST_IOS || (BEAST_MAC && (BEAST_PPC || BEAST_CLANG || __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 2)))
#define BEAST_ATOMICS_MAC 1 // Older OSX builds using gcc4.1 or earlier
#if MAC_OS_X_VERSION_MIN_REQUIRED < MAC_OS_X_VERSION_10_5
#define BEAST_MAC_ATOMICS_VOLATILE
#else
#define BEAST_MAC_ATOMICS_VOLATILE volatile
#endif
#if BEAST_PPC || BEAST_IOS
// None of these atomics are available for PPC or for iOS 3.1 or earlier!!
template <typename Type> static Type OSAtomicAdd64Barrier (Type b, BEAST_MAC_ATOMICS_VOLATILE Type* a) noexcept { bassertfalse; return *a += b; }
template <typename Type> static Type OSAtomicIncrement64Barrier (BEAST_MAC_ATOMICS_VOLATILE Type* a) noexcept { bassertfalse; return ++*a; }
template <typename Type> static Type OSAtomicDecrement64Barrier (BEAST_MAC_ATOMICS_VOLATILE Type* a) noexcept { bassertfalse; return --*a; }
template <typename Type> static bool OSAtomicCompareAndSwap64Barrier (Type old, Type newValue, BEAST_MAC_ATOMICS_VOLATILE Type* value) noexcept
{ bassertfalse; if (old == *value) { *value = newValue; return true; } return false; }
#define BEAST_64BIT_ATOMICS_UNAVAILABLE 1
#endif
#elif BEAST_CLANG && BEAST_LINUX
#define BEAST_ATOMICS_GCC 1
//==============================================================================
#elif BEAST_GCC
#define BEAST_ATOMICS_GCC 1 // GCC with intrinsics
#if BEAST_IOS || BEAST_ANDROID // (64-bit ops will compile but not link on these mobile OSes)
#define BEAST_64BIT_ATOMICS_UNAVAILABLE 1
#endif
//==============================================================================
#else
#define BEAST_ATOMICS_WINDOWS 1 // Windows with intrinsics
#if BEAST_USE_INTRINSICS
#ifndef __INTEL_COMPILER
#pragma intrinsic (_InterlockedExchange, _InterlockedIncrement, _InterlockedDecrement, _InterlockedCompareExchange, \
_InterlockedCompareExchange64, _InterlockedExchangeAdd, _ReadWriteBarrier)
#endif
#define beast_InterlockedExchange(a, b) _InterlockedExchange(a, b)
#define beast_InterlockedIncrement(a) _InterlockedIncrement(a)
#define beast_InterlockedDecrement(a) _InterlockedDecrement(a)
#define beast_InterlockedExchangeAdd(a, b) _InterlockedExchangeAdd(a, b)
#define beast_InterlockedCompareExchange(a, b, c) _InterlockedCompareExchange(a, b, c)
#define beast_InterlockedCompareExchange64(a, b, c) _InterlockedCompareExchange64(a, b, c)
#define beast_MemoryBarrier _ReadWriteBarrier
#else
long beast_InterlockedExchange (volatile long* a, long b) noexcept;
long beast_InterlockedIncrement (volatile long* a) noexcept;
long beast_InterlockedDecrement (volatile long* a) noexcept;
long beast_InterlockedExchangeAdd (volatile long* a, long b) noexcept;
long beast_InterlockedCompareExchange (volatile long* a, long b, long c) noexcept;
__int64 beast_InterlockedCompareExchange64 (volatile __int64* a, __int64 b, __int64 c) noexcept;
inline void beast_MemoryBarrier() noexcept { long x = 0; beast_InterlockedIncrement (&x); }
#endif
#if BEAST_64BIT
#ifndef __INTEL_COMPILER
#pragma intrinsic (_InterlockedExchangeAdd64, _InterlockedExchange64, _InterlockedIncrement64, _InterlockedDecrement64)
#endif
#define beast_InterlockedExchangeAdd64(a, b) _InterlockedExchangeAdd64(a, b)
#define beast_InterlockedExchange64(a, b) _InterlockedExchange64(a, b)
#define beast_InterlockedIncrement64(a) _InterlockedIncrement64(a)
#define beast_InterlockedDecrement64(a) _InterlockedDecrement64(a)
#else
// None of these atomics are available in a 32-bit Windows build!!
template <typename Type> static Type beast_InterlockedExchangeAdd64 (volatile Type* a, Type b) noexcept { bassertfalse; Type old = *a; *a += b; return old; }
template <typename Type> static Type beast_InterlockedExchange64 (volatile Type* a, Type b) noexcept { bassertfalse; Type old = *a; *a = b; return old; }
template <typename Type> static Type beast_InterlockedIncrement64 (volatile Type* a) noexcept { bassertfalse; return ++*a; }
template <typename Type> static Type beast_InterlockedDecrement64 (volatile Type* a) noexcept { bassertfalse; return --*a; }
#define BEAST_64BIT_ATOMICS_UNAVAILABLE 1
#endif
#endif
#if BEAST_MSVC
#pragma warning (push)
#pragma warning (disable: 4311) // (truncation warning)
#endif
//==============================================================================
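// get() is implemented as an atomic add of zero: it returns the current value
// while imposing a full memory barrier on every platform branch.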
template <typename Type>
inline Type Atomic<Type>::get() const noexcept
{
#if BEAST_ATOMICS_MAC
return sizeof (Type) == 4 ? castFrom32Bit ((std::int32_t) OSAtomicAdd32Barrier ((int32_t) 0, (BEAST_MAC_ATOMICS_VOLATILE int32_t*) &value))
: castFrom64Bit ((std::int64_t) OSAtomicAdd64Barrier ((int64_t) 0, (BEAST_MAC_ATOMICS_VOLATILE int64_t*) &value));
#elif BEAST_ATOMICS_WINDOWS
return sizeof (Type) == 4 ? castFrom32Bit ((std::int32_t) beast_InterlockedExchangeAdd ((volatile long*) &value, (long) 0))
: castFrom64Bit ((std::int64_t) beast_InterlockedExchangeAdd64 ((volatile __int64*) &value, (__int64) 0));
#elif BEAST_ATOMICS_GCC
return sizeof (Type) == 4 ? castFrom32Bit ((std::int32_t) __sync_add_and_fetch ((volatile std::int32_t*) &value, 0))
: castFrom64Bit ((std::int64_t) __sync_add_and_fetch ((volatile std::int64_t*) &value, 0));
#endif
}
template <typename Type>
inline Type Atomic<Type>::exchange (const Type newValue) noexcept
{
#if BEAST_ATOMICS_MAC || BEAST_ATOMICS_GCC
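// No native atomic-exchange primitive is used on these platforms: spin,
// re-reading the current value, until a CAS from that value to newValue
// succeeds, then return the value that was swapped out.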
Type currentVal = value;
while (! compareAndSetBool (newValue, currentVal)) { currentVal = value; }
return currentVal;
#elif BEAST_ATOMICS_WINDOWS
return sizeof (Type) == 4 ? castFrom32Bit ((std::int32_t) beast_InterlockedExchange ((volatile long*) &value, (long) castTo32Bit (newValue)))
: castFrom64Bit ((std::int64_t) beast_InterlockedExchange64 ((volatile __int64*) &value, (__int64) castTo64Bit (newValue)));
#endif
}
template <typename Type>
inline Type Atomic<Type>::operator+= (const Type amountToAdd) noexcept
{
#if BEAST_ATOMICS_MAC
# ifdef __clang__
# pragma clang diagnostic push
# pragma clang diagnostic ignored "-Wint-to-void-pointer-cast"
# pragma clang diagnostic ignored "-Wint-to-pointer-cast"
# endif
return sizeof (Type) == 4 ? (Type) OSAtomicAdd32Barrier ((int32_t) castTo32Bit (amountToAdd), (BEAST_MAC_ATOMICS_VOLATILE int32_t*) &value)
: (Type) OSAtomicAdd64Barrier ((int64_t) amountToAdd, (BEAST_MAC_ATOMICS_VOLATILE int64_t*) &value);
# ifdef __clang__
# pragma clang diagnostic pop
# endif
#elif BEAST_ATOMICS_WINDOWS
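// _InterlockedExchangeAdd returns the value prior to the addition, so the
// amount is added again here to yield the new value, matching the
// __sync_add_and_fetch semantics of the GCC branch.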
return sizeof (Type) == 4 ? (Type) (beast_InterlockedExchangeAdd ((volatile long*) &value, (long) amountToAdd) + (long) amountToAdd)
: (Type) (beast_InterlockedExchangeAdd64 ((volatile __int64*) &value, (__int64) amountToAdd) + (__int64) amountToAdd);
#elif BEAST_ATOMICS_GCC
return (Type) __sync_add_and_fetch (&value, amountToAdd);
#endif
}
template <typename Type>
inline Type Atomic<Type>::operator-= (const Type amountToSubtract) noexcept
{
return operator+= (negateValue (amountToSubtract));
}
template <typename Type>
inline Type Atomic<Type>::operator++() noexcept
{
#if BEAST_ATOMICS_MAC
# ifdef __clang__
# pragma clang diagnostic push
# pragma clang diagnostic ignored "-Wint-to-void-pointer-cast"
# pragma clang diagnostic ignored "-Wint-to-pointer-cast"
# endif
return sizeof (Type) == 4 ? (Type) OSAtomicIncrement32Barrier ((BEAST_MAC_ATOMICS_VOLATILE int32_t*) &value)
: (Type) OSAtomicIncrement64Barrier ((BEAST_MAC_ATOMICS_VOLATILE int64_t*) &value);
# ifdef __clang__
# pragma clang diagnostic pop
# endif
#elif BEAST_ATOMICS_WINDOWS
return sizeof (Type) == 4 ? (Type) beast_InterlockedIncrement ((volatile long*) &value)
: (Type) beast_InterlockedIncrement64 ((volatile __int64*) &value);
#elif BEAST_ATOMICS_GCC
return (Type) __sync_add_and_fetch (&value, (Type) 1);
#endif
}
template <typename Type>
inline Type Atomic<Type>::operator--() noexcept
{
#if BEAST_ATOMICS_MAC
# ifdef __clang__
# pragma clang diagnostic push
# pragma clang diagnostic ignored "-Wint-to-void-pointer-cast"
# pragma clang diagnostic ignored "-Wint-to-pointer-cast"
# endif
return sizeof (Type) == 4 ? (Type) OSAtomicDecrement32Barrier ((BEAST_MAC_ATOMICS_VOLATILE int32_t*) &value)
: (Type) OSAtomicDecrement64Barrier ((BEAST_MAC_ATOMICS_VOLATILE int64_t*) &value);
# ifdef __clang__
# pragma clang diagnostic pop
# endif
#elif BEAST_ATOMICS_WINDOWS
return sizeof (Type) == 4 ? (Type) beast_InterlockedDecrement ((volatile long*) &value)
: (Type) beast_InterlockedDecrement64 ((volatile __int64*) &value);
#elif BEAST_ATOMICS_GCC
return (Type) __sync_add_and_fetch (&value, (Type) -1);
#endif
}
template <typename Type>
inline bool Atomic<Type>::compareAndSetBool (const Type newValue, const Type valueToCompare) noexcept
{
#if BEAST_ATOMICS_MAC
return sizeof (Type) == 4 ? OSAtomicCompareAndSwap32Barrier ((int32_t) castTo32Bit (valueToCompare), (int32_t) castTo32Bit (newValue), (BEAST_MAC_ATOMICS_VOLATILE int32_t*) &value)
: OSAtomicCompareAndSwap64Barrier ((int64_t) castTo64Bit (valueToCompare), (int64_t) castTo64Bit (newValue), (BEAST_MAC_ATOMICS_VOLATILE int64_t*) &value);
#elif BEAST_ATOMICS_WINDOWS
return compareAndSetValue (newValue, valueToCompare) == valueToCompare;
#elif BEAST_ATOMICS_GCC
return sizeof (Type) == 4 ? __sync_bool_compare_and_swap ((volatile std::int32_t*) &value, castTo32Bit (valueToCompare), castTo32Bit (newValue))
: __sync_bool_compare_and_swap ((volatile std::int64_t*) &value, castTo64Bit (valueToCompare), castTo64Bit (newValue));
#endif
}
template <typename Type>
inline Type Atomic<Type>::compareAndSetValue (const Type newValue, const Type valueToCompare) noexcept
{
#if BEAST_ATOMICS_MAC
for (;;) // Annoying workaround for only having a bool CAS operation..
{
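// A successful CAS means the value seen was valueToCompare, so return it.
// On failure, re-read: if the current value differs, return that instead;
// if it matches valueToCompare again (another thread raced us), retry.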
if (compareAndSetBool (newValue, valueToCompare))
return valueToCompare;
const Type result = value;
if (result != valueToCompare)
return result;
}
#elif BEAST_ATOMICS_WINDOWS
return sizeof (Type) == 4 ? castFrom32Bit ((std::int32_t) beast_InterlockedCompareExchange ((volatile long*) &value, (long) castTo32Bit (newValue), (long) castTo32Bit (valueToCompare)))
: castFrom64Bit ((std::int64_t) beast_InterlockedCompareExchange64 ((volatile __int64*) &value, (__int64) castTo64Bit (newValue), (__int64) castTo64Bit (valueToCompare)));
#elif BEAST_ATOMICS_GCC
return sizeof (Type) == 4 ? castFrom32Bit ((std::int32_t) __sync_val_compare_and_swap ((volatile std::int32_t*) &value, castTo32Bit (valueToCompare), castTo32Bit (newValue)))
: castFrom64Bit ((std::int64_t) __sync_val_compare_and_swap ((volatile std::int64_t*) &value, castTo64Bit (valueToCompare), castTo64Bit (newValue)));
#endif
}
inline void memoryBarrier() noexcept
{
#if BEAST_ATOMICS_MAC
OSMemoryBarrier();
#elif BEAST_ATOMICS_GCC
__sync_synchronize();
#elif BEAST_ATOMICS_WINDOWS
beast_MemoryBarrier();
#endif
}
#if BEAST_MSVC
#pragma warning (pop)
#endif
}
#endif
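
For orientation, here is a minimal usage sketch of the beast::Atomic<Type> wrapper defined above. It is illustrative only: the names and values are invented, and it assumes the default constructor zero-initialises the stored value (the constructor is not shown in this diff).

// Illustrative sketch only -- not part of the diff above.
beast::Atomic<std::int32_t> counter;                 // assumed to start at 0

++counter;                                           // atomic pre-increment, full barrier
counter += 5;                                        // atomic add, returns the new value
std::int32_t const seen = counter.get();             // atomic read

// Compare-and-set: store 100 only if the value is still equal to 'seen'.
if (counter.compareAndSetBool (100, seen))
{
    // the swap succeeded
}

std::int32_t const previous = counter.exchange (0);  // swap in 0, get the old value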


@@ -22,6 +22,6 @@
 // These classes require boost in order to be used.
-#include "boost/ErrorCode.h"
+#include <beast/boost/ErrorCode.h>
 #endif


@@ -24,8 +24,7 @@
 #ifndef BEAST_BYTEORDER_H_INCLUDED
 #define BEAST_BYTEORDER_H_INCLUDED
-#include "Config.h"
-#include "Uncopyable.h"
+#include <beast/Config.h>
 #include <cstdint>
@@ -35,7 +34,7 @@ namespace beast {
 /** Contains static methods for converting the byte order between different
     endiannesses.
 */
-class ByteOrder : public Uncopyable
+class ByteOrder
 {
 public:
     //==============================================================================
@@ -105,6 +104,8 @@ public:
 private:
     ByteOrder();
+    ByteOrder(ByteOrder const&) = delete;
+    ByteOrder& operator= (ByteOrder const&) = delete;
 };
 //==============================================================================
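
The last hunk above replaces the Uncopyable base class with explicitly deleted copy operations. As a sketch of that idiom under a hypothetical class name (StaticUtils is not part of beast):

// Hypothetical example of the pattern the diff adopts: a static-only utility
// class made non-instantiable and non-copyable without an Uncopyable base.
class StaticUtils
{
public:
    static int answer() noexcept { return 42; }
private:
    StaticUtils();                                    // never constructed
    StaticUtils (StaticUtils const&) = delete;        // non-copyable
    StaticUtils& operator= (StaticUtils const&) = delete;
};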

Some files were not shown because too many files have changed in this diff.