Replace Journal public data members with member function accessors
to make Journal lighter weight. The change makes a
Journal cheaper to pass by value.
Also add missing stream checks (e.g., calls to JLOG) to avoid
formatting text that ultimately will not be stored in the log.
The Journal API is affected. There are two uses for the
Journal::Severity enum:
o It declares a threshold that log messages must meet
in order to be logged.
o It declares the level of the current log message, which is
compared to the threshold.
Uses that affect the threshold are now named threshold()
rather than severity() to make the two easier to distinguish.
Additionally, Journal no longer carries a Severity variable.
All handling of the threshold() is now delegated to the
Journal::Sink.
Sinks are no longer constructed with a default threshold of
kWarning; their threshold must be passed in on construction.
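As a rough sketch of the resulting shape (all names here, including
JLOG_SKETCH, are simplified stand-ins; the real Journal differs in detail),
note how the Journal holds only a Sink pointer, the threshold lives on the
Sink, and the stream check skips formatting entirely when a message will be
discarded:

    #include <iostream>
    #include <sstream>
    #include <string>

    enum class Severity { kTrace, kDebug, kInfo, kWarning, kError, kFatal };

    class Sink
    {
        Severity threshold_;
    public:
        // No default threshold: it must be supplied at construction.
        explicit Sink(Severity threshold) : threshold_(threshold) {}
        Severity threshold() const { return threshold_; }
        void threshold(Severity t) { threshold_ = t; }
        void write(std::string const& text) { std::cout << text << '\n'; }
    };

    class Journal
    {
        Sink* sink_;   // only a pointer: cheap to pass by value
    public:
        explicit Journal(Sink& sink) : sink_(&sink) {}
        Sink& sink() const { return *sink_; }   // accessor, not a public member
        bool active(Severity s) const { return s >= sink_->threshold(); }
    };

    // A stream check: the message is only formatted when it will be kept.
    #define JLOG_SKETCH(journal, sev, text)          \
        do {                                         \
            if ((journal).active(sev))               \
            {                                        \
                std::ostringstream ss;               \
                ss << text;                          \
                (journal).sink().write(ss.str());    \
            }                                        \
        } while (false)

    int main()
    {
        Sink sink{Severity::kWarning};   // threshold passed in on construction
        Journal j{sink};
        JLOG_SKETCH(j, Severity::kDebug, "discarded without any formatting");
        JLOG_SKETCH(j, Severity::kError, "kept: " << 42);
    }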
The RippleAddress class was used to represent a number of fundamentally
different types: account public keys, account secret keys, node public
keys, node secret keys, seeds and generators.
The class is replaced by the following types:
* PublicKey for account and node public keys
* SecretKey for account and node private keys
* Generator for generating secp256k1 accounts
* Seed for account, node and generator seeds
* A new, unified interface for generating random numbers and
filling buffers, supporting any engine that satisfies the
UniformRandomNumberGenerator concept (see the sketch after this list).
* Automatically seeded replacement for rand using the fast
xorshift+ PRNG engine.
* A CSPRNG engine that can be used with the new framework
when needing to generate cryptographically secure randomness.
* Unit test cleanups to work with the new engine.
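For illustration, here is a minimal engine in the xorshift+ family that
meets the UniformRandomNumberGenerator requirements (result_type, min(),
max(), operator()) and therefore plugs into the standard distributions; it
is a sketch of the concept only, not the engine shipped in the codebase:

    #include <cstdint>
    #include <iostream>
    #include <limits>
    #include <random>

    class xorshift_engine
    {
    public:
        using result_type = std::uint64_t;

        explicit xorshift_engine(result_type seed = 1)
            : s0_(seed | 1), s1_(seed ^ 0x9e3779b97f4a7c15ull)
        {
        }

        static constexpr result_type min() { return 0; }
        static constexpr result_type max()
        {
            return std::numeric_limits<result_type>::max();
        }

        result_type operator()()
        {
            // xorshift128+ step
            auto x = s0_;
            auto const y = s1_;
            s0_ = y;
            x ^= x << 23;
            s1_ = x ^ y ^ (x >> 17) ^ (y >> 26);
            return s1_ + y;
        }

    private:
        result_type s0_;
        result_type s1_;
    };

    int main()
    {
        // Any conforming engine works with the standard distributions; a
        // CSPRNG engine would be substituted where security matters.
        xorshift_engine eng{std::random_device{}()};
        std::uniform_int_distribution<int> die(1, 6);
        std::cout << die(eng) << '\n';
    }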
Multiple servers behind NAT might share a single public IP, making it
difficult for them to connect to the Ripple network since multiple
incoming connections from the same non-private IP are currently not
allowed.
rippled now automatically allows between 2 and 5 incoming connections
from the same public IP, based on the total number of peers that it is
configured to accept.
Administrators can manually change the limit by adding an "ip_limit"
key-value pair to the [overlay] stanza of the configuration file and
specifying a positive, non-zero number. For example:
[overlay]
ip_limit=3
The previous "one connection per IP" strategy can be emulated by
setting "ip_limit" to 1.
The implementation imposes both soft and hard upper limits and will
adjust the value so that a single IP cannot consume all inbound slots.
* Remove cxx14 compatibility layer from ripple
* Update travis to clang 3.6 and drop gcc 4.8
* Remove unneeded beast CXX14 defines
* Do not run the clang build under gdb on Travis
* Update Circle CI to clang 3.6 & gcc-5
* Don't run rippled under gdb; clang builds crash gdb
* Statically link libstdc++, boost, ssl, & protobuf
* Support builds on Ubuntu 15.10
* Each peer has a "sane/insane/unknown" status
* Status updated based on peer ledger sequence
* Status reported in peer json
* Only sane peers preferred for historical ledgers
* Overlay endpoints only accepted from known sane peers
* Untrusted proposals not relayed from insane peers
* Untrusted validations not relayed from insane peers
* Transactions from insane peers are not processed
* Periodically drop outbound connections to bad peers
* Bad peers get bootcache valence of zero
Peer "sanity" is based on the ledger sequence number they are on. We
quickly become able to assess this based on current trusted validations.
We quarrantine rogue messages and disconnect bad outbound connections to
help maintain the configured number of good outbound connections.
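A rough sketch of the classification idea (the names and the tolerance
value are illustrative assumptions, not the values rippled uses):

    #include <cstdint>

    enum class Sanity { unknown, sane, insane };

    // Compare the ledger sequence a peer reports against the sequence of
    // our current trusted validations.
    Sanity
    classifyPeer(std::uint32_t peerSeq, std::uint32_t validatedSeq)
    {
        if (peerSeq == 0)
            return Sanity::unknown;           // no ledger reported yet
        std::uint32_t const tolerance = 100;  // hypothetical allowance
        if (peerSeq + tolerance < validatedSeq ||
            validatedSeq + tolerance < peerSeq)
            return Sanity::insane;            // too far behind or ahead
        return Sanity::sane;
    }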
* Brings the soci subtree into rippled.
* Validator, peerfinder, and SHAMapStore use new soci backend.
* Optional postgresql backend for soci (if POSTGRESQL_ROOT env var is set).
Legacy workarounds for Visual Studio's non-thread-safe initialization
of function-local objects with static storage duration are removed
(the standard replacement pattern is sketched after the list):
* Remove LeakChecked
* Remove StaticObject
* Remove SharedSingleton
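The standard replacement is a function-local static, whose initialization
C++11 guarantees to be thread-safe; the names below are hypothetical and
only illustrate the pattern:

    #include <string>

    struct Registry
    {
        std::string name{"default"};
    };

    // Thread-safe in C++11, replacing the LeakChecked/StaticObject/
    // SharedSingleton style helpers.
    Registry&
    getRegistry()
    {
        static Registry instance;
        return instance;
    }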
This adds support for a /crawl request, issued over HTTPS to the configured
peer protocol port. The response to the request is a JSON object containing
the node public key, type, and IP address of each directly connected neighbor.
The IP address is suppressed unless the neighbor has requested that its address
be revealed by adding "Crawl: public" to its HTTP headers. This header is
currently controlled by the peer_private option in the rippled.cfg file.
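For illustration only, the response might be shaped roughly as follows;
the exact key names are chosen by the crawl handler and are assumptions
here (the second neighbor omits its IP because it did not send
"Crawl: public"):

    {
      "overlay": {
        "active": [
          { "public_key": "n9K...", "type": "peer", "ip": "192.0.2.1" },
          { "public_key": "n9L...", "type": "peer" }
        ]
      }
    }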
An alternative to the unity build, the classic build compiles each
translation unit individually. This adds more modules to the classic build:
* Remove unity header app.h
* Add missing includes as needed
* Remove obsolete NodeStore backend code
* Add app/, core/, crypto/, json/, net/, overlay/, peerfinder/ to classic build
Source files are split so that all unit test code is placed in translation
units ending in .test.cpp, located in directories named "test", with no
other business logic in the same file.
A new target is added to the SConstruct, invoked by:
scons count
This prints the total number of source code lines occupied by unit tests
in rippled-specific code, excluding library subtrees.
This implements the bare minimum necessary to store a 33-byte public
key and use it in ordered containers. It is an efficient and well-defined
alternative to RippleAddress when the caller only needs
a node public key.
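A minimal sketch of the idea (the real PublicKey differs; this only shows
fixed 33-byte storage plus an ordering so the type can serve as a std::set
or std::map key):

    #include <array>
    #include <cstdint>
    #include <cstring>
    #include <set>

    class PublicKeySketch
    {
        std::array<std::uint8_t, 33> data_{};

    public:
        PublicKeySketch() = default;

        explicit PublicKeySketch(std::uint8_t const (&bytes)[33])
        {
            std::memcpy(data_.data(), bytes, 33);
        }

        friend bool
        operator<(PublicKeySketch const& lhs, PublicKeySketch const& rhs)
        {
            return lhs.data_ < rhs.data_;   // lexicographic byte comparison
        }
    };

    int main()
    {
        std::set<PublicKeySketch> keys;   // ordered container, no hashing
        keys.insert(PublicKeySketch{});
    }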
The abstract_clock is now templated on a type meeting the requirements of
the Clock concept. It inherits the nested types of the Clock on which it
is based. This resolves a problem with the original design, which broke the
type safety of time_points obtained from different abstract clocks.
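A sketch of the templated design (names simplified): the abstract clock
re-exports the nested types of the Clock it is based on, so time_points
from different underlying clocks remain distinct, incompatible types:

    #include <chrono>

    template <class Clock>
    class abstract_clock_sketch
    {
    public:
        using rep = typename Clock::rep;
        using period = typename Clock::period;
        using duration = typename Clock::duration;
        using time_point = typename Clock::time_point;

        virtual ~abstract_clock_sketch() = default;
        virtual time_point now() const = 0;
    };

    // A concrete clock forwarding to std::chrono::steady_clock. A time_point
    // from abstract_clock_sketch<std::chrono::system_clock> is a different
    // type and will not silently mix with this one.
    class steady_sketch : public abstract_clock_sketch<std::chrono::steady_clock>
    {
    public:
        time_point now() const override
        {
            return std::chrono::steady_clock::now();
        }
    };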
This introduces a considerable change in the way that peers handshake. Instead
of sending the TMHello protocol message, the peer making the connection (client
role) sends an HTTP Upgrade request along with some special headers. The peer
acting in the server role sends an HTTP response completing the upgrade and
transition to RTXP (Ripple Transaction Protocol, a.k.a. peer protocol). If the
server has no available slots, then it sends a 503 Service Unavailable HTTP
response with a JSON content-body containing IP addresses of other servers to
try. The information that was previously contained in the TMHello message is
now communicated in the HTTP request and HTTP response including the secure
cookie to prevent man in the middle attacks. This information is documented
in the overlay README.md file.
To prevent disruption on the network, the handshake feature is rolled out in
two parts. This is part 1, where new servents acting in the client role will
send the old style TMHello handshake, and new servents acting in the server
role can automatically detect and accept both the old style TMHello handshake,
or the HTTP request accordingly. This detection happens in the Server module,
which supports the universal port. An experimental .cfg setting allows clients
to instead send HTTP handshakes when establishing peer connections. When this
code has reached a significant fraction of the network, these clients will be
able to establish a connection to the Ripple network using HTTP handshakes.
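For orientation, a rough illustration of the exchange; the protocol
token/version and the rippled-specific headers (public key, secure cookie
signature, and so on) are shown only as placeholders, since their exact
names live in the overlay README:

    Client request:
        GET / HTTP/1.1
        Connection: Upgrade
        Upgrade: RTXP/<version>
        ... rippled-specific headers (public key, secure cookie, etc.) ...

    Server accepts the upgrade (standard upgrade status shown):
        HTTP/1.1 101 Switching Protocols
        Connection: Upgrade
        Upgrade: RTXP/<version>

    Server has no available slots:
        HTTP/1.1 503 Service Unavailable
        Content-Type: application/json
        ... JSON content-body listing alternate IP:port endpoints to try ...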
These changes clean up the handling of the peer socket. They fix a
long-standing bug in the graceful close sequence, where remaining data, such
as the IP addresses of other servers to try, did not get sent. Redundant state
variables for the peer are removed and the treatment of completion handlers is
streamlined. The treatment of SSL short reads and secure shutdown is also fixed.
Logging for peers in the overlay module is divided into two partitions:
"Peer" and "Protocol". The Peer partition records activity taking place on the
socket while the Protocol partition informs about RTXP specific actions such as
transaction relay, fetch packs, and consensus rounds. The severity on the log
partitions may be adjusted independently to diagnose problems. Every log
message for peers is prefixed with a small, unique integer id in brackets,
to accurately associate log messages with peers.
HTTP handshaking is the first step in implementing the Hub and Spoke feature,
which transforms the overlay from a homogeneous network, where all peers are
the same, into a structured network. Peers with above-average capacity to
process ledgers and transactions self-assemble into a backbone of
high-powered machines, which in turn serve a much larger number of
lower-capacity 'leaves', with the goal of increasing the number of
transactions that can be retired over time.
* New src/ripple/crypto and src/ripple/protocol directories
* Merged src/ripple/common into src/ripple/basics
* Move resource/api files up a level
* Add headers for "include what you use"
* Normalized include guards
* Renamed to JsonFields.h
* Remove obsolete files
* Remove net.h unity header
* Remove resource.h unity header
* Removed some deprecated unity includes
This changes the behavior and configuration specification of the listening
ports that rippled uses to accept incoming connections for the supported
protocols: peer (Peer Protocol), http (JSON-RPC over HTTP), https (JSON-RPC
over HTTPS), ws (Websocket clients), and wss (secure Websocket clients).
Each listening port is now capable of handshaking in multiple protocols
specified in the configuration file (subject to some restrictions). Each
port can be configured to provide its own SSL certificate, or to use a
self-signed certificate. Ports can be configured to share settings, which
allows multiple ports to use the same certificate or other values. The list
of ports is dynamic; administrators can open as few or as many ports as they
like. Authentication settings such as user/password or admin user/admin
password (for administrative commands on RPC or Websockets interfaces) can
also be specified per-port.
As the configuration file has changed significantly, administrators will
need to update their rippled.cfg files and carefully review the documentation
and new settings.
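For orientation, a hedged example of the new per-port style (port names,
numbers, and values here are illustrative only; rippled-example.cfg is the
authoritative reference):

    [server]
    port_rpc
    port_peer
    port_ws

    [port_rpc]
    ip = 127.0.0.1
    port = 5005
    protocol = http
    admin = 127.0.0.1

    [port_peer]
    ip = 0.0.0.0
    port = 51235
    protocol = peer

    [port_ws]
    ip = 0.0.0.0
    port = 5006
    protocol = ws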
Changes:
* rippled-example.cfg updated with documentation and new example settings:
All obsolete websocket, rpc, and peer configuration sections have been
removed, the documentation updated, and a new documented set of example
settings added.
* HTTP::Writer abstraction for sending HTTP server requests and responses
* HTTP::Handler handler improvements to support Universal Port
* HTTP::Handler handler supports legacy Peer protocol handshakes
* HTTP::Port uses shared_ptr<boost::asio::ssl::context>
* HTTP::PeerImp and Overlay use ssl_bundle to support Universal Port
* New JsonWriter to stream message and body through HTTP server
* ServerHandler refactored to support Universal Port and legacy peers
* ServerHandler Setup struct updated for Universal Port
* Refactor some PeerFinder members
* WSDoor and Websocket code stores and uses the HTTP::Port configuration
* Websocket autotls class receives the current secure/plain SSL setting
* Remove PeerDoor and obsolete Overlay peer accept code
* Remove obsolete RPCDoor and synchronous RPC handling code
* Remove other obsolete classes, types, and files
* Command line tool uses ServerHandler Setup for port and authorization info
* Fix handling of admin_user, admin_password in administrative commands
* Fix adminRole to check credentials for Universal Port
* Updated Overlay README.md
* Overlay sends IP:port redirects on HTTP Upgrade peer connection requests:
Incoming peers that handshake using the HTTP Upgrade mechanism don't get
a slot, and always receive an HTTP 503 Service Unavailable response containing
a JSON content-body with a set of alternate IP:port addresses to try, learned
from PeerFinder. A future commit related to the Hub and Spoke feature will
change the response to grant the peer a slot when peer slots are
available.
* HTTP responses to outgoing Peer connect requests parse redirect IP:ports:
When the [overlay] configuration section (which is experimental) has
http_handshake = 1, HTTP redirect responses will have the JSON content-body
parsed to obtain the redirect IP:port addresses.
* Use a single io_service for HTTP::Server and Overlay:
This is necessary to allow HTTP::Server to pass sockets to and from Overlay
and eventually Websockets. Unfortunately Websockets is not so easily changed
to use an externally provided io_service. This will be addressed in a future
commit, and is one step needed to ease the restriction on ports configured
to offer Websocket protocols in the .cfg file.
The stop sequence for Overlay had a race condition where autoconnect could
be called after close_all, resulting in a hang on exit. This resolves the
problem by putting the close and timer operations on a strand (a sketch of
the pattern follows this list):
* Rename some Overlay members
* Put close on strand and tidy up members
* Use completion handler instead of coroutine for timer
* Use App io_service in PeerFinder
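A sketch of the pattern (class and member names are hypothetical; the
point is only that the timer handler and close path share one strand and
therefore can never run concurrently):

    #include <boost/asio.hpp>
    #include <boost/asio/steady_timer.hpp>
    #include <chrono>

    class OverlaySketch
    {
        boost::asio::io_service::strand strand_;
        boost::asio::steady_timer timer_;
        bool closed_ = false;

    public:
        explicit OverlaySketch(boost::asio::io_service& ios)
            : strand_(ios), timer_(ios)
        {
        }

        void start()
        {
            timer_.expires_from_now(std::chrono::seconds(1));
            timer_.async_wait(strand_.wrap(
                [this](boost::system::error_code const& ec)
                {
                    if (ec || closed_)
                        return;        // canceled or shutting down
                    // ... autoconnect work would go here ...
                    start();           // re-arm on the same strand
                }));
        }

        void close_all()
        {
            strand_.post([this]
            {
                closed_ = true;        // ordered with the timer handler
                timer_.cancel();
            });
        }
    };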
OverlayImpl::onStart calls into PeerFinder before PeerFinder::Manager::onStart,
causing tests to sometimes fail and the application to intermittently not start.
The order of calls to Stoppable::onStart is implementation-defined and not
predictable.
This changes PeerFinder to load its database in Stoppable::onPrepare, before
threads are launched. In general, resources that are shared between classes
should be created and initialized in onPrepare rather than onStart, which
avoids this class of problem.
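A minimal sketch of the pattern (the stand-in types below are hypothetical;
the real Stoppable interface differs in detail):

    #include <iostream>
    #include <string>

    struct StoppableSketch
    {
        virtual ~StoppableSketch() = default;
        virtual void onPrepare() {}   // runs before any thread is launched
        virtual void onStart() {}     // order across Stoppables is unspecified
    };

    struct DatabaseSketch
    {
        void open(std::string const& path)
        {
            std::cout << "open " << path << '\n';
        }
    };

    // Shared resources are created in onPrepare so that another component's
    // onStart may rely on them regardless of the unspecified onStart order.
    struct PeerFinderSketch : StoppableSketch
    {
        DatabaseSketch db_;

        void onPrepare() override
        {
            db_.open("peerfinder.sqlite");   // hypothetical path
        }

        void onStart() override
        {
            // threads may now run and use db_ concurrently
        }
    };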