With this commit, a server whose peer count falls below the
configured minimum peer limit will properly transition its
state to "disconnected".
The default minimum peer limit was 0, which meant that a server
that was connected but lost all of its peers would never
transition to "disconnected", since its peer count could never
drop below zero.
This commit changes the default minimum number of peers to 1
and produces a warning if the server is configured in a way
that prevents it from ever achieving sufficient connectivity.
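The core of the new check can be illustrated with a minimal,
self-contained sketch; the names below are illustrative and are not
rippled's actual types:

```cpp
#include <cstddef>
#include <iostream>

// Illustrative operating modes; not rippled's actual enumeration.
enum class OperatingMode { Disconnected, Connected, Syncing, Full };

OperatingMode
updateMode(std::size_t peerCount, std::size_t minPeers, OperatingMode current)
{
    // With a default minimum of 0 this condition could never hold, so a
    // server that lost every peer stayed in its previous state. With a
    // minimum of 1, losing all peers forces the transition.
    if (peerCount < minPeers)
        return OperatingMode::Disconnected;
    return current;
}

int main()
{
    auto const mode = updateMode(0, 1, OperatingMode::Full);
    std::cout << (mode == OperatingMode::Disconnected ? "disconnected" : "other")
              << '\n';
}
```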
If a server is configured to support crawling, it will report the
IP addresses of all peers it is connected to, unless those peers
have explicitly opted out by setting the `peer_private` option
in their config file.
This commit makes servers that are configured as validators
opt out of crawling.
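As a rough sketch (with hypothetical types and field names, not the
actual implementation), a crawl response can be thought of as
filtering out any peer that requested privacy; under this change a
validator behaves as if it had requested privacy itself, so its
neighbors omit it from their responses:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical peer record; the field names are illustrative only.
struct Peer
{
    std::string ip;
    bool crawlPrivate;  // true when the peer asked not to be reported
};

// Build the list of peer addresses a crawl response may disclose.
std::vector<std::string>
crawlablePeers(std::vector<Peer> const& peers)
{
    std::vector<std::string> result;
    for (auto const& p : peers)
        if (!p.crawlPrivate)
            result.push_back(p.ip);
    return result;
}

int main()
{
    std::vector<Peer> peers{{"198.51.100.1", false}, {"203.0.113.7", true}};
    for (auto const& ip : crawlablePeers(peers))
        std::cout << ip << '\n';  // only the non-private peer is listed
}
```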
Several commands allow a user to retrieve a server's status. To
make it more difficult to identify validators via fingerprinting,
these commands typically withhold information that could reveal
that a particular server is a validator from connections that are
not verified.
Prior to this commit, servers configured to operate as validators
would, instead of simply reporting their server state as 'full',
augment their state information to indicate whether they were
'proposing' or 'validating'.
With this commit, servers provide this enhanced state information
only to connections that have elevated privileges.
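As a hedged illustration (not the actual rippled code), the
reporting decision reduces to something like:

```cpp
#include <iostream>
#include <string>

// Hypothetical helper; assumes the server is otherwise fully synced.
std::string
reportedState(bool isValidator, bool isProposing, bool hasPrivileges)
{
    // Unprivileged connections only ever see the generic state.
    if (!isValidator || !hasPrivileges)
        return "full";

    // Privileged connections may see the detailed validator state.
    return isProposing ? "proposing" : "validating";
}

int main()
{
    std::cout << reportedState(true, true, false) << '\n';  // full
    std::cout << reportedState(true, true, true) << '\n';   // proposing
}
```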
Acknowledgements:
Ripple thanks Markus Teufelberger for responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to responsibly
disclose any issues that they may find. For more on Ripple's Bug Bounty
program, please visit: https://ripple.com/bug-bounty
The /crawl API endpoint allows developers to examine the structure of
the XRP Ledger's overlay network.
This commit adds additional information about the local server to the
/crawl endpoint, making it possible for developers to create data-rich
network-wide status dashboards.
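For example, the endpoint can be fetched over HTTPS from a server's
peer port (51235 by default). The sketch below uses libcurl and a
placeholder host, and disables certificate verification because peer
ports typically present self-signed certificates:

```cpp
#include <curl/curl.h>
#include <iostream>
#include <string>

static size_t
collect(char* data, size_t size, size_t nmemb, void* out)
{
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    if (CURL* curl = curl_easy_init())
    {
        std::string body;
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com:51235/crawl");
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

        if (curl_easy_perform(curl) == CURLE_OK)
            std::cout << body << '\n';  // JSON describing the overlay and local server

        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
}
```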
Related:
- https://developers.ripple.com/peer-protocol.html
- https://github.com/ripple/rippled-network-crawler
When deserializing specially crafted data, the code would ignore
certain types of errors. Reserializing objects created from such
data could fail or produce a serialization that differs from the
original input.
Also addresses: RIPD-1677, RIPD-1682, RIPD-1686 and RIPD-1689.
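A minimal, self-contained sketch of the stricter behavior: a toy
length-prefixed format (one length byte followed by that many payload
bytes, purely illustrative and unrelated to rippled's serialization)
whose decoder rejects malformed input, so that every accepted object
reserializes to exactly the bytes it was built from:

```cpp
#include <cstdint>
#include <iostream>
#include <optional>
#include <vector>

using Blob = std::vector<std::uint8_t>;

struct Obj
{
    Blob payload;  // at most 255 bytes in this toy format
};

// Strict: reject inputs with a bad length or trailing garbage instead
// of silently ignoring the error.
std::optional<Obj>
deserialize(Blob const& in)
{
    if (in.empty() || in.size() != 1u + in[0])
        return std::nullopt;
    return Obj{Blob(in.begin() + 1, in.end())};
}

Blob
serialize(Obj const& o)
{
    Blob out{static_cast<std::uint8_t>(o.payload.size())};
    out.insert(out.end(), o.payload.begin(), o.payload.end());
    return out;
}

int main()
{
    Blob good{2, 0xAB, 0xCD};
    Blob bad{2, 0xAB, 0xCD, 0xFF};  // trailing byte: must be rejected

    auto g = deserialize(good);
    std::cout << "round-trips: " << (g && serialize(*g) == good) << '\n';  // 1
    std::cout << "rejected: " << !deserialize(bad).has_value() << '\n';    // 1
}
```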
Acknowledgements:
Ripple thanks Guido Vranken for responsibly disclosing these issues.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to responsibly
disclose any issues that they may find. For more on Ripple's Bug Bounty
program, please visit: https://ripple.com/bug-bounty
Specially crafted messages could cause the server to buffer large
amounts of data, increasing memory pressure.
This commit changes how messages are buffered and imposes a limit
on the amount of data that the server is willing to buffer.
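A hedged sketch of such a cap; the limit and the names below are
illustrative, not the values rippled uses:

```cpp
#include <cstddef>
#include <vector>

class MessageBuffer
{
    std::vector<char> buf_;
    static constexpr std::size_t maxBytes = 64 * 1024 * 1024;  // hypothetical cap

public:
    // Refuse to grow past the cap (the caller can then drop the peer)
    // instead of letting a remote peer buffer data without bound.
    bool
    append(char const* data, std::size_t len)
    {
        if (len > maxBytes - buf_.size())
            return false;
        buf_.insert(buf_.end(), data, data + len);
        return true;
    }
};

int main()
{
    MessageBuffer mb;
    char chunk[1024] = {};
    return mb.append(chunk, sizeof(chunk)) ? 0 : 1;  // well under the cap
}
```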
Acknowledgements:
Ripple thanks Aaron Hook for responsibly disclosing this issue.
Bug Bounties and Responsible Disclosures:
We welcome reviews of the rippled code and urge researchers to
responsibly disclose any issues they may find. For information
on Ripple's Bug Bounty program, please visit:
https://ripple.com/bug-bounty
The constructor would previously assert that the specified buffer
pointer was non-null, even if the buffer size was specified as 0.
While reasonable, this made the API more difficult to use.
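A sketch of the relaxed precondition, using a hypothetical
buffer-view class rather than the actual one:

```cpp
#include <cassert>
#include <cstddef>

class BufferView
{
    void const* data_;
    std::size_t size_;

public:
    BufferView(void const* data, std::size_t size)
        : data_(data), size_(size)
    {
        // Only require a valid pointer when there is data to point at;
        // (nullptr, 0) is a legal empty view.
        assert(size == 0 || data != nullptr);
    }

    std::size_t size() const { return size_; }
    void const* data() const { return data_; }
};

int main()
{
    BufferView empty(nullptr, 0);  // previously this would trip the assert
    return static_cast<int>(empty.size());
}
```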