Compare commits

...

9 Commits

Author SHA1 Message Date
seelabs
2084d61efa Periodically reshare federator sigs:
* Periodically remove signatures for txns this federator is unaware of.
* Reshare mainchain and sidechain signatures on a heartbeat timer.
* Switch between sharing sidechain and mainchain signatures on each timeout, in
  an attempt to reduce network traffic.
2022-04-14 09:33:01 -04:00
seelabs
57b9da62bd fixes needed after rebase onto 1.9.0-b1 2022-03-15 17:35:06 -04:00
Peng Wang
79c583e1f7 add door and ticket test 2022-03-15 15:36:04 -04:00
seelabs
5743dc4537 Change initialization code to use "disable master key" txn as milestone:
* The initialization code now assumes that all transactions that come before
"disable master key" are part of setup. This is also used to set the initial
sequence numbers for the door accounts.

* Add additional logging information

* Rm start of historic transactions

* Delay unlocking of federator main loop

* Fix stop history tx only bug
2022-03-15 15:36:04 -04:00
seelabs
d86b1f8b7d Script to generate reports from logs, and bug fixes:
* log_report.py is a script to generate debugging reports and combine the logs
of the locally run mainchain and sidechain servers.

* Log address book before pytest start

* Cleanup test utils

* Modify log_analyzer so it joins all logs into a single file

* Organize "all" log as a dictionary

* Allow ConfigFile and Section classes to be pickled:
This caused a bug on mac platforms. Linux did not appear to use pickle.

* Add account history command to py scripts

* Add additional logging

* Add support to run sidechains under rr:

This is an undocumented feature to help debugging.
If the environment variable `RIPPLED_SIDECHAIN_RR` is set, it is assumed to
point to the rr executable. Sidechain 0 will then be run under rr.
2022-03-15 15:36:04 -04:00
seelabs
d57f88fc18 Implement side chain federators:
Co-authored-by: seelabs <scott.determan@yahoo.com>
Co-authored-by: Peng Wang <pwang200@gmail.com>
2022-03-15 15:35:58 -04:00
seelabs
24cf8ab8c7 Sidechain python test environment and repl 2022-03-15 11:23:09 -04:00
seelabs
e56a85cb3b Sidechain docs:
Co-authored-by: seelabs <scott.determan@yahoo.com>
Co-authored-by: Peng Wang <pwang200@gmail.com>
2022-03-15 11:23:09 -04:00
seelabs
d1fdea9bc8 [REMOVE] Add more structure to logs 2022-03-15 11:23:09 -04:00
80 changed files with 16358 additions and 37 deletions


@@ -407,6 +407,17 @@ target_sources (rippled PRIVATE
src/ripple/app/rdb/impl/RelationalDBInterface_nodes.cpp
src/ripple/app/rdb/impl/RelationalDBInterface_postgres.cpp
src/ripple/app/rdb/impl/RelationalDBInterface_shards.cpp
src/ripple/app/sidechain/Federator.cpp
src/ripple/app/sidechain/FederatorEvents.cpp
src/ripple/app/sidechain/impl/ChainListener.cpp
src/ripple/app/sidechain/impl/DoorKeeper.cpp
src/ripple/app/sidechain/impl/InitialSync.cpp
src/ripple/app/sidechain/impl/MainchainListener.cpp
src/ripple/app/sidechain/impl/SidechainListener.cpp
src/ripple/app/sidechain/impl/SignatureCollector.cpp
src/ripple/app/sidechain/impl/SignerList.cpp
src/ripple/app/sidechain/impl/TicketHolder.cpp
src/ripple/app/sidechain/impl/WebsocketClient.cpp
src/ripple/app/tx/impl/ApplyContext.cpp
src/ripple/app/tx/impl/BookTip.cpp
src/ripple/app/tx/impl/CancelCheck.cpp
@@ -576,6 +587,7 @@ target_sources (rippled PRIVATE
src/ripple/rpc/handlers/DepositAuthorized.cpp
src/ripple/rpc/handlers/DownloadShard.cpp
src/ripple/rpc/handlers/Feature1.cpp
src/ripple/rpc/handlers/FederatorInfo.cpp
src/ripple/rpc/handlers/Fee1.cpp
src/ripple/rpc/handlers/FetchInfo.cpp
src/ripple/rpc/handlers/GatewayBalances.cpp


@@ -11,7 +11,7 @@ Loop: ripple.app ripple.nodestore
ripple.app > ripple.nodestore
Loop: ripple.app ripple.overlay
ripple.overlay ~= ripple.app
ripple.overlay == ripple.app
Loop: ripple.app ripple.peerfinder
ripple.peerfinder ~= ripple.app


@@ -6,6 +6,7 @@ ripple.app > ripple.crypto
ripple.app > ripple.json
ripple.app > ripple.protocol
ripple.app > ripple.resource
ripple.app > ripple.server
ripple.app > test.unit_test
ripple.basics > ripple.beast
ripple.conditions > ripple.basics


@@ -21,6 +21,12 @@ if(Git_FOUND)
endif()
endif() #git
if (thread_safety_analysis)
add_compile_options(-Wthread-safety -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS -DRIPPLE_ENABLE_THREAD_SAFETY_ANNOTATIONS)
add_compile_options("-stdlib=libc++")
add_link_options("-stdlib=libc++")
endif()
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake")
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/Builds/CMake/deps")


@@ -1,3 +1,97 @@
# XRP Ledger Side chain Branch
## Warning
This is not the main branch of the XRP Ledger. This branch supports side chains
on the XRP ledger and integrates an implementation of side chain federators.
This is a developer prerelease and should not be used in production or to
transfer real value. Consider this "alpha" quality software. There will be bugs.
See "Status" for a fuller description.
The latest production code for the XRP ledger is on the "master" branch.
Until this branch is merged with the mainline branch, it will periodically be
rebased on that branch and force pushed to github.
## What are side chains?
Side chains are independent ledgers. They can have their own transaction types,
their own UNL list (or even their own consensus algorithm), and their own set of
other unique features (like a smart contracts layer). What makes them special
is that there is a way to transfer assets from the XRP ledger to the side chain,
and a way to return those assets from the side chain back to the XRP ledger.
Both XRP and issued assets may be exchanged.
The motivation for creating a side chain is to implement an idea that may not be
a great fit for the main XRP ledger, or may take a long time before such a
feature is adopted by the main XRP ledger. The advantage of a side chain over a
brand new ledger is that it allows the side chain to immediately use tokens with
real monetary value.
This implementation is meant to support side chains that are similar to the XRP
ledger and use the XRP ledger as the main chain. To develop a new side chain,
the idea is that this code will first be forked, and then the features specific
to the new chain will be implemented.
## Status
All the functionality needed to build side chains should be complete. However,
it has not been well tested or polished.
In particular, all of the following are built:
* Cross chain transactions for both XRP and Issued Assets
* Refunds if transactions fail
* Allowing federators to rejoin a network
* Detecting and handling when federators fall too far behind in processing
transactions
* A python library to easily create configuration files for testing side chains
and spin up side chains on a local machine
* Python scripts to test side chains
* An interactive shell to explore side chain functionality
The biggest missing pieces are:
* Testing: While the functionality is there, it has just begun to be tested.
There will be bugs. Even horrible and embarrassing bugs. Of course, this will
improve as testing progresses.
* Tooling: There is a python library and an interactive shell that were built to
help development. However, these tools are geared toward running a test network
on a local machine. They are not geared to new users or to production systems.
Better tooling is coming.
* Documentation: There is documentation that describes the technical details of
how side chains work, how to run the python scripts to set up side chains on
the local machine, and the changes to the configuration files. However, like
the tooling, this is not geared to new users or production systems. Better
documentation is coming. In particular, good documentation for how to set up a
production side chain - or even a test net that doesn't run on a local
machine - needs to be written.
## Getting Started
See the instructions [here](docs/sidechain/GettingStarted.md) for how to
run an interactive shell that will spin up a set of federators on your local
machine and allow you to transfer assets between the main chain and a side
chain.
After setting things up and completing a cross chain transaction with the
"getting started" script above, it may be useful to browse some other
documentation:
* [This](bin/sidechain/python/README.md) document describes the scripts and
python modules used to test and explore side chains on your local machine.
* [This](docs/sidechain/configFile.md) document describes the new stanzas in the
config file needed for side chains.
* [This](docs/sidechain/federatorAccountSetup.md) document describes how to set
up the federator accounts if not using the python scripts.
* [This](docs/sidechain/design.md) document describes the low-level details for
how side chains work.
# The XRP Ledger
The [XRP Ledger](https://xrpl.org/) is a decentralized cryptographic ledger powered by a network of peer-to-peer nodes. The XRP Ledger uses a novel Byzantine Fault Tolerant consensus algorithm to settle and record transactions in a secure distributed database without a central operator.


@@ -0,0 +1,183 @@
## Introduction
This directory contains python scripts to test and explore side chains.
See the instructions [here](docs/sidechain/GettingStarted.md) for how to install
the necessary dependencies and run an interactive shell that will spin up a set
of federators on your local machine and allow you to transfer assets between the
main chain and a side chain.
For all these scripts, make sure the `RIPPLED_MAINCHAIN_EXE`,
`RIPPLED_SIDECHAIN_EXE`, and `RIPPLED_SIDECHAIN_CFG_DIR` environment variables
are correctly set, and the side chain configuration files exist. Also make sure the python
dependencies are installed and the virtual environment is activated.
Note: the unit tests do not use the configuration files, so `RIPPLED_SIDECHAIN_CFG_DIR`
is not needed for them.
## Unit tests
The "tests" directory contains a simple unit test. It takes several minutes to
run, and will create the necessary configuration files, start a test main chain
in standalone mode, and a test side chain with 5 federators, and do some simple
cross chain transactions. Side chains do not yet have extensive tests. Testing
is being actively worked on.
To run the tests, change directories to the `bin/sidechain/python/tests` directory and type:
```
pytest
```
To capture logging information and to set the log level (to help with debugging), type this instead:
```
pytest --log-file=log.txt --log-file-level=info
```
The response should be something like the following:
```
============================= test session starts ==============================
platform linux -- Python 3.8.5, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/swd/projs/ripple/mine/bin/sidechain/python/tests
collected 1 item
simple_xchain_transfer_test.py . [100%]
======================== 1 passed in 215.20s (0:03:35) =========================
```
## Scripts
### riplrepl.py
This is an interactive shell for experimenting with side chains. It will spin up
a test main chain running in standalone mode, and a test side chain with five
federators - all running on the local machine. There are commands to make
payments within a chain, make cross chain payments, check balances, check server
info, and check federator info. There is a simple "help" system, but more
documentation is needed for this tool (or more likely we need to replace this
with some web front end).
Note: a "repl" is another name for an interactive shell. It stands for
"read-eval-print-loop". It is pronounced "rep-ul".
### create_config_file.py
This is a script used to create the config files needed to run a test side chain
on your local machine. To run this, make sure the rippled is built,
`RIPPLED_MAINCHAIN_EXE`, `RIPPLED_SIDECHAIN_EXE`, and
`RIPPLED_SIDECHAIN_CFG_DIR` environment variables are correctly set, and the
side chain configuration files exist. Also make sure the python dependencies are
installed and the virtual environment is activated. Running this will create
config files in the directory specified by the `RIPPLED_SIDECHAIN_CFG_DIR`
environment variable.
### log_analyzer.py
This is a script used to take structured log files and convert them to json for easier debugging.
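As a sketch of the idea only (this is not the script's code, and the line format below is a hypothetical stand-in for rippled's structured log format):

```python
import json
import re

# Hypothetical structured-log line format, for illustration only.
LINE_RE = re.compile(r'^(?P<timestamp>\S+) (?P<module>\w+):(?P<level>\w+) (?P<message>.*)$')

def lines_to_json(lines):
    '''Parse matching log lines into dicts, ready for json.dumps.'''
    return [m.groupdict() for line in lines if (m := LINE_RE.match(line))]

records = lines_to_json(['2022-03-15T15:36:04 SidechainFederator:INF heartbeat'])
print(json.dumps(records, indent=1))
```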
## Python modules
### sidechain.py
A python module that can be used to write python scripts to interact with
side chains. This is used to write unit tests and the interactive shell. To write
a standalone script, look at how the tests are written in
`test/simple_xchain_transfer_test.py`. The idea is to call
`sidechain._multinode_with_callback`, which sets up the two chains, and to place
your code in the callback. For example:
```
def multinode_test(params: Params):
def callback(mc_app: App, sc_app: App):
my_function(mc_app, sc_app, params)
sidechain._multinode_with_callback(params,
callback,
setup_user_accounts=False)
```
The functions `sidechain.main_to_side_transfer` and
`sidechain.side_to_main_transfer` can be used as convenience functions to initiate
cross chain transfers. Of course, these transactions can also be initiated with
a payment to the door account with the memo data set to the destination account
on the destination chain (which is what those convenience functions do under the
hood).
Transactions execute asynchronously. Use the function
`test_utils.wait_for_balance_change` to ensure a transaction has completed.
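The memo those convenience functions rely on can be sketched as follows (the door account address below is hypothetical, and the field layout follows the standard XRPL memo format rather than anything this repo defines):

```python
# A cross chain transfer is an ordinary payment to the door account whose
# memo data carries the destination account on the other chain, hex-encoded
# as XRPL memos require.
def cross_chain_memos(dst_account_id: str) -> list:
    return [{'Memo': {'MemoData': dst_account_id.encode('ascii').hex().upper()}}]

memos = cross_chain_memos('rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh')
```

A payment to the door account carrying these memos is then picked up by the federators, which decode the memo data to find the destination on the other chain.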
### transaction.py
A python module for transactions. Currently there are transactions for:
* Payment
* Trust (trust set)
* SetRegularKey
* SignerListSet
* AccountSet
* Offer
* Ticket
* Hook (experimental - useful for playing with the hooks amendment from XRPL Labs).
Typically, a transaction is submitted through the call operator on an `App` object. For example, to make a payment from the account `alice` to the account `bob` for 500 XRP:
```
mc_app(Payment(account=alice, dst=bob, amt=XRP(500)))
```
(where `mc_app` is an `App` object representing the main chain).
### command.py
A python module for RPC commands. Currently there are commands for:
* PathFind
* Sign
* LedgerAccept (for standalone mode)
* Stop
* LogLevel
* WalletPropose
* ValidationCreate
* AccountInfo
* AccountLines
* AccountTx
* BookOffers
* BookSubscription
* ServerInfo
* FederatorInfo
* Subscribe
### common.py
Python module for common ledger objects, including:
* Account
* Asset
* Path
* Pathlist
### app.py
Python module for an application. An application is responsible for a local
network (or a single server) and an address book that maps aliases to accounts.
### config_file.py
Python module representing a config file that is read from disk.
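As an illustration of the shape of that data (a minimal sketch, not the module's actual parser), rippled config files consist of `[stanza]` headers followed by value lines:

```python
def parse_stanzas(text: str) -> dict:
    '''Map each [stanza] name to its list of non-comment, non-blank lines.'''
    stanzas: dict = {}
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith('#'):
            continue
        if line.startswith('[') and line.endswith(']'):
            current = line[1:-1]
            stanzas[current] = []
        elif current is not None:
            stanzas[current].append(line)
    return stanzas

cfg = parse_stanzas('[server]\nport_ws_admin_local\n\n# comment\n[node_size]\nmedium\n')
```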
### interactive.py
Python module with the implementation of the RiplRepl interactive shell.
### ripple_client.py
A python module representing a rippled server.
### testnet.py
A python module representing a rippled testnet running on the local machine.
## Other
### requirements.txt
These are the python dependencies needed by the scripts. Use `pip3 install -r
requirements.txt` to install them. A python virtual environment is recommended.
See the instructions [here](docs/sidechain/GettingStarted.md) for how to install
the dependencies.

bin/sidechain/python/app.py (new file, 624 lines)

@@ -0,0 +1,624 @@
from contextlib import contextmanager
import json
import logging
import os
import pandas as pd
from pathlib import Path
import subprocess
import time
from typing import Callable, Dict, List, Optional, Set, Union
from ripple_client import RippleClient
from common import Account, Asset, XRP
from command import AccountInfo, AccountLines, BookOffers, Command, FederatorInfo, LedgerAccept, Sign, Submit, SubscriptionCommand, WalletPropose
from config_file import ConfigFile
import testnet
from transaction import Payment, Transaction
class KeyManager:
def __init__(self):
self._aliases = {} # alias -> account
self._accounts = {} # account id -> account
def add(self, account: Account) -> None:
if account.nickname:
self._aliases[account.nickname] = account
self._accounts[account.account_id] = account
def is_alias(self, name: str):
return name in self._aliases
def account_from_alias(self, name: str) -> Account:
assert name in self._aliases
return self._aliases[name]
def known_accounts(self) -> List[Account]:
return list(self._accounts.values())
def account_id_dict(self) -> Dict[str, Account]:
return self._accounts
def alias_or_account_id(self, id: Union[Account, str]) -> str:
'''
return the alias if it exists, otherwise return the id
'''
if isinstance(id, Account):
return id.alias_or_account_id()
if id in self._accounts:
return self._accounts[id].nickname
return id
def alias_to_account_id(self, alias: str) -> Optional[str]:
if alias in self._aliases:
return self._aliases[alias].account_id
return None
def to_string(self, nickname: Optional[str] = None):
names = []
account_ids = []
if nickname:
names = [nickname]
if nickname in self._aliases:
account_ids = [self._aliases[nickname].account_id]
else:
account_ids = ['NA']
else:
for (k, v) in self._aliases.items():
names.append(k)
account_ids.append(v.account_id)
# use a dataframe to get a nice table output
df = pd.DataFrame(data={'name': names, 'id': account_ids})
return f'{df.to_string(index=False)}'
class AssetAliases:
def __init__(self):
self._aliases = {} # alias -> asset
def add(self, asset: Asset, name: str):
self._aliases[name] = asset
def is_alias(self, name: str):
return name in self._aliases
def asset_from_alias(self, name: str) -> Asset:
assert name in self._aliases
return self._aliases[name]
def known_aliases(self) -> List[str]:
return list(self._aliases.keys())
def known_assets(self) -> List[Asset]:
return list(self._aliases.values())
def to_string(self, nickname: Optional[str] = None):
names = []
currencies = []
issuers = []
if nickname:
names = [nickname]
if nickname in self._aliases:
v = self._aliases[nickname]
currencies = [v.currency]
issuers = [v.issuer if v.issuer else '']
else:
currencies = ['NA']
issuers = ['NA']
else:
for (k, v) in self._aliases.items():
names.append(k)
currencies.append(v.currency)
issuers.append(v.issuer if v.issuer else '')
# use a dataframe to get a nice table output
df = pd.DataFrame(data={
'name': names,
'currency': currencies,
'issuer': issuers
})
return f'{df.to_string(index=False)}'
class App:
'''App to interact with rippled servers'''
def __init__(self,
*,
standalone: bool,
network: Optional[testnet.Network] = None,
client: Optional[RippleClient] = None):
if network and client:
raise ValueError('Cannot specify both a testnet and client in App')
if not network and not client:
raise ValueError('Must specify a testnet or a client in App')
self.standalone = standalone
self.network = network
if client:
self.client = client
else:
self.client = self.network.get_client(0)
self.key_manager = KeyManager()
self.asset_aliases = AssetAliases()
root_account = Account(nickname='root',
account_id='rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh',
secret_key='masterpassphrase')
self.key_manager.add(root_account)
def shutdown(self):
if self.network:
self.network.shutdown()
else:
self.client.shutdown()
def send_signed(self, txn: Transaction) -> dict:
'''Sign then send the given transaction'''
if not txn.account.secret_key:
raise ValueError('Cannot sign transaction without secret key')
r = self(Sign(txn.account.secret_key, txn.to_cmd_obj()))
raw_signed = r['tx_blob']
r = self(Submit(raw_signed))
logging.info(f'App.send_signed: {json.dumps(r, indent=1)}')
return r
def send_command(self, cmd: Command) -> dict:
'''Send the command to the rippled server'''
r = self.client.send_command(cmd)
logging.info(
f'App.send_command : {cmd.cmd_name()} : {json.dumps(r, indent=1)}')
return r
# Need async version to close ledgers from async functions
async def async_send_command(self, cmd: Command) -> dict:
'''Send the command to the rippled server'''
return await self.client.async_send_command(cmd)
def send_subscribe_command(
self,
cmd: SubscriptionCommand,
callback: Optional[Callable[[dict], None]] = None) -> dict:
'''Send the subscription command to the rippled server. If already subscribed, it will unsubscribe'''
return self.client.send_subscribe_command(cmd, callback)
def get_pids(self) -> List[int]:
if self.network:
return self.network.get_pids()
if pid := self.client.get_pid():
return [pid]
def get_running_status(self) -> List[bool]:
if self.network:
return self.network.get_running_status()
if self.client.get_pid():
return [True]
else:
return [False]
# Get a dict of the server_state, validated_ledger_seq, and complete_ledgers
def get_brief_server_info(self) -> dict:
if self.network:
return self.network.get_brief_server_info()
else:
ret = {}
for (k, v) in self.client.get_brief_server_info().items():
ret[k] = [v]
return ret
def servers_start(self,
server_indexes: Optional[Union[Set[int],
List[int]]] = None,
*,
extra_args: Optional[List[List[str]]] = None):
if self.network:
return self.network.servers_start(server_indexes,
extra_args=extra_args)
else:
raise ValueError('Cannot start stand alone server')
def servers_stop(self,
server_indexes: Optional[Union[Set[int],
List[int]]] = None):
if self.network:
return self.network.servers_stop(server_indexes)
else:
raise ValueError('Cannot stop stand alone server')
def federator_info(self,
server_indexes: Optional[Union[Set[int],
List[int]]] = None):
# key is server index. value is federator_info result
result_dict = {}
if self.network:
if not server_indexes:
server_indexes = [
i for i in range(self.network.num_clients())
if self.network.is_running(i)
]
for i in server_indexes:
if self.network.is_running(i):
result_dict[i] = self.network.get_client(i).send_command(
FederatorInfo())
else:
if 0 in server_indexes:
result_dict[0] = self.client.send_command(FederatorInfo())
return result_dict
def __call__(self,
to_send: Union[Transaction, Command, SubscriptionCommand],
callback: Optional[Callable[[dict], None]] = None,
*,
insert_seq_and_fee=False) -> dict:
'''Call `send_signed` for transactions or `send_command` for commands'''
if isinstance(to_send, SubscriptionCommand):
return self.send_subscribe_command(to_send, callback)
assert callback is None
if isinstance(to_send, Transaction):
if insert_seq_and_fee:
self.insert_seq_and_fee(to_send)
return self.send_signed(to_send)
if isinstance(to_send, Command):
return self.send_command(to_send)
raise ValueError(
'Expected `to_send` to be either a Transaction, Command, or SubscriptionCommand'
)
def get_configs(self) -> List[str]:
if self.network:
return self.network.get_configs()
return [self.client.config]
def create_account(self, name: str) -> Account:
''' Create an account. Use the name as the alias. '''
if name == 'root':
return
assert not self.key_manager.is_alias(name)
account = Account(nickname=name, result_dict=self(WalletPropose()))
self.key_manager.add(account)
return account
def create_accounts(self,
names: List[str],
funding_account: Union[Account, str] = 'root',
amt: Union[int, Asset] = 1000000000) -> List[Account]:
'''Fund the accounts with nicknames 'names' by using the funding account and amt'''
accounts = [self.create_account(n) for n in names]
if not isinstance(funding_account, Account):
org_funding_account = funding_account
funding_account = self.key_manager.account_from_alias(
funding_account)
if not isinstance(funding_account, Account):
raise ValueError(
f'Could not find funding account {org_funding_account}')
if not isinstance(amt, Asset):
assert isinstance(amt, int)
amt = Asset(value=amt)
for a in accounts:
p = Payment(account=funding_account, dst=a, amt=amt)
self.send_signed(p)
return accounts
def maybe_ledger_accept(self):
if not self.standalone:
return
self(LedgerAccept())
# Need async version to close ledgers from async functions
async def async_maybe_ledger_accept(self):
if not self.standalone:
return
await self.async_send_command(LedgerAccept())
def get_balances(
self,
account: Union[Account, List[Account], None] = None,
asset: Union[Asset, List[Asset]] = Asset()
) -> pd.DataFrame:
'''Return a pandas dataframe of account balances. If account is None, treat as a wildcard (use address book)'''
if account is None:
account = self.key_manager.known_accounts()
if isinstance(account, list):
result = [self.get_balances(acc, asset) for acc in account]
return pd.concat(result, ignore_index=True)
if isinstance(asset, list):
result = [self.get_balances(account, ass) for ass in asset]
return pd.concat(result, ignore_index=True)
if asset.is_xrp():
try:
df = self.get_account_info(account)
except Exception:
# Most likely the account does not exist on the ledger. Give a balance of zero.
df = pd.DataFrame({
'account': [account],
'balance': [0],
'flags': [0],
'owner_count': [0],
'previous_txn_id': ['NA'],
'previous_txn_lgr_seq': [-1],
'sequence': [-1]
})
df = df.assign(currency='XRP', peer='', limit='')
return df.loc[:,
['account', 'balance', 'currency', 'peer', 'limit']]
else:
try:
df = self.get_trust_lines(account)
if df.empty: return df
df = df[(df['peer'] == asset.issuer.account_id)
& (df['currency'] == asset.currency)]
except Exception:
# Most likely the account does not exist on the ledger. Return an empty data frame
df = pd.DataFrame({
'account': [],
'balance': [],
'currency': [],
'peer': [],
'limit': [],
})
return df.loc[:,
['account', 'balance', 'currency', 'peer', 'limit']]
def get_balance(self, account: Account, asset: Asset) -> Asset:
'''Get a balance from a single account in a single asset'''
try:
df = self.get_balances(account, asset)
return asset(df.iloc[0]['balance'])
except Exception:
return asset(0)
def get_account_info(self,
account: Optional[Account] = None) -> pd.DataFrame:
'''Return a pandas dataframe of account info. If account is None, treat as a wildcard (use address book)'''
if account is None:
known_accounts = self.key_manager.known_accounts()
result = [self.get_account_info(acc) for acc in known_accounts]
return pd.concat(result, ignore_index=True)
try:
result = self.client.send_command(AccountInfo(account))
except Exception:
# Most likely the account does not exist on the ledger. Give a balance of zero.
return pd.DataFrame({
'account': [account],
'balance': [0],
'flags': [0],
'owner_count': [0],
'previous_txn_id': ['NA'],
'previous_txn_lgr_seq': [-1],
'sequence': [-1]
})
if 'account_data' not in result:
raise ValueError('Bad result from account_info command')
info = result['account_data']
for dk in ['LedgerEntryType', 'index']:
del info[dk]
df = pd.DataFrame([info])
df.rename(columns={
'Account': 'account',
'Balance': 'balance',
'Flags': 'flags',
'OwnerCount': 'owner_count',
'PreviousTxnID': 'previous_txn_id',
'PreviousTxnLgrSeq': 'previous_txn_lgr_seq',
'Sequence': 'sequence'
},
inplace=True)
df['balance'] = df['balance'].astype(int)
return df
def get_trust_lines(self,
account: Account,
peer: Optional[Account] = None) -> pd.DataFrame:
'''Return a pandas dataframe account trust lines. If peer account is None, treat as a wildcard'''
result = self.send_command(AccountLines(account, peer=peer))
if 'lines' not in result or 'account' not in result:
raise ValueError('Bad result from account_lines command')
account = result['account']
lines = result['lines']
for d in lines:
d['peer'] = d['account']
d['account'] = account
return pd.DataFrame(lines)
def get_offers(self, taker_pays: Asset, taker_gets: Asset) -> pd.DataFrame:
'''Return a pandas dataframe of offers'''
result = self.send_command(BookOffers(taker_pays, taker_gets))
if 'offers' not in result:
raise ValueError('Bad result from book_offers command')
offers = result['offers']
delete_keys = [
'BookDirectory', 'BookNode', 'LedgerEntryType', 'OwnerNode',
'PreviousTxnID', 'PreviousTxnLgrSeq', 'Sequence', 'index'
]
for d in offers:
for dk in delete_keys:
del d[dk]
for t in ['TakerPays', 'TakerGets', 'owner_funds']:
if 'value' in d[t]:
d[t] = d[t]['value']
df = pd.DataFrame(offers)
df.rename(columns={
'Account': 'account',
'Flags': 'flags',
'TakerGets': 'taker_gets',
'TakerPays': 'taker_pays'
},
inplace=True)
return df
def account_balance(self, account: Account, asset: Asset) -> Asset:
'''get the account's balance of the asset'''
pass
def substitute_nicknames(
self,
df: pd.DataFrame,
cols: List[str] = ['account', 'peer']) -> pd.DataFrame:
result = df.copy(deep=True)
for c in cols:
if c not in result:
continue
result[c] = result[c].map(
lambda x: self.key_manager.alias_or_account_id(x))
return result
def add_to_keymanager(self, account: Account):
self.key_manager.add(account)
def is_alias(self, name: str) -> bool:
return self.key_manager.is_alias(name)
def account_from_alias(self, name: str) -> Account:
return self.key_manager.account_from_alias(name)
def known_accounts(self) -> List[Account]:
return self.key_manager.known_accounts()
def known_asset_aliases(self) -> List[str]:
return self.asset_aliases.known_aliases()
def known_iou_assets(self) -> List[Asset]:
return self.asset_aliases.known_assets()
def is_asset_alias(self, name: str) -> bool:
return self.asset_aliases.is_alias(name)
def add_asset_alias(self, asset: Asset, name: str):
self.asset_aliases.add(asset, name)
def asset_from_alias(self, name: str) -> Asset:
return self.asset_aliases.asset_from_alias(name)
def insert_seq_and_fee(self, txn: Transaction):
acc_info = self(AccountInfo(txn.account))
# TODO: set better fee (Hard code a fee of 15 for now)
txn.set_seq_and_fee(acc_info['account_data']['Sequence'], 15)
def get_client(self) -> RippleClient:
return self.client
def balances_dataframe(chains: List[App],
chain_names: List[str],
account_ids: Optional[List[Account]] = None,
assets: List[Asset] = None,
in_drops: bool = False):
def _removesuffix(self: str, suffix: str) -> str:
if suffix and self.endswith(suffix):
return self[:-len(suffix)]
else:
return self[:]
def _balance_df(chain: App, acc: Optional[Account],
asset: Union[Asset, List[Asset]], in_drops: bool):
b = chain.get_balances(acc, asset)
if not in_drops:
b.loc[b['currency'] == 'XRP', 'balance'] /= 1_000_000
b = chain.substitute_nicknames(b)
b = b.set_index('account')
return b
if account_ids is None:
account_ids = [None] * len(chains)
if assets is None:
# XRP and all assets in the assets alias list
assets = [[XRP(0)] + c.known_iou_assets() for c in chains]
dfs = []
keys = []
for chain, chain_name, acc, asset in zip(chains, chain_names, account_ids,
assets):
dfs.append(_balance_df(chain, acc, asset, in_drops))
keys.append(_removesuffix(chain_name, 'chain'))
df = pd.concat(dfs, keys=keys)
return df
# Start an app with a single client
@contextmanager
def single_client_app(*,
config: ConfigFile,
command_log: Optional[str] = None,
server_out=os.devnull,
run_server: bool = True,
exe: Optional[str] = None,
extra_args: Optional[List[str]] = None,
standalone=False):
'''Start a ripple server and return an app'''
try:
if extra_args is None:
extra_args = []
to_run = None
app = None
client = RippleClient(config=config, command_log=command_log, exe=exe)
if run_server:
to_run = [client.exe, '--conf', client.config_file_name]
if standalone:
to_run.append('-a')
fout = open(server_out, 'w')
p = subprocess.Popen(to_run + extra_args,
stdout=fout,
stderr=subprocess.STDOUT)
client.set_pid(p.pid)
print(
f'started rippled: config: {client.config_file_name} PID: {p.pid}',
flush=True)
time.sleep(1.5) # give process time to startup
app = App(client=client, standalone=standalone)
yield app
finally:
if app:
app.shutdown()
if run_server and to_run:
subprocess.Popen(to_run + ['stop'],
stdout=fout,
stderr=subprocess.STDOUT)
p.wait()
def configs_for_testnet(config_file_prefix: str) -> List[ConfigFile]:
configs = []
p = Path(config_file_prefix)
dir = p.parent
file = p.name
file_names = []
for f in os.listdir(dir):
cfg = os.path.join(dir, f, 'rippled.cfg')
if f.startswith(file) and os.path.exists(cfg):
file_names.append(cfg)
file_names.sort()
return [ConfigFile(file_name=f) for f in file_names]
# Start an app for a network with the config files matched by `config_file_prefix*/rippled.cfg`
# Undocumented feature: if the environment variable RIPPLED_SIDECHAIN_RR is set, it is
# assumed to point to the rr executable. Sidechain server 0 will then be run under rr.
@contextmanager
def testnet_app(*,
exe: str,
configs: List[ConfigFile],
command_logs: Optional[List[str]] = None,
run_server: Optional[List[bool]] = None,
sidechain_rr: Optional[str] = None,
extra_args: Optional[List[List[str]]] = None):
'''Start a ripple testnet and return an app'''
try:
app = None
network = testnet.Network(exe,
configs,
command_logs=command_logs,
run_server=run_server,
with_rr=sidechain_rr,
extra_args=extra_args)
network.wait_for_validated_ledger()
app = App(network=network, standalone=False)
yield app
finally:
if app:
app.shutdown()
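
The start/teardown pattern used by `single_client_app` and `testnet_app` above (spawn the server, yield the app, always shut down in `finally`) can be sketched in isolation. `managed_server` below is a hypothetical, simplified mirror of that lifecycle, not part of these scripts; it uses `terminate()` where the real code sends a `stop` command:

```python
import subprocess
import sys
from contextlib import contextmanager


@contextmanager
def managed_server(cmd):
    # Spawn the server process, hand it to the caller, and always
    # reap it on exit -- even if the body raises.
    proc = subprocess.Popen(cmd)
    try:
        yield proc
    finally:
        proc.terminate()
        proc.wait()


# Stand-in for a rippled process: a python sleep loop.
with managed_server([sys.executable, '-c', 'import time; time.sleep(60)']) as p:
    assert p.poll() is None   # still running inside the block
assert p.poll() is not None   # reaped after the block
```

The `finally` clause is what guarantees cleanup when a test body throws, which is why the real helpers wrap the whole body in `try`.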


@@ -0,0 +1,563 @@
import json
from typing import List, Optional, Union
from common import Account, Asset
class Command:
'''Interface for all commands sent to the server'''
# command id useful for websocket messages
next_cmd_id = 1
def __init__(self):
self.cmd_id = Command.next_cmd_id
Command.next_cmd_id += 1
def cmd_name(self) -> str:
'''Return the command name for use in a command line'''
raise NotImplementedError
def get_command_line_list(self) -> List[str]:
'''Return a list of strings suitable for a command line command for a rippled server'''
return [self.cmd_name(), json.dumps(self.to_cmd_obj())]
def get_websocket_dict(self) -> dict:
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
result = self.to_cmd_obj()
return self.add_websocket_fields(result)
def to_cmd_obj(self) -> dict:
'''Return an object suitable for use in a command (input to json.dumps or similar)'''
raise NotImplementedError
def add_websocket_fields(self, cmd_dict: dict) -> dict:
cmd_dict['id'] = self.cmd_id
cmd_dict['command'] = self.cmd_name()
return cmd_dict
def _set_flag(self, flag_bit: int, value: bool = True):
'''Set or clear the flag bit (subclasses must define self.flags)'''
if value:
self.flags |= flag_bit
else:
self.flags &= ~flag_bit
return self
class SubscriptionCommand(Command):
def __init__(self):
super().__init__()
class PathFind(Command):
'''Rippled ripple_path_find command'''
def __init__(self,
*,
src: Account,
dst: Account,
amt: Asset,
send_max: Optional[Asset] = None,
src_currencies: Optional[List[Asset]] = None,
ledger_hash: Optional[str] = None,
ledger_index: Optional[Union[int, str]] = None):
super().__init__()
self.src = src
self.dst = dst
self.amt = amt
self.send_max = send_max
self.src_currencies = src_currencies
self.ledger_hash = ledger_hash
self.ledger_index = ledger_index
def cmd_name(self) -> str:
return 'ripple_path_find'
def add_websocket_fields(self, cmd_dict: dict) -> dict:
cmd_dict = super().add_websocket_fields(cmd_dict)
cmd_dict['subcommand'] = 'create'
return cmd_dict
def to_cmd_obj(self) -> dict:
'''convert to transaction form (suitable for using json.dumps or similar)'''
cmd = {
'source_account': self.src.account_id,
'destination_account': self.dst.account_id,
'destination_amount': self.amt.to_cmd_obj()
}
if self.send_max is not None:
cmd['send_max'] = self.send_max.to_cmd_obj()
if self.ledger_hash is not None:
cmd['ledger_hash'] = self.ledger_hash
if self.ledger_index is not None:
cmd['ledger_index'] = self.ledger_index
if self.src_currencies:
c = []
for sc in self.src_currencies:
d = {'currency': sc.currency, 'issuer': sc.issuer.account_id}
c.append(d)
cmd['source_currencies'] = c
return cmd
class Sign(Command):
'''Rippled sign command'''
def __init__(self, secret: str, tx: dict):
super().__init__()
self.tx = tx
self.secret = secret
def cmd_name(self) -> str:
return 'sign'
def get_command_line_list(self) -> List[str]:
'''Return a list of strings suitable for a command line command for a rippled server'''
return [self.cmd_name(), self.secret, json.dumps(self.tx)]
def get_websocket_dict(self) -> dict:
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
result = {'secret': self.secret, 'tx_json': self.tx}
return self.add_websocket_fields(result)
class Submit(Command):
'''Rippled submit command'''
def __init__(self, tx_blob: str):
super().__init__()
self.tx_blob = tx_blob
def cmd_name(self) -> str:
return 'submit'
def get_command_line_list(self) -> List[str]:
'''Return a list of strings suitable for a command line command for a rippled server'''
return [self.cmd_name(), self.tx_blob]
def get_websocket_dict(self) -> dict:
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
result = {'tx_blob': self.tx_blob}
return self.add_websocket_fields(result)
class LedgerAccept(Command):
'''Rippled ledger_accept command'''
def __init__(self):
super().__init__()
def cmd_name(self) -> str:
return 'ledger_accept'
def get_command_line_list(self) -> List[str]:
'''Return a list of strings suitable for a command line command for a rippled server'''
return [self.cmd_name()]
def get_websocket_dict(self) -> dict:
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
result = {}
return self.add_websocket_fields(result)
class Stop(Command):
'''Rippled stop command'''
def __init__(self):
super().__init__()
def cmd_name(self) -> str:
return 'stop'
def get_command_line_list(self) -> List[str]:
'''Return a list of strings suitable for a command line command for a rippled server'''
return [self.cmd_name()]
def get_websocket_dict(self) -> dict:
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
result = {}
return self.add_websocket_fields(result)
class LogLevel(Command):
'''Rippled log_level command'''
def __init__(self, severity: str, *, partition: Optional[str] = None):
super().__init__()
self.severity = severity
self.partition = partition
def cmd_name(self) -> str:
return 'log_level'
def get_command_line_list(self) -> List[str]:
'''Return a list of strings suitable for a command line command for a rippled server'''
if self.partition is not None:
return [self.cmd_name(), self.partition, self.severity]
return [self.cmd_name(), self.severity]
def get_websocket_dict(self) -> dict:
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
result = {'severity': self.severity}
if self.partition is not None:
result['partition'] = self.partition
return self.add_websocket_fields(result)
class WalletPropose(Command):
'''Rippled wallet_propose command'''
def __init__(self,
*,
passphrase: Optional[str] = None,
seed: Optional[str] = None,
seed_hex: Optional[str] = None,
key_type: Optional[str] = None):
super().__init__()
self.passphrase = passphrase
self.seed = seed
self.seed_hex = seed_hex
self.key_type = key_type
def cmd_name(self) -> str:
return 'wallet_propose'
def get_command_line_list(self) -> List[str]:
'''Return a list of strings suitable for a command line command for a rippled server'''
assert not self.seed and not self.seed_hex and (
not self.key_type or self.key_type == 'secp256k1')
if self.passphrase:
return [self.cmd_name(), self.passphrase]
return [self.cmd_name()]
def get_websocket_dict(self) -> dict:
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
result = {}
if self.seed is not None:
result['seed'] = self.seed
if self.seed_hex is not None:
result['seed_hex'] = self.seed_hex
if self.passphrase is not None:
result['passphrase'] = self.passphrase
if self.key_type is not None:
result['key_type'] = self.key_type
return self.add_websocket_fields(result)
class ValidationCreate(Command):
'''Rippled validation_create command'''
def __init__(self, *, secret: Optional[str] = None):
super().__init__()
self.secret = secret
def cmd_name(self) -> str:
return 'validation_create'
def get_command_line_list(self) -> List[str]:
'''Return a list of strings suitable for a command line command for a rippled server'''
if self.secret:
return [self.cmd_name(), self.secret]
return [self.cmd_name()]
def get_websocket_dict(self) -> dict:
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
result = {}
if self.secret is not None:
result['secret'] = self.secret
return self.add_websocket_fields(result)
class AccountInfo(Command):
'''Rippled account_info command'''
def __init__(self,
account: Account,
*,
strict: Optional[bool] = None,
ledger_hash: Optional[str] = None,
ledger_index: Optional[Union[str, int]] = None,
queue: Optional[bool] = None,
signers_list: Optional[bool] = None):
super().__init__()
self.account = account
self.strict = strict
self.ledger_hash = ledger_hash
self.ledger_index = ledger_index
self.queue = queue
self.signers_list = signers_list
assert not ((ledger_hash is not None) and (ledger_index is not None))
def cmd_name(self) -> str:
return 'account_info'
def get_command_line_list(self) -> List[str]:
'''Return a list of strings suitable for a command line command for a rippled server'''
result = [self.cmd_name(), self.account.account_id]
if self.ledger_index is not None:
result.append(str(self.ledger_index))
if self.ledger_hash is not None:
result.append(self.ledger_hash)
if self.strict is not None:
result.append(str(self.strict))
return result
def get_websocket_dict(self) -> dict:
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
result = {'account': self.account.account_id}
if self.ledger_index is not None:
result['ledger_index'] = self.ledger_index
if self.ledger_hash is not None:
result['ledger_hash'] = self.ledger_hash
if self.strict is not None:
result['strict'] = self.strict
if self.queue is not None:
result['queue'] = self.queue
return self.add_websocket_fields(result)
class AccountLines(Command):
'''Rippled account_lines command'''
def __init__(self,
account: Account,
*,
peer: Optional[Account] = None,
ledger_hash: Optional[str] = None,
ledger_index: Optional[Union[str, int]] = None,
limit: Optional[int] = None,
marker=None):
super().__init__()
self.account = account
self.peer = peer
self.ledger_hash = ledger_hash
self.ledger_index = ledger_index
self.limit = limit
self.marker = marker
assert not ((ledger_hash is not None) and (ledger_index is not None))
def cmd_name(self) -> str:
return 'account_lines'
def get_command_line_list(self) -> List[str]:
'''Return a list of strings suitable for a command line command for a rippled server'''
assert sum(x is None for x in [
self.ledger_index, self.ledger_hash, self.limit, self.marker
]) == 4
result = [self.cmd_name(), self.account.account_id]
if self.peer is not None:
result.append(self.peer.account_id)
return result
def get_websocket_dict(self) -> dict:
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
result = {'account': self.account.account_id}
if self.peer is not None:
result['peer'] = self.peer.account_id
if self.ledger_index is not None:
result['ledger_index'] = self.ledger_index
if self.ledger_hash is not None:
result['ledger_hash'] = self.ledger_hash
if self.limit is not None:
result['limit'] = self.limit
if self.marker is not None:
result['marker'] = self.marker
return self.add_websocket_fields(result)
class AccountTx(Command):
def __init__(self,
account: Account,
*,
limit: Optional[int] = None,
marker=None):
super().__init__()
self.account = account
self.limit = limit
self.marker = marker
def cmd_name(self) -> str:
return 'account_tx'
def get_command_line_list(self) -> List[str]:
'''Return a list of strings suitable for a command line command for a rippled server'''
result = [self.cmd_name(), self.account.account_id]
return result
def get_websocket_dict(self) -> dict:
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
result = {'account': self.account.account_id}
if self.limit is not None:
result['limit'] = self.limit
if self.marker is not None:
result['marker'] = self.marker
return self.add_websocket_fields(result)
class BookOffers(Command):
'''Rippled book_offers command'''
def __init__(self,
taker_pays: Asset,
taker_gets: Asset,
*,
taker: Optional[Account] = None,
ledger_hash: Optional[str] = None,
ledger_index: Optional[Union[str, int]] = None,
limit: Optional[int] = None,
marker=None):
super().__init__()
self.taker_pays = taker_pays
self.taker_gets = taker_gets
self.taker = taker
self.ledger_hash = ledger_hash
self.ledger_index = ledger_index
self.limit = limit
self.marker = marker
assert not ((ledger_hash is not None) and (ledger_index is not None))
def cmd_name(self) -> str:
return 'book_offers'
def get_command_line_list(self) -> List[str]:
'''Return a list of strings suitable for a command line command for a rippled server'''
assert sum(x is None for x in [
self.ledger_index, self.ledger_hash, self.limit, self.marker
]) == 4
return [
self.cmd_name(),
self.taker_pays.cmd_str(),
self.taker_gets.cmd_str()
]
def get_websocket_dict(self) -> dict:
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
result = {
'taker_pays': self.taker_pays.to_cmd_obj(),
'taker_gets': self.taker_gets.to_cmd_obj()
}
if self.taker is not None:
result['taker'] = self.taker.account_id
if self.ledger_index is not None:
result['ledger_index'] = self.ledger_index
if self.ledger_hash is not None:
result['ledger_hash'] = self.ledger_hash
if self.limit is not None:
result['limit'] = self.limit
if self.marker is not None:
result['marker'] = self.marker
return self.add_websocket_fields(result)
class BookSubscription:
'''Spec for a book in a subscribe command'''
def __init__(self,
taker_pays: Asset,
taker_gets: Asset,
*,
taker: Optional[Account] = None,
snapshot: Optional[bool] = None,
both: Optional[bool] = None):
self.taker_pays = taker_pays
self.taker_gets = taker_gets
self.taker = taker
self.snapshot = snapshot
self.both = both
def to_cmd_obj(self) -> dict:
'''Return an object suitable for use in a command'''
result = {
'taker_pays': self.taker_pays.to_cmd_obj(),
'taker_gets': self.taker_gets.to_cmd_obj()
}
if self.taker is not None:
result['taker'] = self.taker.account_id
if self.snapshot is not None:
result['snapshot'] = self.snapshot
if self.both is not None:
result['both'] = self.both
return result
class ServerInfo(Command):
'''Rippled server_info command'''
def __init__(self):
super().__init__()
def cmd_name(self) -> str:
return 'server_info'
def get_command_line_list(self) -> List[str]:
'''Return a list of strings suitable for a command line command for a rippled server'''
return [self.cmd_name()]
def get_websocket_dict(self) -> dict:
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
result = {}
return self.add_websocket_fields(result)
class FederatorInfo(Command):
'''Rippled federator_info command'''
def __init__(self):
super().__init__()
def cmd_name(self) -> str:
return 'federator_info'
def get_command_line_list(self) -> List[str]:
'''Return a list of strings suitable for a command line command for a rippled server'''
return [self.cmd_name()]
def get_websocket_dict(self) -> dict:
'''Return a dictionary suitable for converting to json and sending to a rippled server using a websocket'''
result = {}
return self.add_websocket_fields(result)
class Subscribe(SubscriptionCommand):
'''The subscribe method requests periodic notifications from the server
when certain events happen. See: https://developers.ripple.com/subscribe.html'''
def __init__(
self,
*,
streams: Optional[List[str]] = None,
accounts: Optional[List[Account]] = None,
accounts_proposed: Optional[List[Account]] = None,
account_history_account: Optional[Account] = None,
books: Optional[
List[BookSubscription]] = None, # taker_pays, taker_gets
url: Optional[str] = None,
url_username: Optional[str] = None,
url_password: Optional[str] = None):
super().__init__()
self.streams = streams
self.accounts = accounts
self.account_history_account = account_history_account
self.accounts_proposed = accounts_proposed
self.books = books
self.url = url
self.url_username = url_username
self.url_password = url_password
self.websocket = None
def cmd_name(self) -> str:
if self.websocket:
return 'unsubscribe'
return 'subscribe'
def to_cmd_obj(self) -> dict:
d = {}
if self.streams is not None:
d['streams'] = self.streams
if self.accounts is not None:
d['accounts'] = [a.account_id for a in self.accounts]
if self.account_history_account is not None:
d['account_history_tx_stream'] = {
'account': self.account_history_account.account_id
}
if self.accounts_proposed is not None:
d['accounts_proposed'] = [
a.account_id for a in self.accounts_proposed
]
if self.books is not None:
d['books'] = [b.to_cmd_obj() for b in self.books]
if self.url is not None:
d['url'] = self.url
if self.url_username is not None:
d['url_username'] = self.url_username
if self.url_password is not None:
d['url_password'] = self.url_password
return d
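
Every `Command` above gets a monotonically increasing `cmd_id`, and `add_websocket_fields` stamps `id` and `command` onto the payload before it is sent. A self-contained sketch of that pattern (a simplified stand-in, not the real classes):

```python
import json


class Command:
    # Mirrors the pattern above: every command instance gets a unique id,
    # and the websocket dict carries both 'id' and 'command' fields.
    next_cmd_id = 1

    def __init__(self):
        self.cmd_id = Command.next_cmd_id
        Command.next_cmd_id += 1

    def cmd_name(self):
        raise NotImplementedError

    def to_cmd_obj(self):
        return {}

    def get_websocket_dict(self):
        d = self.to_cmd_obj()
        d['id'] = self.cmd_id
        d['command'] = self.cmd_name()
        return d


class ServerInfo(Command):
    def cmd_name(self):
        return 'server_info'


a, b = ServerInfo(), ServerInfo()
print(json.dumps(a.get_websocket_dict()))
# ids are unique per instance, so responses can be matched to requests
assert a.get_websocket_dict()['id'] != b.get_websocket_dict()['id']
assert b.get_websocket_dict()['command'] == 'server_info'
```

The per-instance id is what lets asynchronous websocket responses be matched back to the request that produced them.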


@@ -0,0 +1,256 @@
import binascii
import datetime
import logging
from typing import List, Optional, Union
import pandas as pd
import pytz
import sys
EPRINT_ENABLED = True
def disable_eprint():
global EPRINT_ENABLED
EPRINT_ENABLED = False
def enable_eprint():
global EPRINT_ENABLED
EPRINT_ENABLED = True
def eprint(*args, **kwargs):
if not EPRINT_ENABLED:
return
logging.error(*args)
print(*args, file=sys.stderr, flush=True, **kwargs)
def to_rippled_epoch(d: datetime.datetime) -> int:
'''Convert from a datetime to the number of seconds since Jan 1, 2000 (rippled epoch)'''
start = datetime.datetime(2000, 1, 1, tzinfo=pytz.utc)
return int((d - start).total_seconds())
class Account: # pylint: disable=too-few-public-methods
'''
Account in the ripple ledger
'''
def __init__(self,
*,
account_id: Optional[str] = None,
nickname: Optional[str] = None,
public_key: Optional[str] = None,
public_key_hex: Optional[str] = None,
secret_key: Optional[str] = None,
result_dict: Optional[dict] = None):
self.account_id = account_id
self.nickname = nickname
self.public_key = public_key
self.public_key_hex = public_key_hex
self.secret_key = secret_key
if result_dict is not None:
self.account_id = result_dict['account_id']
self.public_key = result_dict['public_key']
self.public_key_hex = result_dict['public_key_hex']
self.secret_key = result_dict['master_seed']
# Accounts are equal if they represent the same account on the ledger,
# i.e. only the account_id field is checked for equality.
def __eq__(self, lhs):
if not isinstance(lhs, self.__class__):
return False
return self.account_id == lhs.account_id
def __ne__(self, lhs):
return not self.__eq__(lhs)
def __str__(self) -> str:
if self.nickname is not None:
return self.nickname
return self.account_id
def alias_or_account_id(self) -> str:
'''
return the alias if it exists, otherwise return the id
'''
if self.nickname is not None:
return self.nickname
return self.account_id
def account_id_str_as_hex(self) -> str:
return binascii.hexlify(self.account_id.encode()).decode('utf-8')
def to_cmd_obj(self) -> dict:
return {
'account_id': self.account_id,
'nickname': self.nickname,
'public_key': self.public_key,
'public_key_hex': self.public_key_hex,
'secret_key': self.secret_key
}
class Asset:
'''An XRP or IOU value'''
def __init__(
self,
*,
value: Union[int, float, None] = None,
currency: Optional[
str] = None, # Will default to 'XRP' if not specified
issuer: Optional[Account] = None,
from_asset=None, # asset is of type Optional[Asset]
# from_rpc_result is a python object resulting from an rpc command
from_rpc_result: Optional[Union[dict, str]] = None):
assert from_asset is None or from_rpc_result is None
self.value = value
self.issuer = issuer
self.currency = currency
if from_asset is not None:
if self.value is None:
self.value = from_asset.value
if self.issuer is None:
self.issuer = from_asset.issuer
if self.currency is None:
self.currency = from_asset.currency
if from_rpc_result is not None:
if isinstance(from_rpc_result, str):
self.value = int(from_rpc_result)
self.currency = 'XRP'
else:
self.value = float(from_rpc_result['value'])
self.currency = from_rpc_result['currency']
self.issuer = Account(account_id=from_rpc_result['issuer'])
if self.currency is None:
self.currency = 'XRP'
if isinstance(self.value, str):
if self.is_xrp():
self.value = int(self.value)
else:
self.value = float(self.value)
def __call__(self, value: Union[int, float]):
'''Call operator for a terse asset syntax in tests, e.g. USD(100)'''
return Asset(value=value, from_asset=self)
def __add__(self, lhs):
assert (self.issuer == lhs.issuer and self.currency == lhs.currency)
return Asset(value=self.value + lhs.value,
currency=self.currency,
issuer=self.issuer)
def __sub__(self, lhs):
assert (self.issuer == lhs.issuer and self.currency == lhs.currency)
return Asset(value=self.value - lhs.value,
currency=self.currency,
issuer=self.issuer)
def __eq__(self, lhs):
if not isinstance(lhs, self.__class__):
return False
return (self.value == lhs.value and self.currency == lhs.currency
and self.issuer == lhs.issuer)
def __ne__(self, lhs):
return not self.__eq__(lhs)
def __str__(self) -> str:
value_part = '' if self.value is None else f'{self.value}/'
issuer_part = '' if self.issuer is None else f'/{self.issuer}'
return f'{value_part}{self.currency}{issuer_part}'
def __repr__(self) -> str:
return self.__str__()
def is_xrp(self) -> bool:
''' return true if the asset represents XRP'''
return self.currency == 'XRP'
def cmd_str(self) -> str:
value_part = '' if self.value is None else f'{self.value}/'
issuer_part = '' if self.issuer is None else f'/{self.issuer.account_id}'
return f'{value_part}{self.currency}{issuer_part}'
def to_cmd_obj(self) -> dict:
'''Return an object suitable for use in a command'''
if self.currency == 'XRP':
if self.value is not None:
return f'{self.value}' # must be a string
return {'currency': self.currency}
result = {'currency': self.currency, 'issuer': self.issuer.account_id}
if self.value is not None:
result['value'] = f'{self.value}' # must be a string
return result
def XRP(v: Union[int, float]) -> Asset:
return Asset(value=v * 1_000_000)
def drops(v: int) -> Asset:
return Asset(value=v)
class Path:
'''Payment Path'''
def __init__(self,
nodes: Optional[List[Union[Account, Asset]]] = None,
*,
result_list: Optional[List[dict]] = None):
assert not (nodes and result_list)
if result_list is not None:
self.result_list = result_list
return
if nodes is None:
self.result_list = []
return
self.result_list = [
self._create_account_path_node(n)
if isinstance(n, Account) else self._create_currency_path_node(n)
for n in nodes
]
def _create_account_path_node(self, account: Account) -> dict:
return {
'account': account.account_id,
'type': 1,
'type_hex': '0000000000000001'
}
def _create_currency_path_node(self, asset: Asset) -> dict:
result = {
'currency': asset.currency,
'type': 48,
'type_hex': '0000000000000030'
}
if not asset.is_xrp():
result['issuer'] = asset.issuer.account_id
return result
def to_cmd_obj(self) -> list:
'''Return an object suitable for use in a command'''
return self.result_list
class PathList:
'''Collection of paths for use in payments'''
def __init__(self,
path_list: Optional[List[Path]] = None,
*,
result_list: Optional[List[List[dict]]] = None):
# result_list can be the response from the rippled server
assert not (path_list and result_list)
if result_list is not None:
self.paths = [Path(result_list=l) for l in result_list]
return
self.paths = path_list
def to_cmd_obj(self) -> list:
'''Return an object suitable for use in a command'''
return [p.to_cmd_obj() for p in self.paths]
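
`Asset.to_cmd_obj` above encodes the XRPL amount convention: XRP amounts serialize as a plain string of drops, IOU amounts as a `{currency, issuer, value}` object with a string `value`. A hypothetical function-level mirror of that logic (`asset_to_cmd_obj` and the issuer id are illustrative, not part of the module):

```python
import json


def asset_to_cmd_obj(value=None, currency='XRP', issuer=None):
    # XRP: a bare string of drops (or just the currency if no value).
    # IOU: an object with currency, issuer, and a string value.
    if currency == 'XRP':
        if value is not None:
            return f'{value}'
        return {'currency': currency}
    result = {'currency': currency, 'issuer': issuer}
    if value is not None:
        result['value'] = f'{value}'
    return result


# XRP(1) above is 1_000_000 drops; IOUs keep their currency/issuer.
print(json.dumps(asset_to_cmd_obj(1 * 1_000_000)))
print(json.dumps(asset_to_cmd_obj(100, 'USD', 'rHypotheticalIssuer')))
```

This asymmetry (string vs. object) is also why `Asset.__init__` accepts `from_rpc_result` as either a `str` or a `dict`.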


@@ -0,0 +1,101 @@
from typing import List, Optional, Tuple
class Section:
@staticmethod
def section_header(l: str) -> Optional[str]:
'''
If the line is a section header, return the section name
otherwise return None
'''
if l.startswith('[') and l.endswith(']'):
return l[1:-1]
return None
def __init__(self, name: str):
super().__setattr__('_name', name)
# lines contains all non key-value pairs
super().__setattr__('_lines', [])
super().__setattr__('_kv_pairs', {})
def get_name(self):
return self._name
def add_line(self, l):
s = l.split('=')
if len(s) == 2:
self._kv_pairs[s[0].strip()] = s[1].strip()
else:
self._lines.append(l)
def get_lines(self):
return self._lines
def get_line(self) -> Optional[str]:
if len(self._lines) > 0:
return self._lines[0]
return None
def __getattr__(self, name):
try:
return self._kv_pairs[name]
except KeyError:
raise AttributeError(name)
def __setattr__(self, name, value):
if name in self.__dict__:
super().__setattr__(name, value)
else:
self._kv_pairs[name] = value
def __getstate__(self):
return vars(self)
def __setstate__(self, state):
vars(self).update(state)
class ConfigFile:
def __init__(self, *, file_name: Optional[str] = None):
# parse the file
self._file_name = file_name
self._sections = {}
if not file_name:
return
cur_section = None
with open(file_name) as f:
for n, l in enumerate(f):
l = l.strip()
if l.startswith('#') or not l:
continue
if section_name := Section.section_header(l):
if cur_section:
self.add_section(cur_section)
cur_section = Section(section_name)
continue
if not cur_section:
raise ValueError(
f'Error parsing config file: {file_name} line_num: {n} line: {l}'
)
cur_section.add_line(l)
if cur_section:
self.add_section(cur_section)
def add_section(self, s: Section):
self._sections[s.get_name()] = s
def get_file_name(self):
return self._file_name
def __getstate__(self):
return vars(self)
def __setstate__(self, state):
vars(self).update(state)
def __getattr__(self, name):
try:
return self._sections[name]
except KeyError:
raise AttributeError(name)
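
The `ConfigFile`/`Section` parse loop above can be reduced to a few lines: a `[name]` header opens a section, `key = value` lines become key/value pairs, and everything else is kept as a bare line. This is an assumption-level simplification of the classes, using plain dicts:

```python
import io

SAMPLE = """\
# comment
[server]
port_rpc_admin_local

[port_peer]
port = 51235
protocol = peer
"""

sections = {}
cur = None
for raw in io.StringIO(SAMPLE):
    line = raw.strip()
    if not line or line.startswith('#'):
        continue  # skip blanks and comments, as the real parser does
    if line.startswith('[') and line.endswith(']'):
        cur = line[1:-1]
        sections[cur] = {'lines': [], 'kv': {}}
        continue
    key, sep, val = line.partition('=')
    if sep:
        sections[cur]['kv'][key.strip()] = val.strip()
    else:
        sections[cur]['lines'].append(line)

print(sections['port_peer']['kv']['port'])
```

The real class additionally exposes sections and keys as attributes via `__getattr__`, which is why it defines `__getstate__`/`__setstate__` so pickling does not recurse through the attribute hook.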


@@ -0,0 +1,630 @@
#!/usr/bin/env python3
# Generate rippled config files, each with their own ports, database paths, and validation_seeds.
# There will be configs for shards/no_shards, main/test nets, two config files for each combination
# (so one can run in a dogfood mode while another is tested). To avoid confusion, the directory path
# will be $data_dir/{main | test}.{shard | no_shard}.{dog | test}
# The config file will reside in that directory with the name rippled.cfg
# The validators file will reside in that directory with the name validators.txt
'''
Script to test and debug sidechains.
The rippled exe location can be set through the command line or
the environment variable RIPPLED_MAINCHAIN_EXE
The configs_dir (where the config files will reside) can be set through the command line
or the environment variable RIPPLED_SIDECHAIN_CFG_DIR
'''
import argparse
from dataclasses import dataclass
import json
import os
from pathlib import Path
import sys
from typing import Dict, List, Optional, Tuple, Union
from config_file import ConfigFile
from command import ValidationCreate, WalletPropose
from common import Account, Asset, eprint, XRP
from app import App, single_client_app
mainnet_validators = """
[validator_list_sites]
https://vl.ripple.com
[validator_list_keys]
ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
"""
altnet_validators = """
[validator_list_sites]
https://vl.altnet.rippletest.net
[validator_list_keys]
ED264807102805220DA0F312E71FC2C69E1552C9C5790F6C25E3729DEB573D5860
"""
node_size = 'medium'
default_data_dir = '/home/swd/data/rippled'
@dataclass
class Keypair:
public_key: str
secret_key: str
account_id: Optional[str]
def generate_node_keypairs(n: int, rip: App) -> List[Keypair]:
'''
generate keypairs suitable for validator keys
'''
result = []
for _ in range(n):
keys = rip(ValidationCreate())
result.append(
Keypair(public_key=keys["validation_public_key"],
secret_key=keys["validation_seed"],
account_id=None))
return result
def generate_federator_keypairs(n: int, rip: App) -> List[Keypair]:
'''
generate keypairs suitable for federator keys
'''
result = []
for _ in range(n):
keys = rip(WalletPropose(key_type='ed25519'))
result.append(
Keypair(public_key=keys["public_key"],
secret_key=keys["master_seed"],
account_id=keys["account_id"]))
return result
class Ports:
'''
Port numbers for various services.
Port numbers differ by cfg_index so different configs can run
at the same time without interfering with each other.
'''
peer_port_base = 51235
http_admin_port_base = 5005
ws_public_port_base = 6005
def __init__(self, cfg_index: int):
self.peer_port = Ports.peer_port_base + cfg_index
self.http_admin_port = Ports.http_admin_port_base + cfg_index
self.ws_public_port = Ports.ws_public_port_base + (2 * cfg_index)
# note admin port uses public port base
self.ws_admin_port = Ports.ws_public_port_base + (2 * cfg_index) + 1
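
The `Ports` scheme above gives each `cfg_index` a disjoint set of ports: peer and HTTP admin ports advance by one per index, while the websocket base is consumed two slots at a time (public, then admin). A standalone sketch of the same arithmetic (`ports_for` is illustrative, not part of the script):

```python
# Bases copied from the Ports class above.
PEER_BASE, HTTP_BASE, WS_BASE = 51235, 5005, 6005


def ports_for(cfg_index):
    return {
        'peer': PEER_BASE + cfg_index,
        'http_admin': HTTP_BASE + cfg_index,
        'ws_public': WS_BASE + 2 * cfg_index,
        # the admin websocket port shares the public port base
        'ws_admin': WS_BASE + 2 * cfg_index + 1,
    }


p0, p1 = ports_for(0), ports_for(1)
# no overlap between any two configs' port sets
assert not set(p0.values()) & set(p1.values())
```

This is what lets multiple generated configs run on one host at the same time without port collisions.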
class Network:
def __init__(self, num_nodes: int, num_validators: int,
start_cfg_index: int, rip: App):
self.validator_keypairs = generate_node_keypairs(num_validators, rip)
self.ports = [Ports(start_cfg_index + i) for i in range(num_nodes)]
class SidechainNetwork(Network):
def __init__(self, num_nodes: int, num_federators: int,
num_validators: int, start_cfg_index: int, rip: App):
super().__init__(num_nodes, num_validators, start_cfg_index, rip)
self.federator_keypairs = generate_federator_keypairs(
num_federators, rip)
self.main_account = rip(WalletPropose(key_type='secp256k1'))
class XChainAsset:
def __init__(self, main_asset: Asset, side_asset: Asset,
main_value: Union[int, float], side_value: Union[int, float],
main_refund_penalty: Union[int, float],
side_refund_penalty: Union[int, float]):
self.main_asset = main_asset(main_value)
self.side_asset = side_asset(side_value)
self.main_refund_penalty = main_asset(main_refund_penalty)
self.side_refund_penalty = side_asset(side_refund_penalty)
def generate_asset_stanzas(
assets: Optional[Dict[str, XChainAsset]] = None) -> str:
if assets is None:
# default to xrp only at a 1:1 value
assets = {}
assets['xrp_xrp_sidechain_asset'] = XChainAsset(
XRP(0), XRP(0), 1, 1, 400, 400)
index_stanza = """
[sidechain_assets]"""
asset_stanzas = []
for name, xchainasset in assets.items():
index_stanza += '\n' + name
new_stanza = f"""
[{name}]
mainchain_asset={json.dumps(xchainasset.main_asset.to_cmd_obj())}
sidechain_asset={json.dumps(xchainasset.side_asset.to_cmd_obj())}
mainchain_refund_penalty={json.dumps(xchainasset.main_refund_penalty.to_cmd_obj())}
sidechain_refund_penalty={json.dumps(xchainasset.side_refund_penalty.to_cmd_obj())}"""
asset_stanzas.append(new_stanza)
return index_stanza + '\n' + '\n'.join(asset_stanzas)
# First element of the returned tuple is the sidechain stanzas
# second element is the bootstrap stanzas
def generate_sidechain_stanza(
mainchain_ports: Ports,
main_account: dict,
federators: List[Keypair],
signing_key: str,
mainchain_cfg_file: str,
xchain_assets: Optional[Dict[str,
XChainAsset]] = None) -> Tuple[str, str]:
mainchain_ip = "127.0.0.1"
federators_stanza = """
# federator signing public keys
[sidechain_federators]
"""
federators_secrets_stanza = """
# federator signing secret keys (for standalone-mode testing only; normally not in a config file)
[sidechain_federators_secrets]
"""
bootstrap_federators_stanza = """
# first value is federator signing public key, second is the signing pk account
[sidechain_federators]
"""
assets_stanzas = generate_asset_stanzas(xchain_assets)
for fed in federators:
federators_stanza += f'{fed.public_key}\n'
federators_secrets_stanza += f'{fed.secret_key}\n'
bootstrap_federators_stanza += f'{fed.public_key} {fed.account_id}\n'
sidechain_stanzas = f"""
[sidechain]
signing_key={signing_key}
mainchain_account={main_account["account_id"]}
mainchain_ip={mainchain_ip}
mainchain_port_ws={mainchain_ports.ws_public_port}
# mainchain config file is: {mainchain_cfg_file}
{assets_stanzas}
{federators_stanza}
{federators_secrets_stanza}
"""
bootstrap_stanzas = f"""
[sidechain]
mainchain_secret={main_account["master_seed"]}
{bootstrap_federators_stanza}
"""
return (sidechain_stanzas, bootstrap_stanzas)
# cfg_type will typically be either 'dog' or 'test', but can be any string. It is only used
# to create the data directories.
def generate_cfg_dir(*,
ports: Ports,
with_shards: bool,
main_net: bool,
cfg_type: str,
sidechain_stanza: str,
sidechain_bootstrap_stanza: str,
validation_seed: Optional[str] = None,
validators: Optional[List[str]] = None,
fixed_ips: Optional[List[Ports]] = None,
data_dir: str,
full_history: bool = False,
with_hooks: bool = False) -> str:
ips_stanza = ''
this_ip = '127.0.0.1'
if fixed_ips:
ips_stanza = '# Fixed ips for a testnet.\n'
ips_stanza += '[ips_fixed]\n'
for i, p in enumerate(fixed_ips):
if p.peer_port == ports.peer_port:
continue
# rippled limits the number of connects per ip. So use the other loopback devices
ips_stanza += f'127.0.0.{i+1} {p.peer_port}\n'
else:
ips_stanza = '# Where to find some other servers speaking the Ripple protocol.\n'
ips_stanza += '[ips]\n'
if main_net:
ips_stanza += 'r.ripple.com 51235\n'
else:
ips_stanza += 'r.altnet.rippletest.net 51235\n'
disable_shards = '' if with_shards else '# '
disable_delete = '#' if full_history else ''
history_line = 'full' if full_history else '256'
earliest_seq_line = ''
if sidechain_stanza:
earliest_seq_line = 'earliest_seq=1'
hooks_line = 'Hooks' if with_hooks else ''
validation_seed_stanza = ''
if validation_seed:
validation_seed_stanza = f'''
[validation_seed]
{validation_seed}
'''
node_size = 'medium'
shard_str = 'shards' if with_shards else 'no_shards'
net_str = 'main' if main_net else 'test'
if not fixed_ips:
sub_dir = data_dir + f'/{net_str}.{shard_str}.{cfg_type}'
if sidechain_stanza:
sub_dir += '.sidechain'
else:
sub_dir = data_dir + f'/{cfg_type}'
db_path = sub_dir + '/db'
debug_logfile = sub_dir + '/debug.log'
shard_db_path = sub_dir + '/shards'
node_db_path = db_path + '/nudb'
cfg_str = f"""
[server]
port_rpc_admin_local
port_peer
port_ws_admin_local
port_ws_public
#ssl_key = /etc/ssl/private/server.key
#ssl_cert = /etc/ssl/certs/server.crt
[port_rpc_admin_local]
port = {ports.http_admin_port}
ip = {this_ip}
admin = {this_ip}
protocol = http
[port_peer]
port = {ports.peer_port}
ip = 0.0.0.0
protocol = peer
[port_ws_admin_local]
port = {ports.ws_admin_port}
ip = {this_ip}
admin = {this_ip}
protocol = ws
[port_ws_public]
port = {ports.ws_public_port}
ip = {this_ip}
protocol = ws
# protocol = wss
[node_size]
{node_size}
[ledger_history]
{history_line}
[node_db]
type=NuDB
path={node_db_path}
open_files=2000
filter_bits=12
cache_mb=256
file_size_mb=8
file_size_mult=2
{earliest_seq_line}
{disable_delete}online_delete=256
{disable_delete}advisory_delete=0
[database_path]
{db_path}
# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
{debug_logfile}
[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org
{ips_stanza}
[validators_file]
validators.txt
[rpc_startup]
{{ "command": "log_level", "severity": "fatal" }}
{{ "command": "log_level", "partition": "SidechainFederator", "severity": "trace" }}
[ssl_verify]
1
{validation_seed_stanza}
{disable_shards}[shard_db]
{disable_shards}type=NuDB
{disable_shards}path={shard_db_path}
{disable_shards}max_historical_shards=6
{sidechain_stanza}
[features]
{hooks_line}
PayChan
Flow
FlowCross
TickSize
fix1368
Escrow
fix1373
EnforceInvariants
SortedDirectories
fix1201
fix1512
fix1513
fix1523
fix1528
DepositAuth
Checks
fix1571
fix1543
fix1623
DepositPreauth
fix1515
fix1578
MultiSignReserve
fixTakerDryOfferRemoval
fixMasterKeyAsRegularKey
fixCheckThreading
fixPayChanRecipientOwnerDir
DeletableAccounts
fixQualityUpperBound
RequireFullyCanonicalSig
fix1781
HardenedValidations
fixAmendmentMajorityCalc
NegativeUNL
TicketBatch
FlowSortStrands
fixSTAmountCanonicalize
fixRmSmallIncreasedQOffers
CheckCashMakesTrustLine
"""
validators_str = ''
for p in [sub_dir, db_path, shard_db_path]:
Path(p).mkdir(parents=True, exist_ok=True)
# Add the validators.txt file
if validators:
validators_str = '[validators]\n'
for k in validators:
validators_str += f'{k}\n'
else:
validators_str = mainnet_validators if main_net else altnet_validators
with open(sub_dir + "/validators.txt", "w") as f:
f.write(validators_str)
# add the rippled.cfg file
with open(sub_dir + "/rippled.cfg", "w") as f:
f.write(cfg_str)
if sidechain_bootstrap_stanza:
# add the bootstrap file
with open(sub_dir + "/sidechain_bootstrap.cfg", "w") as f:
f.write(sidechain_bootstrap_stanza)
return sub_dir + "/rippled.cfg"
def generate_multinode_net(out_dir: str,
mainnet: Network,
sidenet: SidechainNetwork,
xchain_assets: Optional[Dict[str,
XChainAsset]] = None):
mainnet_cfgs = []
for i in range(len(mainnet.ports)):
validator_kp = mainnet.validator_keypairs[i]
ports = mainnet.ports[i]
mainchain_cfg_file = generate_cfg_dir(
ports=ports,
with_shards=False,
main_net=True,
cfg_type=f'mainchain_{i}',
sidechain_stanza='',
sidechain_bootstrap_stanza='',
validation_seed=validator_kp.secret_key,
data_dir=out_dir)
mainnet_cfgs.append(mainchain_cfg_file)
for i in range(len(sidenet.ports)):
validator_kp = sidenet.validator_keypairs[i]
ports = sidenet.ports[i]
mainnet_i = i % len(mainnet.ports)
sidechain_stanza, sidechain_bootstrap_stanza = generate_sidechain_stanza(
mainnet.ports[mainnet_i], sidenet.main_account,
sidenet.federator_keypairs,
sidenet.federator_keypairs[i].secret_key, mainnet_cfgs[mainnet_i],
xchain_assets)
generate_cfg_dir(
ports=ports,
with_shards=False,
main_net=True,
cfg_type=f'sidechain_{i}',
sidechain_stanza=sidechain_stanza,
sidechain_bootstrap_stanza=sidechain_bootstrap_stanza,
validation_seed=validator_kp.secret_key,
validators=[kp.public_key for kp in sidenet.validator_keypairs],
fixed_ips=sidenet.ports,
data_dir=out_dir,
full_history=True,
with_hooks=False)
def parse_args():
parser = argparse.ArgumentParser(
description=('Create config files for testing sidechains'))
parser.add_argument(
'--exe',
'-e',
help=('path to rippled executable'),
)
parser.add_argument(
'--usd',
'-u',
action='store_true',
help=('include a USD/root IOU asset for cross chain transfers'),
)
parser.add_argument(
'--cfgs_dir',
'-c',
help=
('path to configuration file dir (where the output config files will be located)'
),
)
return parser.parse_known_args()[0]
class Params:
def __init__(self):
args = parse_args()
self.exe = None
if 'RIPPLED_MAINCHAIN_EXE' in os.environ:
self.exe = os.environ['RIPPLED_MAINCHAIN_EXE']
if args.exe:
self.exe = args.exe
self.configs_dir = None
if 'RIPPLED_SIDECHAIN_CFG_DIR' in os.environ:
self.configs_dir = os.environ['RIPPLED_SIDECHAIN_CFG_DIR']
if args.cfgs_dir:
self.configs_dir = args.cfgs_dir
self.usd = False
if args.usd:
self.usd = args.usd
def check_error(self) -> str:
'''
Check for errors. Return `None` if no errors,
otherwise return a string describing the error
'''
if not self.exe:
return 'Missing exe location. Either set the env variable RIPPLED_MAINCHAIN_EXE or use the --exe command line switch'
if not self.configs_dir:
return 'Missing configs directory location. Either set the env variable RIPPLED_SIDECHAIN_CFG_DIR or use the --cfgs_dir command line switch'
def main(params: Params,
xchain_assets: Optional[Dict[str, XChainAsset]] = None):
if err_str := params.check_error():
eprint(err_str)
sys.exit(1)
index = 0
nonvalidator_cfg_file_name = generate_cfg_dir(
ports=Ports(index),
with_shards=False,
main_net=True,
cfg_type='non_validator',
sidechain_stanza='',
sidechain_bootstrap_stanza='',
validation_seed=None,
data_dir=params.configs_dir)
index = index + 1
nonvalidator_config = ConfigFile(file_name=nonvalidator_cfg_file_name)
with single_client_app(exe=params.exe,
config=nonvalidator_config,
standalone=True) as rip:
mainnet = Network(num_nodes=1,
num_validators=1,
start_cfg_index=index,
rip=rip)
sidenet = SidechainNetwork(num_nodes=5,
num_federators=5,
num_validators=5,
start_cfg_index=index + 1,
rip=rip)
generate_multinode_net(
out_dir=f'{params.configs_dir}/sidechain_testnet',
mainnet=mainnet,
sidenet=sidenet,
xchain_assets=xchain_assets)
index = index + 2
(Path(params.configs_dir) / 'logs').mkdir(parents=True, exist_ok=True)
for with_shards in [True, False]:
for is_main_net in [True, False]:
for cfg_type in ['dog', 'test', 'one', 'two']:
if not is_main_net and cfg_type not in ['dog', 'test']:
continue
mainnet = Network(num_nodes=1,
num_validators=1,
start_cfg_index=index,
rip=rip)
mainchain_cfg_file = generate_cfg_dir(
data_dir=params.configs_dir,
ports=mainnet.ports[0],
with_shards=with_shards,
main_net=is_main_net,
cfg_type=cfg_type,
sidechain_stanza='',
sidechain_bootstrap_stanza='',
validation_seed=mainnet.validator_keypairs[0].secret_key)
sidenet = SidechainNetwork(num_nodes=1,
num_federators=5,
num_validators=1,
start_cfg_index=index + 1,
rip=rip)
signing_key = sidenet.federator_keypairs[0].secret_key
sidechain_stanza, sidechain_bootstrap_stanza = generate_sidechain_stanza(
mainnet.ports[0], sidenet.main_account,
sidenet.federator_keypairs, signing_key,
mainchain_cfg_file, xchain_assets)
generate_cfg_dir(
data_dir=params.configs_dir,
ports=sidenet.ports[0],
with_shards=with_shards,
main_net=is_main_net,
cfg_type=cfg_type,
sidechain_stanza=sidechain_stanza,
sidechain_bootstrap_stanza=sidechain_bootstrap_stanza,
validation_seed=sidenet.validator_keypairs[0].secret_key)
index = index + 2
if __name__ == '__main__':
params = Params()
xchain_assets = None
if params.usd:
xchain_assets = {}
xchain_assets['xrp_xrp_sidechain_asset'] = XChainAsset(
XRP(0), XRP(0), 1, 1, 200, 200)
root_account = Account(account_id="rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh")
main_iou_asset = Asset(value=0, currency='USD', issuer=root_account)
side_iou_asset = Asset(value=0, currency='USD', issuer=root_account)
xchain_assets['iou_iou_sidechain_asset'] = XChainAsset(
main_iou_asset, side_iou_asset, 1, 1, 0.02, 0.02)
main(params, xchain_assets)

File diff suppressed because it is too large

@@ -0,0 +1,198 @@
#!/usr/bin/env python3
import argparse
import json
import os
import re
import sys
from common import eprint
from typing import IO, Optional
class LogLine:
UNSTRUCTURED_RE = re.compile(r'''(?x)
# The x flag enables insignificant whitespace mode (allowing comments)
^(?P<timestamp>.*UTC)
[\ ]
(?P<module>[^:]*):(?P<level>[^\ ]*)
[\ ]
(?P<msg>.*$)
''')
STRUCTURED_RE = re.compile(r'''(?x)
# The x flag enables insignificant whitespace mode (allowing comments)
^(?P<timestamp>.*UTC)
[\ ]
(?P<module>[^:]*):(?P<level>[^\ ]*)
[\ ]
(?P<msg>[^{]*)
[\ ]
(?P<json_data>.*$)
''')
def __init__(self, line: str):
self.raw_line = line
self.json_data = None
try:
if line.endswith('}'):
m = self.STRUCTURED_RE.match(line)
try:
self.json_data = json.loads(m.group('json_data'))
except:
m = self.UNSTRUCTURED_RE.match(line)
else:
m = self.UNSTRUCTURED_RE.match(line)
self.timestamp = m.group('timestamp')
self.level = m.group('level')
self.module = m.group('module')
self.msg = m.group('msg')
except Exception as e:
eprint(f'init exception: {e} line: {line}')
def to_mixed_json_str(self) -> str:
'''
return a pretty printed string as mixed json
'''
try:
r = f'{self.timestamp} {self.module}:{self.level} {self.msg}'
if self.json_data:
r += '\n' + json.dumps(self.json_data, indent=1)
return r
except:
eprint(f'Using raw line: {self.raw_line}')
return self.raw_line
def to_pure_json(self, f_id: Optional[str] = None) -> dict:
'''
return a json dict; include the server id under key 'f' when given
'''
result = {}
if f_id:
result['f'] = f_id
result['t'] = self.timestamp
result['m'] = self.module
result['l'] = self.level
result['msg'] = self.msg
if self.json_data:
result['data'] = self.json_data
return result
def to_pure_json_str(self, f_id: Optional[str] = None) -> str:
'''
return a pretty printed string as pure json
'''
try:
return json.dumps(self.to_pure_json(f_id), indent=1)
except:
return self.raw_line
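The two regexes can be exercised directly; the line below is modeled on the rippled log format used elsewhere in these scripts (the field values are made up):

```python
import re

# same pattern as LogLine.UNSTRUCTURED_RE
UNSTRUCTURED_RE = re.compile(r'''(?x)
    ^(?P<timestamp>.*UTC)
    [\ ]
    (?P<module>[^:]*):(?P<level>[^\ ]*)
    [\ ]
    (?P<msg>.*$)
    ''')

line = '2021-Oct-08 21:33:41.731371562 UTC SidechainFederator:TRC no last xchain txn'
m = UNSTRUCTURED_RE.match(line)
assert m is not None
print(m.group('module'), m.group('level'))  # SidechainFederator TRC
```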
def convert_log(in_file_name: str,
out: IO,
*,
as_list=False,
pure_json=False,
module: Optional[str] = 'SidechainFederator') -> list:
result = []
try:
prev_lines = None
with open(in_file_name) as input:
for l in input:
l = l.strip()
if not l:
continue
if LogLine.UNSTRUCTURED_RE.match(l):
if prev_lines:
log_line = LogLine(prev_lines)
if not module or log_line.module == module:
if as_list:
result.append(log_line.to_pure_json())
else:
if pure_json:
print(log_line.to_pure_json_str(),
file=out)
else:
print(log_line.to_mixed_json_str(),
file=out)
prev_lines = l
else:
if not prev_lines:
eprint(f'Error: Expected prev_lines. Cur line: {l}')
assert prev_lines
prev_lines += f' {l}'
if prev_lines:
log_line = LogLine(prev_lines)
if not module or log_line.module == module:
if as_list:
result.append(log_line.to_pure_json())
else:
if pure_json:
print(log_line.to_pure_json_str(),
file=out,
flush=True)
else:
print(log_line.to_mixed_json_str(),
file=out,
flush=True)
except Exception as e:
eprint(f'Exception: {e}')
raise e
return result
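convert_log treats any line matching the timestamp pattern as the start of a new record and folds everything else into the previous one; the joining logic can be sketched without file I/O (using a simplified start-of-record pattern):

```python
import re

START_RE = re.compile(r'^.*UTC ')  # minimal stand-in for LogLine.UNSTRUCTURED_RE


def join_records(lines):
    # Accumulate continuation lines onto the preceding record,
    # as convert_log does with prev_lines.
    records, prev = [], None
    for l in (l.strip() for l in lines):
        if not l:
            continue
        if START_RE.match(l):
            if prev:
                records.append(prev)
            prev = l
        else:
            assert prev, f'continuation before any record: {l}'
            prev += f' {l}'
    if prev:
        records.append(prev)
    return records


lines = [
    '2021-Oct-08 21:33:41 UTC Mod:TRC first line',
    '   {"wrapped":',
    '    "json"}',
    '2021-Oct-08 21:33:42 UTC Mod:TRC second line',
]
print(join_records(lines))
```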
def convert_all(in_dir_name: str, out: IO, *, pure_json=False):
'''
Convert all the "debug.log" log files one directory level below in_dir_name into a single json file.
The output is keyed by the directory name that the original log file came from.
This is useful when analyzing networks that run on the local machine.
'''
if not os.path.isdir(in_dir_name):
eprint(f'Error: {in_dir_name} is not a directory')
return
files = []
f_ids = []
for subdir in os.listdir(in_dir_name):
file = f'{in_dir_name}/{subdir}/debug.log'
if not os.path.isfile(file):
continue
files.append(file)
f_ids.append(subdir)
result = {}
for f, f_id in zip(files, f_ids):
l = convert_log(f, out, as_list=True, pure_json=pure_json, module=None)
result[f_id] = l
print(json.dumps(result, indent=1), file=out, flush=True)
def parse_args():
parser = argparse.ArgumentParser(
description=('python script to convert log files to json'))
parser.add_argument(
'--input',
'-i',
help=('input log file or sidechain config directory structure'),
)
parser.add_argument(
'--output',
'-o',
help=('output log file'),
)
return parser.parse_known_args()[0]
if __name__ == '__main__':
try:
args = parse_args()
with open(args.output, "w") as out:
if os.path.isdir(args.input):
convert_all(args.input, out, pure_json=True)
else:
convert_log(args.input, out, pure_json=True)
except Exception as e:
eprint(f'Exception: {e}')
raise e


@@ -0,0 +1,285 @@
#!/usr/bin/env python3
import argparse
from collections import defaultdict
import datetime
import json
import numpy as np
import os
import pandas as pd
import string
import sys
from typing import Dict, Set
from common import eprint
import log_analyzer
def _has_256bit_hex_field_other(data, result: Set[str]):
return
_has_256bit_hex_field_overloads = defaultdict(
lambda: _has_256bit_hex_field_other)
def _has_256bit_hex_field_str(data: str, result: Set[str]):
if len(data) != 64:
return
for c in data:
o = ord(c.upper())
if ord('A') <= o <= ord('F'):
continue
if ord('0') <= o <= ord('9'):
continue
return
result.add(data)
_has_256bit_hex_field_overloads[str] = _has_256bit_hex_field_str
def _has_256bit_hex_field_dict(data: dict, result: Set[str]):
for k, v in data.items():
if k in [
"meta", "index", "LedgerIndex", "ledger_index", "ledger_hash",
"SigningPubKey", "suppression"
]:
continue
_has_256bit_hex_field_overloads[type(v)](v, result)
_has_256bit_hex_field_overloads[dict] = _has_256bit_hex_field_dict
def _has_256bit_hex_field_list(data: list, result: Set[str]):
for v in data:
_has_256bit_hex_field_overloads[type(v)](v, result)
_has_256bit_hex_field_overloads[list] = _has_256bit_hex_field_list
def has_256bit_hex_field(data: dict) -> Set[str]:
'''
Find all the fields that are strings 64 chars long with only hex digits
This is useful when grouping transactions by hex
'''
result = set()
_has_256bit_hex_field_dict(data, result)
return result
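A simplified, self-contained version of the same scan, using isinstance dispatch instead of the overload table (the skip list here is shortened for illustration):

```python
import string


def is_256bit_hex(s) -> bool:
    # 64 characters, all hex digits
    return (isinstance(s, str) and len(s) == 64
            and all(c in string.hexdigits for c in s))


def find_hex_fields(data, skip=('meta', 'index', 'SigningPubKey')):
    # Walk nested dicts/lists, collecting candidate transaction hashes.
    found = set()
    if isinstance(data, dict):
        for k, v in data.items():
            if k not in skip:
                found |= find_hex_fields(v, skip)
    elif isinstance(data, list):
        for v in data:
            found |= find_hex_fields(v, skip)
    elif is_256bit_hex(data):
        found.add(data)
    return found


h = 'AB' * 32  # 64 hex chars
print(find_hex_fields({'hash': h, 'meta': {'hash': 'CD' * 32}, 'n': 1}))
```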
def group_by_txn(data: dict) -> dict:
'''
return a dictionary keyed by transaction hash. Each value is another
dictionary keyed by server id, whose values are lists of log items.
'''
def _make_default():
return defaultdict(lambda: list())
result = defaultdict(_make_default)
for server_id, log_list in data.items():
for log_item in log_list:
if txn_hashes := has_256bit_hex_field(log_item):
for h in txn_hashes:
result[h][server_id].append(log_item)
return result
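The nested structure group_by_txn builds (txn hash -> server id -> list of log items) looks like this with toy data, grouping by an explicit 'hash' field instead of the hex scan:

```python
from collections import defaultdict

# txn_hash -> server_id -> [log items]
result = defaultdict(lambda: defaultdict(list))
logs = {
    'mainchain_0': [{'hash': 'H1', 'msg': 'detected'}],
    'sidechain_0': [{'hash': 'H1', 'msg': 'signed'},
                    {'hash': 'H2', 'msg': 'detected'}],
}
for server_id, items in logs.items():
    for item in items:
        result[item['hash']][server_id].append(item)
print(sorted(result['H1'].keys()))  # ['mainchain_0', 'sidechain_0']
```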
def _rekey_dict_by_txn_date(hash_to_timestamp: dict,
grouped_by_txn: dict) -> dict:
'''
hash_to_timestamp is a dictionary mapping txn hash to timestamp.
grouped_by_txn is a dictionary keyed by txn hash with an unspecified value.
Returns a new grouped_by_txn dictionary with the transactions sorted by date;
transactions without a known timestamp are placed first.
'''
known_txns = [
k for k, v in sorted(hash_to_timestamp.items(), key=lambda x: x[1])
]
result = {}
for k, v in grouped_by_txn.items():
if k not in known_txns:
result[k] = v
for h in known_txns:
result[h] = grouped_by_txn[h]
return result
def _to_timestamp(str_time: str) -> datetime.datetime:
return datetime.datetime.strptime(
str_time.split('.')[0], "%Y-%b-%d %H:%M:%S")
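_to_timestamp drops the fractional seconds and the trailing " UTC" with a single split before parsing; a standalone equivalent (note %b assumes an English month-abbreviation locale):

```python
import datetime


def to_timestamp(str_time: str) -> datetime.datetime:
    # split on the first '.' to drop ".731371562 UTC", then parse the rest
    return datetime.datetime.strptime(str_time.split('.')[0],
                                      '%Y-%b-%d %H:%M:%S')


t = to_timestamp('2021-Oct-08 21:33:41.731371562 UTC')
print(t)  # 2021-10-08 21:33:41
```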
class Report:
def __init__(self, in_dir, out_dir):
self.in_dir = in_dir
self.out_dir = out_dir
self.combined_logs_file_name = f'{self.out_dir}/combined_logs.json'
self.grouped_by_txn_file_name = f'{self.out_dir}/grouped_by_txn.json'
self.counts_by_txn_and_server_file_name = f'{self.out_dir}/counts_by_txn_and_server.org'
self.data = None # combined logs, keyed by server id. mainchain servers
# have a key of `mainchain_#` and sidechain servers have a key of
# `sidechain_#`, where `#` is a number.
# grouped_by_txn maps txn hash -> server id -> list of log items.
self.grouped_by_txn = None
if not os.path.isdir(in_dir):
eprint(f'The input {self.in_dir} must be an existing directory')
sys.exit(1)
if os.path.exists(self.out_dir):
if not os.path.isdir(self.out_dir):
eprint(
f'The output: {self.out_dir} exists and is not a directory'
)
sys.exit(1)
else:
os.makedirs(self.out_dir)
self.combine_logs()
with open(self.combined_logs_file_name) as f:
self.data = json.load(f)
self.grouped_by_txn = group_by_txn(self.data)
# counts_by_txn_and_server is a dictionary where the key is the txn_hash
# and the value is a pandas df with a row for every server and a column for every message
# the value is a count of how many times that message appears for that server.
counts_by_txn_and_server = {}
# dict where the key is a transaction hash and the value is the transaction
hash_to_txn = {}
# dict where the key is a transaction hash and the value is earliest timestamp in a log file
hash_to_timestamp = {}
for txn_hash, server_dict in self.grouped_by_txn.items():
message_set = set()
# message list is ordered by when it appears in the log
message_list = []
for server_id, messages in server_dict.items():
for m in messages:
try:
d = m['data']
if 'msg' in d and 'transaction' in d['msg']:
t = d['msg']['transaction']
elif 'tx_json' in d:
t = d['tx_json']
if t['hash'] == txn_hash:
hash_to_txn[txn_hash] = t
except:
pass
msg = m['msg']
t = _to_timestamp(m['t'])
if txn_hash not in hash_to_timestamp:
hash_to_timestamp[txn_hash] = t
elif hash_to_timestamp[txn_hash] > t:
hash_to_timestamp[txn_hash] = t
if msg not in message_set:
message_set.add(msg)
message_list.append(msg)
df = pd.DataFrame(0,
index=server_dict.keys(),
columns=message_list)
for server_id, messages in server_dict.items():
for m in messages:
df[m['msg']][server_id] += 1
counts_by_txn_and_server[txn_hash] = df
# sort the transactions by timestamp, but the txns with unknown timestamp at the beginning
self.grouped_by_txn = _rekey_dict_by_txn_date(hash_to_timestamp,
self.grouped_by_txn)
counts_by_txn_and_server = _rekey_dict_by_txn_date(
hash_to_timestamp, counts_by_txn_and_server)
with open(self.grouped_by_txn_file_name, 'w') as out:
print(json.dumps(self.grouped_by_txn, indent=1), file=out)
with open(self.counts_by_txn_and_server_file_name, 'w') as out:
for txn_hash, df in counts_by_txn_and_server.items():
print(f'\n\n* Txn: {txn_hash}', file=out)
if txn_hash in hash_to_txn:
print(json.dumps(hash_to_txn[txn_hash], indent=1),
file=out)
rename_dict = {}
for column, renamed_column in zip(df.columns.array,
string.ascii_uppercase):
print(f'{renamed_column} = {column}', file=out)
rename_dict[column] = renamed_column
df.rename(columns=rename_dict, inplace=True)
print(f'\n{df}', file=out)
def combine_logs(self):
try:
with open(self.combined_logs_file_name, "w") as out:
log_analyzer.convert_all(self.in_dir, out, pure_json=True)
except Exception as e:
eprint(f'Exception: {e}')
raise e
def main(input_dir_name: str, output_dir_name: str):
r = Report(input_dir_name, output_dir_name)
# Values are a list of log lines formatted as json. There are five fields:
# `t` is the timestamp.
# `m` is the module.
# `l` is the log level.
# `msg` is the message.
# `data` is the data.
# For example:
#
# {
# "t": "2021-Oct-08 21:33:41.731371562 UTC",
# "m": "SidechainFederator",
# "l": "TRC",
# "msg": "no last xchain txn with result",
# "data": {
# "needsOtherChainLastXChainTxn": true,
# "isMainchain": false,
# "jlogId": 121
# }
# },
# Lifecycle of a transaction
# For each federator record:
# Transaction detected: amount, seq, destination, chain, hash
# Signature received: hash, seq
# Signature sent: hash, seq, federator dst
# Transaction submitted
# Result received, and detect if error
# Detect any field that doesn't match
# Lifecycle of initialization
# Chain listener messages
def parse_args():
parser = argparse.ArgumentParser(description=(
'python script to generate a log report from a sidechain config directory structure containing the logs'
))
parser.add_argument(
'--input',
'-i',
help=('directory with sidechain config directory structure'),
)
parser.add_argument(
'--output',
'-o',
help=('output directory for report files'),
)
return parser.parse_known_args()[0]
if __name__ == '__main__':
try:
args = parse_args()
main(args.input, args.output)
except Exception as e:
eprint(f'Exception: {e}')
raise e


@@ -0,0 +1,14 @@
attrs==21.2.0
iniconfig==1.1.1
numpy==1.21.2
packaging==21.0
pandas==1.3.3
pluggy==1.0.0
py==1.10.0
pyparsing==2.4.7
pytest==6.2.5
python-dateutil==2.8.2
pytz==2021.1
six==1.16.0
toml==0.10.2
websockets==8.1


@@ -0,0 +1,35 @@
#!/usr/bin/env python3
'''
Script to run an interactive shell to test sidechains.
'''
import sys
from common import disable_eprint, eprint
import interactive
import sidechain
def main():
params = sidechain.Params()
params.interactive = True
interactive.set_hooks_dir(params.hooks_dir)
if err_str := params.check_error():
eprint(err_str)
sys.exit(1)
if params.verbose:
print("eprint enabled")
else:
disable_eprint()
if params.standalone:
sidechain.standalone_interactive_repl(params)
else:
sidechain.multinode_interactive_repl(params)
if __name__ == '__main__':
main()


@@ -0,0 +1,193 @@
import asyncio
import datetime
import json
import os
from os.path import expanduser
import subprocess
import sys
from typing import Callable, List, Optional, Union
import time
import websockets
from command import Command, LogLevel, ServerInfo, Stop, SubscriptionCommand
from common import eprint
from config_file import ConfigFile
class RippleClient:
'''Client to send commands to the rippled server'''
def __init__(self,
*,
config: ConfigFile,
exe: str,
command_log: Optional[str] = None):
self.config = config
self.exe = exe
self.command_log = command_log
section = config.port_ws_admin_local
self.websocket_uri = f'{section.protocol}://{section.ip}:{section.port}'
self.subscription_websockets = []
self.tasks = []
self.pid = None
if command_log:
with open(self.command_log, 'w') as f:
f.write(f'# Start \n')
@property
def config_file_name(self):
return self.config.get_file_name()
def shutdown(self):
try:
group = asyncio.gather(*self.tasks, return_exceptions=True)
group.cancel()
asyncio.get_event_loop().run_until_complete(group)
for ws in self.subscription_websockets:
asyncio.get_event_loop().run_until_complete(ws.close())
except asyncio.CancelledError:
pass
def set_pid(self, pid: int):
self.pid = pid
def get_pid(self) -> Optional[int]:
return self.pid
def get_config(self) -> ConfigFile:
return self.config
# Get a dict of the server_state, validated_ledger_seq, and complete_ledgers
def get_brief_server_info(self) -> dict:
ret = {
'server_state': 'NA',
'ledger_seq': 'NA',
'complete_ledgers': 'NA'
}
if not self.pid or self.pid == -1:
return ret
r = self.send_command(ServerInfo())
if 'info' not in r:
return ret
r = r['info']
for f in ['server_state', 'complete_ledgers']:
if f in r:
ret[f] = r[f]
if 'validated_ledger' in r:
ret['ledger_seq'] = r['validated_ledger']['seq']
return ret
def _write_command_log_command(self, cmd: str, cmd_index: int) -> None:
if not self.command_log:
return
with open(self.command_log, 'a') as f:
f.write(f'\n\n# command {cmd_index}\n')
f.write(f'{cmd}')
def _write_command_log_result(self, result: str, cmd_index: int) -> None:
if not self.command_log:
return
with open(self.command_log, 'a') as f:
f.write(f'\n\n# result {cmd_index}\n')
f.write(f'{result}')
def _send_command_line_command(self, cmd_id: int, *args) -> dict:
'''Send the command to the rippled server using the command line interface'''
to_run = [self.exe, '-q', '--conf', self.config_file_name, '--']
to_run.extend(args)
self._write_command_log_command(to_run, cmd_id)
max_retries = 4
for retry_count in range(0, max_retries + 1):
try:
r = subprocess.check_output(to_run)
self._write_command_log_result(r, cmd_id)
return json.loads(r.decode('utf-8'))['result']
except Exception as e:
if retry_count == max_retries:
raise
eprint(
f'Got exception: {str(e)}\nretrying..{retry_count+1} of {max_retries}'
)
time.sleep(1) # give process time to startup
async def _send_websock_command(
self,
cmd: Command,
conn: Optional[websockets.client.Connect] = None) -> dict:
assert self.websocket_uri
if conn is None:
async with websockets.connect(self.websocket_uri) as ws:
return await self._send_websock_command(cmd, ws)
to_send = json.dumps(cmd.get_websocket_dict())
self._write_command_log_command(to_send, cmd.cmd_id)
await conn.send(to_send)
r = await conn.recv()
self._write_command_log_result(r, cmd.cmd_id)
j = json.loads(r)
if 'result' not in j:
eprint(
f'Error sending websocket command: {json.dumps(cmd.get_websocket_dict(), indent=1)}'
)
eprint(f'Result: {json.dumps(j, indent=1)}')
raise ValueError('Error sending websocket command')
return j['result']
def send_command(self, cmd: Command) -> dict:
'''Send the command to the rippled server'''
if self.websocket_uri:
return asyncio.get_event_loop().run_until_complete(
self._send_websock_command(cmd))
return self._send_command_line_command(cmd.cmd_id,
*cmd.get_command_line_list())
# Need async version to close ledgers from async functions
async def async_send_command(self, cmd: Command) -> dict:
'''Send the command to the rippled server'''
if self.websocket_uri:
return await self._send_websock_command(cmd)
return self._send_command_line_command(cmd.cmd_id,
*cmd.get_command_line_list())
def send_subscribe_command(
self,
cmd: SubscriptionCommand,
callback: Optional[Callable[[dict], None]] = None) -> dict:
'''Send the command to the rippled server'''
assert self.websocket_uri
ws = cmd.websocket
if ws is None:
# subscribe
assert callback
ws = asyncio.get_event_loop().run_until_complete(
websockets.connect(self.websocket_uri))
self.subscription_websockets.append(ws)
result = asyncio.get_event_loop().run_until_complete(
self._send_websock_command(cmd, ws))
if cmd.websocket is not None:
# unsubscribed. close the websocket
self.subscription_websockets.remove(cmd.websocket)
asyncio.get_event_loop().run_until_complete(cmd.websocket.close())
cmd.websocket = None
else:
# setup a task to read the websocket
cmd.websocket = ws # must be set after the _send_websock_command or will unsubscribe
async def subscribe_callback(ws: websockets.client.Connect,
cb: Callable[[dict], None]):
while True:
r = await ws.recv()
d = json.loads(r)
cb(d)
task = asyncio.get_event_loop().create_task(
subscribe_callback(cmd.websocket, callback))
self.tasks.append(task)
return result
def stop(self):
'''Stop the server'''
return self.send_command(Stop())
def set_log_level(self, severity: str, *, partition: Optional[str] = None):
'''Set the server log level'''
return self.send_command(LogLevel(severity, partition=partition))

bin/sidechain/python/sidechain.py Executable file

@@ -0,0 +1,583 @@
#!/usr/bin/env python3
'''
Script to test and debug sidechains.
The mainchain exe location can be set through the command line or
the environment variable RIPPLED_MAINCHAIN_EXE
The sidechain exe location can be set through the command line or
the environment variable RIPPLED_SIDECHAIN_EXE
The configs_dir (generated with create_config_files.py) can be set through the command line
or the environment variable RIPPLED_SIDECHAIN_CFG_DIR
'''
import argparse
import json
from multiprocessing import Process, Value
import os
import sys
import time
from typing import Callable, Dict, List, Optional
from app import App, single_client_app, testnet_app, configs_for_testnet
from command import AccountInfo, AccountTx, LedgerAccept, LogLevel, Subscribe
from common import Account, Asset, eprint, disable_eprint, XRP
from config_file import ConfigFile
import interactive
from log_analyzer import convert_log
from test_utils import mc_wait_for_payment_detect, sc_wait_for_payment_detect, mc_connect_subscription, sc_connect_subscription
from transaction import AccountSet, Payment, SignerListSet, SetRegularKey, Ticket, Trust
def parse_args_helper(parser: argparse.ArgumentParser):
parser.add_argument(
'--debug_sidechain',
'-ds',
action='store_true',
help=('Mode to debug sidechain (prompt to run sidechain in gdb)'),
)
parser.add_argument(
'--debug_mainchain',
'-dm',
action='store_true',
help=('Mode to debug mainchain (prompt to run mainchain in gdb)'),
)
parser.add_argument(
'--exe_mainchain',
'-em',
help=('path to mainchain rippled executable'),
)
parser.add_argument(
'--exe_sidechain',
'-es',
help=('path to sidechain rippled executable'),
)
parser.add_argument(
'--cfgs_dir',
'-c',
help=
('path to configuration file dir (generated with create_config_files.py)'
),
)
parser.add_argument(
'--standalone',
'-a',
action='store_true',
help=('run standalone tests'),
)
parser.add_argument(
'--interactive',
'-i',
action='store_true',
help=('run interactive repl'),
)
parser.add_argument(
'--quiet',
'-q',
action='store_true',
help=('Disable printing errors (eprint disabled)'),
)
parser.add_argument(
'--verbose',
'-v',
action='store_true',
help=('Enable printing errors (eprint enabled)'),
)
# Pauses are used for attaching debuggers and looking at logs at known checkpoints
parser.add_argument(
'--with_pauses',
'-p',
action='store_true',
help=
('Add pauses at certain checkpoints in tests until "enter" key is hit'
),
)
parser.add_argument(
'--hooks_dir',
help=('path to hooks dir'),
)
def parse_args():
parser = argparse.ArgumentParser(description=('Test and debug sidechains'))
parse_args_helper(parser)
return parser.parse_known_args()[0]
class Params:
def __init__(self, *, configs_dir: Optional[str] = None):
args = parse_args()
self.debug_sidechain = False
if args.debug_sidechain:
self.debug_sidechain = args.debug_sidechain
self.debug_mainchain = False
if args.debug_mainchain:
self.debug_mainchain = args.debug_mainchain
# Undocumented feature: if the environment variable RIPPLED_SIDECHAIN_RR is set, it is
# assumed to point to the rr executable. Sidechain server 0 will then be run under rr.
self.sidechain_rr = None
if 'RIPPLED_SIDECHAIN_RR' in os.environ:
self.sidechain_rr = os.environ['RIPPLED_SIDECHAIN_RR']
self.standalone = args.standalone
self.with_pauses = args.with_pauses
self.interactive = args.interactive
self.quiet = args.quiet
self.verbose = args.verbose
self.mainchain_exe = None
if 'RIPPLED_MAINCHAIN_EXE' in os.environ:
self.mainchain_exe = os.environ['RIPPLED_MAINCHAIN_EXE']
if args.exe_mainchain:
self.mainchain_exe = args.exe_mainchain
self.sidechain_exe = None
if 'RIPPLED_SIDECHAIN_EXE' in os.environ:
self.sidechain_exe = os.environ['RIPPLED_SIDECHAIN_EXE']
if args.exe_sidechain:
self.sidechain_exe = args.exe_sidechain
self.configs_dir = None
if 'RIPPLED_SIDECHAIN_CFG_DIR' in os.environ:
self.configs_dir = os.environ['RIPPLED_SIDECHAIN_CFG_DIR']
if args.cfgs_dir:
self.configs_dir = args.cfgs_dir
if configs_dir is not None:
self.configs_dir = configs_dir
self.hooks_dir = None
if 'RIPPLED_SIDECHAIN_HOOKS_DIR' in os.environ:
self.hooks_dir = os.environ['RIPPLED_SIDECHAIN_HOOKS_DIR']
if args.hooks_dir:
self.hooks_dir = args.hooks_dir
if not self.configs_dir:
self.mainchain_config = None
self.sidechain_config = None
self.sidechain_bootstrap_config = None
self.genesis_account = None
self.mc_door_account = None
self.user_account = None
self.sc_door_account = None
self.federators = None
return
if self.standalone:
self.mainchain_config = ConfigFile(
file_name=f'{self.configs_dir}/main.no_shards.dog/rippled.cfg')
self.sidechain_config = ConfigFile(
file_name=
f'{self.configs_dir}/main.no_shards.dog.sidechain/rippled.cfg')
self.sidechain_bootstrap_config = ConfigFile(
file_name=
f'{self.configs_dir}/main.no_shards.dog.sidechain/sidechain_bootstrap.cfg'
)
else:
self.mainchain_config = ConfigFile(
file_name=
f'{self.configs_dir}/sidechain_testnet/main.no_shards.mainchain_0/rippled.cfg'
)
self.sidechain_config = ConfigFile(
file_name=
f'{self.configs_dir}/sidechain_testnet/sidechain_0/rippled.cfg'
)
self.sidechain_bootstrap_config = ConfigFile(
file_name=
f'{self.configs_dir}/sidechain_testnet/sidechain_0/sidechain_bootstrap.cfg'
)
self.genesis_account = Account(
account_id='rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh',
secret_key='masterpassphrase',
nickname='genesis')
self.mc_door_account = Account(
account_id=self.sidechain_config.sidechain.mainchain_account,
secret_key=self.sidechain_bootstrap_config.sidechain.
mainchain_secret,
nickname='door')
self.user_account = Account(
account_id='rJynXY96Vuq6B58pST9K5Ak5KgJ2JcRsQy',
secret_key='snVsJfrr2MbVpniNiUU6EDMGBbtzN',
nickname='alice')
self.sc_door_account = Account(
account_id='rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh',
secret_key='masterpassphrase',
nickname='door')
self.federators = [
l.split()[1].strip() for l in
self.sidechain_bootstrap_config.sidechain_federators.get_lines()
]
def check_error(self) -> Optional[str]:
'''
Check for errors. Return `None` if there are no errors;
otherwise return a string describing the error.
'''
if not self.mainchain_exe:
return 'Missing mainchain_exe location. Either set the env variable RIPPLED_MAINCHAIN_EXE or use the --exe_mainchain command line switch'
if not self.sidechain_exe:
return 'Missing sidechain_exe location. Either set the env variable RIPPLED_SIDECHAIN_EXE or use the --exe_sidechain command line switch'
if not self.configs_dir:
return 'Missing configs directory location. Either set the env variable RIPPLED_SIDECHAIN_CFG_DIR or use the --cfgs_dir command line switch'
if self.verbose and self.quiet:
return 'Cannot specify both verbose and quiet options at the same time'
mainDoorKeeper = 0
sideDoorKeeper = 1
updateSignerList = 2
def setup_mainchain(mc_app: App,
params: Params,
setup_user_accounts: bool = True):
mc_app.add_to_keymanager(params.mc_door_account)
if setup_user_accounts:
mc_app.add_to_keymanager(params.user_account)
mc_app(LogLevel('fatal'))
# Allow rippling through the genesis account
mc_app(AccountSet(account=params.genesis_account).set_default_ripple(True))
mc_app.maybe_ledger_accept()
# Create and fund the mc door account
mc_app(
Payment(account=params.genesis_account,
dst=params.mc_door_account,
amt=XRP(10_000)))
mc_app.maybe_ledger_accept()
# Create a trust line so USD/root-account IOUs can be sent cross-chain
mc_app(
Trust(account=params.mc_door_account,
limit_amt=Asset(value=1_000_000,
currency='USD',
issuer=params.genesis_account)))
# Set the chain's signer list and disable the master key.
# Quorum is 80% of the federator count, rounded up (ceiling division).
quorum = (4 * len(params.federators) + 4) // 5
mc_app(
SignerListSet(account=params.mc_door_account,
quorum=quorum,
keys=params.federators))
mc_app.maybe_ledger_accept()
r = mc_app(Ticket(account=params.mc_door_account, src_tag=mainDoorKeeper))
mc_app.maybe_ledger_accept()
mc_app(Ticket(account=params.mc_door_account, src_tag=sideDoorKeeper))
mc_app.maybe_ledger_accept()
mc_app(Ticket(account=params.mc_door_account, src_tag=updateSignerList))
mc_app.maybe_ledger_accept()
mc_app(AccountSet(account=params.mc_door_account).set_disable_master())
mc_app.maybe_ledger_accept()
if setup_user_accounts:
# Create and fund a regular user account
mc_app(
Payment(account=params.genesis_account,
dst=params.user_account,
amt=XRP(2_000)))
mc_app.maybe_ledger_accept()
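The signer-list quorum computed in `setup_mainchain` above is a ceiling division: 4/5 of the federator count, rounded up. A minimal sketch of that arithmetic (the helper name is illustrative, not part of these scripts):

```python
def federator_quorum(n_federators: int) -> int:
    # Ceiling of (4/5) * n without floating point:
    # (4n + 5 - 1) // 5 == ceil(4n / 5)
    return (4 * n_federators + 5 - 1) // 5

# e.g. 5 federators -> quorum 4; 3 federators -> quorum 3 (ceil(2.4) = 3)
```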
def setup_sidechain(sc_app: App,
params: Params,
setup_user_accounts: bool = True):
sc_app.add_to_keymanager(params.sc_door_account)
if setup_user_accounts:
sc_app.add_to_keymanager(params.user_account)
sc_app(LogLevel('fatal'))
sc_app(LogLevel('trace', partition='SidechainFederator'))
# Set the chain's signer list and disable the master key.
# Quorum is 80% of the federator count, rounded up (ceiling division).
quorum = (4 * len(params.federators) + 4) // 5
sc_app(
SignerListSet(account=params.genesis_account,
quorum=quorum,
keys=params.federators))
sc_app.maybe_ledger_accept()
sc_app(Ticket(account=params.genesis_account, src_tag=mainDoorKeeper))
sc_app.maybe_ledger_accept()
sc_app(Ticket(account=params.genesis_account, src_tag=sideDoorKeeper))
sc_app.maybe_ledger_accept()
sc_app(Ticket(account=params.genesis_account, src_tag=updateSignerList))
sc_app.maybe_ledger_accept()
sc_app(AccountSet(account=params.genesis_account).set_disable_master())
sc_app.maybe_ledger_accept()
def _xchain_transfer(from_chain: App, to_chain: App, src: Account,
dst: Account, amt: Asset, from_chain_door: Account,
to_chain_door: Account):
memos = [{'Memo': {'MemoData': dst.account_id_str_as_hex()}}]
from_chain(Payment(account=src, dst=from_chain_door, amt=amt, memos=memos))
from_chain.maybe_ledger_accept()
if to_chain.standalone:
# Nothing closes the to_chain ledger automatically in standalone mode, so
# give the federators a moment to submit the transaction, then close it here.
time.sleep(1)
to_chain.maybe_ledger_accept()
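`_xchain_transfer` carries the destination account cross-chain in a memo. Assuming `account_id_str_as_hex` simply hex-encodes the ASCII classic address (an assumption about that helper, not confirmed here), the encoding round-trips like this:

```python
def account_id_as_memo_hex(account_id: str) -> str:
    # Hex-encode the ASCII address; XRPL memo fields are hex strings.
    # (Illustrative stand-in for the Account.account_id_str_as_hex helper.)
    return account_id.encode('ascii').hex().upper()

addr = 'rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh'  # the well-known genesis account
memo_data = account_id_as_memo_hex(addr)
# Decoding the memo on the other chain recovers the address
assert bytes.fromhex(memo_data).decode('ascii') == addr
```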
def main_to_side_transfer(mc_app: App, sc_app: App, src: Account, dst: Account,
amt: Asset, params: Params):
_xchain_transfer(mc_app, sc_app, src, dst, amt, params.mc_door_account,
params.sc_door_account)
def side_to_main_transfer(mc_app: App, sc_app: App, src: Account, dst: Account,
amt: Asset, params: Params):
_xchain_transfer(sc_app, mc_app, src, dst, amt, params.sc_door_account,
params.mc_door_account)
def simple_test(mc_app: App, sc_app: App, params: Params):
try:
bob = sc_app.create_account('bob')
main_to_side_transfer(mc_app, sc_app, params.user_account, bob,
XRP(200), params)
main_to_side_transfer(mc_app, sc_app, params.user_account, bob,
XRP(60), params)
if params.with_pauses:
_convert_log_files_to_json(
mc_app.get_configs() + sc_app.get_configs(),
'checkpoint1.json')
input(
"Pausing to check for main -> side txns (press enter to continue)"
)
side_to_main_transfer(mc_app, sc_app, bob, params.user_account, XRP(9),
params)
side_to_main_transfer(mc_app, sc_app, bob, params.user_account,
XRP(11), params)
if params.with_pauses:
input(
"Pausing to check for side -> main txns (press enter to continue)"
)
finally:
_convert_log_files_to_json(mc_app.get_configs() + sc_app.get_configs(),
'final.json')
def _rm_debug_log(config: ConfigFile):
try:
debug_log = config.debug_logfile.get_line()
if debug_log:
print(f'removing debug file: {debug_log}', flush=True)
os.remove(debug_log)
except Exception:
pass
def _standalone_with_callback(params: Params,
callback: Callable[[App, App], None],
setup_user_accounts: bool = True):
if (params.debug_mainchain):
input("Start mainchain server and press enter to continue: ")
else:
_rm_debug_log(params.mainchain_config)
with single_client_app(config=params.mainchain_config,
exe=params.mainchain_exe,
standalone=True,
run_server=not params.debug_mainchain) as mc_app:
mc_connect_subscription(mc_app, params.mc_door_account)
setup_mainchain(mc_app, params, setup_user_accounts)
if (params.debug_sidechain):
input("Start sidechain server and press enter to continue: ")
else:
_rm_debug_log(params.sidechain_config)
with single_client_app(
config=params.sidechain_config,
exe=params.sidechain_exe,
standalone=True,
run_server=not params.debug_sidechain) as sc_app:
sc_connect_subscription(sc_app, params.sc_door_account)
setup_sidechain(sc_app, params, setup_user_accounts)
callback(mc_app, sc_app)
def _convert_log_files_to_json(to_convert: List[ConfigFile], suffix: str):
'''
Convert the log file to json
'''
for c in to_convert:
try:
debug_log = c.debug_logfile.get_line()
if not os.path.exists(debug_log):
continue
converted_log = f'{debug_log}.{suffix}'
if os.path.exists(converted_log):
os.remove(converted_log)
print(f'Converting log {debug_log} to {converted_log}', flush=True)
convert_log(debug_log, converted_log, pure_json=True)
except Exception:
eprint(f'Exception converting log {c.get_file_name()}')
def _multinode_with_callback(params: Params,
callback: Callable[[App, App], None],
setup_user_accounts: bool = True):
mainchain_cfg = ConfigFile(
file_name=
f'{params.configs_dir}/sidechain_testnet/main.no_shards.mainchain_0/rippled.cfg'
)
_rm_debug_log(mainchain_cfg)
if params.debug_mainchain:
input("Start mainchain server and press enter to continue: ")
with single_client_app(config=mainchain_cfg,
exe=params.mainchain_exe,
standalone=True,
run_server=not params.debug_mainchain) as mc_app:
if params.with_pauses:
input("Pausing after mainchain start (press enter to continue)")
mc_connect_subscription(mc_app, params.mc_door_account)
setup_mainchain(mc_app, params, setup_user_accounts)
if params.with_pauses:
input("Pausing after mainchain setup (press enter to continue)")
testnet_configs = configs_for_testnet(
f'{params.configs_dir}/sidechain_testnet/sidechain_')
for c in testnet_configs:
_rm_debug_log(c)
run_server_list = [True] * len(testnet_configs)
if params.debug_sidechain:
run_server_list[0] = False
input(
f'Start testnet server {testnet_configs[0].get_file_name()} and press enter to continue: '
)
with testnet_app(exe=params.sidechain_exe,
configs=testnet_configs,
run_server=run_server_list,
sidechain_rr=params.sidechain_rr) as n_app:
if params.with_pauses:
input("Pausing after testnet start (press enter to continue)")
sc_connect_subscription(n_app, params.sc_door_account)
setup_sidechain(n_app, params, setup_user_accounts)
if params.with_pauses:
input(
"Pausing after sidechain setup (press enter to continue)")
callback(mc_app, n_app)
def standalone_test(params: Params):
def callback(mc_app: App, sc_app: App):
simple_test(mc_app, sc_app, params)
_standalone_with_callback(params, callback)
def multinode_test(params: Params):
def callback(mc_app: App, sc_app: App):
simple_test(mc_app, sc_app, params)
_multinode_with_callback(params, callback)
# The mainchain runs in standalone mode. Most operations - like cross chain
# payments - will automatically close ledgers. However, some operations, like
# refunds, need an extra close. This loop automatically closes ledgers.
def close_mainchain_ledgers(stop_token: Value, params: Params, sleep_time=4):
with single_client_app(config=params.mainchain_config,
exe=params.mainchain_exe,
standalone=True,
run_server=False) as mc_app:
while stop_token.value != 0:
mc_app.maybe_ledger_accept()
time.sleep(sleep_time)
def standalone_interactive_repl(params: Params):
def callback(mc_app: App, sc_app: App):
# process will run while stop token is non-zero
stop_token = Value('i', 1)
p = None
if mc_app.standalone:
p = Process(target=close_mainchain_ledgers,
args=(stop_token, params))
p.start()
try:
interactive.repl(mc_app, sc_app)
finally:
if p:
stop_token.value = 0
p.join()
_standalone_with_callback(params, callback, setup_user_accounts=False)
def multinode_interactive_repl(params: Params):
def callback(mc_app: App, sc_app: App):
# process will run while stop token is non-zero
stop_token = Value('i', 1)
p = None
if mc_app.standalone:
p = Process(target=close_mainchain_ledgers,
args=(stop_token, params))
p.start()
try:
interactive.repl(mc_app, sc_app)
finally:
if p:
stop_token.value = 0
p.join()
_multinode_with_callback(params, callback, setup_user_accounts=False)
def main():
params = Params()
interactive.set_hooks_dir(params.hooks_dir)
if err_str := params.check_error():
eprint(err_str)
sys.exit(1)
if params.quiet:
print("Disabling eprint")
disable_eprint()
if params.interactive:
if params.standalone:
standalone_interactive_repl(params)
else:
multinode_interactive_repl(params)
elif params.standalone:
standalone_test(params)
else:
multinode_test(params)
if __name__ == '__main__':
main()


@@ -0,0 +1,176 @@
import asyncio
import collections
from contextlib import contextmanager
import json
import logging
import pprint
import time
from typing import Callable, Dict, List, Optional
from app import App, balances_dataframe
from common import Account, Asset, XRP, eprint
from command import Subscribe
MC_SUBSCRIBE_QUEUE = []
SC_SUBSCRIBE_QUEUE = []
def _mc_subscribe_callback(v: dict):
MC_SUBSCRIBE_QUEUE.append(v)
logging.info(f'mc subscribe_callback:\n{json.dumps(v, indent=1)}')
def _sc_subscribe_callback(v: dict):
SC_SUBSCRIBE_QUEUE.append(v)
logging.info(f'sc subscribe_callback:\n{json.dumps(v, indent=1)}')
def mc_connect_subscription(app: App, door_account: Account):
app(Subscribe(account_history_account=door_account),
_mc_subscribe_callback)
def sc_connect_subscription(app: App, door_account: Account):
app(Subscribe(account_history_account=door_account),
_sc_subscribe_callback)
# This pops elements off the subscribe_queue until the transaction is found.
# It modifies the queue in place.
async def async_wait_for_payment_detect(app: App, subscribe_queue: List[dict],
src: Account, dst: Account,
amt_asset: Asset):
logging.info(
f'Wait for payment {src.account_id = } {dst.account_id = } {amt_asset = }'
)
n_txns = 10 # keep this many txn in a circular buffer.
# If the payment is not detected, write them to the log.
last_n_paytxns = collections.deque(maxlen=n_txns)
for i in range(30):
while subscribe_queue:
d = subscribe_queue.pop(0)
if 'transaction' not in d:
continue
txn = d['transaction']
if txn['TransactionType'] != 'Payment':
continue
txn_asset = Asset(from_rpc_result=txn['Amount'])
if txn['Account'] == src.account_id and txn[
'Destination'] == dst.account_id and txn_asset == amt_asset:
if d['engine_result_code'] == 0:
logging.info(
f'Found payment {src.account_id = } {dst.account_id = } {amt_asset = }'
)
return
else:
logging.error(
f'Expected payment failed {src.account_id = } {dst.account_id = } {amt_asset = }'
)
raise ValueError(
f'Expected payment failed {src.account_id = } {dst.account_id = } {amt_asset = }'
)
else:
last_n_paytxns.append(txn)
if i > 0 and not (i % 5):
logging.warning(
f'Waiting for txn detect {src.account_id = } {dst.account_id = } {amt_asset = }'
)
# The side chain can send transactions to the main chain, but won't close the
# ledger. We don't know when the transaction will be sent, so we may need to
# close the ledger here.
await app.async_maybe_ledger_accept()
await asyncio.sleep(2)
logging.warning(
f'Last {len(last_n_paytxns)} pay txns while waiting for payment detect'
)
for t in last_n_paytxns:
logging.warning(
f'Detected pay transaction while waiting for payment: {t}')
logging.error(
f'Expected txn detect {src.account_id = } {dst.account_id = } {amt_asset = }'
)
raise ValueError(
f'Expected txn detect {src.account_id = } {dst.account_id = } {amt_asset = }'
)
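The "keep the last N transactions" buffer above relies on `collections.deque(maxlen=n)`, which silently discards the oldest entry once full. A sketch of that behavior:

```python
import collections

last_n = collections.deque(maxlen=3)
for txn_seq in range(10):
    last_n.append(txn_seq)  # once full, the oldest element is dropped

# Only the three most recent items remain
assert list(last_n) == [7, 8, 9]
```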
def mc_wait_for_payment_detect(app: App, src: Account, dst: Account,
amt_asset: Asset):
logging.info('mainchain waiting for payment detect')
return asyncio.get_event_loop().run_until_complete(
async_wait_for_payment_detect(app, MC_SUBSCRIBE_QUEUE, src, dst,
amt_asset))
def sc_wait_for_payment_detect(app: App, src: Account, dst: Account,
amt_asset: Asset):
logging.info('sidechain waiting for payment detect')
return asyncio.get_event_loop().run_until_complete(
async_wait_for_payment_detect(app, SC_SUBSCRIBE_QUEUE, src, dst,
amt_asset))
def wait_for_balance_change(app: App,
acc: Account,
pre_balance: Asset,
expected_diff: Optional[Asset] = None):
logging.info(
f'waiting for balance change {acc.account_id = } {pre_balance = } {expected_diff = }'
)
for i in range(30):
new_bal = app.get_balance(acc, pre_balance(0))
diff = new_bal - pre_balance
if new_bal != pre_balance:
logging.info(
f'Balance changed {acc.account_id = } {pre_balance = } {new_bal = } {diff = } {expected_diff = }'
)
if expected_diff is None or diff == expected_diff:
return
app.maybe_ledger_accept()
time.sleep(2)
if i > 0 and not (i % 5):
logging.warning(
f'Waiting for balance to change {acc.account_id = } {pre_balance = }'
)
logging.error(
f'Expected balance to change {acc.account_id = } {pre_balance = } {new_bal = } {diff = } {expected_diff = }'
)
raise ValueError(
f'Expected balance to change {acc.account_id = } {pre_balance = } {new_bal = } {diff = } {expected_diff = }'
)
def log_chain_state(mc_app, sc_app, log, msg='Chain State'):
chains = [mc_app, sc_app]
chain_names = ['mainchain', 'sidechain']
balances = balances_dataframe(chains, chain_names)
df_as_str = balances.to_string(float_format=lambda x: f'{x:,.6f}')
log(f'{msg} Balances: \n{df_as_str}')
federator_info = sc_app.federator_info()
log(f'{msg} Federator Info: \n{pprint.pformat(federator_info)}')
# Tests can set this to True to help debug test failures by showing account
# balances in the log before the test runs
test_context_verbose_logging = False
@contextmanager
def test_context(mc_app, sc_app, verbose_logging: Optional[bool] = None):
'''Write extra context info to the log on test failure'''
global test_context_verbose_logging
if verbose_logging is None:
verbose_logging = test_context_verbose_logging
start_time = time.monotonic()
try:
if verbose_logging:
log_chain_state(mc_app, sc_app, logging.info)
yield
except:
log_chain_state(mc_app, sc_app, logging.error)
raise
finally:
elapsed_time = time.monotonic() - start_time
logging.info(f'Test elapsed time: {elapsed_time}')
if verbose_logging:
log_chain_state(mc_app, sc_app, logging.info)


@@ -0,0 +1,216 @@
'''
Bring up a rippled testnetwork from a set of config files with fixed ips.
'''
from contextlib import contextmanager
import glob
import os
import subprocess
import time
from typing import Callable, List, Optional, Set, Union
from command import ServerInfo
from config_file import ConfigFile
from ripple_client import RippleClient
class Network:
# If run_server is None, run all the servers.
# Allowing some servers to be skipped is useful for debugging.
def __init__(
self,
exe: str,
configs: List[ConfigFile],
*,
command_logs: Optional[List[str]] = None,
run_server: Optional[List[bool]] = None,
# undocumented feature. If with_rr is not None, assume it points to the rr debugger executable
# and run server 0 under rr
with_rr: Optional[str] = None,
extra_args: Optional[List[List[str]]] = None):
self.with_rr = with_rr
if not configs:
raise ValueError('Must specify at least one config')
if run_server and len(run_server) != len(configs):
raise ValueError(
f'run_server length must match number of configs (or be None): {len(configs) = } {len(run_server) = }'
)
self.configs = configs
self.clients = []
self.running_server_indexes = set()
self.processes = {}
if not run_server:
run_server = []
run_server += [True] * (len(configs) - len(run_server))
self.run_server = run_server
if not command_logs:
command_logs = []
command_logs += [None] * (len(configs) - len(command_logs))
self.command_logs = command_logs
# remove the old database directories.
# we want tests to start from the same empty state every time
for config in self.configs:
db_path = config.database_path.get_line()
if db_path and os.path.isdir(db_path):
files = glob.glob(f'{db_path}/**', recursive=True)
for f in files:
if os.path.isdir(f):
continue
os.unlink(f)
for config, log in zip(self.configs, self.command_logs):
client = RippleClient(config=config, command_log=log, exe=exe)
self.clients.append(client)
self.servers_start(extra_args=extra_args)
def shutdown(self):
for a in self.clients:
a.shutdown()
self.servers_stop()
def num_clients(self) -> int:
return len(self.clients)
def get_client(self, i: int) -> RippleClient:
return self.clients[i]
def get_configs(self) -> List[ConfigFile]:
return [c.config for c in self.clients]
def get_pids(self) -> List[int]:
return [c.get_pid() for c in self.clients if c.get_pid() is not None]
# Get a dict of the server_state, validated_ledger_seq, and complete_ledgers
def get_brief_server_info(self) -> dict:
ret = {'server_state': [], 'ledger_seq': [], 'complete_ledgers': []}
for c in self.clients:
r = c.get_brief_server_info()
for (k, v) in r.items():
ret[k].append(v)
return ret
# Returns a list of bools, one per server: True if that server is running,
# False if not. Note, this relies on servers being shut down through the
# `servers_stop` interface. If a server crashes, or is started or stopped
# through other means, an incorrect status may be reported.
def get_running_status(self) -> List[bool]:
return [
i in self.running_server_indexes for i in range(len(self.clients))
]
def is_running(self, index: int) -> bool:
return index in self.running_server_indexes
def wait_for_validated_ledger(self, server_index: Optional[int] = None):
'''
Don't return until the network has at least one validated ledger
'''
if server_index is None:
for i in range(len(self.configs)):
self.wait_for_validated_ledger(i)
return
client = self.clients[server_index]
for i in range(600):
r = client.send_command(ServerInfo())
state = None
if 'info' in r:
state = r['info']['server_state']
if state == 'proposing':
print(f'Synced: {server_index} : {state}', flush=True)
break
if not i % 10:
print(f'Waiting for sync: {server_index} : {state}',
flush=True)
time.sleep(1)
for i in range(600):
r = client.send_command(ServerInfo())
complete_ledgers = None
if 'info' in r:
complete_ledgers = r['info']['complete_ledgers']
if complete_ledgers and complete_ledgers != 'empty':
print(f'Have complete ledgers: {server_index} : {complete_ledgers}',
flush=True)
return
if not i % 10:
print(
f'Waiting for complete_ledgers: {server_index} : {complete_ledgers}',
flush=True)
time.sleep(1)
raise ValueError(f'Could not sync server {client.config_file_name}')
def servers_start(self,
server_indexes: Optional[Union[Set[int],
List[int]]] = None,
*,
extra_args: Optional[List[List[str]]] = None):
if server_indexes is None:
server_indexes = [i for i in range(len(self.clients))]
if extra_args is None:
extra_args = []
extra_args += [list()] * (len(self.configs) - len(extra_args))
for i in server_indexes:
if i in self.running_server_indexes or not self.run_server[i]:
continue
client = self.clients[i]
to_run = [client.exe, '--conf', client.config_file_name]
if self.with_rr and i == 0:
to_run = [self.with_rr, 'record'] + to_run
print(f'Starting server with rr {client.config_file_name}')
else:
print(f'Starting server {client.config_file_name}')
p = subprocess.Popen(to_run + extra_args[i],
stdout=subprocess.DEVNULL,
stderr=subprocess.STDOUT)
client.set_pid(p.pid)
print(
f'started rippled: config: {client.config_file_name} PID: {p.pid}',
flush=True)
self.running_server_indexes.add(i)
self.processes[i] = p
time.sleep(2) # give servers time to start
def servers_stop(self,
server_indexes: Optional[Union[Set[int],
List[int]]] = None):
if server_indexes is None:
server_indexes = self.running_server_indexes.copy()
if 0 in server_indexes:
print(
f'WARNING: Server 0 is being stopped. RPC commands cannot be sent until this is restarted.'
)
for i in server_indexes:
if i not in self.running_server_indexes:
continue
client = self.clients[i]
to_run = [client.exe, '--conf', client.config_file_name]
subprocess.Popen(to_run + ['stop'],
stdout=subprocess.DEVNULL,
stderr=subprocess.STDOUT)
self.running_server_indexes.discard(i)
for i in server_indexes:
self.processes[i].wait()
del self.processes[i]
self.get_client(i).set_pid(-1)
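`Network.__init__` pads the optional `run_server` and `command_logs` lists out to one entry per config. That idiom, factored into a hypothetical helper for illustration:

```python
from typing import List, Optional, TypeVar

T = TypeVar('T')

def pad_to(values: Optional[List[T]], n: int, fill: T) -> List[T]:
    # Copy the list (or start empty) and pad with `fill` up to length n,
    # mirroring: run_server += [True] * (len(configs) - len(run_server))
    values = list(values or [])
    return values + [fill] * (n - len(values))
```

For example, `pad_to(None, 3, True)` yields `[True, True, True]` and `pad_to([False], 3, True)` yields `[False, True, True]`, matching how a partial `run_server` list defaults the remaining servers to "run".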


@@ -0,0 +1,64 @@
# Add parent directory to module path
import os, sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from common import Account, Asset, XRP
import create_config_files
import sidechain
import pytest
'''
The sidechain scripts use argparse.ArgumentParser to add command line options;
the call to add an argument is `add_argument`. pytest uses `addoption` instead.
This wrapper class translates calls from `add_argument` into calls to
`addoption`. To avoid conflicts between pytest and the sidechain scripts, all
sidechain arguments have the suffix `_sc` appended to them, i.e. `--verbose` is
for pytest and `--verbose_sc` is for sidechains.
'''
class ArgumentParserWrapper:
def __init__(self, wrapped):
self.wrapped = wrapped
def add_argument(self, *args, **kwargs):
for a in args:
if not a.startswith('--'):
continue
a = a + '_sc'
self.wrapped.addoption(a, **kwargs)
def pytest_addoption(parser):
wrapped = ArgumentParserWrapper(parser)
sidechain.parse_args_helper(wrapped)
def _xchain_assets(ratio: int = 1):
assets = {}
assets['xrp_xrp_sidechain_asset'] = create_config_files.XChainAsset(
XRP(0), XRP(0), 1, 1 * ratio, 200, 200 * ratio)
root_account = Account(account_id="rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh")
main_iou_asset = Asset(value=0, currency='USD', issuer=root_account)
side_iou_asset = Asset(value=0, currency='USD', issuer=root_account)
assets['iou_iou_sidechain_asset'] = create_config_files.XChainAsset(
main_iou_asset, side_iou_asset, 1, 1 * ratio, 0.02, 0.02 * ratio)
return assets
# Dictionary of config dirs, keyed by ratio
_config_dirs = None
@pytest.fixture
def configs_dirs_dict(tmp_path):
global _config_dirs
if not _config_dirs:
params = create_config_files.Params()
_config_dirs = {}
for ratio in (1, 2):
params.configs_dir = str(tmp_path / f'test_config_files_{ratio}')
create_config_files.main(params, _xchain_assets(ratio))
_config_dirs[ratio] = params.configs_dir
return _config_dirs


@@ -0,0 +1,117 @@
from typing import Dict
from app import App
from common import XRP
from sidechain import Params
import sidechain
import test_utils
import time
from transaction import Payment
import tst_common
batch_test_num_accounts = 200
def door_test(mc_app: App, sc_app: App, params: Params):
# Setup: create accounts on both chains
for i in range(batch_test_num_accounts):
name = "m_" + str(i)
account_main = mc_app.create_account(name)
name = "s_" + str(i)
account_side = sc_app.create_account(name)
mc_app(
Payment(account=params.genesis_account,
dst=account_main,
amt=XRP(20_000)))
mc_app.maybe_ledger_accept()
account_main_last = mc_app.account_from_alias("m_" +
str(batch_test_num_accounts -
1))
test_utils.wait_for_balance_change(mc_app, account_main_last, XRP(0),
XRP(20_000))
# test
to_side_xrp = XRP(1000)
to_main_xrp = XRP(100)
last_tx_xrp = XRP(343)
with test_utils.test_context(mc_app, sc_app, True):
# send xchain payment to open accounts on sidechain
for i in range(batch_test_num_accounts):
name_main = "m_" + str(i)
account_main = mc_app.account_from_alias(name_main)
name_side = "s_" + str(i)
account_side = sc_app.account_from_alias(name_side)
memos = [{
'Memo': {
'MemoData': account_side.account_id_str_as_hex()
}
}]
mc_app(
Payment(account=account_main,
dst=params.mc_door_account,
amt=to_side_xrp,
memos=memos))
while True:
federator_info = sc_app.federator_info()
should_loop = False
for v in federator_info.values():
for c in ['mainchain', 'sidechain']:
state = v['info'][c]['listener_info']['state']
if state != 'normal':
should_loop = True
if not should_loop:
break
time.sleep(1)
# wait some time for the door to change
door_closing = False
door_reopened = False
for i in range(batch_test_num_accounts * 2 + 40):
server_index = [0]
federator_info = sc_app.federator_info(server_index)
for v in federator_info.values():
door_status = v['info']['mainchain']['door_status']['status']
if not door_closing:
if door_status != 'open':
door_closing = True
else:
if door_status == 'open':
door_reopened = True
if not door_reopened:
time.sleep(1)
mc_app.maybe_ledger_accept()
else:
break
if not door_reopened:
raise ValueError('Expected door status changes did not happen')
# wait for accounts created on sidechain
for i in range(batch_test_num_accounts):
name_side = "s_" + str(i)
account_side = sc_app.account_from_alias(name_side)
test_utils.wait_for_balance_change(sc_app, account_side, XRP(0),
to_side_xrp)
# Try one xchain payment in each direction
name_main = "m_" + str(0)
account_main = mc_app.account_from_alias(name_main)
name_side = "s_" + str(0)
account_side = sc_app.account_from_alias(name_side)
pre_bal = mc_app.get_balance(account_main, XRP(0))
sidechain.side_to_main_transfer(mc_app, sc_app, account_side,
account_main, to_main_xrp, params)
test_utils.wait_for_balance_change(mc_app, account_main, pre_bal,
to_main_xrp)
pre_bal = sc_app.get_balance(account_side, XRP(0))
sidechain.main_to_side_transfer(mc_app, sc_app, account_main,
account_side, last_tx_xrp, params)
test_utils.wait_for_balance_change(sc_app, account_side, pre_bal,
last_tx_xrp)
def test_door_operations(configs_dirs_dict: Dict[int, str]):
tst_common.test_start(configs_dirs_dict, door_test)


@@ -0,0 +1,151 @@
import logging
import pprint
import pytest
from multiprocessing import Process, Value
from typing import Dict
import sys
from app import App
from common import Asset, eprint, disable_eprint, drops, XRP
import interactive
from sidechain import Params
import sidechain
import test_utils
import time
from transaction import Payment, Trust
import tst_common
def simple_xrp_test(mc_app: App, sc_app: App, params: Params):
alice = mc_app.account_from_alias('alice')
adam = sc_app.account_from_alias('adam')
mc_door = mc_app.account_from_alias('door')
sc_door = sc_app.account_from_alias('door')
# main to side
# First txn funds the side chain account
with test_utils.test_context(mc_app, sc_app):
to_send_asset = XRP(9999)
mc_pre_bal = mc_app.get_balance(mc_door, to_send_asset)
sc_pre_bal = sc_app.get_balance(adam, to_send_asset)
sidechain.main_to_side_transfer(mc_app, sc_app, alice, adam,
to_send_asset, params)
test_utils.wait_for_balance_change(mc_app, mc_door, mc_pre_bal,
to_send_asset)
test_utils.wait_for_balance_change(sc_app, adam, sc_pre_bal,
to_send_asset)
for i in range(2):
# even amounts for main to side
for value in range(20, 30, 2):
with test_utils.test_context(mc_app, sc_app):
to_send_asset = drops(value)
mc_pre_bal = mc_app.get_balance(mc_door, to_send_asset)
sc_pre_bal = sc_app.get_balance(adam, to_send_asset)
sidechain.main_to_side_transfer(mc_app, sc_app, alice, adam,
to_send_asset, params)
test_utils.wait_for_balance_change(mc_app, mc_door, mc_pre_bal,
to_send_asset)
test_utils.wait_for_balance_change(sc_app, adam, sc_pre_bal,
to_send_asset)
# side to main
# odd amounts for side to main
for value in range(19, 29, 2):
with test_utils.test_context(mc_app, sc_app):
to_send_asset = drops(value)
pre_bal = mc_app.get_balance(alice, to_send_asset)
sidechain.side_to_main_transfer(mc_app, sc_app, adam, alice,
to_send_asset, params)
test_utils.wait_for_balance_change(mc_app, alice, pre_bal,
to_send_asset)
def simple_iou_test(mc_app: App, sc_app: App, params: Params):
alice = mc_app.account_from_alias('alice')
adam = sc_app.account_from_alias('adam')
mc_asset = Asset(value=0,
currency='USD',
issuer=mc_app.account_from_alias('root'))
sc_asset = Asset(value=0,
currency='USD',
issuer=sc_app.account_from_alias('door'))
mc_app.add_asset_alias(mc_asset, 'mcd') # main chain dollar
sc_app.add_asset_alias(sc_asset, 'scd') # side chain dollar
mc_app(Trust(account=alice, limit_amt=mc_asset(1_000_000)))
# Make sure the adam account on the side chain exists and set the trust line
with test_utils.test_context(mc_app, sc_app):
sidechain.main_to_side_transfer(mc_app, sc_app, alice, adam, XRP(300),
params)
# create a trust line to alice and pay her USD/root
mc_app(Trust(account=alice, limit_amt=mc_asset(1_000_000)))
mc_app.maybe_ledger_accept()
mc_app(
Payment(account=mc_app.account_from_alias('root'),
dst=alice,
amt=mc_asset(10_000)))
mc_app.maybe_ledger_accept()
# create a trust line for adam
sc_app(Trust(account=adam, limit_amt=sc_asset(1_000_000)))
for i in range(2):
# even amounts for main to side
for value in range(10, 20, 2):
with test_utils.test_context(mc_app, sc_app):
to_send_asset = mc_asset(value)
rcv_asset = sc_asset(value)
pre_bal = sc_app.get_balance(adam, rcv_asset)
sidechain.main_to_side_transfer(mc_app, sc_app, alice, adam,
to_send_asset, params)
test_utils.wait_for_balance_change(sc_app, adam, pre_bal,
rcv_asset)
# side to main
# odd amounts for side to main
for value in range(9, 19, 2):
with test_utils.test_context(mc_app, sc_app):
to_send_asset = sc_asset(value)
rcv_asset = mc_asset(value)
pre_bal = mc_app.get_balance(alice, to_send_asset)
sidechain.side_to_main_transfer(mc_app, sc_app, adam, alice,
to_send_asset, params)
test_utils.wait_for_balance_change(mc_app, alice, pre_bal,
rcv_asset)
def setup_accounts(mc_app: App, sc_app: App, params: Params):
# Set up the address book. Typical female names are accounts on the mainchain;
# only the first account (alice) is funded.
alice = mc_app.create_account('alice')
beth = mc_app.create_account('beth')
carol = mc_app.create_account('carol')
deb = mc_app.create_account('deb')
ella = mc_app.create_account('ella')
mc_app(Payment(account=params.genesis_account, dst=alice, amt=XRP(20_000)))
mc_app.maybe_ledger_accept()
# Typical male names are addresses on the sidechain.
# All accounts are initially unfunded
adam = sc_app.create_account('adam')
bob = sc_app.create_account('bob')
charlie = sc_app.create_account('charlie')
dan = sc_app.create_account('dan')
ed = sc_app.create_account('ed')
def run_all(mc_app: App, sc_app: App, params: Params):
setup_accounts(mc_app, sc_app, params)
logging.info(f'mainchain:\n{mc_app.key_manager.to_string()}')
logging.info(f'sidechain:\n{sc_app.key_manager.to_string()}')
simple_xrp_test(mc_app, sc_app, params)
simple_iou_test(mc_app, sc_app, params)
def test_simple_xchain(configs_dirs_dict: Dict[int, str]):
tst_common.test_start(configs_dirs_dict, run_all)

View File

@@ -0,0 +1,74 @@
import logging
import pprint
import pytest
from multiprocessing import Process, Value
from typing import Callable, Dict
import sys
from app import App
from common import eprint, disable_eprint, XRP
from sidechain import Params
import sidechain
import test_utils
import time


def run(mc_app: App, sc_app: App, params: Params,
        test_case: Callable[[App, App, Params], None]):
    # the ledger-close process runs while the stop token is non-zero
    stop_token = Value('i', 1)
    p = None
    if mc_app.standalone:
        p = Process(target=sidechain.close_mainchain_ledgers,
                    args=(stop_token, params))
        p.start()
    try:
        test_case(mc_app, sc_app, params)
    finally:
        if p:
            stop_token.value = 0
            p.join()
        sidechain._convert_log_files_to_json(
            mc_app.get_configs() + sc_app.get_configs(), 'final.json')


def standalone_test(params: Params, test_case: Callable[[App, App, Params],
                                                        None]):
    def callback(mc_app: App, sc_app: App):
        run(mc_app, sc_app, params, test_case)

    sidechain._standalone_with_callback(params,
                                        callback,
                                        setup_user_accounts=False)


def multinode_test(params: Params, test_case: Callable[[App, App, Params],
                                                       None]):
    def callback(mc_app: App, sc_app: App):
        run(mc_app, sc_app, params, test_case)

    sidechain._multinode_with_callback(params,
                                       callback,
                                       setup_user_accounts=False)


def test_start(configs_dirs_dict: Dict[int, str],
               test_case: Callable[[App, App, Params], None]):
    params = sidechain.Params(configs_dir=configs_dirs_dict[1])

    if err_str := params.check_error():
        eprint(err_str)
        sys.exit(1)

    if params.verbose:
        print("eprint enabled")
    else:
        disable_eprint()

    # Set to True to help debug tests
    test_utils.test_context_verbose_logging = True

    if params.standalone:
        standalone_test(params, test_case)
    else:
        multinode_test(params, test_case)

View File

@@ -0,0 +1,366 @@
import datetime
import json
from typing import Dict, List, Optional, Union
from command import Command
from common import Account, Asset, Path, PathList, to_rippled_epoch


class Transaction(Command):
    '''Interface for all transactions'''
    def __init__(
        self,
        *,
        account: Account,
        flags: Optional[int] = None,
        fee: Optional[Union[Asset, int]] = None,
        sequence: Optional[int] = None,
        account_txn_id: Optional[str] = None,
        last_ledger_sequence: Optional[int] = None,
        src_tag: Optional[int] = None,
        memos: Optional[List[Dict[str, dict]]] = None,
    ):
        super().__init__()
        self.account = account
        # set even if None
        self.flags = flags
        self.fee = fee
        self.sequence = sequence
        self.account_txn_id = account_txn_id
        self.last_ledger_sequence = last_ledger_sequence
        self.src_tag = src_tag
        self.memos = memos

    def cmd_name(self) -> str:
        return 'submit'

    def set_seq_and_fee(self, seq: int, fee: Union[Asset, int]):
        self.sequence = seq
        self.fee = fee

    def to_cmd_obj(self) -> dict:
        txn = {
            'Account': self.account.account_id,
        }
        if self.flags is not None:
            txn['Flags'] = self.flags
        if self.fee is not None:
            if isinstance(self.fee, int):
                txn['Fee'] = f'{self.fee}'  # must be a string
            else:
                txn['Fee'] = self.fee.to_cmd_obj()
        if self.sequence is not None:
            txn['Sequence'] = self.sequence
        if self.account_txn_id is not None:
            txn['AccountTxnID'] = self.account_txn_id
        if self.last_ledger_sequence is not None:
            txn['LastLedgerSequence'] = self.last_ledger_sequence
        if self.src_tag is not None:
            txn['SourceTag'] = self.src_tag
        if self.memos is not None:
            txn['Memos'] = self.memos
        return txn


class Payment(Transaction):
    '''A payment transaction'''
    def __init__(self,
                 *,
                 dst: Account,
                 amt: Asset,
                 send_max: Optional[Asset] = None,
                 paths: Optional[PathList] = None,
                 dst_tag: Optional[int] = None,
                 deliver_min: Optional[Asset] = None,
                 **rest):
        super().__init__(**rest)
        self.dst = dst
        self.amt = amt
        self.send_max = send_max
        if paths is not None and isinstance(paths, Path):
            # allow paths = Path([...]) special case
            self.paths = PathList([paths])
        else:
            self.paths = paths
        self.dst_tag = dst_tag
        self.deliver_min = deliver_min

    def set_partial_payment(self, value: bool = True):
        '''Set or clear the partial payment flag'''
        self._set_flag(0x0002_0000, value)

    def to_cmd_obj(self) -> dict:
        '''convert to transaction form (suitable for using json.dumps or similar)'''
        txn = super().to_cmd_obj()
        txn = {
            **txn,
            'TransactionType': 'Payment',
            'Destination': self.dst.account_id,
            'Amount': self.amt.to_cmd_obj(),
        }
        if self.paths is not None:
            txn['Paths'] = self.paths.to_cmd_obj()
        if self.send_max is not None:
            txn['SendMax'] = self.send_max.to_cmd_obj()
        if self.dst_tag is not None:
            txn['DestinationTag'] = self.dst_tag
        if self.deliver_min is not None:
            txn['DeliverMin'] = self.deliver_min.to_cmd_obj()
        return txn


class Trust(Transaction):
    '''A trust set transaction'''
    def __init__(self,
                 *,
                 limit_amt: Optional[Asset] = None,
                 qin: Optional[int] = None,
                 qout: Optional[int] = None,
                 **rest):
        super().__init__(**rest)
        self.limit_amt = limit_amt
        self.qin = qin
        self.qout = qout

    def set_auth(self):
        '''Set the auth flag (cannot be cleared)'''
        self._set_flag(0x0001_0000)
        return self

    def set_no_ripple(self, value: bool = True):
        '''Set or clear the noRipple flag'''
        self._set_flag(0x0002_0000, value)
        self._set_flag(0x0004_0000, not value)
        return self

    def set_freeze(self, value: bool = True):
        '''Set or clear the freeze flag'''
        self._set_flag(0x0020_0000, value)
        self._set_flag(0x0040_0000, not value)
        return self

    def to_cmd_obj(self) -> dict:
        '''convert to transaction form (suitable for using json.dumps or similar)'''
        result = super().to_cmd_obj()
        result = {
            **result,
            'TransactionType': 'TrustSet',
            'LimitAmount': self.limit_amt.to_cmd_obj(),
        }
        if self.qin is not None:
            result['QualityIn'] = self.qin
        if self.qout is not None:
            result['QualityOut'] = self.qout
        return result


class SetRegularKey(Transaction):
    '''A SetRegularKey transaction'''
    def __init__(self, *, key: str, **rest):
        super().__init__(**rest)
        self.key = key

    def to_cmd_obj(self) -> dict:
        '''convert to transaction form (suitable for using json.dumps or similar)'''
        result = super().to_cmd_obj()
        result = {
            **result,
            'TransactionType': 'SetRegularKey',
            'RegularKey': self.key,
        }
        return result


class SignerListSet(Transaction):
    '''A SignerListSet transaction'''
    def __init__(self,
                 *,
                 keys: List[str],
                 weights: Optional[List[int]] = None,
                 quorum: int,
                 **rest):
        super().__init__(**rest)
        self.keys = keys
        self.quorum = quorum
        if weights:
            if len(weights) != len(keys):
                raise ValueError(
                    f'SignerListSet: number of weights must equal number of '
                    f'keys (or be empty). Weights: {weights} Keys: {keys}')
            self.weights = weights
        else:
            self.weights = [1] * len(keys)

    def to_cmd_obj(self) -> dict:
        '''convert to transaction form (suitable for using json.dumps or similar)'''
        result = super().to_cmd_obj()
        result = {
            **result,
            'TransactionType': 'SignerListSet',
            'SignerQuorum': self.quorum,
        }
        entries = []
        for k, w in zip(self.keys, self.weights):
            entries.append({'SignerEntry': {'Account': k, 'SignerWeight': w}})
        result['SignerEntries'] = entries
        return result


class AccountSet(Transaction):
    '''An account set transaction'''
    def __init__(self, account: Account, **rest):
        super().__init__(account=account, **rest)
        self.clear_flag = None
        self.set_flag = None
        self.transfer_rate = None
        self.tick_size = None

    def _set_account_flag(self, flag_id: int, value):
        if value:
            self.set_flag = flag_id
        else:
            self.clear_flag = flag_id
        return self

    def set_account_txn_id(self, value: bool = True):
        '''Set or clear the asfAccountTxnID flag'''
        return self._set_account_flag(5, value)

    def set_default_ripple(self, value: bool = True):
        '''Set or clear the asfDefaultRipple flag'''
        return self._set_account_flag(8, value)

    def set_deposit_auth(self, value: bool = True):
        '''Set or clear the asfDepositAuth flag'''
        return self._set_account_flag(9, value)

    def set_disable_master(self, value: bool = True):
        '''Set or clear the asfDisableMaster flag'''
        return self._set_account_flag(4, value)

    def set_disallow_xrp(self, value: bool = True):
        '''Set or clear the asfDisallowXRP flag'''
        return self._set_account_flag(3, value)

    def set_global_freeze(self, value: bool = True):
        '''Set or clear the asfGlobalFreeze flag'''
        return self._set_account_flag(7, value)

    def set_no_freeze(self, value: bool = True):
        '''Set or clear the asfNoFreeze flag'''
        return self._set_account_flag(6, value)

    def set_require_auth(self, value: bool = True):
        '''Set or clear the asfRequireAuth flag'''
        return self._set_account_flag(2, value)

    def set_require_dest(self, value: bool = True):
        '''Set or clear the asfRequireDest flag'''
        return self._set_account_flag(1, value)

    def set_transfer_rate(self, value: int):
        '''Set the fee to charge when users transfer this account's issued currencies'''
        self.transfer_rate = value
        return self

    def set_tick_size(self, value: int):
        '''Tick size to use for offers involving a currency issued by this address'''
        self.tick_size = value
        return self

    def to_cmd_obj(self) -> dict:
        '''convert to transaction form (suitable for using json.dumps or similar)'''
        result = super().to_cmd_obj()
        result = {
            **result,
            'TransactionType': 'AccountSet',
        }
        if self.clear_flag is not None:
            result['ClearFlag'] = self.clear_flag
        if self.set_flag is not None:
            result['SetFlag'] = self.set_flag
        if self.transfer_rate is not None:
            result['TransferRate'] = self.transfer_rate
        if self.tick_size is not None:
            result['TickSize'] = self.tick_size
        return result


class Offer(Transaction):
    '''An offer transaction'''
    def __init__(self,
                 *,
                 taker_pays: Asset,
                 taker_gets: Asset,
                 expiration: Optional[int] = None,
                 offer_sequence: Optional[int] = None,
                 **rest):
        super().__init__(**rest)
        self.taker_pays = taker_pays
        self.taker_gets = taker_gets
        self.expiration = expiration
        self.offer_sequence = offer_sequence

    def set_passive(self, value: bool = True):
        return self._set_flag(0x0001_0000, value)

    def set_immediate_or_cancel(self, value: bool = True):
        return self._set_flag(0x0002_0000, value)

    def set_fill_or_kill(self, value: bool = True):
        return self._set_flag(0x0004_0000, value)

    def set_sell(self, value: bool = True):
        return self._set_flag(0x0008_0000, value)

    def to_cmd_obj(self) -> dict:
        txn = super().to_cmd_obj()
        txn = {
            **txn,
            'TransactionType': 'OfferCreate',
            'TakerPays': self.taker_pays.to_cmd_obj(),
            'TakerGets': self.taker_gets.to_cmd_obj(),
        }
        if self.expiration is not None:
            txn['Expiration'] = self.expiration
        if self.offer_sequence is not None:
            txn['OfferSequence'] = self.offer_sequence
        return txn


class Ticket(Transaction):
    '''A ticket create transaction'''
    def __init__(self, *, count: int = 1, **rest):
        super().__init__(**rest)
        self.count = count

    def to_cmd_obj(self) -> dict:
        txn = super().to_cmd_obj()
        txn = {
            **txn,
            'TransactionType': 'TicketCreate',
            'TicketCount': self.count,
        }
        return txn


class SetHook(Transaction):
    '''A SetHook transaction for the experimental hook amendment'''
    def __init__(self,
                 *,
                 create_code: str,
                 hook_on: str = '0000000000000000',
                 **rest):
        super().__init__(**rest)
        self.create_code = create_code
        self.hook_on = hook_on

    def to_cmd_obj(self) -> dict:
        txn = super().to_cmd_obj()
        txn = {
            **txn,
            'TransactionType': 'SetHook',
            'CreateCode': self.create_code,
            'HookOn': self.hook_on,
        }
        return txn
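As a usage sketch of what these classes serialize to, the snippet below builds the submit-ready dict a simple XRP `Payment` produces. The `StubAccount` and `StubXRP` classes and the `payment_cmd_obj` helper are hypothetical one-field stand-ins for the repo's `Account` and `Asset` types, not part of the codebase; the dict shape mirrors `Transaction.to_cmd_obj` followed by `Payment.to_cmd_obj`.

```python
from dataclasses import dataclass


@dataclass
class StubAccount:
    '''Hypothetical stand-in for the repo's Account class.'''
    account_id: str


@dataclass
class StubXRP:
    '''Hypothetical stand-in for an XRP Asset.'''
    drops: int

    def to_cmd_obj(self):
        return f'{self.drops}'  # XRP amounts serialize as drop strings


def payment_cmd_obj(account, dst, amt, fee=10):
    # mirrors Transaction.to_cmd_obj followed by Payment.to_cmd_obj
    return {
        'Account': account.account_id,
        'Fee': f'{fee}',  # fee must also be a string
        'TransactionType': 'Payment',
        'Destination': dst.account_id,
        'Amount': amt.to_cmd_obj(),
    }


txn = payment_cmd_obj(StubAccount('rAlice'), StubAccount('rBob'),
                      StubXRP(1_000_000))
print(txn)
```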

View File

@@ -0,0 +1,217 @@
## Introduction
This document walks through the steps to set up a side chain running on your
local machine and make your first cross chain transfers.
## Get Ready
This section describes how to install the python dependencies, create the
environment variables, and create the configuration files that scripts need to
run correctly.
### Build rippled
Checkout the `sidechain` branch from the rippled repository, and follow the
usual process to build rippled.
### Create a Python virtual environment and install dependencies
1. Check the current python version. The python scripts require python 3.8 or greater:
```
python3 --version
```
2. Choose a directory to put the virtual environment. For example, `~/env`.
3. Create this directory and cd to it:
```
$ mkdir ~/env
$ cd ~/env
```
4. Create a new python virtual environment and activate it. Here the new
environment is called `sidechain`. Of course, you can choose whatever name
you'd like:
```
$ python3 -m venv sidechain
$ source ./sidechain/bin/activate
```
5. Install the required python modules. Change directories to where the
side chain branch is located and use pip3 to install the modules. Assuming the
code is located in `~/projs/sidechain`, the following commands will do it:
```
$ cd ~/projs/sidechain
$ pip3 install -r bin/sidechain/python/requirements.txt
```
### Activate the python virtual environment
```
$ cd ~/env
$ source ./sidechain/bin/activate
```
There's no harm if it was already active.
### Environment variables
The python scripts need to know the locations of two files and one directory.
These can be specified either through command line arguments or by setting
environment variables.
1. The location of the rippled executable used for main chain servers. Either
set the environment variable `RIPPLED_MAINCHAIN_EXE` or use the command line
switch `--exe_mainchain`. Until a new RPC is integrated into the main branch
(this will happen very soon), use the code built from the sidechain branch as
the main chain exe.
2. The location of the rippled executable used for side chain servers. Either
set the environment variable `RIPPLED_SIDECHAIN_EXE` or use the command line
switch `--exe_sidechain`. This should be the rippled executable built from
the sidechain branch.
3. The location of the directory that has the rippled configuration files.
Either set the environment variable `RIPPLED_SIDECHAIN_CFG_DIR` or use the
command line switch `--cfgs_dir`. The configuration files do not exist yet.
There is a script to create these for you. For now, just choose a location
where the files should live and make sure that directory exists.
Setting environment variables can be very convenient. For example, a small
script can be sourced to set these environment variables when working with side
chains.
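Such a script might look like the following. The paths are illustrative examples only; point them at your own builds and config directory:

```shell
# env.sh - source this before running the sidechain scripts.
# All paths below are examples; substitute your own.
export RIPPLED_MAINCHAIN_EXE="$HOME/projs/sidechain/build/rippled"
export RIPPLED_SIDECHAIN_EXE="$HOME/projs/sidechain/build/rippled"
export RIPPLED_SIDECHAIN_CFG_DIR="$HOME/sidechain-configs"
```

Then `source env.sh` activates the variables in the current shell session.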
### Creating configuration files
Assuming rippled is built, the three environment variables are set, and the
python environment is activated, run the following script:
```
bin/sidechain/python/create_config_files.py --usd
```
There should now be many configuration files in the directory specified by the
`RIPPLED_SIDECHAIN_CFG_DIR` environment variable. The `--usd` flag creates a
sample cross chain asset for USD -> USD transfers.
## Running the interactive shell
There is an interactive shell called `RiplRepl` that can be used to explore
side chains. It will use the configuration files built above to spin up a test
rippled main chain running in standalone mode as well as 5 side chain federators
running in regular consensus mode.
To start the shell, run the following script:
```
bin/sidechain/python/riplrepl.py
```
The shell will not start until the servers have synced. It may take a minute or
two until they do sync. The script should give feedback while it is syncing.
Once the shell has started, the following message should appear:
```
Welcome to the sidechain test shell. Type help or ? to list commands.
RiplRepl>
```
Type the command `server_info` to make sure the servers are running. An example output would be:
```
RiplRepl> server_info
pid config running server_state ledger_seq complete_ledgers
main 0 136206 main.no_shards.mainchain_0/rippled.cfg True proposing 75 2-75
side 0 136230 sidechain_0/rippled.cfg True proposing 92 1-92
1 136231 sidechain_1/rippled.cfg True proposing 92 1-92
2 136232 sidechain_2/rippled.cfg True proposing 92 1-92
3 136233 sidechain_3/rippled.cfg True proposing 92 1-92
4 136234 sidechain_4/rippled.cfg True proposing 92 1-92
```
Of course, you'll see slightly different output on your machine. The important
thing to notice is that there are two categories: one called `main` for the main
chain and one called `side` for the side chain. There should be a single server
for the main chain and five servers for the side chain.
Next, type the `balance` command to see the balances of the accounts in the address book:
```
RiplRepl> balance
balance currency peer limit
account
main root 99,999,989,999.999985 XRP
door 9,999.999940 XRP
side door 99,999,999,999.999954 XRP
```
There are two accounts on the main chain: `root` and `door`; and one account on the side chain: `door`. These are not user accounts. Let's add two user accounts, one on the main chain called `alice` and one on the side chain called `bob`. The `new_account` command does this for us.
```
RiplRepl> new_account mainchain alice
RiplRepl> new_account sidechain bob
```
This just added the accounts to the address book, but they don't exist on the
ledger yet. To do that, we need to fund the accounts with a payment. For now,
let's just fund the `alice` account and check the balances. The `pay` command
makes a payment on one of the chains:
```
RiplRepl> pay mainchain root alice 5000
RiplRepl> balance
balance currency peer limit
account
main root 99,999,984,999.999969 XRP
door 9,999.999940 XRP
alice 5,000.000000 XRP
side door 99,999,999,999.999954 XRP
bob 0.000000 XRP
```
Finally, let's do something specific to side chains: make a cross chain payment.
The `xchain` command makes a payment between chains:
```
RiplRepl> xchain mainchain alice bob 4000
RiplRepl> balance
balance currency peer limit
account
main root 99,999,984,999.999969 XRP
door 13,999.999940 XRP
alice 999.999990 XRP
side door 99,999,995,999.999863 XRP
bob 4,000.000000 XRP
```
Note: the account reserve on the side chain is 100 XRP. The cross chain amount
must be greater than 100 XRP or the payment will fail.
Making a cross chain transaction from the side chain to the main chain is similar:
```
RiplRepl> xchain sidechain bob alice 2000
RiplRepl> balance
balance currency peer limit
account
main root 99,999,984,999.999969 XRP
door 11,999.999840 XRP
alice 2,999.999990 XRP
side door 99,999,997,999.999863 XRP
bob 1,999.999990 XRP
```
If you typed `balance` very quickly, you may catch a cross chain payment in
progress and the XRP may be deducted from bob's account before it is added to
alice's. If this happens, just wait a couple of seconds and retry the command.
Also note that accounts pay a ten drop fee when submitting transactions.
Finally, exit the program with the `quit` command:
```
RiplRepl> quit
Thank you for using RiplRepl. Goodbye.
WARNING: Server 0 is being stopped. RPC commands cannot be sent until this is restarted.
```
Ignore the warning about the server being stopped.
## Conclusion
Those two cross chain payments are a "hello world" for side chains. They
confirm your environment is set up correctly.

View File

@@ -0,0 +1,130 @@
## Introduction
The config files for side chain servers that run as federators require three
additional configuration stanzas. One additional stanza is required if the
federator will run in standalone mode, and one existing stanza (`ips_fixed`) can
be useful when running a side chain network on the local machine.
## The `[sidechain]` stanza
This stanza defines the side chain top level parameters. This includes:
* The federator's signing key. This is needed to add a signature to a
multi-signed transaction before submitting it on the main chain or the side
chain.
* The main chain account. This is the account controlled by the federators, and
the account to which users send their assets to initiate cross chain
transactions. Some documentation calls this the main chain "door" account.
* The IP address and port of the main chain. This is needed to communicate with
the main chain server.
An example stanza may look like this (where the "X" are part of a secret key):
```
[sidechain]
signing_key=sXXXXXXXXXXXXXXXXXXXXXXXXXXXX
mainchain_account=rDj4pMuPv8gAD5ZvUrpHza3bn6QMAK6Zoo
mainchain_ip=127.0.0.1
mainchain_port_ws=6007
```
## The `[sidechain_federators]` stanza
This stanza defines the signing public keys of the sidechain federators. This is
needed to know which servers to collect transaction signatures from. An example
stanza may look like this:
```
[sidechain_federators]
aKNmFC2QWXbCUFq9XxaLgz1Av6SY5ccE457zFjSoNwaFPGEwz6ab
aKE9m7iDjhy5QAtnrmE8RVbY4RRvFY1Fn3AZ5NN2sB4N9EzQe82Z
aKNFZ3L7Y7z8SdGVewkVuqMKmDr6bqmaErXBdWAVqv1cjgkt1X36
aKEhTF5hRYDenn2Rb1NMza1vF9RswX8gxyJuuYmz6kpU5W6hc7zi
aKEydZ5rmPm7oYQZi9uagk8fnbXz4gmx82WBTJcTVdgYWfRBo1Mf
```
## The `[sidechain_assets]` and associated stanzas
These stanzas define which asset is used as the cross chain asset between the
main chain and the side chain. The `mainchain_asset` is the asset that accounts
on the main chain send to the account controlled by the federators to initiate a
cross chain transaction. The `sidechain_asset` is the asset that will be sent to
the destination address on the side chain. When returning an asset from the side
chain to the main chain, the `sidechain_asset` is sent to the side chain account
controlled by the federators and the `mainchain_asset` will be sent to the
destination address on the main chain. There are amounts associated with these
two assets. These amounts define an exchange rate. If the value of the main
chain asset is 1, and the amount of the side chain asset is 2, then for every
asset locked on the main chain, twice as many assets are sent on the side chain.
Similarly, for every asset returned from the side chain, half as many assets are
sent on the main chain. The format used to specify these amounts is the same as
used in json RPC commands.
There are also fields for "refund_penalty" on the main chain and side chain.
This is the amount to deduct from refunds if a transaction fails. For example,
if a cross chain transaction sends 1 XRP to an address on the side chain that
doesn't exist (and the reserve is greater than 1 XRP), then a refund is issued
on the main chain. If the `mainchain_refund_penalty` is 400 drops, then the
amount returned is 1 XRP - 400 drops.
An example of stanzas where the main chain asset is XRP, and the sidechain asset
is also XRP, and the exchange rate is 1 to 1 may look like this:
```
[sidechain_assets]
xrp_xrp_sidechain_asset
[xrp_xrp_sidechain_asset]
mainchain_asset="1"
sidechain_asset="1"
mainchain_refund_penalty="400"
sidechain_refund_penalty="400"
```
An example of stanzas where the main chain asset is USD/rD... and the side chain
asset is USD/rHb... and the exchange rate is 1 to 2 may look like this:
```
[sidechain_assets]
iou_iou_sidechain_asset
[iou_iou_sidechain_asset]
mainchain_asset={"currency": "USD", "issuer": "rDj4pMuPv8gAD5ZvUrpHza3bn6QMAK6Zoo", "value": "1"}
sidechain_asset={"currency": "USD", "issuer": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh", "value": "2"}
mainchain_refund_penalty={"currency": "USD", "issuer": "rDj4pMuPv8gAD5ZvUrpHza3bn6QMAK6Zoo", "value": "0.02"}
sidechain_refund_penalty={"currency": "USD", "issuer": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh", "value": "0.04"}
```
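The conversion implied by these amounts reduces to simple arithmetic. The sketch below is illustrative only (not the federator's actual code); it ignores decimal precision, which a real implementation would need to handle:

```python
def convert(amount, from_side_value, to_side_value):
    # The amounts in the asset stanza define the exchange rate:
    # `from_side_value` units sent on one chain deliver
    # `to_side_value` units on the other chain.
    return amount * to_side_value / from_side_value


# With mainchain value 1 and sidechain value 2 (as in the USD example above):
# locking 10 USD/rDj... on the main chain issues 20 USD/rHb... on the side chain
main_to_side = convert(10, 1, 2)   # 20.0
# returning 20 USD/rHb... from the side chain delivers 10 USD/rDj...
side_to_main = convert(20, 2, 1)   # 10.0
```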
## The `[sidechain_federators_secrets]` stanza
When running a side chain with a single federator in standalone mode (useful
for debugging), that single server needs to know the signing keys of all the
federators in order to submit transactions. This stanza will normally only
be part of configuration files that are used for testing and debugging.
An example of a stanza with federator secrets may look like this (where the "X"
are part of a secret key).
```
[sidechain_federators_secrets]
sXXXXXXXXXXXXXXXXXXXXXXXXXXXX
sXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
sXXXXXXXXXXXXXXXXXXXXXXXXXXXX
sXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
sXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```
## The `[ips_fixed]` stanza
When running a test net it can be useful to hard code the ip addresses of the
side chain servers. An example of such a stanza used to run a test net locally
may look like this:
```
[ips_fixed]
127.0.0.2 51238
127.0.0.3 51239
127.0.0.4 51240
127.0.0.5 51241
```

823
docs/sidechain/design.md Normal file
View File

@@ -0,0 +1,823 @@
# Introduction
This document covers the design of side chains using the XRP ledger. It covers
the implementation of federators, how federators are initially synced and kept
in sync, how cross chain transactions work, and how errors are handled. It does
not give a high level overview of side chains or describe their benefits.
# Terminology
_federator_: A server that listens for triggering transactions on both the main
chain and the side chain. Each federator has a signing key associated with it
that is used to sign transactions. A transaction must be signed by a quorum of
federators before it can be submitted. Federators are responsible for creating
and signing valid response transactions, collecting signatures from other
federators, and submitting transactions to the main chain and side chain.
_main chain_: Ledger where assets originate and where assets will be locked
while used on the side chain. For most applications, the main chain will be the
XRP ledger mainnet.
_side chain_: Ledger where proxy assets for the locked main chain assets are
issued. Side chains may have rules, transactors, and validators that are very
different from the main chain. Proxy assets on the side chain can be sent back
to the main chain, where they will be unlocked from the control of the
federators.
_door account_: Account controlled by the federators. There are two door
accounts: one on the main chain and one on the side chain. Cross chain
transactions are started by users sending assets to a door account. Main chain
to side chain transactions cause the balance to increase on the main chain door
account and the balance to decrease on the side chain door account. It is called
a "door" because it is the mechanism to move assets from one chain to another -
much like going between rooms in a house requires stepping through a door.
_triggering transaction_: A transaction that causes the federators to start the
process of signing and submitting a new response transaction. For example,
sending XRP to the main chain's door account is a triggering transaction that
will cause the federators to submit a new transaction on the side chain.
_response transaction_: A transaction submitted by the federators in reaction to
a triggering transaction. Note that _triggering transaction_ and _response
transaction_ depends on context. Sending XRP from a _door account_ to a user
account is a _response transaction_ when thinking about cross chain
transactions. It is a _triggering transaction_ when thinking about how to handle
failed transactions.
# New RPC Command is a key primitive
Side chains introduce a new subscription stream called
"account_history_tx_stream". Given an account, this streams both new
transactions and historical transactions from validated ledgers back to the
client. The transactions are streamed in order and without gaps, and each
transaction is given a numeric id. New transactions start at id 0 and continue
in the positive direction. Historical transactions start at id -1 and continue
in the negative direction. New transactions are sent in the same order as they
were applied to the ledger, and historical transactions are sent in the reverse
of the order they were applied to the ledger. The server will continue to stream
historical transactions until it reaches the account's first transaction or the
user sends a command signaling that historical transactions are no longer
needed. This can be done without closing the stream, and new transactions will
continue to be sent. Note that these transactions include all the transactions
that affect the account, not just triggering and response transactions.
It's important to note that while historical and new transactions may be
interleaved in the stream, there are never any gaps in the transactions.
Transaction 7 MUST be sent before transaction 8, and transaction -7 MUST be sent
before transaction -8.
This is the key primitive that allows federators to agree on transaction
values - transaction types, sequence numbers, asset amounts, and destination
addresses - without communicating among themselves (of course, signing
transactions requires communication). Since the transactions are from validated
ledgers, all the federators will see the same transactions in the same order.
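The ordering guarantee can be modeled as a toy invariant (this is an illustrative model of the stream's id scheme, not rippled code): new ids count up from 0 without gaps, historical ids count down from -1 without gaps, and the two directions may interleave freely.

```python
def interleave_ok(ids):
    """Check a toy model of account_history_tx_stream ordering.

    New txns are 0, 1, 2, ... and historical txns are -1, -2, -3, ...
    The two directions may interleave, but within each direction
    ids must arrive in order with no gaps.
    """
    new = [i for i in ids if i >= 0]
    hist = [i for i in ids if i < 0]
    return (new == list(range(len(new)))
            and hist == [-(i + 1) for i in range(len(hist))])


# legal: historical and new ids interleaved, no gaps in either direction
assert interleave_ok([0, -1, 1, -2, 2, 3, -3])
# illegal: historical id -3 arrived before -2 (a gap)
assert not interleave_ok([0, -1, -3])
```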
# Federators
## Federator Introduction
A federator acts as a bridge between a main chain and a side chain. Through a
multi-signature scheme, the federators collectively control an account on the
main chain and an account on the side chain. These accounts are called door
accounts. A federator listens for transactions on these door accounts. When a
federator hears a triggering transaction, it will eventually submit a new
response transaction that completes the triggering transaction.
Initially, the federators will live in the same executable as the side chain
validators. However, the proposed implementation purposely does not take
advantage of this fact. The motivation for this is:
1. It makes it easy to eventually separate out the federator implementation from
side chain validators.
2. The side chain to main chain transactions will be implemented the same way as
the main chain to side chain transactions. Building and maintaining one
implementation is preferable to maintaining two implementations.
## Keeping the federators in sync
Federators decide to sign transactions by using the "account_history_tx_stream"
to listen for transactions on each chain. New transactions on the main chain
will cause a federator to sign a transaction meant for the side chain.
Similarly, new transactions on the side chain will cause a federator to sign a
transaction meant for the main chain. As a concrete example, consider how XRP is
locked on the main chain and distributed on a side chain. A user sends XRP to
the main chain door account. This causes the federators to submit a transaction
sending XRP to a destination on the side chain. Recall that a transaction that
causes a federator to sign a transaction is called a triggering transaction, and
a transaction created to handle a triggering transaction is called a response
transaction. In the example above, the user sending XRP on the main chain is a
triggering transaction and the transaction created by the federators and
submitted on the side chain is a response transaction.
When a new triggering transaction is detected, a federator needs to create a
response transaction. The fee, destination address, and amount are all known.
The only value that isn't fixed or derived from the triggering transaction is
the account sequence number. It is easy to agree on a sequence number. But first
let's show that the stream of triggering transactions is the same for all the
federators.
Notice that a response transaction is always on the opposite chain from the
corresponding triggering transaction. If a response transaction could be on
either chain, then there would be timing issues. Consider what would happen if
two triggering transactions came in, one from the main chain and one from the
side chain, and both of these triggering transactions required response
transactions on the main chain. Since there are separate transaction streams
coming from the main chain and side chain, different federators may see these
transactions arrive in different orders. However, since triggering and response
transactions are on different chains, the federators don't need to deal with
this case. (Note: there is at least one response transaction, needed in
exceptional circumstances, that violates this. Tickets are used to handle these
transactions and will be described later.)
Also notice that "account_history_tx_stream" delivers transactions in the order
they were applied to the ledger without gaps.
This means that once a federator is in a state where it knows what sequence
number should be used for the next response transaction (call it S), it will
know the sequence number for all the subsequent response transactions. It will
just be S+1, S+2, S+3, etc. Also notice that once in sync, all the federators
will see the same transactions in the same order. So all the federators will
create the same response transactions, in the same order.
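The agreement described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical names (`SeqAssigner` and `ResponseTxn` are not from the implementation): once a federator knows the next sequence number S, every later response transaction's sequence number follows deterministically from the order of the triggering transactions, with no communication.

```c++
#include <cstdint>
#include <string>

// Hypothetical sketch: each federator runs this same logic over the same
// ordered stream of triggering transactions, so they all assign
// S, S+1, S+2, ... without coordinating.
struct ResponseTxn
{
    std::string triggeringHash;  // hash of the triggering transaction
    std::uint32_t seq;           // door account sequence number
};

class SeqAssigner
{
    std::uint32_t next_;

public:
    explicit SeqAssigner(std::uint32_t initialSeq) : next_(initialSeq)
    {
    }

    // Called once per triggering transaction, in ledger order.
    ResponseTxn
    assign(std::string const& triggeringHash)
    {
        return ResponseTxn{triggeringHash, next_++};
    }
};
```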
## Getting a federator in sync
When a federator joins the network, it doesn't know which triggering
transactions it needs to sign and it doesn't know what sequence numbers should
be used for response transactions. The "account_history_tx_stream" can be used
to find this information and get the federator to a state where it can start
signing and submitting transactions.
While getting into sync, a federator collects historical and new transactions
from both the side chain and main chain. Each transaction has an id that is used
to keep the transactions in the same order they were applied to the ledger, and
the collections are kept in this order.
One piece of information a syncing federator needs to find is which triggering
transactions need to be treated as new and which ones have already been handled.
This is easily found by looking at the historical transactions from the
"account_history_tx_stream". The first time a response transaction is found,
the hash of the associated triggering transaction is noted (this is recorded as
a memo in the response transaction). All the triggering transactions up to and
including the noted triggering transaction have already been handled. The
"account_history_tx_stream" must continue to stream historical transactions at
least until the first response transaction is found. For example, if the first
observed response transaction on the main chain has hash `r_hash_mainchain`
and an associated triggering transaction of `t_hash_sidechain`, then we know
that all the triggering transactions on the side chain up to and including
`t_hash_sidechain` have been handled.
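As a sketch of this step, assuming hypothetical simplified types (the real implementation parses the memo out of streamed transaction JSON): scan the history, newest first, until the first response transaction is seen, and read the triggering hash out of its memo.

```c++
#include <optional>
#include <string>
#include <vector>

// Hypothetical simplified record of one streamed historical transaction.
struct HistTxn
{
    std::string hash;
    // Set only for response transactions, which carry the triggering
    // transaction's hash in a memo.
    std::optional<std::string> triggeringHashMemo;
};

// Scan history (newest first) until the first response transaction is seen.
// Its memo identifies the last triggering transaction on the other chain that
// has already been handled; nullopt means no response txn was found at all.
std::optional<std::string>
lastHandledTriggeringHash(std::vector<HistTxn> const& history)
{
    for (auto const& txn : history)
        if (txn.triggeringHashMemo)
            return txn.triggeringHashMemo;
    return std::nullopt;
}
```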
Another set of data that needs to be collected is all the historical
triggering transactions that come after `t_hash` (see above). Of course,
`t_hash` comes from the "account_history_tx_stream" of one chain, and the
triggering transactions come from the other chain. This means more
transactions than needed may be gathered.
Historical transactions continue to be streamed until the triggering
transaction associated with `t_hash_this_chain` is found and the value of
`t_hash_other_chain` is known. For example, the main chain will continue to
collect historical transactions until:
1) The side chain stream has found a response transaction and informed the main
chain of the hash of the associated triggering transaction.
2) The main chain stream has found that triggering transaction.
3) The main chain stream has found a response transaction and informed the side
chain syncing algorithm of the associated triggering transaction.
The above description does not handle the case where the start of historical
transactions is reached without finding any response transactions. If this
happens, then the other chain must also collect all the historical transactions,
since we cannot show that any triggering transaction has ever been handled.
Once this data has been collected, a command will be sent to ask the
"account_history_tx_stream" to stop sending historical transactions (Note:
without closing the stream. If the stream were closed it would be possible to
miss a transaction). Starting from the transaction after the `t_hash`
transaction, the collected triggering transactions will be iterated in the order
they were applied to the ledger and treated as if they had newly arrived from
the transaction stream. Once this is done the federator is synced and can switch
to handling new transactions normally.
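The replay step can be sketched as follows, with hypothetical simplified types (`Collected` and `toReplay` are illustrations, not the implementation's names): sort the collected triggering transactions by the order they were applied to the ledger, then replay everything after the already-handled transaction.

```c++
#include <algorithm>
#include <iterator>
#include <string>
#include <vector>

// Hypothetical simplified record of a collected triggering transaction.
struct Collected
{
    int rpcOrder;  // order in which it was applied to the ledger
    std::string hash;
};

// Once historical streaming stops, replay the collected triggering
// transactions that come after the already-handled transaction `tHash`, in
// ledger order, as if they had just arrived on the stream.
std::vector<Collected>
toReplay(std::vector<Collected> collected, std::string const& tHash)
{
    std::sort(
        collected.begin(), collected.end(), [](auto const& a, auto const& b) {
            return a.rpcOrder < b.rpcOrder;
        });
    auto it = std::find_if(
        collected.begin(), collected.end(), [&](auto const& c) {
            return c.hash == tHash;
        });
    if (it != collected.end())
        return {std::next(it), collected.end()};
    // No handled transaction was found: everything collected must be replayed.
    return collected;
}
```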
As long as there are regular cross chain transactions being sent from both the
main chain and the side chain, the above procedure doesn't require too many
historical transactions. However, if one chain almost never sends cross chain
transactions then the syncing procedure is not as efficient as it could be. As
an extreme example, consider a main chain that sends cross chain transactions to
a side chain, and the side chain never sends cross chain transactions back.
Since there would be no response transactions on the main chain, the sync
algorithm would fetch all of the main chain transactions. One way to improve
this situation is for the federators to checkpoint the last known response
transaction and its corresponding triggering transaction. If they did this
individually, then on startup a federator would need to fetch at most as much
history as the time it was down. If they did this as a group (by adding a new
ledger object on the side chain, for example), then syncing could require much
less history. For now, these strategies are not used. The benefits of a simpler
implementation and of not adding any new ledger objects outweighed the benefits
of faster syncing for some types of side chains.
## Federator Implementation
The Federator is an event loop that services events sent to it from the
listeners. Events are handled in the `mainLoop` method, which runs on a
separate thread so that all the event handlers run in the order the events were
received, and on the same thread.
### Federator Events
A `Federator` event is a `std::variant` of all the event types. The current
event types are:
* `XChainTransferDetected`. This is added when a federator detects the start of
a cross chain transaction.
```c++
struct XChainTransferDetected
{
// direction of the transfer
Dir dir_;
// Src account on the src chain
AccountID src_;
// Dst account on the dst chain
AccountID dst_;
STAmount deliveredAmt_;
std::uint32_t txnSeq_;
uint256 txnHash_;
std::int32_t rpcOrder_;
EventType
eventType() const;
Json::Value
toJson() const;
};
```
* `XChainTransferResult`. This is added when a federator detects the end of a
cross chain transaction.
```c++
struct XChainTransferResult
{
// direction is the direction of the triggering transaction.
// I.e. A "mainToSide" transfer result is a transaction that
// happens on the sidechain (the triggering transaction happened on the
// mainchain)
Dir dir_;
AccountID dst_;
std::optional<STAmount> deliveredAmt_;
std::uint32_t txnSeq_;
// Txn hash of the initiating xchain transaction
uint256 srcChainTxnHash_;
// Txn hash of the federator's transaction on the dst chain
uint256 txnHash_;
TER ter_;
std::int32_t rpcOrder_;
EventType
eventType() const;
Json::Value
toJson() const;
};
```
* `RefundTransferResult`. This is added when a federator detects the end of a
refund transaction. Refunds may occur if there is an error transferring funds
at the end of a cross chain transaction.
```c++
struct RefundTransferResult
{
// direction is the direction of the triggering transaction.
// I.e. A "mainToSide" refund transfer result is a transaction that
// happens on the mainchain (the triggering transaction happened on the
// mainchain, the failed result happened on the side chain, and the refund
// result happened on the mainchain)
Dir dir_;
AccountID dst_;
std::optional<STAmount> deliveredAmt_;
std::uint32_t txnSeq_;
// Txn hash of the initiating xchain transaction
uint256 srcChainTxnHash_;
// Txn hash of the federator's transaction on the dst chain
uint256 dstChainTxnHash_;
// Txn hash of the refund result
uint256 txnHash_;
TER ter_;
std::int32_t rpcOrder_;
EventType
eventType() const;
Json::Value
toJson() const;
};
```
* `TicketCreateResult`. This is added when the federator detects a ticket create
transaction.
```c++
struct TicketCreateResult
{
Dir dir_;
bool success_;
std::uint32_t txnSeq_;
std::uint32_t ledgerIndex_;
uint256 srcChainTxnHash_;
uint256 txnHash_;
std::int32_t rpcOrder_;
std::uint32_t sourceTag_;
std::string memoStr_;
EventType
eventType() const;
Json::Value
toJson() const;
void
removeTrigger();
};
```
* `DepositAuthResult`. This is added when the federator detects a deposit auth
transaction. Deposit auth is used to pause cross chain transactions if the
federators fall too far behind.
```c++
struct DepositAuthResult
{
Dir dir_;
bool success_;
std::uint32_t txnSeq_;
std::uint32_t ledgerIndex_;
uint256 srcChainTxnHash_;
std::int32_t rpcOrder_;
AccountFlagOp op_;
EventType
eventType() const;
Json::Value
toJson() const;
};
```
* `BootstrapTicket`. This is added when the federator detects one of the initial
ticket transactions that is added during account setup.
```c++
struct BootstrapTicket
{
bool isMainchain_;
bool success_;
std::uint32_t txnSeq_;
std::uint32_t ledgerIndex_;
std::int32_t rpcOrder_;
std::uint32_t sourceTag_;
EventType
eventType() const;
Json::Value
toJson() const;
};
```
* `DisableMasterKeyResult`. This is added when the federator detects an
`AccountSet` transaction that disables the master key. Transactions that come
before this are assumed to be part of account setup.
```c++
struct DisableMasterKeyResult
{
bool isMainchain_;
std::uint32_t txnSeq_;
std::int32_t rpcOrder_;
EventType
eventType() const;
Json::Value
toJson() const;
};
```
* `HeartbeatTimer`. This is added at regular intervals and is used to trigger
events based on timeouts.
### Federator Event Handling
Handling the events is very simple. The `mainLoop` pops events off the event
queue and dispatches them to an event handler. There is one event handler for each
event type. There is also some logic to prevent busy waiting.
```c++
void
onEvent(event::XChainTransferDetected const& e);
void
onEvent(event::XChainTransferResult const& e);
void
onEvent(event::RefundTransferResult const& e);
void
onEvent(event::HeartbeatTimer const& e);
void
Federator::mainLoop()
{
FederatorEvent event;
while (!requestStop_)
{
if (!events_.pop(event))
{
using namespace std::chrono_literals;
// In rare cases, an event may be pushed and the condition
// variable signaled before the condition variable is waited on.
// To handle this, set a timeout on the wait.
std::unique_lock l{m_};
cv_.wait_for(
l, 1s, [this] { return requestStop_ || !events_.empty(); });
continue;
}
std::visit([this](auto&& e) { this->onEvent(e); }, event);
}
}
```
Events are added to a queue in the `push` method.
```c++
void
Federator::push(FederatorEvent const& e)
{
bool const notify = events_.empty();
events_.push(e);
if (notify)
{
std::lock_guard<std::mutex> l(m_);
cv_.notify_one();
}
}
```
Due to threading and lifetime issues, `Federator` is kept as a `shared_ptr`
inside of the app and enables `shared_from_this`. Since `shared_from_this`
cannot be used from a constructor, it uses two-phase initialization of a
constructor and an `init` function. These constructors are private, and objects
are created with a `make_federator` function that implements the two-phase
initialization. (Note: since `make_shared` cannot call a private constructor, a
private `PrivateTag` is used instead.)
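A minimal sketch of this construction pattern (the class body is hypothetical; the real `Federator` takes more arguments and does real work in `init`): the constructor is public so `make_shared` can call it, but it requires a `PrivateTag` that only the class and its friends can name.

```c++
#include <memory>

// Sketch of the two-phase construction pattern described above.
class Federator : public std::enable_shared_from_this<Federator>
{
    struct PrivateTag
    {
    };

public:
    // Public, so std::make_shared can call it, but uncallable without a
    // PrivateTag, so outside code must go through make_federator.
    explicit Federator(PrivateTag)
    {
    }

    void
    init()
    {
        // shared_from_this is safe here; it would throw inside the
        // constructor because no shared_ptr owns the object yet.
        auto self = shared_from_this();
        (void)self;  // e.g. hand `self` to listeners and callbacks
    }

    friend std::shared_ptr<Federator>
    make_federator();
};

std::shared_ptr<Federator>
make_federator()
{
    auto f = std::make_shared<Federator>(Federator::PrivateTag{});
    f->init();  // second phase
    return f;
}
```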
### Federator Listeners
There are two listener classes: a `MainchainListener` and a `SidechainListener`
(both inherit from a common `ChainListener` class where much of the
implementation lives). These classes monitor the chains for transactions on the
door accounts and add the appropriate event to the federator. The
`MainchainListener` uses a websocket to monitor transactions. Since the
federator lives in the same executable as the side chain validators, the
`SidechainListener` uses `InfoSub` directly rather than a websocket.
The federator is kept as a `weak_ptr` in these classes. Since they will be used
as part of callbacks from different threads, both listener classes enable
`shared_from_this`.
### Federator WebsocketClient
The `WebsocketClient` class takes an asio `io_service`, the main chain server's
ip and port, and a callback. When a command response or new stream result is
received, the callback is executed (it will be called from the `io_service`
thread). The `MainchainListener` uses this class to listen for transactions.
This class is also used to send transactions to the main chain.
```c++
WebsocketClient(
std::function<void(Json::Value const&)> callback,
boost::asio::io_service& ios,
boost::asio::ip::address const& ip,
std::uint16_t port,
std::unordered_map<std::string, std::string> const& headers = {});
```
# Triggering Transaction Handler
The triggering transaction handler is part of a single-threaded event loop that
responds to events triggered from the different chains. It is single-threaded so
there's no danger of events being processed out-of-order and for simplicity of
code. Note that event handlers are not computationally intensive, so there would
be little benefit to multiple threads.
When a new event is handled, a response transaction may be prepared. Apart from
the sequence number, all the values of a response transaction can be determined
from the triggering transaction, and this does not require communicating with
the other federators. For example, when sending assets between chains, the
response transaction will contain an amount equal to the XRP `delivered_amt` in
the triggering transaction, a fixed fee, and a memo with the hash of the cross
chain transaction. The memo is important because it makes the transaction unique
to the triggering cross chain transaction, and it is safer to sign such a
transaction in case sequence numbers somehow get out of sync between the
federators. The `LastLedgerSequence` field is not set.
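A minimal sketch of this construction, with hypothetical names (`ResponsePayment` and `makeResponse` are illustrations, not the implementation's API): every field except the sequence number is derived from the triggering transaction.

```c++
#include <cstdint>
#include <string>

// Hypothetical sketch: no communication between federators is needed to
// build a response payment, because everything except the sequence number
// is derived from the triggering transaction.
struct ResponsePayment
{
    std::string dst;              // destination taken from the trigger
    std::uint64_t amountDrops;    // equals the trigger's delivered_amt
    std::uint64_t feeDrops;       // fixed, constant fee
    std::string memoTriggerHash;  // makes the txn unique per trigger
    std::uint32_t seq;            // assigned from the agreed-upon counter
};

ResponsePayment
makeResponse(
    std::string const& dst,
    std::uint64_t deliveredDrops,
    std::string const& triggerHash,
    std::uint32_t seq)
{
    // A constant fee; the prototype uses 20 drops (see the fees section).
    constexpr std::uint64_t kFixedFeeDrops = 20;
    return {dst, deliveredDrops, kFixedFeeDrops, triggerHash, seq};
}
```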
Next the federator will sign these transactions and send its signature to its
peers. This happens in another thread so the event loop isn't slowed down.
Next the federator adds its signature to the txn in the `pending transactions`
collection for the appropriate chain. See [Adding a signature to Pending
Transactions](#adding-a-signature-to-pending-transactions).
If it doesn't have enough signatures to complete the multi-signature, the
federator will collect signatures by listening for signature messages from other
peers; see the [Collecting Signatures](#collecting-signatures) section.
## Collecting Signatures
A federator receives signatures by listening to peer messages. Signatures are
automatically sent when a federator detects a new cross chain transaction.
When a federator receives a new signature, it forwards it to its peers that have
not already received this signature from this federator.
Next it checks if this transaction has already been handled. If so, it does
nothing further.
Next the federator adds the signature to the txn in the `pending transactions`
collection. See [Adding a signature to Pending
Transactions](#adding-a-signature-to-pending-transactions).
Note that a federator may receive multiple signatures for the same transaction
but with different sequence numbers. This should only happen if a federator
somehow has the wrong sequence number and is later corrected.
## Adding a signature to Pending Transactions
The `pending transactions` collection stores the transactions that do not yet
have enough signatures to be submitted or have not been confirmed as sent. There
is one collection for the main chain and one for the side chain. The key to this
collection is a hash of the response transaction, computed with a sequence of
zero and a fee of zero (remember that this transaction has a memo field with the
hash of the cross chain transaction, so it is unique). It is hashed this way to
detect inconsistent sequence numbers and fees. The `value` of this collection is
a struct that contains this federator's signature (if available) and another map
with a `key` of sequence number and fee, and a `value` of a collection of
signatures. Before adding a signature to this collection, if the signer is not
on the multi-signature list, the signature is discarded. The signature is also
checked for validity; if it is invalid, it is discarded.
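A minimal sketch of this collection's shape, with hypothetical names and simplified types (real signatures and hashes are binary, not strings):

```c++
#include <cstdint>
#include <map>
#include <set>
#include <string>
#include <tuple>

// Hypothetical sketch of the `pending transactions` collection. The outer key
// is the transaction hash computed with sequence = 0 and fee = 0, so
// signatures that disagree only on sequence/fee still land under one key and
// the disagreement can be detected.
struct SeqFee
{
    std::uint32_t seq;
    std::uint64_t fee;

    bool
    operator<(SeqFee const& o) const
    {
        return std::tie(seq, fee) < std::tie(o.seq, o.fee);
    }
};

struct PendingTxn
{
    std::string mySig;  // this federator's signature; empty if none yet
    // Peer signatures grouped by the (sequence, fee) they were computed over.
    std::map<SeqFee, std::set<std::string>> sigs;
};

using PendingMap = std::map<std::string, PendingTxn>;

// Add an already-validated signature; return the signature count for that
// (sequence, fee) so the caller can compare it against the quorum.
std::size_t
addSig(
    PendingMap& pending,
    std::string const& zeroSeqFeeHash,
    SeqFee sf,
    std::string const& sig)
{
    auto& sigsForKey = pending[zeroSeqFeeHash].sigs[sf];
    sigsForKey.insert(sig);  // std::set de-duplicates repeated signatures
    return sigsForKey.size();
}
```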
After adding a signature to this collection, the federator checks whether the
signature is for a transaction with the same sequence number and fee as its own,
and whether there are enough signatures for a valid multi-signed transaction. If
so, the transaction is added to the `queued transactions` collection for the
appropriate chain and the function to submit transactions is called (see
[Transaction Submit](#transaction-submit)).
If the transaction has enough signatures for a valid multi-signed transaction,
but the sequence and fee _do not_ match the ones from this federator, then this
federator's sequence number must be out of sync with the rest of the network.
When this is detected, the federator will correct its sequence number. Note that
there may be other transactions that have been submitted since this transaction,
so the sequence number needs to be adjusted appropriately to account for this.
Inconsistent fees will not be handled in the prototype; the fee will always be a
constant. The federator will also change its signature for this transaction and
submit it to the network (see [Transaction Submit](#transaction-submit)). This
new signature will be broadcast to the network.
A heartbeat timer event will periodically check the collection for transactions
in the queue older than some threshold. When these are detected, an error will be
printed to the log. If the federator knows the transaction has already been
handled by the network, it will be removed from the queue.
## Transaction Submit
There is a limit on the number of transactions in flight at any time. While the
number of transactions is below this limit, and the next transaction in the
sequence is part of the `queued transactions` collection, send the response
transaction to the appropriate chain. Once a transaction is sent, it is removed
from the `queued transactions` collection. However, it remains part of the
`pending transactions` collection until a response transaction result is
observed.
Note that limiting the number of transactions in flight to one makes for a
simpler design, but greatly limits the throughput of cross-chain transactions.
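The submit step can be sketched as follows, with hypothetical names and simplified types: submit queued transactions while staying under the in-flight limit, and only while the next transaction in sequence order is actually queued.

```c++
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

// Hypothetical simplified queued response transaction.
struct Queued
{
    std::uint32_t seq;
};

// Drain the `queued transactions` collection: submit while under the
// in-flight limit and while the next transaction in sequence order is
// available. Stops at a gap, since transactions must go out in order.
std::vector<Queued>
submitReady(
    std::deque<Queued>& queued,  // sorted by seq
    std::uint32_t& nextSeqToSubmit,
    std::size_t& inFlight,
    std::size_t maxInFlight)
{
    std::vector<Queued> submitted;
    while (inFlight < maxInFlight && !queued.empty() &&
           queued.front().seq == nextSeqToSubmit)
    {
        submitted.push_back(queued.front());
        queued.pop_front();
        ++inFlight;
        ++nextSeqToSubmit;
    }
    return submitted;
}
```

Submitted transactions leave the queue but stay in `pending transactions` until a result is observed, which is when `inFlight` would be decremented.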
## Handling Response Transaction Results
Response transaction results are used to know when a response transaction has
been handled and can be removed from the `pending transactions` collection. They
are also used to issue refunds when a response transaction fails under some
circumstances.
If a transaction fails with anything other than `tefALREADY`, then a new
response transaction is created that refunds some portion of the original
amount to the original sending account. Gathering signatures for this refund
transaction works the same as for other triggering transactions. Much like
sending an asset cross-chain, transactions that trigger refunds and their
response transactions happen on different chains. This means we can use the same
algorithm to assign sequence numbers to the response transaction.
If a transaction fails with `tefALREADY`, that means another federator already
submitted the transaction. Ignore this error.
There is also a timer that checks for transactions that have not had results for
too long of a time. If they are detected, an error is logged, but the prototype
does not attempt to handle the error further.
## Assigning sequence numbers
A federator keeps a variable that gives the next sequence number to assign to a
response transaction. When a new triggering transaction is detected, this
sequence number is given to the response transaction and incremented. How the
initial value is assigned is described in the [Getting a federator in
sync](#getting-a-federator-in-sync) section. How an incorrect sequence number is
corrected is described in the [Adding a signature to Pending
Transactions](#adding-a-signature-to-pending-transactions) section.
## Assigning fees
Side chain fees are burned, so this balance can never be redeemed through normal
cross chain transactions. If we wanted, these burned fees could be made
available to the federators by withdrawing XRP from the main chain account.
Given that these fees are burned and are (in effect) paid by the account doing a
cross chain transaction, I propose these fees be set on startup and kept
constant. Initial implementations will use a 20 drop fee.
## Specifying a cross-chain destination address
When sending a cross chain payment, the destination on the originating chain is
the special "door" account controlled by the federators. How does the user
specify what the destination address on the side chain should be? There are two
ways:
1) Specify the destination as a memo.
2) Assign tags to destinations.
The memo field can always be used to specify a destination. In addition, when a
new account is created, a new mapping is assigned between a tag and an address.
With a main to side transaction, the new tag will map to the newly created side
chain account.
The "tags" scheme is not yet designed. For now, the implementation will always
use the memo field to specify the destination address.
If an account sends an asset to the door account without specifying an address, a
refund will be issued to the sending account.
## Setting up the door accounts
The root account on the side chain is used as the door account. The door account
on the main chain is just a regular account. The following transactions must be
sent to these accounts before running the federators:
* `SignerListSet`: Since the federators will jointly control these accounts, a
`SignerListSet` transaction must be sent to both the main chain account and
the side chain account. The signer list should consist of the federator's
public signing keys and should match the keys specified in the config file.
The quorum should be set to 80% of the federators on the list (i.e. for five
federators, set this to 4).
The federators use tickets to handle unusual situations. For example, if the
federators fall too far behind they will disallow new cross chain transactions
until they catch up. Three tickets are needed, and three transactions are needed
to create the tickets (since they use the source tag as a way to set the purpose
for the ticket).
* `Ticket`: Send a `Ticket` transaction with the source tag of `1` to both the
main chain account and side chain account.
* `Ticket`: Send a `Ticket` transaction with the source tag of `2` to both the
main chain account and side chain account.
* `Ticket`: Send a `Ticket` transaction with the source tag of `3` to both the
main chain account and side chain account.
* `TrustSet`: If the cross chain transactions involve issued assets (IOUs), set
up the trust lines by sending a `TrustSet` transaction to the appropriate
accounts. If the cross chain transactions only involve XRP, this is not
needed.
* `AccountSet`: Disable the master key with an `AccountSet` transaction. This
ensures that nothing except the federators (as a group) control these
accounts. Send this transaction to both the main chain account and side chain
account.
*Important*: The `AccountSet` transaction that disables the master key *must* be
the last transaction. The federator's initialization code uses this to
distinguish transactions that are part of setup from other transactions.
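The 80% quorum rule mentioned above can be computed as follows. This is one plausible reading of the rule (the document only pins down the five-federator case, where the quorum is 4; rounding up for other sizes is an assumption):

```c++
#include <cstdint>

// Quorum = 80% of the signer list, rounded up.
// E.g. five federators -> 4, as in the example above.
std::uint32_t
quorumFor(std::uint32_t nFederators)
{
    // ceil(0.8 * n) in integer arithmetic
    return (nFederators * 4 + 4) / 5;
}
```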
## Handling a crashed main chain server
The side chain depends on a websocket connection to a main chain server to
listen for transactions. A fail-over option should be added so that a federator
can automatically connect to other main chain servers if the one it is
currently connected to goes down.
## Federator throughput
On average, federators must process cross chain transactions faster than they
occur. If there are 10 cross chain transactions per ledger, but the federators
can only sign and submit 5 response transactions per ledger, the federators will
keep falling farther behind and will never catch up. Monitoring the size of the
queues can detect this situation, but there is no good remedy for it; the rate
of cross chain transactions is out of the federators' control.
If this situation occurs, new transactions to the door account will be disabled
with a "deposit auth" transaction until the federators catch up. Because this
transaction is not in response to a triggering
transaction on the "opposite" chain, assigning a sequence number for the
"deposit auth" transaction is more involved. We use the protocol
in [Handling an unordered event](#Handling-an-unordered-event) section to submit
the transaction.
## Handling an unordered event
Cross chain payment transactions from one chain are sorted by that chain's
consensus protocol. Each transaction results in one payment transaction in the
destination chain, hence consuming one sequence number of the door account in
the destination chain. Starting from the same initial sequence number assigned
when the account was created, and processing the same stream of payment
transactions, the federators agree on which sequence number to use for a given
payment transaction without communication.
From time to time, however, federators have to create transactions to process
unordered events, such as temporarily "disabling" a door account with a "deposit
auth" AccountSet transaction, or updating the door accounts' signer lists with
SignerListSet transactions. Assigning sequence numbers to these transactions is
more involved than for payment transactions, because these events are not sorted
amongst themselves nor with payment transactions. Since they are not sorted,
there is a chance that different federators use different sequence numbers. If
different sequence numbers were used for a transaction, this transaction and
(depending on the design) some transactions following it won't be processed. So
for these transactions, tickets are used to assign sequence numbers. Tickets are
reserved when the side chain first starts (on genesis), and tickets are
replenished as they are used.
Our first ticket based protocol used tickets for both the transaction that
handles an unordered event and the "TicketCreate" transaction to create new
tickets. It was simple but had two issues: the sequence numbers allocated to
renewed tickets could overlap with the sequence numbers used for payment
transactions occurring at the same time, so those payment transactions had to be
modified, re-signed and resubmitted; and there was a small chance that the
payment transactions would be delivered to the destination chain out of order.
Our current design grew out of the first design. We use a pre-allocated ticket
pair, one main chain ticket and one side chain ticket, to submit "no-op"
transactions to both chains. Once these are processed by the chains' consensus
and sorted with payment transactions, in later rounds of the protocol both the
"TicketCreate" transactions that create new tickets and the transaction(s) that
handle an unordered event use real sequence numbers instead of tickets. Hence we
avoid both issues of the first design. The current design is shown in the
diagram below.
The top portion of the diagram shows the (simplified) payment processing
sequence. Note that the only use of the side chain sequence numbers is for
processing payments already sorted by main chain consensus, and vice versa. The
bottom portion of the diagram shows the 3-round protocol for processing
unordered events (rounds 2 and 3 could be merged). Note that the side chain
sequence numbers still have a single use, i.e. processing transactions already
sorted by main chain consensus, and vice versa.
In more detail, the first round of the protocol uses a ticket pair to send no-op
transactions to both of the chains. In the second round, the inclusion of the
no-op transactions in the ledgers means they are sorted together with other
transactions. Since they are sorted, the sequence numbers of their corresponding
next-round transactions (TicketCreate) are agreed on by the federators. The
federators can also predict the ticket numbers that will be allocated once the
TicketCreate transactions are processed by consensus, and hence will not use
those sequence numbers for other purposes. In the final round of the protocol,
the new tickets are created and the ticket pair is refilled, and the
transaction(s) that handle the unordered event can take a sequence number and be
submitted to the destination chain.
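The three rounds can be summarized in code. This is a hypothetical sketch (the names are illustrations); the essential point is which rounds consume a reserved ticket versus an ordinary sequence number.

```c++
#include <string>
#include <vector>

// Hypothetical sketch of the three rounds for handling an unordered event.
struct Round
{
    std::string txn;
    bool usesTicket;  // true: reserved ticket; false: real sequence number
};

std::vector<Round>
unorderedEventRounds()
{
    return {
        // Round 1: burn the reserved ticket pair with no-op txns so the
        // following steps get sorted by consensus with payment transactions.
        {"no-op AccountSet", true},
        // Round 2: once sorted, TicketCreate can use an agreed sequence
        // number; it replenishes the reserved ticket pair.
        {"TicketCreate", false},
        // Round 3: the transaction handling the unordered event also uses
        // an agreed sequence number.
        {"event txn (e.g. deposit auth)", false},
    };
}
```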
This protocol can be used for one-sided events, such as temporarily "disabling"
the main chain account, or two-sided events, such as updating the signer lists
of both door accounts. To avoid a race resulting from multiple events competing
for the same ticket pair, every ticket pair has a "purpose" so that the pair can
only be used for one type of event. Currently three purposes are implemented or
planned.
![Handle unordered events](./ticketsAndSeq.png "Handle unordered events")
## Federator as a separate program
For the prototype, the federators will be validators on the side chain. However,
a federator can be independent from both chains. The federator can listen for
side chain transactions with a websocket, just like it does for the main chain.
The `last submitted txn` and `last confirmed txn` values can be kept by the
federators themselves.
The biggest advantage to combining a federator and a validator is to re-use the
overlay layer. This saves on implementation time in the prototype. Longer term,
it makes sense to separate a federator and a validator.
# Config file changes
See [this](configFile.md) document for the config file stanzas used to support
side chains.
# New ledger objects
Notice that side chains do not require any new ledger objects, and do not
require federators to communicate in order to agree on transaction values.
# New RPC commands
* "account_history_tx_stream" is used to get historic transactions
when syncing and get new transactions once synced.
* "Federator info" is used to get information about the state of a federator,
including its sync state and the state of its transaction queues.

## Introduction
Side chain federators work by controlling an account on the main chain and an
account on the side chain. The account on the side chain is the root account.
The account on the main chain is specified in the configuration file (See
[configFile.md](docs/sidechain/configFile.md) for the new configuration file
stanzas).
The test scripts will set up these accounts for you when running a test network
on your local machine (see the functions `setup_mainchain` and `setup_sidechain`
in the sidechain.py module). This document describes what's needed to set up
these accounts if not using the scripts.
## Transactions
* `SignerListSet`: Since the federators will jointly control these accounts, a
`SignerListSet` transaction must be sent to both the main chain account and
the side chain account. The signer list should consist of the federator's
public signing keys and should match the keys specified in the config file.
The quorum should be set to 80% of the federators on the list (i.e. for five
federators, set this to 4).
The federators use tickets to handle unusual situations. For example, if the
federators fall too far behind they will disallow new cross chain transactions
until they catch up. Three tickets are needed, and three transactions are needed
to create the tickets (since they use the source tag as a way to set the purpose
for the ticket).
* `Ticket`: Send a `Ticket` transaction with the source tag of `1` to both the
main chain account and side chain account.
* `Ticket`: Send a `Ticket` transaction with the source tag of `2` to both the
main chain account and side chain account.
* `Ticket`: Send a `Ticket` transaction with the source tag of `3` to both the
main chain account and side chain account.
* `TrustSet`: If the cross chain transactions involve issued assets (IOUs), set
up the trust lines by sending a `TrustSet` transaction to the appropriate
accounts. If the cross chain transactions only involve XRP, this is not
needed.
* `AccountSet`: Disable the master key with an `AccountSet` transaction. This
ensures that nothing except the federators (as a group) controls these
accounts. Send this transaction to both the main chain account and side chain
account.
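The three ticket transactions above can be sketched as XRPL transaction JSON. This is an illustrative sketch only: the account address is a placeholder, and on-ledger tickets are created with the `TicketCreate` transaction type.

```python
DOOR_ACCOUNT = "rDoorAccountPlaceholder"  # placeholder; use your door account

# One TicketCreate per source tag, sent to both the main chain and the
# side chain door accounts (six submissions in total).
bootstrap_tickets = [
    {
        "TransactionType": "TicketCreate",
        "Account": DOOR_ACCOUNT,
        "TicketCount": 1,
        "SourceTag": tag,  # the federators use the tag to assign a purpose
    }
    for tag in (1, 2, 3)
]
```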
*Important*: The `AccountSet` transaction that disables the master key *must* be
the last transaction. The federator's initialization code uses it to
distinguish setup transactions from all other transactions.
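The milestone rule can be sketched as follows. This is a simplified illustration, not the federator's actual code; `asfDisableMaster` is `4` in the XRPL protocol.

```python
ASF_DISABLE_MASTER = 4  # AccountSet SetFlag value for asfDisableMaster

def split_setup_txns(txns):
    """Split a door account's transaction history into (setup, rest).

    Everything up to and including the AccountSet that disables the master
    key is treated as setup, mirroring the rule described above.
    """
    for i, tx in enumerate(txns):
        if (tx.get("TransactionType") == "AccountSet"
                and tx.get("SetFlag") == ASF_DISABLE_MASTER):
            return txns[: i + 1], txns[i + 1:]
    return txns, []
```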

(Binary image file added, 182 KiB; not shown.)


@@ -0,0 +1,76 @@
@startuml
participant "main chain network" as mc #LightGreen
box "Federator"
participant "**main** chain\ndoor account listener" as mdl
participant "**main** chain\nsignature collector" as msc
participant "**main** chain\nsequence number" as msn #LightBlue
participant "ticket pair" as t #LightBlue
participant "unordered event" as ue #LightCoral
participant "**side** chain\nsequence number" as ssn #LightBlue
participant "**side** chain\nsignature collector" as ssc
participant "**side** chain\ndoor account listener" as sdl
end box
participant "side chain network" as sc #LightGreen
actor "federator admin" as fa #LightGreen
== payments ==
group cross chain payment to side chain
mc -> mdl: payment tx to door account\nin ledger
mdl -> ssn: side chain door account payment tx created
ssn -> ssc: sequence number filled
ssc -> ssc: payment tx signed,\ncollect signatures
ssc -> sc : with quorum signatures\nsubmit tx to network
end
group cross chain payment to main chain
sc -> sdl: payment tx to door account\nin ledger
sdl -> msn: main chain door account payment tx created
msn -> msc: sequence number filled
msc -> msc: payment tx signed,\ncollect signatures
msc -> mc : with quorum signatures\nsubmit tx to network
end
== unordered events ==
group round 1
fa -> ue : misc request from admin\nor federator internal event\nE.g. close main chain door account due to high load
ue -> t : **two** no-op AccountSet txns created\n(to trigger ticketCreate txns in round 2)
activate t
t -> msc: ticket number filled
t -> ssc: ticket number filled
deactivate t
msc -> msc: no-op AccountSet tx signed,\ncollect signatures
ssc -> ssc: no-op AccountSet tx signed,\ncollect signatures
msc -> mc : with quorum signatures\nsubmit tx to network
ssc -> sc : with quorum signatures\nsubmit tx to network
end
group round 2
'== unordered event, round 2 ==
mc -> mdl: no-op AccountSet in ledger
sc -> sdl: no-op AccountSet in ledger
mdl -> ssn: create side chain door account ticketCreate tx\nto allocate side chain door account ticket\nto refill ticket pair
sdl -> msn: create main chain door account ticketCreate tx\nto allocate main chain door account ticket\nto refill ticket pair
ssn -> ssc: sequence number filled
msn -> msc: sequence number filled
ssc -> ssc: ticketCreate tx signed,\ncollect signatures
msc -> msc: ticketCreate tx signed,\ncollect signatures
ssc -> sc : with quorum signatures\nsubmit tx to network
msc -> mc : with quorum signatures\nsubmit tx to network
end
group round 3
'== unordered event, round 3 ==
mc -> mdl: ticketCreate in ledger
mdl -> t : refill
sc -> sdl: ticketCreate in ledger
activate sdl
sdl -> t : refill
sdl -> msn: main chain deposit-auth AccountSet created
note left: assuming the unordered event is to\nclose main chain door account\nto block new payments temporarily
deactivate sdl
msn -> msc: sequence number filled
msc -> msc: deposit-auth AccountSet tx signed,\ncollect signatures
msc -> mc : with quorum signatures\nsubmit tx to network
end
@enduml


@@ -48,6 +48,7 @@
#include <ripple/app/rdb/RelationalDBInterface_global.h>
#include <ripple/app/rdb/backend/RelationalDBInterfacePostgres.h>
#include <ripple/app/reporting/ReportingETL.h>
#include <ripple/app/sidechain/Federator.h>
#include <ripple/app/tx/apply.h>
#include <ripple/basics/ByteUtilities.h>
#include <ripple/basics/PerfLog.h>
@@ -215,6 +216,8 @@ public:
RCLValidations mValidations;
std::unique_ptr<LoadManager> m_loadManager;
std::unique_ptr<TxQ> txQ_;
std::shared_ptr<sidechain::Federator> sidechainFederator_;
ClosureCounter<void, boost::system::error_code const&> waitHandlerCounter_;
boost::asio::steady_timer sweepTimer_;
boost::asio::steady_timer entropyTimer_;
@@ -519,6 +522,8 @@ public:
checkSigs(bool) override;
bool
isStopping() const override;
void
startFederator() override;
int
fdRequired() const override;
@@ -889,6 +894,12 @@ public:
return *mWalletDB;
}
std::shared_ptr<sidechain::Federator>
getSidechainFederator() override
{
return sidechainFederator_;
}
ReportingETL&
getReportingETL() override
{
@@ -1057,6 +1068,8 @@ public:
ledgerCleaner_->stop();
if (reportingETL_)
reportingETL_->stop();
if (sidechainFederator_)
sidechainFederator_->stop();
if (auto pg = dynamic_cast<RelationalDBInterfacePostgres*>(
&*mRelationalDBInterface))
pg->stop();
@@ -1163,6 +1176,8 @@ public:
getLedgerReplayer().sweep();
m_acceptedLedgerCache.sweep();
cachedSLEs_.sweep();
if (sidechainFederator_)
sidechainFederator_->sweep();
#ifdef RIPPLED_REPORTING
if (auto pg = dynamic_cast<RelationalDBInterfacePostgres*>(
@@ -1597,6 +1612,15 @@ ApplicationImp::setup()
if (reportingETL_)
reportingETL_->start();
sidechainFederator_ = sidechain::make_Federator(
*this,
get_io_service(),
*config_,
logs_->journal("SidechainFederator"));
if (sidechainFederator_)
sidechainFederator_->start();
return true;
}
@@ -1621,6 +1645,7 @@ ApplicationImp::start(bool withTimers)
grpcServer_->start();
ledgerCleaner_->start();
perfLog_->start();
startFederator();
}
void
@@ -1678,6 +1703,14 @@ ApplicationImp::isStopping() const
return isTimeToStop;
}
void
ApplicationImp::startFederator()
{
if (sidechainFederator_)
sidechainFederator_->unlockMainLoop(
sidechain::Federator::UnlockMainLoopKey::app);
}
int
ApplicationImp::fdRequired() const
{


@@ -50,6 +50,10 @@ namespace RPC {
class ShardArchiveHandler;
}
namespace sidechain {
class Federator;
}
// VFALCO TODO Fix forward declares required for header dependency loops
class AmendmentTable;
@@ -150,6 +154,9 @@ public:
virtual bool
isStopping() const = 0;
virtual void
startFederator() = 0;
//
// ---
//
@@ -274,6 +281,9 @@ public:
virtual DatabaseCon&
getWalletDB() = 0;
virtual std::shared_ptr<sidechain::Federator>
getSidechainFederator() = 0;
/** Ensure that a newly-started validator does not sign proposals older
* than the last ledger it persisted. */
virtual LedgerIndex


@@ -20,6 +20,7 @@
#include <ripple/app/main/Application.h>
#include <ripple/app/main/DBInit.h>
#include <ripple/app/rdb/RelationalDBInterface_global.h>
#include <ripple/app/sidechain/Federator.h>
#include <ripple/basics/Log.h>
#include <ripple/basics/StringUtilities.h>
#include <ripple/basics/contract.h>

File diff suppressed because it is too large.


@@ -0,0 +1,430 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_SIDECHAIN_FEDERATOR_H_INCLUDED
#define RIPPLE_SIDECHAIN_FEDERATOR_H_INCLUDED
#include <ripple/app/sidechain/FederatorEvents.h>
#include <ripple/app/sidechain/impl/DoorKeeper.h>
#include <ripple/app/sidechain/impl/MainchainListener.h>
#include <ripple/app/sidechain/impl/SidechainListener.h>
#include <ripple/app/sidechain/impl/SignatureCollector.h>
#include <ripple/app/sidechain/impl/SignerList.h>
#include <ripple/app/sidechain/impl/TicketHolder.h>
#include <ripple/basics/Buffer.h>
#include <ripple/basics/ThreadSaftyAnalysis.h>
#include <ripple/basics/UnorderedContainers.h>
#include <ripple/basics/base_uint.h>
#include <ripple/beast/utility/Journal.h>
#include <ripple/core/Config.h>
#include <ripple/json/json_value.h>
#include <ripple/protocol/AccountID.h>
#include <ripple/protocol/Quality.h>
#include <ripple/protocol/STAmount.h>
#include <ripple/protocol/SecretKey.h>
#include <boost/asio.hpp>
#include <boost/asio/deadline_timer.hpp>
#include <boost/container/flat_map.hpp>
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <optional>
#include <thread>
#include <unordered_map>
#include <variant>
#include <vector>
namespace ripple {
class Application;
class STTx;
namespace sidechain {
class Federator : public std::enable_shared_from_this<Federator>
{
public:
enum ChainType { sideChain, mainChain };
constexpr static size_t numChains = 2;
enum class UnlockMainLoopKey { app, mainChain, sideChain };
constexpr static size_t numUnlockMainLoopKeys = 3;
// These enums are encoded in the transaction. Changing the order will break
// backward compatibility. If a new type is added, change txnTypeLast.
enum class TxnType { xChain, refund };
constexpr static std::uint8_t txnTypeLast = 2;
static constexpr std::uint32_t accountControlTxFee{1000};
private:
// Tag so make_Federator can call `std::make_shared`
class PrivateTag
{
};
friend std::shared_ptr<Federator>
make_Federator(
Application& app,
boost::asio::io_service& ios,
BasicConfig const& config,
beast::Journal j);
std::thread thread_;
bool running_ = false;
std::atomic<bool> requestStop_ = false;
Application& app_;
std::array<AccountID, numChains> const account_;
std::array<std::atomic<std::uint32_t>, numChains> accountSeq_{1, 1};
std::array<std::atomic<std::uint32_t>, numChains> lastTxnSeqSent_{0, 0};
std::array<std::atomic<std::uint32_t>, numChains> lastTxnSeqConfirmed_{
0,
0};
std::array<std::atomic<bool>, numUnlockMainLoopKeys> unlockMainLoopKeys_{
false,
false,
false};
std::shared_ptr<MainchainListener> mainchainListener_;
std::shared_ptr<SidechainListener> sidechainListener_;
mutable std::mutex eventsMutex_;
std::vector<FederatorEvent> GUARDED_BY(eventsMutex_) events_;
// When a user account sends an asset to the account controlled by the
// federator, the asset to be issued on the other chain is determined by the
// `assetProps` maps - one for each chain. The asset to be issued is
// `issue`, the amount of the asset to issue is determined by `quality`
// (ratio of output amount/input amount). When issuing refunds, the
// `refundPenalty` is subtracted from the sent amount before sending the
// refund.
struct OtherChainAssetProperties
{
Quality quality;
Issue issue;
STAmount refundPenalty;
};
std::array<
boost::container::flat_map<Issue, OtherChainAssetProperties>,
numChains> const assetProps_;
PublicKey signingPK_;
SecretKey signingSK_;
// federator signing public keys
mutable std::mutex federatorPKsMutex_;
hash_set<PublicKey> GUARDED_BY(federatorPKsMutex_) federatorPKs_;
SignerList mainSignerList_;
SignerList sideSignerList_;
SignatureCollector mainSigCollector_;
SignatureCollector sideSigCollector_;
TicketRunner ticketRunner_;
DoorKeeper mainDoorKeeper_;
DoorKeeper sideDoorKeeper_;
boost::posix_time::seconds const heartbeatInterval{5};
std::unique_ptr<boost::asio::deadline_timer> heartbeatTimer;
struct PeerTxnSignature
{
Buffer sig;
std::uint32_t seq;
};
struct SequenceInfo
{
// Number of signatures at this sequence number
std::uint32_t count{0};
// Serialization of the transaction for everything except the signature
// id (which varies for each signature). This can be used to verify one
// of the signatures in a multisig.
Blob partialTxnSerialization;
};
struct PendingTransaction
{
STAmount amount;
AccountID srcChainSrcAccount;
AccountID dstChainDstAccount;
// Key is the federator's public key
hash_map<PublicKey, PeerTxnSignature> sigs;
// Key is a sequence number
hash_map<std::uint32_t, SequenceInfo> sequenceInfo;
// True if the transaction was ever put into the toSendTxns_ queue
bool queuedToSend_{false};
};
// Key is the hash of the triggering transaction
mutable std::mutex pendingTxnsM_;
hash_map<uint256, PendingTransaction> GUARDED_BY(pendingTxnsM_)
pendingTxns_[numChains];
mutable std::mutex toSendTxnsM_;
// Signed transactions ready to send
// Key is the transaction's sequence number. The transactions must be sent
// in the correct order. If the next transaction the account needs to send
// has a sequence number of N, the transaction with sequence N+1 can't be
// sent just because it collected signatures first.
std::map<std::uint32_t, STTx> GUARDED_BY(toSendTxnsM_)
toSendTxns_[numChains];
std::set<std::uint32_t> GUARDED_BY(toSendTxnsM_) toSkipSeq_[numChains];
// Use a condition variable to prevent busy waiting when the queue is
// empty
mutable std::mutex m_;
std::condition_variable cv_;
// prevent the main loop from starting until explicitly told to run.
// This is used to allow bootstrap code to run before any events are
// processed
mutable std::mutex mainLoopMutex_;
bool mainLoopLocked_{true};
std::condition_variable mainLoopCv_;
beast::Journal j_;
static std::array<
boost::container::flat_map<Issue, OtherChainAssetProperties>,
numChains>
makeAssetProps(BasicConfig const& config, beast::Journal j);
public:
// Constructor should be private, but needs to be public so
// `make_shared` can use it
Federator(
PrivateTag,
Application& app,
SecretKey signingKey,
hash_set<PublicKey>&& federators,
boost::asio::ip::address mainChainIp,
std::uint16_t mainChainPort,
AccountID const& mainAccount,
AccountID const& sideAccount,
std::array<
boost::container::flat_map<Issue, OtherChainAssetProperties>,
numChains>&& assetProps,
beast::Journal j);
~Federator();
void
start();
void
stop() EXCLUDES(m_);
void
push(FederatorEvent&& e) EXCLUDES(m_, eventsMutex_);
// Don't process any events until the bootstrap has a chance to run
void
unlockMainLoop(UnlockMainLoopKey key) EXCLUDES(m_);
void
addPendingTxnSig(
TxnType txnType,
ChainType chaintype,
PublicKey const& federatorPk,
uint256 const& srcChainTxnHash,
std::optional<uint256> const& dstChainTxnHash,
STAmount const& amt,
AccountID const& srcChainSrcAccount,
AccountID const& dstChainDstAccount,
std::uint32_t seq,
Buffer&& sig) EXCLUDES(federatorPKsMutex_, pendingTxnsM_, toSendTxnsM_);
void
addPendingTxnSig(
ChainType chaintype,
PublicKey const& publicKey,
uint256 const& mId,
Buffer&& sig);
// Return true if a transaction with this sequence has already been sent
bool
alreadySent(ChainType chaintype, std::uint32_t seq) const;
void
setLastXChainTxnWithResult(
ChainType chaintype,
std::uint32_t seq,
std::uint32_t seqTook,
uint256 const& hash);
void
setNoLastXChainTxnWithResult(ChainType chaintype);
void
stopHistoricalTxns(ChainType chaintype);
void
initialSyncDone(ChainType chaintype);
// Get stats on the federator, including pending transactions and
// initialization state
Json::Value
getInfo() const EXCLUDES(pendingTxnsM_);
void
sweep();
SignatureCollector&
getSignatureCollector(ChainType chain);
DoorKeeper&
getDoorKeeper(ChainType chain);
TicketRunner&
getTicketRunner();
void
addSeqToSkip(ChainType chain, std::uint32_t seq) EXCLUDES(toSendTxnsM_);
// TODO multi-sig refactor?
void
addTxToSend(ChainType chain, std::uint32_t seq, STTx const& tx)
EXCLUDES(toSendTxnsM_);
// Set the accountSeq to the max of the current value and the requested
// value. This is done with a lock free algorithm.
void
setAccountSeqMax(ChainType chaintype, std::uint32_t reqValue);
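    // Illustrative sketch (not this class's actual implementation): such a
    // "lock free" max update is typically a compare-exchange loop over the
    // atomic sequence number, e.g.
    //
    //   std::uint32_t cur = seq.load();
    //   while (cur < reqValue && !seq.compare_exchange_weak(cur, reqValue))
    //   {
    //       // compare_exchange_weak reloads `cur` on failure, so the loop
    //       // exits once the stored value is >= reqValue.
    //   }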
private:
// Two-phase init needed for shared_from_this.
// Only called from `make_Federator`
void
init(
boost::asio::io_service& ios,
boost::asio::ip::address& ip,
std::uint16_t port,
std::shared_ptr<MainchainListener>&& mainchainListener,
std::shared_ptr<SidechainListener>&& sidechainListener);
void
heartbeatTimerHandler(const boost::system::error_code& ec);
// Convert the asset on the src chain to the asset on the other
// chain. The `assetProps_` array controls how this conversion is done.
// An empty optional is returned if the source issue is not part of the map
// in the `assetProps_` array.
[[nodiscard]] std::optional<STAmount>
toOtherChainAmount(ChainType srcChain, STAmount const& from) const;
// Set the lastTxnSeqSent to the max of the current value and the requested
// value. This is done with a lock free algorithm.
void
setLastTxnSeqSentMax(ChainType chaintype, std::uint32_t reqValue);
void
setLastTxnSeqConfirmedMax(ChainType chaintype, std::uint32_t reqValue);
mutable std::mutex sendTxnsMutex_;
void
sendTxns() EXCLUDES(sendTxnsMutex_, toSendTxnsM_);
void
mainLoop() EXCLUDES(mainLoopMutex_);
void
payTxn(
TxnType txnType,
ChainType dstChain,
STAmount const& amt,
// srcChainSrcAccount is the original sending account in a cross chain
// transaction. Note, for refunds, the srcChainSrcAccount and the dst
// will be the same.
AccountID const& srcChainSrcAccount,
AccountID const& dst,
uint256 const& srcChainTxnHash,
std::optional<uint256> const& dstChainTxnHash);
// Issue a refund to the destination account. Refunds may be issued when a
// cross chain transaction fails on the destination chain. In this case, the
// funds will already be locked on one chain, but can not be completed on
// the other chain. Note that refunds may not be for the full amount sent.
// In effect, not refunding the full amount charges a fee to discourage
// abusing refunds to try to overload the system.
void
sendRefund(
ChainType chaintype,
STAmount const& amt,
AccountID const& dst,
uint256 const& txnHash,
uint256 const& triggeringResultTxnHash);
void
onEvent(event::XChainTransferDetected const& e);
void
onEvent(event::XChainTransferResult const& e) EXCLUDES(pendingTxnsM_);
void
onEvent(event::RefundTransferResult const& e) EXCLUDES(pendingTxnsM_);
void
onEvent(event::HeartbeatTimer const& e);
void
onEvent(event::TicketCreateTrigger const& e);
void
onEvent(event::TicketCreateResult const& e);
void
onEvent(event::DepositAuthResult const& e);
void
onEvent(event::BootstrapTicket const& e);
void
onEvent(event::DisableMasterKeyResult const& e);
// void
// onEvent(event::SignerListSetResult const& e);
void
updateDoorKeeper(ChainType chainType) EXCLUDES(pendingTxnsM_);
void
onResult(ChainType chainType, std::uint32_t resultTxSeq);
};
[[nodiscard]] std::shared_ptr<Federator>
make_Federator(
Application& app,
boost::asio::io_service& ios,
BasicConfig const& config,
beast::Journal j);
// Id used for message suppression
[[nodiscard]] uint256
crossChainTxnSignatureId(
PublicKey signingPK,
uint256 const& srcChainTxnHash,
std::optional<uint256> const& dstChainTxnHash,
STAmount const& amt,
AccountID const& src,
AccountID const& dst,
std::uint32_t seq,
Slice const& signature);
[[nodiscard]] Federator::ChainType
srcChainType(event::Dir dir);
[[nodiscard]] Federator::ChainType
dstChainType(event::Dir dir);
[[nodiscard]] Federator::ChainType
otherChainType(Federator::ChainType ct);
[[nodiscard]] Federator::ChainType
getChainType(bool isMainchain);
uint256
computeMessageSuppression(uint256 const& mId, Slice const& signature);
} // namespace sidechain
} // namespace ripple
#endif


@@ -0,0 +1,325 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <ripple/app/sidechain/FederatorEvents.h>
#include <string_view>
#include <type_traits>
namespace ripple {
namespace sidechain {
namespace event {
namespace {
std::string const&
to_string(Dir dir)
{
switch (dir)
{
case Dir::mainToSide: {
static std::string const r("main");
return r;
}
case Dir::sideToMain: {
static std::string const r("side");
return r;
}
}
// Some compilers will warn about not returning from all control paths
// without this, but this code will never execute.
assert(0);
static std::string const error("error");
return error;
}
std::string const&
to_string(AccountFlagOp op)
{
switch (op)
{
case AccountFlagOp::set: {
static std::string const r("set");
return r;
}
case AccountFlagOp::clear: {
static std::string const r("clear");
return r;
}
}
// Some compilers will warn about not returning from all control paths
// without this, but this code will never execute.
assert(0);
static std::string const error("error");
return error;
}
} // namespace
EventType
XChainTransferDetected::eventType() const
{
return EventType::trigger;
}
Json::Value
XChainTransferDetected::toJson() const
{
Json::Value result{Json::objectValue};
result["eventType"] = "XChainTransferDetected";
result["src"] = toBase58(src_);
result["dst"] = toBase58(dst_);
result["deliveredAmt"] = deliveredAmt_.getJson(JsonOptions::none);
result["txnSeq"] = txnSeq_;
result["txnHash"] = to_string(txnHash_);
result["rpcOrder"] = rpcOrder_;
return result;
}
EventType
HeartbeatTimer::eventType() const
{
return EventType::heartbeat;
}
Json::Value
HeartbeatTimer::toJson() const
{
Json::Value result{Json::objectValue};
result["eventType"] = "HeartbeatTimer";
return result;
}
EventType
XChainTransferResult::eventType() const
{
return EventType::result;
}
Json::Value
XChainTransferResult::toJson() const
{
Json::Value result{Json::objectValue};
result["eventType"] = "XChainTransferResult";
result["dir"] = to_string(dir_);
result["dst"] = toBase58(dst_);
if (deliveredAmt_)
result["deliveredAmt"] = deliveredAmt_->getJson(JsonOptions::none);
result["txnSeq"] = txnSeq_;
result["srcChainTxnHash"] = to_string(srcChainTxnHash_);
result["txnHash"] = to_string(txnHash_);
result["ter"] = transHuman(ter_);
result["rpcOrder"] = rpcOrder_;
return result;
}
EventType
RefundTransferResult::eventType() const
{
return EventType::result;
}
Json::Value
RefundTransferResult::toJson() const
{
Json::Value result{Json::objectValue};
result["eventType"] = "RefundTransferResult";
result["dir"] = to_string(dir_);
result["dst"] = toBase58(dst_);
if (deliveredAmt_)
result["deliveredAmt"] = deliveredAmt_->getJson(JsonOptions::none);
result["txnSeq"] = txnSeq_;
result["srcChainTxnHash"] = to_string(srcChainTxnHash_);
result["dstChainTxnHash"] = to_string(dstChainTxnHash_);
result["txnHash"] = to_string(txnHash_);
result["ter"] = transHuman(ter_);
result["rpcOrder"] = rpcOrder_;
return result;
}
EventType
TicketCreateTrigger::eventType() const
{
return EventType::trigger;
}
Json::Value
TicketCreateTrigger::toJson() const
{
Json::Value result{Json::objectValue};
result["eventType"] = "TicketCreateTrigger";
result["dir"] = to_string(dir_);
result["success"] = success_;
result["txnSeq"] = txnSeq_;
result["ledgerIndex"] = ledgerIndex_;
result["txnHash"] = to_string(txnHash_);
result["rpcOrder"] = rpcOrder_;
result["sourceTag"] = sourceTag_;
result["memo"] = memoStr_;
return result;
}
EventType
TicketCreateResult::eventType() const
{
return EventType::resultAndTrigger;
}
Json::Value
TicketCreateResult::toJson() const
{
Json::Value result{Json::objectValue};
result["eventType"] = "TicketCreateResult";
result["dir"] = to_string(dir_);
result["success"] = success_;
result["txnSeq"] = txnSeq_;
result["ledgerIndex"] = ledgerIndex_;
result["srcChainTxnHash"] = to_string(srcChainTxnHash_);
result["txnHash"] = to_string(txnHash_);
result["rpcOrder"] = rpcOrder_;
result["sourceTag"] = sourceTag_;
result["memo"] = memoStr_;
return result;
}
void
TicketCreateResult::removeTrigger()
{
memoStr_.clear();
}
EventType
DepositAuthResult::eventType() const
{
return EventType::result;
}
Json::Value
DepositAuthResult::toJson() const
{
Json::Value result{Json::objectValue};
result["eventType"] = "DepositAuthResult";
result["dir"] = to_string(dir_);
result["success"] = success_;
result["txnSeq"] = txnSeq_;
result["ledgerIndex"] = ledgerIndex_;
result["srcChainTxnHash"] = to_string(srcChainTxnHash_);
result["rpcOrder"] = rpcOrder_;
result["op"] = to_string(op_);
return result;
}
EventType
SignerListSetResult::eventType() const
{
return EventType::result;
}
Json::Value
SignerListSetResult::toJson() const
{
Json::Value result{Json::objectValue};
result["eventType"] = "SignerListSetResult";
return result;
}
EventType
BootstrapTicket::eventType() const
{
return EventType::bootstrap;
}
Json::Value
BootstrapTicket::toJson() const
{
Json::Value result{Json::objectValue};
result["eventType"] = "BootstrapTicket";
result["isMainchain"] = isMainchain_;
result["txnSeq"] = txnSeq_;
result["rpcOrder"] = rpcOrder_;
return result;
}
EventType
DisableMasterKeyResult::eventType() const
{
return EventType::result; // TODO change to bootstrap type too?
}
Json::Value
DisableMasterKeyResult::toJson() const
{
Json::Value result{Json::objectValue};
result["eventType"] = "DisableMasterKeyResult";
result["isMainchain"] = isMainchain_;
result["txnSeq"] = txnSeq_;
result["rpcOrder"] = rpcOrder_;
return result;
}
} // namespace event
namespace {
template <typename T, typename = void>
struct hasTxnHash : std::false_type
{
};
template <typename T>
struct hasTxnHash<T, std::void_t<decltype(std::declval<T>().txnHash_)>>
: std::true_type
{
};
template <class T>
inline constexpr bool hasTxnHash_v = hasTxnHash<T>::value;
// Check that the traits work as expected
static_assert(
hasTxnHash_v<event::XChainTransferResult> &&
!hasTxnHash_v<event::HeartbeatTimer>,
"");
} // namespace
std::optional<uint256>
txnHash(FederatorEvent const& event)
{
return std::visit(
[](auto const& e) -> std::optional<uint256> {
if constexpr (hasTxnHash_v<std::decay_t<decltype(e)>>)
{
return e.txnHash_;
}
return std::nullopt;
},
event);
}
event::EventType
eventType(FederatorEvent const& event)
{
return std::visit([](auto const& e) { return e.eventType(); }, event);
}
Json::Value
toJson(FederatorEvent const& event)
{
return std::visit([](auto const& e) { return e.toJson(); }, event);
}
} // namespace sidechain
} // namespace ripple


@@ -0,0 +1,262 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_SIDECHAIN_FEDERATOR_EVENTS_H_INCLUDED
#define RIPPLE_SIDECHAIN_FEDERATOR_EVENTS_H_INCLUDED
#include <ripple/beast/utility/Journal.h>
#include <ripple/json/json_value.h>
#include <ripple/protocol/AccountID.h>
#include <ripple/protocol/Issue.h>
#include <ripple/protocol/STAmount.h>
#include <ripple/protocol/TER.h>
#include <boost/format.hpp>
#include <optional>
#include <sstream>
#include <string>
#include <variant>
namespace ripple {
namespace sidechain {
namespace event {
enum class Dir { sideToMain, mainToSide };
enum class AccountFlagOp { set, clear };
static constexpr std::uint32_t MemoStringMax = 512;
enum class EventType {
bootstrap,
trigger,
result,
resultAndTrigger,
heartbeat,
};
// A cross chain transfer was detected on this federator
struct XChainTransferDetected
{
// direction of the transfer
Dir dir_;
// Src account on the src chain
AccountID src_;
// Dst account on the dst chain
AccountID dst_;
STAmount deliveredAmt_;
std::uint32_t txnSeq_;
uint256 txnHash_;
std::int32_t rpcOrder_;
EventType
eventType() const;
Json::Value
toJson() const;
};
struct HeartbeatTimer
{
EventType
eventType() const;
Json::Value
toJson() const;
};
struct XChainTransferResult
{
// direction is the direction of the triggering transaction.
// I.e. A "mainToSide" transfer result is a transaction that
// happens on the sidechain (the triggering transaction happened on the
// mainchain)
Dir dir_;
AccountID dst_;
std::optional<STAmount> deliveredAmt_;
std::uint32_t txnSeq_;
// Txn hash of the initiating xchain transaction
uint256 srcChainTxnHash_;
// Txn hash of the federator's transaction on the dst chain
uint256 txnHash_;
TER ter_;
std::int32_t rpcOrder_;
EventType
eventType() const;
Json::Value
toJson() const;
};
struct RefundTransferResult
{
// direction is the direction of the triggering transaction.
// I.e. A "mainToSide" refund transfer result is a transaction that
// happens on the mainchain (the triggering transaction happened on the
// mainchain, the failed result happened on the side chain, and the refund
// result happened on the mainchain)
Dir dir_;
AccountID dst_;
std::optional<STAmount> deliveredAmt_;
std::uint32_t txnSeq_;
// Txn hash of the initiating xchain transaction
uint256 srcChainTxnHash_;
// Txn hash of the federator's transaction on the dst chain
uint256 dstChainTxnHash_;
// Txn hash of the refund result
uint256 txnHash_;
TER ter_;
std::int32_t rpcOrder_;
EventType
eventType() const;
Json::Value
toJson() const;
};
struct TicketCreateTrigger
{
Dir dir_;
bool success_;
std::uint32_t txnSeq_;
std::uint32_t ledgerIndex_;
uint256 txnHash_;
std::int32_t rpcOrder_;
std::uint32_t sourceTag_;
std::string memoStr_;
EventType
eventType() const;
Json::Value
toJson() const;
};
struct TicketCreateResult
{
Dir dir_;
bool success_;
std::uint32_t txnSeq_;
std::uint32_t ledgerIndex_;
uint256 srcChainTxnHash_;
uint256 txnHash_;
std::int32_t rpcOrder_;
std::uint32_t sourceTag_;
std::string memoStr_;
EventType
eventType() const;
Json::Value
toJson() const;
void
removeTrigger();
};
struct DepositAuthResult
{
Dir dir_;
bool success_;
std::uint32_t txnSeq_;
std::uint32_t ledgerIndex_;
uint256 srcChainTxnHash_;
std::int32_t rpcOrder_;
AccountFlagOp op_;
EventType
eventType() const;
Json::Value
toJson() const;
};
struct SignerListSetResult
{
// TODO
EventType
eventType() const;
Json::Value
toJson() const;
};
struct BootstrapTicket
{
bool isMainchain_;
bool success_;
std::uint32_t txnSeq_;
std::uint32_t ledgerIndex_;
std::int32_t rpcOrder_;
std::uint32_t sourceTag_;
EventType
eventType() const;
Json::Value
toJson() const;
};
struct DisableMasterKeyResult
{
bool isMainchain_;
std::uint32_t txnSeq_;
std::int32_t rpcOrder_;
EventType
eventType() const;
Json::Value
toJson() const;
};
} // namespace event
using FederatorEvent = std::variant<
event::XChainTransferDetected,
event::HeartbeatTimer,
event::XChainTransferResult,
event::RefundTransferResult,
event::TicketCreateTrigger,
event::TicketCreateResult,
event::DepositAuthResult,
event::BootstrapTicket,
event::DisableMasterKeyResult>;
event::EventType
eventType(FederatorEvent const& event);
Json::Value
toJson(FederatorEvent const& event);
// If the event has a txnHash_ field (all the trigger events), return the hash,
// otherwise return nullopt
std::optional<uint256>
txnHash(FederatorEvent const& event);
} // namespace sidechain
} // namespace ripple
#endif


@@ -0,0 +1,856 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <ripple/app/sidechain/impl/ChainListener.h>
#include <ripple/app/sidechain/Federator.h>
#include <ripple/app/sidechain/FederatorEvents.h>
#include <ripple/app/sidechain/impl/WebsocketClient.h>
#include <ripple/basics/Log.h>
#include <ripple/basics/XRPAmount.h>
#include <ripple/basics/strHex.h>
#include <ripple/json/Output.h>
#include <ripple/json/json_writer.h>
#include <ripple/protocol/AccountID.h>
#include <ripple/protocol/SField.h>
#include <ripple/protocol/STAmount.h>
#include <ripple/protocol/TxFlags.h>
#include <ripple/protocol/jss.h>
#include <type_traits>
namespace ripple {
namespace sidechain {
class Federator;
ChainListener::ChainListener(
IsMainchain isMainchain,
AccountID const& account,
std::weak_ptr<Federator>&& federator,
beast::Journal j)
: isMainchain_{isMainchain == IsMainchain::yes}
, doorAccount_{account}
, doorAccountStr_{toBase58(account)}
, federator_{std::move(federator)}
, initialSync_{std::make_unique<InitialSync>(federator_, isMainchain_, j)}
, j_{j}
{
}
// The destructor must be defined after the WebsocketClient size is known
// (i.e. it cannot be defaulted in the header or the unique_ptr declaration
// of WebsocketClient won't work)
ChainListener::~ChainListener() = default;
std::string const&
ChainListener::chainName() const
{
// Note: If this function is ever changed to return a value instead of a
// ref, review the code to ensure the "jv" functions don't bind to temps
static const std::string m("Mainchain");
static const std::string s("Sidechain");
return isMainchain_ ? m : s;
}
namespace detail {
template <class T>
std::optional<T>
getMemoData(Json::Value const& v, std::uint32_t index) = delete;
template <>
std::optional<uint256>
getMemoData<uint256>(Json::Value const& v, std::uint32_t index)
{
try
{
uint256 result;
if (result.parseHex(
v[jss::Memos][index][jss::Memo][jss::MemoData].asString()))
return result;
}
catch (...)
{
}
return {};
}
template <>
std::optional<uint8_t>
getMemoData<uint8_t>(Json::Value const& v, std::uint32_t index)
{
try
{
auto const hexData =
v[jss::Memos][index][jss::Memo][jss::MemoData].asString();
auto d = hexData.data();
if (hexData.size() != 2)
return {};
auto highNibble = charUnHex(d[0]);
auto lowNibble = charUnHex(d[1]);
if (highNibble < 0 || lowNibble < 0)
return {};
return (highNibble << 4) | lowNibble;
}
catch (...)
{
}
return {};
}
} // namespace detail
template <class E>
void
ChainListener::pushEvent(
E&& e,
int txHistoryIndex,
std::lock_guard<std::mutex> const&)
{
static_assert(std::is_rvalue_reference_v<decltype(e)>, "");
if (initialSync_)
{
auto const hasReplayed = initialSync_->onEvent(std::move(e));
if (hasReplayed)
initialSync_.reset();
}
else if (auto f = federator_.lock(); f && txHistoryIndex >= 0)
{
f->push(std::move(e));
}
}
void
ChainListener::processMessage(Json::Value const& msg)
{
// Even though this lock has a large scope, this function does very little
// processing and should run relatively quickly
std::lock_guard l{m_};
JLOGV(
j_.trace(),
"chain listener message",
jv("msg", msg),
jv("isMainchain", isMainchain_));
if (!msg.isMember(jss::validated) || !msg[jss::validated].asBool())
{
JLOGV(
j_.trace(),
"ignoring listener message",
jv("reason", "not validated"),
jv("msg", msg),
jv("chain_name", chainName()));
return;
}
if (!msg.isMember(jss::engine_result_code))
{
JLOGV(
j_.trace(),
"ignoring listener message",
jv("reason", "no engine result code"),
jv("msg", msg),
jv("chain_name", chainName()));
return;
}
if (!msg.isMember(jss::account_history_tx_index))
{
JLOGV(
j_.trace(),
"ignoring listener message",
jv("reason", "no account history tx index"),
jv("msg", msg),
jv("chain_name", chainName()));
return;
}
if (!msg.isMember(jss::meta))
{
JLOGV(
j_.trace(),
"ignoring listener message",
jv("reason", "tx meta"),
jv("msg", msg),
jv("chain_name", chainName()));
return;
}
auto fieldMatchesStr =
[](Json::Value const& val, char const* field, char const* toMatch) {
if (!val.isMember(field))
return false;
auto const f = val[field];
if (!f.isString())
return false;
return f.asString() == toMatch;
};
TER const txnTER = [&msg] {
return TER::fromInt(msg[jss::engine_result_code].asInt());
}();
bool const txnSuccess = (txnTER == tesSUCCESS);
// values < 0 are historical txns. values >= 0 are new transactions. Only
// the initial sync needs historical txns.
int const txnHistoryIndex = msg[jss::account_history_tx_index].asInt();
auto const meta = msg[jss::meta];
// There are two payment types of interest:
// 1. User-initiated payments on this chain that trigger a transaction on
// the other chain.
// 2. Federator-initiated payments on this chain whose status needs to be
// checked.
enum class PaymentType { user, federator };
auto paymentTypeOpt = [&]() -> std::optional<PaymentType> {
// Only keep transactions to or from the door account.
// Transactions to the account are initiated by users and are cross
// chain transactions. Transactions from the account are initiated by
// federators and need to be monitored for errors. There are two types
// of transactions that originate from the door account: the second half
// of a cross chain payment and a refund of a failed cross chain
// payment.
if (!fieldMatchesStr(msg, jss::type, jss::transaction))
return {};
if (!msg.isMember(jss::transaction))
return {};
auto const txn = msg[jss::transaction];
if (!fieldMatchesStr(txn, jss::TransactionType, "Payment"))
return {};
bool const accIsSrc =
fieldMatchesStr(txn, jss::Account, doorAccountStr_.c_str());
bool const accIsDst =
fieldMatchesStr(txn, jss::Destination, doorAccountStr_.c_str());
if (accIsSrc == accIsDst)
{
// either account is not involved, or self send
return {};
}
if (accIsSrc)
return PaymentType::federator;
return PaymentType::user;
}();
// There are five types of messages used to control the federator accounts:
// 1. AccountSet without modifying account settings. These txns are used to
// trigger TicketCreate txns.
// 2. TicketCreate to issue tickets.
// 3. AccountSet that changes the depositAuth setting of accounts.
// 4. SignerListSet to update the signerList of accounts.
// 5. AccountSet that disables the master key. All transactions before this
// are used for setup only and should be ignored. This transaction is also
// used to help set the initial transaction sequence numbers.
enum class AccountControlType {
trigger,
ticket,
depositAuth,
signerList,
disableMasterKey
};
auto accountControlTypeOpt = [&]() -> std::optional<AccountControlType> {
if (!fieldMatchesStr(msg, jss::type, jss::transaction))
return {};
if (!msg.isMember(jss::transaction))
return {};
auto const txn = msg[jss::transaction];
if (fieldMatchesStr(txn, jss::TransactionType, "AccountSet"))
{
if (!(txn.isMember(jss::SetFlag) || txn.isMember(jss::ClearFlag)))
{
return AccountControlType::trigger;
}
else
{
// Get the flags value at the key. If the key is not present,
// return 0.
auto getFlags =
[&txn](Json::StaticString const& key) -> std::uint32_t {
if (txn.isMember(key))
{
auto const val = txn[key];
try
{
return val.asUInt();
}
catch (...)
{
}
}
return 0;
};
std::uint32_t const setFlags = getFlags(jss::SetFlag);
std::uint32_t const clearFlags = getFlags(jss::ClearFlag);
if (setFlags == asfDepositAuth || clearFlags == asfDepositAuth)
return AccountControlType::depositAuth;
if (setFlags == asfDisableMaster)
return AccountControlType::disableMasterKey;
}
}
if (fieldMatchesStr(txn, jss::TransactionType, "TicketCreate"))
return AccountControlType::ticket;
if (fieldMatchesStr(txn, jss::TransactionType, "SignerListSet"))
return AccountControlType::signerList;
return {};
}();
if (!paymentTypeOpt && !accountControlTypeOpt)
{
JLOGV(
j_.warn(),
"ignoring listener message",
jv("reason", "wrong type, not payment nor account control tx"),
jv("msg", msg),
jv("chain_name", chainName()));
return;
}
assert(!paymentTypeOpt || !accountControlTypeOpt);
auto const txnHash = [&]() -> std::optional<uint256> {
try
{
uint256 result;
if (result.parseHex(msg[jss::transaction][jss::hash].asString()))
return result;
}
catch (...)
{
}
// TODO: this is an insane input stream
// Detect and connect to another server
return {};
}();
if (!txnHash)
{
JLOG(j_.warn()) << "ignoring listener message, no tx hash";
return;
}
auto const seq = [&]() -> std::optional<std::uint32_t> {
try
{
return msg[jss::transaction][jss::Sequence].asUInt();
}
catch (...)
{
// TODO: this is an insane input stream
// Detect and connect to another server
return {};
}
}();
if (!seq)
{
JLOG(j_.warn()) << "ignoring listener message, no tx seq";
return;
}
if (paymentTypeOpt)
{
PaymentType const paymentType = *paymentTypeOpt;
std::optional<STAmount> deliveredAmt;
if (meta.isMember(jss::delivered_amount))
{
deliveredAmt =
amountFromJson(sfGeneric, meta[jss::delivered_amount]);
}
auto const src = [&]() -> std::optional<AccountID> {
try
{
return parseBase58<AccountID>(
msg[jss::transaction][jss::Account].asString());
}
catch (...)
{
}
// TODO: this is an insane input stream
// Detect and connect to another server
return {};
}();
if (!src)
{
// TODO: handle the error
return;
}
auto const dst = [&]() -> std::optional<AccountID> {
try
{
switch (paymentType)
{
case PaymentType::user: {
// This is the destination of the "other chain"
// transfer, which is specified as a memo.
if (!msg.isMember(jss::transaction))
{
return std::nullopt;
}
try
{
// the memo data is a hex encoded version of the
// base58 encoded address. This was chosen for ease
// of encoding by clients.
auto const hexData =
msg[jss::transaction][jss::Memos][0u][jss::Memo]
[jss::MemoData]
.asString();
if ((hexData.size() > 100) || (hexData.size() % 2))
return std::nullopt;
auto const asciiData = [&]() -> std::string {
std::string result;
result.reserve(40);
auto d = hexData.data();
for (int i = 0; i < hexData.size(); i += 2)
{
auto highNibble = charUnHex(d[i]);
auto lowNibble = charUnHex(d[i + 1]);
if (highNibble < 0 || lowNibble < 0)
return {};
char c = (highNibble << 4) | lowNibble;
result.push_back(c);
}
return result;
}();
return parseBase58<AccountID>(asciiData);
}
catch (...)
{
// User did not specify a destination address in a
// memo
return std::nullopt;
}
}
case PaymentType::federator:
return parseBase58<AccountID>(
msg[jss::transaction][jss::Destination].asString());
}
}
catch (...)
{
}
// TODO: this is an insane input stream
// Detect and connect to another server
return {};
}();
if (!dst)
{
// TODO: handle the error
return;
}
switch (paymentType)
{
case PaymentType::federator: {
auto s = txnSuccess ? j_.trace() : j_.error();
char const* status = txnSuccess ? "success" : "fail";
JLOGV(
s,
"federator txn status",
jv("chain_name", chainName()),
jv("status", status),
jv("msg", msg));
auto const txnTypeRaw =
detail::getMemoData<uint8_t>(msg[jss::transaction], 0);
if (!txnTypeRaw || *txnTypeRaw > Federator::txnTypeLast)
{
JLOGV(
j_.fatal(),
"expected valid txnType in ChainListener",
jv("msg", msg));
return;
}
Federator::TxnType const txnType =
static_cast<Federator::TxnType>(*txnTypeRaw);
auto const srcChainTxnHash =
detail::getMemoData<uint256>(msg[jss::transaction], 1);
if (!srcChainTxnHash)
{
JLOGV(
j_.fatal(),
"expected srcChainTxnHash in ChainListener",
jv("msg", msg));
return;
}
static_assert(
Federator::txnTypeLast == 2, "Add new case below");
switch (txnType)
{
case Federator::TxnType::xChain: {
using namespace event;
// The direction looks backwards, but it's not. The
// direction is for the *triggering* transaction.
auto const dir =
isMainchain_ ? Dir::sideToMain : Dir::mainToSide;
XChainTransferResult e{
dir,
*dst,
deliveredAmt,
*seq,
*srcChainTxnHash,
*txnHash,
txnTER,
txnHistoryIndex};
pushEvent(std::move(e), txnHistoryIndex, l);
}
break;
case Federator::TxnType::refund: {
using namespace event;
// The direction is for the triggering transaction.
auto const dir =
isMainchain_ ? Dir::mainToSide : Dir::sideToMain;
auto const dstChainTxnHash =
detail::getMemoData<uint256>(
msg[jss::transaction], 2);
if (!dstChainTxnHash)
{
JLOGV(
j_.fatal(),
"expected valid dstChainTxnHash in "
"ChainListener",
jv("msg", msg));
return;
}
RefundTransferResult e{
dir,
*dst,
deliveredAmt,
*seq,
*srcChainTxnHash,
*dstChainTxnHash,
*txnHash,
txnTER,
txnHistoryIndex};
pushEvent(std::move(e), txnHistoryIndex, l);
}
break;
}
}
break;
case PaymentType::user: {
if (!txnSuccess)
return;
if (!deliveredAmt)
return;
{
using namespace event;
XChainTransferDetected e{
isMainchain_ ? Dir::mainToSide : Dir::sideToMain,
*src,
*dst,
*deliveredAmt,
*seq,
*txnHash,
txnHistoryIndex};
pushEvent(std::move(e), txnHistoryIndex, l);
}
}
break;
}
}
else
{
// account control tx
auto const ledgerIndex = [&]() -> std::optional<std::uint32_t> {
try
{
return msg["ledger_index"].asInt();
}
catch (...)
{
JLOGV(j_.error(), "no ledger_index", jv("message", msg));
assert(false);
return {};
}
}();
if (!ledgerIndex)
{
JLOG(j_.warn()) << "ignoring listener message, no ledgerIndex";
return;
}
auto const getSourceTag = [&]() -> std::optional<std::uint32_t> {
try
{
return msg[jss::transaction]["SourceTag"].asUInt();
}
catch (...)
{
JLOGV(j_.error(), "wrong SourceTag", jv("message", msg));
assert(false);
return {};
}
};
auto const getMemoStr = [&](std::uint32_t index) -> std::string {
try
{
if (msg[jss::transaction][jss::Memos][index] ==
Json::Value::null)
return {};
auto str = std::string(msg[jss::transaction][jss::Memos][index]
[jss::Memo][jss::MemoData]
.asString());
assert(str.length() <= event::MemoStringMax);
return str;
}
catch (...)
{
}
return {};
};
auto const accountControlType = *accountControlTypeOpt;
switch (accountControlType)
{
case AccountControlType::trigger: {
JLOGV(
j_.trace(),
"AccountControlType::trigger",
jv("chain_name", chainName()),
jv("account_seq", *seq),
jv("msg", msg));
auto sourceTag = getSourceTag();
if (!sourceTag)
{
JLOG(j_.warn())
<< "ignoring listener message, no sourceTag";
return;
}
auto memoStr = getMemoStr(0);
event::TicketCreateTrigger e = {
isMainchain_ ? event::Dir::mainToSide
: event::Dir::sideToMain,
txnSuccess,
0,
*ledgerIndex,
*txnHash,
txnHistoryIndex,
*sourceTag,
std::move(memoStr)};
pushEvent(std::move(e), txnHistoryIndex, l);
break;
}
case AccountControlType::ticket: {
JLOGV(
j_.trace(),
"AccountControlType::ticket",
jv("chain_name", chainName()),
jv("account_seq", *seq),
jv("msg", msg));
auto sourceTag = getSourceTag();
if (!sourceTag)
{
JLOG(j_.warn())
<< "ignoring listener message, no sourceTag";
return;
}
auto const triggeringTxnHash =
detail::getMemoData<uint256>(msg[jss::transaction], 0);
if (!triggeringTxnHash)
{
JLOGV(
(txnSuccess ? j_.trace() : j_.error()),
"bootstrap ticket",
jv("chain_name", chainName()),
jv("account_seq", *seq),
jv("msg", msg));
if (!txnSuccess)
return;
event::BootstrapTicket e = {
isMainchain_,
txnSuccess,
*seq,
*ledgerIndex,
txnHistoryIndex,
*sourceTag};
pushEvent(std::move(e), txnHistoryIndex, l);
return;
}
// The TicketCreate tx is both the result of its triggering
// AccountSet tx, and the trigger of another account control tx,
// if there is a tx in the memo field.
event::TicketCreateResult e = {
isMainchain_ ? event::Dir::sideToMain
: event::Dir::mainToSide,
txnSuccess,
*seq,
*ledgerIndex,
*triggeringTxnHash,
*txnHash,
txnHistoryIndex,
*sourceTag,
getMemoStr(1)};
pushEvent(std::move(e), txnHistoryIndex, l);
break;
}
case AccountControlType::depositAuth: {
JLOGV(
j_.trace(),
"AccountControlType::depositAuth",
jv("chain_name", chainName()),
jv("account_seq", *seq),
jv("msg", msg));
auto const triggeringTxHash =
detail::getMemoData<uint256>(msg[jss::transaction], 0);
if (!triggeringTxHash)
{
JLOG(j_.warn())
<< "ignoring listener message, no triggeringTxHash";
return;
}
auto opOpt = [&]() -> std::optional<event::AccountFlagOp> {
try
{
if (msg[jss::transaction].isMember(jss::SetFlag) &&
msg[jss::transaction][jss::SetFlag].isIntegral())
{
assert(
msg[jss::transaction][jss::SetFlag].asUInt() ==
asfDepositAuth);
return event::AccountFlagOp::set;
}
if (msg[jss::transaction].isMember(jss::ClearFlag) &&
msg[jss::transaction][jss::ClearFlag].isIntegral())
{
assert(
msg[jss::transaction][jss::ClearFlag]
.asUInt() == asfDepositAuth);
return event::AccountFlagOp::clear;
}
}
catch (...)
{
}
JLOGV(
j_.error(),
"unexpected accountSet tx",
jv("message", msg));
assert(false);
return {};
}();
if (!opOpt)
return;
event::DepositAuthResult e{
isMainchain_ ? event::Dir::sideToMain
: event::Dir::mainToSide,
txnSuccess,
*seq,
*ledgerIndex,
*triggeringTxHash,
txnHistoryIndex,
*opOpt};
pushEvent(std::move(e), txnHistoryIndex, l);
break;
}
case AccountControlType::signerList:
// TODO
break;
case AccountControlType::disableMasterKey: {
event::DisableMasterKeyResult e{
isMainchain_, *seq, txnHistoryIndex};
pushEvent(std::move(e), txnHistoryIndex, l);
break;
}
break;
}
}
// Note: Handling "last in history" is done through the lambda given
// to `make_scope` earlier in the function
}
void
ChainListener::setLastXChainTxnWithResult(uint256 const& hash)
{
// Note that `onMessage` also locks this mutex, and it calls
// `setLastXChainTxnWithResult`. However, it calls that function on the
// other chain, so the mutex will not be locked twice on the same
// thread.
std::lock_guard l{m_};
if (!initialSync_)
return;
auto const hasReplayed = initialSync_->setLastXChainTxnWithResult(hash);
if (hasReplayed)
initialSync_.reset();
}
void
ChainListener::setNoLastXChainTxnWithResult()
{
// Note that `onMessage` also locks this mutex, and it calls
// `setNoLastXChainTxnWithResult`. However, it calls that function on
// the other chain, so the mutex will not be locked twice on the same
// thread.
std::lock_guard l{m_};
if (!initialSync_)
return;
bool const hasReplayed = initialSync_->setNoLastXChainTxnWithResult();
if (hasReplayed)
initialSync_.reset();
}
Json::Value
ChainListener::getInfo() const
{
std::lock_guard l{m_};
Json::Value ret{Json::objectValue};
ret[jss::state] = initialSync_ ? "syncing" : "normal";
if (initialSync_)
{
ret[jss::sync_info] = initialSync_->getInfo();
}
// get the state (in sync, syncing)
return ret;
}
} // namespace sidechain
} // namespace ripple


@@ -0,0 +1,102 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_SIDECHAIN_IMPL_CHAINLISTENER_H_INCLUDED
#define RIPPLE_SIDECHAIN_IMPL_CHAINLISTENER_H_INCLUDED
#include <ripple/protocol/AccountID.h>
#include <ripple/app/sidechain/impl/InitialSync.h>
#include <ripple/beast/utility/Journal.h>
#include <boost/asio/io_service.hpp>
#include <boost/asio/ip/address.hpp>
#include <memory>
#include <mutex>
#include <string>
namespace ripple {
namespace sidechain {
class Federator;
class WebsocketClient;
class ChainListener
{
protected:
enum class IsMainchain { no, yes };
bool const isMainchain_;
// Sending XRP to the door account will trigger an x-chain transaction
AccountID const doorAccount_;
std::string const doorAccountStr_;
std::weak_ptr<Federator> federator_;
mutable std::mutex m_;
// Logic to handle potentially collecting and replaying historical
// transactions. Will be empty after replaying.
std::unique_ptr<InitialSync> GUARDED_BY(m_) initialSync_;
beast::Journal j_;
ChainListener(
IsMainchain isMainchain,
AccountID const& account,
std::weak_ptr<Federator>&& federator,
beast::Journal j);
virtual ~ChainListener();
std::string const&
chainName() const;
void
processMessage(Json::Value const& msg) EXCLUDES(m_);
template <class E>
void
pushEvent(E&& e, int txHistoryIndex, std::lock_guard<std::mutex> const&)
REQUIRES(m_);
public:
void
setLastXChainTxnWithResult(uint256 const& hash) EXCLUDES(m_);
void
setNoLastXChainTxnWithResult() EXCLUDES(m_);
Json::Value
getInfo() const EXCLUDES(m_);
using RpcCallback = std::function<void(Json::Value const&)>;
/**
* Send an RPC command and call the callback with the RPC result
* @param cmd RPC command
* @param params RPC command parameters
* @param onResponse callback to process the RPC result
*/
virtual void
send(
std::string const& cmd,
Json::Value const& params,
RpcCallback onResponse) = 0;
};
} // namespace sidechain
} // namespace ripple
#endif


@@ -0,0 +1,327 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <ripple/app/sidechain/impl/DoorKeeper.h>
#include <ripple/app/sidechain/Federator.h>
#include <ripple/app/sidechain/impl/TicketHolder.h>
#include <ripple/basics/Log.h>
#include <ripple/json/json_writer.h>
#include <ripple/protocol/LedgerFormats.h>
#include <ripple/protocol/TxFlags.h>
namespace ripple {
namespace sidechain {
DoorKeeper::DoorKeeper(
bool isMainChain,
AccountID const& account,
TicketRunner& ticketRunner,
Federator& federator,
beast::Journal j)
: isMainChain_(isMainChain)
, accountStr_(toBase58(account))
, ticketRunner_(ticketRunner)
, federator_(federator)
, j_(j)
{
}
void
DoorKeeper::init()
{
std::lock_guard lock(mtx_);
if (initData_.status_ != InitializeStatus::waitLedger)
return;
initData_.status_ = InitializeStatus::waitAccountInfo;
rpcAccountInfo(lock);
}
void
DoorKeeper::updateQueueLength(std::uint32_t length)
{
DoorStatus oldStatus;
auto const tx = [&]() -> std::optional<Json::Value> {
enum Action { setFlag, clearFlag, noAction };
Action action = noAction;
std::lock_guard lock(mtx_);
JLOGV(
j_.trace(),
"updateQueueLength",
jv("account:", accountStr_),
jv("QLen", length),
jv("chain", (isMainChain_ ? "main" : "side")));
if (initData_.status_ != InitializeStatus::initialized)
return {};
oldStatus = status_;
if (length >= HighWaterMark && status_ == DoorStatus::open)
{
action = setFlag;
status_ = DoorStatus::closing;
}
else if (length <= LowWaterMark && status_ == DoorStatus::closed)
{
action = clearFlag;
status_ = DoorStatus::opening;
}
if (action == noAction)
return {};
XRPAmount const fee{Federator::accountControlTxFee};
Json::Value txJson;
txJson[jss::TransactionType] = "AccountSet";
txJson[jss::Account] = accountStr_;
txJson[jss::Sequence] = 0; // to be filled by ticketRunner
txJson[jss::Fee] = to_string(fee);
if (action == setFlag)
txJson[jss::SetFlag] = asfDepositAuth;
else
txJson[jss::ClearFlag] = asfDepositAuth;
return txJson;
}();
if (tx)
{
bool triggered = false;
if (isMainChain_)
{
triggered =
ticketRunner_.trigger(TicketPurpose::mainDoorKeeper, tx, {});
}
else
{
triggered =
ticketRunner_.trigger(TicketPurpose::sideDoorKeeper, {}, tx);
}
JLOGV(
j_.trace(),
"updateQueueLength",
jv("account:", accountStr_),
jv("QLen", length),
jv("chain", (isMainChain_ ? "main" : "side")),
jv("tx", *tx),
jv("triggered", (triggered ? "yes" : "no")));
if (!triggered)
{
std::lock_guard lock(mtx_);
status_ = oldStatus;
}
}
}
void
DoorKeeper::onEvent(const event::DepositAuthResult& e)
{
std::lock_guard lock(mtx_);
if (initData_.status_ != InitializeStatus::initialized)
{
JLOG(j_.trace()) << "Queue an event";
initData_.toReplay_.push(e);
}
else
{
processEvent(e, lock);
}
}
void
DoorKeeper::rpcAccountInfo(std::lock_guard<std::mutex> const&)
{
Json::Value params = [&] {
Json::Value r;
r[jss::account] = accountStr_;
r[jss::ledger_index] = "validated";
r[jss::signer_lists] = false;
return r;
}();
rpcChannel_->send(
"account_info",
params,
[chain = isMainChain_ ? Federator::mainChain : Federator::sideChain,
wp = federator_.weak_from_this()](Json::Value const& response) {
if (auto f = wp.lock())
f->getDoorKeeper(chain).accountInfoResult(response);
});
}
void
DoorKeeper::accountInfoResult(const Json::Value& rpcResult)
{
auto ledgerNFlagsOpt =
[&]() -> std::optional<std::pair<std::uint32_t, std::uint32_t>> {
try
{
if (rpcResult.isMember(jss::error))
{
return {};
}
if (!rpcResult[jss::validated].asBool())
{
return {};
}
if (rpcResult[jss::account_data][jss::Account] != accountStr_)
{
return {};
}
if (!rpcResult[jss::account_data][jss::Flags].isIntegral())
{
return {};
}
if (!rpcResult[jss::ledger_index].isIntegral())
{
return {};
}
return std::make_pair(
rpcResult[jss::ledger_index].asUInt(),
rpcResult[jss::account_data][jss::Flags].asUInt());
}
catch (...)
{
return {};
}
}();
if (!ledgerNFlagsOpt)
{
// should not reach here since we only ask for account_info after a
// validated ledger
JLOGV(j_.error(), "account_info result ", jv("result", rpcResult));
assert(false);
return;
}
auto [ledgerIndex, flags] = *ledgerNFlagsOpt;
{
JLOGV(
j_.trace(),
"accountInfoResult",
jv("ledgerIndex", ledgerIndex),
jv("flags", flags));
std::lock_guard lock(mtx_);
initData_.ledgerIndex_ = ledgerIndex;
status_ = (flags & lsfDepositAuth) == 0 ? DoorStatus::open
: DoorStatus::closed;
while (!initData_.toReplay_.empty())
{
processEvent(initData_.toReplay_.front(), lock);
initData_.toReplay_.pop();
}
initData_.status_ = InitializeStatus::initialized;
JLOG(j_.info()) << "DoorKeeper initialized, status "
<< (status_ == DoorStatus::open ? "open" : "closed");
}
}
void
DoorKeeper::processEvent(
const event::DepositAuthResult& e,
std::lock_guard<std::mutex> const&)
{
if (e.ledgerIndex_ <= initData_.ledgerIndex_)
{
JLOGV(
j_.trace(),
"DepositAuthResult, ignoring an old result",
jv("account:", accountStr_),
jv("operation",
(e.op_ == event::AccountFlagOp::set ? "set" : "clear")));
return;
}
JLOGV(
j_.trace(),
"DepositAuthResult",
jv("chain", (isMainChain_ ? "main" : "side")),
jv("account:", accountStr_),
jv("operation",
(e.op_ == event::AccountFlagOp::set ? "set" : "clear")));
if (!e.success_)
{
JLOG(j_.error()) << "DepositAuthResult event error, account "
<< (isMainChain_ ? "main" : "side") << accountStr_;
assert(false);
return;
}
switch (e.op_)
{
case event::AccountFlagOp::set:
assert(
status_ == DoorStatus::open || status_ == DoorStatus::closing);
status_ = DoorStatus::closed;
break;
case event::AccountFlagOp::clear:
assert(
status_ == DoorStatus::closed ||
status_ == DoorStatus::opening);
status_ = DoorStatus::open;
break;
}
}
Json::Value
DoorKeeper::getInfo() const
{
auto DoorStatusToStr = [](DoorKeeper::DoorStatus s) -> std::string {
switch (s)
{
case DoorKeeper::DoorStatus::open:
return "open";
case DoorKeeper::DoorStatus::opening:
return "opening";
case DoorKeeper::DoorStatus::closed:
return "closed";
case DoorKeeper::DoorStatus::closing:
return "closing";
}
return {};
};
Json::Value ret{Json::objectValue};
{
std::lock_guard lock{mtx_};
if (initData_.status_ == InitializeStatus::initialized)
{
ret["initialized"] = "true";
ret["status"] = DoorStatusToStr(status_);
}
else
{
ret["initialized"] = "false";
}
}
return ret;
}
void
DoorKeeper::setRpcChannel(std::shared_ptr<ChainListener> channel)
{
rpcChannel_ = std::move(channel);
}
} // namespace sidechain
} // namespace ripple


@@ -0,0 +1,128 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_SIDECHAIN_IMPL_DOOR_OPENER_H
#define RIPPLE_SIDECHAIN_IMPL_DOOR_OPENER_H
#include <ripple/app/sidechain/FederatorEvents.h>
#include <ripple/app/sidechain/impl/ChainListener.h>
#include <ripple/beast/utility/Journal.h>
#include <ripple/json/json_value.h>
#include <ripple/protocol/AccountID.h>
#include <mutex>
#include <queue>
#include <string>
namespace ripple {
namespace sidechain {
class TicketRunner;
class Federator;
class DoorKeeper
{
public:
static constexpr std::uint32_t LowWaterMark = 0;
static constexpr std::uint32_t HighWaterMark = 100;
static_assert(HighWaterMark > LowWaterMark);
enum class DoorStatus { open, closing, closed, opening };
private:
enum class InitializeStatus { waitLedger, waitAccountInfo, initialized };
struct InitializeData
{
InitializeStatus status_ = InitializeStatus::waitLedger;
std::queue<event::DepositAuthResult> toReplay_;
std::uint32_t ledgerIndex_ = 0;
};
std::shared_ptr<ChainListener> rpcChannel_;
bool const isMainChain_;
std::string const accountStr_;
mutable std::mutex mtx_;
InitializeData GUARDED_BY(mtx_) initData_;
DoorStatus GUARDED_BY(mtx_) status_;
TicketRunner& ticketRunner_;
Federator& federator_;
beast::Journal j_;
public:
DoorKeeper(
bool isMainChain,
AccountID const& account,
TicketRunner& ticketRunner,
Federator& federator,
beast::Journal j);
~DoorKeeper() = default;
/**
* Start initializing the doorKeeper by sending an account_info RPC
*/
void
init() EXCLUDES(mtx_);
/**
* Process the account_info result and set the door status.
* This is the end of initialization.
*
* @param rpcResult the accountInfo result
*/
void
accountInfoResult(Json::Value const& rpcResult) EXCLUDES(mtx_);
/**
* update the doorKeeper about the number of pending XChain payments
* The doorKeeper will close the door if there are too many
* pending XChain payments and reopen the door later
*
* @param length the number of pending XChain payments
*/
void
updateQueueLength(std::uint32_t length) EXCLUDES(mtx_);
/**
* process a DepositAuthResult event and set the door status.
* It queues the event if the doorKeeper is not yet initialized.
*
* @param e the DepositAuthResult event
*/
void
onEvent(event::DepositAuthResult const& e) EXCLUDES(mtx_);
Json::Value
getInfo() const EXCLUDES(mtx_);
void
setRpcChannel(std::shared_ptr<ChainListener> channel);
private:
void
rpcAccountInfo(std::lock_guard<std::mutex> const&) REQUIRES(mtx_);
void
processEvent(
event::DepositAuthResult const& e,
std::lock_guard<std::mutex> const&) REQUIRES(mtx_);
};
} // namespace sidechain
} // namespace ripple
#endif


@@ -0,0 +1,586 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <ripple/app/sidechain/impl/InitialSync.h>
#include <ripple/app/sidechain/Federator.h>
#include <ripple/basics/Log.h>
#include <ripple/basics/contract.h>
#include <ripple/json/json_writer.h>
#include <type_traits>
#include <variant>
namespace ripple {
namespace sidechain {
InitialSync::InitialSync(
std::weak_ptr<Federator> federator,
bool isMainchain,
beast::Journal j)
: federator_{std::move(federator)}, isMainchain_{isMainchain}, j_{j}
{
}
bool
InitialSync::hasTransaction(
uint256 const& txnHash,
std::lock_guard<std::mutex> const&) const
{
return seenTriggeringTxns_.count(txnHash);
}
bool
InitialSync::canReplay(std::lock_guard<std::mutex> const&) const
{
return !(
needsLastXChainTxn_ || needsOtherChainLastXChainTxn_ ||
needsReplayStartTxnHash_);
}
void
InitialSync::stopHistoricalTxns(std::lock_guard<std::mutex> const&)
{
if (!acquiringHistoricData_)
return;
acquiringHistoricData_ = false;
if (auto f = federator_.lock())
{
f->stopHistoricalTxns(getChainType(isMainchain_));
}
}
void
InitialSync::done()
{
if (auto f = federator_.lock())
{
f->initialSyncDone(
isMainchain_ ? Federator::ChainType::mainChain
: Federator::ChainType::sideChain);
}
}
bool
InitialSync::setLastXChainTxnWithResult(uint256 const& hash)
{
std::lock_guard l{m_};
JLOGV(
j_.trace(),
"last xchain txn with result",
jv("needsOtherChainLastXChainTxn", needsOtherChainLastXChainTxn_),
jv("isMainchain", isMainchain_),
jv("hash", hash));
assert(lastXChainTxnWithResult_.value_or(hash) == hash);
if (hasReplayed_ || lastXChainTxnWithResult_)
return hasReplayed_;
lastXChainTxnWithResult_ = hash;
needsReplayStartTxnHash_ = false;
if (needsLastXChainTxn_)
{
needsLastXChainTxn_ =
!seenTriggeringTxns_.count(*lastXChainTxnWithResult_);
}
if (!acquiringHistoricData_ && needsLastXChainTxn_)
LogicError("Initial sync could not find historic XChain transaction");
if (canReplay(l))
replay(l);
return hasReplayed_;
}
bool
InitialSync::setNoLastXChainTxnWithResult()
{
std::lock_guard l{m_};
JLOGV(
j_.trace(),
"no last xchain txn with result",
jv("needsOtherChainLastXChainTxn", needsOtherChainLastXChainTxn_),
jv("isMainchain", isMainchain_));
assert(!lastXChainTxnWithResult_);
if (hasReplayed_)
return hasReplayed_;
needsLastXChainTxn_ = false;
needsReplayStartTxnHash_ = false;
if (canReplay(l))
replay(l);
return hasReplayed_;
}
void
InitialSync::replay(std::lock_guard<std::mutex> const& l)
{
if (hasReplayed_)
return;
assert(canReplay(l));
// Note that this function may push a large number of events to the
// federator, and it runs under a lock. However, pushing an event to the
// federator just copies it into a collection (it does not handle the event
// in the same thread). So this should run relatively quickly.
stopHistoricalTxns(l);
hasReplayed_ = true;
JLOGV(
j_.trace(),
"InitialSync replay,",
jv("chain_name", (isMainchain_ ? "Mainchain" : "Sidechain")),
jv("lastXChainTxnWithResult_",
(lastXChainTxnWithResult_ ? strHex(*lastXChainTxnWithResult_)
: "not set")));
if (lastXChainTxnWithResult_)
assert(seenTriggeringTxns_.count(*lastXChainTxnWithResult_));
if (lastXChainTxnWithResult_ &&
seenTriggeringTxns_.count(*lastXChainTxnWithResult_))
{
// Remove the XChainTransferDetected event associated with this txn, and
// all the XChainTransferDetected events before it. They have already
// been submitted. If they are not removed, they will never collect
// enough signatures to be submitted (since the other federators have
already submitted them), and they will prevent subsequent events from
// replaying.
std::vector<decltype(pendingEvents_)::const_iterator> toRemove;
toRemove.reserve(pendingEvents_.size());
std::vector<decltype(pendingEvents_)::const_iterator> toRemoveTrigger;
bool matched = false;
for (auto i = pendingEvents_.cbegin(), e = pendingEvents_.cend();
i != e;
++i)
{
auto const et = eventType(i->second);
if (et == event::EventType::trigger)
{
toRemove.push_back(i);
}
else if (et == event::EventType::resultAndTrigger)
{
toRemoveTrigger.push_back(i);
}
else
{
continue;
}
auto const txnHash = sidechain::txnHash(i->second);
if (!txnHash)
{
// All triggering events should have a txnHash
assert(0);
continue;
}
JLOGV(
j_.trace(),
"InitialSync replay, remove trigger event from pendingEvents_",
jv("chain_name", (isMainchain_ ? "Mainchain" : "Sidechain")),
jv("txn", toJson(i->second)));
if (*lastXChainTxnWithResult_ == *txnHash)
{
matched = true;
break;
}
}
assert(matched);
if (matched)
{
for (auto const& i : toRemoveTrigger)
{
// Note: erase(i, i) erases nothing; it is used here to convert the
// const_iterator into the mutable iterator that get_if needs.
if (auto ticketResult = std::get_if<event::TicketCreateResult>(
&(pendingEvents_.erase(i, i)->second));
ticketResult)
{
ticketResult->removeTrigger();
}
}
for (auto const& i : toRemove)
{
pendingEvents_.erase(i);
}
}
}
if (disableMasterKeySeq_)
{
// Remove trigger events that come before disableMasterKeySeq_
std::vector<decltype(pendingEvents_)::const_iterator> toRemove;
toRemove.reserve(pendingEvents_.size());
for (auto i = pendingEvents_.cbegin(), e = pendingEvents_.cend();
i != e;
++i)
{
if (std::holds_alternative<event::DisableMasterKeyResult>(
i->second))
break;
if (eventType(i->second) != event::EventType::trigger)
continue;
toRemove.push_back(i);
}
for (auto const& i : toRemove)
{
pendingEvents_.erase(i);
}
}
if (auto f = federator_.lock())
{
for (auto&& [_, e] : pendingEvents_)
{
JLOGV(
j_.trace(),
"InitialSync replay, pushing event",
jv("chain_name", (isMainchain_ ? "Mainchain" : "Sidechain")),
jv("txn", toJson(e)));
f->push(std::move(e));
}
}
seenTriggeringTxns_.clear();
pendingEvents_.clear();
if (auto f = federator_.lock())
{
auto const key = isMainchain_ ? Federator::UnlockMainLoopKey::mainChain
: Federator::UnlockMainLoopKey::sideChain;
f->unlockMainLoop(key);
}
done();
}
bool
InitialSync::onEvent(event::XChainTransferDetected&& e)
{
return onTriggerEvent(std::move(e));
}
bool
InitialSync::onEvent(event::XChainTransferResult&& e)
{
return onResultEvent(std::move(e), 1);
}
bool
InitialSync::onEvent(event::TicketCreateTrigger&& e)
{
return onTriggerEvent(std::move(e));
}
bool
InitialSync::onEvent(event::TicketCreateResult&& e)
{
static_assert(std::is_rvalue_reference_v<decltype(e)>, "");
std::lock_guard l{m_};
if (hasReplayed_)
{
assert(0);
return hasReplayed_;
}
JLOGV(
j_.trace(), "InitialSync TicketCreateResult", jv("event", e.toJson()));
if (needsOtherChainLastXChainTxn_)
{
if (auto f = federator_.lock())
{
// Inform the other sync object that the last transaction with a
// result was found. e.dir_ is for the triggering transaction.
Federator::ChainType const chainType = srcChainType(e.dir_);
f->setLastXChainTxnWithResult(
chainType, e.txnSeq_, 2, e.srcChainTxnHash_);
}
needsOtherChainLastXChainTxn_ = false;
}
if (!e.memoStr_.empty())
{
seenTriggeringTxns_.insert(e.txnHash_);
if (lastXChainTxnWithResult_ && needsLastXChainTxn_)
{
if (e.txnHash_ == *lastXChainTxnWithResult_)
{
needsLastXChainTxn_ = false;
JLOGV(
j_.trace(),
"InitialSync TicketCreateResult, found the trigger tx",
jv("txHash", e.txnHash_),
jv("chain_name",
(isMainchain_ ? "Mainchain" : "Sidechain")));
}
}
}
pendingEvents_[e.rpcOrder_] = std::move(e);
if (canReplay(l))
replay(l);
return hasReplayed_;
}
bool
InitialSync::onEvent(event::DepositAuthResult&& e)
{
return onResultEvent(std::move(e), 1);
}
bool
InitialSync::onEvent(event::BootstrapTicket&& e)
{
std::lock_guard l{m_};
JLOGV(j_.trace(), "InitialSync onBootstrapTicket", jv("event", e.toJson()));
if (hasReplayed_)
{
assert(0);
return hasReplayed_;
}
pendingEvents_[e.rpcOrder_] = std::move(e);
if (canReplay(l))
replay(l);
return hasReplayed_;
}
bool
InitialSync::onEvent(event::DisableMasterKeyResult&& e)
{
std::lock_guard l{m_};
JLOGV(
j_.trace(),
"InitialSync onDisableMasterKeyResultEvent",
jv("event", e.toJson()));
assert(!disableMasterKeySeq_);
disableMasterKeySeq_ = e.txnSeq_;
if (lastXChainTxnWithResult_)
LogicError("Initial sync could not find historic XChain transaction");
if (needsOtherChainLastXChainTxn_)
{
if (auto f = federator_.lock())
{
// Inform the other sync object that the last transaction
// with a result was found. Note that if the start of historic
// transactions is found while listening to the mainchain, the
// _sidechain_ listener needs to be informed that there is no last
// cross chain transaction with result.
Federator::ChainType const chainType = getChainType(!isMainchain_);
f->setNoLastXChainTxnWithResult(chainType);
}
needsOtherChainLastXChainTxn_ = false;
}
if (hasReplayed_)
{
assert(0);
return hasReplayed_;
}
if (auto f = federator_.lock())
{
// Set the account sequence right away. Otherwise when replaying, a
// triggering transaction from the main chain can be replayed before the
// disable master key event on the side chain, and the sequence number
// will be wrong.
f->setAccountSeqMax(getChainType(e.isMainchain_), e.txnSeq_ + 1);
}
pendingEvents_[e.rpcOrder_] = std::move(e);
if (canReplay(l))
replay(l);
return hasReplayed_;
}
template <class T>
bool
InitialSync::onTriggerEvent(T&& e)
{
static_assert(std::is_rvalue_reference_v<decltype(e)>, "");
std::lock_guard l{m_};
if (hasReplayed_)
{
assert(0);
return hasReplayed_;
}
JLOGV(j_.trace(), "InitialSync onTriggerEvent", jv("event", e.toJson()));
seenTriggeringTxns_.insert(e.txnHash_);
if (lastXChainTxnWithResult_ && needsLastXChainTxn_)
{
if (e.txnHash_ == *lastXChainTxnWithResult_)
{
needsLastXChainTxn_ = false;
JLOGV(
j_.trace(),
"InitialSync onTriggerEvent, found the trigger tx",
jv("txHash", e.txnHash_),
jv("chain_name", (isMainchain_ ? "Mainchain" : "Sidechain")));
}
}
pendingEvents_[e.rpcOrder_] = std::move(e);
if (canReplay(l))
{
replay(l);
}
return hasReplayed_;
}
template <class T>
bool
InitialSync::onResultEvent(T&& e, std::uint32_t seqTook)
{
static_assert(std::is_rvalue_reference_v<decltype(e)>, "");
std::lock_guard l{m_};
if (hasReplayed_)
{
assert(0);
return hasReplayed_;
}
JLOGV(j_.trace(), "InitialSync onResultEvent", jv("event", e.toJson()));
if (needsOtherChainLastXChainTxn_)
{
if (auto f = federator_.lock())
{
// Inform the other sync object that the last transaction with a
// result was found. e.dir_ is for the triggering transaction.
Federator::ChainType const chainType = srcChainType(e.dir_);
f->setLastXChainTxnWithResult(
chainType, e.txnSeq_, seqTook, e.srcChainTxnHash_);
}
needsOtherChainLastXChainTxn_ = false;
}
pendingEvents_[e.rpcOrder_] = std::move(e);
if (canReplay(l))
replay(l);
return hasReplayed_;
}
bool
InitialSync::onEvent(event::RefundTransferResult&& e)
{
std::lock_guard l{m_};
if (hasReplayed_)
{
assert(0);
}
else
{
pendingEvents_[e.rpcOrder_] = std::move(e);
if (canReplay(l))
replay(l);
}
return hasReplayed_;
}
namespace detail {
Json::Value
getInfo(FederatorEvent const& event)
{
return std::visit(
[](auto const& e) {
using eventType = decltype(e);
Json::Value ret{Json::objectValue};
if constexpr (std::is_same_v<
eventType,
event::XChainTransferDetected>)
{
ret[jss::type] = "xchain_transfer_detected";
ret[jss::amount] = to_string(e.amt_);
ret[jss::destination_account] = to_string(e.dst_);
ret[jss::hash] = strHex(e.txnHash_);
ret[jss::sequence] = e.txnSeq_;
ret["rpc_order"] = e.rpcOrder_;
}
else if constexpr (std::is_same_v<
eventType,
event::XChainTransferResult>)
{
ret[jss::type] = "xchain_transfer_result";
ret[jss::amount] = to_string(e.amt_);
ret[jss::destination_account] = to_string(e.dst_);
ret[jss::hash] = strHex(e.txnHash_);
ret["triggering_tx_hash"] = strHex(e.triggeringTxnHash_);
ret[jss::sequence] = e.txnSeq_;
ret[jss::result] = transHuman(e.ter_);
ret["rpc_order"] = e.rpcOrder_;
}
else
{
ret[jss::type] = "other_event";
}
return ret;
},
event);
}
} // namespace detail
Json::Value
InitialSync::getInfo() const
{
Json::Value ret{Json::objectValue};
{
std::lock_guard l{m_};
ret["last_x_chain_txn_with_result"] = lastXChainTxnWithResult_
? strHex(*lastXChainTxnWithResult_)
: "None";
Json::Value triggeringTxns{Json::arrayValue};
for (auto const& h : seenTriggeringTxns_)
{
triggeringTxns.append(strHex(h));
}
ret["seen_triggering_txns"] = triggeringTxns;
ret["needs_last_x_chain_txn"] = needsLastXChainTxn_;
ret["needs_other_chain_last_x_chain_txn"] =
needsOtherChainLastXChainTxn_;
ret["acquiring_historic_data"] = acquiringHistoricData_;
ret["needs_replay_start_txn_hash"] = needsReplayStartTxnHash_;
}
return ret;
}
} // namespace sidechain
} // namespace ripple


@@ -0,0 +1,212 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_SIDECHAIN_IMPL_INITIALSYNC_H_INCLUDED
#define RIPPLE_SIDECHAIN_IMPL_INITIALSYNC_H_INCLUDED
#include <ripple/app/sidechain/FederatorEvents.h>
#include <ripple/basics/ThreadSaftyAnalysis.h>
#include <ripple/basics/UnorderedContainers.h>
#include <ripple/beast/utility/Journal.h>
#include <map>
namespace ripple {
namespace sidechain {
class Federator;
class WebsocketClient;
// This class handles the logic of getting a federator that joins the network
// into a "normal" state of handling new cross chain transactions and results.
// There will be two instances of this class, one for the main chain and one for
// the side chain.
//
// When a federator joins the network of other federators, the network can be in
// one of three states:
//
// 1) The initial sidechain startup.
// 2) Running normally with a quorum of federators. The federator that's
// joining simply adds to the quorum.
// 3) A stalled sidechain without enough federators to make forward progress.
// This federator may or may not increase the quorum enough so cross chain
// transactions can continue. In the meantime, cross chain transactions may
// continue to accumulate.
//
// No matter the state of the federator network, connecting to it goes
// through the same steps.
//
// The RPC command used to fetch transactions will initially be configured to
// retrieve both historical transactions and new transactions. Once the
// information needed from the historical transactions is retrieved, it will be
// changed to only stream new transactions.
//
// There are two states this class can be in: pre-replay and post-replay. In
// pre-replay mode, the class collects information from both historic and new
// transactions. That information helps this instance and the "other" instance
// of this class know when to stop collecting historic data; the class also
// collects the transactions to replay.
//
// Historic data needs to be collected until:
//
// 1) The most recent historic `XChainTransferResult` event is detected (or the
// account's first transaction is detected). This is used to inform the "other"
// instance of this class which `XChainTransferDetected` event is the first that
// may need to be replayed. Since the previous `XChainTransferDetected` events
// have results on the other chain, we can definitively say the federators have
// handled these events and they don't need to be replayed.
//
// 2) Once the `lastXChainTxnWithResult_` is known, historic transactions need
// to be acquired until that transaction is seen in an `XChainTransferDetected`
// event.
//
// Once historic data collection has completed, the collected transactions are
// replayed to the federator, and this class is no longer needed. All new
// transactions should simply be forwarded to the federator.
//
class InitialSync
{
private:
std::weak_ptr<Federator> federator_;
// Holds all the events seen so far. These events will be replayed to the
// federator upon switching to `normal` mode. Will be cleared while
// replaying.
std::map<std::int32_t, FederatorEvent> GUARDED_BY(m_) pendingEvents_;
// Holds all triggering cross chain transactions seen so far. This is used
// to determine if the `XChainTransferDetected` event with the
// `lastXChainTxnWithResult_` has been seen or not. Will be cleared
// while replaying
hash_set<uint256> GUARDED_BY(m_) seenTriggeringTxns_;
// Hash of the last cross chain transaction on this chain with a result on
// the "other" chain. Note: this is set when the `InitialSync` for the
// "other" chain encounters the transaction.
std::optional<uint256> GUARDED_BY(m_) lastXChainTxnWithResult_;
// Track if we need to keep acquiring historic transactions for the
// `lastXChainTxnWithResult_`. This is true if the lastXChainTxnWithResult_
// is unknown, or it is known but the transaction has not yet been seen in
// `seenTriggeringTxns_`.
bool GUARDED_BY(m_) needsLastXChainTxn_{true};
// Track if we need to keep acquiring historic transactions for the other
// chain's `lastXChainTxnWithResult_` hash value. This is true if no
// cross chain transaction results are known and the first historical
// transaction has not been encountered.
bool GUARDED_BY(m_) needsOtherChainLastXChainTxn_{true};
// Track if the transaction to start the replay from is known. This is true
// until either `lastXChainTxnWithResult_` is set or the other listener
// reports that there is no last cross chain transaction with a result.
bool GUARDED_BY(m_) needsReplayStartTxnHash_{true};
// True if the historical transactions have been replayed to the federator
bool GUARDED_BY(m_) hasReplayed_{false};
// Track the state of the transaction data we are acquiring.
// If this is `false`, only new transaction events will be streamed.
// Note: there will be a period where this is `false` but historic txns will
// continue to come in until the rpc command has responded to the request to
// shut off historic data.
bool GUARDED_BY(m_) acquiringHistoricData_{true};
// All transactions before "DisableMasterKey" are setup transactions and
// should be ignored
std::optional<std::uint32_t> GUARDED_BY(m_) disableMasterKeySeq_;
bool const isMainchain_;
mutable std::mutex m_;
beast::Journal j_;
// See description on class for explanation of states
public:
InitialSync(
std::weak_ptr<Federator> federator,
bool isMainchain,
beast::Journal j);
// Return `hasReplayed_`. This is used to determine if events should
// continue to be routed to this object. Once replayed, events can be
// processed normally.
[[nodiscard]] bool
setLastXChainTxnWithResult(uint256 const& hash) EXCLUDES(m_);
// There have not been any cross chain transactions.
// Return `hasReplayed_`. This is used to determine if events should
// continue to be routed to this object. Once replayed, events can be
// processed normally.
[[nodiscard]] bool
setNoLastXChainTxnWithResult() EXCLUDES(m_);
// Return `hasReplayed_`. This is used to determine if events should
// continue to be routed to this object. Once replayed, events can be
// processed normally.
[[nodiscard]] bool
onEvent(event::XChainTransferDetected&& e) EXCLUDES(m_);
// Return `hasReplayed_`. This is used to determine if events should
// continue to be routed to this object. Once replayed, events can be
// processed normally.
[[nodiscard]] bool
onEvent(event::XChainTransferResult&& e) EXCLUDES(m_);
// Return `hasReplayed_`. This is used to determine if events should
// continue to be routed to this object. Once replayed, events can be
// processed normally.
[[nodiscard]] bool
onEvent(event::RefundTransferResult&& e) EXCLUDES(m_);
// Return `hasReplayed_`.
[[nodiscard]] bool
onEvent(event::TicketCreateTrigger&& e) EXCLUDES(m_);
// Return `hasReplayed_`.
[[nodiscard]] bool
onEvent(event::TicketCreateResult&& e) EXCLUDES(m_);
// Return `hasReplayed_`.
[[nodiscard]] bool
onEvent(event::DepositAuthResult&& e) EXCLUDES(m_);
[[nodiscard]] bool
onEvent(event::BootstrapTicket&& e) EXCLUDES(m_);
[[nodiscard]] bool
onEvent(event::DisableMasterKeyResult&& e) EXCLUDES(m_);
Json::Value
getInfo() const EXCLUDES(m_);
private:
// Replay when historical transactions are no longer being acquired,
// and the transaction to start the replay from is known.
bool
canReplay(std::lock_guard<std::mutex> const&) const REQUIRES(m_);
void
replay(std::lock_guard<std::mutex> const&) REQUIRES(m_);
bool
hasTransaction(uint256 const& txnHash, std::lock_guard<std::mutex> const&)
const REQUIRES(m_);
void
stopHistoricalTxns(std::lock_guard<std::mutex> const&) REQUIRES(m_);
template <class T>
bool
onTriggerEvent(T&& e);
template <class T>
bool
onResultEvent(T&& e, std::uint32_t seqTook);
void
done();
};
} // namespace sidechain
} // namespace ripple
#endif


@@ -0,0 +1,147 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <ripple/app/sidechain/impl/MainchainListener.h>
#include <ripple/app/sidechain/Federator.h>
#include <ripple/app/sidechain/FederatorEvents.h>
#include <ripple/app/sidechain/impl/WebsocketClient.h>
#include <ripple/basics/Log.h>
#include <ripple/basics/XRPAmount.h>
#include <ripple/json/Output.h>
#include <ripple/json/json_writer.h>
#include <ripple/protocol/AccountID.h>
#include <ripple/protocol/STAmount.h>
#include <ripple/protocol/jss.h>
namespace ripple {
namespace sidechain {
class Federator;
MainchainListener::MainchainListener(
AccountID const& account,
std::weak_ptr<Federator>&& federator,
beast::Journal j)
: ChainListener(
ChainListener::IsMainchain::yes,
account,
std::move(federator),
j)
{
}
void
MainchainListener::onMessage(Json::Value const& msg)
{
auto callbackOpt = [&]() -> std::optional<RpcCallback> {
if (msg.isMember(jss::id) && msg[jss::id].isIntegral())
{
auto callbackId = msg[jss::id].asUInt();
std::lock_guard lock(callbacksMtx_);
auto i = callbacks_.find(callbackId);
if (i != callbacks_.end())
{
auto cb = i->second;
callbacks_.erase(i);
return cb;
}
}
return {};
}();
if (callbackOpt)
{
JLOGV(
j_.trace(),
"Mainchain onMessage, reply to a callback",
jv("msg", msg));
assert(msg.isMember(jss::result));
(*callbackOpt)(msg[jss::result]);
}
else
{
processMessage(msg);
}
}
void
MainchainListener::init(
boost::asio::io_service& ios,
boost::asio::ip::address const& ip,
std::uint16_t port)
{
wsClient_ = std::make_unique<WebsocketClient>(
[self = shared_from_this()](Json::Value const& msg) {
self->onMessage(msg);
},
ios,
ip,
port,
/*headers*/ std::unordered_map<std::string, std::string>{},
j_);
Json::Value params;
params[jss::account_history_tx_stream] = Json::objectValue;
params[jss::account_history_tx_stream][jss::account] = doorAccountStr_;
send("subscribe", params);
}
// The destructor must be defined after WebsocketClient's size is known (i.e.
// it cannot be defaulted in the header, or the unique_ptr declaration of
// WebsocketClient won't work)
MainchainListener::~MainchainListener() = default;
void
MainchainListener::shutdown()
{
if (wsClient_)
wsClient_->shutdown();
}
std::uint32_t
MainchainListener::send(std::string const& cmd, Json::Value const& params)
{
return wsClient_->send(cmd, params);
}
void
MainchainListener::stopHistoricalTxns()
{
Json::Value params;
params[jss::account_history_tx_stream] = Json::objectValue;
params[jss::account_history_tx_stream][jss::stop_history_tx_only] = true;
params[jss::account_history_tx_stream][jss::account] = doorAccountStr_;
send("unsubscribe", params);
}
void
MainchainListener::send(
std::string const& cmd,
Json::Value const& params,
RpcCallback onResponse)
{
JLOGV(
j_.trace(), "Mainchain send", jv("command", cmd), jv("params", params));
auto id = wsClient_->send(cmd, params);
std::lock_guard lock(callbacksMtx_);
callbacks_.emplace(id, onResponse);
}
} // namespace sidechain
} // namespace ripple


@@ -0,0 +1,84 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_SIDECHAIN_IMPL_MAINCHAINLISTENER_H_INCLUDED
#define RIPPLE_SIDECHAIN_IMPL_MAINCHAINLISTENER_H_INCLUDED
#include <ripple/app/sidechain/impl/ChainListener.h>
#include <ripple/beast/utility/Journal.h>
#include <ripple/protocol/AccountID.h>
#include <boost/asio/io_service.hpp>
#include <boost/asio/ip/address.hpp>
#include <memory>
namespace ripple {
namespace sidechain {
class Federator;
class WebsocketClient;
class MainchainListener : public ChainListener,
public std::enable_shared_from_this<MainchainListener>
{
std::unique_ptr<WebsocketClient> wsClient_;
mutable std::mutex callbacksMtx_;
std::map<std::uint32_t, RpcCallback> GUARDED_BY(callbacksMtx_) callbacks_;
void
onMessage(Json::Value const& msg) EXCLUDES(callbacksMtx_);
public:
MainchainListener(
AccountID const& account,
std::weak_ptr<Federator>&& federator,
beast::Journal j);
~MainchainListener();
void
init(
boost::asio::io_service& ios,
boost::asio::ip::address const& ip,
std::uint16_t port);
// Returns the command id that will be returned in the response
std::uint32_t
send(std::string const& cmd, Json::Value const& params)
EXCLUDES(callbacksMtx_);
void
shutdown();
void
stopHistoricalTxns();
void
send(
std::string const& cmd,
Json::Value const& params,
RpcCallback onResponse) override;
};
} // namespace sidechain
} // namespace ripple
#endif


@@ -0,0 +1,130 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <ripple/app/sidechain/impl/SidechainListener.h>
#include <ripple/app/main/Application.h>
#include <ripple/app/misc/NetworkOPs.h>
#include <ripple/app/sidechain/Federator.h>
#include <ripple/app/sidechain/FederatorEvents.h>
#include <ripple/basics/Log.h>
#include <ripple/basics/XRPAmount.h>
#include <ripple/json/Output.h>
#include <ripple/json/json_writer.h>
#include <ripple/protocol/AccountID.h>
#include <ripple/protocol/STAmount.h>
#include <ripple/protocol/jss.h>
#include <ripple/resource/Fees.h>
#include <ripple/rpc/Context.h>
#include <ripple/rpc/RPCHandler.h>
#include <ripple/rpc/impl/RPCHelpers.h>
namespace ripple {
namespace sidechain {
SidechainListener::SidechainListener(
Source& source,
AccountID const& account,
std::weak_ptr<Federator>&& federator,
Application& app,
beast::Journal j)
: InfoSub(source)
, ChainListener(
ChainListener::IsMainchain::no,
account,
std::move(federator),
j)
, app_(app)
{
}
void
SidechainListener::init(NetworkOPs& netOPs)
{
auto e = netOPs.subAccountHistory(shared_from_this(), doorAccount_);
if (e != rpcSUCCESS)
LogicError("Could not subscribe to side chain door account history.");
}
void
SidechainListener::send(Json::Value const& msg, bool)
{
processMessage(msg);
}
void
SidechainListener::stopHistoricalTxns(NetworkOPs& netOPs)
{
netOPs.unsubAccountHistory(
shared_from_this(), doorAccount_, /*history only*/ true);
}
void
SidechainListener::send(
std::string const& cmd,
Json::Value const& params,
RpcCallback onResponse)
{
std::weak_ptr<SidechainListener> selfWeak = shared_from_this();
auto job = [cmd, params, onResponse, selfWeak]() {
auto self = selfWeak.lock();
if (!self)
return;
JLOGV(
self->j_.trace(),
"Sidechain send",
jv("command", cmd),
jv("params", params));
Json::Value const request = [&] {
Json::Value r(params);
r[jss::method] = cmd;
r[jss::jsonrpc] = "2.0";
r[jss::ripplerpc] = "2.0";
return r;
}();
Resource::Charge loadType = Resource::feeReferenceRPC;
Resource::Consumer c;
RPC::JsonContext context{
{self->j_,
self->app_,
loadType,
self->app_.getOPs(),
self->app_.getLedgerMaster(),
c,
Role::ADMIN,
{},
{},
RPC::apiMaximumSupportedVersion},
std::move(request)};
Json::Value jvResult;
RPC::doCommand(context, jvResult);
JLOG(self->j_.trace()) << "Sidechain response: " << jvResult;
if (self->app_.config().standalone())
self->app_.getOPs().acceptLedger();
onResponse(jvResult);
};
app_.getJobQueue().addJob(jtRPC, "federator rpc", job);
}
} // namespace sidechain
} // namespace ripple


@@ -0,0 +1,74 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_SIDECHAIN_IMPL_SIDECHAINLISTENER_H_INCLUDED
#define RIPPLE_SIDECHAIN_IMPL_SIDECHAINLISTENER_H_INCLUDED
#include <ripple/app/sidechain/impl/ChainListener.h>
#include <ripple/beast/utility/Journal.h>
#include <ripple/net/InfoSub.h>
#include <ripple/protocol/AccountID.h>
#include <memory>
namespace ripple {
class NetworkOPs;
class Application;
namespace sidechain {
class Federator;
class SidechainListener : public InfoSub,
public ChainListener,
public std::enable_shared_from_this<SidechainListener>
{
Application& app_;
public:
SidechainListener(
Source& source,
AccountID const& account,
std::weak_ptr<Federator>&& federator,
Application& app,
beast::Journal j);
void
init(NetworkOPs& netOPs);
~SidechainListener() = default;
void
send(Json::Value const& msg, bool) override;
void
stopHistoricalTxns(NetworkOPs& netOPs);
void
send(
std::string const& cmd,
Json::Value const& params,
RpcCallback onResponse) override;
};
} // namespace sidechain
} // namespace ripple
#endif


@@ -0,0 +1,353 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <ripple/app/sidechain/impl/SignatureCollector.h>
#include <ripple/app/main/Application.h>
#include <ripple/app/misc/HashRouter.h>
#include <ripple/app/sidechain/Federator.h>
#include <ripple/app/sidechain/impl/SignerList.h>
#include <ripple/basics/Slice.h>
#include <ripple/core/Job.h>
#include <ripple/core/JobQueue.h>
#include <ripple/json/json_value.h>
#include <ripple/json/json_writer.h>
#include <ripple/overlay/Message.h>
#include <ripple/overlay/Overlay.h>
#include <ripple/overlay/Peer.h>
#include <ripple/protocol/STAccount.h>
#include <ripple/protocol/STObject.h>
#include <ripple/protocol/STParsedJSON.h>
#include <ripple/protocol/Serializer.h>
#include <mutex>
namespace ripple {
namespace sidechain {
std::chrono::seconds messageExpire = std::chrono::minutes{10};
uint256
computeMessageSuppression(MessageId const& mId, Slice const& signature)
{
Serializer s(128);
s.addBitString(mId);
s.addVL(signature);
return s.getSHA512Half();
}
SignatureCollector::SignatureCollector(
bool isMainChain,
SecretKey const& mySecKey,
PublicKey const& myPubKey,
beast::abstract_clock<std::chrono::steady_clock>& c,
SignerList& signers,
Federator& federator,
Application& app,
beast::Journal j)
: isMainChain_(isMainChain)
, mySecKey_(mySecKey)
, myPubKey_(myPubKey)
, messages_(c)
, signers_(signers)
, federator_(federator)
, app_(app)
, j_(j)
{
}
void
SignatureCollector::signAndSubmit(Json::Value const& txJson)
{
auto job = [tx = txJson,
myPK = myPubKey_,
mySK = mySecKey_,
chain =
isMainChain_ ? Federator::mainChain : Federator::sideChain,
f = federator_.weak_from_this(),
j = j_]() mutable {
auto federator = f.lock();
if (!federator)
return;
STParsedJSONObject parsed(std::string(jss::tx_json), tx);
if (parsed.object == std::nullopt)
{
JLOGV(j.fatal(), "cannot parse transaction", jv("tx", tx));
assert(0);
return;
}
try
{
parsed.object->setFieldVL(sfSigningPubKey, Slice(nullptr, 0));
STTx tx(std::move(parsed.object.value()));
MessageId mId{tx.getSigningHash()};
Buffer sig{tx.getMultiSignature(calcAccountID(myPK), myPK, mySK)};
federator->getSignatureCollector(chain).processSig(
mId, myPK, std::move(sig), std::move(tx));
}
catch (...)
{
JLOGV(j.fatal(), "invalid transaction", jv("tx", tx));
assert(0);
}
};
app_.getJobQueue().addJob(jtFEDERATORSIGNATURE, "federator signature", job);
}
bool
SignatureCollector::processSig(
MessageId const& mId,
PublicKey const& pk,
Buffer const& sig,
std::optional<STTx> const& txOpt)
{
JLOGV(
j_.trace(),
"processSig",
jv("public key", strHex(pk)),
jv("message", mId));
if (!signers_.isFederator(pk))
{
return false;
}
auto valid = addSig(mId, pk, sig, txOpt);
if (txOpt)
shareSig(mId, sig);
return valid;
}
void
SignatureCollector::expire()
{
std::lock_guard lock(mtx_);
// Never expire collections with this server's sig or submitted txns
for (auto i = messages_.begin(), e = messages_.end(); i != e; ++i)
{
auto const& multiSigMsg = i->second;
if (multiSigMsg.submitted_ ||
std::any_of(
multiSigMsg.sigMaps_.begin(),
multiSigMsg.sigMaps_.end(),
[&](auto const& p) { return p.first == myPubKey_; }))
{
messages_.touch(i);
}
}
beast::expire(messages_, messageExpire);
}
void
SignatureCollector::reshareSigs()
{
std::lock_guard lock(mtx_);
for (auto const& [messageId, multiSigMsg] : messages_)
{
if (multiSigMsg.submitted_)
continue;
for (auto const& [pk, sig] : multiSigMsg.sigMaps_)
{
shareSig(messageId, sig);
}
}
}
bool
SignatureCollector::addSig(
MessageId const& mId,
PublicKey const& pk,
Buffer const& sig,
std::optional<STTx> const& txOpt)
{
JLOGV(
j_.trace(),
"addSig",
jv("message", mId),
jv("public key", strHex(pk)),
jv("sig", strHex(sig)));
std::lock_guard lock(mtx_);
auto txi = messages_.find(mId);
if (txi == messages_.end())
{
PeerSignatureMap sigMaps;
sigMaps.emplace(pk, sig);
MultiSigMessage m{sigMaps, txOpt};
messages_.emplace(mId, std::move(m));
return true;
}
auto const verifySingle = [&](PublicKey const& pk,
Buffer const& sig) -> bool {
Serializer s;
s.add32(HashPrefix::txMultiSign);
(*txi->second.tx_).addWithoutSigningFields(s);
s.addBitString(calcAccountID(pk));
return verify(pk, s.slice(), sig, true);
};
MultiSigMessage& message = txi->second;
if (txOpt)
{
message.tx_.emplace(std::move(*txOpt));
for (auto i = message.sigMaps_.begin(); i != message.sigMaps_.end();)
{
if (verifySingle(i->first, i->second))
++i;
else
{
JLOGV(
j_.trace(),
"verifySingle failed",
jv("public key", strHex(i->first)));
i = message.sigMaps_.erase(i);
}
}
}
else
{
if (message.tx_)
{
if (!verifySingle(pk, sig))
{
JLOGV(
j_.trace(),
"verifySingle failed",
jv("public key", strHex(pk)));
return false;
}
}
}
message.sigMaps_.emplace(pk, sig);
if (!message.submitted_ && message.tx_ &&
message.sigMaps_.size() >= signers_.quorum())
{
// message.submitted_ = true;
submit(mId, lock);
}
return true;
}
void
SignatureCollector::shareSig(MessageId const& mId, Buffer const& sig)
{
JLOGV(j_.trace(), "shareSig", jv("message", mId), jv("sig", strHex(sig)));
std::shared_ptr<Message> toSend = [&]() -> std::shared_ptr<Message> {
protocol::TMFederatorAccountCtrlSignature m;
m.set_chain(isMainChain_ ? ::protocol::fct_MAIN : ::protocol::fct_SIDE);
m.set_publickey(myPubKey_.data(), myPubKey_.size());
m.set_messageid(mId.data(), mId.size());
m.set_signature(sig.data(), sig.size());
return std::make_shared<Message>(
m, protocol::mtFederatorAccountCtrlSignature);
}();
Overlay& overlay = app_.overlay();
HashRouter& hashRouter = app_.getHashRouter();
auto const suppression = computeMessageSuppression(mId, sig);
overlay.foreach([&](std::shared_ptr<Peer> const& p) {
hashRouter.addSuppressionPeer(suppression, p->id());
JLOGV(
j_.trace(),
"sending signature to peer",
jv("pid", p->id()),
jv("mid", mId));
p->send(toSend);
});
}
void
SignatureCollector::submit(
MessageId const& mId,
std::lock_guard<std::mutex> const&)
{
JLOGV(j_.trace(), "submit", jv("message", mId));
assert(messages_.find(mId) != messages_.end());
auto& message = messages_[mId];
assert(!message.submitted_);
message.submitted_ = true;
STArray signatures;
auto sigCount = message.sigMaps_.size();
assert(sigCount >= signers_.quorum());
signatures.reserve(sigCount);
for (auto const& item : message.sigMaps_)
{
STObject obj{sfSigner};
obj[sfAccount] = calcAccountID(item.first);
obj[sfSigningPubKey] = item.first;
obj[sfTxnSignature] = item.second;
signatures.push_back(std::move(obj));
}
std::sort(
signatures.begin(),
signatures.end(),
[](STObject const& lhs, STObject const& rhs) {
return lhs[sfAccount] < rhs[sfAccount];
});
message.tx_->setFieldArray(sfSigners, std::move(signatures));
auto sp = message.tx_->getSeqProxy();
if (sp.isTicket())
{
Json::Value r;
r[jss::tx_blob] = strHex(message.tx_->getSerializer().peekData());
JLOGV(j_.trace(), "submit", jv("tx", r));
auto callback = [&](Json::Value const& response) {
JLOGV(
j_.trace(),
"SignatureCollector::submit ",
jv("response", response));
};
rpcChannel_->send("submit", r, callback);
}
else
{
JLOGV(
j_.trace(),
"forward to federator to submit",
jv("tx", strHex(message.tx_->getSerializer().peekData())));
federator_.addTxToSend(
(isMainChain_ ? Federator::ChainType::mainChain
: Federator::ChainType::sideChain),
sp.value(),
*(message.tx_));
}
}
void
SignatureCollector::setRpcChannel(std::shared_ptr<ChainListener> channel)
{
rpcChannel_ = std::move(channel);
}
} // namespace sidechain
} // namespace ripple


@@ -0,0 +1,149 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_SIDECHAIN_IMPL_SIGNATURE_COLLECTOR_H
#define RIPPLE_SIDECHAIN_IMPL_SIGNATURE_COLLECTOR_H
#include <ripple/app/sidechain/impl/ChainListener.h>
#include <ripple/basics/Buffer.h>
#include <ripple/basics/UnorderedContainers.h>
#include <ripple/beast/clock/abstract_clock.h>
#include <ripple/beast/container/aged_unordered_map.h>
#include <ripple/beast/utility/Journal.h>
#include <ripple/protocol/PublicKey.h>
#include <ripple/protocol/STTx.h>
#include <ripple/protocol/SecretKey.h>
#include <ripple.pb.h>
#include <mutex>
#include <optional>
#include <set>
namespace Json {
class Value;
}
namespace ripple {
class Application;
namespace sidechain {
class SignerList;
class Federator;
using PeerSignatureMap = hash_map<PublicKey, Buffer>;
using MessageId = uint256;
struct MultiSigMessage
{
PeerSignatureMap sigMaps_;
std::optional<STTx> tx_;
bool submitted_ = false;
};
class SignatureCollector
{
std::shared_ptr<ChainListener> rpcChannel_;
bool const isMainChain_;
SecretKey const mySecKey_;
PublicKey const myPubKey_;
mutable std::mutex mtx_;
beast::aged_unordered_map<
MessageId,
MultiSigMessage,
std::chrono::steady_clock,
beast::uhash<>>
GUARDED_BY(mtx_) messages_;
SignerList& signers_;
Federator& federator_;
Application& app_;
beast::Journal j_;
public:
SignatureCollector(
bool isMainChain,
SecretKey const& mySecKey,
PublicKey const& myPubKey,
beast::abstract_clock<std::chrono::steady_clock>& c,
SignerList& signers,
Federator& federator,
Application& app,
beast::Journal j);
/**
 * Sign the tx and share the signature with the network.
 * Once quorum signatures are collected, the tx will be submitted.
 * @param tx the transaction to be signed and later submitted
 */
void
signAndSubmit(Json::Value const& tx);
/**
 * Verify the signature and remember it.
 * If quorum signatures are collected for the same MessageId,
 * a tx will be submitted.
 *
 * @param mId identifies the tx
 * @param pk public key of the signer
 * @param sig the signature
 * @param txOpt the transaction, only used by the local node
 * @return true if the signature is from a federator
 */
bool
processSig(
MessageId const& mId,
PublicKey const& pk,
Buffer const& sig,
std::optional<STTx> const& txOpt);
/**
* remove stale signatures
*/
void
expire() EXCLUDES(mtx_); // TODO retry logic
void
setRpcChannel(std::shared_ptr<ChainListener> channel);
void
reshareSigs() EXCLUDES(mtx_);
private:
// verify a signature (if it is from a peer) and add to a collection
bool
addSig(
MessageId const& mId,
PublicKey const& pk,
Buffer const& sig,
std::optional<STTx> const& txOpt) EXCLUDES(mtx_);
// share a signature to the network
void
shareSig(MessageId const& mId, Buffer const& sig);
// submit a tx once it has collected quorum signatures
void
submit(MessageId const& mId, std::lock_guard<std::mutex> const&)
REQUIRES(mtx_);
};
} // namespace sidechain
} // namespace ripple
#endif


@@ -0,0 +1,51 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <ripple/app/sidechain/impl/SignerList.h>
namespace ripple {
namespace sidechain {
SignerList::SignerList(
AccountID const& account,
hash_set<PublicKey> const& signers,
beast::Journal j)
: account_(account)
, signers_(signers)
, quorum_(static_cast<std::uint32_t>(std::ceil(signers.size() * 0.8)))
, j_(j)
{
(void)j_;
}
bool
SignerList::isFederator(PublicKey const& pk) const
{
std::lock_guard<std::mutex> lock(mtx_);
return signers_.find(pk) != signers_.end();
}
std::uint32_t
SignerList::quorum() const
{
std::lock_guard<std::mutex> lock(mtx_);
return quorum_;
}
} // namespace sidechain
} // namespace ripple


@@ -0,0 +1,63 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_SIDECHAIN_IMPL_SIGNER_LIST_H
#define RIPPLE_SIDECHAIN_IMPL_SIGNER_LIST_H
#include <ripple/basics/ThreadSaftyAnalysis.h>
#include <ripple/basics/UnorderedContainers.h>
#include <ripple/beast/utility/Journal.h>
#include <ripple/protocol/AccountID.h>
#include <ripple/protocol/PublicKey.h>
#include <ripple.pb.h>
#include <mutex>
namespace ripple {
namespace sidechain {
/**
* grow to handle signer list changes
*/
class SignerList
{
AccountID const account_;
mutable std::mutex mtx_;
hash_set<PublicKey> GUARDED_BY(mtx_) signers_;
std::uint32_t GUARDED_BY(mtx_) quorum_;
beast::Journal j_;
public:
SignerList(
AccountID const& account,
hash_set<PublicKey> const& signers,
beast::Journal j);
~SignerList() = default;
bool
isFederator(PublicKey const& pk) const EXCLUDES(mtx_);
std::uint32_t
quorum() const EXCLUDES(mtx_);
};
} // namespace sidechain
} // namespace ripple
#endif


@@ -0,0 +1,791 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <ripple/app/sidechain/impl/TicketHolder.h>
#include <ripple/app/sidechain/Federator.h>
#include <ripple/app/sidechain/impl/SignatureCollector.h>
#include <ripple/basics/Log.h>
#include <ripple/basics/StringUtilities.h>
#include <ripple/json/json_writer.h>
#include <ripple/protocol/STParsedJSON.h>
namespace ripple {
namespace sidechain {
std::string
TicketPurposeToStr(TicketPurpose tp)
{
switch (tp)
{
case TicketPurpose::mainDoorKeeper:
return "mainDoorKeeper";
case TicketPurpose::sideDoorKeeper:
return "sideDoorKeeper";
case TicketPurpose::updateSignerList:
return "updateSignerList";
default:
break;
}
return "unknown";
}
TicketHolder::TicketHolder(
bool isMainChain,
AccountID const& account,
Federator& federator,
beast::Journal j)
: isMainChain_(isMainChain)
, accountStr_(toBase58(account))
, federator_(federator)
, j_(j)
{
}
void
TicketHolder::init()
{
std::lock_guard lock(mtx_);
if (initData_.status_ != InitializeStatus::waitLedger)
return;
initData_.status_ = InitializeStatus::waitAccountObject;
rpcAccountObject();
}
std::optional<std::uint32_t>
TicketHolder::getTicket(TicketPurpose purpose, PeekOrTake pt)
{
std::lock_guard lock(mtx_);
if (initData_.status_ != InitializeStatus::initialized)
{
JLOGV(
j_.debug(),
"TicketHolder getTicket but ticket holder not initialized",
jv("chain", (isMainChain_ ? "main" : "side")),
jv("purpose", TicketPurposeToStr(purpose)));
if (initData_.status_ == InitializeStatus::needToQueryTx)
rpcTx(lock);
return {};
}
auto index = static_cast<std::underlying_type_t<TicketPurpose>>(purpose);
if (tickets_[index].status_ == AutoRenewedTicket::Status::available)
{
if (pt == PeekOrTake::take)
{
JLOGV(
j_.trace(),
"getTicket",
jv("chain", (isMainChain_ ? "main" : "side")),
jv("seq", tickets_[index].seq_));
tickets_[index].status_ = AutoRenewedTicket::Status::taken;
}
return tickets_[index].seq_;
}
if (pt == PeekOrTake::take)
{
JLOGV(
j_.trace(),
"getTicket",
jv("chain", (isMainChain_ ? "main" : "side")),
jv("no ticket for ", TicketPurposeToStr(purpose)));
}
return {};
}
void
TicketHolder::onEvent(event::TicketCreateResult const& e)
{
std::lock_guard lock(mtx_);
if (initData_.status_ != InitializeStatus::initialized)
{
JLOG(j_.trace()) << "TicketHolder queues an event";
initData_.toReplay_.emplace(e);
return;
}
processEvent(e, lock);
}
void
TicketHolder::onEvent(event::BootstrapTicket const& e)
{
std::lock_guard lock(mtx_);
if (initData_.status_ != InitializeStatus::initialized)
{
JLOG(j_.trace()) << "TicketHolder queues an event";
initData_.bootstrapTicketToReplay_.emplace(e);
return;
}
processEvent(e, lock);
}
Json::Value
TicketHolder::getInfo() const
{
Json::Value ret{Json::objectValue};
{
std::lock_guard lock{mtx_};
if (initData_.status_ == InitializeStatus::initialized)
{
ret["initialized"] = "true";
Json::Value tickets{Json::arrayValue};
for (auto const& t : tickets_)
{
Json::Value tj{Json::objectValue};
tj["ticket_seq"] = t.seq_;
tj["status"] = t.status_ == AutoRenewedTicket::Status::taken
? "taken"
: "available";
tickets.append(tj);
}
ret["tickets"] = tickets;
}
else
{
ret["initialized"] = "false";
}
}
return ret;
}
void
TicketHolder::rpcAccountObject()
{
Json::Value params = [&] {
Json::Value r;
r[jss::account] = accountStr_;
r[jss::ledger_index] = "validated";
r[jss::type] = "ticket";
r[jss::limit] = 250;
return r;
}();
rpcChannel_->send(
"account_objects",
params,
[isMainChain = isMainChain_,
f = federator_.weak_from_this()](Json::Value const& response) {
auto federator = f.lock();
if (!federator)
return;
federator->getTicketRunner().accountObjectResult(
isMainChain, response);
});
}
void
TicketHolder::accountObjectResult(Json::Value const& rpcResult)
{
auto ledgerNAccountObjectOpt =
[&]() -> std::optional<std::pair<std::uint32_t, Json::Value>> {
try
{
if (rpcResult.isMember(jss::error))
{
return {};
}
if (!rpcResult[jss::validated].asBool())
{
return {};
}
if (rpcResult[jss::account] != accountStr_)
{
return {};
}
if (!rpcResult[jss::ledger_index].isIntegral())
{
return {};
}
if (!rpcResult.isMember(jss::account_objects) ||
!rpcResult[jss::account_objects].isArray())
{
return {};
}
return std::make_pair(
rpcResult[jss::ledger_index].asUInt(),
rpcResult[jss::account_objects]);
}
catch (...)
{
return {};
}
}();
if (!ledgerNAccountObjectOpt)
{
// Should not reach here: we only request account_objects after a
// validated ledger.
JLOGV(j_.error(), "AccountObject", jv("result", rpcResult));
assert(false);
return;
}
auto& [ledgerIndex, accountObjects] = *ledgerNAccountObjectOpt;
std::lock_guard<std::mutex> lock(mtx_);
if (initData_.status_ != InitializeStatus::waitAccountObject)
{
JLOG(j_.warn()) << "unexpected AccountObject";
return;
}
initData_.ledgerIndex_ = ledgerIndex;
for (auto const& o : accountObjects)
{
if (!o.isMember("LedgerEntryType") ||
o["LedgerEntryType"] != jss::Ticket)
continue;
// the following fields are mandatory
uint256 txHash;
if (!txHash.parseHex(o["PreviousTxnID"].asString()))
{
JLOGV(
j_.error(),
"AccountObject cannot parse tx hash",
jv("result", rpcResult));
assert(false);
return;
}
std::uint32_t ticketSeq = o["TicketSequence"].asUInt();
if (initData_.tickets_.find(txHash) != initData_.tickets_.end())
{
JLOGV(
j_.error(),
"AccountObject duplicate tx hash",
jv("result", rpcResult));
assert(false);
return;
}
initData_.tickets_.emplace(txHash, ticketSeq);
JLOGV(
j_.trace(),
"AccountObject, add",
jv("tx hash", txHash),
jv("ticketSeq", ticketSeq));
}
if (initData_.tickets_.empty())
{
JLOG(j_.debug()) << "Door account has no tickets in current ledger, "
"unlikely but possible";
replay(lock);
}
else
{
rpcTx(lock);
}
}
void
TicketHolder::rpcTx(std::lock_guard<std::mutex> const&)
{
assert(!initData_.tickets_.empty());
initData_.status_ = InitializeStatus::waitTx;
for (auto const& t : initData_.tickets_)
{
JLOG(j_.trace()) << "TicketHolder query tx " << t.first;
Json::Value params;
params[jss::transaction] = strHex(t.first);
rpcChannel_->send(
"tx",
params,
[isMainChain = isMainChain_,
f = federator_.weak_from_this()](Json::Value const& response) {
auto federator = f.lock();
if (!federator)
return;
federator->getTicketRunner().txResult(isMainChain, response);
});
}
}
void
TicketHolder::txResult(Json::Value const& rpcResult)
{
std::lock_guard<std::mutex> lock(mtx_);
if (initData_.status_ != InitializeStatus::waitTx &&
initData_.status_ != InitializeStatus::needToQueryTx)
return;
auto txOpt = [&]() -> std::optional<std::pair<TicketPurpose, uint256>> {
try
{
if (rpcResult.isMember(jss::error))
{
return {};
}
if (rpcResult[jss::Account] != accountStr_)
{
return {};
}
if (rpcResult[jss::TransactionType] != "TicketCreate")
{
return {};
}
if (!rpcResult["SourceTag"].isIntegral())
{
return {};
}
std::uint32_t tp = rpcResult["SourceTag"].asUInt();
if (tp >= static_cast<std::underlying_type_t<TicketPurpose>>(
TicketPurpose::TP_NumberOfItems))
{
return {};
}
uint256 txHash;
if (!txHash.parseHex(rpcResult[jss::hash].asString()))
{
return {};
}
return std::make_pair(static_cast<TicketPurpose>(tp), txHash);
}
catch (...)
{
return {};
}
}();
if (!txOpt)
{
JLOGV(
j_.warn(),
"TicketCreate cannot be found or has the wrong format",
jv("result", rpcResult));
if (initData_.status_ == InitializeStatus::waitTx)
initData_.status_ = InitializeStatus::needToQueryTx;
return;
}
auto [tPurpose, txHash] = *txOpt;
if (initData_.tickets_.find(txHash) == initData_.tickets_.end())
{
JLOGV(
j_.debug(),
"Repeated TicketCreate tx result",
jv("result", rpcResult));
return;
}
auto& ticket = initData_.tickets_[txHash];
JLOGV(
j_.trace(),
"TicketHolder txResult",
jv("purpose", TicketPurposeToStr(tPurpose)),
jv("txHash", txHash));
auto index = static_cast<std::underlying_type_t<TicketPurpose>>(tPurpose);
tickets_[index].seq_ = ticket;
tickets_[index].status_ = AutoRenewedTicket::Status::available;
initData_.tickets_.erase(txHash);
if (initData_.tickets_.empty())
{
replay(lock);
}
}
void
TicketHolder::replay(std::lock_guard<std::mutex> const& lock)
{
assert(initData_.tickets_.empty());
// replay bootstrap tickets first if any
while (!initData_.bootstrapTicketToReplay_.empty())
{
auto e = initData_.bootstrapTicketToReplay_.front();
processEvent(e, lock);
initData_.bootstrapTicketToReplay_.pop();
}
while (!initData_.toReplay_.empty())
{
auto e = initData_.toReplay_.front();
processEvent(e, lock);
initData_.toReplay_.pop();
}
initData_.status_ = InitializeStatus::initialized;
JLOG(j_.info()) << "TicketHolder initialized";
}
template <class E>
void
TicketHolder::processEvent(E const& e, std::lock_guard<std::mutex> const&)
{
std::uint32_t const tSeq = e.txnSeq_ + 1;
if (e.sourceTag_ >= static_cast<std::underlying_type_t<TicketPurpose>>(
TicketPurpose::TP_NumberOfItems))
{
JLOGV(
j_.error(),
"Wrong sourceTag",
jv("chain", (isMainChain_ ? "main" : "side")),
jv("sourceTag", e.sourceTag_));
assert(false);
return;
}
auto purposeStr =
TicketPurposeToStr(static_cast<TicketPurpose>(e.sourceTag_));
if (e.ledgerIndex_ <= initData_.ledgerIndex_)
{
JLOGV(
j_.trace(),
"TicketHolder, ignoring an old ticket",
jv("chain", (isMainChain_ ? "main" : "side")),
jv("ticket seq", tSeq),
jv("purpose", purposeStr));
return;
}
if (!e.success_)
{
JLOGV(
j_.error(),
"CreateTicket failed",
jv("chain", (isMainChain_ ? "main" : "side")),
jv("ticket seq", tSeq),
jv("purpose", purposeStr));
assert(false);
return;
}
JLOGV(
j_.trace(),
"TicketHolder, got a ticket",
jv("chain", (isMainChain_ ? "main" : "side")),
jv("ticket seq", tSeq),
jv("purpose", purposeStr));
std::uint32_t const ticketPurposeToIndex = e.sourceTag_;
if (e.eventType() == event::EventType::bootstrap &&
tickets_[ticketPurposeToIndex].seq_ != 0)
{
JLOGV(
j_.error(),
"Got a bootstrap ticket too late",
jv("chain", (isMainChain_ ? "main" : "side")),
jv("ticket seq", tSeq),
jv("purpose", purposeStr));
assert(false);
return;
}
tickets_[ticketPurposeToIndex].seq_ = tSeq;
tickets_[ticketPurposeToIndex].status_ =
AutoRenewedTicket::Status::available;
}
void
TicketHolder::setRpcChannel(std::shared_ptr<ChainListener> channel)
{
rpcChannel_ = std::move(channel);
}
TicketRunner::TicketRunner(
const AccountID& mainAccount,
const AccountID& sideAccount,
Federator& federator,
beast::Journal j)
: mainAccountStr_(toBase58(mainAccount))
, sideAccountStr_(toBase58(sideAccount))
, federator_(federator)
, mainHolder_(true, mainAccount, federator, j)
, sideHolder_(false, sideAccount, federator, j)
, j_(j)
{
}
void
TicketRunner::setRpcChannel(
bool isMainChain,
std::shared_ptr<ChainListener> channel)
{
if (isMainChain)
mainHolder_.setRpcChannel(std::move(channel));
else
sideHolder_.setRpcChannel(std::move(channel));
}
void
TicketRunner::init(bool isMainChain)
{
if (isMainChain)
mainHolder_.init();
else
sideHolder_.init();
}
void
TicketRunner::accountObjectResult(
bool isMainChain,
Json::Value const& rpcResult)
{
if (isMainChain)
mainHolder_.accountObjectResult(rpcResult);
else
sideHolder_.accountObjectResult(rpcResult);
}
void
TicketRunner::txResult(bool isMainChain, Json::Value const& rpcResult)
{
if (isMainChain)
mainHolder_.txResult(rpcResult);
else
sideHolder_.txResult(rpcResult);
}
bool
TicketRunner::trigger(
TicketPurpose purpose,
std::optional<Json::Value> const& mainTxJson,
std::optional<Json::Value> const& sideTxJson)
{
if (!mainTxJson && !sideTxJson)
{
assert(false);
return false;
}
auto ticketPair =
[&]() -> std::optional<std::pair<std::uint32_t, std::uint32_t>> {
std::lock_guard<std::mutex> lock(mtx_);
if (!mainHolder_.getTicket(purpose, TicketHolder::PeekOrTake::peek) ||
!sideHolder_.getTicket(purpose, TicketHolder::PeekOrTake::peek))
{
JLOG(j_.trace()) << "TicketRunner tickets not ready";
return {};
}
auto mainTicket =
mainHolder_.getTicket(purpose, TicketHolder::PeekOrTake::take);
auto sideTicket =
sideHolder_.getTicket(purpose, TicketHolder::PeekOrTake::take);
assert(mainTicket && sideTicket);
JLOGV(
j_.trace(),
"TicketRunner trigger",
jv("main ticket", *mainTicket),
jv("side ticket", *sideTicket),
jv("purpose", TicketPurposeToStr(purpose)));
return {{*mainTicket, *sideTicket}};
}();
if (!ticketPair)
return false;
auto sendTriggerTx = [&](std::string const& accountStr,
std::uint32_t ticketSequence,
std::optional<Json::Value> const& memoJson,
SignatureCollector& signatureCollector) {
XRPAmount const fee{Federator::accountControlTxFee};
Json::Value txJson;
txJson[jss::TransactionType] = "AccountSet";
txJson[jss::Account] = accountStr;
txJson[jss::Sequence] = 0;
txJson[jss::Fee] = to_string(fee);
txJson["SourceTag"] =
static_cast<std::underlying_type_t<TicketPurpose>>(purpose);
txJson["TicketSequence"] = ticketSequence;
if (memoJson)
{
Serializer s;
try
{
STParsedJSONObject parsed(std::string(jss::tx_json), *memoJson);
if (!parsed.object)
{
JLOGV(
j_.fatal(), "invalid transaction", jv("tx", *memoJson));
assert(0);
return;
}
parsed.object->setFieldVL(sfSigningPubKey, Slice(nullptr, 0));
parsed.object->add(s);
}
catch (...)
{
JLOGV(j_.fatal(), "invalid transaction", jv("tx", *memoJson));
assert(0);
return;
}
Json::Value memos{Json::arrayValue};
Json::Value memo;
auto const dataStr = strHex(s.peekData());
memo[jss::Memo][jss::MemoData] = dataStr;
memos.append(memo);
txJson[jss::Memos] = memos;
JLOGV(
j_.trace(),
"TicketRunner",
jv("tx", *memoJson),
jv("tx packed", dataStr),
jv("packed size", dataStr.length()));
assert(
memo[jss::Memo][jss::MemoData].asString().length() <=
event::MemoStringMax);
}
signatureCollector.signAndSubmit(txJson);
};
sendTriggerTx(
mainAccountStr_,
ticketPair->first,
mainTxJson,
federator_.getSignatureCollector(Federator::ChainType::mainChain));
sendTriggerTx(
sideAccountStr_,
ticketPair->second,
sideTxJson,
federator_.getSignatureCollector(Federator::ChainType::sideChain));
return true;
}
void
TicketRunner::onEvent(
std::uint32_t accountSeq,
const event::TicketCreateTrigger& e)
{
Json::Value txJson;
XRPAmount const fee{Federator::accountControlTxFee};
txJson[jss::TransactionType] = "TicketCreate";
txJson[jss::Account] =
e.dir_ == event::Dir::mainToSide ? sideAccountStr_ : mainAccountStr_;
txJson[jss::Sequence] = accountSeq;
txJson[jss::Fee] = to_string(fee);
txJson["TicketCount"] = 1;
txJson["SourceTag"] = e.sourceTag_;
{
Json::Value memos{Json::arrayValue};
{
Json::Value memo;
memo[jss::Memo][jss::MemoData] = to_string(e.txnHash_);
memos.append(memo);
}
if (!e.memoStr_.empty())
{
Json::Value memo;
memo[jss::Memo][jss::MemoData] = e.memoStr_;
memos.append(memo);
}
txJson[jss::Memos] = memos;
}
JLOGV(
j_.trace(),
"TicketRunner TicketTriggerDetected",
jv("chain", (e.dir_ == event::Dir::mainToSide ? "main" : "side")),
jv("seq", accountSeq),
jv("CreateTicket tx", txJson));
if (e.dir_ == event::Dir::mainToSide)
federator_.getSignatureCollector(Federator::ChainType::sideChain)
.signAndSubmit(txJson);
else
federator_.getSignatureCollector(Federator::ChainType::mainChain)
.signAndSubmit(txJson);
}
void
TicketRunner::onEvent(
std::uint32_t accountSeq,
const event::TicketCreateResult& e)
{
auto const [fromChain, toChain] = e.dir_ == event::Dir::mainToSide
? std::make_pair(Federator::sideChain, Federator::mainChain)
: std::make_pair(Federator::mainChain, Federator::sideChain);
auto ticketSeq = e.txnSeq_ + 1;
JLOGV(
j_.trace(),
"TicketRunner CreateTicketResult",
jv("chain",
(fromChain == Federator::ChainType::mainChain ? "main" : "side")),
jv("ticket seq", ticketSeq));
if (fromChain == Federator::ChainType::mainChain)
mainHolder_.onEvent(e);
else
sideHolder_.onEvent(e);
federator_.addSeqToSkip(fromChain, ticketSeq);
if (accountSeq)
{
assert(!e.memoStr_.empty());
auto txData = strUnHex(e.memoStr_);
if (!txData || !txData->size())
{
assert(false);
return;
}
SerialIter sitTrans(makeSlice(*txData));
STTx tx(sitTrans);
tx.setFieldU32(sfSequence, accountSeq);
auto txJson = tx.getJson(JsonOptions::none);
// trigger hash
Json::Value memos{Json::arrayValue};
{
Json::Value memo;
memo[jss::Memo][jss::MemoData] = to_string(e.txnHash_);
memos.append(memo);
}
txJson[jss::Memos] = memos;
JLOGV(
j_.trace(),
"TicketRunner AccountControlTrigger",
jv("chain",
(toChain == Federator::ChainType::mainChain ? "main" : "side")),
jv("tx with added memos", txJson.toStyledString()));
federator_.getSignatureCollector(toChain).signAndSubmit(txJson);
}
}
void
TicketRunner::onEvent(const event::BootstrapTicket& e)
{
auto ticketSeq = e.txnSeq_ + 1;
JLOGV(
j_.trace(),
"TicketRunner BootstrapTicket",
jv("chain", (e.isMainchain_ ? "main" : "side")),
jv("ticket seq", ticketSeq));
if (e.isMainchain_)
mainHolder_.onEvent(e);
else
sideHolder_.onEvent(e);
}
Json::Value
TicketRunner::getInfo(bool isMainchain) const
{
if (isMainchain)
return mainHolder_.getInfo();
else
return sideHolder_.getInfo();
}
} // namespace sidechain
} // namespace ripple
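The trigger step in `TicketRunner::trigger` above pins `Sequence` to 0 because the no-op AccountSet is consumed via `TicketSequence` rather than the account's own sequence. A minimal sketch of the fields it sets (hypothetical helper `makeTriggerTx`, with a plain `std::map` standing in for `Json::Value`):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Hypothetical helper: models only the fields TicketRunner::trigger sets
// on the ticketed no-op AccountSet (a std::map stands in for Json::Value).
std::map<std::string, std::string>
makeTriggerTx(
    std::string const& account,
    std::uint32_t ticketSeq,
    std::uint32_t sourceTag)
{
    return {
        {"TransactionType", "AccountSet"},
        {"Account", account},
        // Sequence is 0 because the tx consumes a ticket instead
        {"Sequence", "0"},
        {"TicketSequence", std::to_string(ticketSeq)},
        {"SourceTag", std::to_string(sourceTag)},
    };
}
```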


@@ -0,0 +1,255 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_SIDECHAIN_IMPL_TICKET_BOOTH_H
#define RIPPLE_SIDECHAIN_IMPL_TICKET_BOOTH_H
#include <ripple/app/sidechain/FederatorEvents.h>
#include <ripple/app/sidechain/impl/ChainListener.h>
#include <ripple/basics/UnorderedContainers.h>
#include <ripple/basics/base_uint.h>
#include <ripple/beast/utility/Journal.h>
#include <ripple/json/json_value.h>
#include <ripple/protocol/AccountID.h>
#include <limits>
#include <mutex>
#include <optional>
#include <queue>
#include <string>
namespace ripple {
namespace sidechain {
class Federator;
enum class TicketPurpose : std::uint32_t {
mainDoorKeeper,
sideDoorKeeper,
updateSignerList,
TP_NumberOfItems
};
std::string
TicketPurposeToStr(TicketPurpose tp);
struct AutoRenewedTicket
{
enum class Status { available, taken };
std::uint32_t seq_;
Status status_;
AutoRenewedTicket() : seq_(0), status_(Status::taken)
{
}
};
class TicketHolder
{
enum class InitializeStatus {
waitLedger,
waitAccountObject,
waitTx,
needToQueryTx,
initialized
};
struct InitializeData
{
InitializeStatus status_ = InitializeStatus::waitLedger;
hash_map<uint256, std::uint32_t> tickets_;
std::queue<event::TicketCreateResult> toReplay_;
std::queue<event::BootstrapTicket> bootstrapTicketToReplay_;
std::uint32_t ledgerIndex_ = 0;
};
std::shared_ptr<ChainListener> rpcChannel_;
bool isMainChain_;
std::string const accountStr_;
AutoRenewedTicket
tickets_[static_cast<std::underlying_type_t<TicketPurpose>>(
TicketPurpose::TP_NumberOfItems)];
InitializeData initData_;
Federator& federator_;
beast::Journal j_;
mutable std::mutex mtx_;
public:
TicketHolder(
bool isMainChain,
AccountID const& account,
Federator& federator,
beast::Journal j);
/**
* Start initializing the ticketHolder by sending an accountObject RPC.
*/
void
init() EXCLUDES(mtx_);
/**
* process the accountObject result and find the tickets.
* Initialization is not yet complete, because a ticket ledger object
* carries no information about its purpose.
* The purpose is recorded in the TicketCreate tx that created the ticket,
* so the ticketHolder queries the TicketCreate tx for each ticket found.
* @param rpcResult accountObject result
*/
void
accountObjectResult(Json::Value const& rpcResult) EXCLUDES(mtx_);
/**
* process a tx RPC result.
* Initialization is completed once all TicketCreate txns are found,
* one for every ticket found in the previous initialization step.
* @param rpcResult tx result
*/
void
txResult(Json::Value const& rpcResult) EXCLUDES(mtx_);
enum class PeekOrTake { peek, take };
/**
* take or peek the ticket for a purpose
* @param purpose the ticket purpose
* @param pt whether to take or peek
* @return the ticket if it exists and has not been taken
*/
std::optional<std::uint32_t>
getTicket(TicketPurpose purpose, PeekOrTake pt) EXCLUDES(mtx_);
/**
* process a TicketCreateResult event, update the ticket number and status
* It queues the event if the ticketHolder is not yet initialized.
*
* @param e the TicketCreateResult event
*/
void
onEvent(event::TicketCreateResult const& e) EXCLUDES(mtx_);
/**
* process a ticket created during network bootstrap
* @param e the BootstrapTicket event
*/
void
onEvent(event::BootstrapTicket const& e) EXCLUDES(mtx_);
Json::Value
getInfo() const EXCLUDES(mtx_);
void
setRpcChannel(std::shared_ptr<ChainListener> channel);
private:
void
rpcAccountObject();
void
rpcTx(std::lock_guard<std::mutex> const&) REQUIRES(mtx_);
// replay accumulated events before finishing initialization
void
replay(std::lock_guard<std::mutex> const&) REQUIRES(mtx_);
template <class E>
void
processEvent(E const& e, std::lock_guard<std::mutex> const&) REQUIRES(mtx_);
};
class TicketRunner
{
std::string const mainAccountStr_;
std::string const sideAccountStr_;
Federator& federator_;
TicketHolder mainHolder_;
TicketHolder sideHolder_;
beast::Journal j_;
// Only one thread at a time can grab tickets
mutable std::mutex mtx_;
public:
TicketRunner(
AccountID const& mainAccount,
AccountID const& sideAccount,
Federator& federator,
beast::Journal j);
// set RpcChannel for a ticketHolder
void
setRpcChannel(bool isMainChain, std::shared_ptr<ChainListener> channel);
// init a ticketHolder
void
init(bool isMainChain);
// pass an accountObject RPC result to a ticketHolder
void
accountObjectResult(bool isMainChain, Json::Value const& rpcResult);
// pass a tx RPC result to a ticketHolder
void
txResult(bool isMainChain, Json::Value const& rpcResult);
/**
* Start a protocol that submits a federator account control tx
* to the network.
*
* Compared to a normal tx submission, which takes one step, a federator
* account control tx (such as depositAuth and signerListSet) takes 3 steps:
* 1. use a ticket to send an AccountSet no-op tx as a trigger
* 2. create a new ticket
* 3. submit the account control tx
*
* @param ticketPurpose the purpose of the ticket. The purpose describes
* the account control tx use case.
* @param mainTxJson account control tx for the main chain
* @param sideTxJson account control tx for the side chain
* @note mainTxJson and sideTxJson cannot both be empty
* @return true if the protocol started
*/
[[nodiscard]] bool
trigger(
TicketPurpose ticketPurpose,
std::optional<Json::Value> const& mainTxJson,
std::optional<Json::Value> const& sideTxJson) EXCLUDES(mtx_);
/**
* process a TicketCreateTrigger event by submitting a TicketCreate tx
*
* This event is generated when the AccountSet no-op tx
* (the protocol trigger) appears in the tx stream,
* i.e. ordered together with regular cross-chain payments.
*/
void
onEvent(std::uint32_t accountSeq, event::TicketCreateTrigger const& e);
/**
* process a TicketCreateResult event, update the ticketHolder.
*
* This event is generated when the TicketCreate tx appears
* in the tx stream.
*/
void
onEvent(std::uint32_t accountSeq, event::TicketCreateResult const& e);
/**
* process a ticket created during network bootstrap
*/
void
onEvent(event::BootstrapTicket const& e);
Json::Value
getInfo(bool isMainchain) const;
};
} // namespace sidechain
} // namespace ripple
#endif
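The `TicketHolder::getTicket` peek/take contract above is what lets `TicketRunner::trigger` check both chains before consuming either ticket. A standalone model of that contract (simplified: no initialization states, no locking):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <optional>

// Simplified model of TicketHolder's peek/take contract: one auto-renewed
// ticket slot per purpose; peek checks availability, take consumes.
enum class TicketPurpose : std::uint32_t {
    mainDoorKeeper,
    sideDoorKeeper,
    updateSignerList,
    TP_NumberOfItems
};
enum class PeekOrTake { peek, take };

class TicketHolderModel
{
    struct Slot
    {
        std::uint32_t seq = 0;
        bool taken = true;  // matches AutoRenewedTicket's initial state
    };
    std::array<Slot, static_cast<std::size_t>(TicketPurpose::TP_NumberOfItems)>
        slots_;

public:
    void
    addTicket(TicketPurpose p, std::uint32_t seq)
    {
        auto& s = slots_[static_cast<std::size_t>(p)];
        s.seq = seq;
        s.taken = false;
    }

    std::optional<std::uint32_t>
    getTicket(TicketPurpose p, PeekOrTake pt)
    {
        auto& s = slots_[static_cast<std::size_t>(p)];
        if (s.taken)
            return std::nullopt;
        if (pt == PeekOrTake::take)
            s.taken = true;
        return s.seq;
    }
};
```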


@@ -0,0 +1,177 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2016 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <ripple/app/sidechain/impl/WebsocketClient.h>
#include <ripple/basics/Log.h>
#include <ripple/json/Output.h>
#include <ripple/json/json_reader.h>
#include <ripple/json/json_writer.h>
#include <ripple/json/to_string.h>
#include <ripple/protocol/jss.h>
#include <ripple/server/Port.h>
#include <boost/beast/websocket.hpp>
#include <condition_variable>
#include <string>
#include <unordered_map>
#include <iostream>
namespace ripple {
namespace sidechain {
template <class ConstBuffers>
std::string
WebsocketClient::buffer_string(ConstBuffers const& b)
{
using boost::asio::buffer;
using boost::asio::buffer_size;
std::string s;
s.resize(buffer_size(b));
buffer_copy(buffer(&s[0], s.size()), b);
return s;
}
void
WebsocketClient::cleanup()
{
ios_.post(strand_.wrap([this] {
if (!peerClosed_)
{
{
std::lock_guard l{m_};
ws_.async_close({}, strand_.wrap([&](error_code ec) {
stream_.cancel(ec);
std::lock_guard l(shutdownM_);
isShutdown_ = true;
shutdownCv_.notify_one();
}));
}
}
else
{
std::lock_guard<std::mutex> l(shutdownM_);
isShutdown_ = true;
shutdownCv_.notify_one();
}
}));
}
void
WebsocketClient::shutdown()
{
cleanup();
std::unique_lock l{shutdownM_};
shutdownCv_.wait(l, [this] { return isShutdown_; });
}
WebsocketClient::WebsocketClient(
std::function<void(Json::Value const&)> callback,
boost::asio::io_service& ios,
boost::asio::ip::address const& ip,
std::uint16_t port,
std::unordered_map<std::string, std::string> const& headers,
beast::Journal j)
: ios_(ios)
, strand_(ios_)
, stream_(ios_)
, ws_(stream_)
, callback_(callback)
, j_{j}
{
try
{
boost::asio::ip::tcp::endpoint const ep{ip, port};
stream_.connect(ep);
ws_.set_option(boost::beast::websocket::stream_base::decorator(
[&](boost::beast::websocket::request_type& req) {
for (auto const& h : headers)
req.set(h.first, h.second);
}));
ws_.handshake(
ep.address().to_string() + ":" + std::to_string(ep.port()), "/");
ws_.async_read(
rb_,
strand_.wrap(std::bind(
&WebsocketClient::onReadMsg, this, std::placeholders::_1)));
}
catch (std::exception&)
{
cleanup();
Rethrow();
}
}
WebsocketClient::~WebsocketClient()
{
cleanup();
}
std::uint32_t
WebsocketClient::send(std::string const& cmd, Json::Value params)
{
params[jss::method] = cmd;
params[jss::jsonrpc] = "2.0";
params[jss::ripplerpc] = "2.0";
auto const id = nextId_++;
params[jss::id] = id;
auto const s = to_string(params);
std::lock_guard l{m_};
ws_.write_some(true, boost::asio::buffer(s));
return id;
}
void
WebsocketClient::onReadMsg(error_code const& ec)
{
if (ec)
{
JLOGV(j_.trace(), "WebsocketClient::onReadMsg error", jv("ec", ec));
if (ec == boost::beast::websocket::error::closed)
peerClosed_ = true;
return;
}
Json::Value jval;
Json::Reader jr;
jr.parse(buffer_string(rb_.data()), jval);
rb_.consume(rb_.size());
JLOGV(j_.trace(), "WebsocketClient::onReadMsg", jv("msg", jval));
callback_(jval);
std::lock_guard l{m_};
ws_.async_read(
rb_,
strand_.wrap(std::bind(
&WebsocketClient::onReadMsg, this, std::placeholders::_1)));
}
// Called when the read op terminates
void
WebsocketClient::onReadDone()
{
}
} // namespace sidechain
} // namespace ripple
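`WebsocketClient::send` wraps each command in a JSON-RPC-style envelope with a monotonically increasing `id`. A dependency-free sketch of that framing (serialization is hand-rolled here purely for illustration; the real code uses `Json::Value` and `to_string`):

```cpp
#include <cassert>
#include <cstdint>
#include <sstream>
#include <string>
#include <utility>

// Sketch of the envelope WebsocketClient::send builds. The returned id is
// what the caller matches against the eventual response.
class RequestFramer
{
    std::uint32_t nextId_ = 0;

public:
    std::pair<std::uint32_t, std::string>
    frame(std::string const& cmd)
    {
        auto const id = nextId_++;
        std::ostringstream os;
        os << R"({"method":")" << cmd
           << R"(","jsonrpc":"2.0","ripplerpc":"2.0","id":)" << id << '}';
        return {id, os.str()};
    }
};
```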


@@ -0,0 +1,105 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_SIDECHAIN_IO_WEBSOCKET_CLIENT_H_INCLUDED
#define RIPPLE_SIDECHAIN_IO_WEBSOCKET_CLIENT_H_INCLUDED
#include <ripple/basics/ThreadSaftyAnalysis.h>
#include <ripple/core/Config.h>
#include <boost/asio/io_service.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/strand.hpp>
#include <boost/beast/core/multi_buffer.hpp>
#include <boost/beast/websocket/stream.hpp>
#include <boost/optional.hpp>
#include <chrono>
#include <condition_variable>
#include <functional>
#include <memory>
namespace ripple {
namespace sidechain {
class WebsocketClient
{
using error_code = boost::system::error_code;
template <class ConstBuffers>
static std::string
buffer_string(ConstBuffers const& b);
// mutex for ws_
std::mutex m_;
// mutex for shutdown
std::mutex shutdownM_;
bool isShutdown_ = false;
std::condition_variable shutdownCv_;
boost::asio::io_service& ios_;
boost::asio::io_service::strand strand_;
boost::asio::ip::tcp::socket stream_;
boost::beast::websocket::stream<boost::asio::ip::tcp::socket&> GUARDED_BY(
m_) ws_;
boost::beast::multi_buffer rb_;
std::atomic<bool> peerClosed_{false};
std::function<void(Json::Value const&)> callback_;
std::atomic<std::uint32_t> nextId_{0};
beast::Journal j_;
void
cleanup();
public:
// callback will be called from an io_service thread
WebsocketClient(
std::function<void(Json::Value const&)> callback,
boost::asio::io_service& ios,
boost::asio::ip::address const& ip,
std::uint16_t port,
std::unordered_map<std::string, std::string> const& headers,
beast::Journal j);
~WebsocketClient();
// Returns command id that will be returned in the response
std::uint32_t
send(std::string const& cmd, Json::Value params) EXCLUDES(m_);
void
shutdown() EXCLUDES(shutdownM_);
private:
void
onReadMsg(error_code const& ec) EXCLUDES(m_);
// Called when the read op terminates
void
onReadDone();
};
} // namespace sidechain
} // namespace ripple
#endif


@@ -27,8 +27,15 @@
#include <map>
#include <memory>
#include <mutex>
#include <string_view>
#include <type_traits>
#include <utility>
namespace Json {
class Value;
class Compact;
} // namespace Json
namespace ripple {
// DEPRECATED use beast::severities::Severity instead
@@ -256,6 +263,92 @@ private:
x
#endif
// terse wrapper around "std::tie" for (name, value) log fields
template <class T1, class T2>
std::tuple<T1 const&, T2 const&>
jv(T1 const& t1, T2 const& t2)
{
return std::tie(t1, t2);
}
template <class Stream>
void
jlogv_fields(Stream&& stream, char const* sep)
{
}
template <class Stream, class T1, class T2, class... Ts>
void
jlogv_fields(
Stream&& stream,
char const* sep,
std::tuple<T1 const&, T2 const&> const& nameValue,
Ts&&... nameValues)
{
bool const withQuotes = [&] {
if constexpr (std::is_arithmetic_v<T2>)
{
return false;
}
if constexpr (std::is_same_v<std::decay_t<T2>, Json::Value>)
{
auto const& v = std::get<1>(nameValue);
return !v.isObject() && !v.isNumeric() && !v.isBool();
}
if constexpr (std::is_same_v<std::decay_t<T2>, Json::Compact>)
{
return false;
}
return true;
}();
stream << sep << '"' << std::get<0>(nameValue) << "\": ";
if constexpr (std::is_same_v<std::decay_t<T2>, Json::Value>)
{
stream << string_for_log(std::get<1>(nameValue));
}
else
{
if (withQuotes)
{
// print the value with quotes
stream << '"' << std::get<1>(nameValue) << '"';
}
else
{
stream << std::get<1>(nameValue);
}
}
jlogv_fields(
std::forward<Stream>(stream), ", ", std::forward<Ts>(nameValues)...);
}
template <class Stream, class... Ts>
void
jlogv(
Stream&& stream,
std::size_t lineNo,
std::string_view const& msg,
Ts&&... nameValues)
{
beast::Journal::ScopedStream s{std::forward<Stream>(stream), msg};
s << " {";
jlogv_fields(s, "", std::forward<Ts>(nameValues)..., jv("jlogId", lineNo));
s << '}';
}
// Wraps a Journal::Stream to skip evaluation of
// expensive argument lists if the stream is not active.
#ifndef JLOGV
#define JLOGV(x, msg, ...) \
if (!x) \
{ \
} \
else \
jlogv(x, __LINE__, msg, __VA_ARGS__)
#endif
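The `jlogv_fields` recursion above peels one `(name, value)` tuple per call and decides quoting from the value's type. A self-contained sketch of the same mechanism (the `Json::Value` cases are omitted; arithmetic values print bare, everything else quoted):

```cpp
#include <cassert>
#include <ostream>
#include <sstream>
#include <string>
#include <tuple>
#include <type_traits>
#include <utility>

// Mirror of jv(): bundle a log field's name and value as a tuple of refs.
template <class T1, class T2>
std::tuple<T1 const&, T2 const&>
jv(T1 const& t1, T2 const& t2)
{
    return std::tie(t1, t2);
}

// Recursion base case: no fields left to print.
inline void
fields(std::ostream&, char const*)
{
}

// Print one field, then recurse with ", " as the separator.
template <class T1, class T2, class... Ts>
void
fields(
    std::ostream& os,
    char const* sep,
    std::tuple<T1 const&, T2 const&> const& nv,
    Ts&&... rest)
{
    os << sep << '"' << std::get<0>(nv) << "\": ";
    if constexpr (std::is_arithmetic_v<std::decay_t<T2>>)
        os << std::get<1>(nv);
    else
        os << '"' << std::get<1>(nv) << '"';
    fields(os, ", ", std::forward<Ts>(rest)...);
}
```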
//------------------------------------------------------------------------------
// Debug logging:


@@ -0,0 +1,82 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#ifndef RIPPLE_BASICS_THREAD_SAFTY_ANALYSIS_H_INCLUDED
#define RIPPLE_BASICS_THREAD_SAFTY_ANALYSIS_H_INCLUDED
#ifdef RIPPLE_ENABLE_THREAD_SAFETY_ANNOTATIONS
#define THREAD_ANNOTATION_ATTRIBUTE__(x) __attribute__((x))
#else
#define THREAD_ANNOTATION_ATTRIBUTE__(x) // no-op
#endif
#define CAPABILITY(x) THREAD_ANNOTATION_ATTRIBUTE__(capability(x))
#define SCOPED_CAPABILITY THREAD_ANNOTATION_ATTRIBUTE__(scoped_lockable)
#define GUARDED_BY(x) THREAD_ANNOTATION_ATTRIBUTE__(guarded_by(x))
#define PT_GUARDED_BY(x) THREAD_ANNOTATION_ATTRIBUTE__(pt_guarded_by(x))
#define ACQUIRED_BEFORE(...) \
THREAD_ANNOTATION_ATTRIBUTE__(acquired_before(__VA_ARGS__))
#define ACQUIRED_AFTER(...) \
THREAD_ANNOTATION_ATTRIBUTE__(acquired_after(__VA_ARGS__))
#define REQUIRES(...) \
THREAD_ANNOTATION_ATTRIBUTE__(requires_capability(__VA_ARGS__))
#define REQUIRES_SHARED(...) \
THREAD_ANNOTATION_ATTRIBUTE__(requires_shared_capability(__VA_ARGS__))
#define ACQUIRE(...) \
THREAD_ANNOTATION_ATTRIBUTE__(acquire_capability(__VA_ARGS__))
#define ACQUIRE_SHARED(...) \
THREAD_ANNOTATION_ATTRIBUTE__(acquire_shared_capability(__VA_ARGS__))
#define RELEASE(...) \
THREAD_ANNOTATION_ATTRIBUTE__(release_capability(__VA_ARGS__))
#define RELEASE_SHARED(...) \
THREAD_ANNOTATION_ATTRIBUTE__(release_shared_capability(__VA_ARGS__))
#define RELEASE_GENERIC(...) \
THREAD_ANNOTATION_ATTRIBUTE__(release_generic_capability(__VA_ARGS__))
#define TRY_ACQUIRE(...) \
THREAD_ANNOTATION_ATTRIBUTE__(try_acquire_capability(__VA_ARGS__))
#define TRY_ACQUIRE_SHARED(...) \
THREAD_ANNOTATION_ATTRIBUTE__(try_acquire_shared_capability(__VA_ARGS__))
#define EXCLUDES(...) THREAD_ANNOTATION_ATTRIBUTE__(locks_excluded(__VA_ARGS__))
#define ASSERT_CAPABILITY(x) THREAD_ANNOTATION_ATTRIBUTE__(assert_capability(x))
#define ASSERT_SHARED_CAPABILITY(x) \
THREAD_ANNOTATION_ATTRIBUTE__(assert_shared_capability(x))
#define RETURN_CAPABILITY(x) THREAD_ANNOTATION_ATTRIBUTE__(lock_returned(x))
#define NO_THREAD_SAFETY_ANALYSIS \
THREAD_ANNOTATION_ATTRIBUTE__(no_thread_safety_analysis)
#endif
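These annotations only take effect under clang's `-Wthread-safety` analysis; otherwise they expand to nothing. A usage sketch mirroring how the sidechain code applies `GUARDED_BY`/`EXCLUDES` (fallback no-op macro definitions are included so the sketch is self-contained):

```cpp
#include <cassert>
#include <mutex>

// Fallback no-op definitions, so this sketch compiles without the
// annotation header or clang's analysis enabled.
#ifndef GUARDED_BY
#define GUARDED_BY(x)
#endif
#ifndef EXCLUDES
#define EXCLUDES(...)
#endif

class Counter
{
    mutable std::mutex mtx_;
    int n_ GUARDED_BY(mtx_) = 0;  // n_ may only be touched holding mtx_

public:
    void
    bump() EXCLUDES(mtx_)  // caller must NOT already hold mtx_
    {
        std::lock_guard<std::mutex> l(mtx_);
        ++n_;
    }

    int
    get() const EXCLUDES(mtx_)
    {
        std::lock_guard<std::mutex> l(mtx_);
        return n_;
    }
};
```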


@@ -134,7 +134,6 @@ public:
class Stream;
private:
/* Scoped ostream-based container for writing messages to a Journal. */
class ScopedStream
{
@@ -189,7 +188,6 @@ private:
#endif
//--------------------------------------------------------------------------
public:
/** Provide a light-weight way to check active() before string formatting */
class Stream
{


@@ -60,20 +60,21 @@ enum JobType {
jtREPLAY_TASK, // A Ledger replay task/subtask
jtLEDGER_DATA, // Received data for a ledger we're acquiring
jtTRANSACTION, // A transaction received from the network
jtFEDERATORSIGNATURE, // A signature from a sidechain federator
jtMISSING_TXN, // Request missing transactions
jtREQUESTED_TXN, // Reply with requested transactions
jtBATCH, // Apply batched transactions
jtADVANCE, // Advance validated/acquired ledgers
jtPUBLEDGER, // Publish a fully-accepted ledger
jtTXN_DATA, // Fetch a proposed set
jtWAL, // Write-ahead logging
jtVALIDATION_t, // A validation from a trusted source
jtWRITE, // Write out hashed objects
jtACCEPT, // Accept a consensus ledger
jtPROPOSAL_t, // A proposal from a trusted source
jtNETOP_CLUSTER, // NetworkOPs cluster peer report
jtNETOP_TIMER, // NetworkOPs net timer processing
jtADMIN, // An administrative operation
// Special job types which are not dispatched by the job pool
jtPEER,


@@ -90,6 +90,8 @@ private:
add(jtUPDATE_PF, "updatePaths", 1, 0ms, 0ms);
add(jtTRANSACTION, "transaction", maxLimit, 250ms, 1000ms);
add(jtBATCH, "batch", maxLimit, 250ms, 1000ms);
// TODO: choose average latency and peak latency numbers
add(jtFEDERATORSIGNATURE, "federatorSignature", maxLimit, 250ms, 1000ms);
add(jtADVANCE, "advanceLedger", maxLimit, 0ms, 0ms);
add(jtPUBLEDGER, "publishNewLedger", maxLimit, 3000ms, 4500ms);
add(jtTXN_DATA, "fetchTxnData", 5, 0ms, 0ms);


@@ -342,6 +342,13 @@ public:
}
};
[[nodiscard]] inline std::string
string_for_log(Json::Value const& v)
{
FastWriter w;
return w.write(v);
}
} // namespace Json
#endif // JSON_WRITER_H_INCLUDED


@@ -1250,6 +1250,7 @@ public:
{"deposit_authorized", &RPCParser::parseDepositAuthorized, 2, 3},
{"download_shard", &RPCParser::parseDownloadShard, 2, -1},
{"feature", &RPCParser::parseFeature, 0, 2},
{"federator_info", &RPCParser::parseAsIs, 0, 0},
{"fetch_info", &RPCParser::parseFetchInfo, 0, 1},
{"gateway_balances", &RPCParser::parseGatewayBalances, 1, -1},
{"get_counts", &RPCParser::parseGetCounts, 0, 1},


@@ -87,6 +87,7 @@ Message::compress()
case protocol::mtVALIDATORLISTCOLLECTION:
case protocol::mtREPLAY_DELTA_RESPONSE:
case protocol::mtTRANSACTIONS:
case protocol::mtFederatorXChainTxnSignature:
return true;
case protocol::mtPING:
case protocol::mtCLUSTER:
@@ -102,6 +103,7 @@ Message::compress()
case protocol::mtGET_PEER_SHARD_INFO_V2:
case protocol::mtPEER_SHARD_INFO_V2:
case protocol::mtHAVE_TRANSACTIONS:
case protocol::mtFederatorAccountCtrlSignature:
break;
}
return false;


@@ -27,6 +27,7 @@
#include <ripple/app/misc/NetworkOPs.h>
#include <ripple/app/misc/Transaction.h>
#include <ripple/app/misc/ValidatorList.h>
#include <ripple/app/sidechain/Federator.h>
#include <ripple/app/tx/apply.h>
#include <ripple/basics/UptimeClock.h>
#include <ripple/basics/base64.h>
@@ -39,6 +40,7 @@
#include <ripple/overlay/impl/PeerImp.h>
#include <ripple/overlay/impl/Tuning.h>
#include <ripple/overlay/predicates.h>
#include <ripple/protocol/SField.h>
#include <ripple/protocol/digest.h>
#include <boost/algorithm/string.hpp>
@@ -2928,6 +2930,339 @@ PeerImp::onMessage(std::shared_ptr<protocol::TMSquelch> const& m)
<< "onMessage: TMSquelch " << slice << " " << id() << " " << duration;
}
void
PeerImp::onMessage(
std::shared_ptr<protocol::TMFederatorXChainTxnSignature> const& m)
{
std::shared_ptr<sidechain::Federator> federator =
app_.getSidechainFederator();
auto sidechainJ = app_.journal("SidechainFederator");
auto badData = [&](std::string msg) {
fee_ = Resource::feeBadData;
JLOG(p_journal_.warn()) << msg;
};
auto getTxnType = [](::protocol::TMFederatorTxnType tt)
-> std::optional<sidechain::Federator::TxnType> {
switch (tt)
{
case ::protocol::TMFederatorTxnType::ftxnt_XCHAIN:
return sidechain::Federator::TxnType::xChain;
case ::protocol::TMFederatorTxnType::ftxnt_REFUND:
return sidechain::Federator::TxnType::refund;
default:
return {};
}
};
auto getChainType = [](::protocol::TMFederatorChainType ct)
-> std::optional<sidechain::Federator::ChainType> {
switch (ct)
{
case ::protocol::TMFederatorChainType::fct_SIDE:
return sidechain::Federator::sideChain;
case ::protocol::TMFederatorChainType::fct_MAIN:
return sidechain::Federator::mainChain;
default:
return {};
}
};
auto getPublicKey =
[](std::string const& data) -> std::optional<PublicKey> {
if (data.empty())
return {};
auto const s = makeSlice(data);
if (!publicKeyType(s))
return {};
return PublicKey(s);
};
auto getHash = [](std::string const& data) -> std::optional<uint256> {
if (data.size() != 32)
return {};
return uint256(data);
};
auto getAccountId =
[](std::string const& data) -> std::optional<AccountID> {
if (data.size() != 20)
return {};
return AccountID(data);
};
auto const signingPK = getPublicKey(m->signingpk());
if (!signingPK)
{
return badData("Invalid federator key");
}
auto const txnType = getTxnType(m->txntype());
if (!txnType)
{
return badData("Invalid txn type");
}
auto const dstChain = getChainType(m->dstchain());
if (!dstChain)
{
return badData("Invalid dst chain");
}
auto const srcChainTxnHash = getHash(m->srcchaintxnhash());
if (!srcChainTxnHash)
{
return badData("Invalid src chain txn hash");
}
auto const dstChainTxnHash = getHash(m->dstchaintxnhash());
if (*txnType == sidechain::Federator::TxnType::refund && !dstChainTxnHash)
{
return badData("Invalid dst chain txn hash for refund");
}
if (*txnType == sidechain::Federator::TxnType::xChain && dstChainTxnHash)
{
return badData("Invalid dst chain txn hash for xchain");
}
auto const srcChainSrcAccount = getAccountId(m->srcchainsrcaccount());
if (!srcChainSrcAccount)
{
return badData("Invalid src chain src account");
}
auto const dstChainSrcAccount = getAccountId(m->dstchainsrcaccount());
if (!dstChainSrcAccount)
{
return badData("Invalid dst chain src account");
}
auto const dstChainDstAccount = getAccountId(m->dstchaindstaccount());
if (!dstChainDstAccount)
{
return badData("Invalid dst account");
}
auto const seq = m->seq();
auto const amt = [&m]() -> std::optional<STAmount> {
try
{
SerialIter iter{m->amount().data(), m->amount().size()};
STAmount amt{iter, sfGeneric};
if (!iter.empty() || amt.signum() <= 0)
{
return std::nullopt;
}
if (amt.native() && amt.xrp() > INITIAL_XRP)
{
return std::nullopt;
}
return amt;
}
catch (std::exception const&)
{
}
return std::nullopt;
}();
if (!amt)
{
return badData("Invalid amount");
}
Buffer sig{m->signature().data(), m->signature().size()};
JLOGV(
sidechainJ.trace(),
"Received signature from peer",
jv("id", id()),
jv("sig", strHex(sig.data(), sig.data() + sig.size())));
if (federator && federator->alreadySent(*dstChain, seq))
{
// already sent this transaction, no need to forward signature
return;
}
uint256 const suppression = sidechain::crossChainTxnSignatureId(
*signingPK,
*srcChainTxnHash,
dstChainTxnHash,
*amt,
*dstChainSrcAccount,
*dstChainDstAccount,
seq,
sig);
app_.getHashRouter().addSuppressionPeer(suppression, id_);
app_.getJobQueue().addJob(
jtFEDERATORSIGNATURE,
"federator signature",
[self = shared_from_this(),
federator = std::move(federator),
suppression,
txnType,
dstChain,
signingPK,
srcChainTxnHash,
dstChainTxnHash,
amt,
srcChainSrcAccount,
dstChainDstAccount,
seq,
m,
j = sidechainJ,
sig = std::move(sig)]() mutable {
auto& hashRouter = self->app_.getHashRouter();
if (auto const toSkip = hashRouter.shouldRelay(suppression))
{
auto const toSend = std::make_shared<Message>(
*m, protocol::mtFederatorXChainTxnSignature);
self->overlay_.foreach([&](std::shared_ptr<Peer> const& p) {
hashRouter.addSuppressionPeer(suppression, p->id());
if (toSkip->count(p->id()))
{
JLOGV(
j.trace(),
"Not forwarding signature to peer",
jv("id", p->id()),
jv("suppression", suppression));
return;
}
JLOGV(
j.trace(),
"Forwarding signature to peer",
jv("id", p->id()),
jv("suppression", suppression));
p->send(toSend);
});
}
if (federator)
{
// Signature is checked in `addPendingTxnSig`
federator->addPendingTxnSig(
*txnType,
*dstChain,
*signingPK,
*srcChainTxnHash,
dstChainTxnHash,
*amt,
*srcChainSrcAccount,
*dstChainDstAccount,
seq,
std::move(sig));
}
});
}
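The relay step above forwards a signature to every peer except those the hash router reports as having already seen it. The peer-filtering logic, modeled standalone (plain ints stand in for peer ids):

```cpp
#include <cassert>
#include <set>
#include <vector>

// Standalone model of the relay step: forward to every peer except those
// the hash router already recorded as having seen the signature.
std::vector<int>
relayTargets(std::vector<int> const& peers, std::set<int> const& toSkip)
{
    std::vector<int> out;
    for (int p : peers)
        if (toSkip.count(p) == 0)
            out.push_back(p);
    return out;
}
```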
void
PeerImp::onMessage(
std::shared_ptr<protocol::TMFederatorAccountCtrlSignature> const& m)
{
std::shared_ptr<sidechain::Federator> federator =
app_.getSidechainFederator();
auto sidechainJ = app_.journal("SidechainFederator");
auto badData = [&](std::string msg) {
fee_ = Resource::feeBadData;
JLOG(p_journal_.warn()) << msg;
};
auto getChainType = [](::protocol::TMFederatorChainType ct)
-> std::optional<sidechain::Federator::ChainType> {
switch (ct)
{
case ::protocol::TMFederatorChainType::fct_SIDE:
return sidechain::Federator::sideChain;
case ::protocol::TMFederatorChainType::fct_MAIN:
return sidechain::Federator::mainChain;
default:
return {};
}
};
auto const dstChain = getChainType(m->chain());
if (!dstChain)
{
return badData("Invalid dst chain");
}
auto getPublicKey =
[](std::string const& data) -> std::optional<PublicKey> {
if (data.empty())
return {};
auto const s = makeSlice(data);
if (!publicKeyType(s))
return {};
return PublicKey(s);
};
auto getHash = [](std::string const& data) -> std::optional<uint256> {
if (data.size() != 32)
return {};
return uint256(data);
};
auto const pk = getPublicKey(m->publickey());
if (!pk)
{
return badData("Invalid federator key");
}
auto const mId = getHash(m->messageid());
if (!mId)
{
return badData("Invalid txn hash");
}
Buffer sig{m->signature().data(), m->signature().size()};
JLOGV(
sidechainJ.trace(),
"Received signature from peer",
jv("id", id()),
jv("sig", strHex(sig.data(), sig.data() + sig.size())));
uint256 const suppression = sidechain::computeMessageSuppression(*mId, sig);
app_.getHashRouter().addSuppressionPeer(suppression, id_);
app_.getJobQueue().addJob(
jtFEDERATORSIGNATURE,
"federator signature",
[self = shared_from_this(),
federator = std::move(federator),
suppression,
pk,
mId,
m,
j = sidechainJ,
chain = *dstChain,
sig = std::move(sig)]() mutable {
auto& hashRouter = self->app_.getHashRouter();
if (auto const toSkip = hashRouter.shouldRelay(suppression))
{
auto const toSend = std::make_shared<Message>(
*m, protocol::mtFederatorAccountCtrlSignature);
self->overlay_.foreach([&](std::shared_ptr<Peer> const& p) {
hashRouter.addSuppressionPeer(suppression, p->id());
if (toSkip->count(p->id()))
{
JLOGV(
j.trace(),
"Not forwarding signature to peer",
jv("id", p->id()),
jv("suppression", suppression));
return;
}
JLOGV(
j.trace(),
"Forwarding signature to peer",
jv("id", p->id()),
jv("suppression", suppression));
p->send(toSend);
});
}
if (federator)
{
// Signature is checked in `addPendingTxnSig`
federator->addPendingTxnSig(chain, *pk, *mId, std::move(sig));
}
});
}
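Both `onMessage` handlers above follow the same relay pattern: record the sending peer against the suppression id, relay the message at most once, and skip any peer already known to have it. A simplified Python sketch of that pattern (illustrative only — rippled's real `HashRouter` also tracks relay intervals and per-hash flags):

```python
class HashRouter:
    """Minimal model of the suppression/relay bookkeeping used above."""

    def __init__(self):
        self._peers_by_hash = {}  # suppression id -> set of peer ids that have it
        self._relayed = set()     # suppression ids already relayed once

    def add_suppression_peer(self, suppression, peer_id):
        # Remember that this peer has seen the message with this suppression id.
        self._peers_by_hash.setdefault(suppression, set()).add(peer_id)

    def should_relay(self, suppression):
        # Relay each suppression id at most once; return the set of peers
        # to skip (those that already have the message), or None if the
        # message was already relayed.
        if suppression in self._relayed:
            return None
        self._relayed.add(suppression)
        return set(self._peers_by_hash.get(suppression, set()))
```

In the handlers, `addSuppressionPeer` is called for the originating peer before the job is queued, so the originator always lands in the skip set when the job later calls `shouldRelay`.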
//--------------------------------------------------------------------------
void

View File

@@ -584,6 +584,12 @@ public:
onMessage(std::shared_ptr<protocol::TMReplayDeltaRequest> const& m);
void
onMessage(std::shared_ptr<protocol::TMReplayDeltaResponse> const& m);
void
onMessage(
std::shared_ptr<protocol::TMFederatorXChainTxnSignature> const& m);
void
onMessage(
std::shared_ptr<protocol::TMFederatorAccountCtrlSignature> const& m);
private:
//--------------------------------------------------------------------------

View File

@@ -113,6 +113,10 @@ protocolMessageName(int type)
return "get_peer_shard_info_v2";
case protocol::mtPEER_SHARD_INFO_V2:
return "peer_shard_info_v2";
case protocol::mtFederatorXChainTxnSignature:
return "federator_xchain_txn_signature";
case protocol::mtFederatorAccountCtrlSignature:
return "federator_account_ctrl_signature";
default:
break;
}
@@ -493,6 +497,14 @@ invokeProtocolMessage(
success = detail::invoke<protocol::TMPeerShardInfoV2>(
*header, buffers, handler);
break;
case protocol::mtFederatorXChainTxnSignature:
success = detail::invoke<protocol::TMFederatorXChainTxnSignature>(
*header, buffers, handler);
break;
case protocol::mtFederatorAccountCtrlSignature:
success = detail::invoke<protocol::TMFederatorAccountCtrlSignature>(
*header, buffers, handler);
break;
default:
handler.onMessageUnknown(header->message_type);
success = true;

View File

@@ -163,6 +163,9 @@ TrafficCount::categorize(
if (type == protocol::mtTRANSACTIONS)
return TrafficCount::category::requested_transactions;
if (type == protocol::mtFederatorXChainTxnSignature)
return TrafficCount::category::federator_xchain_txn_signature;
return TrafficCount::category::unknown;
}

View File

@@ -157,6 +157,8 @@ public:
// TMTransactions
requested_transactions,
federator_xchain_txn_signature,
unknown // must be last
};
@@ -243,12 +245,13 @@ protected:
{"getobject_share"}, // category::share_hash
{"getobject_get"}, // category::get_hash
{"proof_path_request"}, // category::proof_path_request
{"proof_path_response"}, // category::proof_path_response
{"replay_delta_request"}, // category::replay_delta_request
{"replay_delta_response"}, // category::replay_delta_response
{"have_transactions"}, // category::have_transactions
{"requested_transactions"}, // category::transactions
{"unknown"} // category::unknown
{"proof_path_response"}, // category::proof_path_response
{"replay_delta_request"}, // category::replay_delta_request
{"replay_delta_response"}, // category::replay_delta_response
{"have_transactions"}, // category::have_transactions
{"requested_transactions"}, // category::transactions
{"federator_xchain_txn_signature"}, // category::federator_xchain_txn_signature
{"unknown"} // category::unknown
}};
};

View File

@@ -33,6 +33,8 @@ enum MessageType
mtPEER_SHARD_INFO_V2 = 62;
mtHAVE_TRANSACTIONS = 63;
mtTRANSACTIONS = 64;
mtFederatorXChainTxnSignature = 65;
mtFederatorAccountCtrlSignature = 66;
}
// token, iterations, target, challenge = issue demand for proof of work
@@ -450,3 +452,42 @@ message TMHaveTransactions
repeated bytes hashes = 1;
}
enum TMFederatorChainType
{
fct_SIDE = 1;
fct_MAIN = 2;
}
enum TMFederatorTxnType
{
ftxnt_XCHAIN = 1; // cross chain
ftxnt_REFUND = 2;
}
message TMFederatorXChainTxnSignature
{
required TMFederatorTxnType txnType = 1;
required TMFederatorChainType dstChain = 2;
required bytes signingPK = 3; // federator's signing public key (unencoded binary data)
// txn hash for the src chain (unencoded bigendian binary data)
// This will be the original transaction from the src account to the door account
required bytes srcChainTxnHash = 4;
// txn hash for the dst chain (unencoded bigendian binary data)
// This will be empty for XCHAIN transactions, and will be the failed transaction from the
// door account to the dst account for refund txns.
optional bytes dstChainTxnHash = 5;
required bytes amount = 6; // STAmount in wire serialized format
required bytes srcChainSrcAccount = 7; // account id (unencoded bigendian binary data)
required bytes dstChainSrcAccount = 8; // account id (unencoded bigendian binary data)
required bytes dstChainDstAccount = 9; // account id (unencoded bigendian binary data)
required uint32 seq = 10; // sequence number
required bytes signature = 11; // (unencoded bigendian binary data)
}
message TMFederatorAccountCtrlSignature
{
required TMFederatorChainType chain = 1;
required bytes publicKey = 2;
required bytes messageId = 3;
required bytes signature = 4;
}
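For `TMFederatorAccountCtrlSignature`, the handler computes `sidechain::computeMessageSuppression(*mId, sig)` over the message id and signature. That implementation is not shown in this diff; a minimal Python sketch under the assumption that it follows rippled's usual sha512-half convention (first 256 bits of SHA-512 over the concatenated fields):

```python
import hashlib

def compute_message_suppression(message_id: bytes, signature: bytes) -> bytes:
    # Assumed layout: sha512-half over messageId || signature.
    # sha512-half (the first 32 bytes of SHA-512) is rippled's standard
    # 256-bit digest; the real computeMessageSuppression may differ.
    return hashlib.sha512(message_id + signature).digest()[:32]
```

The point of the suppression id is that it is deterministic for a given (message, signature) pair, so every node derives the same key for the `HashRouter` deduplication above.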

View File

@@ -20,6 +20,7 @@
#ifndef RIPPLE_PROTOCOL_STTX_H_INCLUDED
#define RIPPLE_PROTOCOL_STTX_H_INCLUDED
#include <ripple/basics/Buffer.h>
#include <ripple/basics/Expected.h>
#include <ripple/protocol/PublicKey.h>
#include <ripple/protocol/STObject.h>
@@ -77,6 +78,20 @@ public:
Blob
getSignature() const;
Buffer
getSignature(PublicKey const& publicKey, SecretKey const& secretKey) const;
// Get one of the multi-signatures
Buffer
getMultiSignature(
AccountID const& signingID,
PublicKey const& publicKey,
SecretKey const& secretKey) const;
// unconditionally set signature. No error checking.
void
setSignature(Buffer const& sig);
uint256
getSigningHash() const;

View File

@@ -124,6 +124,11 @@ parseGenericSeed(std::string const& str);
std::string
seedAs1751(Seed const& seed);
/** ripple-lib encodes seeds used to generate an Ed25519 wallet in a
* non-standard way. */
std::optional<Seed>
parseRippleLibSeed(std::string const& s);
/** Format a seed as a Base58 string */
inline std::string
toBase58(Seed const& seed)

View File

@@ -160,6 +160,16 @@ getSigningData(STTx const& that)
return s.getData();
}
static Blob
getMultiSigningData(STTx const& that, AccountID const& signingID)
{
Serializer s;
s.add32(HashPrefix::txMultiSign);
that.addWithoutSigningFields(s);
s.addBitString(signingID);
return s.getData();
}
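The multi-signing preimage built by `getMultiSigningData` can be sketched outside rippled. A hedged Python illustration, assuming `HashPrefix::txMultiSign` serializes (big-endian, via `add32`) as the ASCII bytes `SMT` followed by a zero byte — the function and constant names here are illustrative:

```python
def multi_signing_data(tx_blob_without_signing_fields: bytes,
                       signing_account_id: bytes) -> bytes:
    # Mirrors getMultiSigningData: a 4-byte hash prefix, then the transaction
    # serialized without its signing fields, then the signer's 20-byte AccountID.
    HASH_PREFIX_TX_MULTI_SIGN = b"SMT\x00"  # assumption: HashPrefix::txMultiSign
    assert len(signing_account_id) == 20
    return (HASH_PREFIX_TX_MULTI_SIGN
            + tx_blob_without_signing_fields
            + signing_account_id)
```

Appending the signer's AccountID is what makes each federator's multi-signature distinct even over an identical transaction blob.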
uint256
STTx::getSigningHash() const
{
@@ -179,6 +189,30 @@ STTx::getSignature() const
}
}
Buffer
STTx::getSignature(PublicKey const& publicKey, SecretKey const& secretKey) const
{
auto const data = getSigningData(*this);
return ripple::sign(publicKey, secretKey, makeSlice(data));
}
Buffer
STTx::getMultiSignature(
AccountID const& signingID,
PublicKey const& publicKey,
SecretKey const& secretKey) const
{
auto const data = getMultiSigningData(*this, signingID);
return ripple::sign(publicKey, secretKey, makeSlice(data));
}
void
STTx::setSignature(Buffer const& sig)
{
setFieldVL(sfTxnSignature, sig);
tid_ = getHash(HashPrefix::transactionID);
}
SeqProxy
STTx::getSeqProxy() const
{
@@ -197,12 +231,7 @@ STTx::getSeqProxy() const
void
STTx::sign(PublicKey const& publicKey, SecretKey const& secretKey)
{
auto const data = getSigningData(*this);
auto const sig = ripple::sign(publicKey, secretKey, makeSlice(data));
setFieldVL(sfTxnSignature, sig);
tid_ = getHash(HashPrefix::transactionID);
setSignature(getSignature(publicKey, secretKey));
}
Expected<void, std::string>

View File

@@ -135,4 +135,17 @@ seedAs1751(Seed const& seed)
return encodedKey;
}
std::optional<Seed>
parseRippleLibSeed(std::string const& s)
{
auto const result = decodeBase58Token(s, TokenType::None);
if (result.size() == 18 &&
static_cast<std::uint8_t>(result[0]) == std::uint8_t(0xE1) &&
static_cast<std::uint8_t>(result[1]) == std::uint8_t(0x4B))
return Seed(makeSlice(result.substr(2)));
return std::nullopt;
}
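The token layout that `parseRippleLibSeed` checks — two-byte `0xE1 0x4B` prefix, 16 bytes of seed entropy, base58 with the XRPL alphabet and a 4-byte double-SHA-256 checksum (which `decodeBase58Token` strips before the length check) — can be sketched in Python. Helper names are illustrative:

```python
import hashlib

# XRPL's base58 alphabet (differs from Bitcoin's).
RIPPLE_ALPHABET = "rpshnaf39wBUDNEGHJKLM4PQRST7VWXYZ2bcdeCg65jkm8oFqi1tuvAxyz"

def _b58_encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = RIPPLE_ALPHABET[r] + out
    pad = len(data) - len(data.lstrip(b"\x00"))  # preserve leading zero bytes
    return RIPPLE_ALPHABET[0] * pad + out

def _b58_decode(s: str) -> bytes:
    n = 0
    for ch in s:
        n = n * 58 + RIPPLE_ALPHABET.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    pad = len(s) - len(s.lstrip(RIPPLE_ALPHABET[0]))
    return b"\x00" * pad + raw

def encode_ripple_lib_seed(entropy16: bytes) -> str:
    # ripple-lib marks Ed25519 seeds with the two-byte prefix 0xE1 0x4B.
    assert len(entropy16) == 16
    payload = b"\xe1\x4b" + entropy16
    check = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return _b58_encode(payload + check)

def parse_ripple_lib_seed(token: str):
    decoded = _b58_decode(token)
    payload, check = decoded[:-4], decoded[-4:]
    good = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    if check != good or len(payload) != 18 or payload[:2] != b"\xe1\x4b":
        return None
    return payload[2:]  # the 16-byte Ed25519 seed entropy
```

The 18-byte length and `0xE1 0x4B` prefix tests mirror the C++ above; the checksum handling corresponds to what `decodeBase58Token` does internally.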
} // namespace ripple

View File

@@ -70,6 +70,10 @@ JSS(Invalid); //
JSS(LastLedgerSequence); // in: TransactionSign; field
JSS(LedgerHashes); // ledger type.
JSS(LimitAmount); // field.
JSS(Memo); // txn common field
JSS(Memos); // txn common field
JSS(MemoType); // txn common field
JSS(MemoData); // txn common field
JSS(Offer); // ledger type.
JSS(OfferCancel); // transaction type.
JSS(OfferCreate); // transaction type.
@@ -91,6 +95,10 @@ JSS(SetFlag); // field.
JSS(SetRegularKey); // transaction type.
JSS(SignerList); // ledger type.
JSS(SignerListSet); // transaction type.
JSS(SignerEntry); // field.
JSS(SignerEntries); // field.
JSS(SignerQuorum); // field.
JSS(SignerWeight); // field.
JSS(SigningPubKey); // field.
JSS(TakerGets); // field.
JSS(TakerPays); // field.
@@ -312,6 +320,7 @@ JSS(last_close); // out: NetworkOPs
JSS(last_refresh_time); // out: ValidatorSite
JSS(last_refresh_status); // out: ValidatorSite
JSS(last_refresh_message); // out: ValidatorSite
JSS(last_transaction_sent_seq); // out: federator_info
JSS(ledger); // in: NetworkOPs, LedgerCleaner,
// RPCHelpers
// out: NetworkOPs, PeerImp
@@ -339,6 +348,7 @@ JSS(limit); // in/out: AccountTx*, AccountOffers,
JSS(limit_peer); // out: AccountLines
JSS(lines); // out: AccountLines
JSS(list); // out: ValidatorList
JSS(listener_info); // out: federator_info
JSS(load); // out: NetworkOPs, PeerImp
JSS(load_base); // out: NetworkOPs
JSS(load_factor); // out: NetworkOPs
@@ -355,6 +365,7 @@ JSS(local_txs); // out: GetCounts
JSS(local_static_keys); // out: ValidatorList
JSS(lowest_sequence); // out: AccountInfo
JSS(lowest_ticket); // out: AccountInfo
JSS(mainchain); // out: federator_info
JSS(majority); // out: RPC feature
JSS(manifest); // out: ValidatorInfo, Manifest
JSS(marker); // in/out: AccountTx, AccountOffers,
@@ -436,6 +447,7 @@ JSS(peers); // out: InboundLedger, handlers/Peers, Overlay
JSS(peer_disconnects); // Severed peer connection counter.
JSS(peer_disconnects_resources); // Severed peer connections because of
// excess resource consumption.
JSS(pending_transactions); // out: federator_info
JSS(port); // in: Connect
JSS(previous); // out: Reservations
JSS(previous_ledger); // out: LedgerPropose
@@ -508,7 +520,9 @@ JSS(server_version); // out: NetworkOPs
JSS(settle_delay); // out: AccountChannels
JSS(severity); // in: LogLevel
JSS(shards); // in/out: GetCounts, DownloadShard
JSS(sidechain); // out: federator_info
JSS(signature); // out: NetworkOPs, ChannelAuthorize
JSS(signatures); // out: federator_info
JSS(signature_verified); // out: ChannelVerify
JSS(signing_key); // out: NetworkOPs
JSS(signing_keys); // out: ValidatorList
@@ -536,6 +550,7 @@ JSS(sub_index); // in: LedgerEntry
JSS(subcommand); // in: PathFind
JSS(success); // rpc
JSS(supported); // out: AmendmentTableImpl
JSS(sync_info); // out: federator_info
JSS(system_time_offset); // out: NetworkOPs
JSS(tag); // out: Peers
JSS(taker); // in: Subscribe, BookOffers

View File

@@ -0,0 +1,46 @@
//------------------------------------------------------------------------------
/*
This file is part of rippled: https://github.com/ripple/rippled
Copyright (c) 2021 Ripple Labs Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
//==============================================================================
#include <ripple/app/main/Application.h>
#include <ripple/app/sidechain/Federator.h>
#include <ripple/json/json_value.h>
#include <ripple/net/RPCErr.h>
#include <ripple/protocol/jss.h>
#include <ripple/rpc/Context.h>
namespace ripple {
Json::Value
doFederatorInfo(RPC::JsonContext& context)
{
Json::Value ret{Json::objectValue};
if (auto f = context.app.getSidechainFederator())
{
ret[jss::info] = f->getInfo();
}
else
{
ret[jss::info] = "Not configured as a sidechain federator";
}
return ret;
}
} // namespace ripple

View File

@@ -59,6 +59,8 @@ doDownloadShard(RPC::JsonContext&);
Json::Value
doFeature(RPC::JsonContext&);
Json::Value
doFederatorInfo(RPC::JsonContext&);
Json::Value
doFee(RPC::JsonContext&);
Json::Value
doFetchInfo(RPC::JsonContext&);

View File

@@ -89,6 +89,7 @@ Handler const handlerArray[]{
#endif
{"get_counts", byRef(&doGetCounts), Role::ADMIN, NO_CONDITION},
{"feature", byRef(&doFeature), Role::ADMIN, NO_CONDITION},
{"federator_info", byRef(&doFederatorInfo), Role::USER, NO_CONDITION},
{"fee", byRef(&doFee), Role::USER, NEEDS_CURRENT_LEDGER},
{"fetch_info", byRef(&doFetchInfo), Role::ADMIN, NO_CONDITION},
{"ledger_accept",

View File

@@ -683,14 +683,7 @@ parseRippleLibSeed(Json::Value const& value)
if (!value.isString())
return std::nullopt;
auto const result = decodeBase58Token(value.asString(), TokenType::None);
if (result.size() == 18 &&
static_cast<std::uint8_t>(result[0]) == std::uint8_t(0xE1) &&
static_cast<std::uint8_t>(result[1]) == std::uint8_t(0x4B))
return Seed(makeSlice(result.substr(2)));
return std::nullopt;
return ripple::parseRippleLibSeed(value.asString());
}
std::optional<Seed>

View File

@@ -320,6 +320,7 @@ ServerHandlerImp::onWSMessage(
if (size > RPC::Tuning::maxRequestSize ||
!Json::Reader{}.parse(jv, buffers) || !jv.isObject())
{
Json::Value jvResult(Json::objectValue);
jvResult[jss::type] = jss::error;
jvResult[jss::error] = "jsonInvalid";