Mirror of https://github.com/XRPLF/rippled.git (synced 2025-11-05 03:35:51 +00:00)

Compare commits — 249 commits
Commits (SHA1; the author and date columns were empty in the mirror):

d2fc4e3569 fbfb4bd74e 6dc79c23ed 5a9c3c797c a4a43a4de9 25b002b37f 40c39c4afb f0e1024ad6
c11e186659 0a48916d98 dffb999efb 044dd53513 02059a27d6 8f347a5333 bce9bca2ce cafe18c592
3e5490ef6d c76656cf7f c7c1b3cc3b 39f9135104 dd52bdd2c4 7a0fa312ea cd2d52acdc 62127d725d
fc640504ba 3c0b35092c 89e8ea436a 21dc05fc33 589570daa3 a02a469b20 be1f734845 98d7fa3fd9
74823cb7d1 e47bfa223f 5c1ec051f0 65094d9c90 c00341a97e 36423a5f77 60dd194b72 9ae717c433
d90a0647d6 35d81e65c1 cca574c9a9 c96c423afb 463b154e3d 3666948610 397410bac6 7aa838c091
458ac470aa a79cb95c85 5ad49454f1 1a56b9c5f2 efe3700f70 fc89d2e014 3c37539cee 3d977aeacb
4308b124c2 b23e9c207d c3a6b1600f 138e1ba9a8 2858661bce afc791835e d981bff8ea b24d47c093
df086301b6 01b4d5cdd4 8c155dd875 3f9a38697d 87742a5e6c 5b9e8a77ca 2821624ede ac1ab720c4
e0168b98d7 18801b81de 1daefeb594 068048718e 56946e8128 3dfb4a13f1 7ae3c91015 95f107d487
61316c7f95 49bdf2e72d f0f96bd1da 3bfd9de677 f9b5ab4728 7abd70356d d8313288ad a89be5b269
7b0d482810 e81f3eb1d2 cd0a2d6ef3 d04d3d3c4a da7da5527c 6f4bc30684 fa72795d84 68b8ffdb63
cb91d56d07 6e889e6a18 be9c3b218b a92f5b8e5a 3eeb79ee12 24e1b9911a 3a973ab719 6f10fe8502
d471e533b7 1a238048d5 aa2ff00485 f2787dc35c 8002a13dd2 7dc2fe9ce7 5f37765292 a56d31910f
24505a358a c570695aa1 208028a142 dceef25e2c 256e58204a c1d64e1b1a 1dbc5a57e6 9cc542fe67
f7a7f13287 96ece1b9f0 46004158a2 7e9ac16c22 2e5ab4e0e3 00c60d408a d5dc715d9c 369909df84
2cd55ebf98 4e43e22a3a d8dea963fa 84816d1c21 8430f9deff fcceb0aac1 2680b78b5b 068889e5b1
3bd9772c04 af66c62814 5bc8f2e3e8 22c97ba801 026a249173 e52614ac81 10a7f5b933 c6b6d82a75
9a0249e793 e92760eec8 7b82051bdb aea54b7230 1a7a6f22e2 fab3ec0b56 2449f9c18d fee30262ac
7cd4d78897 4ff40d4954 0d4fe469c6 8b43d67a73 128f7cefb1 09f9720ebb dbe74dffcb b958fa413e
c453df927f 1bb92d40aa 15f969a469 bc5a74057d fc0d64f5ee 885aaab8c8 9d4500cf69 9ff9fa0aea
1d482eeecb b4e765362b c981eb81d9 95aebfc38c 7265729446 846723d771 80d9b0464a 09a1d1a593
aebcc2115d 6fac038320 0df1b09a73 f432095532 e27a38939e ffa79ac6a5 2e632b1660 0b187a6a4e
6cea5d0838 ffc7cf8f6c 69bc58c5f6 f423181b94 112a863e73 cfde591ac9 0c97dda276 35f4698aed
b7e2a3bd5f bb61b398a6 1e438f51c5 60416b18a5 4b0a0b0b85 f1377d5d30 30b6e4e2e5 5cf38bf88a
9e3dadce0d 73b4c818c5 2c2b0eb2f1 17726c2cac af79c9007e 3de623bf66 7ec58cc554 3d6a1781e7
ce9238b389 2c6b0f3193 b4a16b165b a8cf5e0a5c 2fcde0e0b6 b45f45dcef e823e60ca0 74977ab3db
80dfb7d72d c30fe3066a 0b605b3609 9bb337fb1f 460dd8f186 6b0817b7ba 7698477e86 8b60ef9db4
209fe8f7a9 b514f1aae9 f6a0345831 ce7e83f763 71b42dcec5 f5af8b03de e01f6e7455 b3eada1dc2
a3e3b9321e c652cf066d c218417d1a 69db2ace58 79149b4c0c 232ec62c75 7ca03d3bca 15a30c745c
8345475bc3 c7de7950c4 b6126f219f e05bf0844d fdff943262 a1a8ba7f53 d8a5f5b094 1ede09760e
708fc6cd6f
.gitignore (vendored) — 4 changed lines

```diff
@@ -33,9 +33,6 @@ Release/*.*
 *.gcda
 *.gcov
 
-# Ignore locally installed node_modules
-/node_modules
-
 # Ignore tmp directory.
 tmp
 
@@ -50,7 +47,6 @@ debug_log.txt
 # Ignore customized configs
 rippled.cfg
 validators.txt
-test/config.js
 
 # Doxygen generated documentation output
 HtmlDocumentation
 
```
.travis.yml — 12 changed lines

```diff
@@ -10,9 +10,13 @@ env:
     # namepart must match the folder name internal
     # to boost's .tar.gz.
     - LCOV_ROOT=$HOME/lcov
+    - GDB_ROOT=$HOME/gdb
    - BOOST_ROOT=$HOME/boost_1_60_0
    - BOOST_URL='http://sourceforge.net/projects/boost/files/boost/1.60.0/boost_1_60_0.tar.gz'
 
+# Travis is timing out on Trusty. So, for now, use Precise. July 2017
+dist: precise
+
 addons:
   apt:
     sources: ['ubuntu-toolchain-r-test']

@@ -27,6 +31,8 @@ addons:
     - binutils-gold
+    # Provides a backtrace if the unittests crash
+    - gdb
+    # needed to build gdb
+    - texinfo
 
 matrix:
   include:

@@ -40,6 +46,9 @@ matrix:
 
     - compiler: clang
      env: GCC_VER=5 TARGET=debug CLANG_VER=3.8 PATH=$PWD/llvm-$LLVM_VERSION/bin:$PATH
+      cache:
+        directories:
+          - $GDB_ROOT
 
    - compiler: clang
      env: GCC_VER=5 TARGET=debug.nounity CLANG_VER=3.8 PATH=$PWD/llvm-$LLVM_VERSION/bin:$PATH

@@ -56,6 +65,7 @@ cache:
     - $BOOST_ROOT
     - llvm-$LLVM_VERSION
     - cmake
+    - $GDB_ROOT
 
 before_install:
   - bin/ci/ubuntu/install-dependencies.sh

@@ -69,3 +79,5 @@ notifications:
  irc:
    channels:
      - "chat.freenode.net#ripple-dev"
+
+dist: precise
```
```diff
@@ -12,13 +12,17 @@ endif()
 
 macro(parse_target)
 
-  if (NOT target AND NOT CMAKE_BUILD_TYPE)
+  if (NOT target OR target STREQUAL "default")
+    if (NOT CMAKE_BUILD_TYPE)
+      set(CMAKE_BUILD_TYPE Debug)
+    endif()
+    string(TOLOWER ${CMAKE_BUILD_TYPE} target)
     if (APPLE)
-      set(target clang.debug)
+      set(target clang.${target})
     elseif(WIN32)
       set(target msvc)
     else()
-      set(target gcc.debug)
+      set(target gcc.${target})
     endif()
   endif()
 
@@ -33,16 +37,16 @@ macro(parse_target)
       if (${cur_component} STREQUAL gcc)
         if (DEFINED ENV{GNU_CC})
           set(CMAKE_C_COMPILER $ENV{GNU_CC})
-        elseif ($ENV{CXX} MATCHES .*gcc.*)
-          set(CMAKE_CXX_COMPILER $ENV{CC})
+        elseif ($ENV{CC} MATCHES .*gcc.*)
+          set(CMAKE_C_COMPILER $ENV{CC})
         else()
           find_program(CMAKE_C_COMPILER gcc)
         endif()
 
         if (DEFINED ENV{GNU_CXX})
-          set(CMAKE_C_COMPILER $ENV{GNU_CXX})
+          set(CMAKE_CXX_COMPILER $ENV{GNU_CXX})
         elseif ($ENV{CXX} MATCHES .*g\\+\\+.*)
+          set(CMAKE_C_COMPILER $ENV{CC})
           set(CMAKE_CXX_COMPILER $ENV{CXX})
         else()
           find_program(CMAKE_CXX_COMPILER g++)
         endif()
@@ -51,16 +55,16 @@ macro(parse_target)
       if (${cur_component} STREQUAL clang)
         if (DEFINED ENV{CLANG_CC})
           set(CMAKE_C_COMPILER $ENV{CLANG_CC})
-        elseif ($ENV{CXX} MATCHES .*clang.*)
-          set(CMAKE_CXX_COMPILER $ENV{CC})
+        elseif ($ENV{CC} MATCHES .*clang.*)
+          set(CMAKE_C_COMPILER $ENV{CC})
         else()
           find_program(CMAKE_C_COMPILER clang)
         endif()
 
         if (DEFINED ENV{CLANG_CXX})
-          set(CMAKE_C_COMPILER $ENV{CLANG_CXX})
+          set(CMAKE_CXX_COMPILER $ENV{CLANG_CXX})
         elseif ($ENV{CXX} MATCHES .*clang.*)
           set(CMAKE_C_COMPILER $ENV{CC})
           set(CMAKE_CXX_COMPILER $ENV{CXX})
         else()
           find_program(CMAKE_CXX_COMPILER clang++)
         endif()
@@ -105,18 +109,55 @@ macro(parse_target)
     endif()
 
   endwhile()
 
-  if (release)
-    set(CMAKE_BUILD_TYPE Release)
-  else()
-    set(CMAKE_BUILD_TYPE Debug)
-  endif()
-
-  if (NOT unity)
-    set(CMAKE_BUILD_TYPE ${CMAKE_BUILD_TYPE}Classic)
-  endif()
   endif()
 
   if(CMAKE_C_COMPILER MATCHES "-NOTFOUND$" OR
     CMAKE_CXX_COMPILER MATCHES "-NOTFOUND$")
     message(FATAL_ERROR "Can not find appropriate compiler for target ${target}")
   endif()
 
+  # If defined, promote the compiler path values to the CACHE, then
+  # unset the locals to prevent shadowing. Some scenarios do not
+  # need or want to find a compiler, such as -GNinja under Windows.
+  # Setting these values in those case may prevent CMake from finding
+  # a valid compiler.
+  if (CMAKE_C_COMPILER)
+    set(CMAKE_C_COMPILER ${CMAKE_C_COMPILER} CACHE FILEPATH
+      "Path to a program" FORCE)
+    unset(CMAKE_C_COMPILER)
+  endif (CMAKE_C_COMPILER)
+  if (CMAKE_CXX_COMPILER)
+    set(CMAKE_CXX_COMPILER ${CMAKE_CXX_COMPILER} CACHE FILEPATH
+      "Path to a program" FORCE)
+    unset(CMAKE_CXX_COMPILER)
+  endif (CMAKE_CXX_COMPILER)
+
+  if (release)
+    set(CMAKE_BUILD_TYPE Release)
+  else()
+    set(CMAKE_BUILD_TYPE Debug)
+  endif()
+
+  # ensure that the unity flags are set and exclusive
+  if (NOT DEFINED unity OR unity)
+    # Default to unity builds
+    set(unity true)
+    set(nonunity false)
+  else()
+    set(unity false)
+    set(nonunity true)
+  endif()
+
   if (NOT unity)
     set(CMAKE_BUILD_TYPE ${CMAKE_BUILD_TYPE}Classic)
   endif()
+  # Promote this value to the CACHE, then unset the local
+  # to prevent shadowing.
+  set(CMAKE_BUILD_TYPE ${CMAKE_BUILD_TYPE} CACHE INTERNAL
+    "Choose the type of build, options are in CMAKE_CONFIGURATION_TYPES"
+    FORCE)
+  unset(CMAKE_BUILD_TYPE)
 
 endmacro()
 
 ############################################################
```
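The reworked `parse_target` walks a dotted target string such as `gcc.release.nounity`, deriving the compiler, the build type, and whether this is a unity build, and appends `Classic` to the configuration name for non-unity builds. A minimal Python sketch of that component-splitting logic (the function and field names here are illustrative, not taken from the build scripts):

```python
def parse_target(target):
    """Split a dotted build-target string into its components,
    mirroring how the CMake macro inspects each dot-separated part."""
    settings = {'compiler': None, 'build_type': 'Debug', 'unity': True}
    for component in target.split('.'):
        if component in ('gcc', 'clang', 'msvc'):
            settings['compiler'] = component
        elif component == 'release':
            settings['build_type'] = 'Release'
        elif component == 'debug':
            settings['build_type'] = 'Debug'
        elif component == 'nounity':
            settings['unity'] = False
    # Non-unity builds get a distinct configuration name, as in the diff
    if not settings['unity']:
        settings['build_type'] += 'Classic'
    return settings

print(parse_target('gcc.release.nounity'))
```

This is only a sketch of the dispatch; the real macro also promotes the resulting values to the CMake cache so they are not shadowed by local variables.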
```diff
@@ -128,11 +169,17 @@ macro(setup_build_cache)
   set(assert false CACHE BOOL "Enables asserts, even in release builds")
   set(static false CACHE BOOL
     "On linux, link protobuf, openssl, libc++, and boost statically")
+  set(jemalloc false CACHE BOOL "Enables jemalloc for heap profiling")
+  set(perf false CACHE BOOL "Enables flags that assist with perf recording")
 
   if (static AND (WIN32 OR APPLE))
     message(FATAL_ERROR "Static linking is only supported on linux.")
   endif()
 
+  if (perf AND (WIN32 OR APPLE))
+    message(FATAL_ERROR "perf flags are only supported on linux.")
+  endif()
+
   if (${CMAKE_GENERATOR} STREQUAL "Unix Makefiles" AND NOT CMAKE_BUILD_TYPE)
     set(CMAKE_BUILD_TYPE Debug)
   endif()
@@ -147,15 +194,13 @@ macro(setup_build_cache)
     set(CMAKE_CONFIGURATION_TYPES
       DebugClassic
       ReleaseClassic)
-    if(${CMAKE_BUILD_TYPE} STREQUAL "Debug")
-      set(CMAKE_BUILD_TYPE DebugClassic)
-    elseif(${CMAKE_BUILD_TYPE} STREQUAL "Release")
-      set(CMAKE_BUILD_TYPE ReleaseClassic)
-    endif()
   endif()
 
   # Promote this value to the CACHE, then unset the local
   # to prevent shadowing.
   set(CMAKE_CONFIGURATION_TYPES
     ${CMAKE_CONFIGURATION_TYPES} CACHE STRING "" FORCE)
   unset(CMAKE_CONFIGURATION_TYPES)
 endmacro()
 
 ############################################################
```
```diff
@@ -174,20 +219,24 @@ macro(append_flags name)
   endforeach()
 endmacro()
 
-macro(group_sources curdir)
-  file(GLOB children RELATIVE ${PROJECT_SOURCE_DIR}/${curdir}
-    ${PROJECT_SOURCE_DIR}/${curdir}/*)
+macro(group_sources_in source_dir curdir)
+  file(GLOB children RELATIVE ${source_dir}/${curdir}
+    ${source_dir}/${curdir}/*)
   foreach (child ${children})
-    if (IS_DIRECTORY ${PROJECT_SOURCE_DIR}/${curdir}/${child})
-      group_sources(${curdir}/${child})
+    if (IS_DIRECTORY ${source_dir}/${curdir}/${child})
+      group_sources_in(${source_dir} ${curdir}/${child})
     else()
       string(REPLACE "/" "\\" groupname ${curdir})
       source_group(${groupname} FILES
-        ${PROJECT_SOURCE_DIR}/${curdir}/${child})
+        ${source_dir}/${curdir}/${child})
     endif()
   endforeach()
 endmacro()
 
+macro(group_sources curdir)
+  group_sources_in(${PROJECT_SOURCE_DIR} ${curdir})
+endmacro()
+
 macro(add_with_props src_var files)
   list(APPEND ${src_var} ${files})
   foreach (arg ${ARGN})
```
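The refactor above turns the hard-coded `${PROJECT_SOURCE_DIR}` root into a `source_dir` parameter, keeping `group_sources` as a thin wrapper. The recursion itself (walk a directory tree, put each file into an IDE source group named after its directory, with `/` swapped for `\` for Visual Studio) can be sketched in Python over a dict-based tree (this stand-in structure and the function name are illustrative):

```python
def group_sources_in(tree, curdir, groups):
    """Recursively assign files to IDE-style source groups.

    tree maps a name to a subtree dict (a directory) or None (a file),
    mirroring how the CMake macro recurses into IS_DIRECTORY children
    and otherwise calls source_group() with '/' replaced by '\\'."""
    for child, sub in sorted(tree.items()):
        if sub is not None:
            # Directory: recurse with the same root, deeper curdir
            group_sources_in(sub, curdir + '/' + child, groups)
        else:
            groupname = curdir.replace('/', '\\')
            groups.setdefault(groupname, []).append(child)
    return groups

tree = {'main.cpp': None, 'app': {'app.cpp': None}}
print(group_sources_in(tree, 'src', {}))
```

Parameterizing the root is what lets the same walk serve both the project tree and any other source root without duplicating the macro body.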
```diff
@@ -272,6 +321,7 @@ macro(use_boost)
   if ((NOT DEFINED BOOST_ROOT) AND (DEFINED ENV{BOOST_ROOT}))
     set(BOOST_ROOT $ENV{BOOST_ROOT})
   endif()
+  file(TO_CMAKE_PATH "${BOOST_ROOT}" BOOST_ROOT)
   if(WIN32 OR CYGWIN)
     # Workaround for MSVC having two boost versions - x86 and x64 on same PC in stage folders
     if(DEFINED BOOST_ROOT)
@@ -290,8 +340,11 @@ macro(use_boost)
   set(Boost_USE_STATIC_LIBS on)
   set(Boost_USE_MULTITHREADED on)
   set(Boost_USE_STATIC_RUNTIME off)
-  find_package(Boost COMPONENTS
-    ${ARGN})
+  if(MSVC)
+    find_package(Boost REQUIRED)
+  else()
+    find_package(Boost REQUIRED ${ARGN})
+  endif()
 
   if (Boost_FOUND OR
     ((CYGWIN OR WIN32) AND Boost_INCLUDE_DIRS AND Boost_LIBRARY_DIRS))
```
```diff
@@ -335,6 +388,11 @@ macro(use_openssl openssl_min)
   endif()
 
   find_package(OpenSSL)
+  # depending on how openssl is built, it might depend
+  # on zlib. In fact, the openssl find package should
+  # figure this out for us, but it does not currently...
+  # so let's add zlib ourselves to the lib list
+  find_package(ZLIB)
 
   if (static)
     set(CMAKE_FIND_LIBRARY_SUFFIXES tmp)
@@ -342,6 +400,7 @@ macro(use_openssl openssl_min)
 
   if (OPENSSL_FOUND)
     include_directories(${OPENSSL_INCLUDE_DIR})
+    list(APPEND OPENSSL_LIBRARIES ${ZLIB_LIBRARIES})
   else()
     message(FATAL_ERROR "OpenSSL not found")
   endif()
@@ -474,6 +533,10 @@ macro(setup_build_boilerplate)
     endif()
   endif()
 
+  if (perf)
+    add_compile_options(-fno-omit-frame-pointer)
+  endif()
+
   ############################################################
 
   add_definitions(
```
```diff
@@ -487,6 +550,25 @@ macro(setup_build_boilerplate)
 
   if (is_gcc)
     add_compile_options(-Wno-unused-but-set-variable -Wno-deprecated)
+
+    # use gold linker if available
+    execute_process(
+      COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=gold -Wl,--version
+      ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
+    # NOTE: THE gold linker inserts -rpath as DT_RUNPATH by default
+    # intead of DT_RPATH, so you might have slightly unexpected
+    # runtime ld behavior if you were expecting DT_RPATH.
+    # Specify --disable-new-dtags to gold if you do not want
+    # the default DT_RUNPATH behavior. This rpath treatment as well
+    # as static/dynamic selection means that gold does not currently
+    # have ideal default behavior when we are using jemalloc. Thus
+    # for simplicity we don't use it when jemalloc is requested.
+    # An alternative to disabling would be to figure out all the settings
+    # required to make gold play nicely with jemalloc.
+    if (("${LD_VERSION}" MATCHES "GNU gold") AND (NOT jemalloc))
+      append_flags(CMAKE_EXE_LINKER_FLAGS -fuse-ld=gold)
+    endif ()
+    unset(LD_VERSION)
   endif()
 
   # Generator expressions are not supported in add_definitions, use set_property instead
```
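The gold-linker probe above runs `$CXX -fuse-ld=gold -Wl,--version` and enables gold only when the output mentions "GNU gold" and jemalloc is off. The decision itself is a simple predicate; a Python sketch of just that check (the function name and sample version strings are illustrative):

```python
def use_gold_linker(ld_version_output, jemalloc):
    """Mirror the CMake condition: pick -fuse-ld=gold only when the
    probe output mentions GNU gold and jemalloc is not requested
    (gold's DT_RUNPATH default interacts badly with the jemalloc setup)."""
    return 'GNU gold' in ld_version_output and not jemalloc

linker_flags = []
if use_gold_linker('GNU gold (GNU Binutils) 2.30', jemalloc=False):
    linker_flags.append('-fuse-ld=gold')
```

Probing by actually invoking the compiler (rather than checking for a `gold` binary) matters because the toolchain may ignore `-fuse-ld=gold` even when gold is installed.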
```diff
@@ -502,6 +584,24 @@ macro(setup_build_boilerplate)
       APPEND
       PROPERTY COMPILE_DEFINITIONS
       $<$<OR:$<BOOL:${profile}>,$<CONFIG:Release>,$<CONFIG:ReleaseClassic>>:NDEBUG>)
+  else()
+    # CMAKE_CXX_FLAGS_RELEASE is created by CMake for most / all generators
+    # with defaults including /DNDEBUG or -DNDEBUG, and that value is stored
+    # in the cache. Override that locally so that the cache value will be
+    # avaiable if "assert" is ever changed.
+    STRING(REGEX REPLACE "[-/]DNDEBUG" "" CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE}")
+    STRING(REGEX REPLACE "[-/]DNDEBUG" "" CMAKE_CXX_FLAGS_RELEASECLASSIC "${CMAKE_CXX_FLAGS_RELEASECLASSIC}")
+    STRING(REGEX REPLACE "[-/]DNDEBUG" "" CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE}")
+    STRING(REGEX REPLACE "[-/]DNDEBUG" "" CMAKE_C_FLAGS_RELEASECLASSIC "${CMAKE_C_FLAGS_RELEASECLASSIC}")
+  endif()
+
+  if (jemalloc)
+    find_package(jemalloc REQUIRED)
+    add_definitions(-DPROFILE_JEMALLOC)
+    include_directories(SYSTEM ${JEMALLOC_INCLUDE_DIRS})
+    link_libraries(${JEMALLOC_LIBRARIES})
+    get_filename_component(JEMALLOC_LIB_PATH ${JEMALLOC_LIBRARIES} DIRECTORY)
+    set(CMAKE_BUILD_RPATH ${CMAKE_BUILD_RPATH} ${JEMALLOC_LIB_PATH})
+  endif()
 
   if (NOT WIN32)
```
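The `STRING(REGEX REPLACE "[-/]DNDEBUG" "" ...)` calls above strip the `NDEBUG` define from CMake's cached release flags so that asserts can stay enabled in release builds when `assert=true`; the pattern covers both the GCC/Clang spelling (`-DNDEBUG`) and the MSVC one (`/DNDEBUG`). The same substitution in Python:

```python
import re

def strip_ndebug(flags):
    """Remove -DNDEBUG (GCC/Clang) or /DNDEBUG (MSVC) from a flags
    string, like the STRING(REGEX REPLACE "[-/]DNDEBUG" "" ...) calls."""
    return re.sub(r'[-/]DNDEBUG', '', flags)

print(strip_ndebug('-O3 -DNDEBUG'))
```

Note the replacement leaves the surrounding whitespace intact, exactly as the CMake regex does; only the define token itself is removed.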
```diff
@@ -520,13 +620,21 @@ macro(setup_build_boilerplate)
     add_compile_options(
       -Wno-redeclared-class-member -Wno-mismatched-tags -Wno-deprecated-register)
     add_definitions(-DBOOST_ASIO_HAS_STD_ARRAY)
+
+    # use ldd linker if available
+    execute_process(
+      COMMAND ${CMAKE_CXX_COMPILER} -fuse-ld=lld -Wl,--version
+      ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
+    if ("${LD_VERSION}" MATCHES "LLD")
+      append_flags(CMAKE_EXE_LINKER_FLAGS -fuse-ld=lld)
+    endif ()
+    unset(LD_VERSION)
   endif()
 
   if (APPLE)
-    add_definitions(-DBEAST_COMPILE_OBJECTIVE_CPP=1
-      -DNO_LOG_UNHANDLED_EXCEPTIONS)
+    add_definitions(-DBEAST_COMPILE_OBJECTIVE_CPP=1)
     add_compile_options(
-      -Wno-deprecated -Wno-deprecated-declarations -Wno-unused-variable -Wno-unused-function)
+      -Wno-deprecated -Wno-deprecated-declarations -Wno-unused-function)
   endif()
 
   if (is_gcc)
@@ -535,27 +643,27 @@ macro(setup_build_boilerplate)
   endif (is_gcc)
 else(NOT WIN32)
   add_compile_options(
     /bigobj            # Increase object file max size
     /EHa               # ExceptionHandling all
     /fp:precise        # Floating point behavior
     /Gd                # __cdecl calling convention
     /Gm-               # Minimal rebuild: disabled
     /GR                # Enable RTTI
     /Gy-               # Function level linking: disabled
+    /FS
-    /MP                # Multiprocessor compilation
-    /openmp-           # pragma omp: disabled
-    /Zc:forScope       # Language extension: for scope
-    /Zi                # Generate complete debug info
-    /errorReport:none  # No error reporting to Internet
-    /nologo            # Suppress login banner
-    /W3                # Warning level 3
-    /WX-               # Disable warnings as errors
-    /wd"4018"
-    /wd"4244"
-    /wd"4267"
-    /wd"4800"          # Disable C4800(int to bool performance)
-    /wd"4503"          # Decorated name length exceeded, name was truncated
+    /MP                # Multiprocessor compilation
+    /openmp-           # pragma omp: disabled
+    /Zc:forScope       # Language conformance: for scope
+    /Zi                # Generate complete debug info
+    /errorReport:none  # No error reporting to Internet
+    /nologo            # Suppress login banner
+    /W3                # Warning level 3
+    /WX-               # Disable warnings as errors
+    /wd4018            # Disable signed/unsigned comparison warnings
+    /wd4244            # Disable float to int possible loss of data warnings
+    /wd4267            # Disable size_t to T possible loss of data warnings
+    /wd4800            # Disable C4800(int to bool performance)
+    /wd4503            # Decorated name length exceeded, name was truncated
   )
   add_definitions(
     -D_WIN32_WINNT=0x6000
```
```diff
@@ -640,7 +748,7 @@ macro(link_common_libraries cur_project)
     find_library(app_kit AppKit)
     find_library(foundation Foundation)
     target_link_libraries(${cur_project}
-      crypto ssl ${app_kit} ${foundation})
+      ${app_kit} ${foundation})
   else()
     target_link_libraries(${cur_project} rt)
   endif()
@@ -653,6 +761,6 @@ macro(link_common_libraries cur_project)
     $<$<OR:$<CONFIG:Release>,$<CONFIG:ReleaseClassic>>:VC/static/libeay32MT>)
   target_link_libraries(${cur_project}
     legacy_stdio_definitions.lib Shlwapi kernel32 user32 gdi32 winspool comdlg32
-    advapi32 shell32 ole32 oleaut32 uuid odbc32 odbccp32)
+    advapi32 shell32 ole32 oleaut32 uuid odbc32 odbccp32 crypt32)
 endif (NOT MSVC)
 endmacro()
```
Builds/CMake/Findjemalloc.cmake — new file, 47 lines

```cmake
# - Try to find jemalloc
# Once done this will define
#  JEMALLOC_FOUND - System has jemalloc
#  JEMALLOC_INCLUDE_DIRS - The jemalloc include directories
#  JEMALLOC_LIBRARIES - The libraries needed to use jemalloc

if(NOT USE_BUNDLED_JEMALLOC)
  find_package(PkgConfig)
  if (PKG_CONFIG_FOUND)
    pkg_check_modules(PC_JEMALLOC QUIET jemalloc)
  endif()
else()
  set(PC_JEMALLOC_INCLUDEDIR)
  set(PC_JEMALLOC_INCLUDE_DIRS)
  set(PC_JEMALLOC_LIBDIR)
  set(PC_JEMALLOC_LIBRARY_DIRS)
  set(LIMIT_SEARCH NO_DEFAULT_PATH)
endif()

set(JEMALLOC_DEFINITIONS ${PC_JEMALLOC_CFLAGS_OTHER})

find_path(JEMALLOC_INCLUDE_DIR jemalloc/jemalloc.h
          PATHS ${PC_JEMALLOC_INCLUDEDIR} ${PC_JEMALLOC_INCLUDE_DIRS}
          ${LIMIT_SEARCH})

# If we're asked to use static linkage, add libjemalloc.a as a preferred library name.
if(JEMALLOC_USE_STATIC)
  list(APPEND JEMALLOC_NAMES
    "${CMAKE_STATIC_LIBRARY_PREFIX}jemalloc${CMAKE_STATIC_LIBRARY_SUFFIX}")
endif()

list(APPEND JEMALLOC_NAMES jemalloc)

find_library(JEMALLOC_LIBRARY NAMES ${JEMALLOC_NAMES}
             HINTS ${PC_JEMALLOC_LIBDIR} ${PC_JEMALLOC_LIBRARY_DIRS}
             ${LIMIT_SEARCH})

set(JEMALLOC_LIBRARIES ${JEMALLOC_LIBRARY})
set(JEMALLOC_INCLUDE_DIRS ${JEMALLOC_INCLUDE_DIR})

include(FindPackageHandleStandardArgs)
# handle the QUIETLY and REQUIRED arguments and set JEMALLOC_FOUND to TRUE
# if all listed variables are TRUE
find_package_handle_standard_args(JeMalloc DEFAULT_MSG
                                  JEMALLOC_LIBRARY JEMALLOC_INCLUDE_DIR)

mark_as_advanced(JEMALLOC_INCLUDE_DIR JEMALLOC_LIBRARY)
```
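One detail of this find module worth highlighting: when `JEMALLOC_USE_STATIC` is set, the static archive name (`libjemalloc.a` on Linux) is prepended to the candidate list so `find_library` prefers it over the shared library, while the generic `jemalloc` name remains as a fallback. That ordering logic in isolation, as a Python sketch (the function name is illustrative):

```python
def jemalloc_library_names(use_static, prefix='lib', suffix='.a'):
    """Build the ordered candidate list the find module searches:
    the platform's static archive name first when static linkage is
    requested, then the default bare 'jemalloc' name as a fallback."""
    names = []
    if use_static:
        # Mirrors CMAKE_STATIC_LIBRARY_PREFIX/SUFFIX on Linux
        names.append(prefix + 'jemalloc' + suffix)
    names.append('jemalloc')
    return names

print(jemalloc_library_names(True))
```

Order matters because `find_library` stops at the first name that resolves, so putting the archive first is what makes static linkage "preferred" rather than merely possible.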
Builds/Test.py — 306 changed lines

```diff
@@ -1,7 +1,7 @@
 #!/usr/bin/env python
 
 # This file is part of rippled: https://github.com/ripple/rippled
-# Copyright (c) 2012 - 2015 Ripple Labs Inc.
+# Copyright (c) 2012 - 2017 Ripple Labs Inc.
 #
 # Permission to use, copy, modify, and/or distribute this software for any
 # purpose with or without fee is hereby granted, provided that the above
@@ -27,6 +27,11 @@ the -- flag - like this:
 
     ./Builds/Test.py -- -j4        # Pass -j4 to scons.
 
+To build with CMake, use the --cmake flag, or any of the specific configuration
+flags
+
+    ./Builds/Test.py --cmake -- -j4  # Pass -j4 to cmake --build
+
 
 Common problems:
 
```
```diff
@@ -61,6 +66,38 @@ def powerset(iterable):
 IS_WINDOWS = platform.system().lower() == 'windows'
 IS_OS_X = platform.system().lower() == 'darwin'
 
+# CMake
+if IS_WINDOWS:
+    CMAKE_UNITY_CONFIGS = ['Debug', 'Release']
+    CMAKE_NONUNITY_CONFIGS = ['DebugClassic', 'ReleaseClassic']
+else:
+    CMAKE_UNITY_CONFIGS = []
+    CMAKE_NONUNITY_CONFIGS = []
+CMAKE_UNITY_COMBOS = { '' : [['rippled', 'rippled_classic'], CMAKE_UNITY_CONFIGS],
+    '.nounity' : [['rippled', 'rippled_unity'], CMAKE_NONUNITY_CONFIGS] }
+
+if IS_WINDOWS:
+    CMAKE_DIR_TARGETS = { ('msvc' + unity,) : targets for unity, targets in
+        CMAKE_UNITY_COMBOS.items() }
+elif IS_OS_X:
+    CMAKE_DIR_TARGETS = { (build + unity,) : targets
+        for build in ['debug', 'release']
+        for unity, targets in CMAKE_UNITY_COMBOS.items() }
+else:
+    CMAKE_DIR_TARGETS = { (cc + "." + build + unity,) : targets
+        for cc in ['gcc', 'clang']
+        for build in ['debug', 'release', 'coverage', 'profile']
+        for unity, targets in CMAKE_UNITY_COMBOS.items() }
+
+# list of tuples of all possible options
+if IS_WINDOWS or IS_OS_X:
+    CMAKE_ALL_GENERATE_OPTIONS = [tuple(x) for x in powerset(['-GNinja', '-Dassert=true'])]
+else:
+    CMAKE_ALL_GENERATE_OPTIONS = list(set(
+        [tuple(x) for x in powerset(['-GNinja', '-Dstatic=true', '-Dassert=true', '-Dsan=address'])] +
+        [tuple(x) for x in powerset(['-GNinja', '-Dstatic=true', '-Dassert=true', '-Dsan=thread'])]))
+
+# Scons
 if IS_WINDOWS or IS_OS_X:
     ALL_TARGETS = [('debug',), ('release',)]
 else:
```
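The `CMAKE_DIR_TARGETS` comprehensions above enumerate one build directory per combination of compiler, build type, and unity mode, and `powerset` (the standard itertools recipe the script defines) expands the option lists into every subset. A standalone check of what those comprehensions generate on the Linux branch:

```python
from itertools import chain, combinations

def powerset(iterable):
    """All subsets of iterable, smallest first (itertools recipe,
    as used by Builds/Test.py)."""
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Linux-style dir names: cc.build[.nounity], as in CMAKE_DIR_TARGETS
dirs = [cc + '.' + build + unity
        for cc in ['gcc', 'clang']
        for build in ['debug', 'release', 'coverage', 'profile']
        for unity in ['', '.nounity']]
print(len(dirs))  # → 16
```

Two compilers, four build types, and two unity modes give 16 directories, and the powerset of a two-element option list yields 4 option tuples (including the empty one), which is why `--all` runs balloon quickly on Linux.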
```diff
@@ -71,7 +108,7 @@ else:
 
 # list of tuples of all possible options
 if IS_WINDOWS or IS_OS_X:
-    ALL_OPTIONS = [tuple(x) for x in powerset(['--assert'])]
+    ALL_OPTIONS = [tuple(x) for x in powerset(['--ninja', '--assert'])]
 else:
     ALL_OPTIONS = list(set(
         [tuple(x) for x in powerset(['--ninja', '--static', '--assert', '--sanitize=address'])] +
```
```diff
@@ -124,13 +161,81 @@ parser.add_argument(
 )
 
 parser.add_argument(
-    'scons_args',
+    '--quiet', '-q',
+    action='store_true',
+    help='Reduce output where possible (unit tests)',
+)
+
+# Scons and CMake parameters are too different to run
+# both side-by-side
+pgroup = parser.add_mutually_exclusive_group()
+
+pgroup.add_argument(
+    '--cmake',
+    action='store_true',
+    help='Build using CMake.',
+)
+
+pgroup.add_argument(
+    '--scons',
+    action='store_true',
+    help='Build using Scons. Default behavior.')
+
+parser.add_argument(
+    '--dir', '-d',
     default=(),
-    nargs='*'
+    nargs='*',
+    help='Specify one or more CMake dir names. Implies --cmake. '
+         'Will also be used as -Dtarget=<dir> running cmake.'
+)
+
+parser.add_argument(
+    '--target',
+    default=(),
+    nargs='*',
+    help='Specify one or more CMake build targets. Implies --cmake. '
+         'Will be used as --target <target> running cmake --build.'
+)
+
+parser.add_argument(
+    '--config',
+    default=(),
+    nargs='*',
+    help='Specify one or more CMake build configs. Implies --cmake. '
+         'Will be used as --config <config> running cmake --build.'
+)
+
+parser.add_argument(
+    '--generator_option',
+    action='append',
+    help='Specify a CMake generator option. Repeat for multiple options. '
+         'Implies --cmake. Will be passed to the cmake generator. '
+         'Due to limits of the argument parser, arguments starting with \'-\' '
+         'must be attached to this option. e.g. --generator_option=-GNinja.')
+
+parser.add_argument(
+    '--build_option',
+    action='append',
+    help='Specify a build option. Repeat for multiple options. Implies --cmake. '
+         'Will be passed to the build tool via cmake --build. '
+         'Due to limits of the argument parser, arguments starting with \'-\' '
+         'must be attached to this option. e.g. --build_option=-j8.')
+
+parser.add_argument(
+    'extra_args',
+    default=(),
+    nargs='*',
+    help='Extra arguments are passed through to the tools'
 )
 
 ARGS = parser.parse_args()
 
+def decodeString(line):
+    # Python 2 vs. Python 3
+    if isinstance(line, str):
+        return line
+    else:
+        return line.decode()
+
 def shell(cmd, args=(), silent=False):
     """"Execute a shell command and return the output."""
```
```diff
@@ -141,6 +246,7 @@ def shell(cmd, args=(), silent=False):
 
     command = (cmd,) + args
 
+    # shell is needed in Windows to find scons in the path
     process = subprocess.Popen(
         command,
         stdin=subprocess.PIPE,
@@ -149,12 +255,9 @@ def shell(cmd, args=(), silent=False):
         shell=IS_WINDOWS)
     lines = []
     count = 0
-    for line in process.stdout:
-        # Python 2 vs. Python 3
-        if isinstance(line, str):
-            decoded = line
-        else:
-            decoded = line.decode()
+    # readline returns '' at EOF
+    for line in iter(process.stdout.readline, ''):
+        decoded = decodeString(line)
         lines.append(decoded)
         if verbose:
             print(decoded, end='')
```
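The loop rewrite above replaces direct iteration over `process.stdout` with `iter(process.stdout.readline, '')`, which calls `readline` until it returns the empty string at EOF (direct iteration can buffer output on some Python 2 pipes), and factors the bytes-vs-str handling into the shared `decodeString` helper. A self-contained demonstration with `io.StringIO` standing in for the process pipe:

```python
import io

def decodeString(line):
    # Python 2 pipes yield bytes; Python 3 text streams yield str
    if isinstance(line, str):
        return line
    return line.decode()

stream = io.StringIO('first\nsecond\n')
# readline returns '' at EOF, so iter(..., '') stops there
lines = [decodeString(line) for line in iter(stream.readline, '')]
print(lines)
```

The two-argument form of `iter()` takes a callable and a sentinel; iteration ends when the callable returns the sentinel, which is exactly the contract `readline` provides.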
```diff
@@ -186,9 +289,12 @@ def run_tests(args):
 
         print('Unit tests for', target)
         testflag = '--unittest'
+        quiet = ''
         if ARGS.test:
             testflag += ('=' + ARGS.test)
-        resultcode, lines = shell(executable, (testflag,))
+        if ARGS.quiet:
+            quiet = '-q'
+        resultcode, lines = shell(executable, (testflag, quiet,))
 
         if resultcode:
             if not ARGS.verbose:
```
@@ -218,45 +324,163 @@ def run_build(args=None):
|
||||
print(*lines, sep='')
|
||||
sys.exit(1)
|
||||
|
||||
def get_cmake_dir(cmake_dir):
|
||||
return os.path.join('build' , 'cmake' , cmake_dir)
|
||||
|
||||
def run_cmake(directory, cmake_dir, args):
|
||||
print('Generating build in', directory, 'with', *args or ('default options',))
|
||||
old_dir = os.getcwd()
|
||||
if not os.path.exists(directory):
|
||||
os.makedirs(directory)
|
||||
os.chdir(directory)
|
||||
if IS_WINDOWS and not any(arg.startswith("-G") for arg in args) and not os.path.exists("CMakeCache.txt"):
|
||||
if '--ninja' in args:
|
||||
args += ( '-GNinja', )
|
||||
else:
|
||||
args += ( '-GVisual Studio 14 2015 Win64', )
|
||||
args += ( '-Dtarget=' + cmake_dir, os.path.join('..', '..', '..'), )
|
||||
resultcode, lines = shell('cmake', args)
|
||||
|
||||
if resultcode:
|
||||
print('Generating FAILED:')
|
||||
if not ARGS.verbose:
|
||||
print(*lines, sep='')
|
||||
sys.exit(1)
|
||||
|
||||
os.chdir(old_dir)

def run_cmake_build(directory, target, config, args):
    print('Building', target, config, 'in', directory, 'with', *args or ('default options',))
    build_args = ('--build', directory)
    if target:
        build_args += ('--target', target)
    if config:
        build_args += ('--config', config)
    if args:
        build_args += ('--',)
        build_args += tuple(args)
    resultcode, lines = shell('cmake', build_args)

    if resultcode:
        print('Build FAILED:')
        if not ARGS.verbose:
            print(*lines, sep='')
        sys.exit(1)
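The tuple assembly in `run_cmake_build` produces an ordinary `cmake --build` command line, with tool-specific options after the `--` separator. A standalone sketch of just the argument construction (the directory, target, and `-j4` values are illustrative):

```python
def cmake_build_args(directory, target, config, extra):
    # Mirror run_cmake_build's tuple assembly: target and config are
    # optional, and native build-tool flags go after '--'.
    build_args = ('--build', directory)
    if target:
        build_args += ('--target', target)
    if config:
        build_args += ('--config', config)
    if extra:
        build_args += ('--',) + tuple(extra)
    return build_args

args = cmake_build_args('build/cmake/gcc.debug', 'rippled', '', ['-j4'])
print(' '.join(('cmake',) + args))
```

An empty config string is falsy, so `--config` is simply omitted for single-configuration generators.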

def run_cmake_tests(directory, target, config):
    failed = []
    if IS_WINDOWS:
        target += '.exe'
    executable = os.path.join(directory, config if config else 'Debug', target)
    if not os.path.exists(executable):
        executable = os.path.join(directory, target)
    print('Unit tests for', executable)
    testflag = '--unittest'
    quiet = ''
    if ARGS.test:
        testflag += ('=' + ARGS.test)
    if ARGS.quiet:
        quiet = '-q'
    resultcode, lines = shell(executable, (testflag, quiet,))

    if resultcode:
        if not ARGS.verbose:
            print('ERROR:', *lines, sep='')
        failed.append([target, 'unittest'])

    return failed
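The `shell` helper these functions rely on is defined outside this hunk. Its exact signature is an assumption here, but from the call sites it must run a command and return an `(exit code, output lines)` pair; a plausible minimal sketch:

```python
import subprocess

def shell(executable, args=()):
    # Hypothetical stand-in for the script's shell() helper: run the
    # command, merge stderr into stdout, and return (code, lines).
    # Empty arguments (e.g. quiet == '') are filtered out here, which
    # the real helper may or may not do.
    process = subprocess.Popen(
        (executable,) + tuple(a for a in args if a),
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT)
    out, _ = process.communicate()
    lines = out.decode('utf-8', errors='replace').splitlines(True)
    return process.returncode, lines

code, lines = shell('echo', ('hello',))
```

On a POSIX host the `echo` call above exits 0 with `hello` as its only output line.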

def main():
    if ARGS.dir or ARGS.target or ARGS.config or ARGS.build_option or ARGS.generator_option:
        ARGS.cmake = True

    all_failed = []

    if not ARGS.cmake:
        if ARGS.all:
            to_build = ALL_BUILDS
        else:
            to_build = [tuple(ARGS.extra_args)]

        for build in to_build:
            args = ()
            # additional arguments come first
            for arg in list(ARGS.extra_args):
                if arg not in build:
                    args += (arg,)
            args += build

            run_build(args)
            failed = run_tests(args)

            if failed:
                print('FAILED:', *(':'.join(f) for f in failed))
                if not ARGS.keep_going:
                    sys.exit(1)
                else:
                    all_failed.extend([','.join(build), ':'.join(f)]
                                      for f in failed)
            else:
                print('Success')

            if ARGS.clean:
                shutil.rmtree('build')
                if '--ninja' in args:
                    os.remove('build.ninja')
                    os.remove('.ninja_deps')
                    os.remove('.ninja_log')
    else:
        if ARGS.all:
            build_dir_targets = CMAKE_DIR_TARGETS
            generator_options = CMAKE_ALL_GENERATE_OPTIONS
        else:
            build_dir_targets = { tuple(ARGS.dir) : [ARGS.target, ARGS.config] }
            if ARGS.generator_option:
                generator_options = [tuple(ARGS.generator_option)]
            else:
                generator_options = [tuple()]

        if not build_dir_targets:
            # Let CMake choose the build tool.
            build_dir_targets = { () : [] }

        if ARGS.build_option:
            ARGS.build_option = ARGS.build_option + list(ARGS.extra_args)
        else:
            ARGS.build_option = list(ARGS.extra_args)

        for args in generator_options:
            for build_dirs, (build_targets, build_configs) in build_dir_targets.items():
                if not build_dirs:
                    build_dirs = ('default',)
                if not build_targets:
                    build_targets = ('rippled',)
                if not build_configs:
                    build_configs = ('',)
                for cmake_dir in build_dirs:
                    cmake_full_dir = get_cmake_dir(cmake_dir)
                    run_cmake(cmake_full_dir, cmake_dir, args)

                    for target in build_targets:
                        for config in build_configs:
                            run_cmake_build(cmake_full_dir, target, config, ARGS.build_option)
                            failed = run_cmake_tests(cmake_full_dir, target, config)

                            if failed:
                                print('FAILED:', *(':'.join(f) for f in failed))
                                if not ARGS.keep_going:
                                    sys.exit(1)
                                else:
                                    all_failed.extend([decodeString(cmake_dir +
                                        "." + target + "." + config), ':'.join(f)]
                                        for f in failed)
                            else:
                                print('Success')
                            if ARGS.clean:
                                shutil.rmtree(cmake_full_dir)

    if all_failed:
        if len(all_failed) > 1:
            print()
        print('FAILED:', *(':'.join(f) for f in all_failed))
        sys.exit(1)
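The nested cmake loops above fall back to defaults when a dimension (directories, targets, configs) is unspecified. The expansion can be sketched in isolation (function and variable names here are illustrative, not from the script):

```python
def expand(build_dirs, build_targets, build_configs):
    # Mirror main()'s fallbacks before iterating the cmake builds:
    # empty tuples mean "use the default for that dimension".
    build_dirs = build_dirs or ('default',)
    build_targets = build_targets or ('rippled',)
    build_configs = build_configs or ('',)
    return [(d, t, c)
            for d in build_dirs
            for t in build_targets
            for c in build_configs]

combos = expand((), (), ('Debug', 'Release'))
print(combos)
```

With only the configs given, every combination uses the `default` directory and the `rippled` target.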

@@ -4,7 +4,7 @@
# This script builds boost with the correct ABI flags for ubuntu
#

version=63
patch=0

if hash lsb_release 2>/dev/null; then

@@ -44,7 +44,11 @@ if [ ${ubuntu_release} == "14.04" ] || [ ${ubuntu_release} == "15.04" ]; then
    exit 0
fi

# Test whether the 0th parameter has a version number greater than or equal to the 1st parameter
function version_check() { test "$(printf '%s\n' "$@" | sort -V | tail -n 1)" == "$1"; }

# this should work for versions greater than 15.10
if version_check ${ubuntu_release} 15.10; then
    apt-get update
    apt-get -y upgrade
    apt-get -y install python-software-properties curl git scons ctags pkg-config protobuf-compiler libprotobuf-dev libssl-dev libboost-all-dev
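`version_check` leans on `sort -V` to decide whether the first dotted version is the larger one. The same ordering can be reproduced with a numeric tuple comparison (a sketch for illustration, not part of the script):

```python
def version_check(a, b):
    # True when version string a >= b, matching the shell helper's
    # `sort -V | tail -n 1` trick for dotted numeric versions.
    to_tuple = lambda v: tuple(int(x) for x in v.split('.'))
    return to_tuple(a) >= to_tuple(b)

print(version_check('16.04', '15.10'))  # True
```

Note the tuple comparison treats `16.04` as `(16, 4)`, so leading zeros in a component are ignored just as `sort -V`'s numeric ordering ignores them.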

File diff suppressed because it is too large
@@ -2,13 +2,13 @@

## Important

We don't recommend macOS for rippled production use at this time. Currently, the
Ubuntu platform has received the highest level of quality assurance and
testing. That said, macOS is suitable for many development/test tasks.

## Prerequisites

You'll need macOS 10.8 or later

To clone the source code repository, create branches for inspection or
modification, build rippled using clang, and run the system tests you will need

@@ -17,7 +17,7 @@ these software components:

* [XCode](https://developer.apple.com/xcode/)
* [Homebrew](http://brew.sh/)
* [Git](http://git-scm.com/)
* [CMake](http://cmake.org/)

## Install Software

@@ -64,14 +64,14 @@ later.

### Install CMake

Requires version 3.6.0 or later

```
brew install cmake
```

`brew` will generally install the latest stable version of any package, which
should satisfy the cmake minimum version requirement for rippled.

### Install Package Config

@@ -163,10 +163,16 @@ export BOOST_ROOT=/Users/Abigail/Downloads/boost_1_61_0

## Build

```
mkdir xcode_build && cd xcode_build
cmake -GXcode ..
```

There are a number of variables/options that our CMake files support and they
can be added to the above command as needed (e.g. `-Dassert=ON` to enable
asserts).

After generation succeeds, the Xcode project file can be opened and used to
build and debug.

## Unit Tests (Recommended)

6 Builds/build_all.sh (Normal file → Executable file)
@@ -2,5 +2,7 @@

num_procs=$(lscpu -p | grep -v '^#' | sort -u -t, -k 2,4 | wc -l) # number of physical cores

path=$(cd $(dirname $0) && pwd)
cd $(dirname $path)
${path}/Test.py -a -c --test=TxQ -- -j${num_procs}
${path}/Test.py -a -c -k --test=TxQ --cmake -- -j${num_procs}

@@ -15,24 +15,30 @@
# * ninja builds
# * check openssl version on linux
# * static builds (swd TBD: needs to be tested by building & deploying on different systems)
#
# TBD:
# * jemalloc support
# * count
# * Windows protobuf compiler puts generated file in src directory instead of build directory.
# * jemalloc enabled builds (linux and macos only)
# * perf builds (linux only) - which just sets recommended compiler flags
#   for running perf on the executable
#
# Notes:
# * Use the -G"Visual Studio 14 2015 Win64" generator, or the "VS2015 x86 x64
#   Cross Tools" Command Prompt on Windows. Without this a 32-bit project will be
#   created. There is no way to set the generator or force a 64-bit build in
#   CMakeLists.txt (setting CMAKE_GENERATOR_PLATFORM won't work). The best solution
#   may be to wrap cmake with a script.
#
# * Ninja command line builds seem to work under Windows, but only from within
#   the "VS2015 x86 x64 Cross Tools" Command Prompt.
#
# * It is not possible to generate a visual studio project on linux or
#   mac. The visual studio generator is only available on windows.
#
# * The Visual Studio solution will be generated with two projects, one
#   unity, one non-unity. Which is default depends on the nounity flag in
#   -Dtarget. Unity targets will create `rippled` and `rippled_classic`.
#   Non-unity targets will create `rippled` and `rippled_unity`. In either
#   case, only the `rippled` build will be enabled by default. It does
#   not appear possible to include both unity and non-unity configs in one
#   project and disable compilation based on configuration.
#
# * Language is _much_ worse than python, poor documentation and "quirky"
#   language support (for example, generator expressions can only be used

@@ -52,7 +58,22 @@

#########################################################
# CMAKE_C_COMPILER and CMAKE_CXX_COMPILER must be defined
# before the project statement; However, the project
# statement will clear CMAKE_BUILD_TYPE. CACHE variables,
# along with the order of this code, are used to work
# around these constraints.
#
# Don't put any code above or in this block, unless it
# has similar constraints.
cmake_minimum_required(VERSION 3.1.0)
set(CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/Builds/CMake")
include(CMakeFuncs)
set(openssl_min 1.0.2)
parse_target()
project(rippled)
#########################################################

if("${CMAKE_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
  set(dir "build")

@@ -82,22 +103,8 @@ if("${CMAKE_GENERATOR}" MATCHES "Visual Studio" AND
    -G\"${CMAKE_GENERATOR} Win64\"")
endif()

if (NOT DEFINED unity)
  set(unity true)
  set(nonunity false)
endif()

setup_build_cache()

if(nonunity)
  get_cmake_property(allvars VARIABLES)
  string(REGEX MATCHALL "[^;]*(DEBUG|RELEASE)[^;]*" matchvars "${allvars}")

@@ -178,31 +185,41 @@ beast_utility_unity.cpp)

prepend(ripple_unity_srcs
  src/ripple/unity/
  app_consensus.cpp
  app_ledger.cpp
  app_main.cpp
  app_ledger_impl.cpp
  app_main1.cpp
  app_main2.cpp
  app_misc.cpp
  app_misc_impl.cpp
  app_paths.cpp
  app_tx.cpp
  conditions.cpp
  consensus.cpp
  core.cpp
  basics.cpp
  crypto.cpp
  ledger.cpp
  net.cpp
  overlay.cpp
  overlay1.cpp
  overlay2.cpp
  peerfinder.cpp
  json.cpp
  protocol.cpp
  rpcx.cpp
  rpcx1.cpp
  rpcx2.cpp
  shamap.cpp
  server.cpp)

prepend(test_unity_srcs
  src/test/unity/
  app_test_unity.cpp
  app_test_unity1.cpp
  app_test_unity2.cpp
  basics_test_unity.cpp
  beast_test_unity.cpp
  beast_test_unity1.cpp
  beast_test_unity2.cpp
  conditions_test_unity.cpp
  consensus_test_unity.cpp
  core_test_unity.cpp
  json_test_unity.cpp
  ledger_test_unity.cpp

@@ -212,8 +229,11 @@ protocol_test_unity.cpp
  resource_test_unity.cpp
  rpc_test_unity.cpp
  server_test_unity.cpp
  server_status_test_unity.cpp
  shamap_test_unity.cpp
  support_unity.cpp
  jtx_unity1.cpp
  jtx_unity2.cpp
  csf_unity.cpp)

list(APPEND rippled_src_unity ${beast_unity_srcs} ${ripple_unity_srcs} ${test_unity_srcs})

@@ -256,6 +276,7 @@ foreach(curdir
  basics
  conditions
  crypto
  consensus
  json
  ledger
  legacy

@@ -288,8 +309,11 @@ foreach(curdir
  basics
  beast
  conditions
  consensus
  core
  csf
  json
  jtx
  ledger
  nodestore
  overlay

@@ -298,8 +322,7 @@ foreach(curdir
  resource
  rpc
  server
  shamap)
  file(GLOB_RECURSE cursrcs src/test/${curdir}/*.cpp)
  list(APPEND test_srcs "${cursrcs}")
endforeach()

@@ -314,7 +337,7 @@ list(APPEND non_unity_srcs "${test_srcs}")

if(WIN32 OR is_xcode)
  # Rippled headers. Only needed for IDEs.
  file(GLOB_RECURSE rippled_headers src/*.h src/*.hpp *.md)
  list(APPEND rippled_headers Builds/CMake/CMakeFuncs.cmake)
  foreach(curdir
    beast/asio

@@ -346,6 +369,7 @@ if (WIN32 OR is_xcode)
  docs/
    Jamfile.v2
    boostbook.dtd
    consensus.qbk
    index.xml
    main.qbk
    quickref.xml

15 README.md
@@ -1,8 +1,11 @@
![Ripple](/images/ripple.png)

**Do you work at a digital asset exchange or wallet provider?**

Please [contact us](mailto:support@ripple.com). We can help guide your integration.

# What is Ripple?
Ripple is a network of computers which use the [Ripple consensus algorithm](https://www.youtube.com/watch?v=pj1QVb1vlC0) to atomically settle and record
transactions on a secure distributed database, the Ripple Consensus Ledger
(RCL). Because of its distributed nature, the RCL offers transaction immutability
without a central operator. The RCL contains a built-in currency exchange and its

@@ -71,12 +74,12 @@ ISC license. See the LICENSE file for more details.
Some of the directories under `src` are external repositories inlined via
git-subtree. See the corresponding README for more details.

## For more information:

* [Ripple Knowledge Center](https://ripple.com/learn/)
* [Ripple Developer Center](https://ripple.com/build/)
* Ripple Whitepapers & Reports
  * [Ripple Consensus Whitepaper](https://ripple.com/files/ripple_consensus_whitepaper.pdf)
  * [Ripple Solutions Guide](https://ripple.com/files/ripple_solutions_guide.pdf)

To learn about how Ripple is transforming global payments visit

@@ -84,7 +87,7 @@ To learn about how Ripple is transforming global payments visit

- - -

Copyright © 2017, Ripple Labs. All rights reserved.

Portions of this document, including but not limited to the Ripple logo,
images and image templates are the property of Ripple Labs and cannot be

248 RELEASENOTES.md
@@ -1,11 +1,253 @@
![Ripple](/images/ripple.png)

This document contains the release notes for `rippled`, the reference server implementation of the Ripple protocol. To learn more about how to build and run a `rippled` server, visit https://ripple.com/build/rippled-setup/

**Do you work at a digital asset exchange or wallet provider?**

Please [contact us](mailto:support@ripple.com). We can help guide your integration.

## Updating `rippled`
If you are using Red Hat Enterprise Linux 7 or CentOS 7, you can [update using `yum`](https://ripple.com/build/rippled-setup/#updating-rippled). For other platforms, please [compile from source](https://wiki.ripple.com/Rippled_build_instructions).

# Releases

## Version 0.80.2

The `rippled` 0.80.2 release introduces changes that improve the scalability of the XRP Ledger.

**New and Updated Features**

This release has no new features.

**Bug Fixes**

- Do not dispatch a transaction received from a peer for processing if it has already been dispatched within the past ten seconds.
- Increase the number of transaction handlers that can be in flight in the job queue and decrease the relative cost for peers to share transaction and ledger data.
- Make better use of resources by adjusting the number of threads we initialize, by reverting commit [#68b8ffd](https://github.com/ripple/rippled/commit/68b8ffdb638d07937f841f7217edeb25efdb3b5d).

## Version 0.80.1

The `rippled` 0.80.1 release provides several enhancements in support of published validator lists and corrects several bugs.

**New and Updated Features**

- Allow including validator manifests in published list ([#2278](https://github.com/ripple/rippled/issues/2278))
- Add validator list RPC commands ([#2242](https://github.com/ripple/rippled/issues/2242))
- Support [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication) when querying published list sites and use Windows system root certificates ([#2275](https://github.com/ripple/rippled/issues/2275))
- Grow TxQ expected size quickly, shrink slowly ([#2235](https://github.com/ripple/rippled/issues/2235))

**Bug Fixes**

- Make consensus quorum unreachable if validator list expires ([#2240](https://github.com/ripple/rippled/issues/2240))
- Properly use ledger hash to break ties when determining working ledger for consensus ([#2257](https://github.com/ripple/rippled/issues/2257))
- Explicitly use std::deque for missing node handler in SHAMap code ([#2252](https://github.com/ripple/rippled/issues/2252))
- Verify validator token manifest matches private key ([#2268](https://github.com/ripple/rippled/issues/2268))

## Version 0.80.0

The `rippled` 0.80.0 release introduces several enhancements that improve the reliability, scalability and security of the XRP Ledger.

Highlights of this release include:

- The `SortedDirectories` amendment, which allows the entries stored within a page to be sorted, and corrects a technical flaw that could, in some edge cases, prevent an empty intermediate page from being deleted.
- Changes to the UNL and quorum rules
  + Use a fixed size UNL if the total listed validators are below threshold
  + Ensure a quorum of 0 cannot be configured
  + Set a quorum to provide Byzantine fault tolerance until a threshold of total validators is exceeded, at which time the quorum is 80%

**New and Updated Features**

- Improve directory insertion and deletion ([#2165](https://github.com/ripple/rippled/issues/2165))
- Move consensus thread safety logic from the generic implementation in Consensus into the RCL adapted version RCLConsensus ([#2106](https://github.com/ripple/rippled/issues/2106))
- Refactor Validations class into a generic version that can be adapted ([#2084](https://github.com/ripple/rippled/issues/2084))
- Make minimum quorum Byzantine fault tolerant ([#2093](https://github.com/ripple/rippled/issues/2093))
- Make amendment blocked state thread-safe and simplify a constructor ([#2207](https://github.com/ripple/rippled/issues/2207))
- Use ledger hash to break ties ([#2169](https://github.com/ripple/rippled/issues/2169))
- Refactor RangeSet ([#2113](https://github.com/ripple/rippled/issues/2113))

**Bug Fixes**

- Fix an issue where `setAmendmentBlocked` is only called when processing the `EnableAmendment` transaction for the amendment ([#2137](https://github.com/ripple/rippled/issues/2137))
- Track escrow in recipient's owner directory ([#2212](https://github.com/ripple/rippled/issues/2212))

## Version 0.70.2

The `rippled` 0.70.2 release corrects an emergent behavior which causes large numbers of transactions to get
stuck in different nodes' open ledgers without being passed on to validators, resulting in a spike in the open
ledger fee on those nodes.

**New and Updated Features**

This release has no new features.

**Bug Fixes**

- Recent fee rises and TxQ issues ([#2215](https://github.com/ripple/rippled/issues/2215))

## Version 0.70.1

The `rippled` 0.70.1 release corrects a technical flaw in the newly refactored consensus code that could cause a node to get stuck in consensus due to stale votes from a
peer, and allows compiling `rippled` under the 1.1.x releases of OpenSSL.

**New and Updated Features**

This release has no new features.

**Bug Fixes**

- Allow compiling against OpenSSL 1.1.0 ([#2151](https://github.com/ripple/rippled/pull/2151))
- Log invariant check messages at "fatal" level ([#2154](https://github.com/ripple/rippled/pull/2154))
- Fix the consensus code to update all disputed transactions after a node changes a position ([#2156](https://github.com/ripple/rippled/pull/2156))

## Version 0.70.0

The `rippled` 0.70.0 release introduces several enhancements that improve the reliability, scalability and security of the network.

Highlights of this release include:

- The `FlowCross` amendment, which streamlines offer crossing and autobridging logic by leveraging the new “Flow” payment engine.
- The `EnforceInvariants` amendment, which can safeguard the integrity of the XRP Ledger by introducing code that executes after every transaction and ensures that the execution did not violate key protocol rules.
- `fix1373`, which addresses an issue that would cause payments with certain path specifications to not be properly parsed.

**New and Updated Features**

- Implement and test invariant checks for transactions (#2054)
- TxQ: Functionality to dump all queued transactions (#2020)
- Consensus refactor for simulation/cleanup (#2040)
- Payment flow code should support offer crossing (#1884)
- Make `Config` init extensible via lambda (#1993)
- Improve Consensus Documentation (#2064)
- Refactor Dependencies & Unit Test Consensus (#1941)
- `feature` RPC test (#1988)
- Add unit tests for handlers/TxHistory.cpp (#2062)
- Add unit tests for handlers/AccountCurrenciesHandler.cpp (#2085)
- Add unit test for handlers/Peers.cpp (#2060)
- Improve logging for "Transaction affects no accounts" warning (#2043)
- Increase logging in PeerImpl fail (#2043)
- Allow filtering of ledger objects by type in RPC (#2066)

**Bug Fixes**

- Fix displayed warning when generating brain wallets (#2121)
- CMake build does not append '+DEBUG' to the version info for non-unity builds
- Crossing tiny offers can misbehave on RCL
- `asfRequireAuth` flag not always obeyed (#2092)
- Strand creation is incorrectly accepting invalid paths
- JobQueue occasionally crashes on shutdown (#2025)
- Improve pseudo-transaction handling (#2104)

## Version 0.60.3

The `rippled` 0.60.3 release helps to increase the stability of the network under heavy load.

**New and Updated Features**

This release has no new features.

**Bug Fixes**

- Server overlay improvements ([#2011](https://github.com/ripple/rippled/pull/2011))

## Version 0.60.2

The `rippled` 0.60.2 release further strengthens handling of cases associated with a previously patched exploit, in which NoRipple flags were being bypassed by using offers.

**New and Updated Features**

This release has no new features.

**Bug Fixes**

- Prevent the ability to bypass the `NoRipple` flag using offers ([#7cd4d78](https://github.com/ripple/rippled/commit/4ff40d4954dfaa237c8b708c2126cb39566776da))

## Version 0.60.1

The `rippled` 0.60.1 release corrects a technical flaw that resulted from using 32-bit space identifiers instead of the protocol-defined 16-bit values for Escrow and Payment Channel ledger entries. rippled version 0.60.1 also fixes a problem where the WebSocket timeout timer would not be cancelled when certain errors occurred during subscription streams. Ripple requires upgrading to rippled version 0.60.1 immediately.

**New and Updated Features**

This release has no new features.

**Bug Fixes**

- Correct calculation of Escrow and Payment Channel indices.
- Fix WebSocket timeout timer issues.

## Version 0.60.0

The `rippled` 0.60.0 release introduces several enhancements that improve the reliability and scalability of the Ripple Consensus Ledger (RCL), including features that add ledger interoperability by improving Interledger Protocol compatibility. Ripple recommends that all server operators upgrade to version 0.60.0 by Thursday, 2017-03-30, for service continuity.

Highlights of this release include:

- `Escrow` (previously called `SusPay`) which permits users to cryptographically escrow XRP on RCL with an expiration date, and optionally a hashlock crypto-condition. Ripple expects Escrow to be enabled via an Amendment named [`Escrow`](https://ripple.com/build/amendments/#escrow) on Thursday, 2017-03-30. See below for details.
- Dynamic UNL Lite, which allows `rippled` to automatically adjust which validators it trusts based on recommended lists from trusted publishers.

**New and Updated Features**

- Add `Escrow` support (#2039)
- Dynamize trusted validator list and quorum (#1842)
- Simplify fee handling during transaction submission (#1992)
- Publish server stream when fee changes (#2016)
- Replace manifest with validator token (#1975)
- Add validator key revocations (#2019)
- Add `SecretKey` comparison operator (#2004)
- Reduce `LEDGER_MIN_CONSENSUS` (#2013)
- Update libsecp256k1 and Beast B30 (#1983)
- Make `Config` extensible via lambda (#1993)
- WebSocket permessage-deflate integration (#1995)
- Do not close socket on a foreign thread (#2014)
- Update build scripts to support latest boost and ubuntu distros (#1997)
- Handle protoc targets in scons ninja build (#2022)
- Specify syntax version for ripple.proto file (#2007)
- Eliminate protocol header dependency (#1962)
- Use gnu gold or clang lld linkers if available (#2031)
- Add tests for `lookupLedger` (#1989)
- Add unit test for `get_counts` RPC method (#2011)
- Add test for `transaction_entry` request (#2017)
- Unit tests of RPC "sign" (#2010)
- Add failure only unit test reporter (#2018)

**Bug Fixes**

- Enforce rippling constraints during payments (#2049)
- Fix limiting step re-execute bug (#1936)
- Make "wss" work the same as "wss2" (#2033)
- Config test uses unique directories for each test (#1984)
- Check for malformed public key on payment channel (#2027)
- Send a websocket ping before timing out in server (#2035)

## Version 0.50.3

The `rippled` 0.50.3 release corrects a reported exploit that would allow a combination of trust lines and order books in a payment path to bypass the blocking effect of the [`NoRipple`](https://ripple.com/build/understanding-the-noripple-flag/) flag. Ripple recommends that all server operators immediately upgrade to version 0.50.3.

**New and Updated Features**

This release has no new features.

**Bug Fixes**

- Correct a reported exploit that would allow a combination of trust lines and order books in a payment path to bypass the blocking effect of the “NoRipple” flag.

## Version 0.50.2

The `rippled` 0.50.2 release adjusts the default TLS cipher list and corrects a flaw that would not allow an SSL handshake to properly complete if the port was configured using the `wss` keyword. Ripple recommends upgrading to 0.50.2 only if server operators are running rippled servers that accept client connections over TLS.

**New and Updated Features**

This release has no new features.

**Bug Fixes**

- Adjust the default cipher list and correct a flaw that would not allow an SSL handshake to properly complete if the port was configured using the `wss` keyword (#1985)

## Version 0.50.0

The `rippled` 0.50.0 release includes TickSize, which allows gateways to set a "tick size" for assets they issue to help promote faster price discovery and deeper liquidity, as well as reduce transaction spam and ledger churn on RCL. Ripple expects TickSize to be enabled via an Amendment called TickSize on Tuesday, 2017-02-21.

60 SConstruct
@@ -392,8 +392,6 @@ def config_base(env):
            env.Prepend(LIBPATH=['%s/lib' % openssl])
        except:
            pass

    # handle command-line arguments
    profile_jemalloc = ARGUMENTS.get('profile-jemalloc')

@@ -424,7 +422,7 @@ def get_libs(lib, static):
    try:
        cmd = ['pkg-config', '--static', '--libs', lib]
        libs = subprocess.check_output(cmd,
            stderr=subprocess.STDOUT).strip().decode("utf-8")
        all_libs = [l[2:] for l in libs.split() if l.startswith('-l')]
        if not static:
            return ([], all_libs)
|
||||
@@ -554,6 +552,13 @@ def config_env(toolchain, variant, env):
|
||||
if toolchain == 'clang':
|
||||
env.Append(CCFLAGS=['-Wno-redeclared-class-member'])
|
||||
env.Append(CPPDEFINES=['BOOST_ASIO_HAS_STD_ARRAY'])
|
||||
try:
|
||||
ldd_ver = subprocess.check_output([env['CLANG_CXX'], '-fuse-ld=lld', '-Wl,--version'],
|
||||
stderr=subprocess.STDOUT).strip()
|
||||
# have lld
|
||||
env.Append(LINKFLAGS=['-fuse-ld=lld'])
|
||||
except:
|
||||
pass
env.Append(CXXFLAGS=[
|
||||
'-frtti',
|
||||
@@ -573,7 +578,6 @@ def config_env(toolchain, variant, env):
|
||||
env.Append(CCFLAGS=[
|
||||
'-Wno-deprecated',
|
||||
'-Wno-deprecated-declarations',
|
||||
'-Wno-unused-variable',
|
||||
'-Wno-unused-function',
|
||||
])
|
||||
else:
|
||||
@@ -586,11 +590,17 @@ def config_env(toolchain, variant, env):
|
||||
'-D_GLIBCXX_USE_CXX11_ABI' : 0
|
||||
})
|
||||
if toolchain == 'gcc':
env.Append(CCFLAGS=[
|
||||
'-Wno-unused-but-set-variable',
|
||||
'-Wno-deprecated',
|
||||
])
|
||||
try:
|
||||
ldd_ver = subprocess.check_output([env['GNU_CXX'], '-fuse-ld=gold', '-Wl,--version'],
|
||||
stderr=subprocess.STDOUT).strip()
|
||||
# have ld.gold
|
||||
env.Append(LINKFLAGS=['-fuse-ld=gold'])
|
||||
except:
|
||||
pass
boost_libs = [
|
||||
# resist the temptation to alphabetize these. coroutine
|
||||
@@ -744,6 +754,7 @@ def config_env(toolchain, variant, env):
|
||||
'uuid.lib',
|
||||
'odbc32.lib',
|
||||
'odbccp32.lib',
|
||||
'crypt32.lib'
|
||||
])
|
||||
env.Append(LINKFLAGS=[
|
||||
'/DEBUG',
|
||||
@@ -939,6 +950,7 @@ def get_classic_sources(toolchain):
|
||||
append_sources(result, *list_sources('src/ripple/basics', '.cpp'))
|
||||
append_sources(result, *list_sources('src/ripple/conditions', '.cpp'))
|
||||
append_sources(result, *list_sources('src/ripple/crypto', '.cpp'))
|
||||
append_sources(result, *list_sources('src/ripple/consensus', '.cpp'))
|
||||
append_sources(result, *list_sources('src/ripple/json', '.cpp'))
|
||||
append_sources(result, *list_sources('src/ripple/ledger', '.cpp'))
|
||||
append_sources(result, *list_sources('src/ripple/legacy', '.cpp'))
|
||||
@@ -953,6 +965,7 @@ def get_classic_sources(toolchain):
|
||||
append_sources(result, *list_sources('src/test/basics', '.cpp'))
|
||||
append_sources(result, *list_sources('src/test/beast', '.cpp'))
|
||||
append_sources(result, *list_sources('src/test/conditions', '.cpp'))
|
||||
append_sources(result, *list_sources('src/test/consensus', '.cpp'))
|
||||
append_sources(result, *list_sources('src/test/core', '.cpp'))
|
||||
append_sources(result, *list_sources('src/test/json', '.cpp'))
|
||||
append_sources(result, *list_sources('src/test/ledger', '.cpp'))
|
||||
@@ -964,8 +977,9 @@ def get_classic_sources(toolchain):
|
||||
append_sources(result, *list_sources('src/test/server', '.cpp'))
|
||||
append_sources(result, *list_sources('src/test/shamap', '.cpp'))
|
||||
append_sources(result, *list_sources('src/test/jtx', '.cpp'))
|
||||
append_sources(result, *list_sources('src/test/csf', '.cpp'))
if use_shp(toolchain):
|
||||
cc_flags = {'CCFLAGS': ['--system-header-prefix=rocksdb2']}
|
||||
else:
|
||||
@@ -993,27 +1007,37 @@ def get_unity_sources(toolchain):
|
||||
'src/ripple/beast/unity/beast_insight_unity.cpp',
|
||||
'src/ripple/beast/unity/beast_net_unity.cpp',
|
||||
'src/ripple/beast/unity/beast_utility_unity.cpp',
|
||||
'src/ripple/unity/app_consensus.cpp',
|
||||
'src/ripple/unity/app_ledger.cpp',
|
||||
'src/ripple/unity/app_main.cpp',
|
||||
'src/ripple/unity/app_ledger_impl.cpp',
|
||||
'src/ripple/unity/app_main1.cpp',
|
||||
'src/ripple/unity/app_main2.cpp',
|
||||
'src/ripple/unity/app_misc.cpp',
|
||||
'src/ripple/unity/app_misc_impl.cpp',
|
||||
'src/ripple/unity/app_paths.cpp',
|
||||
'src/ripple/unity/app_tx.cpp',
|
||||
'src/ripple/unity/conditions.cpp',
|
||||
'src/ripple/unity/consensus.cpp',
|
||||
'src/ripple/unity/core.cpp',
|
||||
'src/ripple/unity/basics.cpp',
|
||||
'src/ripple/unity/crypto.cpp',
|
||||
'src/ripple/unity/ledger.cpp',
|
||||
'src/ripple/unity/net.cpp',
|
||||
'src/ripple/unity/overlay.cpp',
|
||||
'src/ripple/unity/overlay1.cpp',
|
||||
'src/ripple/unity/overlay2.cpp',
|
||||
'src/ripple/unity/peerfinder.cpp',
|
||||
'src/ripple/unity/json.cpp',
|
||||
'src/ripple/unity/protocol.cpp',
|
||||
'src/ripple/unity/rpcx.cpp',
|
||||
'src/ripple/unity/rpcx1.cpp',
|
||||
'src/ripple/unity/rpcx2.cpp',
|
||||
'src/ripple/unity/shamap.cpp',
|
||||
'src/ripple/unity/server.cpp',
|
||||
'src/test/unity/app_test_unity.cpp',
|
||||
'src/test/unity/app_test_unity1.cpp',
|
||||
'src/test/unity/app_test_unity2.cpp',
|
||||
'src/test/unity/basics_test_unity.cpp',
|
||||
'src/test/unity/beast_test_unity.cpp',
|
||||
'src/test/unity/beast_test_unity1.cpp',
|
||||
'src/test/unity/beast_test_unity2.cpp',
|
||||
'src/test/unity/consensus_test_unity.cpp',
|
||||
'src/test/unity/core_test_unity.cpp',
|
||||
'src/test/unity/conditions_test_unity.cpp',
|
||||
'src/test/unity/json_test_unity.cpp',
|
||||
@@ -1024,8 +1048,11 @@ def get_unity_sources(toolchain):
|
||||
'src/test/unity/resource_test_unity.cpp',
|
||||
'src/test/unity/rpc_test_unity.cpp',
|
||||
'src/test/unity/server_test_unity.cpp',
|
||||
'src/test/unity/server_status_test_unity.cpp',
|
||||
'src/test/unity/shamap_test_unity.cpp',
|
||||
'src/test/unity/support_unity.cpp'
|
||||
'src/test/unity/jtx_unity1.cpp',
|
||||
'src/test/unity/jtx_unity2.cpp',
|
||||
'src/test/unity/csf_unity.cpp'
|
||||
)
if use_shp(toolchain):
|
||||
@@ -1135,7 +1162,7 @@ for tu_style in ['classic', 'unity']:
|
||||
os.path.join(variant_dir, 'proto') :
|
||||
os.path.join (build_dir, 'proto'),
|
||||
}
|
||||
for dest, source in variant_dirs.iteritems():
|
||||
for dest, source in variant_dirs.items():
|
||||
env.VariantDir(dest, source, duplicate=0)
object_builder = ObjectBuilder(env, variant_dirs)
|
||||
@@ -1234,10 +1261,11 @@ for tu_style in ['classic', 'unity']:
|
||||
if should_build_ninja(tu_style, toolchain, variant):
|
||||
print('Generating ninja: {}:{}:{}'.format(tu_style, toolchain, variant))
|
||||
scons_to_ninja.GenerateNinjaFile(
|
||||
[object_builder.env] + object_builder.child_envs,
|
||||
# add base env last to ensure protoc targets are added
|
||||
[object_builder.env] + object_builder.child_envs + [base],
|
||||
dest_file='build.ninja')
for key, value in aliases.iteritems():
|
||||
for key, value in aliases.items():
|
||||
env.Alias(key, value)
vcxproj = base.VSProject(
|
||||
@@ -1278,6 +1306,6 @@ def do_count(target, source, env):
|
||||
lines = 0
|
||||
for f in testfiles:
|
||||
lines = lines + sum(1 for line in open(f))
|
||||
print "Total unit test lines: %d" % lines
|
||||
print ("Total unit test lines: %d" % lines)
PhonyTargets(env, count = do_count)
@@ -127,7 +127,7 @@ test_script:
|
||||
- ps: |
|
||||
& {
|
||||
# Run the rippled unit tests
|
||||
& $exe --unittest
|
||||
& $exe --unittest --quiet --unittest-log
|
||||
# https://connect.microsoft.com/PowerShell/feedback/details/751703/option-to-stop-script-if-command-line-exe-fails
|
||||
if ($LastExitCode -ne 0) { throw "Unit tests failed" }
|
||||
}
@@ -10,6 +10,12 @@ echo "using TARGET: $TARGET"
# Ensure APP defaults to rippled if it's not set.
|
||||
: ${APP:=rippled}
JOBS=${NUM_PROCESSORS:-2}
|
||||
if [[ ${TARGET} == *.nounity ]]; then
|
||||
JOBS=$((2*${JOBS}))
|
||||
fi
if [[ ${BUILD:-scons} == "cmake" ]]; then
|
||||
echo "cmake building ${APP}"
|
||||
CMAKE_TARGET=$CC.$TARGET
|
||||
@@ -19,7 +25,7 @@ if [[ ${BUILD:-scons} == "cmake" ]]; then
|
||||
mkdir -p "build/${CMAKE_TARGET}"
|
||||
pushd "build/${CMAKE_TARGET}"
|
||||
cmake ../.. -Dtarget=$CMAKE_TARGET
|
||||
cmake --build . -- -j${NUM_PROCESSORS:-2}
|
||||
cmake --build . -- -j${JOBS}
|
||||
popd
|
||||
export APP_PATH="$PWD/build/${CMAKE_TARGET}/${APP}"
|
||||
echo "using APP_PATH: $APP_PATH"
|
||||
@@ -33,7 +39,7 @@ else
|
||||
# $CC will be either `clang` or `gcc`
|
||||
# http://docs.travis-ci.com/user/migrating-from-legacy/?utm_source=legacy-notice&utm_medium=banner&utm_campaign=legacy-upgrade
|
||||
# indicates that 2 cores are available to containers.
|
||||
scons -j${NUM_PROCESSORS:-2} $CC.$TARGET
|
||||
scons -j${JOBS} $CC.$TARGET
|
||||
fi
|
||||
# We can be sure we're using the build/$CC.$TARGET variant
|
||||
# (-f so never err)
|
||||
@@ -43,7 +49,7 @@ rm -f build/${APP}
|
||||
ldd $APP_PATH
if [[ ${APP} == "rippled" ]]; then
|
||||
export APP_ARGS="--unittest"
|
||||
export APP_ARGS="--unittest --quiet --unittest-log"
|
||||
# Only report on src/ripple files
|
||||
export LCOV_FILES="*/src/ripple/*"
|
||||
# Nothing to explicitly exclude
|
||||
@@ -62,15 +68,19 @@ if [[ $TARGET == "coverage" ]]; then
|
||||
lcov --no-external -c -i -d . -o baseline.info
|
||||
fi
# Execute unit tests under gdb, printing a call stack
|
||||
# if we get a crash.
|
||||
gdb -return-child-result -quiet -batch \
|
||||
-ex "set env MALLOC_CHECK_=3" \
|
||||
-ex "set print thread-events off" \
|
||||
-ex run \
|
||||
-ex "thread apply all backtrace full" \
|
||||
-ex "quit" \
|
||||
--args $APP_PATH --unittest
|
||||
if [[ ${TARGET} == debug ]]; then
|
||||
# Execute unit tests under gdb, printing a call stack
|
||||
# if we get a crash.
|
||||
$GDB_ROOT/bin/gdb -return-child-result -quiet -batch \
|
||||
-ex "set env MALLOC_CHECK_=3" \
|
||||
-ex "set print thread-events off" \
|
||||
-ex run \
|
||||
-ex "thread apply all backtrace full" \
|
||||
-ex "quit" \
|
||||
--args $APP_PATH $APP_ARGS
|
||||
else
|
||||
$APP_PATH $APP_ARGS
|
||||
fi
if [[ $TARGET == "coverage" ]]; then
|
||||
# Create test coverage data file
|
||||
@@ -87,6 +97,8 @@ if [[ $TARGET == "coverage" ]]; then
# Push the results (lcov.info) to codecov
|
||||
codecov -X gcov # don't even try and look for .gcov files ;)
find . -name "*.gcda" | xargs rm -f
|
||||
fi
@@ -8,7 +8,7 @@ TWD=$( cd ${TWD:-${1:-${PWD:-$( pwd )}}}; pwd )
|
||||
echo "Target path is: $TWD"
|
||||
# Override gcc version to $GCC_VER.
|
||||
# Put an appropriate symlink at the front of the path.
|
||||
mkdir -v $HOME/bin
|
||||
mkdir -pv $HOME/bin
|
||||
for g in gcc g++ gcov gcc-ar gcc-nm gcc-ranlib
|
||||
do
|
||||
test -x $( type -p ${g}-$GCC_VER )
|
||||
@@ -53,6 +53,7 @@ if [ -x $HOME/bin/g++ ]; then
|
||||
$HOME/bin/g++ -v
|
||||
fi
pip install --user requests==2.13.0
|
||||
pip install --user https://github.com/codecov/codecov-python/archive/master.zip
bash bin/sh/install-boost.sh
|
||||
@@ -65,3 +66,17 @@ tar xfvz lcov-1.12.tar.gz -C $HOME
|
||||
# Set install path
|
||||
mkdir -p $LCOV_ROOT
|
||||
cd $HOME/lcov-1.12 && make install PREFIX=$LCOV_ROOT
if [[ ${TARGET} == debug && ! -x ${GDB_ROOT}/bin/gdb ]]; then
|
||||
pushd $HOME
|
||||
#install gdb
|
||||
wget https://ftp.gnu.org/gnu/gdb/gdb-8.0.tar.xz
|
||||
tar xf gdb-8.0.tar.xz
|
||||
pushd gdb-8.0
|
||||
./configure CFLAGS='-w -O2' CXXFLAGS='-std=gnu++11 -g -O2 -w' --prefix=$GDB_ROOT
|
||||
make -j2
|
||||
make install
|
||||
popd
|
||||
popd
|
||||
fi
bin/getInfoRippled.sh (2 lines changed; mode changed from normal file to executable)
@@ -31,6 +31,8 @@ then
|
||||
BEGIN {skip=0; db_path="";print > OUT_FILE}
|
||||
/^\[validation_seed\]/ {skip=1; next}
|
||||
/^\[node_seed\]/ {skip=1; next}
|
||||
/^\[validation_manifest\]/ {skip=1; next}
|
||||
/^\[validator_token\]/ {skip=1; next}
|
||||
/^\[.*\]/ {skip=0}
|
||||
skip==1 {next}
|
||||
save==1 {save=0;db_path=$0}
@@ -1 +0,0 @@
|
||||
python/Manifest.py
|
||||
@@ -1,24 +0,0 @@
|
||||
#!/usr/bin/env python
from __future__ import absolute_import, division, print_function, unicode_literals
import sys
|
||||
import traceback
from ripple.ledger import Server
|
||||
from ripple.ledger.commands import Cache, Info, Print
|
||||
from ripple.ledger.Args import ARGS
|
||||
from ripple.util import Log
|
||||
from ripple.util.CommandList import CommandList
|
|
|
||||
try:
|
||||
server = Server.Server()
|
||||
args = list(ARGS.command)
|
||||
_COMMANDS.run_safe(args.pop(0), server, *args)
|
||||
except Exception as e:
|
||||
if ARGS.verbose:
|
||||
print(traceback.format_exc(), sys.stderr)
|
||||
Log.error(e)
|
||||
@@ -1,7 +0,0 @@
|
||||
#!/usr/bin/env python
import sys
|
||||
from ripple.util import Sign
result = Sign.run_command(sys.argv[1:])
|
||||
sys.exit(0 if result else -1)
|
||||
@@ -1,15 +0,0 @@
|
||||
Unit Tests
|
||||
==========
To run the Python unit tests, execute:
python -m unittest discover
from this directory.
To run Python unit tests from a particular file (such as
|
||||
`ripple/util/test_Sign.py`), execute:
python -m unittest ripple.util.test_Sign
Add `-v` to run tests in verbose mode.
|
||||
@@ -1,251 +0,0 @@
|
||||
########################## LICENCE ###############################
# Copyright (c) 2005-2012, Michele Simionato
|
||||
# All rights reserved.
# Redistribution and use in source and binary forms, with or without
|
||||
# modification, are permitted provided that the following conditions are
|
||||
# met:
# Redistributions of source code must retain the above copyright
|
||||
# notice, this list of conditions and the following disclaimer.
|
||||
# Redistributions in bytecode form must reproduce the above copyright
|
||||
# notice, this list of conditions and the following disclaimer in
|
||||
# the documentation and/or other materials provided with the
|
||||
# distribution.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
# HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
|
||||
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
|
||||
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
|
||||
# OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
|
||||
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
|
||||
# TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
|
||||
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
|
||||
# DAMAGE.
"""
|
||||
Decorator module, see http://pypi.python.org/pypi/decorator
|
||||
for the documentation.
|
||||
"""
__version__ = '3.4.0'
__all__ = ["decorator", "FunctionMaker", "contextmanager"]
import sys, re, inspect
|
||||
if sys.version >= '3':
|
||||
from inspect import getfullargspec
|
||||
def get_init(cls):
|
||||
return cls.__init__
|
||||
else:
|
||||
class getfullargspec(object):
|
||||
"A quick and dirty replacement for getfullargspec for Python 2.X"
|
||||
def __init__(self, f):
|
||||
self.args, self.varargs, self.varkw, self.defaults = \
|
||||
inspect.getargspec(f)
|
||||
self.kwonlyargs = []
|
||||
self.kwonlydefaults = None
|
||||
def __iter__(self):
|
||||
yield self.args
|
||||
yield self.varargs
|
||||
yield self.varkw
|
||||
yield self.defaults
|
||||
def get_init(cls):
|
||||
return cls.__init__.im_func
DEF = re.compile('\s*def\s*([_\w][_\w\d]*)\s*\(')
# basic functionality
|
||||
class FunctionMaker(object):
|
||||
"""
|
||||
An object with the ability to create functions with a given signature.
|
||||
It has attributes name, doc, module, signature, defaults, dict and
|
||||
methods update and make.
|
||||
"""
|
||||
def __init__(self, func=None, name=None, signature=None,
|
||||
defaults=None, doc=None, module=None, funcdict=None):
|
||||
self.shortsignature = signature
|
||||
if func:
|
||||
# func can be a class or a callable, but not an instance method
|
||||
self.name = func.__name__
|
||||
if self.name == '<lambda>': # small hack for lambda functions
|
||||
self.name = '_lambda_'
|
||||
self.doc = func.__doc__
|
||||
self.module = func.__module__
|
||||
if inspect.isfunction(func):
|
||||
argspec = getfullargspec(func)
|
||||
self.annotations = getattr(func, '__annotations__', {})
|
||||
for a in ('args', 'varargs', 'varkw', 'defaults', 'kwonlyargs',
|
||||
'kwonlydefaults'):
|
||||
setattr(self, a, getattr(argspec, a))
|
||||
for i, arg in enumerate(self.args):
|
||||
setattr(self, 'arg%d' % i, arg)
|
||||
if sys.version < '3': # easy way
|
||||
self.shortsignature = self.signature = \
|
||||
inspect.formatargspec(
|
||||
formatvalue=lambda val: "", *argspec)[1:-1]
|
||||
else: # Python 3 way
|
||||
allargs = list(self.args)
|
||||
allshortargs = list(self.args)
|
||||
if self.varargs:
|
||||
allargs.append('*' + self.varargs)
|
||||
allshortargs.append('*' + self.varargs)
|
||||
elif self.kwonlyargs:
|
||||
allargs.append('*') # single star syntax
|
||||
for a in self.kwonlyargs:
|
||||
allargs.append('%s=None' % a)
|
||||
allshortargs.append('%s=%s' % (a, a))
|
||||
if self.varkw:
|
||||
allargs.append('**' + self.varkw)
|
||||
allshortargs.append('**' + self.varkw)
|
||||
self.signature = ', '.join(allargs)
|
||||
self.shortsignature = ', '.join(allshortargs)
|
||||
self.dict = func.__dict__.copy()
|
||||
# func=None happens when decorating a caller
|
||||
if name:
|
||||
self.name = name
|
||||
if signature is not None:
|
||||
self.signature = signature
|
||||
if defaults:
|
||||
self.defaults = defaults
|
||||
if doc:
|
||||
self.doc = doc
|
||||
if module:
|
||||
self.module = module
|
||||
if funcdict:
|
||||
self.dict = funcdict
|
||||
# check existence required attributes
|
||||
assert hasattr(self, 'name')
|
||||
if not hasattr(self, 'signature'):
|
||||
raise TypeError('You are decorating a non function: %s' % func)
def update(self, func, **kw):
|
||||
"Update the signature of func with the data in self"
|
||||
func.__name__ = self.name
|
||||
func.__doc__ = getattr(self, 'doc', None)
|
||||
func.__dict__ = getattr(self, 'dict', {})
|
||||
func.func_defaults = getattr(self, 'defaults', ())
|
||||
func.__kwdefaults__ = getattr(self, 'kwonlydefaults', None)
|
||||
func.__annotations__ = getattr(self, 'annotations', None)
|
||||
callermodule = sys._getframe(3).f_globals.get('__name__', '?')
|
||||
func.__module__ = getattr(self, 'module', callermodule)
|
||||
func.__dict__.update(kw)
def make(self, src_templ, evaldict=None, addsource=False, **attrs):
|
||||
"Make a new function from a given template and update the signature"
|
||||
src = src_templ % vars(self) # expand name and signature
|
||||
evaldict = evaldict or {}
|
||||
mo = DEF.match(src)
|
||||
if mo is None:
|
||||
raise SyntaxError('not a valid function template\n%s' % src)
|
||||
name = mo.group(1) # extract the function name
|
||||
names = set([name] + [arg.strip(' *') for arg in
|
||||
self.shortsignature.split(',')])
|
||||
for n in names:
|
||||
if n in ('_func_', '_call_'):
|
||||
raise NameError('%s is overridden in\n%s' % (n, src))
|
||||
if not src.endswith('\n'): # add a newline just for safety
|
||||
src += '\n' # this is needed in old versions of Python
|
||||
try:
|
||||
code = compile(src, '<string>', 'single')
|
||||
# print >> sys.stderr, 'Compiling %s' % src
|
||||
exec code in evaldict
|
||||
except:
|
||||
print >> sys.stderr, 'Error in generated code:'
|
||||
print >> sys.stderr, src
|
||||
raise
|
||||
func = evaldict[name]
|
||||
if addsource:
|
||||
attrs['__source__'] = src
|
||||
self.update(func, **attrs)
|
||||
return func
@classmethod
|
||||
def create(cls, obj, body, evaldict, defaults=None,
|
||||
doc=None, module=None, addsource=True, **attrs):
|
||||
"""
|
||||
Create a function from the strings name, signature and body.
|
||||
evaldict is the evaluation dictionary. If addsource is true an attribute
|
||||
__source__ is added to the result. The attributes attrs are added,
|
||||
if any.
|
||||
"""
|
||||
if isinstance(obj, str): # "name(signature)"
|
||||
name, rest = obj.strip().split('(', 1)
|
||||
signature = rest[:-1] #strip a right parens
|
||||
func = None
|
||||
else: # a function
|
||||
name = None
|
||||
signature = None
|
||||
func = obj
|
||||
self = cls(func, name, signature, defaults, doc, module)
|
||||
ibody = '\n'.join(' ' + line for line in body.splitlines())
|
||||
return self.make('def %(name)s(%(signature)s):\n' + ibody,
|
||||
evaldict, addsource, **attrs)
def decorator(caller, func=None):
|
||||
"""
|
||||
decorator(caller) converts a caller function into a decorator;
|
||||
decorator(caller, func) decorates a function using a caller.
|
||||
"""
|
||||
if func is not None: # returns a decorated function
|
||||
evaldict = func.func_globals.copy()
|
||||
evaldict['_call_'] = caller
|
||||
evaldict['_func_'] = func
|
||||
return FunctionMaker.create(
|
||||
func, "return _call_(_func_, %(shortsignature)s)",
|
||||
evaldict, undecorated=func, __wrapped__=func)
|
||||
else: # returns a decorator
|
||||
if inspect.isclass(caller):
|
||||
name = caller.__name__.lower()
|
||||
callerfunc = get_init(caller)
|
||||
doc = 'decorator(%s) converts functions/generators into ' \
|
||||
'factories of %s objects' % (caller.__name__, caller.__name__)
|
||||
fun = getfullargspec(callerfunc).args[1] # second arg
|
||||
elif inspect.isfunction(caller):
|
||||
name = '_lambda_' if caller.__name__ == '<lambda>' \
|
||||
else caller.__name__
|
||||
callerfunc = caller
|
||||
doc = caller.__doc__
|
||||
fun = getfullargspec(callerfunc).args[0] # first arg
|
||||
else: # assume caller is an object with a __call__ method
|
||||
name = caller.__class__.__name__.lower()
|
||||
callerfunc = caller.__call__.im_func
|
||||
doc = caller.__call__.__doc__
|
||||
fun = getfullargspec(callerfunc).args[1] # second arg
|
||||
evaldict = callerfunc.func_globals.copy()
|
||||
evaldict['_call_'] = caller
|
||||
evaldict['decorator'] = decorator
|
||||
return FunctionMaker.create(
|
||||
'%s(%s)' % (name, fun),
|
||||
'return decorator(_call_, %s)' % fun,
|
||||
evaldict, undecorated=caller, __wrapped__=caller,
|
||||
doc=doc, module=caller.__module__)
######################### contextmanager ########################
def __call__(self, func):
|
||||
'Context manager decorator'
|
||||
return FunctionMaker.create(
|
||||
func, "with _self_: return _func_(%(shortsignature)s)",
|
||||
dict(_self_=self, _func_=func), __wrapped__=func)
|
||||
|
||||
try: # Python >= 3.2
|
||||
|
||||
from contextlib import _GeneratorContextManager
|
||||
ContextManager = type(
|
||||
'ContextManager', (_GeneratorContextManager,), dict(__call__=__call__))
|
||||
|
||||
except ImportError: # Python >= 2.5
|
||||
|
||||
from contextlib import GeneratorContextManager
|
||||
def __init__(self, f, *a, **k):
|
||||
return GeneratorContextManager.__init__(self, f(*a, **k))
|
||||
ContextManager = type(
|
||||
'ContextManager', (GeneratorContextManager,),
|
||||
dict(__call__=__call__, __init__=__init__))
|
||||
|
||||
contextmanager = decorator(ContextManager)
|
||||
@@ -1,14 +0,0 @@
|
||||
__all__ = ["curves", "der", "ecdsa", "ellipticcurve", "keys", "numbertheory",
|
||||
"test_pyecdsa", "util", "six"]
|
||||
from .keys import SigningKey, VerifyingKey, BadSignatureError, BadDigestError
|
||||
from .curves import NIST192p, NIST224p, NIST256p, NIST384p, NIST521p, SECP256k1
|
||||
|
||||
_hush_pyflakes = [SigningKey, VerifyingKey, BadSignatureError, BadDigestError,
|
||||
NIST192p, NIST224p, NIST256p, NIST384p, NIST521p, SECP256k1]
|
||||
del _hush_pyflakes
|
||||
|
||||
# This code comes from http://github.com/warner/python-ecdsa
|
||||
|
||||
from ._version import get_versions
|
||||
__version__ = get_versions()['version']
|
||||
del get_versions
|
||||
@@ -1,183 +0,0 @@
|
||||
|
||||
# This file helps to compute a version number in source trees obtained from
|
||||
# git-archive tarball (such as those provided by githubs download-from-tag
|
||||
# feature). Distribution tarballs (built by setup.py sdist) and build
|
||||
# directories (produced by setup.py build) will contain a much shorter file
|
||||
# that just contains the computed version number.
|
||||
|
||||
# This file is released into the public domain. Generated by
|
||||
# versioneer-0.12 (https://github.com/warner/python-versioneer)
|
||||
|
||||
# these strings will be replaced by git during git-archive
|
||||
git_refnames = " (HEAD, master)"
|
||||
git_full = "e7a6daff51221b8edd888cff404596ef90432869"
|
||||
|
||||
# these strings are filled in when 'setup.py versioneer' creates _version.py
|
||||
tag_prefix = "python-ecdsa-"
|
||||
parentdir_prefix = "ecdsa-"
|
||||
versionfile_source = "ecdsa/_version.py"
|
||||
|
||||
import os, sys, re, subprocess, errno
|
||||
|
||||
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False):
|
||||
assert isinstance(commands, list)
|
||||
p = None
|
||||
for c in commands:
|
||||
try:
|
||||
# remember shell=False, so use git.cmd on windows, not just git
|
||||
p = subprocess.Popen([c] + args, cwd=cwd, stdout=subprocess.PIPE,
|
||||
stderr=(subprocess.PIPE if hide_stderr
|
||||
else None))
|
||||
break
|
||||
except EnvironmentError:
|
||||
e = sys.exc_info()[1]
|
||||
if e.errno == errno.ENOENT:
|
||||
continue
|
||||
if verbose:
|
||||
print("unable to run %s" % args[0])
|
||||
print(e)
|
||||
return None
|
||||
else:
|
||||
if verbose:
|
||||
print("unable to find command, tried %s" % (commands,))
|
||||
return None
|
||||
stdout = p.communicate()[0].strip()
|
||||
if sys.version >= '3':
|
||||
stdout = stdout.decode()
|
||||
if p.returncode != 0:
|
||||
if verbose:
|
||||
print("unable to run %s (error)" % args[0])
|
||||
return None
|
||||
return stdout
|
||||
|
||||
|
||||
def versions_from_parentdir(parentdir_prefix, root, verbose=False):
|
||||
# Source tarballs conventionally unpack into a directory that includes
|
||||
# both the project name and a version string.
|
||||
dirname = os.path.basename(root)
|
||||
if not dirname.startswith(parentdir_prefix):
|
||||
if verbose:
|
||||
print("guessing rootdir is '%s', but '%s' doesn't start with prefix '%s'" %
|
||||
(root, dirname, parentdir_prefix))
|
||||
return None
|
||||
return {"version": dirname[len(parentdir_prefix):], "full": ""}
|
||||
|
||||
def git_get_keywords(versionfile_abs):
|
||||
# the code embedded in _version.py can just fetch the value of these
|
||||
# keywords. When used from setup.py, we don't want to import _version.py,
|
||||
# so we do it with a regexp instead. This function is not used from
|
||||
# _version.py.
|
||||
keywords = {}
|
||||
try:
|
||||
f = open(versionfile_abs,"r")
|
||||
for line in f.readlines():
|
||||
if line.strip().startswith("git_refnames ="):
|
||||
mo = re.search(r'=\s*"(.*)"', line)
|
||||
if mo:
|
||||
keywords["refnames"] = mo.group(1)
|
||||
if line.strip().startswith("git_full ="):
|
||||
mo = re.search(r'=\s*"(.*)"', line)
|
||||
if mo:
|
||||
keywords["full"] = mo.group(1)
|
||||
f.close()
|
||||
except EnvironmentError:
|
||||
pass
|
||||
return keywords
|
||||
|
||||
def git_versions_from_keywords(keywords, tag_prefix, verbose=False):
|
||||
if not keywords:
|
||||
return {} # keyword-finding function failed to find keywords
|
||||
refnames = keywords["refnames"].strip()
|
||||
if refnames.startswith("$Format"):
|
||||
if verbose:
|
||||
print("keywords are unexpanded, not using")
|
||||
return {} # unexpanded, so not in an unpacked git-archive tarball
|
||||
refs = set([r.strip() for r in refnames.strip("()").split(",")])
|
||||
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
|
||||
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
|
||||
TAG = "tag: "
|
||||
tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)])
|
||||
if not tags:
|
||||
# Either we're using git < 1.8.3, or there really are no tags. We use
|
||||
# a heuristic: assume all version tags have a digit. The old git %d
|
||||
# expansion behaves like git log --decorate=short and strips out the
|
||||
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
|
||||
# between branches and tags. By ignoring refnames without digits, we
|
||||
# filter out many common branch names like "release" and
|
||||
# "stabilization", as well as "HEAD" and "master".
|
||||
tags = set([r for r in refs if re.search(r'\d', r)])
|
||||
if verbose:
|
||||
print("discarding '%s', no digits" % ",".join(refs-tags))
|
||||
if verbose:
|
||||
print("likely tags: %s" % ",".join(sorted(tags)))
|
||||
for ref in sorted(tags):
|
||||
# sorting will prefer e.g. "2.0" over "2.0rc1"
|
||||
if ref.startswith(tag_prefix):
|
||||
r = ref[len(tag_prefix):]
|
||||
if verbose:
|
||||
print("picking %s" % r)
|
||||
return { "version": r,
|
||||
"full": keywords["full"].strip() }
|
||||
# no suitable tags, so we use the full revision id
|
||||
if verbose:
|
||||
print("no suitable tags, using full revision id")
|
||||
return { "version": keywords["full"].strip(),
|
||||
"full": keywords["full"].strip() }
|
||||
|
||||
|
||||
def git_versions_from_vcs(tag_prefix, root, verbose=False):
|
||||
    # this runs 'git' from the root of the source tree. This only gets called
    # if the git-archive 'subst' keywords were *not* expanded, and
    # _version.py hasn't already been rewritten with a short version string,
    # meaning we're inside a checked out source tree.

    if not os.path.exists(os.path.join(root, ".git")):
        if verbose:
            print("no .git in %s" % root)
        return {}

    GITS = ["git"]
    if sys.platform == "win32":
        GITS = ["git.cmd", "git.exe"]
    stdout = run_command(GITS, ["describe", "--tags", "--dirty", "--always"],
                         cwd=root)
    if stdout is None:
        return {}
    if not stdout.startswith(tag_prefix):
        if verbose:
            print("tag '%s' doesn't start with prefix '%s'" % (stdout, tag_prefix))
        return {}
    tag = stdout[len(tag_prefix):]
    stdout = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
    if stdout is None:
        return {}
    full = stdout.strip()
    if tag.endswith("-dirty"):
        full += "-dirty"
    return {"version": tag, "full": full}


def get_versions(default={"version": "unknown", "full": ""}, verbose=False):
    # I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have
    # __file__, we can work backwards from there to the root. Some
    # py2exe/bbfreeze/non-CPython implementations don't do __file__, in which
    # case we can only use expanded keywords.

    keywords = { "refnames": git_refnames, "full": git_full }
    ver = git_versions_from_keywords(keywords, tag_prefix, verbose)
    if ver:
        return ver

    try:
        root = os.path.abspath(__file__)
        # versionfile_source is the relative path from the top of the source
        # tree (where the .git directory might live) to this file. Invert
        # this to find the root from __file__.
        for i in range(len(versionfile_source.split(os.sep))):
            root = os.path.dirname(root)
    except NameError:
        return default

    return (git_versions_from_vcs(tag_prefix, root, verbose)
            or versions_from_parentdir(parentdir_prefix, root, verbose)
            or default)
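The root-finding loop in get_versions above strips one directory level per path component of versionfile_source to walk from _version.py back up to the project root. A minimal standalone sketch of that inversion (the helper name source_root is hypothetical, not part of versioneer):

```python
import os

def source_root(version_file, versionfile_source):
    # Invert the relative path: start from the absolute location of the
    # version file and drop one directory per component of
    # versionfile_source, landing on the top of the source tree.
    root = os.path.abspath(version_file)
    for _ in range(len(versionfile_source.split(os.sep))):
        root = os.path.dirname(root)
    return root
```

With versionfile_source of "src/pkg/_version.py", three dirname() calls strip "_version.py", "pkg", and "src", leaving the repository root.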
@@ -1,53 +0,0 @@
from __future__ import division

from . import der, ecdsa


class UnknownCurveError(Exception):
    pass


def orderlen(order):
    return (1+len("%x" % order))//2  # bytes


# the NIST curves
class Curve:
    def __init__(self, name, openssl_name,
                 curve, generator, oid):
        self.name = name
        self.openssl_name = openssl_name  # maybe None
        self.curve = curve
        self.generator = generator
        self.order = generator.order()
        self.baselen = orderlen(self.order)
        self.verifying_key_length = 2*self.baselen
        self.signature_length = 2*self.baselen
        self.oid = oid
        self.encoded_oid = der.encode_oid(*oid)


NIST192p = Curve("NIST192p", "prime192v1",
                 ecdsa.curve_192, ecdsa.generator_192,
                 (1, 2, 840, 10045, 3, 1, 1))
NIST224p = Curve("NIST224p", "secp224r1",
                 ecdsa.curve_224, ecdsa.generator_224,
                 (1, 3, 132, 0, 33))
NIST256p = Curve("NIST256p", "prime256v1",
                 ecdsa.curve_256, ecdsa.generator_256,
                 (1, 2, 840, 10045, 3, 1, 7))
NIST384p = Curve("NIST384p", "secp384r1",
                 ecdsa.curve_384, ecdsa.generator_384,
                 (1, 3, 132, 0, 34))
NIST521p = Curve("NIST521p", "secp521r1",
                 ecdsa.curve_521, ecdsa.generator_521,
                 (1, 3, 132, 0, 35))
SECP256k1 = Curve("SECP256k1", "secp256k1",
                  ecdsa.curve_secp256k1, ecdsa.generator_secp256k1,
                  (1, 3, 132, 0, 10))

curves = [NIST192p, NIST224p, NIST256p, NIST384p, NIST521p, SECP256k1]


def find_curve(oid_curve):
    for c in curves:
        if c.oid == oid_curve:
            return c
    raise UnknownCurveError("I don't know about the curve with oid %s. "
                            "I only know about these: %s" %
                            (oid_curve, [c.name for c in curves]))
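orderlen drives every byte-size field above: baselen is the group order's width in whole bytes, and both verifying_key_length and signature_length are twice that (two coordinates, or r and s concatenated). A quick standalone check of the formula, using the P-192 group order (_r in ecdsa.py):

```python
def orderlen(order):
    # same formula as curves.py above: hex-digit count, rounded up to bytes
    return (1 + len("%x" % order)) // 2

# group order of NIST P-192 (the _r constant in ecdsa.py): 192 bits
n192 = 6277101735386680763835789423176059013767194773182842284081
```

orderlen(n192) is 24, so NIST192p raw signatures are 2 * 24 == 48 bytes.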
@@ -1,199 +0,0 @@
from __future__ import division

import binascii
import base64
from .six import int2byte, b, integer_types, text_type


class UnexpectedDER(Exception):
    pass


def encode_constructed(tag, value):
    return int2byte(0xa0+tag) + encode_length(len(value)) + value

def encode_integer(r):
    assert r >= 0  # can't support negative numbers yet
    h = ("%x" % r).encode()
    if len(h) % 2:
        h = b("0") + h
    s = binascii.unhexlify(h)
    num = s[0] if isinstance(s[0], integer_types) else ord(s[0])
    if num <= 0x7f:
        return b("\x02") + int2byte(len(s)) + s
    else:
        # DER integers are two's complement, so if the first byte is
        # 0x80-0xff then we need an extra 0x00 byte to prevent it from
        # looking negative.
        return b("\x02") + int2byte(len(s)+1) + b("\x00") + s

def encode_bitstring(s):
    return b("\x03") + encode_length(len(s)) + s

def encode_octet_string(s):
    return b("\x04") + encode_length(len(s)) + s

def encode_oid(first, second, *pieces):
    assert first <= 2
    assert second <= 39
    encoded_pieces = [int2byte(40*first+second)] + [encode_number(p)
                                                    for p in pieces]
    body = b('').join(encoded_pieces)
    return b('\x06') + encode_length(len(body)) + body

def encode_sequence(*encoded_pieces):
    total_len = sum([len(p) for p in encoded_pieces])
    return b('\x30') + encode_length(total_len) + b('').join(encoded_pieces)

def encode_number(n):
    b128_digits = []
    while n:
        b128_digits.insert(0, (n & 0x7f) | 0x80)
        n = n >> 7
    if not b128_digits:
        b128_digits.append(0)
    b128_digits[-1] &= 0x7f
    return b('').join([int2byte(d) for d in b128_digits])


def remove_constructed(string):
    s0 = string[0] if isinstance(string[0], integer_types) else ord(string[0])
    if (s0 & 0xe0) != 0xa0:
        raise UnexpectedDER("wanted constructed tag (0xa0-0xbf), got 0x%02x"
                            % s0)
    tag = s0 & 0x1f
    length, llen = read_length(string[1:])
    body = string[1+llen:1+llen+length]
    rest = string[1+llen+length:]
    return tag, body, rest


def remove_sequence(string):
    if not string.startswith(b("\x30")):
        n = string[0] if isinstance(string[0], integer_types) else ord(string[0])
        raise UnexpectedDER("wanted sequence (0x30), got 0x%02x" % n)
    length, lengthlength = read_length(string[1:])
    endseq = 1+lengthlength+length
    return string[1+lengthlength:endseq], string[endseq:]


def remove_octet_string(string):
    if not string.startswith(b("\x04")):
        n = string[0] if isinstance(string[0], integer_types) else ord(string[0])
        raise UnexpectedDER("wanted octetstring (0x04), got 0x%02x" % n)
    length, llen = read_length(string[1:])
    body = string[1+llen:1+llen+length]
    rest = string[1+llen+length:]
    return body, rest


def remove_object(string):
    if not string.startswith(b("\x06")):
        n = string[0] if isinstance(string[0], integer_types) else ord(string[0])
        raise UnexpectedDER("wanted object (0x06), got 0x%02x" % n)
    length, lengthlength = read_length(string[1:])
    body = string[1+lengthlength:1+lengthlength+length]
    rest = string[1+lengthlength+length:]
    numbers = []
    while body:
        n, ll = read_number(body)
        numbers.append(n)
        body = body[ll:]
    n0 = numbers.pop(0)
    first = n0//40
    second = n0-(40*first)
    numbers.insert(0, first)
    numbers.insert(1, second)
    return tuple(numbers), rest


def remove_integer(string):
    if not string.startswith(b("\x02")):
        n = string[0] if isinstance(string[0], integer_types) else ord(string[0])
        raise UnexpectedDER("wanted integer (0x02), got 0x%02x" % n)
    length, llen = read_length(string[1:])
    numberbytes = string[1+llen:1+llen+length]
    rest = string[1+llen+length:]
    nbytes = numberbytes[0] if isinstance(numberbytes[0], integer_types) else ord(numberbytes[0])
    assert nbytes < 0x80  # can't support negative numbers yet
    return int(binascii.hexlify(numberbytes), 16), rest


def read_number(string):
    number = 0
    llen = 0
    # base-128 big endian, with b7 set in all but the last byte
    while True:
        if llen > len(string):
            raise UnexpectedDER("ran out of length bytes")
        number = number << 7
        d = string[llen] if isinstance(string[llen], integer_types) else ord(string[llen])
        number += (d & 0x7f)
        llen += 1
        if not d & 0x80:
            break
    return number, llen


def encode_length(l):
    assert l >= 0
    if l < 0x80:
        return int2byte(l)
    s = ("%x" % l).encode()
    if len(s) % 2:
        s = b("0")+s
    s = binascii.unhexlify(s)
    llen = len(s)
    return int2byte(0x80|llen) + s


def read_length(string):
    num = string[0] if isinstance(string[0], integer_types) else ord(string[0])
    if not (num & 0x80):
        # short form
        return (num & 0x7f), 1
    # else long-form: b0&0x7f is number of additional base256 length bytes,
    # big-endian
    llen = num & 0x7f
    if llen > len(string)-1:
        raise UnexpectedDER("ran out of length bytes")
    return int(binascii.hexlify(string[1:1+llen]), 16), 1+llen


def remove_bitstring(string):
    num = string[0] if isinstance(string[0], integer_types) else ord(string[0])
    if not string.startswith(b("\x03")):
        raise UnexpectedDER("wanted bitstring (0x03), got 0x%02x" % num)
    length, llen = read_length(string[1:])
    body = string[1+llen:1+llen+length]
    rest = string[1+llen+length:]
    return body, rest


# SEQUENCE([1, STRING(secexp), cont[0], OBJECT(curvename), cont[1], BITSTRING])


# signatures: (from RFC 3279)
#   ansi-X9-62 OBJECT IDENTIFIER ::= {
#       iso(1) member-body(2) us(840) 10045 }
#
#   id-ecSigType OBJECT IDENTIFIER ::= {
#       ansi-X9-62 signatures(4) }
#   ecdsa-with-SHA1 OBJECT IDENTIFIER ::= {
#       id-ecSigType 1 }
## so 1,2,840,10045,4,1
## so 0x42, .. ..

#   Ecdsa-Sig-Value ::= SEQUENCE {
#       r INTEGER,
#       s INTEGER }

#   id-publicKeyType OBJECT IDENTIFIER ::= { ansi-X9-62 keyType(2) }
#
#   id-ecPublicKey OBJECT IDENTIFIER ::= { id-publicKeyType 1 }

# I think the secp224r1 identifier is (t=06,l=05,v=2b81040021)
#   secp224r1 OBJECT IDENTIFIER ::= {
#       iso(1) identified-organization(3) certicom(132) curve(0) 33 }
# and the secp384r1 is (t=06,l=05,v=2b81040022)
#   secp384r1 OBJECT IDENTIFIER ::= {
#       iso(1) identified-organization(3) certicom(132) curve(0) 34 }


def unpem(pem):
    if isinstance(pem, text_type):
        pem = pem.encode()

    d = b("").join([l.strip() for l in pem.split(b("\n"))
                    if l and not l.startswith(b("-----"))])
    return base64.b64decode(d)

def topem(der, name):
    b64 = base64.b64encode(der)
    lines = [("-----BEGIN %s-----\n" % name).encode()]
    lines.extend([b64[start:start+64]+b("\n")
                  for start in range(0, len(b64), 64)])
    lines.append(("-----END %s-----\n" % name).encode())
    return b("").join(lines)
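encode_length and read_length above implement DER definite-form lengths (X.690): a single byte below 0x80 in the short form, otherwise 0x80 | nbytes followed by the length itself, big-endian base 256. A Python 3-only sketch of the same rules (the module itself stays 2/3-compatible via six, so it uses int2byte and b() instead):

```python
import binascii

def encode_length(n):
    # short form: one byte for lengths below 0x80
    if n < 0x80:
        return bytes([n])
    # long form: 0x80|nbytes, then the length in big-endian base 256
    s = ("%x" % n).encode()
    if len(s) % 2:
        s = b"0" + s
    s = binascii.unhexlify(s)
    return bytes([0x80 | len(s)]) + s

def read_length(data):
    num = data[0]
    if not (num & 0x80):
        return (num & 0x7f), 1          # short form: value, bytes consumed
    llen = num & 0x7f                   # count of extra length bytes
    return int(binascii.hexlify(data[1:1 + llen]), 16), 1 + llen
```

For example, a length of 300 encodes as b"\x82\x01\x2c" and reads back as (300, 3).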
@@ -1,576 +0,0 @@
#! /usr/bin/env python

"""
Implementation of Elliptic-Curve Digital Signatures.

Classes and methods for elliptic-curve signatures:
private keys, public keys, signatures,
NIST prime-modulus curves with modulus lengths of
192, 224, 256, 384, and 521 bits.

Example:

  # (In real-life applications, you would probably want to
  # protect against defects in SystemRandom.)
  from random import SystemRandom
  randrange = SystemRandom().randrange

  # Generate a public/private key pair using the NIST Curve P-192:

  g = generator_192
  n = g.order()
  secret = randrange( 1, n )
  pubkey = Public_key( g, g * secret )
  privkey = Private_key( pubkey, secret )

  # Signing a hash value:

  hash = randrange( 1, n )
  signature = privkey.sign( hash, randrange( 1, n ) )

  # Verifying a signature for a hash value:

  if pubkey.verifies( hash, signature ):
    print_("Demo verification succeeded.")
  else:
    print_("*** Demo verification failed.")

  # Verification fails if the hash value is modified:

  if pubkey.verifies( hash-1, signature ):
    print_("**** Demo verification failed to reject tampered hash.")
  else:
    print_("Demo verification correctly rejected tampered hash.")

Version of 2009.05.16.

Revision history:
  2005.12.31 - Initial version.
  2008.11.25 - Substantial revisions introducing new classes.
  2009.05.16 - Warn against using random.randrange in real applications.
  2009.05.17 - Use random.SystemRandom by default.

Written in 2005 by Peter Pearson and placed in the public domain.
"""

from .six import int2byte, b, print_
from . import ellipticcurve
from . import numbertheory
import random


class Signature( object ):
  """ECDSA signature.
  """
  def __init__( self, r, s ):
    self.r = r
    self.s = s


class Public_key( object ):
  """Public key for ECDSA.
  """

  def __init__( self, generator, point ):
    """generator is the Point that generates the group,
    point is the Point that defines the public key.
    """

    self.curve = generator.curve()
    self.generator = generator
    self.point = point
    n = generator.order()
    if not n:
      raise RuntimeError("Generator point must have order.")
    if not n * point == ellipticcurve.INFINITY:
      raise RuntimeError("Generator point order is bad.")
    if point.x() < 0 or n <= point.x() or point.y() < 0 or n <= point.y():
      raise RuntimeError("Generator point has x or y out of range.")

  def verifies( self, hash, signature ):
    """Verify that signature is a valid signature of hash.
    Return True if the signature is valid.
    """

    # From X9.62 J.3.1.

    G = self.generator
    n = G.order()
    r = signature.r
    s = signature.s
    if r < 1 or r > n-1: return False
    if s < 1 or s > n-1: return False
    c = numbertheory.inverse_mod( s, n )
    u1 = ( hash * c ) % n
    u2 = ( r * c ) % n
    xy = u1 * G + u2 * self.point
    v = xy.x() % n
    return v == r
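The check in verifies works because the two scalars u1 = h*s^-1 and u2 = r*s^-1 recombine into the signing nonce: with Q = d*G, u1*G + u2*Q = (u1 + u2*d)*G = k*G, whose x-coordinate mod n is exactly r. The scalar half of that identity can be checked without any curve arithmetic, reusing the P-192 X9.62 test values that appear in __main__ below (pow(a, -1, n) needs Python 3.8+):

```python
n = 6277101735386680763835789423176059013767194773182842284081  # P-192 order
d = 651056770906015076056810763456358567190100156695615665659   # private key
k = 6140507067065001063065065565667405560006161556565665656654  # nonce
h = 968236873715988614170569073515315707566766479517            # hashed msg
r = 3342403536405981729393488334694600415596881826869351677613  # x(k*G) % n

s = (pow(k, -1, n) * (h + d * r)) % n   # the signing equation from sign()
u1 = (h * pow(s, -1, n)) % n            # the two scalars from verifies()
u2 = (r * pow(s, -1, n)) % n
# (u1 + u2*d) mod n reconstructs k, so u1*G + u2*Q lands back on k*G
assert (u1 + u2 * d) % n == k
```

Since s*k = h + d*r (mod n), dividing by s gives u1 + u2*d = k (mod n); the curve arithmetic only turns that scalar identity into the point identity.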
class Private_key( object ):
  """Private key for ECDSA.
  """

  def __init__( self, public_key, secret_multiplier ):
    """public_key is of class Public_key;
    secret_multiplier is a large integer.
    """

    self.public_key = public_key
    self.secret_multiplier = secret_multiplier

  def sign( self, hash, random_k ):
    """Return a signature for the provided hash, using the provided
    random nonce. It is absolutely vital that random_k be an unpredictable
    number in the range [1, self.public_key.point.order()-1]. If
    an attacker can guess random_k, he can compute our private key from a
    single signature. Also, if an attacker knows a few high-order
    bits (or a few low-order bits) of random_k, he can compute our private
    key from many signatures. The generation of nonces with adequate
    cryptographic strength is very difficult and far beyond the scope
    of this comment.

    May raise RuntimeError, in which case retrying with a new
    random value k is in order.
    """

    G = self.public_key.generator
    n = G.order()
    k = random_k % n
    p1 = k * G
    r = p1.x()
    if r == 0: raise RuntimeError("amazingly unlucky random number r")
    s = ( numbertheory.inverse_mod( k, n ) * \
          ( hash + ( self.secret_multiplier * r ) % n ) ) % n
    if s == 0: raise RuntimeError("amazingly unlucky random number s")
    return Signature( r, s )


def int_to_string( x ):
  """Convert integer x into a string of bytes, as per X9.62."""
  assert x >= 0
  if x == 0: return b('\0')
  result = []
  while x:
    ordinal = x & 0xFF
    result.append(int2byte(ordinal))
    x >>= 8

  result.reverse()
  return b('').join(result)


def string_to_int( s ):
  """Convert a string of bytes into an integer, as per X9.62."""
  result = 0
  for c in s:
    if not isinstance(c, int): c = ord( c )
    result = 256 * result + c
  return result


def digest_integer( m ):
  """Convert an integer into a string of bytes, compute
     its SHA-1 hash, and convert the result to an integer."""
  #
  # I don't expect this function to be used much. I wrote
  # it in order to be able to duplicate the examples
  # in ECDSAVS.
  #
  from hashlib import sha1
  return string_to_int( sha1( int_to_string( m ) ).digest() )


def point_is_valid( generator, x, y ):
  """Is (x,y) a valid public key based on the specified generator?"""

  # These are the tests specified in X9.62.

  n = generator.order()
  curve = generator.curve()
  if x < 0 or n <= x or y < 0 or n <= y:
    return False
  if not curve.contains_point( x, y ):
    return False
  if not n*ellipticcurve.Point( curve, x, y ) == \
     ellipticcurve.INFINITY:
    return False
  return True


# NIST Curve P-192:
_p = 6277101735386680763835789423207666416083908700390324961279
_r = 6277101735386680763835789423176059013767194773182842284081
# s = 0x3045ae6fc8422f64ed579528d38120eae12196d5L
# c = 0x3099d2bbbfcb2538542dcd5fb078b6ef5f3d6fe2c745de65L
_b = 0x64210519e59c80e70fa7e9ab72243049feb8deecc146b9b1
_Gx = 0x188da80eb03090f67cbf20eb43a18800f4ff0afd82ff1012
_Gy = 0x07192b95ffc8da78631011ed6b24cdd573f977a11e794811

curve_192 = ellipticcurve.CurveFp( _p, -3, _b )
generator_192 = ellipticcurve.Point( curve_192, _Gx, _Gy, _r )


# NIST Curve P-224:
_p = 26959946667150639794667015087019630673557916260026308143510066298881
_r = 26959946667150639794667015087019625940457807714424391721682722368061
# s = 0xbd71344799d5c7fcdc45b59fa3b9ab8f6a948bc5L
# c = 0x5b056c7e11dd68f40469ee7f3c7a7d74f7d121116506d031218291fbL
_b = 0xb4050a850c04b3abf54132565044b0b7d7bfd8ba270b39432355ffb4
_Gx = 0xb70e0cbd6bb4bf7f321390b94a03c1d356c21122343280d6115c1d21
_Gy = 0xbd376388b5f723fb4c22dfe6cd4375a05a07476444d5819985007e34

curve_224 = ellipticcurve.CurveFp( _p, -3, _b )
generator_224 = ellipticcurve.Point( curve_224, _Gx, _Gy, _r )

# NIST Curve P-256:
_p = 115792089210356248762697446949407573530086143415290314195533631308867097853951
_r = 115792089210356248762697446949407573529996955224135760342422259061068512044369
# s = 0xc49d360886e704936a6678e1139d26b7819f7e90L
# c = 0x7efba1662985be9403cb055c75d4f7e0ce8d84a9c5114abcaf3177680104fa0dL
_b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b
_Gx = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
_Gy = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5

curve_256 = ellipticcurve.CurveFp( _p, -3, _b )
generator_256 = ellipticcurve.Point( curve_256, _Gx, _Gy, _r )

# NIST Curve P-384:
_p = 39402006196394479212279040100143613805079739270465446667948293404245721771496870329047266088258938001861606973112319
_r = 39402006196394479212279040100143613805079739270465446667946905279627659399113263569398956308152294913554433653942643
# s = 0xa335926aa319a27a1d00896a6773a4827acdac73L
# c = 0x79d1e655f868f02fff48dcdee14151ddb80643c1406d0ca10dfe6fc52009540a495e8042ea5f744f6e184667cc722483L
_b = 0xb3312fa7e23ee7e4988e056be3f82d19181d9c6efe8141120314088f5013875ac656398d8a2ed19d2a85c8edd3ec2aef
_Gx = 0xaa87ca22be8b05378eb1c71ef320ad746e1d3b628ba79b9859f741e082542a385502f25dbf55296c3a545e3872760ab7
_Gy = 0x3617de4a96262c6f5d9e98bf9292dc29f8f41dbd289a147ce9da3113b5f0b8c00a60b1ce1d7e819d7a431d7c90ea0e5f

curve_384 = ellipticcurve.CurveFp( _p, -3, _b )
generator_384 = ellipticcurve.Point( curve_384, _Gx, _Gy, _r )

# NIST Curve P-521:
_p = 6864797660130609714981900799081393217269435300143305409394463459185543183397656052122559640661454554977296311391480858037121987999716643812574028291115057151
_r = 6864797660130609714981900799081393217269435300143305409394463459185543183397655394245057746333217197532963996371363321113864768612440380340372808892707005449
# s = 0xd09e8800291cb85396cc6717393284aaa0da64baL
# c = 0x0b48bfa5f420a34949539d2bdfc264eeeeb077688e44fbf0ad8f6d0edb37bd6b533281000518e19f1b9ffbe0fe9ed8a3c2200b8f875e523868c70c1e5bf55bad637L
_b = 0x051953eb9618e1c9a1f929a21a0b68540eea2da725b99b315f3b8b489918ef109e156193951ec7e937b1652c0bd3bb1bf073573df883d2c34f1ef451fd46b503f00
_Gx = 0xc6858e06b70404e9cd9e3ecb662395b4429c648139053fb521f828af606b4d3dbaa14b5e77efe75928fe1dc127a2ffa8de3348b3c1856a429bf97e7e31c2e5bd66
_Gy = 0x11839296a789a3bc0045c8a5fb42c7d1bd998f54449579b446817afbd17273e662c97ee72995ef42640c550b9013fad0761353c7086a272c24088be94769fd16650

curve_521 = ellipticcurve.CurveFp( _p, -3, _b )
generator_521 = ellipticcurve.Point( curve_521, _Gx, _Gy, _r )

# Certicom secp256-k1
_a = 0x0000000000000000000000000000000000000000000000000000000000000000
_b = 0x0000000000000000000000000000000000000000000000000000000000000007
_p = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f
_Gx = 0x79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798
_Gy = 0x483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8
_r = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141

curve_secp256k1 = ellipticcurve.CurveFp( _p, _a, _b)
generator_secp256k1 = ellipticcurve.Point( curve_secp256k1, _Gx, _Gy, _r)


def __main__():
  class TestFailure(Exception): pass

  def test_point_validity( generator, x, y, expected ):
    """generator defines the curve; is (x,y) a point on
       this curve? "expected" is True if the right answer is Yes."""
    if point_is_valid( generator, x, y ) == expected:
      print_("Point validity tested as expected.")
    else:
      raise TestFailure("*** Point validity test gave wrong result.")

  def test_signature_validity( Msg, Qx, Qy, R, S, expected ):
    """Msg = message, Qx and Qy are the public key point on
       curve_192, R and S are the signature, and
       "expected" is True iff the signature is expected to be valid."""
    pubk = Public_key( generator_192,
                       ellipticcurve.Point( curve_192, Qx, Qy ) )
    got = pubk.verifies( digest_integer( Msg ), Signature( R, S ) )
    if got == expected:
      print_("Signature tested as expected: got %s, expected %s." % \
             ( got, expected ))
    else:
      raise TestFailure("*** Signature test failed: got %s, expected %s." % \
                        ( got, expected ))

  print_("NIST Curve P-192:")

  p192 = generator_192

  # From X9.62:

  d = 651056770906015076056810763456358567190100156695615665659
  Q = d * p192
  if Q.x() != 0x62B12D60690CDCF330BABAB6E69763B471F994DD702D16A5:
    raise TestFailure("*** p192 * d came out wrong.")
  else:
    print_("p192 * d came out right.")

  k = 6140507067065001063065065565667405560006161556565665656654
  R = k * p192
  if R.x() != 0x885052380FF147B734C330C43D39B2C4A89F29B0F749FEAD \
     or R.y() != 0x9CF9FA1CBEFEFB917747A3BB29C072B9289C2547884FD835:
    raise TestFailure("*** k * p192 came out wrong.")
  else:
    print_("k * p192 came out right.")

  u1 = 2563697409189434185194736134579731015366492496392189760599
  u2 = 6266643813348617967186477710235785849136406323338782220568
  temp = u1 * p192 + u2 * Q
  if temp.x() != 0x885052380FF147B734C330C43D39B2C4A89F29B0F749FEAD \
     or temp.y() != 0x9CF9FA1CBEFEFB917747A3BB29C072B9289C2547884FD835:
    raise TestFailure("*** u1 * p192 + u2 * Q came out wrong.")
  else:
    print_("u1 * p192 + u2 * Q came out right.")

  e = 968236873715988614170569073515315707566766479517
  pubk = Public_key( generator_192, generator_192 * d )
  privk = Private_key( pubk, d )
  sig = privk.sign( e, k )
  r, s = sig.r, sig.s
  if r != 3342403536405981729393488334694600415596881826869351677613 \
     or s != 5735822328888155254683894997897571951568553642892029982342:
    raise TestFailure("*** r or s came out wrong.")
  else:
    print_("r and s came out right.")

  valid = pubk.verifies( e, sig )
  if valid: print_("Signature verified OK.")
  else: raise TestFailure("*** Signature failed verification.")

  valid = pubk.verifies( e-1, sig )
  if not valid: print_("Forgery was correctly rejected.")
  else: raise TestFailure("*** Forgery was erroneously accepted.")

  print_("Testing point validity, as per ECDSAVS.pdf B.2.2:")

  test_point_validity( \
      p192, \
      0xcd6d0f029a023e9aaca429615b8f577abee685d8257cc83a, \
      0x00019c410987680e9fb6c0b6ecc01d9a2647c8bae27721bacdfc, \
      False )

  test_point_validity(
      p192, \
      0x00017f2fce203639e9eaf9fb50b81fc32776b30e3b02af16c73b, \
      0x95da95c5e72dd48e229d4748d4eee658a9a54111b23b2adb, \
      False )

  test_point_validity(
      p192, \
      0x4f77f8bc7fccbadd5760f4938746d5f253ee2168c1cf2792, \
      0x000147156ff824d131629739817edb197717c41aab5c2a70f0f6, \
      False )

  test_point_validity(
      p192, \
      0xc58d61f88d905293bcd4cd0080bcb1b7f811f2ffa41979f6, \
      0x8804dc7a7c4c7f8b5d437f5156f3312ca7d6de8a0e11867f, \
      True )

  test_point_validity(
      p192, \
      0xcdf56c1aa3d8afc53c521adf3ffb96734a6a630a4a5b5a70, \
      0x97c1c44a5fb229007b5ec5d25f7413d170068ffd023caa4e, \
      True )

  test_point_validity(
      p192, \
      0x89009c0dc361c81e99280c8e91df578df88cdf4b0cdedced, \
      0x27be44a529b7513e727251f128b34262a0fd4d8ec82377b9, \
      True )

  test_point_validity(
      p192, \
      0x6a223d00bd22c52833409a163e057e5b5da1def2a197dd15, \
      0x7b482604199367f1f303f9ef627f922f97023e90eae08abf, \
      True )

  test_point_validity(
      p192, \
      0x6dccbde75c0948c98dab32ea0bc59fe125cf0fb1a3798eda, \
      0x0001171a3e0fa60cf3096f4e116b556198de430e1fbd330c8835, \
      False )

  test_point_validity(
      p192, \
      0xd266b39e1f491fc4acbbbc7d098430931cfa66d55015af12, \
      0x193782eb909e391a3148b7764e6b234aa94e48d30a16dbb2, \
      False )

  test_point_validity(
      p192, \
      0x9d6ddbcd439baa0c6b80a654091680e462a7d1d3f1ffeb43, \
      0x6ad8efc4d133ccf167c44eb4691c80abffb9f82b932b8caa, \
      False )

  test_point_validity(
      p192, \
      0x146479d944e6bda87e5b35818aa666a4c998a71f4e95edbc, \
      0xa86d6fe62bc8fbd88139693f842635f687f132255858e7f6, \
      False )

  test_point_validity(
      p192, \
      0xe594d4a598046f3598243f50fd2c7bd7d380edb055802253, \
      0x509014c0c4d6b536e3ca750ec09066af39b4c8616a53a923, \
      False )

  print_("Trying signature-verification tests from ECDSAVS.pdf B.2.4:")
  print_("P-192:")
  Msg = 0x84ce72aa8699df436059f052ac51b6398d2511e49631bcb7e71f89c499b9ee425dfbc13a5f6d408471b054f2655617cbbaf7937b7c80cd8865cf02c8487d30d2b0fbd8b2c4e102e16d828374bbc47b93852f212d5043c3ea720f086178ff798cc4f63f787b9c2e419efa033e7644ea7936f54462dc21a6c4580725f7f0e7d158
  Qx = 0xd9dbfb332aa8e5ff091e8ce535857c37c73f6250ffb2e7ac
  Qy = 0x282102e364feded3ad15ddf968f88d8321aa268dd483ebc4
  R = 0x64dca58a20787c488d11d6dd96313f1b766f2d8efe122916
  S = 0x1ecba28141e84ab4ecad92f56720e2cc83eb3d22dec72479
  test_signature_validity( Msg, Qx, Qy, R, S, True )

  Msg = 0x94bb5bacd5f8ea765810024db87f4224ad71362a3c28284b2b9f39fab86db12e8beb94aae899768229be8fdb6c4f12f28912bb604703a79ccff769c1607f5a91450f30ba0460d359d9126cbd6296be6d9c4bb96c0ee74cbb44197c207f6db326ab6f5a659113a9034e54be7b041ced9dcf6458d7fb9cbfb2744d999f7dfd63f4
  Qx = 0x3e53ef8d3112af3285c0e74842090712cd324832d4277ae7
  Qy = 0xcc75f8952d30aec2cbb719fc6aa9934590b5d0ff5a83adb7
  R = 0x8285261607283ba18f335026130bab31840dcfd9c3e555af
  S = 0x356d89e1b04541afc9704a45e9c535ce4a50929e33d7e06c
  test_signature_validity( Msg, Qx, Qy, R, S, True )

  Msg = 0xf6227a8eeb34afed1621dcc89a91d72ea212cb2f476839d9b4243c66877911b37b4ad6f4448792a7bbba76c63bdd63414b6facab7dc71c3396a73bd7ee14cdd41a659c61c99b779cecf07bc51ab391aa3252386242b9853ea7da67fd768d303f1b9b513d401565b6f1eb722dfdb96b519fe4f9bd5de67ae131e64b40e78c42dd
  Qx = 0x16335dbe95f8e8254a4e04575d736befb258b8657f773cb7
  Qy = 0x421b13379c59bc9dce38a1099ca79bbd06d647c7f6242336
  R = 0x4141bd5d64ea36c5b0bd21ef28c02da216ed9d04522b1e91
  S = 0x159a6aa852bcc579e821b7bb0994c0861fb08280c38daa09
  test_signature_validity( Msg, Qx, Qy, R, S, False )

  Msg = 0x16b5f93afd0d02246f662761ed8e0dd9504681ed02a253006eb36736b563097ba39f81c8e1bce7a16c1339e345efabbc6baa3efb0612948ae51103382a8ee8bc448e3ef71e9f6f7a9676694831d7f5dd0db5446f179bcb737d4a526367a447bfe2c857521c7f40b6d7d7e01a180d92431fb0bbd29c04a0c420a57b3ed26ccd8a
  Qx = 0xfd14cdf1607f5efb7b1793037b15bdf4baa6f7c16341ab0b
  Qy = 0x83fa0795cc6c4795b9016dac928fd6bac32f3229a96312c4
  R = 0x8dfdb832951e0167c5d762a473c0416c5c15bc1195667dc1
  S = 0x1720288a2dc13fa1ec78f763f8fe2ff7354a7e6fdde44520
  test_signature_validity( Msg, Qx, Qy, R, S, False )

  Msg = 0x08a2024b61b79d260e3bb43ef15659aec89e5b560199bc82cf7c65c77d39192e03b9a895d766655105edd9188242b91fbde4167f7862d4ddd61e5d4ab55196683d4f13ceb90d87aea6e07eb50a874e33086c4a7cb0273a8e1c4408f4b846bceae1ebaac1b2b2ea851a9b09de322efe34cebe601653efd6ddc876ce8c2f2072fb
  Qx = 0x674f941dc1a1f8b763c9334d726172d527b90ca324db8828
  Qy = 0x65adfa32e8b236cb33a3e84cf59bfb9417ae7e8ede57a7ff
  R = 0x9508b9fdd7daf0d8126f9e2bc5a35e4c6d800b5b804d7796
  S = 0x36f2bf6b21b987c77b53bb801b3435a577e3d493744bfab0
  test_signature_validity( Msg, Qx, Qy, R, S, False )

  Msg = 0x1843aba74b0789d4ac6b0b8923848023a644a7b70afa23b1191829bbe4397ce15b629bf21a8838298653ed0c19222b95fa4f7390d1b4c844d96e645537e0aae98afb5c0ac3bd0e4c37f8daaff25556c64e98c319c52687c904c4de7240a1cc55cd9756b7edaef184e6e23b385726e9ffcba8001b8f574987c1a3fedaaa83ca6d
  Qx = 0x10ecca1aad7220b56a62008b35170bfd5e35885c4014a19f
  Qy = 0x04eb61984c6c12ade3bc47f3c629ece7aa0a033b9948d686
  R = 0x82bfa4e82c0dfe9274169b86694e76ce993fd83b5c60f325
  S = 0xa97685676c59a65dbde002fe9d613431fb183e8006d05633
  test_signature_validity( Msg, Qx, Qy, R, S, False )

  Msg = 0x5a478f4084ddd1a7fea038aa9732a822106385797d02311aeef4d0264f824f698df7a48cfb6b578cf3da416bc0799425bb491be5b5ecc37995b85b03420a98f2c4dc5c31a69a379e9e322fbe706bbcaf0f77175e05cbb4fa162e0da82010a278461e3e974d137bc746d1880d6eb02aa95216014b37480d84b87f717bb13f76e1
  Qx = 0x6636653cb5b894ca65c448277b29da3ad101c4c2300f7c04
  Qy = 0xfdf1cbb3fc3fd6a4f890b59e554544175fa77dbdbeb656c1
  R = 0xeac2ddecddfb79931a9c3d49c08de0645c783a24cb365e1c
  S = 0x3549fee3cfa7e5f93bc47d92d8ba100e881a2a93c22f8d50
  test_signature_validity( Msg, Qx, Qy, R, S, False )

  Msg = 0xc598774259a058fa65212ac57eaa4f52240e629ef4c310722088292d1d4af6c39b49ce06ba77e4247b20637174d0bd67c9723feb57b5ead232b47ea452d5d7a089f17c00b8b6767e434a5e16c231ba0efa718a340bf41d67ea2d295812ff1b9277daacb8bc27b50ea5e6443bcf95ef4e9f5468fe78485236313d53d1c68f6ba2
  Qx = 0xa82bd718d01d354001148cd5f69b9ebf38ff6f21898f8aaa
  Qy = 0xe67ceede07fc2ebfafd62462a51e4b6c6b3d5b537b7caf3e
  R = 0x4d292486c620c3de20856e57d3bb72fcde4a73ad26376955
  S = 0xa85289591a6081d5728825520e62ff1c64f94235c04c7f95
  test_signature_validity( Msg, Qx, Qy, R, S, False )

  Msg = 0xca98ed9db081a07b7557f24ced6c7b9891269a95d2026747add9e9eb80638a961cf9c71a1b9f2c29744180bd4c3d3db60f2243c5c0b7cc8a8d40a3f9a7fc910250f2187136ee6413ffc67f1a25e1c4c204fa9635312252ac0e0481d89b6d53808f0c496ba87631803f6c572c1f61fa049737fdacce4adff757afed4f05beb658
  Qx = 0x7d3b016b57758b160c4fca73d48df07ae3b6b30225126c2f
  Qy = 0x4af3790d9775742bde46f8da876711be1b65244b2b39e7ec
  R = 0x95f778f5f656511a5ab49a5d69ddd0929563c29cbc3a9e62
  S = 0x75c87fc358c251b4c83d2dd979faad496b539f9f2ee7a289
  test_signature_validity( Msg, Qx, Qy, R, S, False )

  Msg = 0x31dd9a54c8338bea06b87eca813d555ad1850fac9742ef0bbe40dad400e10288acc9c11ea7dac79eb16378ebea9490e09536099f1b993e2653cd50240014c90a9c987f64545abc6a536b9bd2435eb5e911fdfde2f13be96ea36ad38df4ae9ea387b29cced599af777338af2794820c9cce43b51d2112380a35802ab7e396c97a
  Qx = 0x9362f28c4ef96453d8a2f849f21e881cd7566887da8beb4a
  Qy = 0xe64d26d8d74c48a024ae85d982ee74cd16046f4ee5333905
  R = 0xf3923476a296c88287e8de914b0b324ad5a963319a4fe73b
  S = 0xf0baeed7624ed00d15244d8ba2aede085517dbdec8ac65f5
  test_signature_validity( Msg, Qx, Qy, R, S, True )

  Msg = 0xb2b94e4432267c92f9fdb9dc6040c95ffa477652761290d3c7de312283f6450d89cc4aabe748554dfb6056b2d8e99c7aeaad9cdddebdee9dbc099839562d9064e68e7bb5f3a6bba0749ca9a538181fc785553a4000785d73cc207922f63e8ce1112768cb1de7b673aed83a1e4a74592f1268d8e2a4e9e63d414b5d442bd0456d
  Qx = 0xcc6fc032a846aaac25533eb033522824f94e670fa997ecef
  Qy = 0xe25463ef77a029eccda8b294fd63dd694e38d223d30862f1
  R = 0x066b1d07f3a40e679b620eda7f550842a35c18b80c5ebe06
  S = 0xa0b0fb201e8f2df65e2c4508ef303bdc90d934016f16b2dc
  test_signature_validity( Msg, Qx, Qy, R, S, False )

  Msg = 0x4366fcadf10d30d086911de30143da6f579527036937007b337f7282460eae5678b15cccda853193ea5fc4bc0a6b9d7a31128f27e1214988592827520b214eed5052f7775b750b0c6b15f145453ba3fee24a085d65287e10509eb5d5f602c440341376b95c24e5c4727d4b859bfe1483d20538acdd92c7997fa9c614f0f839d7
  Qx = 0x955c908fe900a996f7e2089bee2f6376830f76a19135e753
  Qy = 0xba0c42a91d3847de4a592a46dc3fdaf45a7cc709b90de520
  R = 0x1f58ad77fc04c782815a1405b0925e72095d906cbf52a668
  S = 0xf2e93758b3af75edf784f05a6761c9b9a6043c66b845b599
  test_signature_validity( Msg, Qx, Qy, R, S, False )

  Msg = 0x543f8af57d750e33aa8565e0cae92bfa7a1ff78833093421c2942cadf9986670a5ff3244c02a8225e790fbf30ea84c74720abf99cfd10d02d34377c3d3b41269bea763384f372bb786b5846f58932defa68023136cd571863b304886e95e52e7877f445b9364b3f06f3c28da12707673fecb4b8071de06b6e0a3c87da160cef3
  Qx = 0x31f7fa05576d78a949b24812d4383107a9a45bb5fccdd835
  Qy = 0x8dc0eb65994a90f02b5e19bd18b32d61150746c09107e76b
  R = 0xbe26d59e4e883dde7c286614a767b31e49ad88789d3a78ff
  S = 0x8762ca831c1ce42df77893c9b03119428e7a9b819b619068
|
||||
test_signature_validity( Msg, Qx, Qy, R, S, False )
|
||||
|
||||
Msg = 0xd2e8454143ce281e609a9d748014dcebb9d0bc53adb02443a6aac2ffe6cb009f387c346ecb051791404f79e902ee333ad65e5c8cb38dc0d1d39a8dc90add5023572720e5b94b190d43dd0d7873397504c0c7aef2727e628eb6a74411f2e400c65670716cb4a815dc91cbbfeb7cfe8c929e93184c938af2c078584da045e8f8d1
|
||||
Qx = 0x66aa8edbbdb5cf8e28ceb51b5bda891cae2df84819fe25c0
|
||||
Qy = 0x0c6bc2f69030a7ce58d4a00e3b3349844784a13b8936f8da
|
||||
R = 0xa4661e69b1734f4a71b788410a464b71e7ffe42334484f23
|
||||
S = 0x738421cf5e049159d69c57a915143e226cac8355e149afe9
|
||||
test_signature_validity( Msg, Qx, Qy, R, S, False )
|
||||
|
||||
Msg = 0x6660717144040f3e2f95a4e25b08a7079c702a8b29babad5a19a87654bc5c5afa261512a11b998a4fb36b5d8fe8bd942792ff0324b108120de86d63f65855e5461184fc96a0a8ffd2ce6d5dfb0230cbbdd98f8543e361b3205f5da3d500fdc8bac6db377d75ebef3cb8f4d1ff738071ad0938917889250b41dd1d98896ca06fb
|
||||
Qx = 0xbcfacf45139b6f5f690a4c35a5fffa498794136a2353fc77
|
||||
Qy = 0x6f4a6c906316a6afc6d98fe1f0399d056f128fe0270b0f22
|
||||
R = 0x9db679a3dafe48f7ccad122933acfe9da0970b71c94c21c1
|
||||
S = 0x984c2db99827576c0a41a5da41e07d8cc768bc82f18c9da9
|
||||
test_signature_validity( Msg, Qx, Qy, R, S, False )
|
||||
|
||||
|
||||
|
||||
  print_("Testing the example code:")

  # Building a public/private key pair from the NIST Curve P-192:

  g = generator_192
  n = g.order()

  # (random.SystemRandom is supposed to provide
  # crypto-quality random numbers, but as Debian recently
  # illustrated, a systems programmer can accidentally
  # demolish this security, so in serious applications
  # further precautions are appropriate.)

  randrange = random.SystemRandom().randrange

  secret = randrange( 1, n )
  pubkey = Public_key( g, g * secret )
  privkey = Private_key( pubkey, secret )

  # Signing a hash value:

  hash = randrange( 1, n )
  signature = privkey.sign( hash, randrange( 1, n ) )

  # Verifying a signature for a hash value:

  if pubkey.verifies( hash, signature ):
    print_("Demo verification succeeded.")
  else:
    raise TestFailure("*** Demo verification failed.")

  if pubkey.verifies( hash - 1, signature ):
    raise TestFailure("**** Demo verification failed to reject tampered hash.")
  else:
    print_("Demo verification correctly rejected tampered hash.")

if __name__ == "__main__":
  __main__()
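The demo above exercises the library's `Public_key`/`Private_key` API on P-192. As a standalone illustration of the same ECDSA equations (s = k⁻¹(h + r·d) mod n for signing, and u1·G + u2·Q for verification), here is a hedged sketch over the tiny X9.62 example curve y² = x³ + x + 1 (mod 23) with base point G = (13, 7) of order 7, which also appears in the module's self-test below. The helper names (`point_add`, `scalar_mult`, `sign`, `verify`) are illustrative, not part of the library, and a toy 7-element group is of course cryptographically useless.

```python
# Toy ECDSA over the X9.62 example curve y^2 = x^3 + x + 1 (mod 23),
# base point G = (13, 7) of prime order n = 7. Illustration only.
P, A = 23, 1
G, N = (13, 7), 7

def inv(x, m):
    return pow(x, -1, m)          # modular inverse (Python 3.8+)

def point_add(p1, p2):
    if p1 is None: return p2      # None plays the point at infinity
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        l = (3 * x1 * x1 + A) * inv(2 * y1, P) % P   # tangent slope
    else:
        l = (y2 - y1) * inv(x2 - x1, P) % P          # chord slope
    x3 = (l * l - x1 - x2) % P
    return (x3, (l * (x1 - x3) - y1) % P)

def scalar_mult(k, p):
    result = None                 # plain double-and-add
    while k:
        if k & 1: result = point_add(result, p)
        p = point_add(p, p)
        k >>= 1
    return result

def sign(d, h, k):
    r = scalar_mult(k, G)[0] % N
    s = inv(k, N) * (h + r * d) % N
    return r, s

def verify(Q, h, r, s):
    w = inv(s, N)
    u1, u2 = h * w % N, r * w % N
    x = point_add(scalar_mult(u1, G), scalar_mult(u2, Q))[0]
    return x % N == r

d = 3                      # secret multiplier
Q = scalar_mult(d, G)      # public key
r, s = sign(d, h=5, k=2)
assert verify(Q, 5, r, s)
assert not verify(Q, 4, r, s)   # a tampered hash is rejected
```

This mirrors the structure of the demo: a verification of the signed hash succeeds, and a verification of a tampered hash fails.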
@@ -1,293 +0,0 @@
#! /usr/bin/env python
#
# Implementation of elliptic curves, for cryptographic applications.
#
# This module doesn't provide any way to choose a random elliptic
# curve, nor to verify that an elliptic curve was chosen randomly,
# because one can simply use NIST's standard curves.
#
# Notes from X9.62-1998 (draft):
#   Nomenclature:
#     - Q is a public key.
#     The "Elliptic Curve Domain Parameters" include:
#     - q is the "field size", which in our case equals p.
#     - p is a big prime.
#     - G is a point of prime order (5.1.1.1).
#     - n is the order of G (5.1.1.1).
#   Public-key validation (5.2.2):
#     - Verify that Q is not the point at infinity.
#     - Verify that X_Q and Y_Q are in [0,p-1].
#     - Verify that Q is on the curve.
#     - Verify that nQ is the point at infinity.
#   Signature generation (5.3):
#     - Pick random k from [1,n-1].
#   Signature checking (5.4.2):
#     - Verify that r and s are in [1,n-1].
#
# Version of 2008.11.25.
#
# Revision history:
#    2005.12.31 - Initial version.
#    2008.11.25 - Change CurveFp.is_on to contains_point.
#
# Written in 2005 by Peter Pearson and placed in the public domain.

from __future__ import division

from .six import print_
from . import numbertheory

class CurveFp( object ):
  """Elliptic Curve over the field of integers modulo a prime."""
  def __init__( self, p, a, b ):
    """The curve of points satisfying y^2 = x^3 + a*x + b (mod p)."""
    self.__p = p
    self.__a = a
    self.__b = b

  def p( self ):
    return self.__p

  def a( self ):
    return self.__a

  def b( self ):
    return self.__b

  def contains_point( self, x, y ):
    """Is the point (x,y) on this curve?"""
    return ( y * y - ( x * x * x + self.__a * x + self.__b ) ) % self.__p == 0


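The membership test in `CurveFp.contains_point` is a single congruence: (x, y) is on the curve iff y² ≡ x³ + ax + b (mod p). A minimal standalone sketch, reusing the X9.62 example curve p = 23, a = 1, b = 1 from the self-test below (the free-function form is illustrative, not the library's API):

```python
def contains_point(p, a, b, x, y):
    # A point (x, y) lies on y^2 = x^3 + a*x + b (mod p) iff the
    # congruence holds; this mirrors CurveFp.contains_point.
    return (y * y - (x * x * x + a * x + b)) % p == 0

# (3, 10) is on the X9.62 example curve, (3, 11) is not:
assert contains_point(23, 1, 1, 3, 10)
assert not contains_point(23, 1, 1, 3, 11)
```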
class Point( object ):
  """A point on an elliptic curve. Altering x and y is forbidden,
     but they can be read by the x() and y() methods."""
  def __init__( self, curve, x, y, order = None ):
    """curve, x, y, order; order (optional) is the order of this point."""
    self.__curve = curve
    self.__x = x
    self.__y = y
    self.__order = order
    # self.curve is allowed to be None only for INFINITY:
    if self.__curve: assert self.__curve.contains_point( x, y )
    if order: assert self * order == INFINITY

  def __eq__( self, other ):
    """Return True if the points are identical, False otherwise."""
    if self.__curve == other.__curve \
       and self.__x == other.__x \
       and self.__y == other.__y:
      return True
    else:
      return False

  def __add__( self, other ):
    """Add one point to another point."""

    # X9.62 B.3:

    if other == INFINITY: return self
    if self == INFINITY: return other
    assert self.__curve == other.__curve
    if self.__x == other.__x:
      if ( self.__y + other.__y ) % self.__curve.p() == 0:
        return INFINITY
      else:
        return self.double()

    p = self.__curve.p()

    l = ( ( other.__y - self.__y ) * \
          numbertheory.inverse_mod( other.__x - self.__x, p ) ) % p

    x3 = ( l * l - self.__x - other.__x ) % p
    y3 = ( l * ( self.__x - x3 ) - self.__y ) % p

    return Point( self.__curve, x3, y3 )

  def __mul__( self, other ):
    """Multiply a point by an integer."""

    def leftmost_bit( x ):
      assert x > 0
      result = 1
      while result <= x: result = 2 * result
      return result // 2

    e = other
    if self.__order: e = e % self.__order
    if e == 0: return INFINITY
    if self == INFINITY: return INFINITY
    assert e > 0

    # From X9.62 D.3.2:

    e3 = 3 * e
    negative_self = Point( self.__curve, self.__x, -self.__y, self.__order )
    i = leftmost_bit( e3 ) // 2
    result = self
    # print_("Multiplying %s by %d (e3 = %d):" % ( self, other, e3 ))
    while i > 1:
      result = result.double()
      if ( e3 & i ) != 0 and ( e & i ) == 0: result = result + self
      if ( e3 & i ) == 0 and ( e & i ) != 0: result = result + negative_self
      # print_(". . . i = %d, result = %s" % ( i, result ))
      i = i // 2

    return result
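The `e3 = 3 * e` loop in `__mul__` is X9.62 D.3.2's signed-digit ("ternary expansion") multiplication: scanning the bits of 3e against the bits of e decides, at each doubling, whether to add P, add −P, or do nothing. Its bit logic can be checked independently of curve arithmetic by running the same loop on plain integers, where "doubling" and "adding P" become multiplication and addition, so the result must simply equal e (a sketch; the function name is illustrative):

```python
def multiply_via_e3(e):
    # Mirrors Point.__mul__ above, but tracks an integer multiple
    # instead of a curve point, so the X9.62 D.3.2 "e3 = 3*e" trick
    # can be checked with plain arithmetic: the result must equal e.
    def leftmost_bit(x):
        result = 1
        while result <= x:
            result = 2 * result
        return result // 2

    e3 = 3 * e
    i = leftmost_bit(e3) // 2
    result = 1                         # stands for 1 * P
    while i > 1:
        result = 2 * result            # point doubling
        if (e3 & i) != 0 and (e & i) == 0:
            result = result + 1        # add P
        if (e3 & i) == 0 and (e & i) != 0:
            result = result - 1        # add -P
        i = i // 2
    return result

assert all(multiply_via_e3(e) == e for e in range(1, 200))
```

The payoff of the trick is fewer point additions on average than plain double-and-add, since runs of 1-bits in e become a single add and a single subtract.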

  def __rmul__( self, other ):
    """Multiply a point by an integer."""

    return self * other

  def __str__( self ):
    if self == INFINITY: return "infinity"
    return "(%d,%d)" % ( self.__x, self.__y )

  def double( self ):
    """Return a new point that is twice the old."""

    if self == INFINITY:
      return INFINITY

    # X9.62 B.3:

    p = self.__curve.p()
    a = self.__curve.a()

    l = ( ( 3 * self.__x * self.__x + a ) * \
          numbertheory.inverse_mod( 2 * self.__y, p ) ) % p

    x3 = ( l * l - 2 * self.__x ) % p
    y3 = ( l * ( self.__x - x3 ) - self.__y ) % p

    return Point( self.__curve, x3, y3 )

  def x( self ):
    return self.__x

  def y( self ):
    return self.__y

  def curve( self ):
    return self.__curve

  def order( self ):
    return self.__order


# This one point is the Point At Infinity for all purposes:
INFINITY = Point( None, None, None )

def __main__():

  class FailedTest(Exception): pass
  def test_add( c, x1, y1, x2, y2, x3, y3 ):
    """We expect that on curve c, (x1,y1) + (x2,y2) = (x3,y3)."""
    p1 = Point( c, x1, y1 )
    p2 = Point( c, x2, y2 )
    p3 = p1 + p2
    print_("%s + %s = %s" % ( p1, p2, p3 ), end=' ')
    if p3.x() != x3 or p3.y() != y3:
      raise FailedTest("Failure: should give (%d,%d)." % ( x3, y3 ))
    else:
      print_(" Good.")

  def test_double( c, x1, y1, x3, y3 ):
    """We expect that on curve c, 2*(x1,y1) = (x3,y3)."""
    p1 = Point( c, x1, y1 )
    p3 = p1.double()
    print_("%s doubled = %s" % ( p1, p3 ), end=' ')
    if p3.x() != x3 or p3.y() != y3:
      raise FailedTest("Failure: should give (%d,%d)." % ( x3, y3 ))
    else:
      print_(" Good.")

  def test_double_infinity( c ):
    """We expect that on curve c, 2*INFINITY = INFINITY."""
    p1 = INFINITY
    p3 = p1.double()
    print_("%s doubled = %s" % ( p1, p3 ), end=' ')
    if p3.x() != INFINITY.x() or p3.y() != INFINITY.y():
      raise FailedTest("Failure: should give (%d,%d)." % ( INFINITY.x(), INFINITY.y() ))
    else:
      print_(" Good.")

  def test_multiply( c, x1, y1, m, x3, y3 ):
    """We expect that on curve c, m*(x1,y1) = (x3,y3)."""
    p1 = Point( c, x1, y1 )
    p3 = p1 * m
    print_("%s * %d = %s" % ( p1, m, p3 ), end=' ')
    if p3.x() != x3 or p3.y() != y3:
      raise FailedTest("Failure: should give (%d,%d)." % ( x3, y3 ))
    else:
      print_(" Good.")


  # A few tests from X9.62 B.3:

  c = CurveFp( 23, 1, 1 )
  test_add( c, 3, 10, 9, 7, 17, 20 )
  test_double( c, 3, 10, 7, 12 )
  test_add( c, 3, 10, 3, 10, 7, 12 )  # (Should just invoke double.)
  test_multiply( c, 3, 10, 2, 7, 12 )

  test_double_infinity(c)

  # From X9.62 I.1 (p. 96):

  g = Point( c, 13, 7, 7 )

  check = INFINITY
  for i in range( 7 + 1 ):
    p = ( i % 7 ) * g
    print_("%s * %d = %s, expected %s . . ." % ( g, i, p, check ), end=' ')
    if p == check:
      print_(" Good.")
    else:
      raise FailedTest("Bad.")
    check = check + g

  # NIST Curve P-192:
  p = 6277101735386680763835789423207666416083908700390324961279
  r = 6277101735386680763835789423176059013767194773182842284081
  #s = 0x3045ae6fc8422f64ed579528d38120eae12196d5L
  c = 0x3099d2bbbfcb2538542dcd5fb078b6ef5f3d6fe2c745de65
  b = 0x64210519e59c80e70fa7e9ab72243049feb8deecc146b9b1
  Gx = 0x188da80eb03090f67cbf20eb43a18800f4ff0afd82ff1012
  Gy = 0x07192b95ffc8da78631011ed6b24cdd573f977a11e794811

  c192 = CurveFp( p, -3, b )
  p192 = Point( c192, Gx, Gy, r )

  # Checking against some sample computations presented
  # in X9.62:

  d = 651056770906015076056810763456358567190100156695615665659
  Q = d * p192
  if Q.x() != 0x62B12D60690CDCF330BABAB6E69763B471F994DD702D16A5:
    raise FailedTest("p192 * d came out wrong.")
  else:
    print_("p192 * d came out right.")

  k = 6140507067065001063065065565667405560006161556565665656654
  R = k * p192
  if R.x() != 0x885052380FF147B734C330C43D39B2C4A89F29B0F749FEAD \
     or R.y() != 0x9CF9FA1CBEFEFB917747A3BB29C072B9289C2547884FD835:
    raise FailedTest("k * p192 came out wrong.")
  else:
    print_("k * p192 came out right.")

  u1 = 2563697409189434185194736134579731015366492496392189760599
  u2 = 6266643813348617967186477710235785849136406323338782220568
  temp = u1 * p192 + u2 * Q
  if temp.x() != 0x885052380FF147B734C330C43D39B2C4A89F29B0F749FEAD \
     or temp.y() != 0x9CF9FA1CBEFEFB917747A3BB29C072B9289C2547884FD835:
    raise FailedTest("u1 * p192 + u2 * Q came out wrong.")
  else:
    print_("u1 * p192 + u2 * Q came out right.")

if __name__ == "__main__":
  __main__()
@@ -1,283 +0,0 @@
import binascii

from . import ecdsa
from . import der
from . import rfc6979
from .curves import NIST192p, find_curve
from .util import string_to_number, number_to_string, randrange
from .util import sigencode_string, sigdecode_string
from .util import oid_ecPublicKey, encoded_oid_ecPublicKey
from .six import PY3, b
from hashlib import sha1

class BadSignatureError(Exception):
    pass
class BadDigestError(Exception):
    pass

class VerifyingKey:
    def __init__(self, _error__please_use_generate=None):
        if not _error__please_use_generate:
            raise TypeError("Please use SigningKey.generate() to construct me")

    @classmethod
    def from_public_point(klass, point, curve=NIST192p, hashfunc=sha1):
        self = klass(_error__please_use_generate=True)
        self.curve = curve
        self.default_hashfunc = hashfunc
        self.pubkey = ecdsa.Public_key(curve.generator, point)
        self.pubkey.order = curve.order
        return self

    @classmethod
    def from_string(klass, string, curve=NIST192p, hashfunc=sha1,
                    validate_point=True):
        order = curve.order
        assert len(string) == curve.verifying_key_length, \
               (len(string), curve.verifying_key_length)
        xs = string[:curve.baselen]
        ys = string[curve.baselen:]
        assert len(xs) == curve.baselen, (len(xs), curve.baselen)
        assert len(ys) == curve.baselen, (len(ys), curve.baselen)
        x = string_to_number(xs)
        y = string_to_number(ys)
        if validate_point:
            assert ecdsa.point_is_valid(curve.generator, x, y)
        from . import ellipticcurve
        point = ellipticcurve.Point(curve.curve, x, y, order)
        return klass.from_public_point(point, curve, hashfunc)

    @classmethod
    def from_pem(klass, string):
        return klass.from_der(der.unpem(string))

    @classmethod
    def from_der(klass, string):
        # [[oid_ecPublicKey,oid_curve], point_str_bitstring]
        s1, empty = der.remove_sequence(string)
        if empty != b(""):
            raise der.UnexpectedDER("trailing junk after DER pubkey: %s" %
                                    binascii.hexlify(empty))
        s2, point_str_bitstring = der.remove_sequence(s1)
        # s2 = oid_ecPublicKey,oid_curve
        oid_pk, rest = der.remove_object(s2)
        oid_curve, empty = der.remove_object(rest)
        if empty != b(""):
            raise der.UnexpectedDER("trailing junk after DER pubkey objects: %s" %
                                    binascii.hexlify(empty))
        assert oid_pk == oid_ecPublicKey, (oid_pk, oid_ecPublicKey)
        curve = find_curve(oid_curve)
        point_str, empty = der.remove_bitstring(point_str_bitstring)
        if empty != b(""):
            raise der.UnexpectedDER("trailing junk after pubkey pointstring: %s" %
                                    binascii.hexlify(empty))
        assert point_str.startswith(b("\x00\x04"))
        return klass.from_string(point_str[2:], curve)

    def to_string(self):
        # VerifyingKey.from_string(vk.to_string()) == vk as long as the
        # curves are the same: the curve itself is not included in the
        # serialized form
        order = self.pubkey.order
        x_str = number_to_string(self.pubkey.point.x(), order)
        y_str = number_to_string(self.pubkey.point.y(), order)
        return x_str + y_str
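`to_string` serializes a public key as the fixed-length big-endian encodings of x and y concatenated, and `from_string` recovers both coordinates by splitting at `curve.baselen`; `number_to_string` pads each coordinate on the left with zero bytes so the split point is unambiguous. A standalone sketch of that round trip, assuming P-192's 24-byte element size (the helper signatures here are simplified for illustration; the library's `number_to_string` takes the curve order rather than a byte length):

```python
BASELEN = 24  # assumed byte length of a NIST P-192 field element

def number_to_string(num, baselen=BASELEN):
    # fixed-length big-endian encoding, zero-padded on the left
    return num.to_bytes(baselen, "big")

def string_to_number(s):
    return int.from_bytes(s, "big")

x = 0x07   # small value, so left-padding actually matters
y = 0x188DA80EB03090F67CBF20EB43A18800F4FF0AFD82FF1012
encoded = number_to_string(x) + number_to_string(y)   # to_string() analogue
assert len(encoded) == 2 * BASELEN
# from_string splits at baselen and recovers both coordinates:
assert string_to_number(encoded[:BASELEN]) == x
assert string_to_number(encoded[BASELEN:]) == y
```

The fixed-length padding is the design choice that makes the encoding curve-dependent but self-delimiting, which is why the comment above warns that the curve itself is not included in the serialized form.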

    def to_pem(self):
        return der.topem(self.to_der(), "PUBLIC KEY")

    def to_der(self):
        order = self.pubkey.order
        x_str = number_to_string(self.pubkey.point.x(), order)
        y_str = number_to_string(self.pubkey.point.y(), order)
        point_str = b("\x00\x04") + x_str + y_str
        return der.encode_sequence(der.encode_sequence(encoded_oid_ecPublicKey,
                                                       self.curve.encoded_oid),
                                   der.encode_bitstring(point_str))

    def verify(self, signature, data, hashfunc=None, sigdecode=sigdecode_string):
        hashfunc = hashfunc or self.default_hashfunc
        digest = hashfunc(data).digest()
        return self.verify_digest(signature, digest, sigdecode)

    def verify_digest(self, signature, digest, sigdecode=sigdecode_string):
        if len(digest) > self.curve.baselen:
            raise BadDigestError("this curve (%s) is too short "
                                 "for your digest (%d)" % (self.curve.name,
                                                           8*len(digest)))
        number = string_to_number(digest)
        r, s = sigdecode(signature, self.pubkey.order)
        sig = ecdsa.Signature(r, s)
        if self.pubkey.verifies(number, sig):
            return True
        raise BadSignatureError

class SigningKey:
    def __init__(self, _error__please_use_generate=None):
        if not _error__please_use_generate:
            raise TypeError("Please use SigningKey.generate() to construct me")

    @classmethod
    def generate(klass, curve=NIST192p, entropy=None, hashfunc=sha1):
        secexp = randrange(curve.order, entropy)
        return klass.from_secret_exponent(secexp, curve, hashfunc)

    # To create a signing key from a short (arbitrary-length) seed, convert
    # that seed into an integer with something like
    # secexp = util.randrange_from_seed__X(seed, curve.order), and then pass
    # that integer into SigningKey.from_secret_exponent(secexp, curve).

    @classmethod
    def from_secret_exponent(klass, secexp, curve=NIST192p, hashfunc=sha1):
        self = klass(_error__please_use_generate=True)
        self.curve = curve
        self.default_hashfunc = hashfunc
        self.baselen = curve.baselen
        n = curve.order
        assert 1 <= secexp < n
        pubkey_point = curve.generator*secexp
        pubkey = ecdsa.Public_key(curve.generator, pubkey_point)
        pubkey.order = n
        self.verifying_key = VerifyingKey.from_public_point(pubkey_point, curve,
                                                            hashfunc)
        self.privkey = ecdsa.Private_key(pubkey, secexp)
        self.privkey.order = n
        return self

    @classmethod
    def from_string(klass, string, curve=NIST192p, hashfunc=sha1):
        assert len(string) == curve.baselen, (len(string), curve.baselen)
        secexp = string_to_number(string)
        return klass.from_secret_exponent(secexp, curve, hashfunc)

    @classmethod
    def from_pem(klass, string, hashfunc=sha1):
        # the privkey pem file has two sections: "EC PARAMETERS" and "EC
        # PRIVATE KEY". The first is redundant.
        if PY3 and isinstance(string, str):
            string = string.encode()
        privkey_pem = string[string.index(b("-----BEGIN EC PRIVATE KEY-----")):]
        return klass.from_der(der.unpem(privkey_pem), hashfunc)

    @classmethod
    def from_der(klass, string, hashfunc=sha1):
        # SEQ([int(1), octetstring(privkey), cont[0], oid(secp224r1),
        #      cont[1], bitstring])
        s, empty = der.remove_sequence(string)
        if empty != b(""):
            raise der.UnexpectedDER("trailing junk after DER privkey: %s" %
                                    binascii.hexlify(empty))
        one, s = der.remove_integer(s)
        if one != 1:
            raise der.UnexpectedDER("expected '1' at start of DER privkey,"
                                    " got %d" % one)
        privkey_str, s = der.remove_octet_string(s)
        tag, curve_oid_str, s = der.remove_constructed(s)
        if tag != 0:
            raise der.UnexpectedDER("expected tag 0 in DER privkey,"
                                    " got %d" % tag)
        curve_oid, empty = der.remove_object(curve_oid_str)
        if empty != b(""):
            raise der.UnexpectedDER("trailing junk after DER privkey "
                                    "curve_oid: %s" % binascii.hexlify(empty))
        curve = find_curve(curve_oid)

        # we don't actually care about the following fields
        #
        #tag, pubkey_bitstring, s = der.remove_constructed(s)
        #if tag != 1:
        #    raise der.UnexpectedDER("expected tag 1 in DER privkey, got %d"
        #                            % tag)
        #pubkey_str = der.remove_bitstring(pubkey_bitstring)
        #if empty != "":
        #    raise der.UnexpectedDER("trailing junk after DER privkey "
        #                            "pubkeystr: %s" % binascii.hexlify(empty))

        # our from_string method likes fixed-length privkey strings
        if len(privkey_str) < curve.baselen:
            privkey_str = b("\x00")*(curve.baselen-len(privkey_str)) + privkey_str
        return klass.from_string(privkey_str, curve, hashfunc)

    def to_string(self):
        secexp = self.privkey.secret_multiplier
        s = number_to_string(secexp, self.privkey.order)
        return s

    def to_pem(self):
        # TODO: "BEGIN ECPARAMETERS"
        return der.topem(self.to_der(), "EC PRIVATE KEY")

    def to_der(self):
        # SEQ([int(1), octetstring(privkey), cont[0], oid(secp224r1),
        #      cont[1], bitstring])
        encoded_vk = b("\x00\x04") + self.get_verifying_key().to_string()
        return der.encode_sequence(der.encode_integer(1),
                                   der.encode_octet_string(self.to_string()),
                                   der.encode_constructed(0, self.curve.encoded_oid),
                                   der.encode_constructed(1, der.encode_bitstring(encoded_vk)),
                                   )

    def get_verifying_key(self):
        return self.verifying_key

    def sign_deterministic(self, data, hashfunc=None, sigencode=sigencode_string):
        hashfunc = hashfunc or self.default_hashfunc
        digest = hashfunc(data).digest()

        return self.sign_digest_deterministic(digest, hashfunc=hashfunc, sigencode=sigencode)

    def sign_digest_deterministic(self, digest, hashfunc=None, sigencode=sigencode_string):
        """
        Calculates 'k' from the data itself, removing the need for a strong
        random generator and producing deterministic (reproducible) signatures.
        See RFC 6979 for more details.
        """
        secexp = self.privkey.secret_multiplier
        k = rfc6979.generate_k(
            self.curve.generator.order(), secexp, hashfunc, digest)

        return self.sign_digest(digest, sigencode=sigencode, k=k)

    def sign(self, data, entropy=None, hashfunc=None, sigencode=sigencode_string, k=None):
        """
        hashfunc= should behave like hashlib.sha1. The output length of the
        hash (in bytes) must not be longer than the length of the curve order
        (rounded up to the nearest byte), so using SHA256 with nist256p is
        ok, but SHA256 with nist192p is not. (In the 2**-96ish unlikely event
        of a hash output larger than the curve order, the hash will
        effectively be wrapped mod n.)

        Use hashfunc=hashlib.sha1 to match openssl's -ecdsa-with-SHA1 mode,
        or hashfunc=hashlib.sha256 for openssl-1.0.0's -ecdsa-with-SHA256.
        """

        hashfunc = hashfunc or self.default_hashfunc
        h = hashfunc(data).digest()
        return self.sign_digest(h, entropy, sigencode, k)

    def sign_digest(self, digest, entropy=None, sigencode=sigencode_string, k=None):
        if len(digest) > self.curve.baselen:
            raise BadDigestError("this curve (%s) is too short "
                                 "for your digest (%d)" % (self.curve.name,
                                                           8*len(digest)))
        number = string_to_number(digest)
        r, s = self.sign_number(number, entropy, k)
        return sigencode(r, s, self.privkey.order)

    def sign_number(self, number, entropy=None, k=None):
        # returns a pair of numbers
        order = self.privkey.order
        # privkey.sign() may raise RuntimeError in the amazingly unlikely
        # (2**-192) event that r=0 or s=0, because that would leak the key.
        # We could re-try with a different 'k', but we couldn't test that
        # code, so I choose to allow the signature to fail instead.

        # If k is set, it is used directly. Otherwise it is generated
        # using the entropy function.
        if k is not None:
            _k = k
        else:
            _k = randrange(order, entropy)

        assert 1 <= _k < order
        sig = self.privkey.sign(number, _k)
        return sig.r, sig.s
@@ -1,613 +0,0 @@
#! /usr/bin/env python
#
# Provide some simple capabilities from number theory.
#
# Version of 2008.11.14.
#
# Written in 2005 and 2006 by Peter Pearson and placed in the public domain.
# Revision history:
#   2008.11.14: Use pow( base, exponent, modulus ) for modular_exp.
#               Make gcd and lcm accept arbitrarily many arguments.

from __future__ import division

from .six import print_, integer_types
from .six.moves import reduce

import math


class Error( Exception ):
  """Base class for exceptions in this module."""
  pass

class SquareRootError( Error ):
  pass

class NegativeExponentError( Error ):
  pass


def modular_exp( base, exponent, modulus ):
  "Raise base to exponent, reducing by modulus"
  if exponent < 0:
    raise NegativeExponentError( "Negative exponents (%d) not allowed" \
                                 % exponent )
  return pow( base, exponent, modulus )
#  result = 1L
#  x = exponent
#  b = base + 0L
#  while x > 0:
#    if x % 2 > 0: result = (result * b) % modulus
#    x = x // 2
#    b = ( b * b ) % modulus
#  return result


def polynomial_reduce_mod( poly, polymod, p ):
  """Reduce poly by polymod, integer arithmetic modulo p.

  Polynomials are represented as lists of coefficients
  of increasing powers of x."""

  # This module has been tested only by extensive use
  # in calculating modular square roots.

  # Just to make this easy, require a monic polynomial:
  assert polymod[-1] == 1

  assert len( polymod ) > 1

  while len( poly ) >= len( polymod ):
    if poly[-1] != 0:
      for i in range( 2, len( polymod ) + 1 ):
        poly[-i] = ( poly[-i] - poly[-1] * polymod[-i] ) % p
    poly = poly[0:-1]

  return poly


def polynomial_multiply_mod( m1, m2, polymod, p ):
  """Polynomial multiplication modulo a polynomial over ints mod p.

  Polynomials are represented as lists of coefficients
  of increasing powers of x."""

  # This is just a seat-of-the-pants implementation.

  # This module has been tested only by extensive use
  # in calculating modular square roots.

  # Initialize the product to zero:

  prod = ( len( m1 ) + len( m2 ) - 1 ) * [0]

  # Add together all the cross-terms:

  for i in range( len( m1 ) ):
    for j in range( len( m2 ) ):
      prod[i+j] = ( prod[i+j] + m1[i] * m2[j] ) % p

  return polynomial_reduce_mod( prod, polymod, p )


def polynomial_exp_mod( base, exponent, polymod, p ):
  """Polynomial exponentiation modulo a polynomial over ints mod p.

  Polynomials are represented as lists of coefficients
  of increasing powers of x."""

  # Based on the Handbook of Applied Cryptography, algorithm 2.227.

  # This module has been tested only by extensive use
  # in calculating modular square roots.

  assert exponent < p

  if exponent == 0: return [ 1 ]

  G = base
  k = exponent
  if k%2 == 1: s = G
  else: s = [ 1 ]

  while k > 1:
    k = k // 2
    G = polynomial_multiply_mod( G, G, polymod, p )
    if k%2 == 1: s = polynomial_multiply_mod( G, s, polymod, p )

  return s


def jacobi( a, n ):
  """Jacobi symbol"""

  # Based on the Handbook of Applied Cryptography (HAC), algorithm 2.149.

  # This function has been tested by comparison with a small
  # table printed in HAC, and by extensive use in calculating
  # modular square roots.

  assert n >= 3
  assert n%2 == 1
  a = a % n
  if a == 0: return 0
  if a == 1: return 1
  a1, e = a, 0
  while a1%2 == 0:
    a1, e = a1//2, e+1
  if e%2 == 0 or n%8 == 1 or n%8 == 7: s = 1
  else: s = -1
  if a1 == 1: return s
  if n%4 == 3 and a1%4 == 3: s = -s
  return s * jacobi( n % a1, a1 )


def square_root_mod_prime( a, p ):
  """Modular square root of a, mod p, p prime."""

  # Based on the Handbook of Applied Cryptography, algorithms 3.34 to 3.39.

  # This module has been tested for all values in [0,p-1] for
  # every prime p from 3 to 1229.

  assert 0 <= a < p
  assert 1 < p

  if a == 0: return 0
  if p == 2: return a

  jac = jacobi( a, p )
  if jac == -1: raise SquareRootError( "%d has no square root modulo %d" \
                                       % ( a, p ) )

  if p % 4 == 3: return modular_exp( a, (p+1)//4, p )

  if p % 8 == 5:
    d = modular_exp( a, (p-1)//4, p )
    if d == 1: return modular_exp( a, (p+3)//8, p )
    if d == p-1: return ( 2 * a * modular_exp( 4*a, (p-5)//8, p ) ) % p
    raise RuntimeError("Shouldn't get here.")

  for b in range( 2, p ):
    if jacobi( b*b-4*a, p ) == -1:
      f = ( a, -b, 1 )
      ff = polynomial_exp_mod( ( 0, 1 ), (p+1)//2, f, p )
      assert ff[1] == 0
      return ff[0]
  raise RuntimeError("No b found.")


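The `p % 4 == 3` branch above is the cheapest case: for such primes, a^((p+1)/4) is a square root of a whenever a is a quadratic residue, because then a^((p+1)/2) = a · a^((p-1)/2) ≡ a (mod p) by Euler's criterion. A quick standalone check on the modulus 23 (≡ 3 mod 4), which is also the field used by the elliptic-curve self-tests (the function name is illustrative):

```python
def sqrt_mod_p3(a, p):
    # Valid only for primes p with p % 4 == 3, and only when a is a
    # quadratic residue mod p (mirrors the p % 4 == 3 branch above).
    assert p % 4 == 3
    return pow(a, (p + 1) // 4, p)

# Every square mod 23 must round-trip through sqrt_mod_p3:
p = 23
for a in range(1, p):
    sq = a * a % p
    root = sqrt_mod_p3(sq, p)
    assert root * root % p == sq
```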
def inverse_mod( a, m ):
    """Inverse of a mod m."""

    if a < 0 or m <= a: a = a % m

    # From Ferguson and Schneier, roughly:

    c, d = a, m
    uc, vc, ud, vd = 1, 0, 0, 1
    while c != 0:
        q, c, d = divmod( d, c ) + ( c, )
        uc, vc, ud, vd = ud - q*uc, vd - q*vc, uc, vc

    # At this point, d is the GCD, and ud*a+vd*m = d.
    # If d == 1, this means that ud is an inverse of a.

    assert d == 1
    if ud > 0: return ud
    else: return ud + m


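The loop above carries the Bezout coefficients through each Euclid step, maintaining the invariants uc*a + vc*m == c and ud*a + vd*m == d. A self-contained sketch of the same update (an independent copy for illustration, not the module's own function):

```python
# Self-contained sketch of the extended-Euclid update used above.
def inverse_mod_sketch(a, m):
    a = a % m
    c, d = a, m
    uc, vc, ud, vd = 1, 0, 0, 1
    while c != 0:
        # One Euclid step: d = q*c + r, carried through the coefficients.
        q, c, d = divmod(d, c) + (c,)
        uc, vc, ud, vd = ud - q * uc, vd - q * vc, uc, vc
    assert d == 1          # a and m must be coprime for an inverse to exist
    return ud % m          # normalize the coefficient into [1, m-1]

inv = inverse_mod_sketch(7, 40)
print(inv, (7 * inv) % 40)
```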
def gcd2(a, b):
    """Greatest common divisor using Euclid's algorithm."""
    while a:
        a, b = b%a, a
    return b


def gcd( *a ):
    """Greatest common divisor.

    Usage: gcd( [ 2, 4, 6 ] )
    or:    gcd( 2, 4, 6 )
    """

    if len( a ) > 1: return reduce( gcd2, a )
    if hasattr( a[0], "__iter__" ): return reduce( gcd2, a[0] )
    return a[0]


def lcm2(a,b):
    """Least common multiple of two integers."""

    return (a*b)//gcd(a,b)


def lcm( *a ):
    """Least common multiple.

    Usage: lcm( [ 3, 4, 5 ] )
    or:    lcm( 3, 4, 5 )
    """

    if len( a ) > 1: return reduce( lcm2, a )
    if hasattr( a[0], "__iter__" ): return reduce( lcm2, a[0] )
    return a[0]


def factorization( n ):
    """Decompose n into a list of (prime,exponent) pairs."""

    assert isinstance( n, integer_types )

    if n < 2: return []

    result = []
    d = 2

    # Test the small primes:

    for d in smallprimes:
        if d > n: break
        q, r = divmod( n, d )
        if r == 0:
            count = 1
            while d <= n:
                n = q
                q, r = divmod( n, d )
                if r != 0: break
                count = count + 1
            result.append( ( d, count ) )

    # If n is still greater than the last of our small primes,
    # it may require further work:

    if n > smallprimes[-1]:
        if is_prime( n ):       # If what's left is prime, it's easy:
            result.append( ( n, 1 ) )
        else:                   # Ugh. Search stupidly for a divisor:
            d = smallprimes[-1]
            while 1:
                d = d + 2                # Try the next divisor.
                q, r = divmod( n, d )
                if q < d: break          # n < d*d means we're done, n = 1 or prime.
                if r == 0:               # d divides n. How many times?
                    count = 1
                    n = q
                    while d <= n:                 # As long as d might still divide n,
                        q, r = divmod( n, d )     # see if it does.
                        if r != 0: break
                        n = q                     # It does. Reduce n, increase count.
                        count = count + 1
                    result.append( ( d, count ) )
            if n > 1: result.append( ( n, 1 ) )

    return result


def phi( n ):
    """Return the Euler totient function of n."""

    assert isinstance( n, integer_types )

    if n < 3: return 1

    result = 1
    ff = factorization( n )
    for f in ff:
        e = f[1]
        if e > 1:
            result = result * f[0] ** (e-1) * ( f[0] - 1 )
        else:
            result = result * ( f[0] - 1 )
    return result


def carmichael( n ):
    """Return Carmichael function of n.

    Carmichael(n) is the smallest integer x such that
    m**x = 1 mod n for all m relatively prime to n.
    """

    return carmichael_of_factorized( factorization( n ) )


def carmichael_of_factorized( f_list ):
    """Return the Carmichael function of a number that is
    represented as a list of (prime,exponent) pairs.
    """

    if len( f_list ) < 1: return 1

    result = carmichael_of_ppower( f_list[0] )
    for i in range( 1, len( f_list ) ):
        result = lcm( result, carmichael_of_ppower( f_list[i] ) )

    return result


def carmichael_of_ppower( pp ):
    """Carmichael function of the given power of the given prime.
    """

    p, a = pp
    if p == 2 and a > 2: return 2**(a-2)
    else: return (p-1) * p**(a-1)


def order_mod( x, m ):
    """Return the order of x in the multiplicative group mod m.
    """

    # Warning: this implementation is not very clever, and will
    # take a long time if m is very large.

    if m <= 1: return 0

    assert gcd( x, m ) == 1

    z = x
    result = 1
    while z != 1:
        z = ( z * x ) % m
        result = result + 1
    return result


def largest_factor_relatively_prime( a, b ):
    """Return the largest factor of a relatively prime to b.
    """

    while 1:
        d = gcd( a, b )
        if d <= 1: break
        b = d
        while 1:
            q, r = divmod( a, d )
            if r > 0:
                break
            a = q
    return a


def kinda_order_mod( x, m ):
    """Return the order of x in the multiplicative group mod m',
    where m' is the largest factor of m relatively prime to x.
    """

    return order_mod( x, largest_factor_relatively_prime( m, x ) )


def is_prime( n ):
    """Return True if n is prime, False otherwise.

    We use the Miller-Rabin test, as given in Menezes et al. p. 138.
    This test is not exact: there are composite values n for which
    it returns True.

    In testing the odd numbers from 10000001 to 19999999,
    about 66 composites got past the first test,
    5 got past the second test, and none got past the third.
    Since factors of 2, 3, 5, 7, and 11 were detected during
    preliminary screening, the number of numbers tested by
    Miller-Rabin was (19999999 - 10000001)*(2/3)*(4/5)*(6/7)
    = 4.57 million.
    """

    # (This is used to study the risk of false positives:)
    global miller_rabin_test_count

    miller_rabin_test_count = 0

    if n <= smallprimes[-1]:
        if n in smallprimes: return True
        else: return False

    if gcd( n, 2*3*5*7*11 ) != 1: return False

    # Choose a number of iterations sufficient to reduce the
    # probability of accepting a composite below 2**-80
    # (from Menezes et al. Table 4.4):

    t = 40
    n_bits = 1 + int( math.log( n, 2 ) )
    for k, tt in ( ( 100, 27 ),
                   ( 150, 18 ),
                   ( 200, 15 ),
                   ( 250, 12 ),
                   ( 300,  9 ),
                   ( 350,  8 ),
                   ( 400,  7 ),
                   ( 450,  6 ),
                   ( 550,  5 ),
                   ( 650,  4 ),
                   ( 850,  3 ),
                   ( 1300, 2 ),
                 ):
        if n_bits < k: break
        t = tt

    # Run the test t times:

    s = 0
    r = n - 1
    while ( r % 2 ) == 0:
        s = s + 1
        r = r // 2
    for i in range( t ):
        a = smallprimes[ i ]
        y = modular_exp( a, r, n )
        if y != 1 and y != n-1:
            j = 1
            while j <= s - 1 and y != n - 1:
                y = modular_exp( y, 2, n )
                if y == 1:
                    miller_rabin_test_count = i + 1
                    return False
                j = j + 1
            if y != n-1:
                miller_rabin_test_count = i + 1
                return False
    return True


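One Miller-Rabin round, as used above, writes n-1 = 2**s * r with r odd and then squares repeatedly looking for n-1. A minimal self-contained sketch, with Python's built-in pow() standing in for the module's modular_exp:

```python
# Minimal sketch of one Miller-Rabin round, for illustration only;
# pow(a, r, n) replaces the module's modular_exp here.
def miller_rabin_round(n, a):
    """True if odd n > 2 passes one Miller-Rabin round with witness a."""
    s, r = 0, n - 1
    while r % 2 == 0:            # write n - 1 = 2**s * r with r odd
        s, r = s + 1, r // 2
    y = pow(a, r, n)
    if y in (1, n - 1):
        return True
    for _ in range(s - 1):       # square repeatedly, looking for n - 1
        y = pow(y, 2, n)
        if y == n - 1:
            return True
    return False

print([n for n in range(90, 100) if n % 2 and miller_rabin_round(n, 2)])
```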
def next_prime( starting_value ):
    "Return the smallest prime larger than the starting value."

    if starting_value < 2: return 2
    result = ( starting_value + 1 ) | 1
    while not is_prime( result ): result = result + 2
    return result


smallprimes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
               43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97,
               101, 103, 107, 109, 113, 127, 131, 137, 139, 149,
               151, 157, 163, 167, 173, 179, 181, 191, 193, 197,
               199, 211, 223, 227, 229, 233, 239, 241, 251, 257,
               263, 269, 271, 277, 281, 283, 293, 307, 311, 313,
               317, 331, 337, 347, 349, 353, 359, 367, 373, 379,
               383, 389, 397, 401, 409, 419, 421, 431, 433, 439,
               443, 449, 457, 461, 463, 467, 479, 487, 491, 499,
               503, 509, 521, 523, 541, 547, 557, 563, 569, 571,
               577, 587, 593, 599, 601, 607, 613, 617, 619, 631,
               641, 643, 647, 653, 659, 661, 673, 677, 683, 691,
               701, 709, 719, 727, 733, 739, 743, 751, 757, 761,
               769, 773, 787, 797, 809, 811, 821, 823, 827, 829,
               839, 853, 857, 859, 863, 877, 881, 883, 887, 907,
               911, 919, 929, 937, 941, 947, 953, 967, 971, 977,
               983, 991, 997, 1009, 1013, 1019, 1021, 1031, 1033,
               1039, 1049, 1051, 1061, 1063, 1069, 1087, 1091, 1093,
               1097, 1103, 1109, 1117, 1123, 1129, 1151, 1153, 1163,
               1171, 1181, 1187, 1193, 1201, 1213, 1217, 1223, 1229]

miller_rabin_test_count = 0

def __main__():
    # Making sure locally defined exceptions work:
    # p = modular_exp( 2, -2, 3 )
    # p = square_root_mod_prime( 2, 3 )

    print_("Testing gcd...")
    assert gcd( 3*5*7, 3*5*11, 3*5*13 ) == 3*5
    assert gcd( [ 3*5*7, 3*5*11, 3*5*13 ] ) == 3*5
    assert gcd( 3 ) == 3

    print_("Testing lcm...")
    assert lcm( 3, 5*3, 7*3 ) == 3*5*7
    assert lcm( [ 3, 5*3, 7*3 ] ) == 3*5*7
    assert lcm( 3 ) == 3

    print_("Testing next_prime...")
    bigprimes = ( 999671,
                  999683,
                  999721,
                  999727,
                  999749,
                  999763,
                  999769,
                  999773,
                  999809,
                  999853,
                  999863,
                  999883,
                  999907,
                  999917,
                  999931,
                  999953,
                  999959,
                  999961,
                  999979,
                  999983 )

    for i in range( len( bigprimes ) - 1 ):
        assert next_prime( bigprimes[i] ) == bigprimes[ i+1 ]

    error_tally = 0

    # Test the square_root_mod_prime function:

    for p in smallprimes:
        print_("Testing square_root_mod_prime for modulus p = %d." % p)
        squares = []

        for root in range( 0, 1+p//2 ):
            sq = ( root * root ) % p
            squares.append( sq )
            calculated = square_root_mod_prime( sq, p )
            if ( calculated * calculated ) % p != sq:
                error_tally = error_tally + 1
                print_("Failed to find %d as sqrt( %d ) mod %d. Said %d." % \
                       ( root, sq, p, calculated ))

        for nonsquare in range( 0, p ):
            if nonsquare not in squares:
                try:
                    calculated = square_root_mod_prime( nonsquare, p )
                except SquareRootError:
                    pass
                else:
                    error_tally = error_tally + 1
                    print_("Failed to report no root for sqrt( %d ) mod %d." % \
                           ( nonsquare, p ))

    # Test the jacobi function:
    for m in range( 3, 400, 2 ):
        print_("Testing jacobi for modulus m = %d." % m)
        if is_prime( m ):
            squares = []
            for root in range( 1, m ):
                if jacobi( root * root, m ) != 1:
                    error_tally = error_tally + 1
                    print_("jacobi( %d * %d, %d ) != 1" % ( root, root, m ))
                squares.append( root * root % m )
            for i in range( 1, m ):
                if not i in squares:
                    if jacobi( i, m ) != -1:
                        error_tally = error_tally + 1
                        print_("jacobi( %d, %d ) != -1" % ( i, m ))
        else:    # m is not prime.
            f = factorization( m )
            for a in range( 1, m ):
                c = 1
                for i in f:
                    c = c * jacobi( a, i[0] ) ** i[1]
                if c != jacobi( a, m ):
                    error_tally = error_tally + 1
                    print_("%d != jacobi( %d, %d )" % ( c, a, m ))

    # Test the inverse_mod function:
    print_("Testing inverse_mod . . .")
    import random
    n_tests = 0
    for i in range( 100 ):
        m = random.randint( 20, 10000 )
        for j in range( 100 ):
            a = random.randint( 1, m-1 )
            if gcd( a, m ) == 1:
                n_tests = n_tests + 1
                inv = inverse_mod( a, m )
                if inv <= 0 or inv >= m or ( a * inv ) % m != 1:
                    error_tally = error_tally + 1
                    print_("%d = inverse_mod( %d, %d ) is wrong." % ( inv, a, m ))
    assert n_tests > 1000
    print_(n_tests, " tests of inverse_mod completed.")

    class FailedTest(Exception): pass
    print_(error_tally, "errors detected.")
    if error_tally != 0:
        raise FailedTest("%d errors detected" % error_tally)

if __name__ == '__main__':
    __main__()

@@ -1,103 +0,0 @@
'''
RFC 6979:
Deterministic Usage of the Digital Signature Algorithm (DSA) and
Elliptic Curve Digital Signature Algorithm (ECDSA)

http://tools.ietf.org/html/rfc6979

Many thanks to Coda Hale for his implementation in Go language:
https://github.com/codahale/rfc6979
'''

import hmac
from binascii import hexlify
from .util import number_to_string, number_to_string_crop
from .six import b

try:
    bin(0)
except NameError:
    binmap = {"0": "0000", "1": "0001", "2": "0010", "3": "0011",
              "4": "0100", "5": "0101", "6": "0110", "7": "0111",
              "8": "1000", "9": "1001", "a": "1010", "b": "1011",
              "c": "1100", "d": "1101", "e": "1110", "f": "1111"}
    def bin(value): # for python2.5
        v = "".join(binmap[x] for x in "%x"%abs(value)).lstrip("0")
        if value < 0:
            return "-0b" + v
        return "0b" + v


def bit_length(num):
    # http://docs.python.org/dev/library/stdtypes.html#int.bit_length
    s = bin(num)         # binary representation: bin(-37) --> '-0b100101'
    s = s.lstrip('-0b')  # remove leading zeros and minus sign
    return len(s)        # len('100101') --> 6


def bits2int(data, qlen):
    x = int(hexlify(data), 16)
    l = len(data) * 8

    if l > qlen:
        return x >> (l-qlen)
    return x


def bits2octets(data, order):
    z1 = bits2int(data, bit_length(order))
    z2 = z1 - order

    if z2 < 0:
        z2 = z1

    return number_to_string_crop(z2, order)


# https://tools.ietf.org/html/rfc6979#section-3.2
def generate_k(order, secexp, hash_func, data):
    '''
    order - order of the DSA generator used in the signature
    secexp - secure exponent (private key) in numeric form
    hash_func - reference to the same hash function used for generating hash
    data - hash in binary form of the signing data
    '''

    qlen = bit_length(order)
    holen = hash_func().digest_size
    rolen = (qlen + 7) // 8  # integer division; plain / would yield a float on Python 3
    bx = number_to_string(secexp, order) + bits2octets(data, order)

    # Step B
    v = b('\x01') * holen

    # Step C
    k = b('\x00') * holen

    # Step D

    k = hmac.new(k, v+b('\x00')+bx, hash_func).digest()

    # Step E
    v = hmac.new(k, v, hash_func).digest()

    # Step F
    k = hmac.new(k, v+b('\x01')+bx, hash_func).digest()

    # Step G
    v = hmac.new(k, v, hash_func).digest()

    # Step H
    while True:
        # Step H1
        t = b('')

        # Step H2
        while len(t) < rolen:
            v = hmac.new(k, v, hash_func).digest()
            t += v

        # Step H3
        secret = bits2int(t, qlen)

        if secret >= 1 and secret < order:
            return secret

        k = hmac.new(k, v+b('\x00'), hash_func).digest()
        v = hmac.new(k, v, hash_func).digest()
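The bits2int helper above implements the truncation from RFC 6979 section 2.3.2: interpret the octet string as a big-endian integer and keep only the qlen leftmost bits. A self-contained sketch (an independent copy for illustration, not the module's own function):

```python
from binascii import hexlify, unhexlify

# Self-contained sketch of the bits2int truncation (RFC 6979, section 2.3.2):
# read the digest as a big-endian integer, keep only the qlen leftmost bits.
def bits2int_sketch(data, qlen):
    x = int(hexlify(data), 16)
    blen = len(data) * 8
    if blen > qlen:
        x >>= blen - qlen      # drop the excess low-order bits
    return x

digest = unhexlify(b"ff00")    # 16 bits: 0xff00
print(bits2int_sketch(digest, 8))
```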
@@ -1,394 +0,0 @@
"""Utilities for writing code that runs on Python 2 and 3"""

# Copyright (c) 2010-2012 Benjamin Peterson
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
# the Software, and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
# FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
# COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
# IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

import operator
import sys
import types

__author__ = "Benjamin Peterson <benjamin@python.org>"
__version__ = "1.2.0"


# True if we are running on Python 3.
PY3 = sys.version_info[0] == 3

if PY3:
    string_types = str,
    integer_types = int,
    class_types = type,
    text_type = str
    binary_type = bytes

    MAXSIZE = sys.maxsize
else:
    string_types = basestring,
    integer_types = (int, long)
    class_types = (type, types.ClassType)
    text_type = unicode
    binary_type = str

    if sys.platform.startswith("java"):
        # Jython always uses 32 bits.
        MAXSIZE = int((1 << 31) - 1)
    else:
        # It's possible to have sizeof(long) != sizeof(Py_ssize_t).
        class X(object):
            def __len__(self):
                return 1 << 31
        try:
            len(X())
        except OverflowError:
            # 32-bit
            MAXSIZE = int((1 << 31) - 1)
        else:
            # 64-bit
            MAXSIZE = int((1 << 63) - 1)
        del X


def _add_doc(func, doc):
    """Add documentation to a function."""
    func.__doc__ = doc


def _import_module(name):
    """Import module, returning the module after the last dot."""
    __import__(name)
    return sys.modules[name]


class _LazyDescr(object):

    def __init__(self, name):
        self.name = name

    def __get__(self, obj, tp):
        result = self._resolve()
        setattr(obj, self.name, result)
        # This is a bit ugly, but it avoids running this again.
        delattr(tp, self.name)
        return result


class MovedModule(_LazyDescr):

    def __init__(self, name, old, new=None):
        super(MovedModule, self).__init__(name)
        if PY3:
            if new is None:
                new = name
            self.mod = new
        else:
            self.mod = old

    def _resolve(self):
        return _import_module(self.mod)


class MovedAttribute(_LazyDescr):

    def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None):
        super(MovedAttribute, self).__init__(name)
        if PY3:
            if new_mod is None:
                new_mod = name
            self.mod = new_mod
            if new_attr is None:
                if old_attr is None:
                    new_attr = name
                else:
                    new_attr = old_attr
            self.attr = new_attr
        else:
            self.mod = old_mod
            if old_attr is None:
                old_attr = name
            self.attr = old_attr

    def _resolve(self):
        module = _import_module(self.mod)
        return getattr(module, self.attr)


class _MovedItems(types.ModuleType):
    """Lazy loading of moved objects"""


_moved_attributes = [
    MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"),
    MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"),
    MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"),
    MovedAttribute("map", "itertools", "builtins", "imap", "map"),
    MovedAttribute("reload_module", "__builtin__", "imp", "reload"),
    MovedAttribute("reduce", "__builtin__", "functools"),
    MovedAttribute("StringIO", "StringIO", "io"),
    MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"),
    MovedAttribute("zip", "itertools", "builtins", "izip", "zip"),

    MovedModule("builtins", "__builtin__"),
    MovedModule("configparser", "ConfigParser"),
    MovedModule("copyreg", "copy_reg"),
    MovedModule("http_cookiejar", "cookielib", "http.cookiejar"),
    MovedModule("http_cookies", "Cookie", "http.cookies"),
    MovedModule("html_entities", "htmlentitydefs", "html.entities"),
    MovedModule("html_parser", "HTMLParser", "html.parser"),
    MovedModule("http_client", "httplib", "http.client"),
    MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"),
    MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"),
    MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"),
    MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"),
    MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"),
    MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"),
    MovedModule("cPickle", "cPickle", "pickle"),
    MovedModule("queue", "Queue"),
    MovedModule("reprlib", "repr"),
    MovedModule("socketserver", "SocketServer"),
    MovedModule("tkinter", "Tkinter"),
    MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"),
    MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"),
    MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"),
    MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"),
    MovedModule("tkinter_tix", "Tix", "tkinter.tix"),
    MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"),
    MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"),
    MovedModule("tkinter_colorchooser", "tkColorChooser",
                "tkinter.colorchooser"),
    MovedModule("tkinter_commondialog", "tkCommonDialog",
                "tkinter.commondialog"),
    MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"),
    MovedModule("tkinter_font", "tkFont", "tkinter.font"),
    MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"),
    MovedModule("tkinter_tksimpledialog", "tkSimpleDialog",
                "tkinter.simpledialog"),
    MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"),
    MovedModule("winreg", "_winreg"),
]
for attr in _moved_attributes:
    setattr(_MovedItems, attr.name, attr)
del attr

moves = sys.modules[__name__ + ".moves"] = _MovedItems("moves")


def add_move(move):
    """Add an item to six.moves."""
    setattr(_MovedItems, move.name, move)


def remove_move(name):
    """Remove item from six.moves."""
    try:
        delattr(_MovedItems, name)
    except AttributeError:
        try:
            del moves.__dict__[name]
        except KeyError:
            raise AttributeError("no such move, %r" % (name,))


if PY3:
    _meth_func = "__func__"
    _meth_self = "__self__"

    _func_code = "__code__"
    _func_defaults = "__defaults__"

    _iterkeys = "keys"
    _itervalues = "values"
    _iteritems = "items"
else:
    _meth_func = "im_func"
    _meth_self = "im_self"

    _func_code = "func_code"
    _func_defaults = "func_defaults"

    _iterkeys = "iterkeys"
    _itervalues = "itervalues"
    _iteritems = "iteritems"


try:
    advance_iterator = next
except NameError:
    def advance_iterator(it):
        return it.next()
next = advance_iterator


try:
    callable = callable
except NameError:
    def callable(obj):
        return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)


if PY3:
    def get_unbound_function(unbound):
        return unbound

    Iterator = object
else:
    def get_unbound_function(unbound):
        return unbound.im_func

    class Iterator(object):

        def next(self):
            return type(self).__next__(self)

    callable = callable
_add_doc(get_unbound_function,
         """Get the function out of a possibly unbound function""")


get_method_function = operator.attrgetter(_meth_func)
get_method_self = operator.attrgetter(_meth_self)
get_function_code = operator.attrgetter(_func_code)
get_function_defaults = operator.attrgetter(_func_defaults)


def iterkeys(d):
    """Return an iterator over the keys of a dictionary."""
    return iter(getattr(d, _iterkeys)())

def itervalues(d):
    """Return an iterator over the values of a dictionary."""
    return iter(getattr(d, _itervalues)())

def iteritems(d):
    """Return an iterator over the (key, value) pairs of a dictionary."""
    return iter(getattr(d, _iteritems)())


if PY3:
    def b(s):
        return s.encode("latin-1")
    def u(s):
        return s
    if sys.version_info[1] <= 1:
        def int2byte(i):
            return bytes((i,))
    else:
        # This is about 2x faster than the implementation above on 3.2+
        int2byte = operator.methodcaller("to_bytes", 1, "big")
    import io
    StringIO = io.StringIO
    BytesIO = io.BytesIO
else:
    def b(s):
        return s
    def u(s):
        if isinstance(s, unicode):
            return s
        return unicode(s, "unicode_escape")
    int2byte = chr
    import StringIO
    StringIO = BytesIO = StringIO.StringIO
_add_doc(b, """Byte literal""")
_add_doc(u, """Text literal""")


if PY3:
    import builtins
    exec_ = getattr(builtins, "exec")


    def reraise(tp, value, tb=None):
        if value.__traceback__ is not tb:
            raise value.with_traceback(tb)
        raise value


    print_ = getattr(builtins, "print")
    del builtins

else:
    def exec_(_code_, _globs_=None, _locs_=None):
        """Execute code in a namespace."""
        if _globs_ is None:
            frame = sys._getframe(1)
            _globs_ = frame.f_globals
            if _locs_ is None:
                _locs_ = frame.f_locals
            del frame
        elif _locs_ is None:
            _locs_ = _globs_
        exec("""exec _code_ in _globs_, _locs_""")


    exec_("""def reraise(tp, value, tb=None):
    raise tp, value, tb
""")


    def print_(*args, **kwargs):
        """The new-style print function."""
        fp = kwargs.pop("file", sys.stdout)
        if fp is None:
            return
        def write(data):
            if not isinstance(data, basestring):
                data = str(data)
            fp.write(data)
        want_unicode = False
        sep = kwargs.pop("sep", None)
        if sep is not None:
            if isinstance(sep, unicode):
                want_unicode = True
            elif not isinstance(sep, str):
                raise TypeError("sep must be None or a string")
        end = kwargs.pop("end", None)
        if end is not None:
            if isinstance(end, unicode):
                want_unicode = True
            elif not isinstance(end, str):
                raise TypeError("end must be None or a string")
        if kwargs:
            raise TypeError("invalid keyword arguments to print()")
        if not want_unicode:
            for arg in args:
                if isinstance(arg, unicode):
                    want_unicode = True
                    break
        if want_unicode:
            newline = unicode("\n")
            space = unicode(" ")
        else:
            newline = "\n"
            space = " "
        if sep is None:
            sep = space
        if end is None:
            end = newline
        for i, arg in enumerate(args):
            if i:
                write(sep)
            write(arg)
        write(end)

_add_doc(reraise, """Reraise an exception.""")


def with_metaclass(meta, base=object):
    """Create a base class with a metaclass."""
    return meta("NewBase", (base,), {})
@@ -1,663 +0,0 @@
from __future__ import with_statement, division

import unittest
import os
import time
import shutil
import subprocess
from binascii import hexlify, unhexlify
from hashlib import sha1, sha256, sha512

from .six import b, print_, binary_type
from .keys import SigningKey, VerifyingKey
from .keys import BadSignatureError
from . import util
from .util import sigencode_der, sigencode_strings
from .util import sigdecode_der, sigdecode_strings
from .curves import Curve, UnknownCurveError
from .curves import NIST192p, NIST224p, NIST256p, NIST384p, NIST521p, SECP256k1
from .ellipticcurve import Point
from . import der
from . import rfc6979

class SubprocessError(Exception):
    pass

def run_openssl(cmd):
    OPENSSL = "openssl"
    p = subprocess.Popen([OPENSSL] + cmd.split(),
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    stdout, ignored = p.communicate()
    if p.returncode != 0:
        raise SubprocessError("cmd '%s %s' failed: rc=%s, stdout/err was %s" %
                              (OPENSSL, cmd, p.returncode, stdout))
    return stdout.decode()

BENCH = False

class ECDSA(unittest.TestCase):
    def test_basic(self):
        priv = SigningKey.generate()
        pub = priv.get_verifying_key()

        data = b("blahblah")
        sig = priv.sign(data)

        self.assertTrue(pub.verify(sig, data))
        self.assertRaises(BadSignatureError, pub.verify, sig, data+b("bad"))

        pub2 = VerifyingKey.from_string(pub.to_string())
        self.assertTrue(pub2.verify(sig, data))

    def test_deterministic(self):
        data = b("blahblah")
        secexp = int("9d0219792467d7d37b4d43298a7d0c05", 16)

        priv = SigningKey.from_secret_exponent(secexp, SECP256k1, sha256)
        pub = priv.get_verifying_key()

        k = rfc6979.generate_k(
            SECP256k1.generator.order(), secexp, sha256, sha256(data).digest())

        sig1 = priv.sign(data, k=k)
        self.assertTrue(pub.verify(sig1, data))

        sig2 = priv.sign(data, k=k)
        self.assertTrue(pub.verify(sig2, data))

        sig3 = priv.sign_deterministic(data, sha256)
        self.assertTrue(pub.verify(sig3, data))

        self.assertEqual(sig1, sig2)
        self.assertEqual(sig1, sig3)

    def test_bad_usage(self):
        # sk=SigningKey() is wrong
        self.assertRaises(TypeError, SigningKey)
        self.assertRaises(TypeError, VerifyingKey)

    def test_lengths(self):
        default = NIST192p
        priv = SigningKey.generate()
        pub = priv.get_verifying_key()
        self.assertEqual(len(pub.to_string()), default.verifying_key_length)
        sig = priv.sign(b("data"))
        self.assertEqual(len(sig), default.signature_length)
        if BENCH:
            print_()
        for curve in (NIST192p, NIST224p, NIST256p, NIST384p, NIST521p):
            start = time.time()
            priv = SigningKey.generate(curve=curve)
            pub1 = priv.get_verifying_key()
            keygen_time = time.time() - start
            pub2 = VerifyingKey.from_string(pub1.to_string(), curve)
            self.assertEqual(pub1.to_string(), pub2.to_string())
            self.assertEqual(len(pub1.to_string()),
                             curve.verifying_key_length)
            start = time.time()
            sig = priv.sign(b("data"))
            sign_time = time.time() - start
            self.assertEqual(len(sig), curve.signature_length)
            if BENCH:
                start = time.time()
                pub1.verify(sig, b("data"))
                verify_time = time.time() - start
                print_("%s: siglen=%d, keygen=%0.3fs, sign=%0.3f, verify=%0.3f" \
                       % (curve.name, curve.signature_length,
                          keygen_time, sign_time, verify_time))

    def test_serialize(self):
        seed = b("secret")
        curve = NIST192p
        secexp1 = util.randrange_from_seed__trytryagain(seed, curve.order)
        secexp2 = util.randrange_from_seed__trytryagain(seed, curve.order)
        self.assertEqual(secexp1, secexp2)
        priv1 = SigningKey.from_secret_exponent(secexp1, curve)
        priv2 = SigningKey.from_secret_exponent(secexp2, curve)
        self.assertEqual(hexlify(priv1.to_string()),
                         hexlify(priv2.to_string()))
        self.assertEqual(priv1.to_pem(), priv2.to_pem())
        pub1 = priv1.get_verifying_key()
        pub2 = priv2.get_verifying_key()
        data = b("data")
        sig1 = priv1.sign(data)
        sig2 = priv2.sign(data)
        self.assertTrue(pub1.verify(sig1, data))
        self.assertTrue(pub2.verify(sig1, data))
        self.assertTrue(pub1.verify(sig2, data))
        self.assertTrue(pub2.verify(sig2, data))
        self.assertEqual(hexlify(pub1.to_string()),
                         hexlify(pub2.to_string()))

    def test_nonrandom(self):
        s = b("all the entropy in the entire world, compressed into one line")
        def not_much_entropy(numbytes):
            return s[:numbytes]
        # we control the entropy source, these two keys should be identical:
        priv1 = SigningKey.generate(entropy=not_much_entropy)
        priv2 = SigningKey.generate(entropy=not_much_entropy)
        self.assertEqual(hexlify(priv1.get_verifying_key().to_string()),
                         hexlify(priv2.get_verifying_key().to_string()))
        # likewise, signatures should be identical. Obviously you'd never
        # want to do this with keys you care about, because the secrecy of
        # the private key depends upon using different random numbers for
        # each signature
        sig1 = priv1.sign(b("data"), entropy=not_much_entropy)
        sig2 = priv2.sign(b("data"), entropy=not_much_entropy)
        self.assertEqual(hexlify(sig1), hexlify(sig2))

    def assertTruePrivkeysEqual(self, priv1, priv2):
        self.assertEqual(priv1.privkey.secret_multiplier,
                         priv2.privkey.secret_multiplier)
|
||||
self.assertEqual(priv1.privkey.public_key.generator,
|
||||
priv2.privkey.public_key.generator)
|
||||
|
||||
def failIfPrivkeysEqual(self, priv1, priv2):
|
||||
self.failIfEqual(priv1.privkey.secret_multiplier,
|
||||
priv2.privkey.secret_multiplier)
|
||||
|
||||
def test_privkey_creation(self):
|
||||
s = b("all the entropy in the entire world, compressed into one line")
|
||||
def not_much_entropy(numbytes):
|
||||
return s[:numbytes]
|
||||
priv1 = SigningKey.generate()
|
||||
self.assertEqual(priv1.baselen, NIST192p.baselen)
|
||||
|
||||
priv1 = SigningKey.generate(curve=NIST224p)
|
||||
self.assertEqual(priv1.baselen, NIST224p.baselen)
|
||||
|
||||
priv1 = SigningKey.generate(entropy=not_much_entropy)
|
||||
self.assertEqual(priv1.baselen, NIST192p.baselen)
|
||||
priv2 = SigningKey.generate(entropy=not_much_entropy)
|
||||
self.assertEqual(priv2.baselen, NIST192p.baselen)
|
||||
self.assertTruePrivkeysEqual(priv1, priv2)
|
||||
|
||||
priv1 = SigningKey.from_secret_exponent(secexp=3)
|
||||
self.assertEqual(priv1.baselen, NIST192p.baselen)
|
||||
priv2 = SigningKey.from_secret_exponent(secexp=3)
|
||||
self.assertTruePrivkeysEqual(priv1, priv2)
|
||||
|
||||
priv1 = SigningKey.from_secret_exponent(secexp=4, curve=NIST224p)
|
||||
self.assertEqual(priv1.baselen, NIST224p.baselen)
|
||||
|
||||
def test_privkey_strings(self):
|
||||
priv1 = SigningKey.generate()
|
||||
s1 = priv1.to_string()
|
||||
self.assertEqual(type(s1), binary_type)
|
||||
self.assertEqual(len(s1), NIST192p.baselen)
|
||||
priv2 = SigningKey.from_string(s1)
|
||||
self.assertTruePrivkeysEqual(priv1, priv2)
|
||||
|
||||
s1 = priv1.to_pem()
|
||||
self.assertEqual(type(s1), binary_type)
|
||||
self.assertTrue(s1.startswith(b("-----BEGIN EC PRIVATE KEY-----")))
|
||||
self.assertTrue(s1.strip().endswith(b("-----END EC PRIVATE KEY-----")))
|
||||
priv2 = SigningKey.from_pem(s1)
|
||||
self.assertTruePrivkeysEqual(priv1, priv2)
|
||||
|
||||
s1 = priv1.to_der()
|
||||
self.assertEqual(type(s1), binary_type)
|
||||
priv2 = SigningKey.from_der(s1)
|
||||
self.assertTruePrivkeysEqual(priv1, priv2)
|
||||
|
||||
priv1 = SigningKey.generate(curve=NIST256p)
|
||||
s1 = priv1.to_pem()
|
||||
self.assertEqual(type(s1), binary_type)
|
||||
self.assertTrue(s1.startswith(b("-----BEGIN EC PRIVATE KEY-----")))
|
||||
self.assertTrue(s1.strip().endswith(b("-----END EC PRIVATE KEY-----")))
|
||||
priv2 = SigningKey.from_pem(s1)
|
||||
self.assertTruePrivkeysEqual(priv1, priv2)
|
||||
|
||||
s1 = priv1.to_der()
|
||||
self.assertEqual(type(s1), binary_type)
|
||||
priv2 = SigningKey.from_der(s1)
|
||||
self.assertTruePrivkeysEqual(priv1, priv2)
|
||||
|
||||
def assertTruePubkeysEqual(self, pub1, pub2):
|
||||
self.assertEqual(pub1.pubkey.point, pub2.pubkey.point)
|
||||
self.assertEqual(pub1.pubkey.generator, pub2.pubkey.generator)
|
||||
self.assertEqual(pub1.curve, pub2.curve)
|
||||
|
||||
def test_pubkey_strings(self):
|
||||
priv1 = SigningKey.generate()
|
||||
pub1 = priv1.get_verifying_key()
|
||||
s1 = pub1.to_string()
|
||||
self.assertEqual(type(s1), binary_type)
|
||||
self.assertEqual(len(s1), NIST192p.verifying_key_length)
|
||||
pub2 = VerifyingKey.from_string(s1)
|
||||
self.assertTruePubkeysEqual(pub1, pub2)
|
||||
|
||||
priv1 = SigningKey.generate(curve=NIST256p)
|
||||
pub1 = priv1.get_verifying_key()
|
||||
s1 = pub1.to_string()
|
||||
self.assertEqual(type(s1), binary_type)
|
||||
self.assertEqual(len(s1), NIST256p.verifying_key_length)
|
||||
pub2 = VerifyingKey.from_string(s1, curve=NIST256p)
|
||||
self.assertTruePubkeysEqual(pub1, pub2)
|
||||
|
||||
pub1_der = pub1.to_der()
|
||||
self.assertEqual(type(pub1_der), binary_type)
|
||||
pub2 = VerifyingKey.from_der(pub1_der)
|
||||
self.assertTruePubkeysEqual(pub1, pub2)
|
||||
|
||||
self.assertRaises(der.UnexpectedDER,
|
||||
VerifyingKey.from_der, pub1_der+b("junk"))
|
||||
badpub = VerifyingKey.from_der(pub1_der)
|
||||
class FakeGenerator:
|
||||
def order(self): return 123456789
|
||||
badcurve = Curve("unknown", None, None, FakeGenerator(), (1,2,3,4,5,6))
|
||||
badpub.curve = badcurve
|
||||
badder = badpub.to_der()
|
||||
self.assertRaises(UnknownCurveError, VerifyingKey.from_der, badder)
|
||||
|
||||
pem = pub1.to_pem()
|
||||
self.assertEqual(type(pem), binary_type)
|
||||
self.assertTrue(pem.startswith(b("-----BEGIN PUBLIC KEY-----")), pem)
|
||||
self.assertTrue(pem.strip().endswith(b("-----END PUBLIC KEY-----")), pem)
|
||||
pub2 = VerifyingKey.from_pem(pem)
|
||||
self.assertTruePubkeysEqual(pub1, pub2)
|
||||
|
||||
def test_signature_strings(self):
|
||||
priv1 = SigningKey.generate()
|
||||
pub1 = priv1.get_verifying_key()
|
||||
data = b("data")
|
||||
|
||||
sig = priv1.sign(data)
|
||||
self.assertEqual(type(sig), binary_type)
|
||||
self.assertEqual(len(sig), NIST192p.signature_length)
|
||||
self.assertTrue(pub1.verify(sig, data))
|
||||
|
||||
sig = priv1.sign(data, sigencode=sigencode_strings)
|
||||
self.assertEqual(type(sig), tuple)
|
||||
self.assertEqual(len(sig), 2)
|
||||
self.assertEqual(type(sig[0]), binary_type)
|
||||
self.assertEqual(type(sig[1]), binary_type)
|
||||
self.assertEqual(len(sig[0]), NIST192p.baselen)
|
||||
self.assertEqual(len(sig[1]), NIST192p.baselen)
|
||||
self.assertTrue(pub1.verify(sig, data, sigdecode=sigdecode_strings))
|
||||
|
||||
sig_der = priv1.sign(data, sigencode=sigencode_der)
|
||||
self.assertEqual(type(sig_der), binary_type)
|
||||
self.assertTrue(pub1.verify(sig_der, data, sigdecode=sigdecode_der))
|
||||
|
||||
def test_hashfunc(self):
|
||||
sk = SigningKey.generate(curve=NIST256p, hashfunc=sha256)
|
||||
data = b("security level is 128 bits")
|
||||
sig = sk.sign(data)
|
||||
vk = VerifyingKey.from_string(sk.get_verifying_key().to_string(),
|
||||
curve=NIST256p, hashfunc=sha256)
|
||||
self.assertTrue(vk.verify(sig, data))
|
||||
|
||||
sk2 = SigningKey.generate(curve=NIST256p)
|
||||
sig2 = sk2.sign(data, hashfunc=sha256)
|
||||
vk2 = VerifyingKey.from_string(sk2.get_verifying_key().to_string(),
|
||||
curve=NIST256p, hashfunc=sha256)
|
||||
self.assertTrue(vk2.verify(sig2, data))
|
||||
|
||||
vk3 = VerifyingKey.from_string(sk.get_verifying_key().to_string(),
|
||||
curve=NIST256p)
|
||||
self.assertTrue(vk3.verify(sig, data, hashfunc=sha256))
|
||||
|
||||
|
||||
class OpenSSL(unittest.TestCase):
    # test interoperability with OpenSSL tools. Note that openssl's ECDSA
    # sign/verify arguments changed between 0.9.8 and 1.0.0: the early
    # versions require "-ecdsa-with-SHA1", the later versions want just
    # "-SHA1" (or to leave out that argument entirely, which means the
    # signature will use some default digest algorithm, probably determined
    # by the key, probably always SHA1).
    #
    # openssl ecparam -name secp224r1 -genkey -out privkey.pem
    # openssl ec -in privkey.pem -text -noout  # get the priv/pub keys
    # openssl dgst -ecdsa-with-SHA1 -sign privkey.pem -out data.sig data.txt
    # openssl asn1parse -in data.sig -inform DER
    #  data.sig is 64 bytes, probably 56b plus ASN1 overhead
    # openssl dgst -ecdsa-with-SHA1 -prverify privkey.pem -signature data.sig data.txt ; echo $?
    # openssl ec -in privkey.pem -pubout -out pubkey.pem
    # openssl ec -in privkey.pem -pubout -outform DER -out pubkey.der

    def get_openssl_messagedigest_arg(self):
        v = run_openssl("version")
        # e.g. "OpenSSL 1.0.0 29 Mar 2010", or "OpenSSL 1.0.0a 1 Jun 2010",
        # or "OpenSSL 0.9.8o 01 Jun 2010"
        vs = v.split()[1].split(".")
        if vs >= ["1", "0", "0"]:
            return "-SHA1"
        else:
            return "-ecdsa-with-SHA1"

    # sk: 1:OpenSSL->python  2:python->OpenSSL
    # vk: 3:OpenSSL->python  4:python->OpenSSL
    # sig: 5:OpenSSL->python  6:python->OpenSSL

    def test_from_openssl_nist192p(self):
        return self.do_test_from_openssl(NIST192p)
    def test_from_openssl_nist224p(self):
        return self.do_test_from_openssl(NIST224p)
    def test_from_openssl_nist256p(self):
        return self.do_test_from_openssl(NIST256p)
    def test_from_openssl_nist384p(self):
        return self.do_test_from_openssl(NIST384p)
    def test_from_openssl_nist521p(self):
        return self.do_test_from_openssl(NIST521p)
    def test_from_openssl_secp256k1(self):
        return self.do_test_from_openssl(SECP256k1)

    def do_test_from_openssl(self, curve):
        curvename = curve.openssl_name
        assert curvename
        # OpenSSL: create sk, vk, sign.
        # Python: read vk(3), checksig(5), read sk(1), sign, check
        mdarg = self.get_openssl_messagedigest_arg()
        if os.path.isdir("t"):
            shutil.rmtree("t")
        os.mkdir("t")
        run_openssl("ecparam -name %s -genkey -out t/privkey.pem" % curvename)
        run_openssl("ec -in t/privkey.pem -pubout -out t/pubkey.pem")
        data = b("data")
        with open("t/data.txt", "wb") as e:
            e.write(data)
        run_openssl("dgst %s -sign t/privkey.pem -out t/data.sig t/data.txt" % mdarg)
        run_openssl("dgst %s -verify t/pubkey.pem -signature t/data.sig t/data.txt" % mdarg)
        with open("t/pubkey.pem", "rb") as e:
            pubkey_pem = e.read()
        vk = VerifyingKey.from_pem(pubkey_pem)  # 3
        with open("t/data.sig", "rb") as e:
            sig_der = e.read()
        self.assertTrue(vk.verify(sig_der, data,  # 5
                                  hashfunc=sha1, sigdecode=sigdecode_der))

        with open("t/privkey.pem") as e:
            fp = e.read()
        sk = SigningKey.from_pem(fp)  # 1
        sig = sk.sign(data)
        self.assertTrue(vk.verify(sig, data))

    def test_to_openssl_nist192p(self):
        self.do_test_to_openssl(NIST192p)
    def test_to_openssl_nist224p(self):
        self.do_test_to_openssl(NIST224p)
    def test_to_openssl_nist256p(self):
        self.do_test_to_openssl(NIST256p)
    def test_to_openssl_nist384p(self):
        self.do_test_to_openssl(NIST384p)
    def test_to_openssl_nist521p(self):
        self.do_test_to_openssl(NIST521p)
    def test_to_openssl_secp256k1(self):
        self.do_test_to_openssl(SECP256k1)

    def do_test_to_openssl(self, curve):
        curvename = curve.openssl_name
        assert curvename
        # Python: create sk, vk, sign.
        # OpenSSL: read vk(4), checksig(6), read sk(2), sign, check
        mdarg = self.get_openssl_messagedigest_arg()
        if os.path.isdir("t"):
            shutil.rmtree("t")
        os.mkdir("t")
        sk = SigningKey.generate(curve=curve)
        vk = sk.get_verifying_key()
        data = b("data")
        with open("t/pubkey.der", "wb") as e:
            e.write(vk.to_der())  # 4
        with open("t/pubkey.pem", "wb") as e:
            e.write(vk.to_pem())  # 4
        sig_der = sk.sign(data, hashfunc=sha1, sigencode=sigencode_der)

        with open("t/data.sig", "wb") as e:
            e.write(sig_der)  # 6
        with open("t/data.txt", "wb") as e:
            e.write(data)
        with open("t/baddata.txt", "wb") as e:
            e.write(data+b("corrupt"))

        self.assertRaises(SubprocessError, run_openssl,
                          "dgst %s -verify t/pubkey.der -keyform DER -signature t/data.sig t/baddata.txt" % mdarg)
        run_openssl("dgst %s -verify t/pubkey.der -keyform DER -signature t/data.sig t/data.txt" % mdarg)

        with open("t/privkey.pem", "wb") as e:
            e.write(sk.to_pem())  # 2
        run_openssl("dgst %s -sign t/privkey.pem -out t/data.sig2 t/data.txt" % mdarg)
        run_openssl("dgst %s -verify t/pubkey.pem -signature t/data.sig2 t/data.txt" % mdarg)

class DER(unittest.TestCase):
    def test_oids(self):
        oid_ecPublicKey = der.encode_oid(1, 2, 840, 10045, 2, 1)
        self.assertEqual(hexlify(oid_ecPublicKey), b("06072a8648ce3d0201"))
        self.assertEqual(hexlify(NIST224p.encoded_oid), b("06052b81040021"))
        self.assertEqual(hexlify(NIST256p.encoded_oid),
                         b("06082a8648ce3d030107"))
        x = oid_ecPublicKey + b("more")
        x1, rest = der.remove_object(x)
        self.assertEqual(x1, (1, 2, 840, 10045, 2, 1))
        self.assertEqual(rest, b("more"))

    def test_integer(self):
        self.assertEqual(der.encode_integer(0), b("\x02\x01\x00"))
        self.assertEqual(der.encode_integer(1), b("\x02\x01\x01"))
        self.assertEqual(der.encode_integer(127), b("\x02\x01\x7f"))
        self.assertEqual(der.encode_integer(128), b("\x02\x02\x00\x80"))
        self.assertEqual(der.encode_integer(256), b("\x02\x02\x01\x00"))
        #self.assertEqual(der.encode_integer(-1), b("\x02\x01\xff"))

        def s(n): return der.remove_integer(der.encode_integer(n) + b("junk"))
        self.assertEqual(s(0), (0, b("junk")))
        self.assertEqual(s(1), (1, b("junk")))
        self.assertEqual(s(127), (127, b("junk")))
        self.assertEqual(s(128), (128, b("junk")))
        self.assertEqual(s(256), (256, b("junk")))
        self.assertEqual(s(1234567890123456789012345678901234567890),
                         (1234567890123456789012345678901234567890, b("junk")))

    def test_number(self):
        self.assertEqual(der.encode_number(0), b("\x00"))
        self.assertEqual(der.encode_number(127), b("\x7f"))
        self.assertEqual(der.encode_number(128), b("\x81\x00"))
        self.assertEqual(der.encode_number(3*128+7), b("\x83\x07"))
        #self.assertEqual(der.read_number("\x81\x9b"+"more"), (155, 2))
        #self.assertEqual(der.encode_number(155), b("\x81\x9b"))
        for n in (0, 1, 2, 127, 128, 3*128+7, 840, 10045): #, 155):
            x = der.encode_number(n) + b("more")
            n1, llen = der.read_number(x)
            self.assertEqual(n1, n)
            self.assertEqual(x[llen:], b("more"))

    def test_length(self):
        self.assertEqual(der.encode_length(0), b("\x00"))
        self.assertEqual(der.encode_length(127), b("\x7f"))
        self.assertEqual(der.encode_length(128), b("\x81\x80"))
        self.assertEqual(der.encode_length(255), b("\x81\xff"))
        self.assertEqual(der.encode_length(256), b("\x82\x01\x00"))
        self.assertEqual(der.encode_length(3*256+7), b("\x82\x03\x07"))
        self.assertEqual(der.read_length(b("\x81\x9b")+b("more")), (155, 2))
        self.assertEqual(der.encode_length(155), b("\x81\x9b"))
        for n in (0, 1, 2, 127, 128, 255, 256, 3*256+7, 155):
            x = der.encode_length(n) + b("more")
            n1, llen = der.read_length(x)
            self.assertEqual(n1, n)
            self.assertEqual(x[llen:], b("more"))

    def test_sequence(self):
        x = der.encode_sequence(b("ABC"), b("DEF")) + b("GHI")
        self.assertEqual(x, b("\x30\x06ABCDEFGHI"))
        x1, rest = der.remove_sequence(x)
        self.assertEqual(x1, b("ABCDEF"))
        self.assertEqual(rest, b("GHI"))

    def test_constructed(self):
        x = der.encode_constructed(0, NIST224p.encoded_oid)
        self.assertEqual(hexlify(x), b("a007") + b("06052b81040021"))
        x = der.encode_constructed(1, unhexlify(b("0102030a0b0c")))
        self.assertEqual(hexlify(x), b("a106") + b("0102030a0b0c"))

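The DER vectors above all follow the definite-length encoding rules: a tag byte, a length (short form below 128; long form `0x80|k` followed by k big-endian length bytes), then the contents. A standalone sketch of the two primitives these tests exercise, reproducing the same byte strings (hypothetical helper names, not the library's `der` module):

```python
def encode_length(n):
    # short form: a single byte for lengths below 128
    if n < 0x80:
        return bytes([n])
    # long form: 0x80|k, then k big-endian length bytes
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def encode_integer(n):
    assert n >= 0  # negative INTEGERs would need two's-complement handling
    body = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
    if body[0] & 0x80:
        # prepend 0x00 so the high bit doesn't make the value negative
        body = b"\x00" + body
    return b"\x02" + encode_length(len(body)) + body

assert encode_integer(0) == b"\x02\x01\x00"
assert encode_integer(127) == b"\x02\x01\x7f"
assert encode_integer(128) == b"\x02\x02\x00\x80"
assert encode_length(155) == b"\x81\x9b"
assert encode_length(256) == b"\x82\x01\x00"
```

The `0x00` prefix in `encode_integer` is why 128 encodes as two content bytes in the test vector while 127 needs only one.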
class Util(unittest.TestCase):
    def test_trytryagain(self):
        tta = util.randrange_from_seed__trytryagain
        for i in range(1000):
            seed = "seed-%d" % i
            for order in (2**8-2, 2**8-1, 2**8, 2**8+1, 2**8+2,
                          2**16-1, 2**16+1):
                n = tta(seed, order)
                self.assertTrue(1 <= n < order, (1, n, order))
        # this trytryagain *does* provide long-term stability
        self.assertEqual(("%x"%(tta("seed", NIST224p.order))).encode(),
                         b("6fa59d73bf0446ae8743cf748fc5ac11d5585a90356417e97155c3bc"))

    def test_randrange(self):
        # util.randrange does not provide long-term stability: we might
        # change the algorithm in the future.
        for i in range(1000):
            entropy = util.PRNG("seed-%d" % i)
            for order in (2**8-2, 2**8-1, 2**8,
                          2**16-1, 2**16+1,
                          ):
                # that oddball 2**16+1 takes half our runtime
                n = util.randrange(order, entropy=entropy)
                self.assertTrue(1 <= n < order, (1, n, order))

    def OFF_test_prove_uniformity(self):
        order = 2**8-2
        counts = dict([(i, 0) for i in range(1, order)])
        assert 0 not in counts
        assert order not in counts
        for i in range(1000000):
            seed = "seed-%d" % i
            n = util.randrange_from_seed__trytryagain(seed, order)
            counts[n] += 1
        # this technique should use the full range
        self.assertTrue(counts[order-1])
        for i in range(1, order):
            print_("%3d: %s" % (i, "*"*(counts[i]//100)))

class RFC6979(unittest.TestCase):
    # https://tools.ietf.org/html/rfc6979#appendix-A.1
    def _do(self, generator, secexp, hsh, hash_func, expected):
        actual = rfc6979.generate_k(generator.order(), secexp, hash_func, hsh)
        self.assertEqual(expected, actual)

    def test_SECP256k1(self):
        '''RFC doesn't contain test vectors for SECP256k1 used in bitcoin.
        This vector has been computed by Golang reference implementation instead.'''
        self._do(
            generator=SECP256k1.generator,
            secexp=int("9d0219792467d7d37b4d43298a7d0c05", 16),
            hsh=sha256(b("sample")).digest(),
            hash_func=sha256,
            expected=int("8fa1f95d514760e498f28957b824ee6ec39ed64826ff4fecc2b5739ec45b91cd", 16))

    def test_SECP256k1_2(self):
        self._do(
            generator=SECP256k1.generator,
            secexp=int("cca9fbcc1b41e5a95d369eaa6ddcff73b61a4efaa279cfc6567e8daa39cbaf50", 16),
            hsh=sha256(b("sample")).digest(),
            hash_func=sha256,
            expected=int("2df40ca70e639d89528a6b670d9d48d9165fdc0febc0974056bdce192b8e16a3", 16))

    def test_SECP256k1_3(self):
        self._do(
            generator=SECP256k1.generator,
            secexp=0x1,
            hsh=sha256(b("Satoshi Nakamoto")).digest(),
            hash_func=sha256,
            expected=0x8F8A276C19F4149656B280621E358CCE24F5F52542772691EE69063B74F15D15)

    def test_SECP256k1_4(self):
        self._do(
            generator=SECP256k1.generator,
            secexp=0x1,
            hsh=sha256(b("All those moments will be lost in time, like tears in rain. Time to die...")).digest(),
            hash_func=sha256,
            expected=0x38AA22D72376B4DBC472E06C3BA403EE0A394DA63FC58D88686C611ABA98D6B3)

    def test_SECP256k1_5(self):
        self._do(
            generator=SECP256k1.generator,
            secexp=0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140,
            hsh=sha256(b("Satoshi Nakamoto")).digest(),
            hash_func=sha256,
            expected=0x33A19B60E25FB6F4435AF53A3D42D493644827367E6453928554F43E49AA6F90)

    def test_SECP256k1_6(self):
        self._do(
            generator=SECP256k1.generator,
            secexp=0xf8b8af8ce3c7cca5e300d33939540c10d45ce001b8f252bfbc57ba0342904181,
            hsh=sha256(b("Alan Turing")).digest(),
            hash_func=sha256,
            expected=0x525A82B70E67874398067543FD84C83D30C175FDC45FDEEE082FE13B1D7CFDF1)

    def test_1(self):
        # Basic example of the RFC, it also tests 'try-try-again' from Step H of rfc6979
        self._do(
            generator=Point(None, 0, 0, int("4000000000000000000020108A2E0CC0D99F8A5EF", 16)),
            secexp=int("09A4D6792295A7F730FC3F2B49CBC0F62E862272F", 16),
            hsh=unhexlify(b("AF2BDBE1AA9B6EC1E2ADE1D694F41FC71A831D0268E9891562113D8A62ADD1BF")),
            hash_func=sha256,
            expected=int("23AF4074C90A02B3FE61D286D5C87F425E6BDD81B", 16))

    def test_2(self):
        self._do(
            generator=NIST192p.generator,
            secexp=int("6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4", 16),
            hsh=sha1(b("sample")).digest(),
            hash_func=sha1,
            expected=int("37D7CA00D2C7B0E5E412AC03BD44BA837FDD5B28CD3B0021", 16))

    def test_3(self):
        self._do(
            generator=NIST192p.generator,
            secexp=int("6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4", 16),
            hsh=sha256(b("sample")).digest(),
            hash_func=sha256,
            expected=int("32B1B6D7D42A05CB449065727A84804FB1A3E34D8F261496", 16))

    def test_4(self):
        self._do(
            generator=NIST192p.generator,
            secexp=int("6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4", 16),
            hsh=sha512(b("sample")).digest(),
            hash_func=sha512,
            expected=int("A2AC7AB055E4F20692D49209544C203A7D1F2C0BFBC75DB1", 16))

    def test_5(self):
        self._do(
            generator=NIST192p.generator,
            secexp=int("6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4", 16),
            hsh=sha1(b("test")).digest(),
            hash_func=sha1,
            expected=int("D9CF9C3D3297D3260773A1DA7418DB5537AB8DD93DE7FA25", 16))

    def test_6(self):
        self._do(
            generator=NIST192p.generator,
            secexp=int("6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4", 16),
            hsh=sha256(b("test")).digest(),
            hash_func=sha256,
            expected=int("5C4CE89CF56D9E7C77C8585339B006B97B5F0680B4306C6C", 16))

    def test_7(self):
        self._do(
            generator=NIST192p.generator,
            secexp=int("6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4", 16),
            hsh=sha512(b("test")).digest(),
            hash_func=sha512,
            expected=int("0758753A5254759C7CFBAD2E2D9B0792EEE44136C9480527", 16))

    def test_8(self):
        self._do(
            generator=NIST521p.generator,
            secexp=int("0FAD06DAA62BA3B25D2FB40133DA757205DE67F5BB0018FEE8C86E1B68C7E75CAA896EB32F1F47C70855836A6D16FCC1466F6D8FBEC67DB89EC0C08B0E996B83538", 16),
            hsh=sha1(b("sample")).digest(),
            hash_func=sha1,
            expected=int("089C071B419E1C2820962321787258469511958E80582E95D8378E0C2CCDB3CB42BEDE42F50E3FA3C71F5A76724281D31D9C89F0F91FC1BE4918DB1C03A5838D0F9", 16))

    def test_9(self):
        self._do(
            generator=NIST521p.generator,
            secexp=int("0FAD06DAA62BA3B25D2FB40133DA757205DE67F5BB0018FEE8C86E1B68C7E75CAA896EB32F1F47C70855836A6D16FCC1466F6D8FBEC67DB89EC0C08B0E996B83538", 16),
            hsh=sha256(b("sample")).digest(),
            hash_func=sha256,
            expected=int("0EDF38AFCAAECAB4383358B34D67C9F2216C8382AAEA44A3DAD5FDC9C32575761793FEF24EB0FC276DFC4F6E3EC476752F043CF01415387470BCBD8678ED2C7E1A0", 16))

    def test_10(self):
        self._do(
            generator=NIST521p.generator,
            secexp=int("0FAD06DAA62BA3B25D2FB40133DA757205DE67F5BB0018FEE8C86E1B68C7E75CAA896EB32F1F47C70855836A6D16FCC1466F6D8FBEC67DB89EC0C08B0E996B83538", 16),
            hsh=sha512(b("test")).digest(),
            hash_func=sha512,
            expected=int("16200813020EC986863BEDFC1B121F605C1215645018AEA1A7B215A564DE9EB1B38A67AA1128B80CE391C4FB71187654AAA3431027BFC7F395766CA988C964DC56D", 16))

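The k values asserted in these tests come from the HMAC-DRBG construction of RFC 6979 (Section 3.2): K and V are iterated with HMAC over the private key and the message digest until a candidate in [1, q) appears. A self-contained Python 3 sketch of that algorithm, checked against the NIST P-192/SHA-256 "sample" vector quoted in test_3 above (the order constant is the standard P-192 group order; this is an independent re-implementation, not the library's rfc6979 module):

```python
import hmac
from hashlib import sha256

def bits2int(data, qlen):
    # leftmost qlen bits of data, interpreted as a big-endian integer
    i = int.from_bytes(data, "big")
    blen = len(data) * 8
    return i >> (blen - qlen) if blen > qlen else i

def generate_k_rfc6979(q, x, hash_func, h1):
    qlen = q.bit_length()
    rolen = (qlen + 7) // 8
    hlen = hash_func().digest_size
    int2octets = lambda v: v.to_bytes(rolen, "big")
    bits2octets = lambda d: int2octets(bits2int(d, qlen) % q)
    V = b"\x01" * hlen
    K = b"\x00" * hlen
    K = hmac.new(K, V + b"\x00" + int2octets(x) + bits2octets(h1), hash_func).digest()
    V = hmac.new(K, V, hash_func).digest()
    K = hmac.new(K, V + b"\x01" + int2octets(x) + bits2octets(h1), hash_func).digest()
    V = hmac.new(K, V, hash_func).digest()
    while True:
        T = b""
        while len(T) * 8 < qlen:
            V = hmac.new(K, V, hash_func).digest()
            T += V
        k = bits2int(T, qlen)
        if 1 <= k < q:
            return k
        # Step H retry ('try-try-again'), exercised by test_1 above
        K = hmac.new(K, V + b"\x00", hash_func).digest()
        V = hmac.new(K, V, hash_func).digest()

q = int("FFFFFFFFFFFFFFFFFFFFFFFF99DEF836146BC9B1B4D22831", 16)  # P-192 order
x = int("6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4", 16)
k = generate_k_rfc6979(q, x, sha256, sha256(b"sample").digest())
assert k == int("32B1B6D7D42A05CB449065727A84804FB1A3E34D8F261496", 16)
```

Because k is a deterministic function of the key and message, repeated signatures over the same data are identical, which is exactly what the deterministic-signature test at the top of this file asserts.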
def __main__():
    unittest.main()
if __name__ == "__main__":
    __main__()

@@ -1,247 +0,0 @@
from __future__ import division

import os
import math
import binascii
from hashlib import sha256
from . import der
from .curves import orderlen
from .six import PY3, int2byte, b, next

# RFC5480:
#   The "unrestricted" algorithm identifier is:
#     id-ecPublicKey OBJECT IDENTIFIER ::= {
#       iso(1) member-body(2) us(840) ansi-X9-62(10045) keyType(2) 1 }

oid_ecPublicKey = (1, 2, 840, 10045, 2, 1)
encoded_oid_ecPublicKey = der.encode_oid(*oid_ecPublicKey)

def randrange(order, entropy=None):
    """Return a random integer k such that 1 <= k < order, uniformly
    distributed across that range. For simplicity, this only behaves well if
    'order' is fairly close (but below) a power of 256. The try-try-again
    algorithm we use takes longer and longer time (on average) to complete as
    'order' falls, rising to a maximum of avg=512 loops for the worst-case
    (256**k)+1 . All of the standard curves behave well. There is a cutoff at
    10k loops (which raises RuntimeError) to prevent an infinite loop when
    something is really broken like the entropy function not working.

    Note that this function is not declared to be forwards-compatible: we may
    change the behavior in future releases. The entropy= argument (which
    should get a callable that behaves like os.urandom) can be used to
    achieve stability within a given release (for repeatable unit tests), but
    should not be used as a long-term-compatible key generation algorithm.
    """
    # we could handle arbitrary orders (even 256**k+1) better if we created
    # candidates bit-wise instead of byte-wise, which would reduce the
    # worst-case behavior to avg=2 loops, but that would be more complex. The
    # change would be to round the order up to a power of 256, subtract one
    # (to get 0xffff..), use that to get a byte-long mask for the top byte,
    # generate the len-1 entropy bytes, generate one extra byte and mask off
    # the top bits, then combine it with the rest. Requires jumping back and
    # forth between strings and integers a lot.
    if entropy is None:
        entropy = os.urandom
    assert order > 1
    bytes = orderlen(order)
    dont_try_forever = 10000  # gives about 2**-60 failures for worst case
    while dont_try_forever > 0:
        dont_try_forever -= 1
        candidate = string_to_number(entropy(bytes)) + 1
        if 1 <= candidate < order:
            return candidate
        continue
    raise RuntimeError("randrange() tried hard but gave up, either something"
                       " is very wrong or you got realllly unlucky. Order was"
                       " %x" % order)

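The try-try-again loop above is plain rejection sampling: draw order-length byte strings until one falls inside [1, order). A standalone sketch with the same shape (byte length computed from `bit_length` here, standing in for the module's `orderlen`; helper name is hypothetical):

```python
import os

def randrange_sketch(order, entropy=os.urandom, max_tries=10000):
    # rejection sampling: draw len(order)-byte candidates, add 1,
    # and retry until one lands in [1, order)
    nbytes = (order.bit_length() + 7) // 8
    for _ in range(max_tries):
        candidate = int.from_bytes(entropy(nbytes), "big") + 1
        if 1 <= candidate < order:
            return candidate
    # mirrors the library's cutoff against a broken entropy source
    raise RuntimeError("gave up after %d tries" % max_tries)

k = randrange_sketch(2**16 - 1)
assert 1 <= k < 2**16 - 1
```

For an order just above a power of 256 (the 256**k + 1 worst case the docstring mentions) almost every candidate is rejected, which is where the avg=512 loop figure comes from.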
class PRNG:
    # this returns a callable which, when invoked with an integer N, will
    # return N pseudorandom bytes. Note: this is a short-term PRNG, meant
    # primarily for the needs of randrange_from_seed__trytryagain(), which
    # only needs to run it a few times per seed. It does not provide
    # protection against state compromise (forward security).
    def __init__(self, seed):
        self.generator = self.block_generator(seed)

    def __call__(self, numbytes):
        a = [next(self.generator) for i in range(numbytes)]

        if PY3:
            return bytes(a)
        else:
            return "".join(a)

    def block_generator(self, seed):
        counter = 0
        while True:
            for byte in sha256(("prng-%d-%s" % (counter, seed)).encode()).digest():
                yield byte
            counter += 1

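The PRNG above hashes "prng-&lt;counter&gt;-&lt;seed&gt;" with SHA-256 and streams the digest bytes in counter order, so equal seeds always yield equal byte streams. A compact standalone equivalent, written as a closure instead of a class (a sketch, not the shipped implementation):

```python
from hashlib import sha256

def prng(seed):
    # returns a callable producing deterministic pseudorandom bytes:
    # successive SHA-256 digests of "prng-<counter>-<seed>", consumed in order
    state = {"counter": 0, "buf": b""}
    def read(numbytes):
        while len(state["buf"]) < numbytes:
            block = sha256(("prng-%d-%s" % (state["counter"], seed)).encode()).digest()
            state["buf"] += block
            state["counter"] += 1
        out, state["buf"] = state["buf"][:numbytes], state["buf"][numbytes:]
        return out
    return read

stream1 = prng("seed")(40)
stream2 = prng("seed")(40)
assert stream1 == stream2 and len(stream1) == 40  # same seed, same stream
assert prng("seed")(40) != prng("other-seed")(40)
```

This determinism is what lets randrange_from_seed__trytryagain() below derive the same secret exponent from the same seed on every run.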
def randrange_from_seed__overshoot_modulo(seed, order):
    # hash the data, then turn the digest into a number in [1,order).
    #
    # We use David-Sarah Hopwood's suggestion: turn it into a number that's
    # sufficiently larger than the group order, then modulo it down to fit.
    # This should give adequate (but not perfect) uniformity, and simple
    # code. There are other choices: try-try-again is the main one.
    base = PRNG(seed)(2*orderlen(order))
    number = (int(binascii.hexlify(base), 16) % (order-1)) + 1
    assert 1 <= number < order, (1, number, order)
    return number

def lsb_of_ones(numbits):
    return (1 << numbits) - 1
def bits_and_bytes(order):
    bits = int(math.log(order-1, 2)+1)
    bytes = bits // 8
    extrabits = bits % 8
    return bits, bytes, extrabits

# the following randrange_from_seed__METHOD() functions take an
# arbitrarily-sized secret seed and turn it into a number that obeys the same
# range limits as randrange() above. They are meant for deriving consistent
# signing keys from a secret rather than generating them randomly, for
# example a protocol in which three signing keys are derived from a master
# secret. You should use a uniformly-distributed unguessable seed with about
# curve.baselen bytes of entropy. To use one, do this:
#   seed = os.urandom(curve.baselen)  # or other starting point
#   secexp = ecdsa.util.randrange_from_seed__trytryagain(seed, curve.order)
#   sk = SigningKey.from_secret_exponent(secexp, curve)

def randrange_from_seed__truncate_bytes(seed, order, hashmod=sha256):
    # hash the seed, then turn the digest into a number in [1,order), but
    # don't worry about trying to uniformly fill the range. This will lose,
    # on average, four bits of entropy.
    bits, bytes, extrabits = bits_and_bytes(order)
    if extrabits:
        bytes += 1
    base = hashmod(seed).digest()[:bytes]
    base = "\x00"*(bytes-len(base)) + base
    number = 1+int(binascii.hexlify(base), 16)
    assert 1 <= number < order
    return number

def randrange_from_seed__truncate_bits(seed, order, hashmod=sha256):
    # like randrange_from_seed__truncate_bytes, but only lose an average of
    # half a bit
    bits = int(math.log(order-1, 2)+1)
    maxbytes = (bits+7) // 8
    base = hashmod(seed).digest()[:maxbytes]
    base = "\x00"*(maxbytes-len(base)) + base
    topbits = 8*maxbytes - bits
    if topbits:
        base = int2byte(ord(base[0]) & lsb_of_ones(topbits)) + base[1:]
    number = 1+int(binascii.hexlify(base), 16)
    assert 1 <= number < order
    return number

def randrange_from_seed__trytryagain(seed, order):
    # figure out exactly how many bits we need (rounded up to the nearest
    # bit), so we can reduce the chance of looping to less than 0.5 . This is
    # specified to feed from a byte-oriented PRNG, and discards the
    # high-order bits of the first byte as necessary to get the right number
    # of bits. The average number of loops will range from 1.0 (when
    # order=2**k-1) to 2.0 (when order=2**k+1).
    assert order > 1
    bits, bytes, extrabits = bits_and_bytes(order)
    generate = PRNG(seed)
    while True:
        extrabyte = b("")
        if extrabits:
            extrabyte = int2byte(ord(generate(1)) & lsb_of_ones(extrabits))
        guess = string_to_number(extrabyte + generate(bytes)) + 1
        if 1 <= guess < order:
            return guess


def number_to_string(num, order):
    l = orderlen(order)
    fmt_str = "%0" + str(2*l) + "x"
    string = binascii.unhexlify((fmt_str % num).encode())
    assert len(string) == l, (len(string), l)
    return string

def number_to_string_crop(num, order):
    l = orderlen(order)
    fmt_str = "%0" + str(2*l) + "x"
    string = binascii.unhexlify((fmt_str % num).encode())
    return string[:l]

def string_to_number(string):
    return int(binascii.hexlify(string), 16)

def string_to_number_fixedlen(string, order):
    l = orderlen(order)
    assert len(string) == l, (len(string), l)
    return int(binascii.hexlify(string), 16)

# these methods are useful for the sigencode= argument to SK.sign() and the
# sigdecode= argument to VK.verify(), and control how the signature is packed
# or unpacked.

def sigencode_strings(r, s, order):
    r_str = number_to_string(r, order)
    s_str = number_to_string(s, order)
    return (r_str, s_str)

def sigencode_string(r, s, order):
    # for any given curve, the size of the signature numbers is
    # fixed, so just use simple concatenation
    r_str, s_str = sigencode_strings(r, s, order)
    return r_str + s_str

def sigencode_der(r, s, order):
    return der.encode_sequence(der.encode_integer(r), der.encode_integer(s))

# canonical versions of sigencode methods
|
||||
# these enforce low S values, by negating the value (modulo the order) if above order/2
|
||||
# see CECKey::Sign() https://github.com/bitcoin/bitcoin/blob/master/src/key.cpp#L214
|
||||
def sigencode_strings_canonize(r, s, order):
|
||||
if s > order / 2:
|
||||
s = order - s
|
||||
return sigencode_strings(r, s, order)
|
||||
|
||||
def sigencode_string_canonize(r, s, order):
|
||||
if s > order / 2:
|
||||
s = order - s
|
||||
return sigencode_string(r, s, order)
|
||||
|
||||
def sigencode_der_canonize(r, s, order):
|
||||
if s > order / 2:
|
||||
s = order - s
|
||||
return sigencode_der(r, s, order)
|
||||
|
||||
|
||||
def sigdecode_string(signature, order):
|
||||
l = orderlen(order)
|
||||
assert len(signature) == 2*l, (len(signature), 2*l)
|
||||
r = string_to_number_fixedlen(signature[:l], order)
|
||||
s = string_to_number_fixedlen(signature[l:], order)
|
||||
return r, s
|
||||
|
||||
def sigdecode_strings(rs_strings, order):
|
||||
(r_str, s_str) = rs_strings
|
||||
l = orderlen(order)
|
||||
assert len(r_str) == l, (len(r_str), l)
|
||||
assert len(s_str) == l, (len(s_str), l)
|
||||
r = string_to_number_fixedlen(r_str, order)
|
||||
s = string_to_number_fixedlen(s_str, order)
|
||||
return r, s
|
||||
|
||||
def sigdecode_der(sig_der, order):
|
||||
#return der.encode_sequence(der.encode_integer(r), der.encode_integer(s))
|
||||
rs_strings, empty = der.remove_sequence(sig_der)
|
||||
if empty != b(""):
|
||||
raise der.UnexpectedDER("trailing junk after DER sig: %s" %
|
||||
binascii.hexlify(empty))
|
||||
r, rest = der.remove_integer(rs_strings)
|
||||
s, empty = der.remove_integer(rest)
|
||||
if empty != b(""):
|
||||
raise der.UnexpectedDER("trailing junk after DER numbers: %s" %
|
||||
binascii.hexlify(empty))
|
||||
return r, s
|
||||
|
||||
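As a self-contained sketch of the low-S rule and the fixed-width r||s packing used by the canonize helpers above (the order constant below is secp256k1's group order; the helper names mirror the functions above but nothing here depends on the module, and `//` is used for the integer halving):

```python
import binascii

# secp256k1 group order, used purely for illustration
ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def orderlen(order):
    # byte length needed to hold any value below `order`
    return (1 + len("%x" % order)) // 2

def number_to_string(num, order):
    # fixed-width big-endian encoding, as in the function above
    fmt = "%0" + str(2 * orderlen(order)) + "x"
    return binascii.unhexlify((fmt % num).encode())

def canonize(s, order):
    # enforce low S: negate modulo the order if above order/2
    return order - s if s > order // 2 else s

r = 0x1234
s = ORDER - 5                 # a deliberately "high" S
low_s = canonize(s, ORDER)    # flipped back down to 5
sig = number_to_string(r, ORDER) + number_to_string(low_s, ORDER)
assert low_s == 5
assert len(sig) == 2 * orderlen(ORDER)  # 64 bytes for a 256-bit order
```

The low-S form matters because (r, s) and (r, order - s) verify equally; fixing one form makes the encoding unique.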
@@ -1,105 +0,0 @@
import hashlib

b = 256
q = 2**255 - 19
l = 2**252 + 27742317777372353535851937790883648493

def H(m):
    return hashlib.sha512(m).digest()

def expmod(b,e,m):
    if e == 0: return 1
    t = expmod(b,e/2,m)**2 % m
    if e & 1: t = (t*b) % m
    return t

def inv(x):
    return expmod(x,q-2,q)

d = -121665 * inv(121666)
I = expmod(2,(q-1)/4,q)

def xrecover(y):
    xx = (y*y-1) * inv(d*y*y+1)
    x = expmod(xx,(q+3)/8,q)
    if (x*x - xx) % q != 0: x = (x*I) % q
    if x % 2 != 0: x = q-x
    return x

By = 4 * inv(5)
Bx = xrecover(By)
B = [Bx % q,By % q]

def edwards(P,Q):
    x1 = P[0]
    y1 = P[1]
    x2 = Q[0]
    y2 = Q[1]
    x3 = (x1*y2+x2*y1) * inv(1+d*x1*x2*y1*y2)
    y3 = (y1*y2+x1*x2) * inv(1-d*x1*x2*y1*y2)
    return [x3 % q,y3 % q]

def scalarmult(P,e):
    if e == 0: return [0,1]
    Q = scalarmult(P,e/2)
    Q = edwards(Q,Q)
    if e & 1: Q = edwards(Q,P)
    return Q

def encodeint(y):
    bits = [(y >> i) & 1 for i in range(b)]
    return ''.join([chr(sum([bits[i * 8 + j] << j for j in range(8)])) for i in range(b/8)])

def encodepoint(P):
    x = P[0]
    y = P[1]
    bits = [(y >> i) & 1 for i in range(b - 1)] + [x & 1]
    return ''.join([chr(sum([bits[i * 8 + j] << j for j in range(8)])) for i in range(b/8)])

def bit(h,i):
    return (ord(h[i/8]) >> (i%8)) & 1

def publickey(sk):
    h = H(sk)
    a = 2**(b-2) + sum(2**i * bit(h,i) for i in range(3,b-2))
    A = scalarmult(B,a)
    return encodepoint(A)

def Hint(m):
    h = H(m)
    return sum(2**i * bit(h,i) for i in range(2*b))

def signature(m,sk,pk):
    h = H(sk)
    a = 2**(b-2) + sum(2**i * bit(h,i) for i in range(3,b-2))
    r = Hint(''.join([h[i] for i in range(b/8,b/4)]) + m)
    R = scalarmult(B,r)
    S = (r + Hint(encodepoint(R) + pk + m) * a) % l
    return encodepoint(R) + encodeint(S)

def isoncurve(P):
    x = P[0]
    y = P[1]
    return (-x*x + y*y - 1 - d*x*x*y*y) % q == 0

def decodeint(s):
    return sum(2**i * bit(s,i) for i in range(0,b))

def decodepoint(s):
    y = sum(2**i * bit(s,i) for i in range(0,b-1))
    x = xrecover(y)
    if x & 1 != bit(s,b-1): x = q-x
    P = [x,y]
    if not isoncurve(P): raise Exception("decoding point that is not on curve")
    return P

def checkvalid(s,m,pk):
    if len(s) != b/4: raise Exception("signature length is wrong")
    if len(pk) != b/8: raise Exception("public-key length is wrong")
    R = decodepoint(s[0:b/8])
    A = decodepoint(pk)
    S = decodeint(s[b/8:b/4])
    h = Hint(encodepoint(R) + pk + m)
    if scalarmult(B,S) != edwards(R,scalarmult(A,h)):
        raise Exception("signature does not pass verification")
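The `expmod`/`inv` pair in the file above (which is Python 2 code: integer `/`, byte strings as `str`) is textbook Fermat-little-theorem inversion mod the prime q = 2**255 - 19. Under Python 3 the same computation can be sketched with the built-in three-argument `pow`; this is a standalone numeric check, not part of the original file:

```python
# Curve25519 field prime, as defined at the top of the file above
q = 2**255 - 19

def inv(x):
    # modular inverse via Fermat's little theorem: x**(q-2) mod q
    return pow(x, q - 2, q)

x = 121666
assert (x * inv(x)) % q == 1

# the curve constant d from the file, computed the same way
d = (-121665 * inv(121666)) % q
# d * 121666 is congruent to -121665 mod q by construction
assert (d * 121666 + 121665) % q == 0
```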
@@ -1,4 +0,0 @@
from .jsonpath import *
from .parser import parse

__version__ = '1.3.0'
@@ -1,510 +0,0 @@
from __future__ import unicode_literals, print_function, absolute_import, division, generators, nested_scopes
import logging
import six
from six.moves import xrange
from itertools import *

logger = logging.getLogger(__name__)

# Turn on/off the automatic creation of id attributes
# ... could be a kwarg pervasively but uses are rare and simple today
auto_id_field = None

class JSONPath(object):
    """
    The base class for JSONPath abstract syntax; those
    methods stubbed here are the interface to supported
    JSONPath semantics.
    """

    def find(self, data):
        """
        All `JSONPath` types support `find()`, which returns an iterable of `DatumInContext`s.
        They keep track of the path followed to the current location, so if the calling code
        has some opinion about that, it can be passed in here as a starting point.
        """
        raise NotImplementedError()

    def update(self, data, val):
        "Returns `data` with the specified path replaced by `val`"
        raise NotImplementedError()

    def child(self, child):
        """
        Equivalent to Child(self, next) but with some canonicalization
        """
        if isinstance(self, This) or isinstance(self, Root):
            return child
        elif isinstance(child, This):
            return self
        elif isinstance(child, Root):
            return child
        else:
            return Child(self, child)

    def make_datum(self, value):
        if isinstance(value, DatumInContext):
            return value
        else:
            return DatumInContext(value, path=Root(), context=None)

class DatumInContext(object):
    """
    Represents a datum along a path from a context.

    Essentially a zipper but with a structure represented by JSONPath,
    and where the context is more of a parent pointer than a proper
    representation of the context.

    For quick-and-dirty work, this proxies any non-special attributes
    to the underlying datum, but the actual datum can (and usually should)
    be retrieved via the `value` attribute.

    To place `datum` within another, use `datum.in_context(context=..., path=...)`
    which extends the path. If the datum already has a context, it places the entire
    context within that passed in, so an object can be built from the inside
    out.
    """
    @classmethod
    def wrap(cls, data):
        if isinstance(data, cls):
            return data
        else:
            return cls(data)

    def __init__(self, value, path=None, context=None):
        self.value = value
        self.path = path or This()
        self.context = None if context is None else DatumInContext.wrap(context)

    def in_context(self, context, path):
        context = DatumInContext.wrap(context)

        if self.context:
            return DatumInContext(value=self.value, path=self.path, context=context.in_context(path=path, context=context))
        else:
            return DatumInContext(value=self.value, path=path, context=context)

    @property
    def full_path(self):
        return self.path if self.context is None else self.context.full_path.child(self.path)

    @property
    def id_pseudopath(self):
        """
        Looks like a path, but with ids stuck in when available
        """
        try:
            pseudopath = Fields(str(self.value[auto_id_field]))
        except (TypeError, AttributeError, KeyError): # This may not be all the interesting exceptions
            pseudopath = self.path

        if self.context:
            return self.context.id_pseudopath.child(pseudopath)
        else:
            return pseudopath

    def __repr__(self):
        return '%s(value=%r, path=%r, context=%r)' % (self.__class__.__name__, self.value, self.path, self.context)

    def __eq__(self, other):
        return isinstance(other, DatumInContext) and other.value == self.value and other.path == self.path and self.context == other.context

class AutoIdForDatum(DatumInContext):
    """
    This behaves like a DatumInContext, but the value is
    always the path leading up to it, not including the "id",
    and with any "id" fields along the way replacing the prior
    segment of the path

    For example, it will make "foo.bar.id" return a datum
    that behaves like DatumInContext(value="foo.bar", path="foo.bar.id").

    This is disabled by default; it can be turned on by
    setting the `auto_id_field` global to a value other
    than `None`.
    """

    def __init__(self, datum, id_field=None):
        """
        Invariant is that datum.path is the path from context to datum. The auto id
        will either be the id in the datum (if present) or the id of the context
        followed by the path to the datum.

        The path to this datum is always the path to the context, the path to the
        datum, and then the auto id field.
        """
        self.datum = datum
        self.id_field = id_field or auto_id_field

    @property
    def value(self):
        return str(self.datum.id_pseudopath)

    @property
    def path(self):
        return self.id_field

    @property
    def context(self):
        return self.datum

    def __repr__(self):
        return '%s(%r)' % (self.__class__.__name__, self.datum)

    def in_context(self, context, path):
        return AutoIdForDatum(self.datum.in_context(context=context, path=path))

    def __eq__(self, other):
        return isinstance(other, AutoIdForDatum) and other.datum == self.datum and self.id_field == other.id_field


class Root(JSONPath):
    """
    The JSONPath referring to the "root" object. Concrete syntax is '$'.
    The root is the topmost datum without any context attached.
    """

    def find(self, data):
        if not isinstance(data, DatumInContext):
            return [DatumInContext(data, path=Root(), context=None)]
        else:
            if data.context is None:
                return [DatumInContext(data.value, context=None, path=Root())]
            else:
                return Root().find(data.context)

    def update(self, data, val):
        return val

    def __str__(self):
        return '$'

    def __repr__(self):
        return 'Root()'

    def __eq__(self, other):
        return isinstance(other, Root)

class This(JSONPath):
    """
    The JSONPath referring to the current datum. Concrete syntax is '@'.
    """

    def find(self, datum):
        return [DatumInContext.wrap(datum)]

    def update(self, data, val):
        return val

    def __str__(self):
        return '`this`'

    def __repr__(self):
        return 'This()'

    def __eq__(self, other):
        return isinstance(other, This)

class Child(JSONPath):
    """
    JSONPath that first matches the left, then the right.
    Concrete syntax is <left> '.' <right>
    """

    def __init__(self, left, right):
        self.left = left
        self.right = right

    def find(self, datum):
        """
        Extra special case: auto ids do not have children,
        so cut it off right now rather than auto id the auto id
        """

        return [submatch
                for subdata in self.left.find(datum)
                if not isinstance(subdata, AutoIdForDatum)
                for submatch in self.right.find(subdata)]

    def __eq__(self, other):
        return isinstance(other, Child) and self.left == other.left and self.right == other.right

    def __str__(self):
        return '%s.%s' % (self.left, self.right)

    def __repr__(self):
        return '%s(%r, %r)' % (self.__class__.__name__, self.left, self.right)

class Parent(JSONPath):
    """
    JSONPath that matches the parent node of the current match.
    Will crash if no such parent exists.
    Available via named operator `parent`.
    """

    def find(self, datum):
        datum = DatumInContext.wrap(datum)
        return [datum.context]

    def __eq__(self, other):
        return isinstance(other, Parent)

    def __str__(self):
        return '`parent`'

    def __repr__(self):
        return 'Parent()'


class Where(JSONPath):
    """
    JSONPath that first matches the left, and then
    filters for only those nodes that have
    a match on the right.

    WARNING: Subject to change. May want to have "contains"
    or some other better word for it.
    """

    def __init__(self, left, right):
        self.left = left
        self.right = right

    def find(self, data):
        return [subdata for subdata in self.left.find(data) if self.right.find(data)]

    def __str__(self):
        return '%s where %s' % (self.left, self.right)

    def __eq__(self, other):
        return isinstance(other, Where) and other.left == self.left and other.right == self.right

class Descendants(JSONPath):
    """
    JSONPath that matches first the left expression then any descendant
    of it which matches the right expression.
    """

    def __init__(self, left, right):
        self.left = left
        self.right = right

    def find(self, datum):
        # <left> .. <right> ==> <left> . (<right> | *..<right> | [*]..<right>)
        #
        # With a wonky caveat that since Slice() has funky coercions
        # we cannot just delegate to that equivalence or we'll hit an
        # infinite loop. So right here we implement the coercion-free version.

        # Get all left matches into a list
        left_matches = self.left.find(datum)
        if not isinstance(left_matches, list):
            left_matches = [left_matches]

        def match_recursively(datum):
            right_matches = self.right.find(datum)

            # Manually do the * or [*] to avoid coercion and recurse just the right-hand pattern
            if isinstance(datum.value, list):
                recursive_matches = [submatch
                                     for i in range(0, len(datum.value))
                                     for submatch in match_recursively(DatumInContext(datum.value[i], context=datum, path=Index(i)))]

            elif isinstance(datum.value, dict):
                recursive_matches = [submatch
                                     for field in datum.value.keys()
                                     for submatch in match_recursively(DatumInContext(datum.value[field], context=datum, path=Fields(field)))]

            else:
                recursive_matches = []

            return right_matches + list(recursive_matches)

        # TODO: repeatable iterator instead of list?
        return [submatch
                for left_match in left_matches
                for submatch in match_recursively(left_match)]

    def is_singular(self):
        return False

    def __str__(self):
        return '%s..%s' % (self.left, self.right)

    def __eq__(self, other):
        return isinstance(other, Descendants) and self.left == other.left and self.right == other.right

class Union(JSONPath):
    """
    JSONPath that returns the union of the results of each match.
    This is pretty shoddily implemented for now. The nicest semantics
    in case of mismatched bits (list vs atomic) is to put
    them all in a list, but I haven't done that yet.

    WARNING: Any appearance of this being the _concatenation_ is
    coincidence. It may even be a bug! (or laziness)
    """
    def __init__(self, left, right):
        self.left = left
        self.right = right

    def is_singular(self):
        return False

    def find(self, data):
        return self.left.find(data) + self.right.find(data)

class Intersect(JSONPath):
    """
    JSONPath for bits that match *both* patterns.

    This can be accomplished a couple of ways. The most
    efficient is to actually build the intersected
    AST as in building a state machine for matching the
    intersection of regular languages. The next
    idea is to build a filtered data and match against
    that.
    """
    def __init__(self, left, right):
        self.left = left
        self.right = right

    def is_singular(self):
        return False

    def find(self, data):
        raise NotImplementedError()

class Fields(JSONPath):
    """
    JSONPath referring to some field of the current object.
    Concrete syntax is comma-separated field names.

    WARNING: If '*' is any of the field names, then they will
    all be returned.
    """

    def __init__(self, *fields):
        self.fields = fields

    def get_field_datum(self, datum, field):
        if field == auto_id_field:
            return AutoIdForDatum(datum)
        else:
            try:
                field_value = datum.value[field] # Do NOT use `val.get(field)` since that confuses None as a value and None due to `get`
                return DatumInContext(value=field_value, path=Fields(field), context=datum)
            except (TypeError, KeyError, AttributeError):
                return None

    def reified_fields(self, datum):
        if '*' not in self.fields:
            return self.fields
        else:
            try:
                fields = tuple(datum.value.keys())
                return fields if auto_id_field is None else fields + (auto_id_field,)
            except AttributeError:
                return ()

    def find(self, datum):
        datum = DatumInContext.wrap(datum)

        return [field_datum
                for field_datum in [self.get_field_datum(datum, field) for field in self.reified_fields(datum)]
                if field_datum is not None]

    def __str__(self):
        return ','.join(self.fields)

    def __repr__(self):
        return '%s(%s)' % (self.__class__.__name__, ','.join(map(repr, self.fields)))

    def __eq__(self, other):
        return isinstance(other, Fields) and tuple(self.fields) == tuple(other.fields)


class Index(JSONPath):
    """
    JSONPath that matches indices of the current datum, or none if not large enough.
    Concrete syntax is brackets.

    WARNING: If the datum is not long enough, it will not crash but will not match anything.
    NOTE: For the concrete syntax of `[*]`, the abstract syntax is a Slice() with no parameters (equiv to `[:]`)
    """

    def __init__(self, index):
        self.index = index

    def find(self, datum):
        datum = DatumInContext.wrap(datum)

        if len(datum.value) > self.index:
            return [DatumInContext(datum.value[self.index], path=self, context=datum)]
        else:
            return []

    def __eq__(self, other):
        return isinstance(other, Index) and self.index == other.index

    def __str__(self):
        return '[%i]' % self.index

class Slice(JSONPath):
    """
    JSONPath matching a slice of an array.

    Because of a mismatch between JSON and XML when schema-unaware,
    this always returns an iterable; if the incoming data
    was not a list, then it returns a one element list _containing_ that
    data.

    Consider these two docs, and their schema-unaware translation to JSON:

    <a><b>hello</b></a> ==> {"a": {"b": "hello"}}
    <a><b>hello</b><b>goodbye</b></a> ==> {"a": {"b": ["hello", "goodbye"]}}

    If there were a schema, it would be known that "b" should always be an
    array (unless the schema were wonky, but that is too much to fix here)
    so when querying with JSON if the one writing the JSON knows that it
    should be an array, they can write a slice operator and it will coerce
    a non-array value to an array.

    This may be a bit unfortunate because it would be nice to always have
    an iterator, but dictionaries and other objects may also be iterable,
    so this is the compromise.
    """
    def __init__(self, start=None, end=None, step=None):
        self.start = start
        self.end = end
        self.step = step

    def find(self, datum):
        datum = DatumInContext.wrap(datum)

        # Here's the hack. If it is a dictionary or some kind of constant,
        # put it in a single-element list
        if (isinstance(datum.value, dict) or isinstance(datum.value, six.integer_types) or isinstance(datum.value, six.string_types)):
            return self.find(DatumInContext([datum.value], path=datum.path, context=datum.context))

        # Some iterators do not support slicing but we can still
        # at least work for '*'
        if self.start is None and self.end is None and self.step is None:
            return [DatumInContext(datum.value[i], path=Index(i), context=datum) for i in xrange(0, len(datum.value))]
        else:
            return [DatumInContext(datum.value[i], path=Index(i), context=datum) for i in range(0, len(datum.value))[self.start:self.end:self.step]]

    def __str__(self):
        if self.start is None and self.end is None and self.step is None:
            return '[*]'
        else:
            return '[%s%s%s]' % (self.start or '',
                                 ':%d'%self.end if self.end else '',
                                 ':%d'%self.step if self.step else '')

    def __repr__(self):
        return '%s(start=%r,end=%r,step=%r)' % (self.__class__.__name__, self.start, self.end, self.step)

    def __eq__(self, other):
        return isinstance(other, Slice) and other.start == self.start and self.end == other.end and other.step == self.step
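`Child.find` above is a flat-map of the right path over every match of the left path. A stripped-down standalone sketch of that composition, using toy `Field`/`Child` classes (hypothetical stand-ins, not the module's own, and without the DatumInContext bookkeeping):

```python
class Field:
    """Toy stand-in for Fields: matches one dict key."""
    def __init__(self, name):
        self.name = name
    def find(self, data):
        if isinstance(data, dict) and self.name in data:
            return [data[self.name]]
        return []

class Child:
    """Toy stand-in for Child: left then right, flat-mapped."""
    def __init__(self, left, right):
        self.left, self.right = left, right
    def find(self, data):
        # run the right path over every match of the left path
        return [m for sub in self.left.find(data)
                  for m in self.right.find(sub)]

doc = {"a": {"b": 42}}
assert Child(Field("a"), Field("b")).find(doc) == [42]
assert Child(Field("a"), Field("x")).find(doc) == []
```

The real class also filters out `AutoIdForDatum` matches on the left and wraps results in `DatumInContext` so the traversed path is remembered.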
@@ -1,171 +0,0 @@
from __future__ import unicode_literals, print_function, absolute_import, division, generators, nested_scopes
import sys
import logging

import ply.lex

logger = logging.getLogger(__name__)

class JsonPathLexerError(Exception):
    pass

class JsonPathLexer(object):
    '''
    A Lexical analyzer for JsonPath.
    '''

    def __init__(self, debug=False):
        self.debug = debug
        if self.__doc__ is None:
            raise JsonPathLexerError('Docstrings have been removed! By design of PLY, jsonpath-rw requires docstrings. You must not use PYTHONOPTIMIZE=2 or python -OO.')

    def tokenize(self, string):
        '''
        Maps a string to an iterator over tokens. In other words: [char] -> [token]
        '''

        new_lexer = ply.lex.lex(module=self, debug=self.debug, errorlog=logger)
        new_lexer.latest_newline = 0
        new_lexer.string_value = None
        new_lexer.input(string)

        while True:
            t = new_lexer.token()
            if t is None: break
            t.col = t.lexpos - new_lexer.latest_newline
            yield t

        if new_lexer.string_value is not None:
            raise JsonPathLexerError('Unexpected EOF in string literal or identifier')

    # ============== PLY Lexer specification ==================
    #
    # This probably should be private but:
    #   - the parser requires access to `tokens` (perhaps they should be defined in a third, shared dependency)
    #   - things like `literals` might be a legitimate part of the public interface.
    #
    # Anyhow, it is pythonic to give some rope to hang oneself with :-)

    literals = ['*', '.', '[', ']', '(', ')', '$', ',', ':', '|', '&']

    reserved_words = { 'where': 'WHERE' }

    tokens = ['DOUBLEDOT', 'NUMBER', 'ID', 'NAMED_OPERATOR'] + list(reserved_words.values())

    states = [ ('singlequote', 'exclusive'),
               ('doublequote', 'exclusive'),
               ('backquote', 'exclusive') ]

    # Normal lexing, rather easy
    t_DOUBLEDOT = r'\.\.'
    t_ignore = ' \t'

    def t_ID(self, t):
        r'[a-zA-Z_@][a-zA-Z0-9_@\-]*'
        t.type = self.reserved_words.get(t.value, 'ID')
        return t

    def t_NUMBER(self, t):
        r'-?\d+'
        t.value = int(t.value)
        return t


    # Single-quoted strings
    t_singlequote_ignore = ''
    def t_singlequote(self, t):
        r"'"
        t.lexer.string_start = t.lexer.lexpos
        t.lexer.string_value = ''
        t.lexer.push_state('singlequote')

    def t_singlequote_content(self, t):
        r"[^'\\]+"
        t.lexer.string_value += t.value

    def t_singlequote_escape(self, t):
        r'\\.'
        t.lexer.string_value += t.value[1]

    def t_singlequote_end(self, t):
        r"'"
        t.value = t.lexer.string_value
        t.type = 'ID'
        t.lexer.string_value = None
        t.lexer.pop_state()
        return t

    def t_singlequote_error(self, t):
        raise JsonPathLexerError('Error on line %s, col %s while lexing singlequoted field: Unexpected character: %s ' % (t.lexer.lineno, t.lexpos - t.lexer.latest_newline, t.value[0]))


    # Double-quoted strings
    t_doublequote_ignore = ''
    def t_doublequote(self, t):
        r'"'
        t.lexer.string_start = t.lexer.lexpos
        t.lexer.string_value = ''
        t.lexer.push_state('doublequote')

    def t_doublequote_content(self, t):
        r'[^"\\]+'
        t.lexer.string_value += t.value

    def t_doublequote_escape(self, t):
        r'\\.'
        t.lexer.string_value += t.value[1]

    def t_doublequote_end(self, t):
        r'"'
        t.value = t.lexer.string_value
        t.type = 'ID'
        t.lexer.string_value = None
        t.lexer.pop_state()
        return t

    def t_doublequote_error(self, t):
        raise JsonPathLexerError('Error on line %s, col %s while lexing doublequoted field: Unexpected character: %s ' % (t.lexer.lineno, t.lexpos - t.lexer.latest_newline, t.value[0]))


    # Back-quoted "magic" operators
    t_backquote_ignore = ''
    def t_backquote(self, t):
        r'`'
        t.lexer.string_start = t.lexer.lexpos
        t.lexer.string_value = ''
        t.lexer.push_state('backquote')

    def t_backquote_escape(self, t):
        r'\\.'
        t.lexer.string_value += t.value[1]

    def t_backquote_content(self, t):
        r"[^`\\]+"
        t.lexer.string_value += t.value

    def t_backquote_end(self, t):
        r'`'
        t.value = t.lexer.string_value
        t.type = 'NAMED_OPERATOR'
        t.lexer.string_value = None
        t.lexer.pop_state()
        return t

    def t_backquote_error(self, t):
        raise JsonPathLexerError('Error on line %s, col %s while lexing backquoted operator: Unexpected character: %s ' % (t.lexer.lineno, t.lexpos - t.lexer.latest_newline, t.value[0]))


    # Counting lines, handling errors
    def t_newline(self, t):
        r'\n'
        t.lexer.lineno += 1
        t.lexer.latest_newline = t.lexpos

    def t_error(self, t):
        raise JsonPathLexerError('Error on line %s, col %s: Unexpected character: %s ' % (t.lexer.lineno, t.lexpos - t.lexer.latest_newline, t.value[0]))

if __name__ == '__main__':
    logging.basicConfig()
    lexer = JsonPathLexer(debug=True)
    for token in lexer.tokenize(sys.stdin.read()):
        print('%-20s%s' % (token.value, token.type))
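The three exclusive quote states above (single, double, and back quote) all follow the same pattern: enter the state at an opening quote, accumulate content, splice in the character after each backslash escape, and emit the accumulated value at the closing quote. A minimal standalone sketch of that loop without PLY (`read_quoted` is a hypothetical helper, not part of the lexer):

```python
def read_quoted(s, i, quote):
    # s[i] must be the opening quote; returns (value, index after closing quote)
    assert s[i] == quote
    i += 1
    out = []
    while i < len(s):
        c = s[i]
        if c == '\\' and i + 1 < len(s):   # escape: keep the next char verbatim
            out.append(s[i + 1])
            i += 2
        elif c == quote:                   # closing quote ends the state
            return ''.join(out), i + 1
        else:                              # ordinary content
            out.append(c)
            i += 1
    # mirrors the lexer's "string_value is not None" EOF check
    raise ValueError('Unexpected EOF in string literal or identifier')

# an escaped quote inside a single-quoted field
assert read_quoted("'a\\'b'.c", 0, "'") == ("a'b", 6)
```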
@@ -1,187 +0,0 @@
from __future__ import print_function, absolute_import, division, generators, nested_scopes
import sys
import os.path
import logging

import ply.yacc

from jsonpath_rw.jsonpath import *
from jsonpath_rw.lexer import JsonPathLexer

logger = logging.getLogger(__name__)


def parse(string):
    return JsonPathParser().parse(string)


class JsonPathParser(object):
    '''
    An LALR parser for JsonPath
    '''

    tokens = JsonPathLexer.tokens

    def __init__(self, debug=False, lexer_class=None):
        if self.__doc__ is None:
            raise Exception('Docstrings have been removed! By design of PLY, jsonpath-rw requires docstrings. You must not use PYTHONOPTIMIZE=2 or python -OO.')

        self.debug = debug
        self.lexer_class = lexer_class or JsonPathLexer  # Crufty but works around statefulness in PLY

    def parse(self, string, lexer=None):
        lexer = lexer or self.lexer_class()
        return self.parse_token_stream(lexer.tokenize(string))

    def parse_token_stream(self, token_iterator, start_symbol='jsonpath'):

        # Since PLY has some crufty aspects and dumps files, we try to keep them local
        # However, we need to derive the name of the output Python file :-/
        output_directory = os.path.dirname(__file__)
        try:
            module_name = os.path.splitext(os.path.split(__file__)[1])[0]
        except:
            module_name = __name__

        parsing_table_module = '_'.join([module_name, start_symbol, 'parsetab'])

        # And we regenerate the parse table every time; it doesn't actually take that long!
        new_parser = ply.yacc.yacc(module=self,
                                   debug=self.debug,
                                   tabmodule=parsing_table_module,
                                   outputdir=output_directory,
                                   write_tables=0,
                                   start=start_symbol,
                                   errorlog=logger)

        return new_parser.parse(lexer=IteratorToTokenStream(token_iterator))

    # ===================== PLY Parser specification =====================

    precedence = [
        ('left', ','),
        ('left', 'DOUBLEDOT'),
        ('left', '.'),
        ('left', '|'),
        ('left', '&'),
        ('left', 'WHERE'),
    ]

    def p_error(self, t):
        raise Exception('Parse error at %s:%s near token %s (%s)' % (t.lineno, t.col, t.value, t.type))

    def p_jsonpath_binop(self, p):
        """jsonpath : jsonpath '.' jsonpath
                    | jsonpath DOUBLEDOT jsonpath
                    | jsonpath WHERE jsonpath
                    | jsonpath '|' jsonpath
                    | jsonpath '&' jsonpath"""
        op = p[2]

        if op == '.':
            p[0] = Child(p[1], p[3])
        elif op == '..':
            p[0] = Descendants(p[1], p[3])
        elif op == 'where':
            p[0] = Where(p[1], p[3])
        elif op == '|':
            p[0] = Union(p[1], p[3])
        elif op == '&':
            p[0] = Intersect(p[1], p[3])

    def p_jsonpath_fields(self, p):
        "jsonpath : fields_or_any"
        p[0] = Fields(*p[1])

    def p_jsonpath_named_operator(self, p):
        "jsonpath : NAMED_OPERATOR"
        if p[1] == 'this':
            p[0] = This()
        elif p[1] == 'parent':
            p[0] = Parent()
        else:
            raise Exception('Unknown named operator `%s` at %s:%s' % (p[1], p.lineno(1), p.lexpos(1)))

    def p_jsonpath_root(self, p):
        "jsonpath : '$'"
        p[0] = Root()

    def p_jsonpath_idx(self, p):
        "jsonpath : '[' idx ']'"
        p[0] = p[2]

    def p_jsonpath_slice(self, p):
        "jsonpath : '[' slice ']'"
        p[0] = p[2]

    def p_jsonpath_fieldbrackets(self, p):
        "jsonpath : '[' fields ']'"
        p[0] = Fields(*p[2])

    def p_jsonpath_child_fieldbrackets(self, p):
        "jsonpath : jsonpath '[' fields ']'"
        p[0] = Child(p[1], Fields(*p[3]))

    def p_jsonpath_child_idxbrackets(self, p):
        "jsonpath : jsonpath '[' idx ']'"
        p[0] = Child(p[1], p[3])

    def p_jsonpath_child_slicebrackets(self, p):
        "jsonpath : jsonpath '[' slice ']'"
        p[0] = Child(p[1], p[3])

    def p_jsonpath_parens(self, p):
        "jsonpath : '(' jsonpath ')'"
        p[0] = p[2]

    # Because fields in brackets cannot be '*' - that is reserved for array indices
    def p_fields_or_any(self, p):
        """fields_or_any : fields
                         | '*' """
        if p[1] == '*':
            p[0] = ['*']
        else:
            p[0] = p[1]

    def p_fields_id(self, p):
        "fields : ID"
        p[0] = [p[1]]

    def p_fields_comma(self, p):
        "fields : fields ',' fields"
        p[0] = p[1] + p[3]

    def p_idx(self, p):
        "idx : NUMBER"
        p[0] = Index(p[1])

    def p_slice_any(self, p):
        "slice : '*'"
        p[0] = Slice()

    def p_slice(self, p):  # Currently does not support `step`
        "slice : maybe_int ':' maybe_int"
        p[0] = Slice(start=p[1], end=p[3])

    def p_maybe_int(self, p):
        """maybe_int : NUMBER
                     | empty"""
        p[0] = p[1]

    def p_empty(self, p):
        'empty :'
        p[0] = None


class IteratorToTokenStream(object):
    def __init__(self, iterator):
        self.iterator = iterator

    def token(self):
        try:
            return next(self.iterator)
        except StopIteration:
            return None


if __name__ == '__main__':
    logging.basicConfig()
    parser = JsonPathParser(debug=True)
    print(parser.parse(sys.stdin.read()))
@@ -1,4 +0,0 @@
# PLY package
# Author: David Beazley (dave@dabeaz.com)

__all__ = ['lex', 'yacc']
@@ -1,898 +0,0 @@
# -----------------------------------------------------------------------------
# cpp.py
#
# Author: David Beazley (http://www.dabeaz.com)
# Copyright (C) 2007
# All rights reserved
#
# This module implements an ANSI-C style lexical preprocessor for PLY.
# -----------------------------------------------------------------------------
from __future__ import generators

# -----------------------------------------------------------------------------
# Default preprocessor lexer definitions.  These tokens are enough to get
# a basic preprocessor working.  Other modules may import these if they want
# -----------------------------------------------------------------------------

tokens = (
    'CPP_ID', 'CPP_INTEGER', 'CPP_FLOAT', 'CPP_STRING', 'CPP_CHAR', 'CPP_WS', 'CPP_COMMENT', 'CPP_POUND', 'CPP_DPOUND'
)

literals = "+-*/%|&~^<>=!?()[]{}.,;:\\\'\""

# Whitespace
def t_CPP_WS(t):
    r'\s+'
    t.lexer.lineno += t.value.count("\n")
    return t

t_CPP_POUND = r'\#'
t_CPP_DPOUND = r'\#\#'

# Identifier
t_CPP_ID = r'[A-Za-z_][\w_]*'

# Integer literal
def CPP_INTEGER(t):
    r'(((((0x)|(0X))[0-9a-fA-F]+)|(\d+))([uU]|[lL]|[uU][lL]|[lL][uU])?)'
    return t

t_CPP_INTEGER = CPP_INTEGER

# Floating literal
t_CPP_FLOAT = r'((\d+)(\.\d+)(e(\+|-)?(\d+))? | (\d+)e(\+|-)?(\d+))([lL]|[fF])?'

# String literal
def t_CPP_STRING(t):
    r'\"([^\\\n]|(\\(.|\n)))*?\"'
    t.lexer.lineno += t.value.count("\n")
    return t

# Character constant 'c' or L'c'
def t_CPP_CHAR(t):
    r'(L)?\'([^\\\n]|(\\(.|\n)))*?\''
    t.lexer.lineno += t.value.count("\n")
    return t

# Comment
def t_CPP_COMMENT(t):
    r'(/\*(.|\n)*?\*/)|(//.*?\n)'
    t.lexer.lineno += t.value.count("\n")
    return t

def t_error(t):
    t.type = t.value[0]
    t.value = t.value[0]
    t.lexer.skip(1)
    return t

import re
import copy
import time
import os.path

# -----------------------------------------------------------------------------
# trigraph()
#
# Given an input string, this function replaces all trigraph sequences.
# The following mapping is used:
#
#     ??=    #
#     ??/    \
#     ??'    ^
#     ??(    [
#     ??)    ]
#     ??!    |
#     ??<    {
#     ??>    }
#     ??-    ~
# -----------------------------------------------------------------------------

_trigraph_pat = re.compile(r'''\?\?[=/\'\(\)\!<>\-]''')
_trigraph_rep = {
    '=': '#',
    '/': '\\',
    "'": '^',
    '(': '[',
    ')': ']',
    '!': '|',
    '<': '{',
    '>': '}',
    '-': '~'
}

def trigraph(input):
    return _trigraph_pat.sub(lambda g: _trigraph_rep[g.group()[-1]], input)
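The trigraph substitution above is self-contained and easy to exercise on its own. A minimal standalone sketch (same table and pattern as the vendored `cpp.py`):

```python
import re

# Same trigraph mapping as the table above
_trigraph_rep = {'=': '#', '/': '\\', "'": '^', '(': '[',
                 ')': ']', '!': '|', '<': '{', '>': '}', '-': '~'}
_trigraph_pat = re.compile(r'''\?\?[=/\'\(\)\!<>\-]''')

def trigraph(text):
    # Replace each ??X sequence with its single-character equivalent
    return _trigraph_pat.sub(lambda m: _trigraph_rep[m.group()[-1]], text)

print(trigraph("??=define ARR??(10??)"))  # -> #define ARR[10]
```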
# ------------------------------------------------------------------
# Macro object
#
# This object holds information about preprocessor macros
#
#    .name      - Macro name (string)
#    .value     - Macro value (a list of tokens)
#    .arglist   - List of argument names
#    .variadic  - Boolean indicating whether or not variadic macro
#    .vararg    - Name of the variadic parameter
#
# When a macro is created, the macro replacement token sequence is
# pre-scanned and used to create patch lists that are later used
# during macro expansion
# ------------------------------------------------------------------

class Macro(object):
    def __init__(self, name, value, arglist=None, variadic=False):
        self.name = name
        self.value = value
        self.arglist = arglist
        self.variadic = variadic
        if variadic:
            self.vararg = arglist[-1]
        self.source = None

# ------------------------------------------------------------------
# Preprocessor object
#
# Object representing a preprocessor.  Contains macro definitions,
# include directories, and other information
# ------------------------------------------------------------------

class Preprocessor(object):
    def __init__(self, lexer=None):
        if lexer is None:
            lexer = lex.lexer
        self.lexer = lexer
        self.macros = { }
        self.path = []
        self.temp_path = []

        # Probe the lexer for selected tokens
        self.lexprobe()

        tm = time.localtime()
        self.define("__DATE__ \"%s\"" % time.strftime("%b %d %Y", tm))
        self.define("__TIME__ \"%s\"" % time.strftime("%H:%M:%S", tm))
        self.parser = None

    # -----------------------------------------------------------------------------
    # tokenize()
    #
    # Utility function. Given a string of text, tokenize into a list of tokens
    # -----------------------------------------------------------------------------

    def tokenize(self, text):
        tokens = []
        self.lexer.input(text)
        while True:
            tok = self.lexer.token()
            if not tok: break
            tokens.append(tok)
        return tokens

    # ---------------------------------------------------------------------
    # error()
    #
    # Report a preprocessor error/warning of some kind
    # ----------------------------------------------------------------------

    def error(self, file, line, msg):
        print("%s:%d %s" % (file, line, msg))

    # ----------------------------------------------------------------------
    # lexprobe()
    #
    # This method probes the preprocessor lexer object to discover
    # the token types of symbols that are important to the preprocessor.
    # If this works right, the preprocessor will simply "work"
    # with any suitable lexer regardless of how tokens have been named.
    # ----------------------------------------------------------------------

    def lexprobe(self):

        # Determine the token type for identifiers
        self.lexer.input("identifier")
        tok = self.lexer.token()
        if not tok or tok.value != "identifier":
            print("Couldn't determine identifier type")
        else:
            self.t_ID = tok.type

        # Determine the token type for integers
        self.lexer.input("12345")
        tok = self.lexer.token()
        if not tok or int(tok.value) != 12345:
            print("Couldn't determine integer type")
        else:
            self.t_INTEGER = tok.type
            self.t_INTEGER_TYPE = type(tok.value)

        # Determine the token type for strings enclosed in double quotes
        self.lexer.input("\"filename\"")
        tok = self.lexer.token()
        if not tok or tok.value != "\"filename\"":
            print("Couldn't determine string type")
        else:
            self.t_STRING = tok.type

        # Determine the token type for whitespace--if any
        self.lexer.input(" ")
        tok = self.lexer.token()
        if not tok or tok.value != " ":
            self.t_SPACE = None
        else:
            self.t_SPACE = tok.type

        # Determine the token type for newlines
        self.lexer.input("\n")
        tok = self.lexer.token()
        if not tok or tok.value != "\n":
            self.t_NEWLINE = None
            print("Couldn't determine token for newlines")
        else:
            self.t_NEWLINE = tok.type

        self.t_WS = (self.t_SPACE, self.t_NEWLINE)

        # Check for other characters used by the preprocessor
        chars = ['<', '>', '#', '##', '\\', '(', ')', ',', '.']
        for c in chars:
            self.lexer.input(c)
            tok = self.lexer.token()
            if not tok or tok.value != c:
                print("Unable to lex '%s' required for preprocessor" % c)

    # ----------------------------------------------------------------------
    # add_path()
    #
    # Adds a search path to the preprocessor.
    # ----------------------------------------------------------------------

    def add_path(self, path):
        self.path.append(path)

    # ----------------------------------------------------------------------
    # group_lines()
    #
    # Given an input string, this function splits it into lines.  Trailing whitespace
    # is removed.  Any line ending with \ is grouped with the next line.  This
    # function forms the lowest level of the preprocessor---grouping text into
    # a line-by-line format.
    # ----------------------------------------------------------------------

    def group_lines(self, input):
        lex = self.lexer.clone()
        lines = [x.rstrip() for x in input.splitlines()]
        for i in xrange(len(lines)):
            j = i + 1
            while lines[i].endswith('\\') and (j < len(lines)):
                lines[i] = lines[i][:-1] + lines[j]
                lines[j] = ""
                j += 1

        input = "\n".join(lines)
        lex.input(input)
        lex.lineno = 1

        current_line = []
        while True:
            tok = lex.token()
            if not tok:
                break
            current_line.append(tok)
            if tok.type in self.t_WS and '\n' in tok.value:
                yield current_line
                current_line = []

        if current_line:
            yield current_line

    # ----------------------------------------------------------------------
    # tokenstrip()
    #
    # Remove leading/trailing whitespace tokens from a token list
    # ----------------------------------------------------------------------

    def tokenstrip(self, tokens):
        i = 0
        while i < len(tokens) and tokens[i].type in self.t_WS:
            i += 1
        del tokens[:i]
        i = len(tokens) - 1
        while i >= 0 and tokens[i].type in self.t_WS:
            i -= 1
        del tokens[i+1:]
        return tokens
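The backslash line-continuation pass in `group_lines` can be demonstrated without a lexer. A minimal standalone version of that joining loop (`join_continuations` is a hypothetical helper name, not part of `cpp.py`):

```python
def join_continuations(text):
    # Any line ending in a backslash is merged with the following line,
    # mirroring the first loop of Preprocessor.group_lines above.
    lines = [x.rstrip() for x in text.splitlines()]
    for i in range(len(lines)):
        j = i + 1
        while lines[i].endswith('\\') and j < len(lines):
            lines[i] = lines[i][:-1] + lines[j]
            lines[j] = ""
            j += 1
    return "\n".join(lines)

print(join_continuations("#define MAX(a,b) \\\n  ((a)>(b)?(a):(b))"))
```

Note that consumed continuation lines are left in place as empty strings, so line numbers in later diagnostics stay aligned with the original input.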
    # ----------------------------------------------------------------------
    # collect_args()
    #
    # Collects comma separated arguments from a list of tokens.  The arguments
    # must be enclosed in parenthesis.  Returns a tuple (tokencount,args,positions)
    # where tokencount is the number of tokens consumed, args is a list of arguments,
    # and positions is a list of integers containing the starting index of each
    # argument.  Each argument is represented by a list of tokens.
    #
    # When collecting arguments, leading and trailing whitespace is removed
    # from each argument.
    #
    # This function properly handles nested parenthesis and commas---these do not
    # define new arguments.
    # ----------------------------------------------------------------------

    def collect_args(self, tokenlist):
        args = []
        positions = []
        current_arg = []
        nesting = 1
        tokenlen = len(tokenlist)

        # Search for the opening '('.
        i = 0
        while (i < tokenlen) and (tokenlist[i].type in self.t_WS):
            i += 1

        if (i < tokenlen) and (tokenlist[i].value == '('):
            positions.append(i+1)
        else:
            self.error(self.source, tokenlist[0].lineno, "Missing '(' in macro arguments")
            return 0, [], []

        i += 1

        while i < tokenlen:
            t = tokenlist[i]
            if t.value == '(':
                current_arg.append(t)
                nesting += 1
            elif t.value == ')':
                nesting -= 1
                if nesting == 0:
                    if current_arg:
                        args.append(self.tokenstrip(current_arg))
                        positions.append(i)
                    return i+1, args, positions
                current_arg.append(t)
            elif t.value == ',' and nesting == 1:
                args.append(self.tokenstrip(current_arg))
                positions.append(i+1)
                current_arg = []
            else:
                current_arg.append(t)
            i += 1

        # Missing end argument
        self.error(self.source, tokenlist[-1].lineno, "Missing ')' in macro arguments")
        return 0, [], []
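The nesting rule `collect_args` implements (a comma separates arguments only at parenthesis depth 1) can be shown on plain strings. A hypothetical helper, not part of `cpp.py`:

```python
def split_args(s):
    # Split "a, f(b, c), d" into top-level arguments, ignoring commas
    # that sit inside nested parentheses -- the same depth rule used
    # by Preprocessor.collect_args above.
    args, depth, cur = [], 0, []
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        if ch == ',' and depth == 0:
            args.append(''.join(cur).strip())
            cur = []
        else:
            cur.append(ch)
    args.append(''.join(cur).strip())
    return args

print(split_args("a, f(b, c), d"))  # -> ['a', 'f(b, c)', 'd']
```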
    # ----------------------------------------------------------------------
    # macro_prescan()
    #
    # Examine the macro value (token sequence) and identify patch points
    # This is used to speed up macro expansion later on---we'll know
    # right away where to apply patches to the value to form the expansion
    # ----------------------------------------------------------------------

    def macro_prescan(self, macro):
        macro.patch = []                 # Standard macro arguments
        macro.str_patch = []             # String conversion expansion
        macro.var_comma_patch = []       # Variadic macro comma patch
        i = 0
        while i < len(macro.value):
            if macro.value[i].type == self.t_ID and macro.value[i].value in macro.arglist:
                argnum = macro.arglist.index(macro.value[i].value)
                # Conversion of argument to a string
                if i > 0 and macro.value[i-1].value == '#':
                    macro.value[i] = copy.copy(macro.value[i])
                    macro.value[i].type = self.t_STRING
                    del macro.value[i-1]
                    macro.str_patch.append((argnum, i-1))
                    continue
                # Concatenation
                elif (i > 0 and macro.value[i-1].value == '##'):
                    macro.patch.append(('c', argnum, i-1))
                    del macro.value[i-1]
                    continue
                elif ((i+1) < len(macro.value) and macro.value[i+1].value == '##'):
                    macro.patch.append(('c', argnum, i))
                    i += 1
                    continue
                # Standard expansion
                else:
                    macro.patch.append(('e', argnum, i))
            elif macro.value[i].value == '##':
                if macro.variadic and (i > 0) and (macro.value[i-1].value == ',') and \
                        ((i+1) < len(macro.value)) and (macro.value[i+1].type == self.t_ID) and \
                        (macro.value[i+1].value == macro.vararg):
                    macro.var_comma_patch.append(i-1)
            i += 1
        macro.patch.sort(key=lambda x: x[2], reverse=True)

    # ----------------------------------------------------------------------
    # macro_expand_args()
    #
    # Given a Macro and list of arguments (each a token list), this method
    # returns an expanded version of a macro.  The return value is a token sequence
    # representing the replacement macro tokens
    # ----------------------------------------------------------------------

    def macro_expand_args(self, macro, args):
        # Make a copy of the macro token sequence
        rep = [copy.copy(_x) for _x in macro.value]

        # Make string expansion patches.  These do not alter the length of the replacement sequence
        str_expansion = {}
        for argnum, i in macro.str_patch:
            if argnum not in str_expansion:
                str_expansion[argnum] = ('"%s"' % "".join([x.value for x in args[argnum]])).replace("\\", "\\\\")
            rep[i] = copy.copy(rep[i])
            rep[i].value = str_expansion[argnum]

        # Make the variadic macro comma patch.  If the variadic macro argument is empty, we get rid of the comma
        comma_patch = False
        if macro.variadic and not args[-1]:
            for i in macro.var_comma_patch:
                rep[i] = None
                comma_patch = True

        # Make all other patches.  The order of these matters.  It is assumed that the patch list
        # has been sorted in reverse order of patch location since replacements will cause the
        # size of the replacement sequence to expand from the patch point.

        expanded = { }
        for ptype, argnum, i in macro.patch:
            # Concatenation.  Argument is left unexpanded
            if ptype == 'c':
                rep[i:i+1] = args[argnum]
            # Normal expansion.  Argument is macro expanded first
            elif ptype == 'e':
                if argnum not in expanded:
                    expanded[argnum] = self.expand_macros(args[argnum])
                rep[i:i+1] = expanded[argnum]

        # Get rid of removed comma if necessary
        if comma_patch:
            rep = [_i for _i in rep if _i]

        return rep
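The `str_patch` expansion above implements the `#arg` stringize operator: join the argument's token text, wrap it in quotes, then double the backslashes. A standalone sketch of just that transformation (`stringize` is a hypothetical name for illustration):

```python
def stringize(arg_tokens):
    # Mirror of the str_patch expansion above: join the argument's
    # token text, wrap it in quotes, then double any backslashes.
    return ('"%s"' % "".join(arg_tokens)).replace("\\", "\\\\")

print(stringize(["a", "+", "b"]))  # -> "a+b" (including the quotes)
```

As in `cpp.py`, the escaping is applied after quoting, so the surrounding quote characters themselves are never escaped.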
    # ----------------------------------------------------------------------
    # expand_macros()
    #
    # Given a list of tokens, this function performs macro expansion.
    # The expanded argument is a dictionary that contains macros already
    # expanded.  This is used to prevent infinite recursion.
    # ----------------------------------------------------------------------

    def expand_macros(self, tokens, expanded=None):
        if expanded is None:
            expanded = {}
        i = 0
        while i < len(tokens):
            t = tokens[i]
            if t.type == self.t_ID:
                if t.value in self.macros and t.value not in expanded:
                    # Yes, we found a macro match
                    expanded[t.value] = True

                    m = self.macros[t.value]
                    if not m.arglist:
                        # A simple macro
                        ex = self.expand_macros([copy.copy(_x) for _x in m.value], expanded)
                        for e in ex:
                            e.lineno = t.lineno
                        tokens[i:i+1] = ex
                        i += len(ex)
                    else:
                        # A macro with arguments
                        j = i + 1
                        while j < len(tokens) and tokens[j].type in self.t_WS:
                            j += 1
                        if tokens[j].value == '(':
                            tokcount, args, positions = self.collect_args(tokens[j:])
                            if not m.variadic and len(args) != len(m.arglist):
                                self.error(self.source, t.lineno, "Macro %s requires %d arguments" % (t.value, len(m.arglist)))
                                i = j + tokcount
                            elif m.variadic and len(args) < len(m.arglist)-1:
                                if len(m.arglist) > 2:
                                    self.error(self.source, t.lineno, "Macro %s must have at least %d arguments" % (t.value, len(m.arglist)-1))
                                else:
                                    self.error(self.source, t.lineno, "Macro %s must have at least %d argument" % (t.value, len(m.arglist)-1))
                                i = j + tokcount
                            else:
                                if m.variadic:
                                    if len(args) == len(m.arglist)-1:
                                        args.append([])
                                    else:
                                        args[len(m.arglist)-1] = tokens[j+positions[len(m.arglist)-1]:j+tokcount-1]
                                        del args[len(m.arglist):]

                                # Get macro replacement text
                                rep = self.macro_expand_args(m, args)
                                rep = self.expand_macros(rep, expanded)
                                for r in rep:
                                    r.lineno = t.lineno
                                tokens[i:j+tokcount] = rep
                                i += len(rep)
                    del expanded[t.value]
                    continue
                elif t.value == '__LINE__':
                    t.type = self.t_INTEGER
                    t.value = self.t_INTEGER_TYPE(t.lineno)

            i += 1
        return tokens
    # ----------------------------------------------------------------------
    # evalexpr()
    #
    # Evaluate an expression token sequence for the purposes of evaluating
    # integral expressions.
    # ----------------------------------------------------------------------

    def evalexpr(self, tokens):
        # tokens = tokenize(line)
        # Search for defined macros
        i = 0
        while i < len(tokens):
            if tokens[i].type == self.t_ID and tokens[i].value == 'defined':
                j = i + 1
                needparen = False
                result = "0L"
                while j < len(tokens):
                    if tokens[j].type in self.t_WS:
                        j += 1
                        continue
                    elif tokens[j].type == self.t_ID:
                        if tokens[j].value in self.macros:
                            result = "1L"
                        else:
                            result = "0L"
                        if not needparen: break
                    elif tokens[j].value == '(':
                        needparen = True
                    elif tokens[j].value == ')':
                        break
                    else:
                        self.error(self.source, tokens[i].lineno, "Malformed defined()")
                    j += 1
                tokens[i].type = self.t_INTEGER
                tokens[i].value = self.t_INTEGER_TYPE(result)
                del tokens[i+1:j+1]
            i += 1
        tokens = self.expand_macros(tokens)
        for i, t in enumerate(tokens):
            if t.type == self.t_ID:
                tokens[i] = copy.copy(t)
                tokens[i].type = self.t_INTEGER
                tokens[i].value = self.t_INTEGER_TYPE("0L")
            elif t.type == self.t_INTEGER:
                tokens[i] = copy.copy(t)
                # Strip off any trailing suffixes
                tokens[i].value = str(tokens[i].value)
                while tokens[i].value[-1] not in "0123456789abcdefABCDEF":
                    tokens[i].value = tokens[i].value[:-1]

        expr = "".join([str(x.value) for x in tokens])
        expr = expr.replace("&&", " and ")
        expr = expr.replace("||", " or ")
        expr = expr.replace("!", " not ")
        try:
            result = eval(expr)
        except StandardError:
            self.error(self.source, tokens[0].lineno, "Couldn't evaluate expression")
            result = 0
        return result
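The final step of `evalexpr` is a purely textual translation of C logical operators into Python followed by `eval`. A minimal standalone sketch of that step (`eval_cpp_expr` is a hypothetical name, and the broad `except` mirrors the original's fall-back to 0):

```python
def eval_cpp_expr(expr):
    # Translate C logical operators into Python, then eval -- the same
    # textual substitution Preprocessor.evalexpr performs above.
    # Note the order: "&&" and "||" are rewritten before "!".
    expr = expr.replace("&&", " and ").replace("||", " or ").replace("!", " not ")
    try:
        return eval(expr)
    except Exception:
        return 0

print(eval_cpp_expr("1 && (2 || 0)"))  # -> 2 (truthy)
```

Like Python's own `and`/`or`, the result is the deciding operand rather than a strict 0/1, which is sufficient for the truthiness test `#if` performs.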
    # ----------------------------------------------------------------------
    # parsegen()
    #
    # Parse an input string
    # ----------------------------------------------------------------------
    def parsegen(self, input, source=None):

        # Replace trigraph sequences
        t = trigraph(input)
        lines = self.group_lines(t)

        if not source:
            source = ""

        self.define("__FILE__ \"%s\"" % source)

        self.source = source
        chunk = []
        enable = True
        iftrigger = False
        ifstack = []

        for x in lines:
            for i, tok in enumerate(x):
                if tok.type not in self.t_WS: break
            if tok.value == '#':
                # Preprocessor directive

                for tok in x:
                    if tok in self.t_WS and '\n' in tok.value:
                        chunk.append(tok)

                dirtokens = self.tokenstrip(x[i+1:])
                if dirtokens:
                    name = dirtokens[0].value
                    args = self.tokenstrip(dirtokens[1:])
                else:
                    name = ""
                    args = []

                if name == 'define':
                    if enable:
                        for tok in self.expand_macros(chunk):
                            yield tok
                        chunk = []
                        self.define(args)
                elif name == 'include':
                    if enable:
                        for tok in self.expand_macros(chunk):
                            yield tok
                        chunk = []
                        oldfile = self.macros['__FILE__']
                        for tok in self.include(args):
                            yield tok
                        self.macros['__FILE__'] = oldfile
                        self.source = source
                elif name == 'undef':
                    if enable:
                        for tok in self.expand_macros(chunk):
                            yield tok
                        chunk = []
                        self.undef(args)
                elif name == 'ifdef':
                    ifstack.append((enable, iftrigger))
                    if enable:
                        if not args[0].value in self.macros:
                            enable = False
                            iftrigger = False
                        else:
                            iftrigger = True
                elif name == 'ifndef':
                    ifstack.append((enable, iftrigger))
                    if enable:
                        if args[0].value in self.macros:
                            enable = False
                            iftrigger = False
                        else:
                            iftrigger = True
                elif name == 'if':
                    ifstack.append((enable, iftrigger))
                    if enable:
                        result = self.evalexpr(args)
                        if not result:
                            enable = False
                            iftrigger = False
                        else:
                            iftrigger = True
                elif name == 'elif':
                    if ifstack:
                        if ifstack[-1][0]:       # We only pay attention if outer "if" allows this
                            if enable:           # If already true, we flip enable False
                                enable = False
                            elif not iftrigger:  # If False, but not triggered yet, we'll check expression
                                result = self.evalexpr(args)
                                if result:
                                    enable = True
                                    iftrigger = True
                    else:
                        self.error(self.source, dirtokens[0].lineno, "Misplaced #elif")

                elif name == 'else':
                    if ifstack:
                        if ifstack[-1][0]:
                            if enable:
                                enable = False
                            elif not iftrigger:
                                enable = True
                                iftrigger = True
                    else:
                        self.error(self.source, dirtokens[0].lineno, "Misplaced #else")

                elif name == 'endif':
                    if ifstack:
                        enable, iftrigger = ifstack.pop()
                    else:
                        self.error(self.source, dirtokens[0].lineno, "Misplaced #endif")
                else:
                    # Unknown preprocessor directive
                    pass

            else:
                # Normal text
                if enable:
                    chunk.extend(x)

        for tok in self.expand_macros(chunk):
            yield tok
        chunk = []
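The conditional-compilation bookkeeping in `parsegen` boils down to a stack of `(enable, iftrigger)` pairs: `enable` gates output, and `iftrigger` records whether any branch of the current conditional has already fired. A minimal line-based model of the `#ifdef`/`#else`/`#endif` subset (`filter_lines` is a hypothetical helper, not part of `cpp.py`):

```python
def filter_lines(lines, defined):
    # Minimal model of parsegen's conditional handling above: a stack
    # of (enable, iftrigger) pairs decides which lines survive.
    out, enable, iftrigger, stack = [], True, False, []
    for line in lines:
        if line.startswith('#ifdef '):
            stack.append((enable, iftrigger))
            if enable:
                iftrigger = line.split()[1] in defined
                enable = iftrigger
        elif line == '#else':
            if stack[-1][0]:
                if enable:                 # branch already taken: switch off
                    enable = False
                elif not iftrigger:        # nothing fired yet: take this branch
                    enable, iftrigger = True, True
        elif line == '#endif':
            enable, iftrigger = stack.pop()
        elif enable:
            out.append(line)
    return out

print(filter_lines(['#ifdef A', 'yes', '#else', 'no', '#endif'], {'A'}))  # -> ['yes']
```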
    # ----------------------------------------------------------------------
    # include()
    #
    # Implementation of file-inclusion
    # ----------------------------------------------------------------------

    def include(self, tokens):
        # Try to extract the filename and then process an include file
        if not tokens:
            return
        if tokens:
            if tokens[0].value != '<' and tokens[0].type != self.t_STRING:
                tokens = self.expand_macros(tokens)

            if tokens[0].value == '<':
                # Include <...>
                i = 1
                while i < len(tokens):
                    if tokens[i].value == '>':
                        break
                    i += 1
                else:
                    print("Malformed #include <...>")
                    return
                filename = "".join([x.value for x in tokens[1:i]])
                path = self.path + [""] + self.temp_path
            elif tokens[0].type == self.t_STRING:
                filename = tokens[0].value[1:-1]
                path = self.temp_path + [""] + self.path
            else:
                print("Malformed #include statement")
                return
        for p in path:
            iname = os.path.join(p, filename)
            try:
                data = open(iname, "r").read()
                dname = os.path.dirname(iname)
                if dname:
                    self.temp_path.insert(0, dname)
                for tok in self.parsegen(data, filename):
                    yield tok
                if dname:
                    del self.temp_path[0]
                break
            except IOError:
                pass
        else:
            print("Couldn't find '%s'" % filename)
    # ----------------------------------------------------------------------
    # define()
    #
    # Define a new macro
    # ----------------------------------------------------------------------

    def define(self, tokens):
        if isinstance(tokens, (str, unicode)):
            tokens = self.tokenize(tokens)

        linetok = tokens
        try:
            name = linetok[0]
            if len(linetok) > 1:
                mtype = linetok[1]
            else:
                mtype = None
            if not mtype:
                m = Macro(name.value, [])
                self.macros[name.value] = m
            elif mtype.type in self.t_WS:
                # A normal macro
                m = Macro(name.value, self.tokenstrip(linetok[2:]))
                self.macros[name.value] = m
            elif mtype.value == '(':
                # A macro with arguments
                tokcount, args, positions = self.collect_args(linetok[1:])
                variadic = False
                for a in args:
                    if variadic:
                        print("No more arguments may follow a variadic argument")
                        break
                    astr = "".join([str(_i.value) for _i in a])
                    if astr == "...":
                        a[0].type = self.t_ID
                        a[0].value = '__VA_ARGS__'
                        variadic = True
                        del a[1:]
                        continue
                    elif astr[-3:] == "..." and a[0].type == self.t_ID:
                        variadic = True
                        del a[1:]
                        # If, for some reason, "." is part of the identifier, strip off the name for the purposes
                        # of macro expansion
                        if a[0].value[-3:] == '...':
                            a[0].value = a[0].value[:-3]
                        continue
                    if len(a) > 1 or a[0].type != self.t_ID:
                        print("Invalid macro argument")
                        break
                else:
                    mvalue = self.tokenstrip(linetok[1+tokcount:])
                    i = 0
                    while i < len(mvalue):
                        if i+1 < len(mvalue):
                            if mvalue[i].type in self.t_WS and mvalue[i+1].value == '##':
                                del mvalue[i]
                                continue
                            elif mvalue[i].value == '##' and mvalue[i+1].type in self.t_WS:
                                del mvalue[i+1]
                        i += 1
                    m = Macro(name.value, mvalue, [x[0].value for x in args], variadic)
                    self.macro_prescan(m)
                    self.macros[name.value] = m
            else:
                print("Bad macro definition")
        except LookupError:
            print("Bad macro definition")

    # ----------------------------------------------------------------------
    # undef()
    #
    # Undefine a macro
    # ----------------------------------------------------------------------

    def undef(self,tokens):
        id = tokens[0].value
        try:
            del self.macros[id]
        except LookupError:
            pass

    # ----------------------------------------------------------------------
    # parse()
    #
    # Parse input text.
    # ----------------------------------------------------------------------
    def parse(self,input,source=None,ignore={}):
        self.ignore = ignore
        self.parser = self.parsegen(input,source)

    # ----------------------------------------------------------------------
    # token()
    #
    # Method to return individual tokens
    # ----------------------------------------------------------------------
    def token(self):
        try:
            while True:
                tok = next(self.parser)
                if tok.type not in self.ignore: return tok
        except StopIteration:
            self.parser = None
        return None

if __name__ == '__main__':
    import ply.lex as lex
    lexer = lex.lex()

    # Run a preprocessor
    import sys
    f = open(sys.argv[1])
    input = f.read()

    p = Preprocessor(lexer)
    p.parse(input,sys.argv[1])
    while True:
        tok = p.token()
        if not tok: break
        print(p.source, tok)
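The whitespace-stripping pass around `##` in `define()` above can be shown in isolation; this sketch uses plain `(type, value)` tuples as hypothetical stand-ins for the PLY token objects:

```python
# Minimal sketch of the '##' whitespace-stripping loop above, using
# (type, value) tuples in place of PLY token objects (hypothetical shapes).
def strip_paste_ws(toks):
    toks = list(toks)
    i = 0
    while i < len(toks):
        if i + 1 < len(toks):
            if toks[i][0] == 'WS' and toks[i + 1][1] == '##':
                del toks[i]          # drop whitespace before '##'
                continue
            elif toks[i][1] == '##' and toks[i + 1][0] == 'WS':
                del toks[i + 1]      # drop whitespace after '##'
        i += 1
    return toks

tokens = [('ID', 'a'), ('WS', ' '), ('OP', '##'), ('WS', ' '), ('ID', 'b')]
print(strip_paste_ws(tokens))
```

Deleting in place and `continue`-ing without advancing `i` is what lets a run of whitespace tokens before a `##` collapse in one pass.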
@@ -1,133 +0,0 @@

# ----------------------------------------------------------------------
# ctokens.py
#
# Token specifications for symbols in ANSI C and C++. This file is
# meant to be used as a library in other tokenizers.
# ----------------------------------------------------------------------

# Reserved words

tokens = [
    # Literals (identifier, integer constant, float constant, string constant, char const)
    'ID', 'TYPEID', 'ICONST', 'FCONST', 'SCONST', 'CCONST',

    # Operators (+,-,*,/,%,|,&,~,^,<<,>>, ||, &&, !, <, <=, >, >=, ==, !=)
    'PLUS', 'MINUS', 'TIMES', 'DIVIDE', 'MOD',
    'OR', 'AND', 'NOT', 'XOR', 'LSHIFT', 'RSHIFT',
    'LOR', 'LAND', 'LNOT',
    'LT', 'LE', 'GT', 'GE', 'EQ', 'NE',

    # Assignment (=, *=, /=, %=, +=, -=, <<=, >>=, &=, ^=, |=)
    'EQUALS', 'TIMESEQUAL', 'DIVEQUAL', 'MODEQUAL', 'PLUSEQUAL', 'MINUSEQUAL',
    'LSHIFTEQUAL','RSHIFTEQUAL', 'ANDEQUAL', 'XOREQUAL', 'OREQUAL',

    # Increment/decrement (++,--)
    'PLUSPLUS', 'MINUSMINUS',

    # Structure dereference (->)
    'ARROW',

    # Ternary operator (?)
    'TERNARY',

    # Delimiters ( ) [ ] { } , . ; :
    'LPAREN', 'RPAREN',
    'LBRACKET', 'RBRACKET',
    'LBRACE', 'RBRACE',
    'COMMA', 'PERIOD', 'SEMI', 'COLON',

    # Ellipsis (...)
    'ELLIPSIS',
]

# Operators
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_MOD = r'%'
t_OR = r'\|'
t_AND = r'&'
t_NOT = r'~'
t_XOR = r'\^'
t_LSHIFT = r'<<'
t_RSHIFT = r'>>'
t_LOR = r'\|\|'
t_LAND = r'&&'
t_LNOT = r'!'
t_LT = r'<'
t_GT = r'>'
t_LE = r'<='
t_GE = r'>='
t_EQ = r'=='
t_NE = r'!='

# Assignment operators

t_EQUALS = r'='
t_TIMESEQUAL = r'\*='
t_DIVEQUAL = r'/='
t_MODEQUAL = r'%='
t_PLUSEQUAL = r'\+='
t_MINUSEQUAL = r'-='
t_LSHIFTEQUAL = r'<<='
t_RSHIFTEQUAL = r'>>='
t_ANDEQUAL = r'&='
t_OREQUAL = r'\|='
t_XOREQUAL = r'\^='

# Increment/decrement
t_PLUSPLUS = r'\+\+'
t_MINUSMINUS = r'--'

# ->
t_ARROW = r'->'

# ?
t_TERNARY = r'\?'

# Delimiters
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_LBRACKET = r'\['
t_RBRACKET = r'\]'
t_LBRACE = r'\{'
t_RBRACE = r'\}'
t_COMMA = r','
t_PERIOD = r'\.'
t_SEMI = r';'
t_COLON = r':'
t_ELLIPSIS = r'\.\.\.'

# Identifiers
t_ID = r'[A-Za-z_][A-Za-z0-9_]*'

# Integer literal
t_ICONST = r'\d+([uU]|[lL]|[uU][lL]|[lL][uU])?'

# Floating literal
t_FCONST = r'((\d+)(\.\d+)(e(\+|-)?(\d+))? | (\d+)e(\+|-)?(\d+))([lL]|[fF])?'

# String literal
t_SCONST = r'\"([^\\\n]|(\\.))*?\"'

# Character constant 'c' or L'c'
t_CCONST = r'(L)?\'([^\\\n]|(\\.))*?\''

# Comment (C-Style)
def t_COMMENT(t):
    r'/\*(.|\n)*?\*/'
    t.lexer.lineno += t.value.count('\n')
    return t

# Comment (C++-Style)
def t_CPPCOMMENT(t):
    r'//.*\n'
    t.lexer.lineno += 1
    return t
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,187 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

import argparse
import importlib
import os

from ripple.ledger import LedgerNumber
from ripple.util import File
from ripple.util import Log
from ripple.util import PrettyPrint
from ripple.util import Range
from ripple.util.Function import Function

NAME = 'LedgerTool'
VERSION = '0.1'
NONE = '(none)'

_parser = argparse.ArgumentParser(
    prog=NAME,
    description='Retrieve and process Ripple ledgers.',
    epilog=LedgerNumber.HELP,
)

# Positional arguments.
_parser.add_argument(
    'command',
    nargs='*',
    help='Command to execute.'
)

# Flag arguments.
_parser.add_argument(
    '--binary',
    action='store_true',
    help='If true, searches are binary - by default linear search is used.',
)

_parser.add_argument(
    '--cache',
    default='~/.local/share/ripple/ledger',
    help='The cache directory.',
)

_parser.add_argument(
    '--complete',
    action='store_true',
    help='If set, only match complete ledgers.',
)

_parser.add_argument(
    '--condition', '-c',
    help='The name of a condition function used to match ledgers.',
)

_parser.add_argument(
    '--config',
    help='The rippled configuration file name.',
)

_parser.add_argument(
    '--database', '-d',
    nargs='*',
    default=NONE,
    help='Specify a database.',
)

_parser.add_argument(
    '--display',
    help='Specify a function to display ledgers.',
)

_parser.add_argument(
    '--full', '-f',
    action='store_true',
    help='If true, request full ledgers.',
)

_parser.add_argument(
    '--indent', '-i',
    type=int,
    default=2,
    help='How many spaces to indent when displaying in JSON.',
)

_parser.add_argument(
    '--offline', '-o',
    action='store_true',
    help='If true, work entirely from cache, do not try to contact the server.',
)

_parser.add_argument(
    '--position', '-p',
    choices=['all', 'first', 'last'],
    default='last',
    help='Select which ledgers to display.',
)

_parser.add_argument(
    '--rippled', '-r',
    help='The filename of a rippled binary for retrieving ledgers.',
)

_parser.add_argument(
    '--server', '-s',
    help='IP address of a rippled JSON server.',
)

_parser.add_argument(
    '--utc', '-u',
    action='store_true',
    help='If true, display times in UTC rather than local time.',
)

_parser.add_argument(
    '--validations',
    default=3,
    help='The number of validations needed before considering a ledger valid.',
)

_parser.add_argument(
    '--version',
    action='version',
    version='%(prog)s ' + VERSION,
    help='Print the current version of %(prog)s',
)

_parser.add_argument(
    '--verbose', '-v',
    action='store_true',
    help='If true, give status messages on stderr.',
)

_parser.add_argument(
    '--window', '-w',
    type=int,
    default=0,
    help='How many ledgers to display around the matching ledger.',
)

_parser.add_argument(
    '--yes', '-y',
    action='store_true',
    help='If true, don\'t ask for confirmation on large commands.',
)

# Read the arguments from the command line.
ARGS = _parser.parse_args()
ARGS.NONE = NONE

Log.VERBOSE = ARGS.verbose

# Now remove any items that look like ledger numbers from the command line.
_command = ARGS.command
_parts = (ARGS.command, ARGS.ledgers) = ([], [])

for c in _command:
    _parts[Range.is_range(c, *LedgerNumber.LEDGERS)].append(c)

ARGS.command = ARGS.command or ['print' if ARGS.ledgers else 'info']

ARGS.cache = File.normalize(ARGS.cache)

if not ARGS.ledgers:
    if ARGS.condition:
        Log.warn('--condition needs a range of ledgers')
    if ARGS.display:
        Log.warn('--display needs a range of ledgers')

ARGS.condition = Function(
    ARGS.condition or 'all_ledgers', 'ripple.ledger.conditions')
ARGS.display = Function(
    ARGS.display or 'ledger_number', 'ripple.ledger.displays')

if ARGS.window < 0:
    raise ValueError('Window cannot be negative: --window=%d' %
                     ARGS.window)

PrettyPrint.INDENT = (ARGS.indent * ' ')

_loaders = (ARGS.database != NONE) + bool(ARGS.rippled) + bool(ARGS.server)

if not _loaders:
    ARGS.rippled = 'rippled'

elif _loaders > 1:
    raise ValueError('At most one of --database, --rippled and --server '
                     'may be specified')
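The `_parts[Range.is_range(...)].append(c)` line above relies on `False`/`True` indexing a pair as 0/1 to split the command line in a single pass. A standalone sketch of that dispatch, with a hypothetical `is_range` stand-in for `Range.is_range`:

```python
# Bool-as-index dispatch, as used above to split commands from ledger numbers.
# is_range is a hypothetical stand-in for Range.is_range.
def is_range(word):
    return word.isdigit()

command, ledgers = parts = ([], [])
for word in ['info', '100', '200', 'print']:
    parts[is_range(word)].append(word)   # False -> index 0, True -> index 1

print(command, ledgers)
```

This works because `bool` is a subclass of `int` in Python, so `parts[False]` is `parts[0]` and `parts[True]` is `parts[1]`.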
@@ -1,78 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

import json
import os
import subprocess

from ripple.ledger.Args import ARGS
from ripple.util import ConfigFile
from ripple.util import Database
from ripple.util import File
from ripple.util import Log
from ripple.util import Range

LEDGER_QUERY = """
SELECT
    L.*, count(1) validations
FROM
    (select LedgerHash, LedgerSeq from Ledgers ORDER BY LedgerSeq DESC) L
    JOIN Validations V
    ON (V.LedgerHash = L.LedgerHash)
GROUP BY L.LedgerHash
HAVING validations >= {validation_quorum}
ORDER BY 2;
"""

COMPLETE_QUERY = """
SELECT
    L.LedgerSeq, count(*) validations
FROM
    (select LedgerHash, LedgerSeq from Ledgers ORDER BY LedgerSeq) L
    JOIN Validations V
    ON (V.LedgerHash = L.LedgerHash)
GROUP BY L.LedgerHash
HAVING validations >= :validation_quorum
ORDER BY 2;
"""

_DATABASE_NAME = 'ledger.db'

USE_PLACEHOLDERS = False

class DatabaseReader(object):
    def __init__(self, config):
        assert ARGS.database != ARGS.NONE
        database = ARGS.database or config['database_path']
        if not database.endswith(_DATABASE_NAME):
            database = os.path.join(database, _DATABASE_NAME)
        if USE_PLACEHOLDERS:
            cursor = Database.fetchall(
                database, COMPLETE_QUERY, config)
        else:
            cursor = Database.fetchall(
                database, LEDGER_QUERY.format(**config), {})
        self.complete = [c[1] for c in cursor]

    def name_to_ledger_index(self, ledger_name, is_full=False):
        if not self.complete:
            return None
        if ledger_name == 'closed':
            return self.complete[-1]
        if ledger_name == 'current':
            return None
        if ledger_name == 'validated':
            return self.complete[-1]

    def get_ledger(self, name, is_full=False):
        cmd = ['ledger', str(name)]
        if is_full:
            cmd.append('full')
        response = self._command(*cmd)
        result = response.get('ledger')
        if result:
            return result
        error = response['error']
        etext = _ERROR_TEXT.get(error)
        if etext:
            error = '%s (%s)' % (etext, error)
        Log.fatal(_ERROR_TEXT.get(error, error))
@@ -1,18 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

from ripple.util import Range

FIRST_EVER = 32570

LEDGERS = {
    'closed': 'the most recently closed ledger',
    'current': 'the current ledger',
    'first': 'the first complete ledger on this server',
    'last': 'the last complete ledger on this server',
    'validated': 'the most recently validated ledger',
}

HELP = """
Ledgers are either represented by a number, or one of the special ledgers;
""" + ',\n'.join('%s, %s' % (k, v) for k, v in sorted(LEDGERS.items()))
@@ -1,68 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

import json
import os
import subprocess

from ripple.ledger.Args import ARGS
from ripple.util import File
from ripple.util import Log
from ripple.util import Range

_ERROR_CODE_REASON = {
    62: 'No rippled server is running.',
}

_ERROR_TEXT = {
    'lgrNotFound': 'The ledger you requested was not found.',
    'noCurrent': 'The server has no current ledger.',
    'noNetwork': 'The server did not respond to your request.',
}

_DEFAULT_ERROR_ = "Couldn't connect to server."

class RippledReader(object):
    def __init__(self, config):
        fname = File.normalize(ARGS.rippled)
        if not os.path.exists(fname):
            raise Exception('No rippled found at %s.' % fname)
        self.cmd = [fname]
        if ARGS.config:
            self.cmd.extend(['--conf', File.normalize(ARGS.config)])
        self.info = self._command('server_info')['info']
        c = self.info.get('complete_ledgers')
        if c == 'empty':
            self.complete = []
        else:
            self.complete = sorted(Range.from_string(c))

    def name_to_ledger_index(self, ledger_name, is_full=False):
        return self.get_ledger(ledger_name, is_full)['ledger_index']

    def get_ledger(self, name, is_full=False):
        cmd = ['ledger', str(name)]
        if is_full:
            cmd.append('full')
        response = self._command(*cmd)
        result = response.get('ledger')
        if result:
            return result
        error = response['error']
        etext = _ERROR_TEXT.get(error)
        if etext:
            error = '%s (%s)' % (etext, error)
        Log.fatal(_ERROR_TEXT.get(error, error))

    def _command(self, *cmds):
        cmd = self.cmd + list(cmds)
        try:
            data = subprocess.check_output(cmd, stderr=subprocess.PIPE)
        except subprocess.CalledProcessError as e:
            raise Exception(_ERROR_CODE_REASON.get(
                e.returncode, _DEFAULT_ERROR_))

        part = json.loads(data)
        try:
            return part['result']
        except:
            raise ValueError(part.get('error', 'unknown error'))
@@ -1,52 +0,0 @@

# Constants from ripple/protocol/SField.h

# special types
STI_UNKNOWN = -2
STI_DONE = -1
STI_NOTPRESENT = 0

# types (common)
STI_UINT16 = 1
STI_UINT32 = 2
STI_UINT64 = 3
STI_HASH128 = 4
STI_HASH256 = 5
STI_AMOUNT = 6
STI_VL = 7
STI_ACCOUNT = 8
# 9-13 are reserved
STI_OBJECT = 14
STI_ARRAY = 15

# types (uncommon)
STI_UINT8 = 16
STI_HASH160 = 17
STI_PATHSET = 18
STI_VECTOR256 = 19

# high level types
# cannot be serialized inside other types
STI_TRANSACTION = 10001
STI_LEDGERENTRY = 10002
STI_VALIDATION = 10003
STI_METADATA = 10004

def field_code(sti, name):
    if sti < 16:
        if name < 16:
            bytes = [(sti << 4) + name]
        else:
            bytes = [sti << 4, name]
    elif name < 16:
        bytes = [name, sti]
    else:
        bytes = [0, sti, name]
    return ''.join(chr(i) for i in bytes)

# Selected constants from SField.cpp

sfSequence = field_code(STI_UINT32, 4)
sfPublicKey = field_code(STI_VL, 1)
sfSigningPubKey = field_code(STI_VL, 3)
sfSignature = field_code(STI_VL, 6)
sfMasterSignature = field_code(STI_VL, 18)
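The packing in `field_code` above follows the serialized field-ID layout: a type code and a field name below 16 share a single byte, larger values spill into their own bytes, and a leading 0 byte acts as an escape when both are large. A self-contained copy to check the byte shapes:

```python
# Same packing as field_code above, copied so it can run standalone.
def field_code(sti, name):
    if sti < 16:
        if name < 16:
            code = [(sti << 4) + name]   # both 4-bit: one shared byte
        else:
            code = [sti << 4, name]      # large name: its own byte
    elif name < 16:
        code = [name, sti]               # large type: name byte first
    else:
        code = [0, sti, name]            # both large: 0-byte escape
    return ''.join(chr(i) for i in code)

# STI_UINT32 == 2 and the Sequence field name 4 pack into the single byte 0x24.
print(repr(field_code(2, 4)))
```

Note the result is a string of raw character codes (this module predates a bytes-based API), which is why the constants like `sfSequence` are compared as strings.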
@@ -1,24 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

import sys

from ripple.ledger.Args import ARGS
from ripple.util import Log
from ripple.util import Range
from ripple.util import Search

def search(server):
    """Yields a stream of ledger numbers that match the given condition."""
    condition = lambda number: ARGS.condition(server, number)
    ledgers = server.ledgers
    if ARGS.binary:
        try:
            position = Search.FIRST if ARGS.position == 'first' else Search.LAST
            yield Search.binary_search(
                ledgers[0], ledgers[-1], condition, position)
        except:
            Log.fatal('No ledgers matching condition "%s".' % condition,
                      file=sys.stderr)
    else:
        for x in Search.linear_search(ledgers, condition):
            yield x
@@ -1,55 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

import json
import os

from ripple.ledger import DatabaseReader, RippledReader
from ripple.ledger.Args import ARGS
from ripple.util.FileCache import FileCache
from ripple.util import ConfigFile
from ripple.util import File
from ripple.util import Range

class Server(object):
    def __init__(self):
        cfg_file = File.normalize(ARGS.config or 'rippled.cfg')
        self.config = ConfigFile.read(open(cfg_file))
        if ARGS.database != ARGS.NONE:
            reader = DatabaseReader.DatabaseReader(self.config)
        else:
            reader = RippledReader.RippledReader(self.config)

        self.reader = reader
        self.complete = reader.complete

        names = {
            'closed': reader.name_to_ledger_index('closed'),
            'current': reader.name_to_ledger_index('current'),
            'validated': reader.name_to_ledger_index('validated'),
            'first': self.complete[0] if self.complete else None,
            'last': self.complete[-1] if self.complete else None,
        }
        self.__dict__.update(names)
        self.ledgers = sorted(Range.join_ranges(*ARGS.ledgers, **names))

        def make_cache(is_full):
            name = 'full' if is_full else 'summary'
            filepath = os.path.join(ARGS.cache, name)
            creator = lambda n: reader.get_ledger(n, is_full)
            return FileCache(filepath, creator)
        self._caches = [make_cache(False), make_cache(True)]

    def info(self):
        return self.reader.info

    def cache(self, is_full):
        return self._caches[is_full]

    def get_ledger(self, number, is_full=False):
        num = int(number)
        save_in_cache = num in self.complete
        can_create = (not ARGS.offline and
                      self.complete and
                      self.complete[0] <= num - 1)
        cache = self.cache(is_full)
        return cache.get_data(number, save_in_cache, can_create)
@@ -1,5 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

class ServerReader(object):
    def __init__(self, config):
        raise ValueError('Direct server connections are not yet implemented.')
@@ -1,34 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

from ripple.ledger.Args import ARGS
from ripple.util import Log
from ripple.util import Range
from ripple.util.PrettyPrint import pretty_print

SAFE = True

HELP = """cache
return server_info"""

def cache(server, clear=False):
    cache = server.cache(ARGS.full)
    name = ['summary', 'full'][ARGS.full]
    files = cache.file_count()
    if not files:
        Log.error('No files in %s cache.' % name)

    elif clear:
        if not clear.strip() == 'clear':
            raise Exception("Don't understand 'clear %s'." % clear)
        if not ARGS.yes:
            yes = raw_input('OK to clear %s cache? (y/N) ' % name)
            if not yes.lower().startswith('y'):
                Log.out('Cancelled.')
                return
        cache.clear(ARGS.full)
        Log.out('%s cache cleared - %d file%s deleted.' %
                (name.capitalize(), files, '' if files == 1 else 's'))

    else:
        caches = (int(c) for c in cache.cache_list())
        Log.out(Range.to_string(caches))
@@ -1,21 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

from ripple.ledger.Args import ARGS
from ripple.util import Log
from ripple.util import Range
from ripple.util.PrettyPrint import pretty_print

SAFE = True

HELP = 'info - return server_info'

def info(server):
    Log.out('first =', server.first)
    Log.out('last =', server.last)
    Log.out('closed =', server.closed)
    Log.out('current =', server.current)
    Log.out('validated =', server.validated)
    Log.out('complete =', Range.to_string(server.complete))

    if ARGS.full:
        Log.out(pretty_print(server.info()))
@@ -1,15 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

from ripple.ledger.Args import ARGS
from ripple.ledger import SearchLedgers

import json

SAFE = True

HELP = """print

Print the ledgers to stdout. The default command."""

def run_print(server):
    ARGS.display(print, server, SearchLedgers.search(server))
@@ -1,4 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

def all_ledgers(server, ledger_number):
    return True
@@ -1,89 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

from functools import wraps

import jsonpath_rw

from ripple.ledger.Args import ARGS
from ripple.util import Dict
from ripple.util import Log
from ripple.util import Range
from ripple.util.Decimal import Decimal
from ripple.util.PrettyPrint import pretty_print, Streamer

TRANSACT_FIELDS = (
    'accepted',
    'close_time_human',
    'closed',
    'ledger_index',
    'total_coins',
    'transactions',
)

LEDGER_FIELDS = (
    'accepted',
    'accountState',
    'close_time_human',
    'closed',
    'ledger_index',
    'total_coins',
    'transactions',
)

def _dict_filter(d, keys):
    return dict((k, v) for (k, v) in d.items() if k in keys)

def ledger_number(print, server, numbers):
    print(Range.to_string(numbers))

def display(f):
    @wraps(f)
    def wrapper(printer, server, numbers, *args):
        streamer = Streamer(printer=printer)
        for number in numbers:
            ledger = server.get_ledger(number, ARGS.full)
            if ledger:
                streamer.add(number, f(ledger, *args))
        streamer.finish()
    return wrapper

def extractor(f):
    @wraps(f)
    def wrapper(printer, server, numbers, *paths):
        try:
            find = jsonpath_rw.parse('|'.join(paths)).find
        except:
            raise ValueError("Can't understand jsonpath '%s'." % '|'.join(paths))
        def fn(ledger, *args):
            return f(find(ledger), *args)
        display(fn)(printer, server, numbers)
    return wrapper

@display
def ledger(ledger, full=False):
    if ARGS.full:
        if full:
            return ledger
        ledger = Dict.prune(ledger, 1, False)
    return _dict_filter(ledger, LEDGER_FIELDS)

@display
def prune(ledger, level=1):
    return Dict.prune(ledger, level, False)

@display
def transact(ledger):
    return _dict_filter(ledger, TRANSACT_FIELDS)

@extractor
def extract(finds):
    return dict((str(f.full_path), str(f.value)) for f in finds)

@extractor
def sum(finds):
    d = Decimal()
    for f in finds:
        d.accumulate(f.value)
    return [str(d), len(finds)]
@@ -1,94 +0,0 @@

#!/usr/bin/env python

from hashlib import sha256

#
# Human strings are base-58 with a
# version prefix and a checksum suffix.
#
# Copied from ripple/protocol/RippleAddress.h
#

VER_NONE = 1
VER_NODE_PUBLIC = 28
VER_NODE_PRIVATE = 32
VER_ACCOUNT_ID = 0
VER_ACCOUNT_PUBLIC = 35
VER_ACCOUNT_PRIVATE = 34
VER_FAMILY_GENERATOR = 41
VER_FAMILY_SEED = 33

ALPHABET = 'rpshnaf39wBUDNEGHJKLM4PQRST7VWXYZ2bcdeCg65jkm8oFqi1tuvAxyz'

VERSION_NAME = {
    VER_NONE: 'VER_NONE',
    VER_NODE_PUBLIC: 'VER_NODE_PUBLIC',
    VER_NODE_PRIVATE: 'VER_NODE_PRIVATE',
    VER_ACCOUNT_ID: 'VER_ACCOUNT_ID',
    VER_ACCOUNT_PUBLIC: 'VER_ACCOUNT_PUBLIC',
    VER_ACCOUNT_PRIVATE: 'VER_ACCOUNT_PRIVATE',
    VER_FAMILY_GENERATOR: 'VER_FAMILY_GENERATOR',
    VER_FAMILY_SEED: 'VER_FAMILY_SEED'
}

class Alphabet(object):
    def __init__(self, radix, digit_to_char, char_to_digit):
        self.radix = radix
        self.digit_to_char = digit_to_char
        self.char_to_digit = char_to_digit

    def transcode_from(self, s, source_alphabet):
        n, zero_count = source_alphabet._digits_to_number(s)
        digits = []
        while n > 0:
            n, digit = divmod(n, self.radix)
            digits.append(self.digit_to_char(digit))

        s = ''.join(digits)
        return self.digit_to_char(0) * zero_count + s[::-1]

    def _digits_to_number(self, digits):
        stripped = digits.lstrip(self.digit_to_char(0))
        n = 0
        for d in stripped:
            n *= self.radix
            n += self.char_to_digit(d)
        return n, len(digits) - len(stripped)

_INVERSE_INDEX = dict((c, i) for (i, c) in enumerate(ALPHABET))

# In base 58 encoding, the digits come from the ALPHABET string.
BASE58 = Alphabet(len(ALPHABET), ALPHABET.__getitem__, _INVERSE_INDEX.get)

# In base 256 encoding, each digit is just a character between 0 and 255.
BASE256 = Alphabet(256, chr, ord)

def encode(b):
    return BASE58.transcode_from(b, BASE256)

def decode(b):
    return BASE256.transcode_from(b, BASE58)

def checksum(b):
    """Returns a 4-byte checksum of a binary."""
    return sha256(sha256(b).digest()).digest()[:4]

def encode_version(ver, b):
    """Encodes a version encoding and a binary as a human string."""
    b = chr(ver) + b
    return encode(b + checksum(b))

def decode_version(s):
    """Decodes a human base-58 string into its version encoding and binary."""
    b = decode(s)
    body, check = b[:-4], b[-4:]
    assert check == checksum(body), ('Bad checksum for', s)
    return ord(body[0]), body[1:]

def version_name(ver):
    return VERSION_NAME.get(ver) or ('(unknown version %s)' % ver)

def check_version(version, expected):
    if version != expected:
        raise ValueError('Expected version %s but got %s' % (
            version_name(expected), version_name(version)))
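The two `Alphabet` instances above make `encode` and `decode` mutual inverses, with leading zero bytes preserved as leading `'r'` characters (the alphabet's zero digit). A trimmed, self-contained copy of just the transcoding core, enough to show the round trip:

```python
# Trimmed copy of the Alphabet transcoder above; checksum/version logic omitted.
ALPHABET = 'rpshnaf39wBUDNEGHJKLM4PQRST7VWXYZ2bcdeCg65jkm8oFqi1tuvAxyz'

class Alphabet(object):
    def __init__(self, radix, digit_to_char, char_to_digit):
        self.radix = radix
        self.digit_to_char = digit_to_char
        self.char_to_digit = char_to_digit

    def transcode_from(self, s, source):
        # Read s as a big number in the source base, re-emit it in ours,
        # preserving leading zero digits explicitly.
        n, zero_count = source._digits_to_number(s)
        digits = []
        while n > 0:
            n, digit = divmod(n, self.radix)
            digits.append(self.digit_to_char(digit))
        return self.digit_to_char(0) * zero_count + ''.join(digits)[::-1]

    def _digits_to_number(self, digits):
        stripped = digits.lstrip(self.digit_to_char(0))
        n = 0
        for d in stripped:
            n = n * self.radix + self.char_to_digit(d)
        return n, len(digits) - len(stripped)

_INVERSE = dict((c, i) for (i, c) in enumerate(ALPHABET))
BASE58 = Alphabet(len(ALPHABET), ALPHABET.__getitem__, _INVERSE.get)
BASE256 = Alphabet(256, chr, ord)

encoded = BASE58.transcode_from('hello', BASE256)
print(encoded)
```

The round trip only holds for inputs whose characters are below code point 256, since `BASE256` uses `chr`/`ord` as its digit maps.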
@@ -1,40 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

from collections import defaultdict

class Cache(object):
    def __init__(self):
        self._value_to_index = {}
        self._index_to_value = []

    def value_to_index(self, value, **kwds):
        index = self._value_to_index.get(value, None)
        if index is None:
            index = len(self._index_to_value)
            self._index_to_value.append((value, kwds))
            self._value_to_index[value] = index
        return index

    def index_to_value(self, index):
        return self._index_to_value[index]

def NamedCache():
    return defaultdict(Cache)

def cache_by_key(d, keyfunc=None, exclude=None):
    cache = defaultdict(Cache)
    exclude = exclude or ()
    keyfunc = keyfunc or (lambda x: x)

    def visit(item):
        if isinstance(item, list):
            for i, x in enumerate(item):
                item[i] = visit(x)

        elif isinstance(item, dict):
            for k, v in item.items():
                item[k] = visit(v)

        return item

    return cache
@@ -1,77 +0,0 @@
|
||||
from __future__ import absolute_import, division, print_function, unicode_literals
|
||||
|
||||
# Code taken from github/rec/grit.
|
||||
|
||||
import os
|
||||
import sys
|
||||
|
||||
from collections import namedtuple
|
||||
|
||||
from ripple.ledger.Args import ARGS
|
||||
from ripple.util import Log

Command = namedtuple('Command', 'function help safe')


def make_command(module):
    name = module.__name__.split('.')[-1].lower()
    return name, Command(getattr(module, name, None) or
                         getattr(module, 'run_' + name),
                         getattr(module, 'HELP'),
                         getattr(module, 'SAFE', False))


class CommandList(object):
    def __init__(self, *args, **kwds):
        self.registry = {}
        self.register(*args, **kwds)

    def register(self, *modules, **kwds):
        for module in modules:
            name, command = make_command(module)
            self.registry[name] = command

        for k, v in kwds.items():
            if not isinstance(v, (list, tuple)):
                v = [v]
            self.register_one(k, *v)

    def keys(self):
        return self.registry.keys()

    def register_one(self, name, function, help='', safe=False):
        assert name not in self.registry
        self.registry[name] = Command(function, help, safe)

    def _get(self, command):
        command = command.lower()
        c = self.registry.get(command)
        if c:
            return command, c
        commands = [c for c in self.registry if c.startswith(command)]
        if len(commands) == 1:
            command = commands[0]
            return command, self.registry[command]
        if not commands:
            raise ValueError('No such command: %s. Commands are %s.' %
                             (command, ', '.join(sorted(self.registry))))
        if len(commands) > 1:
            raise ValueError('Command %s was ambiguous: %s.' %
                             (command, ', '.join(commands)))

    def get(self, command):
        return self._get(command)[1]

    def run(self, command, *args):
        return self.get(command).function(*args)

    def run_safe(self, command, *args):
        name, cmd = self._get(command)
        if not (ARGS.yes or cmd.safe):
            confirm = raw_input('OK to execute "rl %s %s"? (y/N) ' %
                                (name, ' '.join(args)))
            if not confirm.lower().startswith('y'):
                Log.error('Cancelled.')
                return
        cmd.function(*args)

    def help(self, command):
        return self.get(command).help()
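The `_get` lookup above resolves any unambiguous prefix of a registered command name. A minimal standalone sketch of the same idea (names here are illustrative, not from the original module):

```python
def resolve(registry, prefix):
    """Resolve a command name, allowing any unambiguous prefix."""
    prefix = prefix.lower()
    if prefix in registry:
        return prefix
    matches = [name for name in registry if name.startswith(prefix)]
    if len(matches) == 1:
        return matches[0]
    if not matches:
        raise ValueError('No such command: %s' % prefix)
    raise ValueError('Ambiguous command %s: %s' %
                     (prefix, ', '.join(sorted(matches))))


commands = {'start': None, 'stop': None, 'status': None}
print(resolve(commands, 'star'))  # 'star' is only a prefix of 'start'
```

Note that `'st'` would raise, since it matches `start`, `stop`, and `status`.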
@@ -1,54 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

import json

"""Ripple has a proprietary format for their .cfg files, so we need a reader for
them."""


def read(lines):
    sections = []
    section = []
    for line in lines:
        line = line.strip()
        if (not line) or line[0] == '#':
            continue
        if line.startswith('['):
            if section:
                sections.append(section)
                section = []
        section.append(line)
    if section:
        sections.append(section)

    result = {}
    for section in sections:
        option = section.pop(0)
        assert section, ('No value for option "%s".' % option)
        assert option.startswith('[') and option.endswith(']'), (
            'No option name in block "%s"' % option)
        option = option[1:-1]
        assert option not in result, 'Duplicate option "%s".' % option

        subdict = {}
        items = []
        for part in section:
            if '=' in part:
                assert not items, 'Dictionary mixed with list.'
                k, v = part.split('=', 1)
                assert k not in subdict, 'Repeated dictionary entry ' + k
                subdict[k] = v
            else:
                assert not subdict, 'List mixed with dictionary.'
                if part.startswith('{'):
                    items.append(json.loads(part))
                else:
                    words = part.split()
                    if len(words) > 1:
                        items.append(words)
                    else:
                        items.append(part)
        if len(items) == 1:
            result[option] = items[0]
        else:
            result[option] = items or subdict
    return result
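The shape of the format is easy to see in a simplified, self-contained sketch of the reader above (values are kept as raw strings here; the original also splits multi-word lines and parses `k=v` and JSON entries):

```python
def read_cfg(lines):
    """Minimal sketch of the rippled .cfg reader: '[name]' starts a
    section, following lines are its values; a single value is stored
    bare, multiple values as a list."""
    result, option, items = {}, None, []

    def flush():
        if option is not None:
            result[option] = items[0] if len(items) == 1 else items

    for line in (l.strip() for l in lines):
        if not line or line.startswith('#'):
            continue  # blank lines and comments are skipped
        if line.startswith('[') and line.endswith(']'):
            flush()
            option, items = line[1:-1], []
        else:
            items.append(line)
    flush()
    return result


cfg = read_cfg("""
# comment
[node_size]
medium
[ips_fixed]
10.0.0.1 51235
10.0.0.2 51235
""".splitlines())
print(cfg)
```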
@@ -1,12 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

import sqlite3


def fetchall(database, query, kwds):
    conn = sqlite3.connect(database)
    try:
        cursor = conn.execute(query, kwds)
        return cursor.fetchall()

    finally:
        conn.close()
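The `kwds` mapping is handed straight to `execute()`, so queries can use sqlite3's named-parameter form. An in-memory demonstration of that binding (table and values are made up for the example):

```python
import sqlite3

# Named parameters (':name') are bound from a mapping, mirroring how
# fetchall() above passes `kwds` straight to conn.execute().
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE ledgers (seq INTEGER)')
conn.executemany('INSERT INTO ledgers (seq) VALUES (?)', [(1,), (2,), (3,)])
rows = conn.execute('SELECT seq FROM ledgers WHERE seq >= :lo',
                    {'lo': 2}).fetchall()
print(rows)
```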
@@ -1,46 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

"""Fixed point numbers."""

POSITIONS = 10
POSITIONS_SHIFT = 10 ** POSITIONS


class Decimal(object):
    def __init__(self, desc='0'):
        if isinstance(desc, int):
            self.value = desc
            return
        if desc.startswith('-'):
            sign = -1
            desc = desc[1:]
        else:
            sign = 1
        parts = desc.split('.')
        if len(parts) == 1:
            parts.append('0')
        elif len(parts) > 2:
            raise Exception('Too many decimals in "%s"' % desc)
        number, decimal = parts
        # Fix the number of positions.
        decimal = (decimal + POSITIONS * '0')[:POSITIONS]
        self.value = sign * int(number + decimal)

    def accumulate(self, item):
        if not isinstance(item, Decimal):
            item = Decimal(item)
        self.value += item.value

    def __str__(self):
        if self.value >= 0:
            sign = ''
            value = self.value
        else:
            sign = '-'
            value = -self.value
        number = value // POSITIONS_SHIFT
        decimal = value % POSITIONS_SHIFT

        if decimal:
            # Zero-pad the fractional part before stripping trailing
            # zeros, so that e.g. 1.05 does not print as 1.5.
            return '%s%s.%s' % (
                sign, number, ('%0*d' % (POSITIONS, decimal)).rstrip('0'))
        else:
            return '%s%s' % (sign, number)
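The representation above is a scaled integer: ten fractional digits folded into one int, so accumulation is exact integer addition. A standalone sketch of the same parse/render round trip, including the zero-padding on output:

```python
POSITIONS = 10
SHIFT = 10 ** POSITIONS


def parse(desc):
    """Parse a decimal string into a scaled integer (fixed point)."""
    sign = -1 if desc.startswith('-') else 1
    desc = desc.lstrip('-')
    number, _, frac = desc.partition('.')
    frac = (frac + POSITIONS * '0')[:POSITIONS]  # pad/truncate to 10 places
    return sign * int((number or '0') + frac)


def render(value):
    sign, value = ('-', -value) if value < 0 else ('', value)
    whole, frac = divmod(value, SHIFT)
    if frac:
        # Zero-pad before stripping, so 1.05 does not collapse to 1.5.
        return '%s%d.%s' % (sign, whole,
                            ('%0*d' % (POSITIONS, frac)).rstrip('0'))
    return '%s%d' % (sign, whole)


print(render(parse('1.05') + parse('0.95')))  # exact: prints 2
```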
@@ -1,33 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals


def count_all_subitems(x):
    """Count the subitems of a Python object, including the object itself."""
    if isinstance(x, list):
        return 1 + sum(count_all_subitems(i) for i in x)
    if isinstance(x, dict):
        return 1 + sum(count_all_subitems(i) for i in x.itervalues())
    return 1


def prune(item, level, count_recursively=True):
    def subitems(x):
        i = count_all_subitems(x) - 1 if count_recursively else len(x)
        return '1 subitem' if i == 1 else '%d subitems' % i

    assert level >= 0
    if not item:
        return item

    if isinstance(item, list):
        if level:
            return [prune(i, level - 1, count_recursively) for i in item]
        else:
            return '[list with %s]' % subitems(item)

    if isinstance(item, dict):
        if level:
            return dict((k, prune(v, level - 1, count_recursively))
                        for k, v in item.iteritems())
        else:
            return '{dict with %s}' % subitems(item)

    return item
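A Python-3 sketch of the same pruning idea, simplified to shallow counts (the original can also count recursively and special-cases empty containers):

```python
def prune(item, level):
    """Truncate a nested structure below `level`, replacing the cut
    branches with a summary string."""
    if isinstance(item, list):
        if level:
            return [prune(i, level - 1) for i in item]
        return '[list with %d subitems]' % len(item)
    if isinstance(item, dict):
        if level:
            return {k: prune(v, level - 1) for k, v in item.items()}
        return '{dict with %d subitems}' % len(item)
    return item


doc = {'ledger': {'txs': [1, 2, 3], 'seq': 8252899}}
print(prune(doc, 1))  # keeps one level, summarizes the inner dict
```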
@@ -1,7 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

import os


def normalize(f):
    f = os.path.join(*f.split('/'))  # For Windows users.
    return os.path.abspath(os.path.expanduser(f))
@@ -1,56 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

import gzip
import json
import os

_NONE = object()


class FileCache(object):
    """A two-level cache, which stores expensive results in memory and on disk.
    """
    def __init__(self, cache_directory, creator, open=gzip.open, suffix='.gz'):
        self.cache_directory = cache_directory
        self.creator = creator
        self.open = open
        self.suffix = suffix
        self.cached_data = {}
        if not os.path.exists(self.cache_directory):
            os.makedirs(self.cache_directory)

    def get_file_data(self, name):
        filename = os.path.join(self.cache_directory, name) + self.suffix
        if os.path.exists(filename):
            return json.load(self.open(filename))

        result = self.creator(name)
        return result

    def get_data(self, name, save_in_cache, can_create, default=None):
        name = str(name)
        result = self.cached_data.get(name, _NONE)
        if result is _NONE:
            filename = os.path.join(self.cache_directory, name) + self.suffix
            if os.path.exists(filename):
                result = json.load(self.open(filename)) or _NONE
            if result is _NONE and can_create:
                result = self.creator(name)
                if save_in_cache:
                    json.dump(result, self.open(filename, 'w'))
        return default if result is _NONE else result

    def _files(self):
        return os.listdir(self.cache_directory)

    def cache_list(self):
        for f in self._files():
            if f.endswith(self.suffix):
                yield f[:-len(self.suffix)]

    def file_count(self):
        return len(self._files())

    def clear(self):
        """Clears both local files and memory."""
        self.cached_data = {}
        for f in self._files():
            os.remove(os.path.join(self.cache_directory, f))
@@ -1,82 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

"""A function that can be specified at the command line, with an argument."""

import importlib
import re
import tokenize

from StringIO import StringIO

MATCHER = re.compile(r'([\w.]+)(.*)')

REMAPPINGS = {
    'false': False,
    'true': True,
    'null': None,
    'False': False,
    'True': True,
    'None': None,
}


def eval_arguments(args):
    args = args.strip()
    if not args or (args == '()'):
        return ()
    tokens = list(tokenize.generate_tokens(StringIO(args).readline))

    def remap():
        for type, name, _, _, _ in tokens:
            if type == tokenize.NAME and name not in REMAPPINGS:
                yield tokenize.STRING, '"%s"' % name
            else:
                yield type, name

    untok = tokenize.untokenize(remap())
    if untok[1:-1].strip():
        untok = untok[:-1] + ',)'  # Force a tuple.
    try:
        return eval(untok, REMAPPINGS)
    except Exception as e:
        raise ValueError('Couldn\'t evaluate expression "%s" (became "%s"), '
                         'error "%s"' % (args, untok, str(e)))


class Function(object):
    def __init__(self, desc='', default_path=''):
        self.desc = desc.strip()
        if not self.desc:
            # Make an empty function that does nothing.
            self.args = ()
            self.function = lambda *args, **kwds: None
            return

        m = MATCHER.match(desc)
        if not m:
            raise ValueError('"%s" is not a function' % desc)
        self.function, self.args = (g.strip() for g in m.groups())
        self.args = eval_arguments(self.args)

        if '.' not in self.function:
            if default_path and not default_path.endswith('.'):
                default_path += '.'
            self.function = default_path + self.function
        p, m = self.function.rsplit('.', 1)
        try:
            mod = importlib.import_module(p)
        except ImportError:
            raise ValueError('Can\'t find Python module "%s"' % p)

        try:
            self.function = getattr(mod, m)
        except:
            raise ValueError('No function "%s" in module "%s"' % (m, p))

    def __str__(self):
        return self.desc

    def __call__(self, *args, **kwds):
        return self.function(*(args + self.args), **kwds)

    def __eq__(self, other):
        return self.function == other.function and self.args == other.args

    def __ne__(self, other):
        return not (self == other)
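The trick in `eval_arguments` is to retokenize the text, quote every bare word so it becomes a string, and then evaluate the result as a tuple. It ports to Python 3 with `io.StringIO`; a hedged sketch (`eval_args` and `REMAP` are illustrative names, and the tuple is forced by wrapping rather than by splicing `',)'`):

```python
import io
import tokenize

REMAP = {'true': True, 'false': False, 'null': None}


def eval_args(text):
    """Turn a command-line fragment like 'port, 8080, true' into a
    tuple, quoting bare words so they become strings."""
    if not text.strip():
        return ()
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(text).readline):
        if tok.type == tokenize.NAME and tok.string not in REMAP:
            # A bare word: requote it as a string literal.
            out.append((tokenize.STRING, '"%s"' % tok.string))
        else:
            out.append((tok.type, tok.string))
    # Wrap in parentheses with a trailing comma to force a tuple.
    return eval('(%s,)' % tokenize.untokenize(out).strip(), dict(REMAP))


print(eval_args('port, 8080, true'))
```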
@@ -1,21 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

import sys

VERBOSE = False


def out(*args, **kwds):
    # Pop the optional 'print' override so it isn't forwarded as a
    # keyword argument to the printer itself.
    kwds.pop('print', print)(*args, file=sys.stdout, **kwds)


def info(*args, **kwds):
    if VERBOSE:
        out(*args, **kwds)


def warn(*args, **kwds):
    out('WARNING:', *args, **kwds)


def error(*args, **kwds):
    out('ERROR:', *args, **kwds)


def fatal(*args, **kwds):
    raise Exception('FATAL: ' + ' '.join(str(a) for a in args))
@@ -1,42 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

from functools import wraps
import json

SEPARATORS = ',', ': '
INDENT = '  '


def pretty_print(item):
    return json.dumps(item,
                      sort_keys=True,
                      indent=len(INDENT),
                      separators=SEPARATORS)


class Streamer(object):
    def __init__(self, printer=print):
        # No automatic spacing or carriage returns.
        self.printer = lambda *args: printer(*args, end='', sep='')
        self.first_key = True

    def add(self, key, value):
        if self.first_key:
            self.first_key = False
            self.printer('{')
        else:
            self.printer(',')

        self.printer('\n', INDENT, '"', str(key), '": ')

        pp = pretty_print(value).splitlines()
        if len(pp) > 1:
            for i, line in enumerate(pp):
                if i > 0:
                    self.printer('\n', INDENT)
                self.printer(line)
        else:
            self.printer(pp[0])

    def finish(self):
        if not self.first_key:
            self.first_key = True
            self.printer('\n}')
@@ -1,53 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

"""
Convert a discontiguous range of integers to and from a human-friendly form.

Real world example is the server_info.complete_ledgers:
  8252899-8403772,8403824,8403827-8403830,8403834-8403876
"""


def from_string(desc, **aliases):
    if not desc:
        return []
    result = set()
    for d in desc.split(','):
        nums = [int(aliases.get(x) or x) for x in d.split('-')]
        if len(nums) == 1:
            result.add(nums[0])
        elif len(nums) == 2:
            result.update(range(nums[0], nums[1] + 1))
    return result


def to_string(r):
    groups = []
    next_group = []
    for i, x in enumerate(sorted(r)):
        if next_group and (x - next_group[-1]) > 1:
            groups.append(next_group)
            next_group = []
        next_group.append(x)
    if next_group:
        groups.append(next_group)

    def display(g):
        if len(g) == 1:
            return str(g[0])
        else:
            return '%s-%s' % (g[0], g[-1])

    return ','.join(display(g) for g in groups)


def is_range(desc, *names):
    try:
        from_string(desc, **dict((n, 1) for n in names))
        return True
    except ValueError:
        return False


def join_ranges(*ranges, **aliases):
    result = set()
    for r in ranges:
        result.update(from_string(r, **aliases))
    return result
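The two directions are inverses: expand `a-b` spans into a set, then regroup consecutive runs on the way back out. A compact sketch of the round trip (without the alias handling of the original):

```python
def from_string(desc):
    """Expand '1-3,5' into the set {1, 2, 3, 5}."""
    result = set()
    for part in desc.split(','):
        nums = [int(x) for x in part.split('-')]
        if len(nums) == 1:
            result.add(nums[0])
        else:
            result.update(range(nums[0], nums[1] + 1))
    return result


def to_string(r):
    """Collapse a set of ints back into the compact range form."""
    groups = []
    for x in sorted(r):
        if groups and x - groups[-1][-1] == 1:
            groups[-1].append(x)  # extend the current consecutive run
        else:
            groups.append([x])    # start a new run
    return ','.join(str(g[0]) if len(g) == 1 else '%s-%s' % (g[0], g[-1])
                    for g in groups)


print(to_string(from_string('1-3,5,7-8')))  # round-trips to '1-3,5,7-8'
```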
@@ -1,46 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

FIRST, LAST = range(2)


def binary_search(begin, end, condition, location=FIRST):
    """Search for an i in the interval [begin, end] where condition(i) is true.
    If location is FIRST, return the first such i.
    If location is LAST, return the last such i.
    If there is no such i, then throw an exception.
    """
    b = condition(begin)
    e = condition(end)
    if b and e:
        return begin if location == FIRST else end

    if not (b or e):
        raise ValueError('%d/%d' % (begin, end))

    if b and location is FIRST:
        return begin

    if e and location is LAST:
        return end

    width = end - begin + 1
    if width == 1:
        if not b:
            raise ValueError('%d/%d' % (begin, end))
        return begin
    if width == 2:
        return begin if b else end

    mid = (begin + end) // 2
    m = condition(mid)

    if m == b:
        return binary_search(mid, end, condition, location)
    else:
        return binary_search(begin, mid, condition, location)


def linear_search(items, condition):
    """Yield each item i in items where condition(i) is true."""
    for i in items:
        if condition(i):
            yield i
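The `FIRST` case above is the classic bisection over a monotonic predicate (false, false, ..., true, true). A minimal iterative sketch of that case (`first_true` is an illustrative name):

```python
def first_true(begin, end, condition):
    """Return the first i in [begin, end] where condition(i) holds,
    assuming condition is monotonic (false then true)."""
    lo, hi = begin, end
    while lo < hi:
        mid = (lo + hi) // 2
        if condition(mid):
            hi = mid        # mid satisfies; answer is at or before mid
        else:
            lo = mid + 1    # mid fails; answer is after mid
    if not condition(lo):
        raise ValueError('condition never true in [%d, %d]' % (begin, end))
    return lo


# e.g. the first ledger sequence >= 100 in [0, 1000]:
print(first_true(0, 1000, lambda i: i >= 100))  # 100
```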
@@ -1,233 +0,0 @@

#!/usr/bin/env python

from __future__ import print_function

import base64, os, random, struct, sys
import ed25519
import ecdsa
import hashlib
from ripple.util import Base58
from ripple.ledger import SField

ED25519_BYTE = chr(0xed)
WRAP_COLUMNS = 72

USAGE = """\
Usage:
    create
        Create a new master public/secret key pair.

    create <master-secret>
        Generate master key pair using provided secret.

    check <key>
        Check an existing key for validity.

    sign <sequence> <validation-public-key> <validation-private-key> <master-secret>
        Create a new signed manifest with the given sequence
        number and keys.
"""


def prepend_length_byte(b):
    assert len(b) <= 192, 'Too long'
    return chr(len(b)) + b


def to_int32(i):
    return struct.pack('>I', i)

#-----------------------------------------------------------


def make_seed(urandom=os.urandom):
    # This is not used.
    return urandom(16)


def make_ed25519_keypair(urandom=os.urandom):
    sk = urandom(32)
    return sk, ed25519.publickey(sk)


def make_ecdsa_keypair(urandom=None):
    # This is not used.
    sk = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1, entropy=urandom)
    # Can't be unit tested easily - need a mock for ecdsa.
    vk = sk.get_verifying_key()
    sig = sk.sign('message')
    assert vk.verify(sig, 'message')
    return sk, vk


def make_seed_from_passphrase(passphrase):
    # For convenience, like say testing against rippled we can hash a passphrase
    # to get the seed. validation_create (Josh may have killed it by now) takes
    # an optional arg, which can be a base58 encoded seed, or a passphrase.
    return hashlib.sha512(passphrase).digest()[:16]


def make_manifest(master_pk, validation_pk, seq):
    """Create a manifest.

    Parameters
    ----------
    master_pk : string
        validator's master public key (binary, _not_ BASE58 encoded)
    validation_pk : string
        validator's validation public key (binary, _not_ BASE58 encoded)
    seq : int
        manifest sequence number

    Returns
    -------
    string
        String with fields for seq, master_pk, validation_pk
    """
    return ''.join([
        SField.sfSequence,
        to_int32(seq),
        SField.sfPublicKey,
        prepend_length_byte(master_pk),
        SField.sfSigningPubKey,
        prepend_length_byte(validation_pk)])


def sign_manifest(manifest, validation_sk, master_sk, master_pk):
    """Sign a validator manifest.

    Parameters
    ----------
    manifest : string
        manifest to sign
    validation_sk : string
        validator's validation secret key (binary, _not_ BASE58 encoded)
        This is one of the keys that will sign the manifest.
    master_sk : string
        validator's master secret key (binary, _not_ BASE58 encoded)
        This is one of the keys that will sign the manifest.
    master_pk : string
        validator's master public key (binary, _not_ BASE58 encoded)

    Returns
    -------
    string
        manifest signed by both the validation and master keys
    """
    man_hash = hashlib.sha512('MAN\0' + manifest).digest()[:32]
    validation_sig = validation_sk.sign_digest_deterministic(
        man_hash, hashfunc=hashlib.sha256, sigencode=ecdsa.util.sigencode_der_canonize)
    master_sig = ed25519.signature('MAN\0' + manifest, master_sk, master_pk)
    return manifest + SField.sfSignature + prepend_length_byte(validation_sig) + \
        SField.sfMasterSignature + prepend_length_byte(master_sig)


def wrap(s, cols=WRAP_COLUMNS):
    if s:
        size = max((len(s) + cols - 1) // cols, 1)
        w = len(s) // size
        s = '\n'.join(s[i:i + w] for i in range(0, len(s), w))
    return s


def create_ed_keys(urandom=os.urandom):
    sk, pk = make_ed25519_keypair(urandom)
    pk_human = Base58.encode_version(
        Base58.VER_NODE_PUBLIC, ED25519_BYTE + pk)
    sk_human = Base58.encode_version(
        Base58.VER_NODE_PRIVATE, sk)
    return pk_human, sk_human


def create_ed_public_key(sk_human):
    v, sk = Base58.decode_version(sk_human)
    check_secret_key(v, sk)

    pk = ed25519.publickey(sk)
    pk_human = Base58.encode_version(
        Base58.VER_NODE_PUBLIC, ED25519_BYTE + pk)
    return pk_human


def check_validation_public_key(v, pk):
    Base58.check_version(v, Base58.VER_NODE_PUBLIC)
    if len(pk) != 33:
        raise ValueError('Validation public key should be length 33, is %s' %
                         len(pk))
    b = ord(pk[0])
    if b not in (2, 3):
        raise ValueError('First validation public key byte must be 2 or 3, is %d' % b)


def check_secret_key(v, sk):
    Base58.check_version(v, Base58.VER_NODE_PRIVATE)
    if len(sk) != 32:
        raise ValueError('Length of master secret should be 32, is %s' %
                         len(sk))


def get_signature(seq, validation_pk_human, validation_sk_human, master_sk_human):
    v, validation_pk = Base58.decode_version(validation_pk_human)
    check_validation_public_key(v, validation_pk)

    v, validation_sk_str = Base58.decode_version(validation_sk_human)
    check_secret_key(v, validation_sk_str)
    validation_sk = ecdsa.SigningKey.from_string(validation_sk_str, curve=ecdsa.SECP256k1)

    v, master_sk = Base58.decode_version(master_sk_human)
    check_secret_key(v, master_sk)

    pk = ed25519.publickey(master_sk)
    apk = ED25519_BYTE + pk
    m = make_manifest(apk, validation_pk, seq)
    m1 = sign_manifest(m, validation_sk, master_sk, pk)
    return base64.b64encode(m1)


# Testable versions of functions.
def perform_create(urandom=os.urandom, print=print):
    pk, sk = create_ed_keys(urandom)
    print('[validator_keys]', pk, '', '[master_secret]', sk, sep='\n')


def perform_create_public(sk_human, print=print):
    pk_human = create_ed_public_key(sk_human)
    print(
        '[validator_keys]', pk_human, '',
        '[master_secret]', sk_human, sep='\n')


def perform_check(s, print=print):
    version, b = Base58.decode_version(s)
    print('version = ' + Base58.version_name(version))
    print('decoded length = ' + str(len(b)))
    assert Base58.encode_version(version, b) == s


def perform_sign(
        seq, validation_pk_human, validation_sk_human, master_sk_human, print=print):
    print('[validation_manifest]')
    print(wrap(get_signature(
        int(seq), validation_pk_human, validation_sk_human, master_sk_human)))


def perform_verify(
        seq, validation_pk_human, master_pk_human, signature, print=print):
    verify_signature(
        int(seq), validation_pk_human, master_pk_human, signature)
    print('Signature valid for', master_pk_human)


# Externally visible versions of functions.
def create(sk_human=None):
    if sk_human:
        perform_create_public(sk_human)
    else:
        perform_create()


def check(s):
    perform_check(s)


def sign(seq, validation_pk_human, validation_sk_human, master_sk_human):
    perform_sign(seq, validation_pk_human, validation_sk_human, master_sk_human)


def usage(*errors):
    if errors:
        print(*errors)
    print(USAGE)
    return not errors


_COMMANDS = dict((f.__name__, f) for f in (create, check, sign))


def run_command(args):
    if not args:
        return usage()
    name = args[0]
    command = _COMMANDS.get(name)
    if not command:
        return usage('No such command:', name)
    try:
        command(*args[1:])
    except TypeError:
        return usage('Wrong number of arguments for:', name)
    return True
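The manifest is a flat concatenation of tagged fields: the sequence number as a big-endian 32-bit integer, and each key as a one-length-byte-prefixed blob. A Python-3 rendering of the two low-level helpers (`bytes([...])` replaces the Python-2 `chr`; the 33-byte key below is a made-up placeholder):

```python
import struct


def to_int32(i):
    """Big-endian 32-bit field, as used for the sequence number."""
    return struct.pack('>I', i)


def prepend_length_byte(b):
    """Variable-length blob: one length byte, then the payload
    (valid only for payloads up to 192 bytes)."""
    assert len(b) <= 192, 'Too long'
    return bytes([len(b)]) + b


# An ed25519 public key is 32 bytes, tagged with a leading 0xED byte.
blob = prepend_length_byte(b'\xed' + b'\x00' * 32)
print(struct.unpack('>I', to_int32(7))[0], blob[0])  # 7 33
```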
@@ -1,21 +0,0 @@

from __future__ import absolute_import, division, print_function, unicode_literals

import datetime

# Format for human-readable dates in rippled
_DATE_FORMAT = '%Y-%b-%d'
_TIME_FORMAT = '%H:%M:%S'
_DATETIME_FORMAT = '%s %s' % (_DATE_FORMAT, _TIME_FORMAT)

_FORMATS = _DATE_FORMAT, _TIME_FORMAT, _DATETIME_FORMAT


def parse_datetime(desc):
    for fmt in _FORMATS:
        try:
            return datetime.datetime.strptime(desc, fmt)
        except ValueError:
            pass
    raise ValueError("Can't understand date '%s'." % desc)


def format_datetime(dt):
    return dt.strftime(_DATETIME_FORMAT)
@@ -1,682 +0,0 @@

#!/usr/bin/env python
"""
Test for setting ephemeral keys for the validator manifest.
"""

from __future__ import (
    absolute_import, division, print_function, unicode_literals
)

import argparse
import contextlib
from contextlib import contextmanager
import json
import os
import platform
import shutil
import subprocess
import time

DELAY_WHILE_PROCESS_STARTS_UP = 1.5
ARGS = None

NOT_FOUND = -1        # not in log
ACCEPTED_NEW = 0      # added new manifest
ACCEPTED_UPDATE = 1   # replaced old manifest with new
UNTRUSTED = 2         # don't trust master key
STALE = 3             # seq is too old
REVOKED = 4           # revoked validator key
INVALID = 5           # invalid signature

MANIFEST_ACTION_STR_TO_ID = {
    'NotFound': NOT_FOUND,  # not found in log
    'AcceptedNew': ACCEPTED_NEW,
    'AcceptedUpdate': ACCEPTED_UPDATE,
    'Untrusted': UNTRUSTED,
    'Stale': STALE,
    'Revoked': REVOKED,
    'Invalid': INVALID,
}

MANIFEST_ACTION_ID_TO_STR = {
    v: k for k, v in MANIFEST_ACTION_STR_TO_ID.items()
}

CONF_TEMPLATE = """
[server]
port_rpc
port_peer
port_wss_admin

[port_rpc]
port = {rpc_port}
ip = 127.0.0.1
admin = 127.0.0.1
protocol = https

[port_peer]
port = {peer_port}
ip = 0.0.0.0
protocol = peer

[port_wss_admin]
port = {wss_port}
ip = 127.0.0.1
admin = 127.0.0.1
protocol = wss

[node_size]
medium

[node_db]
type={node_db_type}
path={node_db_path}
open_files=2000
filter_bits=12
cache_mb=256
file_size_mb=8
file_size_mult=2
online_delete=256
advisory_delete=0

[database_path]
{db_path}

[debug_logfile]
{debug_logfile}

[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org

[ips]
r.ripple.com 51235

[ips_fixed]
{sibling_ip} {sibling_port}

[validators]
n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7 RL1
n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj RL2
n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C RL3
n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS RL4
n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA RL5

[validation_quorum]
3

[validation_seed]
{validation_seed}
#validation_public_key: {validation_public_key}

# Other rippled's trusting this validator need this key
[validator_keys]
{all_validator_keys}

[peer_private]
1

[overlay]
expire = 1
auto_connect = 1

[validation_manifest]
{validation_manifest}

[rpc_startup]
{{ "command": "log_level", "severity": "debug" }}

[ssl_verify]
0
"""
# End config template


def static_vars(**kwargs):
    def decorate(func):
        for k in kwargs:
            setattr(func, k, kwargs[k])
        return func
    return decorate


@static_vars(rpc=5005, peer=51235, wss=6006)
def checkout_port_nums():
    """Returns a tuple of port nums for rpc, peer, and wss_admin"""
    checkout_port_nums.rpc += 1
    checkout_port_nums.peer += 1
    checkout_port_nums.wss += 1
    return (
        checkout_port_nums.rpc,
        checkout_port_nums.peer,
        checkout_port_nums.wss
    )


def is_windows():
    return platform.system() == 'Windows'


def manifest_create():
    """returns dict with keys: 'validator_keys', 'master_secret'"""
    to_run = ['python', ARGS.ripple_home + '/bin/python/Manifest.py', 'create']
    r = subprocess.check_output(to_run)
    result = {}
    k = None
    for l in r.splitlines():
        l = l.strip()
        if not l:
            continue
        elif l == '[validator_keys]':
            k = l[1:-1]
        elif l == '[master_secret]':
            k = l[1:-1]
        elif l.startswith('['):
            raise ValueError(
                'Unexpected key: {} from `manifest create`'.format(l))
        else:
            if not k:
                raise ValueError('Value with no key')
            result[k] = l
            k = None

    if k in result:
        raise ValueError('Repeat key from `manifest create`: ' + k)
    if len(result) != 2:
        raise ValueError(
            'Expected 2 keys from `manifest create` but got {} keys instead ({})'.
            format(len(result), result))

    return result


def sign_manifest(seq, validation_pk, master_secret):
    """returns the signed manifest as a string"""
    to_run = ['python', ARGS.ripple_home + '/bin/python/Manifest.py', 'sign',
              str(seq), validation_pk, master_secret]
    try:
        r = subprocess.check_output(to_run)
    except subprocess.CalledProcessError as e:
        print('Error in sign_manifest: ', e.output)
        raise e
    result = []
    for l in r.splitlines():
        l = l.strip()
        if not l or l == '[validation_manifest]':
            continue
        result.append(l)
    return '\n'.join(result)


def get_ripple_exe():
    """Find the rippled executable"""
    prefix = ARGS.ripple_home + '/build/'
    exe = ['rippled', 'RippleD.exe']
    to_test = [prefix + t + '.debug/' + e
               for t in ['clang', 'gcc', 'msvc'] for e in exe]
    for e in exe:
        to_test.append(prefix + '/' + e)
    for t in to_test:
        if os.path.isfile(t):
            return t

class RippledServer(object):
    def __init__(self, exe, config_file, server_out):
        self.config_file = config_file
        self.exe = exe
        self.process = None
        self.server_out = server_out
        self.reinit(config_file)

    def reinit(self, config_file):
        self.config_file = config_file
        self.to_run = [self.exe, '--verbose', '--conf', self.config_file]

    @property
    def config_root(self):
        return os.path.dirname(self.config_file)

    @property
    def master_secret_file(self):
        return self.config_root + '/master_secret.txt'

    def startup(self):
        if ARGS.verbose:
            print('starting rippled:' + self.config_file)
        fout = open(self.server_out, 'w')
        self.process = subprocess.Popen(
            self.to_run, stdout=fout, stderr=subprocess.STDOUT)

    def shutdown(self):
        if not self.process:
            return
        fout = open(os.devnull, 'w')
        subprocess.Popen(
            self.to_run + ['stop'], stdout=fout, stderr=subprocess.STDOUT)
        self.process.wait()
        self.process = None

    def rotate_logfile(self):
        if self.server_out == os.devnull:
            return
        for i in range(100):
            backup_name = '{}.{}'.format(self.server_out, i)
            if not os.path.exists(backup_name):
                os.rename(self.server_out, backup_name)
                return
        raise ValueError('Could not rotate logfile: {}'.
                         format(self.server_out))

    def validation_create(self):
        """returns dict with keys:
        'validation_key', 'validation_public_key', 'validation_seed'
        """
        to_run = [self.exe, '-q', '--conf', self.config_file,
                  '--', 'validation_create']
        try:
            return json.loads(subprocess.check_output(to_run))['result']
        except subprocess.CalledProcessError as e:
            print('Error in validation_create: ', e.output)
            raise e

@contextmanager
|
||||
def rippled_server(config_file, server_out=os.devnull):
|
||||
"""Start a ripple server"""
|
||||
try:
|
||||
server = None
|
||||
server = RippledServer(ARGS.ripple_exe, config_file, server_out)
|
||||
server.startup()
|
||||
yield server
|
||||
finally:
|
||||
if server:
|
||||
server.shutdown()
|
||||
|
||||
|
||||
@contextmanager
|
||||
def pause_server(server, config_file):
|
||||
"""Shutdown and then restart a ripple server"""
|
||||
try:
|
||||
server.shutdown()
|
||||
server.rotate_logfile()
|
||||
yield server
|
||||
finally:
|
||||
server.reinit(config_file)
|
||||
server.startup()
|
||||
|
||||
|
||||
def parse_date(d, t):
    """Return the timestamp of a line, or None if the line has no timestamp."""
    try:
        return time.strptime(d + ' ' + t, '%Y-%B-%d %H:%M:%S')
    except ValueError:
        return None


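For reference, a quick self-contained sketch of the timestamp format the parser above expects — the `%Y-%B-%d %H:%M:%S` pattern uses the full month name, so it is locale-dependent (the sample date here is hypothetical):

```python
import time

# '%B' matches the full month name, e.g. 'January' in the default C locale.
stamp = time.strptime('2015-January-05 12:34:56', '%Y-%B-%d %H:%M:%S')
print(stamp.tm_year, stamp.tm_mon, stamp.tm_mday)
```

Because `time.struct_time` values compare field-by-field, parsed log timestamps can be compared directly against `time.gmtime()`, which is how `check_ephemeral_key` filters out lines older than `change_time`.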
def to_dict(l):
    """Given a line of the form 'Key0: Value0;Key2: Value2;' return a dict."""
    fields = l.split(';')
    result = {}
    for f in fields:
        if f:
            v = f.split(':')
            assert len(v) == 2
            result[v[0].strip()] = v[1].strip()
    return result


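A minimal self-contained sketch of the parsing rule above (the helper is re-declared here so the example runs on its own; the manifest values are hypothetical):

```python
def to_dict(l):
    # Split 'Key0: Value0;Key1: Value1;' into a dict, ignoring empty fields.
    result = {}
    for f in l.split(';'):
        if f:
            k, v = f.split(':')
            result[k.strip()] = v.strip()
    return result

print(to_dict('Manifest: AcceptedNew; Pk: n9Kabc; Seq: 2;'))
```

Note that the trailing `;` produces an empty final field, which the `if f:` guard skips.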
def check_ephemeral_key(validator_key,
                        log_file,
                        seq,
                        change_time):
    """
    Detect when a server is informed of a validator's ephemeral key change.
    `change_time` and `seq` may be None, in which case they are ignored.
    """
    manifest_prefix = 'Manifest:'
    # A manifest line has the form: Manifest: action; Key: value;
    # where Key can be Pk (public key), Seq, or OldSeq.
    for l in open(log_file):
        sa = l.split()
        if len(sa) < 5 or sa[4] != manifest_prefix:
            continue

        d = to_dict(' '.join(sa[4:]))
        # Check the seq number and validator_key.
        if d['Pk'] != validator_key:
            continue
        if seq is not None and int(d['Seq']) != seq:
            continue

        if change_time:
            t = parse_date(sa[0], sa[1])
            if not t or t < change_time:
                continue
        action = d['Manifest']
        return MANIFEST_ACTION_STR_TO_ID[action]
    return NOT_FOUND


def check_ephemeral_keys(validator_key,
                         log_files,
                         seq,
                         change_time=None,
                         timeout_s=60):
    result = [NOT_FOUND for i in range(len(log_files))]
    if timeout_s < 10:
        sleep_time = 1
    elif timeout_s < 60:
        sleep_time = 5
    else:
        sleep_time = 10
    n = timeout_s // sleep_time
    if n == 0:
        n = 1
    start_time = time.time()
    for _ in range(n):
        for i, lf in enumerate(log_files):
            if result[i] != NOT_FOUND:
                continue
            result[i] = check_ephemeral_key(validator_key,
                                            lf,
                                            seq,
                                            change_time)
            if result[i] != NOT_FOUND:
                if all(r != NOT_FOUND for r in result):
                    return result
            else:
                server_dir = os.path.basename(os.path.dirname(log_files[i]))
                if ARGS.verbose:
                    print('Check for {}: {}'.format(
                        server_dir, MANIFEST_ACTION_ID_TO_STR[result[i]]))
        tsf = time.time() - start_time
        if tsf > 20:
            if ARGS.verbose:
                print('Waiting for key to propagate: ', tsf)
        time.sleep(sleep_time)
    return result


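The polling loop above derives its sleep interval from the timeout; a small self-contained sketch of that tiering, including the edge case where integer division would otherwise yield zero iterations (`poll_plan` is a hypothetical name for illustration):

```python
def poll_plan(timeout_s):
    # Mirror the tiering in check_ephemeral_keys: short timeouts poll
    # frequently, long ones back off to 10-second intervals.
    if timeout_s < 10:
        sleep_time = 1
    elif timeout_s < 60:
        sleep_time = 5
    else:
        sleep_time = 10
    n = max(timeout_s // sleep_time, 1)  # always poll at least once
    return sleep_time, n

print(poll_plan(5), poll_plan(45), poll_plan(300))
```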
def get_validator_key(config_file):
    in_validator_keys = False
    for l in open(config_file):
        sl = l.strip()
        if not in_validator_keys and sl == '[validator_keys]':
            in_validator_keys = True
            continue
        if in_validator_keys:
            if sl.startswith('['):
                raise ValueError('ThisServer validator key not found')
            if sl.startswith('#'):
                continue
            s = sl.split()
            if len(s) == 2 and s[1] == 'ThisServer':
                return s[0]


def new_config_ephemeral_key(
        server, seq, rm_dbs=False, master_secret_file=None):
    """Generate a new ephemeral key, add it to the config, restart the server."""
    config_root = server.config_root
    config_file = config_root + '/rippled.cfg'
    db_dir = config_root + '/db'
    if not master_secret_file:
        master_secret_file = server.master_secret_file
    with open(master_secret_file) as f:
        master_secret = f.read()
    v = server.validation_create()
    signed = sign_manifest(seq, v['validation_public_key'], master_secret)
    with pause_server(server, config_file):
        if rm_dbs and os.path.exists(db_dir):
            shutil.rmtree(db_dir)
            os.makedirs(db_dir)
        # Replace the validation_manifest section with `signed`.
        bak = config_file + '.bak'
        if is_windows() and os.path.isfile(bak):
            os.remove(bak)
        os.rename(config_file, bak)
        in_manifest = False
        with open(bak, 'r') as src:
            with open(config_file, 'w') as out:
                for l in src:
                    sl = l.strip()
                    if not in_manifest and sl == '[validation_manifest]':
                        in_manifest = True
                    elif in_manifest:
                        if sl.startswith('[') or sl.startswith('#'):
                            in_manifest = False
                            out.write(signed)
                            out.write('\n\n')
                        else:
                            continue
                    out.write(l)
    return (bak, config_file)


def parse_args():
    parser = argparse.ArgumentParser(
        description=('Create config files for n validators')
    )

    parser.add_argument(
        '--ripple_home', '-r',
        default=os.sep.join(os.path.realpath(__file__).split(os.sep)[:-5]),
        help=('Root directory of the ripple repo'), )
    parser.add_argument('--num_validators', '-n',
                        default=2,
                        help=('Number of validators'), )
    parser.add_argument('--conf', '-c', help=('rippled config file'), )
    parser.add_argument('--out', '-o',
                        default='test_output',
                        help=('config root directory'), )
    parser.add_argument(
        '--existing', '-e',
        action='store_true',
        help=('use existing config files'), )
    parser.add_argument(
        '--generate', '-g',
        action='store_true',
        help=('generate conf files only'), )
    parser.add_argument(
        '--verbose', '-v',
        action='store_true',
        help=('verbose status reporting'), )
    parser.add_argument(
        '--quiet', '-q',
        action='store_true',
        help=('quiet status reporting'), )

    return parser.parse_args()


def get_configs(manifest_seq):
    global ARGS
    ARGS.ripple_home = os.path.expanduser(ARGS.ripple_home)

    n = int(ARGS.num_validators)
    if n < 2:
        raise ValueError(
            'Need at least 2 rippled servers. Specified: {}'.format(n))
    config_root = ARGS.out
    ARGS.ripple_exe = get_ripple_exe()
    if not ARGS.ripple_exe:
        raise ValueError('No Exe Found')

    if ARGS.existing:
        return [
            os.path.abspath('{}/validator_{}/rippled.cfg'.format(config_root, i))
            for i in range(n)
        ]

    initial_config = ARGS.conf

    manifests = [manifest_create() for i in range(n)]
    port_nums = [checkout_port_nums() for i in range(n)]
    with rippled_server(initial_config) as server:
        time.sleep(DELAY_WHILE_PROCESS_STARTS_UP)
        validations = [server.validation_create() for i in range(n)]

    signed_manifests = [sign_manifest(manifest_seq,
                                      v['validation_public_key'],
                                      m['master_secret'])
                        for m, v in zip(manifests, validations)]
    node_db_type = 'RocksDB' if not is_windows() else 'NuDB'
    node_db_filename = node_db_type.lower()

    config_files = []
    for i, (m, v, s) in enumerate(zip(manifests, validations, signed_manifests)):
        sibling_index = (i - 1) % len(manifests)
        all_validator_keys = '\n'.join([
            m['validator_keys'] + ' ThisServer',
            manifests[sibling_index]['validator_keys'] + ' NextInRing'])
        this_validator_dir = os.path.abspath(
            '{}/validator_{}'.format(config_root, i))
        db_path = this_validator_dir + '/db'
        node_db_path = db_path + '/' + node_db_filename
        log_path = this_validator_dir + '/log'
        debug_logfile = log_path + '/debug.log'
        rpc_port, peer_port, wss_port = port_nums[i]
        sibling_ip = '127.0.0.1'
        sibling_port = port_nums[sibling_index][1]
        d = {
            'validation_manifest': s,
            'all_validator_keys': all_validator_keys,
            'node_db_type': node_db_type,
            'node_db_path': node_db_path,
            'db_path': db_path,
            'debug_logfile': debug_logfile,
            'rpc_port': rpc_port,
            'peer_port': peer_port,
            'wss_port': wss_port,
            'sibling_ip': sibling_ip,
            'sibling_port': sibling_port,
        }
        d.update(m)
        d.update(v)

        for p in [this_validator_dir, db_path, log_path]:
            if not os.path.exists(p):
                os.makedirs(p)

        config_files.append('{}/rippled.cfg'.format(this_validator_dir))
        with open(config_files[-1], 'w') as f:
            f.write(CONF_TEMPLATE.format(**d))

        with open('{}/master_secret.txt'.format(this_validator_dir), 'w') as f:
            f.write(m['master_secret'])

    return config_files


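The config generation above wires each validator to its predecessor as the `NextInRing` peer, closing a ring; a small sketch of the sibling-index arithmetic:

```python
# With n validators, validator i peers with (i - 1) % n:
# validator 0 pairs with the last one, which closes the ring.
n = 4
siblings = [(i - 1) % n for i in range(n)]
print(siblings)
```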
def update_ephemeral_key(
        server, new_seq, log_files,
        expected=None, rm_dbs=False, master_secret_file=None,
        restore_original_conf=False, timeout_s=300):
    if not expected:
        expected = {}

    change_time = time.gmtime()
    back_conf, new_conf = new_config_ephemeral_key(
        server,
        new_seq,
        rm_dbs,
        master_secret_file
    )
    validator_key = get_validator_key(server.config_file)
    start_time = time.time()
    ck = check_ephemeral_keys(validator_key,
                              log_files,
                              seq=new_seq,
                              change_time=change_time,
                              timeout_s=timeout_s)
    if ARGS.verbose:
        print('Check finished: {} secs.'.format(int(time.time() - start_time)))
    all_success = True
    for i, r in enumerate(ck):
        e = expected.get(i, UNTRUSTED)
        server_dir = os.path.basename(os.path.dirname(log_files[i]))
        status = 'OK' if e == r else 'FAIL'
        print('{}: Server: {} Expected: {} Got: {}'.
              format(status, server_dir,
                     MANIFEST_ACTION_ID_TO_STR[e], MANIFEST_ACTION_ID_TO_STR[r]))
        all_success = all_success and (e == r)
    if restore_original_conf:
        if is_windows() and os.path.isfile(new_conf):
            os.remove(new_conf)
        os.rename(back_conf, new_conf)
    return all_success


def run_main():
    global ARGS
    ARGS = parse_args()
    manifest_seq = 1
    config_files = get_configs(manifest_seq)
    if ARGS.generate:
        return
    if len(config_files) <= 1:
        print('Script requires at least 2 servers. Actual #: {}'.
              format(len(config_files)))
        return
    with contextlib.nested(*(rippled_server(c, os.path.dirname(c) + '/log.txt')
                             for c in config_files)) as servers:
        log_files = [os.path.dirname(cf) + '/log.txt' for cf in config_files[1:]]
        validator_key = get_validator_key(config_files[0])
        start_time = time.time()
        ck = check_ephemeral_keys(validator_key,
                                  [log_files[0]],
                                  seq=None,
                                  timeout_s=60)
        if ARGS.verbose:
            print('Check finished: {} secs.'.format(
                int(time.time() - start_time)))
        if any(r == NOT_FOUND for r in ck):
            print('FAIL: Initial key did not propagate to all servers')
            return

        manifest_seq += 2
        expected = {i: UNTRUSTED for i in range(len(log_files))}
        expected[0] = ACCEPTED_UPDATE
        if not ARGS.quiet:
            print('Testing key update')
        kr = update_ephemeral_key(servers[0], manifest_seq, log_files, expected)
        if not kr:
            print('\nFail: Key Update Test. Exiting')
            return

        expected = {i: UNTRUSTED for i in range(len(log_files))}
        expected[0] = STALE
        if not ARGS.quiet:
            print('Testing stale key')
        kr = update_ephemeral_key(
            servers[0], manifest_seq - 1, log_files, expected, rm_dbs=True)
        if not kr:
            print('\nFail: Stale Key Test. Exiting')
            return

        expected = {i: UNTRUSTED for i in range(len(log_files))}
        expected[0] = STALE
        if not ARGS.quiet:
            print('Testing stale key 2')
        kr = update_ephemeral_key(
            servers[0], manifest_seq, log_files, expected, rm_dbs=True)
        if not kr:
            print('\nFail: Stale Key Test. Exiting')
            return

        expected = {i: UNTRUSTED for i in range(len(log_files))}
        expected[0] = REVOKED
        if not ARGS.quiet:
            print('Testing revoked key')
        kr = update_ephemeral_key(
            servers[0], 0xffffffff, log_files, expected, rm_dbs=True)
        if not kr:
            print('\nFail: Revoked Key Test. Exiting')
            return
    print('\nOK: All tests passed')


if __name__ == '__main__':
    run_main()

@@ -1,47 +0,0 @@
from __future__ import absolute_import, division, print_function

from ripple.util import Base58

from unittest import TestCase

BINARY = 'nN9kfUnKTf7PpgLG'

class test_Base58(TestCase):
    def run_test(self, before, after):
        self.assertEquals(Base58.decode(before), after)
        self.assertEquals(Base58.encode(after), before)

    def test_trivial(self):
        self.run_test('', '')

    def test_zeroes(self):
        for before, after in (('', ''), ('abc', 'I\x8b')):
            for i in range(1, 257):
                self.run_test('r' * i + before, '\0' * i + after)

    def test_single_digits(self):
        for i, c in enumerate(Base58.ALPHABET):
            self.run_test(c, chr(i))

    def test_various(self):
        # Test three random numbers.
        self.run_test('88Mw', '\x88L\xed')
        self.run_test(
            'nN9kfUnKTf7PpgLG', '\x03\xdc\x9co\xdea\xefn\xd3\xb8\xe2\xc1')
        self.run_test(
            'zzWWb4C5p6kNrVa4fEBoZpZKd3XQLXch7QJbLCuLdoS1CWr8qdAZHEmwMiJy8Hwp',
            'xN\x82\xfcQ\x1f\xb3~\xdf\xc7\xb37#\xc6~A\xe9\xf6-\x1f\xcb"\xfab'
            '(\'\xccv\x9e\x85\xc3\xd1\x19\x941{\x8et\xfbS}\x86.k\x07\xb5\xb3')

    def test_check(self):
        self.assertEquals(Base58.checksum(BINARY), '\xaa\xaar\x9d')

    def test_encode(self):
        self.assertEquals(
            Base58.encode_version(Base58.VER_ACCOUNT_PUBLIC, BINARY),
            'sB49XwJgmdEZDo8LmYwki7FYkiaN7')

    def test_decode(self):
        ver, b = Base58.decode_version('sB49XwJgmdEZDo8LmYwki7FYkiaN7')
        self.assertEquals(ver, Base58.VER_ACCOUNT_PUBLIC)
        self.assertEquals(b, BINARY)
@@ -1,12 +0,0 @@
from __future__ import absolute_import, division, print_function, unicode_literals

from ripple.util.Cache import NamedCache

from unittest import TestCase

class test_Cache(TestCase):
    def setUp(self):
        self.cache = NamedCache()

    def test_trivial(self):
        pass
@@ -1,163 +0,0 @@
from __future__ import absolute_import, division, print_function, unicode_literals

from ripple.util import ConfigFile

from unittest import TestCase

class test_ConfigFile(TestCase):
    def test_trivial(self):
        self.assertEquals(ConfigFile.read(''), {})

    def test_full(self):
        self.assertEquals(ConfigFile.read(FULL.splitlines()), RESULT)

RESULT = {
    'websocket_port': '6206',
    'database_path': '/development/alpha/db',
    'sntp_servers':
        ['time.windows.com', 'time.apple.com', 'time.nist.gov', 'pool.ntp.org'],
    'validation_seed': 'sh1T8T9yGuV7Jb6DPhqSzdU2s5LcV',
    'node_size': 'medium',
    'rpc_startup': {
        'command': 'log_level',
        'severity': 'debug'},
    'ips': ['r.ripple.com', '51235'],
    'node_db': {
        'file_size_mult': '2',
        'file_size_mb': '8',
        'cache_mb': '256',
        'path': '/development/alpha/db/rocksdb',
        'open_files': '2000',
        'type': 'RocksDB',
        'filter_bits': '12'},
    'peer_port': '53235',
    'ledger_history': 'full',
    'rpc_ip': '127.0.0.1',
    'websocket_public_ip': '0.0.0.0',
    'rpc_allow_remote': '0',
    'validators':
        [['n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7', 'RL1'],
         ['n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj', 'RL2'],
         ['n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C', 'RL3'],
         ['n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS', 'RL4'],
         ['n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA', 'RL5']],
    'debug_logfile': '/development/alpha/debug.log',
    'websocket_public_port': '5206',
    'peer_ip': '0.0.0.0',
    'rpc_port': '5205',
    'validation_quorum': '3',
    'websocket_ip': '127.0.0.1'}

FULL = """
[ledger_history]
full

# Allow other peers to connect to this server.
#
[peer_ip]
0.0.0.0

[peer_port]
53235

# Allow untrusted clients to connect to this server.
#
[websocket_public_ip]
0.0.0.0

[websocket_public_port]
5206

# Provide trusted websocket ADMIN access to the localhost.
#
[websocket_ip]
127.0.0.1

[websocket_port]
6206

# Provide trusted json-rpc ADMIN access to the localhost.
#
[rpc_ip]
127.0.0.1

[rpc_port]
5205

[rpc_allow_remote]
0

[node_size]
medium

# This is primary persistent datastore for rippled. This includes transaction
# metadata, account states, and ledger headers. Helpful information can be
# found here: https://ripple.com/wiki/NodeBackEnd
[node_db]
type=RocksDB
path=/development/alpha/db/rocksdb
open_files=2000
filter_bits=12
cache_mb=256
file_size_mb=8
file_size_mult=2

[database_path]
/development/alpha/db

# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
/development/alpha/debug.log

[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org

# Where to find some other servers speaking the Ripple protocol.
#
[ips]
r.ripple.com 51235

# The latest validators can be obtained from
# https://ripple.com/ripple.txt
#
[validators]
n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7 RL1
n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj RL2
n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C RL3
n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS RL4
n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA RL5

# Ditto.
[validation_quorum]
3

[validation_seed]
sh1T8T9yGuV7Jb6DPhqSzdU2s5LcV

# Turn down default logging to save disk space in the long run.
# Valid values here are trace, debug, info, warning, error, and fatal
[rpc_startup]
{ "command": "log_level", "severity": "debug" }

# Configure SSL for WebSockets. Not enabled by default because not everybody
# has an SSL cert on their server, but if you uncomment the following lines and
# set the path to the SSL certificate and private key the WebSockets protocol
# will be protected by SSL/TLS.
#[websocket_secure]
#1

#[websocket_ssl_cert]
#/etc/ssl/certs/server.crt

#[websocket_ssl_key]
#/etc/ssl/private/server.key

# Defaults to 0 ("no") so that you can use self-signed SSL certificates for
# development, or internally.
#[ssl_verify]
#0
""".strip()
@@ -1,20 +0,0 @@
from __future__ import absolute_import, division, print_function, unicode_literals

from ripple.util.Decimal import Decimal

from unittest import TestCase

class test_Decimal(TestCase):
    def test_construct(self):
        self.assertEquals(str(Decimal('')), '0')
        self.assertEquals(str(Decimal('0')), '0')
        self.assertEquals(str(Decimal('0.2')), '0.2')
        self.assertEquals(str(Decimal('-0.2')), '-0.2')
        self.assertEquals(str(Decimal('3.1416')), '3.1416')

    def test_accumulate(self):
        d = Decimal()
        d.accumulate('0.5')
        d.accumulate('3.1416')
        d.accumulate('-23.34234')
        self.assertEquals(str(d), '-19.70074')
@@ -1,56 +0,0 @@
from __future__ import absolute_import, division, print_function, unicode_literals

from ripple.util import Dict

from unittest import TestCase

class test_Dict(TestCase):
    def test_count_all_subitems(self):
        self.assertEquals(Dict.count_all_subitems({}), 1)
        self.assertEquals(Dict.count_all_subitems({'a': {}}), 2)
        self.assertEquals(Dict.count_all_subitems([1]), 2)
        self.assertEquals(Dict.count_all_subitems([1, 2]), 3)
        self.assertEquals(Dict.count_all_subitems([1, {2: 3}]), 4)
        self.assertEquals(Dict.count_all_subitems([1, {2: [3]}]), 5)
        self.assertEquals(Dict.count_all_subitems([1, {2: [3, 4]}]), 6)

    def test_prune(self):
        self.assertEquals(Dict.prune({}, 0), {})
        self.assertEquals(Dict.prune({}, 1), {})

        self.assertEquals(Dict.prune({1: 2}, 0), '{dict with 1 subitem}')
        self.assertEquals(Dict.prune({1: 2}, 1), {1: 2})
        self.assertEquals(Dict.prune({1: 2}, 2), {1: 2})

        self.assertEquals(Dict.prune([1, 2, 3], 0), '[list with 3 subitems]')
        self.assertEquals(Dict.prune([1, 2, 3], 1), [1, 2, 3])

        self.assertEquals(Dict.prune([{1: [2, 3]}], 0),
                          '[list with 4 subitems]')
        self.assertEquals(Dict.prune([{1: [2, 3]}], 1),
                          ['{dict with 3 subitems}'])
        self.assertEquals(Dict.prune([{1: [2, 3]}], 2),
                          [{1: u'[list with 2 subitems]'}])
        self.assertEquals(Dict.prune([{1: [2, 3]}], 3),
                          [{1: [2, 3]}])

    def test_prune_nosub(self):
        self.assertEquals(Dict.prune({}, 0, False), {})
        self.assertEquals(Dict.prune({}, 1, False), {})

        self.assertEquals(Dict.prune({1: 2}, 0, False), '{dict with 1 subitem}')
        self.assertEquals(Dict.prune({1: 2}, 1, False), {1: 2})
        self.assertEquals(Dict.prune({1: 2}, 2, False), {1: 2})

        self.assertEquals(Dict.prune([1, 2, 3], 0, False),
                          '[list with 3 subitems]')
        self.assertEquals(Dict.prune([1, 2, 3], 1, False), [1, 2, 3])

        self.assertEquals(Dict.prune([{1: [2, 3]}], 0, False),
                          '[list with 1 subitem]')
        self.assertEquals(Dict.prune([{1: [2, 3]}], 1, False),
                          ['{dict with 1 subitem}'])
        self.assertEquals(Dict.prune([{1: [2, 3]}], 2, False),
                          [{1: u'[list with 2 subitems]'}])
        self.assertEquals(Dict.prune([{1: [2, 3]}], 3, False),
                          [{1: [2, 3]}])
@@ -1,37 +0,0 @@
from __future__ import absolute_import, division, print_function, unicode_literals

from ripple.util.Function import Function, MATCHER

from unittest import TestCase

def FN(*args, **kwds):
    return args, kwds

class test_Function(TestCase):
    def match_test(self, item, *results):
        self.assertEquals(MATCHER.match(item).groups(), results)

    def test_simple(self):
        self.match_test('function', 'function', '')
        self.match_test('f(x)', 'f', '(x)')

    def test_empty_function(self):
        self.assertEquals(Function()(), None)

    def test_empty_args(self):
        f = Function('ripple.util.test_Function.FN()')
        self.assertEquals(f(), ((), {}))

    def test_function(self):
        f = Function('ripple.util.test_Function.FN(True, {1: 2}, None)')
        self.assertEquals(f(), ((True, {1: 2}, None), {}))
        self.assertEquals(f('hello', foo='bar'),
                          (('hello', True, {1: 2}, None), {'foo': 'bar'}))
        self.assertEquals(
            f, Function('ripple.util.test_Function.FN(true, {1: 2}, null)'))

    def test_quoting(self):
        f = Function('ripple.util.test_Function.FN(testing)')
        self.assertEquals(f(), (('testing',), {}))
        f = Function('ripple.util.test_Function.FN(testing, true, false, null)')
        self.assertEquals(f(), (('testing', True, False, None), {}))
@@ -1,56 +0,0 @@
from __future__ import absolute_import, division, print_function, unicode_literals

from ripple.util import PrettyPrint

from unittest import TestCase

class test_PrettyPrint(TestCase):
    def setUp(self):
        self._results = []
        self.printer = PrettyPrint.Streamer(printer=self.printer)

    def printer(self, *args, **kwds):
        self._results.extend(args)

    def run_test(self, expected, *args):
        for i in range(0, len(args), 2):
            self.printer.add(args[i], args[i + 1])
        self.printer.finish()
        self.assertEquals(''.join(self._results), expected)

    def test_simple_printer(self):
        self.run_test(
            '{\n "foo": "bar"\n}',
            'foo', 'bar')

    def test_multiple_lines(self):
        self.run_test(
            '{\n "foo": "bar",\n "baz": 5\n}',
            'foo', 'bar', 'baz', 5)

    def test_nested_dict(self):
        self.run_test(
            """
{
    "foo": {
        "bar": 1,
        "baz": true
    },
    "bang": "bing"
}
""".strip(), 'foo', {'bar': 1, 'baz': True}, 'bang', 'bing')

    def test_multiple_lines_with_list(self):
        self.run_test(
            """
{
    "foo": [
        "bar",
        1
    ],
    "baz": [
        23,
        42
    ]
}
""".strip(), 'foo', ['bar', 1], 'baz', [23, 42])
@@ -1,28 +0,0 @@
from __future__ import absolute_import, division, print_function, unicode_literals

from ripple.util import Range

from unittest import TestCase

class test_Range(TestCase):
    def round_trip(self, s, *items):
        self.assertEquals(Range.from_string(s), set(items))
        self.assertEquals(Range.to_string(items), s)

    def test_complete(self):
        self.round_trip('10,19', 10, 19)
        self.round_trip('10', 10)
        self.round_trip('10-12', 10, 11, 12)
        self.round_trip('10,19,42-45', 10, 19, 42, 43, 44, 45)

    def test_names(self):
        self.assertEquals(
            Range.from_string('first,last,current', first=1, last=3, current=5),
            set([1, 3, 5]))

    def test_is_range(self):
        self.assertTrue(Range.is_range(''))
        self.assertTrue(Range.is_range('10'))
        self.assertTrue(Range.is_range('10,12'))
        self.assertFalse(Range.is_range('10,12,fred'))
        self.assertTrue(Range.is_range('10,12,fred', 'fred'))
@@ -1,44 +0,0 @@
from __future__ import absolute_import, division, print_function, unicode_literals

from ripple.util.Search import binary_search, linear_search, FIRST, LAST

from unittest import TestCase

class test_Search(TestCase):
    def condition(self, i):
        return 10 <= i < 15

    def test_linear_full(self):
        self.assertEquals(list(linear_search(range(21), self.condition)),
                          [10, 11, 12, 13, 14])

    def test_linear_partial(self):
        self.assertEquals(list(linear_search(range(8, 14), self.condition)),
                          [10, 11, 12, 13])
        self.assertEquals(list(linear_search(range(11, 14), self.condition)),
                          [11, 12, 13])
        self.assertEquals(list(linear_search(range(12, 18), self.condition)),
                          [12, 13, 14])

    def test_linear_empty(self):
        self.assertEquals(list(linear_search(range(1, 4), self.condition)), [])

    def test_binary_first(self):
        self.assertEquals(binary_search(0, 14, self.condition, FIRST), 10)
        self.assertEquals(binary_search(10, 19, self.condition, FIRST), 10)
        self.assertEquals(binary_search(14, 14, self.condition, FIRST), 14)
        self.assertEquals(binary_search(14, 15, self.condition, FIRST), 14)
        self.assertEquals(binary_search(13, 15, self.condition, FIRST), 13)

    def test_binary_last(self):
        self.assertEquals(binary_search(10, 20, self.condition, LAST), 14)
        self.assertEquals(binary_search(0, 14, self.condition, LAST), 14)
        self.assertEquals(binary_search(14, 14, self.condition, LAST), 14)
        self.assertEquals(binary_search(14, 15, self.condition, LAST), 14)
        self.assertEquals(binary_search(13, 15, self.condition, LAST), 14)

    def test_binary_throws(self):
        self.assertRaises(
            ValueError, binary_search, 0, 20, self.condition, LAST)
        self.assertRaises(
            ValueError, binary_search, 0, 20, self.condition, FIRST)
@@ -1,149 +0,0 @@
|
||||
from __future__ import absolute_import, division, print_function
|
||||
|
||||
from ripple.util import Sign
|
||||
from ripple.util import Base58
|
||||
from ripple.ledger import SField
|
||||
|
||||
from unittest import TestCase
|
||||
|
||||
BINARY = 'nN9kfUnKTf7PpgLG'
|
||||
|
||||
class test_Sign(TestCase):
|
||||
SEQUENCE = 23
|
||||
VALIDATOR_KEY_HUMAN = 'n9JijuoCv8ubEy5ag3LiX3hyq27GaLJsitZPbQ6APkwx2MkUXq8E'
|
||||
|
||||
def setUp(self):
|
||||
self.results = []
|
||||
|
||||
def print(self, *args, **kwds):
|
||||
self.results.append([list(args), kwds])
|
||||
|
||||
def test_field_code(self):
|
||||
self.assertEquals(SField.field_code(SField.STI_UINT32, 4), '$')
|
||||
self.assertEquals(SField.field_code(SField.STI_VL, 1), 'q')
|
||||
self.assertEquals(SField.field_code(SField.STI_VL, 3), 's')
|
||||
self.assertEquals(SField.field_code(SField.STI_VL, 6), 'v')
|
||||
|
||||
def test_strvl(self):
|
||||
self.assertEquals(Sign.prepend_length_byte(BINARY),
|
||||
'\x10nN9kfUnKTf7PpgLG')
|
||||
|
||||
def urandom(self, bytes):
|
||||
return '\5' * bytes
|
||||
|
||||
def test_make_seed(self):
|
||||
self.assertEquals(Sign.make_seed(self.urandom),
|
||||
'\5\5\5\5\5\5\5\5\5\5\5\5\5\5\5\5')
|
||||
|
||||
def test_make_ed(self):
|
||||
private, public = Sign.make_ed25519_keypair(self.urandom)
|
||||
self.assertEquals(private,
|
||||
'\5\5\5\5\5\5\5\5\5\5\5\5\5\5\5\5'
|
||||
'\5\5\5\5\5\5\5\5\5\5\5\5\5\5\5\5')
|
||||
self.assertEquals(public,
|
||||
'nz\x1c\xdd)\xb0\xb7\x8f\xd1:\xf4\xc5Y\x8f\xef\xf4'
|
||||
'\xef*\x97\x16n<\xa6\xf2\xe4\xfb\xfc\xcd\x80P[\xf1')
|
||||
|
||||
def test_make_manifest(self):
|
||||
_, pk = Sign.make_ed25519_keypair(self.urandom)
|
||||
m = Sign.make_manifest(pk, 'verify', 12345)
|
||||
self.assertEquals(
|
||||
m, '$\x00\x0009q nz\x1c\xdd)\xb0\xb7\x8f\xd1:\xf4\xc5Y\x8f\xef\xf4'
|
||||
'\xef*\x97\x16n<\xa6\xf2\xe4\xfb\xfc\xcd\x80P[\xf1s\x06verify')
|
||||
|
||||
def test_sign_manifest(self):
|
||||
ssk, spk = Sign.make_ecdsa_keypair(self.urandom)
|
||||
sk, pk = Sign.make_ed25519_keypair(self.urandom)
|
||||
s = Sign.sign_manifest('manifest', ssk, sk, pk)
|
||||
self.assertEquals(
|
||||
s, 'manifestvF0D\x02 \x04\x85\x95p\x0f\xb8\x17\x7f\xdf\xdd\x04'
|
||||
'\xaa+\x16q1W\xf6\xfd\xe8X\xb12l\xd5\xc3\xf1\xd6\x05\x1b\x1c\x9a'
|
||||
'\x02 \x18\\.(o\xa8 \xeb\x87\xfa&~\xbd\xe6,\xfb\xa61\xd1\xcd\xcd'
|
||||
'\xc8\r\x16[\x81\x9a\x19\xda\x93i\xcdp\x12@\xe5\x84\xbe\xc4\x80N'
|
||||
'\xa0v"\xb2\x80A\x88\x06\xc0\xd2\xbae\x92\x89\xa8\'!\xdd\x00\x88'
|
||||
'\x06s\xe0\xf74\xe3Yg\xad{$\x17\xd3\x99\xaa\x16\xb0\xeaZ\xd7]\r'
|
||||
'\xb3\xdc\x1b\x8f\xc1Z\xdfHU\xb5\x92\xac\x82jI\x02')
|
||||
|
||||
def test_wrap(self):
|
||||
wrap = lambda s: Sign.wrap(s, 5)
|
||||
self.assertEquals(wrap(''), '')
|
||||
self.assertEquals(wrap('12345'), '12345')
|
||||
self.assertEquals(wrap('123456'), '123\n456')
|
||||
self.assertEquals(wrap('12345678'), '1234\n5678')
|
||||
self.assertEquals(wrap('1234567890'), '12345\n67890')
|
||||
self.assertEquals(wrap('12345678901'), '123\n456\n789\n01')

    def test_create_ed_keys(self):
        pkh, skh = Sign.create_ed_keys(self.urandom)
        self.assertEquals(
            pkh, 'nHUUaKHpxyRP4TZZ79tTpXuTpoM8pRNs5crZpGVA5jdrjib5easY')
        self.assertEquals(
            skh, 'pnEp13Zu7xTeKQVQ2RZVaUraE9GXKqFtnXQVUFKXbTE6wsP4wne')

    def test_create_ed_public_key(self):
        pkh = Sign.create_ed_public_key(
            'pnEp13Zu7xTeKQVQ2RZVaUraE9GXKqFtnXQVUFKXbTE6wsP4wne')
        self.assertEquals(
            pkh, 'nHUUaKHpxyRP4TZZ79tTpXuTpoM8pRNs5crZpGVA5jdrjib5easY')

    def get_test_keypair(self):
        public = (Base58.VER_NODE_PUBLIC, '\x02' + (32 * 'v'))
        private = (Base58.VER_NODE_PRIVATE, 32 * 'k')

        Sign.check_validation_public_key(*public)
        Sign.check_secret_key(*private)

        return (Base58.encode_version(*public), Base58.encode_version(*private))

    def test_get_signature(self):
        pk, sk = self.get_test_keypair()
        signature = Sign.get_signature(self.SEQUENCE, pk, sk, sk)
        self.assertEquals(
            signature,
            'JAAAABdxIe2DIKUZd9jDjKikknxnDfWCHkSXYZReFenvsmoVCdIw6nMhAnZ2dnZ2'
            'dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dkYwRAIgXyobHA8sDQxmDJNLE6HI'
            'aARlzvcd79/wT068e113gUkCIHkI540JQT2LHwAD7/y3wFE5X3lEXMfgZRkpLZTx'
            'kpticBJAzo5VrUEr0U47sHvuIjbrINLCTM6pAScA899G9kpMWexbXv1ceIbTP5JH'
            '1HyQmsZsROTeHR0irojvYgx7JLaiAA==')

    def test_check(self):
        public = Base58.encode_version(Base58.VER_NODE_PRIVATE, 32 * 'k')
        Sign.perform_check(public, self.print)
        self.assertEquals(self.results,
                          [[['version = VER_NODE_PRIVATE'], {}],
                           [['decoded length = 32'], {}]])

    def test_create(self):
        Sign.perform_create(self.urandom, self.print)
        self.assertEquals(
            self.results,
            [[['[validator_keys]',
               'nHUUaKHpxyRP4TZZ79tTpXuTpoM8pRNs5crZpGVA5jdrjib5easY',
               '',
               '[master_secret]',
               'pnEp13Zu7xTeKQVQ2RZVaUraE9GXKqFtnXQVUFKXbTE6wsP4wne'],
              {'sep': '\n'}]])

    def test_create_public(self):
        Sign.perform_create_public(
            'pnEp13Zu7xTeKQVQ2RZVaUraE9GXKqFtnXQVUFKXbTE6wsP4wne', self.print)
        self.assertEquals(
            self.results,
            [[['[validator_keys]',
               'nHUUaKHpxyRP4TZZ79tTpXuTpoM8pRNs5crZpGVA5jdrjib5easY',
               '',
               '[master_secret]',
               'pnEp13Zu7xTeKQVQ2RZVaUraE9GXKqFtnXQVUFKXbTE6wsP4wne'],
              {'sep': '\n'}]])

    def test_sign(self):
        public, private = self.get_test_keypair()
        Sign.perform_sign(self.SEQUENCE, public, private, private, print=self.print)
        self.assertEquals(
            self.results,
            [[['[validation_manifest]'], {}],
             [['JAAAABdxIe2DIKUZd9jDjKikknxnDfWCHkSXYZReFenvsmoVCdIw6nMhAnZ2dnZ2dnZ2dnZ2\n'
               'dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dkYwRAIgXyobHA8sDQxmDJNLE6HIaARlzvcd79/wT068\n'
               'e113gUkCIHkI540JQT2LHwAD7/y3wFE5X3lEXMfgZRkpLZTxkpticBJAzo5VrUEr0U47sHvu\n'
               'IjbrINLCTM6pAScA899G9kpMWexbXv1ceIbTP5JH1HyQmsZsROTeHR0irojvYgx7JLaiAA=='],
              {}]])

@@ -1,747 +0,0 @@
"""Utilities for writing code that runs on Python 2 and 3"""

# Copyright (c) 2010-2014 Benjamin Peterson
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import functools
import operator
import sys
import types

__author__ = "Benjamin Peterson <benjamin@python.org>"
__version__ = "1.7.3"


# Useful for very coarse version differentiation.
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3

if PY3:
    string_types = str,
    integer_types = int,
    class_types = type,
    text_type = str
    binary_type = bytes

    MAXSIZE = sys.maxsize
else:
    string_types = basestring,
    integer_types = (int, long)
    class_types = (type, types.ClassType)
    text_type = unicode
    binary_type = str

    if sys.platform.startswith("java"):
        # Jython always uses 32 bits.
        MAXSIZE = int((1 << 31) - 1)
    else:
        # It's possible to have sizeof(long) != sizeof(Py_ssize_t).
        class X(object):
            def __len__(self):
                return 1 << 31
        try:
            len(X())
        except OverflowError:
            # 32-bit
            MAXSIZE = int((1 << 31) - 1)
        else:
            # 64-bit
            MAXSIZE = int((1 << 63) - 1)
        del X


def _add_doc(func, doc):
    """Add documentation to a function."""
    func.__doc__ = doc


def _import_module(name):
    """Import module, returning the module after the last dot."""
    __import__(name)
    return sys.modules[name]


class _LazyDescr(object):

    def __init__(self, name):
        self.name = name

    def __get__(self, obj, tp):
        result = self._resolve()
        setattr(obj, self.name, result)  # Invokes __set__.
        # This is a bit ugly, but it avoids running this again.
        delattr(obj.__class__, self.name)
        return result


class MovedModule(_LazyDescr):

    def __init__(self, name, old, new=None):
        super(MovedModule, self).__init__(name)
        if PY3:
            if new is None:
                new = name
            self.mod = new
        else:
            self.mod = old

    def _resolve(self):
        return _import_module(self.mod)

    def __getattr__(self, attr):
        _module = self._resolve()
        value = getattr(_module, attr)
        setattr(self, attr, value)
        return value


class _LazyModule(types.ModuleType):

    def __init__(self, name):
        super(_LazyModule, self).__init__(name)
        self.__doc__ = self.__class__.__doc__

    def __dir__(self):
        attrs = ["__doc__", "__name__"]
        attrs += [attr.name for attr in self._moved_attributes]
        return attrs

    # Subclasses should override this
    _moved_attributes = []


class MovedAttribute(_LazyDescr):

    def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None):
        super(MovedAttribute, self).__init__(name)
        if PY3:
            if new_mod is None:
                new_mod = name
            self.mod = new_mod
            if new_attr is None:
                if old_attr is None:
                    new_attr = name
                else:
                    new_attr = old_attr
            self.attr = new_attr
        else:
            self.mod = old_mod
            if old_attr is None:
                old_attr = name
            self.attr = old_attr

    def _resolve(self):
        module = _import_module(self.mod)
        return getattr(module, self.attr)


class _SixMetaPathImporter(object):
    """
    A meta path importer to import six.moves and its submodules.

    This class implements a PEP302 finder and loader. It should be compatible
    with Python 2.5 and all existing versions of Python3
    """
    def __init__(self, six_module_name):
        self.name = six_module_name
        self.known_modules = {}

    def _add_module(self, mod, *fullnames):
        for fullname in fullnames:
            self.known_modules[self.name + "." + fullname] = mod

    def _get_module(self, fullname):
        return self.known_modules[self.name + "." + fullname]

    def find_module(self, fullname, path=None):
        if fullname in self.known_modules:
            return self
        return None

    def __get_module(self, fullname):
        try:
            return self.known_modules[fullname]
        except KeyError:
            raise ImportError("This loader does not know module " + fullname)

    def load_module(self, fullname):
        try:
            # in case of a reload
            return sys.modules[fullname]
        except KeyError:
            pass
        mod = self.__get_module(fullname)
        if isinstance(mod, MovedModule):
            mod = mod._resolve()
        else:
            mod.__loader__ = self
        sys.modules[fullname] = mod
        return mod

    def is_package(self, fullname):
        """
        Return true, if the named module is a package.

        We need this method to get correct spec objects with
        Python 3.4 (see PEP451)
        """
        return hasattr(self.__get_module(fullname), "__path__")

    def get_code(self, fullname):
        """Return None

        Required, if is_package is implemented"""
        self.__get_module(fullname)  # eventually raises ImportError
        return None
    get_source = get_code  # same as get_code

_importer = _SixMetaPathImporter(__name__)


class _MovedItems(_LazyModule):
    """Lazy loading of moved objects"""
    __path__ = []  # mark as package


_moved_attributes = [
    MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"),
    MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"),
    MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"),
    MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"),
    MovedAttribute("map", "itertools", "builtins", "imap", "map"),
    MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"),
    MovedAttribute("reload_module", "__builtin__", "imp", "reload"),
    MovedAttribute("reduce", "__builtin__", "functools"),
    MovedAttribute("StringIO", "StringIO", "io"),
    MovedAttribute("UserDict", "UserDict", "collections"),
    MovedAttribute("UserList", "UserList", "collections"),
    MovedAttribute("UserString", "UserString", "collections"),
    MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"),
    MovedAttribute("zip", "itertools", "builtins", "izip", "zip"),
    MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"),

    MovedModule("builtins", "__builtin__"),
    MovedModule("configparser", "ConfigParser"),
    MovedModule("copyreg", "copy_reg"),
    MovedModule("dbm_gnu", "gdbm", "dbm.gnu"),
    MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"),
    MovedModule("http_cookiejar", "cookielib", "http.cookiejar"),
    MovedModule("http_cookies", "Cookie", "http.cookies"),
    MovedModule("html_entities", "htmlentitydefs", "html.entities"),
    MovedModule("html_parser", "HTMLParser", "html.parser"),
    MovedModule("http_client", "httplib", "http.client"),
    MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"),
    MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"),
    MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"),
    MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"),
    MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"),
    MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"),
    MovedModule("cPickle", "cPickle", "pickle"),
    MovedModule("queue", "Queue"),
    MovedModule("reprlib", "repr"),
    MovedModule("socketserver", "SocketServer"),
    MovedModule("_thread", "thread", "_thread"),
    MovedModule("tkinter", "Tkinter"),
    MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"),
    MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"),
    MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"),
    MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"),
    MovedModule("tkinter_tix", "Tix", "tkinter.tix"),
    MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"),
    MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"),
    MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"),
    MovedModule("tkinter_colorchooser", "tkColorChooser",
                "tkinter.colorchooser"),
    MovedModule("tkinter_commondialog", "tkCommonDialog",
                "tkinter.commondialog"),
    MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"),
    MovedModule("tkinter_font", "tkFont", "tkinter.font"),
    MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"),
    MovedModule("tkinter_tksimpledialog", "tkSimpleDialog",
                "tkinter.simpledialog"),
    MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"),
    MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"),
    MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"),
    MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"),
    MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"),
    MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"),
    MovedModule("winreg", "_winreg"),
]
for attr in _moved_attributes:
    setattr(_MovedItems, attr.name, attr)
    if isinstance(attr, MovedModule):
        _importer._add_module(attr, "moves." + attr.name)
del attr

_MovedItems._moved_attributes = _moved_attributes

moves = _MovedItems(__name__ + ".moves")
_importer._add_module(moves, "moves")


class Module_six_moves_urllib_parse(_LazyModule):
    """Lazy loading of moved objects in six.moves.urllib_parse"""


_urllib_parse_moved_attributes = [
    MovedAttribute("ParseResult", "urlparse", "urllib.parse"),
    MovedAttribute("SplitResult", "urlparse", "urllib.parse"),
    MovedAttribute("parse_qs", "urlparse", "urllib.parse"),
    MovedAttribute("parse_qsl", "urlparse", "urllib.parse"),
    MovedAttribute("urldefrag", "urlparse", "urllib.parse"),
    MovedAttribute("urljoin", "urlparse", "urllib.parse"),
    MovedAttribute("urlparse", "urlparse", "urllib.parse"),
    MovedAttribute("urlsplit", "urlparse", "urllib.parse"),
    MovedAttribute("urlunparse", "urlparse", "urllib.parse"),
    MovedAttribute("urlunsplit", "urlparse", "urllib.parse"),
    MovedAttribute("quote", "urllib", "urllib.parse"),
    MovedAttribute("quote_plus", "urllib", "urllib.parse"),
    MovedAttribute("unquote", "urllib", "urllib.parse"),
    MovedAttribute("unquote_plus", "urllib", "urllib.parse"),
    MovedAttribute("urlencode", "urllib", "urllib.parse"),
    MovedAttribute("splitquery", "urllib", "urllib.parse"),
]
for attr in _urllib_parse_moved_attributes:
    setattr(Module_six_moves_urllib_parse, attr.name, attr)
del attr

Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes

_importer._add_module(Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"),
                      "moves.urllib_parse", "moves.urllib.parse")


class Module_six_moves_urllib_error(_LazyModule):
    """Lazy loading of moved objects in six.moves.urllib_error"""


_urllib_error_moved_attributes = [
    MovedAttribute("URLError", "urllib2", "urllib.error"),
    MovedAttribute("HTTPError", "urllib2", "urllib.error"),
    MovedAttribute("ContentTooShortError", "urllib", "urllib.error"),
]
for attr in _urllib_error_moved_attributes:
    setattr(Module_six_moves_urllib_error, attr.name, attr)
del attr

Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes

_importer._add_module(Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"),
                      "moves.urllib_error", "moves.urllib.error")


class Module_six_moves_urllib_request(_LazyModule):
    """Lazy loading of moved objects in six.moves.urllib_request"""


_urllib_request_moved_attributes = [
    MovedAttribute("urlopen", "urllib2", "urllib.request"),
    MovedAttribute("install_opener", "urllib2", "urllib.request"),
    MovedAttribute("build_opener", "urllib2", "urllib.request"),
    MovedAttribute("pathname2url", "urllib", "urllib.request"),
    MovedAttribute("url2pathname", "urllib", "urllib.request"),
    MovedAttribute("getproxies", "urllib", "urllib.request"),
    MovedAttribute("Request", "urllib2", "urllib.request"),
    MovedAttribute("OpenerDirector", "urllib2", "urllib.request"),
    MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"),
    MovedAttribute("ProxyHandler", "urllib2", "urllib.request"),
    MovedAttribute("BaseHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"),
    MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"),
    MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"),
    MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"),
    MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"),
    MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"),
    MovedAttribute("FileHandler", "urllib2", "urllib.request"),
    MovedAttribute("FTPHandler", "urllib2", "urllib.request"),
    MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"),
    MovedAttribute("UnknownHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"),
    MovedAttribute("urlretrieve", "urllib", "urllib.request"),
    MovedAttribute("urlcleanup", "urllib", "urllib.request"),
    MovedAttribute("URLopener", "urllib", "urllib.request"),
    MovedAttribute("FancyURLopener", "urllib", "urllib.request"),
    MovedAttribute("proxy_bypass", "urllib", "urllib.request"),
]
for attr in _urllib_request_moved_attributes:
    setattr(Module_six_moves_urllib_request, attr.name, attr)
del attr

Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes

_importer._add_module(Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"),
                      "moves.urllib_request", "moves.urllib.request")


class Module_six_moves_urllib_response(_LazyModule):
    """Lazy loading of moved objects in six.moves.urllib_response"""


_urllib_response_moved_attributes = [
    MovedAttribute("addbase", "urllib", "urllib.response"),
    MovedAttribute("addclosehook", "urllib", "urllib.response"),
    MovedAttribute("addinfo", "urllib", "urllib.response"),
    MovedAttribute("addinfourl", "urllib", "urllib.response"),
]
for attr in _urllib_response_moved_attributes:
    setattr(Module_six_moves_urllib_response, attr.name, attr)
del attr

Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes

_importer._add_module(Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"),
                      "moves.urllib_response", "moves.urllib.response")


class Module_six_moves_urllib_robotparser(_LazyModule):
    """Lazy loading of moved objects in six.moves.urllib_robotparser"""


_urllib_robotparser_moved_attributes = [
    MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"),
]
for attr in _urllib_robotparser_moved_attributes:
    setattr(Module_six_moves_urllib_robotparser, attr.name, attr)
del attr

Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes

_importer._add_module(Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"),
                      "moves.urllib_robotparser", "moves.urllib.robotparser")


class Module_six_moves_urllib(types.ModuleType):
    """Create a six.moves.urllib namespace that resembles the Python 3 namespace"""
    __path__ = []  # mark as package
    parse = _importer._get_module("moves.urllib_parse")
    error = _importer._get_module("moves.urllib_error")
    request = _importer._get_module("moves.urllib_request")
    response = _importer._get_module("moves.urllib_response")
    robotparser = _importer._get_module("moves.urllib_robotparser")

    def __dir__(self):
        return ['parse', 'error', 'request', 'response', 'robotparser']

_importer._add_module(Module_six_moves_urllib(__name__ + ".moves.urllib"),
                      "moves.urllib")


def add_move(move):
    """Add an item to six.moves."""
    setattr(_MovedItems, move.name, move)


def remove_move(name):
    """Remove item from six.moves."""
    try:
        delattr(_MovedItems, name)
    except AttributeError:
        try:
            del moves.__dict__[name]
        except KeyError:
            raise AttributeError("no such move, %r" % (name,))


if PY3:
    _meth_func = "__func__"
    _meth_self = "__self__"

    _func_closure = "__closure__"
    _func_code = "__code__"
    _func_defaults = "__defaults__"
    _func_globals = "__globals__"
else:
    _meth_func = "im_func"
    _meth_self = "im_self"

    _func_closure = "func_closure"
    _func_code = "func_code"
    _func_defaults = "func_defaults"
    _func_globals = "func_globals"


try:
    advance_iterator = next
except NameError:
    def advance_iterator(it):
        return it.next()
next = advance_iterator


try:
    callable = callable
except NameError:
    def callable(obj):
        return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)


if PY3:
    def get_unbound_function(unbound):
        return unbound

    create_bound_method = types.MethodType

    Iterator = object
else:
    def get_unbound_function(unbound):
        return unbound.im_func

    def create_bound_method(func, obj):
        return types.MethodType(func, obj, obj.__class__)

    class Iterator(object):

        def next(self):
            return type(self).__next__(self)

    callable = callable
_add_doc(get_unbound_function,
         """Get the function out of a possibly unbound function""")


get_method_function = operator.attrgetter(_meth_func)
get_method_self = operator.attrgetter(_meth_self)
get_function_closure = operator.attrgetter(_func_closure)
get_function_code = operator.attrgetter(_func_code)
get_function_defaults = operator.attrgetter(_func_defaults)
get_function_globals = operator.attrgetter(_func_globals)


if PY3:
    def iterkeys(d, **kw):
        return iter(d.keys(**kw))

    def itervalues(d, **kw):
        return iter(d.values(**kw))

    def iteritems(d, **kw):
        return iter(d.items(**kw))

    def iterlists(d, **kw):
        return iter(d.lists(**kw))
else:
    def iterkeys(d, **kw):
        return iter(d.iterkeys(**kw))

    def itervalues(d, **kw):
        return iter(d.itervalues(**kw))

    def iteritems(d, **kw):
        return iter(d.iteritems(**kw))

    def iterlists(d, **kw):
        return iter(d.iterlists(**kw))

_add_doc(iterkeys, "Return an iterator over the keys of a dictionary.")
_add_doc(itervalues, "Return an iterator over the values of a dictionary.")
_add_doc(iteritems,
         "Return an iterator over the (key, value) pairs of a dictionary.")
_add_doc(iterlists,
         "Return an iterator over the (key, [values]) pairs of a dictionary.")


if PY3:
    def b(s):
        return s.encode("latin-1")
    def u(s):
        return s
    unichr = chr
    if sys.version_info[1] <= 1:
        def int2byte(i):
            return bytes((i,))
    else:
        # This is about 2x faster than the implementation above on 3.2+
        int2byte = operator.methodcaller("to_bytes", 1, "big")
    byte2int = operator.itemgetter(0)
    indexbytes = operator.getitem
    iterbytes = iter
    import io
    StringIO = io.StringIO
    BytesIO = io.BytesIO
else:
    def b(s):
        return s
    # Workaround for standalone backslash
    def u(s):
        return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape")
    unichr = unichr
    int2byte = chr
    def byte2int(bs):
        return ord(bs[0])
    def indexbytes(buf, i):
        return ord(buf[i])
    def iterbytes(buf):
        return (ord(byte) for byte in buf)
    import StringIO
    StringIO = BytesIO = StringIO.StringIO
_add_doc(b, """Byte literal""")
_add_doc(u, """Text literal""")


if PY3:
    exec_ = getattr(moves.builtins, "exec")


    def reraise(tp, value, tb=None):
        if value.__traceback__ is not tb:
            raise value.with_traceback(tb)
        raise value

else:
    def exec_(_code_, _globs_=None, _locs_=None):
        """Execute code in a namespace."""
        if _globs_ is None:
            frame = sys._getframe(1)
            _globs_ = frame.f_globals
            if _locs_ is None:
                _locs_ = frame.f_locals
            del frame
        elif _locs_ is None:
            _locs_ = _globs_
        exec("""exec _code_ in _globs_, _locs_""")


    exec_("""def reraise(tp, value, tb=None):
    raise tp, value, tb
""")


print_ = getattr(moves.builtins, "print", None)
if print_ is None:
    def print_(*args, **kwargs):
        """The new-style print function for Python 2.4 and 2.5."""
        fp = kwargs.pop("file", sys.stdout)
        if fp is None:
            return
        def write(data):
            if not isinstance(data, basestring):
                data = str(data)
            # If the file has an encoding, encode unicode with it.
            if (isinstance(fp, file) and
                    isinstance(data, unicode) and
                    fp.encoding is not None):
                errors = getattr(fp, "errors", None)
                if errors is None:
                    errors = "strict"
                data = data.encode(fp.encoding, errors)
            fp.write(data)
        want_unicode = False
        sep = kwargs.pop("sep", None)
        if sep is not None:
            if isinstance(sep, unicode):
                want_unicode = True
            elif not isinstance(sep, str):
                raise TypeError("sep must be None or a string")
        end = kwargs.pop("end", None)
        if end is not None:
            if isinstance(end, unicode):
                want_unicode = True
            elif not isinstance(end, str):
                raise TypeError("end must be None or a string")
        if kwargs:
            raise TypeError("invalid keyword arguments to print()")
        if not want_unicode:
            for arg in args:
                if isinstance(arg, unicode):
                    want_unicode = True
                    break
        if want_unicode:
            newline = unicode("\n")
            space = unicode(" ")
        else:
            newline = "\n"
            space = " "
        if sep is None:
            sep = space
        if end is None:
            end = newline
        for i, arg in enumerate(args):
            if i:
                write(sep)
            write(arg)
        write(end)

_add_doc(reraise, """Reraise an exception.""")

if sys.version_info[0:2] < (3, 4):
    def wraps(wrapped):
        def wrapper(f):
            f = functools.wraps(wrapped)(f)
            f.__wrapped__ = wrapped
            return f
        return wrapper
else:
    wraps = functools.wraps

def with_metaclass(meta, *bases):
    """Create a base class with a metaclass."""
    # This requires a bit of explanation: the basic idea is to make a dummy
    # metaclass for one level of class instantiation that replaces itself with
    # the actual metaclass.
    class metaclass(meta):
        def __new__(cls, name, this_bases, d):
            return meta(name, bases, d)
    return type.__new__(metaclass, 'temporary_class', (), {})
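The trick in `with_metaclass` can be seen in use: the dummy metaclass exists only for the temporary base class, and is replaced by the real metaclass when the final class is created. A self-contained sketch (with `Meta`, `Base`, and `Widget` as illustrative names; `with_metaclass` is copied from above so the snippet runs on its own):

```python
def with_metaclass(meta, *bases):
    """Create a base class with a metaclass (copy of the helper above)."""
    class metaclass(meta):
        def __new__(cls, name, this_bases, d):
            # Discard the temporary base; build the real class via `meta`.
            return meta(name, bases, d)
    return type.__new__(metaclass, 'temporary_class', (), {})

class Meta(type):
    """Illustrative metaclass that tags every class it creates."""
    def __new__(mcls, name, bases, d):
        d['tagged'] = True
        return super(Meta, mcls).__new__(mcls, name, bases, d)

class Base(object):
    pass

# Works under both Python 2 and Python 3 class syntax.
class Widget(with_metaclass(Meta, Base)):
    pass

# Widget's metaclass is Meta, its real base is Base, and the
# temporary_class placeholder does not appear in its MRO.
```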


def add_metaclass(metaclass):
    """Class decorator for creating a class with a metaclass."""
    def wrapper(cls):
        orig_vars = cls.__dict__.copy()
        orig_vars.pop('__dict__', None)
        orig_vars.pop('__weakref__', None)
        slots = orig_vars.get('__slots__')
        if slots is not None:
            if isinstance(slots, str):
                slots = [slots]
            for slots_var in slots:
                orig_vars.pop(slots_var)
        return metaclass(cls.__name__, cls.__bases__, orig_vars)
    return wrapper

# Complete the moves implementation.
# This code is at the end of this module to speed up module loading.
# Turn this module into a package.
__path__ = []  # required for PEP 302 and PEP 451
__package__ = __name__  # see PEP 366 @ReservedAssignment
if globals().get("__spec__") is not None:
    __spec__.submodule_search_locations = []  # PEP 451 @UndefinedVariable
# Remove other six meta path importers, since they cause problems. This can
# happen if six is removed from sys.modules and then reloaded. (Setuptools does
# this for some reason.)
if sys.meta_path:
    for i, importer in enumerate(sys.meta_path):
        # Here's some real nastiness: Another "instance" of the six module might
        # be floating around. Therefore, we can't use isinstance() to check for
        # the six meta path importer, since the other six instance will have
        # inserted an importer with different class.
        if (type(importer).__name__ == "_SixMetaPathImporter" and
                importer.name == __name__):
            del sys.meta_path[i]
            break
    del i, importer
# Finally, add the importer to the meta path import hook.
sys.meta_path.append(_importer)
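The companion `add_metaclass` decorator defined above can be exercised the same way. This sketch inlines a simplified copy (without the `__slots__` handling) so it runs stand-alone; `Registry` and `Plugin` are illustrative names, not part of six:

```python
def add_metaclass(metaclass):
    """Simplified copy of the decorator: rebuild the class via `metaclass`."""
    def wrapper(cls):
        orig_vars = cls.__dict__.copy()
        # These descriptors are recreated by the metaclass; passing them
        # through would raise a TypeError.
        orig_vars.pop('__dict__', None)
        orig_vars.pop('__weakref__', None)
        return metaclass(cls.__name__, cls.__bases__, orig_vars)
    return wrapper

class Registry(type):
    """Illustrative metaclass that records every class it creates."""
    created = []
    def __new__(mcls, name, bases, d):
        cls = super(Registry, mcls).__new__(mcls, name, bases, d)
        Registry.created.append(name)
        return cls

@add_metaclass(Registry)
class Plugin(object):
    pass

# Plugin is rebuilt through Registry, so its metaclass is Registry
# rather than plain `type`, and its creation was recorded.
```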

@@ -16,7 +16,7 @@ then
 tar xzf /tmp/boost.tar.gz
 cd $BOOST_ROOT && \
 ./bootstrap.sh --prefix=$BOOST_ROOT && \
-./b2 -d1 define=_GLIBCXX_USE_CXX11_ABI=0 -j${NUM_PROCESSORS:-2} &&\
+./b2 -d1 define=_GLIBCXX_USE_CXX11_ABI=0 -j$((2*${NUM_PROCESSORS:-2})) &&\
 ./b2 -d0 define=_GLIBCXX_USE_CXX11_ABI=0 install
 else
 echo "Using cached boost at $BOOST_ROOT"

@@ -8,7 +8,7 @@ dependencies:
- sudo apt-get update -qq
- sudo apt-get purge -qq libboost1.48-dev
- sudo apt-get install -qq libboost1.60-all-dev
- sudo apt-get install -qq clang-3.6 gcc-5 g++-5 libobjc-5-dev libgcc-5-dev libstdc++-5-dev libclang1-3.6 libgcc1 libgomp1 libstdc++6 scons protobuf-compiler libprotobuf-dev libssl-dev exuberant-ctags
- sudo apt-get install -qq clang-3.6 gcc-5 g++-5 libobjc-5-dev libgcc-5-dev libstdc++-5-dev libclang1-3.6 libgcc1 libgomp1 libstdc++6 scons protobuf-compiler libprotobuf-dev libssl-dev exuberant-ctags texinfo
- lsb_release -a
- sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 99
- sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-5 99
@@ -16,9 +16,14 @@ dependencies:
- gcc --version
- clang --version
- clang++ --version
- if [[ ! -e gdb-8.0 ]]; then wget https://ftp.gnu.org/gnu/gdb/gdb-8.0.tar.xz && tar xf gdb-8.0.tar.xz && cd gdb-8.0 && ./configure && make && cd ..; fi
- pushd gdb-8.0 && sudo make install && popd
- gdb --version
cache_directories:
- gdb-8.0
test:
pre:
- scons clang.debug
override:
# Execute unit tests under gdb
- gdb -return-child-result -quiet -batch -ex "set env MALLOC_CHECK_=3" -ex "set print thread-events off" -ex run -ex "thread apply all backtrace full" -ex "quit" --args build/clang.debug/rippled --unittest
- gdb -return-child-result -quiet -batch -ex "set env MALLOC_CHECK_=3" -ex "set print thread-events off" -ex run -ex "thread apply all backtrace full" -ex "quit" --args build/clang.debug/rippled --unittest --quiet --unittest-log

@@ -1,112 +0,0 @@
# Manifest Tool Guide

This guide explains how to set up a validator so the key pairs used to sign and
verify validations may safely change. This procedure does not require manual
reconfiguration of servers that trust this validator.

Validators use two types of key pairs: *master keys* and *ephemeral
keys*. Ephemeral keys are used to sign and verify validations. Master keys are
used to sign and verify manifests that change ephemeral keys. The master secret
key should be tightly controlled. The ephemeral secret key needs to be present
in the config file.

## Validator Keys

When first setting up a validator, use the `manifest` script to generate a
master key pair:

```
$ bin/manifest create
```

Sample output:
```
[validator_keys]
nHDwNP6jbJgVKj5zSEjMn8SJhTnPV54Fx5sSiNpwqLbb42nmh3Ln

[master_secret]
paamfhAn5m1NM4UUu5mmvVaHQy8Fb65bkpbaNrvKwX3YMKdjzi2
```

The first value is the master public key. Add the public key to the config
for this validator. A one-word comment must be added after the key (for example
*ThisServersName*). Any other rippled trusting the validator needs to add the
master public key to its config. Only add keys received from trusted sources.

The second value is the corresponding master secret key. **DO NOT INSTALL THIS
IN THE CONFIG**. The master secret key will be used to sign manifests that
change validation keys. Put the master secret key in a secure but recoverable
location.

## Validation Keys

When first setting up a validator, or when changing the ephemeral keys, use the
`rippled` program to create a new ephemeral key pair:

```
$ rippled validation_create
```

Sample output:

```
Loading: "/Users/alice/.config/ripple/rippled.cfg"
Securely connecting to 127.0.0.1:5005
{
   "result" : {
      "status" : "success",
      "validation_key" : "MIN HEAT NUN FAWN HIP KAHN BORG PHI BALK ANN TWIG RACY",
      "validation_private_key" : "pn6kTqE4WyeRQzZrRp5FnJ5J2vLdSCB5KD3DEyfVw6C9woDBfED",
      "validation_public_key" : "n9LtZ9haqYMbzJ92cDd3pu3Lko6uEznrXuYea3ehuhVcwDHF5coX",
      "validation_seed" : "sh8bLqqkGBknGcsgRTrFMxcciwytm"
   }
}
```

Add the `validation_seed` value (the ephemeral secret key) to this validator's
config. It is recommended to add the ephemeral public key and the sequence
number as a comment as well (sequence numbers are explained below):

```
[validation_seed]
sh8bLqqkGBknGcsgRTrFMxcciwytm
# validation_public_key: n9LtZ9haqYMbzJ92cDd3pu3Lko6uEznrXuYea3ehuhVcwDHF5coX
# sequence number: 1
```

A manifest is a signed message used to inform other servers of this validator's
ephemeral public key. A manifest contains a sequence number and the new ephemeral
public key, and it is signed with both the ephemeral and master secret keys.
The sequence number should be higher than the previous sequence number (if it
is not, the manifest will be ignored). Usually the previous sequence number
will be incremented by one. Use the `manifest` script to create a manifest.
It has the form:

```
$ bin/manifest sign sequence validation_public_key validation_private_key master_secret
```

For example:

```
$ bin/manifest sign 1 n9LtZ9ha...wDHF5coX pn6kTqE4...9woDBfED paamfhAn...YMKdjzi2
```

Sample output:

```
[validation_manifest]
JAAAAAFxIe3t8rIb4Ba8JHI97CbwpxmTq0LhN/7ZAbsNaSwrbHaypHMhAzTuu07YGOvVvB3+
aS0jhP+q0TVgTjGJKhx+yTY1Da3ddkYwRAIgDkmIt3dPNsfeCH3ApMZgpwqG4JwtIlKEymqK
S7v+VqkCIFQXg20ZMpXXT86vmLdlmPspgeUN1scWsuFoPYUUJywycBJAl93+/bZbfZ4quTeM
5y80/OSIcVoWPcHajwrAl68eiAW4MVFeJXvShXNfnT+XsxMjDh0VpOkhvyp971i1MgjBAA==
```

Copy this to the config for this validator. Don't forget to update the comment
noting the sequence number.

## Revoking a key

If a master key is compromised, the key may be revoked permanently. To revoke a
master key, sign a manifest with the highest possible sequence number:
`4294967295`
@@ -172,6 +172,13 @@
# NOTE If no ports support the peer protocol, rippled cannot
# receive incoming peer connections or become a superpeer.
#
# limit = <number>
#
# Optional. An integer value that will limit the number of connected
# clients that the port will accept. Once the limit is reached, new
# connections will be refused until other clients disconnect.
# Omit or set to 0 to allow unlimited numbers of clients.
#
# user = <text>
# password = <text>
#
@@ -273,7 +280,47 @@
# keep rippled from connecting to other instances of rippled or
# prevent RPC and WebSocket clients from connecting.
#
# send_queue_limit = [1..65535]
#
# A Websocket will disconnect when its send queue exceeds this limit.
# The default is 100. A larger value may help with erratic disconnects but
# may adversely affect server performance.
#
# WebSocket permessage-deflate extension options
#
# These settings configure the optional permessage-deflate extension
# options and may appear on any port configuration entry. They are meaningful
# only to ports which have enabled a WebSocket protocol.
#
# permessage_deflate = <flag>
#
# Determines if permessage_deflate extension negotiations are enabled.
# When enabled, clients may request the extension and the server will
# offer the enabled extension in response.
#
# client_max_window_bits = [9..15]
# server_max_window_bits = [9..15]
# client_no_context_takeover = <flag>
# server_no_context_takeover = <flag>
#
# These optional settings control options related to the permessage-deflate
# extension negotiation. For precise definitions of these fields please see
# RFC 7692, "Compression Extensions for WebSocket":
# https://tools.ietf.org/html/rfc7692
#
# compress_level = [0..9]
#
# When set, determines the amount of compression attempted, where 0 is
# the least amount and 9 is the most amount. Higher levels require more
# CPU resources. Levels 1 through 3 use a fast compression algorithm,
# while levels 4 through 9 use a more compact algorithm which uses more
# CPU resources. If unspecified, a default of 3 is used.
#
# memory_level = [1..9]
#
# When set, determines the relative amount of memory used to hold
# intermediate compression data. Higher numbers can give better compression
# ratios at the cost of higher memory and CPU resources.
#
# [rpc_startup]
#
@@ -435,6 +482,12 @@
# time a transaction with a higher fee level is added.
# Default: 20.
#
# minimum_queue_size = <number>
#
# The queue will always be able to hold at least this <number> of
# transactions, regardless of recent ledger sizes or the value of
# ledgers_in_queue. Default: 2000.
#
# retry_sequence_percent = <number>
#
# If a client replaces a transaction in the queue (same sequence
@@ -512,8 +565,7 @@
#
# These settings affect the behavior of the server instance with respect
# to Ripple payment protocol level activities such as validating and
# closing ledgers, establishing a quorum, or adjusting fees in response
# to server overloads.
# closing ledgers or adjusting fees in response to server overloads.
#
#
#
@@ -567,17 +619,42 @@
#
#
#
# [validator_token]
#
# This is an alternative to [validation_seed] that allows rippled to perform
# validation without having to store the validator keys on the network
# connected server. The field should contain a single token in the form of a
# base64-encoded blob.
# An external tool is available for generating validator keys and tokens.
#
#
#
# [validator_key_revocation]
#
# If a validator's secret key has been compromised, a revocation must be
# generated and added to this field. The revocation notifies peers that it is
# no longer safe to trust the revoked key. The field should contain a single
# revocation in the form of a base64-encoded blob.
# An external tool is available for generating and revoking validator keys.
#
#
#
# [validators_file]
#
# Path or name of a file that contains the validation public keys of nodes
# to always accept as validators as well as the minimum number of validators
# needed to accept consensus.
#
# The contents of the file should include a [validators] and a
# [validation_quorum] entry. [validators] should be followed by
# a list of validation public keys of nodes, one per line, optionally
# followed by a comment separated by whitespace.
# [validation_quorum] should be followed by a number.
# The contents of the file should include a [validators] and/or
# [validator_list_sites] and [validator_list_keys] entries.
# [validators] should be followed by a list of validation public keys of
# nodes, one per line.
# [validator_list_sites] should be followed by a list of URIs each serving a
# list of recommended validators.
# [validator_list_keys] should be followed by a list of keys belonging to
# trusted validator list publishers. Validator lists fetched from configured
# sites will only be considered if the list is accompanied by a valid
# signature from a trusted publisher key.
#
# Specify the file by its name or path.
# Unless an absolute path is specified, it will be considered relative to
@@ -589,14 +666,12 @@
#
# Example content:
# [validators]
# n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7 RL1
# n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj RL2
# n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C RL3
# n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS RL4
# n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA RL5
# n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7
# n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj
# n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C
# n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS
# n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA
#
# [validation_quorum]
# 3
#
#
# [path_search]
@@ -628,6 +703,14 @@
#
#
#
# [workers]
#
# Configures the number of threads for processing work submitted by peers
# and clients. If not specified, then the value is automatically determined
# by factors including the number of system processors and whether this
# node is a validator.
#
#
#-------------------------------------------------------------------------------
#
# 4. HTTPS Client
@@ -987,7 +1070,7 @@ pool.ntp.org
[ips]
r.ripple.com 51235

# File containing validation quorum and trusted validator keys.
# File containing trusted validator keys or validator list publishers.
# Unless an absolute path is specified, it will be considered relative to the
# folder in which the rippled.cfg file is located.
[validators_file]
@@ -13,8 +13,6 @@
# [validators]
#
# List of the validation public keys of nodes to always accept as validators.
# A comment may, optionally, be associated with each entry, separated by
# whitespace from the validation public key.
#
# The latest list of recommended validators can be obtained from
# https://ripple.com/ripple.txt
@@ -23,26 +21,34 @@
#
# Examples:
# n9KorY8QtTdRx7TVDpwnG9NvyxsDwHUKUEeDLY3AkiGncVaSXZi5
# n9MqiExBcoG19UXwoLjBJnhsxEhAZMuWwJDRdkyDz1EkEkwzQTNt John Doe
# n9MqiExBcoG19UXwoLjBJnhsxEhAZMuWwJDRdkyDz1EkEkwzQTNt
#
# [validator_list_sites]
#
# List of URIs serving lists of recommended validators.
#
# [validation_quorum]
# Examples:
# https://ripple.com/validators
# http://127.0.0.1:8000
#
# Sets the minimum number of trusted validations a ledger must have before
# the server considers it fully validated. Note that if you are validating,
# your validation counts.
# [validator_list_keys]
#
# List of keys belonging to trusted validator list publishers.
# Validator lists fetched from configured sites will only be considered
# if the list is accompanied by a valid signature from a trusted
# publisher key.
# Validator list keys should be hex-encoded.
#
# Examples:
# ed499d732bded01504a7407c224412ef550cc1ade638a4de4eb88af7c36cb8b282
# 0202d3f36a801349f3be534e3f64cfa77dede6e1b6310a0b48f40f20f955cec945
# 02dd8b7075f64d77d9d2bdb88da364f29fcd975f9ea6f21894abcc7564efda8054
#

# Public keys of the validators that this rippled instance trusts.
[validators]
n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7 RL1
n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj RL2
n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C RL3
n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS RL4
n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA RL5

# The number of validators rippled needs to accept a consensus.
# Don't change this unless you know what you're doing.
[validation_quorum]
3
n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7
n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj
n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C
n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS
n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA

@@ -44,6 +44,15 @@ install callouts

explicit callout ;

install consensus_images
    :
    [ glob images/consensus/*.png ]
    :
    <location>$(out)/html/images/consensus
    ;

explicit consensus_images ;

xml doc
    :
    main.qbk
@@ -60,7 +69,7 @@ boostbook boostdoc
<xsl:param>boost.root=$(broot)
<xsl:param>chunk.first.sections=1 # Chunk the first top-level section?
<xsl:param>chunk.section.depth=8 # Depth to which sections should be chunked
<xsl:param>generate.section.toc.level=1 # Control depth of TOC generation in sections
<xsl:param>generate.section.toc.level=2 # Control depth of TOC generation in sections
<xsl:param>toc.max.depth=2 # How many levels should be created for each TOC?
<xsl:param>toc.section.depth=2 # How deep should recursive sections appear in the TOC?
<xsl:param>generate.toc="chapter toc section toc"
@@ -68,4 +77,5 @@ boostbook boostdoc
<location>temp
<dependency>stylesheets
<dependency>images
<dependency>consensus_images
;

docs/consensus.qbk (new file, 678 lines)
@@ -0,0 +1,678 @@
[section Consensus and Validation]

[*This section is a work in progress!!]

Consensus is the task of reaching agreement within a distributed system in the
presence of faulty or even malicious participants. This document outlines the
[@https://ripple.com/files/ripple/consensus/whitepaper.pdf Ripple Consensus
Algorithm] as implemented in [@https://github.com/ripple/rippled rippled], but
focuses on its utility as a generic consensus algorithm independent of the
detailed mechanics of the Ripple Consensus Ledger. Most notably, the algorithm
does not require fully synchronous communication between all nodes in the
network, or even a fixed network topology, but instead achieves consensus via
collectively trusted subnetworks.

[heading Distributed Agreement]

A challenge for distributed systems is reaching agreement on changes in shared
state. For the Ripple network, the shared state is the current ledger--account
information, account balances, order books and other financial data. We will
refer to shared distributed state as a /ledger/ throughout the remainder of this
document.

[$images/consensus/ledger_chain.png [width 50%] [height 50%] ]

As shown above, new ledgers are made by applying a set of transactions to the
prior ledger. For the Ripple network, transactions include payments,
modification of account settings, updates to offers and more.

In a centralized system, generating the next ledger is trivial since there is a
single unique arbiter of which transactions to include and how to apply them to
a ledger. For decentralized systems, participants must resolve disagreements on
the set of transactions to include, the order to apply those transactions, and
even the resulting ledger after applying the transactions. This is even more
difficult when some participants are faulty or malicious.

The Ripple network is a decentralized and _trust-full_ network. Anyone is free
to join and participants are free to choose a subset of peers that are
collectively trusted to not collude in an attempt to defraud the participant.
Leveraging this network of trust, the Ripple algorithm has two main components.

* /Consensus/ in which network participants agree on the transactions to apply
  to a prior ledger, based on the positions of their chosen peers.
* /Validation/ in which network participants agree on what ledger was
  generated, based on the ledgers generated by chosen peers.

These phases are continually repeated to process transactions submitted to the
network, generating successive ledgers and giving rise to the blockchain ledger
history depicted below. In this diagram, time is flowing to the right, but
links between ledgers point backward to the parent. Also note the alternate
Ledger 2 that was generated by some participants, but which failed validation
and was abandoned.

[$images/consensus/block_chain.png]

The remainder of this section describes the Consensus and Validation algorithms
in more detail and is meant as a companion guide to understanding the generic
implementation in =rippled=. The document *does not* discuss correctness,
fault-tolerance or liveness properties of the algorithms or the full details of
how they integrate within =rippled= to support the Ripple Consensus Ledger.

[section Consensus Overview]

[heading Definitions]

* The /ledger/ is the shared distributed state. Each ledger has a unique ID to
  distinguish it from all other ledgers. During consensus, the /previous/,
  /prior/ or /last-closed/ ledger is the most recent ledger seen by consensus
  and is the basis upon which it will build the next ledger.
* A /transaction/ is an instruction for an atomic change in the ledger state. A
  unique ID distinguishes a transaction from other transactions.
* A /transaction set/ is a set of transactions under consideration by consensus.
  The goal of consensus is to reach agreement on this set. The generic
  consensus algorithm does not rely on an ordering of transactions within the
  set, nor does it specify how to apply a transaction set to a ledger to
  generate a new ledger. A unique ID distinguishes a set of transactions from
  all other sets of transactions.
* A /node/ is one of the distributed actors running the consensus algorithm. It
  has a unique ID to distinguish it from all other nodes.
* A /peer/ of a node is another node that it has chosen to follow and which it
  believes will not collude with other chosen peers. The choice of peers is not
  symmetric, since participants can decide on their chosen sets independently.
* A /position/ is the current belief of the next ledger's transaction set and
  close time. Position can refer to the node's own position or the position of a
  peer.
* A /proposal/ is one of a sequence of positions a node shares during consensus.
  An initial proposal contains the starting position taken by a node before it
  considers any peer positions. If a node subsequently updates its position in
  response to its peers, it will issue an updated proposal. A proposal is
  uniquely identified by the ID of the proposing node, the ID of the position
  taken, the ID of the prior ledger the proposal is for, and the sequence number
  of the proposal.
* A /dispute/ is a transaction that is either not part of a node's position or
  not in a peer's position. During consensus, the node will add or remove
  disputed transactions from its position based on that transaction's support
  amongst its peers.

Note that most types have an ID as a lightweight identifier of instances of that
type. Consensus often operates on the IDs directly since the underlying type is
potentially expensive to share over the network. For example, proposals only
contain the ID of the position of a peer. Since many peers likely have the same
position, this reduces the need to send the full transaction set multiple times.
Instead, a node can request the transaction set from the network if necessary.

[heading Overview]
[$images/consensus/consensus_overview.png [width 50%] [height 50%] ]

The diagram above is an overview of the consensus process from the perspective
of a single participant. Recall that during a single consensus round, a node is
trying to agree with its peers on which transactions to apply to its prior
ledger when generating the next ledger. It also attempts to agree on the
[link effective_close_time network time when the ledger closed]. There are
3 main phases to a consensus round:

* A call to =startRound= places the node in the =Open= phase. In this phase,
  the node is waiting for transactions to include in its open ledger.
* At some point, the node will =Close= the open ledger and transition to the
  =Establish= phase. In this phase, the node shares/receives peer proposals on
  which transactions should be accepted in the closed ledger.
* At some point, the node determines it has reached consensus with its peers on
  which transactions to include. It transitions to the =Accept= phase. In this
  phase, the node works on applying the transactions to the prior ledger to
  generate a new closed ledger. Once the new ledger is completed, the node shares
  the validated ledger hash with the network and makes a call to =startRound= to
  start the cycle again for the next ledger.

Throughout, a heartbeat timer calls =timerEntry= at a regular frequency to drive
the process forward. Although the =startRound= call occurs at arbitrary times
based on when the initial round began and the time it takes to apply
transactions, the transitions from =Open= to =Establish= and =Establish= to
=Accept= only occur during calls to =timerEntry=. Similarly, transactions can
arrive at arbitrary times, independent of the heartbeat timer. Transactions
received after the =Open= to =Close= transition and not part of peer proposals
won't be considered until the next consensus round. They are represented above
by the light green triangles.

Peer proposals are issued by a node during a =timerEntry= call, but since peers
do not synchronize =timerEntry= calls, they are received by other peers at
arbitrary times. Peer proposals are only considered if received prior to the
=Establish= to =Accept= transition, and only if the peer is working on the same
prior ledger. Peer proposals received after consensus is reached will not be
meaningful and are represented above by the circle with the X in it. Only
proposals from chosen peers are considered.

[#effective_close_time]
[heading Effective Close Time]

In addition to agreeing on a transaction set, each consensus round tries to
agree on the time the ledger closed. Each node calculates its own close time
when it closes the open ledger. This exact close time is rounded to the nearest
multiple of the current /effective close time resolution/. It is this
/effective close time/ that nodes seek to agree on. This allows servers to
derive a common time for a ledger without the need for perfectly synchronized
clocks. As depicted below, the 3 pink arrows represent exact close times from 3
consensus nodes that round to the same effective close time given the current
resolution. The purple arrow represents a peer whose estimate rounds to a
different effective close time given the current resolution.

[$images/consensus/EffCloseTime.png]

The effective close time is part of the node's position and is shared with peers
in its proposals. Just like the position on the consensus transaction set, a
node will update its close time position in response to its peers' effective
close time positions. Peers can agree to disagree on the close time, in which
case the effective close time is taken as 1 second past the prior close.

The close time resolution is itself dynamic, decreasing (coarser) resolution in
subsequent consensus rounds if nodes are unable to reach consensus on an
effective close time and increasing (finer) resolution if nodes consistently
reach close time consensus.
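
The rounding scheme described above can be sketched in a few lines of Python.
This is an illustrative sketch, not the rippled implementation: the function
name, the integer-second times, and the `prior_close + 1` floor are assumptions
made for the example.

```python
def effective_close_time(exact_close, resolution, prior_close):
    """Round an exact close time (seconds) to the nearest multiple of
    `resolution`, but never earlier than one second past the prior close.
    Hypothetical helper for illustration only."""
    rounded = round(exact_close / resolution) * resolution
    return max(rounded, prior_close + 1)

# Three nodes with slightly different clocks agree on one effective time.
times = [996, 1003, 1004]
assert {effective_close_time(t, 10, 990) for t in times} == {1000}

# A node whose estimate rounds differently disagrees this round.
assert effective_close_time(1017, 10, 990) == 1020
```

With a coarser resolution, more divergent clocks still land on the same
effective close time, which is why the resolution loosens when nodes fail to
agree.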

[heading Modes]

Internally, a node operates under one of the following consensus modes. Either
of the first two modes may be chosen when a consensus round starts.

* /Proposing/ indicates the node is a full-fledged consensus participant. It
  takes on positions and sends proposals to its peers.
* /Observing/ indicates the node is a passive consensus participant. It
  maintains a position internally, but does not propose that position to its
  peers. Instead, it receives peer proposals and updates its position
  to track the majority of its peers. This may be preferred if the node is only
  being used to track the state of the network or during a start-up phase while
  it is still synchronizing with the network.

The other two modes are set internally during the consensus round when the node
believes it is no longer working on the dominant ledger chain based on peer
validations. It checks this on every call to =timerEntry=.

* /Wrong Ledger/ indicates the node is not working on the correct prior ledger
  and does not have it available. It requests that ledger from the network, but
  continues to work towards consensus this round while waiting. If it had been
  /proposing/, it will send a special "bowout" proposal to its peers to indicate
  its change in mode for the rest of this round. For the duration of the round,
  it defers to peer positions for determining the consensus outcome as if it
  were just /observing/.
* /Switch Ledger/ indicates that the node has acquired the correct prior ledger
  from the network. Although it now has the correct prior ledger, the fact that
  it had the wrong one at some point during this round means it is likely behind
  and should defer to peer positions for determining the consensus outcome.

[$images/consensus/consensus_modes.png]

Once either the wrong ledger or switch ledger mode is reached, the node cannot
return to proposing or observing until the next consensus round. However,
the node could change its view of the correct prior ledger, so going from
switch ledger to wrong ledger and back again is possible.
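
The mid-round mode constraints above can be summarized as a small transition
table. This Python sketch uses hypothetical names purely for illustration; the
generic consensus implementation in =rippled= represents modes with its own
C++ types.

```python
from enum import Enum

class Mode(Enum):
    PROPOSING = "proposing"
    OBSERVING = "observing"
    WRONG_LEDGER = "wrongLedger"
    SWITCHED_LEDGER = "switchedLedger"

# Mid-round transitions permitted by the rules described above: once a node
# drops to wrongLedger/switchedLedger it cannot regain proposing/observing
# until the next round, but it may bounce between those two lower modes as
# its view of the correct prior ledger changes.
ALLOWED = {
    Mode.PROPOSING:       {Mode.WRONG_LEDGER, Mode.SWITCHED_LEDGER},
    Mode.OBSERVING:       {Mode.WRONG_LEDGER, Mode.SWITCHED_LEDGER},
    Mode.WRONG_LEDGER:    {Mode.SWITCHED_LEDGER},
    Mode.SWITCHED_LEDGER: {Mode.WRONG_LEDGER},
}

def transition(current, new):
    """Validate a mid-round mode change against the table above."""
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal mid-round transition: {current} -> {new}")
    return new
```

For example, `transition(Mode.WRONG_LEDGER, Mode.SWITCHED_LEDGER)` succeeds,
while `transition(Mode.SWITCHED_LEDGER, Mode.PROPOSING)` raises, matching the
rule that a node only regains proposing at the next =startRound=.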
|
||||
|
||||
The distinction between the wrong and switched ledger modes arises because a
|
||||
ledger's unique identifier may be known by a node before the ledger itself. This
|
||||
reflects that fact that the data corresponding to a ledger may be large and take
|
||||
time to share over the network, whereas the smaller ID could be shared in a peer
|
||||
validation much more quickly. Distinguishing the two states allows the node to
|
||||
decide how best to generate the next ledger once it declares consensus.
|
||||
|
||||
[heading Phases]
|
||||
|
||||
As depicted in the overview diagram, consensus is best viewed as a progression
|
||||
through 3 phases. There are 4 public methods of the generic consensus algorithm
|
||||
that determine this progression
|
||||
|
||||
* =startRound= begins a consensus round.
|
||||
* =timerEntry= is called at a regular frequency (=LEDGER_MIN_CLOSE=) and is the
|
||||
only call to consensus that can change the phase from =Open= to =Establish=
|
||||
or =Accept=.
|
||||
* =peerProposal= is called whenever a peer proposal is received and is what
|
||||
allows a node to update its position in a subsequent =timerEntry= call.
|
||||
* =gotTxSet= is called when a transaction set is received from the network. This
|
||||
is typically in response to a prior request from the node to acquire the
|
||||
transaction set corresponding to a disagreeing peer's position.
|
||||
|
||||
The following subsections describe each consensus phase in more detail and what
|
||||
actions are taken in response to these calls.
|
||||
|
||||
[h6 Open]

The =Open= phase is a quiescent period that allows transactions to build up in
the node's open ledger. The duration is a trade-off between latency and
throughput: a shorter window reduces the latency of generating the next
ledger, but also reduces transaction throughput, because fewer transactions
are accepted into the ledger.

A call to =startRound= would forcibly begin the next consensus round, skipping
completion of the current round. This is not expected during normal operation.
Calls to =peerProposal= or =gotTxSet= simply store the proposal or transaction
set for use in the coming =Establish= phase.

A call to =timerEntry= first checks that the node is working on the correct
prior ledger. If not, it will update the mode and request the correct ledger.
Otherwise, the node checks whether to switch to the =Establish= phase and close
the ledger.

['Ledger Close]

Under normal circumstances, the open ledger period ends when one of the
following is true:

* There are transactions in the open ledger and more than =LEDGER_MIN_CLOSE=
  has elapsed. This is the typical case.
* There are no open transactions and a suitably longer idle interval has
  elapsed. This increases the opportunity to get some transactions into the
  next ledger and avoids the useless work of closing an empty ledger.
* More than half of the prior round's peers have already closed or finished
  this round. This indicates the node is falling behind and needs to catch up.

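These conditions can be sketched as a single predicate. The sketch below is
illustrative only: the parameter names, the two-second stand-in for
=LEDGER_MIN_CLOSE=, and the idle-interval handling are assumptions, not the
rippled implementation.

```
#include <chrono>
#include <cstddef>

using namespace std::chrono_literals;

// Sketch of the ledger-close decision described above (illustrative).
bool shouldCloseLedger(
    bool anyTransactions,         // transactions exist in the open ledger?
    std::size_t prevProposers,    // peers that proposed in the prior round
    std::size_t proposersClosed,  // peers that have already closed this round
    std::chrono::milliseconds openTime,      // how long the ledger was open
    std::chrono::milliseconds idleInterval)  // longer window when idle
{
    // More than half of the prior round's peers have moved on: catch up.
    if (2 * proposersClosed > prevProposers)
        return true;

    // Typical case: transactions exist and the minimum close window elapsed.
    std::chrono::milliseconds const ledgerMinClose = 2s;  // assumed value
    if (anyTransactions)
        return openTime >= ledgerMinClose;

    // No transactions: wait for the longer idle interval.
    return openTime >= idleInterval;
}
```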
When closing the ledger, the node takes its initial position based on the
transactions in the open ledger and uses the current time as its initial close
time estimate. If in the proposing mode, the node shares its initial position
with peers. Now that the node has taken a position, it will consider any peer
positions for this round that arrived earlier. The node generates a disputed
transaction for each transaction not in common with a peer's position. The
node also records the vote of each peer for each disputed transaction.

In the example below, we suppose our node has closed with transactions 1, 2,
and 3. It creates disputes for transactions 2, 3, and 4, since at least one
peer position differs on each.

[#disputes_image]
[$images/consensus/disputes.png [width 20%] [height 20%]]

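This dispute bookkeeping can be sketched with plain integer IDs. The helpers
below are a toy stand-in for the =TxSet= comparison interface described later
in this document, not actual rippled code, and the peer sets used in any test
are one hypothetical assignment consistent with the example.

```
#include <map>
#include <set>
#include <vector>

// Toy sketch of dispute creation using integer transaction IDs.
using TxID = int;
using TxIDSet = std::set<TxID>;

// Transactions not in common between two positions. The bool is true if the
// transaction is in our set, false if it is only in the other set.
std::map<TxID, bool>
compareSets(TxIDSet const& ours, TxIDSet const& theirs)
{
    std::map<TxID, bool> diff;
    for (TxID tx : ours)
        if (theirs.count(tx) == 0)
            diff[tx] = true;
    for (TxID tx : theirs)
        if (ours.count(tx) == 0)
            diff[tx] = false;
    return diff;
}

// Every transaction that differs from at least one peer becomes a dispute.
TxIDSet
disputedTxs(TxIDSet const& ours, std::vector<TxIDSet> const& peers)
{
    TxIDSet disputes;
    for (auto const& peer : peers)
        for (auto const& kv : compareSets(ours, peer))
            disputes.insert(kv.first);
    return disputes;
}
```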
[h6 Establish]

The establish phase is the active period of consensus, in which the node
exchanges proposals with peers in an attempt to reach agreement on the
consensus transactions and the effective close time.

A call to =startRound= would forcibly begin the next consensus round, skipping
completion of the current round. This is not expected during normal operation.
Calls to =peerProposal= or =gotTxSet= that reflect new positions will generate
disputed transactions for any new disagreements and will update the peer's vote
for all disputed transactions.

A call to =timerEntry= first checks that the node is working from the correct
prior ledger. If not, the node will update the mode and request the correct
ledger. Otherwise, the node updates its own position and considers whether to
switch to the =Accept= phase and declare consensus reached. However, at least
=LEDGER_MIN_CONSENSUS= time must have elapsed before doing either. This allows
peers an opportunity to take an initial position and share it.

['Update Position]

In order to achieve consensus, the node is looking for a transaction set that
is supported by a super-majority of peers. The node works towards this set by
adding or removing disputed transactions from its position based on an
increasing threshold for inclusion.

[$images/consensus/threshold.png [width 50%] [height 50%]]

By starting with a lower threshold, a node initially allows a wide set of
transactions into its position. If the establish round continues and the node
is "stuck", a higher threshold can focus on accepting the transactions with
the most support. The constants that define the thresholds and the durations
at which the thresholds change are given by =AV_XXX_CONSENSUS_PCT= and
=AV_XXX_CONSENSUS_TIME= respectively, where =XXX= is one of =INIT=, =MID=,
=LATE= and =STUCK=. The effective close time position is updated using the
same thresholds.

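A sketch of this schedule: the function below picks an inclusion threshold
from how far along the establish phase is, measured as a percentage of the
prior round's duration. The breakpoints and percentages are stand-ins for the
=AV_XXX= constants, chosen for illustration, and the simple vote test is not
the rippled weighting logic.

```
// Pick the inclusion threshold based on establish-phase progress, expressed
// as a percentage of the prior round's duration (values are assumptions).
int avalancheThresholdPct(int percentOfPriorRoundTime)
{
    if (percentOfPriorRoundTime < 50)   // stand-in for AV_MID_CONSENSUS_TIME
        return 50;                      // stand-in for AV_INIT_CONSENSUS_PCT
    if (percentOfPriorRoundTime < 85)   // stand-in for AV_LATE_CONSENSUS_TIME
        return 65;                      // stand-in for AV_MID_CONSENSUS_PCT
    if (percentOfPriorRoundTime < 200)  // stand-in for AV_STUCK_CONSENSUS_TIME
        return 70;                      // stand-in for AV_LATE_CONSENSUS_PCT
    return 95;                          // stand-in for AV_STUCK_CONSENSUS_PCT
}

// A disputed transaction stays in (or enters) our position when the share of
// yes votes strictly exceeds the current threshold.
bool updateVote(int yesVotes, int totalVotes, int thresholdPct)
{
    return yesVotes * 100 > totalVotes * thresholdPct;
}
```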
Given the [link disputes_image example disputes above] and an initial threshold
of 50%, our node would retain its position, since transaction 1 was not in
dispute and transactions 2 and 3 have 75% support. Since its position did not
change, it would not need to send a new proposal to peers. Peer C would not
change either. Peer A would add transaction 3 to its position and Peer B would
remove transaction 4 from its position; both would then send an updated
position.

Conversely, if the diagram reflected a later call to =timerEntry= that occurs
in the stuck region with a threshold of, say, 95%, our node would remove
transactions 2 and 3 from its candidate set and send an updated position.
Likewise, all the other peers would end up with only transaction 1 in their
positions.

Lastly, if our node were not in the proposing mode, it would not include its
own vote and would simply take the majority (>50%) position of its peers. In
this example, our node would maintain its position of transactions 1, 2 and 3.

['Checking Consensus]

After updating its position, the node checks for supermajority agreement with
its peers on its current position. This agreement is on the exact transaction
set, not just the support of individual transactions. That is, if our position
is a proper subset of a peer's position, that counts as a disagreement. Also
recall that effective close time agreement allows a supermajority of
participants to agree to disagree.

Consensus is declared when the following three conditions are true:

* =LEDGER_MIN_CONSENSUS= time has elapsed in the establish phase.
* At least 75% of the prior round's proposers have proposed, OR this establish
  phase is =LEDGER_MIN_CONSENSUS= longer than the last round's establish
  phase.
* =minimumConsensusPercentage= of ourself and our peers share the same
  position.

The middle condition ensures slower peers have a chance to share positions,
but prevents waiting too long on peers that have disconnected. Additionally, a
node can declare that consensus has moved on if =minimumConsensusPercentage=
of peers have sent validations and moved on to the next ledger. This outcome
indicates the node has fallen behind its peers and needs to catch up.

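The three conditions can be sketched as a predicate. The names mirror the text
and the 75% ratio comes from the list above; the two-second stand-in for
=LEDGER_MIN_CONSENSUS= and the parameter layout are illustrative assumptions,
not the rippled implementation.

```
#include <chrono>
#include <cstddef>

using namespace std::chrono_literals;

// Sketch of the consensus check described above (illustrative).
bool haveConsensus(
    std::chrono::milliseconds establishTime,      // elapsed this phase
    std::chrono::milliseconds prevEstablishTime,  // last round's phase length
    std::size_t currentProposers,  // peers proposing this round
    std::size_t prevProposers,     // proposers in the prior round
    std::size_t agreeing,          // participants sharing our exact position
    std::size_t total,             // participants, including us if proposing
    int minimumConsensusPercentage)
{
    std::chrono::milliseconds const ledgerMinConsensus = 2s;  // assumed value

    // 1. Minimum time must have elapsed in the establish phase.
    if (establishTime < ledgerMinConsensus)
        return false;

    // 2. Enough prior-round proposers have weighed in (>= 75%), or we have
    //    already waited LEDGER_MIN_CONSENSUS longer than the last round.
    if (currentProposers * 4 < prevProposers * 3 &&
        establishTime < prevEstablishTime + ledgerMinConsensus)
        return false;

    // 3. A supermajority shares our exact position.
    return agreeing * 100 >= total * minimumConsensusPercentage;
}
```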
If a node is not proposing, it does not include its own position when
calculating the percentage of agreeing participants, but otherwise follows the
above logic.

['Accepting Consensus]

Once consensus is reached (or moved on), the node switches to the =Accept=
phase and signals to the implementing code that the round is complete. That
code is responsible for using the consensus transaction set to generate the
next ledger and for calling =startRound= to begin the next round. The
implementation has total freedom in ordering transactions, deciding what to do
if consensus moved on, determining whether to retry or abandon local
transactions that did not make the consensus set, and updating any internal
state based on the consensus progress.

[h6 Accept]

The =Accept= phase is the terminal phase of the consensus algorithm. Calls to
=timerEntry=, =peerProposal= and =gotTxSet= will not change the internal
consensus state while in the accept phase. The expectation is that the
application-specific code is working to generate the new ledger based on the
consensus outcome. Once complete, that code should make a call to =startRound=
to kick off the next consensus round. The =startRound= call includes the new
prior ledger, the prior ledger ID and whether the round should begin in the
proposing or observing mode. After setting some initial state, the phase
transitions to =Open=. The node will also check that the provided prior ledger
and ID are correct, updating the mode and requesting the proper ledger from
the network if necessary.

[endsect] [/Consensus Overview]

[section Consensus Type Requirements]

The consensus type requirements are given below as minimal implementation
stubs. Actual implementations would augment these stubs with members
appropriate for managing the details of transactions and ledgers within the
larger application framework.

[heading Transaction]
The transaction type =Tx= encapsulates a single transaction under
consideration by consensus.

```
struct Tx
{
    using ID = ...;
    ID const & id() const;

    //... implementation specific
};
```

[heading Transaction Set]
The transaction set type =TxSet= represents a set of [^Tx]s that are
collectively under consideration by consensus. A =TxSet= can be compared
against other [^TxSet]s (typically from peers) and can be modified to add or
remove transactions via the mutable subtype.

```
struct TxSet
{
    using Tx = Tx;
    using ID = ...;

    ID const & id() const;

    bool exists(Tx::ID const &) const;
    Tx const * find(Tx::ID const &) const;

    // Return the set of transactions that are not in common with another set.
    // The bool in the map is true if the Tx is in our set, false if in the
    // other set.
    std::map<Tx::ID, bool> compare(TxSet const & other) const;

    // A mutable view that allows changing transactions in the set
    struct MutableTxSet
    {
        MutableTxSet(TxSet const &);
        bool insert(Tx const &);
        bool erase(Tx::ID const &);
    };

    // Construct from a mutable view.
    TxSet(MutableTxSet const &);

    // Alternatively, if the TxSet is itself mutable,
    // just alias MutableTxSet = TxSet

    //... implementation specific
};
```

[heading Ledger]
The =Ledger= type represents the state shared amongst the distributed
participants. Notice that the details of how the next ledger is generated from
the prior ledger and the consensus-accepted transaction set are not part of
the interface. Within the generic code, this type is primarily used to know
that peers are working on the same tip of the ledger chain and to provide some
basic timing data for consensus.

```
struct Ledger
{
    using ID = ...;

    ID const & id() const;

    // Sequence number that is 1 more than the parent ledger's seq()
    std::size_t seq() const;

    // Whether the ledger's close time was a non-trivial consensus result
    bool closeAgree() const;

    // The close time resolution used in determining the close time
    NetClock::duration closeTimeResolution() const;

    // The (effective) close time, based on the closeTimeResolution
    NetClock::time_point closeTime() const;

    // The parent ledger's close time
    NetClock::time_point parentCloseTime() const;

    Json::Value getJson() const;

    //... implementation specific
};
```

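The close time members above support effective close time rounding: raw close
times are rounded to the current resolution so that peers with slightly
different clocks can still converge on the same value. A minimal sketch using
plain =std::chrono= types (rippled itself uses =NetClock=, and this helper is
an assumption, not the rippled API):

```
#include <algorithm>
#include <chrono>

using Seconds = std::chrono::seconds;

// Round a raw close time to the nearest multiple of the close time
// resolution, while keeping it strictly after the parent ledger's close time.
// Illustrative only.
Seconds effectiveCloseTime(Seconds raw, Seconds resolution, Seconds parentClose)
{
    auto const res = resolution.count();
    Seconds const rounded{((raw.count() + res / 2) / res) * res};
    return std::max(rounded, parentClose + Seconds{1});
}
```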
[heading PeerPosition]
The =PeerPosition= type represents the signed position taken by a peer during
consensus. The only type requirement is owning an instance of a generic
=ConsensusProposal=.
```
// Represents our proposed position or a peer's proposed position
// and is provided with the generic code
template <class NodeID_t, class LedgerID_t, class Position_t>
class ConsensusProposal;

struct PeerPosition
{
    ConsensusProposal<
        NodeID_t,
        typename Ledger::ID,
        typename TxSet::ID> const &
    proposal() const;

    // ... implementation specific
};
```

[heading Generic Consensus Interface]

The generic =Consensus= class relies on an =Adaptor= template class to
implement a set of helper functions that plug the consensus algorithm into a
specific application. The =Adaptor= class also defines the types above needed
by the algorithm. Below are excerpts of the generic consensus implementation
and of the helper types that interact with the concrete implementing class.

```
// Represents a transaction under dispute this round
template <class Tx_t, class NodeID_t> class DisputedTx;

// Represents how the node participates in Consensus this round
enum class ConsensusMode { proposing, observing, wrongLedger, switchedLedger };

// Measures the duration of phases of consensus
class ConsensusTimer
{
public:
    std::chrono::milliseconds read() const;
    // details omitted ...
};

// Initial ledger close times, not rounded by closeTimeResolution
// Used to gauge the degree of synchronization between a node and its peers
struct ConsensusCloseTimes
{
    std::map<NetClock::time_point, int> peers;
    NetClock::time_point self;
};

// Encapsulates the result of consensus.
template <class Adaptor>
struct ConsensusResult
{
    //! The set of transactions consensus agrees go in the ledger
    typename Adaptor::TxSet_t set;

    //! Our proposed position on transactions/close time
    ConsensusProposal<...> position;

    //! Transactions which are under dispute with our peers
    hash_map<typename Adaptor::Tx_t::ID, DisputedTx<...>> disputes;

    // Set of TxSet ids we have already compared/created disputes for
    hash_set<typename Adaptor::TxSet_t::ID> compares;

    // Measures the duration of the establish phase for this consensus round
    ConsensusTimer roundTime;

    // Indicates the state in which consensus ended. Once in the accept
    // phase, this will be either Yes or MovedOn.
    ConsensusState state = ConsensusState::No;
};

template <class Adaptor>
class Consensus
{
public:
    Consensus(clock_type, Adaptor &, beast::Journal);

    // Kick off the next round of consensus.
    void startRound(
        NetClock::time_point const& now,
        typename Ledger_t::ID const& prevLedgerID,
        Ledger_t const& prevLedger,
        bool proposing);

    // Call periodically to drive consensus forward.
    void timerEntry(NetClock::time_point const& now);

    // A peer has proposed a new position; adjust our tracking. Returns true
    // if the proposal was used.
    bool peerProposal(
        NetClock::time_point const& now,
        Proposal_t const& newProposal);

    // Process a transaction set acquired from the network
    void gotTxSet(NetClock::time_point const& now, TxSet_t const& txSet);

    // ... details
};
```

[heading Adapting Generic Consensus]

The stub below shows the set of callback/helper functions required in the
implementing class.

```
struct Adaptor
{
    using Ledger_t = Ledger;
    using TxSet_t = TxSet;
    using PeerPosition_t = PeerPosition;
    using NodeID_t = ...; // Integer-like type (e.g. std::uint32_t) that
                          // uniquely identifies a node

    // Attempt to acquire a specific ledger from the network.
    boost::optional<Ledger> acquireLedger(Ledger::ID const & ledgerID);

    // Acquire the transaction set associated with a proposed position.
    boost::optional<TxSet> acquireTxSet(TxSet::ID const & setID);

    // Whether any transactions are in the open ledger
    bool hasOpenTransactions() const;

    // Number of proposers that have validated the given ledger
    std::size_t proposersValidated(Ledger::ID const & prevLedger) const;

    // Number of proposers that have validated a ledger descended from the
    // given ledger
    std::size_t proposersFinished(Ledger::ID const & prevLedger) const;

    // Return the ID of the last closed (and validated) ledger that the
    // application thinks consensus should use as the prior ledger.
    Ledger::ID getPrevLedger(
        Ledger::ID const & prevLedgerID,
        Ledger const & prevLedger,
        ConsensusMode mode);

    // Called when the consensus operating mode changes
    void onModeChange(ConsensusMode before, ConsensusMode after);

    // Called when the ledger closes. The implementation should generate an
    // initial Result with a position based on the current open ledger's
    // transactions.
    ConsensusResult onClose(
        Ledger const &,
        Ledger const & prev,
        ConsensusMode mode);

    // Called when the ledger is accepted by consensus
    void onAccept(
        ConsensusResult const & result,
        Ledger_t const & prevLedger,
        NetClock::duration closeResolution,
        ConsensusCloseTimes const & rawCloseTimes,
        ConsensusMode const & mode);

    // Propose the position to peers.
    void propose(ConsensusProposal<...> const & pos);

    // Relay a received peer proposal on to other peers.
    void relay(PeerPosition_t const & pos);

    // Relay a disputed transaction to peers
    void relay(TxSet::Tx const & tx);

    // Relay the given transaction set to peers
    void relay(TxSet const & s);

    //... implementation specific
};
```

The implementing class hides many details of the peer communication model from
the generic code.

* The =relay= member functions are responsible for sharing the given type with
  a node's peers, but are agnostic to the mechanism. Ideally, messages are
  delivered faster than =LEDGER_GRANULARITY=.
* The generic code does not specify how transactions are submitted by clients,
  propagated through the network or stored in the open ledger. Indeed, the
  open ledger is only conceptual from the perspective of the generic code---the
  initial position and transaction set are opaquely generated in a
  =Consensus::Result= instance returned from the =onClose= callback.
* The calls to =acquireLedger= and =acquireTxSet= only have a non-trivial
  return value if the ledger or transaction set of interest is available. The
  implementing class is free to block while acquiring, or to return the empty
  option while servicing the request asynchronously. For legacy reasons, the
  two calls are not symmetric. =acquireTxSet= requires the host application to
  call =gotTxSet= when an asynchronous acquire completes. Conversely,
  =acquireLedger= will be called again later by the consensus code if it still
  desires the ledger, with the hope that the asynchronous acquisition has
  completed.

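The asymmetric =acquireTxSet= flow can be sketched as follows. The cache, the
string IDs and the callback wiring are illustrative assumptions, not the
rippled implementation; the point is only that an unavailable set yields an
empty optional, and the application later feeds the arrived set back in (via
=gotTxSet= in the real interface).

```
#include <map>
#include <optional>
#include <string>

// Illustrative types standing in for the real TxSet and its ID.
using TxSetID = std::string;
struct TxSet { TxSetID id; };

struct App
{
    std::map<TxSetID, TxSet> cache;           // sets already on hand
    std::map<TxSetID, bool> pendingAcquires;  // in-flight network requests

    // Returns the set immediately if available; otherwise starts an
    // asynchronous fetch and returns an empty optional.
    std::optional<TxSet> acquireTxSet(TxSetID const& id)
    {
        if (auto it = cache.find(id); it != cache.end())
            return it->second;
        pendingAcquires[id] = true;  // kick off an async fetch
        return std::nullopt;
    }

    // Invoked when the network delivers the set; the application would then
    // call Consensus::gotTxSet with it.
    void onNetworkReceive(TxSet const& set)
    {
        cache[set.id] = set;
        pendingAcquires.erase(set.id);
    }
};
```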
[endsect] [/Consensus Type Requirements]

[section Validation]

Coming Soon!

[endsect] [/Validation]

[endsect] [/Consensus and Validation]